This document summarizes the results of estimating equations for taxes, consumption, and money supply (M2) using a structural macroeconomic model. The model contains 11 behavioral equations estimated using two-stage least squares, with some recursive equations estimated using ordinary least squares. Historical simulations from 1960-1993 show close fits between actual and predicted values for taxes and consumption. Forecasts for 1994-1995 also closely match actual tax and consumption values.
The document describes a regression analysis of housing prices in Boston suburbs using 14 variables. A full model found some variables, such as industrial proportion and age, to be insignificant. Fitting a reduced model and transforming the dependent variable to its natural log improved the fit. Stepwise selection flagged one more variable as insignificant. The final model, with 9 significant variables, had high predictive power.
A macroeconometric analysis of Ecuador's inflation before and after dollarization, proposing a model to explain the current sources of Ecuador's inflation.
STA457 Assignment Liangkai Hu 999475884 – Liang Kai Hu
The document analyzes the time series properties of property index returns (NPI) and other asset class returns. It finds NPI exhibits inertia due to infrequent transactions. To address this, it uses autoregressive (AR) and moving average (MA) models to unsmooth the NPI data. The best-fitting models are AR(7) and MA(8). It then estimates factor loadings by regressing unsmoothed NPI returns on Fama-French factors. While some factors are statistically significant, model fits remain poor based on other diagnostics. The document explores using principal component analysis to improve factor fitting.
The study examines the effect of inflation, investment, life expectancy and literacy rate on per capita GDP across 20 countries using ordinary least squares regression. Initially, the regression results show inflation, investment and literacy rate have a negative effect, while life expectancy has a positive effect on per capita GDP. Sri Lanka, USA and Japan are identified as potential outliers based on their high residuals. Running the regression after removing these outliers improves the model fit and explanatory power of the variables. Diagnostic tests find no evidence of misspecification or heteroskedasticity, validating the OLS estimates.
A Study on the Short Run Relationship b/w Major Economic Indicators of US Eco... – aurkoiitk
The objective of this study was to develop an economic indicator system for the US economy that will help to forecast the turning points in the aggregate level of economic activity. Our primary concern is to study the short-run relationship between the major economic indicators of the US economy (e.g., GDP, Money Supply, Unemployment Rate, Inflation Rate, Federal Funds Rate, Exchange Rate, Government Expenditure & Receipt, Crude Oil Price, Net Import & Export).
The document analyzes automobile sales data from 1984-2004 in North America to develop demand, production, and cost curves. Multiple regression analysis is used to create demand and production equations based on independent variables like price, income, population, substitute vehicles, recession indicators, and unemployment. The regression output is analyzed to determine the impact of each variable on demand and construct a demand function. Coefficients of determination, T-tests, intercorrelation tests, and F-tests are examined to evaluate the regression results and demand equation. The analysis aims to facilitate discussion of automobile demand and production in North America over the time period.
This document analyzes Apple's stock performance around earnings announcement dates from 2004 to 2017. It finds:
1) Cumulative abnormal returns and unexplained earnings are calculated for Apple and the market around each earnings date.
2) Regressions show little correlation between unexplained earnings and returns, likely because Apple's stock moves more on new product announcements than earnings.
3) Analyst expectations for Apple three months out prove quite accurate, suggesting their estimates incorporate new product information.
4) Analyzing Apple's stock around new product release dates instead of earnings may create an effective investment strategy.
Turkey financial stress risk index and econometric modeling using garch and m... – YUNUS EMRE Ozcan
This document presents research on developing a Turkey Financial Risk Index (TFRI) using daily data from the interbank, credit, equity, and foreign exchange markets between 2004-2014. Researchers constructed the index using principal components analysis and compared it with an equal-weighted alternative. GARCH and Markov switching models were used to model the index. The results showed the CDS market was the most significant factor influencing the index. The researchers conclude the TFRI can help monitor financial stress and be used as an early warning signal or exogenous variable in other models.
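As a rough illustration of the index-construction step described above, the following Python sketch extracts the first principal component of several standardized market series as a stress index and compares it with an equal-weighted average. The simulated data stand in for the four market series and are an assumption, not the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily stress proxies for four markets (columns):
# interbank spread, CDS spread, equity volatility, FX volatility.
raw = rng.normal(size=(500, 4))
raw[:, 1] += 0.8 * raw[:, 0]          # make two series co-move, as stressed markets do

z = (raw - raw.mean(axis=0)) / raw.std(axis=0)   # standardize each series
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
w = eigvecs[:, -1]                               # loadings of the first principal component
if w.sum() < 0:                                  # fix the arbitrary eigenvector sign
    w = -w

stress_index = z @ w                             # PCA-weighted stress index
equal_weight = z.mean(axis=1)                    # equal-weighted benchmark
share = eigvals[-1] / eigvals.sum()              # variance share captured by the index

print(round(share, 3))
print(round(np.corrcoef(stress_index, equal_weight)[0, 1], 3))
```

The variance share of the first component is one common way to judge whether a single index summarizes the markets adequately.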
The document analyzes the relationships between inflation, unemployment, and interest rates using a vector autoregression (VAR) model. It finds that:
1) All three variables - inflation, unemployment, and interest rates - are stationary based on tests of their autocorrelation functions.
2) A VAR with a lag length of 2 is optimal based on information criteria.
3) In the estimated VAR models, lags of inflation and unemployment are significant predictors of current inflation, while only lags of unemployment are significant for current unemployment.
4) Diagnostic tests of residuals show white noise, validating the fitted VAR models. Forecasts for 2015-2016 are also generated from the models.
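The VAR estimation step summarized above can be sketched with plain least squares; this is a minimal illustration on simulated data (a VAR(1) rather than the lag-2 model chosen in the document, with hypothetical coefficients), not the document's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stationary bivariate system (stand-ins for, say, inflation and unemployment).
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])            # true VAR(1) coefficient matrix (eigenvalues 0.6, 0.3)
T = 400
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# OLS equation by equation: regress y_t on y_{t-1} (constant omitted for brevity).
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

resid = Y - X @ A_hat.T               # residuals should resemble white noise if the fit is adequate
print(np.round(A_hat, 2))
```

With white-noise residuals, one-step forecasts follow directly as A_hat @ y[-1], mirroring the document's 2015-2016 forecast step.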
Simple Regression Years with Midwest and Shelf Space Winter .docx – budabrooks46239
Simple Regression Years with Midwest and Shelf Space Winter 2016 Page 1
Lecture Notes for Simple Linear Regression
Problem Definition: Midwest Insurance wants to develop a model able to predict sales from an employee's years with the company.
Results for: MIDWEST.MTW
Data Display
Row   Sales   Years with Midwest     xy       y²      x²
 1     487           3              1461    237169      9
 2     445           5              2225    198025     25
 3     272           2               544     73984      4
 4     641           8              5128    410881     64
 5     187           2               374     34969      4
 6     440           6              2640    193600     36
 7     346           7              2422    119716     49
 8     238           1               238     56644      1
 9     312           4              1248     97344     16
10     269           2               538     72361      4
11     655           9              5895    429025     81
12     563           6              3378    316969     36
Σy = 4855   Σx = 55   Σxy = 26,091   Σy² = 2,240,687   Σx² = 329   (Σx)² = 3025   (Σy)² = 23,571,025
Scatterplot of Midwest Data
Graphs>Scatterplot
[Figure: Scatterplot of Sales vs Years with Midwest – Sales (200 to 700) on the vertical axis against Years with Midwest (0 to 9) on the horizontal axis]
Evaluate the bivariate graph to determine whether a linear relationship exists and the
nature of the relationship. What happens to y as x increases? What type of relationship do
you see?
Dialog box for developing correlation coefficient
Explore Linearity of Relationship for significance using t distribution
Pearson Product Moment
Correlation Coefficient
Stat>Basic Stat>Correlation
Correlations: Sales, Years with Midwest – Minitab readout
Pearson correlation of Sales and Years with Midwest = 0.833
P-Value = 0.001
Formula for computing the correlation coefficient

r = (nΣxy − ΣxΣy) / √[(nΣx² − (Σx)²)(nΣy² − (Σy)²)]
Hypothesis for t test for significant correlation
H0: ρ = 0
H1: ρ ≠ 0
Decision Rule: p-value and critical ratio/critical value technique
Critical Ratio of t

t = r√(n − 2) / √(1 − r²)
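Plugging the column sums from the data display into these formulas reproduces Minitab's correlation and gives its critical ratio; a small Python check:

```python
import math

# Column sums from the Midwest data display above (n = 12 salespeople).
n, sum_x, sum_y = 12, 55, 4855
sum_xy, sum_x2, sum_y2 = 26_091, 329, 2_240_687

num = n * sum_xy - sum_x * sum_y
den = math.sqrt((n * sum_x2 - sum_x**2) * (n * sum_y2 - sum_y**2))
r = num / den                                    # Pearson correlation coefficient
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)   # critical ratio with n - 2 = 10 df

print(round(r, 3), round(t, 2))   # r matches Minitab's 0.833
```

With t ≈ 4.75 against a t(10) critical value of about 2.23 at α = 0.05, the correlation is significant, in line with the reported p-value of 0.001.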
Conclusion:
Interpretation:
Simple linear regression assumes that the relationship between the dependent variable, y, and the independent variable, x, can be approximated by a straight line.
Population Model – For each x, y is determined by the population regression line plus a random error:

y = β0 + β1(x) + ε

y – value of the dependent variable
(x) – value of the independent variable
β0 – value of the population y-intercept
β1 – slope of the population regression line
ε – epsilon represents the difference between y and y’. Epsilon also accounts for the independent variables that affect y but are not in the model.
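From this setup, the least-squares estimates b0 and b1 of the population intercept and slope can be computed directly from the Midwest data listed earlier; a short Python sketch:

```python
# Least-squares slope and intercept for Sales vs Years with Midwest,
# computed from the raw data in the data display above.
x = [3, 5, 2, 8, 2, 6, 7, 1, 4, 2, 9, 6]
y = [487, 445, 272, 641, 187, 440, 346, 238, 312, 269, 655, 563]
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(a * b for a, b in zip(x, y))
sum_x2 = sum(a * a for a in x)

b1 = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x**2)   # slope
b0 = (sum_y - b1 * sum_x) / n                                  # intercept

print(f"Sales = {b0:.1f} + {b1:.1f} * Years")
```

Each additional year with the company is thus associated with roughly 50 more units of sales in this sample.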
1. A multiple regression model was run to analyze the relationship between a dependent variable (Y) and 3 independent variables (X1, X2, X3) based on data from 1987-2016.
2. The results showed that a one unit increase in X1 would increase Y by 0.16, a one unit increase in X2 would increase Y by 0.13, and a one unit increase in X3 would increase Y by 0.007. However, none of these relationships were statistically significant.
3. Additional regression runs between the dependent variable and each independent variable individually did not show any statistically significant relationships either. This suggests the independent variables are not good predictors of the dependent variable based on this
1. A linear regression model was estimated to relate the number of cars sold (dependent variable) to the number of TV ads (independent variable) based on weekly data from 5 weeks.
2. The regression results show that the number of TV ads has a statistically significant impact on car sales based on the F-test and t-tests.
3. The estimated regression equation found that for every additional TV ad, car sales are predicted to increase by 5 units on average, with an intercept of 10 cars sold even without any TV ads.
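A quick Python check of such a fit; the 5-week data below are hypothetical values chosen to be consistent with the reported slope of 5 and intercept of 10, not the document's actual observations:

```python
# Hypothetical 5-week sample consistent with the reported fit (y-hat = 10 + 5x):
# x = TV ads per week, y = cars sold per week.
x = [1, 3, 2, 1, 3]
y = [14, 24, 18, 17, 27]
n = len(x)

sum_x, sum_y = sum(x), sum(y)
b1 = (n * sum(a * b for a, b in zip(x, y)) - sum_x * sum_y) / (n * sum(a * a for a in x) - sum_x**2)
b0 = (sum_y - b1 * sum_x) / n

print(b0, b1)   # → 10.0 5.0
```

So with, say, 4 ads in a week, predicted sales would be 10 + 5(4) = 30 cars.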
This document analyzes quantitative data using various statistical techniques to examine fixed deposit rates in different areas over a 10-year period. It uses a two-sample t-test to determine if demand differs across metropolitan, city and town areas. Multiple linear regression is employed to understand the relationship between total personal wealth and factors like average deposit rates, interest rates, and government bond rates. Seasonal forecasting techniques predict that quarter 4 sees the highest demand on average for all three areas. The analysis aims to provide insights to help the Ministry of Finance forecast deposit rates and understand demand trends.
This document discusses multivariate analysis (MVA), which involves observing and analyzing multiple outcome variables simultaneously. It describes key components of MVA like variates, measurement scales, and statistical significance. Various MVA techniques are explained, including cross correlations, single-equation models, vector autoregressions, and cointegration. An example using crime rate data from US states is provided. Applications of MVA in fields like marketing, quality control, process optimization, and research are also mentioned.
This document proposes improvements to existing customer lifetime value models. It discusses deriving current models A and B, which discount average revenues over a subscriber's expected duration. The improvements consider estimating future cash flows and growth rates through regression analysis, accounting for other revenue streams, and incorporating the value of a subscriber's social network. The proposed model uses discounted cash flow analysis and least squares regression to forecast revenues and growth rates for each subscriber, considering revenues from mobile, TV, broadband and the revenues of subscribers within their social network. It requires subscriber revenue and call data to implement the analysis.
This document discusses forecasting daily streamflow in two Kenyan watersheds, Yala and Sondu, using the Kalman filter. Concurrent daily rainfall and streamflow data from 1972-1981 were used to identify transfer function models relating rainfall to streamflow. The Kalman filter was applied to dynamically update the model parameters based on new observations. Results showed the transfer function models adequately described the rainfall-runoff relationship and recursive simulations using the Kalman filter reproduced the temporal variations and mass balance of observed streamflows. However, the models tended to underestimate flows during extreme events.
This document discusses using R packages to model and simulate annuities while accounting for variability and uncertainty. It introduces the lifecontingencies package for actuarial calculations and describes collecting demographic and financial data. Baseline assumptions are defined and the Lee-Carter model is used to project mortality. Variability is assessed through simulating process variance in lifetimes and annuity present values, and parameter variance by generating random life tables. Interest rates are modeled with Vasicek dynamics and inflation is linked to interest rates through cointegration. The overall algorithm simulates annuity distributions by projecting cash flows over simulated lifetimes and interest/inflation rates.
Getting things right: optimal tax policy with labor market duality – Gilbert Mbara
We develop a dynamic general equilibrium model in which firms evade the employer contribution component of social security taxes by offering some workers non-formal contracts. When calibrated, the model yields estimates of dual labor market participation consistent with empirical evidence for the EU14 countries and the US. We investigate the optimal mix of the avoidable and unavoidable components of labor taxes and analyze the fiscal and macroeconomic effects of bringing the composition to the welfare optimum. We find that partial labor tax evasion makes tax revenues more elastic, but full tax compliance is not necessarily a welfare-enhancing policy mix.
The document discusses a company called 3DP that is considering two options - launching a new 3D printer product or selling the patent license. It provides information on the estimated costs of product development and market potential for the product. It also provides details on a potential offer from another company to purchase the patent license. The document asks two questions: 1) Calculate the expected monetary value of the two options and recommend the decision based on financial considerations. 2) Calculate the exchange rate change needed to change the recommended decision and its probability.
INTRODUCTION TO TIME SERIES REGRESSION AND FORECASTING – SPICEGODDESS
What Is Time Series Regression? Time series regression is a statistical method for predicting a future response based on the response history (known as autoregressive dynamics) and the transfer of dynamics from relevant predictors.
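A minimal Python sketch of this idea on simulated data: the response is regressed on its own lag (the autoregressive dynamics) and on a contemporaneous predictor (the transferred dynamics). The series and coefficient values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical series: y depends on its own lag (autoregressive dynamics)
# and on a relevant predictor x (transferred dynamics).
T = 300
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t] + rng.normal(scale=0.2)

# Time series regression: y_t on [1, y_{t-1}, x_t], estimated by least squares.
X = np.column_stack([np.ones(T - 1), y[:-1], x[1:]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

print(np.round(coef, 2))   # [constant, lag coefficient, predictor coefficient]
```

Given the fitted coefficients and a value for the next period's predictor, the one-step forecast is just the fitted equation evaluated at [1, y_last, x_next].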
This document summarizes the results of an econometrics analysis examining the relationship between macroeconomic variables in the US and Italy. It tests for unit roots and cointegration, estimates vector autoregression models in levels and first differences, and analyzes impulse response functions and variance decompositions. The key findings are: 1) some variables are stationary while others have unit roots; 2) there are two cointegrating relationships; 3) monetary shocks have a significant positive effect on GDP for several quarters in the levels model; 4) variance decompositions show monetary shocks do not explain significant portions of GDP variance.
YOURLASTNAME, YourFirstName Assignment 5 - REVISED Save this d.docx – danielfoster65629
YOURLASTNAME, YourFirstName
Assignment 5 - REVISED
Save this document as A5-YourLastNameFirstName.doc. Type or copy and paste your responses to the instructions and questions into this document following each instruction or question.
1. You want to estimate a model for the demand for electricity by households in the U.S. States of the following form: Quantity of electricity consumed= f (Price of electricity, Price of gas, Income, Housing).
FOR THE REVISION YOU SHOULD USE QELEC/HOUSING AS THE DEPENDENT VARIABLE AND INCOME/HOUSING AS THE INDEPENDENT VARIABLE AND OMIT HOUSING AS AN INDEPENDENT VARIABLE. COMPARE THE RESULTS OF THIS REVISION WITH YOUR ORIGINAL RESULTS.
Obtain the most up-to-date data for sales of electricity, the residential price of electricity and the price of natural gas from the website of the U.S. Energy Information Administration (http://www.eia.gov/electricity/ and http://www.eia.gov/naturalgas/) . Obtain data on the number of housing units from the U.S. Bureau of the Census http://www.census.gov/popest/data/housing/totals/2012/index.html)
and personal Income from the U.S. Bureau of Economic Analysis (http://www.bea.gov/regional/index.htm)
Add each series to an EViews workfile and change the names to QELEC, PELEC, PGAS, INCOME and HOUSING:
a. Carefully identify the series including providing the url for the data.
b. Tell why you think the data series is or is not exactly what you should use in your estimation.
c. Paste the variable names for each series and only the first and the last observations into your assignment.
The Excel file Assignment 5 data contains all the identifying information for the data I used. You may have found some slightly different data.
      STATE      QELEC      PELEC      PGAS       INCOME     HOUSING
 1    Alabama    3256.000   10.99000   12.54000   181816.0   2189545.
51    Wyoming    313.0000   10.28000   8.540000   32018.00   265162.0
2. Type the equation you will estimate and explain it including your theory and the expected signs for each estimated coefficient.
Theory of demand: QELEC = C(1)*PELEC + C(2)*PGAS + C(3)*INCOME + C(4)*HOUSING + C(5)

Expected signs:
C(1): –
C(2): +
C(3): +
C(4): +
C(5): no expectation
Dependent Variable: QELEC
Sample (adjusted): 1 51
Included observations: 50 after adjustments
White heteroskedasticity-consistent standard errors & covariance

Variable      Coefficient    Std. Error    t-Statistic    Prob.
PELEC          -94.46757      53.30388      -1.772246     0.0831
PGAS            39.22659      51.56934       0.760657     0.4508
INCOME         -0.008121      0.003297      -2.463220     0.0177
HOUSING         0.001853      0.000400       4.628823     0.0000
C               994.6281      356.9909       2.786144     0.0078

R-squared            0.876198     Mean dependent var      2726.780
Adjusted R-squared   0.865193     S.D. dependent var      2541.893
S.E. of regression   933.2830     Akaike info criterion   16.60993
Sum squared resid    39195776     Schwarz criterion       16.80114
Log likelihood      -410.2483     Hannan-Quinn criter.    16.68274
F-statistic          79.62067     Durbin-Watson stat      1.967556
Prob(F-statistic)    0.000000
Wald F-statistic     21.47470     Prob(Wald F-st.
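The "White heteroskedasticity-consistent standard errors" reported in the readout above can be replicated outside EViews. This Python sketch computes HC1-style robust standard errors by hand on hypothetical data (the design and coefficients are assumptions, not the assignment's state-level data):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical cross-section standing in for the state-level electricity data.
n = 50
X = np.column_stack([rng.normal(size=(n, 2)), np.ones(n)])   # two regressors + constant
e_het = rng.normal(size=n) * (1 + np.abs(X[:, 0]))           # heteroskedastic errors
y = X @ np.array([2.0, -1.0, 5.0]) + e_het

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                  # OLS coefficients
resid = y - X @ b
k = X.shape[1]

# White covariance sandwich: (X'X)^-1 X' diag(e^2) X (X'X)^-1, with HC1 scaling n/(n-k).
meat = X.T @ (X * resid[:, None] ** 2)
cov_hc1 = n / (n - k) * XtX_inv @ meat @ XtX_inv
robust_se = np.sqrt(np.diag(cov_hc1))

print(np.round(b, 2), np.round(robust_se, 2))
```

The coefficients are identical to ordinary OLS; only the standard errors (and hence the t-statistics and p-values) change under the robust covariance.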
Forecasting for Economics and Business 1st Edition Gloria Gonzalez Rivera Sol... – vacenini
This document discusses solutions to exercises from a textbook on forecasting economics and business. It includes regressions of consumption growth on income growth and real interest rates. The regressions provide some support for the permanent income hypothesis by showing consumption responds less than proportionately to income changes. Lagged income is also found to impact current consumption growth. Time series plots and definitions of GDP, exchange rates, interest rates and unemployment are also analyzed for stationarity.
The document discusses decline curve analysis (DCA) for estimating reserves in conventional and unconventional reservoirs. It proposes using a Fetkovich type curve method in Microsoft Excel and comparing results to commercial software. The key steps are identifying the hyperbolic decline curve from production data, forecasting future production using the curve equation, and comparing actual vs predicted production to evaluate accuracy. DCA provides more accurate reserve estimates than other methods while using less data, as it accounts for production trends over time.
CTO Insights: Steering a High-Stakes Database Migration – ScyllaDB
In migrating a massive, business-critical database, the Chief Technology Officer's (CTO) perspective is crucial. This endeavor requires meticulous planning, risk assessment, and a structured approach to ensure minimal disruption and maximum data integrity during the transition. The CTO's role involves overseeing technical strategies, evaluating the impact on operations, ensuring data security, and coordinating with relevant teams to execute a seamless migration while mitigating potential risks. The focus is on maintaining continuity, optimising performance, and safeguarding the business's essential data throughout the migration process.
Radically Outperforming DynamoDB @ Digital Turbine with SADA and Google Cloud – ScyllaDB
Digital Turbine, the Leading Mobile Growth & Monetization Platform, did the analysis and made the leap from DynamoDB to ScyllaDB Cloud on GCP. Suffice it to say, they stuck the landing. We'll introduce Joseph Shorter, VP, Platform Architecture at DT, who lead the charge for change and can speak first-hand to the performance, reliability, and cost benefits of this move. Miles Ward, CTO @ SADA will help explore what this move looks like behind the scenes, in the Scylla Cloud SaaS platform. We'll walk you through before and after, and what it took to get there (easier than you'd guess I bet!).
YOURLASTNAME, YourFirstName
Assignment 5 - REVISED
Save this document as A5-YourLastNameFirstName.doc. Type or copy and paste your responses to the instructions and questions into this document following each instruction or question.
1. You want to estimate a model for the demand for electricity by households in the U.S. States of the following form: Quantity of electricity consumed= f (Price of electricity, Price of gas, Income, Housing).
FOR THE REVISION YOU SHOULD USE QELEC/HOUSING AS THE DEPENDENT VARIABLE AND INCOME/HOUSING AS THE INDEPENDENT VARIABLE AND OMIT HOUSING AS AN INDEPENDENT VARIABLE. COMPARE THE RESULTS OF THIS REVISION WITH YOUR ORIGINAL RESULTS.
Obtain the most up-to-date data for sales of electricity, the residential price of electricity and the price of natural gas from the website of the U.S. Energy Information Administration (http://www.eia.gov/electricity/ and http://www.eia.gov/naturalgas/) . Obtain data on the number of housing units from the U.S. Bureau of the Census http://www.census.gov/popest/data/housing/totals/2012/index.html)
and personal Income from the U.S. Bureau of Economic Analysis (http://www.bea.gov/regional/index.htm)
Add each series to an EViews workfile and change the names to QELEC, PELEC, PGAS, INCOME and HOUSING:
a. Carefully identify the series including providing the url for the data.
b. Tell why you think the data series is or is not exactly what you should use in your estimation.
c. Paste the variable names for each series and only the first and the last observations into your assignment.
The Excel file Assignment 5 data contains all the identifying information for the data I used. You may have found some slightly different data.
     STATE      QELEC      PELEC      PGAS       INCOME     HOUSING
1    Alabama    3256.000   10.99000   12.54000   181816.0   2189545.
51   Wyoming    313.0000   10.28000   8.540000   32018.00   265162.0
2. Type the equation you will estimate and explain it including your theory and the expected signs for each estimated coefficient.
Theory of demand: QELEC = C(1)*PELEC + C(2)*PGAS + C(3)*INCOME + C(4)*HOUSING + C(5)
Expected signs:
C(1) – (own price of electricity)
C(2) + (price of the substitute, natural gas)
C(3) + (income)
C(4) + (housing units)
C(5) no expectation (intercept)
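The table below reports White heteroskedasticity-consistent standard errors. As a sketch of what that option computes, here is a minimal OLS routine with the HC0 "sandwich" covariance in Python; the data are randomly generated stand-ins, not the EIA/Census series used in the assignment.

```python
import numpy as np

def ols_white(y, X):
    """OLS coefficients with White (HC0) robust standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y                     # OLS estimates
    resid = y - X @ beta
    # Sandwich estimator: (X'X)^-1 (X' diag(e_i^2) X) (X'X)^-1
    meat = X.T @ (resid[:, None] ** 2 * X)
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov)), resid

# Hypothetical data: 50 observations, 4 regressors plus a constant
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(size=(50, 4)), np.ones(50)])
true_beta = np.array([-90.0, 40.0, -0.01, 0.002, 1000.0])
y = X @ true_beta + rng.normal(scale=5.0, size=50)
beta, se, resid = ols_white(y, X)
```

Unlike the default OLS covariance, the robust version does not assume a constant error variance across states, which is why the assignment requests it for cross-state data.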
Dependent Variable: QELEC
Sample (adjusted): 1 51
Included observations: 50 after adjustments
White heteroskedasticity-consistent standard errors & covariance

Variable              Coefficient   Std. Error   t-Statistic   Prob.
PELEC                 -94.46757     53.30388     -1.772246     0.0831
PGAS                   39.22659     51.56934      0.760657     0.4508
INCOME                 -0.008121     0.003297    -2.463220     0.0177
HOUSING                 0.001853     0.000400     4.628823     0.0000
C                     994.6281     356.9909      2.786144      0.0078

R-squared             0.876198     Mean dependent var      2726.780
Adjusted R-squared    0.865193     S.D. dependent var      2541.893
S.E. of regression    933.2830     Akaike info criterion   16.60993
Sum squared resid     39195776     Schwarz criterion       16.80114
Log likelihood       -410.2483     Hannan-Quinn criter.    16.68274
F-statistic           79.62067     Durbin-Watson stat      1.967556
Prob(F-statistic)     0.000000     Wald F-statistic        21.47470
Prob(Wald F-st.
Housing Starts Forecast
John Montgomery
Econ 401 / Dr. Townsend
December 7, 2009
Appendix 14.1 is a highly aggregated model of real gross domestic product and its major components. The model contains 11 behavioral equations and two identities. One of these identities is for real disposable income, and the other is the accounting identity for real GDP. Each equation within the model is estimated using two-stage least squares. There are 13 endogenous variables: personal consumption expenditures, GDP, rate of growth of the CPI, nonresidential fixed investment, change in business inventories, residential fixed investment, imports of goods and services, average yield on AAA corporate bonds, interest rate on 3-month Treasury bills, personal and indirect business tax payments, civilian unemployment rate, wage inflation, and disposable personal income. In addition to these endogenous variables, there are 9 exogenous variables: government purchases of goods and services, potential GDP, money stock, household net worth, rate of growth of oil prices, corporate profits, rate of growth of labor productivity, transfer payments to persons, and exports of goods and services.
The instruments used for the individual behavioral equations differ from those we will be using for our model. Furthermore, this model uses two-stage least squares for each of the equations, whereas we use ordinary least squares for the recursive equations.
Comparatively, the model provides a good forecast, and the flow chart is a good visual representation of the equations.
Case set four calls for us to create a simplified structural model of the U.S. economy. The model uses the Fair method, which applies two-stage least squares and includes the lagged dependent and independent variables as instruments. These lagged variables are included in order to obtain consistent parameter estimates when autocorrelated disturbances are a problem.
The model contains 11 behavioral equations and two identities. The majority of the equations are estimated using two-stage least squares, although three recursive equations are estimated using ordinary least squares. Using quarterly data from 1960-1993 I have created a historical simulation, which I will explain here.
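The two-stage procedure can be sketched explicitly: first regress the endogenous regressors on the instrument list, then regress the dependent variable on the fitted values. Below is a minimal illustration with simulated data (not the model's actual series), where an endogenous regressor x is correlated with the structural error and z is a valid instrument.

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: regress X on instruments Z,
    then regress y on the first-stage fitted values."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage
    return beta

# Toy simultaneous system: x is endogenous, z is a valid instrument
rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)
u = rng.normal(size=n)                         # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)     # x correlated with u
y = 2.0 * x + u                                # true slope is 2.0
Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])
beta_2sls = tsls(y, X, Z)
```

Plain OLS of y on x would be biased upward here because x and u move together; the instrumented regression recovers a slope near the true value of 2.0.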
Dependent Variable: TAX
Method: Two-Stage Least Squares
Date: 12/07/09 Time: 20:36
Sample: 1960Q1 1993Q4
Included observations: 136
Convergence achieved after 7 iterations
Instrument list: C GDPPOT INFL INR INV IR M M2 RL RS X YPD GDP(-1) TAX(-1)
Lagged dependent variable & regressors added to instrument list
Variable Coefficient Std. Error t-Statistic Prob.
C -3.967408 22.99851 -0.172507 0.8633
GDP 0.186861 0.005053 36.98054 0.0000
AR(1) 0.781575 0.054284 14.39795 0.0000
R-squared 0.995351 Mean dependent var 790.8930
Adjusted R-squared 0.995281 S.D. dependent var 235.3508
S.E. of regression 16.16703 Sum squared resid 34762.61
F-statistic 14231.47 Durbin-Watson stat 2.331248
Prob(F-statistic) 0.000000
Inverted AR Roots .78
The first equation examined is the equation for taxes. It is a very simple equation: the calculation of total business and personal taxes. Its instruments are potential GDP, inflation, nonresidential fixed investment, change in business inventories, residential fixed investment, imports of goods and services, the money stock, average yield on AAA corporate bonds, interest rates on three-month Treasury bills, exports, disposable personal income, gross domestic product lagged by one quarter, and finally the tax variable itself lagged by one quarter. The high R-squared indicates a very close fit, and the Durbin-Watson statistic is within the acceptable range. I have used an autoregressive AR(1) term to correct for serial correlation, which explains the good Durbin-Watson statistic.
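EViews estimates the AR(1) term jointly with the regression by nonlinear least squares; a Cochrane-Orcutt iteration, sketched below with simulated data rather than the paper's series, is a close stand-in for that serial-correlation correction.

```python
import numpy as np

def cochrane_orcutt(y, X, n_iter=20):
    """Iterative AR(1) correction: quasi-difference the data with the
    estimated serial-correlation coefficient rho until it settles."""
    rho = 0.0
    for _ in range(n_iter):
        ys = y[1:] - rho * y[:-1]              # quasi-differenced y
        Xs = X[1:] - rho * X[:-1]              # quasi-differenced X
        beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
        e = y - X @ beta
        rho = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])  # AR(1) coeff of residuals
    return beta, rho

# Simulated regression with AR(1) errors: y = 1 + 2x + u, u_t = 0.7 u_{t-1} + e_t
rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)
u = np.zeros(n)
e = rng.normal(scale=0.5, size=n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + e[t]
y = 1.0 + 2.0 * x + u
X = np.column_stack([np.ones(n), x])
beta, rho = cochrane_orcutt(y, X)
```

After the correction, the residuals of the quasi-differenced regression are close to white noise, which is why the Durbin-Watson statistics in these tables sit near 2.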
[Figure: historical simulation of taxes, 1960-1993 — actual TAX vs. TAX (Baseline)]
Above is the historical simulation of taxes; as the R-squared value indicated, the fit is quite close. The MAPE for the historical simulation is .05%.
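The MAPE reported for each simulation is the mean absolute percentage error between the actual and simulated series; a short sketch:

```python
import numpy as np

def mape(actual, simulated):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return np.mean(np.abs((actual - simulated) / actual)) * 100.0
```

For example, with hypothetical values, mape([100, 200, 400], [99, 202, 400]) gives roughly 0.67%.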
[Figure: ex post forecast of taxes, 1994Q1-1995Q4 — actual TAX vs. TAX (Scenario 1)]
Above is the ex post forecast for the tax equation. It is a fairly strong forecast, with a MAPE of .019.
Dependent Variable: CONS
Method: Two-Stage Least Squares
Date: 12/07/09 Time: 20:38
Sample: 1960Q1 1993Q4
Included observations: 136
Convergence achieved after 44 iterations
Instrument list: C G GDPPOT INFL INR INV IR M M2 RL WINF X CONS CONS(-2) NETWRTH(-1) YPD(-1)
Lagged dependent variable & regressors added to instrument list
Variable Coefficient Std. Error t-Statistic Prob.
C -146.5984 35.31959 -4.150627 0.0001
YPD 0.192170 0.039694 4.841345 0.0000
NETWRTH 0.040520 0.009706 4.174715 0.0001
RS -5.241978 1.330045 -3.941204 0.0001
CONS(-1) 0.586638 0.085641 6.849938 0.0000
AR(1) 0.406659 0.116241 3.498412 0.0006
R-squared 0.999569 Mean dependent var 2834.458
Adjusted R-squared 0.999552 S.D. dependent var 873.7046
S.E. of regression 18.49246 Sum squared resid 44456.23
F-statistic 60252.24 Durbin-Watson stat 2.165461
Prob(F-statistic) 0.000000
Inverted AR Roots .41
The table above reports the results of the two-stage least-squares regression for the consumption equation. Personal consumption represents two-thirds of GDP and is one of the most important behavioral equations in the entire model. Because of the presence of the lagged dependent variable in the equation, and in accordance with Fair's method, I have included consumption lagged two periods among the instruments. In addition, I have included lagged values of both net worth and personal disposable income because they are also endogenous variables. Again, we see a high R-squared value, indicating a close fit, and the Durbin-Watson statistic is within its accepted range, again thanks to the autoregressive term. The negative coefficient on the three-month Treasury bill rate makes sense: when short-term interest rates rise, the return to saving and the cost of borrowing both increase, so consumption falls. The positive coefficients on both net worth and disposable personal income also make sense, since consumption can be expected to rise with both.
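A historical simulation of an equation like this is dynamic: the model's own lagged prediction, not the actual lagged value, feeds the next period. Below is a sketch of that recursion using the coefficient estimates from the table above, but with made-up constant input series rather than the actual data.

```python
import numpy as np

def dynamic_simulation(c, b_ypd, b_nw, b_rs, b_lag, ypd, networth, rs, cons0):
    """Dynamic simulation of the consumption equation: each period's
    prediction uses the previous period's *simulated* consumption."""
    sim = [cons0]
    for t in range(1, len(ypd)):
        sim.append(c + b_ypd * ypd[t] + b_nw * networth[t]
                     + b_rs * rs[t] + b_lag * sim[-1])
    return np.array(sim)

# Coefficients from the estimated equation; input paths are hypothetical
ypd = np.full(40, 3000.0)
networth = np.full(40, 15000.0)
rs = np.full(40, 5.0)
sim = dynamic_simulation(-146.5984, 0.192170, 0.040520, -5.241978, 0.586638,
                         ypd, networth, rs, cons0=2000.0)
```

With constant inputs the recursion converges geometrically (at rate 0.587 per quarter, the coefficient on lagged consumption) to the implied steady-state level of consumption.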
[Figure: historical simulation of consumption, 1960-1993 — actual CONS vs. CONS (Baseline)]
Above is the graph of the historical simulation of consumption; as the R-squared value indicates, the fit is strong. The MAPE for the historical simulation of consumption is .017%.
[Figure: ex post forecast of consumption, 1994Q1-1995Q4 — actual CONS vs. CONS (Scenario 1)]
Above is a graphical representation of the ex post forecast for the consumption equation. Although the forecast appears to dip far below the actual line, it really does not, as can be seen in a graph that includes the historical simulation.
[Figure: consumption, 1960-1995 — actual CONS, CONS (Baseline), and CONS (Scenario 1)]
As you can see, the ex post forecast actually fits very closely, with a MAPE of .01%.
Dependent Variable: M
Method: Two-Stage Least Squares
Date: 12/07/09 Time: 21:14
Sample: 1960Q1 1993Q4
Included observations: 136
Convergence achieved after 5 iterations
Instrument list: C CONS G GDP GDPPOT INFL INR INV M2 RL RS X YPD(-1)
Lagged dependent variable & regressors added to instrument list
Variable Coefficient Std. Error t-Statistic Prob.
M(-1) 0.997952 0.019728 50.58575 0.0000
C -5.780378 8.064005 -0.716812 0.4748
YPD 0.002797 0.003503 0.798507 0.4260
AR(1) 0.120054 0.089127 1.347008 0.1803
R-squared 0.996581 Mean dependent var 345.2206
Adjusted R-squared 0.996503 S.D. dependent var 182.2808
S.E. of regression 10.77870 Sum squared resid 15335.81
F-statistic 12825.49 Durbin-Watson stat 1.984664
Prob(F-statistic) 0.000000
Inverted AR Roots .12
The next equation is for imports of goods and services. The R-squared value is strong, and the Durbin-Watson statistic is again within the acceptable region. The positive coefficient on personal disposable income makes sense in that the more money people have, the more they will spend, and the more goods and services we will import.
[Figure: historical simulation of imports, 1960-1993 — actual M vs. M (Baseline)]
The historical simulation fits reasonably well and captures the strong upward trend in imports. The MAPE for the import equation is .092%.
[Figure: ex post forecast of imports, 1994Q1-1995Q4 — M (Scenario 1) vs. actual M]
[Figure: imports, 1960-1995 — M (Scenario 1), actual M, and M (Baseline)]
The first graph above shows the ex post forecast, and the graph directly below it shows the ex post forecast together with the actual numbers and the historical simulation. The MAPE for the ex post forecast is .06%, and the forecast continues the trend that the historical simulation begins.
Dependent Variable: INR
Method: Two-Stage Least Squares
Date: 12/07/09 Time: 20:42
Sample (adjusted): 1960Q2 1993Q4
Included observations: 135 after adjustments
Convergence achieved after 26 iterations
Instrument list: C CONS G GDPPOT INFL INV IR M M2 X YPD GDP(-1) INR(-1) RL(-5)
Lagged dependent variable & regressors added to instrument list
Variable Coefficient Std. Error t-Statistic Prob.
C 21.67766 108.1270 0.200483 0.8414
GDP 0.107208 0.015837 6.769528 0.0000
RL(-4) -6.854006 3.604487 -1.901520 0.0594
AR(1) 0.977314 0.021144 46.22132 0.0000
R-squared 0.995463 Mean dependent var 425.6459
Adjusted R-squared 0.995359 S.D. dependent var 126.2919
S.E. of regression 8.603287 Sum squared resid 9696.167
F-statistic 9578.989 Durbin-Watson stat 1.365430
Prob(F-statistic) 0.000000
Inverted AR Roots .98
Moving forward, we next look at the equation for nonresidential investment. Nonresidential investment responds positively to aggregate economic activity (GDP) and negatively to the opportunity cost of investment, captured here by the lagged corporate bond rate. Again, we see a high R-squared value, which translates to a close fit.
[Figure: historical simulation of nonresidential investment, 1960-1993 — actual INR vs. INR (Baseline)]
The historical simulation does not fit quite as well as many of the previous equations' simulations, and we see a MAPE of .144%.
[Figure: ex post forecast of nonresidential investment, 1994Q1-1995Q4 — INR (Scenario 1) vs. actual INR]
[Figure: nonresidential investment, 1960-1993 — INR (Scenario 1), actual INR, and INR (Baseline)]
Looking at the graphs above, we also see a larger separation between the actual numbers and the ex post forecast. The MAPE for nonresidential investment is .058%.
Dependent Variable: IR
Method: Least Squares
Date: 12/07/09 Time: 20:43
Sample: 1960Q1 1993Q4
Included observations: 136
Convergence achieved after 33 iterations
Variable Coefficient Std. Error t-Statistic Prob.
C 12.99230 60.40403 0.215090 0.8300
YPD(-1) 0.048791 0.013085 3.728651 0.0003
RS(-1) -3.810494 0.941601 -4.046825 0.0001
AR(1) 0.949368 0.029951 31.69789 0.0000
R-squared 0.961015 Mean dependent var 190.7934
Adjusted R-squared 0.960129 S.D. dependent var 43.58628
S.E. of regression 8.703175 Akaike info criterion 7.194223
Sum squared resid 9998.374 Schwarz criterion 7.279890
Log likelihood -485.2072 F-statistic 1084.643
Durbin-Watson stat 1.109263 Prob(F-statistic) 0.000000
Inverted AR Roots .95
Residential investment is a variable that reflects household demand for new homes. It is
estimated as a function of real disposable income and the cost of borrowing. We are using the
interest rates for three-month treasury bills as a proxy for mortgage rates.
[Figure: historical simulation of residential investment, 1960-1993 — actual IR vs. IR (Baseline)]
The historical simulation shows that the actual series oscillates regularly between peaks and troughs, while the simulation smooths these swings into something closer to a trend. The MAPE for the historical simulation is .13%.
[Figure: ex post forecast of residential investment, 1994Q1-1995Q4 — actual IR vs. IR (Scenario 1)]
The ex post forecast falls below the actual values. The MAPE is .032%.
Dependent Variable: INV
Method: Two-Stage Least Squares
Date: 12/07/09 Time: 21:23
Sample: 1960Q1 1993Q4
Included observations: 136
Convergence achieved after 10 iterations
Instrument list: C CONS G GDPPOT INFL INR IR M M2 RL RS X INV(-2) (GDP-CONS-GDP(-1)+CONS(-1))
Lagged dependent variable & regressors added to instrument list
Variable Coefficient Std. Error t-Statistic Prob.
C 2.837186 1.626528 1.744321 0.0834
D(GDP-CONS) 0.360108 0.058365 6.169931 0.0000
INV(-1) 0.709656 0.054278 13.07454 0.0000
AR(1) -0.182547 0.106412 -1.715472 0.0886
R-squared 0.675370 Mean dependent var 21.58603
Adjusted R-squared 0.667992 S.D. dependent var 22.24099
S.E. of regression 12.81529 Sum squared resid 21678.58
F-statistic 50.06773 Durbin-Watson stat 2.073371
Prob(F-statistic) 0.000000
Inverted AR Roots -.18
The next equation is for the change in business inventories. Research has shown that much of the variation in real output growth over the course of a business cycle can be attributed to variations in the rate of inventory accumulation. This equation is estimated as a function of the change in the difference between total output and consumption, D(GDP-CONS).
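Constructing that driving variable from the levels series can be sketched as follows, with hypothetical numbers standing in for the actual GDP and consumption data.

```python
import numpy as np

def d_gdp_cons(gdp, cons):
    """D(GDP-CONS): first difference of output minus consumption,
    the driving variable in the inventory equation."""
    gap = np.asarray(gdp, dtype=float) - np.asarray(cons, dtype=float)
    return np.diff(gap)

# Hypothetical quarterly levels
regressor = d_gdp_cons([10.0, 12.0, 15.0], [4.0, 5.0, 6.0])
```

Note that differencing costs one observation at the start of the sample, which is why lagged levels of GDP and CONS appear in the instrument list above.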
[Figure: historical simulation of the change in business inventories, 1960-1993 — actual INV vs. INV (Baseline)]
The historical simulation of business inventories is shown above. The eye is immediately drawn to the beginning of the sample, where the simulation produces an implausibly large peak. This peak could be controlled through the use of a dummy variable, but it does not greatly affect the simulation. The MAPE of the historical simulation is the largest of all the equations at 2.77%. However, it is important to note that this number is still below the 5% threshold generally considered good form for a forecast.
[Figure: ex post forecast of business inventories, 1994Q1-1995Q4 — INV (Scenario 1) vs. actual INV]
[Figure: business inventories, 1960-1995 — INV (Scenario 1), actual INV, and INV (Baseline)]
The graphs above show the ex-post forecast for the business inventories equation. The
MAPE improves from the historical simulation to .449%.
Dependent Variable: RS
Method: Two-Stage Least Squares
Date: 12/07/09 Time: 20:52
Sample: 1960Q1 1993Q4
Included observations: 136
Convergence achieved after 8 iterations
Instrument list: C CONS G INR INV IR M RL X INFL(-1) RS(-1) M2(-1) YPD(-1)
Lagged dependent variable & regressors added to instrument list
Variable Coefficient Std. Error t-Statistic Prob.
C -44.76644 12.53438 -3.571492 0.0005
YPD 0.014637 0.003054 4.792746 0.0000
M2 -0.021874 0.005354 -4.085905 0.0001
INFL 0.303852 0.129569 2.345099 0.0205
AR(1) 0.956617 0.022165 43.15966 0.0000
R-squared 0.906337 Mean dependent var 6.210196
Adjusted R-squared 0.903477 S.D. dependent var 2.809331
S.E. of regression 0.872807 Sum squared resid 99.79487
F-statistic 325.5120 Durbin-Watson stat 1.981729
Prob(F-statistic) 0.000000
Inverted AR Roots .96
Short-term interest rates (the rate on three-month Treasury bills) are modeled as a
normalization of a traditional money-demand equation: the demand for money rises when
personal disposable income increases, and falls when real short-term interest rates rise, since
the opportunity cost of holding money increases. The R-squared for this equation is lower than
for the other equations, and that makes sense. Interest rates are more volatile than any of
the other variables, and therefore much more difficult to predict.
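The normalization can be made concrete: starting from a linear money-demand relation and solving for the rate expresses RS as a function of income, money, and inflation, with the signs in the table above (positive on YPD, negative on M2, positive on INFL). A sketch evaluating the 2SLS point estimates, with the AR(1) term omitted and the input values entirely made up:

```python
def implied_short_rate(ypd, m2, infl):
    """Short rate implied by normalizing a linear money-demand relation.
    Coefficients are the 2SLS point estimates from the table above;
    the AR(1) error term is omitted."""
    return -44.76644 + 0.014637 * ypd - 0.021874 * m2 + 0.303852 * infl

# illustrative (made-up) inputs, in the model's units
print(implied_short_rate(ypd=4000.0, m2=1500.0, infl=80.0))
```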
[Figure: historical simulation of the short-term interest rate, 1960-1993 — RS vs. RS (Baseline)]
As the graph shows, this historical simulation does not fit as closely as many of the other
simulations presented here. The spike in the 1980s is consistent with Paul Volcker
raising interest rates to battle inflation. The MAPE for this historical simulation is .59%.
[Figure: ex-post forecast of the short-term interest rate, 94Q1-95Q4 — RS vs. RS (Scenario 1)]
The MAPE for the ex-post forecast is .179%.
Dependent Variable: RL
Method: Least Squares
Date: 12/07/09 Time: 20:53
Sample: 1960Q1 1993Q4
Included observations: 136
Convergence achieved after 9 iterations
Variable Coefficient Std. Error t-Statistic Prob.
C 0.301862 0.110459 2.732789 0.0071
RS 0.188788 0.020330 9.286057 0.0000
RL(-1) 0.822268 0.021359 38.49828 0.0000
AR(1) 0.213139 0.088868 2.398388 0.0179
R-squared 0.987126 Mean dependent var 8.211863
Adjusted R-squared 0.986833 S.D. dependent var 2.743314
S.E. of regression 0.314787 Akaike info criterion 0.555132
Sum squared resid 13.08002 Schwarz criterion 0.640798
Log likelihood -33.74896 F-statistic 3373.660
Durbin-Watson stat 2.028879 Prob(F-statistic) 0.000000
Inverted AR Roots .21
This is the regression for the average yield on AAA bonds. It is a member of the recursive
block, so it was estimated using ordinary least squares.
[Figure: historical simulation of the long-term interest rate, 1960-1993 — RL vs. RL (Baseline)]
The MAPE for the historic simulation is .38%.
[Figure: ex-post forecast of the long-term interest rate, 94Q1-95Q4 — RL (Scenario 1) vs. RL]
The MAPE for the ex-post forecast is .08%.
Dependent Variable: UR
Method: Least Squares
Date: 12/07/09 Time: 22:46
Sample (adjusted): 1960Q3 1993Q4
Included observations: 134 after adjustments
Convergence achieved after 8 iterations
Variable Coefficient Std. Error t-Statistic Prob.
C 6.582626 1.181766 5.570160 0.0000
(D(LOG(GDP)))-(D(LOG(GDPPOT))) -3.592488 2.730454 -1.315711 0.1906
AR(1) 0.973305 0.019730 49.33012 0.0000
R-squared 0.949410 Mean dependent var 6.178109
Adjusted R-squared 0.948637 S.D. dependent var 1.554937
S.E. of regression 0.352400 Akaike info criterion 0.774035
Sum squared resid 16.26835 Schwarz criterion 0.838912
Log likelihood -48.86033 F-statistic 1229.218
Durbin-Watson stat 0.650476 Prob(F-statistic) 0.000000
Inverted AR Roots .97
The unemployment rate is estimated according to a traditional Okun's law equation relating
the unemployment rate to the growth of GDP relative to potential. It makes sense that GDP
growth above potential has a negative effect on the unemployment rate. This equation is also in
the recursive block, and therefore is estimated using ordinary least squares.
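As a concrete sketch of this Okun's-law specification, the following fits UR on the gap between GDP growth and potential growth using synthetic data (all series and the "true" coefficients are made up; the AR(1) error term from the table above is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 136
dlog_gdp = 0.008 + 0.01 * rng.standard_normal(n)  # stand-in for D(LOG(GDP))
dlog_pot = np.full(n, 0.008)                      # stand-in for D(LOG(GDPPOT))
gap = dlog_gdp - dlog_pot
# Okun's law: unemployment falls when growth exceeds potential
ur = 6.0 - 3.6 * gap + 0.05 * rng.standard_normal(n)

# OLS of UR on the growth gap (AR(1) error handling omitted)
X = np.column_stack([np.ones(n), gap])
beta, *_ = np.linalg.lstsq(X, ur, rcond=None)
print(beta)  # intercept near 6; slope negative, in the neighborhood of -3.6
```

The large standard error on the gap term in the table above (2.73 on an estimate of -3.59) mirrors what this sketch shows: with a small gap variance, the Okun coefficient is imprecisely estimated.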
[Figure: historical simulation of the unemployment rate, 1960-1993 — UR vs. UR (Baseline)]
The MAPE for the historical simulation is .19%.
[Figure: ex-post forecast of the unemployment rate, 94Q1-95Q4 — UR (Scenario 1) vs. UR]
The MAPE for the ex-post forecast is .13%.
Dependent Variable: WINF
Method: Two-Stage Least Squares
Date: 12/07/09 Time: 20:55
Sample: 1960Q1 1993Q4
Included observations: 136
Convergence achieved after 8 iterations
Instrument list: C CONS G GDP GDPPOT INFL(-1) INR INV IR M NETWRTH PRFT RL RS TR UR WINF(-1) X
Lagged dependent variable & regressors added to instrument list
Variable Coefficient Std. Error t-Statistic Prob.
C -14.26324 3.091834 -4.613198 0.0000
INFL 0.691501 0.014116 48.98761 0.0000
UR(-2) 0.032879 0.096548 0.340545 0.7340
PROD 0.152321 0.046329 3.287830 0.0013
AR(1) 0.934047 0.033211 28.12424 0.0000
R-squared 0.999885 Mean dependent var 47.85147
Adjusted R-squared 0.999882 S.D. dependent var 28.87696
S.E. of regression 0.314002 Sum squared resid 12.91624
F-statistic 285411.2 Durbin-Watson stat 1.438084
Prob(F-statistic) 0.000000
Inverted AR Roots .93
The annual rate of growth in wages is modeled as a positive function of overall price inflation, a
negative function of the unemployment rate, and a positive function of productivity growth. We
have a very strong R-squared value, indicating a good-fitting line, although the estimated
coefficient on UR(-2) is positive and statistically insignificant.
[Figure: historical simulation of wage inflation, 1960-1993 — WINF vs. WINF (Baseline)]
The MAPE for the historic simulation is .11%.
[Figure: ex-post forecast of wage inflation, 94Q1-95Q4 — WINF vs. WINF (Scenario 1)]
The MAPE for the ex-post forecast is .01%.
Dependent Variable: INFL
Method: Two-Stage Least Squares
Date: 12/07/09 Time: 20:57
Sample (adjusted): 1960Q2 1993Q4
Included observations: 135 after adjustments
Convergence achieved after 19 iterations
Instrument list: C CONS CONS(-2) G GDP(-1) GDPPOT INV IR M NETWRTH PRFT RL RS TR WINF(-1) X YPD
Lagged dependent variable & regressors added to instrument list
Variable Coefficient Std. Error t-Statistic Prob.
C 2.645615 2.952583 0.896034 0.3719
WINF 0.676700 0.140916 4.802149 0.0000
CONS(-1) 0.000505 0.001582 0.318843 0.7504
POIL 0.092758 0.022974 4.037453 0.0001
INFL(-1) 0.479845 0.090355 5.310660 0.0000
AR(1) 0.926117 0.041998 22.05126 0.0000
R-squared 0.999943 Mean dependent var 72.17086
Adjusted R-squared 0.999941 S.D. dependent var 38.74432
S.E. of regression 0.296935 Sum squared resid 11.37400
F-statistic 456242.9 Durbin-Watson stat 2.116613
Prob(F-statistic) 0.000000
Inverted AR Roots .93
The annual rate of growth in the consumer price index is estimated as a function of
wage inflation, consumer demand, and oil prices. We have a high R-squared value, and the
Durbin-Watson statistic falls within the accepted range.
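The Durbin-Watson statistic reported for each equation can be computed directly from the residuals; a minimal sketch (function name and toy residuals are mine):

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum of squared first differences of the residuals divided by
    the residual sum of squares. Values near 2 suggest no first-order
    autocorrelation; near 0, positive; near 4, negative autocorrelation."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

print(durbin_watson([1.0, 1.0, 1.0, 1.0]))    # perfectly persistent residuals -> 0.0
print(durbin_watson([1.0, -1.0, 1.0, -1.0]))  # alternating residuals -> 3.0 here
```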
[Figure: historical simulation of INFL, 1960-1993 — INFL vs. INFL (Baseline)]
The MAPE for the historic simulation is .10%.
[Figure: ex-post forecast of INFL, 94Q1-95Q4 — INFL vs. INFL (Scenario 1)]
The MAPE for the ex-post forecast is .006%.
[Figure: historical simulation of GDP, 1960-1993 — GDP vs. GDP (Baseline)]
After estimating all of the equations, we can simulate the model as a complete
system. The graph above shows the historical simulation of GDP. The fit is good, with a
MAPE of .05%.
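Simulating the model as a complete system means solving the simultaneous equations jointly each period, typically by Gauss-Seidel iteration. A toy two-equation illustration (the coefficients and the exogenous spending value are made up, not the model's):

```python
# Toy simultaneous system:
#   CONS = 10 + 0.6 * GDP   (consumption function)
#   GDP  = CONS + 50        (income identity, exogenous spending of 50)
cons, gdp = 0.0, 0.0
for _ in range(200):        # Gauss-Seidel: sweep the equations until convergence
    cons = 10 + 0.6 * gdp
    gdp = cons + 50
print(gdp, cons)            # converges to GDP = 150, CONS = 100
```

Each sweep shrinks the error by the system's feedback factor (0.6 here), so the iteration converges geometrically to the fixed point; a full model solver repeats this quarter by quarter.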
[Figure: GDP, 1960-1995 — GDP (Scenario 1), GDP, and GDP (Baseline)]
Above is a graph combining the historical simulation, the actual values, and the ex-post
forecast. From this view the ex-post forecast tracks the actual values well. Below is a
closer look at the ex-post forecast.
[Figure: ex-post forecast of GDP, 94Q1-95Q4 — GDP vs. GDP (Scenario 1)]
The MAPE based on this simulation is .008%. This is a strong forecast for the gross
domestic product.
[Figure: historical simulation of personal disposable income, 1960-1993 — YPD vs. YDP_0]
The results for the disposable personal income equation confirm our findings
for gross domestic product. The steady growth of personal disposable income is consistent with
the growth of gross domestic product. The MAPE of the historical simulation for personal
disposable income is .06%.
[Figure: personal disposable income, 1960-1995 — YDP_1, YPD, and YDP_0]
[Figure: ex-post forecast of personal disposable income, 94Q1-95Q4 — YPD vs. YDP_1]
The MAPE for the ex-post forecast of personal disposable income is .01%.