This dissertation consists of three empirical chapters.

The first chapter examines the extent to which real-world agents form rational quantitative expectations, an issue over which there is much debate. Dynamic models are estimated on new plant-level survey data in order to test rationality for manufacturing plants that report expected capital expenditures. An advantage of these data is that rationality is tested in environments where agents may not know each other's expectations, so strategic motives for biases or inefficiencies are minimized. Model estimates and tests indicate that weak implications of rational expectations are rejected, as are adaptive expectations.

The second chapter examines expectations formation in the economists' laboratory, since psychologists have documented several biases and heuristics that describe deviations from Bayesian updating, a standard assumption for economists. In particular, confirmation bias predicts that individuals will make systematic errors when updating their beliefs about the state of the world given a stream of information. This chapter examines the bias in a non-strategic environment that gives experimental subjects financial incentives to report probability estimates close to those of a Bayesian. Subjects revise their estimates of the state of the world as they receive signals. Comparing their estimates with those of a Bayesian shows that subjects display conservatism, underweighting new information, and confirmation bias, treating confirming and disconfirming evidence differently.

The third chapter seeks to determine, through reduced-form Phillips curve estimates and a structural model, whether the indicator relationship between capacity utilization and inflation has diminished, since in recent years high levels of capacity utilization have not led to higher inflation. In Canada, the capacity utilization rate is benchmarked to survey data, providing a unique opportunity to analyze this macroeconomic relationship empirically. Estimates of time-varying-parameter and structural-break models indicate that there have been breaks in the relationship over time. The timing of the breaks suggests that increasing competitiveness and a rules-based monetary policy may help account for the demise of the relationship. Estimates of a monopolistically competitive sticky-price model economy qualitatively lend credence to this conjecture.
We investigate the difference between confidence in a belief distribution and confidence over multiple priors using a lab experiment. Theory predicts that the average Bayesian posterior is affected by the former but unaffected by the latter. We manipulate confidence over multiple priors by varying how long subjects view a black-and-white grid whose relative composition represents the prior. We find that when subjects view the grid for a longer duration, they are more confident and under-update more, placing more weight on priors and less weight on signals when updating. Confidence within a belief distribution is varied by changing the prior beliefs; subjects are insensitive to this notion of confidence. Overall, we find that confidence over multiple priors matters when it should not, and confidence in prior beliefs does not matter when it should.
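One standard way to quantify such under-updating, used here purely as an illustration and not as this paper's own specification, is a Grether-style log-odds decomposition. For a binary state \(\theta \in \{A, B\}\) and signal \(s\), Bayes' rule in log odds is

\[
\ln\frac{P(A \mid s)}{P(B \mid s)} \;=\; \beta_{\pi}\,\ln\frac{P(A)}{P(B)} \;+\; \beta_{s}\,\ln\frac{P(s \mid A)}{P(s \mid B)},
\]

where a Bayesian has \(\beta_{\pi} = \beta_{s} = 1\). Estimates with \(\beta_{s} < 1\) correspond to under-updating (conservatism), while \(\beta_{\pi} > \beta_{s}\) indicates excess weight on priors relative to signals.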
Methodologies for analyzing the forces that move and shape national economies have advanced markedly in the last thirty years, enabling economists as never before to unite theoretical and empirical research and align measurement with theory. In Structural Macroeconometrics, David DeJong and Chetan Dave provide the unified overview and in-depth treatment analysts need to apply these latest theoretical models and empirical techniques. The authors' emphasis throughout is on time series econometrics. DeJong and Dave detail methods available for solving dynamic structural models and casting solutions in the form of statistical models with empirical implications that may be analyzed either analytically or numerically. They present the full range of methodologies for characterizing and evaluating these empirical implications, including calibration exercises, method-of-moment procedures, and likelihood-based procedures, both classical and Bayesian. The book is complete with a rich array of implementation algorithms, sample empirical applications, and supporting computer code. Structural Macroeconometrics is tailored specifically to equip readers with a set of practical tools that can be used to expedite their entry into the field. DeJong and Dave's uniquely accessible, how-to approach makes this the ideal textbook for graduate students seeking an introduction to macroeconomics and econometrics and for advanced students pursuing applied research in macroeconomics. The book's historical perspective, along with its broad presentation of alternative methodologies, makes it an indispensable resource for academics and professionals.
A collection of procedures for likelihood evaluation using the particle filter. The likelihood function is associated with the optimal growth model outlined in Section 10.2.2.2 of Macroeconometric Analysis. The authors request that use of this code in published work be acknowledged by citing the textbook Macroeconometric Analysis, as well as any other researchers recognized in the documentation that accompanies the code.
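The textbook's own code is not reproduced here; purely as an illustration of the technique, a minimal bootstrap particle-filter log-likelihood for a generic state-space model might look like the following Python sketch. The interface (propagate, meas_dens, sample_x0) and the toy AR(1) example are our own assumptions, not the authors' implementation.

    import numpy as np

    def particle_loglik(y, propagate, meas_dens, sample_x0, n_particles=1000, seed=0):
        # Bootstrap particle filter: approximate the log-likelihood of observations y
        # for a model with state transition x_t = propagate(x_{t-1}) and
        # measurement density y_t ~ meas_dens(. | x_t).
        rng = np.random.default_rng(seed)
        x = sample_x0(n_particles, rng)   # draw the initial particle swarm
        loglik = 0.0
        for obs in y:
            x = propagate(x, rng)         # push particles through the state equation
            w = meas_dens(obs, x)         # measurement density at each particle
            loglik += np.log(np.mean(w))  # incremental likelihood contribution
            # multinomial resampling with normalized weights
            x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]
        return loglik

    # Toy example (illustrative parameters): AR(1) state observed with Gaussian noise.
    phi, sig_x, sig_y = 0.9, 0.5, 1.0
    propagate = lambda x, rng: phi * x + sig_x * rng.standard_normal(x.size)
    meas_dens = lambda obs, x: np.exp(-0.5 * ((obs - x) / sig_y) ** 2) / (sig_y * np.sqrt(2.0 * np.pi))
    sample_x0 = lambda n, rng: (sig_x / np.sqrt(1.0 - phi ** 2)) * rng.standard_normal(n)
    print(particle_loglik(np.array([0.3, -0.1, 0.7]), propagate, meas_dens, sample_x0))

Because resampling makes the simulated likelihood a non-smooth function of the parameters, a fixed seed (as above) is commonly used when embedding such an evaluation in an optimizer or posterior sampler.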
This paper investigates the relative importance of shocks to total factor productivity (TFP) versus shocks to the marginal efficiency of investment (MEI) in explaining cyclical fluctuations. The literature offers contrasting results: TFP shocks are important in neoclassical environments but relatively unimportant in New Keynesian environments. A model with endogenous capital utilization captures both results, depending on the degree of nominal rigidity. In the model, MEI shocks create a wedge between the nominal returns on bonds and capital. Nominal rigidities activate this wedge and shift the relative importance toward MEI shocks, while TFP shocks dominate when prices are perfectly flexible.
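To fix ideas, consider a stylized sketch in our own notation, which is not necessarily the paper's. Let \(\mu_t\) denote the MEI shock and \(u_t\) the endogenous utilization rate, with

\[
k_{t+1} = \bigl(1 - \delta(u_t)\bigr) k_t + \mu_t i_t, \qquad y_t = z_t (u_t k_t)^{\alpha} n_t^{1-\alpha}.
\]

A rise in \(\mu_t\) lowers the consumption cost of a unit of installed capital, moving the return on capital relative to the nominal bond rate; under flexible prices arbitrage closes this gap, whereas nominal rigidities leave it active.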
The Sixth Amendment to the U.S. Constitution stipulates that individuals accused by the state of violating the law must be allowed a speedy and impartial trial by a set of jurors drawn from the community in which the alleged violation took place. Most empirical research on the determinants of the time it takes a jury to render a verdict relies on data from the activities of mock juries. In this paper we analyze a unique and informative dataset on actual juries that mitigates the causal-inference issues arising from the study of mock juries' time to decision. The empirical results indicate that six-person juries are no quicker than twelve-person juries; juries deliberate longer as cases become more complex or more severe; non-unanimous decisions take longer to reach than unanimous ones; and panels that saw many potential jurors excused during voir dire deliberate longer than panels with fewer challenges.