1 Introduction
Reviewers of anomaly studies often ask researchers to provide evidence demonstrating the robustness of the studied effect over time. Concerns typically include whether the effect is statistically significant over the entire study period, whether the magnitude of the effect has varied over time, and whether the effect has diminished (been traded away) since it was first identified in the literature. The suspicion is that the full-sample results might be driven by the returns from a shorter time frame or that the anomaly no longer exists. The most common response by authors is to break their sample into subperiods and examine each subperiod for evidence of the effect. Doing so typically has two consequences that, taken together, can be misinterpreted as a lack of subperiod robustness. First, statistical significance is found in few, if any, of the subperiods. Second, estimates of the anomaly's magnitude display substantial variation over time. These two results often lead authors to conclude that the effect is not consistently present over time and/or that the full-sample results are driven by a few spurious subperiods. We demonstrate that both of these results are exactly what researchers should expect to find even when an effect is real, significant, and stationary in the full sample.
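The point can be made concrete with a small simulation. The sketch below is our own illustration rather than the paper's analysis: it draws monthly anomaly returns with a constant (stationary) positive mean, confirms that the mean is significant over the full sample, and then splits the same data into five equal subperiods. The premium size, volatility, sample length, and number of subperiods are illustrative assumptions only.

```python
# Illustrative simulation (our own sketch, not the paper's analysis).
# Monthly anomaly returns are drawn with a constant mean, so the effect is
# real and stationary by construction. The premium, volatility, sample
# length, and number of subperiods are assumptions chosen for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_months = 600        # 50 years of monthly observations (assumed)
true_premium = 0.5    # constant anomaly return of 0.5% per month (assumed)
vol = 4.0             # 4% monthly return volatility (assumed)

returns = true_premium + vol * rng.standard_normal(n_months)

# Full-sample t-test of the mean anomaly return: typically highly significant.
t_full, p_full = stats.ttest_1samp(returns, 0.0)
print(f"Full sample : mean={returns.mean():.2f}%  t={t_full:.2f}  p={p_full:.4f}")

# The same data split into five equal subperiods: significance is often lost
# and the estimated magnitude varies, even though the true mean never changes.
for i, sub in enumerate(np.array_split(returns, 5), start=1):
    t_sub, p_sub = stats.ttest_1samp(sub, 0.0)
    print(f"Subperiod {i}: mean={sub.mean():.2f}%  t={t_sub:.2f}  p={p_sub:.4f}")
```

With these assumed parameters the expected full-sample t-statistic is about 3, while each subperiod's expected t-statistic is roughly 3/sqrt(5), or about 1.4, below conventional significance thresholds, so weak subperiod t-statistics and varying subperiod means are exactly what the two consequences above would predict.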
We examine the statistical implications of subperiod analysis and suggest a regression-based test of structural change to determine whether the mean level of an anomaly is nonstationary (i.e., not constant). Calendar anomaly studies (e.g., the day-of-the-week effect, the January effect, and the turn-of-the-month effect) almost universally use a regression-based framework for estimation, with indicator variables designating different calendar times. For simplicity, we couch our analysis in terms of this standard framework. However, the insights offered apply more broadly to any analysis in which a full sample is broken into subsamples. We first demonstrate analytically how the reduction in sample size associated with subperiod analysis reduces the power of statistical tests of the anomaly. Next, using simulation, we demonstrate the degree of variation or nonstationarity that can appear to be present across subperiods even when the studied effect is truly stationary. For a real context, we use the recently documented "Halloween effect" anomaly to demonstrate the implications of traditional subperiod analysis...
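As a concrete illustration of the framework just described, the sketch below (our own, using simulated data and illustrative variable names) estimates a Halloween-style indicator regression, r_t = a + b*D_t + e_t with D_t = 1 in November through April, and then performs a simple regression-based test of structural change by letting the anomaly coefficient differ across subperiods and testing whether those differences are jointly zero. The three-subperiod split, the parameter values, and the use of an F-test are assumptions for illustration, not the paper's exact specification or results.

```python
# Illustrative sketch of the indicator-variable regression framework and a
# regression-based structural-change test (simulated data; not the paper's
# specification or results).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

n_months = 600
month_of_year = (np.arange(n_months) % 12) + 1
halloween = np.isin(month_of_year, [11, 12, 1, 2, 3, 4]).astype(int)

# Returns with a constant (stationary) Halloween premium of 0.8% (assumed).
ret = 0.3 + 0.8 * halloween + 4.0 * rng.standard_normal(n_months)

df = pd.DataFrame({"ret": ret, "hall": halloween})
df["subperiod"] = pd.cut(np.arange(n_months), bins=3, labels=["P1", "P2", "P3"])

# Full-sample regression with a calendar indicator: r_t = a + b*D_t + e_t.
full = smf.ols("ret ~ hall", data=df).fit()
print(full.params, full.tvalues, sep="\n")

# Structural-change test: allow the anomaly coefficient to differ across
# subperiods (unrestricted) and compare against a single constant coefficient
# (restricted) with an F-test. A large p-value is consistent with a
# stationary mean level of the anomaly.
restricted = smf.ols("ret ~ hall + C(subperiod)", data=df).fit()
unrestricted = smf.ols("ret ~ hall * C(subperiod)", data=df).fit()
f_stat, p_value, df_diff = unrestricted.compare_f_test(restricted)
print(f"F={f_stat:.2f}  p={p_value:.3f}  df_diff={df_diff:.0f}")
```

The restricted model forces one Halloween coefficient for the entire sample, so the comparison asks whether allowing the coefficient to change across the three subperiods improves the fit by more than chance would predict, which is the stationarity question the structural-change test is meant to answer.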