An Empirical Test of an IPO Performance Prediction Model: Are There "Blue Chips" among IPOs?

2008 
ABSTRACT

An earlier study of 563 firms that issued IPOs during 1997 identified and estimated a three-stage algorithm in which basic accounting variables and indices available at the time of the IPO were found to predict mean annual wealth appreciation from buy-and-hold stock ownership over the ensuing three years. Firm size predicted membership in the middle sixth and seventh deciles; sales, receivables turnover, and retained earnings per assets predicted the top quintile; current debt and selling costs predicted the lowest quintile. Since February 2001 market trends have been generally negative. The current paper confirms the earlier model despite these negative currents.

PURPOSE OF THIS STUDY

An earlier investigation (Miller, 2003) uncovered a non-linear and, indeed, non-metric anomaly in the joint distributions of the wealth appreciation of companies with new initial public offerings and certain accounting data made public at or around the date of the offering. The earlier study was purely exploratory and consisted of specifying the model and estimating the parameters of a three-stage prediction scheme. The model was able to predict approximately three-fourths of the firms correctly into three segments of wealth appreciation: "MID," comprising the sixth and seventh deciles; "TOP," the top quintile; and "LOW," the bottom quintile. The objective of this study is to evaluate the performance of the model in the face of the generally poor market conditions of the two years immediately following model construction (March 2001 to July 2003).

INTRODUCTION

It is not rare to find examples of data mining in the literature relating financial data to stock market and general business performance. Even the most influential of the early papers on company failure prediction (e.g., Beaver, 1967; Altman, 1968; and Edmister, 1972) might be accused of too-enthusiastic opportunism in their use of repeated analyses (one suspects) until a statistically significant formulation appeared. To make matters worse, sample sizes were very small and drawn as convenience samples rather than probability samples. As is apparent from these cautionary examples, data mining is not always a complimentary term. It is also called "data dredging," or the over-working of data, and is a natural result of the statistician's desire to do a thorough job. It may be said that the goal of any statistical analysis is to uncover statistical significances. See Fisher (1986) for a broader discussion of the tensions between the statistician and his client. There is also a careful discussion of the problem in the paper and subsequent comments in Chatfield (1995). Chatfield underscores the potential for disaster whenever a model is uncovered and fit to a set of data and then tested on the same set. This is especially true in the cases of step-wise regression and time series analysis. While this is not a novel idea, he goes further to argue that split-sample designs are also suspect and that models should preferably be tested on data gathered at another time. Only then can his "model selection biases" be removed. More generally, it can be argued that there are two stages in any kind of scientific enterprise. Tukey (1977) developed a broad range of powerful "exploratory data" tools to assist the researcher in uncovering explanatory models, but he would agree that there is still a need for "confirmatory" analysis (Tukey, 1980).
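Chatfield's distinction between testing a model on the very data that produced it, on a random split of that same sample, and on data gathered at a later time can be made concrete with a brief sketch. The sketch below is illustrative only: the column names (ipo_date and a generic label), the cutoff date, and the use of a logistic regression as a stand-in classifier are assumptions introduced for exposition, not the specification of the model tested in this paper.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def out_of_time_test(df, feature_cols, label_col, cutoff_date):
    """Fit on observations dated before cutoff_date; evaluate on later ones.

    This mimics Chatfield's preferred design: the test data are gathered
    at another time rather than drawn from the same sample by a random split.
    """
    train = df[df["ipo_date"] < cutoff_date]
    test = df[df["ipo_date"] >= cutoff_date]
    model = LogisticRegression(max_iter=1000)          # illustrative stand-in classifier
    model.fit(train[feature_cols], train[label_col])
    preds = model.predict(test[feature_cols])
    return accuracy_score(test[label_col], preds)

# Hypothetical usage on an IPO data frame with a datetime "ipo_date" column:
# score = out_of_time_test(ipo_df, ["sales", "current_debt"], "top_quintile",
#                          pd.Timestamp("2001-03-01"))

Because the holdout period lies entirely after the estimation period, any model-selection bias incurred while searching for the specification cannot leak into the reported accuracy, which is exactly the protection Chatfield argues a split-sample design does not fully provide.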
Good scientific procedure calls for such confirmation not to come from the model source but from independent investigators operating at other sites on related, but not identical, datasets. The approach of this paper is strictly "exploratory," and the confirmatory phase will be left as a follow-up exercise. As part of a fundamental reflection on the theoretical underpinnings of statistical analysis, Hand (1996) has expanded on the opening provided by Velleman and Wilkinson (1993), who criticized the psychophysicist Stevens' (1951) data measurement scale hierarchy (nominal, ordinal, interval, and ratio) that has become almost routinely accepted in much of scientific work, especially in the social sciences and business research. …
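For concreteness, the three-stage scheme summarized in the abstract can be sketched as follows. The dictionary representation of a firm, the proxy of total assets for firm size, the two-of-three vote at the second stage, and all cut-off values are illustrative assumptions made here for exposition; the estimated cut-offs and functional form are those reported in Miller (2003).

def classify_ipo(firm, cuts):
    """Assign an IPO to MID, TOP, LOW, or OTHER in three sequential stages."""
    # Stage 1: firm size alone screens for the middle (sixth-seventh decile) band.
    if cuts["size_lo"] <= firm["total_assets"] <= cuts["size_hi"]:
        return "MID"
    # Stage 2: sales, receivables turnover, and retained earnings per assets
    # jointly screen for the top quintile; a simple two-of-three vote stands in
    # for the estimated relationship.
    top_votes = sum([
        firm["sales"] >= cuts["sales"],
        firm["receivables_turnover"] >= cuts["receivables_turnover"],
        firm["retained_earnings_to_assets"] >= cuts["retained_earnings_to_assets"],
    ])
    if top_votes >= 2:
        return "TOP"
    # Stage 3: high current debt or high selling costs flag the bottom quintile.
    if (firm["current_debt"] >= cuts["current_debt"]
            or firm["selling_costs"] >= cuts["selling_costs"]):
        return "LOW"
    return "OTHER"

The sequential structure is the point of the sketch: a firm is assigned to MID, TOP, or LOW in that order, and only firms passing none of the three screens fall into the residual group, which is why the scheme is non-linear and, indeed, non-metric rather than a single scoring equation.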