Contest column January 11, 2001 (performance) $$
Contest column December 14, 2000 (performance) $$
Contest column November (performance) $$
Contest column October 6, 2000 (performance) $$
Contest column September 18, 2000 (performance) $$
Contest column August 20, 2000 (performance) $$
Contest column July, 2000 (performance) $$
Contest column June 8, 2000 (performance) $$
Contest column May 8, 2000 (performance) $$
Contest column April, 2000 (performance) $$
Contest column March 9, 2000 (performance) $$
Contest column February 10, 2000 (performance) $$
Contest column January 12, 2000 (performance) $$
Contest column December 9, 1999 (results) $$
Contest column November 4, 1999 (results) $$
Contest column October 14, 1999 (results) $$
Contest column September 9, 1999 (results) $$
Contest column August 5, 1999 (results) $$
Contest column July 8, 1999 (results) $$
Contest column June 10, 1999 (results) $$
Contest column May 6, 1999 (results) $$
Contest column April 8, 1999 $$
Contest column March 10, 1999 $$
The Wall Street Journal
Previous contests

In 1988 the Wall Street Journal began a contest that was inspired by Burton Malkiel’s book A Random Walk Down Wall Street. In the book, the Princeton professor theorized that "a blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do just as well as one carefully selected by experts."
The Journal set out to create an entertaining contest to test Malkiel's theory and give its readers some new investment ideas in the process. Wall Street Journal staff members typically play the role of the monkeys (the Journal listed liability insurance as one reason for not going all the way and actually using live monkeys).
The contest has become a popular feature for the Journal and has also drawn much interest and commentary from journalists, investors, and academics. Several academic papers have been written about the contest and its implications (summaries and links are included below).
The contest began on October 4, 1988 and since then more than 100 contests have been completed under the current rules. Initially the contest lasted one month, but after recognizing that publication of the contest was creating a publicity effect on the pros’ stock picks, the Journal began measuring results over a six-month period beginning in 1990.
The rules have changed at various times during the contest, but the current rules are as follows. Each month four "professionals" are given the opportunity to select one stock (long or short) for the following six months. The stocks must meet the following criteria:
- Market capitalization must be at least $50 million.
- Daily trading volume must be at least $100,000.
- Price must be at least $2.
- Stocks must be listed on the NYSE, AMEX, or NASDAQ, and any foreign stocks must have an ADR.
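The eligibility rules above amount to a simple screen. Here is a minimal sketch in Python; the thresholds come from the list above, but the field names and data layout are hypothetical, purely for illustration:

```python
# Hypothetical screen implementing the contest's eligibility rules.
EXCHANGES = {"NYSE", "AMEX", "NASDAQ"}

def is_eligible(stock):
    """stock is a dict with the (hypothetical) fields used below."""
    return (
        stock["market_cap"] >= 50_000_000       # market cap at least $50 million
        and stock["daily_volume"] >= 100_000    # at least $100,000 traded daily
        and stock["price"] >= 2.00              # price at least $2
        and stock["exchange"] in EXCHANGES      # listed on NYSE, AMEX, or NASDAQ
        and (not stock["foreign"] or stock["has_adr"])  # foreign stocks need an ADR
    )

print(is_eligible({
    "market_cap": 75_000_000, "daily_volume": 250_000,
    "price": 12.50, "exchange": "NYSE", "foreign": False, "has_adr": False,
}))  # True
```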
The pros’ stock picks compete against four stocks usually chosen by Journal staffers flinging darts at the Wall Street Journal stock tables, which are pasted to a board. At the end of six months, the price appreciation of the pros’ stocks and the dartboard stocks is compared (dividends are not included). The two best-performing pros are invited back for the next contest and two new professionals are added. In the latest twist, the Journal has begun taking stock picks from its readers, which are also compared with the pros’ and darts’ picks (see 4/8/99 article $$).
On October 7, 1998 the Journal presented the results of the 100th dartboard contest. So who won the most contests and by how much? The pros won 61 of the 100 contests versus the darts. That’s better than the 50% that would be expected in an efficient market. On the other hand, losing 39 of the contests to a bunch of darts could certainly be viewed as somewhat of an embarrassment for the pros. Additionally, the performance of the pros versus the Dow Jones Industrial Average was less impressive. The pros barely edged the DJIA by a margin of 51 contests to 49. In other words, an investor who simply invested passively in the Dow would have beaten the pros’ picks in roughly half the contests (and that is without even considering transaction costs, or taxes for taxable investors).
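Whether 61 wins out of 100 is distinguishable from luck can be checked with a simple binomial calculation. The sketch below assumes each contest is an independent 50/50 coin flip, which ignores the overlapping six-month windows and other quirks of the real contest:

```python
from math import comb

def p_at_least(wins, n):
    """One-sided p-value: the chance of at least `wins` successes in n fair coin flips."""
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2**n

print(round(p_at_least(61, 100), 3))  # pros vs. darts: ~0.018, unlikely by pure chance
print(round(p_at_least(51, 100), 3))  # pros vs. DJIA: ~0.46, fully consistent with chance
```

Under these assumptions, the pros’ 61-39 record against the darts would occur by luck less than 2% of the time, while the 51-49 record against the Dow is exactly what a coin would produce.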
The pros’ picks look more impressive when the actual returns of their stocks are compared with the dartboard and DJIA returns. The pros’ average gain was 10.8%, versus 4.5% for the darts and 6.8% for the DJIA.
Some commentators have therefore concluded that the contest offers some proof that the pros have beaten the forces of chance, and the Journal described the pros as "comfortably ahead of the darts" in the dartboard column published on 3/10/99 ($$). However, that conclusion is not shared by many others who have analyzed the contest. Malkiel and other academics have responded to those who consider the contest a victory for the pros with what amounts to a collective response of "not so fast my friend" (as they like to say on ESPN).
Researchers who have come to the defense of the darts argue that the contest has some unique circumstances that deserve elaboration. It can easily be argued that the contest itself and its rules tilt the odds in the pros’ favor. In fact, the academics seem to argue that it's not the darts that are on the losing end. Rather, they argue that investors who buy the pros’ recommended stocks are "naïve" and are acting on nothing more than "noise."
Before the contest even began, Professor Malkiel had suggested that the results would be affected by an announcement effect. In other words, the very act of publishing the pros’ picks in the Journal could cause those stocks to rise as hundreds of thousands of Journal readers (the Journal’s current circulation is listed at over 1.7 million) open their morning paper and react to the recommendations of the pros. Professor Malkiel suggests to Investor Home that the pros’ advantage effectively disappears if you (1) account for the fact that the pros pick relatively riskier stocks and (2) measure returns from the day after the column appears (thereby eliminating the announcement effect).
There have actually been several very thorough studies that have analyzed the contest in great detail. In "The Dartboard Column: Second-Hand Information and Price Pressure," Brad Barber and Douglas Loeffler (Journal of Financial and Quantitative Analysis, June 1993) addressed the question of whether the pros’ stock picks created temporary buying pressure from naïve investors (the "price pressure hypothesis") or revealed relevant information (the "information hypothesis"). The authors found evidence for both but also came to some interesting conclusions.
Two days following publication, the pros’ picks had average abnormal returns of 4%, although those returns partially reversed within 25 days. Those returns were nearly twice the level of abnormal returns documented in previous research on analyst recommendations, and the trading volume of the pros’ stocks nearly doubled after the contest publication (at the time, a greater volume increase than that of the Journal's "Heard on the Street" column). They also found that the pros picked stocks with (1) lower dividends, (2) higher historical and projected EPS growth, and (3) slightly higher P/E ratios and betas.
Professor Bing Liang studied the contest over an even longer period, analyzing contests from January 1990 through November 1994, and published a paper in the January 1999 issue of the Journal of Business titled "Price Pressure: Evidence from the ‘Dartboard’ Column." A previous study, titled The "Dartboard" Column: The Pros, the Darts, and the Market, can be downloaded in its entirety from the Social Science Research Network.
Liang also documented a two-day announcement effect, which reversed within 15 days. Liang found that the returns were intertwined with the pros’ track records. That is, returning pros' picks had larger announcement effects. Yet over the full period, even the returning pros’ picks did not outperform. His research supported the "price pressure hypothesis," the theory that the abnormal returns and volume were driven by noise trading from naïve investors. On average, investors following the experts’ recommendations lost 3.8% on a risk-adjusted basis over a six-month holding period. The announcement effect was greater for NASDAQ stocks than for NYSE stocks. In addition to increased volume, spreads on the pros’ stocks declined.
Liang concluded that the pros outperformed neither the market nor the darts. According to Liang, the pros’ supposedly superior performance could be explained by the small sample size, the announcement effect, and the missing dividend yields. One of the strongest criticisms of the contest is that the Journal measures performance by price appreciation only, even though total return consists of both price appreciation and dividends. For the period Liang studied, the pros’ stocks had an average dividend yield of 1.2%, versus 2.3% for the darts and 3.1% for the DJIA.
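The size of the dividend distortion is easy to approximate. The back-of-the-envelope sketch below combines the Journal's price-only averages with the dividend yields from Liang's sample period (the yields are annual, so roughly half accrues over a six-month contest; this is a rough illustration, not Liang's actual risk adjustment):

```python
# Average price-only returns over the first 100 contests (%), as reported by the Journal
price_return = {"pros": 10.8, "darts": 4.5, "djia": 6.8}
# Average annual dividend yields from Liang's sample period (%)
annual_yield = {"pros": 1.2, "darts": 2.3, "djia": 3.1}

# Approximate six-month total return: price change plus half a year of dividends.
total = {k: price_return[k] + annual_yield[k] / 2 for k in price_return}
for name, ret in total.items():
    print(f"{name}: {ret:.2f}%")
```

Counting dividends this way, the pros' lead over the darts narrows from 6.3 to about 5.75 percentage points, and their lead over the DJIA narrows from 4.0 to about 3.05 points; dividends shrink but do not erase the raw-return gap.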
Liang found that the pros’ stocks were riskier (they had higher betas than the market and the darts’ stocks) and had higher relative strength at the beginning of the contest. Liang also found abnormal volume in the pros’ stocks before the contest announcement. This could be coincidental, or it could indicate that someone knew the pros’ picks were coming and traded on them before the columns appeared. Interestingly, the dartboard stocks tended to perform well after the contest ended. See also Monkey business from Forbes (6/14/99).
An additional study, which will appear in a future edition of the Journal of Finance, is also available on the web. In Liquidity Provision and Noise Trading: Evidence from the "Investment Dartboard" Column, Jason Greene and Scott Smart reached conclusions similar to Liang's but focused on market-maker activity and the bid-ask spread around the column's publication. They concluded "that the column generates temporary price pressure by increasing noise (i.e., uninformed) trading from its readers." Most of the abnormal return disappeared within a few weeks. Initial returns and volume were higher for the stocks recommended by analysts with successful records, but the stocks with the greatest run-ups had the largest price reversals.
The Wall Street Journal has certainly created an intriguing and entertaining contest. Unfortunately, as the Journal openly admits, it is not a perfect test of the efficient market hypothesis. One problem is that the Wall Street Journal is so respected and popular that the contest itself impacts the results. Perhaps a good comparison that demonstrates the problem with the contest is the system used for testing most medical and pharmaceutical products. Before a product is approved for public use it must complete a series of "double-blind" studies to determine its usefulness and potential side effects. In a double-blind study, neither the test administrators nor the patients know who is getting the real product and who is getting a placebo. This prevents both the study personnel and the patients from being biased and allows for untainted results.
The Wall Street Journal’s dartboard contest is, unfortunately, a long way from being a double-blind study. The contestants know in advance that their picks are about to be published, and the Journal has no authority to prevent the contestants from trading in their stocks (or the dartboard stocks). Additionally, the contestants get the benefit of including their rationale, which typically occupies several paragraphs in the column. For instance, in the column dated 3/10/99 each pro's pick came with a two-paragraph commentary, with blurbs like
- the stock "continues to perform head and shoulders above its competitors,"
- "it deserves a premium multiple based on its performance and its cash flow," and
- "it's an attractive valuation based on the strength of their position in the market."
These positive comments are read by thousands of investors, who might agree with the rationale and act accordingly, driving up the stock price and producing the announcement effect documented in the studies. The dartboard stocks simply get listed with no rationale.
Of course, expecting the darts to get equal treatment is a little like the Washington Generals expecting equal publicity in their games against the Harlem Globetrotters. The fans, of course, are really there to see the Globetrotters, and the Journal readers are really interested in hearing about the pros’ picks, particularly those of the pros that seem to have a hot hand. The darts are, in a sense, just a sideshow to the pros.
For the pros, the contest is obviously serious business. A winning pick can be great PR and result in substantial goodwill for their investment practices. A poor showing can be a major embarrassment in front of millions of Journal readers.
So is there anything the Journal can do to structure the contest to accurately test Malkiel’s theory without tainting the results in the process? Ideally, in the double-blind spirit, the Journal would have to conduct the contest without publishing the picks or the results until the contest was over. Additionally, the pros would have to be kept from knowing whether they were actually entered in the contest, to be completely certain that they don’t impact the stocks in some way.
At a minimum, to be fair to the darts (and the efficient market hypothesis), the Journal could start calculating the performance results with dividends included. Not including dividends in the contest is a little like sending two boxers into the ring without telling one of them that the judges don’t give any points for body blows. The pros shoot for low-dividend stocks just as a boxer would throw only head shots. But the brainless darts, of course, don't know that they should be aiming for low-dividend stocks. The fact that the pros have selected stocks with lower dividend yields suggests that the pros have taken advantage of this unfair rule.
In a related issue, the Journal’s use of the DJIA as a benchmark is also somewhat biased in the pros’ favor. The DJIA is a high-dividend, price-weighted index of seasoned companies. Since the pros are allowed to pick from thousands of stocks, a much more appropriate benchmark would be an equal-weighted index that includes stocks on all the exchanges. The Wilshire 5000 would be a better benchmark (though still not ideal, since it is value-weighted).
A dartboard scenario that is entertaining to imagine and might balance the odds would be to announce the eight stock picks, but not to disclose who made them until the contest was over. Imagine the confusion that would be created if the Journal just listed the eight picks with eight blurbs (four from the pros and four blurbs either made up or submitted by individuals). Another potentially humorous scenario that might test investors' reactions to the column would be to identify the individual darts. Perhaps the same dart is consistently picking the best performers. If dart #3, for instance, picks several winners in a row, perhaps investors would start buying dart #3's stock picks on publication day, in addition to the stock picks of hot pros.
Some of these scenarios are obviously offered in jest, but with more than 100 contests now in the books and a new millennium quickly approaching, perhaps it's a good time for a change. Since the academics suggest that a primary conclusion of the current contest is that naïve investors follow the pros’ picks, maybe changing the rules isn't such a bad idea. After all, we are talking about a contest that originated from what many people initially considered an absurd theory: that a primate could pick stocks as well as intelligent, well-educated, and highly compensated investment professionals.
Investor Home's review and links for Business Week's Inside Wall Street Column
Last update 1/16/2001. Copyright © 2001 Investor Home. All rights reserved. Disclaimer