Strengths and Weaknesses of Papers in Moody’s Mega Math Challenge 2007
In an effort to enhance the educational experience of the students who participated in the Challenge, and in recognition of their time, effort, and high hopes of rising to the top of the heap of papers, here are some notes, from a judging perspective, on the papers that advanced to the second round of judging:
First, we saw many different kinds of approaches. There were two parts to the problem: picking the stocks to put into the portfolio, and then deciding how to allocate the money among the chosen stocks. Most teams took some function of the indicators presented and used it to decide which mix of stocks to choose. That function was most often a weighted sum of the indicator variables, with the weights chosen in a somewhat ad hoc fashion (equal weights, or weights picked to emphasize a particular indicator). A more sophisticated approach would have been to choose the weights through a statistical or regression analysis to find the set of indicators that best predicted some measure of performance, but this would have required more time and computing capability than the teams had available. Once the stocks were chosen for the portfolio, many teams then allocated the money in some sort of proportional fashion (proportional to an indicator or a composite indicator), as in the sketch below.
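As an illustration only, here is a minimal Python sketch of the kind of weighted-sum scoring and proportional allocation many teams described. The ticker symbols, indicator columns, and weights are hypothetical and are not taken from the contest problem.

```python
import numpy as np

# Hypothetical indicator table: one row per candidate stock.
# The columns (e.g., P/E ratio, earnings growth, dividend yield) are placeholders,
# not the actual indicators supplied in the 2007 problem statement.
indicators = {
    "AAA": [12.0, 0.15, 0.02],
    "BBB": [25.0, 0.30, 0.00],
    "CCC": [8.0,  0.05, 0.04],
    "DDD": [40.0, 0.45, 0.01],
}

# Ad hoc weights emphasizing particular indicators, as many teams did.
# The negative weight reflects the idea that a lower P/E is better.
weights = np.array([-0.02, 2.0, 10.0])

# Score each stock with a weighted sum of its indicators.
scores = {name: float(np.dot(weights, vals)) for name, vals in indicators.items()}

# Keep the top-scoring stocks for the portfolio...
chosen = sorted(scores, key=scores.get, reverse=True)[:3]

# ...and allocate the budget in proportion to the (shifted, positive) scores.
budget = 100_000.0
min_score = min(scores[name] for name in chosen)
shifted = {name: scores[name] - min_score + 1e-6 for name in chosen}
total = sum(shifted.values())
allocation = {name: budget * shifted[name] / total for name in chosen}

print(scores)
print(allocation)
```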
Recall the two aspects of the problem: choosing which stocks to place in the portfolio and how to allocate the money among those stocks. Some teams were strong on one aspect of the problem but weak on the other.
Recall too that this was a math modeling contest. Judges were looking for teams that proposed models, analyzed them, and tested them. Some of the proposed models were very sophisticated, and some rather simple. No matter how simple or sophisticated the model, judges expected to see a clear explanation of the model and an analysis or justification for why it was done that way.
No model was dismissed as incorrect as long as a well-reasoned and consistent approach was taken. However, some models fared less well in the final results. For example, models built on products and quotients of indicators tended to break down when one of the factors could be zero in the general case, as the brief sketch below illustrates.
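As a purely hypothetical illustration (the indicator names here are invented, not the ones given in the problem), the failure mode looks like this:

```python
# A quotient-based score breaks when the denominator can legitimately be zero;
# a dividend yield of 0 is common for growth stocks, so this is not a rare corner case.
def quotient_score(growth, dividend_yield):
    return growth / dividend_yield   # ZeroDivisionError (or inf) when yield == 0

# A product-based score has the mirror-image problem: one zero factor wipes out
# whatever the other indicators say about the stock.
def product_score(growth, dividend_yield):
    return growth * dividend_yield   # score is 0 whenever yield == 0
```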
Many teams failed to think about the testing aspect at all. A few proposed clever schemes that used some sort of Monte Carlo simulation based on past statistics to see how their methods and models would have fared on past data or on other sets of data (say, in other market sectors). Some teams also gave a lot of thought to how the indicators could be normalized and compared on a common scale, converting the data to z-scores, for example. A small sketch of both ideas follows.
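The following sketch, with entirely invented numbers, shows what z-score normalization and a crude Monte Carlo test of an allocation might look like; none of the data, indicators, or allocations come from the contest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw indicators: rows are stocks, columns are indicators measured
# in very different units (ratios, growth rates, dollars of market cap).
raw = np.array([
    [12.0, 0.15, 1.2e9],
    [25.0, 0.30, 4.0e8],
    [ 8.0, 0.05, 9.5e9],
    [40.0, 0.45, 2.1e8],
])

# Z-score each column so the indicators are unitless and comparable before
# being combined into a composite score.
z = (raw - raw.mean(axis=0)) / raw.std(axis=0)

# Crude Monte Carlo test of an allocation: resample stand-in past monthly
# returns with replacement and look at the spread of portfolio outcomes.
past_returns = rng.normal(loc=0.01, scale=0.05, size=(36, 4))  # stand-in data
allocation = np.array([0.4, 0.3, 0.2, 0.1])                    # fractions of the budget

outcomes = []
for _ in range(1_000):
    sample = past_returns[rng.integers(0, len(past_returns), size=12)]
    growth = np.prod(1.0 + sample @ allocation)   # twelve resampled months, compounded
    outcomes.append(growth)

print("median growth factor:", np.median(outcomes))
print("5th percentile:", np.percentile(outcomes, 5))
```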
Some teams created computer code to implement their ideas (such as Monte Carlo simulations). That code was often based on quite creative ideas, but some teams did not adequately explain in the text what the code was intended to do.
Finally, the summary of the work done was important as well. The ideal summary not only conveyed the results but also gave a brief, clear explanation of the approach taken and the methods used.