Wednesday, February 26, 2014

What The Numbers Say About Who Will Win At The Oscars

Using a statistical model to forecast this year’s big awards. And just maybe give you the boost to finally win that Oscar pool.

A large Oscar statue is seen in the Dolby Ballroom during the 86th Oscars Governors Ball press preview in Hollywood, Calif., Feb. 20. (Fred Prouser / Reuters)

One of the marks of a major American cultural-commercial event, as long as winners and losers are being declared, is that large numbers of people will be putting money on it. Where TV money flows — toward the Super Bowl, toward March Madness — gambling money tends to follow. Sometimes it even feels like betting is the only thing propping up otherwise-dying practices. How else do you explain Floyd Mayweather, competing decades past his sport's glory days, still pulling in record-setting purses and showing up on highest-paid athletes lists? Or how the Kentucky Derby leads SportsCenter all while the crowd at Churchill Downs seems like it's from the same era as when Secretariat ran? An event that can attract a good wager, more often than not, finds a way to stick around.

So perhaps it is no coincidence that the Academy Awards, with their associated office Oscar pools, draw audiences like no other non-football event.

Yet compared to forecasting something like the Final Four, the Academy Awards, particularly in the major categories that most people care about, are rather predictable. Unlike sports, our other favorite gambling outlet, the Oscars are not decided by some future and uncertain competition played out that night. Forecasting them is more like guessing the results of an election that has already taken place.

So even if a small group of voters is making a collection of highly subjective judgments, the results of those judgments are predictable as long as there are external indications of which way those voters are leaning. In politics, these indications are polls. For the Oscars, the indications are other award shows.

Lupita Nyong'o, a nominee for Best Supporting Actress, in a scene from 12 Years a Slave. (Jaap Butendijk / Fox Searchlight)

My method for generating the following forecasts was fairly straightforward. I gathered data going back to 1996 on the winners of each pre-Oscar award to determine its predictive power. Each category used its own set of relevant precursor awards, weighted by their historical performance in that category.

The most predictive awards in each category varied. For example, the top award at the Directors Guild Awards has been the strongest signal of success in the Best Picture category, matching the Oscar winner 78% of the time. For Best Director, the Critics Choice Movie Awards (CCMAs) were the most predictive at 78% as well. Each award was given a weighting according to this percentage, with one exception that I'll discuss later.
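
To make that concrete, here's a minimal sketch in Python of how this kind of weight can be computed: the share of past years in which a precursor award's winner went on to win the corresponding Oscar. The data layout and film names below are placeholders, not the actual dataset.

```python
# Minimal sketch: a precursor award's weight is the fraction of past years
# (1996 onward, in the real data) in which its winner matched the Oscar winner.
# The dictionaries and film names below are placeholders for illustration.

from typing import Dict


def historical_accuracy(precursor_winners: Dict[int, str],
                        oscar_winners: Dict[int, str]) -> float:
    """Share of overlapping years in which the precursor winner matched the Oscar winner."""
    years = precursor_winners.keys() & oscar_winners.keys()
    if not years:
        return 0.0
    matches = sum(precursor_winners[y] == oscar_winners[y] for y in years)
    return matches / len(years)


# Toy example: a guild-style award that matched the eventual Oscar winner in 3 of 4 years.
guild_award = {1: "Film A", 2: "Film B", 3: "Film C", 4: "Film D"}
oscar = {1: "Film A", 2: "Film B", 3: "Film C", 4: "Film E"}
print(historical_accuracy(guild_award, oscar))  # 0.75
```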

I generated forecasts based on how many of these precursor awards each nominee won and how reliable those awards have been at predicting the Oscar winner. Each sum was then scaled by how predictable the category was overall. This means that, all else being equal, I was more confident in my results in a category whose precursor awards were 60% accurate on average than in one whose awards were 50% accurate.
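
Here's a rough sketch of that scoring step, again with placeholder award names, weights, and winners rather than the real inputs; the exact scaling shown is just one plausible way to fold in a category's overall predictability.

```python
# Rough sketch of the scoring step: sum the accuracy weights of the precursor
# awards each nominee won, then temper the normalized scores by the category's
# average precursor accuracy (a stand-in for "how predictable the category is").
# The weights, award names, and films below are placeholders for illustration.

from typing import Dict, List


def score_nominees(award_weights: Dict[str, float],
                   award_winners: Dict[str, str],
                   nominees: List[str]) -> Dict[str, float]:
    scores = {nominee: 0.0 for nominee in nominees}
    for award, weight in award_weights.items():
        winner = award_winners.get(award)
        if winner in scores:
            scores[winner] += weight
    total = sum(scores.values())
    category_accuracy = sum(award_weights.values()) / len(award_weights)
    if total > 0:
        # Normalize within the category, then scale by its overall predictability.
        scores = {n: (s / total) * category_accuracy for n, s in scores.items()}
    return scores


# Toy Best Picture-style category with made-up weights and winners.
weights = {"Guild award": 0.78, "Televised award": 0.65, "Critics award": 0.45}
winners = {"Guild award": "Film A", "Televised award": "Film A", "Critics award": "Film B"}
print(score_nominees(weights, winners, ["Film A", "Film B", "Film C"]))
```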

The most important awards in my model were those given out by guild associations, whose voting bodies overlap the most with the Academy's. High-profile televised awards like the Golden Globes, the British Academy of Film and Television Arts Awards, and the surprisingly predictive CCMAs were also given larger shares. The impact of lower-profile shows, like those given out by critics associations, was minimal except in a few cases (the New York Film Critics Circle and Los Angeles Film Critics Association are significant in a few categories, like Best Actor).

It is possible to construct a model that adheres too closely to historical data and tells you nothing about the future (this is known as "backfitting" a model). But since my inputs are relatively standardized and I've minimized how much I subjectively tinkered with the model, past success is more meaningful. And in that regard my model performs very well, correctly predicting most categories over the past few years. My results accurately forecasting five of six awards using a similar model were published here last year, with the lone miss coming in the peculiar Ben Affleck-less Best Director category.
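
For the curious, here is one self-contained way to illustrate that kind of check against backfitting: a leave-one-year-out backtest, which re-derives each award's weight from the remaining years and asks whether the top-scored nominee matches the held-out year's actual winner. The toy data below is made up, and this is a sketch of the idea rather than the model's actual validation procedure.

```python
# Sketch of a leave-one-year-out backtest. For each held-out year, the award
# weights are re-derived from the other years only, and the nominee with the
# most weighted precursor wins is compared to that year's actual Oscar winner.
# The data layout and example values are illustrative placeholders.

from typing import Dict, List, Tuple

# year -> (precursor winners by award, nominees, actual Oscar winner)
YearRecord = Tuple[Dict[str, str], List[str], str]


def backtest(years: Dict[int, YearRecord]) -> float:
    hits = 0
    for held_out, (award_winners, nominees, actual) in years.items():
        training = {y: rec for y, rec in years.items() if y != held_out}
        # Weight each award by how often its winner matched the Oscar in training years.
        weights = {}
        for award in award_winners:
            matches = sum(aw.get(award) == oscar for aw, _, oscar in training.values())
            weights[award] = matches / len(training)
        # Score nominees by summing the weights of the awards they won.
        scores = {n: 0.0 for n in nominees}
        for award, winner in award_winners.items():
            if winner in scores:
                scores[winner] += weights[award]
        predicted = max(scores, key=scores.get)
        hits += (predicted == actual)
    return hits / len(years)


# Toy example with made-up films and awards.
toy_years = {
    1: ({"Guild": "Film A", "Televised": "Film A"}, ["Film A", "Film B"], "Film A"),
    2: ({"Guild": "Film C", "Televised": "Film D"}, ["Film C", "Film D"], "Film C"),
    3: ({"Guild": "Film E", "Televised": "Film F"}, ["Film E", "Film F"], "Film F"),
}
print(backtest(toy_years))  # share of held-out years predicted correctly
```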

This method produces a number of expected results, hewing close to conventional wisdom. But forecasts are as much about having the right perspective as they are about guessing the winner. It's important to know which nominees are true favorites and which are favorites by just a hair. Most of all, forecasts are about having a more realistic sense of what's most likely to happen.






