Podcast

Predictive analytics to forecast the Oscars and your business results

Callahan’s senior business analyst, James Meyerhoffer-Kubalik, and longtime film critic and Oscar enthusiast, Eric Melin, built an algorithm and used predictive analysis to forecast which movie would have the best chance of winning “Best Picture” at the 2020 Oscars. Using a predictive model, they created a formula capable of predicting the winner with 86% accuracy.

Callahan uses the same predictive models to forecast business results. In this podcast, James and Eric discuss the model used to predict the Oscars, and how the same approach and strategic analysis can predict business outcomes and drive the marketing strategy for your brand.

Listen here:


(Subscribe on iTunes, Stitcher, Google Play, Google Podcasts, Pocket Casts, or your favorite podcast service. You can also ask Alexa or Siri to “play the Uncovering Aha! podcast.”)

Welcome to Callahan’s Uncovering Aha! podcast. We talk about a range of topics for marketing decision-makers, with a special focus on how to uncover insights in data to drive brand strategy and inspire creativity. Featuring James Meyerhoffer-Kubalik and Eric Melin.

James:
I’m James Meyerhoffer-Kubalik, Senior Business Analyst at Callahan.

Eric:
And I’m Eric Melin, Creative Director of First Person. And we are here to talk about the Oscars for the second year in a row. We’re going to do some amazing prediction and analysis courtesy of Mr. James right here, who has created an algorithm that kicked my ass last year, and it looks like it’s going to do the exact same thing this year.

James:
Oh breaking hearts again, Eric.

Eric:
What’s that? You’re going to break my heart?

James:
I’m going to break your heart.

Eric:
Yeah, yeah.

James:
I do that.

Eric:
So essentially, James, last year you worked on improving an older algorithm that you’d run for several years, and me being a film critic and a longtime Oscar enthusiast, I came to you with some ideas for some tweaks. You applied those tweaks, and we ended up with an algorithm that predicted within, I believe, what, 0.5% or something-

James:
Yeah it was-

Eric:
that Green Book was going to win-

James:
Yeah-

Eric:
the Oscar for Best Picture.

James:
Yeah, we said last year too, they should have held hands and skipped through the finish line, those two movies. Definitely. So, yeah, like you said, it originally started at Wichita State as a class project, and we just used IMDb ratings, IMDb categories, Golden Globe nominations, Golden Globe wins. And the model only had an adjusted R-squared of about 0.36. So about a year and a half ago, when I started and I met with you, we got together and talked about how we could improve this model. You gave me a lot of good ideas, and I think we’re better than Nate Silver, which, I looked and I don’t even know if he’s still putting his model out online or not.

James:
But yeah, based on our conversations, we got more variables that we found were statistically significant. We also broke up the years. I started with data from 1985 and worked my way forward, and then, based on that conversation with you, we cut those data sets appropriately based on changes to the voting body, so that we were able to get the most accurate model that is representational of today. Yep.

Eric:
Yeah. And it wasn’t just changes in the voting body, it was a very significant change in the way that the Academy had people picking Best Picture, which I’d like to explain a little bit now if I could.

James:
Absolutely.

Eric:
All right, so in 2009 the Best Picture field expanded from five to 10 nominees, and this was something that they had in place in the years 1936 to 1943, believe it or not. And this was a reaction to the fact that ratings were declining on TV and The Dark Knight was not nominated for Best Picture in 2008. So people thought, hey, if we expand that to 10, we will likely get more big Hollywood movies in there instead of just these small indie films that usually get in there, more people will tune in, and the Oscars will be a bigger deal. They did that for two years, and then in 2011 they made a really big change that’s been in place since then. They started requiring a minimum of 5% of first-place votes in order to receive a Best Picture nod, so now they’re doing what’s called a weighted or preferential ballot.

Eric:
And this ballot means that the people who are now voting, the closed body of the Academy of Motion Picture Arts and Sciences, which is really important, we always have to stress this, this is not a publicly voted-on award. This is a very closed body whose membership has changed in age and demographics over the years. But they started ranking them from first to 10th, and only since 2011 has it not been exactly five or 10. So every year the number of nominees can be different. I believe one year it was eight, but every other year it’s been nine. And this is because each movie has to get that minimum threshold of 5% of first-place votes in order to get a nomination. So this year, again, we have nine nominees.

James:
Mm-hmm (affirmative).

Eric:
So do you want to introduce the nominees for Best Picture this year in 2020?

James:
Sure. So the nominees that we worked the model off of were 1917, Once Upon a Time in Hollywood, The Irishman, which I believe was more of a Netflix production, Marriage Story, Little Women, Jojo Rabbit, Ford v Ferrari, Parasite, and Joker.

Eric:
Now the Netflix thing is really interesting, because last year we had Roma, which was my favorite film of the year and also an early favorite to win because of all the critics’ awards that it had won. And this was the reason that you threw out the box office variables, because Netflix doesn’t actually report box office. And this year we had The Irishman and Marriage Story, which are both Netflix films.

James:
Correct, correct. So we did find that it was statistically significant last year, and what we found out, it was kind of an obscurity thing. So the better a film did in terms of box office on average up to the Oscars, the worse it actually did according to the model. But like you said, once we had last year with Roma, and this year with The Irishman and Marriage Story, you almost have to get rid of that variable altogether.

Eric:
Yeah, I think a lot of algorithms are going to have to keep recalibrating themselves each year because of this. Netflix is increasing, you know, the amount of promotion that they’re putting into these movies. And honestly in the case of The Irishman, they were the only ones in Hollywood who would give Martin Scorsese the $200 million that he needed to de-age three main characters throughout like 40 or 50 years of their lives. And so Netflix is really, really dedicated to putting in the money that they need to win a freaking Oscar. And so this year is no exception.

James:
Yeah, no, you made a really good point there. So the thing in terms of confidence in the model, you know, as Netflix becomes more of a player, Amazon becomes more of a player, moving forward we will really just start to work off those years in which it was applicable. So like the model we have from 2011 to 2019, or current, it only really has like one year based on Roma, but we were still able to predict accurately. So going forward it should gain in strength as these things become more of a commonality in the future.

Eric:
Right. And we should also mention that since #OscarsSoWhite three years ago, there’s been a big push in The Academy to get different people and younger people from all over the world, not just old white men in Hollywood, to be part of The Academy. And so I think that changing membership is going to affect things as well. And it would be interesting, in the future, if you started weighting the more recent nominees a little bit more, or have you done that already?

James:
I haven’t.

Eric:
More recent winners.

James:
I haven’t, just because I would need more years of data. It gets complicated. We kind of did that in a sense when we cut the data, so that we don’t have that tail end from 1985 to 1995 weighing in. By cutting those years, we do have more instances from this timeframe, which would be more representational of that. Definitely. So-

Eric:
Yeah.

James:
So in a way we have, but I see what you’re saying. Yeah.

Eric:
Well, it’s interesting. I just don’t think a lot of people have thought about how many changes the Academy has made over the last decade. And so those old sets of data, you know, back in the days when Out of Africa used to win, and Gandhi and Chariots of Fire, that kind of thing is just not happening anymore. You know what I mean?

James:
Yeah.

Eric:
Especially in a year when Joker has 11 nominations. Superhero movie, supervillain movie, whatever you want to call it, or a character drama in disguise. This one has more nominations than any film in the field. You know, it is kind of funny to look back on it 10 years after The Dark Knight and go, oh well, you know, they made the change and it took them a decade to get Joker in there after his last appearance.

James:
It was a good but disturbing movie. I will say that. I’m no critic, but it was definitely a little disturbing. So, but…

Eric:
All right, well let’s talk about this then really quick because a lot of people don’t know how the weighted ballot works. And this is really, really interesting and I think a lot of Oscar voters don’t understand how it works either. And so I think that they’re moving their positions around based on, you know, trying to help a movie that they like over another film. But what they don’t understand is that if they don’t actually rank their films in order from one to nine, their favorite to their least favorite, they’re actually doing the other films a disservice.

Eric:
So just let me run through this really quick. When The Academy’s accountants first get the ballots, they tabulate how many number one votes every nominee has, then they eliminate the title that has the fewest across everyone’s ballots, right? So let’s say Joker is your number one movie, but when they look at all of the ballots, Joker is the least favorite. Joker is now removed as your number one movie, and your new number one movie is now, let’s say, Parasite. Okay, so now you are helping Parasite. But what if you just put Parasite in there, you know, not because you liked it, but because you were actually trying to give a big boost to something else? Well, Parasite is now your number one movie until Parasite is itself eliminated.

Eric:
So that process is repeated over and over until just two films remain, and then the film out of those two with the most number one votes is the winner. So some ballot’s number eight choice could actually contribute to the winner, because if the seven films above it get eliminated, all of a sudden their vote for, let’s say, Marriage Story counts as a number one vote. And so this is a really strange way to do this.
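The elimination process Eric walks through is, in effect, an instant-runoff count. Here is a minimal sketch in Python; the function name and film names are hypothetical, not the Academy’s actual tabulation procedure:

```python
from collections import Counter

def preferential_winner(ballots):
    """Academy-style preferential count as Eric describes it:
    repeatedly eliminate the film with the fewest first-place votes,
    promoting each ballot's next surviving choice, until two films
    remain; the one with more first-place votes wins.
    Assumes every ballot ranks every film."""
    remaining = {film for ballot in ballots for film in ballot}

    def first_place_counts():
        # Each ballot counts for its highest-ranked surviving film.
        # Counter returns 0 for films with no current first-place votes.
        return Counter(next(f for f in ballot if f in remaining)
                       for ballot in ballots)

    while len(remaining) > 2:
        counts = first_place_counts()
        remaining.discard(min(remaining, key=lambda f: counts[f]))

    counts = first_place_counts()
    return max(remaining, key=lambda f: counts[f])
```

For example, with four ballots ranking A, B, C, three ranking B, C, A, and two ranking C, B, A, film C is eliminated first and its voters’ second choice pushes B past A, exactly the “your lower-ranked choice can crown the winner” effect Eric describes.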

James:
Yeah.

Eric:
from a statistical standpoint. Tell me what it means to you.

James:
Well, from a statistical standpoint, in terms of my model, like you said, there’s the way they rank them and re-rank them. I think the interesting thing that goes along with this model is that there’s a pattern with the previous awards a movie may win, and by reading those patterns, statistically we’re able to see the weight of what those previous awards mean to the Oscars. It was able to say, you know, if this happens, then they’re more likely to win the Oscar. So in the model that we built, and I know we talked about this in great depth about a year ago, I just want to read out what these variables are and what their rank is, so everybody has a good understanding of them.

James:
So the top-weighted variable for Oscar’s Best Picture was the DGA award for outstanding director. The film whose director won that went on, more times than not, to win Oscar’s Best Picture. After that, it was the Critics’ Choice Award, based on your recommendations, and then the Producers Guild Award for Best Picture.

James:
And then this part, and I know we talked about your feelings last year on the Golden Globes, I think you’re still probably not too big of a fan of the Golden Globes. We had Golden Globe nominations, and the interesting thing there is, if you were nominated for the Golden Globes, you were more likely to win Oscar’s Best Picture. But if you win the Golden Globes, it actually reads as a negative effect in the model. We also had the domestic box office average from the time the film started running up until the Oscars, however many weeks it ran. And then finally we had the BAFTA Best Film, which had a negative impact. So whoever won that was least likely to win Oscar’s Best Picture.
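As a rough illustration of how positively and negatively weighted signals like these can combine into a single score per nominee, here is a sketch; the variable names, weights, and film names are made up for the example and are not Callahan’s actual model coefficients:

```python
# Illustrative signed weights for the precursor signals James lists.
# Positive signals raise a nominee's score, negative ones lower it.
# These numbers are invented for the sketch, not the real model.
WEIGHTS = {
    "dga_win": 3.0,           # DGA outstanding director: strongest predictor
    "critics_choice_win": 2.0,
    "pga_win": 1.5,
    "globe_nomination": 1.0,  # being nominated helps...
    "globe_win": -0.5,        # ...but winning reads slightly negative
    "bafta_win": -1.0,        # BAFTA Best Film winner tended not to win
}

def score(signals):
    """Sum the weighted precursor signals (0/1 indicators) for one nominee."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

# Two hypothetical nominees with different award histories.
nominees = {
    "Film A": {"dga_win": 1, "globe_nomination": 1, "globe_win": 1},
    "Film B": {"critics_choice_win": 1, "pga_win": 1, "globe_nomination": 1},
}
ranked = sorted(nominees, key=lambda n: score(nominees[n]), reverse=True)
```

In a fitted regression the weights would come out of the historical data rather than being hand-picked, but the ranking step works the same way.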

Eric:
Now the BAFTAs this year are not until a couple days before the Oscars.

James:
Correct.

Eric:
So that data we don’t have in the model this year, but I believe that as an Oscar prognosticator I can correctly predict that 1917 will win this award, because Sam Mendes is British and this is a very British film.

James:
Yeah.

Eric:
Even though it’s, you know, a World War I Epic or whatever. So if 1917 wins this like I think it will, will that bring 1917’s chances down of winning Best Picture?

James:
So actually I did a lot of reading of prognosticators ahead of time, and I went ahead and put that in the model as them winning. So I have these results either way. I ran it with 1917 as the BAFTA winner and without, and 19- Oh, I can’t, I don’t want to spoil things until you’re ready. But-

Eric:
Yeah well-

James:
That, that, that didn’t change the ranking of 1917 I will say that.

Eric:
Interesting.

James:
I was trying to think ahead, you know, because I looked, and I think the BAFTAs are on February 2nd, you know, so.

Eric:
Right, right, right. Okay. So looking at the DGA, the Critics’ Choice, the PGA, the Golden Globes, and the BAFTAs, how far back did you go? Knowing that since 2011 the model’s been cut, how far back did you go before that?

James:
We went all the way back to 1985.

Eric:
Okay.

James:
In our time machine.

Eric:
So I have some interesting things for you. I did, I looked up some stuff since 2011 right.

James:
Mm-hmm (affirmative).

Eric:
So if we’re using the stuff since then, the DGA has had four match-ups since 2011 and the Critics’ Choice has had five.

James:
Mm-hmm (affirmative).

Eric:
So maybe at this point, you know, we would have to see going forward, but if the DGA gets it wrong this year and the Critics’ Choice gets it right, then there’s a possibility that its weight is going to go up. Right?

James:
Right, yeah no-

Eric:
Because then we’re saying that award is more predictive.

James:
Right, yeah. And so we’ll try to rerun this every year. And like you said, I think last year we didn’t have the problem with the BAFTA, but this year we did. But yeah, you’re exactly right there. So,

Eric:
Okay. Okay. So here’s another fun thing. So, James, we’re going to reveal what movie James’ algorithm picked to win Best Picture this year at the Oscars. But before we do, I have to tell you that I am rooting for a film called Parasite. This is a South Korean film. It’s the first time that a South Korean movie has ever been nominated for Best International Feature Film, much less Best Picture. And it has won a metric crap ton of awards from local critics’ groups leading up to this, and had a historic win a couple of weeks ago at the Screen Actors Guild Awards.

Eric:
The Screen Actors Guild has an award called Best Ensemble, which is kind of their equivalent of Best Picture. And the Screen Actors Guild is just, you know, actors. Actors happen to be the biggest voting body of The Academy. So you can say, in that way, maybe they are predictive, even though they don’t have a direct corollary from Best Ensemble to Best Picture. Long story short, Parasite won Best Ensemble. It’s the first time in the history of SAG that a foreign-language film cast has won this award. And so everybody is thinking there’s a lot of momentum for Parasite going forward. So I asked James, I said, “Hey! Can we look at SAG? Can you consider adding that in and see if it’s statistically significant?”

James:
And I said “You Betcha”

Eric:
He did. He said, “you betcha.” I looked it up: since 2011, SAG’s Best Ensemble film has only matched up with Best Picture twice.

James:
Yep.

Eric:
James looked at it, and what did you come up with?

James:
It was not statistically significant, which is essentially saying the same thing. It didn’t match up with Best Picture enough times to be read out by the model.

Eric:
Yes. So as you can imagine, what I thought was some momentum headed Parasite’s way, my favorite film of the year, James has crushed my dreams again. And spoiler alert, Parasite did not come out on top in his predictive algorithm. So let’s go from the bottom to the top and tell people, from nine to one.

James:
Okay.

Eric:
You know, where your Best Picture model ranking is right now.

James:
Should I announce them like David Letterman?

James:
No? All right.

James:
Can I get a drum roll? For number nine we have Joker, and number eight we have Parasite, and I’m sorry about that, Eric.

Eric:
I think there’s still a chance.

James:
Okay. Hey,

Eric:
In my heart of hearts

James:
So you’re saying there’s a chance.

James:
Number seven, we had Ford v Ferrari. Number six, we had Jojo Rabbit. Number five, we had Little Women, which I did see with my wife. It was enjoyable. Number four was Marriage Story, number three was The Irishman, number two, Once Upon a Time, leaving-

Eric:
I’ll stop you right there.

James:
Okay.

Eric:
Once Upon a Time won the Golden Globe this year for comedy. And we have to remember the Golden Globes has two Best Pictures: they have comedy/musical, and then drama. So Once Upon a Time wins the comedy/musical award, and the winner of best drama at the Golden Globes this year is also the movie that you’re predicting to win Best Picture.

James:
Yep. Are you ready for number one?

Eric:
I am.

James:
All right. I need to see a smile on your face for this one. I am happy my model read out correctly. And number one is 1917, which is an amazing film in my opinion. But I’m not a critic.

Eric:
Yeah, yeah. It’s, it’s just fine. It’s no Dunkirk, but it’s just fine.

James:
I agree to disagree.

Eric:
So let’s talk about this. The percentage difference between 1917 and its runner-up, Once Upon a Time in Hollywood, what’s that?

James:
It was about 6%. The way the model read out is that 1917 will get 27.7% of first-place votes, and then Once Upon a Time is likely to get 21.8% of first-place votes, and then it goes on down the list to Joker at the end.

Eric:
Gotcha. Gotcha. And we should also mention that the DGA helped 1917: Sam Mendes, the British director of that film, won the DGA this year. 1917 was not even nominated for a SAG Award, and it won the drama Golden Globe. But the Critics’ Choice, which recently I would say is a little bit more predictive, Once Upon a Time in Hollywood won that. So-

James:
Right.

Eric:
I think Once Upon a Time in Hollywood still has to be considered by any version of somebody who’s voting with their heart, who’s extremely biased. And again, I want to point out, James’ algorithm: not biased. Me: totally biased. I think we still have to consider Once Upon a Time in Hollywood in the running here. But my heart tells me that Parasite has got to be number three in people’s hearts.

James:
I’ll take my data versus your heart. We should make a band called Data versus your heart.

Eric:
So let’s wrap this up. You know, 1917 is a way better movie than Green Book, and if it wins Best Picture, as a film critic I’ll be just fine with that. But the important thing here is that algorithms don’t look at the quality of films, algorithms don’t have opinions about films. Algorithms are simply looking at the data. And the reason that you do this is because all of this kind of thinking can be applied to people’s and companies’ marketing campaigns in terms of front-end strategy. Do you want to talk about that a little bit?

James:
Correct. So we use similar techniques in what I do here at Callahan. I do a lot of forecasting, meaning that I’ll take some measurement of someone’s business and predict out into the future what’s expected. So it really gives you a benchmark to measure your performance against. We also do other things with algorithms, like triggered algorithms to understand the impact that rain might have on, let’s say, a moss product: rain happens, two weeks go by, moss grows. We’re able to figure all that stuff out using statistical analysis and algorithms.
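The benchmark forecasting James mentions can be sketched under the simplest possible assumption, a straight-line trend fit by least squares; Callahan’s actual models are certainly richer than this, so treat the function below as an illustration only:

```python
def linear_forecast(history, periods_ahead):
    """Fit a least-squares trend line to a series of past values and
    project it forward -- a minimal stand-in for the kind of benchmark
    forecast James describes."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    # Ordinary least-squares slope and intercept for y over time index x.
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    # Project the fitted line past the end of the observed series.
    return [intercept + slope * (n - 1 + k)
            for k in range(1, periods_ahead + 1)]
```

For instance, monthly sales of 10, 12, 14, 16 project to 18 and 20 for the next two months; actual results above or below that line would then be the signal worth investigating.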

Eric:
Nice, nice. Yeah, I mean, honestly, I think this year it’s interesting that you focused on Best Picture, because when it comes to the Oscars, I believe that all four of the acting categories are already wrapped up. I think that Best Picture is probably still the most interesting one.

James:
Yeah.

Eric:
And like last year with Roma, we have a very popular foreign-language film that is right there in the mix. And I would like to make this last plug in this podcast to ask everybody to please try to see Parasite if you can. It actually comes out on Blu-ray, DVD, and digital services this week, so definitely worth checking out.

Eric:
All right, well, I guess, I guess that’s it. We’re wrapping this up, James, congratulations on another year of awesome predictive analysis.

James: Thank you!

Eric:
And I hope you, I hope you lose.

James:
Oh, that hurts. I appreciate it, Eric. Thanks for joining.

Eric:
Yeah, absolutely.

You’ve been listening to the Uncovering Aha! podcast. Callahan provides data-savvy strategy and inspired creativity for national consumer brands. Visit us at callahan.agency to learn more.