Revisiting My 2018-19 MBB Computer Projections

The moral of the story: the Pac-12 can’t get its act together

Before the season started I laid out my computer projections for the final Adjusted Efficiency Margin (AEM, as used at kenpom.com) of each team in the Power 6 basketball conferences: the ACC, Big East, Big Ten, Big 12, Pac-12, and SEC. Now that the NCAA Tournament is over, let’s revisit those projections to see how I did. I tried to avoid as much technical jargon as possible, but if your eyes still glaze over slightly, I apologize. Let’s be real, though: this article is more about transparency than engrossing narrative.

Washington and the Pac-12

For the second straight year, my system greatly overestimated the strength of the Pac-12. 11 of the 12 teams in the conference under-performed my projections, with Colorado the lone exception. Washington and Oregon were neck and neck in my projections, with the Ducks coming out in 1st place at a projected mark of 21.28 to UW’s 20.37. The Huskies won the conference going away, but Oregon’s late surge propelled them to 1st place in the conference’s final AEM rankings, so while my mark for both teams was off, the order was ultimately correct.

After the 2 clear leaders at the top, I had USC, UCLA, and then Arizona in the next tier. All 3 of those teams were significantly worse than I projected: Arizona fell 8.4 below my projection, USC 10.4, and UCLA 11.2. The chemistry was toxic at all 3 programs, and UCLA obviously had the upheaval of their coach being fired midway through the season. Instead of occupying 3rd, 4th, and 5th, they ended up filling 6th, 7th, and 8th.

There was another tier clumped together with Arizona State, Utah, Stanford, Oregon State, and Colorado, and the Sun Devils were the clear leader of that group. I actually almost nailed ASU exactly, as they finished just 0.4 below my projection, but that was good for 3rd in the conference rather than the 6th I had them slotted for. Oregon State also came in higher in the actual standings even though their actual AEM was lower than projected. As mentioned earlier, Colorado was the only team in the conference to improve on my projection, as they wound up 4th in the final AEM rankings. I had Utah and Stanford virtually tied for 7th place, whereas they finished 9th and 10th, nearly tied once again. Utah did much better in the conference standings than their efficiency margins would have dictated.

Finally, Washington State and Cal brought up the rear as expected, although I thought California would narrowly edge out Washington State rather than finish substantially behind them. Regardless, both coaches were fired for their efforts, and it seems well deserved.

2018-19 Pac-12 Conference Projections vs. Actual Performance

| Team | Proj AEM | Actual AEM | Proj AEM Rank | Actual AEM Rank | Media Poll Rank |
| --- | --- | --- | --- | --- | --- |
| Oregon | 21.28 | 17.86 | 1 | 1 | 1 |
| Washington | 20.37 | 14.28 | 2 | 2 | 3 |
| USC | 18.69 | 8.25 | 3 | 6 | 5 |
| UCLA | 17.51 | 6.34 | 4 | 8 | 2 |
| Arizona | 15.71 | 7.35 | 5 | 7 | 4 |
| Arizona St. | 11.95 | 11.55 | 6 | 3 | 6 |
| Utah | 10.94 | 5.85 | 7 | 9 | 8 |
| Stanford | 10.94 | 5.47 | 8 | 10 | 9 |
| Oregon St. | 10.81 | 8.33 | 9 | 5 | 10 |
| Colorado | 9.55 | 10.75 | 10 | 4 | 7 |
| California | 4.65 | -6.84 | 11 | 12 | 11 |
| Washington St. | 3.37 | -3.79 | 12 | 11 | 12 |

While the teams themselves may have under-performed, my system did a pretty good job of determining who the best players in the conference would be. My top 8 projected players in the conference all made at least honorable mention all-conference, and the only 3 of my top 13 who didn’t each missed at least 10 games due to injury. Every player who made an all-conference team was someone I had projected in the top half of the conference.

So why did I shoot too high in my projections for the Huskies?

If you plug in each player’s actual minutes played and actual net points per possession, rather than the numbers carried over from the previous season, the team oddly enough did even better than I was projecting.
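
To make that concrete, here’s a minimal sketch that treats a team’s mark as a minutes-weighted blend of each player’s net rating. To be clear, the blend, the function, and every number below are hypothetical stand-ins rather than my actual formula; the point is only that swapping projected inputs for actual ones can move the mark up.

```python
# Hypothetical sketch: the team mark as a minutes-weighted blend of each
# player's net points per 100 possessions (all numbers invented).

def team_mark(players):
    """players: list of (minutes_share, net_pts_per_100) tuples."""
    total_share = sum(share for share, _ in players)
    return sum(share * net for share, net in players) / total_share

# Inputs carried over from the prior season...
projected = [(0.85, 6.0), (0.80, 5.5), (0.75, 4.0), (0.60, 2.0), (0.50, 1.5)]
# ...versus the inputs we'd use with hindsight (actual minutes/efficiency).
actual = [(0.90, 7.0), (0.85, 6.5), (0.70, 4.5), (0.55, 2.5), (0.50, 2.0)]

print(round(team_mark(projected), 2))  # 4.13
print(round(team_mark(actual), 2))     # 4.96 -- the hindsight mark comes out higher
```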

There are three potential flaws that led to the discrepancy. The first is that the multiplier applied to the UW seniors may have been too large. As 4-star seniors, Noah Dickerson and Matisse Thybulle got huge boosts to their baseline ratings that made them worth double a 4-star freshman. The data generally backs that up, since about half of 4-star freshmen never end up making a meaningful contribution, but in this case working with averages may have hurt.
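
As an illustration of how a class-based boost like that doubles a senior’s value, here’s a hedged sketch; the multipliers and star baselines are invented constants, not the ones my model actually uses.

```python
# Hypothetical class multipliers on a recruiting-based baseline rating.
# A 2.0 senior multiplier makes a 4-star senior worth exactly double a
# 4-star freshman, which is where the overshoot can creep in.

CLASS_MULTIPLIER = {"freshman": 1.0, "sophomore": 1.4, "junior": 1.7, "senior": 2.0}
STAR_BASELINE = {3: 2.0, 4: 3.5, 5: 5.5}  # net points per 100, invented values

def baseline_rating(stars, year):
    return STAR_BASELINE[stars] * CLASS_MULTIPLIER[year]

print(baseline_rating(4, "freshman"))  # 3.5
print(baseline_rating(4, "senior"))    # 7.0 -- double the freshman mark
```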

The second is that I lower the coaching factor for coaches with a limited amount of head coaching experience. You would think this would have worked against Coach Hopkins, but it actually helped him, since he received a negative rating from Year 1 (and will get a negative rating in Year 2 as well). That rating is based entirely on where a team was projected versus where it ended up, and both seasons the Huskies have failed to live up to my projections. The two options are that either my system is inflating Washington’s projection or Hopkins isn’t getting the most out of his talent. The eye test clearly indicates that Washington has been using its talent well, so my inclination is to think that UW is simply overrated in my system.
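
Here’s roughly what that feedback loop could look like in code. The damping scheme and every number are assumptions for illustration (the -6.1 is close to UW’s miss this year, 14.28 actual against 20.37 projected); the mechanics just show how inexperience softens a negative rating.

```python
# Hypothetical coaching factor: the average of past (actual - projected)
# AEM gaps, shrunk toward zero for coaches with few seasons of data.

def coaching_factor(gaps, full_weight_seasons=5):
    """gaps: one (actual AEM - projected AEM) value per season coached."""
    if not gaps:
        return 0.0
    raw = sum(gaps) / len(gaps)
    # Limited experience dampens the rating -- which cuts both ways,
    # since it also softens a negative rating like Hopkins's.
    weight = min(len(gaps) / full_weight_seasons, 1.0)
    return raw * weight

print(round(coaching_factor([-4.0]), 2))        # one bad miss, heavily dampened: -0.8
print(round(coaching_factor([-4.0, -6.1]), 2))  # two misses, still dampened: -2.02
```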

Which leads to the third potential flaw: the zone. I use Synergy’s player data to power the individual player ratings, taking into account net points created minus net points allowed on defense. In a zone it can be very difficult to determine which player to charge with a possession. That means fewer possessions get tied to an individual player as opposed to just “Team Defense,” and that boosts individual ratings, since being credited as the primary defender on any made basket hurts your score. One of my offseason projects will be to look at the scores for teams that run exclusively zone and find out whether I need to include a zone adjustment factor that knocks their projected scores down a little bit.
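
One form such an adjustment could take, purely as a sketch with invented numbers: shrink the defensive component of a player’s rating in proportion to how much of his team’s defense was logged as “Team Defense.”

```python
# Hypothetical zone adjustment: when most defensive possessions are logged
# to "Team Defense" instead of an individual, individual defenders look
# better than they are, so shrink the defensive component proportionally.

def adjusted_defense(def_rating, team_defense_share, penalty=0.5):
    """team_defense_share: fraction of defensive possessions credited to
    'Team Defense' rather than a specific defender (high for zone teams)."""
    return def_rating * (1 - penalty * team_defense_share)

# A man-to-man team attributes most possessions individually...
print(round(adjusted_defense(3.0, team_defense_share=0.15), 3))  # 2.775
# ...while an all-zone team like UW would take a bigger haircut.
print(round(adjusted_defense(3.0, team_defense_share=0.60), 3))  # 2.1
```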

Even though I overshot the actual performance, I ultimately got pretty darn close in my projection for UW’s postseason outcomes. I gave the Huskies a 38.5% chance of making the NCAA Tournament but getting eliminated before the Sweet 16, the single most likely outcome in my projections, which is ultimately what happened.

The Rest of the Country

If you’ve made it this far, congratulations. You now get a chance to think I’m not a complete hack. Looking only at the Pac-12 schools, my projection came within 5 of the final AEM for just 4 of 12 schools, or 33%. My hit rate was substantially higher for the rest of the country. Overall, my projection was within 2 for 16 schools (21.3%), within 2-5 for 25 schools (33.3%), within 5-10 for 23 schools (30.7%), and off by more than 10 for 11 schools (14.7%).
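
For transparency’s sake, the binning itself is simple; here’s a quick sketch where the error values are placeholders rather than my actual per-school gaps.

```python
# Sketch of the accuracy buckets: absolute projection error per school,
# binned into 0-2, 2-5, 5-10, and 10+ ranges (placeholder errors).

from collections import Counter

def bucket(error):
    error = abs(error)
    if error < 2:
        return "0-2"
    if error < 5:
        return "2-5"
    if error < 10:
        return "5-10"
    return "10+"

errors = [1.1, -0.4, 3.2, -4.8, 6.0, -8.4, 11.2, -13.0]  # placeholder gaps
counts = Counter(bucket(e) for e in errors)
for rng in ("0-2", "2-5", "5-10", "10+"):
    print(f"{rng}: {counts[rng]} schools ({100 * counts[rng] / len(errors):.1f}%)")
```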

Those numbers look slightly better if you adjust for playing-time discrepancies. Obviously, my projections can’t use actual minutes played from the get-go because the season hasn’t happened yet, but if I had known about injuries, suspensions, etc., it would have shifted 2 of the schools from the off-by-10+ category into the off-by-less-than-2 range.

There’s no real pattern to which schools lived up to my expectations, but these were the programs that finished right about where I expected: Arizona State, Baylor, Clemson, Colorado, Kansas State, Kentucky, Louisville, LSU, Marquette, Minnesota, Mississippi State, Ohio State, Oklahoma, TCU, Wisconsin, and Xavier.

It’s a lot easier to spot the trends in the ones where I missed by a ton. The national title game was played between Virginia and Texas Tech, coached by Tony Bennett and Chris Beard respectively. Those two coaches have the highest raw coaching scores of anyone in my system. But because those scores can sometimes be volatile, I cap the maximum possible coaching bonus/penalty at half of what a coach’s historical track record would suggest. However, it was already confirmed for Tony Bennett, and now seems true of Chris Beard, that you can simply expect them to earn the full bonus. Make that change, and instead of being off by about 13 for each, the misses drop to about 6.
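
A sketch of what that cap and the proposed fix might look like; the “proven” flag and the 10-point historical bonus are inventions for illustration, not values from my system.

```python
# Hypothetical coaching-bonus cap: by default a coach gets only half of
# his historical bonus/penalty; the proposed fix trusts a proven outlier
# (a Bennett or a Beard) with the full amount.

def applied_bonus(historical_bonus, proven=False, cap_fraction=0.5):
    return historical_bonus if proven else historical_bonus * cap_fraction

# Invented figure: a coach whose track record implies a 10-point bonus.
print(applied_bonus(10.0))               # capped at half: 5.0
print(applied_bonus(10.0, proven=True))  # trusted with the full 10.0
```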

The two biggest swings and misses, in opposite directions, were on Vanderbilt and Purdue. The Boilermakers lost 4 seniors off of a very good team, had just 1 senior on this year’s roster, and had only two 4-star players in total. They didn’t miss a beat, mostly due to Carsen Edwards’ brilliance, and it will result in a big boost to Matt Painter’s coaching rating for next year. Clearly, Purdue’s players were better than expected; they just hadn’t gotten the opportunity behind all of the veterans. There’s a chance this kind of negative bias seeps into my rating for Washington next season if Naz Carter, Jamal Bey, and/or Elijah Hardy make huge leaps.

Vanderbilt lost 5-star freshman PG Darius Garland within the season’s first few weeks, and accounting for that dropped my projection from 20.66 to 17.78. But they still bottomed out at a lowly 0.81. Losing your starting PG means a lot more in real life than on the computer, and they just never found a rhythm. The roster was also made up of a lot of fringe 4-stars who got credit for that talent level when they probably should have been rated a notch below.

***
You can follow me @UWDP_maxvroom for all your UW Men’s Basketball News and Notes