Every year I put together team predictions from my ranking system for the power six basketball conferences, and every year I like to look back and see how well they translated. If you're a regular reader of mine, you know the basis of my system, but for the uninitiated...
My algorithm takes a player's recruiting ranking out of high school and their years of college basketball experience, finds the average performance of a P6 player in that talent/experience bracket, and compares it to their actual performance. I use Synergy Sports points per possession data on offense and defense to determine overall performance. Each player on a roster gets a per-minute numerical grade based on how they have lived up to expectations for their talent/experience, and the more minutes they play, the more that grade contributes to the overall team prediction.
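The minutes-weighted aggregation described above can be sketched in a few lines. This is a hypothetical illustration, not the actual model: the field names, the grade values, and the roster are all made up, and the real system's grading of each talent/experience bracket is not reproduced here.

```python
# Illustrative sketch of a minutes-weighted team projection.
# "grade" stands in for the per-minute numerical grade each player gets;
# the values and roster below are invented for demonstration.

def team_projection(players):
    """Combine per-minute player grades into a single team number,
    weighted by each player's share of total minutes played."""
    total_minutes = sum(p["minutes"] for p in players)
    return sum(p["grade"] * p["minutes"] for p in players) / total_minutes

roster = [
    {"grade": 1.2, "minutes": 28},  # starter exceeding his bracket's average
    {"grade": 0.9, "minutes": 28},  # starter roughly meeting expectations
    {"grade": 0.4, "minutes": 12},  # reserve underperforming his bracket
]
print(round(team_projection(roster), 3))
```

The key property is that a bench player's bad grade hurts far less than a starter's, since the weight is his minutes share.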
The target for my ranking system is KenPom’s adjusted efficiency margin (aEM) which takes into account strength of schedule and per possession strength to predict future performance. A team that loses a bunch of close games can be better than their record reflects.
Trying to figure out a team’s performance in advance is harder than ever in the transfer portal era. One of the great flaws of the one and done model that Kentucky and Duke tried to enact was that the players didn’t have built up experience playing together to build chemistry. Nowadays just about every roster is supplemented with transfers and it’s harder than ever to judge when a team will gel and when it will completely fall apart.
When I make my pre-season predictions I do so assigning 5 starters to play 70% of the team’s minutes (28 mpg) and 5 bench players to play 30% of the team’s minutes (12 mpg). Most teams don’t play a full 10-man rotation all season but the point of doing that is to try to account for a team’s depth and the inevitable injury or two. Even if the plan is to play all 5 starters 35 minutes per game, if two players go down for a month or two you’re now in trouble.
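The 70/30 split above works out cleanly because a 40-minute game with 5 players on the floor gives 200 team minutes to distribute. A quick sanity check of that arithmetic (the constants are just the ones stated in the paragraph):

```python
# Verify the 70/30 preseason minutes split: 5 starters at 70% of team
# minutes and 5 bench players at 30%, out of 200 total team minutes
# (40-minute game x 5 players on the floor).
TEAM_MINUTES = 5 * 40                        # 200 minutes per game
starters = [0.70 * TEAM_MINUTES / 5] * 5     # 28.0 mpg each
bench = [0.30 * TEAM_MINUTES / 5] * 5        # 12.0 mpg each

assert sum(starters) + sum(bench) == TEAM_MINUTES
print(starters[0], bench[0])
```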
The graph below on the left compares the team’s actual aEM against my predicted aEM for them in the preseason using that 70/30 delineation. The graph on the right shows how my predictions would’ve looked if you had been able to tell me exactly how many minutes each player was going to play. (Apologies if you’re on mobile, log back in on a laptop or desktop to be able to see it more clearly).
You’ll notice a few things about those graphs. The first is that the two don’t look all that different. An r-squared value tells you how much of the variation in a chart like this is explained by my formula, where 1 would be a perfect match and 0 would be completely random. For the graph on the left the r-squared is about 0.30 and on the right it’s 0.32. So having the actual minutes totals was a little helpful, but it didn’t make all that much of a difference. For 54% of teams, knowing the actual minutes totals got me closer to the real-life finish, but that means for the other 46% it made my predictions worse.
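For anyone who wants to replicate the comparison, the r-squared calculation being described is the standard "1 minus residual variance over total variance" number. This is a minimal sketch with made-up aEM values, not the actual data behind the graphs:

```python
# Minimal r-squared sketch: share of the variance in actual aEM that the
# projections account for. The sample values below are invented, not the
# real projection data.

def r_squared(actual, predicted):
    """1 - SS_residual / SS_total for a set of projections."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((a - mean) ** 2 for a in actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return 1 - ss_res / ss_tot

actual = [12.1, 5.9, -8.5, 20.3, 1.1]     # hypothetical actual aEMs
predicted = [10.0, 8.8, -3.2, 18.6, 4.0]  # hypothetical projected aEMs
print(round(r_squared(actual, predicted), 2))
```

Running this once with the preseason 70/30 projections and once with actual-minutes projections is exactly the 0.30 vs. 0.32 comparison from the graphs.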
You may also notice the large grouping of teams in the bottom middle, which signifies that I thought they were going to be at least decent and they ended up being terrible. That grouping is primarily why I struggled so much this year. Last year my r-squared value was about 0.43, when there were only three teams that I thought would have an aEM above 10.0 that ended up having one below 5.0 (Nebraska, Georgia Tech, and Louisville). This year there were 10(!) such teams.
I’m not taking all of the blame on that one. This was a particularly terrible year for power conference teams in general. Last season there were only 3 teams that finished with an aEM below 0 (Georgia, Oregon State, and Georgetown) and OSU was the worst at -5.36. This year we saw 8 teams finish with an aEM below 0 and Cal and Louisville finished below -8.0. I’ll talk a little bit more about some of those individual teams later on but it just wasn’t reasonable for any projection system to think that we’d see that many teams get that bad that quickly.
Now that we’ve gone through the high-level trends, let’s look at the Huskies specifically. Coming into the season I had the Huskies pegged for an aEM of 12.14. That was with an expected starting lineup of senior guard Noah Williams, senior former 4-star PJ Fuller, senior former 4-star Jamal Bey, senior former 5-star Keion Brooks, and junior former 4-star Franck Kepnang. None of those players had been an outright star the year before, but all had played at least heavy reserve minutes on a P6 team, all were upperclassmen, and all but one came with a good recruiting pedigree.
The only reason I had Washington as low as 7th in the Pac-12 was Mike Hopkins’ track record of underachieving with his rosters. That adjustment knocked UW down from what was, on paper, the 4th best compilation of talent/experience in the Pac-12.
Then of course, reality hit. Franck Kepnang got beat out for the starting job by Braxton Meah, then tore his ACL; Noah Williams missed more than half the year due to injury; and true freshmen Keyon Menifield (and then Koren Johnson) eventually shoved PJ Fuller out of the rotation with their superior play. While it turns out that Meah and Menifield were both clearly above-average players, my rating system saw a sub-200 ranked true freshman and a 3-star junior who hardly saw the court last season.
That shift in minutes distribution meant that using the actual playing time made a huge difference in Washington’s outlook. The prediction with the real minutes shifted from 12.14 to 8.75 which was quite a bit closer to UW’s actual aEM finish of 5.92 and one of the biggest changes of any team.
The lesson, unfortunately, is that my guesses would always look better if I applied Hop’s entire underachievement penalty instead of the 50% I normally use. I generally only go part of the way because there’s always some random variation happening even when a coach consistently underperforms. But every year it seems Hop’s teams finish a little lower than I would’ve thought. File that away for when I go over the initial 2024 projections once the transfer portal dies down a little.
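The partial-penalty idea described above amounts to blending a fraction of a coach's historical over/underachievement into the raw roster projection. This is a hedged sketch under that assumption; the function name and the example numbers are illustrative, and the real penalty values aren't published here.

```python
# Sketch of applying only a fraction of a coach's historical
# over/underachievement to a raw roster projection. All numbers
# below are invented for illustration.

def adjust_projection(raw_aem, coach_delta, weight=0.5):
    """Blend `weight` of a coach's historical aEM over/underperformance
    (`coach_delta`, negative for underachievers) into the raw projection."""
    return raw_aem + weight * coach_delta

# e.g. a roster projecting +15 aEM under a coach who historically
# underachieves by about 6 points of aEM:
print(adjust_projection(15.0, -6.0))       # 50% of the penalty applied
print(adjust_projection(15.0, -6.0, 1.0))  # the full penalty applied
```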
UCLA was the favorite for most entering the season, but my model liked Arizona to finish 1st and, surprisingly, slotted Oregon ahead of the Bruins at 2nd as well. UCLA had the requisite star player in Jaime Jaquez, the 3rd ranked player in the conference, while Tyger Campbell was 7th. Jaylen Clark was ultimately underrated at 20th after excelling in a reserve role last year, but that was still a solid ranking. The big problem for the Bruins was their bench. David Singleton was a clear 6th man candidate, but spots 7-10 combined to play about 4 minutes per game for UCLA the previous year. Ultimately, Mick Cronin agreed that depth was lacking by narrowing things down to a 7-man rotation by midseason. That lack of depth came back to bite the Bruins in the tournament once Jaylen Clark and Adem Bona both suffered injuries.
Oregon did the opposite and dropped off quite a bit from my projections. However, the reasons are pretty clear to see. About 30% of the drop is accounted for by the playing time distribution, as the Ducks suffered a string of concurrent injuries early in the year which knocked them back quite a bit. There was also just a failure to live up to expectations from Will Richardson and 5-star Kel’el Ware. My system had Richardson as the #4 player in the conference in the preseason, but for the 2nd consecutive year he disappeared in big games, and most Duck fans were ready to drive him to the airport by his last game. Meanwhile, #10 overall recruit Kel’el Ware could barely crack the rotation by the end of the year and drove himself to the airport by entering the transfer portal.
The other 2 schools besides UCLA to significantly overperform my projections were USC and Utah. The USC portion was easy to see coming. My system was shockingly down on the Trojans, and if I could have manually inflated one team’s projections in the preseason, it would have been USC. Meanwhile, I had 3 of Utah’s starters among the bottom-ten in the conference. Lazar Stefanovic (who just transferred to UCLA) became a much better player as a sophomore, and Branden Carlson took another leap as a senior. Carlson helped anchor a defense that went from 189th to 37th in year 2 under Craig Smith, a leap that was pretty difficult to see coming.
The only other team to dramatically underperform my projections from the Pac-12 was Cal and we’ll get to them a little bit later.
Connecticut (Actual aEM: 29.86, Projected aEM: 18.6)
The Huskies won the national title and did so by smashing everyone’s expectations. I had them 2nd in the Big East, and while they technically finished just 4th in the conference standings, they demolished every non-Big East opponent they faced all season. Adama Sanogo and Tristen Newton were among the top-7 players in the Big East in my preseason rankings, but Jordan Hawkins improved from solid backup freshman to likely lottery pick while Donovan Clingan was possibly the best backup center in the country as a true freshman. Connecticut overperformed in each of the past 2 seasons, but with the best roster Dan Hurley had put together, he took it to another level this season.
Alabama (Actual aEM: 27.28, Projected aEM: 16.25)
The Crimson Tide blew it in the NCAA tournament but entered as the #1 overall seed. The biggest reason for that jump was freshman Brandon Miller becoming hands down the best freshman in the sport (on the court). The average 15th rated freshman (like Miller was) over the past decade finished with 104 net points per Synergy and Brandon Miller was at 467. That’ll do it. Fellow freshman Noah Clowney had even lower expectations as the 75th ranked recruit but went from that to the current 20th ranked draft prospect. Getting arguably the #1 and #10 freshmen in the country when neither were ranked in the top-ten out of HS is a good way to overperform projections.
Utah (Actual aEM: 11.46, Projected aEM: 1.78)
See Pac-12 section.
Marquette (Actual aEM: 22.38, Projected aEM: 13.02)
Shaka Smart made headlines by intentionally avoiding the transfer portal last cycle. Instead he saw massive internal improvement. Tyler Kolek had 37/28/81% shooting splits as a sophomore. Then this past season he won Big East PoTY with 51/40/80% splits despite a bigger workload. Kind of hard to see that one coming. He wasn’t the only one: 3 other players jumped from between 75-105 net points to 250+. Marquette didn’t add any D1 transfers and had 0 impact freshmen, but just about every returning player took a major leap at the same time, which is how you go from the 65th to the 7th best offense.
UCLA (Actual aEM: 27.29, Projected aEM: 18.01)
See Pac-12 Section.
Louisville (Actual aEM: -9.85, Projected aEM: 10.37)
The lack of star power was obvious even in the preseason. I didn’t have any Cardinals in my top-25 players in the ACC. But I also only had one rotation player in the bottom-35 bench players in the conference. That suggested a team with solid depth in Kenny Payne’s first season as a head coach. Instead, Louisville lost their first 9 games and didn’t gel in the slightest. Last year Louisville finished 127th. That was their first season outside the top-60 since 2000. Even if you thought they were going to be last in the ACC (and not everyone did) there’s no way you could have anticipated Louisville being this historically awful to sink to 290th.
LSU (Actual aEM: 1.14, Projected aEM: 19.3)
The Tigers made headlines last spring when at one point they had literally 0 scholarship players on the roster after everyone recruited by Will Wade chose to leave when he was fired. It looked like new coach Matt McMahon did about as good a job of building a roster on the fly as you could hope, though. He brought with him several of the best players from a really good Murray State team, plus added premier transfer Adam Miller. One of those Murray State imports (KJ Williams) lived up to expectations, but no one else had a good season as LSU collapsed from a 12-1 start to 14-19. Score one for the “continuity is king” crowd.
Florida State (Actual aEM: -2.41, Projected aEM: 15.71)
This one is similar to Louisville. The Seminoles had finished in the top-30 for 5 straight seasons before sliding to 105th last year. It was reasonable to think there would be a bounce back. Instead, FSU continued to sink all the way down to 205th. Leonard Hamilton loves height, and the Seminoles were the 2nd tallest team in the country, but they finished 239th in defense so it just didn’t translate across the board. If FSU doesn’t recover next year, it might be a sign that the game has passed Hamilton by at 74 years old.
California (Actual aEM: -8.54, Projected aEM: 7.48)
In recent years Cal has been bad, but that’s because Mark Fox hadn’t been able to recruit talent to Berkeley rather than because he’s an awful game coach. And if Cal had the roster they’d been imagining in the offseason, they might not have been historically bad. But everything went wrong for Cal. Factoring in actual playing time dropped the projection to 3.18, the biggest swing of any single team, but obviously it still didn’t go far enough. It is possible if not likely that only 3 of the 10 leaders in minutes for Cal last season will still be on a P6 roster this upcoming year (mostly due to transferring down).
Georgetown (Actual aEM: -3.63, Projected aEM: 11.44)
The Hoyas fall into the Louisville/Florida State category. In the first 4 seasons under head coach Patrick Ewing they never finished worse than 100th (bad but not awful). Then last year they fell to 175th and despite a complete portal retooling of the roster the slide kept going to 219th this season which resulted in Ewing’s dismissal. Similar to Louisville there was a lack of preseason star power with 0 players among the top-15 in the Big East in the projections and only 1 in the top-25. That held true throughout as only Primo Spears was in the top-18 in net points and that was mostly because he averaged 37 minutes per game and it’s a counting stat.