Commentary on 2007 Season

2007 - the 'upset' season - witnessed fierce competition for the national college football title right down to the wire, making it one of the most exciting seasons in recent memory from a spectator's standpoint. From a purely ratings perspective, the fact that no stand-out team with a tough schedule was able to dominate means that 2007 National Champion Louisiana State (12-2) has one of the lowest national champion ratings since the Washington Huskies back in 1984. In fact, 2006 Louisiana St. (11-2), which only ranked fourth in the QPRS standings last year (behind Florida, Ohio State, and Southern Cal), had an overall rating (380.123) and schedule strength (53.49) eerily close to those of this year's champion. Just for historical context, I'm also attaching my QPRS American College National Champions list, which goes back to 1940 and shows that, over the years, a team has usually needed an overall rating in the 400's to become champion. None of this year's teams broke into the QPRS Top 100 American College Football Teams Since 1940 list either.

The amazing upsets of this past year served to highlight just how off-target many of the pre-season polls or rankings could be. You obviously can't stop human nature, and everyone enjoys speculating and having their pre-season favorites. However, I have always been against polls or rating systems which use a subjective pre-season ranking order as their starting point. I think we all remember how, a few seasons ago, Auburn missed a chance to play in the national title game only because they started off too low in the pre-season polls. An arbitrary starting pecking order based on pre-season 'gut feelings' only serves to pollute or skew the year-long validity of any poll or ranking that incorporates it. Serious rankings should be based only on results which take place on the football field once the season is underway.

One thing which QPRS also does is 'self-correct' with regard to the difficulty of a team's opponents as the year progresses. Just as an example, when Oregon State beat California this year, it was facing a 5-0 opponent and got a nice bounce in the polls for defeating a highly-ranked, unbeaten team. By the time 'lowly' Stanford beat them near season's end, California had lost five times and the game went by almost unnoticed. Does Stanford deserve less credit for beating California in 2007 than Oregon State? It was the same California team - pretty much the same players, same coaches. What happened is that Oregon State was simply the first team chronologically to learn how to exploit California's vulnerabilities. Those flaws were there when Tennessee, Arizona, or Oregon played California earlier in the year, but those teams simply didn't get the job done. In some polls or rankings, Oregon State would get credit for beating a highly-ranked 5-0 California (100%) while Stanford would only get credit for beating a forgotten 6-6 (50%) California which by then had dropped well out of the Top 25. So in QPRS, at year's end, both Oregon State and Stanford (and all the other teams before, after and in-between) get the same credit for having beaten a California team with a 7-6 season record. Obviously, margins of victory will differ and may affect the ratings, but they all played against 2007 California.
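The end-of-season principle is simple enough to sketch in code. The toy example below is only an illustration of that idea, not the actual QPRS formula: the schedule data, the win_credit weighting, and the function names are all made up here. The point it demonstrates is that every win over a given opponent is credited against that opponent's final season record, so an October win and a December win over the same team count identically.

```python
# Minimal sketch of the 'self-correcting' idea: credit for a win is based on
# the opponent's FINAL season record, not its record on game day.
# The schedule and the win_credit weighting are hypothetical illustrations;
# the real QPRS formula is not reproduced here.

from collections import defaultdict

# (winner, loser) pairs for a toy schedule -- illustrative only.
games = [
    ("California", "Tennessee"),
    ("Oregon State", "California"),
    ("Stanford", "California"),
]

# Tally final season records once all games are in.
wins = defaultdict(int)
losses = defaultdict(int)
for winner, loser in games:
    wins[winner] += 1
    losses[loser] += 1

def win_credit(opponent: str) -> float:
    """Credit for beating `opponent`, based on its final winning percentage."""
    w, l = wins[opponent], losses[opponent]
    return w / (w + l) if (w + l) else 0.0

# Oregon State and Stanford receive identical credit for beating the same
# California team, no matter when in the season each game was played.
for team in ("Oregon State", "Stanford"):
    print(f"{team} credit for beating California: {win_credit('California'):.3f}")
```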


Clyde Berryman / hardbraking at hotmail dot com