I first separate the "major" D-1A schools from the "minor" D-1A schools. The "minor" group consists primarily of the Big West, the MAC, and assorted independents (not Notre Dame or Navy). There is a third category: D-1AA schools that have games against D-1A schools. These are not rated, but they must be listed so that values can be assigned for those games.
Before the season, I assign a value to each team's schedule, determined by how many D-1A major, D-1A minor, and D-1AA teams that team plays. That schedule factor is constant and is used all season.
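The schedule-factor idea above can be sketched in Python. The class names and weights here are my own illustrative assumptions -- the post does not give the actual values:

```python
# Hypothetical sketch of the preseason schedule factor described above.
# The weights are illustrative assumptions, not the author's actual values.

MAJOR, MINOR, D1AA = "major", "minor", "1-AA"

# Assumed per-opponent weights: a full slate of majors scores 1.0.
WEIGHTS = {MAJOR: 1.0, MINOR: 0.6, D1AA: 0.2}

def schedule_factor(opponent_classes):
    """Average opponent weight over the schedule; fixed for the season."""
    return sum(WEIGHTS[c] for c in opponent_classes) / len(opponent_classes)

# A team playing 9 majors, 2 minors, and one 1-AA opponent is penalized
# relative to a team playing an all-major slate:
factor = schedule_factor([MAJOR] * 9 + [MINOR] * 2 + [D1AA])
```

The point is only that the factor is computed once, before any games are played, and never changes during the season.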
I use the basic win-loss record (filtered by strength of schedule) and margin of victory (or defeat), with no additional credit for blowouts. Now let me explain this "strength of schedule" factor. Since Team A is evaluated before the season on its schedule, it will be penalized if it does not play all major D-1A teams, and the credit for a win by Team B over Team A is adjusted downward accordingly. However, a win by Team C over Team B will NOT take into account Team A's schedule -- the adjustment goes only one opponent deep.
Wins over D-1AA teams are severely minimized in value.
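Putting the last two rules together, the credit for a single game might look like the sketch below. The margin cap, the 1-AA discount, and the scaling are all assumed numbers of mine, chosen only to show the shape of the calculation (capped margin, one-level opponent adjustment, discounted 1-AA wins):

```python
# Hypothetical sketch of one game's credit under the rules above.
# MARGIN_CAP and D1AA_WIN_DISCOUNT are illustrative assumptions.

MARGIN_CAP = 21           # assumed: beyond this, no extra blowout credit
D1AA_WIN_DISCOUNT = 0.25  # assumed: wins over 1-AA teams are worth little

def game_credit(won, margin, opp_schedule_factor, opp_is_1aa=False):
    """Credit for one game: win/loss sign, capped margin, scaled by the
    opponent's own preseason schedule factor (one level deep only)."""
    m = min(abs(margin), MARGIN_CAP) / MARGIN_CAP   # blowouts earn no extra
    credit = (1 if won else -1) * (1 + m) * opp_schedule_factor
    if won and opp_is_1aa:
        credit *= D1AA_WIN_DISCOUNT                 # 1-AA wins minimized
    return credit
```

Note that `opp_schedule_factor` is the opponent's fixed preseason value, so a win over a team with a soft schedule is automatically worth less.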
Let me explain why Sagarin (or anyone else) who rates D-1A and D-1AA teams on the same list is wrong: 85 scholarships vs. 65. There is no way that the best 1-AA team (UMass?) could play a lineup of 1-A schools and not get beaten up by season's end. It is totally apples and oranges. Sure, the starters on a top 1-AA team might hang with, say, the 40th-best 1-A team for ONE GAME -- but everyone gets injuries, and those teams have no depth.
Likewise, a school like Marshall or Miami (O) or Idaho can play ONE GAME well, but a season of tough opponents would take its toll. Marshall and Idaho won bowl games over better teams, and Miami beat UNC. But, a steady diet of that kind of opposition would wear down the MAC type of schools. After all, Marshall struggled to beat Wofford, a mediocre 1-AA team.
I hope that this gives you a general idea of what I am doing. Again, I wait until each 1-A team has played 4 games before I run the ratings; otherwise, there would be wild fluctuations based on strength of early schedules.
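The four-game rule is simple enough to state as a guard, assuming (my assumption) that you track games played per 1-A team:

```python
# Hypothetical guard for the 4-game rule described above.
def ready_to_rate(games_played):
    """Run the ratings only once every 1-A team has played 4 games."""
    return all(n >= 4 for n in games_played.values())
```

Until this returns true, no ratings are published, which avoids the wild early-season fluctuations mentioned above.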
Sagarin has ratings from the beginning. He must use last year's data, preseason expectations, or some other nonsense. Last year has NOTHING to do with THIS YEAR. Neither do prognostications.
Preseason polls do not have any relationship to reality, either.
Russel Henderson / firstname.lastname@example.org