
Rating teams based on All-time poll rankings

Bestbuck36;1075048; said:
Impossible! They'd never come that far north. :chompy:

May very well be true... Gettysburg is at around 39 degrees north latitude. Iowa City's at around 41 degrees north latitude.

The trial incursion up north only got up to Gettysburg for our southern brothers. They didn't fare too well. They were doing alright until they got characteristically cocky and tried to run up the score by bringing what WWH would call a safety blitz. History tells us that they got scorched on that play and lost the battle and the momentum in the war. So maybe they've adopted the "once bitten..." principle when it comes to engaging in competitive endeavors north of the Mason-Dixon line. :biggrin:

Edit: corrected first sentence... thanks to DBB for pointing out my reversal of long/lat...
 
Last edited:
Upvote 0
shetuck;1075244; said:
May very well be true... Gettysburg is at around 77 degrees north longitude. Iowa City's at around 91 degrees north.

The trial incursion up north only got up to Gettysburg for our southern brothers. They didn't fare too well. They were doing alright until they got characteristically cocky and tried to run up the score by bringing what WWH would call a safety blitz. History tells us that they got scorched on that play and lost the battle and the momentum in the war. So maybe they've adopted the "once bitten..." principle when it comes to engaging in competitive endeavors north of the Mason-Dixon line. :biggrin:

OK, I like the comparison of Pickett's Charge to a Safety Blitz.

But the problem is that you gave the Longitude of the places in question, which proves only that Iowa City is farther west of Greenwich than is Gettysburg.

But just to be clear:
  • Gettysburg is at 39.5 degrees North LATITUDE
    • ...and 77.14 degrees West LONGITUDE
  • Iowa City is at 41.4 degrees North LATITUDE
    • ...and 91.32 degrees West LONGITUDE
I'm not sure what North Longitude is, but if Iowa City were at 91 degrees North LATITUDE, it would be 1 degree North of the North Pole, wherever the hell that is.

Geography Police: Out...
 
Upvote 0
DaddyBigBucks;1075301; said:
OK, I like the comparison of Pickett's Charge to a Safety Blitz.

But the problem is that you gave the Longitude of the places in question, which proves only that Iowa City is farther west of Greenwich than is Gettysburg.

But just to be clear:
  • Gettysburg is at 39.5 degrees North LATITUDE
    • ...and 77.14 degrees West LONGITUDE
  • Iowa City is at 41.4 degrees North LATITUDE
    • ...and 91.32 degrees West LONGITUDE
I'm not sure what North Longitude is, but if Iowa City were at 91 degrees North LATITUDE, it would be 1 degree North of the North Pole, wherever the hell that is.

Geography Police: Out...

oops. my bad. you're right. i got 'em mixed up. i'll go back and fix it for posterity's sake...

nonetheless, my point was that Iowa City is farther north (farther away from the equator) than Gettysburg. :biggrin:

re Pickett's Charge... there's a whole thing about that and it would be interesting to see where WWH draws his admiration for the safety blitz in terms of military history.
 
Upvote 0
Here's a slightly different take on ranking teams, based on their final rankings in every AP poll since 1936 or final Coaches poll since 1950 (or the average of the two). Each team is given a percentage based on how many years it makes the final poll, and an average of all those rankings. Dividing the average ranking by that percentage yields the overall score, with the lower number being the better result.

##. Team..........Score...% in final poll...Avg final ranking

01. Oklahoma......10.56........68...........7.18
02. Notre Dame....11.32........72...........8.15
03. TSUN..........11.75........77...........9.05
04. Alabama.......12.06........66...........7.96
05. Ohio State....12.21........70...........8.55
06. Nebraska......13.85........62...........8.59
07. Southern Cal..15.00........62...........9.30
08. Texas.........15.48........62...........9.60
09. Tennessee.....16.32........63..........10.28
10. Penn State....17.90........59..........10.56
11. Florida St....18.26........50...........9.13
12. Miami, Fl.....21.16........45...........9.52
13. UCLA..........22.53........45..........10.14
14. LSU...........22.96........45..........10.33
15. Auburn........23.04........49..........11.29

Per the tOSU press release for the season opener, this was compiled by Dr. Robert Lemieux of McDaniel College in Westminster, Md.
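For anyone who wants to check the math, here's a minimal sketch of that score in Python (my reading of the description above; the helper name is made up, and this is not Dr. Lemieux's actual calculation):

```python
# Hypothetical helper illustrating the score described above:
# overall score = average final ranking / (percentage of seasons ranked),
# with the percentage used as a fraction. Lower scores are better.
def lemieux_score(pct_in_final_poll: float, avg_final_ranking: float) -> float:
    return avg_final_ranking / (pct_in_final_poll / 100.0)

# Example from the table: Oklahoma, ranked in 68% of final polls with an
# average finish of 7.18, scores 7.18 / 0.68 = 10.56.
print(round(lemieux_score(68, 7.18), 2))  # -> 10.56
```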
 
Upvote 0
This has been updated after the final polls for the 2008 season.

OK, here's how this was calculated. I took each team's ranking in every year-end poll since the AP started in 1936. Once 2 polls were involved, I always used the higher ranking. Sliding scale points were awarded for every year that a team ended up ranked, and 10 points were deducted for each losing season. The scale was determined before seeing where teams ended up.

For each year since 1936, a team earns points based on these criteria:

NC (#1) in either poll = 100 points
02 -> 05 = 65, 55, 50, 45 points, respectively
06 -> 10 = 40, 37, 34, 32, 30 points
11 -> 20 = 28, 26, 24, 22, 20, 18, 16, 14, 12, 10 points
21 -> 25 = 08, 06, 04, 03, 02 points
non-ranked, but .500 or above = 0 points
losing record for the year = minus 10 points
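For clarity, here's a minimal sketch of that per-season scale in Python (my reading of the list above, with made-up helper names; this is not the author's actual spreadsheet):

```python
# Points for each final ranking, per the sliding scale above.
POINTS_BY_RANK = {
    1: 100,
    2: 65, 3: 55, 4: 50, 5: 45,
    6: 40, 7: 37, 8: 34, 9: 32, 10: 30,
    11: 28, 12: 26, 13: 24, 14: 22, 15: 20,
    16: 18, 17: 16, 18: 14, 19: 12, 20: 10,
    21: 8, 22: 6, 23: 4, 24: 3, 25: 2,
}

def season_points(best_rank, had_losing_record=False):
    """best_rank: the team's higher final ranking across the polls, or None
    if unranked. An unranked .500-or-better season is worth 0 points; a
    losing season costs 10."""
    if best_rank is not None:
        return POINTS_BY_RANK[best_rank]
    return -10 if had_losing_record else 0

# A team's all-time total is just the sum of season_points over 1936-present.
```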

Here are the all-time totals, updated after the 2008 final polls:

01. 2353 - Oklahoma
02. 2255 - Notre Dame
03. 2191 - Ohio State
04. 2050 - Alabama
05. 1970 - Michigan (-10 for losing record in 2008)
06. 1942 - USC
07. 1724 - Texas
08. 1669 - Nebraska
09. 1523 - Tennessee (-10 for losing record in 2008)
10. 1454 - Penn State
11. 1171 - Miami
12. 1112 - Florida State
13. 1044 - LSU
14. 1023 - Georgia
15. 1016 - Florida
16. 0944 - Auburn (-10 for losing season in 2008)
17. 0879 - UCLA (-10 for losing season in 2008)
18. 0687 - Arkansas (-10 for losing season in 2008)
19. 0671 - Michigan State
20. 0579 - Washington (-10 for losing season in 2008)
21. 0531 - Georgia Tech
22. 0471 - Ole Miss
23. 0424 - Texas A&M (-10 for losing season in 2008)
24. 0409 - Pittsburgh
25. 0404 - Clemson

Other schools: Colorado (403), Minnesota (389), Syracuse (337), Army (317), Wisconsin (282), Iowa (248), Purdue (135), Illinois (23)

Since 2007, I have created separate ratings by adding National Championship credit for those earned prior to 1936, on a sliding scale based on 12-year periods.

1869-1899 - 10 points for each MNC (no top teams here, almost all Ivy League)
1900-1911 - 25 points for each MNC
1912-1923 - 50 points for each MNC
1924-1935 - 75 points for each MNC
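A minimal sketch of that pre-1936 credit in Python (assumed helper name; the 2/3 shares for disputed titles are explained in the notes below the lists):

```python
# Pre-1936 MNC credit on the sliding scale above.
def mnc_points(year: int, disputed: bool = False) -> int:
    if 1869 <= year <= 1899:
        base = 10   # almost all Ivy League in this era
    elif 1900 <= year <= 1911:
        base = 25
    elif 1912 <= year <= 1923:
        base = 50
    elif 1924 <= year <= 1935:
        base = 75
    else:
        raise ValueError("titles from 1936 on are covered by the poll scale")
    # Disputed titles get 2/3 credit (e.g., USC '28 and Bama '30 get 50, not 75).
    return base * 2 // 3 if disputed else base

print(mnc_points(1924))                 # Notre Dame's '24 title -> 75
print(mnc_points(1930, disputed=True))  # Bama's disputed '30 title -> 50
```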

Here are the all-time totals, updated with the pre-1936 MNC points:

01. 2480 - Notre Dame (MNCs in '24, '29, '30)
02. 2353 - Oklahoma
03. 2325 - Alabama (MNCs in '25, '26, '34, 2/3 for '30)
04. 2191 - Ohio State
05. 2170 - Michigan (MNCs in '01, '02, '23, '33)
06. 2142 - USC (MNCs in '31, '32, 2/3 for '28)
07. 1724 - Texas
08. 1669 - Nebraska
09. 1523 - Tennessee
10. 1454 - Penn State
11. 1171 - Miami
12. 1112 - Florida State
13. 1044 - LSU
14. 1023 - Georgia
15. 1016 - Florida
16. 0944 - Auburn
17. 0879 - UCLA
18. 0687 - Arkansas
19. 0671 - Michigan State
20. 0656 - Georgia Tech (MNCs in '17, '28)
21. 0579 - Washington
22. 0539 - Minnesota (MNCs in '34, '35)
23. 0534 - Pittsburgh (MNCs in '10, '16, '18)
24. 0474 - Texas A&M (MNC in '19)
25. 0471 - Ole Miss
26. 0404 - Clemson
27. 0403 - Colorado

Note - USC and Bama received 50 points, rather than 75, for disputed titles in '28 and '30, respectively

Note - Illinois, with MNCs in '14, '23, and '27, fails to make the top 30.
 
Upvote 0
This has been updated after the final polls for the 2009 season.

OK, here's how this was calculated. I took each team's ranking in every year-end poll since the AP started in 1936. Once 2 polls were involved, I always used the higher ranking. Sliding scale points were awarded for every year that a team ended up ranked, and 10 points were deducted for each losing season. The scale was determined before seeing where teams ended up.

For each year since 1936, a team earns points based on these criteria:

NC (#1) in either poll = 100 points
02 -> 05 = 65, 55, 50, 45 points, respectively
06 -> 10 = 40, 37, 34, 32, 30 points
11 -> 20 = 28, 26, 24, 22, 20, 18, 16, 14, 12, 10 points
21 -> 25 = 08, 06, 04, 03, 02 points
non-ranked, but .500 or above = 0 points
losing record for the year = minus 10 points

Here are the all-time totals, updated after the 2009 final polls:

01. 2353 - Oklahoma
02. 2255 - Notre Dame
03. 2236 - Ohio State
04. 2150 - Alabama
05. 1960 - Michigan (-10 for losing record in 2009)
06. 1952 - USC
07. 1789 - Texas
08. 1691 - Nebraska
09. 1523 - Tennessee
10. 1488 - Penn State
11. 1183 - Miami
12. 1112 - Florida State
13. 1071 - Florida
14. 1060 - LSU
15. 1023 - Georgia
16. 0944 - Auburn
17. 0879 - UCLA
18. 0687 - Arkansas
19. 0661 - Michigan State (-10 for losing season in 2009)
20. 0569 - Washington (-10 for losing season in 2009)
21. 0555 - Georgia Tech
22. 0481 - Ole Miss
23. 0429 - Pittsburgh
24. 0414 - Texas A&M (-10 for losing season in 2009)
25. 0407 - Clemson

Other schools: Colorado (393), Minnesota (379), Syracuse (327), Army (307), Wisconsin (300), Iowa (285), Purdue (125), Illinois (13)

Since 2007, I have created separate ratings by adding National Championship credit for those earned prior to 1936, on a sliding scale based on 12-year periods.

1869-1899 - 10 points for each MNC (no top teams here, almost all Ivy League)
1900-1911 - 25 points for each MNC
1912-1923 - 50 points for each MNC
1924-1935 - 75 points for each MNC

Here are the all-time totals, updated with the pre-1936 MNC points:

01. 2480 - Notre Dame (MNCs in '24, '29, '30)
02. 2425 - Alabama (MNCs in '25, '26, '34, 2/3 for '30)
03. 2353 - Oklahoma
04. 2236 - Ohio State
05. 2160 - Michigan (MNCs in '01, '02, '23, '33)
06. 2152 - USC (MNCs in '31, '32, 2/3 for '28)
07. 1789 - Texas
08. 1691 - Nebraska
09. 1523 - Tennessee
10. 1488 - Penn State
11. 1183 - Miami
12. 1112 - Florida State
13. 1071 - Florida
14. 1060 - LSU
15. 1023 - Georgia
16. 0944 - Auburn
17. 0879 - UCLA
18. 0687 - Arkansas
19. 0680 - Georgia Tech (MNCs in '17, '28)
20. 0661 - Michigan State
21. 0569 - Washington
22. 0554 - Pittsburgh (MNCs in '10, '16, '18)
23. 0529 - Minnesota (MNCs in '34, '35)
24. 0481 - Ole Miss
25. 0464 - Texas A&M (MNC in '19)
26. 0407 - Clemson
27. 0393 - Colorado

Note - USC and Bama received 50 points, rather than 75, for disputed titles in '28 and '30, respectively

Note - Illinois, with MNCs in '14, '23, and '27, fails to make the top 30.
 
Upvote 0
Seems incredibly arbitrary to me. All it really tells you is relative perception. You could simply reverse-rank the polls per season (i.e., #1 overall gets 25 points, #25 overall gets 1 point). This has the benefit of getting rid of your biases in the point system, though either way (whether it's yours or the AP's) you can't get rid of the bias. Also, why do you go with the higher of multiple polls? Why not just use the AP's, so you have a common set of standards and criteria? Without some restrictions, this is almost telling us nothing.
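A one-line sketch of that reverse-rank alternative (hypothetical helper name; the 25-to-1 scale is exactly as described):

```python
# Reverse-rank points: #1 overall -> 25 points, #25 overall -> 1 point.
def reverse_rank_points(rank: int) -> int:
    return 26 - rank  # linear scale, no subjective weighting

print(reverse_rank_points(1), reverse_rank_points(25))  # 25 1
```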
 
Upvote 0
Diego-Bucks;1720293; said:
Seems incredibly arbitrary to me. All it really tells you is relative perception. You could simply reverse-rank the polls per season (i.e., #1 overall gets 25 points, #25 overall gets 1 point). This has the benefit of getting rid of your biases in the point system, though either way (whether it's yours or the AP's) you can't get rid of the bias. Also, why do you go with the higher of multiple polls? Why not just use the AP's, so you have a common set of standards and criteria? Without some restrictions, this is almost telling us nothing.

A cumulative scoring system based on teams' best poll finishes through most of college football's history tells us nothing? I'm not sure whether you're saying polls or championships or tradition are meaningless, but either way I disagree. This is at least a very interesting method of rating programs over time, and I don't think the resulting rankings look all that arbitrary.
 
Upvote 0
Diego-Bucks;1720293; said:
Seems incredibly arbitrary to me. All it really tells you is relative perception. You could simply reverse-rank the polls per season (i.e., #1 overall gets 25 points, #25 overall gets 1 point). This has the benefit of getting rid of your biases in the point system, though either way (whether it's yours or the AP's) you can't get rid of the bias. Also, why do you go with the higher of multiple polls? Why not just use the AP's, so you have a common set of standards and criteria? Without some restrictions, this is almost telling us nothing.

I compiled this thinking that it would be a better way of rating teams all-time than just using total wins or the number of national championships, which is what most all-time listings utilize. My thought was that teams would earn some credit based on their final ranking each year they were ranked, and that the final poll ranking was a better indicator of performance than a simple won-loss record, since the poll voters factor in schedule strength. I created my point values in the belief that there's a much greater difference in prestige between being ranked #1 vs. #2 than there is between #9 and #10, or #24 and #25.

I added in the 10-point deduction for a losing season in order to penalize teams for truly bad seasons more than those who just missed being ranked.

I used the higher ranking in order to avoid controversy, such as not giving a team full credit in a year in which they won a National Title in 1 poll but not the other. I also believed using the higher poll rating was a method of removing the bias of an individual poll.

I'm deeply sorry that you find my efforts of such little value. If you come up with another method, and take the time to compile the data, I'll offer my comments.
 
Upvote 0
I compiled this thinking that it would be a better way of rating teams all-time than just using total wins or the number of national championships, which is what most all-time listings utilize. My thought was that teams would earn some credit based on their final ranking each year they were ranked, and that the final poll ranking was a better indicator of performance than a simple won-loss record, since the poll voters factor in schedule strength. I created my point values in the belief that there's a much greater difference in prestige between being ranked #1 vs. #2 than there is between #9 and #10, or #24 and #25.

I added in the 10-point deduction for a losing season in order to penalize teams for truly bad seasons more than those who just missed being ranked.

I used the higher ranking in order to avoid controversy, such as not giving a team full credit in a year in which they won a National Title in 1 poll but not the other. I also believed using the higher poll rating was a method of removing the bias of an individual poll.

I'm deeply sorry that you find my efforts of such little value. If you come up with another method, and take the time to compile the data, I'll offer my comments.
Certainly I didn't mean to offend; your work is most definitely exhaustive in what it's trying to do. I simply see a few problems with your formula that make it more harmful than helpful to your goal of trying to rate teams based on all-time rank.

The first is that you are basing the points on a subjective concept (a ranking, and the perception of that rank's relative worth) without considering that the strength or value of a rank is largely relative to the year (i.e., your subjective point values don't account for each individual season producing its own subjective value). Combine this with the sliding point scale, and it becomes beneficial for teams to be #1 in a down year while punishing teams that are highly competitive in a brutal year. A #1 ranking achieved by going 14-0, beating another team that was 13-0, and having the other top-ten teams as a clear-cut second tier will be much more "valuable" than a #1 ranking in another year. In a recent context, '05 Texas at #1 might be considered more valuable than '07 LSU at #1, because of who Texas beat that season versus LSU's just winning a war of attrition in 2007. Likewise, being the #8 team in 2007 (Kansas, a 12-1 Orange Bowl winner) might be more valuable than being #8 in 2002 (Iowa, which had the benefit of playing many weak teams).

Now you would be rewarding a stronger perception with more points, which is kind of what you are going for. To make this statistically feasible, why not deduct points from teams based on losses? So the #1 team in an undefeated season gets the full 100 points, but the #1 team in a two-loss season gets... well, less than that. You state that the differences in prestige are greater at the top of the rankings, and I agree. But there are also differences in prestige between one great #1 team and a more controversial, weaker #1 team from a different season.
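A possible reading of that suggestion in Python (the size of the per-loss deduction is left open above, so the 10-point figure here is purely an illustrative guess):

```python
# Hypothetical loss adjustment: start from the sliding-scale points for the
# final ranking, then subtract an assumed penalty per loss (floored at zero).
# The per-loss penalty is an illustration, not part of the original suggestion.
def adjusted_points(rank_points: int, losses: int, per_loss_penalty: int = 10) -> int:
    return max(rank_points - losses * per_loss_penalty, 0)

print(adjusted_points(100, 0))  # undefeated #1 keeps the full 100
print(adjusted_points(100, 2))  # a two-loss #1 gets 80 under this guess
```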

Also, why are there points for gaining a national title? When you reward points for a #1 final ranking on your sliding point scale, you are already rewarding points based on the final rank. The national-title points are a double reward, and by including them you aren't rating teams based on all-time poll rankings alone.

I suppose what it comes down to is that you are objective in rewarding points on a fixed sliding scale for historical ranks, but then subjective in including other factors like losing seasons and national titles. Also, by using only one poll (whichever poll you agree with or feel has the highest criteria for selection), you avoid rewarding two teams for the same ranking, while also fighting against the arguably worse biases of other polls. I'm only trying to point out that the analysis is inconsistent in being objective at times and subjective at others. This works against you a bit, in that it can act contrary to your goal. By picking one method of analysis, you will more easily quantify what you are trying to look at than with your hybrid approach.

I don't think that your analysis is valueless, but it can be greatly improved on a subject that is not easily analyzed in any historically significant way. I must applaud your efforts while feeling compelled to offer my input on improvements.
 
Upvote 0
Diego-Bucks;1720621; said:
I'm only trying to point out that the analysis is inconsistent in being objective at times and subjective at others. This works against you a bit, in that it can act contrary to your goal. By picking one method of analysis, you will more easily quantify what you are trying to look at than with your hybrid approach.

The point system he uses is completely arbitrary, but his results are an objective measure based on the system he created. And his ranking system is infinitely more informative than any common measure such as all-time wins or total number of championships.

Diego-Bucks;1720621; said:
The first is that you are basing the points on a subjective concept (a ranking, and the perception of that rank's relative worth) without considering that the strength or value of a rank is largely relative to the year (i.e., your subjective point values don't account for each individual season producing its own subjective value). Combine this with the sliding point scale, and it becomes beneficial for teams to be #1 in a down year while punishing teams that are highly competitive in a brutal year. A #1 ranking achieved by going 14-0, beating another team that was 13-0, and having the other top-ten teams as a clear-cut second tier will be much more "valuable" than a #1 ranking in another year. In a recent context, '05 Texas at #1 might be considered more valuable than '07 LSU at #1, because of who Texas beat that season versus LSU's just winning a war of attrition in 2007. Likewise, being the #8 team in 2007 (Kansas, a 12-1 Orange Bowl winner) might be more valuable than being #8 in 2002 (Iowa, which had the benefit of playing many weak teams).

Now you would be rewarding a stronger perception with more points, which is kind of what you are going for. To make this statistically feasible, why not deduct points from teams based on losses? So the #1 team in an undefeated season gets the full 100 points, but the #1 team in a two-loss season gets... well, less than that. You state that the differences in prestige are greater at the top of the rankings, and I agree. But there are also differences in prestige between one great #1 team and a more controversial, weaker #1 team from a different season.

College football rankings from any given year are based on individual voters' perception of the relative strength of the teams. Trying to compare teams' strength between years would be insane. Maybe in 2002 every team was just so much worse than in 2007, and Miami's and Ohio State's records are just a reflection of that, rather than of their being far superior to 2007 LSU. Still, even if a #6-ranked team one year is stronger than a #6-ranked team the following year, the variance will average out over the course of 75 years. Adding metrics makes the system harder to understand and more arbitrary, and takes away from its usefulness. Taking away points for losses would also punish teams that play a harder schedule or are in a harder conference.

Diego-Bucks;1720621; said:
Also, why are there points for gaining a national title? When you reward points for a #1 final ranking on your sliding point scale, you are already rewarding points based on the final rank. The national-title points are a double reward, and by including them you aren't rating teams based on all-time poll rankings alone.

Not true. He gives 100 points for 1st and 65 for 2nd; the national-title premium is built into the ranking scale itself, not awarded on top of it.

Really, no system is perfect. All rankings will be arbitrary and based on the subjectivity of the polls. So, a ranking system will be judged based on its usefulness. I think your suggestions take away from the simplicity of the ranking system and add other problems.

These rankings are the most useful/best measure of programs all-time strength. The results are interesting and I appreciate your work BB.
 
Upvote 0