
Wall St. Journal: Oklahoma won 2003 BCS

BB73

For anybody who decides to get college football information from the Wall Street Journal, just look at this article. It talks about computer polls; I bolded the significant sentence. The Land Thieves will be glad to know they actually won the NC in the Sugar Bowl against LSU.

wsj.com

What does an internist have to do with college football rankings? A lot.
By ALLEN ST. JOHN
Staff Reporter of THE WALL STREET JOURNAL
August 29, 2005; Page R10

Every Saturday evening from now until the beginning of January, Peter Wolfe will follow the same ritual.

He'll sit down at his computer around six o'clock and start entering the scores from 320 college football games played that afternoon into a database. Seven hours later, around one in the morning, the 51-year-old internist at UCLA Medical Center will be done, and sometime on Sunday, after adding a few late West Coast scores, he'll push the run button. The computer will then spit out rankings for every Division I-A college-football team in America.

THE JOURNAL REPORT
See the complete Football report: http://online.wsj.com/page/0,,2_1175,00.html
While Dr. Wolfe is a fanatical football fan, he's not merely indulging his passion. Dr. Wolfe's rankings are included in the rankings of the Bowl Championship Series, or BCS, which will help determine which teams will play in which major bowl game, and ultimately which two teams will square off for the National Championship in the Rose Bowl on Jan. 4. His hobby makes him one of college football's true movers and shakers.

Special Teams

The BCS rankings comprise three elements. The first is the USA Today/Fox Coaches Poll, of 62 Division I-A college-football coaches. The second is the Harris Interactive College Football Poll, of 114 former coaches, players, administrators and media members. The Harris Poll replaces the AP Media Poll, which was dropped from the BCS formula following a testy cease-and-desist letter from the Associated Press during last year's off-season. (The AP warned: "And to the extent that the public has equated or comes to equate the AP Poll with the BCS rankings, the independent reputation of the AP Poll is lost.")

The third of the equally weighted elements is an average of six computer rankings by small or solo contributors, including Dr. Wolfe. The BCS includes the computer polls in its formula because "we felt there were some weaknesses in the human polls," such as regional biases and an overemphasis on preseason rankings, says BCS coordinator Kevin Weiberg. "The computer polls provide a more objective way of rating college football teams."

So how did a guy who spends his days diagnosing E. coli infections get involved in determining whether or not USC should rank ahead of Oklahoma?

Dr. Wolfe had been crunching numbers since his college days, and as his hobby evolved, he began publishing his results online. Then, in 2002, "one day I get a call at the office, and it's [BCS founder] Roy Kramer," Dr. Wolfe recalls. "He asked me if I wanted to participate, and I said yes."

Despite his official status, Dr. Wolfe's methods haven't changed much from the days when he was working strictly for fun. He still works alone, although he spends a bit of extra time checking his numbers.

"The pressure to get the scores right is intense," he says; the BCS distributed a total of more than $93 million in revenue to participating conferences last year. Conferences that sent a team to one of the top bowls received a minimum of $14 million.

Dr. Wolfe and the other computer analysts -- Richard Billingsley, Wes Colley, Kenneth Massey, Jeff Sagarin and the team of Jeff Anderson and Chris Hester -- don't see any of that cash. While he declines to reveal his exact compensation, Dr. Wolfe is quick to characterize it as "an honorarium." "I'm not going to retire on it," he laughs.

In layman's terms, Dr. Wolfe's methodology works like "six degrees of separation." The two main elements of all the computer rankings are the teams' records and their strength of schedule. In short, if two teams are undefeated, the team that played the tougher schedule should be ranked higher.

That's where "six degrees" comes in. To determine strength of schedule, Dr. Wolfe looks at how teams perform against common opponents. Given enough steps, Dr. Wolfe explains, he can link -- and thus rate -- both national champion USC and my alma mater, Division III University of Chicago, which doesn't have a single scholarship athlete.
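The "six degrees" idea described above can be sketched as a shortest-path search over a graph whose nodes are teams and whose edges are games played. This is a minimal illustration, not Dr. Wolfe's actual algorithm; the schedule and team chain below are hypothetical.

```python
from collections import deque

# Hypothetical mini-schedule: each tuple is (winner, loser).
# The chain of opponents is invented for illustration.
games = [
    ("USC", "Arizona"),
    ("Arizona", "New Mexico"),
    ("New Mexico", "North Dakota"),
    ("North Dakota", "Chicago"),
]

# Build an undirected "played against" graph.
graph = {}
for a, b in games:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def degrees_of_separation(start, goal):
    """Length of the shortest chain of games linking two teams, or None."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        team, dist = queue.popleft()
        if team == goal:
            return dist
        for opponent in graph.get(team, ()):
            if opponent not in seen:
                seen.add(opponent)
                queue.append((opponent, dist + 1))
    return None

print(degrees_of_separation("USC", "Chicago"))  # 4 games of separation
```

Once every Division I-A (and lower-division) team sits in one connected graph, results against shared or chained opponents give the ratings something to compare.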

This method is more systematic than the two human polls, which rely on the off-the-cuff opinions of experts and generally reflect all sorts of presuppositions about teams -- and sometimes politics. Last year, for instance, the Texas coach Mack Brown publicly campaigned for votes in a news conference and leapfrogged Cal for a bowl bid.

"The computer pays attention to all 117 Division I-A teams," says Dr. Wolfe. "And the computer doesn't know anything about the great tradition of X University." So a historically powerful school doesn't get any points from the computer analysts if it doesn't perform well.

As in competitive diving, when the computer analysts present their results, the BCS throws out the highest and lowest rankings for each school and averages the remaining rankings.

"I don't think there's a ton of disagreement between the computer rankings," says Jeff Anderson of the Anderson & Hester rankings. "But we don't walk in lock step either."

In the ranking that determined last year's bowl bids, all six computer polls had Auburn ranked third, and USC and Oklahoma ranked first and second in some order.

Why do disparities arise? Different methodologies. For instance, "we tend to focus more on conference strength," says Dr. Anderson, whose day job is as a political-science professor at the Air Force Academy.

Dr. Anderson notes that teams play two-thirds of their games in their own conference. And every conference ends up with an overall .500 record in these contests, since every victory by a conference team is offset by a loss by the in-conference opponent. Thus, it's difficult to judge the strength of a team simply by looking at its record against conference rivals.

For instance, the teams in the mighty Southeastern Conference and the Mid-American Conference will both end up with .500 in-conference records. "But there's a huge difference in the quality of the teams in those two conferences," Dr. Anderson says. The SEC has five teams in the USA Today preseason top 25, and the MAC has none. So, to determine how strong a conference is, Dr. Anderson places a special emphasis on how its teams fare in nonconference games.
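The .500 invariant Dr. Anderson describes holds no matter how the games come out, since every in-conference win is somebody else's in-conference loss. A quick simulation with a hypothetical eight-team round-robin conference:

```python
import random

teams = [f"Team{i}" for i in range(8)]  # hypothetical conference
wins = {t: 0 for t in teams}
losses = {t: 0 for t in teams}

# Round-robin: every pair plays once, winner chosen at random.
for i in range(len(teams)):
    for j in range(i + 1, len(teams)):
        winner, loser = random.sample([teams[i], teams[j]], 2)
        wins[winner] += 1
        losses[loser] += 1

total_wins = sum(wins.values())
total_losses = sum(losses.values())
print(total_wins == total_losses)  # True: the conference is exactly .500
```

Because the aggregate in-conference record carries no information, nonconference results are the only games that can distinguish one conference's strength from another's.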

Bowled Over

For all their influence, though, the computer analysts have seen their power diminish in the past couple of seasons, as the BCS tweaks its formula to answer criticism of its rankings.

The system was founded in 1998 to eliminate the controversy surrounding the national championship. At the time, there was no championship game, and the two top-ranked teams in Division I-A usually didn't face off in one of the bowls. Instead, the No. 1 team was determined by polls.

The BCS added a championship game -- which rotates among the BCS bowl games -- and devised a ranking formula to determine which teams would play for the title. From 1998 to 2003, the BCS used five elements in figuring its rankings -- the media poll, coaches' poll, computer poll, as well as a team's won-lost record and its strength of schedule, as computed independently by the BCS. But instead of simplifying things, the system created its own brand of furor.

In 2000, the University of Miami was ranked second by the voters, but Florida State was ranked higher by the computer analysts and had a better strength of schedule. So Florida State played -- and lost to -- the No. 1 team, Oklahoma, in the national-championship game. The same thing happened the following year, when Nebraska -- ranked fourth by the human polls -- went to the championship game instead of Oregon, which was No. 2 among the voters.

After that, the BCS tinkered with the formula, enjoining the computer analysts from using a team's margin of victory as a factor in their rankings. This removed the incentive for coaches to run up scores to an unsportsmanlike degree, and answered the criticism that, in the computer rankings, Nebraska's four blowout wins during the season had offset its 62-36 loss to Colorado. In response to the change, some of the computer analysts changed their formulas and two dropped out, while Dr. Wolfe joined the BCS fray.

But the controversies didn't stop. In 2003, USC finished first in the voter polls, but third in the overall BCS rankings thanks to the computer polls and strength-of-schedule rankings. Thus, the team went to the Rose Bowl, but didn't qualify for the national-championship game against LSU.

After USC won the Rose Bowl, the AP voters chose it as the No. 1 team in the nation. But, because of BCS rules, the coaches' poll was obliged to pick Oklahoma as No. 1, since it had won the official championship game. That embarrassing split decision prompted the BCS to eliminate strength of schedule and team record as separate elements in its formula.

The decision didn't entirely eliminate strength of schedule as a part of the system, since the computer analysts rely on it heavily. But now the human voters' opinion counts for two-thirds of the vote, not just one-half -- effectively giving them control of the rankings. When the two human polls disagree on the top two teams, though, the computer polls come into play.
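The weighting described above, with each of the three components counting equally so that the two human polls together carry two-thirds, can be sketched as a simple average. The function and the sample percentages are illustrative, not the official BCS computation:

```python
def bcs_score(harris_pct, coaches_pct, computer_pct):
    """Equal thirds: two human polls plus the computer average.

    Each argument is a hypothetical share of the maximum possible
    points in that component (0.0 to 1.0). Because two of the three
    equal components are human polls, voters control 2/3 of the score.
    """
    return (harris_pct + coaches_pct + computer_pct) / 3

# Made-up component scores for one team:
print(round(bcs_score(0.90, 0.96, 0.87), 4))  # 0.91
```

With weights like these, the computers can only swing the outcome when the two human polls are themselves split on the top two teams.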

While the computers ignore the letters on the jersey and the margin of victory on the scoreboard, human voters don't. The computers may dock teams for playing relatively easy schedules, but the voters will likely be impressed by the easy wins. And the voters are more likely to keep defending champions and preseason favorites at the top of the polls until they lose.

"I can tell the computer to behave as if Florida beat Florida A&M by one point," says Dr. Wolfe. "But the humans are going to know what that score is."

Still, it's hard for teams to take advantage of these changes, even though they set their own out-of-conference schedules. Steve Lopes, USC's senior associate athletic director, notes that his school scheduled next month's game against a relatively weak Arkansas squad back in 2001, and has already committed to games against Nebraska and Ohio State stretching out until 2009.

"We don't know how good Nebraska is going to be in 2006 and 2007. We don't know how good Ohio State is going to be in 2008 and 2009," Mr. Lopes says. "You look for programs with a national name, but you have no idea how it's going to impact your strength of schedule."

A Lightning Rod

With the recent tweaks in the system, Dr. Wolfe and his colleagues have been relegated to a watchdog role -- assuming, of course, that the Harris Poll reaches a consensus with the USA Today coaches' poll the way the AP poll usually did. Since the computer analysts rely on strength of schedule, they will continue to keep teams from scheduling nothing but patsies, and help parse the difference between an undefeated team in a weak conference and a one-loss team in a strong one.

Even with this somewhat diminished role, the analysts are quick to defend the system. "The BCS quickly became a whipping boy, but it's so much better than what we had before, where the best way to get to the top of the polls was to play nobody and rise through the standings by attrition," says Dr. Anderson. "You're far more likely to see the best two teams playing in the BCS championship game than in the finals of the NCAA basketball tournament."

But that message has been a hard sell, both to media members, who tend to support a national playoff system, and to irate rank-and-file fans frustrated by their team's ranking.

"Once I got an email from a Kansas State fan," Dr. Anderson says with a laugh. "He said that we had perpetuated as great an injustice as slavery."
 
AJHawkfan said:
Damn. That's bad. I wonder if they stole this article from the New York Times?
It's possible. Almost all New York City media members, living in the center of the universe in their minds, are pathetically inadequate in understanding and covering the best sport in this country, college football.
 