I've been trying to understand the logic of the FPI, but it's just not possible.
Good example.
The description claims the rankings are based on where they expect teams to finish the rest of the season, per whatever wonky-ass math they decide to throw out.
Iowa State is ranked ahead of Baylor.
Baylor has 0 losses; Iowa State has 4.
Baylor beat Iowa State head to head.
They project Baylor to finish with 11 wins and Iowa State with 7.
Yet according to their rankings Iowa State is the "better team" the rest of the season.
They're obviously using some in-house machine learning. They even talk about other people's algorithms at times.
Nominally, I'd assume this was supervised. But it's such a cluster, it almost has an unsupervised 'learning to play a video game by pressing random buttons' feel to it.
But who has any idea what their 'factors' are? Given the stratification of conferences, conference and/or team identity must be one. That is to say, the algorithm knows the CFP/AP/Coaches polls over-rank certain teams and treats that as emblematic of reality... i.e., an objective fact rather than a subjective perception.
They have several versions of the same thing too... i.e., trying to predict how the CFP will vote. Even though the voters change every year... and thus the dynamics around edge cases are inherently not a constant...
In short, I think it's a highly complex nonsense generator with some implicitly built-in business biases to skew the numbers.
It's fuck Notre Dame week here.
Just this week? Need to step that hate game up, bruh.
I love how they believe Clemson's remaining schedule of Wake Forest Gump and South ("I just got beat by App State") Carolina is tougher than OSU with Penn State and Michigan...
It's a simple least-squares model, based on what he's explained to me. When I've called him out on it, he says it 'does what it is supposed to do' and then references squared error. That's when I went after him on his choice of loss functions.
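To make the loss-function complaint concrete, here's a toy illustration (my own construction, not anything from FPI's actual code): under squared error the best single-number predictor is the mean, which one blowout game can drag around, while under absolute error it's the median, which shrugs the blowout off.

```python
# Toy illustration of why the choice of loss function matters.
# Squared error -> fit is the mean (sensitive to outliers).
# Absolute error -> fit is the median (robust to outliers).

def fit_constant(values, loss="squared"):
    """Best constant predictor of `values` under the given loss."""
    vals = sorted(values)
    if loss == "squared":
        return sum(vals) / len(vals)          # minimizes sum of squared errors
    mid = len(vals) // 2                       # minimizes sum of absolute errors
    return vals[mid] if len(vals) % 2 else (vals[mid - 1] + vals[mid]) / 2

# Point margins for a hypothetical team: four close games, one 40-point blowout.
margins = [3, -2, 4, 1, 40]
print(fit_constant(margins, "squared"))   # 9.2 -- pulled way up by the blowout
print(fit_constant(margins, "absolute"))  # 3  -- the blowout barely registers
```

If a rating system fits margins with squared error, one garbage-time blowout can inflate a team's rating for weeks; that's the kind of behavior a different loss would damp.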
Anyhow, at a minimum, for early-season game predictions to be of any value, he should be considering a Bayesian model where the prior is based on things like returning players, their stats, attendance at road games, and other similar things. Picking the prior should not be overly difficult, as someone in that position has ample historical data to work with and can use a multitude of features from a prior year to predict the following year.
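A minimal sketch of that idea, assuming a conjugate normal-normal setup (numbers and the `posterior` helper are invented for illustration): the preseason prior on a team's rating comes from last year plus returning-production features, and each observed game margin shifts the posterior, with the prior dominating while the sample is small.

```python
# Conjugate normal-normal Bayesian update: normal prior on a team's
# rating, normally distributed game margins -> normal posterior.
# Precisions (inverse variances) add; the posterior mean is a
# precision-weighted blend of prior mean and observed data mean.

def posterior(prior_mean, prior_var, observations, obs_var):
    """Return (posterior_mean, posterior_var) after seeing `observations`."""
    n = len(observations)
    if n == 0:
        return prior_mean, prior_var
    obs_mean = sum(observations) / n
    precision = 1 / prior_var + n / obs_var
    post_var = 1 / precision
    post_mean = post_var * (prior_mean / prior_var + n * obs_mean / obs_var)
    return post_mean, post_var

# Preseason prior: rating ~ N(5, 9). Two games with margins 10 and 14:
mean, var = posterior(5.0, 9.0, [10.0, 14.0], 36.0)
# Early season, the estimate sits between the prior (5) and the data (12),
# and the variance shrinks as games accumulate.
```

The point is that with only two games played, the model doesn't throw away everything it believed in August; the data has to earn its weight.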
It used to be that things like a Gibbs sampling / Markov chain Monte Carlo model were a LOT of work to pull off. In '95 I was involved in writing a Bayesian multinomial probit in a matrix algebra language (SAS/IML), and it sucked... there was no direct function for a Kronecker product, if I recall correctly, among other things... but now, with every model under the sun generally available in Python and R, there's no excuse for shit models. If the results are not good, it just means the modeler is uninformed or fucking lazy. In this case, I think it may be a combination of the two.
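To back up the "this is no longer a LOT of work" point: the textbook Gibbs sampler for a standard bivariate normal with correlation rho fits in about a dozen lines of plain Python, no matrix-algebra language required. (This is the standard teaching example, not anything specific to FPI.)

```python
# Gibbs sampler for a standard bivariate normal with correlation rho.
# Each full conditional is itself normal: x | y ~ N(rho*y, 1 - rho^2),
# and symmetrically for y | x, so we just alternate draws.
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = (1 - rho ** 2) ** 0.5       # conditional standard deviation
    samples = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)   # draw x given current y
        y = rng.gauss(rho * x, sd)   # draw y given the new x
        if i >= burn_in:             # discard burn-in draws
            samples.append((x, y))
    return samples

draws = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
# The empirical correlation of the draws should land near 0.8.
```

In '95 this meant hand-rolled matrix algebra; today it's an afternoon, which is the whole complaint about lazy modeling.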
I mean hell, he probably doesn't even need to go that complex. I imagine that an XGBoost model that included some features to represent past years' history and some type of feature to represent time (number of games played in the current season) would create interactions that would fairly naturally weight early-season predictions toward the historical data and create a more accurate prediction in weeks 1-5.
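Even simpler than learning that interaction with XGBoost, here's the effect hand-coded as a sketch (the function, names, and ramp length are all invented for illustration): the weight on current-season data grows with games played, so week-1 predictions lean on last year's rating and mid-season predictions lean on this year's results.

```python
# Hand-set stand-in for the "time interaction" a boosted model would
# learn: blend last year's rating with the current-season average
# margin, ramping the current-season weight up over the first games.

def blended_rating(prior_year_rating, current_avg_margin, games_played,
                   full_weight_at=6):
    """Linear ramp: 0 games -> all prior year; `full_weight_at` games
    or more -> all current season."""
    w = min(games_played / full_weight_at, 1.0)
    return (1 - w) * prior_year_rating + w * current_avg_margin

# Last year's rating +10, current season averaging -4 per game:
print(blended_rating(10.0, -4.0, 1))  # week 1: still mostly last year's data
print(blended_rating(10.0, -4.0, 6))  # six games in: entirely current data
```

A gradient-boosted model with (prior-year features × games-played) interactions would discover something like this weighting on its own; the ramp above is just the idea made explicit.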
You know.... I used to work with a guy who I bet you'd really get along with well. (No one really liked him much either.)
da fuq are you babbling about? Lol