ESPN (A bunch of Death-Spiraling maroons)

It’s a simple least squares model, based on what he’s explained to me. When I’ve called him out on it, he says it ‘does what it is supposed to do’ and then references least-squares error. That’s when I went after him on his choice of loss function.
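
For anyone who wants the gist of the loss-function gripe: squared error punishes big misses quadratically, so a handful of blowouts can drag the whole fit around, while absolute error barely notices them. A minimal Python sketch with toy residuals (nothing from his actual model):

```python
import numpy as np

# Toy residuals from a hypothetical game-prediction model; the 30-point
# miss is one blowout among otherwise modest errors.
residuals = np.array([2.0, -3.0, 1.0, -2.0, 30.0])

mse = np.mean(residuals ** 2)     # squared-error loss
mae = np.mean(np.abs(residuals))  # absolute-error loss

print(f"MSE: {mse:.1f}")  # 183.6 -- driven almost entirely by the blowout
print(f"MAE: {mae:.1f}")  # 7.6  -- the blowout counts, but doesn't dominate
```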

Anyhow, at a minimum, for early-season game predictions to be of any value, he should be considering a Bayesian model where the prior is based on factors like returning players, their stats, and attendance at road games. Picking the prior should not be overly difficult, as someone in that position has ample historical data to work with and can use a multitude of features from a prior year to predict the following year.
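
To make the prior idea concrete, here’s a minimal conjugate normal-normal sketch in Python. Every number in it is a made-up stand-in: the prior mean plays the role of a preseason team rating built from last year’s features, and the observations play the role of early-season point margins.

```python
import numpy as np

# Assumed preseason rating (the prior), built from last year's features:
# returning starters, prior stats, etc. Units are points of team strength.
prior_mean, prior_var = 8.0, 25.0   # prior: N(8, 5^2)
obs_var = 100.0                     # assumed game-to-game noise: sd of 10

# Hypothetical point margins from the first few games of the new season.
margins = np.array([3.0, -7.0, 14.0])

# Conjugate normal-normal update: posterior precision is the sum of
# prior precision and data precision.
n = len(margins)
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + margins.sum() / obs_var)

# With only three games observed, the prior still carries most of the
# weight; as games accumulate, the data takes over automatically.
print(f"posterior strength: {post_mean:.1f} +/- {np.sqrt(post_var):.1f}")
```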

It used to be that something like a Gibbs sampling / Markov chain Monte Carlo model was a LOT of work to pull off. In ’95 I was involved in writing a Bayesian multinomial probit in a matrix algebra language (SAS/IML) and it sucked... there was no direct function for a Kronecker product, if I recall correctly, among other things... but now, with the general availability of every model under the sun in Python and R, there’s no excuse for shit models. If the results are not good, it just means the modeler is uninformed or fucking lazy. In this case, I think it may be a combination of the two.
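
For the non-stats crowd: a Gibbs sampler just cycles through each parameter’s full conditional distribution. Here is a minimal Python sketch for the mean and variance of normal data (nothing to do with ESPN’s model, just an illustration of what used to take weeks in a matrix language; and yes, NumPy ships np.kron these days):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(5.0, 2.0, size=50)  # synthetic data: true mean 5, sd 2
n, ybar = len(data), data.mean()

mu, sig2 = 0.0, 1.0  # arbitrary starting values
draws = []
for _ in range(5000):
    # Draw mu from its full conditional given sigma^2 (N(0, 100^2) prior).
    post_var = 1.0 / (1.0 / 100.0**2 + n / sig2)
    mu = rng.normal(post_var * (n * ybar / sig2), np.sqrt(post_var))
    # Draw sigma^2 from its full conditional given mu (inverse-gamma).
    resid = data - mu
    sig2 = 1.0 / rng.gamma(n / 2.0, 2.0 / (resid @ resid))
    draws.append((mu, sig2))

samples = np.array(draws[1000:])  # discard burn-in
print("posterior mean of mu:", samples[:, 0].mean())  # ~5
```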

I mean hell, he probably doesn’t even need to go that complex. I imagine that an XGBoost model with some features representing the prior year’s history, plus some feature representing time (number of games played in the current season), would create interactions that naturally weight early-season predictions toward the historical data and produce more accurate predictions in weeks 1-5.
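
A rough sketch of that idea in Python with the standard xgboost package. Every feature name here is invented for illustration and the data is synthetic; the point is just that putting prior-year features and a games-played counter in the same model lets the trees learn the early-season weighting on their own:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features: prior-year rating, returning-starter count,
# games played so far this season, and a stand-in for in-season form.
prior_rating = rng.normal(0, 10, n)
returning = rng.integers(5, 22, n).astype(float)
games_played = rng.integers(0, 12, n).astype(float)
current_form = rng.normal(0, 10, n)
X = np.column_stack([prior_rating, returning, games_played, current_form])

# Synthetic target: early in the season the margin tracks the prior-year
# rating; later it tracks current form. That is the interaction we want
# the model to discover via the games_played feature.
w = games_played / 12.0
y = (1 - w) * prior_rating + w * current_form + rng.normal(0, 3, n)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X, y)

# A week-1 matchup (games_played = 0): the prediction should lean almost
# entirely on the prior-year rating.
print(model.predict(np.array([[12.0, 15.0, 0.0, 0.0]])))
```
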
This post is so ratio'd.

Now tell us the Super Lotto numbers and maybe you'll have some credibility around here....

I remember my first one, too.

[Image: TI-81 calculator graphing screen]

This post has been forwarded to Fox so that they can get better modeling than ESPN. Much like Urban showing how commentators can raise their game.
I'm sure a recruiter will be reaching out to you. :wink:

LOL. I'd frankly love that job when I retire. I did have a guy who used to work for me who quit to go work at Stats Inc. (https://www.statsperform.com/ <- for those who've not read the postgame credits). He was pretty solid: a U of Chicago undergrad in math and a Minnesota MS in Stats (of note, the U has historically been a powerhouse stats program). As a business person I always thought he threw away his degrees for that job, but he clearly did it for love of the work. He got paid peanuts compared to what he could have made in industry, but who am I to judge. He was also a huge baseball fan. Pretty sure he worshiped at the altar of Billy Beane.

I know I’m late, but I have one too,

[GIF]

Well, I understood what you said, and I think you are dead right. There is no excuse for crap models with so many options in R these days. Given the number of parameters to be estimated and what he has been saying to you, I thought another option might be that he was fitting a multilevel latent class regression model. After reading this, I wonder whether one way to handle the parameters-to-sample-size issue might be to fit a multilevel partial least squares model.
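
For anyone following along: partial least squares sidesteps the parameters-to-sample-size problem by regressing on a handful of latent components instead of the raw features. A minimal single-level sketch with scikit-learn (synthetic data, invented dimensions; the multilevel version would take considerably more machinery):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Hypothetical setup: 60 team-seasons, 40 correlated features (stats,
# recruiting, returning production), far too many for plain least squares.
X = rng.normal(size=(60, 40))
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.5, size=60)  # few true signals

# Project onto 3 latent components, then regress. The projection is what
# keeps the effective parameter count sane relative to the sample size.
pls = PLSRegression(n_components=3)
pls.fit(X, y)
print("training R^2:", pls.score(X, y))
```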

This is something that I also want to play with in retirement. It would be pretty cool if we had a BP model.

I’m all for it, but access to data is what would trip me up. Actually, not so much access as finding, obtaining, and maintaining the data. My guess is that a lot of it could be had for a few of those $77.95 subscription fees we all pay.

For the maths stuff, let’s head over to https://www.buckeyeplanet.com/forum/threads/official-statistical-analysis-thread.601062/

For crying out loud, you two, go grab DBB and GET A ROOM! Oh, right. You just said as much.