Golf Analytics

How Golfers Win


The Aging Curve for PGA Tour Golfers

This is a short study I conducted on the typical aging curve for PGA Tour golfers. I stress: this is the curve for the average PGA Tour member. As I discuss below, it is not likely to reflect the aging curves of the most elite golfers.

Sample & Design:
All PGA Tour golfers who played >20 PGA Tour [1] rounds in Year 1 and at least one round of golf worldwide in Year 2. I studied the season pairs 2009-2010, 2010-2011, 2011-2012, and 2012-2013, giving a sample of 916 pairs of seasons.

I then compared these golfers across all worldwide rounds in Year 1 and Year 2. I regressed each golfer's Year 1 and Year 2 performance to PGA Tour average (0.00) using the equation Y = (0.6944 * X) + 0.01. I regressed because I wanted the best estimate of each golfer's "true talent": golf performance is heavily influenced by luck, and over a normal 85-round season a golfer's displayed performance represents approximately 70% skill and 30% luck.

The delta (regressed Year 2 minus regressed Year 1) provided my comparison point. I did not weight my data.
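As a minimal sketch of this regress-then-difference step (the golfer data below is hypothetical; only the 0.6944/0.01 regression equation comes from this study):

```python
# Sketch of the regress-then-difference methodology described above.
# The season pairs are hypothetical illustrations, not real golfers.

REGRESS_SLOPE = 0.6944
REGRESS_INTERCEPT = 0.01

def regress_to_mean(observed_z):
    """Shrink an observed season Z-Score toward PGA Tour average (0.00)."""
    return REGRESS_SLOPE * observed_z + REGRESS_INTERCEPT

# (age in Year 1, observed Year 1 Z-Score, observed Year 2 Z-Score)
season_pairs = [
    (29, -0.45, -0.20),  # hypothetical golfer who declined
    (41, 0.10, 0.35),    # hypothetical golfer who also declined
]

for age, y1, y2 in season_pairs:
    delta = regress_to_mean(y2) - regress_to_mean(y1)
    print(f"age {age}: delta = {delta:+.2f}")  # positive delta = decline
```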

I included only golfers with >20 PGA Tour rounds in Year 1 because, given the structure of the worldwide tours, it is rare for such a golfer to fail to record a single round anywhere the following season: golfers who fail to re-qualify for the PGA Tour almost always land on the Web.com Tour. Had I instead used all golfers with >20 rounds in Year 1 on any tour, many who performed poorly on the Web.com Tour would have fallen completely out of the sample, dropping to minor tours for which I do not gather data. Measuring only PGA Tour players ensures that, no matter how lucky or unlucky, good or bad a player was in Year 1, it is very likely he will appear in the Year 2 data.

(Delta = regressed Year 2 − regressed Year 1; positive values indicate decline.)

AGE    Delta     N
19     +0.02     3
20     -0.02     2
21     -0.01     4
22      0.00     8
23     -0.02     8
24     +0.02    11
25     -0.04    16
26      0.00    23
27     +0.01    30
28      0.00    42
29     -0.01    47
30     +0.04    45
31     +0.01    49
32      0.00    45
33     -0.01    44
34     +0.04    46
35      0.00    47
36      0.00    51
37     +0.06    51
38     +0.04    38
39     +0.05    35
40     +0.02    39
41     +0.07    41
42     +0.04    29
43      0.00    27
44     +0.07    22
45     +0.13    18
46     +0.07    28
47     +0.09    22
48     +0.08    15
49     +0.08    16
50     +0.13    11
51     +0.01     6
52     +0.04     2
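The figure below presumably chains these per-age deltas into a cumulative curve. A minimal sketch of that construction, under the assumption that the curve is the running sum of the deltas anchored at 0.00 at age 19:

```python
# Chain the per-age deltas from the table into a cumulative aging curve.
# Assumption: the delta listed at age A is the change in regressed Z-Score
# going from age A to age A+1, and the curve is anchored at 0.00 at age 19.

ages = list(range(19, 53))
deltas = [0.02, -0.02, -0.01, 0.00, -0.02, 0.02, -0.04, 0.00, 0.01, 0.00,
          -0.01, 0.04, 0.01, 0.00, -0.01, 0.04, 0.00, 0.00, 0.06, 0.04,
          0.05, 0.02, 0.07, 0.04, 0.00, 0.07, 0.13, 0.07, 0.09, 0.08,
          0.08, 0.13, 0.01, 0.04]

level = 0.00
for age, delta in zip(ages, deltas):
    print(f"age {age}: cumulative {level:+.2f}")
    level += delta  # positive delta = decline (higher Z-Score is worse)
```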

[Figure: aging2 – aging curve derived from the table above]

Discussion:
The aging curve for this sample is basically flat from age 21 to age 34, with a significant year-by-year decline beginning in the late 30s. This indicates that the golfers in this sample did not generally improve or decline due to age until the mid-30s. The sample is small before age 26, but it shows a slight improvement of about -0.01/season (negative delta = improvement). From age 26-36 the decline is less than 0.01/season. From 37-47 the decline accelerates to 0.06/season. After 47 the sample is relatively small, but it shows continued significant decline.

This is surprising; I anticipated finding a normal aging curve in which an athlete reaches peak performance in the late 20s before beginning to decline in the mid-30s. Instead, the sample hardly improved through the late 20s and had even declined slightly by the mid-30s. After that, the sample followed the sharp late-30s and 40s decline anticipated by other athletics-focused aging studies.

My main hypothesis for why these golfers show no age-related improvement relates to the sample I chose to work with. This study measures the typical PGA Tour professional. Most of the public is familiar with golfers who have remained on Tour for many years, even decades, like Tiger Woods, Phil Mickelson, and Ernie Els. However, the PGA Tour is a very transitory competition. Around 225 golfers play more than 20 rounds in a season, but only 125 retain full playing privileges the following season; the rest attempt to qualify via Q-School or, failing that, play with reduced status or on the minor league Web.com Tour. Playing on the PGA Tour is very lucrative – purses are on average ten times larger than Web.com Tour purses. And the Web.com Tour promotes only its best 25 golfers to the PGA Tour each season, meaning fewer than 10% of Web.com golfers move up in a given year.

Because of this financial disparity, only a third of golfers who competed regularly on the Web.com Tour in 2013 earned more than the 2013 US median household income (~$51,000). Professional golf requires endless hours of practice, separation from family and friends, and constant travel between tournament venues spread across three or four continents. It may be that the average PGA Tour golfer simply cannot handle the constant grind of professional golf, and his skills slowly deteriorate from very early in his career. Because the average PGA Tour pro is unlikely even to maintain his membership from year to year, most professional golfers face years of yo-yoing between the lucrative PGA Tour and the relative penury of the Web.com Tour. Viewed that way, it's understandable why the typical player does not improve.

Understand that there are many forces at work producing these small improvements or declines with age. Golfers certainly become better at reading greens, making club decisions, and choosing how to play shots as they move deeper into their careers. At the same time, the athletic decline observed in other sports affects a golfer's ability to generate club head speed and repeat his swing, and many commentators talk about how older players get the "yips" and putt worse than they did when younger. Golf also requires constant dedication to practice and preparation: a golfer who isn't prepared to commit hours to practice each day is going to watch his skills erode. The aging curve observed above is likely a combination of all these factors.

Again, I have to stress that I looked at typical PGA Tour professionals. There are likely many different aging curves based on ability, and I would be stunned if the aging curve for elite golfers resembled this slow decline. Elite golfers can expect significant and sustained rewards for high levels of performance. They are unlikely to lose their playing privileges on the PGA Tour, so they know that by maintaining their practice and preparation they can expect to earn more than a million dollars in prize money per season, plus endorsements and appearance fees. That is what fuels golfers like Mickelson and Vijay Singh to take care of their bodies, to practice, to prepare for each tournament, and to withstand the weekly grind of playing in different tournaments.

Future Work:
I'd like to follow this study with one that does weight the data by rounds played. I'm also less comfortable with my regression technique than I would like. Instead of regressing every observed value by a fixed ~30% to the mean, I'll regress the observed values by adding a certain number of rounds of average play. For example, past work I've done estimates that adding 25.5 rounds of 0.00 play properly regresses the observed data. A sketch of that scheme follows.
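A minimal sketch of that rounds-based scheme (the 25.5-round constant is from the text; the example inputs are hypothetical):

```python
# Sketch of the proposed rounds-based regression: instead of shrinking every
# observation by a fixed ~30%, pad the observed sample with 25.5 rounds of
# average (0.00) play.

PADDING_ROUNDS = 25.5  # estimate from past work described above

def regress_by_rounds(observed_z, rounds_played):
    """Estimate true talent by mixing the observation with average play."""
    return (observed_z * rounds_played) / (rounds_played + PADDING_ROUNDS)

# Unlike the fixed multiplier, the shrinkage now adapts to sample size:
print(regress_by_rounds(-0.50, 85))  # full season: ~23% toward the mean
print(regress_by_rounds(-0.50, 20))  # small sample: ~56% toward the mean
```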

[1] – I defined PGA Tour rounds as any PGA Tour (co-)sponsored tournament plus the World Golf Championships and Majors.


Predicting Professional Performance of Collegiate Golfers (Part III)

Last month I posted several studies which measured how well collegiate golfers performed once they reached the professional level, compared to their Sagarin Rating during college. I updated my database with Challenge Tour results from 2011-2013 so this post is an update of those prior studies with slightly larger samples. Later this week I’ll post the results of a study using only the final two years of college performance to see if that predicts professional performance better.

This study uses the same methodology as the Part II study linked above. The sample size is 52, the average number of college seasons was 3.4, the average college performance was 70.8, and the average professional performance was a +0.15 Z-Score.

[Figure: college golf regression 3 – average Sagarin Rating vs. professional performance]


The results were less predictive with the larger sample, but R=0.59 is still fairly predictive of professional performance. The equation to use is Pro Performance = (0.2113 * Avg Sagarin) − 14.796.

Using this predictor, my projections for several golfers who recently turned pro follow (a sketch of the calculation appears after the list):
Justin Thomas -0.17
Chris Williams -0.03
T.J. Vogel +0.14
Cody Gribble +0.17
Pedro Figueiredo +0.18
Max Homa +0.21
Kevin Phelan +0.27
Jace Long +0.29
Steven Fox +0.43
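As a sanity check on the equation, a small sketch that reproduces two of the projections above from Sagarin ratings quoted elsewhere on this page (Thomas's 69.2 and Fox's 72.05):

```python
# Apply the Part III projection: Pro Z-Score = 0.2113 * Avg Sagarin - 14.796.
# The two ratings below are quoted in other posts on this page; the outputs
# match the projection list above.

def project_pro_z(avg_sagarin):
    return 0.2113 * avg_sagarin - 14.796

print(f"Justin Thomas (69.20): {project_pro_z(69.20):+.2f}")  # -0.17
print(f"Steven Fox    (72.05): {project_pro_z(72.05):+.2f}")  # +0.43
```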

Steven Fox’s Professional Future

I've never liked writers who bury the lead, so: Steven Fox's pro future is most likely bleak. Fox announced today that he is turning pro and will aim to qualify for the Web.com Tour through qualifying school this fall. Fox competed for Tennessee-Chattanooga for the past four seasons and won the 2012 US Amateur title. However, if his past play is any indication, he'll most likely struggle to ever distinguish himself on the major tours.

First, let's examine Fox's college performance. During his freshman and sophomore years he rated 306th and 213th in the country according to Jeff Sagarin's rankings, but he developed enough to rank 104th and 84th in his final two years in school, with his US Amateur victory coming between those two seasons. That's a perfectly respectable college career, but nothing that indicates serious professional success. Fox's career Sagarin Rating was 72.1 – 346th among all college players who competed between 2008-2013. Even considering only his final two seasons, his Sagarin Rating was 71.4, which would rank about 150th. Among the 80 players who played college golf between 2008-2013 and recorded at least 20 rounds in a season on the PGA/European/Web.com Tours, Fox's career Sagarin Rating would rank 68th. Of the thirteen players who finished college with a worse rating, only Keegan Bradley has achieved any success on the PGA Tour, and even then only after several years on the developmental tours.

Fox's situation is unusual, however, when considered next to those 80 players. Because of his US Amateur victory, he's received exemptions into nine different PGA Tour tournaments this season, including three of the Majors. It's almost unprecedented for a player of his college talent level to play 18 Major Tour rounds so soon after leaving school. The only problem is that Steven Fox has been awful in those 18 rounds: he's played one standard deviation worse than PGA Tour average and missed every cut. For comparison, the PGA Tour average is 0.00, the European Tour average is around +0.20, and the Web.com Tour average is around +0.35; the worst full seasons by Major Tour players typically approach +1.00. Obviously such a poor performance in such a small sample should be regressed heavily towards the mean. My research has shown you should add 25.5 rounds of average play to a sample to regress it, which would leave Fox rating around +0.41, below Web.com Tour average.
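For the arithmetic behind that figure, using the rounds-padding regression described elsewhere on this page: (18 × 1.00 + 25.5 × 0.00) / (18 + 25.5) ≈ +0.41.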

Alternatively, using the college to pro projection model I introduced last week, we can plug Fox’s career 72.05 Sagarin Rating into the Pro=(0.277*College)-19.453 equation. Fox’s college play indicates he’s expected to play at around +0.48, similar to his regressed Z-Score numbers and also worse than Web.com Tour average.

Using a composite of the two models, it’s reasonable to expect Fox to be worse than Web.com Tour average if he competes next season. However, even getting on the Tour will require him to qualify in Q-School. I have no numbers on the expected quality of the Q-School field because it’s the first season under these qualification rules, but it seems likely that there will be many players as good or better than Fox. All considered, it’s impossible to fault Fox for turning pro (like I absolutely would have if he had left Chattanooga after last year’s US Amateur victory), but expectations for his performance should be set extremely low. His college and pro record just do not measure up well against what typically indicates professional success.

Predicting Professional Performance of Collegiate Golfers (Part II)

Yesterday, I posted a comprehensive look at the performance of collegiate golfers during their first season in Major Tour (PGA/Web.com/European) professional golf and examined how well those results correlated with Jeff Sagarin's Rankings at Golfweek. However, I was concerned about that study, largely because I did not remove golfers who took several years to actually record >20 Major Tour rounds. It can be very difficult for non-elite college golfers to play regularly on the Major Tours right after graduation; many play on the minor league tours (eGolf/NGA) or one of the international tours (Challenge/Asian/etc.). This introduces bias if we're comparing, for example, Jordan Spieth's season right after leaving college with Chesson Hadley's season three years after he graduated. Part of success in pro golf is learning how to endure the grind of a season – securing entrance into tournaments, finding sponsors, and performing well enough to earn a living. Add to that the fact that golfers almost always improve from their early to mid 20s, and I'm not sure I trust the reliability of yesterday's study.

To correct for that bias, I removed all seasons from the sample that occurred more than one year after the golfer's last collegiate season. For example, Keegan Bradley last competed in college in 2008 but did not record Major Tour rounds until 2010, so he's dropped from my sample, along with roughly half of the original seasons. I followed the same methodology as yesterday using only the seasons that met this new criterion.

N=35, average college seasons = 3.4, average Sagarin = 70.7, average pro performance in Z-Score = 0.10

[Figure: college golf regression 2 – average Sagarin Rating vs. professional performance]

The results showed a much stronger correlation than yesterday (R=0.70). In fact, this correlation is almost exactly equal to what I found earlier this week when examining the correlation between sets of professional seasons. This indicates that Sagarin’s Rankings are an extremely valuable predictor of professional success, even more so than what I found yesterday.

Predicting Professional Performance of Collegiate Players (REDUX)

Let’s start with some tough love. This was a laughably lazy attempt at performing this study. Hopefully this effort will be less awful, considering I think I have a somewhat proper sample.

Quickly: I'm looking for the correlation between Jeff Sagarin's college golf rankings and my own Z-Score Model Ratings. Sagarin publishes the best (only?) math-based ranking of college golfers. There are obvious sample-size issues in the college game (most teams play <15 tournaments of normally three rounds each), but Rankings are fairly strongly correlated between seasons (R=.62) even though players are in a volatile period of their golf development. To combat sample-size concerns, I've averaged each golfer's Ranking over his college career. This isn't ideal either, but, again, a max of 45 rounds in a season isn't something I'm comfortable using.

Once I had those, I looked in my Z-Score database for the first instance of each player recording >20 rounds in one season and took his Z-Score from that season. A few concerns about this method of finding seasons: 1) if a golfer has fewer than 20 rounds in every season, he won't show up at all; 2) if a golfer has fewer than 20 rounds before reaching >20 rounds in a subsequent season, that first season will be ignored; and 3) it can often be several years before a golfer accumulates >20 rounds on the PGA/Web.com/European Tours (I do not have eGolf/NGA/Challenge/Asian/etc. Tour ratings). Of these concerns, #1 isn't that big of a deal: plenty of collegians don't have the game for high-level pro golf – fewer than 1,000 guys play regularly on the three Tours I track, and I've gathered data on the top 500 golfers from each season. #2 isn't very concerning; the sample has to be set somewhere. #3 concerns me the most, because comparing a 26-year-old with three seasons of minor tour golf to a 21-year-old right out of college is kind of apples and oranges, but perhaps I'll run another study in the future that excludes those data points.

First, some information about my sample: N=80; all but three golfers had at least 2 seasons of college golf (the average was 3.2 seasons), with Spieth, Todd Baek, and Roger Sloan being the single-season exceptions; the average performance in college was 71.1; and the average performance in pro golf was a +0.15 Z-Score (below average). I've chosen to display the pro results in terms of strokes better or worse than average; divide by 3 to get the corresponding Z-Score.

The results were much less irrelevant than that turd I linked above:

[Figure: college golf regression – career Sagarin Rating vs. first-year pro performance]

The correlation was R=.49, which indicates that we can predict pro performance on a roughly 50% Sagarin/50% mean basis. The equation to use is y = 0.47x − 32.9, where y is pro performance in strokes relative to average (divide by 3 to get the Z-Score) and x is Sagarin Rating. For comparison, I've found that the correlation between back-to-back professional seasons is about 70% (70% Year 1 + 30% mean) and between back-to-back college seasons about 63% (63% Year 1 + 37% mean). Given the concerns I laid out above, I think that's not terrible.
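A small sketch of this projection, including the strokes-to-Z conversion (the 69.2 rating for Justin Thomas is quoted below; everything else comes from the equation above):

```python
# Projection from this study: strokes vs. average = 0.47 * Sagarin - 32.9;
# divide by 3 to convert strokes to a Z-Score.

def project_strokes(avg_sagarin):
    return 0.47 * avg_sagarin - 32.9

def project_z(avg_sagarin):
    return project_strokes(avg_sagarin) / 3.0

print(f"{project_z(70.0):+.2f}")  # +0.00: a 70.0 career rating projects to average
print(f"{project_z(69.2):+.2f}")  # -0.13: Justin Thomas's 69.2 (see below)
```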

Unsurprisingly, this method of prediction has its misses. Sagarin did not think highly of Keegan Bradley coming out of St. John's; whether it was poor play or an awful schedule, Keegan averaged a 72.5 over three years in school. Keegan was also one of those golfers who took several seasons after turning pro to record Major Tour rounds: he graduated in 2008 and didn't record a Major Tour round until 2010.

I do take solace in the fact that no golfer who averaged better than a 70.0 (basically top 15 each season) failed to perform better than the sample average in his first season. This indicates that success in college is correlated with success in professional golf. In fact, only a single graduated player with a Sagarin below 70.0 in the entire 2005-2013 sample has failed to record 20 or more Major Tour rounds: Arnond Vongvanij, who has played exclusively on the Asian Tour, has a professional win, and is ranked 218th by the Official World Golf Ranking.

This method predicts success for the best collegiate golfer not currently in pro golf, Justin Thomas (69.2), who plans to turn pro after the Walker Cup. He's recorded a -0.06 Z-Score in 12 rounds dating back to 2012.

Predicting Golfer Performance in Full Seasons

This is a long overdue post. My main interest in golf analytics is predicting future performance, and this post will lay out how well I can predict full seasons from prior data. The inputs are simple: 1) my Z-Score data from the 2009-2012 seasons, collected from the PGA, Web.com, and European Tours, and 2) my Z-Score data for all rounds on those three Tours so far in 2013. For all of these regressions I've limited my analysis to golfers with more than 25 rounds in both 2012 and 2013. I've run regressions using 1 Year (2012), 2 Years (2011-12), 3 Years (2010-12), and 4 Years (2009-12) of data, all on 2013. I've also weighted the 1 Year and 2 Year samples by recency and regressed those on 2013.

[Image: regression results for the 1-4 year samples against 2013]

Prior work I've done suggests that simply adding 25 rounds of PGA Tour average (0.00) play to a golfer's sample is a good prediction of what he will do going forward. That means a golfer with 50-100 rounds in a season (pretty much all Tour regulars) will be regressed anywhere from 20% to 33% to the mean, as shown below. The above results are completely in line with that prior finding, showing regression of anywhere from 28% to 31%.
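The 20%-33% range follows directly from the 25-round padding, since an n-round sample gets shrunk toward the mean by 25/(n+25); a quick check:

```python
# Adding 25 rounds of average (0.00) play shrinks an n-round sample toward
# the mean by a factor of 25 / (n + 25).

for n in (50, 75, 100):
    print(f"{n} rounds: {25 / (n + 25):.0%} regression to the mean")
# 50 rounds: 33%, 75 rounds: 25%, 100 rounds: 20%
```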

What surprised me, though, was the similarity of the results. This suggests that if you're not going to weight the data by recency, including more years of data doesn't actually help: you're pretty much stuck with ~30% regression no matter what. And even weighting the data barely improves the overall accuracy of the results.

Now, this is a fairly basic regression of data. It includes golfers with very few rounds in both seasons (though increasing the rounds requirement to >50 with the weighted 2 Year data produced no improvement in accuracy), it includes golfers who were playing regularly on all three Tours, and it ignores a host of other information like age, status on the Tours, injuries, etc. But the results are pretty compelling – predicting golfer performance is difficult even over a full season, so much so that we should regress observed performance 30% to the mean to best predict future performance.

Predicting College Players in Pro Golf

With the recent end of the NCAA golf season, top collegiate golfers are terminating their amateur status and entering pro tournaments, aiming to earn enough money to secure status on one of the major tours for 2014. Jordan Spieth was the first to turn pro this season, entering several PGA Tour tournaments early in the year and earning Special Temporary Member status. Just this past week, Alabama's Justin Thomas and Washington's Chris Williams turned pro at the Travelers Championship and both made the cut. From my point of view, the main question with these freshly minted pros is just how good they are compared to a PGA Tour regular.

Luckily, Golfweek/Jeff Sagarin publishes yearly rankings of college golfers measuring their performance throughout the college season. This season the top-ranked golfer was Michael Kim, the low amateur at the US Open. Sagarin reports the ratings on a scale mirroring a typical golf score: the most elite golfers earn ratings in the 68.0s, while the 25th-best golfer in a season rates around 70.0-70.5. How do those ratings translate into the Z-Score Rating System I use?

To find out I constructed a list of golfers that both were ranked in Sagarin’s top 25 for 2010-11 or 2011-12 before turning pro and had at least 8 rounds on the Web.com, European, or PGA Tours in the year after their college season ended (basically June to May). There were 18 such golfers that met both criteria including notables like Harris English, Bud Cauley, and Jordan Spieth. For those 18, I gathered Sagarin ratings for their last two seasons in college (or only one season if they left after freshman year) and their Z-Scores, adjusted for strength of field, for all rounds they played in the year following the end of their last college season. The N for the golfers ranged from 8 rounds to 108 rounds.

I then ran a linear regression with the average Sagarin rating of the final two college seasons as the independent variable and the Z-Score in the following year as the dependent variable. For the full data-set of 18 golfers, the R² was only 0.09, i.e., R ≈ 0.30 (a graph of the data is below) – meaning that, in the weighting terms I use elsewhere, collegiate performance earns only about a 30% weight in predicting performance as a pro.
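For reference, a minimal sketch of this kind of regression (the five data points are hypothetical illustrations, not the actual 18-golfer sample):

```python
# Minimal sketch of the regression described above: average Sagarin rating
# over the final two college seasons (x) vs. first-year pro Z-Score (y).
import numpy as np

sagarin = np.array([68.9, 69.5, 70.1, 70.8, 71.3])   # hypothetical ratings
pro_z = np.array([-0.25, -0.05, 0.10, 0.30, 0.45])   # hypothetical Z-Scores

slope, intercept = np.polyfit(sagarin, pro_z, 1)
r = np.corrcoef(sagarin, pro_z)[0, 1]
print(f"fit: z = {slope:.3f} * sagarin + {intercept:.3f}; R^2 = {r**2:.2f}")
```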

[Figure: collegegolfergraph – average Sagarin rating vs. first-year pro Z-Score]

Now, what can we learn from such a small sample? Perhaps not much, especially since there is likely survivor bias in the data: golfers who perform best in their first few tournaments are more likely to get additional sponsor's exemptions, gain status on a major tour, and accumulate additional rounds of data, while golfers who struggle early are forced onto the NGA Tour or other lower tours for which I do not have data. Perhaps the most important takeaway is that a golfer would have to play to a rating of 69.3 in college to be projected as a 0.00 pro (basically average for the PGA Tour). Only Michael Kim (who is staying amateur) and Brandon Stone (who just turned pro at this past week's BMW International and finished T10) were that good in 2012-13.

I will return to this subject when I finish compiling Challenge Tour (European minor league), NGA Tour, and eGolf Tour data for past seasons, which will give me a much larger sample to work with and from which to draw conclusions.