Golf Analytics

How Golfers Win

Putting Driven Performance Changes are Illusory

Last week I posted about how repeatable performance on different shot types was from season to season. Tee to green play is more repeatable than putting, which is more repeatable than scrambling. That makes sense once you realize that golfers play 2-3 times more tee to green shots than meaningful putts in a round; there’s just more inherent randomness in a season’s worth of putts than in a season’s worth of tee to green shots. Golfers play even fewer scrambling shots, resulting in even more randomness in a season’s worth of scrambling.

Last month I also examined how repeatable small samples (4-8 tournaments) of putting performance are, in the context of discussing why I expected Jimmy Walker’s performance to regress to the mean. That micro-study indicated that there was very little correlation between a golfer’s performance over one 4-8 tournament sample of putts and the following 4-8 tournament sample. On the whole, performances in such short samples regress almost entirely to the mean.

Those two lines of inquiry led me to examine whether putting was more random than tee to green performance. I have always believed that improvements/declines driven by over-performance in putting were less real than those driven by tee to green over-performance, but I had never actually tested that hypothesis. The key question is whether changes in performance driven by putting are less persistent than those driven by tee to green play. That is, when a golfer performs better over the first half of a season and much of the improvement can be traced back to an improvement in his putting stats, will that golfer continue to perform better in the second half of the season? The evidence says changes in performance driven by putting are more illusory than changes driven by tee to green play.

Design:

I gathered the tournament by tournament overall, tee to green, and putting performances of all PGA Tour golfers in rounds measured by the ShotLink system for 2011-Present. I divided those rounds into roughly half-season chunks (January-May 2011, May-November 2011, January-May 2012, May-November 2012, January-May 2013, May-September 2013, October 2013-Present). Each chunk included around 15-18 tournaments. I considered all golfers who recorded at least 20 rounds in consecutive half-season chunks.

To measure putting performance I used the PGA Tour’s Strokes Gained Putting stat, and to measure tee to green performance I used my own overall ratings with putting performance subtracted out. This methodology is consistent with how I have measured tee to green performance in much of my recent work.
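Below is a minimal sketch of how such a dataset might be assembled with pandas. The file name, the column names (golfer, date, overall, sg_putting), and the simple month-based chunking rule are assumptions for illustration, not the actual ShotLink export or my exact splits.

```python
import pandas as pd

# Hypothetical round-level file: one row per golfer-round with overall strokes
# vs. the field and strokes gained putting (names are assumptions).
rounds = pd.read_csv("rounds.csv", parse_dates=["date"])

# Tee to green = overall performance with putting subtracted out.
rounds["tee_to_green"] = rounds["overall"] - rounds["sg_putting"]

# Assign each round to a rough half-season chunk, e.g. 2011H1 = Jan-May 2011.
rounds["chunk"] = rounds["date"].apply(
    lambda d: f"{d.year}H1" if d.month <= 5 else f"{d.year}H2")

# Average performance per golfer per chunk, keeping only golfers with at
# least 20 measured rounds in that chunk.
chunks = (rounds.groupby(["golfer", "chunk"])
                .agg(n_rounds=("overall", "size"),
                     overall=("overall", "mean"),
                     putting=("sg_putting", "mean"),
                     tee_to_green=("tee_to_green", "mean"))
                .query("n_rounds >= 20")
                .reset_index())
```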

Half-Season Correlations by Shot Type:

First, I measured how repeatable putting and tee to green performance were between half-season samples, much like the full-season samples used in the study linked above. I included all golfers with at least 20 rounds in consecutive half-season samples and compared each half-season to the half-season that directly followed, including 2nd halves to 1st halves of the following calendar years. This yielded samples of ~800 golfers for both tee to green and putting. Graphs are below.

[Graph: tee to green performance in consecutive half-seasons]

[Graph: putting performance in consecutive half-seasons]

Tee to green performance was again more repeatable than putting performance. In the study linked above, consecutive full seasons of tee to green performance were correlated at R=0.69. I found a correlation of R=0.62 between consecutive half-seasons, understandably lower given the smaller number of rounds/shots played. The full-season correlation for putting was R=0.55; half-season putting performances were likewise less correlated, at R=0.40. Both findings are consistent with the understanding that randomness between samples increases when fewer rounds/shots are compared. Most importantly, putting is again less repeatable than tee to green play.
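A minimal sketch of the half-season correlation, continuing from the assumed chunks table in the previous sketch; sorting the assumed chunk labels chronologically is what pairs each half-season with the one that directly follows.

```python
import numpy as np

# Order the half-season labels chronologically ("2011H1" < "2011H2" < "2012H1" ...).
order = sorted(chunks["chunk"].unique())
nxt = dict(zip(order, order[1:]))  # each chunk -> the chunk that directly follows

# Pair each golfer's half-season with his following half-season
# (including 2nd halves paired with the 1st half of the next year).
first = chunks.copy()
first["next_chunk"] = first["chunk"].map(nxt)
second = chunks.rename(columns={"chunk": "next_chunk"})
pairs = first.merge(second, on=["golfer", "next_chunk"], suffixes=("_t1", "_t2"))

for stat in ["tee_to_green", "putting"]:
    r = np.corrcoef(pairs[f"{stat}_t1"], pairs[f"{stat}_t2"])[0, 1]
    print(stat, round(r, 2))  # the post reports roughly 0.62 and 0.40
```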

Persistence of Changes in Performance by Shot Type:

Next, I measured how persistent changes in performance are when considering putting and tee to green play. That is, when a golfer improves their putting over a half-season sample, how much of that performance is retained in the following half-season? If 100% of the performance is retained, changes in putting performance over a half-season entirely represent a change in true talent. If 0% of the performance is retained, changes in putting performance over a half-season entirely represent randomness. The same for tee to green play. My assumption was that a larger percent of performance would be retained for tee to green play than putting, meaning that half-season samples of putting are more affected by randomness than half-seasons of tee to green play.

To measure the effect, I first established prior expectations of performance for every golfer in my sample. I simply averaged performance in tee to green play and putting for the three years prior to the beginning of each half-season sample. For example, for the May-November 2011 sample, I averaged play between May 2008 and May 2011. This is not an ideal measure of performance, but it provides a consistent baseline for comparisons to be made.

I removed all golfers from the sample who had no prior performances. This reduced my sample to around 750 consecutive half-seasons.

The values I compared were the initial delta (Prior minus 1st Half-season) and the subsequent delta (Prior minus 2nd Half-season). Using this method I can find how persistent a change in performance is between two half-seasons. I did this separately for putting and tee to green play. Graphs are below.

[Graph: persistence of tee to green performance changes]

[Graph: persistence of putting performance changes]

Changes in tee to green play were twice as persistent as changes in putting, meaning golfers who improved their tee to green play retained twice as much of that improvement as golfers who improved a similar amount in putting. Golfers maintained around 60% of their tee to green improvements, but only 30% of their putting improvements. This indicates that putting performances regress more sharply to prior expectations than tee to green performances.
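A minimal sketch of the retention estimate, continuing from the earlier sketches (it reuses the assumed chunks table and nxt mapping). The expanding-mean prior is a simplification of the three-year baseline described above, and both deltas are measured against the prior established before the first half-season.

```python
import numpy as np

# Prior expectation for each golfer: average of his earlier half-season chunks
# (a simplification of the post's three-year baseline).
chunks = chunks.sort_values(["golfer", "chunk"])
for stat in ["tee_to_green", "putting"]:
    chunks[f"prior_{stat}"] = (chunks.groupby("golfer")[stat]
                                     .transform(lambda s: s.shift().expanding().mean()))

# Rebuild the consecutive-chunk pairs, dropping golfers with no prior performance.
first = chunks.dropna(subset=["prior_tee_to_green", "prior_putting"]).copy()
first["next_chunk"] = first["chunk"].map(nxt)
second = chunks.rename(columns={"chunk": "next_chunk"})
pairs = first.merge(second, on=["golfer", "next_chunk"], suffixes=("_t1", "_t2"))

# Retention: slope of the subsequent delta (prior minus 2nd half) on the
# initial delta (prior minus 1st half), separately for each shot type.
for stat in ["tee_to_green", "putting"]:
    initial = pairs[f"prior_{stat}_t1"] - pairs[f"{stat}_t1"]
    subsequent = pairs[f"prior_{stat}_t1"] - pairs[f"{stat}_t2"]
    slope = np.polyfit(initial, subsequent, 1)[0]
    print(stat, round(slope, 2))  # the post finds roughly 0.6 and 0.3
```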

Are Putting Performances More Illusory?

Finally, I combined the data from above to measure whether changes in performance driven by putting are less real than changes in performance driven by tee to green play. I ran a linear regression using the initial delta for overall performance and the initial delta for putting performance as independent variables and the subsequent delta for overall performance as the dependent variable. In short, given a certain overall change in performance and a certain change in putting performance over the first half-season, how much of that overall change is retained over the second half-season?

As the following table shows, golfers retain much more of their improvement or decline when it occurred in tee to green shots than when it occurred in putting. The columns show improvements/declines in overall play (considering all shots) and the rows show improvements/declines solely in putting. The table shows that a golfer who improves overall by 0.50 strokes will retain only about a quarter of that improvement if all of it was due to putting (0.50), while he will retain over half of it if none of it was due to putting (0.00). The equation used to produce this chart is Subsequent Delta = (0.56 * Initial Overall Delta) - (0.28 * Initial Putting Delta).

[Chart: retained overall improvement by initial overall and putting deltas]
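A small worked application of that equation, using the fitted coefficients reported above, reproduces the 0.50-stroke example from the text:

```python
# Applying the post's fitted equation:
# Subsequent Delta = 0.56 * Initial Overall Delta - 0.28 * Initial Putting Delta
def retained(initial_overall, initial_putting):
    return 0.56 * initial_overall - 0.28 * initial_putting

# A golfer who improved by 0.50 strokes overall in the first half-season:
print(retained(0.50, 0.50))  # 0.14 strokes kept (~28%) if the gain was all putting
print(retained(0.50, 0.00))  # 0.28 strokes kept (~56%) if none of it was putting
```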

Discussion:

These findings should fundamentally alter how we discuss short-term changes in performance. I’ve already shown repeatedly that performances better than prior expectation will regress to the mean over larger samples. That idea is consistent across sports analytics. However, these findings indicate that the amount of regression depends on which part of a golfer’s game is improving or declining. Golfers who improve on the basis of putting are largely getting lucky and will regress more strongly to the mean than golfers who improve on the basis of the tee to green game. Those who improve using the tee to green game are showing more robust improvements, which should be expected to be more strongly retained.

The golfers who represent either side of this for the 2014 season are Jimmy Walker and Patrick Reed. I’ve discussed both in the past month, alluding to how Walker’s improvements were almost entirely driven by putting and how Reed’s were mostly driven by tee to green play. Based on these findings, Reed is more likely to retain his improvements over the rest of the season, all else being equal, than Walker.


All graphs/charts are denominated in strokes better or worse than PGA Tour average. Negative numbers indicate performances better than PGA Tour average.


6 responses to “Putting Driven Performance Changes are Illusory”

  1. E April 1, 2014 at 9:53 PM

    Good stuff man

  2. Pingback: The Most Overrated Golfers at the US Open | Golf Analytics

  3. Pingback: Predicting Putting Performance by Distance | Golf Analytics

  4. Pingback: Don’t Trust a Hot Putter | Golf Analytics

  5. Pingback: Most Improved in 2015 | Golf Analytics

  6. Todd April 27, 2016 at 10:29 AM

    Great stuff Jake. I have been studying professional golf recently hoping to be able to create a model that could be used to predict a golfer’s performance for a tournament. I have been primarily focusing on strokes gained; however, the data the PGA publishes for strokes gained: tee to green is merely the relative performance of a golfer’s round compared to the average score of all golfers for that round, minus strokes gained: putting. Broadie publishes the strokes gained for the top few players for Drive, Appr, Short, & Putt each week on the PGA’s website, but I have not seen the data for strokes gained (Drive, Appr, Short) anywhere else on the site.

    I believe that a golfer’s score includes luck. I have reached the same conclusion that significant numbers (positive or negative) in strokes gained: putting over the short term are primarily luck and will soon revert to the mean.

    I have read several articles in your blog and was curious if you have been able to develop a good model for predicting performance and if so what are some of the key components it includes.

    Best Regards,
    Todd
