Golf Analytics

How Golfers Win

Monthly Archives: September 2013

Best Rounds of 2013 (and the best round ever?)

The 2013 PGA Tour season concluded today at the Tour Championship. I hope to write some more comprehensive season retrospectives in the next few months, but with events remaining on both the Web.com and European Tours, a complete recap will come at the end of the calendar year. However, a short post on the best PGA Tour rounds of the season is appropriate. These are the top-five rounds relative to the field and adjusted for the strength of the field (so a 67 at Merion is roughly equal to a 61 at TPC Scottsdale).

5. Boo Weekley, 4th round at the Tampa Bay Championship
The Copperhead Course at Innisbrook is considered fairly difficult and averaged a 71.6 during the final round. Weekley started the round six back of the trio of Justin Leonard, Kevin Streelman, and George Coetzee, sitting in 35th place, but jumped all the way to solo 2nd with a 63, 8.6 strokes better than the field. Weekley followed up this performance with a win at the Colonial, his first victory since 2008.

4. Keegan Bradley, 1st round at the Byron Nelson Championship
Keegan’s opening-round 60 was the second 60 on the PGA Tour in 2013, following Phil’s 60 at the Waste Management Open on a very easy TPC Scottsdale course. The field played TPC Four Seasons in 69.8 that day. Keegan entered the final round in the lead, but Sang-Moon Bae beat him by a stroke to claim his first PGA Tour title.

3. Tiger Woods, 2nd round at the WGC-Bridgestone
Tiger has always dominated at Firestone, but this year’s seven-stroke victory was something special. His second-round 61 followed a first-round 66 and opened up a comfortable seven-stroke lead that he would take into the clubhouse Sunday. The field played Firestone in 71.2 during this round.

2. Matt Kuchar, 3rd round at the BMW Championship
This round got lost in the hoopla around the #1 round on this list, but Kuchar shot a 61 when Conway Farms played to a 70.3. Unfortunately for Kuchar, he started the round 16 strokes back of the leader and followed his 61 with a 73 to finish T24. His week was a pretty good example of why momentum likely doesn’t exert much influence on golf performance.

1. Jim Furyk, 2nd round at the BMW Championship
It’s funny that Conway Farms was lit up for the two best rounds of the year, as it actually played much more difficult than either East Lake or TPC Boston, where the lowest rounds were a 64 and a 62. Furyk’s 59 came against a field that shot 71.1, meaning his round registered around 13 strokes better than what an average PGA Tour player would’ve shot that day. Like Kuchar, Furyk didn’t play particularly well in the other three rounds and settled for solo third.

I haven’t checked my entire database, but I’m pretty confident Furyk’s 59 is the best round on the three major tours since at least 2008. The other two 59s (Appleby in 2010 at the Greenbrier and Goydos in 2010 at the John Deere) came on very easy courses, as did David Duval’s 59 at the 1999 Bob Hope Classic and Chip Beck’s 59 at the Las Vegas Invitational. Bill Barnwell investigated where it ranks all-time at Grantland and determined it was 9th best all-time. He didn’t adjust for the talent of the field, however (the BMW field was around 0.3 standard deviations better than PGA Tour average), an adjustment that would boost Furyk’s round above all but the top four on Barnwell’s list and make it the best round since 1996.
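For the curious, the adjustment works out roughly like this. This is a back-of-the-envelope sketch, not the exact model; the three-strokes-per-standard-deviation conversion is the same rough rule of thumb used later in this archive (divide strokes by three to get a Z-Score).

score = 59.0              # Furyk's second round
field_avg = 71.1          # Conway Farms field average that day
field_strength_sd = 0.3   # BMW field vs. PGA Tour average, in standard deviations
strokes_per_sd = 3.0      # rough strokes-to-Z-Score conversion used on this blog

vs_field = field_avg - score                                   # 12.1 strokes better than the field
vs_average_player = vs_field + field_strength_sd * strokes_per_sd
print(round(vs_average_player, 1))                             # ~13.0 strokes better than an average Tour player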

Predicting Performance based on the Previous Weekend is Hard

Possibly the most overused idea among those involved in predicting golf is that a player’s performance in the previous week’s tournament holds a lot of predictive power over their performance in the current tournament. Golfers who win are said to be “in form”, while those who struggle and miss the cut are said to be struggling with their game. However, no one has ever provided evidence that the prior week should actually be factored into a prediction about the following week.

To examine whether this is actually a thing, I gathered performance data from the last ten weeks of PGA Tour tournaments (AT&T to Deutsche Bank) from a sample of over 300 golfers. If a golfer played in consecutive weeks, I added his rounds to the sample. In the end I had 497 pairs of prior week -> following week data.

The results weren’t exactly promising for “in-form” advocates. The correlation between week 1 and week 2 was only R = 0.19. This means that if you only knew the performance of a golfer in the prior week, you would predict the following week by regressing that performance over 80% towards the mean (which was -0.07 for the sample).
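In concrete terms, and assuming the prior and following weeks have roughly the same spread, the prediction is just a heavy shrink toward the sample mean. A quick sketch with a hypothetical golfer:

r = 0.19             # week-to-week correlation from the 497 pairs
sample_mean = -0.07  # negative Z-Scores are better than average
prior_week = -1.50   # hypothetical golfer who was 1.5 standard deviations better than average

predicted_next_week = sample_mean + r * (prior_week - sample_mean)
print(round(predicted_next_week, 2))   # about -0.34; over 80% of the edge evaporates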

Now, perhaps there is a prior week effect, but it only appears when you factor underlying skill into the regression. To test this, I took each golfer’s weighted two year Z-Score from the week following the Travelers Championship and added that variable to the prior week variable to predict the following week. The results follow:

R = 0.32, y = (0.94*SKILL) + (0.10*PRIOR) + 0.16; both variables were statistically significant at the 95% level

However, this indicates that underlying skill is over nine times more important than the prior week in predicting the following week. In other words, to predict how a golfer will perform the following week, look at how he’s performed in the last two years rather than just how he played the week earlier.
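To see what those coefficients imply, here is a small sketch plugging hypothetical values into the fitted equation. A huge swing in the prior week barely moves the forecast compared to underlying skill:

# Fitted model from above: next week = 0.94*SKILL + 0.10*PRIOR + 0.16
def predict_next_week(skill_z, prior_week_z):
    return 0.94 * skill_z + 0.10 * prior_week_z + 0.16

# Hypothetical golfer whose two-year weighted Z-Score is -0.50 (solidly above average):
print(round(predict_next_week(-0.50, -2.00), 2))   # coming off a great week:  -0.51
print(round(predict_next_week(-0.50, +2.00), 2))   # coming off an awful week: -0.11

A four-standard-deviation swing in last week’s result moves the forecast by only 0.4; the same swing in the two-year skill number would move it by nearly 3.8.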

Steven Fox’s Professional Future

I’ve never liked writers who bury the lead, so: Steven Fox’s pro future is most likely bleak. Fox announced he was turning pro today and will aim to qualify for the Web.com Tour through qualifying school this fall. Fox competed for Tennessee-Chattanooga for the past four seasons and won the 2012 US Amateur title. However, he’ll most likely struggle to ever distinguish himself on the major tours if his past play is any indication.

First, let’s examine Fox’s college performance. During his freshman and sophomore years he rated 306th and 213th in the country according to Jeff Sagarin’s rankings, but he developed enough to rank 104th and 84th in his final two years in school, with his US Amateur victory coming between those two seasons. Now, that’s a perfectly respectable college career, but nothing that indicates serious professional success. Fox’s career Sagarin Rating was 72.1 – 346th among all college players who competed between 2008 and 2013. Even if we consider only his final two seasons, his Sagarin Rating was 71.4 – which would rank about 150th. Among the 80 players who played college golf between 2008 and 2013 and recorded at least 20 rounds in a season on the PGA/European/Web.com Tours, Fox’s career Sagarin Rating would rank 68th. Of the thirteen players who finished college with a worse rating than him, only Keegan Bradley has achieved any success on the PGA Tour, and that only after several years playing on the developmental tours.

Fox’s situation is fairly unusual, however, when considered next to those 80 players. Because of his US Amateur victory, he’s received exemptions into nine different PGA Tour tournaments this season, including three of the Majors. It’s almost unprecedented for a player of his talent level in college to play 18 Major Tour rounds so soon after leaving college. The only problem is that Steven Fox has been awful in those 18 rounds: he’s played one standard deviation worse than PGA Tour average and missed every cut. For comparison purposes, the PGA Tour average is 0.00, the European Tour average is around +0.20, and the Web.com Tour average is around +0.35. Typically the worst full seasons by Major Tour players approach +1.00. Obviously such a poor performance over such a small sample should be regressed towards the mean a lot. My research has shown you should add 25.5 rounds of average play to a sample to regress it, which would leave Fox rated around +0.41, below Web.com Tour average.
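The regression math is simple enough to show directly. A minimal sketch using the numbers above, treating his 18 rounds as exactly +1.00:

observed_z = 1.00        # roughly one standard deviation worse than PGA Tour average
observed_rounds = 18
padding_rounds = 25.5    # rounds of average (0.00) play added to regress the sample

regressed_z = (observed_z * observed_rounds) / (observed_rounds + padding_rounds)
print(round(regressed_z, 2))   # ~0.41, still worse than the ~+0.35 Web.com Tour average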

Alternatively, using the college to pro projection model I introduced last week, we can plug Fox’s career 72.05 Sagarin Rating into the Pro=(0.277*College)-19.453 equation. Fox’s college play indicates he’s expected to play at around +0.48, similar to his regressed Z-Score numbers and also worse than Web.com Tour average.
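Running Fox’s rating through the equation as printed gives a figure a hair above the +0.48 quoted here, presumably because the published coefficients are rounded:

college_sagarin = 72.05
projected_z = 0.277 * college_sagarin - 19.453
print(round(projected_z, 2))   # ~+0.50 with the rounded coefficients shown above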

Using a composite of the two models, it’s reasonable to expect Fox to be worse than Web.com Tour average if he competes next season. However, even getting onto the Tour will require him to qualify in Q-School. I have no numbers on the expected quality of the Q-School field because it’s the first season under these qualification rules, but it seems likely that there will be many players as good as or better than Fox. All considered, it’s impossible to fault Fox for turning pro (like I absolutely would have if he had left Chattanooga after last year’s US Amateur victory), but expectations for his performance should be set extremely low. His college and pro records just do not measure up well against what typically indicates professional success.

Predicting Professional Performance of Collegiate Golfers (Part II)

Yesterday, I posted a comprehensive look at the performance of collegiate golfers during their first season in Major Tour (PGA/Web.com/European) professional golf and examined how correlated those results were with Jeff Sagarin’s Rankings at Golfweek. However, I was concerned about that study, largely because I did not remove golfers who took several years to actually record >20 Major Tour rounds. It can often be very difficult for non-elite college golfers to play regularly on the Major Tours right after graduation. Many play on the minor league tours (eGolf/NGA) or one of the international tours (Challenge/Asian/etc.). Obviously, this introduces bias into the study if we’re comparing, for example, Jordan Spieth’s season right after leaving college and Chesson Hadley’s season three years after he graduated. Part of success in pro golf is learning how to endure the grind of a season – securing entrance into tournaments, finding sponsors, and performing well enough to earn a living. Add to that the fact that golfers almost always become better players from their early 20s to mid 20s, and I’m not sure I trust the reliability of yesterday’s study.

To correct for that bias, I removed all seasons from the sample that occurred more than one year after the golfer’s last season in collegiate golf. For example, Keegan Bradley last competed in college in 2008, but did not record Major Tour rounds until 2010. He’s dropped from my sample, along with roughly half of the seasons. I followed the same methodology as yesterday using only the seasons that met this new criterion.

N=35, average college seasons = 3.4, average Sagarin = 70.7, average pro performance in Z-Score = 0.10

[Figure: college golf regression 2]

The results showed a much stronger correlation than yesterday (R=0.70). In fact, this correlation is almost exactly equal to what I found earlier this week when examining the correlation between sets of professional seasons. This indicates that Sagarin’s Rankings are an extremely valuable predictor of professional success, even more so than what I found yesterday.

Predicting Professional Performance of Collegiate Players (REDUX)

Let’s start with some tough love: my first attempt at performing this study was laughably lazy. Hopefully this effort will be less awful, considering I think I now have a somewhat proper sample.

Quickly, I’m looking for the correlation between Jeff Sagarin’s college golf rankings and my own Z-Score Model Ratings. Sagarin publishes the best (only?) math-based ranking of college golfers. There are obviously issues of sample size in the college game (most teams play fewer than 15 tournaments, normally of three rounds each), but the Rankings are fairly strongly correlated between seasons (R = 0.62) even though players are in a volatile period of their golf development. To combat concerns about sample size, I’ve averaged each golfer’s Ranking over his college career. This isn’t ideal either, but, again, a max of 45 rounds isn’t something I’m comfortable using.

Once I had those, I looked in my Z-Score database for the first instance of those players playing >20 rounds in one season and took their Z-Score from that season. A few concerns about this method of finding seasons: 1) if a golfer has fewer than 20 rounds in every season, he won’t show up at all, 2) if a golfer has fewer than 20 rounds before getting more than 20 rounds in a subsequent season, that first season will be ignored, and 3) it can often be several years before a golfer accumulates >20 rounds on the PGA/Web.com/European Tours (I do not have eGolf/NGA/Challenge/Asian/etc. Tour Ratings). Of these concerns, #1 isn’t that big of a deal. Plenty of collegians don’t have the game for high-level pro golf – there are fewer than 1,000 guys who play regularly on the three Tours I track, and I’ve gathered data on the top 500 golfers from each season. #2 isn’t very concerning either; the sample has to be set somewhere. #3 concerns me the most, because comparing a 26-year-old with three seasons of minor tour golf to a 21-year-old right out of college is kind of apples and oranges, but perhaps I’ll run another study in the future that excludes those data points.
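For anyone curious how the season selection works mechanically, here is a rough sketch. The table and column names are hypothetical stand-ins for my database, not its actual layout:

import pandas as pd

# Hypothetical table: one row per player-season with columns player, season, rounds, z_score
seasons = pd.read_csv("pro_seasons.csv")

qualifying = seasons[seasons["rounds"] > 20]
first_pro_season = (qualifying.sort_values("season")
                              .groupby("player", as_index=False)
                              .first())
# Each golfer's earliest season with >20 Major Tour rounds; golfers who never reach
# 20 rounds in any season (concern #1) drop out, and sub-20-round seasons before the
# first qualifying one (concern #2) are ignored.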

First, some information about my sample: N = 80; all but three golfers had at least two seasons of college golf (the average was 3.2 seasons), with Spieth, Todd Baek, and Roger Sloan being the single-season players; the average performance in college was a 71.1; and the average performance in pro golf was a +0.15 Z-Score (below average). I’ve chosen to display the pro results in terms of strokes better than/worse than average. Divide by 3 to get the corresponding Z-Score.

The results were much less irrelevant than that turd I linked above:

[Figure: college golf regression]

The correlation was R = 0.49, which indicates that we can predict pro performance on a roughly 50% Sagarin/50% mean basis. The equation to use is y = 0.47x - 32.9, where y is pro performance in strokes relative to average (divide by 3 to get the Z-Score) and x is Sagarin Rating. For comparison, I’ve found that the correlation between back-to-back professional seasons is about 70% (70% Year 1 + 30% mean) and the correlation between back-to-back college seasons is about 63% (63% Year 1 + 37% mean). Based on the concerns I laid out above, I think that’s not terrible.
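A few example inputs make the scale of the equation clearer. Note that a career Sagarin of 70.0 (roughly the top-15 threshold discussed below) projects to almost exactly PGA Tour-average golf:

# Projection from the regression above: strokes vs. average, then /3 for the Z-Score
def project_pro(sagarin_rating):
    return 0.47 * sagarin_rating - 32.9

for rating in (70.0, 71.1, 72.5):
    strokes = project_pro(rating)
    print(rating, round(strokes, 2), round(strokes / 3, 2))
# 70.0 -> 0.00 strokes (Tour average); 71.1 (the sample average) -> ~0.52;
# 72.5 (Keegan's college number, discussed next) -> ~1.18 strokes worse than average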

Unsurprisingly, this method of predicting has its misses. Sagarin did not think highly of Keegan Bradley coming out of St. John’s. Whether it was poor play or an awful schedule, Keegan averaged a 72.5 over three years in school. Keegan was also one of those guys who turned pro and took several seasons to record Major Tour rounds: he graduated in 2008 and didn’t record a Major Tour round until 2010.

I do take solace in the fact that no golfer who averaged better than a 70.0 (basically in the top 15 each season) failed to perform better than the sample average in their first season. This indicates that success in college is correlated with success in professional golf. In fact, only a single player with a Sagarin below 70.0 in the entire 2005-2013 sample (who has graduated) has failed to record 20 or more Major Tour rounds: Arnond Vongvanij, who has played exclusively on the Asian Tour, has a professional win, and is ranked 218th by the Official World Golf Ranking.

This method predicts success for the best collegiate golfer not currently in pro golf, Justin Thomas (69.2), who plans to turn pro after the Walker Cup. He’s recorded a -0.06 Z-Score in 12 rounds dating back to 2012.

Predicting Golfer Performance in Full Seasons

This is a long-overdue post. My main interest in golf analytics is predicting future performance, and this post will lay out how well I can predict full seasons from prior data. The inputs are simple: 1) my Z-Score data from the 2009-2012 seasons collected from the PGA, Web.com, and European Tours and 2) my Z-Score data for all rounds on those three Tours so far in 2013. For all of these regressions I’ve limited my analysis to golfers with greater than 25 rounds in both 2012 and 2013. I’ve run regressions using 1 Year (2012), 2 Years (2011-12), 3 Years (2010-12), and 4 Years (2009-12), all on 2013. Also, I’ve weighted the 1 Year and 2 Year samples by recency and regressed those on 2013.

[Figure: regression results for the 1-, 2-, 3-, and 4-year samples against 2013]

Prior work I’ve done suggests that simply adding 25 rounds of PGA Tour average (0.00) to a golfer’s sample is a good prediction of what they will do going forward. That means a golfer with 50-100 rounds in a season (pretty much all Tour regulars) will be regressed anywhere from 20% to 33% to the mean. The above results are completely in line with that prior finding, showing regression of anywhere from 28% to 31%.
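The add-25-rounds-of-average rule translates directly into those regression percentages. A quick sketch of the arithmetic:

# Weight on the mean = padding rounds / (observed rounds + padding rounds)
padding = 25
for rounds in (50, 75, 100):
    shrink_toward_mean = padding / (rounds + padding)
    print(rounds, f"{shrink_toward_mean:.0%}")   # 50 -> 33%, 75 -> 25%, 100 -> 20%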

What surprised me, though, was the similarity of the results. This suggests that if you’re not going to weight data, it doesn’t actually help to include more years of it. You’re pretty much stuck with ~30% regression no matter what. However, even weighting the data barely improves the overall accuracy of the predictions.

Now, this is a fairly basic regression of data. It includes golfers with very few rounds in both seasons (though increasing the rounds requirement to >50 with the weighted 2 Year data produced no improvement in accuracy), it includes golfers who were playing regularly on all three Tours, and it ignores a host of other information like age, status on the Tours, injuries, etc. But the results are pretty compelling – predicting golfer performance is difficult even over a full season, so much so that we should regress observed performance 30% to the mean to best predict future performance.