Golf Analytics

How Golfers Win


Optimal start hole (24 for 1 playoff)

Many of you have probably seen that there was a 24-man playoff for the 64th spot in match play at the US Amateur. The USGA sent six foursomes off the 17th hole, planning to alternate the 17th and 18th holes until one player prevailed. In the end, two of the 24 made birdie at the 17th and advanced to the 18th, where one made triple and the other made bogey to win. In total, the playoff took 90 minutes to complete (26 player-holes were played).

While the playoff was going on, I suggested it would be an interesting exercise to find the optimal starting hole. In the end, 26 player-holes were played in 90 minutes, but as I’ll show below, the median expected holes played starting at the 17th was 31, and 35% of the time the playoff was expected to extend past the 18th hole.

To determine the optimal hole, I looked at three different outcome statistics:

  1. median holes played (each time a player plays a hole counts as one hole played; the minimum is 24 if one player wins outright on the first playoff hole)
  2. the chance of not returning to the starting hole (i.e., the playoff ends after at most two holes)
  3. the chance of keeping the total holes played under 50

These are arbitrary, and I’m sure the USGA’s decision was mainly based on proximity to the warm-up areas and clubhouse.

I used the scoring stats from the 2010 US Open at Pebble Beach, as a USGA event in summer better represents the conditions than a PGA Tour event in February. The scoring data is available here for anyone who wants to replicate this analysis. Just organize your CSV as hole_no, score_to_par, count, where score_to_par shows -2 for eagle, -1 for birdie, and so on.
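For reference, the first few rows of that CSV might look like this (the counts here are made up purely for illustration):

hole_no,score_to_par,count
1,-1,42
1,0,228
1,1,79
1,2,11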

Results
Based on these results, in terms of limiting holes played (lowest median value), the 5th, 12th, and 17th holes (all par 3s) stand out. In the case of the 5th and 17th, the par 3 is followed by a par 5. The 6th hole, at a median of 38 holes, is the least optimal.

In terms of reducing the chance of returning to the starting hole (finishing in two or fewer playoff holes), the 5th, 12th, and 17th again triumph, with about a 64-65% chance of the playoff lasting only two holes. The #18-to-#1 combination comes in last, with only 41% ending in two or fewer playoff holes.

The least interesting outcome is avoiding playing more than 50 holes. If 26 holes actually took 1.5 hours, a reasonable guess is that 50 holes would have taken around 3 hours – finishing as the first round-of-64 match was nearing the end of the front 9.

In those terms, starting at the 5th and 6th holes gives only a 90% chance of finishing in 50 or fewer holes played, while starting at the 17th and 18th holes yields a 99.5% chance of finishing in 50 or fewer holes played.

Based on that, starting at the 17th hole ranks as one of the clearly optimal choices, if not the best one! Kudos, USGA. Kudos to you too if you can keep your solution under my absurd 313 lines.

Code
I’ve posted my very for-loop-heavy code below for anyone who wants to replicate this.



library(dplyr)
library(tidyr)
library(readr)

# bring in prepared CSV showing data in form hole_no, score_to_par, count

scoring_data <- read_csv("pebble-beach-scoring-2010.csv")

# players qualifying (just one player advancing here) and field size (# of players in the playoff)

PQ <- 1
FIELD <- 24

# calculates percentages of birdie, par, bogey, etc.

setup_scoring <- scoring_data %>%
  group_by(hole_no) %>%
  mutate(perc = count / sum(count)) %>%
  ungroup() %>%
  select(-count) %>%
  # the spread/gather round trip creates NAs for score values a hole never
  # recorded, which we then set to zero
  spread(score_to_par, perc) %>%
  gather(score_to_par, perc, -hole_no) %>%
  mutate(perc = ifelse(is.na(perc), 0, perc),
         score_to_par = as.numeric(score_to_par)) %>%
  spread(score_to_par, perc) %>%
  # we'll use a random function later on, so define which parts of the 0 to 1
  # continuum reflect the probability of eagle, birdie, par, etc.
  mutate(eagle = `-2`,
         birdie = `-1` + eagle,
         par = `0` + birdie,
         bogey = `1` + par,
         worse = 1 - bogey) %>%
  select(hole_no, eagle:worse)
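To make the cut-points concrete: if a hole plays to P(eagle) = .01, P(birdie) = .20, P(par) = .55, P(bogey) = .20 (made-up numbers, not real Pebble Beach values), the cumulative columns become .01, .21, .76, and .96, and a uniform draw maps onto a score like this:

# illustrative cut-points for one hole (numbers made up for illustration)
cuts <- c(eagle = .01, birdie = .21, par = .76, bogey = .96)

draw <- runif(1)
score <- ifelse(draw < cuts["eagle"], -2,
                ifelse(draw < cuts["birdie"], -1,
                       ifelse(draw < cuts["par"], 0,
                              ifelse(draw < cuts["bogey"], 1, 2))))
# e.g. a draw of .30 falls between .21 and .76, so the simulated score is par (0)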

# we now enter the for loop hacks zone
# create a data frame with a row for each hole_no and competitor (24 players)

competitors <- vector("list", FIELD)

for(p in 1:FIELD) {
  d <- setup_scoring %>%
    mutate(comp = p)
  competitors[[p]] <- d
}

competitors <- bind_rows(competitors)
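As an aside, this particular loop is easy to avoid: a cross join produces the same 24 stacked copies of the scoring table (row order differs, which doesn’t matter here). A sketch using base merge(), which returns the Cartesian product when by = NULL:

# the same result without a loop: cross join the scoring table
# against the 24 competitor ids
competitors <- merge(setup_scoring, data.frame(comp = 1:FIELD), by = NULL)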

# link consecutive holes including #18 to #1
# with some knowledge of realistic back to back holes you could expand to cover all options (for example #3 to #17 or #16 to #4)

two_holes <- vector("list", 18)

for(h in 1:18) {
  d <- competitors %>%
    filter((hole_no == h | hole_no == h + 1) | (h == 18 & hole_no %in% c(1, 18)))
  two_holes[[h]] <- d %>%
    mutate(start_hole = h)
}

two_holes <- bind_rows(two_holes)
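A quick way to confirm the #18-to-#1 wraparound worked:

# starting at the 18th should pair holes #18 and #1
two_holes %>%
  filter(start_hole == 18) %>%
  distinct(hole_no)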

# run the main simulation for loop

tictoc::tic()

# number of simulation iterations per starting hole
it <- 1000

holes_data <- vector("list", 18)

for(h in 1:18) {
  
  data <- two_holes %>%
    filter(start_hole == h)
  
  sim_data <- vector("list", it)
  
  for(i in 1:it) {
    
    # the logic here is that we're just simulating a single run of the first hole
    # and removing anyone who does not earn the best score;
    # we then filter the data for the next hole and continue on
    # this can be for looped as well (see the sketch after the simulation)
    
    first_hole <- data %>%
      filter(hole_no == h) %>%
      # draw a uniform number and map it onto the cumulative score cut-points
      mutate(s = runif(n(), min = 0, max = 1),
             s = ifelse(s < eagle, -2,
                        ifelse(s < birdie, -1,
                               ifelse(s < par, 0,
                                      ifelse(s < bogey, 1, 2))))) %>%
      # keep only the player(s) tied for the best score
      mutate(rk = rank(s, ties.method = "min")) %>%
      filter(rk == 1) %>%
      select(comp) %>%
      as.list() %>%
      .[[1]]
    
    left_after_1 <- length(first_hole)
    
    second_hole <- data %>%
      filter(hole_no != h & comp %in% first_hole) %>%
      mutate(s = runif(n(), min = 0, max = 1),
             s = ifelse(s < eagle, -2,
                        ifelse(s < birdie, -1,
                               ifelse(s < par, 0,
                                      ifelse(s < bogey, 1, 2))))) %>%
      mutate(rk = rank(s, ties.method = "min")) %>%
      filter(rk == 1) %>%
      select(comp) %>%
      as.list() %>%
      .[[1]]
    
    left_after_2 <- length(second_hole)
    
    third_hole <- data %>%
      filter(hole_no == h & comp %in% second_hole) %>%
      mutate(s = runif(n(), min = 0, max = 1),
             s = ifelse(s < eagle, -2,
                        ifelse(s < birdie, -1,
                               ifelse(s < par, 0,
                                      ifelse(s < bogey, 1, 2))))) %>%
      mutate(rk = rank(s, ties.method = "min")) %>%
      filter(rk == 1) %>%
      select(comp) %>%
      as.list() %>%
      .[[1]]
    
    left_after_3 <- length(third_hole)
    
    fourth_hole <- data %>%
      filter(hole_no != h & comp %in% third_hole) %>%
      mutate(s = runif(n(), min = 0, max = 1),
             s = ifelse(s < eagle, -2,
                        ifelse(s < birdie, -1,
                               ifelse(s < par, 0,
                                      ifelse(s < bogey, 1, 2))))) %>%
      mutate(rk = rank(s, ties.method = "min")) %>%
      filter(rk == 1) %>%
      select(comp) %>%
      as.list() %>%
      .[[1]]
    
    left_after_4 <- length(fourth_hole)
    
    fifth_hole <- data %>%
      filter(hole_no == h & comp %in% fourth_hole) %>%
      mutate(s = runif(n(), min = 0, max = 1),
             s = ifelse(s < eagle, -2,
                        ifelse(s < birdie, -1,
                               ifelse(s < par, 0,
                                      ifelse(s < bogey, 1, 2))))) %>%
      mutate(rk = rank(s, ties.method = "min")) %>%
      filter(rk == 1) %>%
      select(comp) %>%
      as.list() %>%
      .[[1]]
    
    left_after_5 <- length(fifth_hole)
    
    sixth_hole <- data %>%
      filter(hole_no != h & comp %in% fifth_hole) %>%
      mutate(s = runif(n(), min = 0, max = 1),
             s = ifelse(s < eagle, -2,
                        ifelse(s < birdie, -1,
                               ifelse(s < par, 0,
                                      ifelse(s < bogey, 1, 2))))) %>%
      mutate(rk = rank(s, ties.method = "min")) %>%
      filter(rk == 1) %>%
      select(comp) %>%
      as.list() %>%
      .[[1]]
    
    left_after_6 <- length(sixth_hole)
    
    results <- tibble::tibble(a1 = left_after_1,
                              a2 = left_after_2,
                              a3 = left_after_3,
                              a4 = left_after_4,
                              a5 = left_after_5,
                              a6 = left_after_6,
                              # a playoff hole only adds to the count if more
                              # than PQ players were still alive to play it, so
                              # a playoff won on the first hole totals exactly 24
                              total = FIELD +
                                ifelse(a1 > PQ, a1, 0) +
                                ifelse(a2 > PQ, a2, 0) +
                                ifelse(a3 > PQ, a3, 0) +
                                ifelse(a4 > PQ, a4, 0) +
                                ifelse(a5 > PQ, a5, 0) +
                                ifelse(a6 > PQ, a6, 0),
                              ends_by = ifelse(a1 == PQ, 1,
                                               ifelse(a2 == PQ, 2,
                                                      ifelse(a3 == PQ, 3,
                                                             ifelse(a4 == PQ, 4,
                                                                    ifelse(a5 == PQ, 5,
                                                                           ifelse(a6 == PQ, 6, 7)))))),
                              start_hole = h)
    
    sim_data[[i]] <- results
    
  }
  
  holes_data[[h]] <- bind_rows(sim_data)
  
}

results <- bind_rows(holes_data)

tictoc::toc()
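For what it’s worth, the six copy-pasted blocks above could be collapsed into a function plus an inner loop, as the comment in the simulation hints. A sketch of that version (play_hole and simulate_playoff are names I’m introducing here; same logic as the unrolled code, untested against its output):

# simulate one playoff hole and return the surviving competitors
play_hole <- function(hole_data, alive) {
  hole_data %>%
    filter(comp %in% alive) %>%
    mutate(s = runif(n(), min = 0, max = 1),
           s = ifelse(s < eagle, -2,
                      ifelse(s < birdie, -1,
                             ifelse(s < par, 0,
                                    ifelse(s < bogey, 1, 2))))) %>%
    mutate(rk = rank(s, ties.method = "min")) %>%
    filter(rk == 1) %>%
    pull(comp)
}

# alternate the starting hole and its partner for up to six playoff holes;
# `data` is the two-hole slice for start hole h, as in the main loop
simulate_playoff <- function(data, h, max_holes = 6) {
  alive <- 1:FIELD
  left_after <- integer(max_holes)
  for (k in 1:max_holes) {
    this_hole <- if (k %% 2 == 1) filter(data, hole_no == h) else filter(data, hole_no != h)
    alive <- play_hole(this_hole, alive)
    left_after[k] <- length(alive)
  }
  left_after
}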

# calculate results based on the starting hole

hole_results <- results %>%
  group_by(start_hole) %>%
  summarize(median_holes = median(total),
            mean_holes = mean(total),
            ends_in_1 = mean(ends_by < 2),
            ends_in_2 = mean(ends_by < 3),
            ends_in_3 = mean(ends_by < 4),
            ends_in_4 = mean(ends_by < 5),
            fewer_31 = mean(total < 31),
            fewer_41 = mean(total < 41),
            fewer_51 = mean(total < 51)) %>%
  ungroup()
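From there, ranking the candidate starting holes is a one-liner, e.g.:

hole_results %>%
  arrange(median_holes) %>%
  select(start_hole, median_holes, ends_in_2, fewer_51)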



Measuring Consistency

Consistency is often talked about in golf – by golfers themselves, by the media, and by fans – but it’s one of those undefined words that can be used in any desired way to get your point across. What I’m going to talk about today is week-to-week consistency of performance.

First, we need to set a baseline for performance to compare a player’s week-by-week play to. Finishing in the top 10 of an event represents a very positive performance for a Tour-average player, but is much closer to expectations for the world #1. At 15th Club we use Performance Index to benchmark what we expect from each player in a particular week. For example, this week at the WGC Bridgestone we expect Dustin Johnson and Justin Rose to be the best performers and Kevin Na to be close to the middle of the pack.

With Performance Index shown in strokes versus the field, that’s the metric of choice for measuring performance. The truly elite performances of the last two seasons have been events like Brooks Koepka’s first US Open, Hideki Matsuyama at the Bridgestone, and Molinari at the Quicken Loans – all in excess of 5 strokes versus the field per round.

With measures of performance and expectation established, it’s as simple as measuring how each player measured up at each event over 2017 and 2018. For example, Dustin Johnson had a low of about -5 strokes versus expected per round at the 2017 Memorial and a high of about +3 strokes versus expected per round at the 2018 Tournament of Champions. On the graphs below, I’ve chosen to show this measure as strokes per two rounds to put two-round and four-round events on the same level.

Measured this way, performance is skewed with a longer left tail – in other words, a player’s poor events will be further from 0 than his best events. There could be many reasons for this: injury, the effects of pressure when playing well, the tee-time effect (players playing well into the weekend get the typically tougher afternoon times), less motivation when far from the lead, and other more esoteric factors in how performance is judged. The average event for top-level players is about -0.5 strokes versus expected per two rounds.

10th percentile performance is roughly -5 strokes versus expected per two rounds and 90th percentile performance is roughly +3 strokes versus expected per two rounds. An elite player with a 10th percentile performance will very likely miss the cut, while an average player with a 90th percentile performance will have a great chance at a top 20.

To measure consistency, I’m taking the average absolute difference between actual and expected performance (difference from zero) of every event a golfer has played since the start of 2017. These range from about 1.6 strokes versus expected per two rounds for the most consistent to 5.0 strokes versus expected per two rounds for the least consistent. In other words, if the expected score for the most consistent player is 140 over two rounds, on average the difference between their actual score and 140 will be 1.6 strokes better or worse.
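For anyone replicating this, the metric itself is simple to compute. A minimal sketch, assuming a data frame events with one row per player-event and a (hypothetical) column perf_vs_exp holding strokes versus expected per two rounds:

library(dplyr)

# average absolute difference between actual and expected performance;
# `events` and `perf_vs_exp` are assumed names, not from the original code
consistency <- events %>%
  group_by(player) %>%
  summarize(events_played = n(),
            avg_abs_diff = mean(abs(perf_vs_exp))) %>%
  arrange(avg_abs_diff)  # lowest = most consistent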

More consistent players in 2017-18

Less consistent players in 2017-18

Both lists feature a range of players. The less consistent list has guys like Kaufman and Willett, who have struggled over the last two years, alongside guys like Landry and Mitchell, who have played well. The same goes for the consistent list: Casey is one of the best players in the world, while most of the others sit somewhere in the 100th-250th range in the world.

Along with actually measuring consistency, it’s critical to discuss why it matters. Elite players whose games are very consistent will give themselves fewer opportunities to win than otherwise expected. At the same time, weaker players whose games are very inconsistent will win and be in contention often, but will also no-show most weeks. In a world where everything from your media/fan legacy to World Rankings to prize money is skewed heavily toward rewarding victories, both of these facts distort how players are viewed.

I’ll close with a few notable players:

Justin Rose
[Justin Rose consistency chart]
Rose has managed to avoid any genuinely poor events, with his worst being a T63 at the Bridgestone last year, where he was 8 back after 36 holes and had a lackluster weekend. On the other hand, he’s had eight extreme over-performing events, of which he’s won four and lost another in a playoff.

Paul Casey
[Paul Casey consistency chart]
An average player would be expected to have 7 extremely good or poor events out of 37, but Casey has had only three – two of them his only missed cuts and the third his 6th-place finish at the 2017 Masters. Casey ranks as one of the best-performing golfers in the world over 2017-18, but has struggled to generate the kind of winning performances his peers do.

Jon Rahm
[Jon Rahm consistency chart]
Rahm has a reputation as a volatile player, and that’s borne out here. He’s had more extreme events than expected, mostly skewed toward the positive. His poor events have been notable, though, with the 2017 and 2018 US Opens, and very nearly the 2018 Open, appearing among his lowlights.