On Monday, I wrote about how Brandt Snedeker had just re-entered the top 50 in the Official World Golf Ranking after his win at Pebble Beach, earning enough points that he will likely be invited to all the majors and WGC events that use the OWGR as a criterion for entry. Snedeker had fallen outside the top 50 – despite playing at a level that placed him around 30th in the world in actual on-course performance – largely because he hadn't won since July 2013. He had slipped behind a host of European Tour golfers who had won on their tour, but whose actual on-course performance was inferior to Snedeker's over the past two seasons. In this post I'll go into more detail about how the OWGR harms US-based golfers, shifting exemptions into majors and important WGC events toward lesser golfers from non-PGA Tour circuits.
Broadie and Rendleman (2012) went into a lot of detail about the bias inherent in the OWGR; I encourage you to at least peruse that paper. They rated all golfers from 2002-2010 using actual on-course performance and then compared those ratings to the OWGR. Their findings indicate that PGA Tour golfers are ranked significantly lower than golfers from the other major tours when controlling for on-course performance. In short, the fields in non-PGA Tour events are systematically overrated, making a win in the Malaysian Open or Nordea Masters look comparable to a win in a full-field PGA Tour event.
This bias is starkly visible. Below I've plotted, for the golfers in the OWGR top 100, the percentage of rounds played on the European Tour in the past two years (2/2013-2/2015) against the difference between where my rating system and the Sagarin/Golfweek rating system rank each golfer and where the OWGR ranks him. For example, my rating has Brooks Koepka 33rd, while the OWGR has him 19th; that is represented on the chart as +14. I've included the Sagarin/Golfweek numbers because they're the best publicly available objective system to compare against.
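The chart values are simply the gap between an objective rank and the OWGR rank. A minimal sketch of that sign convention (only the Koepka numbers come from the text; the helper name is my own):

```python
def owgr_overrating(objective_rank, owgr_rank):
    """Positive = the OWGR ranks the player higher (better) than
    his on-course performance warrants."""
    return objective_rank - owgr_rank

# Brooks Koepka: 33rd by the objective rating, 19th by OWGR -> +14
print(owgr_overrating(33, 19))
```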
Note that most of the golfers who have played primarily on the European Tour appear above the origin, indicating they are rated higher by my objective system and Sagarin's objective system than by the OWGR. In other words, they're earning spots in majors/WGC events that their performance doesn't necessarily justify.
The root problem is that the OWGR does not properly evaluate strength of field. The way the points are calculated, even objectively very weak European Tour fields receive a minimum number of ranking points comparable to PGA Tour events. The ratings are also recursive: events receive credit for every top-200 OWGR-ranked player who enters, so overrated European Tour fields produce overrated European Tour players, which in turn produce more overrated European Tour fields. At no point does the OWGR step back and ask: how good is this field, really?
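The feedback loop can be sketched in toy form. This is not the actual OWGR formula – the strength units and the floor value here are hypothetical – but it captures the structure described above: field strength counts top-200-ranked entrants, and a points floor guarantees even weak fields a minimum award.

```python
# Toy sketch of the OWGR field-strength loop (NOT the real formula).
# Assumptions: each top-200 entrant adds one "strength" unit, and a
# hypothetical floor guarantees even very weak fields a minimum award.
MIN_WINNER_POINTS = 14  # hypothetical floor

def winner_points(entrant_owgr_ranks):
    strength = sum(1 for r in entrant_owgr_ranks if r <= 200)
    return max(strength, MIN_WINNER_POINTS)

# A field with no top-200 players still pays out the floor...
weak_field = [250, 310, 420, 515]
print(winner_points(weak_field))  # 14
# ...which lifts the winner into the top 200, so the next edition of the
# same weak event counts him toward its strength: overrated fields beget
# overrated players, which beget more overrated fields.
```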
Comparing an average PGA Tour event (the Zurich Classic) to an average European Tour event (the Omega Masters) makes the difference in field quality stark. I've plotted the number of golfers in nine quality bins, from elite (2.3 strokes better than average or more) to awful (2.3 strokes worse than average or more).
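The binning itself is straightforward. A sketch of how a field could be sorted into those nine bins – the interior bin edges (nine bins spanning -2.3 to +2.3 strokes, with open-ended elite and awful bins) and the sample field are my assumptions, not taken from the chart's underlying data:

```python
import numpy as np

# Strokes vs. an average pro (negative = better). Values are made up.
field = np.array([-2.8, -1.5, -0.4, 0.0, 0.9, 1.6, 2.5])

# Nine quality bins: <= -2.3 (elite) ... >= +2.3 (awful).
# The eight edges are an assumption about how the chart is divided.
edges = np.linspace(-2.3, 2.3, 8)
bins = np.digitize(field, edges)          # 0 = elite bin, 8 = awful bin
counts = np.bincount(bins, minlength=9)   # golfers per quality bin
print(counts)
```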
So, two very different fields. A good player would have a small chance of winning the Zurich (perhaps 2-3% for a player of Brandt Snedeker's ability), while that same player would be one of the favorites at the Omega Masters (perhaps 6-7% to win). The catch is that the OWGR awarded 30 points to the Omega Masters winner and only 36 to the Zurich winner – a trivial gap given the disparity in fields. These differences persist all the way down the leaderboard, systematically awarding more points for European Tour performances than for comparable PGA Tour performances.
The difference in field quality is reinforced when you consider all the events on each tour – even ignoring the co-sponsored WGCs and majors. Ranked side by side as below, the comparable European Tour event has a field approximately half a stroke worse in overall quality than the similar PGA Tour event. Field quality is expressed in strokes better than an average pro (approximately the 200th-best golfer in the world).
Combining the objective quality of field with the OWGR points awarded to each tournament's winner produces the graph below. I've charted all 2014 PGA Tour events, European Tour events, and majors/WGCs. The best-fit line shows the number of points each tournament should award if points were based solely on the objective quality of the field.
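That best-fit line is just an ordinary least-squares fit of winner's points on field quality. A sketch with placeholder data – only the Zurich (36) and Omega Masters (30) point totals come from the text; the field-quality values and the other two events are illustrative:

```python
import numpy as np

# (field quality in strokes vs. an average pro, OWGR points to the winner)
# Only the 36 and 30 point totals are from the article; the rest are
# placeholders for illustration.
events = [
    (1.9, 36),   # Zurich Classic-like full-field PGA Tour event
    (1.3, 30),   # Omega Masters-like European Tour event
    (0.6, 22),   # weak-field event
    (2.6, 54),   # WGC/major-strength field
]
x, y = np.array(events, dtype=float).T

slope, intercept = np.polyfit(x, y, 1)  # least-squares best-fit line

def fair_points(field_quality):
    """Points a tournament 'should' award if driven purely by field quality."""
    return slope * field_quality + intercept
```

Events plotted above the line are over-rewarded relative to their field; events below it are under-rewarded.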
Notice the cluster of events in the bottom left: those 15 tournaments have the field quality of Web.com Tour events while awarding an average of 22 points. The bias inherent in the OWGR largely stems from those 15 tournaments – mainly events in South Africa and Asia, like the Malaysian Open, Thailand Classic, and South African Open, all played in the past two months. In fact, right now the OWGR is likely as biased toward European Tour golfers as it will get all year – just in time to award exemptions into the WGC event at Doral, the Masters, and the WGC match-play event.