Over the past few months I have given quite a bit of thought to how much of the development of hockey analytics seems familiar to me. In my last post I described how, in my view, the divide between old school ways of understanding hockey and contemporary analytics falls into the same well-worn grooves as the qualitative-quantitative division that is common in academia.
Another thing that is familiar to me is the difficulty of gaining support for new ways of approaching a given topic. In academia my area of study was genocide, war crimes, and crimes against humanity. Such topics are typically viewed as being outside the subject matter of criminology, which is supposed to deal with offenses that are laid out by the state in criminal codes. By contrast, crimes like genocide are often committed in the name of a state against its own citizens. In the realm of traditional criminology a pickpocket operating in the downtown of a North American city is clearly part of the subject matter of criminology, while the Nazi extermination of millions of innocent people, including Jews, Sinti, Roma, homosexuals, individuals with mental handicaps, and others, is not.
One thing that really struck me when doing the theoretical component of my dissertation is that, as the new kids on the block in terms of genocide research, criminologists were really caught in a catch-22. Initially, the most common approach was to apply existing criminological theories and methods to the topic of genocide, and show how the conclusions were similar to what was already being discussed in existing genocide scholarship. The problem was that focusing on similarities gave the impression that adding criminology was redundant and essentially pointless. Building support for the inclusion of criminology in genocide studies, and the inclusion of genocide in the subject matter of criminology, meant showing that criminology could bring something new and useful to the table. However, introducing information or approaches that were entirely new also caused problems, because the familiar terms of the discussion were shifted in ways that more traditional genocide scholars were not always enthused about (note: my dissertation falls into this category). In the end, saying that genocide should be a part of criminology involved more than just applying criminological theories and methods to a new topic area. Instead, it was a much more complex endeavour that centered upon simultaneously drawing upon, and recreating, both criminology and genocide studies.
When I look at some of what is done in hockey analytics I definitely see the same type of catch-22, where analytics is caught between providing entirely new perspectives while, at the same time, trying to tacitly conform to some older principles of “common sense.” In a sense, it is re-drawing the map of hockey theory through a process similar to the one described above: by simultaneously drawing upon, and recreating, our understanding of the game.
Same but Different
Travis Yost wrote what I think is an excellent piece for TSN called “Drawing Penalties a Valuable NHL Skill.” Building on Eric Tulsky’s work, Yost argues that drawing penalties is a skill that is repeatable from season to season. As such, some NHL players are better at it than others. The caveat to all of this is that it is really penalty differential that matters, because if you take as many penalties as you draw, or more, you still end up hurting your team over the long run.
I am a big fan of reading comments sections, and the comments section for Yost’s article contained this interesting exchange:
The first poster in this exchange is looking at Yost’s piece through the lens of old school hockey analysis, and he makes the point that there is absolutely nothing new or surprising here. He makes it clear that conventional hockey wisdom already accounts for what is being described in the article. The response is equally interesting: while the description of what is happening may not be new, the fact that the phenomenon has been quantified is.
The catch-22 here is that confirming existing common wisdom, by quantifying and testing it, facilitates discussions of common subject matter that fans can follow regardless of whether they are old school or analytics geeks, but it does not do much by way of illustrating the value of analytics. (Note: a discussion of models could have helped this issue to a degree, but this too would have been a tradeoff.) I can pretty much guarantee that the people who are into analytics will see the initial comment as missing the point entirely, while those who are not convinced about the value of analytics will not be swayed by the inherent value of quantifying what we already know. Furthermore, each poster probably left thinking the other just doesn’t get it.
Would You Put Him on Your First Line?
Another example of a catch-22 faced by the analytics community is how to categorize players who drive possession but do not have particularly good offensive production. A good example of this is Eric Condra from the Ottawa Senators. Although Condra has great possession stats year after year, I have not seen anyone from the analytics community argue that he should be played on the team’s top line. Instead, it is generally accepted that Condra is a fantastic bottom six player in the sense that he does not hurt the team at all while he is on the ice, and his contract is perfectly in line with what you expect to pay for role players who do not score all that much.
This interpretation of Condra is very reasonable and well thought out, and it is framed in such a way that people outside of the analytics community can easily get it. However, the reason non-analytics (or anti-analytics) hockey fans can easily “get” this spin on Condra is that it draws upon terminology that the old school can relate to: he fills a role, he does not hurt the team when he is out there, and he is on a good contract. All of this falls under “common sense.”
The catch-22 occurs when other players who drive possession well are not given a great deal of ice time, or spend time in the pressbox. If examples like Condra show that being good at driving possession does not necessarily make you a top line player, can possession numbers be used to make a case for another player to get more ice time? Sure, you could very carefully make a seemingly bullet-proof argument that does just that, but it potentially opens up the criticism that the standards are not evenly applied. While empirical data can provide pretty clear evidence, interpretation of that evidence can easily, and unintentionally, turn into a leisurely stroll through a minefield.
Common Sense Confirmation
I am a big fan of models. One person I know spent a great deal of time putting together a model of player value, only to toss it out when he looked at the first output and discovered that a third line player (I apologize for not remembering who it was) was valued higher than Sidney Crosby, who was ranked down at about 50th in the league. I found it interesting, because it instantly reminded me of the first time I visited the War-On-Ice site. I went straight to the goaltender data and asked it to rank order goalies from the past several years by save percentage. A list of names I did not recognize, or barely recognized, came up. I then played with filters (specifically minutes played) for a few minutes until I got what seemed to me to be the “right” analysis.
Setting aside the issue of sample size, which is a real thing that can easily mess up analyses, the common feature of these stories is that even though the numbers are being generated, they are not being accepted unless they conform to some degree with what “common sense” or “the eye test” is telling us. However, the catch-22 here is that while our analysis has to make sense in order to be convincing, the value of analytics is badly eroded if we do not simply follow the numbers wherever they lead us.
During the recent All Star Game, Mike Burse put together a table that he called “Analytic Ranking of the 2015 NHL All Stars.” When I opened the page the first thing I noticed was Nick Foligno at the top of the list, and when I quickly scanned down the table I saw a bottom 5 of Patrick Kane, John Tavares, Steven Stamkos, Brent Seabrook, and Anze Kopitar. I have to admit, my jaw nearly hit the floor. My first inclination was to pore over the methodology, specifically the weighting, to see where it had been messed up in such a way as to invert the entire rank ordering. In the end I did not, and I paid him a compliment for sticking to his weighting. For the record, I absolutely loved his response:
@StefanWolejszo thanks! At the end of the day the numbers generate discussion and may make us question our thinking about players.
— Mike Burse (@PuckingNumbers) January 25, 2015
I still don’t know if his numbers are right, or if there is such a thing as “right numbers” to begin with. After all, defining and evaluating who or what is good will always have a subjective component (in this case the selection of metrics and the weighting scheme). However, an important catch-22 is still present here in full force. If you use common sense to validate your analysis, it opens up questions about whether you should have bothered doing an analysis to begin with. On the other hand, you invite serious questions about the validity of your method if you commit to following the numbers regardless of where they take you (particularly if they take you places that appear to be at odds with common sense or the eye test).
Hockey analytics is in a tough spot. On one hand, you don’t really need a new metric or fancy tests to tell you that Crosby or Stamkos are really good at scoring, or that Rask or Rinne are exceptional in net. An analysis that confirms what is already in the store of “common knowledge” can easily be viewed as redundant or even useless. On the other hand, if your carefully constructed model ranks Crosby as the 50th best player in the league, and Stamkos down in 75th place, you invite an entirely different set of criticisms. There is no win here, which makes sense because the new kid on the block rarely has an easy path to making new friends.
The way this is playing out in the criminology of genocide is that change is very slow. A new generation of scholars is emerging who see the topic as a part of criminology, and the older generation of criminologists who sought to exclude the topic from the field are retiring one at a time. What was a fringe topic in the 1990s has grown considerably, to be sure. However, my abiding memory of this process is of giving a paper in a mixed session with assorted criminology topics, and having people leave in the middle of my presentation after giving me horrified looks for speaking of such things.
In my estimation hockey analytics is currently in a similar spot. Conferences are starting to pop up, and a new generation of fans is starting to embrace the numbers. However, teams are still dominated by old school folks who will often use analytics to confirm what the eye test tells them. As Brian Burke put it, they use it like a drunk uses a lamp post: for support, not illumination. The fix for this is time. However, humans like me are impatient by our very nature.