It's what we do.

Young players have it pretty rough in footy. Learning a new level of game in a newly professional environment, many straight out of high school, it’s little wonder that even the best first-year kids don’t instantly end up in the upper echelons of the competition.

This makes evaluating young players very hard – we look for signs of future performance rather than just their present contributions – and the Rising Star award seems to do likewise. Voting for the Award is done on a 5-4-3-2-1 basis by a panel of experts and we have no clear idea why they vote the way they do, but we assume it’s a combination of both present output and intangible perceptions of potential, plus the bloke from South Australia voting for his former team’s nominee.

Andrew McGrath has today been awarded the prize with 51 votes out of a possible 55 (nine of the eleven judges gave him the maximum), and the full leaderboard was as follows:

Andrew McGrath – 51

Ryan Burton – 41

Sam Powell-Pepper – 35

Charlie Curnow – 27

Eric Hipwood – 10

Sam Petrevski-Seton – 3

Lewis Melican – 1

Tom Phillips – 1

This post makes use of the Player Approximate Value, or PAV, method of player valuation which we unveiled yesterday. Below is a chart of the PAVs we have derived for each player nominated for the Rising Star this season, as well as some of the most notable non-nominees.

(We are still working on a “PAV per game” calculation that allows comparisons across seasons of different lengths due to finals, but here the simple calculation is valid because nobody has played finals in 2017 yet.)

Applying PAV to this year’s Rising Star candidates suggested that Sam Powell-Pepper was the most valuable to his side this year, followed closely by Ryan Burton. The winner, Andrew McGrath from the Dons, performed less well. Sean Darcy, who wasn’t even nominated, was the most valuable on a per-game basis in his stint as ruck for Fremantle, and the other two who might have merited nominations for season output were Matthew Kennedy and Jarrod Berry. Only Jason Castagna played every game this year.

These scores aren’t necessarily great by league standards – SPP was 157th overall this year, while Burton was the 51st best in defensive PAV – which illustrates just how steep the learning curve is, and how hard the road ahead, for even the best young players.

Why didn’t McGrath top the PAV for Rising Stars?

HPN thinks the answer to this question is that McGrath seems to have played as a non-rebounding mid-sized defender type, with a lot of “empty carb” disposals. His main notable characteristics were, according to the AFL website’s article, that he ranked among candidates “first for handballs, second for disposals and second for effective disposals”. A lot of voters for traditional awards, especially those decided post-season, look for counting stats as an easy indication of ability.

PAV doesn’t incorporate raw disposal counts into any of its valuations, and he clearly performed less well than some other Rising Star players in PAV-associated areas like clearances, inside-50s, tackles and rebound-50s. His most notable rating was a 4.9 in Defensive PAV, the fifth highest overall, suggesting he did pretty well in terms of one percenters, marks and avoiding giving away free kicks. However, PAV suggests that if a defender should have been chosen, that defender should have been Burton.

With a more mature group of players around him – Heppell, Merrett, Hurley, Goddard, Kelly and, to an extent, Watson – the critical disposals often fell to their hands, whereas Burton was asked to carry a far greater load for Hawthorn, and SPP was asked to do a lot in the centre of the field from day one for Port Adelaide.

We don’t doubt for a second that McGrath may end up the better player of the three vote leaders (he was pick one for a reason), but Essendon had the luxury of easing him into football as a cog with a less-damaging role, and giving him excellent support. McGrath has obviously performed the role with sufficient promise and aplomb to satisfy the voting judges.

One of the oldest questions in global team sport is: what is a player really worth? To come up with a workable answer for this, we have leant heavily on work undertaken by Bill James, Doug Drinen and Chase Stuart, and looked at several different sporting codes and how they attribute player value within the team environment.

This post will describe in detail the player valuations we’ve derived under a method we’re calling Player Approximate Value (PAV). We’ve given hints of these valuations in past posts such as this one about recent retirees and this one running through statistical “awards”. We are planning to use the values we’ve derived here to replace earlier methods of trade and draft valuations, and will continue running other PAV-based analysis, so you’ll see a lot more of it in future.

Valuing players

Much of modern advanced sport analysis can be traced back to one man: Bill James. From the publication of the first The Bill James Baseball Abstract in 1977, James has created a language to describe the sport beyond its base components, and has emphasised using statistics to support otherwise intuitive judgements.

In 1982 James introduced a concept called the value approximation method, a tool to produce something he called Approximate Value. He did so by stating:

“The value approximation method is a tool that is used to make judgements not about individual seasons, but about groups of seasons. The key word is approximation, as this is the one tool in our assortment which makes no attempt to measure anything precisely. The purpose of the value approximation method is to render things large and obvious in a mathematical statement, and thus capable of being put to use so as to reach other conclusions.”

The resultant product produced by James was inexact, but able to generally differentiate bad seasons from good seasons, and good seasons from great. James used basic achievements to apportion value, based on traditional baseball statistics. Over the years James experimented with a series of different player value measures, but he revisited Approximate Value several times, most notably in 2001. However, much of James’s later efforts focused around other methods of player valuation, and Approximate Value remains an often overlooked part of his prior work.

In 2008 Doug Drinen, of Pro-Football Reference, decided to adapt James’s original formula to evaluate which individual college postseason award was most predictive of future NFL success, but was confronted by a lack of comparable data for football players. This initial effort, while a noble attempt, was criticised for using very basic statistics – games played, games started and Pro Bowls played. Whilst the results largely conformed with logic, notable outliers existed – ordinary players that saw out lengthy careers on poor teams.

Unwittingly, we created a similar method to both the original 1982 James formula and the first Drinen formula, which we used to create a Draft Pick Value chart. The method created a common currency that could be used to value the output of players drafted from 1993 to 2004, and to also predict the future output of players (1993 is considered by most to be the first true draft, as it comes two years after the cessation of the traditional under 19 competition and after the various AFL zones were wound back).

The most common criticism of the chart was that, like the original Drinen analysis, it was too narrow, counting the quantity of games played but ignoring their quality. For most players, the relationship between games played and player quality is relatively linear – bad players tend not to play a lot of football before they are delisted. Due to the strict limitations placed on AFL lists, and the mandatory turnover of about 7% of each list each season, players who fail to perform tend not to stay in the AFL. A small modification we made in 2016 was to add a component of quality – namely a weighting by Brownlow Medal votes, capturing the Brownlow-implied value of players selected at each draft position above and beyond games played.

However, the original formula still had the issue of valuing Doug Hawkins as having a better career than Michael Voss – which is patently ridiculous. And the modified formula, though doing a better job of valuation, still felt slightly incomplete.

Later in 2008 Drinen came up with the measure we know today as Approximate Value, by splitting contributions into positions and determining positional impact on overall success. Whilst it still is an approximate value measure, it was far more accurate than any other NFL value measure to date. Approximate Value is still used as a historical comparison tool of player value, worth and contribution across a variety of applications, not limited to draft pick value charts, trade evaluation and the relative worth of players across careers.

What have we done

Player Approximate Value, or PAV for short, is a partial application of the final Drinen version of AV, but applied to the AFL after a range of testing. In the vein of CARMELO and PECOTA, it is unashamedly named after Matthew Pavlich, who happens to be one of the most valuable performers in recent years under the PAV measurement now proudly bearing his name.

Basic AFL statistics are very good at determining a player’s involvement and interaction with play, but relatively poor in evaluating how effective that interaction was. On the other hand, basic statistics are reasonably effective at determining how good a team is both across a season and within each individual game. Drinen’s AV, and now PAV, both combine these two elements.

PAV consists of two components – Team Value and Player Involvement.

Team Value

When developing AV, PFR recognised that the team is the ultimate unit in a team sport, an approach that we fundamentally agree with. PFR split an NFL team’s ability into two components – offence and defence. Both were evaluated on points per drive, adjusted for league average.

Luckily, we had accidentally stumbled on a similar approach in 2014 when trying to determine team strength; however, we split strength into three categories corresponding with areas of the field – offence, midfield and defence. Unlike American Football, possession in the AFL does not alternate after a score, and turnovers aren’t always captured in basic statistics. However, after learning from Tony Corke that inside-50s are one of the stats which correlate most strongly with wins, we landed on an approach of utilising them to approximate the “drive” of the NFL.

The formulas are similar to those used in the HPN Team Ratings, and are all ratios measured as a percentage of league average:

Team Offence: (Team Points/Team Inside-50s) / League Average

Team Midfield: (Team Inside-50s/Opposition Inside-50s)

Team Defence: This is a little more complex.

Defence Number (DN) =(Team Points Conceded/Team Inside-50s Conceded)/ League Average

Team Defence = (100*((2*DN-DN^2)/(2*DN)))*2

All three categories are inherently pace-adjusted, and as such there is no advantage to quick or slow teams racking up or denying opposition stat counts.
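As a concrete sketch, the three team ratings above can be computed like this (the club figures below are invented for illustration, not real data):

```python
# Sketch of the three HPN team ratings described above.
# league_pts_per_i50 is the league-average points scored per inside-50.

def team_ratings(points_for, i50_for, points_against, i50_against,
                 league_pts_per_i50):
    """Return (offence, midfield, defence) as percentages of league average."""
    # Offence: points per inside-50, relative to league average
    offence = (points_for / i50_for) / league_pts_per_i50 * 100
    # Midfield: ratio of inside-50s generated to inside-50s conceded
    midfield = (i50_for / i50_against) * 100
    # Defence: invert points-conceded-per-inside-50-conceded, so that
    # conceding less than average rates above 100
    dn = (points_against / i50_against) / league_pts_per_i50
    defence = (100 * ((2 * dn - dn ** 2) / (2 * dn))) * 2
    return offence, midfield, defence

# An invented club: 2000 points from 1100 inside-50s, conceding
# 1700 points from 1000 inside-50s, in a league averaging 1.8 pts/i50
o, m, d = team_ratings(2000, 1100, 1700, 1000, 1.8)
```

A team exactly at league average in a category lands on 100; note the defence expression algebraically reduces to 200 minus 100 times the defence number, so conceding cheaply pushes the rating above 100.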

Each season is apportioned a total number of PAV points (we’re just saying “PAVs”) in each category, at a rate of 100 * the number of teams in the competition. For example in 2017 there were 1800 Offence PAVs, 1800 Defence PAVs and 1800 Midfield PAVs, or 5400 PAVs overall. This ensures that individual seasons are comparable over time, regardless of the number of teams in the competition at any time.
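One plausible reading of that allocation step – an assumption on our part about the exact mechanics, since the post doesn’t spell it out – is that each season’s pool of category PAVs is split between clubs in proportion to their category ratings:

```python
# Hypothetical allocation of a season's category PAV pool between clubs.
# Club names and ratings are invented.

def allocate_pavs(ratings):
    """ratings: dict of club -> category rating.
    Returns dict of club -> share of the season's PAV pool."""
    pool = 100 * len(ratings)  # e.g. 1800 for an 18-team competition
    total = sum(ratings.values())
    return {club: pool * r / total for club, r in ratings.items()}

# A toy three-team league's midfield ratings:
midfield = {"ClubA": 120.0, "ClubB": 100.0, "ClubC": 80.0}
pavs = allocate_pavs(midfield)
```

Under this reading a club sitting exactly on league average in a category receives exactly 100 PAVs, and the pool always sums to 100 per team regardless of how many teams the competition has.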

Unfortunately, inside-50s have only been tracked since the 1998 season. For seasons before then, we have utilised points per disposal, which roughly approximates the team strengths of the inside 50 approach. There are some differences but they are relatively marginal overall – with very few club seasons moving by more than 3%.

We feel that these three basic statistics can articulate the strength of a team better than any other approach we have seen, and it happens to match the approach taken when creating AV.

Player Involvement

This is the part where HPN has deviated from the approach of Drinen and James. As positions are not defined and recorded as strictly in Australian Rules as in the NFL, it would be impractical at best to use positions as the starting point for a player value system.

Instead, we considered that the best way for us as amateurs from the general public to identify a player’s involvement was through those same basic and public statistics. Whereas the team value as calculated above used a relatively small number of statistical categories, player involvement can be much more complicated.

To allocate value, we relied on a number of intuitive decisions, statistical comparisons and peer testing, refining until the results were satisfactory.

Our first attempt was made with the guidance of Tony Corke’s work on the statistical factors that correlate with winning margin, with some subjective decisions made from there. This attempt produced “sensible” results and also correlated reasonably well with Brownlow Medal votes.

The formulae were then fine-tuned by testing subjective player rankings on a group of peers. The formulas were also tested further against Brownlow Medal votes, All Australian selections, selected best and fairest results and Champion Data’s Official AFL Player Ratings.

Although no source is perfect, PAV was largely able to replicate the judgements of these other sources, especially that of the Official Player Ratings. Generally, if a player has a higher PAV across a season, they will receive more Brownlow Medal votes:

In the end, PAV and its results were tested on a wider scale via blind testing on the internet (stealing the approach taken by Drinen when he created AV), and the results largely confirmed the valuations taken by PAV. The formulae for each line are:

The weightings and multipliers used in each component formula will necessarily look a bit arbitrary, but are the results of adjustment and tweaking until the results lined up with other methods of ranking and evaluating players as described above.

As the collection of several of these measures only commenced in 1998, we have also adapted another formula for pre-1998 seasons, which correlates extremely strongly with the newer one. Whilst we feel it is less accurate, it still largely conforms to the newer formula’s findings. It was created by trying to minimise the standard deviation between each player’s PAVs under the two formulas across the last five seasons of AFL football. Around 5% of players have a difference in value of more than one PAV between the new and old formulas.

We will publish the pre-1998 formula in the not-too-distant future.

Putting It Together

The final step combines individual player scores and team strength calculations to produce the final PAV for each player. This is done in two steps.

Firstly, the individual component scores for each team are compiled. Each player’s individual player score is converted to a proportion of total team score, telling us the proportion of value they contributed to that area of the ground.

Secondly, the team value (i.e. team strength as outlined above) is multiplied by the proportion of the component score for each player.

An example will help illustrate this.

In 2016 the Blues midfield earned 96.71 Midfield PAVs across the whole side (being below league average). Bryce Gibbs accrued a Midfield Score of 3984, and the team tallied up 37702 in midfield score in total. As a result, Gibbs contributed 10.567% of the total Midfield Score for Carlton, and receives that part of the 96.71 Midfield PAVs that Carlton had gained – or 10.22 MidPAVs.

These calculations are done for every player in the league, for every side. The overall PAV for each player is simply the three component values added together. For Gibbs in 2016, that is his 10.22 MidPAVs plus 6.86 OffPAVs and 3.21 DefPAVs, for a total of 20.29 PAVs. Which is pretty good.
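The Gibbs example can be checked with a few lines of arithmetic:

```python
# Reproducing the 2016 Bryce Gibbs worked example from the text.
team_mid_pavs = 96.71    # Carlton's 2016 Midfield PAV pool
gibbs_mid_score = 3984   # Gibbs's individual midfield score
team_mid_score = 37702   # Carlton's total midfield score

share = gibbs_mid_score / team_mid_score   # ~10.567% of Carlton's midfield output
gibbs_mid_pav = team_mid_pavs * share      # ~10.22 MidPAVs

# Overall PAV is just the three components summed
gibbs_total = gibbs_mid_pav + 6.86 + 3.21  # MidPAV + OffPAV + DefPAV, ~20.29
```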

What PAV should be able to tell you

Two key advantages of PAV, we feel, are that it can be replicated based entirely on publicly available statistics, and that by using a pre-1998 method, we have derived a fairly long set of historical values.

While HPN intends to publish PAV to a finer degree than PFR, there still remains a great deal of approximation in the approach. This is especially the case for pre-1998 values, which rely on a far smaller statistical base. We cannot definitively state that these are the exact values of each player relative to other players; however we feel that the approximation is closer than any other method that has as long a time series made with publicly available data. It is possible, and indeed likely, that some lower-ranked players are better players than those above them in certain years.

What we are more confident of is that the values are indicative of player performance relative to others across a longer period of time – that is, in a given year, player X was likely more valuable than player Y, at least to their own team.

As it draws its fundamental values from team ratings, it is much harder to earn a high value in a bad team than in a good one. This scales player value to rate performance in a good side more highly, and specifically it rewards players in the strong parts of the field in good sides.

As a group, players with a PAV of 18 should be better (or have had a better year) than those with 16. As a rule of thumb, a season with a PAV over 20 should be considered great, and any PAV over 25 exceptional. This varies slightly for different positions – an All Australian key position defender may have a lower overall PAV than a non-All Australian midfielder, but with an extremely high rating in the defence component.

Below is a list of the players with the highest season long PAVs between 1988 and 2016:

2017 isn’t finalised yet, but the top end of the list to date is populated with Brownlow Medallist years and players considered to be the absolute elite of the league over the past two decades. While there are some year-on-year PAVs that conflict with common opinion, these top-end player-years do not contain any. Yes, that Stynes year was that good.

On a career basis, the top rated players should be fairly uncontroversial:

The top ten players on this list not only had successful careers, but also incredibly long careers as well. Note that this is current to 2016, so Gary Ablett Jr has more value to come.

Every player on the list made multiple All Australian teams, and a majority were considered at different points in time to be the “best player in the game”. As such, PAV ends up measuring not only quantity of effort but also quality.

What are the weaknesses of PAV?

Like almost any rating system, PAV has blind spots – especially in the early phases of development. As in almost any sport’s rating system, there appears to be a slight blind spot in valuing truly pure negating defenders. Consider Darren Glass, possibly the finest shutdown key position defender of the AFL era. He is somewhat overlooked from an overall perspective by PAV:

Glass’s Defence PAV remains elite for this era, but he provided little to no value to any other part of the Eagles’ performance across the period. It’s worthwhile to compare Glass to the namesake of PAV:

This is a lesson to sometimes look beyond the headline figure to the components that make it up – especially for specialist players, where the component figure for the player’s specific role is more telling than the overall PAV. We can also see with a player like Pavlich that his shifting role over his career is revealed by PAV. Generally, a component PAV of more than 10 will put a specialist player in contention for All Australian squad selection (cf. Glass above), if not selection in the side itself.

Occasionally a season pops up that defies conventional wisdom, such as Shane Tuck’s highly rated 2005 season, or Adem Yze, who rates so highly via PAV as to suggest he was under-recognised throughout his career.

However, Insight Lane brought a very interesting observation to our attention this week, from Bill James himself:

As noted at the top, we’ll be applying this system throughout the draft and trade period to evaluate trades and draft picks, and probably in a lot of other analysis from here on out, as well. Stay tuned in the coming days for an All-Australian team based on PAV.

In our time developing and testing PAV, it has usually confirmed our conventional thinking, but occasionally surprised us. Which makes us think we might be on the right track. With a system comes the ability to analyse, and the goal in developing this approach is to emulate and augment subjective judgements with a systematic valuation, rather than to create a value system alien to the actual “eye test”.

If you have any comments or questions about PAV, please feel free to contact us via twitter (@hurlingpeople), or email us at hurlingpeoplenow [at] gmail [dot] com. We are more than willing to take any feedback on board, and if you want to use or modify the formulas yourself, feel free to do so (just credit us).

Thanks to all who provided help, assistance and the reason for the development of PAV, namely Rob Younger, Matt Cowgill, Ryan Buckland, Tony Corke, James Coventry, Daniel Hoevanaars… and everyone we are forgetting here. We will add more when we remember who we have forgotten.

We are into the final round of season 2017, and what a great time to look at the fixture that awaits us and see how those matchups would look if just a few things had broken a bit differently. Join us as we journey into the football multiverse and explore what might have been.

First up, the table below is the usual HPN team ratings.

We just want to note first of all that Brisbane, adjusted for opponent defensive strength (they don’t get to play themselves, after all, and they have a terrible defence), currently rate as the best offence in the comp. That is, they have scored more per inside-50, adjusted for opponent, than any other side this year. What a weird season.

The top 8 here is the actual current top 8, bar Essendon sitting very slightly behind West Coast. In all likelihood the Bombers will make finals unless the Eagles can beat the Crows and jump either Melbourne or Essendon via a loss, or vault them on percentage.

The HPN team ratings over the year suggest the Swans should sit in the top 4; we don’t need to rehash why that hasn’t happened. Geelong being outside the top 4 is about to be a recurring theme on our journey, alluded to in the title of the post.

So let’s go with some hypothetical ladders, from alternate universes:

What if every losing team had scored another goal?

Below is what the ladder would look like if every losing team had scored another goal, reversing a lot of results. We haven’t recalculated percentages but current percentages have been included as a guide:

The Tigers, who have been on the wrong side of a number of storied narrow defeats, would sit half a game clear heading into the final round, and they and Adelaide would have had the top two spots sewn up weeks ago. In this universe, Damien Barrett is floating the prospect of Richmond and Adelaide tanking to try to avoid GWS or Sydney and play Port Adelaide instead.

Down in tenth would sit Geelong, out of finals contention as they rued last-minute losses to Fremantle, Hawthorn, Port Adelaide and North Melbourne.

The current North Melbourne vs Brisbane Spoonbowl would instead see the Lions trying to jump Fremantle and yet again escape a wooden spoon.
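The mechanical rule behind this alternate ladder is simple enough to sketch (club names and scores here are invented):

```python
# Re-run results after handing every losing team one extra goal (6 points).
# Any game decided by less than a goal flips; a six-point margin becomes a draw.

def rerun_with_extra_goal(results):
    """results: list of (home, away, home_pts, away_pts) tuples.
    Returns the winner of each game (or 'draw') after the adjustment."""
    out = []
    for home, away, hp, ap in results:
        if hp < ap:
            hp += 6   # home side lost: add a goal to their score
        elif ap < hp:
            ap += 6   # away side lost: add a goal to their score
        # original draws are left untouched
        out.append(home if hp > ap else away if ap > hp else "draw")
    return out
```

For example, a four-point loss flips to a two-point win, while an exactly six-point loss becomes a draw; only results decided by more than a goal survive unchanged.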

What if we could bloody kick straight?

A simplistic and somewhat inaccurate measure of luck is scoring shot conversion. All things being equal, the expectation is that accuracy (or inaccuracy) regresses to the mean over time. Figuring Footy has done some wonderful work fleshing this out by adding scoring expectations, but for this exercise, let’s assume everyone converts scoring shots at the same rate.
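That assumption can be sketched by rebuilding each team’s score from its scoring shots at a league-wide conversion rate (the figures below, including the 53% rate, are invented for illustration):

```python
# Rebuild a score assuming league-average scoring shot conversion.
# A goal is worth 6 points, a behind 1.

def adjusted_score(goals, behinds, league_conversion):
    """Score if this team's scoring shots converted at the league rate."""
    shots = goals + behinds
    exp_goals = shots * league_conversion
    exp_behinds = shots * (1 - league_conversion)
    return exp_goals * 6 + exp_behinds

# A wasteful 10.20 (80) re-scored at a hypothetical 53% league rate:
score = adjusted_score(10, 20, 0.53)
```

Under this re-scoring, an inaccurate day improves and an unusually accurate one is pulled back, which is what drives the hypothetical results below.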

Port Adelaide now sit top 2, their accuracy having secured them wins over West Coast and Richmond at the cost of a loss to St Kilda. The Saints, naturally, make the 8 on this measure, as do a Hawthorn presumably not hobbled by Will Langford’s set shots.

The teams dumped from the finals, assuming everyone kicked straight, are Sydney (who would hypothetically still remain in contention this week), and Essendon (who would be long gone). The Bombers crash to 40 points, sitting well out of finals, thanks to draws with Hawthorn and the Bulldogs and losses to Geelong and Collingwood. This would be compensated only by the cold comfort of having beaten Brisbane, in an ever fading “revenge for the 2001 Grand Final” type manner.

We should note that shot quality produced and conceded differs by team. Sydney, for instance, have conceded the equal second-lowest quality chances (they’ve done similar for a few years), and Port Adelaide take a lot of low quality chances, so it’s not surprising they’re kicking a higher number of behinds per goal.

Essendon generate and concede scoring shots of roughly average quality, so they’re probably more likely to have benefited from something approaching pure luck in scoring shot accuracy terms.

What if everyone only played each other once?

In this world, the season is 17 games long and starts in May or has time off for representative clashes or something. Or, as is looking more likely, is the front half of a 17-5 type scenario.

Below we’ve compiled the first result this year for every clash, ignoring double-up return games. We’ve also assumed the upcoming weekend of matches is Round 17, and excluded any previous clashes between teams playing this week (eg the previous GWS-Geelong draw is omitted).

Here we see teams down to Collingwood still in distant contention for finals, the Pies apparently having been bad in return games this year. In this world, they need to beat Melbourne and rely on unlikely losses by the teams above them.

The top 8 hasn’t changed, and West Coast are still relying on beating Adelaide, but in this world the Crows need to win to lock down a top two spot while Richmond will know whether top 4 is up for grabs by Saturday night.

In a 17-5 world, the entire bottom six would have been long settled, with these clubs facing little to play for (assuming the points are reset for the final five matches). Additionally, the top 3 would have also faced several weeks of near meaningless footy before the split. If the points aren’t reset in this 17-5 world, several teams would have several more dead rubbers in the last few weeks of the season, and there would be a decent chance that 7th, 8th and maybe 9th would finish with more wins than 5th and 6th.

These are just some of the reasons that the 17-5 proposal is not a good thought bubble – we promise to look at more of them later down the track.

What if teams won exactly as many games as they “should” have?

Now we’re stepping into the realm of abstract footy geometry, where the laws of football premiership ladder physics such as “you can only win whole games” no longer apply.

Each year we run an analysis of the footy fixture’s imbalance, incorporating a Pythagorean Expectation assessment of team strength as well as straight wins and losses. Pythagorean Expectation tells us how many games a team “should” have won based on their scores for and against. It’s probably best thought of as a quantification of the intuition that teams with a higher percentage are better. It’s another measure of luck, and tends to punish teams who only win by small margins. We used the method to help project the 2017 ladder as well, and it had Hawthorn finishing 12th.
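A minimal sketch of Pythagorean Expectation follows. Note the exponent is a tuning parameter: the 3.87 value here is one commonly used for AFL work elsewhere and is our assumption, not necessarily the exponent used in the HPN analysis.

```python
# Pythagorean Expectation: games a team "should" win given scores
# for and against. The exponent is a tunable parameter (3.87 assumed).

def pythagorean_wins(points_for, points_against, games, exponent=3.87):
    """Expected wins implied by a team's scoring for and against."""
    win_rate = points_for ** exponent / (
        points_for ** exponent + points_against ** exponent)
    return win_rate * games

# An invented team scoring 2000 and conceding 1800 over 21 games:
expected = pythagorean_wins(2000, 1800, 21)
```

A team that wins a string of close games can sit several actual wins above this figure, which is exactly the “luck” the ladder below tries to strip out.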

Here, we’ve used it to work out how far over or under each team in 2017 is from the expectations created by their scoring. That ladder is below.

Finally, we have a ladder which doesn’t put Brisbane last. Fremantle look like they’ve won three more games than they should have, and on Pythagorean expectations might be expected to have won just the five games this year. Spoonbowl in this world happened already and Freo lost.

Our current top eight remains the top eight in the Pythagorean ideal world.

Port Adelaide, by virtue of the extreme flat track tendencies we documented last week, appear in this universe to have won an extra 1.5 games, while Sydney also sit a game and probably percentage inside the top 4, their early season weakness reduced to the abstraction of a slightly dampened balance of scores for-and-against.

But of course there’s one final source of luck.

What if the fixture was completely fair?

Here, we’ve stuck with Pythagorean expectation but used it to work out the impact, in fractions of a win, of the uneven fixture.

The fixture in an 18 team, 22 game season is impossible to make fair, but in our final bizarre universe, it’s what’s happened.

Each team’s “expected wins impact” is the difference between the strength of their opponent sets (including double-ups) and what would be expected to happen if they played everyone the same number of times (ie, the average of every other team’s strength).
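That calculation can be sketched as follows, with opponent strengths expressed in expected-win terms (club names and figures are invented):

```python
# Fixture impact in fractions of a win: compare the opponents a team
# actually faces (double-ups included) against a hypothetical fair
# fixture where it faces every other team's average strength.

def fixture_impact(team, strengths, opponents):
    """strengths: dict of club -> strength in expected-win terms.
    opponents: the team's actual opponent set, with repeats.
    Positive result = softer-than-fair draw."""
    others = [s for t, s in strengths.items() if t != team]
    fair = sum(others) / len(others)                    # average rival strength
    actual = sum(strengths[o] for o in opponents) / len(opponents)
    return (fair - actual) * len(opponents)             # summed over the season

# A toy four-team league where ClubA plays ClubC twice and ClubD once:
strengths = {"ClubA": 0.5, "ClubB": 0.6, "ClubC": 0.4, "ClubD": 0.5}
impact = fixture_impact("ClubA", strengths, ["ClubC", "ClubC", "ClubD"])
```

Here the double-up against the weakest side is worth about a fifth of a win to ClubA, which is the same order of magnitude as the half-win swings discussed below.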

We’re still in “fractions of a win” territory here, but the table below is interesting.

At the top of the ladder, Adelaide and GWS have faced difficult fixtures and would be expected to do even better if they faced the same strength teams as everyone else.

In this universe where wins come in fractions and the fixture is impossibly fair, St Kilda jump into the 8 by a full one third of a win thanks to a fair fixture, at the expense of the Bombers. West Coast still sit 9th, while the Bulldogs lurk closer to the eight than they do in reality, a win over the Hawks potentially enough to get them into the finals.

This ladder tells us that the teams most benefited by a soft fixture this season are Gold Coast, Richmond, North Melbourne, and Essendon, to the tune of about half a win each. We’ve noted Richmond’s bad luck with close games above, but perhaps this is balanced by having benefited from the softer draw they got as a bottom-6 team last year.

After round 21 there is little movement in relative rankings, but Sydney and GWS rise into our informally-defined historical “premiership” frame.

However, it’s the increasingly anomalous Port Adelaide, theoretically a contender, which we want to focus on here.

The popular opinion of Port Adelaide being unable to match it with other good sides is well and truly borne out when we dig into their performance on our strength ratings by opponent. We have in the past broken up statistics by top 8 and bottom 10 and used them to call Josh Kennedy (but also Dean Cox) a flat track bully back in 2015. Then in 2016 we ran an opponent-adjusted Coleman to see who was kicking the goals against tough opponents (turns out: Toby Greene and Josh Jenkins). This time we’ve looked at whole teams.

Simply put, Port Adelaide are the best side in the competition against weak opponents and they’re about as good as North Melbourne against the good teams.

Below is a chart where we have calculated strength ratings through the same method as we always do using whole-of-season data, but separate ratings are derived for matches against the top half and bottom half of the competition as determined by our ratings above.

Most clubs, predictably, have done better against the bad sides than the good ones. Port Adelaide, however, take this to extremes. They rate as 120% of league average in their performance against the bottom nine sides. Not even Adelaide or Sydney look that good, over the year, in beating up on the weaker teams.

That’s why we’ve been rating Port so highly this year – their performance, even allowing for the scaling we apply for opponent sets, has been abnormally, bizarrely good to the extent that it’s actually outweighed and masked their weaknesses against quality teams. Their sub-97% rating against top sides is 13th in the league, ahead of only North, Carlton, Fremantle and the Queensland sides. This divergence is more than double the size of the variance for any other team.

It appears that the problem mostly strikes the Power in between the arcs. Against bottom sides, their midfield strength is streets ahead of any other side at 141% of the league average, meaning they get nearly three inside-50s for every two conceded. This opportunity imbalance makes their decent defence look better and papers over a struggling forward line. Against quality sides, that falls apart and they get fewer inside-50s than their opponents.

Looking elsewhere, Adelaide stands out as looking stronger against quality opposition, with their midfield and offence faring substantially better than against weaker sides – a couple of whom have, of course, embarrassed them throughout the year.

The Hawks and two strugglers in North Melbourne and Carlton also seem to acquit themselves better against the top sides than against their own weight class. For North, their inside-50 opportunities dry up against good sides but they make better use of the forward entries – they rate as above league average, offensively, against the top nine teams. For Carlton, unsurprisingly, it’s their stifling defence that steps up, and the same is true of Hawthorn.

St Kilda’s forward efficiency and Richmond’s defensive efficiency have also been a lot higher against top sides, but the converse is true of the two teams’ opposite lines.

At the other end of the table, Geelong, Sydney and especially the Bulldogs are the other finals contenders with the biggest worries about sustaining their output against quality opposition. Sydney’s midfield struggles to control territory, slightly losing the inside-50 battle on average against the top half of the competition while bullying weaker sides (their offensive efficiency is actually slightly higher however). The Bulldogs and Geelong share these midfield issues but their forward lines also struggle under quality defensive heat.

But it really is Port Adelaide who stand out here. Their output against weaker sides is really good and shouldn’t be written off. There’s obviously quality there, and they sit in striking distance of the top 4 with a healthy percentage. However, it wouldn’t be a stretch to call their overall strength rating fraudulent given its composition and we will be regarding them with a bit of an asterisk from here. Unless they can bridge the gap and produce something against their finals peers, even a top 4 berth is likely to end in ashes.

As the 2017 Home & Away season winds to its inevitable conclusion, movement returns to the HPN Team Ratings.

The Swans are pushing towards the “Premiership Contender” zone of the HPN Team Ratings, which we loosely define as an overall team rating of more than 105% and individual component ratings north of 100%. After an extremely sluggish start down back, Sydney is now the third best side in the competition defensively – with a fair chance of leaping over Port into second.

We’ve mentioned this before, but the return of Dane Rampe has played a critical role in this improvement. Some defenders are versatile, some are extremely good at their job; Rampe is the rare combination of the two. His return has allowed Grundy to move to a more negating role and taken some of the pressure off Lewis Melican, who has blossomed as a result. Having Rampe’s ability to cover ground and contest as a third man up has given the other Swans half backs a little more freedom to attack, knowing there is a safety net behind them.

The Swans still have issues – namely the non-Franklin, non-Papley parts of their forward line – but they are starting to approach their 2016 form.

Switching with Sydney this week is Geelong, who are a fundamentally different team without Dangerfield and Selwood. Duncan and Hawkins missing this week does not help either. The sprint towards finals has turned into a limp just as Geelong run into one of the harder parts of their schedule.

Port didn’t lose a place this week but they lost significant ground in everyone’s eyes including those of our ratings, with another loss to a top eight side on the resume. No-one doubts the raw talent of the Power forward line, but their ability to score against good defences is becoming concerning.

For that matter, on the form of the last two weeks, Melbourne looks more like the Demons of 2008 than the side of earlier this year. The constant shifting of players around the ground has seemingly led to a loss of cohesiveness, with players running into each other and spoiling one another’s contests. Time is not on the Demons’ side here either, and if they can’t turn it around against the undermanned Saints this week their season may be over.

Every side left in the battle for the flag this year has a flaw, or several, that may stop them from hoisting the cup. From haphazard forward delivery leading to poor conversion (Richmond), to a loss of the territory battle (Eagles and Bombers), to a forward set-up that requires a side’s best midfielder to play forward for massive chunks instead (Bulldogs), each side has an Achilles heel. Even Adelaide has one, as we pointed out last week.

For many, the Giants present as the most evenly balanced team, but they are yet to get their best 22 on the park this year at the same time. On paper the Giants at full strength are probably the most formidable matchup – but as 2017 has shown football isn’t played on paper. Even at full strength the Giants seem susceptible to multiple quality tall forwards and quick spreading run, such as the set up employed by Adelaide so effectively.

While there were a number of interesting results and upsets last week, the HPN Team Ratings largely stayed unchanged from a ranking point of view. At this stage of the season our method of rating teams gets quite firm in its views, comparing as it does the entire season’s work of each club in order to provide a good basis for historical comparison.

Perhaps the biggest change at the top end is Geelong slowly closing the gap on the top two, who have both softened a little. GWS continue to lose touch with the top end, and missing almost their entire first-choice forward line this week they have a hard assignment against a mostly fit Melbourne. As Matt Cowgill from The Arc/ESPN outlined this week, the Manuka match-up shapes as one of the most pivotal games this week, alongside almost every other match this round.

In related news: it’s a fantastic time to be a footy fan.

Richmond made the biggest leap this week, from 8th into 6th, leapfrogging a disappointing Melbourne and swapping places with the Dons. West Coast is only a fraction outside the top 8 teams, and the St Kilda match-up this week looms as a de-facto elimination game for both sides.

Now onto the question posed at the top of the column.

The best of the 2017 retirees

There’s a high-calibre group of already-announced retirees, all undisputed champions of the game who nonetheless vary quite markedly in the types of achievements and qualities for which they are recognised.

This week, HPN has decided (with the help of a few friends) to look at different ways to split the careers of these five great players, and try to work out who was the best of the bunch, once all is said and done.

Team Success

Many among us (including Michael Jordan) consider a championship title to be the most relevant thing when determining who was truly the best player of a group. The goal of almost all professional sport is to win at the peak level of competition, with all else being ancillary to this pursuit.

To determine this for these five players, we have graded them on the simplest of scales: two points for a premiership, one point for a grand final loss, none for a draw (sorry Nick).

(tie): Mitchell, Hodge (9 points)

Riewoldt (2 points)

Priddis (1 point)

Thompson (0 points)

Mitchell and Hodge are tied at the top here, as a result of both being teammates during the Hawks’ ultra-successful run between 2008 and 2015. As all Saints fans can remember, St Kilda lost two Grand Finals under the captaincy of Nick Riewoldt, including one that they definitely should have won. Matt Priddis missed out on the Eagles’ 2006 premiership win, even if he was on the list at the time, but played in the 2015 loss to the Hawks. And Scott Thompson has never tasted the limelight on the last Saturday in September (or October).
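The scale above is trivial to express in code. As a sketch, the win/loss tallies below are reconstructed from the article’s point totals (four flags and a losing grand final each for Mitchell and Hodge, two losses plus the 2010 draw for Riewoldt, one loss for Priddis) rather than taken from an official record:

```python
# Two points for a premiership, one for a grand final loss, none for a draw.
def gf_points(wins, losses, draws=0):
    return 2 * wins + 1 * losses + 0 * draws

# (wins, losses, draws) in grand finals, reconstructed from the tallies above
records = {
    "Mitchell": (4, 1, 0),
    "Hodge":    (4, 1, 0),
    "Riewoldt": (0, 2, 1),  # the 2010 drawn grand final scores nothing
    "Priddis":  (0, 1, 0),
    "Thompson": (0, 0, 0),
}
scores = {player: gf_points(*record) for player, record in records.items()}
```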

Individual Awards

Brownlow Medal

Surprisingly, of this group of five players, only two have had the Brownlow Medal hung around their necks. To split any ties, we have used total career Brownlow Medal votes as the tiebreaker.

Mitchell (1 medal, 220 votes)

Priddis (1 medal, 146 votes)

Thompson (155 votes)

Riewoldt (149 votes)

Hodge (131 votes)

It turns out that the midfielders’ award is really a midfielders’ award. At the start of the 2016 season Sam Mitchell sat in a tie for first all time (with Gary Ablett Jr) for most Brownlow Medal votes (adjusting for the crazy voting system in the mid-1970s). Mitchell had an incredibly long and consistent career, one which was often masked by the excellence of his teammates. Priddis somehow jagged the 2014 medal in what might not have been his best season, but the medal is his nonetheless.

Among all players who never won a Brownlow, Scott Thompson is one of the highest career vote-getters, behind luminaries such as Leigh Matthews, Brent Harvey, Scott West, Garry Wilson and Kevin Bartlett. That is very good company to be in, and perhaps the dreaded Victorian media bias means Thompson hasn’t received the recognition he deserved through his career.

Nick Riewoldt has polled as well as almost any key position forward in history, although he only peaked at a high of 17 votes in any one year. And Luke Hodge, who often did his best work off a half back flank, was often overlooked by the umpires on Brownlow night in favour of star teammates.

Club B&F

Riewoldt (7 wins)

Mitchell (5 wins)

(tie) Priddis, Hodge, Thompson (2 wins)

Each club votes differently and may judge its best and fairest award on different criteria, but these awards are still a good way to see how clubs value their own players. All five players took home at least two club champion awards, but Riewoldt is way ahead of the pack with seven.

All-Australian

Riewoldt (five-time AA, three-time AA squad)

Mitchell (three-time AA, four-time AA squad)

Hodge (three-time AA, two-time AA squad)

Priddis (one-time AA, two-time AA squad)

Thompson (one-time AA, one-time AA squad)

Riewoldt stands alone here again, with his performances up forward regularly being recognised as being the best in the game. Thompson suffers here from the glut of elite midfielders that were in the league recently.

Statistics

As we have alluded to in recent weeks, HPN has been developing a player value system over the last year named PAV (after Matthew Pavlich). It is derived entirely from publicly available stats on afltables. We have been teasing it for the past few weeks, and we will drop the methods and formulas after the season is wrapped up and we have some time on our hands.

But for now, we can look at PAV (which is determined by a player’s contribution to a team’s effort in three areas of the ground, weighted by the strength of the team in that area that year) for each of the retirees. Here’s the data and graph for the five players across their careers.

For context: a perfectly average team will have 300 PAV across its list in a given year. A season above 20 is generally a sign of All-Australian contention (depending on position). A PAV north of 12 is generally an average contributor. Seasons of 25 PAV or more are relatively rare and outstanding.
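Those benchmarks can be collapsed into a quick interpretive helper. The thresholds below come from the text; the band labels are our own shorthand:

```python
def pav_band(season_pav):
    """Rough interpretive bands for a single-season PAV, per the
    benchmarks above (thresholds from the text, labels are ours)."""
    if season_pav >= 25:
        return "rare/outstanding"
    if season_pav >= 20:
        return "All-Australian contention"
    if season_pav >= 12:
        return "average contributor"
    return "below average"
```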

Peak PAV

Hodge

Riewoldt

Mitchell

Thompson

Priddis

According to PAV ratings, not only is Luke Hodge’s 2005 season the best single season by any of the retirees, but his 2010 and 2006 seasons sit a distant second and third, ahead of any other player-season here. Unlike Brownlow Medal voting, PAV is more agnostic when it comes to rating the value and impact of defenders and forwards because it assigns values for all three parts of the ground and sums them. This is demonstrated by the relatively high Hodge and Riewoldt placings.

Below are the component ratings for Hodge, Riewoldt and Mitchell, showing the relative contribution of midfield, offence and defence ratings to each season’s total. Note the shifting roles played by Hodge over the years as defence or midfield contribution rises and falls, compared to the purer midfield and forward roles of Riewoldt and Mitchell.

Riewoldt’s best year, his 2004 season, saw him walk away with multiple media and other voting awards for best player, but he was stiffed by the umpires in the Brownlow (PAV had him as the 3rd best player that year, behind Judd and Akermanis).

In their Brownlow years of 2012 and 2014, PAV rated Mitchell and Priddis as the 12th and 13th most valuable players in the league respectively. Mitchell’s Brownlow was of course the 2012 medal, awarded in retrospect. 2012 was also Thompson’s best year, and he was just shaded by Mitchell, rating 13th. We should note, however, that in a lot of these ratings the differences were fairly minimal, and since PAV stands for “player approximate value”, when scores are similar the exact order is not necessarily meaningful – a 21.8 versus a 21.6 is a minimal difference that could even come down to a mis-compiled statistic.

Career PAV

Mitchell

Hodge

Riewoldt

Thompson

Priddis

We have imputed a final 2017 value based on the season to date – these may shift with the final few games of each career, but the shift shouldn’t be significant since most of the season has been played. The margins between the top three are quite slim, but the results should hold.
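HPN hasn’t spelled out its imputation method; the simplest assumption, sketched below, is straight per-game scaling of the season-to-date value out to the full scheduled season (the 22-game default is an assumption for a standard home-and-away season):

```python
def impute_full_season(pav_to_date, games_played, scheduled_games=22):
    """Pro-rata projection of a season-to-date PAV to a full season.
    A simplifying assumption, not HPN's published method."""
    if games_played <= 0:
        return 0.0
    return pav_to_date * scheduled_games / games_played
```

For example, a player sitting on 18 PAV after 20 of 22 rounds projects to about 19.8 for the year, which is why the final few games shouldn’t shift these rankings much.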

With the shortest career of the bunch, Priddis was always going to struggle on total career value produced; even so, he still produced more than the average value of a number one draft pick. Of the five players, Thompson got off to the slowest start, but had the longest stretch of “good-to-great” seasons, with nine straight years where he should have been in All-Australian squad contention. This slow start, along with the longest tail of the five players, meant that the other three greats shaded him.

Subjective Ratings

For this measure, we asked three of our favourite football writers/analysts to rank the players from one to five, on whatever grounds or method they chose. They had no idea of our work above. They are:

But we note that two of the three are West Coast supporters, so take the last two spots with a small grain of salt. All three we surveyed unanimously had Mitchell-Riewoldt at 1st and 2nd in that order – as did the other dozen or so people we asked in our day to day lives.

In summary

We seem to keep coming back to there being two clear tiers here – Mitchell, Hodge and Riewoldt in some order, then Priddis and Thompson. Mitchell comes out as the closest thing to a consensus “best”, but Riewoldt isn’t far behind.

The biggest outlier method – even more so than team success – turns out to be the Brownlow Medal which we have no compunction about saying quite simply undervalues both Riewoldt and Hodge.

By the same token however, when we look at various uses of our PAV, it becomes apparent that the inclusion of Priddis and Thompson in this comparison isn’t spurious and they aren’t really out of place, even if their recognition as individual greats hasn’t been as forthcoming. As we noted, a potential Victorian media bias – which has foundations in media theory and international sporting debate – may have an impact on the public perceptions of non-Victorian based players.

Something we like about the PAV approach as we’ve tested and analysed it is the way it identifies lesser-lights who had careers or seasons which were comparable to better recognised and more widely noted achievements. That has certainly happened here.

Thompson had a very long career of consistently high value to his (second) club while Priddis, a late starter, still came in and performed at a similar level almost immediately. Every one of these five players had careers which outperformed the expectations of a number one draft pick and it’s no insult to say that Priddis or Thompson are fourth or fifth among this group.

One thing the HPN Team Ratings tend to do by the end of a season is break teams into distinct levels, diplomatically sorting the haves from the have-nots in a somewhat graphically pleasing manner.

Using the magic of “the Snipping tool”, we can clearly demonstrate that this has likely already happened in 2017.

This doesn’t indicate that a team from a lower tier can’t beat a team from a tier above – far from it. It instead indicates the average quality of the teams through a thoroughly chaotic season, whose ending is less clear than any in recent memory.

For this week’s column, let’s run through the tiers, with some help from other footy stats people on the web for clarity. FMI’s great CoSPYES is as good a place as any for footy stat nerds and newbies alike to dive in.

Tier 4: Not Making Finals This Year, But Not Hopeless

The five teams here are likely welded to the bottom of the ladder this season, but all (on their day) have shown glimpses of light.

Brisbane’s forward line has been a revelation this year, with the emergence of Eric Hipwood and the late career improvement of Dayne Zorko providing focal points that the Lions have been seeking since the days of Brown. The Lions remind us somewhat of those early Giants teams, with young focal points offset by a smattering of resourceful and wily older players. Most of Brisbane’s attacks are of a high risk, high reward nature – if they fall over before getting inside 50, they often concede a goal the other way. The problem moving forward for Brisbane is the near constant string of injuries to key players – Dayne Beams hasn’t been fit since wearing black and white, Christensen and Rockliff have also struggled to play full seasons – and the aging of their best players. Zorko turns 29 next year, Martin 31 and the Beams-Rockliff-Rich combo will be 28. None of these guys will likely be in the next Brisbane premiership side, or even finals team. Ryan Buckland wrote about the Lions in great depth earlier this week – have a read if you are keen for good takes.

Fremantle were predicted by one of HPN’s pre-season prediction methods to finish with the spoon, so their first two months of the season were a mild surprise built off close wins. Their next two months were less surprising. The Dockers have several players who can nearly win games on their own (Fyfe, Mundy, Neale, Walters), but lack depth.

HPN wrote about how Gold Coast’s draw opened up for a potential finals run; then, in a throwback to previous seasons, several of their key players got hurt. Tom Lynch has had a relatively quiet year (merely one of the best young key forwards in the comp, instead of the best), and Ablett remains a make-or-break character for the Suns.

The Blues’ GWS asset recycling program has had its ups and downs, but HPN favourite Caleb Marchbank nearly justifies its existence. For all the talk about the resurgence of Liam Jones, Marchbank is perhaps the more important cog down back, one they sorely missed in last week’s loss to Brisbane. Carlton’s defence is alright, and Matty Kreuzer is having a career year, but they look a couple of years from being a couple of years away from contention.

Tier 3 – There’s A Chance, Just Not Much Of One

The five teams here have at least a shot at making the finals, but in some cases the chances are very small. In brackets is each team’s top-8 likelihood according to The Arc’s Elo modelling.

Hawthorn (6%) sold the boat down the river, and now sit on the outer banks of the finals. They paid a lot for O’Meara and a bit for Mitchell. This quality-over-quantity move may end up looking problematic if it turns out the Hawks need to find eight or ten new players for upcoming seasons, rather than just two, because they won’t have new draft resources for a while without culling their list further. The 2017 Hawks bear little resemblance to those golden-era teams, as even the players who remain play somewhat different roles. The use of Hodge as a super-versatile full back/half back flank hybrid is similar to what the Demons are doing with Nev Jetta, but more effective due to Hodge’s talent. Tim O’Brien has also quietly been turning into a tall target down the line (but 20cm shorter than, say, Rory Lobb), something the Hawks desperately needed. Jack Gunston is a halfback now. Finals are probably a step too far this year, but there’s a path there.

In Buckley’s do-or-die year, Collingwood (0.6%) has shown some fight. Their midfield is probably not “the best in the league”, but we’ve got them second on inside-50 differential and the midfield is holding them together. Unfortunately Collingwood has significantly struggled at either end of the ground, with Melbourne offcuts Jeremy Howe and Lynden Dunn playing critical roles down back (which is generally not a good sign). Up forward, Darcy Moore has continued his development, but lacks consistent support and a second tall forward to deflect attention.

St Kilda (18.4%) are as flaky as anyone this year. Over the year they’ve been modestly below average in all three areas of the HPN Team Ratings. This probably fairly reflects this year’s developing squad, and may simply suggest that general across-the-park improvement, and reducing the gap between their best and worst, is the order of the day. Their top 8 destiny is in their own hands, because they mostly play teams immediately near them on the ladder.

The Bulldogs (34.8%) have played bursts of good footy this year, but mostly have fallen short of punters’ expectations. This actually isn’t much of a change from last year, when they also finished the year rating a fairly modest 7th (but just barely in the frame as a premiership-quality side) before an improbable finals run. Preseason, we had them finishing about 7th or 5th on our two projection methods, and that may still end up around the mark, with The Arc giving them a 1 in 3 chance at finals.

It’s probably unfair to judge the Dogs for not living up to a crazy month last year where everything went as perfectly as it could. The biggest problem down at Whitten Oval is that the Dogs have an impotent attack, largely driven by a lack of continuity up forward. It has declined from being average last year to flat-out terrible this year – they are the second worst at converting inside-50s into points on the scoreboard.

West Coast (40.3%) have been dissected at length, and it’s getting harder to reject the thesis that they just cannot work out how to travel well. The main evidence against that currently is some weaker 2017 efforts at home. West Coast’s vaunted forward line from previous years has suffered from the prolonged absence of Kennedy and the decline of LeCras, whilst their midfield has seemed a little one-paced when both Priddis and Mitchell run through at the same time. One-paced midfields can work, but usually when that pace is fast.

Mitchell and Priddis are not fast. And it doesn’t work being that slow, as potentially evidenced by Priddis’ shock retirement just a month after seemingly extending his career by a year.

Tier 2 – Can Make A Grand Final Run, If Everything Falls Into Place

Richmond (85.8%), simply put, have defended the best and scored the worst in the 2017 AFL season. It’s an amazing split, quite a lot larger than that of most other historic contenders with defence-first profiles. Richmond’s defence of opponent inside-50s is better than all but about a dozen defences since 1998, but no other team in that bracket was so poor at converting their own inside-50s. The 2005-model Neil Craig Crows had both a better offence and a very strong midfield. Peak 2011 St Kilda had a worse midfield but better offence and defence. Sydney in 2005 were more well-rounded, with a weaker defence but stronger forwards. The team the Tigers currently most resemble is the 2007 Adelaide Crows, who narrowly lost an elimination final to a young Hawthorn.

As Rob from Figuring Footy pointed out earlier this year, Richmond’s defence has denied shots near the square extremely well – even if they have slipped a bit in recent weeks.

Melbourne (81%) have had a chaotic year personnel-wise and, like GWS, we probably haven’t seen their best 22 on the park all at once. More than nearly any other side, their best (wins over Adelaide and Port Adelaide) has been a long way from their worst (losses to Fremantle and North Melbourne), and predicting how they’ll perform in September is folly from here. Melbourne’s finals chances will largely depend on who they can put on the park and how well they have gelled. It is also worth noting here that they are one of only two sides to beat the top two, along with Geelong.

Essendon (56.8%) are outperforming a lot of expectations this year and should make finals if they can capitalise on a soft draw. That’s a big if for a side who smashed Port Adelaide then lost to Brisbane. Essendon have fantastic bookends and a weak but gradually improving midfield that doesn’t protect the defensive 50 particularly well or create a large volume of inside 50s. Their efficiency inside 50 is second only to the Crows, and their defence has been quite reasonable – both ends led by likely All Australians (Hurley and Daniher). Of the sides currently in the finals, they are the most likely to drop out.

Sydney (89.9%) are for many the form side right now, and a 10-1 record is very good, but after some bad luck early in the year they have benefited from the same since then. The Swans have racked up a couple of close wins against similar sides, and were able to encounter Melbourne and GWS while they were well and truly understrength. They are yet to play Adelaide and Geelong at their home bases, which will likely indicate whether their recent run has been more luck or skill. The return of Dane Rampe has been incredibly important for the Swans – the rare Swiss Army knife of a defender who can play at an elite level as both a small and a tall. Matt Cowgill at ESPN took a good look at Sydney’s post-round-6 resurgence this week – well worth a read.

Like Melbourne, GWS (95.3%) have fielded a two-thirds-strength side for much of this year. As promising as Harry Perryman looks, he shouldn’t be playing for a top four side at this point of his career. Like Melbourne, it is impossible to assess how good the Giants are right now. There’s a fair chance that the Giants won’t get to field their best 22 before the finals this year – with Griffen still around a month away, and Deledio making slow but sure progress in the reserves. With a full-ish squad to pick from, the Giants might have a few selection issues, but the good kind instead of the bad. On paper, a full strength Giants side doesn’t have a weakness – which should scare all the sides above them on the ladder.

Tier 1: The Premiership Favourites (if they can get everything together)

Port Adelaide (92.5%) get a lot of stick for not beating top sides, but according to the HPN Team Ratings they continue to be competition front runners. They’ve often been frustrating in 2017, but have been competitive in most of their losses to other top sides. They’ve also beaten up horribly on bottom sides, which may skew their ratings a little higher than what would truly be expected.

We still have them as the second strongest team based mostly on their dominant performance through the midfield. Much of that has been led by fringe candidate for “best ruckman in the league” Patrick Ryder, and the physically imposing Ollie Wines. Port dominate inside-50 entries, controlling the middle of the ground for lengthy periods, but are hampered by pretty average scoring power despite the strong performances of Robbie Gray and Charlie Dixon this year. Like several teams in lower tiers, the issue for Port appears to be one of depth and whether the marginal best 22 players can step up for finals. HPN understands the doubts about them, but we think the fundamentals are fine and they’re as likely as anyone to make a run in September in this chaotic season.

Adelaide (100%) are the likely minor premiers and the current premiership favourites. There is very good reason for this, as they look like the most complete side so far this year. They seem to have rediscovered some of their best form, and perhaps worked out some responses to the Sloane tag and the defensive strategies that caused them issues against Melbourne and Hawthorn.

The Crows’ loss to the Hawks was driven by the numbers Hawthorn placed behind the ball, with the weight of numbers making up for the haphazardness of the defensive setup. Melbourne did that as well, with the exploitation of the ‘plus two’ set-up at centre contests creating a forward thrust that the Crows found hard to counter. Both sides employed manic defensive pressure, and exploited their small forward lines to beat the Crows for forward-50 ground balls.

Midfield depth may be the issue in combating these approaches. Their midfield strength is reasonable, but as it’s a measure of inside-50 differential, a lot of the work in ball movement and defence is probably falling on their half back and half forward lines. The other question is whether their multi-dimensional and rapid attacking style can hold up against the defensive powerhouses of the league in September, as it failed to last year. On the other hand, if the only problems we can identify are to do with possible counter tactics, that tends to suggest you’re in pretty good nick.

In short, this has been a great season of footy, and it only shapes to get better.