<h1>The Better Meta: Analysis of Paladins Data</h1>
<p>By Chucky Ellison, <a href="https://www.thebettermeta.com/">thebettermeta.com</a> (feed updated 2019-03-21)</p>
<h1 id="personalized-stats-discord-bot">Personalized Stats (Discord Bot)</h1>
<p>Up until now, The Better Meta has been about showing what champions/legendaries/compositions etc. work across huge groups of players.
If you’ve ever wished you could find out detailed stats about <em>you</em> as a player, or <em>your</em> individual matches, then you are in luck.
We now have a <a href="https://discord.gg/jVuYrfM">TBM Discord bot</a> that provides individual, personalized stats about you or your friends.
The bot gives you stats about each of your champions, your matches, and even tells you how stacked the odds are when you start a match.</p>
<p>To use the bot, just <a href="https://discord.gg/jVuYrfM">click on the link to our Discord chat</a>, and start typing commands.
Once in Discord, you can get detailed explanations for any <code>command</code> by typing <code>!help command</code>.</p>
<p>In the rest of this post, I’ll give an overview of the commands that are currently available and what they do.</p>
<h2 id="current-match-stats">Current Match Stats</h2>
<p>So I think this is the big one.
As soon as the draft is over, you can get information about all the other players in the match.
I run this for pretty much all my matches since it gives me an idea of what to expect, who to worry about, and who to focus on.</p>
<p>In the screenshot below, “Rnk” is Hi-Rez’s in-game rank (e.g., Plat 3), while the ratings are <a href="/posts/2017/07/20/ratings/">The Better Meta champion ratings</a> that we use to build all our skill-based graphs.
The “chance to win” calculation is based on the champion composition of the two teams, as well as champion-specific and player-overall ratings.
It correctly predicts the winner about 2/3 of the time.</p>
<p><img class="bot-screenshot" src="/assets/discord_bot/current-6defc6f103d72398d02f8bc38d38f4e3bbdd00c9a5dc6324893cd4cb229e0279.png" /></p>
<p>There’s a similar <code>!match</code> command to get the same details about matches that happened in the past.</p>
<h2 id="player-stats">Player Stats</h2>
<p>This command gives you an idea of how good a player is at each champion, and how that compares to other people.
Because people often play differently in casual and ranked, there are separate ratings for each queue.
This is a great, objective way to figure out which champions you’re actually effective with.
People are often surprised about which champions they do the best with.</p>
<p><img class="bot-screenshot" src="/assets/discord_bot/stats-5bfaa7d9b0833eebcdbf269a9bd58d9003a9160b69ace142854e6a03a8dc5d89.png" /></p>
<h2 id="detailed-player-stats">Detailed Player Stats</h2>
<p>Here you can dive deeper into a player’s stats for a specific champion, broken down by legendary.
The different stats are: dps (damage per second), hps (heals per second), sps (shielding per second), cps (credits per second), kpm (kills per minute), and xpm (deaths per minute).
You get a breakdown for both casual and ranked, and a comparison of your recent matches with your overall stats, to see how you’re improving (or not!).</p>
<p><img class="bot-screenshot" src="/assets/discord_bot/champ-dd95b6212e3cb840c7e86f33e41c4f5261c523ee856e0e3eb12666a6d35cea16.png" /></p>
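<p>For concreteness, here’s how per-second and per-minute stats like these fall out of raw match totals. This is a minimal sketch; the input field names are hypothetical illustrations, not the bot’s actual data model.</p>

```python
def rate_stats(totals: dict) -> dict:
    """Convert raw match totals into per-second / per-minute rates.

    The keys of `totals` are hypothetical, chosen for illustration.
    """
    seconds = totals["duration_seconds"]
    minutes = seconds / 60
    return {
        "dps": totals["damage"] / seconds,     # damage per second
        "sps": totals["shielding"] / seconds,  # shielding per second
        "hps": totals["healing"] / seconds,    # heals per second
        "cps": totals["credits"] / seconds,    # credits per second
        "kpm": totals["kills"] / minutes,      # kills per minute
        "xpm": totals["deaths"] / minutes,     # deaths per minute
    }

# Example: a 20-minute match.
match = {
    "duration_seconds": 1200,
    "damage": 84_000, "shielding": 6_000, "healing": 30_000,
    "credits": 2_400, "kills": 12, "deaths": 6,
}
stats = rate_stats(match)
# dps = 84000 / 1200 = 70.0; kpm = 12 / 20 = 0.6
```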
<p>This command is brand new as of this week, so its details may change with feedback.</p>
<h2 id="recent-matches">Recent Matches</h2>
<p>This breaks down your performance in specific, recent matches, in terms of the same dps, kpm, etc. stats as above.
It compares you to other people who play the same champion in the same game mode to get an idea of how you played vs how most people play that champion in that mode.</p>
<p><img class="bot-screenshot" src="/assets/discord_bot/recent-3c8a122cfb2f474eb60ab399acea4a704363fb89a04bc680b73f3c6ac89ddd6a.png" /></p>
<p>You might find (as I did) that you play a support champion more damage-heavy and heal-light than other people.
If it works for you (your ratings are good), then maybe you’d take this information and lean into it—build a loadout around it, see how far you can push it.
Or, maybe you’d want to try dialing it back to see if you can sacrifice a little damage for a lot more healing.
You can even compare it to our <a href="/charts/individual_champion_performance/">Individual Champion Performance graphs</a> to see how you match up against other champions.</p>
<h2 id="graphing-your-ratings-over-time">Graphing Your Ratings Over Time</h2>
<p>The last command I’ll mention is actually a graphical command.
It gives you a scatterplot showing your overall ranked rating through time, after each and every match you’ve played.
As you can see below, you’ll be able to find plateaus, periods of growth, and (sadly) slumps.
This helps you figure out if your practice is paying off, or if your overall strategy of drafting is working.
Are you picking champions that work for you?
Is it time to try something different?</p>
<p><img class="bot-screenshot" src="/assets/discord_bot/graph-7529f623d3172058e1f1d0eff50632f7a62e6201790349ee9cbe06b46e3b484b.png" /></p>
<p>The actual graph is zoomable in Discord.</p>
<h2 id="summary">Summary</h2>
<p>So that’s an overview of what <a href="https://discord.gg/jVuYrfM">our Discord bot</a> currently offers.
I’m planning on adding all sorts of other commands as well, especially as I get feedback.
We also talk a lot of stats in the Discord, and there’s a ton of graphs there that haven’t yet made it to the site, so it might be worth dropping by just to check those out.
Whatever the reason, I hope you drop by and check it out.
Mention <code>@pseucrose</code> in Discord to send me a hello!</p>
<p><em>Posted 2018-10-20 · <a href="https://www.thebettermeta.com/posts/2018/10/20/discord-bot/">permalink</a></em></p>
<h1 id="ratings-and-champion-skill">Ratings and Champion Skill</h1>
<p>By far, the most common request we received after the launch of The Better Meta was to incorporate player ranking (aka Elo, MMR, etc.).
“Putting up some averages is cool, but it’s not going to cut it,” we heard.
“Of course Sha Lin does poorly on average, he’s hard to play! But <em>good</em> Sha Lins are amazing!” or “Who cares that Torvalds have the highest winrate—Torvalds are terrible! They’re only good at killing noobs.”
We agreed, and after a bit of work, we’re here to give you the data!
In this post, we’re going to walk you through how we rated players and how to use ratings to figure out which champions are advantaged.</p>
<p>tl;dr: the new graphs are <a href="/charts/champion_rating_overview">here</a>.</p>
<h2 id="preliminaries">Preliminaries</h2>
<p>The analysis done in this post uses data from five months of competitive matches ranging from OB42 to OB53, which amounts to over 2.55 million matches between about 600K different people.
This means each competitive player played an average of 42 games (median 12).
Further breaking it down per champion, it’s only 6 games (median 2)!
The following chart shows the number of players Y who’ve played at least X matches:</p>
<p><a name="matches_per_player"></a></p>
<div class="chart-wrapper">
<div id="matches_per_player" class="chart-body" style="height: 400px;">
<div class="loader">Loading...</div>
</div>
</div>
<p>This chart shows that there are a <em>lot</em> of “noobs”, so we absolutely have to take skill into consideration when looking at champion ability.
Let’s try to separate the wheat from the chaff.</p>
<p>Pro tip: all the graphs in this post are interactive!
Hover your mouse or click to see detailed information.</p>
<h2 id="basic-ratings">Basic Ratings</h2>
<p>Hi-Rez does not currently provide in-game ratings publicly, so we had to build our own ratings system.
Let’s take a look at that.</p>
<p>The most basic way to judge “how good” someone is at Paladins is to look at their winrate.
A perfect player would win every game and would have a winrate of 100%; the worst player would have a winrate of 0%; in general, most players are somewhere in the middle.</p>
<p>However, players are generally matched up against players of the same skill level (a post for another time!), which means winrate isn’t enough by itself.
We can do much better by taking into account the difficulty of the match—if you beat a bunch of terrible players, it says less about your skill than if you beat fantastic players.</p>
<p>Luckily, other people have done a lot of the hard work for us.
There are a number of rating algorithms like <a href="https://en.wikipedia.org/wiki/Elo_rating_system">Elo</a>, <a href="https://en.wikipedia.org/wiki/TrueSkill">TrueSkill</a>™, and <a href="https://en.wikipedia.org/wiki/Glicko_rating_system">Glicko</a>.
We will be using Glicko-2, a variation of the Glicko algorithm, with some class and team extensions described below.</p>
<h3 id="how-ratings-work-generally">How Ratings Work Generally</h3>
<p>Glicko-2 tracks two pieces of information about each player’s skill: an estimated rating (our best guess of the player’s real skill) and a measure of uncertainty (how confident we are in that guess).
The basic idea is that everyone starts at the same rating estimate (medium) and same uncertainty (high).
For every game played, the estimated rating gets adjusted up (on a win) or down (on a loss), and the uncertainty goes down.<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>
Winning an easy game increases your rating a little bit, while losing an easy game decreases it a lot; additionally, games with other players whose ratings we’re more certain about will change your rating more dramatically.
If you’re that kind of person, all the gory statistical details of this process can be found on <a href="http://www.glicko.net/glicko.html">Glickman’s website</a>.</p>
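<p>To make the update rule concrete, here is a single-match, single-opponent sketch in the style of the original Glicko-1 algorithm (the formulas follow Glickman’s published description). This is illustrative only, not our exact implementation: Glicko-2 additionally tracks a volatility term.</p>

```python
import math

Q = math.log(10) / 400  # Glicko scale constant

def g(rd: float) -> float:
    """Dampen the impact of an opponent whose own rating is uncertain."""
    return 1 / math.sqrt(1 + 3 * (Q * rd / math.pi) ** 2)

def glicko_update(r, rd, r_opp, rd_opp, score):
    """Update (rating, uncertainty) after one game.

    score is 1 for a win, 0.5 for a draw, 0 for a loss.
    """
    g_opp = g(rd_opp)
    # Expected chance of winning, given both ratings.
    expected = 1 / (1 + 10 ** (-g_opp * (r - r_opp) / 400))
    # Information gained from this one game.
    d2 = 1 / (Q**2 * g_opp**2 * expected * (1 - expected))
    new_var = 1 / (1 / rd**2 + 1 / d2)  # combined variance: always shrinks
    new_r = r + Q * new_var * g_opp * (score - expected)
    return new_r, math.sqrt(new_var)

# A win against an evenly matched opponent: rating rises, uncertainty shrinks.
r_after, rd_after = glicko_update(1500, 200, 1500, 200, score=1)
```

Note how the uncertainty only ever decreases with each game, exactly as described above, and how a surprising result (a low `expected`) moves the rating further than an expected one.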
<p>People often wonder why ratings systems don’t take into account things like damage done or kill/death ratios.
One major problem with this idea is that it encourages the wrong kind of behavior (see the <a href="https://en.wikipedia.org/wiki/Cobra_effect">cobra effect</a>). If people get points for kills, they will kill even when they should be healing; if they get points for heals, they will heal even when they should be retreating.
As long as ratings are tied directly to winning and losing matches, there’s no way to game the system.</p>
<h3 id="extensions-to-glicko">Extensions to Glicko</h3>
<p>Glicko is designed for games with two symmetric players (e.g., chess).
Paladins has both champions and teams, so we have to do something special to handle them.</p>
<p><a name="extension-champion"></a></p>
<h4 id="dealing-with-champions">Dealing with Champions</h4>
<p>We take a very restrictive and simple view of champions.
Each person gets a completely separate rating for each champion they play.
The matches a person plays with champion A do not affect, in any way, that person’s champion B rating.</p>
<p>The reason we do this is simple: people aren’t good at all champions.
Just because you’re a top-tier Androxus doesn’t mean you’re fantastic at playing Torvald.
In fact, because we keep ratings separate, we can look for correlations between champions (another upcoming post!).</p>
<p>In the following sections, whenever we refer to “a player’s rating” or even just “a rating”, we always mean “a player’s rating for a particular champion”.
This is quite a mouthful, so we don’t usually say the whole phrase.
Similarly, “a player” means “a person playing a particular champion”.
When a person switches champions, they’re a different player.</p>
<h4 id="dealing-with-teams">Dealing with Teams</h4>
<p>To deal with the 5v5 nature of Paladins, we use the <a href="http://rhetoricstudios.com/cyrad/thesis/">composite opponent</a> technique and extend it to take into consideration teammates.
For each match, we pretend as if each particular player played a 1v1 game against an imaginary opponent with a rating that is the sum of the player’s opponents’ ratings minus the sum of the allies’ ratings.</p>
<p>For example, if you have a rating of 2000 on a team with {1700, 1600, 1400, 1400} rated allies, and you played against a team of {2100, 1600, 1500, 1500, 1300}, then we pretend as if you’ve played a game against a single opponent with a rating of (2100 + 1600 + 1500 + 1500 + 1300) - (1700 + 1600 + 1400 + 1400), or 8000 - 6100, or 1900.
From your perspective, it’s as if you played against an opponent of a slightly lower rating than your own (1900 vs 2000).
This is done for every player in the match, so ten composite matches are given to the Glicko algorithm.</p>
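<p>The worked example above reduces to two sums. As a quick sketch:</p>

```python
def composite_opponent(opponents: list[int], allies: list[int]) -> int:
    """Rating of the imaginary 1v1 opponent: enemy sum minus ally sum."""
    return sum(opponents) - sum(allies)

# The example from the text: a 2000-rated player with four allies.
allies = [1700, 1600, 1400, 1400]
opponents = [2100, 1600, 1500, 1500, 1300]
rating = composite_opponent(opponents, allies)  # 8000 - 6100 = 1900
```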
<p>In our testing, our extension seems to have slightly more predictive power compared to the original technique (which only adds up the opponents), and indeed does the best of any technique we tried.
Taking into consideration “carries” (e.g., skewed team rating distributions) is something we’ve started looking into, but doesn’t seem to affect things very much.</p>
<h2 id="paladins-ratings">Paladins Ratings</h2>
<p>Now that we can rate players, we’re going to start looking at the data.
We’ll step through a series of modifications to a graph and wind up at the new <a href="/charts/champion_rating_overview">Champion Ratings</a> graph.</p>
<h3 class="nocount">Step 1: Overall Distribution of Ratings</h3>
<p>Let’s start with a look at how ratings are distributed:</p>
<div class="chart-wrapper">
<div id="all_ratings" class="chart-body" style="height: 400px;">
<div class="loader">Loading...</div>
</div>
</div>
<p>This <a href="https://en.wikipedia.org/wiki/Histogram">histogram</a> shows how common different ratings are for people who’ve played at least 10 games with the champion being rated.
For any given rating, it shows how many players have nearly the same rating (e.g., 76108 players are rated between 1675 and 1725).</p>
<p>If you add up all the bins in this histogram, you get about 740K total players.
We know <a href="#matches_per_player">from above</a> that there are ~330K actual people who have played at least 10 games, so this means each person has a little more than two champions in this histogram on average.
The rightmost (2400) bin has 600 ratings in it, which means it represents 600/740K, or 0.08%, or the top eight hundredths of one percent of all ratings.
In contrast, the middle bin contains about 10% of the ratings.</p>
<p>Remember, these rating numbers are for our system only and are not the same as those in-game, nor are they the same as on any other Paladins site!
The rating numbers themselves are arbitrary—all that matters is the shape of the distribution.
The shape is a “bell curve”, and the data is roughly <a href="https://en.wikipedia.org/wiki/Normal_distribution">normally distributed</a> (with a mean of 1709 and a standard deviation of 196).
Most players fall somewhere in the middle, and there are relatively few who are terrible or amazing.</p>
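<p>We can sanity-check those fit parameters against the histogram. Assuming ratings are exactly normal with mean 1709 and standard deviation 196, the middle 50-point bin (1675–1725, from the 76108-player example) should hold roughly the ~10% quoted above:</p>

```python
import math

MEAN, SD = 1709, 196  # the normal fit reported above

def phi(x: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bin_share(lo: float, hi: float) -> float:
    """Fraction of a N(MEAN, SD**2) population falling in [lo, hi)."""
    return phi((hi - MEAN) / SD) - phi((lo - MEAN) / SD)

middle = bin_share(1675, 1725)  # comes out near 0.10, matching ~10%
```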
<p>As <a href="#extension-champion">described above</a>, every person gets a separate rating for each champion they play, so if you play multiple champions, you’d be represented in this chart more than once.</p>
<h3 class="nocount">Step 2: From Bars to Lines</h3>
<div class="chart-wrapper">
<div id="ratings_3a" class="chart-body" style="height: 400px;">
<div class="loader">Loading...</div>
</div>
</div>
<p>This is basically the same graph as above, but we’re drawing lines between the tops of where each bar was.<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>
Just as before, the value Y at a rating X is how many players have about that rating (to within 50).</p>
<h3 class="nocount">Step 3: Breaking Things Down by Champion</h3>
<p>Remember, every person has a different rating for every champion they play.
This means the Step 2 graph contains your great Ying rating somewhere on the right as well as your terrible Androxus rating somewhere on the left, etc.
Let’s split each champion out separately and see what we get:</p>
<div class="chart-wrapper">
<div id="ratings_3b" class="chart-body" style="height: 400px;">
<div class="loader">Loading...</div>
</div>
</div>
<p>This graph shows how many players of each champion have a particular rating.
Each line in this graph represents the distribution of success of a different champion, and if you were to add them all together, you’d get back to the previous graph.</p>
<p>Probably the easiest thing to see in this graph is that some of the curves are “bigger” (they have more area beneath them) than others.
The area underneath a champion’s curve is a direct measure of the champion’s popularity: the more players who’ve played a particular champion, the more space their curve takes up on the graph.
New champions like Zhin and Ash simply don’t have as many players as older champions like Makoa, so their curves are much closer to the x-axis.</p>
<h3 class="nocount">Step 4: Normalizing by Champion Popularity</h3>
<p>We want to find the best champions even if they are new or they aren’t popular, so let’s go ahead and normalize everything by popularity:</p>
<div class="chart-wrapper">
<div id="ratings_3c" class="chart-body" style="height: 400px;">
<div class="loader">Loading...</div>
</div>
</div>
<p>Now all the curves trace out the same amount of area, so they are directly comparable.
We haven’t shifted anything right or left—we’ve only scaled each curve up or down.</p>
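<p>Normalizing by popularity is just dividing each champion’s bin counts by that champion’s total player count, so every curve encloses the same area. A minimal sketch with toy numbers (the champion names are only placeholders):</p>

```python
def normalize(histograms: dict[str, list[int]]) -> dict[str, list[float]]:
    """Scale each champion's rating histogram so its bins sum to 1,
    making curves comparable regardless of popularity."""
    result = {}
    for champ, bins in histograms.items():
        total = sum(bins)
        result[champ] = [count / total for count in bins]
    return result

# Toy data: a popular champion and a new one with the same shape.
hists = {"Makoa": [100, 400, 100], "Zhin": [10, 40, 10]}
norm = normalize(hists)
# both now trace the same curve: [1/6, 2/3, 1/6]
```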
<p>With that change, two specific curves are crying out for our attention: the purple one on the right and the orange one on the left.
Why are they sticking out while the others are jumbled together in the middle?
This time it’s because they represent “exceptional” champions—Torvald is the purple curve and Kinessa is orange.
As you may know from our <a href="/charts/individual_champion_performance/">Champion Performance</a> chart, Torvald has been doing particularly well (on average), and Kinessa particularly poorly.</p>
<h3 class="nocount">Step 5: Focus on Kinessa</h3>
<p>Let’s focus on Kinessa, and by way of comparison, Drogoz, for a second:</p>
<div class="chart-wrapper">
<div id="ratings_3d" class="chart-body" style="height: 400px;">
<div class="loader">Loading...</div>
</div>
</div>
<p>These two graphs show how Kinessa’s and Drogoz’s ratings are distributed.
They have a similar shape (indeed, in the previous chart we saw that all champion curves have similar shapes), but they are shifted horizontally and stretched a bit.</p>
<p>What can we learn from this picture?
For starters, an average Kinessa has a rating around 1550, while an average Drogoz has a rating around 1750.
This isn’t too surprising, since we already knew that Kinessa does worse on average than other champions.
Considering that most ratings fall somewhere between 1100 and 2400, a difference of 200 doesn’t seem all that bad!
However, nearly every rating bin below the 1700 average is dominated by Kinessa, while nearly every bin above 1700 is dominated by Drogoz.
<h3 class="nocount">Step 6: Stack to Compare</h3>
<p>To investigate this relationship in more detail, we’d like to know which champion has more or less of each rating.
By stacking them on top of one another (just by adding them), we make it easy to see which champion has the most people at any given advantage.
Here’s what that looks like:</p>
<div class="chart-wrapper">
<div id="kinessa_stacked1" class="chart-body" style="height: 400px;">
<div class="loader">Loading...</div>
</div>
</div>
<p>In case you’ve never seen an area chart before, it may be helpful to look at the equivalent bar chart:</p>
<div class="chart-wrapper">
<div id="kinessa_stacked1_bar" class="chart-body" style="height: 400px;">
<div class="loader">Loading...</div>
</div>
</div>
<p>Now we can easily see what proportion of each rating bin belongs to each champion.
This makes it super easy to compare ratings as long as they fall somewhere towards the middle, but it’s still really hard to see what’s going on in the extremes.</p>
<h3 class="nocount">Step 7: Fill to 100%</h3>
<p>To get the full picture, we just need to make all the tiny bars bigger.
Let’s scale up every bin to be the exact same height:</p>
<div class="chart-wrapper">
<div id="kinessa_stacked2" class="chart-body" style="height: 400px;">
<div class="loader">Loading...</div>
</div>
</div>
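<p>Scaling every bin to the same height is per-bin normalization: within each rating bin, divide each champion’s count by the bin’s total. Here is a sketch using toy numbers that echo the text (a 25%/75% top bin, a 16:1 bottom bin); they are illustrative, not the real counts:</p>

```python
def stack_to_100(histograms: dict[str, list[int]]) -> dict[str, list[float]]:
    """For each rating bin, convert per-champion counts into shares of
    that bin's total, so every stacked bar reaches 100%."""
    champs = list(histograms)
    n_bins = len(next(iter(histograms.values())))
    shares = {c: [] for c in champs}
    for i in range(n_bins):
        bin_total = sum(histograms[c][i] for c in champs)
        for c in champs:
            shares[c].append(histograms[c][i] / bin_total)
    return shares

# Toy data: worst bin splits 16:1, best bin splits 25%/75%.
hists = {"Kinessa": [160, 50, 1], "Drogoz": [10, 50, 3]}
shares = stack_to_100(hists)
```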
<p>Now we can clearly see the makeup of the entire range of performance<sup id="fnref:squash"><a href="#fn:squash" class="footnote">3</a></sup> for whatever champions we want.
We see as before that Kinessas tend to represent more of the lower ratings while Drogoz represents more of the upper end, but we can also see that the trend continues even in the extreme ratings.
Since Kinessa still maintains a significant presence at even the highest ratings, it’s possible to be a good Kinessa player.</p>
<p>The best of the best players are in the rightmost (2400) bin.
Inside, we see that it’s made up of 25% Kinessas and 75% Drogozes.
We already normalized by popularity, so this discrepancy reflects the fact that it simply requires more skill (practice, talent, etc.) to be as good with Kinessa as with Drogoz.
On the other hand, Drogoz is almost entirely absent at the lowest levels of play—this means it’s very difficult to be as bad at Drogoz as you can be with a Kinessa.
For every worst possible Drogoz, there are about 16 equally bad Kinessas.</p>
<h3 class="nocount">Step 8: Get Rid of Ratings</h3>
<p>So far, we’ve been looking entirely at our made-up rating numbers.
Even though the rightmost bin represents the top 0.08% of players, just how much does skill matter?
To put the finishing touch on our graph, let’s turn the ratings into something a little more universal—winrates.</p>
<p>We can figure out how much advantage a particular rating confers by looking at the relationship between them:</p>
<div class="chart-wrapper">
<div id="advantage_vs_winrate" class="chart-body" style="height: 400px;">
<div class="loader">Loading...</div>
</div>
</div>
<p>This graph shows, in practice, how a team’s rating advantage affects their chance to win.
For example, if your team has a 600 point ratings advantage versus another team, then you have a 65% chance to win.
This relationship<sup id="fnref:fit"><a href="#fn:fit" class="footnote">4</a></sup> means we can avoid looking at ratings directly, and instead look at how advantaged players are:</p>
<div class="chart-wrapper">
<div id="kinessa_stacked3" class="chart-body" style="height: 400px;">
<div class="loader">Loading...</div>
</div>
</div>
<p>On the x-axis, we’re now using “advantage” numbers.
In this context, we say you have an advantage of X if, when you replace an average player in an otherwise balanced game, your team becomes X% more likely to win than before.</p>
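<p>The footnoted logistic relationship can be illustrated with a small sketch. The scale parameter below is not our fitted value; it is simply chosen so that a 600-point advantage gives the 65% win chance quoted above.</p>

```python
import math

# Chosen so win_chance(600) == 0.65 exactly; illustrative, not the real fit.
SCALE = 600 / math.log(0.65 / 0.35)

def win_chance(rating_advantage: float) -> float:
    """Logistic map from a team's rating advantage to win probability."""
    return 1 / (1 + math.exp(-rating_advantage / SCALE))

# An even match is a coin flip; a 600-point edge wins ~65% of the time.
even = win_chance(0)      # 0.5
edge = win_chance(600)    # ~0.65
```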
<p>This is the graph we’ve been working towards—with this we can compare two or more champions across different levels of play.
Let’s add in all the other champions and take one final look.</p>
<h2 id="analysis">Analysis</h2>
<p>Here are all the champion ratings available at the end of OB53 put into our fancy chart:</p>
<div class="chart-wrapper">
<div id="ratings_4" class="chart-body" style="height: 400px;">
<div class="loader">Loading...</div>
</div>
</div>
<p>Just as before, the height of each colored band represents how many ratings there are of a particular champion at a particular rating.</p>
<p>We’ve already examined Kinessa and Drogoz in detail, but they stand out on this graph as well.
Looking at Kinessa’s performance on the extreme left, she is by far the easiest of all champions to be terrible with.
On the other side of the graph, we have champions like Drogoz, Bomb King, Buck, and Torvald who make up most of the highest ratings.
The highest ratings in the game are consistently achieved using those champions; conversely, very few people have been able to get high ratings for champions like Skye or Makoa.<sup id="fnref:3"><a href="#fn:3" class="footnote">5</a></sup></p>
<p>It’s really interesting to look at Androxus: his band opens up on both ends, which means that it’s very easy to be bad with him, but it’s also possible to be great.
While there are a few other champions that show this to a lesser extent (Kinessa and Cassie are other examples), Androxus is relatively even on both ends.
This is definitely behavior we can’t see looking at averages, so it’s exciting that we could capture it.</p>
<h2 id="conclusion">Conclusion</h2>
<p>If you followed through this post, you should understand how to read the new <a href="/charts/champion_rating_overview">Champion Rating Overview</a> graph.
Hopefully this graph shows you what you’ve always wanted to know about champion ratings, but if you have any other ideas of graphs you’d like to see related to ratings, let us know and we’ll give it a shot!
We’ve got a few ideas up our sleeves already, but we’d love to hear yours on <a href="//www.twitter.com/thebettermeta">Twitter</a>.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Ideally we’d also increase the uncertainty <del>at the beginning of every patch and</del> when players don’t play frequently, but we’re not doing this yet. [Edit 2017-08-16: We are now increasing uncertainty at the beginning of every patch.]&nbsp;<a href="#fnref:1" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:2">
<p>The y-axis could be made to show <a href="https://en.wikipedia.org/wiki/Probability_density_function">probability density</a> by dividing through by the total number of players and the width of a bucket (50), but we think using player counts is more intuitive.&nbsp;<a href="#fnref:2" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:squash">
<p>We have been surreptitiously squashing the most extreme data into the left- and rightmost two bins to ensure enough data for comparison. In this final chart, it means that the first and last bins are really cumulative sums, being sums of the more extreme ratings, while all other bins are not. Not a single person goes unaccounted for, even though there are so few people so far out.&nbsp;<a href="#fnref:squash" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:fit">
<p>The relationship is very closely fit by a <a href="https://en.wikipedia.org/wiki/Logistic_function">logistic function</a>.&nbsp;<a href="#fnref:fit" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:3">
<p>All of this is related to “skill floors” and “skill ceilings”, but we’re avoiding these terms because the meanings aren’t universally agreed upon.&nbsp;<a href="#fnref:3" class="reversefootnote">&#8617;</a></p>
</li>
</ol>
</div>
<p><em>Posted 2017-07-20 · <a href="https://www.thebettermeta.com/posts/2017/07/20/ratings/">permalink</a></em></p>
<h1 id="basic-legendaries-data">Basic Legendaries Data</h1>
<p>I’ve added a flat table showing competitive <a href="/charts/legendaries/">Legendaries</a> data broken down by patch.
I wanted to go ahead and get this out there before it was finished because I think it’s useful.</p>
<p>Although much of the data is surprising in that legendaries don’t seem to affect the winrate a whole lot, there are a few nice gems.
For example, Zhin’s “Retaliation” is significantly better than the other options, but is only chosen 10% of the time!
Go take a look at your favorite champs and see what you think.</p>
<p><em>Posted 2017-06-18 · <a href="https://www.thebettermeta.com/posts/2017/06/18/legendaries-table/">permalink</a></em></p>
<h1 id="first-look-at-team-composition">First Look at Team Composition</h1>
<p>With the newest <a href="/charts/role_splits/">Role Splits</a> chart, I’ve taken a first step toward analyzing team composition.
It gives a simple breakdown of how many of each role is viable in competitive play.</p>
<p>On first posting, it looks like flanks are seriously underpowered.
If you’re considering between flank and something else… go something else.
It’s also hard to go wrong with another tank, no matter how many you already have.</p>
<p><em>Posted 2017-05-04 · <a href="https://www.thebettermeta.com/posts/2017/05/04/role-splits/">permalink</a></em></p>
<h1 id="matches-per-minute">Matches Per Minute</h1>
<p>I got super bored waiting in the competitive queue the other night, so I decided to figure out the best time to play.
I’ve added a new chart, <a href="/charts/matches_per_minute/">Matches Per Minute</a>, which gives the average number of competitive matches that happen in one minute throughout the day.
It’s broken down by region, so you can look at the data that’s relevant for you.</p>
<p><em>Posted 2017-04-22 · <a href="https://www.thebettermeta.com/posts/2017/04/22/matches-per-minute/">permalink</a></em></p>
<h1 id="welcome">Welcome!</h1>
<p>Welcome to <em>The Better Meta</em>.</p>
<p>My goal for this site is to collect lots of quantitative analyses on Paladins competitive data.
I’ll organize these analyses into readable charts that update automatically, so as the game is patched, people can take a look at how the patches actually affect gameplay.
The first chart compares <a href="/charts/individual_champion_performance/">basic champion stats across time</a>, and includes winrate, damage, popularity, and so forth.</p>
<p>I decided to make this site after hearing so much back-and-forth about the state of the Paladins meta.
People were avoiding certain champions (Kinessa), and fervently picking others (Makoa).
Were these picks justified?
How would we ever figure it out without looking at the data?</p>
<p>Access to the underlying match data was kindly provided by <a href="http://www.hirezstudios.com/">Hi-Rez</a>, and is freely available to anyone who requests an API key from them.</p>
<p><em>Posted 2017-04-17 · <a href="https://www.thebettermeta.com/posts/2017/04/17/launch/">permalink</a></em></p>