One could, of course, go down the rabbit hole of comparing dozens of other combinations, but I’ve touched on what I think, at least, are the big ones. There have been summaries along the way, but I thought I would wrap things up with some closing thoughts, and a couple more…amusing…looks at the data.
Do gaming club members perform better than us solo, Ronin types? Assuming that all the players not reporting a gaming club aren’t in gaming clubs (a reasonable, but unverifiable supposition):
The 51% of tournament entrants who reported club membership had a median score of 2771, and those who were on their own had a median score of 2022. And, in contrast to the other comparisons we’ve gone through, this one is statistically significant (p = 0.01). A visual look:
There were clearly players on both ends of the spectrum in both categories, but top players in particular seem to more commonly be club members (indeed, of the Top 5, only 1 isn’t a club member). I doubt club membership causes tournament wins (though I’m sure some club recruitment types might want it to), but players serious enough to place in a very competitive tournament probably play enough for club membership to make sense.
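The post doesn’t say which test produced the p = 0.01 above, but a comparison of medians between two groups like this can be sketched with a simple nonparametric permutation test: pool the scores, shuffle them between the two groups, and count how often a random split produces a median gap at least as large as the observed one. The scores below are made-up illustrative numbers, not the actual LVO data.

```python
import random
import statistics

def median_diff_perm_test(a, b, n_perm=10_000, seed=42):
    """Two-sided permutation test for a difference in medians.

    Shuffles the pooled scores between the two groups and counts how
    often a random split yields a median gap at least as large as the
    observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.median(a) - statistics.median(b))
    pooled = list(a) + list(b)
    n_a = len(a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.median(pooled[:n_a]) - statistics.median(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Made-up illustrative scores -- NOT the actual LVO data.
club = [3100, 2900, 2771, 2650, 2500, 2300, 2100, 1900]
solo = [2600, 2400, 2100, 2022, 1950, 1800, 1600, 1400]

p = median_diff_perm_test(club, solo)
print(f"median gap: {abs(statistics.median(club) - statistics.median(solo))}, p ~ {p:.3f}")
```

With the real data (194 players, 51% reporting a club), the same shuffle-and-count logic applies; only the input lists change.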
Sure, it’s all well and good to win the tournament, but what’s the use if you don’t look good doing it? Looking at all the armies by their hobby score rather than their score in the tournament:
- Blood Angels (BA): 22
- Combined Chaos (CC): 18
- Chaos Space Marines (CSM): 26
- Dark Angels (DA): 19.5
- Chaos Daemons (CD): 20
- Dark Eldar (DE): 18.5
- Double-Tau (DT): 18
- Eldar (E): 22
- Eldar/Dark Eldar (E/DE): 22.5
- Grey Knights (GK): 21
- Imperial Guard (IG): 24.5
- Necrons (NE): 18
- Orks (ORK): 19
- Space Marines (SM): 20
- Sisters of Battle (SoB): 22
- Space Wolves (SW): 19
- Tau (Tau): 23.5
- Taudar (Tau/E): 22
- Tyranids (NID): 23
- Overall Median: 21
Seriously, Space Wolf players, pick it up – the only army with entirely below-median scores for both tournament performance and hobby scores is the Wolves. Either that, or blame it on Games Workshop, and lobby for fewer “Wolves Riding Wolves While Dual-Wielding Combi-Wolves” models. The difference between armies for hobby score is about as insignificant as it gets (p = 0.49). Beyond some apparently beautifully painted Tau forces, everyone seems to manage an even distribution, and there’s no clear “hobbyist’s choice” army – at least not among hobbyists who also frequent the competitive tournament scene.
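The p = 0.49 above comes from comparing hobby scores across many armies at once; again the specific test isn’t named in the post, but the multi-group version of the permutation idea is one way to sketch it: shuffle the army labels and ask how often the spread of per-army medians is at least as large as the observed one. The scores and army labels below are hypothetical, not the LVO data.

```python
import random
import statistics
from collections import defaultdict

def group_spread_perm_test(scores, labels, n_perm=5_000, seed=7):
    """Permutation test for whether group medians differ more than chance.

    Statistic: range (max - min) of the per-group medians. Labels are
    shuffled to build the null distribution.
    """
    def spread(lbls):
        groups = defaultdict(list)
        for score, label in zip(scores, lbls):
            groups[label].append(score)
        medians = [statistics.median(v) for v in groups.values()]
        return max(medians) - min(medians)

    rng = random.Random(seed)
    observed = spread(labels)
    shuffled = list(labels)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if spread(shuffled) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical hobby scores for three armies -- not the LVO data.
scores = [22, 20, 24, 19, 18, 21, 23, 25, 17, 20, 22, 19]
labels = ["SW", "SW", "Tau", "Tau", "SW", "IG", "Tau", "Tau", "SW", "IG", "IG", "SW"]
p = group_spread_perm_test(scores, labels)
print(f"p ~ {p:.2f}")
```

A large p-value here, as with the reported 0.49, just means the observed gap between the best- and worst-painted armies is about what label-shuffling alone would produce.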
There are limitations to any analysis. This is, after all, only one tournament’s results, of only 194 players. It is not a random sample, and not a comprehensive look at the entire game. It almost certainly doesn’t represent your local meta.
But it is an attempt – and if all goes well, a preview of things to come. My hope is to continue to analyze tournaments, doing a “rolling analysis” that builds on previous results, and to make some interesting, interactive tools from the data.
Looking back at the LVO as a “glimpse of 6th Edition”, however limited, was interesting. While many of the “power lists” of the Edition are indeed quite strong, they are rarer than I expected them to be. And some of the narratives around the meta in 6th Edition and how broken the tournament scene was were…off. Taudar, while consistent performers, weren’t tournament-crushing leviathans, nor, for that matter, were pure Eldar lists. It does, however, appear that a lot of the narrative around the state of Chaos is correct – Best Chaos is Allied Chaos.
Allied armies were extremely popular, and at the upper end of the tournament consistently performed better than their non-allied counterparts, but for the middle of the pack, it didn’t much matter. Some things definitely need some work though – from the perspective of the LVO results, the Dark Angels, Space Wolves and Orks are pathologically non-competitive armies. While the Orks are getting a codex soon, and the Space Wolves are hopefully on the slate, the Dark Angels codex is fairly recent, and thus might not be revised for a good long while. The overall meta was actually fairly healthy, however, with most armies having their performance centered around the overall average result. And even the most powerful lists were fairly rare – there’s not a huge pull of a few power lists making up the majority of games. The game, while there is always room for improvement, is not nearly as broken as the gloomiest parts of the Internet suggest.
Incidentally, in the name of openness, all of the code and data for the analysis conducted in the last four posts (and most of the code and data for this blog generally) is available on GitHub.