Aggregate statistics from a sample of Field Test sites
These are aggregate statistics from a sample of 8 Field Test sites, ranging from large to small, representing 993 agents. I don't have the total number of participants, but I feel this is a representative sample.
These are all the non-zero statistics from each site. They will include some agents who were not actually on site and did not participate, but it is impossible to work out which. Some of the very low-end figures could be ignored, but I have left them as is. If the non-participating agents could be excluded, all the metrics would sit pretty close to a bell curve.
While Hexathalon was intended to test the limits of the system, it is clear the majority of agents met or exceeded the set tasks. Should the event be run again with the same metrics (and assuming the system didn't collapse), I think these would be reasonable tasks to set:
- Collect at least 12 media artifacts
- Make at least 50 successful portal hacks
- Deploy at least 100 resonators
- Deploy at least 30 mods
- Gain at least 60 glyph points
- Walk at least 3.5 km
Slightly more challenging, but achievable by the majority. However, I would allow a little more time, say 3 hours in total.
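For illustration, the proposed thresholds could be encoded as a simple pass/fail check. The metric field names and the sample agent record below are made up, not taken from any real stats export:

```python
# Proposed per-task thresholds from the list above.
# Field names are hypothetical, not from the real stats export.
TASKS = {
    "media_artifacts": 12,
    "portal_hacks": 50,
    "resonators_deployed": 100,
    "mods_deployed": 30,
    "glyph_points": 60,
    "distance_km": 3.5,
}

def tasks_met(stats):
    """Return the set of task names whose threshold the agent reached."""
    return {name for name, target in TASKS.items()
            if stats.get(name, 0) >= target}

# Example: an agent who met everything except the walking target.
agent = {"media_artifacts": 15, "portal_hacks": 62,
         "resonators_deployed": 110, "mods_deployed": 31,
         "glyph_points": 75, "distance_km": 3.1}
print(sorted(set(TASKS) - tasks_met(agent)))  # prints ['distance_km']
```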
Whatever the tasks, there will always be a long tail and some extreme outliers.
Before they go increasing the walking distance, they had better use the OS location API rather than just measuring the distance and time between portal interactions. I walked 6.5 km per Google Fit tracking: I made a big circle around our area to touch every artifact portal, then did a couple of missions with the remaining time. In-game I barely got past 3 km (3.08 to be exact), and that only updated from about 2.89 a few seconds before the event was over. Another thing: distance tracking was very inconsistent within our group. There were six of us walking, and our recorded distances ranged from 2.x to 4.x km despite all of us following pretty much the same movement pattern and touching the same portals.
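The undercounting described above is what you would expect if the tracker only sums straight-line hops between portal interactions instead of the actual walked path. A minimal sketch of the difference, using the standard haversine formula over made-up coordinates:

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    R = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def path_length_m(points):
    """Sum consecutive segment distances along a track."""
    return sum(haversine_m(a, b) for a, b in zip(points, points[1:]))

# Dense GPS fixes around a loop (invented coordinates) versus the sparse
# portal-touch waypoints a naive tracker might draw straight lines between.
gps_track = [(51.5000, -0.1200), (51.5010, -0.1190), (51.5020, -0.1200),
             (51.5010, -0.1210), (51.5000, -0.1200)]
portal_touches = [gps_track[0], gps_track[2], gps_track[4]]

print(round(path_length_m(gps_track)))       # full walked loop
print(round(path_length_m(portal_touches)))  # portal-to-portal shortcut, shorter
```

The curvier the route between portals, the bigger the gap between the two figures, which matches the 6.5 km vs 3.08 km discrepancy reported above.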
Here are some slightly modified statistics that attempt to exclude non-participating agents. The assumptions I have made are:
This reduces the sample size to 924 agents.
There were 873 agents who got at least one artifact. Cutting the stats off at the top 873 agents doesn't actually change the charts much, except for walking.
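The zero-artifact cut is essentially a filter applied before computing any summary statistics. A minimal sketch with invented records and field names:

```python
from statistics import mean, median

# Invented per-agent records; artifacts == 0 marks a likely non-participant.
agents = [
    {"artifacts": 0,  "walk_km": 0.0},
    {"artifacts": 14, "walk_km": 3.2},
    {"artifacts": 9,  "walk_km": 2.8},
    {"artifacts": 21, "walk_km": 4.1},
    {"artifacts": 0,  "walk_km": 0.1},
]

# Keep only agents with at least one artifact, as described above.
participants = [a for a in agents if a["artifacts"] >= 1]

print(len(participants))                                   # reduced sample size
print(round(mean(a["walk_km"] for a in participants), 2))  # stats over participants
print(round(median(a["walk_km"] for a in agents), 2))      # vs stats over everyone
```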
Two big skew factors:
1. Whether agents at a given site knew you could get more than one media item from the same portal, and that the count was for all media, not unique media (as they might be familiar with from anomaly scoring).
2. The apparently large subset of agents whose tracker dramatically undercounted, who picked up the pace at the end to make sure they got the full 3000 meters, and then had another big jump when their tally adjusted up in one leap.
Many thanks for compiling these stats, though.
> The apparently large subset of agents who had their tracker dramatically undercounted, picked up the pace at the end to make sure they got the full 3000 meters, and then had another big jump when their tally adjusted up in a big leap
I'm included in that. I was at about 2940 m when Prime crashed. I wasted time waiting for it to restart, then switched to Redacted and made a mad dash to do what I could in the last 5 minutes. I ended up with 3141 m.
This will be the final update, with aggregates of 1379 agents from 13 sites, which is a more than sufficient representative sample. I have attempted to exclude stats from all agents who collected 0 artifacts. It is possible there were some on site who didn't get any media, but that number would be minuscule. I have also limited the extent of the X axis on some of the charts with extremely long tails, although the outliers are still captured in the overall statistics.
Should the event be run again with the same metrics, I have slightly revised what I think would be achievable in 3 hours:
Am I still the only agent with 50 Artifact points?
I wasn't going to name you, but yes. There were also a 49 and two 48s.
The highest figures for each metric from the 13 sites I've aggregated are:
All very interesting. It would be handy if you added a marker for the top 10%. I know that number would differ from site to site, but knowing the overall cut-off for each task would be useful for setting targets for future events.
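A top-10% marker is just the 90th-percentile (9th decile) cut-off per metric. A sketch with invented scores, using Python's `statistics.quantiles`:

```python
from statistics import quantiles

# Hypothetical per-agent scores for one metric (e.g. successful hacks).
hacks = [12, 30, 44, 51, 55, 60, 62, 70, 75, 88, 95, 120]

# quantiles(..., n=10) returns the 9 decile cut points; the last one
# is the value above which roughly the top 10% of agents sit.
top_10_cutoff = quantiles(hacks, n=10)[-1]
print(top_10_cutoff)
```

Computed once per metric over the aggregated data, this would give exactly the kind of overall target line requested here.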