Effort Models
What is an Effort Model?
An effort model is a mathematical representation of how much time a given type of decision-making mechanism would take to fully map a decision space of W options.
The point of effort models is to capture how well each method scales, so that different types of decision-making mechanisms can be compared on equal footing.
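To make this concrete, here is a minimal sketch of one such model. It is our own illustration, not MetaPoll's published formula: exhaustively mapping W options with pairwise comparison requires W(W-1)/2 head-to-head judgments, so effort grows quadratically in W. The six-seconds-per-judgment constant is a hypothetical value chosen purely for illustration.

```python
# A minimal effort-model sketch (our illustration, not MetaPoll's published
# formula). Exhaustive pairwise comparison of W options needs W*(W-1)/2
# head-to-head judgments; with a fixed time per judgment, total effort
# grows quadratically in W.

def pairwise_effort_seconds(w: int, seconds_per_judgment: float = 6.0) -> float:
    """Total time to exhaust all head-to-head judgments among w options.

    The 6-second default is a hypothetical constant used only for illustration.
    """
    comparisons = w * (w - 1) // 2
    return comparisons * seconds_per_judgment

# The quadratic blow-up is visible even at modest option counts:
for w in (2, 25, 100, 1_000):
    print(f"W = {w:>5}: {pairwise_effort_seconds(w):,.0f} seconds")
```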
Why do this?
Mapping complete decision spaces might seem unusual; after all, organizations don't typically use traditional polls to capture them. There's a good reason for that: it would be prohibitively expensive, as our math will demonstrate. But this creates a measurement problem.
Let's use an analogy: consider how we benchmark computer hardware. Imagine comparing two graphics cards, a $30 card and a $2,000 card, by only running the original Doom from 1993. Both GPUs would easily push 300+ frames per second, leading to a seemingly reasonable conclusion: "Performance is identical; just buy the cheaper one."
This conclusion, while arithmetically correct, completely misses the point. The moment you load a modern AAA game at 4K resolution with ray-tracing enabled, the budget card collapses to single-digit frame rates or crashes entirely, while the high-end GPU maintains buttery smooth performance. The simplistic benchmark hid the exact part of the performance curve where the architectural differences become not just relevant, but decisive.
Traditional decision mechanisms are essentially that budget GPU: perfectly adequate for simple, disconnected yes/no questions (the coordination equivalent of the original Doom), but they collapse under the computational pressure of dozens of interrelated parameters and trade-offs. MetaPoll, by contrast, is ready for the "4K ultra settings" coordination scenario from the beginning.
We're flipping the standard question on its head. Instead of asking, "How do MetaPolls compare to how traditional polls are currently used?"
we ask: "What happens when we ask each system to capture the full complexity of a multi-dimensional decision space?"
The purpose is not to accept the status quo, but to ask how better-scaling decision technology might open up a new world of possibilities. The computational complexity analysis below answers precisely this question, and the scaling gap is, to put it mildly, dramatic.
Performance Table Results
This section summarizes the results without diving into the math.
In each of the tables below, we compare MetaPoll's Multi-Dimension Consensus Trees (MDCTs) to another type of decision mechanism.
DISCLAIMER: While we've taken care to calculate as accurately as we can, we acknowledge that our calculations, approach, and assumptions may be incorrect. This math has not been formally reviewed, as we lack the resources to do so at the time of posting. Even so, we feel our initial findings are worth sharing. We encourage you to check the math yourself, and if you see any issues with the calculations, assumptions, approach, or results posted, we would welcome speaking with you to correct them.
Pairwise Comparisons vs MDCT
| Pairwise Comparisons (seconds) | MDCT (seconds) | MDCT Advantage |
| --- | --- | --- |
| 6 | 6 | 1× (equal) |
| 1,800 (≈30 minutes) | 77 (≈1 minute) | 23× |
| 47,461,907 (≈1.5 years) | 309 (≈5 minutes) | 153,812× |
| 813,397,231 (≈25.8 years) | 926 (≈15 minutes) | 878,670× |
| 3,052,750,275 (≈97 years) | 1,543 (≈25 minutes) | 1,978,634× |
| 18,391,313,790 (≈583 years) | 3,086 (≈50 minutes) | 5,960,148× |
| 1,194,920,596,718 (≈37,891 years) | 30,857 (≈4.5 hrs) | 77,448,557× |
| 2,854,957,249,297,320 (≈90,530,101 years) | 308,571 (≈85 hrs) | 9,252,176,271× |
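In the spirit of the disclaimer above, here is a small self-contained script for spot-checking the arithmetic in the table: it converts raw second counts into the approximate human-readable durations shown in parentheses and recomputes the speedup ratio for a few sample rows. Small deviations from the table's printed ratios are expected, since the source figures appear to have been computed before rounding.

```python
# Spot-check helper for the table above: converts raw seconds into the
# approximate units shown in parentheses and recomputes the speedup ratio.

MINUTE, HOUR, YEAR = 60, 3_600, 365 * 24 * 3_600

def humanize(seconds: float) -> str:
    """Render a second count in roughly the units the table uses."""
    if seconds < MINUTE:
        return f"{seconds:g} seconds"
    if seconds < HOUR:
        return f"~{seconds / MINUTE:.0f} min"
    if seconds < YEAR:
        return f"~{seconds / HOUR:.1f} hrs"
    return f"~{seconds / YEAR:,.1f} years"

# (pairwise_seconds, mdct_seconds) for a few rows of the table above.
rows = [
    (6, 6),
    (1_800, 77),
    (47_461_907, 309),
    (2_854_957_249_297_320, 308_571),
]

for pairwise, mdct in rows:
    speedup = pairwise / mdct
    print(f"pairwise {humanize(pairwise)} | MDCT {humanize(mdct)} | {speedup:,.0f}x")
```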
