MDCT Effort Model
This is the mathematical model we use to estimate the time spent mapping a decision space of options.
Introduction to the MetaPoll (MDCT) Effort Model
In designing mechanisms like MetaPoll's MDCT for mapping complex decision spaces, understanding voter effort is crucial—it's the bottleneck that determines whether people engage deeply or drop off.
This model quantifies two key aspects: expression effort (V), the time needed to rank preferences in the UI, and interpretation effort (I), the time to review aggregated results.
By factoring in tree structure, selective participation, and navigation costs, we aim to create a realistic estimate that surfaces MetaPoll's efficiency: low barriers for input, quick insights from output.
DISCLAIMER: While we've taken care to calculate as accurately as we can, we acknowledge that our calculations, approach, and assumptions may be incorrect. This math has not been formally proven at the time of posting this information. However, we feel even our initial work is worth showing. That being said, we encourage you to check the math yourself, and if you see any issues with the calculations, assumptions, approach, or results posted, we would welcome speaking with you to correct them.
Bottom line up front:
Rather than walking through all the background math before arriving at the final calculations, we'll provide a condensed version of the final results here for your convenience.
These variables, drawn from our standardized set, focus on those directly used in the V and I calculations.
Set Variable Values:
A = 0.2 (20% of layers fully skipped)
U = 0.4 (40% of options unranked within engaged layers). We believe these values to be conservative, though they are adjustable based on poll topic and audience.
Example Poll Primitive Values:
Computed Derivative Values:
Equations for MetaPoll's V and I
Voter Expression Effort
V is the time required by the Expresser (Voter) to express their preferences in a MetaPoll.
V = N_E × (R_{AVG} × T_V + 2 × T_{NAV})
This sums ranking time across engaged layers and doubled navigation (for back-and-forth layer traversal).
Calculating V using the treasury example poll:
Interpretation Effort
I is the time required by the Interpreter to view and understand the results in a MetaPoll.
I = W × T_I + 2 × N × T_{NAV}
This covers a user exploring all MetaPoll results (the full option tree) and doubled navigation while exploring the results.
Calculating I using the treasury example poll:
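To make the arithmetic above easy to reproduce, here is a minimal Python sketch of the V and I calculations. The rates and timing values (A, U, T_V, T_I, T_NAV) come from this page; the tree-structure counts and B_{AVG} below are illustrative placeholders we chose for the sketch, not the treasury poll's actual figures, so the printed numbers are for demonstration only.

```python
# Minimal sketch of the V and I calculations above.
# A, U, T_V, T_I, T_NAV come from this page; num_nonleaf, num_leaf, and b_avg
# are illustrative placeholders, not the actual treasury poll values.

A = 0.2       # fraction of layers fully skipped
U = 0.4       # fraction of options left unranked within engaged layers
T_V = 5.0     # seconds per ranking interaction
T_I = 2.0     # seconds per interpretation interaction
T_NAV = 3.0   # seconds per layer navigation

num_nonleaf = 8    # N: placeholder count of dimensions/sub-dimensions
num_leaf = 50      # L: placeholder count of leaf options
b_avg = 6.14       # B_AVG: average options per branch (from the example)

# Derivatives, in dependency order
W = num_nonleaf + num_leaf       # total rankable options (Theorem 1)
N_E = (1 - A) * num_nonleaf      # engaged non-leaf layers
Z = (1 - A) * (1 - U)            # overall fraction ranked (Theorem 2)
R_AVG = b_avg * (1 - U)          # ranked items per engaged layer

# Effort estimates, in seconds
V = N_E * (R_AVG * T_V + 2 * T_NAV)    # expression effort
I = W * T_I + 2 * num_nonleaf * T_NAV  # interpretation effort

print(f"Z = {Z:.2f}, R_AVG = {R_AVG:.2f}")
print(f"V ≈ {V:.0f} s, I ≈ {I:.0f} s")
```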
Example MetaPoll used in calculations:
MDCT Mathematical Framework
In exploring hierarchical systems for collective decision-making, like those in MetaPoll's Multi-Dimensional Consensus Trees (MDCTs), it's valuable to have a framework that's both precise and inviting.
This model draws from decision theory to quantify scale, effort, and engagement in trees. We'll begin with foundational concepts, then build to the variables and their math, using everyday language where possible while preserving rigor for technical readers.
Domains, Types, and Constraints:
Each variable has a clear domain (range of possible values) and constraints to avoid nonsense results.
Integers: ℕ₀ (non-negative, including 0), ℕ (positive, starting from 1).
Reals: ℝ≥0 (non-negative), [0,1] (fractions between 0 and 1 inclusive).
Key constraints: for example, N_E ≤ N and B_{AVG} ≤ B_{MAX} by construction.
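As a concrete illustration of these domains and constraints, the following sketch shows how they could be enforced in code. The container, its field names, and the specific placeholder values are our own assumptions for illustration, not part of any MetaPoll implementation.

```python
from dataclasses import dataclass

@dataclass
class MDCTPrimitives:
    """Illustrative container for the primitive variables; the names are ours."""
    num_leaf: int      # L ∈ ℕ₀
    num_nonleaf: int   # N ∈ ℕ₀
    b_avg: float       # B_AVG ∈ ℝ≥0
    b_max: int         # B_MAX ∈ ℕ, ≥ 1
    d_max: int         # D_MAX ∈ ℕ
    a: float           # A ∈ [0, 1], fraction of layers skipped
    u: float           # U ∈ [0, 1], fraction unranked in engaged layers
    t_vote: float      # T_V in seconds, ≥ 0
    t_interp: float    # T_I in seconds, ≥ 0
    t_nav: float       # T_NAV in seconds, ≥ 0

    def validate(self) -> None:
        # Domain checks: nothing negative, fractions bounded.
        assert self.num_leaf >= 0 and self.num_nonleaf >= 0
        assert self.b_max >= 1 and self.d_max >= 1
        assert 0 <= self.a <= 1 and 0 <= self.u <= 1
        assert min(self.t_vote, self.t_interp, self.t_nav) >= 0
        # Structural constraint by construction: no branch averages above the maximum.
        assert self.b_avg <= self.b_max

# Placeholder values, chosen only to show the checks passing.
MDCTPrimitives(num_leaf=50, num_nonleaf=8, b_avg=6.14, b_max=9,
               d_max=3, a=0.2, u=0.4,
               t_vote=5, t_interp=2, t_nav=3).validate()
```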
The high level: Primitives, Derivatives, and Dependencies
Primitives: the raw structure and timing inputs: L (leaf count), N (non-leaf count), B_{AVG}, B_{MAX}, D_{AVG}, D_{MAX}, the apathy rate A, the unranked fraction U, and the interaction times T_V, T_I, and T_{NAV}.
Derivatives: quantities computed from the primitives: W, N_E, Z, R_{AVG}, V, and I.
Dependency tree:
The option space is calculated by:
W = N + L (via Theorem 1).
The apathy rate feeds the overall ranked fraction, calculated by:
Z = (1 − A) × (1 − U) (via Theorem 2).
The branch engagement rate is calculated by: N_E = (1 − A) × N.
The option engagement rate is calculated by:
R_{AVG} = B_{AVG} × (1 − U).
Culminating in total voting effort:
V = N_E × (R_{AVG} × T_V + 2 × T_{NAV}).
And total result interpretation effort:
I = W × T_I + 2 × N × T_{NAV}.
This graph shows how the model progresses from raw structure to measurable effort.
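For readers who prefer code to diagrams, here is one way to express the dependency chain as small functions, wired in the same order as the list above. The function names and the placeholder inputs in the example wiring are our own choices; only the formulas mirror the model.

```python
# A sketch of the dependency chain above as small pure functions:
# primitives -> W, Z, N_E, R_AVG -> V, I. Names are ours.

def option_space(n_nonleaf: int, n_leaf: int) -> int:        # W, Theorem 1
    return n_nonleaf + n_leaf

def ranked_fraction(a: float, u: float) -> float:            # Z, Theorem 2
    return (1 - a) * (1 - u)

def engaged_layers(a: float, n_nonleaf: int) -> float:       # N_E
    return (1 - a) * n_nonleaf

def ranked_per_layer(b_avg: float, u: float) -> float:       # R_AVG
    return b_avg * (1 - u)

def voting_effort(n_e: float, r_avg: float, t_v: float, t_nav: float) -> float:
    return n_e * (r_avg * t_v + 2 * t_nav)                   # V

def interpretation_effort(w: int, n_nonleaf: int, t_i: float, t_nav: float) -> float:
    return w * t_i + 2 * n_nonleaf * t_nav                   # I

# Example wiring with placeholder structural counts and the example timings.
w = option_space(8, 50)
v = voting_effort(engaged_layers(0.2, 8), ranked_per_layer(6.14, 0.4), 5, 3)
i = interpretation_effort(w, 8, 2, 3)
print(f"W = {w}, V ≈ {v:.0f} s, I ≈ {i:.0f} s")
```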
MetaPoll's MDCT Variables
The full list of variables used in calculating MDCT effort.
L: The number of leaf nodes, the options with no children (domain: ℕ₀). Rationale: This highlights endpoints that represent executable choices.
N: Total non-leaf nodes (dimensions and sub-dimensions). Rationale: Counts the structural layers voters navigate.
N_E: Engaged non-leaf nodes, N_E = (1 − A) × N. Rationale: Represents visited layers; a derivative capturing selectivity.
W: The total rankable options, W = N + L (domain: ℕ). Rationale: This sums all engageable elements (dimensions, sub-dimensions, leaves), serving as the flattened total for effort calculations.
B_{AVG}: Average branching factor, the average number of options per branch (sub-options per non-leaf). Rationale: Measures items per layer; in the example, B_{AVG} ≈ 6.14.
B_{MAX}: The maximum branching factor, the most options in any single branch (domain: ℕ ≥ 1). Rationale: This flags high-effort spots; property: B_{MAX} ≥ B_{AVG}.
D_{AVG}: Average tree depth across paths (domain: ℝ > 0). Rationale: Scales with layering; though not directly used here, it informs the broader context.
D_{MAX}: The maximum depth, the longest path from top to bottom (domain: ℕ). Rationale: This deals with uneven layers; bound: D_{MAX} ≤ N, but usually < 20 for usability.
Z: The overall fraction of options ranked (domain: [0,1]). Rationale: This measures engagement; plain text equation: Z = (1 − A) × (1 − U), assuming independence (no correlation between layer skips and unranking); if correlated, Z < (1 − A) × (1 − U) as an upper bound.
U: The average fraction left unranked within engaged layers (domain: [0,1], e.g., 0.4-0.5 from voter patterns). Rationale: This shows selective focus.
A: The fraction of layers or nodes ignored entirely (domain: [0,1], e.g., 0.5-0.7). Rationale: This reflects broader disinterest.
R_{AVG}: Average ranked items per engaged layer, R_{AVG} = B_{AVG} × (1 − U). Rationale: Captures partial ranking; a derivative; in the example, R_{AVG} ≈ 6.14 × 0.6 ≈ 3.68.
T_V: Average time per voting interaction (domain: ℝ≥0, in seconds, e.g., 5s for ranking). Rationale: UI-specific cost for active input.
T_I: Average time per interpretation interaction (domain: ℝ≥0, in seconds, e.g., 2s for reading). Rationale: Lower cost for passive review.
T_{NAV}: The time per layer navigation (domain: ℝ≥0, in seconds, e.g., 3s for clicks/loads). Rationale: Overhead for moving between branches; added to account for real UI friction in both V and I.
V: The time a voter spends expressing preferences (domain: ℝ≥0, in seconds). Rationale: Key for low-effort designs; plain text asymptotic: V = O(N_E × R_{AVG} × T_V), scaling linearly with the number of engaged layers and the rankings per layer.
I: The time to read and understand results (domain: ℝ≥0, in seconds). Rationale: This focuses on post-vote clarity, often shorter than V; plain text asymptotic: I = O(W × T_I).
Connections to Classic Ideas in Math and Decision Theory
Tree variables like D_{AVG} and B_{AVG} map to value trees in multi-criteria decision analysis (Belton and Stewart, 2002):
MDCT ≡ hierarchical attribute tree, with D_{AVG} as average level depth and B_{AVG} as mean sub-criteria count.
Axioms, Theorems, and Properties: The Logical Backbone
We lay out axioms (basic truths) first, then theorems with proofs, and corollaries for extra insights. The goal is to make the model's claims verifiable.
Axioms:
Axiom 1 (Positivity): All counts, times, and branching factors are non-negative, and the fractions A, U, and Z lie in [0,1] (nothing negative, fractions bounded).
Axiom 2 (Additivity): Rankables split into disjoint non-leaves and leaves: every rankable option is either a non-leaf (dimension or sub-dimension) or a leaf, never both.
Theorems:
Theorem 1 (Additivity of W): W = N + L. Proof: From Axiom 2, non-leaves and leaves are disjoint, and together they cover all rankables. So, the count is W = N + L. Corollary 1.1: For flat spaces with a single non-leaf layer, N = 1, so W = L + 1.
Theorem 2 (Engagement Fraction): Z = (1 − A) × (1 − U). Proof: Assuming independence (the probability of unranking given engagement equals U, with covariance zero), the joint engagement is the product: let P(layer engaged) = 1 − A and P(ranked | engaged) = 1 − U; then Z = (1 − A) × (1 − U). Corollary 2.1: If there is positive correlation (covariance > 0, e.g., apathy spreads), Z < (1 − A) × (1 − U), giving the product as an upper bound.
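As a quick sanity check of Theorem 2's independence case, the following Monte Carlo sketch skips each layer with probability A and leaves each option in an engaged layer unranked with probability U, then compares the empirical ranked fraction to (1 − A) × (1 − U). The voter, layer, and option counts and the seed are arbitrary simulation choices of ours.

```python
# Monte Carlo check of Theorem 2 under independence: layers are skipped with
# probability A, and options in engaged layers are left unranked with
# probability U, all draws independent.
import random

random.seed(0)
A, U = 0.2, 0.4
num_voters, num_layers, options_per_layer = 10_000, 8, 6

ranked = total = 0
for _ in range(num_voters):
    for _layer in range(num_layers):
        engaged = random.random() >= A            # layer skipped with prob A
        for _opt in range(options_per_layer):
            total += 1
            if engaged and random.random() >= U:  # unranked with prob U
                ranked += 1

empirical_Z = ranked / total
print(f"empirical Z = {empirical_Z:.3f}, (1 - A)(1 - U) = {(1 - A) * (1 - U):.3f}")
# Both values should agree closely (≈ 0.48), consistent with Theorem 2.
```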
Theorem 3 (Monotonicity of Effort): ∂V/∂Z > 0 (effort rises with engagement). Proof: Write V ≈ N × (B_{AVG} × Z × T_V + 2 × T_{NAV}) (using R_{AVG} ≈ B_{AVG} Z). The derivative ∂V/∂Z = N × B_{AVG} × T_V > 0, as all terms are positive. Corollary 3.1: As Z → 0 (in the limit of full apathy, where no layers are engaged), V → 0 (no engagement means no time spent).
Theorem 4 (Upper Bound on Effort): V ≤ N × (B_{MAX} × T_V + 2 × T_{NAV}). Proof: Worst case: the voter engages all N layers and ranks the widest layer's items (B_{MAX} per layer), each at the full T_V, with doubled navigation. Summing over layers gives N × (B_{MAX} × T_V + 2 × T_{NAV}). Corollary 4.1: For big trees, V is linear in the number of engaged layers.
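The bound and monotonicity claims are easy to probe numerically. This sketch uses the approximation from the Theorem 3 proof (R_{AVG} ≈ B_{AVG} × Z) and placeholder structural counts of our own choosing to confirm that effort rises with Z and never exceeds the Theorem 4 worst case.

```python
# Numerical check of Theorems 3 and 4 using the Theorem 3 approximation
# (R_AVG ≈ B_AVG × Z). N, B_AVG, and B_MAX are illustrative placeholders.

N, B_AVG, B_MAX = 8, 6.14, 9
T_V, T_NAV = 5.0, 3.0

def voting_effort(z: float) -> float:
    # V ≈ N × (B_AVG × Z × T_V + 2 × T_NAV)
    return N * (B_AVG * z * T_V + 2 * T_NAV)

upper_bound = N * (B_MAX * T_V + 2 * T_NAV)   # Theorem 4 worst case

zs = [i / 10 for i in range(11)]
efforts = [voting_effort(z) for z in zs]

# Theorem 3: effort is monotonically non-decreasing in Z.
assert all(e1 <= e2 for e1, e2 in zip(efforts, efforts[1:]))
# Theorem 4: every value stays below the worst-case bound.
assert all(e <= upper_bound for e in efforts)

print(f"V ranges from {efforts[0]:.0f}s (Z=0) to {efforts[-1]:.0f}s (Z=1); bound = {upper_bound:.0f}s")
# Note: with this approximation a navigation floor of 2 × N × T_NAV remains at
# Z = 0; it vanishes in the N_E-based form, where skipped layers are never navigated.
```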
Closing
Looking at these calculations, the mathematical framework appears promising, though we must acknowledge it hasn't undergone formal proof. The model rests on reasonable assumptions about voter behavior and UI interaction times, but real-world deployment will inevitably reveal edge cases and refinements needed.
What strikes us most is the fundamental insight: by decomposing complex decisions into hierarchical trees and allowing selective engagement, we can achieve effort reductions of 10,000x and greater, something that seemed impossible with traditional voting mechanisms. If these efficiency gains hold in practice, MetaPoll's MDCT represents a paradigm shift in how we capture collective intelligence at scale.
The ability to map high-dimensional preference spaces while keeping voter effort between logarithmic and linear in the size of the option space opens many new possibilities.
As with any coordination technology, the proof will come through implementation and iteration, but the mathematical foundations suggest we're on the right track toward more expressive, yet dramatically more scalable collective intelligence tools like MetaPoll.