MDCT Effort Model

This is the mathematical model we use to calculate the time spent mapping a decision space of W options.

Introduction to the MetaPoll (MDCT) Effort Model

In designing mechanisms like MetaPoll's MDCT for mapping complex decision spaces, understanding voter effort is crucial—it's the bottleneck that determines whether people engage deeply or drop off.

This model quantifies two key aspects: expression effort V, the time needed to rank preferences in the UI, and interpretation effort T, the time needed to review the aggregated results.

DISCLAIMER: While we have taken care to calculate as accurately as we can, we acknowledge that our calculations, approach, and assumptions may be incorrect. This math has not been formally proven as of the time of posting. Even so, we feel our initial work is worth showing. We encourage you to check the math yourself, and if you see any issues with the calculations, assumptions, approach, or results posted, we would welcome speaking with you to correct them.

Bottom line up front:

Instead of going through all the background math and leading up to the final calculations, we'll provide a condensed calculation of the final results for your convenience.

These variables, drawn from our standardized set, are the ones directly used in the V and T calculations.

To see the full set of variables, and more fully explore the background math, read this section.

Set Variable Values:

  • Z_{SKIP} = 0.2 (20% of layers fully skipped)

  • Z_{UNRANKED} = 0.4 (40% of options unranked within engaged layers). We believe these values are conservative, though they are adjustable based on poll topic and audience.

  • NAV_{TIME} = 3 seconds

  • TIME_V = 5 seconds

  • TIME_T = 2 seconds

Example Poll Primitive Values:

  • M = 28

  • W = 179

  • B_{AVG} ≈ 6.14

Computed Derivative Values:

  • M_{ENGAGED} ≈ 28 × 0.8 = 22.4

  • R_{AVG} ≈ 6.14 × 0.6 = 3.68
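
As a quick check, here is a minimal Python sketch (the variable names are our own, not part of any MetaPoll tooling) that reproduces the derived values from the set values above:

```python
# Minimal sketch reproducing the "Computed Derivative Values" above.
# Variable names are illustrative only.

M = 28            # non-leaf nodes (layers)
B_AVG = 6.14      # average branching factor
Z_SKIP = 0.2      # fraction of layers fully skipped
Z_UNRANKED = 0.4  # fraction of options unranked within engaged layers

M_ENGAGED = M * (1 - Z_SKIP)        # engaged layers
R_AVG = B_AVG * (1 - Z_UNRANKED)    # ranked items per engaged layer

print(f"M_ENGAGED = {M_ENGAGED:.1f}")  # 22.4
print(f"R_AVG     = {R_AVG:.2f}")      # 3.68
```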

Equations for MetaPoll's V and T

Voter Expression Effort V

V is the time required by the Expresser (Voter) to express their preferences in a MetaPoll.

V ≈ (M_{ENGAGED} × R_{AVG} × TIME_V) + (M_{ENGAGED} × NAV_{TIME} × 2)

This sums ranking time across engaged layers and doubled navigation (for back-and-forth layer traversal).

Calculating V using the treasury example poll:

V ≈ (22.4 × 3.68 × 5) + (22.4 × 3 × 2) ≈ 412 + 134 = 546 seconds (≈ 9 minutes)

Interpretation Effort T

T is the time required by the Interpreter to view and understand the results in a MetaPoll.

T ≈ (M × B_{AVG} × TIME_T) + (M × NAV_{TIME} × 2)

This covers a user exploring all MetaPoll results (the full W-option tree) plus doubled navigation for moving through the results.

Calculating T using the treasury example poll:

T ≈ (28 × 6.14 × 2) + (28 × 3 × 2) ≈ 344 + 168 ≈ 512 seconds (≈ 8.5 minutes)
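
To make the condensed calculation easy to reproduce, here is a hedged Python sketch of the V and T equations using the example values; it is a check on the arithmetic, not an official implementation:

```python
# Sketch of the V and T calculations for the treasury example poll.
# Uses the set values above; rounding differs slightly from the hand calculation.

M, B_AVG = 28, 6.14
M_ENGAGED, R_AVG = 22.4, 3.68
TIME_V, TIME_T, NAV_TIME = 5, 2, 3

# Voter expression effort: ranking time across engaged layers plus doubled navigation.
V = (M_ENGAGED * R_AVG * TIME_V) + (M_ENGAGED * NAV_TIME * 2)

# Interpretation effort: reviewing the full option tree plus doubled navigation.
T = (M * B_AVG * TIME_T) + (M * NAV_TIME * 2)

print(f"V ≈ {V:.0f} seconds (≈ {V / 60:.1f} minutes)")  # ≈ 547 s, ≈ 9.1 min
print(f"T ≈ {T:.0f} seconds (≈ {T / 60:.1f} minutes)")  # ≈ 512 s, ≈ 8.5 min
```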

Example MetaPoll used in calculations:

The example MetaPoll is written in MPTS syntax.

MDCT Mathematical Framework

In exploring hierarchical systems for collective decision-making, like those in MetaPoll's Multi-Dimensional Consensus Trees (MDCTs), it's valuable to have a framework that's both precise and inviting.

This model draws from decision theory to quantify scale, effort, and engagement in trees. We'll begin with foundational concepts, then build to the variables and their math, using everyday language where possible while preserving rigor for technical readers.

Domains, Types, and Constraints:

Each variable has a clear domain (range of possible values) and constraints to avoid nonsense results.

  • Integers: ℕ₀ (non-negative, including 0), ℕ (positive, starting from 1).

  • Reals: ℝ≥0 (non-negative), [0,1] (fractions between 0 and 1 inclusive).

  • Key constraints: For example, Z_{UNRANKED} + (1 - Z_{UNRANKED}) = 1 by construction.

The utility of these rules is to make the framework reliable, like setting guardrails in a simulation to prevent crashes.
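
As an illustration of those guardrails, a minimal validation sketch follows (the helper name and structure are our own assumptions, not part of MetaPoll):

```python
# Illustrative domain checks for the primitives; names are our own.

def validate_primitives(M: int, L: int, B_AVG: float,
                        Z_SKIP: float, Z_UNRANKED: float,
                        TIME_V: float, TIME_T: float, NAV_TIME: float) -> None:
    """Raise ValueError if any primitive falls outside its stated domain."""
    if M < 0 or L < 0:
        raise ValueError("M and L must be non-negative integers")
    if B_AVG < 1:
        raise ValueError("B_AVG must be at least 1")
    for name, z in (("Z_SKIP", Z_SKIP), ("Z_UNRANKED", Z_UNRANKED)):
        if not 0 <= z <= 1:
            raise ValueError(f"{name} must lie in [0, 1]")
    for name, t in (("TIME_V", TIME_V), ("TIME_T", TIME_T), ("NAV_TIME", NAV_TIME)):
        if t < 0:
            raise ValueError(f"{name} must be non-negative (seconds)")

# Passes silently for the treasury example values.
validate_primitives(M=28, L=151, B_AVG=6.14,
                    Z_SKIP=0.2, Z_UNRANKED=0.4,
                    TIME_V=5, TIME_T=2, NAV_TIME=3)
```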

The high level: Primitives, Derivatives, and Dependencies

Primitives:

M, L, D_{MAX}, D_{AVG}, B_{AVG}, B_{MAX}, Z_{UNRANKED}, Z_{SKIP}, TIME_V, TIME_T, NAV_{TIME}

Derivatives:

W, Z, M_{ENGAGED}, R_{AVG}, V, T

Dependency tree:

The option space is calculated by:

W = M + L (via Theorem 1).

The overall engagement fraction is calculated by:

Z = (1 - Z_{UNRANKED}) × (1 - Z_{SKIP}) (via Theorem 2).

The number of engaged layers is calculated by:

M_{ENGAGED} = M × (1 - Z_{SKIP}).

The average number of ranked items per engaged layer is calculated by:

R_{AVG} ≈ B_{AVG} × Z.

Culminating in total voting effort:

V = (M_{ENGAGED} × R_{AVG} × TIME_V) + (M_{ENGAGED} × NAV_{TIME} × 2).

And, total result interpretation effort:

T ≈ (M × B_{AVG} × TIME_T) + (M × NAV_{TIME} × 2).

This dependency chain shows how the model progresses from raw structure to measurable effort.
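
To make the dependency order concrete, here is a small sketch expressing each step as a function. The function names are our own; R_{AVG} is computed per engaged layer as in the worked example, with the approximation R_{AVG} ≈ B_{AVG} × Z noted in a comment.

```python
# Dependency chain from primitives to effort, sketched as small functions.
# Names are illustrative; formulas follow the definitions above.

def option_space(M: int, L: int) -> int:
    """W = M + L (Theorem 1)."""
    return M + L

def engagement_fraction(Z_UNRANKED: float, Z_SKIP: float) -> float:
    """Z = (1 - Z_UNRANKED) * (1 - Z_SKIP) (Theorem 2)."""
    return (1 - Z_UNRANKED) * (1 - Z_SKIP)

def engaged_layers(M: int, Z_SKIP: float) -> float:
    """M_ENGAGED = M * (1 - Z_SKIP)."""
    return M * (1 - Z_SKIP)

def ranked_per_layer(B_AVG: float, Z_UNRANKED: float) -> float:
    """R_AVG per engaged layer, as used in the worked example;
    the dependency tree also gives the approximation R_AVG ≈ B_AVG * Z."""
    return B_AVG * (1 - Z_UNRANKED)

def expression_effort(M_ENGAGED: float, R_AVG: float,
                      TIME_V: float, NAV_TIME: float) -> float:
    """V: ranking time across engaged layers plus doubled navigation."""
    return (M_ENGAGED * R_AVG * TIME_V) + (M_ENGAGED * NAV_TIME * 2)

def interpretation_effort(M: int, B_AVG: float,
                          TIME_T: float, NAV_TIME: float) -> float:
    """T: reviewing the full tree plus doubled navigation."""
    return (M * B_AVG * TIME_T) + (M * NAV_TIME * 2)
```

Calling these in the order shown mirrors the dependency tree above, from raw structure to measurable effort.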

MetaPoll's MDCT Variables

The full list of variables used in calculating MDCT effort.

  • L: The number of leaf nodes, the options with no children (domain: ℕ₀). Rationale: This highlights endpoints that represent executable choices; in the example, L ≈ 151.

  • M: Total non-leaf nodes (dimensions and sub-dimensions). Rationale: Counts the structural layers voters navigate; in the treasury example, M ≈ 28.

  • M_{ENGAGED}: Engaged non-leaf nodes, M_{ENGAGED} = M × (1 - Z_{SKIP}). Rationale: Represents visited layers; derivative for selectivity; in the example MetaPoll, M_{ENGAGED} ≈ 28 × 0.8 = 22.4.

  • W: The total rankable options, W = M + L (domain: ℕ). Rationale: This sums all engageable elements (dimensions, sub-dimensions, leaves), serving as the flattened total for effort calculations; in the treasury example, W = 179.

  • B_{AVG}: Average branching factor, the average number of sub-options per non-leaf node. Rationale: Measures items per layer; in the example, B_{AVG} ≈ 6.14.

  • B_{MAX}: The maximum branching factor, the most options in any single branch (domain: ℕ ≥ 1). Rationale: This spots high-effort areas; property: B_{MAX} ≥ B_{AVG}; in the example, B_{MAX} = 12.

  • D_{AVG}: Average tree depth across paths (domain: ℝ > 0). Rationale: Scales with layering; in the example, D_{AVG} ≈ 3 (though not directly used here, it informs broader context).

  • D_{MAX}: The maximum depth, the longest path from top to bottom (domain: ℕ). Rationale: This deals with uneven layers; bound: 1 ≤ D_{MAX} < ∞, but usually < 20 for usability; in the MetaPoll example, D_{MAX} = 4.

  • Z: The overall fraction of options ranked (domain: [0,1]). Rationale: This measures engagement; equation: Z = (1 - Z_{UNRANKED}) × (1 - Z_{SKIP}), assuming independence (no correlation between layer skips and unranking); if correlated, Z < the product, which then serves as an upper bound.

  • Z_{UNRANKED}: The average fraction left unranked in layers (domain: [0,1], e.g., 0.4-0.5 from voter patterns). Rationale: This shows selective focus.

  • Z_{SKIP}: The fraction of layers or nodes ignored entirely (domain: [0,1], e.g., 0.5-0.7). Rationale: This reflects broader disinterest.

  • R_{AVG}: Average ranked items per engaged layer, R_{AVG} = B_{AVG} × (1 - Z_{UNRANKED}). Rationale: Captures partial ranking; derivative; in the example, R_{AVG} ≈ 6.14 × 0.6 = 3.68.

  • TIME_V: Average time per voting interaction (domain: ℝ≥0, in seconds, e.g., 5 s for ranking). Rationale: UI-specific cost for active input.

  • TIME_T: Average time per interpretation interaction (domain: ℝ≥0, in seconds, e.g., 2 s for reading). Rationale: Lower cost for passive review.

  • NAV_{TIME}: The time per layer navigation (domain: ℝ≥0, in seconds, e.g., 3 s for clicks/loads). Rationale: Overhead for moving between branches; added to account for real UI friction in both V and T.

  • V: The time a voter spends expressing preferences (domain: ℝ≥0, in seconds). Rationale: Key for low-effort designs; asymptotically, V = O(D_{AVG} × R_{AVG}), scaling linearly with depth and rankings; in the treasury example, V ≈ (22.4 × 3.68 × 5) + (22.4 × 3 × 2) ≈ 412 + 134 = 546 seconds (≈ 9 minutes).

  • T: The time to read and understand results (domain: ℝ≥0, in seconds). Rationale: This focuses on post-vote clarity, often shorter than V; asymptotically, T = O(M × B_{AVG}); in the treasury example, T ≈ (28 × 6.14 × 2) + (28 × 3 × 2) ≈ 344 + 168 ≈ 512 seconds (≈ 8.5 minutes).
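
For convenience, here are a few illustrative consistency checks on the example values listed above (our own sketch, purely as a sanity check):

```python
# Sanity checks on the treasury example values quoted in the variable list.

M, L, W = 28, 151, 179
B_AVG, B_MAX = 6.14, 12
Z_SKIP, Z_UNRANKED = 0.2, 0.4

assert W == M + L                        # Theorem 1: W = M + L
assert B_MAX >= B_AVG                    # property: B_MAX >= B_AVG
assert 0 <= Z_SKIP <= 1 and 0 <= Z_UNRANKED <= 1
assert abs(M * (1 - Z_SKIP) - 22.4) < 1e-9            # M_ENGAGED
assert abs(B_AVG * (1 - Z_UNRANKED) - 3.684) < 1e-9   # R_AVG before rounding
print("example values are internally consistent")
```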

Connections to Classic Ideas in Math and Decision Theory

Tree variables like D_{AVG} and B_{AVG} map to value trees in multi-criteria decision analysis (Belton and Stewart, 2002):

MDCT ≡ hierarchical attribute tree, with D_{AVG} as average level depth and B_{AVG} as mean sub-criteria count.

Axioms, Theorems, and Properties: The Logical Backbone

We lay out axioms (basic truths) first, then theorems with proofs, and corollaries for extra insights. The goal is to make the framework verifiable.

Axioms:

  • Axiom 1 (Positivity): Z ∈ [0,1] (nothing negative, fractions bounded).

  • Axiom 2 (Additivity): Rankables split into disjoint non-leaves and leaves: M ∩ L = ∅.

Theorems:

  • Theorem 1 (Additivity of W): W = M + L. Proof: From Axiom 2, non-leaves and leaves are disjoint, and together they cover all rankables. So the count is |M ∪ L| = |M| + |L|. Corollary 1.1: For flat spaces (D_{MAX} = 1), M = 0, so W = L.

  • Theorem 2 (Engagement Fraction): Z = (1 - Z_{UNRANKED}) × (1 - Z_{SKIP}). Proof: Assuming independence (the probability of unranking given engagement equals Z_{UNRANKED}, with zero covariance), the joint engagement is the product of the engaged-layer fraction (1 - Z_{SKIP}) and the ranked-item fraction (1 - Z_{UNRANKED}), so Z = (1 - Z_{SKIP}) × (1 - Z_{UNRANKED}). Corollary 2.1: Under positive correlation (covariance > 0, e.g., apathy spreads), Z < the product, making the product an upper bound.

  • Theorem 3 (Monotonicity of Effort): ∂V/∂Z > 0 (effort rises with engagement). Proof: V = (M_{ENGAGED} × (B_{AVG} × Z) × TIME_V) + (M_{ENGAGED} × NAV_{TIME} × 2), using R_{AVG} ≈ B_{AVG} × Z. The derivative is ∂V/∂Z = M_{ENGAGED} × B_{AVG} × TIME_V > 0, since every factor is positive for an engaged poll (M_{ENGAGED} > 0, B_{AVG} ≥ 1, TIME_V > 0). Corollary 3.1: As Z → 0, the ranking term vanishes; with full skipping (M_{ENGAGED} → 0), V → 0 (no engagement means no time spent).

  • Theorem 4 (Upper Bound on Effort): V ≤ (M × B_{MAX} × TIME_V) + (M × NAV_{TIME} × 2). Proof: In the worst case, a voter engages all M layers and ranks the widest layer's item count (B_{MAX} per layer), each at the full TIME_V, with doubled navigation. Summing gives V ≤ (M × B_{MAX} × TIME_V) + (M × NAV_{TIME} × 2). Corollary 4.1: For big trees, V is linear in the number of engaged layers.
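
If you would like to check Theorem 3 symbolically, here is a short SymPy sketch (our own, assuming R_{AVG} ≈ B_{AVG} × Z as the proof does):

```python
# Symbolic check of Theorem 3 with SymPy (pip install sympy).
import sympy as sp

Z, M_eng, B_avg, T_v, Nav = sp.symbols(
    "Z M_ENGAGED B_AVG TIME_V NAV_TIME", positive=True)

# V with R_AVG ≈ B_AVG * Z substituted, as in the proof of Theorem 3.
V = M_eng * (B_avg * Z) * T_v + M_eng * Nav * 2

dV_dZ = sp.diff(V, Z)
print(dV_dZ)        # B_AVG*M_ENGAGED*TIME_V
print(dV_dZ > 0)    # True, because all symbols are declared positive
```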

Closing

Looking at these calculations, the mathematical framework appears promising, though we must acknowledge it hasn't undergone formal proof. The model rests on reasonable assumptions about voter behavior and UI interaction times, but real-world deployment will inevitably reveal edge cases and refinements needed.

What strikes me most is the fundamental insight: by decomposing complex decisions into hierarchical trees and allowing selective engagement, we can achieve effort reductions of 10,000x and greater, something that seemed impossible with traditional voting mechanisms. If these efficiency gains hold in practice, MetaPoll's MDCT represents a paradigm shift in how we capture collective intelligence at scale.

The ability to map high-dimensional preference spaces while keeping the growth of voter effort between logarithmic and linear opens many new possibilities.

As with any coordination technology, the proof will come through implementation and iteration, but the mathematical foundations suggest we're on the right track toward more expressive, yet dramatically more scalable collective intelligence tools like MetaPoll.
