Pairwise Comparison Effort Model

Introduction

This is the mathematical model for how we calculate the time spent to map a decision space using basic 2-way pairwise comparisons ("Pairwise").

This model quantifies two key aspects: expression effort V, the time needed to express the preferred pairwise comparison(s) in a decision space of size W, and interpretation effort T, the time to review aggregated pairwise results.

This article measures how much raw human effort a legacy pairwise‑comparison survey would require to gather a portion of decision‑space knowledge that MetaPoll’s MDCT Effort Model captures in ≈ 9 minutes per voter.

To keep the data captured by both methods as close as possible, we force pairwise to encode the hierarchy, per-dimension orderings, and cross-dimension trade-offs that MDCT stores implicitly.

Where pairwise cannot match MDCT even in principle, we call those gaps out explicitly.

DISCLAIMER: While we've taken care to calculate as accurately as we can, we acknowledge that our calculations, approach, and assumptions may be incorrect. This math has not been formally verified at the time of posting. However, we feel our initial work is worth showing. That being said, we encourage you to check the math yourself, and if you see any issues with the calculations, assumptions, approach, or results posted, we would welcome speaking with you to correct them.


Bottom line up front

Instead of going through all the background math leading up to the final calculations, we'll provide a condensed calculation of the final results for your convenience right up front.

The variables below, drawn from our standardized set, are those directly used in the V and T calculations.

To see the full set of variables and explore the background math more fully, read this section.

Set Variable Values:

NAV_TIME = 1 second

TIME_V = 5 seconds

TIME_T = 2 seconds

α (Alpha) = 1.3

ρ (Rho) = 9

Example Poll Primitive Values:

M (non-leaf nodes) = 28

L (leaf nodes) = 151

Computed Derivative Values:

W = M + L (number of options) = 179

K = ρ × W^α (applies when W > 50; otherwise K = 0)

Y = W + K

C = (Y × (Y − 1)) / 2

Equations for Pairwise Comparison V and T

Voter Expression Effort V

V is the time required by the Expresser (Voter) to express their preferences via pairwise comparison(s), given as:

V ≈ (((W + ρ × W^α) × ((W + ρ × W^α) − 1)) / 2) × (TIME_V + NAV_TIME)

Or expressed more simply with C as:

V ≈ C × (TIME_V + NAV_TIME)

Calculating V using the treasury example poll:

V ≈ 30,544,959 × (5 + 1) ≈ 183,269,754 seconds (≈ 5.8 years)

This captures the marathon: each binary prompt forges one edge, demanding a click that logs the preference.

Interpretation Effort T

T is the time required by the Interpreter to view and understand the results from Pairwise Comparison(s).

T ≈ C × (TIME_T + NAV_TIME)

This covers a user exploring all pairwise results (the full W-option tree), including the navigation required to move through them.

Calculating T using the treasury example poll:

T ≈ 30,544,959 × (2 + 1) ≈ 91,634,877 seconds (≈ 2.9 years)

In this case T assumes a scenario where no UI is built to quickly display a ranked list of the options for faster scanning.
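To make this condensed calculation concrete, here is a minimal Python sketch of the pipeline above, using the set variable values from this section (ρ = 9, α = 1.3, threshold 50). Rounding conventions are not specified here, so treat its output as an order-of-magnitude check rather than the canonical figures.

```python
# Minimal sketch of the V / T pipeline using the "Bottom line up front" values.
# Rounding (e.g., whether K is truncated to an integer) is not specified in the
# text, so the printed totals may differ slightly from the quoted figures.

NAV_TIME = 1   # seconds per navigation action
TIME_V = 5     # seconds per voting interaction
TIME_T = 2     # seconds per interpretation interaction
ALPHA = 1.3    # power-law exponent
RHO = 9        # power-law scale factor
W_K = 50       # threshold below which K = 0


def bundle_estimate(w: float) -> float:
    """Estimated bundle count K = rho * W^alpha, zero for small decision spaces."""
    return RHO * w ** ALPHA if w > W_K else 0.0


def effort_seconds(w: float, seconds_per_item: float) -> float:
    """C * (per-item time + navigation time) for a decision space of W options."""
    y = w + bundle_estimate(w)   # Y = W + K
    c = y * (y - 1) / 2          # C = Y(Y - 1)/2 comparison prompts
    return c * (seconds_per_item + NAV_TIME)


W = 28 + 151  # M + L from the treasury example poll

print(f"V ≈ {effort_seconds(W, TIME_V):,.0f} seconds")  # on the order of 1.8e8 s (years)
print(f"T ≈ {effort_seconds(W, TIME_T):,.0f} seconds")  # on the order of 9e7 s (years)
```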

Example MetaPoll used in calculations:

The example MetaPoll is written in MPTS syntax.

Pairwise Comparison Mathematical Framework

When mapping a decision space, the traditional approach involves pairwise comparisons, which in practice means asking people to choose between two options at a time until you've mapped the entire decision space.

This method has a fundamental problem: it doesn't scale well.

As the number of options W in a decision space grows, the required comparisons multiply rapidly. Worse yet, when you consider multiple dimensions (from K₂ to K₃, K₄, K₅, K₆, ...) in the decision space, the computational burden explodes.

We'll begin with foundational concepts, then build to the variables and their math, using everyday language where possible while preserving rigor for technical readers.

Domains, Types, and Constraints:

Each variable has a clear domain (range of possible values) and constraints to avoid nonsense results.

  • Integers: ℕ₀ (non-negative, including 0), ℕ (positive, starting from 1).

  • Reals: ℝ ≥ 0 (non-negative reals, starting from 0 and including decimals), [0,1] (fractions between 0 and 1 inclusive).

  • Prompt fractions lock to P₂ = 1, with all others zero.

  • Bundles span all ordered pairs across distinct dimensions, ensuring cross-talk is explicit.

The utility of these rules is to make the framework reliable, like setting guardrails in a simulation to prevent crashes.

Primitives, Derivatives, and Dependencies

Primitives:

M, L, L_i, L_j, D_AVG, D_MAX, B_AVG, B_MAX, α (Alpha), ρ (Rho), W_k, TIME_V, TIME_T, NAV_TIME

Derivatives:

W, K, K_n, Y, C, V, T

Dependency tree:

The option space is calculated by:

W = M + L (via Theorem 1)

W_k is the threshold that activates the multi-dimensional scaling variables. This accounts for flat dimensional spaces with small W:

When W_k = 50, then K = 0 if W ≤ 50; otherwise apply the scaling variables.

K when computed by the scaling variables:

K = ρ × W^α

(To compute exact K_n bundles without the scaling variables, use K₂ = Σ_{i<j} L_i × L_j.)

Y is the total of the W options summed with the multi-dimensional bundles K:

Y = W + K

To calculate the number of comparison prompts C a voter will need to complete:

C = (Y × (Y − 1)) / 2

Calculating V (the time it takes a voter to map a decision space of size W) by:

V ≈ C × (TIME_V + NAV_TIME)

Calculating T by:

T ≈ C × (TIME_T + NAV_TIME)

Variables

The full list of variable definitions:

M: Total non-leaf nodes (dimensions and sub-dimensions) (domain: ℕ ≥ 0). Rationale: Counts the structural layers voters navigate; in the example, M = 28.

L: The number of leaf nodes, the options with no children (domain: ℕ ≥ 0). Rationale: This highlights the endpoints that represent executable choices; in the example, L = 151.

L_i: The number of terminal leaves (atomic options) in dimension i of the decision tree (domain: ℕ ≥ 0). Rationale: L_i accounts for the choices within each independent axis of the decision space, from low leaf counts to high, forming the building blocks for cross-dimensional bundles via products like L_i × L_j.

L_j: The number of terminal leaves (atomic options) in dimension j of the decision tree (domain: ℕ ≥ 0). Rationale: L_j counts the indivisible choices in the j-th dimension, contributing to cross-dimensional bundles through the product L_i × L_j, which enumerates all pairwise combinations of atomic options from distinct axes. By adding L_j to the calculation for every preceding i, we generate the full set of 2D bundles (explicit composites like "High R&D Spend ∧ Rapidly Increasing Treasury") that capture voter trade-offs between dimensions. Summing these products over all pairs yields K = Σ_{i<j} L_i × L_j, quantifying the nodes needed to make cross-dimensional preferences rankable without implicit assumptions.

W: The total rankable options, W = M + L (domain: ℕ ≥ 0). Rationale: This sums all engageable elements (dimensions, sub-dimensions, leaves), serving as the flattened total for effort calculations; in the treasury example, W = 179.

D_AVG: Average tree depth across paths (domain: ℝ ≥ 0). Rationale: Scales with layering; in the example, D_AVG ≈ 3 (though not directly used here, it informs the broader context).

D_MAX: The maximum depth, the longest path from top to bottom (domain: ℕ ≥ 0). Rationale: This deals with uneven layers; bound: 1 ≤ D_MAX < ∞, but usually < 20 for usability; in the MetaPoll example, D_MAX ≈ 4.

B_AVG: Average branching factor, the average options per branch (sub-options per non-leaf) (domain: ℝ ≥ 0). Rationale: Measures items per layer; in the example, B_AVG ≈ 6.14.

B_MAX: The maximum branching factor, the most options per branch (domain: ℕ ≥ 0). Rationale: This flags high-effort spots; property: B_MAX ≥ B_AVG; in the example, B_MAX ≈ 12.

α (Alpha): The power-law exponent (α = 1.3) in the scaling relation K(W) = ρ × W^α, empirically fitted as the slope of a log-log regression on observed (W, K) pairs (domain: ℝ > 1). Rationale: α governs bundle growth curvature, interpolating between linear scaling (α = 1) and quadratic explosion (α = 2) to capture heavy-tailed dimension sizes in MetaPoll trees, where large leaves disproportionately amplify cross-products. Across prototypes, including the treasury poll (W = 179 yielding K = 7,693), α ranges from 1.25 to 1.35 via Ordinary Least Squares (OLS) on log K = log ρ + α log W; we select 1.3 for conservative alignment with forecasts like K(660) ≈ 41,956 (matching the direct and ratio methods). The variable is an imperfect estimate, but it allows us to continuously improve accuracy via OLS as additional polls densify the data.

ρ (Rho): The scaling factor (ρ = 9.05) in the power law K(W) = ρ × W^α, fitted as the exponential of the log-log regression intercept on observed (W, K) pairs (domain: ℝ > 0). Rationale: ρ sets the model's absolute scale, ensuring predictions align with real bundle counts rather than normalized abstractions, as in the treasury poll where it anchors K = 7,693 at W = 179. It arises from OLS as ρ = exp(ȳ − α x̄) with y = log K and x = log W, yielding ≈ 9.05 to offset baseline densities. The variable is an imperfect estimate, but it allows us to continuously improve accuracy via OLS as additional polls densify the data.

W_k: The threshold of W options (domain: ℕ ≥ 0) above which K becomes non-zero through the multi-dimensional scaling variables α (Alpha) and ρ (Rho).

TIME_V: Average time per voting interaction (domain: ℝ ≥ 0, in seconds; e.g., 5 s for ranking). Rationale: UI-specific cost for active input.

TIME_T: Average time per interpretation interaction (domain: ℝ ≥ 0, in seconds; e.g., 2 s for reading). Rationale: Lower cost for passive review.

NAV_TIME: The time per comparison navigation (domain: ℝ ≥ 0, in seconds; e.g., 1 s for clicks/loads). Rationale: Overhead for moving between pairwise UI screens; added to account for real UI friction in both V and T.

K: The number of cross-dimensional bundles counted when accounting for K_n-dimensional bundling (domain: ℕ ≥ 0). There are two approaches to compute K: 1. Exact approach: bundle counting when we have direct access to the decision-space data, K₂ = Σ_{i<j} L_i × L_j. 2. Estimated approach: when exact decision-space data is not available, we use the scaling variables (α and ρ) to estimate K from the W variable alone, K ≈ ρ × W^α. (A short numeric comparison of the two approaches appears after this variable list.)

K_n: The number of dimensions considered when calculating K bundles (domain: ℕ ≥ 1). K₁ is one-dimensional, K₂ is two-dimensional, K₃ is three-dimensional, and so on. By default, K should be interpreted as K₂. When K_n > 2, the number of bundles explodes so greatly that it makes doing comparisons over them infeasible. For example: computing K₃ requires Σ_{i<j<k} L_i × L_j × L_k bundles, which scales very poorly. If we continue adding a 4th dimension, we must multiply by another factor of L_m, pushing node counts toward trillions. The prompt requirement grows on the order of Θ(Y_{K_n}²), where Y_{K_n} is the node count after K_n-dimensional bundling. MDCT supports all K_n dimensions natively, which, to put it lightly, is a massive scaling and data-collection advantage over pairwise comparison limitations.

Y: The number of base W options summed with the multi-dimensional K bundles, giving the total number of options to compare (domain: ℕ ≥ 0). Expressed as Y = W + K.

C: The standard count of pairwise comparisons needed to fully map a decision space of W options (domain: ℕ ≥ 0). Since we are accounting for multi-dimensionality, we use Y in place of W. Expressed as: C = (Y × (Y − 1)) / 2.

V: The time a voter spends expressing preferences (domain: ℝ ≥ 0, in seconds). Rationale: a key metric for determining the length of time required to express preferences in a decision space. In the treasury example, V ≈ C × (TIME_V + NAV_TIME).

T: The time to read and understand the aggregated results (domain: ℝ ≥ 0, in seconds). Rationale: This focuses on the clarity of priorities in the mapped decision space, and is likely to be shorter than V. In the treasury example, T ≈ C × (TIME_T + NAV_TIME).
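As a companion to the K entry above, here is a small sketch comparing the exact K₂ bundle count for the treasury example's six leaf counts (taken from the bundle table later in this article) against the ρ × W^α estimate; the two approaches land within about one percent of each other here.

```python
from itertools import combinations

# Leaf counts per top-level dimension in the treasury example (see the bundle table).
LEAVES = [37, 5, 4, 72, 14, 18]
W = 179                # total rankable options, W = M + L
RHO, ALPHA = 9.05, 1.3

# Exact approach: K2 = sum over unordered dimension pairs of L_i * L_j.
k_exact = sum(l_i * l_j for l_i, l_j in combinations(LEAVES, 2))

# Estimated approach: K ≈ rho * W^alpha, needing only the flattened option count.
k_estimated = RHO * W ** ALPHA

print(k_exact)             # 7693
print(round(k_estimated))  # roughly 7,700, close to the exact count
```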

Axioms and Theorems

Axioms:

  • Axiom 1 (Completeness): Every unordered node pair demands at least one comparison.

  • Axiom 2 (Additivity): Rankables split into disjoint non-leaves and leaves: M ∩ L = ∅.

  • Axiom 3 (Comparison yield, binary): Each comparison forges exactly one directed edge.

Theorems:

  • Theorem 1 (Additivity of W): W = M + L. Proof: From Axiom 2, non-leaves and leaves are disjoint, and together they cover all rankables, so |M ∪ L| = |M| + |L|. Corollary 1.1: For flat spaces (D_MAX = 1), M = 0, therefore W = L.

  • Theorem 2 (Prompt lower bound): Comparisons ≥ C. Proof: Axiom 3. Corollary 2.1 (Quadratic scaling): Comparisons = Θ(Y²), since C = Y(Y − 1)/2. Advanced adaptive pairwise can reduce prompts to Θ(Y log Y) by assuming transitivity, but this introduces risks of cycles (intransitive preferences) and still scales poorly with bundle-induced Y, unlike MDCT's native logarithmic handling.
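To illustrate Corollary 2.1 next to the adaptive alternative it mentions, here is a small sketch comparing the full quadratic prompt count with a rough Θ(Y log Y) sort-style bound at the treasury example's scale (Y = 7,872 when using the exact bundle count); the log-based figure assumes perfect transitivity, which the corollary flags as risky.

```python
import math

def full_pairwise_prompts(y: int) -> int:
    """Every unordered pair compared once: C = Y(Y - 1)/2 (Theorem 2)."""
    return y * (y - 1) // 2

def adaptive_sort_prompts(y: int) -> int:
    """Rough Theta(Y log Y) bound if transitivity is assumed (comparison-sort style)."""
    return math.ceil(y * math.log2(y))

Y = 7_872  # W + exact K2 bundles in the treasury example
print(full_pairwise_prompts(Y))   # about 31 million prompts
print(adaptive_sort_prompts(Y))   # about 102 thousand prompts, but only if no cycles arise
```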

Closing

From ancient symposia, where binary debates sufficed for modest scopes, pairwise gleamed with simplicity: one contest, one insight. Yet in today's systems, with hundreds to millions of parameters and decision spaces spanning governance, design, and coordination at scale, pairwise comparison fractures under combinatorial pressure.

Our model unveils the toll for a treasury-like space, as quantified earlier. Why? Explicit bundling goes quadratic at 2D, cubic at 3D, and verges on exponential beyond. MDCT, conversely, embeds trade-offs in roughly Θ(W log W) interactions, scaling seamlessly across dimensions. What graced the polis now yields to tools forged for planetary harmony. As with any model, real-world voters will test these bounds, yet the promise of logarithmic grace beckons, inviting us to reimagine coordination itself.

Appendix: How We Chose the Scaling Parameters (ρ, α, W₀)

TL;DR: Bundle growth models as K(W) = ρ W^α, cutoff at W₀ for small polls. α ≈ 1.3 from log-log empirics; ρ ≈ 9.05 calibrated to the example (W=179, K=7,693); W₀ ≈ 50 where dimensions activate. Refit with data via regression. Notation: K for bundles. Power laws suit because W expansion unevenly adds leaves/dimensions (Zipf-ish), yielding sub-quadratic sums linear in log-log. α from prototypes: slopes 1.25–1.35; 1.3 balances data, forecasts, conservatism. ρ calibration: 179^1.3 ≈ 850.5; ρ = 7,693 / 850.5 ≈ 9.05. Thus K̂(W) = 9.05 W^1.3. Cross-check (W≈660): ~41,900 bundles, consistent. W₀ curbs small-W overreach: K=0 if W ≤ 50; else power law. Adjust per audits. Refitting: Add new (log W, log K), regress for α (slope), ρ (intercept). Holdouts validate; drifts tweak α. Yields evolving K(W) for downstream C, V, T.

| Metric | Pairwise value (binary‑only) | MDCT reference | Notes |
| --- | --- | --- | --- |
| Total nodes WC | 7 872 | 179 | 179 base + 7 693 bundles |
| Prompts required PR | 30 980 256 | ≈ 245 | 100 % 2‑way; one edge per prompt |
| Voter effort V | 216 861 792 s ≈ 6.9 years | ≈ 540 s (9 min) | 401,595x gain |
| Reader effort T | 30 996 000 s ≈ 359 days | ≈ 450 s (7.5 min) | 68,880x gain |
| Still missing | ≥ 3‑way contingencies, indifference, incremental edits | | Pairwise cannot support |

1 Primitive Parameters

| Symbol | Description | Domain | Value |
| --- | --- | --- | --- |
| L_i | Leaf count of dimension i | ℕ ≥ 0 | (37, 5, 4, 72, 14, 18) |
| W | Original nodes (leaves + non‑leaves) | ℕ ≥ 0 | 179 |
| B | Cross‑dimension bundle nodes | ℕ ≥ 0 | 7 693 (Eq. 1) |
| WC | Total nodes incl. bundles | ℕ ≥ 0 | 7 872 (Eq. 2) |
| P₂ | Fraction of 2‑way prompts | [0,1] | 1.0 |
| TIMEV₂ | Seconds to answer a 2‑way prompt | ℝ ≥ 0 | 5 |
| NAVTIME | Seconds per UI click | ℝ ≥ 0 | 1 |
| TIMET | Seconds to read one ranked item | ℝ ≥ 0 | 2 |

(P₃ = P₄ = 0 in this binary‑only scenario.)

2 Derived Quantities

| Symbol | Definition | Equation | Value |
| --- | --- | --- | --- |
| B | Σ cross‑dimension bundles | (1) | 7 693 |
| WC | W + B | (2) | 7 872 |
| Q | Edge count | (3) | 30 980 256 |
| E | Edges per prompt (binary) | | 1 |
| PR | Prompts | (4) | 30 980 256 |
| t̄_prompt | Sec per prompt | (5) | 7 s |
| V | Voter effort | (9) | 216 861 792 s |
| T | Reader effort | (10) | 30 996 000 s |


2.5 What is an edge?

An edge is a single, directed preference statement captured from a prompt. In a binary survey the voter answers the question “Which of these two options do you prefer?” — that single click creates one edge in an underlying graph where every node is a poll item (WC total).

  • Nodes = options or bundles.

  • Edge A → B = “voter prefers A over B.”

Collecting edges for every unordered pair of nodes (Q = WC(WC−1)/2) guarantees the graph contains enough information to reconstruct a complete ranking equivalent to MDCT’s output. Because each 2‑way prompt yields only one edge, the prompt count equals the edge count (N = Q).

The edge concept also clarifies why 3‑/4‑way prompts were attractive earlier: a 4‑item ranking supplies 6 edges at once. But forcing all prompts to be binary keeps the UI simple—at the cost of multiplying screen count.
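A quick numeric sketch of both counts follows; edges_from_ranking assumes a full ranking of k items yields every pairwise edge among them, which is how the 4-item example above produces 6 edges.

```python
def edges_for_complete_graph(nodes: int) -> int:
    """Q = WC * (WC - 1) / 2: one edge per unordered node pair."""
    return nodes * (nodes - 1) // 2

def edges_from_ranking(k: int) -> int:
    """A full ranking of k items implies k * (k - 1) / 2 directed preference edges."""
    return k * (k - 1) // 2

WC = 7_872  # options plus 2-way bundles in the treasury example

print(edges_for_complete_graph(WC))  # 30,980,256 edges, i.e. 30,980,256 binary prompts
print(edges_from_ranking(2))         # 1 edge per 2-way prompt
print(edges_from_ranking(4))         # 6 edges from a single 4-item ranking
```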


User experience example (binary‑only). Jane sees two cards:

  1. Rebalance Method VS Monthly

  2. Asset Composition VS ETH 60 %

She clicks the preferred one; a green check flashes, and the system immediately presents another pair, perhaps mixing in a bundle card like Volatility Low & APY Above‑Average. Each click logs a single directed edge. A progress meter shows 0.003 % complete after the first 1 000 comparisons, hinting at the marathon ahead.


4 Domains, Types, Constraints

  • Timing primitives are non‑negative.

  • Prompt fractions: P₂ = 1, P₃ = P₄ = 0.

  • Bundles cover all ordered pairs of distinct dimensions.


5 Axioms, Theorems & Proofs

  1. Axiom 1 (Completeness). Every unordered node pair must be compared ≥ 1×.

  2. Axiom 2 (Prompt yield, binary). Each prompt contributes exactly 1 directed edge.

  3. Theorem 1 (Prompt lower bound). Statement. N ≥ Q when E = 1. Proof. Direct from Axioms 1–2. ■

  4. Corollary 1.1 (Quadratic scaling). PR = Θ(WC²). (Since Q = WC(WC−1)/2.)


6 Derivation of Size & Time

The equations are shown twice: symbolic (xa) and numeric (xb).

6.1 Bundle count — why K = 7 693?

What is a bundle? To replicate MDCT’s ability to record cross‑dimension trade‑offs, we create a composite option for every ordered pair of leaves drawn from two different top‑level dimensions. Choosing such a card in the UI means “I prefer the combination (Leaf_i ∧ Leaf_j) over …”. For binary pairwise to learn whether people like ETH 60 % and Monthly Rebalance more than alternatives, that composite must be an explicit item.

Let:

  • L_i = number of leaves in dimension i (see table below).

  • We form bundles only for unordered dimension pairs (i < j) to avoid duplicates.

  • Bundles are ordered within the card, but the left/right order doesn’t matter for edge counting.

The number of bundles between dimensions i and j is therefore L_i × L_j. Summing across all dimension pairs gives

K = Σ_{i<j} L_i × L_j    (Eq. 1)

With 6 dimensions the explicit sum runs over the 15 unordered pairs:

K = L₁L₂ + L₁L₃ + L₁L₄ + L₁L₅ + L₁L₆ + L₂L₃ + L₂L₄ + L₂L₅ + L₂L₆ + L₃L₄ + L₃L₅ + L₃L₆ + L₄L₅ + L₄L₆ + L₅L₆

Computing each product:

| Dimension i | Leaves L_i | Dimension j | Leaves L_j | L_i × L_j |
| --- | --- | --- | --- | --- |
| Spending Priorities | 37 | Treasury Size | 5 | 185 |
| Spending Priorities | 37 | Treasury Pricing | 4 | 148 |
| Spending Priorities | 37 | Asset Composition | 72 | 2 664 |
| Spending Priorities | 37 | DeFi Revenue | 14 | 518 |
| Spending Priorities | 37 | Rebalance Strategy | 18 | 666 |
| Treasury Size | 5 | Treasury Pricing | 4 | 20 |
| Treasury Size | 5 | Asset Composition | 72 | 360 |
| Treasury Size | 5 | DeFi Revenue | 14 | 70 |
| Treasury Size | 5 | Rebalance Strategy | 18 | 90 |
| Treasury Pricing | 4 | Asset Composition | 72 | 288 |
| Treasury Pricing | 4 | DeFi Revenue | 14 | 56 |
| Treasury Pricing | 4 | Rebalance Strategy | 18 | 72 |
| Asset Composition | 72 | DeFi Revenue | 14 | 1 008 |
| Asset Composition | 72 | Rebalance Strategy | 18 | 1 296 |
| DeFi Revenue | 14 | Rebalance Strategy | 18 | 252 |
| Total | | | | 7 693 |

Thus

K = Σ_{i<j} L_i × L_j = 7 693

matching the value used throughout this paper.

Why stop at 2‑dimension bundles? A three‑dimension bundle would be size L_i × L_j × L_k. The product explodes to ≈ 522,000 new nodes (Section 9), pushing the survey into “centuries per voter.”

Making K scale to W > 50

We set K = 0 whenever W ≤ 50. Above that threshold (W > 50), we use the fitted K ≈ 9 × W^1.3.

Choose a power-law form

K(W) = ρ × W^α

  • W = total original nodes (leaf + non-leaf).

  • α = growth exponent (empirically ≈ 1.3).

  • ρ = scale constant you fit once from real polls.


Estimate ρ from one (or more) known polls

For our example poll we already know

  • W example = 179

  • K example = 7 693

Assuming α = 1.3,

ρ = K / W^α = 7 693 / 179^1.3 ≈ 9.05

If you add the Ethereum poll (W ≈ 660; let’s say you measured 30 k bundles once you flatten it), you can average the two ρ’s or run a quick log–log regression, but in practice they land in the same ballpark. Pick one value and document it:

ρ ≈ 9.05, α = 1.3

Because K is now a smooth function of W, you can drop any value—2, 660, 10⁶—straight into the model.
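The refit procedure amounts to a two-parameter log–log regression, sketched below. Only the (179, 7 693) pair comes from this paper; the other observations are hypothetical placeholders standing in for future polls, so the fitted values are illustrative rather than the published ρ ≈ 9.05, α = 1.3.

```python
import numpy as np

# (W, K) observations. Only (179, 7_693) is from this paper's treasury example;
# the other pairs are hypothetical placeholders for future polls.
observations = [(179, 7_693), (660, 41_956), (90, 3_200)]

W = np.array([w for w, _ in observations], dtype=float)
K = np.array([k for _, k in observations], dtype=float)

# OLS on log K = log rho + alpha * log W.
alpha, log_rho = np.polyfit(np.log(W), np.log(K), 1)
rho = np.exp(log_rho)

print(f"alpha ≈ {alpha:.2f}, rho ≈ {rho:.2f}")   # alpha comes out near 1.3 for these points
print(f"K_hat(660) ≈ {rho * 660 ** alpha:,.0f}")  # compare with the forecast K(660) ≈ 41,956
```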


7 Pairwise Prompt Walk‑Through (binary flow)

  1. Sequential binary screens. Two cards at a time, click preferred.

  2. Bundle encounters. Roughly one bundle card appears every few comparisons; voters must parse its dual tags.

  3. Long‑haul reality. At 7 s per screen, solo completion = 6.9 years.


8 Capability Comparison

| Feature | MDCT | Pairwise (binary) | Impact |
| --- | --- | --- | --- |
| Hierarchy preserved | ✔ native tree | Manual re‑tag | Cognitive load on analyst |
| Cross‑dimension 2‑way trade‑offs | ✔ implicit | ✔ explicit bundles | Comparable after 7 872 nodes |
| ≥ 3‑way contingencies | ✔ implicit | ✖ infeasible | Would need > 500 k bundles |
| Partial orders / indifference | ✔ supported | ✖ forced total order | Pairwise loses nuance or adds ties arbitrarily |
| Incremental edits | ✔ local | ✖ full re‑collect | Reboot survey for any new option |

Scaling class: Binary pairwise prompts grow Θ(WC²), whereas MDCT grows Θ(W log W).


9 What’s Still Missing — Multidimensional Bundles, Indifference, and Incremental Edits

9.1 What a 3‑Dimension Bundle Would Capture

A 3‑dimension bundle like

Asset Composition · ETH 60 % AND DeFi Revenue · APY Above‑Average AND Rebalance Strategy · Weekly

lets voters express a conditional preference that ties three separate policy knobs together. This can encode statements such as:

“I only accept high ETH exposure if we promise above‑average yield and rebalance weekly.”

9.2 How Many 3‑D Bundles Are Needed?

For each unordered triple of top‑level dimensions (i < j < k) we must create L_i × L_j × L_k composite nodes. With the six leaf counts in this treasury example the bundle total is

B₃ = Σ_{i<j<k} L_i × L_j × L_k

The new total node count becomes WC₃ = WC + B₃ ≈ 530,464.

9.3 Edge & Prompt Explosion

The number of edges now becomes

Q₃ = WC₃ × (WC₃ − 1) / 2 ≈ 1.4 × 10¹¹

and, at one edge per prompt, the binary prompts required are

PR₃ = Q₃ ≈ 1.4 × 10¹¹

At 7 s per prompt a single voter would need ≈ 31 millennia to finish.
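A short sketch of that arithmetic, taking WC₃ ≈ 530,464 from Section 9.2 as given (the 3‑D bundle count itself is not re‑derived here):

```python
WC3 = 530_464            # total nodes after adding 3-D bundles (Section 9.2)
SECONDS_PER_PROMPT = 7   # per-prompt time from the derived-quantities table

edges = WC3 * (WC3 - 1) // 2         # one binary prompt per unordered node pair
seconds = edges * SECONDS_PER_PROMPT
years = seconds / (365.25 * 24 * 3600)

print(f"{edges:.2e} prompts")             # on the order of 1.4e11
print(f"≈ {years:,.0f} years per voter")  # roughly 31 millennia
```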

9.4 4‑, 5‑, 6‑Dimension Bundles

The pattern generalises: adding a 4th dimension multiplies by another factor of L_m, pushing node counts toward trillions. The prompt requirement grows on the order of Θ(WC_d²) where WC_d is the node count after d‑dimension bundles.

9.5 Why MDCT Handles High‑Order Trade‑Offs Gracefully

MDCT does not enumerate bundles. Its tree structure and score aggregation let voters express a preference path that implicitly covers all dimensional combinations without creating explicit composite nodes. Thus:

| Number of dimensions tied in one statement | Pairwise prompts required | MDCT additional prompts |
| --- | --- | --- |
| 2 | Θ(WC²) = 30 M | 0 (already covered) |
| 3 | Θ(10¹¹) | 0 |
| 4–6 | Θ(10¹³+) | 0 |

9.6 Indifference (Partial Orders)

Indifference means a voter views two (or more) options as equally acceptable. MDCT lets voters skip a comparison or explicitly mark a tie, preserving a partial order. In binary pairwise, every unordered pair must still be forced into A > B or B > A to keep the edge matrix acyclic. Modeling genuine indifference requires adding duplicate prompts that check consistency or using special "tie" buttons that then need follow‑up comparisons to maintain graph connectivity—ballooning PR even further.

Practical effect: introducing a tie option typically increases prompt count by 25‑40 % in empirical studies because the algorithm must locate alternative comparison paths to break cycles.

9.7 Incremental Edits (Adding a New Option Later)

MDCT’s tree can splice a new leaf into one branch and ask only local comparisons in that subtree (Θ(log W) prompts). Binary pairwise must compare the new node against all existing WC nodes, adding WC new edges: a linear increase that quickly dominates survey length as edits accumulate.

Example: inserting a single additional treasury‑asset leaf would require 7 872 new binary prompts—~15.3 hours per voter—versus ~10 prompts in MDCT.

Takeaway: Pairwise struggles not just with dimensionality, but also with human behaviors like indecision and evolving option sets—features MDCT accommodates with logarithmic overhead.
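A small sketch of the incremental‑edit cost above; the MDCT side uses log₂(W) as a stand‑in for "local comparisons in one subtree," which is an approximation of the ~10‑prompt figure rather than a specification of MDCT's actual insertion algorithm.

```python
import math

WC = 7_872               # existing pairwise nodes (options plus bundles)
W = 179                  # original options in the MDCT tree
SECONDS_PER_PROMPT = 7

# Binary pairwise: the new option must be compared against every existing node.
pairwise_prompts = WC
print(pairwise_prompts, f"prompts ≈ {pairwise_prompts * SECONDS_PER_PROMPT / 3600:.1f} hours")

# MDCT (approximation): only local comparisons within one subtree, on the order of log W.
mdct_prompts = math.ceil(math.log2(W))
print(mdct_prompts, "prompts, in line with the ~10 quoted above")
```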


10 Conclusion

Pairwise comparison has a venerable pedigree—going back at least to ancient Greece, where philosophers could reliably weigh two choices at a time around a small table. For that scale and era, binary pairwise was brilliantly simple: one debate, one edge, done.

Fast‑forward to modern governance and product‑design problems with hundreds of interdependent options spanning multiple dimensions. The same mechanism that once felt elegant now collapses under its own combinatorial weight. Our binary‑only model shows that faithfully encoding today’s treasury decision space would require:

  • ≈ 30 million comparisons

  • ≈5.6 years of nonstop effort per voter

Why? Because pairwise must explicitly compare every pair of composite nodes, and the composite count grows quadratically when you add 2‑dimension bundles, cubically for 3‑dimension bundles, and so on—approaching exponential behaviour if you keep expanding the dimensionality.

In contrast, MDCT’s hierarchical aggregation stores the same trade‑off information in Θ(W log W) prompts, scaling gracefully even if you later tie 4, 5, or 6 dimensions together. What was an elegant solution for small, flat choice sets becomes a non‑starter for today’s richly structured decision spaces.

Bottom line: Pairwise was perfect for the polis; MDCT is built for planetary‑scale coordination.


11 Full Treasury Example (MPTS)

Example: Member Desired Treasury Management Poll
