State Change Loop

The challenge of collective decision-making isn't just about aggregating preferences. It's about recognizing that decisions occupy different positions in what I call the "state change loop" (inspired by the military OODA loop), and understanding these positions is crucial for designing effective group coordination systems.

The Foundation: Human Scale State Change

To understand how decisions differ, we first need to examine how humans naturally process and respond to the world around them.

Every conscious human continuously cycles through the state change loop, throughout their whole conscious life:

  1. Sense - Gathering information about the current environment by seeing, hearing, tasting, touching, smelling.

  2. Judgment - Processing and evaluating the sense information to create a mental model of the current state.

  3. Desired Outcome - Determining what outcome we want in a future state.

  4. Plan Action - Strategizing what plans are most likely to achieve the desired outcome state.

  5. Take Action - Executing the selected plan.

  6. Observe - Assessing how close the new state is to the desired outcome state, and beginning the cycle again.

Awake? No escape.

A person is always cycling through these steps, even during activities when people believe they are doing nothing, like meditation.

It may start when they sense tension in their head; they judge that they are stressed; they decide their desired outcome is a state of peace; they plan that meditation is the action most likely to achieve it and decide how they'll do it; they do the meditation; and lastly they observe whether they achieved the peaceful state they wanted.

Execution can fail

However, there's something far more profound to notice: each step can succeed or fail, to varying degrees.

Think of each step in the loop like a bucket that can be filled anywhere from 0% to 100%. How full the bucket is represents the quality at which the step was executed.

A blind person would score 0% on the vision part of sensing execution, since they can't collect any visual sense data through their eyes, while someone with glaucoma might execute visual sensing at 20%.
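
To make the bucket metaphor concrete, here's a minimal sketch in Python. The step names and scores are purely illustrative labels for the ideas above, not part of any formal model:

```python
# Each step in the state change loop gets a quality score between
# 0.0 (an empty bucket, total failure) and 1.0 (a full bucket, perfect execution).
STEPS = ["sense", "judgment", "desired_outcome", "plan_action", "take_action", "observe"]
print("Loop steps, in order:", " -> ".join(STEPS))

# Hypothetical example: the vision part of sensing for three different people.
vision_quality = {
    "fully_sighted": 1.0,  # collects essentially all visual sense data
    "glaucoma": 0.2,       # partial field of vision
    "blind": 0.0,          # no visual sense data at all
}

for person, quality in vision_quality.items():
    print(f"{person}: vision bucket filled to {quality:.0%}")
```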

But this only scratches the surface. To do the idea justice, let's break down what good or bad execution at each step looks like in more detail.

Sensing execution

Determines how much information you can gather about your environment. Imagine someone who is blind and deaf trying to cross a busy highway—their sensing execution is near 0% because they can't see or hear the cars coming. The best they could do is smell the exhaust and rubber, feel the concrete under their feet as they walk, and feel the air rushing past them as cars pass by.

Compare this to someone with perfect vision and hearing who can sense every vehicle, its speed, and direction—that's close to 100% sensing execution. Most of us fall somewhere in between, maybe missing some details but catching the important stuff.

Judgment execution

Is your ability to use that sense data to make an accurate model of what's happening around you. Even if our blind and deaf person somehow knew cars were present, they couldn't judge which direction they're coming from, how fast they're moving, or when it's safe to cross. Someone with good judgment execution takes their sense data and correctly understands the situation—"that car is 200 meters away, moving 85 km/h (about 24 meters per second), so I have roughly 8 seconds before it reaches me."

Conversely, someone who can see and hear the cars might have an acute or chronic impairment that causes them to misjudge the situation. For example, someone who has had 7 shots of tequila might walk into traffic they can plainly see; that's 0% judgment execution.

Outcome execution

Requires people to decide and clearly define what their goals are. Execution improves when you know your longer-term objectives and have good execution from sensing and judgment. It gets worse when you're missing sense information, lack imagination, or have made poor judgments about your situation.

Someone might want to "get across the highway safely" (good outcome execution) versus just "get across fast" (poor outcome execution that ignores the safety goal).

Something else worth noting here: people can limit their desired outcomes by lacking the confidence or self-esteem to consider themselves worthy of an outcome, like getting a raise at work. This actually stems from holding limiting judgments about themselves.

Planning execution

Requires a planner to consider different ways to get the outcome they've decided they want, and then select the most likely plan to succeed. A good plan needs to consider relevant conditions (weather, resources, competition, etc.), constraints (time, budget, physical limitations), and be realistic about the capabilities of the person or agent or system that will execute the plan.

High planning execution means creating a strategy and selecting tactics that are likely to succeed by being well adapted to reality.

Taking action execution

Requires an entity capable enough to pull off the plan, whether a person, an AI agent, or even a software system; we'll call these entities operators. Even a perfect plan fails with poor execution. Operators need to actually act out the plan that's been created. This is where skill, experience, available tools, resources, and physical capability all matter.

You could have a brilliant escape plan, but if you can't run fast enough or think clearly under pressure, your action execution will be low.

Good planning takes into account who the operator is. The planner must realistically judge if the operator is likely to be able to execute a plan or not.

Observation execution

Is your ability to accurately assess and learn from your actions in relation to the desired outcome. After acting, you observe the results and compare them to what you wanted to achieve. With good execution, learning happens here, and can even improve your memory or skills in ways that help future execution at other steps.

Poor observation execution means missing or even ignoring important feedback, or drawing wrong conclusions about what worked and what didn't, which can reduce future execution. Once complete, the observation step feeds back into the sensing phase of your next decision loop.

When one step falls, the others after it follow.

Dependency Dominos

The critical point is that each step depends on the ones before it—you can't have good judgment if your sensing is terrible, you can't plan well if you don't understand your situation, and it's impossible to execute successfully if you don't know what success is.

The State Change Loop framework reveals a critical mathematical truth about decision chains: the effective quality of any step is capped by the product of the qualities of all preceding steps.

An example of the dominos:

100% × 100% × 30% × 60% × 95% = 17.1% final effectiveness

The mathematics are unforgiving: excellence in later stages cannot compensate for fundamental flaws in earlier ones. A brilliant execution team (95%) implementing a mediocre strategy (60%) toward an unclear outcome (30%) will produce poor results (30% × 60% × 95% = 17.1%), no matter how hard they work.

This creates a cascading effect where early failures compound dramatically, often in ways that aren't immediately obvious to the people making decisions.
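
The arithmetic behind this example is a single multiplication. Here's a minimal sketch, treating the per-step scores above as hypothetical values:

```python
from math import prod

# Per-step execution quality for the example above:
# sense, judgment, desired outcome, plan action, take action.
step_quality = [1.00, 1.00, 0.30, 0.60, 0.95]

final_effectiveness = prod(step_quality)
print(f"Final effectiveness: {final_effectiveness:.1%}")  # -> 17.1%
```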

Let's examine how this plays out in practice.

Scenario 1: The Sensing Failure

Consider a startup trying to build a new social media app. The founders conduct user research, but they only survey their friends and colleagues—educated, tech-savvy millennials in San Francisco. Their sensing execution is perhaps 30%—they're gathering some data, but missing the vast majority of their potential user base.

This poor sensing cascades immediately. Their judgment execution might operate at 80% efficiency on the data they do have—they're smart people who analyze well—but 80% of 30% is only 24% effective judgment about the real market. When they define their desired outcome as "build an app that helps young professionals increase their earnings by over 30% per year," they might execute this step at 90% clarity, but they're being clear about the wrong thing. Their actual market understanding is 90% × 24% = 21.6%.

By the time they reach strategy and execution, even perfect performance can only deliver 21.6% of optimal results. They build a beautifully designed app that completely misses what most users actually want. The failure wasn't in their coding skills or marketing budget—it was baked in from day one by inadequate sensing.
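
Tracing the running product makes the cascade visible step by step. This sketch simply chains the percentages assumed in the scenario above:

```python
from itertools import accumulate
from operator import mul

# Scenario 1: per-step execution quality assumed in the narrative above.
scenario_1 = {
    "sensing": 0.30,
    "judgment": 0.80,
    "outcome": 0.90,
    "planning": 0.80,
    "action": 0.85,
}

# Running product: the ceiling on overall effectiveness after each step.
for step, ceiling in zip(scenario_1, accumulate(scenario_1.values(), mul)):
    print(f"after {step:<9} effectiveness is capped at {ceiling:.1%}")
# -> 30.0%, 24.0%, 21.6%, 17.3%, and finally 14.7%
```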

Scenario 2: The Judgment Catastrophe

Now imagine a city council trying to address traffic congestion. They have excellent data—detailed traffic studies, citizen surveys, economic analysis. Their sensing execution is 90%. But the council is deeply polarized, with half the members misinterpreting the data to support their pre-existing belief that all transportation problems can be solved by getting people out of cars, while the other half cherry-picks statistics to argue that the only solution is building more roads. Despite having the same comprehensive dataset, they reach completely opposite conclusions about what the numbers actually mean. Their judgment execution crashes to 20%—they have great data, but their political biases prevent them from processing it objectively to create a realistic model of what is really going on.

This creates a particularly insidious failure pattern. Because their sensing was so good, they feel confident in their decisions. They might achieve 85% outcome execution—clearly defining goals like "reduce average commute time by 25%"—but they're basing these goals on fundamentally flawed analysis. With only 20% judgment applied to 90% sensing (18% effective understanding), 85% × 18% = 15.3% effective outcome setting.

Their planning execution suffers similarly. Even if they have competent urban planners (80% planning capability), those planners are working from politically distorted requirements. The council demands solutions that fit their ideological biases rather than what the data actually suggests would work. 80% × 15.3% = 12.2% effective planning.

The resulting infrastructure projects predictably fail, often making traffic worse while consuming millions in public funds.

Scenario 3: The Outcome Ambiguity Trap

A software company decides they want to "improve user experience"—a goal that sounds reasonable but provides almost no guidance for action. Their sensing might be decent (70%—they have some user feedback) and their judgment might be solid (75%—they understand their technical constraints), but their outcome execution is catastrophically low (15%—the goal is too vague to be actionable).

This ambiguity doesn't just limit the outcome step—it makes effective planning nearly impossible. How do you create a strategy to "improve user experience"? Different team members will interpret this completely differently. Some focus on speed, others on visual design, others on feature completeness. Even excellent planners (90% capability) are building on only 70% × 75% × 15% = 7.9% effective outcome setting, so planning tops out at 90% × 7.9% = 7.1%.

The execution team faces an even worse situation. They might be highly skilled developers (85% execution capability), but they're implementing a confused plan toward an ambiguous goal. 85% × 7.1% = 6.0% effective execution.

The result is a series of disconnected improvements that don't meaningfully impact user satisfaction, despite months of skilled work.

The Maths

These scenarios reveal why so many well-intentioned projects fail despite having talented, hardworking people involved.

The mathematics are simple but brutal:

  • Scenario 1: 30% × 80% × 90% × 80% × 85% = 14.7% final effectiveness

  • Scenario 2: 90% × 20% × 85% × 80% × 85% = 10.4% final effectiveness

  • Scenario 3: 70% × 75% × 15% × 90% × 85% = 6.0% final effectiveness
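
These figures are simply the product of each scenario's per-step scores. A quick sketch that reproduces them:

```python
from math import prod

# Per-step quality for each scenario:
# sensing, judgment, outcome, planning, action.
scenarios = {
    "Scenario 1 (sensing failure)": [0.30, 0.80, 0.90, 0.80, 0.85],
    "Scenario 2 (judgment catastrophe)": [0.90, 0.20, 0.85, 0.80, 0.85],
    "Scenario 3 (outcome ambiguity)": [0.70, 0.75, 0.15, 0.90, 0.85],
}

for name, qualities in scenarios.items():
    print(f"{name}: {prod(qualities):.1%} final effectiveness")
```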

Even modest failures at early stages create devastating downstream effects.

This explains why so many organizations struggle with execution—they're not failing because people aren't trying hard enough, they're failing because early mistakes compound mathematically.

What if there are no big failures at any step?

A decision chain where every step performs at a "pretty good" 80% still delivers only 32.8% of optimal results. Here's how uniform per-step quality compounds across five steps:

  • 50% × 50% × 50% × 50% × 50% = 3.125%

  • 60% × 60% × 60% × 60% × 60% = 7.776%

  • 70% × 70% × 70% × 70% × 70% = 16.807%

  • 80% × 80% × 80% × 80% × 80% = 32.768%

  • 90% × 90% × 90% × 90% × 90% = 59.049%

  • 95% × 95% × 95% × 95% × 95% = 77.378%

  • 99% × 99% × 99% × 99% × 99% = 95.099%

Here's what this looks like visually:

The curve makes the lesson impossible to miss: when success (or agreement, reliability, etc.) must be repeated five times in a row, even modest improvements in the single-step percentage compound into dramatic gains—or losses—in the final outcome. A process that is “only” 80% reliable at each step still fails two times out of three by the time five steps are complete (overall success ≈ 33%). Pushing each step to 90% improves the chain to 59%, and at 95% you finally clear three-quarters reliability.

Only near-perfect execution, 99% at every step, approaches certainty at ≈ 95% overall.

In practice this means that if you need a high-confidence end result, investing in the quality of every individual step is non-negotiable—small per-step gains pay exponential dividends, while small per-step lapses are exponentially costly.
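
To regenerate the table above, or to try other per-step values, here's a minimal sketch:

```python
# Overall effectiveness of a five-step chain where every step runs
# at the same quality p is simply p raised to the fifth power.
for p in (0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99):
    print(f"{p:.0%} per step over 5 steps -> {p ** 5:.3%} overall")
```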

All dominos are equal

The mathematics are clear: early, less visible processes matter just as much as every other step in the decision chain. Ignore even one step in the loop and the last domino falls.

Creator's Leverage

This is why MetaPoll creators matter so much. They are sensing the decision space of possible options and structures, then using their judgment to select and frame the choices that voters can coordinate around.

  • A creator with poor sensing might miss crucial options or stakeholder perspectives.

  • A creator with biased judgment might frame choices in ways that predetermine outcomes.

  • A creator who conflates outcome, strategy, and execution decisions creates the kind of muddled choices that lead to poor coordination.

As a MetaPoll creator, you're not just organizing options—you're setting a ceiling on the quality of coordination that can take place within that MetaPoll.

The Outcome Blindness Problem

Here's where things get interesting: we've traditionally put far too much focus on arguing about "how" we should do things. People debate plans, strategies, and tactics endlessly on Twitter and in politics, but they skip the most important question—what outcome are we actually trying to achieve?

Consider the absurdity: what's the point of arguing about which policy we should implement if we haven't first agreed on what we're trying to accomplish? It's like five roommates debating whether to install a single-stage or dual-stage furnace when it's -40°C outside, without first discussing what temperature they want the house to be. Most people just care about being warm—only the technical folks need to worry about BTU ratings and heat exchangers.

This suggests a powerful insight about power distribution. Outcomes are the most universally accessible thing to agree on. Everyone can meaningfully participate in deciding "we want the house to be 21°C" or "we want clean drinking water" or "we want a community swimming pool". These conversations don't require specialized knowledge—they require understanding your own preferences and values.

The technical debates about implementation methods? Let the proven experts handle those. Good furnace installers should choose the right equipment. Urban planners should design the traffic solutions. Software engineers should pick the right algorithms. Regular people should stop arguing about things they are not experts in. But the people affected by these decisions should collectively determine what success looks like.

This is how you actually decentralize power: focus discussion and alignment around the desired outcome, not the plans or execution. (This is one of the core ideas behind Outcome Democracy.)

Why not align around Judgments?

Judgment presents a fundamental coordination problem: it's inherently internal and subjective. Judgment lives in our individual minds—it's how we process and interpret information based on our personal knowledge, experience, and biases. Unlike outcomes, which exist (or can exist) in the shared external world, judgments remain trapped in our internal cognitive landscapes.

This creates practical problems for group coordination. Judgments are abstract and difficult to verify or materialize into anything tangible. How do you align a group around one person's judgment that "families will feel safer if we design the park with better sight-lines and more lighting"? That judgment makes sense within that person's mental framework, but it's nearly impossible for others to fully understand or validate the complex web of assumptions, safety perceptions, and design theories that led to it.

When we let individual judgment dominate group decisions, we get fragmented results—smart people analyzing the same community needs but reaching completely opposite conclusions about what design principles should guide the park. One person's judgment is that rose bushes are ugly and should be avoided, another believes that naturalistic landscapes improve mental health, and a third thinks that chainlink fences break too easily so they are a waste of money. Each judgment feels perfectly rational from their perspective, but these judgments are too abstract to create anything tangible.

Outcomes offer a different path. When we focus group coordination on shared outcomes, we leverage the fact that everyone can judge these goals for themselves while still creating alignment around external, observable targets that everyone can verify. "Build a park with playground equipment for children under 12, 2 meter wide concrete walking paths, an all season botanical garden area of 30 by 30 meters, and a standard size basketball court" is something everyone can understand and measure, regardless of their internal judgment about roses, fences or community psychology.
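
One way to see the difference: an outcome like this park specification can be written down as a checklist of externally measurable targets, while a judgment can't. Here's a purely illustrative sketch; the field names and numbers simply restate the park example above:

```python
from dataclasses import dataclass

@dataclass
class ParkOutcome:
    """Externally measurable targets anyone can verify the finished park against."""
    playground_max_child_age: int  # playground equipment for children under this age
    path_width_m: float            # width of the concrete walking paths, in meters
    garden_width_m: int            # all-season botanical garden footprint
    garden_length_m: int
    basketball_courts: int         # number of standard-size courts

# The desired outcome from the example above, restated as data.
desired = ParkOutcome(
    playground_max_child_age=12,
    path_width_m=2.0,
    garden_width_m=30,
    garden_length_m=30,
    basketball_courts=1,
)

# A judgment like "naturalistic landscapes improve mental health" has no
# equivalent checklist: there is nothing external to measure it against.
print(desired)
```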

Outcomes are goals we can consciously evaluate and decide we want to manifest in the external world—a shared reality that affects everyone equally and can be objectively assessed.

Applying the Loop

Understanding the state change loop transforms how we think about collective decision-making. It reveals why so many governance systems fail despite good intentions, why technical expertise matters more for some decisions than others, and why outcome-focused coordination might be the key to both democratic participation and effective implementation.

We'll be applying what we've learned about the State Change Loop to classify MetaPoll options using the OSE Class framework—helping MetaPoll creators design decision structures that match the cognitive realities of how humans actually process change, both individually and collectively.
