State Change Loop
The challenge of collective decision-making isn't just about aggregating preferences. It's about recognizing that decisions occupy different positions in what I call the "state change loop" (inspired by the military's OODA loop), and understanding these positions is crucial for designing effective group coordination systems.
The Foundation: Human Scale State Change
To understand how decisions differ, we first need to examine how humans naturally process and respond to the world around them.
Every conscious human continuously cycles through the state change loop:

Sense - Gathering information about the current environment by seeing, hearing, tasting, touching, smelling.
Judgment - Processing and evaluating the sense information to create a mental model of the current state.
Desired Outcome - Determining what outcome we want in a future state.
Plan Action - Strategizing what plans are most likely to achieve the desired outcome state.
Take Action - Executing the selected plan.
Observe - Assessing how close the new state is to the desired outcome state, and beginning the cycle again.
Awake? No escape.
A person is always cycling through these steps, even during activities where they believe they are doing nothing, like meditation.
It may start when they sense tension in their head; they judge that they are stressed; they decide their desired outcome is a state of peace; they plan that meditation is the action most likely to achieve it; they do the meditation; and lastly they observe whether they achieved the peaceful state they wanted.
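Here's that meditation cycle rendered as a toy sketch in Python. Every function and value is illustrative (the loop is a mental model, not a real API), but it shows how each step feeds the next:

```python
def sense():
    return {"tension_in_head": True}  # Sense: notice tension in the head

def judge(percepts):
    # Judgment: build a model of the current state from the sense data
    return "stressed" if percepts["tension_in_head"] else "calm"

def choose_outcome(current_state):
    return "peaceful"  # Desired Outcome: a state of peace

def plan_action(current_state, desired_state):
    return "meditate for 20 minutes"  # Plan Action: the plan most likely to succeed

def take_action(plan):
    print(f"Doing: {plan}")  # Take Action: execute the selected plan
    return "peaceful"        # the state the action produced (a toy value)

def observe(new_state, desired_state):
    return new_state == desired_state  # Observe: compare new state to desired state

# One pass through the loop; in a waking person this cycle never stops.
percepts = sense()
state = judge(percepts)
desired = choose_outcome(state)
plan = plan_action(state, desired)
new_state = take_action(plan)
print("Desired outcome achieved:", observe(new_state, desired))
```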
Execution can fail
However, there's something far more profound to notice: steps can succeed or fail.
Think of each step in the loop as a bucket that can be filled anywhere from 0% to 100%. How full it is represents the quality at which the step was executed.
But this is just scratching the surface. To do it justice, let's break down in more detail what good or bad execution at each step looks like.
Sensing execution
Determines how much information you can gather about your environment. Imagine someone who is blind and deaf trying to cross a busy highway: their sensing execution is near 0% because they can't see or hear the cars coming. The best they could do is smell the exhaust and rubber, feel the concrete under their feet as they walk, and feel the air rushing past as cars go by.
Compare this to someone with perfect vision and hearing who can sense every vehicle, its speed, and its direction: that's close to 100% sensing execution. Most of us fall somewhere in between, maybe missing some details but catching the important stuff.
Judgment execution
Is your ability to use that sense data to make an accurate model of what's happening around you. Even if our blind and deaf person somehow knew cars were present, they couldn't judge which direction they're coming from, how fast they're moving, or when it's safe to cross. Someone with good judgment execution takes their sense data and correctly understands the situation: "that car is 200 meters away, moving at 85 kph, so I have about 8 seconds before it reaches me."
Conversely, someone who can see and hear the cars might have an acute or chronic mental disorder that causes them to misjudge the situation. For example, someone who has had 7 shots of tequila might walk into traffic that they can plainly see; that's 0% judgment execution.
Outcome execution
Requires people to decide and clearly define what their goals are. Execution improves when you know your longer-term objectives and have good execution from sensing and judgment. It gets worse when you're missing sense information, lack imagination, or have made poor judgments about your situation.
Someone might want to "get across the highway safely" (good outcome execution) versus just "get across fast" (poor outcome execution that ignores the safety goal).
Something else worth noting here: people can limit their desired outcomes by lacking the confidence or self-esteem to consider themselves worthy of an outcome, like getting a raise at work. This actually stems from limiting judgments they hold about themselves, so poor judgment execution can quietly cap outcome execution.
Planning execution
Requires a planner to consider different ways to get the outcome they've decided they want, and then select the most likely plan to succeed. A good plan needs to consider relevant conditions (weather, resources, competition, etc.), constraints (time, budget, physical limitations), and be realistic about the capabilities of the person or agent or system that will execute the plan.
For example, if an intruder breaks into your home, your survival plan will be dramatically different if you're a disabled person in a wheelchair versus a combat-ready Navy SEAL.
High planning execution means creating a strategy and selecting tactics that are likely to succeed by being well adapted to reality.
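As a rough sketch, you can picture plan selection as scoring each candidate plan against conditions, constraints, and operator capability, then picking the highest scorer. The plans and numbers below are invented purely for illustration:

```python
# Hypothetical candidate plans for crossing the busy highway, each scored
# (0.0 to 1.0) on how well it fits conditions, constraints, and the
# operator's actual capabilities. All values are invented for illustration.
candidate_plans = {
    "sprint across the traffic": {"fits_conditions": 0.9, "within_constraints": 1.0, "operator_capable": 0.2},
    "walk to the overpass":      {"fits_conditions": 0.8, "within_constraints": 0.9, "operator_capable": 0.95},
}

def success_likelihood(scores):
    # Weaknesses multiply: a plan the operator can't execute scores near
    # zero no matter how well it fits the conditions.
    likelihood = 1.0
    for factor in scores.values():
        likelihood *= factor
    return likelihood

# High planning execution means selecting the plan most likely to succeed.
best = max(candidate_plans, key=lambda name: success_likelihood(candidate_plans[name]))
print(best)  # -> "walk to the overpass"
```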
Taking action execution
Requires an entity, such as a person, an AI agent, or even a software system, that is capable enough to pull off the plan; we'll call these entities operators. Even a perfect plan fails with poor execution. Operators need to actually carry out the plan that's been created. This is where skill, experience, available tools, resources, and physical capability all matter.
You could have a brilliant escape plan, but if you can't run fast enough or think clearly under pressure, your action execution will be low.
Observation execution
Is your ability to accurately assess and learn from your actions in relation to the desired outcome. After acting, you observe the results and compare them to what you wanted to achieve. With good execution, learning happens here, and can even improve your memory or skills, which helps future execution in other steps.
Poor observation execution means missing or even ignoring important feedback, or drawing wrong conclusions about what worked and what didn't; this can reduce future execution. Once complete, the observation step feeds back into the sensing phase of your next decision loop.
Dependency Dominos
The critical point is that each step depends on the ones before it: you can't have good judgment if your sensing is terrible, you can't plan well if you don't understand your situation, and it's impossible to execute successfully if you don't know what success is.
An example of the dominos:
100% (sense) × 100% (judgment) × 30% (outcome) × 60% (plan) × 95% (action) = 17.1% final effectiveness
The mathematics are unforgiving: excellence in later stages cannot compensate for fundamental flaws in earlier ones. A brilliant execution team (95%) implementing a mediocre strategy (60%) toward an unclear outcome (30%) will produce poor results (30% × 60% × 95% = 17.1%), no matter how hard they work.
This creates a cascading effect where early failures compound dramatically, often in ways that aren't immediately obvious to the people making decisions.
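A few lines of Python make the compounding concrete. The step qualities below are the ones from the example above:

```python
# Final effectiveness is simply the product of each step's execution quality.
step_quality = {
    "sense":    1.00,
    "judgment": 1.00,
    "outcome":  0.30,
    "plan":     0.60,
    "action":   0.95,
}

effectiveness = 1.0
for quality in step_quality.values():
    effectiveness *= quality

print(f"{effectiveness:.1%}")  # -> 17.1%
```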
This is why great MetaPoll design matters so much: getting the foundation right isn't just important, it's mathematically essential for success.
Let's examine how this plays out in practice.
Scenario 1: The Sensing Failure
Consider a startup trying to build a new social media app. The founders conduct user research, but they only survey their friends and colleagues: educated, tech-savvy millennials in San Francisco. Their sensing execution is perhaps 30%. They're gathering some data, but missing the vast majority of their marketable user base.
This poor sensing cascades immediately. Their judgment execution might operate at 80% efficiency on the data they do have (they're smart people who analyze well), but 80% of 30% is only 24% effective judgment about the real market. When they define their desired outcome as "build an app that helps young professionals increase their earnings by over 30% per year," they might execute this step at 90% clarity, but they're being clear about the wrong thing. Their actual market understanding is 90% × 24% = 21.6%.
By the time they reach strategy and execution, even perfect performance can only deliver 21.6% of optimal results. They build a beautifully designed app that completely misses what most users actually want. The failure wasn't in their coding skills or marketing budget; it was baked in from day one by inadequate sensing.
Scenario 2: The Judgment Catastrophe
Now imagine a city council trying to address traffic congestion. They have excellent data: detailed traffic studies, citizen surveys, economic analysis. Their sensing execution is 90%. But the council is deeply polarized, with half the members misinterpreting the data to support their pre-existing belief that all transportation problems can be solved by getting people out of cars, while the other half cherry-picks statistics to argue that the only solution is building more roads. Despite having the same comprehensive dataset, they reach completely opposite conclusions about what the numbers actually mean. Their judgment execution crashes to 20%: they have great data, but their political biases prevent them from processing it objectively to create a realistic model of what is really going on.
This creates a particularly insidious failure pattern. Because their sensing was so good, they feel confident in their decisions. They might achieve 85% outcome execution (clearly defining goals like "reduce average commute time by 25%"), but they're basing these goals on fundamentally flawed analysis. 85% × 20% = 17% effective outcome setting.
Their planning execution suffers similarly. Even if they have competent urban planners (80% planning capability), those planners are working from politically distorted requirements. The council demands solutions that fit their ideological biases rather than what the data actually suggests would work. 80% × 17% = 13.6% effective planning.
The resulting infrastructure projects predictably fail, often making traffic worse while consuming millions in public funds.
Scenario 3: The Outcome Ambiguity Trap
A software company decides they want to "improve user experience": a goal that sounds reasonable but provides almost no guidance for action. Their sensing might be decent (70%: they have some user feedback) and their judgment might be solid (75%: they understand their technical constraints), but their outcome execution is catastrophically low (15%: the goal is too vague to be actionable).
This ambiguity doesn't just limit the outcome step; it makes effective planning nearly impossible. How do you create a strategy to "improve user experience"? Different team members will interpret this completely differently. Some focus on speed, others on visual design, others on feature completeness. Even excellent planners (90% capability) can only achieve 90% × 15% = 13.5% effective planning when working toward an unclear goal.
The execution team faces an even worse situation. They might be highly skilled developers (85% execution capability), but they're implementing a confused plan toward an ambiguous goal. 85% × 13.5% = 11.5% effective execution.
The result is a series of disconnected improvements that don't meaningfully impact user satisfaction, despite months of skilled work.
The Maths
These scenarios reveal why so many well-intentioned projects fail despite having talented, hardworking people involved.
The mathematics are simple but brutal:
Scenario 1: 30% × 80% × 90% × 80% × 85% = 14.7% final effectiveness
Scenario 2: 90% × 20% × 85% × 80% × 85% = 10.4% final effectiveness
Scenario 3: 70% × 75% × 15% × 90% × 85% = 6.0% final effectiveness
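The same model reproduces all three scenario chains (step order: sense, judgment, outcome, plan, action):

```python
from math import prod

# Step qualities taken from the three scenarios above.
scenarios = {
    "The Sensing Failure":        [0.30, 0.80, 0.90, 0.80, 0.85],
    "The Judgment Catastrophe":   [0.90, 0.20, 0.85, 0.80, 0.85],
    "The Outcome Ambiguity Trap": [0.70, 0.75, 0.15, 0.90, 0.85],
}

for name, qualities in scenarios.items():
    print(f"{name}: {prod(qualities):.1%}")
# -> 14.7%, 10.4%, 6.0%
```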
Even modest failures at early stages create devastating downstream effects.
This explains why so many organizations struggle with execution: they're not failing because people aren't trying hard enough, they're failing because early mistakes compound mathematically.
All dominos are equal
The mathematics are clear: early, less visible processes matter just as much as every other step in the decision chain. Ignore even one step in the loop and the last domino falls.
Creator's Leverage
This is why MetaPoll creators matter so much. They are sensing the decision space of possible options and structures, then using their judgment to select and frame the choices that voters can coordinate around.
Without great MetaPoll authorship, even the most engaged voters are fundamentally limited by the quality of choices they're given.
A creator with poor sensing might miss crucial options or stakeholder perspectives.
A creator with biased judgment might frame choices in ways that predetermine outcomes.
A creator who conflates outcome, strategy, and execution decisions creates the kind of muddled choices that lead to poor coordination.
As a MetaPoll creator, you're not just organizing options; you're setting a ceiling on the quality of coordination that can take place within that MetaPoll.
The Outcome Blindness Problem
Here's where things get interesting: we've traditionally put far too much focus on arguing about "how" we should do things. People debate plans, strategies, and tactics endlessly on Twitter and in politics, but they skip the most important question: what outcome are we actually trying to achieve?
Consider the absurdity: what's the point of arguing about which policy we should implement if we haven't first agreed on what we're trying to accomplish? It's like five roommates debating whether to install a single-stage or dual-stage furnace when it's -40°C outside, without first discussing what temperature they want the house to be. Most people just care about being warm; only the technical folks need to worry about BTU ratings and heat exchangers.
This suggests a powerful insight about power distribution. Outcomes are the most universally accessible thing to agree on. Everyone can meaningfully participate in deciding "we want the house to be 21°C" or "we want clean drinking water" or "we want a community swimming pool". These conversations don't require specialized knowledge; they require understanding your own preferences and values.
The technical debates about implementation methods? Let the proven experts handle those. Good furnace installers should choose the right equipment. Urban planners should design the traffic solutions. Software engineers should pick the right algorithms. Regular people should stop arguing about things they are not experts in. But the people affected by these decisions should collectively determine what success looks like.
This is how you actually decentralize power: focus discussion and alignment around the desired outcome, not the plans or execution. (This is one of the core ideas behind Outcome Democracy.)
Applying the Loop
Understanding the state change loop transforms how we think about collective decision-making. It reveals why so many governance systems fail despite good intentions, why technical expertise matters more for some decisions than others, and why outcome-focused coordination might be the key to both democratic participation and effective implementation.
We'll be applying what we've learned about the State Change Loop to classify MetaPoll options using the OSE Class frameworkâhelping MetaPoll creators design decision structures that match the cognitive realities of how humans actually process change, both individually and collectively.