The Signal/Noise Framework: How to Filter What Actually Matters in Innovation Research
Not all innovation research is created equal. This framework gives you a fast, repeatable system for deciding what to act on and what to file under 'interesting but useless.'
Why Most People Read Research Wrong
When a new study drops claiming "psychological safety increases team performance by 40%," most managers do one of two things:
- Implement it immediately — reformat their 1:1s, put "psychological safety" on the agenda, buy the book.
- Dismiss it — "That's just one study. Doesn't apply to us."
Both responses are wrong. Both miss the point. And both waste the one thing you don't have: decision-making bandwidth.
The Signal/Noise Framework is our operating system for reading organizational research. It tells you, in under ten minutes per study, whether you should act, monitor, or move on.
The Framework
Every piece of research gets filtered through four questions. In order.
Question 1: Effect Size or Effect Nothing?
Before you read the conclusion, find the effect size. Not the p-value, which only tells you whether the effect is statistically distinguishable from zero. The effect size tells you whether it's meaningful.
A common benchmark:
- Small effect (d < 0.2): Real, but unlikely to move the needle in a complex organizational system.
- Medium effect (d ≈ 0.2–0.8): Worth your attention. This can make a difference.
- Large effect (d > 0.8): High-signal. This is probably worth acting on.
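If you want to run the numbers yourself, the benchmark above can be sketched in a few lines. This is an illustrative sketch, not part of the framework itself; it computes Cohen's d (the standardized mean difference that most "d" values in studies report) and applies the article's coarse cutoffs. Function names are made up for this example.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    # Pool the two sample variances, weighted by degrees of freedom
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def classify_effect(d):
    """Map |d| onto the article's coarse small/medium/large benchmark."""
    d = abs(d)
    if d < 0.2:
        return "small"
    if d <= 0.8:
        return "medium"
    return "large"
```

A d of 0.5 (a "medium" effect) means the average member of one group sits half a standard deviation away from the other group's mean, which is why a headline percentage alone tells you very little.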
Most research that goes viral has small effect sizes dressed up in exciting headlines. Filter at this step and you eliminate 60% of the noise.
Question 2: Lab or Field?
Laboratory studies of creativity, collaboration, and leadership are useful for understanding mechanisms. They are not reliable guides to what happens in organizations.
Organizations have:
- Pre-existing hierarchies and power dynamics
- Institutional memory and history
- Competing incentives
- Implementation friction
If a finding comes only from lab studies, treat it as a hypothesis to test, not a practice to adopt. If it has field validation — real organizations, real contexts, over real time — the signal goes up significantly.
Question 3: Who Benefits From This Being True?
This is the uncomfortable question that most practitioners skip.
Who funded the study? Who published it? Who profits from this becoming consensus? Researchers have career incentives. Consultants have business incentives. Publishers have virality incentives.
This doesn't mean the finding is wrong. It means you should weigh it accordingly. A study funded by a software company finding that software improves collaboration deserves more scrutiny than an independent replication of the same finding.
Question 4: What's the Mechanism?
Correlation doesn't change behavior. Mechanisms do.
If you understand why a finding holds — what's actually happening in the system — you can:
- Predict when it applies and when it doesn't
- Adapt it to your specific context
- Spot when you're violating the conditions that make it work
If a study can't answer "why does this work?", the actionable value drops sharply.
Using the Framework
Here's how this works in practice. Let's run a famous claim through it:
Claim: "Open office layouts increase collaboration."
Effect size: Multiple studies show small or negative effects on collaboration (despite positive effects on noise). 🔴
Lab or field? Most data is field data — but often from self-report surveys, which inflate perceived collaboration. 🟡
Who benefits? Real estate developers, cost-cutting executives, and furniture companies all benefit from this being true. 🔴
Mechanism: The proposed mechanism (visibility → interaction → collaboration) ignores the countervailing mechanism (overstimulation → headphones → isolation). 🔴
Verdict: Don't act on this. The research pattern is weak, the interests are conflicted, and the mechanism story has a known flaw. This is noise.
The Signal/Noise Score
You don't need to formalize this, but if your team wants a simple scoring system:
| Question | Signal | Noise |
|---|---|---|
| Effect size | Medium or large | Small or unreported |
| Lab or field? | Field-validated | Lab only |
| Who benefits? | Neutral parties | High conflict of interest |
| Mechanism? | Clear and tested | Absent or vague |
- 3–4 signal answers: Act. Design an experiment or direct implementation.
- 2 signal answers: Monitor. Track but don't invest heavily yet.
- 0–1 signal answers: Move on. File under "interesting" and spend your attention elsewhere.
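For teams that do want to formalize it, the whole scoring rule fits in a few lines. This is a hypothetical sketch; the question keys are invented for illustration, and each answer is simply True for "signal" or False for "noise."

```python
# The four filter questions, answered True (signal) or False (noise)
QUESTIONS = ("effect_size", "field_validated", "neutral_funding", "clear_mechanism")

def score_study(answers):
    """Count signal answers and return (count, verdict) per the framework."""
    signals = sum(bool(answers[q]) for q in QUESTIONS)
    if signals >= 3:
        return signals, "Act"
    if signals == 2:
        return signals, "Monitor"
    return signals, "Move on"
```

Running the open-office claim from above through it (one signal answer, for field data) returns "Move on," matching the worked verdict.

```python
open_office = {
    "effect_size": False,       # small or negative effects
    "field_validated": True,    # mostly field data, though self-reported
    "neutral_funding": False,   # interested parties benefit
    "clear_mechanism": False,   # known countervailing mechanism
}
score_study(open_office)  # returns (1, "Move on")
```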
The Uncomfortable Implication
Most of what passes as "evidence-based management" doesn't survive this filter.
That doesn't mean ignore research. It means raise your standard for what you act on. The gap between "the research is interesting" and "I should change how my organization works" is much wider than most practitioners acknowledge.
Signal/Noise closes that gap.
This framework was developed from research literacy principles in Pfeffer & Sutton's "Hard Facts, Dangerous Half-Truths and Total Nonsense" (2006) and Kahneman's work on reference class forecasting. We've adapted it for the practitioner context.