
Growth Experiments in 2026: A Framework That Prevents Randomness

Most growth work is busywork. A structured experimentation framework using ICE scoring: hypotheses, constraints, measurement, and decision loops that compound.

14 min · January 1, 2026 · Updated January 27, 2026

TL;DR

  • Experiments need hypotheses and success metrics — without them, you can’t decide
  • Use ICE scoring (Impact × Confidence × Ease) to prioritize objectively
  • Run fewer experiments with higher quality — target 80% yielding statistically reliable learnings
  • Log every decision so you don’t repeat mistakes — learning compounds
  • The weekly decision loop: review results → decide (ship/iterate/kill) → record what you learned
  • Establish baseline KPIs before launching any experiment

Why Most Growth Work Is Busywork

Random tactics without structure:

  • Waste resources on low-impact tests
  • Yield inconclusive results
  • Get repeated because nobody remembers what was tried
  • Don’t compound into understanding

The Experiment Paradox

| More Experiments | Better Experiments |
| --- | --- |
| Lots of activity | Focused learning |
| Many inconclusive | Mostly decisive |
| No pattern emerges | Mental model builds |
| Tactics over strategy | Strategy informs tactics |

Run fewer experiments with higher quality.


The Experiment Template

Every experiment needs these elements:

Required Components

| Component | Description | Example |
| --- | --- | --- |
| Hypothesis | “If we change X, Y will improve because Z” | “If we add social proof to the pricing page, conversion will increase because users need validation” |
| Metric | What specifically will change | Pricing page → checkout conversion |
| Segment | Who is being tested | New visitors from paid channels |
| Duration | How long to run | 2 weeks minimum |
| Success threshold | What counts as a win | +10% relative improvement |
| Sample size | Statistical requirements | 1,000 visitors per variant |
| Rollback plan | What if things go wrong | Revert to control immediately |
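The required components can double as a data structure. A minimal sketch (the `Experiment` class and its field names are hypothetical, not from any library): a frozen dataclass makes every component mandatory, so an experiment can’t be queued with a field missing.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Experiment:
    """Hypothetical template object mirroring the required components."""
    hypothesis: str           # "If we change X, Y will improve because Z"
    metric: str               # what specifically will change
    segment: str              # who is being tested
    duration_days: int        # how long to run
    success_threshold: float  # 0.10 = +10% relative improvement
    sample_size: int          # visitors per variant
    rollback_plan: str        # what to do if things go wrong

exp = Experiment(
    hypothesis=("If we add social proof to the pricing page, "
                "conversion will increase because users need validation"),
    metric="Pricing page -> checkout conversion",
    segment="New visitors from paid channels",
    duration_days=14,
    success_threshold=0.10,
    sample_size=1000,
    rollback_plan="Revert to control immediately",
)
```

Omitting any argument raises a `TypeError` at construction time, which is exactly the point.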

Example Experiment Doc

```markdown
## Experiment: Social Proof on Pricing Page

### Hypothesis
If we add customer logos and testimonials to the pricing page,
checkout conversion will increase by 10%+ because enterprise
visitors need social validation before purchasing.

### Design
- Control: Current pricing page
- Variant: Add logos above pricing table + 2 testimonials

### Success Metrics
- Primary: Pricing → Checkout conversion (+10%)
- Secondary: Time on page (monitor, no threshold)
- Guardrail: Bounce rate (must not increase >5%)

### Segment
New visitors from paid enterprise campaigns

### Duration
2 weeks (minimum 1,000 visitors per variant)

### Rollback Trigger
Conversion drops >15% after 3 days

### Owner
Growth Lead

### Start Date
2026-02-01
```

ICE Scoring for Prioritization

ICE is one of the most widely used frameworks for ranking growth experiments.

The Three Dimensions

| Dimension | Question | Scale |
| --- | --- | --- |
| Impact | How much will this move the key metric if successful? | 1-10 |
| Confidence | How sure are you it will work? (Data, research, precedent) | 1-10 |
| Ease | How simple is it to build and launch? | 1-10 |

Calculating ICE Score

Two common approaches:

Average: (Impact + Confidence + Ease) / 3

Multiply: Impact × Confidence × Ease

Multiplication gives more separation between ideas.
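A quick sketch of the difference, using two ideas from the example table below (scores are illustrative):

```python
def ice_average(impact, confidence, ease):
    # Average: stays on the 1-10 scale, so ideas cluster together
    return (impact + confidence + ease) / 3

def ice_multiply(impact, confidence, ease):
    # Multiply: range 1-1000, and one weak dimension drags the score down hard
    return impact * confidence * ease

social_proof = (7, 6, 9)
new_checkout = (9, 4, 3)

# Averaged, the two ideas look close...
round(ice_average(*social_proof), 2)   # 7.33
round(ice_average(*new_checkout), 2)   # 5.33
# ...multiplied, the gap is obvious
ice_multiply(*social_proof)            # 378
ice_multiply(*new_checkout)            # 108
```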

ICE Scoring Example

| Experiment | Impact | Confidence | Ease | ICE Score |
| --- | --- | --- | --- | --- |
| Social proof on pricing | 7 | 6 | 9 | 378 |
| New checkout flow | 9 | 4 | 3 | 108 |
| Email sequence update | 5 | 7 | 8 | 280 |
| Referral program | 8 | 3 | 4 | 96 |

Rank by score and work top-down.
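Ranking the table above is a one-liner. A minimal sketch using the multiplicative variant:

```python
experiments = {
    "Social proof on pricing": (7, 6, 9),   # (impact, confidence, ease)
    "New checkout flow":       (9, 4, 3),
    "Email sequence update":   (5, 7, 8),
    "Referral program":        (8, 3, 4),
}

# Score each idea and sort highest-first: that's the work queue.
ranked = sorted(
    ((i * c * e, name) for name, (i, c, e) in experiments.items()),
    reverse=True,
)
for score, name in ranked:
    print(score, name)
# 378 Social proof on pricing
# 280 Email sequence update
# 108 New checkout flow
# 96 Referral program
```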

When to Adjust Weighting

| Context | Emphasize |
| --- | --- |
| New to testing | Ease (build confidence) |
| Executive scrutiny | Impact (need big wins) |
| Low traffic | Ease + statistical feasibility |
| Scaling phase | Impact (bigger bets) |

The Weekly Decision Loop

Structure prevents drift and enables compounding.

Weekly Cadence

| Day | Activity |
| --- | --- |
| Monday AM | Review last week’s experiment results |
| Monday PM | Decide: ship, iterate, or kill |
| Tuesday | Queue next experiments |
| Wed-Thu | Execute and monitor |
| Friday | Log learnings, prep for Monday |

Decision Framework

| Result | Decision | Action |
| --- | --- | --- |
| Clear win (> threshold) | Ship | Roll out to 100%, document learning |
| Marginal win (< threshold) | Iterate | Improve and re-test, or combine with other wins |
| No effect | Kill | Stop, document why it didn’t work |
| Negative | Kill immediately | Revert, document learning |
| Inconclusive | Extend or kill | Either needs more time or the sample is too small |
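The decision table is mechanical enough to encode. A hedged sketch (the `decide` function and its return strings are illustrative, not a standard API):

```python
def decide(lift, threshold, significant):
    """Map an experiment result to a decision, following the table above.
    lift and threshold are relative (0.10 = +10%)."""
    if not significant:
        return "extend or kill"     # inconclusive: more time or bigger sample
    if lift < 0:
        return "kill immediately"   # negative: revert, document the learning
    if lift >= threshold:
        return "ship"               # clear win: roll out, document
    if lift > 0:
        return "iterate"            # marginal win: improve and re-test
    return "kill"                   # no effect: stop, document why

decide(0.14, 0.10, significant=True)    # 'ship'
decide(0.04, 0.10, significant=True)    # 'iterate'
decide(-0.05, 0.10, significant=True)   # 'kill immediately'
decide(0.14, 0.10, significant=False)   # 'extend or kill'
```

Encoding the rule matters less than the discipline it enforces: the thresholds are fixed before the result comes in.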

The Decision Log

Every experiment gets a closing entry:

```markdown
## Experiment: Social Proof on Pricing Page
Status: SHIPPED

### Result
+14% conversion (significant at p<0.05)

### Learning
Enterprise visitors respond strongly to peer validation.
Logo recognition matters more than testimonial length.

### Follow-up
Test adding case study links for even higher lift.

### Closed
2026-02-15 by Growth Lead
```

Before You Experiment: Baseline KPIs

You can’t measure improvement without knowing your starting point.

Essential Baselines

| KPI | Why | How Often |
| --- | --- | --- |
| Funnel conversion rates | Know each step’s current performance | Weekly |
| Activation rate | New user success baseline | Weekly |
| Retention curves | Cohort performance | Monthly |
| CAC by channel | Acquisition efficiency | Monthly |
| Revenue per visitor | Overall efficiency | Weekly |

Baseline Hygiene

| Practice | Why |
| --- | --- |
| Segment by source | Channels behave differently |
| Track trends, not just snapshots | Seasonality matters |
| Document methodology | Reproducible measurement |
| Flag anomalies | Know when something’s off |

A/B Testing Best Practices

Statistical Requirements

| Element | Guideline |
| --- | --- |
| Sample size | Calculate before starting (power analysis) |
| Duration | Minimum 1-2 weeks, capture weekly cycles |
| Significance | p < 0.05 for most decisions |
| One primary metric | Multiple metrics = multiple-comparisons problem |
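The sample-size calculation is the standard normal-approximation formula for comparing two proportions; a sketch, assuming a two-sided test, 80% power, and an even traffic split (the function name and defaults are illustrative):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base, rel_lift, alpha=0.05, power=0.80):
    """Approximate per-variant sample size for a two-proportion test
    (normal approximation, two-sided alpha, 50/50 split)."""
    p1 = p_base
    p2 = p_base * (1 + rel_lift)            # conversion if the lift materializes
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a +10% relative lift on a 5% baseline conversion
# needs roughly 31,000 visitors per variant:
sample_size_per_variant(0.05, 0.10)
```

Note how quickly the requirement grows for small effects: halving the detectable lift roughly quadruples the required sample.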

Common Mistakes

| Mistake | Problem | Fix |
| --- | --- | --- |
| Stopping early | False positives | Pre-commit to duration |
| Too many variants | Diluted sample | Max 2-3 variants |
| Changing mid-test | Invalidates results | Lock test after start |
| No guardrail metrics | Miss negative effects | Always monitor key metrics |

When A/B Testing Doesn’t Work

| Situation | Alternative |
| --- | --- |
| Low traffic | Sequential testing |
| Complex changes | Before/after with caution |
| UX redesigns | Qualitative + quantitative |
| Pricing | Survey + cohort analysis |

Prioritization Frameworks Beyond ICE

PIE Framework

| Factor | Question |
| --- | --- |
| Potential | How much improvement is possible? |
| Importance | How valuable is improving this page? |
| Ease | How easy is it to implement? |

RICE Framework

| Factor | Description | Unit |
| --- | --- | --- |
| Reach | How many users affected | Number |
| Impact | Effect on metric | Scale |
| Confidence | Certainty of success | % |
| Effort | Resources required | Person-weeks |

Score = (Reach × Impact × Confidence) / Effort
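In code (the example numbers are hypothetical):

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach × Impact × Confidence) / Effort.
    reach in users per period, confidence as a fraction (0.8 = 80%),
    effort in person-weeks."""
    return (reach * impact * confidence) / effort

# e.g. 2,000 users reached, impact 2, 80% confidence, 4 person-weeks:
rice_score(2000, 2, 0.8, 4)  # 800.0
```

Dividing by effort is what separates RICE from ICE: an expensive idea has to earn its cost, not just its upside.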

When to Use Which

| Framework | Best For |
| --- | --- |
| ICE | Quick prioritization, early stage |
| PIE | Page-level optimization |
| RICE | Feature-level decisions |

Building an Experiment Backlog

Idea Sources

| Source | How to Capture |
| --- | --- |
| User interviews | Pain points → experiment ideas |
| Support tickets | Common issues → fixes |
| Analytics | Drop-off points → optimizations |
| Competitor analysis | What they do → what to test |
| Team brainstorms | Weekly idea collection |

Backlog Structure

```markdown
## Growth Experiment Backlog

### Scored (Ready to Run)
1. Social proof on pricing (ICE: 378)
2. Email sequence update (ICE: 280)
3. Homepage hero test (ICE: 245)

### Needs Scoring
- Exit-intent popup
- Chatbot on docs
- Annual pricing nudge

### Parked
- Mobile app push notifications (needs mobile first)
- Enterprise landing page (needs content)
```

Backlog Hygiene

| Frequency | Action |
| --- | --- |
| Weekly | Add new ideas |
| Bi-weekly | Score unscored ideas |
| Monthly | Prune stale ideas |
| Quarterly | Theme review |

Experiment Velocity

Target Metrics

| Metric | Target | Why |
| --- | --- | --- |
| Experiments/month | 4-8 | Sustained learning |
| Conclusive rate | 80%+ | Quality over quantity |
| Win rate | 30-40% | Some wins expected |
| Time to decision | 2-4 weeks | Avoid dragging |
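Both rates fall straight out of the decision log. A sketch with a hypothetical month of entries (the decision strings mirror the decision framework above):

```python
# Hypothetical decision log: one entry per closed experiment this month.
log = ["ship", "kill", "iterate", "ship", "extend", "kill", "iterate", "kill"]

decisive = [d for d in log if d != "extend"]      # conclusive outcomes
conclusive_rate = len(decisive) / len(log)        # target: 0.80+
win_rate = log.count("ship") / len(decisive)      # target: 0.30-0.40

conclusive_rate      # 0.875
round(win_rate, 2)   # 0.29
```

A win rate near 100% usually means the bets are too safe; near 0%, the hypotheses are too speculative.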

Velocity vs. Quality

| Low Velocity | Right Balance | Too Fast |
| --- | --- | --- |
| 1 test/month | 1-2 tests/week | 1 test/day |
| No learning | Steady learning | Low quality |
| Missed opportunities | Compounding gains | Inconclusive results |

Documenting Learnings

Why Documentation Matters

| Without Docs | With Docs |
| --- | --- |
| Repeat same tests | Build on history |
| New hires start over | Onboard with context |
| Random tactics | Pattern recognition |
| No institutional memory | Compounding knowledge |

Learning Repository Structure

```
/experiments
  /2026-Q1
    /social-proof-pricing.md
    /email-sequence-v2.md
    /checkout-simplification.md
  /2026-Q2
    /...
  /meta
    /what-works.md  (patterns that win)
    /what-fails.md  (patterns that lose)
    /framework.md   (how we experiment)
```

Pattern Recognition

After 20+ experiments, look for:

  • What consistently works in your product?
  • What never moves the needle?
  • Which segments respond to which tactics?
  • What do baseline changes indicate?

Implementation Checklist

Setup:

  • Define baseline KPIs
  • Choose testing tool
  • Create experiment template
  • Set up decision log
  • Establish weekly cadence

For each experiment:

  • Write hypothesis
  • Define success threshold
  • Calculate sample size
  • Set duration
  • Identify guardrail metrics
  • Document rollback plan

Weekly:

  • Review completed experiments
  • Decide: ship/iterate/kill
  • Document learnings
  • Score new ideas
  • Queue next experiments

Monthly:

  • Review experiment velocity
  • Analyze win rate
  • Identify patterns
  • Update baselines

FAQ

What’s the biggest experiment failure?

No clear success threshold. If you don’t define “win” before starting, you’ll rationalize any result. Commit to the threshold upfront.

How do I get more experiment ideas?

| Source | Method |
| --- | --- |
| Users | “What almost stopped you from buying?” |
| Analytics | Where do people drop off? |
| Support | What do people complain about? |
| Competitors | What do they do that we don’t? |
| Team | Weekly brainstorm sessions |

What if my traffic is too low?

| Option | Trade-off |
| --- | --- |
| Longer test duration | Slower learning |
| Bigger effect sizes only | Miss small wins |
| Sequential testing | More complex analysis |
| Qualitative research | Less statistical rigor |
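The first trade-off is easy to quantify. A sketch (the function and its even-split assumption are illustrative) that turns a required sample size into a test duration:

```python
import math

def weeks_needed(n_per_variant, weekly_visitors, variants=2):
    """Test duration when traffic, not patience, is the constraint.
    Assumes eligible traffic splits evenly across variants."""
    return math.ceil(n_per_variant * variants / weekly_visitors)

weeks_needed(1000, 400)   # 5 weeks at 400 eligible visitors/week
weeks_needed(1000, 4000)  # 1 week with ten times the traffic
```

If the answer comes back in months, pick a different option from the table: chase bigger effects or switch to qualitative research.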

Should I test everything?

No. Some decisions don’t need A/B tests:

  • Obvious bugs (just fix them)
  • Legal requirements (no choice)
  • Very small changes (not worth the setup)
  • Strategic bets (test after shipping)

How do I handle multiple tests at once?

| Scenario | Approach |
| --- | --- |
| Tests on different pages | Run in parallel |
| Tests on same page | Run sequentially |
| Overlapping audience | Use exclusion groups |
