Forecast Mechanics
Forecasting is not prediction. It is evidence aggregation. Accurate forecasts require separating commit from upside, and upside from fantasy. The Commit Protocol enforces evidence standards that eliminate opinion from the process.
The Prediction Problem
The word "forecast" implies prediction. It suggests looking into the future and divining what will happen.
This framing is wrong. It sets up forecasting as a guessing game where success depends on clairvoyance.
Forecasting is not prediction. It is aggregation.
A forecast aggregates evidence from individual deals to produce a portfolio-level projection. The accuracy of the forecast depends entirely on the quality of that evidence. When evidence is rigorous, the forecast is reliable. When evidence is opinion, the forecast is noise.
The goal of forecast mechanics is not to predict better. It is to gather better evidence and aggregate it systematically.
The Commit Protocol
Most organizations use some version of the Commit/Upside/Best Case framework. The problem is that category assignment is subjective. Reps put deals in Commit because they feel confident. Managers adjust based on their own intuition. The categories become containers for optimism rather than indicators of probability.
Remotir's Commit Protocol replaces subjective assignment with evidence requirements. Each category has specific criteria. A deal qualifies for a category only when evidence exists, not when someone believes it belongs there.
Category Definitions
Commit: Will close this period.
Evidence required:
- Verbal commitment from economic buyer (documented)
- Contract in legal/procurement process
- No outstanding objections or blockers identified
- Close date confirmed by buyer within the period
A deal without all four evidence points does not belong in Commit, regardless of how confident the rep feels.
Upside: Should close this period, pending resolution of identified factors.
Evidence required:
- Champion has confirmed intent to recommend approval
- Decision timeline is within the period (buyer confirmed)
- Budget has been identified (specific amount, specific source)
- Known factors to resolution are documented (e.g., "pending legal review," "awaiting stakeholder meeting")
Upside deals have positive momentum but lack the certainty of Commit. The factors standing between current state and closed deal are known and bounded.
Best Case: Could close this period if conditions align.
Evidence required:
- Opportunity is qualified (PAIN Threshold met)
- Buyer has expressed intent to move forward
- Timeline is plausible within the period
- Factors to resolution are not fully identified or controlled
Best Case deals are real opportunities that could accelerate. They are not fantasies, but they require factors outside current visibility to align.
Excluded: Will not close this period.
Deals that do not meet Best Case criteria should not be in the forecast for this period. They may close eventually, but they are not contributing to this period's projection.
The Evidence Chain
Each category forms a chain with the one below it:
- Commit = Upside + economic buyer verbal + contract in process
- Upside = Best Case + champion committed + budget confirmed + decision timeline set
- Best Case = Qualified + buyer intent + plausible timeline
Movement up the chain requires evidence addition. A deal in Upside cannot move to Commit simply because the rep feels better about it. It moves when economic buyer commitment is documented and contract process begins.
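The chain above can be sketched as a cumulative evidence check: each category requires everything below it plus its own evidence points. A minimal sketch in Python; the field names are illustrative, not a prescribed CRM schema.

```python
from dataclasses import dataclass, fields

# Hypothetical evidence flags for one deal; names are illustrative only.
@dataclass
class Evidence:
    qualified: bool = False            # PAIN Threshold met
    buyer_intent: bool = False         # buyer expressed intent to move forward
    plausible_timeline: bool = False   # close is plausible within the period
    champion_committed: bool = False   # champion confirmed intent to recommend
    timeline_confirmed: bool = False   # buyer-confirmed decision timeline
    budget_identified: bool = False    # specific amount, specific source
    factors_documented: bool = False   # known factors to resolution documented
    eb_verbal_documented: bool = False # economic buyer verbal commit, on record
    contract_in_process: bool = False  # contract in legal/procurement
    no_open_blockers: bool = False     # no outstanding objections or blockers

def categorize(e: Evidence) -> str:
    """Assign the highest category whose full evidence chain is present."""
    best_case = e.qualified and e.buyer_intent and e.plausible_timeline
    upside = (best_case and e.champion_committed and e.timeline_confirmed
              and e.budget_identified and e.factors_documented)
    commit = (upside and e.eb_verbal_documented and e.contract_in_process
              and e.no_open_blockers)
    if commit:
        return "Commit"
    if upside:
        return "Upside"
    if best_case:
        return "Best Case"
    return "Excluded"
```

Note that `categorize` can only move a deal up when a flag flips from `False` to `True` — there is no input for how the rep feels.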
The Weighted Probability Matrix
Category assignment is the qualitative layer. Probability weighting is the quantitative layer.
Each category carries an expected conversion rate based on historical performance:
| Category | Typical Conversion | Range |
|---|---|---|
| Commit | 90-95% | 85-98% |
| Upside | 55-70% | 45-75% |
| Best Case | 20-35% | 15-40% |
These rates should be calibrated to your specific data. Analyze past quarters: what percentage of Commit deals closed? Upside? Best Case? Use actuals, not assumptions.
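That calibration can be computed directly from closed-period history. A sketch, assuming a simple list of (category, closed-in-period) outcomes; real data would come from your CRM.

```python
from collections import Counter

# Illustrative history: (category at period start, did it close in period?)
history = [
    ("Commit", True), ("Commit", True), ("Commit", True), ("Commit", False),
    ("Upside", True), ("Upside", True), ("Upside", False),
    ("Best Case", True), ("Best Case", False), ("Best Case", False),
]

attempts, wins = Counter(), Counter()
for category, closed in history:
    attempts[category] += 1
    wins[category] += closed

# Calibrated conversion rate per category: actuals, not assumptions
rates = {c: wins[c] / attempts[c] for c in attempts}
```

With enough quarters of data, `rates` replaces the "typical" numbers in the table above.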
Building the Forecast
Step 1: Categorize
Every in-period deal is assigned a category based on evidence (not judgment).
Step 2: Weight
Multiply deal value by category probability.
Step 3: Sum
Aggregate weighted values for total forecast.
Example:
| Deal | Value | Category | Probability | Weighted |
|---|---|---|---|---|
| A | $100k | Commit | 92% | $92k |
| B | $80k | Commit | 92% | $74k |
| C | $150k | Upside | 65% | $98k |
| D | $120k | Upside | 65% | $78k |
| E | $200k | Best Case | 30% | $60k |
| F | $90k | Best Case | 30% | $27k |
| Total | $740k | | | $429k |
Forecast: $429k
This forecast is not a guess. It is a calculation based on categorized evidence and calibrated probabilities.
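The three steps reduce to a few lines. A sketch reproducing the example table, rounding each weighted value to the nearest $1k as the table does:

```python
# Example probabilities from the table; calibrate these to your own actuals.
PROB = {"Commit": 0.92, "Upside": 0.65, "Best Case": 0.30}

deals = [  # (deal, value, category) — Step 1: categorize by evidence
    ("A", 100_000, "Commit"),
    ("B",  80_000, "Commit"),
    ("C", 150_000, "Upside"),
    ("D", 120_000, "Upside"),
    ("E", 200_000, "Best Case"),
    ("F",  90_000, "Best Case"),
]

def weight(value: int, category: str) -> int:
    """Step 2: weight value by category probability, rounded to $1k."""
    return int(value * PROB[category] + 500) // 1000 * 1000

# Step 3: sum the weighted values for the total forecast
forecast = sum(weight(v, c) for _, v, c in deals)  # 429,000
```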
The Confidence Interval
A point forecast ($429k) is useful but incomplete. Reality will not be exactly $429k. It will fall within a range.
Calculate the range by summing category floors and ceilings:
Floor (pessimistic): Commit × 85% + Upside × 45% + Best Case × 15%
Ceiling (optimistic): Commit × 98% + Upside × 75% + Best Case × 40%
For the example above:
- Floor: $153k + $121.5k + $43.5k = $318k
- Ceiling: $176.4k + $202.5k + $116k ≈ $495k
Forecast: $429k (range: $318k - $495k)
The range communicates uncertainty. A forecast that says "we will do $429k" implies precision that does not exist. A forecast that says "$318k-$495k with expected value of $429k" reflects the probabilistic nature of the projection.
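The same category totals give the range directly. A sketch using the example's category totals and the floor/ceiling rates from the table above:

```python
# Category totals from the example:
# Commit $180k, Upside $270k, Best Case $290k
totals = {"Commit": 180_000, "Upside": 270_000, "Best Case": 290_000}
floor_rate   = {"Commit": 0.85, "Upside": 0.45, "Best Case": 0.15}
ceiling_rate = {"Commit": 0.98, "Upside": 0.75, "Best Case": 0.40}

floor   = sum(totals[c] * floor_rate[c]   for c in totals)  # ~318,000
ceiling = sum(totals[c] * ceiling_rate[c] for c in totals)  # ~494,900 (≈ $495k)
```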
The Roll-Up Problem
In organizations with multiple layers, forecasts roll up from reps to managers to directors to CRO. At each layer, aggregation compounds error.
The Over-Commitment Problem
Reps tend to over-commit. They want to show confidence. They put deals in Commit that belong in Upside.
If each of 10 reps over-commits by 10%, the rolled-up forecast is 10% high. Systematic bias at the individual level becomes significant error at the aggregate level.
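The arithmetic is worth making explicit: identical per-rep bias passes through the roll-up unchanged; it does not average out. A toy sketch:

```python
# Ten reps, each with a "true" $100k commit, each inflating by 10%
true_commits = [100_000] * 10
stated_commits = [v * 1.10 for v in true_commits]

rollup_true = sum(true_commits)      # 1,000,000
rollup_stated = sum(stated_commits)  # 1,100,000
inflation = rollup_stated / rollup_true - 1  # 0.10: the roll-up is 10% high
```

Random errors partially cancel in aggregation; systematic bias never does.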
The Adjustment Problem
Managers respond to over-commitment by applying "haircuts." They discount rep forecasts by some percentage based on historical accuracy.
This creates new problems:
- Accurate reps are penalized (their forecasts are discounted when they should not be)
- Inaccurate reps are enabled (they learn that over-commitment will be corrected for them)
- The haircut percentage is itself a guess, adding another layer of uncertainty
The Solution: Evidence at the Deal Level
The Commit Protocol solves the roll-up problem by requiring evidence at the deal level, not judgment at the manager level.
A manager reviewing a Commit deal does not ask "Do I believe this will close?" They ask "Is the evidence for Commit present?" The evidence is either documented or it is not. There is no room for optimism inflation.
When every deal is correctly categorized, the roll-up is reliable. The math works because the inputs are clean.
Forecast Cadence
Forecasting is not a monthly event. It is a continuous process with rhythmic checkpoints.
Weekly Forecast Update
Every week, review and update:
- Category assignments for all in-period deals
- New evidence that changes categories
- Deals at risk (stalling, champion change, competitive entry)
- New opportunities that enter the period
The weekly forecast should change. Reality changes. The forecast should track reality, not remain static.
Monthly Deep Review
Monthly, conduct deeper analysis:
- Compare current forecast to beginning-of-period forecast (how much has it moved?)
- Analyze forecast accuracy for recently closed periods
- Identify reps or segments with systematic bias
- Recalibrate probability weights if actuals deviate from expectations
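One hedged way to operationalize the bias check above — the accuracy convention here (1 − |error| / actual) is an assumption, not the only valid definition:

```python
# Illustrative (rep, forecast, actual) records for a closed period
records = [
    ("rep_a", 400_000, 410_000),
    ("rep_b", 500_000, 380_000),
]

def score(forecast: float, actual: float) -> tuple[float, float]:
    """Return (accuracy, signed bias); positive bias = over-forecasting."""
    accuracy = 1 - abs(actual - forecast) / actual
    bias = (forecast - actual) / actual
    return accuracy, bias

for rep, fc, actual in records:
    accuracy, bias = score(fc, actual)
    print(f"{rep}: accuracy {accuracy:.0%}, bias {bias:+.0%}")
```

A persistently positive bias for one rep or segment is the signal to tighten evidence review there, or to recalibrate that segment's probability weights.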
Quarterly Post-Mortem
After each quarter closes:
- What was forecasted vs. actual?
- Which deals moved categories during the quarter?
- What evidence was present or absent for misses?
- What process or criteria changes would improve accuracy?
The post-mortem feeds back into process improvement. Forecasting is a skill that develops through iteration.
The Commitment Conversation
The forecast call should not be a guessing game. It should be a structured review of evidence.
The Right Questions
For Commit deals:
- "Where is the documentation of the economic buyer's verbal commitment?"
- "What is the status of the contract in legal/procurement?"
- "What, if anything, could prevent close within the period?"
For Upside deals:
- "What is the specific decision timeline the buyer has communicated?"
- "What budget has been identified, and where does it come from?"
- "What are the factors to resolution, and when will they resolve?"
For Best Case deals:
- "What evidence supports close within this period versus next?"
- "What would need to happen for this to move to Upside?"
- "Are there factors we have not yet identified?"
The Wrong Questions
- "How do you feel about this deal?"
- "Do you think it will close?"
- "What's your gut say?"
Feelings and gut instincts are not evidence. They should not drive forecast categories.
Case Study: The Forecast Transformation
A Remotir client (enterprise software, $40M ARR, 50-person sales team) had forecast accuracy averaging 71%. Miss direction was random: sometimes high, sometimes low, with no pattern.
The Diagnosis:
We audited forecast calls and found:
- 60% of "Commit" deals lacked documented economic buyer commitment
- 45% of "Upside" deals had no confirmed decision timeline
- Category assignment was based on rep confidence, not evidence
- Managers applied varying haircut percentages (5-25%) based on individual judgment
The categories were labels for optimism levels, not indicators of evidence.
The Implementation:
- Defined Commit Protocol with specific evidence requirements
- Created CRM fields for each evidence criterion (date/doc required)
- Trained reps on evidence gathering and documentation
- Restructured forecast calls around evidence review, not deal narrative
- Calibrated probabilities based on 8 quarters of historical data
The Results (4 quarters post-implementation):
- Forecast accuracy improved from 71% to 92%
- Commit category conversion improved from 78% to 94%
- Forecast variance decreased 60%
- Board confidence in sales projections increased significantly
The insight: The company did not have a forecasting problem. They had an evidence problem. Once evidence became the basis for categorization, the math worked.
Conclusion: Evidence, Not Opinion
The forecast should be an aggregation of evidence, not an aggregation of opinions.
When a sales leader presents a forecast to the CEO, they should be able to say: "These deals are in Commit because they meet the Commit evidence criteria. Here is the documentation. These deals are in Upside because they meet Upside criteria. The probabilities are calibrated to historical conversion rates. The math produces this number."
This is not guessing. This is calculation.
The Commit Protocol eliminates the wishful thinking that corrupts forecasts. It replaces "I feel confident" with "Here is the evidence." It replaces "I think it will close" with "The criteria are met."
Evidence can be verified. Opinion cannot.
Build your forecast on evidence. Verify the evidence. Trust the math.