Scoring Your Opportunities

The Sales VP Who Built the Wrong Thing First

A VP of Sales got excited about using AI to generate personalized proposals. It sounded transformative—custom proposals in minutes instead of hours. His team spent three months building the system, integrating it with their CRM, training the model on past winning proposals.

The result: AI struggled with the complexity. Proposals needed heavy editing—sometimes more work than starting from scratch. The quarterly volume wasn’t high enough to justify the investment. The whole initiative quietly disappeared.

Meanwhile, daily email triage—boring, unsexy, something nobody would announce at an all-hands meeting—would have saved his team 10+ hours per week. But email triage didn’t sound impressive. It didn’t feel like transformation. So he never considered it.

This is the prestige trap. We gravitate toward AI applications that sound important rather than ones that work well. Chapter 4 helped you map your decisions. This chapter gives you a scoring system to tell which ones are actually good AI candidates—and which will waste your time despite sounding promising.

The framework is called FFCC—Frequent, Forgiving, Clear, Contained. It’s not complicated, but it’s counterintuitive. It will almost certainly tell you to start somewhere different than your instincts suggest. That’s the point.

Why Your Instincts Will Mislead You

When choosing where to apply AI, most people make three predictable mistakes.

We overweight rarity. That quarterly strategic analysis feels more important than daily email processing. It’s special. It goes to executives. It deserves AI help. But rarity is the enemy of AI learning—you need frequency to iterate and improve.

We underweight volume. Ten minutes saved daily doesn’t feel significant. Two hours saved quarterly feels substantial. But the math tells a different story:

  • 10 minutes/day × 250 workdays ≈ 42 hours saved per year
  • 2 hours/quarter × 4 quarters = 8 hours saved per year

The “trivial” daily savings delivers 5x more value than the “substantial” quarterly savings.

We ignore failure modes. When imagining AI assistance, we picture it working perfectly. We don’t imagine the confident errors, the hallucinated details, the subtle mistakes that require extensive review. Tasks where errors are catchable and consequences are minimal make better starting points than tasks where errors are expensive.

The FFCC framework counteracts these instincts. It forces you to score opportunities on dimensions that actually predict success rather than dimensions that feel important.

Think of it as a filter for your Chapter 4 decision map. You identified where you make decisions. Now you need to know which of those decisions are good AI candidates. The scoring reveals what your intuition hides.

I’ve seen teams waste months on low-scoring tasks because they seemed strategic, while high-scoring tasks sat ignored because they seemed trivial. The framework prevents that mistake.

The FFCC Framework

FFCC stands for Frequent, Forgiving, Clear, Contained. Each dimension captures something important about whether an AI workflow will succeed or struggle.

Frequent

The question: How often does this decision or task occur?

Why it matters: Frequency determines ROI. A workflow you use daily generates roughly 250 times more annual value than one you use once a year. But frequency delivers a second, subtler benefit: more learning cycles. When you use a workflow daily, you iterate daily. You discover what works, refine your prompts, catch failure modes—all faster than if you’re only touching the workflow quarterly.

How to score it:

Score Frequency
5 Multiple times daily
4 Daily
3 Weekly
2 Monthly
1 Quarterly or less

Forgiving

The question: What happens if AI gets it wrong?

Why it matters: All AI makes mistakes. That’s not a bug—it’s the fundamental nature of probabilistic systems. The question isn’t whether errors will occur; it’s whether errors can be caught and corrected before they cause damage.

Forgiving tasks have built-in safety nets. An internal draft that goes wrong gets fixed before anyone outside the team sees it. An email subject line that underperforms gets replaced in the next campaign. Contrast this with a contract clause that’s wrong but goes unnoticed, or medical advice that a patient acts on before anyone reviews it.

How to score it:

Score Error Consequences
5 Errors caught easily, no external impact (internal drafts, personal research)
4 Minor rework required, limited visibility (team documents)
3 Moderate correction needed, some external visibility (client communications with review)
2 Significant consequences if missed (financial reports, legal documents)
1 Severe or irreversible consequences (medical advice, safety-critical systems)

Clear

The question: Are inputs and outputs well-defined?

Why it matters: AI performs best with clear specifications. “Make this better” produces inconsistent, often unusable results. “Summarize this meeting in 3 bullet points highlighting action items, decisions made, and open questions” produces something you can actually use.

Clarity isn’t just about prompting skill—some tasks are inherently clearer than others. A task with an existing template, defined format, and concrete success criteria is clearer than one that requires judgment about what “good” means in each situation.

How to score it:

Score Clarity Level
5 Explicit format, clear criteria, examples available
4 Defined structure, most parameters clear
3 General expectations clear, some ambiguity
2 Significant interpretation required
1 “I’ll know it when I see it”

Contained

The question: Can this task be isolated from other systems and processes?

Why it matters: Dependencies create complexity. If an AI workflow requires integration with five systems, approval from three stakeholders, and coordination with two teams, problems multiply at every junction. Contained tasks can be improved without disrupting everything else. You can iterate faster, fix problems more easily, and prove value before expanding scope.

How to score it:

Score Containment Level
5 Fully self-contained, no dependencies
4 Minimal dependencies, clear handoff points
3 Some integration required, manageable scope
2 Multiple dependencies, coordination needed
1 Deeply embedded in complex workflows

How to Use the Scorecard

The scoring process is straightforward:

  1. List 5-10 opportunities from your Chapter 4 decision map
  2. Score each on all four dimensions (1-5)
  3. Multiply to get the total: Frequent × Forgiving × Clear × Contained
  4. Rank by total score
  5. Validate against business context

The multiplication matters. A task that scores 5/5/5/5 gets a perfect 625. A task that scores 1/5/5/5 gets only 125—still reasonable. But a task that scores 1/1/2/2 gets only 4. The framework heavily penalizes tasks that score low on multiple dimensions.

This is intentional. A task that’s infrequent AND unforgiving AND unclear AND embedded in complex workflows is almost guaranteed to fail as an AI starting point. The multiplication catches these combinations; addition wouldn’t. A 1 in any dimension is a warning. Multiple 1s are a stop sign.
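The five-step process above can be sketched in a few lines of Python. The opportunity names and scores here are illustrative, borrowed from the case studies later in this chapter; you would substitute entries from your own Chapter 4 decision map.

```python
# Sketch of the FFCC scoring process: score each opportunity 1-5 on the
# four dimensions, multiply, and rank. Sample data is illustrative.

def ffcc_score(frequent, forgiving, clear, contained):
    """Multiply the four 1-5 dimension scores into a single total."""
    for s in (frequent, forgiving, clear, contained):
        assert 1 <= s <= 5, "each dimension is scored 1-5"
    return frequent * forgiving * clear * contained

# Opportunities mapped to (Frequent, Forgiving, Clear, Contained) scores.
opportunities = {
    "Daily status update drafts": (5, 5, 5, 5),
    "Meeting note summaries":     (5, 4, 4, 4),
    "Quarterly comp analysis":    (1, 2, 2, 2),
}

# Rank by total score, highest first.
ranked = sorted(
    ((name, ffcc_score(*scores)) for name, scores in opportunities.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, total in ranked:
    print(f"{total:>4}  {name}")  # highest-scoring candidates print first
```

Note how multiplication does the penalizing for you: four 5s yield 625, while one low dimension drags the whole total down far more than simple addition would.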

Interpreting Your Scores

Score Range What It Means
400-625 Excellent candidate—start here
200-399 Good candidate—consider for your second wave
100-199 Proceed with caution—validate thoroughly before investing
Below 100 Avoid for now—revisit when you have more AI experience

The Business Value Check

High FFCC score alone isn’t enough. A task could score 625 but contribute nothing to your goals. After ranking by FFCC, ask three questions:

  • Does this decision actually affect outcomes I care about?
  • Will success here build momentum for larger initiatives?
  • Is this aligned with current priorities?

High FFCC + High Business Value = Best starting point. The scorecard tells you what’s likely to succeed. Business judgment tells you what success is worth.

A task that scores 625 but has zero business impact is a waste of time. A task that has massive business impact but scores 16 sets you up for failure. You need both: high suitability AND meaningful value. The best starting points satisfy both criteria.

What if your highest-scoring task has low business value? Either reconsider whether you’re undervaluing routine efficiency gains (people often do), or look for a task that scores in the 200-400 range with higher impact. The goal is the best combination, not the highest score alone.

The Framework in Action

The HR Director’s Scorecard

Marcus manages HR for a manufacturing company. His instinct said to start with compensation analysis—high visibility, executive attention, strategically important. But he scored his opportunities:

Opportunity                  Freq  Forg  Clear  Cont  Score
Job posting drafts             4     5     5     5     500
Interview debrief summaries    5     4     4     4     320
Policy question responses      5     3     3     4     180
Quarterly comp analysis        1     2     2     2       8

His “strategic” comp analysis scored 8. His “mundane” job postings scored 500.

Marcus started with job postings. Draft time dropped from 45 minutes to 10 minutes per posting. With 100+ postings per year, that was significant time savings. The high frequency—multiple postings per week—meant rapid learning. He refined his prompts through dozens of iterations in the first month alone.

After three months of building capability on high-scoring tasks, Marcus approached comp analysis differently. Instead of trying to automate the whole thing, he used AI for the information-gathering sub-tasks that had clearer inputs and outputs: summarizing market data, identifying outliers, drafting initial findings. The strategic judgment stayed human. The research prep got faster.

The key lesson from Marcus’s experience: the FFCC scorecard didn’t say comp analysis was unimportant. It said comp analysis was a bad starting point. By the time Marcus got to it, he had the skills and credibility to approach it effectively. Starting there would have meant learning on the hardest possible task—and probably failing before proving any value.

The Project Manager’s Discovery

Devon manages a dozen client projects at a marketing agency. The biggest pain point felt like monthly client presentations—3-4 hours each, twelve clients, potentially 40+ hours monthly. AI-generated presentations sounded transformative.

But the scores told a different story:

Opportunity                 Freq  Forg  Clear  Cont  Score
Daily status update drafts    5     5     5     5     625
Meeting note summaries        5     4     4     4     320
Client email drafts           4     3     3     4     144
Monthly presentations         2     2     2     2      16

Presentations scored 16. Status updates scored 625.

Devon started with status updates despite feeling like it was “too easy to bother with.” Result: 10-12 hours saved per week. The concentrated pain of presentations had masked the distributed drain of status updates. The time per presentation felt worse because it was all at once. But the cumulative time on status updates was actually higher—just invisible because it was spread across small chunks.

The daily frequency also meant Devon learned fast. By week three, the prompt had been refined dozens of times. When Devon eventually approached presentations, the skills transferred—and instead of automating entire presentations, Devon automated the high-FFCC sub-tasks within them.

Both Marcus and Devon discovered the same thing: what feels like the biggest problem isn’t necessarily the best place to start. Pain intensity and AI suitability are different dimensions. The FFCC framework measures suitability, not pain—and that difference matters.

Common Objections

“The highest-scoring tasks feel too mundane.”

That’s the point. Mundane, frequent tasks compound into massive time savings. Start there, prove value, build capability. Then tackle the prestigious projects with earned credibility and developed skills. The sequence matters more than the starting point’s impressiveness.

“My most important decisions score low.”

Important and AI-suitable are different qualities. Some decisions score low precisely because they’re complex, rare, or high-stakes—which is why they need human judgment rather than AI automation. The framework working correctly means identifying what shouldn’t be automated, not just what should.

“What if two opportunities have similar scores?”

Go with the one where you have more control over the workflow. Lower dependency means faster iteration. Faster iteration means faster learning. When scores are close, break ties with containment.

“Should I ever start with a low-scoring opportunity?”

Only if there’s a compelling strategic reason—urgent deadline, executive priority, proof of concept for funding. Even then, set expectations that it will be harder and success is less certain. Low scores don’t mean impossible; they mean higher risk and lower likelihood of clean wins.

“My boss wants me to start with something impressive, not mundane.”

Use the scorecard to have that conversation. Show the math: daily status updates at 625 versus quarterly reports at 24. Explain that starting with high-scoring tasks builds the capability that makes impressive tasks possible. Frame mundane wins as the foundation for strategic ones.

“What about tasks that score high but feel too simple?”

Simple is good. Simple means you’ll get wins quickly, learn what works, and build momentum. Complex tasks that score low will consume time, produce ambiguous results, and risk poisoning your organization’s attitude toward AI. Start simple, succeed visibly, then graduate to complex. The order matters.

“Can I improve a task’s score by changing how I do it?”

Sometimes. A low-clarity task might become clearer if you create templates first. A low-containment task might become more contained if you establish better handoff points. But be honest about whether you’re actually changing the task or just inflating scores to justify what you wanted to do anyway. The framework works when you score honestly.

Your Monday Morning Action Item

Take your three starred decisions from Chapter 4’s 10-Decision Sprint.

Score each on FFCC:

  • Frequent: How often? (1-5)
  • Forgiving: What if AI’s wrong? (1-5)
  • Clear: Are inputs/outputs defined? (1-5)
  • Contained: Can it be isolated? (1-5)

Multiply to get total scores.

Your highest-scoring opportunity is your pilot candidate for Chapter 7. But the scoring also tells you what to watch for:

  • Low “Forgiving” score? Build extra review processes.
  • Low “Clear” score? Expect more prompt iteration.
  • Low “Contained” score? Map dependencies before starting.

The FFCC scorecard doesn’t just tell you where to start—it tells you what challenges to anticipate once you do.

Bring your scored opportunities to Chapter 6, where we’ll help you choose the single best starting point from among your top candidates.

The FFCC framework might tell you something different than your instincts suggested. That’s not a bug—it’s the framework doing its job. Trust the scores, not your desire for impressive-sounding projects. Mundane wins first, prestigious wins later. The sequence matters more than the starting point’s glamour.