Choosing Where to Start

The Two-Week Decision That Cost Six Weeks

A marketing manager completed the FFCC scoring exercise and found herself with three opportunities scoring above 400. All three were legitimate starting points. The scores were close enough that the framework alone couldn’t pick a winner.

So she analyzed more. She reconsidered her scores. She asked colleagues for input. She researched each opportunity deeper. She wanted to make sure she chose the “right” one.

Six weeks later, she finally committed to an option—the same one that had scored highest from the beginning.

Meanwhile, a colleague who’d just picked his top scorer and started was already seeing results. He’d refined his workflow through dozens of iterations. He’d saved measurable hours. He’d moved on to his second opportunity while she was still deciding on her first.

Analysis paralysis is real. At some point, choosing matters more than choosing perfectly. This chapter helps you make that final call—not through more analysis, but through commitment criteria that cut through hesitation.

You’ve done the work. You mapped your decisions in Chapter 4. You scored them with FFCC in Chapter 5. The hard thinking is done. What remains is committing—and that’s harder than it sounds. Choosing feels risky because it means closing options. But not choosing is its own choice, and it’s almost always worse.

When the Scorecard Isn’t Enough

The FFCC framework does most of the work. Most of the time, it produces a clear winner. But sometimes you end up with multiple opportunities scoring similarly, and the scorecard alone can’t decide.

That’s when you need tiebreakers—factors that matter but aren’t captured in the four FFCC dimensions.

Tiebreaker 1: Energy Alignment

Which opportunity are you most curious about?

Motivation matters more than you might think. You’ll iterate more with work that interests you. You’ll persist through the inevitable early friction. You’ll pay attention to what’s working and what isn’t, rather than just going through the motions.

Between two equal FFCC scores, pick the one that feels more interesting. The one you’re slightly more curious to try. The one you’d rather spend your learning cycles on.

This isn’t about passion—it’s about engagement. A slightly lower-scoring opportunity that holds your attention will outperform a higher-scoring one that bores you.

When two consultants I worked with had similarly scored opportunities, one chose based on pure FFCC scores and the other based on curiosity. Three months later, the curiosity-driven choice had evolved into a sophisticated workflow. The “optimal” choice had been abandoned after two weeks of half-hearted effort. Energy compounds over time.

Tiebreaker 2: Quick Win Potential

Which opportunity can show results in two weeks or less?

Early wins build momentum. They prove to you (and anyone watching) that AI actually works in your context. They generate the confidence and credibility that sustains longer-term efforts.

Some opportunities show results immediately—you save time on the first instance. Others require setup, learning curves, or volume before results become visible.

When scores are close, lean toward quick visibility. A small win this week beats a larger theoretical win next month.

Think about it this way: quick wins generate data. You learn what works. You discover unexpected obstacles. You refine your approach. All of that compounds. A slow-building opportunity might theoretically produce more value, but you won’t know if it’s actually working until much later—and by then you’ve invested significant time.

Tiebreaker 3: Resource Accessibility

Which opportunity can you start with what you already have?

If one option requires new software, multiple approvals, or extensive training while another can start with tools already on your laptop, start with what’s accessible.

The hidden cost of “better” tools is delay. Every day spent waiting for access is a day not learning. Simple tools you have now beat sophisticated tools you’ll get eventually.

The “One Monday Morning” Test

Ask yourself: Can I start this workflow on Monday morning with tools I already have access to?

If yes, there’s no barrier except your own hesitation. Start.

If no, list what needs to happen first. Be specific—is it a tool you need? Approval? Data access? Training? Then either address those prerequisites this week, or pick a different opportunity that passes the Monday morning test.

The 80/20 Decision Rule

Here’s a simple rule for when scores are close: if your top opportunity leads your second-place option by less than 20%, the difference is within the noise of your own scoring. Just pick the top one and move on.

The marginal benefit of choosing “perfectly” is almost certainly less than the cost of delayed action.

Examples:

  • Top scorer: 480, Second: 400 → 480 is 120% of 400. Clear choice—pick 480.
  • Top scorer: 350, Second: 320 → 350 is 109% of 320. Close enough—use tiebreakers.
  • Top scorer: 400, Second: 380 → 400 is 105% of 380. Too close to matter—either works.

When scores are within 20% of each other, the framework has done its job by eliminating poor options. Any of your top scorers will work. The choosing phase should end here.
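If it helps to see the arithmetic spelled out, the rule fits in a few lines of Python. This is purely illustrative: the `choose` function and its 20% margin are one rendering of the examples above, not a tool this book provides.

```python
def choose(top_score: float, second_score: float, margin: float = 0.20) -> str:
    """Apply the 80/20 decision rule: a lead of 20% or more over the
    runner-up makes the top scorer a clear winner; anything closer means
    the choice barely matters."""
    lead = (top_score - second_score) / second_score
    if lead >= margin:
        return "clear winner: pick the top scorer"
    return "too close to matter: pick either"

print(choose(480, 400))  # lead = 20%, a clear winner
print(choose(350, 320))  # lead is about 9%, too close to matter
print(choose(400, 380))  # lead = 5%, too close to matter
```

The point of writing it down this way is that the decision takes seconds, not weeks: two numbers in, one instruction out.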

The Commitment Contract

Once you choose, write it down. Literally. Studies on goal-setting consistently show that written commitments outperform mental intentions.

Your commitment contract needs four elements:

1. The specific task: “My first AI workflow is: [exactly what you’ll do]”

2. The duration: “I will try this for: [2-4 weeks minimum]”

3. The success criteria: “Success looks like: [specific, measurable outcome]”

4. The evaluation date: “I will evaluate on: [specific date]”

Example commitment contract:

“My first AI workflow is: drafting meeting summary notes from client call transcripts. I will try this for 3 weeks. Success looks like: summary generation in under 5 minutes with 90%+ accuracy on key points. I will evaluate on March 14.”

Put this somewhere you’ll see it. Tell someone if accountability helps. The act of writing creates commitment that vague intentions don’t.

Why does writing matter? Because vague intentions are easy to abandon. “I should try AI for something” becomes “I’ll get to that eventually” becomes “I never really committed to that anyway.” A written commitment with specific dates and metrics creates a structure that resists drift.

The commitment contract also helps you resist the temptation to constantly reconsider. When you’ve written “I’m evaluating on March 14,” you’re less likely to abandon ship on March 8 because you had one frustrating session. The contract creates space for the workflow to actually develop.

Context Matters: Solo vs. Team

How you choose depends partly on whether you’re implementing alone or with others.

Individual Contributors

You’re the only stakeholder. This simplifies everything:

  • No permission needed (within your role boundaries)
  • No coordination required
  • You can change course quickly if something isn’t working
  • Failure affects only you

For individual contributors, the choosing phase should be as short as possible. Run the framework, apply tiebreakers if needed, make the call. The learning happens in doing, not deciding.

Department Heads

You have additional considerations:

  • Consider piloting with one team member before rolling out wider
  • Choose a task visible enough to demonstrate value but contained enough to limit risk
  • Plan for knowledge sharing once the workflow proves successful
  • Keep initial scope small: one person, one task, one workflow

For department heads, the first win matters more than the best win. Build credibility before building scope.

Setting Up for Success

Before you start, define your pilot parameters:

Duration: 2-4 weeks minimum. Anything shorter doesn’t give enough learning cycles. You need time to iterate, encounter edge cases, and stabilize the workflow.

Volume: At least 10-20 instances of the task. You need pattern recognition—what works, what fails, what needs adjustment. A handful of uses won’t give you that.

Metrics: What will you measure? Time saved is the obvious one, but also consider: quality of output, effort required for review, error rate, your own energy and satisfaction.

Exit Criteria

Decide in advance when to continue, pivot, or stop:

Continue: Success metrics met, energy still high, clear path to expanding.

Pivot: Some value but not enough—try a different opportunity from your scored list.

Stop: Clear failure, overwhelming obstacles, or better opportunities have emerged.

Having exit criteria prevents both premature abandonment and sunk-cost persistence. Most people either give up too quickly (one bad result and they’re done) or persist too long (sunk cost keeps them grinding on something that clearly isn’t working). Criteria defined in advance protect you from both failure modes: you know when to keep going, you know when to stop, and you don’t have to make that judgment in the moment when emotions are running high.

Two Paths to a First Win

The Consultant’s Quick Choice

Nadia, an independent consultant, scored four opportunities. Meeting summaries scored 420; email drafts scored 400. She spent two weeks hesitating, reconsidering, looking for certainty.

Then she recognized the pattern: perfectionism disguised as thoroughness. She applied the 80/20 rule: 420 vs. 400 was only a 5% difference. Either would work. She applied the energy tiebreaker: meeting summaries felt more interesting—she was curious whether AI could capture nuanced client conversations.

She wrote her commitment contract and started the next Monday.

By day five, she’d saved 4 hours across three meetings. By week two, she’d refined her prompts and expanded the workflow to all client meetings.

Her reflection: “I spent two weeks deciding and two weeks implementing. I should have spent zero weeks deciding and four weeks implementing.”

The pattern Nadia discovered is common among high performers: the perfectionism that helps with quality work becomes an obstacle when choosing. The same instinct that makes you careful about output makes you overthink input decisions. Recognizing this pattern is the first step to breaking it.

By week three, Nadia had not only mastered meeting summaries but had already started applying similar techniques to email drafts—her second-highest scorer. The skills transferred. The momentum carried. Starting fast meant progressing fast.

The Team Lead’s Calculated Start

James, an IT support manager, faced a different challenge. His organization was skeptical of AI after a previous initiative had failed. Three opportunities scored above 400, all close enough to justify starting.

He applied the accessibility tiebreaker with a political lens: which success would position him for future opportunities?

Client-facing status updates scored slightly higher but required compliance approval and carried reputational risk. Internal ticket summaries could be piloted quietly, measured objectively, and documented as evidence for future proposals.

He chose internal summaries. After four weeks of documented success—140 tickets, 12 hours saved, zero quality issues—his director invited him to pilot client communications. The internal win created permission for the external opportunity.

His reflection: “I wanted to go straight to the impressive stuff. But starting smaller built the credibility that made bigger wins possible.”

The contrast between Nadia and James illustrates context-dependent choosing. Nadia could move fast because she was the only stakeholder. James needed to move strategically because organizational trust was at stake. Both made good choices—but “good” looked different in each context.

What they shared: both stopped analyzing and started doing. Both committed to specific timelines. Both learned more in two weeks of implementation than they would have in two more weeks of deliberation.

Common Objections

“What if I choose wrong?”

You’ll learn faster by starting than by analyzing. Even a “wrong” choice teaches you about AI workflows—what prompts work, what review processes help, how to iterate. Worst case, you pivot after two weeks with useful experience. Paralysis teaches nothing.

“I have constraints that limit my choices.”

Work within them. If you can’t access certain data or tools, score those opportunities lower and move on. Constraints are real—work around them rather than wishing them away.

“My top opportunity requires setup I don’t have time for.”

Pick your second-highest scorer that requires no setup. Getting started matters more than starting perfectly. You can always return to the higher-scoring option once you’ve built momentum and capability.

“What if success creates expectations I can’t meet?”

Define scope explicitly from the beginning. “I’m piloting AI for meeting summaries” doesn’t promise organization-wide transformation. Start small, communicate clearly, expand only when ready.

“Everyone on my team has different opinions about where to start.”

That’s actually a sign you have multiple good options. Use the FFCC scores as a tiebreaker. If opinions still differ, pick the option with the highest score and clearest ownership. One person pilots, learns, then shares. Starting anywhere beats debating everywhere.

“I’m worried about failing publicly.”

Start with something low-visibility. Internal tasks, personal productivity, behind-the-scenes work. Build a track record before going visible. Most failures happen in private and teach valuable lessons. Most successes can be made public later.

“The opportunity I want to try doesn’t score highest.”

If it’s close (within 20%), energy matters more than pure score. If it’s not close (your preferred option scores significantly lower), ask yourself why you’re drawn to it. Sometimes it’s legitimate insight the framework missed. Sometimes it’s the prestige trap from Chapter 5 in disguise. Be honest about which.

Your Monday Morning Action Item

Make your choice now. Not tomorrow, not after one more analysis—now.

Review your FFCC scores from Chapter 5. If you have a clear winner, that’s your starting point. If scores are close, apply the tiebreakers:

  • Energy: Which excites you most?
  • Quick win: Which shows results fastest?
  • Accessibility: Which can you start Monday?

Write your commitment contract:

  • “My first AI workflow is: ________________”
  • “I’m starting on: [date within 7 days]”
  • “Success looks like: ________________”
  • “I’ll evaluate results on: [2-4 weeks from start]”

Put this somewhere visible. Tell someone if accountability helps.

Part 3 of this book will show you how to build your first workflow: the mechanics of turning a decision into an AI-assisted process. But you can’t build until you’ve chosen. The audit phase is complete. Part 2 has given you the tools to see your work clearly, score your opportunities systematically, and commit to a starting point confidently. You know your decision landscape. You’ve scored your opportunities. You’ve made your choice.

Now it’s time to build.