Designing Inputs

The Prompt Obsession Problem

A customer success manager spent weeks perfecting her prompt language. She tried different phrasings, added detailed instructions, experimented with formatting. She studied prompt engineering guides. She A/B tested variations. Results stayed inconsistent.

Then she tried something different: instead of changing the prompt, she changed what information she included. Same basic instruction—“assess risk level for this account”—but with customer history, stakeholder changes, usage trends, and CSM notes attached.

Output quality jumped immediately.

She hadn’t become better at prompting. She’d become better at feeding the AI the information it actually needed to do the job.

This pattern repeats constantly. A copywriter struggles with AI drafts that miss the mark. He refines his prompts endlessly—adding persona descriptions, tone guidelines, detailed instructions. Output improves marginally. Then he tries attaching examples of approved content, the client’s messaging guide, and a detailed reader persona. Suddenly the AI produces first drafts worth editing.

The copywriter didn’t need a fancier prompt. He needed to share what he already knew.

Everyone obsesses over prompt engineering. “What’s the magic phrase?” “Should I say ‘you are an expert’ or ‘act as an expert’?” “Do I need bullet points or paragraphs?”

These questions matter—to a point. But research consistently shows that context quality is the primary driver of output quality. Once you’ve moved past basic prompt structure, additional prompt refinement yields diminishing returns. Better context continues to improve results.

A mediocre prompt with clean, structured, relevant context outperforms a perfect prompt with sparse data. Every time.

You built your first workflow in Chapter 7. You adapted it for your role in Chapter 8. Now it’s time to examine the component that determines whether those workflows actually produce useful output: the input.

Why Inputs Matter More Than Prompts

Most people approach AI like a vending machine. Put in the right code, get out the right snack. Find the magic prompt, get perfect results.

This mental model is wrong.

AI doesn’t follow instructions the way software executes code. It generates responses based on patterns in its training data, shaped by everything you provide. What you provide—the input—determines what patterns it can apply. Without relevant context, even sophisticated instructions produce generic output.

The Prompt-Obsession Trap

The internet is full of “prompt engineering” content. Magic prompts. Secret techniques. The one weird trick that will unlock AI’s potential. This creates a predictable trap:

  • Spend hours wordsmithing prompts
  • Search for perfect phrasing
  • Get marginally better results
  • Conclude AI isn’t ready for your use case

A study of 1,500 academic papers on prompt engineering found a consistent pattern: basic prompt improvements can lift accuracy from 40% to 75% quickly. But beyond that initial jump, gains come not from wordsmithing, but from better context, cleaner data, and smarter workflow design.

Adding your seventh example to a prompt rarely helps. Adding relevant background information almost always does.

The Input Advantage

AI can only work with what you provide. It doesn’t have access to:

  • Your company’s context
  • Your customer’s history
  • Your team’s preferences
  • Your industry’s norms
  • Your project’s constraints

When AI output feels generic, it’s usually because you gave it generic information. When it misses important nuances, it’s usually because those nuances weren’t in the input.

The customer success manager’s original input was minimal: account name, MRR, health score. No wonder the AI produced generic risk assessments. A human analyst would struggle with the same sparse data.

Her redesigned input included stakeholder status (champion had left), usage trends (40% decline), renewal timing (45 days away), and historical patterns (73% churn rate for similar situations). The AI suddenly had enough context to produce actionable analysis.

Better inputs close the knowledge gap between what AI knows generally and what it needs specifically.

The Inverse Leverage

Most people spend their AI optimization time like this:

  • 80% on prompt refinement
  • 20% on input quality

Results come from the opposite allocation:

  • 80% of output quality comes from input quality
  • 20% comes from prompt refinement

LangChain’s 2025 State of Agent Engineering report found that 32% of organizations cite output quality as their top AI barrier—and most of those quality issues trace back to poor context management, not LLM limitations.

The single biggest predictor of AI success isn’t model selection or prompt sophistication. It’s context engineering: what information you provide, how you structure it, and what you leave out.

The Input Inventory

Before designing better inputs, audit what you have. Most people vastly underestimate the information available to them.

Mapping What You Have

Information comes in four categories:

Structured data: The organized information in your systems. CRM records, project tools, databases, spreadsheets. This data is already formatted and (usually) accurate.

Unstructured data: Raw content that contains valuable context. Emails, documents, chat logs, meeting notes, customer feedback. Less organized but often more revealing.

Contextual knowledge: What you know that isn’t written anywhere. Institutional knowledge, past decisions, stakeholder preferences, unspoken norms. This is the expertise in your head that new employees take months to absorb.

Real-time information: Current state that changes frequently. Active issues, recent developments, pending decisions. This context has a short shelf life.

Take ten minutes and list what you have in each category for the task you’re automating. You’ll likely find more sources than you expected.
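The four-category inventory above can be captured as a simple data structure. A minimal sketch in Python, where the task and every entry are illustrative examples, not items from this chapter:

```python
# Inventory of available inputs for one task, grouped into the four
# categories from this section. All entries are illustrative.
inventory = {
    "structured_data": ["CRM account records", "support ticket metrics"],
    "unstructured_data": ["email threads with the customer", "QBR meeting notes"],
    "contextual_knowledge": ["champion recently departed", "pricing is a sore point"],
    "real_time_information": [],  # nothing captured yet -- a gap
}

# Empty categories are the first thing to flag in a gap analysis.
gaps = [category for category, sources in inventory.items() if not sources]
print(gaps)
```

Listing sources this explicitly makes the empty categories impossible to ignore.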

The Gap Analysis

Now ask three questions:

What would a perfect assistant need? Imagine training a brilliant new hire to do this exact task. What would you tell them? What context would make them effective? That’s your input ideal.

What do you actually have access to? Not what exists somewhere in the company—what you can actually get into an AI input within your workflow’s time constraints.

What’s missing that you could start capturing? Sometimes the gap reveals information you should be documenting but aren’t. A customer success manager might realize she’s never recording champion departures—a critical input she could start tracking.

The gap analysis often reveals low-hanging fruit: information that exists but isn’t being used, or information worth capturing because it would transform output quality.

Input Sources by Role

Different roles have different input landscapes:

Department heads have access to team data across their function. Project management tools, 1:1 notes, performance data, cross-functional communications. The challenge is aggregating team-wide context without overwhelming the input.

Individual contributors often have deep knowledge about specific projects or clients. Task requirements, client communications, research materials, historical decisions. The challenge is making implicit expertise explicit.

CEOs have breadth without depth. Business metrics, calendar context, email volume, strategic direction. The challenge is curating signal from noise when everything seems relevant.

Senior leaders sit between strategy and operations. Market intelligence, organizational signals, board materials, competitive dynamics. The challenge is connecting high-level context to specific decisions.

Whatever your role, the pattern is the same: identify what you have, identify the gaps, and bridge the difference between available information and what your AI workflow actually receives.

The Three Types of Input

Not all input is the same. Understanding the three types helps you structure inputs that consistently produce useful output.

Context

Context is background information that helps AI understand the situation. It answers: Who is involved? What’s the history? What are the constraints?

For a customer risk assessment, context includes:

  • Account tenure and relationship history
  • Stakeholder map and changes
  • Industry and competitive factors
  • Previous issues and how they were resolved

For a content draft, context includes:

  • Brand voice and messaging guidelines
  • Target audience and their concerns
  • Competitive positioning
  • What’s worked well before

Context shapes interpretation. The phrase “urgent request” means different things for a demanding client versus a low-maintenance one. Without context, AI applies generic patterns.

Content

Content is the actual material to be processed. It answers: What specifically am I working with?

For a summary workflow, content is the document or meeting transcript.

For a response draft, content is the message being responded to.

For an analysis task, content is the data being analyzed.

Content is usually the obvious part—people remember to include the thing they want AI to work on. The mistake is providing content without context, expecting AI to understand it the way you do.

Constraints

Constraints are boundaries and requirements. They answer: What must be included, excluded, or formatted a certain way?

Constraints include:

  • Format requirements (length, structure, sections)
  • Tone guidelines (formal, casual, technical)
  • Mandatory inclusions (specific topics, data points, calls to action)
  • Explicit exclusions (competitor mentions, sensitive topics, unconfirmed information)
  • Audience considerations (what they know, what they don’t, what they care about)

Constraints prevent AI from generating technically correct but practically useless output. “Summarize this meeting” might produce a comprehensive summary when you needed only action items. “Summarize this meeting: action items only, maximum 5 bullets, assignee and deadline for each” produces usable output.

The Layered Input Pattern

Structure your inputs in three layers:

## Context
[Background on situation, stakeholders, history, constraints]

## Content
[Actual material to process]

## Requirements
[Format specifications, inclusions, exclusions, output constraints]

## Request
[What you want done with the above]

This structure forces completeness. When you can’t fill a section, you’ve identified a gap. When the AI output misses the mark, you can trace which layer was inadequate.

A copywriter’s layered input might look like:

## Context
Client: SecureCloud Solutions
Voice: Confident but not salesy. Technical credibility without jargon.
Audience: IT managers at mid-sized companies, scanning during busy workday
Avoid: Fear-based messaging, competitor comparisons

## Content
Topic: Why alert fatigue is a security risk
Angle: Position human-led response team as solution
[ATTACHED: 2 approved blog posts for voice reference]

## Requirements
Length: 1,000-1,200 words
Include: 3-5 subheadings, specific statistics
CTA: Download case study
Tone: Authoritative but accessible

## Request
Write a first draft following the client's voice guidelines.

The AI can’t miss the voice requirements—they’re explicitly stated with examples attached. The format constraints are clear. The request is simple because the layers above did the heavy lifting.
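If you assemble inputs programmatically rather than by hand, the four layers map naturally onto a small helper. A minimal sketch in Python, where the function name and field values are illustrative, not a prescribed API:

```python
def build_layered_input(context: str, content: str,
                        requirements: str, request: str) -> str:
    """Assemble the four-layer input structure into one prompt string.

    An empty layer signals an input gap, so fail loudly rather than
    silently sending a thin prompt.
    """
    layers = {
        "Context": context,
        "Content": content,
        "Requirements": requirements,
        "Request": request,
    }
    missing = [name for name, text in layers.items() if not text.strip()]
    if missing:
        raise ValueError(f"Missing layers: {', '.join(missing)}")
    return "\n\n".join(f"## {name}\n{text.strip()}"
                       for name, text in layers.items())

prompt = build_layered_input(
    context="Client: SecureCloud Solutions. Voice: confident, not salesy.",
    content="Topic: why alert fatigue is a security risk.",
    requirements="1,000-1,200 words, 3-5 subheadings, CTA: download case study.",
    request="Write a first draft following the client's voice guidelines.",
)
```

The deliberate failure on an empty layer enforces the same discipline the structure gives you on paper: when you can’t fill a section, you’ve found a gap.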

Testing Your Inputs

Input design isn’t theoretical. You test whether inputs produce useful output.

The A/B Test Method

Run the same task with different input configurations. Keep the prompt constant and vary what information you provide.

Compare:

  • Minimal input (just the content)
  • Content plus context
  • Content, context, plus constraints
  • Full layered input with reference examples

Document which configuration produces the best results. You’ll often discover that specific inputs create disproportionate improvement—like including reference examples or adding stakeholder context.
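This comparison is easy to script so the prompt stays constant while only the attached information varies. A sketch with a stand-in `run_workflow` where your actual model call would go; the configuration names mirror the list above, and everything else is invented for illustration:

```python
# Pieces of information to combine into input configurations (invented).
content = "Account: Acme Corp, MRR $4,200, health score 62."
context = "Champion departed last month; usage down 40% over 90 days."
constraints = "Output: risk level, top 3 factors, 2 recommended actions."
examples = "[ATTACHED: two past assessments the team rated excellent]"

# Same task, four configurations from minimal to full.
configurations = {
    "minimal": [content],
    "plus_context": [content, context],
    "plus_constraints": [content, context, constraints],
    "full_layered": [content, context, constraints, examples],
}

PROMPT = "Assess risk level for this account."  # held constant across runs

def run_workflow(prompt: str, inputs: list[str]) -> str:
    """Stand-in for the real model call; returns the assembled input so
    you can inspect exactly what each configuration would send."""
    return prompt + "\n\n" + "\n\n".join(inputs)

for name, inputs in configurations.items():
    result = run_workflow(PROMPT, inputs)
    print(f"{name}: {len(inputs)} input blocks, {len(result)} chars sent")
```

Swap the stub for your actual model call, keep the outputs side by side, and note which added information produces the disproportionate improvement.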

The Completeness Check

Before running a workflow, ask: What question might the AI need answered that isn’t in my input?

If you were a new employee doing this task, what would you ask before starting? Those questions reveal missing inputs.

Common gaps:

  • Who is this for? (Audience context missing)
  • What’s happened before? (History missing)
  • What counts as success? (Constraints missing)
  • What should I avoid? (Exclusions missing)

The Consistency Test

Run the same workflow multiple times. If results vary significantly, inputs lack sufficient context to produce consistent output.

Inconsistency usually means the AI is filling gaps with generic patterns—and different runs fill those gaps differently. More specific inputs narrow the variance.
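One way to put a number on “results vary significantly” is to compare repeated outputs pairwise. A sketch using Python’s standard difflib, with sample outputs invented for illustration:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Outputs from running the same workflow three times (invented samples).
runs = [
    "Risk: high. Champion departed, usage down 40%, renewal in 45 days.",
    "Risk: high. Usage declined 40% and the champion left; renewal soon.",
    "Risk: medium. Some usage decline noted; monitor before renewal.",
]

# Average pairwise similarity (0 to 1): low values suggest the AI is
# filling input gaps with different generic patterns on each run.
scores = [SequenceMatcher(None, a, b).ratio()
          for a, b in combinations(runs, 2)]
consistency = sum(scores) / len(scores)
print(f"Average pairwise similarity: {consistency:.2f}")
```

Re-run the check after adding context; if the score rises, the added input is doing its job of narrowing the variance.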

The Expert Test

This is the most reliable check: Would a human expert produce good output with only this information?

Hand your input (without access to anything else) to a capable colleague. Ask them to complete the task. If they ask clarifying questions, those questions reveal input gaps. If they produce generic work because they lack context, the AI will too.

The expert test prevents a common mistake: assuming AI can infer what you mean. If an expert couldn’t infer it, neither can AI.

Building Your Input Template

Recurring workflows deserve input templates—standardized structures that ensure consistency and completeness.

The Template Approach

For any workflow you run repeatedly:

  1. Identify the stable context that applies to every instance. This goes in the template permanently.

  2. Identify the variable content that changes each time. These become fill-in-the-blank sections.

  3. Specify the constraints that define good output. These rarely change.

  4. Include reference examples when available. They anchor quality better than descriptions.

Example Template Structure

Here’s a template for customer risk assessment:

## Account Context (stable per account)
- Company: [name]
- Industry: [industry and segment]
- Account tenure: [months/years]
- Contract value: [MRR and tier]
- Renewal date: [date or N/A]

## Stakeholder Status (update when changes occur)
- Executive sponsor: [name, role, engagement level]
- Primary champion: [name, role, or DEPARTED with date]
- Day-to-day contact: [name, role]

## Current Situation (variable per assessment)
Usage trends (90 days): [key metrics with direction]
Recent activity: [support tickets, calls, meetings]
Open issues: [current concerns or requests]
CSM notes: [recent observations]

## Benchmark Context (stable per segment)
Healthy accounts this segment: [key metrics]
Churn predictors: [historical patterns]

## Request
Assess risk level with specific factors and recommended actions.

The template separates what’s stable (account context, benchmarks) from what changes (current situation, recent notes). A team member can run this workflow by updating the variable sections—no expertise required.
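A template like this translates directly into a fill-in-the-blank structure. A minimal sketch using Python’s standard string.Template, with an abbreviated set of fields and illustrative values:

```python
from string import Template

# Stable sections live in the template text; variable sections become
# placeholders. Fields are a subset of the full template, for brevity.
ASSESSMENT_TEMPLATE = Template("""\
## Account Context
- Company: $company
- Renewal date: $renewal_date

## Current Situation
Usage trends (90 days): $usage_trends
CSM notes: $csm_notes

## Request
Assess risk level with specific factors and recommended actions.""")

# A team member only updates the variable fields for each run;
# substitute() raises KeyError if any placeholder is left unfilled.
filled = ASSESSMENT_TEMPLATE.substitute(
    company="Acme Corp",
    renewal_date="2025-09-30",
    usage_trends="logins down 40%, two power users inactive",
    csm_notes="champion departed; replacement not yet engaged",
)
print(filled)
```

Because an unfilled placeholder raises an error rather than passing through silently, the template enforces completeness on every run.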

Template Maintenance

Templates aren’t static. As you learn what inputs matter most:

  • Add inputs that consistently improve output
  • Remove inputs that don’t affect quality
  • Refine the structure based on what the AI uses effectively
  • Update reference examples when better ones emerge

A template that worked six months ago might need revision. Treat templates as living documents, improved through iteration.

Common Objections

“I don’t have time to gather all that context.”

Build input gathering into your workflow trigger. If gathering context takes fifteen minutes, that’s part of the workflow—not separate preparation. Time spent on inputs repays itself in revision time saved: a five-minute input investment typically prevents twenty minutes of output fixes.

“My information is scattered across too many systems.”

Start with one source. Add sources incrementally as you see results. Perfect inputs aren’t required—better inputs are sufficient. A customer success manager might start with just CRM data, then add support ticket context once the basic workflow is stable.

“I don’t know what context the AI needs.”

Apply the expert test: What would you tell a smart new hire to complete this task? Start there. When output misses the mark, the gap often reveals what was missing. Input design is iterative—each run teaches you what to include next time.

“This feels like extra work.”

It’s frontloaded work that prevents backloaded revision. The choice isn’t between input effort and no effort—it’s between effort on inputs or effort on fixing output. Input effort is predictable and one-time per workflow. Output fixing is variable and repeats forever.

“Different tasks need different inputs.”

Yes—that’s why you build templates per workflow, not one universal template. But the structure (context, content, constraints) applies to every task. Learn the pattern once, apply it everywhere.

Your Monday Morning Action Item

Take the workflow you built in Chapter 7 and audit its inputs:

  1. List every piece of information currently included in your input
  2. Apply the expert test: Could a capable colleague do this task with only this input?
  3. Identify one gap: What’s missing that would improve output?
  4. Add that input to your next workflow run
  5. Compare output quality before and after

The input audit often reveals obvious gaps. A meeting summary workflow missing attendee context. A status report missing milestone dates. A customer response missing account history. A draft email missing tone guidelines.

One additional input can transform output quality.

Chapter 10 shows you how to evaluate and improve that output. But output quality starts here—with what you give the AI to work with. Get inputs right, and the rest of the workflow becomes dramatically easier.

The customer success manager didn’t need better prompts. She needed better inputs. So do you.