Getting Better Answers
Most of us begin using artificial intelligence like Google: we ask a question, get an answer, and the interaction is over. This simple, transactional exchange works for basic tasks, but it limits the quality of the results to what we already know to ask for.
A more powerful approach is to treat AI as a thinking partner by engaging in a back-and-forth dialogue. The focus moves from getting one specific answer to building a durable shared understanding of how you work and what's important to you. You provide context, the AI offers a structured response, and that response, in turn, helps you ask a better, more insightful follow-up question. This creates a cycle of improvement that leads to far better results.
This progression from tool to partner is the core lesson we’ve learned over the last year. Here’s how the two approaches stack up:
Your Role: Task Director vs. Thinking Partner
The AI’s Function: Executes Commands vs. Contributes to Shared Goal
Time Frame: In the Moment vs. Across a Conversation
Primary Focus: Getting a specific answer vs. Developing a deeper understanding
Moving from the left side of each pairing to the right traces the journey we've been on. We, too, started out with single-shot prompts, but frustration and curiosity led us down a path of deep research into human-AI collaboration and context engineering. After months of sprinting and patching our systems as we learned, we took a deliberate pause to rebuild our entire process from the ground up.
Now, we’re back to share the blueprint of what worked. This post is the first in a three-part series detailing our framework for collaborative intelligence. We invite you to follow along, share your own lessons, and join the conversation as we work to advance AI literacy.
The Core Principle: Context is Everything
The single most important skill for working with an AI is providing good context.
Imagine the AI is a new, talented, and eager team member. The context you provide is the project brief you give them.
While most people intuitively understand that the content of this brief is important, many overlook that the format you use to present that information is just as critical for getting a high-quality result.
This simple idea has an important takeaway: your job is no longer just about writing the perfect sentence. Instead, you become a project manager whose main task is to give your new team member a clear and effective briefing. Every piece of information you provide helps them understand the project and do their best work. The quality of the AI’s thinking and output is a direct result of the quality of the brief you provide.
This leads to a first principle of working with AI: structure shapes how an AI thinks. The format you use to provide information guides the AI toward different ways of working.
Structured formats, like bullet points, numbered lists, and clear headers, are like a detailed project plan. They encourage the AI to think in an organized and logical way. Use these for tasks that require planning, analysis, or research.
Conversational formats, like paragraphs and open-ended questions, are like a brainstorming session. They encourage the AI to be more creative and generative. Use these for developing new ideas, writing first drafts, or exploring possibilities.
The team member analogy also helps us understand a common problem. Just as a person can get confused by a long, rambling meeting with no clear agenda, an AI's focus can get cluttered during a long conversation. This "mental clutter" can make it forget key instructions or produce less relevant work. AI models retain the beginning and end of a long input best, so be alert to the risk of losing context and details buried in the middle.
As a good project manager, you need to keep the project on track. This involves actively managing the conversation. You can do this by periodically asking for a “thread summary” to ensure you and the AI are on the same page. This repetition builds context scaffolding for the AI, and keeps your own thinking on track through iterations and exchanges.
For example: “Let’s summarize our progress so far: what are the key decisions we’ve made and what are our next steps?” This “reboots” the AI’s focus with a clean, condensed set of instructions, ensuring it stays focused on what’s most important.
The Anatomy of a Powerful Prompt: 6 Key Components
A well-constructed prompt is the primary tool for building context. While simple requests may only require one or two components, mastering all six is essential for tackling complex projects. The six components are as follows:
Role: Assign a specific persona to the AI to prime it with a particular set of skills and a point of view. For example: “Act as a senior marketing strategist specializing in client communication for creative freelancers.”
Task Instruction: Give a clear, specific, and unambiguous action for the AI to perform. For example: “Draft a short, proactive email to my client list that introduces a new ‘strategic planning session’ service.”
Background Context: Provide the essential “who, what, where, when, and why” that informs the AI’s response. For example: “I’m a freelance videographer. I’ve been getting feedback that my turnaround time is great, but some clients want more strategic input during the planning phase. I want to turn this feedback into a new, billable service.”
Examples: Provide concrete instances of the desired pattern, format, tone, or style. Showing is more effective than telling. For example: “For the tone, model this example: ‘You spoke, I listened. Many of you have mentioned wanting more creative strategy upfront, so I’m excited to announce...’”
Output Format: Specify the exact structure required for the response. This ensures the output is usable and well-organized. For example: “Structure the email with: 1. A compelling subject line. 2. A brief, personal opening that acknowledges the feedback. 3. A clear description of the new service. 4. A simple call to action.”
Quality Criteria: Define the success conditions for the task. This is a clear statement of what “good” looks like. For example: “The email must sound confident and proactive, not defensive. It should frame this new service as a positive evolution of my business, driven by client needs.”
When an AI produces a poor response, you can use these six components as a checklist to diagnose what information was missing from your prompt.
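The six components can also be assembled mechanically. As a minimal Python sketch (the function name and section labels are our own illustration, not a standard), the brief becomes a labeled string that simply omits any component you haven't filled in:

```python
def build_prompt(role=None, task=None, context=None,
                 examples=None, output_format=None, quality_criteria=None):
    """Assemble the six prompt components into one labeled brief.

    Any component left as None is omitted, since simple requests
    may only need one or two of them.
    """
    sections = [
        ("Role", role),
        ("Task Instruction", task),
        ("Background Context", context),
        ("Examples", examples),
        ("Output Format", output_format),
        ("Quality Criteria", quality_criteria),
    ]
    # Keep only the sections that were provided, in a fixed order.
    return "\n\n".join(f"{label}: {text}" for label, text in sections if text)

prompt = build_prompt(
    role="Act as a senior marketing strategist.",
    task="Draft a short, proactive email introducing a new service.",
    quality_criteria="The email must sound confident, not defensive.",
)
```

A scaffold like this doubles as the diagnostic checklist: when an output disappoints, look at which sections were left empty.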
Practical Tips and Best Practices
Developing an effective workflow with an AI is a learnable skill. The following practices provide a clear path for getting better results.
Foundational Practices
Treat the First Output as a Draft: Don't expect a perfect answer on the first attempt. The initial response from an AI should be viewed as a strong starting point. The real value emerges through iteration. Use follow-up prompts to challenge assumptions, ask for clarification, and guide the AI toward a stronger final product.
Build a Prompt Library: When a particular prompt or prompt structure works well, save it. Maintaining a simple document organized by task type (e.g., “Strategic Planning Prompts,” “Creative Writing Prompts”) creates a personal knowledge base of proven starting points. This library prevents the need to reinvent the wheel for every new project.
Ask for Conversation Summaries: An AI's "working memory" can become cluttered during long interactions. Before pausing or moving on from a session, ask the AI to consolidate the thread: "Summarize our conversation so far: key decisions made, insights discovered, and where we're headed next." This creates a clean piece of context that can be used to restart the conversation later without any loss of momentum.
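A prompt library needs no special tooling. As one hedged sketch, it can be a JSON file keyed by task type (the file name, function names, and structure here are just one possible convention, not part of the framework):

```python
import json
from pathlib import Path

LIBRARY_PATH = Path("prompt_library.json")  # illustrative file name

def save_prompt(task_type, name, prompt_text):
    """Add a proven prompt to the library under its task type."""
    library = json.loads(LIBRARY_PATH.read_text()) if LIBRARY_PATH.exists() else {}
    library.setdefault(task_type, {})[name] = prompt_text
    LIBRARY_PATH.write_text(json.dumps(library, indent=2))

def get_prompts(task_type):
    """Return all saved prompts for a task type (empty dict if none)."""
    if not LIBRARY_PATH.exists():
        return {}
    return json.loads(LIBRARY_PATH.read_text()).get(task_type, {})

# Save the thread-summary prompt from the practice above as a reusable entry.
save_prompt("Strategic Planning Prompts", "thread-summary",
            "Summarize our conversation so far: key decisions made, "
            "insights discovered, and where we're headed next.")
```

Whatever the storage format, the point is the same: proven prompts become starting points instead of things you rewrite from memory.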
A Note on Trust and Verification
A critical component of using AI effectively is understanding its limitations. AI models can generate incorrect information, known as “hallucinations,” and state it with absolute confidence. Building a healthy sense of skepticism is essential for avoiding mistakes.
Be particularly vigilant with certain types of information:
Specific data points: Statistics, dates, names of individuals, or precise figures.
Recent events: Most models have a knowledge cut-off date and cannot provide reliable information about very recent events.
Responses that seem too perfect: An answer that is overly comprehensive can sometimes be a sign of a plausible-sounding fabrication.
Overly confident responses: If you ask an AI to respond like a PhD, expect the responses to sound highly confident, even when its context is incomplete.
The guiding principle must be to always verify important facts with a quick search or by consulting a primary source. You always serve as the final fact-checker.
Conclusion: Better Inputs, Better Outputs
The quality of output from an AI is a direct reflection of the quality of input provided. For more useful results, try moving from issuing single commands to engaging in a collaborative conversation where you actively shape the context.
The context you build and the formats you choose are the keys to moving beyond simple answers and unlocking more useful, reliable, and intelligent results.
By applying the practical steps in this guide—building better prompts with the six key components, iterating on outputs, and always verifying critical information—you can make AI a far more powerful and effective tool for your work.
What have you found useful in your own AI collaborations?
Patrick and the Spiral Bridge Collaboration