Advanced Prompting Techniques
System prompts, chain of thought, few-shot learning, structured output, and Canvas for collaborative editing.
What You'll Learn
- Master the prompt formula: Role + Context + Task + Format + Constraints
- Use system prompts and custom instructions to set persistent behavior
- Apply chain-of-thought and step-by-step reasoning techniques
- Leverage few-shot prompting with examples for consistent output
- Use Canvas for collaborative editing and structured document creation
The Anatomy of a Perfect Prompt
Module 1 covered the basics of prompting. This module turns you into someone who can consistently get high-quality outputs on the first or second try, not the fifth or sixth.
The most reliable prompt structure has five components. You do not need all five every time, but knowing them gives you a framework for diagnosing why a prompt is not working.
Role: Tell ChatGPT who it should be. "You are a senior data analyst at a Fortune 500 company" produces very different output than "You are a friendly tutor explaining concepts to a high school student." The role shapes vocabulary, depth, assumptions, and tone.
Context: Provide the background information ChatGPT needs. Your industry, your audience, the problem you are solving, relevant constraints, prior decisions. The more context, the less ChatGPT has to guess, and guessing is where quality drops.
Task: State exactly what you want done. Not "help me with my presentation" but "create a 10-slide outline for a board presentation on Q2 revenue growth, with talking points for each slide and suggested data visualizations."
Format: Specify the output structure. Bullet points, numbered list, table, JSON, markdown, email format, slide-by-slide breakdown. ChatGPT follows format instructions precisely when you provide them.
Constraints: Set boundaries. Word count, tone, what to include, what to exclude, reading level, technical depth. "Keep it under 200 words" or "Do not use jargon" or "Include at least 3 specific examples" are all useful constraints.
When a prompt produces poor output, diagnose which component is missing. Usually it is context or constraints.
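The five components can be assembled mechanically. A minimal sketch in Python (the helper name and labels are illustrative, not part of any ChatGPT feature):

```python
def build_prompt(role="", context="", task="", format_spec="", constraints=""):
    """Assemble the five-component prompt, skipping any component left empty."""
    sections = [
        ("", role),  # Role needs no label; it reads naturally as the opener
        ("Context: ", context),
        ("Task: ", task),
        ("Format: ", format_spec),
        ("Constraints: ", constraints),
    ]
    return "\n\n".join(label + text for label, text in sections if text)

prompt = build_prompt(
    role="You are a senior data analyst at a Fortune 500 company.",
    context="We are preparing a board presentation on Q2 revenue growth.",
    task="Create a 10-slide outline with talking points for each slide.",
    format_spec="Numbered list, one entry per slide.",
    constraints="Keep each talking point under 25 words.",
)
```

Because empty components are skipped, the same helper works for quick three-part prompts and full five-part ones.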
Quick Test: Five-Component Prompt
Take a task you need done today and write two versions.
1. Write it casually, as you normally would.
2. Rewrite it using all five components: Role, Context, Task, Format, and Constraints.
3. Send both to ChatGPT and compare the outputs.
Save the five-component version as a template you can reuse by swapping in different details.
System Prompts and Custom Instructions
A system prompt is a set of instructions that tells ChatGPT how to behave before the conversation even starts. In the ChatGPT interface, you set this via Custom Instructions (Settings > Personalization), which act as a persistent system prompt across all your conversations.
Custom Instructions have two fields:
"What would you like ChatGPT to know about you?" This is your context field. Include: your profession, your industry, your expertise level, your common use cases, your audience, any preferences about tools or frameworks you use. Example: "I am a product manager at a B2B SaaS startup. My audience is enterprise IT buyers. I frequently need help with PRDs, competitive analysis, and customer interview scripts."
"How would you like ChatGPT to respond?" This is your behavior field. Include: preferred tone, format preferences, level of detail, whether you want caveats and disclaimers, whether you prefer lists or prose. Example: "Be direct and concise. Use bullet points by default. Skip disclaimers and caveats. When I ask for writing, default to a professional but conversational tone. If my request is ambiguous, ask clarifying questions instead of guessing."
These instructions are prepended to every conversation. They compound over time because ChatGPT does not need you to repeat "be concise" or "I work in healthcare" in every new chat. Configure them once, refine them over the first week, and you will notice a meaningful improvement in baseline output quality.
For the API and Custom GPTs, system prompts are even more powerful. You can include detailed behavior rules, output format specifications, tool usage instructions, and guardrails. Module 4 covers this in depth.
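In the API, the same idea takes the form of a `system` message at the head of the messages list, sent once before any user turns. A minimal sketch (the instruction text is illustrative; the actual request is shown commented out because it requires an API key):

```python
system_prompt = (
    "You are a concise assistant for a product manager at a B2B SaaS startup. "
    "Use bullet points by default. If a request is ambiguous, ask a clarifying "
    "question instead of guessing."
)

# The system message is prepended once; every user turn inherits its rules.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Draft an outline for a competitive analysis."},
]

# With the OpenAI Python SDK this would be sent as:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(model="gpt-4o", messages=messages)
```

Custom Instructions in the ChatGPT interface are doing essentially this for you behind the scenes.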
Optimize Your Custom Instructions
Open Settings > Personalization > Custom Instructions. Write a detailed "about me" paragraph (profession, industry, audience, common tasks) and a "response style" paragraph (tone, format, detail level, what to avoid). Test it by opening a new chat and asking a work-related question. Compare the output to what you got before custom instructions. Refine weekly.
Chain-of-Thought and Step-by-Step Reasoning
Chain-of-thought (CoT) prompting is one of the most impactful techniques for improving ChatGPT's output on complex tasks. The idea is simple: instead of asking for the answer directly, ask ChatGPT to think through the problem step by step before giving its final answer.
The simplest version is just adding "Think step by step" or "Let's work through this systematically" to your prompt. This single phrase measurably improves performance on math problems, logic puzzles, planning tasks, and multi-step analysis.
A more structured approach is to explicitly define the steps: "First, analyze the current situation. Second, identify the key problems. Third, brainstorm three possible solutions for each problem. Fourth, evaluate each solution on cost, timeline, and risk. Fifth, recommend the best approach with justification."
Why does this work? When you ask for an immediate answer, ChatGPT generates the response in one pass, which means it can miss considerations or take shortcuts. When you ask it to break the problem down, each step informs the next, and the final output is more thorough and accurate.
When to use CoT: Strategic planning, complex analysis, debugging code, evaluating options, writing that requires logical structure, any task where the process matters as much as the output.
When to skip CoT: Simple factual questions, creative brainstorming (where you want volume over precision), quick formatting tasks, short responses.
For the hardest reasoning tasks, use the o-series models (o1, o3) which have chain-of-thought built into their architecture. These models think internally before responding and are dramatically better at math, science, and complex logic.
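The structured step-by-step approach is easy to template. A minimal sketch (the helper is hypothetical; the step list mirrors the example earlier in this section):

```python
DEFAULT_STEPS = [
    "Analyze the current situation.",
    "Identify the key problems.",
    "Brainstorm three possible solutions for each problem.",
    "Evaluate each solution on cost, timeline, and risk.",
    "Recommend the best approach with justification.",
]

def with_chain_of_thought(question, steps=DEFAULT_STEPS):
    """Wrap a question in an explicit step-by-step reasoning scaffold."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
    return (
        f"{question}\n\nThink through this step by step:\n{numbered}\n"
        "Give your final answer only after working through every step."
    )

prompt = with_chain_of_thought("Should we migrate to the new billing system?")
```

Swap in domain-specific steps (e.g., a debugging checklist) to reuse the same scaffold across task types.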
Try This Yourself
Pick a decision you are currently weighing at work or in a project. Ask ChatGPT the question twice. First, just ask for the answer directly. Then ask the same question but add: "Think through this step by step. First, identify the key factors. Then evaluate the pros and cons of each option. Finally, give me your recommendation with reasoning." Compare the depth and usefulness of the two responses.
Few-Shot Prompting: Teaching by Example
Few-shot prompting means giving ChatGPT examples of what you want before asking it to produce new output. It is one of the most effective techniques for getting consistent, on-brand results.
The pattern is straightforward:
- Show ChatGPT 2-3 examples of the input/output pattern you want
- Then give it a new input and let it follow the pattern
For example, if you want product descriptions in a specific style:
"Here are examples of product descriptions in our brand voice:
Product: Wireless earbuds
Description: Sound that moves with you. 8 hours of battery, IPX5 water resistance, and bass that hits different. Your commute just got an upgrade.

Product: Laptop stand
Description: Your spine will thank you. Aluminum build, adjustable height, cable management built in. Works with every laptop from 11 to 17 inches.
Now write a description for: Portable phone charger"
ChatGPT will match the tone, length, and structure of your examples with remarkable consistency. This works for any format: email replies, social media posts, code comments, support responses, meeting summaries, you name it.
Zero-shot is asking without examples (most prompts). One-shot is giving one example. Few-shot is giving 2-5 examples. More examples generally mean more consistent output, but you hit diminishing returns after about five.
Few-shot prompting is particularly powerful for tasks where tone, style, or format matters and is hard to describe in words. Instead of trying to explain your brand voice in abstract terms, just show three examples and let the pattern recognition do the work.
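The few-shot pattern is mechanical enough to template. A minimal sketch that assembles the product-description prompt above from a list of examples (the helper name is illustrative):

```python
def few_shot_prompt(examples, new_input,
                    intro="Here are examples of product descriptions in our brand voice:"):
    """Build a few-shot prompt: intro, 2-5 worked examples, then the new input."""
    shots = "\n\n".join(
        f"Product: {product}\nDescription: {desc}" for product, desc in examples
    )
    return f"{intro}\n\n{shots}\n\nNow write a description for: {new_input}"

prompt = few_shot_prompt(
    examples=[
        ("Wireless earbuds", "Sound that moves with you. 8 hours of battery, "
         "IPX5 water resistance, and bass that hits different."),
        ("Laptop stand", "Your spine will thank you. Aluminum build, "
         "adjustable height, cable management built in."),
    ],
    new_input="Portable phone charger",
)
```

Store the examples once and reuse the helper with a new `new_input` each time; this is the programmatic version of the prompt library described below.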
Build a Prompt Library
Create a document (Google Doc, Notion, whatever you use) and save your best prompts with the few-shot examples included. When you need a product description, email, or social post, grab the template, swap in the new details, and you will get consistent results every time. This library becomes one of your most valuable professional assets.
Canvas: Collaborative Editing with AI
Canvas is ChatGPT's side-by-side editing workspace, available on Plus and above. It solves a specific problem: when you are iterating on a long document or code file, working within the chat interface becomes clunky. You have to scroll through conversation history, the AI regenerates the entire document for small changes, and you lose track of what version you are on.
Canvas opens a dedicated editing panel next to the chat. The document lives in the panel, and you can highlight specific sections and give targeted instructions. "Make this paragraph more concise" applies only to the highlighted text, not the whole document. This granular control is transformative for long-form writing and coding.
Writing mode features: adjust reading level (from kindergarten to graduate), change length, add polish, translate to other languages, and add emojis. Each operation applies to the highlighted selection or the entire document.
Coding mode features: add comments, fix bugs, add logging, translate between programming languages, and review for best practices. You can run code directly in Canvas and see the output.
The workflow that makes Canvas valuable: generate a first draft in regular chat, then say "open this in Canvas" (or click the Canvas icon). From there, iterate section by section. Fix the introduction. Strengthen the examples in section 3. Add a more specific CTA at the end. Each edit is surgical rather than wholesale.
Canvas also supports version history. You can step back through previous versions of the document, which means you can experiment aggressively without fear of losing good work.
For coding tasks, Canvas is particularly powerful because you can ask ChatGPT to explain a function, refactor it, then test it, all in one workspace without context-switching.
Canvas Editing Exercise
Ask ChatGPT to write a 500-word blog post on a topic in your field. When it generates the response, open it in Canvas. Now: (1) Highlight the introduction and ask to make it punchier. (2) Highlight a paragraph and ask to add a specific example. (3) Use the reading level slider to adjust complexity. (4) Highlight the conclusion and ask for a stronger CTA. Notice how each edit is precise and does not disrupt the rest of the document.
Core Insights
- The five-part prompt formula (Role + Context + Task + Format + Constraints) gives you a diagnostic framework: when output is poor, identify which component is missing
- Custom Instructions act as a persistent system prompt that improves every conversation without you needing to repeat yourself. Configure them in your first week and refine weekly
- Chain-of-thought prompting ("think step by step") measurably improves output on complex tasks by forcing the model to reason through the problem before answering
- Few-shot prompting (providing 2-3 examples) is the most reliable way to get consistent tone, format, and style. Build a prompt library with your best templates
- Canvas transforms long-form editing from a frustrating chat-based workflow to a surgical, section-by-section collaboration. Use it for any document or code file longer than a few paragraphs