Hey future-proof friends 💜
I feel like we've reached peak optimisation irony.
AI was supposed to save us time, which it did.
But now we just spend that saved time fixing what AI got wrong.
Most people are still far from getting the results they actually need from AI. They're getting results that sound good but in reality miss the mark.
Prompting is one of the most important skills you can learn right now if you actually want to future-proof yourself.
The goal is to get so good at prompting that you can make AI surprise you with results that actually push the boundaries of your thinking and are better than what you expected.
And what better way to do that than to actually understand the models from the people who built them.
OpenAI recently released their official GPT-5.2 prompting guide and it's a long read. I read it so you don't have to, and that's what we're diving into in this newsletter.
TL;DR - today’s lineup:
One Serious Deep Dive 💡: The 5 prompting patterns from OpenAI's guide that stop ChatGPT from giving you useless answers (plus the universal prompt template you can steal)
Copy-Paste Prompt 🤖: The research assistant prompt that gets ChatGPT to actually research properly without the back-and-forth
Piping Hot AI Tea 🫖: Claude becomes your desktop assistant, Amazon's always-listening wearable, the medical AI race heating up, and regulators investigating xAI's deepfake problem
💌 Your say genuinely shapes this newsletter: there's a one‑click feedback poll at the very end. I check for feedback like a maniac after I send this out because I really want to know what you honestly think. So thank you!
One Serious Deep Dive 💡
I Read OpenAI's Official Prompting Guide So You Don't Have To (Here Are The 5 Takeaways That Actually Matter)
You ask a question, ChatGPT gives you an answer. The answer sounds good, but it's useless if it missed what you actually needed (or isn't better than what you could have come up with yourself).
OpenAI's GPT-5.2 prompting guide is basically a blueprint for fixing that. GPT-5.2 is more accurate and better at following instructions than previous versions, but it's still extremely sensitive to how you ask.
Here are the 5 patterns from the guide that reliably make outputs clearer, shorter, and more correct.
Pattern 1: Control length (or ChatGPT decides for you)
ChatGPT will write as much as it thinks you need, which is usually too much.
If you don't set a length limit, you're letting the model decide what "enough" means. And its definition of enough is different from yours.
The fix is to simply add this to your prompt:
Answer in:
- 1 short paragraph (max 4 sentences)
- Then up to 5 bullets: "What matters", "Why", "What to do next", "Risks", "Open questions"
No extra commentary.

This stops the essay-length responses that waste your time and gives you exactly what you need. Sharp, focused, actionable.
You can use this any time you want a straight answer without the fluff.
Pattern 2: Kill scope creep (stop the "helpful" extras)
GPT-5.2 is good at its job… sometimes it’s too good.
It will add extra features you didn't ask for, suggest improvements you don't need, and throw in bonus steps that complicate things. It's trying to be helpful, but what you actually need is for it to do exactly what you asked and nothing more.
The fix:
Do exactly what I asked - nothing more.
- Do not add extra sections, features, or "nice-to-haves."
- If you notice improvements, list them under "Optional ideas" but do not apply them.
- Choose the simplest valid interpretation when something is ambiguous.

This single constraint saves massive amounts of time because it stops the "helpful detour" loop where ChatGPT keeps adding things you have to undo.
When to use this: Writing content, creating plans, building anything where you need control over scope. Especially useful for coding or technical tasks.
Pattern 3: Handle ambiguity without fake confidence
This is the silent killer of bad AI outputs: ChatGPT sounds certain even when your prompt is vague.
You ask a fuzzy question - it gives you a confident answer. The answer is wrong, but you don't know that because it sounded so sure of itself.
The fix:
If my request is ambiguous or missing key info:
- Call out what's missing in 1 sentence.
- Then do ONE of the following:
A) Ask up to 3 clarifying questions, OR
B) Provide 2-3 plausible interpretations, each with labeled assumptions.
Never invent exact numbers, quotes, or references when unsure.

This turns "confident nonsense" into "useful thinking" by forcing ChatGPT to admit when it doesn't have enough information.
When to use this: Research, planning, any situation where getting it wrong has consequences. Also great for brainstorming when you're not totally sure what you're asking for yet.
Pattern 4: Long content? Force re-grounding
If you paste something long - meeting notes, a contract, a research paper - ChatGPT can confidently summarise the wrong parts.
It gets lost in the scroll and can miss important details. It can even hallucinate and confidently tell you things that aren't in the document.
The fix:
Before answering:
1) List the 5-8 most relevant points from my pasted text (short bullets).
2) Restate my constraints in your own words (1-2 sentences).
3) Then answer, and reference where each key claim came from (e.g., "from the Pricing section / from paragraph about X").
If something is missing, do not guess - ask up to 3 questions.

This forces ChatGPT to actually read the whole thing and anchor its answer to specific parts of your content.
When to use this: Any time you're working with long documents. Contracts, research, meeting notes, dense PDFs. Anything where accuracy matters.
Pattern 5: The 20-second self-check (for high-stakes stuff)
Most people use ChatGPT like a vending machine… Insert prompt, receive answer, pass it on.
Bad idea for anything important.
The fix:
Before finalising:
- Scan your answer for any specific numbers, dates, or claims that are not grounded in what I provided.
- Identify any unstated assumptions.
- Replace overly absolute language with qualified language when appropriate.
Return the final answer only after this check.

This is a tiny quality-control loop that catches unstated assumptions and ungrounded claims before they become dangerous mistakes.
When to use this: Anything high-stakes. Legal stuff, finance, compliance, medical info, job offers, pricing. Anywhere a wrong detail really matters.
The Universal Prompt Recipe (Use This for Everything)
If you want one template that works for most situations, here it is:
Role: You are a [coach/editor/analyst/tutor/assistant].
Goal: Help me achieve [specific outcome].
Context: Here's what you need to know:
- [background]
- [audience]
- [what I already have / what I tried]
Constraints:
- Do not [add new assumptions / invent numbers / add extra sections].
- If something is missing, [ask up to 3 questions] OR [state assumptions clearly].
- Keep it to [X sentences] or [Y bullets].
Output format:
- Section 1: [short answer]
- Section 2: [bullets/table/checklist]
- Section 3: [next steps]
Quality bar:
- Use simple language.
- Be specific and actionable.
- If uncertain, say what depends on what.

Fill in the brackets and adjust based on what you're doing.
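If you use the API rather than the chat window, you can wrap the recipe in a reusable template so you only ever fill in the brackets. Here's a minimal Python sketch - the function name, template layout, and example values are mine, not from OpenAI's guide:

```python
# A reusable version of the universal prompt recipe.
# Fill in the brackets once, get a complete prompt string back.

UNIVERSAL_TEMPLATE = """Role: You are a {role}.
Goal: Help me achieve {goal}.
Context: Here's what you need to know:
{context}
Constraints:
{constraints}
Output format:
{output_format}
Quality bar:
- Use simple language.
- Be specific and actionable.
- If uncertain, say what depends on what."""


def build_prompt(role, goal, context, constraints, output_format):
    """Assemble the universal recipe from a role, a goal, and bullet lists."""
    bullets = lambda items: "\n".join(f"- {item}" for item in items)
    return UNIVERSAL_TEMPLATE.format(
        role=role,
        goal=goal,
        context=bullets(context),
        constraints=bullets(constraints),
        output_format=bullets(output_format),
    )


# Example: an editing prompt built from the recipe.
prompt = build_prompt(
    role="coach",
    goal="a tighter version of my draft",
    context=["Audience: busy founders", "I already have a rough draft"],
    constraints=["Do not add new sections", "Keep it to 5 bullets"],
    output_format=["Section 1: short answer", "Section 2: bullets"],
)
print(prompt)
```

You'd then send `prompt` as your message however you normally talk to the model; the point is just that the structure stays fixed while the details change per task.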
Provide as much context as possible.
I always like to add "ask me clarifying questions" at the end so that the AI can decide (and tell me) what it actually needs from me to give me the best result.
Want to see how I use AI to 10x my content and audience growth?
I’ve created a done-for-you content ideation, creation, management and publishing system to help you leverage AI to 10x your content creation (without losing your authenticity in the process).
It’s called the AI Content Multiplier and the waitlist is now open - join now and I’ll send you a 30% off early bird discount as soon as it launches (which is soon).

Copy-Paste Prompt 🤖
The Research Assistant Prompt
Use this when you need ChatGPT to research something extensively:
Act as a research assistant.
Task: Research [topic].
Rules:
- Prefer web research over assumptions when facts may be uncertain.
- Resolve contradictions and cite sources for key claims.
- Stop when additional research is unlikely to change the conclusion.
Output:
- Key findings (bullets)
- Contradictions + resolution
- Practical takeaways
Do not ask clarifying questions. Cover all plausible interpretations.

Piping Hot AI Tea 🫖
1. Anthropic Launches "Claude Cowork": The AI Agent That Actually Does the Work
Anthropic dropped "Claude Cowork," and it’s a massive step toward the "agentic" future we’ve been talking about. Unlike a standard chatbot, Cowork is a desktop agent that can actually manage files and perform multi-step computing tasks on your Mac. It’s essentially a user-friendly version of their "Claude Code" tool, built for non-coders who want an AI that doesn't just talk, but takes action.
2. Amazon’s "Bee" Wearable: The Always-Listening Companion is Here
Amazon just showcased "Bee," its new AI wearable, at CES 2026. Positioned as an "ambient AI companion," Bee is a small, button-based device designed to record, organise, and summarise your conversations throughout the day. It’s the result of Amazon’s acquisition of the Bee startup last year, and it’s a direct play for the "screenless" future. Instead of tapping a phone, you just live your life, and Bee acts as your external memory, reflecting your day back to you in summaries.
3. The Medical AI Race: Google, Claude, and OpenAI are Coming for Your Health Records
The battle for the future of medicine is heating up. This week, Google, Anthropic (Claude), and OpenAI all launched major medical AI initiatives. Anthropic is letting Pro and Max users connect Claude to their personal health records for better care coordination, while OpenAI acquired Torch Health to bring fragmented medical data together inside ChatGPT. Google is right there too, with new diagnostic tools. The goal is AI that doesn't just answer health questions, but actually understands your medical history and imaging to help catch what doctors might miss.
4. The xAI Deepfake Investigation: Regulators are Turning Up the Heat
Following up on last week’s story about Grok’s explicit image problem, regulators are now officially launching investigations into Elon Musk’s xAI. Two countries have already blocked Grok after it was used to generate sexualised images of women and children. Ofcom in the UK and other watchdogs are looking into "digital undressing" and the monetisation of abuse on the platform. It’s a continuation of the "Code Red" for AI ethics, as the world tries to figure out how to rein in deepfake abuse before it becomes the new normal.
If you enjoyed today's newsletter AND got to the end of it, I’d love a quick click on the poll below to let me know what you think 💜.
See you next week,
Jess xx

