Most people treat ChatGPT like a search box: they type a vague request and hope the model guesses the rest. The fastest way to get better answers is to ask questions that include context, constraints, and a clear output format. This guide gives you practical question types (with copy‑paste templates) for work, learning, creativity, and decision-making—plus a simple way to iterate when the first answer isn’t great.
ChatGPT is best at helping you think with structure. If you only ask for “ideas,” you’ll get generic ideas. Instead, ask for a deliverable you can use right away: a checklist, a table of trade‑offs, a draft email, a meeting agenda, a lesson plan, or a test plan. If your prompt contains goal → context → constraints → format, the answer becomes far more relevant. If you want a reusable framework, see the prompt recipe.
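For example, a prompt built from those four parts might look like this (the scenario is invented, so swap in your own details):
Goal: a one-week onboarding plan for a new customer support hire.
Context: we're a 5-person remote SaaS team; the hire starts Monday and already knows the product as a user.
Constraints: no more than 2 hours of training per day, no internal jargon.
Format: a day-by-day checklist with 3-5 items per day.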
Clarifying questions are the most underrated prompt trick. They force the model to surface missing information, and they help you turn a fuzzy thought into a concrete plan. Use this when you feel stuck, overwhelmed, or unsure what you even need.
Ask me 7 clarifying questions about [topic].
Then summarize my answers in 5 bullets and propose a plan.
I want help with [goal]. Before suggesting anything:
1) ask me up to 6 questions
2) explain why each question matters
3) then give me 3 plan options with pros/cons.
Great follow-ups after the first answer:
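Answer your own questions with your best guesses, then show me the plan you'd propose.
Which of my answers changed your recommendation the most, and why?
What's one question you didn't ask that I should be thinking about?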
Want more “ask first” prompts? Browse work & productivity prompts.
Creation prompts work best when you specify the audience and the constraints. “Write a blog post” is vague; “Write a 900‑word blog post for busy founders with a friendly tone and a 5‑part outline” is actionable. Ask for multiple versions so you can choose a direction, then iterate.
Write 3 versions of [thing] for [audience]:
1) friendly, 2) direct, 3) playful.
Keep each under 120 words.
Create a clear outline for [topic].
Audience: [who]. Goal: [what you want the reader/user to do].
Constraints: [tone/length/keywords].
Output: H2 headings + 3 bullets each.
Generate 10 options for [headline/subject line/hook].
Make them diverse (curious, contrarian, practical, emotional, funny).
For creativity and conversation starters, check out the funny and deep prompt categories.
Decision prompts should include criteria. “Which is better?” isn’t answerable without knowing what you value. The trick is to force a comparison table, then ask for a recommendation with reasoning. This is useful for product choices, career decisions, and planning trade‑offs.
I’m choosing between [A], [B], [C].
My priorities are: [1], [2], [3].
Make a decision matrix and recommend the best option with reasoning.
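The output you're asking for looks something like this (options, weights, and scores are invented for illustration; each weighted score is the row's scores multiplied by the criterion weights):
Option | Affordability (40%) | Ease of learning (30%) | Flexibility (30%) | Weighted score
A      | 4                   | 3                      | 5                 | 4.0
B      | 5                   | 2                      | 3                 | 3.5
C      | 3                   | 5                      | 4                 | 3.9
Ask the model to explain which rows drove the recommendation, not just to report the totals.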
Help me decide between [A] and [B].
Constraints: [budget/time/skills].
Output:
- a pros/cons table
- risks + mitigations
- your recommendation
- what information would change your mind.
If the decision is high-stakes, ask the model to slow down. One way to phrase it:
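This is a high-stakes decision, so don't rush to an answer.
List your assumptions, the information you're missing, and the worst realistic outcome for each option.
Then give your recommendation and say what evidence would change it.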
Improvement prompts turn ChatGPT into an editor or reviewer. The key is to define what “good” looks like: clearer, shorter, more persuasive, more friendly, less jargon, more structure, etc. If you provide your draft, ask for both critique and a revised version.
Critique this draft for clarity and tone, then rewrite it.
If you remove anything, explain why: [paste text]
Rewrite this to be:
- 15% shorter
- clearer for a beginner
- more confident (not hypey)
Then give 3 alternate openings: [paste text]
Act as a reviewer.
Give me: 1) the 5 biggest issues, 2) suggested fixes, 3) a rewritten version.
Language models can sound confident even when they guess. If accuracy matters, constrain the model and make it checkable. A few small changes reduce hallucinations dramatically:
Use only the information in the text below.
If something is missing, say “unknown”.
Then answer in bullets: [paste]
List your assumptions and label confidence (high/medium/low).
Then give a verification checklist of what I should confirm.
For a deeper walkthrough, see How to Reduce Hallucinations and How to Evaluate AI Answers.
Below are compact templates you can copy into ChatGPT. Replace the bracketed parts. For more, browse all prompt lists or the daily archive at /daily/.
Act as a project manager.
Goal: [goal].
Constraints: [deadline/budget/tools].
Output:
1) milestones
2) weekly plan
3) risks + mitigations
4) success metrics.
Write a concise email to [recipient] about [topic].
Tone: [friendly/direct].
Constraints: under 140 words.
End with a clear call to action.
Teach me [topic] like I’m new.
Then give an example.
Then quiz me with 5 questions and correct my answers.
Here’s an error and the code that produced it.
1) explain likely causes
2) propose fixes
3) show corrected code
4) add a minimal test.
Error: [...]
Code: [...]
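A filled-in version might look like this (the snippet and error are invented for illustration):
Error: ZeroDivisionError: division by zero
Code:
def average(values):
    return sum(values) / len(values)  # fails when values is an empty list

print(average([]))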
If you want prompts that are fun or philosophical, jump to Funny Questions or Deep Questions. If you want to stress-test reasoning, try Prompts That Test the Limits of an LLM.
Even with good questions, the first answer is usually a draft. Treat the chat like a workshop: get a version, then refine it. Here are patterns that work across almost any topic—from writing to coding to life decisions.
Ask for multiple approaches first. This prevents the model from locking onto one interpretation too early and gives you a menu. Then ask it to pick the best option for your constraints.
Give me 5 different approaches to [goal].
Then recommend the best one for: [constraints].
Finally, write a 7-step plan to execute it.
If you’re writing, designing, or planning, ask for critique before a rewrite. Critique tells you what to improve and why. This is especially useful for landing pages, emails, resumes, and product specs.
First critique this for clarity, structure, and tone.
Then rewrite it with your improvements: [paste]
When you care about correctness, ask for assumptions and edge cases. This surfaces gaps that would otherwise hide behind a confident tone. In coding, edge cases often reveal bugs; in decision-making, they reveal hidden risks.
Before answering, list assumptions you're making.
Then answer.
Then list 10 edge cases or failure modes and how to handle them.
A checkable answer is a safer answer. Ask for a checklist, test plan, or rubric. If you can validate the output step-by-step, you’ll spot mistakes quickly and the model becomes more useful.
Answer this, then add:
1) a checklist I can follow,
2) common mistakes,
3) how to verify the result.
If you want prompts specifically designed for structured work outputs, browse work & productivity prompts. If you want prompts that explore reasoning boundaries, try LLM limit tests.
Good questions are specific, include context, and ask for a clear outcome (plan, list, explanation, table, or draft). If you’re unsure, ask ChatGPT to ask you clarifying questions first.
Add constraints (time, budget, audience, tone), request a format, and iterate. A reliable loop is: generate 3 options → pick the best for your constraints → improve the chosen option.
Ask for deliverables: meeting agendas, project plans, checklists, email drafts, risk logs, or decision matrices. See Work & Productivity prompts for more templates.
Provide your source text, ask for assumptions, label uncertainty, and request a verification checklist. Start with How to Reduce Hallucinations.