Advanced Prompting Techniques for Gemini CLI
By Gemini Guides on 6/10/2025
You've mastered the basics of Gemini CLI. You can ask questions, chat, and generate code. But sometimes, the output isn't quite right. It's too generic, misses the point, or isn't in the format you need.
The secret to leveling up your AI interaction lies in prompt engineering. By crafting better inputs, you can steer the model toward dramatically better outputs.
Conceptual Framework: Thinking About Prompts
Before diving into specific techniques, it's helpful to have a mental model for what makes a good prompt. This short video provides an excellent overview of key prompt engineering concepts in just a few minutes.
Video Tutorial: Learn Better Prompt Engineering in 7 Minutes
Here are four advanced techniques to turn your prompts into precision instruments.
1. Set a Persona: The Expert in the Machine
The model's default voice is a generalist. To get expert-level answers, tell the model who to be.
The Prompt:
gemini "Act as a senior database administrator specializing in PostgreSQL.
I have a query that is running slow. Explain the possible reasons and suggest
optimizations. Here's the query: SELECT..."
The Power: By setting a persona, you prime the model to use specific terminology, focus on relevant details (like indexing and query plans), and provide a much higher-quality, domain-specific answer.
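A persona also combines naturally with piped-in content. The sketch below builds a persona prompt around a file's contents; the file name and query are illustrative stand-ins, not from any real project:

```shell
# Create a sample "slow query" file to stand in for your own (illustrative).
cat > /tmp/slow_query.sql <<'EOF'
SELECT * FROM orders o JOIN users u ON o.user_id = u.id
WHERE u.email LIKE '%@example.com';
EOF

# Embed the file's contents inside a persona prompt via command substitution.
prompt="Act as a senior database administrator specializing in PostgreSQL.
Explain why this query may be slow and suggest optimizations:
$(cat /tmp/slow_query.sql)"

echo "$prompt"
# To send it: gemini "$prompt"
```

Building the prompt in a variable first lets you inspect exactly what the model will receive before you send it.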
2. Provide Rich Context: Don't Make the AI Guess
The most common reason for a bad response is a lack of context. The model can't read your mind or your screen. You have to show it.
The Prompt:
# Don't just ask "Why is my code broken?"
# Instead, show the code AND the error.
(cat my_file.go && echo "---" && go run my_file.go 2>&1) | gemini "Here is my Go code and the error it produces. Explain the error and fix the code."
The Power: This prompt combines the source code and the error message into a single context, giving the AI everything it needs to make an accurate diagnosis. The 2>&1 redirection ensures both standard output and standard error are captured in the pipe.
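You can see the effect of 2>&1 without involving the model at all. This small, self-contained sketch uses a deliberately failing script as a stand-in for your program; without the redirection, the error message would bypass the pipe entirely:

```shell
# A stand-in for a failing program (illustrative).
cat > /tmp/demo.sh <<'EOF'
echo "starting"
ls /no/such/path
EOF

# Capture source, a separator, and BOTH output streams in one context.
# "|| true" keeps the pipeline's nonzero exit from aborting the script.
combined=$( (cat /tmp/demo.sh && echo "---" && sh /tmp/demo.sh 2>&1) || true )

echo "$combined"
# To diagnose it: echo "$combined" | gemini "Explain the error and fix the script."
```

The captured text now contains both the script's normal output and the "No such file or directory" error, which is exactly the context the model needs.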
3. Few-Shot Prompting: Teach by Example
If you need a response in a specific format, the best way is to show the model exactly what you want. This is called "few-shot" or "in-context" learning.
The Prompt:
gemini "I'll provide text, you extract key entities into JSON.
Text: 'Apple Inc., founded in 1976 by Steve Jobs and Steve Wozniak, is headquartered in Cupertino, California.'
JSON: {\"company\": \"Apple Inc.\", \"location\": \"Cupertino, California\", \"founders\": [\"Steve Jobs\", \"Steve Wozniak\"]}
---
Text: 'The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.'
JSON:"
The Power: By providing one or more complete examples (Text and JSON pairs), you train the model for this specific task. It will follow your format precisely for the new text you've provided.
4. Chain of Thought: Force Step-by-Step Reasoning
For complex problems, a model might jump to a conclusion and get it wrong. You can force it to slow down and "show its work" by instructing it to think step-by-step.
The Prompt:
gemini "I need to migrate a WordPress site to a new server.
Create a detailed, step-by-step plan. Think step by step. Start with
backups and end with DNS changes. For each step, mention the necessary shell commands."
The Power: The phrase "Think step by step" is a powerful instruction. It encourages the model to break down the problem into a logical sequence, reducing the chance of errors and producing a more comprehensive, actionable plan.
By mastering these techniques, you'll move from simply asking questions to truly directing the AI. Experiment with combining them to craft the perfect prompt for any task.
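As one sketch of such a combination, the prompt below stacks a persona, piped-in file context, and a step-by-step instruction. The config file and the exact wording are illustrative assumptions, not a prescribed recipe:

```shell
# A stand-in config file to supply real context (illustrative).
cat > /tmp/nginx.conf <<'EOF'
server { listen 80; location / { proxy_pass http://app:3000; } }
EOF

# Persona + rich context + chain-of-thought, all in one prompt.
prompt="Act as a senior DevOps engineer.
Here is my nginx config:
$(cat /tmp/nginx.conf)
Think step by step: explain what it does, list any risks, and propose a
hardened version."

echo "$prompt"
# To send it: gemini "$prompt"
```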