AI & Automation
March 10, 2026

The best custom instructions for ChatGPT

How to use custom instructions in ChatGPT & Co.: typical use cases, sample prompt, data protection, testing and agents as the next step.


Less manual, more automated?

In an initial consultation, let's find out where your biggest needs lie and what optimization potential you have.

Whether in ChatGPT, Claude, Gemini, or Copilot, the feature goes by different names, but the effect is the same: you give the model a stable framework so that you don't have to explain it again in every prompt. This article is about user-defined instructions, also known as persistent instructions: specifications that an LLM should take into account permanently, such as tone, format, level of detail, target audience, or typical tasks. This applies regardless of the provider; sometimes they are called “custom instructions,” sometimes “personal instructions,” and sometimes “preferences.”

What are user-defined instructions used for?

You can use the instructions to turn chatbots from an “all-purpose responder” into a reliable work tool: you define once how the model should respond so that the output is directly usable every time. Most often, this involves format and quality rules, style and tone, target audience, and role/context focus. Many also use them as a safety net: ask for clarification when something is unclear, don't guess, avoid sensitive data. The common purpose is always the same: less repetition, fewer follow-up prompts, more consistency, because preferences don't have to be renegotiated with every request. In ChatGPT, for example, such “custom instructions” are stored centrally and then applied to conversations instead of having to be written into the prompt each time.

Typical use cases

To illustrate how custom instructions can help in everyday use of LLMs, here are a few specific use cases:

Working mode for usable answers

Always the same output format: Key message → 3 options → Risks → Next steps. Fewer queries, fewer “walls of text.”

Permanently set tone and writing style

E.g., “short, factual, no buzzwords, German,” or “friendly, empathetic, informal.” Eliminates most of your style corrections.

Target group translator

“Explain everything for ambitious laypeople/decision-makers, define technical terms when they first appear.” Makes content immediately shareable.

Role/focus persona for recurring tasks

E.g., “Act as a project manager: structure tasks, risks, responsible parties.” Or “as an editor: headline variants, clear thesis, no filler sentences.”

Quality assurance and “safety rails”

"If information is missing: ask 1 to 2 questions. If unsure: say so. No speculation.“ Reduces hallucinations and false security.

Language and format standards for everyday use

E.g., “always use Markdown with headings,” “always use bullet points,” “always use tables for comparisons,” “always include a short summary at the end.”

What could a system prompt look like?

And now let's lay our cards on the table: what could a persistent instruction look like in concrete terms? One thing is clear: everyone who uses LLMs has their own style. But to give you an idea, an example system prompt like this can be a useful starting point:

Respond in a short, structured, and actionable manner by default.

FORMAT

1) Key message in 1 sentence

2) 3 options (1–2 sentences each)

3) Risks/trade-offs (max. 3 bullet points)

4) Next steps (max. 5 bullet points, specific)

RULES

- No empty phrases, no repetition of my question.

- If information is missing: ask a maximum of 2 follow-up questions instead of guessing.

- If you are unsure: say so clearly.

Where are the user-defined instructions stored?

User-defined, persistent instructions are not stored in the chat window, but in the settings under “Personalization”—either in the user interface (UI) or in the code via a programming interface (API). For end users, this means that you enter your preferences in the app/website in the personalization settings; they are linked to your account and are then automatically applied to new conversations without you having to re-enter them each time.

The same principle applies in developer setups: there, the instruction is built into every request as a system message/system prompt, usually on the server side or in a workflow tool, so that all users and sessions run consistently according to the same rules.
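In code, this pattern can be sketched as a helper that prepends the persistent instruction as a system message to every request. The instruction text and the `build_messages` helper below are illustrative assumptions; only the role/content message shape follows the widely used chat-completions format:

```python
# Sketch: a server-side helper that injects the persistent instruction
# as a system message into every request. The instruction text is an
# illustrative example, not a recommended default.

PERSISTENT_INSTRUCTION = (
    "Respond in a short, structured, and actionable manner by default. "
    "If information is missing, ask at most 2 follow-up questions."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the persistent instruction so every session follows the same rules."""
    return [
        {"role": "system", "content": PERSISTENT_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

# The resulting list is what you would pass as the `messages` payload to the model API.
print(build_messages("Summarize the Q3 report for the sales team.")[0]["role"])
# prints "system"
```

In a workflow tool, the same idea applies: the system message is configured once in the node or step settings and reused for every run.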

How many instructions can I use – and how do I change them?

In most LLM apps, you have a section for persistent instructions. There, you can store multiple rules as bullet points – one instruction, many sub-points. If you need different modes (e.g., work mode vs. creative mode), you can manually switch the text by replacing the instruction in the settings. A simple workflow is practical: Save 2 to 3 finished instruction blocks as text modules (e.g., in Notion/OneNote/Word) and copy the appropriate block into the settings as needed.
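If you drive a model via a script rather than the settings page, the same text-module idea can be sketched as a small lookup; the mode names and instruction texts below are made up for illustration:

```python
# Sketch: 2-3 instruction blocks stored as text modules, one per mode.
# In the app UI you would paste the chosen block into the settings;
# in a script you can select it programmatically.

INSTRUCTION_BLOCKS = {
    "work": "Short, factual, no buzzwords. Format: key message, options, risks, next steps.",
    "creative": "Friendly, playful tone. Offer 3 headline variants and one wildcard idea.",
}

def pick_instructions(mode: str) -> str:
    """Return the instruction block for the chosen mode, failing loudly on typos."""
    if mode not in INSTRUCTION_BLOCKS:
        raise KeyError(f"Unknown mode: {mode!r}")
    return INSTRUCTION_BLOCKS[mode]

print(pick_instructions("work"))
```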

UI vs. API

The difference between UI (user interface) and API (application programming interface) is important because it explains where you set persistent instructions in the first place. When you use an LLM normally via the website or app, you work in the UI: you enter your instructions once in the settings and they automatically apply to new chats. The API is the programming interface – you use it when you build or automate something yourself (e.g., an internal tool, a workflow, a script).

Tips for storing system prompts

1. Wording

Write instructions so that the model knows exactly what you want. This works best with clear, positive rules (“Answer in 5 bullet points,” “Ask questions if there are gaps”) rather than prohibitions (“not so long,” “no bad answers”). Keep the instructions short and prioritize: better to have 5 strong rules than 20 moderately clear ones. After saving, immediately test with 2-3 typical questions to see if the format, tone, and level of detail really work.

2. Data protection

Treat persistent instructions like normal work data. No passwords, no health data, no personal information from customers or employees. If you work in a team, don't write anything that wouldn't be allowed in an internal manual. Use placeholders (“Customer A,” “Project X”) and move sensitive details to the specific chat—or better yet, to secure, approved systems.
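As a rough sketch of the placeholder idea, a small lookup can swap known sensitive terms before any text lands in a persistent instruction; the names in the mapping are purely illustrative:

```python
# Sketch: replace known sensitive terms with neutral placeholders.
# The mapping is illustrative; a real setup would keep it in an
# approved, access-controlled system, not in the prompt itself.

REPLACEMENTS = {
    "Acme GmbH": "Customer A",
    "Orion Rollout": "Project X",
}

def redact(text: str) -> str:
    """Swap each sensitive term for its placeholder."""
    for sensitive, placeholder in REPLACEMENTS.items():
        text = text.replace(sensitive, placeholder)
    return text

print(redact("Status update for Acme GmbH on the Orion Rollout."))
# prints "Status update for Customer A on the Project X."
```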

3. Testing and operation

Changes to the system prompt should not happen “live.” Test new versions first in a small sandbox (separate test chat or test profile) with the same standard cases each time. Define 1 to 2 sample responses as a reference (“this is how it should look”) and version your prompts (v1, v2, v3) so that you can quickly revert if problems arise. After rolling out, briefly observe whether the rules are being consistently followed and only then refine them further.
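A minimal version of such a regression check fits in a few lines; the section names mirror the sample system prompt earlier in this article, while `check_format` and the sample answers are assumptions for illustration:

```python
# Sketch: a tiny regression check you can run against each prompt
# version (v1, v2, ...) with the same standard cases every time.

REQUIRED_SECTIONS = ["Key message", "Options", "Risks", "Next steps"]

def check_format(answer: str) -> list[str]:
    """Return the required sections missing from a model answer (empty list = pass)."""
    return [section for section in REQUIRED_SECTIONS if section not in answer]

# Same standard case, two prompt versions: compare which one passes.
answer_v1 = "Key message: migrate in Q2.\nNext steps: draft a plan."
answer_v2 = "Key message: migrate.\nOptions: A/B.\nRisks: downtime.\nNext steps: plan."
print(check_format(answer_v1))  # prints ['Options', 'Risks']
print(check_format(answer_v2))  # prints []
```

If a new prompt version suddenly fails cases the old one passed, you revert to the previous version instead of debugging live.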

The next step: agents instead of just instructions

Persistent instructions ensure consistent responses. If you want to not only “respond immediately” to recurring tasks, but also have them processed automatically, you'll reach the next level of maturity: agents. An agent is more than just “good prompting.” It is a system that not only responds, but also performs work. An agent has a goal, follows a process logic, can access tools depending on the platform (e.g., files, calendar, email, web, apps), and ultimately delivers a result, not just text.

In practice, this is the transition from “getting answers” to automation solutions: recurring processes are modeled as workflows and processed (partially) autonomously by the agent. Typical patterns include “check input → extract information → create decision proposal → ask questions → write output to a target system.” Depending on the level of maturity, this ranges from simple templates (agent guides you step by step) to true process automation via tools such as Power Automate, n8n, Zapier, or internal APIs. The added value: fewer context switches, fewer manual transfers, more reproducible quality.
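The “check input → extract → propose → write” pattern can be sketched as plain functions; every step below is a stub, and in a real agent the extract and propose steps would call an LLM while the output would be written to a target system (ticket, CRM, document) instead of returned as text:

```python
# Sketch of the pipeline pattern. All steps are illustrative stubs.

def check_input(text: str) -> bool:
    """Step 1: reject empty input."""
    return bool(text.strip())

def extract(text: str) -> dict:
    """Step 2: pull out the key fact (stub: first sentence as subject)."""
    return {"subject": text.split(".")[0].strip()}

def propose(info: dict) -> str:
    """Step 3: turn the extracted info into a decision proposal."""
    return f"Decision proposal for: {info['subject']}"

def run_pipeline(text: str) -> str:
    if not check_input(text):
        return "Please provide input."  # the "ask questions" branch
    return propose(extract(text))

print(run_pipeline("Renew the hosting contract. Budget unclear."))
# prints "Decision proposal for: Renew the hosting contract"
```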

Conclusion

Persistent instructions are the fastest way to take an LLM from “somewhat helpful” to “reliably useful.” They save time because tone, format, and quality rules don't have to be re-explained with every request, and they increase consistency because the model has a stable framework. The most important success factor here is not creativity, but clarity: a few prioritized rules that you test and refine as needed. If you do this properly, you'll get fewer follow-up prompts, less filler, and significantly more output that can be used directly. If you want to turn this into repeatable processes, the next step is obvious: agents that combine these rules with goals, tools, and workflow logic and process tasks (partially) automatically.

FAQs

Can I use multiple persistent instructions at the same time?

There is usually only one instruction area per account, but you can store multiple rules as bullet points in it. For different modes, save 2 to 3 variants as text modules and swap them out as needed.


What is the difference between a persistent instruction and a normal prompt?

A normal prompt only applies to the current request. Persistent instructions set a permanent framework for tone, format, and behavior across many requests.


What should I never write in persistent instructions?

No passwords, no health data, and no personal data (e.g., customer names, private contact information). Use placeholders and only insert sensitive details where it is really necessary and approved.


When are instructions no longer sufficient and when do I need agents/automation?

If you don't just want answers, but want to reliably process recurring tasks (e.g., check → summarize → draft → transfer to the tool). Then agents/workflows make sense because they combine goals, steps, and tool access.


FAQ

Your questions, our answers

What does bakedwith actually do?

bakedwith is a boutique agency specialising in automation and AI. We help companies reduce manual work, simplify processes and save time by creating smart, scalable workflows.

Who is bakedwith suitable for?

For teams ready to work more efficiently. Our customers come from a range of areas, including marketing, sales, HR and operations, spanning from start-ups to medium-sized enterprises.

How does a project with you work?

First, we analyse your processes and identify automation potential. Then, we develop customised workflows. This is followed by implementation, training and optimisation.

What does it cost to work with bakedwith?

As every company is different, we don't offer flat rates. First, we analyse your processes. Then, based on this analysis, we develop a clear roadmap including the required effort and budget.

What tools do you use?

We adopt a tool-agnostic approach and adapt to your existing systems and processes. It's not the tool that matters to us, but the process behind it. We integrate the solution that best fits your setup, whether it's Make, n8n, Notion, HubSpot, Pipedrive or Airtable. When it comes to intelligent workflows, text generation, or decision automation, we also use OpenAI, ChatGPT, Claude, ElevenLabs, and other specialised AI systems.

Why bakedwith and not another agency?

We come from a practical background ourselves: founders, marketers, and builders. This is precisely why we combine entrepreneurial thinking with technical skills to develop automations that help teams to progress.

Do you have any questions? Get in touch with us!