System Prompt Extractor
Extract the system prompt from a chat-style message log. Pass a JSON messages array (OpenAI / Anthropic style) or a labelled transcript — the tool isolates the system message text and outputs it clean.
Chat log in, system prompt out
Paste an OpenAI / Anthropic messages array (or a labelled transcript) and the tool isolates the system message. Auto-detect handles either format.
[
{"role":"system","content":"You are a helpful assistant."},
{"role":"user","content":"Hi"},
{"role":"assistant","content":"Hello!"}
]
You are a helpful assistant.
What you'll use this for
Whenever you have a chat log and need just the system prompt out of it.
Red-teaming LLM apps
Audit the actual system prompt your app sends — not just what the spec says.
Reverse-engineering prompts
Pull the system message out of a captured request to study or modify it.
Audit logs
Skim only the system prompts from a batch of logged calls.
Training data prep
Strip system messages from supervised fine-tuning datasets when you want only the conversation.
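For the fine-tuning case, dropping system messages is a one-line filter over each conversation. A minimal sketch (the function name and the list-of-dicts shape are illustrative assumptions, matching the messages format shown above):

```python
import json

def strip_system(messages):
    """Return the conversation with every system entry removed."""
    return [m for m in messages if m.get("role") != "system"]

convo = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]
print(json.dumps(strip_system(convo)))
```

Run this over each line of a JSONL dataset to keep only the user/assistant turns.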
How to extract a system prompt
Paste the chat log
Either a JSON messages array or a labelled system: ... user: ... transcript.
Pick format (or leave on auto)
Auto-detect inspects the first character: [ means JSON, anything else is treated as transcript.
Choose output style
Plain text gives just the content. JSON snippet wraps it in {"role":"system","content":"..."} so you can paste it back into an API request.
Click Extract
Auto-extract is on by default. If no system message is found, the status bar says so.
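The detection and output steps above can be sketched in a few lines; the function names are mine, but the first-character heuristic and the JSON-snippet wrapping are as described:

```python
import json

def detect_format(text):
    """'[' as the first non-whitespace character means JSON; anything else is a transcript."""
    return "json" if text.lstrip().startswith("[") else "transcript"

def wrap_as_snippet(content):
    """JSON-snippet output style: re-wrap the extracted text as a system message."""
    return json.dumps({"role": "system", "content": content})

print(detect_format('[{"role":"system","content":"Hi"}]'))  # json
print(detect_format("system: Hi\nuser: Hello"))             # transcript
```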
Frequently asked questions
What input formats does it accept?
OpenAI / Anthropic-style JSON messages arrays ([{"role":"system","content":"..."}, ...]) and labelled transcripts (system: ...\nuser: ...). Auto-detect picks the right parser based on the input shape.
What if the log contains more than one system message?
All system messages in the array are concatenated in document order, separated by a blank line.
Is it free and private?
Yes. Runs entirely in your browser. No signup.
Why not just grab the system prompt with a regex?
JSON arrays are structured — content can contain nested quotes, escaped newlines, and even other JSON. A real JSON parser handles those reliably; a regex doesn't.
What if I want to write a system prompt, not extract one?
If you want to compose a fresh system prompt with structure, use the System Prompt Formatter.
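A quick illustration: a deliberately naive regex truncates content at the first escaped quote, while a real parser resolves the escape correctly (the regex here is a strawman, not the tool's implementation):

```python
import json
import re

log = '[{"role":"system","content":"Say \\"hi\\" and stop."}]'

# Naive regex: capture stops at what looks like the closing quote.
naive = re.search(r'"content":"([^"]*)"', log).group(1)

# A real JSON parser resolves the \" escapes.
parsed = json.loads(log)[0]["content"]

print(naive)   # truncated at the escaped quote
print(parsed)  # Say "hi" and stop.
```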
About system prompt extraction
Most LLM APIs use a structured messages array — each entry has a role (system, user, assistant, sometimes tool) and a content field. The system message tells the model who it is and how to behave for the rest of the conversation.
Why pull it out?
- Audit — verify the system prompt your app actually sends.
- Share — pass only the system prompt to a teammate without leaking user data.
- Reuse — extract a working prompt and put it in a library.
Parsing rules
- JSON array — every entry with role === "system" or type === "system" is collected.
- Labelled transcript — lines starting with system:, user:, assistant:, or human: open a new block. The content under system: is collected until the next role line.