100% browser-based · OpenAI / Anthropic / generic

System Prompt Extractor

Extract the system prompt from a chat-style message log. Pass a JSON messages array (OpenAI / Anthropic style) or a labelled transcript — the tool isolates the system message text and outputs it clean.

Example

Chat log in, system prompt out

Paste an OpenAI / Anthropic messages array (or a labelled transcript) and the tool isolates the system message. Auto-detect handles either format.

Messages array
[
  {"role":"system","content":"You are a helpful assistant."},
  {"role":"user","content":"Hi"},
  {"role":"assistant","content":"Hello!"}
]
Extracted
You are a helpful assistant.
Use cases

What you'll use this for

Any time you have a chat log and need only the system prompt out of it.

Red-teaming LLM apps

Audit the actual system prompt your app sends — not just what the spec says.

Reverse-engineering prompts

Pull the system message out of a captured request to study or modify it.

Audit logs

Skim only the system prompts from a batch of logged calls.

Training data prep

Strip system messages from supervised fine-tuning datasets when you want only the conversation.

Step by step

How to extract a system prompt

1

Paste the chat log

Either a JSON messages array or a labelled system: ... user: ... transcript.

2

Pick format (or leave on auto)

Auto-detect inspects the first character: [ means JSON, anything else is treated as transcript.
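As a plain-JavaScript sketch (the function name is illustrative, not the tool's actual source), the detection rule is a one-liner:

```javascript
// Auto-detect rule described above: a leading "[" means a JSON
// messages array; anything else is treated as a labelled transcript.
function detectFormat(input) {
  return input.trimStart().startsWith("[") ? "json" : "transcript";
}
```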

3

Choose output style

Plain text gives just the content. JSON snippet wraps it in {"role":"system","content":"..."} so you can paste it back into an API request.
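The two output styles amount to a simple wrap step — a hedged sketch, with an illustrative function name:

```javascript
// Plain text returns the content as-is; JSON snippet wraps it back
// into a role/content object ready to paste into an API request.
function formatResult(content, style) {
  return style === "json"
    ? JSON.stringify({ role: "system", content: content })
    : content;
}
```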

4

Click Extract

Auto-extract is on by default. If no system message is found, the status bar says so.

FAQ

Frequently asked questions

Which input formats are supported?

OpenAI / Anthropic-style JSON messages arrays ([{"role":"system","content":"..."}, ...]) and labelled transcripts (system: ...\nuser: ...). Auto-detect picks the right parser based on the input shape.

What if the log contains more than one system message?

All system messages in the array are concatenated in document order, separated by a blank line.
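The concatenation rule can be sketched in a few lines of JavaScript (a simplified illustration, not the tool's actual code):

```javascript
// Collect every system entry, in document order,
// joined with a blank line between them.
function collectSystem(messages) {
  return messages
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n\n");
}
```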

Is the tool private and free?

Yes. It runs entirely in your browser. No signup.

Why use a real JSON parser instead of a regex?

JSON arrays are structured — content can contain nested quotes, escaped newlines, and even other JSON. A real JSON parser handles those reliably; a regex doesn't.
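To see why, consider content that itself contains quotes and a newline. A real parser decodes the escapes; a naive quote-matching regex would stop at the first escaped quote:

```javascript
// The content field holds escaped quotes (\") and a newline (\n).
// JSON.parse decodes both; /"content":"([^"]*)"/ would cut the
// string off at the first \".
const raw = '[{"role":"system","content":"Say \\"hi\\".\\nThen stop."}]';
const messages = JSON.parse(raw);
// messages[0].content is the decoded string: Say "hi". <newline> Then stop.
```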

Can I build a new system prompt instead of extracting one?

If you want to compose a fresh system prompt with structure, use the System Prompt Formatter.

About

About system prompt extraction

Most LLM APIs use a structured messages array — each entry has a role (system, user, assistant, sometimes tool) and a content field. The system message tells the model who it is and how to behave for the rest of the conversation.

Why pull it out?

  • Audit — verify the system prompt your app actually sends.
  • Share — pass only the system prompt to a teammate without leaking user data.
  • Reuse — extract a working prompt and put it in a library.

Parsing rules

  • JSON array — every entry with role === "system" or type === "system" is collected.
  • Labelled transcript — lines starting with system:, user:, assistant:, or human: open a new block. The content under system: is collected until the next role line.
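Taken together, the two rules above can be sketched like this (a simplified illustration under the stated rules, not the tool's actual source):

```javascript
// Sketch of both parsing rules: JSON array entries with a "system"
// role or type, or system: blocks in a labelled transcript.
function extractSystemPrompt(input) {
  if (input.trimStart().startsWith("[")) {
    // JSON array: collect every entry whose role (or type) is "system".
    return JSON.parse(input)
      .filter((m) => m.role === "system" || m.type === "system")
      .map((m) => m.content)
      .join("\n\n");
  }
  // Labelled transcript: a role line opens a new block; content under
  // system: is collected until the next role line.
  const roleRe = /^(system|user|assistant|human):\s?/i;
  const blocks = [];
  let current = null; // lines of the system block being collected
  for (const line of input.split("\n")) {
    const m = line.match(roleRe);
    if (m) {
      if (current) blocks.push(current.join("\n"));
      current =
        m[1].toLowerCase() === "system" ? [line.replace(roleRe, "")] : null;
    } else if (current) {
      current.push(line);
    }
  }
  if (current) blocks.push(current.join("\n"));
  return blocks.join("\n\n");
}
```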

Related

Related tools