Prompt Template Variables
Substitute {{variables}} in LLM prompt templates. Pass a JSON variable map and the engine expands every reference. Useful for batch prompt generation and A/B testing.
Template + vars → prompt
Every {{key}} is replaced by the matching value in the JSON map. Missing keys are flagged but left in place so you can spot them in the output.
Hello {{name}},
You ordered {{count}} {{item}}.
Total: ${{price}}
Thanks!

Rendered output:

Hello Alice,
You ordered 3 books.
Total: $45.99
Thanks!
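The substitution above can be sketched in a few lines. This is a hypothetical helper, not the tool's actual implementation; it mirrors the behavior described on this page (missing keys left in place, non-strings JSON-stringified).

```typescript
// Minimal sketch of {{key}} substitution (assumed behavior, not the tool's code).
function render(template: string, vars: Record<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, path: string) => {
    // Walk dot-separated path segments into nested objects/arrays.
    const value = path.split(".").reduce<unknown>((o, k) => (o as any)?.[k], vars);
    if (value === undefined) return match; // missing keys stay in place
    return typeof value === "string" ? value : JSON.stringify(value);
  });
}

const prompt = render(
  "Hello {{name}},\nYou ordered {{count}} {{item}}.\nTotal: ${{price}}\nThanks!",
  { name: "Alice", count: 3, item: "books", price: 45.99 }
);
// → "Hello Alice,\nYou ordered 3 books.\nTotal: $45.99\nThanks!"
```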
What you'll use this for
Anywhere a prompt needs to be filled in once per user, row, or variant before being sent to a model.
Batch generation
Render the same prompt N times against N rows of variables for evals or bulk completions.
A/B test variants
Compose one template, swap the variable map per cohort, ship the cleanest expansion.
Email templates
Personalize subject lines and bodies for transactional or marketing sends.
Code generation
Seed scaffolding prompts with project name, language, and feature flags.
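The batch-generation use case above is a simple loop: one template, many variable rows. A hedged sketch, with `render` as an assumed minimal substitution helper (not the tool's code):

```typescript
// Assumed minimal substitution helper, mirroring the behavior this page describes.
function render(template: string, vars: Record<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (m, path: string) => {
    const v = path.split(".").reduce<unknown>((o, k) => (o as any)?.[k], vars);
    return v === undefined ? m : typeof v === "string" ? v : JSON.stringify(v);
  });
}

// One template, N rows → N prompts, ready for bulk completions or evals.
const template = "Summarize this review in a {{tone}} tone:\n{{review}}";
const rows = [
  { tone: "neutral", review: "Great battery, weak camera." },
  { tone: "friendly", review: "Arrived late but works fine." },
];
const prompts = rows.map((row) => render(template, row));
```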
How to substitute variables
Write the template
Use {{key}} placeholders. Whitespace inside the braces is ignored.
Provide JSON
Drop a valid JSON object on the right. Keys map 1:1 to placeholders. Dots descend into nested keys.
Render
Auto-render keeps the output fresh. Missing keys are reported in the status bar.
Copy or download
Save the rendered prompt to disk or paste it straight into your API client.
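The steps above — render, then report missing keys — can be sketched together. `renderWithReport` is a hypothetical name; the tool's status-bar reporting presumably does something equivalent:

```typescript
// Sketch: render a template and collect unresolved keys for reporting.
function renderWithReport(
  template: string,
  vars: Record<string, unknown>
): { output: string; missing: string[] } {
  const missing: string[] = [];
  const output = template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (m, path: string) => {
    const v = path.split(".").reduce<unknown>((o, k) => (o as any)?.[k], vars);
    if (v === undefined) {
      missing.push(path); // flagged, but left in place in the output
      return m;
    }
    return typeof v === "string" ? v : JSON.stringify(v);
  });
  return { output, missing };
}

const { output, missing } = renderWithReport(
  "Hi {{user.name}}, your code is {{code}}.",
  { user: { name: "Sam" } }
);
// output → "Hi Sam, your code is {{code}}."   missing → ["code"]
```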
Frequently asked questions
Can I reference nested keys?
Yes. Use dot notation: {{user.name}} reads vars.user.name. Multiple levels are fine.

Can I index into arrays?
Yes, by numeric index: {{items.0}} reads the first element of vars.items.

Is the tool free to use?
Yes. No signup, no limits. Everything runs locally in your browser.

How do I keep literal {{...}} text in the output?
The engine only replaces matched {{key}} tokens that resolve to a defined value. Unmatched {{...}} blocks are left untouched, so you can use them as literal text by giving them a name that isn't in the map.

Can I reverse a substitution?
Not provided: extracting variables from a rendered prompt requires the original template. Keep both side by side instead.
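While reversing a rendered prompt isn't possible, listing the placeholders in a *template* is trivial, and tells you which variables your JSON map must supply. A sketch (hypothetical helper, not a tool feature):

```typescript
// List every distinct {{key}} placeholder found in a template.
function placeholders(template: string): string[] {
  const names = new Set<string>();
  for (const m of template.matchAll(/\{\{\s*([\w.]+)\s*\}\}/g)) {
    names.add(m[1]); // capture group 1 is the key path
  }
  return [...names];
}

const found = placeholders("Hello {{name}}, total ${{price}} for {{name}}");
// → ["name", "price"]
```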
About template substitution
Prompt templates are a thin layer of Handlebars-style interpolation: drop variables into prose, fill them in per call. Keeping templates in version control while the data lives elsewhere makes prompts auditable and re-runnable.
Syntax supported
{{key}} — plain reference.
{{ key }} — whitespace inside braces is ignored.
{{a.b.c}} — nested keys via dot notation.
{{list.0}} — array index via numeric path segment.
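Each supported form, run through a minimal substitution sketch (an assumed helper mirroring the behavior described above, not the tool's code):

```typescript
// Assumed minimal substitution helper.
function render(template: string, vars: Record<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (m, path: string) => {
    const v = path.split(".").reduce<unknown>((o, k) => (o as any)?.[k], vars);
    return v === undefined ? m : typeof v === "string" ? v : JSON.stringify(v);
  });
}

const vars = { a: { b: { c: "deep" } }, list: ["first", "second"], key: "x" };
render("{{key}}", vars);    // → "x"
render("{{ key }}", vars);  // → "x" (inner whitespace ignored)
render("{{a.b.c}}", vars);  // → "deep"
render("{{list.0}}", vars); // → "first"
```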
Why not Mustache / Handlebars?
- This is a deliberately tiny subset — no helpers, no partials, no loops.
- For real templating use the full library at build time. For LLM prompts, plain substitution is almost always enough.
- Values that aren't strings are JSON-stringified, so objects render as JSON literals.
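The last point in practice: a non-string value lands in the prompt as a JSON literal. A sketch under the same assumed substitution helper:

```typescript
// Assumed minimal substitution helper; non-strings are JSON-stringified.
function render(template: string, vars: Record<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (m, path: string) => {
    const v = path.split(".").reduce<unknown>((o, k) => (o as any)?.[k], vars);
    return v === undefined ? m : typeof v === "string" ? v : JSON.stringify(v);
  });
}

const out = render("Config: {{settings}}", {
  settings: { model: "gpt-4", temp: 0.2 },
});
// → 'Config: {"model":"gpt-4","temp":0.2}'
```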