Your system prompt was never a secret.
Extracted via JustAsk, a fully autonomous self-evolving code agent framework.
Extracted system prompts from 45 commercial LLMs and code agents. Click any row to view. Claude Code prompts verified against leaked source.
| # | Model | Provider | Consistency |
|---|
Help expand the gallery. Submit extraction results or request new models.
| Step | What to do |
|---|---|
| 1. Extract | Use JustAsk or your own method to extract a system prompt via the model's API. |
| 2. Verify | Run multiple extractions and compute self-consistency. Higher consistency = more reliable result. |
| 3. Submit | Open an Issue with model name, extracted prompt, and consistency score. We handle redaction. |
Some models confuse their own identity during extraction, revealing system-level prompt fragments from other models or internal codenames.
When probed about its system instructions, Qwen-2.5 occasionally prefixed responses with GPT-4's known system preamble before correcting itself.
Multiple models leaked internal codenames or version identifiers not present in their public documentation during multi-turn extraction.
Fine-tuned variants sometimes produced fragments of the base model's system prompt alongside their own customized instructions.
Claude Code's source was leaked via a .map file in the npm registry (March 2026). We compared it against our JustAsk extractions from January 2026.
Only missed "pip install" in bash restrictions
Embellished output format with brief reasons not in source
Missed completeness directive, output format wording drifted
Missed 2 entire sections, ~40% of code-style sub-items
JustAsk formulates extraction as an online exploration problem. No handcrafted prompts, no labeled data, no privileged access.
UCB-guided skill selection balances exploration and exploitation
Multi-turn extraction via hierarchical skill space
Self-evolving rules and statistics from interaction alone
Disclaimer — This project is released solely for academic safety research, responsible disclosure, and evaluation of LLM security. The purpose of this work is to help the research community understand system prompt confidentiality and develop effective defenses — not to enable harm. WE DO NOT ALLOW any use of these materials for unauthorized extraction, prompt theft, or exploitation of commercial systems. WE DO NOT ALLOW any misuse of this research.
| Date | Update | |
|---|---|---|
| 2026-03 | Ground-truth verification: Claude Code extractions match leaked source at 85-95% accuracy. | |
| 2026-02 | Open-sourced System Prompt Open gallery with 45 extracted system prompts. | |
| 2026-01 | Paper and JustAsk framework released. Initial extraction of 45 frontier LLMs. |
Xiang Zheng, Yutao Wu, Hanxun Huang, Yige Li, Xingjun Ma, Bo Li, Yu-Gang Jiang, Cong Wang
@article{zheng2026justask,
title={Just Ask: Curious Code Agents Reveal System
Prompts in Frontier LLMs},
author={Zheng, Xiang and Wu, Yutao and Huang, Hanxun
and Li, Yige and Ma, Xingjun and Li, Bo
and Jiang, Yu-Gang and Wang, Cong},
journal={arXiv preprint arXiv:2601.21233},
year={2026}
}