+ "details": "## Summary\n\nMultiple functions in `langchain_core.prompts.loading` read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to `load_prompt()` or `load_prompt_from_config()`, an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (`.txt` for templates, `.json`/`.yaml` for examples).\n\n**Note:** The affected functions (`load_prompt`, `load_prompt_from_config`, and the `.save()` method on prompt classes) are undocumented legacy APIs. They are superseded by the `dumpd`/`dumps`/`load`/`loads` serialization APIs in `langchain_core.load`, which do not perform filesystem reads and use an allowlist-based security model. As part of this fix, the legacy APIs have been formally deprecated and will be removed in 2.0.0.\n\n## Affected component\n\n**Package:** `langchain-core`\n**File:** `langchain_core/prompts/loading.py`\n**Affected functions:** `_load_template()`, `_load_examples()`, `_load_few_shot_prompt()`\n\n## Severity\n\n**High** \n\nThe score reflects the file-extension constraints that limit which files can be read.\n\n## Vulnerable code paths\n\n| Config key | Loaded by | Readable extensions |\n|---|---|---|\n| `template_path`, `suffix_path`, `prefix_path` | `_load_template()` | `.txt` |\n| `examples` (when string) | `_load_examples()` | `.json`, `.yaml`, `.yml` |\n| `example_prompt_path` | `_load_few_shot_prompt()` | `.json`, `.yaml`, `.yml` |\n\nNone of these code paths validated the supplied path against absolute path injection or `..` traversal sequences before reading from disk.\n\n## Impact\n\nAn attacker who controls or influences the prompt configuration dict can read files outside the intended directory:\n\n- **`.txt` files:** cloud-mounted secrets (`/mnt/secrets/api_key.txt`), `requirements.txt`, internal system prompts\n- **`.json`/`.yaml` files:** 
cloud credentials (`~/.docker/config.json`, `~/.azure/accessTokens.json`), Kubernetes manifests, CI/CD configs, application settings\n\nThis is exploitable in applications that accept prompt configs from untrusted sources, including low-code AI builders and API wrappers that expose `load_prompt_from_config()`.\n\n## Proof of concept\n\n```python\nfrom langchain_core.prompts.loading import load_prompt_from_config\n\n# Reads /tmp/secret.txt via absolute path injection\nconfig = {\n \"_type\": \"prompt\",\n \"template_path\": \"/tmp/secret.txt\",\n \"input_variables\": [],\n}\nprompt = load_prompt_from_config(config)\nprint(prompt.template) # file contents disclosed\n\n# Reads ../../etc/secret.txt via directory traversal\nconfig = {\n \"_type\": \"prompt\",\n \"template_path\": \"../../etc/secret.txt\",\n \"input_variables\": [],\n}\nprompt = load_prompt_from_config(config)\n\n# Reads arbitrary .json via few-shot examples\nconfig = {\n \"_type\": \"few_shot\",\n \"examples\": \"../../../../.docker/config.json\",\n \"example_prompt\": {\n \"_type\": \"prompt\",\n \"input_variables\": [\"input\", \"output\"],\n \"template\": \"{input}: {output}\",\n },\n \"prefix\": \"\",\n \"suffix\": \"{query}\",\n \"input_variables\": [\"query\"],\n}\nprompt = load_prompt_from_config(config)\n```\n\n## Mitigation\n\n**Update `langchain-core` to >= 1.2.22.**\n\nThe fix adds path validation that rejects absolute paths and `..` traversal sequences by default. An `allow_dangerous_paths=True` keyword argument is available on `load_prompt()` and `load_prompt_from_config()` for trusted inputs.\n\nAs described above, these legacy APIs have been formally deprecated. Users should migrate to `dumpd`/`dumps`/`load`/`loads` from `langchain_core.load`.\n\n## Credit\n\n- [jiayuqi7813](https://github.com/jiayuqi7813) reporter\n- [VladimirEliTokarev](https://github.com/VladimirEliTokarev) reporter\n- [Rickidevs](https://github.com/Rickidevs) reporter\n- Kenneth Cox (cczine@gmail.com) reporter",