Prompt injection
Attack where adversarial input redirects the model away from its intended instructions. Mitigated by privileged-source system prompts and structured-input boundaries.
See also: allowlist
Attack where adversarial input redirects the model away from its intended instructions. Mitigated by privileged-source system prompts and structured-input boundaries.
See also: allowlist