Source injection
Security Best Practice: Protecting Against Prompt Injection from Data Sources
When integrating external data with a Large Language Model (LLM), it is crucial to treat all data as untrusted. Any source, from a scraped website to an uploaded document, can contain hidden instructions designed to manipulate the AI, a technique known as Prompt Injection.
A successful injection can cause the model to ignore its original purpose, leak sensitive information, or generate harmful content.
Key Takeaway: Before processing, always implement strict sanitization and validation on your data to neutralize potential threats.
Last updated
Was this helpful?