Is prompt injection only a concern for public bots?
No. Prompt injection is about untrusted content, not just who can DM the bot. If your assistant reads external content (web search/fetch, browser pages, emails, docs, attachments, pasted logs), that content can include instructions that try to hijack the model. This can happen even if you are the only sender.
The biggest risk is when tools are enabled: the model can be tricked into exfiltrating context or calling tools on your behalf. Reduce the blast radius by:
- using a read-only or tool-disabled “reader” agent to summarize untrusted content
- keeping web_search / web_fetch / browser off for tool-enabled agents
- sandboxing and strict tool allowlists
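The allowlist idea can be sketched in a few lines: before any model-emitted tool call executes, check the tool name against a small read-only set and block everything else. This is a minimal illustration, not a complete defense; the tool names (`read_file`, `summarize`, `web_fetch`) and the `guard_tool_call` helper are hypothetical.

```python
# Minimal sketch of a strict tool allowlist for a "reader" agent.
# Tool names and the guard function are hypothetical examples.

READER_ALLOWLIST = {"read_file", "summarize"}  # read-only tools only


def guard_tool_call(tool_name: str, allowlist: set[str]) -> bool:
    """Return True if the call may run; False means block it."""
    return tool_name in allowlist


# A legitimate read-only call passes the guard:
assert guard_tool_call("read_file", READER_ALLOWLIST)

# If injected content tricks the model into fetching an attacker URL,
# the call is blocked because web_fetch is not on the allowlist:
assert not guard_tool_call("web_fetch", READER_ALLOWLIST)
```

The guard runs outside the model, so injected instructions cannot talk their way past it; the worst a hijacked reader can do is misuse the few read-only tools it has.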
Details: Security.