Tag Archives: jailbreaking
Jailbreaking LLMs: Risks of giving end users direct access to LLMs in applications
All open LLMs are released with built-in guardrails governing what they will and will not answer. These guardrails are essentially sample conversational data used to train the models in the … Continue reading
Posted in Uncategorized
Tagged adversarial attacks on LLMs, jailbreaking, LLMs, risk management
1 Comment