A BankInfoSecurity article reports that with a little creativity, you can get a chatbot to share someone else’s passwords.
05/23/2024 3:30 P.M.
1 minute read
Researchers recently found that by using a little “creativity,” it’s possible—sometimes even easy—to trick a chatbot into revealing someone else’s passwords, according to a BankInfoSecurity article.
“Generative artificial intelligence chatbots are susceptible to manipulation by people of all skill levels, not just cyber experts,” according to the article.
This insight came from researchers at Immersive Labs, who held a public contest to see how easily different prompts could trick a chatbot into revealing a password.
More than 34,000 participants “used prompting techniques such as asking the bot for the sensitive information directly, or for a hint to what the password might be if it refused,” according to the article. “They also asked the bot to respond with emoticons describing the password, such as a lion and a crown if the password was Lion King. At higher levels with increasingly better security, the participants asked the bot to ignore the original instructions that made it safer and advised it to write the password backwards, use the password as part of a story or write it in a specific format such as Morse code and base 64.”
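The encoding tricks described above work because simple guardrails often match only the literal secret string. A minimal sketch of such a filter, and its blind spots, is below; the secret "LionKing" and the `leaks_secret` function are illustrative assumptions, not from the study.

```python
import base64

# Hypothetical secret used only for illustration; not from the article.
SECRET = "LionKing"

# Minimal Morse table covering just the letters this example needs.
MORSE = {
    "G": "--.", "I": "..", "K": "-.-", "L": ".-..",
    "N": "-.", "O": "---",
}

def to_morse(text: str) -> str:
    """Encode alphabetic text as space-separated Morse code."""
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

def leaks_secret(response: str, secret: str) -> bool:
    """Naive output filter: flag the secret in plain, reversed,
    Base64, or Morse form. Indirect leaks (stories, emoticon
    hints) still evade this kind of string matching, which is
    what the contest participants exploited."""
    encodings = {
        secret,
        secret[::-1],  # password written backwards
        base64.b64encode(secret.encode()).decode(),
        to_morse(secret),
    }
    return any(e in response for e in encodings)
```

Note that `leaks_secret("a lion and a crown", SECRET)` returns `False`: the emoticon-style hint from the contest sails straight past a filter like this, which is why output filtering alone is a weak guardrail.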
Kevin Breen, director of cyber threat research at Immersive, told SecurityWeek that while many customer-developed chatbots employ their own guardrails to protect against prompt engineering, “Many of them have none of their own protections in place. They just rely on OpenAI’s guardrails—they just rely on using the gen-AI backend to do the hardening.”
The Immersive Labs study called for public- and private-sector cooperation, along with corporate policies, to help mitigate the security risks.
Remember, subscribe to ACA Daily and Member Alerts under your My ACA profile when logged in to acainternational.org.