Lukas Aichberger
ELLIS PhD student in Machine Learning at Johannes Kepler University Linz and the University of Oxford
- Reposted by Lukas Aichberger: Hot take: I think we just demonstrated the first AI agent computer worm 🤔 When an agent sees a trigger image, it is instructed to execute malicious code and then share the image on social media, triggering other users' agents. This is a chance to talk about agent security 👇
- ⚠️ Beware: Your AI assistant could be hijacked just by encountering a malicious image online! Our latest research exposes critical security risks in AI assistants. An attacker can hijack them by simply posting an image on social media and waiting for it to be captured. [1/6] 🧵
- 💻 AI assistants, known as OS agents, autonomously control computers just like humans do. They navigate by analysing the screen and take actions via mouse and keyboard. OS agents could soon take over everyday tasks, saving users time and effort. [2/6]
- 🔓 Our work reveals that OS agents are not ready for safe integration into everyday life. Attackers can craft Malicious Image Patches (MIPs): subtle modifications to an image on the screen that, once encountered by an OS agent, deceive it into carrying out harmful actions (a rough sketch of this kind of patch optimisation follows the thread). [3/6]
- 🏛️ This work was made possible with OATML and TVG at the University of Oxford (@ox.ac.uk). Special thanks to @yaringal.bsky.social, @adelbibi.bsky.social, @philiptorr.bsky.social, and @alasdair-p.bsky.social for their contributions. 📖 Read the paper: www.arxiv.org/abs/2503.10809
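To make the MIP idea concrete, here is a minimal, hypothetical sketch of a PGD-style patch optimisation against a toy screen-reading policy. The ToyScreenPolicy module, the craft_malicious_patch function, and every hyperparameter below are illustrative stand-ins, not the models or attack used in the paper; see www.arxiv.org/abs/2503.10809 for the actual method.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for an OS agent's vision policy: it maps a screenshot
# tensor to logits over a small discrete action vocabulary. The real agents in
# the paper use large vision-language models; this toy module only illustrates
# the optimisation loop.
class ToyScreenPolicy(torch.nn.Module):
    def __init__(self, num_actions=16):
        super().__init__()
        self.backbone = torch.nn.Sequential(
            torch.nn.Conv2d(3, 8, kernel_size=5, stride=4),
            torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1),
            torch.nn.Flatten(),
            torch.nn.Linear(8, num_actions),
        )

    def forward(self, screenshot):
        return self.backbone(screenshot)


def craft_malicious_patch(policy, screenshot, target_action, patch_box,
                          steps=200, step_size=2 / 255, eps=16 / 255):
    """PGD-style sketch: perturb only the pixels inside patch_box so that the
    policy's most likely action becomes the attacker-chosen target_action."""
    y0, y1, x0, x1 = patch_box
    patch = screenshot[:, :, y0:y1, x0:x1].clone()
    delta = torch.zeros_like(patch, requires_grad=True)
    target = torch.tensor([target_action])

    for _ in range(steps):
        adv = screenshot.clone()
        adv[:, :, y0:y1, x0:x1] = (patch + delta).clamp(0, 1)
        loss = F.cross_entropy(policy(adv), target)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # step toward the target action
            delta.clamp_(-eps, eps)                 # keep the patch visually subtle
            delta.grad = None

    adv = screenshot.clone()
    adv[:, :, y0:y1, x0:x1] = (patch + delta.detach()).clamp(0, 1)
    return adv


policy = ToyScreenPolicy()
screenshot = torch.rand(1, 3, 224, 224)          # stand-in for a captured screen
adv_screen = craft_malicious_patch(policy, screenshot, target_action=3,
                                   patch_box=(20, 60, 20, 60))
print(policy(adv_screen).argmax(dim=-1))         # ideally tensor([3]), the attacker's action
```

The point of the sketch is that only the pixels inside patch_box are touched, and the perturbation is clipped to a small eps so the patch stays inconspicuous to a human while still steering the policy toward the attacker's chosen action.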
- Reposted by Lukas Aichberger: LLMs often hallucinate because of semantic uncertainty caused by missing factual training data. We propose a method that detects such uncertainty from a single generated output sequence, making it a highly efficient way to detect hallucinations in LLMs.
- 𝗡𝗲𝘄 𝗣𝗮𝗽𝗲𝗿 𝗔𝗹𝗲𝗿𝘁: Rethinking Uncertainty Estimation in Natural Language Generation 🌟 Introducing 𝗚-𝗡𝗟𝗟, a theoretically grounded and highly efficient uncertainty estimate, perfect for scalable LLM applications 🚀 Dive into the paper: arxiv.org/abs/2412.15176 👇
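As a rough illustration of the single-sequence idea (not the paper's exact implementation), the sketch below computes a G-NLL-style score: the negative log-likelihood of the greedily decoded answer. The choice of gpt2 as a stand-in model and the example prompt are assumptions made only to keep the snippet runnable.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch of a G-NLL-style uncertainty score: the negative
# log-likelihood of the greedily decoded output, computed from a single
# generation. gpt2 is only a small stand-in model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Austria is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=8,
        do_sample=False,                 # greedy decoding: one output sequence
        return_dict_in_generate=True,
        output_scores=True,
        pad_token_id=tokenizer.eos_token_id,
    )

# Log-probabilities of the generated tokens under the model.
gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
logprobs = [
    torch.log_softmax(score[0], dim=-1)[tok]
    for score, tok in zip(out.scores, gen_tokens)
]
g_nll = -torch.stack(logprobs).sum()     # higher value => more uncertain answer

print(tokenizer.decode(gen_tokens), g_nll.item())
```

Because only one greedy generation is needed, the score comes essentially for free with the answer itself, in contrast to sampling-based uncertainty estimates that require many generations per query.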