Nicolas Papernot
Security and Privacy of Machine Learning at UofT, Vector Institute, and Google 🇨🇦🇫🇷🇪🇺 Co-Director of Canadian AI Safety Institute (CAISI) Research Program at CIFAR. Opinions mine
- Thank you to Samsung for the AI Researcher of 2025 award! I'm privileged to collaborate with many talented students & postdoctoral fellows @utoronto.ca @vectorinstitute.ai. This would not have been possible without them! It was a great honour to receive the award from @yoshuabengio.bsky.social!
- Thank you to @schmidtsciences.bsky.social for funding our lab's work on cryptographic approaches for verifiable guarantees in ML systems and for connecting us to other groups working on these questions!
- How can we build AI systems the world can trust? AI2050 Early Career Fellow Nicolas Papernot explores how cryptographic audits and verifiability can make machine learning more transparent, accountable, and aligned with societal values. Read the full perspective: buff.ly/JjTnRjm
- Excited to share the first batch of research projects funded through the Canadian AI Safety Institute's research program at CIFAR! The projects will tackle topics ranging from misinformation to safety in AI applications to scientific discovery. Learn more: cifar.ca/cifarnews/20...
- If you are submitting to @ieeessp.bsky.social this year, a friendly reminder that the abstract submission deadline is this Thursday, May 29 (AoE). More details: sp2026.ieee-security.org/cfpapers.html
- Congratulations again, Stephan, on this brilliant next step! Looking forward to what you will accomplish with @randomwalker.bsky.social & @msalganik.bsky.social!
- Starting off this account with a banger: In September 2025, I will be joining @princetoncitp.bsky.social at Princeton University as a Postdoc working with @randomwalker.bsky.social & @msalganik.bsky.social! I am very excited about this opportunity to continue my work on trustworthy/reliable ML! 🥳
- The Canadian AI Safety Institute (CAISI) Research Program at CIFAR is now accepting Expressions of Interest for Solution Networks in AI Safety under two themes: * Mitigating the Safety Risks of Synthetic Content * AI Safety in the Global South. cifar.ca/ai/ai-and-so...
- I will be giving a talk at the MPI-IS @maxplanckcampus.bsky.social in Tübingen next week (March 12 @ 11am). The talk will cover my group's overall approach to trust in ML, with a focus on our work on unlearning and how to obtain verifiable guarantees of trust. Details: is.mpg.de/events/speci...
- For Canadian colleagues, CIFAR and the CPI at UWaterloo are sponsoring a special issue "Artificial Intelligence Safety and Public Policy in Canada" in Canadian Public Policy / Analyse de politiques More details: www.cpp-adp.ca
- One of the first components of the CAISI (Canadian AI Safety Institute) research program has just launched: a call for Catalyst Grant Projects on AI Safety. Funding: up to 100K for one year. Deadline to apply: February 27, 2025 (11:59 AoE). More details: cifar.ca/ai/cifar-ai-...
- If you work at the intersection of security, privacy, and machine learning, or more broadly on how to trust ML, SaTML is a small-scale conference with highly relevant work, where you'll be able to have high-quality conversations with colleagues working in your area.