Onno Eberhard
PhD Student in Tübingen (MPI-IS & Uni Tü), interested in reinforcement learning. Freedom is a pure idea. onnoeberhard.com
- Reposted by Onno Eberhard: Nicolo Cesa-Bianchi and Matteo Papini are putting together a great unconference workshop at the @ellis.eu day at @euripsconf.bsky.social. If you want to talk about RL, causality, bandits, or online learning, join us there on December 2nd: sites.google.com/view/ilir-wo...
- Reposted by Onno Eberhard: Truly chuffed for our fearless food physicists @mpipks.bsky.social + collabs from AT @istaresearch.bsky.social, IT & ES who won this year’s Ig Nobel - the #NobelPrize of hearts ❤️ - for cracking the science of perfect pasta! 🍝 Kudos to all for intrepidly consuming lots of cheese in the name of science! 😋
- I wrote a short post on our newest ICML paper, aimed at people who are not experts in machine learning. Check it out!
- In our latest blog post, @onnoeberhard.com writes about work presented at #ICML2025 on partially observable reinforcement learning, which introduces an alternative memory framework: “memory traces”. aihub.org/2025/09/12/m...
- A cute little animation: a critically damped harmonic oscillator becomes unstable under integral control if the gain is too high. Here, at K_i = 2, a Hopf bifurcation occurs: two poles of the transfer function cross into the right half of the s-plane and the closed-loop system becomes unstable.
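The pole crossing described above can be checked numerically. This is a minimal sketch with my own parameter choices (plant G(s) = ω²/(s + ω)², ω = 1, pure integral controller C(s) = K_i/s), not necessarily the exact setup of the animation:

```python
import numpy as np

# Plant: critically damped oscillator G(s) = w^2 / (s + w)^2
# Controller: pure integrator C(s) = Ki / s
# Closed-loop characteristic polynomial:
#   s (s + w)^2 + Ki w^2 = s^3 + 2 w s^2 + w^2 s + Ki w^2
def closed_loop_poles(Ki, w=1.0):
    return np.roots([1.0, 2 * w, w**2, Ki * w**2])

for Ki in (1.5, 2.0, 2.5):
    p = closed_loop_poles(Ki)
    print(f"Ki={Ki}: max Re(pole) = {p.real.max():+.3f}")
# As Ki crosses 2, a complex-conjugate pole pair crosses the imaginary
# axis (the Hopf bifurcation) and the closed loop becomes unstable.
```

For w = 1 and K_i = 2 the polynomial factors as (s + 2)(s² + 1), so the crossing pair sits exactly at ±i, consistent with the Routh–Hurwitz condition K_i < 2ω for stability.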
- I am in Vancouver at ICML, and tomorrow I will present our newest paper "Partially Observable Reinforcement Learning with Memory Traces". We argue that eligibility traces are more effective than sliding windows as a memory mechanism for RL in POMDPs. 🧵
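The contrast between the two memory mechanisms can be sketched in a few lines. This is a hedged illustration of the general idea (an exponentially decayed running summary of observation features instead of a stacked window); the decay parameter `lam` and the convex-combination form are my assumptions, not necessarily the paper's exact definition:

```python
import numpy as np

def update_trace(trace, obs_features, lam=0.9):
    # Decayed running summary of past observations: older observations
    # fade geometrically instead of being dropped at a hard window edge.
    return lam * trace + (1.0 - lam) * obs_features

# A sliding window of k observations stores k feature vectors; the
# trace stays a single vector regardless of history length.
rng = np.random.default_rng(0)
trace = np.zeros(4)
for _ in range(100):
    obs = rng.standard_normal(4)  # stand-in for observation features
    trace = update_trace(trace, obs)
print(trace.shape)  # one fixed-size vector summarizing 100 steps
```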
- Great talk by @claireve.bsky.social about our joint work on memory traces this morning. Come join me at poster 94 if you want to know more! #RLDM2025
- I'm flying to Michigan today to present our new paper "A Pontryagin Perspective on Reinforcement Learning" at L4DC, where it has been nominated for the Best Paper Award! We ask the question: is it possible to learn an open-loop controller via RL? 🧵
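An open-loop controller is just a fixed action sequence, with no feedback from the state. As a toy illustration of the Pontryagin flavor of this question, here is a sketch that optimizes an open-loop sequence on a double integrator using the backward costate recursion to get exact gradients. The system (A, B, Q, r) and all hyperparameters are my own illustrative choices, not the paper's algorithm:

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # double-integrator dynamics
B = np.array([0.0, 0.1])                # control pushes on velocity
Q = np.eye(2)                           # state cost x^T Q x
r = 0.1                                 # control cost r u^2
T = 30
x0 = np.array([1.0, 0.0])

def rollout_cost(u):
    x, J = x0, 0.0
    for t in range(T):
        J += x @ Q @ x + r * u[t] ** 2
        x = A @ x + B * u[t]
    return J + x @ Q @ x                # terminal cost

def gradient(u):
    xs = [x0]
    for t in range(T):                  # forward rollout
        xs.append(A @ xs[-1] + B * u[t])
    lam = 2 * Q @ xs[T]                 # terminal costate
    g = np.zeros(T)
    for t in reversed(range(T)):        # backward costate recursion
        g[t] = 2 * r * u[t] + B @ lam   # dJ/du_t
        lam = 2 * Q @ xs[t] + A.T @ lam # lam_t = 2 Q x_t + A^T lam_{t+1}
    return g

u = np.zeros(T)                         # open-loop action sequence
for _ in range(300):                    # plain gradient descent on u
    u -= 0.01 * gradient(u)
```

The costate recursion is the discrete-time analogue of the adjoint equation in Pontryagin's maximum principle: one forward rollout plus one backward pass yields the exact gradient of the total cost with respect to every action.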
- EWRL has been my favorite conference experience so far. Very excited that we are organizing it in Tübingen this year!
- Reposted by Onno Eberhard: Mark your calendars, EWRL is coming to Tübingen! 📅 When? September 17-19, 2025. More news to come soon, stay tuned!
- Truly inspiring work.
- Are you tired of context-switching between coding models in @pytorch.org and paper writing on @overleaf.com? Well, I’ve got the fix for you: Neuralatex, an ML library written in pure LaTeX! neuralatex.com To appear in SIGBOVIK (subject to rigorous review process)