Announcing the 19th European Workshop on Reinforcement Learning!
We are looking forward to seeing you in Lille, France on October 5th–7th, 2026.
More details to come (ewrl-org.github.io/ewrl-2026/in...) 👀
🎤 Announcing the 3rd workshop on Reinforcement Learning in Mannheim 🎤
We have an amazing lineup of speakers: @Mathieugeist, @gio_ramponi, Theresa Eimer, @SarahKeren_, @araffin2, @c_rothkopf, and @AdrienBolland
⏰ Friday, 6th February 2026
📍University of Mannheim
This one-day event brings together researchers, practitioners, and students interested in the theoretical and practical aspects of Reinforcement Learning (RL).
In addition to the talks, there will be a poster session, where everyone is welcome to present completed or ongoing work.
Organizers: Leif Döring, Théo Vincent, @claireve.bsky.social, Simon Weißmann
Exciting workshop for RL enthusiasts in Mannheim! 👇
The Workshop on Reinforcement Learning 2026 takes place on 𝐅𝐞𝐛𝐫𝐮𝐚𝐫𝐲 𝟔, 𝟐𝟎𝟐𝟔, at the 𝐔𝐧𝐢𝐯𝐞𝐫𝐬𝐢𝐭𝐲 𝐨𝐟 𝐌𝐚𝐧𝐧𝐡𝐞𝐢𝐦, Germany.
Participation in the workshop is 𝐟𝐫𝐞𝐞 𝐨𝐟 𝐜𝐡𝐚𝐫𝐠𝐞!
Check the program and register: www.wim.uni-mannheim.de/doering/conf...
✨ The last day kicked off with an amazing talk by @katjahofmann.bsky.social
"World and Human Action Models for Gameplay Ideation" 🎮🤖
Exciting vision from the Game Intelligence team @msftresearch.bsky.social
Second day of #EWRL2025 kicking off with an inspiring talk by Peter Dayan!
“How could it be that we, or an agent, could want something that it does not like, or like something that it would not be willing to exert any effort to acquire?”
Here at #EWRL: demonstration of autonomous tomato harvesting by polybot.eu
#Robot #Harvesting #Learning
Fascinating talk by Amy Zhang on Proto Successor Measures at #EWRL 2025.
euro-workshop-on-reinforcement-learning.github.io/ewrl18/progr...
#reinforcementlearning #robotics #machinelearning
Abstract: Reinforcement learning (RL) with primitive actions often leads to inefficient exploration and brittle behaviors. Extended action representations, such as motion primitives (MPs), offer a more structured approach: they encode trajectories with a concise set of parameters, naturally yielding smooth behaviors and enabling exploration in parameter space rather than in raw action space. This parametrization allows black-box RL algorithms to adapt MP parameters to diverse contexts and initial states, providing a pathway toward versatile skill acquisition. However, standard MP-based approaches result in open-loop policies; to address this, we extend them with online replanning of MP trajectories and off-policy learning strategies that exploit single-time-step information. Building on this foundation, we introduce a novel algorithm for skill discovery with MPs that leverages maximum entropy RL and mixture-of-experts models to autonomously acquire diverse, reusable skills. Finally, we present diffusion policies as a more expressive policy class for maximum entropy RL, and highlight their advantageous properties for stability, flexibility, and scalability in complex domains. Together, these contributions demonstrate how extended action representations and advanced policy models can advance the efficiency and versatility of RL.
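For readers new to motion primitives, here is a rough sketch of the "concise set of parameters" idea from the abstract. This is not code from the talk: the Gaussian basis functions, widths, and weight scales are all illustrative placeholders. A trajectory is decoded from a small weight vector, so exploration perturbs the weights once per episode instead of adding noise to every raw action.

```python
import numpy as np

def rbf_features(t, n_basis=10, width=0.05):
    """Gaussian radial basis functions evaluated at phase t in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-((t - centers) ** 2) / (2 * width))
    return phi / phi.sum()  # normalized so features sum to 1

def mp_trajectory(weights, n_steps=50):
    """Decode a weight vector into a smooth 1-D trajectory."""
    ts = np.linspace(0.0, 1.0, n_steps)
    return np.array([rbf_features(t, len(weights)) @ weights for t in ts])

# Exploration in parameter space: sample one weight vector per episode.
rng = np.random.default_rng(0)
mean_weights = np.zeros(10)
sampled_weights = mean_weights + 0.1 * rng.standard_normal(10)
traj = mp_trajectory(sampled_weights)
```

Because nearby phases activate overlapping basis functions, the decoded trajectory is smooth by construction, which is what makes parameter-space exploration well behaved.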
⏳ Just a few days to go!
We’re excited to share a glimpse of what Gerhard Neumann will tell us:
🎤Title: "From Extended Action Representations to Versatile Policy Learning in Reinforcement Learning"
📝 Abstract in the comments
🚀 Less than one week to EWRL 2025!
👀 Did you check the program? euro-workshop-on-reinforcement-learning.github.io/ewrl18/progr...
🎟️ Register here: site.pheedloop.com/event/EWRL/h...
Can’t wait to meet you all in person! #EWRL2025
Abstract: As reinforcement learners, humans and other animals are excellent at improving their otherwise miserable lot in life. This is often described in terms of optimizing utility. However, understanding utility in a non-circular manner is surprisingly difficult. I will talk about an example of the complexity that has important psychological and neural resonance - namely the distinct concepts of 'liking' and 'wanting'. The former characterizes an immediate hedonic experience; and the latter the motivational force associated with that experience. How could it be that we, or an agent, could 'want' something that it does not 'like', or 'like' something that it would not be willing to exert any effort to acquire? I will suggest a framework for answering these questions through the medium of potential-based shaping - in which 'liking' provides immediate, but preliminary and ultimately cancellable, information about the true, long-run worth of outcomes.
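As background on the potential-based shaping the abstract refers to: adding a shaping term of the form gamma * phi(s') - phi(s) to the reward is known to leave optimal policies unchanged, because the terms telescope along any trajectory. The toy potential and states below are made-up placeholders, not material from the talk.

```python
# Potential-based reward shaping: F(s, s') = gamma * phi(s') - phi(s).
# Along a trajectory the discounted shaping terms telescope down to
# gamma^T * phi(s_T) - phi(s_0), so no policy ordering changes.
gamma = 0.9

def phi(state):
    """Toy potential: an immediate 'liking' signal for each state."""
    return {"A": 0.0, "B": 1.0, "C": 0.5}[state]

def shaped_reward(reward, state, next_state):
    return reward + gamma * phi(next_state) - phi(state)

trajectory = ["A", "B", "C"]
total_shaping = sum(
    gamma ** i * (gamma * phi(s2) - phi(s1))
    for i, (s1, s2) in enumerate(zip(trajectory, trajectory[1:]))
)
# Telescopes to gamma^2 * phi("C") - phi("A") = 0.81 * 0.5 - 0 = 0.405
```

The 'liking' signal in the talk's framing plays the role of phi: it is informative in the moment, but its net contribution cancels out over the long run.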
🎙️ Let’s hear from our next speaker: Peter Dayan, Director of the Max Planck Institute for Biological Cybernetics.
🧠 Talk title: Liking, Shaping and Biological Alignment
Abstract in the comments.
✨ There’s still time to register for EWRL 2025!
Register here: site.pheedloop.com/event/EWRL/h...
Abstract: Modeling complex environments and realistic human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent research advances from the Game Intelligence team at Microsoft Research, towards scalable machine learning architectures that effectively model human gameplay, and our vision of how these innovations could empower creatives in the future.
Let’s take a look at our keynote speaker @katjahofmann.bsky.social's talk at #EWRL2025! 🎤
Title: "World and Human Action Models for Gameplay Ideation"
👉 Abstract in the comments
👉 Register here for EWRL 2025: site.pheedloop.com/event/EWRL/h...
📣 Early bird registration ends today!
Register and join us in Tübingen for EWRL 2025: site.pheedloop.com/event/EWRL/h...
📣Registration for EWRL is now open📣
Register now 👇 and join us in Tübingen for 3 days (17th-19th September) full of inspiring talks, posters and many social activities to push the boundaries of the RL community!
📢 Reviews are out!
Many thanks to all the authors who submitted to EWRL — and to the reviewers for their careful evaluations.
Stay tuned: updates on contributed talks are coming soon ⚡