News

📝 ‘EvoLLM’ & ‘EvoTransformer’ Papers Are Out 🎉

March 06, 2024

News, The internet, Tokyo, Japan

Stoked to share that two projects I worked on during my Google DeepMind student researcher time in Tokyo 🗼 are now available on arXiv! We explore the capabilities of Transformers for Evolutionary Optimization. More specifically, our first work, EvoLLM 💬, shows that LLMs, which were trained purely on text, can be used as powerful recombination operators for Evolution Strategies. You can find the paper here. Furthermore, our second work, Evolution Transformer 🤖, uses supervised pre-training of Transformers to act like Evolution Strategies via Algorithm Distillation of teacher algorithms. We explore fine-tuning using meta-evolution and outline a strategy to train the Transformer in a fully self-referential fashion. You can find the paper here.

🎉 I am a RS & Founding Member @Sakana.AI 🐠

January 16, 2024

News, The internet, Tokyo, Japan

Super excited to share that I joined Sakana.AI as a research scientist and founding member. I will be working at the intersection of large models and evolution, building a nature-inspired foundation model. David’s and Yujin’s work has deeply shaped my own research agenda and I am stoked for everything that is still to come! 🎉

DAAD IFI Scholarship for @Oxford 🎉

October 06, 2023

News, The internet, Oxford, UK

Super excited to have received an IFI DAAD scholarship for visiting FLAIR @University of Oxford for 5 months starting in January 2024. Very grateful for this opportunity to finish up my PhD journey. Update: I unfortunately had to decline the scholarship and joined Sakana.AI as a research scientist and founding member.

📝 ‘NeuroEvoBench’ Accepted @NeurIPS DSB Track 2023 🎉

October 01, 2023

News, The internet, New Orleans, USA

Very happy to share that our recent work “NeuroEvoBench: Benchmarking Evolutionary Optimizers for Deep Learning Applications” has been accepted at the NeurIPS 2023 Datasets and Benchmarks Track. Check out the paper and the code, and feel free to drop me a note.

G-Research Travel Grant Awarded for @GECCO 🎉

July 06, 2023

News, The internet, Lisbon, Portugal

Super excited to have received a G-Research travel grant for attending GECCO 2023 in Lisbon. I will present our work on learned genetic algorithms and the evosax paper write-up as a poster paper. Very grateful to G-Research for supporting my work!

📝 ‘Lottery Tickets in EvoOpt’ Accepted @ICML 2023 🎉

June 04, 2023

News, The internet, Hawaii, USA

Very happy to share that our recent work “Lottery Tickets in Evolutionary Optimization: On Sparse Backpropagation-Free Trainability” has been accepted at ICML 2023. Check out the preprint and the code, and feel free to drop me a note.

🧬 Learned Evolution Talks @AutoML & @DLCT 🎉

April 25, 2023

News, The internet, AutoML seminar & ML Collective

Super excited to give two talks on discovering new evolutionary optimizers using evolutionary meta-learning at the AutoML Seminar (April 27th) and at the DLCT reading group (April 28th). The talk covers our two recent papers: Learned Evolution Strategies (here) and Learned Genetic Algorithms (here). Check out the slides below and here:

🐘 evosax Talk @PyData Berlin 🎙️

April 19, 2023

News, PyData Berlin, Berlin, Germany

Super excited to give a talk on evosax at this year’s PyData Berlin conference. Check out the slides below:

Summer Research Internship @Google Brain 🎉

April 08, 2023

News, Google Brain, Tokyo, Japan

Very excited to share that I will be spending the summer at Google Brain working as a Student Researcher with the Tokyo team. The work by Yujin Tang, Yingtao Tian and David Ha has been really influential and inspired my work on attention-based ES/GA. I can’t wait to do great work – thank you for the opportunity.

Learned GA & evosax @GECCO 2023 🧑‍🔬

April 03, 2023

News, DeepMind, London, UK

My DeepMind internship project paper ‘Discovering Attention-Based Genetic Algorithms via Meta-Black-Box-Optimization’ got accepted as a full paper at GECCO 2023. We parametrize the set operations of selection, mutation rate adaptation & sampling in GAs via a small attention layer and meta-learn the weights on a task distribution. The resulting learned GA outperforms several baselines! The preprint can be found here. Furthermore, the evosax paper write-up got accepted as a poster paper. Very grateful to all reviewers and the open source community feedback.

Research Talk @UCL Dark 🎙️

February 16, 2023

News, University College London, London, UK

Got to give a talk at UCL Dark about my internship project on Evolutionary Meta-Learning, gymnax and evosax.

Critical fish shoals published @Nature Physics 🎉

February 10, 2023

News, TU Berlin, Berlin, Germany

Our SCIoI collaboration with the lab of Pawel Romanczuk was published in Nature Physics. In the paper, Luis Gomez-Nava and his co-authors analyze the collective diving behavior of fish shoals in Mexico. They find that real-life fish operate close to a phase transition (‘critical point’). They then combine Machine Learning techniques and in-silico simulation to analyze the function of this system behavior, demonstrating that it facilitates optimal information propagation in the face of environmental perturbations such as predator attacks. Check out the paper!

Learned ES Accepted @ICLR 2023 🧑‍🔬

February 08, 2023

News, DeepMind, London, UK

My DeepMind internship project paper ‘Discovering Evolution Strategies via Meta-Black-Box-Optimization’ got accepted at ICLR 2023. We parametrize the set operation of recombination in ES via a small self-attention layer and meta-learn its weights on a task distribution. The resulting learned ES outperforms several baselines on control tasks. It can even be meta-trained in a self-referential fashion and reverse engineered into an analytical ES! Check out the preprint and the OpenReview discussion.
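For intuition, the recombination-as-attention idea can be sketched in plain Python. The snippet below is a simplified stand-in: it replaces the learned self-attention layer from the paper with a fixed softmax over z-scored fitness scores, so better-performing population members receive larger recombination weights. All function and variable names here are illustrative, not taken from the paper or its code release.

```python
import math

def attention_recombination(population, fitness, temperature=1.0):
    """Recombine a population into a new search mean via softmax weights
    over (negated, z-scored) fitness. Minimization is assumed."""
    n = len(fitness)
    mu = sum(fitness) / n
    sigma = math.sqrt(sum((f - mu) ** 2 for f in fitness) / n) or 1.0
    # Lower fitness -> higher score -> larger recombination weight.
    scores = [-(f - mu) / (sigma * temperature) for f in fitness]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(population[0])
    return [sum(w * x[d] for w, x in zip(weights, population))
            for d in range(dim)]

# Toy usage: the member closest to the optimum gets the largest weight,
# so the new mean is pulled towards it.
pop = [[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]]
fit = [sum(v ** 2 for v in x) for x in pop]  # sphere fitness
new_mean = attention_recombination(pop, fit)
```

In the actual method the softmax scores are produced by a meta-learned attention layer over fitness features rather than this fixed transformation.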

Research Talk @BLISS Group 🎙️

February 07, 2023

News, Technical University Berlin, Berlin, Germany

Got to give a talk at the BLISS group at TU Berlin about Lottery Tickets in Deep RL and beyond.

New evosax paper & v.0.1.0 release 🐘

December 10, 2022

News, The internet, Github

Super excited to share evosax release v0.1.0 and an accompanying paper, which covers all the features and summarizes recent progress in hardware-accelerated evolutionary optimization! The new additions include:

- Many new evolution strategies including ASEBO, Guided ES, Discovered ES, FR-CM-NES
- Many new genetic algorithms including MR-1/5-GA, SAMR-GA, GESMR-GA
- Wrappers for gymnax-powered fitness rollouts & evosax → EvoJAX compatible strategies
- Improved default hyperparameter settings
- All BBOB benchmarking functions in JAX
- Restart wrappers including IPOP and BIPOP
- Indirect encodings including MLP hypernetworks

Check out the repository and the arXiv preprint.

Research Talk @AIRL at Imperial 🎙️

November 15, 2022

News, Imperial College London, London, UK

Got to give a talk at the Adaptive & Intelligent Robotics Lab about Evolutionary Meta-Learning, gymnax and evosax.

gymnax Talk @MLC Research Jam 🏋️

August 24, 2022

News, The internet, ML Collective

I got to give a talk 💬 about gymnax at the last ML Collective research jam. You can check out the recorded talk here:

Summer Research Internship @DeepMind 🎉

May 27, 2022

News, DeepMind, London, UK

I am super excited to share that I will be spending the summer at DeepMind working as a Research Scientist Intern. This is an absolute dream come true and I am looking forward to returning to London. I will be hosted by Sebastian Flennerhag and working within the Discovery Team led by Satinder Singh.

Rob will be TA @M2L Summer School 🧑‍🏫

May 20, 2022

News, M2L Summer School, Milan, Italy

Excited to share that I will be a teaching assistant at the M2L Summer School this September in Milan! Very much looking forward to teaching the ABC of JAX, enjoying the food and a set of outstanding talks, and giving back to the community 🤗

‘Lottery Tickets in DRL’ Talk @DLCT

April 15, 2022

News, The internet, All across the world

I got to give a small talk on our recent work “On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning” (ICLR, 2022) at the Deep Learning Trends and Classics (DLCT) reading group organized by Rosanne Liu and the ML Collective. Check out the preprint and the OpenReview discussion, and feel free to drop me a note. This is joint work with the phenomenal Master’s student Marc Vischer and my supervisor Henning Sprekeler.

Rob @Deep Minds 🇩🇪 Podcast 🎙️

March 19, 2022

News, The internet, Deep Minds Podcast

I had a great time talking about my recent meta-learning research with Max & Matthias from the Deep Minds podcast 🎙 Check out the episode if you are interested in a Machine Learning perspective on the nature-nurture debate 🦎 or if you would like to hear me struggle with talking about my research in German 🇩🇪 (aka out-of-distribution generalization 😋). Thank you very much for the invitation & thoughtful questions! 🤗

evosax Talk @MLC Research Jam 🐘

March 09, 2022

News, The internet, ML Collective

I got to give a talk 💬 about evosax at the last ML Collective research jam. You can check out the recorded talk here:

MLE-Infrastructure Talk @co:here 🎙️

February 23, 2022

News, The internet, Cohere

I got to give a talk 💬 about the MLE-Infrastructure at Cohere, invited by João Guilherme Araújo. You can check out a related tutorial here: Link.

evosax Release 🎉 - Evolution Strategies in JAX 🦎

February 17, 2022

News, The internet, All across the world

I am more than excited to share that I have just released evosax – a JAX-based library of evolution strategies. evosax allows you to leverage JAX, XLA compilation and auto-vectorization/parallelization to scale ES to your favorite accelerators. The API is based on the classical ask, evaluate, tell cycle of ES. Both ask and tell calls are compatible with jit, vmap/pmap and lax.scan. It includes a vast set of both classic (e.g. CMA-ES, Differential Evolution, etc.) and modern neuroevolution (e.g. OpenAI-ES, Augmented RS, etc.) strategies. 👉
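The ask/evaluate/tell control flow itself is easy to illustrate without any dependencies. The toy strategy below is a hand-rolled Gaussian ES in pure Python; it sketches only the loop structure, not evosax's actual classes, signatures or JAX-native API (for those, see the repository).

```python
import random

class ToyGaussianES:
    """Minimal Gaussian evolution strategy illustrating the
    ask -> evaluate -> tell loop (names are illustrative only)."""

    def __init__(self, init_mean, popsize=16, sigma=0.5, seed=0):
        self.rng = random.Random(seed)
        self.mean = list(init_mean)
        self.popsize, self.sigma = popsize, sigma

    def ask(self):
        # Sample candidate solutions around the current search mean.
        return [[m + self.sigma * self.rng.gauss(0.0, 1.0) for m in self.mean]
                for _ in range(self.popsize)]

    def tell(self, population, fitness):
        # Greedy update: move the mean to the best candidate (minimization).
        self.mean = min(zip(fitness, population))[1]

def sphere(x):
    return sum(v * v for v in x)

es = ToyGaussianES(init_mean=[5.0, -3.0, 2.0], seed=42)
for _ in range(50):
    candidates = es.ask()                       # ask
    fitness = [sphere(c) for c in candidates]   # evaluate
    es.tell(candidates, fitness)                # tell
```

In evosax the same three-step cycle is expressed with pure functions over an explicit strategy state, which is what makes it compatible with jit, vmap/pmap and lax.scan.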

‘Lottery Tickets in DRL’ Spotlight @ICLR 2022

February 10, 2022

News, The internet, All across the world

Very happy to share that our recent work “On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning” has been accepted as a Spotlight at ICLR 2022. Check out the preprint and the OpenReview discussion, and feel free to drop me a note. This is joint work with the phenomenal Master’s student Marc Vischer and my supervisor Henning Sprekeler.

Talk @CSHL NeuroAI Seminar 🧑‍🔬

January 05, 2022

News, The internet, Cold Spring Harbor Laboratory

I got to give a talk 💬 about my work on meta-learning not to learn (accepted at AAAI 2022) at the CSHL NeuroAI seminar, invited by Tony Zador. You can check out the pre-print here: Link.

Rob Was Interviewed @TalkRL Podcast 🎙️

December 18, 2021

News, The internet, All across the world

I had a fun time being interviewed by Robin Ranjit Singh Chauhan for the TalkRL podcast 🎙️. We discuss my recent papers on meta-learning innate behavior, lottery tickets in Deep RL and my work at the intersection of Hierarchical RL and language (Action Grammars). You can check out the episode here!

Received a Google Cloud Research Grant 🎊

December 14, 2021

News, The internet, All across the world

Happy to share that I received a Google Cloud Research Credit Grant to study the intersection of meta-learning and evolution strategies. The grant comes with $1,000 in GCP credits, which will be well spent on running JAX experiments with TPU acceleration! 🚀

‘Learning not to Learn’ Accepted @AAAI2022 🚀

December 07, 2021

News, The internet, All across the world

My very first first-author ML conference paper has been accepted at AAAI 2022 🎉! In ‘Learning not to learn: Nature versus Nurture in Silico’ we investigate the interplay of ecological uncertainty, task complexity and the agents’ lifetime, and their effects on the meta-learned amortized Bayesian inference performed by an agent. There exist two regimes: one in which meta-learning yields a learning algorithm that implements task-dependent information integration, and a second in which meta-learning imprints a heuristic or ‘hard-coded’ behavior. Check out the tweeprint here! P.S.: Stay tuned for an updated paper version and the release of the open source code.

Swarm Identification @Champalimaud Symposium

October 11, 2021

News, Champalimaud Foundation, Lisbon, Portugal

Very happy to be presenting our work-in-progress “SoftEtho: A Gradient-Based Method for Scalable Identification of Ethological Models” at the poster sessions of the Champalimaud Research Symposium. This is joint work with Luis Gómez-Nava, Pawel Romanczuk and Henning Sprekeler. Feel free to drop me a note or hang out at the poster sessions in Lisbon.

Rob @CCN Algonauts Challenge Talk

September 04, 2021

News, The internet, All across the world

I am very happy to be presenting my 5th-place submission to the Algonauts Challenge during the CCN Algonauts workshop next Tuesday (September 7th, 1.30pm UTC-4/EDT). The solution is based on SimCLR-v2 features and a Bayesian Optimization pipeline for the encoding models. Check out my challenge report and code repository, and feel free to drop me a note. Thank you very much to the organizers and fellow algonauts for this great experience. Update: You can watch the YouTube replay here:

Lottery Tickets in DRL @Sparsity in Neural Networks Workshop

July 07, 2021

News, The internet, All across the world

Very happy to be presenting our recent work “On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning” at the Sparsity in Neural Networks Workshop. Check out the preprint and feel free to drop me a note or hang out at the poster sessions on July 8th and 9th. This is joint work with the phenomenal Master’s student Marc Vischer and my supervisor Henning Sprekeler.

Lottery Tickets in DRL @NERL ICLR Workshop

May 07, 2021

News, The internet, All across the world

I am very happy to be presenting our recent work “On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning” at the ICLR ‘A Roadmap to Never-Ending RL’ workshop. We investigate the lottery ticket phenomenon in Deep Reinforcement Learning and provide evidence that most of the RL ticket effect can be attributed to the discovered pruning mask. Furthermore, the input layer mask discovered by Iterative Magnitude Pruning yields minimal task-sufficient representations. This mask can be used as a pair of “goggles” that compresses the representation. Dense agents trained on such a representation attain comparable performance at lower computational costs. Check out the preprint and feel free to drop me a note or hang out at the poster sessions on May 7th. This is joint work with the phenomenal Master’s student Marc Vischer and my supervisor Henning Sprekeler.
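As a toy illustration of the “goggles” idea, a binary input-layer mask can be turned into an observation filter that keeps only the surviving features; a dense agent would then be trained on this compressed observation. The mask values and helper names below are hypothetical, chosen only to show the mechanism.

```python
def make_goggles(input_mask):
    """Given a per-feature boolean mask (a stand-in for an input-layer
    mask discovered by iterative magnitude pruning), return a function
    that compresses observations down to the surviving features."""
    keep = [i for i, m in enumerate(input_mask) if m]

    def goggles(observation):
        return [observation[i] for i in keep]

    return goggles

# Toy usage: a 6-dim observation where pruning kept features 0, 2 and 5.
mask = [True, False, True, False, False, True]
compress = make_goggles(mask)
obs = [0.1, 9.9, -0.4, 3.3, 7.7, 1.2]
compressed = compress(obs)  # -> [0.1, -0.4, 1.2]
```

A dense network taking the 3-dim `compressed` vector as input is the "comparable performance at lower cost" setting described above.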

Rob @Medium TDS Featured Authors Series 🔖

April 08, 2021

News, The internet, All across the world

I had a great time talking to Towards Data Science about my path into Machine Learning. We talk about my transition from Economics to Data Science and Computational Neuroscience. It is an honour to be part of the ‘Featured Authors Series’. You can check out the full Medium interview here!

ML Street Talk 🎙️ Episode with Tom Zahavy

March 24, 2021

News, The internet, All across the world

I had the honour to interview Dr. Tom Zahavy in the recent ML Street Talk episode together with Tim Scarfe and Yannic Kilcher. We discuss meta-gradients, JAX and the hardware lottery as well as the state and future of Deep RL. Check out the full episode here:

Talk @MIT Michael Carbin’s Lab

March 19, 2021

News, The internet, MIT

I got to give a talk 💬 about my recent work on lottery tickets in Deep Reinforcement Learning at Michael Carbin’s lab at MIT. A big thank you goes out to Jonathan Frankle for the kind invitation. This is joint work with my outstanding MSc student Marc A. Vischer and my supervisor Henning Sprekeler. Watch out for the pre-print!

Talk @Warwick PhD Statistics Seminar

February 23, 2021

News, The internet, Warwick University

I got to give a talk 💬 about my recent work on meta-learning not to learn at the University of Warwick PhD Statistics Seminar. You can check out the pre-print here: Link.

Rob @Virtual M2L 🏫

January 06, 2021

News, The internet, All across the world

Happy new year! I am really happy to be attending the virtual (and first) edition of the Mediterranean Machine Learning (M2L) summer school. Get in touch if you want to chat about JAX, evolutionary algorithms or meta-learning! And stay tuned for some new #visual-ml-notes 📝 Big thank you to the organizers! Love, Rob

Learning not to Learn @MetaLearning NeurIPS Workshop

November 29, 2020

News, The internet, All across the world

I am very happy to be presenting my recent work on “Learning not to learn: Nature versus nurture in silico” at the NeurIPS Meta-Learning workshop. We investigate the interplay of ecological uncertainty, task complexity and expected lifetime on the amortized Bayesian inference performed by memory-based meta-learners. Check out the preprint and feel free to drop me a note or hang out at the poster sessions on December 11th.

Rob’s 1st Podcast - ML Street Talk 🎙️

July 05, 2020

News, The internet, All across the world

Dear virtual world, last week I got to do my very first podcast. Exciting, right? I had a great time discussing my journey from Econ to ML & Collective Behaviour, social notions of intelligence & the Lottery Ticket Hypothesis! Thanks for having a podcast newbie! Check out the full podcast by Tim Scarfe, Connor Shorten and Yannic Kilcher here: Love, Rob

Rob @Virtual MLSS 🏫

June 27, 2020

News, The internet, All across the world

I am really happy to be attending the virtual edition of the MLSS Tübingen summer school where I will be presenting my most recent work on ‘Time Limits in Meta-Reinforcement Learning’. Get in touch if you want to chat about science, arts and ethology! Also I am looking forward to adding a new album to #visual-ml-notes 📝 Love, Rob

Visual-ML-Notes Launch ✏️

May 02, 2020

News, The internet, All across the world

Really happy to share Visual-ML-Notes ✍️, a virtual gallery of sketchnotes taken at Machine Learning talks 🧠🤓🤖, including last week’s #ICLR2020. Explore, exploit & feel free to share: 👉 website 💻 & the repository 📝 Love, Rob P.S.: There will be an entire blogpost dedicated to how I go about sketching, the workflow and the post-processing. Stay tuned ❤️

Rob joins for.AI ❤️

March 14, 2020

News, The internet, All across the world

I am really excited to announce that I have joined for.AI as an independent researcher. for.AI is a mainly remote, internationally coordinated group of ML researchers with one aim: the production of useful & effective ML research. I am very much looking forward to new ideas, enthusiastic discussions and fruitful collaborations 🤗! Such a great idea for the 21st century! Love, Rob P.S.: You can check out my solution to the coding challenge (comparing different pruning techniques) here!

Action Grammars @NeurIPS Workshops!

October 02, 2019

News, Neural Information Processing Systems Conference 2019, Vancouver, Canada

Super excited to share that my Master’s thesis project with Aldo Faisal got accepted to both the ‘Deep Reinforcement Learning’ & the ‘Learning Transferable Skills’ workshops at NeurIPS 2019. I will be presenting the work at the DRL workshop in Vancouver in December! Check out the updated preprint here & let me know if you have any ideas/questions. Furthermore, code to replicate the results can be found here. Love, Rob P.S.: Here is my previous poster from CCN:

OIS Award Final Pitch Selection!

September 13, 2019

News, Technical University Berlin, Berlin, Germany

I am really excited to share that my project proposal on “Deep Swarm Shepherding - Benevolent Adaptation of Collective Behavior” has been selected for the final round of the Open Innovation in Science Award of the Einstein Center for Neurosciences Berlin. The goal of the award is to facilitate projects which fuse Open Innovation and Open Science in the context of neuroscience. It is jointly co-organized by the Ludwig Boltzmann Gesellschaft’s Open Innovation in Science Center (LBG OIS Center), QUEST and SPARK-Berlin. I am very honored and am looking forward to all the 3-minute pitches! If you are interested in learning more about how I intend to make the world a better place by combining Behavioral Tracking, Inverse Reinforcement Learning and Machine Theory of Mind, come by. The final round of the selection process will be carried out publicly. Here is all the key information: Location: Charité Campus Mitte, Charité CrossOver (CCO), Charitéplatz 1, 10117 Berlin. Date and Time: Thursday, October 10, 2019, 16:00-18:15 (official part), doors open at 15:45.

Action Grammars are going to CCN!

August 03, 2019

News, Technical University Berlin, Berlin, Germany

Super exciting news! Parts of my master’s thesis project (supervised by Professor Aldo Faisal) got accepted at the Cognitive Computational Neuroscience conference 2019. We combine Hierarchical Reinforcement Learning & Grammar Induction to define a set of temporally-extended actions… aka an Action Grammar! The resulting temporal abstractions can be used to efficiently tackle imitation, transfer and online learning. Check out the preprint here! I am still in the process of extending the experiments and am already looking forward to the poster presentations in Berlin (13th to 16th of September). The code will be open sourced as well. Hit me up if you are interested in the full story!

Kick-Off ‘Flexible Learning’ Reading Group @TUBerlin

July 26, 2019

News, Technical University Berlin, Berlin, Germany

Last week we got to kick off our new “Flexible Learning” reading group at the Technical University Berlin, where we cover recent papers in Meta-/Transfer-/Continual & Self-supervised Learning! We started by reading the latest first-author paper by Yoshua Bengio connecting Meta-Learning with causal inference. You can join our mailing list for more info: click here. Here is all the relevant info for the next meeting:

- Date: 7th of August
- Time: 11am
- Location: MAR Building TU Berlin, room 5.013
- Paper: “Task-Driven Convolutional Recurrent Models of the Visual System” by Nayebi et al. (2018; NeurIPS)

Massive thanks go out to Thomas Goerttler, Joram Keijser & Nico Roth for helping co-organize! Hit me up if you are interested in joining!

RAAI Conference & EEML - I am coming!

June 02, 2019

News, University of Bucharest, Bucharest, Romania

Bucharest - I am coming! Very happy to attend the Recent Advances in Artificial Intelligence conference from 28th to 30th of June. I will present my work on Deep Multi-Agent RL for swarm dynamics in a poster session. Furthermore, my work has also been selected to be presented at the super-duper awesome EEML summer school. Can’t wait to meet the hero of temporal abstractions Doina Precup and Mr “Policy Distillation” Andrei Rusu.

I will be giving a Talk @ENCODS FENS PhD Symposium!

May 24, 2019

News, Francis Crick Institute, London, United Kingdom

I am happy to announce that I will be giving a short talk at the ENCODS FENS PhD Symposium about the “Neural Surprise in Human Somatosensation” project I have been working on during my first ECN lab rotation together with Sam Gijsen, Miro Grundei, Dirk Ostwald and Felix Blankenburg. If you are interested in more details and the general paradigm, check out our GitRepo.

I got accepted into the SCIoI Excellence Cluster!

April 22, 2019

News, Technical University Berlin, Berlin, Germany

I got accepted into the Science of Intelligence Excellence Cluster! Starting in October 2019 I will be working on the project “Learning of Intelligent Swarm Behavior” under the supervision of Henning Sprekeler and Pawel Romanczuk. I am very happy to receive such generous funding and support from the excellence cluster. I will stay affiliated with the Einstein Center for Neurosciences. Furthermore, my work will combine strong evidence from cognitive neuroscience and animal psychology in order to study the computational basis of coordination and adaptation in large collectives.