Metaphors are powerful tools for transferring ideas from one mind to another. Alan Kay introduced the alternative meaning of the term ‘desktop’ at Xerox PARC in 1970. Nowadays everyone has to wonder - for a split second - what is actually meant when someone refers to a desktop. Recently, Deep Learning had the pleasure of welcoming a powerful new metaphor: the Lottery Ticket Hypothesis (LTH).
The iPad is a revolutionary device. I take all my notes with it, read & annotate papers, and do most of my conceptual brainstorming on it. But what about Machine Learning applications? In today’s post we review a set of useful tools & venture into the love story of the iPad Pro & the new Raspberry Pi (RPi).
JAX, Jax, JaX. Twitter seems to talk about nothing else nowadays (next to COVID-19). If you are like me and want to know what the newest hype train is about - welcome to today’s blog post!
2019 - what a year for Deep Reinforcement Learning (DRL) research - and also my first year as a PhD student in the field. Like every PhD novice, I got to spend a lot of time reading papers, implementing cute ideas & getting a feeling for the big questions. In this blog post I want to share some of my highlights from the 2019 literature.
TL;DR: This blog post provides an overview of trends & events from the Cognitive Computational Neuroscience (CCN) 2019 conference held in Berlin. It summarizes the keynote talks and provides my perspective and thoughts resulting from a set of stimulating days. More specifically, I cover recent trends in Model-Based RL, Meta-Learning and Developmental Psychology adventures. You can find all my notes here.
Automatic Differentiation (AD) is one of the driving forces behind the success story of Deep Learning. It allows us to efficiently evaluate gradients of our favorite composed functions. TensorFlow, PyTorch and all their predecessors make use of AD. Together with stochastic approximation techniques such as SGD (and all its variants), these gradients refine the parameters of our favorite network architectures.
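As a minimal sketch of this gradients-plus-SGD loop (using JAX's `jax.grad` for the AD part; the linear model and data here are illustrative assumptions, not from any of the posts), consider fitting a one-parameter-pair linear model:

```python
# Minimal sketch: AD supplies the gradient, SGD refines the parameters.
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # Mean-squared error of a linear model y ~ w * x + b.
    w, b = params
    pred = w * x + b
    return jnp.mean((pred - y) ** 2)

# jax.grad builds the gradient function via automatic differentiation.
grad_fn = jax.grad(loss)

params = (jnp.array(0.0), jnp.array(0.0))   # (w, b), initialized at zero
x = jnp.array([1.0, 2.0, 3.0])
y = jnp.array([2.0, 4.0, 6.0])              # generated from y = 2 * x

lr = 0.1
for _ in range(500):
    grads = grad_fn(params, x, y)           # tuple of gradients, matching params
    # Plain SGD update: step against the gradient direction.
    params = tuple(p - lr * g for p, g in zip(params, grads))
```

After training, `params` should be close to the generating values `(2.0, 0.0)` - the same recipe, scaled up, is what refines the weights of a deep network.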
Before starting to write a blog post I always ask myself - “What is the added value?”. There is a lot of awesome ML material out there. And a lot of duplicates as well. Especially when it comes to all the flavors of Deep Reinforcement Learning. So you might wonder: what is the added value of this two-part blog post on Deep Q-Learning? It is threefold.
In January I was considering where to go with my scientific future. Struggling over whether to stay in Berlin or to go back to London, I got frustrated with my technical progress. At NeurIPS I encountered so much amazing work and I felt like there was too much to learn before reaching the cutting edge. I was stuck. And then my former Imperial supervisor forwarded me an email advertising this new Eastern European Machine Learning (EEML) summer school.
In today’s blog post we discuss Representational Similarity Analysis (RSA), how it might improve our understanding of the brain as well as recent efforts by Samy Bengio’s and Geoffrey Hinton’s group to systematically study representations in Deep Learning architectures. So let’s get started!
Hello, beautiful people! After finally deciding to stay in Berlin, I felt the desire to structure myself and to establish routines that will help me tackle the next phase of my life. Thanks to a fortunate visit to the National Gallery book store in London, I got to pick up Austin Kleon’s amazing piece of work “Steal Like an Artist” - a beautifully collected and visualized set of tricks to foster creativity.
Hey there! As some of you might know I have been quite actively contributing to the Data Science Barcelona GSE blog. Writing about technical topics and addressing a broad audience is challenging and fulfilling at the same time. I hope that this blog is going to help me learn to tell great narratives and influence people. So stay tuned!
I got accepted into the Science of Intelligence Excellence Cluster! Starting in October 2019 I will be working on the project “Learning of Intelligent Swarm Behavior” under the supervision of Henning Sprekeler and Pawel Romanczuk. I am very happy to receive such generous funding and support from the excellence cluster.
I will stay affiliated with the Einstein Center for Neurosciences. Furthermore, my work will combine strong evidence from cognitive neuroscience and animal psychology in order to study the computational basis of coordination and adaptation in large collectives.
I am happy to announce that I will be giving a short talk at the ENCODS FENS PhD Symposium about the “Neural Surprise in Human Somatosensation” project I have been working on during my first ECN lab rotation together with Sam Gijsen, Miro Grundei, Dirk Ostwald and Felix Blankenburg. If you are interested in more details and the general paradigm, check out our GitRepo.
Bucharest - I am coming! Very happy to attend the Recent Advances in Artificial Intelligence conference from 28th to 30th of June. I will present my work on Deep Multi-Agent RL for swarm dynamics in a poster session. Furthermore, my work has also been selected to be presented at the super-duper awesome EEML summer school. Can’t wait to meet the hero of temporal abstractions Doina Precup and Mr “Policy Distillation” Andrei Rusu.
Last week we got to kick-off our new “Flexible Learning” reading group at the Technical University Berlin where we cover recent papers in Meta-/Transfer-/Continual & Self-supervised Learning! We started by reading the latest first-author paper by Yoshua Bengio connecting Meta-Learning with causal inference.
You can join our mailing list for more infos: click here.
Here are all the relevant infos for the next meeting:
- Date: 7th of August
- Time: 11am
- Location: MAR Building TU Berlin - room 5.013
- Paper: “Task-Driven Convolutional Recurrent Models of the Visual System” by Nayebi et al. (2018; NeurIPS)
Massive thanks to Thomas Goerttler, Joram Keijser & Nico Roth for their help co-organizing! Hit me up if you are interested in joining!
Super exciting news! Parts of my Master’s thesis project (supervised by Professor Aldo Faisal) got accepted at the Cognitive Computational Neuroscience conference 2019. We combine Hierarchical Reinforcement Learning & Grammar Induction to define a set of temporally-extended actions… aka an Action Grammar! The resulting temporal abstractions can be used to efficiently tackle imitation, transfer and online learning.
Check out the preprint here! I am still in the process of extending the experiments and already looking forward to the poster presentations in Berlin (13th to 16th of September). The code will be open sourced as well. Hit me up if you are interested in the full story!
I am really excited to share that my project proposal on “Deep Swarm Shepherding - Benevolent Adaptation of Collective Behavior” has been selected for the final round of the Open Innovation in Science Award of the Einstein Center for Neurosciences Berlin. The goal of the award is to facilitate projects which fuse Open Innovation and Open Science in the context of neuroscience. It is jointly co-organized by the Ludwig Boltzmann Gesellschaft’s Open Innovation in Science Center (LBG OIS Center), QUEST and SPARK-Berlin.
I am very honored and am looking forward to all the 3-minute pitches! If you are interested in learning more about how I intend to make the world a better place by combining Behavioral Tracking, Inverse Reinforcement Learning and Machine Theory of Mind, come by. The final round of the selection process will be carried out publicly - here is all the key information:
- Location: Charité Campus Mitte, Charité CrossOver (CCO), Charitéplatz 1, 10117 Berlin.
- Date and Time: Thursday, October 10, 2019, 16:00-18:15 (official part), doors open at 15:45.
Super excited to share that my Master’s thesis project with Aldo Faisal got accepted to both the ‘Deep Reinforcement Learning’ & the ‘Learning Transferable Skills’ Workshop at NeurIPS 2019. I will be presenting the work within the DRL workshop in Vancouver in December!
P.S.: Here is my previous poster from CCN:
I am really excited to announce that I have joined for.AI as an independent researcher. for.AI is a mainly remote, internationally coordinated group of ML researchers with one aim - the production of useful & effective ML research.
I am very much looking forward to new ideas, enthusiastic discussions and fruitful collaborations! Such a great idea for the 21st century!
P.S.: You can check out my solution to the coding challenge (comparing different pruning techniques) here!
Really happy to share Visual-ML-Notes ✍️ a virtual gallery of sketchnotes taken at Machine Learning talks 🧠🤓🤖 which includes last week’s #ICLR2020.
P.S.: There will be an entire blog post dedicated to how I go about sketching, the workflow and the post-processing. Stay tuned!
I am really happy to be attending the virtual edition of the MLSS Tübingen summer school where I will be presenting my most recent work on ‘Time Limits in Meta-Reinforcement Learning’. Get in touch if you want to chat about science, arts and ethology! Also I am looking forward to adding a new album to #visual-ml-notes 📝
Published in UPF/UAB Public Online Repository, 2017
Barcelona GSE Master’s Thesis which generalizes Randomized Numerical Linear Algebra (RandNLA) to Generalized Linear Models (GLMs).
Recommended citation: Lange, Robert Tjarko. (2017). "Randomized Numerical Linear Algebra for Generalized Linear Models with Big Datasets." UPF/UAB Public Online Repository.
Published in Best (Applied) MAC/MRes/Specialism Project, Sponsored by Winton Capital at Imperial College London, 2018
Imperial College London Master’s Thesis which provides a Context-Free-Grammar-based framework for learning temporal abstractions in Hierarchical Reinforcement Learning.
Recommended citation: Lange, Robert Tjarko. (2018). "Action Grammars: A Grammar Induction-Based Method for Learning Temporally-Extended Actions." Imperial College London - DoC - Best (Applied) MAC/MRes/Specialism Project 2018.