Talks


The things that robots don't need to learn ..., talk given during visits to CSAIL@MIT and HCRI@Brown, June 2016.

The things that robots don't need to learn ... It is fascinating to observe the current wave of excitement in our community about deep learning. We seem to be tempted to believe that in order to reproduce intelligence in robots, all we need is more data and computation (and maybe a small tweak here and there); that everything there is to learn can be learned from scratch with a "master algorithm" that uses only very weak assumptions. I don't think that this approach is reasonable. There are many things that robots don't need to learn (physics, their own embodiment, the existence of objects, etc.). But how can we specify these "things" that robots don't need to learn and combine them with machine learning to fill in the rest? In this talk, I will summarize insights from our recent ICRA workshop on this topic and present three of our own approaches to this question: 1) representation learning with robotic priors, 2) patterns for learning with side information, and 3) recent results on incorporating structure from robotic methods into neural networks.

Publications

Rico Jonschkowski, Roland Hafner, Jonathan Scholz, and Martin Riedmiller. PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations. New Frontiers for Deep Learning in Robotics Workshop at RSS, 2017.

TL;DR: unsupervised learning of where things are and how they are moving
PDF, BibTeX
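To illustrate the kind of structure a position-velocity representation imposes, here is a minimal sketch of a consistency loss over encoded positions: velocities are taken as finite differences, and each next position is predicted by a constant-velocity step. The function name, the finite-difference velocity estimate, and the squared-error loss are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def position_velocity_consistency_loss(positions, dt=1.0):
    """Illustrative consistency loss for a position-velocity state.

    positions: (T, d) array of positions encoded from consecutive frames.
    Velocities are estimated by finite differences; each next position is
    predicted with a constant-velocity step, and the loss penalizes the
    prediction error. This is a sketch, not the paper's actual objective.
    """
    velocities = np.diff(positions, axis=0) / dt           # (T-1, d)
    predicted = positions[1:-1] + velocities[:-1] * dt     # constant-velocity prediction
    errors = predicted - positions[2:]                     # compare to observed positions
    return float(np.mean(np.sum(errors ** 2, axis=1)))
```

A trajectory with constant velocity incurs zero loss under this sketch, which is the sense in which the representation is "structured": it rewards encodings whose dynamics look physically simple.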


Rico Jonschkowski and Oliver Brock. End-To-End Learnable Histogram Filters. Workshop on Deep Learning for Action and Interaction at NIPS, 2016.

TL;DR: a way to combine the algorithmic structure of Bayes filters with the end-to-end learnability of neural networks
(extension of the paper below)
PDF, BibTeX
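The core idea, that a Bayes filter over a discretized state space can be written as differentiable operations with learnable parameters, can be sketched as follows. The specific parameterization (a softmax over transition logits, an externally supplied observation likelihood) is an assumption of this sketch, not necessarily the paper's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def histogram_filter_step(belief, transition_logits, observation_likelihood):
    """One differentiable Bayes-filter step over a discretized state space.

    belief: (n,) current belief histogram over n discrete states
    transition_logits: (n, n) learnable parameters; a row-wise softmax
        yields p(s' | s), the learned motion model
    observation_likelihood: (n,) p(z | s) for the current measurement,
        e.g. the output of a learned observation model
    """
    # Prediction: propagate the belief through the learned transition model.
    transition = softmax(transition_logits, axis=1)
    predicted = belief @ transition
    # Correction: reweight by the observation likelihood and renormalize.
    corrected = predicted * observation_likelihood
    return corrected / corrected.sum()
```

Because every operation here is differentiable, gradients can flow from a downstream loss back into the transition logits and the observation model, which is what makes the filter end-to-end learnable while keeping its Bayesian algorithmic structure.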


Rico Jonschkowski and Oliver Brock. Towards Combining Robotic Algorithms and Machine Learning: End-To-End Learnable Histogram Filters. Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics at IROS, 2016.

TL;DR: a way to combine the algorithmic structure of Bayes filters with the end-to-end learnability of neural networks
(preliminary version, see extended version above)
PDF, BibTeX


Rico Jonschkowski, Clemens Eppner*, Sebastian Höfer*, Roberto Martín-Martín*, and Oliver Brock. Probabilistic Multi-Class Segmentation for the Amazon Picking Challenge. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2016.

IROS Best Paper Award Finalist
TL;DR: in-depth analysis of the object-segmentation method used in our winning entry to the 2015 Amazon Picking Challenge, with lessons toward more general robotic perception
Code, PDF, BibTeX
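A minimal sketch of probabilistic multi-class segmentation at the pixel level: per-feature likelihoods are combined naive-Bayes-style (assuming conditional independence of features given the class) with a class prior to obtain a posterior over candidate objects. The feature channels named in the comment and the independence assumption are simplifications for this sketch, not a faithful account of the paper's model.

```python
import numpy as np

def pixel_class_posterior(likelihoods, prior):
    """Naive-Bayes-style per-pixel posterior over object classes.

    likelihoods: (F, C) array, p(feature_f | class c) for one pixel,
        one row per feature channel (e.g. color, height, depth edges)
    prior: (C,) prior over the candidate classes in the scene

    Assumes conditional independence of features given the class,
    a simplifying assumption made for this illustration.
    """
    # Work in log space for numerical stability, then normalize.
    log_post = np.log(prior) + np.log(likelihoods).sum(axis=0)
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()
```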


Clemens Eppner*, Sebastian Höfer*, Rico Jonschkowski*, Roberto Martín-Martín*, Arne Sieverling*, Vincent Wall*, and Oliver Brock. Lessons from the Amazon Picking Challenge: Four Aspects of Building Robotic Systems. Proceedings of Robotics: Science and Systems, 2016.

RSS Best Systems Paper Award Winner
TL;DR: four aspects that improve our exploration and understanding of how to build robotic systems: modularity vs. integration, computation vs. embodiment, planning vs. feedback, generality vs. assumptions
Talk, Podcast, PDF, BibTeX


Rico Jonschkowski*, Sebastian Höfer*, and Oliver Brock. Patterns for Learning with Side Information. arXiv:1511.06429 [cs.LG], 2016.

TL;DR: a perspective on machine learning that goes beyond inputs and labels by also exploiting side information
Code, PDF, BibTeX
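One simple instance of this idea: augment the main task loss with an auxiliary term that ties an intermediate representation to side information available only at training time. The specific losses (squared errors) and the weighting are illustrative assumptions for this sketch, not the patterns catalogued in the paper.

```python
import numpy as np

def combined_objective(y_pred, y_true, s_pred, s_side, weight=0.1):
    """Sketch of a training objective that exploits side information.

    y_pred, y_true: predictions and labels for the main task
    s_pred: the model's intermediate representation
    s_side: side information available during training but not at test
        time (e.g. extra sensor readings), which s_pred is encouraged
        to match

    The squared-error losses and the weight are illustrative choices.
    """
    task_loss = np.mean((y_pred - y_true) ** 2)
    side_loss = np.mean((s_pred - s_side) ** 2)
    return float(task_loss + weight * side_loss)
```

At test time only the main pathway is used; the side information has shaped the representation during training without being required afterwards.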


Rico Jonschkowski and Oliver Brock. Learning State Representations with Robotic Priors. Autonomous Robots. Springer US 39(3):407-428, 2015.

TL;DR: state representations can be learned from raw sensory input by making these representations consistent with prior knowledge about physical interaction ("robotic priors")
(extension of the two papers below)
Code, Video of Robot Experiment, PDF, BibTeX
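To give a concrete flavor of what a robotic prior looks like as a loss term, here is a sketch of temporal coherence (slowness): physical states change gradually, so consecutive learned states should be close. This is a simplified version of one of several priors; the paper combines multiple such terms, and the exact formulation here is an illustrative assumption.

```python
import numpy as np

def temporal_coherence_loss(states):
    """One example robotic prior: temporal coherence (slowness).

    states: (T, d) array of learned state representations for T
    consecutive observations. Penalizes large jumps between
    consecutive states, reflecting the prior that physical state
    changes gradually over time.
    """
    diffs = np.diff(states, axis=0)                      # (T-1, d) state changes
    return float(np.mean(np.sum(diffs ** 2, axis=1)))    # mean squared step size
```

Minimizing such terms over the encoder's parameters pushes the learned representation toward one in which physically plausible trajectories look smooth.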


Rico Jonschkowski and Oliver Brock. State Representation Learning in Robotics: Using Prior Knowledge about Physical Interaction. Proceedings of Robotics: Science and Systems, 2014.

TL;DR: state representations can be learned from raw sensory input by making these representations consistent with prior knowledge about physical interaction ("robotic priors")
(preliminary version, see extended version above)
Talk, PDF, BibTeX


Rico Jonschkowski and Oliver Brock. Learning Task-Specific State Representations by Maximizing Slowness and Predictability. Proceedings of the 6th International Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems (ERLARS), 2013.

TL;DR: state representation learning requires optimizing the right characteristics of good representations, e.g., slowness and predictability
(preliminary version, see extended version above)
PDF, BibTeX