Cognitive Robotics (CORO)
This is an elective course designed for the Master of Autonomous Systems program at Hochschule Bonn-Rhein-Sieg. I taught this course for the first time in the summer semester of 2023.
Description
As the name suggests, this course focuses on a subdiscipline of robotics called “cognitive robotics.” Cognitive robotics aims to endow robots with high-level cognitive capabilities that are inspired by, and similar to, those of humans, such as reasoning about complex objectives based on information about the world, acting skillfully, continually improving the available knowledge through learning, and performing introspective analysis to avoid failures.
In this course, you will be exposed to research literature in the field of cognitive robotics; the goal is to (i) study a variety of methods that address different aspects of the problem and (ii) appreciate the complexity of developing autonomous robots with cognitive capabilities. The focus is on concrete techniques that address the aforementioned facets, not directly on biologically inspired approaches to robotics.
The following are the concrete objectives of the course:
- Familiarising students with the fundamentals of cognitive robotics and various existing cognitive architectures
- Illustrating various developments in cognitive robotics through research papers in the field, particularly focusing on cognitive perception, manipulation, and task-oriented reasoning and learning
- Enabling students to think critically about various developments in cognitive robotics and their importance for autonomous robots
- Encouraging students to think about different cognitive aspects when working on robotics applications
Lectures
The following topics are covered in the course:
- Cognitive robotics introduction
- Cognitive architectures
- Cognition-enabled manipulation
- Affordances
- Active learning
- Lifelong learning
- Learning and acting based on conceptual constraints
- Using rich knowledge about the world
- Observation-based task learning
- Causal reasoning and learning
- Introspection and robot failures
- Failure management: Failure model learning, failure-aware planning, and reasoning about failures temporally
Assignments
Assignment 1: Cognitive architectures reading
There are two reading assignments for next week:
- Read sections 10.1-10.5 from chapter 10 of the Cognitive Robotics book (the chapter is on cognitive architectures)
- From the paper “40 years of cognitive architectures: core cognitive abilities and practical applications”, read sections 1-3 and section 11
Once you have read them, please upload a markdown file (.md) with at least 10 questions / discussion points about the reading material (you are welcome to add more).
Assignment 2: A more detailed study on cognitive architectures
This is a group assignment; you will work with the same group of people as in the last lab class.
Each group needs to select one of the following architectures and study it in detail:
- Soar – J. E. Laird, K. R. Kinkade, S. Mohan, and J. Z. Xu, “Cognitive Robotics Using the Soar Cognitive Architecture,” in Cognitive Robotics Workshop at the 26th AAAI Conf. Artificial Intelligence, 2012. Available: https://www.researchgate.net/publication/264888205_Cognitive_Robotics_Using_the_Soar_Cognitive_Architecture
- ACT-R – F. E. Ritter, F. Tehranchi, and J. D. Oury. “ACT-R: A cognitive architecture for modeling cognition.” Wiley Interdisciplinary Reviews: Cognitive Science, vol. 10, no. 3, pp. e1488, 2019. Available: https://wires.onlinelibrary.wiley.com/doi/full/10.1002/wcs.1488
- LIDA – S. Franklin, T. Madl, S. D’Mello, and J. Snaider, “LIDA: A Systems-level Architecture for Cognition, Emotion, and Learning,” IEEE Trans. Autonomous Mental Development, vol. 6, no. 1, pp. 19-41, Mar. 2014. Available: https://ieeexplore.ieee.org/abstract/document/6587077
- DIARC – M. Scheutz, P. Schermerhorn, J. Kramer, and D. Anderson, “First steps toward natural human-like HRI,” Autonomous Robots, vol. 22, no. 4, pp. 411-423, 2007. Available: https://link.springer.com/article/10.1007/s10514-006-9018-3
- CLARION – R. Sun, “The CLARION cognitive architecture: Extending cognitive modeling to social simulation,” Cognition and multi-agent interaction, pp. 79-99, 2006. Available: https://scholar.google.com/scholar?cluster=13982617830671028657
The assignment then consists of two parts:
- Collect a list of 10 questions about your architecture and submit them on JupyterHub.
- Prepare a slide set in which you describe the architecture and upload it to LEA; you will present it during the next lecture.
In your slide set and presentation, focus particularly on the following aspects:
- What components does the architecture include?
- What insights from real cognitive systems are integrated into the architecture?
- What are some applications of the architecture to real robot systems?
- What would be the main benefits of using the architecture in robotics?
- Can you think of some challenges in using the architecture on real robots?
Only one person per group should upload the questions and the slide set.
Assignment 3: Learning from demonstration
Your task in this assignment is to develop a system that allows you to perform imitation learning from multiple demonstrations. Concretely, you need to:
- Develop a method that learns a model from multiple demonstrated trajectories, which you can then use to sample new trajectories. Your model should enable learning trajectories of the 3D position of a robot’s end effector (optionally, you can also consider the orientation). In the lecture, we discussed Gaussian Mixture Models as a suitable representation for this task; however, feel free to explore alternative representations if you want (e.g. a Gaussian process). A minimal sketch of the GMM-based approach is given after this list.
- Integrate your method with a robot simulation of your choice and use it to perform learning from demonstration. For this, collect some demonstrations in the simulation and show that the robot can then execute trajectories sampled from your model. Ideally, the simulation you use should have a model of the Toyota HSR or the Kinova Gen3 (this will allow us to try your method on the real robot).
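For concreteness, here is a minimal sketch of the GMM-based approach: a GMM is fit over (time, position) samples pooled from all demonstrations, and Gaussian mixture regression (GMR) then retrieves the expected position at each query time. It assumes time-aligned demonstrations and uses scikit-learn and SciPy; the function names, the number of components, and the synthetic data at the end are illustrative only, not a prescribed implementation:

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture


def fit_trajectory_gmm(demos, n_components=5):
    """Fit a GMM over (t, x, y, z) samples pooled from all demonstrations.

    demos: list of (T_i, 3) arrays of end-effector positions; each
    demonstration is assumed to be time-normalised to [0, 1].
    """
    data = np.vstack([
        np.hstack([np.linspace(0.0, 1.0, len(d))[:, None], d]) for d in demos
    ])
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(data)
    return gmm


def gmr(gmm, times):
    """Gaussian mixture regression: condition the joint (t, p) model on
    time to obtain the expected end-effector position at each query time."""
    mu, sigma, w = gmm.means_, gmm.covariances_, gmm.weights_
    traj = np.zeros((len(times), 3))
    for i, t in enumerate(times):
        # responsibility of each component for this time step
        h = w * norm.pdf(t, mu[:, 0], np.sqrt(sigma[:, 0, 0]))
        h /= h.sum()
        for k in range(gmm.n_components):
            # conditional mean of position given time for component k
            cond = mu[k, 1:] + sigma[k, 1:, 0] / sigma[k, 0, 0] * (t - mu[k, 0])
            traj[i] += h[k] * cond
    return traj


# Synthetic stand-in data for recorded demonstrations (illustrative only)
demos = [np.cumsum(np.random.randn(100, 3) * 0.01, axis=0) for _ in range(3)]
model = fit_trajectory_gmm(demos)
trajectory = gmr(model, np.linspace(0.0, 1.0, 100))
```

In practice, you would fit the model on your recorded demonstrations, regress a trajectory with `gmr`, and send it to the robot’s controller; the number of components can be selected with a criterion such as the BIC, which `GaussianMixture` exposes via its `bic` method.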
This is a group assignment; please work with the same classmates as before.
The assignment should be submitted on JupyterHub. The submission should include your code and a short discussion about the evaluation of the system in the simulation. Please also be ready to demonstrate your implementation during our lab class.
Assignment 4: Learning from demonstration continued: Learning task models
This assignment builds on the results of the previous one: your goal is to extend your demonstration-based learning system so that you can also acquire task models from demonstrations, and then perform those tasks with your robot.
Concretely, you need to do the following:
- Define a task to be learned by demonstration. You can perform a simple block arrangement / stacking task, or something more practically useful, such as table arrangement for a meal; feel free to be creative here.
- Extend your simulation from the last assignment so that you can perform demonstrations of the task. You are free to decide how you perform the demonstrations in your simulation - you can do this interactively by dragging objects from their original location to their goal location, or you can do it programmatically for simplicity.
- Implement a method that allows you to learn a task model (that is, a conceptual model of the steps that your robot needs to perform in order to complete the task - e.g. a model such as (i) pick up plate, (ii) place plate at goal location, (iii) pick up fork, (iv) place fork next to plate, and so forth). I suggest following the representation in B. Hayes and B. Scassellati, “Discovering task constraints through observation and active learning,” in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2014, pp. 4442–4449 (available at https://doi.org/10.1109/IROS.2014.6943191), but feel free to explore alternative representations if you want; a much-simplified sketch of such a model is given after this list. Note: Please don’t just hardcode the task model - you need to learn it from demonstration (that’s the whole point of the assignment)!
- Combine your trajectory learning method from the last assignment with the task learning method so that you can learn both task models and trajectories for execution. Then, use your combined method to allow your robot to perform the learned task.
- (Optional step) Integrate active learning so that your robot can ask for demonstrations that clarify different aspects of the task.
Note that you don’t need to do object perception in the simulation; instead, assume that you have access to all the relevant information, such as object poses, colours, etc., which you can obtain directly from the simulation.
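To make the notion of a learned task model concrete, below is a much-simplified sketch in the spirit of the constraint-learning idea from the Hayes and Scassellati paper (the actual method learns considerably richer task networks). It extracts the pairwise ordering constraints that hold across all demonstrated step sequences and then orders the steps so that these constraints are satisfied; it assumes that every demonstration contains the same steps exactly once, and all names are illustrative:

```python
from itertools import combinations


def learn_precedence_constraints(demos):
    """Learn pairwise ordering constraints from demonstrated step sequences.

    demos: list of step-name sequences; a constraint (a, b) is kept only
    if step a preceded step b in every demonstration.
    """
    constraints = set()
    for a, b in combinations(set(demos[0]), 2):
        if all(seq.index(a) < seq.index(b) for seq in demos):
            constraints.add((a, b))
        elif all(seq.index(b) < seq.index(a) for seq in demos):
            constraints.add((b, a))
    return constraints


def sequence_task(steps, constraints):
    """Order the steps so that every learned precedence constraint holds."""
    remaining = list(steps)
    order = []
    while remaining:
        # pick the first step whose predecessors have all been scheduled
        ready = next(s for s in remaining
                     if not any(a in remaining for a, b in constraints if b == s))
        order.append(ready)
        remaining.remove(ready)
    return order


# Example: the fork can be picked up before or after the plate is placed,
# but each object must always be picked up before it is placed.
demos = [
    ["pick_plate", "place_plate", "pick_fork", "place_fork"],
    ["pick_plate", "pick_fork", "place_plate", "place_fork"],
    ["pick_fork", "pick_plate", "place_plate", "place_fork"],
]
constraints = learn_precedence_constraints(demos)
print(sequence_task(demos[0], constraints))
```

Pairing the learned constraints with the trajectory models from the previous assignment then gives you both what to do and how to move when executing each step.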
As before:
- This is a group assignment; please continue working with the same classmates.
- The assignment should be submitted on JupyterHub. The submission should include your code and a short discussion about the evaluation of the system in the simulation. Please also be ready to demonstrate your implementation during our lab class.