Wednesdays at 2pm in Upson 531.
Spring 2017 Schedule:
| 1/25 | Ross Knepper | Robotics Community Discussion |
| 2/15 | Chris Mavrogiannis | Robotic Personal Assistants Lab Chalk Talks |
ABSTRACT: The Robotic Personal Assistants Lab (RPAL) under PI Prof. Knepper investigates technologies to make robots behave as peers in collaborative tasks with people. In this seminar, several members of the lab will give informal chalk talks to describe their current research. These talks are meant to be interactive and accessible to a robotics audience. Rather than polished talks, these are snapshots of works in progress. We hope that this session will serve as a template for other labs at Cornell to emulate.
| 2/22 | Carlo Pinciroli | Robot Swarms as a Programmable Machine |
ABSTRACT: Robot swarms promise to offer solutions for applications that today are considered dangerous, expensive, or even impossible. Notable examples include construction, space exploration, mining, ocean restoration, nanomedicine, disaster response, and humanitarian demining. The diverse and large-scale nature of these applications requires the coordination of numerous robots, likely on the order of hundreds or thousands, with heterogeneous capabilities. Swarm engineering is an emerging research field that studies how to model, design, develop, and verify swarm systems. In this talk, I will discuss the aspects of swarm engineering that intersect with classical computer science. In particular, focusing on the concept of robot swarms as a “programmable machine”, I will analyze the issues that arise when one wants to write programs for swarms. After presenting Buzz, a programming language for swarms on which I worked during my postdoc, I will outline a number of open problems on which I intend to work in the coming years.
BIO: Carlo Pinciroli is an assistant professor at Worcester Polytechnic Institute, where he leads the NEST Lab. His research interests include swarm robotics and software engineering. Prof. Pinciroli obtained a Master’s degree in Computer Engineering at Politecnico di Milano, Italy, and a Master’s degree in Computer Science at the University of Illinois at Chicago in 2005. He then worked for one year on several projects for the Barclays Bank PLC group. In 2006 he joined the IRIDIA laboratory at the Université Libre de Bruxelles in Belgium, under the supervision of Prof. Marco Dorigo. While at IRIDIA, he obtained a Diplôme d’études approfondies in 2007 and a PhD in applied sciences in 2014, and he completed an 8-month postdoctoral period. Between 2015 and 2016, Prof. Pinciroli was a postdoctoral researcher at MIST, École Polytechnique de Montréal in Canada, under the supervision of Prof. Giovanni Beltrame. Prof. Pinciroli has published 49 peer-reviewed articles and 2 book chapters, and edited 1 book. In 2015, the F.R.S.-FNRS awarded him the most prestigious postdoctoral scholarship in Belgium (Chargé des Recherches).
| 3/1 | Jim Jing and Scott Hamill | Modularity and Design |
ABSTRACT: The Verifiable Robotics Research Group has been exploring different aspects of modularity in robot control and design. In this two-part talk, Jim will describe current work on high-level control of modular robots (in collaboration with Mark Campbell’s and Mark Yim’s groups), and Scott will describe our initial thoughts on task-influenced design of modular soft robots (in collaboration with Rob Shepherd’s group).
| 3/29 | Ryan O’Hern | Networking Many Small Satellites |
| 4/5 | CORNELL SPRING BREAK |
| 4/12 | Jesse Goldberg | Dopamine-based error signals suggest a reinforcement learning algorithm during song acquisition in birds |
ABSTRACT: Reinforcement learning enables animals to learn to select the most rewarding action in a given context. Edward Thorndike posed a simple solution to this problem in his Law of Effect: ‘Responses that produce a satisfying effect in a particular situation become more likely to occur again in that situation, and responses that produce a discomforting effect become less likely to occur again in that situation.’ This idea underlies stimulus-response, reinforcement, and instrumental learning, and implementing it requires three pieces of information: (1) the action (response) an animal makes; (2) the context (situation) in which the action is taken; and (3) an evaluation of the outcome (effect).

In vertebrates, the basal ganglia (BG) have been proposed to integrate the three pieces of information required for reinforcement learning: (1) the situation, or current context, is thought to be signaled by a massive projection from the cortex to the striatum, the input layer of the BG; (2) the chosen action is signaled by striatal medium spiny neurons (MSNs) that drive behavior via projections to downstream motor centers; and (3) the evaluation of the outcome is transmitted to the striatum by midbrain dopamine (DA) neurons. These signals underlie a simple ‘three-factor learning rule’: if a cortical input is active (signifying a context), the MSN discharges (driving the chosen action), and an increase in DA subsequently occurs (signifying a good outcome), then the connection strength of that cortical input onto the MSN is increased. Overall, by controlling the strength of the corticostriatal synapse, this dopamine-modulated corticostriatal plasticity governs which action will be chosen in a given context, placing DA in the premier position of determining what animals will learn and how they will behave.

Here, I will discuss how our recent identification of dopaminergic error signals in birdsong supports the potential generality of dopamine-modulated corticostriatal plasticity in implementing learning across a wide range of behaviors.
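The three-factor learning rule described in the abstract can be sketched in a few lines of code. This is a minimal illustrative model, not the speaker's actual model or data: the function name, learning rate, and signal encodings are all assumptions made for the sketch. The key property is that plasticity is gated multiplicatively by all three factors, so the synapse changes only when the cortical input was active, the MSN fired, and dopamine deviated from baseline.

```python
def three_factor_update(w, pre_active, post_fired, dopamine,
                        baseline=0.0, lr=0.1):
    """Hypothetical corticostriatal weight update under a three-factor rule.

    w           -- current corticostriatal synaptic weight
    pre_active  -- 1 if the cortical input was active (context), else 0
    post_fired  -- 1 if the MSN discharged (action), else 0
    dopamine    -- dopamine level following the outcome
    baseline    -- tonic dopamine level; deviations act as the error signal
    lr          -- learning rate (illustrative value)
    """
    # All three factors gate the update multiplicatively: if any factor is
    # zero the synapse is unchanged; a dopamine dip below baseline weakens it.
    return w + lr * pre_active * post_fired * (dopamine - baseline)


# Context present, action taken, rewarding outcome -> synapse strengthened
w_up = three_factor_update(0.5, pre_active=1, post_fired=1, dopamine=1.0)

# Same pairing but a dopamine dip (bad outcome) -> synapse weakened
w_down = three_factor_update(0.5, pre_active=1, post_fired=1, dopamine=-1.0)

# No action taken -> no learning, regardless of dopamine
w_same = three_factor_update(0.5, pre_active=1, post_fired=0, dopamine=1.0)
```

In a full simulation this update would be applied after each action, so that over repeated trials the actions followed by dopamine increases become more likely in their eliciting context, which is exactly Thorndike's Law of Effect restated as a synaptic rule.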