Robotics Seminar Spring 2018

Wednesdays at 2:55-4:10pm, Upson 116 (The Lounge).

Light refreshments served starting at 2:30.

 

Robotics Seminar is sponsored by HRG Robotics.

Spring 2018 Schedule

 1/24  Hadas Kress-Gazit, Cornell University              Synthesis for Composable Robots: Guarantees and Feedback for Complex Behaviors
Getting a robot to perform a complex task, for example completing the DARPA Robotics Challenge, typically requires a team of engineers who program the robot in a time-consuming and error-prone process and who validate the resulting robot behavior through testing in different environments. The vision of synthesis for robotics is to bypass the manual programming and testing cycle by enabling users to provide specifications – what the robot should do – and automatically generating, from the specification, robot control that provides guarantees for the robot’s behavior.

This talk will describe the work done in the verifiable robotics research group towards realizing the synthesis vision and will focus on synthesis for composable robots – modular robots and swarms. Such robotic systems require new abstractions and synthesis techniques that address the overall system behavior in addition to the individual control of each component, i.e., a module or swarm member.

 1/31 Susan Fussell and Elijah Weber-Han, Cornell University                    Explorations using Telepresence Robots in the Wild

Mobile Robotic (Tele)Presence (MRP) systems are a promising technology for distance interaction because they provide both embodiment and mobility. In principle, MRPs have the potential to support a wide array of informal activities, such as walking across campus, attending a movie, or visiting a restaurant. However, realizing this potential has been challenging due to a host of issues, including internet connectivity, audio interference, limited mobility, and limited line of sight. We will describe some ongoing work looking at the benefits and challenges of using MRPs in the wild. The goal of this work is to develop a framework for understanding MRP use in informal social settings that captures key relationships among the physical requirements of the setting, the social norms of the setting, and the challenges posed for MRP pilots and people in the local environment. This framework will then inform the design of novel user interfaces and crowdsourcing techniques to help MRP pilots anticipate and overcome the challenges of specific informal social settings.

Joint Work: Sue Fussell, Elijah Weber-Han, Dept. of Communication & Dept. of Info. Science at Cornell University

2/7 Panel, Cornell University                                            What We Talk About When We Talk About Design

 Panelists: Keith Evan Green, Kirstin Hagelskjaer Petersen, Guy Hoffman, Rob Shepherd and François Guimbretière.

As panelists, we will interact with each other and the audience on the topic of what design means for robotics and what robotics means for design. The panelists would also like to briefly discuss the Design Q exam.

2/14 Bo Fu, Cornell University                                             Sailing in Space

A solar sail is a type of spacecraft propelled by harvesting momentum from solar radiation. Compared with spacecraft propelled by traditional chemical rockets or the more advanced electric propulsion engines, the unique feature of solar sails is that they do not use fuel for propulsion. This allows for the possibility of return-type (round-trip) missions to other heavenly bodies, which would be difficult or nearly impossible with conventional propulsion methods. This also makes solar sails highly promising candidates for service as interplanetary cargo ships in future space missions.

Solar sail research is quite broad and multi-disciplinary. In this talk, an overview of solar sail technology is presented, including the history, the fundamentals of photon-sail interaction, and the state of the art of solar sailing. One specific area of solar sail research – attitude dynamics and control – is discussed in detail. Attitude control of large sails poses a challenge because most methods developed for solar sail attitude control require the controller mass to scale with the sail’s surface area. This is addressed by a newly proposed tip displacement method (TDM), in which, by moving the wing tips, the geometry of the sail film is exploited to generate the necessary control forces and torques. The TDM is described as it applies to a square solar sail consisting of four triangular wings. The mathematical relationship between the displacement of the wing tip and the control torque generated is fully developed under quasi-static conditions and assuming the wing takes on the shape of a right cylindrical shell. Results from further investigation that relaxes these modeling assumptions are presented. Future research directions in aerospace engineering spanning the fields of autonomy, sensing, controls, and modeling are discussed.

2/21 Guy Hoffman, Cornell University                                Science Fiction / Double Feature: Design Q Exam and Nonverbal Behaviors
In this informal meeting of the robotics seminar, we will make good on our promise to discuss the structure of the new(ish) Design Q exam, including presentations by faculty on the expectations, war stories from students who took the Design Q, and Q&A (no pun intended). The second part of this double-feature seminar is a presentation and discussion of one of the classic papers at the foundation of HRI, Paul Ekman’s 1969 article “The Repertoire of Nonverbal Behavior: Categories, Origins, Usage, and Coding”, which is at the basis of decades of research on body language and a must-know for any researcher interested in HRI systems using gestures and facial expressions. For some non-light reading: http://www.communicationcache.com/uploads/1/0/8/8/10887248/the_repertoire_of_nonverbal_behavior_categories_origins__usage_and_coding.pdf
2/28  Ross Knepper, Cornell University                              Autonomy, Embodiment, and Anthropomorphism: the Ethics of Robotics

A robot is an artificially intelligent machine that can sense, think, and act in the world. Its physical, embodied aspect sets a robot apart from other artificially intelligent systems, and it also profoundly affects the way that people interact with robots. Although a robot is an autonomous, engineered machine, its appearance and behavior can trigger anthropomorphic impulses in people who work with it. In many ways, robots occupy a niche that is somewhere between man and machine, which can lead people to form unhealthy emotional attitudes towards them. We can develop unidirectional emotional bonds with robots, and there are indications that robots occupy a distinct moral status from humans, leading us to treat them without the same dignity afforded to a human being. Are emotional relationships with robots inevitable? How will they influence human behavior, given that robots do not reciprocate as humans would? This talk will examine issues such as cruelty to robots, sex robots, and robots used for sales, guard or military duties.

This talk was previously presented in spring 2017 as part of CS 4732: Social and Ethical Issues in AI.

3/7  Andy Ruina, Cornell University                                How do people, and how should legged robots, avoid falling down?

What actuators does a person or legged robot have available to help prevent falls? Only ones that can move the relative horizontal position of the support point and the center of mass. What are these? Ankle torques, distortions of the upper body (bending at hips, swinging arms), stepping, and pushing off. Of these, by far the biggest control authority is in stepping and pushing off. And these can be well understood, and well approximated, by a point-mass model. Why? Because the same things that can’t help much, namely ankle torques and upper-body distortions, can’t hurt much either. Thus, we believe we can design a robust balance controller using foot placement and pushoff and nothing else. And, reverse engineering, we think this also explains most of what people do, at least when recovering from large disturbances. A balanced broomstick, a Segway, a bicycle, a walking robot, and a walking person all use the same basic idea.
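The point-mass, foot-placement idea above has a standard textbook form: for a linearized inverted pendulum of height z0, stepping to the "capture point" brings the center of mass to rest over the new foot. This is a generic sketch of that classical result, not the speaker's own model; the pendulum height and the one-dimensional setting are assumptions for illustration.

```python
import math

def capture_point(x_com, v_com, z0=1.0, g=9.81):
    """Instantaneous capture point of a linear inverted pendulum.

    x_com: horizontal center-of-mass position (m)
    v_com: horizontal center-of-mass velocity (m/s)
    z0:    pendulum (CoM) height (m), assumed constant
    g:     gravitational acceleration (m/s^2)

    Placing the support point here cancels the CoM's momentum,
    which is why foot placement alone carries so much control authority.
    """
    return x_com + v_com * math.sqrt(z0 / g)

# A walker pushed to 0.5 m/s should step roughly 0.16 m ahead of its CoM.
step_target = capture_point(x_com=0.0, v_com=0.5)
```

A stationary pendulum (v_com = 0) yields a step directly under the CoM, matching the intuition that only the relative horizontal position of support point and center of mass matters.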

3/14  Kirstin H. Petersen, Cornell University                       Multi-Robot Mini Symposium
This Multi-Robot Mini Symposium will feature a series of brief talks by students and professors on recent work in multi-robot/swarm robotics research. The goal is to identify and inspire new ideas among the multi-robot community at Cornell. We are looking for speakers – please notify Kirstin Petersen (khp37) if you would like to give a pitch!
3/21 Robotics Debate/Discussion
This week we will host a debate/discussion on some topics in robotics. There is still time to contribute discussion questions here:
https://docs.google.com/document/d/1_H3M-WIM6UN_TMsNQvgW9sMoYGVBuFXwi14fEor5tDM/edit?usp=sharing
Anything is fair game. The topics will be announced Wednesday morning. Good questions have an opportunity for deep discussion, support a variety of viewpoints, and engage the broad robotics community.
You may sign your name or leave your question anonymous. If you put your name, you are volunteering to give a few-sentence explanation of the question and its implications.
-Ross
3/28 Steve Supron, Maidbot                                        Maidbot: Designing and Building Rosie the Robot for the Hospitality Industry
Steve Supron joined Maidbot as Manufacturing Lead during its incubation days over two years ago at REV Ithaca. Micah Green, a former Cornellian and the founder and CEO of Maidbot, hired Steve to help bring his dream of Rosie the Robot to the hotel industry. Steve will present the company’s story as well as the challenges and considerations of robotics in a hospitality setting. Steve will review some of the unique design decisions and technology and production choices the team has made along the way from early prototypes to testable pilot units and on to the production design.
4/4 Spring Break*
 *There will be no seminar on this date due to Spring Break.
4/11  Claudia Pérez D’Arpino, MIT                             Learning How to Plan for Multi-Step Manipulation in Collaborative Robotics

Abstract: The use of robots for complex manipulation tasks is currently challenged by the limited ability of robots to construct a rich representation of the activity at both the motion and task levels in ways that are both functional and apt for human-supervised execution. For instance, the operator of a remote robot would benefit from planning assistance, as opposed to the currently used method of joint-by-joint direct teleoperation. In manufacturing, robots are increasingly expected to execute manipulation tasks in a shared workspace with humans, which requires the robot to be able to predict human actions and plan around these predictions. In both cases, it is beneficial to deploy systems that are capable of learning skills from observed demonstrations, as this would enable the application of robotics by users without programming skills. However, previous work on learning from demonstrations is limited in the range of tasks that can be learned and generalized across different skills and different robots. In this talk, I present C-LEARN, a method of learning from demonstrations that supports the use of hard geometric constraints for planning multi-step functional manipulation tasks with multiple end effectors in quasi-static settings, and show the advantages of using the method in a shared autonomy framework.

Speaker Bio: Claudia Pérez D’Arpino is a PhD candidate in the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, advised by Prof. Julie A. Shah in the Interactive Robotics Group since 2012. She received her degree in Electronics Engineering (2008) and Master’s in Mechatronics (2010) from the Simon Bolivar University in Caracas, Venezuela, where she served as Assistant Professor in the Electronics and Circuits Department (2010-2012) with a focus on robotics. She participated in the DARPA Robotics Challenge with Team MIT (2012-2015). Her research at CSAIL combines machine learning and planning techniques to empower humans through the use of robotics and AI. Her PhD research centers on enabling robots to learn and create strategies for multi-step manipulation tasks by observing demonstrations, and on developing efficient methods for robots to employ these skills in collaboration with humans, either in shared-workspace collaboration, such as assembly in manufacturing, or in remote robot control under shared autonomy, such as emergency response scenarios. Web: http://people.csail.mit.edu/cdarpino/

4/18 Dr. Girish Chowdhary, UIUC, Co-Founder EarthSense Inc.            Autonomous and Intelligent Robots in Unstructured Field Environments

 *Moved to Upson 106*

Abstract: What if a team of collaborative autonomous robots grew your food for you? In this talk, I will demonstrate some key theoretical and algorithmic advances in adaptive control, reinforcement learning, collaborative autonomy, and robot-based analytics that my group is working on to bring this future a lot nearer!

I will discuss my group’s theoretical and practical work towards the challenges in making autonomous, persistent, and collaborative field robotics a reality. I will discuss new algorithms that are laying the foundation for robust long-duration autonomy in harsh, changing, and uncertain environments, including deep learning for robot embedded vision, deep adversarial reinforcement learning for large state-action spaces, and transfer learning for deep reinforcement learning domains. I will also describe the new breed of lightweight, compact, and highly autonomous field robots that my group is creating and deploying in fields across the US. I will show several videos of the TerraSentia robot, which is being widely hailed as opening the doors to an exciting revolution in agricultural robotics by popular media, including Chicago Tribune, the MIT Technology Review, Discovery Canada and leading technology blogs. I will also discuss several technological and socio-economic challenges of making autonomous field-robotic applications with small robots a reality, including opportunities in high-throughput phenotyping, mechanical weeding, and robots for defense applications.

Speaker Bio: Girish Chowdhary is an assistant professor at the University of Illinois at Urbana-Champaign, and the director of the Distributed Autonomous Systems laboratory at UIUC. He holds a PhD (2010) from the Georgia Institute of Technology in Aerospace Engineering. He was a postdoc at the Laboratory for Information and Decision Systems (LIDS) of the Massachusetts Institute of Technology (2011-2013), and an assistant professor at Oklahoma State University’s Mechanical and Aerospace Engineering department (2013-2016). He also worked with the German Aerospace Center’s (DLR’s) Institute of Flight Systems for around three years (2003-2006). Girish’s ongoing research interest is in theoretical insights and practical algorithms for adaptive autonomy, with a particular focus on field robotics. He has authored over 90 peer-reviewed publications in various areas of adaptive control, robotics, and autonomy. On the practical side, Girish has led the development and flight-testing of over 10 research UAS platforms. UAS autopilots based on Girish’s work have been designed and flight-tested on six UASs, including by independent international institutions. Girish is an investigator on NSF, AFOSR, NASA, ARPA-E, and DOE grants. He is the winner of the Air Force Young Investigator Award and the Aerospace Guidance and Controls Systems Committee Dave Ward Memorial Award. He is the co-founder of EarthSense Inc., working to make ultralight agricultural robotics a reality.

4/25
5/2
5/9                    Kalesha Bullard, Georgia Tech

 

 

The schedule is maintained by Vanessa Maley (vsm34@cornell.edu) and Ross Knepper (rak@cs.cornell.edu). To be added to the mailing list, please email vsm34@cornell.edu.

Schedules for previous semesters
