Tuesdays at 3:00-4:00pm, Upson 106 (Conference Room Next to the Lounge).
Light refreshments served starting at 2:45.
Fall 2018 Schedule
|8/28||Shuo Li, Cornell University||Additive Manufacturing of Soft Robots|
|This talk will present multidisciplinary work spanning material composites and robotics. We have created new types of actuators, sensors, displays, and additive manufacturing techniques for soft robots and haptic interfaces. For example, we now use stretchable optical waveguides as sensors for high accuracy, repeatability, and material compatibility with soft actuators. For displaying information, we have created stretchable, elastomeric light-emitting displays as well as texture-morphing skins for soft robots. We have created a new type of soft actuator based on molding of foams, developed new chemical routes for stereolithography printing of silicone- and hydrogel-elastomer-based soft robots, and implemented deep learning in stretchable membranes for interpreting touch. All of these technologies depend on the iterative and complex feedback between material and mechanical design. I will describe this process, the present state of the art, and future opportunities for science in the space of additive manufacturing of elastomeric robots.|
|Wed 9/5||Adam Bry, Skydio||Scaling up autonomous flight|
NOTE: Special time and location: 5pm on Wednesday in Upson 106
Abstract: Drones hold enormous potential for consumer video, inspection, mapping, monitoring, and perhaps even delivery. They’re also natural candidates for autonomy and likely to be among the first widely-deployed systems that incorporate meaningful intelligence based on computer vision and robotics research. In this talk I’ll discuss the research we’re doing at Skydio, along with the challenges involved in building a robust robotics software system that needs to work at scale.
Bio: Adam Bry is co-founder and CEO of Skydio, a venture-backed drone startup based in the Bay Area. Prior to Skydio he helped start Project Wing at Google[x], where he worked on the flight algorithms and software. He holds an SM in Aero/Astro from MIT and a BS in Mechanical Engineering from Olin College. Adam grew up flying radio-controlled airplanes and is a former national champion in precision aerobatics. He was named to the MIT Tech Review 35 list in 2016.
9/4/18 Bonus Seminar this week!
4-5 p.m. in 203 Thurston Hall
Architectural Robotics: Ecosystems of Bits, Bytes and Biology
|9/11||Andy Ruina, Cornell University||Some Thoughts on Model Reduction for Robotics|
| These are unpublished thoughts, actually more questions than thoughts. And not all that well informed. So audience feedback is welcome, especially from people who know how to formulate machine learning problems (I already know, sort of, how to formulate MatSheen learning problems). One posing of many robotics control problems is as a general problem in “motor control” (a biological term, I think). Assume one has a machine and the best model (something one can compute simulations with) one can actually get of the machine, its environment, its sensors, and its computation abilities. One also has some sense of the uncertainty in various aspects of these. The general motor problem is this: given a history of sensor readings and requested goals (commands), and all of the givens above, what computation should be done to determine the motor commands so as to best achieve the goals? “Best” means most accurately and most reliably by whatever measures one chooses. If one poses this as an optimization problem over the space of all controllers (all mappings from command and sensor histories to the set of commands), it is too big a problem, even if coarsely discretized. Hence, everyone applies all manner of assumed simplifications before attempting to make a controller. The question here is this: can one pose an optimization problem for the best simplification? Can one pose it in a way such that finding a useful approximate solution could be useful? In bipedal robots there are various classes of simplified models used by various people to attempt to control their robots. Might there be a rational way to choose between them, or find better ones? As abstract as this all sounds, perhaps thinking about such things could help us make better walking-robot controllers.|
|9/18||Group Discussion||Big-data machine learning meets small-data robotics|
| Abstract: Machine learning techniques have transformed many fields, including computer vision and natural language processing, where plentiful data can be cheaply and easily collected and curated. Training data in robotics is expensive to collect and difficult to curate or annotate. Furthermore, robotics cannot be formulated simply as a prediction problem in the way that vision and NLP often can be. Robots must close the loop, meaning that we ask our learning techniques to consider the effect of possible decisions on future predictions. Despite exciting progress in some relatively controlled (toy) domains, we still lack good approaches to adapting modern machine learning techniques to the robotics problem. How can we overcome these hurdles? Please come prepared to discuss, and bring your own questions for the group, too!|
|9/25||Cheng Zhang, Cornell University||Sensing + Interaction On and Around the Body|
Abstract: Wearables are a significant part of the new generation of computing. Compared with more traditional computers (e.g., laptops, smartphones), wearable devices are more readily available for immediate use, but significantly smaller in size, creating new opportunities and challenges for on-body sensing and interaction. My holistic research approach (from problem understanding to invention to implementation and evaluation) investigates how to effectively exchange information between humans, their environment, and wearables. My Ph.D. thesis focuses on novel wearable input using on-body sensing through various high-level interaction gestures, low-level input events, and a redesign of the interaction. In this talk, I will highlight three projects. The first is a wearable ring that allows the user to input over 40 unistroke gestures (including text and numbers). It also shows how to overcome a limited training set size, a common challenge in applying machine learning techniques to real systems, through an understanding of the characteristics of data and algorithms. The second project demonstrates how to combine a strong, yet incomplete, understanding of on-body signal propagation physics with machine learning to create novel yet practical sensing and interaction techniques. The third project is an active acoustic sensing technique that enables a user to interact with wearable devices in the surrounding 3D space through continuous high-resolution tracking of the finger’s absolute 3D position. It demonstrates how to solve a technical interaction challenge through a deep understanding of signal propagation. I will also share my vision of future opportunities for on-body sensing and interaction, especially in high-impact areas such as health, activity recognition, AR/VR, and more futuristic interaction paradigms between humans and the increasingly connected environment.
Bio: Cheng Zhang is an assistant professor in Information Science at Cornell University. He received his Ph.D. in Computer Science at the Georgia Institute of Technology, advised by Gregory Abowd (IC) and Omer Inan (ECE). His research focuses on enabling the seamless exchange of information among humans, computers, and the environment, with a particular emphasis on the interface between humans and wearable technology. His Ph.D. thesis presents 10 different novel input techniques for wearables, some leveraging commodity devices while others incorporate new hardware. His work blends an understanding of signal propagation on and around the body with, when necessary, appropriate machine learning techniques. His work has resulted in over a dozen publications in top-tier conferences and journals in the fields of Human-Computer Interaction and Ubiquitous Computing (including two best paper awards), as well as over six pending U.S. and international patents. His work has attracted the attention of various media outlets, including ScienceDaily, DigitalTrends, ZDNet, New Scientist, RT, TechRadar, Phys.org, Yahoo News, Business Insider, and MSN News. The work leveraging commodity devices has had significant commercial impact: his work on novel smartwatch interaction was licensed by the Canadian startup ProximityHCI to improve the smartwatch interaction experience.
|10/2||Short Student Talks||Titles Below|
Presenter 1: Alap Kshirsagar, Hoffman Research Group
Title: Monetary-Incentive Competition between Humans and Robots: Experimental Results
Abstract: In this talk, I will describe an experiment studying monetary-incentive competition between a human and a robot. In this first-of-its-kind experiment, participants (n=60) competed against an autonomous robot arm in ten competition rounds, carrying out a monotonous task to win monetary rewards. For each participant, we manipulated the robot’s performance and the reward in each round. We found a small discouragement effect, with human effort decreasing as robot performance increased, significant at the p < 0.005 level. We also found a positive effect of the robot’s performance on its perceived competence, a negative effect on the participants’ liking of the robot, and a negative effect on the participants’ self-competence, all at p < 0.0001.
Presenter 2: Carlos Araújo de Aguiar, Green Research Group
Title: transFORM – A Cyber-Physical Environment Increasing Social Interaction and Place Attachment in Underused, Public Spaces
Abstract: The emergence of social networks and apps has reduced the importance of physical space as a locus for social interaction. In response, we introduce transFORM, a cyber-physical environment installed in under-used, outdoor, public spaces. transFORM embodies our understanding of how a responsive, cyber-physical architecture can augment social relationships and increase place attachment. In this paper we critically examine the social interaction problem in the context of our increasingly digital society, present our ambition, and introduce our prototype, which we will iteratively design and test. Cyber-physical interventions at large scale in public spaces are an inevitable future, and this paper serves to establish the fundamental terms of this frontier.
|10/9||Neil Dantam, Colorado School of Mines||Task and Motion Planning: Algorithms, Implementation, and Evaluation|
|Everyday tasks combine discrete and geometric decision-making. The robotics, AI, and formal methods communities have concurrently explored different planning approaches, producing techniques with different capabilities and trade-offs. We identify the combinatorial and geometric challenges of planning for everyday tasks, develop a hybrid planning algorithm, and implement an extensible planning framework. In ongoing work, we are improving the scalability and extensibility of our task-motion planner and developing planner-independent evaluation metrics.|
|10/16||Dylan Shell, Texas A&M University||Active Imperception and Naïve Robots|
|If robots become deeply interwoven in our lives, they’ll learn a great deal about us. Such robots may then disclose information about us, either if compromised by ne’er-do-wells or more simply when observed by third parties. In this talk I’ll describe a few ways we’ve been thinking about robotic privacy recently. This will include a privacy-preserving tracking problem, where we’ll look at how one might think about estimators which are constrained so as to ensure they never know too much, and how we can solve planning problems subject to stipulations on the information divulged during plan execution. In these cases, sensors can provide too much information, and an important question is: what sort of sensors are needed to ensure that the robot has the opportunity to cultivate ignorance? This is a robot design question—one which we’ll also examine briefly.|
|10/23||Short Student Talks||Titles Below|
Speaker 1: Thais Campos de Almeida, Cornell University (Kress-Gazit Group)
Title: A novel approach to synthesize task-based designs of modular manipulators
Speaker 2: Yuhan Hu, Cornell University (Hoffman Group)
Title: Using Skin Texture Change to Design Social Robots
Abstract: Robots designed for social interaction often express their internal and emotional states through nonverbal behavior. Most robots use their facial expressions, gestures, locomotion, and tone of voice. In this talk, I will present a new expressive nonverbal channel for social robots in the form of texture-changing skin. This is inspired by biological systems, which frequently respond to external stimuli and display their internal states through skin texture change. I will present the design of the robot and some findings from an experiment on user-robot interaction.
Speaker 3: Haron Abdel-Raziq, Cornell University (Petersen Group)
Title: Leveraging Honey Bees as Cyber Physical Systems
Abstract: Honey bees, nature’s premier agricultural pollinators, have proven capable of robust, complex, and versatile operation in unpredictable environments far beyond what is possible with state-of-the-art robotics. Beekeepers and farmers are heavily dependent on honey bees for successful crop yields, as evidenced by the $150B global pollination industry. This, coupled with the current interest in bio-inspired robotics, has prompted research on understanding honey bee swarms and their behavior both inside and outside the hive. Prior attempts at monitoring bees have been limited to expensive, complicated, short-range, or obstruction-sensitive approaches. By combining traditional engineering methods with the honey bee’s extraordinary capabilities, we present a novel solution to monitor long-range bee flights by utilizing a new class of easily manufactured sensor and a probabilistic mapping algorithm. Specifically, the goal is to equip bees with millimeter-scale ASIC technology “backpacks” that record key flight information, thus transforming a honey bee swarm into a vast cyber-physical system which can acquire data related to social insect behavior as well as bust and bloom over large areas. Foraging probability maps will then be developed by applying a simultaneous localization and mapping algorithm to the gathered data. The project is still in its initial phase, so we will discuss the motivation for the project and provide background on the various enabling technologies. We will then discuss a prototype system for gathering data on flight patterns prior to placing the actual technology on a bee. The data yielded from this work will benefit both the scientific community and beekeepers, with knowledge gains spanning low-power micro-scale devices and robotics to an improved understanding of how pollination occurs in different environments.
|10/30||Short Student Talks||Titles Below|
Speaker 1: Adam Pacheck, Cornell University
Title: Reactive Composition of Learned Abstractions
Speaker 2: Yixiao Wang, Cornell University
Title: “Space Agent” as a Design Partner – Studying and designing interactions between robot surfaces and human designers
Abstract: In this presentation, we first propose the concept of “Space Agents”: interactive and intelligent environments perceived by users as human agents. The concept is grounded in communication theories and functions as a bridge between human users and the built environment. To better study human-human-like interactions and partnerships between users and their environments, we have decided to design and study interactions between “space agents” and human designers, which is my dissertation topic. More specifically, we would like to study and test the following hypotheses: 1) “space agents” can form a (temporary) partnership with human designers; 2) a “space agent”, together with the “designer-space partnership”, can improve designers’ work performance, perceived spatial support, and work-life quality. We propose to design continuous robotic surfaces as space-making robots that give agency to a traditional working space. Scenarios are specified to demonstrate how these robotic surfaces could enable spatial reconfigurations as an effective partner, and previous works are presented to show the progress of my dissertation.
Speaker 3: Ryan O’Hern, Cornell University
Title: Automating Vineyard Yield Prediction
Abstract: Advances in mobile computing, sensors, and machine learning technology have been a boon to the fields of agricultural robotics and precision agriculture. In this talk, I will discuss preliminary results of an on-going collaboration between Cornell’s College of Engineering and the College of Agriculture and Life Sciences to advance viticultural practices with new robotics techniques. This talk will focus on our initial work to predict yield in vineyards using computer vision techniques.
|11/6||Short Student Talks||Titles Below|
Speaker 1: Nialah Wilson, Cornell University
Title: Design, Coordination, and Validation of Controllers for Decision Making and Planning in Large-Scale Distributed Systems
Abstract: A good swarm will comprise cheap, simple robots and run on efficient algorithms, making it scalable with regard to cost, computation, and maintenance. Previous work has controlled large-scale distributed systems with centralized or decentralized control, but none examines what happens when modules are allowed to decide when to switch between control schemes, or explores the optimality and guarantees that can still be provided in a hybrid control system. I propose using two robotic platforms, a flexible modular robot and a team of micro blimps, to study decision making and task-oriented behaviors in large-scale distributed systems by creating new hybrid control algorithms for an extended subsumption architecture.
Speaker 2: Wil Thomason, Cornell University
Title: A Flexible Sampling-Based Approach to Integrated Task and Motion Planning
Abstract: Integrated Task and Motion Planning (TAMP) seeks to combine tools from symbolic (task) planning and geometric (motion) planning to efficiently solve geometrically constrained long-horizon planning problems. In this talk, I will present some of my work in progress on a new approach to solving the TAMP problem based on a real-valued “unsatisfaction” semantics for interpreting symbolic formulae. This semantics permits us to directly sample in regions where the preconditions for symbolic actions are satisfied. In conjunction with arbitrary task-level heuristics, this enables us to use off-the-shelf sampling-based motion planning to efficiently solve TAMP problems.
Speaker 3: Ji Chen, Cornell University
Title: Verifiable Control of Robotic Swarms from High-level Specifications
Abstract: Designing controllers automatically for robotic swarm systems to guarantee safety, correctness, scalability, and flexibility in achieving high-level tasks remains a challenging problem. In this talk, I will present a control scheme that takes in specifications for high-level tasks and outputs continuous controllers which result in the desired collective behaviors. In particular, I will discuss the properties that the swarm must have at the continuous level to ensure the correctness of the mapping from symbolic plans to real-world execution. In addition, I will compare centralized and decentralized approaches in terms of time efficiency, failure resilience, and computational complexity.
|11/13||Tariq Iqbal, MIT||Coordination dynamics in human-robot teams|
Abstract: As autonomous robots are becoming more prominent across various domains, they will be expected to interact and work with people in teams. If a robot has an understanding of the underlying dynamics of a group, then it can recognize, anticipate, and adapt to the human motion to be a more effective teammate. In this talk, I will present algorithms to measure the degree of coordination in groups and approaches to extend these understandings by a robot to enable fluent collaboration with people. I will first describe a non-linear method to measure group coordination, which takes multiple types of discrete, task-level events into consideration. Building on this method, I will then present two anticipation algorithms to predict the timings of future actions in teams. Finally, I will describe a fast online activity segmentation algorithm which enables fluent human-robot collaboration.
Bio: Tariq Iqbal is a postdoctoral associate in the Interactive Robotics Group at MIT. He received his Ph.D. from the University of California San Diego, where he was a member of the Contextual Robotics Institute and the Healthcare Robotics Lab. His research focuses on developing algorithms for robots to solve problems in complex human environments, by enabling them to perceive, anticipate, adapt, and collaborate with people.
|11/20||Tom Howard, University of Rochester||Learning Adaptive Models for Robot Motion Planning and Human-Robot Interaction|
|Abstract: The efficiency and optimality of robot decision making is often dictated by the fidelity and complexity of models for how a robot can interact with its environment. It is common for researchers to engineer these models a priori to achieve particular levels of performance for specific tasks in a restricted set of environments and initial conditions. As we progress towards more intelligent systems that perform a wider range of objectives in a greater variety of domains, the models for how robots make decisions must adapt to achieve, if not exceed, engineered levels of performance. In this talk I will discuss progress towards model adaptation for robot intelligence, including recent efforts in natural language understanding for human-robot interaction and robot motion planning.
Biosketch: Thomas Howard is an assistant professor in the Department of Electrical and Computer Engineering at the University of Rochester. He also holds secondary appointments in the Department of Biomedical Engineering, Department of Computer Science, and Department of Neuroscience and directs the University of Rochester’s Robotics and Artificial Intelligence Laboratory. Previously he held appointments as a research scientist and a postdoctoral associate at MIT’s Computer Science and Artificial Intelligence Laboratory in the Robust Robotics Group, a research technologist at the Jet Propulsion Laboratory in the Robotic Software Systems Group, and a lecturer in mechanical engineering at Caltech, and was a Goergen Institute for Data Science Center of Excellence Distinguished Researcher. Howard earned a PhD in robotics from the Robotics Institute at Carnegie Mellon University in 2009 in addition to BS degrees in electrical and computer engineering and mechanical engineering from the University of Rochester in 2004. His research interests span artificial intelligence, robotics, and human-robot interaction with particular research focus on improving the optimality, efficiency, and fidelity of models for decision making in complex and unstructured environments with applications to robot motion planning and natural language understanding. Howard was a member of the flight software team for the Mars Science Laboratory, the motion planning lead for the JPL/Caltech DARPA Autonomous Robotic Manipulation team, and a member of Tartan Racing, winner of the DARPA Urban Challenge. Howard has earned Best Paper Awards at RSS (2016) and IEEE SMC (2017), two NASA Group Achievement Awards (2012, 2014), and was a finalist for the ICRA Best Manipulation Paper Award (2012).
Howard’s research at the University of Rochester has been supported by National Science Foundation, Army Research Office, Army Research Laboratory, Department of Defense Congressionally Directed Medical Research Program, and the New York State Center of Excellence in Data Science.
|11/27||Keith LeGrand, Sandia National Lab||Finite Set Statistics Based Multi-object Tracking: Recent Advances, Challenges, and Space Applications|
|Abstract: Multi-object tracking is the process of simultaneously estimating an unknown number of objects and their partially hidden states using unlabeled noisy measurement data. Common applications of multi-object tracking algorithms include space situational awareness (SSA), missile defense, pedestrian tracking, and airborne surveillance. In recent years, a new branch of statistical calculus known as finite set statistics (FISST) has provided a formalism for solving such tracking problems and has resulted in a renaissance in tracking research. Today, researchers are applying FISST to formalize and solve problems not typically thought of as traditional tracking problems, such as robotic simultaneous localization and mapping (SLAM), obstacle localization for driverless vehicles, lunar descent and landing, and autonomous swarm control. This talk discusses the basic principles of multi-object tracking with a focus on FISST and highlights recent advancements. Special challenges, such as probabilistic object appearance detection, extended object tracking, and distributed multi-sensor fusion are presented. Finally, this talk will present the latest application of FISST theory to sensor planning, whereby multi-object information measures are used to optimize the performance of large dynamic sensor networks.|
|12/4||Short Student Talks||Titles Below|
Matt Law, Cornell University
Steven Ceron, Cornell University
Chris Mavrogiannis, Cornell University
Wednesdays at 2:55-4:10pm, Upson 116 (The Lounge).
Light refreshments served starting at 2:30.
Robotics Seminar is sponsored by HRG Robotics
Spring 2018 Schedule
|1/24||Hadas Kress-Gazit, Cornell University||Synthesis for Composable Robots: Guarantees and Feedback for Complex Behaviors|
| Getting a robot to perform a complex task, for example completing the DARPA Robotics Challenge, typically requires a team of engineers who program the robot in a time-consuming and error-prone process and who validate the resulting robot behavior through testing in different environments. The vision of synthesis for robotics is to bypass the manual programming and testing cycle by enabling users to provide specifications – what the robot should do – and automatically generating, from the specification, robot control that provides guarantees for the robot’s behavior.
This talk will describe the work done in the verifiable robotics research group towards realizing the synthesis vision and will focus on synthesis for composable robots – modular robots and swarms. Such robotic systems require new abstractions and synthesis techniques that address the overall system behavior in addition to the individual control of each component, i.e., module or swarm member.|
|1/31||Susan Fussell and Elijah Webber-Han||Explorations using Telepresence Robots in the Wild|
Mobile Robotic (Tele)Presence (MRP) systems are a promising technology for distance interaction because they provide both embodiment and mobility. In principle, MRPs have the potential to support a wide array of informal activities, such as walking across campus, attending a movie, or visiting a restaurant. However, realizing this potential has been challenging, due to a host of issues including internet connectivity, audio interference, limited mobility, and limited line of sight. We will describe some ongoing work looking at the benefits and challenges of using MRPs in the wild. The goal of this work is to develop a framework for understanding MRP use in informal social settings that captures key relationships among the physical requirements of the setting, the social norms of the setting, and the challenges posed for MRP pilots and people in the local environment. This framework will then inform the design of novel user interfaces and crowdsourcing techniques to help MRP pilots anticipate and overcome challenges of specific informal social settings.
Joint Work: Sue Fussell, Elijah Weber-Han, Dept. of Communication & Dept. of Info. Science at Cornell University
|2/7||Panel, Cornell University||What We Talk About When We Talk About Design|
Panelists: Keith Evan Green, Kirstin Hagelskjaer Petersen, Guy Hoffman, Rob Shepherd and François Guimbretière.
As panelists, we will interact with each other and the audience on the topic of what design means for robotics and what robotics means for design. The panelists would also like to briefly discuss the Q-exam in design.
|2/14||Bo Fu, Cornell University||Sailing in Space|
A solar sail is a type of spacecraft propelled by harvesting momentum from solar radiation. Compared with spacecraft propelled by traditional chemical rockets or the more advanced electric propulsion engines, the unique feature of solar sails is that they do not use fuel for propulsion. This allows for the possibility of return-type (round-trip) missions to other heavenly bodies, which would be difficult or nearly impossible with conventional propulsion methods. It also makes solar sails highly promising candidates for service as interplanetary cargo ships in future space missions.
Solar sail research is quite broad and multidisciplinary. In this talk, an overview of solar sail technology is presented, including the history, the fundamentals of photon-sail interaction, and the state of the art of solar sailing. One specific area of solar sail research – attitude dynamics and control – is discussed in detail. Attitude control of large sails poses a challenge because most methods developed for solar sail attitude control require the controller mass to scale with the sail’s surface area. This is addressed by a newly proposed tip displacement method (TDM), in which moving the wing tips exploits the geometry of the sail film to generate the necessary control forces and torques. The TDM is described as it applies to a square solar sail consisting of four triangular wings. The mathematical relationship between the displacement of the wing tip and the control torque generated is fully developed under quasi-static conditions, assuming the wing takes on the shape of a right cylindrical shell. Results from further investigation that relaxes the previous modeling assumptions are presented. Future research directions in aerospace engineering spanning the fields of autonomy, sensing, controls, and modeling are discussed.
|2/21||Guy Hoffman, Cornell University||Science Fiction / Double Feature: Design Q Exam and Nonverbal Behaviors|
|In this informal meeting of the robotics seminar, we will make good on our promise to discuss the structure of the new(ish) Design Q exam, including presentations by faculty of the expectations, war stories from students who took the Design Q, and Q&A (no pun intended). The second part of this double-feature seminar is a presentation and discussion of one of the classic papers at the foundation of HRI, Paul Ekman’s 1969 article “The Repertoire of Nonverbal Behavior: Categories, Origins, Usage, and Coding”, which is the basis of decades of research on body language and a must-know for any researcher interested in HRI systems using gestures and facial expressions. For some non-light reading: http://www.communicationcache.com/uploads/1/0/8/8/10887248/the_repertoire_of_nonverbal_behavior_categories_origins__usage_and_coding.pdf|
|2/28||Ross Knepper, Cornell University||Autonomy, Embodiment, and Anthropomorphism: the Ethics of Robotics|
A robot is an artificially intelligent machine that can sense, think, and act in the world. Its physical, embodied aspect sets a robot apart from other artificially intelligent systems, and it also profoundly affects the way that people interact with robots. Although a robot is an autonomous, engineered machine, its appearance and behavior can trigger anthropomorphic impulses in people who work with it. In many ways, robots occupy a niche that is somewhere between man and machine, which can lead people to form unhealthy emotional attitudes towards them. We can develop unidirectional emotional bonds with robots, and there are indications that robots occupy a distinct moral status from humans, leading us to treat them without the same dignity afforded to a human being. Are emotional relationships with robots inevitable? How will they influence human behavior, given that robots do not reciprocate as humans would? This talk will examine issues such as cruelty to robots, sex robots, and robots used for sales, guard or military duties.
This talk was previously presented in spring 2017 as part of CS 4732: Social and Ethical Issues in AI.
|3/7||Andy Ruina||How do people, and how should legged robots, avoid falling down?|
What actuators does a person or legged robot have available to help prevent falls?
|3/14||Kirstin H. Petersen, Cornell University||Multi-Robot Mini Symposium|
|This Multi-Robot Mini Symposium will feature a series of brief talks by students and professors related to recent work on Multi-Robot/Swarm Robotics research. The goal is to identify and inspire new ideas among the multi-robot community at Cornell. We are looking for speakers – please notify Kirstin Petersen (khp37) if you would like to do a pitch!|
|This week we will host a debate/discussion on some topics in robotics. There is still time to contribute discussion questions here:
https://docs.google.com/document/d/1_H3M-WIM6UN_TMsNQvgW9sMoYGVBuFXwi14fEor5tDM/edit?usp=sharing
Anything is fair game. The topics will be announced Wednesday morning. Good questions offer an opportunity for deep discussion, support a variety of viewpoints, and engage the broad robotics community.
You may sign your name or leave your question anonymous. If you put your name, you are volunteering to give a few-sentence explanation of the question and its implications.
|3/28||Steve Supron, Maidbot||Maidbot: Designing and Building Rosie the Robot for the Hospitality Industry|
|Steve Supron joined Maidbot as Manufacturing Lead during its incubation days over two years ago at REV Ithaca. Micah Green, a former Cornellian and the founder and CEO of Maidbot, hired Steve to help bring his dream of Rosie the Robot to the hotel industry. Steve will present the company’s story as well as the challenges and considerations of robotics in a hospitality setting. Steve will review some of the unique design decisions and technology and production choices the team has made along the way from early prototypes to testable pilot units and on to the production design.|
|*There will be no seminar on this date due to Spring Break.|
|4/11||Claudia Pérez D’Arpino, MIT||Learning How to Plan for Multi-Step Manipulation in Collaborative Robotics|
Abstract: The use of robots for complex manipulation tasks is currently challenged by the limited ability of robots to construct a rich representation of the activity, at both the motion and task levels, in ways that are both functional and apt for human-supervised execution. For instance, the operator of a remote robot would benefit from planning assistance, as opposed to the currently used method of joint-by-joint direct teleoperation. In manufacturing, robots are increasingly expected to execute manipulation tasks in a shared workspace with humans, which requires the robot to predict human actions and plan around those predictions. In both cases, it is beneficial to deploy systems that are capable of learning skills from observed demonstrations, as this would enable the application of robotics by users without programming skills. However, previous work on learning from demonstrations is limited in the range of tasks that can be learned and generalized across different skills and different robots. In this talk, I present C-LEARN, a method of learning from demonstrations that supports the use of hard geometric constraints for planning multi-step functional manipulation tasks with multiple end effectors in quasi-static settings, and I show the advantages of using the method in a shared autonomy framework.
Speaker Bio: Claudia Pérez D’Arpino is a PhD Candidate in the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, advised by Prof. Julie A. Shah in the Interactive Robotics Group since 2012. She received her degree in Electronics Engineering (2008) and her Masters in Mechatronics (2010) from the Simon Bolivar University in Caracas, Venezuela, where she served as Assistant Professor in the Electronics and Circuits Department (2010-2012) with a focus on Robotics. She participated in the DARPA Robotics Challenge with Team MIT (2012-2015). Her research at CSAIL combines machine learning and planning techniques to empower humans through the use of robotics and AI. Her PhD research centers on enabling robots to learn and create strategies for multi-step manipulation tasks by observing demonstrations, and on developing efficient methods for robots to employ these skills in collaboration with humans, either in shared-workspace collaboration, such as assembly in manufacturing, or in remote robot control under shared autonomy, such as emergency response scenarios. Web: http://people.csail.mit.edu/cd
|4/18||Dr. Girish Chowdhary, UIUC, Co-Founder EarthSense Inc.||Autonomous and Intelligent Robots in Unstructured Field Environments|
Abstract: What if a team of collaborative autonomous robots grew your food for you? In this talk, I will present some key theoretical and algorithmic advances in adaptive control, reinforcement learning, collaborative autonomy, and robot-based analytics that my group is working on to bring this future a lot nearer!
I will discuss my group’s theoretical and practical work towards the challenges in making autonomous, persistent, and collaborative field robotics a reality. I will discuss new algorithms that are laying the foundation for robust long-duration autonomy in harsh, changing, and uncertain environments, including deep learning for embedded robot vision, deep adversarial reinforcement learning for large state-action spaces, and transfer learning for deep reinforcement learning domains. I will also describe the new breed of lightweight, compact, and highly autonomous field robots that my group is creating and deploying in fields across the US. I will show several videos of the TerraSentia robot, which popular media, including the Chicago Tribune, the MIT Technology Review, Discovery Canada, and leading technology blogs, have hailed as opening the doors to an exciting revolution in agricultural robotics. I will also discuss several technological and socio-economic challenges of making autonomous field-robotic applications with small robots a reality, including opportunities in high-throughput phenotyping, mechanical weeding, and robots for defense applications.
Speaker Bio: Girish Chowdhary is an assistant professor at the University of Illinois at Urbana-Champaign, and the director of the Distributed Autonomous Systems laboratory at UIUC. He holds a PhD (2010) from Georgia Institute of Technology in Aerospace Engineering. He was a postdoc at the Laboratory for Information and Decision Systems (LIDS) of the Massachusetts Institute of Technology (2011-2013), and an assistant professor at Oklahoma State University’s Mechanical and Aerospace Engineering department (2013-2016). He also worked with the German Aerospace Center’s (DLR’s) Institute of Flight Systems for around three years (2003-2006). Girish’s ongoing research interest is in theoretical insights and practical algorithms for adaptive autonomy, with a particular focus on field robotics. He has authored over 90 peer-reviewed publications in various areas of adaptive control, robotics, and autonomy. On the practical side, Girish has led the development and flight-testing of over 10 research UAS platforms. UAS autopilots based on Girish’s work have been designed and flight-tested on six UASs, including by independent international institutions. Girish is an investigator on NSF, AFOSR, NASA, ARPA-E, and DOE grants. He is the winner of the Air Force Young Investigator Award, and the Aerospace Guidance and Controls Systems Committee Dave Ward Memorial award. He is the co-founder of EarthSense Inc., working to make ultralight agricultural robotics a reality.
|4/25||Vignesh Vatsal, Cornell University||Design and Analysis of a Wearable Robotic Forearm|
Human augmentations that can enhance a user’s capabilities in terms of strength, power, safety, and task efficiency have been a persistent area of research. Historically, most efforts in this field have focused on prostheses and exoskeletons, which serve either to replace and rehabilitate lost capabilities or to enhance existing ones by adhering to human limb structures. More recently, we are witnessing devices that add capabilities beyond those found in nature, such as additional limbs and fingers. However, most of these devices have been designed for specific tasks and applications, at far ends of a spectrum of power, size, and weight. Additionally, they are not considered to be agents for collaborative activities, with interaction modes typically involving teleoperation or demonstration-based programmable motions.
We envision a more general-purpose wearable robot, on the scale of a human forearm, which enhances the reach of a user, and acts as a truly collaborative autonomous agent. We aim to connect the fields of wearable robot design, control systems, and computational human-robot interaction (HRI). We report on an iterative process for user-centered design of the robot, followed by an analysis of its kinematics, dynamics and biomechanics. The collaboration aspect involves collecting data from human-human teleoperation studies to build models for human intention recognition and robot behavior generation in joint human-robot tasks.
|5/2||Mark Campbell, Cornell University||Where will our cars take us? The history, challenges, and potential impact of self-driving cars|
|Autonomous, self-driving cars have the potential to impact society in many ways, including taxi/bus service; shipping and delivery; and commuting to/from work. This talk will give an overview of the history, technological work to date and challenges, and potential future impact of self-driving cars. A key challenge is the ability to perceive the environment from the car’s sensors, i.e. how can a car convert pixels from a camera into knowledge of a scene with cars, cyclists, and pedestrians. Perception in self-driving cars is particularly challenging, given the fast viewpoint changes and close proximity of other objects. This perceived information is typically uncertain and constantly being updated, yet it must also be used for important decisions by the car, ranging from a simple lane change to stopping and queuing at a traffic light. Videos, examples, and insights will be given from Cornell’s autonomous car, as well as from key performers such as Google/Waymo and car companies.|
|5/9||Kalesha Bullard, Georgia Tech||Can you teach me?: Leveraging and Managing Interaction to Enable Concept Grounding|
Abstract: When a robotic agent is given a recipe for a task, it must perceptually ground each entity and concept within the recipe (e.g., items, locations) in order to perform the task. Assuming no prior knowledge, this is particularly challenging in newly situated or dynamic environments, where the robot has limited representative training data. This research examines the problem of enabling a social robotic agent to leverage interaction with a human partner for learning to efficiently ground task-relevant concepts in its situated environment. Our prior work has investigated Learning from Demonstration approaches for the acquisition of (1) training instances as examples of task-relevant concepts and (2) informative features for appropriately representing and discriminating between task-relevant concepts. In ongoing work, we examine the design of algorithms that enable the social robot learner to autonomously manage the interaction with its human partner, towards actively gathering both instance and feature information for learning the concept groundings. This is motivated by the way that humans learn, by combining information rather than simply focusing on one type. In this talk, I present insights and findings from our initial work on learning from demonstration for grounding of task-relevant concepts and ongoing work on interaction management to improve the learning of grounded concepts.
Bio: Kalesha Bullard is a PhD candidate in Computer Science at Georgia Institute of Technology. Her thesis research lies at the intersection of Human-Robot Interaction and Machine Learning: enabling a social robot to learn groundings for task-relevant concepts, through leveraging and managing interaction with a human teacher. She is co-advised by Sonia Chernova, associate professor in the School of Interactive Computing at Georgia Tech, and Andrea L. Thomaz, associate professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin. Before coming to Georgia Tech, Kalesha received her undergraduate degree in Mathematics Education from The University of Georgia and subsequently participated in the Teach For America national service corps as a high school mathematics teacher. Over the course of her research career, Kalesha has served as a Program Committee co-chair for three different workshops and symposia, completed research internships at IBM Watson and NASA Jet Propulsion Laboratory, and was awarded an NSF Graduate Research Fellowship and a Google Generation Scholarship. Kalesha’s broader personal research vision is to enable social robots with the cognitive reasoning abilities and social intelligence necessary to engage in meaningful dialogue with their human partners, over long-term time horizons. Towards that end, she is particularly interested in grounded and embodied dialogue whereby the agent can communicate autonomously, intuitively, and expressively.
Tuesdays at 3-4pm, Upson 106.
Robotics Seminar is sponsored by HRG Robotics
Fall 2017 Schedule
|8/30||Ian Walker, Clemson University||Continuum Robot Trunks and Tentacles|
This talk will provide an overview of research in biologically inspired continuous-backbone “trunk and tentacle” continuum robots. In particular, robots inspired by octopus arms and plants (vines) will be discussed, along with the use of these robots for novel inspection and manipulation operations targeted towards Aging in Place applications and space-based operations.
Ian Walker received the B.Sc. in Mathematics from the University of Hull, England, in 1983 and the M.S. and Ph.D. in Electrical and Computer Engineering from the University of Texas at Austin in 1985 and 1989. He is a Professor in the Department of Electrical and Computer Engineering at Clemson University. Professor Walker’s research focuses on the construction, modeling, and application of continuum robots.
|9/5||Robotics Faculty||Conversation on Robotics|
|Round-table discussion of current topics in robotics with faculty and students. Coffee and cookies are served.|
|9/12||Rob Shepherd||The Additive Manufacturing of Robots|
|The liquid-phase processing of polymers has been used over the last 100 years to produce items that vary in size and function from buoyant boat hulls to the living hinges on Tic Tac boxes. Recently, the fields of stretchable electronics and soft robotics have made significant progress in manufacturing approaches that add mechanical function as well as sensory feedback through the additive manufacturing of soft materials, including polymers and elastomers. This talk will be a survey of the work my research group, the Organic Robotics Laboratory, has contributed in this space. Much of the work revolves around a 3D printing process called Projection Stereolithography. Our group leases a Carbon M1 3D printer that is available for other researchers to use, so attending this talk can also be seen as an introduction to the process and its capabilities.|
|9/19||Hadas Kress-Gazit||Synthesis for Robots: Guarantees and Feedback for Complex Behaviors|
Getting a robot to perform a complex task, for example completing the DARPA Robotics Challenge, typically requires a team of engineers who program the robot in a time-consuming and error-prone process and who validate the resulting robot behavior through testing in different environments. The vision of synthesis for robotics is to bypass the manual programming and testing cycle by enabling users to provide specifications – what the robot should do – and automatically generating, from the specification, robot control that provides guarantees for the robot’s behavior.
In this talk I will describe the work done in my group towards realizing the synthesis vision. I will discuss what it means to provide guarantees for physical robots, types of feedback we can generate, specification formalisms that we use and our approach to synthesis for different robotic systems such as modular robots, soft robots and multi robot systems.
|9/26||Ross Knepper||Learning Competent Social Navigation|
Competence in pedestrian social navigation requires a robot to exhibit many strengths, from perceiving the intentions of others through social signals to acting clearly to convey intent. It is made more difficult by the presence of many individual people with their own agendas as well as by the fact that all communication and coordination occurs implicitly through social signaling (chiefly gross body motion, eye gaze, and body language). Furthermore, much of the information people glean about one another’s intentions is derived from the social context. For example, office workers are more likely to be heading towards the cafeteria if it is lunchtime and towards the exit if it is time to go home.
In this talk, I explore some of the mathematical tools that allow us to tease apart the problem of social navigation into patterns that distill enough of the complexity to be learnable. One of the key problems is to predict the future motions of others based on an observed “path prefix”. Past results have shown that geometric prediction of pedestrian motion is nearly impossible to do accurately, precisely because people behave in a socially competent manner: they react to other people in ways that achieve their joint goals. Instead, I show how the trajectories of navigating pedestrians can be jointly predicted topologically. This prediction can readily be learned in order to understand how people intend to avoid colliding with one another while achieving their goals.
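As a toy illustration of what a topological (rather than geometric) trajectory class might look like, the sketch below labels a pair of paths by the net rotation of the displacement vector between two agents; its sign indicates which side they pass each other on. This is only a minimal sketch of the general idea, not the method from the talk, and the function name and trajectories are invented for the example.

```python
import math

def winding_angle(traj_a, traj_b):
    """Accumulate the rotation of the displacement vector from agent A to
    agent B along their (synchronized) paths; the sign of the total is a
    simple topological label for how the pair resolved the encounter."""
    total = 0.0
    prev = None
    for (ax, ay), (bx, by) in zip(traj_a, traj_b):
        angle = math.atan2(by - ay, bx - ax)
        if prev is not None:
            d = angle - prev
            # unwrap to (-pi, pi] so the sum tracks net rotation
            while d > math.pi:
                d -= 2 * math.pi
            while d <= -math.pi:
                d += 2 * math.pi
            total += d
        prev = angle
    return total

# Two pedestrians walking toward each other, each sidestepping to avoid
# collision; they trade places, so the displacement vector turns half a turn.
a = [(0, 0), (1, 0.2), (2, 0.4), (3, 0.2), (4, 0)]
b = [(4, 0), (3, -0.2), (2, -0.4), (1, -0.2), (0, 0)]
print(winding_angle(a, b))
```

For these head-on paths the net rotation comes out to about -π; the mirrored avoidance maneuver would give +π, a different topological class for a learner to predict.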
|10/3||Andy Ruina||Why Don’t Bicycles Fall Over? (2:45 p.m. in Kimball B11)|
When viewed from the rear, a bicycle looks like an inverted pendulum: where the wheels touch the ground, it has an effective hinge point, so if a bike tips a little, gravity acting on the center of mass tends to tip it more. Superficially, then, a bike is unstable. Yet in practice, moving bicycles don’t fall over. Why not? This question has three variants. First, how do bike riders control bikes to stay up? That is, what forces are invoked to keep the bike from falling? Second, how do people balance bikes when riding no-hands? And third, how does ghost riding work? That is, at least some bikes won’t fall over when they are moving fast enough, even with no rider. How does that happen?
The third question, about ghost riding (bicycle self-stability), being purely a question of mechanics, seems simplest. In the folklore, there are two dominant theories: the gyroscopic theory of Klein and Sommerfeld (~1911) and the castor (aka ‘trail’) theory of Jones (1970). By means of examples, we now know that both were wrong. Gyroscopic terms and ‘positive’ castor are neither necessary nor sufficient, separately or in combination, for bicycle self-stability.
As for what riders do, hands on or hands off, the centrifugal theory of bicycle balance is pretty complete: when having an undesirable fall to the right, you should steer to the right.
*Note change: 2:45 p.m. in Kimball B11
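The centrifugal theory lends itself to a quick numerical check. The sketch below simulates a toy point-mass bicycle whose lean angle obeys an inverted-pendulum equation, with a rider policy that steers toward the fall; all parameters, gains, and the model itself are invented for illustration and are not from the talk.

```python
# Toy check of 'steer into the fall': lean angle phi grows like an inverted
# pendulum, while steering delta produces a centripetal (righting) term.
# Parameters and gains below are assumed values, not from the talk.
G, H, B, V = 9.81, 1.0, 1.0, 4.0   # gravity, CoM height, wheelbase, speed
KP, KD = 2.0, 0.5                  # hand-tuned steering gains (assumption)

def simulate(phi0, dt=0.001, t_end=5.0):
    """Integrate lean dynamics with a steer-into-the-fall rider policy."""
    phi, phidot = phi0, 0.0
    for _ in range(int(t_end / dt)):
        delta = KP * phi + KD * phidot            # steer toward the fall
        phiddot = (G / H) * phi - (V**2 / (H * B)) * delta
        phidot += phiddot * dt                    # semi-implicit Euler step
        phi += phidot * dt
    return phi

# An initial lean of 0.2 rad is steered back to (nearly) upright.
print(abs(simulate(0.2)) < 1e-6)  # -> True
```

With the steering gain large enough that the centripetal term beats gravity, the closed-loop lean dynamics are damped and the bike recovers; with zero gain, the same model falls over exponentially.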
|*No Seminar This Week|
|10/17||*Special Lab Tour Week|
Lab Tours at 3:00 p.m. and 3:30 p.m. Each lab will give a tour at these times (2 tours per lab), so please choose one to start at 3:00 p.m. and then make your way over to the other by 3:30 p.m. The tours will last about 15-20 minutes so you’ll have time to walk from one building to the other.
Tour 1 – Francois V. Guimbretiere’s Robotic Assisted Fabrication Lab
Location: 223 Gates (with a robot poster on the door)
Tour 2 – Robert Shepherd, Organic Robotics Lab (https://orl.mae.cornell.edu/)
Location: 442 Upson
|10/24||Sue Fussell, Malte Jung, Guy Hoffman, Ross Knepper||Methods and Metrics in Human-Robot Interaction|
|Several faculty who study human-robot interaction present some of the best practices in HRI research. HRI differs from many other subfields of robotics because it deals with humans. We are limited both in our understanding of human psychology and in our ability to experiment on humans. To help audiences better appreciate HRI research presentations, this talk and discussion will cover popular approaches to conducting HRI research, including experimental methodology and useful metrics for evaluation of experiments.|
|10/31||Lightning Talks||Hosted by Guy Hoffman|
|We will present lightning talks of human-robot-interaction themed papers that have been submitted this year to conferences such as HRI, CHI, and AAAI. Talks will be max 5 minutes and preferably presented by the student author.|
|11/7||Fernando Tubilla||Assembling Orders in Amazon’s Robotic Warehouses|
|Every day, Amazon picks, packs, and ships millions of customer orders from a network of fulfillment centers (FCs) spread all over the globe. With each FC holding millions of inventory items, most customer orders requiring a unique combination of several of these items, and many orders needing to be shipped within a few hours of being placed, cutting-edge advances in technology are needed to ensure that orders are fulfilled efficiently and shipped on time. In this talk, we will present Amazon’s mobile robotic fulfillment solution, consisting of a fleet of thousands of drive units per FC that deliver inventory shelves to picking associates. We will describe the solution’s key advantages and its main components, and provide an overview of the complex resource allocation and planning problems addressed by its sophisticated algorithms. We will also discuss the Amazon Robotics Challenge for advancing the state of the art in item manipulation and grasping, as well as a couple of big open problems in robotic warehousing.|
|11/14||Kevin O’Brien||The Elastomeric Passive Transmission: Improving the Speed and Force of Tendon-Driven Robotics and Prosthetics|
|In this talk I will present an Elastomeric Passive Transmission (EPT), which increases the maximum output force and actuation speed of tendon-driven actuators. The EPT achieves these improvements with minimal impact on the size, weight, or cost of the system. Using inherent tendon tension to strain elastomeric struts toward the center of the motor-mounted spool, the EPT passively adjusts the effective gearing ratio of a motor. This allows a tendon-driven actuator to move with high speed when unimpeded, and with high force under load. Our EPTs can be used with low-cost motors to achieve the performance (maximum force and speed) of a high-cost motor at a drastically reduced cost, or they can further improve the performance of higher-quality, more expensive motors. To demonstrate the utility of these EPTs, we have integrated them into a prosthetic hand which meets, and in some cases exceeds, the performance of a high-end commercial prosthetic with motors that are 10% the cost. Our prosthetic hand has 6 active degrees of freedom which drive five 3D-printed soft digits (one for the flexion of each finger, and two for the thumb). Each finger can fully close in < 0.6 seconds and can grasp with a maximum force of ~40 N. The entire hand has a mass of ~400 grams and a material cost of < $500.|
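A back-of-the-envelope model of the passive transmission idea: if tendon tension compresses the elastomeric struts and shrinks the spool's effective radius, the same motor torque yields more tendon force under load, while the unloaded spool stays large for speed. All constants and function names below are invented for illustration, not measurements from the talk.

```python
# Toy model of a passively varying spool radius: tendon tension squeezes
# the elastomeric struts, shrinking the effective radius. A smaller radius
# trades tendon speed for tendon force at constant motor torque.
# All constants are illustrative assumptions, not measurements from the talk.
R0, R_MIN, COMPLIANCE = 0.010, 0.004, 0.00015  # m, m, m per newton of tension
MOTOR_TORQUE = 0.05                            # N*m
MOTOR_SPEED = 50.0                             # rad/s

def effective_radius(tension):
    """Spool radius after the struts deflect under tendon tension (N)."""
    return max(R_MIN, R0 - COMPLIANCE * tension)

def tendon_force(tension):
    """Force the tendon can exert at this tension, F = tau / r."""
    return MOTOR_TORQUE / effective_radius(tension)

def tendon_speed(tension):
    """Linear tendon speed at this tension, v = omega * r."""
    return MOTOR_SPEED * effective_radius(tension)

# Unloaded: full radius, fast tendon. Under 40 N of tension the radius
# bottoms out at R_MIN, multiplying the available force.
print(tendon_speed(0.0), tendon_force(0.0), tendon_force(40.0))
```

The point of the passive element is that this force/speed trade happens automatically with load, with no clutch, sensor, or controller in the loop.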
|11/21||Kirstin Petersen||Leveraging Honey Bees as Bio-Cyber Physical Systems|
|I will discuss ongoing (and future) work related to a new project undertaken in the CEI-lab, which involves the integration of honey bees into Bio-Cyber Physical Systems. Social insects are capable of robust, sustained operation in unpredictable environments far beyond what is possible with state-of-the-art artificial systems. Honey bees are the premier agricultural pollinator, bringing in over $150 billion annually. A colony accomplishes pollination by dispatching tens of thousands of scouts and foragers to survey and sample kilometer-wide areas around their hive. Thus, the colony as a whole accumulates vast information about the local agricultural landscape, bloom, and dearth: information that would be very valuable if available to farmers and beekeepers.|
|11/28||Mike Duffy||The Third Aerospace Revolution will be Enabled by Robotics and Electric Propulsion|
|The first aerospace revolution started in 1903 with the Wright brothers’ successful sustained powered flight, opening up the skies for manned flight. The second aerospace revolution came in the early 1950s with the first commercial jet airliner, the de Havilland Comet, which made affordable air transportation available to the masses. The third aerospace revolution takes the pilot and fossil fuels out of the aircraft to drastically improve cost and safety, reduce noise, and improve the user experience, enabling a new way of moving through the sky. The size and cost of sensors and electronics have come down significantly, primarily due to the smartphone industry’s mass production of products; additionally, electric motors, controllers, and batteries have become more power- and energy-dense thanks to the automotive and electronics industries. As a result, drones and personal aircraft have become technically and economically feasible for the masses. This talk will attempt to show the trends that are making personal air travel possible in the next 5-10 years and how robotics and electric propulsion will be the enablers.|
Wednesdays at 2pm in Upson 531.
Spring 2017 Schedule
|1/25||Ross Knepper||Robotics Community Discussion|
The robotics seminar series will be kicked off this semester with a community discussion about the seminar and how it can best fulfill the needs of the community, i.e. build more connections among labs and departments, educate researchers about tools and techniques, and better inform interested parties about the latest and greatest research.
|2/8||Wil Thomason||Robotic Personal Assistants Lab Chalk Talks|
The Robotic Personal Assistants Lab (RPAL) under PI Prof. Knepper investigates technologies to make robots behave as peers in collaborative tasks with people. In this seminar, several members of the lab will give informal chalk talks to describe their current research. These talks are meant to be interactive and accessible to a robotics audience. Rather than polished talks, these are snapshots of works in progress. We hope that this session will serve as a template for other labs at Cornell to emulate.
|2/15||Chris Mavrogiannis||Robotic Personal Assistants Lab Chalk Talks|
The Robotic Personal Assistants Lab (RPAL) under PI Prof. Knepper investigates technologies to make robots behave as peers in collaborative tasks with people. In this seminar, several members of the lab will give informal chalk talks to describe their current research. These talks are meant to be interactive and accessible to a robotics audience. Rather than polished talks, these are snapshots of works in progress. We hope that this session will serve as a template for other labs at Cornell to emulate.
|2/22||Carlo Pinciroli||Robot Swarms as a Programmable Machine|
Robot swarms promise to offer solutions for applications that today are considered dangerous, expensive, or even impossible. Notable examples include construction, space exploration, mining, ocean restoration, nanomedicine, disaster response, and humanitarian demining. The diverse and large-scale nature of these applications requires the coordination of numerous robots, likely in the order of hundreds or thousands, with heterogeneous capabilities. Swarm engineering is an emerging research field that studies how to model, design, develop, and verify swarm systems. In this talk, I will discuss the aspects of swarm engineering that intersect with classical computer science. In particular, focusing on the concept of robot swarms as a “programmable machine”, I will analyze the issues that arise when one wants to write programs for swarms. After presenting Buzz, a programming language for swarms on which I worked during my postdoc, I will outline a number of open problems on which I intend to work over the next years.
Bio: Carlo Pinciroli is an assistant professor at Worcester Polytechnic Institute, where he leads the NEST Lab. His research interests include swarm robotics and software engineering. Prof. Pinciroli obtained a Master’s degree in Computer Engineering at Politecnico di Milano, Italy, and a Master’s degree in Computer Science at the University of Illinois at Chicago in 2005. He then worked for one year on several projects for the Barclays Bank PLC group. In 2006 he joined the IRIDIA laboratory at the Université Libre de Bruxelles in Belgium, under the supervision of Prof. Marco Dorigo. While at IRIDIA, he obtained a Diplôme d’études approfondies in 2007 and a PhD in applied sciences in 2014, and he completed an 8-month post-doctoral period. Between 2015 and 2016, Prof. Pinciroli was a postdoctoral researcher at MIST, École Polytechnique de Montréal in Canada, under the supervision of Prof. Giovanni Beltrame. Prof. Pinciroli has published 49 peer-reviewed articles and 2 book chapters, and has edited 1 book. In 2015, F.R.S.-FNRS awarded him the most prestigious postdoctoral scholarship in Belgium (Chargé des Recherches).
|3/1||Jim Jing and Scott Hamill||Modularity and Design|
The Verifiable Robotics Research Group has been exploring different aspects of modularity in robot control and design. In this two part talk, Jim will describe current work on high-level control of modular robots (in collaboration with Mark Campbell’s and Mark Yim’s groups) and Scott will describe our initial thoughts on task-influenced design of modular soft robots (in collaboration with Rob Shepherd’s group).
|3/8||Erik Komendera||An Approach to Robotic In-Space Assembly|
Abstract: With the retirement of the Space Shuttle program, the option to lift heavy payloads to orbit has become severely constrained. Combined with the increasing success and decreasing costs of commercial small- to medium-lift launch vehicles, robotic in-space assembly is becoming attractive for mission concepts such as large space telescopes, assembly and repair facilities, solar electric propulsion tugs, and in situ resource utilization. Challenges in autonomous assembly include reasoning with uncertainties in the structure, agents, and environment, delegating a large variety of assembly tasks, and making error corrections and adjustments as needed. For space applications, the design and assembly of each part requires extensive planning, manufacturing, and checkout procedures. This hinders servicing, and prevents repurposing functional parts on derelict spacecraft. The advent of practical robotic in-space assembly will mitigate the need for deployment mechanisms and enable assembly using materials delivered by multiple launch vehicles. This reduction in complexity will lead to simplified common architectures, enabling interchangeable parts and driving down costs.
In recent years, Langley Research Center has developed assembly methods to address some of these challenges by distributing long reach manipulation tasks and precise positioning tasks between specialized agents, employing Simultaneous Localization and Mapping (SLAM) in the assembly workspace, using sequencing algorithms, and detecting and correcting errors. This talk will describe ongoing research, discuss the results of several recent robotic assembly experiments, and preview the upcoming assembly experiments to be performed under Langley’s “tipping point” partnership with Orbital/ATK.
Bio: Dr. Erik Komendera is a roboticist at NASA Langley Research Center in Hampton, VA. He earned his MS (’12) and PhD (’14) in Computer Science from the University of Colorado, and earned a BSE in Aerospace Engineering (’07) from the University of Michigan. Dr. Komendera’s current research focuses on autonomous assembly of structures in space, with a special focus on state estimation and machine learning techniques to identify and overcome errors in the assembly process. He currently serves as a task lead on the joint NASA/Orbital ATK Tipping Point project titled “Commercial Infrastructure for Robotic Assembly and Servicing” (CIRAS). In addition, he is Principal Investigator for a LaRC Center Innovation Fund / Internal Research and Development award to investigate machine learning methods for ensuring robust assembly and repair of solar array modules, and is a key member of the “Robotic Assembly of Modular Space Exploration Systems” research incubator effort.
|3/15||Rob MacCurdy, MIT|
|3/22||Bennett Wineholt||Deep Learning for Hobby Robotics|
Recent work on reducing the size and computational requirements of deep neural networks has allowed applications such as video object recognition and speech recognition to run responsively on small robotic systems, which are commonly limited by power and payload constraints. This talk will present an application lifecycle for developing robot behaviors with deep learning techniques, and will describe advances in model compression that make these techniques more efficient.
Bio: Bennett Wineholt is a staff member at the Cornell University Center for Advanced Computing supporting faculty needs for computing and consulting services to accelerate discovery.
|3/29||Patrícia Alves-Oliveira||Robots and Creativity|
In this talk Patrícia will present her work in the field of Human-Robot Interaction. Specifically, she will introduce her previous work on the European project EMOTE, whose goal was to develop a robotic tutor to support curricular activities in school. Additionally, Patrícia will present her initial work on creativity with robots.
Bio: Patrícia is a PhD student in psychology in an exchange program between Portugal and Cornell University. She is supervised by Prof. Guy Hoffman and Prof. Ana Paiva (GAIPS lab, Portugal), and she studies how robots can be used to boost creativity in children.
|4/5||CORNELL SPRING BREAK|
|4/12||Jesse Goldberg||Dopamine based error signals suggest a reinforcement learning algorithm during song acquisition in birds|
Reinforcement learning enables animals to learn to select the most rewarding action in a given context. Edward Thorndike posed a simple solution to this problem in his Law of Effect: ‘Responses that produce a satisfying effect in a particular situation become more likely to occur again in that situation, and responses that produce a discomforting effect become less likely to occur again in that situation.’ This idea underlies stimulus-response, reinforcement, and instrumental learning and implementing it requires three pieces of information: (1) the action (response) an animal makes; (2) the context (situation) in which the action is taken; and (3) evaluation of the outcome (effect). In vertebrates, the basal ganglia have been proposed to integrate the three pieces of information required for reinforcement learning: (1) The situation, or current context, is thought to be signaled by a massive projection from the cortex to the striatum, the input layer of the BG; (2) The chosen action is signaled by striatal medium spiny neurons (MSNs) that drive behavior via projections to downstream motor centers; and (3) The evaluation of the outcome is transmitted to the striatum by midbrain DA neurons. These signals underlie a simple ‘three-factor learning rule’: If a cortical input is active (signifying a context), the MSN discharges (driving the action chosen), and an increase in DA subsequently occurs (signifying a good outcome), then the connection strength of the cortical input to the MSN is increased. Overall, by controlling the strength of the corticostriatal synapse, this dopamine-modulated corticostriatal plasticity governs which action will be chosen in a given context, placing DA in the premier position of determining what animals will learn and how they will behave. 
Here, I will discuss how our recent identification of dopaminergic error signals in birdsong supports the potential generality of dopamine-modulated corticostriatal plasticity in implementing learning across a wide range of behaviors.
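The "three-factor learning rule" described in the abstract can be sketched as a simple weight update. This is an illustrative toy model, not code from the speaker's work; all names and values are hypothetical.

```python
import numpy as np

def three_factor_update(w, cortical_input, msn_active, dopamine, lr=0.1):
    """Three-factor rule: a corticostriatal weight changes only when the
    cortical input is active (context), the striatal MSN fires (action),
    and dopamine reports the outcome (positive strengthens, negative
    weakens)."""
    # Eligibility: coincidence of context and action
    eligibility = cortical_input * msn_active
    # Dopamine gates the plasticity
    return w + lr * dopamine * eligibility

w = np.zeros(3)
context = np.array([1.0, 0.0, 1.0])   # which cortical inputs are active
w = three_factor_update(w, context, msn_active=1.0, dopamine=+1.0)
print(w)   # only the weights for active inputs change
```

After a rewarded action in this context, the active synapses are strengthened, making the same action more likely to be chosen there again, which is Thorndike's Law of Effect in miniature.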
|4/19||Kevin Chen||Hybrid aerial-aquatic locomotion in an insect scale flapping wing robot|
|Abstract: Flapping flight is ubiquitous among agile natural flyers. Taking inspiration from biological flappers, we develop a robot capable of insect-like flight, and then go beyond biological capabilities by demonstrating multi-phase locomotion and impulsive water-air transition. In this talk, I will present our recent research on developing a hybrid aerial-aquatic microrobot and discuss the underlying physics. I will start by describing experimental and computational studies of flapping wing aerodynamics that aim to quantify fluid-wing interactions and ultimately distill scaling rules for robotic design. Comparative studies of fluid-wing interactions in air and water show remarkable similarities, which lead to the development of the first hybrid aerial-aquatic flapping wing robot. In addition to discussing the flapping frequency scaling rule and robot underwater stability, I will describe the challenges and benefits imposed by water surface tension. By developing an impulsive mechanism that utilizes electrochemical reaction, we further demonstrate robot water-air transition. I will conclude by outlining the challenges and opportunities in our current microrobotic research.|
|4/26||Anil Rao||A Computational Framework for Constrained Optimal Control Problems Using Gaussian Quadrature Collocation|
Optimal control concerns systems that evolve in time, over which one has partial control, with the goal of optimizing a specified performance criterion. Optimal control problems arise in a variety of applications including engineering, economics, medicine, and epidemiology.
With a few notable exceptions (for example, the brachistochrone problem), virtually no optimal control problems have analytic solutions. Consequently, it is necessary to obtain a solution using numerical methods. Even with modern computers, solving optimal control problems numerically is a challenge because most optimal control problems of interest are nonlinear, high-dimensional, and have complex constraints. As a result, finding accurate solutions to a general optimal control problem requires the development of sophisticated methods.
This seminar describes a framework for solving constrained optimal control problems. The key approach described in this seminar is a class of variable-interval (h), variable-order (p) methods, also called hp-adaptive methods. In the hp-adaptive approach, a continuous optimal control problem is approximated as a finite-dimensional nonlinear optimization problem. This class of hp-adaptive methods is employed using Gaussian quadrature to provide high-accuracy solutions with a significantly lower-dimensional discretization than traditional fixed-order methods.
This seminar will first step through a motivation for the hp-adaptive approach. Recent research done in hp-adaptive mesh refinement techniques will be highlighted along with advances in methods for algorithmic differentiation. The effectiveness of the approach will be demonstrated using the benchmark Bryson minimum time-to-climb of the F-4 supersonic aircraft. Specifically, this aircraft flight example will demonstrate the significant improvements in computational efficiency gained by the hp-adaptive approach over previously developed methods. Furthermore, a low-thrust Earth orbit transfer with eclipsing will be used to demonstrate the capability of the approach on a challenging space flight application. Finally, future research directions will be discussed.
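The core idea of transcribing a continuous optimal control problem into a finite-dimensional nonlinear program can be illustrated on a tiny example. The sketch below uses simple trapezoidal collocation on a fixed mesh (not the Gaussian quadrature hp-adaptive scheme of the talk, and not GPOPS-II): a double integrator must move from rest at x = 0 to rest at x = 1 in one second while minimizing the integral of u².

```python
import numpy as np
from scipy.optimize import minimize

N = 21                        # number of collocation nodes (fixed mesh)
t = np.linspace(0.0, 1.0, N)
h = t[1] - t[0]

def unpack(z):
    return z[:N], z[N:2*N], z[2*N:]   # states x, v and control u

def objective(z):
    _, _, u = unpack(z)
    return h * np.sum((u[:-1]**2 + u[1:]**2) / 2)   # trapezoidal quadrature of u^2

def defects(z):
    x, v, u = unpack(z)
    # Trapezoidal collocation defects for the dynamics x' = v, v' = u
    dx = x[1:] - x[:-1] - h * (v[:-1] + v[1:]) / 2
    dv = v[1:] - v[:-1] - h * (u[:-1] + u[1:]) / 2
    # Boundary conditions: start at rest at x=0, end at rest at x=1
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]
    return np.concatenate([dx, dv, bc])

sol = minimize(objective, np.zeros(3 * N), method='SLSQP',
               constraints={'type': 'eq', 'fun': defects})
x, v, u = unpack(sol.x)
print(f"cost = {objective(sol.x):.3f} (analytic optimum of the continuous problem: 12)")
```

An hp-adaptive method would additionally adjust the mesh intervals and the polynomial degree within each interval until an accuracy tolerance is met, rather than fixing N in advance.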
Anil V. Rao earned a BS in mechanical engineering and an AB in mathematics from Cornell, an MSE in aerospace engineering from the University of Michigan, and an MA and PhD from Princeton University. After earning his PhD, Dr. Rao joined The Aerospace Corporation in Los Angeles and was subsequently a Senior Member of the Technical Staff at The Charles Stark Draper Laboratory in Cambridge, Mass. While at Draper, from 2001 to 2006, he was an adjunct faculty member in the Department of Aerospace and Mechanical Engineering at Boston University, where he taught the core undergraduate dynamics course. Since 2006 he has been in Mechanical and Aerospace Engineering at the University of Florida, where he is currently an Associate Professor and Erich Farber Faculty Fellow. His research interests include computational methods for optimal control and trajectory optimization, nonlinear optimization, space flight mechanics, orbital mechanics, guidance, and navigation. He has co-authored the textbook Dynamics of Particles and Rigid Bodies: A Systematic Approach (Cambridge University Press, 2006). He is active in professional societies including the American Institute of Aeronautics and Astronautics, the American Astronautical Society, and the Society for Industrial and Applied Mathematics. Dr. Rao serves on the editorial boards of the Journal of the Astronautical Sciences, the Journal of Optimization Theory and Applications, and the Journal of Spacecraft and Rockets. He is the co-developer of the industrial-strength optimal control software GPOPS-II. His teaching and research awards include Department Teacher of the Year at BU (2002 and 2006) and at the University of Florida (2008), the College of Engineering Outstanding Teacher of the Year Award at BU (2004), the Book of the Year Award at Draper Laboratory (2006), and the Pramod P. Khargonekar Junior Faculty Award (2012) at the University of Florida. He is an Associate Fellow of the American Institute of Aeronautics and Astronautics.
|5/10||Thomas Wallin||Manufacturing techniques of soft robotics|
|Conventional robots are composed of rigid components with discrete linkages that promote high precision and controllability; however, these systems require complex sensing and feedback controls and can struggle to perform in uncontrolled conditions. Soft robots, by comparison, reduce the control complexity and manufacturing cost, while simultaneously allowing new, sophisticated functions. While earlier generations of soft robots were limited architecturally and functionally, recent advances in materials and additive manufacturing technologies have enabled new and exciting capabilities. In this talk, I will begin by discussing the essential elements of soft robots, highlighting the pertinent material properties. Then I will describe the advantages and limitations of the different 3D printing technologies employed in both the indirect and direct fabrication of soft actuators. For each manufacturing technique, we will discuss the compatible material classes with a focus on actuation and/or sensing mechanisms.|
|8/24||Kirstin Petersen||Designing Robot Collectives|
In robot collectives, interactions between large numbers of individually simple robots lead to complex global behaviors. A great source of inspiration is social insects such as ants and bees, where thousands of individuals coordinate to handle advanced tasks like food supply and nest construction in a remarkably scalable and error tolerant manner. Likewise, robot swarms have the ability to address tasks beyond the reach of single robots, and promise more efficient parallel operation and greater robustness due to redundancy. Key challenges involve both control and physical implementation. In this seminar I will discuss an approach to such systems relying on embodied intelligent robots designed as an integral part of their environment, where passive mechanical features replace the need for complicated sensors and control.
The majority of my talk will focus on a team of robots for autonomous construction of user-specified three-dimensional structures developed during my thesis. Additionally, I will give a brief overview of my research on the Namibian mound-building termites that inspired the robots. Finally, I will talk about my recent research thrust, enabling stand-alone centimeter-scale soft robots to eventually be used in swarm robotics as well. My work advances the aim of collective robotic systems that achieve human-specified goals, using biologically-inspired principles for robustness and scalability.
|8/31||Michael Duffy||Boeing LIFT! Project – Cooperative Drones to Reduce the Cost of Vertical Flight|
The LIFT! Project explored scaling of all-electric multi-rotor propulsion and methods of cooperation between multiple VTOL aircraft. Multi-rotor aircraft have become pervasive throughout the hobby industry, toy industry, and research institutions due – in part – to very powerful, inexpensive inertial measurement devices and the increased energy density of Li-Ion batteries driven by the mobile phone industry. This research demonstrates the viability of large multi-rotor systems up to two orders of magnitude heavier than a typical COTS hobby multi-rotor vehicle. Furthermore, this research demonstrates modularity and cooperation between large multi-rotor aircraft. In order to study large multi-rotor technologies, The Boeing Company built a series of large-scale multi-rotor vehicles ranging from 6 lbs gross weight to over 525 lbs gross weight using low-cost COTS components. The LIFT! Project successfully demonstrated the effectiveness, modularity, and scalability of electric multi-rotor technologies while identifying a useful load fraction (useful load/gross weight) of 0.64 for large, electric, unmanned multi-rotor aircraft. This research offers new insights on the feasibility of large electric VTOL aircraft, empirical trends, potential markets, and future research necessary for the commercial viability of electric VTOL aircraft.
|9/7||Robotics Faculty||Robotics Lab Hop|
This week, the robotics research space on Upson Hall 5th floor opens its doors to the robotics seminar. See the newly-occupied labs of Profs. Ferrari, Shepherd, Kress-Gazit, Campbell, Ruina, and Knepper. Meet in the hallway outside of Upson 522.
|9/14||Guy Hoffman||Interacting with Robots through Touch: Materials as Affordances|
Nonverbal behavior is at the core of human-robot interaction, but the subfield of social haptics is distinctly underrepresented. Most efforts focus on inserting sensors under a soft skin and using pattern recognition to infer a human’s tactile intention. There is virtually no work on robots touching humans in a social way, or on robots responding to touch in a socially meaningful tactile manner. In that context, the advent of soft robotics and computational materials offers a new way for social robots to express internal and affective states. In the past, robots used mainly rotational and prismatic degrees of freedom for expression. How can new actuation technologies, such as shape-memory alloys, pneumatics, and “4D printed” structures, contribute to new feedback methods and interaction paradigms? And how can we integrate traditional materials, such as wood, metals, and ceramics, to support the robot’s expressive capacity?
|9/21||Keith Green||Architectural Robotics: Ecosystems of Bits, Bytes, & Biology|
Keith Evan Green looks toward a next frontier in robotics: interactive, partly intelligent, meticulously designed physical environments. Green calls this “Architectural robotics”: cyber-physical, built environments made interactive, intelligent, and adaptable by way of embedded robotics, or in William Mitchell’s words, “robots for living in.” In architectural robotics, computation—specifically robotics—is embedded in the very physical fabric of our everyday living environments at relatively large physical scales ranging from furniture to the metropolis. In this talk, Green examines how architectural robotic systems support and augment us at work, school, and home, as we roam, interconnect, and age.
|9/28||Ross Knepper||On the Communicative Aspect of Human-Robot Joint Action|
Actions performed in the context of a joint activity comprise two aspects: functional and communicative. The functional component achieves the goal of the action, whereas its communicative component, when present, expresses some information to the actor’s partners in the joint activity. The interpretation of such communication requires leveraging information that is public to all participants, known as common ground. Humans cannot help but infer some meaning – whether or not it was intended by the actor – and so robots must be cognizant of how their actions will be interpreted in context. In this talk, I address the questions of why and how robots can deliberately utilize this communicative channel on top of normal functional actions to work more effectively with human partners. We examine various human-robot interaction domains, including social navigation and collaborative assembly.
|10/5||Ross Knepper||Part II: On the Communicative Aspect of Human-Robot Joint Action|
Part II — a continuation of the previous week's talk; the abstract for the 9/28 session above applies.
|10/12||Susan Herring||Telepresence Robot Communication, Gender, and Metaphors of (Dis)ability|
The principal use of telepresence robots is for human-human communication, where at least one person (the pilot) is remote via the robot and one or more persons (locals) are on site. It is important, therefore, to understand the nature of such communication – how locals perceive robot pilots as social actors, how robotic mediation affects interactional dynamics and norms, and how the experience of telepresence robot communication varies for different groups of users. In this talk, I address these questions through the dual lenses of gender and (dis)ability. I report the findings of a mock job interview study in which a male interviewer used a Beam+ telepresence robot, and the male and female interviewees were primed in advance with one of three metaphors about the interviewer – as a robot, as a (normal) human, or as a human with disabilities (cf. Takayama & Go, 2012). The interviews and responses to a post-study survey were analyzed for interaction with, and attitudes toward, the robot interviewer. Initial results reveal differences across genders and across metaphorical priming conditions, but whereas the former are largely consistent with previous findings on gender and technology, the metaphor findings were unexpected. I discuss evidence that telepresence robot communication privileges some groups of communicators over others and suggest possible interventions – including metaphor manipulation and modifications to the robots themselves – to establish a level playing field before telepresence robot communication practices, which are currently emergent, become fixed.
Biographical Note: Susan Herring is Professor of Information Science and Linguistics at Indiana University, Bloomington. Mobility-challenged herself, she uses and researches telepresence robots. She is also a long-time researcher of digitally-mediated communication, Director of IU’s Center for Computer-Mediated Communication, a past editor of the Journal of Computer-Mediated Communication, and current editor of Language@Internet.
|10/19||Adrian Boteanu||Verifiable Grounding and Execution of Natural Language Instructions|
Robots are increasingly expected to work alongside humans. Natural language enables bi-directional interaction: users can specify tasks, and the system can provide feedback. A significant challenge particular to this situated interaction is establishing the correspondence between language and its physical meaning, such as actions and objects, a problem known as grounding. As both tasks and environments increase in complexity, the potential for ambiguity in interpreting the user’s statements increases.
I will present a grounding model which combines both physical and Linear Temporal Logic (LTL) representations to ground instructions. It allows for a formal specification to be generated from the grounding process. This specification is synthesized into a controller guaranteed to accomplish the task. Conversely, if synthesis is unsuccessful, it reveals problems such as logical inconsistencies in the specification or discrepancies between the specification and the physical environment.
In this latter case, the robot conveys these issues through natural language by referencing the physical environment and incorporates the user’s responses back into the specification. This robot-driven interaction enables the user to iteratively correct the grounded specification without requiring knowledge of the underlying representation.
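To make the idea of checking temporal specifications concrete, the toy sketch below evaluates the two simplest Linear Temporal Logic operators over a finite execution trace. This is only an illustration of the formalism, not the speaker's grounding or synthesis system; the propositions and trace are invented.

```python
def eventually(prop, trace):
    """LTL 'F prop': prop holds at some step of the trace."""
    return any(step.get(prop, False) for step in trace)

def always(prop, trace):
    """LTL 'G prop': prop holds at every step of the trace."""
    return all(step.get(prop, False) for step in trace)

# A grounded instruction like "put the block in the bin" could be rendered
# as the requirement F(block_in_bin) over the robot's execution.
trace = [
    {"holding_block": True,  "block_in_bin": False},
    {"holding_block": True,  "block_in_bin": False},
    {"holding_block": False, "block_in_bin": True},
]
print(eventually("block_in_bin", trace))   # True: the goal is reached
print(always("holding_block", trace))      # False: the block is released
```

In the systems described in the abstract, such specifications are not merely checked against traces after the fact: a controller is synthesized that is guaranteed to satisfy them, and failed synthesis is used to drive clarifying dialogue with the user.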
|11/30||Mike Meller||Improving actuation efficiency through variable recruitment hydraulic McKibben muscles|
Hydraulic control systems have become increasingly popular as the means of actuation for human-scale legged robots and assistive devices. One of the biggest limitations of these systems is their run time when untethered from a power source. One way to increase endurance is by improving actuation efficiency. We investigate reducing servovalve throttling losses by using a selective recruitment artificial muscle bundle comprising three motor units. Each motor unit is made up of a pair of hydraulic McKibben muscles connected to one servovalve. The pressure and recruitment state of the artificial muscle bundle can be adjusted to match the load in an efficient manner, much like the firing rate and total number of recruited motor units are adjusted in skeletal muscle. A volume-based effective initial braid angle is used in the model of each recruitment level. This semi-empirical model is utilized to predict the efficiency gains of the proposed variable recruitment actuation scheme versus a throttling-only approach. A real-time orderly recruitment controller with pressure-based thresholds is developed. This controller is used to experimentally validate the model-predicted efficiency gains of recruitment on a robot arm. The results show that utilizing variable recruitment allows for much higher efficiencies over a broader operating envelope.
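The orderly-recruitment idea can be sketched as a simple threshold rule: engage the fewest motor units that can carry the demanded load, so each engaged unit operates near full pressure instead of being heavily throttled. This is a hypothetical illustration based on force thresholds rather than the talk's pressure-based controller, and all numbers are invented.

```python
import math

MAX_FORCE_PER_UNIT = 100.0   # N per motor unit at full pressure (illustrative)
NUM_UNITS = 3                # the bundle in the talk has three motor units

def recruit(demanded_force):
    """Return (number of units recruited, force commanded per unit).

    Orderly recruitment: switch in additional units only when the demand
    exceeds what the currently recruited units can supply."""
    units = min(NUM_UNITS,
                max(1, math.ceil(demanded_force / MAX_FORCE_PER_UNIT)))
    return units, demanded_force / units

for load in (40.0, 150.0, 290.0):
    n, f = recruit(load)
    print(f"load {load:5.1f} N -> {n} unit(s) at {f:6.1f} N each")
```

With a 40 N load only one unit is engaged; a 150 N load recruits a second unit, splitting the demand at 75 N each, so neither servovalve has to throttle a fully pressurized muscle down to a small force.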
|12/7||Chris Mavrogiannis||Decentralized Multi-Agent Navigation Planning with Braids|
Navigating a human environment is a hard task for a robot, due to the lack of formal rules guiding traffic, the lack of explicit communication among agents and the unpredictability of human behavior. Despite the great progress in robotic navigation over the past few decades, robots still fail to navigate multi-agent human environments seamlessly. Most existing approaches focus on the problem of collision avoidance without explicitly modeling agents’ interactions. This often results in non-smooth robot behaviors that tend to confuse humans, who in turn react unpredictably to the robot motion and further complicate the robot’s decision making.
In this talk, I will present a novel planning framework that aims at reducing the emergence of such undesired oscillatory behaviors by leveraging the power of implicit communication through motion. Inspired by the collaborative nature of human navigation, our approach explicitly incorporates the concept of cooperation in the decision making stage, by reasoning over joint strategies of avoidance instead of treating others as separate moving obstacles. These joint strategies correspond to the spatiotemporal topologies of agents’ trajectories and are modeled using the topological formalism of braids. The braid representation is the basis for the design of an inference mechanism that associates agents’ past trajectories with future collective behaviors in a given context. This mechanism is used as a means of “social understanding” that allows agents to select actions that express compliance with the emerging joint strategy by compromising efficiency. Incorporating such a mechanism in the planning stage results in a rapid uncertainty decrease regarding the emerging joint strategy that facilitates all agents’ decision making. Simulated examples of multi-agent scenarios highlight the benefit of reasoning about joint strategies and appear promising for application in real-world environments.
|2/3||Abhishek Anand||Building machines worthy of being entrusted with human lives||Ross Knepper|
|2/10||David Moroniti & Spyros Maniatopoulos||Commercializing robotics: developing autonomous solutions for customer-driven problems||Andy Ruina|
|2/17||Andy Ruina||Recent thoughts on how to balance while walking||Andy Ruina|
|2/24||Ross Knepper||The Modern Prometheus: or Making Robots More Social May Harm HRI||Ross Knepper|
|3/2||Ian Lenz||Deep Learning for Robotics||Ross Knepper|
|3/9||Cynthia Sung||Roundtable discussion||Wil Thomason|
|3/16||Kress-Gazit, Jung, Hoffman, Knepper, Ruina||Research spotlights of several faculty||Ross Knepper|
|3/23||Adrian Boteanu||A Model for Verifiable Grounding and Execution of Complex Language Instructions||Ross Knepper|
|4/13||Guy Hoffman||Experimental Research in Progress on Robots and Ethics||Ilse Van Meerbeek|
|4/20||Jesse Miller & Andy Ruina||A directionally self-stable robotic sail boat: concept and simulations||Ross Knepper|
|4/27||Huichan Zhao & Jonathan Jalving||Optical Sensing and EMG Control of a Soft Orthosis||Ilse Van Meerbeek|
|5/4||Spyros Maniatopoulos, Jennifer Padgett||Practice ICRA Talks||Ilse Van Meerbeek|
|5/11||Michael Meller||Improving actuation efficiency through variable recruitment hydraulic McKibben muscles||Ilse Van Meerbeek|
|9/4||Thomas Wallin||Hydrogel Stereolithography for Soft Robotics||Ilse van Meerbeek|
|9/11||Steve Squyres||Robotic Exploration of the Martian Surface with the Rovers Spirit and Opportunity||Ross Knepper|
|9/18||Boris Kogan||Bi-pedal robots: crude mechanical design methodologies||Andy Ruina|
|9/25||Guy Hoffman||Designing Robots for Fluent Collaboration||Ross Knepper|
|10/2||Greg Stiesberg||A passively stable hopping robot that isn’t passively stable||Andy Ruina|
|10/9||Pingping Zhu||Biophysical Modeling of Satisficing Control Strategies as Derived from Quantification of Primate Brain Activity and Psychophysics||Silvia Ferrari|
|10/16||Malte Jung||Robots and the Dynamics of Emotions in Work Teams||Ross Knepper|
|10/23||Boris Kogan||Slow little boats in a big fast ocean||Andy Ruina|
|10/30||Spyros Maniatopoulos||Reactive Robot Mission Planning: Bridging the Gap Between Theory and Practice|
|11/6||Minas Liarokapis (Yale)||Adaptive Robot Hands: Challenges and Applications||Ross Knepper|
|11/13||Matt Sheen||Good Robot Simulation||Andy Ruina|
|11/20||Mason Peck||Control-Moment Gyroscopes for Low-Power Robotic Motion||Ross Knepper|
|12/4||Hongchuan Wei||Sensor Planning for Multiple Target Tracking||Silvia Ferrari|
|12/11||Matt Kelly||Trajectory Optimization – Overview and Tutorial||Andy Ruina|