Robotics Seminar Fall 2018

Tuesdays, 3:00-4:00pm, Upson 106 (conference room next to the lounge).

Light refreshments served starting at 2:45.

Fall 2018 Schedule

 8/28  Shuo Li, Cornell University Additive Manufacturing of Soft Robots
 This talk will present multidisciplinary work spanning composite materials and robotics. We have created new types of actuators, sensors, displays, and additive manufacturing techniques for soft robots and haptic interfaces. For example, we now use stretchable optical waveguides as sensors for high accuracy, repeatability, and material compatibility with soft actuators. For displaying information, we have created stretchable, elastomeric light-emitting displays as well as texture-morphing skins for soft robots. We have also created a new type of soft actuator based on molded foams, developed new chemical routes for stereolithography printing of silicone- and hydrogel-elastomer-based soft robots, and implemented deep learning in stretchable membranes for interpreting touch. All of these technologies depend on the iterative and complex feedback between material and mechanical design. I will describe this process, the present state of the art, and future opportunities for science in the space of additive manufacturing of elastomeric robots.
Wed 9/5 Adam Bry, Skydio Scaling up autonomous flight

NOTE: Special time and location: 5pm on Wednesday in Upson 106

Abstract: Drones hold enormous potential for consumer video, inspection, mapping, monitoring, and perhaps even delivery. They’re also natural candidates for autonomy and likely to be among the first widely deployed systems that incorporate meaningful intelligence based on computer vision and robotics research. In this talk I’ll discuss the research we’re doing at Skydio, along with the challenges involved in building a robust robotics software system that needs to work at scale.

Bio: Adam Bry is co-founder and CEO of Skydio, a venture-backed drone startup based in the Bay Area. Prior to Skydio, he helped start Project Wing at Google[x], where he worked on flight algorithms and software. He holds an SM in Aero/Astro from MIT and a BS in Mechanical Engineering from Olin College. Adam grew up flying radio-controlled airplanes and is a former national champion in precision aerobatics. He was named to the MIT Technology Review 35 Innovators Under 35 list in 2016.


9/4/18 Bonus Seminar this week!

4-5 p.m. in 203 Thurston Hall

Architectural Robotics: Ecosystems of Bits, Bytes and Biology

Info here: https://www.mae.cornell.edu/news/events.cfm?event=18875&view=future&y=2018&m=8&d=31


9/11  Andy Ruina, Cornell University Some Thoughts on Model Reduction for Robotics
 These are unpublished thoughts, actually more questions than thoughts, and not all that well informed. So audience feedback is welcome, especially from people who know how to formulate machine learning problems (I already know, sort of, how to formulate MatSheen learning problems).

One posing of many robotics control problems is as a general problem in `motor control' (a biological term, I think). Assume one has a machine and the best model (something one can compute simulations with) that one can actually get of the machine, its environment, its sensors, and its computational abilities. One also has some sense of the uncertainty in various aspects of these. The general motor control problem is this: given a history of sensor readings and requested goals (commands), and all of the givens above, what computation should be done to determine the motor commands so as to best achieve the goals? "Best" means most accurately and most reliably, by whatever measures one chooses.

If one poses this as an optimization problem over the space of all controllers (all mappings from command and sensor histories to the set of motor commands), it is too big a problem, even if coarsely discretized. Hence, everyone applies all manner of assumed simplifications before attempting to make a controller. The question here is this: can one pose an optimization problem for the best simplification? Can one pose it in a way such that finding a useful approximate solution could be useful? In bipedal robots there are various classes of simplified models used by various people to attempt to control their robots. Might there be a rational way to choose between them, or to find better ones? As abstract as this all sounds, perhaps thinking about such things could help us make better walking-robot controllers.
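A rough formalization of that search over controllers (the notation here is added for this writeup, not taken from the talk):

    % Hypothetical notation: \Pi is the set of all controllers, J a chosen
    % performance measure (accuracy and reliability under model uncertainty).
    \[
      \pi^{*} \;=\; \arg\max_{\pi \in \Pi} \; J(\pi),
      \qquad
      \pi : (\text{sensor and command histories}) \longrightarrow \text{motor commands}.
    \]

The model-reduction question then becomes: find a restricted class \(\Pi_r \subset \Pi\) (a simplified model and controller family) such that optimizing within \(\Pi_r\) is tractable, while \(\max_{\pi \in \Pi_r} J(\pi)\) gives up as little as possible relative to the full problem.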
9/18  Group Discussion Big-data machine learning meets small-data robotics
 Abstract: Machine learning techniques have transformed many fields, including computer vision and natural language processing, where plentiful data can be cheaply and easily collected and curated.  Training data in robotics is expensive to collect and difficult to curate or annotate.  Furthermore, robotics cannot be formulated simply as a prediction problem in the way that vision and NLP often can.  Robots must close the loop, meaning that we ask our learning techniques to consider the effect of possible decisions on future predictions.  Despite exciting progress in some relatively controlled (toy) domains, we still lack good approaches to adapting modern machine learning techniques to the robotics problem.  How can we overcome these hurdles?  Please come prepared to discuss.  Here are some potential discussion topics:

  1. Are robot farms like the one at Google a good approach?  Google has dozens of robots picking and placing blocks 24/7 to collect big training data in service of training traditional models.
  2. Since simulation allows the cheap and easy generation of big training data, many researchers are attempting domain transfer from simulation to the real robot.  Should we be attempting to make simulators photo-realistic with perfect physics?  Alternatively, should we instead vary simulator parameters to train a more general model?  (A minimal sketch of this parameter-randomization idea appears after this list.)
  3. How can learned models adapt to unpredictable and unstructured environments such as people’s homes?  When you buy a Rosie the Robot, is it going to need to spend a week exploring the house, picking up everything, and tripping over the cat to train its models?
  4. If we train mobile robots to automatically explore and interact with the world in order to gather training data at relatively low cost, the data will be biased by choices made in building that autonomy.  Similar to other recent examples in which AI algorithms adopt human biases, what are the risks inherent in biased robot training data?
  5. What role does old-fashioned robotics play?  We have long known how to build state estimators, planners, and controllers by hand.  Given that these work pretty well, should we be building learning methods around them?  Or should they be thrown out and the problems solved from scratch with end-to-end deep learning methods?
  6. What is the connection between machine learning and hardware design?  Can a robot design co-evolve with its algorithms during training?  Doing so would require us to encode design specifications much more precisely than has been done in the past, but so much of design practice resists specification due to its complexity.  Specifically, can design be turned into a fully-differentiable neural network structure?

Please bring your own questions for the group to discuss, too!
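As a concrete illustration of topic 2, here is a minimal sketch of the parameter-randomization idea, often called domain randomization. The simulator interface, parameter names, and ranges below are hypothetical placeholders, not from any particular system:

    import random

    def make_simulator(friction, mass_scale, sensor_noise):
        """Hypothetical stand-in for a physics simulator configuration."""
        return {"friction": friction, "mass_scale": mass_scale,
                "sensor_noise": sensor_noise}

    def sample_randomized_sim():
        # Each training episode draws fresh physics parameters, so a policy
        # trained across many draws cannot overfit to any single (inevitably
        # imperfect) simulator configuration.
        return make_simulator(
            friction=random.uniform(0.5, 1.5),
            mass_scale=random.uniform(0.8, 1.2),
            sensor_noise=random.uniform(0.0, 0.05),
        )

    def run_training_episode(policy, sim):
        """Placeholder for one rollout of the policy plus a learning update."""
        pass

    policy = None  # placeholder for whatever learner is being trained
    for episode in range(1000):
        run_training_episode(policy, sample_randomized_sim())

The intent of this design is that the only behaviors surviving training are those robust to the whole parameter range, which is what one hopes transfers to the real robot.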

9/25 Cheng Zhang, Cornell University Sensing + Interaction On and Around the Body

 Abstract: Wearables are a significant part of the new generation of computing. Compared with more traditional computers (e.g., laptops, smartphones), wearable devices are more readily available for immediate use but significantly smaller in size, creating new opportunities and challenges for on-body sensing and interaction. My holistic research approach (from problem understanding to invention to implementation and evaluation) investigates how to effectively exchange information between humans, their environment, and wearables. My Ph.D. thesis focuses on novel wearable input using on-body sensing, through various high-level interaction gestures, low-level input events, and a redesign of the interaction itself. In this talk, I will highlight three projects. The first is a wearable ring that allows the user to input over 40 unistroke gestures (including text and numbers). It also shows how to overcome a limited training-set size, a common challenge in applying machine learning techniques to real systems, through an understanding of the characteristics of the data and algorithms. The second project demonstrates how to combine a strong yet incomplete understanding of on-body signal propagation physics with machine learning to create novel yet practical sensing and interaction techniques. The third project is an active acoustic sensing technique that enables a user to interact with wearable devices in the surrounding 3D space through continuous, high-resolution tracking of the finger’s absolute 3D position. It demonstrates how to solve a technical interaction challenge through a deep understanding of signal propagation. I will also share my vision of future opportunities for on-body sensing and interaction, especially in high-impact areas such as health, activity recognition, AR/VR, and more futuristic interaction paradigms between humans and the increasingly connected environment.

Bio: Cheng Zhang is an assistant professor in Information Science at Cornell University. He received his Ph.D. in Computer Science at the Georgia Institute of Technology, advised by Gregory Abowd (IC) and Omer Inan (ECE). His research focuses on enabling the seamless exchange of information among humans, computers, and the environment, with a particular emphasis on the interface between humans and wearable technology. His Ph.D. thesis presents 10 different novel input techniques for wearables, some leveraging commodity devices and others incorporating new hardware. His work blends an understanding of signal propagation on and around the body with, where necessary, appropriate machine learning techniques. His work has resulted in over a dozen publications in top-tier conferences and journals in Human-Computer Interaction and Ubiquitous Computing (including two best paper awards), as well as more than six pending U.S. and international patents. His work has attracted the attention of media outlets including ScienceDaily, DigitalTrends, ZDNet, New Scientist, RT, TechRadar, Phys.org, Yahoo News, Business Insider, and MSN News. The work leveraging commodity devices has had significant commercial impact: his work on novel smartwatch interaction was licensed by the Canadian startup ProximityHCI to improve the smartwatch interaction experience.

10/2 Short Student Talks Titles Below

 Presenter 1: Alap Kshirsagar, Hoffman Research Group

Title: Monetary-Incentive Competition between Humans and Robots: Experimental Results

Abstract: In this talk, I will describe an experiment studying monetary-incentive competition between a human and a robot. In this first-of-its-kind experiment, participants (n=60) competed against an autonomous robot arm in ten competition rounds, carrying out a monotonous task to win monetary rewards. For each participant, we manipulated the robot’s performance and the reward in each round. We found a small discouragement effect, with human effort decreasing as robot performance increased, significant at the p < 0.005 level. We also found a positive effect of the robot’s performance on its perceived competence, a negative effect on the participants’ liking of the robot, and a negative effect on the participants’ self-competence, all at p < 0.0001.
These findings shed light on how people may exert work effort and perceive robotic competitors in a human-robot workforce, and could have implications for labor-supply decisions and for the design of compensation schemes in the workplace. I will also briefly comment on some experimental and statistical analysis practices that we adhered to in this study.
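As an illustration of the kind of analysis such a repeated-measures design invites (a hedged sketch, not necessarily the authors' actual method; the file and column names are hypothetical), one could fit a mixed-effects regression of per-round effort on the manipulated factors, with a random intercept per participant:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical layout: one row per participant per round, with columns
    # for effort, the manipulated robot performance, and the round's reward.
    df = pd.read_csv("competition_rounds.csv")  # placeholder file name

    # A random intercept per participant accounts for the ten repeated
    # rounds; a negative robot_performance coefficient would correspond to
    # the discouragement effect reported in the abstract.
    model = smf.mixedlm("effort ~ robot_performance + reward",
                        data=df, groups=df["participant"])
    result = model.fit()
    print(result.summary())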

Presenter 2: Carlos Araújo de Aguiar, Green Research Group

Title: transFORM – A Cyber-Physical Environment Increasing Social Interaction and Place Attachment in Underused, Public Spaces

Abstract: The emergence of social networks and apps has reduced the importance of physical space as a locus for social interaction. In response, we introduce transFORM, a cyber-physical environment installed in underused, outdoor public spaces. transFORM embodies our understanding of how a responsive, cyber-physical architecture can augment social relationships and increase place attachment. In this paper we critically examine the social-interaction problem in the context of our increasingly digital society, present our ambition, and introduce our prototype, which we will iteratively design and test. Cyber-physical interventions at large scale in public spaces are an inevitable future, and this paper serves to establish the fundamental terms of this frontier.

10/9 Neil Dantam, Colorado School of Mines  Task and Motion Planning: Algorithms, Implementation, and Evaluation
Everyday tasks combine discrete and geometric decision-making. The robotics, AI, and formal methods communities have concurrently explored different planning approaches, producing techniques with different capabilities and trade-offs. We identify the combinatorial and geometric challenges of planning for everyday tasks, develop a hybrid planning algorithm, and implement an extensible planning framework. In ongoing work, we are improving the scalability and extensibility of our task-motion planner and developing planner-independent evaluation metrics.
10/16 Dylan Shell, Texas A&M University Active Imperception and Naïve Robots
If robots become deeply interwoven in our lives, they’ll learn a great deal about us. Such robots may then disclose information about us, either if compromised by ne’er-do-wells or, more simply, when observed by third parties. In this talk I’ll describe a few ways we’ve been thinking about robotic privacy recently. This will include a privacy-preserving tracking problem, where we’ll look at how one might think about estimators that are constrained to ensure they never know too much, and at how we can solve planning problems subject to stipulations on the information divulged during plan execution. In these cases, sensors can provide too much information, and an important question is: what sort of sensors are needed to ensure that the robot has the opportunity to cultivate ignorance? This is a robot design question, one which we’ll also examine briefly.
10/23 Short Student Talks Titles Below

 Thais Campos de Almeida, Cornell University

Yuhan Hu, Cornell University

Haron Abdel-Raziq, Cornell University

10/30  Short Student Talks Titles Below

 Adam Pacheck, Cornell University

Yixiao Wang, Cornell University

Ryan O’Hern, Cornell University

11/6  Short Student Talks Titles Below

 Nialah Wilson, Cornell University

Wil Thomason, Cornell University

Ji Chen, Cornell University

11/13 Tariq Iqbal, MIT
11/20 Tom Howard, University of Rochester
11/27
12/4  Short Student Talks Titles Below

 Matt Law, Cornell University

Steven Ceron, Cornell University

Chris Mavrogiannis, Cornell University

The schedule is maintained by Vanessa Maley (vsm34@cornell.edu) and Ross Knepper (rak@cs.cornell.edu). To be added to the mailing list, please email vsm34@cornell.edu.

Schedules for previous semesters
