Safe Autonomy

(Themes: Formal methods, Control, Low-cost Co-design)

As autonomous systems gain traction in industry and government, the technical focus is on component performance: how to develop a more accurate segmentation algorithm, a faster sensor fusion technique, or a better decision logic at intersections. However, none of these systems will hit the streets unless they are shown to be sufficiently safe. Safety is a whole-system property: there is no sense in which a segmentation algorithm or a PID controller is "safe" in isolation. Techniques that break down global requirements into component specifications leave much to be desired in this domain. Moreover, the environment of an autonomous vehicle is severely under-determined: think of all the ways in which other cars (or other drones) could behave. Finally, think of all the machine learning black boxes that run on these systems. The challenge of safe autonomy is real and largely unaddressed.

We view Safe Autonomy as a discipline in its own right. Our approach has three prongs: on-board, scalable verification; provably correct control from high-level objectives; and a Computer-Aided Design (CAD) approach that deploys multiple tools within a coherent and traceable toolchain. More specifically, I am developing theory and techniques for scaling exhaustive verification to the point where an auto manufacturer can buy an ASIC custom-made for exhaustively verifying short-term plans, running in real time on board the car. On the dual side, I am developing efficient control algorithms for generating correct actuation from rich temporal logic descriptions. Finally, these techniques are integrated with other approaches in a toolchain that starts with an easy description of driving scenarios and ends with the production and analysis of high-risk situations. The overall goal is to develop a coherent "curriculum" on Safe Autonomy for autonomous cars and, more generally, for fully autonomous systems.
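For concreteness, here is a minimal sketch of the quantitative "robustness" semantics that such temporal-logic-based controllers maximize, evaluated on a toy discrete-time trajectory with axis-aligned box regions (the regions and trajectory are hypothetical, and this discrete sketch is not the continuous-time and continuous-space machinery referenced below):

```python
import numpy as np

def inside_box(p, lo, hi):
    """Robustness of 'point p is inside the axis-aligned box [lo, hi]':
    positive depth when inside, negative distance when outside."""
    return float(np.min(np.minimum(p - lo, hi - p)))

def reach_avoid_robustness(traj, goal_lo, goal_hi, unsafe_lo, unsafe_hi):
    """Discrete-time robustness of 'eventually inside goal AND always outside unsafe'.
    Eventually = max over time, always = min over time, conjunction = min."""
    reach = max(inside_box(p, goal_lo, goal_hi) for p in traj)
    avoid = min(-inside_box(p, unsafe_lo, unsafe_hi) for p in traj)
    return min(reach, avoid)

# Toy 2D trajectory that skirts a hypothetical unsafe box and ends in the goal box.
traj = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 1.5], [3.0, 2.8], [3.5, 3.5]])
rho = reach_avoid_robustness(
    traj,
    goal_lo=np.array([3.0, 3.0]), goal_hi=np.array([4.0, 4.0]),
    unsafe_lo=np.array([1.5, 0.0]), unsafe_hi=np.array([2.5, 1.0]),
)
print(f"robustness = {rho:.2f}  (positive: satisfied with margin; negative: violated)")
```

A robustness-maximizing controller then searches for the actuation whose resulting trajectory makes this value as large, and in particular as positive, as possible.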

See related publications.


Co-design of cost-efficient perception and control

(Themes: Formal methods, Control, Low-cost Co-design)

Autonomous systems, like self-driving cars, are particularly power-hungry: they run powerful on-board computers to process sensory data and extract useful information from it, such as the positions and velocities of neighboring cars, in real time. A quick calculation, based on figures from research vehicles, reveals that the drive time of a small electric car decreases from 6 hours to 2.5 hours when autonomous features are turned on. How do we decrease the energy consumption without compromising control performance?
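A back-of-the-envelope version of that calculation, with hypothetical figures chosen only to illustrate the arithmetic (the actual numbers from research vehicles may differ):

```python
# Back-of-the-envelope drive-time estimate. The figures below are hypothetical,
# chosen only to reproduce the 6 h -> 2.5 h effect described in the text.
battery_kwh = 30.0        # assumed usable capacity of a small EV battery pack
drive_power_kw = 5.0      # assumed average power draw for driving alone
autonomy_power_kw = 7.0   # assumed extra draw of sensors + on-board computers

baseline_hours = battery_kwh / drive_power_kw                          # 6.0 h
autonomous_hours = battery_kwh / (drive_power_kw + autonomy_power_kw)  # 2.5 h

print(f"manual driving:     {baseline_hours:.1f} h")
print(f"autonomous driving: {autonomous_hours:.1f} h")
```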

Perception algorithms (e.g., object detection from images), and more generally state estimation algorithms, are typically designed to always run to completion and provide the best answer possible in all situations. However, obtaining the best possible estimate is not always necessary: for example, on an empty road, the car can tolerate larger localization errors. A second downside of run-to-completion algorithms is that the delay incurred by the controller while waiting for the best estimate can actually degrade control performance, e.g., when a reaction is required sooner, as shown on the left. This presents an opportunity for integrating the design of the controller and the perception pipeline, to take advantage of these estimate-quality/energy and estimate-quality/performance trade-offs. In fact, the next generation of high-level energy-saving techniques will have to be task-aware, as modern system requirements make worst-case design approaches non-viable.
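A minimal, self-contained sketch of this idea, with a toy anytime estimator and a hypothetical accuracy policy (not our actual perception pipeline): the controller maps its current safety margin to a tolerable estimation error, and the estimator stops, and stops spending energy, as soon as that tolerance is met.

```python
import random

def tolerable_error(clearance_m):
    """Hypothetical policy: the larger the clearance to the nearest obstacle,
    the coarser (and cheaper) the localization estimate the controller accepts."""
    return min(1.0, 0.1 * clearance_m)    # tolerable localization error, in meters

def anytime_localize(measurements, tolerance_m, energy_per_sample_j=0.5):
    """Toy anytime estimator: keep averaging noisy position samples only until
    the standard error of the running mean meets the requested tolerance."""
    sigma = 2.0                           # assumed per-sample noise (meters)
    total, energy_j = 0.0, 0.0
    for n, z in enumerate(measurements, start=1):
        total += z
        energy_j += energy_per_sample_j
        error_bound = sigma / n ** 0.5    # standard error of the mean
        if error_bound <= tolerance_m:
            break                         # good enough for the current task: stop early
    return total / n, error_bound, energy_j

# On an empty road (large clearance) the estimator stops after a few samples;
# in dense traffic (small clearance) it runs longer and spends more energy.
true_pos = 12.0
samples = [true_pos + random.gauss(0.0, 2.0) for _ in range(400)]
for clearance in (30.0, 2.0):
    est, err, joules = anytime_localize(samples, tolerable_error(clearance))
    print(f"clearance {clearance:4.1f} m -> estimate {est:6.2f} m, "
          f"error bound {err:.2f} m, energy spent {joules:5.1f} J")
```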

My work focuses on characterizing the energy/quality/performance trade-off empirically and theoretically, and on developing rigorous algorithms for exploiting it. We consider control objectives from low-level tracking to high-level missions expressed in temporal logic, perception algorithms from low-level feature extraction to high-level object detection, and trade-off mechanisms from low-level frequency throttling to high-level perceptually-driven control margins. The overall goal is to push autonomous systems (and their components) to become less conservative and more fluid in their actions, without compromising safety.

See related publications.

Robustness-maximizing control. The left video shows 8 Crazyflie quadrotors executing a Reach-Avoid maneuver by following a robustness-maximizing trajectory that guarantees correctness in continuous time and space. See the CCTA paper for details of (an earlier version of) this controller. The right video shows a simulation of 8 quadrotors executing 2 different missions: 4 quadrotors perform a Reach-Avoid mission, and 4 perform a persistent surveillance mission. In both videos, the green box is the target region while red indicates an unsafe region.


Medical Devices: Verification, Programming and Clinical Trials

(Themes: Formal methods, Low-cost Co-design)

The design and implementation of reliable and correct medical device software is challenging, especially considering the limitations of the hardware platforms on which it must run and the compromises that they lead to. Safety recalls of pacemakers and Implantable Cardioverter Defibrillators (ICDs) between 1990 and 2000 affected over 600,000 devices; of these, 41% were due to firmware (i.e., software) issues. Yet the need for these devices continues unabated: e.g., 600,000 pacemakers are now implanted yearly, and 120,000 defibrillators are implanted yearly in the US alone. There is a pressing need for 1) rigorous verification of the correctness of these algorithms, and 2) efficient implementations that run on the limited hardware platforms. Correctness verification cannot be exhaustive (in the classical sense of formal methods) since the very notion of "correct" is ambiguous in the medical domain, and very much patient-specific. We need an approach that combines a) statistical methods that yield population-level answers, similar to clinical trials, b) formal methods that cover cases where disease-specific models are available and their limitations are understood, and c) programming languages and paradigms that automatically yield efficient implementations.

My work tackles these three aspects and uses ICDs as the application domain of choice: even though ICDs are significantly more complex than pacemakers, I developed models of the heart and ICD that fall in a decidable class, thus opening the way to formal verification of specific operating modes of an ICD. Given a verified model, I investigate the use of streaming formalisms for programming ICD algorithms, which automatically yield cost-efficient implementations. And I leverage Bayesian methods to incorporate computer models in the design of population-wide clinical trials. The overall goal is to decrease the regulatory burden of developing medical devices by proving models to be as reliable in the medical field as they are in more traditional engineering fields like the automotive industry, thus paving the way to Model-Based (pre-)Clinical Trials.
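As a toy illustration of the Bayesian ingredient only (hypothetical cohort sizes, success counts, and discount weight; not an actual trial design), a power-prior-style update lets evidence from an in-silico cohort, generated by the computer models, be borrowed at a discounted weight alongside real patient data:

```python
from scipy import stats

# Hypothetical cohort sizes and discount weight, for illustration only.
insilico_successes, insilico_n = 180, 200   # in-silico patients with correct ICD therapy
human_successes, human_n = 27, 30           # real patients enrolled so far
w = 0.3                                     # weight given to each in-silico patient

# Power-prior-style Beta-Binomial update: start from a vague Beta(1, 1) prior,
# add the in-silico evidence at discounted weight w, then the human evidence in full.
a = 1 + w * insilico_successes + human_successes
b = 1 + w * (insilico_n - insilico_successes) + (human_n - human_successes)
posterior = stats.beta(a, b)

print(f"posterior mean success rate : {posterior.mean():.3f}")
print(f"P(success rate > 0.85)      : {1 - posterior.cdf(0.85):.3f}")
```

The discount weight encodes how much the trial designers trust the computer model: setting it to zero recovers a conventional trial, while values near one treat simulated and real patients almost interchangeably.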

See related publications.

A conversation with Drs. Liang Jackson (Electrophysiology fellow, HUP) and Ari Brooks (Director of Endocrine and Oncologic Surgery and Director of the Integrated Breast Center at HUP) on the challenges of medical devices and clinical trials, followed by an explanation of Model-Based Clinical Trials.

A conversation with Dr. Sanjay Dixit, Director, Cardiac Electrophysiology Laboratories, Philadelphia VA Medical Center and Professor of Medicine at the Hospital of the University of Pennsylvania, on the use of computer simulation in Electrophysiology. This was presented at the kick-off meeting of CyberCardia, an NSF Frontiers grant.