Welcome to my web pages. I am an Associate Professor in the School of MIME. I am affiliated with the Robotics group, Human-Centered Computing, and Graphics and Visualization. Previously, I was an
Associate Professor in the Department of Computer Science and
Engineering at Washington
University in St. Louis. If there's something you're
interested in, or want to know, that isn't covered on these pages, send me an email and ask.
My background is in computer science, specifically computer graphics and surface modeling. However, over the years I've delved into human computer interfaces, surface modeling for biological applications, bat sonar, art-based rendering, 3D sketching, and understanding how people perform 3D segmentation of volumetric data.
Currently, I have active research projects in the following areas:
- Robot grasping: Robots are terrible at picking things up; humans do a pretty good job of it but can't tell us how they do it. We apply human-factors-style analysis techniques to understand the mental models and local decision-making processes people use when grasping. In practice, we bring people in, ask them to grasp objects with a physical robot hand, and then ask them questions (in the form of physical tasks) to get them to demonstrate their decision-making. Ravi Balasubramanian
- Privacy in robotics: As robots move into our offices and homes (both autonomous and tele-operated), what social conventions should they follow? What are our privacy expectations? If a plumber tele-operates a robot around your house to fix your sink, would it be weird if it went into the bedroom? Would you clean up the house before the robot "came over"? Or would you want the robot to filter the video feed so that the mess was hidden? Our goal is to understand privacy and social conventions *before* robots become ubiquitous, so that we can adapt the technology to fit what people want. Bill Smart, Frank Bernieri, Ross Sowell, Margot Kaminski, Matt Rueben, Averting Robot Eyes
- Meaningful human control for autonomous systems through a law lens: This work grew out of a couple of days spent discussing meaningful human control for autonomous weapons with people from a wide variety of fields: the Navy, the Air Force, law, public policy, the United Nations Conventional Weapons group... Autonomous and semi-autonomous systems have the ability to learn, respond to data, and make decisions, all in ways that are difficult to predict. How do we, as humans, harness the strength of these techniques while showing a good-faith effort at minimizing their harm? We approach this through a mix of education (e.g., clearly defining how human-level concepts like "find the person" get translated to sensors and algorithms), software testing via data sets (e.g., providing explicit information on the range of expected values and how the data sets map to human concepts), and software engineering (bottom-up unit testing). Bill Smart, Woody Hartzog, Paper at We Robot 2017, An Education Theory of Fault for Autonomous Systems
- Spatial understanding of cross-sections: Cross-sections show up in 3D volume (e.g., MRI, CT) segmentation, analyzing how structures bend, geology... What makes understanding cross-sections hard? Can we develop training materials that help people understand them? This work grew out of analyzing how experts do 3D volume segmentation. Training novices is challenging both because the task itself is hard (going back and forth between 3D data, 2D cross-sections, and 3D surfaces) and because the tools are complicated. Our goal is to provide domain-agnostic training tools that help novices understand the core, underlying task independently of the tool set used to do it. Ruth West, Anahita Sanandaji, Inferring Cross-sections of 3D Objects: A 3D Spatial Ability Test Instrument for 3D Volume Segmentation, Eliciting Tacit Expertise in 3D Volume Segmentation
- Bat sonar for robots: Can we take what we know about how bat sonar works and design effective sensors for robots? Two key components are that the bats shape sound through the geometry of their ears and nose, and they move their head, ears, and nose. Can we back-engineer how these two components influence sonar effectiveness in order to design a physical, robotic equivalent? Rolf Mueller, Shape analysis for bat ears
It's that time of year again, when I receive many emails asking if I have space in my research group, whether I have a PhD position open, or whether I will please read a CV. I do not answer these emails. All applicants must go through the admissions process; I do not make individual hiring decisions. Your application will be reviewed and ranked by a group of faculty, at which point the top applicants will be selected.
If you have a specific question about the kinds of research I do, please feel free to ask; put RQ: in the subject line. If you are an undergraduate or graduate student at OSU, please feel free to make an appointment to come talk to me. I have many projects that are suitable for undergraduates.
- Robots at Cornell College
- A collaboration with my former PhD student, Prof. Ross Sowell, to bring robotics to Cornell College.[April 29, 2014]
- New Sketching Video
- A second video using the JustDrawIt system to create a tower.[March 1, 2012]
- News article
- My 20 minutes of fame in the Record. Thanks, Rennie, for doing such an excellent job applying subtle gaze direction to mammography training![January 1, 2012]
- Sabbatical at Adobe
- A video of the sketching interface (JustDrawIt) I developed while on sabbatical at Adobe.[July 1, 2011]
- Biomedical Engineering course at SIGGRAPH
- Biomedical course notes from SIGGRAPH 2010. Funded in part by NSF grant 0702662.[August 1, 2010]
My calendar. I am generally in from 8:30-10:30 and 12:30-4:30 every day.
My advice on how not to write an NSF grant.
My suggestions on how to survive flying with one or more children.
Source code in Sourceforge for mesh processing, feature finding, and surface normals from point clouds.
Curvature paper data files.
Volume Viewer, a user-friendly, 3D image segmentation program.
C++ notes for people who know the syntax but are looking for practical suggestions and information on what actually happens in the compiler and linker.
The SIGGRAPH course material: