This page lists my general research interests and includes links to specific project web pages. If you are interested in working with me on any of these projects, or on something else, there are some rules you need to be aware of before you contact me.
My research interests fall broadly into four categories: biological modeling, art-based interaction and rendering, human-robot interaction, and surface modeling.
Mesh processing and shape analysis source code and tools that I've made available via Sourceforge.
To come: 3D paintings, abstract rendering, 3D sketching
Human-robot interaction
Coming soon: Lewis the robot photographer, cooperative robot tasks
Subtle gaze direction
Subtle gaze direction is a fairly simple technique that takes advantage of the way the human visual system works to guide visual focus around an image without overt cues that the viewer consciously notices. At any given time we attend to only a very small portion of our visual field at high resolution (the fovea), roughly the width of your thumb held at arm's length. The eye saccades around the scene, jumping from point to point based on a variety of cues, and the brain integrates the result into one seamless image. One cue the brain uses is motion. By modulating a portion of the image in the peripheral vision, and turning the modulation off before the eye saccades to that location, it is possible to direct someone's gaze to that location. Because this cue is largely subconscious, it is in effect possible to "drive" someone's gaze around an image. We apply this technique to a variety of applications, both to help a user perform a task and to study how visual attention affects other cognitive functions such as recall.
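The core trick can be sketched in a few lines. This is purely an illustrative toy, assuming a hypothetical per-frame function fed by an eye tracker; the modulation depth, frequency, and cutoff distance are made-up values, not those of any actual system:

```python
import math

def gaze_direction_step(gaze, target, t, cutoff=80.0):
    """Return the luminance modulation to apply at `target` this frame.

    gaze, target: (x, y) pixel positions of the current fixation and of
    the region we want the viewer to attend to; t: time in seconds.
    All constants here are illustrative, not from any actual system.
    """
    # Distance from the current fixation to the modulated region.
    dist = math.hypot(target[0] - gaze[0], target[1] - gaze[1])
    # The key step: switch the modulation off as the eye approaches,
    # so the cue is never seen foveally and stays subconscious.
    if dist < cutoff:
        return 0.0
    # Otherwise, gently modulate luminance at a few Hz in the periphery.
    return 0.1 * math.sin(2.0 * math.pi * 4.0 * t)
```

Called once per frame, this keeps the flicker strictly peripheral: as soon as the fixation closes on the target, the cue vanishes.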
Coming soon: Using subtle gaze direction for teaching art history, locating objects, and reading mammograms
Surface Modeling
I am primarily interested in the representation, creation, and comparison of complicated, organic shapes. To date, most of the more interesting free-form models have been made by scanning 3D shapes; creating complicated shapes from scratch on the computer has proved to be a difficult task. There has been some progress in quickly sketching simple blobby models, and some beautiful work in sculpting and sketching implicit models and editing mesh models. I have developed a novel analytical surface representation, based on manifolds, that supports free-form editing, along with sketch- and widget-based tools for editing these, and other, surfaces. The heart of this representation is the ability to build complicated surfaces by locally specifying the desired shape, then blending the results together.
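As a toy illustration of that local-specify-then-blend idea, here is a one-dimensional partition-of-unity blend; the function names and weight functions are my own invention for this sketch, not the actual manifold-based representation:

```python
def blend(local_shapes, weights, u):
    """Blend locally specified shapes with a partition of unity (sketch).

    local_shapes: functions giving the desired shape on each local
    region; weights: smooth bump functions, one per region, nonzero
    only where their shape is defined.  At each parameter value u the
    result is the weighted average of the local shapes, so each
    region's geometry is specified independently and the weights
    stitch the pieces together smoothly in the overlaps.
    """
    ws = [w(u) for w in weights]
    total = sum(ws)  # normalize so the weights sum to one
    return sum(w * f(u) for w, f in zip(ws, local_shapes)) / total
```

In the overlap between two regions the result interpolates between the two local shapes; with smooth weights, the blended surface inherits their continuity.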
MRI, CT, and ultrasound all provide methods for visualizing the internals of human bodies. While visualization is useful, building full 3D models from the data opens up a potentially huge array of diagnostic tools, ranging from physical simulations to detailed comparisons of anatomical differences. Unfortunately, producing these models is currently a very time-consuming process requiring a great deal of human intervention. I am working on ways to automatically extract these models, or to speed up the manual segmentation process, by taking advantage of the fact that we know the anatomy the data represents. This involves representing not only the basic shape, but also how that shape can deform across the population.
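A standard way to capture such population variability is a point-distribution model: PCA over corresponding landmark coordinates. The sketch below is a generic illustration of that idea, not the specific method used in this work:

```python
import numpy as np

def build_shape_model(shapes):
    """Fit a simple point-distribution model to aligned training shapes.

    shapes: (n_samples, n_points * dim) array of corresponding landmark
    coordinates (assumed already aligned).  Returns the mean shape, the
    principal modes of variation, and the variance each mode explains.
    """
    shapes = np.asarray(shapes, dtype=float)
    mean = shapes.mean(axis=0)
    # PCA via SVD of the centered landmark matrix.
    _, s, modes = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, modes, s ** 2 / (len(shapes) - 1)

def synthesize(mean, modes, coeffs):
    """New plausible shape: the mean deformed along the leading modes."""
    return mean + np.asarray(coeffs) @ modes[: len(coeffs)]
```

Fitting the model's mode coefficients to partial image evidence is what lets known anatomy constrain, and thereby speed up, segmentation.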
I also have developed a large number of shape analysis tools that I've made available via Sourceforge.
While at Microsoft, I worked in the area of facial animation.
Biological modeling
In biological modeling I look at how geometry and topology can be used to analyze and measure the relationship of shape to function for a range of structures, such as hearts and joints, and in brain development and bat sonar. Human-computer interaction is a necessary component of my research in these areas, because it is the human's domain knowledge in biology that we tap into when designing algorithms and techniques. For biologists, simply creating a mathematical model of their data is only part of the story; ideally, these models should explain biological function and drive the development of new hypotheses. This is the aspect of biomechanical research that I find truly captivating: advancing biological knowledge through the application of domain-specific computation and algorithms.
To come: 3D image segmentation, brain development
Art-based interaction and rendering
This area is often called non-photorealistic rendering, although I am less interested in duplicating traditional media on the computer and more interested in capturing the artistic design process, and in developing the computer as a new art form.
Over the years, artists have developed a loose set of rules and traditions that enable them to effectively convey information to viewers. Many of these rules and traditions were developed by "reverse engineering" the human visual and cognitive systems. I am working on ways to quantify these rules in a computationally tractable form. The first step is to develop techniques that enable artists to manipulate images and models in ways that are more closely related to the kinds of decisions they make. The second step is to automate parts of that decision process, enabling anyone to convey their own information more effectively.