Undergraduate projects with extensions to master's projects.
Making practice models
I am interested in the space of deformations for organic shapes, such as the shape of the ulna bone across the human population. To understand this space, we need 3D models and correspondences between them. This requires scanning many instances of a shape and identifying corresponding features across those scans.
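As a hedged starting point (a common baseline, not necessarily the method this project would use), correspondences between two roughly aligned scans can be seeded by nearest-neighbor matching followed by a rigid Procrustes/ICP-style re-alignment. The point clouds below are hypothetical stand-ins for two bone scans.

    import numpy as np
    from scipy.spatial import cKDTree

    def nearest_neighbor_correspondences(source, target):
        # For each source point, find the index of the closest target point.
        tree = cKDTree(target)
        dists, idx = tree.query(source)
        return idx, dists

    def best_rigid_transform(source, matched):
        # Least-squares rotation R and translation t mapping source onto
        # its matched points (the orthogonal Procrustes solution via SVD).
        mu_s, mu_m = source.mean(axis=0), matched.mean(axis=0)
        H = (source - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        return R, t

    # One ICP-style refinement step: match, then re-align.
    source = np.random.rand(100, 3)   # placeholder for scan A
    target = np.random.rand(120, 3)   # placeholder for scan B
    idx, _ = nearest_neighbor_correspondences(source, target)
    R, t = best_rigid_transform(source, target[idx])
    source = source @ R.T + t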
Finding opposing colors
Artists use luminance and warm-cool color contrasts to increase (or decrease) the sense of depth in a scene. They often adjust the brightness and color of two adjacent areas to change the relative depth relationships to (perceptually) match what they see in the real world - even if those colors are not the "correct" ones. How artists decide whether to adjust luminance, and which color changes to make, is an open question. In this project, given two colors, the goal is to determine whether a luminance change or a color change (and if the latter, which one) would be most effective in creating the desired perceived depth change.
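One plausible first measurement (an assumption, not the project's eventual answer) is to compare the two colors in a perceptual space such as CIELAB, where the lightness difference (L*) and the chromatic differences (a*, b*) separate roughly into the luminance and warm-cool axes the question cares about. A minimal sketch, using the standard sRGB to XYZ (D65) to Lab conversion:

    def srgb_to_lab(rgb):
        # Undo the sRGB gamma curve.
        lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
               for c in rgb]
        r, g, b = lin
        # Linear RGB -> CIE XYZ (D65 white point).
        x = 0.4124 * r + 0.3576 * g + 0.1805 * b
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        z = 0.0193 * r + 0.1192 * g + 0.9505 * b
        # XYZ -> Lab.
        def f(t):
            return t ** (1 / 3) if t > (6 / 29) ** 3 \
                   else t / (3 * (6 / 29) ** 2) + 4 / 29
        fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

    # Example: a warm orange next to a cool blue, in [0, 1] sRGB.
    L1, a1, b1 = srgb_to_lab((0.9, 0.5, 0.2))
    L2, a2, b2 = srgb_to_lab((0.2, 0.4, 0.8))
    print("luminance contrast dL* =", L1 - L2)
    print("warm-cool contrast db* =", b1 - b2)  # +b* yellowish, -b* bluish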
Cameras and perspective, user study
Computer graphics relies heavily on the simple pinhole camera model to
project 3D scenes into 2D. This projection is characterized by 11
parameters: position (3), orientation (3), focal length (1), center of
projection (2), and aspect ratio and skew (2). The last two are not
used particularly often, because they produce odd effects.
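For concreteness, here is one textbook way those 11 parameters assemble into a 3x4 projection matrix (a standard construction, not code from this project; the Euler-angle convention is an assumption):

    import numpy as np

    def rotation_from_euler(rx, ry, rz):
        # Orientation as a product of axis rotations (one common convention).
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def projection_matrix(position, orientation, f, cx, cy, aspect, skew):
        # Intrinsics: focal length, skew, center of projection, aspect ratio.
        K = np.array([[f,        skew, cx],
                      [0, f * aspect,  cy],
                      [0,           0,  1]])
        # Extrinsics: orientation and world-space position.
        R = rotation_from_euler(*orientation)
        t = -R @ np.asarray(position)
        return K @ np.hstack([R, t[:, None]])   # 3x4

    P = projection_matrix(position=(0, 0, -5), orientation=(0, 0, 0),
                          f=1.0, cx=0.0, cy=0.0, aspect=1.0, skew=0.0)
    x = P @ np.array([1.0, 2.0, 0.0, 1.0])      # project a homogeneous point
    print(x[:2] / x[2])                         # 2D image coordinates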
Traditional camera manipulation has treated the camera as a physical
object in the scene, which the user positions (in a 2D window!) much
the same as they would any other object. It is surprisingly difficult
to frame scenes in this manner, unless the task is something simple,
like centering the camera on an object. We have been working on
alternative camera control methods that instead work in terms of the
2D projection changes that result from changing the camera.
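A hedged sketch of the quantity such interfaces revolve around: how a point's 2D projection moves per unit change in each camera parameter, estimated here by finite differences (reusing projection_matrix from the sketch above; all names are illustrative):

    import numpy as np

    def project(params, X):
        # params: 3 position, 3 orientation, 1 focal length.
        position, orientation, f = params[:3], params[3:6], params[6]
        P = projection_matrix(position, orientation, f,
                              cx=0, cy=0, aspect=1, skew=0)
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    def image_jacobian(params, X, eps=1e-5):
        # 2x7 matrix: rows are image x, y; columns are the 7 parameters.
        params = np.asarray(params, dtype=float)
        base = project(params, X)
        cols = []
        for i in range(len(params)):
            bumped = params.copy()
            bumped[i] += eps
            cols.append((project(bumped, X) - base) / eps)
        return np.stack(cols, axis=1)

    J = image_jacobian([0, 0, -5, 0, 0, 0, 1.0], X=np.array([1.0, 2.0, 0.0]))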
The big question this project asks is: Is camera projection inherently
difficult to control and understand, or is it the computer camera
controls that make it difficult?
Study outline: Set up two similar scenes, one virtual and one
physical. Give the user a storyline task (illustrate this story with
images) and ask them to create those images using either the virtual
scene or the physical one (with a hand-held camera). Compare how they
approach the task in the two cases by asking the participants to "talk
out loud" and through explicit questionnaires.
Textures through sketching
There are several procedural methods for specifying texture (reaction
diffusion, Voronoi tiles, sums of sines and cosines, etc.). The
advantage of procedural textures is that you can make as much of
something as you'd like. The disadvantage is that it can be very
difficult to find the parameters that produce a particular texture.
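As a concrete example of the parameter spaces involved, here is a minimal sum-of-sines texture; the amplitudes, frequencies, and orientations below are exactly the sort of parameters a sketching interface would have to recover:

    import numpy as np

    def sum_of_sines(width, height, terms):
        # terms: list of (amplitude, frequency, angle_radians, phase).
        ys, xs = np.mgrid[0:height, 0:width] / float(max(width, height))
        tex = np.zeros((height, width))
        for amp, freq, angle, phase in terms:
            # Project each pixel onto the wave direction, then take a sine.
            u = xs * np.cos(angle) + ys * np.sin(angle)
            tex += amp * np.sin(2 * np.pi * freq * u + phase)
        return tex

    # Three hand-picked waves; crossing orientations give a woven look.
    texture = sum_of_sines(256, 256, [(1.0, 8, 0.0, 0.0),
                                      (0.5, 16, np.pi / 3, 1.0),
                                      (0.25, 32, -np.pi / 4, 0.0)])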
The point of this project is to implement a sketching interface for a
specific procedural texture. This requires a combination of machine
learning and computational geometry (depending on the texture type) to
map from a sketch to a set of parameters.
Eventually, the goal is to build up a library of these approaches so
that the user can incrementally create a texture, e.g., adding spots
to a brick texture, by layering textures on top of one another.
3D painting
One of the characteristics of a real painting (as opposed to an image
or a print) is that the paint itself produces a slight depth effect
and view-dependent color combinations due to changing layers of
pigment. The artist can either blend two pigments together in a
medium such as oil and apply them as one mixture, or layer the
separate pigments one on top of the other. The two produce slightly
different effects because, although our eye "blends" the two color
spectra together, the combined spectrum changes as the viewpoint
changes. This is in part because the top layer is not uniformly
thick, so the amount of blended pigment changes slightly.
In this project, the goal is to create a painting system that allows
the artist to explicitly layer pigments and create novel blend
effects. The painting actually changes as the viewpoint shifts.
I have a prototype of this system, implemented by Thomas
Lin. Currently, it uses a true ray tracer to implement the depth
effects. Bringing it up to real time involves replacing the ray tracer
with a 'pseudo' ray tracer that exploits the fact that the scene is a
set of height fields.
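A hedged sketch of that speed-up: instead of general ray tracing, step each ray across the height field and stop at the first sample where it dips below the stored paint height (the core idea of relief mapping; names are illustrative):

    import numpy as np

    def march_height_field(heights, origin, direction, step=0.5,
                           max_steps=1000):
        # heights: 2D array of paint thickness over the canvas.
        h, w = heights.shape
        p = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        for _ in range(max_steps):
            p = p + step * d
            ix, iy = int(round(p[0])), int(round(p[1]))
            if not (0 <= ix < w and 0 <= iy < h):
                return None                 # ray left the canvas
            if p[2] <= heights[iy, ix]:
                return ix, iy               # hit: below the paint surface
        return None

    heights = np.random.rand(64, 64) * 0.2  # placeholder paint relief
    hit = march_height_field(heights, origin=(0.0, 32.0, 1.0),
                             direction=(1.0, 0.0, -0.02))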
Also, the color blending mechanism is currently a straight alpha
blend. There are a variety of other ways to blend colors, some based
on physics and some simply made up.
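For example (these two rules are illustrative, not the set the project would settle on): straight alpha blending versus a simple multiplicative "glaze" blend that filters the base color rather than covering it:

    import numpy as np

    def alpha_blend(base, layer, alpha):
        # What the prototype does now: linear interpolation.
        return (1 - alpha) * base + alpha * layer

    def glaze_blend(base, layer, alpha):
        # Lerp toward a multiplicative filter of the base by the layer,
        # loosely mimicking a transparent pigment over an underpainting.
        return (1 - alpha) * base + alpha * (base * layer)

    base  = np.array([0.9, 0.8, 0.2])     # yellow underpainting
    layer = np.array([0.2, 0.3, 0.9])     # blue glaze on top
    print(alpha_blend(base, layer, 0.5))  # grayish mix
    print(glaze_blend(base, layer, 0.5))  # darker; the glaze filters
                                          # rather than covers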
Abstraction and realism
Artists often use abstraction to indicate large, textured areas. The
idea is to simplify away a lot of detail while keeping a few hints of
the overall texture. For example, a bush can be rendered as a few
shaded blobs, with some texture used to outline a couple of leaves and
the important branches. Most approaches to this apply some form of
Gaussian blurring to remove the details. However, this is rather
unnatural - our eye doesn't blur an image; it simply attends to a few
features of interest.
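For reference, that baseline is nearly a one-liner (assuming scipy is available); the project's contribution would be to replace it, not tune it:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    image = np.random.rand(256, 256)              # placeholder rendering
    abstracted = gaussian_filter(image, sigma=4)  # blurs leaves and
                                                  # branches alike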
In this project, the goal is to make an algorithmic renderer of 3D
objects that "replaces" complicated detail with an abstraction, then
fills in realistic texture on top of it in selected places.
Nathan Dudley put together a first version
of this for algorithmically generated trees.