Surface editing is a difficult problem for several reasons. First, you're usually using a 2D input device (the mouse) to edit 3D surfaces projected onto a 2D display, which pretty much destroys any sense of proprioception and scale. Second, most interfaces are built around whatever controls the surface representation happens to have, such as control points. Third, there is a trade-off between accurately capturing the desired changes and visual smoothness: too much filtering and it becomes difficult to make very fine changes, too little and the surface starts to look like someone forgot to iron it. Finally, it's important to capture not only what the user wants to change but also how far that change should extend.
Representation plays a big role in what types of interaction make sense and are easily supported. For example, splines provide nice local control (grab a control point and move it) but very little control over the width of influence. Wavelets are great for controlling detail at various levels, but their controls are very non-intuitive. Particles or points are very flexible and great for pushing around with shaped tools, but terrible at controlling smoothness. With Matthew Ayers (now at Microsoft), we looked at combining these representations and seamlessly switching between them to support a wide variety of curve manipulation techniques.
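To make the trade-off concrete, here is a minimal sketch (my own toy illustration, not our actual system; the function names are hypothetical) contrasting the two behaviors: a uniform cubic B-spline, where moving one control point only re-shapes the four spans it touches, versus a point-based edit, where a falloff radius gives direct control over the width of influence.

```python
import numpy as np

def bspline_point(ctrl, t):
    """Evaluate a uniform cubic B-spline at t in [0, len(ctrl) - 3]."""
    i = min(int(t), len(ctrl) - 4)   # active span; only ctrl[i:i+4] matter
    u = t - i
    # Cubic B-spline basis: four nonzero weights per span is exactly
    # what gives splines their local control.
    b = np.array([(1 - u)**3,
                  3*u**3 - 6*u**2 + 4,
                  -3*u**3 + 3*u**2 + 3*u + 1,
                  u**3]) / 6.0
    return b @ ctrl[i:i + 4]

def point_edit(points, center, offset, radius):
    """Push a point set along offset with a Gaussian falloff."""
    d = np.linalg.norm(points - center, axis=1)
    w = np.exp(-(d / radius)**2)     # radius is the width-of-influence dial
    return points + w[:, None] * offset
```

Moving ctrl[k] only changes the curve for parameters near k, no matter how large the move, while the point edit's radius says exactly how far the change extends but nothing about the smoothness of the result; switching representations lets one edit get the best of each.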
Widgets let the user specify not only what they want to change but how they want to change it, and they provide visual controls for a multitude of parameters. We explore widgets for sweeps, warps, and editing blends between two surfaces.
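As a sketch of the blend-editing idea (my own toy setup, not the actual widget code), consider two height fields joined across a strip, where the widget's draggable handles map to the strip's center and width:

```python
import numpy as np

def smoothstep(x):
    x = np.clip(x, 0.0, 1.0)
    return x * x * (3.0 - 2.0 * x)   # C1-continuous ramp from 0 to 1

def blended_height(x, y, f, g, center=0.0, width=1.0):
    """Blend height fields f and g across a strip set by the widget."""
    t = smoothstep((x - (center - width / 2.0)) / width)
    return (1.0 - t) * f(x, y) + t * g(x, y)
```

Dragging the widget then just re-evaluates the surface with new center and width values, so the user sees the blend region move and grow interactively.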
Sketching is another great interface technique, but the problem is making sense of the sketches, especially when they're in 3D. One solution is to project the sketches onto walls, so that the user edits only two dimensions at a time.
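A minimal sketch of that projection step, assuming axis-aligned walls (the wall names and WALLS table are my own, not from the paper):

```python
import numpy as np

WALLS = {"floor": 2, "side": 0, "back": 1}   # wall name -> coordinate pinned

def project_stroke(stroke, wall, level=0.0):
    """Flatten a 3D stroke (n x 3 array) onto an axis-aligned wall plane."""
    flat = np.asarray(stroke, dtype=float).copy()
    flat[:, WALLS[wall]] = level   # pin one coordinate; two stay editable
    return flat
```

Once the stroke lives on a wall, every subsequent edit moves points in only the two in-plane coordinates, which is exactly the ambiguity reduction the projection buys.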
See also: representation.