Shape analysis in the geometry community follows a fairly standard pattern. It goes something like this:
- Fix a class of shapes (usually points or curves)
- Define a distance between two shapes
- Minimize the distance under transformations (rotations, translations, sometimes scaling)
- Approximate the distance if necessary
- Study the distance for special classes of shapes
This process has brought forth a number of interesting ideas and tools - among them
- free space decompositions and equivalence classes of regions with respect to combinatorial structure in the solution (so you can enumerate solution types)
- how to approximate spaces of transformations via carefully chosen grids in order to get provable approximations for distance estimation
- connections between geometric matching and string matching via different kinds of hashing tricks.
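To make the second idea concrete, here is a minimal sketch (my own illustrative Python, with made-up function names, not code from any of the papers in question) of approximating the minimum Hausdorff distance under rotation by sampling angles on a grid whose resolution is tied to the error parameter:

```python
import math
import random

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite 2D point sets."""
    nn = lambda p, S: min(math.dist(p, q) for q in S)
    return max(max(nn(p, B) for p in A), max(nn(q, A) for q in B))

def rotate(P, theta):
    """Rotate a 2D point set about the origin by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for (x, y) in P]

def min_hausdorff_under_rotation(A, B, eps):
    """Approximate the minimum over rotations of hausdorff(rotate(A, t), B).

    Rotating by a small angle d moves a point at radius r by at most r * d,
    so sampling angles at resolution eps / R (R = max radius in A) perturbs
    the Hausdorff distance by at most eps between grid points; the returned
    value is within an additive eps of the true minimum.
    """
    R = max(math.hypot(x, y) for (x, y) in A)
    step = eps / R
    k = int(2 * math.pi / step) + 1
    return min(hausdorff(rotate(A, i * step), B) for i in range(k))

random.seed(1)
A = [(random.random(), random.random()) for _ in range(10)]
B = rotate(A, 0.7)  # B is A rotated by 0.7 radians, so the true minimum is 0
approx = min_hausdorff_under_rotation(A, B, eps=0.05)
print(approx)  # at most 0.05, by the grid argument above
```

The point of the sketch is the error analysis in the docstring: the grid spacing is chosen so that each step of the grid can change the objective by at most eps, which is the standard trick for turning a continuous minimization over a transformation group into a finite search.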
But while shape matching is a rich area of research within CompGeom, it's less clear what influence this research has had in the larger shape analysis environment. Some of the obstacles (some self-inflicted, some not) are:
Definitions involving 'max' terms.
Combinatorially, it's easier to define distance measures where the distance is governed by a max over some quantity (usually a max distance over pairs of points). It's easier to define the space of possible solutions, and make the requisite combinatorial arguments. But such measures are highly non-robust, since any one 'outlier' can cause the distance measure to report a large distance.
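The non-robustness is easy to see in a few lines. Here's a toy sketch (illustrative Python, names my own) comparing a max-based one-sided Hausdorff distance with a sum-based variant (here, an averaged nearest-neighbor distance); a single outlier dictates the former:

```python
import math
import random

def nn_dist(p, S):
    """Distance from point p to its nearest neighbor in the set S."""
    return min(math.dist(p, q) for q in S)

def max_measure(A, B):
    """One-sided Hausdorff: governed by the single worst-matched point."""
    return max(nn_dist(p, B) for p in A)

def sum_measure(A, B):
    """A 'sum' variant: average nearest-neighbor distance."""
    return sum(nn_dist(p, B) for p in A) / len(A)

random.seed(0)
A = [(random.random(), random.random()) for _ in range(100)]
B = [(x + 0.001, y + 0.001) for (x, y) in A]  # near-identical copy of A

A_out = A[:-1] + [(100.0, 100.0)]  # the same shape, plus one outlier

print(max_measure(A, B))      # tiny: the shapes match well
print(max_measure(A_out, B))  # ~140: one outlier dictates the answer
print(sum_measure(A_out, B))  # ~1.4: the outlier's effect is averaged out
```

A hundred points match almost perfectly, yet moving one of them far away blows up the max measure by five orders of magnitude, while the sum measure degrades gracefully; that's the robustness trade-off in miniature.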
This problem is usually fixed by introducing 'outlier-sensitive' variants of the measure under consideration, which leaves some combinatorial structure intact (at a price), or by replacing 'max' measures by 'sum' measures, which can often be inelegant, and usually destroys most of the algorithmic tools developed for the original case.
Reliance on the 'expensive exact, cheap approximate' framework.
This might require some explanation. Most of the above shape matching measures can be computed for two fixed shapes relatively easily. But if you have to compute them under transformation groups, things get hairy really quickly. Running times in the realm of n^7 or n^8 for point sets in three dimensions are not unusual.
What's worse though is the kind of impracticality involved in these algorithms (yes I know, n^7 doesn't need further beating, but still...). They usually involve finding intersections of surfaces defined by medium-to-high-degree polynomials, and then walking through the arrangements thus defined.
There's no way on God's green earth that anyone in their right mind will implement these algorithms, so we usually apply the standard bait-and-switch: "if you love these expensive exact methods, you're REALLY going to like our fast and tasty approximations!!". There have been some really creative tools designed to solve shape matching problems approximately, but what they often do is hide high complexity in terms involving the error epsilon, while looking well behaved in n.
It's difficult to overstate the depth and beauty that approximation methods bring to the field of computational geometry as a whole, and you should read Sariel's soon-to-be-book on this topic to learn more. But in the narrow realm of shape analysis, this two-step 'expensive exact, sort-of-cheap-approximation' has the following problems:
- Designing exact algorithms with humongous running times only confuses the people you might want to have use your algorithms. They'll just turn around and use some other measure. Coming along afterwards and saying, "but wait, we can APPROXIMATE IT" doesn't help, because no one's invested enough in the measure to even care at this point.
- Hiding expensive dependencies in the error term is problematic. Theoretically, it's a sound form of analysis - isolating the dependency from the dominant 'input size' term. But practically speaking, a bound that's cubic in 1/epsilon is not a whole lot better than a bound that's (say) quadratic in n, for reasonable values of epsilon. Which is to say, they're both terrible. You can of course protest that in practice things will work a lot better (and they often do!), but again, you've lost your audience, who wasn't really invested in the distance measure in the first place!
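To put toy numbers on that last point (the values are purely illustrative), a cubic dependence on 1/epsilon reaches a quadratic dependence on n at quite ordinary parameter settings:

```python
# Illustrative numbers only: a bound cubic in 1/eps versus a bound
# quadratic in n, at 'reasonable' parameter settings.
inv_eps = 20   # eps = 0.05, i.e. a 5% approximation error
n = 100        # a modest input size

print(inv_eps ** 3, n ** 2)  # 8000 10000 - the same ballpark
```

So the "it's only the epsilon term" defense buys very little once you actually want a small error.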
This is an odd kind of objection to level at theoretical research. But here's the problem as I see it. If your focus is the primitive 'compute distance between two shapes', then you use that as the platform and layer on things like 'minimize under transformations; find near neighbors', and so on.
The problem is that this approach focuses on the 'tree' (the distance measure) while in my mind missing the 'forest': namely, the larger set of problems of shape analysis, of which computing the distance measure is but one. I'll develop this idea as I talk about other approaches to shape analysis - the point I want to make is that you have to view the particular distance measure between shapes in the context of a large set of possible applications. A measure that looks nice and clean and well founded on its own may be less well suited for the rigors of supporting a class of analysis problems than a different measure that's maybe less elegant, but more flexible when seen in context.
I was discussing some of these issues with someone, and I think a clarification is in order.
I'm not telling people which distance measures they can or cannot study. There are all kinds of interesting geometric puzzles that have cropped up while studying shape matching (is there a subquadratic algorithm for the Fréchet distance?).
But when we look at the CompGeom shape matching literature through the lens of "is this how we should design the theoretical foundations of the larger shape analysis research program?", the above objections become more relevant.
In the next installment, I'll switch over to the line of shape analysis research that originated with David Kendall and others, and present a similar critique of that body of work.