Tuesday, August 26, 2008

The tensor product trick

Blogging has been slow this summer as I (surprise surprise) was actually working. Today the semester starts, so I expect to be blogging more as I attempt to clear my head from the detritus of semester minutiae.

Terry Tao has a new post up in the "tricks wiki" started by Timothy Gowers. The "trick" can be summarized concisely as: if you want to prove an inequality of the form $X \le Y$ but can only prove the lossy version $X \le CY$, then prove that lossy inequality for tensor powers of the objects, $X^{\otimes n} \le C\, Y^{\otimes n}$ with $C$ independent of $n$, take $n$th roots, and let $n \to \infty$ so the constant washes out.
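
Here's a bare-bones rendering of the skeleton (my notation, under the illustrative assumption that the quantities involved are multiplicative over tensor powers):

```latex
% The tensor power trick, schematically. Assume f and g are
% multiplicative over tensor powers: f(x^{\otimes n}) = f(x)^n,
% and likewise for g. If the lossy inequality holds uniformly in n,
\[
  f(x)^n = f\bigl(x^{\otimes n}\bigr)
         \le C \, g\bigl(x^{\otimes n}\bigr)
         = C \, g(x)^n ,
\]
% then taking n-th roots gives f(x) <= C^{1/n} g(x) for every n,
% and letting n go to infinity makes the constant evaporate:
\[
  f(x) \le \lim_{n \to \infty} C^{1/n} \, g(x) = g(x).
\]
```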

What's interesting to me is that this is none other than a generalization of the "standard" amplification trick most commonly used in hardness-of-approximation results. The easiest application is the proof that CLIQUE admits no absolute (additive-error) approximation unless P = NP: if it did, take graph powers, run the additive-error algorithm, and take roots; since the clique number is integral, rounding recovers the exact value. Generalizations exist: the same argument shows that CLIQUE admits no constant-factor approximation, and, more intricately (invoking the PCP theorem), that INDEPENDENT SET admits no $n^{1-\epsilon}$-factor approximation.
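
For concreteness, here's the CLIQUE version in symbols (a standard rendering of the argument, with my notation; the key fact is that the clique number multiplies under the strong graph product):

```latex
% The clique number is multiplicative under the strong product:
%   \omega(G \boxtimes H) = \omega(G)\,\omega(H).
% Suppose A approximates CLIQUE within additive error c:
%   \omega(G) - c \le A(G) \le \omega(G).
% Run A on the k-fold strong power of G:
\[
  \omega(G)^k - c \;\le\; A\bigl(G^{\boxtimes k}\bigr) \;\le\; \omega(G)^k ,
  \qquad\text{hence}\qquad
  \Bigl\lceil A\bigl(G^{\boxtimes k}\bigr)^{1/k} \Bigr\rceil = \omega(G)
\]
% as soon as \omega^k - (\omega - 1)^k > c, which holds for
% k = O(\log c) whenever \omega \ge 2 (the edgeless case is trivial).
% Integrality does the rounding for us, contradicting the
% NP-hardness of exact CLIQUE.
```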

It makes me wonder what other "standard" analysis tricks draw their lineage from generalizations in the world of "real math". For example, tricks involving building exponentially growing covers tend to show up frequently in topology and analysis.

Monday, August 18, 2008

$10 million for complexity theory...

Via His Quantum Holiness, news comes of the NSF Expeditions awards: each award is $10 million, and four were given out. One of them was for complexity theory, and the team is a star-studded list of complexity and algorithms folks, led by Sanjeev Arora. Here's the blurb:
In their Expedition to Understand, Cope with, and Benefit From Intractability, Sanjeev Arora and his collaborators at Princeton University, Rutgers University, New York University and the Institute for Advanced Study will attack some of the deepest and hardest problems in computer science, striving to bridge fundamental gaps in our understanding about the power and limits of efficient algorithms. Computational intractability, a concept that permeates science, mathematics and engineering, limits our ability to understand nature or to design systems. The PIs hope to better understand the boundary between the tractable and the intractable. This has the potential to revolutionize our understanding of algorithmic processes in a host of disciplines and to cast new light on fields such as quantum computing, secure cryptography and pseudorandomness. The research team plans to draw on ideas from diverse fields including algorithms, complexity, cryptography, analysis, geometry, combinatorics and quantum mechanics.
Congratulations! Both for getting the award, and for demonstrating that hard-core complexity theory CAN get funded...

Update: Sanjeev Arora comments...

Tuesday, August 05, 2008

Math != calculation, part 537...

From the NYT, on scoring the i-can't-believe-it's-not-a-sport sport of gymnastics:
The new system is heavy on math and employs two sets of judges, an A panel and a B panel, to do the computations. Two A-panel judges determine the difficulty and technical content of each routine. Six B-panel judges score routines for execution, artistry, composition and technique.

The A-panel judges’ scorecards start at zero, and points are added to give credit for requirements, individual skills and skills performed in succession.

The A panel counts only the gymnast’s 10 most difficult skills, which are ranked from easiest to most difficult (from A to G for women and from A to F for men). An A-level skill, like a back handspring in the floor exercise, is worth one-tenth of a point. The value increases by one-tenth of a point for each subsequent level, meaning a B-level skill is worth two-tenths and an F-level is worth six-tenths.

Required elements add a maximum of 2.5 points to the score. Extra points, either one-tenth or two-tenths, are given for stringing skills together.

Each judge adds the marks, then the two reach a consensus. Elite gymnasts usually have a difficulty score in the 6’s; the toughest routines generally have difficulty scores in the high 6’s or 7’s.

[...]
The system rewards difficulty. But the mistakes are also more costly.

Which is where the judges on the B panel come in. They rate the execution, artistry and technique of a routine, starting at a score of 10.0 and deducting for errors.

This score, called an execution score, is where the perfect 10.0 still exists. But reaching it is unlikely.

A slightly bent knee can be a deduction of one-tenth of a point. A more drastically bent knee can cost three-tenths. In this system, the deductions jump from one-tenth to three-tenths to five-tenths. A fall costs a gymnast eight-tenths. In the old system, a fall was a five-tenths deduction.

The highest and the lowest of the judges’ scores are thrown out. The remaining four scores are averaged to obtain the final B-panel score.

On the scoreboard, the final score appears in big numbers, just above the gymnast’s marks for difficulty and execution.

Apart from my grumble about the level of 'math', it's an interesting way of doing the scoring.
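
In fact, the entire computation fits in a few lines. Here's a quick sketch of the arithmetic as the article describes it (function names, inputs, and rounding details are mine, not the official FIG code of points):

```python
# A sketch of the gymnastics scoring arithmetic described above.

def a_panel_score(skill_values, requirement_credit, connection_bonus):
    """Difficulty score: sum the 10 hardest skills (0.1 per level:
    A = 0.1, B = 0.2, ...), plus required elements (capped at 2.5)
    and bonuses for stringing skills together."""
    top_ten = sorted(skill_values, reverse=True)[:10]
    return sum(top_ten) + min(requirement_credit, 2.5) + connection_bonus

def b_panel_score(deductions_per_judge):
    """Execution score: each of six judges starts from 10.0 and
    deducts for errors; drop the high and low marks and average
    the middle four."""
    marks = sorted(10.0 - d for d in deductions_per_judge)
    middle = marks[1:-1]
    return sum(middle) / len(middle)

# The final score is the sum of the two panels' marks.
difficulty = a_panel_score(
    skill_values=[0.6, 0.5, 0.5, 0.4, 0.4, 0.4, 0.3, 0.3, 0.2, 0.2, 0.1],
    requirement_credit=2.5,
    connection_bonus=0.2,
)  # -> 6.5, "in the 6's"
execution = b_panel_score([0.1, 0.3, 0.1, 0.8, 0.3, 0.1])  # -> 9.8
print(f"{difficulty + execution:.3f}")  # 16.300
```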

I wonder if this could work for conferences: replace 'degree of difficulty' by 'hardness of problem, general hotness of the area, etc', and then you could deduct points for things like
  • More than two digits after the decimal point in the approximation ratio
  • Exponent of the running time requires the \frac environment
  • More than two parameters in the running time
  • Gratuitous use of O() notation to hide dependencies you don't like (yes I'm talking to you, high dimensional clustering folk)
  • Requiring your winged pigs to be large in dimension, have extra glitter on the wing tips, and carry golden harps in order to make the horses take off (Hi there, complexity denizens)
