Monday, February 28, 2005

a different set of numbers...

350+ posts

87,000+ words

70,000+ visits

113,000+ page views.

....

and a tired yet satisfied blogger, 1 year old, and plugging away.

Ah, the good old days...

BoingBoing points us to a dictionary of swearing, in every language imaginable. Reading the Hindi section brings back memories of the good ol' days at IIT Kanpur: I can assure you that every phrase on the list could be heard almost every day. In fact, we spent the first month of freshman year receiving instruction in all possible variations on these insults (being IITans, we were quick learners ;))

[ brief pause while I am overcome with nostalgia...]

It should come as no surprise to the Indians who read this that the Punjabi section was one of the top swear picks. Yiddish also seems like a good choice, but I am surprised at the inclusion of French in the top picks list.


The Road to Reality

Roger Penrose's new book, The Road To Reality, is out. Peter Woit has a detailed review of the book (I am waiting for mine to show up), but in the meantime, consider this amusing quote from the NYT review:
The perfect reader for "The Road to Reality," I fantasized, would be someone comfortable traversing the highest planes of abstract reasoning, yet who had missed some of the most important landmarks of scientific history -- a being, perhaps, from another place and time.
I don't know. It sounds an awful lot like a mathematician to me.

Sunday, February 27, 2005

SODA Outtakes: Market Equilibria

I know, I know. By the time I am done with SODA outtakes, it will be time for SODA 2006. This is why we need more algorithms bloggers ! Machine learning blogs appear to be popping up faster than you can say 'expectation-maximization', but there is no corresponding growth spurt in algorithms. Sigh....

This outtake is much easier, because there is a well-written survey I can point to. The topic is market equilibria, or the problem of determining when equilibrium sets in as traders exchange goods at various prices, and how fast this can be determined.

More formally, the problem is: given an initial allocation of goods to traders (a "supply" vector, where each "dimension" represents a single commodity), and a "preference" function for each trader (I have a high affinity for gold and cookies, you have an aversion to silver, but really like beer), determine a set of prices for goods, and a "demand" vector for each trader so that if each trader sells what they have and buys what they want (at the given prices), their preference is maximized.

Here, the preference function component for any commodity is a concave function of its amount. Note that you want a concave function in order to model "diminishing returns". In addition, having strictly concave utilities helps ensure unique optimal solutions (if there were two distinct optima, their average would have strictly better utility, a contradiction).

There are also sanity conditions (you can only buy with whatever money you have from selling, and the total amount of goods is fixed - this does not model producer economies).
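To make the setup concrete, here is a toy sketch (mine, not from the survey) with two traders, two goods, and concave log-utilities; at given prices each trader's utility-maximizing demand has a closed form, and a crude price-adjustment loop looks for prices at which total demand matches total supply:

```python
# Toy exchange market: 2 traders, 2 goods, concave (log / Cobb-Douglas) utilities.
# All numbers are made up; this only illustrates the definitions in the post.
import numpy as np

endowment = np.array([[4.0, 1.0],   # trader 0's supply of (good A, good B)
                      [1.0, 5.0]])  # trader 1's supply
prefs = np.array([[3.0, 1.0],       # trader 0 mostly wants good A
                  [1.0, 2.0]])      # trader 1 mostly wants good B

def demands(prices):
    # For log-utilities the budget-constrained optimum has a closed form:
    # spend a fraction prefs[i][j] / sum_j prefs[i][j] of your budget on good j.
    budgets = endowment @ prices
    shares = prefs / prefs.sum(axis=1, keepdims=True)
    return shares * budgets[:, None] / prices[None, :]

def excess_demand(prices):
    # Zero in every coordinate exactly at a market equilibrium.
    return demands(prices).sum(axis=0) - endowment.sum(axis=0)

prices = np.array([0.5, 0.5])
for _ in range(5000):                # naive price adjustment: raise over-demanded prices
    prices = np.maximum(prices + 0.01 * excess_demand(prices), 1e-9)
    prices /= prices.sum()           # prices are only defined up to scale
print("approximate equilibrium prices:", prices)
print("excess demand:", excess_demand(prices))
```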

This SIGACT News survey by Codenotti, Pemmaraju and Varadarajan lays out the terrain of market equilibria very nicely, outlining the history, the techniques (some beautiful methods) and what is currently known.

It's a good example of an area that has been studied extensively (in economics) but where the computer science angle brings in questions that were not necessarily of concern earlier: issues of computational feasibility, efficiency, and approximation that are characteristic of the algorithmic way of approaching a problem.


Friday, February 25, 2005

Numb3rs Review: "Sabotage"

A nice thrilling episode, with very little math (but see below). As always, major spoiler alert.

Plot summary: A series of trains is derailed. At each site, a paper containing the same set of numbers is left behind. Everyone thinks it's a code, but it actually isn't (at least not in the traditional sense). Some good old sleuthing yields the answer.

It was funny when the NTSB official asked Charlie if he knows any cryptography. Although technically, isn't cryptanalysis what the code breakers do ? Larry the loony physicist had some hilarious quotes:
  • "Our evening was primal, in the carnal sense, not the mathematical."
  • "The math department, the least libidinous place on campus"
The last fact is sadly not too far from the truth, but wait ! Mathematicians are cool (via Mathpuzzle)

The only real math nuggets came at the end, when Charlie was trying to explain to the female FBI agent that math appears in nature. His example: The Fibonacci series ! Bzzttt...

As it turns out, Rudbeckia Hirta just today posted a note about the exaggerated claims concerning the Fibonacci series: that the Greeks referred to the "golden ratio" (no), that the pyramids and the Parthenon somehow magically encode this ratio (only if you round off heavily), and even that the ratio is supposed to be aesthetically pleasing (no evidence). Keith Devlin has more on this.

Oh well: if Dan Brown can do it...

p.s I understand that we need some kind of romantic interest to give the series some spice (although CSI and Law and Order manage perfectly well without it), but why on earth are they setting up an advisor-student relationship ? Allegedly at some distant point in the past this used to be fairly common in the humanities, but nowadays ? My only question is: how long before this becomes a "Big Issue" in the academic blogosphere ?

Meta: recent comments

I added a new feature so you can see the recent comments posted on the site (this is handy if you posted a note and are looking to see if there was a response). It's on the left, below the "Previous Posts" section.

Comments were broken for a while because I forgot to update the hack that enables comments in the first place. They should be working again.

SODA Outtakes: Lloyd's algorithm and analysing heuristics

Uri Feige's invited talk at SODA 05 was on 'Rigorous Analysis of Heuristics for NP-hard Problems'. His main argument (at least in the initial portion of his talk) was that it's a good idea to analyze known heuristics for problems, because they work well in practice, and a rigorous analysis will shed light on why.

[Aside: in the theoryCS community, a "heuristic" usually refers to a procedure that has no provable performance guarantees; this could mean either no clear running time analysis, or no clear quality of optimization guarantee. The term "algorithm" is usually reserved for a procedure that has well-defined steps and ends in a well-defined state after a definite (finite) number of steps. One will occasionally find the term 'algorithm' used to refer to a heuristic in non-theory areas; beware !]

Doing such analysis, although maybe not as "sexy" as coming up with a new algorithm, is also important for "sociological" reasons. Although it might technically be correct to argue that a particular heuristic is not reliable and thus should not be used, the resolute theoretician is often faced with a mountain of empirical evidence in support of said heuristic, compared with the possibly complex and hard-to-understand algorithm that she proposes.

Some of the best known heuristics show up in clustering, especially since this is a problem that transcends many different application domains, including areas that are not as familiar with theoretical algorithmic methods. One of the best known methods for clustering with k centers is Lloyd's algorithm (or the k-means algorithm), which works essentially as a discrete variant of expectation-maximization. The algorithm itself is very simple:
  1. Choose k "centers"
  2. Assign each point to its nearest center.
  3. Compute the k centroids of points assigned to a given center. These are the new k "centers".
  4. Go back to step 2 (and repeat until the assignments no longer change).
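A minimal NumPy sketch of these steps (my own illustration, on made-up data):

```python
import numpy as np

def lloyd(points, k, rng):
    # Step 1: choose k initial "centers" (here, k random input points).
    centers = points[rng.choice(len(points), size=k, replace=False)]
    while True:
        # Step 2: assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: the new centers are the centroids of each center's points.
        new_centers = np.array([points[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        # Step 4: repeat until nothing moves any more.
        if np.allclose(new_centers, centers):
            return centers, labels
        centers = new_centers

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 2))      # made-up data
centers, labels = lloyd(pts, k=3, rng=rng)
```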
It is known that, in practice, Lloyd's algorithm hits a local minimum fairly quickly (though not a global one: k-means is NP-hard). However, until recently there was no analysis of how quickly this happens. In a SODA 2005 paper, Har-Peled and Sadri examine the complexity of k-means. They show that:
  1. In 1D, and in d-D (on the grid), the number of iterations of the k-means algorithm is polynomial in the number of points, the dimension, and the "spread" (the ratio of max distance to min-distance).
  2. Even in 1D, there are example point sets on which the k-means algorithm takes a linear number of iterations to converge.
What they also show is that slight modifications to the k-means heuristic result in algorithms that converge in time independent of the dimension (and with much better upper bounds). The simpler of their two schemes can be described as follows: in Step 2 of the original approach, merely move one "misclassified" point to its true cluster, rather than all.
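As I read it, this single-swap variant looks roughly like the following sketch (a guess at the scheme based on the description above, not their code; see the paper for the precise version and its analysis):

```python
import numpy as np

def single_swap_kmeans(points, labels, k):
    # Start from an arbitrary partition into k non-empty clusters.
    centers = np.array([points[labels == c].mean(axis=0) for c in range(k)])
    while True:
        nearest = np.linalg.norm(points[:, None, :] - centers[None, :, :],
                                 axis=2).argmin(axis=1)
        misclassified = np.flatnonzero(nearest != labels)
        if len(misclassified) == 0:
            return centers, labels
        i = misclassified[0]                 # reassign just this one point...
        old, new = labels[i], nearest[i]
        labels[i] = new
        for c in (old, new):                 # ...and update only the two affected centroids
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 2))
init = rng.integers(0, 3, size=200)          # arbitrary starting partition
centers, labels = single_swap_kmeans(pts, init, k=3)
```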

They go on to present empirical evidence in support of their claim that these algorithms generate similar quality answers (in similar time) to k-means. This part is still a work in progress: there are very fast implementations of k-means that they don't compare against (though the underlying data structures might be used to speed up their approach as well).

It is not clear to me that their paper answers to any degree the question of why Lloyd's algorithm works as well as it appears to. However, the fact that they can prove bounds for their alternate algorithms suggests that maybe this is a line of attack to take when analyzing Lloyd's method. These alternates are also quite simple, and so could conceivably be used as k-means replacements if one wanted guaranteed running time bounds.

One final point: the authors have their code available online. Would that more did the same !

Tuesday, February 22, 2005

CiteULike

Yaroslav Bulatov, who runs one of the new machine learning blogs, points us to CiteULike, a very intriguing site. Some of you may have used web-based bookmark sites like Furl or del.icio.us, and may have even used the "social bookmarking" aspect of these sites.

CiteULike is a similar site, for archiving papers. Here's how it works. If you find a paper you are interested in, you click on a bookmark link, and the paper is added to a web-based collection under your name. However, what makes this unique is that if the paper link is on a "well known" site like the ACM DL, Citeseer, the arxiv, or many other places, bibliographic information is automatically added to the entry (the server parses the page and extracts the necessary bibliographic information).

Subsequently, when you want to extract bib entries for all these papers, one click gets you a .bib file for all the papers you want. This is very handy (for example) when you are doing a literature search and reading/downloading papers by the dozen.

You can also associate tags with the entries if you choose to. This gets into the "social bookmarking" aspect of CiteULike: the idea being that if you tag a bunch of papers as being related to "artgallery", and so does someone else, you will learn what the other person has found and vice versa. Personally, I am either too old-fashioned or too clueless to have made effective use of this feature, but the archiving/.bib features above are enough for me to try it out.

Like any respectable web-based service, there are RSS feeds for any view of the collection (by tag, by person, by personal tag), and there are also watch lists, so if you want to keep track of when a given page (tag or person) gets a new paper, you can do so.

Here's my paper feed. It will be an interesting experiment.

If you think your committee is rough on you...

From Jim Holt's New Yorker article on Einstein and Gödel:
Einstein’s conclusions were the product of pure thought, proceeding from the most austere assumptions about nature. In the century since he derived them, they have been precisely confirmed by experiment after experiment. Yet his June, 1905, paper on relativity was rejected when he submitted it as a dissertation. (He then submitted his April paper, on the size of atoms, which he thought would be less likely to startle the examiners; they accepted it only after he added one sentence to meet the length threshold.)
The article dwells at length on Gödel's interpretation of time (something I had mentioned a while ago). Read the whole thing: it's fascinating.

(HT: Libertarianoid)


Monday, February 21, 2005

A great new resource for molecular modelling

From WorldChanging:
ZINC -- which, in the free software tradition of recursive acronyms, stands for ZINC Is Not Commercial -- is a free database of compounds for "virtual screening." That is, ZINC provides 3D models of chemical compounds in a standard "docking" format used in chemistry and biochemistry software, allowing researchers to assemble and test new chemical compositions on their computers.
This is really great. When I was doing my Ph.D, I mucked around with molecular structures for docking and matching purposes, and I had access only to structures that our collaborators at Pfizer provided us. Having such a large database to play with will really help geometry folks who do molecular modelling, not to mention all the medical researchers who probably already use this.

Things we take for granted..

Richard Hofstadter's classic book Anti-intellectualism in American Life is a fascinating sweep through American history that provides a good answer for those who wail 'How could someone as (apparently) stupid as Bush get elected ?'. Another of his books, The Development of Academic Freedom in the United States (written with Walter P. Metzger), should be at the top of my reading list, based solely on this review by Scott McLemee. An excerpt from the review (emphasis mine):

Condensing 500 pages into five paragraphs is a fool's errand, but here goes anyway.

The belief that only the community of scholars has the final authority to determine what counts as valid research or permissible speech has deep roots in the history of the university, going all the way back to its origins in medieval Europe. But it was extremely slow to develop in colonial and antebellum America, which had few institutions of higher learning that were anything but outgrowths of religious denominations.

How we take certain things for granted...

Saturday, February 19, 2005

Numb3rs Review: "Prime Suspect"

This was the most mathematically oriented episode of all: can't get much better than the Riemann Hypothesis ! Scott Aaronson has a review as well: go and read that first !

I agree with much of what Scott says. I think the problem is a little different though: some of the random phrases attributed to mathematicians (what mathematician refers to an open problem as a "math problem" ?) actually make sense if a non-mathematician uses them, and when Don (the FBI agent) uses them, it doesn't sound incongruous at all. It's just that the writers need to switch gears a little when writing dialogue for the mathematical characters.

They've done a good job getting a sense of the mathematics; just less of a sense of the mathematician.

Plot Summary: Mathematician claims to have proof of Riemann Hypothesis. Hackers kidnap daughter, demanding proof (and algorithm), which they will then use to crack cryptography.

Major beef: there is no direct link between a solution to the Riemann Hypothesis and factoring. It is undoubtedly true that a solution will shed a lot of light on the structure of primes, but it's not the kind of immediate corollary that is commonly implied in the popular press (take this, for example).

But credit goes where credit is due: I was wondering how they would make the conceptual jump from a theorem to an actual algorithm (assuming said connection). At least they tried to make it credible, with Charlie pointing out that an algorithm could take years to develop.

A point that I didn't really consider, but both Scott and my wife noticed, was that by using the Riemann Hypothesis, the plot line introduced the idea of a problem that has remained unsolved for 150 years, and that it's good for people to realize that "math problems" can take a while to solve. In fact, this point is reinforced when Don expects Charlie to solve it in a few hours :).

Trivia note: the mathematician father was played by Neil Patrick Harris, the "original" child prodigy from Doogie Howser, M.D.

Tuesday, February 15, 2005

Erik Demaine and Origami

The NYT has a fabulous article on Erik Demaine and origami. It is worth noting that
  • The article has no quotes that make him look like an absent-minded professor.
  • It does a beautiful job of capturing both the art and the science of origami.
  • It has no real clangers in terms of the mathematical descriptions.

SoCG 2005 Results

Are out: 41/140 papers in (no I didn't submit anything).

The entire list is here.

Monday, February 14, 2005

"Laws of Chance"

One would think I would let this pass without comment, but oh well...

A recent /. post links to an article on RedNova News about a "Random Event Generator" that purports to predict world events:
One of these new technologies was a humble-looking black box known as a Random Event Generator (REG). This used computer technology to generate two numbers - a one and a zero - in a totally random sequence, rather like an electronic coin-flipper.
Now it's all very well to generate randomness from a computer, and in fact there have been suggestions that certain kinds of electrical fluctuations on a chip can be recorded and used for randomness generation. But to claim that this is "totally random" by virtue of using the magical "computer technology" is a bit of a stretch.

But wait, it gets worse:

The pattern of ones and noughts - 'heads' and 'tails' as it were - could then be printed out as a graph. The laws of chance dictate that the generators should churn out equal numbers of ones and zeros - which would be represented by a nearly flat line on the graph. Any deviation from this equal number shows up as a gently rising curve.

I presume they mean that they plot |H-T| against n. But as any book on probability will tell you, you do expect to see random fluctuations (this is after all a binomial process), and the standard deviation is around sqrt(n), so bumps are to be expected: a flat line is what would be suspicious.
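A quick simulation of the point (parameters made up):

```python
import math, random

n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
# |H - T| is typically on the order of sqrt(n), not zero.
print("|H - T| =", abs(2 * heads - n), "   sqrt(n) =", round(math.sqrt(n)))
```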

During the late 1970s, Prof Jahn decided to investigate whether the power of human thought alone could interfere in some way with the machine's usual readings. He hauled strangers off the street and asked them to concentrate their minds on his number generator. In effect, he was asking them to try to make it flip more heads than tails.

It was a preposterous idea at the time. The results, however, were stunning and have never been satisfactorily explained.

Well it sounds like the results haven't been stated either. What exactly were these "stunning" results ? Ah, here we go:

Again and again, entirely ordinary people proved that their minds could influence the machine and produce significant fluctuations on the graph, 'forcing it' to produce unequal numbers of 'heads' or 'tails'.

According to all of the known laws of science, this should not have happened - but it did. And it kept on happening.

"All the known laws of science" can't explain an unequal number of heads and tails. In other words, if I just managed to get a streak of 10 heads with a coin, I'm psychic ?

Aieee......

Saturday, February 12, 2005

Theory folks in the press

The NYT has an article about spamblocking in which the 1992 work of Dwork and Naor on junk email is cited:
In a paper titled "Pricing Via Processing, or Combating Junk Mail ," two computer scientists, Cynthia Dwork and Moni Naor, came up with a way to force a sender to pay every time a message was sent - payment not in money, but in time, by applying the computer's resources to a computational puzzle, devised on the fly for that particular message.

[...snip...]

Ms. Dwork and her colleagues have named this the Penny Black Project.
This is remarkably prescient: in 1992 I was grateful to have access to email, let alone worry about junk mail. On an irrelevant note, I am impressed that the NYT has started providing real links in their articles.
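To get a feel for the "payment in computation" idea, here is a toy hashcash-style stamp in Python (this illustrates the general flavour only; it is not the Dwork-Naor construction):

```python
import hashlib
from itertools import count

def mint(message: str, bits: int = 20) -> int:
    # Sender "pays": find a nonce whose hash with the message falls below a target,
    # i.e. has `bits` leading zero bits. Expected cost: about 2**bits hash evaluations.
    target = 1 << (256 - bits)
    for nonce in count():
        h = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce

def verify(message: str, nonce: int, bits: int = 20) -> bool:
    # Receiver checks the stamp with a single hash evaluation.
    h = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(h, "big") < (1 << (256 - bits))

stamp = mint("hi, want to buy a watch?")          # slow for the (spamming) sender
print(verify("hi, want to buy a watch?", stamp))  # fast for the receiver
```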


SODA Outtakes: Approximation algorithms for metric embeddings.

One of the most exciting areas of algorithms in the past ten years has been the study of metric embeddings. Given two metric spaces, can we "embed" one in the other (which basically means construct a 1-1 map from points in the first to points in the second) so that distances are roughly preserved ? The seminal paper of Linial, London and Rabinovich demonstrated how to use a low-distortion embedding of a metric space into l1 to obtain a good approximation algorithm for the sparsest cut problem (the distortion of the embedding is the approximation ratio achieved).
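To fix the terminology: the distortion of a map between two finite metric spaces is (max expansion) x (max contraction) over all pairs of points. A tiny sketch of the definition on made-up data (mine, purely for illustration):

```python
import itertools
import numpy as np

def distortion(D_src, D_dst, f):
    # Smallest c such that, up to rescaling, all pairwise distances are preserved
    # within a factor c; equivalently, (max stretch) * (max shrinkage).
    ratios = [D_dst[f[i], f[j]] / D_src[i, j]
              for i, j in itertools.combinations(range(len(D_src)), 2)]
    return max(ratios) * max(1.0 / r for r in ratios)

# Example: the 4-cycle (shortest-path metric) mapped onto the points 0,1,2,3 on the line.
D_cycle = np.array([[0, 1, 2, 1],
                    [1, 0, 1, 2],
                    [2, 1, 0, 1],
                    [1, 2, 1, 0]], dtype=float)
line = np.array([0.0, 1.0, 2.0, 3.0])
D_line = np.abs(line[:, None] - line[None, :])
print(distortion(D_cycle, D_line, f=[0, 1, 2, 3]))   # 3.0: the cycle edge (0,3) gets stretched
```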

This set off a flurry of research into questions of the form "What is the best distortion for embedding spaces of type A into spaces of type B". There is no way I can survey all the results in this area; a new CRC book chapter by Indyk and Matoušek should be able to cover most of the terrain, and the excellent book by Deza and Laurent has much of the technical background. Everything started (in a sense) from the result by Bourgain that any metric space can be embedded into l1 with distortion O(log n) (albeit in O(log^2 n) dimensions).

Other questions that were studied along the way:
  • what is the minimum distortion that one must face when embedding A into B (metrics derived from expanders proved to be useful gadgets in this regard)
  • Can one reduce the dimensionality of a space without distorting distances too much ? This question has numerous implications for all kinds of problems in clustering, data mining, database search, you name it...
  • Are there general subclasses of metrics that have low-distortion embeddings ? A fundamental open question here involves metrics on planar graphs. It is still open as to whether they can be embedded in l1 with constant distortion (again, this has implications for sparsest cut in planar graphs)
  • Metric "Ramsey theory": if I can't embed the entire metric space, can I embed a large fraction of the points/edges with smaller distortion ? Yair Bartal just gave a nice talk at AT&T on partial embeddings (the "edge" variant).
A set of questions that is now getting more attention is the "natural" approximation angle. Although we know that any metric can be embedded in an lp space with distortion O(log n), what about a particular metric given to us ? Might one not be able to do better ? And can we come up with an algorithm to determine the BEST distortion for this particular metric ?

It was shown early on that one can find the optimal distortion into l2 (up to additive error) in polynomial time using semi-definite programming. For l1, the corresponding question is NP-hard. What I have been seeing is a slew of interesting recent papers that explore hardness results and approximation algorithms for embedding problems:
  • Kenyon/Rabani/Sinclair (STOC 2004): They consider bijections between two n-point metric spaces, and present a constant factor approximation in the case when the metrics are derived from a line (as long as OPT is small).
  • Papadimitriou/Safra (SODA 2005): They show that the minimum distortion bijection between two 3D point sets is hard to approximate to within a factor of 3.
  • Badoiu, Dhamdhere, Gupta, Rabinovich, Raecke, Ravi, and Sidiropoulos (SODA 2005): A collection of results on embedding metrics in 1D and 2D. One main result is an O(sqrt{n})-approximation for embedding a metric on the line.
  • Badoiu, Chuzhoy, Indyk, Sidiropoulos (STOC 2005): I don't have a copy of the paper, but the title is indicative enough: "Low-Distortion Embeddings of General Metrics Into the Line"
This is not to say that these problems have not been studied earlier. As far back as 1993 for example, Farach-Colton, Kannan and Warnow had looked at embedding a metric into an ultrametric. However, the current focus on such problems is an interesting new direction in the theory of metric embeddings.



p.s usual disclaimers apply: this is a blog post, not a survey, and hence is opinionated and far from comprehensive. Pointers to relevant add-ons are welcome.

Turing Train Terminal

When I was an undergrad, we were once given an assignment to write out a universal Turing machine as a state diagram. I have heard of others being asked to program one (using gturing?).

This is by far a more enjoyable variant.



Via BoingBoing.

Friday, February 11, 2005

Numb3rs review: "Structural Corruption"

Bill Gasarch has a second review of the P vs NP episode over at Lance's place. He speculates as to what the characters might have meant by saying that P vs NP is "unsolvable". My favorite:
Charlie was using proofs that naturalize, which won't work because of the results of Razborov and Rudich (Judd Hirsch: So use unnatural techniques)
Last night's episode dealt with calculations involving the structural integrity of a building. Not much actual math shown in this episode, and there were some curious points:
  • Charlie is a "professor of applied mathematics", and yet in the first episode the crazy string theorist wants his help with some calculations. I guess string theory could be viewed as applied math in some circles...
  • When trying to figure out why the dead student had come to him for help with calculations (is he a math prof or a human calculator?), Charlie says "We can only infer that it had something to do with math". Uh, yeah......
  • Charlie, "professor of applied math" and string theorist par excellence, can also mock up a structural integrity demo, with GUI, 3D graphics and all, in one night. Sheesh; give him a job at Google, people...
There was one sequence that sounded like a reference to Benford's Law. Charlie figures out that worker names and union IDs have been manufactured by detecting that the numbers on the IDs follow suspicious patterns. This is actually plausible, in that Benford's law has been used to detect fraud and fake accounting.
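For the curious, here is a toy version of such a first-digit test (entirely made-up data, nothing to do with the episode): quantities that grow multiplicatively tend to follow Benford's log10(1 + 1/d) law for their leading digit, while uniformly manufactured ID numbers do not.

```python
import math, random

def leading_digit_freqs(values):
    counts = [0] * 9
    for v in values:
        counts[int(str(v).lstrip("0.")[0]) - 1] += 1
    return [c / len(values) for c in counts]

random.seed(0)
natural = [10 ** random.uniform(0, 6) for _ in range(100_000)]   # spans 6 orders of magnitude
fake = [random.randint(10_000, 99_999) for _ in range(100_000)]  # uniformly "manufactured" IDs

nat_f, fake_f = leading_digit_freqs(natural), leading_digit_freqs(fake)
print("digit  benford  natural  fake")
for d in range(1, 10):
    print(f"  {d}     {math.log10(1 + 1/d):.3f}    {nat_f[d-1]:.3f}   {fake_f[d-1]:.3f}")
```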

Thursday, February 10, 2005

Football and Dynamic Programming.

What's not to like !

On Ph.D Theses

When I was in grad school, it was the conventional "wisdom" that the people who read your Ph.D thesis consist of
  • your advisor
  • your committee
  • your friends (to see if they've been acknowledged)
  • your parents (who don't necessarily read it so much as keep it as a trophy/proof that you can now get a "real job").
But the truth is that Ph.D theses can serve a valuable purpose. Over the years, I have come to appreciate a well written thesis in an area that I am new to. Not necessarily because of the results; those I can often dig up from journal papers. What I find most useful is the introduction, because in a well written thesis, the introduction lays out the terrain of the problem: what is known, and what is not. What is folklore and what results follow trivially. What papers are key to understanding the evolution of the area, and what their contributions are. Basically a lot of what a good survey has to offer, but more expansive, more leisurely, and often more useful.

So if there are any graduate students reading this, my recommendation to them is: take the time to expound on your area in an introduction. You are (or should be) an expert on this topic by now, and your voice is authoritative enough, even if you may not feel that way. The reward, of having someone read your work and appreciate it, is something that can be surprisingly elusive.

Using the GPU

GPU (Graphics card) programming is one of my pet projects, and it has been interesting to see the gradual harnessing of the GPU as a fast streaming coprocessor. A new direction is an even tighter coupling with a computer's underlying GUI and window management system.
  • OS X: "Quartz Extreme uses a supported graphics card built into your Mac to relieve the main PowerPC chip of on screen calculations. This dramatically improves system performance, making Panther much more responsive."
  • Longhorn: "Longhorn will feature a graphics subset called WGF (Windows Graphic Foundation). Its goal is to unify 2D and 3D graphics operation in one and will bring 2D Windows drawing and 3D operations together. Nowadays, 3D is done using a Direct X subset with the current version 9.0c. Longhorn will also use 3D menus and 3D interfaces and will require at least Shader 2.0 compliant cards, so it will make the graphics card an important part of your PC."
  • And late to the party, but getting there, Unix/X (via Thane Plambeck): " David Reveman, who became a Novell employee a couple of weeks ago, has been writing a new X server on OpenGL/Glitz called Xgl. Because Xgl is built on GL primitives it naturally gets the benefit of hardware acceleration. For example, window contents get rendered directly into textures (actually they get copied once in video memory for now), and so you get the benefit of the 3d hardware doing the compositing when you move semi-opaque windows or regions around."
Graphics cards will get even cheaper, since everyone will need one. Intel will probably put even more effort into the little joke they call their onboard graphics accelerator. Interesting times....

Wednesday, February 09, 2005

Heal thyself ?

Adam Buchsbaum observes that although Google Maps is the coolest thing since, I don't know, Google Scholar, there is one thing missing: if you type an address in the regular Google toolbar, it asks you if you want to plot a map using Mapquest or Yahoo !, but not Google Maps.

C'mon guys, you can do better than that.

p.s take the tour. Try searching for 'free wifi 19130': excellent !!

Monday, February 07, 2005

"To chop a tea kettle"

A while back, Lance and Ernie and I were discussing the relative lack of questions at theory talks (at conferences especially). Having just returned from SODA, I can confirm that questions were few and far between at talks (although the odd talk here and there generated fairly animated discussions).

Scott McLemee at Inside Higher Ed has an interesting column on modern academic interactions, and how they relate to the blogosphere. The article is worth reading in its entirety, (as is the Crooked Timber note that pointed me there), but I couldn't resist excerpting the following, on discussions at conferences:

At conferences, scholars would stand up and read their papers, one by one. Then the audience would "ask questions," as the exercise is formally called. What that often meant, in practice, was people standing up to deliver short lectures on the papers they would have liked to have heard, instead -- and presumably would have delivered, had they been invited.

Hypothetically, if everyone on a panel read one another's papers beforehand, they might be able to get some lively cross-talk going. This does happen in some of the social sciences, but it seems never to occur among humanities scholars. The whole process seems curiously formal, and utterly divorced from any intent to communicate. A routine exercise, or rather perhaps an exercise in routinism. A process streamlined into grim efficiency, yielding one more line on the scholar's vita.

[...snip...]

The inner dynamic of these gatherings is peculiar, but not especially difficult to understand. They are extremely well-regulated versions of what Erving Goffman called "face work" -- an "interaction ritual" through which people lay claim to a given social identity. Thanks to the steady and perhaps irreversible drive to "professionalization," the obligation to perform that ritual now comes very early in a scholar's career.

And so the implicit content of many a conference paper is not, as one might think, "Here is my research." Rather, it is: "Here am I, qualified and capable, performing this role, which all of us here share, and none of us want to question too closely. So let's get it over with, then go out for a drink afterwards."

And the title of my post ? Read the whole article...

Damn !

Unlike some, I prefer to watch the game

Friday, February 04, 2005

Numb3rs Review: "Vector"

This episode had very little math, except a reference to a standard model of disease spread called an SIR (susceptible-infected-recovered) model.
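(For the curious, a bare-bones sketch of what such a model tracks, with made-up parameters and nothing to do with the episode's plot: three population fractions evolving as infections and recoveries occur.)

```python
beta, gamma = 0.3, 0.1         # made-up infection and recovery rates (R0 = beta/gamma = 3)
S, I, R = 0.99, 0.01, 0.0      # fractions of the population; S + I + R = 1
for day in range(200):
    if day % 40 == 0:
        print(f"day {day:3d}: S={S:.3f} I={I:.3f} R={R:.3f}")
    new_infections = beta * S * I          # contacts between susceptible and infected
    new_recoveries = gamma * I
    S, I, R = S - new_infections, I + new_infections - new_recoveries, R + new_recoveries
```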

I have to say that the writers spent a lot more time getting the math details right than getting the biology right. Why on earth would you compare geometric structures of DNA, when you could just look at the sequence ? My wife the biologist had two comments on this, her first episode.
  • "This is not a TV show, it's a horror movie"
  • "I've been beaten senseless"
(as an aside, she refused to watch Jurassic Park when it first came out). She also assures me that the pictures of the DNA model are wrong (the alleged differences would be on the inside of the helix, not the outside). They did mumble something about RNA, but RNA wouldn't look like the pictures they had on screen.

There was an amusing incident with the eccentric string theorist where he forgot which way he was headed. I could be wrong, but I think this refers to an anecdote involving Hermann Weyl at Princeton.

There was more Numb3rs PR in the math press. Keith Devlin (the Math Guy) had an interesting article on Numb3rs at MAA Online. He does an occasional segment on Weekend Edition with Scott Simon. His interviews make for interesting listening (a recent one was on the continuum hypothesis).

And the wrath of God thunders down upon the unbelievers

Ernst Mayr, "leading evolutionary biologist of the 20th century", died today. From the NYT obit:

He was known as an architect of the evolutionary or modern synthesis, an intellectual watershed when modern evolutionary biology was born. The synthesis, which has been described by Dr. Stephen Jay Gould of Harvard as "one of the half-dozen major scientific achievements in our century," revived Darwin's theories of evolution and reconciled them with new findings in laboratory genetics and in field work on animal populations and diversity.


Polynomial hierarchy collapse

Another thing that puzzled me about the Unknotting result was the corollary:

If Unknotting is NP-complete, then PH collapses to the second level.

As it turns out, this is a trivial corollary of an older result by Boppana, Håstad and Zachos that if co-NP has short interactive proofs, then PH collapses. Specifically, if Unknotting is NP-complete, then Knotting is co-NP-complete, and since it is in AM = IP(2), this implies that co-NP is in IP(2), and everything comes crashing down.

It would be nice if there was some way to augment the Complexity Zoo to make such results easier to find.

Unknotting (proof translation)

Jeff Erickson kindly provides a translation in the comments to my previous post. I reproduce the full comment here, lightly edited for links and with minor corrections/additions marked in italics:

The paper relies on a construction (due to Haken?) of a triangulated 3-dimensional manifold M(K) from a given knot K, such that M(K) is homeomorphic to the 3-sphere if and only if K is unknotted. Specifically, M(K) consists of two copies of S3-T(K), glued along their boundaries (which are toruses), where T(K) is a tubular neighborhood of the knot K.

So one algorithm for deciding whether a knot K is trivial is to construct M(K) and check whether M(K) is the 3-sphere. The 3-sphere recognition problem is in NP. Hass et al. used a slightly different technique to prove that UNKNOTTING is in NP, but both NP-proofs rely on something called "normal surface theory" introduced by Kneser in 1929 and used by Haken in the early 1960s to boil everything down to integer programming(!). (For a brief outline of normal surface theory, read the introduction of Benjamin Burton's Ph.D thesis)

Hara, Tani, and Yamamoto describe an interactive proof protocol for KNOTTING based on this characterization, similar to the interactive protocol for graph non-isomorphism. The prover is trying to convince the verifier that her knot is nontrivial. The verifier constructs two triangulated 3-manifolds—the manifold M(K) described above, and a particular triangulation S(K) of the 3-sphere. After randomly permuting the vertex labels, the verifier presents the prover with one of the two triangulations, chosen uniformly at random, without telling the prover which it is. The prover then (supposedly) tells the verifier whether his given manifold is the 3-sphere.

(1) If the prover says that S(K) is the 3-sphere, the verifier learns nothing.

(2) If the prover says that S(K) is NOT the 3-sphere, the verifier knows he's lying and assumes the knot is trivial.

(3) If the prover says that M(K) is the 3-sphere, the verifier assumes the knot is trivial.

(4) If the prover says that M(K) is NOT the 3-sphere, the verifier assumes the knot is nontrivial.

If the knot is trivial, then no prover can convince this verifier with probability better than 1/2. If the knot is nontrivial, an honest prover can convince this verifier with probability 1/2. Running this protocol twice inflates the probabilities to 1/4 and 3/4.

END PROOF

As an aside, the original two-round interactive proof for graph nonisomorphism is a beautiful example of the power of interactive proofs. It was first demonstrated by Goldreich, Micali and Wigderson in 1986, and I reproduce the proof here in full (for definitions, refer to the original paper).

1. Given two graphs G1, G2 on n vertices, the verifier randomly picks i ∈ {1,2} and a random permutation π of [1..n].
2. Verifier then sends π(Gi) to prover
3. Prover returns j ∈ {1,2}, and verifier accepts iff j = i.

Proof:
If G1 and G2 are nonisomorphic, the prover can figure out which is which by comparing the graph sent by the verifier to G1 and G2, and thus will always return the correct answer.

If G1 and G2 are isomorphic, the prover cannot tell the difference, and thus can "fool" the verifier with probability at most 1/2.
QED.
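Since the protocol is so short, here is a toy simulation of it (my own illustration: brute-force isomorphism testing stands in for the all-powerful prover, on tiny made-up graphs):

```python
import random
from itertools import permutations

def relabel(edges, perm):
    return frozenset(frozenset((perm[u], perm[v])) for u, v in edges)

def isomorphic(E1, E2, n):
    # Brute force over all vertex relabellings; fine for tiny graphs.
    return any(relabel(E1, p) == E2 for p in permutations(range(n)))

def one_round(G1, G2, n):
    i = random.choice((1, 2))                      # verifier's secret coin flip
    perm = list(range(n)); random.shuffle(perm)
    H = relabel(G1 if i == 1 else G2, perm)        # verifier sends pi(G_i)
    j = 1 if isomorphic(G1, H, n) else 2           # prover's best guess at i
    return j == i                                  # verifier accepts iff j = i

n = 4
path = frozenset(frozenset(e) for e in [(0, 1), (1, 2), (2, 3)])   # a path on 4 vertices
star = frozenset(frozenset(e) for e in [(0, 1), (0, 2), (0, 3)])   # a star: not isomorphic
path2 = relabel(path, [2, 0, 3, 1])                                # an isomorphic copy of the path

trials = 2000
print(sum(one_round(path, star, n) for _ in range(trials)) / trials)   # ~1.0: prover always right
print(sum(one_round(path, path2, n) for _ in range(trials)) / trials)  # ~0.5: can only guess
```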

Thursday, February 03, 2005

Unknotting...

With this I hope to start a series of posts on papers/ideas that I found interesting at SODA 2005. One of the side effects of no wireless access at SODA was that I couldn't comment on talks as they happened, but it (hopefully) meant that I had time to reflect on what I found interesting.

Exhibit 1, in no particular order, is the following result, by Hara, Tani and Yamamoto:

Unknotting is in AM ∩ co-AM

Unknotting is one of the more fundamental problems in knot theory. If we think of a (tame) knot as a (really tangled) loop in three dimensions, then much of simple knot theory focuses on the question: Given two knots, are they equivalent ? (where equivalence is expressed in terms of homeomorphisms).

The trivial knot (or the "unknot") is the simple circle in 3D, and thus the unknotting problem is:

UNKNOTTING: Given a knot, is it equivalent to the unknot ?

It was first shown in 1961 by Haken that Unknotting was decidable. After a brief gap of 36 years, Hass, Lagarias and Pippenger showed that Unknotting was in fact in NP (!), and conjectured that it was in NP ∩ co-NP. Other results along the way showed that Genus (is the genus of a knot less than g) was in PSPACE, and ManifoldGenus (a generalization to an arbitrary given manifold) was NP-Complete.

The new result speaks for itself. Formally, the authors show that Knotting (the complement of Unknotting) is in AM, by presenting a two-round interactive protocol for it. They also mention a corollary that if Unknotting is NP-Complete, then PH collapses to the second level. If I had paid better attention in my complexity class I would have figured out how this follows, but alas... I'd be grateful for any enlightenment. (Update: here it is)

The tragedy for me is that although it is clear that this result is interesting, the proof itself is rather impenetrable, requiring a knowledge of combinatorial topology above and beyond what I can muster up. This is why translators are needed ! (Update: here it is)

Knot theory is a good example of a (relatively arcane) subfield of mathematics that has suddenly become profoundly important in "real life". For more on this, read what well-known theoretical computer scientist John Baez has to say :).

On a more frivolous note, you can do knot art ! Calling our resident mathematical knot knitter...

Frank Harary, 1921-2005.

I wish I had known this earlier. Frank Harary, a giant of graph theory best known for his textbooks, died on Jan 4, 2005. There is more information at his homepage. (via mathpuzzle.com)

Timothy Gowers and the study of mathematics

When I was in high school I used to watch the Carl Sagan series 'Cosmos', and was fascinated by how he used to bring physics to life. I even remember the episode where he describes the physical consequences of the natural constants being closer to everyday scales (suppose the speed of light were 50 mph, etc).

Ever since then, I've looked for expositors who could extract the same kind of beauty out of mathematics and computer science, distilling its essence in a way that might seem obvious to those of us who practice it, but that allows the core concepts to be revealed to a non-technical audience in a way that captures the imagination.

Timothy Gowers, the Fields Medalist, is well known for his expository articles on a variety of topics. He gave a speech in 2000 in Paris at an event sponsored by the Clay Mathematics Institute. The speech was titled 'The Importance of Mathematics', and I highly encourage watching it.

What struck me the most about this lecture was his eloquent defense of mathematics, and how he conveyed a sense of the aesthetic beauty of the field (why mathematicians refer to a theorem as 'beautiful'). With beautiful examples, he systematically laid out the landscape of a mathematical life, not in terms of topics, but in terms of how mathematical work is done.

He firmly dismissed the idea of mathematics needing to be useful (something we have had our own controversies over in theoryCS), and distinguished between the two statements
(1) Mathematics is not a useful subject
(2) A typical mathematician does not try to be useful.
arguing that (1) is false even though (2) is usually true, and that it should be this way.

A simple way of expressing some of the ideas in his lecture would be:
Mathematics is surprisingly useful, because complexity arises from simplicity and beauty reveals itself to be important.
I just bought his book 'Mathematics: A Very Short Introduction', which brings out these ideas in much more detail, and also puts to rest some of the usual canards associated with mathematics (the notion that mathematicians are washed up by 30, the lack of women, etc).

To some degree, the description of the daily activities of an algorithms researcher is similar to that of a mathematician, but in many ways they are different. We need a similar exposition for theoryCS !

Scian Melt #7

The latest delectable mix of things scientific and Indian is up.
