Tuesday, April 26, 2005

AAAS Fellows

The American Academy of Arts and Sciences (AAAS) announced its 225th class of Fellows today. Its membership spans a large spectrum of areas. Of particular interest to computer scientists is category I.6, in which four Fellows were announced today.
Of the four, two are theoreticians. Herbert Edelsbrunner is of course one of our own, computational (and combinatorial!) geometry demigod and part of the 1-2 geometry smackdown that Duke University delivers to the rest of the world. Zvi Galil is one of the early algorithms greats who helped develop the field of algorithms (graph algorithms, string algorithms, dynamic algorithms) into the rich area it is today.

Interestingly, Sergey Brin and Larry Page have also been named as Fellows, in the category V.II: "Business, Corporate, and Philanthropic Leadership (Nonprofit sector)".

Saturday, April 23, 2005

Merger mania

A pre-disclaimer: I know nothing about this conference and by all indications, it is probably a good one. I just found this description from the CFP funny.
CoNEXT is a joint conference series having its roots in QoFIS, NGC and
MIPS. QoFIS and NGC are highly successful international workshops initiated
by two European COST Actions, namely COST263 on Quality of Future Internet
Services and COST264 on Networked Group Communications. MIPS resulted from
the merging of two other major workshops, namely IDMS (concentrated on
interactive and distributed multimedia services) and PROMS (focusing on
protocols for networked multimedia systems); also the associated ICQT
(Internet Charging and QoS Technology) workshop is integrated into CoNEXT 2005.
And I thought media mergers were hard to keep track of...

Wednesday, April 20, 2005

A question on reviewing

You are asked to review a paper for a conference (not a journal). You find that the paper is on a problem that you have been working on for some time. This makes you
  1. An expert reviewer eminently qualified to review the paper?
  2. A conflicted reviewer who should recuse themselves?
  3. Insert your choice here.
Parameters to consider:
  • You either do or do not have prior published work on this topic
  • The problem is exactly what you were working on, or closely related
  • You have submitted something similar to the conference (I think this makes (2) the more likely choice) or not.

Tuesday, April 19, 2005

A Baez gem...

From his Quantum Gravity Seminar:
1+2+3+4+... = -1/12.
And no, this is not a joke. Read the notes.
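The identity is shorthand for a statement about analytic continuation: the Riemann zeta function agrees with the series where the series converges, and its continuation assigns the divergent sum a finite value:

```latex
\zeta(s) = \sum_{n=1}^{\infty} n^{-s} \quad (\operatorname{Re} s > 1),
\qquad\text{and by analytic continuation,}\qquad
\zeta(-1) = -\frac{1}{12}.
```

So the equation really reads ζ(-1) = -1/12; it is not a claim of ordinary convergence.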

Tick tock: SGP 2005

The Eurographics Symposium on Geometry Processing is a forum for geometric methods in graphics. It's a great place to see some of the applications of geometry.

Apr 20: Abstracts due
Apr 27: Papers due
May 23: Results announced

which is another very fast turn-around time.

Tick tock: CCCG 2005

CCCG is the Canadian Conference on Computational Geometry, and the deadline is approaching.

May 2: Papers due
May 31: Results announced.

That is one heck of a turn-around time.

Personal trivia: My first professor in algorithms, Asish Mukhopadhyay, is running the conference this year.

Monday, April 18, 2005

Why no paper should ever be rejected...

From a commenter on an earlier post:
Suresh, maybe you are being too hard on the guy. Maybe Professor Callos was afraid of being murdered by the author of a rejected paper?

Think I'm joking? The American Physical Society NEVER rejects an abstract submission to their conference. Why? Well, they used to, but some crazy guy who was "misunderstood" and had some great new theory of everything killed the APS president and his secretary because of multiple rejections. After that, they instituted an open door policy to the conference. Of course, there are special sessions for kooks....
What appears to be the source of this story, from the alumni magazine of Winona State University:

At the 50th anniversary of the murder of his friend and co-worker, Winona State University alumnus Tom Baab, '48, of Park Ridge, Ill., established the Eileen Fahey Memorial Scholarship. Fahey, a secretary at Columbia University, was shot and killed at her desk on July 14, 1952.

While earning his master's degree in American letters at Columbia, Tom worked part-time for the American Physical Society (APS), headquartered at the Pupin Physics Laboratory on the Columbia campus in New York City. Fahey, a 20-year-old secretary, was sitting at her desk reading a letter from her fiancé, a Marine serving in Korea, when Bayard Peakes entered the office and emptied a clip of .22 caliber pistol shots into Fahey, killing her. Peakes then fled the campus.

In the weeks that followed, Tom was among those questioned by police for possible leads and motives. Peakes was finally traced through a letter written to him by Karl K. Darrow, head of Bell Labs and secretary of the APS. Darrow had declined to accept a paper Peakes wanted to present at the next APS meeting. Peakes's paper proposed the non-existence of the electron and Darrow rejected it, suggesting that Peakes might ruin his career in physics with such a theory.

At his arrest, Peakes said he wanted to kill a man at the APS since his rejection letter had come from a male. Fahey was the only person in the office and the shots were directed at her instead. Peakes was tried and sentenced to the Rockland County Asylum for the Criminally Insane.

State of The Art Reports (STAR)

Eurographics has an interesting submission forum called STAR (State of the Art Reports):
STARs are survey papers that cover hot topics in contemporary computer graphics research. Their goal is to give a comprehensive overview of all relevant work in the respective field and to explain in depth the techniques and algorithms involved. Potential STARs may be based on a recent tutorial or course given by the authors.
Apparently, this has been going on for the last few years. It's an interesting idea:
For submission, a 6-10 page STAR-let is sufficient, which should contain the author(s) name(s), institution, short biography and contact details, along with a detailed outline of the STAR, including the main bibliographical entries. All submitted manuscripts undergo a formal review process. Authors of conditionally accepted proposals must subsequently submit a final 15-25-page STAR, that incorporates constructive reviewer feedback and acceptance of the STAR is contingent on this requirement.

Accepted STAR papers will be invited to be submitted as regular contributions to Computer Graphics Forum.

At EG'05, STAR authors will be given 90 minutes to present the state-of-the-art of their respective field. For one author, the conference registration fee will be waived.
(Emphasis mine). Sounds almost like the review process is intermediate between a conference review and a journal review (because of the contingent acceptance mode). It's always interesting to hear about new ways of talking about, and disseminating, new research.

As formats go, this sounds similar to the tutorials that are common in database conferences but all too rare at theory conferences. FOCS one year had a day of tutorials, and we have endless straw polls on 'does the community want to spend an extra day attending tutorials?', which almost always gets a NO answer.


Sunday, April 17, 2005

what the ... ?

From a "press release" at blogshares.com:
The Geomblog was the subject of much speculation when analysts at several firms were heard to be very positive about it's recent performance. It's share price rose from B$1,065.48 to B$1,502.33. Much of the hype was said to originate from kakngah whose Star Spangled Banner (artefact) was said to be involved.

kakngah declined to comment on the recent speculation.

kakngah : http://blogshares.com/user.php?id=18697
I wish someone could explain to me how this whole thing works.

Friday, April 15, 2005

methinks thou protesteth too much...

As many of you may now know, the MIT grad students' plan to attend SCI with a randomly-generated talk was exposed for the sorry sham it was, and the good folks at SCI were shocked, shocked at the perfidy perpetrated by these poltroons.

The mysterious Prof. Nagib Callaos has finally spoken. In a 4-page denunciation sent to an inquiring researcher, he lays out a formidable argument for the value of non-peer-reviewed papers, citing numerous authorities on the topic. The PS/PDF is not text-editable, so I can't cut and paste some of the choicer excerpts: read it and be 'umbled.

He remains unashamed, but full of sadneess [sic].

p.s A commenter points out this interesting story from a few years ago.

Real-life derangements.

A happy ending (at least for one author-set) to the bizarre referee report case:
We just found out that the reviewer had made a mistake with regard to my review and it was intended not for my paper but for some other "lucky" candidate - they are re-evaluating our paper and it may still get in:-) This joker had shuffled up all his four reviews such that none of them matched the actual paper!!
As any computer scientist worth his salt knows, this bizarre event is a derangement: a permutation of n items in which no item is matched to itself. It's often used as the canonical application of the principle of inclusion-exclusion in combinatorics. The likelihood of such an event is not that low: the probability that a uniformly random permutation is a derangement tends to 1/e ≈ 0.37 as n grows, and is already 9/24 = 0.375 for n = 4.
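The inclusion-exclusion count is easy to check numerically; here's a small sketch (D(n) denotes the number of derangements of n items):

```python
from math import factorial, e

def derangements(n):
    # inclusion-exclusion: D(n) = n! * sum_{k=0}^{n} (-1)^k / k!
    # the partial sum differs from 1/e by less than 1/(n+1)!, so rounding is exact
    return round(factorial(n) * sum((-1) ** k / factorial(k) for k in range(n + 1)))

# four shuffled reviews, none matching its paper: 9 of the 24 permutations
print(derangements(4), derangements(4) / factorial(4))   # 9 0.375

# the probability converges quickly to 1/e ~= 0.3679
print(derangements(10) / factorial(10), 1 / e)
```

So our joker reviewer had better-than-one-in-three odds of deranging his four reviews by pure chance.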

Lest you think that derangements are mathematical arcana, I give you the Married Couples Problem. And let this be a lesson: if you don't like the reviews you get, and have good reasons, don't keep quiet!

p.s I pity the authors that this review was intended for.

Wednesday, April 13, 2005

FOCS and the arxiv

The Quantum Pontiff notes that after the FOCS deadline, a number of interesting papers have started showing up in the arXiv. These are all quantum computing papers, it should be noted; qc folks (mainly because of the physics connection) tend to be rather diligent about arXiving papers. As far as I can tell, there is no similar burst for non-qc papers.


SCI reviews

The paper from a previous post was accepted to SCI in spite of having no reviews. Since SCI reviewers are presumably flooded with submissions and clearly overwhelmed, maybe the following review might help them. This is an actual complete review for a real conference (minus the scores): its immense value lies in its chameleon-like ability to "fit" as a review for any paper you might submit! I offer this to the community as a token of goodwill on the eve of the Tamil New Year:
-- Comments to the author(s):
The paper is technically poor and also the results.
The authors did not refer appropriately the past work.

The author addressed the topic in irrelevant way.

The paper is not clearly written.

No technical or engineering contribution.

-- Summary:
The paper does not describe the problems and the
solutions synthetically and the technical options are
badly explained. The authors must follow common rules
for writing articles in the domain.

Tuesday, April 12, 2005

More Attacks on SCI

To follow up on Ernie's post about the perfect SCI submission, here's a website that will automatically generate random papers in computer science, with graphs and all! Developed by Jeremy Stribling, Max Krohn, and Dan Aguayo, all graduate students at MIT, the website was "tested" by submitting a paper to SCI. Here's the abstract:
Many physicists would agree that, had it not been for congestion control, the evaluation of web browsers might never have occurred. In fact, few hackers worldwide would disagree with the essential unification of voice-over-IP and public-private key pair. In order to solve this riddle, we confirm that SMPs can be made stochastic, cacheable, and interposable.
The paper was accepted! As an aside, their 'acceptance notification' provides a novel solution to the problem of reviewer loads at conferences (FOCS committee, are you reading this?):
On behalf of the Organizing Committee, we would like to inform you
that, up to the present, we have not received any reviews yet for you
paper entitled: "Rooter: A Methodology for the Typical Unification of
Access Points and Redundancy". So, your paper has been accepted, as a
non-reviewed paper, for presentation at the 9th World Multiconference
on Systemics, Cybernetics and Informatics (WMSCI 2005) to be held in
Orlando, USA, on July 10-13, 2005.

The authors are looking for funding to attend SCI in Florida, where they plan to present a randomly-generated talk. Donate generously :)
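The underlying trick is a recursive expansion of a hand-written grammar. A minimal sketch of the idea (the production rules below are invented for illustration; they are not SCIgen's actual grammar):

```python
import random

# toy grammar in the spirit of the generator: nonterminals expand recursively
# into randomly chosen productions; anything not in the table is a terminal
GRAMMAR = {
    "SENTENCE": [["Many", "NOUN", "would agree that", "NOUN", "can be made", "ADJ", "."]],
    "NOUN": [["physicists"], ["hackers"], ["web browsers"], ["access points"]],
    "ADJ": [["stochastic"], ["cacheable"], ["interposable"]],
}

def expand(symbol, rng):
    if symbol not in GRAMMAR:          # terminal: emit as-is
        return symbol
    production = rng.choice(GRAMMAR[symbol])
    return " ".join(expand(s, rng) for s in production)

print(expand("SENTENCE", random.Random(0)))
```

Dress the output up with randomly generated graphs and citations, and apparently you too can be a published SCI author.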

(Source: Adam Buchsbaum)

Sunday, April 10, 2005

More on funding and research

Daniel Lemire responds to my lament on research. Daniel, you did understand me correctly, so there is no confusion there. :)

I wrote this long rant about how the funding that computer scientists need is not insignificant, because of student support. I then realized that this is not what I wanted to respond to. The real issue is this:
if you need a lot of cash to do your research, you’ve got to justify the use of the money. Justify it to whom? To the people who give you the money. This seems only fair. If you want to do research for its own sake, and you also want a lot of money, well, tough.
But this is the point I am lamenting. The very idea that you justify research by drawing a direct line between that research and some promised benefit in the near future is problematic. Of course research needs to have direction. If I am going to study the higher moment properties of the distribution formed by the length of the hind legs of ring-tailed wallabies, I should presumably have some reason why this is relevant. But I cannot draw a direct line between this distribution and (say) a cure for cancer, nor should I be expected to. I would argue that the point of government funding is precisely to fund projects that have no near-term monetary value, but fit into a long-term research agenda as perceived by a jury of one's peers (like, say, a grant review panel).

Once again, I defer to Timothy Gowers and his masterful lecture, where he spells out in detail how research contributions can be twisted, gnarly things that are hard to linearize. My point is merely that if one believes that long-term research adds value in terms of adding to our base of knowledge about the world (Platonic or otherwise), then one cannot merely say, 'do it on your own time; Einstein did!'. Even he built GR on a mathematical edifice that had been in development for a while, with frankly no clear "practical" purpose prior to his work.

Saturday, April 09, 2005

On knowledge for knowledge's sake.

The CRA blog points to an eloquent OpEd in the WaPo by science writer Rick Weiss:
Crouched today in a defensive posture, we are suffering from a lack of confidence and a shriveled sense of the optimism that once urged us to reach boldly into the unknown. Equally important, we seem to have forgotten that many good things come just from being open to them, without a formed idea of what they are or how they should come out. We are losing, in short, one of the oldest traditions in science: to simply observe, almost monk-like, with an open mind and without a plan.
The desire for knowledge for its own sake seems almost quaint in these days of interdisciplinary research, justifying one's bottom line, monetizing one's research, and so on and so forth. I remember back in the heady days of the late 90s that doing a Ph.D had become something distinctly unfashionable; only the poor sods who didn't have an idea for seed funding were plodding away at their degrees.

There is much to be grateful for in the gold rush of the 90s. Not necessarily for the innovation it fueled - there was enough of that already - but for the tremendous boost it gave to the computer industry, and to computer science as a whole. The job market exploded, and people started getting insane salaries for doing the same kind of work that would have got them much less earlier; I often feel that I am overpaid for the work that I do (not that I am complaining :)).

But with this kind of success comes a price. There is something to be said for toiling in the trenches for low pay and low reward, if only that it directs your attention to what really matters. When opportunity knocks, pressure is not far behind. What I mean is that as opportunities for making money off of research increase, the expectations for doing so also increase, and soon, you are having to explain why you are NOT doing profitable research.

I feel that in some ways this has happened with research in computer science (not wholly, but in many ways). The more industry-focused research is, the more short term or "evolutionary" it tends to be. Big questions are not asked, and thus are not answered.

At NetDB, there was a panel titled 'Networks and Databases: Do We Meet or Merge?'. An interesting argument made by one of the speakers was that peer-to-peer systems, although an active area of research, will not last as long as something like sensor networks, which have industry backing as well (the implicit suggestion being that one is more worthy of further study than the other). Given how profoundly the notion of a peer-to-peer system has affected our lives, especially socially, it seems like an unfortunate argument to make, and one motivated primarily by financial considerations.

I'm not sure where I'm going with this, to be honest. It's just that I've had far too many discussions about whether knowledge acquisition should be utility driven or "for its own sake", and all the signs suggesting that government is actively enforcing the first view disturb me greatly.

Friday, April 08, 2005

Tuesday, April 05, 2005

Attacks on ideas...

I've been unable to blog as frequently as I would have liked: alas, blogging and research still appear to be a zero-sum game for me in terms of time. The attacks on evolution, science and the academic sphere have become a lot shriller, with spectacles like people at the Discovery Institute (the home base for (un)intelligent design) now deigning to take on Einstein (read Sean Carroll for a lip-smackingly satisfying thwack). What is saddest is this account from the Scientific American editors' blog, which starts:
Last night I had dinner with more than a dozen presidents of private and state universities, and it became one of the most frustrating and disappointing evenings of my professional life. It gave me reason to question the leadership to be expected from the universities on one of the most significant issues of science education in the U.S.
Scientists are too busy doing science to step up and defend the field, and political organizations are attempting to view science through the tired left-right polarized lenses that pass for public discourse. If university presidents can't step up to the plate, who will?


Sunday, April 03, 2005

Numb3rs Review: "Identity Crisis"

I never got around to posting my Numb3rs review because this episode was rather low on the math. There was a small element of Bayesian reasoning, where Charlie attempts to figure out the likelihood that Don convicted the wrong guy based on two crimes having the same M.O.

Plot Summary: Don discovers a crime with an M.O. identical to a case that he thought he had solved a year ago. He gets Charlie to tell him that the likelihood of a coincidence was very small, and then goes after the real killer. Along the way, Charlie insults a fingerprint specialist, but she gets her revenge, with a "butcher's hook" (no no, not that kind).

What I found more interesting was a real life Numb3rs story (with no crime though), via the New Yorker, starring the brothers Chudnovsky, and a delicately tampered rug. And if you're tired of such seriousness, check out this profile of the most beautiful mathematics grad student on TV, Navi Rawat (courtesy Sepia Mutiny).


The /. translation

of the NYT article on defense funding that I mentioned earlier:
  • I'll put it in StarCraft terms: you're spending your minerals on upgrading your Zealots, and failing to invest in the pylons and tech structures that would allow you to build a whole frickin' fleet of Protoss Carriers
  • You can't cut Counter Strike research! What will I play now?
The discussion is interesting though (at +5 filtering). Some of the standard fallacies are brought up (and occasionally demolished):
  • All useful CS research was done in the 70s
  • We don't need no stinkin' guvmint: a teenage hacker can do better
  • Why should DARPA fund academic research
  • Why do CS researchers need tons of money anyway? All they do is think!

Saturday, April 02, 2005

The role of the university in the current funding crisis

When writing a post, it often happens that I have too many thoughts to fit into what is conventionally a short note format. Further reflection on my post about funding led to this:

Although it is not clear that the dot-com bust had anything to do with increased submission rates to conferences/funding agencies, it is fair to say that the dot-com boom had a lot to do with this.
I recall that in the late 90s and early 00s, universities were desperately trying to increase the sizes of their CS departments, allegedly in response to increased undergraduate enrollment and the consequent increased demand for teachers.

Needless to say, when you hire tenure-track professors, you get more teaching, but you also get a heck of a lot more research, as you have more slaves running the tenure treadmill as fast as their legs can go. That inevitably leads to more submissions to conferences and more requests for money.

But tenure-track jobs often lead to tenure, i.e. permanent jobs. These professors are not likely to stop publishing and stop requesting funding, but the underlying supply of students appears to be drying up (at least at the graduate level, though probably not at the undergraduate level). What happens then?

This appears reminiscent of what has happened in biology. The PCR revolution led to an explosion in the number of researchers in biology and genetics. The NIH has historically been generous in funding graduate work in biology. However, funding for PIs and post-docs is far more scarce, creating a supply-demand bottleneck that means long years toiling at postdocs for ever-elusive faculty jobs.

In computer science, we are far from that, also because industry jobs are to an extent still a viable option for researchers. But as industrial research labs cut back, and funding for basic research dries up, we could easily be facing the same kind of funding crises that biology faces today. And to draw another lesson from biology, what is true for rats is true for humans: as resources dwindle, organisms become more ferocious towards each other. If you think your grant reviews were harsh before, watch out!

Dwindling research funding in the U.S.

Today's New York Times has an article by John Markoff titled 'Pentagon Redirects Its Research Dollars'. The lead paragraph:
The Defense Advanced Research Projects Agency at the Pentagon - which has long underwritten open-ended "blue sky" research by the nation's best computer scientists - is sharply cutting such spending at universities, researchers say, in favor of financing more classified work and narrowly defined projects that promise a more immediate payoff.
Others have already commented on this article: it is interesting to read it in conjunction with the latest David Patterson letter in CACM. One of the most fascinating items in this letter is a chart outlining the evolution (in terms of money, and where the research was done) of many of the most important technologies available today. Technology transfer is a rather overused buzzword, but this chart really shows how technology moves between academia and industrial research labs (in both directions), and then to commercial settings. If ever one wanted proof of the value of academic "blue-sky" research, this is it.

As always, there are a number of related trends:
  • In a previous letter, David Patterson lamented the increasing submission loads to various conferences. If you look at the chart of NSF funding requests and acceptances over the past few years, it looks strikingly similar.
  • Research is becoming more conservative; again, David Patterson in his two letters cites the lack of 'out-there' conference papers as well as the 'chilling effect' of funding on truly innovative grant proposals.
It is not hard to see why these are related: in a highly competitive conference (and funding) acceptance scenario, papers are not accepted; they are merely 'not rejected'. In other words, we find reasons to eliminate papers rather than accept them. What this means is that you have to have a cast-iron presentation of a paper/grant; anything that can get shot down, will. Thus, more outlandish and innovative proposals die.

Being conservative about accepting 'out-there' papers is not necessarily a bad thing; however, I'd argue that one cannot afford to be conservative about funding new ideas: to use a rather abhorrent analogy, funding is like the VC seed capital for a company, and conference/journal peer review is like the marketplace; if you cut off the source of innovation, there is no fuel for good ideas to emerge.

One has to ask though: How much is ACM itself responsible for this state of affairs ? I have read more interesting articles in CACM in the past three months than I had in the prior 7 years, and I am truly grateful that David Patterson is raising issues that are familiar to all of us in research. However, the ACM, over the past many years, has morphed from one of the truly representative computer science organizations (many foundational research results were published in CACM in days gone by), to an IT trade organization. The ACM does not speak to me anymore, at a time when an active lobbying group for basic computer science research has never been more important.

Part of this, for better or for worse, has been the explosion in the IT industry over the last ten years. With the kind of money pouring into the IT world, it probably made sense to cater to the needs of IT professionals. However, companies are fickle creatures; one of the main problems we face right now is the lack of industrial funding for basic research, something that used to be a big part of the computer science landscape.

Let's be clear here: DARPA is right that if they are
devoting more attention to "quick reaction" projects that draw on the fruits of earlier science and technology to produce useful prototypes as soon as possible.
then they don't need to fund university research (and in fact academic researchers wouldn't want to work on these problems). Their mistake of course is not seeing the value of long-term fundamental research even now, but you have to ask: when the flagship magazine of the most well-known computer science organization cannot be distinguished from a random trade magazine, who's to tell the difference between academic research and corporate software efforts?

Hat tip to Chandra Chekuri for pointing out the above articles.

Friday, April 01, 2005

Theory Day at Maryland

UMIACS@Maryland is organizing a theory day just after STOC, on May 25. STOC is close to Baltimore this year, so for those who can make it, this would be a nice extra day of activities (that is, if you aren't burned out by talks already!)

Here's the program:

No wifi please, we're Japanese...

Todd Ogasawara went to Japan for business and discovered that unlike in the US, where hotels are tripping over themselves to offer WiFi, over there it’s mighty rough trying to find a hotel with any sort of broadband access (whether wired or wireless) for guests. Not that there isn’t WiFi in Japan or anything like that, but apparently most hotels assume that you’ll have some sort of high-speed access via your cellphone and so don’t feel compelled to offer WiFi.
Looks like blogging will be greatly reduced on my upcoming trip :)


The Economist has a nice article on what they claim to be a potential new trend in mathematical theorem proving: the use of computers to demonstrate the proof of a theorem. They cite the 4-color theorem and Kepler's conjecture as examples of 'proof by computer' and discuss the pros and cons of such approaches.

The article distinguishes two kinds of computer proof: one where the computer is used to enumerate and check a large number of subcases in a proof, and one where the computer is used to generate a proof via an automated theorem-proving process. The article argues that
It is possible that mathematicians will trust computer-based results more if they are backed up by transparent logical steps, rather than the arcane workings of computer code, which could more easily contain bugs that go undetected.
I am skeptical that mathematicians will be willing to trust a computer over their own intuition. The dark secret at the heart of mathematics is that most proofs are established by ultimately appealing to the reader's intuition. In fact, it is more likely that I can convince a listener of my proof by intuitive arguments than by pure formal methods. This is a sliding scale, of course; my intuitively obvious claim is another person's completely nontrivial lemma, and it is through the give and take among mathematicians in the community that a proof is 'stabilized' or 'confirmed'.
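Proof assistants do make the "transparent logical steps" option concrete: every inference is checked by a small trusted kernel rather than by appeal to intuition. A toy example in Lean 4 (purely illustrative, not from the article; the proof just invokes a library lemma, which the kernel verifies all the way down):

```lean
-- a machine-checked proof: the claim, the proof term, and the
-- library lemma it invokes are all verified by Lean's kernel
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Whether such a kernel-checked certificate persuades a working mathematician more than a well-told intuitive argument is, of course, exactly the question at issue.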

Ultimately, it's worth remembering that part of the beauty of mathematics is that rigid formal truths have beautiful intuitive interpretations, and it is these intuitive metaphors that are much more lasting than the proofs themselves.

A quote from the last paragraph of the Economist article is fitting, and would make Timothy Gowers proud:
Why should the non-mathematician care about things of this nature? The foremost reason is that mathematics is beautiful, even if it is, sadly, more inaccessible than other forms of art. The second is that it is useful, and that its utility depends in part on its certainty, and that that certainty cannot come without a notion of proof.

