Sunday, September 27, 2009

Rajeev Motwani Memorial

I just returned from the technical workshop and memorial in honor of Rajeev Motwani. The workshop was excellently run by Ashish Goel, and designed (extremely well, I thought) as a mix of retrospective and technical content, with one speaker presenting a brief retrospective and introduction, and a technical speaker laying out a body of work.

The topics were varied: they started with Sanjeev Khanna discussing matchings in random graphs (Rajeev's thesis work) and going on to brand-new results in this area: for example, an expected O(n log n) time bound for matchings in d-regular bipartite graphs. Piotr Indyk talked about locality-sensitive hashing, David Karger talked about randomized min cuts, Aneesh Sharma discussed monetization problems in social networks, and Sudipto Guha concluded with an overview of data streams.

The talks were surprisingly technical: if I closed my eyes, I could have easily imagined being in a SODA conference room. The only difference was that people were actually paying attention, as opposed to clicking away on laptops (or tweeting!). It was a large crowd: over 100 people, by my casual count.

There were many retrospectives, given by Dick Karp, Jeff Ullman, Chandra Chekuri, and Ron Conway. Chandra spoke in particular about the experience of being Rajeev's student, and as a former student myself, his words touched me the most. He talked with feeling, compassion and honesty, and drew a compelling picture of a man that we began to know all over again.

There was a beautiful memorial service in the Stanford Church, with words in English and Sanskrit, a hauntingly beautiful hymn from the Vedas sung by Rajeev's elder daughter, and testimonials from colleagues and friends old and new. Don Knuth was the organist for the entire ceremony, and played pieces you didn't think could be played on a church organ. After the service and a reception, there was a concert by one of Rajeev's favorite bands, Indian Ocean. They played amazing music, and I'm downloading their songs as we speak, but that's a tale for another time.

It was good to go back and meet people whom I knew so well for a brief period of time, and then lost touch with. Many (if not all) of Rajeev's former students were there, as were many others who cohabited the Gates Building along with me. All of us older, a little grayer, but still recognizable :). Spread-apart families often only get together at weddings or funerals, and this was one of those occasions where it was great to see everyone, even as we all kept murmuring, "unfortunately it had to happen like this".

If I had to describe the feeling that dominated my thinking that day, it was a sense of being robbed. Upon hearing testimonial after testimonial, anecdote after anecdote, listening to this divine rock group that Rajeev listened to and loved, I could only wonder at the many sides of this person whom I knew so little of. I wished I had known more about him: that our interactions had been more multidimensional than that of advisor and student, and that I (and my fellow students at the time) had seen more of the ebullience and vivacity that others spoke so vividly of.

By the end, a new picture began to emerge, of a 'hub', a 'connector' and a 'facilitator', someone who had the clarity to know what people really needed to succeed, and the self-effacement to stand back and make it happen, by connecting people together. He helped legions, and legions came to bid him farewell.

It therefore seems oddly fitting that his career in research started with studying random matchings, and ended with new explorations of social networks. His life, one might think, was always about creating, analyzing and enriching connections.

Thursday, September 24, 2009

Recent items

Deadlines are keeping me busy, and away from blogging.

Speaking of which, the Fall Workshop on Computational Geometry has a submission deadline this Friday. The Fall workshop is a "true" workshop, in that you go, give a talk on whatever you're working on, and there's no messing around with proceedings, publications and things like that. It's a pure meet-and-chat kind of venue, which means the pressure is low, and the visibility is quite high (many of the east-coast geometry folks show up).

This year it's at Tufts, so the location is even better. So get those 2-page abstracts in !

In other news, the FOCS registration deadline is looming. Bill mentions a workshop to celebrate 50 years of FOCS (which dates back to when it was the conference on switching and circuits (!)) and 20 years of the GATech ACO program.

The workshop is NOT like the Fall workshop :). It's a series of invited talks by a star-studded cast: KARP !! YANNAKAKIS !! ALON !! BLUM !! All together, for one brief engagement !!

Sounds like a great idea: the kind of thing we probably need to do more of to get more folks to the conference.

Wednesday, September 16, 2009

Memorial Workshop for Rajeev Motwani: Update

I had mentioned a memorial workshop for Rajeev Motwani to be held at Stanford on Friday, Sep 25. Registration is now open for the workshop and the memorial service to follow.

Registration is free, but mandatory. So if you plan on attending either the workshop or the service or both, make sure you register.

Beamer + Ipe + views...

At this point, almost everyone is aware of how to use beamer to make presentations in LaTeX. However, many fewer people (mostly only geometry folk) are aware of Ipe, a kind of next-generation xfig.

You may stop reading right now if
  • you always use powerpoint for slides OR
  • you rarely have to use LaTeX in slides OR
  • you really hate non-WYSIWYG presentation software
Still here? OK...

I wanted to share a workflow tip that I found quite useful when making slides with step-through animations. Suppose you have a sequence of slides in a presentation that unfold a figure step by step, or animate some algorithm execution etc. Ipe, coupled with a few LaTeX commands, provides a really nifty way of rendering the animation without jumps, misalignments, or misdrawings.

Ipe (and many other drawing programs) has the notion of a layer. More powerfully, Ipe also has the notion of a 'view', which you can think of as a (sub)set of layers. For example, if you have a drawing with layers 1,2,3,4,5, then view 1 might consist of {1,2,3}, and view 2 might consist of {1,2,5}, and so on.

What this means is that step animation becomes really easy. Each time you move to a new step, you create a new view in Ipe (which usually also creates a new layer), and you can select whichever subset of the current set of layers you want to render, as well as draw new ones.

Ipe stores all the views as separate pages in a PDF file, so your final animation consists of a multi-page PDF file. And now comes the cool part.

Suppose you want to include this sequence of views in a beamer slide, with each view appearing in turn on a mouse click. You need two things:
  • pdftk (which comes standard on most Linux installations), which allows you to split a PDF file into multiple files (one per page), with any filename format that you specify. For example, I have a command called 'splitpdf' that does this:
    pdftk $1.pdf burst output $1-%d.pdf

    which takes a file name.pdf and splits it into name-1.pdf, name-2.pdf and so on.

  • Next, you need the (standard) LaTeX package 'xmpmulti' which gives you the command \multiinclude. It allows you to include multiple figures that share a common prefix. So for example, to include all the figures created by the previous pdftk command, you merely write
    \multiinclude[start=1,format=pdf]{name}
    The 'start=1' option starts counting from 1 instead of 0, and you can also specify an end counter.

But the best part is when you instead use
\multiinclude[<+>][start=1,format=pdf]{name}
Now, the files are included with an automatic 'replace each by the next' mode (<+> is standard beamer notation for this). At this point, you have a sequence of animations ready to go. In fact, when I give lectures, I have a number of slides that just look like this:
\begin{frame}
\frametitle{Title}
\mi{name}
\end{frame}

where \mi is a macro I defined for the above multiinclude. Ipe does all the layering and viewing work for me, and multiinclude takes care of the rest. This has made making complicated animations really simple and fast.
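
(For the curious, here is a minimal sketch of what such a macro might look like. This is a reconstruction for illustration rather than my exact definition; it assumes xmpmulti is loaded in the preamble, and the centering is purely cosmetic.)

    % in the preamble
    \usepackage{xmpmulti}

    % \mi{name} steps through name-1.pdf, name-2.pdf, ... one overlay per click
    \newcommand{\mi}[1]{%
      \begin{center}
        \multiinclude[<+>][start=1,format=pdf]{#1}%
      \end{center}}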

p.s. if you're still wondering why one should use Ipe instead of xfig, the LaTeX integration in Ipe is superb. There's no nonsense with special flags, pstex_t files, and craziness like that. You get WYSIWYG LaTeX inside Ipe, you can use whatever macros you have in your text, and the various nifty alignment tools make an Ipe drawing look really clean.

Friday, September 11, 2009

Memorial Workshop for Rajeev Motwani

Plans have been in the offing for a memorial workshop for Rajeev Motwani, and now the details are available. Below is the announcement (sent by Ashish Goel):
Dear friends,

As you might know, our dear friend and colleague, Rajeev Motwani, passed away in a tragic accident on June 5, 2009. We are holding a technical workshop titled

Randomized Algorithms: Theory and Applications

at Stanford University on Sep 25th, 2009, from 10am - 2:30pm to honor Rajeev's research in the area of algorithms and their applications. If you did not know Rajeev's research, please see http://www.stanford.edu/~ashishg/cgi-bin/RememberingRajeev/ for a brief introduction.

The workshop will take place at the Bechtel Conference Center in Encina Hall. Workshop attendees are also invited to a memorial celebration starting at 4:00pm at the Stanford Memorial church, followed by a performance by one of Rajeev Motwani's favorite bands, Indian Ocean. The registration URL, web-page information, parking directions, and talk titles will follow in a later email.

Registration will be free, but mandatory. Please feel free to bring this to the attention of any colleagues/students who you think might wish to attend, and send me an email if you have any questions.

Workshop program:

10 - 10:45 am
Welcome remarks
Retrospective by Richard Karp, UC Berkeley
Technical talk by Sanjeev Khanna, University of Pennsylvania

10:45 - 11:30 am
Retrospective by Jeff Ullman, Stanford University
Technical talk by Piotr Indyk, Massachusetts Institute of Technology

11:30 am - 12:15 pm
Retrospective by Chandra Chekuri, University of Illinois at Urbana-Champaign
Technical talk by David Karger, Massachusetts Institute of Technology

12:15 - 1:30 pm
Lunch for registered attendees

1:30 - 2:30 pm
Retrospective by Ron Conway, Venture Capitalist
Technical talk by Sudipto Guha, University of Pennsylvania
Technical talk by Aleksandra Korolova, Stanford University

The Scientific committee for the workshop consisted of:
Moses Charikar
Ashish Goel
Richard Karp
Prabhakar Raghavan
Tim Roughgarden
I'll update this post with the registration URL when it becomes available.

Wednesday, September 09, 2009

Geometry at ESA (guest post)

(ed. note: Jeff Phillips is a researcher in computational geometry, data analysis and statistics. He's also one of the newly minted CIFellows, and will be working with me starting in a week. He kindly agreed to file this report from ESA.)

ESA (along with ICALP) is one of the top two European conferences for Theory A. And since the ICALP deadline typically falls between the SoCG submission and notification dates, ESA is probably home to many of the best computational geometry talks. This year was no exception, with two "geometry" sessions, as well as many other geometry talks mixed into other sessions.

Another interesting pattern at ESA, which makes it different from other algorithms conferences, is the juxtaposition of theoretical and experimental work. The two tracks (A: Design and Analysis, B: Engineering and Applications) have separate submissions, but are not treated separately in the program. This leads to the fun game of trying to determine whether a talk was from track A or track B. Sometimes you are surprised when a talk starts with a nice theoretical result, and then the speaker presents several experiments to show the algorithm works well in practice, or mentions that it has already been added to CGAL.

I think this is a great practice, and one that should be considered by other conferences such as SODA or SoCG. For instance, ALENEX talks could be mixed in with SODA talks. This would encourage more algorithms people to actually implement their algorithms, because it would allow them to submit to a separate track that gives more value to practical algorithms. How could this not be a positive for our community and its perception by other parts of computer science?

A couple of related data points: There were 56 track A papers and 10 track B papers, and both were accepted at a rate of about 25%.

The invited talks followed the theme of theory and practice as well. Michael Mitzenmacher began with a very clear talk explaining many open problems in Cuckoo hashing. He has been working on small variations in the algorithm that lead to large differences in performance, and then has been going to great lengths to explain why these variations make such a large difference.

Erik Demaine gave a very entertaining talk describing how a certain line of his work has alternated between art and the theory behind the art. He also performed a couple of magic tricks, reminding us that most of the best magic requires lots of research, and sometimes algorithms!

On the third day, Noam Nisan presented a great high level view of how Google's TV ad auctions work. Much of his talk served to point out all of the important details often ignored in theoretical analysis, but critical in implementing such a system effectively.

I'd like to note a few papers I found interesting. This is by no means an exhaustive list, but just enough to give a taste of the (geometric) results.

Manuel Caroli and Monique Teillaud. Computing 3D Periodic Triangulations.
They presented a set of rules and an algorithm for computing triangulations in 3D which are periodic, that is, they tessellate to fill an infinite space but can be defined as a complex within a unit cube. These triangulations are needed for many simulations where boundary conditions are difficult to deal with; periodicity avoids the issue by effectively having no boundary. Despite the usefulness of these triangulations, they had not really been properly formally defined until this paper. Furthermore, their algorithm has been implemented and will soon be in CGAL, if it is not already.


Mohammad Ali Abam, Mark de Berg, Mohammad Farshi, Joachim Gudmundsson and Michiel Smid. Geometric Spanners for Weighted Point Sets.
They study size-n Euclidean point sets with weights, where the weighted distance between two points p and q is defined as
d_w(p,q) = w(p) + d(p,q) + w(q)
where d(.,.) is the Euclidean distance and w(.) is the weight of a point. This may, for instance, represent a potential road network where it takes w(p) time to get from the outskirts of a city to its center. This paper shows that typical spanner-type results can be obtained for a weighted point set under this weighted distance: for example, in R^d a (5+eps)-spanner can be found with O(n) edges, and a (2+eps)-spanner with O(n log n) edges.
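(For readers who haven't seen spanners before: the guarantee has the usual form, just with the weighted distance playing the role of the metric. A t-spanner G on the point set satisfies, for every pair p, q,

    d_G(p,q) <= t * d_w(p,q)

where d_G(p,q) is the length of the shortest p-q path in G measured with the weighted distance; the exact path-length convention is spelled out in the paper.)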


Sumit Ganguly and Christian Sohler. d-dimensional Knapsack in the Streaming Model.
This paper studies the d-dimensional knapsack problem: given a set of n items, each with a value v_i and a d-dimensional size s_{i,1}, ..., s_{i,d}, find a set of items I with maximum total value such that sum_{i in I} s_{i,j} <= 1 for all j \in [1,d]. The problem is studied in the streaming model, which I found interesting because the actual solution cannot be stored: it might have linear size. Instead, they approximate the optimal cost and present a "template" solution, giving an approximate size for each element of their approximate solution I'.
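(To restate the optimization problem compactly, using the notation above:

    maximize     sum_{i in I} v_i
    subject to   sum_{i in I} s_{i,j} <= 1   for each j in [1,d],   I a subset of [n]

so the chosen items must fit within a unit budget in every one of the d dimensions.)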

Bernhard Haeupler, Siddhartha Sen, and Robert E. Tarjan. Rank-Pairing Heaps.
A data structure paper, but one very useful in geometry, among other places. This paper does not present any new results from a classical theoretical perspective: they describe a heap data structure with the same running times for all operations as Fibonacci heaps. The contribution is that their data structure is considerably simpler than any variant of Fibonacci heaps; in particular, the heap never needs to be restructured. As an added benefit, their data structure easily outperforms Fibonacci heaps in practice.
Other notes (mainly from the business meeting; see also Michael Mitzenmacher's post):

  • There were 37 computational geometry submissions, of which 10 were accepted.
  • Mark de Berg (next year's track A chair) declared that next year the page limit will be STRICTLY ENFORCED, and that he is not opposed to automatically rejecting papers if they play too many games to squeeze more text into the allotted number of pages. Be warned.
  • Next year, ESA 2010 (and ALGO) will be in Liverpool, England, and ESA 2011 was voted to be in Saarbrücken, which won over a bid from Greece. (ed: HOW ON EARTH ? !!!!!!)
  • Proceedings were not included in the registration, but could be bought. On the second day, after politely asking Springer, participants were able to download a single PDF of the proceedings from a couple of USB keys. This was very useful, but it would have been even better earlier in the conference.

Things I missed (I had to leave partway through the third day).

The attached workshops that are part of ALGO on Thursday and Friday.

Friday, September 04, 2009

SODA 2010

Ironically (given that I'm on the PC), I'm probably the last one to post about the SODA acceptances (although I did tweet it a while ago).

In case you don't already know, one major change this year is the 20 page limit on final versions, brought about by the all-electronic proceedings. It's worth pointing out here that this is a major change that I don't think ANY conference (theoretical or otherwise) has put into place (I can say this because I had nothing to do with it :)). 20 pages is huge: there's no way you can wiggle out of writing full proofs at this point, and I hope that this trend will continue as more of our conferences go all-electronic.

One caveat: I don't know what the final version format is. It seems unnecessary to go with the butt-ugly Sheridan 2-column format, but we'll see.

On the papers:
I'm not going to comment on the papers, since I reviewed many of them. Suffice it to say that (even though it sounds like a cliché) I was impressed by the quality of the submissions, and there were many good papers that unfortunately couldn't make it. We're in the process of writing decision summaries for the authors: these are intended to capture more of the discussion surrounding a paper. Hopefully, they will give a better sense of what the reviewers liked and disliked about the paper than just the actual reviews.

On the process:
I felt a bit more disconnected from the overall deliberations this time, and maybe others felt this way too. I spent a lot of time on "my pile", but I didn't have a lot of time to look over papers that I wasn't in some way responsible for. Given the number of submissions, this is unavoidable I guess.

I actually think the reason I felt this way was because of the software interface. Instead of the (clunky) SIGACT server interface used in years gone by, we used the snazzy HotCRP interface. In almost every way, this is a vastly superior interface (and I've used EasyChair, the Microsoft CMT, and other packages as well). It feels lightweight, it has great searching and tagging capabilities, and most of the interfaces are one or two clicks from the home page. It cleanly combines email notifications and uploads/downloads with web-based editing, and I'd recommend it very strongly for anyone organizing a conference in the future.

The only feature the SIGACT server had that HotCRP doesn't is a landing page where you got a stream of all comments on all papers. It was a serendipitous way of picking up on discussions about papers you weren't in charge of, and I remember in the past getting interested in a discussion and a paper, and actually reading it and submitting a review myself. In HotCRP, you land at a page containing your reviews, and it doesn't give you a direct view into the global picture (except maybe for the PC chair).

One amazing feat of social engineering that HotCRP also pulls off: at this landing page, it tells you how many reviews you've put in compared to the average. There was a time during the review process when I'd reload the page every few hours and see myself falling behind the PC average, increasing the pressure on me to submit reviews :).
