Wednesday, January 18, 2012

SODA review I: Talks, talks and more talks.

I asked Jeff Phillips (a regular contributor) if he'd do some conference posting for those of us unable to make it to SODA. Here's the first of his two missives.

Suresh requested I write a conference report for SODA. I never know how to write these reports since I always feel like I must have left out some nice talk/paper and then I risk offending people. The fact is, there are 3 parallel sessions, and I can't pay close attention to talks for 3 days straight, especially after spending the previous week at the Shonan meeting that Suresh has been blogging about.

Perhaps it is apt to contrast it with the Shonan meeting. At Shonan there were many talks (often informal, with much back and forth) on topics clustered tightly around "Large-scale Distributed Computation". Several talks early in the workshop laid out the main techniques that have become quite powerful within an area, and then there were talks on recent breakthroughs. But although we mixed up the ordering of subtopics a bit, the context switches were never that large, and you could see larger views coalescing in people's minds throughout the week.

At SODA, the spectrum is much more diverse - probably the most diverse of any conference on the theoretical end of computer science. The great thing is that I get to see colleagues from a much broader range of areas. But the talks are often a bit more specific, and despite the sessions usually being fairly coherent, the context switches are typically quite a bit larger, and it seems harder to stay focused enough to really get at the heart of what is in each talk. Really getting the point requires both paying attention and being in the correct mindset to start with. Also, there are not too many talks in my areas of interest (i.e. geometry, big-data algorithmics).

So then what is there to report? I've spent most of my time in the hallways, catching up on gossip (which is either personal, or something I probably shouldn't blog about without tenure - or even with tenure), or discussing ongoing or new research problems with friends (again, not yet ready for a blog). And of the talks I saw, I generally captured vague notions or concepts, stored away for when I think about a related problem and need to make a similar connection, or look up a technique in the paper. And although I was given a CD of the proceedings, my laptop has no CD drive. For the reasons discussed above, I rarely completely get how something works from a short conference talk. Here are some example snippets of what I took away from a few talks:

Private Data Release Via Learning Thresholds | Moritz Hardt, Guy Rothblum, Rocco A. Servedio
Take-away : There is a deep connection between PAC learning and differential privacy. Some results from one can be applied to the other, but perhaps many others can be as well.
Submatrix Maximum Queries in Monge Matrices and Monge Partial Matrices, and Their Applications | Haim Kaplan, Shay Mozes, Yahav Nussbaum and Micha Sharir
Take-away: There is a cool "Monge" property that matrices can have which makes many submatrix query operations more efficient. This can be thought of as each row representing a pseudo-line. Looks useful for matrix problems where the geometric intuition about what the columns mean is relevant.
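
To make the property concrete: a matrix A is Monge if A[i][j] + A[i'][j'] <= A[i][j'] + A[i'][j] for all i < i' and j < j', and it suffices to check adjacent 2x2 submatrices. Here is a minimal check in Python (standard background only - the paper itself works with maximization and partial variants, not this test):

```python
def is_monge(A):
    """Check the Monge condition on a dense matrix.

    A is Monge iff A[i][j] + A[i'][j'] <= A[i][j'] + A[i'][j]
    for all i < i', j < j'; checking adjacent 2x2 blocks suffices.
    """
    n, m = len(A), len(A[0])
    for i in range(n - 1):
        for j in range(m - 1):
            if A[i][j] + A[i + 1][j + 1] > A[i][j + 1] + A[i + 1][j]:
                return False
    return True

# Example: A[i][j] = (i - j)**2 satisfies the condition.
A = [[(i - j) ** 2 for j in range(5)] for i in range(5)]
print(is_monge(A))  # True
```
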
Analyzing Graph Structure Via Linear Measurements | Kook Jin Ahn, Sudipto Guha, Andrew McGregor
Take-away: They presented a very cool linear sketch for graphs. This allows several graph problems to be solved in streaming (or similar) models, the way usually more abstract, if not geometric, data is. (ed: see my note on Andrew's talk at Shonan)
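
The key trick is linearity: encode each node as a signed edge-incidence vector, and the node vectors in any set sum to a vector supported exactly on the edges crossing the cut, since internal edges cancel. A toy Python illustration in the spirit of their approach (not their actual sketch, which uses small-space l0-samplers):

```python
import numpy as np

# Encode node u over all node pairs (a, b) with a < b:
#   +1 if edge (a, b) exists and u == a,
#   -1 if edge (a, b) exists and u == b,
#    0 otherwise.
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]  # a 5-cycle
pairs = [(a, b) for a in range(n) for b in range(a + 1, n)]
idx = {p: k for k, p in enumerate(pairs)}

x = np.zeros((n, len(pairs)))
for (a, b) in edges:
    x[a, idx[(a, b)]] = +1
    x[b, idx[(a, b)]] = -1

# Summing the vectors of C = {0, 1, 2} cancels internal edges,
# leaving exactly the edges crossing the cut (C, V \ C).
C = [0, 1, 2]
cut_vector = x[C].sum(axis=0)
print([pairs[k] for k in np.nonzero(cut_vector)[0]])  # [(0, 4), (2, 3)]

# Any linear map S commutes with that summation, so per-node
# sketches S @ x[u] can be merged to get a sketch of the cut.
S = np.random.randn(8, len(pairs))
assert np.allclose(S @ cut_vector, (S @ x[C].T).sum(axis=1))
```

Roughly, this is how connectivity falls out in a stream: merge node sketches Boruvka-style and sample a crossing edge from each merged sketch at every round.
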

Lsh-Preserving Functions and Their Applications | Flavio Chierichetti, Ravi Kumar
Take-away: They present a nice characterization of which similarities (based on combinatorial sets) can and cannot be used within an LSH framework. Their techniques seemed to be a bit more general than just these discrete similarities over sets, so if you need to use this for another similarity, it may be good to check out the paper in more detail.
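
As standard background (not the paper's contribution): Jaccard similarity is the canonical set similarity that does admit an LSH, via MinHash, where the collision probability of a random min-wise hash equals the similarity. A small Python sketch:

```python
import random

def minhash(s, seed):
    """One MinHash value: the minimum of a (salted) hash over the set."""
    salt = random.Random(seed).getrandbits(64)
    return min(hash((salt, x)) for x in s)

def estimate_jaccard(a, b, k=200):
    """Pr[minhash(a) == minhash(b)] equals the Jaccard similarity,
    so averaging collisions over k independent hashes estimates it."""
    return sum(minhash(a, i) == minhash(b, i) for i in range(k)) / k

A = set(range(0, 60))
B = set(range(30, 90))
print(estimate_jaccard(A, B))  # close to |A & B| / |A | B| = 30/90 = 1/3
```
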

Data Reduction for Weighted and Outlier-Resistant Clustering | Dan Feldman, Leonard Schulman
Take-away: They continue to develop the understanding of what can be done for coresets using sensitivity-based analysis. This helps outline not just which functions can be approximated with subsets as proxies, but also how the distribution of points affects these results. The previous talk by Xin Xiao (with Kasturi Varadarajan, on A Near-Linear Algorithm for Projective Clustering Integer Points) also used these concepts.
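
To illustrate the flavor of sensitivity-based sampling, here is a toy Python sketch for the k=1 mean (squared distance) case - not the Feldman-Schulman construction; the 2/n + ... bound below is a standard sensitivity upper bound for this special case, stated here as an assumption:

```python
import numpy as np

# Sensitivity of p = sup over centers c of cost(p, c) / total cost.
# For 1-mean, a standard upper bound (mu = centroid) is:
#   s(p) = 2/n + 2 * ||p - mu||^2 / sum_q ||q - mu||^2
rng = np.random.default_rng(0)
P = rng.normal(size=(1000, 2))
P[:20] += 15  # a few far-away points: high sensitivity, must be kept

mu = P.mean(axis=0)
sq = ((P - mu) ** 2).sum(axis=1)
s = 2 / len(P) + 2 * sq / sq.sum()      # sensitivity upper bounds
prob = s / s.sum()

m = 100                                  # coreset size
pick = rng.choice(len(P), size=m, p=prob)
C, w = P[pick], 1.0 / (m * prob[pick])   # sampled points and weights

# The weighted cost is an unbiased estimate of the true cost
# for *any* candidate center c.
for c in [mu, np.array([10.0, -3.0])]:
    true = ((P - c) ** 2).sum()
    approx = (w * ((C - c) ** 2).sum(axis=1)).sum()
    print(true, approx)  # should roughly agree
```

Sampling proportional to sensitivity, with inverse-probability weights, keeps the estimate unbiased for every center while ensuring the few high-influence outliers are likely to be retained.
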

There were many other very nice results and talks that I enjoyed, but the take-away was often even less interesting to blog about, or they just made more progress towards closing out a specific subarea. I am not sure how others use a conference, but if you are preparing a talk, you might consider building a clear, concise take-away message into it, so that people like me with finite attention spans can remember something precise from it - and so that people like me are more likely to look carefully at the paper the next time we work on a related problem.
