Monday, June 21, 2010

3SUM Puzzle

A puzzle courtesy of Mohan Paturi:


Given a set S of n numbers, the 3SUM problem asks whether there exist a, b, c ∈ S such that a+b+c=0. It is generally believed that this should take around n² time.
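(For reference, here is the standard quadratic approach; this sketch and the name three_sum are my own, and it is not the puzzle's intended solution: sort the set, then for each element sweep two pointers toward each other.)

    # Textbook O(n^2) 3SUM via sort + two pointers; illustrative sketch only.
    def three_sum(S):
        A = sorted(S)
        n = len(A)
        for i in range(n):
            lo, hi = i + 1, n - 1              # look for two elements summing to -A[i]
            while lo < hi:
                s = A[i] + A[lo] + A[hi]
                if s == 0:
                    return A[i], A[lo], A[hi]
                if s < 0:
                    lo += 1                    # sum too small: advance the left pointer
                else:
                    hi -= 1                    # sum too large: retreat the right pointer
        return None

    assert three_sum([-5, 1, 4, -2, 7]) == (-5, -2, 7)
    assert three_sum([1, 2, 3]) is None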

But let's assume |S+S| = t ≪ n². Show that the problem can be solved in O(t lg²n) time.

Any ideas for a better bound?

Wednesday, June 9, 2010

Journals

There is a fraction of our community that is in love with the idea of publishing in journals, and would like to see our conferences go away or assume a radically different role. In many cases, it seems to me that the driving force behind this idea is nothing more than snobbery. Basically, the argument goes like this: (1) I think Mathematicians are cool; (2) Mathematicians publish in journals.


If one is willing to admit that a field can persist in stupidity for a long time due to tradition, then one has to also entertain the possibility that Computer Science is the grown-up in the house, while Mathematics is stuck with remnants of an era when travel was hard, and presenting the graduate students to a welcoming community was not considered important.

But it's not possible to poke fun at journals without the pedantic person in the room jumping up and down about how conference publications are not formally verified. This view is, in my opinion, entirely deserving of a counterpoint to Lance's essay, one that could be entitled "Time for Theoretical Computer Scientists to Stop Believing Fairy Tales".

There are basically two reasons to believe a paper is correct, neither of which is that some bored editor used up some brownie points with a friend, who(se student) gave the paper a quick read while watching a World Cup game:
  1. The authors thought seriously about it and wrote down all the details. Regardless of what you think about journals, this should already be achieved at conference level. Yes, a conference is an announcement -- but I care when you announce "I've done this!" rather than "I'm reasonably sure I can do this." It is beyond my comprehension why conferences do not require full proofs (despite several successful attempts in the past).

  2. Interested people read it. Yesterday, Timothy Chan sent me a breakthrough paper. Between giving two talks, kayaking on the Charles, and driving back from STOC, I really couldn't read it. But today I read it, and flipped it upside down in my mind until I got it. The value of putting such a paper in a journal? (cdr '(a))
If you tend to write readerless publications, abolishing the journal system might create a distinct feeling of loneliness, as you can't even be sure that you have a few captive readers. My heart bleeds for you.

But if/when people are interested in your paper, it will be checked for correctness. I've been impressed many times by how well this works, and by the dedication reviewers put in during the short time frame of conference review. Many bugs get caught at the STOC/FOCS level --- by people who care. And if a bug is not caught in time, there's no loss: it will be caught when the paper becomes interesting. (One could certainly imagine improvements to the system, like a requirement that all papers be on arXiv with requests for clarifications/bug reports/discussions posted below the fold. But that's a different crusade.)

To summarize, the journal vs. conference debate is an easy cosmetic change that we can pursue to feel like we're changing something, but it's beside the point if correctness is what we want. We should instead be tackling the real issue: how to increase the quality and readership of our papers. Can we reduce the (perceived) expectation on the number of papers one should publish? Fight the Least Publishable Unit philosophy? Achieve more unity in the field? Reduce the number of conferences, while also allowing smaller results to appear (posters anyone?)...

Funnily enough, this is well aligned with another hot goal of the day: increasing conference participation. If papers have more readers and people don't need to travel to the Kingdom of Far Far Away to publish uninteresting papers, more people will probably show up at STOC/FOCS/SODA.

Wednesday, June 2, 2010

Representing a vector

This is the second post about "Changing Base without Losing Space." The main result of the paper is, if I am allowed to say it, rather amazing:
On a binary computer, you can represent a vector of N decimal digits using ⌈N·log₂10⌉ bits, and support reading/writing any digit in constant time.
Of course, this works for any alphabet Σ; decimal digits are just an example.
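As a quick sanity check on the numbers (an illustration of my own, not from the paper), compare the bound with the naive one-digit-per-nibble encoding:

    # N = 1000 decimal digits: naive encoding vs. the bound above.
    from math import ceil, log2
    N = 1000
    print(4 * N)               # 4000 bits: one digit per 4-bit nibble (BCD)
    print(ceil(N * log2(10)))  # 3322 bits: the bound above, i.e. ceil(lg 10^N),
                               # the entropy of the whole vector rounded up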

Unlike the prefix-free coding from last time, this may never find a practical application. However, one can hardly imagine a more fundamental problem than representing a vector on a binary computer --- this is a statement that could have been discovered in 1950 or in 2050 :) Somebody characterized our paper as "required reading for computer scientists." (In fact, if you teach Math-for-CS courses, I would certainly encourage you to teach this kind of result for its entertainment value.)

To understand the solution, let's think back to the table for prefix-free coding from last time:
Here is the reasoning process behind the first few operations:
  1. I have two symbols from alphabet B+1. I need an output alphabet of B, so let's split them into a letter from B, and whatever is left (in particular, a letter from B+3).
  2. I have a letter from B+3. Can I combine it with another letter to make something close to a power of 2? Yes, I should use a letter from alphabet B-3, since (B+3)(B-3) is close to B².
  3. How can I get a letter from B-3? Take the next two input symbols, and split them into a letter from B-3, and whatever is left (in particular, a letter from B+6).
To generalize, let's assume we want to code something from an alphabet X (not a power of two). We can store it in binary without loss of space, if we have another, unrelated, symbol to code (an information carrier). In a picture:
If I want to output M bits (M≫lg X), I have to combine the symbol from X with a symbol from Y=⌊2^M/X⌋. The loss of entropy will be lg(Y+1) - lg Y = O(1/Y), since the floor function could convert Y+0.9999 into Y.

Now I have to get a symbol from Y. This is possible if my information carrier came from some alphabet T ≥ Y. Then, I can break it into a symbol from Y, and one from Z=⌈T/Y⌉. Again, the entropy loss is lg Z - lg(Z-1)=O(1/Z), since the ceiling can convert Z-0.9999 into Z.

To balance the losses, set Y ≈ Z ≈ √T. That is, by having a large enough information carrier, I can make the loss negligible. In particular, if I apply the information carrier N times, I could set T ≈ N², meaning that I only lose O(1/N) bits per application, and only a fraction of a bit overall! (This fraction of a bit will become one entire bit at the end, since I need to write the last symbol in binary.)
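To make the bookkeeping concrete, here is a toy rendering of one carrier step in code (my own sketch; the names carry_step/carry_unstep and the parameters X, T, M below are illustrative choices, not the paper's):

    from math import log2

    def carry_step(x, X, t, T, M):
        # Pack x (from alphabet X) into M bits by pairing it with y, peeled off
        # the carrier t (from alphabet T); return the M-bit word, plus the
        # leftover z and its alphabet Z, to be carried forward.
        assert 0 <= x < X and 0 <= t < T
        Y = (1 << M) // X        # Y = floor(2^M / X); loss lg(Y+1) - lg Y = O(1/Y)
        assert Y <= T            # the carrier must be big enough to supply y
        y, z = t % Y, t // Y     # split the carrier: y in [0, Y), z in [0, Z)
        Z = -(-T // Y)           # Z = ceil(T / Y); loss lg Z - lg(Z-1) = O(1/Z)
        word = x * Y + y         # fits in M bits, since x*Y + y < X*Y <= 2^M
        return word, z, Z

    def carry_unstep(word, X, M, z):
        # Invert carry_step: recover x and the carrier t from the word and z.
        Y = (1 << M) // X
        x, y = divmod(word, Y)
        return x, z * Y + y

    # Usage: X = 10 (a decimal digit), carrier alphabet T = 10^6, M = 13 bits.
    word, z, Z = carry_step(7, 10, 123456, 10**6, 13)
    assert carry_unstep(word, 10, 13, z) == (7, 123456)
    # Entropy accounting: M bits plus a leftover from Z store lg 10 + lg 10^6
    # bits of data; the waste is a small fraction of a bit.
    print(13 + log2(Z) - log2(10) - log2(10**6))

With these toy parameters, the printed loss is under a hundredth of a bit.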

Thus, in principle, I could encode an array of digits by grouping them into blocks of O(lg N) digits (making T, the alphabet of a block, large enough). Then, I can iteratively use one block as the information carrier for the leftover of the previous block (the Z value from the previous block). The crucial observation is that, to decode block i, we only need to look at memory locations i (giving the Y component) and i+1 (giving the Z component). Thus, we have constant time access!

The one questionable feature of this scheme is that it requires O(N) precomputed constants, which is cheating. Indeed, the alphabets Y and Z change chaotically from one iteration to the next (the next Y is dictated by the previous Z, "filling it" to a power of two). There seems to be no pattern to these values, so I actually need to store them.

One can get away with just O(lg N) constants by applying the information carrier idea in a tree fashion. The alphabets will vary from level to level, but are identical on one level by symmetry. See the paper for complete details.

I will talk about my second STOC paper ("Towards Polynomial Lower Bounds for Dynamic Problems") after the conference.

Prefix-Free Codes

I am told the least this blog could do is to talk about my own results :) So here goes.

Next week at STOC, I am presenting "Changing Base without Losing Space," a joint paper with Mikkel Thorup and Yevgeniy Dodis. The paper and slides can be found here. The paper contains two results achieved by the same technique. Today, I will talk about the simpler one: online prefix-free codes.

The problem is to encode a vector of bits of variable length in a prefix-free way; that is, the decoder should be able to tell when the code ends. (Note on terminology: In information theory, this is called "universal coding"; prefix-free is about coding letters from a fixed alphabet, e.g. the Huffman code is prefix-free.)

Let N be the (variable) length of the bit vector. Here are some classic solutions (known as Elias codes; the first two are sketched in code after the list):
  1. A code of 2N bits: after each data bit, append one bit that is 0 for end-of-file (EOF) or 1 if more data is coming;
  2. A code of N+2lg N bits: at the beginning of the message, write N by code 1; then write the bit vector.
  3. A code of N+lg N+2lglg N bits: at the beginning, write N by code 2; then write the bit vector.
  4. Recursing, one obtains the optimal size of N+lg N+lglg N+...+O(lg*N)
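Here is a small sketch of codes 1 and 2 (my own, operating on Python strings of '0'/'1' for readability; the names code1/code2/decode2 are mine):

    def code1(data):
        # Code 1 (2N bits): each data bit is followed by a flag, 1 = more data, 0 = end.
        n = len(data)
        return "".join(b + ("0" if i == n - 1 else "1") for i, b in enumerate(data))

    def code2(data):
        # Code 2 (N + 2 lg N bits): write the length N via code 1, then the raw data.
        return code1(format(len(data), "b")) + data

    def decode2(stream):
        # Read one code-2 message off the front of an arbitrary bit stream.
        bits, i = "", 0
        while True:                    # parse the code-1 encoded length
            bits += stream[i]
            i += 2
            if stream[i - 1] == "0":   # flag 0: the length is complete
                break
        n = int(bits, 2)
        return stream[i:i + n]

    msg = "110100111"
    assert decode2(code2(msg) + "0110") == msg   # trailing bits are ignored: prefix-free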
Prefix-freeness turns out to be very useful in (practical) cryptography. For instance, if I want a signature of a file, I could use some standard cypher that works on small blocks (say, AES on 128 bits), and chain it:
However, this is only secure if the input is prefix-free, or otherwise we are vulnerable to extension attacks:
This creates the need for online prefix-free codes: I want to encode a stream of data (in real time, with little buffering), whose length is unknown in advance. In this setting, the simplest solution using 2N bits still works, but the others don't, since they need to write N at the beginning. In fact, one can "rebalance" the 2N solution into an online code of size N+O(√N): append a bit after each block of size √N, wasting a partially-filled block at the end. Many people (ourselves included) believed this to be optimal for quite some time...
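One concrete way I can see to realize such a rebalancing online (my own rendering, with my own padding convention, not the paper's): let block j carry j data bits followed by one flag bit, so no advance knowledge of N is needed; roughly √(2N) flag bits plus at most one partially wasted block give N+O(√N).

    def encode_online(bits):
        # Block j carries j data bits, then a flag (1 = more data follows).
        # A stream ending mid-block is padded with a single "1" and then zeros.
        out, pos, j = [], 0, 1
        while pos + j <= len(bits):                # emit full blocks while data lasts
            out.append(bits[pos:pos + j] + "1")
            pos += j
            j += 1
        tail = bits[pos:]                          # fewer than j bits remain
        out.append(tail + "1" + "0" * (j - len(tail) - 1) + "0")
        return "".join(out)

    def decode_online(stream):
        out, pos, j = [], 0, 1
        while True:
            block, flag = stream[pos:pos + j], stream[pos + j]
            pos += j + 1
            if flag == "0":                        # last block: strip the "1" + zero pad
                return "".join(out) + block[:block.rindex("1")]
            out.append(block)
            j += 1

    msg = "1101001110100"
    assert decode_online(encode_online(msg)) == msg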

However, our paper presents an online code with ideal parameters: the size is N+lg N+O(lglg N), the memory is only O(lg N), and the encoding is real time (constant time per symbol). Since the solution is simple and practical, there is even reason to hope that it will become canonical in future standards!

So, how do we do it? I will describe the simplest version, which assumes the input comes in blocks of b bits and that b≫2lg N (quite reasonable for b=128 as in AES). Each block is a symbol from an alphabet of size B=2^b. We can augment this alphabet with an EOF symbol; in principle, this should not cost much, since lg(B+1)≈lg B for large B. More precisely, N symbols from an alphabet of B+1 have entropy N·lg(B+1) = N·b+O(N/B) bits, so there's negligible loss if B ≫ N.

The problem, though, is to "change base without losing space": how can we change from base B+1 (not a power of two) into bits in real time? A picture is worth 1000 words:
We can think of two continuously running processes that regroup two symbols into two symbols of different alphabets:
  • Split: Two input symbols in alphabet B+1 are changed into two symbols in alphabets B-3i and B+3(i+1), for i=0,1,2,... This works as long as (B-3i)(B+3i+3) ≥ (B+1)², which is always the case for N² ≤ B/4 (hence the assumption b≫2lg N).

  • Merge: Two input symbols in alphabet B-3i and B+3i are regrouped into two symbols in alphabet B, which can be written out in binary (b bits each). This is always possible, since (B-3i)(B+3i) ≤ B².
Encoding and decoding are fast and simple: they amount to a few multiplications and divisions for each block. And that's it!
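Here is a toy rendering of the pipeline in code (my own, with a small b for readability; the names encode/decode_pair and the way the final leftover is flushed are my choices, while the real construction streams the output bit by bit and folds the leftover in more carefully):

    def encode(symbols, b):
        # symbols: even-length list of values in [0, B], where B = 2**b and the
        # value B plays the role of EOF. Returns a list of b-bit output words.
        B = 1 << b
        out, r = [], None                     # r = leftover symbol, alphabet B + 3i
        for i in range(len(symbols) // 2):
            lo, hi = B - 3 * i, B + 3 * (i + 1)
            assert lo * hi >= (B + 1) ** 2    # the Split condition (needs N^2 <= B/4)
            # Split: two input symbols -> x in [0, lo) and a new leftover in [0, hi)
            pair = symbols[2 * i] * (B + 1) + symbols[2 * i + 1]
            x, new_r = pair % lo, pair // lo
            # Merge: old leftover (alphabet B + 3i) with x (alphabet B - 3i) -> base B
            if i == 0:
                out.append(x)                 # no leftover yet: x is already in [0, B)
            else:
                v = r * lo + x                # v < (B + 3i)(B - 3i) <= B^2
                out += [v // B, v % B]
            r = new_r
        out += [r // B, r % B]                # flush the final leftover
        return out

    def decode_pair(out, i, b):
        # Recover input symbols 2i and 2i+1 from output blocks i and i+1 only.
        B = 1 << b
        lo = B - 3 * i
        x = out[0] if i == 0 else (out[2 * i - 1] * B + out[2 * i]) % lo
        nxt = out[2 * i + 1] * B + out[2 * i + 2]
        r = nxt // (B - 3 * (i + 1)) if 2 * i + 3 < len(out) else nxt
        pair = r * lo + x
        return [pair // (B + 1), pair % (B + 1)]

    # Usage: b = 8, so B = 256; the input alphabet is {0, ..., 256}, with 256 = EOF.
    data = [200, 13, 256, 7]                  # two data symbols, EOF, one pad symbol
    words = encode(data, 8)
    assert [s for i in range(2) for s in decode_pair(words, i, 8)] == data

Note that decode_pair touches only output blocks i and i+1; this is exactly the locality that gives constant-time access in the vector-representation result above.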

Tuesday, June 1, 2010

MADALGO summer school

MADALGO is organizing a Summer School on Geometric Data Structures on Aug 16-19 in Aarhus, Denmark. Registration and local food are free (with a capacity limit), but you have to get there on your own dime.


The speakers are Timothy Chan, Sariel Har-Peled, John Iacono, and yours truly. We will strive to make this a very instructive event, and I would encourage you to attend.

Avner Magen

On May 29, Avner died in an avalanche while climbing in Alaska. A memorial blog has been set up here.


Like research, climbing is about fighting soulless nature. A mountain peak or a proof takes no pity on our attempts --- each is as hard to reach as it has been since time immemorial. When we succeed in reaching the top, our flesh and soul are the mark of our superiority, for we can feel a joy that nature does not share. But when we fail, our flesh and soul are our sorrow.

Rest in peace, Avner. Researchers and climbers have enjoyed your company.