In recent years, many algorithmic problems have been revisited under the assumption that the input is so large that one can afford only restricted access to it, and one is willing to settle for an approximate solution. In this talk, I'll discuss joint work with C. Seshadhri in which we study the longest increasing subsequence (LIS) problem from this perspective in two distinct models of computation.
In the standard computational model we obtain an approximation algorithm that runs in time polylogarithmic in the input size n. For any chosen constant c > 0, the algorithm outputs an approximation to the length of the LIS that, with high probability, is accurate to within an additive error of cn.
In the data stream model, the input is not available for random access but instead arrives as a sequence of data items. The goal is to estimate the length of the LIS while minimizing the memory used. We present a simple algorithm that is accurate to within an additive cn error and uses O(log^2(n)/c) memory.
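For context, a minimal sketch of the exact baseline may be helpful. This is not the approximation algorithm from the talk; it is the classical patience-sorting method, which computes the exact LIS length in one pass but may need memory proportional to the LIS length itself (up to n in the worst case), which is precisely the cost the streaming algorithm above avoids. The function name `lis_length` is illustrative.

```python
import bisect

def lis_length(stream):
    # Patience sorting: tails[k] holds the smallest possible final
    # element of a strictly increasing subsequence of length k + 1.
    # tails is always sorted, so each item costs O(log n) time.
    tails = []
    for x in stream:
        i = bisect.bisect_left(tails, x)  # bisect_left => strict increase
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence so far
        else:
            tails[i] = x      # x gives a better (smaller) tail of length i + 1
    return len(tails)

print(lis_length([2, 1, 3, 4, 2, 5]))  # -> 4 (e.g. 1, 3, 4, 5)
```

Note that `tails` can grow to the full LIS length; the streaming algorithm described above trades exactness for a guaranteed O(log^2(n)/c) memory bound.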