There are several prominent computational problems for which
simple iterative methods are widely preferred in practice despite
the absence of any runtime or performance analysis (or worse,
despite actual evidence that more sophisticated methods have
superior performance according to the usual criteria). These
situations raise interesting challenges for the analysis of
algorithms.
We are concerned in this work with one such simple method: a
classical iterative algorithm for balancing matrices via scaling
transformations. This algorithm, which goes back to Osborne and
Parlett & Reinsch in the 1960s, is implemented as a standard
preconditioner for eigenvalue computations in many numerical
linear algebra packages. Surprisingly, despite its widespread use
over several decades, no bounds were known on its rate of
convergence.
We prove the first such bound: a natural L_∞-norm variant
of the algorithm converges, on n × n matrices, in O(n³ log n)
elementary scaling transformations. This is tight up to the log n
factor. We also prove a conjecture of Chen that characterizes
those matrices for which the limit of the balancing process is
independent of the order in which balancing operations are
performed.
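To make the balancing operation concrete, here is a minimal sketch of the classical Osborne-style iteration in its common L₂ form, written in Python with NumPy. This is an illustrative reconstruction, not the authors' exact L_∞ variant or their analysis: each step conjugates A by a diagonal scaling D, chosen so that row i and column i have equal off-diagonal norm, which preserves the eigenvalues of A.

```python
import numpy as np

def osborne_balance(A, sweeps=100, tol=1e-12):
    """Sketch of the classical Osborne balancing iteration (L_2 form).

    Repeatedly applies similarity transformations A -> D^{-1} A D with
    diagonal D, equalizing the off-diagonal norms of each row/column
    pair. Illustrative only; the paper analyzes an L_infinity variant.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(sweeps):
        converged = True
        for i in range(n):
            # Off-diagonal L_2 norms of row i and column i.
            r = np.sqrt(sum(A[i, j] ** 2 for j in range(n) if j != i))
            c = np.sqrt(sum(A[j, i] ** 2 for j in range(n) if j != i))
            if r == 0 or c == 0:
                continue  # nothing to balance for this index
            # With D = diag(1, ..., f, ..., 1) (f in slot i), the new
            # row norm is r/f and the new column norm is c*f; setting
            # them equal gives f = sqrt(r/c).
            f = np.sqrt(r / c)
            if abs(f - 1.0) > tol:
                converged = False
            A[i, :] /= f
            A[:, i] *= f
        if converged:
            break
    return A
```

For example, balancing [[1, 100], [0.01, 1]] scales the large off-diagonal pair 100 and 0.01 to equal magnitudes while leaving the eigenvalues unchanged; numerical packages apply such a pass as a preconditioner before eigenvalue computation.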
J. ACM, 2017. Joint work with Alistair Sinclair.

## Date:

Mon, 20/11/2017 - 14:00 to 15:00

## Location:

Room 130, Feldman Building, Givat Ram