
Combinatorial Optimization Meets Calculus: Scaling Exponents for TSP

David J. Aldous and Allon G. Percus

Freshman calculus tells us how to find a minimum $x_*$ of a smooth function $f(x)$: set the derivative $f^\prime(x_*) = 0$ and check $f^{\prime \prime}(x_*) > 0$. The related series expansion tells us, for points $x$ near $x_*$, how the distance $\delta = \vert x-x_*\vert$ relates to the difference $\varepsilon = f(x) - f(x_*)$ in $f$-values: $\varepsilon$ scales as $\delta^2$. This scaling exponent $2$ persists for functions defined on multidimensional space: if $x_*$ is a local minimum and $\varepsilon(\delta) = \min\{f(x)-f(x_*) : \vert x-x_*\vert = \delta\}$, then $\varepsilon(\delta)$ scales as $\delta^2$ for a generic smooth function $f$.
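To make the exponent explicit in one dimension, expand $f$ about $x_*$, using $f^\prime(x_*) = 0$ and the generic condition $f^{\prime \prime}(x_*) > 0$:

\begin{displaymath}\varepsilon = f(x_* \pm \delta) - f(x_*) = \tfrac{1}{2} f^{\prime \prime}(x_*)\, \delta^2 + O(\delta^3), \end{displaymath}

so $\varepsilon$ scales as $\delta^2$, with prefactor $\tfrac{1}{2} f^{\prime \prime}(x_*)$.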

Combinatorial optimization, exemplified by the traveling salesman problem (TSP: find the minimum length tour through $n$ points in the plane), is traditionally viewed as a quite distinct subject, with theoretical analysis focussed on the number of steps that algorithms require to find the optimal solution. To make a connection, compare an arbitrary tour ${\bf x}$ with the optimal (minimum-length) tour ${\bf x}_*$ by the two quantities

\begin{eqnarray*}
\delta_n({\bf x}) & = & \mbox{\{number of edges in ${\bf x}$ but not in ${\bf x}_*$\}} / n \\
\varepsilon_n({\bf x}) & = & \mbox{\{length difference between ${\bf x}$ and ${\bf x}_*$\}} / s(n)
\end{eqnarray*}



where $s(n)$ is the length of the optimal tour ${\bf x}_*$. Now define $\varepsilon_n(\delta)$ to be the minimum value of $\varepsilon_n({\bf x})$ over all tours ${\bf x}$ for which $\delta_n({\bf x}) \geq \delta$. Although the function $\varepsilon_n(\delta)$ will depend on $n$ and the problem instance, we anticipate that for typical instances drawn from a suitable probability model it will converge in the $n \to \infty$ limit to some deterministic function $\varepsilon(\delta)$. The universality paradigm from statistical physics [1] suggests there might be a scaling exponent $\alpha$ defined by

\begin{displaymath}\varepsilon (\delta) \sim \delta^\alpha \mbox{ as } \delta \to 0 \end{displaymath}

and that the exponent should be robust under model details.
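As a concrete illustration of these definitions (a minimal sketch, not code from the study itself: the tour representation and the helper names are our own choices), the following computes $\delta_n({\bf x})$ and $\varepsilon_n({\bf x})$ for a tour compared against the optimal tour on the same point set, and estimates $\alpha$ by a least-squares fit of $\log \varepsilon$ against $\log \delta$:

\begin{verbatim}
import numpy as np

def tour_edges(tour):
    # Undirected edge set of a closed tour, given as a list of point indices.
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def tour_length(tour, points):
    # Total Euclidean length of the closed tour through `points` (an n-by-d array).
    n = len(tour)
    return sum(np.linalg.norm(points[tour[i]] - points[tour[(i + 1) % n]])
               for i in range(n))

def delta_eps(tour, opt_tour, points):
    # delta_n: fraction of edges of `tour` that are absent from the optimal tour.
    # eps_n:   length excess of `tour`, relative to s(n), the optimal tour length.
    n = len(tour)
    delta = len(tour_edges(tour) - tour_edges(opt_tour)) / n
    s_n = tour_length(opt_tour, points)
    eps = (tour_length(tour, points) - s_n) / s_n
    return delta, eps

def fit_alpha(deltas, epsilons):
    # Slope of log(eps) versus log(delta): the scaling exponent alpha
    # in eps(delta) ~ delta**alpha as delta -> 0.
    slope, _ = np.polyfit(np.log(deltas), np.log(epsilons), 1)
    return slope

# Sanity check on synthetic data generated with exponent 3.
d = np.array([0.01, 0.02, 0.05, 0.1])
print(fit_alpha(d, 2.0 * d ** 3))   # approximately 3.0
\end{verbatim}

In practice the fit would be applied only to the small-$\delta$ portion of the measured $\varepsilon_n(\delta)$ curve, since the power law is asserted only in the $\delta \to 0$ limit.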

Our study of the TSP yields strong evidence that the scaling exponent is $3$. This is based on exact analytic methods in a mean-field model of interpoint distances (distances between pairs of points are random, independent for different pairs, thus ignoring geometric constraints) and on Monte Carlo simulations for random points in $2$-, $3$- and $4$-dimensional space. The new analytic results, to be detailed elsewhere, build upon a recent probabilistic reinterpretation [2] of work of Krauth and Mézard [3] establishing the average length of mean-field TSP tours. Parallel but mathematically easier results show that for the minimum spanning tree problem, a standard algorithmically easy problem, the scaling exponent equals $2$.
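For concreteness, here is a minimal sketch of the two kinds of random instance (the particular length distribution used for the mean-field links is purely an illustrative choice):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Euclidean instance: n random points in the unit square (d = 2); the
# interpoint distances obey geometric constraints such as the triangle
# inequality.
points = rng.random((n, 2))
euclid = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

# Mean-field instance: each pairwise "distance" is drawn independently
# of all the others, so the geometric constraints disappear.
upper = np.triu(rng.random((n, n)), k=1)
mean_field = upper + upper.T
\end{verbatim}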

Figure 1: The optimal tour (white) and a near-optimal tour (blue) in a TSP instance with 256 points placed randomly on the plane.

These results are intriguing for two reasons. First, for a combinatorial optimization problem, a larger exponent means that there are more near-optimal solutions, suggesting that the algorithmic problem of finding the optimal solution is intrinsically harder. So scaling exponents may serve to separate combinatorial optimization problems of an appropriate type into a small set of classes of increasing difficulty. For instance, the minimum matching and minimum Steiner tree problems are expected to have scaling exponent $3$, and thus to be in the same class as the TSP in this quantitative sense, as distinct from qualitative classifications based on worst-case computational complexity. This parallels the notion of universality class in statistical physics.

Second, in the best-understood statistical physics areas such as percolation, the mean-field scaling exponents are exact for $d$-dimensional problems only for $d$ larger than some critical dimension, which typically is at least $4$ [4]. The fact that for the TSP the mean-field exponent is exactly, or very nearly, the correct exponent in all dimensions $d \geq 2$ is therefore quite unexpected, and a convincing conceptual explanation remains to be discovered.



