David J. Aldous and Allon G. Percus
Freshman calculus tells us how to find a minimum of a smooth function $f(x)$: set the derivative $f'(x) = 0$ and check $f''(x) > 0$. The related series expansion tells us, for points $x$ near to the minimizing point $x_0$, how the distance $|x - x_0|$ relates to the difference $f(x) - f(x_0)$ in $f$-values: $f(x) - f(x_0)$ scales as $|x - x_0|^2$. This scaling exponent 2 persists for functions defined on multidimensional space: if $x_0$ is a local minimum and $x$ is near $x_0$, then $f(x) - f(x_0)$ scales as $|x - x_0|^2$ for a generic smooth function $f$.
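The reasoning behind this exponent is simply the second-order Taylor expansion at the minimum, spelled out here for completeness: since $f'(x_0) = 0$,
$$f(x) - f(x_0) = \tfrac{1}{2} f''(x_0)\,(x - x_0)^2 + O\!\big(|x - x_0|^3\big),$$
so when $f''(x_0) > 0$ the difference in $f$-values grows as the square of the distance from the minimum.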
Combinatorial optimization, exemplified by the
traveling salesman problem
(TSP: find the minimum-length tour through $n$ given points in the plane),
is traditionally viewed as a quite distinct subject, with theoretical
analysis focussed on the number of steps that algorithms require to find
the optimal solution.
To make a connection, compare an arbitrary tour $T$ with the optimal (minimum-length) tour $T_{\mathrm{opt}}$ by the two quantities
$$\varepsilon(T) = \frac{\operatorname{len}(T) - \operatorname{len}(T_{\mathrm{opt}})}{\operatorname{len}(T_{\mathrm{opt}})}, \qquad \delta(T) = \frac{1}{n}\,\#\{\text{edges of } T \text{ not in } T_{\mathrm{opt}}\}.$$
Our study of the TSP yields strong evidence that the scaling exponent is 3: as $\delta \to 0$, $\varepsilon$ scales as $\delta^{3}$.
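As a concrete illustration of these two quantities, here is a short Python sketch that computes $\varepsilon$ and $\delta$ for a pair of tours given as city orderings. It follows the normalizations written above (a reconstruction of the definitions, not a quotation of the original formulas), and the function names are purely illustrative.

```python
def tour_edges(tour):
    """Undirected edge set of a cyclic tour given as a list of city indices."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def tour_length(tour, dist):
    """Total length of the cyclic tour under the distance matrix dist."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def epsilon_delta(tour, opt_tour, dist):
    """Relative excess length (epsilon) and proportion of unshared edges (delta)."""
    n = len(tour)
    eps = tour_length(tour, dist) / tour_length(opt_tour, dist) - 1.0
    delta = len(tour_edges(tour) - tour_edges(opt_tour)) / n
    return eps, delta
```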
This conclusion is based on exact analytic methods in a mean-field
model of interpoint distances
(distances between pairs of points are random, independent for
different pairs, thus ignoring geometric constraints)
and on Monte Carlo simulations for random points in two- and three-dimensional space.
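In such simulations the exponent can be read off as the slope of a log-log fit of $\varepsilon$ against $\delta$ over an ensemble of near-optimal solutions. The sketch below shows one minimal way to perform such a fit; it is illustrative only and is not the authors' analysis code.

```python
import numpy as np

def fit_scaling_exponent(deltas, epsilons):
    """Least-squares slope of log(epsilon) versus log(delta).

    deltas, epsilons: paired measurements from near-optimal solutions;
    the returned slope is an estimate of the scaling exponent.
    """
    deltas = np.asarray(deltas, dtype=float)
    epsilons = np.asarray(epsilons, dtype=float)
    mask = (deltas > 0) & (epsilons > 0)  # logs require strictly positive values
    slope, _intercept = np.polyfit(np.log(deltas[mask]), np.log(epsilons[mask]), 1)
    return slope
```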
The new analytic results, to be detailed elsewhere, build upon a recent probabilistic
reinterpretation [2]
of work of Krauth and Mézard
[3]
establishing the average length of mean-field TSP tours.
Parallel but mathematically easier results show that for the
minimum spanning tree problem, a standard algorithmically easy
problem, the scaling exponent equals 2.
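Unlike the TSP, the minimum spanning tree can be computed exactly in polynomial time with off-the-shelf routines; the brief sketch below (illustrative only) computes the exact MST length for random points in the unit square.

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(points):
    """Exact total length of the minimum spanning tree on the given points."""
    dist = distance_matrix(points, points)    # all pairwise Euclidean distances
    return minimum_spanning_tree(dist).sum()  # exact polynomial-time solution

# Example: 500 uniform random points in the unit square.
rng = np.random.default_rng(0)
print(mst_length(rng.random((500, 2))))
```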
These results are intriguing for two reasons.
First, for a combinatorial optimization problem,
a larger exponent means that there are more near-optimal solutions,
suggesting that the algorithmic problem of finding the optimal
solution is intrinsically harder.
So scaling exponents may serve to separate
combinatorial optimization problems of an appropriate type
into a small set of classes of increasing difficulty.
For instance, the minimum matching and minimum Steiner tree problems
are expected to have scaling exponent 3,
and thus to be in the same class as the TSP in a quantitative sense,
a classification distinct from the qualitative one based on worst-case
computational complexity (the TSP and Steiner tree problems are NP-hard,
whereas minimum matching is solvable in polynomial time).
This parallels the notion of
universality class
in statistical physics.
Secondly, in the best-understood areas of statistical physics, such as
percolation, the mean-field scaling exponents are exact
for $d$-dimensional problems only for $d$ larger than some
critical dimension, which typically is at least 4
[4].
The fact that for the TSP the mean-field exponent is either exactly equal to,
or a close approximation to, the correct exponent in all dimensions
is therefore
quite unexpected,
and a convincing conceptual explanation remains to be discovered.