
Disclaimer: First of all, I know that not all NP-complete problems have a large 'search space' where they have to look for a solution, but a large number of the best-known ones do, so I will make this assumption, since this is a question about (known) algorithms and not about complexity theory. It probably applies more to discrete optimization problems, but I won't restrict it to them.

Context: Most of the algorithms I know for solving this type of NP-complete problem have some way of tossing out candidate solutions from the search space while the algorithm is running (think of branch-and-bound here, for example). However, while in the average and best cases this yields more or less effective reductions, in every example I could think of there is a way of constructing a worst-case instance where you have to go through all points in your search space. So much so that a colleague of mine suggests this is a fundamental property of NP-complete problems, at least (of course, if P = NP then this whole discussion is trivial).

The Problem: I believe there has to be an example of an NP-complete problem and an algorithm for solving it where you can always prune the search space while the algorithm is running, even in the worst case, even though this might only give you a constant (or, more generally, polynomial) improvement in worst-case runtime over an exhaustive-search algorithm. Of course I can think of trivial examples where you synthetically inflate the search space, but there you could shrink it a priori, so I am looking for a real algorithm for a real problem, which means you can usually only reduce the space during the execution of the algorithm.

Example: I'll illustrate all this with an example. Mixed-integer linear programming is known to be NP-hard. However, a lot of research has gone into the simplex algorithm, which is used on relaxations inside a branch-and-bound and usually lets you discard large portions of the search space. A very simple example of this is:

max x_1 + ... + x_n
w.r.t.
0 <= x_1 <= x_2 <= ... <= x_n <= N*Pi
x_2, x_4, x_6, ..., x_(floor(n/2)*2) integers

Here it is pretty obvious that you always want each x_i to be as large as possible, so you can leave out the rest of the search space. From the initial relaxation you would choose the largest feasible x_n and discard everything else. However, you can think of examples where this does not work:

max v_1 * x_1 + ... + v_n * x_n
w.r.t.
0 <= x_1, x_2, ..., x_n <= 1
w_1 * x_1 + ... + w_n * x_n <= W
x_1, ..., x_n integers

which is a 0-1 knapsack problem. Depending on the weights, the values, and the branching order of the branch-and-bound, you could have to test every single combination of the x_i to find the maximum.
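To make the pruning concrete, here is a minimal sketch (my own illustration in Python, not anyone's actual solver) of depth-first branch-and-bound for the 0-1 knapsack formulation above, using the fractional LP relaxation as an upper bound and assuming positive weights. The bound often cuts off whole subtrees, but an adversarial instance can defeat it, in which case the recursion degenerates towards checking all 2^n assignments.

```python
# Minimal branch-and-bound sketch for 0-1 knapsack (illustrative only).
def knapsack_bb(values, weights, W):
    n = len(values)
    # Sort items by value density so the fractional bound is easy to compute.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def upper_bound(k, cap):
        """LP-relaxation bound: greedily take items k.. , the last one fractionally."""
        bound = 0.0
        for i in range(k, n):
            if w[i] <= cap:
                cap -= w[i]
                bound += v[i]
            else:
                bound += v[i] * cap / w[i]
                break
        return bound

    best = 0

    def branch(k, cap, value):
        nonlocal best
        if value > best:
            best = value
        if k == n or cap == 0:
            return
        # Prune: even the relaxed optimum of this subtree cannot beat the incumbent.
        if value + upper_bound(k, cap) <= best:
            return
        if w[k] <= cap:                      # branch "take item k"
            branch(k + 1, cap - w[k], value + v[k])
        branch(k + 1, cap, value)            # branch "skip item k"

    branch(0, W, 0)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```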

Goens
  • Are you looking for something like the [DPLL](https://en.wikipedia.org/wiki/DPLL_algorithm) algorithm to solve the SAT Problem? – amit Jul 02 '15 at 10:03
  • amit: something like that, yes, but where you don't have to check every possibility in the worst case. In DPLL, if your formula has exactly one satisfying assignment and your recursive evaluation scheme is unfavorable, you can end up checking every single possibility – Goens Jul 03 '15 at 09:46

1 Answer


I’m not sure that “non-exhaustive” has a nice definition. I’ll try to answer anyway.

Take the problem of finding a maximum clique. We can parameterize the search space by a Boolean vector indicating whether each vertex belongs to the clique, making 2^n possibilities, none excludable a priori, but every n-vertex graph has at most 3^(n/3) ≤ 1.5^n maximal cliques, and even a fairly simple algorithm like Bron–Kerbosch achieves this bound up to polynomial factors. (The Wikipedia article describes subsequent improvements in the exponential base.)
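For concreteness, here is a sketch of Bron–Kerbosch with pivoting as described on the Wikipedia page; the adjacency-set representation and the example graph are my own choices, not part of the original answer.

```python
# Bron-Kerbosch with pivoting: enumerates all maximal cliques; with pivoting the
# worst-case running time is O(3^(n/3)) up to polynomial factors.
# The graph is a dict mapping each vertex to a set of its neighbours.
def bron_kerbosch(graph):
    cliques = []

    def expand(R, P, X):
        if not P and not X:
            cliques.append(R)          # R is a maximal clique
            return
        # Pivot on a vertex with many neighbours in P to cut down the branching.
        pivot = max(P | X, key=lambda u: len(P & graph[u]))
        for v in P - graph[pivot]:
            expand(R | {v}, P & graph[v], X & graph[v])
            P = P - {v}
            X = X | {v}

    expand(set(), set(graph), set())
    return cliques

# Triangle 0-1-2 plus a pendant vertex 3 attached to 2.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(bron_kerbosch(g))   # [{0, 1, 2}, {2, 3}] (order may vary)
```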

Another example is Hamiltonian path. There are n! different solutions in a complete graph, none excludable a priori, but there exists a dynamic program to find one that has running time 2^n poly(n).
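A sketch of that subset dynamic program (Bellman/Held–Karp style) for deciding whether a Hamiltonian path exists; the bitmask representation and the example graph are my own illustration.

```python
# 2^n * poly(n) subset DP: reachable[S][v] is True iff some path visits exactly
# the vertex set S (encoded as a bitmask) and ends at vertex v.
def has_hamiltonian_path(graph):
    n = len(graph)
    reachable = [[False] * n for _ in range(1 << n)]
    for v in range(n):
        reachable[1 << v][v] = True          # single-vertex paths
    for S in range(1 << n):
        for v in range(n):
            if not reachable[S][v]:
                continue
            for u in graph[v]:
                if not (S >> u) & 1:         # extend the path to an unvisited u
                    reachable[S | (1 << u)][u] = True
    full = (1 << n) - 1
    return any(reachable[full][v] for v in range(n))

# The path 0-1-2-3 exists in this graph, so the answer is True.
g = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(has_hamiltonian_path(g))   # True
```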

On the other hand, the Strong Exponential Time Hypothesis is that we can’t do much better than 2^n for n-variable satisfiability, which, if true, rules out an algorithm with a substantially better worst-case running time. In practice, the heuristics are so good that we use satisfiability as a reduction target for, e.g., checking the validity of combinatorial circuits. As far as I’m concerned, “NP-hard means that exhaustive search is as good as it gets” is a harmful oversimplification.
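To connect this to the DPLL discussion in the comments, here is a toy DPLL sketch with unit propagation; the clause representation and the naive branching rule are simplifying assumptions of mine, and real SAT solvers add clause learning, watched literals, and far better heuristics.

```python
# Toy DPLL sketch. A CNF formula is a list of clauses; each clause is a list of
# nonzero ints in DIMACS style (3 means x3, -3 means NOT x3).
def dpll(clauses):
    def simplify(cs, lit):
        """Assign lit = True: drop satisfied clauses, shrink the rest."""
        out = []
        for c in cs:
            if lit in c:
                continue                      # clause satisfied
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                   # empty clause: conflict
            out.append(reduced)
        return out

    def solve(cs):
        # Unit propagation: forced assignments shrink the search space for free.
        while True:
            units = [c[0] for c in cs if len(c) == 1]
            if not units:
                break
            cs = simplify(cs, units[0])
            if cs is None:
                return False
        if not cs:
            return True                       # all clauses satisfied
        lit = cs[0][0]                        # naive branching choice
        left = simplify(cs, lit)
        if left is not None and solve(left):
            return True
        right = simplify(cs, -lit)
        return right is not None and solve(right)

    return solve([list(c) for c in clauses])

print(dpll([[1, 2], [-1, 2], [-2, 3], [-3]]))   # False: unit propagation derives a contradiction
```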

David Eisenstat
  • The strong exponential time hypothesis would rule out a better worst-case running time, yes, in terms of scaling. If there is a way to exclude a single interpretation from checking in every case (I don't believe you can in 3-SAT, it's just for the sake of the example), then you are already always strictly better than the brute-force algorithm, even though the worst-case runtime complexity stays the same. – Goens Jul 03 '15 at 09:55
  • The maximum clique example is exactly what I was looking for! The Hamiltonian Path one might be a bit misleading, I am not sure. While n! might be the worst-case execution time of the brute-force algorithm on a complete graph, it's enough if you can find a graph (or family of graphs) where the dynamic programming algorithm does not do better than the brute-force one. The question is whether you consider your search space to be all possible paths, or whether you take finding the paths into account; in the latter case the example works as well! – Goens Jul 03 '15 at 10:02