Disclaimer: First of all, I know that not all NP-complete problems have a large 'search space' in which a solution has to be found, but many of the best-known ones do, so I will make this assumption, since this is a question about (known) algorithms and not about complexity theory. It probably applies most naturally to discrete optimization problems, but I won't restrict it to them.
Context: Most of the algorithms I know for solving this type of NP-complete problem have a way of discarding candidate solutions from the search space while the algorithm is running (think of branch-and-bound here, for example). However, while in the average and best cases this pruning is more or less effective, in every example I could think of there is a way of constructing a worst-case instance where you have to go through all points in the search space. So much so that a colleague of mine suggests this is a fundamental property of NP-complete problems (of course, if P = NP, this whole discussion is trivial).
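To make the kind of pruning I mean concrete, here is a minimal, generic branch-and-bound skeleton for maximization; the function names (`bound`, `branch`, `is_complete`, `value`) are placeholders of my own, not from any particular library:

```python
def branch_and_bound(root, bound, branch, is_complete, value):
    """DFS branch-and-bound for maximization. `bound(node)` must be an
    optimistic (upper) estimate of the best solution below `node`."""
    best_value, best_node = float("-inf"), None
    stack = [root]
    while stack:
        node = stack.pop()
        # The pruning step: toss out the entire subtree under `node`
        # if even an optimistic estimate cannot beat the incumbent.
        if bound(node) <= best_value:
            continue
        if is_complete(node):
            if value(node) > best_value:
                best_value, best_node = value(node), node
        else:
            stack.extend(branch(node))
    return best_value, best_node
```

The question is whether, for some NP-complete problem, this pruning test can be guaranteed to fire on every run, not just in the average case.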
The Problem: I believe there has to be an example of an NP-complete problem and an algorithm for solving it where you can always prune parts of the search space while the algorithm is running, even in the worst case, even though this might only buy you a constant (or, more generally, polynomial) improvement in worst-case runtime over an exhaustive-search algorithm. Of course, I can think of trivial examples where you synthetically inflate the search space, but there the space could be reduced a priori; I am looking for a real algorithm for a real problem, where the reductions typically only become possible during the execution of the algorithm.
Example: I'll illustrate all of this with an example. Mixed-integer linear programming is known to be NP-hard. However, a lot of work on the simplex algorithm, which is used on the relaxations inside a branch-and-bound, usually lets you discard large portions of the search space. A very simple example of this is:
max x_1 + ... + x_n
s.t.
0 <= x_1 <= x_2 <= ... <= x_n <= N*Pi
x_2, x_4, x_6, ..., x_(floor(n/2)*2) integers
Here it is pretty obvious that you always want each x_i to be as large as possible, so from the initial relaxation you would choose the largest feasible x_n and discard the rest of the search space.
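For concreteness, here is a sketch of this toy problem using scipy.optimize.milp (this assumes SciPy >= 1.9; the values of n and N are arbitrary choices of mine). The solver should dispose of it essentially without branching, since the relaxation already reveals the optimum:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

n, N = 6, 10
c = -np.ones(n)  # milp minimizes, so negate to maximize x_1 + ... + x_n

# Chain constraints x_i - x_{i+1} <= 0 encode x_1 <= x_2 <= ... <= x_n.
A = np.zeros((n - 1, n))
for i in range(n - 1):
    A[i, i], A[i, i + 1] = 1.0, -1.0
chain = LinearConstraint(A, ub=np.zeros(n - 1))

bounds = Bounds(lb=0.0, ub=N * np.pi)              # 0 <= x_i <= N*Pi
integrality = np.array([i % 2 for i in range(n)])  # x_2, x_4, ... integer

res = milp(c, constraints=chain, integrality=integrality, bounds=bounds)
print(res.x)  # all x_i = floor(N*Pi) = 31, as large as the constraints allow
```

However, you can think of examples where this kind of pruning does not work: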
max v_1 * x_1 + ... + v_n * x_n
s.t.
0 <= x_1, x_2, ..., x_n <= 1
w_1 * x_1 + ... + w_n * x_n <= W
x_1, ..., x_n integers
which is a 0-1 knapsack problem. Depending on the weights, the values, and the branching order, you may have to test every single combination of the x_i to find the maximum.
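To make this concrete, here is a sketch of a depth-first branch-and-bound for 0-1 knapsack that prunes with the standard fractional (LP-relaxation) bound; the instance at the bottom is made up for illustration. With adversarial weights and values the bound stops cutting anything off, and the search degenerates toward enumerating all 2^n assignments:

```python
def fractional_bound(values, weights, capacity, i, cur_value):
    """Optimistic bound: greedily fill the remaining capacity with items
    i, i+1, ... (sorted by value density), allowing one fractional item."""
    bound = cur_value
    for v, w in zip(values[i:], weights[i:]):
        if w <= capacity:
            bound += v
            capacity -= w
        else:
            return bound + v * capacity / w  # take a fraction of the last item
    return bound

def knapsack(values, weights, capacity):
    # Sort items by value density so the fractional bound is valid.
    order = sorted(range(len(values)), key=lambda j: -values[j] / weights[j])
    values = [values[j] for j in order]
    weights = [weights[j] for j in order]
    best = 0

    def search(i, cap, cur):
        nonlocal best
        if cur > best:
            best = cur
        if i == len(values):
            return
        # Pruning: skip the subtree if even the LP relaxation of the
        # remaining subproblem cannot beat the incumbent.
        if fractional_bound(values, weights, cap, i, cur) <= best:
            return
        if weights[i] <= cap:               # branch: take item i
            search(i + 1, cap - weights[i], cur + values[i])
        search(i + 1, cap, cur)             # branch: skip item i

    search(0, capacity, 0)
    return best

# Made-up instance for illustration:
print(knapsack([60, 100, 120], [10, 20, 30], 50))  # -> 220
```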