
I have developed an algorithm that combines randomized search and simulated annealing to solve the Movie Scenes Scheduling problem, which consists of proposing the shooting sequence that minimizes cost (actor wages per take), given a maximum number of takes you can shoot per day (per session).
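To make the session structure concrete: a solution is a shooting order, and the schedule groups it into days of at most max_takes takes. A minimal illustration (sessions is just an illustrative helper, not part of my code):

```python
def sessions(solution, max_takes):
    # Split a shooting order into consecutive days of at most max_takes takes.
    return [solution[i:i + max_takes] for i in range(0, len(solution), max_takes)]

# e.g. sessions((3, 1, 4, 2, 0), 2) gives [(3, 1), (4, 2), (0,)]
```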

Now, I need to find the complexity of the proposed algorithm.

The algorithm does the following.

def search_and_sim_annealing(costs, max_takes, temp, N, reps: int = 3):
  best_solution = []
  best_cost = float('inf')  # was a hard-coded 100, which fails if every cost is >= 100
  for rep in range(reps):
    solution = random_search(costs, max_takes, N)
    SA_sol = simulated_annealing(solution, max_takes, costs, temp)

    cost_SA = calculate_cost(SA_sol, max_takes, costs)

    if cost_SA < best_cost:
      best_solution = SA_sol
      best_cost = cost_SA

  return best_solution

First function (random_search):

  1. Generate a random solution using random.shuffle (O(n)).
  2. Calculate the cost of the solution (O(n*m)).
  3. Repeat N times to generate N random solutions, calculating the cost of each one.
  4. Save the best solution as the starting point for the simulated annealing algorithm.

From what I understand, up to this function the complexity is O(N.n.n.m) = O(N.m.n^2).
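The steps above can be sketched like this (a sketch, not my exact code: it assumes one cost entry per scene, and the one-line calculate_cost is only a placeholder for the real O(n*m) routine):

```python
import random

def random_search(costs, max_takes, N):
    order = list(range(len(costs)))  # assumed: one cost entry per scene
    best_solution, best_cost = None, float("inf")
    for _ in range(N):                                         # step 3: N candidates
        random.shuffle(order)                                  # step 1: O(n)
        cost = calculate_cost(tuple(order), max_takes, costs)  # step 2: O(n*m)
        if cost < best_cost:                                   # step 4: keep the cheapest
            best_solution, best_cost = tuple(order), cost
    return best_solution

def calculate_cost(solution, max_takes, costs):
    # Placeholder so the sketch runs on its own; substitute the real routine.
    return sum(pos * costs[scene] for pos, scene in enumerate(solution))
```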

Second function (simulated_annealing):

  1. Calculate the cost of the parameter solution (O(n*m)).
  2. While loop that runs until the temperature drops below a threshold... (I've counted I = 2292 iterations) -> O(log I)?
  3. Generate a neighbor solution (see code below) -> complexity: O(2n) or O(n^2)?
import random

def generate_neighbor(solution):
  # pick two positions (position 0 stays fixed) and swap their scenes
  i, j = sorted(random.sample(range(1, len(solution)), 2))
  sol_list = list(solution)
  neighbor = tuple(sol_list[:i] + [sol_list[j]] + sol_list[i+1:j] + [sol_list[i]] + sol_list[j+1:])
  return neighbor
  4. Calculate the cost of the neighbor solution -> O(n.m).
  5. Compare its cost with the previous best solution.
  6. Repeat the process R times for R solutions and keep the best of them -> O(R).

So it would be O(R.n.m.n.m.(n^2).(log I))... that is, O(R.n^4.m^2.log I).
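About the I iterations of the while loop: if lower_temp multiplies the temperature by a constant factor alpha (an assumption, since lower_temp is not shown here), then I is fixed by the start temperature and the threshold, I = ceil(log(temp0/threshold) / log(1/alpha)). A small check (alpha = 0.99 is an assumed value):

```python
import math

def cooling_iterations(temp, threshold=1e-4, alpha=0.99):
    # Count how many times the loop body runs for a geometric cooling schedule.
    count = 0
    while temp > threshold:  # same condition as the SA loop below
        temp *= alpha
        count += 1
    return count
```

For temp = 100 and alpha = 0.99 this returns 1375, matching the closed form; an alpha closer to 1 (e.g. around 0.994) would give a count near the 2292 iterations counted above.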

If we concatenate those two, it would be O(N.m.n^2) followed by O(R.n^4.m^2.log I)... but I'm quite unsure how to combine them; I've never done this kind of calculation for an algorithm like this.

The idea of this algorithm is to have a complexity significantly lower than the O(n!) of the greedy algorithm I previously coded.

Thanks for your insights.


Here follows the code for the SA function:


def simulated_annealing(solution, max_takes, costs, temp):

  ref_solution = solution
  ref_cost = calculate_cost(ref_solution, max_takes, costs)

  best_solution = ref_solution  # start from the input solution instead of
  best_cost = ref_cost          # a hard-coded 100, so the result is never empty

  K = 0  # iteration counter (the I counted above)
  while temp > .0001:
    K += 1
    neighbor = generate_neighbor(ref_solution)
    n_cost = calculate_cost(neighbor, max_takes, costs)

    if n_cost < best_cost:  # compare against the best so far, not ref_cost
      best_solution = neighbor
      best_cost = n_cost

    if n_cost < ref_cost or prob(temp, abs(ref_cost - n_cost)):
      ref_solution = neighbor  # tuples are immutable, so deepcopy is unnecessary
      ref_cost = n_cost

    temp = lower_temp(temp)

  return best_solution
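For reference, minimal versions of prob and lower_temp in the usual form (Metropolis acceptance probability and geometric cooling; the exact constants in my code may differ):

```python
import math
import random

def prob(temp, delta):
    # Metropolis criterion: accept a worse neighbor with probability e^(-delta/temp).
    return random.random() < math.exp(-delta / temp)

def lower_temp(temp, alpha=0.99):
    # Geometric cooling: each call shrinks the temperature by a constant factor.
    return temp * alpha
```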
