I want to distribute n rectangles from a dataframe, each with a different height and length, so that the aspect ratio (r_expected) of the total length (L) to the total height (H) is roughly L/H = 0.33. A sketch of the rectangle distribution logic is below:
To find the optimal length, I start with a guessed length (L_guess). From that I can calculate the total height (H) (I'm omitting the details of how, since that part is secondary) and therefore the ratio r, as seen here:
import pandas as pd
import numpy as np

H = 0
L_guess = 18
l_cum = [2.5, 6.1, 9.9, 13.8, 18.1, 20.0]  # cumulative length of rects
h = [3.5, 4.5, 6.7, 4.8, 6.8, 3.1]  # height of rects
df = pd.DataFrame(list(zip(l_cum, h)), columns=["l_cum", "h"])
tolerance = 0.01  # error tolerance for calculated ratio
r_expected = 0.33  # expected target ratio L/H

# check how many rects fit in L_guess by assigning a row index to each
rowlist = [np.floor(i / L_guess) for i in df["l_cum"]]

for k in range(int(max(rowlist)) + 1):  # for each row (+1 so the last row is not skipped)
    vals = [j for j, i in enumerate(rowlist) if i == k]
    H += max(df["h"][vals])  # add the row's largest height to get the total height
r = L_guess / H  # ratio calculated from L_guess and the computed H
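For concreteness, here is a compact, self-contained check of that calculation with the sample data (note that the row loop needs range(max(rowlist) + 1), otherwise the last row's height is dropped from H):

```python
import numpy as np
import pandas as pd

l_cum = [2.5, 6.1, 9.9, 13.8, 18.1, 20.0]
h = [3.5, 4.5, 6.7, 4.8, 6.8, 3.1]
df = pd.DataFrame({"l_cum": l_cum, "h": h})
L_guess = 18

# row index per rect: rects 1-4 land in row 0, rects 5-6 in row 1
rowlist = [int(np.floor(i / L_guess)) for i in df["l_cum"]]

# sum the tallest rect per row; the +1 makes the range include the last row
H = sum(max(df["h"][[j for j, i in enumerate(rowlist) if i == k]])
        for k in range(max(rowlist) + 1))
r = L_guess / H  # 18 / (6.7 + 6.8) = 18 / 13.5, well above the 0.33 target
```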
I then update L_guess if the calculated ratio r is not within a given tolerance of the expected ratio of 0.33. NOW HERE'S MY PROBLEM: currently, I arbitrarily increase or decrease L_guess by 15% until the calculated ratio is within the given tolerance, at which point L_guess is returned:
if abs(r_expected - r) <= tolerance:  # ratio difference is within tolerance
    return L_guess  # stop iterating and return the length that works best
elif r < r_expected:  # L_guess is too small
    L_guess += L_guess * 0.15  # increase
    return findbestL(df, L_guess)  # try again (return so the result propagates up the recursion)
else:  # L_guess is too big
    L_guess -= L_guess * 0.15  # decrease
    return findbestL(df, L_guess)
Is there a way to find a suitable increment without guessing, and therefore speed up the calculation? I have read about gradient descent, which aims at minimizing some function and seems somewhat similar?
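One observation that may help frame the question: since a larger L merges rects into fewer rows, H can only shrink as L grows, so r = L/H is non-decreasing in L. That makes this a bracketed root-finding problem rather than gradient descent, and the fixed 15% step could be replaced by bisection. A minimal sketch under that assumption (compute_ratio and findbestL_bisect are hypothetical names; compute_ratio re-implements the height calculation above):

```python
import numpy as np
import pandas as pd

def compute_ratio(df, L):
    """Ratio r = L/H for a candidate total length L, using the row-packing rule above."""
    rowlist = [int(np.floor(i / L)) for i in df["l_cum"]]
    H = 0.0
    for k in range(max(rowlist) + 1):
        vals = [j for j, i in enumerate(rowlist) if i == k]
        if vals:  # a row index may end up empty for some L
            H += max(df["h"][vals])
    return L / H

def findbestL_bisect(df, r_expected=0.33, tolerance=0.01, max_iter=100):
    """Bisect on L: r(L) is non-decreasing, so halve a bracket instead of stepping 15%."""
    widths = df["l_cum"].diff().fillna(df["l_cum"].iloc[0])
    lo = float(widths.max())          # L at least as wide as the widest single rect
    hi = float(df["l_cum"].iloc[-1])  # L wide enough for everything in one row
    mid = 0.5 * (lo + hi)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        r = compute_ratio(df, mid)
        if abs(r - r_expected) <= tolerance:
            return mid  # ratio within tolerance
        if r < r_expected:
            lo = mid  # L too small -> search the upper half
        else:
            hi = mid  # L too big -> search the lower half
    # r(L) is a step function, so the target can sit inside a jump;
    # return the midpoint of the (now tiny) bracket as a best effort
    return mid

df = pd.DataFrame({"l_cum": [2.5, 6.1, 9.9, 13.8, 18.1, 20.0],
                   "h": [3.5, 4.5, 6.7, 4.8, 6.8, 3.1]})
best_L = findbestL_bisect(df)
```

Because the bracket halves each iteration, this converges in O(log(1/precision)) steps instead of an unbounded number of 15% moves; the caveat is that r(L) jumps in steps, so an exact tolerance hit is not guaranteed and the loop needs the max_iter fallback.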