8

I've written this Haskell program to solve Euler 15. (It uses some very simple dynamic programming to run a tad faster, so that I can actually run it; without that you would expect it to run in O(2^n) time.)

-- Starting in the top left corner of a 2×2 grid, and only being able to move to
-- the right and down, there are exactly 6 routes to the bottom right corner.
--
-- How many such routes are there through a 20×20 grid?

calcPaths :: Int -> Integer
calcPaths s
 = let  go x y
          | x == 0 || y == 0    = 1
          | x == y              = 2 * go x (y - 1)
          | otherwise           = go (x - 1) y + go x (y - 1)
   in   go s s

I later realised this could be done in O(n) by transforming it into an equation. Upon thinking about it a little longer, I realised it's actually quite similar to my solution above, except the recursion (which is slow on our hardware) is replaced by mathematics representing the same thing (which is fast on our hardware).

[image: equivalent equation]

Is there a systematic way to perform this kind of optimisation (produce and prove an equation to match a recursion) on recursive sets of expressions, specifically one which could be realistically "taught" to a compiler so this reduction is done automatically?
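For context, the standard closed form for counting monotone lattice paths is the central binomial coefficient: an n×n grid has C(2n, n) routes. This is presumably the equation alluded to above; a minimal Haskell sketch (function name mine):

```haskell
-- Closed form: an n-by-n grid has C(2n, n) monotone routes,
-- a standard combinatorial identity (choose which n of the
-- 2n steps go right). O(n) multiplications.
pathsClosed :: Int -> Integer
pathsClosed n = product [m - k + 1 .. m] `div` product [1 .. k]
  where
    m = fromIntegral (2 * n) :: Integer
    k = fromIntegral n :: Integer
```

For the 2×2 grid this gives 6, matching the problem statement.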

kvanbere
  • 3,289
  • 3
  • 27
  • 52

3 Answers

4

Unfortunately I can't say much about analytical algorithmic optimizations, but in practice there is a useful technique for dynamic programming called memoization. For example, with the memoize library your code can be rewritten as

import Data.Function.Memoize

calcPaths :: Int -> Integer
calcPaths s
 = let  go f x y
          | x == 0 || y == 0    = 1
          | x == y              = 2 * f x (y - 1)
          | otherwise           = f (x - 1) y + f x (y - 1)
   in   memoFix2 go s s

so the go function will be evaluated only once for any combination of arguments.
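If pulling in a dependency is undesirable, the same memoization idea can be sketched with a lazy immutable array from Data.Array — laziness means each cell is computed at most once, on first demand (the structure below is my own sketch, not part of the answer above):

```haskell
import Data.Array

-- Memoize go via a lazy array indexed by (x, y): O(s^2) time
-- and space instead of the O(2^s) naive recursion.
calcPaths :: Int -> Integer
calcPaths s = table ! (s, s)
  where
    table = listArray ((0, 0), (s, s))
                      [ go x y | x <- [0 .. s], y <- [0 .. s] ]
    go x y
      | x == 0 || y == 0 = 1
      | otherwise        = table ! (x - 1, y) + table ! (x, y - 1)
```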

Yuuri
  • 1,858
  • 1
  • 16
  • 26
  • Thank you for bringing up memoization! A reminder to everyone else: I'm not looking for a solution to the Euler problem -- I already have two -- I'm looking for a systematic way to derive a solution of good complexity from any branching brute force. – kvanbere Feb 26 '14 at 11:00
  • 2
    I wanted to mention memoization because it is a more or less systematic approach (it has its own usability restrictions) and it somewhat simplifies function equations in an implicit way. – Yuuri Feb 26 '14 at 11:09
2

You can also use dynamic programming if the problem is divisible into smaller subproblems, e.g.

F(x,y) = F(x-1,y) + F(x,y-1)

Here F(x,y) is divisible into smaller subproblems, hence DP can be used:

int arr[xmax+1][ymax+1];

// base conditions
for (int i = 0; i <= xmax; i++)
    arr[i][0] = 1;

for (int j = 0; j <= ymax; j++)
    arr[0][j] = 1;

// main equation
for (int i = 1; i <= xmax; i++) {
    for (int j = 1; j <= ymax; j++) {
        arr[i][j] = arr[i-1][j] + arr[i][j-1];
    }
}

As for the compiler optimization you mentioned, DP can be used for that: you would need to write instructions in the compiler which, when given a recursive solution, check whether it is divisible into subproblems of smaller size, and if so, use DP with a simple for-loop build-up like the one above. The most difficult part is optimizing it automatically: for example, the DP above needs O(xmax*ymax) space, but it can easily be optimized to get the solution in O(xmax+ymax) space.

Example solver: http://www.cs.unipr.it/purrs/

Vikram Bhat
  • 6,106
  • 3
  • 20
  • 19
  • Strictly, this doesn't answer the question directly either, since this form of dynamic programming isn't systematic and doesn't produce the mentioned optimization (brute force ==> equation). – kvanbere Feb 26 '14 at 11:30
  • Sorry, I know I made a comment about this being the same as the Haskell solution, but now I realise my mistake. The Haskell solution is 2^n whereas this is n^2 (making use of cheap memoization by using a grid, neat!). The above comment still applies. – kvanbere Feb 26 '14 at 11:32
  • 1
    @kvanberendonck From what I know, these kinds of recurrence equations can in some cases be solved using z-transforms or generating functions, where you take the z-transform, convert to linear equations, and then take the inverse z-transform -- but you would need to do that in the compiler – Vikram Bhat Feb 26 '14 at 11:44
  • Thanks for showing me PURRS, it's exactly what I was looking for. – kvanbere Feb 26 '14 at 13:12
0

This also seems like somewhat of a philosophical question. It seems that you are asking the compiler to recognize that you would like a more efficient (faster? using fewer resources?) process for returning the value of the function call, rather than the most efficient way to execute your code as written.

Carrying the idea further, we might have a compiler give suggestions of, in this case, mathematical formulas that might distill the code more succinctly or efficiently. Alternatively, the compiler might choose to connect to the internet and have another computer (e.g., Google or Wolfram) conduct the calculation. At the extreme, perhaps the compiler will recognize that what might actually be better to deliver at this time is not the answer to Euler Project 15 but a chocolate cake recipe or instructions for fixing your home heating.

The question makes me think of artificial intelligence and the role of the computer (how much of your math should the computer do for you? how much should it follow the code more closely?). That said, this kind of optimization ought to be an interesting project to think about.

גלעד ברקן
  • 23,602
  • 3
  • 25
  • 61