
One of the biggest issues I have with Haskell is being able to (correctly) predict the performance of Haskell code. While I have some more difficult problems in mind, I realize I have almost no understanding even of simple cases.

Take something simple like this:

count [] = 0
count (x:xs) = 1 + count xs 

As I understand it, this isn't strictly a tail call (it needs to keep the `1 +` on the stack), so looking at this definition, what can I reason about it? A count function should obviously have O(1) space requirements, but does this one? And can I be guaranteed that it will or won't?
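For intuition, here is one way the space behavior can be made explicit: the definition above builds a chain of pending additions, while a variant with a strict accumulator runs in constant space. This is a sketch, not the asker's code; it assumes GHC's `BangPatterns` extension, and the helper name `go` is illustrative:

```haskell
{-# LANGUAGE BangPatterns #-}

-- The original definition is not tail recursive: evaluation expands to
--   count [1,2,3] = 1 + (1 + (1 + 0))
-- so the pending additions need O(n) stack space.
--
-- A tail-recursive variant threads an accumulator instead; the bang
-- pattern forces it at each step, so no thunks pile up:
count :: [a] -> Int
count = go 0
  where
    go !acc []     = acc
    go !acc (_:xs) = go (acc + 1) xs
```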

Heptic
  • you mean _O(1)_ space requirement in addition to the list that it must now create! that list might have been a lazy thunk before. personally, for high-level data structures like lists, I'll always be happy when the compiler gets faster, even if it's at the cost of reasoning when an optimization gets applied; otherwise follow what chris suggests – gatoatigrado Sep 05 '11 at 05:32
  • Answer to your question is there: [How does Haskell tail recursion work?](http://stackoverflow.com/questions/412919/how-does-haskell-tail-recursion-work) – Hynek -Pichi- Vychodil Sep 05 '11 at 12:30
  • 1
    In a tail recursive call, the recursive call is the last thing that happens inside the function. In your example, 1 has to be added *after* the recursive call returns, hence it is not tail recursive. – fredoverflow Sep 05 '11 at 17:39

1 Answer


If you want to reason more easily about recursive functions, use higher-order functions with known time and space complexity. If you use foldl or foldr here, you know that their space complexity cannot be O(1). But if you use foldl' from Data.List, as in

count = foldl' (\acc _ -> acc + 1) 0

your function will be O(1) in space complexity, as foldl' is tail recursive and strict in its accumulator by definition.
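To see why, here is a sketch of roughly how foldl' is defined (named `myFoldl'` here only to avoid clashing with the library version): the recursive call is in tail position, and `seq` forces the new accumulator before each step, so no chain of thunks builds up.

```haskell
-- A sketch close to the classic Data.List definition of foldl':
-- tail recursive, and strict in the accumulator via seq.
myFoldl' :: (b -> a -> b) -> b -> [a] -> b
myFoldl' _ z []     = z
myFoldl' f z (x:xs) = let z' = f z x
                      in z' `seq` myFoldl' f z' xs
```

Used as `myFoldl' (\acc _ -> acc + 1) 0`, it counts a long list without growing the stack.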

HTH Chris

Chris