S.Lott's answer is the best one, but this was too long to add as a comment on his, so I put it here. Anyway, here is how I think about assert: it's basically a shorthand way to do #ifdef DEBUG.
There are two schools of thought about input checking: you can do it at the target, or you can do it at the source.
Doing it at the target means checking inside the called code:
import math

def sqrt(x):
    if x < 0:
        raise ValueError('sqrt requires non-negative numbers')
    root = math.sqrt(x)  # stand-in for the real <do stuff> algorithm
    return root
def some_func():
    y = float(input('Type a number: '))
    try:
        print('Square root is %f' % sqrt(y))
    except ValueError:
        # User typed a negative number
        print('%f must be non-negative!' % y)
Now, this has a lot of advantages. The person who wrote sqrt probably knows best which values are valid for the algorithm. In the code above, I have no idea whether the value I got from the user is valid. Someone has to check it, and it makes sense to do that in the code that knows the most about what's valid: the sqrt algorithm itself.
However, there is a performance penalty. Imagine code like this:
import math

def sqrt(x):
    if x < 0:
        raise ValueError('sqrt requires non-negative numbers')
    root = math.sqrt(x)  # stand-in for the real <do stuff> algorithm
    return root

def some_func(maxx=100000):
    all_sqrts = [sqrt(x) for x in range(maxx)]
    i = sqrt(-1.0)  # raises ValueError
    return all_sqrts
Now, this function is going to call sqrt 100k times, and every time, sqrt is going to check that the value is >= 0. But we already know every value is valid, because of how we generate those numbers; those extra validity checks are just wasted execution time. Wouldn't it be nice to get rid of them? And then there's one call that will throw a ValueError, so we catch it and realize we made a mistake. I wrote my program relying on the subfunction to check for me, so I only have to worry about recovering when it doesn't work.
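If you want to see the overhead for yourself, here is a rough sketch with timeit (the helper names are mine, and the exact numbers will vary by machine; the point is only that the checked version pays for a branch on every call):

import timeit

def sqrt_checked(x):
    if x < 0:
        raise ValueError('sqrt requires non-negative numbers')
    return x ** 0.5

def sqrt_unchecked(x):
    return x ** 0.5

# 10 runs of 100k calls each; exact timings vary by machine
print(timeit.timeit('[sqrt_checked(x) for x in range(100000)]',
                    globals=globals(), number=10))
print(timeit.timeit('[sqrt_unchecked(x) for x in range(100000)]',
                    globals=globals(), number=10))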
The second school of thought is that, instead of the target function checking its inputs, you add constraints to the definition and require the caller to ensure that it's calling with valid data. The function promises that, given good data, it will return what its contract says. This avoids all those checks, because the caller knows a lot more about the data it's sending than the target function does: where it came from and what constraints it inherently has. The end result of this idea is code contracts and similar structures, but originally it was just done by convention, i.e. in the design, like the comment below:
import math

# Contract: this function is only valid for x >= 0
def sqrt(x):
    root = math.sqrt(x)  # stand-in for the real <do stuff> algorithm
    return root

def long_function(maxx=100000):
    # This is a valid call - every x I pass to sqrt is valid
    sqrtlist1 = [sqrt(x) for x in range(maxx)]
    # This one is a program error - calling the function with an invalid
    # argument - but one that can't be statically determined.
    # It will throw an exception somewhere inside the sqrt code above.
    i = sqrt(-1.0)
Of course, bugs happen, and the contract might get violated. But so far, the result is about the same: in both cases, if I call sqrt(-1.0), I will get an exception inside the sqrt code itself, can walk up the exception stack, and figure out where my bug is.
However, there are much more insidious cases. Suppose, for instance, my code generates a list index, stores it, later looks the index up, extracts a value, and does some processing, and let's say we get a -1 index by accident. All those steps may actually complete without any errors, because in Python a -1 index silently returns the last element, but we have wrong data at the end of the test, and we have no idea why.
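A minimal sketch of how that goes wrong (find_index is a hypothetical helper that returns -1 for "not found", C-style, instead of raising):

def find_index(items, target):
    # Hypothetical helper: returns -1 when target is missing
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

values = [10, 20, 30]
idx = find_index(values, 99)  # 99 is missing, so idx == -1
bad = values[idx]             # no exception: values[-1] is 30
print(bad)                    # wrong data, and nothing ever raised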
So why assert? It would be nice to have something that gives us closer-to-the-failure debug information while we are testing and proving our contracts. That is pretty much exactly what assert is: it does the exact same comparison as the first form, but it is syntactically neater and slightly more specialized towards verifying a contract. A side benefit is that once you are reasonably sure your program works, and you are optimizing and trading debuggability for performance, all these now-redundant checks can be removed: running Python with the -O flag sets __debug__ to False and strips assert statements entirely, which is exactly the #ifdef DEBUG behavior I mentioned at the start.
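Here is what the contract-checking version might look like with assert (a sketch; the error message is my own):

import math

def sqrt(x):
    # Contract check: active during development, stripped under python -O
    assert x >= 0, 'sqrt contract violated: got %f' % x
    root = math.sqrt(x)  # stand-in for the real algorithm
    return root

sqrt(4.0)   # fine
sqrt(-1.0)  # AssertionError pointing right at the bad call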