At the core, the difference is that isDefinedAt on the partial function is not defined the way you would expect on the version built with PartialFunction.apply. That is actually why this method is now deprecated: PartialFunction.apply is meant to convert a total function into a partial function, so its isDefinedAt always returns true. This means it thinks it is defined for 3 in your example and tries to apply the function, instead of falling back to the even function you supplied as an alternative.
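To make that concrete, here is a rough sketch of the idea behind PartialFunction.apply (a simplification, not the actual library source; pfApply is a made-up name): it wraps the total function in a catch-all case, so the result claims to be defined everywhere.

// A sketch of what PartialFunction.apply effectively does (simplified, not the
// standard-library code): wrap a total function f in a pattern that matches
// everything, so isDefinedAt returns true for every input.
def pfApply[A, B](f: A => B): PartialFunction[A, B] = {
  case x => f(x) // matches any x, so isDefinedAt(x) == true for all x
}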
This brings up a common sore point in the community regarding total functions versus partial functions. PartialFunction is a subtype of Function, presumably in the OO design sense that it is a function with an additional method (isDefinedAt) that tells you whether the function is defined for a particular value.

Many feel this is a mistake: in the Liskov sense, Function should be a subtype of PartialFunction, because you can use a Function anywhere a PartialFunction is expected, but if you use a PartialFunction where a Function is expected, it compiles and may then fail at runtime. My feeling is that Function can be considered to have an implicit isDefinedAt that always returns true, which would let you flip the relationship and make Function a subtype of PartialFunction.

This comes to a head in PartialFunction.apply, which expects a total function and, because of that expectation, defines isDefinedAt to always return true. It cannot enforce that expectation, though, so if you call PartialFunction.apply(somePartialFunction), bad things happen that most programmers won't expect.
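To illustrate the Liskov point above with a small, hypothetical sketch (describe3 and halveEven are made-up names): a PartialFunction passed where a total Function is expected compiles fine, but can blow up at runtime.

// describe3 expects a total Int => String, but a PartialFunction compiles here too.
def describe3(f: Int => String): String = f(3)

val halveEven: PartialFunction[Int, String] = {
  case i if i % 2 == 0 => (i / 2).toString
}

describe3(_.toString)  // fine: "3"
describe3(halveEven)   // compiles, but throws scala.MatchError at runtime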
PartialFunction.apply Scaladoc
PartialFunction[Int, String]{...} is syntactic sugar for calling apply on the companion object, i.e.
PartialFunction.apply[Int, String]({...})
To boil it down to a minimal example:
val even: PartialFunction[Int, String] = PartialFunction[Int, String] {
  case i if i % 2 == 0 => i + " is even"
}

val isEven: PartialFunction[Int, String] = {
  case i if i % 2 == 0 => i + " is even"
}

println(even.isDefinedAt(3))   // true
println(isEven.isDefinedAt(3)) // false
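And the practical consequence for chaining, sketched with a hypothetical odd fallback: orElse trusts isDefinedAt, so the version built with PartialFunction.apply never defers to the alternative.

// Hypothetical fallback to show why the wrong isDefinedAt matters for orElse.
val odd: PartialFunction[Int, String] = {
  case i if i % 2 != 0 => i + " is odd"
}

println((isEven orElse odd)(3)) // "3 is odd": isEven says it is not defined at 3,
                                // so orElse moves on to the fallback
println((even orElse odd)(3))   // throws scala.MatchError: even claims to be defined
                                // at 3, so odd is never consulted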