Background:
Let's say I'm developing an app against the iOS 6 SDK with my deployment target set to iOS 5. I can then use features from iOS 6, but to stay compatible with iOS 5 I have to guard them with runtime checks:
// method only available from iOS 6; the class of someObj already existed in iOS 5
if ([someObj respondsToSelector:@selector(aMethod)]) {
    [someObj aMethod];
}
Or
// entire class only available from iOS 6
if (NSStringFromClass([SKStoreProductViewController class]) != nil) {
    SKStoreProductViewController *store = [[SKStoreProductViewController alloc] init];
}
So far, so good. These are the standard ways of doing things, as far as I know.
But I found out today that, for the new-class example, if I just alloc/init an object of that class without any checks, it doesn't crash on iOS 5 as I would have expected; instead, it returns nil.
// run this on an iOS 5 device
NSLog(@"%@", [[UICollectionView alloc] init]); // logs (null), no crash
Why doesn't that crash on iOS 5? I guess this has something to do with the way the linker works, but I would expect a crash since that symbol doesn't exist in that OS version.
The secondary question is this: if the standard test is to use NSStringFromClass, that implies you can send the +class message to a non-existent class and get nil back. Why/how does that work?
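To make that concrete, here's a minimal sketch of the behaviour I'm describing (built against the iOS 6 SDK with a deployment target of iOS 5):
#import <StoreKit/StoreKit.h>

// run this on an iOS 5 device: the class reference apparently resolves
// to nil, so neither line crashes
Class cls = [SKStoreProductViewController class]; // nil on iOS 5
NSLog(@"%@", NSStringFromClass(cls));             // logs (null)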
Finally, I noticed that I can create a view controller that adopts a protocol only defined in iOS 6, and again it causes no issues on iOS 5. Why is that?
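For example, something like the following compiles and runs without complaint on iOS 5 (MyViewController is just a placeholder name; UICollectionViewDataSource is only declared in the iOS 6 headers):
#import <UIKit/UIKit.h>

// adopting an iOS-6-only protocol in a class that runs on iOS 5
@interface MyViewController : UIViewController <UICollectionViewDataSource>
@end

@implementation MyViewController

// the protocol's required methods, stubbed out
- (NSInteger)collectionView:(UICollectionView *)collectionView
     numberOfItemsInSection:(NSInteger)section {
    return 0;
}

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView
                  cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    return nil;
}

@end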