Amin Negm-Awad has explained what the code does (with a minor confusion over octal), but as to answering your question:
Why is strtol method used in this code I inherited?
we can only guess.
It appears that the second character in the string is a hexadecimal digit being used for (up to) 4 flag bits, and the method is testing whether the least significant of these is set. A simpler way to do this might be:
- (NSString *)starFromFlags:(NSString *)flags
{
    unichar flag = [flags characterAtIndex:1];
    if (flag > 127 || !isxdigit(flag)) // check flag is in the ASCII range and is a hex digit
        return @"INVALID";
    else
        return digittoint(flag) & 0x1 ? @"YES" : @"NO"; // check whether the hex value is odd, i.e. its LSB is set
}
isxdigit() and digittoint() are C library functions (just like strtol()); use man isxdigit in the Terminal for the documentation (unless you are using an older version of Xcode which still has the documentation for these; Apple unhelpfully removed the docs in the latest versions). The first checks whether a character is a hexadecimal digit, and the second returns the integer equivalent of such a digit. The > 127 check is minimal protection against non-"ASCII" characters in your string.
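For illustration, here is a minimal sketch of how these two functions behave (assuming a macOS/BSD libc, where both are declared in <ctype.h>):

#include <ctype.h>
#include <stdio.h>

int main(void)
{
    printf("%d\n", isxdigit('f') != 0); // 1: 'f' is a hexadecimal digit
    printf("%d\n", digittoint('f'));    // 15: the integer value of hex digit 'f'
    printf("%d\n", digittoint('7'));    // 7
    printf("%d\n", digittoint('g'));    // 0: 'g' is not a hex digit
    return 0;
}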
Note: An NSString presents itself as a sequence of UTF-16 code units, so characterAtIndex: returns a unichar (a 16-bit type), hence the type of flag. However this code doesn't handle arbitrary Unicode strings correctly; if your strings are "ASCII" it will work.
The above function actually does more error checking than the original; if you are happy to reduce the error checking you can just use:
- (NSString *)starFromFlags:(NSString *)flags
{
    return digittoint([flags characterAtIndex:1]) & 0x1 ? @"YES" : @"NO";
}
This will return @"YES" if and only if the flag is a hex digit and its LSB is set; if it isn't a hex digit or the LSB isn't set it returns @"NO". This works because digittoint() returns 0 if its argument isn't a hex digit.
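For example (hypothetical flag strings, where the second character is the one carrying the flag bits):

[self starFromFlags:@"A3"]; // '3' = 0b0011, LSB set   -> @"YES"
[self starFromFlags:@"A8"]; // '8' = 0b1000, LSB clear -> @"NO"
[self starFromFlags:@"AZ"]; // 'Z' is not a hex digit, digittoint() returns 0 -> @"NO"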
So why did the original programmer use strtol()? Maybe they didn't know about digittoint().
HTH