
I recently read about decompilation of iOS apps and I'm now really concerned about it. As stated in the following posts (#1 and #2), it is possible to decompile an iOS app that is distributed through the App Store. This can be done on a jailbroken device, and I think also by copying the decrypted app from memory to disk. With some tools it is possible to

  • read out strings (with the strings tool)
  • dump the header files
  • reverse engineer to assembly code

It seems NOT to be possible to reverse engineer back to Cocoa source code.

As security is a feature of the software I create, I want to prevent bad users from reconstructing my security functions (encryption with a key, or logging in to websites). So I came up with the following questions:

  1. Can someone reconstruct my saving, encryption, or login methods from assembly? I mean, can he understand what exactly is going on (what is saved to which path at which time, which key is used, etc., and with which credentials a login to which website is performed)? I have no understanding of assembly; it looks like the Matrix to me...
  2. How can I securely use NSStrings which cannot be read out with strings or read in assembly? I know one can obfuscate strings - but that is still not secure, is it? (A minimal example of the kind of string I mean follows below.)
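For example (the literal is made up), my understanding is that something like this ends up verbatim in the compiled binary, so `strings` prints it as-is:

```objc
// Hypothetical example: this literal is compiled verbatim into the
// binary (the __TEXT,__cstring section), so running `strings` on the
// app executable reveals it immediately.
NSString *encryptionKey = @"my-hardcoded-secret-key";
```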
  • no need to jailbreak to do reverse engineering, and yes, any method you write in your code can be understood by someone with sufficient time to do so. Use the methods provided by the SDK, as their source code won't be directly in your application, but don't dream too much: it isn't a 100% safe way. EVERYTHING that can be read can be copied, and everything that can be copied can be cracked. – Jerome Diaz Jul 29 '13 at 08:37
  • So when I use SDK class methods (e.g. `[NSString stringWithFormat:...]` or methods for saving files), this would not be visible in assembly directly? I mean, any app includes the Foundation framework... – Pauli Jul 29 '13 at 08:46
  • when you use `[NSString stringWithFormat:...]`, what is visible is the call to stringWithFormat; what isn't visible is what is behind this method => we know what is called, not how it is done behind it – Jerome Diaz Jul 29 '13 at 09:08
  • ok, but I think a person who is able to reverse engineer also knows what is behind SDK methods (or can find out using the documentation). So it would be nearly impossible to make an SDK method call where only the developer knows its result? – Pauli Jul 29 '13 at 09:17

1 Answer


This is a problem that people have been chasing for years, and any sufficiently motivated person with the skills will be able to find out whatever information you don't want them to find, if that information is ever stored on a device.

Without jailbreaking, it's possible to disassemble apps using the purchased or downloaded binary. This is static inspection, and it is facilitated by standard disassembly tools, although you need a tool good enough to add symbols from the linker and understand method calls well enough to tease out what's going on. If you want to get a feel for how this works, check out Hopper; it's a really good disassembly/reverse-engineering tool.
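Part of the reason Objective-C apps are so readable is that the runtime needs selector names as plain strings for message dispatch, so they survive symbol stripping. A hypothetical interface shows the kind of thing tools like class-dump recover:

```objc
#import <Foundation/Foundation.h>

// Selector names like the one below are stored as plain C strings in
// the binary's __objc_methname section (the runtime needs them for
// message dispatch), so class-dump and disassemblers recover them
// verbatim even from a stripped release build.
@interface LoginManager : NSObject
- (BOOL)loginWithUsername:(NSString *)username password:(NSString *)password;
@end
```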

Specific to your secure-login question: you have a bigger problem if you have a motivated attacker, namely system-based man-in-the-middle attacks. In this case, the attacker shims out the networking code used by the system and sees anything that is sent via standard networking. Therefore, you can't hand any form of unencrypted data to a "secure" pipe at the OS or library level and expect it not to be seen. At a minimum, you'll need to encrypt the data before it enters the pipe (i.e. you can't depend on sending plain text to the standard SSL libraries).

You can compile your own set of SSL libraries and link them directly into your app, which means you don't get the system's performance and security enhancements over time, but you can manually upgrade your SSL libraries as necessary. You could also create your own encryption, but that's fraught with potential issues, since motivated hackers might find it easier to attack your wire protocol at that point (publicly tested protocols like SSL are usually more secure than what you can throw together yourself, unless you are a particularly gifted developer with years of security/encryption experience).
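To make the "encrypt before the data enters the pipe" point concrete, here is a minimal sketch using CommonCrypto's AES. The function name is illustrative, and the key and IV are assumed to come from a proper key-derivation/exchange step, never from literals in the binary:

```objc
#import <CommonCrypto/CommonCryptor.h>
#import <Foundation/Foundation.h>

// Sketch: AES-256-encrypt a payload *before* it is handed to any
// networking/SSL layer. `key` must be kCCKeySizeAES256 (32) bytes and
// `iv` kCCBlockSizeAES128 (16) bytes; both are assumed to come from a
// proper key-derivation/exchange step, not from literals in the app.
static NSData *EncryptedPayload(NSData *payload, NSData *key, NSData *iv)
{
    size_t bufferLength = payload.length + kCCBlockSizeAES128;
    NSMutableData *cipher = [NSMutableData dataWithLength:bufferLength];
    size_t bytesMoved = 0;

    CCCryptorStatus status = CCCrypt(kCCEncrypt,
                                     kCCAlgorithmAES128,   // AES; the key size sets the strength
                                     kCCOptionPKCS7Padding,
                                     key.bytes, kCCKeySizeAES256,
                                     iv.bytes,
                                     payload.bytes, payload.length,
                                     cipher.mutableBytes, bufferLength,
                                     &bytesMoved);
    if (status != kCCSuccess) {
        return nil;
    }
    cipher.length = bytesMoved;
    return cipher; // this, not the plain text, goes into the network pipe
}
```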

However, all of this assumes that your attacker is sufficiently motivated. If you remove the low-hanging fruit, you may be able to prevent a casual hacker from making a simple attempt at figuring out your system. Some things to avoid:

  • storing plain-text encryption keys for either side of the encryption
  • storing keys in specifically named resources (a file named serverkey.text, or a key stored in a plist under a name that contains key, are both classics)
  • using simple passwords anywhere you can avoid them

But most important is creating systems where the keys (if any) stored in the application itself are useless without information the user has to enter (directly, or indirectly through systems such as OAuth). The server should not trust the client for any important operation without having had some interaction with a user who can be trusted.
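One concrete way to do this is to derive the actual key from something the user types, so nothing usable is stored in the app at all. A minimal sketch using CommonCrypto's PBKDF2 (the function name, salt handling, and round count here are illustrative):

```objc
#import <CommonCrypto/CommonCrypto.h>
#import <Foundation/Foundation.h>

// Sketch: derive the AES key from a passphrase the user types, so no
// usable key is ever stored in the binary. The salt should be random,
// generated per user, and stored next to the ciphertext; the round
// count (10,000 here) is illustrative.
static NSData *KeyFromPassphrase(NSString *passphrase, NSData *salt)
{
    NSMutableData *key = [NSMutableData dataWithLength:kCCKeySizeAES256];
    const char *utf8 = [passphrase UTF8String];

    int result = CCKeyDerivationPBKDF(kCCPBKDF2,
                                      utf8, strlen(utf8),
                                      salt.bytes, salt.length,
                                      kCCPRFHmacAlgSHA256,
                                      10000,
                                      key.mutableBytes, key.length);
    return (result == kCCSuccess) ? key : nil;
}
```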

Apple's Keychain provides a good place to store authentication tokens, such as the ones retrieved during an OAuth sequence. The API is a bit hard to work with, but the system is solid.
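For illustration, a minimal sketch of storing a token with the raw Keychain API; the service and account strings are placeholders, and wrappers like the one mentioned in the comments below hide most of this boilerplate:

```objc
#import <Foundation/Foundation.h>
#import <Security/Security.h>

// Sketch: store an OAuth token as a generic-password Keychain item.
// The service and account strings are placeholders; a real version
// should handle errSecDuplicateItem by calling SecItemUpdate instead.
static BOOL StoreToken(NSString *token)
{
    NSDictionary *attributes = @{
        (__bridge id)kSecClass:       (__bridge id)kSecClassGenericPassword,
        (__bridge id)kSecAttrService: @"com.example.myapp.oauth",
        (__bridge id)kSecAttrAccount: @"default",
        (__bridge id)kSecValueData:   [token dataUsingEncoding:NSUTF8StringEncoding],
    };
    OSStatus status = SecItemAdd((__bridge CFDictionaryRef)attributes, NULL);
    return status == errSecSuccess;
}
```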

In the end, the problem is that no matter what you do, you're just upping the ante on the amount of work that it takes to defeat your measures. The attacker gets to control all of the important parts of the equation, so they will eventually defeat anything on the device. You are going to need to decide how much effort to put into securing the client, vs securing the server and monitoring for abuse. Since the attacker holds all of the cards on the device, your better approach is going to be methods that can be implemented on the server to enhance your goals.

gaige
  • Thanks! I have some more questions regarding the login and SSL encryption: 1. You mean I should encrypt the data first, then let the OS perform a secure connection to the server (HTTPS/SSL) and transfer the data, right? Then I still have the problem that I need to store/hard-code an encryption key for that pre-transfer encryption... 2. How exactly can I compile my own set of SSL libraries? Can I find good information about this somewhere on the web? And when I include my own SSL libraries, couldn't an attacker also decompile them? – Pauli Jul 29 '13 at 09:51
  • I put some more info in the answer. To answer your specific questions: 1) if you're worried about that attack, then you need to pass only encrypted information to the OS. That could mean statically compiled SSL libraries linked into your app, or encrypting the data before sending it to the system libraries. 2) To compile your own SSL libraries, use OpenSSL or something similarly popular, then compile statically and link it into your app. I'm not really advocating this either, but it will obfuscate things for MITM attacks (it doesn't help against disassembly attacks, though). – gaige Jul 29 '13 at 10:00
  • For the record, there are several libraries that make Apple's Keychain easier to use, such as Sam Soffes's [SSKeychain](https://github.com/soffes/sskeychain) – pablasso Dec 03 '13 at 22:44