5 Security
Security is a fundamental aspect of software development, and it’s important to know about best practices and common patterns that can help strengthen the security of the projects you work on. I want to emphasize that I’m not a security expert. The recommendations I provide in this chapter are based on my experience and what I’ve learned from fellow developers.
Make It Hard
I once read that if someone wants to access your data badly enough, they will eventually succeed; the only question is how much effort it takes them. I don’t know whether this is true, but I tend to err on the side of safety.
Why is this important? It changed my perspective on security. It’s naive to think that you can outsmart people who are trained to find and extract the information they need. That doesn’t mean you need to be complacent or ignore the advice you read. It simply means that your actions and motivation change slightly.
An effective approach to security is to have the mindset to make it hard for the other party to access the data you’re trying to protect. In other words, you add several layers of security to protect the data of the user. Let’s start with the basics.
Plain Text
If you have some experience developing software, you most likely know that you shouldn’t store sensitive information in plain text. Ever. Don’t store the user’s username and password in the user defaults database, for example. Use the keychain to protect this type of sensitive information.
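As a minimal sketch of this advice, the snippet below stores a password in the keychain using the Security framework’s `SecItemAdd`. The service identifier is a hypothetical example, and error handling is reduced to a Boolean for brevity.

```swift
import Foundation
import Security

// Store a password in the keychain instead of the user defaults database.
// The service identifier below is a placeholder for your own app's identifier.
func storePassword(_ password: String, account: String) -> Bool {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp", // hypothetical service identifier
        kSecAttrAccount as String: account,
        kSecValueData as String: Data(password.utf8)
    ]

    // Remove any existing item for this service/account before adding the new one.
    SecItemDelete(query as CFDictionary)
    return SecItemAdd(query as CFDictionary, nil) == errSecSuccess
}
```

Unlike the user defaults database, which is a plain property list on disk, keychain items are encrypted by the operating system and can only be read by your application.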
The same applies to networking. Apple and Google are actively forcing developers to move away from plain HTTP and use HTTPS by default. Apple’s App Transport Security encourages developers to be aware of the security risks of their applications. Make sure that your application communicates with remote services over a secure connection. This isn’t always possible if you aren’t in control of the remote service. In such a scenario, it’s up to you to decide what the next best option is.
But a secure connection alone may not always be sufficient. Your application is still susceptible to, for example, man-in-the-middle attacks. You can remedy this by adopting certificate pinning, which adds an extra layer of security.
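A sketch of certificate pinning with `URLSession` could look like this. The delegate compares the data of the server’s leaf certificate with a certificate bundled with the application; the resource name `server.der` is a hypothetical example.

```swift
import Foundation
import Security

// A URLSession delegate that rejects any connection whose leaf certificate
// doesn't match the certificate bundled with the application.
final class PinningDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard let trust = challenge.protectionSpace.serverTrust,
              let certificate = SecTrustGetCertificateAtIndex(trust, 0),
              let pinnedURL = Bundle.main.url(forResource: "server", withExtension: "der"),
              let pinnedData = try? Data(contentsOf: pinnedURL)
        else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }

        // Compare the server's certificate with the pinned copy byte for byte.
        let serverData = SecCertificateCopyData(certificate) as Data
        if serverData == pinnedData {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}
```

Keep in mind that pinning the certificate itself means you need to ship an update before the certificate expires; pinning the public key instead is a common variation that avoids this.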
Obfuscating Information
A common question I receive is how to best hide or obfuscate sensitive information that’s bundled with your application. That’s a good question. The answer may disappoint you, though. As I mentioned earlier in this chapter, there’s always a way for people with bad intentions to get a hold of the information they need. You need to consider the sensitivity of the information you’re trying to protect.
The same advice applies, though. Make it as hard as possible. But, at the same time, consider the sensitivity of the information you’re protecting. Don’t store sensitive information, such as API keys, in your application’s Info.plist. It’s easy to dissect an application you downloaded from the App Store and inspect the contents of the Info.plist.
I usually store sensitive information as private constants in a configuration file, which means it’s compiled alongside the application. This doesn’t make it impossible to extract the sensitive information, but it makes it less trivial.
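A configuration file of this kind can be as simple as a caseless enum with private or internal constants. The values below are placeholders, not real credentials.

```swift
import Foundation

// Configuration.swift — constants compiled into the binary instead of
// shipped as plain text in Info.plist. The values are placeholders.
enum Configuration {
    // A determined attacker can still extract a compiled string from the
    // binary, but it's less trivial than reading a property list.
    static let apiKey = "0123456789abcdef"
    static let baseURL = URL(string: "https://api.example.com")!
}
```

Because the enum has no cases, it can’t be instantiated; it acts purely as a namespace for the constants.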
Fetching Sensitive Information
You can go one step further and avoid storing keys or credentials in the application itself. Instead, your application contacts a remote service and asks for credentials every time it needs to communicate with that service. This requires a dedicated infrastructure and a lot more work up front, but it adds a powerful layer of security.
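Sketched in code, the idea is that the application asks your own backend for a short-lived token and uses that token to talk to the third-party service. The endpoint and the `Token` type below are hypothetical.

```swift
import Foundation

// A short-lived credential returned by your own backend. The shape of
// this type is an assumption for the sake of the example.
struct Token: Decodable {
    let value: String
    let expiresAt: Date
}

// Fetch fresh credentials from a hypothetical endpoint instead of
// bundling them with the application.
func fetchToken() async throws -> Token {
    let url = URL(string: "https://api.example.com/token")! // hypothetical endpoint
    let (data, _) = try await URLSession.shared.data(from: url)
    let decoder = JSONDecoder()
    decoder.dateDecodingStrategy = .iso8601
    return try decoder.decode(Token.self, from: data)
}
```

The credentials never ship with the binary, and the backend can revoke or rotate them at any time without an app update.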
Encryption
Encryption is an effective solution to protect the user’s data. Realm, for example, has built-in support for encrypting the data stored in its database. For Core Data, however, this is less trivial. I hope Apple will make this less cumbersome in a future release of the framework.
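With Realm, enabling encryption comes down to passing a 64-byte key to the Realm configuration. This is a sketch; in a real application you would generate the key once and store it in the keychain rather than regenerate it on every launch.

```swift
import Foundation
import Security
import RealmSwift

// Generate a random 64-byte key. In production, create this once and
// persist it in the keychain so the same key is used on every launch.
var key = Data(count: 64)
_ = key.withUnsafeMutableBytes { buffer in
    SecRandomCopyBytes(kSecRandomDefault, 64, buffer.baseAddress!)
}

// Open an encrypted Realm. Realm transparently encrypts and decrypts
// the database file with AES using the key you provide.
let configuration = Realm.Configuration(encryptionKey: key)
let realm = try Realm(configuration: configuration)
```

Losing the key means losing access to the data, which is exactly why the key belongs in the keychain.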
The data the user stores on their device is automatically encrypted if the device is protected with a passcode or Touch ID. Only the user can unlock the data stored on their device, because they, not Apple, hold the key to decrypt it. It’s great to see that Apple continues to invest in the privacy and security of its customers. Apple’s motivation is a bit more nuanced, though.
Privacy
A lot has been written about privacy and protecting the user’s privacy. Unfortunately, many developers don’t realize that this also means protecting the user’s privacy from the companies behind the services they use day in, day out. If your application uses analytics or displays ads, then you’re exposing the user’s personal information to the companies behind these services.
I used to use Fabric for crash reporting and analytics, but I no longer do for personal projects. As a developer, it’s my responsibility to protect the user’s privacy, and users expect that from me. I understand that many developers don’t have this luxury, but I still believe that you should, at a minimum, consider the option and be aware of the information you may be exposing to third parties.
If you include a third-party SDK in your application and you don’t have access to its source code, then how do you know what information you’re sharing with that third party? You don’t. That’s important to keep in mind.
Logging and Debugging
Logging information to the console is my favorite technique to debug issues because it’s simple and to the point. It’s a technique many of us use, but it’s also a potential security problem. Many developers forget that print or log statements also log information to the console in production. This can be useful and intentional, but it can also be a security issue.
I hope you’re not logging credentials or other sensitive information. Even fragments of the user’s data shouldn’t be logged in production. If you need to generate logs, then I recommend looking into remote logging in combination with data encryption. Make sure that no third party, any third party, can access the logs you generate.
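A simple safeguard is a print wrapper that only emits output in debug builds, so log statements never reach the console in production. The function name is my own choice for this sketch.

```swift
// A print wrapper that only logs in debug builds. The @autoclosure means
// the message isn't even evaluated in release builds.
func debugLog(_ message: @autoclosure () -> String) {
    #if DEBUG
    print(message())
    #endif
}

debugLog("Fetched the user's profile") // no output in release builds
```

Because the argument is an autoclosure, any expensive string interpolation in the message is skipped entirely when the `DEBUG` flag isn’t set.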
Educating Your Client
The role of a developer is often reduced to writing code and solving a problem. Not only is this incorrect, but I strongly believe any developer, regardless of their experience, should also provide a technical service to the parties they work with. What does that mean? If you’re told to implement a solution, then it’s your responsibility to inform your client or project manager about any security risks or problems.
I believe it’s the task of the developer to educate the client. The client still decides what happens and what needs to be implemented, but they should at a minimum be aware of the risks involved. I’ve implemented several solutions I didn’t agree with, but I tried to educate the client about alternative solutions that were safer.
At one point I inherited a project in which the user’s credentials were stored in the user defaults database. Even though there was no room to refactor this glaring security hole, I informed the client about the problem. For a developer, it can be frustrating not having the final say in such matters, but that’s how it is. This is very different if you build a product business in which you make the calls.