Compared to other mobile platforms, iOS has always enjoyed a positive image in terms of security. Unlike Android, with its open-source code and hundreds of hardware manufacturers, Apple retains complete control not only over the software but also over the hardware of its mobile devices. As a result, iOS developers run a lower risk of introducing security breaches.
Does this imply that, unlike Android, iOS is 100% secure? No, it doesn't. According to Symantec, iOS is susceptible to all nine major threat families, which comprise thousands of malware variants. Though iOS-powered devices aren't as exposed to attack as rooted Android devices, they still have vulnerabilities that malicious users can take advantage of.
Discovered in 2016, the Pegasus spyware, which targeted iOS devices, is still regarded as one of the most sophisticated mobile attacks. The malware accessed location and authentication data, collected information from the iMessage, Viber, WhatsApp, Gmail, Skype, and Facebook apps, and tracked messages and calls.
As it turned out, Pegasus was able to infect iOS devices through vulnerabilities in Safari's WebKit and the iOS kernel. Apple soon patched these vulnerabilities in a system update. Yet Pegasus had enough time to cause quite a stir in the media and make both iOS users and developers realize the possible consequences of security breaches.
A whitepaper published later in 2016 estimated the losses caused by data breaches. On average, each stolen confidential record reportedly costs $154, so a breach of a relatively modest 10,000 records sets a company back $1.54 million. That is a critical loss for any business.
Only properly designed mobile software can help enterprises avoid the costs of an information leak. Below, we'll look into the five most common iOS app security issues (according to OWASP's ranking of insecure development practices) and describe how developers should address them.
Often, mobile software vulnerabilities appear because certain features are used improperly. An iOS app may violate official coding guidelines and practices, contain engineers' unintentional errors, or include chunks of Android- or Windows Phone-specific code. Though these issues may sound rather innocent, they all create serious breaches that malicious users can exploit, for instance through code-injection attacks.
• Always follow Apple's official iOS app security guides and practices.
• Conduct thorough code reviews to reveal calls to Android or Windows Phone features.
• Make your app validate all input data to avoid XSS attacks.
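The input-validation tip above can be sketched in Swift. This is a minimal illustration, not an Apple API: the whitelist pattern and the `sanitizeForHTML` helper are assumptions chosen for the example.

```swift
import Foundation

// A minimal input-validation sketch. The whitelist pattern and the
// sanitizeForHTML helper are illustrative, not part of any Apple API.
enum InputValidator {
    // Whitelist approach: accept only alphanumeric usernames of a sane length.
    static func isValidUsername(_ input: String) -> Bool {
        let pattern = "^[A-Za-z0-9_]{3,32}$"
        return input.range(of: pattern, options: .regularExpression) != nil
    }

    // Escape characters with special meaning in HTML before the string is
    // ever rendered in a web view, mitigating XSS.
    static func sanitizeForHTML(_ input: String) -> String {
        input
            .replacingOccurrences(of: "&", with: "&amp;")
            .replacingOccurrences(of: "<", with: "&lt;")
            .replacingOccurrences(of: ">", with: "&gt;")
            .replacingOccurrences(of: "\"", with: "&quot;")
    }
}
```

Rejecting invalid input at the boundary (whitelisting) is generally safer than trying to strip out known-bad patterns (blacklisting).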
When a lost or stolen device gets into the hands of a malicious third party, any improperly stored data is in danger. With physical access to the device's file system, a cunning user can plant malware that gathers all the sensitive data stored there.
The aftermath depends solely on the type of data stored on the device. The original owner can become a victim of fraud, identity theft, or financial loss. If the device is part of a corporate network, the consequences can be even graver.
• Use Apple’s File Protection mechanism for medium-size data.
• Enable high-level encryption for large-size data.
• Store all sensitive data in Keychain.
• Use secure APIs to access Keychain data.
• Avoid storing credentials in NSUserDefaults: they can be easily extracted.
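The Keychain advice above can be sketched with Apple's Security framework. This is a simplified sketch, not a production wrapper; the `com.example.app` service identifier is a placeholder.

```swift
import Foundation
import Security

// A sketch of storing a secret in the Keychain instead of NSUserDefaults.
// The service identifier "com.example.app" is a placeholder.
func saveToken(_ token: Data, account: String) -> Bool {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.app",
        kSecAttrAccount as String: account,
        kSecValueData as String: token,
        // Readable only while the device is unlocked; never synced off-device.
        kSecAttrAccessible as String: kSecAttrAccessibleWhenUnlockedThisDeviceOnly
    ]
    SecItemDelete(query as CFDictionary)   // replace any stale copy first
    return SecItemAdd(query as CFDictionary, nil) == errSecSuccess
}

func loadToken(account: String) -> Data? {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.app",
        kSecAttrAccount as String: account,
        kSecReturnData as String: true,
        kSecMatchLimit as String: kSecMatchLimitOne
    ]
    var result: AnyObject?
    guard SecItemCopyMatching(query as CFDictionary, &result) == errSecSuccess else {
        return nil
    }
    return result as? Data
}
```

Unlike NSUserDefaults, which is a plain plist file, Keychain items are encrypted by the system and can be restricted to the unlocked state of a single device, as the accessibility attribute above shows.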
Since a mobile device continuously sends and receives all sorts of data, great risks also lie in the communication process. Anything that takes part in it (a router, a cell tower) and anything that can interfere with it (malware on the device or at the endpoint) can intercept sensitive personal data.
• Make SSL/TLS encryption require certificate chain verification. Otherwise, the SSL/TLS connection will be open to hijacking.
• Let your app establish a connection only after verification to ensure the security of the endpoint.
• Thoroughly test your app to make sure it accepts only valid certificates.
• Don't let NSURL-based calls accept self-signed (potentially dangerous) certificates.
• Implement output encoding on suspicious data.
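One way to enforce the connection-verification tips above is certificate pinning in a `URLSessionDelegate`. The sketch below is illustrative: `pinnedCertData` is assumed to hold the DER bytes of a certificate bundled with the app, and anything that fails the check is rejected outright.

```swift
import Foundation

// A sketch of certificate pinning. pinnedCertData is assumed to contain the
// DER-encoded bytes of the server certificate shipped inside the app bundle.
final class PinningDelegate: NSObject, URLSessionDelegate {
    let pinnedCertData: Data

    init(pinnedCertData: Data) {
        self.pinnedCertData = pinnedCertData
    }

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition,
                                                  URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust,
              let serverCert = SecTrustGetCertificateAtIndex(trust, 0) else {
            // Reject anything that cannot be verified, including self-signed certs.
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        let serverCertData = SecCertificateCopyData(serverCert) as Data
        if serverCertData == pinnedCertData {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            // Certificate mismatch: possible man-in-the-middle, drop the connection.
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}
```

Pinning trades flexibility for safety: the app must ship an updated certificate before the server's one expires, so many teams pin the public key or an intermediate CA instead of the leaf certificate.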
Certain authentication methods are inherently vulnerable and create a high risk of third-party intervention, whether remote or direct. These include methods that are commonly accepted by many users but in actuality aren't secure enough on their own.
• Avoid using geolocation or Touch ID alone for authentication: these types of authentication can be bypassed.
• Always apply server-side authentication, assuming client-side checks can be bypassed.
• Ask users to choose stronger passwords (at least 6 characters long, containing both letters and numbers) and avoid PIN-style passwords.
• Don’t make “Remember me” authentication a default option and don’t let it store credentials locally.
• Use device-specific security tokens when possible.
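The password rule stated above (at least six characters, with both letters and digits so a pure PIN is rejected) can be expressed as a small Swift check. The function name is illustrative.

```swift
import Foundation

// A sketch of the password rule above: at least six characters,
// containing both letters and digits (a pure PIN is rejected).
func isAcceptablePassword(_ password: String) -> Bool {
    guard password.count >= 6 else { return false }
    let hasLetter = password.rangeOfCharacter(from: .letters) != nil
    let hasDigit = password.rangeOfCharacter(from: .decimalDigits) != nil
    return hasLetter && hasDigit
}

// isAcceptablePassword("123456")  -> false (PIN-style, no letters)
// isAcceptablePassword("abc123")  -> true
```

Treat this as a floor, not a ceiling: the check runs on the client for quick feedback, but the server should enforce the same policy, per the server-side authentication tip above.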
According to App Store regulations, all iOS apps have to be encrypted. However, encryption can turn out to be weak or flawed. An app with insufficient encryption leaves vulnerabilities that allow malicious users to decrypt application code and manipulate it at will.
• Use proven, long-lived cryptographic standards such as App Transport Security (ATS) and SHA-2; avoid RC2, MD4, MD5, and SHA-1.
• Don’t store cryptographic keys locally or in the code: they can be extracted.
• Store sensitive data on a secure server, in an encrypted container, or in the Keychain.
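As an example of choosing a modern algorithm, the SHA-2 recommendation above can be sketched with Apple's CryptoKit (available from iOS 13); the helper name is illustrative.

```swift
import CryptoKit
import Foundation

// A sketch of hashing with SHA-2 (SHA-256) via Apple's CryptoKit,
// instead of the broken MD4/MD5/SHA-1 family.
func sha256Hex(of data: Data) -> String {
    let digest = SHA256.hash(data: data)
    // Render the 32-byte digest as a lowercase hex string.
    return digest.map { String(format: "%02x", $0) }.joined()
}
```

Note that a plain hash is suitable for integrity checks; for password storage a slow, salted key-derivation function should be used instead, and any symmetric keys should live in the Keychain or Secure Enclave, never in source code, per the tip above.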
Apart from following the iOS app security tips above, it's crucial to develop an app with the most dangerous conditions in mind. Assume that your app will only be used on public Wi-Fi networks. That your user will be a careless person who sets the simplest of passwords. That the device will store sensitive corporate data and will be targeted by skilled hackers. Learn to be security-conscious to the point where you can be certain your app will stay safe even under these worst-case conditions.