
Differences Between the Security Models of Android and iOS

Seminar Paper, 2018, 11 Pages

Computer Science - IT Security

Excerpt

The Security Models of Android and iOS: A Comparison

Samuel Hopstock, Department of Informatics, Technical University of Munich

Abstract—Smartphones have become the preferred device for as many tasks as possible in today's world. This makes secure phones, resilient against attacks targeting their users' data, more and more important. This paper assesses the measures device vendors have taken to ensure that such attacks do not succeed.

Because the market is mostly divided between Google's Android and Apple's iOS, we put our focus on those two operating systems and compare their respective security models. Additionally, this comparison evaluates how those models have changed over time since the beginning of the smartphone era around 2010.

The last part of this analysis takes a different view on smartphones, the perspective of so-called "power users": people who do not only use their smartphone for downloading apps and surfing the Internet, but who also want to perform lower-level customization of the operating system by rooting their Android device or jailbreaking their iPhone. This process of gaining full privileges on the phone not only creates advantages for the user but can also have negative implications for the device's security. How exactly does this affect the protections implemented by the vendor?

I. INTRODUCTION

For some time now, users have relied on a smartphone for many if not most digital tasks in their everyday life, like browsing the Internet and interacting with other people through social media platforms. This makes life easier for the user by enabling them to store all their important data on a single device they always carry with them. But this widespread transformation of smartphones, away from being a simple device for telephony to serving as a central hub of user data, also makes them an attractive target for malware attacks: breaking into a smartphone may give attackers access to all kinds of sensitive data, like credentials for users' e-mail inboxes or even online banking accounts.

One aspect that benefits the deployment of malicious apps to a great number of devices at once, while requiring only a single code base, is that most of the smartphone market is divided between only two mobile operating systems: in 2017, Google's Android accounted for 85% and Apple's iOS for 14.7% of all smartphones worldwide, covering a total of 99.7% of the market [1]. Attackers trying to break into as many devices as possible therefore do not really have to care about other systems (like Microsoft's Windows Mobile), as those only cover a fraction of the market.

Because of this duopoly, Apple and Google bear a great responsibility for their operating systems' security, as many people rely on them being resistant to potential attacks. To achieve this goal, the two OSes apply several different protections. Some of those measures follow similar concepts across both platforms, but the two also take quite different approaches in some aspects of their security models. This paper looks at both platforms' security models in more detail, compares their key features, and examines how they have changed over time. Finally, some additional thoughts are presented on the security perspective of the ever-present question "Which OS is better, iOS or Android?"

May 27, 2018

II. FIRST VERSIONS: BEFORE 2010

A. iPhone OS (before 4.x)

Already the first iPhones had cryptographic keys embedded in their hardware during production: two AES keys, one called GID (group ID) and one UID (unique ID). The GID is the same for all devices using the same processor model, while the UID is different for every single device (generated in the factory when the device is manufactured). When the phone boots, several device keys are derived from the UID and GID keys: keys 0x835, 0x836 and 0x838 from the UID, and key 0x837 from the GID. This happens in order to avoid unnecessary exposure of those two integral keys. Key 0x837 is then used to decrypt the device firmware so that the kernel can be loaded into memory; the firmware is stored in encrypted form in order to make reverse engineering its code harder. Apple's mobile devices establish a strict chain of trust throughout the different parts of the software (Fig. 1):
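The derivation scheme itself is not fully documented by Apple; it is commonly described as AES-encrypting a fixed, key-specific constant with the UID or GID key. The following is a minimal Python sketch of that idea, assuming this construction (the constants and key bytes are made-up placeholders, not Apple's real values):

    # Sketch: deriving device keys from the hardware UID/GID keys by
    # encrypting fixed constants. Constants and key bytes are illustrative
    # placeholders, not Apple's real values.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def derive_device_key(hw_key: bytes, constant: bytes) -> bytes:
        """AES-encrypt a fixed 16-byte constant with a hardware key (UID
        or GID), so the hardware key never has to leave the crypto engine."""
        enc = Cipher(algorithms.AES(hw_key), modes.ECB()).encryptor()
        return enc.update(constant) + enc.finalize()

    uid_key = bytes(16)  # stand-in for the per-device UID key
    gid_key = bytes(16)  # stand-in for the per-model GID key

    key_0x835 = derive_device_key(uid_key, b"\x01" * 16)  # hypothetical constant
    key_0x837 = derive_device_key(gid_key, b"\x02" * 16)  # hypothetical constant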

- The boot ROM is immutable and written to the chip at production time. It contains the Apple CA's public key, which is used to check whether the iBoot bootloader code is correctly signed by Apple. If this cannot be ensured, the device enters DFU mode (Device Firmware Upgrade), which only lets the phone be restored to a correctly signed iBoot version and prohibits any other action until this has been done.
- iBoot then proceeds to load the kernel and again checks whether the kernel has been correctly signed by Apple and can thus be trusted.

Starting with iPhone OS 3.0, the device ID and a nonce have been included in the kernel signature in order to prevent downgrading the device to a potentially vulnerable version of the OS. When a user wants to install

[Figure not included in this excerpt]

Fig. 1. The iOS chain of trust ranging from boot code to user-level apps

a different OS version on their device (e.g. for an OS upgrade or when restoring their device using iTunes), Apple has to send them a new signature for the OS: since a nonce is used, each signature can only be used to install a firmware version once. When Apple releases a new OS version, their servers stop signing the old version, which forces users onto the new version once they want to restore their device through iTunes.

To the user this might seem strange: why would anyone want to restore their device to an older OS version? The key reason is that if a kernel vulnerability has been found, an attacker might be able to get access to their targeted device and manually install the vulnerable OS version. This might then give them the ability to steal user data from the phone or install malware. Additionally, this makes the process of jailbreaking harder, as users cannot install firmware that is known to have a jailbreaking exploit once they have updated to a newer version.

- The kernel only runs trusted apps: only the Apple CA may sign them, and for this, developers have to register for the Apple Developer Program, providing their identity information and paying $99 per year. Every app is additionally examined thoroughly by Apple before being signed, in order to identify potential malware.

The fact that only apps signed by Apple itself are executed by the kernel also means that retrieving programs from third-party app stores (which is possible with other operating systems like Android) is not allowed.
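To make this chain of trust concrete, the following is a minimal sketch in which each stage is verified against a vendor key baked into the immutable boot ROM before control is handed over. The signature scheme (Ed25519) and the data layout are stand-ins chosen for brevity, not Apple's actual implementation; the real scheme additionally personalizes the kernel signature with the device ID and a nonce, as described above.

    # Sketch: a simplified boot chain where every stage is verified before
    # it runs. Signature scheme and payload layout are illustrative.
    from dataclasses import dataclass
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    @dataclass
    class Stage:
        name: str
        code: bytes       # the stage's binary
        signature: bytes  # vendor signature over that binary

    def boot(chain: list[Stage], vendor_pub: Ed25519PublicKey) -> None:
        """Walk the chain (iBoot -> kernel -> app) and verify each stage
        against the vendor key from the boot ROM. A single bad signature
        aborts the boot (think: DFU mode)."""
        for stage in chain:
            try:
                vendor_pub.verify(stage.signature, stage.code)
            except InvalidSignature:
                raise SystemExit(f"{stage.name}: bad signature, entering recovery")
            print(f"{stage.name}: verified, handing over control")

    # Demo with a freshly generated stand-in "vendor" key.
    vendor_priv = Ed25519PrivateKey.generate()
    chain = [Stage(n, c, vendor_priv.sign(c))
             for n, c in [("iBoot", b"bootloader"), ("kernel", b"kernel image"),
                          ("app", b"signed app binary")]]
    boot(chain, vendor_priv.public_key())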

Apps are compiled to native ARM code and protected by the FairPlay DRM system (the executable part of the app bundle is encrypted), so the only way of "cracking" apps, i.e. retrieving the executable code and modifying it, is by extracting the code from RAM once the app is loaded into memory. This is only possible on jailbroken devices because it requires elevated privileges.

Apps are executed in an isolated sandbox which prohibits access to other apps' data and system resources. But it is possible to request access to some sensitive system features like location services. This is done via so-called entitlements: an entitlement is a signed key-value pair which tells the system whether an app may access a specific device feature. Third-party app developers can request certain entitlements, and when the app then asks for access, the user is prompted with a dialog asking for their decision.
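As a rough mental model, the system can treat an app's entitlements as a signed dictionary that is consulted on every access to a protected feature. The sketch below illustrates this with invented key names and an HMAC standing in for the real signature format:

    # Sketch: entitlements as a signed key-value store consulted at access
    # time. Key names and the "signature" (an HMAC) are illustrative only.
    import hashlib, hmac, json

    SYSTEM_KEY = b"platform-signing-key"  # stand-in for the platform trust anchor

    def sign_entitlements(entitlements: dict) -> tuple[bytes, bytes]:
        blob = json.dumps(entitlements, sort_keys=True).encode()
        return blob, hmac.new(SYSTEM_KEY, blob, hashlib.sha256).digest()

    def may_access(blob: bytes, tag: bytes, feature: str) -> bool:
        # Reject tampered entitlement blobs, then look up the feature flag.
        expected = hmac.new(SYSTEM_KEY, blob, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        return bool(json.loads(blob).get(feature, False))

    blob, tag = sign_entitlements({"com.example.location-services": True})
    print(may_access(blob, tag, "com.example.location-services"))  # True
    print(may_access(blob, tag, "com.example.contacts"))           # False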

Apps may save sensitive data like passwords to the device keychain. This keychain is an SQLite database whose individual entries are encrypted with one of the device keys (key 0x835). This device key can be computed and extracted for offline use, and with access to it, all past and future keychain items can be decrypted: if an attacker somehow gets hold of the key and the keychain, they can decrypt it on their own computer, without needing further access to the device. The device PIN used for the secure lockscreen is also saved to the keychain, and not in a hashed form as one might expect, but in the clear. This makes removing or manipulating the PIN easy if access to the keychain is available. This security issue has been fixed in later iOS versions.
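A sketch of why an extracted key 0x835 is so dangerous: decrypting the keychain then becomes plain offline AES over a copied SQLite file. The table layout and cipher mode below are assumptions for illustration, not the actual keychain format:

    # Sketch: offline decryption of keychain items with an extracted device
    # key. Database schema and cipher mode are assumptions for illustration.
    import sqlite3
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def decrypt_keychain(db_path: str, key_0x835: bytes) -> list[bytes]:
        plaintexts = []
        con = sqlite3.connect(db_path)
        # Hypothetical schema: a table of (iv, ciphertext) blobs.
        for iv, ct in con.execute("SELECT iv, data FROM items"):
            dec = Cipher(algorithms.AES(key_0x835), modes.CBC(iv)).decryptor()
            plaintexts.append(dec.update(ct) + dec.finalize())
        con.close()
        return plaintexts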

Starting with the iPhone 3GS, the OS encrypts the complete file system. At this point the process used a single key for all files (derived from the UID key), and its main purpose was to prevent data retrieval by physically moving memory chips to a different device (the decryption key is only available on the hardware chip) and to make wiping the device quick (the decryption key is simply destroyed).
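The quick-wipe property follows directly from this design: if every block is encrypted under one key that exists only on the device, destroying that key renders all ciphertext useless. A toy model of this idea (AES-GCM is an illustrative stand-in; on real hardware the key never leaves the chip):

    # Sketch: wiping an encrypted disk by destroying its key. Toy model
    # with one device key; the cipher choice is illustrative.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    device_key = AESGCM.generate_key(bit_length=256)  # stand-in for the UID-derived key

    def write_block(data: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(device_key).encrypt(nonce, data, None)

    disk = [write_block(b"user data block %d" % i) for i in range(3)]

    # Fast wipe: forget the key. Every block on "disk" is now unreadable,
    # no matter how much ciphertext an attacker copies off the chips.
    device_key = None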

[2], [3]

B. Android (before 3.x)

While iOS is as much under Apple's control as possible, Google's Android sets its focus on being an open system (e.g. its firmware is open source). One critical aspect of this strategy is that Android users can easily install apps from third-party sources, not only from the official Play Store (or Android Market, as it was called in the early days). This, together with the fact that Android apps, being programmed in Java, are relatively easy to decompile and modify, opens possibilities to distribute "cracked" versions of popular paid apps, or even versions injected with malware code, to credulous users. Additionally, app signing on Android is fundamentally different from Apple's approach (see Fig. 2): while iOS apps are signed by Apple after being scanned for compliant behavior, Android apps are signed by the developers themselves using self-generated keys. So generally there is no real way of

[Figure not included in this excerpt]

Fig. 2. Comparing the app signing process on Android and iOS

knowing who created an app, unless of course the developers have published their public key on a trusted server. But this would require additional engagement from both the developers and the users. The only case in which Android's app signing prevents manipulation is when a user already has an older version of the app installed on their phone and a third party tries to present the user with an update that actually contains malicious code. When trying to install this "update", the OS notices that the installed version has not been signed with the same private key as the forged version and aborts the update process. But chances are high that the user would just think this is a bug in Android and uninstall the old version before trying to install the one presented by the attacker. The result of all this is that while apps uploaded to the official store are analyzed by Google and can thus be trusted for the most part, when using other sources, users have to be aware of the risk of potentially installing malicious apps.
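A minimal sketch of that same-signer update check follows; comparing fingerprints of the signing certificates is one way to implement the rule (the data model here is invented for illustration):

    # Sketch: the "same signer" rule for app updates, implemented by
    # comparing SHA-256 fingerprints of the signing certificates.
    import hashlib

    def cert_fingerprint(cert_der: bytes) -> str:
        return hashlib.sha256(cert_der).hexdigest()

    def allow_update(installed_cert: bytes, update_cert: bytes) -> bool:
        """An update may only be installed if it was signed with the same
        certificate as the version already on the device."""
        return cert_fingerprint(installed_cert) == cert_fingerprint(update_cert)

    legit = b"developer certificate (DER)"   # placeholder certificate bytes
    forged = b"attacker certificate (DER)"
    print(allow_update(legit, legit))   # True: genuine update installs
    print(allow_update(legit, forged))  # False: forged "update" is rejected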

But even though Android lets malware apps be installed easily, it has also been trying to limit the attack surface of the system and any installed programs since its first versions. While iOS apps all run under the same Unix user called "mobile" (UID 501), Android assigns each app an individual UID. The effect is that apps can easily be isolated at the kernel level by using built-in Unix permissions to limit an app's file system access to its own files. This is part of the concept of apps running in a sandbox, which is also used on iOS. A difference from iOS is that because of this separation into different Unix users, compromising the complete system through a vulnerability or malicious code in a single application is made much harder. In early iOS versions, a vulnerability in the browser application led to the first jailbreak exploit that attacked the kernel; this would not have been possible on Android, even in its first versions, because of this isolation.
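This isolation boils down to ordinary Unix file permission checks; a small sketch of the owner/other logic (simplified to ignore groups and capabilities):

    # Sketch: per-app UIDs reduce app isolation to ordinary Unix permission
    # checks. Simplified owner/other logic; groups and capabilities omitted.
    import stat

    def may_read(accessor_uid: int, owner_uid: int, mode: int) -> bool:
        if accessor_uid == owner_uid:
            return bool(mode & stat.S_IRUSR)  # owner read bit
        return bool(mode & stat.S_IROTH)      # "other" read bit

    APP_A_UID, APP_B_UID = 10001, 10002  # Android-style per-app UIDs
    private_dir_mode = 0o700             # app data is owner-only

    print(may_read(APP_A_UID, APP_A_UID, private_dir_mode))  # True: own data
    print(may_read(APP_B_UID, APP_A_UID, private_dir_mode))  # False: other app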

Another central part of the Android security model is the use of "permissions". These can be compared to iOS app entitlements, but they allow for much more granular access control. This is because, in addition to providing access to sensitive device features such as GPS location services, permissions can be required for communicating with certain app components: apps are split into separate components such as user interface providers (activities), background services, content providers (e.g. databases providing data about the user's contacts) and broadcast receivers (which register for certain system messages). If developers decide that some component in their app should be able to interact with third parties, they can handle communication requests from outside their own app's scope. But if the data provided is sensitive and only a restricted set of other parties should access this component, a permission can be required for accessing it. This permission is a string value that can either be chosen from a predefined set of OS permissions or defined by the developers themselves. The set of permissions an app needs to operate properly has to be known at install time, and this list is presented to the user.

There are also permissions defined by Android itself that govern access to certain system features. Some of those (e.g. permissions allowing communication with certain hardware devices) are directly mapped to Linux UIDs and GIDs (controlled by rules in /system/etc/permissions/platform.xml). For example, when an app is granted android.permission.BLUETOOTH_ADMIN, it is assigned the GID net_bt_admin. The mapping works the other way, too: processes running under the media UID are automatically granted android.permission.MODIFY_AUDIO_SETTINGS.
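These permission-to-GID rules are plain XML; a short sketch of reading such a mapping (the embedded snippet mirrors the structure of platform.xml, trimmed down to a single rule):

    # Sketch: reading permission -> GID mappings from a platform.xml-style
    # file. The embedded XML is trimmed down to one rule for illustration.
    import xml.etree.ElementTree as ET

    PLATFORM_XML = """
    <permissions>
        <permission name="android.permission.BLUETOOTH_ADMIN">
            <group gid="net_bt_admin" />
        </permission>
    </permissions>
    """

    def permission_gids(xml_text: str) -> dict[str, list[str]]:
        root = ET.fromstring(xml_text)
        return {perm.get("name"): [g.get("gid") for g in perm.findall("group")]
                for perm in root.findall("permission")}

    print(permission_gids(PLATFORM_XML))
    # {'android.permission.BLUETOOTH_ADMIN': ['net_bt_admin']}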

A security feature that was not available in Android's first versions is file system encryption. It was only introduced with Android Honeycomb (3.x) in 2011 and remained optional until Android Marshmallow (6.x), when Google started to require newly released devices to be encrypted by default in order to be certified for official Play Store access.

[4], [5]

C. Comparison

Looking at the early security models of Android and iOS, one gets the impression that Android incorporates quite a few good concepts, such as isolating processes by giving them separate Unix UIDs and securing interprocess communication with permissions. But permissions are also a prime example of how conceptually good security measures can fail because of bad usability: when users want to install an app on Android, they are presented with a list of permissions the app wants to be granted. Many apps request a whole range of permissions, so most users will not bother reading through them all and deciding whether each request is legitimate or suspicious; they rather accept the list without really reading it. This is similar to the way a great majority of users handle requests to accept terms and conditions online, also because if they do not agree to those terms, they are not allowed to use the service, be it Facebook or any online shop. App permissions are the same "all or nothing" decision: if the user thinks that the permission to read the user's contacts might not be needed for a simple calculator app, the old Android security model does not give them the possibility of disabling just this single permission; it rather aborts the installation of the app completely.

iOS has handled this better in two ways: on the one hand, by thoroughly checking all apps before they are published, things like illegitimate permission requests can mostly be found and prevented. On the other hand, even early iOS versions asked for the user's permission at runtime for certain actions (like location access). It took a long time until this concept was incorporated into Android.

Another strong advantage of iOS compared to early Android versions is the strict chain of trust that the complete system establishes. Nearly all Android phones have given the user the possibility to unlock the device's bootloader and flash any custom firmware image they like. The only caveat is that the unlocking process wipes all personal data in order to prevent data theft. This is the complete opposite of Apple's approach of signing all parts of the firmware and only booting genuine iOS. To "power users" who would like to apply low-level tweaks and modifications to their device's operating system, Android's approach is of course much more appealing; this view is covered in more detail in section IV. But for the vast majority of users, Apple's strict focus on device integrity seems well suited: they can normally rest assured that if they keep installing OS security patches, their phones are running genuine iOS firmware with apps inspected and signed by Apple, so their data is safe.

Overall, speaking of Android and iOS around 2010, iOS had a much more refined security model.

III. CURRENT VERSIONS

A. iOS 11

1) System Security: The security model used for Apple's iDevices today has changed in a few aspects compared to earlier iOS versions. A first notable point is the introduction of the so-called Secure Enclave: a hardware element that stores cryptographic keys (especially the GID and UID keys) and controls sensitive operations on the device like full storage wipes. Today, the UID key is no longer generated externally by the factory that produces the device; instead, generation happens on-device, directly inside the Secure Enclave. This way, no third party can know this key or store it in an external place, not even Apple itself.

The Enclave secures the process of changing the lockscreen passcode (which no longer consists of only 4 digits but can be an alphanumeric code of arbitrary length). The secure boot process is also used within this secured hardware chip in order to verify the integrity and authenticity of its own software. The part of device memory that is used by the Secure Enclave is encrypted with an ephemeral key which is freshly generated by the Enclave itself on every device startup. Communication between the Enclave and the kernel happens through shared memory and system interrupts.

A feature supported by many mobile phones today is the ability to authenticate via biometric sensors, most notably the fingerprint sensor. This makes using a secure lockscreen more comfortable for the user, as the device passcode does not have to be entered every time the screen is turned on. As a result, a longer code can be used, which is harder to break and thus more secure. Biometric login involves the processing of sensitive personal data, so the authentication again makes use of the device's Secure Enclave: the sensor communicates directly with the Enclave, and the phone's CPU only acts as a neutral third party passing along end-to-end encrypted messages. A factory-provisioned shared key between each unique pair of Touch ID sensor and Secure Enclave is used for this encrypted communication. A consequence of this is that replacing a broken Touch ID sensor is not that simple, because it can only communicate with one specific Secure Enclave, which then has to be exchanged as well.
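A sketch of such a sensor-to-enclave channel built on a pre-shared pairing key and an AEAD cipher; AES-GCM and the message format are assumptions made for illustration, as Apple does not document the exact construction in this form:

    # Sketch: an end-to-end encrypted sensor -> enclave channel over an
    # untrusted relay (the application processor only passes bytes along).
    # AES-GCM and the message format are illustrative assumptions.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    pairing_key = AESGCM.generate_key(bit_length=256)  # provisioned at the factory

    def sensor_send(scan: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(pairing_key).encrypt(nonce, scan, None)

    def enclave_receive(msg: bytes) -> bytes:
        # Raises InvalidTag if the relay tampered with the message.
        return AESGCM(pairing_key).decrypt(msg[:12], msg[12:], None)

    relayed = sensor_send(b"fingerprint feature data")  # CPU sees only ciphertext
    print(enclave_receive(relayed))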

Another new feature is the introduction of so-called effaceable storage: a small part of a device's flash memory used to store the root key of the filesystem encryption hierarchy. This storage can be quickly and securely erased in order to provide a safe way of wiping the device.

Speaking of filesystem encryption: this is now done with separate keys per file. Files are divided into protection classes, and each file key is wrapped with the respective protection class key. Those class keys are in turn wrapped with a hardware key and, depending on the protection class, additionally protected by the device's lockscreen passcode (see Fig. 3).

[Figure not included in this excerpt]

Fig. 3. Key usage with iOS file system encryption [3]
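A sketch of this key hierarchy, assuming standard AES key wrap for each layer (the real derivation details differ and are not fully public; the passcode entanglement below is a simplified illustration):

    # Sketch: per-file keys wrapped by class keys, class keys wrapped by a
    # key derived from the hardware key and (for some classes) the passcode.
    import hashlib, os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    hardware_key = os.urandom(32)        # stand-in for the UID-derived key
    passcode_key = hashlib.pbkdf2_hmac(  # entangles the user's passcode
        "sha256", b"user-passcode", b"device-salt", 100_000)

    class_key = os.urandom(32)           # one per protection class
    file_key = os.urandom(32)            # one per file

    wrapped_file_key = aes_key_wrap(class_key, file_key)
    kek = hashlib.sha256(hardware_key + passcode_key).digest()  # illustrative combine
    wrapped_class_key = aes_key_wrap(kek, class_key)

    # Unwrapping only succeeds with the right hardware key and passcode:
    assert aes_key_unwrap(kek, wrapped_class_key) == class_key
    assert aes_key_unwrap(class_key, wrapped_file_key) == file_key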

iOS also uses further protection techniques for the kernel, like address space layout randomization (ASLR) and data execution prevention (DEP). The former places kernel data structures at random addresses in RAM at runtime in order to mitigate exploits like the "return-to-libc" attack. The latter prevents the execution of malicious code that was not part of a program's dedicated executable sections but was instead dynamically loaded into memory.

[...]
