Wednesday 24 December 2014

Dissecting Lollipop's Smart Lock

Android 5.0 (Lollipop) has been out for a while now, and most of its new features have been introduced, benchmarked, or complained about extensively. The new release also includes a number of security enhancements, of which disk encryption has probably gotten the most media attention. Smart Lock (originally announced at Google I/O 2014), which allows bypassing the device lockscreen when certain environmental conditions are met, is probably the most user-visible new security feature. As such, it has also been discussed and blogged about extensively. However, because Smart Lock is a proprietary feature incorporated in Google Play Services, not many details about its implementation or security level are available. This post will look into the Android framework extensions that Smart Lock is built upon, show how to use them to create your own unlock method, and finally briefly discuss its Play Services implementation.

Trust agents

Smart Lock is built upon a new Lollipop feature called trust agents. To quote from the framework documentation, a trust agent is a 'service that notifies the system about whether it believes the environment of the device to be trusted.' The exact meaning of 'trusted' is up to the trust agent to define. When a trust agent believes it can trust the current environment, it notifies the system via a callback, and the system decides how to relax the security configuration of the device. In the current Android incarnation, being in a trusted environment grants the user the ability to bypass the lockscreen.

Trust is granted per user, so each user's trust agents can be configured differently. Additionally, trust can be granted for a certain period of time, and the system automatically reverts to an untrusted state when that period expires. Device administrators can set the maximum trust period trust agents are allowed to set, or disable trust agents altogether. 

Trust agent API

Trust agents are Android services which extend the TrustAgentService base class (not available in the public SDK). The base class provides methods for enabling the trust agent (setManagingTrust()), granting and revoking trust (grant/revokeTrust()), as well as a number of callback methods, as shown below.

public class TrustAgentService extends Service {

    public void onUnlockAttempt(boolean successful) {
    }

    public void onTrustTimeout() {
    }

    private void onError(String msg) {
        Slog.v(TAG, "Remote exception while " + msg);
    }

    public boolean onSetTrustAgentFeaturesEnabled(Bundle options) {
        return false;
    }

    public final void grantTrust(
            final CharSequence message,
            final long durationMs, final boolean initiatedByUser) {
        //...
    }

    public final void revokeTrust() {
        //...
    }

    public final void setManagingTrust(boolean managingTrust) {
        //...
    }

    @Override
    public final IBinder onBind(Intent intent) {
        return new TrustAgentServiceWrapper();
    }

    //...
}

To be picked up by the system, a trust agent needs to be declared in AndroidManifest.xml with an intent filter for the android.service.trust.TrustAgentService action and require the BIND_TRUST_AGENT permission, as shown below. This ensures that only the system can bind to the trust agent, as the BIND_TRUST_AGENT permission requires the platform signature. A Binder API, which allows calling the agent from other processes, is provided by the TrustAgentService base class. 

<manifest ... >

    <uses-permission android:name="android.permission.CONTROL_KEYGUARD" />
    <uses-permission android:name="android.permission.PROVIDE_TRUST_AGENT" />

    <application ...>
        <service android:exported="true"
                 android:label="@string/app_name"
                 android:name=".GhettoTrustAgent"
                 android:permission="android.permission.BIND_TRUST_AGENT">
            <intent-filter>
                <action android:name="android.service.trust.TrustAgentService"/>
                <category android:name="android.intent.category.DEFAULT"/>
            </intent-filter>

            <meta-data android:name="android.service.trust.trustagent"
                       android:resource="@xml/ghetto_trust_agent"/>
        </service>
        ...
    </application>
</manifest>

The system Settings app scans app packages that match the intent filter shown above, checks if they hold the PROVIDE_TRUST_AGENT signature permission (defined in the android package) and shows them in the Trust agents screen (Settings->Security->Trust agents) if all required conditions are met. Currently only a single trust agent is supported, so only the first matched package is shown. Additionally, if the manifest declaration contains a <meta-data> tag that points to an XML resource that defines a settings activity (see below for an example), a menu entry that opens the settings activity is injected into the Security settings screen.

<trust-agent xmlns:android="http://schemas.android.com/apk/res/android"
    android:title="Ghetto Unlock"
    android:summary="A bunch of unlock triggers"
    android:settingsActivity=".GhettoTrustAgentSettings" />


Here's how the Trust agents screen might look when a system app that declares a trust agent is installed.

Trust agents are inactive by default (unless part of the system image), and are activated when the user toggles the switch in the screen above. Active agents are ultimately managed by the system TrustManagerService, which also keeps a log of trust-related events. You can get the current trust state and dump the event log using the dumpsys command as shown below.

$ adb shell dumpsys trust
Trust manager state:
User "Owner" (id=0, flags=0x13) (current): trusted=0, trustManaged=1
Enabled agents:
org.nick.ghettounlock/.GhettoTrustAgent
bound=1, connected=1, managingTrust=1, trusted=0
Events:
#0 12-24 10:42:01.915 TrustTimeout: agent=GhettoTrustAgent
#1 12-24 10:42:01.915 TrustTimeout: agent=GhettoTrustAgent
#2 12-24 10:42:01.915 TrustTimeout: agent=GhettoTrustAgent
...

Granting trust

Once a trust agent is installed, a trust grant can be triggered by any observable environment event, or directly by the user (for example, via an authentication challenge). An often requested, but not particularly secure (unless using a WPA2 profile that authenticates WiFi access points), unlock trigger is connecting to a 'home' WiFi AP. This feature can be easily implemented using a broadcast receiver that reacts to android.net.wifi.STATE_CHANGE (see sample app; based on the sample in AOSP). Once a 'trusted' SSID is detected, the receiver only needs to call the grantTrust() method of the trust agent service. This can be achieved in a number of ways, but if both the service and the receiver are in the same package, a straightforward way is to use a LocalBroadcastManager (part of the support library) to send a local broadcast, as shown below.

static void sendGrantTrust(Context context,
                           String message,
                           long durationMs,
                           boolean initiatedByUser) {
    Intent intent = new Intent(ACTION_GRANT_TRUST);
    intent.putExtra(EXTRA_MESSAGE, message);
    intent.putExtra(EXTRA_DURATION, durationMs);
    intent.putExtra(EXTRA_INITIATED_BY_USER, initiatedByUser);
    LocalBroadcastManager.getInstance(context).sendBroadcast(intent);
}

// in the receiver
@Override
public void onReceive(Context context, Intent intent) {
    if (WifiManager.NETWORK_STATE_CHANGED_ACTION.equals(intent.getAction())) {
        WifiInfo wifiInfo = (WifiInfo) intent
                .getParcelableExtra(WifiManager.EXTRA_WIFI_INFO);

        // ...
        if (secureSsid.equals(wifiInfo.getSSID())) {
            GhettoTrustAgent.sendGrantTrust(context, "GhettoTrustAgent::WiFi",
                    TRUST_DURATION_5MINS, false);
        }
    }
}


This will call the TrustAgentServiceCallback installed by the system lockscreen and effectively set a per-user trusted flag. If the flag is true, the lockscreen implementation allows the keyguard to be dismissed without authentication. Once the trust timeout expires, the user must enter their pattern, PIN or password in order to dismiss the keyguard. The current trust state is displayed at the bottom of the keyguard as a padlock icon: when unlocked, the current environment is trusted; when locked, explicit authentication is required. The user can also manually lock the device by pressing the padlock, even if an active trust agent currently has trust.

NFC unlock

As discussed in a previous post, implementing NFC unlock in previous Android versions was possible, but required some modifications to the system NFCService, because the NFC controller was not polled while the lockscreen was displayed. To make implementing NFC unlock possible, Lollipop introduces several hooks into the NFCService that allow NFC polling on the lockscreen. If a matching tag is discovered, a reference to a live Tag object is passed to interested parties. Let's look at how this is implemented in a bit more detail.

The NFCAdapter class has a couple of new (hidden) methods that allow adding and removing an NFC unlock handler (addNfcUnlockHandler() and removeNfcUnlockHandler(), respectively). An NFC unlock handler is an implementation of the NfcUnlockHandler interface shown below.

interface NfcUnlockHandler {
    public boolean onUnlockAttempted(Tag tag);
}

When registering an unlock handler you must specify not only the NfcUnlockHandler object, but also a list of NFC technologies that should be polled for at the lockscreen. Calling the addNfcUnlockHandler() method requires the WRITE_SECURE_SETTINGS signature permission.

Multiple unlock handlers can be registered and are tried in turn until one of them returns true from onUnlockAttempted(). This terminates the NFC unlock sequence, but doesn't actually dismiss the keyguard. In order to unlock the device, an NFC unlock handler should work with a trust agent in order to grant trust. Judging from NFCService's commit log, this appears to be a fairly recent development: initially, the Settings app included functionality to register trusted tags, which would automatically unlock the device (based on the tag's UID), but this functionality was removed in favour of trust agents.

Unlock handlers can authenticate the scanned NFC tag in a variety of ways, depending on the tag's technology. For passive tags that contain fixed data, authentication typically relies either on the tag's unique ID, or on some shared secret written to the tag. For active tags that can execute code, it can be anything from an OTP to full-blown multi-step mutual authentication. However, because NFC communication is not very fast, and most tags have limited processing power, a simple protocol with few roundtrips is preferable. A simple implementation that requires the tag to sign a random value with its RSA private key, and then verifies the signature using the corresponding public key is included in the sample application. For signature verification to work, the trust agent needs to be initialized with the tag's public key, which in this case is imported via the trust agent's settings activity shown below.
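The sample application uses an RSA signature over a random value for this. As a runnable illustration of the same challenge-response idea, here is a minimal sketch that substitutes HMAC-SHA256 with a pre-shared secret for the RSA signature (the secret and function names are hypothetical, not taken from the sample app):

```python
import hashlib
import hmac
import os

# Hypothetical secret provisioned to the tag at registration time
SHARED_SECRET = b"provisioned-at-registration"

def tag_respond(challenge: bytes, secret: bytes = SHARED_SECRET) -> bytes:
    """What the active tag computes over the agent's random challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def agent_verify(challenge: bytes, response: bytes,
                 secret: bytes = SHARED_SECRET) -> bool:
    """What the unlock handler checks before asking the agent to grant trust."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)      # fresh nonce per unlock attempt
response = tag_respond(challenge)
assert agent_verify(challenge, response)
```

The single request/response round trip matters here: NFC links are slow, so a protocol that fits in one exchange keeps the unlock latency acceptable.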

Smart Lock

'Smart Lock' is just the marketing name for the GoogleTrustAgent which is included in Google Play Services (com.google.android.gms package), as can be seen from the dumpsys output below.

$ adb shell dumpsys trust
Trust manager state:
User "Owner" (id=0, flags=0x13) (current): trusted=1, trustManaged=1
Enabled agents:
com.google.android.gms/.auth.trustagent.GoogleTrustAgent
bound=1, connected=1, managingTrust=1, trusted=1
message=""



This trust agent offers several trust triggers: trusted devices, trusted places and a trusted face. Trusted face is just a rebranding of the face unlock method found in previous versions. It uses the same proprietary image recognition technology, but is significantly more usable because, when enabled, the keyguard continuously scans for a matching face instead of requiring you to stay still while it takes and processes your picture. The security level provided also remains the same -- fairly low, as the trusted face setup screen warns. Trusted places is based on geofencing technology, which has been available in Google Play Services for a while. It uses the 'Home' and 'Work' locations associated with your Google account to make setup easier, and also allows registering a custom place based on the current location or any coordinates selectable via Google Maps. As a helpful popup warns, accuracy cannot be guaranteed, and the trusted place range can be up to 100 meters. In practice, the device can remain unlocked for a while even when this distance is exceeded.

Trusted devices supports two different types of devices at the time of this writing: Bluetooth and NFC. The Bluetooth option allows the Android device to remain unlocked while a paired Bluetooth device is in range. This feature relies on Bluetooth's built-in security mechanism, and as such its security depends on the paired device. Newer devices, such as Android Wear watches or the Pebble watch, support Secure Simple Pairing (Security Mode 4), which uses Elliptic Curve Diffie-Hellman (ECDH) to generate a shared link key. During the pairing process, these devices display a 6-digit number based on a hash of both devices' public keys in order to provide device authentication and protect against MitM attacks (a feature called numeric comparison). However, older wearables (such as the Meta Watch), Bluetooth earphones, and others are also supported. These previous-generation devices only support Standard Pairing, which generates authentication keys based on the device's physical address and a 4-digit PIN, which is usually fixed and set to a well-known value such as '0000' or '1234'. Such devices can be easily impersonated.

Google's Smart Lock implementation requires a persistent connection to a trusted device, and trust is revoked once this connection is broken. (Update: apparently a trusted connection can be established without a key on Android < 5.1.) However, as the introductory screen (see below) warns, Bluetooth range is highly variable and may extend up to 100 meters. Thus, while the 'keep device unlocked while connected to trusted watch on wrist' use case makes a lot of sense, in practice the Android device may remain unlocked even when the trusted Bluetooth device (wearable, etc.) is in another room.


As discussed earlier, an NFC trusted device can be quite flexible, and has the advantage that, unlike Bluetooth, proximity is well defined (typically not more than 10 centimeters). While Google's Smart Lock seems to support an active NFC device (internally referred to as the 'Precious tag'), no such device has been publicly announced yet. If the Precious is not found, Google's NFC-based trust agent falls back to UID-based authentication by saving the hash of the scanned tag's UID (tag registration screen shown below). For the popular NFC-A tags (most MIFARE variants) this UID is 4 or 7 bytes long (10-byte UIDs are also theoretically supported). While using the UID for authentication is a fairly widespread practice, it was originally intended for anti-collision alone, not for authentication. 4-byte UIDs are not necessarily unique and may collide even on 'official' NXP tags. And while the specification requires 7-byte UIDs to be both unique (even across different manufacturers) and read-only, cards with a rewritable UID do exist, so cloning a MIFARE trusted tag is quite possible. Tags can also be emulated with a programmable device such as the Proxmark III. Therefore, the security level provided by UID-based authentication is not that high.

Summary

Android 5.0 (Lollipop) introduces a new trust framework based on trust agents, which can notify the system when the device is in a trusted environment. As the system lockscreen now listens for trust events, it can change its behaviour based on the trust state of the current user. This makes it easy to augment or replace the traditional pattern/PIN/password user authentication methods by installing trust agents. Trust agent functionality is currently only available to system applications, and Lollipop only supports a single active trust agent. Google Play Services provides several trust triggers (trustlets) under the name 'Smart Lock' via its trust agent. While they can greatly improve device usability, none of the currently available Smart Lock methods is particularly precise or secure, so they should be used with care.

Thursday 23 October 2014

Android Security Internals is out

Some six months after the first early access chapters were announced, my book has now officially been released. While the final ebook PDF has been available for a few weeks, you can now get all ebook formats (PDF, Mobi and ePub) directly from the publisher, No Starch Press. Print books are also ready and should start shipping tomorrow (Oct 24th). You can use the code UNDERTHEHOOD when checking out for a 30% discount in the next few days. The book will also be available from O'Reilly, Amazon and other retailers in the coming weeks.

This book would not have been possible without the efforts of Bill Pollock and Alison Law from No Starch, who edited, refined and produced my raw writings. +Kenny Root reviewed all chapters and caught some embarrassing mistakes; any that remain are mine alone. Jorrit 'Chainfire' Jongma reviewed my coverage of SuperSU and Jon 'jcase' Sawyer contributed the foreword. Once again, a big thanks to everyone involved!

About the book

The book's purpose and structure have not changed considerably since it was first announced. It walks you through Android's security architecture, starting from the bottom up. It starts with fundamental concepts such as Binder, permissions and code signing, and goes on to describe more specific topics such as cryptographic providers, account management and device administration. The book includes excerpts from core native daemons and platform services, as well as some application-level code samples, so some familiarity with Linux and Android programming is assumed (but not absolutely required). 

Android versions covered

The book covers Android 4.4, based on the source code publicly released through AOSP. Android's master branch is also referenced a few times, because master changes are usually a good indicator of the direction future releases will take. Vendor modifications or extensions to Android, as well as  device-specific features are not discussed.

The first developer preview of Android 5.0 (Lollipop, then known only as 'Android L') was announced shortly after the first draft of this book was finished. This first preview L release included some new security features, such as improvements to full-disk encryption and device administration, but not all planned features were available (for example, Smart Lock was missing). The final Lollipop developer preview (released last week) added those missing features and finalized the public API. The source code for Lollipop is, however, not yet available, and trying to write an 'internals' book without it would either result in incomplete or speculative coverage, or would turn into a (rather tough) exercise in reverse engineering. That is why I've chosen not to cover Android 5.0 in the book at all; it focuses exclusively on Android 4.4 (KitKat).

Lollipop is a major release, and as such would require reworking most of the chapters and, of course, adding a lot of new content. This could happen in an updated version of the book at some point. Not to worry though, some of the more interesting new security features will probably get covered right here, on the blog,  first.

With that out of the way, here is the extended table of contents. You can find the full table of contents on the book's official page.

Update: Chapter 1 is now also freely available on No Starch's site.

Table of contents

 Chapter 1: Android's Security Model
  • Android's Architecture
  • Android's Security Model
Chapter 2: Permissions
  • The Nature of Permissions
  • Requesting Permissions
  • Permission Management
  • Permission Protection Levels
  • Permission Assignment
  • Permission Enforcement
  • System Permissions
  • Shared User ID
  • Custom Permissions
  • Public and Private Components
  • Activity and Service Permissions
  • Broadcast Permissions
  • Content Provider Permissions
  • Pending Intents
Chapter 3: Package Management
  • Android Application Package Format
  • Code signing
  • APK Install Process
  • Package Verification
Chapter 4: User Management
  • Multi-User Support Overview
  • Types of Users
  • User Management
  • User Metadata
  • Per-User Application Management
  • External Storage
  • Other Multi-User Features
Chapter 5: Cryptographic Providers
  • JCA Provider Architecture
  • JCA Engine Classes
  • Android JCA Providers
  • Using a Custom Provider
Chapter 6: Network Security and PKI
  • PKI and SSL Overview
  • JSSE Introduction
  • Android JSSE Implementation
Chapter 7: Credential Storage
  • VPN and Wi-Fi EAP Credentials
  • Credential Storage Implementation
  • Public APIs
Chapter 8: Online Account Management
  • Android Account Management Overview
  • Account Management Implementation
  • Google Accounts Support
Chapter 9: Enterprise Security
  • Device Administration
  • VPN Support
  • Wi-Fi EAP
Chapter 10: Device Security
  • Controlling OS Boot-Up and Installation
  • Verified Boot
  • Disk Encryption
  • Screen Security
  • Secure USB Debugging
  • Android Backup
Chapter 11: NFC and Secure Elements
  • NFC Overview
  • Android NFC Support
  • Secure Elements
  • Software Card Emulation
Chapter 12: SELinux
  • SELinux Introduction
  • Android Implementation
  • Android 4.4 SELinux Policy
Chapter 13: System Updates and Root Access
  • Bootloader
  • Recovery
  • Root Access
  • Root Access on Production Builds

Sunday 5 October 2014

Revisiting Android disk encryption

In iOS 8, Apple has expanded the scope of data encryption and now mixes the user's passcode with an unextractable hardware UID when deriving an encryption key, making it harder to extract data from iOS 8 devices. This has been somewhat of a hot topic lately, with opinions ranging from praise for Apple's new focus on serious security, to demands for "golden keys" to mobile devices to be magically conjured up. Naturally, the debate has spread to other OS's, and Google has announced that the upcoming Android L release will also have disk encryption enabled by default. Consequently, questions and speculation about the usefulness and strength of Android's disk encryption have sprung up on multiple forums, so this seems like a good time to take another look at its implementation. While Android L hasn't been released yet, some of the improvements to disk encryption it introduces are apparent in the preview release, so this post will briefly introduce them as well.

This post will focus on the security level of disk encryption, for more details on its integration with the platform, see Chapter 10 of my book -- 'Android Security Internals' (early access full PDF is available now, print books should ship by end of October).

Android 3.0-4.3

Full disk encryption (FDE) for Android was introduced in version 3.0 (Honeycomb) and didn't change much until version 4.4 (discussed in the next section). Android's FDE uses the dm-crypt target of Linux's device mapper framework to implement transparent disk encryption for the userdata (mounted as /data) partition. Once encryption is enabled, all writes to disk automatically encrypt data before committing it to disk and all reads automatically decrypt data before returning it to the calling process. The disk encryption key (128-bit, called the 'master key') is randomly generated and protected by the lockscreen password. Individual disk sectors are encrypted by the master key using AES in CBC mode, with ESSIV:SHA256 to derive sector IVs.

Android uses a so-called 'crypto footer' structure to store encryption parameters. It is very similar to the encrypted partition header used by LUKS (Linux Unified Key Setup), but is simpler and omits several LUKS features. While LUKS supports multiple key slots, allowing for decryption using multiple passphrases, Android's crypto footer only stores a single copy of the encrypted master key and thus supports a single decryption passphrase. Additionally, while LUKS splits the encrypted key in multiple 'stripes' in order to reduce the probability of recovering the full key after it has been deleted from disk, Android has no such feature. Finally, LUKS includes a master key checksum (derived by running the master key through PBKDF2), which makes it possible to check whether the entered passphrase is correct without decrypting any disk data. Android's crypto footer doesn't include a master key checksum, so the only way to check whether the entered passphrase is correct is to try to mount the encrypted partition. If the mount succeeds, the passphrase is considered correct.

Here's how the crypto footer looks in Android 4.3 (version 1.0).

struct crypt_mnt_ftr {
    __le32 magic;
    __le16 major_version;
    __le16 minor_version;
    __le32 ftr_size;
    __le32 flags;
    __le32 keysize;
    __le32 spare1;
    __le64 fs_size;
    __le32 failed_decrypt_count;
    unsigned char crypto_type_name[MAX_CRYPTO_TYPE_NAME_LEN];
};
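For illustration, the fixed-size portion of this footer can be parsed with Python's struct module. This is a sketch: MAX_CRYPTO_TYPE_NAME_LEN is assumed to be 64 bytes here, so check AOSP's cryptfs.h for the exact value before relying on the offsets (the magic value 0xD0B5B1C4 matches the one vold writes, as seen in the bruteforcer output later in this post).

```python
import struct

# Field layout taken from the struct above; the 64-byte name length
# is an assumption (MAX_CRYPTO_TYPE_NAME_LEN in cryptfs.h).
FOOTER_V10 = struct.Struct("<IHHIIIIQI64s")

def parse_footer_v10(data: bytes) -> dict:
    (magic, major, minor, ftr_size, flags, keysize, spare1,
     fs_size, failed, name) = FOOTER_V10.unpack_from(data)
    return {
        "magic": magic,
        "version": (major, minor),
        "ftr_size": ftr_size,
        "keysize": keysize,
        "fs_size": fs_size,
        "failed_decrypt_count": failed,
        "crypto_type_name": name.rstrip(b"\x00").decode(),
    }

# Synthetic footer for illustration only
blob = FOOTER_V10.pack(0xD0B5B1C4, 1, 0, FOOTER_V10.size, 0, 16, 0,
                       4 * 1024 * 1024, 0, b"aes-cbc-essiv:sha256")
print(parse_footer_v10(blob)["crypto_type_name"])  # aes-cbc-essiv:sha256
```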

The structure includes the version of the FDE scheme, the key size, some flags and the name of the actual disk encryption cipher mode (aes-cbc-essiv:sha256). The crypto footer is immediately followed by the encrypted key and a 16-byte random salt value. In this initial version, a lot of the parameters are implicit and are therefore not included in the crypto footer. The master key is encrypted using a 128-bit AES key (key encryption key, or KEK) derived from a user-supplied passphrase using 2000 iterations of PBKDF2. The derivation process also generates an IV, which is used to encrypt the master key in CBC mode. When an encrypted device is booted, Android takes the passphrase the user has entered, runs it through PBKDF2, decrypts the encrypted master key and passes it to dm-crypt in order to mount the encrypted userdata partition.
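A minimal sketch of this KEK derivation step, with Python's hashlib standing in for vold's C implementation (it assumes the standard PBKDF2-HMAC-SHA1 construction and a single 32-byte output split into a 16-byte KEK and a 16-byte IV, as described above):

```python
import hashlib

HASH_COUNT = 2000   # PBKDF2 iterations used by FDE 1.0
KEY_LEN = 16        # 128-bit key encryption key (KEK)
IV_LEN = 16         # AES-CBC IV for decrypting the master key

def derive_kek_and_iv(passphrase: str, salt: bytes):
    """One PBKDF2 run: the first 16 bytes become the KEK,
    the next 16 the IV used to decrypt the stored master key."""
    buf = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), salt,
                              HASH_COUNT, dklen=KEY_LEN + IV_LEN)
    return buf[:KEY_LEN], buf[KEY_LEN:]

kek, iv = derive_kek_and_iv("1234", bytes(16))
print(kek.hex())
```

The low iteration count is the crux of the next section: 2000 PBKDF2 rounds cost a GPU almost nothing per guess.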

Bruteforcing FDE 1.0

The encryption scheme described in the previous section is considered relatively secure, but because it is implemented entirely in software, its security depends entirely on the complexity of the disk encryption passphrase. If it is sufficiently long and complex, bruteforcing the encrypted master key could take years. However, because Android has chosen to reuse the lockscreen PIN or password (maximum length 16 characters), in practice most people are likely to end up with a relatively short or low-entropy disk encryption password. While the PBKDF2 key derivation algorithm has been designed to work with low-entropy input, and requires considerable computational effort to bruteforce, 2000 iterations are not a significant hurdle even to current off-the-shelf hardware. Let's see how hard it is to bruteforce Android FDE 1.0 in practice.

Bruteforcing on the device is obviously impractical due to the limited processing resources of Android devices and the built-in rate limiting after several unsuccessful attempts. A much more practical approach is to obtain a copy of the crypto footer and the encrypted userdata partition and try to guess the passphrase offline, using much more powerful hardware. Obtaining a raw copy of a disk partition is usually not possible on most commercial devices, but can be achieved by booting a specialized data acquisition boot image signed by the device manufacturer, exploiting a flaw in the bootloader that allows unsigned images to be booted (such as this one), or simply by booting a custom recovery image on devices with an unlocked bootloader (a typical first step to 'rooting').

Once the device has been booted, obtaining a copy of the userdata partition is straightforward. The crypto footer however, despite its name, typically resides on a dedicated partition on recent devices. The name of the partition is specified using the encryptable flag in the device's fstab file. For example, on the Galaxy Nexus, the footer is on the metadata partition as shown below.

/dev/block/platform/omap/omap_hsmmc.0/by-name/userdata  /data  ext4  \
noatime,nosuid,nodev,nomblk_io_submit,errors=panic \
wait,check,encryptable=/dev/block/platform/omap/omap_hsmmc.0/by-name/metadata

Once we know the name of the partition that stores the crypto footer it can be copied simply by using the dd command.

Very short passcodes (for example a 4-digit PIN) can be successfully bruteforced using a script (this particular one is included in Santoku Linux) that runs on a desktop CPU. However, much better performance can be achieved on a GPU, which has been specifically designed to execute multiple tasks in parallel. PBKDF2 is an iterative algorithm based on SHA-1 (SHA-2 can also be used) that requires very little memory for execution and lends itself to parallelization. One GPU-based, high-performance PBKDF2 implementation is found in the popular password recovery tool hashcat. Version 1.30 comes with a built-in Android FDE module, so recovering an Android disk encryption key is as simple as parsing the crypto footer and feeding the encrypted key, salt, and the first several sectors of the encrypted partition to hashcat. As we noted in the previous section, the crypto footer does not include any checksum of the master key, so the only way to check whether the decrypted master key is the correct one is to try to decrypt the disk partition and look for some known data. Because most current Android devices use the ext4 filesystem, hashcat (and other similar tools) look for patterns in the ext4 superblock in order to confirm whether the tried passphrase is correct.
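The known-plaintext check itself is simple: the ext4 superblock starts 1024 bytes into the partition and carries the little-endian magic 0xEF53 at offset 0x38 within the superblock. A sketch of the test a cracking tool applies to each candidate decryption of the first few sectors:

```python
EXT4_MAGIC = 0xEF53
SUPERBLOCK_OFFSET = 1024   # superblock starts 1 KiB into the partition
MAGIC_OFFSET = 0x38        # s_magic field within the superblock

def looks_like_ext4(plaintext: bytes) -> bool:
    """Check a candidate decryption for the little-endian ext4 magic;
    a match means the guessed passphrase (and thus the KEK) was right."""
    off = SUPERBLOCK_OFFSET + MAGIC_OFFSET
    magic = int.from_bytes(plaintext[off:off + 2], "little")
    return magic == EXT4_MAGIC

# A wrong key yields essentially random bytes, which fail the check.
assert not looks_like_ext4(bytes(2048))
```

Real tools typically verify a few more superblock fields to rule out the (roughly 1 in 65536) chance of a random 0xEF53 match.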

The Android FDE input for hashcat includes the salt, encrypted master key and the first 3 sectors of the encrypted partition (which contain a copy of the 1024-byte ext4 superblock). The hashcat input file might look like this (taken from the hashcat example hash):

$fde$16$ca56e82e7b5a9c2fc1e3b5a7d671c2f9$16$7c124af19ac913be0fc137b75a34b20d$eac806ae7277c8d4...
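The structure of this line follows directly from the footer fields. A small helper to assemble it is sketched below; the exact format is inferred from the example hash above, so verify it against hashcat's documentation before use:

```python
def make_fde_hash(salt: bytes, enc_master_key: bytes, sectors: bytes) -> str:
    """Assemble a $fde$ input line from the crypto footer's salt and
    encrypted master key plus the first sectors of the partition."""
    return "$fde${}${}${}${}${}".format(
        len(salt), salt.hex(),
        len(enc_master_key), enc_master_key.hex(),
        sectors.hex())

# Dummy values for illustration: 16-byte salt and key, 3 sectors of data
line = make_fde_hash(bytes(16), bytes(16), bytes(3 * 512))
assert line.startswith("$fde$16$")
```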

On a device that uses a six-digit lockscreen PIN, the PIN, and consequently the FDE master key can be recovered with the following command:

$ cudaHashcat64 -m 8800 -a 3 android43fde.txt ?d?d?d?d?d?d
...
Session.Name...: cudaHashcat
Status.........: Cracked
Input.Mode.....: Mask (?d?d?d?d?d?d) [6]
Hash.Target....: $fde$16$aca5f840...
Hash.Type......: Android FDE
Time.Started...: Sun Oct 05 19:06:23 2014 (6 secs)
Speed.GPU.#1...: 20629 H/s
Recovered......: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.......: 122880/1000000 (12.29%)
Skipped........: 0/122880 (0.00%)
Rejected.......: 0/122880 (0.00%)
HWMon.GPU.#1...: 0% Util, 48c Temp, N/A Fan

Started: Sun Oct 05 19:06:23 2014
Stopped: Sun Oct 05 19:06:33 2014

Even when run on the GPU of a mobile computer (NVIDIA GeForce 730M), hashcat can achieve more than 20,000 PBKDF2 hashes per second, and recovering a 6 digit PIN takes less than 10 seconds. On the same hardware, a 6-letter (lowercase only) password takes about 4 hours.
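These timings are easy to sanity-check: at roughly 20,000 PBKDF2 guesses per second, the worst-case cracking time is simply the keyspace divided by the hash rate.

```python
RATE = 20_000  # PBKDF2 guesses/second on a laptop GPU (figure from above)

def worst_case_hours(keyspace: int, rate: int = RATE) -> float:
    """Time to exhaust the whole keyspace at the given guess rate."""
    return keyspace / rate / 3600

pin6 = 10 ** 6      # 6-digit PIN
alpha6 = 26 ** 6    # 6 lowercase letters

print(round(worst_case_hours(pin6) * 3600))   # 50 (seconds)
print(round(worst_case_hours(alpha6), 1))     # 4.3 (hours)
```

An 8-character mixed-case alphanumeric password pushes the same arithmetic past 300,000 years of worst-case GPU time, which is why password complexity, not the cipher, is the deciding factor here.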

As you can see, bruteforcing a simple PIN or password is very much feasible, so choosing a strong lockscreen password is vital. Lockscreen password strength can be enforced by installing a device administrator that sets password complexity requirements. Alternatively, a dedicated disk encryption password can be set on rooted devices using the shell or a dedicated application. CyanogenMod 11 supports setting a dedicated disk encryption password out of the box, and one can be set via system Settings, as shown below.

Android 4.4

Android 4.4 adds several improvements to disk encryption, but the most important one is replacing the PBKDF2 key derivation function (KDF) with scrypt. scrypt has been specifically designed to be hard to crack on GPUs by requiring a large (and configurable) amount of memory. Because GPUs have a limited amount of memory, executing multiple scrypt tasks in parallel is no longer feasible, and thus cracking scrypt is much slower than PBKDF2 (or similar hash-based KDFs). As part of the upgrade process to 4.4, Android automatically updates the crypto footer to use scrypt and re-encrypts the master key. Thus every device running Android 4.4 (devices using a vendor-proprietary FDE scheme excluded) should have its FDE master key protected using a scrypt-derived key.

The Android 4.4 crypto footer looks like this (version 1.2):

struct crypt_mnt_ftr {
    __le32 magic;
    __le16 major_version;
    __le16 minor_version;
    __le32 ftr_size;
    __le32 flags;
    __le32 keysize;
    __le32 spare1;
    __le64 fs_size;
    __le32 failed_decrypt_count;
    unsigned char crypto_type_name[MAX_CRYPTO_TYPE_NAME_LEN];
    __le32 spare2;
    unsigned char master_key[MAX_KEY_LEN];
    unsigned char salt[SALT_LEN];
    __le64 persist_data_offset[2];
    __le32 persist_data_size;
    __le8 kdf_type;
    /* scrypt parameters. See www.tarsnap.com/scrypt/scrypt.pdf */
    __le8 N_factor; /* (1 << N) */
    __le8 r_factor; /* (1 << r) */
    __le8 p_factor; /* (1 << p) */
};

As you can see, the footer now includes an explicit kdf_type which specifies the KDF used to derive the master key KEK. The values of the scrypt initialization parameters (N, r and p) are also included. The master key size (128-bit) and disk sector encryption mode (aes-cbc-essiv:sha256) are the same as in 4.3.
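For illustration, the fixed layout above can be parsed with Python's struct module. The sketch below assumes the AOSP cryptfs.h array sizes (MAX_CRYPTO_TYPE_NAME_LEN=64, MAX_KEY_LEN=48, SALT_LEN=16), which add up to the 192-byte footer size reported by the bruteforcer output later in this post; parse_crypt_footer is a hypothetical helper, not part of any released tool:

```python
import struct

# struct crypt_mnt_ftr v1.2, little-endian; array sizes assumed from AOSP's
# cryptfs.h (MAX_CRYPTO_TYPE_NAME_LEN=64, MAX_KEY_LEN=48, SALT_LEN=16)
CRYPT_MNT_FTR = "<IHHIIIIQI64sI48s16sQQIBBBB"  # 192 bytes in total

def parse_crypt_footer(buf):
    (magic, major, minor, ftr_size, flags, keysize, _spare1, fs_size,
     failed_decrypts, crypto_name, _spare2, master_key, salt,
     _pd_off0, _pd_off1, _pd_size, kdf_type, n_factor, r_factor,
     p_factor) = struct.unpack_from(CRYPT_MNT_FTR, buf)
    assert magic == 0xD0B5B1C4, "not an Android FDE crypto footer"
    return {
        "version": (major, minor),
        "crypto_type": crypto_name.rstrip(b"\x00").decode(),
        # keysize is stored in bytes (16 for the 128-bit key shown above)
        "encrypted_master_key": master_key[:keysize],
        "salt": salt,
        "kdf_type": kdf_type,  # 1 = PBKDF2, 2 = scrypt
        "scrypt_params": (1 << n_factor, 1 << r_factor, 1 << p_factor),
    }
```

A dump produced by this parser would contain the same fields as the script output shown below (magic, versions, key size, crypto type, KDF and scrypt parameters).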

Bruteforcing the master key now requires parsing the crypto footer, initializing scrypt and generating all target PIN or password combinations. As the 1.2 crypto footer still does not include a master key checksum, checking whether the tried PIN or password is correct again requires looking for known plaintext in the ext4 superblock.
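A minimal version of that brute-force loop might look as follows, assuming the footer has already been parsed. The 16-byte key / 16-byte IV split follows the cryptfs source; check_candidate stands in for the AES master-key decryption and the ext4 known-plaintext check, which require a third-party AES implementation:

```python
import hashlib
import itertools

def bruteforce_pin(salt, check_candidate, length=4, n=32768, r=8, p=2):
    """Try every numeric PIN of the given length against the scrypt KDF.

    check_candidate(kek, iv) is expected to decrypt the encrypted master
    key with AES-128-CBC and test it against known ext4 plaintext."""
    for digits in itertools.product("0123456789", repeat=length):
        pin = "".join(digits).encode()
        # scrypt(PIN, salt) -> 32 bytes: first 16 are the KEK, last 16 the IV
        derived = hashlib.scrypt(pin, salt=salt, n=n, r=r, p=p, dklen=32,
                                 maxmem=64 * 1024 * 1024)
        if check_candidate(derived[:16], derived[16:]):
            return pin.decode()
    return None
```

Because every candidate PIN costs a full scrypt derivation, the loop is dominated by KDF time, which is exactly what makes the 4.4 scheme slow to crack.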

While hashcat has supported scrypt since version 1.30, its GPU implementation is not much more efficient (and can in fact be slower) than running scrypt on a CPU. Additionally, the Android 4.4 crypto footer format is not supported, so hashcat cannot be used to recover Android 4.4 disk encryption passphrases as is.

Instead, the Santoku Linux FDE bruteforcer Python script can be extended to support the 1.2 crypto footer format and the scrypt KDF. A sample (and not particularly efficient) implementation can be found here. It might produce the following output when run on a 3.50GHz Intel Core i7 CPU:

$ time python bruteforce_stdcrypto.py header footer 4

Android FDE crypto footer
-------------------------
Magic : 0xD0B5B1C4
Major Version : 1
Minor Version : 2
Footer Size : 192 bytes
Flags : 0x00000000
Key Size : 128 bits
Failed Decrypts: 0
Crypto Type : aes-cbc-essiv:sha256
Encrypted Key : 0x66C446E04854202F9F43D69878929C4A
Salt : 0x3AB4FA74A1D6E87FAFFB74D4BC2D4013
KDF : scrypt
N_factor : 15 (N=32768)
r_factor : 3 (r=8)
p_factor : 1 (p=2)
-------------------------
Trying to Bruteforce Password... please wait
Trying: 0000
Trying: 0001
Trying: 0002
Trying: 0003
...
Trying: 1230
Trying: 1231
Trying: 1232
Trying: 1233
Trying: 1234
Found PIN!: 1234

real 4m43.985s
user 4m34.156s
sys 0m9.759s

As you can see, trying about 1200 PIN combinations requires almost 5 minutes, so recovering a simple PIN is no longer instantaneous. That said, cracking a short PIN or password is still very much feasible, so choosing a strong lockscreen password (or a dedicated disk encryption password, when possible) is still very important.

Android L

A preview release of the upcoming Android version (referred to as 'L') has been available for several months now, so we can observe some of the expected changes to disk encryption. If we run the crypto footer obtained from an encrypted Android L device through the script introduced in the previous section, we may get the following output:

$ ./bruteforce_stdcrypto.py header L_footer 4

Android FDE crypto footer
-------------------------
Magic : 0xD0B5B1C4
Major Version : 1
Minor Version : 3
Footer Size : 2288 bytes
Flags : 0x00000000
Key Size : 128 bits
Failed Decrypts: 0
Crypto Type : aes-cbc-essiv:sha256
Encrypted Key : 0x825F3F10675C6F8B7A6F425599D9ECD7
Salt : 0x0B9C7E8EA34417ED7425C3A3CFD2E928
KDF : unknown (3)
N_factor : 15 (N=32768)
r_factor : 3 (r=8)
p_factor : 1 (p=2)
-------------------------
...

As you can see above, the crypto footer version has been upped to 1.3, but the disk encryption cipher mode and key size have not changed. However, version 1.3 uses a new, unknown KDF specified with the constant 3 (1 is PBKDF2, 2 is scrypt). Additionally, encrypting a device no longer requires setting a lockscreen PIN or password, which suggests that the master key KEK is no longer directly derived from the lockscreen password. Starting the encryption process produces the following logcat output:

D/QSEECOMAPI: (  178): QSEECom_start_app sb_length = 0x2000
D/QSEECOMAPI: ( 178): App is already loaded QSEE and app id = 1
D/QSEECOMAPI: ( 178): QSEECom_shutdown_app
D/QSEECOMAPI: ( 178): QSEECom_shutdown_app, app_id = 1
...
I/Cryptfs ( 178): Using scrypt with keymaster for cryptfs KDF
D/QSEECOMAPI: ( 178): QSEECom_start_app sb_length = 0x2000
D/QSEECOMAPI: ( 178): App is already loaded QSEE and app id = 1
D/QSEECOMAPI: ( 178): QSEECom_shutdown_app
D/QSEECOMAPI: ( 178): QSEECom_shutdown_app, app_id = 1

As discussed in a previous post, 'QSEE' stands for Qualcomm Secure Execution Environment, which is an ARM TrustZone-based implementation of a TEE. QSEE provides the hardware-backed credential store on most devices that use recent Qualcomm SoCs. From the log above, it appears that Android's keymaster HAL module has been extended to store the disk encryption key KEK in hardware-backed storage (cf. the 'Using scrypt with keymaster for cryptfs KDF' message). The log also mentions scrypt, so it is possible that the lockscreen password (if present), along with some key (or seed) stored in the TEE, is fed to the KDF to produce the final master key KEK. However, since no source code is currently available, we cannot confirm this. That said, setting an unlock pattern on an encrypted Android L device produces the following output, which suggests that the pattern is indeed used when generating the encryption key:

D/VoldCmdListener(  173): cryptfs changepw pattern {}
D/QSEECOMAPI: ( 173): QSEECom_start_app sb_length = 0x2000
D/QSEECOMAPI: ( 173): App is already loaded QSEE and app id = 1
...
D/QSEECOMAPI: ( 173): QSEECom_shutdown_app
D/QSEECOMAPI: ( 173): QSEECom_shutdown_app, app_id = 1
I/Cryptfs ( 173): Using scrypt with keymaster for cryptfs KDF
D/QSEECOMAPI: ( 173): QSEECom_start_app sb_length = 0x2000
D/QSEECOMAPI: ( 173): App is already loaded QSEE and app id = 1
D/QSEECOMAPI: ( 173): QSEECom_shutdown_app
D/QSEECOMAPI: ( 173): QSEECom_shutdown_app, app_id = 1
E/VoldConnector( 756): NDC Command {5 cryptfs changepw pattern [scrubbed]} took too long (6210ms)

As you can see in the listing above, the cryptfs changepw command, which is used to send instructions to Android's vold daemon, has been extended to support a pattern, in addition to the previously supported PIN/password. Additionally, the amount of time the password change takes (6 seconds) suggests that the KDF (scrypt) is indeed being executed to generate a new encryption key. Once we've set a lockscreen unlock pattern, booting the device requires entering the pattern, as can be seen in the screenshot below. Another subtle change introduced in Android L is that when booting an encrypted device the lockscreen pattern, PIN or password needs to be entered only once (at boot time), and not twice (once more on the lockscreen, after Android boots) as in previous versions.


While no definitive details are available, it is fairly certain that (at least on high-end devices) Android's disk encryption key(s) will have some hardware protection in Android L. Assuming that the implementation is similar to that of the hardware-backed credential store, disk encryption keys should be encrypted by an unextractable key encryption key stored in the SoC, so obtaining a copy of the crypto footer and the encrypted userdata partition and bruteforcing the lockscreen passphrase should no longer be sufficient to decrypt disk contents. Disk encryption in the Android L preview (at least on a Nexus 7 2013) feels significantly faster (encrypting the 16GB data partition takes about 10 minutes), so it is most probably hardware-accelerated as well (or the initial encryption only encrypts disk blocks that are actually in use, rather than every single block as in previous versions). However, it remains to be seen whether high-end Android L devices will include a dedicated crypto co-processor akin to Apple's 'Secure Enclave'. While the current TrustZone-based key protection is much better than the software-only implementation found in previous versions, a flaw in the secure TEE OS or any of the trusted TEE applications could lead to extracting hardware-protected keys or otherwise compromising the integrity of the system.

Update 2014/11/4: The official documentation about disk encryption has been updated, including details about KEK protection. Quote:
The encrypted key is stored in the crypto metadata. Hardware backing is implemented by using the Trusted Execution Environment's (TEE) signing capability. Previously, we encrypted the master key with a key generated by applying scrypt to the user's password and the stored salt. In order to make the key resilient against off-box attacks, we extend this algorithm by signing the resultant key with a stored TEE key. The resultant signature is then turned into an appropriate length key by one more application of scrypt. This key is then used to encrypt and decrypt the master key. To store this key:
  1. Generate random 16-byte disk encryption key (DEK) and 16-byte salt.
  2. Apply scrypt to the user password and the salt to produce 32-byte intermediate key 1 (IK1).
  3. Pad IK1 with zero bytes to the size of the hardware-bound private key (HBK). Specifically, we pad as: 00 || IK1 || 00..00; one zero byte, 32 IK1 bytes, 223 zero bytes.
  4. Sign padded IK1 with HBK to produce 256-byte IK2.
  5. Apply scrypt to IK2 and salt (same salt as step 2) to produce 32-byte IK3.
  6. Use the first 16 bytes of IK3 as KEK and the last 16 bytes as IV.
  7. Encrypt DEK with AES_CBC, with key KEK, and initialization vector IV.
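The quoted steps translate almost directly into code. In the sketch below, the hardware-bound signing operation (step 4) is modeled by a caller-supplied sign_with_hbk function, since the real HBK never leaves the TEE; everything else uses only the stdlib scrypt implementation:

```python
import hashlib

def derive_kek_and_iv(password, salt, sign_with_hbk, n=32768, r=8, p=2):
    """Steps 2-6 of the documented KEK derivation (step 1 generates the
    DEK and salt; step 7 encrypts the DEK with the result)."""
    scrypt = lambda data: hashlib.scrypt(data, salt=salt, n=n, r=r, p=p,
                                         dklen=32, maxmem=64 * 1024 * 1024)
    ik1 = scrypt(password)                   # step 2: 32-byte IK1
    padded = b"\x00" + ik1 + b"\x00" * 223   # step 3: 00 || IK1 || 00..00 (256 bytes)
    ik2 = sign_with_hbk(padded)              # step 4: 256-byte TEE signature
    ik3 = scrypt(ik2)                        # step 5: 32-byte IK3
    return ik3[:16], ik3[16:]                # step 6: KEK and IV
```

Note how the TEE signature sits between the two scrypt applications: without access to the hardware-bound key, an attacker cannot reproduce IK2, so offline bruteforcing of the password alone is no longer sufficient.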
Here's a diagram that visualizes this process:

Summary

Android has included full disk encryption (FDE) support since version 3.0, but versions prior to 4.4 used a fairly easy-to-bruteforce key derivation function (PBKDF2 with 2000 iterations). Additionally, because the disk encryption password is the same as the lockscreen one, most users tend to use simple PINs or passwords (unless a device administrator enforces password complexity rules), which further facilitates bruteforcing. Android 4.4 replaced the disk encryption KDF with scrypt, which is much harder to crack and cannot be implemented efficiently on off-the-shelf GPU hardware. In addition to enabling FDE out of the box, Android L is expected to include hardware protection for disk encryption keys, as well as hardware acceleration for encrypted disk access. These two features should make FDE on Android both more secure and much faster.

[Note that the discussion in this post is based on "stock Android" as released by Google (referenced source code is from AOSP). Some device vendors implement slightly different encryption schemes, and hardware-backed key storage and/or hardware acceleration are already available via vendor extensions on some high-end devices.]

Wednesday 23 July 2014

Secure voice communication on Android

While the topic of secure voice communication on mobile is hardly new, it has been getting a lot of media attention following the official release of the Blackphone. Consequently, this is a good time to go back to basics and look into how secure voice communication is typically implemented. While this post focuses on Android, most of the discussion applies to other platforms too, with only the mobile clients presented being Android specific.

Voice over IP

Modern mobile networks already encrypt phone calls, so voice communication is secure by default, right? As it turns out, the original GSM encryption protocol (A5/1) is quite weak and can be attacked with readily available hardware and software. The somewhat more modern alternative (A5/3) is also not without flaws, and in addition its adoption has been fairly slow, especially in some parts of the world. Finally, mobile networks depend on a shared key, which while protected by hardware (UICC/SIM card) on mobile phones, can be obtained from MNOs (via legal or other means) and used to enable call interception and decryption.

So what's the alternative? Short of building your own cellular network, the alternative is to use the data connectivity of the device to transmit and receive voice. This strategy is known as Voice over IP (VoIP) and has been around for a while, but the data speeds offered by mobile networks have only recently reached levels that make it practical on mobiles.

Session Initiation Protocol

Different technologies and standards that enable VoIP are available, but by far the most widely adopted one relies on the Session Initiation Protocol (SIP). As the name implies, SIP is a signalling protocol whose purpose is to establish a media session between endpoints. A session is established by discovering the remote endpoint(s), negotiating a media path and codec, and establishing one or more media streams between the endpoints. Media negotiation is achieved with the help of the Session Description Protocol (SDP), and media streams are typically transmitted using the Real-time Transport Protocol (RTP). While a SIP client, or more correctly a user agent (UA), can connect directly to a peer, peer discovery usually makes use of one or more well-known registrars. A registrar is a SIP endpoint (server) which accepts REGISTER requests from a set of clients in the domain(s) it is responsible for, and offers a location service to interested parties, much like DNS. Registration is dynamic and temporary: each client registers its SIP URI and IP address with the registrar, thus making it possible for other peers to discover it for the duration of the registration period. The SIP URI can contain arbitrary alphanumeric characters (much like an email address), but the username part is typically limited to numbers for backward compatibility with existing networks and devices (e.g., sip:0123456789@mydomain.org).

A SIP call is initiated by a UA sending an INVITE message specifying the target peer, which might be mediated by multiple SIP 'servers' (registrars and/or proxies). Once a media path has been negotiated, the two endpoints (Phone A and Phone B in the figure below) might communicate directly (as shown in the figure) or via one or more media proxies which help bridge SIP clients that don't have a publicly routable IP address (such as those behind NAT), implement conferencing, etc.


SIP on mobiles

Because SIP calls are ultimately routed using the registered IP address of the target peer, arguably SIP is not very well suited for mobile clients. In order to receive calls, clients need to remain online even when not actively used and keep a constant IP address for fairly long periods of time. Additionally, because public IP addresses are rarely assigned to mobile clients, establishing a direct media channel between two mobile peers can be challenging. The online presence problem is typically solved by using a complementary, low-overhead signalling mechanism such as Google Cloud Messaging (GCM) for Android in order to "wake up" the phone before it can receive a call. The requirement for a stable IP address is typically handled by shorter registration times and by triggering registration each time the connectivity of the device changes (e.g., when going from LTE to WiFi). The lack of a public IP address is usually overcome by using various supporting methods, ranging from querying STUN servers to discover the external public IP address of a peer, to media proxy servers which bridge connections between heavily NAT-ed clients. By combining these and other techniques, a well-implemented SIP client can offer an alternative voice communication channel on a mobile phone, while integrating with the OS and keeping resource usage fairly low.

Most Android devices have included a built-in SIP client as part of the framework since version 2.3 in the android.net.sip package. However, the interface offered by this package is very high level, offers few options and does not really support extension or customization. Additionally, it hasn't received any new features since the initial release, and, most importantly, is optional and therefore unavailable on some devices. For this reason, most popular SIP clients for Android are implemented using third party libraries such as PJSIP, which support advanced SIP features and offer a more flexible interface.

Securing SIP

As mentioned above, SIP is a signalling protocol. As such, it does not carry any voice data, only information related to setting up media channels. A SIP session includes information about each of the peers and any intermediate servers, including IP addresses, supported codecs, user agent strings, etc. Therefore, even if the media channel is encrypted, and the contents of a voice call cannot be easily recovered, the information contained in the accompanying SIP messages -- who called whom, where the call originated from and when, can be equally important or damaging. Additionally, as we'll show in the next section, SIP can be used to negotiate keys for media channel encryption, in which case intercepting SIP messages can lead to recovering plaintext voice data.

SIP is a transport-independent text-based protocol, similar to HTTP, which is typically transmitted over UDP. When transmitted over an unencrypted channel, it can easily be intercepted using standard packet capture software or dumped to a log file at any of the intermediate nodes a SIP message traverses before reaching its destination. Multiple tools that can automatically correlate SIP messages with the associated media streams are readily available. This lack of inherent security features requires that SIP be secured by protecting the underlying transport channel.

VPN

A straightforward method to secure SIP is to use a VPN to connect peers. Because most VPNs support encryption, signalling, as well as media streams tunneled through the VPN are automatically protected. As an added benefit, using a VPN can solve the NAT problem by offering directly routable private addresses to peers. Using a VPN works well for securing VoIP trunks between SIP servers which are linked using a persistent, low-latency and high-bandwidth connection. However, the overhead of a VPN connection on mobile devices can be too great to sustain a voice channel of even average quality. Additionally, using a VPN can result in highly variable latency (jitter), which can deteriorate voice quality even if jitter buffers are used. That said, many Android SIP clients can be set up to automatically use a VPN if available. The underlying VPN used can be anything supported on Android, for example the built-in IPSec VPN or a third-party VPN such as OpenVPN. However, even if a VPN provides tolerable voice quality, typically it only ensures an encrypted tunnel to a SIP proxy, and there are no guarantees that any SIP messages or voice streams that leave the proxy are encrypted. That said, a VPN can be a usable solution, if all calls are terminated within a trusted private network (such as a corporate network).

Secure SIP

Because SIP is transport-independent it can be transmitted over any supported protocol, including a connection-oriented one such as TCP. When using TCP, a secure channel between SIP peers can be established with the help of the standard TLS protocol. Peer authentication is handled in the usual manner -- using PKI certificates, which allow for mutual authentication. However, because a SIP message typically traverses multiple servers until it reaches its final destination, there is no guarantee that the message will be always encrypted. In other words, SIP-over-TLS, or secure SIP, does not provide end-to-end security but only hop-to-hop security.

SIP-over-TLS is relatively well supported by all major SIP servers, including open source ones like Asterisk and FreeSWITCH. For example, enabling SIP-over-TLS in Asterisk requires generating a key and certificate, configuring a few global tls options, and finally requiring peers to use TLS when connecting to the server as described here. However, Asterisk does not currently support client authentication for SIP clients (although there is some limited support for client authentication on trunk lines).

Most popular Android clients support using the TLS transport for SIP, with some limitations. For example, the popular open source CSipSimple client supports TLS, but only version 1.0 (as well as SSL v2/v3). Additionally, it does not use Android's built-in certificate and key stores, but requires certificates to be saved on external storage in PEM format. Both limitations are due to the underlying PJSIP library, which is built using OpenSSL and requires keys and certificates to be stored as files in OpenSSL's native format. Finally, server identity is not verified by default; the check needs to be explicitly enabled, as shown in the screenshot below.


Another popular VoIP client, Zoiper, doesn't use a pre-initialized trust store at all, but requires peer certificates to be manually confirmed and cached for each SIP server. The commercial Bria Android client (by CounterPath) does use the system trust store and automatically verifies peer identity.

When a secure SIP connection to a peer is established, VoIP clients indicate this on the call setup and call screens as shown in the CSipSimple screenshot below.


SIP Alternatives

While SIP is a widely adopted standard, it is also quite complex and supports many extensions that are not particularly useful in a mobile environment. Instead of SIP, the RedPhone secure VoIP client uses a simple custom signalling protocol based on a RESTful HTTP API (with some additional verbs). The protocol is secured using TLS with server certificates issued by a private CA, which RedPhone clients implicitly trust.

Securing the media channel

As mentioned in our brief SIP introduction,  the media channel between peers is usually implemented using the RTP protocol. Because the media channel is completely separated from SIP, even if all signalling is carried out over TLS, media streams are unprotected by default. RTP streams can be secured using the Secure RTP (SRTP) profile of the RTP protocol. SRTP is designed to provide confidentiality, message authentication, and replay protection to the underlying RTP streams, as well as to the supporting RTCP protocol. SRTP uses a symmetric cipher, typically AES in counter mode, to provide confidentiality and a message authentication code (MAC), typically HMAC-SHA1, to provide packet integrity. Replay protection is implemented by maintaining a replay list which received packets are checked against to detect possible replay.

When a voice channel is encrypted using SRTP the transmitted data looks like random noise (as any encrypted data should), as shown below.



SRTP defines a pseudo-random function (PRF) which is used to derive the session keys (used for encryption and authentication) from a master key and master salt. What SRTP does not specify is how the master key and salt should be obtained or exchanged between peers.

SDES

SDP Security Descriptions for Media Streams (SDES) is an extension to the SDP protocol which adds a media attribute that can be used to negotiate a key and other cryptographic parameters for SRTP. The attribute is simply called crypto and can contain a crypto suite, key parameters, and, optionally, session parameters. A crypto attribute which includes a crypto suite and key parameters might look like this:

a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:VozD8O2kcDFeclWMjBOwvVxN0Bbobh3I6/oxWYye

Here AES_CM_128_HMAC_SHA1_80 is a crypto suite which uses AES in counter mode with an 128-bit key for encryption and produces an 80-bit SRTP authentication tag using HMAC-SHA1. The Base64-encoded value that follows the crypto suite string contains the master key (128 bits) concatenated with the master salt (112 bits) which are used to derive SRTP session keys.
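Extracting the SRTP keying material from such an attribute is straightforward. The following sketch handles only the basic form shown above, ignoring the optional lifetime/MKI fields and session parameters:

```python
import base64

def parse_sdes_crypto(attr):
    """Parse an SDP 'a=crypto' attribute: tag, crypto suite, key params."""
    tag, suite, key_params = attr.split("a=crypto:", 1)[1].split()[:3]
    b64 = key_params.split("inline:", 1)[1].split("|")[0]
    # the Base64 value may omit padding; restore it before decoding
    material = base64.b64decode(b64 + "=" * (-len(b64) % 4))
    # for AES_CM_128_HMAC_SHA1_80: 16-byte master key || 14-byte master salt
    return suite, material[:16], material[16:30]

suite, master_key, master_salt = parse_sdes_crypto(
    "a=crypto:1 AES_CM_128_HMAC_SHA1_80 "
    "inline:VozD8O2kcDFeclWMjBOwvVxN0Bbobh3I6/oxWYye")
```

Anyone who can read the SDP body, including every intermediate SIP server, can run exactly this extraction, which is why SDES is only as secure as the signalling channel that carries it.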

SDES does not provide any protection or authentication of the cryptographic parameters it includes, and is therefore only secure when used in combination with SIP-over-TLS (or another secure signalling transport). SDES is widely supported by both SIP servers, hardware SIP phones and software clients. For example, in Asterisk enabling SDES and SRTP is as simple as adding encryption=yes to the peer definition. Most Android SIP clients support SDES and can automatically enable SRTP for the media channel when the INVITE SIP message includes the crypto attribute. For example, in the CSipSimple screenshot above the master key for SRTP was received via SDES.

The main advantage of SDES is its simplicity. However, it requires that all intermediate servers are trusted, because they have access to the SDP data that includes the master key. Even though the SRTP media stream might be transmitted directly between two peers, SRTP effectively provides only hop-to-hop security, because compromising any of the intermediate SIP servers can result in recovering the master key and eventually the session keys. For example, if the private key of a SIP server involved in SDES key exchange is compromised, and the TLS session that carried the SIP messages did not use forward secrecy, the master key can easily be extracted from a packet capture using Wireshark, as shown below.


ZRTP

ZRTP aims to provide end-to-end security for SRTP media streams by using the media channel to negotiate encryption keys directly between peers. It is essentially a key agreement protocol based on a Diffie-Hellman exchange with added Man-in-the-Middle (MiTM) protections. MiTM protection relies on so-called "short authentication strings" (SAS), which are derived from the session key and are displayed to each calling party. The parties need to confirm that they see the same SAS by reading it to each other over the phone. As an additional MiTM protection, ZRTP uses a form of key continuity, which mixes previously negotiated key material into the shared secret obtained using Diffie-Hellman when deriving session keys. Thus ZRTP does not require a secure signalling channel or a PKI in order to establish an SRTP session key or protect against MiTM attacks.

On Android, ZRTP is supported both by VoIP clients for dedicated services such as RedPhone and Silent Phone, and by general-purpose SIP clients like CSipSimple. On the server side, ZRTP is supported by both FreeSWITCH and Kamailio (but not by Asterisk), so it is fairly easy to set up a test server and test ZRTP support on Android.

ZRTP support in CSipSimple can be configured on a per-account basis by setting the ZRTP mode option to "Create ZRTP". It must be noted, however, that ZRTP encryption is opportunistic and will fall back to cleartext communication if the remote peer does not support ZRTP. When the remote party does support ZRTP, CSipSimple shows an SAS confirmation dialog only the first time you connect to a particular peer, and then displays the SAS and encryption scheme in the call dialog as shown below.


In this case, the voice channel is direct and ZRTP/SRTP provide end-to-end security. However, the SIP proxy server can also establish a separate ZRTP/SRTP channel with each party and proxy the media streams. In this case, the intermediate server has access to unencrypted media streams and the provided security is only hop-to-hop, as when using SDES. For example, when FreeSWITCH establishes a separate media channel with two parties that use ZRTP, CSipSimple will display the following dialog, and the SAS values at the two clients won't match because each client uses a separate session key. Unfortunately, this is not immediately apparent to end users, who may not be familiar with the meaning of the "EndAtMitM" string that signifies this.


The ZRTP protocol supports a "trusted MiTM" mode which allows clients to verify the intermediate server after completing a key enrollment procedure which establishes a shared key between the client and a particular server. This feature is supported by FreeSWITCH, but not by common Android clients, including CSipSimple.

Summary

Android supports the SIP protocol natively, but the provided APIs are restrictive and do not support advanced VoIP features such as media channel encryption. Most major SIP client apps support voice encryption using SRTP and either SDES or ZRTP for key negotiation. Popular open source SIP servers such as Asterisk and FreeSWITCH also support SRTP, SDES, and ZRTP and make it fairly easy to build a small scale secure VoIP network that can be used by Android clients. Hopefully, the Android framework will be extended to include the features required to implement secure voice communication without using third party libraries, and integrate any such features with other security services provided by the platform.

Thursday 8 May 2014

Using KitKat verified boot

Android 4.4 introduced a number of security enhancements, most notably SELinux in enforcing mode. One security feature that initially got some press attention, because it was presumably aiming to 'end all custom firmware', but hasn't been described in much detail, is verified boot. This post will briefly explain how verified boot works and then show how to configure and enable it on a Nexus device.

Verified boot with dm-verity

Android's verified boot implementation is based on the dm-verity device-mapper block integrity checking target. Device-mapper is a Linux kernel framework that provides a generic way to implement virtual block devices. It is used to implement volume management (LVM), full-disk encryption (dm-crypt), RAIDs and even distributed replicated storage (DRBD). Device-mapper works by essentially mapping a virtual block device to one or more physical block devices, optionally modifying transferred data in transit. For example, dm-crypt decrypts read physical blocks and encrypts written blocks before committing them to disk. Thus disk encryption is transparent to users of the virtual dm-crypt block device. Device-mapper targets can be stacked on top of each other, making it possible to implement complex data transformations. 

As we mentioned, dm-verity is a block integrity checking target. What this means is that it transparently verifies the integrity of each device block as it is being read from disk. If the block checks out, the read succeeds, and if not -- the read generates an I/O error as if the block was physically corrupt. Under the hood dm-verity is implemented using a pre-calculated hash tree which includes the hashes of all device blocks. The leaf nodes of the tree include hashes of physical device blocks, while intermediate nodes are hashes of their child nodes (hashes of hashes). The root node is called the root hash and is based on all hashes in lower levels (see figure below). Thus a change even in a single device block will result in a change of the root hash. Therefore in order to verify a hash tree we only need to verify its root hash. At runtime dm-verity calculates the hash of each block when it is read and verifies it using the pre-calculated hash tree. Since reading data from a physical device is already a time consuming operation, the latency added by hashing and verification is relatively low.
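The hash tree construction can be illustrated in a few lines of Python. This is a conceptual sketch only: real dm-verity additionally salts each hash, pads every tree level to the block size, and uses a configurable hash algorithm and block size:

```python
import hashlib

def root_hash(blocks):
    """Build a binary hash tree bottom-up and return its root hash."""
    level = [hashlib.sha256(b).digest() for b in blocks]  # leaf nodes
    while len(level) > 1:
        # each parent node is the hash of its (up to two) concatenated children
        level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [bytes([i]) * 4096 for i in range(8)]  # eight 4 KiB "device blocks"
good = root_hash(blocks)
blocks[5] = b"tampered".ljust(4096, b"\x00")    # modify a single block
assert root_hash(blocks) != good                # the root hash changes
```

The assertion at the end demonstrates the key property: verifying the (signed) root hash is enough to detect a change to any block in the tree.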

[Image from Android dm-verity documentation,  licensed under Creative Commons Attribution 2.5]

Because dm-verity depends on a pre-calculated hash tree over all blocks of a device, the underlying device needs to be mounted read-only for verification to be possible. Most filesystems record mount times in their superblock or similar metadata, so even if no files are changed at runtime, block integrity checks will fail if the underlying block device is mounted read-write. This can be seen as a limitation, but it works well for devices or partitions that hold system files, which are only changed by OS updates. Any other change indicates either OS or disk corruption, or a malicious program that is trying to modify the OS or masquerade as a system file. dm-verity's read-only requirement also fits well with Android's security model, which only hosts application data on a read-write partition, and keeps OS files on the read-only system partition.

Android implementation

dm-verity was originally developed in order to implement verified boot in Chrome OS, and was integrated into the Linux kernel in version 3.4. It is enabled with the CONFIG_DM_VERITY kernel configuration item. Like Chrome OS, Android 4.4 also uses the kernel's dm-verity target, but the cryptographic verification of the root hash and mounting of verified partitions are implemented differently from Chrome OS.

The RSA public key used for verification is embedded in the boot partition under the verity_key filename and is used to verify the dm-verity mapping table. This mapping table holds the location of the target device and the offset of the hash tree, as well as the root hash and salt. The mapping table and its signature are part of the verity metadata block, which is written to disk directly after the last filesystem block of the target device. A partition is marked as verifiable by adding the verify flag to the Android-specific fs_mgr flags field of the device's fstab file. When Android's filesystem manager encounters the verify flag in fstab, it loads the verity metadata from the block device specified in fstab and verifies its signature using the verity_key. If the signature check succeeds, the filesystem manager parses the dm-verity mapping table and passes it to the Linux device-mapper, which uses the information contained in the mapping table to create a virtual dm-verity block device. This virtual block device is then mounted at the mount point specified in fstab in place of the corresponding physical device. As a result, all reads from the underlying physical device are transparently verified against the pre-generated hash tree. Modifying or adding files, or even remounting the partition in read-write mode, results in an integrity verification failure and an I/O error.

We must note that as dm-verity is a kernel feature, in order for the integrity protection it provides to be effective, the kernel the device boots needs to be trusted. On Android, this means verifying the boot partition, which also includes the root filesystem RAM disk (initrd) and the verity public key. This process is device-specific and is typically implemented in the device bootloader, usually by using an unmodifiable verification key stored in hardware to verify the boot partition's signature.

Enabling verified boot

The official documentation describes the steps required to enable verified boot on Android, but lacks concrete information about the actual tools and commands that are needed. In this section we show the commands required to create and sign a dm-verity hash table and demonstrate how to configure an Android device to use it. Here is a summary of the required steps: 
  1. Generate a hash tree for the system partition.
  2. Build a dm-verity table for the hash tree.
  3. Sign the dm-verity table to produce a table signature.
  4. Bundle the table signature and dm-verity table into verity metadata.
  5. Write the verity metadata and the hash tree to the system partition.
  6. Enable verified boot in the device's fstab file.
As we mentioned earlier, dm-verity can only be used with a device or partition that is mounted read-only at runtime, such as Android's system partition. While verified boot can be applied to other read-only partitions, such as those hosting proprietary firmware blobs, this example uses the system partition, as protecting OS files results in considerable device security benefits. 

A dm-verity hash tree is generated with the dedicated veritysetup program. veritysetup can operate directly on block devices or use filesystem images and write the hash tree to a file. It is supposed to produce platform-independent output, but hash trees produced on desktop Linux didn't quite agree with Android, so for this example we'll generate the hash tree directly on the device. To do this we first need to compile veritysetup for Android. A project that generates a statically linked veritysetup binary is provided on Github. It uses the OpenSSL backend for hash calculations and has only been slightly modified (in a not too portable way...) to allow for the different size of the off_t data type, which is 32-bit in current versions of Android's bionic library. 

In order to add the hash tree directly to the system partition, we first need to make sure that there is enough space to hold the hash tree and the verity metadata block (32k) after the last filesystem block. As most devices typically use the whole system partition, you may need to modify the BOARD_SYSTEMIMAGE_PARTITION_SIZE value in your device's BoardConfig.mk to allow for storing verity data. After you have adjusted the size of the system partition, transfer the veritysetup binary to the cache or data partitions of the device, and boot a recovery that allows root shell access over ADB. To generate and write the hash tree to the device we use the veritysetup format command as shown below.
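When adjusting BOARD_SYSTEMIMAGE_PARTITION_SIZE, it helps to estimate how much extra space the hash tree itself needs. With SHA-256, each 4096-byte hash block holds 128 digests, so each tree level is roughly 1/128th the size of the level below it. The sketch below is only an approximation (veritysetup additionally writes a superblock and may pad levels):

```python
import math

BLOCK_SIZE = 4096
HASH_SIZE = 32                               # sha256 digest size
HASHES_PER_BLOCK = BLOCK_SIZE // HASH_SIZE   # 128 digests per hash block

def tree_blocks(data_blocks):
    # sum the sizes of all hash tree levels above the data blocks
    total, level = 0, data_blocks
    while level > 1:
        level = math.ceil(level / HASHES_PER_BLOCK)
        total += level
    return total
```

For a 204800-block filesystem this comes to 1614 hash blocks (about 6.3 MB), so reserving a few extra megabytes beyond the filesystem and the 32k metadata block is sufficient.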

# veritysetup --debug --hash-offset 838893568 --data-blocks 204800 format \
/dev/block/mmcblk0p21 /dev/block/mmcblk0p21
...
# Updating VERITY header of size 512 on device /dev/block/mmcblk0p21, offset 838893568.
VERITY header information for /dev/block/mmcblk0p21
UUID: 0dd970aa-3150-4c68-abcd-0b8286e6000
Hash type: 1
Data blocks: 204800
Data block size: 4096
Hash block size: 4096
Hash algorithm: sha256
Salt: 1f951588516c7e3eec3ba10796aa17935c0c917475f8992353ef2ba5c3f47bcb
Root hash: 5f061f591b51bf541ab9d89652ec543ba253f2ed9c8521ac61f1208267c3bfb1

This example was executed on a Nexus 4; make sure you use the correct block device for your phone instead of /dev/block/mmcblk0p21. The --hash-offset parameter is needed because we are writing the hash tree to the same device that holds the filesystem data. It is specified in bytes (not blocks) and needs to point to a location after the verity metadata block, so adjust it according to your filesystem size such that hash_offset > filesystem_size + 32k. The next parameter, --data-blocks, specifies the number of blocks used by the filesystem. The default block size is 4096, but you can specify a different size using the --data-block-size parameter. This value needs to match the size allocated to the filesystem with BOARD_SYSTEMIMAGE_PARTITION_SIZE. If the command succeeds, it will output the calculated root hash and the salt value used, as shown above. Everything but the root hash is saved in the superblock (first block) of the hash tree. Make sure you save the root hash, as it is required to complete the verity setup.
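The --hash-offset value used in the command above can be sanity-checked with simple arithmetic: the hash tree must start after the last filesystem block plus the 32k verity metadata block. For the 204800-block filesystem in this example:

```python
BLOCK_SIZE = 4096
data_blocks = 204800                     # filesystem size in blocks

fs_size = data_blocks * BLOCK_SIZE       # bytes occupied by the filesystem
metadata_size = 32 * 1024                # 32k verity metadata block
hash_offset = fs_size + metadata_size    # first byte of the hash tree
print(hash_offset)                       # 838893568
```

This matches the value passed to veritysetup above; if your filesystem uses a different block count or block size, recompute the offset accordingly.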

Once you have the root hash and salt, you can generate and sign the dm-verity table. The table is a single line that contains the name of the block device, block sizes, offsets, salt and root hash values. You can use the gentable.py script (edit constant values accordingly first) to generate it or write it manually based on the output of veritysetup. See dm-verity's documentation for details about the format. For our example it looks like this (single line, split for readability):

1 /dev/block/mmcblk0p21 /dev/block/mmcblk0p21 4096 4096 204800 204809 sha256 \
5f061f591b51bf541ab9d89652ec543ba253f2ed9c8521ac61f1208267c3bfb1 \
1f951588516c7e3eec3ba10796aa17935c0c917475f8992353ef2ba5c3f47bcb
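If you prefer not to edit gentable.py, the table line is simple enough to assemble yourself. The sketch below is a hypothetical helper that reproduces the example line; note that the hash start block is the data block count plus the 32k metadata block (8 blocks at 4096 bytes) plus one more block for veritysetup's superblock:

```python
def gentable(device, data_blocks, root_hash, salt, block_size=4096):
    metadata_blocks = (32 * 1024) // block_size      # verity metadata block
    hash_start = data_blocks + metadata_blocks + 1   # +1 skips the hash superblock
    fields = ["1",                  # dm-verity table version
              device, device,       # data device and hash device (same device here)
              str(block_size),      # data block size
              str(block_size),      # hash block size
              str(data_blocks),     # number of data blocks
              str(hash_start),      # first block of the hash tree on the hash device
              "sha256", root_hash, salt]
    return " ".join(fields)
```

Called with the device, block count, root hash and salt from the veritysetup output above, this produces exactly the single-line table shown.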

Next, generate a 2048-bit RSA key and sign the table using OpenSSL. You can use the command below or the sign.sh script on Github.

$ openssl dgst -sha1 -sign verity-key.pem -out table.sig table.bin

Once you have a signature, you can generate the verity metadata block, which includes a magic number (0xb001b001) and the metadata format version, followed by the RSA PKCS#1 v1.5 signature blob and the table string, padded with zeros to 32k. You can generate the metadata block with the mkverity.py script by passing the signature and table files like this:

$ ./mkverity.py table.sig table.bin verity.bin
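For reference, the metadata block can also be assembled by hand. The field order and widths below are assumptions based on the description above (magic, version, 256-byte signature, table length, table, zero padding); mkverity.py remains the authoritative implementation:

```python
import struct

VERITY_MAGIC = 0xb001b001   # magic number from the description above
VERITY_VERSION = 0          # assumed metadata format version

def build_metadata(signature, table):
    # little-endian magic and version, then the 256-byte PKCS#1 v1.5
    # signature blob, the table length and the table itself,
    # zero-padded to the full 32k metadata block
    header = struct.pack("<II", VERITY_MAGIC, VERITY_VERSION)
    blob = header + signature + struct.pack("<I", len(table)) + table
    return blob.ljust(32 * 1024, b"\x00")
```

The result is always exactly 32k, matching the space reserved between the last filesystem block and the start of the hash tree.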

Next, write the generated verity.bin file to the system partition using dd or a similar tool, right after the last filesystem block and before the start of the verity hash tree. Using the same number of data blocks passed to veritysetup, the needed command (which also needs to be executed in recovery) becomes:

# dd if=verity.bin of=/dev/block/mmcblk0p21 bs=4096 seek=204800

Finally, you can check that the partition is properly formatted using the veritysetup verify command as shown below, where the last parameter is the root hash:

# veritysetup --debug --hash-offset 838893568 --data-blocks 204800 verify \
/dev/block/mmcblk0p21 /dev/block/mmcblk0p21 \
5f061f591b51bf541ab9d89652ec543ba253f2ed9c8521ac61f1208267c3bfb1

If verification succeeds, reboot and check that the device boots without errors. If it does, you can proceed to the next step: add the verification key to the boot image and enable automatic integrity verification.

The RSA public key used for verification needs to be in mincrypt format (also used by the stock recovery when verifying OTA file signatures), which is a serialization of mincrypt's RSAPublicKey structure. The interesting thing about this structure is that it doesn't simply include the modulus and public exponent values, but contains pre-computed values used by mincrypt's RSA implementation (based on Montgomery reduction). Therefore converting an OpenSSL RSA public key to mincrypt format requires some modular operations and is not simply a binary format conversion. You can convert the PEM key using the pem2mincrypt tool (conversion code shamelessly stolen from secure adb's implementation). Once you have converted the key, include it in the root of your boot image under the verity_key filename. The last step is to modify the device's fstab file in order to enable block integrity verification for the system partition. This is simply a matter of adding the verify flag, as shown below:

/dev/block/platform/msm_sdcc.1/by-name/system  /system  ext4  ro,barrier=1  wait,verify
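As for the mincrypt key conversion mentioned above, the Montgomery-related values themselves are straightforward modular arithmetic. The sketch below shows the precomputation; the names n0inv and rr follow mincrypt's RSAPublicKey structure, while the exact serialization (word order, lengths) is what pem2mincrypt handles:

```python
def mincrypt_precompute(n, key_bits=2048, word_bits=32):
    # n0inv = -1 / n mod 2^32: lets Montgomery reduction cancel the low word
    n0inv = (-pow(n, -1, 1 << word_bits)) % (1 << word_bits)
    # rr = R^2 mod n with R = 2^key_bits: converts operands into Montgomery form
    rr = (1 << (2 * key_bits)) % n
    return n0inv, rr
```

Because n0inv depends only on the low 32-bit word of the modulus, it can equivalently be computed from n mod 2^32 (requires Python 3.8+ for the modular inverse via pow()).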

Next, verify that your kernel configuration enables CONFIG_DM_VERITY (enable it if needed) and build your boot image. Once you have boot.img, you can try booting the device with it using fastboot boot boot.img (without flashing it). If the hash tree and verity metadata block have been generated and written correctly, the device should boot, and /system should be a mount of the automatically created device-mapper virtual device, as shown below. If the boot is successful, you can permanently flash the boot image to the device.

# mount|grep system
/dev/block/dm-0 /system ext4 ro,seclabel,relatime,data=ordered 0 0

Now any modifications to the system partition will result in read errors when reading the corresponding file(s). Unfortunately, system modifications by file-based OTA updates, which modify file blocks without updating verity metadata, will also invalidate the hash tree. As mentioned in the official documentation, in order to be compatible with dm-verity verified boot, OTA updates should also operate at the block level, ensuring that both file blocks and the hash tree and metadata are updated. This requires changing the current OTA update infrastructure, which is probably one of the reasons verified boot hasn't been deployed to production devices yet.

Summary

Since version 4.4, Android has included a verified boot implementation based on the dm-verity device-mapper target. dm-verity is enabled by adding a hash tree and a signed metadata block to the system partition and specifying the verify flag in the device's fstab file. At boot time, Android verifies the metadata signature and uses the included device-mapper table to create and mount a virtual block device at /system. As a result, all reads from /system are verified against the dm-verity hash tree, and any modification to the system partition results in I/O errors.