
Posted by Vikrant Nanda and René Mayrhofer, Android Security & Privacy Team

[Cross-posted from the Android Developers Blog]


There is no better time to talk about Android dessert releases than the holidays because who doesn't love dessert? And what is one of our favorite desserts during the holiday season? Well, pie of course.

In all seriousness, pie is a great analogy because of how the various ingredients turn into multiple layers of goodness: right from the software crust on top to the hardware layer at the bottom. Read on for a summary of security and privacy features introduced in Android Pie this year.
Platform hardening
With Android Pie, we updated File-Based Encryption to support external storage media (such as expandable storage cards). We also introduced support for metadata encryption where hardware support is present. With filesystem metadata encryption, a single key present at boot time encrypts whatever content is not encrypted by file-based encryption (such as directory layouts, file sizes, permissions, and creation/modification times).

Android Pie also introduced a BiometricPrompt API that apps can use to provide biometric authentication dialogs (such as a fingerprint prompt) on a device in a modality-agnostic fashion. This functionality creates a standardized look, feel, and placement for the dialog. This kind of standardization gives users more confidence that they're authenticating against a trusted biometric credential checker.
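As a rough illustration, here is a minimal Kotlin sketch of the framework BiometricPrompt API (API 28); the strings and callback handling are illustrative, not prescriptive:

    import android.app.Activity
    import android.content.DialogInterface
    import android.hardware.biometrics.BiometricPrompt
    import android.os.CancellationSignal

    // Minimal sketch: show a modality-agnostic biometric dialog.
    fun authenticateUser(activity: Activity) {
        val prompt = BiometricPrompt.Builder(activity)
            .setTitle("Confirm your identity")
            .setNegativeButton("Cancel", activity.mainExecutor,
                DialogInterface.OnClickListener { _, _ -> /* user cancelled */ })
            .build()

        prompt.authenticate(CancellationSignal(), activity.mainExecutor,
            object : BiometricPrompt.AuthenticationCallback() {
                override fun onAuthenticationSucceeded(
                    result: BiometricPrompt.AuthenticationResult) {
                    // Proceed with the protected action.
                }
                override fun onAuthenticationError(code: Int, msg: CharSequence) {
                    // Unrecoverable error (lockout, no enrolled biometrics, ...).
                }
            })
    }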

New protections and test cases for the Application Sandbox help ensure all non-privileged apps targeting Android Pie (and all future releases of Android) run in stronger SELinux sandboxes. By providing per-app cryptographic authentication to the sandbox, this protection improves app separation, prevents overriding safe defaults, and (most significantly) prevents apps from making their data widely accessible.
Anti-exploitation improvements
With Android Pie, we expanded our compiler-based security mitigations, which instrument runtime operations to fail safely when undefined behavior occurs.

Control Flow Integrity (CFI) is a security mechanism that disallows changes to the original control flow graph of compiled code. In Android Pie, it has been enabled by default within the media frameworks and other security-critical components, such as the Near Field Communication (NFC) and Bluetooth protocol stacks. We also implemented support for CFI in the Android common kernel, continuing the kernel hardening efforts of previous Android releases.

Integer Overflow Sanitization is a security technique used to mitigate memory corruption and information disclosure vulnerabilities caused by integer operations. We've expanded our use of these sanitizers by enabling them in libraries where complex untrusted input is processed or where security vulnerabilities have been reported.
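To make this concrete, here is a hedged sketch of how an AOSP module opts into both mitigations in a Soong Android.bp build file (the module and source names are illustrative):

    cc_library {
        name: "libexample_parser",   // illustrative module name
        srcs: ["parser.cpp"],
        sanitize: {
            cfi: true,               // Control Flow Integrity
            integer_overflow: true,  // signed/unsigned overflow sanitizers
        },
    }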
Continued investment in hardware-backed security

One of the highlights of Android Pie is Android Protected Confirmation, the first major mobile OS API that leverages a hardware-protected user interface (Trusted UI) to perform critical transactions completely outside the main mobile operating system. Developers can use this API to display a trusted UI prompt to the user, requesting approval via a physical protected input (such as a button on the device). The resulting cryptographically signed statement allows the relying party to reaffirm that the user would like to complete a sensitive transaction through their app.
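A minimal Kotlin sketch of the developer-facing side (API 28); the prompt text is illustrative, and a confirmation-bound Keystore key is assumed to exist already:

    import android.content.Context
    import android.security.ConfirmationCallback
    import android.security.ConfirmationPrompt

    // Minimal sketch: ask the Trusted UI to confirm a transaction.
    fun confirmTransfer(context: Context, extraData: ByteArray) {
        if (!ConfirmationPrompt.isSupported(context)) return  // no Trusted UI

        val prompt = ConfirmationPrompt.Builder(context)
            .setPromptText("Send $100 to Alice?")
            .setExtraData(extraData)  // opaque data bound into the signed statement
            .build()

        prompt.presentPrompt(context.mainExecutor, object : ConfirmationCallback() {
            override fun onConfirmed(dataThatWasConfirmed: ByteArray) {
                // Sign dataThatWasConfirmed with a confirmation-bound Keystore
                // key and send the signed statement to the relying party.
            }
            override fun onDismissed() { /* user declined the transaction */ }
        })
    }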

We also introduced support for a new Keystore type that provides stronger protection for private keys by leveraging tamper-resistant hardware with dedicated CPU, RAM, and flash memory. StrongBox Keymaster is an implementation of the Keymaster hardware abstraction layer (HAL) that resides in a hardware security module. This module is designed and required to have its own processor, secure storage, True Random Number Generator (TRNG), side-channel resistance, and tamper-resistant packaging.
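Requesting a StrongBox-backed key is a one-line change at key generation. A minimal sketch, assuming a device that ships the module (the alias is illustrative):

    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import java.security.KeyPair
    import java.security.KeyPairGenerator

    // Minimal sketch (API 28): throws StrongBoxUnavailableException when the
    // device has no StrongBox Keymaster.
    fun generateStrongBoxKey(): KeyPair {
        val kpg = KeyPairGenerator.getInstance(
            KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
        kpg.initialize(
            KeyGenParameterSpec.Builder("payment_key",
                KeyProperties.PURPOSE_SIGN or KeyProperties.PURPOSE_VERIFY)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .setIsStrongBoxBacked(true)  // require the tamper-resistant module
                .build())
        return kpg.generateKeyPair()
    }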

Other Keystore features (as part of Keymaster 4) include Keyguard-bound keys, Secure Key Import, 3DES support, and version binding. Keyguard-bound keys restrict key use to protect sensitive information while the device is locked. Secure Key Import facilitates secure key use while protecting key material from the application or operating system. You can read more about these features in our recent blog post as well as the accompanying release notes.
Enhancing user privacy

User privacy has been boosted with several behavior changes, such as limiting the access background apps have to the camera, microphone, and device sensors. New permission rules and permission groups have been created for phone calls, phone state, and Wi-Fi scans, as well as restrictions around information retrieved from Wi-Fi scans. We have also added associated MAC address randomization, so that a device can use a different network address when connecting to a Wi-Fi network.

On top of that, Android Pie added support for encrypting Android backups with the user's screen lock secret (that is, PIN, pattern, or password). By design, this means that an attacker would not be able to access a user's backed-up application data without specifically knowing their passcode. Auto backup for apps has been enhanced by providing developers a way to specify conditions under which their app's data is excluded from auto backup. For example, Android Pie introduces a new flag to determine whether a user's backup is client-side encrypted.
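As a hedged sketch of that flag (the paths and file names are illustrative), an app's full-backup-content rules might look like:

    <!-- res/xml/backup_rules.xml, referenced from android:fullBackupContent -->
    <full-backup-content>
        <!-- Back this file up only when the backup is client-side encrypted
             with the user's screen lock secret. -->
        <include domain="file" path="notes.db"
                 requireFlags="clientSideEncryption"/>
        <!-- Never back up this preferences file. -->
        <exclude domain="sharedpref" path="secrets.xml"/>
    </full-backup-content>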

As part of a larger effort to move all web traffic away from cleartext (unencrypted HTTP) and towards being secured with TLS (HTTPS), we changed the defaults for Network Security Configuration to block all cleartext traffic. We're protecting users with TLS by default, unless you explicitly opt in to cleartext for specific domains. Android Pie also adds built-in support for DNS over TLS, automatically upgrading DNS queries to TLS if a network's DNS server supports it. This protects information about IP addresses visited from being sniffed or intercepted at the network level.
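A minimal sketch of the corresponding Network Security Configuration, assuming a single legacy host that still needs cleartext (the domain is illustrative):

    <!-- res/xml/network_security_config.xml -->
    <network-security-config>
        <!-- Block cleartext everywhere by default. -->
        <base-config cleartextTrafficPermitted="false"/>
        <!-- Explicit, scoped opt-in for one legacy host. -->
        <domain-config cleartextTrafficPermitted="true">
            <domain includeSubdomains="true">legacy.example.com</domain>
        </domain-config>
    </network-security-config>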


We believe that the features described in this post advance the security and privacy posture of Android, but you don't have to take our word for it. Year after year our continued efforts are demonstrably resulting in better protection as evidenced by increasing exploit difficulty and independent mobile security ratings. Now go and enjoy some actual pie while we get back to preparing the next Android dessert release!



Acknowledgements: This post leveraged contributions from Chad Brubaker, Janis Danisevskis, Giles Hogben, Troy Kensinger, Ivan Lozano, Vishwath Mohan, Frank Salim, Sami Tolvanen, Lilian Young, and Shawn Willden.


Posted by Lilian Young and Shawn Willden, Android Security; and Frank Salim, Google Pay

[Cross-posted from the Android Developers Blog]

New Android Pie Keystore Features

The Android Keystore provides application developers with a set of cryptographic tools that are designed to secure their users' data. Keystore moves the cryptographic primitives available in software libraries out of the Android OS and into secure hardware. Keys are protected and used only within the secure hardware to protect application secrets from various forms of attacks. Keystore gives applications the ability to specify restrictions on how and when the keys can be used.
Android Pie introduces new capabilities to Keystore. We will be discussing two of these new capabilities in this post. The first enables restrictions on key use to protect sensitive information. The second facilitates secure key use while protecting key material from the application or operating system.

Keyguard-bound keys

There are times when a mobile application receives data but doesn't need to immediately access it if the user is not currently using the device. Sensitive information sent to an application while the device screen is locked must remain secure until the user wants access to it. Android Pie addresses this by introducing keyguard-bound cryptographic keys. While the screen is locked, these keys can be used in encryption or verification operations, but are unavailable for decryption or signing. If the device is currently locked with a PIN, pattern, or password, any attempt to decrypt or sign with these keys will fail. Keyguard-bound keys protect the user's data while the device is locked, and are only available when the user needs them.
Keyguard binding and authentication binding function in similar ways, with one important difference: keyguard binding ties the availability of keys directly to the screen lock state, while authentication binding uses a constant timeout. With keyguard binding, the keys become unavailable as soon as the device is locked and are only made available again when the user unlocks the device.
It is worth noting that keyguard binding is enforced by the operating system, not the secure hardware. This is because the secure hardware has no way to know when the screen is locked. Hardware-enforced Android Keystore protection features, like authentication binding, can be combined with keyguard binding for a higher level of security. Furthermore, since keyguard binding is an operating system feature, it's available to any device running Android Pie.
Keys for any algorithm supported by the device can be keyguard-bound. To generate or import a key as keyguard-bound, call setUnlockedDeviceRequired(true) on the KeyGenParameterSpec or KeyProtection builder object at key generation or import.
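A minimal Kotlin sketch of that call (the alias and cipher parameters are illustrative):

    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import javax.crypto.KeyGenerator
    import javax.crypto.SecretKey

    // Minimal sketch: an AES key that cannot decrypt while the device is locked.
    fun generateKeyguardBoundKey(): SecretKey {
        val keyGen = KeyGenerator.getInstance(
            KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
        keyGen.init(
            KeyGenParameterSpec.Builder("messages_key",
                KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
                .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
                .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
                .setUnlockedDeviceRequired(true)  // keyguard binding
                .build())
        return keyGen.generateKey()
    }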

Secure Key Import

Secure Key Import is a new feature in Android Pie that allows applications to provision existing keys into Keystore in a more secure manner. The origin of the key, a remote server that could be sitting in an on-premises data center or in the cloud, encrypts the secure key using a public wrapping key from the user's device. The encrypted key in the SecureKeyWrapper format, which also contains a description of the ways the imported key is allowed to be used, can only be decrypted in the Keystore hardware belonging to the specific device that generated the wrapping key. Keys are encrypted in transit and remain opaque to the application and operating system, meaning they're only available inside the secure hardware into which they are imported.

Secure Key Import is useful in scenarios where an application intends to share a secret key with an Android device, but wants to prevent the key from being intercepted or from leaving the device. Google Pay uses Secure Key Import to provision some keys on Pixel 3 phones, to prevent the keys from being intercepted or extracted from memory. There are also a variety of enterprise use cases, such as S/MIME encryption keys being recovered from a Certificate Authority's escrow so that the same key can be used to decrypt emails on multiple devices.
To take advantage of this feature, please review this training article. Please note that Secure Key Import is a secure hardware feature, and is therefore only available on select Android Pie devices. To find out if the device supports it, applications can generate a KeyPair with PURPOSE_WRAP_KEY.
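A hedged Kotlin sketch of both halves, assuming the server produces SecureKeyWrapper bytes for this specific device (the aliases and OAEP parameters are illustrative):

    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import android.security.keystore.WrappedKeyEntry
    import java.security.KeyPairGenerator
    import java.security.KeyStore
    import java.security.spec.MGF1ParameterSpec
    import javax.crypto.spec.OAEPParameterSpec
    import javax.crypto.spec.PSource

    // Generate a wrapping key pair; this succeeds only if the secure hardware
    // supports Secure Key Import. Export the public key (and its attestation)
    // to the server that will wrap the secret key.
    fun makeWrappingKey(alias: String) {
        val kpg = KeyPairGenerator.getInstance(
            KeyProperties.KEY_ALGORITHM_RSA, "AndroidKeyStore")
        kpg.initialize(
            KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_WRAP_KEY)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .build())
        kpg.generateKeyPair()
    }

    // Import the server-wrapped key; key material stays inside secure hardware.
    fun importWrappedKey(wrappedKeyBytes: ByteArray, wrappingAlias: String) {
        val spec = OAEPParameterSpec("SHA-256", "MGF1",
            MGF1ParameterSpec.SHA1, PSource.PSpecified.DEFAULT)
        val ks = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
        ks.setEntry("imported_key",
            WrappedKeyEntry(wrappedKeyBytes, wrappingAlias,
                "RSA/ECB/OAEPPadding", spec),
            null)
    }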



Providing users with safe and secure experiences, while helping developers build and grow quality app businesses, is our top priority at Google Play. And we’re constantly working to improve our protections.

Google Play has been working to minimize app install attribution fraud for several years. In 2017, Google Play made available the Google Play Install Referrer API, which allows ad attribution providers, publishers, and advertisers to determine which referrer was responsible for sending the user to Google Play for a given app install. This API was specifically designed to be resistant to install attribution fraud, and we strongly encourage attribution providers, advertisers, and publishers to insist on this standard of proof when measuring app install ads. Users, developers, advertisers, and ad networks all benefit from a transparent, fair system.
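For reference, a minimal Kotlin sketch using the Play Install Referrer Library (com.android.installreferrer:installreferrer); the error handling is illustrative:

    import android.content.Context
    import com.android.installreferrer.api.InstallReferrerClient
    import com.android.installreferrer.api.InstallReferrerStateListener

    // Minimal sketch: read the referrer that brought the user to Google Play.
    fun fetchInstallReferrer(context: Context) {
        val client = InstallReferrerClient.newBuilder(context).build()
        client.startConnection(object : InstallReferrerStateListener {
            override fun onInstallReferrerSetupFinished(responseCode: Int) {
                if (responseCode ==
                        InstallReferrerClient.InstallReferrerResponse.OK) {
                    val referrer = client.installReferrer.installReferrer
                    // e.g. "utm_source=example_network&utm_campaign=spring"
                    client.endConnection()
                }
            }
            override fun onInstallReferrerServiceDisconnected() {
                // Connection lost; retry with backoff if needed.
            }
        })
    }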

We also take reports of questionable activity very seriously. If an app violates our Google Play Developer policies, we take action. That’s why we began our own independent investigation after we received reports of apps on Google Play accused of conducting app install attribution abuse by falsely claiming credit for newly installed apps to collect the download bounty from that app’s developer.

We now have an update regarding our ongoing investigation:

  • On Monday, we removed two apps from the Play Store because our investigation discovered evidence of app install attribution abuse.
  • We also discovered evidence of app install attribution abuse in 3 ad network SDKs. We have asked the impacted developers to remove those SDKs from their apps. Because we believe most of these developers were not aware of the behavior from these third-party SDKs, we have given them a short grace period to take action.
  • Google Ads SDKs were not utilized for any of the abusive behaviors mentioned above.
  • Our investigation is ongoing and additional reviews of other apps and third party SDKs are still underway. If we find evidence of additional policy violations, we will take action.
We will continue to investigate and improve our capabilities to better detect and protect against abusive behavior and the malicious actors behind it.



Customization is one of Android's greatest strengths. Android's open source nature has enabled thousands of device types that cover a variety of use cases. In addition to adding features to the Android Open Source Project, researchers, developers, service providers, and device and chipset manufacturers can make updates to improve Android security. Investing and engaging in academic research advances the state of the art in security techniques, contributes to science, and delivers cutting-edge security and privacy features into the hands of end users. To foster more cooperative applied research between the Android Security and Privacy team and the wider academic and industrial community, we're launching ASPIRE (Android Security and PrIvacy REsearch).

ASPIRE's goal is to encourage the development of new security and privacy technology that impacts the Android ecosystem in the next 2 to 5 years but isn't yet planned for mainline Android development. This timeframe extends beyond the next annual Android release to allow adequate time to analyze, develop, and stabilize research into features before they are included in the platform. To collaborate with security researchers, we're hosting events and creating more channels to contribute research.

On October 25th 2018, we invited top security and privacy researchers from around the world to present at Android Security Local Research Day (ASLR-D). At this event, external researchers and Android Security and Privacy team members discussed current issues and strategies that impact the future direction of security research—for Android and the entire industry.

We can't always get everyone in the same room and good ideas come from everywhere. So we're inviting all academic researchers to help us protect billions of users. Research collaborations with Android should be as straightforward as collaborating with the research lab next door. To get involved you can:

  1. Submit an Android security / privacy research idea or proposal to the Google Faculty Research Awards (FRA) program.
  2. Apply for a research internship as a student pursuing an advanced degree.
  3. Apply to become a Visiting Researcher at Google.
  4. If you have any security or privacy questions that may help with your research, reach out to us.
  5. Co-author publications with Android team members, outside the terms of FRA.
  6. Collaborate with Android team members to make changes to the Android Open Source Project.

Let’s work together to make Android the most secure platform—now and in the future.



We believe that cutting-edge research plays a key role in advancing the security and privacy of users across the Internet. While we do significant in-house research and engineering to protect users’ data, we maintain strong ties with academic institutions worldwide. We provide seed funding through faculty research grants, cloud credits to unlock new experiments, and foster active collaborations, including working with visiting scholars and research interns.

To accelerate the next generation of security and privacy breakthroughs, we recently created the Google Security and Privacy Research Awards program. These awards, selected via internal Google nominations and voting, recognize academic researchers who have made recent, significant contributions to the field.

We’ve been developing this program for several years. It began as a pilot when we awarded researchers for their work in 2016, and we expanded it more broadly for work from 2017. So far, we have awarded $1 million to 12 scholars. We are preparing the shortlist for 2018 nominees and will announce the winners next year. In the meantime, we wanted to highlight the previous award winners and the influence they’ve had on the field.
2017 Awardees

Lujo Bauer, Carnegie Mellon University
Research area: Password security and attacks against facial recognition

Dan Boneh, Stanford University
Research area: Enclave security and post-quantum cryptography

Aleksandra Korolova, University of Southern California
Research area: Differential privacy

Daniela Oliveira, University of Florida
Research area: Social engineering and phishing

Franziska Roesner, University of Washington
Research area: Usable security for augmented reality and at-risk populations

Matthew Smith, Universität Bonn
Research area: Usable security for developers


2016 Awardees

Michael Bailey, University of Illinois at Urbana-Champaign
Research area: Cloud and network security

Nicolas Christin, Carnegie Mellon University
Research area: Authentication and cybercrime

Damon McCoy, New York University
Research area: DDoS services and cybercrime

Stefan Savage, University of California San Diego
Research area: Network security and cybercrime

Marc Stevens, Centrum Wiskunde & Informatica
Research area: Cryptanalysis and lattice cryptography

Giovanni Vigna, University of California Santa Barbara
Research area: Malware detection and cybercrime


Congratulations to all of our award winners.



For years, Google has been waging a comprehensive, global fight against invalid traffic through a combination of technology, policy, and operations teams to protect advertisers and publishers and increase transparency throughout the advertising industry.

Last year, we identified one of the most complex and sophisticated ad fraud operations we have seen to date, working with cyber security firm White Ops, and referred the case to law enforcement. Today, the U.S. Attorney’s Office for the Eastern District of New York announced criminal charges associated with this fraud operation. This takedown marks a major milestone in the industry’s fight against ad fraud, and we’re proud to have been a key contributor.

In partnership with White Ops, we have published a white paper about how we identified this ad fraud operation, the steps we took to protect our clients from being impacted, and the technical work we did to detect patterns across systems in the industry. Below are some of the highlights from the white paper, which you can download here.

All about 3ve: A creative and sophisticated threat

Referred to as 3ve (pronounced “Eve”), this ad fraud operation evolved over the course of 2017 from a modest, low-level botnet into a large and sophisticated operation that used a broad set of tactics to commit ad fraud. 3ve operated on a significant scale: At its peak, it controlled over 1 million IPs from both residential malware infections and corporate IP spaces primarily in North America and Europe.

Through our investigation, we discovered that 3ve comprised three unique sub-operations that evolved rapidly, using sophisticated tactics aimed at exploiting data centers, computers infected with malware, spoofed fraudulent domains, and fake websites. Through its varied and complex machinery, 3ve generated billions of fraudulent ad bid requests (i.e., ad spaces on web pages that advertisers can bid to purchase in an automated way), and it also created thousands of spoofed fraudulent domains. It should be noted that our analysis of ad bid requests indicated growth in activity, but not necessarily growth in transactions that would result in charges to advertisers. It’s also worth noting that 3+ billion daily ad bid requests made 3ve an extremely large ad fraud operation, but its bid request volume was only a small percentage of overall bid request volume across the industry.
Our objective

Trust and integrity are critical to the digital advertising ecosystem. Investments in our ad traffic quality systems made it possible for us to tackle this ad fraud operation and to limit the impact it had on our clients as quickly as possible, including crediting advertisers.

3ve’s focus, like many ad fraud schemes, was not a single player or system, but rather the whole advertising ecosystem. As we worked to protect our ad systems against traffic from this threat, we identified that others also had observed this traffic, and we partnered with them to help remove the threat from the ecosystem. The working group, which included nearly 20 partners, was a key component that shaped our broader investigation into 3ve, enabling us to engage directly with each other and to work towards a mutually beneficial outcome.
Industry collaboration helps bring 3ve down

While ad fraud traditionally has been seen as a faceless crime in which bad actors don’t face much risk of being identified or consequences for their actions, 3ve’s takedown demonstrates that there are risks and consequences to committing ad fraud. We’re confident that our collective efforts are building momentum and moving us closer to finding a resolution to this challenge.

For example, industry initiatives such as the Interactive Advertising Bureau (IAB) Tech Lab’s ads.txt standard, which continues to see rapid adoption (over 620,000 domains have published an ads.txt file), as well as the increasing number of buy-side platforms and exchanges offering refunds for invalid traffic, are valuable steps towards cutting off the money flow to fraudsters. As we announced last year, we’ve made, and will continue to make, investments in our automated refunds for invalid traffic, including our work with supply partners to provide advertisers with refunds for invalid traffic detected up to 30 days after monthly billing.

Industry bodies such as the IAB, Trustworthy Accountability Group (TAG), Media Rating Council, and the Joint Industry Committee for Web Standards, which serve as agents of change and collaboration across our industry, are instrumental in the fight against ad fraud. We have a long history of working with these bodies, including ongoing participation in TAG and IAB leadership and working groups, as well as our inclusion in the TAG Certified Against Fraud program. That program’s value was reinforced by the IAB’s requirement that all members be TAG certified by the middle of this year.


Successful disruption

A coordinated takedown of infrastructure related to 3ve’s operations occurred recently. The takedown involved disrupting as much of the related infrastructure as possible to make it hard to rebuild any of 3ve’s operations. As the graph below demonstrates, declining volumes in invalid traffic indicate that the disruption thus far has been successful, bringing the bid request traffic close to zero within 18 hours of starting the coordinated takedown.
Looking ahead

We’ll continue to be vigilant, working to protect marketers, publishers, and users, while continuing to collaborate with the broader industry to safeguard the integrity of the digital advertising ecosystem that powers the open web. Our work to take down 3ve is another example of our collaboration with the broader ecosystem to improve trust in digital advertising. We are committed to helping to create a better digital advertising ecosystem — one that is more valuable, transparent, and trusted for everyone.



[Cross-posted from the Android Developers Blog]

In a previous blog post, we talked about using machine learning to combat Potentially Harmful Applications (PHAs). This blog post covers how Google uses machine learning techniques to detect and classify PHAs. We'll discuss the challenges in the PHA detection space, including the scale of data, the correct identification of PHA behaviors, and the evolution of PHA families. Next, we will introduce the two datasets that make the training and implementation of machine learning models possible: app analysis data and Google Play data. Finally, we will present some of the approaches we use, including logistic regression and deep neural networks.

Using Machine Learning to Scale

Detecting PHAs is challenging and requires a lot of resources. Our security experts need to understand how apps interact with the system and the user, analyze complex signals to find PHA behavior, and evolve their tactics to stay ahead of PHA authors. Every day, Google Play Protect (GPP) analyzes over half a million apps, which makes a lot of new data for our security experts to process.

Leveraging machine learning helps us detect PHAs faster and at a larger scale. We can detect more PHAs just by adding additional computing resources. In many cases, machine learning can find PHA signals in the training data without human intervention. Sometimes, those signals are different from the signals found by security experts. Machine learning can take better advantage of this data, and discover hidden relationships between signals more effectively.

There are two major parts of Google Play Protect's machine learning protections: the data and the machine learning models.

Data Sources

The quality and quantity of the data used to create a model are crucial to the success of the system. For the purpose of PHA detection and classification, our system mainly uses two anonymous data sources: data from analyzing apps and data from how users experience apps.

App Data

Google Play Protect analyzes every app that it can find on the internet. We created a dataset by decomposing each app's APK and extracting PHA signals with deep analysis. We execute various processes on each app to find particular features and behaviors that are relevant to the PHA categories in scope (for example, SMS fraud, phishing, privilege escalation). Static analysis examines the different resources inside an APK file while dynamic analysis checks the behavior of the app when it's actually running. These two approaches complement each other. For example, dynamic analysis requires the execution of the app regardless of how obfuscated its code is (obfuscation hinders static analysis), and static analysis can help detect cloaking attempts in the code that may in practice bypass dynamic analysis-based detection. In the end, this analysis produces information about the app's characteristics, which serves as a fundamental data source for machine learning algorithms.

Google Play Data

In addition to analyzing each app, we also try to understand how users perceive that app. User feedback (such as the number of installs, uninstalls, user ratings, and comments) collected from Google Play can help us identify problematic apps. Similarly, information about the developer (such as the certificates they use and their history of published apps) contributes valuable knowledge that can be used to identify PHAs. All these metrics are generated when developers submit a new app (or new version of an app) and by millions of Google Play users every day. This information helps us to understand the quality, behavior, and purpose of an app so that we can identify new PHA behaviors or identify similar apps.

In general, our data sources yield raw signals, which then need to be transformed into machine learning features for use by our algorithms. Some signals, such as the permissions that an app requests, have a clear semantic meaning and can be directly used. In other cases, we need to engineer our data to make new, more powerful features. For example, we can aggregate the ratings of all apps that a particular developer owns, so we can calculate a rating per developer and use it to validate future apps. We also employ several techniques to focus on interesting data. To create compact representations for sparse data, we use embeddings. To help streamline the data and make it more useful to models, we use feature selection. Depending on the target, feature selection helps us keep the most relevant signals and remove irrelevant ones.

By combining our different datasets and investing in feature engineering and feature selection, we improve the quality of the data that can be fed to various types of machine learning models.

Models

Building a good machine learning model is like building a skyscraper: quality materials are important, but a great design is also essential. Like the materials in a skyscraper, good datasets and features are important to machine learning, but a great algorithm is essential to identify PHA behaviors effectively and efficiently.

We train models to identify PHAs that belong to a specific category, such as SMS-fraud or phishing. Such categories are quite broad and contain a large number of samples given the number of PHA families that fit the definition. Alternatively, we also have models focusing on a much smaller scale, such as a family, which is composed of a group of apps that are part of the same PHA campaign and that share similar source code and behaviors. On the one hand, having a single model to tackle an entire PHA category may be attractive in terms of simplicity but precision may be an issue as the model will have to generalize the behaviors of a large number of PHAs believed to have something in common. On the other hand, developing multiple PHA models may require additional engineering efforts, but may result in better precision at the cost of reduced scope.

We use a variety of modeling techniques in our machine learning approach, including both supervised and unsupervised ones.

One supervised technique we use is logistic regression, which has been widely adopted in the industry. These models have a simple structure and can be trained quickly. Logistic regression models can be analyzed to understand the importance of the different PHA and app features they are built with, allowing us to improve our feature engineering process. After a few cycles of training, evaluation, and improvement, we can launch the best models in production and monitor their performance.
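To make the idea concrete, here is a toy, purely illustrative logistic-regression scorer over hypothetical binary app features; production models are trained on vastly richer data and infrastructure:

    import kotlin.math.exp

    // Toy sketch: score an app from binary features such as
    // "requests SMS permission" or "contacts a known phishing domain".
    fun sigmoid(z: Double) = 1.0 / (1.0 + exp(-z))

    fun phaScore(features: DoubleArray, weights: DoubleArray, bias: Double): Double {
        var z = bias
        for (i in features.indices) z += features[i] * weights[i]
        return sigmoid(z)  // probability-like score in (0, 1)
    }

    // One stochastic-gradient-descent update on a labeled example (1.0 = PHA).
    // The learned weights are what makes these models easy to inspect.
    fun sgdStep(features: DoubleArray, label: Double, weights: DoubleArray,
                bias: Double, lr: Double = 0.1): Pair<DoubleArray, Double> {
        val err = phaScore(features, weights, bias) - label
        val newWeights = DoubleArray(weights.size) { i ->
            weights[i] - lr * err * features[i]
        }
        return newWeights to (bias - lr * err)
    }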

For more complex cases, we employ deep learning. Compared to logistic regression, deep learning is good at capturing complicated interactions between different features and extracting hidden patterns. The millions of apps in Google Play provide a rich dataset, which is advantageous to deep learning.

In addition to our targeted feature engineering efforts, we experiment with many aspects of deep neural networks. For example, a deep neural network can have multiple layers and each layer has several neurons to process signals. We can experiment with the number of layers and neurons per layer to change model behaviors.

We also adopt unsupervised machine learning methods. Many PHAs use similar abuse techniques and tricks, so they look almost identical to each other. An unsupervised approach helps define clusters of apps that look or behave similarly, which allows us to mitigate and identify PHAs more effectively. We can automate the process of categorizing that type of app if we are confident in the model or can request help from a human expert to validate what the model found.

PHAs are constantly evolving, so our models need constant updating and monitoring. In production, models are fed with data from recent apps, which help them stay relevant. However, new abuse techniques and behaviors need to be continuously detected and fed into our machine learning models to be able to catch new PHAs and stay on top of recent trends. This is a continuous cycle of model creation and updating that also requires tuning to ensure that the precision and coverage of the system as a whole matches our detection goals.

Looking forward

As part of Google's AI-first strategy, our work leverages many machine learning resources across the company, such as tools and infrastructures developed by Google Brain and Google Research. In 2017, our machine learning models successfully detected 60.3% of PHAs identified by Google Play Protect, covering over 2 billion Android devices. We continue to research and invest in machine learning to scale and simplify the detection of PHAs in the Android ecosystem.

Acknowledgements

This work was developed in joint collaboration with Google Play Protect, Safe Browsing and Play Abuse teams with contributions from Andrew Ahn, Hrishikesh Aradhye, Daniel Bali, Hongji Bao, Yajie Hu, Arthur Kaiser, Elena Kovakina, Salvador Mandujano, Melinda Miller, Rahul Mishra, Sebastian Porst, Monirul Sharif, Sri Somanchi, Sai Deep Tetali, and Zhikun Wang.



Update: We identified a bug that affected how we calculated data from Q3 2018 in the Transparency Report. This bug created inconsistencies between the data in the report and this blog post. The data points in this blog post have been corrected.

As shared during the What's new in Android security session at Google I/O 2018, transparency and openness are important parts of Android's ethos. We regularly blog about new features and enhancements and publish an annual Android Security Year in Review, which highlights Android ecosystem trends. To provide more frequent insights, we're introducing a quarterly Android Ecosystem Security Transparency Report. This report is the latest addition to our Transparency Report site, which began in 2010 to show how the policies and actions of governments and corporations affect privacy, security, and access to information online.

This Android Ecosystem Security Transparency Report covers how often a routine, full-device scan by Google Play Protect detects a device with PHAs installed. Google Play Protect is built-in protection on Android devices that scans over 50 billion apps daily from inside and outside of Google Play. These scans look for evidence of Potentially Harmful Applications (PHAs). If the scans find a PHA, Google Play Protect warns the user and can disable or remove PHAs. In Android's first annual Android Security Year in Review from 2014, fewer than 1% of devices had PHAs installed. The percentage has declined steadily over time and this downward trend continues through 2018. The transparency report covers PHA rates in three areas: market segment (whether a PHA came from Google Play or outside of Google Play), Android version, and country.

Devices with Potentially Harmful Applications installed by market segment

Google works hard to protect your Android device, no matter where your apps come from. Continuing the trend from previous years, Android devices that only download apps from Google Play are 9 times less likely to get a PHA than devices that download apps from other sources. Before applications become available in Google Play, they undergo an application review to confirm they comply with Google Play policies. Google uses a risk scorer to analyze apps and detect potentially harmful behavior. When Google’s application risk analyzer discovers something suspicious, it flags the app and refers the PHA to a security analyst for manual review if needed. We also scan apps that users download to their device from outside of Google Play. If we find a suspicious app, we also protect users from that—even if it didn't come from Google Play.

In the Android Ecosystem Security Transparency Report, the Devices with Potentially Harmful Applications installed by market segment chart shows the percentage of Android devices that have one or more PHAs installed over time. The chart has two lines: PHA rate for devices that exclusively install from Google Play and PHA rate for devices that also install from outside of Google Play. In 2017, on average 0.09% of devices that exclusively used Google Play had one or more PHAs installed. The first three quarters in 2018 averaged a lower PHA rate of 0.08%.

The security of devices that installed apps from outside of Google Play also improved. In 2017, ~0.82% of devices that installed apps from outside of Google Play were affected by PHAs; in the first three quarters of 2018, ~0.68% were affected. Since 2017, we've reduced this number by expanding the auto-disable feature, which we covered on page 10 of the 2017 Year in Review. While malware rates fluctuate from quarter to quarter, our metrics continue to show a consistent downward trend over time. We'll share more details in our 2018 Android Security Year in Review in early 2019.

Devices with Potentially Harmful Applications installed by Android version

Newer versions of Android are less affected by PHAs. We attribute this to many factors, such as continued platform and API hardening, ongoing security updates, and app security and developer training to reduce apps' access to sensitive data. In particular, newer Android versions—such as Nougat, Oreo, and Pie—are more resilient to privilege escalation attacks that had previously allowed PHAs to gain persistence on devices and protect themselves against removal attempts. The Devices with Potentially Harmful Applications installed by Android version chart shows the percentage of devices with a PHA installed, sorted by the Android version that the device is running.

Devices with Potentially Harmful Applications rate by top 10 countries

Overall, PHA rates in the ten largest Android markets have remained steady. While these numbers fluctuate on a quarterly basis due to the fluidity of the marketplace, we intend to provide more in-depth coverage of what drove these changes in our annual Year in Review in Q1 2019.

The Devices with Potentially Harmful Applications rate by top 10 countries chart shows the percentage of devices with at least one PHA in the ten countries with the highest volume of Android devices. India saw the most significant decline in PHAs present on devices, with the average rate of infection dropping by 34 percent. Indonesia, Mexico, and Turkey also saw a decline in the likelihood of PHAs being present on devices in the region. South Korea saw the lowest rate of devices containing PHAs, at just 0.1%.

Check out the report

Over time, we'll add more insights into the health of the ecosystem to the Android Ecosystem Security Transparency Report. If you have any questions about terminology or the products referred to in this report, please review the FAQs section of the Transparency Report. In the meantime, check out our new blog post and video outlining Android’s performance in Gartner’s Mobile OSs and Device Security: A Comparison of Platforms report.



Open Source Software (OSS) is extremely important to Google, and we rely on OSS in a variety of customer-facing and internal projects. We also understand the difficulty and importance of securing the open source ecosystem, and are continuously looking for ways to simplify it.

For the OSS community, we currently provide OSS-Fuzz, a free continuous fuzzing infrastructure hosted on the Google Cloud Platform. OSS-Fuzz uncovers security vulnerabilities and stability issues, and reports them directly to developers. Since launching in December 2016, OSS-Fuzz has reported over 9,000 bugs directly to open source developers.

In addition to OSS-Fuzz, Google's security team maintains several internal tools for identifying bugs in both Google internal and Open Source code. Until recently, these issues were manually reported to various public bug trackers by our security team and then monitored until they were resolved. Unresolved bugs were eligible for the Patch Rewards Program. While this reporting process had some success, it was overly complex. Now, by unifying and automating our fuzzing tools, we have been able to consolidate our processes into a single workflow, based on OSS-Fuzz. Projects integrated with OSS-Fuzz will benefit from being reviewed by both our internal and external fuzzing tools, thereby increasing code coverage and discovering bugs faster.

We are committed to helping open source projects benefit from integrating with our OSS-Fuzz fuzzing infrastructure. In the coming weeks, we will reach out via email to critical projects that we believe would be a good fit and support the community at large. Projects that integrate are eligible for rewards ranging from $1,000 (initial integration) up to $20,000 (ideal integration); more details are available here. These rewards are intended to help offset the cost and effort required to properly configure fuzzing for OSS projects. If you would like to integrate your project with OSS-Fuzz, please submit your project for review. Our goal is to admit as many OSS projects as possible and ensure that they are continuously fuzzed.

Once contacted, we might provide a sample fuzz target to you for easy integration. Many of these fuzz targets are generated with new technology that understands how library APIs are used appropriately. Watch this space for more details on how Google plans to further automate fuzz target creation, so that even more open source projects can benefit from continuous fuzzing.

Thank you for your continued contributions to the Open Source community. Let’s work together on a more secure and stable future for Open Source Software.



It’s Halloween 🎃 and the last day of Cybersecurity Awareness Month 🔐, so we’re celebrating these occasions with security improvements across your account journey: before you sign in, as soon as you’ve entered your account, when you share information with other apps and sites, and in the rare event in which your account is compromised.

We’re constantly protecting your information from attackers’ tricks, and with these new protections and tools, we hope you can spend your Halloween worrying about zombies, witches, and your candy loot—not the security of your account.

Protecting you before you even sign in
Everyone does their best to keep their username and password safe, but sometimes bad actors may still get them through phishing or other tricks. Even when this happens, we will still protect you with safeguards that kick in before you are signed into your account.

When your username and password are entered on Google’s sign-in page, we’ll run a risk assessment and only allow the sign-in if nothing looks suspicious. We’re always working to improve this analysis, and we’ll now require that JavaScript is enabled on the Google sign-in page, without which we can’t run this assessment.

Chances are, JavaScript is already enabled in your browser; it helps power lots of the websites people use every day. But, because it may save bandwidth or help pages load more quickly, a tiny minority of our users (0.1%) choose to keep it off. This might make sense if you are reading static content, but we recommend that you keep JavaScript on while signing into your Google Account so we can better protect you. You can read more about how to enable JavaScript here.

Keeping your Google Account secure while you’re signed in

Last year, we launched a major update to the Security Checkup that upgraded it from the same checklist for everyone to a smarter tool that automatically provides personalized guidance for improving the security of your Google Account.

We’re adding to this advice all the time. Most recently, we introduced better protection against harmful apps based on recommendations from Google Play Protect, as well as the ability to remove your account from any devices you no longer use.
More notifications when you share your account data with apps and sites

It’s really important that you understand the information that has been shared with apps or sites so that we can keep you safe. We already notify you when you’ve granted access to sensitive information — like Gmail data or your Google Contacts — to third-party sites or apps, and in the next few weeks, we’ll expand this to notify you whenever you share any data from your Google Account. You can always see which apps have access to your data in the Security Checkup.

Helping you get back to the beginning if you run into trouble

In the rare event that your account is compromised, our priority is to help get you back to safety as quickly as possible. We’ve introduced a new, step-by-step process within your Google Account that we will automatically trigger if we detect potential unauthorized activity.

We'll help you:
  • Verify critical security settings to help ensure your account isn’t vulnerable to additional attacks and that someone can’t access it via other means, like a recovery phone number or email address.
  • Secure your other accounts because your Google Account might be a gateway to accounts on other services and a hijacking can leave those vulnerable as well.
  • Check financial activity to see if any payment methods connected to your account, like a credit card or Google Pay, were abused.
  • Review content and files to see if any of your Gmail or Drive data was accessed or mis-used.
Online security can sometimes feel like walking through a haunted house—scary, and you aren't quite sure what may pop up. We are constantly working to strengthen our automatic protections to stop attackers and keep you safe from the many tricks you may encounter. During Cybersecurity Month, and beyond, we've got your back.



[Cross-posted from the Google Webmaster Central Blog]

Today, we’re excited to introduce reCAPTCHA v3, our newest API that helps you detect abusive traffic on your website without user interaction. Instead of showing a CAPTCHA challenge, reCAPTCHA v3 returns a score so you can choose the most appropriate action for your website.

A frictionless user experience

Over the last decade, reCAPTCHA has continuously evolved its technology. In reCAPTCHA v1, every user was asked to pass a challenge by reading distorted text and typing into a box. To improve both user experience and security, we introduced reCAPTCHA v2 and began to use many other signals to determine whether a request came from a human or bot. This enabled reCAPTCHA challenges to move from a dominant to a secondary role in detecting abuse, letting about half of users pass with a single click. Now with reCAPTCHA v3, we are fundamentally changing how sites can test for human vs. bot activities by returning a score to tell you how suspicious an interaction is and eliminating the need to interrupt users with challenges at all. reCAPTCHA v3 runs adaptive risk analysis in the background to alert you of suspicious traffic while letting your human users enjoy a frictionless experience on your site.

More Accurate Bot Detection with "Actions"

In reCAPTCHA v3, we are introducing a new concept called “Action”—a tag that you can use to define the key steps of your user journey and enable reCAPTCHA to run its risk analysis in context. Since reCAPTCHA v3 doesn't interrupt users, we recommend adding reCAPTCHA v3 to multiple pages. In this way, the reCAPTCHA adaptive risk analysis engine can identify the pattern of attackers more accurately by looking at the activities across different pages on your website. In the reCAPTCHA admin console, you can get a full overview of reCAPTCHA score distribution and a breakdown for the stats of the top 10 actions on your site, to help you identify which exact pages are being targeted by bots and how suspicious the traffic was on those pages.
Fighting bots your way

Another big benefit that you’ll get from reCAPTCHA v3 is the flexibility to prevent spam and abuse in the way that best fits your website. Previously, the reCAPTCHA system mostly decided when and what CAPTCHAs to serve to users, leaving you with limited influence over your website’s user experience. Now, reCAPTCHA v3 will provide you with a score that tells you how suspicious an interaction is. There are three potential ways you can use the score. First, you can set a threshold that determines when a user is let through or when further verification needs to be done, for example, using two-factor authentication and phone verification. Second, you can combine the score with your own signals that reCAPTCHA can’t access—such as user profiles or transaction histories. Third, you can use the reCAPTCHA score as one of the signals to train your machine learning model to fight abuse. By providing you with these new ways to customize the actions that occur for different types of traffic, this new version lets you protect your site against bots and improve your user experience based on your website’s specific needs.
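For instance, here is a hedged server-side sketch in Kotlin: the endpoint and response fields follow the public siteverify API, while the threshold and JSON handling are illustrative:

    import java.net.URI
    import java.net.http.HttpClient
    import java.net.http.HttpRequest
    import java.net.http.HttpResponse

    // Verify a token produced by grecaptcha.execute() and extract the score.
    fun recaptchaScore(secret: String, token: String): Double? {
        val request = HttpRequest.newBuilder(
                URI("https://www.google.com/recaptcha/api/siteverify"))
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString(
                "secret=$secret&response=$token"))
            .build()
        val json = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString()).body()
        // Crude, dependency-free extraction; use a JSON parser in practice.
        return Regex(""""score":\s*([0-9.]+)""")
            .find(json)?.groupValues?.get(1)?.toDoubleOrNull()
    }

    // Example policy: scores below a chosen threshold (say 0.5) trigger
    // further verification such as two-factor authentication.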

In short, reCAPTCHA v3 helps to protect your sites without user friction and gives you more power to decide what to do in risky situations. As always, we are working every day to stay ahead of attackers and keep the Internet easy and safe to use (except for bots).

Ready to get started with reCAPTCHA v3? Visit our developer site for more details.



Fighting invalid traffic is essential for the long-term sustainability of the digital advertising ecosystem. We have an extensive internal system to filter out invalid traffic – from simple filters to large-scale machine learning models – and we collaborate with advertisers, agencies, publishers, ad tech companies, research institutions, law enforcement and other third party organizations to identify potential threats. We take all reports of questionable activity seriously, and when we find invalid traffic, we act quickly to remove it from our systems.

Last week, BuzzFeed News provided us with information that helped us identify new aspects of an ad fraud operation across apps and websites that were monetizing with numerous ad platforms, including Google. While our internal systems had previously caught and blocked violating websites from our ad network, in the past week we also removed apps involved in the ad fraud scheme so they can no longer monetize with Google. Further, we have blacklisted additional apps and websites that are outside of our ad network, to ensure that advertisers using Display & Video 360 (formerly known as DoubleClick Bid Manager) do not buy any of this traffic. We are continuing to monitor this operation and will continue to take action if we find any additional invalid traffic.

While our analysis of the operation is ongoing, we estimate that the dollar value of impacted Google advertiser spend across the apps and websites involved in the operation is under $10 million. The majority of impacted advertiser spend was from invalid traffic on inventory from non-Google, third-party ad networks.

A technical overview of the ad fraud operation is included below.

Collaboration throughout our industry is critical in helping us to better detect, prevent, and disable these threats across the ecosystem. We want to thank BuzzFeed for sharing information that allowed us to take further action. This effort highlights the importance of collaborating with others to counter bad actors. Ad fraud is an industry-wide issue that no company can tackle alone. We remain committed to fighting invalid traffic and ad fraud threats such as this one, both to protect our advertisers, publishers, and users, as well as to protect the integrity of the broader digital advertising ecosystem.
Technical Detail
Google deploys comprehensive, state-of-the-art systems and procedures to combat ad fraud. We have made and continue to make considerable investments to protect our ad systems against invalid traffic.

As detailed above, we’ve identified, analyzed and blocked invalid traffic associated with this operation, both by removing apps and blacklisting websites. Our engineering and operations teams, across various organizations, are also taking systemic action to disrupt this threat, including the takedown of command and control infrastructure that powers the associated botnet. In addition, we have shared relevant technical information with trusted partners across the ecosystem, so that they can also harden their defenses and minimize the impact of this threat throughout the industry.

The BuzzFeed News report covers several fraud tactics (both web and mobile app) that are allegedly utilized by the same group. The web-based traffic is generated by a botnet that Google and others have been tracking, known as “TechSnab.” The TechSnab botnet is a small to medium-sized botnet that has existed for a few years. The number of active infections associated with TechSnab was reduced significantly after the Google Chrome Cleanup tool began prompting users to uninstall the malware.

In similar fashion to other botnets, TechSnab operates by creating hidden browser windows that visit web pages to inflate ad revenue. The malware contains common IP-based cloaking, data obfuscation, and anti-analysis defenses. This botnet drove traffic to a ring of websites created specifically for this operation, and monetized with Google and many third party ad exchanges. As mentioned above, we began taking action on these websites earlier this year.

Based on analysis of historical ads.txt crawl data, inventory from these websites was widely available throughout the advertising ecosystem, and as many as 150 exchanges, supply-side platforms (SSPs) or networks may have sold this inventory. The botnet operators had hundreds of accounts across 88 different exchanges (based on accounts listed with “DIRECT” status in their ads.txt files).
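For context, ads.txt is a plain text file at the root of a publisher's domain listing who may sell its inventory; a short illustrative example (the IDs here are made up):

    # ads.txt fields: ad system domain, seller account ID, relationship,
    # optional certification authority ID
    google.com, pub-1234567890123456, DIRECT, f08c47fec0942fa0
    ssp.example, 98765, RESELLER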

This fraud primarily impacted mobile apps. We investigated those apps that were monetizing via AdMob and removed those that were engaged in this behavior from our ad network. The traffic from these apps seems to be a blend of organic user traffic and artificially inflated ad traffic, including traffic based on hidden ads. Additionally, we found the presence of several ad networks, indicating that it's likely many were being used for monetization. We are actively tracking this operation, and continually updating and improving our enforcement tactics.



[Cross-posted from the Android Developers Blog]

In Android Pie, we introduced Android Protected Confirmation, the first major mobile OS API that leverages a hardware protected user interface (Trusted UI) to perform critical transactions completely outside the main mobile operating system. This Trusted UI protects the choices you make from fraudulent apps or a compromised operating system. When an app invokes Protected Confirmation, control is passed to the Trusted UI, where transaction data is displayed and user confirmation of that data's correctness is obtained.
Once confirmed, your intention is cryptographically authenticated and unforgeable when conveyed to the relying party, for example, your bank. Protected Confirmation increases the bank's confidence that it acts on your behalf, providing a higher level of protection for the transaction.
Protected Confirmation also adds additional security relative to other forms of secondary authentication, such as a One Time Password or Transaction Authentication Number. These mechanisms can be frustrating for mobile users and also fail to protect against a compromised device that can corrupt transaction data or intercept one-time confirmation text messages.
Once the user approves a transaction, Protected Confirmation digitally signs the confirmation message. Because the signing key never leaves the Trusted UI's hardware sandbox, neither app malware nor a compromised operating system can fool the user into authorizing anything. Protected Confirmation signing keys are created using Android's standard AndroidKeyStore API. Before it can start using Android Protected Confirmation for end-to-end secure transactions, the app must enroll the public KeyStore key and its Keystore Attestation certificate with the remote relying party. The attestation certificate certifies that the key can only be used to sign Protected Confirmations.
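A hedged Kotlin sketch of that enrollment step (the alias and challenge handling are illustrative):

    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import java.security.KeyPairGenerator
    import java.security.KeyStore

    // Create a signing key that can only sign Protected Confirmation messages,
    // with an attestation chain the relying party can verify at enrollment.
    fun makeConfirmationKey(serverChallenge: ByteArray) {
        val kpg = KeyPairGenerator.getInstance(
            KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
        kpg.initialize(
            KeyGenParameterSpec.Builder("confirmation_key",
                KeyProperties.PURPOSE_SIGN)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .setUserConfirmationRequired(true)  // confirmation-bound key
                .setAttestationChallenge(serverChallenge)
                .build())
        kpg.generateKeyPair()

        // Send the public key and attestation chain to the relying party.
        val ks = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
        val chain = ks.getCertificateChain("confirmation_key")  // upload chain
    }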
There are many possible use cases for Android Protected Confirmation. At Google I/O 2018, the What's new in Android security session showcased partners planning to leverage Android Protected Confirmation in a variety of ways, including Royal Bank of Canada for person-to-person money transfers; Duo Security, Nok Nok Labs, and ProxToMe for user authentication; and Insulet Corporation and Bigfoot Biomedical for medical device control.
Insulet, a leading global manufacturer of tubeless patch insulin pumps, has demonstrated how it can modify its FDA-cleared Omnipod DASH™ Insulin Management System in a test environment to leverage Protected Confirmation to confirm the amount of insulin to be injected. This technology holds promise for improved quality of life and reduced cost by enabling a person with diabetes to use their convenient, familiar, and secure smartphone for control, rather than relying on a secondary, obtrusive, and expensive remote control device. (Note: the Omnipod DASH™ System is not cleared for use with the Pixel 3 mobile device or Protected Confirmation.)

This work fulfills an important need in the industry. Since smartphones do not fit the mold of an FDA-approved medical device, we've been working with the FDA as part of DTMoSt, an industry-wide consortium, to define a standard for phones to safely control medical devices, such as insulin pumps. A technology like Protected Confirmation plays an important role in gaining higher assurance of user intent and medical safety.
To integrate Protected Confirmation into your app, check out the Android Protected Confirmation training article. Android Protected Confirmation is an optional feature in Android Pie. Because it has low-level hardware dependencies, Protected Confirmation may not be supported by all devices running Android Pie. Google Pixel 3 and Pixel 3 XL devices are the first to support Protected Confirmation, and we are working closely with other manufacturers to bring this market-leading security innovation to more devices.


Posted by Nagendra Modadugu and Bill Richardson, Google Device Security Group

[Cross-posted from the Android Developers Blog]

At the Made by Google event last week, we talked about the combination of AI + Software + Hardware to help organize your information. To better protect that information at the hardware level, our new Pixel 3 and Pixel 3 XL devices include a Titan M chip. We briefly introduced Titan M and some of its benefits on our Keyword Blog, and with this post we dive into some of its technical details.
Titan M is a second-generation, low-power security module designed and manufactured by Google, and is a part of the Titan family. As described in the Keyword Blog post, Titan M performs several security sensitive functions, including:
  • Storing and enforcing the locks and rollback counters used by Android Verified Boot.
  • Securely storing secrets and rate-limiting invalid attempts at retrieving them using the Weaver API.
  • Providing backing for the Android StrongBox Keymaster module, including Trusted User Presence and Protected Confirmation. Titan M has direct electrical connections to the Pixel's side buttons, so a remote attacker can't fake button presses. These features are available to third-party apps, such as FIDO U2F authentication.
  • Enforcing factory-reset policies, so that lost or stolen phones can only be restored to operation by the authorized owner.
  • Ensuring that even Google can't unlock a phone or install firmware updates without the owner's cooperation with Insider Attack Resistance.
Including Titan M in Pixel 3 devices substantially reduces the attack surface. Because Titan M is a separate chip, its physical isolation mitigates entire classes of hardware-level exploits such as Rowhammer, Spectre, and Meltdown. Titan M's processor, caches, memory, and persistent storage are not shared with the rest of the phone's system, so side-channel attacks like these, which rely on subtle, unplanned interactions between internal circuits of a single component, are nearly impossible. In addition to its physical isolation, the Titan M chip contains many defenses to protect against external attacks.
Titan M is not just a hardened security microcontroller; it represents a full-lifecycle approach to security designed with Pixel devices in mind. Titan M's security takes into consideration everything from the features visible to Android down to the lowest-level physical and electrical circuit design, and extends beyond each physical device to our supply chain and manufacturing processes. At the physical level, we incorporated essential features optimized for the mobile experience: low power usage, low latency, hardware crypto acceleration, tamper detection, and secure, timely firmware updates. We improved and invested in the supply chain for Titan M by creating a custom provisioning process, which provides us with transparency and control starting from the earliest silicon stages.

A closer look at Titan M

Titan (left) and Titan M (right)

Titan M's CPU is an ARM Cortex-M3 microprocessor specially hardened against side-channel attacks and augmented with defensive features to detect and respond to abnormal conditions. The Titan M CPU core also exposes several control registers, which can be used to restrict access to chip configuration settings and peripherals. Once powered on, Titan M verifies the signature of its flash-based firmware using a public key built into the chip's silicon. If the signature is valid, the flash is locked so it can't be modified, and then the firmware begins executing.
Titan M also features several hardware accelerators: AES, SHA, and a programmable big number coprocessor for public key algorithms. These accelerators are flexible: they can be initialized either with keys provided by firmware or with chip-specific, hardware-bound keys generated by the Key Manager module. Chip-specific keys are generated internally from entropy derived from the True Random Number Generator (TRNG), so such keys are never available outside the chip over its entire lifetime.
While implementing Titan M firmware, we had to take many system constraints into consideration. For example, packing as many security features as possible into Titan M's 64 Kbytes of RAM required all firmware to execute exclusively off the stack. And to reduce flash wear, RAM contents can be preserved even during low-power mode, when most hardware modules are turned off.
The diagram below provides a high-level view of the chip components described here.

Better security through transparency and innovation

At the heart of our implementation of Titan M are two broader trends: transparency and building a platform for future innovation.
Transparency around every step of the design process — from logic gates to boot code to the applications — gives us confidence in the defenses we're providing for our users. We know what's inside, how it got there, how it works, and who can make changes.
Custom hardware allows us to provide new features, capabilities, and performance not readily available in off-the-shelf components. These capabilities enable higher-assurance use cases like two-factor authentication, medical device control, P2P payments, and others that we will help develop down the road.
As more of our lives are bound up in our phones, keeping those phones secure and trustworthy is increasingly important. Google takes that responsibility seriously. Titan M is just the latest step in our continuing efforts to improve the privacy and security of all our users.



*Updated on October 17, 2018 with details about changes in other browsers

TLS (Transport Layer Security) is the protocol that secures HTTPS. It has a long history, stretching back to the nearly twenty-year-old TLS 1.0 and its even older predecessor, SSL. Over that time, we have learned a lot about how to build secure protocols.

TLS 1.2 was published ten years ago to address weaknesses in TLS 1.0 and 1.1 and has enjoyed wide adoption since then. Today only 0.5% of HTTPS connections made by Chrome use TLS 1.0 or 1.1. These old versions of TLS rely on MD5 and SHA-1, both now broken, and contain other flaws. TLS 1.0 is no longer PCI-DSS compliant and the TLS working group has adopted a document to deprecate TLS 1.0 and TLS 1.1.

In line with these industry standards, Google Chrome will deprecate TLS 1.0 and TLS 1.1 in Chrome 72. Sites using these versions will begin to see deprecation warnings in the DevTools console in that release. TLS 1.0 and 1.1 will be disabled altogether in Chrome 81. This will affect users on early release channels starting January 2020. Apple, Microsoft, and Mozilla have made similar announcements.

Site administrators should immediately enable TLS 1.2 or later. Depending on server software (such as Apache or nginx), this may be a configuration change or a software update. Additionally, we encourage all sites to revisit their TLS configuration. Chrome’s current criteria for modern TLS are the following (a sample server configuration is sketched after the list):

  • TLS 1.2 or later.
  • An ECDHE- and AEAD-based cipher suite. AEAD-based cipher suites are those using AES-GCM or ChaCha20-Poly1305. ECDHE_RSA_WITH_AES_128_GCM_SHA256 is the recommended option for most sites.
  • The server signature should use SHA-2. Note this is not the signature in the certificate, made by the CA. Rather, it is the signature made by the server itself, using its private key.
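As one concrete illustration, a site running a recent nginx build with OpenSSL 1.1.1 might satisfy these criteria with settings along the following lines. The certificate paths are placeholders, and the exact cipher list available depends on your OpenSSL build; consult your server's documentation for authoritative guidance.

ssl_certificate     /path/to/fullchain.pem;  # placeholder path
ssl_certificate_key /path/to/privkey.pem;    # placeholder path
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers on;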

The older options (CBC-mode cipher suites, RSA-encryption key exchange, and SHA-1 online signatures) all have known cryptographic flaws. Each has been removed in the newly published TLS 1.3, which is supported in Chrome 70. We retain them in prior versions for compatibility with legacy servers, but we will be evaluating them over time for eventual deprecation.


None of these changes require obtaining a new certificate, and they are backwards-compatible. Where necessary, servers may enable both modern and legacy options to continue supporting legacy clients. Note, however, that such support may carry security risks. (For example, see the DROWN, FREAK, and ROBOT attacks.)


Over the coming Chrome releases, we will improve the DevTools Security Panel to point out deviations from these settings, and suggest improvements to the site’s configuration.


Enterprise deployments can preview the TLS 1.0 and 1.1 removal today by setting the SSLVersionMin policy to “tls1.2”. For enterprise deployments that need more time, this same policy can be used to re-enable TLS 1.0 or TLS 1.1 until January 2021.
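On Linux, for example, this can be expressed as a JSON policy file placed in Chrome's managed-policy directory. The file name below is arbitrary; the directory is the standard location for Google Chrome, and other platforms use their own policy mechanisms (such as Group Policy on Windows).

/etc/opt/chrome/policies/managed/ssl_version.json:
{
  "SSLVersionMin": "tls1.2"
}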



Android is all about choice. As such, Android strives to provide users with many options to protect their data. By combining Android’s Backup Service and Google Cloud’s Titan Technology, Android has taken additional steps to secure users' data while maintaining their privacy.

Starting in Android Pie, devices can take advantage of a new capability where backed-up application data can only be decrypted by a key that is randomly generated at the client. This decryption key is encrypted using the user's lockscreen PIN/pattern/passcode, which isn’t known by Google. Then, this passcode-protected key material is encrypted to a Titan security chip on our datacenter floor. The Titan chip is configured to only release the backup decryption key when presented with a correct claim derived from the user's passcode. Because the Titan chip must authorize every access to the decryption key, it can permanently block access after too many incorrect attempts at guessing the user’s passcode, thus mitigating brute force attacks. The limited number of incorrect attempts is strictly enforced by a custom Titan firmware that cannot be updated without erasing the contents of the chip. By design, this means that no one (including Google) can access a user's backed-up application data without specifically knowing their passcode.
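To make the layering concrete, here is a conceptual sketch of the client-side key wrapping using standard Java cryptography primitives. This does not reproduce the actual backup protocol; the algorithms, parameters, and the hard-coded passcode below are illustrative assumptions only.

// Illustrative sketch only; not the actual Android backup implementation.
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class BackupKeySketch {
    public static void main(String[] args) throws Exception {
        // 1. The backup decryption key is randomly generated on the client.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey backupKey = kg.generateKey();

        // 2. A wrapping key is derived from the lockscreen passcode, which
        //    Google never learns (PBKDF2 parameters here are illustrative).
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        PBEKeySpec spec = new PBEKeySpec(
                "1234".toCharArray(), salt, 100_000, 256);
        SecretKey passcodeKey = new SecretKeySpec(
                SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                        .generateSecret(spec).getEncoded(), "AES");

        // 3. The backup key is wrapped under the passcode-derived key. In
        //    the real service, this wrapped material is then encrypted to
        //    the datacenter Titan chip, which releases it only for a correct
        //    passcode-derived claim and rate-limits guesses in firmware.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, passcodeKey,
                new GCMParameterSpec(128, iv));
        byte[] wrappedBackupKey = cipher.doFinal(backupKey.getEncoded());
    }
}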

To increase our confidence that this new technology securely prevents anyone from accessing users' backed-up application data, the Android Security & Privacy team hired global cyber security and risk mitigation expert NCC Group to complete a security audit. The audit's outcomes included positive findings on Google’s security design processes, validation of code quality, and confirmation that mitigations for known attack vectors were taken into account prior to launching the service. Some issues were discovered during the audit, and engineers corrected them quickly. For more details on how the end-to-end service works and a detailed report of NCC Group’s findings, click here.

Getting external reviews of our security efforts is one of many ways that Google and Android maintain transparency and openness, which in turn helps users feel safe when it comes to their data. Whether it’s hundreds of hours of gaming data or your personalized preferences in your favorite Google apps, our users' information is protected.

We want to acknowledge the contributions of Shabsi Walfish, Software Engineering Lead for Identity and Authentication, to this effort.


Posted by Sami Tolvanen, Staff Software Engineer, Android Security & Privacy

[Cross-posted from the Android Developers Blog]

Android's security model is enforced by the Linux kernel, which makes it a tempting target for attackers. We have put a lot of effort into hardening the kernel in previous Android releases and in Android 9, we continued this work by focusing on compiler-based security mitigations against code reuse attacks.
Google's Pixel 3 will be the first Android device to ship with LLVM's forward-edge Control Flow Integrity (CFI) enforcement in the kernel, and we have made CFI support available in Android kernel versions 4.9 and 4.14. This post describes how kernel CFI works and provides solutions to the most common issues developers might run into when enabling the feature.

Protecting against code reuse attacks

A common method of exploiting the kernel is using a bug to overwrite a function pointer stored in memory, such as a stored callback pointer or a return address that has been pushed to the stack. This allows an attacker to execute arbitrary parts of the kernel code to complete their exploit, even if they cannot inject executable code of their own. This method of gaining code execution is particularly popular with the kernel because of the huge number of function pointers it uses, and the existing memory protections that make code injection more challenging.
CFI attempts to mitigate these attacks by adding additional checks to confirm that the kernel's control flow stays within a precomputed graph. This doesn't prevent an attacker from changing a function pointer if a bug provides write access to one, but it significantly restricts the valid call targets, which makes exploiting such a bug more difficult in practice.

Figure 1. In an Android device kernel, LLVM's CFI limits 55% of indirect calls to at most 5 possible targets and 80% to at most 20 targets.

Gaining full program visibility with Link Time Optimization (LTO)

In order to determine all valid call targets for each indirect branch, the compiler needs to see all of the kernel code at once. Traditionally, compilers work on a single compilation unit (source file) at a time and leave merging the object files to the linker. LLVM's solution to CFI is to require the use of LTO, where the compiler produces LLVM-specific bitcode for all C compilation units, and an LTO-aware linker uses the LLVM back-end to combine the bitcode and compile it into native code.

Figure 2. A simplified overview of how LTO works in the kernel. All LLVM bitcode is combined, optimized, and generated into native code at link time.
Linux has used the GNU toolchain for assembling, compiling, and linking the kernel for decades. While we continue to use the GNU assembler for stand-alone assembly code, LTO requires us to switch to LLVM's integrated assembler for inline assembly, and either GNU gold or LLVM's own lld as the linker. Switching to a relatively untested toolchain on a huge software project led to compatibility issues, which we have addressed in our arm64 LTO patch sets for kernel versions 4.9 and 4.14.
In addition to making CFI possible, LTO also produces faster code due to global optimizations. However, additional optimizations often result in a larger binary size, which may be undesirable on devices with very limited resources. Disabling LTO-specific optimizations, such as global inlining and loop unrolling, can reduce binary size by sacrificing some of the performance gains. When using GNU gold, the aforementioned optimizations can be disabled with the following additions to LDFLAGS:
LDFLAGS += -plugin-opt=-inline-threshold=0 \
           -plugin-opt=-unroll-threshold=0
Note that flags to disable individual optimizations are not part of the stable LLVM interface and may change in future compiler versions.

Implementing CFI in the Linux kernel

LLVM's CFI implementation adds a check before each indirect branch to confirm that the target address points to a valid function with a correct signature. This prevents an indirect branch from jumping to an arbitrary code location and even limits the functions that can be called. As C compilers do not enforce similar restrictions on indirect branches, there were several CFI violations due to function type declaration mismatches even in the core kernel that we have addressed in our CFI patch sets for kernels 4.9 and 4.14.
Kernel modules add another complication to CFI, as they are loaded at runtime and can be compiled independently from the rest of the kernel. In order to support loadable modules, we have implemented LLVM's cross-DSO CFI support in the kernel, including a CFI shadow that speeds up cross-module look-ups. When compiled with cross-DSO support, each kernel module contains information about valid local branch targets, and the kernel looks up information from the correct module based on the target address and the modules' memory layout.

Figure 3. An example of a cross-DSO CFI check injected into an arm64 kernel. Type information is passed in X0 and the target address to validate in X1.
CFI checks naturally add some overhead to indirect branches, but due to more aggressive optimizations, our tests show that the impact is minimal, and overall system performance even improved 1-2% in many cases.

Enabling kernel CFI for an Android device

CFI for arm64 requires clang version >= 5.0 and binutils >= 2.27. The kernel build system also assumes that the LLVMgold.so plug-in is available in LD_LIBRARY_PATH. Pre-built toolchain binaries for clang and binutils are available in AOSP, but upstream binaries can also be used.
The following kernel configuration options are needed to enable kernel CFI:
CONFIG_LTO_CLANG=y
CONFIG_CFI_CLANG=y
Using CONFIG_CFI_PERMISSIVE=y may also prove helpful when debugging a CFI violation or during device bring-up. This option turns a violation into a warning instead of a kernel panic.
As mentioned in the previous section, the most common issues we ran into when enabling CFI on Pixel 3 were benign violations caused by function pointer type mismatches. When the kernel runs into such a violation, it prints out a runtime warning that contains the call stack at the time of the failure, along with the call target that failed the CFI check. Changing the code to use a correct function pointer type fixes the issue. While we have fixed all known indirect branch type mismatches in the Android kernel, similar problems may still be found in device-specific drivers, for example.
CFI failure (target: [<fffffff3e83d4d80>] my_target_function+0x0/0xd80):
------------[ cut here ]------------
kernel BUG at kernel/cfi.c:32!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
…
Call trace:
…
[<ffffff8752d00084>] handle_cfi_failure+0x20/0x28
[<ffffff8752d00268>] my_buggy_function+0x0/0x10
…
Figure 4. An example of a kernel panic caused by a CFI failure.
Another potential pitfall is address space conflicts, though these should be less common in driver code. LLVM's CFI checks only understand kernel virtual addresses, so any code that runs at another exception level or makes an indirect call to a physical address will result in a CFI violation. These types of failures can be addressed by disabling CFI for a single function using the __nocfi attribute, or by disabling CFI for entire source files using the $(DISABLE_CFI) compiler flag in the Makefile.
static int __nocfi address_space_conflict(void)
{
        void (*fn)(void);
        …
        /* Branching to a physical address trips CFI without __nocfi. */
        fn = (void *)__pa_symbol(function_name);
        cpu_install_idmap();
        fn();
        cpu_uninstall_idmap();
        …
}
Figure 5. An example of fixing a CFI failure caused by an address space conflict.
Finally, like many hardening features, CFI can also be tripped by memory corruption errors that might otherwise result in random kernel crashes at a later time. These may be more difficult to debug, but memory debugging tools such as KASAN can help here.

Conclusion

We have implemented support for LLVM's CFI in Android kernels 4.9 and 4.14. Google's Pixel 3 will be the first Android device to ship with these protections, and we have made the feature available to all device vendors through the Android common kernel. If you are shipping a new arm64 device running Android 9, we strongly recommend enabling kernel CFI to help protect against kernel vulnerabilities.
LLVM's CFI protects indirect branches against attackers who manage to gain access to a function pointer stored in kernel memory. This makes a common method of exploiting the kernel more difficult. Our future work also involves protecting function return addresses from similar attacks using LLVM's Shadow Call Stack, which will be available in an upcoming compiler release.