


In June, we announced and launched End-To-End, a tool for those who need even more security for their communications than what we already provide. Today, we’re launching an updated version of our extension — still in alpha — that includes a number of changes:

  • We’re migrating End-To-End to GitHub. We’ve always believed strongly that End-To-End must be an open source project, and we think that using GitHub will allow us to work together even better with the community.
  • We’ve included several contributions from Yahoo Inc. Alex Stamos, Yahoo’s Chief Security Officer, announced at BlackHat 2014 in August that his team would be participating in our End-To-End project; we’re very happy to release the first fruits of this collaboration.
  • We’ve added more documentation. The project wiki now contains additional information about End-To-End, both for developers as well as security researchers interested in understanding better how we think about End-To-End’s security model. 

We’re very thankful to all those who submitted bugs against the first alpha release. Two of those bugs earned a financial reward through our Vulnerability Rewards Program. One area where we didn’t receive many bug reports was in End-To-End’s new crypto library. On the contrary: we heard from several other projects who want to use our library, and we’re looking forward to working with them. 

One thing hasn’t changed for this release: we aren’t yet making End-To-End available in the Chrome Web Store. We don’t feel it’s as usable as it needs to be. Indeed, those looking through the source code will see references to our key server, and it should come as no surprise that we’re working on one. Key distribution and management is one of the hardest usability problems with cryptography-related products, and we won’t release End-To-End in non-alpha form until we have a solution we’re content with.

We’re excited to continue working on these challenging and rewarding problems, and we look forward to delivering a more fully fledged End-to-End next year.



(Cross-posted from the Gmail Blog)

We know that the safety and reliability of your Gmail is super important to you, which is why we’re always working on security improvements like serving images through secure proxy servers, and requiring HTTPS. Today, Gmail on the desktop is becoming more secure with support for Content Security Policy (CSP). CSP helps provide a layer of defense against a common class of security vulnerabilities known as cross-site scripting (XSS).

There are many great extensions for Gmail. Unfortunately, there are also some extensions that behave badly, loading code which interferes with your Gmail session, or which compromises your email’s security. Gmail’s CSP helps protect you, by making it more difficult to load unsafe code into Gmail.
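
To give a sense of what this looks like in practice, here is a minimal sketch of a server sending a CSP header. The policy shown is purely illustrative and is not Gmail's actual policy, and the tiny Python server is just a stand-in for whatever stack a site actually runs:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Illustrative policy: only run scripts and load resources from our own origin.
    POLICY = "default-src 'self'; script-src 'self'; object-src 'none'"

    class CSPHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body>Hello</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            # Browsers that support CSP will refuse to execute script injected
            # from origins this policy does not allow.
            self.send_header("Content-Security-Policy", POLICY)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), CSPHandler).serve_forever()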

Most popular (and well-behaved) extensions have already been updated to work with the CSP standard, but if you happen to have any trouble with an extension, try installing its latest version from your browser’s web store (for example, the Chrome Web Store for Chrome users).

CSP is just another example of how Gmail can help make your email experience safer. For advice and tools that help keep you safe across the web, you can always visit the Google Security Center.

This post was updated on December 18th to add a description of the XSS defense benefit of CSP, and to more precisely define the interaction with extensions.



reCAPTCHA protects the websites you love from spam and abuse. So, when you go online—say, for some last-minute holiday shopping—you won't be competing with robots and abusive scripts to access sites. For years, we’ve prompted users to confirm they aren’t robots by asking them to read distorted text and type it into a box, like this:
But, we figured it would be easier to just directly ask our users whether or not they are robots—so, we did! We’ve begun rolling out a new API that radically simplifies the reCAPTCHA experience. We’re calling it the “No CAPTCHA reCAPTCHA” and this is how it looks:
On websites using this new API, a significant number of users will be able to securely and easily verify they’re human without actually having to solve a CAPTCHA. Instead, with just a single click, they’ll confirm they are not a robot.
A brief history of CAPTCHAs 

While the new reCAPTCHA API may sound simple, there is a high degree of sophistication behind that modest checkbox. CAPTCHAs have long relied on the inability of robots to solve distorted text. However, our research recently showed that today’s Artificial Intelligence technology can solve even the most difficult variant of distorted text at 99.8% accuracy. Thus distorted text, on its own, is no longer a dependable test.

To counter this, last year we developed an Advanced Risk Analysis backend for reCAPTCHA that actively considers a user’s entire engagement with the CAPTCHA—before, during, and after—to determine whether that user is a human. This enables us to rely less on typing distorted text and, in turn, offer a better experience for users.  We talked about this in our Valentine’s Day post earlier this year.

The new API is the next step in this steady evolution. Now, humans can just check the box and in most cases, they’re through the challenge.

Are you sure you’re not a robot?

However, CAPTCHAs aren't going away just yet. In cases when the risk analysis engine can't confidently predict whether a user is a human or an abusive agent, it will prompt a CAPTCHA to elicit more cues, increasing the number of security checkpoints to confirm the user is valid.
Making reCAPTCHAs mobile-friendly

This new API also lets us experiment with new types of challenges that are easier for us humans to use, particularly on mobile devices. In the example below, you can see a CAPTCHA based on a classic Computer Vision problem of image labeling. In this version of the CAPTCHA challenge, you’re asked to select all of the images that correspond with the clue. It's much easier to tap photos of cats or turkeys than to tediously type a line of distorted text on your phone.
Adopting the new API on your site

As more websites adopt the new API, more people will see "No CAPTCHA reCAPTCHAs".  Early adopters, like Snapchat, WordPress, Humble Bundle, and several others are already seeing great results with this new API. For example, in the last week, more than 60% of WordPress’ traffic and more than 80% of Humble Bundle’s traffic on reCAPTCHA encountered the No CAPTCHA experience—users got to these sites faster. To adopt the new reCAPTCHA for your website, visit our site to learn more.
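
For site owners curious about the server side, verification follows the same pattern as before: the widget posts a token (g-recaptcha-response) with your form, and your server asks the reCAPTCHA API whether that token is valid. The sketch below is a minimal, unofficial Python example; check the reCAPTCHA developer documentation for the authoritative details:

    import json
    import urllib.parse
    import urllib.request

    VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

    def verify_recaptcha(secret_key, response_token, remote_ip=None):
        """Return True if reCAPTCHA judged the submitted token to be valid."""
        data = {"secret": secret_key, "response": response_token}
        if remote_ip:
            data["remoteip"] = remote_ip
        req = urllib.request.Request(VERIFY_URL, urllib.parse.urlencode(data).encode())
        with urllib.request.urlopen(req) as resp:
            return json.load(resp).get("success", False)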

Humans, we'll continue our work to keep the Internet safe and easy to use. Abusive bots and scripts, it’ll only get worse—sorry we’re (still) not sorry.

Securing modern web applications can be a daunting task—doubly so if they are built (quickly) with diverse languages and technology stacks. That’s why we run a multi-faceted product security program, which helps our engineers build and deploy secure software at every stage of the development lifecycle. As part of this effort, we have developed an internal web application security scanning tool, codenamed Inquisition (as no bug expects it!).

The scanner is built entirely on Google technologies like Chrome and Google Cloud Platform, and was designed with support for the latest HTML5 features, a low false-positive rate, and ease of use in mind. We have discussed some of the technology behind this tool in a talk at the Google Testing Automation Conference 2013.

While working on this tool, we found we needed a synthetic testbed to both test our current capabilities and set goals for what we need to catch next. Today we’re announcing the open-source release of Firing Range, the results of our work (with some help from researchers at the Politecnico di Milano) in producing a test ground for automated scanners.

Firing Range is a Java application built on Google App Engine and contains a wide range of XSS and, to a lesser degree, other web vulnerabilities. Code is available on github.com/google/firing-range, while a deployed version is at public-firing-range.appspot.com.

How is it different from the many vulnerable test applications already available? Most of them have focused on creating realistic-looking testbeds for human testers; we think that with automation in mind it is more productive, instead, to try to exhaustively enumerate the contexts and the attack vectors that an application might exhibit. Our testbed doesn’t try to emulate a real application, nor exercise the crawling capabilities of a scanner: it’s a collection of unique bug patterns drawn from vulnerabilities that we have seen in the wild, aimed at verifying the detection capabilities of security tools.
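
As a flavor of what "enumerating contexts" means, consider a single reflected parameter that lands in two different HTML contexts; a scanner has to recognize that each context needs a different payload to prove the bug. The snippet below is not taken from Firing Range itself (which is a Java App Engine app), just an illustrative Python sketch of the same kind of pattern:

    def reflect(q):
        # Deliberately vulnerable test pattern: the same untrusted input is
        # echoed, unescaped, into an HTML element context and a JavaScript
        # string context. Each context requires a different XSS payload.
        return ("<div>%s</div>\n"
                "<script>var q = '%s';</script>" % (q, q))

    print(reflect("<script>alert(1)</script>"))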

We have used Firing Range both as a continuous testing aid and as a driver for our development, defining as many bug types as possible, including some that we cannot detect (yet!).

We hope that others will find it helpful in evaluating the detection capabilities of their own automated tools, and we certainly welcome any contributions and feedback from the broader security research community.

Posted by Claudio Criscione, Security Engineer

A recent poll in the U.S. showed that more people are concerned about being hacked than having their house robbed. That’s why we continue to work hard to keep Google accounts secure. Our defenses keep most bad actors out, and we’ve reduced hijackings by more than 99% over the last few years.

We monitor many potential threats, from mass hijackings (typically used to send lots of spam) to state-sponsored attacks (highly targeted, often with political motivations).

This week, we’re releasing a study of another kind of threat we’ve dubbed “manual hijacking,” in which professional attackers spend considerable time exploiting a single victim’s account, often causing financial losses. Even though they’re rare—9 incidents per million users per day—they’re often severe, and studying this type of hijacker has helped us improve our defenses against all types of hijacking.

Manual hijackers often get into accounts through phishing: sending deceptive messages meant to trick you into handing over your username, password, and other personal info. For this study, we analyzed several sources of phishing messages and websites, observing both how hijackers operate and what sensitive information they seek out once they gain control of an account. Here are some of our findings:

  • Simple but dangerous: Most of us think we’re too smart to fall for phishing, but our research found some fake websites worked a whopping 45% of the time. On average, people visiting the fake pages submitted their info 14% of the time, and even the most obviously fake sites still managed to deceive 3% of people. Considering that an attacker can send out millions of messages, these success rates are nothing to sneeze at.
  • Quick and thorough: Around 20% of hijacked accounts are accessed within 30 minutes of a hacker obtaining the login info. Once they’ve broken into an account they want to exploit, hijackers spend more than 20 minutes inside, often changing the password to lock out the true owner, searching for other account details (like your bank, or social media accounts), and scamming new victims.
  • Personalized and targeted: Hijackers then send phishing emails from the victim’s account to everyone in his or her address book. Since your friends and family think the email comes from you, these emails can be very effective. People in the contact list of hijacked accounts are 36 times more likely to be hijacked themselves. 
  • Learning fast: Hijackers quickly change their tactics to adapt to new security measures. For example, after we started asking people to answer questions (like “which city do you log in from most often?”) when logging in from a suspicious location or device, hijackers almost immediately started phishing for the answers.

We’ve used the findings from this study, along with our ongoing research efforts, to improve the many account security systems we have in place. But we can use your help too.

  • Stay vigilant: Gmail blocks the vast majority of spam and phishing emails, but be wary of messages asking for login information or other personal data. Never reply to these messages; instead, report them to us. When in doubt, visit websites directly (not through a link in an email) to review or update account information.
  • Get your account back fast: If your account is ever at risk, it’s important that we have a way to get in touch with you and confirm your ownership. That’s why we strongly recommend you provide a backup phone number or a secondary email address (but make sure that email account uses a strong password and is kept up to date so it’s not released due to inactivity).
  • 2-step verification: Our free 2-step verification service provides an extra layer of security against all types of account hijacking. In addition to your password, you’ll use your phone to prove you’re really you. We also recently added an option to log in with a physical USB device.

Take a few minutes and visit the Secure Your Account page, where you can make sure we’ve got backup contact info for you and confirm that your other security settings are up to date.

Posted by Elie Bursztein, Anti-Abuse Research Lead



Google is committed to increasing the use of TLS/SSL in all applications and services. But “HTTPS everywhere” is not enough; it also needs to be used correctly. Most platforms and devices have secure defaults, but some applications and libraries override the defaults for the worse, and in some instances we’ve seen platforms make mistakes as well. As applications get more complex, connect to more services, and use more third party libraries, it becomes easier to introduce these types of mistakes.
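
A typical example of such a mistake, sketched in Python rather than taken from any particular app, is code that silently turns off certificate and hostname verification to make a connection error disappear; this is exactly the class of misconfiguration a man-in-the-middle test tool can catch:

    import socket
    import ssl

    # Secure by default: verifies the server's certificate chain and hostname.
    good = ssl.create_default_context()

    # The kind of override that quietly breaks TLS: any certificate is accepted,
    # so an active network attacker can intercept the "encrypted" traffic.
    bad = ssl.create_default_context()
    bad.check_hostname = False
    bad.verify_mode = ssl.CERT_NONE

    with good.wrap_socket(socket.create_connection(("example.com", 443)),
                          server_hostname="example.com") as s:
        print(s.version())  # e.g. "TLSv1.3"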

The Android Security Team has built a tool, called nogotofail, that provides an easy way to confirm that the devices or applications you are using are safe against known TLS/SSL vulnerabilities and misconfigurations. Nogotofail works for Android, iOS, Linux, Windows, Chrome OS, OS X, and in fact any device you use to connect to the Internet. There’s an easy-to-use client to configure the settings and get notifications on Android and Linux, as well as the attack engine itself, which can be deployed as a router, VPN server, or proxy.

We’ve been using this tool ourselves for some time and have worked with many developers to improve the security of their apps. But we want the use of TLS/SSL to advance as quickly as possible. Today, we’re releasing it as an open source project, so anyone can test their applications, contribute new features, provide support for more platforms, and help improve the security of the Internet.

Posted by Chad Brubaker, Android Security Engineer

Cross-posted on the Research Blog and the Chromium Blog

At Google, we are constantly trying to improve the techniques we use to protect our users' security and privacy. One such project, RAPPOR (Randomized Aggregatable Privacy-Preserving Ordinal Response), provides a new state-of-the-art, privacy-preserving way to learn software statistics that we can use to better safeguard our users’ security, find bugs, and improve the overall user experience.

Building on the concept of randomized response, RAPPOR enables learning statistics about the behavior of users’ software while guaranteeing client privacy. The guarantees of differential privacy, which are widely accepted as being the strongest form of privacy, have almost never been used in practice despite intense research in academia. RAPPOR introduces a practical method to achieve those guarantees.

To understand RAPPOR, consider the following example. Let’s say you wanted to count how many of your online friends were dogs, while respecting the maxim that, on the Internet, nobody should know you’re a dog. To do this, you could ask each friend to answer the question “Are you a dog?” in the following way. Each friend should flip a coin in secret, and answer the question truthfully if the coin came up heads; but, if the coin came up tails, that friend should always say “Yes” regardless. Then you could get a good estimate of the true count from the greater-than-half fraction of your friends that answered “Yes”. However, you still wouldn’t know which of your friends was a dog: each answer “Yes” would most likely be due to that friend’s coin flip coming up tails.
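
The dog-counting example can be written down in a few lines. This is only the basic randomized-response idea, not RAPPOR's actual mechanism (which adds Bloom-filter encoding and both permanent and instantaneous randomization), but it shows why the aggregate is recoverable while individual answers stay private:

    import random

    def noisy_answer(is_dog):
        # Flip a fair coin in secret: heads -> answer truthfully,
        # tails -> always answer "Yes" regardless of the truth.
        if random.random() < 0.5:
            return is_dog
        return True

    def estimate_true_fraction(answers):
        # P(yes) = 0.5 * p_true + 0.5, so p_true = 2 * (observed_rate - 0.5).
        yes_rate = sum(answers) / len(answers)
        return max(0.0, 2 * (yes_rate - 0.5))

    population = [random.random() < 0.1 for _ in range(100_000)]  # 10% are dogs
    reports = [noisy_answer(dog) for dog in population]
    print(round(estimate_true_fraction(reports), 3))  # close to 0.1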

RAPPOR builds on the above concept, allowing software to send reports that are effectively indistinguishable from the results of random coin flips and are free of any unique identifiers. However, by aggregating the reports we can learn the common statistics that are shared by many users. We’re currently testing the use of RAPPOR in Chrome, to learn statistics about how unwanted software is hijacking users’ settings.

We believe that RAPPOR has the potential to be applied for a number of different purposes, so we're making it freely available for all to use. We'll continue development of RAPPOR as a standalone open-source project so that anybody can inspect and test its reporting and analysis mechanisms, and help develop the technology. We’ve written up the technical details of RAPPOR in a report that will be published next week at the ACM Conference on Computer and Communications Security.

We’re encouraged by the feedback we’ve received so far from academics and other stakeholders, and we’re looking forward to additional comments from the community. We hope that everybody interested in preserving user privacy will review the technology and share their feedback at rappor-discuss@googlegroups.com.

Posted by Úlfar Erlingsson, Tech Lead Manager, Security Research


2-Step Verification offers a strong extra layer of protection for Google Accounts. Once enabled, you’re asked for a verification code from your phone in addition to your password, to prove that it’s really you signing in from an unfamiliar device. Hackers usually work from afar, so this second factor makes it much harder for a hacker who has your password to access your account, since they don’t have your phone.

Today we’re adding even stronger protection for particularly security-sensitive individuals. Security Key is a physical USB second factor that only works after verifying the login site is truly a Google website, not a fake site pretending to be Google. Rather than typing a code, just insert Security Key into your computer’s USB port and tap it when prompted in Chrome. When you sign into your Google Account using Chrome and Security Key, you can be sure that the cryptographic signature cannot be phished.
Security Key and Chrome incorporate the open Universal 2nd Factor (U2F) protocol from the FIDO Alliance, so other websites with account login systems can get FIDO U2F working in Chrome today. It’s our hope that other browsers will add FIDO U2F support, too. As more sites and browsers come onboard, security-sensitive users can carry a single Security Key that works everywhere FIDO U2F is supported.

Security Key works with Google Accounts at no charge, but you’ll need to buy a compatible USB device directly from a U2F participating vendor. If you think Security Key may be right for you, we invite you to learn more.

Posted by Nishit Shah, Product Manager, Google Security

Today we are publishing details of a vulnerability in the design of SSL version 3.0. This vulnerability allows the plaintext of secure connections to be calculated by a network attacker. I discovered this issue in collaboration with Thai Duong and Krzysztof Kotowicz (also Googlers).

SSL 3.0 is nearly 18 years old, but support for it remains widespread. Most importantly, nearly all browsers support it and, in order to work around bugs in HTTPS servers, browsers will retry failed connections with older protocol versions, including SSL 3.0. Because a network attacker can cause connection failures, they can trigger the use of SSL 3.0 and then exploit this issue.

Disabling SSL 3.0 support, or CBC-mode ciphers with SSL 3.0, is sufficient to mitigate this issue, but presents significant compatibility problems, even today. Therefore our recommended response is to support TLS_FALLBACK_SCSV. This is a mechanism that solves the problems caused by retrying failed connections and thus prevents attackers from inducing browsers to use SSL 3.0. It also prevents downgrades from TLS 1.2 to 1.1 or 1.0 and so may help prevent future attacks.
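
For application developers who control their own TLS configuration, explicitly refusing SSL 3.0 is straightforward. The snippet below is a generic Python illustration of that mitigation, not a description of Google's server-side changes (and recent TLS stacks already disable SSL 3.0 by default):

    import ssl

    ctx = ssl.create_default_context()
    # Never negotiate SSL 3.0 (or SSL 2.0), even if a peer or a retry asks for it.
    ctx.options |= ssl.OP_NO_SSLv3
    ctx.options |= ssl.OP_NO_SSLv2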

Google Chrome and our servers have supported TLS_FALLBACK_SCSV since February and thus we have good evidence that it can be used without compatibility problems. Additionally, Google Chrome will begin testing changes today that disable the fallback to SSL 3.0. This change will break some sites and those sites will need to be updated quickly.

In the coming months, we hope to remove support for SSL 3.0 completely from our client products.

Thank you to all the people who helped review and discuss responses to this issue.

Posted by Bodo Möller, Google Security Team

[Updated Oct 15 to note that SSL 3.0 is nearly 18 years old, not nearly 15 years old.]

It’s been a year since we launched our Patch Reward program, a novel effort designed to recognize and reward proactive contributions to the security of key open-source projects that make the Internet tick. Our goal is to provide financial incentives for improvements that go beyond merely fixing a known security bug.

We started with a modest scope and reward amounts, but have gradually expanded the program over the past few months. We’ve seen some great work so far—and to help guide future submissions, we wanted to share some of our favorites:
  • Addition of Curve25519 and several other primitives in OpenSSH to strengthen its cryptographic foundations and improve performance.
  • A set of patches to reduce the likelihood of ASLR info leaks in Linux to make certain types of memory corruption bugs more difficult to exploit.
  • And, of course, the recent attack-surface-reducing function prefix patch in bash that helped mitigate a flurry of “Shellshock”-related bugs.

We hope that this list inspires even more contributions in the year to come. Of course, before participating, be sure to read the rules page. When done, simply send your nominations to security-patches@google.com. And keep up the great work!

Posted by Michal Zalewski, Google Security Team

Security and privacy are top priorities for Google. We’ve invested a lot in making our products secure, including strong SSL encryption by default for Search, Gmail and Drive. We’re working to further extend encryption across all our services, ensuring that your connection to Google is private.

For some time, we’ve offered network administrators the ability to require the use of SafeSearch by their users, which filters out explicit content from search results; this is especially important for schools. However, using this functionality has meant that searches were sent over an unencrypted connection to Google. Unfortunately, this has been the target of abuse by other groups looking to snoop on people’s searches, so we will be removing it as of early December.

Going forward, organizations can require SafeSearch on their networks while at the same time ensuring that their users’ connections to Google remain encrypted. (This is in addition to existing functionality that allows SafeSearch to be set on individual browsers and to be enabled by policy on managed devices like Chromebooks.) Network administrators can read more about how to enable this new feature here.

Posted by Brian Fitzpatrick, Engineering Director

Cross-posted on the Chromium Blog

We work hard to keep you safe online. In Chrome, for instance, we warn users against malware and phishing and offer rewards for finding security bugs. Due in part to our collaboration with the research community, we’ve squashed more than 700 Chrome security bugs and have rewarded more than $1.25 million through our bug reward program. But as Chrome has become more secure, it’s gotten even harder to find and exploit security bugs.

This is a good problem to have! In recognition of the extra effort it takes to uncover vulnerabilities in Chrome, we’re increasing our reward levels. We’re also making some changes to be more transparent with researchers reporting a bug.

First, we’re increasing our usual reward pricing range to $500-$15,000 per bug, up from a previous published maximum of $5,000. This is accompanied by a clear breakdown of likely reward amounts by bug type. As always, we reserve the right to reward above these levels for particularly great reports. (For example, last month we awarded $30,000 for a very impressive report.)

Second, we’ll pay at the higher end of the range when researchers can provide an exploit to demonstrate a specific attack path against our users. Researchers now have an option to submit the vulnerability first and follow up with an exploit later. We believe that this is a win-win situation for security and researchers: we get to patch bugs earlier and our contributors get to lay claim to the bugs sooner, lowering the chances of submitting a duplicate report.

Third, Chrome reward recipients will be listed in the Google Hall of Fame, so you’ve got something to print out and hang on the fridge.

As a special treat, we’re going to back-pay valid submissions from July 1, 2014 at the increased reward levels we’re announcing today. Good times.

We’ve also answered some new FAQs on our rules page, including questions about our new Trusted Researcher program and a bit about our philosophy and alternative markets for zero-day bugs.

Happy bug hunting!

Posted by Tim Willis, Hacker Philanthropist, Chrome Security Team

Cross-posted on the Open Source Blog

A recent Pew study found that 86% of people surveyed had taken steps to protect their security online. This is great—more security is always good. However, if people are indeed working to protect themselves, why are we still seeing incidents, breaches, and confusion? In many cases these problems recur because the technology that allows people to secure their communications, content and online activity is too hard to use. 

In other words, the tools for the job exist. But while many of these tools work technically, they don’t always work in ways that users expect. They introduce extra steps or are simply confusing and cumbersome. (“Is this a software bug, or am I doing something wrong?”) However elegant and intelligent the underlying technology (and much of it is truly miraculous), the results are in: if people can’t use it easily, many of them won’t. 

We believe that people shouldn’t have to make a trade-off between security and ease of use. This is why we’re happy to support Simply Secure, a new organization dedicated to improving the usability and safety of open-source tools that help people secure their online lives. 

Over the coming months, Simply Secure will be collaborating with open-source developers, designers, researchers, and others to take what’s there—groundbreaking work from efforts like Open Whisper Systems, The Guardian Project, Off-the-Record Messaging, and more—and work to make them easier to understand and use. 

We’re excited for a future where people won’t have to choose between ease and security, and where tools that allow people to secure their communications, content, and online activity are as easy as choosing to use them.

Posted by Meredith Whittaker, Open Source Research Lead and Ben Laurie, Senior Staff Security Engineer

One of the unfortunate realities of the Internet today is a phenomenon known in security circles as “credential dumps”—the posting of lists of usernames and passwords on the web. We’re always monitoring for these dumps so we can respond quickly to protect our users. This week, we identified several lists claiming to contain Google and other Internet providers’ credentials.

We found that less than 2% of the username and password combinations might have worked, and our automated anti-hijacking systems would have blocked many of those login attempts. We’ve protected the affected accounts and have required those users to reset their passwords.

It’s important to note that in this case and in others, the leaked usernames and passwords were not the result of a breach of Google systems. Often, these credentials are obtained through a combination of other sources. 

For instance, if you reuse the same username and password across websites, and one of those websites gets hacked, your credentials could be used to log into the others. Or attackers can use malware or phishing schemes to capture login credentials.

We’re constantly working to keep your accounts secure from phishing, malware and spam. For instance, if we see unusual account activity, we’ll stop sign-in attempts from unfamiliar locations and devices. You can review this activity and confirm whether or not you actually took the action.

A few final tips: Make sure you’re using a strong password unique to Google. Update your recovery options so we can reach you by phone or email if you get locked out of your account. And consider 2-step verification, which adds an extra layer of security to your account. You can visit g.co/accountcheckup where you’ll see a list of many of the security controls at your disposal.

Posted by Borbala Benko, Elie Bursztein, Tadek Pietraszek and Mark Risher, Google Spam & Abuse Team

Cross-posted on the Chromium Blog

The SHA-1 cryptographic hash algorithm has been known to be considerably weaker than it was designed to be since at least 2005 — 9 years ago. Collision attacks against SHA-1 are too affordable for us to consider it safe for the public web PKI. We can only expect that attacks will get cheaper.

That’s why Chrome will start the process of sunsetting SHA-1 (as used in certificate signatures for HTTPS) with Chrome 39 in November. HTTPS sites whose certificate chains use SHA-1 and are valid past 1 January 2017 will no longer appear to be fully trustworthy in Chrome’s user interface.

SHA-1's use on the Internet has been deprecated since 2011, when the CA/Browser Forum, an industry group of leading web browsers and certificate authorities (CAs) working together to establish basic security requirements for SSL certificates, published their Baseline Requirements for SSL. These Requirements recommended that all CAs transition away from SHA-1 as soon as possible, and followed similar events in other industries and sectors, such as NIST deprecating SHA-1 for government use in 2010.

We have seen this type of weakness turn into a practical attack before, with the MD5 hash algorithm. We need to ensure that by the time an attack against SHA-1 is demonstrated publicly, the web has already moved away from it. Unfortunately, this can be quite challenging. For example, when Chrome disabled MD5, a number of enterprises, schools, and small businesses were affected when their proxy software — from leading vendors — continued to use the insecure algorithms, and were left scrambling for updates. Users who used personal firewall software were also affected.

We plan to surface, in the HTTPS security indicator in Chrome, the fact that SHA-1 does not meet its design guarantee. We are taking a measured approach, gradually ratcheting down the security indicator and gradually moving the timetable up (keep in mind that we release stable versions of Chrome about 6-8 weeks after their branch point):

Chrome 39 (Branch point 26 September 2014)
Sites with end-entity (“leaf”) certificates that expire on or after 1 January 2017, and which include a SHA-1-based signature as part of the certificate chain, will be treated as “secure, but with minor errors”.

The current visual display for “secure, but with minor errors” is a lock with a yellow triangle, and is used to highlight other deprecated and insecure practices, such as passive mixed content.


Chrome 40 (Branch point 7 November 2014; Stable after holiday season)
Sites with end-entity certificates that expire between 1 June 2016 and 31 December 2016 (inclusive), and which include a SHA-1-based signature as part of the certificate chain, will be treated as “secure, but with minor errors”.

Sites with end-entity certificates that expire on or after 1 January 2017, and which include a SHA-1-based signature as part of the certificate chain, will be treated as “neutral, lacking security”.

The current visual display for “neutral, lacking security” is a blank page icon, and is used in other situations, such as HTTP.

Chrome 41 (Branch point in Q1 2015)
Sites with end-entity certificates that expire between 1 January 2016 and 31 December 2016 (inclusive), and which include a SHA-1-based signature as part of the certificate chain, will be treated as “secure, but with minor errors”.

Sites with end-entity certificates that expire on or after 1 January 2017, and which include a SHA-1-based signature as part of the certificate chain, will be treated as “affirmatively insecure”. Subresources from such domains will be treated as “active mixed content”.

The current visual display for “affirmatively insecure” is a lock with a red X, and a red strike-through text treatment in the URL scheme.

Note: SHA-1-based signatures for trusted root certificates are not a problem because TLS clients trust them by their identity, rather than by the signature of their hash.
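
Site operators who want a quick, rough check of whether their own certificate falls into the affected buckets can inspect the leaf certificate's signature hash and expiry. The sketch below uses Python with the third-party "cryptography" package and only looks at the leaf certificate; a complete check has to walk every certificate in the chain up to (but not including) the root:

    import datetime
    import ssl
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    pem = ssl.get_server_certificate(("example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())

    uses_sha1 = cert.signature_hash_algorithm.name == "sha1"
    expires_2017_or_later = cert.not_valid_after >= datetime.datetime(2017, 1, 1)
    print("leaf signed with SHA-1:", uses_sha1)
    print("valid past 1 January 2017:", expires_2017_or_later)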

Posted by Chris Palmer, Secure Socket Lover and Ryan Sleevi, Transport Layer Securer

Cross-posted on the Chrome Blog

You should be able to use the web safely, without fear that malware could take control of your computer, or that you could be tricked into giving up personal information in a phishing scam.

That’s why we’ve invested so much in tools that protect you online. Our Safe Browsing service protects you from malicious websites and warns you about malicious downloads in Chrome. We’re currently showing more than three million download warnings per week—and because we make this technology available for other browsers to use, we can help keep 1.1 billion people safe.

Starting next week, we’ll be expanding Safe Browsing protection against additional kinds of deceptive software: programs disguised as a helpful download that actually make unexpected changes to your computer—for instance, switching your homepage or other browser settings to ones you don’t want.

We’ll show a warning in Chrome whenever an attempt is made to trick you into downloading and installing such software. (If you still wish to proceed despite the warning, you can access it from your Downloads list.) 
As always, be careful and make sure you trust the source when downloading software. Check out these tips to learn how you can stay safe on the web.

Posted by Moheeb Abu Rajab, Staff Engineer, Google Security

Last week we announced support for non-Latin characters in Gmail—think δοκιμή and 测试 and みんな—as a first step towards more global email. We’re really excited about these new capabilities. We also want to ensure they aren’t abused by spammers or scammers trying to send misleading or harmful messages.

Scammers can exploit the fact that ဝ, ૦, and ο look nearly identical to the letter o, and by mixing and matching them, they can hoodwink unsuspecting victims.* Can you imagine the risk of clicking “ShဝppingSite” vs. “ShoppingSite” or “MyBank” vs. “MyBɑnk”?

To stay one step ahead of spammers, the Unicode community has identified suspicious combinations of letters that could be misleading, and Gmail will now begin rejecting email with such combinations. We’re using an open standard—the Unicode Consortium’s “Highly Restricted” specification—which we believe strikes a healthy balance between legitimate uses of these new domains and those likely to be abused.
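
The underlying problem is easy to demonstrate: glyphs that render almost identically are entirely different characters to a computer. The snippet below is a toy illustration, not how Gmail's check is implemented; it simply prints the code point and Unicode name of each look-alike, whereas real confusable detection relies on the Unicode Consortium's published confusables data and restriction levels:

    import unicodedata

    for ch in ["o", "ο", "ဝ", "૦"]:
        print(repr(ch), hex(ord(ch)), unicodedata.name(ch))

    # 'o'  0x6f    LATIN SMALL LETTER O
    # 'ο'  0x3bf   GREEK SMALL LETTER OMICRON
    # 'ဝ'  0x101d  MYANMAR LETTER WA
    # '૦'  0xae6   GUJARATI DIGIT ZERO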

We’re rolling out the changes today, and hope that others across the industry will follow suit. Together, we can help ensure that international domains continue to flourish, allowing both users and businesses to have a tête-à-tête in the language of their choosing.

Posted by Mark Risher, Spam & Abuse Team

*For those playing at home, that's a Myanmar letter Wa (U+101D), a Gujarati digit zero (U+0AE6) and a Greek small letter omicron (U+03BF), followed by the ASCII letter 'o'.

Cross-posted from the Webmaster Central Blog

Security is a top priority for Google. We invest a lot in making sure that our services use industry-leading security, like strong HTTPS encryption by default. That means that people using Search, Gmail and Drive, for example, automatically have a secure connection to Google. 

Beyond our own stuff, we’re also working to make the Internet safer more broadly. A big part of that is making sure that websites people access from Google are secure. For instance, we have created resources to help webmasters prevent and fix security breaches on their sites. 

We want to go even further. At Google I/O a few months ago, we called for “HTTPS everywhere” on the web. 

We’ve also seen more and more webmasters adopting HTTPS (also known as HTTP over TLS, or Transport Layer Security) on their websites, which is encouraging. 

For these reasons, over the past few months we’ve been running tests taking into account whether sites use secure, encrypted connections as a signal in our search ranking algorithms. We’ve seen positive results, so we’re starting to use HTTPS as a ranking signal. For now it's only a very lightweight signal—affecting fewer than 1% of global queries, and carrying less weight than other signals such as high-quality content—while we give webmasters time to switch to HTTPS. But over time, we may decide to strengthen it, because we’d like to encourage all website owners to switch from HTTP to HTTPS to keep everyone safe on the web.

In the coming weeks, we’ll publish detailed best practices (we’ll add a link to it from here) to make TLS adoption easier, and to avoid common mistakes. Here are some basic tips to get started:
  • Decide the kind of certificate you need: single, multi-domain, or wildcard certificate
  • Use 2048-bit key certificates
  • Use relative URLs for resources that reside on the same secure domain
  • Use protocol relative URLs for all other domains
  • Check out our Site move article for more guidelines on how to change your website’s address
  • Don’t block your HTTPS site from crawling using robots.txt
  • Allow indexing of your pages by search engines where possible. Avoid the noindex robots meta tag

If your website is already serving on HTTPS, you can test its security level and configuration with the Qualys Lab tool. If you are concerned about TLS and your site’s performance, have a look at Is TLS fast yet?. And of course, if you have any questions or concerns, please feel free to post in our Webmaster Help Forums.

We hope to see more websites using HTTPS in the future. Let’s all make the web more secure!

Posted by Zineb Ait Bahajji and Gary Illyes, Webmaster Trends Analysts

Posted by Chris Evans, Researcher Herder

Security is a top priority for Google. We've invested a lot in making our products secure, including strong SSL encryption by default for Search, Gmail and Drive, as well as encrypting data moving between our data centers. Beyond securing our own products, interested Googlers also spend some of their time on research that makes the Internet safer, leading to the discovery of bugs like Heartbleed.

The success of that part-time research has led us to create a new, well-staffed team called Project Zero.

You should be able to use the web without fear that a criminal or state-sponsored actor is exploiting software bugs to infect your computer, steal secrets or monitor your communications. Yet in sophisticated attacks, we see the use of "zero-day" vulnerabilities to target, for example, human rights activists or to conduct industrial espionage. This needs to stop. We think more can be done to tackle this problem.

Project Zero is our contribution, to start the ball rolling. Our objective is to significantly reduce the number of people harmed by targeted attacks. We're hiring the best practically-minded security researchers and contributing 100% of their time toward improving security across the Internet.

We're not placing any particular bounds on this project and will work to improve the security of any software depended upon by large numbers of people, paying careful attention to the techniques, targets and motivations of attackers. We'll use standard approaches such as locating and reporting large numbers of vulnerabilities. In addition, we'll be conducting new research into mitigations, exploitation, program analysis—and anything else that our researchers decide is a worthwhile investment.

We commit to doing our work transparently. Every bug we discover will be filed in an external database. We will only report bugs to the software's vendor—and no third parties. Once the bug report becomes public (typically once a patch is available), you'll be able to monitor vendor time-to-fix performance, see any discussion about exploitability, and view historical exploits and crash traces. We also commit to sending bug reports to vendors in as close to real-time as possible, and to working with them to get fixes to users in a reasonable time.

We're hiring. We believe that most security researchers do what they do because they love what they do. What we offer that we think is new is a place to do what you love—but in the open and without distraction. We'll also be looking at ways to involve the wider community, such as extensions of our popular reward initiatives and guest blog posts. As we find things that are particularly interesting, we'll discuss them on our blog, which we hope you'll follow.

Posted by Adam Langley, Security Engineer

On Wednesday, July 2, we became aware of unauthorized digital certificates for several Google domains. The certificates were issued by the National Informatics Centre (NIC) of India, which holds several intermediate CA certificates trusted by the Indian Controller of Certifying Authorities (India CCA).

The India CCA certificates are included in the Microsoft Root Store and thus are trusted by the vast majority of programs running on Windows, including Internet Explorer and Chrome. Firefox is not affected because it uses its own root store that doesn’t include these certificates.

We are not aware of any other root stores that include the India CCA certificates, thus Chrome on other operating systems, Chrome OS, Android, iOS and OS X are not affected. Additionally, Chrome on Windows would not have accepted the certificates for Google sites because of public-key pinning, although misissued certificates for other sites may exist.

We promptly alerted NIC, India CCA and Microsoft about the incident, and we blocked the misissued certificates in Chrome with a CRLSet push.

On July 3, India CCA informed us that they revoked all the NIC intermediate certificates, and another CRLSet push was performed to include that revocation.

Chrome users do not need to take any action to be protected by the CRLSet updates. We have no indication of widespread abuse and we are not suggesting that people change passwords.

At this time, India CCA is still investigating this incident. This event also highlights, again, that our Certificate Transparency project is critical for protecting the security of certificates in the future.

Update Jul 9: India CCA informed us of the results of their investigation on July 8. They reported that NIC’s issuance process was compromised and that only four certificates were misissued; the first on June 25. The four certificates provided included three for Google domains (one of which we were previously aware of) and one for Yahoo domains. However, we are also aware of misissued certificates not included in that set of four and can only conclude that the scope of the breach is unknown.

The intermediate CA certificates held by NIC were revoked on July 3, as noted above. But a root CA is responsible for all certificates issued under its authority. In light of this, in a future Chrome release, we will limit the India CCA root certificate to the following domains and subdomains thereof in order to protect users:
  1. gov.in
  2. nic.in
  3. ac.in
  4. rbi.org.in
  5. bankofindia.co.in
  6. ncode.in
  7. tcs.co.in

Posted by Kevin Stadmeyer, Technical Program Manager

At Google, ensuring the security of our users is a top priority, and we are constantly assessing how we can make our services even more secure. We recently received a report via our Vulnerability Reward Program of a security issue affecting a small subset of file types in Google Drive and have since made an update to address it.

This issue is only relevant if all of the following apply:
  • The file was uploaded to Google Drive
  • The file was not converted to Docs, Sheets, or Slides (i.e. remained in its original format such as .pdf, .docx, etc.)
  • The owner changed sharing settings so that the document was available to “Anyone with the link”
  • The file contained hyperlinks to third-party HTTPS websites in its content
In this specific instance, if a user clicked on the embedded hyperlink, the administrator of that third-party site could potentially receive header information that may have allowed him or her to see the URL of the original document that linked to his or her site.

Today’s update to Drive takes extra precaution by ensuring that newly shared documents with hyperlinks to third-party HTTPS websites will not inadvertently relay the original document’s URL.

While any documents shared going forward are no longer impacted by this issue, if one of your previously shared documents meets all four of the criteria above, you can generate a new sharing link with the following steps:
  1. Create a copy of the document, via File > "Make a copy..."
  2. Share the copy of the document with particular people or via a new shareable link, via the “Share” button
  3. Delete the original document


Cross-posted from the Chromium Blog

Extensions are a great way to enhance the browsing experience. However, some extensions ask for broad permissions that allow access to sensitive data such as browser cookies or history. Last year, we introduced the Chrome Apps & Extensions Developer Tool, which provides an improved developer experience for debugging apps and extensions. The newest version of the tool, available today, lets power users audit any app or extension and get visibility into the precise actions that it's performing.

Once you’ve installed the Chrome Apps & Extensions Developer Tool, it will start locally auditing your extensions and apps as you use them. For each app or extension, you can see historical activity over the past few days as well as real-time activity by clicking the “Behavior” link. The tool highlights activities that involve your information, such as reading website cookies or modifying web sites, in a privacy section. You can also search for URLs to see if an extension has modified any matching pages. If you’re debugging an app or extension, you can use the “Realtime” tab to watch the stream of API calls as an extension or app runs. This can help you track down glitches or identify unnecessary API calls.

Whether you’re a Chrome power user or a developer testing an extension, the Chrome Apps & Extensions Developer Tool can give you the information you need to understand how apps and extensions affect your browsing.

Posted by Adrienne Porter Felt, Software Engineer and Extension Tinkerer

Posted by Stephan Somogyi, Product Manager, Security and Privacy

Your security online has always been a top priority for us, and we’re constantly working to make sure your data is safe. For example, Gmail supported HTTPS when it first launched and now always uses an encrypted connection when you check or send email in your browser. We warn people in Gmail and Chrome when we have reason to believe they’re being targeted by bad actors. We also alert you to malware and phishing when we find it.

Today, we’re adding to that list the alpha version of a new tool. It’s called End-to-End and it’s a Chrome extension intended for users who need additional security beyond what we already provide.

“End-to-end” encryption means data leaving your browser will be encrypted until the message’s intended recipient decrypts it, and that similarly encrypted messages sent to you will remain that way until you decrypt them in your browser.


While end-to-end encryption tools like PGP and GnuPG have been around for a long time, they require a great deal of technical know-how and manual effort to use. To help make this kind of encryption a bit easier, we’re releasing code for a new Chrome extension that uses OpenPGP, an open standard supported by many existing encryption tools.

However, you won’t find the End-to-End extension in the Chrome Web Store quite yet; we’re just sharing the code today so that the community can test and evaluate it, helping us make sure that it’s as secure as it needs to be before people start relying on it. (And we mean it: our Vulnerability Reward Program offers financial awards for finding security bugs in Google code, including End-to-End.)

Once we feel that the extension is ready for primetime, we’ll make it available in the Chrome Web Store, and anyone will be able to use it to send and receive end-to-end encrypted emails through their existing web-based email provider.

We recognize that this sort of encryption will probably only be used for very sensitive messages or by those who need added protection. But we hope that the End-to-End extension will make it quicker and easier for people to get that extra layer of security should they need it.

You can find more technical details describing how we've architected and implemented End-to-End here.



Earlier this year, we deployed a new TLS cipher suite in Chrome that operates three times faster than AES-GCM on devices that don’t have AES hardware acceleration, including most Android phones, wearable devices such as Google Glass and older computers. This improves user experience, reducing latency and saving battery life by cutting down the amount of time spent encrypting and decrypting data.

To make this happen, Adam Langley, Wan-Teh Chang, Ben Laurie and I began implementing new algorithms -- ChaCha20 for symmetric encryption and Poly1305 for authentication -- in OpenSSL and NSS in March 2013. It was a complex effort that required implementing a new abstraction layer in OpenSSL in order to support the Authenticated Encryption with Associated Data (AEAD) encryption mode properly. AEAD enables encryption and authentication to happen concurrently, making it easier to use and optimize than older, commonly used modes such as CBC. Recent attacks against RC4 and CBC also prompted us to make this change.
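
To illustrate what an AEAD interface looks like from the application's point of view, here is a small Python sketch using the third-party "cryptography" package. It uses the modern ChaCha20-Poly1305 construction, which differs in minor details from the 2013 draft we first shipped, and it is of course not Chrome's actual implementation:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()   # 256-bit key
    aead = ChaCha20Poly1305(key)
    nonce = os.urandom(12)                  # 96-bit nonce; never reuse with the same key
    aad = b"record header"                  # authenticated but not encrypted

    # Encryption and authentication happen in one call; the 16-byte Poly1305
    # tag is appended to the ciphertext.
    ciphertext = aead.encrypt(nonce, b"hello TLS", aad)
    plaintext = aead.decrypt(nonce, ciphertext, aad)  # raises InvalidTag on tampering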

The benefits of this new cipher suite include:
  • Better security: ChaCha20 is immune to padding-oracle attacks, such as the Lucky13, which affect CBC mode as used in TLS. By design, ChaCha20 is also immune to timing attacks. Check out a detailed description of TLS ciphersuites weaknesses in our earlier post.
  • Better performance: ChaCha20 and Poly1305 are very fast on mobile and wearable devices, as their designs are able to leverage common CPU instructions, including ARM vector instructions. Poly1305 also saves network bandwidth, since its output is only 16 bytes compared to HMAC-SHA1, which is 20 bytes. This represents a 16% reduction of the TLS network overhead incurred when using older ciphersuites such as RC4-SHA or AES-SHA. The expected acceleration compared to AES-GCM for various platforms is summarized in the chart below.


As of February 2014, almost all HTTPS connections made from Chrome browsers on Android devices to Google properties have used this new cipher suite. We plan to make it available as part of the Android platform in a future release. If you’d like to verify which cipher suite Chrome is currently using, on an Android device or on desktop, just click on the padlock in the URL bar and look at the connection tab. If Chrome is using ChaCha20-Poly1305 you will see the following information:
ChaCha20 and Poly1305 were designed by Prof. Dan Bernstein from the University of Illinois at Chicago. The simple and efficient design of these algorithms combined with the extensive vetting they received from the scientific community make us confident that these algorithms will bring the security and speed needed to secure mobile communication. Moreover, selecting algorithms that are free for everyone to use is also in line with our commitment to openness and transparency.

We would like to thank the people who made this possible: Dan Bernstein, who invented and implemented both ChaCha20 and Poly1305; Andrew Moon for his open-source implementation of Poly1305; Ted Krovetz for his open-source implementation of ChaCha20; and Peter Schwabe for his implementation work. We hope there will be even greater adoption of this cipher suite, and look forward to seeing other websites deprecate AES-SHA1 and RC4-SHA1 in favor of AES-GCM and ChaCha20-Poly1305, since these offer safer and faster alternatives. IETF draft standards for this cipher suite are available here and here.



There is nothing more important than making sure our users and their information stay safe online. Doing that means providing security features at the user-level like 2-Step Verification and recovery options, and also involves a lot of work behind the scenes, both at Google and with developers like you. We've already implemented developer tools including Google Sign-In and support for OAuth 2.0 in Google APIs and IMAP, SMTP and XMPP, and we’re always looking to raise the bar.

That's why, beginning in the second half of 2014, we'll start gradually increasing the security checks performed when users log in to Google. These additional checks will ensure that only the intended user has access to their account, whether through a browser, device or application. These changes will affect any application that sends a username and/or password to Google.

To better protect your users, we recommend you upgrade all of your applications to OAuth 2.0. If you choose not to do so, your users will be required to take extra steps in order to keep accessing your applications.

The standard Internet protocols we support all work with OAuth 2.0, as do most of our APIs. We leverage the work done by the IETF on OAuth 2.0 integration with IMAP, SMTP, POP, XMPP, CalDAV, and CardDAV.
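
For IMAP, for instance, the mechanism involved is the XOAUTH2 SASL mechanism: instead of a password, the client presents an OAuth 2.0 access token it obtained earlier. Here is a rough Python sketch of that final step, assuming the token has already been acquired through a standard OAuth 2.0 flow; consult the official documentation for the exact requirements:

    import imaplib

    def connect_gmail_imap(user, access_token):
        # XOAUTH2 initial response: user and bearer token separated by
        # Ctrl-A (\x01) characters, ending with two of them.
        auth_string = "user=%s\1auth=Bearer %s\1\1" % (user, access_token)
        imap = imaplib.IMAP4_SSL("imap.gmail.com", 993)
        imap.authenticate("XOAUTH2", lambda challenge: auth_string.encode())
        return imap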

In summary, if your application currently uses plain passwords to authenticate to Google, we strongly encourage you to minimize user disruption by switching to OAuth 2.0.



Have you ever wondered how Google Maps knows the exact location of your neighborhood coffee shop? Or of the hotel you’re staying at next month? Translating a street address to an exact location on a map is harder than it seems. To take on this challenge and make Google Maps even more useful, we’ve been working on a new system to help locate addresses even more accurately, using some of the technology from the Street View and reCAPTCHA teams.

This technology finds and reads street numbers in Street View, and correlates those numbers with existing addresses to pinpoint their exact location on Google Maps. We’ve described these findings in a scientific paper at the International Conference on Learning Representations (ICLR). In this paper, we show that this system is able to accurately detect and read difficult numbers in Street View with 90% accuracy. 
Street View numbers correctly identified by the algorithm
These findings have surprising implications for spam and abuse protection on the Internet as well. For more than a decade, CAPTCHAs have used visual puzzles in the form of distorted text to help webmasters prevent automated software from engaging in abusive activities on their sites. It turns out that this new algorithm can also be used to read CAPTCHA puzzles—we found that it can decipher the hardest distorted text puzzles from reCAPTCHA with over 99% accuracy. This shows that the act of typing in the answer to a distorted image should not be the only factor when it comes to determining a human versus a machine.

Fortunately, Google’s reCAPTCHA has taken this into consideration, and reCAPTCHA is more secure today than ever before. Last year, we announced that we’ve significantly reduced our dependence on text distortions as the main differentiator between human and machine, and instead perform advanced risk analysis. This has also allowed us to simplify both our text CAPTCHAs as well as our audio CAPTCHAs, so that getting through this security measure is easy for humans, but still keeps websites protected.
CAPTCHA images correctly solved by the algorithm
Thanks to this research, we know that relying on distorted text alone isn’t enough. However, it’s important to note that simply identifying the text in CAPTCHA puzzles correctly doesn’t mean that reCAPTCHA itself is broken or ineffective. On the contrary, these findings have helped us build additional safeguards against bad actors in reCAPTCHA.

As the Street View and reCAPTCHA teams continue to work closely together, both will continue to improve, making Maps more precise and useful and reCAPTCHA safer and more effective. For more information, check out the reCAPTCHA site and the scientific paper from ICLR 2014.


You may have heard of “Heartbleed,” a flaw in OpenSSL that could allow the theft of data normally protected by SSL/TLS encryption. We’ve assessed this vulnerability and applied patches to key Google services such as Search, Gmail, YouTube, Wallet, Play, Drive, Apps, App Engine, AdWords, DoubleClick, Maps, Maps Engine, Earth, Analytics and Tag Manager. Google Chrome and Chrome OS are not affected. We are still working to patch some other Google services. We regularly and proactively look for vulnerabilities like this -- and encourage others to report them -- so that we can fix software flaws before they are exploited.

If you are a Google Cloud Platform or Google Search Appliance customer, or don’t use the latest version of Android, here is what you need to know:

Cloud SQL
We are currently patching Cloud SQL, with the patch rolling out to all instances today and tomorrow. In the meantime, users should use the IP whitelisting function to ensure that only known hosts can access their instances. Please find instructions here.

Google Compute Engine
Customers need to manually update OpenSSL on each running instance, or replace any existing images with versions that include an updated OpenSSL. Once updated, each instance should be rebooted to ensure all running processes are using the updated SSL library. Please find instructions here.
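As a rough aid (not an official Google tool), the Python sketch below checks whether the openssl binary on an instance reports a release in the Heartbleed-affected range (1.0.1 through 1.0.1f). Keep in mind that a long-running process may still have the old library loaded even after the package is upgraded, which is why the reboot step above matters.

import re
import subprocess

# OpenSSL 1.0.1 through 1.0.1f are affected by CVE-2014-0160;
# 1.0.1g and later, and the 0.9.8/1.0.0 branches, are not.
AFFECTED = re.compile(r"^1\.0\.1[a-f]?$")

def openssl_release() -> str:
    # Typical output looks like: "OpenSSL 1.0.1f 6 Jan 2014"
    output = subprocess.check_output(["openssl", "version"], text=True)
    return output.split()[1]

release = openssl_release()
if AFFECTED.match(release):
    print(f"OpenSSL {release} appears vulnerable to Heartbleed; upgrade and reboot.")
else:
    print(f"OpenSSL {release} is not in the affected 1.0.1 range.")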

Google Search Appliance (GSA)
Engineers have patched GSA and issued notices to customers. More information is available in the Google Enterprise Support Portal.

Android
All versions of Android are immune to CVE-2014-0160 (with the limited exception of Android 4.1.1; patching information for Android 4.1.1 is being distributed to Android partners).

We will continue working closely with the security research and open source communities, as doing so is one of the best ways we know to keep our users safe.

Apr 12: Updated to add Google AdWords, DoubleClick, Maps, Maps Engine and Earth to the list of Google services that were patched early, but inadvertently left out at the time of original posting.

Apr 14: In light of new research on extracting keys using the Heartbleed bug, we are recommending that Google Compute Engine (GCE) customers create new keys for any affected SSL services. Google Search Appliance (GSA) customers should also consider creating new keys after patching their GSA. Engineers are working on a patch for the GSA, and the Google Enterprise Support Portal will be updated with the patch as soon as it is available.

Also updated to add Google Analytics and Tag Manager to the list of Google services that were patched early, but inadvertently left out at the time of original posting.

Apr 16: Updated to include information about GSA patch.

Apr 28: Updated to add Google Drive, which was patched early but inadvertently left out at the time of original posting.


We have received several credible reports and confirmed with our own research that Google’s Domain Name System (DNS) service has been intercepted by most Turkish ISPs (Internet Service Providers).

A DNS server tells your computer the address of a server it’s looking for, in the same way that you might look up a phone number in a phone book. Google operates DNS servers because we believe that you should be able to quickly and securely make your way to whatever host you’re looking for, be it YouTube, Twitter, or any other.

But imagine if someone had swapped your phone book for another one that looks pretty much the same as before, except that the listings for a few people show the wrong phone number. That’s essentially what’s happened: Turkish ISPs have set up servers that masquerade as Google’s DNS service.
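As a rough illustration (not a Google tool), the sketch below, which relies on the third-party dnspython package, compares the answers your system resolver returns with the answers returned when the same query is addressed to Google Public DNS at 8.8.8.8. Differing answers are not proof of interception on their own (geo-aware DNS can legitimately vary), and a proxy that forwards queries faithfully would not be detectable this way, but a consistently divergent or empty answer for a blocked site is a strong hint.

from typing import Optional

import dns.resolver  # third-party package: dnspython

def lookup(name: str, nameserver: Optional[str] = None) -> set:
    # With no nameserver given, use whatever the system is configured with
    # (typically the resolver handed out by the ISP).
    resolver = dns.resolver.Resolver(configure=nameserver is None)
    if nameserver is not None:
        resolver.nameservers = [nameserver]
    return {record.address for record in resolver.resolve(name, "A")}

name = "twitter.com"
print("system resolver:", sorted(lookup(name)))
print("8.8.8.8:        ", sorted(lookup(name, "8.8.8.8")))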



At Google, we’re constantly trying to improve security for our users. Besides the many technical security features we build, our efforts include educating users with advice about what they can do to stay safe online. Our Safety Center is a great example of this. But we’re always trying to do better and have been looking for ways to improve how we provide security advice to users.

That’s why we’ve started a research project to try to pare down existing security advice to a small set of things we can realistically expect our users to do to stay safe online. As part of this project, we are currently running a survey of security experts to see what advice they think is most important.

If you work in security, we’d really appreciate your input. Please take our survey here: goo.gl/F4fJ59

With your input we can draw on our collective expertise to get closer to an optimal set of advice that users can realistically follow, and thus, be safer online. Thanks!



Cross-posted on the Official Google Blog and Gmail Blog

Your email is important to you, and making sure it stays safe and always available is important to us. As you go about your day reading, writing, and checking messages, there are tons of security measures running behind the scenes to keep your email safe, secure, and there whenever you need it.

Starting today, Gmail will always use an encrypted HTTPS connection when you check or send email. Gmail has supported HTTPS since the day it launched, and in 2010 we made HTTPS the default. Today's change means that no one can listen in on your messages as they go back and forth between you and Gmail’s servers—no matter if you're using public WiFi or logging in from your computer, phone or tablet.
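For the curious, here is a small Python sketch (standard library only) that opens a connection to mail.google.com on the standard HTTPS port and prints the negotiated TLS version and cipher; the hostname and port are simply Gmail’s public web endpoint and are not specific to this announcement.

import socket
import ssl

context = ssl.create_default_context()  # verifies the certificate chain and hostname
with socket.create_connection(("mail.google.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="mail.google.com") as tls_sock:
        print("negotiated protocol:", tls_sock.version())
        print("cipher suite:       ", tls_sock.cipher())
        print("certificate subject:", tls_sock.getpeercert().get("subject"))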

In addition, every single email message you send or receive—100 percent of them—is encrypted while moving internally. This ensures that your messages are safe not only when they move between you and Gmail's servers, but also as they move between Google's data centers—something we made a top priority after last summer’s revelations.

Of course, being able to access your email is just as important as keeping it safe and secure. In 2013, Gmail was available 99.978 percent of the time, which averages to less than two hours of disruption for a user for the entire year. Our engineering experts look after Google's services 24x7 and if a problem ever arises, they're on the case immediately. We keep you informed by posting updates on the Apps Status Dashboard until the issue is fixed, and we always conduct a full analysis on the problem to prevent it from happening again.

Our commitment to the security and reliability of your email is absolute, and we’re constantly working on ways to improve. You can learn about additional ways to keep yourself safe online, like creating strong passwords and enabling 2-step verification, by visiting the Security Center: https://www.google.com/help/security.



Notice something different about reCAPTCHA today? You guessed it; those tricky puzzles are now warm and fuzzy just in time for Valentine’s Day. Today across the U.S., we're sharing CAPTCHAs that spread the message of love.

Some examples of Valentine’s Day CAPTCHAs, with words like “Valentine,” “love,” “chocolate,” and “flowers”

But wait. These look really easy. Does this mean that those pesky bots are going to crack these easy CAPTCHAs and abuse our favorite websites? Not so fast.

A few months ago, we announced an improved version of reCAPTCHA that uses advanced risk analysis techniques to distinguish humans from machines. This enabled us to relax the text distortions and show our users CAPTCHAs that adapt to their risk profiles. In other words, with a high likelihood, our valid human users would see CAPTCHAs that they would find easy to solve. Abusive traffic, on the other hand, would get CAPTCHAs designed to stop them in their tracks. It is this same technology that enables us to show these Valentine’s Day CAPTCHAs today without reducing their anti-abuse effectiveness.

But that’s not all. Over the last few months, we’ve been working hard to improve the audio CAPTCHA experience. Our adaptive CAPTCHA technology has, in many cases, allowed us to relax audio distortions and serve significantly easier audio CAPTCHAs. We’ve served over 10 million easy audio CAPTCHAs to users worldwide over the last few weeks and have seen great success rates. We hope to keep enhancing reCAPTCHA’s accessibility options in the months to come. Take a listen to this sample of an easy audio CAPTCHA:


We’re working hard to improve people’s experience with reCAPTCHA without compromising on the spam and abuse protection you’ve come to trust from us. For today, we hope you enjoy our Valentine’s Day gift to you.