Security Blog
The latest news and insights from Google on security and safety on the Internet
Experimenting with Post-Quantum Cryptography
July 7, 2016
Posted by Matt Braithwaite, Software Engineer
Quantum computers are a fundamentally different kind of computer, one that takes advantage of aspects of quantum physics to solve certain sorts of problems dramatically faster than conventional computers can. While they will, no doubt, be of huge benefit in some areas of study, some of the problems that they are effective at solving are the ones that we use to secure digital communications. Specifically, if large quantum computers can be built then they may be able to break the asymmetric cryptographic primitives that are currently used in TLS, the security protocol behind HTTPS.
Quantum computers exist today but, for the moment, they are small and experimental, containing only a handful of quantum bits. It's not even certain that large machines will ever be built, although Google, IBM, Microsoft, Intel and others are working on it. (Adiabatic quantum computers, like the D-Wave computer that Google operates with NASA, can have large numbers of quantum bits, but currently solve fundamentally different problems.)
However, a hypothetical, future quantum computer would be able to retrospectively decrypt any internet communication that was recorded today, and many types of information need to remain confidential for decades. Thus even the possibility of a future quantum computer is something that we should be thinking about today.
Experimenting with Post-quantum cryptography in Chrome
The study of cryptographic primitives that remain secure even against quantum computers is called “post-quantum cryptography”. Today we're announcing an experiment in Chrome where a small fraction of connections between desktop Chrome and Google's servers will use a post-quantum key-exchange algorithm in addition to the elliptic-curve key-exchange algorithm that would typically be used. By adding a post-quantum algorithm on top of the existing one, we are able to experiment without affecting user security. The post-quantum algorithm might turn out to be breakable even with today's computers, in which case the elliptic-curve algorithm will still provide the best security that today’s technology can offer. Alternatively, if the post-quantum algorithm turns out to be secure then it'll protect the connection even against a future, quantum computer.
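The "belt and braces" construction described above can be sketched as deriving the session key from both shared secrets, so that an attacker must break both algorithms to recover it. The sketch below is illustrative only: the KDF choice, the label, and the byte values are assumptions for demonstration, not CECPQ1's actual construction.

```python
import hashlib
import hmac

def combine_shared_secrets(ecdh_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive a session key from both key-exchange outputs.

    An attacker must break BOTH algorithms to recover the key:
    the elliptic-curve secret protects against a weak post-quantum
    scheme, and the post-quantum secret protects against a future
    quantum computer.
    """
    # Concatenate the two secrets and feed them through a KDF
    # (HMAC-SHA-256 here, as an illustrative extract step).
    return hmac.new(b"hybrid-kdf", ecdh_secret + pq_secret,
                    hashlib.sha256).digest()

# Stand-ins for the outputs of the two key exchanges.
ecdh_secret = b"\x01" * 32   # elliptic-curve shared secret
pq_secret = b"\x02" * 32     # post-quantum (New Hope) shared secret

session_key = combine_shared_secrets(ecdh_secret, pq_secret)
```

Because both secrets enter the key derivation, the combined key remains secure as long as either of the two exchanges holds up.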
Our aims with this experiment are to highlight an area of research that Google believes to be important and to gain real-world experience with the larger data structures that post-quantum algorithms will likely require.
We're indebted to Erdem Alkim, Léo Ducas, Thomas Pöppelmann and Peter Schwabe, the researchers who developed “New Hope”, the post-quantum algorithm that we selected for this experiment. Their scheme looked to be the most promising post-quantum key exchange when we investigated in December 2015. Their work builds upon earlier work by Bos, Costello, Naehrig and Stebila, and also on work by Lyubashevsky, Peikert and Regev.
We explicitly do not wish to make our selected post-quantum algorithm a de facto standard. To this end we plan to discontinue this experiment within two years, hopefully by replacing it with something better. Since we selected New Hope, we've noted two promising papers in this space, which are welcome. Additionally, Google researchers, in collaboration with researchers from NXP, Microsoft, Centrum Wiskunde & Informatica and McMaster University, have just published another paper in this area. Practical research papers, such as these, are critical if cryptography is to have real-world impact.
This experiment is currently enabled in Chrome Canary, and you can tell whether it's being used by opening the recently introduced Security Panel and looking for “CECPQ1”, for example on https://play.google.com/store. Not all Google domains will have it enabled and the experiment may appear and disappear a few times if any issues are found.
While it's still very early days for quantum computers, we're excited to begin preparing for them, and to help ensure our users' data will remain secure long into the future.
One Year of Android Security Rewards
June 16, 2016
Posted by Quan To, Program Manager, Android Security
A year ago, we added Android Security Rewards to the long-standing Google Vulnerability Rewards Program. We offered up to $38,000 per report that we used to fix vulnerabilities and protect Android users.
Since then, we have received over 250 qualifying vulnerability reports from researchers who have helped make Android and mobile security stronger. More than a third of them were reported in Media Server, which has been hardened in Android N to make it more resistant to vulnerabilities.
While the program is focused on Nexus devices and has a primary goal of improving Android security, more than a quarter of the issues were reported in code that is developed and used outside of the Android Open Source Project. Fixing these kernel and device driver bugs helps improve security of the broader mobile industry (and even some non-mobile platforms).
By the Numbers
Here’s a quick rundown of the Android VRP’s first year:
We paid over $550,000 to 82 individuals. That’s an average of $2,200 per reward and $6,700 per researcher.
We paid our top researcher, @heisecode, $75,750 for 26 vulnerability reports.
We paid 15 researchers $10,000 or more.
There were no payouts for the top reward for a complete remote exploit chain leading to TrustZone or Verified Boot compromise.
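The averages quoted above follow directly from the totals, as a quick check shows (figures rounded as in the report):

```python
total_paid = 550_000   # "over $550,000" paid out in the first year
reports = 250          # "over 250 qualifying vulnerability reports"
researchers = 82       # individuals who received a reward

per_reward = total_paid / reports          # dollars per reward
per_researcher = total_paid / researchers  # dollars per researcher

print(round(per_reward))      # 2200, matching "$2,200 per reward"
print(round(per_researcher))  # 6707, i.e. roughly "$6,700 per researcher"
```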
Thank you to those who submitted high-quality vulnerability reports to us last year.
Improvements to Android VRP
We’re constantly working to improve the program and today we’re making a few changes to all vulnerability reports filed after June 1, 2016.
We’re paying more!
We will now pay 33% more for a high-quality vulnerability report with a proof of concept. For example, the reward for a Critical vulnerability report with a proof of concept increased from $3,000 to $4,000.
A high-quality vulnerability report with a proof of concept, a CTS test, or a patch will receive an additional 50% more.
We’re raising our rewards for a remote or proximal kernel exploit from $20,000 to $30,000.
A remote exploit chain or exploits leading to TrustZone or Verified Boot compromise increase from $30,000 to $50,000.
All of the changes, as well as the additional terms of the program, are explained in more detail in our Program Rules. If you’re interested in helping us find security vulnerabilities, take a look at Bug Hunter University and learn how to submit high-quality vulnerability reports. Remember, the better the report, the more you’ll get paid. We also recently updated our severity ratings, so make sure to check those out, too.
Thank you to everyone who helped us make Android safer. Together, we made a huge investment in security research that has made Android stronger. We’re just getting started and are looking forward to doing even more in the future.
Evolving the Safe Browsing API
May 20, 2016
Posted by Emily Schechter and Alex Wozniak, Safe Browsing Team
We're excited to announce the launch of the new Safe Browsing API version 4. Version 4 replaces the existing Safe Browsing API version 3. With the launch of v4, we’re now starting the deprecation process for v2 and v3: please transition off of these older Safe Browsing protocol versions and onto protocol version 4 as soon as possible.
Safe Browsing protects well over two billion internet-connected devices from threats like malware and phishing, and has done so for over a decade. We launched v1 of the Safe Browsing API in 2007 to give developers a simple mechanism to access Google’s lists of suspected unsafe sites.
The web has evolved since then and users are now increasingly using the web from their mobile devices. These devices have constraints less common to traditional desktop computing environments: mobile devices have very limited power and network bandwidth, and often poor quality of service. Additionally, cellular data costs our users money, so we have a responsibility to use it judiciously.
With protocol version 4, we’ve optimized for this new environment with a clear focus on maximizing protection per bit, which benefits all Safe Browsing users, mobile and desktop alike. Version 4 clients can now define constraints such as geographic location, platform type, and data caps to use bandwidth and device resources as efficiently as possible. This allows us to function well within the much stricter mobile constraints without sacrificing protection.
We’ve been using the new protocol since December via the Safe Browsing client on Android, which is part of Google Play Services. The first app to use the client is Chrome, starting with version 46: we’re already protecting hundreds of millions of Android Chrome users by default.
We’ve Done Most Of The Work For You Already
A single device should only have a single, up-to-date instance of Safe Browsing data, so we’re taking care of that for all Android developers. Please don’t implement your own Version 4 client on Android: we’re working on making a simple, device-local API available to prevent any resource waste on device. We’ll announce the availability of this new device-local API as soon as possible; in the meantime, there’s no need to develop a Version 4 client on your own. For those who operate in less resource-constrained environments, using the Safe Browsing Version 4 API directly allows you to:
Check pages against the Safe Browsing lists based on platform and threat types.
Warn users before they click links that may lead to infected pages.
Prevent users from posting links to known infected pages.
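For example, a page check against the v4 API is a single POST to its threatMatches:find method. The sketch below builds such a request body in Python; the client ID, client version, and URL are placeholder values, and you would substitute your own API key in the endpoint:

```python
import json

# Endpoint for the v4 lookup method; the key is a placeholder.
API_ENDPOINT = ("https://safebrowsing.googleapis.com/v4/"
                "threatMatches:find?key=YOUR_API_KEY")

def build_lookup_request(urls):
    """Build a v4 threatMatches:find request body for the given URLs."""
    return {
        "client": {
            # Identifies your client to the API; values are placeholders.
            "clientId": "examplecompany",
            "clientVersion": "1.0",
        },
        "threatInfo": {
            # Which lists to check, scoped by platform and entry type.
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = json.dumps(build_lookup_request(["http://example.com/"]))
```

A response with an empty `matches` list means none of the requested lists flagged the URL.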
To make Safe Browsing integration as simple as possible, we’re also releasing a reference client implementation of the new API today, written in Go. It also provides a Safe Browsing HTTP proxy server, which supports JSON.
It’s easy to start protecting users with the new Version 4 of the Safe Browsing API. Sign up for a key and let us know what you think!
Hardening the media stack
May 5, 2016
Posted by Dan Austin and Jeff Vander Stoep, Android Security team
[Cross-posted from the Android Developers Blog]
To help make Android more secure, we encourage and reward researchers who discover vulnerabilities. In 2015, a series of bugs in mediaserver’s libstagefright were disclosed to Google. We released updates for these issues with our August and September 2015 security bulletins.
In addition to addressing issues on a monthly basis, we’ve also been working on new security features designed to enhance the existing security model and provide additional defense in depth. These defense measures attempt to achieve two goals:
Prevention: Stop bugs from becoming vulnerabilities
Containment: Protect the system by de-privileging and isolating components that handle untrusted content
Prevention
Most of the vulnerabilities found in libstagefright were heap overflows resulting from unsigned integer overflows. A number of integer overflows in libstagefright allowed an attacker to allocate a buffer with less space than necessary for the incoming data, resulting in a buffer overflow in the heap.
The result of an unsigned integer overflow is well defined, but the ensuing behavior could be unexpected or unsafe. In contrast, signed integer overflows are considered undefined behavior in C/C++, which means the result of an overflow is not guaranteed, and the compiler author may choose the resulting behavior—typically what is fastest or simplest. We have added compiler changes that are designed to provide safer defaults for both signed and unsigned integer overflows.
The UndefinedBehaviorSanitizer (UBSan) is part of the LLVM/Clang compiler toolchain that detects undefined or unintended behavior. UBSan can check for multiple types of undefined and unsafe behavior, including signed and unsigned integer overflow. These checks add code to the resulting executable, testing for integer overflow conditions during runtime. For example, figure 1 shows source code for the parseChunk function in the MPEG4Extractor component of libstagefright after the original researcher-supplied patch was applied. The modification, which is contained in the black box below, appears to prevent integer overflows from occurring. Unfortunately, while SIZE_MAX and size are 32-bit values, chunk_size is a 64-bit value, resulting in an incomplete check and the potential for integer overflow. In the line within the red box, the addition of size and chunk_size may result in an integer overflow and creation of a buffer smaller than size elements. The subsequent memcpy could then lead to exploitable memory corruption, as size + chunk_size could be less than size, which is highlighted in the blue box. The mechanics of a potential exploit vector for this vulnerability are explained in more detail by Project Zero.
Figure 1. Source code demonstrating a subtle unsigned integer overflow.
Figure 2 compares assembly generated from the code segment above with a second version compiled with integer sanitization enabled. The add operation that results in the integer overflow is contained in the red box.
In the unsanitized version, size (r6) and chunk_size (r7) are added together, potentially resulting in r0 overflowing and being less than size. Then, buffer is allocated with the size specified in r0, and size bytes are copied to it. If r0 is less than r6, this results in memory corruption.
In the sanitized version, size (r7) and chunk_size (r5) are added together, with the result stored in r0. Later, r0 is checked against r7; if r0 is less than r7, as indicated by the CC condition code, r3 is set to 1. If r3 is 1, and the carry bit was set, then an integer overflow occurred, and an abort is triggered, preventing memory corruption.
Note that the incomplete check provided in the patch was not included in figure 2. The overflow occurs in the buffer allocation’s add operation. This addition triggers an integer sanitization check, which turns this exploitable flaw into a harmless abort.
Figure 2. Comparing unsanitized and sanitized compiler output.
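The wrap-and-check logic can be illustrated outside of assembly. In Python (which must simulate 32-bit unsigned arithmetic, since Python integers do not wrap), an unsigned addition has overflowed exactly when the truncated result is smaller than an operand, which is the condition the sanitized code tests. The function and values below are illustrative, not the actual libstagefright code:

```python
MASK32 = 0xFFFFFFFF  # 32-bit unsigned arithmetic wraps modulo 2**32

def add_u32_checked(size: int, chunk_size: int) -> int:
    """Add two 32-bit unsigned values, aborting on overflow.

    Mirrors the sanitized code path: compute the wrapped sum,
    then compare it against an operand. A smaller result means
    the addition wrapped around.
    """
    result = (size + chunk_size) & MASK32
    if result < size:
        # Overflow occurred: abort rather than under-allocate.
        raise OverflowError("unsigned integer overflow in allocation size")
    return result

# A benign addition succeeds...
assert add_u32_checked(100, 200) == 300

# ...but an attacker-controlled chunk_size can wrap the sum: the
# buffer would be allocated far smaller than `size`, and the later
# memcpy of `size` bytes would corrupt the heap.
try:
    add_u32_checked(0xFFFFFFF0, 0x20)  # wraps to 0x10
except OverflowError:
    pass  # the sanitizer's abort path
```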
While the integer sanitizers were originally intended as code hygiene tools, they effectively prevent the majority of reported libstagefright vulnerabilities. Turning on the integer overflow checks was just the first step. Preventing the runtime abort by finding and fixing integer overflows, most of which are not exploitable, represented a large effort by Android's media team. Most of the discovered overflows were fixed and those that remain (mostly for performance reasons) were verified and marked as safe to prevent the runtime abort.
In Android N, signed and unsigned integer overflow detection is enabled on the entire media stack, including libstagefright. This makes it harder to exploit integer overflows, and also helps to prevent future additions to Android from introducing new integer overflow bugs.
Containment
For Android M and earlier, the mediaserver process in Android was responsible for most media-related tasks. This meant that it required access to all permissions needed by those responsibilities and, although mediaserver ran in its own sandbox, it still had access to a lot of resources and capabilities. This is why the libstagefright bugs from 2015 were significant—mediaserver could access several important resources on an Android device including camera, microphone, graphics, phone, Bluetooth, and internet.
A root cause analysis showed that the libstagefright bugs primarily occurred in code responsible for parsing file formats and media codecs. This is not surprising—parsing complex file formats and codecs while trying to optimize for speed is hard, and the large number of edge cases makes such code susceptible to both accidental and malicious malformed inputs.
However, media parsers do not require access to most of the privileged permissions held by mediaserver. Because of this, the media team re-architected mediaserver in Android N to better adhere to the principle of least privilege. Figure 3 illustrates how the monolithic mediaserver and its permissions have been divided, using the following heuristics:
parsing code moved into unprivileged sandboxes that have few or no permissions
components that require sensitive permissions moved into separate sandboxes that only grant access to the specific resources the component needs. For example, only the cameraserver may access the camera, only the audioserver may access Bluetooth, and only the drmserver may access DRM resources.
Figure 3. How mediaserver and its permissions have been divided in Android N.
Comparing the potential impact of the libstagefright bugs on Android N and older versions demonstrates the value of this strategy. Gaining code execution in libstagefright previously granted access to all the permissions and resources available to the monolithic mediaserver process including graphics driver, camera driver, or sockets, which present a rich kernel attack surface.
In Android N, libstagefright runs within the mediacodec sandbox with access to very few permissions. Access to camera, microphone, photos, phone, Bluetooth, and internet, as well as dynamic code loading, are disallowed by SELinux. Interaction with the kernel is further restricted by seccomp. This means that compromising libstagefright would grant the attacker access to significantly fewer permissions and also mitigates privilege escalation by reducing the attack surface exposed by the kernel.
Conclusion
The media hardening project is an ongoing effort focused on moving functionality into less privileged sandboxes and further reducing the permissions granted to those sandboxes. While the techniques discussed here were applied to the Android media framework, they are suitable across the Android codebase. These hardening techniques—and others—are being actively applied to additional components within Android. As always, we appreciate feedback on our work and welcome suggestions for how we can improve Android. Contact us at security@android.com.
Bringing HTTPS to all blogspot domain blogs
May 3, 2016
Posted by Milinda Perera, Software Engineer, Security
HTTPS is fundamental to internet security; it protects the integrity and confidentiality of data sent between websites and visitors' browsers. Last September, we began rolling out HTTPS support for blogspot domain blogs so you could try it out. Today, we’re launching another milestone: an HTTPS version for every blogspot domain blog. With this change, visitors can access any blogspot domain blog over an encrypted channel.
The HTTPS indicator in the Chrome browser
As part of this launch, we're removing the HTTPS Availability setting. Even if you did not previously turn on this setting, your blogs will have an HTTPS version enabled.
We’re also adding a new setting called HTTPS Redirect that allows you to opt in to redirecting HTTP requests to HTTPS. While all blogspot blogs will have an HTTPS version enabled, if you turn on this new setting, all visitors will be redirected to the HTTPS version of your blog at https://<your-blog>.blogspot.com even if they go to http://<your-blog>.blogspot.com. If you choose to turn off this setting, visitors will have two options for viewing your blog: the unencrypted version at http://<your-blog>.blogspot.com or the encrypted version at https://<your-blog>.blogspot.com.
The new HTTPS Redirect setting in the Blogger dashboard
Please be aware that mixed content may cause some of your blog's functionality not to work in the HTTPS version. Mixed content is often caused by incompatible templates, gadgets, or post content. While we're proactively fixing most of these errors, some of them can only be fixed by you, the blog authors. To help spot and fix these errors, we recently released a mixed content warning tool that alerts you to possible mixed content issues in your posts, and gives you the option to fix them automatically before saving.
Existing links and bookmarks to your blogs are not affected by this launch, and will continue to work. Please note that blogs on custom domains will not yet have HTTPS support.
This update expands Google's HTTPS Everywhere mission to all blogspot domain blogs. We appreciate your feedback and will use it to make future improvements.
Protecting against unintentional regressions to cleartext traffic in your Android apps
April 25, 2016
Posted by Alex Klyubin, Android Security team
[Cross-posted from the Android Developers Blog]
When your app communicates with servers using cleartext network traffic, such as HTTP, the traffic risks being eavesdropped upon and tampered with by third parties. This may leak information about your users and open your app up to injection of unauthorized content or exploits. Ideally, your app should use secure traffic only, such as by using HTTPS instead of HTTP. Such traffic is protected against eavesdropping and tampering.
Many Android apps already use secure traffic only. However, some of them occasionally regress to cleartext traffic by accident. For example, an inadvertent change in one of the server components could make the server provide the app with HTTP URLs instead of HTTPS URLs. The app would then proceed to communicate in cleartext, without any user-visible symptoms. This situation may go unnoticed by the app’s developer and users.
Even if you believe your app is only using secure traffic, make sure to use the new mechanisms provided by Android Marshmallow (Android 6.0) to catch and prevent accidental regressions.
New Protection Mechanisms
For apps which only use secure traffic, Android 6.0 Marshmallow (API Level 23) introduced two mechanisms to address regressions to cleartext traffic: (1) in production / installed base, block cleartext traffic, and (2) during development / QA, log or crash whenever non-TLS/SSL traffic is encountered. The following sections provide more information about these mechanisms.
Block cleartext traffic in production
To protect the installed base of your app against regressions to cleartext traffic, declare the android:usesCleartextTraffic="false" attribute on the application element in your app’s AndroidManifest.xml. This declares that the app is not supposed to use cleartext network traffic and makes the platform network stacks of Android Marshmallow block cleartext traffic in the app. For example, if your app accidentally attempts to sign in the user via a cleartext HTTP request, the request will be blocked and the user’s identity and password will not leak to the network.
You don’t have to set the minSdkVersion or targetSdkVersion of your app to 23 (Android Marshmallow) to use android:usesCleartextTraffic. On older platforms, this attribute is simply ignored and thus has no effect.
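For reference, the attribute goes on the application element of the manifest. A minimal sketch (the package name is a placeholder):

```xml
<!-- AndroidManifest.xml (package name is a placeholder) -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app">
    <application android:usesCleartextTraffic="false">
        <!-- activities, services, etc. -->
    </application>
</manifest>
```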
Please note that WebView does not yet honor this feature, and under certain circumstances cleartext traffic may still leave or enter the app. For example, the Socket API ignores the cleartext policy because it does not know whether the data it transmits or receives can be classified as cleartext. Android platform HTTP stacks, on the other hand, honor the policy because they know whether traffic is cleartext.
Google AdMob is also built to honor this policy. When your app declares that it does not use cleartext traffic, only HTTPS-only ads should be served to the app.
Third-party network, ad, and analytics libraries are encouraged to add support for this policy. They can query the cleartext traffic policy via the NetworkSecurityPolicy class.
Detect cleartext traffic during development
To spot cleartext traffic during development or QA, the StrictMode API lets you modify your app to detect non-TLS/SSL traffic and then either log violations to the system log or crash the app (see StrictMode.VmPolicy.Builder.detectCleartextNetwork()). This is a useful tool for identifying which parts of the app are using non-TLS/SSL (and DTLS) traffic. Unlike the android:usesCleartextTraffic attribute, this feature is not meant to be enabled in app builds distributed to users.
There are several reasons for this. Firstly, this feature flags any secure traffic that is not TLS/SSL. More importantly, TLS/SSL traffic sent via an HTTP proxy may also be flagged. This is an issue because, as a developer, you have no control over whether a particular user of your app has configured their Android device to use an HTTP proxy. Finally, the implementation of the feature is not future-proof and may reject future TLS/SSL protocol versions. For these reasons, this feature is intended to be used only during the development and QA phase.
Declare finer-grained cleartext policy in Network Security Config
Android N offers finer-grained control over cleartext traffic policy. As opposed to the android:usesCleartextTraffic attribute, which applies to all destinations with which an app communicates, Android N’s Network Security Config lets an app specify cleartext policy for specific destinations. For example, to facilitate a more gradual transition towards a policy that does not allow cleartext traffic, an app can at first block accidental cleartext only for communication with its most important backends and permit cleartext to be used for other destinations.
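A Network Security Config implementing that gradual transition might look like the sketch below (the backend domain is a placeholder; the file is referenced from the manifest via the android:networkSecurityConfig attribute):

```xml
<!-- res/xml/network_security_config.xml (domain is a placeholder) -->
<network-security-config>
    <!-- Cleartext stays permitted for destinations not listed below... -->
    <base-config cleartextTrafficPermitted="true" />
    <!-- ...but is blocked for the most important backend. -->
    <domain-config cleartextTrafficPermitted="false">
        <domain includeSubdomains="true">api.example.com</domain>
    </domain-config>
</network-security-config>
```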
Next Steps
It is a security best practice to only use secure network traffic for communication between your app and its servers. Android Marshmallow enables you to enforce this practice, so give it a try!
As always, we appreciate feedback and welcome suggestions for improving Android. Contact us at security@android.com.
Android Security 2015 Annual Report
April 19, 2016
Posted by Adrian Ludwig, Lead Engineer, Android Security
Today, for the second year in a row, we’re releasing our Android Security Annual report. This detailed summary includes: a look at how Google services protect the Android ecosystem, an overview of new security protections introduced in 2015, and our work with Android partners and the security research community at large. The full report is here, and an overview is below.
One important goal of releasing this report is to drive an informed conversation about Android security. We hope to accomplish this by providing more information about what we are doing, and what we see happening in the ecosystem. We strongly believe that rigorous, data-driven discussion about security will help guide our efforts to make the Android ecosystem safer.
Enhancing Google's services to protect Android users
In the last year, we’ve significantly improved our machine learning and event correlation to detect potentially harmful behavior.
We protected users from malware and other Potentially Harmful Apps (PHAs), checking over 6 billion installed applications per day.
We protected users from network-based and on-device threats by scanning 400 million devices per day.
And we protected hundreds of millions of Chrome users on Android from unsafe websites with Safe Browsing.
We continued to make it even more difficult to get PHAs into Google Play. Last year’s enhancements reduced the probability of installing a PHA from Google Play by over 40% compared to 2014. Within Google Play, install attempts of most categories of PHAs declined including:
Data Collection: decreased over 40% to 0.08% of installs
Spyware: decreased 60% to 0.02% of installs
Hostile Downloader: decreased 50% to 0.01% of installs
Overall, PHAs were installed on fewer than 0.15% of devices that only get apps from Google Play. About 0.5% of devices that install apps from both Play and other sources had a PHA installed during 2015, similar to the data in last year’s report.
It’s critical that we also protect users who install apps from sources other than Google Play. Our Verify Apps service protects these users, and we improved the effectiveness of the PHA warnings provided by Verify Apps by over 50%. In 2015, we saw an increase in the number of PHA install attempts outside of Google Play, and we disrupted several coordinated efforts to install PHAs onto user devices from outside of Google Play.
New security features in the Android platform
Last year, we launched Android 6.0 Marshmallow, introducing a variety of new security protections and controls:
Full disk encryption is now a requirement for all new Marshmallow devices with adequate hardware capabilities and is also extended to allow encryption of data on SD cards.
Updated app permissions enable you to manage the data you share with specific apps with more granularity and precision.
New verified boot ensures your phone is healthy from the bootloader all the way up to the operating system.
Android security patch level enables you to check and make sure your device has the most recent security updates.
And much more, including support for fingerprint scanners, and SELinux enhancements.
Deeper engagement with the Android ecosystem
We’re working to foster Android security research and making investments to strengthen protections across the ecosystem now and in the long run.
In June, Android joined Google’s Vulnerability Rewards Program, which pays security researchers when they find and report bugs to us. We fixed over 100 vulnerabilities reported this way and paid researchers more than $200,000 for their findings.
In August, we launched our monthly public security update program for the Android Open Source Project, as well as a security update lifecycle for Nexus devices. We intend the update lifecycle for Nexus devices to be a model for all Android manufacturers going forward and have been actively working with ecosystem partners to facilitate similar programs. Since then, manufacturers have provided monthly security updates for hundreds of unique Android device models and hundreds of millions of users have installed monthly security updates to their devices. Despite this progress, many Android devices are still not receiving monthly updates—we are increasing our efforts to help partners update more devices in a timely manner.
Greater transparency, well-informed discussions about security, and ongoing innovation will help keep users safe. We'll continue our ongoing efforts to improve Android’s protections, and we look forward to engaging with the ecosystem and security community in 2016 and beyond.
Helping webmasters re-secure their sites
April 18, 2016
Posted by Kurt Thomas and Yuan Niu, Spam & Abuse Research
Every week, over 10 million users encounter harmful websites that deliver malware and scams. Many of these sites are compromised personal blogs or small business pages that have fallen victim due to a weak password or outdated software. Safe Browsing and Google Search protect visitors from dangerous content by displaying browser warnings and labeling search results with 'this site may harm your computer'. While this helps keep users safe in the moment, the compromised site remains a problem that needs to be fixed.
Unfortunately, many webmasters for compromised sites are unaware anything is amiss. Worse yet, even when they learn of an incident, they may lack the security expertise to take action and address the root cause of compromise. Quoting one webmaster from a survey we conducted, “our daily and weekly backups were both infected” and even after seeking the help of a specialist, after “lots of wasted hours/days” the webmaster abandoned all attempts to restore the site and instead refocused his efforts on “rebuilding the site from scratch”.
In order to find the best way to help webmasters clean up after a compromise, we recently teamed up with the University of California, Berkeley to explore how to quickly contact webmasters and expedite recovery while minimizing the distress involved. We’ve summarized our key lessons below. The full study, which you can read here, was recently presented at the International World Wide Web Conference.
When Google works directly with webmasters during critical moments like security breaches, we can help 75% of webmasters re-secure their content. The whole process takes a median of 3 days. This is a better experience for webmasters and their audience.
How many sites get compromised?
[Figure: Number of freshly compromised sites Google detects every week.]
Over the last year Google detected nearly 800,000 compromised websites—roughly 16,500 new sites every week from around the globe. Visitors to these sites are exposed to low-quality scam content and malware via
drive-by downloads
. While browser and search warnings help protect visitors from harm, these warnings can at times feel punitive to webmasters who learn only after-the-fact that their site was compromised. To balance the safety of our users with the experience of webmasters, we set out to find the best approach to help webmasters recover from security breaches and ultimately reconnect websites with their audience.
Finding the most effective ways to aid webmasters
Getting in touch with webmasters:
One of the hardest steps on the road to recovery is first getting in contact with webmasters. We tried three notification channels: email, browser warnings, and search warnings. For webmasters who proactively registered their site with
Search Console
, we found that email communication led to 75% of webmasters re-securing their pages. When we didn’t know a webmaster’s email address, browser warnings and search warnings helped 54% and 43% of sites clean up respectively.
Providing tips on cleaning up harmful content:
Attackers rely on hidden files, easy-to-miss redirects, and remote inclusions to serve scams and malware. This makes clean-up increasingly tricky. When we emailed webmasters, we included tips and samples of exactly which pages contained harmful content. This, combined with expedited notification, helped webmasters clean up 62% faster compared to no tips—usually within 3 days.
Making sure sites stay clean:
Once a site is no longer serving harmful content, it’s important to make sure attackers don’t reassert control. We monitored recently cleaned websites and found 12% were compromised again in 30 days. This illustrates the challenge involved in identifying the root cause of a breach versus dealing with the side-effects.
Making security issues less painful for webmasters—and everyone
We hope that webmasters never have to deal with a security incident. If you are a webmaster, there are some quick steps you can take to reduce your risk. We’ve made it easier to
receive security notifications through Google Analytics
as well as through
Search Console
. Make sure to register for both services. Also, we have laid out helpful tips for
updating your site’s software
and
adding additional authentication
that will make your site safer.
If you’re a hosting provider or building a service that needs to notify victims of compromise, understand that the entire process is distressing for users. Establish a reliable communication channel before a security incident occurs, make sure to provide victims with clear recovery steps, and promptly reply to inquiries so the process feels helpful, not punitive.
As we work to make the web a safer place, we think it’s critical to empower webmasters and users to make good security decisions. It’s easy for the security community to be pessimistic about incident response being ‘too complex’ for victims, but as our findings demonstrate, even just starting a dialogue can significantly expedite recovery.
Growing Eddystone with Ephemeral Identifiers: A Privacy Aware & Secure Open Beacon Format
April 14, 2016
Posted by
Nirdhar Khazanie, Product Manager and
Yossi Matias, VP Engineering
Last July, we
launched
Eddystone, an open and extensible Bluetooth Low Energy (BLE) beacon format from Google, supported by Android, iOS, and Chrome. Beacons mark important places and objects in a way that your phone can understand. To do this, they typically broadcast public one-way signals ‒ such as an Eddystone-UID or -URL.
Today, we're introducing Ephemeral IDs (EID), a beacon frame in the Eddystone format that gives developers more power to control who can make use of the beacon signal. Eddystone-EID enables a new set of use cases where it is important for users to be able to exchange information securely and privately. Since the beacon frame changes periodically, the signal is only useful to clients with access to a resolution service that maps the beacon’s current identifier to stable data. In other words, the signal is only recognizable to a controlled set of users. In this post we’ll provide a bit more detail about this feature, as well as Google’s implementation of
Eddystone-EID
with Google Cloud Platform’s
Proximity Beacon API
and the Nearby API for Android and CocoaPod for iOS.
Technical Specifications
To an observer of an Eddystone-EID beacon, the AES-encrypted eight-byte beacon identifier changes pseudo-randomly with an average period that is set by the developer ‒ over a range from 1 second to just over 9 hours. The identifier is generated using a key and timer running on the beacon. When the beacon is provisioned, or set up, the key is generated and exchanged with a resolution service such as Proximity Beacon API using an Elliptic Curve Diffie-Hellman key agreement
protocol
, and the timer is synchronized with the service. This way, only the beacon and the service that it is registered with have access to the key. You can read more about the technical details of Eddystone-EID from the
specification
‒ including the provisioning process ‒ on GitHub, or from our recent
preprint
.
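The rotation scheme described above can be sketched in a few lines. This is a simplified illustration only: the actual specification uses an AES-based construction with a derived temporary key, whereas this sketch substitutes HMAC-SHA256 to show the principle ‒ a shared identity key plus a quantized timer deterministically yields an eight-byte identifier that changes each rotation window. All names and values here are illustrative.

```python
import hashlib
import hmac
import struct

def ephemeral_id(identity_key: bytes, rotation_exp: int, timer: int) -> bytes:
    """Derive an 8-byte ephemeral ID from a shared key and the beacon timer.

    The timer is quantized to windows of 2**rotation_exp seconds (the spec's
    range of exponents covers periods from 1 second to just over 9 hours),
    so the broadcast ID is stable within a window and changes between them.
    """
    quantized = timer >> rotation_exp  # same value for the whole window
    msg = struct.pack(">BI", rotation_exp, quantized)
    return hmac.new(identity_key, msg, hashlib.sha256).digest()[:8]

# Placeholder identity key; in practice this comes from the ECDH exchange
# performed at provisioning time.
key = bytes(16)

eid_a = ephemeral_id(key, rotation_exp=10, timer=5000)  # ~17-minute window
eid_b = ephemeral_id(key, rotation_exp=10, timer=5100)  # same window
eid_c = ephemeral_id(key, rotation_exp=10, timer=6200)  # next window
```

Because only the beacon and the resolution service hold `identity_key`, an eavesdropper who records `eid_a` learns nothing that links it to `eid_c` once the window rolls over.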
An Eddystone-EID contains measures designed to prevent a variety of nuanced attacks. For example, the rotation period for a single beacon varies slightly from identifier to identifier, meaning that an attacker cannot use a consistent period to identify a particular beacon. Eddystone-EID also enables safety features such as proximity awareness, device authentication, and data encryption on packet transmission. The
Eddystone-TLM
frame has also been extended with a new version that broadcasts the battery level encrypted with the shared key, meaning that an attacker cannot use the battery level as an identifying feature either.
When correctly implemented and combined with a service that supports a range of access control checks, such as Proximity Beacon API, this pattern has several advantages:
The beacon’s location cannot be spoofed, except by a real-time relay of the beacon signal. This makes it ideal for use cases where a developer wishes to enable premium features for a user at a location.
Beacons provide a high-quality and precise location signal that is valuable to the deployer. Eddystone-EID enables deployers to decide which developers/businesses can make use of that signal.
Eddystone-EID beacons can be integrated into devices that users carry with them without leaving users vulnerable to tracking.
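The access-control pattern behind these advantages can be sketched as follows. A resolution service knows each registered beacon's key and synchronized clock, so it can precompute the identifier each beacon should currently be broadcasting; an observed signal resolves to stable data only for clients the deployer has authorized. This sketch is hypothetical ‒ the table, ACL, and function names are illustrative and do not reflect the actual Proximity Beacon API.

```python
import hmac

def resolve(observed_eid: bytes, current_eids: dict, acl: dict, client: str):
    """Map an observed ephemeral ID back to a stable beacon identity.

    current_eids: beacon_id -> the 8-byte EID expected this rotation window,
                  precomputed by the service from each beacon's key and timer.
    acl:          beacon_id -> set of client IDs permitted to resolve it.
    """
    for beacon_id, expected in current_eids.items():
        # Constant-time comparison of the expected and observed identifiers.
        if hmac.compare_digest(expected, observed_eid):
            # The signal is meaningless to clients outside the allowed set.
            return beacon_id if client in acl.get(beacon_id, set()) else None
    return None  # unknown broadcast: not one of our registered beacons

table = {"beacon-1": b"\x01" * 8, "beacon-2": b"\x02" * 8}
acl = {"beacon-1": {"app-A"}, "beacon-2": {"app-B"}}
```

With this shape, `resolve(b"\x01" * 8, table, acl, "app-A")` succeeds, while the same broadcast resolves to nothing for an unauthorized client ‒ which is what makes the signal safe to carry on personal devices.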
Integrating Seamlessly with the Google Beacon Platform
Launching today on
Android
and
iOS
is a new addition to the wider Google beacon platform: Beacon Tools. Beacon Tools allows you to provision and register an Eddystone-EID beacon, and to associate content with it through the Google Cloud Platform.
In addition to Eddystone-EID and the new encrypted version of the previously available Eddystone-TLM, we’re also adding a common configuration protocol to the Eddystone family. The
Eddystone GATT service
allows any Eddystone beacon to be provisioned by any tool that supports the protocol. This encourages the development of an open ecosystem of beacon products, both in hardware and software, removing restrictions for developers.
Eddystone-EID Support in the Beacon Industry
We’re excited to have worked with a variety of industry players as Eddystone-EID develops. Over the past year, Eddystone
manufacturers
in the beacon space have grown from 5 to over 25. The following 15 manufacturers will be supporting Eddystone-EID, with more to follow:
Accent Systems
Bluvision
Reco/Perples
Beacon Inside
Estimote
Sensoro
Blesh
Gimbal
Signal360
BlueBite
Nordic
Swirl
Bluecats
Radius Networks
Zebra
In addition to beacon manufacturers, we’ve been working with a range of innovative companies to demonstrate Eddystone-EID in a variety of different scenarios.
Samsonite
and
Accent Systems
have developed a suitcase with Eddystone-EID where users can securely keep track of their personal luggage.
K11
is a Hong Kong museum and retail experience using
Sensoro
Eddystone-EID beacons for visitor tours and customer promotions.
Monumental Sports
in Washington, DC, uses
Radius Networks
Eddystone-EID beacons for delivering customer rewards during Washington Wizards and Capitals sporting events.
Sparta Digital
has produced an app called Buzzin that uses Eddystone-EID beacons deployed in Manchester, UK to enable a more seamless transit experience.
You can get started with Eddystone-EID by creating a Google Cloud Platform project and purchasing compatible hardware through one of our
manufacturers
. Best of all, Eddystone-EID works transparently with beacon subscriptions created through the Google Play Services Nearby Messages API, allowing you to run combined networks of Eddystone-EID and Eddystone-UID beacons with no changes to your client code!
Improvements to Safe Browsing Alerts for Network Administrators
April 6, 2016
Posted by Nav Jagpal, Software Engineer
We
launched
Safe Browsing Alerts for Network Administrators over 5 years ago. Just as Safe Browsing warns users about dangerous sites, this service sends notifications to network administrators when our systems detect harmful URLs on their networks.
We’ve made good progress:
22k ASNs are being monitored, or roughly 40% of active networks
1300 network administrators are actively using the tool
250 reports are sent daily to these administrators
Today, to provide Network Admins with even more useful information for protecting their users, we’re adding URLs related to Unwanted Software, Malicious Software, and Social Engineering to the set of information we share.
Here’s the full set of data we share with network administrators:
Compromised
: Pages harming users through
drive-by-download
or exploits.
Distribution
: Domains that are responsible for launching exploits and serving malware. Unlike compromised sites, which are often run by innocent webmasters, distribution domains are typically set up with the primary purpose of serving malicious content.
Social Engineering
: Deceptive websites that trick users into performing unwanted actions such as downloading software or divulging private information. Social engineering includes phishing sites that trick users into revealing passwords.
Unwanted Software
: URLs which lead to software that violates our
Unwanted Software Policy
. This kind of software is often distributed through deceptive means such as social engineering, and has harmful software traits such as modifying users’ browsing experience in unexpected ways and performing unwanted ad injections. You can learn more about Unwanted Software, or UwS,
here
.
Malicious Software
: Traditional malware downloads, such as trojans and viruses.
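A network administrator consuming these reports might triage them by category, since the categories call for different responses: a compromised page usually needs an innocent webmaster's attention, while a distribution domain exists primarily to serve malicious content and warrants immediate blocking. The report rows and field layout below are purely illustrative, not the actual format of Safe Browsing Alerts.

```python
from collections import Counter

# Hypothetical (url, category) rows, mirroring the five categories above.
report = [
    ("http://example.com/blog", "compromised"),
    ("http://bad.example.net/kit", "distribution"),
    ("http://login.example.org/verify", "social_engineering"),
    ("http://dl.example.com/setup.exe", "unwanted_software"),
    ("http://dl.example.com/trojan.exe", "malicious_software"),
    ("http://example.com/old-page", "compromised"),
]

# Summarize the day's report by category.
by_category = Counter(category for _, category in report)

# Distribution domains are set up to serve malware, so block them first;
# compromised URLs instead go to the owners for cleanup.
block_first = [url for url, category in report if category == "distribution"]
notify_owners = [url for url, category in report if category == "compromised"]
```

Grouping this way lets an administrator block the small number of purpose-built malicious domains quickly while routing compromised-site URLs into a webmaster-notification workflow.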
Network administrators can use the data provided by our service to gain insights into the security and quality of their network. By working together, we can make it more challenging and expensive for attackers to profit from user harm.
If you’re a network administrator and haven’t yet registered your AS, you can do so
here
. If you are experiencing problems verifying ownership, please
contact us
.