Security Blog
The latest news and insights from Google on security and safety on the Internet
Out with unwanted ad injectors
March 31, 2015
Posted by Nav Jagpal, Software Engineer, Safe Browsing
It’s tough to read the New York Times under these circumstances: [screenshot of a news page cluttered with injected ads]
And it’s pretty unpleasant to shop for a Nexus 6 on a search results page that looks like this: [screenshot of a search results page with injected ads]
The browsers in the screenshots above have been infected with ‘ad injectors’. Ad injectors are programs that insert new ads, or replace existing ones, into the pages you visit while browsing the web. We’ve received more than 100,000 complaints from Chrome users about ad injection since the beginning of 2015—more than network errors, performance problems, or any other issue.
Injectors are yet another symptom of “unwanted software”—programs that are deceptive, difficult to remove, secretly bundled with other downloads, and have other bad qualities. We’ve made several recent announcements about our work to fight unwanted software via Safe Browsing, and now we’re sharing some updates on our efforts to protect you from injectors as well.
Unwanted ad injectors: disliked by users, advertisers, and publishers
Unwanted ad injectors aren’t part of a healthy ads ecosystem. They’re part of an environment where bad practices hurt users, advertisers, and publishers alike.
People don’t like ad injectors for several reasons: not only are they intrusive, but people are often tricked into installing them in the first place via deceptive advertising or software “bundles.” Ad injection can also be a security risk, as the recent “Superfish” incident showed.
But ad injectors are problematic for advertisers and publishers as well. Advertisers often don’t know their ads are being injected, which means they have no idea where their ads are running. Publishers, meanwhile, aren’t compensated for these ads, and more importantly, they may unknowingly be putting their visitors in harm’s way via spam or malware in the injected ads.
How Google fights unwanted ad injectors
We have a variety of policies that either limit, or entirely prohibit, ad injectors.
In Chrome, any extension hosted in the Chrome Web Store must comply with the Developer Program Policies. These require that extensions have a narrow and easy-to-understand purpose. We don’t ban injectors altogether—if they want to, people can still choose to install injectors that clearly disclose what they do—but injectors that sneak ads into a user’s browser would certainly violate our policies. We show people familiar red warnings when they are about to download software that is deceptive, or doesn’t use the right APIs to interact with browsers.
On the ads side, AdWords advertisers with software downloads hosted on their site, or linked to from their site, must comply with our Unwanted Software Policy. Additionally, both the Google Platforms program policies and the DoubleClick Ad Exchange (AdX) Seller Program Guidelines prohibit programs that overlay ad space on a given site without the permission of the site owner.
To increase awareness about ad injectors and the scale of this issue, we’ll be releasing new research on May 1 that examines the ad injector ecosystem in depth. The study, conducted with researchers at the University of California, Berkeley, drew conclusions from more than 100 million pageviews of Google sites across Chrome, Firefox, and Internet Explorer on various operating systems, globally. It’s not a pretty picture. Here’s a sample of the findings:
Ad injectors were detected on every operating system (Mac and Windows) and web browser (Chrome, Firefox, IE) included in our test.
More than 5% of people visiting Google sites have at least one ad injector installed. Within that group, half have at least two injectors installed and nearly one-third have at least four installed.
Thirty-four percent of Chrome extensions injecting ads were classified as outright malware.
Researchers found 192 deceptive Chrome extensions that affected 14 million users; these have since been disabled. Google now incorporates the techniques researchers used to catch these extensions to scan all new and updated extensions.
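As a rough illustration of how injected content can be spotted on the client side, the sketch below flags page resources whose domains fall outside an allowlist of expected hosts. The domains and URLs here are hypothetical, and the study’s actual instrumentation was considerably more sophisticated:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains a Google page is expected to load from.
EXPECTED_DOMAINS = {"google.com", "gstatic.com", "googleapis.com"}

def find_injected_scripts(script_urls, expected=EXPECTED_DOMAINS):
    """Return script URLs whose host is not covered by the page's allowlist."""
    suspicious = []
    for url in script_urls:
        host = urlparse(url).hostname or ""
        # Accept exact matches and subdomains (e.g. www.gstatic.com).
        if not any(host == d or host.endswith("." + d) for d in expected):
            suspicious.append(url)
    return suspicious

scripts = [
    "https://www.gstatic.com/og/_/js/gapi.js",
    "https://adinjector-cdn.example/inject.js",  # added by unwanted software
]
print(find_injected_scripts(scripts))  # only the unexpected domain is flagged
```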
We’re constantly working to improve our product policies to protect people online. We encourage others to do the same. We’re committed to continuing to improve this experience for Google and the web as a whole.
Even more unwanted software protection via the Safe Browsing API
March 24, 2015
Posted by Emily Schechter, Safe Browsing Program Manager
Deceptive software disguised as a useful download harms your web experience by making undesired changes to your computer. Safe Browsing offers protection from such unwanted software by showing a warning in Chrome before you download these programs. In February we started showing additional warnings in Chrome before you visit a site that encourages downloads of unwanted software.
Today, we’re adding information about unwanted software to our Safe Browsing API.
In addition to our constantly-updated malware and phishing data, our unwanted software data is now publicly available for developers to integrate into their own security measures. For example, any app that wants to save its users from winding up on sites that lead to deceptive software could use our API to do precisely that.
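For instance, a client of the Lookup variant of the API could build a request like the following. This is a sketch based on the v3.1 Lookup API as documented at the time; the endpoint, parameter names, and `API_KEY` placeholder should be checked against the current documentation:

```python
from urllib.parse import urlencode

# Endpoint and parameters per the Safe Browsing Lookup API (v3.1) docs of
# the era; API_KEY is a placeholder for your own developer key.
LOOKUP_ENDPOINT = "https://sb-ssl.google.com/safebrowsing/api/lookup"
API_KEY = "YOUR_API_KEY"

def build_lookup_url(url_to_check, client="demo-app", appver="1.0", pver="3.1"):
    """Build a Safe Browsing Lookup API request URL for a single URL."""
    params = urlencode({
        "client": client,
        "key": API_KEY,
        "appver": appver,
        "pver": pver,
        "url": url_to_check,
    })
    return f"{LOOKUP_ENDPOINT}?{params}"

# A 200 response body names the matched lists (e.g. "malware", "phishing",
# and, with this update, unwanted software); 204 means the URL is not listed.
print(build_lookup_url("http://example.com/some-download"))
```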
We continue to integrate Safe Browsing technology across Google—in Chrome, Google Analytics, and more—to protect users. Our Safe Browsing API helps extend our malware, phishing, and unwanted software protection to keep more than 1.1 billion users safe online.
Check out our updated API documentation here.
Maintaining digital certificate security
March 23, 2015
Posted by Adam Langley, Security Engineer
On Friday, March 20th, we became aware of unauthorized digital certificates for several Google domains. The certificates were issued by an intermediate certificate authority apparently held by a company called MCS Holdings. This intermediate certificate was issued by CNNIC.
CNNIC is included in all major root stores and so the misissued certificates would be trusted by almost all browsers and operating systems. Chrome on Windows, OS X, and Linux, ChromeOS, and Firefox 33 and greater would have rejected these certificates because of public-key pinning, although misissued certificates for other sites likely exist.
We promptly alerted CNNIC and other major browsers about the incident, and we blocked the MCS Holdings certificate in Chrome with a CRLSet push. CNNIC responded on the 22nd to explain that they had contracted with MCS Holdings on the basis that MCS would only issue certificates for domains that they had registered. However, rather than keep the private key in a suitable HSM, MCS installed it in a man-in-the-middle proxy. These devices intercept secure connections by masquerading as the intended destination and are sometimes used by companies to intercept their employees’ secure traffic for monitoring or legal reasons. The employees’ computers normally have to be configured to trust a proxy for it to be able to do this. However, in this case, the presumed proxy was given the full authority of a public CA, which is a serious breach of the CA system. This situation is similar to a failure by ANSSI in 2013.
This explanation is congruent with the facts. However, CNNIC still delegated their substantial authority to an organization that was not fit to hold it.
Chrome users do not need to take any action to be protected by the CRLSet updates. We have no indication of abuse and we are not suggesting that people change passwords or take other action. At this time we are considering what further actions are appropriate.
This event also highlights, again, that the Certificate Transparency effort is critical for protecting the security of certificates in the future.
(Details of the certificate chain for software vendors can be found here.)
Update - April 1: As a result of a joint investigation of the events surrounding this incident by Google and CNNIC, we have decided that the CNNIC Root and EV CAs will no longer be recognized in Google products. This will take effect in a future Chrome update. To assist customers affected by this decision, for a limited time we will allow CNNIC’s existing certificates to continue to be marked as trusted in Chrome, through the use of a publicly disclosed whitelist. While neither we nor CNNIC believe any further unauthorized digital certificates have been issued, nor do we believe the misissued certificates were used outside the limited scope of MCS Holdings’ test network, CNNIC will be working to prevent any future incidents. CNNIC will implement Certificate Transparency for all of their certificates prior to any request for reinclusion. We applaud CNNIC on their proactive steps, and welcome them to reapply once suitable technical and procedural controls are in place.
Safe Browsing and Google Analytics: Keeping More Users Safe, Together
February 26, 2015
Posted by Stephan Somogyi, Product Manager, Security and Privacy
[Cross-posted on the Google Analytics Blog]
If you run a web site, you may already be familiar with Google Webmaster Tools and how it lets you know if Safe Browsing finds something problematic on your site. For example, we’ll notify you if your site is delivering malware, which is usually a sign that it’s been hacked. We’re extending our Safe Browsing protections to automatically display notifications to all Google Analytics users via familiar Google Analytics Notifications.
Google Safe Browsing has been protecting people across the Internet for over eight years and we're always looking for ways to extend that protection even further. Notifications like these help webmasters like you act quickly to respond to any issues. Fast response helps keep your site—and your visitors—safe.
Pwnium V: the never-ending* Pwnium
February 24, 2015
Posted by Tim Willis, Hacker Philanthropist, Chrome Security Team
[Cross-posted from the Chromium Blog]
Around this time each year we announce the rules, details and maximum cash amounts we’re putting up for our Pwnium competition. For the last few years we put a huge pile of cash on the table (last year it was e million) and gave researchers one day during CanSecWest to present their exploits. We’ve received some great entries over the years, but it’s time for something bigger.
Starting today, Pwnium will change its scope significantly, from a single-day competition held once a year at a security conference to a year round, worldwide opportunity for security researchers.
For those who are interested in what this means for the Pwnium rewards pool, we crunched the numbers and the results are in: it now goes all the way up to $∞ million*.
We’re making this change for a few reasons:
Removing barriers to entry:
At Pwnium competitions, a security researcher would need to have a bug chain in March, pre-register, have a physical presence at the competition location and hopefully get a good timeslot. Under the new scheme, security researchers can submit their bugs year-round through the Chrome Vulnerability Reward Program (VRP) whenever they find them.
Removing the incentive for bug hoarding:
If a security researcher were to discover a Pwnium-quality bug chain today, they would most likely wait until the contest to report it and collect a cash reward. This is a bad scenario for all parties. It’s bad for us because the bug doesn’t get fixed immediately and our users are left at risk. It’s bad for them because they run the real risk of a bug collision. By allowing security researchers to submit bugs year-round, collisions are significantly less likely and security researchers aren’t duplicating their efforts on the same bugs.
Our researchers want this:
On top of all of these reasons, we asked our handful of participants if they wanted an option to report all year. They did, so we’re delivering.
Logistically, we’ll be adding Pwnium-style bug chains on Chrome OS to the Chrome VRP. This will increase our top reward to $50,000, which will be on offer all year-round. Check out our FAQ for more information.
Happy hunting!
*Our lawyercats wouldn’t let me say “never-ending” or “infinity million” without adding that “this is an experimental and discretionary rewards program and Google may cancel or modify the program at any time.” Check out the reward eligibility requirements on the Chrome VRP page.
More Protection from Unwanted Software
February 23, 2015
Posted by Lucas Ballard, Software Engineer
Safe Browsing helps keep you safe online and includes protection against unwanted software that makes undesirable changes to your computer or interferes with your online experience.
We recently expanded our efforts in Chrome, Search, and ads to keep you even safer from sites where these nefarious downloads are available.
Chrome: Now, in addition to showing warnings before you download unwanted software, Chrome will show you a new warning, like the one below, before you visit a site that encourages downloads of unwanted software.
Search
: Google Search now incorporates signals that identify such deceptive sites. This change reduces the chances you’ll visit these sites via our search results.
Ads: We recently began to disable Google ads that lead to sites with unwanted software.
If you’re a site owner, we recommend that you register your site with Google Webmaster Tools. This will help you stay informed when we find something on your site that leads people to download unwanted software, and will provide you with helpful tips to resolve such issues.
We’re constantly working to keep people safe across the web.
Read more about Safe Browsing technology and our work to protect users here.
Using Google Cloud Platform for Security Scanning
February 19, 2015
Posted by Rob Mann, Security Engineering Manager
[Cross-posted from the Google Cloud Platform Blog]
Deploying a new build is a thrill, but every release should be scanned for security vulnerabilities. And while web application security scanners have existed for years, they’re not always well-suited for Google App Engine developers. They’re often difficult to set up, prone to over-reporting issues (false positives)—which can be time-consuming to filter and triage—and built for security professionals, not developers.
Today, we’re releasing Google Cloud Security Scanner in beta. If you’re using App Engine, you can easily scan your application for two very common vulnerabilities: cross-site scripting (XSS) and mixed content.
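Mixed content, the simpler of the two checks, means an HTTPS page pulling a subresource over plain HTTP. The toy detector below scans static HTML for such references; it is only a sketch and nothing like the scanner's full-browser pipeline:

```python
from html.parser import HTMLParser

class MixedContentFinder(HTMLParser):
    """Flag subresources loaded over plain HTTP on a page served via HTTPS."""
    RESOURCE_ATTRS = {("img", "src"), ("script", "src"),
                      ("link", "href"), ("iframe", "src")}

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if (tag, name) in self.RESOURCE_ATTRS and value and \
                    value.startswith("http://"):
                self.findings.append((tag, value))

page = """
<html><body>
  <script src="https://cdn.example.com/app.js"></script>
  <img src="http://cdn.example.com/logo.png">
</body></html>
"""
finder = MixedContentFinder()
finder.feed(page)
print(finder.findings)  # [('img', 'http://cdn.example.com/logo.png')]
```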
While designing Cloud Security Scanner we had three goals:
Make the tool easy to set up and use
Detect the most common issues App Engine developers face with minimal false positives
Support scanning rich, JavaScript-heavy web applications
To try it for yourself, select Compute > App Engine > Security scans in the Google Developers Console to run your first scan, or learn more here.
So How Does It Work?
Crawling and testing modern HTML5, JavaScript-heavy applications with rich multi-step user interfaces is considerably more challenging than scanning a basic HTML page. There are two general approaches to this problem:
Parse the HTML and emulate a browser. This is fast, however, it comes at the cost of missing site actions that require a full DOM or complex JavaScript operations.
Use a real browser. This approach avoids the parser coverage gap and most closely simulates the site experience. However, it can be slow due to event firing, dynamic execution, and time needed for the DOM to settle.
Cloud Security Scanner addresses the weaknesses of both approaches by using a multi-stage pipeline. First, the scanner makes a high-speed pass, crawling and parsing the HTML. It then executes a slow and thorough full-page render to find the more complex sections of your site.
While faster than a real browser crawl, this process is still too slow. So we scale horizontally. Using Google Compute Engine, we dynamically create a botnet of hundreds of virtual Chrome workers to scan your site. Don’t worry, each scan is limited to 20 requests per second or lower.
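That per-scan request cap can be pictured as a token bucket: each request spends a token, and tokens refill at the target rate. A simplified sketch, not the scanner’s actual throttling code:

```python
import time

class TokenBucket:
    """Cap a scanner worker's request rate (e.g. 20 requests per second)."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        """Spend one token if available; otherwise the caller must wait."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=20, capacity=20)
# In a tight loop, roughly the initial 20-token burst is allowed; the rest
# of the requests are refused until tokens refill.
allowed = sum(bucket.try_acquire() for _ in range(100))
print(allowed)
```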
Then we attack your site (again, don’t worry)! When testing for XSS, we use a completely benign payload that relies on Chrome DevTools to execute the debugger. Once the debugger fires, we know we have JavaScript code execution, so false positives are (almost) non-existent. While this approach comes at the cost of missing some bugs due to application specifics, we think that most developers will appreciate a low effort, low noise experience when checking for security issues—we know Google developers do!
As with all dynamic vulnerability scanners, a clean scan does not necessarily mean you’re security bug free. We still recommend a manual security review by your friendly web app security professional.
Ready to get started? Learn more here. Cloud Security Scanner is currently in beta with many more features to come, and we’d love to hear your feedback. Simply click the “Feedback” button directly within the tool.
Feedback and data-driven updates to Google’s disclosure policy
February 13, 2015
Posted by Chris Evans and Ben Hawkes, Project Zero; Heather Adkins, Matt Moore and Michal Zalewski, Google Security; Gerhard Eschelbeck, Vice President, Google Security
[Cross-posted from the Project Zero blog]
Disclosure deadlines have long been an industry standard practice. They improve end-user security by getting security patches to users faster. As noted in CERT’s 45-day disclosure policy, they also “balance the need of the public to be informed of security vulnerabilities with vendors' need for time to respond effectively”. Yahoo!’s 90-day policy notes that “Time is of the essence when we discover these types of issues: the more quickly we address the risks, the less harm an attack can cause”. ZDI’s 120-day policy notes that releasing vulnerability details can “enable the defensive community to protect the user”.
Deadlines also acknowledge an uncomfortable fact that is alluded to by some of the above policies: the offensive security community invests considerably more into vulnerability research than the defensive community. Therefore, when we find a vulnerability in a high profile target, it is often already known by advanced and stealthy actors.
Project Zero has adhered to a 90-day disclosure deadline. Now we are applying this approach for the rest of Google as well. We notify vendors of vulnerabilities immediately, with details shared in public with the defensive community after 90 days, or sooner if the vendor releases a fix. We’ve chosen a middle-of-the-road deadline timeline and feel it’s reasonably calibrated for the current state of the industry.
To see how things are going, we crunched some data on Project Zero’s disclosures to date. For example, the Adobe Flash team probably has the largest install base and number of build combinations of any of the products we’ve researched so far. To date, they have fixed 37 Project Zero vulnerabilities (or 100%) within the 90-day deadline. More generally, of 154 Project Zero bugs fixed so far, 85% were fixed within 90 days. Restrict this to the 73 issues filed and fixed after Oct 1st, 2014, and 95% were fixed within 90 days. Furthermore, recent well-discussed deadline misses were typically fixed very quickly after 90 days. Looking ahead, we’re not going to have any deadline misses for at least the rest of February.
Deadlines appear to be working to improve patch times and end user security—especially when enforced consistently.
We’ve studied the above data and taken on board some great debate and external feedback around some of the corner cases for disclosure deadlines. We have improved the policy in the following ways:
Weekends and holidays.
If a deadline is due to expire on a weekend or US public holiday, the deadline will be moved to the next normal work day.
Grace period.
We now have a 14-day grace period. If a 90-day deadline will expire but a vendor lets us know before the deadline that a patch is scheduled for release on a specific day within 14 days following the deadline, the public disclosure will be delayed until the availability of the patch. Public disclosure of an unpatched issue now only occurs if a deadline will be significantly missed (2 weeks+).
Assignment of CVEs.
CVEs are an industry standard for uniquely identifying vulnerabilities. To avoid confusion, it’s important that the first public mention of a vulnerability should include a CVE. For vulnerabilities that go past deadline, we’ll ensure that a CVE has been pre-assigned.
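Taken together, the weekend/holiday shift and the grace period can be sketched as a small date calculation. The holiday set below is illustrative only, and the real policy is of course applied with human judgment:

```python
from datetime import date, timedelta

# Illustrative US public holidays; a real implementation would use a
# complete holiday calendar.
US_HOLIDAYS = {date(2015, 5, 25), date(2015, 7, 3)}

def next_workday(d: date) -> date:
    """Shift a date falling on a weekend or US holiday to the next work day."""
    while d.weekday() >= 5 or d in US_HOLIDAYS:
        d += timedelta(days=1)
    return d

def disclosure_date(reported: date, patch_eta: date = None) -> date:
    """90-day deadline, weekend/holiday-adjusted, with a 14-day grace period."""
    deadline = next_workday(reported + timedelta(days=90))
    if patch_eta and deadline < patch_eta <= deadline + timedelta(days=14):
        # Vendor committed to a patch within the grace period: wait for it.
        return patch_eta
    return deadline

reported = date(2015, 2, 13)
print(disclosure_date(reported))  # 90 days later, shifted off weekends/holidays
```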
As always, we reserve the right to bring deadlines forwards or backwards based on extreme circumstances. We remain committed to treating all vendors strictly equally. Google expects to be held to the same standard; in fact, Project Zero has bugs in the pipeline for Google products (Chrome and Android) and these are subject to the same deadline policy.
Putting everything together, we believe the policy updates are still strongly in line with our desire to improve industry response times to security bugs, but will result in softer landings for bugs marginally over deadline. Finally, we’d like to call on all researchers to adopt disclosure deadlines in some form, and feel free to use our policy verbatim if you find our data and reasoning compelling. We’re excited by the early results that disclosure deadlines are delivering—and with the help of the broader community, we can achieve even more.
Security Reward Programs: Year in Review, Year in Preview
January 30, 2015
Posted by Eduardo Vela Nava, Security Engineer
Since 2010, our Security Reward Programs have been a cornerstone of our relationship with the security research community. These programs have been successful because of two core beliefs:
Security researchers should be rewarded for helping us protect Google's users.
Researchers help us understand how to make Google safer by discovering, disclosing, and helping fix vulnerabilities at a scale that’s difficult to replicate by any other means.
We’re grateful for the terrific work these researchers do to help keep users safe. And so, we wanted to take a look back at 2014 to celebrate their contributions to Google, and in turn, our contributions back to them.
Looking back on 2014
Our Security Reward Programs continue to grow at a rapid clip. We’ve now paid more than $4,000,000 in rewards to security researchers since 2010 across all of our reward programs, and we’re looking forward to more great years to come.
In 2014:
We paid researchers more than $1,500,000.
Our largest single reward was $150,000. The researcher then joined us for an internship.
We rewarded more than 200 different researchers.
We rewarded more than 500 bugs. For Chrome, more than half of all rewarded reports for 2014 were in developer and beta versions. We were able to squash bugs before they could reach our main user population.
The top three contributors to the VRP program in 2014 during a recent visit to Google Zurich: Adrian (Romania), Tomasz (Poland / UK), Nikolai (Ukraine)
What’s new for 2015
We are announcing two additions to our programs today.
First, researchers' efforts through these programs, combined with our own internal security work, make it increasingly difficult to find bugs. Of course, that's good news, but it can also be discouraging when researchers invest their time and struggle to find issues. With this in mind, today we're rolling out a new, experimental program: Vulnerability Research Grants. These are up-front awards that we will provide to researchers before they ever submit a bug.
Here’s how the program works:
We'll publish different types of vulnerabilities, products and services for which we want to support research beyond our normal vulnerability rewards.
We'll award grants immediately before research begins, with no strings attached. Researchers then pursue the research they applied for, as usual.
There will be various tiers of grants, with a maximum of $3,133.70 USD.
On top of the grant, researchers are still eligible for regular rewards for the bugs they discover.
To learn more about the current grants, and review your eligibility, have a look at our rules page.
Second, also starting today, all mobile applications officially developed by Google on Google Play and iTunes will now be within the scope of the Vulnerability Reward Program.
We’re looking forward to continuing our close partnership with the security community and rewarding them for their time and efforts in 2015!
An Update to End-To-End
December 16, 2014
Posted by Stephan Somogyi, Product Manager, Security and Privacy
In June, we announced and launched End-To-End, a tool for those who need even more security for their communications than what we already provide. Today, we’re launching an updated version of our extension — still in alpha — that includes a number of changes:
We’re migrating End-To-End to GitHub. We’ve always believed strongly that End-To-End must be an open source project, and we think that using GitHub will allow us to work together even better with the community.
We’ve included several contributions from Yahoo Inc. Alex Stamos, Yahoo’s Chief Security Officer, announced at BlackHat 2014 in August that his team would be participating in our End-To-End project; we’re very happy to release the first fruits of this collaboration.
We’ve added more documentation. The project wiki now contains additional information about End-To-End, both for developers and for security researchers interested in better understanding how we think about End-To-End’s security model.
We’re very thankful to all those who submitted bugs against the first alpha release. Two of those bugs earned a financial reward through our Vulnerability Rewards Program. One area where we didn’t receive many bug reports was End-To-End’s new crypto library. On the contrary: we heard from several other projects that want to use our library, and we’re looking forward to working with them.
One thing hasn’t changed for this release: we aren’t yet making End-To-End available in the Chrome Web Store. We don’t feel it’s as usable as it needs to be. Indeed, those looking through the source code will see references to our key server, and it should come as no surprise that we’re working on one. Key distribution and management is one of the hardest usability problems with cryptography-related products, and we won’t release End-To-End in non-alpha form until we have a solution we’re content with.
We’re excited to continue working on these challenging and rewarding problems, and we look forward to delivering a more fully fledged End-To-End next year.