Device Integrity Attestation through the Browser #8

Open
philippp opened this issue Apr 13, 2022 · 50 comments

@philippp
Contributor

Chrome proposes developing a high-level document to capture use-cases and requirements for device attestation and other high-fidelity, low-entropy signals. This is a call for collaboration among interested members of the anti-fraud community group to identify important signals for Invalid Traffic (IVT) detection and other relevant security use cases.

Many modern platforms have built-in tools to help differentiate legitimate and emulated devices. Android provides applications with the SafetyNet API, and Apple's App Attest offers similar protections.
By transmitting signals of legitimacy from the device's platform, such as whether the device is emulated or rooted, publishers and their technology partners could use this information to help determine whether traffic is invalid. They could then choose appropriate actions, like flagging advertising actions as suspicious or requiring more information for sensitive actions like logins or financial transactions. These signals could be open for all websites to consume and could additionally facilitate a variety of other security use cases in a privacy-compliant manner.

We would like to forward the most useful integrity signals on each platform and provide a unified representation to websites and applications.

There are many open questions in this area that we’d like to explore:

  1. Would a platform signal attesting to the device’s legitimacy be a useful addition?
  2. What integrity signals would be most useful? (For example, device booted from manufacturer-signed firmware, browser runtime integrity checks, etc.)
  3. Would an ideal implementation reduce (not eliminate) the need for fingerprinting?
  4. What remaining needs would require fingerprinting (for example, enforcing uniqueness / sign-up protections; cookie theft prevention)?
  5. What other signals derived from common fingerprinting surfaces could browsers surface in a privacy-safe manner? For example, geo, time since last state clear, etc.
  6. What about longitudinal signals? Should the browser play a role here at all?
  7. How do we experiment with new signals or a changing threat landscape?
  8. What would be useful on platforms that do not have a comprehensive attestation framework?

Potential challenges

  1. How do we maintain equitable access to the web for users with older devices or platforms, which may not provide this signal?
  2. Should we introduce some noise, or hold back the signal on some fraction of devices, to prevent over-reliance on these signals?
  3. Will threat actors shift to using valid devices that provide these signals, and will the additional cost of attacks only temporarily reduce fraud? How quickly might these signals become stale?

We’d like to start an effort to explore this approach, starting with requirements gathering, in the Anti-Fraud Community Group, and would welcome collaboration.

Related work:

@michaelficarra
Member

Response on behalf of F5. F5 provides anti-automation and anti-fraud services, among other products.

This proposal would be valuable, but we have concerns about its feasibility. What process are you envisioning for the web service to validate the attestation? If the attestation is only made to the browser, web services would still have to account for adversaries running modified browsers. This solution also depends on a full chain of trust from hardware to browser (or web service), which is sadly fragile. In addition, lacking this platform integrity is not itself an indicator of fraud or automation, so additional signals must be employed regardless. Overall, the proposal as-is appears more useful for fraud use cases than automation use cases. It may still have minor value in pushing adversaries from simple automation tooling to custom browser builds, though that would likely be short-lived.

As a related alternative to this proposal, we would like to propose instead that we provide hardware-attested uniqueness, attested to the web service, not just the browser. This should be done in a way that cannot be forged or replayed and in a way that can be validated efficiently by the web service. The attestation should include a unique and persistent identifier, a proof of timeliness, and a manufacturer. The unique identifier should be mixed with the origin/site in a way that prevents the feature from being abused for cross-site tracking. This kind of scheme would give anti-automation service providers like us the ability to impose costs on scaling interaction with web services. Another benefit of this kind of proposal is that it can likely be done without trusting every intermediate layer of the stack.
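
A minimal sketch of the origin mixing described above, assuming a hypothetical attester that holds the raw hardware identifier and only ever releases a per-site derivation of it (all names are illustrative, not a concrete API proposal):

```python
# Sketch: derive a per-origin identifier from a hardware-unique secret, so a
# device is recognizable to one site over time but cannot be correlated
# across sites. Assumes the derivation runs inside the attester and the raw
# identifier never reaches the web service.
import hmac
import hashlib

def per_origin_id(hardware_secret: bytes, origin: str) -> str:
    # HMAC keyed by the device secret: stable per origin, unlinkable across
    # origins as long as the secret itself never leaves the attester.
    return hmac.new(hardware_secret, origin.encode(), hashlib.sha256).hexdigest()

device_secret = b"\x01" * 32  # stand-in for a per-unit hardware secret
a = per_origin_id(device_secret, "https://a.example")
b = per_origin_id(device_secret, "https://b.example")
assert a != b  # the two sites see unrelated identifiers
```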

What integrity signals would be most useful?

We would find it valuable to know that a browser is unmodified and not under the control of automation (such as through the WebDriver or Chrome DevTools protocols).

Would an ideal implementation reduce (not eliminate) the need for fingerprinting?

If we have the above signal, we may be able to reduce fingerprinting, assuming the signal is observed over time to be negatively associated with automated traffic (as in, our adversaries are unable to defeat this integrity check). If we have the hardware-attested uniqueness we proposed, it would allow us to reduce or possibly eliminate fingerprinting.

What about longitudinal signals?

What does this mean?

What would be useful on platforms that do not have a comprehensive attestation framework?

We will continue to use the techniques we use today on these platforms. Browser vendors could implement further privacy improvements only on platforms where more secure alternatives like what is proposed here exist so that web services that depend on anti-fraud solutions like ours can continue to operate. We are optimistic that TPMs will be both sufficient to implement our proposed hardware-attested uniqueness and considered widely available enough to gate access to sensitive web services.

@SpaceGnome

+1 for browser runtime integrity checks, and for being able to determine that we are interacting with an actual, untampered version of the claimed browser.

@philippp
Contributor Author

philippp commented Apr 22, 2022

Thank you for the feedback!

What process are you envisioning for the web service to validate the attestation?

This idea sprang from Apple’s App Attestation Framework and the Play Integrity API, which allow services’ native applications to validate the presence of a real device. In both cases, the developer-owned service chooses a nonce and issues a challenge (containing this nonce) to the native client application. The client application forwards the challenge to the device, which forwards it to Apple / Google servers and returns a signed response. The service then verifies the signed response that includes the original nonce. Since the browser is just a native application, we are tempted to explore whether this challenge-response pattern could be extended to allow web services to challenge the hardware that the browser is running on.
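
As a rough sketch of that challenge-response shape (the statement format, key handling, and function names below are invented for illustration; neither vendor's wire format actually looks like this):

```python
# Sketch of the nonce-based challenge-response pattern described above: the
# service mints a nonce, the platform signs an attestation statement that
# embeds it, and the service verifies the signature and nonce freshness.
import os
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issued_nonces: set[bytes] = set()

def issue_challenge() -> bytes:
    nonce = os.urandom(32)
    issued_nonces.add(nonce)
    return nonce

def verify_attestation(statement: bytes, signature: bytes, platform_pubkey) -> bool:
    try:
        platform_pubkey.verify(signature, statement)  # forged statements fail here
    except InvalidSignature:
        return False
    claims = json.loads(statement)
    nonce = bytes.fromhex(claims["nonce"])
    if nonce not in issued_nonces:                    # stale or replayed response
        return False
    issued_nonces.discard(nonce)                      # nonces are single-use
    return claims.get("device_integrity") == "ok"

# Demo: a stand-in key plays the role of the Apple/Google platform signer.
platform_key = Ed25519PrivateKey.generate()
nonce = issue_challenge()
statement = json.dumps({"nonce": nonce.hex(), "device_integrity": "ok"}).encode()
assert verify_attestation(statement, platform_key.sign(statement), platform_key.public_key())
```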

web services would still have to account for adversaries running modified browsers.

Agreed - this is a piece of the puzzle, and we may need to attest both the device and the browser: if we only attest the browser, we can be fooled by virtual devices; if we only attest the device, we can be fooled by modified browsers.

we would like to propose instead that we provide hardware-attested uniqueness, attested to the web service, not just the browser.

We’ve also been thinking about the role of such a “proof of uniqueness,” and how it could be provided across platforms. Until now, we have been thinking about “proof of integrity” and “proof of uniqueness” as complementary capabilities that may be exercised together but potentially provided independently (depending on the abilities of the platform in question). Our hope is that a framework of narrowly defined capabilities that could be composed would be more realistic to implement than a single “goodness” check that attempts to do it all - device integrity being one such capability.

What does [longitudinal signals] mean?

Device attestation is a point-in-time check and may not reveal that a device has historically failed integrity tests, for example. There may be value in being able to attest to a “history of integrity,” if such could be established with confidence.

@npdoty

npdoty commented Apr 29, 2022

I would be interested to know more about what kind or class of modifications to a browser are the kind that need to be attested about. Distinctive to the Web platform is that users should be able to control their browser (user agent), and it is encouraged that users can configure their browser or add extensions to modify it in various ways. If it's commonplace for users to install browser extensions, it seems infeasible to punish or un-attest any modification of a browser; and if modifications were un-attested, that would have harmful downstream effects on the freedom and creativity of the web.

@philippp
Contributor Author

philippp commented May 2, 2022

Thanks Nick, agreed that we need to work through the compatibility, accessibility, and defensibility challenges inherent in a "human-ness" test, as it is one of the trickier "capabilities" in this regard. I am hoping we can start by aligning on an enumeration of capabilities (potentially inclusive of a human-ness check), evaluate their challenges and constraints, and then evaluate potential solutions.

@npdoty

npdoty commented May 2, 2022

Yeah, maybe a list of capabilities is the next section to add on to the use cases document, (although of course we should also consider alternative ways to address use cases, and which cases cannot be completely addressed).

At first glance, it looks very challenging to classify which browser extensions would make some online actions not "human" any more and which would be acceptable. And it seems easy to see ways users will be harmed if we try to maintain approved/disapproved lists.

The prior art (like the SafetyNet API) doesn't seem to make any attempt at the human-ness attestation, but it also seems challenging to see how the attestations that are made would apply to a platform with more generally user-controlled software. Are there any examples of acceptable/unacceptable browser extensions that would be analogous to the attestations you're looking for?

@chris-wood

chris-wood commented May 3, 2022

First, I'd like to say that the IETF is standardizing these types of signals in the Privacy Pass working group. The architecture document describes the basic structure of the protocol and interactions between various participants. So if we are to do something here, I would strongly recommend building upon this underlying and existing work. No need to reinvent the wheel.

That said, I want to zero in on the question of longitudinal signals, as I am not sure this could be implemented in a privacy-preserving way. In particular, as I understand the proposal, the longitudinal signal would be presented to the origin, i.e., the party that consumes these signals and acts accordingly. Anything beyond a point-in-time signal carries more information (more entropy) than a single bit, and therefore contributes to the fingerprinting surface of the client. Bounded or not, that information will likely be abused for the purposes of partitioning clients into smaller anonymity sets.

@philippp, would you mind clarifying your thinking behind this type of signal?
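
Returning to the Privacy Pass reference above: to make the unlinkability property concrete, here is a minimal Chaum-style RSA blinding sketch. This is illustrative only; the working-group protocols use standardized blind-RSA and VOPRF constructions rather than this textbook form.

```python
# Sketch: the issuer (attester) signs a token without seeing it, so the
# issuance event cannot be linked to the later redemption at a website.
import secrets
import hashlib
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = key.public_key().public_numbers()
n, e = pub.n, pub.e
d = key.private_numbers().d

# Client: hash a fresh random token, then blind it with a random factor r.
token = secrets.token_bytes(32)
m = int.from_bytes(hashlib.sha256(token).digest(), "big")
r = secrets.randbelow(n - 2) + 2   # coprime to n with overwhelming probability
blinded = (m * pow(r, e, n)) % n   # all the issuer ever sees

# Issuer: checks device integrity out of band, then signs the blinded value.
blind_sig = pow(blinded, d, n)

# Client: unblind. The result is a valid signature on m that the issuer
# never observed, so issuance and redemption are unlinkable.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m         # any verifier with (n, e) can check this
```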

@philippp
Contributor Author

philippp commented May 3, 2022

Thanks for the feedback, Nick and Chris. It may be helpful to disambiguate three concepts:

Capabilities: Capabilities are an abstract description of the abilities that defensive teams need to solve their key use cases. The enumeration of capabilities should be agnostic to implementation - it should be a list of reported needs, and we can argue about their complexities and legitimacy following this enumeration.

Sources of truth: Once we have identified the sought capabilities and their criteria / constraints, we can evaluate what sources of truth may deliver on these capabilities. Today these may be inferences from high-entropy client signals; in the future, we may derive truthful inferences from other systems (including, but not limited to, hardware attestation, decentralized trust systems, and other innovations). I imagine the design and critique of these sources will be most challenging, and I hope that we can advance these discussions by creating parallel “swim lanes” from an initial enumeration of capabilities.

Relay mechanisms: Chris, as you helpfully pointed out, there is prior art in how we relay signals across trust boundaries. We do not want to reinvent the wheel, and we will consider private access tokens as we think about how these signals are relayed from the attesting component to the party evaluating the response.

The prior art (like the SafetyNet API) don't seem to make any attempt at the human-ness attestation, but also it seems challenging to see how the attestations that are made would apply to a platform with more generally user-controlled software.

I don’t have a satisfactory design for human-ness, but suggest that we start by aligning on the need for such a check (ideally on its own capability-specific issue when the time comes), and then solicit and discuss concrete suggestions.

That said, I want to zero in on the question of longitudinal signals, as I am not sure this could be implemented in a privacy-preserving way. … Anything beyond a point-in-time signal carries more information (more entropy) than a single bit, and therefore contributes to the fingerprinting surface of the client.

Once we have a concrete enumeration of capabilities we will be in a better position to evaluate whether they can be satisfied with a single bit indicating “overall goodness.” One trade-off will be between the interpretability by defensive teams - the ability to adapt constraints to their defensive use cases, and detect when a single source of truth has been broken - and potential increased identifiability via exposed entropy. I think this will be a more fruitful discussion once we have the list of capabilities enumerated.

@claucece
Member

claucece commented May 3, 2022

Thank you for a very nice discussion! Just to chime in here: I love the idea of a capabilities document, as it helps assess all use cases rather than using a single solution for everything. This is helpful not only because it gives us a framework for proper assessment, but also because it creates a document that people in the community can refer to and expand. @dvorak42 do you think we can indeed add this to the use cases document, or should we create a separate one?

@dvorak42
Member

dvorak42 commented May 3, 2022

I think making it a part of the use cases document might make sense here, or at least tying it to that document, given the overlap.

philippp added a commit to philippp/proposals that referenced this issue May 4, 2022
Expand the use cases document to introduce and define the concept of capabilities, and create a space for the enumeration of relevant capabilities.

Following the discussion regarding native attestation mechanisms as a source of truth (see antifraudcg#8), I'd like to suggest clarifying the functional requirements for our key use cases. I'm suggesting expanding the use cases document to introduce the concept of "capabilities," allowing us to enumerate the types of inferences we need to make in order to detect the relevant types of fraud and abuse. Following this enumeration, we can hopefully parallelize the discussion of these capabilities and their unique complexities, and ultimately solicit designs that deliver these capabilities subject to the agreed-upon requirements.
@michaelficarra
Member

Hey all, I'd like to set up an "incubation" call to discuss this proposal. I'd like to spend time with a small subset of the CG to clarify and maybe split out some pieces from this proposal. This way, the next time we present it to the whole CG, we will already have thought through many of the aspects that are currently unexplored or open questions. From my side, I am particularly interested in discussing mitigations for the fingerprinting/tracking concerns. Who would be interested in joining this call?

@SpaceGnome

I'd be interested in joining the call.

@philippp
Contributor Author

I would also be interested in joining the call.

@dvorak42
Member

We'll be sending out an email to the CG list later today/tomorrow to organize and gather interest for the call. Please reply there once we've sent it so we can keep all the organization in one place.

@npdoty

npdoty commented Jun 17, 2022

User concerns here might fall into five categories:

  1. hardness of the use of attestation: will the user be denied access, slowed access, be put through some burdensome process or otherwise punished because they can't or don't want to satisfy the attestation process? if some attestation mechanism reaches a certain percentage penetration of the market, is it likely that sites will rely on it in a harder way? is there some way to mitigate that?
  2. granularity of what's attested: is the attestation looks-good vs. looks-bad, or will it be more detailed (this user satisfies conditions X, Y and Z but not W)? more metadata would increase privacy risk, but perhaps also user freedom risk.
  3. freedom: can users still control their own software and their own device? can a user still install extensions on their browser, configure and customize their browser? use open source software? what DRM-like capabilities need to be present for attestation to be acceptable?
  4. consolidation: will attestation depend on there being a small number of attesters, or will it contribute to further consolidation of platform/browser providers?
  5. privacy: blinded tokens and Privacy Pass could improve privacy by separating the information shared for attestation from the navigation to a particular site. but there could still be impacts on privacy in terms of what data needs to be shared with an attester to prove goodness (and what is done with that data), as well as privacy implications of what's revealed to an origin by the choice of attester a user relies on, or what metadata would be available in an attestation.

@michaelficarra
Member

michaelficarra commented Jul 1, 2022

After some research into the capabilities of widely-available trusted computing hardware (including TPM 1.2, TPM 2.0, and Apple's Secure Enclave), these are the features I've identified that could be useful for anti-fraud if exposed to the web. Applicable user concerns as defined in #8 (comment) are listed.

Platform Integrity

Description: Trusted computing hardware can collect information about the running environment, including whether the operating system has been modified, what applications are running, whether any sensitive or debugging APIs are in use, etc. The collected information can be signed and either exposed directly to the web service or given to a mutually trusted third party for analysis and summarisation. A nonce is used to guarantee freshness. Web services that are presented with this attestation will have increased assurance that the inputs they receive have not been programmatically triggered or manipulated.

Use Cases Addressed: account creation, account takeover, advertising click fraud, ecommerce fraud, payment fraud

Limitations: This process is only effective on known platforms with no known exploits. Either outdated platforms with known exploits or custom ("home brew") platforms will not be able to provide sufficient evidence since the inputs to the trusted computing hardware cannot be trusted. Notably, open-source platforms are not necessarily excluded from this process; if the platform is sufficiently restricted from manipulation, a known build/distribution of it can be registered and validated.

Distributing the work of validating the attestation is impractical, as each web service would be responsible for tracking public keys of trusted computing vendors and monitoring for key compromises or intentional misuse/abuse (Microsoft curates and distributes such a list for this purpose). In addition, they would have to keep a large database of signatures of trustable platforms and monitor for known exploits of any trusted platform. For these practicality reasons, an approach that relies upon a mutually trusted third party is essential. Also, validating the signature may require exposure of a uniquely-identifying certificate (as in the Endorsement Keys proposal below).

User Concerns: There can be privacy concerns depending on what information is collected/shared and with whom. All trusted computing solutions have a minor consolidation concern, as the trusted computing hardware vendors must, for practical purposes, be limited. But for this proposal, there's an additional consolidation concern, as the set of trustable platforms must also be limited for practicality. There can be a hardness concern from limiting the use of "homebrew" platforms and restricting virtualisation.

Mitigations: Privacy concerns are mitigated when using a mutually trusted third party. Direct Anonymous Attestation may be used to avoid being uniquely identified by the signing certificate used.
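
A sketch of the mutually-trusted-third-party shape suggested above: the verifier consumes the raw (potentially identifying) evidence, and the web service only ever sees a compact signed verdict. All message formats here are invented for illustration:

```python
# Sketch: the device sends its full attestation evidence to a mutually
# trusted verifier; the web service receives only a signed, minimal verdict
# over its own nonce, never the raw measurements. Evidence checking is
# stubbed out.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

verifier_key = Ed25519PrivateKey.generate()  # the verifier's signing key

def verifier_summarise(evidence: dict, nonce: str) -> tuple[bytes, bytes]:
    # The verifier would inspect measurements, platform signatures, exploit
    # databases, etc. (stubbed here), then emit only a one-bit verdict.
    looks_good = evidence.get("os_modified") is False
    verdict = json.dumps({"nonce": nonce, "verdict": looks_good}).encode()
    return verdict, verifier_key.sign(verdict)

def origin_accepts(verdict: bytes, sig: bytes, expected_nonce: str) -> bool:
    try:
        verifier_key.public_key().verify(sig, verdict)
    except InvalidSignature:
        return False
    claims = json.loads(verdict)
    return claims["nonce"] == expected_nonce and claims["verdict"] is True

v, s = verifier_summarise({"os_modified": False, "pcrs": "..."}, nonce="abc123")
assert origin_accepts(v, s, "abc123")
```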

Monotonic Counters

Description: Trusted computing hardware has a facility for keeping monotonic counters. A single central counter is kept and can only ever be read and incremented. A small number of additional counter slots are available, and when they are read, the central counter is incremented, and they advance to its new value. Web services that are presented with a signed counter can associate it with an account or device profile to detect unauthorised access, and can use its magnitude or its rate of change to infer abuse. Sybil attacks are addressed somewhat, as the counter makes it harder for a single device to appear to be many devices.

Use Cases Addressed: account creation, account takeover, advertising click fraud

Limitations: Unfortunately, it doesn't seem there is a way to reliably scope the counter to web interactions or, ideally, individual web origins. Protections may have to be put in place to prevent malicious web services from artificially inflating their visitors' counters to reduce the reliability of this feature.

I am not yet sure whether the validating party can be confident that the signed counter value originated in an NV_COUNTER index (as opposed to some more manipulable input).

User Concerns: Some privacy concerns for users with significantly uncommon counter values. This could also be seen as a very minor cross-origin communication channel.
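
To illustrate how a relying service might consume such a counter, here is a purely heuristic sketch; the signature check over the counter value, which must happen first, is elided, and the rate threshold is an invented example:

```python
# Sketch: a web service tracks the last signed counter value per device and
# flags two anomalies: a counter that moved backwards (impossible for a real
# monotonic counter, suggesting replay or a cloned/emulated device) and an
# implausibly fast increment rate (suggesting heavy automated reuse).
import time

last_seen: dict[str, tuple[int, float]] = {}  # device_id -> (counter, unix time)
MAX_RATE = 10 / 60  # assumed threshold: 10 increments per minute

def check_counter(device_id: str, counter: int) -> str:
    now = time.time()
    prev = last_seen.get(device_id)
    last_seen[device_id] = (counter, now)
    if prev is None:
        return "ok"                  # first sighting; nothing to compare
    prev_counter, prev_time = prev
    if counter < prev_counter:
        return "replay-or-clone"     # a real monotonic counter never decreases
    rate = (counter - prev_counter) / max(now - prev_time, 1e-9)
    if rate > MAX_RATE:
        return "abnormal-velocity"   # implausibly heavy use of this device
    return "ok"
```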

Unique Endorsement Keys

Description: Trusted computing hardware is typically provisioned with a unique X.509 certificate signed by the vendor called an endorsement key. The serial number or public key component of this certificate can be used by a web service to impose a scaling cost for Sybil attacks that is tied to the cost of acquiring trusted computing hardware, since each unit will only ever have one key. The rest of the platform (OS, UA, etc) does not need to be trusted for this to work.

Use Cases Addressed: all

Limitations: Distributing the work of validating the endorsement key is impractical, as each web service would be responsible for tracking public keys of trusted computing vendors and monitoring for key compromises or intentional misuse/abuse. A solution that uses a third party for maintaining this list would be more practical.

The endorsement key amounts to a unique, unchangeable identifier. This identifier, if exposed directly to the web service, can be misused for cross-origin tracking.

User Concerns: Significant privacy concerns.

Mitigations: Use a mutually trusted third party for anonymisation or at least mixing the unique ID with the origin.

Only expose the API to trusted antifraud providers. Allow users to delegate this trust decision to the browser or other selected curator.

Only expose the API in a user-selectable browsing context. Alternatively, use a permissions prompt.

Trusted computing hardware has the capability to generate further certificates in the endorsement key's chain, deterministically based on a seed that can be derived from the origin. Unfortunately, I'm not sure whether this would allow the vendor certificate to be verified and also prevent the consumer from misusing the intermediate key as a unique identifier for tracking. I need to do more research into this mitigation.
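
For reference, a sketch of the core vendor-signature check each validator (or the shared third party) would perform on an EK certificate. A pinned RSA vendor root and a single-level chain are assumed; expiry and revocation checks are elided:

```python
# Sketch: check that an endorsement-key (EK) certificate was signed by a
# pinned trusted-computing-vendor root.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

def ek_chains_to_vendor(ek_cert_pem: bytes, vendor_root_pem: bytes) -> bool:
    ek = x509.load_pem_x509_certificate(ek_cert_pem)
    root = x509.load_pem_x509_certificate(vendor_root_pem)
    if ek.issuer != root.subject:       # issuer must name the pinned root
        return False
    try:
        root.public_key().verify(       # the vendor's signature over the EK cert
            ek.signature,
            ek.tbs_certificate_bytes,
            padding.PKCS1v15(),
            ek.signature_hash_algorithm,
        )
    except InvalidSignature:
        return False
    return True
```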

@philippp
Contributor Author

philippp commented Jul 7, 2022

Thank you for researching and sharing those hardware-backed affordances, Michael. Also, Nick - thank you for illuminating and continuing to represent relevant areas of concern as we discuss this proposal.

Nick: I wonder if hardness and freedom are two sides of the same challenge. Most platforms codify what modifications are welcome (by allowing extensions, apps, plug-ins, etc.), allowing services and apps to account for these alterations in their threat models. I don't propose that we change what any platform allows in terms of extensions/plug-ins/etc., but rather that we allow services to check whether the platform's expectations have been broken by modifications. As discussed, this would also be possible on OSS implementations that are installed from a signed build.

In terms of how this influences modding/hobbyist innovation, perhaps we can look at the existing innovation in operating systems: Services that need to assert integrity (e.g. multiplayer games) historically relied on invasive techniques (e.g. win32 rootkits, web canvas fingerprinting) to detect deception. Microsoft responded by introducing Windows Defender in WinRT, Android and iOS have similar attestation APIs.

On the web, we're still in the world of everyone collecting fingerprints and building similar heuristics and models for detection. Is marginalization of non-attestable devices a known problem for native attestation? Naively, I expect app developers to want to maximize their reach, even if it means taking on some risk.

Michael: Both the monotonic counters and the unique endorsement keys seem useful for the attestation service. The attestation service could use the monotonic counters and endorsement keys to identify "hot" devices, and could generate origin-specific endorsement keys if a longitudinal same-site client identity is needed.

@dvorak42
Member

We'll be briefly (3-5 minutes) going through open proposals at the Anti-Fraud CG meeting this week. If you have a short 2-4 sentence summary/slide you'd like the chairs to use when representing the proposal, please attach it to this issue; otherwise, the chairs will give a brief overview based on the initial post.

@dvorak42
Member

From the CG meeting, there's concern that a lot of the work here is dependent on devices/platforms supporting signals and whether developing the platform support really falls under the W3C. One potential step is to try coming up with the asks for what platforms could provide to fit the various capabilities/use cases the CG has developed, and then working further on a solution in this space once that exists.

@r3muxd

r3muxd commented Jul 22, 2023

re: user freedom - the fact that such a conversation about a foundational feature of the web (openness) was held in April 2022, but on a random GitHub repo where nobody could have possibly known about it, is somewhat like the plans in the Hitchhiker's Guide being in a basement with a sign stating "Beware of the Leopard"
if users (as in, user agents) had a chance in hell of knowing about this rather than "anti-fraud" groups, there would be a lot more pushback, which I can only assume was by design

@constantin-angheloiu

Sadly, we can't demand the same vision of an open web from such guys, already slaves of big corps, trying to sell us the "anti-fraud" donut... 🤮

@Netbulae

Netbulae commented Jul 28, 2023

Can we also have the reverse, i.e., a mode that refuses any connection to an untrusted party like Google or Meta?

This will save me a ton of browser modifications

@quantumpacket

quantumpacket commented Jul 28, 2023

It's not surprising to find Google employees as members of these anti-free-internet groups. But what I do find surprising is that a member from @brave serves in this group as a Chair. So many double-faced companies and organizations these days. Time to start blocking all Chrome-based browsers.

quantumpacket referenced this issue in chromium/chromium Jul 28, 2023
This CL moves the base::Feature from content_features.h to
a generated feature from runtime_enabled_features.json5.

This means that the base::Feature can be default-enabled
while the web API is controlled by the RuntimeFeature, which will
still be default-disabled.

An origin trial can enable the RuntimeFeature, which will
allow full access to the API, provided the base::Feature is also
enabled (see change to origin_trial_context.cc).

Meanwhile, the base::Feature can be disabled through Finch as a
kill-switch for the whole feature, and prevent origin trials
from turning the feature on.

Tests have been added to WebView test, as it allowed for easy
spoofing of responses on a known origin.

Bug: 1439945
Change-Id: Ifa0f5d4f5e0a0bf882dd1b0207698dddd6f71420
Fixed: b/278701736
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/4681552
Reviewed-by: Rayan Kanso <[email protected]>
Commit-Queue: Peter Pakkenberg <[email protected]>
Reviewed-by: Dmitry Gozman <[email protected]>
Reviewed-by: Richard Coles <[email protected]>
Reviewed-by: Kinuko Yasuda <[email protected]>
Cr-Commit-Position: refs/heads/main@{#1173344}
@VegaDeftwing

VegaDeftwing commented Jul 28, 2023

@michaelficarra

a known build/distribution of it can be registered and validated.

History has shown the user impact of this choice. Microsoft effectively has to bless Canonical for Ubuntu images to pass through Secure Boot. This puts them in a position of power over a competitor and is a large barrier to entry that would not even be considered for smaller projects. Some platforms make turning off Secure Boot difficult or impossible, or make adding custom keys unnecessarily difficult. Reading https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface/Secure_Boot alone makes it clear this is a pretty high technical burden that most users could not be expected to overcome, thus limiting operating system choice. This precedent cannot be ignored when considering similar restrictions.

Privacy concerns are mitigated when using a mutually trusted third party

This too comes with a cost. I think it's reasonable to equate this to a VPN, albeit one with a wildly different purpose than is typical. VPNs do offer some benefits for users, but as any gamer could tell you, the latency/ping cost makes them unreasonable for use at all times.

@philippp

(e.g. multiplayer games) historically relied on invasive techniques (e.g. win32 rootkits, web canvas fingerprinting) to detect deception. Microsoft responded by introducing Windows Defender in WinRT, Android and iOS have similar attestation APIs.

And the technical crowd has largely been against both, equating the Windows Defender in WinRT to just a Microsoft-blessed rootkit instead of a third-party one. The solution to this, among gamers, is simple: play games with people you know and trust. Hop in a Discord call instead of joining random, public lobbies. It's more fun and avoids the problem directly, without requiring invasive software or other disturbing solutions.

I could also make comparisons to the various rootkits installed by test-proctoring software, which has gone oh so well: https://soatok.blog/2020/09/12/edutech-spyware-is-still-spyware-proctorio-edition/ . Again, the solution here is not to ensure absolute trust; it's to design the other systems so that the trust isn't necessary. Write the test such that a user can't cheat, because cheating has no meaning. Schools are slowly adapting to this; they just used these awful systems as an excuse to avoid much-needed change, at the cost of entrapped students.

Of course, there are other much-hated DRM systems in other industries I could go on about.


Is there a way I can sign up for notifications with developments and meeting notes from future @antifraudcg actions?

@quantumpacket

quantumpacket commented Jul 28, 2023

Is there a way I can sign up for notifications with developments and meeting notes from future @antifraudcg actions?

You can attempt to join here: https://www.w3.org/community/antifraud/. I'm not sure if there is a vetting process as there is for other groups, where you typically have to represent an organization (by design, to keep people out).

I find it interesting how the participant breakdown of the 189 members works out:

62 from Google LLC
12 from Microsoft
9 from Apple Inc.

Only 1 from W3C! 😆

@amyipdev

There is no such thing as a "legitimate" device, plain and simple. To deem some devices legitimate and others illegitimate inherently excludes large blocks of people from the internet. It's that simple.

@rektide

rektide commented Jul 28, 2023

I don't see how WEI would be more than a very weak speed bump for fraudsters.

It increases burden by forcing them (along with all legit users) into a narrower subset of hardware & software, but that doesn't seem like a particularly onerous restriction on people who want to commit fraud, especially at any kind of scale.

These folks will still be able to automate their processes at scale on these secure platforms, unless Google wants to test & certify every keyboard, mouse, a11y, & other HID device too, and even then it's not hard to take a $2 Pi Pico & fake a specific keyboard.

It's unclear to me who is actually helped by WEI. It hurts a huge class of legit users, & it seems unlikely to be particularly effective at actually stopping fraud or bots, particularly if the user & attester relationship is at all privacy-preserving (which WEI does OK at; the lack of said privacy would be highly concerning for other reasons). Doing anything useful is an anti-use-case of WEI.

@zb3

zb3 commented Jul 28, 2023

advertising click fraud

Oh, that sounds awesome. I guess we should all commit some advertising click fraud so we'll see Google die faster.
If I'm ever forced to view ads, I'll retaliate by randomly clicking / faking interest...

@o5k

o5k commented Jul 28, 2023

Entering my opinion as a game developer on the web, with active experience in anti-cheat (in particular, botting and use of multiple accounts) for the web. In essence, the task WEI is expected to fulfil is far too expensive in terms of cost to the infrastructure of the Internet as a whole.

Gamedevs will know that anti-cheat is a continuous cat & mouse game. One of the most popular ways of attempting to tackle it is by means of client-side anti-cheat (CSAC), software that embeds itself into the kernel, ensuring it can start before, and oversee, any programs that may be attempting to interfere with the software to be protected (the game).

This deeply infringes upon the user's device and requires a lot of trust from the user. The main element of this trust is an understanding that the anti-cheat is benevolent software with only one mission: to help the developers find cheaters. However, as this software has grown more invasive, and genuine users feel untrusted, a disdain for this software has risen.

WEI is, in essence, client-side anti-cheat for browsers themselves. However, in this case, the benefits are not as clear.

If WEI intends to protect as many groups as the proposal wishes (eliminating as much fraud as possible), the attesters (the actual anti-cheat software) are forced to extreme lengths to avoid tampering. This element of the spec is left extremely nebulous, which... makes sense. There's no purpose to making an attester whose code is public. For an attester to be able to do its work, it will quickly enter into a cat&mouse game so steep it will become just like the most invasive anti-cheat software. If it was any less invasive, it simply wouldn't work.

By design, attesters will quickly change from simple, naive checks to enforcing just about everything. This is the reality of creating a no-trust relationship toward the user—there is no effective way to do it other than locking the user into a walled garden. Any browsers that do not enable the backdoor for an attester to enter ring0 will become second-rate citizens of the Internet at best. This may sound black-and-white, but that's how it is.

In essence, we are trading:

  • a small amount of "trust" toward the user's device, but that we still can't ensure because this is forever a cat&mouse game

for:

  • user choice in user agents,
  • user choice in operating systems,
  • any amount of customization to the user agent (extensions) or operating system,
  • a potentially severe amount of a11y features,
  • etc.

I'm neither the first nor the last to mention this, but the concept of this unchecked attestation is a disastrously bad idea. It works somewhat well in gaming because it remains in its own sphere, and the only real purpose it can serve is preventing cheating. This proposal, on the other hand, has clear other purposes it can serve.

Above all, it will end the ability for any browser other than the ones accepted by attesters to exist. This is a pretty daring proposal to come from Google (then again, at this point W3C is just a subsidiary of Google), and as said by practically everyone in the tech world, not a good direction. If it is implemented, in any way or shape or form, it will lead to the end of the open Internet. Attesters (I wonder who will be the attester for Chrome... hmmm....) will, no matter what, play into the hands of centralization. It will become a requirement to run Chrome without any extensions, without any a11y features enabled in Windows, on a TPM-enabled device running Windows 11, with the latest updates installed, to access any large site. Google cares a lot about its advertising integrity, so of course it would rank sites that implement this higher. Et cetera, et cetera.

With how much of a drawback WEI has, and how much this plays in the favour of the company pushing for its implementation (well, that is already implementing it), it is hard not to see the use cases presented as anything more than a front for a hostile takeover of the open Internet. This is genuinely disgraceful, and I believe the outcry is more than deserved. Nonetheless, it was hard not to see it coming. With ad spend reduced quite drastically, the system of ad revenue is finally starting to show its weakness. When times get worse, these companies will start showing their true colors.

We must hope that the pushback only gets stronger. Otherwise, this will truly be the end of the open Internet.

@dvorak42
Member

This is solely a clarification on the Anti-Fraud CG processes and purpose, not on this particular proposal. The Anti-Fraud CG was established as a group within the W3C where we could bring together folks invested in various parts of the ecosystem to discuss, provide feedback, and iterate on proposals. This was started a couple of years back to have a place to do this sort of public discussion in the open and to try to develop more privacy-positive proposals compared to other techniques, involving full fingerprinting/user identification, that are prevalent in the ecosystem. The group was established to bring together folks from across the ecosystem, including web privacy advocates, browser vendors, anti-fraud developers, and others.

Two important clarifications are:

  1. For the most part, any CG member is allowed to propose potential APIs, and proposals made to the CG are not an indication that the community group necessarily supports the proposal; there is a process by which proposals can be adopted, with consensus from the CG, as work items that the CG is interested in spending time on.

  2. Community groups are intended to iterate on and discuss proposals/work items; however, community groups do not have the ability to standardize APIs. Once there's some amount of consensus on an API shape, proposals have to be adopted by a Working Group (e.g., WebAppSec, https://www.w3.org/groups/wg/). It is there that the actual standardization process happens, consensus is established on whether there's support for an API, and the ecosystem impacts are measured.

We'll add a document to our repos with clarifications of how the W3C operates for external folks.

@komali2

komali2 commented Aug 3, 2023

It's not surprising to find Google employees as members of these anti-free-internet groups. But what I do find surprising is that a member from @brave serves in this group as a Chair. So many double-faced companies and organizations these days. Time to start blocking all Chrome-based browsers.

Brave won't be shipping the result of this conversation, WebEnvironmentIntegrity: brave/brave-core#19476. I don't know why there's a Brave member as chair, but maybe it's to keep abreast of stuff like this?

@indrora

indrora commented Aug 3, 2023

Tossing my hat in the ring.

This sort of proposal will, in the end, also have a negative effect on blind users who heavily modify their browsers in order to use them (e.g. Dragon NaturallySpeaking, NVDA, etc.), makes it logistically impossible to create non-Google-derived browsers (e.g. Safari, Arc), and would have a severe negative effect in countries where Chrome is not the standard for mobile (e.g. China, where QC Browser is much more common) or where Android versions lag behind (approximately 20% of Android users in the world are running a version older than Android 10, the oldest version still supported by Google; [source](https://www.statista.com/statistics/921152/mobile-android-version-share-worldwide/)).

@MrStonedOne

User agents must serve the user's authority and autonomy, not the website's interests. I don't understand why this conversation needs to go further than this.

@Voltra

Voltra commented Aug 9, 2023

I already get mad at Discord for reading my process list; how do you guys expect people to react to this?

@Blazzycrafter

We must hope that the pushback only gets stronger. Otherwise, this will truly be the end of the open Internet.

Just a troll thought:
How quickly will Google regret this decision if NOBODY uses Google.com because NOBODY uses Chrome unmodded?

I'm using Opera GX (Chromium engine) and my family uses Brave (Chromium engine).
By the way, I'm thinking of using mailcow as my mail service; if Google prevents me from using Gmail, then I have no reason NOT to use mailcow XD

@turquoispandabear10

Google SafetyNet on Android is an interesting and important technology, but it is currently being abused. GrapheneOS, for example, contributes code and uncovers vulnerabilities that are adopted by and help protect the whole Android ecosystem. It is probably the most secure operating system based on Android. But because GrapheneOS only passes the basic attestation check of Google SafetyNet, its users are banned from using important applications such as Google Pay. This is clearly a monopolistic practice, where OSes that have even more security protections than the stock Pixel operating system are prevented from using basic services that everyone needs, such as payments. The fact that even Google itself does not allow for the open use of its apps shows that this sort of attestation software will be used in actual practice as Web DRM and will destroy users' freedom to run their own software, as they will not be able to interact with anything important.

I do think that this device integrity attestation can improve security and help with fraudulent attacks on websites, but there must be a public promise, stated in the terms of use of this system, that users who are prevented from accessing a website must be allowed to enter by solving a CAPTCHA or some similar system that does not require directly personal identity information such as a phone number. This will allow this service to reduce the burden placed on websites and reduce CAPTCHAs for common software and devices, but will preserve the right of users to run their own software.

The Google/GrapheneOS issue:
https://discuss.grapheneos.org/d/475-wallet-google-pay/2

@philipwhiuk

@dvorak42

The group was established to bring together folks from across the ecosystem including web privacy advocates, browser vendors, anti-fraud developers, and others.

Can you cite any actual evidence of web privacy advocates included in your group? Based on a review of your participants list, nobody represents a recognisable organisation focusing on web privacy.

@michaelficarra
Member

@philipwhiuk Our participants list includes representatives from the Center for Democracy and Technology, for example. There are also individuals representing themselves, and representatives with that role in their organisation.

@quantumpacket

quantumpacket commented Sep 3, 2023

Our participants list includes representatives from Center for Democracy and Technology,

Their financials show who is funding them:

Thank you to our supporters

$500k+
Amazon • Chan Zuckerberg Initiative • Ford Foundation • Google • John S. & James L. Knight Foundation

$100k+
Anonymous • Apple • Cooley* • Democracy Fund • Patrick J. McGovern Foundation • Meta • Microsoft • Omidyar Network • Open Society Foundations • Ropes & Gray* • TikTok • WhatsApp • Wilson Sonsini Goodrich & Rosati*

$50k+
Airbnb • Davis Wright Tremaine* • Spotify • Uber • Verizon • Vinson & Elkins*

$25k+
Adobe • AT&T • Covington & Burling • Discord Inc. • Intuit • The John D. and Catherine T. MacArthur Foundation • Kohlberg, Kravis, Roberts & Co • Latham & Watkins • Mozilla • Visa • XR Association • Zoom Video Communications

Look at all those privacy respecting companies funding them. 😆

These corrupt companies create and fund these shell organizations to hide behind, so they can claim "oh look, we care about privacy." It even has the word "Democracy" in the name; that means "we're serious about listening to public opinion"... only when it suits us.

@ljharb

ljharb commented Sep 3, 2023

Are you confusing funding with controlling, perhaps?

@quantumpacket

quantumpacket commented Sep 3, 2023

You don't bite the hand that feeds you. Whoever writes the checks is the one in control. I thought that was common knowledge? It would be pretty naive to think otherwise.

@ljharb

ljharb commented Sep 3, 2023

It's not knowledge, it's just your theory, and in practice this isn't how things play out.

If you still think you're right and want that to change, then write a larger check - because your theory implies that nothing else (including complaining here) will move the needle.

@ejc3

ejc3 commented Oct 24, 2023

Would this proposal help prevent this attack:
https://www.thestack.technology/new-okta-breach-support-har/

Basically tying cookies or a token to a particular device so that if they were ever stolen (like via a HAR file upload), they wouldn't be valid on another browser.
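
A rough sketch of the kind of binding being asked about, with invented names; a software key stands in for the hardware-backed, non-exportable key such a scheme would actually rely on:

```python
# Sketch: bind a session cookie to a device-held signing key. The cookie value
# alone (e.g. leaked via a HAR upload) is useless without the private key.
# Real schemes would also sign a per-request nonce or timestamp to stop
# replay of captured request signatures.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

sessions: dict[bytes, Ed25519PublicKey] = {}  # cookie -> device public key

def create_session(device_pubkey: Ed25519PublicKey) -> bytes:
    cookie = os.urandom(32)
    sessions[cookie] = device_pubkey
    return cookie

def authenticate(cookie: bytes, request: bytes, signature: bytes) -> bool:
    pubkey = sessions.get(cookie)
    if pubkey is None:
        return False
    try:
        pubkey.verify(signature, cookie + request)  # proof of key possession
        return True
    except InvalidSignature:
        return False                                # stolen cookie, missing key

device_key = Ed25519PrivateKey.generate()           # hardware-held in practice
cookie = create_session(device_key.public_key())
sig = device_key.sign(cookie + b"GET /account")
assert authenticate(cookie, b"GET /account", sig)
```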

@indrora

indrora commented Oct 24, 2023 via email

@philippp
Contributor Author

philippp commented Nov 7, 2023

Thanks again for all of the discussion on this issue. Based on the feedback we have received, we are going to close this issue and not pursue web platform device attestations that (1) are adverse to the user (even when abusive clients are part of the threat model) or (2) expose system details through which one platform can be distinguished from others.

There may be an opportunity for other, more narrowly scoped, usages of the device as part of the web platform when (1) it does not disclose what type of device it is and (2) it is in the direct interest of the user - for example, to protect an account or a payment instrument from being used on unexpected devices. It may thus make sense to refine and narrow our requirements for capabilities that do not rely on device fingerprinting, to ensure that these don't run into the same concerns as WEI.

@Voltra

Voltra commented Nov 7, 2023

(2) it is in the direct interest of the user

for example, to protect an account or a payment instrument from being used on unexpected devices

These two do not match

@ejc3

ejc3 commented Nov 8, 2023 via email

@Voltra

Voltra commented Nov 8, 2023

That would greatly facilitate fingerprinting...

@MrStonedOne

MrStonedOne commented Nov 8, 2023

While I share the same concerns everybody else here does (see my earlier comment), cookies are already a form of fingerprinting, and I do think it is likely possible to build a system that allows verification that a login/session cookie hasn't been exfiltrated without providing any more fingerprinting surface than the cookie itself already does.

@Voltra

Voltra commented Nov 8, 2023

No, cookies in and of themselves can't be used for fingerprinting, for the basic reason that you can have two (or more) different ones on the same device. To sustain the metaphor, you don't have two (or more) fingerprints on a single finger.

In addition, if they could, no one would be trying to find other ways of achieving fingerprinting.
