
Attacks Against Proposed Outcome-Based Microtargeting Prevention #140

johnwilander opened this issue Mar 10, 2021 · 1 comment
During this week's IWABG call, I brought up attacks against threshold-based prevention of microtargeting and was referred to https://github.com/WICG/turtledove/blob/main/OUTCOME_BASED.md. It may be that your proposal addresses the attacks I describe below, but it wasn't clear to me that it does.

Attacks Against Proposed Outcome-Based Microtargeting Prevention

This all refers to this document: https://github.com/WICG/turtledove/blob/main/OUTCOME_BASED.md

Cliff Attack

The cliff attack is when a bad actor races to reach a threshold and then, right after reaching it (the cliff), switches to the alternative behavior the prevention technique was supposed to stop. In the case of the proposed microtargeting prevention, this would mean producing enough ghost wins to reach the required longest k-diverse subsequence (LKS). What stops a bad actor from microtargeting users after their creative has reached the required LKS?
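To make the attack concrete, here is a rough sketch of the kind of one-shot threshold check the attack exploits. This is purely illustrative; the identifiers, the threshold value, and the check itself are simplifications of mine, not taken from OUTCOME_BASED.md.

```ts
// Illustrative only: identifiers and the threshold value are made up, not from the proposal.
interface CreativeStats {
  lksLength: number; // length of the longest k-diverse subsequence of (ghost) wins
}

const K_THRESHOLD = 1000; // example value

// Basic policy: a creative may be served once its LKS reaches the threshold.
function mayServe(stats: CreativeStats): boolean {
  return stats.lksLength >= K_THRESHOLD;
}

// The cliff: a bad actor drives lksLength up to K_THRESHOLD with broad or synthetic
// ghost wins, then changes its bidding so the creative only wins for the one user
// it wants to microtarget. The check above keeps passing, because nothing in it
// requires the LKS to keep growing after validation.
```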

Bot Attack

The document says "the Privacy Infrastructure has to detect and block bidders' attempts to learn that status by means of creating synthetic users (bots) that win auctions." How do you intend to detect that? Does your threat model include bad actors running customized browsers with altered auction behavior, possibly producing traffic from multiple IP addresses and locations?

Detection Attacks

Detecting Preliminary Validation

A bad actor can customize its own browser to run auctions and detect its own preliminary validation state.

Detecting Validation

A bad actor can automate a browser instance to load a webpage until their own creative is rendered. That will prove to them that they are now validated.

Attacks Against Competitors

A bad actor can customize its own browser, run auctions, detect competitors' validation state and bidding strategy, and either alter its own bidding to always win or produce fraudulent auction outcomes in a browser farm to make the competitor's campaign look like it's being microtargeted.

jonasz (Contributor) commented Mar 23, 2021

Hi John,

In OBTD, browser teams have flexibility in defining what microtargeting is and what guarantees ("policies") they'd like to enforce.

The intention behind the algorithm we presented was to show (as a proof of concept) how to enforce the policy that says "an ad, if served, is expected to be seen by a certain number of people at least once".

Cliff Attack

(...) What stops a bad actor from microtargeting users after their creative has reached the required LKS?

Browsers may choose to extend the basic policy I mentioned above, for example by "invalidating" an ad once it is detected that the ad continues to be served but its LKS is not growing as expected. (Let's call this the "growing LKS" policy.) This way, it's possible to ensure that "an ad, if served, is expected to be seen by a certain number of people at least once each day". Of course, more advanced policies are also possible.
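As a minimal sketch of what such a check might look like (the snapshot structure, the per-day window, and the growth requirement below are illustrative assumptions, not part of OBTD):

```ts
// Sketch of a "growing LKS" check; all names and numbers are illustrative assumptions.
interface DailySnapshot {
  day: string;         // e.g. "2021-03-23"
  impressions: number; // how many times the ad was served that day
  lksLength: number;   // LKS length measured at the end of that day
}

const MIN_DAILY_LKS_GROWTH = 100; // example: required new diverse viewers per active day

// Invalidate an ad that keeps being served while its LKS stops growing.
function stillValid(yesterday: DailySnapshot, today: DailySnapshot): boolean {
  const wasServed = today.impressions > 0;
  const lksGrowth = today.lksLength - yesterday.lksLength;
  // Nothing to check while the ad is not being served; once it is served,
  // require the expected amount of fresh diversity.
  return !wasServed || lksGrowth >= MIN_DAILY_LKS_GROWTH;
}
```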

Bot Attack

The document says "the Privacy Infrastructure has to detect and block bidders' attempts to learn that status by means of creating synthetic users (bots) that win auctions." How do you intend to detect that? Does your threat model include bad actors running customized browsers with altered auction behavior, possibly producing traffic from multiple IP addresses and locations?

For the sake of brevity, the original OBTD document didn't delve into the problem of bots, as bots seem to be a wider issue in FLEDGE (and the web in general), so we were working with a simplified threat model. If we want to consider OBTD in isolation, it can be extended to the "harder" threat model.

One approach worth exploring could be to base the validation process entirely on signals coming from trusted browser instances. This way, the mechanism doesn't need all browsers to be trusted, but only relies on the existence of some trusted browsers.

For example, the browsers could ensure that (see the sketch after this list):

  • An ad can only be served to an untrusted browser if it has been seen by K trusted browsers.
  • An ad becomes invalidated if "trusted LKS" is not growing.
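A rough sketch of how these two rules could be combined (the identifiers, the trusted-browser threshold, and the growth requirement are assumptions of mine, not something the OBTD document specifies):

```ts
// Sketch of a "trusted LKS" gate; identifiers and thresholds are illustrative assumptions.
interface CreativeTrustedStats {
  trustedLksLength: number;     // LKS computed only over wins reported by trusted browsers
  prevTrustedLksLength: number; // the same measure from the previous validation period
  servedThisPeriod: boolean;    // whether the ad was served at all in this period
}

const K_TRUSTED = 1000;                   // example: trusted viewers required before untrusted serving
const MIN_TRUSTED_GROWTH_PER_PERIOD = 50; // example growth requirement per period

// Rule 1: serve to untrusted browsers only after K trusted browsers have seen the ad.
function mayServeToUntrusted(stats: CreativeTrustedStats): boolean {
  return stats.trustedLksLength >= K_TRUSTED;
}

// Rule 2: invalidate the ad if it keeps being served while the trusted LKS stalls.
function remainsValid(stats: CreativeTrustedStats): boolean {
  const growth = stats.trustedLksLength - stats.prevTrustedLksLength;
  return !stats.servedThisPeriod || growth >= MIN_TRUSTED_GROWTH_PER_PERIOD;
}
```

Since only wins reported by trusted browsers count towards validation here, a browser farm of untrusted instances cannot move the trusted LKS in either direction.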

Detection Attacks

Detecting Preliminary Validation

A bad actor can customize its own browser to run auctions and detect its own preliminary validation state.

That is true. We didn't want to complicate the algorithm, as this attack is not feasible at scale. However, you could mitigate it further by randomizing the preliminary validation threshold per user bucket. (That is, different users would be allowed to see the ad at different times.) Once the attacker detects preliminary validation in their own browser, they wouldn't be able to say how many users have already seen the ad, or whether the user they wish to microtarget is also in the state of preliminary validation.
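A minimal sketch of such per-bucket randomization (the hashing scheme, the notion of a bucket, and the numbers are illustrative assumptions, not part of OBTD):

```ts
import { createHmac } from "crypto";

// Sketch of per-bucket randomization of the preliminary validation threshold.
// The bucketing scheme, secret, and numbers are illustrative assumptions.
const BASE_THRESHOLD = 1000; // example baseline
const MAX_JITTER = 500;      // example spread across buckets
const SECRET = "per-deployment secret held by the privacy infrastructure";

// Deterministically derive a threshold for a given (creative, user bucket) pair.
// An attacker who observes preliminary validation in their own browser learns
// only the threshold of their own bucket, not that of the user they want to target.
function preliminaryThreshold(creativeId: string, userBucket: string): number {
  const digest = createHmac("sha256", SECRET)
    .update(`${creativeId}:${userBucket}`)
    .digest();
  const jitter = digest.readUInt32BE(0) % (MAX_JITTER + 1);
  return BASE_THRESHOLD + jitter;
}

function preliminarilyValidated(lksLength: number, creativeId: string, userBucket: string): boolean {
  return lksLength >= preliminaryThreshold(creativeId, userBucket);
}
```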

Detecting Validation

A bad actor can automate a browser instance to load a webpage until their own creative is rendered. That will prove to them that they are now validated.

Detecting validation (as opposed to preliminary validation) was not an issue in the example policy we chose. If I understand the motivation behind your question correctly, I think the "growing LKS" policy would solve the issue.

Attacks Against Competitors

A bad actor can customize its own browser, run auctions, detect competitors' validation state and bidding strategy, and either alter its own bidding to always win

From our perspective, the fact that others will learn that "RTB House is serving ads X, Y and Z" is not an issue, and we are open to this kind of transparency.

or produce fraudulent auction outcomes in a browser farm to make the competitor's campaign look like it's being microtargeted.

I believe this would be solved by the "trusted LKS" approach. (The browser farm would not contribute towards validation.)

Please let me know if I got your questions right and if my answers make sense. I'm also happy to discuss more.

// Also a final note - this is of course our (RTB House's) perspective on OBTD, and the browsers may have other views on how they want to define microtargeting and implement specific validation algorithms.

Best regards,
Jonasz
