
Attack on privacy budgets using fake sites and malicious Match Key Provider #57

bmcase opened this issue Mar 24, 2023 · 10 comments

@bmcase
Collaborator

bmcase commented Mar 24, 2023

Attack on privacy budgets using fake sites and malicious MKP

Summary:

  • There is an attack that a malicious Match Key Provider (MKP) can carry out by creating fake sites and using those sites' privacy budgets to measure events that happened on its own source or trigger site.
  • We discuss some potential mitigations to the attack.

I’ve been thinking about the overall threat model for managing the privacy budgets. Last summer, while working on the End-to-End doc, I considered similar budget sharing attacks, but at the time they seemed preventable. I’ve now been able to crystallize this attack into something we don’t prevent with what we are currently doing.

Background

We have recently been assuming that the associated data to encrypted matchkeys includes: the site, the trigger/source bit, the requested MKP, the epoch, and (previously) the Report Collector (RC) intended to run the query. Also, for source-fan-out queries we’ve been assuming that the Helper Party Network (HPN) will verify that all source events were created on the site that is submitting the query and spending the budget.
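
To make this concrete, here is a hedged sketch of what that binding could look like; everything in it (the `ReportMetadata` fields, the encoding, and the use of AES-GCM from the `cryptography` package in place of the actual HPKE encryption toward the helper parties) is an illustrative assumption, not the spec:

```python
# Minimal sketch (not the IPA spec): the metadata is bound to the match key
# ciphertext as associated data, so the helpers can check it before any
# budget is spent. AES-GCM stands in for the real HPKE encryption.
import os
from dataclasses import dataclass
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


@dataclass(frozen=True)
class ReportMetadata:
    site: str          # site where the event was registered
    is_trigger: bool   # the trigger/source bit
    mkp: str           # match key provider that was asked for the match key
    epoch: int         # epoch in which the match key was requested

    def to_aad(self) -> bytes:
        return f"{self.site}|{int(self.is_trigger)}|{self.mkp}|{self.epoch}".encode()


def encrypt_match_key(match_key: bytes, meta: ReportMetadata, key: bytes) -> tuple[bytes, bytes]:
    """Encrypt the match key with the metadata bound as AAD; tampering with
    the metadata afterwards makes decryption fail."""
    nonce = os.urandom(12)
    return nonce, AESGCM(key).encrypt(nonce, match_key, meta.to_aad())


# Example: a source report from a.example in epoch 7.
key = AESGCM.generate_key(bit_length=128)
meta = ReportMetadata(site="a.example", is_trigger=False, mkp="mkp.example", epoch=7)
nonce, ciphertext = encrypt_match_key(os.urandom(8), meta, key)
```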

Attack

Steps for the attack using a source site:

  1. Suppose there is a malicious MKP with a source site (a.example).
  2. a.example creates fake source sites a1.example, …, a10.example, and each one independently registers with the HPN to get a privacy budget.
  3. Trigger site b.example sends trigger reports to a.example.
  4. As an MKP, a.example can create the source reports for its source events without calling the API. It can thus use the same underlying data to create copies of the source events that happened on a.example as if they happened on a1.example, …, a10.example.
  5. Now a.example, a1.example, …, a10.example all run source-fan-out queries with the trigger reports from b.example.
  6. Each one of these queries is for the same underlying data and lets a.example learn about the events that happen on it using the budget from 10x more sites.

If we also required encrypted matchkeys to include who would be the RC issuing the query (b.example deciding its events can be queried by a.example but not a1.example, …, a10.example), that would prevent this attack unless b.example is also colluding and requests copies for all the fake sites. But since we assume source and trigger sites will collude in our threat model, this would be an insufficient mitigation in the general case. A similar attack could be carried out by an MKP using a trigger site.
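
To make the budget multiplication explicit, here is a toy simulation of the attack; the per-site epsilon ledger and all names are hypothetical stand-ins for the HPN’s real accounting and verification:

```python
# Toy model of the fake-site attack. EPSILON_PER_SITE and the provenance check
# are simplifications of the real budgeting and verification done by the HPN.
EPSILON_PER_SITE = 1.0


class HelperPartyNetwork:
    def __init__(self):
        self.budgets: dict[str, float] = {}

    def register_site(self, site: str) -> None:
        # Every site that registers independently receives its own budget.
        self.budgets[site] = EPSILON_PER_SITE

    def run_source_fanout(self, querying_site: str, source_reports, trigger_reports, epsilon: float) -> str:
        # The HPN verifies that all source events carry the querying site's
        # provenance -- but a malicious MKP can forge that provenance, since it
        # can mint source reports without any API call.
        assert all(r["site"] == querying_site for r in source_reports)
        assert self.budgets[querying_site] >= epsilon, "budget exhausted"
        self.budgets[querying_site] -= epsilon
        return "noisy aggregate over the same underlying users"


hpn = HelperPartyNetwork()
fake_sites = [f"a{i}.example" for i in range(1, 11)]
for site in ["a.example", *fake_sites]:
    hpn.register_site(site)

# The MKP knows the match keys, so it can replay a.example's source events
# under each fake site's name.
real_source_events = [{"match_key": mk} for mk in range(100)]
triggers_from_b = [{"site": "b.example", "match_key": mk} for mk in range(100)]

for site in ["a.example", *fake_sites]:
    forged = [{"site": site, **event} for event in real_source_events]
    hpn.run_source_fanout(site, forged, triggers_from_b, epsilon=1.0)

# Eleven budgets' worth of information has now been released about one set of users.
```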

Potential Mitigations

The IPA team has been thinking about some potential mitigations to the above attack:

  1. Remove having an MKP altogether and just use on-device matchkeys
    1. This would prevent an MKP from having the ability to create events that didn’t actually occur as a result of a user visiting a website.
    2. It would give up cross-device attribution, except in cases where a user’s matchkey could be synced across logged-in devices using other OS-supported methods.
    3. It would also make the sort more expensive by requiring larger matchkeys, except that we may have the option to use an OPRF based matching; see below.
  2. Limit who can be a MKP
    1. This doesn’t completely remove the attack but limits who could carry it out and makes those parties more trusted in the system.
    2. Ideally, we are able to combine this with additional technical mechanisms like query transparency that allow detecting if a matchkey provider is being malicious in this way.
    3. See also issue Restrictions on match key providers #42 on limiting who can be an MKP.
  3. Query transparency
    1. With the addition of query transparency (having the HPN publish various high-level statistics about the queries being run and by whom), it could let you see that some of the same ciphertexts were used in many different queries (all the queries run by the fake sites in the attack above include the same set of trigger reports). This alone isn’t sufficient to tell you who is running the attack, as the real site (a.example) need not run a query itself, only the fake sites. But if there are a limited number of MKPs, detection of such behavior could be used to trigger more frequent audits.
  4. Make privacy budgets at the business level
    1. When you sign up to get a privacy budget from the HPN, we could require more KYC to know what business is running the queries. This could make it harder than simply creating fake sites to get more budget.
  5. Use multiple non-colluding match key providers
    1. One idea @eriktaubeneck suggested was to use the XOR of match keys from different, non-colluding match key providers. It would give something closer to the intersection of the device graphs. @martinthomson has thought about an extension to this using a group key exchange.
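
A minimal sketch of the XOR idea in option 5, under the (hypothetical) assumption that a device stores one key per provider and only ever releases their XOR:

```python
# Each MKP sets its own key; the device reports only the XOR, so no single
# (non-colluding) provider knows the value that is actually matched on.
import secrets


def set_match_key(device: dict, provider: str, key: int) -> None:
    device.setdefault("match_keys", {})[provider] = key


def combined_match_key(device: dict) -> int:
    combined = 0
    for key in device["match_keys"].values():
        combined ^= key
    return combined


device = {}
set_match_key(device, "mkp-a.example", secrets.randbits(64))
set_match_key(device, "mkp-b.example", secrets.randbits(64))
print(hex(combined_match_key(device)))
```

Two devices then only match if every provider independently assigned them the same keys, which is what gives something closer to the intersection of the device graphs.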

Likely we would want to use a combination of the above methods. We think device-generated matchkeys, option 1), would be a decent MVP and would be similar to the first in-market test we are planning, but we will continue to research ways of supporting an MKP with better security guarantees.

Even without an MKP involved, there is the possibility of more general privacy budget sharing attacks that try to induce a client to submit events for many sites. If a site can induce a client to submit events for many sites with the ability to know they were all from the same person, it could carry out this attack on its own; sites might, for instance, try some form of bounce tracking to get an event registered as if it happened on multiple sites. There are probably some easy mitigations to bounce tracking and similar abuses, like having a minimum delay from page load until release of the match key, which could be cut short after user engagement.
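
One possible shape for that minimum-delay mitigation, sketched with hypothetical names and an arbitrary threshold (this is not a concrete proposal for browser behavior):

```python
# The browser withholds the match key until the page has been open for a
# minimum time, releasing it earlier only on genuine user engagement, which
# makes bounce-tracking redirects much less useful for registering events.
import time

MIN_DELAY_SECONDS = 5.0  # arbitrary illustrative value


class PageVisit:
    def __init__(self):
        self.loaded_at = time.monotonic()
        self.user_engaged = False

    def note_engagement(self) -> None:
        # e.g. a click or scroll attributable to the user rather than a redirect
        self.user_engaged = True

    def may_release_match_key(self) -> bool:
        elapsed = time.monotonic() - self.loaded_at
        return self.user_engaged or elapsed >= MIN_DELAY_SECONDS
```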

Implications for MPC – option for OPRF based matching

If we were to go with mitigation 1) or 2) above, that changes the trust model around what sort of attacks a malicious MKP can carry out. If we carry this same new threat model through the MPC we can get some optimizations. Note that from a privacy/security standpoint we don’t have to make this change to matching; it is just that this becomes an option we could more likely pursue as an optimization.

  1. Consider first the case where there is no MKP: if we still cache the matchkey ciphertext per domain to maintain a sensitivity bound on the matchkey (IPA Sharding with Sensitivity Bound #35), we get a much stronger bound. A malicious matchkey provider can no longer increase the cardinality of known matchkeys, so all matchkeys have cardinality bounded by the number of domains likely to be visited from one device in an epoch. With this bound in place we can return to using an OPRF based join instead of sorting and can prevent the cardinality attack with differential privacy (see the sketch after this list).
  2. If we limit who a MKP can be and trust them not to carry out attacks on the privacy budget, we might as well also trust them not to run malicious cardinality attacks.
  3. Note that if we don’t switch to an OPRF based matching but do include who would be the final RC to run the query in the encryption of the matchkey as associated data, that could mess up the ciphertext caching we hope to use to enable private sharding.
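
As a rough illustration of what the OPRF based join buys, here is a stand-in sketch: keyed HMAC plays the role of the OPRF the helpers would evaluate jointly under MPC (no single helper would hold the key or see match keys in the clear), and the cardinality cap reflects the per-device bound from 1); names and constants are hypothetical:

```python
# Join source and trigger reports by a pseudonym PRF(match key) instead of
# sorting on the match key itself, and enforce a cardinality bound per pseudonym.
import hashlib
import hmac
import secrets
from collections import defaultdict

PRF_KEY = secrets.token_bytes(32)  # jointly evaluated / never reconstructed in the real protocol
CARDINALITY_CAP = 20               # roughly: domains visited from one device in an epoch


def prf(match_key: bytes) -> str:
    return hmac.new(PRF_KEY, match_key, hashlib.sha256).hexdigest()


def join_by_pseudonym(source_reports, trigger_reports):
    buckets = defaultdict(lambda: {"sources": [], "triggers": []})
    for report in source_reports:
        buckets[prf(report["match_key"])]["sources"].append(report)
    for report in trigger_reports:
        buckets[prf(report["match_key"])]["triggers"].append(report)
    # Drop pseudonyms that exceed the bound so no single match key can dominate
    # a shard; the real design would add differential privacy noise to the
    # shard sizes on top of this.
    return {p: b for p, b in buckets.items()
            if len(b["sources"]) + len(b["triggers"]) <= CARDINALITY_CAP}
```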

Overall threat model for privacy budgets

This attack is a particular example of the sort of attacks sites might try to carry out by sharing their budgets. We should expand the threat model document (link) to capture a more robust definition of a privacy budget threat model. Other APIs are susceptible to similar budget sharing attacks. PCM is known to be vulnerable to a similar fake-sites attack; see section 2.2.5 of https://mozilla.github.io/ppa-docs/pcm.pdf. ARA has been adjusting its privacy grain and limits to deal with similar privacy budget sharing concerns; see WICG/attribution-reporting-api#661 and WICG/attribution-reporting-api#725.

@csharrison
Contributor

Thanks for filing this, @bmcase. One thing I want to caution is that your option (5) may be susceptible to "trivial" collusion attacks, e.g. a site that merely always sets a "0" match key. This is an attack that comes up often when it comes to "intersecting" device graphs, where there is no way to ensure that a device graph is honestly constructed (and e.g. assumes all devices are shared by one user). When attacks are this easy and we don't have other restrictions on MKPs, the non-collusion assumption gets weaker.
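
A tiny worked example of that degenerate case, reusing the XOR combination sketch from option 5 (the key value is arbitrary):

```python
# If one provider always contributes the constant 0, XOR is the identity and
# the "combined" key collapses to the other provider's key, so the
# intersection-of-device-graphs property is lost.
honest_key = 0x5AB1C2D3E4F50617
combined = honest_key ^ 0x0   # malicious provider sets a constant
assert combined == honest_key
```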

@martinthomson
Collaborator

Regarding the use of OPRF matching, the cardinality leak we're talking about still likely applies in some ways. If we are to consider the possibility of webviews and IPA aims to operate at the device level (not at the browser level), a single device manifests as multiple discrete contexts. You cannot use ciphertext stability across contexts, but cardinality might be used to link those contexts.

@eriktaubeneck added the "research questions" label Mar 26, 2023
@eriktaubeneck
Contributor

@csharrison good call out.

One thing I want to caution is that your option (5) may be susceptible to "trivial" collusion attacks e.g. a site that merely always sets a "0" match key.

Good point, and it could really be any constant value.

This was ultimately just an early line of thinking around the idea that, if the match key provider could mount attacks, we could possibly use the same principle of non-colluding match key providers. That said, we aren't proposing that any set of helper parties is appropriate, so we can't overextend the idea to match key providers.

@eriktaubeneck
Contributor

Thanks for writing up this detailed issue @bmcase!

For the sake of making progress on a standard in the PATCG, option 1 seems like a good immediate step, but leaves open the ability to add in match key providers as the standard matures. If we take this approach, though, I do think it's critical to solve for the different variations of same-device App/Web source event → App/Web trigger report, and this does add some challenges given the scope of the PATCG (and the W3C).

Some proposals have also talked about using the platform's "Sign In" feature to sync events across a user's devices (with the intent of doing on-device attribution). If we attempted to use that same functionality to sync match keys, note that these devices only need to establish some form of shared state, which can be used to generate consistent match keys. New devices could likely only join this shared state for new epochs, but that's likely a reasonable tradeoff.
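
A sketch of that shared-state idea, with a hypothetical derivation (HMAC of a per-account secret and the epoch number; the actual mechanism would be up to the platform's sign-in feature):

```python
# Devices signed in to the same account hold a shared secret and derive the
# same match key per epoch from it, so no per-event syncing is needed.
import hashlib
import hmac


def epoch_match_key(shared_secret: bytes, epoch: int) -> int:
    digest = hmac.new(shared_secret, f"match-key-epoch-{epoch}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big")  # e.g. a 64-bit match key


shared = b"per-account secret established via the platform sign-in feature"
assert epoch_match_key(shared, 42) == epoch_match_key(shared, 42)  # identical on every signed-in device
assert epoch_match_key(shared, 42) != epoch_match_key(shared, 43)  # rotates each epoch
```

This mirrors the point above: a device only needs the shared state, and one that joins partway through would simply start producing matching keys from the next epoch it participates in.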

@csharrison
Contributor

This was ultimately just an early line of thinking around the idea that, if the match key provider could mount attacks, we could possibly use the same principle of non-colluding match key providers.

Yes +1, I think the general idea of extending the privacy threat model to assume non-collusion across MKPs is interesting and worth further exploration. I guess my point is that the relaxation only really makes sense with a system that has some baseline trust for participating MKPs.

That said, we aren't proposing that any set of helper parties is appropriate, so we can't overextend the idea to match key providers.

I'm having trouble parsing this. Can you clarify what you meant here?

For the sake of making progress on a standard in the PATCG, option 1 seems like a good immediate step, but leaves open the ability to add in match key providers as the standard matures.

I generally agree with this. However, I do hope we can spend time on a long term solution which keeps the notion of MKPs in place. This is really the core "magic" of IPA and without it we constrain the possible supportable use-cases.

@eriktaubeneck
Contributor

That said, we aren't proposing that any set of helper parties is appropriate, so we can't overextend the idea to match key providers.

I'm having trouble parsing this. Can you clarify what you meant here?

Sorry about that, I should probably not attempt to reply while jet-lagged 😅. What I was attempting to say was that, with Helper Party Networks, our idea is that the web platform vendors will be able to approve networks; we don't expect any set of three Helper Parties to meet the conditions required to be a Helper Party Network. Similarly, if we extend the idea to non-colluding Match Key Providers, we would need to think about which groupings of Match Key Providers would reasonably meet the non-collusion assumptions. I think this is aligned with your comment above:

I guess my point is that the relaxation only really makes sense with a system that has some baseline trust for participating MKPs.

On the other point:

For the sake of making progress on a standard in the PATCG, option 1 seems like a good immediate step, but leaves open the ability to add in match key providers as the standard matures.

I generally agree with this. However, I do hope we can spend time on a long term solution which keeps the notion of MKPs in place. This is really the core "magic" of IPA and without it we constrain the possible supportable use-cases.

+1. There are a whole host of use cases which can only be supported in this way (CTV, for example). It's certainly our intention to continue working towards the goal of full interoperability, with cross-device and cross-platform support. I believe there's an argument to be made that one of the values of taking this approach is keeping optionality open for cross-device.

That said, I do still think there are meaningful differences between IPA and the on-device attribution proposals, even without the setMatchKey() API, and these tradeoffs are still worth considering. For example (as I mentioned above), it's likely easier to support cross-device within a platform, since there isn't a need to continually sync state.

@bmcase
Collaborator Author

bmcase commented Apr 4, 2023

@martinthomson to your earlier comment

Regarding the use of OPRF matching, the cardinality leak we're talking about still likely applies in some ways. If we are to consider the possibility of webviews and IPA aims to operate at the device level (not at the browser level), a single device manifests as multiple discrete contexts. You cannot use ciphertext stability across contexts, but cardinality might be used to link those contexts.

That makes sense. Do you think there is a way to at least get a bound on this that we can have for the sensitivity of shard sizes?

@csharrison
Contributor

@martinthomson to your earlier comment

Regarding the use of OPRF matching, the cardinality leak we're talking about still likely applies in some ways. If we are to consider the possibility of webviews and IPA aims to operate at the device level (not at the browser level), a single device manifests as multiple discrete contexts. You cannot use ciphertext stability across contexts, but cardinality might be used to link those contexts.

That makes sense. Do you think there is a way to at least get a bound on this that we can have for the sensitivity of shard sizes?

I want to make sure I am understanding this concern. Is it assuming a platform-level device ID rather than a browser-provided device ID? If so, the platform could enforce a bound here, at the cost of fairness and coverage (e.g. maintain a set of device IDs, sticky to a set of calling apps). The platform could also passively monitor this and provide aggregate metrics for us to calibrate epsilons.
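
A rough sketch of that enforcement idea (the broker and the cap are hypothetical; the real policy would live in the platform):

```python
# The platform keeps a small fixed pool of per-device IDs and assigns each
# calling app to one of them, so one physical device can appear as at most
# MAX_CONTEXT_IDS distinct "devices" across apps/webviews.
import secrets

MAX_CONTEXT_IDS = 4  # arbitrary illustrative cap


class DeviceIdBroker:
    def __init__(self):
        self.pool = [secrets.randbits(64) for _ in range(MAX_CONTEXT_IDS)]
        self.assignment: dict[str, int] = {}

    def id_for_app(self, app: str) -> int:
        # Sticky: an app always sees the same ID, and the fixed pool bounds how
        # many distinct IDs exist per device, which bounds the cardinality leak.
        if app not in self.assignment:
            self.assignment[app] = self.pool[len(self.assignment) % MAX_CONTEXT_IDS]
        return self.assignment[app]
```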

@csharrison
Contributor

One other possible solution here involves a stricter privacy budget scheme. Currently, source fanout queries only consume budget for the source site, and vice versa for trigger fanout queries. However, I believe this attack is stopped if source fanout queries consume budget from every impacted site in the query, including the source site and all advertiser sites.

Here's how this works:

  • Any interesting query that we intend to protect requires at least one side of the attribution to have an "unknown" match key. Otherwise an MKP can simply use the known match keys to aggregate without privacy.
  • A malicious MKP can only duplicate events from known match keys and replicate them with a fake site's provenance.
  • So every joined, attributed event that participates in an aggregation includes at least one site whose match key is unknown, and that site gets its budget consumed.
  • Therefore replicating events with known match keys cannot consume unlimited budget.

While I think this solves the problem listed here, it introduces a lot of potential problems that may not be feasible to fix. For instance, if a report collector can consume budget from many sites all at once, we'd need to protect those sites against exhaustion attacks coming from malicious report collectors.
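
A toy sketch of that stricter accounting (the ledger interface is hypothetical): a query is charged against every site represented in it, which is what stops replication of known-match-key events from paying off, but also what creates the exhaustion-attack surface mentioned above:

```python
# Charge a source-fanout query against every impacted site, not just the
# querying source site.
class BudgetLedger:
    def __init__(self, epsilon_per_site: float = 1.0):
        self.epsilon_per_site = epsilon_per_site
        self.remaining: dict[str, float] = {}

    def charge_query(self, reports, epsilon: float) -> bool:
        impacted_sites = {r["site"] for r in reports}  # source site and all trigger/advertiser sites
        for site in impacted_sites:
            self.remaining.setdefault(site, self.epsilon_per_site)
        if any(self.remaining[site] < epsilon for site in impacted_sites):
            return False  # some impacted site is out of budget: reject the query
        for site in impacted_sites:
            self.remaining[site] -= epsilon
        return True
```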

@csharrison
Contributor

I wanted to mention another (partial, incomplete) solution to this problem. Imagine a reduced/parallel version of IPA that:

  1. Supports the setMatchKey functionality
  2. Only supports trigger-fanout queries (and thus only requires consuming budget from advertiser sites)
  3. Only (natively) supports registering triggers (not sources) with the API-set match keys

In this setting, I believe this issue is more or less moot.

(2) means that the only replay attacks in scope here are ones where the match key provider is colluding with the advertiser site (i.e. for every trigger event the MKP + advertiser know the cleartext match key).

(3) means that the API is not providing x-site match keys across sites for the purpose of source registration. That means all sources that could be registered (eligible for x-device) need to have the match key known "in the clear" / generated offline.

The combination of (2) and (3) means that if an adversary could issue a replay attack by colluding with an advertiser, they could do so without using the IPA API at all. The downside of course is that total cross-device coverage is naturally reduced.
