
Dynamic but K-Anon Creative URLs for Server Side BA Auctions #729

Open

thegreatfatzby opened this issue Jul 27, 2023 · 25 comments

@thegreatfatzby
Contributor

One of the fun challenges we're working on is the need to pre-declare creatives in the IG at IG joining time, and then being constrained to those at auction time. Update URL frequency could be tweaked to make this better, but fundamentally it's a challenge.

It's also interesting because I think that requirement is mainly operational: in theory, if the creative URL were not pre-declared but still met K-thresholds, that wouldn't impact privacy; but that K evaluation at auction time would be infeasible if all browsers had to query for all object Ks at auction time. Rather than doing that, K is updated periodically based on the declared creatives, allowing the K evaluation to happen locally at auction time.

I would think that in the evolving BA world this could be loosened with some smart coordination. If we did a setup something like this:

  1. K Servers are placed in various DCs worldwide. They shard K-object counts by owner, have some level of redundancy within each cluster, and are eventually consistent across DCs.
  2. The service adds something for an owner's Buyer* Front End TEEs to pull (service or pub/sub; I'd go pub/sub for start up + incremental, but I digress) K-updates into memory.
  3. The BFE code pulls that down and includes it in the response to the SFE (note it would be the Trusted code doing this, not the bidding functions).
  4. The SFE could use the value from the response for K-filtering (I'd also like to see multiple bids returned, which would help with finding optimal bids in place of K-misses).
  5. A callback from the SFE to the BFE would result in an incrementing of K as needed.
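Roughly, in pseudocode (every name here is invented just to illustrate the flow; none of this is an existing B&A API):

// Hypothetical sketch only: every name below is invented.
// (1)-(2) The trusted BFE code keeps an eventually consistent, in-memory view of K counts,
// fed by whatever pub/sub or polling mechanism the K servers expose.
const kAnonCache = new Map(); // creativeUrl -> current K count

function onKUpdate(creativeUrl, k) {
  kAnonCache.set(creativeUrl, k);
}

// (3) When building its response to the SFE, the trusted code (not generateBid)
// attaches the cached K count to each candidate bid.
function buildBfeResponse(candidateBids) {
  return candidateBids.map((bid) => ({ ...bid, kCount: kAnonCache.get(bid.renderUrl) ?? 0 }));
}

// (4) The SFE filters on the threshold locally; returning multiple bids means a
// K-miss on the top bid isn't a total loss.
function sfeKFilter(bids, kThreshold) {
  return bids.filter((b) => b.kCount >= kThreshold);
}

// (5) A callback from the SFE to the BFE then increments K for the rendered creative
// back at the K servers (not shown here).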

Could we allow the Buyer Front End/Bidding Functions to return creative URLs without pre-declaration, but still apply the K-anon threshold locally?

*In the long run I would push for this being "Interest Group Owners", as I hope to see some of the publisher side IG flexibility discussed in #686.

@michaelkleber
Collaborator

This would certainly be a big difference in what an Interest Group "means". If the browser wanted to tell the user what an IG meant, for example, then an IG that's holding on to a dozen ads could readily be "summarized" by showing screenshots of the ads it might lead to. An IG that could lead to the display of any of a billion possible ads is a quite different beast.

I'm not saying that the change you ask for is impossible at the technical level. But it would be a substantial difference at a conceptual level from the API we've been working on and discussing for the past three years.

@thegreatfatzby
Contributor Author

Understood on the principles level.

On the tactical thing of showing the "potential ads" and having that not be excessively large, could that part of the feature be achieved'ish with something like:

  • IG provides a "selectionFunction" or something that the browser can call to pull down up to X ads.
  • The BFE limits the number of creatives to X that can be added to the ads list for bidding at auction time.
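As a sketch of the join-time shape I have in mind (selectionFunction and maxAds are made-up fields here, not part of the current joinAdInterestGroup API):

// Hypothetical: 'selectionFunction' and 'maxAds' do not exist today; this is just
// to illustrate the shape of the idea.
const myGroup = {
  owner: 'https://dsp.example',
  name: 'running-shoes',
  biddingLogicUrl: 'https://dsp.example/bid.js',
  // Instead of a fixed, pre-declared ads list, the browser could call this
  // endpoint to pull down up to maxAds creatives when it needs to show
  // "what this IG could mean" to the user.
  selectionFunction: 'https://dsp.example/select-ads',
  maxAds: 12,
};
navigator.joinAdInterestGroup(myGroup, 30 * 24 * 60 * 60 /* seconds */);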

@michaelkleber
Collaborator

Hmm interesting. That does seem like it would achieve the goal. But if you have that selectionFunction, then why not call that function and stick its result in the ads field of the IG, getting us back to where we started?

@thegreatfatzby
Contributor Author

thegreatfatzby commented Aug 7, 2023

Welllll I agree it's logically equivalent but I think it would help drastically with a few specific things I'm chewing on:

  • Syncing of Information: not having to pre-declare the creatives you want to target would save a significant sync between the DSP Servers and many clients, moving that to a server-to-server operation that we, and I think most DSPs, would be able to control and tune better. Being able to respond in basically-real-time to DSP Client updates of Campaigns and Creatives is a major feature that isn't even generally thought about these days, and adding more sync time will put sand in their gears as they try to update Campaigns. This is especially relevant for clients who update their settings programmatically in response to optimization pipelines, as it's (edit) not just an ad manager logging in at the beginning of the day, checking a few things, and tweaking a few settings. It also seems like many industry players are experimenting with DALL-E-like Ad Creative generation, which would definitely add to the value of dynamic updates.
  • B&A Payload Optimization: not having to send the creative URLs or even IDs in with the payload would save a lot of space.
  • Decentralized K-Anon: I don't think this is in scope for 1.0, but moving the K-server to the server side could lead in nicely to TEE based K-servers that Ad Techs can own and operate as their own critical infrastructure, rather than us all having to call a K-operator at 3 AM when things break.

I think, to put it a bit abstractly, the more we can keep this distributed system in the 100s-1000s of nodes that have to do regular cache syncs, debugging, operations, etc., the simpler this will be, and I suspect we'll a) be able to leverage the accumulated experience of ad tech in server side operations, b) avoid interesting new bugs with data syncing, and c) save on bandwidth overall, especially at auction time. I think Individuals' User Data staying in the browser makes a lot of sense and is worth the tradeoff; Ad Tech Objects like Creatives seem less clearly worth the tradeoff.

When I would not hold this opinion is if Edge Computing (not Edge browser, but Edge Compute in general) gets to a point where Ad Tech can more reliably carry out operations there and meet feature and SLA needs; then we'd be able to centralize on the browser and develop operational things like data syncs and debugging exclusively for the edge. My best guess is that for now B & A will continue to handle a lot of traffic, and an ad tech syncing Ad Client Object Updates to the Edge so it can go back to the server carries a) additional challenges and b) reduced privacy impact when compared to the Individuals' User Data (userBiddingSignals, prevWins, Shared Storage, etc.), which makes pretty good sense to keep client side in this paradigm we're building towards.

@thegreatfatzby
Contributor Author

Hey @michaelkleber, in updating some stuff in the doc based on my comments from today's call, I thought I'd a) frame some of this as data syncing more abstractly, which I want to present as just helpful for aligning, or at least for making it clear how I'm seeing this, and then also b) say that I don't think the narrative loss is necessarily as big as we're thinking, even on the specific piece we got to.

Narrative Distinction

On the narrative side I'm going to split hairs a little. Given dynamic renderURLs as defined here:

  • The narrative about being able to explain why an ad was shown to you in the past, would (or at least could) be maintained through various browser tracking of state at ad time.
  • The narrative of what could happen with that IG in the future, or alternatively what it means for you to be in that IG at that given point in time, does change for sure.

But! It can already change every 24 hrs, and we've talked about having it be less than that. So I think there's room for creative problem solving here to keep the "current meaning of the IG" piece pretty similar to its current meaning.

Distributed System: Types of Distributed Data in Ads

Abstractly, I've been thinking of this as a distributed system with two types of data, user data (user bidding signals, prevWins, etc) and business data (Line Items, Creatives, etc).

Distributed User Data

I think there’s an implicit long term goal to make the client the source of truth for user info (which I am aligned with, client appropriately defined), but various things (BA, non-TEE-KV) implicitly acknowledge we’re (we=tech) not ready for that yet.

Distributed Business Data

Even if we could make the client source of truth for user data, at that point we’d still have to deal with the business objects, for which the server would be the source of truth almost for sure since they have to be managed in one place and acted on in many (even today).

So for right now we’re in a world where the server is still the source of truth for all the data the client is syncing.

Data Syncing: Lambda

If we're agreed that, for now, we are syncing data in a distributed system, then the question becomes how best to do that syncing.

Often in data systems you'll do a "Lambda Architecture", where you have batch updates that do a full refresh, and then allow some incremental updates, with true-ups repeating at some cadence dictated by reality and the nature of the system. I'm hand waving here, but it's good enough.

In that framing, right now we only offer full batch updates via the updateUrl; we could offer batch updates and incremental updates via something.

One way information, in particular the ads attribute, could be incrementally updated is via the bidding function. This could work by a) keeping the updateUrl as the thing that updates the client side cache of information (bidding signals, ads, etc.) as a "full refresh", and b) specifically for the ads attribute, letting the renderUrls be "incrementally updated" if they come in via the bidding function.
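To sketch what b) could look like from the buyer's point of view (the newAds field is invented purely for illustration; generateBid returns no such thing today):

// Hypothetical: 'newAds' is an invented return field, not part of generateBid today.
function generateBid(interestGroup, auctionSignals, perBuyerSignals,
                     trustedBiddingSignals, browserSignals) {
  // A creative that was launched after the IG's last full refresh via updateUrl.
  const freshRenderUrl = 'https://cdn.dsp.example/creatives/launched-an-hour-ago.html';
  return {
    ad: { campaign: 123 },
    bid: 1.25,
    render: freshRenderUrl,
    // Invented: ask the browser to merge this into interestGroup.ads after the
    // auction, still subject to the local k-anon threshold before it can win.
    newAds: [{ renderUrl: freshRenderUrl }],
  };
}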

Privacy Achieved

So, as the "incremental update" comes in, if there is a cache miss, just send an increment to the k-source-of-truth and move on to the next ad locally, which isn't really much of a loss.

Narrative ALSO Achieved???????????

Assuming the full batch refresh is truly that, with minor incremental updates over the next period, you could always just invoke updateUrl to sync the current source of truth. So, when the user says “show me the IG meaning”, you could invoke the updateUrl and at that point it would have “the most up to date version of what the IG means”.

Now, assuming that updateUrl is faithful as a full refresh, which I think the ad tech has at least some incentive to ensure, at that point you're still giving the fully synced state of "what is this IG", exactly as before. Yes it can change in the future, but the only difference is that now it can change incrementally, rather than only once every 24 (x) hours.

@thegreatfatzby
Contributor Author

This might be implicit in ^, but thinking about it a bit, I would think that building a distinction between user objects and business objects into any future designs, and in particular the understanding that the user agent may (likely will) not be able to be the source of truth for the business objects and will rather be a cache for them, would help in making this successful.

Although actually, having typed that out loud, I'm wondering what your opinion on that general statement is @michaelkleber , in particular the user agent not being the source of truth for business objects and rather being a cache for them. I suppose if we limit this to the most-original-ideally-private vision for on device with web bundles, how were you thinking about sources of truth vs caches for business objects in a distributed system sense? Again, typing out loud, was it something like:

  • User Objects: source of truth is user agent, data never cached anywhere else.
  • Business Objects: source of truth is ad tech servers, batch full sync once a day to user agent cache.
  • Interest Group Object: source of truth for the linking of the two?

@michaelkleber
Collaborator

Hey Isaac, thanks for writing up your thoughts here and for the discussion in last week's call.

I've been chewing on your appealing idea of "incremental updates produced by the bidding function", and I'm afraid that really would be a big change to the privacy model, for a reason that I did not think of in the moment when you proposed it.

Right now the contents of an Interest Group only reflect information that was known about the user on a single site (the site where they joined the IG). The bidding function, however, gets to process information from the IG join site and contextual information from the site running the auction. So if the bidding function becomes able to persist information in the IG and then that information is visible next time that IG bids, then the IG can accumulate user information from more and more sites over time (in the form of a growing list of render URLs with arbitrary meaning).

But! Is there any chance that the way you expect to use this feature would actually be partitioned by site? That is: If a user joins an IG on site X and then the IG somehow spontaneously generates a novel renderURL during an auction on site Y, do you expect to use that renderURL for future auctions only on site Y, or also on site Z? The reuse on site Z is the new problematic event, but reuse only on the same site seems feasible.

(This per-site partitioning idea doesn't really play well with your "Lambda architecture / incremental update / local cache of global state" point of view, or at least I haven't yet figured out a way to hold those two in my head at the same time.)

@thegreatfatzby
Contributor Author

So we could limit the use, but I wonder if we can modify the proposal a bit so that the box that generates the new renderUrls only has the one input and not two, and we don't have the marginal privacy cost.

Model vs Implementation

I hope that what you're saying, or maybe would say with prodding, is actually two related but separate things:

  1. "I find the data syncing model raised appealing as an abstract description of what we're trying to accomplish."
    1a. "This is helpful, b/c models can help people manage complexity through abstraction, which in turn helps drive thinking in a structured way to solve problems in a way that gets as far on the pareto curve as we can".
  2. "I find the idea of implementing the incremental sync through generateBid appealing, but sadly there is a clear privacy leak in doing so. ¡Que lastima!"

¿Jah?

Alternative Implementation

If jah, then I agree we should look for an alternative implementation to remove the marginal leak. (I also like the idea of formalizing different pieces of the distributed architecture we're working towards, which isn't to say you don't know it already, but even just getting it out appeals to me).

Ultimately there needs to be some hook that would allow the caller to push (or have pulled) new ads into the IG. We have many hooks, or could invent new ones, but new ones cost more obvs, so let's try to start with existing ones.

Keep in generateBid, Work Around

So we could just limit the functionality to keep it in generateBid.

Double Key

One would be as you suggest, which I understand to be doing the "double partitioning" thing, where the key for the "incrementally synced renderUrl" is now (IG, TLD of ad), and presumably would only be passed to the generateBid function if invoked from TLD = that TLD.

This would be better! But yes, suboptimal.

Use a different hook that is less problematic

So if the issue with generateBid is that it's a box with 2 inputs instead of one, let's make it a box with one input instead of 2.

trustedBiddingSignals Hook

I think we might already have one: why not let the call to the KV server return renderUrls that would be added incrementally to the IG.ads via some known response element? Right now the perInterestGroupData element is indexed by IG name, and the framework will recognize a priorityVector element within it and do prioritization based on that. So the equivalent would be that the framework recognizes something like perInterestGroupData.IGName.ads, and in addition to passing it along to the generateBid function, it merges that into the ads element.
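Concretely, something like this response shape (the ads element under perInterestGroupData is the new, hypothetical part; priorityVector is the existing precedent):

{ "keys": { "key1": "..." },
  "perInterestGroupData": {
    "myFavoriteInterestGroup": {
      "priorityVector": { "signal1": 1.0 },
      "ads": [
        { "renderUrl": "https://cdn.dsp.example/creatives/new-123.html",
          "adRenderId": "new-123",
          "metadata": { "campaign": 123 } }
      ]
    }
  }
}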

I can see some interesting merge issues, like what happens if the renderUrl is the same but it has different ad metadata, buyerReportingID, domains for pixels, etc. Maybe for now let's say it'll merge either on (adRenderId) if available, or on (renderUrl, buyerReportingID, buyerSellerReportingID). I dunno, something something.

@thegreatfatzby
Contributor Author

@michaelkleber I put together a first round of analysis for this.

Scope

One of the ways the creative view of the browser can get out of sync with the home system is new creatives being made that aren't immediately available for on-device (or even TEE based) auctions. For the sake of getting something out I'm limiting analysis to that for this post.

Background

Creatives

Our creative object lives in a table (multiple tables, but whatever) that has an auto-incrementing primary key ID and records a created_on timestamp which cannot change.

Creative Serving

We have a table that records aggregates for impressions, including by creative, along with an hour, country, buyer ID, etc. (Note that the table I'm looking at now does not include user counts, I'll have to do more analysis if we want that).

Target Question

So the first question I wanted to answer is something like "given a 24 hour window of static-ness after IG update, what is the impact on the serving of newly created creatives?". Some questions this can resolve to include:

  • How many imps would be immediately thrown out (and not even counted for k-anon)?
  • How much spend would that account for?
  • How many creatives are impacted?
  • Across how many buyers, and across how much geography?

Setup

Let's call an Even24HourWindow a start and end date that is 24 hours wide, and with minutes and seconds set to 0. So:

  • 2023-11-11 01 --> 2023-11-12 01
  • 2023-11-11 02 --> 2023-11-12 02
    And so on.

For each Even24HourWindow in the last 3 days, find the min(id) and max(id) of creatives made in that period; then find the total impressions served for those creatives in that same window.

(Note that I did the min(id) and max(id) as a convenience to get started, but I'm aware it's not strictly 100% perfect, as the ID being auto-incremented does not 100% mean you can say no IDs on the boundary are creeping in, but I think for the first round it should be preeeettttyyy directionally correct.)
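In pseudocode the window enumeration is just this (a sketch; the actual min(id)/max(id) and impression/spend lookups run against our internal tables and aren't shown):

// Sketch of the Even24HourWindow enumeration used for the analysis; the creative
// id-range and impression/spend lookups against internal tables are not shown.
function even24HourWindows(daysBack, now = new Date()) {
  const hourMs = 60 * 60 * 1000;
  const latestEnd = new Date(now);
  latestEnd.setMinutes(0, 0, 0); // minutes and seconds set to 0
  const windows = [];
  for (let h = 0; h < daysBack * 24; h++) {
    const end = new Date(latestEnd.getTime() - h * hourMs);
    const start = new Date(end.getTime() - 24 * hourMs);
    windows.push({ start, end });
  }
  return windows;
}
// For each window: min(id)/max(id) of creatives created in [start, end), then total
// impressions and spend served for that id range within the same window.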

Results

I'll upload my little spreadsheet, and can upload a privatized version of the script if desired, but a decent first draft is giving something like this:

  • For recent weekdays, I see generally between 20 million and 60 million served impressions for creatives created in the most recent 24 hour period. These impressions are worth between $100K and $200K in spend. (Highest was a hair over $200K, lowest a hair over $100K).
  • On roughly 4-5K distinct creatives, across many buyers (> 150), across many countries.
  • The numbers are lower on the weekend, as is to be expected.

(I tried reducing the window to 4 hours: the losses are reduced but still significant. Some windows have low revenue impacts (< $1K) but others are at or over $10K, and since the windows are smaller we'd have to multiply by 6 to compare apples to apples. The numbers still come out lower, but are still significant enough.)

Relative vs Absolute Value

To be somewhat complete and give this relative to totals (which I removed from the results as it seemed a little tooo precise and generally useful), it seems we'll get ~200K new creatives in a day, and the impression impact amounts to < 1% (0.2%).

However, we get complaints about the smallest of discrepancies. Losing $200K of platform spend a day would be quite serious overall, and likely serious to enough clients that it would get their attention. It's also likely that some of these cases may have special meaning, as creatives have a limited shelf life in some cases (my 24 hour sale creative has a preeettty limited shelf life; my Coors Super Bowl ad has a longer life but might still be impacted a lot by losing a day out of a 7 day campaign) (although to be fair I'd have to dig way more on this to get, like, average lifespan).

Non-Empirical Comments

While there are other measures I'm thinking about, I wanted to add a more qualitative piece of "evidence". Since the numbers are big enough that I do think people will notice, we will get bugs. And not just any bug, but bugs that are driven by data inconsistency in a distributed system. These are among the hardest issues to debug even with full access to logs, which we won't have.

Another interesting piece here is that this request coming from a browser means it is potentially fairly high scale with no real user authentication, and so fits more into an auction/bidding type of system scenario. These systems are expensive to build/maintain/etc. If we can avoid another one of those we should.

@thegreatfatzby
Contributor Author

Lost Imps Revenue Due to Static Creative URLs Analysis 20231113.xlsx

This isn't perfect, I should redo it with something that doesn't start "now", as some of the top rows are not really worth looking at, but again I just want to get started with directional data and hear thoughts about other useful evidence.

@michaelkleber
Collaborator

I like your suggestion to use the trustedBiddingSignals response as a chance to realize that an IG would benefit from updating. On the other hand, that server sending back new ads along with signals for the current auction seems like a risky morass that I would very much want to avoid if we can find any other way to accomplish the goals.

As a happy medium, perhaps we could use the trustedBiddingSignals to trigger the existing updateURL mechanism? Note that the response is a JSON object that includes structured 'perInterestGroupData'. We could add a field to that response to trigger the update:

{ 'keys' : {...},
  'perInterestGroupData': {
    'myFavoriteInterestGroup': {'updateIfOlderThan': 60}
  }
}

Then if the IG called "myFavoriteInterestGroup" has not been updated in the past 60 seconds, we could schedule it for updating when this auction ends, alongside any other IGs that have not been updated in the past 86400 seconds as usual.

Since your update can change IG keys as well as deposit new ads, it would be easy for your trustedBiddingSignals request to effectively include a hash of the ads the IG is currently carrying around. This should, I think, make it as easy as possible for you to detect and address your concerns about consistency in distributed systems.
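For instance, something like this (the adsHash key naming is made up just for illustration; trustedBiddingSignalsKeys is the existing mechanism being reused):

// Illustration only: the 'adsHash:' key naming is invented.
const myGroup = {
  owner: 'https://dsp.example',
  name: 'myFavoriteInterestGroup',
  updateUrl: 'https://dsp.example/ig-update',
  // A key that encodes a hash of the ads currently stored in this IG, so the
  // trusted bidding signals server can tell per-request whether the browser's
  // copy is stale and respond with an appropriate updateIfOlderThan value.
  trustedBiddingSignalsKeys: ['campaign-123', 'adsHash:5f2a9c'],
  ads: [ /* hashing this list yields '5f2a9c' */ ],
};
// Each update (daily or expedited) rewrites both the ads list and the adsHash key.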

@thegreatfatzby
Contributor Author

thegreatfatzby commented Nov 15, 2023

I will run some more queries and ponder more, but can you help me understand the morass that worries you so? Is it a privacy thing? System complexity thing?

EDIT: want to add, part of the reason I'd like to understand that better is that I'd like to understand if you have a "prioritization concern", which is valid but one type of thing, vs a more structural/architectural concern.

@michaelkleber
Collaborator

My immediate design concerns are:

  1. Parsing and handling the Trusted Bidding Signals response blocks the rest of the auction, so piling data in there that is not required on the critical path is bad for performance reasons.

  2. Performing independent database updates to record the outcome of this auction and inputs to a future auction at the same time is needless complexity.

  3. The Trusted Bidding Signals lookup should be an idempotent request. If it might perform a browser-side state change then it's not, and we need to worry about response reuse, request reissue with disagreeing responses, merge conflicts, etc.

I'm not saying that any of these mean the design you imagined is impossible, just that it adds a bunch of complexity that seems unnecessary.

(At the same time, a malicious Trusted Bidding Signals server could use your design as an opportunity to push information into the Interest Group based on behavior on multiple sites. But even if we were discussing a future with TEE'd K/V servers with no added privacy risk, it still seems like an unpleasant design choice.)

@thegreatfatzby
Contributor Author

Sorry still chewing but on your last paragraph, what's the unpleasant design choice for the long run TEE based k/v future?

@michaelkleber
Collaborator

Sorry, I mean that (1) and (2) and (3) are still unpleasant even if we sweep the privacy risk under the rug.

@thegreatfatzby
Contributor Author

On Demand Refresh: I Mean, Yeah, Let's do that Too

@michaelkleber I do think having a "please refresh when this auction is over" lever is a clear improvement over the current situation from a functionality perspective; I see no argument otherwise. I think that even when I convince you that having an incremental option available is both wise and less morass'y than suggested, having the "On Demand Refresh" available will still be valuable as an API choice too.

So I think I'd say let's at least start with that, especially if it's something we could do sooner than later.

But, Still

On the call I said "that doesn't quite get you there", and I maintain that. I think it comes down to two related things:

  1. Full Replication Consistency as Blocking Requirement for Client Facing Functionality: our new architecture drastically increases the number of nodes participating in auctions. By requiring pre-declaration, even if "pre" might be shrunk to "one auction + hopefully fast HTTP request", we are still blocking functionality on full replication consistency between lots of nodes. That is hard to do robustly and efficiently, when something like lazy replication consistency, which is easier, cheaper, and more flexible, would work.
  2. Resource Usage: My desire to get resource usage into our discussions as a first class citizen. Pull-based state sync requests from lots of heterogeneous nodes create lots of opportunities for cache stampedes, inconsistencies, and excess resource usage.

(These are quite the Morass'i, in my view).

Privacy Concerns Detailed?

I definitely understand that the BYOS KV can return a URL derived from cross partition information. Is there a re-identification risk here, or is this a case where the mixing of the signals across partitions, even if output gated with k-anon, is not acceptable?

Morass'i

Parsing and DB Update Timing: Ehhhhhh

Morass'i 1 and 2 I think are implementation detail concerns that need not be coupled to the design or feature.

  1. If the update is incremental the additional parsing overhead needn't be large. Some cap could be put in place here.
  2. The word "needless" is doing a lot of work here :), but even if it is, given the flexibility with consistency we'd have here the full local persistence can be queued for after the auction or something.

Idempotence? Or Referential Transparency?

I like a good architectural principles argument, but I think I need to understand this one better.

As I understand it, "Idempotence" w/r/t to the KV request/response, would mean: "for a given request to the KV server, assuming we hold variables like Time and AllOtherAuctionStateLikeBudget_CampaignActiveStatus_etc constant, the exact same request would give the same response."

Double-checking the Buyer KV request doc, it has as variables: experiment ID, TLD, IG names, and keys from the IGs. None of those would be impacted if we added an entry to the ads list... so wouldn't it still be idempotent?

As I understand it the request would no longer be Referentially Transparent, and maybe that's what the merge conflicts might be referring to?

What I'll Do

I will try to get more numbers to back up my resource usage concerns, but I still think we are requiring a level of consistency here that isn't necessary to achieve our goals, and which reduces functionality in a way that will inhibit adoption.

@caraitto
Collaborator

One thought about the proposal in #729 (comment): what happens if we have failed updates?

Currently, for failed updates (i.e. if there was a network error [0]), the interest group becomes eligible for updating again an hour after the last update attempt, instead of waiting a day. We don't modify the last_updated time for failed updates, since they weren't actually updated.

I think the idea was that making the group immediately eligible to retry might DoS the update server with too many update requests? (Although, if the update is served as static content, where the server does minimal work for each request, perhaps the risk of DoS isn't too high?)

IIUC, reducing server load was one of the main reasons we had updates only happen once a day to begin with, rather than updating all post auction / manually triggered IGs all the time? The other would be reducing the number of network calls the client needs to make, reducing client bandwidth usage. Also, since updates need to happen close in time to a trigger event (post auction or manual trigger), we have a higher risk of canceling updates due to running out of time (10 min per round of updating) if there are more groups to update.

So, if a group update recently failed, should this mechanism allow it to skip the 1 hour wait? Or would it make more sense to ignore this new bidding signals directive for interest groups whose last update attempt ended in failure?

[0] Although Internet disconnected is special -- the update gets to retry right away under the idea that we never really made a real request on the network while disconnected. For invalid JSON responses (bad JSON or valid JSON, but not a valid interest group), we delay a day until the next allowed attempt, but still don't update last_updated.

caraitto added a commit to caraitto/turtledove that referenced this issue Mar 22, 2024
@caraitto
Collaborator

I decided to go with allowing the expedited updates to go through even if the last update failed. I also changed updateIfOlderThan to updateIfOlderThanMs, as web standard conventions use milliseconds.
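With the rename, the response from the earlier example would look something like this (the one-hour value is just illustrative):

{ "keys": { "key1": "..." },
  "perInterestGroupData": {
    "myFavoriteInterestGroup": { "updateIfOlderThanMs": 3600000 }
  }
}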

@caraitto
Collaborator

caraitto commented Apr 1, 2024

FYI, I'm setting a floor of 10 minutes -- any updateIfOlderThanMs less than 10 minutes (600,000 ms) will be capped to 10 minutes.

@caraitto
Collaborator

Sorry for the late update -- this feature landed in Chrome 125.0.6402.0 behind the InterestGroupUpdateIfOlderThan feature flag.

To enable the feature for development, you'll need to start Chrome from the command line (in the manner specific to your platform; see [0]) like so:

chrome --enable-features=InterestGroupUpdateIfOlderThan

(Multiple features may be passed to --enable-features, separated by commas).

We are working on getting the flag flip gradually rolled out, first to canary / dev, and then to beta and stable.

There is no feature detection for this feature -- if updateIfOlderThanMs is passed to a browser where InterestGroupUpdateIfOlderThan is off, then updateIfOlderThanMs is simply ignored.

[0] https://www.chromium.org/developers/how-tos/run-chromium-with-flags/

@caraitto
Collaborator

Rollout has started to 50% of canary / dev -- @thegreatfatzby, would you be able to help test this feature by serving updateIfOlderThanMs responses? That would ensure the code is getting exercised in those channels. Thanks :)

@caraitto
Collaborator

caraitto commented May 7, 2024

@thegreatfatzby Were you able to try testing this feature? I haven't seen any traffic to the feature in dashboards yet -- we'd like to hold off on rolling out more broadly until we get some traffic for canary and dev. Thanks :)

JensenPaul pushed a commit that referenced this issue May 9, 2024
* Add `updateIfOlderThanMs`

Addresses #729

* Update FLEDGE.md

Fix invalid JSON (' instead of "), and add a note about clamping to 10 minutes.

* Update FLEDGE.md
@ccharnay67

Hello,

We (Criteo) recently started setting updateIfOlderThanMs and toying a bit with its value. We would have expected to see more calls to our updateUrl as a result, but that does not seem to be the case so far.

Has this feature been widely rolled out in Chrome already? Or is it expected that it has no effect at the moment?

@caraitto
Collaborator

We've rolled out as far as 50% of canary, dev, and beta, but we haven't started rolling out to Chrome stable yet. I am now seeing that there is some (quite low) usage of the feature. So, you might see a very small increase in update traffic? But, I'd expect it to not be a large increase until the feature rolls out to Chrome stable, which is currently pending on approvals.

@ccharnay67

Thanks for the update @caraitto. We didn't really see an increase in update traffic, but it could also be due to the fact we're not using it on a large part of our traffic at the moment. We'll wait for a wider rollout of the feature.
