
Edge MPC #154

Open
jonasz opened this issue Mar 16, 2021 · 1 comment

Comments

jonasz (Contributor) commented Mar 16, 2021

On a couple of occasions there has been discussion about the need to optimize on-device resource usage, and the possibility of offloading computations to a server.

One idea we discussed internally at RTB House is the approach of "Edge MPC" servers. As the name suggests, these servers could perform computations that are feasible in an MPC setting.

For example, generate_bid could ask the browser to offload a matrix multiplication to the Edge MPC servers:

let input = (some computation);
let output = await navigator.edge_mpc.matrix_mul(input, 'matrix_xyz_14889');

Prior to that call, the owner of generate_bid would need a way to register 'matrix_xyz_14889' on the Edge MPC servers.
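
Purely as an illustration, registration might happen from the buyer's backend ahead of the auction. The endpoint, payload shape, and matrix id scheme below are assumptions for the sketch, not part of the proposal:

// Hypothetical registration call, illustration only: the URL and payload
// shape are assumptions, not part of any spec.
const matrix = [
  [0.12, -0.07, 0.31],
  [0.05, 0.22, -0.14],
];
await fetch('https://edge-mpc.example/v1/matrices/matrix_xyz_14889', {
  method: 'PUT',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({owner: 'buyer.example', rows: matrix}),
});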

Some thoughts:

  • All calls to navigator.edge_mpc.* (from all bidding_fns) could be batched into a single roundtrip to the Edge MPC server.
  • The browser may specify how many sequential calls are allowed per bidding_fn. (If one call depends on the output of another, that would of course require two roundtrips.)
  • The availability of navigator.edge_mpc.* may be optional, and it'd be up to generate_bid to decide what to do if it's not available. In that sense, it'd be a best-effort optimization.
  • It seems that shallow neural networks could be evaluated with a couple of calls to Edge MPC, trading CPU usage for network latency (see the sketch below).
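
As a rough sketch of the last point, assuming a matrix_mul-style API as above (navigator.edge_mpc, the matrix ids, and scoreLocally() are all hypothetical names), a two-layer network could be evaluated with two roundtrips, falling back to on-device code when the API is unavailable:

// Illustration only: evaluating a small two-layer network via Edge MPC.
async function scoreWithEdgeMpc(features) {
  if (!navigator.edge_mpc) {
    return scoreLocally(features);  // best-effort: fall back to on-device code
  }
  // First roundtrip (could be batched with other bidding_fns' calls).
  const hidden = await navigator.edge_mpc.matrix_mul(features, 'layer1_weights');
  const activated = hidden.map(x => Math.max(0, x));  // ReLU applied locally
  // Second roundtrip, since it depends on the first call's output.
  return navigator.edge_mpc.matrix_mul(activated, 'layer2_weights');
}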

This is just a high-level idea that surely requires further work, but maybe it's worth giving it some thought.

Best regards,
Jonasz

p-j-l (Contributor) commented Mar 24, 2021

We’ve also been thinking along these lines, thanks for the post.

Rather than planning on running MPC, we were wondering whether we could address some of these questions by moving the entire code that runs inside worklets to a trusted server instead. This has the potential to improve the user experience.

One way of looking at this is that this could be a basic version of a SPARROW server.

The example that we’re thinking of here is executing JS functions that have no side effects, which is the case for the proposed FLEDGE bidding functions. A simple way to do this is for the browser to send the function and arguments to the trusted server for evaluation.

Taking generate_bid as an example, the browser would send a request to the trusted server with the interest_group, auction_signals, per_buyer_signals, trusted_bidding_signals, and browser_signals. Sending one request per interest_group per bidder would be prohibitive, and we already trust the server to handle multiple interest groups for a single user, so instead the client could send a single request for many interest groups.
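
For illustration, such a batched request could look roughly like the following. The endpoint, envelope, and helper names are assumptions for the sketch; the fields simply mirror the generate_bid arguments:

// Illustration only: one batched request covering many interest groups for a
// single bidder. The endpoint and payload shape are assumptions, not a spec.
async function fetchBidsFromTrustedServer(interestGroups, auctionSignals,
                                          perBuyerSignals, browserSignals) {
  const request = {
    auction_signals: auctionSignals,
    per_buyer_signals: perBuyerSignals,
    browser_signals: browserSignals,
    interest_groups: interestGroups.map(ig => ({
      interest_group: ig,
      trusted_bidding_signals: ig.trustedBiddingSignals,  // hypothetical field
    })),
  };
  const response = await fetch('https://trusted-bidder.example/generate_bids', {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify(request),
  });
  return response.json();  // expected: one bid per interest group
}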

There are necessarily going to be tradeoffs in resources here. A trusted server would introduce additional overhead in:

  • Network bandwidth to pass arguments and receive results for running the function.
  • Server compute resources.
  • Maintenance and upkeep of these new servers.
  • Overall complexity of the system.
  • Network bandwidth to upload the JS function to the trusted server for evaluation. This may not apply if functions are uploaded separately.

These are balanced against the amount of browser resources required to run worklet functions and therefore the user experience effects. We’re exploring this tradeoff now.

We’re in the process of exploring what it means for a server to be trusted, how that trust could be ensured, and how side-effect free functions can be run securely in a series of posts here.

For reference, this is the issue about resource limits: #132

Thanks,
Phil
