Measurement implementation guide

1. Before you begin

Consider what's most important to your business based on the following types of customers and use cases, and ensure that your integration and your experiment reflect those priorities. Those criteria could include:

  • Customer type: large versus small advertisers, agencies, vertical type, geo footprint
  • Campaign objectives and conversion types: user acquisition, customer retention, purchases, revenue
  • Use cases: reporting, ROI analysis, bid optimization

2. Use cases

We often see summary reports used for reporting and event-level reports used for optimization (and possibly for reporting, as auxiliary data). To maximize measurement capabilities, combine event-level and aggregate-level reports, drawing for example on Google Ads' methodology and Privacy Sandbox optimization research.

3. General

Reporting
  Baseline
    • Using summary reports for reporting use cases
  Optimal
    • Understand how to use summary and event-level reports together for reporting
Optimization
  Baseline
    • A clear explanation of what exactly is being optimized
    • A clear understanding of which reports drive your optimization model
    • Using event-level reports for optimization use cases
  Optimal
    • Combining Protected Audience (PA) and ARA
    • PA optimization may involve modelingSignals (see the first sketch after this table)
    • Understand how to use summary and event-level reports together, especially for ROAS optimization
Cross-app & web attribution
  • Compare cross app and web attribution through ARA with your current cross app and web coverage
  • If you are not currently measuring cross app and web attribution, consider whether it would be beneficial (see the second sketch after this table)
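The modelingSignals mentioned in the Optimization row is Protected Audience's mechanism for carrying a small amount of bid-time information (a 12-bit value returned by generateBid) into reportWin, where it can be logged and joined with ARA reports to train optimization models. The sketch below shows the general shape; the generateBid signature and the modelingSignals field come from Protected Audience, while the specific feature encoding is a hypothetical example.

// Sketch of a Protected Audience bidding worklet that emits modelingSignals.
// Only the low 12 bits (0-4095) of modelingSignals are kept, and the value is
// surfaced (with some noising) in reportWin()'s browserSignals, where it can be
// logged and later joined with ARA event-level and summary reports for training.
// The feature layout below is purely illustrative.

interface GenerateBidResult {
  ad: unknown;
  bid: number;
  render: string;
  modelingSignals?: number; // 12-bit integer made available to reportWin()
}

function generateBid(
  interestGroup: { ads: { renderURL: string }[] },
  auctionSignals: unknown,
  perBuyerSignals: unknown,
  trustedBiddingSignals: unknown,
  browserSignals: { joinCount: number; bidCount: number }
): GenerateBidResult {
  // Hypothetical 12-bit layout: 6 bits of join-count bucket + 6 bits of bid-count bucket.
  const joinBucket = Math.min(63, browserSignals.joinCount);
  const bidBucket = Math.min(63, browserSignals.bidCount);
  const modelingSignals = (joinBucket << 6) | bidBucket; // stays within 0..4095

  return {
    ad: { campaignId: 123 },
    bid: 1.0,
    render: interestGroup.ads[0].renderURL,
    modelingSignals,
  };
}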
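For the cross app and web attribution row: on Android, the browser advertises OS-level support through the Attribution-Reporting-Support request header, and an ad tech can delegate registration to the Android Attribution Reporting API by responding with the Attribution-Reporting-Register-OS-Source (or -Trigger) header instead of the web registration header. The branching below is a minimal sketch using Node's built-in http module; the endpoint path and registration URLs are placeholders.

// Sketch: choosing web vs OS (Android) attribution registration per request.
// Header names are from the Attribution Reporting API; the endpoint path and
// registration URLs are placeholders.
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.url?.startsWith("/register-source")) {
    // e.g. "os, web" on Android when both web and OS registration are available.
    const support = String(req.headers["attribution-reporting-support"] ?? "web");

    if (support.includes("os")) {
      // Delegate registration to the Android Attribution Reporting API.
      res.setHeader(
        "Attribution-Reporting-Register-OS-Source",
        '"https://adtech.example/android-register-source"'
      );
    } else {
      // Fall back to web-only registration.
      res.setHeader(
        "Attribution-Reporting-Register-Source",
        JSON.stringify({
          source_event_id: "12345678",
          destination: "https://advertiser.example",
          expiry: "604800",
        })
      );
    }
  }
  res.end();
}).listen(8080);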

4. Configuration setup

Event-Level Reports
  Baseline
    • Proper setup of source and trigger registration calls for any flow (PA, non-PA, etc.); see the first sketch after this table
    • Using either click-through conversions (CTC) or view-through conversions (VTC)
    • Using the default configuration
  Optimal
    • Full understanding of priority, expiry, event_report_window, deduplication_key, filters, and _lookback_window
    • Proper setup of source and trigger registration calls for all flows (PA, non-PA, all ad types, etc.)
    • Using both CTC and VTC
    • Testing different reporting windows to reduce report loss and identifying the optimal settings for your use cases
    • Integration with SimLib, a tool that can be used to test ARA based on historical data
Summary Reports
  Baseline
    • Proper setup of source and trigger registration calls for any flow (PA, non-PA, etc.); see the second sketch after this table
    • Using either CTC or VTC
  Optimal
    • Full understanding of the aggregatable report configuration: filters, aggregatable_report_window, scheduled_report_time, source_registration_time, reporting_origin
    • Proper setup of source and trigger registration calls for all flows (PA, non-PA, all ad types, etc.)
    • Using both CTC and VTC
    • Integration with SimLib and experimentation with Noise Lab simulations, which can be used to test various API configurations
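To make the event-level configuration fields in the table concrete, here is a sketch of the JSON an ad tech might return in the Attribution-Reporting-Register-Source and Attribution-Reporting-Register-Trigger response headers. The field names are from the API; every value (IDs, windows, filter data) is an illustrative placeholder.

// Sketch of event-level source and trigger registration payloads. These JSON
// objects are returned in the Attribution-Reporting-Register-Source and
// Attribution-Reporting-Register-Trigger HTTP response headers on the
// registration requests; all values below are illustrative.

const sourceRegistration = {
  source_event_id: "12345678",            // ad-tech-side ID for this impression or click
  destination: "https://advertiser.example",
  expiry: "1209600",                       // 14 days, in seconds
  priority: "100",                         // higher-priority sources win attribution
  event_report_window: "604800",           // only count conversions within the first 7 days
  filter_data: { product: ["shoes"] },     // matched against trigger-side filters
};

const triggerRegistration = {
  event_trigger_data: [
    {
      trigger_data: "1",                   // coarse conversion type (3 bits for clicks)
      priority: "50",
      deduplication_key: "777",            // suppress duplicate conversion events
      filters: {
        product: ["shoes"],
        _lookback_window: 259200,          // only attribute to sources from the last 3 days
      },
    },
  ],
};

// Example: attach the payloads to a registration response (framework-agnostic).
console.log("Attribution-Reporting-Register-Source:", JSON.stringify(sourceRegistration));
console.log("Attribution-Reporting-Register-Trigger:", JSON.stringify(triggerRegistration));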
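For the Summary Reports row, the same registrations carry the aggregatable configuration: the source declares named key pieces and an aggregatable_report_window, and the trigger completes the 128-bit keys and assigns contribution values. Again the field names are from the API, and the key pieces and values are placeholders.

// Sketch of the aggregatable (summary report) parts of source and trigger
// registration. Source-side key pieces are OR'ed with trigger-side key pieces
// to form the final 128-bit aggregation keys; all values are illustrative.

const aggregatableSourceFields = {
  aggregatable_report_window: "2592000",   // accept aggregatable contributions for 30 days
  aggregation_keys: {
    campaignCounts: "0x159",               // source-side piece of the "campaign" key
    geoValue: "0x5",                       // source-side piece of the "geo" key
  },
};

const aggregatableTriggerFields = {
  aggregatable_trigger_data: [
    { key_piece: "0x400", source_keys: ["campaignCounts"] }, // e.g. conversion-type bits
    { key_piece: "0xA80", source_keys: ["geoValue"] },       // e.g. purchase-location bits
  ],
  aggregatable_values: {
    campaignCounts: 32768,                 // contributions share the 65,536 L1 budget per source
    geoValue: 1664,
  },
};

console.log(JSON.stringify({ aggregatableSourceFields, aggregatableTriggerFields }, null, 2));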

5. Implementation strategies

Non-3PC data
  • Consider how to use third-party cookies (while they are still available) and data not impacted by third-party cookie deprecation (3PCD) to validate or further improve ARA performance
Noise
  Baseline
    • Integration with SimLib and experimentation with Noise Lab simulations to assess noise impact (see the noise sketch at the end of this section)
  Optimal
    • Implement and test various mechanisms for de-noising data
Aggregation Service
  Baseline
    • Check that the source-side and trigger-side keys you plan to use make sense for your use cases. An example starting point is a key structure that includes all dimensions you want to track; based on the output, you can then test different key structures.
  Optimal
    • Testing multiple key structures, including hierarchical keys, to optimize for your use cases (see the key-structure sketch at the end of this section)
    • Testing various epsilon values within Aggregation Service and being able to provide a perspective on them
Batching strategy
  Baseline
    • Full understanding of the impact of different batching frequencies (e.g. hourly, daily, or weekly) and of how reports are batched (e.g. by advertiser × scheduled report time). Additional details are in the Developer Docs and the Aggregation Service load testing guidance.
    • Test with at least one batching frequency and one advertiser
  Optimal
    • Testing different combinations of batching frequencies and report dimensions, and identifying the optimal settings for your use cases (see the batching sketch at the end of this section)
    • Minimize report loss by adjusting your batching strategy to account for potentially delayed aggregatable reports
Debugging
  • Use all types of debug reports as part of your testing and evaluation (see the debug-report sketch at the end of this section)
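For the Noise row and the epsilon bullet above: Aggregation Service adds noise whose scale grows with the L1 contribution budget (65,536) divided by epsilon, so the practical question is how large that noise is relative to your true aggregates. The sketch below approximates the mechanism with a continuous Laplace sample so you can compare epsilon values and contribution scaling; it is for intuition only, not a replica of the service, and Noise Lab or SimLib should be used for authoritative numbers.

// Sketch: rough simulation of summary-report noise at different epsilon values.
// Noise scale ~ L1_BUDGET / epsilon (continuous Laplace approximation of the
// service's discrete mechanism). Use it to see how relative error changes with
// epsilon and with how much of the budget you assign to a metric.

const L1_BUDGET = 65536; // per-source contribution budget shared across all keys

function sampleLaplace(scale: number): number {
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

function relativeError(trueSum: number, epsilon: number, trials = 10_000): number {
  const scale = L1_BUDGET / epsilon;
  let totalAbsError = 0;
  for (let i = 0; i < trials; i++) {
    totalAbsError += Math.abs(sampleLaplace(scale));
  }
  return totalAbsError / trials / trueSum; // mean absolute error relative to the true value
}

// e.g. 500 conversions in a bucket, each contributing 100 of the 65,536 budget.
const trueSum = 500 * 100;
for (const epsilon of [1, 5, 10, 20, 64]) {
  console.log(`epsilon=${epsilon} -> ~${(relativeError(trueSum, epsilon) * 100).toFixed(1)}% error`);
}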
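For the Aggregation Service row, one way to design key structures is to pack each dimension into a fixed bit range of the 128-bit aggregation key, which also makes hierarchical (roll-up) variants easy to form by omitting the finer dimensions. The layout below is a hypothetical sketch, not a prescribed structure.

// Sketch: packing dimensions into a 128-bit aggregation key at fixed bit
// offsets. Dropping the finest dimensions yields hierarchical (roll-up) keys.
// The dimension widths and offsets are illustrative.

const LAYOUT = {
  advertiserId: { offset: 0n, bits: 24n },
  campaignId:   { offset: 24n, bits: 24n },
  geoId:        { offset: 48n, bits: 12n },
  creativeId:   { offset: 60n, bits: 20n },
} as const;

function packKey(dims: Partial<Record<keyof typeof LAYOUT, bigint>>): string {
  let key = 0n;
  for (const [name, { offset, bits }] of Object.entries(LAYOUT)) {
    const value = dims[name as keyof typeof LAYOUT];
    if (value === undefined) continue;             // omitted dimension => coarser roll-up key
    if (value >= 1n << bits) throw new Error(`${name} overflows ${bits} bits`);
    key |= value << offset;
  }
  return "0x" + key.toString(16);                  // hex string usable as a key piece
}

// A fine-grained key vs. a coarser roll-up over the same source-side dimensions.
console.log(packKey({ advertiserId: 42n, campaignId: 7n, geoId: 826n, creativeId: 9n }));
console.log(packKey({ advertiserId: 42n, campaignId: 7n }));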
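For the Batching strategy row, a simple starting point is to group collected aggregatable reports by advertiser and by the day of their scheduled_report_time, and to hold each batch open for a grace period so that delayed reports are not dropped (a report can only contribute to aggregation once). The grouping below is a hypothetical sketch of that approach; the grace period and grouping dimensions are choices to test, not recommendations.

// Sketch: daily batching of aggregatable reports by advertiser. Reports are
// keyed by the day of shared_info.scheduled_report_time; batches are only
// released after a grace period so late-arriving reports are not dropped.
// Field access mirrors the aggregatable report format; the grace period and
// grouping dimensions are illustrative choices.

interface AggregatableReport {
  shared_info: { scheduled_report_time: string /* Unix seconds */; reporting_origin: string };
  advertiser: string; // however you associate reports with an advertiser
  payload: unknown;
}

const GRACE_PERIOD_HOURS = 24;

function batchKey(report: AggregatableReport): string {
  const day = new Date(Number(report.shared_info.scheduled_report_time) * 1000)
    .toISOString()
    .slice(0, 10);
  return `${report.advertiser}/${day}`;
}

function closedBatches(reports: AggregatableReport[], now = Date.now()): Map<string, AggregatableReport[]> {
  const batches = new Map<string, AggregatableReport[]>();
  for (const r of reports) {
    const key = batchKey(r);
    (batches.get(key) ?? batches.set(key, []).get(key)!).push(r);
  }
  // Only release batches whose day ended more than GRACE_PERIOD_HOURS ago.
  for (const key of [...batches.keys()]) {
    const dayEnd = new Date(key.split("/")[1] + "T23:59:59Z").getTime();
    if (now - dayEnd < GRACE_PERIOD_HOURS * 3600 * 1000) batches.delete(key);
  }
  return batches;
}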
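For the Debugging row: with debug reporting enabled on your registrations (debug_key and debug_reporting, plus the ar_debug cookie on the reporting origin while third-party cookies are still available), the browser sends debug reports to well-known paths on your reporting origin. The collector below is a minimal sketch; the paths follow the API's .well-known conventions, but verify them against the current developer documentation.

// Sketch: collecting the three kinds of attribution debug reports so they can
// be compared against live reports during testing. Paths follow the API's
// .well-known conventions; verify them against the current documentation.
import { createServer } from "node:http";

const DEBUG_PATHS = new Set([
  "/.well-known/attribution-reporting/debug/report-event-attribution",
  "/.well-known/attribution-reporting/debug/report-aggregate-attribution",
  "/.well-known/attribution-reporting/debug/verbose",
]);

createServer((req, res) => {
  if (req.method === "POST" && DEBUG_PATHS.has(req.url ?? "")) {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      // Store alongside the matching live reports for loss and latency analysis.
      console.log(req.url, body);
      res.writeHead(200).end();
    });
  } else {
    res.writeHead(404).end();
  }
}).listen(8080);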