Reproducing Inter-Region Tests with PerfKit Benchmarker

This is a hands-on lab/tutorial about running PerfKit Benchmarker (PKB) on Google Cloud. You can reproduce your own inter-region latency and throughput reports by following these instructions. This lab is referenced in the Google Cloud VPC Network performance online documentation.

Overview

Introducing PerfKit Benchmarker

PerfKit Benchmarker is an open source framework with commonly accepted benchmarking tools that you can use to measure and compare cloud providers. PKB automates setup and teardown of resources, including Virtual Machines (VMs), on whichever cloud provider you choose. Additionally, PKB installs and runs the benchmark software tests and provides patterns for saving the test output for future analysis and debugging.

Check out the PerfKit Benchmarker README for a detailed introduction.

What you'll do

This lab demonstrates an end-to-end workflow for running benchmark tests, uploading the result data to Google Cloud, and rendering reports based on that data.

In this lab, you will:

  • Install PerfKit Benchmarker
  • Create a BigQuery dataset for benchmark result data storage
  • Start a benchmark test for latency
  • Start a benchmark test for throughput
  • Work with the test result data in BigQuery
  • Create a new Data Studio data source and report

Note: this lab focuses on running networking benchmarks on Google Cloud.

Prerequisites

  • Basic familiarity with Linux command line
  • Basic familiarity with Google Cloud

Set up

What you'll need

To complete this lab, you'll need:

  • Access to a standard internet browser (Chrome browser recommended), where you can access the Cloud Console and the Cloud Shell
  • A Google Cloud project

Sign in to Cloud Console

In your browser, open the Cloud Console.

Select your project using the project selector dropdown at the top of the page.

Activate the Cloud Shell

From the Cloud Console, click the Activate Cloud Shell icon on the top right toolbar.


You may need to click Continue the first time.

It should only take a few moments to provision and connect to your Cloud Shell environment.

This Cloud Shell virtual machine is loaded with all the development tools you'll need. It offers a persistent 5 GB home directory and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this lab can be done from within a browser.

Once connected to the Cloud Shell, you can verify your setup.

  1. Check that you're already authenticated.

    gcloud auth list
    

    Expected output

     Credentialed accounts:
    ACTIVE  ACCOUNT
    *       <myaccount>@<mydomain>.com
    

    Note: gcloud is the unified command-line tool for Google Cloud. Full documentation is available at https://cloud.google.com/sdk/gcloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  2. Verify that your project is set.

    gcloud config list project
    

    Expected output

    [core]
    project = <PROJECT_ID>
    

    If it is not, you can set it with this command:

    gcloud config set project <PROJECT_ID>
    

    Expected output

    Updated property [core/project].
    

Disable OS Login in favor of legacy SSH keys

OS Login is now enabled by default on this project and on any VM instances created in it. OS Login enables the use of Compute Engine IAM roles to manage SSH access to Linux instances.

PKB, however, uses legacy SSH keys for authentication, so OS Login must be disabled.

In Cloud Shell, disable OS Login for the project.

gcloud compute project-info add-metadata --metadata enable-oslogin=FALSE
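
Optionally, you can verify that the change took effect by describing the project metadata and checking that enable-oslogin is set to FALSE:

gcloud compute project-info describe \
    --format="value(commonInstanceMetadata.items)"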

Task 1. Install PerfKit Benchmarker

In this lab, you use Cloud Shell and the PKB repo in GitHub.

  1. Set up a virtualenv isolated Python environment within Cloud Shell.

    sudo apt-get install python3-venv -y
    python3 -m venv $HOME/my_virtualenv
    
    source $HOME/my_virtualenv/bin/activate
    
  2. Ensure Google Cloud SDK tools like bq find the proper Python.

    export CLOUDSDK_PYTHON=$HOME/my_virtualenv/bin/python
    
  3. Clone the PerfKitBenchmarker repository.

    cd $HOME && git clone https://github.com/GoogleCloudPlatform/PerfKitBenchmarker.git
    
    cd PerfKitBenchmarker/
    
  4. Install PKB dependencies.

    pip install --upgrade pip
    pip install -r requirements.txt
    

Note: these setup instructions are specific to running network benchmarks. Comprehensive instructions for running other benchmarks can be found in the README in the PKB repo, or by looking through the code.
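
As a quick sanity check of the install, you can ask PKB to print help for a benchmark; if the environment is set up correctly, this should run without errors (this assumes the --helpmatch flag available in current PKB versions):

./pkb.py --helpmatch=ping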

Task 2. Create a BigQuery dataset for benchmark result data storage

By default, PKB logs test output to the terminal and to result files under /tmp/perfkitbenchmarker/runs/.

A recommended practice is to push your result data to BigQuery, a serverless, highly scalable, cost-effective data warehouse. You can then use BigQuery to review your test results over time and create data visualizations.

In order to create online reports, this lab sends data to tables in a BigQuery dataset. BigQuery dataset tables can then be used as data sources for reports.

Using the BigQuery command-line tool bq, initialize an empty dataset.

bq mk pkb_results

Output (do not copy)

Dataset '[PROJECT-ID]:pkb_results' successfully created.

You can safely ignore any warnings about the imp module.

You can also create datasets using the BigQuery UI in the Cloud Console. The dataset can be named anything, but you must pass the dataset name in command-line options when you run tests.
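
If you want to confirm the dataset exists before running tests, you can describe it with bq:

bq show pkb_results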

Task 3. Start a benchmark test for latency

PKB supports custom configuration files in which you can set the cloud provider, zone, machine type, and many other options for each VM.

  1. Grab and review the custom config file for latency: all_region_latency.yaml

    cat ./tutorials/inter_region_reports/data/all_region_latency.yaml
    

    Output (do not copy)

    # ping benchmark for latency.
    ping:
      flag_matrix: inter_region
      flag_matrix_filters:
        inter_region: "zones < extra_zones"
      flag_matrix_defs:
        inter_region:
          gce_network_tier: [premium]
          zones: [asia-east1-a,asia-east2-a,asia-northeast1-a,asia-northeast2-a,asia-south1-a,asia-southeast1-a,australia-southeast1-a,europe-north1-a,europe-west1-c,europe-west2-a,europe-west3-a,europe-west4-a,europe-west6-a,northamerica-northeast1-a,southamerica-east1-a,us-central1-a,us-east1-b,us-east4-a,us-west1-a,us-west2-a]
          extra_zones: [asia-east1-a,asia-east2-a,asia-northeast1-a,asia-northeast2-a,asia-south1-a,asia-southeast1-a,australia-southeast1-a,europe-north1-a,europe-west1-c,europe-west2-a,europe-west3-a,europe-west4-a,europe-west6-a,northamerica-northeast1-a,southamerica-east1-a,us-central1-a,us-east1-b,us-east4-a,us-west1-a,us-west2-a]
          machine_type: [n1-standard-2]
      flags:
        cloud: GCP
        ip_addresses: BOTH
    
  2. Run the latency tests.

    ./pkb.py --benchmarks=ping \
        --benchmark_config_file=./tutorials/inter_region_reports/data/all_region_latency.yaml \
        --bq_project=$(gcloud info --format='value(config.project)') \
        --bigquery_table=pkb_results.all_region_results
    

    This test pass usually takes ~12 minutes for each region pair. Test output will be pushed to the BigQuery table pkb_results.all_region_results.

    Output (do not copy)

    ...
    -------------------------PerfKitBenchmarker Results Summary-------------------------
    PING:
      Min Latency            33.101000 ms        (ip_type="internal" ...)
      Average Latency        33.752000 ms        (ip_type="internal" ...)
      Max Latency            34.023000 ms        (ip_type="internal" ...)
      Latency Std Dev         0.407000 ms        (ip_type="internal" ...)
      ...
      Min Latency            34.440000 ms        (ip_type="external" ...)
      Average Latency        34.903000 ms        (ip_type="external" ...)
      Max Latency            38.060000 ms        (ip_type="external" ...)
      Latency Std Dev         0.460000 ms        (ip_type="external" ...)
    ...
    ----------------------------------------
    Name  UID    Status     Failed Substatus
    ----------------------------------------
    ping  ping0  SUCCEEDED
    ----------------------------------------
    Success rate: 100.00% (1/1)
    ...
    

    Test results show that this benchmark runs 4 times between the two VM instances in different regions:

    • traffic over external IPs, vm1>vm2
    • traffic over internal IPs, vm1>vm2
    • traffic over external IPs, vm2>vm1
    • traffic over internal IPs, vm2>vm1
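
Note: the flag matrix in the config above expands into one run per zone pair, and the "zones < extra_zones" filter keeps each unordered pair only once, so the full 20-zone config still covers 190 zone pairs at roughly 12 minutes each. To try a single region pair first, a minimal sketch is to skip the config file and pass zones directly on the command line; this assumes PKB's standard --zones, --extra_zones, and --machine_type flags:

./pkb.py --benchmarks=ping \
    --zones=us-central1-a --extra_zones=us-east1-b \
    --machine_type=n1-standard-2 \
    --bq_project=$(gcloud info --format='value(config.project)') \
    --bigquery_table=pkb_results.all_region_results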

Task 4. Start a benchmark test for throughput

  1. Review the custom config file for throughput tests: all_region_iperf.yaml.

    cat tutorials/inter_region_reports/data/all_region_iperf.yaml
    

    Output (do not copy)

    # iperf benchmark for throughput.
    iperf:
      flag_matrix: inter_region
      flag_matrix_filters:
        inter_region: "zones < extra_zones"
      flag_matrix_defs:
        inter_region:
          gce_network_tier: [premium]
          zones: [asia-east1-a,asia-east2-a,asia-northeast1-a,asia-northeast2-a,asia-south1-a,asia-southeast1-a,australia-southeast1-a,europe-north1-a,europe-west1-c,europe-west2-a,europe-west3-a,europe-west4-a,europe-west6-a,northamerica-northeast1-a,southamerica-east1-a,us-central1-a,us-east1-b,us-east4-a,us-west1-a,us-west2-a]
          extra_zones: [asia-east1-a,asia-east2-a,asia-northeast1-a,asia-northeast2-a,asia-south1-a,asia-southeast1-a,australia-southeast1-a,europe-north1-a,europe-west1-c,europe-west2-a,europe-west3-a,europe-west4-a,europe-west6-a,northamerica-northeast1-a,southamerica-east1-a,us-central1-a,us-east1-b,us-east4-a,us-west1-a,us-west2-a]
          machine_type: [n1-standard-2]
      flags:
        cloud: GCP
        iperf_runtime_in_seconds: 60
        iperf_sending_thread_count: 1,4,32
    
  2. Run the throughput tests.

    ./pkb.py --benchmarks=iperf \
        --benchmark_config_file=./tutorials/inter_region_reports/data/all_region_iperf.yaml \
        --bq_project=$(gcloud info --format='value(config.project)') \
        --bigquery_table=pkb_results.all_region_results
    

    This test pass usually takes ~20 minutes for each region pair. It runs throughput tests with 1, 4, and 32 threads.

    Test output will be pushed to the BigQuery table pkb_results.all_region_results.

    Output (do not copy)

    ...
    -------------------------PerfKitBenchmarker Results Summary-------------------------
    IPERF:
      Throughput   4810.000000 Mbits/sec   (ip_type="external" ... receiving_zone="us-east1-b" ...)
      Throughput   9768.000000 Mbits/sec   (ip_type="internal" ... receiving_zone="us-east1-b" ...)
      Throughput   7116.000000 Mbits/sec   (ip_type="external" ... receiving_zone="us-central1-a" ...)
      Throughput   9747.000000 Mbits/sec   (ip_type="internal" ... receiving_zone="us-central1-a" ...)
    ...
    ------------------------------------------
    Name   UID     Status     Failed Substatus
    ------------------------------------------
    iperf  iperf0  SUCCEEDED
    ------------------------------------------
    Success rate: 100.00% (1/1)
    ...
    

    Test results show that, for each thread count value, this benchmark runs 4 times between the two VM instances in different regions:

    • traffic over external IPs, vm1>vm2
    • traffic over internal IPs, vm1>vm2
    • traffic over external IPs, vm2>vm1
    • traffic over internal IPs, vm2>vm1
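
Note: as with the latency test, the full config expands to 190 zone pairs at roughly 20 minutes each, so you may want to benchmark a single region pair before launching the full matrix. A minimal sketch, again assuming the standard PKB flags shown in the config:

./pkb.py --benchmarks=iperf \
    --zones=us-central1-a --extra_zones=us-east1-b \
    --machine_type=n1-standard-2 \
    --iperf_runtime_in_seconds=60 \
    --iperf_sending_thread_count=1 \
    --bq_project=$(gcloud info --format='value(config.project)') \
    --bigquery_table=pkb_results.all_region_results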

Task 5. Work with the test result data in BigQuery

  1. Query pkb_results.all_region_results to view the test results in BigQuery.

    In Cloud Shell, run a bq command.

    bq query 'SELECT test, metric, value, product_name FROM pkb_results.all_region_results'
    

    You can also see your data using the Query editor in the BigQuery UI.

    Output (do not copy)

    ...
    +-------+--------------------+--------------------+--------------------+
    | test  |       metric       |       value        |    product_name    |
    +-------+--------------------+--------------------+--------------------+
    | iperf | Throughput         |              724.0 | PerfKitBenchmarker |
    | iperf | Throughput         |              717.0 | PerfKitBenchmarker |
    | iperf | Throughput         |              733.0 | PerfKitBenchmarker |
    | iperf | Throughput         |              701.0 | PerfKitBenchmarker |
    | iperf | Throughput         |             2866.0 | PerfKitBenchmarker |
    | iperf | Throughput         |             2849.0 | PerfKitBenchmarker |
    | iperf | Throughput         |             2433.0 | PerfKitBenchmarker |
    | iperf | Throughput         |             2880.0 | PerfKitBenchmarker |
    | iperf | Throughput         |             7909.0 | PerfKitBenchmarker |
    | iperf | Throughput         |             9790.0 | PerfKitBenchmarker |
    | iperf | Throughput         |             6985.0 | PerfKitBenchmarker |
    | iperf | Throughput         |             9708.0 | PerfKitBenchmarker |
    | iperf | lscpu              |                0.0 | PerfKitBenchmarker |
    | iperf | proccpu            |                0.0 | PerfKitBenchmarker |
    | iperf | proccpu_mapping    |                0.0 | PerfKitBenchmarker |
    | iperf | End to End Runtime | 1114.2245268821716 | PerfKitBenchmarker |
    | ping  | End to End Runtime |   755.233142375946 | PerfKitBenchmarker |
    | ping  | Min Latency        |             33.296 | PerfKitBenchmarker |
    | ping  | Average Latency    |             33.378 | PerfKitBenchmarker |
    | ping  | Max Latency        |             33.687 | PerfKitBenchmarker |
    | ping  | Latency Std Dev    |              0.214 | PerfKitBenchmarker |
    | ping  | Min Latency        |             34.613 | PerfKitBenchmarker |
    | ping  | Average Latency    |             34.916 | PerfKitBenchmarker |
    | ping  | Max Latency        |             38.789 | PerfKitBenchmarker |
    | ping  | Latency Std Dev    |              0.512 | PerfKitBenchmarker |
    | ping  | Min Latency        |             34.657 | PerfKitBenchmarker |
    | ping  | Average Latency    |             34.711 | PerfKitBenchmarker |
    | ping  | Max Latency        |              34.79 | PerfKitBenchmarker |
    | ping  | Latency Std Dev    |              0.227 | PerfKitBenchmarker |
    | ping  | Min Latency        |             33.051 | PerfKitBenchmarker |
    | ping  | Average Latency    |             34.339 | PerfKitBenchmarker |
    | ping  | Max Latency        |             34.812 | PerfKitBenchmarker |
    | ping  | Latency Std Dev    |               0.63 | PerfKitBenchmarker |
    | ping  | proccpu_mapping    |                0.0 | PerfKitBenchmarker |
    | ping  | proccpu            |                0.0 | PerfKitBenchmarker |
    | ping  | lscpu              |                0.0 | PerfKitBenchmarker |
    +-------+--------------------+--------------------+--------------------+
    
  2. Create a BigQuery view which makes working with the data easier. (A sketch of what such a view does with PKB's labels column appears after this list.)

    sed "s/<PROJECT_ID>/$(gcloud info --format='value(config.project)')/g" \
        ./tutorials/inter_region_reports/data/all_region_result_view.sql \
        > ./view.sql
    
    bq mk --view="$(cat ./view.sql)" pkb_results.all_region_result_view
    
  3. Verify that you can retrieve data through the view.

    bq query --nouse_legacy_sql \
    'SELECT test, metric, value, unit, sending_zone, receiving_zone, sending_thread_count, ip_type, product_name, thedate FROM pkb_results.all_region_result_view ORDER BY thedate'
    

    Output (do not copy)

    +-------+-----------------+--------+-----------+---------------+----------------+----------------------+
    | test  |     metric      | value  |   unit    | sending_zone  | receiving_zone | sending_thread_count |
    +-------+-----------------+--------+-----------+---------------+----------------+----------------------+
    | ping  | Min Latency     | 33.051 | ms        | us-central1-a | us-east1-b     | NULL                 |...
    | ping  | Average Latency | 34.339 | ms        | us-central1-a | us-east1-b     | NULL                 |...
    | ping  | Max Latency     | 34.812 | ms        | us-central1-a | us-east1-b     | NULL                 |...
    | ping  | Latency Std Dev |   0.63 | ms        | us-central1-a | us-east1-b     | NULL                 |...
    | ping  | Min Latency     | 34.657 | ms        | us-east1-b    | us-central1-a  | NULL                 |...
    | ping  | Average Latency | 34.711 | ms        | us-east1-b    | us-central1-a  | NULL                 |...
    | ping  | Max Latency     |  34.79 | ms        | us-east1-b    | us-central1-a  | NULL                 |...
    | ping  | Latency Std Dev |  0.227 | ms        | us-east1-b    | us-central1-a  | NULL                 |...
    | ping  | Min Latency     | 34.613 | ms        | us-central1-a | us-east1-b     | NULL                 |...
    | ping  | Average Latency | 34.916 | ms        | us-central1-a | us-east1-b     | NULL                 |...
    | ping  | Max Latency     | 38.789 | ms        | us-central1-a | us-east1-b     | NULL                 |...
    | ping  | Latency Std Dev |  0.512 | ms        | us-central1-a | us-east1-b     | NULL                 |...
    | ping  | Min Latency     | 33.296 | ms        | us-east1-b    | us-central1-a  | NULL                 |...
    | ping  | Average Latency | 33.378 | ms        | us-east1-b    | us-central1-a  | NULL                 |...
    | ping  | Max Latency     | 33.687 | ms        | us-east1-b    | us-central1-a  | NULL                 |...
    | ping  | Latency Std Dev |  0.214 | ms        | us-east1-b    | us-central1-a  | NULL                 |...
    | iperf | Throughput      |  724.0 | Mbits/sec | us-central1-a | us-east1-b     | 1                    |...
    | iperf | Throughput      |  717.0 | Mbits/sec | us-central1-a | us-east1-b     | 1                    |...
    | iperf | Throughput      |  733.0 | Mbits/sec | us-east1-b    | us-central1-a  | 1                    |...
    | iperf | Throughput      |  701.0 | Mbits/sec | us-east1-b    | us-central1-a  | 1                    |...
    | iperf | Throughput      | 2866.0 | Mbits/sec | us-central1-a | us-east1-b     | 4                    |...
    | iperf | Throughput      | 2849.0 | Mbits/sec | us-central1-a | us-east1-b     | 4                    |...
    | iperf | Throughput      | 2433.0 | Mbits/sec | us-east1-b    | us-central1-a  | 4                    |...
    | iperf | Throughput      | 2880.0 | Mbits/sec | us-east1-b    | us-central1-a  | 4                    |...
    | iperf | Throughput      | 7909.0 | Mbits/sec | us-central1-a | us-east1-b     | 32                   |...
    | iperf | Throughput      | 9790.0 | Mbits/sec | us-central1-a | us-east1-b     | 32                   |...
    | iperf | Throughput      | 6985.0 | Mbits/sec | us-east1-b    | us-central1-a  | 32                   |...
    | iperf | Throughput      | 9708.0 | Mbits/sec | us-east1-b    | us-central1-a  | 32                   |...
    +-------+-----------------+--------+-----------+---------------+----------------+----------------------+
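
For reference, PKB stores most run metadata in a single labels string column (for example, |ip_type:internal|,|sending_zone:us-east1-b|,...), and the view extracts individual fields from that string. The repo's all_region_result_view.sql does this for you; the query below is only an illustrative sketch of the technique, assuming the labels format above:

bq query --nouse_legacy_sql \
'SELECT
  REGEXP_EXTRACT(labels, r"\|sending_zone:([^|]*)\|") AS sending_zone,
  REGEXP_EXTRACT(labels, r"\|receiving_zone:([^|]*)\|") AS receiving_zone,
  metric,
  value
FROM pkb_results.all_region_results
LIMIT 5'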
    

You will learn to visualize this data in the next section.

Task 6. Create a new Data Studio data source and report

  1. Open Data Studio.

  2. Click Create > Report.

    • You may need to click Get Started the first time.
    • You may need to Accept terms and conditions.
    • You may need to choose No, thanks to email.
    • If you don't see an Untitled Report, click Create > Report again.
  3. Click BigQuery under Add data to report > Connect to data.

    • You may need to click Authorize.
    • Click My Projects.
    • Click the project-id you're using.
    • Click the pkb_results dataset created earlier.
    • Click the all_region_result_view created earlier.
    • Click Add.
    • Click Add to report to confirm.

    This creates a new data source in your new report.

  4. Click Untitled Report and name your report Inter-region Dashboard.

  5. Remove the default Chart > Table.

    • Select the default table object in the report body.
    • Press the Delete key on your keyboard.
  6. Create a Pivot table chart for your iperf results.

    • Click Add a chart.
    • Click Pivot table.
    • Drop the Pivot table in the upper left corner of the report body.
    • Drag the chart boundary to make it a rectangle with room for 4 columns.
  7. Set iperf Pivot table settings.

    • For Row dimension click Add dimension and choose receiving_region.
    • For Column dimension click Add dimension and choose sending_region.
    • For Metric click Add metric and choose value.
    • The metric defaults to SUM; click SUM and choose Average.
    • Remove the Record Count metric.
    • Scroll settings down to Filter and click Add a filter.
    • Click Create a filter.
    • Name the filter iperf filter.
    • For field, choose test.
    • For condition, choose Equal to.
    • For value, type iperf.
    • Click Save.
  8. Create a Pivot table chart for your ping results.

    • Click Add a chart.
    • Click Pivot table.
    • Drop the Pivot table below the first table.
    • Drag the chart boundary to make it a rectangle with room for 4 columns.
  9. Set ping Pivot table settings.

    • For Row dimension click Add dimension and choose receiving_region.
    • For Column dimension click Add dimension and choose sending_region.
    • For Metric click Add metric and choose value.
    • The metric defaults to SUM; click SUM and choose Average.
    • Remove the Record Count metric.
    • Scroll settings down to Filter and click Add a filter.
    • Click Create a filter.
    • Name the filter ping filter.
    • For field, choose test.
    • For condition, choose Equal to.
    • For value, type ping.
    • Click Save.
  10. Click View to see your rendered report.

    Click Edit to edit again. You can customize the report setup to create your own inter-region latency and throughput reports.

Enjoy.

Cleanup

Note that the following resources may have been created during this lab, and you may wish to remove them:

  • The pkb_results dataset in BigQuery
  • The all_region_results table in the pkb_results dataset
  • Any reports you copied/created in Data Studio
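
If you created these resources only for this lab, one way to clean up is shown below. The bq rm -r -f command deletes the dataset along with the table and view inside it. PKB normally deletes its VMs after each run, but if a run was interrupted you can check for leftovers (PKB instance names start with pkb-):

# Delete the BigQuery dataset, including the results table and view.
bq rm -r -f pkb_results

# List any leftover PKB-created VM instances.
gcloud compute instances list --filter="name~^pkb-"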

Congratulations!

You have completed the Reproducing Inter-Region Tests with PerfKit Benchmarker lab!

What was covered

You installed PerfKit Benchmarker, and ran benchmark tests in the cloud.

You learned how to build an end-to-end workflow for running benchmarks, gathering data, and visualizing performance trends.


Credits

Note: the original version of this lab was prepared by the networking research team at the AT&T Center for Virtualization at Southern Methodist University.