Data Masking/Tokenization from Cloud Storage to BigQuery (using Cloud DLP) template

The Data Masking/Tokenization from Cloud Storage to BigQuery template uses Sensitive Data Protection and creates a streaming pipeline that performs the following steps:

  1. Reads CSV files from a Cloud Storage bucket.
  2. Calls the Cloud Data Loss Prevention API (part of Sensitive Data Protection) for de-identification.
  3. Writes the de-identified data into the specified BigQuery table.
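
For example, assuming an inspection configuration that looks for EMAIL_ADDRESS and PHONE_NUMBER and a de-identification template that replaces each finding with its infoType name (the file name and values below are illustrative), a record is transformed as follows:

Input row in gs://mybucket/users.csv:

    name,email,phone
    Alice,alice@example.com,(206) 555-0100

Row written to BigQuery after de-identification:

    Alice,[EMAIL_ADDRESS],[PHONE_NUMBER]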

The template supports using both a Sensitive Data Protection inspection template and a Sensitive Data Protection de-identification template. As a result, the template supports both of the following tasks:

  • Inspect for potentially sensitive information and de-identify the data.
  • De-identify structured data where the columns to de-identify are specified and no inspection is needed.

This template does not support a regional path for the de-identification template location. Only a global path is supported.
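
For reference, one way to create a de-identification template at the global path is to call the DLP API directly. The following is a minimal sketch that uses a replace-with-infoType transformation; the project ID and template ID are placeholders:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/PROJECT_ID/locations/global/deidentifyTemplates" \
  -d '{
        "templateId": "generated_template_id",
        "deidentifyTemplate": {
          "deidentifyConfig": {
            "infoTypeTransformations": {
              "transformations": [
                { "primitiveTransformation": { "replaceWithInfoTypeConfig": {} } }
              ]
            }
          }
        }
      }'

The name field in the response is the value to pass as deidentifyTemplateName.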

Pipeline requirements

  • The input data to tokenize must exist.
  • The Sensitive Data Protection templates must exist (for example, DeidentifyTemplate and InspectTemplate). For more details, see Sensitive Data Protection templates.
  • The BigQuery dataset must exist.
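
For example, you can confirm the input files and create the dataset with the Google Cloud CLI and the bq command-line tool (the bucket, project, and dataset names below are placeholders):

# Confirm that the input CSV files exist.
gcloud storage ls "gs://mybucket/file-*.csv"

# Create the BigQuery dataset if it doesn't exist yet.
bq --location=US mk --dataset PROJECT_ID:DATASET_NAME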

Template parameters

Required parameters

  • inputFilePattern : The CSV files to read input data records from. Wildcards are also accepted. (Example: gs://mybucket/my_csv_filename.csv or gs://mybucket/file-*.csv).
  • deidentifyTemplateName : The Sensitive Data Protection de-identification template to use for API requests, specified with the pattern projects/<PROJECT_ID>/deidentifyTemplates/<TEMPLATE_ID>. (Example: projects/your-project-id/locations/global/deidentifyTemplates/generated_template_id).
  • datasetName : The BigQuery dataset to use when sending tokenized results. The dataset must exist prior to execution.
  • dlpProjectId : The ID for the Google Cloud project that owns the DLP API resource. This project can be the same project that owns the Sensitive Data Protection templates, or it can be a separate project.

Optional parameters

  • inspectTemplateName : The Sensitive Data Protection inspection template to use for API requests, specified with the pattern projects/<PROJECT_ID>/inspectTemplates/<TEMPLATE_ID>. (Example: projects/your-project-id/locations/global/inspectTemplates/generated_template_id).
  • batchSize : The chunking or batch size to use for sending data to inspect and de-identify. For a CSV file, the value of batchSize is the number of rows in a batch. Determine the batch size based on the size of the records and the size of the file. The DLP API has a payload size limit of 524 KB per API call.
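
To choose a batchSize value, you can estimate the average row size of a representative input file and size batches so that each request stays below the limit. A rough sketch, with a hypothetical file path:

# Total bytes in the first 1,000 rows of one input file. Divide by 1,000 for the
# average bytes per row, then pick batchSize so that rows multiplied by average
# row size stays below the 524 KB limit (for example, about 500 rows at ~1 KB per row).
gcloud storage cat gs://mybucket/file-001.csv | head -n 1000 | wc -c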

Run the template

Console

  1. Go to the Dataflow Create job from template page.
  2. In the Job name field, enter a unique job name.
  3. Optional: For Regional endpoint, select a value from the drop-down menu. The default region is us-central1.

    For a list of regions where you can run a Dataflow job, see Dataflow locations.

  4. From the Dataflow template drop-down menu, select the Data Masking/Tokenization from Cloud Storage to BigQuery (using Cloud DLP) template.
  5. In the provided parameter fields, enter your parameter values.
  6. Click Run job.

gcloud

In your shell or terminal, run the template:

gcloud dataflow jobs run JOB_NAME \
    --gcs-location gs://dataflow-templates-REGION_NAME/VERSION/Stream_DLP_GCS_Text_to_BigQuery \
    --region REGION_NAME \
    --staging-location STAGING_LOCATION \
    --parameters \
inputFilePattern=INPUT_DATA,\
datasetName=DATASET_NAME,\
batchSize=BATCH_SIZE_VALUE,\
dlpProjectId=DLP_API_PROJECT_ID,\
deidentifyTemplateName=projects/TEMPLATE_PROJECT_ID/deidentifyTemplates/DEIDENTIFY_TEMPLATE,\
inspectTemplateName=projects/TEMPLATE_PROJECT_ID/inspectTemplates/INSPECT_TEMPLATE_NUMBER

Replace the following:

  • DLP_API_PROJECT_ID: your DLP API project ID
  • JOB_NAME: a unique job name of your choice
  • REGION_NAME: the region where you want to deploy your Dataflow job—for example, us-central1
  • VERSION: the version of the template that you want to use

    You can use the following values:

      • latest to use the latest version of the template
      • a version name, to use a specific version of the template

  • STAGING_LOCATION: the location for staging local files (for example, gs://your-bucket/staging)
  • INPUT_DATA: your input file path
  • DEIDENTIFY_TEMPLATE: the Sensitive Data Protection de-identification template number
  • DATASET_NAME: the BigQuery dataset name
  • INSPECT_TEMPLATE_NUMBER: the Sensitive Data Protection inspection template number
  • BATCH_SIZE_VALUE: the batch size (the number of rows per API call for CSV files)
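
For example, a complete command with illustrative values (the bucket, project, dataset, and template IDs are hypothetical) looks like the following:

gcloud dataflow jobs run my-dlp-tokenization-job \
    --gcs-location gs://dataflow-templates-us-central1/latest/Stream_DLP_GCS_Text_to_BigQuery \
    --region us-central1 \
    --staging-location gs://my-bucket/staging \
    --parameters \
inputFilePattern=gs://my-bucket/input/file-*.csv,\
datasetName=tokenized_data,\
batchSize=500,\
dlpProjectId=my-project-id,\
deidentifyTemplateName=projects/my-project-id/deidentifyTemplates/my_deid_template,\
inspectTemplateName=projects/my-project-id/inspectTemplates/my_inspect_template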

REST

To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.templates.launch.

POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/templates:launch?gcsPath=gs://dataflow-templates-LOCATION/VERSION/Stream_DLP_GCS_Text_to_BigQuery
{
   "jobName": "JOB_NAME",
   "environment": {
       "ipConfiguration": "WORKER_IP_UNSPECIFIED",
       "additionalExperiments": []
   },
   "parameters": {
      "inputFilePattern":INPUT_DATA,
      "datasetName": "DATASET_NAME",
      "batchSize": "BATCH_SIZE_VALUE",
      "dlpProjectId": "DLP_API_PROJECT_ID",
      "deidentifyTemplateName": "projects/TEMPLATE_PROJECT_ID/deidentifyTemplates/DEIDENTIFY_TEMPLATE",
      "inspectTemplateName": "projects/TEMPLATE_PROJECT_ID/identifyTemplates/INSPECT_TEMPLATE_NUMBER"
   }
}

Replace the following:

  • PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
  • DLP_API_PROJECT_ID: your DLP API project ID
  • JOB_NAME: a unique job name of your choice
  • LOCATION: the region where you want to deploy your Dataflow job—for example, us-central1
  • VERSION: the version of the template that you want to use

    You can use the following values:

      • latest to use the latest version of the template
      • a version name, to use a specific version of the template

  • STAGING_LOCATION: the location for staging local files (for example, gs://your-bucket/staging)
  • INPUT_DATA: your input file path
  • DEIDENTIFY_TEMPLATE: the Sensitive Data Protection de-identification template number
  • DATASET_NAME: the BigQuery dataset name
  • INSPECT_TEMPLATE_NUMBER: the Sensitive Data Protection inspection template number
  • BATCH_SIZE_VALUE: the batch size (the number of rows per API call for CSV files)
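
For example, you can save the request body to a file such as request.json and send the request with curl, using an access token from the gcloud CLI for authentication:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @request.json \
  "https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/templates:launch?gcsPath=gs://dataflow-templates-LOCATION/VERSION/Stream_DLP_GCS_Text_to_BigQuery"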

What's next