Get started with OpenShift APIs for Data Protection

November 4, 2024
Mohammad Ahmad, Prasad Joshi, Phuong Nguyen
Related topics:
Databases, Operators
Related products:
Red Hat OpenShift Container Platform, Red Hat OpenShift Data Foundation


    OpenShift APIs for Data Protection (OADP) is an operator that facilitates the backup and restore of workloads in Red Hat OpenShift clusters. Based on the upstream open source project Velero, it allows you to back up and restore all Kubernetes resources for a given project, including persistent volumes.

    The underlying mechanism within OADP that backs up and restores persistent volumes is Restic, Kopia, CSI snapshots, or the CSI Data Mover. Backups are incremental by default.

    This guide demonstrates a basic use case of OADP by simulating a disaster recovery scenario: we back up a simple database application running in a namespace, delete the namespace, and then restore the same application from the backup.

    What this guide is

    • A simple demonstration in a home lab (with single node OpenShift version 4.16) using Red Hat OpenShift Data Foundation, connected to an external Ceph cluster.
    • A demonstration of a very basic use-case of OADP.

    What this guide is not

    • An exploration of more sophisticated namespaces with different types of storage.
    • A tutorial on how to install single node OpenShift, or OpenShift Data Foundation.
    • A tutorial that factors in appropriate SSL/TLS configuration.
    • A guide to using OADP via the Velero command-line interface (CLI).

    Prerequisites

    Prior to installing OADP, ensure you have the following:

    • A configured OpenShift cluster (we are using single-node OpenShift v4.16 for this demo).
    • The oc (OpenShift client) CLI.
    • An S3-compatible bucket for storing the resources (we are using OpenShift Data Foundation connected to external Ceph for this demo, which includes NooBaa, providing S3-compatible object bucket storage).
    • CSI-compatible storage.
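    Before installing anything, it can help to confirm the basics are in place. A minimal pre-flight sketch (the oc calls are standard; the checks are an assumption, adapt them to your cluster):

```shell
# Hedged pre-flight sketch: confirm the oc CLI is present and logged in
# to a cluster before installing the operator.
preflight() {
  command -v oc >/dev/null || { echo "oc CLI not found" >&2; return 1; }
  oc whoami >/dev/null 2>&1 || { echo "not logged in to a cluster" >&2; return 1; }
  echo "preflight ok"
}

# Follow up by listing storage classes to confirm CSI-backed storage exists:
# preflight && oc get storageclass
```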

    Installation of OADP operator

    Log in to your OpenShift console, and from the Operators menu, select OperatorHub. Search for OADP and choose the operator from Red Hat, as shown in Figure 1.

    Red Hat Operator Hub console for installing operators via a GUI.
    Figure 1: Red Hat Operator Hub console for installing operators via a GUI.

    Once installed, you should see something similar to this (Figure 2).

    Status of OADP Operator install.
    Figure 2: Status of OADP Operator install.

    Set up a default backing store

    OADP needs somewhere to store backed-up data. Since we have OpenShift Data Foundation set up with NooBaa, we will create an S3-compatible object store.

    We use the following YAML file to create our object bucket:

    $ cat > ./oadp_noobaa_objectbucket.yaml << EOF
    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      labels:
        app: noobaa
        bucket-provisioner: openshift-storage.noobaa.io-obc
        noobaa-domain: openshift-storage.noobaa.io
      name: mys3bucket
      namespace: openshift-adp
    spec:
      additionalConfig:
        bucketclass: noobaa-default-bucket-class
      bucketName: mys3bucket-mydemo-10000000
      generateBucketName: mys3bucket
      objectBucketName: obc-openshift-adp-mys3bucket
      storageClassName: openshift-storage.noobaa.io
    EOF
    $ oc apply -f ./oadp_noobaa_objectbucket.yaml
    objectbucketclaim.objectbucket.io/mys3bucket created

    Verify the bucket is created:

    $ oc get obc
    NAME         STORAGE-CLASS                 PHASE   AGE
    mys3bucket   openshift-storage.noobaa.io   Bound   25s

    If the bucket is not bound yet, the STORAGE-CLASS field will not be populated.
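    If you script this verification, the Bound phase can be checked programmatically. A hedged helper that parses the `oc get obc` table shown above (column order NAME, STORAGE-CLASS, PHASE, AGE is an assumption based on this output):

```shell
# Hedged helper: exit 0 if the named ObjectBucketClaim shows phase Bound
# in `oc get obc` output (columns: NAME STORAGE-CLASS PHASE AGE).
obc_bound() { awk -v n="$1" '$1 == n && $3 == "Bound" { found = 1 } END { exit !found }'; }

# Usage:
# oc get obc | obc_bound mys3bucket && echo "mys3bucket is bound"
```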

    Or using the NooBaa CLI:

    $ noobaa obc list 
    NAMESPACE       NAME                      BUCKET-NAME                  STORAGE-CLASS                 BUCKET-CLASS                  PHASE   
    openshift-adp   mys3bucket                mys3bucket-mydemo-10000000   openshift-storage.noobaa.io   noobaa-default-bucket-class   Bound      

    To allow OADP to access this bucket, we need to extract its access key ID and secret access key.

    Obtain the access key ID:

    $ oc get secret mys3bucket -o json | jq -r .data.AWS_ACCESS_KEY_ID | base64 -d
    mY_aCceS_kEy

    Obtain the secret access key:

    $ oc get secret mys3bucket -o json | jq -r .data.AWS_SECRET_ACCESS_KEY | base64 -d
    mY_sEcRet_kEy

    Create a file with these credentials in the following format:

    $ cat << EOF > ./credentials-velero
    [default]
    aws_access_key_id=mY_aCceS_kEy
    aws_secret_access_key=mY_sEcRet_kEy
    EOF

    Create the credentials secret from the file above:

    $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
    secret/cloud-credentials created
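    The extraction and file-creation steps above can also be combined into a single script. A hedged sketch; the secret name (mys3bucket) and namespace (openshift-adp) match this guide, so adjust them for your cluster:

```shell
# Hedged sketch: fetch both keys from the bucket secret and write the
# Velero credentials file in one step. Secret name and namespace are
# assumptions taken from this guide.
write_velero_creds() {
  local id key
  id="$(oc get secret mys3bucket -n openshift-adp -o json \
        | jq -r .data.AWS_ACCESS_KEY_ID | base64 -d)"
  key="$(oc get secret mys3bucket -n openshift-adp -o json \
        | jq -r .data.AWS_SECRET_ACCESS_KEY | base64 -d)"
  printf '[default]\naws_access_key_id=%s\naws_secret_access_key=%s\n' \
    "$id" "$key" > "$1"
}

# Usage:
# write_velero_creds ./credentials-velero
# oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=./credentials-velero
```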

    Create the default backing store based on the YAML below:

    $ cat > ./mys3backuplocation.yaml << EOF 
    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      namespace: openshift-adp
      name: ts-dpa
    spec:
      configuration:
        velero:
          defaultPlugins:
          - openshift
          - aws
          - csi
      backupLocations:
        - name: default
          velero:
            provider: aws
            objectStorage:
              bucket: mys3bucket-mydemo-10000000
              prefix: velero
            config:
              profile: default # should be same as the cloud-credentials/cloud 
              region: noobaa
              s3ForcePathStyle: "true"
              # from oc get route -n openshift-storage s3
              s3Url: https://s3-openshift-storage.apps.sno1.local.momolab.io  
              insecureSkipTLSVerify: "true"
            credential:
              name: cloud-credentials
              key: cloud
            default: true
    EOF
    $ oc apply -f ./mys3backuplocation.yaml
    dataprotectionapplication.oadp.openshift.io/ts-dpa created
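    As the inline comment notes, the s3Url value comes from the s3 route in the openshift-storage namespace. A hedged one-liner to construct it (assumes the route is named s3 and is served over HTTPS):

```shell
# Hedged helper: turn a route host into the s3Url value by prefixing https://.
s3_url_from_route() { sed 's|^|https://|'; }

# Usage:
# oc get route -n openshift-storage s3 -o jsonpath='{.spec.host}' | s3_url_from_route
```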

    Once created, you should be able to verify the configuration was applied successfully:

    $ oc get dpa
    NAME     AGE
    ts-dpa   34s

    Check the DPA status:

    $ oc describe dpa ts-dpa | grep -A 5 "Status"
    Status:
      Conditions:
        Last Transition Time:  2024-09-16T23:15:43Z
        Message:               Reconcile complete
        Reason:                Complete
        Status:                True
        Type:                  Reconciled
    Events:                    <none>

    Afterwards, you should see the backup storage location become available:

    $ oc get bsl
    NAME      PHASE       LAST VALIDATED   AGE   DEFAULT
    default   Available   17s              24s   true
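    The location may take a few seconds to validate, so a polling helper can be handy when scripting this. A hedged sketch; the jsonpath query in the usage comment is an assumption based on the BackupStorageLocation status fields:

```shell
# Hedged helper: poll until a resource reports the expected phase.
# First argument is the expected phase; the rest is a command that
# prints the current phase (typically a jsonpath query).
wait_for_phase() {
  expected="$1"; shift
  i=0
  while [ "$i" -lt 30 ]; do
    phase="$("$@")"
    if [ "$phase" = "$expected" ]; then
      echo "reached $expected"
      return 0
    fi
    sleep 2
    i=$((i + 1))
  done
  echo "timed out waiting for $expected" >&2
  return 1
}

# Usage against this guide's default BackupStorageLocation:
# wait_for_phase Available oc get bsl default -n openshift-adp -o 'jsonpath={.status.phase}'
```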

    Create a basic application 

    Here, we create a simple to-do list database application:

    $ oc apply -f https://raw.githubusercontent.com/openshift/oadp-operator/master/tests/e2e/sample-applications/mysql-persistent/mysql-persistent.yaml
    namespace/mysql-persistent created
    serviceaccount/mysql-persistent-sa created
    persistentvolumeclaim/mysql created
    securitycontextconstraints.security.openshift.io/mysql-persistent-scc created
    service/mysql created
    deployment.apps/mysql created
    service/todolist created
    Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
    deploymentconfig.apps.openshift.io/todolist created
    route.route.openshift.io/todolist-route created

    Verify the application is created:

    $ oc project mysql-persistent
    Now using project "mysql-persistent" on server "https://api.sno1.local.momolab.io:6443".
    
    $ oc get pods
    NAME                     READY   STATUS      RESTARTS   AGE
    mysql-6d6c7fdb65-gwwbl   2/2     Running     0          12m
    todolist-1-deploy        0/1     Completed   0          19m
    todolist-1-t78k6         1/1     Running     0          19m
    $ oc get pvc
    NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                           VOLUMEATTRIBUTESCLASS   AGE
    mysql   Bound    pvc-fb0571b9-7999-459b-8b62-32803a9595a4   1Gi        RWO            ocs-external-storagecluster-ceph-rbd   <unset>                 19m
    

    Add a few tasks, as shown in Figure 3.

    GUI interface of simple TODO application with a couple of entries for testing (pre restore).
    Figure 3: GUI interface of simple TODO application with a couple of entries for testing (pre restore).

    Create a backup custom resource

    Use the following YAML to create a backup CR:

    $ cat > ./backup_mysql-persistent.yaml << EOF
    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: backup-mysql-persistent
      labels:
        velero.io/storage-location: default
      namespace: openshift-adp
    spec:
      hooks: {}
      includedNamespaces:
      - mysql-persistent
      includedResources: [] 
      excludedResources: [] 
      storageLocation: default
      ttl: 720h0m0s
    EOF
    
    $ oc create -f backup_mysql-persistent.yaml
    backup.velero.io/backup-mysql-persistent created

    Check the CR:

    $ oc get backups -n openshift-adp
    NAME               AGE
    backup-mysql-persistent   30s

    Verify that the backup succeeded:

    $ oc describe backup -n openshift-adp backup-mysql-persistent
    Name:         backup-mysql-persistent
    Namespace:    openshift-adp
    Labels:       velero.io/storage-location=default
    Annotations:  velero.io/resource-timeout: 10m0s
                  velero.io/source-cluster-k8s-gitversion: v1.29.7+4510e9c
                  velero.io/source-cluster-k8s-major-version: 1
                  velero.io/source-cluster-k8s-minor-version: 29
    API Version:  velero.io/v1
    Kind:         Backup
    Metadata:
      Creation Timestamp:  2024-09-18T23:21:41Z
      Generation:          6
      Resource Version:    3043558
      UID:                 47ee04bf-3570-480b-971e-ca006821b370
    Spec:
      Csi Snapshot Timeout:          10m0s
      Default Volumes To Fs Backup:  false
      Excluded Resources:
      Hooks:
      Included Namespaces:
        mysql-persistent
      Included Resources:
      Item Operation Timeout:  4h0m0s
      Snapshot Move Data:      false
      Storage Location:        default
      Ttl:                     720h0m0s
    Status:
      Backup Item Operations Attempted:  1
      Backup Item Operations Completed:  1
      Completion Timestamp:              2024-09-18T23:21:55Z
      Csi Volume Snapshots Attempted:    1
      Csi Volume Snapshots Completed:    1
      Expiration:                        2024-10-18T23:21:41Z
      Format Version:                    1.1.0
      Hook Status:
      Phase:  Completed
      Progress:
        Items Backed Up:  64
        Total Items:      64
      Start Timestamp:    2024-09-18T23:21:41Z
      Version:            1
    Events:               <none>

    The Completed phase indicates the backup was successful.
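    Rather than reading the full describe output, the phase line can be pulled out directly. A hedged helper that filters the output format shown above:

```shell
# Hedged helper: extract the Phase value from `oc describe backup` output.
phase_of() { awk -F': *' '/^ *Phase:/ { print $2; exit }'; }

# Usage:
# oc describe backup -n openshift-adp backup-mysql-persistent | phase_of
```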

    Verify data on NooBaa S3 bucket

    To verify the content, download and install the AWS CLI utility and provide it with the NooBaa details as follows (if you have SSL problems, use http instead of https for the endpoint below):

    $ cat > ./setup_alias.sh << EOF
    export NOOBAA_S3_ENDPOINT=https://s3-openshift-storage.apps.sno1.local.momolab.io
    export NOOBAA_ACCESS_KEY=mY_aCceS_kEy
    export NOOBAA_SECRET_KEY=mY_sEcRet_kEy
    alias s3='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint $NOOBAA_S3_ENDPOINT --no-verify-ssl s3'
    EOF
    $ source ./setup_alias.sh

    Show the contents of the bucket (SSL verification is skipped here, but ideally set up proper certificates):

    $ s3 ls s3://mys3bucket-mydemo-10000000/ 2>/dev/null
                               PRE velero/

    Note

    2>/dev/null was appended to the command to hide a certificate warning from the Python urllib3 module, for clarity.


    The velero/ prefix (shown as PRE velero/) is where all the backups are stored.

    Test a restore

    Delete the namespace mysql-persistent: 

    $ oc delete project mysql-persistent
    project.project.openshift.io "mysql-persistent" deleted

    Create the following restore CR:

    $ cat > ./restore_mysql-persistent.yaml << EOF
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: restore-mysql-persistent
      namespace: openshift-adp
    spec:
      backupName: backup-mysql-persistent
    EOF
    $ oc create -f restore_mysql-persistent.yaml 
    restore.velero.io/restore-mysql-persistent created

    Check the status of the restore CR:

    $ oc get restore -n openshift-adp
    NAME                       AGE
    restore-mysql-persistent   20s

    Verify restore was successful:

    $ oc describe restore restore-mysql-persistent
    Name:         restore-mysql-persistent
    Namespace:    openshift-adp
    Labels:       <none>
    Annotations:  <none>
    API Version:  velero.io/v1
    Kind:         Restore
    Metadata:
      Creation Timestamp:  2024-09-18T23:24:26Z
      Finalizers:
        restores.velero.io/external-resources-finalizer
      Generation:        8
      Resource Version:  3044806
      UID:               156fef20-d2e2-42b2-9542-544723f595f2
    Spec:
      Backup Name:  backup-mysql-persistent
      Excluded Resources:
        nodes
        events
        events.events.k8s.io
        backups.velero.io
        restores.velero.io
        resticrepositories.velero.io
        csinodes.storage.k8s.io
        volumeattachments.storage.k8s.io
        backuprepositories.velero.io
      Item Operation Timeout:  4h0m0s
    Status:
      Completion Timestamp:  2024-09-18T23:24:51Z
      Hook Status:
      Phase:  Completed
      Progress:
        Items Restored:  32
        Total Items:     32
      Start Timestamp:   2024-09-18T23:24:26Z
      Warnings:          7
    Events:              <none>
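    The key fields of that describe output can be summarized with a small filter. A hedged sketch based on the output format shown above (the Warnings count flags restored items that needed adjustment, so it is worth surfacing):

```shell
# Hedged helper: print Phase and Warnings from `oc describe restore` output
# as key=value lines.
restore_summary() {
  awk -F': *' '/^ *(Phase|Warnings):/ { gsub(/^ +/, "", $1); print $1 "=" $2 }'
}

# Usage:
# oc describe restore -n openshift-adp restore-mysql-persistent | restore_summary
```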

    Wait for the pods to come up, and verify the data is preserved as expected, as shown in Figure 4.

    GUI interface of simple TODO application with a couple of entries for testing (post restore).
    Figure 4: GUI interface of simple TODO application with a couple of entries for testing (post restore).

    Conclusion

    Using OADP, we backed up a basic application deployed in a specific namespace by capturing its resources and persistent volumes. After verifying the backup, we deleted the entire namespace, including all associated data and configuration. With a few simple commands, we then initiated a full restoration, successfully recovering the namespace, the application, and its data. This demonstrates the tool's reliability for seamless, end-to-end recovery of OpenShift workloads.

    Acknowledgment

    This article wouldn’t be possible without the help of Red Hat’s OADP team.

    Last updated: November 8, 2024
