Volumes overview

This page provides an overview of the volumes feature of Google Cloud NetApp Volumes.

About volumes

A volume is a file system container in a storage pool that stores application, database, and user data.

You allocate a volume's capacity from the available capacity in the storage pool, and you can define and resize that capacity without disruption to your processes.

Storage pool settings apply to the volumes contained within them automatically. These settings include service level, location, network (Virtual Private Cloud (VPC)), Active Directory policy, and customer-managed encryption key (CMEK) policy.

Volume performance

For the Flex service level, the performance capability of a volume is based on its storage pool's capacity. All volumes in a Flex storage pool share the performance of the pool.

For volumes in Standard, Premium, and Extreme service levels, the performance capabilities are based on the volume capacity.

To achieve your performance objective, provision enough capacity in your pool (Flex) or volume (Standard, Premium, and Extreme). For example, a volume with 5 TiB of provisioned space in a storage pool with the Extreme service level (128 MiBps throughput per TiB of allocated volume capacity) provides a throughput of 640 MiBps (5 × 128 MiBps).
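The capacity-based performance model can be sketched as a small calculation. Only the Extreme figure of 128 MiBps per TiB is stated on this page; the Standard and Premium per-TiB values below are illustrative assumptions, and the function name is hypothetical.

```python
# Sketch of the capacity-based performance model for non-Flex service levels.
# Only the Extreme value (128 MiBps per TiB) is stated on this page; the
# Standard and Premium values are illustrative assumptions.
THROUGHPUT_PER_TIB_MIBPS = {
    "Standard": 16,   # assumption
    "Premium": 64,    # assumption
    "Extreme": 128,   # stated above: 128 MiBps per TiB
}

def volume_throughput_mibps(service_level: str, capacity_tib: float) -> float:
    """Throughput a volume derives from its provisioned capacity."""
    return THROUGHPUT_PER_TIB_MIBPS[service_level] * capacity_tib

# The 5 TiB Extreme example above: 5 * 128 = 640 MiBps
print(volume_throughput_mibps("Extreme", 5))
```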

In addition to using pool or volume capacity to manage performance, volumes in the Premium or Extreme service level let you switch service levels by assigning them to an appropriate pool.

Space provisioning

You should provision enough capacity in your volume to hold your data and leave some empty space as a buffer for growth.

If a volume becomes full, clients receive an out-of-space error when they try to modify or add data, which can lead to problems for your applications or users. You should monitor usage of your volumes and maintain a provisioned space buffer of 20% above your expected volume utilization. For information on monitoring usage, see Monitor NetApp Volumes.
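The 20% buffer guideline translates into a simple sizing rule. The helper below is an illustrative sketch, not part of the service:

```python
def provisioned_capacity_gib(expected_usage_gib: float, buffer: float = 0.20) -> float:
    """Size a volume so expected usage leaves the recommended 20% headroom."""
    return expected_usage_gib * (1 + buffer)

# 1,000 GiB of expected data -> provision 1,200 GiB
print(provisioned_capacity_gib(1000))
```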

Snapshots consume the volume's capacity. For more information, see Snapshot space use.

Volume reversion

NetApp Volumes lets you revert volumes to a previously created snapshot. When you revert a volume, it restores all volume contents back to the point in time the snapshot was taken. Any snapshot created after the snapshot used for the reversion is lost. If you don't want to lose data, we recommend that you clone a volume or restore data with snapshots instead.

You can use volume reversion to test and upgrade applications or fend off ransomware attacks. The process is similar to overwriting the volume with a backup, but only takes a few seconds. You can revert a volume to a snapshot independent of the capacity of the volume.

Reversions happen when the volume is online and in use by clients. We recommend stopping all critical applications before you revert to avoid potential data corruption because the reversion changes open files without any notification to the application.

Block volume from deletion when clients are connected

NetApp Volumes lets you block the deletion of volumes while they are mounted by a client. If you use volumes for Google Cloud VMware Engine (GCVE) datastores, you must enable this setting. If you enable the Block volume from deletion when clients are connected setting, an error message displays when you try to delete a mounted volume.

Standard, Premium, and Extreme service levels support blocking the deletion of volumes.

The following protocols support blocking the deletion of volumes:

  • NFSv3
  • NFSv4.1
  • NFSv3 and NFSv4.1

To delete a volume when this option is enabled, all clients must first unmount the volume. After that, you must wait more than 52 hours before you can delete the volume.

Large capacity volumes (Preview)

Standard, Premium, and Extreme service levels allow volume sizes between 100 GiB and 102,400 GiB and maximum throughput of up to 4.5 GiBps. Some workloads require larger volumes and higher throughput, which can be achieved by using the large capacity volume option with Premium and Extreme service levels.

Large capacity volumes can be sized between 102,401 GiB and 1 PiB in increments of 1 GiB and deliver throughput performance of up to 12.5 GiBps.
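The stated size range can be expressed as a simple validity check. This helper is a hypothetical sketch that uses the limits above, with 1 PiB expressed as 1,048,576 GiB:

```python
LARGE_MIN_GIB = 102_401    # smallest large capacity volume
LARGE_MAX_GIB = 1_048_576  # 1 PiB expressed in GiB

def is_valid_large_capacity_size(size_gib: int) -> bool:
    """Check a requested size against the large capacity volume range (1 GiB steps)."""
    return LARGE_MIN_GIB <= size_gib <= LARGE_MAX_GIB

print(is_valid_large_capacity_size(102_400))  # False: still a regular volume size
print(is_valid_large_capacity_size(500_000))  # True
```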

Large capacity volumes in the Extreme service level offer six storage endpoints (IP addresses) to load-balance client traffic to the volume and achieve higher performance. The six IP addresses make such volumes an ideal candidate for workloads that require high performance and highly concurrent access to the same data. For recommendations on how to connect your clients, see Connect large capacity volumes with multiple storage endpoints. Large capacity volumes in the Premium service level offer one storage endpoint, which makes them suitable for large data sets with moderate or low performance requirements.

After creation, volumes can't be converted into large capacity volumes, or the other way around.

Large capacity volumes limitations

Because the large capacity volumes feature is in Preview, the following limitations apply:

  • This feature has limited availability and is not open to all customers by default. Contact your sales representative to get added to the Preview.

  • We recommend that you use a dedicated service project for large capacity volumes.

  • Volume replication is not supported.

  • Volume backups are not supported.

  • CMEK is not supported.

  • Kerberos NFSv4.1 is not supported.

  • The service level for large capacity volumes cannot be changed between Premium and Extreme.

  • The interval between snapshots must be 30 minutes or longer. This requirement has implications on scheduled snapshots. You must modify the minute and hour parameters of hourly, daily, and weekly snapshots to make sure that they are taken at least 30 minutes apart from each other.

Use auto-tiering (Preview) to reduce costs

Google Cloud NetApp Volumes lets you enable auto-tiering on a per-volume basis if auto-tiering is enabled on the storage pool. Auto-tiering reduces the overall cost of volume usage. For more information about auto-tiering, see the product overview.

After you enable auto-tiering on a volume, you can pause and resume it as needed. However, auto-tiering can't be disabled after it's enabled. You can also adjust the cooling threshold on a per-volume basis. The cooling threshold can be set between 7 and 183 days; the default is 31 days. Data that has been cool for longer than the cooling threshold moves to the cold tier once a day.
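The threshold rules (7 to 183 days, with a default of 31) can be captured in a small validation helper. The function name is hypothetical:

```python
COOLING_MIN_DAYS = 7
COOLING_MAX_DAYS = 183
COOLING_DEFAULT_DAYS = 31

def resolve_cooling_threshold(days=None):
    """Return a usable cooling threshold in days, applying the stated bounds."""
    if days is None:
        return COOLING_DEFAULT_DAYS
    if not COOLING_MIN_DAYS <= days <= COOLING_MAX_DAYS:
        raise ValueError(
            f"cooling threshold must be {COOLING_MIN_DAYS}-{COOLING_MAX_DAYS} days, got {days}"
        )
    return days

print(resolve_cooling_threshold())    # 31 (default)
print(resolve_cooling_threshold(14))  # 14
```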

For auto-tiered volumes, you can:

  • View data that resides on hot and cold tiers.

  • Adjust the cooling threshold.

The volume's used capacity is the total of the hot tier and cold tier data, and it must be less than the total volume capacity.
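That capacity rule can be checked programmatically. A hypothetical sketch:

```python
def has_free_capacity(hot_gib: float, cold_gib: float, volume_capacity_gib: float) -> bool:
    """Used capacity is hot-tier plus cold-tier data and must stay below the volume capacity."""
    return (hot_gib + cold_gib) < volume_capacity_gib

print(has_free_capacity(hot_gib=600, cold_gib=300, volume_capacity_gib=1024))  # True
```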

Auto-tiering considerations

The following considerations apply when you use auto-tiering:

  • Data on the cold tier is priced lower than data on the hot tier. Using a shorter cooling threshold can move data sooner to the cold tier, which can reduce the overall cost if the data is not accessed again soon.

  • Data on the cold tier is slower to access than data on the hot tier. Using a cooling threshold which is too short can make access to your data slower.

  • Moving data to and from the cold tier incurs data transfer costs. If you choose a short cooling threshold, data can move more frequently between the hot and cold tiers, which can increase the overall cost.
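To reason about the trade-off described above, you can model the monthly cost of a tiered volume. All prices below are placeholders, not NetApp Volumes pricing, which varies by region and service level:

```python
# Placeholder prices -- NOT real NetApp Volumes pricing.
HOT_PRICE_PER_GIB = 0.30       # assumption: $/GiB-month on the hot tier
COLD_PRICE_PER_GIB = 0.03      # assumption: $/GiB-month on the cold tier
TRANSFER_PRICE_PER_GIB = 0.02  # assumption: $/GiB moved between tiers

def monthly_cost(hot_gib: float, cold_gib: float, moved_gib: float) -> float:
    """Monthly storage cost plus tiering transfer cost under the placeholder prices."""
    return (hot_gib * HOT_PRICE_PER_GIB
            + cold_gib * COLD_PRICE_PER_GIB
            + moved_gib * TRANSFER_PRICE_PER_GIB)

# Keeping 800 GiB hot vs. tiering 600 GiB of it (moved once this month):
print(monthly_cost(800, 0, 0))      # all hot
print(monthly_cost(200, 600, 600))  # tiered
```

With these placeholder numbers, tiering cold data cuts the monthly cost even after paying the one-time transfer; a shorter threshold that moves data back and forth repeatedly would erode that saving.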

What's next

Create a volume.