Connect to a cluster from outside its VPC

This page examines different ways to connect to an AlloyDB for PostgreSQL cluster from outside its configured virtual private cloud (VPC). It assumes that you have already created an AlloyDB cluster.

About external connections

Your AlloyDB cluster comprises a number of nodes within a Google Cloud VPC. When you create a cluster, you also configure private services access between one of your own VPCs and the Google-managed VPC that contains your new cluster. This peered connection lets you use private IP addresses to reach resources on the cluster's VPC as if they were part of your own VPC.

Situations exist where your application must connect to your cluster from outside this connected VPC:

  • Your application runs elsewhere within the Google Cloud ecosystem, outside of the VPC that you connected to your cluster through private services access.

  • Your application runs on a VPC that exists outside of Google's network.

  • Your application runs on-premises, on a machine that connects to Google Cloud over the public internet.

In all of these cases, you must set up an additional service to enable this kind of external connection to your AlloyDB cluster.

Summary of external-connection solutions

We recommend two general solutions for making external connections, depending upon your needs:

  • An intermediary VM running a proxy service within your cluster's VPC. This self-managed approach suits development, prototyping, and other workloads that can tolerate depending on a single VM.

  • A Network Connectivity product, such as Cloud VPN or Cloud Interconnect. This approach takes more effort to set up, but provides the availability that production environments require.

The next several sections describe these external-connection solutions in detail.

Connect through an intermediary VM

To establish a connection to an AlloyDB cluster from outside its VPC using open-source tools and a minimum of additional resources, run a proxy service on an intermediary VM set up within that VPC. You can set up a new VM for this purpose, or use a VM already running within your AlloyDB cluster's VPC.

As a self-managed solution, using an intermediary VM generally costs less and has a faster set-up time than using a Network Connectivity product. It also has downsides: the connection's availability, security, and data throughput all become dependent on the intermediary VM, which you must maintain as part of your project.

Connect through IAP

Using Identity-Aware Proxy (IAP), you can securely connect to your cluster without the need to expose the intermediary VM's public IP address. You use a combination of firewall rules and Identity and Access Management (IAM) to limit access through this route. This makes IAP a good solution for non-production uses like development and prototyping.

To set up IAP access to your cluster, follow these steps:

  1. Install Google Cloud CLI on your external client.

  2. Install the AlloyDB Auth Proxy on the intermediary VM.
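
    For example, on a Linux VM you might download and install the Auth Proxy binary as follows. The version number shown here is illustrative only; check the AlloyDB Auth Proxy releases for the current one:

    # Download the Auth Proxy binary (example version) and make it executable.
    wget https://storage.googleapis.com/alloydb-auth-proxy/v1.10.1/alloydb-auth-proxy.linux.amd64 \
         -O alloydb-auth-proxy
    chmod +x alloydb-auth-proxy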

  3. Run the AlloyDB Auth Proxy, having it listen on its default address of 127.0.0.1:

    ./alloydb-auth-proxy \
      /projects/my-project/locations/us-central1/clusters/my-cluster/instances/my-primary
    
  4. Prepare your project for IAP TCP forwarding.

    When defining the new firewall rule, allow ingress TCP traffic to port 22 (SSH). If you are using your project's default network with its pre-populated default-allow-ssh rule enabled, then you don't need to define an additional rule.
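
    If you do need a rule, a minimal sketch looks like the following. It allows SSH ingress from IAP's published TCP-forwarding range, 35.235.240.0/20, and assumes your VM is on the default network:

    gcloud compute firewall-rules create allow-ssh-ingress-from-iap \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:22 \
        --source-ranges=35.235.240.0/20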

  5. Set up port forwarding between your external client and the intermediary VM using SSH through IAP:

    gcloud compute ssh my-vm \
           --tunnel-through-iap \
           --zone=us-central1-a \
           --ssh-flag="-L 5432:localhost:5432"
    
  6. Test your connection using psql on your external client, having it connect to the local port you specified in the previous step. For example, to connect as the postgres user role to port 5432:

    psql -h 127.0.0.1 -p 5432 -U postgres
    

Connect through a SOCKS proxy

Running a SOCKS service on the intermediary VM provides a flexible and scalable connection to your AlloyDB cluster, with end-to-end encryption provided by the AlloyDB Auth Proxy. With appropriate configuration, you can make it suitable for production workloads.

This solution includes these steps:

  1. Install, configure, and run a SOCKS server on the intermediary VM. One example is Dante, a popular open-source solution.

    Configure the server to bind to the VM's ens4 network interface for both external and internal connections. Specify any port you wish for internal connections.
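
    For example, a minimal Dante configuration (/etc/danted.conf) along these lines binds both sides to ens4 and listens on port 1080. Treat it as a sketch only: it performs no SOCKS-level authentication, so restrict access with VPC firewall rules as described in the next step.

    # /etc/danted.conf -- minimal sketch; harden before production use.
    logoutput: syslog

    # Accept client connections on the VM's primary interface, port 1080.
    internal: ens4 port = 1080

    # Use the same interface for outbound connections.
    external: ens4

    # No SOCKS-level authentication; access is limited by VPC firewall rules.
    clientmethod: none
    socksmethod: none

    client pass {
        from: 0.0.0.0/0 to: 0.0.0.0/0
    }

    socks pass {
        from: 0.0.0.0/0 to: 0.0.0.0/0
    }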

  2. Configure your VPC's firewall to allow TCP traffic from the appropriate IP address or range to the SOCKS server's configured port.
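
    For example, the following rule admits TCP traffic on port 1080 from a single external range. The network name, port, and source range shown are placeholders; substitute your own:

    gcloud compute firewall-rules create allow-socks-ingress \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:1080 \
        --source-ranges=203.0.113.0/24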

  3. Install the AlloyDB Auth Proxy on the external client.

  4. Run the AlloyDB Auth Proxy on your external client, with the ALL_PROXY environment variable set to the intermediary VM's IP address, and specifying the port that the SOCKS server uses.

    This example configures the AlloyDB Auth Proxy to connect to the database at my-main-instance, by way of a SOCKS server running at 198.51.100.1 on port 1080:

    ALL_PROXY=socks5://198.51.100.1:1080 ./alloydb-auth-proxy \
      /projects/my-project/locations/us-central1/clusters/my-cluster/instances/my-main-instance
    

    If you are connecting from a peered VPC, you can use the intermediary VM's internal IP address; otherwise, use its external IP address.

  5. Test your connection using psql on your external client, having it connect to the port that the AlloyDB Auth Proxy listens on. For example, to connect as the postgres user role to port 5432:

    psql -h 127.0.0.1 -p 5432 -U postgres
    

Connect through a PostgreSQL pooler

If you want to run the AlloyDB Auth Proxy on the intermediary VM instead of on each external client, then you can enable secure connections to it by pairing it with a protocol-aware proxy, also known as a pooler. Popular open-source poolers for PostgreSQL include Pgpool-II and PgBouncer.

In this solution, you run both the AlloyDB Auth Proxy and the pooler on the intermediary VM. Your client or application can then securely connect directly to the pooler over SSL, without the need to run any additional services. The pooler takes care of passing PostgreSQL queries along to your AlloyDB cluster through the Auth Proxy.
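
For example, a PgBouncer configuration along the following lines accepts TLS connections from clients on port 6432 and forwards them to an AlloyDB Auth Proxy listening locally on its default port of 5432. The database alias, credentials file, and certificate paths are placeholders; see the PgBouncer documentation for the full set of options:

    ; pgbouncer.ini -- minimal sketch, assuming the Auth Proxy listens on 127.0.0.1:5432.
    [databases]
    ; "mydb" is the alias that clients use in their connection strings.
    mydb = host=127.0.0.1 port=5432

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt

    ; Require TLS from clients, using a certificate that you provision for the VM.
    client_tls_sslmode = require
    client_tls_key_file = /etc/pgbouncer/server.key
    client_tls_cert_file = /etc/pgbouncer/server.crt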

Because every instance within an AlloyDB cluster has its own internal IP address, each Auth Proxy process can communicate with only one specific instance: either the primary instance or a read pool instance. Therefore, you need to run a separate pooler service, with an appropriately configured SSL certificate, for every instance that you want to expose.
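
For example, to expose a read pool instance alongside the primary, you might run a second Auth Proxy process on another local port and point a second pooler at it. The instance path and port here are illustrative:

    # Hypothetical second Auth Proxy process for a read pool instance,
    # listening on local port 5433 instead of the default 5432.
    ./alloydb-auth-proxy --port 5433 \
      /projects/my-project/locations/us-central1/clusters/my-cluster/instances/my-read-pool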

Connect through Cloud VPN or Cloud Interconnect

For production work requiring high availability (HA), we recommend the use of a Google Cloud Network Connectivity product: either Cloud VPN or Cloud Interconnect, depending upon your external service's needs and network topology. You then configure Cloud Router to advertise the appropriate routes.

While using a Network Connectivity product is a more involved process than setting up an intermediary VM, this approach shifts the burdens of uptime and availability from you to Google. In particular, HA VPN offers a 99.99% availability SLA, making it appropriate for production environments.

Network Connectivity solutions also free you from the need to maintain a separate, secure VM as part of your application, avoiding the single-point-of-failure risks inherent with that approach.

To start learning more about these solutions, see Choosing a Network Connectivity product.

What's next