GCP blog post #1509


Merged: 5 commits from levkk-gcp-blog-post into master on Jun 6, 2024

Conversation

@levkk (Contributor) commented Jun 6, 2024

No description provided.

@levkk levkk merged commit ce74b7b into master Jun 6, 2024
1 check passed
@levkk levkk deleted the levkk-gcp-blog-post branch June 6, 2024 21:50
tags: [engineering]
---

# Our migration from AWS to GCP with minimal downtime
Review comment (Contributor):

I would go with "Migrating databases from AWS to GCP with minimal downtime" to appeal to a more targeted audience.


## The migration

Our primary Serverless deployment was in Oregon, AWS *us-west-2* region. We were moving it to GCP in Iowa, *us-central1* region.
Review comment (Contributor), with a suggested edit:

- Our primary Serverless deployment was in Oregon, AWS us-west-2 region. We were moving it to GCP in Iowa, us-central1 region.

+ Much of our Serverless infrastructure was deployed in Oregon, AWS us-west-2 region. We were moving it to GCP in Iowa, us-central1 region to be more centrally located with respect to US datacenters.


The final step was to move our customers' traffic from AWS to GCP, and do so without losing a byte of data. We picked the lowest traffic period, midnight Pacific time, and shut down our AWS primary.

As soon as the systemd service stopped, we changed the DNS record to point to our GCP standby and ran `SELECT pg_promote()`. Traffic moved over almost immediately thanks to our low DNS TTL, and we were back in business.
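The cutover sequence described above (stop the old primary, flip DNS, promote the standby) can be sketched as a short script. This is a hedged illustration, not the actual runbook: the host names, the Route 53 hosted-zone ID, and the change-batch file are hypothetical placeholders, and the script defaults to a dry-run mode that only prints each command.

```shell
#!/usr/bin/env bash
# Sketch of the AWS -> GCP cutover. All hosts and IDs are hypothetical.
set -euo pipefail

# DRY_RUN=1 (the default) prints each command instead of executing it,
# so the sequence can be reviewed before the real midnight cutover.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "${DRY_RUN}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

cutover() {
  # 1. Stop the AWS primary so no new writes land on it.
  run ssh aws-primary.internal sudo systemctl stop postgresql
  # 2. Point the DNS record at the GCP standby; a low TTL means
  #    clients pick up the change almost immediately.
  run aws route53 change-resource-record-sets \
    --hosted-zone-id Z123EXAMPLE \
    --change-batch file://point-to-gcp.json
  # 3. Promote the GCP standby to a writable primary.
  run psql -h gcp-standby.internal -U postgres -c "SELECT pg_promote();"
}

cutover
```

The ordering matters: the old primary must stop accepting writes before `pg_promote()` runs on the standby, otherwise writes landing on AWS after promotion would be lost.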
Review comment (Contributor):

It might be worth mentioning zero-downtime pgcat cutovers as a possibility for in-datacenter machine transfers, and why that isn't an option here: we're moving the pgcat poolers themselves, which need to stay close to the database to minimize latency.
