GCP blog post #1509
Conversation
tags: [engineering]
---

# Our migration from AWS to GCP with minimal downtime
I would go with "Migrating databases from AWS to GCP with minimal downtime" to appeal to a more targeted audience.
## The migration

Our primary Serverless deployment was in Oregon, AWS *us-west-2* region. We were moving it to GCP in Iowa, *us-central1* region.
- Our primary Serverless deployment was in Oregon, AWS us-west-2 region. We were moving it to GCP in Iowa, us-central1 region.
+ Much of our Serverless infrastructure was deployed in Oregon, AWS us-west-2 region. We were moving it to GCP in Iowa, us-central1 region to be more centrally located with respect to US datacenters.

The final step was to move our customers' traffic from AWS to GCP, and do so without losing a byte of data. We picked the lowest-traffic period, midnight Pacific time, and shut down our AWS primary.

As soon as the systemd service stopped, we changed the DNS record to point to our GCP standby and ran `SELECT pg_promote()`. Traffic moved over almost immediately, thanks to our low DNS TTL, and we were back in business.
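Concretely, that cutover boils down to a short sequence of commands. Here is a rough sketch rather than our exact runbook; the service unit, host names, and DNS step are placeholders:

```bash
# Rough sketch of the cutover sequence; unit names, hosts, and the DNS step
# are placeholders, not our real infrastructure.

# 1. Stop the old primary on AWS so no new writes land there.
sudo systemctl stop postgresql

# 2. Repoint the database DNS record at the GCP standby (Route 53, Cloud DNS,
#    or whichever provider manages the zone). A low TTL, set well in advance,
#    keeps propagation fast.

# 3. Promote the GCP standby to primary (PostgreSQL 12+).
psql -h gcp-standby.internal -U postgres -c "SELECT pg_promote();"

# 4. Sanity check: the promoted node should no longer be in recovery.
psql -h db.example.com -U postgres -c "SELECT pg_is_in_recovery();"   # expect f
```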
It might be worth mentioning zero-downtime pgcat cutovers as a possibility for in-datacenter machine transfers, and explaining why that isn't an option here: the pgcat poolers themselves are moving so they can stay close to the database and minimize latency.
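For context, a pooler-level cutover along those lines typically pauses client traffic at the pooler, promotes the standby, repoints the pooler at the new primary, and resumes, so clients never see a DNS change. A rough sketch, written with PgBouncer-style admin commands (pgcat's admin interface is modeled on PgBouncer's, but check its docs for the exact syntax); hosts, ports, and users are placeholders:

```bash
# Hypothetical pooler-level cutover, PgBouncer admin-console syntax.

# 1. Pause the pool: in-flight queries finish, new ones queue at the pooler.
psql -h pooler.internal -p 6432 -U admin pgbouncer -c "PAUSE;"

# 2. Promote the standby that will become the new primary.
psql -h new-primary.internal -U postgres -c "SELECT pg_promote();"

# 3. Point the pooler's config at the new primary, then reload it.
psql -h pooler.internal -p 6432 -U admin pgbouncer -c "RELOAD;"

# 4. Resume traffic; queued clients reconnect to the new primary with no DNS change.
psql -h pooler.internal -p 6432 -U admin pgbouncer -c "RESUME;"
```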