Panic when connecting through alloydb auth proxy - panic: runtime error: slice bounds out of range [:1157627904] with capacity 16384 #504
Not reproducible on 1.4.1.
Thanks @andriihrachov. I'll take a look and report back.
What region is your cluster in?
The new Go Connector version (and Proxy version) introduces a metadata exchange check before the database protocol takes over. You're seeing this panic because your instance doesn't understand the metadata exchange. This is not expected behavior. Talking with some folks on the AlloyDB team, I'm told a recent rollout may have failed for a very small number of instances, and your instance seems to be one of them. If you have a support contract, you can open a ticket to get a fix sooner than the next rollout (tell them to cc me and I'll help get the right people to look at this). Otherwise, I'd suggest staying on v1.4.1, and I'll report back once I see the few stragglers fixed.
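For intuition, the panic value in the title lines up with this explanation. This is not the actual connector code, just an illustrative sketch (with a hypothetical `frameLength` helper) of how trusting a length prefix from a peer that doesn't speak the expected handshake produces exactly this slice-bounds panic: the bytes `0x45 0x00 0x00 0x00`, read as a big-endian length, decode to 1157627904.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// frameLength interprets the first four bytes of a message as a
// big-endian length prefix, the way a length-prefixed wire protocol would.
func frameLength(header []byte) int {
	return int(binary.BigEndian.Uint32(header))
}

func main() {
	buf := make([]byte, 16384) // a fixed 16 KiB read buffer

	// If the server never rolled out the metadata exchange, the bytes it
	// sends back belong to some other protocol. Read as a length prefix,
	// 0x45 0x00 0x00 0x00 decodes to 1157627904.
	header := []byte{0x45, 0x00, 0x00, 0x00}
	n := frameLength(header)
	fmt.Println(n)

	// Slicing buf[:n] without validating n against cap(buf) is exactly
	// the kind of operation that panics with
	// "slice bounds out of range [:1157627904] with capacity 16384".
	if n > cap(buf) {
		fmt.Println("length exceeds capacity; buf[:n] would panic")
	}
}
```

The fix on the server side is to roll out the metadata exchange support; on the client side, validating the decoded length against the buffer capacity before slicing avoids the panic.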
us-east4. Thanks a lot, I downgraded to 1.4.1.
Can you point us to the time window when you saw these errors?
2023-11-17, around 17:00 CET.
FWIW, I tested this on a newly created instance and didn't see any issues. It's possible only some existing instances are affected.
Thanks. It was a test DB anyway, so I'll recreate it.
This remains an unexpected problem. I'll leave this open for anyone else who runs into it, and we'll close it once the underlying issue has been fixed.
FYI I've moved our |
Quick update here after digging into this a bit more. If anyone runs into this issue here's how to work around it:
To get Auto IAM AuthN working:
If you're still hitting this error, reach out to support for more help. We expect this issue will be fully resolved during our next data plane rollout. Sorry for the inconvenience.
For those playing the home edition of "fix that bug!", these are the explicit steps to temporarily revert:
This has now been fixed in the latest rollout.
Bug Description
The AlloyDB Auth Proxy crashes with the error above when deployed as a sidecar on Kubernetes.
Example code (or command)
No response
Stacktrace
Steps to reproduce?
Environment
GKE
Additional Details
No response