Using a custom domain with the Read/Write Splitting Plugin does not work. #1129
Comments
Hi @briete, thank you for reaching out with this issue. Could you please provide more details about your custom domain? You can omit the real domain name for your privacy and replace it with a fake one. We would like to understand your DNS aliases (CNAME).

The wrapper driver supports user custom domains. Let's assume a user custom domain is in place; the instance names are completely made up and could be any names. The driver expects DNS to be configured in the following way (CNAME):
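For illustration only, a minimal sketch of that kind of layout and connection. Every domain name, instance name, and credential below is made up, and it assumes the aws-advanced-jdbc-wrapper and a PostgreSQL JDBC driver are on the classpath:

```java
// Hypothetical DNS layout (Route53 or similar), all names fabricated:
//
//   db.example.internal  CNAME  my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com
//
// while the DB instances keep their regular RDS instance endpoints:
//
//   instance-1.abc123.us-east-1.rds.amazonaws.com
//   instance-2.abc123.us-east-1.rds.amazonaws.com
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class CustomDomainConnect {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app_user");           // fabricated credentials
        props.setProperty("password", "app_password");
        props.setProperty("wrapperPlugins", "readWriteSplitting");

        // The application connects through the custom domain instead of the
        // RDS-generated cluster endpoint.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:aws-wrapper:postgresql://db.example.internal:5432/appdb", props)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
```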
Thank you!
Hi @sergiyvamz, thanks for the reply! The CNAME record is set on the cluster endpoint, not on an instance endpoint.
The JDBC URL is set to the CNAME domain of the writer's cluster endpoint.
I would like queries to be routed to the cluster reader endpoint instead of the instance endpoints. Is there such an option?
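For context, as I understand it from the driver documentation, the Read/Write Splitting Plugin routes work to a reader when the application marks the connection read-only via Connection.setReadOnly(true), and it picks a reader instance from the cluster topology rather than going through the reader cluster endpoint. A minimal sketch with a fabricated URL and credentials:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class ReadOnlySwitchExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app_user");           // fabricated credentials
        props.setProperty("password", "app_password");
        props.setProperty("wrapperPlugins", "readWriteSplitting");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:aws-wrapper:postgresql://db.example.internal:5432/appdb", props)) {

            // The connection starts on the writer; mark it read-only so the
            // plugin switches the underlying connection to a reader.
            conn.setReadOnly(true);
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
                rs.next();
            }

            // Switch back to the writer before performing writes.
            conn.setReadOnly(false);
        }
    }
}
```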
Hi @sergiyvamz, sorry, I solved it myself. I was able to set the
Describe the bug
I am trying to connect to Aurora's cluster reader endpoint using the Read/Write Splitting Plugin, but it does not work with custom domains.
I currently have a custom domain set up for Aurora's cluster endpoints in Route53.
If I specify Aurora's cluster writer endpoint directly in the JDBC URL, I can connect to a reader instance, but if I use the custom domain, I cannot. The following logs were output, and the hostname in them was incorrect.
I looked at the documentation and source code and thought I could use the clusterInstanceHostPattern parameter, but it seems that the subdomain depends on the DB instance identifier. I don't want to change my custom domain for this.
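For reference, a sketch of how that parameter is typically shaped, with all names fabricated: the "?" placeholder is replaced with each DB instance identifier, which is why the rest of the pattern has to match the instance endpoints' subdomain rather than an arbitrary custom domain.

```java
import java.util.Properties;

public class HostPatternExample {

    // Fabricated example: if the cluster's instance endpoints are
    //   instance-1.abc123.us-east-1.rds.amazonaws.com
    //   instance-2.abc123.us-east-1.rds.amazonaws.com
    // the driver rebuilds those hostnames by substituting the DB instance
    // identifiers (instance-1, instance-2, ...) for the "?" placeholder.
    static Properties hostPatternProps() {
        Properties props = new Properties();
        props.setProperty("clusterInstanceHostPattern", "?.abc123.us-east-1.rds.amazonaws.com");
        return props;
    }
}
```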
Expected Behavior
Is there a way to simply specify the writer and reader endpoints for a cluster?
What plugins are used? What other connection properties were set?
Read/Write Splitting Plugin
Current Behavior
Reproduction Steps
We use Spring Boot 3.1.4. Methods that use reader instances are annotated with @Transactional(readOnly = true).

application.yml

Config
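As an illustration of the kind of usage described, not the actual project code: a minimal Spring sketch, assuming DataSourceTransactionManager or an equivalent setup that propagates the read-only flag to the underlying JDBC connection.

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReaderQueryService {

    private final JdbcTemplate jdbcTemplate;

    public ReaderQueryService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Read-only transaction: Spring marks the underlying JDBC connection as
    // read-only, which is the signal the Read/Write Splitting Plugin uses to
    // route the work to a reader.
    @Transactional(readOnly = true)
    public Integer readSomething() {
        return jdbcTemplate.queryForObject("SELECT 1", Integer.class);
    }
}
```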
Possible Solution
Allow cluster reader endpoints to be set in the Read/Write Splitting Plugin parameters
Additional Information/Context
No response
The AWS Advanced JDBC Driver version used
2.3.9
JDK version used
openjdk 17.0.6 2023-01-17 LTS
OpenJDK Runtime Environment Corretto-17.0.6.10.1 (build 17.0.6+10-LTS)
OpenJDK 64-Bit Server VM Corretto-17.0.6.10.1 (build 17.0.6+10-LTS, mixed mode, sharing)
Operating System and version
Linux