Earlier setup
32 partitions on the database
User pool resource limits:
CPU limit: 60
Queue depth: 80
Connector version: 4.1.6 (AWS Marketplace SingleStore connector)
With this setup I am using AWS Glue with 32 executors, and I could fetch 1 billion records within 30 minutes.
The one change we have made is increasing the partition count on the database to 150. Now I see that some queries run, some queries get queued, and the running queries never finish.
Any idea what could be causing this? Do all 150 queries need to execute in parallel?
If you are using the ReadFromAggregators parallel read feature, then yes: all reading tasks must start at the same time.
In the latest version, the connector tries to estimate how many resources the Spark cluster has and, if needed, runs several reading tasks inside a single Spark task. But in general it is recommended to have a Spark cluster big enough to run all reading tasks concurrently.
If you don't want to depend on the number of database partitions in this way, you can use the ReadFromAggregatorsMaterialized feature (it will use more memory on the database side) or disable parallel read altogether.
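For reference, here is a minimal PySpark sketch of how those read options can be set. The option names (`parallelRead.Features`, `enableParallelRead`) are taken from the connector documentation for recent 4.x releases and may differ in your version, and the endpoint, credentials, and table name are placeholders; please verify against the docs for the connector release you run in Glue.

```python
from pyspark.sql import SparkSession

# Hypothetical endpoint/credentials -- replace with your own.
spark = (
    SparkSession.builder
    .appName("singlestore-parallel-read-example")
    .config("spark.datasource.singlestore.ddlEndpoint", "svc-xxxx.svc.singlestore.com")
    .config("spark.datasource.singlestore.user", "admin")
    .config("spark.datasource.singlestore.password", "********")
    .getOrCreate()
)

df = (
    spark.read
    .format("singlestore")
    # Prefer the materialized variant so reading tasks do not all have to
    # start at the same time (at the cost of more memory on the database side).
    .option("parallelRead.Features", "ReadFromAggregatorsMaterialized,ReadFromAggregators")
    # Alternatively, disable parallel read entirely:
    # .option("enableParallelRead", "disabled")
    .load("mydb.my_table")  # placeholder database.table
)

# With parallel read, the number of Spark partitions typically follows the
# number of database partitions, so this is worth checking against cluster size.
print(df.rdd.getNumPartitions())
```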