This repository was archived by the owner on Oct 9, 2023. It is now read-only.
# Don't delete response collectors in a transaction (#250)
## What is the goal of this PR?
We no longer delete response collectors in a transaction after receiving a response to a "single" request, or receiving a "DONE" message in a stream. This fixes a possible error when loading 50+ answers in one query and then performing a second query.
## What are the changes implemented in this PR?
We had previously added code to clean up used response collectors in #247. But this broke in the scenario where we open a transaction, run a query that loads 51 answers (the prefetch size + 1), and then run a second query. The server would respond to the first query with: 50 answers -> CONTINUE -> 1 answer [compensating for latency] -> DONE. The client would respond to CONTINUE with STREAM to keep iterating, and the server would respond to STREAM with a 2nd DONE message.
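The reply sequence above can be sketched as a small simulation. This is not the client's actual code - the message tuples and function names are illustrative - but it shows how a 51-answer query (prefetch size 50) leaves a second, unread DONE on the wire once the first iterator stops:

```python
PREFETCH_SIZE = 50  # assumed prefetch size, matching the scenario above

def server_reply_stream(total_answers, prefetch=PREFETCH_SIZE):
    """Illustrative reply order: prefetch batch -> CONTINUE -> remaining
    answers -> DONE, plus a second DONE answering the client's STREAM."""
    msgs = [("ANSWER", i) for i in range(prefetch)]
    msgs.append(("CONTINUE", None))
    msgs.extend(("ANSWER", i) for i in range(prefetch, total_answers))
    msgs.append(("DONE", None))
    msgs.append(("DONE", None))  # reply to the STREAM sent on CONTINUE
    return msgs

def consume_until_done(msgs):
    """Mimic the query iterator: collect answers, stop at the first DONE."""
    answers, it = [], iter(msgs)
    for kind, payload in it:
        if kind == "DONE":
            break
        if kind == "ANSWER":
            answers.append(payload)
    return answers, list(it)  # whatever the iterator never read

answers, leftover = consume_until_done(server_reply_stream(51))
# All 51 answers arrive, but one stray DONE remains unread.
```

The stray DONE is exactly what the next query's iterator stumbles over when it resumes reading the shared stream.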
The iterator for query 1 finishes as soon as it sees the first DONE message, so we stop reading responses at that point, meaning the second DONE may never be read by the client. But opening the iterator for query 2 causes us to resume reading messages from the transaction stream. Note that we have no control over which request is "currently served": all responses arrive over the same pipeline, the same gRPC stream. That's why we have the Response Collectors - when we receive a response for a request other than the one we are currently waiting on, we store it in its respective Collector bucket.
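A minimal sketch of that bucketing idea, assuming a dictionary of per-request queues (the class and method names here are hypothetical, not the client's real API):

```python
import queue

class ResponseCollectors:
    """Demultiplex responses from one shared stream into per-request buckets."""

    def __init__(self):
        self._collectors = {}

    def new_collector(self, request_id):
        """Register a bucket before sending the request with this id."""
        self._collectors[request_id] = queue.Queue()
        return self._collectors[request_id]

    def dispatch(self, request_id, response):
        """Route an incoming response to its bucket; unknown ids are errors."""
        collector = self._collectors.get(request_id)
        if collector is None:
            raise KeyError(
                f"Received a response with unknown request id '{request_id}'")
        collector.put(response)

    def take(self, request_id):
        """Block until a response for this request is available."""
        return self._collectors[request_id].get()

rc = ResponseCollectors()
rc.new_collector("req-1")
rc.new_collector("req-2")
rc.dispatch("req-2", "answer-b")  # responses may arrive out of request order
rc.dispatch("req-1", "answer-a")
first = rc.take("req-1")
second = rc.take("req-2")
```

Dispatching to an id with no registered bucket raises, which is precisely the failure mode the stray DONE triggers once its collector has been deleted.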
We could mitigate the issue by patching the server, but its current behaviour is actually pretty intuitive - if you send it a STREAM request and it has no more answers, it responds with DONE. We could change it to not respond at all, but that would be adding complexity where it is not really necessary to do so.
So instead, we're reverting back to the old client behaviour, where the response collectors follow the lifetime of the Transaction, noting that Transactions are typically short-lived so cleanup will be performed in a timely manner anyway.
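The two cleanup policies can be contrasted in a few lines. This is an illustrative sketch, not the client's code: `delete_on_done` stands in for the #247 behaviour, and keeping the bucket stands in for the reverted, transaction-lifetime behaviour:

```python
collectors = {"q1": []}  # one bucket per in-flight request id

def dispatch(req_id, msg, delete_on_done=False):
    """Route a message to its bucket, optionally deleting the bucket on DONE."""
    bucket = collectors.get(req_id)
    if bucket is None:
        raise KeyError(f"Received a response with unknown request id '{req_id}'")
    bucket.append(msg)
    if delete_on_done and msg == "DONE":
        del collectors[req_id]  # the #247-style eager cleanup

# Old behaviour: the first DONE deletes q1's bucket, so the server's
# second DONE has nowhere to go and surfaces as an error.
dispatch("q1", "DONE", delete_on_done=True)
try:
    dispatch("q1", "DONE", delete_on_done=True)
    stray_done_raised = False
except KeyError:
    stray_done_raised = True

# Reverted behaviour: the bucket lives as long as the transaction,
# so the duplicate DONE is absorbed harmlessly.
collectors["q1"] = []
dispatch("q1", "DONE")
dispatch("q1", "DONE")  # no error; transaction close discards it all
```

Since transactions are short-lived, deferring cleanup to transaction close costs little memory in practice.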
The accompanying change to the client's error messages extends `UNKNOWN_REQUEST_ID` to include the unread response in the error text:

```diff
 MISSING_DB_NAME=ClientErrorMessage(7, "Database name cannot be empty.")
 DB_DOES_NOT_EXIST=ClientErrorMessage(8, "The database '%s' does not exist.")
 MISSING_RESPONSE=ClientErrorMessage(9, "Unexpected empty response for request ID '%s'.")
-UNKNOWN_REQUEST_ID=ClientErrorMessage(10, "Received a response with unknown request id '%s'.")
+UNKNOWN_REQUEST_ID=ClientErrorMessage(10, "Received a response with unknown request id '%s':\n%s")
 CLUSTER_NO_PRIMARY_REPLICA_YET=ClientErrorMessage(11, "No replica has been marked as the primary replica for latest known term '%d'.")
 CLUSTER_UNABLE_TO_CONNECT=ClientErrorMessage(12, "Unable to connect to TypeDB Cluster. Attempted connecting to the cluster members, but none are available: '%s'.")
 CLUSTER_REPLICA_NOT_PRIMARY=ClientErrorMessage(13, "The replica is not the primary replica.")
```