Add deletion methods to clear data from TSDB. #5317
Conversation
(This name is terrible, sorry.)
Adding to the list of "things that should be done but I don't really want to do them right now": there should be a base test suite (similar to the Redis backend test suite) that runs the same test methods against each backend, to ensure that all backends support the same functionality correctly.
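A rough sketch of that shared-suite idea (the class and method names here are hypothetical, not Sentry's actual test layout): a mixin defines the behavioural tests once, and each backend-specific TestCase only supplies `create_backend()`.

```python
import unittest


class TSDBBehaviorMixin(object):
    """Tests defined here run against every backend that mixes this in."""

    def create_backend(self):
        raise NotImplementedError  # each backend TestCase overrides this

    def setUp(self):
        self.db = self.create_backend()

    def test_delete_removes_counts(self):
        self.db.incr("key")
        self.db.delete(["key"])
        self.assertEqual(self.db.get("key"), 0)


class InMemoryTSDB(object):
    """Toy backend so the sketch runs; a real suite would plug in Redis, etc."""

    def __init__(self):
        self.counts = {}

    def incr(self, key, amount=1):
        self.counts[key] = self.counts.get(key, 0) + amount

    def delete(self, keys):
        for key in keys:
            self.counts.pop(key, None)

    def get(self, key):
        return self.counts.get(key, 0)


class InMemoryTSDBTest(TSDBBehaviorMixin, unittest.TestCase):
    def create_backend(self):
        return InMemoryTSDB()


if __name__ == "__main__":
    unittest.main()
```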
src/sentry/tsdb/base.py (outdated)

def delete_frequencies(self, models, keys, start=None, end=None, timestamp=None):
    """
    Delete all distinct counters.
    """
oops
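Presumably the docstring was copied over from the distinct-counters method; a corrected stub might read something like this (my guess at the wording, not the final text):

```python
def delete_frequencies(self, models, keys, start=None, end=None, timestamp=None):
    """
    Delete all frequency tables.
    """
    raise NotImplementedError
```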
LewisJEllis left a comment:
Looks good.
for model in models:
    for key in keys:
        model_key = self.get_model_key(key)
        client.hdel(
Is it worth checking "are we going to end up just deleting all keys in this hash?" and, if so, deleting the whole top-level hash key instead of each entry one by one? Or is that not going to be happening much?
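Roughly what that suggestion looks like with plain redis-py calls (the key name and field list below are made up, and the real backend's cluster/routing plumbing is omitted):

```python
import redis

client = redis.StrictRedis(decode_responses=True)

hash_key = "tsdb:frequency:example"          # hypothetical key name
fields_to_remove = ["member:1", "member:2"]  # entries targeted for deletion

if set(fields_to_remove) >= set(client.hkeys(hash_key)):
    # Everything in the hash is being removed anyway: drop the whole
    # top-level key with a single DEL instead of one HDEL per field.
    client.delete(hash_key)
else:
    client.hdel(hash_key, *fields_to_remove)
```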
    },
}

self.db.delete_frequencies(
Could be nice to have a test case where we delete just a sub-time-interval of the data and assert on what remains? The current cases all just gut the dataset entirely, unless I'm misreading something.
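A sketch of the kind of case being asked for, using a toy in-memory store rather than the real backend (the bucket layout and the `record_frequency` helper are made up for illustration): seed three hourly buckets, delete only the middle one via `start`/`end`, and assert the other two survive.

```python
from datetime import datetime, timedelta


class FakeFrequencyTSDB(object):
    """Toy stand-in: {(model, key): {bucket_timestamp: {member: count}}}."""

    def __init__(self):
        self.data = {}

    def record_frequency(self, model, key, ts, items):
        self.data.setdefault((model, key), {}).setdefault(ts, {}).update(items)

    def delete_frequencies(self, models, keys, start=None, end=None):
        for model in models:
            for key in keys:
                buckets = self.data.get((model, key), {})
                for ts in list(buckets):
                    if (start is None or ts >= start) and (end is None or ts <= end):
                        del buckets[ts]


now = datetime(2017, 1, 1, 12, 0)
db = FakeFrequencyTSDB()
for hours_ago in range(3):
    db.record_frequency("model", "key", now - timedelta(hours=hours_ago), {"member": 1})

# Delete only the middle hour; the newest and oldest buckets should remain.
db.delete_frequencies(
    ["model"], ["key"],
    start=now - timedelta(hours=1),
    end=now - timedelta(hours=1),
)
assert sorted(db.data[("model", "key")]) == [now - timedelta(hours=2), now]
```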
…e operations. (#5451) This was an oversight as part of GH-5317, and wasn't caught in tests because the test suite didn't actually run the tests against multiple databases. The easiest way to test this (other than depending on the tests to have the correct behavior) is to cross-reference the routing logic in the other mutation commands and make sure it's consistent.
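To make the routing point concrete, here is a purely illustrative sketch (none of these names are Sentry's real API): if writes to frequency data go through a dedicated cluster, the delete path has to use that same cluster, and the way to verify that is to compare it against the other mutation methods.

```python
class FrequencyBackend(object):
    """Illustrative only: frequency data lives on its own cluster."""

    def __init__(self, default_cluster, frequency_cluster):
        self.cluster = default_cluster
        self.frequency_cluster = frequency_cluster

    def record_frequency_multi(self, requests, timestamp=None):
        # Writes route to the frequency cluster...
        client = self.frequency_cluster
        client.write(requests, timestamp)

    def delete_frequencies(self, models, keys, start=None, end=None, timestamp=None):
        # ...so deletes must route there too. Using ``self.cluster`` here is the
        # kind of oversight described above: data written to one database,
        # deletes issued against another.
        client = self.frequency_cluster
        client.remove(models, keys, start, end)
```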