
Conversation

@mattford mattford commented Aug 17, 2023

#48100

Currently, calling flush() on a TaggableStore only resets the namespace: the reference to any cached data is lost, so further cache reads miss.

The data itself, however, stays in the store, so memory gradually leaks. This is problematic in long-running processes, for example, and can lead to unpredictable crashes.
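For illustration, here is a minimal sketch of how the leak manifests with the array driver inside a Laravel app (the loop, key names, and payload size are illustrative, not taken from this PR):

```php
<?php

use Illuminate\Support\Facades\Cache;

// Illustrative sketch of the leak in a long-running process: each
// iteration writes one tagged entry, then flushes the tag.
$cache = Cache::store('array');

for ($i = 0; $i < 1_000_000; $i++) {
    $cache->tags(['reports'])->put("report:{$i}", str_repeat('x', 1024), 60);

    // flush() only resets the tag's namespace; the underlying
    // namespaced "report:{$i}" entry remains in the array store,
    // so memory usage grows on every iteration until the process dies.
    $cache->tags(['reports'])->flush();
}
```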

The approach here is to keep an additional cache entry, <namespace>:meta:entries, consisting of a list of the keys within the tag; flush() then loops through these keys and removes them from the store.
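A rough sketch of that bookkeeping (class and method names here are illustrative, not the exact diff in this PR):

```php
<?php

use Illuminate\Cache\TaggedCache;

// Hedged sketch of the proposed mechanism; names are illustrative.
class TrackedTaggedCache extends TaggedCache
{
    public function put($key, $value, $ttl = null)
    {
        // Record each fully-qualified key under "<namespace>:meta:entries"
        // so it can be located again when the tag is flushed.
        $entries = $this->store->get($this->entriesKey()) ?: [];
        $entries[$this->itemKey($key)] = true;
        $this->store->forever($this->entriesKey(), $entries);

        return parent::put($key, $value, $ttl);
    }

    public function flush()
    {
        // Delete every tracked entry before resetting the namespace,
        // so the underlying data is actually reclaimed.
        foreach (array_keys($this->store->get($this->entriesKey()) ?: []) as $key) {
            $this->store->forget($key);
        }

        $this->store->forget($this->entriesKey());

        return parent::flush();
    }

    protected function entriesKey()
    {
        return $this->tags->getNamespace().':meta:entries';
    }
}
```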

As it happens, RedisTaggedCache/RedisTagSet already have similar behaviour, so I have not applied this new logic there.

@mattford mattford closed this Aug 17, 2023
@mattford mattford reopened this Aug 18, 2023
@mattford mattford marked this pull request as ready for review August 18, 2023 09:23
@taylorotwell taylorotwell (Member) commented

I think that is the point? We can quickly "invalidate" the tags this way, and cache systems like Memcached will rotate out keys when full anyway.

@mattford mattford (Author) commented Aug 18, 2023

@taylorotwell I disagree. For starters, Laravel offers the ArrayStore (which is where this issue leads to out-of-memory errors), and it definitely doesn't rotate keys when full; the process simply crashes.

As for other cache providers, the Redis provider already implements logic similar to this PR's, keeping track of the keys associated with tags and clearing them from the store when a tag is flushed. The others do not have this logic, and given that a store has no way of knowing that the app no longer holds any reference to the stale keys, how would it know to clear this data?
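For context, the Redis driver's bookkeeping amounts to something like the following plain-Redis (phpredis) pattern; this illustrates the idea, not Laravel's actual RedisTagSet code:

```php
<?php

// Plain-Redis illustration of per-tag key tracking.
$redis = new Redis();
$redis->connect('127.0.0.1');

// On write: store the value and remember its key in the tag's set.
$redis->setex('tag:reports:entry:report:1', 60, 'payload');
$redis->sAdd('tag:reports:entries', 'tag:reports:entry:report:1');

// On flush: delete every remembered key, then the tracking set itself.
foreach ($redis->sMembers('tag:reports:entries') as $key) {
    $redis->del($key);
}
$redis->del('tag:reports:entries');
```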

Edit: I checked the comments in the PR introducing the logic in the Redis store and saw you considered the other drivers there.

It seems that if Memcached covers this off, the issue is limited to the ArrayStore; in that case, would you accept a fix limited to that driver?
