Add sqlite cache #903
Conversation
If the database is designed to be used as a cache, do we even need to migrate the data? By definition, users should not expect cached data to always be there, right?

This is a great question. I see this as a replacement for Core Data / FluxC – so while we could just throw away the data every time we need to change something, that seems wasteful of system resources. For a large site, destroying the cache could mean re-downloading thousands of posts, media items, comments, etc. That's a lot of CPU time and battery we don't need to spend if we're a little bit careful about our data model. I also like this model of migrations because each one can be small and easy to reason about – if something weird happens, you'll know where in the chain it happened and can pick up where you left off. That said, this is still a cache – if something goes wrong with it, the move is to destroy it and re-create it, so we don't need to be quite as careful as we are with user data. I think this PR strikes a good balance between flexibility and being careful with user resources, but happy to discuss further if anyone disagrees!

I think the re-downloading is going to happen regardless of whether the posts have been fetched and saved in the local SQLite database. When users go to view the post list, we'd have to re-download every post again. That's the only way we can guarantee that the most recent versions of the posts are displayed in the list.
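As an aside for readers following along, the resumable-migration model being discussed can be sketched roughly as follows. This is a hypothetical illustration using Python's stdlib sqlite3 (the PR itself is Rust); the `_migrations` table and `MIGRATION_QUERIES` list mirror the names that appear elsewhere in this review, but the implementation details here are assumptions, not the PR's actual code:

```python
import sqlite3

# Hypothetical migration list; each entry is one small, easy-to-reason-about step.
MIGRATION_QUERIES = [
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)",
    "ALTER TABLE posts ADD COLUMN status TEXT",
]

def perform_migrations(conn: sqlite3.Connection) -> None:
    """Run only the migrations that haven't been applied yet.

    A _migrations table records applied migration IDs, so existing cached
    data survives schema changes instead of being destroyed and re-downloaded.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS _migrations (migration_id INTEGER PRIMARY KEY)"
    )
    applied = conn.execute("SELECT COUNT(*) FROM _migrations").fetchone()[0]
    for index, query in enumerate(MIGRATION_QUERIES[applied:]):
        conn.execute(query)
        # The slice restarts enumerate() at 0, so the already-applied count
        # must be added back to keep recorded IDs sequential.
        conn.execute("INSERT INTO _migrations VALUES (?)", (applied + index + 1,))

conn = sqlite3.connect(":memory:")
perform_migrations(conn)
perform_migrations(conn)  # idempotent: already-applied migrations are skipped
print(conn.execute("SELECT migration_id FROM _migrations").fetchall())  # [(1,), (2,)]
```

If a migration chain ever breaks, the cache can still simply be dropped and rebuilt, which is the fallback discussed above.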
@jkmassel Thanks for the POC! I've left some suggestions.
self.connection.execute(query, ())?;
}

self.insert_migration((index + 1) as u64)?;
Suggested change:
- self.insert_migration((index + 1) as u64)?;
+ self.insert_migration((next_migration_id + index + 1) as u64)?;
The enumerate() call will start the index from 0, which means the current code will re-run the migrations from the start. Here is a test that showcases the issue, which is probably worth adding to our suite:
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_migration_numbering_should_be_sequential() {
        let connection = Connection::open_in_memory().unwrap();
        let mut migration_manager = MigrationManager::new(&connection).unwrap();

        // Create the migrations table and run the first migration manually
        migration_manager.create_migrations_table().unwrap();
        migration_manager.insert_migration(1).unwrap();

        // Try to run the remaining migrations
        // BUG: This currently fails because line 112 tries to insert migration ID 1 again
        // instead of migration ID 2. The fix should be:
        // self.insert_migration((next_migration_id + index + 1) as u64)?;
        let result = migration_manager.perform_migrations();

        // This test FAILS with the current bug - it should succeed after fixing line 112
        result.expect("Migration should succeed after fixing the numbering bug on line 112");

        // After the bug is fixed, verify that migration IDs are sequential
        let mut stmt = connection
            .prepare("SELECT migration_id FROM _migrations ORDER BY migration_id")
            .unwrap();
        let migration_ids: Vec<u64> = stmt
            .query_map([], |row| row.get::<_, u64>(0))
            .unwrap()
            .collect::<Result<Vec<_>, _>>()
            .unwrap();

        // Verify that migration IDs are sequential and complete
        let expected_ids: Vec<u64> = (1..=MIGRATION_QUERIES.len() as u64).collect();
        assert_eq!(
            migration_ids, expected_ids,
            "Migration IDs should be sequential from 1 to {}",
            MIGRATION_QUERIES.len()
        );
    }
}
pub fn flush(&self) -> Result<(), SqliteDbError> {
    self.inner.connection.execute("commit", ())?;
    Ok(())
}
I'm unsure about the intended usage of this function. SQLite auto-commits by default unless you're in an explicit transaction, so calling commit here seems incorrect. Could we remove this function until it's actually needed? That way we'll have proper context about the intended behavior and can implement it correctly.
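The auto-commit behavior described above is a property of SQLite itself, so it can be demonstrated in any binding. A quick sketch with Python's stdlib sqlite3 (isolation_level=None disables Python's own transaction wrapping so SQLite's native auto-commit mode is visible):

```python
import sqlite3

# In auto-commit mode, each statement is persisted immediately.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE cache (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO cache VALUES ('a', '1')")  # committed immediately

# Issuing COMMIT with no open transaction is an error, which is why an
# unconditional flush() that runs "commit" would fail outside an explicit
# BEGIN ... COMMIT pair.
try:
    conn.execute("COMMIT")
except sqlite3.OperationalError as e:
    print(e)  # cannot commit - no transaction is active

# Inside an explicit transaction, COMMIT is valid.
conn.execute("BEGIN")
conn.execute("INSERT INTO cache VALUES ('b', '2')")
conn.execute("COMMIT")
print(conn.execute("SELECT COUNT(*) FROM cache").fetchone()[0])  # 2
```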
Co-authored-by: Oguz Kocer <[email protected]>
…piCacheTest.kt Co-authored-by: Oguz Kocer <[email protected]>
I've mentioned this on Slack as well, but just leaving a note for myself. I've re-reviewed up to 0bedb9c and it's looking great. Just a couple more unaddressed comments and I think we are good to merge this.
Adds an initial SQLite cache implementation that works on Android and iOS, implements row-level change updates, and has a migration mechanism.
We'll need the ability to store and fetch objects, but a lot of this should just be generated code instead of being hand-written – that's for a future PR.
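The PR's row-level change mechanism isn't shown in this thread, but one common way to implement row-level change tracking in SQLite is with triggers that append to a change-log table. This is purely an illustrative sketch (Python stdlib sqlite3, invented table names), not the approach the PR necessarily takes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
-- Hypothetical change log: one row per row-level change, which observers
-- can poll (or be notified about) to update UI incrementally.
CREATE TABLE _changes (tbl TEXT, row_id INTEGER, op TEXT);
CREATE TRIGGER posts_insert AFTER INSERT ON posts
BEGIN
    INSERT INTO _changes VALUES ('posts', NEW.id, 'insert');
END;
CREATE TRIGGER posts_update AFTER UPDATE ON posts
BEGIN
    INSERT INTO _changes VALUES ('posts', NEW.id, 'update');
END;
""")

conn.execute("INSERT INTO posts (id, title) VALUES (1, 'hello')")
conn.execute("UPDATE posts SET title = 'hello world' WHERE id = 1")
print(conn.execute("SELECT * FROM _changes").fetchall())
# [('posts', 1, 'insert'), ('posts', 1, 'update')]
```

In Rust, rusqlite also exposes SQLite's native update hook, which delivers the same (table, rowid, operation) information as a callback without extra tables; either approach supports the row-level updates described above.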