
Conversation

@jkmassel (Contributor)

Adds an initial SQLite cache implementation that works on Android and iOS, implements row-level change updates, and has a migration mechanism.

We'll need the ability to store and fetch objects, but a lot of this should just be generated code instead of being hand-written – that's for a future PR.
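
For context, the migration mechanism discussed in the review below amounts to an ordered list of SQL statements plus a _migrations bookkeeping table. Here's a rough sketch of that shape, assuming a rusqlite Connection – the example queries and the COUNT(*)-based bookkeeping are illustrative rather than the PR's actual code, and the numbering follows the correction suggested further down:

use rusqlite::Connection;

// Illustrative migration list – the real one lives in the PR (and may eventually be generated).
const MIGRATION_QUERIES: &[&str] = &[
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT NOT NULL)",
    "ALTER TABLE posts ADD COLUMN content TEXT",
];

fn perform_migrations(connection: &Connection) -> rusqlite::Result<()> {
    // _migrations records which migrations have already been applied.
    connection.execute(
        "CREATE TABLE IF NOT EXISTS _migrations (migration_id INTEGER PRIMARY KEY)",
        (),
    )?;

    // The number of applied migrations doubles as the index of the next one to run.
    let applied: u64 =
        connection.query_row("SELECT COUNT(*) FROM _migrations", (), |row| row.get(0))?;
    let next_migration_id = applied as usize;

    // Only the not-yet-applied tail of the list is executed, and each run is
    // recorded with a sequential id: next_migration_id + index + 1.
    for (index, query) in MIGRATION_QUERIES[next_migration_id..].iter().enumerate() {
        connection.execute(query, ())?;
        connection.execute(
            "INSERT INTO _migrations (migration_id) VALUES (?1)",
            [(next_migration_id + index + 1) as u64],
        )?;
    }

    Ok(())
}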

@jkmassel force-pushed the add/sqlite-poc branch 6 times, most recently from cc353da to 85e77ac on September 22, 2025 01:53
@jkmassel marked this pull request as ready for review on September 22, 2025 04:15
@crazytonyli (Contributor)

If the database is designed to be used as a cache, do we even need to migrate the data? By definition, users should not expect cached data always to be there, right?

@jkmassel (Contributor, Author)

If the database is designed to be used as a cache, do we even need to migrate the data

This is a great question. I see this as a replacement for Core Data / FluxC – so while we could just throw away the data every time we need to change something, it seems wasteful of system resources. For a large site, destroying the cache could mean re-downloading thousands of posts/media items/comments/etc. That's a lot of CPU time and battery that we don't need to spend if we're a little bit careful about our data model.

I also like this model of migrations because each one can be small and easy to reason about – if something weird happens you'll know where it happened in the chain and can pick up where you left off.

Though this is still a cache – if something goes wrong with it, the move is to destroy it and re-create it, so we don't need to be quite as careful with it as we would be with user data.

I think this PR represents a good balance between flexibility and being careful about user resources, but happy to discuss further if anyone disagrees!

@crazytonyli (Contributor)

For a large site, destroying the cache could mean re-downloading thousands of posts/media items/comments/etc.

I think the "re-downloading" is going to happen regardless of whether the posts have been fetched and saved in the local SQLite database. When users go to view the post list, we'd have to re-download every post again. That's the only way we can guarantee that the most recent versions of the posts are displayed on the list.

@oguzkocer (Contributor) left a comment


@jkmassel Thanks for the POC! I've left some suggestions.

self.connection.execute(query, ())?;
}

self.insert_migration((index + 1) as u64)?;

Suggested change:
- self.insert_migration((index + 1) as u64)?;
+ self.insert_migration((next_migration_id + index + 1) as u64)?;

The enumerate() call will start the index from 0, which means the current code will re-run the migrations from the start.

Here is a test that showcases the issue and is probably worth adding to our suite:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_migration_numbering_should_be_sequential() {
        let connection = Connection::open_in_memory().unwrap();
        let mut migration_manager = MigrationManager::new(&connection).unwrap();

        // Create migrations table and run first migration manually
        migration_manager.create_migrations_table().unwrap();
        migration_manager.insert_migration(1).unwrap();

        // Try to run remaining migrations
        // BUG: This currently fails because line 112 tries to insert migration ID 1 again
        // instead of migration ID 2. The fix should be:
        // self.insert_migration((next_migration_id + index + 1) as u64)?;
        let result = migration_manager.perform_migrations();

        // This test FAILS with the current bug - it should succeed after fixing line 112
        result.expect("Migration should succeed after fixing the numbering bug on line 112");

        // After the bug is fixed, verify migration IDs are sequential
        let mut stmt = connection
            .prepare("SELECT migration_id FROM _migrations ORDER BY migration_id")
            .unwrap();
        let migration_ids: Vec<u64> = stmt
            .query_map([], |row| row.get::<_, u64>(0))
            .unwrap()
            .collect::<Result<Vec<_>, _>>()
            .unwrap();

        // Verify migration IDs are sequential and complete
        let expected_ids: Vec<u64> = (1..=MIGRATION_QUERIES.len() as u64).collect();
        assert_eq!(
            migration_ids, expected_ids,
            "Migration IDs should be sequential from 1 to {}", MIGRATION_QUERIES.len()
        );
    }
}

Comment on lines 45 to 48
pub fn flush(&self) -> Result<(), SqliteDbError> {
    self.inner.connection.execute("commit", ())?;
    Ok(())
}

I'm unsure about the intended usage of this function. SQLite auto-commits by default unless you're in an explicit transaction, so calling commit here seems incorrect. Could we remove this function until it's actually needed? That way we'll have proper context about the intended behavior and can implement it correctly.
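
For reference, an explicit transaction with rusqlite looks roughly like the sketch below – the posts table and the insert_posts helper are made up for illustration. Outside of a block like this, every execute call auto-commits on its own, so there's nothing for a standalone commit to flush:

use rusqlite::Connection;

fn insert_posts(connection: &mut Connection, titles: &[&str]) -> rusqlite::Result<()> {
    // transaction() issues BEGIN; everything below belongs to one transaction.
    let tx = connection.transaction()?;
    for &title in titles {
        tx.execute("INSERT INTO posts (title) VALUES (?1)", [title])?;
    }
    // commit() issues COMMIT; if we return early instead, the transaction
    // is rolled back when tx is dropped.
    tx.commit()
}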

@oguzkocer (Contributor)

I've mentioned this on Slack as well, but just leaving a note for myself. I've re-reviewed up to 0bedb9c and it's looking great. Just a couple more unaddressed comments and I think we are good to merge this.
