⚡️ Speed up function `get_google_messages_from_parts` by 115% (#471)

📄 **115% (1.15x) speedup** for `get_google_messages_from_parts` in `marimo/_ai/_convert.py`

⏱️ **Runtime:** 277 microseconds → 129 microseconds (best of 163 runs)

📝 **Explanation and details**
The optimized version achieves a 115% speedup through several key micro-optimizations targeting the hot path in the type checking loop:
**Key optimizations** (a hedged sketch of the resulting loop follows this list):

- **Pre-computed role mapping:** moved `"user" if role == "user" else "model"` outside the loop to avoid repeated conditional evaluation (visible in the profiler as reduced time on the role-assignment lines).
- **Type-checking optimization:** replaced `isinstance(part, Class)` with `type(part) is Class_` using pre-bound class references. This avoids MRO traversal and attribute lookups, and shows a significant time reduction on the type-checking lines (from ~19.9% to ~15.3% for `TextPart` checks).
- **Inlined data extraction:** removed the `_extract_data()` function call for `FilePart` processing and inlined the logic, eliminating function-call overhead.
- **Eliminated intermediate variables:** removed temporary variable assignments (such as `text_message` and `reasoning_message`) and appended dictionary literals directly, reducing object-creation overhead.
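Putting those changes together, a minimal sketch of what the optimized loop might look like is shown below. The part classes, field names, and message shape are stand-ins inferred from this description, not the actual code in `marimo/_ai/_convert.py`:

```python
# Hypothetical sketch of the optimized hot loop described above. The part
# classes and field names are stand-ins, not marimo's real types.
from dataclasses import dataclass


@dataclass
class TextPart:
    text: str


@dataclass
class ReasoningPart:
    reasoning: str


@dataclass
class FilePart:
    media_type: str
    data: str  # e.g. a "data:<mime>;base64,..." URL or raw base64 payload


def get_google_messages_from_parts(role, parts):
    # Pre-compute the role string once instead of re-evaluating the
    # conditional on every loop iteration.
    google_role = "user" if role == "user" else "model"

    # Pre-bind class references so the loop does exact `type(...) is`
    # checks (no MRO traversal, no repeated global lookups).
    TextPart_, ReasoningPart_, FilePart_ = TextPart, ReasoningPart, FilePart

    messages = []
    append = messages.append  # avoid repeated attribute lookup
    for part in parts:
        part_type = type(part)
        if part_type is TextPart_:
            # Append a dict literal directly instead of building a
            # temporary `text_message` variable first.
            append({"role": google_role, "parts": [{"text": part.text}]})
        elif part_type is ReasoningPart_:
            append({"role": google_role, "parts": [{"text": part.reasoning}]})
        elif part_type is FilePart_:
            # Inline the data extraction instead of calling a helper such
            # as `_extract_data()`, removing per-part call overhead.
            data = part.data
            if data.startswith("data:"):
                data = data.split(",", 1)[1]
            append(
                {
                    "role": google_role,
                    "parts": [
                        {
                            "inline_data": {
                                "mime_type": part.media_type,
                                "data": data,
                            }
                        }
                    ],
                }
            )
    return messages
```

Note that the exact-type checks trade subclass support for speed; that is only a safe trade-off if the converter always receives these concrete part types.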
**Performance characteristics by test case:** the optimization is particularly effective for high-volume AI message processing, where the function is called with many parts, as evidenced by the dramatic improvements in the large-scale test cases.
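For a rough sense of the `isinstance` vs. `type(...) is` difference in isolation, a self-contained micro-benchmark along these lines can be run (timings vary by machine and Python version; this is an illustration, not the profiler data quoted above):

```python
# Standalone micro-benchmark of the exact-type check vs. isinstance();
# the TextPart class here is a throwaway stand-in, not marimo's type.
import timeit


class TextPart:
    pass


part = TextPart()
TextPart_ = TextPart  # pre-bound reference, as in the optimized loop

n = 1_000_000
t_isinstance = timeit.timeit("isinstance(part, TextPart)", globals=globals(), number=n)
t_type_is = timeit.timeit("type(part) is TextPart_", globals=globals(), number=n)
print(f"isinstance : {t_isinstance:.3f}s")
print(f"type(..) is: {t_type_is:.3f}s")
```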
✅ Correctness verification report:
⚙️ Existing Unit Tests and Runtime
- `_ai/test_chat_convert.py::test_get_google_messages_from_parts_empty`
- `_ai/test_chat_convert.py::test_get_google_messages_from_parts_role_mapping`
- `_ai/test_chat_convert.py::test_get_google_messages_from_parts_text_only`
- `_ai/test_chat_convert.py::test_get_google_messages_from_parts_with_reasoning`
- `_ai/test_chat_convert.py::test_get_google_messages_from_parts_with_tool_invocation`

🌀 Generated Regression Tests and Runtime
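The generated regression tests are not reproduced on this page. As a rough idea of what such behavior-preserving checks cover, here is an illustrative sketch of two tests, reusing the stand-in classes and function from the loop sketch above (not marimo's actual test suite):

```python
# Illustrative only: assumes the stand-in TextPart class and the
# get_google_messages_from_parts sketch defined earlier on this page.
def test_role_mapping():
    parts = [TextPart(text="hello")]
    # Google messages only distinguish "user" and "model", so any
    # non-user role should be mapped to "model".
    assert get_google_messages_from_parts("user", parts)[0]["role"] == "user"
    assert get_google_messages_from_parts("assistant", parts)[0]["role"] == "model"


def test_empty_parts():
    # No parts in, no messages out.
    assert get_google_messages_from_parts("user", []) == []
```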
To edit these changes, run `git checkout codeflash/optimize-get_google_messages_from_parts-mhb4ns1q` and push.