Protecting our API with beartype #227
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff            @@
##              main      #227   +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           39        39
  Lines         2447      2439    -8
  Branches       335       331    -4
=========================================
- Hits          2447      2439    -8
```

View full report in Codecov by Sentry.
Force-pushed from a2c1456 to 7e2efa1
This enables beartype on all of scraperlib, raising exceptions on all calls to our API that violate the requested types (and on functions returning incorrect types).

Changes:
- removed tests purposely testing incorrect input types
- fixed some return types or call params to match intent
- logger console arg now expects io.StringIO
- turned a couple of NamedTuples into dataclasses to ease type declarations
- removed some unreachable code that was expecting invalid types
- introduced new protocols for IO-based type inputs, based on typeshed's (as protocols, those are ignored in tests/coverage)
- image-related IO is declared as io.BytesIO instead of `IO[bytes]`. They are somewhat equivalent, but typing.IO is strongly discouraged everywhere. We cannot harmonize this with the rest of our code base because we pass it to Pillow.
- the same goes for the logger, which eventually accepts TextIO
- stream_file behavior changed a bit. The code assumed that if fpath is not there, we want byte_stream. Given it's not properly tested, I changed it to accept both fpath and byte_stream simultaneously. I believe we should change the API to a single input that supports both the byte stream and the path, and adapt the behavior. We should do that across the code base though, so that would be separate. I'll open a ticket if we agree on going this way.
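The typeshed-style protocol approach can be illustrated with a minimal stdlib sketch. `SupportsRead`, `check_supports_read`, and `first_bytes` are hypothetical names, and the decorator is only a toy stand-in for the runtime checking that beartype performs automatically from annotations:

```python
import io
from typing import Protocol, runtime_checkable


@runtime_checkable
class SupportsRead(Protocol):
    """Typeshed-style structural protocol: anything exposing read()."""

    def read(self, size: int = -1, /) -> bytes: ...


def check_supports_read(func):
    """Toy stand-in for beartype: reject callers passing a non-readable arg."""

    def wrapper(stream, *args, **kwargs):
        if not isinstance(stream, SupportsRead):
            raise TypeError(
                f"{func.__name__}() expected a readable stream, "
                f"got {type(stream).__name__}"
            )
        return func(stream, *args, **kwargs)

    return wrapper


@check_supports_read
def first_bytes(stream, n: int = 4) -> bytes:
    return stream.read(n)


print(first_bytes(io.BytesIO(b"scraperlib"), 7))
```

Because the protocol is `runtime_checkable`, the `isinstance` check matches any object with a `read` method (file objects, `io.BytesIO`, sockets wrapped in a reader), which is the structural-typing behavior the PR relies on instead of `typing.IO`.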
Three blocks of code had to be marked no cover. Those blocks are entered and tested, but coverage reports them as missing. It might be due to the decorator… I've tried to simplify the code leading to them but couldn't fix the missing lines… Once we merge this, I'll open a ticket to revisit this in the future.
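For reference, the exclusion mechanism is coverage.py's `# pragma: no cover` comment. The function below is a made-up illustration of the pattern, not one of the three blocks from this PR:

```python
def double_positive(value: int) -> int:
    # This branch IS exercised by tests, but a wrapping decorator can make
    # coverage attribute the hit to the wrapper instead; excluding it keeps
    # the 100% gate green until the root cause is found.
    if value < 0:  # pragma: no cover
        raise ValueError("value must be non-negative")
    return value * 2


print(double_positive(21))
```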
We should really ban wide `pyright: ignore` statements and use specific exceptions wherever necessary (or comply!). This fixes it in the current codebase.
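As a sketch of the difference (`parse_flag` is a hypothetical function): a bare `# pyright: ignore` silences every diagnostic on the line, while the bracketed form names the one rule being suppressed, so any other problem on that line still gets reported:

```python
def parse_flag(flag: bool) -> int:
    return 1 if flag else 0


# Broad suppression (discouraged): hides any current or future diagnostic here.
lenient = parse_flag("yes")  # pyright: ignore

# Targeted suppression: only the named rule is silenced.
strict = parse_flag("yes")  # pyright: ignore[reportArgumentType]

print(lenient, strict)
```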
Force-pushed from 237e4a4 to 6f93ffe
LGTM, looks perfect, thanks a lot
This targets #221 and builds on it. Here's a recap of the commits:
Using beartype on all of scraperlib
This enables beartype on all of scraperlib, raising exceptions on all calls to our API
that violate the requested types (and on functions returning incorrect types).
Changes:
- removed tests purposely testing incorrect input types
- fixed some return types or call params to match intent
- logger console arg now expects io.StringIO
- turned a couple of NamedTuples into dataclasses to ease type declarations
- removed some unreachable code that was expecting invalid types
- introduced new protocols for IO-based type inputs, based on typeshed's (as protocols, those are ignored in tests/coverage)
- image-related IO is declared as io.BytesIO instead of `IO[bytes]`. Somewhat equivalent, but typing.IO is strongly discouraged everywhere. Cannot harmonize with the rest of our code base as we pass this to Pillow.
--
Disable cover on special blocks
Three blocks of code had to be marked no cover. Those blocks are entered and tested
but coverage reports them as missing. It might be due to the decorator…
I've tried to simplify the code leading to them but couldn't fix the missing lines…
Once we merge this, I'll open a ticket to revisit this in the future.
--
Removed generic pyright: ignore statements
We should really ban wide `pyright: ignore` statements and use specific exceptions wherever necessary (or comply!).
This fixes it in the current codebase.