
Conversation

@psfinaki
Contributor

The code is a bit underrefactored as I plan to come back to this once I see how this behaves in CI and once we have more such tests.

@psfinaki psfinaki requested a review from a team as a code owner April 25, 2023 14:19
@T-Gro
Member

T-Gro commented Apr 25, 2023

A test API that I think would help increase coverage for this kind of functionality; hopefully this would be doable without being an IDE test, although I can imagine it might be harder than it looks.

```fsharp
let codeBefore = " .. some F# code as string"
let actualCodeAfter = CodeActionHelper.AcceptAllSuggestions(codeBefore)
```

…and compare with `expectedCodeAfter`.

The "AcceptAll" could either work as-is without any argument (simple variant) or accept an instance of a chosen provider.
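A sketch of what the two variants could look like, under stated assumptions: the `CodeActionHelper` module, both signatures, and the provider parameter type are all hypothetical, not an existing API.

```fsharp
// Hypothetical test helper, sketched only to illustrate the proposed shape.
// Neither the module nor these signatures exist in the codebase.
module CodeActionHelper =

    /// Simple variant: accept every suggestion offered for the given source.
    let AcceptAllSuggestions (codeBefore: string) : string =
        failwith "sketch only - not implemented"

    /// Provider variant: accept only the suggestions produced by one
    /// chosen provider instance (type left as obj as a placeholder).
    let AcceptSuggestionsFrom (provider: obj) (codeBefore: string) : string =
        failwith "sketch only - not implemented"
```

A test would then compare `AcceptAllSuggestions codeBefore` against `expectedCodeAfter` with a plain string-equality assertion.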

@baronfel
Member

What we've done here in FSAC is to have a set of magic strings that signify cursors and expected ranges for diagnostics. So you'd input a code string and place e.g. `$0` in the string where you wanted the cursor to be when the diagnostic was invoked. Then we provide helpers for finding and invoking a specific diagnostic (since multiple diagnostics may want to run at that specific position). Once invoked, we provide helpers for applying any edits returned from that action and comparing against an 'expected' source string. It's quite nice for codefixes, at least those that can be accurately tested in just a single file. Multi-file scenarios require more set-up, of course.
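The marker idea can be sketched in a few lines. Note that `extractCursor` below is a hypothetical helper written for illustration, not FSAC's actual API; only the `$0` convention comes from the description above.

```fsharp
// Sketch: strip a cursor marker (e.g. "$0") out of a test source string
// and return the cleaned source together with the cursor offset.
let extractCursor (marker: string) (source: string) : string * int =
    match source.IndexOf(marker, System.StringComparison.Ordinal) with
    | -1 -> failwithf "marker %s not found in test source" marker
    | idx -> source.Replace(marker, ""), idx

// Usage: the test author writes the cursor position inline...
let code, cursorOffset = extractCursor "$0" "let add a b = a $0+ b"
// ...then invokes the codefix at cursorOffset, applies the returned
// edits, and compares the result against an expected source string.
```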

@psfinaki
Contributor Author

> A test API that I think would help increase coverage for this kind of functionality; hopefully this would be doable without being an IDE test, although I can imagine it might be harder than it looks.
>
> ```fsharp
> let codeBefore = " .. some F# code as string"
> let actualCodeAfter = CodeActionHelper.AcceptAllSuggestions(codeBefore)
> ```
>
> …and compare with `expectedCodeAfter`.
>
> The "AcceptAll" could either work as-is without any argument (simple variant) or accept an instance of a chosen provider.

Sure, that's about how we should have it. I am all for unit tests here and don't plan to add (many) integration tests for a while.

As for how hard it is going to be: I made a quick sketch a few weeks ago and it didn't look trivial, but it should be somewhat easier after your recent refactorings. I mean, we'll get this done; there are no fundamental obstacles.

@psfinaki
Contributor Author

> What we've done here in FSAC is to have a set of magic strings that signify cursors and expected ranges for diagnostics. So you'd input a code string and place e.g. `$0` in the string where you wanted the cursor to be when the diagnostic was invoked. Then we provide helpers for finding and invoking a specific diagnostic (since multiple diagnostics may want to run at that specific position). Once invoked, we provide helpers for applying any edits returned from that action and comparing against an 'expected' source string. It's quite nice for codefixes, at least those that can be accurately tested in just a single file. Multi-file scenarios require more set-up, of course.

Yep, that's also a way to go. We have similar mini-frameworks in some other areas, e.g. quick info. IMO this increases the cognitive load a bit: one can no longer just copy-paste the code from tests, and has to make the mental move of recognizing that those dollar signs are test placeholders. But it's not a big deal; I guess we'll decide as we go.

And good point about bulk fixes. It's good to have them in mind when creating some framework here.

@T-Gro (Member) left a comment


Fine to approve this as a temporary stash, even though I am not really sure what it tests apart from the overall VS code action machinery (not our code, but good to have at least one test like this as a sanity check that VS did not change things).

Considering the number of codefixes and the variations of situations within them, this is unlikely to be a sustainable strategy for testing the logic within them (both the logic for filtering in some of them, as well as the logic for the string replacement).

(The 38 s for the first one must be warm-up time for the IdeFact test suite, I guess.)

```
  Passed FSharp.Editor.IntegrationTests.CodeActionTests.AddNewKeywordToDisposables (VS2022) [38 s]
  Passed FSharp.Editor.IntegrationTests.CodeActionTests.UnusedOpenDeclarations (VS2022) [4 s]
  Passed FSharp.Editor.IntegrationTests.CodeActionTests.AddMissingFunKeyword (VS2022) [2 s]
```

=> after this stash, the focus IMO should be how to efficiently test the logic within the codefix (the replacement it suggests).
Then 1 sanity integration test can remain for checking that VS still works the same.

@psfinaki
Contributor Author

Yes, this can be viewed as a stash in that sense.

Fair point, this is basically validating some VS mechanics. The time the tests take is IMO not the biggest issue: yes, VS essentially "warms up" during the first test, and the others then take reasonable time. The bigger concern is, guess what, flakiness and potential false alarms, given all the voodoo used inside.

TL;DR: yes, let's keep this set, or even reduce it in the future, unless we discover some different VS mechanics used in other scenarios.

@psfinaki psfinaki enabled auto-merge (squash) April 25, 2023 16:09
@psfinaki psfinaki merged commit ec122d9 into dotnet:main May 2, 2023
@psfinaki psfinaki deleted the int-tests-4 branch May 23, 2023 22:06