[flutter_tools] analyze --suggestions --machine command #112217
Conversation
This reminds me, you actually need to remove this code that depends on depot_tools from the integration tests: #100254. We can't add more tests relying on this code.
Ahh, ok. I'll remove it here and do it another way for this PR. I'll also send a PR later removing the depot_tools system completely once I get around to ripping out the already-landed migrate stuff.
Sounds good. @Jasguerrero was mentioning that you should be able to leverage the work he did for doing diagnostics on a Flutter project, by adding a …
Sure, the command in question is …
I took a look at the …
@GaryQian I agree, currently …
@Jasguerrero Ahh, I see. My concern is that I would agree that this doesn't necessarily need its own command; it could perhaps live under a …
Isn't this something the migrate tool will call under the hood, unseen by the user? If so, why does the flag name matter? In other words, whether …
I see; sure, the name …
If we want to hide a certain feature, I don't think the right way to go about it is putting it under unclear flags; rather, we should just directly hide the flag's existence in documentation and help (which I think we can already do via some parameter). I guess there isn't currently a better place than analyze to put this capability, so that seems reasonable to me, especially if there is some consensus in previous discussions.
My suggestion was more to use what we have under …
Sorry, I'm not familiar with the validators system, but I don't see how adding this within the validators system improves the organization. Specifically, the validator prints everything with formatting in a box, which does not help with the intention of this command being purely machine-ingested. This command doesn't "validate" anything; if the values are invalid/wrong, it should print them out anyway. It is meant to poll the state of the tool. Let me know if you want to chat real quick to be clear on this!

Edit: Upon further inspection, let me try to move my code into a ProjectValidator and change ValidateProject.run() to machine-dump.
Ok, this should be ready for review. I've integrated it with the suggestions and ProjectValidators systems. |
Why have a different code path for the machine dump, rather than standardizing on a single set of validators used in both --machine and non-machine modes and simply printing them differently?
I'm not certain how we are choosing which properties to include in the non-machine --suggestions, but it seems that the machine version will expose much deeper and less human-relevant properties. Also, the non-machine validators encode the `name` and `value` as human-readable strings, while this command prefers a direct `blah.foo.bar` structure for the name and a JSON-compatible string as the value.
Converting between the human-readable version and the machine version seems tedious and unnecessary, especially since I'm not doing any "work" here, just dumping values.
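To illustrate the mismatch between the two shapes (the keys and values here are hypothetical, modeled on the examples later in this thread):

```json
{
  "human":   { "name": "App name",                          "value": "my_app" },
  "machine": { "name": "FlutterProject.manifest.appName",   "value": "\"my_app\"" }
}
```

The human form is a display label; the machine form is a stable dotted path with a JSON-encoded value, so a consumer can look properties up by key.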
Can we have --machine mode be a superset of the non-machine mode, then? I'd like to avoid having essentially two different code paths to maintain.
The cleanest way to do this is still the separate code path. The way the validator API works is that it stores `name`-`value` pairings as two strings in each `ProjectValidatorResult` that it produces.
For example, in the existing --suggestions validator, we store the app's pubspec name as `ProjectValidatorResult(name: 'App name', value: 'my_app', status: StatusProjectValidator.success)`. There is no way of converting this to the desired machine format of `ProjectValidatorResult(name: 'FlutterProject.manifest.appName', value: '"my_app"', status: StatusProjectValidator.info)` without just hardcoding the different `name` and `value` properties. This same issue repeats for every single existing validation. Any benefit of overlapping validations is made moot by having to write a bunch of if statements on which string to store, creating much more complicated, harder-to-maintain, and longer code.
The other concern is that the existing validator is simply not trying to accomplish the same task as the new validator I'm adding. It is trying to check if different properties are good or not, while the dump validator is just trying to expose the values regardless of the correctness. Lumping them together seems wrong.
As our current Testing on the Toilet suggests, can you parse the stdout with `json.decode` and then assert on the keys and values?
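A sketch of that test pattern, in Python for illustration (the real test would use Dart's `json.decode`; the captured stdout and key names here are hypothetical):

```python
import json

# Hypothetical stdout captured from `flutter analyze --suggestions --machine`.
stdout = '{"FlutterProject.manifest.appName": "my_app", "FlutterVersion.frameworkRevision": "abc123"}'

result = json.loads(stdout)

# Assert on individual keys and values rather than comparing the raw string,
# so the test is robust to key ordering and whitespace changes.
assert "FlutterProject.manifest.appName" in result
assert result["FlutterProject.manifest.appName"] == "my_app"
```

Decoding first also gives a clearer failure message than a string diff when one property changes.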
Done.
What we can do is have two types of `ProjectValidator`: make a new abstract class that inherits from `ProjectValidator` called `MachineProjectValidator`, move all of the validators under `allProjectValidators`, and in `validate_project.dart` run and output only the ones needed.
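That filtering pattern, sketched in Python for illustration (the class and method names are hypothetical stand-ins for the Dart ones in this thread):

```python
from abc import ABC, abstractmethod

class ProjectValidator(ABC):
    """Base class for all validators (stand-in for the Dart ProjectValidator)."""
    @abstractmethod
    def validate(self) -> dict:
        ...

class MachineProjectValidator(ProjectValidator):
    """Marker subclass for validators whose output is machine-ingested."""

class AppNameValidator(MachineProjectValidator):
    def validate(self) -> dict:
        return {"FlutterProject.manifest.appName": "my_app"}

class HumanOnlyValidator(ProjectValidator):
    def validate(self) -> dict:
        return {"App name": "my_app"}

# One shared registry, as with `allProjectValidators`; the command then
# selects only the machine validators when --machine is passed.
all_project_validators = [AppNameValidator(), HumanOnlyValidator()]
machine_results = {
    key: value
    for validator in all_project_validators
    if isinstance(validator, MachineProjectValidator)
    for key, value in validator.validate().items()
}
```

The marker-subclass approach keeps a single registry while letting the command pick the right subset per mode, which is the reuse the reviewer is asking for.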
LGTM
LGTM
auto label is removed for flutter/flutter, pr: 112217, due to - The status or check suite Windows tool_integration_tests_1_6 has failed. Please fix the issues identified (or deflake) before re-applying this label.
Adds a `flutter analyze --suggestions --machine` command, which outputs a simple JSON map of key properties computed by the Flutter tool.

Example output:
The values reported can be expanded as needed in the future. These values are primarily the values used by flutter/packages#2441.