ENH: Improve par-rec support #253
Conversation
    nhdr.set_slope_inter(1., 0.)
else:
    nhdr.set_slope_inter(slope, intercept)
data = raw_data
@matthew-brett, this was done to support writing data that had variable scale factors.

The raw_data.dtype is np.int16 for one of my datasets, which is not good. Does the NIfTI format support writing other datatypes, like maybe np.float32? I tried outputting without casting (which means np.float64 after this math), and that definitely did not work. It would be nice not to have to use an integer format, especially for data like this that must be scaled before being saved.

The cleaner alternative here would be to save a multidimensional vector of scale factors, but from looking at the nifti code a bit it seems like it expects single scalars.

And from a bit of reading it seems like other dtypes are supported, but I'm not sure how to get it to work... seems like the header will have to be changed to "know" that the dtype is not np.int16...
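The per-volume scaling being discussed can be sketched with plain NumPy. The shapes, the idea of one (slope, intercept) pair per volume, and all variable names below are assumptions for illustration, not the actual PAR/REC layout:

```python
import numpy as np

# Sketch of variable (per-volume) scale factors, assuming a 4D dataset
# with one (slope, intercept) pair per volume; illustrative only.
raw = np.ones((2, 2, 2, 3), dtype=np.int16)
slopes = np.array([1.0, 2.0, 0.5])
inters = np.array([0.0, 10.0, -1.0])

# Broadcasting applies each pair across the spatial axes; the result is
# float64, which NIfTI can store directly if the output header's data
# dtype is set to a float type instead of the int16 inherited from raw.
scaled = raw * slopes + inters
```

On the dtype question itself: nibabel headers have a `set_data_dtype` method, so something like `nhdr.set_data_dtype(np.float32)` should let the output store floats rather than int16, though that needs confirming against the writer path used here.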
One thing @mrjeffs suggested was that for DTI scans there should be an option (enabled by default) to discard the trace volume(s), if present. Interested to hear if others agree, too.

Can you say more about the trace volumes?

In a couple of our data files the trace / mean diffusivity images are given for
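A hypothetical sketch of what discarding trace volumes could look like. Here a trace volume is detected as one with a nonzero b-value but a zero gradient direction; this detection rule, and all names and shapes, are assumptions for illustration rather than an established PAR/REC convention:

```python
import numpy as np

# Hypothetical rule: a trace / mean-diffusivity volume has a nonzero
# b-value but no gradient direction (zero bvec).
bvals = np.array([0, 1000, 1000, 1000])
bvecs = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0]])  # last volume: b=1000, no direction

is_trace = (bvals > 0) & (np.linalg.norm(bvecs, axis=1) == 0)
keep = ~is_trace

data = np.zeros((4, 4, 3, 4))         # toy 4D image, one volume per bval
data_no_trace = data[..., keep]       # drops the trace volume
```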
Sorry to take a little while to get back to you with a review - internet is very tough here, but I should be able to get to my mail every couple of days. Here's a review all in a lump because I could write it offline, and the github web interface is very slow to use here. Thanks a lot for this.

The PEP changes in the parrec2nii parser have the effect of adding an annoying

instead of:

In general the PEP8 changes are a bit confusing. Probably too late now, but

You've found that the par / rec code is not organized very well for
If the REC file has one scale factor and no reordering, then there is a direct

At the moment, I think I'm right in saying that all the data reordering is in

Also - again partly because of all the code in

So, I think the best thing to do here is to move the non-option code out of

The header could have the logic for resorting the slices, with a helper method

This would generate a vector of

That would make it easy to rewrite the

The slice_indices method would make it fairly easy to support things like

Reordering time and slice would just be a question of returning slice indices

Your ArrayProxy could be something very simple like:

where the header takes care of all the nasty re-sorting, scaling. It would be good to support easy conversion of a parrec header to a nifti
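The suggested division of labour can be sketched with a toy header that returns a vector of slice indices and a minimal ArrayProxy that applies that reordering plus a scalar slope/intercept. All class and method names here are illustrative, not the actual nibabel API:

```python
import numpy as np

class ToyHeader:
    """Illustrative header: knows the on-disk slice order and scaling."""

    def __init__(self, disk_order, slope=2.0, inter=1.0):
        self.disk_order = np.asarray(disk_order)
        self.slope = slope
        self.inter = inter

    def slice_indices(self):
        # Vector of indices that sorts raw on-disk slices into
        # canonical order.
        return np.argsort(self.disk_order)

class ToyArrayProxy:
    """Illustrative proxy: reads raw data, lets the header re-sort and
    scale it, so no reordering logic lives in the conversion script."""

    def __init__(self, raw, header):
        self.raw = raw
        self.header = header

    def get_data(self):
        sorted_data = self.raw[..., self.header.slice_indices()]
        return sorted_data * self.header.slope + self.header.inter

raw = np.arange(12.0).reshape(2, 2, 3)   # 3 slices stored out of order
hdr = ToyHeader(disk_order=[2, 0, 1])
data = ToyArrayProxy(raw, hdr).get_data()
```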
Sorry - I should have documented this feature better, but it's a way that any

If you do that, then you just do:

and let

Can you move

Then you could maybe make

It would be good to do the bvals, bvec calculation within the header object

The q_vectors / bvecs will need rotating to match the stored image
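Rotating the gradient directions can be sketched as applying the rotation part of the voxel-to-world affine to each bvec; the affine and directions below are made up for illustration:

```python
import numpy as np

# Illustrative affine: 90-degree rotation about z in its upper-left
# 3x3 block, identity elsewhere.
affine = np.eye(4)
affine[:3, :3] = [[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]]

rot = affine[:3, :3]
# Normalise columns in case the affine also encodes voxel sizes.
rot = rot / np.linalg.norm(rot, axis=0)

bvecs = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
rotated = bvecs @ rot.T                # one rotated direction per row
```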
There are some useful test files that Yarick H put up on github at: https://github.com/yarikoptic/nitest-balls1

We could easily include some or all of these in the nibabel repo. Could you

Thanks a lot for working on this. With just a little more work I think we
Thanks for the comprehensive review. The only part I disagree with is the "little" in the phrase "little more work" :)

But to be serious, the next issue in my mind was also that most of the heavy lifting is done with

I'm happy to implement all of the suggestions here, they all seem reasonable. My only concern is the timeline. I can try to do this this week, but I'm a little bit wary of adequately implementing and testing all the changes by the time
Okay, I made some progress along the lines you suggested. It likely broke some of the old tests. I'll take a look at the test files that you mentioned hopefully soon. I get confused about how all the various classes and class methods are meant to interact, so it's probably not correct yet. But I'll at least try to get tests passing and it working on my files, then we can make it cleaner.
@@ -142,47 +176,69 @@ def proc_file(infile, opts):
else:
    # anatomical or DTI
    nhdr.set_xyzt_units('mm', 'unknown')
@matthew-brett for example, could I somehow move this into my pr_hdr.as_analyze_map function so that I wouldn't have to set it in nhdr in parrec2nii? I think from your comment I'd need to update my dict returned by as_analyze_map, but I don't know how exactly (as_analyze_map doesn't seem to be used anywhere other than in as_analyze_map, where it's not clear what's happening).
Or in analyze.py, rather.
Right - yes - the units thing would go very well in PARRECHeader.as_analyze_map - thanks again for working on this. I'll help as much as I can...
I'm just not sure what the syntax would look like, since as_analyze_map doesn't seem to be used or documented anywhere, so I don't have anything to go off of...
Oh dear - sorry, that is true.

I will write something better today and try and push it to master, but as_analyze_map is a method of a header that takes 'self' as the only argument (self being the header) and returns a dict, where the keys are field names in the nifti or analyze header and the values are the values for those keys in the analyze header. So, if you look at - say - the nifti1.py file header field definitions, you could return in the output dict any named field in that definition. In the case of set_xyzt_units, you would have to go to the nifti1.py source to see what header fields that sets, and set those with the key, value pairs of the returned dict. Is that a little clearer? I'll post something soon with better documentation.
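The interface described above can be sketched with a toy header class. The field names 'xyzt_units' and 'descrip' are real NIfTI-1 header fields (and 2 is the NIfTI code for millimetres), but the class itself and the values it returns are purely illustrative:

```python
class ToyPARRECHeader:
    """Toy header implementing the as_analyze_map interface described
    above: takes only self, and returns a dict mapping NIfTI/Analyze
    field names to the values they should get in the output header."""

    def as_analyze_map(self):
        # xyzt_units packs spatial and temporal units into one byte;
        # 2 is NIFTI_UNITS_MM, with the time units left unknown (0).
        return {'xyzt_units': 2,
                'descrip': b'converted from PAR/REC'}

hdr_map = ToyPARRECHeader().as_analyze_map()
```

The conversion code would then copy these key/value pairs into the output NIfTI header instead of calling set_xyzt_units explicitly in parrec2nii.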
Yeah, that's good enough for now. If you're feeling adventurous, you could even just open a PR into my branch to extend the support if I don't get to it, then this could serve as the example. I'll see how far I can get and ping you to check it out.
Got tests passing again, now I need to make them more comprehensive / incorporate the new files.
@matthew-brett none of the Philips files in that test set contain the XYTZ ordering, unfortunately. What would you recommend for extending tests? I'm not experienced enough with these data yet to know what the appropriate tests are. Any chance you can open a PR into my branch to clean up loose ends / add tests?
Ok - I will add a PR into your branch - will maybe be on Monday - won't
Sounds good
It's a bit late for you now, but I just pushed this up for review, for

If you have time to rebase on top of that in the next 10 minutes or so, that would help. Don't worry otherwise, I'm going offline again :)
I could rebase, but there are no merge conflicts...?
OK - don't worry - I'll do it offline, push on Monday or so.
Refactor header conversion routine to use mapping if available, then basic conversion always. Document the ``as_analyze_map`` interface in docstring. Add ``as_analyze_map`` tests and API documentation. Rename ``_set_format_specifics`` to ``_clean_after_mapping`` to make clear that the routine is to set needed defaults after creating a header from a mapping. Remove unused ``_set_format_specifics`` from freesurfer/mghformat
Sorry to be slow - various things came up this week, I might have more time over the weekend.
Don't sweat it -- we're going to work on some testing with our data in parallel, hopefully next week as well.
Rebased to
Sorry for the delay - I got distracted by various things. I will take a
Eric - are you online? I'm planning to upload some work but I wanted your advice.
Indeed -- what's up?
I've done some work on your branch. I found I needed to merge in another PR. I would like to merge the PRs to master, then push the rebased branch to another PR, closing this one. Is that OK?
Sure, I can always open PRs into that branch instead if necessary -- whatever organization makes most sense at your end is fine by me.
BTW might be good to get #254 in sooner rather than later since it touches many files, too... it makes testing output much cleaner / easier to parse.
OK - thanks - have just pushed #261 with rebased changes.
MRG: Cleaner tests

In the process of working on #253 I was a little bit surprised by the number of warnings during testing. This cleans that up, and fixes a couple minor errors along the way.
Goals:
Although it's not 100% complete, it would be good to get a round of review to make sure I'm not way off.