Problems
Memory usage
When opening big gzip files, such as those shipped by the Human Connectome Project, nibabel uses too much memory (see for instance #208).
The breakage happens in gzip.open and can be explained as follows (see the traceback in #208 for details):
- gzip does not know how much memory it needs to allocate, so 1) for decompression it uses a growing chunk size, in powers of two and capped at gzip.GzipFile.max_read_chunk, and 2) for storing the output it uses a buffer that it grows progressively by appending each chunk read.
- for memory, that buffer is the worst problem: its memory footprint is doubled when the append happens (it is not an in-place append: a new buffer is created and the old one is only freed afterwards).
- if the final data is 4 GB + epsilon, the gzip opener will at some instant hold two 4 GB buffers, which in @rphlypo's case blows the memory (a simplified sketch of this read pattern follows the list).
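For illustration, here is a simplified sketch of the read pattern described above. It is not the gzip module's actual code, just a model of its behavior (using the Python 2-era max_read_chunk attribute this issue refers to), showing why peak memory roughly doubles at the end of the read:

```python
import gzip

# Simplified model of the pattern (not the real gzip source): the chunk size
# grows by powers of two up to gzip.GzipFile.max_read_chunk, and each append
# re-allocates the output buffer, so the old and the new buffer briefly
# coexist, roughly doubling the peak memory footprint at the end of the read.
def naive_gzip_read(filename):
    gzip_file = gzip.GzipFile(filename)
    buf = b''
    chunk_size = 1024
    while True:
        chunk = gzip_file.read(chunk_size)
        if not chunk:
            return buf
        buf += chunk  # builds a new buffer; the old one is freed only afterwards
        chunk_size = min(chunk_size * 2, gzip.GzipFile.max_read_chunk)
```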
Read is slow
In addition to the memory usage, this pattern leads to very inefficient code: the temporary buffer is grown again and again, which means costly memory copies. These copies are especially costly because their size is not small compared to the total memory, so the system has to move memory pages belonging to other programs to satisfy each allocation. This is typically the behavior that renders a system unresponsive when memory usage gets close to 100%. To see how bad the I/O speed is with data whose size is on the same order of magnitude as the total memory, an easy experiment is to compare the time a nibabel get_data() call takes to the time required to (a rough timing sketch follows the list):
- gzip.GzipFile('file.nii.gz').read() (slow)
- gunzip file.nii.gz -c > /dev/null (almost twice as fast)
- open in fslview (almost twice as fast)
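A rough timing sketch along these lines is below; the filename is a placeholder, and the fslview comparison has to be done by hand in the GUI:

```python
import gzip
import subprocess
import time

import nibabel

fname = 'file.nii.gz'  # placeholder for a large gzipped NIfTI file

# 1) nibabel path, which goes through the same gzip machinery
t0 = time.time()
data = nibabel.load(fname).get_data()
print('nibabel get_data():     %.1f s' % (time.time() - t0))

# 2) plain gzip read of the whole file (slow, as described above)
t0 = time.time()
raw = gzip.GzipFile(fname).read()
print('gzip.GzipFile().read(): %.1f s' % (time.time() - t0))

# 3) gunzip to /dev/null through the shell (almost twice as fast)
t0 = time.time()
subprocess.call('gunzip -c %s > /dev/null' % fname, shell=True)
print('gunzip -c > /dev/null:  %.1f s' % (time.time() - t0))
```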
Proposed improvement
The only way I can think of to improve the memory usage would be to preallocate a buffer that is big enough; we know the size roughly. However, the gzip module does not allow us to do this, and it would require fairly deep monkey-patching that I do not think is reasonable, so I give up on this aspect of the problem.
For the speed, the simple option is to increase the max_read_chunk attribute of the GzipFile object. With a 400 MB file, here is an experiment that shows the benefit:
```python
import gzip
gzip_file = gzip.open('0010042_rest_tshift_RPI_voreg_mni.nii.gz')
gzip_file.max_read_chunk = max_read_chunk = 100 * 1024 * 1024  # 100 MB
data = gzip_file.read()
```
The extra line (the one setting max_read_chunk) gives roughly a factor of 2 speedup in the above example.
To see the impact on nibabel, a little bit of monkey-patching enables experimentation:
```python
import gzip
gzip.GzipFile.max_read_chunk = max_read_chunk = 200 * 1024 * 1024  # 200 MB
```
This gives roughly a factor-of-two speedup.
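With that monkey-patch in place, an ordinary nibabel load picks up the larger chunk size automatically; a minimal usage sketch, reusing the file from the example above:

```python
import gzip
import nibabel

# Raise the chunk size globally, as above, before loading
gzip.GzipFile.max_read_chunk = 200 * 1024 * 1024  # 200 MB

img = nibabel.load('0010042_rest_tshift_RPI_voreg_mni.nii.gz')
data = img.get_data()  # the gzip reads inside now use the larger chunks
```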
Action point
I am proposing to submit a patch to nibabel that uses the following strategy for an opener:
```python
gzip_file = gzip.open('0010042_rest_tshift_RPI_voreg_mni.nii.gz')
gzip_file.max_read_chunk = max_read_chunk = 100 * 1024 * 1024  # 100 MB
data = gzip_file.read()
```
It should be an easy, local modification. @matthew-brett: would you be in favor of such a patch?
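For illustration, here is a minimal sketch of what such an opener helper could look like; the function name and the default chunk size are only suggestions, not an existing nibabel API:

```python
import gzip

def gzip_open_big_chunks(filename, mode='rb',
                         max_read_chunk=100 * 1024 * 1024):
    # Hypothetical helper: open a gzip file and raise the read chunk size
    # (100 MB by default) so full-file reads need far fewer reallocations.
    gzip_file = gzip.open(filename, mode)
    gzip_file.max_read_chunk = max_read_chunk
    return gzip_file
```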