
Commit 678d4e9

updated the io.rst file

1 parent: 760291f

File tree: 2 files changed, +3 -5 lines changed

ci/requirements/doc.yml (0 additions, 1 deletion)

@@ -9,7 +9,6 @@ dependencies:
   - cartopy
   - cfgrib
   - kerchunk
-  - s3fs
   - dask-core>=2022.1
   - dask-expr
   - hypothesis>=6.75.8

doc/user-guide/io.rst (3 additions, 4 deletions)

@@ -1008,19 +1008,19 @@ directory `here <https://github.com/pydata/xarray-data>`_.
 Packages like `kerchunk`_ and `virtualizarr <https://github.com/zarr-developers/VirtualiZarr>`_
 help in creating and reading these references.
 
+
 Reading these data archives becomes really easy with ``kerchunk`` in combination
 with ``xarray``, especially when these archives are large in size. A single combined
 reference can refer to thousands of the original data files present in these archives.
 You can view the whole dataset from this `combined reference` using the above packages.
 
-The following example shows opening a combined reference generated from a ``.hdf`` file.
+The following example shows opening a combined reference generated from a ``.hdf`` file stored locally.
 
 .. ipython:: python
 
     storage_options = {
-        "remote_protocol": "s3",
-        "skip_instance_cache": True,
         "target_protocol": "file",
+        "remote_protocol": "s3",  # assuming you're working with a file in AWS
     }
 
     ds = xr.open_dataset(
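
The hunk ends mid-call, so for orientation here is a minimal sketch of how such a combined reference is typically opened via fsspec's ``reference://`` protocol with the zarr engine; the filename ``combined.json`` and the exact keyword layout are illustrative assumptions, not part of this commit:

    import xarray as xr

    # Minimal sketch: open a dataset through a kerchunk combined reference.
    # "combined.json" is a hypothetical local reference file.
    ds = xr.open_dataset(
        "reference://",
        engine="zarr",
        backend_kwargs={
            "consolidated": False,
            "storage_options": {
                "fo": "combined.json",      # the combined reference itself
                "target_protocol": "file",  # the reference is stored locally
                "remote_protocol": "s3",    # the referenced chunks live on S3
            },
        },
    )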
@@ -1202,7 +1202,6 @@ The simplest way to serialize an xarray object is to use Python's built-in pickle
 module:
 
 .. ipython:: python
-    :okexcept:
 
     import pickle
 
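For context on why ``:okexcept:`` could be dropped: pickle round-trips an xarray object like any other Python object, so the example no longer needs to tolerate an exception. A minimal sketch (the toy dataset is an illustrative assumption):

    import pickle

    import xarray as xr

    # Toy dataset, purely for demonstration
    ds = xr.Dataset({"temperature": ("x", [20.0, 21.5, 19.8])})

    # Serialize to bytes and restore; the round-trip preserves the object
    blob = pickle.dumps(ds)
    restored = pickle.loads(blob)
    assert restored.identical(ds)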