Merged
@@ -6,16 +6,14 @@
"source": [
"## Cloudless Mosaic\n",
"\n",
"In this tutorial, you'll learn how to construct a *cloudless mosaic* (also known as a composite) from a time series of satellite images. The tutorial covers the following steps:\n",
"This tutorial constructs a *cloudless mosaic* (also known as a composite) from a time series of satellite images. We'll see the following:\n",
"\n",
"* [Find a time series of images at a particular point on Earth](#Discover-data)\n",
"* [Stack those images together into a single array](#Stack-images)\n",
"* [Compute the cloudless mosaic by taking a median](#Median-composite)\n",
"* [Create mosaics after grouping the data](#Monthly-composite)\n",
"* Find a time series of images at a particular point on Earth\n",
"* Stack those images together into a single array\n",
"* Compute the cloudless mosaic by taking a median\n",
"* Visualize the results\n",
"\n",
"This example uses [Sentinel-2 Level-2A](https://planetarycomputer.microsoft.com/dataset/sentinel-2-l2a) data. The techniques used here work equally well with other remote-sensing datasets.\n",
"\n",
"---"
"This example uses [Sentinel-2 Level-2A](https://planetarycomputer.microsoft.com/dataset/sentinel-2-l2a) data. The techniques used here apply equally well to other remote-sensing datasets."
]
},
{
@@ -43,7 +41,7 @@
"source": [
"### Create a Dask cluster\n",
"\n",
"This example requires processing a large amount of data. To cut down on the execution time, use a Dask cluster to do the computation in parallel, adaptively scaling to add and remove workers as needed. See [Scale With Dask](../quickstarts/scale-with-dask.ipynb) for more on using Dask."
"We're going to process a large amount of data. To cut down on the execution time, we'll use a Dask cluster to do the computation in parallel, adaptively scaling to add and remove workers as needed. See [Scale With Dask](../quickstarts/scale-with-dask.ipynb) for more on using Dask."
]
},
{
@@ -60,7 +58,7 @@
}
],
"source": [
"cluster = GatewayCluster() # creates the Dask Scheduler - might take a minute.\n",
"cluster = GatewayCluster() # Creates the Dask Scheduler. Might take a minute.\n",
"\n",
"client = cluster.get_client()\n",
"\n",
@@ -74,7 +72,7 @@
"source": [
"### Discover data\n",
"\n",
"In this example, the area of interest is located near Redmond, Washington. It is defined as a GeoJSON object."
"In this example, we define our area of interest as a GeoJSON object. It's near Redmond, Washington."
]
},
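The cell above defines the area of interest as a GeoJSON object; a minimal sketch of what such an object looks like (the coordinates here are round-number stand-ins near Redmond, not the tutorial's exact values):

```python
# A GeoJSON Polygon for an area of interest near Redmond, Washington.
# NOTE: these coordinates are illustrative stand-ins, not the tutorial's values.
area_of_interest = {
    "type": "Polygon",
    "coordinates": [
        [
            [-122.28, 47.55],
            [-121.96, 47.55],
            [-121.96, 47.75],
            [-122.28, 47.75],
            [-122.28, 47.55],  # GeoJSON rings close on the first vertex
        ]
    ],
}
```

GeoJSON uses `[longitude, latitude]` order, and a polygon ring repeats its first coordinate to close.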
{
@@ -102,7 +100,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Use `pystac_client` to search the Planetary Computer's STAC endpoint for items matching your query parameters:"
"Using `pystac_client` we can search the Planetary Computer's STAC endpoint for items matching our query parameters."
]
},
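The shape of such a query can be sketched as plain parameters; the bounding box, date range, and cloud-cover threshold below are assumptions for illustration, and the actual pystac-client call is shown commented since it needs network access:

```python
# Illustrative STAC search parameters; the bbox, datetime range, and
# cloud-cover threshold are assumptions, not the tutorial's exact values.
search_params = {
    "collections": ["sentinel-2-l2a"],
    "bbox": [-122.28, 47.55, -121.96, 47.75],  # [xmin, ymin, xmax, ymax]
    "datetime": "2020-01-01/2020-12-31",
    "query": {"eo:cloud_cover": {"lt": 25}},   # keep mostly cloud-free scenes
}

# With pystac-client installed, the search would look roughly like:
# from pystac_client import Client
# catalog = Client.open("https://planetarycomputer.microsoft.com/api/stac/v1")
# items = list(catalog.search(**search_params).get_items())
```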
{
@@ -137,11 +135,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, there are 138 items that match your search requirements in terms of location, time, and cloudiness. Those items will still have *some* clouds over portions of the scenes, though. \n",
"\n",
"### Stack images\n",
"\n",
"To create a cloudless mosaic, first, load the data into an [xarray](https://xarray.pydata.org/en/stable/) DataArray using [stackstac](https://stackstac.readthedocs.io/):"
"So 138 items match our search requirements, over space, time, and cloudiness. Those items will still have *some* clouds over portions of the scenes, though. To create our cloudless mosaic, we'll load the data into an [xarray](https://xarray.pydata.org/en/stable/) DataArray using [stackstac](https://stackstac.readthedocs.io/) and then reduce the time-series of images down to a single image."
]
},
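stackstac produces a lazy, dask-backed DataArray, but the core stacking idea — piling per-date images along a new time axis — can be sketched with plain NumPy (tiny hypothetical 2x2 scenes):

```python
import numpy as np

# Three hypothetical single-band 2x2 scenes from different acquisition dates.
scenes = [np.full((2, 2), value) for value in (0.1, 0.2, 0.9)]

# Stack along a new leading "time" axis, giving shape (time, y, x);
# stackstac does the same thing lazily, with band/x/y coordinates attached.
stack = np.stack(scenes, axis=0)
print(stack.shape)  # (3, 2, 2)
```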
{
@@ -166,13 +160,6 @@
" signed_items.append(planetary_computer.sign(item).to_dict())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, reduce the time series of images down to a single image:"
]
},
{
"cell_type": "code",
"execution_count": 6,
@@ -1484,7 +1471,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the data matching your query isn't too large, you can persist it in distributed memory. Once it is stored in memory, subsequent operations will be much faster."
"Since the data matching our query isn't too large, we can persist it in distributed memory. Once in memory, subsequent operations will be much faster."
]
},
{
@@ -1502,7 +1489,7 @@
"source": [
"### Median composite\n",
"\n",
"Using regular xarray operations, you can [compute the median](http://xarray.pydata.org/en/stable/generated/xarray.DataArray.median.html) over the time dimension. Under the assumption that clouds are transient, the composite shouldn't contain (many) clouds, since clouds shouldn't be the median pixel value at that point over many images.\n",
"Using normal xarray operations, we can [compute the median](http://xarray.pydata.org/en/stable/generated/xarray.DataArray.median.html) over the time dimension. Under the assumption that clouds are transient, the composite shouldn't contain (many) clouds, since they shouldn't be the median pixel value at that point over many images.\n",
"\n",
"This will be computed in parallel on the cluster (make sure to open the Dask Dashboard using the link printed out above)."
]
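Why the median discards clouds can be sketched for a single pixel, under the assumption that cloudy observations show up as bright outliers:

```python
import numpy as np

# Hypothetical reflectance time series for one pixel: mostly ~0.1 (ground),
# with two bright cloudy observations (~0.9).
pixel_series = np.array([0.10, 0.12, 0.90, 0.11, 0.95, 0.09])

# The median ignores the transient outliers and recovers the ground value.
composite_value = np.median(pixel_series)
print(composite_value)  # 0.115
```

In the notebook this reduction is `data.median(dim="time")`, evaluated in parallel on the Dask cluster.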
@@ -1520,7 +1507,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Use Xarray-Spatial's `true_color` method to visualize your data by converting it to red/green/blue values."
"To visualize the data, we'll use xarray-spatial's `true_color` method to convert to red/green/blue values."
]
},
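xarray-spatial's `true_color` handles the band-to-RGB conversion; the underlying idea — clipping reflectance and rescaling to 8-bit channels — can be sketched with NumPy (the `vmax` cutoff is an assumed value, and the real function also applies a contrast adjustment):

```python
import numpy as np

def sketch_true_color(red, green, blue, vmax=0.3):
    # Clip reflectance to [0, vmax], rescale to 0-255, and stack as RGB.
    # A simplified stand-in for xarray-spatial's true_color, not its exact algorithm.
    channels = [np.clip(band / vmax, 0.0, 1.0) * 255 for band in (red, green, blue)]
    return np.stack(channels, axis=-1).astype(np.uint8)

r = g = b = np.full((2, 2), 0.15)  # hypothetical band values
rgb = sketch_true_color(r, g, b)
print(rgb.shape, rgb.dtype)  # (2, 2, 3) uint8
```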
{
@@ -1565,7 +1552,7 @@
"source": [
"### Monthly composite\n",
"\n",
"Now suppose you don't want to combine images from different parts of the year (for example, you might not want to combine images from January that often include snow with images from July). Again using standard xarray syntax, you can create sets of per-month composites by grouping by month and then computing the median:"
"Now suppose we don't want to combine images from different parts of the year (for example, we might not want to combine images from January that often include snow with images from July). Again using standard xarray syntax, we can create a set of per-month composites by grouping by month and then taking the median."
]
},
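Grouping-then-reducing can be sketched without xarray: bucket scenes by calendar month, then take the median within each bucket (the dates and values below are hypothetical):

```python
import numpy as np

# Hypothetical (date, scene) observations; each scene is a 2x2 array.
observations = [
    ("2020-01-05", np.full((2, 2), 0.10)),
    ("2020-01-20", np.full((2, 2), 0.90)),  # a cloudy January scene
    ("2020-07-03", np.full((2, 2), 0.30)),
    ("2020-07-18", np.full((2, 2), 0.32)),
]

# Bucket scenes by month, then reduce each bucket with a median over time.
by_month = {}
for date, scene in observations:
    month = int(date.split("-")[1])
    by_month.setdefault(month, []).append(scene)

monthly_composites = {
    month: np.median(np.stack(scenes), axis=0)
    for month, scenes in by_month.items()
}
print(sorted(monthly_composites))  # [1, 7]
```

In the notebook, xarray collapses all of this into roughly `data.groupby("time.month").median(dim="time")`.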
{
@@ -1581,7 +1568,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Convert each of those arrays to a true-color image and plot the results as a grid:"
"Let's convert each of those arrays to a true-color image and plot the results as a grid."
]
},
{
@@ -1617,17 +1604,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Next steps\n",
"\n",
"To learn more about using the Planetary Computer's STAC API, see [Reading data from the STAC API](../quickstarts/reading-stac.ipynb). To learn more about Dask, see [Scaling with Dask](../quickstarts/scale-with-dask.ipynb).\n",
"### Learn more\n",
"\n",
"Click on this link to go to the next notebook: [04 Geospatial Classification](04_Geospatial_Classification.ipynb)"
"To learn more about using the Planetary Computer's STAC API, see [Reading data from the STAC API](../quickstarts/reading-stac.ipynb). To learn more about Dask, see [Scaling with Dask](../quickstarts/scale-with-dask.ipynb)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -1641,7 +1626,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.8.8"
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {