Fix links in docs when viewed in browser #1102

Merged: 4 commits, Jul 19, 2022
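Judging from the diff below, the breakage is that the notebooks link to capitalized anchors (e.g. `#Projection`), while the rendered HTML docs generate lowercase heading ids, so the links work in Jupyter but 404 in the browser docs. A rough sketch of the kind of slugification involved (the exact rules differ between Jupyter, GitHub, and the docs build, so `heading_to_anchor` here is an illustrative approximation, not the real implementation):

```python
import re

def heading_to_anchor(heading):
    """Approximate the anchor id a Markdown renderer derives from a
    heading: lowercase, strip punctuation, hyphenate spaces. Exact
    rules vary by renderer; this is an illustration, not a spec."""
    slug = heading.strip().lower()
    slug = re.sub(r"[^\w\- ]", "", slug)   # drop punctuation
    return slug.replace(" ", "-")

# "Colormapping with negative values" -> "colormapping-with-negative-values"
anchor = heading_to_anchor("Colormapping with negative values")
```

This is why the PR lowercases every in-page link target while leaving the link text unchanged.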
20 changes: 10 additions & 10 deletions examples/getting_started/2_Pipeline.ipynb
@@ -14,11 +14,11 @@
"\n",
"In this notebook, we'll first put together a simple, artificial example to get some data, and then show how to configure and customize each of the data-processing stages involved:\n",
"\n",
"1. [Projection](#Projection)\n",
"2. [Aggregation](#Aggregation)\n",
"3. [Transformation](#Transformation)\n",
"4. [Colormapping](#Colormapping)\n",
"5. [Embedding](#Embedding)\n",
"1. [Projection](#projection)\n",
"2. [Aggregation](#aggregation)\n",
"3. [Transformation](#transformation)\n",
"4. [Colormapping](#colormapping)\n",
"5. [Embedding](#embedding)\n",
"\n",
"## Data\n",
"\n",
@@ -237,7 +237,7 @@
"source": [
"Here ``count()`` renders each possible count in a different color, to show the statistical distribution by count, while ``any()`` turns on a pixel if any point lands in that bin, and ``mean('y')`` averages the `y` column for every datapoint that falls in that bin. Of course, since every datapoint falling into a bin happens to have the same `y` value, the mean reduction with `y` simply scales each pixel by its `y` location.\n",
"\n",
"For the last image above, we specified that the `val` column should be used for the `mean` reduction, which in this case results in each category being assigned a different shade of blue, because in our dataset all items in the same category happen to have the same `val`. Here we also manipulated the result of the aggregation before displaying it by subtracting it from 50, a *Transformation* as described in more detail [below](#Transformation)."
"For the last image above, we specified that the `val` column should be used for the `mean` reduction, which in this case results in each category being assigned a different shade of blue, because in our dataset all items in the same category happen to have the same `val`. Here we also manipulated the result of the aggregation before displaying it by subtracting it from 50, a *Transformation* as described in more detail [below](#transformation)."
]
},
{
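For readers skimming the diff, the reductions this cell discusses can be approximated outside Datashader. The following NumPy sketch uses synthetic data and a hypothetical 10x10 canvas (it is not Datashader's actual implementation) to show what `count()`, `any()`, and `mean(...)` compute per bin:

```python
import numpy as np

# Synthetic stand-in data: 1000 points with x, y in [0, 1) and a value
# column (not the notebook's dataset; just enough to show the math).
rng = np.random.default_rng(0)
x, y, val = rng.random(1000), rng.random(1000), rng.random(1000)

w = h = 10                          # a hypothetical 10x10 canvas
ix = (x * w).astype(int)            # column index of each point's bin
iy = (y * h).astype(int)            # row index of each point's bin

count = np.zeros((h, w))
np.add.at(count, (iy, ix), 1)       # count(): number of points per bin

total = np.zeros((h, w))
np.add.at(total, (iy, ix), val)     # running sum of `val` per bin

any_hit = count > 0                 # any(): True if >= 1 point in the bin
with np.errstate(divide="ignore", invalid="ignore"):
    mean_val = np.where(count > 0, total / count, np.nan)  # mean('val')
```

Empty bins are NaN under `mean`, which is why Datashader can later render them as fully transparent.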
@@ -349,7 +349,7 @@
"\n",
"Other custom categorizers can also be defined as new Python classes; see `category_codes` in [reductions.py](https://github.com/holoviz/datashader/blob/master/datashader/reductions.py).\n",
"\n",
"As for `aggc`, the `agg3D` and `agg3D_modulo` arrays can then be selected or collapsed to get a 2D aggregate array for display, or each `val` can be mapped to a different color for display in the [colormapping](#Colormapping) stage. "
"As for `aggc`, the `agg3D` and `agg3D_modulo` arrays can then be selected or collapsed to get a 2D aggregate array for display, or each `val` can be mapped to a different color for display in the [colormapping](#colormapping) stage. "
]
},
{
@@ -526,7 +526,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Here the different colors mix not just visually due to blurring, but are actually mixed mathematically per pixel, such that pixels that include data from multiple categories will take intermediate color values. The total (summed) data values across all categories are used to calculate the alpha channel, with the previously computed color being revealed to a greater or lesser extent depending on the value of the aggregate for that bin. Thus you can still see which pixels have the highest counts (as they have a higher opacity and thus have brighter colors here), while the specific colors show you which category(ies) are most common in each pixel. See [Colormapping with negative values](#Colormapping-with-negative-values) below for more details on how these colors and transparencies are calculated."
"Here the different colors mix not just visually due to blurring, but are actually mixed mathematically per pixel, such that pixels that include data from multiple categories will take intermediate color values. The total (summed) data values across all categories are used to calculate the alpha channel, with the previously computed color being revealed to a greater or lesser extent depending on the value of the aggregate for that bin. Thus you can still see which pixels have the highest counts (as they have a higher opacity and thus have brighter colors here), while the specific colors show you which category(ies) are most common in each pixel. See [Colormapping with negative values](#colormapping-with-negative-values) below for more details on how these colors and transparencies are calculated."
]
},
{
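The per-pixel mixing this cell describes can be illustrated with a toy calculation. The category colors and the linear alpha ramp below are hypothetical; Datashader's real colormapping also applies nonlinear scaling such as `eq_hist`, so this is a simplification of the idea, not its implementation:

```python
# Each pixel holds one count per category; the displayed color is the
# count-weighted mean of the category colors, and alpha grows with the
# total count, so denser bins are more opaque.
colors = {"d1": (0, 0, 255), "d2": (255, 0, 0)}       # blue and red

def mix_pixel(counts, colors, max_total):
    total = sum(counts.values())
    if total == 0:
        return (0, 0, 0, 0)                           # empty bin: transparent
    r, g, b = (
        sum(counts[c] * colors[c][i] for c in counts) / total
        for i in range(3)
    )
    alpha = int(255 * total / max_total)              # denser bin, more opaque
    return (int(r), int(g), int(b), alpha)

# Equal counts of both categories mix to purple, at half the opacity of
# the densest pixel on the canvas:
mixed = mix_pixel({"d1": 5, "d2": 5}, colors, max_total=20)
```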
@@ -761,7 +761,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The categorical examples above focus on counts, but `ds.by` works on other aggregate types as well, colorizing by category but aggregating by sum, mean, etc. (but see the [following section](#Colormapping-with-negative-values) for details on how to interpret such colors):"
"The categorical examples above focus on counts, but `ds.by` works on other aggregate types as well, colorizing by category but aggregating by sum, mean, etc. (but see the [following section](#colormapping-with-negative-values) for details on how to interpret such colors):"
]
},
{
@@ -884,7 +884,7 @@
"source": [
"See [the API docs](https://datashader.org/api.html#transfer-functions) for more details. Image composition operators to provide for the `how` argument of `tf.stack` (e.g. `over` (default), `source`, `add`, and `saturate`) are listed in [composite.py](https://raw.githubusercontent.com/holoviz/datashader/master/datashader/composite.py) and illustrated [here](http://cairographics.org/operators).\n",
"\n",
"## Moving on\n",
"## Embedding\n",
"\n",
"The steps outlined above represent a complete pipeline from data to images, which is one way to use Datashader. However, in practice one will usually want to add one last additional step, which is to embed these images into a plotting program to be able to get axes, legends, interactive zooming and panning, etc. The [next notebook](3_Interactivity.ipynb) shows how to do such embedding."
]
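The default `over` operator named in that cell is standard Porter-Duff source-over compositing. A float-valued sketch of the formula (Datashader's `composite.py` works on packed uint8 RGBA images, so this is illustrative only):

```python
def over(src, dst):
    """Porter-Duff source-over for straight (non-premultiplied) RGBA
    tuples with channels in [0, 1]: src is composited on top of dst."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    oa = sa + da * (1 - sa)                      # output alpha
    if oa == 0:
        return (0.0, 0.0, 0.0, 0.0)              # both inputs transparent
    def blend(s, d):
        return (s * sa + d * da * (1 - sa)) / oa
    return (blend(sr, dr), blend(sg, dg), blend(sb, db), oa)

# An opaque red source completely hides an opaque blue destination,
# while a half-transparent red over opaque blue yields an opaque purple:
fully_red = over((1, 0, 0, 1), (0, 0, 1, 1))
purple = over((1, 0, 0, 0.5), (0, 0, 1, 1))
```

The other `how` values (`source`, `add`, `saturate`) substitute different Porter-Duff or arithmetic blend rules into the same per-pixel loop.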
16 changes: 8 additions & 8 deletions examples/user_guide/1_Plotting_Pitfalls.ipynb
@@ -10,14 +10,14 @@
"\n",
"We'll cover:\n",
"\n",
"1. [Overplotting](#1.-Overplotting)\n",
"2. [Oversaturation](#2.-Oversaturation)\n",
"3. [Undersampling](#3.-Undersampling)\n",
"4. [Undersaturation](#4.-Undersaturation)\n",
"5. [Underutilized range](#5.-Underutilized-range)\n",
"6. [Nonuniform colormapping](#6.-Nonuniform-colormapping)\n",
"\n",
"You can [skip to the end](#Summary) if you just want to see an illustration of these problems.\n",
"1. [Overplotting](#overplotting)\n",
"2. [Oversaturation](#oversaturation)\n",
"3. [Undersampling](#undersampling)\n",
"4. [Undersaturation](#undersaturation)\n",
"5. [Underutilized range](#underutilized-range)\n",
"6. [Nonuniform colormapping](#nonuniform-colormapping)\n",
"\n",
"You can [skip to the end](#summary) if you just want to see an illustration of these problems.\n",
"\n",
"This notebook requires [HoloViews](https://holoviews.org), [colorcet](https://colorcet.pyviz.org), and matplotlib, and optionally scikit-image, which can be installed with:\n",
"\n",
4 changes: 2 additions & 2 deletions examples/user_guide/5_Grids.ipynb
@@ -8,7 +8,7 @@
"\n",
"In some cases, your data is *already* rasterized, such as data from imaging experiments, simulations on a regular grid, or other regularly sampled processes. Even so, the rasters you have already are not always the ones you need for a given purpose, having the wrong shape, range, or size to be suitable for overlaying with or comparing against other data, maps, and so on. Datashader provides fast methods for [\"regridding\"](https://climatedataguide.ucar.edu/climate-data-tools-and-analysis/regridding-overview)/[\"re-sampling\"](http://gisgeography.com/raster-resampling/)/\"re-rasterizing\" your regularly gridded datasets, generating new rasters on demand that can be used together with those it generates for any other data types. Rasterizing into a common grid can help you implement complex cross-datatype analyses or visualizations. \n",
"\n",
"In other cases, your data is stored in a 2D array similar to a raster, but represents values that are not regularly sampled in the underlying coordinate space. Datashader also provides fast methods for rasterizing these more general rectilinear or curvilinear grids, known as [quadmeshes](#Quadmesh-Rasterization) as described later below. Fully arbitrary unstructured grids ([Trimeshes](Trimesh.ipynb)) are discussed separately.\n",
"In other cases, your data is stored in a 2D array similar to a raster, but represents values that are not regularly sampled in the underlying coordinate space. Datashader also provides fast methods for rasterizing these more general rectilinear or curvilinear grids, known as [quadmeshes](#quadmesh-Rasterization) as described later below. Fully arbitrary unstructured grids ([Trimeshes](Trimesh.ipynb)) are discussed separately.\n",
"\n",
"## Re-rasterization\n",
"\n",
@@ -60,7 +60,7 @@
"source": [
"## Interpolation (upsampling)\n",
"\n",
"So, what if we want a larger version? We can do that with the Datashader `Canvas.raster` method. Note that `Canvas.raster` only supports regularly spaced rectilinear, 2D or 3D ``DataArray``s, and so will not accept the additional dimensions or non-separable coordinate arrays that xarray allows (for which see [Quadmesh](#Quadmesh-rasterization), below). Also see the Quadmesh docs below if you want to use a [GPU](https://datashader.org/user_guide/Performance.html#Data-objects) for processing your raster data, because the Datashader `Canvas.raster` implementation does not yet support GPU arrays.\n",
"So, what if we want a larger version? We can do that with the Datashader `Canvas.raster` method. Note that `Canvas.raster` only supports regularly spaced rectilinear, 2D or 3D ``DataArray``s, and so will not accept the additional dimensions or non-separable coordinate arrays that xarray allows (for which see [Quadmesh](#quadmesh-rasterization), below). Also see the Quadmesh docs below if you want to use a [GPU](https://datashader.org/user_guide/Performance.html#Data-objects) for processing your raster data, because the Datashader `Canvas.raster` implementation does not yet support GPU arrays.\n",
"\n",
"Assuming you are ready to use `Canvas.raster`, let's try upsampling with either nearest-neighbor or bilinear interpolation (the default):"
]
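The difference between the two upsampling modes this cell compares can be sketched without Datashader (NumPy only; `Canvas.raster`'s actual resampling code is more general and handles coordinates, not just array indices):

```python
import numpy as np

def upsample_nearest(a, factor):
    """Nearest-neighbor: repeat each cell, giving a blocky result that
    preserves the original values exactly."""
    return np.repeat(np.repeat(a, factor, axis=0), factor, axis=1)

def upsample_linear_1d(row, n_out):
    """Linear interpolation along one axis; 2D bilinear interpolation
    applies this first to rows and then to columns."""
    x_in = np.arange(len(row))
    x_out = np.linspace(0, len(row) - 1, n_out)
    return np.interp(x_out, x_in, row)

a = np.array([[0.0, 1.0],
              [2.0, 3.0]])
blocky = upsample_nearest(a, 2)            # 4x4, values only 0, 1, 2, 3
smooth_row = upsample_linear_1d(a[0], 3)   # introduces the new value 0.5
```

Nearest-neighbor keeps the original value set (good for categorical rasters), while bilinear interpolation introduces intermediate values (usually what you want for continuous data).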