[DOCS] Add info on calendar v fixed interval. Closes #29410
Sue-Gallagher committed Jun 27, 2018 · 1 parent d7e08f0 · commit fe0131f

Showing 1 changed file with 127 additions and 48 deletions:
docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc

[[search-aggregations-bucket-datehistogram-aggregation]]
=== Date Histogram Aggregation

This multi-bucket aggregation supports exactly the same features as the normal
<<search-aggregations-bucket-histogram-aggregation,histogram>>, but it can
only be applied on date values. Because dates are represented internally in
Elasticsearch as long values, it is possible, but not as accurate, to use the normal `histogram` on dates as well. The main difference in the two APIs is
that here the interval can be specified using date/time expressions. Time-based
data requires special support because time-based intervals are not always a
fixed length.

==== Setting intervals

There seems to be no limit to the creativity we humans apply to setting our clocks and
calendars. We've invented leap years and leap seconds, standard and daylight savings
times, and timezone offsets of 30 or 45 minutes rather than a full hour. While these
creations help keep us in sync with the cosmos and our environment, they can make
specifying time intervals accurately a real challenge. The only universal truth our
researchers have yet to disprove is that a millisecond is always the same duration,
and a second is always 1000 milliseconds. Beyond that, things get complicated.

Generally speaking, when you specify a single time unit, such as 1 hour or 1 day, you
are working with a _calendar interval_, but multiples, such as 6 hours or 3 days, are
_fixed-duration intervals_.

For example, a specification of 1 day (1d) is a calendar interval that means "at
this exact time tomorrow" no matter the length of the day. A change to or from
daylight savings time that results in a 23 or 25 hour day is compensated for and the
specification of "this exact time tomorrow" is maintained. But if you specify 2 or
more days, each day must be of the same fixed duration (24 hours). In this case, if
the specified interval includes the change to or from daylight savings time, the
interval will end an hour sooner or later than you expect.

There are similar differences to consider when you specify single versus multiple minutes or hours. Multiple time periods longer than a day are not supported.
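
For example, the following request is a minimal sketch (it assumes the `sales` index and `date` field used in the examples later on this page) that buckets documents into fixed 48-hour intervals; changing the interval to `1d` would instead bucket by calendar day:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_every_two_days" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "2d"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE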

Here are the valid time specifications and their meanings:

*milliseconds (ms)* +
Fixed length interval; supports multiples.

*seconds (s)* +
1000 milliseconds; fixed length interval (except for the last second of a
minute that contains a leap-second, which is 2000ms long); supports multiples.

*minutes (m)* +
All minutes begin at 00 seconds. +
* One minute (1m) is the interval between 00 seconds of the first minute and 00
seconds of the following minute in the specified timezone, compensating for any
intervening leap seconds, so that the number of minutes and seconds past the
hour is the same at the start and end.
* Multiple minutes (_n_m) are intervals of exactly 60x1000=60,000 milliseconds
each.

*hours (h)* +
All hours begin at 00 minutes and 00 seconds. +
* One hour (1h) is the interval between 00:00 minutes of the first hour and 00:00
minutes of the following hour in the specified timezone, compensating for any
intervening leap seconds, so that the number of minutes and seconds past the hour is the same at the start and end.
* Multiple hours (_n_h) are intervals of exactly 60x60x1000=3,600,000 milliseconds each.

*days (d)* +
All days begin at the earliest possible time, which is usually 00:00:00
(midnight).
* One day (1d) is the interval between 00:00:00 of the first day and 00:00:00
of the following day in the specified timezone, compensating for any intervening
time changes, so that the number of hours, minutes, and seconds of the day is
the same at the start and end.
* Multiple days (_n_d) are intervals of exactly 24x60x60x1000=86,400,000 milliseconds each.

*weeks (w)* +
* One week (1w) is the interval between the start day:hour:minute:second and
the same day and time of the following week in the specified timezone, so that
the day and time are the same at the start and end.
* Multiple weeks (_n_w) are not supported.

*months (M)* +
* One month (1M) is the interval between the start date and time and the same date and time of the following month in the specified timezone, so that the date and time are the same at the start and end.
* Multiple months (_n_M) are not supported.

*years (y)* +
* One year (1y) is the interval between the start month, date, and time and the same
month, date, and time the following year in the specified timezone, so that the date and time are the same at the start and end.
* Multiple years (_n_y) are not supported.

*quarters (q)* +
* One quarter (1q) is the interval between the start month, date, and time and the same date and time three months later, so that the date and time are the
same at the start and end.
* Multiple quarters (_n_q) are not supported.

Widely distributed applications must also consider vagaries such as countries that
start and stop daylight savings time at 12:01 A.M., so end up with one minute of
Sunday followed by an additional 59 minutes of Saturday every fall, and countries
that decide at whim to move across the international date line. Situations like
that can make irregular timezone offsets seem easy.

As always, rigorous testing will ensure that your time interval specification is
what you intend it to be.

WARNING: To avoid unexpected results, all connected servers and clients must sync to the same network time service.

==== Examples

Requesting bucket intervals of a month.

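A sketch of such a request (the aggregation name `sales_over_time` is illustrative; the `sales` index and `date` field are assumed from the rest of this page) might be:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "month"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]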

You can also specify time values using abbreviations supported by
<<time-units,time units>> parsing.
Note that fractional time values are not supported, but you can address this by shifting to another
time unit (e.g., `1.5h` could instead be specified as `90m`).

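A sketch of a request using such a fixed interval (again assuming the `sales` index and `date` field) might be:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "90m"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]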

===== Keys

Internally, a date is represented as a 64 bit number representing a timestamp
in milliseconds-since-the-epoch (01/01/1970). These timestamps are returned as the bucket
++key++s. The `key_as_string` is the same timestamp converted to a formatted
date string using the `format` parameter specification:

TIP: If you don't specify `format`, the first date
<<mapping-date-format,format>> specified in the field mapping is used.

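A sketch of a request that sets `format` (assuming the same `sales` data) might be:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "1M",
                "format" : "yyyy-MM-dd"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

Each bucket in the response then carries both the numeric `key` (epoch milliseconds) and a `key_as_string` rendered with the supplied format, such as `2015-01-01`.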

===== Time Zone

Date-times are stored in Elasticsearch in UTC. By default, all bucketing and
rounding is also done in UTC. Use the `time_zone` parameter to indicate
that bucketing should use a different time zone.

You can specify time zones as either an ISO 8601 UTC offset (e.g. `+01:00` or
`-08:00`) or as a timezone id, an identifier used in the TZ database like
`America/Los_Angeles`.

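As an illustration, consider two documents indexed shortly after midnight UTC on 1 October 2015 and a daily histogram over them (a sketch; the index name, type, and ids here are assumptions):

[source,js]
---------------------------------
PUT my_index/log/1?refresh
{
  "date": "2015-10-01T00:30:00Z"
}

PUT my_index/log/2?refresh
{
  "date": "2015-10-01T01:30:00Z"
}

GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field":     "date",
        "interval":  "day"
      }
    }
  }
}
---------------------------------
// CONSOLE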

If you don't specify a time zone, UTC is used. This would result in both of these
documents being placed into the same day bucket, which starts at midnight UTC
on 1 October 2015:

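With the documents sketched above, the response would contain a single daily bucket keyed at midnight UTC (the key values shown here follow from that assumption):

[source,js]
---------------------------------
{
  ...
  "aggregations": {
    "by_day": {
      "buckets": [
        {
          "key_as_string": "2015-10-01T00:00:00.000Z",
          "key":           1443657600000,
          "doc_count":     2
        }
      ]
    }
  }
}
---------------------------------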

If you specify a `time_zone` of `-01:00`, then midnight starts at one hour before
midnight UTC:

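A sketch of that request and the resulting buckets, continuing with the two assumed documents above: the first document now falls into the bucket for 30 September 2015, while the second falls into the bucket for 1 October 2015.

[source,js]
---------------------------------
GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field":     "date",
        "interval":  "day",
        "time_zone": "-01:00"
      }
    }
  }
}
---------------------------------
// CONSOLE

[source,js]
---------------------------------
{
  ...
  "aggregations": {
    "by_day": {
      "buckets": [
        {
          "key_as_string": "2015-09-30T00:00:00.000-01:00", <1>
          "key":           1443574800000,
          "doc_count":     1
        },
        {
          "key_as_string": "2015-10-01T00:00:00.000-01:00", <1>
          "key":           1443661200000,
          "doc_count":     1
        }
      ]
    }
  }
}
---------------------------------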
<1> The `key_as_string` value represents midnight on each day
in the specified time zone.

WARNING: When using time zones that follow DST (daylight savings time) changes,
buckets close to the moment when those changes happen can have slightly different
sizes than would be expected from the used `interval`.
For example, consider a DST start in the `CET` time zone: on 27 March 2016 at 2am,
clocks were turned forward 1 hour to 3am local time. When using `day` as `interval`,
the bucket covering that day will only hold data for 23 hours instead of the usual
24 hours for other buckets. The same is true for shorter intervals like e.g. 12h.
Here, we will have only a 11h bucket on the morning of 27 March when the DST shift
happens.


===== Offset

Use the `offset` parameter to change the start value of each bucket by the
specified positive (`+`) or negative offset (`-`) duration, such as `1h` for
an hour, or `1d` for a day. See <<time-units>> for more possible time
duration options.

For example, when using an interval of `day`, each bucket runs from midnight
to midnight. Setting the `offset` parameter to `+6h` would change each bucket
to run from 6am to 6am:

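A sketch of such a request (the documents and index name are assumptions, in the style of the time-zone example above):

[source,js]
-----------------------------
PUT my_index/log/1?refresh
{
  "date": "2015-10-01T05:30:00Z"
}

PUT my_index/log/2?refresh
{
  "date": "2015-10-01T06:30:00Z"
}

GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field":     "date",
        "interval":  "day",
        "offset":    "+6h"
      }
    }
  }
}
-----------------------------
// CONSOLE

Instead of a single bucket starting at midnight, this groups the documents into buckets starting at 6am: the first document falls into the bucket that starts at 6am on 30 September, and the second into the bucket that starts at 6am on 1 October.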

NOTE: The start `offset` of each bucket is calculated after `time_zone`
adjustments have been made.

===== Keyed Response

Setting the `keyed` flag to `true` associates a unique string key with each bucket and returns the ranges as a hash rather than an array:

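For example (a sketch over the assumed `sales` data):

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "1M",
                "format" : "yyyy-MM-dd",
                "keyed": true
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

In the response, `buckets` is then an object keyed by the formatted date (for example `"2015-01-01": { ... }`) rather than an array.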

===== Scripts

As with the normal <<search-aggregations-bucket-histogram-aggregation,histogram>>, both document level scripts and
value level scripts are supported. It is also possible to control the order of the returned buckets using the `order`
setting and to filter the returned buckets based on a `min_doc_count` setting (by default all buckets between the first
bucket that matches documents and the last one are returned). This histogram also supports the `extended_bounds`
setting, which enables extending the bounds of the histogram beyond the data itself (to read more on why you'd want to
do that please refer to the explanation <<search-aggregations-bucket-histogram-aggregation-extended-bounds,here>>).

===== Missing value

The `missing` parameter defines how to treat documents that are missing a value.
By default, they are ignored, but it is also possible to treat them as if they
have a value.

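A sketch of such a request (the `publish_date` field is taken from the callout below; the index is assumed):

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sale_date" : {
             "date_histogram" : {
                 "field" : "publish_date",
                 "interval": "year",
                 "missing": "2000-01-01" <1>
             }
         }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]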

<1> Documents without a value in the `publish_date` field will fall into the same bucket as documents that have the value `2000-01-01`.

===== Order

By default the returned buckets are sorted by their `key` ascending, but you can control order behaviour using
the `order` setting. Supports the same `order` functionality as the <<search-aggregations-bucket-terms-aggregation-order,`Terms Aggregation`>>.

deprecated[6.0.0, Use `_key` instead of `_time` to order buckets by their dates/keys]

===== Using a script to aggregate by day of the week

There are some cases where a date histogram can't help us; for example, when we need
to aggregate the results by day of the week.
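
In that case, a script can compute the value to bucket on. A minimal sketch (assuming the `sales` index and a Painless script; the exact date API available to the script depends on your Elasticsearch version) uses a `terms` aggregation over the day of the week:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs": {
        "dayOfWeek": {
            "terms": {
                "script": {
                    "lang": "painless",
                    "source": "doc['date'].value.dayOfWeek"
                }
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

The response then contains one bucket per day of the week, keyed by its number.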
