
[Lens] Expression function to scale numbers to a specific time interval #77692

Closed · Tracked by #74813
wylieconlon opened this issue Sep 16, 2020 · 5 comments · Fixed by #82104
Labels: Feature:Lens, Project:LensDefault, Team:Visualizations

Comments

wylieconlon (Contributor) commented Sep 16, 2020

At the expression level, this is how we plan to implement the "time scaling" logic that is part of supporting time series functions in Lens:

  1. First, we need to know the exact interval of the date histogram by adding more metadata.
  2. Any function that does time scaling logic will call an expression function like this:
| esaggs aggs={...}
| derivative ...
| time_scale output="1s"
  3. Apply a suffix to the formatted numbers to indicate the rate in text.

Algorithm to calculate initial scaling factor

The time scale function takes an output target scale, defined as a fixed interval that can be statically translated into milliseconds (without relying on time zones etc.).
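For illustration, a minimal sketch of that static translation (the helper name and the set of supported units are assumptions, not the final API):

const UNIT_MS: Record<string, number> = {
  s: 1000,
  m: 60 * 1000,
  h: 60 * 60 * 1000,
  d: 24 * 60 * 60 * 1000,
};

// e.g. targetIntervalToMs('1s') === 1000, targetIntervalToMs('3h') === 10800000
function targetIntervalToMs(interval: string): number {
  const match = interval.match(/^(\d+)([smhd])$/);
  if (!match) throw new Error(`Unsupported interval: ${interval}`);
  return Number(match[1]) * UNIT_MS[match[2]];
}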

To get the input scale (the size of the current time bucket), use the utility function getInterval #78745 on the column meta data to get the interval used on the ES level. Then use moment to expand that into a millisecond range: starting from the start of the bucket (defined by either the very beginning of the queried time range or the key of the current bucket), use the add function of the moment object, capping at the very end of the queried time range.
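A sketch of that input-scale computation, assuming moment and a hypothetical getInterval result already split into value and unit:

import moment from 'moment';

// Width in ms of the bucket starting at `bucketStart`, expanded via moment
// (so calendar-aware units behave correctly) and capped at the end of the
// queried time range.
function inputIntervalInMs(
  bucketStart: number, // start of the queried time range or key of the current bucket
  intervalValue: number, // e.g. 3, from getInterval on the column meta data
  intervalUnit: moment.unitOfTime.DurationConstructor, // e.g. 'h'
  rangeEnd: number // end of the queried time range
): number {
  const bucketEnd = moment(bucketStart).add(intervalValue, intervalUnit).valueOf();
  return Math.min(bucketEnd, rangeEnd) - bucketStart;
}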

The value of the metric column is multiplied by (target scale in ms) / (input scale in ms of the current bucket). This is repeated for each row of the current data table.
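Put together, the per-row step might look like this (a sketch with the table shape simplified; real Lens tables also carry columns and meta data):

function scaleRows(
  rows: Array<Record<string, unknown>>,
  metricId: string,
  targetScaleMs: number,
  inputScaleMsForRow: (row: Record<string, unknown>) => number
) {
  return rows.map((row) => {
    const value = row[metricId];
    if (typeof value !== 'number') return row; // leave empty buckets untouched
    return { ...row, [metricId]: value * (targetScaleMs / inputScaleMsForRow(row)) };
  });
}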

Implementation

Because of the specifics of the function and the difficulty of using it correctly outside of the Lens context, this function will be implemented within the Lens plugin for now.
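For context, a rough skeleton of what such a Lens-internal expression function could look like (the name, argument names and types are illustrative assumptions, not the final API):

import type { ExpressionFunctionDefinition, Datatable } from 'src/plugins/expressions/common';

interface TimeScaleArgs {
  dateColumnId: string; // column carrying the date histogram buckets
  inputColumnId: string; // metric column to scale
  outputColumnId: string; // column to write the scaled values to
  targetUnit: string; // e.g. 's', 'm', 'h', 'd'
}

export const timeScale: ExpressionFunctionDefinition<
  'lens_time_scale',
  Datatable,
  TimeScaleArgs,
  Datatable
> = {
  name: 'lens_time_scale',
  type: 'datatable',
  inputTypes: ['datatable'],
  help: 'Scales a metric column to a target time interval',
  args: {
    dateColumnId: { types: ['string'], help: '' },
    inputColumnId: { types: ['string'], help: '' },
    outputColumnId: { types: ['string'], help: '' },
    targetUnit: { types: ['string'], help: '' },
  },
  fn(input, args) {
    // 1. read the date histogram interval from the date column's meta data
    // 2. compute the scaling factor per row (see algorithm above)
    // 3. return a new table with the scaled column
    return input; // placeholder
  },
};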

@wylieconlon wylieconlon added Team:Visualizations Visualization editors, elastic-charts and infrastructure Feature:Lens Project:LensDefault labels Sep 16, 2020
elasticmachine (Contributor) commented

Pinging @elastic/kibana-app (Team:KibanaApp)

monfera (Contributor) commented Oct 1, 2020

Feedback on the format: functional specifications for calculations would be useful (this and further comments written at the request of @flash1293)

These are GitHub issues in a conversational format that link to other GitHub issues, PRs etc. Neither the description nor the combined thread can be construed as a specification:

  • it's less than that, as at best the details have to be pieced together from numerous links and from comments that may update the original description
  • it's also more than that, as these GitHub items mix the concern of what with the concern of how.

Wiki pages or some kind of live docs would be a better place than conversational github issues; docs can be revised, questions can still be asked. Eg. in Google Docs, comments on the margin directly link to the words in question.

While not everything under the Sun needs a written specification, I think calculation functions do. Time series functions may have a surprising amount of subtlety.

There'd be multiple uses for jotting down a functional specification that's comprehensive and focuses on the what (inputs and outputs), while refraining from looping in implementation detail(*), which would be noise when reading a spec:

  • helps with making unit tests and integration tests
  • helps review code - clear functional criteria of acceptance - and helps when testing the functionality via the UI too (same)
  • can be communicated with ES developers to check if logic is equivalent, where we want to match calculations; it's crucial once some kind of query planner can push down operators into Elasticsearch, eg. if some prerequisites make it equivalent
  • good basis for end-user documentation on the web and maybe from within Lens UI - users need to know how each thing is computed
  • someone on the team or new to the team can get a self-contained understanding without chasing down information in a piecemeal manner
  • helps upstream teams eg. dataviz in longer term curation of features
  • being "trivial" shouldn't be grounds for not specifying something; if it's indeed trivial, then jotting it down is also trivial, doesn't take much time
  • if something is complex, then jotting down a spec is important, otherwise the only documentation of a complex calculation function is the source code
  • something may appear trivial; writing it down may lead to interesting thoughts

As there are no functional specs yet, it's hard to even ask clarifying questions; it would be good to have a baseline. I already posted some questions and may add more.

(*) remarks for the implementors, eg. efficiency hints, can still be looped in, in clearly marked, separated points or callouts with different indentation, font etc., so there's total clarity as to where each implementation note begins and ends

monfera (Contributor) commented Oct 1, 2020

These are things a spec might address; many questions go away if certain simplifying conditions are made explicit in the spec. Also, this is not meant as a functional wishlist; the purpose is merely to see the scope, maybe with the odd well-separated comment for known future expectations.

1. Would be great to specify what the time granularities can be

  • what are the units? eg. hour, second, ... sounds trivial or may be easy to look up, but nice to collect it in one place, or directly link, if can be done unambiguously without "this link, plus this, sans this, with the change that..."
  • what are their corresponding abbreviations? eg. h, s, ... and do they need localisation?
  • can there be arbitrary multipliers of a unit, eg. 3h? can it be 2.5h, maybe built from a minute-level granularity? (I guess, done as "150mins")
  • is there a distinction between serial time, eg. absolute hour, vs. relative time, such as hour of day? Similarly, sequential day vs day of the year/month/week? A spec may admit them, rule them out or postpone, as the need may be legit for showing or aggregating on cyclical or seasonal phenomena

2. Edge cases

  • if eg. 3h is specified, would the first bin, and all subsequent bins except generally the last one have 3 hours in them?
  • is the bin that doesn't have all the required parts, eg. a trailing bin with 2 hours only, using a different divisor, for comparability?
  • if it's ruled out, how do the time bounds relate to the "outer" temporal filter specification? Only bins that fully fit are returned or used? Or if filter bounds would cut bins in half, is the time domain expanded accordingly? Or is the user only ever able to specify exact bounds?
  • even if such a clipping effect is ruled out at the ES query level, can pipeline functions, and user-specified upstream calculation steps yield non-snapped bins, eg. beginning or end of series with truncation?

3. Composition

  • Can these output="1s" etc. clauses be specified differently for different calculations within a single Lens state? I guess the answer is yes, based on screenshots in [Lens] Support for all time series functions #74813 (Advanced)
  • Is it expected that differing calculations lead to differing bins that need to be visualized simultaneously eg. daily and hourly on the same chart? What's the expectation toward that?

4. Interaction with other calculations

  • certain indicators eg. moving averages, differences etc. might alter bin count; eg. in case of differences, the first bin is sometimes dropped (we don't know from what the first value changed -> can't compute the difference unless an equal value, zero etc. is assumed) - will Lens just use and render the bins that are the temporal intersection of everything calculated?
  • if this is the case, then would a rate calculation that's downstream of a time cropping operator (eg. differences) just receive the subset?
  • Is it a given that subsequent operators have the same time raster? For example, an upstream, 1hr resolution difference op may pipe data into a downstream temporal normalizer which is on a 3hr cadence, but the first hour is lost due to cropping (if cropping can happen - see other calcs)

5. Temporal raster

  • Is the data represented in a dense raster even if some bins have no data, or, if eg. only 3 bins have data in an interval that has 8 bins, will the input only have the 3 bins?
  • Do empty bins or bins with incomplete data need special handling? Will we just divide 0 with 0 and visualize that, or is there a need to signal missing data?
  • In the case of levels: what happens if the field is not a Gauge field, so it looks like events, but it's semantically levels? Can bins propagate and fill values from the past, based on user choice or metadata? Would they be step functions, or some kind of interpolation is needed?
  • Will Lens need to compute bin bounds from timestamps in some cases? If the latter, can the time units be specified? Eg. do leap seconds go into leap hours? Do coarser rasters eg. hours and days need to follow timezone-aware calculations?

6. Temporal normalization with gauge/level

(worth separating the spec for calculations from the spec of what user/config options there may be, and how the user could interact)

  • agreeing with the note "Converting [gauge ie. level values] into a rate would usually cause confusion", what would be the way of ruling out rate calculation on levels (gauge values)?
  • if it's ruled out, is it obligate, or does the user need to have a bypass? (doesn't sound sensible tbh.)
  • is it perhaps an implicit approach where, simply, eg. the bin average is computed for gauges, ie. the user asking for hourly rates silently gets averages? So the UI would need wording that doesn't imply, "we divide your numbers to your bin width" but it says something like "values normalized to bin width"

7. Temporal normalization with counter

  • similar to gauge/level, what would be the exact method? Using an average value of the counter over the bin width, weighted by the length of time a counter was at a specific number? Or last value, middle value (in time) or median value (ie. counters are sorted and mid value is picked)? Or is it bypassed because [...description of what exactly is done in ES that we need to follow]
  • or maybe with counters, the last value is taken, and that's divided with the bin width for normalization? (would sound more sensible from a temporal normalization PoV)

8. Temporal normalization with cumulative sum

  • minor: will it be named as such on the UI? If we call differences "derivation", this would be "integration" :-)
  • cumulative sums need bootstrapping (a first value) or assume some initial value eg. zero - how does this interact with rate calculation?
  • would rate calculation be based on the underlying thing rather than the cumulative sum?
  • or would it be ruled out that the user computes bin width normalized rates? How? Would it need to be overridden sometimes?

9. Formatting

  • focusing on precision comes up, as divisions will generally lead to long fractions - specify N significant digits? N digits for the fraction? Pivot to SI units above/below certain values? Or options for using common suffixes, eg. "5k/h" meaning 5000/h? (there's a linked discuss issue for formatting which isn't a spec)
  • would Lens decide on what such compression to apply, if any, eg. in function of space, or would the user specify it on the UI?
  • are prefixes ever needed? Money may need them, not sure if there's anything else
  • likely taken for granted: other formatting aspects like thousands separator, internationalization etc. - maybe enough to just link to a definitive place that specifies these options, with some remark of user controllability in Lens
  • assuming Lens UI allows formatting specification, do they earmark the specific field, or the specific channel such as X-axis such that switching among variables preserves the formatting? Or both options, with the field-assigned spec having precedence?

monfera (Contributor) commented Oct 1, 2020

... I see #77811 is an attempt to distill the topic of this issue and might answer a couple of questions; it also clearly separates function vs. implementation notes. As that's on the verge of turning into a discussion too, I'm not commenting there.

flash1293 (Contributor) commented Oct 2, 2020

Thanks for the questions, @monfera - some non-authoritative answers:

The biggest issues I see are the bootstrapping of cumulative sum and the handling of partial buckets. IMHO we should split out the bootstrapping and solve it separately in a later iteration. Partial bucket handling is not a new problem for scaling; we already have that issue today - that's why I think we should tackle it separately in a later version as well.

Detailed answers:

  1. Would be great to specify what the time granularities can be

I agree there are a bunch of edge cases, I propose starting out simple with second, minute, hour and day. Multipliers are not super relevant in the first version IMHO. As those can be shown in a dropdown we can handle them using the i18n process in place.

is there a distinction between serial time, eg. absolute hour, vs. relative time, such as hour of day

Is this about things like DST? If yes I propose starting with absolute time (e.g. day -> hour always uses a factor of 1/24)

  2. Edge cases

if eg. 3h is specified, would the first bin, and all subsequent bins except generally the last one have 3 hours in them?

Yes, we only consider this kind of binning (the metadata will only carry a single bucket interval).

is the bin that doesn't have all the required parts, eg. a trailing bin with 2 hours only, using a different divisor, for comparability?
if it's ruled out, how do the time bounds relate to the "outer" temporal filter specification? Only bins that fully fit are returned or used? Or if filter bounds would cut bins in half, is the time domain expanded accordingly? Or is the user only ever able to specify exact bounds?
even if such a clipping effect is ruled out at the ES query level, can pipeline functions, and user-specified upstream calculation steps yield non-snapped bins, eg. beginning or end of series with truncation?

That's a good point.

In Visualize we mitigate this by indicating incomplete buckets in the chart:
(screenshot: a Visualize chart with incomplete buckets indicated)

Also, there is an option to drop partial buckets on the search service level.

One option would be adding partial bucket information to the meta data of the table together with the bucket interval - maybe like this (unit strings just for illustration; it should probably use milliseconds):

meta: {
  interval: "3h",
  firstBucket: "2h",
  lastBucket: "1h"
}

That way time_scale can use the information to apply a different divisor for the first and last bucket.
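A sketch of that divisor selection, assuming the proposed meta shape (with values in milliseconds rather than the illustrative unit strings above):

function divisorForRow(
  rowIndex: number,
  rowCount: number,
  meta: { interval: number; firstBucket?: number; lastBucket?: number }
): number {
  if (rowIndex === 0 && meta.firstBucket != null) return meta.firstBucket;
  if (rowIndex === rowCount - 1 && meta.lastBucket != null) return meta.lastBucket;
  return meta.interval;
}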

No strong opinion; I would probably go with accepting this shortcoming for now (the first and last bucket being cut off is a general problem in Elasticsearch/Kibana).

Can these output="1s" etc. clauses be specified differently for different calculations within a single Lens state? I guess the answer is yes, based on screenshots in #74813 (Advanced)

Yes, this is a metric level setting.

Is it expected that differing calculations lead to differing bins that need to be visualized simultaneously eg. daily and hourly on the same chart? What's the expectation toward that?

This functionality won't influence binning at all; it's just about scaling a numerical value by a factor. Mixing daily and hourly bins on the same chart is already possible.

certain indicators eg. moving averages, differences etc. might alter bin count; eg. in case of differences, the first bin is sometimes dropped (we don't know from what the first value changed -> can't compute difference unless equal value, zero etc. is assumed) - will Lens just use and render the bins that's the temporal intersection of everything calculated?

Yes, I think that's what a user would expect.

if this is the case, then would a rate calculation that's downstream of a time cropping operator (eg. differences) just receive the subset?

Yes

Is it a given that subsequent operators have the same time raster? For example, an upstream, 1hr resolution difference op may pipe data into a downstream temporal normalizer which is on a 3hr cadence, but the first hour is lost due to cropping (if cropping can happen - see other calcs)

I'm not sure I get that correctly - there is no temporal normalizer changing the interval / bin width; it's just about applying a factor to numerical values. The upstream difference op would null the value in the first row of the table and the time scaling op would just skip it.

Is the data represented in a dense raster even if some bins have no data, or if eg. only 3 bins have data in an interval that has 8 bins, then the input will only have the 3 bins?
Do empty bins or bins with incomplete data need special handling? Will we just divide 0 with 0 and visualize that, or is there a need to signal missing data?

Empty intervals can always happen and all functions will have to deal with that - in the case of time scaling it just skips those values. Charts also handle empty values. No rows will be removed from the table based on this.

In the case of levels: what happens if the field is not a Gauge field, so it looks like events, but it's semantically levels? Can bins propagate and fill values from the past, based on user choice or metadata? Would they be step functions, or some kind of interpolation is needed?

For xy charts we are exposing the interpolation capabilities of elastic-charts - we don't have meta information about whether a field is a gauge, event attribute or counter. The user has to configure that (at least for now).

Will Lens need to compute bin bounds from timestamps in some cases? If the latter, can the time units be specified? Eg. do leap seconds go into leap hours? Do coarser rasters eg. hours and days need to follow timezone-aware calculations?

Seems like this is similar to the second point of this comment

agreeing with the note "Converting [gauge ie. level values] into a rate would usually cause confusion", what would be the way of ruling out rate calculation on levels (gauge values)?
if it's ruled out, is it obligate, or does the user need to have a bypass? (doesn't sound sensible tbh.)
is it perhaps an implicit approach where, simply, eg. the bin average is computed for gauges, ie. the user asking for hourly rates silently gets averages? So the UI would need wording that doesn't imply, "we divide your numbers to your bin width" but it says something like "values normalized to bin width"

We don't have this meta information so we can't guide the user. The best thing we can do right now IMHO is a tooltip text. We can improve on that once we have the meta information.

similar to gauge/level, what would be the exact method? Using an average value of the counter over the bin width, weighted by the length of time a counter was at a specific number? Or last value, middle value (in time) or median value (ie. counters are sorted and mid value is picked)? Or is it bypassed because [...description of what exactly is done in ES that we need to follow]
or maybe with counters, the last value is taken, and that's divided with the bin width for normalization? (would sound more sensible from a temporal normalization PoV)

We already do that in TSVB by using the max for each bucket for counters. That's not perfect, but ES is working on providing a more correct approach taking resets within the buckets into account. So it's: max per bucket -> difference to previous bucket -> cap at 0 to accommodate resets -> scale by bucket interval (if configured).
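A sketch of that chain over an array of per-bucket max values (function name and shape are illustrative, not the actual TSVB implementation):

function counterRate(
  maxPerBucket: number[],
  bucketIntervalMs: number,
  targetMs: number
): Array<number | null> {
  return maxPerBucket.map((value, i) => {
    if (i === 0) return null; // no previous bucket to diff against
    const diff = Math.max(0, value - maxPerBucket[i - 1]); // cap at 0 for counter resets
    return diff * (targetMs / bucketIntervalMs); // scale to the target interval
  });
}

// counterRate([10, 25, 5, 12], 60000, 1000) -> [null, 0.25, 0, 0.11666...]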

cumulative sums need bootstrapping (first value) or are assumed some initial value eg. zero?

That's a good point in itself - currently cumulative sum in Elasticsearch/Kibana is always bootstrapped with zero. I would keep it like this for now but think about how we can improve that.

or would it be ruled out that the user computes bin width normalized rates? How? Would it need to be overridden sometimes?

Time scaling by a constant factor for each bucket doesn't make sense for cumulative sum - for now I would disallow using time scaling for cumulative sums. This is possible in the UI because we don't allow nested operations for now.

focusing on precision comes up, as divisions will generally lead to long fractions - specify N significant digits? N digits for the fraction? Pivot to SI units above/below certain values? Or options for using common suffixes, eg. "5k/h" meaning 5000/h? (there's a linked discuss issue for formatting which isn't a spec)
would Lens decide on what such compression to apply, if any, eg. in function of space, or would the user specify it on the UI?

A user has a field formatter configured for each field - we can use that and add a suffix: "per second", "per hour" and so on. It's also overridable from within the Lens UI.
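A sketch of such a suffix wrapper (the formatter interface is simplified here to a plain function; real Kibana field formatters are richer objects):

const SUFFIXES: Record<string, string> = { s: '/s', m: '/min', h: '/h', d: '/d' };

function withTimeScaleSuffix(
  format: (value: unknown) => string,
  targetUnit: 's' | 'm' | 'h' | 'd'
) {
  return (value: unknown) => `${format(value)}${SUFFIXES[targetUnit]}`;
}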

are prefixes ever needed? Money may need them, not sure if there's anything else
likely taken for granted: other formatting aspects like thousands separator, internationalization etc. - maybe enough to just link to a definitive place that specifies these options, with some remark of user controllability in Lens
assuming Lens UI allows formatting specification, do they earmark the specific field, or the specific channel such as X-axis such that switching among variables preserves the formatting? Or both options, with the field-assigned spec having precedence?

Formatting is preserved when switching. Field-assigned formatting information is overwritten by Lens-level formatting information if provided.

@flash1293 flash1293 added the loe:needs-research This issue requires some research before it can be worked on or estimated label Oct 2, 2020
@flash1293 flash1293 removed the loe:needs-research This issue requires some research before it can be worked on or estimated label Oct 12, 2020
@flash1293 flash1293 self-assigned this Oct 29, 2020