From 7d118e9130ad4b3e2d5d0c086b1d0da151ea6c16 Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Fri, 2 Aug 2024 01:51:46 +0000
Subject: [PATCH] build based on 03ccc2d

---
 dev/.documenter-siteinfo.json                |  2 +-
 dev/convenience_methods/index.html           | 12 ++++----
 dev/document_strings/index.html              |  2 +-
 dev/feature_importances/index.html           |  2 +-
 dev/fitting_distributions/index.html         |  2 +-
 dev/form_of_data/index.html                  |  2 +-
 dev/how_to_register/index.html               |  2 +-
 dev/implementing_a_data_front_end/index.html |  2 +-
 dev/index.html                               |  2 +-
 dev/iterative_models/index.html              |  2 +-
 dev/model_wrappers/index.html                |  7 +++--
 dev/objects.inv                              | Bin 2645 -> 2667 bytes
 dev/outlier_detection_models/index.html      |  2 +-
 dev/quick_start_guide/index.html             |  2 +-
 dev/reference/index.html                     | 24 +++++++++---------
 dev/search_index.js                          |  2 +-
 dev/serialization/index.html                 |  2 +-
 dev/static_models/index.html                 |  2 +-
 dev/summary_of_methods/index.html            |  2 +-
 dev/supervised_models/index.html             |  2 +-
 .../index.html                               |  2 +-
 dev/the_fit_method/index.html                |  2 +-
 dev/the_fitted_params_method/index.html      |  2 +-
 dev/the_model_type_hierarchy/index.html      |  2 +-
 dev/the_predict_joint_method/index.html      |  2 +-
 dev/the_predict_method/index.html            |  2 +-
 dev/training_losses/index.html               |  2 +-
 dev/trait_declarations/index.html            |  4 +--
 dev/type_declarations/index.html             |  2 +-
 dev/unsupervised_models/index.html           |  2 +-
 dev/where_to_put_code/index.html             |  2 +-
 31 files changed, 51 insertions(+), 48 deletions(-)

diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index f799195..77581e7 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-07-22T21:08:38","documenter_version":"1.5.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-08-02T01:51:42","documenter_version":"1.5.0"}}
\ No newline at end of file
diff --git a/dev/convenience_methods/index.html b/dev/convenience_methods/index.html
index e85b0c0..0814cdc 100644
--- a/dev/convenience_methods/index.html
+++ b/dev/convenience_methods/index.html
@@ -1,5 +1,5 @@
-Convenience methods · MLJModelInterface

Convenience methods

MLJModelInterface.tableFunction
table(columntable; prototype=nothing)

Convert a named tuple of vectors or tuples, columntable, into a table of the "preferred sink type" of prototype. This is often the type of prototype itself, when prototype is a sink; see the Tables.jl documentation. If prototype is not specified, then a named tuple of vectors is returned.

table(A::AbstractMatrix; names=nothing, prototype=nothing)

Wrap an abstract matrix A as a Tables.jl compatible table with the specified column names (a tuple of symbols). If names are not specified, names=(:x1, :x2, ..., :xn) is used, where n=size(A, 2).

If a prototype is specified, then the matrix is materialized as a table of the preferred sink type of prototype, rather than wrapped. Note that if prototype is not specified, then matrix(table(A)) is essentially a no-op.

source
MLJModelInterface.matrixFunction
matrix(X; transpose=false)

If X isa AbstractMatrix, return X or permutedims(X) if transpose=true. Otherwise if X is a Tables.jl compatible table source, convert X into a Matrix.

source
MLJModelInterface.intFunction
int(x)

The positional integer of the CategoricalString or CategoricalValue x, in the ordering defined by the pool of x. The type of int(x) is the reference type of x.

Not to be confused with x.ref, which is unchanged by reordering of the pool of x, but has the same type.

int(X::CategoricalArray)
+Convenience methods · MLJModelInterface

Convenience methods

MLJModelInterface.tableFunction
table(columntable; prototype=nothing)

Convert a named tuple of vectors or tuples, columntable, into a table of the "preferred sink type" of prototype. This is often the type of prototype itself, when prototype is a sink; see the Tables.jl documentation. If prototype is not specified, then a named tuple of vectors is returned.

table(A::AbstractMatrix; names=nothing, prototype=nothing)

Wrap an abstract matrix A as a Tables.jl compatible table with the specified column names (a tuple of symbols). If names are not specified, names=(:x1, :x2, ..., :xn) is used, where n=size(A, 2).

If a prototype is specified, then the matrix is materialized as a table of the preferred sink type of prototype, rather than wrapped. Note that if prototype is not specified, then matrix(table(A)) is essentially a no-op.

source
MLJModelInterface.matrixFunction
matrix(X; transpose=false)

If X isa AbstractMatrix, return X or permutedims(X) if transpose=true. Otherwise if X is a Tables.jl compatible table source, convert X into a Matrix.

source
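For orientation, here is a minimal round trip between a matrix and a table (a sketch: as noted under Home, MLJBase must be loaded to supply working implementations of these methods):

using MLJBase

A = [1.0 2.0; 3.0 4.0]
Xtable = MLJBase.table(A)   # wrap A as a table with column names :x1, :x2
MLJBase.matrix(Xtable)      # recover a matrix; for a wrapped matrix this is
                            # essentially a no-op (no copying)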
MLJModelInterface.intFunction
int(x)

The positional integer of the CategoricalString or CategoricalValue x, in the ordering defined by the pool of x. The type of int(x) is the reference type of x.

Not to be confused with x.ref, which is unchanged by reordering of the pool of x, but has the same type.

int(X::CategoricalArray)
 int(W::Array{<:CategoricalString})
 int(W::Array{<:CategoricalValue})

Broadcasted versions of int.

julia> v = categorical(["c", "b", "c", "a"])
 4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:
@@ -19,7 +19,7 @@
  0x00000003
  0x00000002
  0x00000003
- 0x00000001

See also: decoder.

source
MLJModelInterface.UnivariateFiniteFunction
UnivariateFinite(
     support,
     probs;
     pool=nothing,
@@ -79,7 +79,7 @@
  UnivariateFinite{Multiclass{4}}(x=>0.727, y=>0.234, z=>0.0391)
  UnivariateFinite{Multiclass{4}}(x=>0.674, y=>0.00535, z=>0.321)
  ⋮
- UnivariateFinite{Multiclass{4}}(x=>0.292, y=>0.339, z=>0.369)

Probability augmentation

If augment=true the provided array is augmented by inserting appropriate elements ahead of those provided, along the last dimension of the array. This means the user only provides probabilities for the classes c2, c3, ..., cn. The class c1 probabilities are chosen so that each UnivariateFinite distribution in the returned array is a bona fide probability distribution.


UnivariateFinite(prob_given_class; pool=nothing, ordered=false)

Construct a discrete univariate distribution whose finite support is the set of keys of the provided dictionary, prob_given_class, and whose values specify the corresponding probabilities.

The type requirements on the keys of the dictionary are the same as the elements of support given above with this exception: if non-categorical elements (raw labels) are used as keys, then pool=... must be specified and cannot be missing.

If the values (probabilities) are arrays instead of scalars, then an abstract array of UnivariateFinite elements is created, with the same size as the array.

source
MLJModelInterface.classesFunction
classes(x)

All the categorical elements with the same pool as x (including x), returned as a list, with an ordering consistent with the pool. Here x has CategoricalValue type, and classes(x) is a vector of the same eltype. Note that x in classes(x) is always true.

Not to be confused with levels(x.pool). See the example below.

julia> v = categorical(["c", "b", "c", "a"])
+ UnivariateFinite{Multiclass{4}}(x=>0.292, y=>0.339, z=>0.369)

Probability augmentation

If augment=true the provided array is augmented by inserting appropriate elements ahead of those provided, along the last dimension of the array. This means the user only provides probabilities for the classes c2, c3, ..., cn. The class c1 probabilities are chosen so that each UnivariateFinite distribution in the returned array is a bona fide probability distribution.


UnivariateFinite(prob_given_class; pool=nothing, ordered=false)

Construct a discrete univariate distribution whose finite support is the set of keys of the provided dictionary, prob_given_class, and whose values specify the corresponding probabilities.

The type requirements on the keys of the dictionary are the same as the elements of support given above with this exception: if non-categorical elements (raw labels) are used as keys, then pool=... must be specified and cannot be missing.

If the values (probabilities) are arrays instead of scalars, then an abstract array of UnivariateFinite elements is created, with the same size as the array.

source
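The following sketch illustrates both constructors (assuming MLJBase and CategoricalArrays are loaded; pool=missing builds a new pool from the raw labels):

using MLJBase, CategoricalArrays

d = UnivariateFinite(["no", "yes"], [0.3, 0.7], pool=missing)
pdf(d, "yes")   # 0.7

# the dictionary constructor, here with categorical keys:
v = categorical(["no", "yes"])
d2 = UnivariateFinite(Dict(v[1] => 0.3, v[2] => 0.7))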
MLJModelInterface.classesFunction
classes(x)

All the categorical elements with the same pool as x (including x), returned as a list, with an ordering consistent with the pool. Here x has CategoricalValue type, and classes(x) is a vector of the same eltype. Note that x in classes(x) is always true.

Not to be confused with levels(x.pool). See the example below.

julia> v = categorical(["c", "b", "c", "a"])
 4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:
  "c"
  "b"
@@ -105,7 +105,7 @@
 3-element Vector{String}:
  "a"
  "b"
- "c"
source
MLJModelInterface.decoderFunction
decoder(x)

Return a callable object for decoding the integer representation of a CategoricalValue sharing the same pool as the CategoricalValue x. Specifically, one has decoder(x)(int(y)) == y for all CategoricalValues y having the same pool as x. One can also call decoder(x) on integer arrays, in which case decoder(x) is broadcast over all elements.

Examples

julia> v = categorical(["c", "b", "c", "a"])
+ "c"
source
MLJModelInterface.decoderFunction
decoder(x)

Return a callable object for decoding the integer representation of a CategoricalValue sharing the same pool as the CategoricalValue x. Specifically, one has decoder(x)(int(y)) == y for all CategoricalValues y having the same pool as x. One can also call decoder(x) on integer arrays, in which case decoder(x) is broadcast over all elements.

Examples

julia> v = categorical(["c", "b", "c", "a"])
 4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:
  "c"
  "b"
@@ -122,7 +122,7 @@
 julia> d = decoder(v[3]);
 
 julia> d(int(v)) == v
-true

Warning:

It is not true that int(d(u)) == u always holds.

See also: int.

source
MLJModelInterface.selectFunction
select(X, r, c)

Select element(s) of a table or matrix at row(s) r and column(s) c. An object of the sink type of X (or a matrix) is returned unless c is a single integer or symbol. In that case a vector is returned, unless r is a single integer, in which case a single element is returned.

See also: selectrows, selectcols.

source
MLJModelInterface.selectrowsFunction
selectrows(X, r)

Select single or multiple rows from a table, abstract vector or matrix X. If X is tabular, the object returned is a table of the preferred sink type of typeof(X), even if only a single row is selected.

If the object is neither a table, an abstract vector, nor a matrix, X is returned and r is ignored.

source
MLJModelInterface.selectcolsFunction
selectcols(X, c)

Select single or multiple columns from a matrix or table X. If c is an abstract vector of integers or symbols, then the object returned is a table of the preferred sink type of typeof(X). If c is a single integer or symbol, then an AbstractVector is returned.

source
MLJModelInterface.selectFunction
select(X, r, c)

Select element(s) of a table or matrix at row(s) r and column(s) c. An object of the sink type of X (or a matrix) is returned unless c is a single integer or symbol. In that case a vector is returned, unless r is a single integer, in which case a single element is returned.

See also: selectrows, selectcols.

source
MLJModelInterface.selectrowsFunction
selectrows(X, r)

Select single or multiple rows from a table, abstract vector or matrix X. If X is tabular, the object returned is a table of the preferred sink type of typeof(X), even if only a single row is selected.

If the object is neither a table, an abstract vector, nor a matrix, X is returned and r is ignored.

source
MLJModelInterface.selectcolsFunction
selectcols(X, c)

Select single or multiple columns from a matrix or table X. If c is an abstract vector of integers or symbols, then the object returned is a table of the preferred sink type of typeof(X). If c is a single integer or symbol, then an AbstractVector is returned.

source
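For example (a sketch, with a named tuple of vectors serving as a minimal column table, and MLJBase loaded to supply the implementations):

using MLJBase

X = (x1 = [1, 2, 3], x2 = [10.0, 20.0, 30.0])
selectrows(X, 1:2)    # table with rows 1 and 2 only
selectcols(X, :x2)    # an AbstractVector
selectcols(X, [:x2])  # a single-column table
select(X, 2, :x1)     # the single element 2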
MLJModelInterface.UnivariateFiniteFunction
UnivariateFinite(
     support,
     probs;
     pool=nothing,
@@ -182,4 +182,4 @@
  UnivariateFinite{Multiclass{4}}(x=>0.727, y=>0.234, z=>0.0391)
  UnivariateFinite{Multiclass{4}}(x=>0.674, y=>0.00535, z=>0.321)
  ⋮
- UnivariateFinite{Multiclass{4}}(x=>0.292, y=>0.339, z=>0.369)

Probability augmentation

If augment=true the provided array is augmented by inserting appropriate elements ahead of those provided, along the last dimension of the array. This means the user only provides probabilities for the classes c2, c3, ..., cn. The class c1 probabilities are chosen so that each UnivariateFinite distribution in the returned array is a bona fide probability distribution.


UnivariateFinite(prob_given_class; pool=nothing, ordered=false)

Construct a discrete univariate distribution whose finite support is the set of keys of the provided dictionary, prob_given_class, and whose values specify the corresponding probabilities.

The type requirements on the keys of the dictionary are the same as the elements of support given above with this exception: if non-categorical elements (raw labels) are used as keys, then pool=... must be specified and cannot be missing.

If the values (probabilities) are arrays instead of scalars, then an abstract array of UnivariateFinite elements is created, with the same size as the array.

source
+ UnivariateFinite{Multiclass{4}}(x=>0.292, y=>0.339, z=>0.369)

Probability augmentation

If augment=true the provided array is augmented by inserting appropriate elements ahead of those provided, along the last dimension of the array. This means the user only provides probabilities for the classes c2, c3, ..., cn. The class c1 probabilities are chosen so that each UnivariateFinite distribution in the returned array is a bona fide probability distribution.


UnivariateFinite(prob_given_class; pool=nothing, ordered=false)

Construct a discrete univariate distribution whose finite support is the set of keys of the provided dictionary, prob_given_class, and whose values specify the corresponding probabilities.

The type requirements on the keys of the dictionary are the same as the elements of support given above with this exception: if non-categorical elements (raw labels) are used as keys, then pool=... must be specified and cannot be missing.

If the values (probabilities) are arrays instead of scalars, then an abstract array of UnivariateFinite elements is created, with the same size as the array.

source
diff --git a/dev/document_strings/index.html b/dev/document_strings/index.html index 7ce2b0f..8a4d976 100644 --- a/dev/document_strings/index.html +++ b/dev/document_strings/index.html @@ -22,4 +22,4 @@ """ FooRegressor -

Variation to augment existing document string

For models that have a native API with separate documentation, one may want to call doc_header(FooRegressor, augment=true) instead. In that case, the output will look like this:

From MLJ, the FooRegressor type can be imported using

FooRegressor = @load FooRegressor pkg=FooRegressorPkg

Construct an instance with default hyper-parameters using the syntax model = FooRegressor(). Provide keyword arguments to override hyper-parameter defaults, as in FooRegressor(a=...).

source

The document string standard

Your document string must include the following components, in order:

+

Variation to augment existing document string

For models that have a native API with separate documentation, one may want to call doc_header(FooRegressor, augment=true) instead. In that case, the output will look like this:

From MLJ, the FooRegressor type can be imported using

FooRegressor = @load FooRegressor pkg=FooRegressorPkg

Construct an instance with default hyper-parameters using the syntax model = FooRegressor(). Provide keyword arguments to override hyper-parameter defaults, as in FooRegressor(a=...).

source

The document string standard

Your document string must include the following components, in order:

diff --git a/dev/feature_importances/index.html b/dev/feature_importances/index.html index 80b3557..46833ee 100644 --- a/dev/feature_importances/index.html +++ b/dev/feature_importances/index.html @@ -1,2 +1,2 @@ -Feature importances · MLJModelInterface

Feature importances

MLJModelInterface.feature_importancesFunction
feature_importances(model::M, fitresult, report)

For a given model of model type M supporting intrinsic feature importances, calculate the feature importances from the model's fitresult and report as an abstract vector of feature::Symbol => importance::Real pairs (e.g., [:gender => 0.23, :height => 0.7, :weight => 0.1]).

New model implementations

The following trait overload is also required: MLJModelInterface.reports_feature_importances(::Type{<:M}) = true

If for some reason a model is sometimes unable to report feature importances, then feature_importances should return all importances as 0.0, as in [:gender => 0.0, :height => 0.0, :weight => 0.0].

source

Trait values can also be set using the metadata_model method, see below.

+Feature importances · MLJModelInterface

Feature importances

MLJModelInterface.feature_importancesFunction
feature_importances(model::M, fitresult, report)

For a given model of model type M supporting intrinsic feature importances, calculate the feature importances from the model's fitresult and report as an abstract vector of feature::Symbol => importance::Real pairs (e.g., [:gender => 0.23, :height => 0.7, :weight => 0.1]).

New model implementations

The following trait overload is also required: MLJModelInterface.reports_feature_importances(::Type{<:M}) = true

If for some reason a model is sometimes unable to report feature importances, then feature_importances should return all importances as 0.0, as in [:gender => 0.0, :height => 0.0, :weight => 0.0].

source

Trait values can also be set using the metadata_model method, see below.
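A hypothetical implementation might look like the following sketch; MyRegressor and the fitresult fields are assumptions for illustration, not part of the API:

import MLJModelInterface as MMI

MMI.reports_feature_importances(::Type{<:MyRegressor}) = true

function MMI.feature_importances(::MyRegressor, fitresult, report)
    # assumes this model's fit stored feature names and coefficients
    # in its fitresult:
    return [f => abs(c) for (f, c) in zip(fitresult.features, fitresult.coefs)]
end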

diff --git a/dev/fitting_distributions/index.html b/dev/fitting_distributions/index.html index 671140e..0b1037c 100644 --- a/dev/fitting_distributions/index.html +++ b/dev/fitting_distributions/index.html @@ -1,2 +1,2 @@ -Models that learn a probability distribution · MLJModelInterface

Models that learn a probability distribution

Experimental

The following API is experimental. It is subject to breaking changes during minor or major releases without warning. Models implementing this interface will not work with MLJBase versions earlier than 0.17.5.

Models that fit a probability distribution to some data should be regarded as Probabilistic <: Supervised models with target y = data and X = nothing.

The predict method should return a single distribution.

A working implementation of a model that fits a UnivariateFinite distribution to some categorical data using Laplace smoothing controlled by a hyperparameter alpha is given here.

+Models that learn a probability distribution · MLJModelInterface

Models that learn a probability distribution

Experimental

The following API is experimental. It is subject to breaking changes during minor or major releases without warning. Models implementing this interface will not work with MLJBase versions earlier than 0.17.5.

Models that fit a probability distribution to some data should be regarded as Probabilistic <: Supervised models with target y = data and X = nothing.

The predict method should return a single distribution.

A working implementation of a model that fits a UnivariateFinite distribution to some categorical data using Laplace smoothing controlled by a hyperparameter alpha is given here.
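To fix ideas, here is a compressed sketch of such a fitter (DiscreteFitter is hypothetical, input checks and trait declarations are omitted, and MLJBase must be loaded for UnivariateFinite to have a working implementation):

import MLJModelInterface as MMI

mutable struct DiscreteFitter <: MMI.Probabilistic
    alpha::Float64    # Laplace smoothing strength
end

function MMI.fit(model::DiscreteFitter, verbosity, X, y)
    # X === nothing by convention; y is a CategoricalVector
    support = MMI.classes(y[1])
    counts = [count(==(c), y) + model.alpha for c in support]
    fitresult = MMI.UnivariateFinite(support, counts ./ sum(counts))
    return fitresult, nothing, NamedTuple()
end

MMI.predict(model::DiscreteFitter, fitresult, Xnew) = fitresult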

diff --git a/dev/form_of_data/index.html b/dev/form_of_data/index.html index c034a1d..2750d54 100644 --- a/dev/form_of_data/index.html +++ b/dev/form_of_data/index.html @@ -1,2 +1,2 @@ -The form of data for fitting and predicting · MLJModelInterface

The form of data for fitting and predicting

The model implementer does not have absolute control over the types of data X, y and Xnew appearing in the fit and predict methods they must implement. Rather, they can specify the scientific type of this data by making appropriate declarations of the traits input_scitype and target_scitype discussed later under Trait declarations.

Important Note. Unless it genuinely makes little sense to do so, the MLJ recommendation is to specify a Table scientific type for X (and hence Xnew) and an AbstractVector scientific type (e.g., AbstractVector{Continuous}) for targets y. Algorithms requiring matrix input can coerce their inputs appropriately; see below.

Additional type coercions

If the core algorithm being wrapped requires data in a different or more specific form, then fit will need to coerce the table into the form desired (and the same coercions applied to X will have to be repeated for Xnew in predict). To assist with common cases, MLJ provides the convenience method MMI.matrix. MMI.matrix(Xtable) has type Matrix{T} where T is the tightest common type of elements of Xtable, and Xtable is any table. (If Xtable is itself just a wrapped matrix, Xtable=Tables.table(A), then A=MMI.matrix(Xtable) will be returned without any copying.)

Alternatively, a more performant option is to implement a data front-end for your model; see Implementing a data front-end.

Other auxiliary methods provided by MLJModelInterface for handling tabular data are: selectrows, selectcols, select and schema (for extracting the size, names and eltypes of a table's columns). See Convenience methods below for details.

Important convention

It is to be understood that the columns of table X correspond to features and the rows to observations. So, for example, the predict method for a linear regression model might look like predict(model, w, Xnew) = MMI.matrix(Xnew)*w, where w is the vector of learned coefficients.

+The form of data for fitting and predicting · MLJModelInterface

The form of data for fitting and predicting

The model implementer does not have absolute control over the types of data X, y and Xnew appearing in the fit and predict methods they must implement. Rather, they can specify the scientific type of this data by making appropriate declarations of the traits input_scitype and target_scitype discussed later under Trait declarations.

Important Note. Unless it genuinely makes little sense to do so, the MLJ recommendation is to specify a Table scientific type for X (and hence Xnew) and an AbstractVector scientific type (e.g., AbstractVector{Continuous}) for targets y. Algorithms requiring matrix input can coerce their inputs appropriately; see below.

Additional type coercions

If the core algorithm being wrapped requires data in a different or more specific form, then fit will need to coerce the table into the form desired (and the same coercions applied to X will have to be repeated for Xnew in predict). To assist with common cases, MLJ provides the convenience method MMI.matrix. MMI.matrix(Xtable) has type Matrix{T} where T is the tightest common type of elements of Xtable, and Xtable is any table. (If Xtable is itself just a wrapped matrix, Xtable=Tables.table(A), then A=MMI.matrix(Xtable) will be returned without any copying.)

Alternatively, a more performant option is to implement a data front-end for your model; see Implementing a data front-end.

Other auxiliary methods provided by MLJModelInterface for handling tabular data are: selectrows, selectcols, select and schema (for extracting the size, names and eltypes of a table's columns). See Convenience methods below for details.

Important convention

It is to be understood that the columns of table X correspond to features and the rows to observations. So, for example, the predict method for a linear regression model might look like predict(model, w, Xnew) = MMI.matrix(Xnew)*w, where w is the vector of learned coefficients.
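Putting the convention and the coercion together, a bare-bones least-squares regressor might be sketched like this (MyLinearRegressor is hypothetical):

import MLJModelInterface as MMI

function MMI.fit(model::MyLinearRegressor, verbosity, X, y)
    Xmatrix = MMI.matrix(X)   # coerce the table to a Matrix
    w = Xmatrix \ y           # least-squares coefficients
    return w, nothing, NamedTuple()
end

MMI.predict(model::MyLinearRegressor, w, Xnew) = MMI.matrix(Xnew) * w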

diff --git a/dev/how_to_register/index.html b/dev/how_to_register/index.html index 32415ba..a12b894 100644 --- a/dev/how_to_register/index.html +++ b/dev/how_to_register/index.html @@ -1,2 +1,2 @@ -How to add models to the MLJ Model Registry · MLJModelInterface

How to add models to the MLJ model registry

The MLJ model registry is located in the MLJModels.jl repository. To add a model, you need to follow these steps:

  • Ensure your model conforms to the interface defined above

  • Raise an issue at MLJModels.jl and point out where the MLJ-interface implementation is, e.g. by providing a link to the code.

  • An administrator will then review your implementation and work with you to add the model to the registry

+How to add models to the MLJ Model Registry · MLJModelInterface

How to add models to the MLJ model registry

The MLJ model registry is located in the MLJModels.jl repository. To add a model, you need to follow these steps:

  • Ensure your model conforms to the interface defined above

  • Raise an issue at MLJModels.jl and point out where the MLJ-interface implementation is, e.g. by providing a link to the code.

  • An administrator will then review your implementation and work with you to add the model to the registry

diff --git a/dev/implementing_a_data_front_end/index.html b/dev/implementing_a_data_front_end/index.html index 0c5c250..8956871 100644 --- a/dev/implementing_a_data_front_end/index.html +++ b/dev/implementing_a_data_front_end/index.html @@ -15,4 +15,4 @@ # for predict: MMI.reformat(::SomeSupervised, X) = (MMI.matrix(X)',) -MMI.selectrows(::SomeSupervised, I, Xmatrix) = (view(Xmatrix, :, I),)

With these additions, fit and predict are refactored, so that X and Xnew represent matrices with features as rows.

+MMI.selectrows(::SomeSupervised, I, Xmatrix) = (view(Xmatrix, :, I),)

With these additions, fit and predict are refactored, so that X and Xnew represent matrices with features as rows.
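For instance, continuing the SomeSupervised example, the refactored methods might be sketched as follows (the least-squares computation is only a stand-in for the real algorithm):

import MLJModelInterface as MMI

function MMI.fit(model::SomeSupervised, verbosity, Xmatrix, y)
    # Xmatrix has features as rows and observations as columns
    coefs = (Xmatrix * Xmatrix') \ (Xmatrix * y)
    return coefs, nothing, NamedTuple()
end

MMI.predict(model::SomeSupervised, coefs, Xmatrix) = vec(coefs' * Xmatrix)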

diff --git a/dev/index.html b/dev/index.html index 18e3abc..8377be4 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -Home · MLJModelInterface

Adding Models for General Use

The machine learning tools provided by MLJ can be applied to the models in any package that imports MLJModelInterface and implements the API defined there, as outlined in this document.

Tip

This is a reference document, which has become rather sprawling over the evolution of the MLJ project. We recommend starting with Quick start guide, which covers the main points relevant to most new model implementations. Most topics are only detailed for Supervised models, so if you are implementing another kind of model, you may still need to refer to the Supervised models section.

Interface code can be hosted by the package providing the core machine learning algorithm, or by a stand-alone "interface-only" package, using the template MLJExampleInterface.jl (see Where to place code implementing new models below). For a list of packages implementing the MLJ model API (natively, and in interface packages) see here.

Important

MLJModelInterface is a very lightweight package allowing you to define your model's interface, but it does not provide the functionality required to use or test that interface; this requires MLJBase. So, while you only need to add MLJModelInterface to your project's [deps], for testing purposes you need to add MLJBase to your project's [extras] and [targets]. In testing, simply use MLJBase in place of MLJModelInterface.

It is assumed the reader has read the Getting Started section of the MLJ manual. To implement the API described here, some familiarity with the following packages is also helpful:

  • ScientificTypes.jl (for specifying model requirements of data)

  • Distributions.jl (for probabilistic predictions)

  • CategoricalArrays.jl (essential if you are implementing a model handling data of Multiclass or OrderedFactor scitype; familiarity with CategoricalPool objects required)

  • Tables.jl (if your algorithm needs input data in a novel format).

In MLJ, the basic interface exposed to the user, built atop the model interface described here, is the machine interface. After a first reading of this document, the reader may wish to refer to MLJ Internals for context.

+Home · MLJModelInterface

Adding Models for General Use

The machine learning tools provided by MLJ can be applied to the models in any package that imports MLJModelInterface and implements the API defined there, as outlined in this document.

Tip

This is a reference document, which has become rather sprawling over the evolution of the MLJ project. We recommend starting with Quick start guide, which covers the main points relevant to most new model implementations. Most topics are only detailed for Supervised models, so if you are implementing another kind of model, you may still need to refer to the Supervised models section.

Interface code can be hosted by the package providing the core machine learning algorithm, or by a stand-alone "interface-only" package, using the template MLJExampleInterface.jl (see Where to place code implementing new models below). For a list of packages implementing the MLJ model API (natively, and in interface packages) see here.

Important

MLJModelInterface is a very lightweight package allowing you to define your model's interface, but it does not provide the functionality required to use or test that interface; this requires MLJBase. So, while you only need to add MLJModelInterface to your project's [deps], for testing purposes you need to add MLJBase to your project's [extras] and [targets]. In testing, simply use MLJBase in place of MLJModelInterface.

It is assumed the reader has read the Getting Started section of the MLJ manual. To implement the API described here, some familiarity with the following packages is also helpful:

  • ScientificTypes.jl (for specifying model requirements of data)

  • Distributions.jl (for probabilistic predictions)

  • CategoricalArrays.jl (essential if you are implementing a model handling data of Multiclass or OrderedFactor scitype; familiarity with CategoricalPool objects required)

  • Tables.jl (if your algorithm needs input data in a novel format).

In MLJ, the basic interface exposed to the user, built atop the model interface described here, is the machine interface. After a first reading of this document, the reader may wish to refer to MLJ Internals for context.

diff --git a/dev/iterative_models/index.html b/dev/iterative_models/index.html index 6ec68fc..bf72342 100644 --- a/dev/iterative_models/index.html +++ b/dev/iterative_models/index.html @@ -2,4 +2,4 @@ Iterative models and the update! method · MLJModelInterface

Iterative models and the update! method

An update method may be optionally overloaded to enable a call by MLJ to retrain a model (on the same training data) to avoid repeating computations unnecessarily.

MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y) -> fitresult, cache, report
MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y, w=nothing) -> fitresult, cache, report

Here the second variation applies if SomeSupervisedModel supports sample weights.

If an MLJ Machine is being fit! and it is not the first time, then update is called instead of fit, unless the machine fit! has been called with a new rows keyword argument. However, MLJModelInterface defines a fallback for update which just calls fit. For context, see the Internals section of the MLJ manual.

Learning networks wrapped as models constitute one use case (see the Composing Models section of the MLJ manual): one would like each component model to be retrained only when hyperparameter changes "upstream" make this necessary. In this case, MLJ provides a fallback (specifically, the fallback is for any subtype of SupervisedNetwork = Union{DeterministicNetwork,ProbabilisticNetwork}). A second, more generally relevant use case is iterative models, where calls to increase the number of iterations only restart the iterative procedure if other hyperparameters have also changed. (A useful method for inspecting model changes in such cases is MLJModelInterface.is_same_except.) For an example, see MLJEnsembles.jl.

A third use case is to avoid repeating the time-consuming preprocessing of X and y required by some models.

If the argument fitresult (returned by a preceding call to fit) is not sufficient for performing an update, the author can arrange for fit to output in its cache return value any additional information required (for example, pre-processed versions of X and y), as this is also passed as an argument to the update method.

MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y) -> fitresult, cache, report
MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y, w=nothing) -> fitresult, cache, report

Here the second variation applies if SomeSupervisedModel supports sample weights.

If an MLJ Machine is being fit! and it is not the first time, then update is called instead of fit, unless the machine fit! has been called with a new rows keyword argument. However, MLJModelInterface defines a fallback for update which just calls fit. For context, see the Internals section of the MLJ manual.

Learning networks wrapped as models constitute one use case (see the Composing Models section of the MLJ manual): one would like each component model to be retrained only when hyperparameter changes "upstream" make this necessary. In this case, MLJ provides a fallback (specifically, the fallback is for any subtype of SupervisedNetwork = Union{DeterministicNetwork,ProbabilisticNetwork}). A second, more generally relevant use case is iterative models, where calls to increase the number of iterations only restart the iterative procedure if other hyperparameters have also changed. (A useful method for inspecting model changes in such cases is MLJModelInterface.is_same_except.) For an example, see MLJEnsembles.jl.

A third use case is to avoid repeating the time-consuming preprocessing of X and y required by some models.

If the argument fitresult (returned by a preceding call to fit) is not sufficient for performing an update, the author can arrange for fit to output in its cache return value any additional information required (for example, pre-processed versions of X and y), as this is also passed as an argument to the update method.
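By way of illustration, an update for a hypothetical iterative model with an n_iterations hyper-parameter might look like the sketch below; train! is a hypothetical helper, and fit is assumed to return the matching (model = ...,) cache:

import MLJModelInterface as MMI

function MMI.update(model::MyIterativeModel, verbosity,
                    old_fitresult, old_cache, X, y)
    old_model = old_cache.model
    if MMI.is_same_except(model, old_model, :n_iterations) &&
            model.n_iterations >= old_model.n_iterations
        # warm restart: perform only the extra iterations
        extra = model.n_iterations - old_model.n_iterations
        fitresult = train!(old_fitresult, X, y, extra)
        return fitresult, (model = deepcopy(model),), NamedTuple()
    end
    return MMI.fit(model, verbosity, X, y)   # cold restart
end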

diff --git a/dev/model_wrappers/index.html b/dev/model_wrappers/index.html index 3db2afd..6bdb420 100644 --- a/dev/model_wrappers/index.html +++ b/dev/model_wrappers/index.html @@ -1,4 +1,7 @@ -Model wrappers · MLJModelInterface

Model wrappers

A model that can have one or more other models as hyper-parameters should overload the trait is_wrapper, as in this example:

MLJModelInterface.is_wrapper(::Type{<:MyWrapper}) = true

The constructor for such a model does not need to provide default values for the model-valued hyper-parameters. If only a single model is wrapped, then the hyper-parameter should have the name :model and this should be an optional positional argument, as well as a keyword argument.

For example, EnsembleModel is a model wrapper, and we can construct an instance like this:

using MLJ
+Model wrappers · MLJModelInterface

Model wrappers

A model that can have one or more other models as hyper-parameters should overload the trait is_wrapper, as in this example:

MLJModelInterface.is_wrapper(::Type{<:MyWrapper}) = true

The constructor for such a model does not need to provide default values for the model-valued hyper-parameters. If only a single model is wrapped, then the hyper-parameter should have the name :model and this should be an optional positional argument, as well as a keyword argument.

For example, EnsembleModel is a model wrapper, and we can construct an instance like this:

using MLJ
 atom = ConstantClassifier()
-EnsembleModel(atom, n=100)

but also like this:

EnsembleModel(model=atom, n=100)

This is the only case in MLJ where positional arguments in a model constructor are allowed.

+EnsembleModel(atom, n=100)

but also like this:

EnsembleModel(model=atom, n=100)

This is the only case in MLJ where positional arguments in a model constructor are allowed.

Handling generic constructors

Model wrappers frequently have a public-facing constructor whose name differs from that of the model type constructed. For example, TunedModel(model, ...) is a constructor that will construct either an instance of DeterministicTunedModel or ProbabilisticTunedModel, depending on the type of model. In such cases it is necessary to overload the constructor trait, which here looks like this:

MLJModelInterface.constructor(::Type{<:Union{
+    DeterministicTunedModel,
+    ProbabilisticTunedModel,
+}}) = TunedModel

This allows the MLJ Model Registry to correctly associate model metadata with the constructor, rather than with the (private) types.

diff --git a/dev/objects.inv b/dev/objects.inv
index 45d3cf5d6537e7cc4daa1ae1bacfca3432109b92..e26dd8ded8cc63aa81636d4ed831e232e67c6316 100644
GIT binary patch
delta 1871
delta 1849

diff --git a/dev/outlier_detection_models/index.html b/dev/outlier_detection_models/index.html
index 6cb30db..0ed7bd3 100644
--- a/dev/outlier_detection_models/index.html
+++ b/dev/outlier_detection_models/index.html
@@ -1,2 +1,2 @@
-Outlier detection models · MLJModelInterface

Outlier detection models

Experimental API

The Outlier Detection API is experimental and may change in future releases of MLJ.

Outlier detection or anomaly detection is predominantly an unsupervised learning task, transforming each data point to an outlier score quantifying the level of "outlierness". However, because detectors can also be semi-supervised or supervised, MLJModelInterface provides a collection of abstract model types that capture the different characteristics, namely:

  • MLJModelInterface.SupervisedDetector
  • MLJModelInterface.UnsupervisedDetector
  • MLJModelInterface.ProbabilisticSupervisedDetector
  • MLJModelInterface.ProbabilisticUnsupervisedDetector
  • MLJModelInterface.DeterministicSupervisedDetector
  • MLJModelInterface.DeterministicUnsupervisedDetector

All outlier detection models subtyping from any of the above supertypes have to implement MLJModelInterface.fit(model, verbosity, X, [y]). Models subtyping from either SupervisedDetector or UnsupervisedDetector have to implement MLJModelInterface.transform(model, fitresult, Xnew), which should return the raw outlier scores (<:Continuous) of all points in Xnew.

Probabilistic and deterministic outlier detection models provide an additional option to predict a normalized estimate of outlierness or a concrete outlier label, and thus enable evaluation of those models. All corresponding supertypes have to implement (in addition to the previously described fit and transform) MLJModelInterface.predict(model, fitresult, Xnew), with deterministic predictions conforming to OrderedFactor{2}, where the first class is the normal class and the second the outlier class. Probabilistic models predict a UnivariateFinite estimate of those classes.

It is typically possible to automatically convert an outlier detection model to a probabilistic or deterministic model if the training scores are stored in the model's report. The OutlierDetection.jl package mentioned below, for example, stores the training scores under the scores key in the report returned from fit. It is then possible to use model wrappers such as OutlierDetection.ProbabilisticDetector to automatically convert a model to enable predictions of the required output type.

External outlier detection packages

OutlierDetection.jl provides an opinionated interface on top of MLJ for outlier detection models, standardizing things like class names, dealing with training scores, score normalization and more.

+Outlier detection models · MLJModelInterface

Outlier detection models

Experimental API

The Outlier Detection API is experimental and may change in future releases of MLJ.

Outlier detection or anomaly detection is predominantly an unsupervised learning task, transforming each data point to an outlier score quantifying the level of "outlierness". However, because detectors can also be semi-supervised or supervised, MLJModelInterface provides a collection of abstract model types that capture the different characteristics, namely:

  • MLJModelInterface.SupervisedDetector
  • MLJModelInterface.UnsupervisedDetector
  • MLJModelInterface.ProbabilisticSupervisedDetector
  • MLJModelInterface.ProbabilisticUnsupervisedDetector
  • MLJModelInterface.DeterministicSupervisedDetector
  • MLJModelInterface.DeterministicUnsupervisedDetector

All outlier detection models subtyping from any of the above supertypes have to implement MLJModelInterface.fit(model, verbosity, X, [y]). Models subtyping from either SupervisedDetector or UnsupervisedDetector have to implement MLJModelInterface.transform(model, fitresult, Xnew), which should return the raw outlier scores (<:Continuous) of all points in Xnew.

Probabilistic and deterministic outlier detection models provide an additional option to predict a normalized estimate of outlierness or a concrete outlier label, and thus enable evaluation of those models. All corresponding supertypes have to implement (in addition to the previously described fit and transform) MLJModelInterface.predict(model, fitresult, Xnew), with deterministic predictions conforming to OrderedFactor{2}, where the first class is the normal class and the second the outlier class. Probabilistic models predict a UnivariateFinite estimate of those classes.

It is typically possible to automatically convert an outlier detection model to a probabilistic or deterministic model if the training scores are stored in the model's report. The OutlierDetection.jl package mentioned below, for example, stores the training scores under the scores key in the report returned from fit. It is then possible to use model wrappers such as OutlierDetection.ProbabilisticDetector to automatically convert a model to enable predictions of the required output type.
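For concreteness, a toy unsupervised detector might be sketched as follows (ZScoreDetector is hypothetical; note the training scores stored under the scores key of the report):

import MLJModelInterface as MMI
using Statistics

mutable struct ZScoreDetector <: MMI.UnsupervisedDetector end

zscores(μ, σ, Xmatrix) = vec(maximum(abs.((Xmatrix .- μ) ./ σ), dims=2))

function MMI.fit(::ZScoreDetector, verbosity, X)
    Xmatrix = MMI.matrix(X)
    μ, σ = mean(Xmatrix, dims=1), std(Xmatrix, dims=1)
    return (μ = μ, σ = σ), nothing, (scores = zscores(μ, σ, Xmatrix),)
end

MMI.transform(::ZScoreDetector, fitresult, Xnew) =
    zscores(fitresult.μ, fitresult.σ, MMI.matrix(Xnew))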

External outlier detection packages

OutlierDetection.jl provides an opinionated interface on top of MLJ for outlier detection models, standardizing things like class names, dealing with training scores, score normalization and more.

diff --git a/dev/quick_start_guide/index.html b/dev/quick_start_guide/index.html index 420011f..86ac44f 100644 --- a/dev/quick_start_guide/index.html +++ b/dev/quick_start_guide/index.html @@ -47,4 +47,4 @@ supports_weights = false, # does the model support sample weights? descr = "A short description of your model" load_path = "YourPackage.SubModuleContainingModelStructDefinition.YourModel1" -)

Important. Do not omit the load_path specification. Without a correct load_path MLJ will be unable to import your model.

Examples:

Adding a model to the model registry

See How to add models to the MLJ model registry.

+)

Important. Do not omit the load_path specification. Without a correct load_path MLJ will be unable to import your model.

Examples:

Adding a model to the model registry

See How to add models to the MLJ model registry.

diff --git a/dev/reference/index.html b/dev/reference/index.html index 87ffe45..25a5e65 100644 --- a/dev/reference/index.html +++ b/dev/reference/index.html @@ -1,5 +1,5 @@ -Reference · MLJModelInterface

Reference

MLJModelInterface.UnivariateFiniteFunction
UnivariateFinite(
+Reference · MLJModelInterface

Reference

MLJModelInterface.UnivariateFiniteFunction
UnivariateFinite(
     support,
     probs;
     pool=nothing,
@@ -59,7 +59,7 @@
  UnivariateFinite{Multiclass{4}}(x=>0.727, y=>0.234, z=>0.0391)
  UnivariateFinite{Multiclass{4}}(x=>0.674, y=>0.00535, z=>0.321)
  ⋮
- UnivariateFinite{Multiclass{4}}(x=>0.292, y=>0.339, z=>0.369)

Probability augmentation

If augment=true the provided array is augmented by inserting appropriate elements ahead of those provided, along the last dimension of the array. This means the user only provides probabilities for the classes c2, c3, ..., cn. The class c1 probabilities are chosen so that each UnivariateFinite distribution in the returned array is a bona fide probability distribution.


UnivariateFinite(prob_given_class; pool=nothing, ordered=false)

Construct a discrete univariate distribution whose finite support is the set of keys of the provided dictionary, prob_given_class, and whose values specify the corresponding probabilities.

The type requirements on the keys of the dictionary are the same as the elements of support given above with this exception: if non-categorical elements (raw labels) are used as keys, then pool=... must be specified and cannot be missing.

If the values (probabilities) are arrays instead of scalars, then an abstract array of UnivariateFinite elements is created, with the same size as the array.

source
MLJModelInterface.classesMethod
classes(x)

All the categorical elements with the same pool as x (including x), returned as a list, with an ordering consistent with the pool. Here x has CategoricalValue type, and classes(x) is a vector of the same eltype. Note that x in classes(x) is always true.

Not to be confused with levels(x.pool). See the example below.

julia> v = categorical(["c", "b", "c", "a"])
+ UnivariateFinite{Multiclass{4}}(x=>0.292, y=>0.339, z=>0.369)

Probability augmentation

If augment=true the provided array is augmented by inserting appropriate elements ahead of those provided, along the last dimension of the array. This means the user only provides probabilities for the classes c2, c3, ..., cn. The class c1 probabilities are chosen so that each UnivariateFinite distribution in the returned array is a bona fide probability distribution.


UnivariateFinite(prob_given_class; pool=nothing, ordered=false)

Construct a discrete univariate distribution whose finite support is the set of keys of the provided dictionary, prob_given_class, and whose values specify the corresponding probabilities.

The type requirements on the keys of the dictionary are the same as the elements of support given above with this exception: if non-categorical elements (raw labels) are used as keys, then pool=... must be specified and cannot be missing.

If the values (probabilities) are arrays instead of scalars, then an abstract array of UnivariateFinite elements is created, with the same size as the array.

source
MLJModelInterface.classesMethod
classes(x)

All the categorical elements with the same pool as x (including x), returned as a list, with an ordering consistent with the pool. Here x has CategoricalValue type, and classes(x) is a vector of the same eltype. Note that x in classes(x) is always true.

Not to be confused with levels(x.pool). See the example below.

julia> v = categorical(["c", "b", "c", "a"])
 4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:
  "c"
  "b"
@@ -85,7 +85,7 @@
 3-element Vector{String}:
  "a"
  "b"
- "c"
source
MLJModelInterface.decoderMethod
decoder(x)

Return a callable object for decoding the integer representation of a CategoricalValue sharing the same pool the CategoricalValue x. Specifically, one has decoder(x)(int(y)) == y for all CategoricalValues y having the same pool as x. One can also call decoder(x) on integer arrays, in which case decoder(x) is broadcast over all elements.

Examples

julia> v = categorical(["c", "b", "c", "a"])
+ "c"
source
MLJModelInterface.decoderMethod
decoder(x)

Return a callable object for decoding the integer representation of a CategoricalValue sharing the same pool the CategoricalValue x. Specifically, one has decoder(x)(int(y)) == y for all CategoricalValues y having the same pool as x. One can also call decoder(x) on integer arrays, in which case decoder(x) is broadcast over all elements.

Examples

julia> v = categorical(["c", "b", "c", "a"])
 4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:
  "c"
  "b"
@@ -102,7 +102,7 @@
 julia> d = decoder(v[3]);
 
 julia> d(int(v)) == v
-true

Warning:

It is not true that int(d(u)) == u always holds.

See also: int.

source
MLJModelInterface.fitFunction
MLJModelInterface.fit(model, verbosity, data...) -> fitresult, cache, report

All models must implement a fit method. Here data is the output of reformat on user-provided data, or some resampling thereof. The fallback of reformat returns the user-provided data (e.g., a table).

source
MLJModelInterface.fitted_paramsMethod
fitted_params(model, fitresult) -> human_readable_fitresult # named_tuple

Models may overload fitted_params. The fallback returns (fitresult=fitresult,).

Other training-related outcomes should be returned in the report part of the tuple returned by fit.

source
MLJModelInterface.intMethod
int(x)

The positional integer of the CategoricalString or CategoricalValue x, in the ordering defined by the pool of x. The type of int(x) is the reference type of x.

Not to be confused with x.ref, which is unchanged by reordering of the pool of x, but has the same type.

int(X::CategoricalArray)
+true

Warning:

It is not true that int(d(u)) == u always holds.

See also: int.

source
MLJModelInterface.fitFunction
MLJModelInterface.fit(model, verbosity, data...) -> fitresult, cache, report

All models must implement a fit method. Here data is the output of reformat on user-provided data, or some resampling thereof. The fallback of reformat returns the user-provided data (e.g., a table).

source
MLJModelInterface.fitted_paramsMethod
fitted_params(model, fitresult) -> human_readable_fitresult # named_tuple

Models may overload fitted_params. The fallback returns (fitresult=fitresult,).

Other training-related outcomes should be returned in the report part of the tuple returned by fit.

source
MLJModelInterface.intMethod
int(x)

The positional integer of the CategoricalString or CategoricalValue x, in the ordering defined by the pool of x. The type of int(x) is the reference type of x.

Not to be confused with x.ref, which is unchanged by reordering of the pool of x, but has the same type.

int(X::CategoricalArray)
 int(W::Array{<:CategoricalString})
 int(W::Array{<:CategoricalValue})

Broadcasted versions of int.

julia> v = categorical(["c", "b", "c", "a"])
 4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:
@@ -122,23 +122,23 @@
  0x00000003
  0x00000002
  0x00000003
- 0x00000001

See also: decoder.

source
MLJModelInterface.is_same_exceptMethod
is_same_except(m1, m2, exceptions::Symbol...; deep_properties=Symbol[])

If both m1 and m2 are of MLJType, return true if the following conditions all hold, and false otherwise:

  • typeof(m1) === typeof(m2)

  • propertynames(m1) === propertynames(m2)

  • with the exception of properties listed as exceptions or bound to an AbstractRNG, each pair of corresponding property values is either "equal" or both undefined. (If a property appears as a propertyname but not a fieldname, it is deemed as always defined.)

The meaning of "equal" depends on the type of the property value:

  • values that are themselves of MLJType are "equal" if they are equal in the sense of is_same_except with no exceptions.

  • values that are not of MLJType are "equal" if they are ==.

In the special case of a "deep" property, "equal" has a different meaning; see deep_properties for details.

If m1 or m2 are not MLJType objects, then return ==(m1, m2).

source
MLJModelInterface.isrepresentedMethod
isrepresented(object::MLJType, objects)

Test if object has a representative in the iterable objects. This is a weaker requirement than object in objects.

Here we say m1 represents m2 if is_same_except(m1, m2) is true.

source
MLJModelInterface.matrixMethod
matrix(X; transpose=false)

If X isa AbstractMatrix, return X or permutedims(X) if transpose=true. Otherwise if X is a Tables.jl compatible table source, convert X into a Matrix.

source
MLJModelInterface.metadata_modelMethod
metadata_model(T; args...)

Helper function to write the metadata for a model T.

Keywords

  • input_scitype=Unknown: allowed scientific type of the input data
  • target_scitype=Unknown: allowed scitype of the target (supervised)
  • output_scitype=Unknown: allowed scitype of the transformed data (unsupervised)
  • supports_weights=false: whether the model supports sample weights
  • supports_class_weights=false: whether the model supports class weights
  • load_path="unknown": where the model is (usually PackageName.ModelName)
  • human_name=nothing: human name of the model
  • supports_training_losses=nothing: whether the (necessarily iterative) model can report training losses
  • reports_feature_importances=nothing: whether the model reports feature importances

Example

metadata_model(KNNRegressor,
+ 0x00000001

See also: decoder.

source
MLJModelInterface.is_same_exceptMethod
is_same_except(m1, m2, exceptions::Symbol...; deep_properties=Symbol[])

If both m1 and m2 are of MLJType, return true if the following conditions all hold, and false otherwise:

  • typeof(m1) === typeof(m2)

  • propertynames(m1) === propertynames(m2)

  • with the exception of properties listed as exceptions or bound to an AbstractRNG, each pair of corresponding property values is either "equal" or both undefined. (If a property appears as a propertyname but not a fieldname, it is deemed as always defined.)

The meaning of "equal" depends on the type of the property value:

  • values that are themselves of MLJType are "equal" if they are equal in the sense of is_same_except with no exceptions.

  • values that are not of MLJType are "equal" if they are ==.

In the special case of a "deep" property, "equal" has a different meaning; see deep_properties for details.

If m1 or m2 are not MLJType objects, then return ==(m1, m2).

source
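For example, with a hypothetical model type (MLJ model types are MLJType subtypes, so the method applies):

import MLJModelInterface as MMI

mutable struct Lasso <: MMI.Deterministic
    lambda::Float64
    maxiter::Int
end

m1 = Lasso(0.1, 100)
m2 = Lasso(0.1, 200)

MMI.is_same_except(m1, m2)            # false: maxiter differs
MMI.is_same_except(m1, m2, :maxiter)  # true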
MLJModelInterface.isrepresentedMethod
isrepresented(object::MLJType, objects)

Test if object has a representative in the iterable objects. This is a weaker requirement than object in objects.

Here we say m1 represents m2 if is_same_except(m1, m2) is true.

source
MLJModelInterface.matrixMethod
matrix(X; transpose=false)

If X isa AbstractMatrix, return X or permutedims(X) if transpose=true. Otherwise if X is a Tables.jl compatible table source, convert X into a Matrix.

source
MLJModelInterface.metadata_modelMethod
metadata_model(T; args...)

Helper function to write the metadata for a model T.

Keywords

  • input_scitype=Unknown: allowed scientific type of the input data
  • target_scitype=Unknown: allowed scitype of the target (supervised)
  • output_scitype=Unknown: allowed scitype of the transformed data (unsupervised)
  • supports_weights=false: whether the model supports sample weights
  • supports_class_weights=false: whether the model supports class weights
  • load_path="unknown": where the model is (usually PackageName.ModelName)
  • human_name=nothing: human name of the model
  • supports_training_losses=nothing: whether the (necessarily iterative) model can report training losses
  • reports_feature_importances=nothing: whether the model reports feature importances

Example

metadata_model(KNNRegressor,
    input_scitype=MLJModelInterface.Table(MLJModelInterface.Continuous),
    target_scitype=AbstractVector{MLJModelInterface.Continuous},
    supports_weights=true,
    load_path="NearestNeighbors.KNNRegressor")
source
MLJModelInterface.metadata_pkgMethod
metadata_pkg(T; args...)

Helper function to write the metadata for a package providing model T. Use it with broadcasting to define the metadata of the package providing a series of models.

Keywords

  • package_name="unknown": package name
  • package_uuid="unknown": package UUID
  • package_url="unknown": package URL
  • is_pure_julia=missing: whether the package is pure Julia
  • package_license="unknown": package license
  • is_wrapper=false: whether the package is a wrapper

Example

metadata_pkg.((KNNRegressor, KNNClassifier),
     package_name="NearestNeighbors",
     package_uuid="b8a86587-4115-5ab1-83bc-aa920d37bbce",
     package_url="https://github.com/KristofferC/NearestNeighbors.jl",
     is_pure_julia=true,
     package_license="MIT",
-    is_wrapper=false)
source
MLJModelInterface.paramsMethod
params(m::MLJType)

Recursively convert any transparent object m into a named tuple, keyed on the fields of m. An object is transparent if MLJModelInterface.istransparent(m) == true. The named tuple is possibly nested because params is recursively applied to the field values, which themselves might be transparent.

Most objects of type MLJType are transparent.

julia> params(EnsembleModel(model=ConstantClassifier()))
(model = (target_type = Bool,),
 weights = Float64[],
 bagging_fraction = 0.8,
 rng_seed = 0,
 n = 100,
 parallel = true,)
source
MLJModelInterface.predictFunction
predict(model, fitresult, new_data...)

Supervised and SupervisedAnnotator models must implement the predict operation. Here new_data is the output of reformat called on user-specified data.
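
A minimal sketch of an implementation, for a hypothetical deterministic regressor with no reformat overload (so that new_data is the user-supplied table), assuming fitresult is the learned coefficient vector:

import MLJModelInterface as MMI

function MMI.predict(model::MyRegressor, fitresult, Xnew)
    # convert the table to a matrix and apply the learned coefficients:
    return MMI.matrix(Xnew) * fitresult
end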

source
MLJModelInterface.reformatMethod
MLJModelInterface.reformat(model, args...) -> data

Models optionally overload reformat to define transformations of user-supplied data into some model-specific representation (e.g., from a table to a matrix). When implemented, the MLJ user can avoid repeating such transformations unnecessarily, and can additionally make use of more efficient row subsampling, which is then based on the model-specific representation of the data, rather than the user representation. When reformat is overloaded, selectrows(::Model, ...) must be as well (see selectrows). Furthermore, the model fit method(s), and operations such as predict and transform, must be refactored to act on the model-specific representations of the data.

To implement the reformat data front-end for a model, refer to "Implementing a data front-end" in the MLJ manual.
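
A sketch of a table-to-matrix data front-end for a hypothetical supervised model type MyRegressor (the matching selectrows overload is sketched under selectrows below):

import MLJModelInterface as MMI

# fit and predict then consume the matrix representation directly:
MMI.reformat(::MyRegressor, X, y) = (MMI.matrix(X), y)
MMI.reformat(::MyRegressor, X) = (MMI.matrix(X),)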

source
MLJModelInterface.scitypeMethod
scitype(X)

The scientific type (interpretation) of X, distinct from its machine type.

Examples

julia> scitype(3.14)
Continuous

julia> scitype([1, 2, missing])
AbstractVector{Union{Missing, Count}}

julia> X = (gender = categorical(['M', 'M', 'F', 'M', 'F']),
            ndevices = [1, 3, 2, 3, 2]);

julia> scitype(X)
Table{Union{AbstractVector{Count}, AbstractVector{Multiclass{2}}}}
source
MLJModelInterface.selectFunction
select(X, r, c)

Select element(s) of a table or matrix at row(s) r and column(s) c. An object of the sink type of X (or a matrix) is returned unless c is a single integer or symbol. In that case a vector is returned, unless r is a single integer, in which case a single element is returned.

See also: selectrows, selectcols.
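
To illustrate these conventions on a small column table (data invented for this sketch):

julia> X = (x1 = [1, 2, 3], x2 = [4, 5, 6]);

julia> select(X, 1:2, :x2)  # single column, multiple rows: a vector
2-element Vector{Int64}:
 4
 5

julia> select(X, 2, :x2)    # single row and single column: an element
5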

source
MLJModelInterface.selectcolsFunction
selectcols(X, c)

Select single or multiple columns from a matrix or table X. If c is an abstract vector of integers or symbols, then the object returned is a table of the preferred sink type of typeof(X). If c is a single integer or symbol, then an AbstractVector is returned.

source
MLJModelInterface.selectrowsFunction
selectrows(X, r)

Select single or multiple rows from a table, abstract vector or matrix X. If X is tabular, the object returned is a table of the preferred sink type of typeof(X), even if only a single row is selected.

If the object is neither a table, an abstract vector, nor a matrix, X is returned and r is ignored.

source
MLJModelInterface.selectrowsMethod
MLJModelInterface.selectrows(::Model, I, data...) -> sampled_data

A model overloads selectrows whenever it buys into the optional reformat front-end for data preprocessing. See reformat for details. The fallback assumes data is a tuple and calls selectrows(X, I) for each X in data, returning the results in a new tuple of the same length. This call makes sense when X is a table, abstract vector or abstract matrix. In the last two cases, a new object and not a view is returned.
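
Continuing the reformat sketch given earlier, a matching overload for the hypothetical MyRegressor, whose model-specific representation is a matrix with observations as rows, might read:

import MLJModelInterface as MMI

# I is a vector of row indices; views suffice here:
MMI.selectrows(::MyRegressor, I, Xmatrix, y) = (view(Xmatrix, I, :), view(y, I))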

source
MLJModelInterface.tableMethod
table(columntable; prototype=nothing)

Convert a named tuple of vectors or tuples, columntable, into a table of the "preferred sink type" of prototype. This is often the type of prototype itself, when prototype is a sink; see the Tables.jl documentation. If prototype is not specified, then a named tuple of vectors is returned.

table(A::AbstractMatrix; names=nothing, prototype=nothing)

Wrap an abstract matrix A as a Tables.jl compatible table with the specified column names (a tuple of symbols). If names are not specified, names=(:x1, :x2, ..., :xn) is used, where n=size(A, 2).

If a prototype is specified, then the matrix is materialized as a table of the preferred sink type of prototype, rather than wrapped. Note that if prototype is not specified, then matrix(table(A)) is essentially a no-op.
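
A round trip, for example, with MLJModelInterface imported as MMI:

julia> A = [1.0 2.0; 3.0 4.0];

julia> T = MMI.table(A);  # Tables.jl-compatible wrapper with columns :x1, :x2

julia> MMI.matrix(T) == A  # essentially a no-op
true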

source
MLJModelInterface.training_lossesMethod
MLJModelInterface.training_losses(model::M, report)

If M is an iterative model type which calculates training losses, implement this method to return an AbstractVector of the losses in historical order. If the model calculates scores instead, then the sign of the scores should be reversed.

The following trait overload is also required: MLJModelInterface.supports_training_losses(::Type{<:M}) = true.
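
A sketch, assuming a hypothetical iterative model type MyBooster whose fit method records the per-iteration losses in the report:

import MLJModelInterface as MMI

MMI.training_losses(model::MyBooster, report) = report.losses
MMI.supports_training_losses(::Type{<:MyBooster}) = true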

source
MLJModelInterface.updateMethod
MLJModelInterface.update(model, verbosity, fitresult, cache, data...)

Models may optionally implement an update method. The fallback calls fit.
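
A sketch for the hypothetical iterative MyBooster, adding iterations when only n_iterations has increased, and falling back to fit otherwise (boost! is an assumed helper that performs the extra iterations):

import MLJModelInterface as MMI

function MMI.update(model::MyBooster, verbosity, fitresult, cache, data...)
    old_model = cache
    Δ = model.n_iterations - old_model.n_iterations
    if Δ > 0 && MMI.is_same_except(model, old_model, :n_iterations)
        fitresult = boost!(fitresult, Δ, data...)  # assumed helper
        return fitresult, deepcopy(model), nothing
    end
    return MMI.fit(model, verbosity, data...)  # retrain from scratch
end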

source
StatisticalTraits.deep_propertiesFunction
deep_properties(::Type{<:MLJType})

Given an MLJType subtype M, the value of this trait should be a tuple of any properties of M to be regarded as "deep".

When two instances of type M are to be tested for equality, in the sense of == or is_same_except, then the values of a "deep" property (whose values are assumed to be of composite type) are deemed to agree if all corresponding properties of those property values are ==.

Any property of M whose values are themselves of MLJType is "deep" automatically, and should not be included in the trait return value.

See also is_same_except.

Example

Consider an MLJType subtype Foo, with a single field of type Bar which is not a subtype of MLJType:

mutable struct Bar
    x::Int
end

mutable struct Foo <: MLJType
    bar::Bar
end

Then the mutability of Bar implies Bar(1) != Bar(1) and so, by the definition of == for MLJType objects (see is_same_except), we have

Foo(Bar(1)) != Foo(Bar(1))

However, after the declaration

MLJModelInterface.deep_properties(::Type{<:Foo}) = (:bar,)

we have

Foo(Bar(1)) == Foo(Bar(1))
source
MLJModelInterface._model_cleanerMethod
_model_cleaner(modelname, defaults, constraints)

Build the expression of the cleaner associated with the constraints specified in a model def.

source
MLJModelInterface._model_constructorMethod
_model_constructor(modelname, params, defaults)

Build the expression of the keyword constructor associated with a model definition. When the constructor is called, the clean! function is called as well to check that parameter assignments are valid.

source
MLJModelInterface._process_model_defMethod
_process_model_def(modl, ex)

Take an expression defining a model (mutable struct Model ...) and unpack key elements for further processing:

  • Model name (modelname)
  • Names of parameters (params)
  • Default values (defaults)
  • Constraints (constraints)

When no default field value is given, a heuristic is used to guess an appropriate default (e.g., zero for a Float64 parameter). To this end, the specified type expression is evaluated in the module modl.

source
MLJModelInterface._unpack!Method
_unpack!(ex, rep)

Internal function to read a constraint given after a default value for a parameter, and to transform it into an executable condition, which is returned to be executed later. For instance, given

alpha::Float64 = 0.5::(arg > 0.0)

it transforms (arg > 0.0) into (alpha > 0.0), which is executable.

source
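
These internal helpers together implement the @mlj_model macro. For instance, in the hypothetical declaration below, the constraint (_ > 0) is unpacked into the executable condition a > 0, which the generated keyword constructor enforces via clean!:

MLJModelInterface.@mlj_model mutable struct MyModel <: MLJModelInterface.Deterministic
    a::Float64 = 0.5::(_ > 0)
end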
MLJModelInterface.doc_headerMethod
MLJModelInterface.doc_header(SomeModelType; augment=false)

Return a string suitable for interpolation in the document string of an MLJ model type. In the example given below, the header expands to something like this:

FooRegressor

A model type for constructing a foo regressor, based on FooRegressorPkg.jl.

From MLJ, the type can be imported using

FooRegressor = @load FooRegressor pkg=FooRegressorPkg

Construct an instance with default hyper-parameters using the syntax model = FooRegressor(). Provide keyword arguments to override hyper-parameter defaults, as in FooRegressor(a=...).

Ordinarily, doc_header is used in document strings defined after the model type definition, as doc_header assumes model traits (in particular, package_name and package_url) to be defined; see also MLJModelInterface.metadata_pkg.

Example

Suppose a model type and traits have been defined by:

mutable struct FooRegressor
    a::Int
    b::Float64
end
Then, after trait definitions (see metadata_pkg and metadata_model), the document string is defined in this way:

"""
$(MLJModelInterface.doc_header(FooRegressor))

...

"""
FooRegressor

Variation to augment existing document string

For models that have a native API with separate documentation, one may want to call doc_header(FooRegressor, augment=true) instead. In that case, the output will look like this:

From MLJ, the FooRegressor type can be imported using

FooRegressor = @load FooRegressor pkg=FooRegressorPkg

Construct an instance with default hyper-parameters using the syntax model = FooRegressor(). Provide keyword arguments to override hyper-parameter defaults, as in FooRegressor(a=...).

source
MLJModelInterface.feature_importancesFunction
feature_importances(model::M, fitresult, report)

For a given model of model type M supporting intrinsic feature importances, calculate the feature importances from the model's fitresult and report as an abstract vector of feature::Symbol => importance::Real pairs (e.g., [:gender => 0.23, :height => 0.7, :weight => 0.1]).

New model implementations

The following trait overload is also required: MLJModelInterface.reports_feature_importances(::Type{<:M}) = true

If for some reason a model is sometimes unable to report feature importances, then feature_importances should return all importances as 0.0, as in [:gender => 0.0, :height => 0.0, :weight => 0.0].
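
A sketch for a hypothetical tree-based model whose report records the feature names and whose fitresult records matching importance scores:

import MLJModelInterface as MMI

function MMI.feature_importances(::MyTree, fitresult, report)
    return [f => imp for (f, imp) in zip(report.features, fitresult.importances)]
end
MMI.reports_feature_importances(::Type{<:MyTree}) = true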

source
MLJModelInterface.flat_paramsMethod
flat_params(m::Model)

Deconstruct any Model instance model as a flat named tuple, keyed on property names. Properties of nested model instances are recursively exposed, as shown in the example below. For most Model objects, properties are synonymous with fields, but this is not a hard requirement.

julia> using MLJModels
julia> using EnsembleModels
julia> tree = (@load DecisionTreeClassifier pkg=DecisionTree)();

julia> flat_params(EnsembleModel(model=tree))
 ⋮
 rng = Random._GLOBAL_RNG(),
 n = 100,
 acceleration = CPU1{Nothing}(nothing),
 out_of_bag_measure = Any[],)
source
MLJModelInterface.reportMethod
MLJModelInterface.report(model, report_given_method)

Merge the reports in the dictionary report_given_method into a single property-accessible object. It is assumed that each key of the dictionary is either :fit or the name of an operation, such as :predict or :transform. Each value is the report component returned by a training method (fit or update) dispatched on the model type, in the case of :fit, or the report component returned by an operation that supports reporting.

New model implementations

Overloading this method is optional, unless the model generates reports that are neither named tuples nor nothing.

Assuming each value in the report_given_method dictionary is either a named tuple or nothing, and there are no conflicts between the keys of the dictionary values (the individual reports), the fallback returns the usual named tuple merge of the dictionary values, ignoring any nothing value. If there is a key conflict, all operation reports are first wrapped in a named tuple of length one, as in (predict=predict_report,). A :fit report is never wrapped.

If any dictionary value is neither a named tuple nor nothing, it is first wrapped as (report=value,) before merging.
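
To illustrate the fallback (model and report components invented for this sketch, with no key conflicts):

report_given_method = Dict(
    :fit => (losses=[0.3, 0.1],),
    :predict => (n_rejected=2,),
    :transform => nothing,  # nothing values are ignored
)
# MLJModelInterface.report(model, report_given_method) then returns
# a named tuple like (losses = [0.3, 0.1], n_rejected = 2)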

source
MLJModelInterface.schemaMethod
schema(X)

Inspect the column types and scitypes of a tabular object. Returns nothing if the column types and scitypes can't be inspected.
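
For example (column table invented for this sketch; display details may vary):

julia> X = (age = [23, 45, 34], height = [1.8, 1.7, 1.6]);

julia> s = schema(X);

julia> s.names, s.scitypes
((:age, :height), (Count, Continuous))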

source
diff --git a/dev/search_index.js b/dev/search_index.js index 5a89f2d..4b72f8c 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"the_model_type_hierarchy/#The-model-type-hierarchy","page":"The model type hierarchy","title":"The model type hierarchy","text":"","category":"section"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"A model is an object storing hyperparameters associated with some machine learning algorithm, and that is all. In MLJ, hyperparameters include configuration parameters, like the number of threads, and special instructions, such as \"compute feature rankings\", which may or may not affect the final learning outcome. However, the logging level (verbosity below) is excluded. Learned parameters (such as the coefficients in a linear model) have no place in the model struct.","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"The name of the Julia type associated with a model indicates the associated algorithm (e.g., DecisionTreeClassifier). The outcome of training a learning algorithm is called a fitresult. For ordinary multivariate regression, for example, this would be the coefficients and intercept. For a general supervised model, it is the (generally minimal) information needed to make new predictions.","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"The ultimate supertype of all models is MLJModelInterface.Model, which has two abstract subtypes:","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"abstract type Supervised <: Model end\nabstract type Unsupervised <: Model end","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"Supervised models are further divided according to whether they are able to furnish probabilistic predictions of the target (which they will then do by default) or directly predict \"point\" estimates, for each new input pattern:","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"abstract type Probabilistic <: Supervised end\nabstract type Deterministic <: Supervised end","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"Further division of model types is realized through Trait declarations.","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"Associated with every concrete subtype of Model there must be a fit method, which implements the associated algorithm to produce the fitresult. Additionally, every Supervised model has a predict method, while Unsupervised models must have a transform method. More generally, methods such as these, that are dispatched on a model instance and a fitresult (plus other data), are called operations. Probabilistic supervised models optionally implement a predict_mode operation (in the case of classifiers) or a predict_mean and/or predict_median operations (in the case of regressors) although MLJModelInterface also provides fallbacks that will suffice in most cases. 
Unsupervised models may implement an inverse_transform operation.","category":"page"},{"location":"quick_start_guide/#Quick-start-guide","page":"Quick-start guide","title":"Quick start guide","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"The following are condensed and informal instructions for implementing the MLJ model interface for a new machine learning model. We assume: (i) you have a Julia registered package YourPackage.jl implementing some machine learning models; (ii) that you would like to interface and register these models with MLJ; and (iii) that you have a rough understanding of how things work with MLJ. In particular, you are familiar with:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"what scientific types are\nwhat Probabilistic, Deterministic and Unsupervised models are\nthe fact that MLJ generally works with tables rather than matrices. Here a table is a container X satisfying the Tables.jl API and satisfying Tables.istable(X) == true (e.g., DataFrame, JuliaDB table, CSV file, named tuple of equal-length vectors)\nCategoricalArrays.jl, if working with finite discrete data, e.g., doing classification; see also the Working with Categorical Data section of the MLJ manual.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"If you're not familiar with any one of these points, the Getting Started section of the MLJ manual may help.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"But tables don't make sense for my model! If a case can be made that tabular input does not make sense for your particular model, then MLJ can still handle this; you just need to define a non-tabular input_scitype trait. However, you should probably open an issue to clarify the appropriate declaration. The discussion below assumes input data is tabular.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"For simplicity, this document assumes no data front-end is to be defined for your model. Adding a data front-end, which offers the MLJ user some performance benefits, is easy to add post-facto, and is described in Implementing a data front-end.","category":"page"},{"location":"quick_start_guide/#Overview","page":"Quick-start guide","title":"Overview","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"To write an interface create a file or a module in your package which includes:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"a using MLJModelInterface or import MLJModelInterface statement\nMLJ-compatible model types and constructors,\nimplementation of fit, predict/transform and optionally fitted_params for your models,\nmetadata for your package and for each of your models","category":"page"},{"location":"quick_start_guide/#Important","page":"Quick-start guide","title":"Important","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"MLJModelInterface is a very light-weight interface allowing you to define your interface, but does not provide the functionality required to use or test your interface; this requires MLJBase. 
So, while you only need to add MLJModelInterface to your project's [deps], for testing purposes you need to add MLJBase to your project's [extras] and [targets]. In testing, simply use MLJBase in place of MLJModelInterface.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"We give some details for each step below with, each time, a few examples that you can mimic. The instructions are intentionally brief.","category":"page"},{"location":"quick_start_guide/#Model-type-and-constructor","page":"Quick-start guide","title":"Model type and constructor","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"MLJ-compatible constructors for your models need to meet the following requirements:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"be mutable struct,\nbe subtypes of MLJModelInterface.Probabilistic or MLJModelInterface.Deterministic or MLJModelInterface.Unsupervised,\nhave fields corresponding exclusively to hyperparameters,\nhave a keyword constructor assigning default values to all hyperparameters.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"You may use the @mlj_model macro from MLJModelInterface to declare a (non parametric) model type:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"MLJModelInterface.@mlj_model mutable struct YourModel <: MLJModelInterface.Deterministic\n a::Float64 = 0.5::(_ > 0)\n b::String = \"svd\"::(_ in (\"svd\",\"qr\"))\nend","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"That macro specifies:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"A keyword constructor (here YourModel(; a=..., b=...)),\nDefault values for the hyperparameters,\nConstraints on the hyperparameters where _ refers to a value passed.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Further to the last point, a::Float64 = 0.5::(_ > 0) indicates that the field a is a Float64, takes 0.5 as its default value, and expects its value to be positive.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Please see this issue for a known issue and workaround relating to the use of @mlj_model with negative defaults.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"If you decide not to use the @mlj_model macro (e.g. in the case of a parametric type), you will need to write a keyword constructor and a clean! 
method:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"mutable struct YourModel <: MLJModelInterface.Deterministic\n a::Float64\nend\nfunction YourModel(; a=0.5)\n model = YourModel(a)\n message = MLJModelInterface.clean!(model)\n isempty(message) || @warn message\n return model\nend\nfunction MLJModelInterface.clean!(m::YourModel)\n warning = \"\"\n if m.a <= 0\n warning *= \"Parameter `a` expected to be positive, resetting to 0.5\"\n m.a = 0.5\n end\n return warning\nend","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Additional notes:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Please annotate all fields with concrete types, if possible, using type parameters if necessary.\nPlease prefer Symbol over String if you can (e.g. to pass the name of a solver).\nPlease add constraints to your fields even if they seem obvious to you.\nYour model may have 0 fields, that's fine.\nAlthough not essential, try to avoid Union types for model fields. For example, a field declaration features::Vector{Symbol} with a default of Symbol[] (detected with the isempty method) is preferred to features::Union{Vector{Symbol}, Nothing} with a default of nothing.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Examples:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"KNNClassifier which uses @mlj_model,\nXGBoostRegressor which does not.","category":"page"},{"location":"quick_start_guide/#Fit","page":"Quick-start guide","title":"Fit","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"The implementation of fit will look like","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"function MLJModelInterface.fit(m::YourModel, verbosity, X, y, w=nothing)\n # body ...\n return (fitresult, cache, report)\nend","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"where y should only be there for a supervised model and w for a supervised model that supports sample weights. You must type verbosity to Int and you must not type X, y and w (MLJ handles that).","category":"page"},{"location":"quick_start_guide/#Regressor","page":"Quick-start guide","title":"Regressor","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"In the body of the fit function, you should assume that X is a table and that y is an AbstractVector (for multitask regression it may be a table).","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Typical steps in the body of the fit function will be:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"forming a matrix-view of the data, possibly transposed if your model expects a p x n formalism (MLJ assumes columns are features by default i.e. 
n x p), use MLJModelInterface.matrix for this,\npassing the data to your model,\nreturning the results as a tuple (fitresult, cache, report).","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"The fitresult part should contain everything that is needed at the predict or transform step, it should not be expected to be accessed by users. The cache should be left to nothing for now. The report should be a NamedTuple with any auxiliary useful information that a user would want to know about the fit (e.g., feature rankings). See more on this below.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Example: GLM's LinearRegressor","category":"page"},{"location":"quick_start_guide/#Classifier","page":"Quick-start guide","title":"Classifier","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"For a classifier, the steps are fairly similar to a regressor with these differences:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"y will be a categorical vector and you will typically want to use the integer encoding of y instead of CategoricalValues; use MLJModelInterface.int for this.\nYou will need to pass the full pool of target labels (not just those observed in the training data) and additionally, in the Deterministic case, the encoding, to make these available to predict. A simple way to do this is to pass y[1] in the fitresult, for then MLJModelInterface.classes(y[1]) is a complete list of possible categorical elements, and d = MLJModelInterface.decoder(y[1]) is a method for recovering categorical elements from their integer representations (e.g., d(2) is the categorical element with 2 as encoding).\nIn the case of a probabilistic classifier you should pass all probabilities simultaneously to the UnivariateFinite constructor to get an abstract UnivariateFinite vector (type UnivariateFiniteArray) rather than use comprehension or broadcasting to get a vanilla vector. This is for performance reasons.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"If implementing a classifier, you should probably consult the more detailed instructions at The predict method.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Examples:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"GLM's BinaryClassifier (Probabilistic)\nLIBSVM's SVC (Deterministic)","category":"page"},{"location":"quick_start_guide/#Transformer","page":"Quick-start guide","title":"Transformer","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Nothing special for a transformer.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Example: FillImputer","category":"page"},{"location":"quick_start_guide/#Fitted-parameters","page":"Quick-start guide","title":"Fitted parameters","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"There is a function you can optionally implement which will return the learned parameters of your model for user inspection. 
For instance, in the case of a linear regression, the user may want to get direct access to the coefficients and intercept. This should be as human and machine-readable as practical (not a graphical representation) and the information should be combined in the form of a named tuple.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"The function will always look like:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"function MLJModelInterface.fitted_params(model::YourModel, fitresult)\n # extract what's relevant from `fitresult`\n # ...\n # then return as a NamedTuple\n return (learned_param1 = ..., learned_param2 = ...)\nend","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Example: for GLM models","category":"page"},{"location":"quick_start_guide/#Summary-of-user-interface-points-(or,-What-to-put-where?)","page":"Quick-start guide","title":"Summary of user interface points (or, What to put where?)","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Recall that the fitresult returned as part of fit represents everything needed by predict (or transform) to make new predictions. It is not intended to be directly inspected by the user. Here is a summary of the interface points for users that your implementation creates:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Use fitted_params to expose learned parameters, such as linear coefficients, to the user in a machine and human-readable form (for re-use in another model, for example).\nUse the fields of your model struct for hyperparameters, i.e., those parameters declared by the user ahead of time that generally affect the outcome of training. It is okay to add \"control\" parameters (such as specifying an acceleration parameter specifying computational resources, as here).\nUse report to return everything else, including model-specific methods (or other callable objects). This includes feature rankings, decision boundaries, SVM support vectors, clustering centres, methods for visualizing training outcomes, methods for saving learned parameters in a custom format, degrees of freedom, deviance, etc. If there is a performance cost to extra functionality you want to expose, the functionality can be toggled on/off through a hyperparameter, but this should otherwise be avoided. 
For, example, in a decision tree model report.print_tree(depth) might generate a pretty tree representation of the learned tree, up to the specified depth.","category":"page"},{"location":"quick_start_guide/#Predict/Transform","page":"Quick-start guide","title":"Predict/Transform","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"The implementation of predict (for a supervised model) or transform (for an unsupervised one) will look like:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"function MLJModelInterface.predict(m::YourModel, fitresult, Xnew)\n # ...\nend","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Here Xnew is expected to be a table and part of the logic in predict or transform may be similar to that in fit.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"The values returned should be:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"model subtype return value of predict/transform\nDeterministic vector of values (or table if multi-target)\nProbabilistic vector of Distribution objects, for classifiers in particular, a vector of UnivariateFinite\nUnsupervised table","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"In the case of a Probabilistic model, you may further want to implement a predict_mean or a predict_mode. However, MLJModelInterface provides fallbacks, defined in terms of predict, whose performance may suffice.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Examples","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Deterministic regression: KNNRegressor\nProbabilistic regression: LinearRegressor and the predict_mean\nProbabilistic classification: LogisticClassifier","category":"page"},{"location":"quick_start_guide/#Metadata-(traits)","page":"Quick-start guide","title":"Metadata (traits)","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Adding metadata for your model(s) is crucial for the discoverability of your package and its models and to make sure your model is used with data it can handle. You can individually overload a number of trait functions that encode this metadata by following the instructions in Adding Models for General Use), which also explains these traits in more detail. 
However, your most convenient option is to use metadata_model and metadata_pkg functionalities from MLJModelInterface to do this:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"const ALL_MODELS = Union{YourModel1, YourModel2, ...}\n\nMLJModelInterface.metadata_pkg.(ALL_MODELS\n name = \"YourPackage\",\n uuid = \"6ee0df7b-...\", # see your Project.toml\n url = \"https://...\", # URL to your package repo\n julia = true, # is it written entirely in Julia?\n license = \"MIT\", # your package license\n is_wrapper = false, # does it wrap around some other package?\n)\n\n# Then for each model,\nMLJModelInterface.metadata_model(YourModel1,\n input_scitype = MLJModelInterface.Table(MLJModelInterface.Continuous), # what input data is supported?\n target_scitype = AbstractVector{MLJModelInterface.Continuous}, # for a supervised model, what target?\n output_scitype = MLJModelInterface.Table(MLJModelInterface.Continuous), # for an unsupervised, what output?\n supports_weights = false, # does the model support sample weights?\n descr = \"A short description of your model\"\n load_path = \"YourPackage.SubModuleContainingModelStructDefinition.YourModel1\"\n)","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Important. Do not omit the load_path specification. Without a correct load_path MLJ will be unable to import your model.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Examples:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"package metadata\nGLM\nMLJLinearModels\nmodel metadata\nLinearRegressor\nDecisionTree\nA series of regressors","category":"page"},{"location":"quick_start_guide/#Adding-a-model-to-the-model-registry","page":"Quick-start guide","title":"Adding a model to the model registry","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"See How to add models to the MLJ model registry.","category":"page"},{"location":"convenience_methods/#Convenience-methods","page":"Convenience methods","title":"Convenience methods","text":"","category":"section"},{"location":"convenience_methods/","page":"Convenience methods","title":"Convenience methods","text":"MMI.table\nMMI.matrix\nMMI.int\nMMI.UnivariateFinite\nMMI.classes\nMMI.decoder\nMMI.select\nMMI.selectrows\nMMI.selectcols\nMMI.UnivariateFinite","category":"page"},{"location":"convenience_methods/#MLJModelInterface.table-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.table","text":"table(columntable; prototype=nothing)\n\nConvert a named tuple of vectors or tuples columntable, into a table of the \"preferred sink type\" of prototype. This is often the type of prototype itself, when prototype is a sink; see the Tables.jl documentation. If prototype is not specified, then a named tuple of vectors is returned.\n\ntable(A::AbstractMatrix; names=nothing, prototype=nothing)\n\nWrap an abstract matrix A as a Tables.jl compatible table with the specified column names (a tuple of symbols). If names are not specified, names=(:x1, :x2, ..., :xn) is used, where n=size(A, 2).\n\nIf a prototype is specified, then the matrix is materialized as a table of the preferred sink type of prototype, rather than wrapped. 
Note that if prototype is not specified, then matrix(table(A)) is essentially a no-op.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.matrix-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.matrix","text":"matrix(X; transpose=false)\n\nIf X isa AbstractMatrix, return X or permutedims(X) if transpose=true. Otherwise if X is a Tables.jl compatible table source, convert X into a Matrix.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.int-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.int","text":"int(x)\n\nThe positional integer of the CategoricalString or CategoricalValue x, in the ordering defined by the pool of x. The type of int(x) is the reference type of x.\n\nNot to be confused with x.ref, which is unchanged by reordering of the pool of x, but has the same type.\n\nint(X::CategoricalArray)\nint(W::Array{<:CategoricalString})\nint(W::Array{<:CategoricalValue})\n\nBroadcasted versions of int.\n\njulia> v = categorical([\"c\", \"b\", \"c\", \"a\"])\n4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"c\"\n \"b\"\n \"c\"\n \"a\"\n\njulia> levels(v)\n3-element Vector{String}:\n \"a\"\n \"b\"\n \"c\"\n\njulia> int(v)\n4-element Vector{UInt32}:\n 0x00000003\n 0x00000002\n 0x00000003\n 0x00000001\n\nSee also: decoder.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.UnivariateFinite-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.UnivariateFinite","text":"UnivariateFinite(\n support,\n probs;\n pool=nothing,\n augmented=false,\n ordered=false\n)\n\nConstruct a discrete univariate distribution whose finite support is the elements of the vector support, and whose corresponding probabilities are elements of the vector probs. Alternatively, construct an abstract array of UnivariateFinite distributions by choosing probs to be an array of one higher dimension than the array generated.\n\nHere the word \"probabilities\" is an abuse of terminology as there is no requirement that probabilities actually sum to one, only that they be non-negative. So UnivariateFinite objects actually implement arbitrary non-negative measures over finite sets of labelled points. A UnivariateDistribution will be a bona fide probability measure when constructed using the augment=true option (see below) or when fit to data.\n\nUnless pool is specified, support should have type AbstractVector{<:CategoricalValue} and all elements are assumed to share the same categorical pool, which may be larger than support.\n\nImportant. All levels of the common pool have associated probabilities, not just those in the specified support. However, these probabilities are always zero (see example below).\n\nIf probs is a matrix, it should have a column for each class in support (or one less, if augment=true). 
More generally, probs will be an array whose size is of the form (n1, n2, ..., nk, c), where c = length(support) (or one less, if augment=true) and the constructor then returns an array of UnivariateFinite distributions of size (n1, n2, ..., nk).\n\nExamples\n\njulia> v = categorical([\"x\", \"x\", \"y\", \"x\", \"z\"])\n5-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"x\"\n \"y\"\n \"x\"\n \"z\"\n\njulia> UnivariateFinite(classes(v), [0.2, 0.3, 0.5])\nUnivariateFinite{Multiclass{3}}(x=>0.2, y=>0.3, z=>0.5)\n\njulia> d = UnivariateFinite([v[1], v[end]], [0.1, 0.9])\nUnivariateFinite{Multiclass{3}}(x=>0.1, z=>0.9)\n\njulia> rand(d, 3)\n3-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"z\"\n \"x\"\n\njulia> levels(d)\n3-element Vector{String}:\n \"x\"\n \"y\"\n \"z\"\n\njulia> pdf(d, \"y\")\n0.0\n\n\nSpecifying a pool\n\nAlternatively, support may be a list of raw (non-categorical) elements if pool is:\n\nsome CategoricalArray, CategoricalValue or CategoricalPool, such that support is a subset of levels(pool)\nmissing, in which case a new categorical pool is created which has support as its only levels.\n\nIn the last case, specify ordered=true if the pool is to be considered ordered.\n\njulia> UnivariateFinite([\"x\", \"z\"], [0.1, 0.9], pool=missing, ordered=true)\nUnivariateFinite{OrderedFactor{2}}(x=>0.1, z=>0.9)\n\njulia> d = UnivariateFinite([\"x\", \"z\"], [0.1, 0.9], pool=v) # v defined above\nUnivariateFinite{Multiclass{3}}(x=>0.1, z=>0.9)\n\njulia> pdf(d, \"y\") # allowed as `\"y\" in levels(v)`\n0.0\n\njulia> v = categorical([\"x\", \"x\", \"y\", \"x\", \"z\", \"w\"])\n6-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"x\"\n \"y\"\n \"x\"\n \"z\"\n \"w\"\n\njulia> probs = rand(100, 3); probs = probs ./ sum(probs, dims=2);\n\njulia> UnivariateFinite([\"x\", \"y\", \"z\"], probs, pool=v)\n100-element UnivariateFiniteVector{Multiclass{4}, String, UInt32, Float64}:\n UnivariateFinite{Multiclass{4}}(x=>0.194, y=>0.3, z=>0.505)\n UnivariateFinite{Multiclass{4}}(x=>0.727, y=>0.234, z=>0.0391)\n UnivariateFinite{Multiclass{4}}(x=>0.674, y=>0.00535, z=>0.321)\n ⋮\n UnivariateFinite{Multiclass{4}}(x=>0.292, y=>0.339, z=>0.369)\n\nProbability augmentation\n\nIf augment=true the provided array is augmented by inserting appropriate elements ahead of those provided, along the last dimension of the array. This means the user only provides probabilities for the classes c2, c3, ..., cn. The class c1 probabilities are chosen so that each UnivariateFinite distribution in the returned array is a bona fide probability distribution.\n\n\n\nUnivariateFinite(prob_given_class; pool=nothing, ordered=false)\n\nConstruct a discrete univariate distribution whose finite support is the set of keys of the provided dictionary, prob_given_class, and whose values specify the corresponding probabilities.\n\nThe type requirements on the keys of the dictionary are the same as the elements of support given above with this exception: if non-categorical elements (raw labels) are used as keys, then pool=... 
must be specified and cannot be missing.\n\nIf the values (probabilities) are arrays instead of scalars, then an abstract array of UnivariateFinite elements is created, with the same size as the array.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.classes-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.classes","text":"classes(x)\n\nAll the categorical elements with the same pool as x (including x), returned as a list, with an ordering consistent with the pool. Here x has CategoricalValue type, and classes(x) is a vector of the same eltype. Note that x in classes(x) is always true.\n\nNot to be confused with levels(x.pool). See the example below.\n\njulia> v = categorical([\"c\", \"b\", \"c\", \"a\"])\n4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"c\"\n \"b\"\n \"c\"\n \"a\"\n\njulia> levels(v)\n3-element Vector{String}:\n \"a\"\n \"b\"\n \"c\"\n\njulia> x = v[4]\nCategoricalArrays.CategoricalValue{String, UInt32} \"a\"\n\njulia> classes(x)\n3-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"a\"\n \"b\"\n \"c\"\n\njulia> levels(x.pool)\n3-element Vector{String}:\n \"a\"\n \"b\"\n \"c\"\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.decoder-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.decoder","text":"decoder(x)\n\nReturn a callable object for decoding the integer representation of a CategoricalValue sharing the same pool as the CategoricalValue x. Specifically, one has decoder(x)(int(y)) == y for all CategoricalValues y having the same pool as x. One can also call decoder(x) on integer arrays, in which case decoder(x) is broadcast over all elements.\n\nExamples\n\njulia> v = categorical([\"c\", \"b\", \"c\", \"a\"])\n4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"c\"\n \"b\"\n \"c\"\n \"a\"\n\njulia> int(v)\n4-element Vector{UInt32}:\n 0x00000003\n 0x00000002\n 0x00000003\n 0x00000001\n\njulia> d = decoder(v[3]);\n\njulia> d(int(v)) == v\ntrue\n\nWarning:\n\nIt is not true that int(d(u)) == u always holds.\n\nSee also: int.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.select-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.select","text":"select(X, r, c)\n\nSelect element(s) of a table or matrix at row(s) r and column(s) c. An object of the sink type of X (or a matrix) is returned unless c is a single integer or symbol. In that case a vector is returned, unless r is a single integer, in which case a single element is returned.\n\nSee also: selectrows, selectcols.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.selectrows-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.selectrows","text":"selectrows(X, r)\n\nSelect single or multiple rows from a table, abstract vector or matrix X. If X is tabular, the object returned is a table of the preferred sink type of typeof(X), even if only a single row is selected.\n\nIf the object is neither a table, abstract vector, nor matrix, X is returned and r is ignored.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.selectcols-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.selectcols","text":"selectcols(X, c)\n\nSelect single or multiple columns from a matrix or table X. 
If c is an abstract vector of integers or symbols, then the object returned is a table of the preferred sink type of typeof(X). If c is a single integer or symbol, then an AbstractVector is returned.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.UnivariateFinite-convenience_methods-2","page":"Convenience methods","title":"MLJModelInterface.UnivariateFinite","text":"UnivariateFinite(\n support,\n probs;\n pool=nothing,\n augment=false,\n ordered=false\n)\n\nConstruct a discrete univariate distribution whose finite support is the elements of the vector support, and whose corresponding probabilities are elements of the vector probs. Alternatively, construct an abstract array of UnivariateFinite distributions by choosing probs to be an array of one higher dimension than the array generated.\n\nHere the word \"probabilities\" is an abuse of terminology as there is no requirement that probabilities actually sum to one, only that they be non-negative. So UnivariateFinite objects actually implement arbitrary non-negative measures over finite sets of labelled points. A UnivariateFinite distribution will be a bona fide probability measure when constructed using the augment=true option (see below) or when fit to data.\n\nUnless pool is specified, support should have type AbstractVector{<:CategoricalValue} and all elements are assumed to share the same categorical pool, which may be larger than support.\n\nImportant. All levels of the common pool have associated probabilities, not just those in the specified support. However, the probabilities of levels not appearing in the specified support are always zero (see example below).\n\nIf probs is a matrix, it should have a column for each class in support (or one less, if augment=true). More generally, probs will be an array whose size is of the form (n1, n2, ..., nk, c), where c = length(support) (or one less, if augment=true) and the constructor then returns an array of UnivariateFinite distributions of size (n1, n2, ..., nk).\n\nExamples\n\njulia> v = categorical([\"x\", \"x\", \"y\", \"x\", \"z\"])\n5-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"x\"\n \"y\"\n \"x\"\n \"z\"\n\njulia> UnivariateFinite(classes(v), [0.2, 0.3, 0.5])\nUnivariateFinite{Multiclass{3}}(x=>0.2, y=>0.3, z=>0.5)\n\njulia> d = UnivariateFinite([v[1], v[end]], [0.1, 0.9])\nUnivariateFinite{Multiclass{3}}(x=>0.1, z=>0.9)\n\njulia> rand(d, 3)\n3-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"z\"\n \"x\"\n\njulia> levels(d)\n3-element Vector{String}:\n \"x\"\n \"y\"\n \"z\"\n\njulia> pdf(d, \"y\")\n0.0\n\n\nSpecifying a pool\n\nAlternatively, support may be a list of raw (non-categorical) elements if pool is:\n\nsome CategoricalArray, CategoricalValue or CategoricalPool, such that support is a subset of levels(pool)\nmissing, in which case a new categorical pool is created which has support as its only levels.\n\nIn the last case, specify ordered=true if the pool is to be considered ordered.\n\njulia> UnivariateFinite([\"x\", \"z\"], [0.1, 0.9], pool=missing, ordered=true)\nUnivariateFinite{OrderedFactor{2}}(x=>0.1, z=>0.9)\n\njulia> d = UnivariateFinite([\"x\", \"z\"], [0.1, 0.9], pool=v) # v defined above\nUnivariateFinite{Multiclass{3}}(x=>0.1, z=>0.9)\n\njulia> pdf(d, \"y\") # allowed as `\"y\" in levels(v)`\n0.0\n\njulia> v = categorical([\"x\", \"x\", \"y\", \"x\", \"z\", \"w\"])\n6-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"x\"\n \"y\"\n \"x\"\n \"z\"\n \"w\"\n\njulia> probs = rand(100, 3); probs = probs 
./ sum(probs, dims=2);\n\njulia> UnivariateFinite([\"x\", \"y\", \"z\"], probs, pool=v)\n100-element UnivariateFiniteVector{Multiclass{4}, String, UInt32, Float64}:\n UnivariateFinite{Multiclass{4}}(x=>0.194, y=>0.3, z=>0.505)\n UnivariateFinite{Multiclass{4}}(x=>0.727, y=>0.234, z=>0.0391)\n UnivariateFinite{Multiclass{4}}(x=>0.674, y=>0.00535, z=>0.321)\n ⋮\n UnivariateFinite{Multiclass{4}}(x=>0.292, y=>0.339, z=>0.369)\n\nProbability augmentation\n\nIf augment=true the provided array is augmented by inserting appropriate elements ahead of those provided, along the last dimension of the array. This means the user only provides probabilities for the classes c2, c3, ..., cn. The class c1 probabilities are chosen so that each UnivariateFinite distribution in the returned array is a bona fide probability distribution.\n\n\n\nUnivariateFinite(prob_given_class; pool=nothing, ordered=false)\n\nConstruct a discrete univariate distribution whose finite support is the set of keys of the provided dictionary, prob_given_class, and whose values specify the corresponding probabilities.\n\nThe type requirements on the keys of the dictionary are the same as the elements of support given above with this exception: if non-categorical elements (raw labels) are used as keys, then pool=... must be specified and cannot be missing.\n\nIf the values (probabilities) are arrays instead of scalars, then an abstract array of UnivariateFinite elements is created, with the same size as the array.\n\n\n\n\n\n","category":"function"},{"location":"form_of_data/#The-form-of-data-for-fitting-and-predicting","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"","category":"section"},{"location":"form_of_data/","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"The model implementer does not have absolute control over the types of data X, y and Xnew appearing in the fit and predict methods they must implement. Rather, they can specify the scientific type of this data by making appropriate declarations of the traits input_scitype and target_scitype discussed later under Trait declarations.","category":"page"},{"location":"form_of_data/","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"Important Note. Unless it genuinely makes little sense to do so, the MLJ recommendation is to specify a Table scientific type for X (and hence Xnew) and an AbstractVector scientific type (e.g., AbstractVector{Continuous}) for targets y. Algorithms requiring matrix input can coerce their inputs appropriately; see below.","category":"page"},{"location":"form_of_data/#Additional-type-coercions","page":"The form of data for fitting and predicting","title":"Additional type coercions","text":"","category":"section"},{"location":"form_of_data/","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"If the core algorithm being wrapped requires data in a different or more specific form, then fit will need to coerce the table into the form desired (and the same coercions applied to X will have to be repeated for Xnew in predict). To assist with common cases, MLJ provides the convenience method MMI.matrix. MMI.matrix(Xtable) has type Matrix{T} where T is the tightest common type of elements of Xtable, and Xtable is any table. 
(If Xtable is itself just a wrapped matrix, Xtable=Tables.table(A), then A=MMI.matrix(Xtable) will be returned without any copying.)","category":"page"},{"location":"form_of_data/","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"Alternatively, a more performant option is to implement a data front-end for your model; see Implementing a data front-end.","category":"page"},{"location":"form_of_data/","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"Other auxiliary methods provided by MLJModelInterface for handling tabular data are: selectrows, selectcols, select and schema (for extracting the size, names and eltypes of a table's columns). See Convenience methods below for details.","category":"page"},{"location":"form_of_data/#Important-convention","page":"The form of data for fitting and predicting","title":"Important convention","text":"","category":"section"},{"location":"form_of_data/","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"It is to be understood that the columns of table X correspond to features and the rows to observations. So, for example, the predict method for a linear regression model might look like predict(model, w, Xnew) = MMI.matrix(Xnew)*w, where w is the vector of learned coefficients.","category":"page"},{"location":"serialization/#Serialization","page":"Serialization","title":"Serialization","text":"","category":"section"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"warning: New in MLJBase 0.20\nThe following API is incompatible with versions of MLJBase < 0.20, even for model implementations compatible with MLJModelInterface 1.0.","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"This section may be occasionally relevant when wrapping models implemented in languages other than Julia.","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"The MLJ user can serialize and deserialize machines, as she would any other Julia object. (This user has the option of first removing data from the machine. See the Saving machines section of the MLJ manual for details.) However, a problem can occur if a model's fitresult (see The fit method) is not a persistent object. For example, it might be a C pointer that would have no meaning in a new Julia session.","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"If that is the case a model implementation needs to implement a save and restore method for switching between a fitresult and some persistent, serializable representation of that result.","category":"page"},{"location":"serialization/#The-save-method","page":"Serialization","title":"The save method","text":"","category":"section"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"MMI.save(model::SomeModel, fitresult; kwargs...) -> serializable_fitresult","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"Implement this method to return a persistent serializable representation of the fitresult component of the MMI.fit return value.","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"The fallback of save performs no action and returns fitresult.","category":"page"},{"location":"serialization/#The-restore-method","page":"Serialization","title":"The restore method","text":"","category":"section"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"MMI.restore(model::SomeModel, serializable_fitresult) -> fitresult","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"Implement this method to reconstruct a valid fitresult (as would be returned by MMI.fit) from a persistent representation constructed using MMI.save as described above.","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"The fallback of restore performs no action and returns serializable_fitresult.","category":"page"},{"location":"serialization/#Example","page":"Serialization","title":"Example","text":"","category":"section"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"Refer to the model implementations at MLJXGBoostInterface.jl.","category":"page"},
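{"location":"serialization/","page":"Serialization","title":"Serialization","text":"For a rough idea of what is involved, here is a sketch for a hypothetical SomeBoostingModel whose fitresult wraps a non-persistent handle; serialize_booster and deserialize_booster stand in for suitable core-library routines:","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"MMI.save(::SomeBoostingModel, fitresult; kwargs...) =\n    serialize_booster(fitresult)   # e.g., a persistent Vector{UInt8}\nMMI.restore(::SomeBoostingModel, serializable_fitresult) =\n    deserialize_booster(serializable_fitresult)","category":"page"},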
{"location":"iterative_models/#Iterative-models-and-the-update!-method","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"","category":"section"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"An update method may be optionally overloaded to enable a call by MLJ to retrain a model (on the same training data) to avoid repeating computations unnecessarily.","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y) -> fitresult, cache, report\nMMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y, w=nothing) -> fitresult, cache, report","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"Here the second variation applies if SomeSupervisedModel supports sample weights.","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"If an MLJ Machine is being fit! and it is not the first time, then update is called instead of fit, unless the machine fit! has been called with a new rows keyword argument. However, MLJModelInterface defines a fallback for update which just calls fit. For context, see the Internals section of the MLJ manual.","category":"page"},
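{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"For illustration only, here is a sketch for a hypothetical SomeIterativeModel with an n_iterations hyperparameter, assuming fit returns cache=(model=deepcopy(model),) and that train! is a core routine for adding iterations to an existing fitresult:","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"function MMI.update(model::SomeIterativeModel, verbosity, old_fitresult, old_cache, X, y)\n    old_model = old_cache.model\n    if MMI.is_same_except(model, old_model, :n_iterations) &&\n            model.n_iterations >= old_model.n_iterations\n        delta = model.n_iterations - old_model.n_iterations  # iterations to add\n        fitresult = train!(old_fitresult, X, y, delta)       # hypothetical core routine\n        return fitresult, (model=deepcopy(model),), (n_iterations=model.n_iterations,)\n    end\n    return MMI.fit(model, verbosity, X, y)  # otherwise, retrain from scratch\nend","category":"page"},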
{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"Learning networks wrapped as models constitute one use case (see the Composing Models section of the MLJ manual): one would like each component model to be retrained only when hyperparameter changes \"upstream\" make this necessary. In this case, MLJ provides a fallback (specifically, the fallback is for any subtype of SupervisedNetwork = Union{DeterministicNetwork,ProbabilisticNetwork}). A second more generally relevant use case is iterative models, where calls to increase the number of iterations only restart the iterative procedure if other hyperparameters have also changed. (A useful method for inspecting model changes in such cases is MLJModelInterface.is_same_except.) For an example, see MLJEnsembles.jl.","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"A third use case is to avoid repeating the time-consuming preprocessing of X and y required by some models.","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"If the argument fitresult (returned by a preceding call to fit) is not sufficient for performing an update, the author can arrange for fit to output in its cache return value any additional information required (for example, pre-processed versions of X and y), as this is also passed as an argument to the update method.","category":"page"},{"location":"fitting_distributions/#Models-that-learn-a-probability-distribution","page":"Models that learn a probability distribution","title":"Models that learn a probability distribution","text":"","category":"section"},{"location":"fitting_distributions/","page":"Models that learn a probability distribution","title":"Models that learn a probability distribution","text":"warning: Experimental\nThe following API is experimental. It is subject to breaking changes during minor or major releases without warning. Models implementing this interface will not work with MLJBase versions earlier than 0.17.5.","category":"page"},{"location":"fitting_distributions/","page":"Models that learn a probability distribution","title":"Models that learn a probability distribution","text":"Models that fit a probability distribution to some data should be regarded as Probabilistic <: Supervised models with target y = data and X = nothing.","category":"page"},{"location":"fitting_distributions/","page":"Models that learn a probability distribution","title":"Models that learn a probability distribution","text":"The predict method should return a single distribution.","category":"page"},{"location":"fitting_distributions/","page":"Models that learn a probability distribution","title":"Models that learn a probability distribution","text":"A working implementation of a model that fits a UnivariateFinite distribution to some categorical data using Laplace smoothing controlled by a hyperparameter alpha is given here.","category":"page"},{"location":"supervised_models/#Supervised-models","page":"Introduction","title":"Supervised models","text":"","category":"section"},{"location":"supervised_models/#Mathematical-assumptions","page":"Introduction","title":"Mathematical assumptions","text":"","category":"section"},{"location":"supervised_models/","page":"Introduction","title":"Introduction","text":"At present, MLJ's performance estimate functionality (resampling using evaluate/evaluate!) tacitly assumes that feature-label pairs of observations (X1, y1), (X2, y2), (X3, y3), ... are being modelled as independent and identically distributed (i.i.d.) random variables, and constructs some kind of representation of an estimate of the conditional probability p(y | X) (y and X single observations). 
It may be that a model implementing the MLJ interface has the potential to make predictions under weaker assumptions (e.g., time series forecasting models). However, the output of the compulsory predict method described below should be the output of the model under the i.i.d. assumption.","category":"page"},{"location":"supervised_models/","page":"Introduction","title":"Introduction","text":"In the future, newer methods may be introduced to handle weaker assumptions (see, e.g., The predict_joint method below).","category":"page"},{"location":"supervised_models/","page":"Introduction","title":"Introduction","text":"The following sections were written with Supervised models in mind, but also cover material relevant to general models:","category":"page"},{"location":"supervised_models/","page":"Introduction","title":"Introduction","text":"Summary of methods\nThe form of data for fitting and predicting\nThe fit method\nThe fitted_params method\nThe predict method\nThe predict_joint method\nTraining losses\nFeature importances\nTrait declarations\nIterative models and the update! method\nImplementing a data front end\nSupervised models with a transform method\nModels that learn a probability distribution","category":"page"},{"location":"implementing_a_data_front_end/#Implementing-a-data-front-end","page":"Implementing a data front end","title":"Implementing a data front-end","text":"","category":"section"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"note: Note\nIt is suggested that packages implementing MLJ's model API that later implement a data front-end should tag their changes in a breaking release. (The changes will not break the use of models for the ordinary MLJ user, who interacts with models exclusively through the machine interface. However, it will break usage for some external packages that have chosen to depend directly on the model API.)","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"MLJModelInterface.reformat(model, args...) -> data\nMLJModelInterface.selectrows(::Model, I, data...) -> sampled_data","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Models optionally overload reformat to define transformations of user-supplied data into some model-specific representation (e.g., from a table to a matrix). Computational overheads associated with multiple fit!/predict/transform calls (on MLJ machines) are then avoided when memory resources allow. The fallback returns args (no transformation).","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"The selectrows(::Model, I, data...) method is overloaded to specify how the model-specific data is to be subsampled, for some observation indices I (a colon, :, or instance of AbstractVector{<:Integer}). In this way, implementing a data front-end also allows more efficient resampling of data (in user calls to evaluate!).","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"After detailing formal requirements for implementing a data front-end, we give a Sample implementation. 
A simple implementation also appears in the MLJDecisionTreeInterface.jl package.","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Here \"user-supplied data\" is what the MLJ user supplies when constructing a machine, as in machine(model, args...), which coincides with the arguments expected by fit(model, verbosity, args...) when reformat is not overloaded.","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Overloading reformat is permitted for any Model subtype, except for subtypes of Static. Here is a complete list of responsibilities for such an implementation, for some model::SomeModelType (a sample implementation follows after):","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"A reformat(model::SomeModelType, args...) -> data method must be implemented for each form of args... appearing in a valid machine construction machine(model, args...) (there will be one for each possible signature of fit(::SomeModelType, ...)).\nAdditionally, if not included above, there must be a single argument form of reformat, reformat(model::SomeModelType, arg) -> (data,), serving as a data front-end for operations like predict. It must always hold that reformat(model, args...)[1] = reformat(model, args[1])[1].","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"The fallback is reformat(model, args...) = args (i.e., slurps provided data).","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Important. reformat(model::SomeModelType, args...) must always return a tuple, even if this has length one. The length of the tuple need not match length(args).","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"fit(model::SomeModelType, verbosity, data...) should be implemented as if data is the output of reformat(model, args...), where args is the data an MLJ user has bound to model in some machine. The same applies to any overloading of update.\nEach implemented operation, such as predict and transform - but excluding inverse_transform - must be defined as if its data arguments are reformatted versions of user-supplied data. For example, in the supervised case, data_new in predict(model::SomeModelType, fitresult, data_new) is reformat(model, Xnew), where Xnew is the data provided by the MLJ user in a call predict(mach, Xnew) (mach.model == model).\nTo specify how the model-specific representation of data is to be resampled, implement selectrows(model::SomeModelType, I, data...) -> resampled_data for each overloading of reformat(model::SomeModelType, args...) -> data above. Here I is an arbitrary abstract integer vector or : (type Colon).","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Important. selectrows(model::SomeModelType, I, args...) 
must always return a tuple of the same length as args, even if this is one.","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"The fallback for selectrows is described at selectrows.","category":"page"},{"location":"implementing_a_data_front_end/#Sample-implementation","page":"Implementing a data front end","title":"Sample implementation","text":"","category":"section"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Suppose a supervised model type SomeSupervised supports sample weights, leading to two different fit signatures, and that it has a single operation predict:","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"fit(model::SomeSupervised, verbosity, X, y)\nfit(model::SomeSupervised, verbosity, X, y, w)\n\npredict(model::SomeSupervised, fitresult, Xnew)","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Without a data front-end implemented, suppose X is expected to be a table and y a vector, but suppose the core algorithm always converts X to a matrix with features as rows (so each observation in the table corresponds to a column of the matrix). Then a new data-front end might look like this:","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"const MMI = MLJModelInterface\n\n# for fit:\nMMI.reformat(::SomeSupervised, X, y) = (MMI.matrix(X)', y)\nMMI.reformat(::SomeSupervised, X, y, w) = (MMI.matrix(X)', y, w)\nMMI.selectrows(::SomeSupervised, I, Xmatrix, y) =\n (view(Xmatrix, :, I), view(y, I))\nMMI.selectrows(::SomeSupervised, I, Xmatrix, y, w) =\n (view(Xmatrix, :, I), view(y, I), view(w, I))\n\n# for predict:\nMMI.reformat(::SomeSupervised, X) = (MMI.matrix(X)',)\nMMI.selectrows(::SomeSupervised, I, Xmatrix) = (view(Xmatrix, :, I),)","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"With these additions, fit and predict are refactored, so that X and Xnew represent matrices with features as rows.","category":"page"},{"location":"the_fitted_params_method/#The-fitted_params-method","page":"The fitted_params method","title":"The fitted_params method","text":"","category":"section"},{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"A fitted_params method may be optionally overloaded. Its purpose is to provide MLJ access to a user-friendly representation of the learned parameters of the model (as opposed to the hyperparameters). They must be extractable from fitresult.","category":"page"},{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"MMI.fitted_params(model::SomeSupervisedModel, fitresult) -> friendly_fitresult::NamedTuple","category":"page"},{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"For a linear model, for example, one might declare something like friendly_fitresult=(coefs=[...], bias=...).","category":"page"},{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"The fallback is to return (fitresult=fitresult,).","category":"page"},
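{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"For instance, if a hypothetical SomeLinearRegressor stores its fitresult as a coefficient vector with the bias as the final element, the declaration might read:","category":"page"},{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"MMI.fitted_params(::SomeLinearRegressor, fitresult) =\n    (coefs=fitresult[1:end-1], bias=fitresult[end])","category":"page"},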
{"location":"unsupervised_models/#Unsupervised-models","page":"Unsupervised models","title":"Unsupervised models","text":"","category":"section"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"Unsupervised models implement the MLJ model interface in a very similar fashion. The main differences are:","category":"page"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"The fit method, which still returns (fitresult, cache, report) will typically have only one training argument X, as in MLJModelInterface.fit(model, verbosity, X), although this is not a hard requirement; see Transformers requiring a target variable in training below. Furthermore, in the case of models that subtype Static <: Unsupervised (see Static models) fit has no training arguments at all, but does not need to be implemented, as a fallback returns (nothing, nothing, nothing).\nA transform and/or predict method is implemented, and has the same signature as predict does in the supervised case, as in MLJModelInterface.transform(model, fitresult, Xnew). However, it may only have one data argument Xnew, unless model <: Static, in which case there is no restriction. A use-case for predict is K-means clustering that predicts labels and transforms input features into a space of lower dimension. See the Transformers that also predict section of the MLJ manual for an example.\nThe target_scitype refers to the output of predict, if implemented. A new trait, output_scitype, is for the output of transform. Unless the model is Static (see Static models) the trait input_scitype is for the single data argument of transform (and predict, if implemented). If fit has more than one data argument, you must overload the trait fit_data_scitype, which bounds the allowed data passed to fit(model, verbosity, data...) and will always be a Tuple type.\nAn inverse_transform can be optionally implemented. 
The signature is the same as transform, as in MLJModelInterface.inverse_transform(model::MyUnsupervisedModel, fitresult, Xout), which:\nmust make sense for any Xout for which scitype(Xout) <: output_scitype(MyUnsupervisedModel); and\nmust return an object Xin satisfying scitype(Xin) <: input_scitype(MyUnsupervisedModel).","category":"page"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"For sample implementations, see MLJ's built-in transformers and the clustering models at MLJClusteringInterface.jl.","category":"page"},{"location":"unsupervised_models/#Transformers-requiring-a-target-variable-in-training","page":"Unsupervised models","title":"Transformers requiring a target variable in training","text":"","category":"section"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"An Unsupervised model that is not Static may include a second argument y in its fit signature, as in fit(::MyTransformer, verbosity, X, y). For example, some feature selection tools require a target variable y in training. (Unlike Supervised models, an Unsupervised model is not required to implement predict, and in pipelines it is the output of transform, and not predict, that is always propagated to the next model.) Such a model should overload the trait target_in_fit, as in this example:","category":"page"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"MLJModelInterface.target_in_fit(::Type{<:MyTransformer}) = true","category":"page"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"This ensures that such models can appear in pipelines, and that a target provided to the pipeline model is passed on to the model in training. ","category":"page"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"If the model implements more than one fit signature (e.g., one with a target y and one without) then fit_data_scitype must also be overloaded, as in this example:","category":"page"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"MLJModelInterface.fit_data_scitype(::Type{<:MyTransformer}) = Union{\n    Tuple{Table(Continuous)},\n    Tuple{Table(Continuous), AbstractVector{<:Finite}},\n}","category":"page"},{"location":"how_to_register/#How-to-add-models-to-the-MLJ-model-registry","page":"How to add models to the MLJ Model Registry","title":"How to add models to the MLJ model registry","text":"","category":"section"},{"location":"how_to_register/","page":"How to add models to the MLJ Model Registry","title":"How to add models to the MLJ Model Registry","text":"The MLJ model registry is located in the MLJModels.jl repository. To add a model, you need to follow these steps:","category":"page"},{"location":"how_to_register/","page":"How to add models to the MLJ Model Registry","title":"How to add models to the MLJ Model Registry","text":"Ensure your model conforms to the interface defined above\nRaise an issue at MLJModels.jl and point out where the MLJ-interface implementation is, e.g. 
by providing a link to the code.\nAn administrator will then review your implementation and work with you to add the model to the registry","category":"page"},{"location":"summary_of_methods/#Summary-of-methods","page":"Summary of methods","title":"Summary of methods","text":"","category":"section"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"The compulsory and optional methods to be implemented for each concrete type SomeSupervisedModel <: MMI.Supervised are summarized below.","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"An = indicates the return value for a fallback version of the method.","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Compulsory:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.fit(model::SomeSupervisedModel, verbosity, X, y) -> fitresult, cache, report\nMMI.predict(model::SomeSupervisedModel, fitresult, Xnew) -> yhat","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optional, to check and correct invalid hyperparameter values:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.clean!(model::SomeSupervisedModel) = \"\"","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optional, to return user-friendly form of fitted parameters:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.fitted_params(model::SomeSupervisedModel, fitresult) = fitresult","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optional, to avoid redundant calculations when re-fitting machines associated with a model:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y) =\n MMI.fit(model, verbosity, X, y)","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optional, to specify default hyperparameter ranges (for use in tuning):","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.hyperparameter_ranges(T::Type) = Tuple(fill(nothing, length(fieldnames(T))))","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optional, if SomeSupervisedModel <: Probabilistic:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.predict_mode(model::SomeSupervisedModel, fitresult, Xnew) =\n mode.(predict(model, fitresult, Xnew))\nMMI.predict_mean(model::SomeSupervisedModel, fitresult, Xnew) =\n mean.(predict(model, fitresult, Xnew))\nMMI.predict_median(model::SomeSupervisedModel, fitresult, Xnew) =\n median.(predict(model, fitresult, Xnew))","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Required, if the model is to be registered (findable by general users):","category":"page"},{"location":"summary_of_methods/","page":"Summary of 
methods","title":"Summary of methods","text":"MMI.load_path(::Type{<:SomeSupervisedModel}) = \"\"\nMMI.package_name(::Type{<:SomeSupervisedModel}) = \"Unknown\"\nMMI.package_uuid(::Type{<:SomeSupervisedModel}) = \"Unknown\"","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.input_scitype(::Type{<:SomeSupervisedModel}) = Unknown","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Strongly recommended, to constrain the form of target data passed to fit:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.target_scitype(::Type{<:SomeSupervisedModel}) = Unknown","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optional but recommended:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.package_url(::Type{<:SomeSupervisedModel}) = \"unknown\"\nMMI.is_pure_julia(::Type{<:SomeSupervisedModel}) = false\nMMI.package_license(::Type{<:SomeSupervisedModel}) = \"unknown\"","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"If SomeSupervisedModel supports sample weights or class weights, then instead of the fit above, one implements","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.fit(model::SomeSupervisedModel, verbosity, X, y, w=nothing) -> fitresult, cache, report","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"and, if appropriate","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y, w=nothing) =\n MMI.fit(model, verbosity, X, y, w)","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Additionally, if SomeSupervisedModel supports sample weights, one must declare","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.supports_weights(model::Type{<:SomeSupervisedModel}) = true","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optionally, an implementation may add a data front-end, for transforming user data (such as a table) into some model-specific format (such as a matrix), and/or add methods to specify how reformatted data is resampled. This alters the interpretation of the data arguments of fit, update and predict, whose number may also change. See Implementing a data front-end for details). A data front-end provides the MLJ user certain performance advantages when retraining a machine.","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Third-party packages that interact directly with models using the MLJModelInterface.jl API, rather than through the machine interface, will also need to understand how the data front-end works, so they incorporate reformat into their fit/update/predict calls. 
See also this issue.","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MLJModelInterface.reformat(model::SomeSupervisedModel, args...) = args\nMLJModelInterface.selectrows(model::SomeSupervisedModel, I, data...) = data","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optionally, to customize support for serialization of machines (see Serialization), overload","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.save(model::SomeModel, fitresult; kwargs...) = fitresult","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"and possibly","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.restore(model::SomeModel, serializable_fitresult) -> serializable_fitresult","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"These last two are unlikely to be needed if wrapping pure Julia code.","category":"page"},{"location":"the_fit_method/#The-fit-method","page":"The fit method","title":"The fit method","text":"","category":"section"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"A compulsory fit method returns three objects:","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"MMI.fit(model::SomeSupervisedModel, verbosity, X, y) -> fitresult, cache, report","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"fitresult is the fitresult in the sense above (which becomes an argument for predict discussed below).\nreport is a (possibly empty) NamedTuple, for example, report=(deviance=..., dof_residual=..., stderror=..., vcov=...). Any training-related statistics, such as internal estimates of the generalization error, and feature rankings, should be returned in the report tuple. How, or if, these are generated should be controlled by hyperparameters (the fields of model). Fitted parameters, such as the coefficients of a linear model, do not go in the report as they will be extractable from fitresult (and accessible to MLJ through the fitted_params method described below).\nThe value of cache can be nothing, unless one is also defining an update method (see below). The Julia type of cache is not presently restricted.","category":"page"},
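{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"To make this concrete, here is a minimal fit for a hypothetical deterministic regressor SomeLinearRegressor, which solves a least-squares problem after appending a column of ones for the bias (illustrative only):","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"function MMI.fit(model::SomeLinearRegressor, verbosity, X, y)\n    Xmatrix = MMI.matrix(X)\n    fitresult = [Xmatrix ones(size(Xmatrix, 1))] \\ y  # coefficients, with bias last\n    cache = nothing      # nothing to reuse, as no update method is implemented\n    report = NamedTuple()\n    return fitresult, cache, report\nend","category":"page"},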
{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"note: Note\nThe fit (and update) methods should not mutate the model. If necessary, fit can create a deepcopy of model first.","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"It is not necessary for fit to provide type or dimension checks on X or y or to call clean! on the model; MLJ will carry out such checks.","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"The types of X and y are constrained by the input_scitype and target_scitype trait declarations; see Trait declarations below. (That is, unless a data front-end is implemented, in which case these traits refer instead to the arguments of the overloaded reformat method, and the types of X and y are determined by the output of reformat.)","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"The method fit should never alter hyperparameter values, the sole exception being fields of type <:AbstractRNG. If the package is able to suggest better hyperparameters, as a byproduct of training, return these in the report field.","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"The verbosity level (0 for silent) is for passing to the learning algorithm itself. A fit method wrapping such an algorithm should generally avoid doing any of its own logging.","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"Sample weight support. If supports_weights(::Type{<:SomeSupervisedModel}) has been declared true, then one instead implements the following variation on the above fit:","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"MMI.fit(model::SomeSupervisedModel, verbosity, X, y, w=nothing) -> fitresult, cache, report","category":"page"},{"location":"model_wrappers/#Model-wrappers","page":"Model wrappers","title":"Model wrappers","text":"","category":"section"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"A model that can have one or more other models as hyper-parameters should overload the trait is_wrapper, as in this example:","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"MLJModelInterface.is_wrapper(::Type{<:MyWrapper}) = true","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"The constructor for such a model does not need to provide default values for the model-valued hyper-parameters. If only a single model is wrapped, then the hyper-parameter should have the name :model and this should be an optional positional argument, as well as a keyword argument.","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"For example, EnsembleModel is a model wrapper, and we can construct an instance like this:","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"using MLJ\natom = ConstantClassifier()\nEnsembleModel(atom, n=100)","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"but also like this:","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"EnsembleModel(model=atom, n=100)","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"This is the only case in MLJ where positional arguments in a model constructor are allowed.","category":"page"},{"location":"trait_declarations/#Trait-declarations","page":"Trait declarations","title":"Trait declarations","text":"","category":"section"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"Two trait functions allow the implementer to restrict the types of data X, y and Xnew discussed above. The MLJ task interface uses these traits for data type checks but also for model search. 
If they are omitted (and your model is registered) then a general user may attempt to use your model with inappropriately typed data.","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"The trait functions input_scitype and target_scitype take scientific data types as values. We assume here familiarity with ScientificTypes.jl (see Getting Started for the basics).","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"For example, to ensure that the X presented to the DecisionTreeClassifier fit method is a table whose columns all have Continuous element type (and hence AbstractFloat machine type), one declares","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.input_scitype(::Type{<:DecisionTreeClassifier}) = MMI.Table(MMI.Continuous)","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"or, equivalently,","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.input_scitype(::Type{<:DecisionTreeClassifier}) = Table(Continuous)","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"If, instead, columns were allowed to have either: (i) a mixture of Continuous and Missing values, or (ii) Count (i.e., integer) values, then the declaration would be","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.input_scitype(::Type{<:DecisionTreeClassifier}) = Table(Union{Continuous,Missing}, Count)","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"Similarly, to ensure the target is an AbstractVector whose elements have Finite scitype (and hence CategoricalValue machine type) we declare","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.target_scitype(::Type{<:DecisionTreeClassifier}) = AbstractVector{<:Finite}","category":"page"},{"location":"trait_declarations/#Multivariate-targets","page":"Trait declarations","title":"Multivariate targets","text":"","category":"section"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"The above remarks continue to hold unchanged for the case of multivariate targets. 
For example, if we declare","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"target_scitype(SomeSupervisedModel) = Table(Continuous)","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"then this constrains the target to be any table whose columns have Continuous element scitype (i.e., AbstractFloat), while","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"target_scitype(SomeSupervisedModel) = Table(Continuous, Finite{2})","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"restricts to tables with continuous or binary (ordered or unordered) columns.","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"For predicting variable-length sequences of, say, binary values (CategoricalValues with some common size-two pool), we declare","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"target_scitype(SomeSupervisedModel) = AbstractVector{<:NTuple{<:Finite{2}}}","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"The trait functions controlling the form of data are summarized as follows:","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"method return type declarable return values fallback value\ninput_scitype Type some scientific type Unknown\ntarget_scitype Type some scientific type Unknown","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"Additional trait functions tell MLJ's @load macro how to find your model if it is registered, and provide other self-explanatory metadata about the model:","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"method return type declarable return values fallback value\nload_path String unrestricted \"unknown\"\npackage_name String unrestricted \"unknown\"\npackage_uuid String unrestricted \"unknown\"\npackage_url String unrestricted \"unknown\"\npackage_license String unrestricted \"unknown\"\nis_pure_julia Bool true or false false\nsupports_weights Bool true or false false\nsupports_class_weights Bool true or false false\nsupports_training_losses Bool true or false false\nreports_feature_importances Bool true or false false","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"Here is the complete list of trait function declarations for DecisionTreeClassifier, whose core algorithms are provided by DecisionTree.jl, but whose interface actually lives at MLJDecisionTreeInterface.jl.","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.input_scitype(::Type{<:DecisionTreeClassifier}) = MMI.Table(MMI.Continuous)\nMMI.target_scitype(::Type{<:DecisionTreeClassifier}) = AbstractVector{<:MMI.Finite}\nMMI.load_path(::Type{<:DecisionTreeClassifier}) = \"MLJDecisionTreeInterface.DecisionTreeClassifier\"\nMMI.package_name(::Type{<:DecisionTreeClassifier}) = \"DecisionTree\"\nMMI.package_uuid(::Type{<:DecisionTreeClassifier}) = 
\"7806a523-6efd-50cb-b5f6-3fa6f1930dbb\"\nMMI.package_url(::Type{<:DecisionTreeClassifier}) = \"https://github.com/bensadeghi/DecisionTree.jl\"\nMMI.is_pure_julia(::Type{<:DecisionTreeClassifier}) = true","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"Alternatively, these traits can also be declared using MMI.metadata_pkg and MMI.metadata_model helper functions as:","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.metadata_pkg(\n DecisionTreeClassifier,\n name=\"DecisionTree\",\n package_uuid=\"7806a523-6efd-50cb-b5f6-3fa6f1930dbb\",\n package_url=\"https://github.com/bensadeghi/DecisionTree.jl\",\n is_pure_julia=true\n)\n\nMMI.metadata_model(\n DecisionTreeClassifier,\n input_scitype=MMI.Table(MMI.Continuous),\n target_scitype=AbstractVector{<:MMI.Finite},\n load_path=\"MLJDecisionTreeInterface.DecisionTreeClassifier\"\n)","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"Important. Do not omit the load_path specification. If unsure what it should be, post an issue at MLJ.","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.metadata_pkg","category":"page"},{"location":"trait_declarations/#MLJModelInterface.metadata_pkg","page":"Trait declarations","title":"MLJModelInterface.metadata_pkg","text":"metadata_pkg(T; args...)\n\nHelper function to write the metadata for a package providing model T. Use it with broadcasting to define the metadata of the package providing a series of models.\n\nKeywords\n\npackage_name=\"unknown\" : package name\npackage_uuid=\"unknown\" : package uuid\npackage_url=\"unknown\" : package url\nis_pure_julia=missing : whether the package is pure julia\npackage_license=\"unknown\": package license\nis_wrapper=false : whether the package is a wrapper\n\nExample\n\nmetadata_pkg.((KNNRegressor, KNNClassifier),\n package_name=\"NearestNeighbors\",\n package_uuid=\"b8a86587-4115-5ab1-83bc-aa920d37bbce\",\n package_url=\"https://github.com/KristofferC/NearestNeighbors.jl\",\n is_pure_julia=true,\n package_license=\"MIT\",\n is_wrapper=false)\n\n\n\n\n\n","category":"function"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.metadata_model","category":"page"},{"location":"trait_declarations/#MLJModelInterface.metadata_model","page":"Trait declarations","title":"MLJModelInterface.metadata_model","text":"metadata_model(T; args...)\n\nHelper function to write the metadata for a model T.\n\nKeywords\n\ninput_scitype=Unknown: allowed scientific type of the input data\ntarget_scitype=Unknown: allowed scitype of the target (supervised)\noutput_scitype=Unknown: allowed scitype of the transformed data (unsupervised)\nsupports_weights=false: whether the model supports sample weights\nsupports_class_weights=false: whether the model supports class weights\nload_path=\"unknown\": where the model is (usually PackageName.ModelName)\nhuman_name=nothing: human name of the model\nsupports_training_losses=nothing: whether the (necessarily iterative) model can report training losses\nreports_feature_importances=nothing: whether the model reports feature importances\n\nExample\n\nmetadata_model(KNNRegressor,\n input_scitype=MLJModelInterface.Table(MLJModelInterface.Continuous),\n target_scitype=AbstractVector{MLJModelInterface.Continuous},\n 
supports_weights=true,\n load_path=\"NearestNeighbors.KNNRegressor\")\n\n\n\n\n\n","category":"function"},{"location":"type_declarations/#New-model-type-declarations","page":"New model type declarations","title":"New model type declarations","text":"","category":"section"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"Here is an example of a concrete supervised model type declaration, for a model with a single hyperparameter:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"import MLJModelInterface\nconst MMI = MLJModelInterface\n\nmutable struct RidgeRegressor <: MMI.Deterministic\n lambda::Float64\nend","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"Models (which are mutable) should not be given internal constructors. It is recommended that they be given an external lazy keyword constructor of the same name. This constructor defines default values for every field, and optionally corrects invalid field values by calling a clean! method (whose fallback returns an empty message string):","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"function MMI.clean!(model::RidgeRegressor)\n warning = \"\"\n if model.lambda < 0\n warning *= \"Need lambda ≥ 0. Resetting lambda=0. \"\n model.lambda = 0\n end\n return warning\nend\n\n# keyword constructor\nfunction RidgeRegressor(; lambda=0.0)\n model = RidgeRegressor(lambda)\n message = MMI.clean!(model)\n isempty(message) || @warn message\n return model\nend","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"Important. Performing clean!(model) a second time should not mutate model. That is, this test should hold:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"clean!(model)\nclone = deepcopy(model)\nclean!(model)\n@test model == clone","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"Although not essential, try to avoid Union types for model fields. For example, a field declaration features::Vector{Symbol} with a default of Symbol[] (detected with the isempty method) is preferred to features::Union{Vector{Symbol}, Nothing} with a default of nothing.","category":"page"},{"location":"type_declarations/#Hyperparameters-for-parallelization-options","page":"New model type declarations","title":"Hyperparameters for parallelization options","text":"","category":"section"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"The section Acceleration and Parallelism of the MLJ manual indicates how users specify an option to run an algorithm using distributed processing or multithreading. A hyperparameter specifying such an option should be called acceleration. Its value a should satisfy a isa AbstractResource where AbstractResource is defined in the ComputationalResources.jl package. 
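For example, here is a minimal sketch of such a field and its default (the ParallelizableModel name is hypothetical; the AbstractResource type and the resource constructors come from ComputationalResources.jl):","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"using ComputationalResources\nimport MLJModelInterface\nconst MMI = MLJModelInterface\n\nmutable struct ParallelizableModel <: MMI.Deterministic\n acceleration::AbstractResource\nend\n\n# keyword constructor defaulting to single-threaded computation;\n# users may instead pass CPUThreads(), CPUProcesses() or CUDALibs()\nParallelizableModel(; acceleration=CPU1()) = ParallelizableModel(acceleration)","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"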
An option to run on a GPU is ordinarily indicated with the CUDALibs() resource.","category":"page"},{"location":"type_declarations/#hyperparameter-access-and-mutation","page":"New model type declarations","title":"Hyperparameter access and mutation","text":"","category":"section"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"To support hyperparameter optimization (see the Tuning Models section of the MLJ manual), any hyperparameter to be individually controlled must be:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"property-accessible; nested property access allowed, as in model.detector.K\nmutable","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"For an un-nested hyperparameter, the requirement is that getproperty(model, :param_name) and setproperty!(model, :param_name, value) have the expected behavior.","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"Combining hyperparameters in a named tuple does not generally work: although property-accessible (with nesting), an individual value cannot be mutated.","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"For a suggested way to deal with hyperparameters varying in number, see the implementation of Stack, where the model struct stores a varying number of base models internally as a vector, but components are named at construction and accessed by overloading getproperty/setproperty! appropriately.","category":"page"},{"location":"type_declarations/#Macro-shortcut","page":"New model type declarations","title":"Macro shortcut","text":"","category":"section"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"An alternative to declaring the model struct, clean! 
method, and keyword constructor is to use the @mlj_model macro, as in the following example:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"@mlj_model mutable struct YourModel <: MMI.Deterministic\n a::Float64 = 0.5::(_ > 0)\n b::String = \"svd\"::(_ in (\"svd\",\"qr\"))\nend","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"This declaration specifies:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"A keyword constructor (here YourModel(; a=..., b=...)),\nDefault values for the hyperparameters,\nConstraints on the hyperparameters, where _ refers to the value passed.","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"For example, a::Float64 = 0.5::(_ > 0) indicates that the field a is a Float64, takes 0.5 as default value, and expects its value to be positive.","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"You cannot use the @mlj_model macro if your model struct has type parameters.","category":"page"},{"location":"type_declarations/#Known-issue-with-@mlj_macro","page":"New model type declarations","title":"Known issue with the @mlj_model macro","text":"","category":"section"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"Defaults with negative values can trip up the @mlj_model macro (see this issue). So, for example, this does not work:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"@mlj_model mutable struct Bar\n a::Int = -1::(_ > -2)\nend","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"But this does:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"@mlj_model mutable struct Bar\n a::Int = (-)(1)::(_ > -2)\nend","category":"page"},{"location":"where_to_put_code/#Where-to-place-code-implementing-new-models","page":"Where to place code implementing new models","title":"Where to place code implementing new models","text":"","category":"section"},{"location":"where_to_put_code/","page":"Where to place code implementing new models","title":"Where to place code implementing new models","text":"Note that different packages can implement models having the same name without causing conflicts, although an MLJ user cannot simultaneously load two such models.","category":"page"},{"location":"where_to_put_code/","page":"Where to place code implementing new models","title":"Where to place code implementing new models","text":"There are two options for making a new model implementation available to all MLJ users:","category":"page"},{"location":"where_to_put_code/","page":"Where to place code implementing new models","title":"Where to place code implementing new models","text":"Native implementations (preferred option). The implementation code lives in the same package that contains the learning algorithms implementing the interface. An example is EvoTrees.jl. In this case, it is sufficient to open an issue at MLJ requesting the package to be registered with MLJ. 
Registering a package allows the MLJ user to access its models' metadata and to selectively load them.\nSeparate interface package. Implementation code lives in a separate interface package, which has the algorithm-providing package as a dependency. See the template repository MLJExampleInterface.jl.","category":"page"},{"location":"where_to_put_code/","page":"Where to place code implementing new models","title":"Where to place code implementing new models","text":"Additionally, one needs to ensure that the implementation code defines the package_name and load_path model traits appropriately, so that MLJ's @load macro can find the necessary code (see MLJModels/src for examples).","category":"page"},{"location":"the_predict_method/#The-predict-method","page":"The predict method","title":"The predict method","text":"","category":"section"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"A compulsory predict method has the form","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"MMI.predict(model::SomeSupervisedModel, fitresult, Xnew) -> yhat","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Here Xnew will have the same form as the X passed to fit.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Note that while Xnew generally consists of multiple observations (e.g., has multiple rows in the case of a table), it is assumed, in view of the i.i.d. assumption recalled above, that calling predict(..., Xnew) is equivalent to broadcasting some method predict_one(..., x) over the individual observations x in Xnew (a method implementing the probability distribution p(y | x) above).","category":"page"},{"location":"the_predict_method/#Prediction-types-for-deterministic-responses.","page":"The predict method","title":"Prediction types for deterministic responses.","text":"","category":"section"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"In the case of Deterministic models, yhat should have the same scitype as the y passed to fit (see above). If y is a CategoricalVector (classification), then elements of the prediction yhat must have a pool == to the pool of the target y presented in training, even if not all levels appear in the training data or prediction itself.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Unfortunately, code not written with the preservation of categorical levels in mind poses special problems. To help with this, MLJModelInterface provides some utilities: MLJModelInterface.int (for converting a CategoricalValue into an integer, the ordering of these integers being consistent with that of the pool) and MLJModelInterface.decoder (for constructing a callable object that decodes the integers back into CategoricalValue objects). Refer to Convenience methods below for important details.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Note that a decoder created during fit may need to be bundled with fitresult to make it available to predict during re-encoding. 
So, for example, if the core algorithm being wrapped by fit expects a nominal target yint of type Vector{<:Integer}, then a fit method may look something like this:","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"function MMI.fit(model::SomeSupervisedModel, verbosity, X, y)\n yint = MMI.int(y)\n a_target_element = y[1] # a CategoricalValue/String\n decode = MMI.decoder(a_target_element) # can be called on integers\n\n core_fitresult = SomePackage.fit(X, yint, verbosity=verbosity)\n\n fitresult = (decode, core_fitresult)\n cache = nothing\n report = nothing\n return fitresult, cache, report\nend","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"while a corresponding deterministic predict operation might look like this:","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"function MMI.predict(model::SomeSupervisedModel, fitresult, Xnew)\n decode, core_fitresult = fitresult\n yhat = SomePackage.predict(core_fitresult, Xnew)\n return decode.(yhat)\nend","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"For a concrete example, refer to the code for SVMClassifier.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Of course, if you are coding a learning algorithm from scratch, rather than wrapping an existing one, these extra measures may be unnecessary.","category":"page"},{"location":"the_predict_method/#Prediction-types-for-probabilistic-responses","page":"The predict method","title":"Prediction types for probabilistic responses","text":"","category":"section"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"In the case of Probabilistic models with univariate targets, yhat must be an AbstractVector or table whose elements are distributions. In the common case of a vector (single target), this means one distribution per row of Xnew.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"A distribution is some object that, at the least, implements Base.rand (i.e., is something that can be sampled). Currently, all performance measures (metrics) defined in MLJBase.jl additionally assume that a distribution is either:","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"An instance of some subtype of Distributions.Distribution, an abstract type defined in the Distributions.jl package; or\nAn instance of CategoricalDistributions.UnivariateFinite, from the CategoricalDistributions.jl package, which should be used for all probabilistic classifiers, i.e., for predictors whose target has scientific type <:AbstractVector{<:Finite}.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"All such distributions implement the probability mass or density function Distributions.pdf. 
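For instance, a probabilistic regressor might return one Normal distribution per row of Xnew. Here is a minimal sketch, in which SomeProbabilisticRegressor and the contents of its fitresult are hypothetical:","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"using Distributions\n\nfunction MMI.predict(model::SomeProbabilisticRegressor, fitresult, Xnew)\n # assumes fit stored linear coefficients and a noise scale\n coefs, sigma = fitresult\n means = MMI.matrix(Xnew) * coefs\n return [Normal(m, sigma) for m in means]\nend","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"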
If your model's predictions cannot be objects of this form, then you will need to implement appropriate performance measures to buy into MLJ's performance evaluation apparatus.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"An implementation can avoid CategoricalDistributions.jl as a dependency by using the \"dummy\" constructor MLJModelInterface.UnivariateFinite, which is bound to the true one when MLJBase.jl is loaded.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"For efficiency, one should not construct UnivariateFinite instances one at a time. Rather, once a probability vector, matrix, or dictionary is known, construct an instance of UnivariateFiniteVector <: AbstractArray{<:UnivariateFinite,1} to return. Both UnivariateFinite and UnivariateFiniteVector objects are constructed using the single UnivariateFinite function.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"For example, suppose the target y arrives as a subsample of some ybig and is missing some classes:","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"ybig = categorical([:a, :b, :a, :a, :b, :a, :rare, :a, :b])\ny = ybig[1:6]","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Your fit method has bundled the first element of y with the fitresult to make it available to predict for purposes of tracking the complete pool of classes. Let's call this an_element = y[1]. Then, supposing the corresponding probabilities of the observed classes [:a, :b] are in an n x 2 matrix probs (where n is the number of rows of Xnew), then you return","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"yhat = MLJModelInterface.UnivariateFinite([:a, :b], probs, pool=an_element)","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"This object automatically assigns zero probability to the unseen class :rare (i.e., pdf.(yhat, :rare) works and returns a zero vector). If you would like to assign :rare non-zero probabilities, simply add it to the first vector (the support) and supply a larger probs matrix.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"In a binary classification problem, it suffices to specify a single vector of probabilities, provided you specify augment=true, as in the following example. Note carefully that these probabilities are associated with the last (second) class you specify in the constructor:","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"y = categorical([:TRUE, :FALSE, :FALSE, :TRUE, :TRUE])\nan_element = y[1]\nprobs = rand(10)\nyhat = MLJModelInterface.UnivariateFinite([:FALSE, :TRUE], probs, augment=true, pool=an_element)","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"The constructor has a lot of options, including passing a dictionary instead of vectors. 
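For example, the dictionary variant of the binary prediction above might look like this (a sketch; because the keys are raw labels, the pool keyword must be specified, here re-using an_element from earlier):","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"yhat = MLJModelInterface.UnivariateFinite(\n Dict(:FALSE => 0.2, :TRUE => 0.8),\n pool=an_element,\n)","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"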
See CategoricalDistributions.UnivariateFinite for details.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"See LinearBinaryClassifier for an example of a Probabilistic classifier implementation.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Important note on binary classifiers. There is no \"Binary\" scitype distinct from Multiclass{2} or OrderedFactor{2}; Binary is just an alias for Union{Multiclass{2},OrderedFactor{2}}. The target_scitype of a binary classifier will generally be AbstractVector{<:Binary} and, according to the MLJ scitype convention, elements of y have type CategoricalValue, and not Bool. See BinaryClassifier for an example.","category":"page"},{"location":"the_predict_method/#Report-items-returned-by-predict","page":"The predict method","title":"Report items returned by predict","text":"","category":"section"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"A predict method, or other operation such as transform, can contribute to the report accessible in any machine associated with a model. See Reporting byproducts of a static transformation below for details.","category":"page"},{"location":"static_models/#Static-models","page":"Static models","title":"Static models","text":"","category":"section"},{"location":"static_models/","page":"Static models","title":"Static models","text":"A model type subtypes Static <: Unsupervised if it does not generalize to new data but nevertheless has hyperparameters. See the Static transformers section of the MLJ manual for examples. In the Static case, transform can have multiple arguments and input_scitype refers to the allowed scitype of the slurped data, even if there is only a single argument. For example, if the signature is transform(static_model, X1, X2), then the allowed input_scitype might be Tuple{Table(Continuous), Table(Continuous)}; if the signature is transform(static_model, X), the allowed input_scitype might be Tuple{Table(Continuous)}. The other traits are as for regular Unsupervised models.","category":"page"},{"location":"static_models/#Reporting-byproducts-of-a-static-transformation","page":"Static models","title":"Reporting byproducts of a static transformation","text":"","category":"section"},{"location":"static_models/","page":"Static models","title":"Static models","text":"As a static transformer does not implement fit, the usual mechanism for creating a report is not available. Instead, byproducts of the computation performed by transform can be returned by transform itself by returning a pair (output, report) instead of just output. Here report should be a named tuple. In fact, any operation (e.g., predict) can do this for any model type. However, this exceptional behavior must be flagged with an appropriate trait declaration, as in","category":"page"},{"location":"static_models/","page":"Static models","title":"Static models","text":"MLJModelInterface.reporting_operations(::Type{<:SomeModelType}) = (:transform,)","category":"page"},{"location":"static_models/","page":"Static models","title":"Static models","text":"If mach is a machine wrapping a model of this kind, then report(mach) will include the report item from transform's output. 
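Here is a minimal sketch of such a static transformer (the Thresholder type and the report contents are hypothetical):","category":"page"},{"location":"static_models/","page":"Static models","title":"Static models","text":"import MLJModelInterface\nconst MMI = MLJModelInterface\n\nmutable struct Thresholder <: MMI.Static\n threshold::Float64\nend\n\n# flag that transform returns (output, report) pairs\nMMI.reporting_operations(::Type{<:Thresholder}) = (:transform,)\n\n# the second argument is the (vacuous) fitresult of a Static model\nfunction MMI.transform(model::Thresholder, _, v)\n output = v .>= model.threshold\n report = (n_above_threshold=sum(output),)\n return output, report\nend","category":"page"},{"location":"static_models/","page":"Static models","title":"Static models","text":"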
For sample implementations, see this issue or the code for DBSCAN clustering.","category":"page"},{"location":"outlier_detection_models/#Outlier-detection-models","page":"Outlier detection models","title":"Outlier detection models","text":"","category":"section"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"warning: Experimental API\nThe Outlier Detection API is experimental and may change in future releases of MLJ.","category":"page"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"Outlier detection or anomaly detection is predominantly an unsupervised learning task, transforming each data point to an outlier score quantifying the level of \"outlierness\". However, because detectors can also be semi-supervised or supervised, MLJModelInterface provides a collection of abstract model types that enable these different characteristics, namely:","category":"page"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"MLJModelInterface.SupervisedDetector\nMLJModelInterface.UnsupervisedDetector\nMLJModelInterface.ProbabilisticSupervisedDetector\nMLJModelInterface.ProbabilisticUnsupervisedDetector\nMLJModelInterface.DeterministicSupervisedDetector\nMLJModelInterface.DeterministicUnsupervisedDetector","category":"page"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"All outlier detection models subtyping from any of the above supertypes have to implement MLJModelInterface.fit(model, verbosity, X, [y]). Models subtyping from either SupervisedDetector or UnsupervisedDetector have to implement MLJModelInterface.transform(model, fitresult, Xnew), which should return the raw outlier scores (<:Continuous) of all points in Xnew.","category":"page"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"Probabilistic and deterministic outlier detection models provide an additional option to predict a normalized estimate of outlierness or a concrete outlier label and thus enable evaluation of those models. All corresponding supertypes have to implement (in addition to the previously described fit and transform) MLJModelInterface.predict(model, fitresult, Xnew), with deterministic predictions conforming to OrderedFactor{2}, where the first class is the normal class and the second the outlier. Probabilistic models predict a UnivariateFinite estimate of those classes.","category":"page"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"It is typically possible to automatically convert an outlier detection model to a probabilistic or deterministic model if the training scores are stored in the model's report. The OutlierDetection.jl package mentioned below, for example, stores the training scores under the scores key in the report returned from fit. 
It is then possible to use model wrappers such as OutlierDetection.ProbabilisticDetector to automatically convert a model to enable predictions of the required output type.","category":"page"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"note: External outlier detection packages\nOutlierDetection.jl provides an opinionated interface on top of MLJ for outlier detection models, standardizing things like class names, dealing with training scores, score normalization and more.","category":"page"},{"location":"reference/#Reference","page":"Reference","title":"Reference","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"Pages = [\"reference.md\"]","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Modules = [MLJModelInterface,]\nPrivate = false\nOrder = [:constant, :type, :function, :macro, :module]","category":"page"},{"location":"reference/#MLJModelInterface.UnivariateFinite","page":"Reference","title":"MLJModelInterface.UnivariateFinite","text":"UnivariateFinite(\n support,\n probs;\n pool=nothing,\n augment=false,\n ordered=false\n)\n\nConstruct a discrete univariate distribution whose finite support is the elements of the vector support, and whose corresponding probabilities are elements of the vector probs. Alternatively, construct an abstract array of UnivariateFinite distributions by choosing probs to be an array of one higher dimension than the array generated.\n\nHere the word \"probabilities\" is an abuse of terminology as there is no requirement that probabilities actually sum to one, only that they be non-negative. So UnivariateFinite objects actually implement arbitrary non-negative measures over finite sets of labelled points. A UnivariateFinite will be a bona fide probability measure when constructed using the augment=true option (see below) or when fit to data.\n\nUnless pool is specified, support should have type AbstractVector{<:CategoricalValue} and all elements are assumed to share the same categorical pool, which may be larger than support.\n\nImportant. All levels of the common pool have associated probabilities, not just those in the specified support. However, these probabilities are always zero (see example below).\n\nIf probs is a matrix, it should have a column for each class in support (or one less, if augment=true). 
More generally, probs will be an array whose size is of the form (n1, n2, ..., nk, c), where c = length(support) (or one less, if augment=true) and the constructor then returns an array of UnivariateFinite distributions of size (n1, n2, ..., nk).\n\nExamples\n\njulia> v = categorical([\"x\", \"x\", \"y\", \"x\", \"z\"])\n5-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"x\"\n \"y\"\n \"x\"\n \"z\"\n\njulia> UnivariateFinite(classes(v), [0.2, 0.3, 0.5])\nUnivariateFinite{Multiclass{3}}(x=>0.2, y=>0.3, z=>0.5)\n\njulia> d = UnivariateFinite([v[1], v[end]], [0.1, 0.9])\nUnivariateFinite{Multiclass{3}}(x=>0.1, z=>0.9)\n\njulia> rand(d, 3)\n3-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"z\"\n \"x\"\n\njulia> levels(d)\n3-element Vector{String}:\n \"x\"\n \"y\"\n \"z\"\n\njulia> pdf(d, \"y\")\n0.0\n\n\nSpecifying a pool\n\nAlternatively, support may be a list of raw (non-categorical) elements if pool is:\n\nsome CategoricalArray, CategoricalValue or CategoricalPool, such that support is a subset of levels(pool)\nmissing, in which case a new categorical pool is created which has support as its only levels.\n\nIn the last case, specify ordered=true if the pool is to be considered ordered.\n\njulia> UnivariateFinite([\"x\", \"z\"], [0.1, 0.9], pool=missing, ordered=true)\nUnivariateFinite{OrderedFactor{2}}(x=>0.1, z=>0.9)\n\njulia> d = UnivariateFinite([\"x\", \"z\"], [0.1, 0.9], pool=v) # v defined above\nUnivariateFinite{Multiclass{3}}(x=>0.1, z=>0.9)\n\njulia> pdf(d, \"y\") # allowed as `\"y\" in levels(v)`\n0.0\n\njulia> v = categorical([\"x\", \"x\", \"y\", \"x\", \"z\", \"w\"])\n6-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"x\"\n \"y\"\n \"x\"\n \"z\"\n \"w\"\n\njulia> probs = rand(100, 3); probs = probs ./ sum(probs, dims=2);\n\njulia> UnivariateFinite([\"x\", \"y\", \"z\"], probs, pool=v)\n100-element UnivariateFiniteVector{Multiclass{4}, String, UInt32, Float64}:\n UnivariateFinite{Multiclass{4}}(x=>0.194, y=>0.3, z=>0.505)\n UnivariateFinite{Multiclass{4}}(x=>0.727, y=>0.234, z=>0.0391)\n UnivariateFinite{Multiclass{4}}(x=>0.674, y=>0.00535, z=>0.321)\n ⋮\n UnivariateFinite{Multiclass{4}}(x=>0.292, y=>0.339, z=>0.369)\n\nProbability augmentation\n\nIf augment=true the provided array is augmented by inserting appropriate elements ahead of those provided, along the last dimension of the array. This means the user only provides probabilities for the classes c2, c3, ..., cn. The class c1 probabilities are chosen so that each UnivariateFinite distribution in the returned array is a bona fide probability distribution.\n\n\n\nUnivariateFinite(prob_given_class; pool=nothing, ordered=false)\n\nConstruct a discrete univariate distribution whose finite support is the set of keys of the provided dictionary, prob_given_class, and whose values specify the corresponding probabilities.\n\nThe type requirements on the keys of the dictionary are the same as the elements of support given above with this exception: if non-categorical elements (raw labels) are used as keys, then pool=... 
must be specified and cannot be missing.\n\nIf the values (probabilities) are arrays instead of scalars, then an abstract array of UnivariateFinite elements is created, with the same size as the array.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.classes-Tuple{Any}","page":"Reference","title":"MLJModelInterface.classes","text":"classes(x)\n\nAll the categorical elements with the same pool as x (including x), returned as a list, with an ordering consistent with the pool. Here x has CategoricalValue type, and classes(x) is a vector of the same eltype. Note that x in classes(x) is always true.\n\nNot to be confused with levels(x.pool). See the example below.\n\njulia> v = categorical([\"c\", \"b\", \"c\", \"a\"])\n4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"c\"\n \"b\"\n \"c\"\n \"a\"\n\njulia> levels(v)\n3-element Vector{String}:\n \"a\"\n \"b\"\n \"c\"\n\njulia> x = v[4]\nCategoricalArrays.CategoricalValue{String, UInt32} \"a\"\n\njulia> classes(x)\n3-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"a\"\n \"b\"\n \"c\"\n\njulia> levels(x.pool)\n3-element Vector{String}:\n \"a\"\n \"b\"\n \"c\"\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.decoder-Tuple{Any}","page":"Reference","title":"MLJModelInterface.decoder","text":"decoder(x)\n\nReturn a callable object for decoding the integer representation of a CategoricalValue sharing the same pool as the CategoricalValue x. Specifically, one has decoder(x)(int(y)) == y for all CategoricalValues y having the same pool as x. One can also call decoder(x) on integer arrays, in which case decoder(x) is broadcast over all elements.\n\nExamples\n\njulia> v = categorical([\"c\", \"b\", \"c\", \"a\"])\n4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"c\"\n \"b\"\n \"c\"\n \"a\"\n\njulia> int(v)\n4-element Vector{UInt32}:\n 0x00000003\n 0x00000002\n 0x00000003\n 0x00000001\n\njulia> d = decoder(v[3]);\n\njulia> d(int(v)) == v\ntrue\n\nWarning:\n\nIt is not true that int(d(u)) == u always holds.\n\nSee also: int.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.evaluate","page":"Reference","title":"MLJModelInterface.evaluate","text":"Some meta-models may choose to implement the evaluate operation.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.fit","page":"Reference","title":"MLJModelInterface.fit","text":"MLJModelInterface.fit(model, verbosity, data...) -> fitresult, cache, report\n\nAll models must implement a fit method. Here data is the output of reformat on user-provided data, or some resampling thereof. The fallback of reformat returns the user-provided data (e.g., a table).\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.fitted_params-Tuple{Model, Any}","page":"Reference","title":"MLJModelInterface.fitted_params","text":"fitted_params(model, fitresult) -> human_readable_fitresult # named_tuple\n\nModels may overload fitted_params. The fallback returns (fitresult=fitresult,).\n\nOther training-related outcomes should be returned in the report part of the tuple returned by fit.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.int-Tuple{Any}","page":"Reference","title":"MLJModelInterface.int","text":"int(x)\n\nThe positional integer of the CategoricalString or CategoricalValue x, in the ordering defined by the pool of x. 
The type of int(x) is the reference type of x.\n\nNot to be confused with x.ref, which is unchanged by reordering of the pool of x, but has the same type.\n\nint(X::CategoricalArray)\nint(W::Array{<:CategoricalString})\nint(W::Array{<:CategoricalValue})\n\nBroadcasted versions of int.\n\njulia> v = categorical([\"c\", \"b\", \"c\", \"a\"])\n4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"c\"\n \"b\"\n \"c\"\n \"a\"\n\njulia> levels(v)\n3-element Vector{String}:\n \"a\"\n \"b\"\n \"c\"\n\njulia> int(v)\n4-element Vector{UInt32}:\n 0x00000003\n 0x00000002\n 0x00000003\n 0x00000001\n\nSee also: decoder.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.inverse_transform","page":"Reference","title":"MLJModelInterface.inverse_transform","text":"Unsupervised models may implement the inverse_transform operation.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.is_same_except-Tuple{Any, Any}","page":"Reference","title":"MLJModelInterface.is_same_except","text":"is_same_except(m1, m2, exceptions::Symbol...; deep_properties=Symbol[])\n\nIf both m1 and m2 are of MLJType, return true if the following conditions all hold, and false otherwise:\n\ntypeof(m1) === typeof(m2)\npropertynames(m1) === propertynames(m2)\nwith the exception of properties listed as exceptions or bound to an AbstractRNG, each pair of corresponding property values is either \"equal\" or both undefined. (If a property appears as a propertyname but not a fieldname, it is deemed as always defined.)\n\nThe meaning of \"equal\" depends on the type of the property value:\n\nvalues that are themselves of MLJType are \"equal\" if they are equal in the sense of is_same_except with no exceptions.\nvalues that are not of MLJType are \"equal\" if they are ==.\n\nIn the special case of a \"deep\" property, \"equal\" has a different meaning; see deep_properties for details.\n\nIf m1 or m2 are not MLJType objects, then return ==(m1, m2).\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.isrepresented-Tuple{MLJType, Nothing}","page":"Reference","title":"MLJModelInterface.isrepresented","text":"isrepresented(object::MLJType, objects)\n\nTest if object has a representative in the iterable objects. This is a weaker requirement than object in objects.\n\nHere we say m1 represents m2 if is_same_except(m1, m2) is true.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.matrix-Tuple{Any}","page":"Reference","title":"MLJModelInterface.matrix","text":"matrix(X; transpose=false)\n\nIf X isa AbstractMatrix, return X or permutedims(X) if transpose=true. 
Otherwise if X is a Tables.jl compatible table source, convert X into a Matrix.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.metadata_model-Tuple{Any}","page":"Reference","title":"MLJModelInterface.metadata_model","text":"metadata_model(T; args...)\n\nHelper function to write the metadata for a model T.\n\nKeywords\n\ninput_scitype=Unknown: allowed scientific type of the input data\ntarget_scitype=Unknown: allowed scitype of the target (supervised)\noutput_scitype=Unknown: allowed scitype of the transformed data (unsupervised)\nsupports_weights=false: whether the model supports sample weights\nsupports_class_weights=false: whether the model supports class weights\nload_path=\"unknown\": where the model is (usually PackageName.ModelName)\nhuman_name=nothing: human name of the model\nsupports_training_losses=nothing: whether the (necessarily iterative) model can report training losses\nreports_feature_importances=nothing: whether the model reports feature importances\n\nExample\n\nmetadata_model(KNNRegressor,\n input_scitype=MLJModelInterface.Table(MLJModelInterface.Continuous),\n target_scitype=AbstractVector{MLJModelInterface.Continuous},\n supports_weights=true,\n load_path=\"NearestNeighbors.KNNRegressor\")\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.metadata_pkg-Tuple{Any}","page":"Reference","title":"MLJModelInterface.metadata_pkg","text":"metadata_pkg(T; args...)\n\nHelper function to write the metadata for a package providing model T. Use it with broadcasting to define the metadata of the package providing a series of models.\n\nKeywords\n\npackage_name=\"unknown\" : package name\npackage_uuid=\"unknown\" : package uuid\npackage_url=\"unknown\" : package url\nis_pure_julia=missing : whether the package is pure julia\npackage_license=\"unknown\": package license\nis_wrapper=false : whether the package is a wrapper\n\nExample\n\nmetadata_pkg.((KNNRegressor, KNNClassifier),\n package_name=\"NearestNeighbors\",\n package_uuid=\"b8a86587-4115-5ab1-83bc-aa920d37bbce\",\n package_url=\"https://github.com/KristofferC/NearestNeighbors.jl\",\n is_pure_julia=true,\n package_license=\"MIT\",\n is_wrapper=false)\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.nrows-Tuple{Any}","page":"Reference","title":"MLJModelInterface.nrows","text":"nrows(X)\n\nReturn the number of rows for a table, AbstractVector or AbstractMatrix, X.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.params-Tuple{Any}","page":"Reference","title":"MLJModelInterface.params","text":"params(m::MLJType)\n\nRecursively convert any transparent object m into a named tuple, keyed on the fields of m. An object is transparent if MLJModelInterface.istransparent(m) == true. The named tuple is possibly nested because params is recursively applied to the field values, which themselves might be transparent.\n\nMost objects of type MLJType are transparent.\n\njulia> params(EnsembleModel(model=ConstantClassifier()))\n(model = (target_type = Bool,),\n weights = Float64[],\n bagging_fraction = 0.8,\n rng_seed = 0,\n n = 100,\n parallel = true,)\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict","page":"Reference","title":"MLJModelInterface.predict","text":"predict(model, fitresult, new_data...)\n\nSupervised and SupervisedAnnotator models must implement the predict operation. 
Here new_data is the output of reformat called on user-specified data.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.predict_joint","page":"Reference","title":"MLJModelInterface.predict_joint","text":"JointProbabilistic supervised models MUST overload predict_joint.\n\nProbabilistic supervised models MAY overload predict_joint.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.predict_mean","page":"Reference","title":"MLJModelInterface.predict_mean","text":"Model types M for which prediction_type(M) == :probabilistic may overload predict_mean.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.predict_median","page":"Reference","title":"MLJModelInterface.predict_median","text":"Model types M for which prediction_type(M) == :probabilistic may overload predict_median.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.predict_mode","page":"Reference","title":"MLJModelInterface.predict_mode","text":"Model types M for which prediction_type(M) == :probabilistic may overload predict_mode.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.reformat-Tuple{Model, Vararg{Any}}","page":"Reference","title":"MLJModelInterface.reformat","text":"MLJModelInterface.reformat(model, args...) -> data\n\nModels optionally overload reformat to define transformations of user-supplied data into some model-specific representation (e.g., from a table to a matrix). When implemented, the MLJ user can avoid repeating such transformations unnecessarily, and can additionally make use of more efficient row subsampling, which is then based on the model-specific representation of data, rather than the user-representation. When reformat is overloaded, selectrows(::Model, ...) must be as well (see selectrows). Furthermore, the model fit method(s) and operations, such as predict and transform, must be refactored to act on the model-specific representations of the data.\n\nTo implement the reformat data front-end for a model, refer to \"Implementing a data front-end\" in the MLJ manual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.scitype-Tuple{Any}","page":"Reference","title":"MLJModelInterface.scitype","text":"scitype(X)\n\nThe scientific type (interpretation) of X, distinct from its machine type.\n\nExamples\n\njulia> scitype(3.14)\nContinuous\n\njulia> scitype([1, 2, missing])\nAbstractVector{Union{Missing, Count}} \n\njulia> scitype((5, \"beige\"))\nTuple{Count, Textual}\n\njulia> using CategoricalArrays\n\njulia> X = (gender = categorical(['M', 'M', 'F', 'M', 'F']),\n ndevices = [1, 3, 2, 3, 2]);\n\njulia> scitype(X)\nTable{Union{AbstractVector{Count}, AbstractVector{Multiclass{2}}}}\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.select","page":"Reference","title":"MLJModelInterface.select","text":"select(X, r, c)\n\nSelect element(s) of a table or matrix at row(s) r and column(s) c. An object of the sink type of X (or a matrix) is returned unless c is a single integer or symbol. In that case a vector is returned, unless r is a single integer, in which case a single element is returned.\n\nSee also: selectrows, selectcols.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.selectcols","page":"Reference","title":"MLJModelInterface.selectcols","text":"selectcols(X, c)\n\nSelect single or multiple columns from a matrix or table X. 
If c is an abstract vector of integers or symbols, then the object returned is a table of the preferred sink type of typeof(X). If c is a single integer or symbol, then an AbstractVector is returned.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.selectrows","page":"Reference","title":"MLJModelInterface.selectrows","text":"selectrows(X, r)\n\nSelect single or multiple rows from a table, abstract vector or matrix X. If X is tabular, the object returned is a table of the preferred sink type of typeof(X), even if only a single row is selected.\n\nIf the object is neither a table, abstract vector, nor matrix, X is returned and r is ignored.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.selectrows-Tuple{Model, Any, Vararg{Any}}","page":"Reference","title":"MLJModelInterface.selectrows","text":"MLJModelInterface.selectrows(::Model, I, data...) -> sampled_data\n\nA model overloads selectrows whenever it buys into the optional reformat front-end for data preprocessing. See reformat for details. The fallback assumes data is a tuple and calls selectrows(X, I) for each X in data, returning the results in a new tuple of the same length. This call makes sense when X is a table, abstract vector or abstract matrix. In the last two cases, a new object and not a view is returned.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.table-Tuple{Any}","page":"Reference","title":"MLJModelInterface.table","text":"table(columntable; prototype=nothing)\n\nConvert a named tuple of vectors or tuples columntable, into a table of the \"preferred sink type\" of prototype. This is often the type of prototype itself, when prototype is a sink; see the Tables.jl documentation. If prototype is not specified, then a named tuple of vectors is returned.\n\ntable(A::AbstractMatrix; names=nothing, prototype=nothing)\n\nWrap an abstract matrix A as a Tables.jl compatible table with the specified column names (a tuple of symbols). If names are not specified, names=(:x1, :x2, ..., :xn) is used, where n=size(A, 2).\n\nIf a prototype is specified, then the matrix is materialized as a table of the preferred sink type of prototype, rather than wrapped. Note that if prototype is not specified, then matrix(table(A)) is essentially a no-op.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.training_losses-Tuple{Any, Any}","page":"Reference","title":"MLJModelInterface.training_losses","text":"MLJModelInterface.training_losses(model::M, report)\n\nIf M is an iterative model type which calculates training losses, implement this method to return an AbstractVector of the losses in historical order. If the model calculates scores instead, then the sign of the scores should be reversed.\n\nThe following trait overload is also required: MLJModelInterface.supports_training_losses(::Type{<:M}) = true.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.transform","page":"Reference","title":"MLJModelInterface.transform","text":"Unsupervised models must implement the transform operation.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.update-Tuple{Model, Any, Any, Any, Vararg{Any}}","page":"Reference","title":"MLJModelInterface.update","text":"MLJModelInterface.update(model, verbosity, fitresult, cache, data...)\n\nModels may optionally implement an update method. 
The fallback calls fit.\n\n\n\n\n\n","category":"method"},{"location":"reference/#StatisticalTraits.deep_properties","page":"Reference","title":"StatisticalTraits.deep_properties","text":"deep_properties(::Type{<:MLJType})\n\nGiven an MLJType subtype M, the value of this trait should be a tuple of any properties of M to be regarded as \"deep\".\n\nWhen two instances of type M are to be tested for equality, in the sense of == or is_same_except, then the values of a \"deep\" property (whose values are assumed to be of composite type) are deemed to agree if all corresponding properties of those property values are ==.\n\nAny property of M whose values are themselves of MLJType is \"deep\" automatically, and should not be included in the trait return value.\n\nSee also is_same_except.\n\nExample\n\nConsider an MLJType subtype Foo, with a single field of type Bar which is not a subtype of MLJType:\n\nmutable struct Bar\n x::Int\nend\n\nmutable struct Foo <: MLJType\n bar::Bar\nend\n\nThen the mutability of Bar implies Bar(1) != Bar(1) and so, by the definition of == for MLJType objects (see is_same_except) we have\n\nFoo(Bar(1)) != Foo(Bar(1))\n\nHowever, after the declaration\n\nMLJModelInterface.deep_properties(::Type{<:Foo}) = (:bar,)\n\nwe have\n\nFoo(Bar(1)) == Foo(Bar(1))\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.@mlj_model-Tuple{Any}","page":"Reference","title":"MLJModelInterface.@mlj_model","text":"@mlj_model\n\nMacro to help define MLJ models with constraints on the default parameters.\n\n\n\n\n\n","category":"macro"},{"location":"reference/","page":"Reference","title":"Reference","text":"Modules = [MLJModelInterface,]\nPublic = false\nOrder = [:constant, :type, :function, :macro, :module]","category":"page"},{"location":"reference/#MLJModelInterface._model_cleaner-Tuple{Any, Any, Any}","page":"Reference","title":"MLJModelInterface._model_cleaner","text":"_model_cleaner(modelname, defaults, constraints)\n\nBuild the expression of the cleaner associated with the constraints specified in a model definition.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface._model_constructor-Tuple{Any, Any, Any}","page":"Reference","title":"MLJModelInterface._model_constructor","text":"_model_constructor(modelname, params, defaults)\n\nBuild the expression of the keyword constructor associated with a model definition. When the constructor is called, the clean! function is called as well to check that parameter assignments are valid.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface._process_model_def-Tuple{Any, Any}","page":"Reference","title":"MLJModelInterface._process_model_def","text":"_process_model_def(modl, ex)\n\nTake an expression defining a model (mutable struct Model ...) and unpack key elements for further processing:\n\nModel name (modelname)\nNames of parameters (params)\nDefault values (defaults)\nConstraints (constraints)\n\nWhen no default field value is given, a heuristic is applied to guess an appropriate default (e.g., zero for a Float64 parameter). To this end, the specified type expression is evaluated in the module modl.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface._unpack!-Tuple{Expr, Any}","page":"Reference","title":"MLJModelInterface._unpack!","text":"_unpack!(ex, rep)\n\nInternal function to read a constraint given after a default value for a parameter and transform it into an executable condition (which is returned to be executed later). 
For instance, if we have\n\nalpha::Int = 0.5::(arg > 0.0)\n\nThen it would transform the (arg > 0.0) into (alpha > 0.0), which is executable.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.doc_header-Tuple{Any}","page":"Reference","title":"MLJModelInterface.doc_header","text":"MLJModelInterface.doc_header(SomeModelType; augment=false)\n\nReturn a string suitable for interpolation in the document string of an MLJ model type. In the example given below, the header expands to something like this:\n\nFooRegressorA model type for constructing a foo regressor, based on FooRegressorPkg.jl.From MLJ, the type can be imported usingFooRegressor = @load FooRegressor pkg=FooRegressorPkgConstruct an instance with default hyper-parameters using the syntax model = FooRegressor(). Provide keyword arguments to override hyper-parameter defaults, as in FooRegressor(a=...).\n\nOrdinarily, doc_header is used in document strings defined after the model type definition, as doc_header assumes model traits (in particular, package_name and package_url) to be defined; see also MLJModelInterface.metadata_pkg.\n\nExample\n\nSuppose a model type and traits have been defined by:\n\nmutable struct FooRegressor\n a::Int\n b::Float64\nend\n\nmetadata_pkg(FooRegressor,\n name=\"FooRegressorPkg\",\n uuid=\"10745b16-79ce-11e8-11f9-7d13ad32a3b2\",\n url=\"http://existentialcomics.com/\",\n )\nmetadata_model(FooRegressor,\n input=Table(Continuous),\n target=AbstractVector{Continuous})\n\nThen the docstring is defined after these declarations with the following code:\n\n\"\"\"\n$(MLJModelInterface.doc_header(FooRegressor))\n\n### Training data\n\nIn MLJ or MLJBase, bind an instance `model` ...\n\n\n\n\"\"\"\nFooRegressor\n\n\nVariation to augment existing document string\n\nFor models that have a native API with separate documentation, one may want to call doc_header(FooRegressor, augment=true) instead. In that case, the output will look like this:\n\nFrom MLJ, the FooRegressor type can be imported usingFooRegressor = @load FooRegressor pkg=FooRegressorPkgConstruct an instance with default hyper-parameters using the syntax model = FooRegressor(). Provide keyword arguments to override hyper-parameter defaults, as in FooRegressor(a=...).\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.feature_importances","page":"Reference","title":"MLJModelInterface.feature_importances","text":"feature_importances(model::M, fitresult, report)\n\nFor a given model of model type M supporting intrinsic feature importances, calculate the feature importances from the model's fitresult and report as an abstract vector of feature::Symbol => importance::Real pairs (e.g., [:gender =>0.23, :height =>0.7, :weight => 0.1]).\n\nNew model implementations\n\nThe following trait overload is also required: MLJModelInterface.reports_feature_importances(::Type{<:M}) = true\n\nIf for some reason a model is sometimes unable to report feature importances, then feature_importances should return all importances as 0.0, as in [:gender =>0.0, :height =>0.0, :weight => 0.0].\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.flat_params-Tuple{Any}","page":"Reference","title":"MLJModelInterface.flat_params","text":"flat_params(m::Model)\n\nDeconstruct any Model instance model as a flat named tuple, keyed on property names. Properties of nested model instances are recursively exposed, as shown in the example below. 
For most Model objects, properties are synonymous with fields, but this is not a hard requirement.\n\njulia> using MLJModels\njulia> using EnsembleModels\njulia> tree = (@load DecisionTreeClassifier pkg=DecisionTree)();\n\njulia> flat_params(EnsembleModel(model=tree))\n(model__max_depth = -1,\n model__min_samples_leaf = 1,\n model__min_samples_split = 2,\n model__min_purity_increase = 0.0,\n model__n_subfeatures = 0,\n model__post_prune = false,\n model__merge_purity_threshold = 1.0,\n model__display_depth = 5,\n model__feature_importance = :impurity,\n model__rng = Random._GLOBAL_RNG(),\n atomic_weights = Float64[],\n bagging_fraction = 0.8,\n rng = Random._GLOBAL_RNG(),\n n = 100,\n acceleration = CPU1{Nothing}(nothing),\n out_of_bag_measure = Any[],)\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.istable-Tuple{Any}","page":"Reference","title":"MLJModelInterface.istable","text":"istable(X)\n\nReturn true if X is tabular.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.report-Tuple{Any, Any}","page":"Reference","title":"MLJModelInterface.report","text":"MLJModelInterface.report(model, report_given_method)\n\nMerge the reports in the dictionary report_given_method into a single property-accessible object. It is supposed that each key of the dictionary is either :fit or the name of an operation, such as :predict or :transform. Each value will be the report component returned by a training method (fit or update) dispatched on the model type, in the case of :fit, or the report component returned by an operation that supports reporting.\n\nNew model implementations\n\nOverloading this method is optional, unless the model generates reports that are neither named tuples nor nothing.\n\nAssuming each value in the report_given_method dictionary is either a named tuple or nothing, and there are no conflicts between the keys of the dictionary values (the individual reports), the fallback returns the usual named tuple merge of the dictionary values, ignoring any nothing value. If there is a key conflict, all operation reports are first wrapped in a named tuple of length one, as in (predict=predict_report,). A :fit report is never wrapped.\n\nIf any dictionary value is neither a named tuple nor nothing, it is first wrapped as (report=value, ) before merging.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.schema-Tuple{Any}","page":"Reference","title":"MLJModelInterface.schema","text":"schema(X)\n\nInspect the column types and scitypes of a tabular object. Returns nothing if the column types and scitypes can't be inspected.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.synthesize_docstring-Tuple{Any}","page":"Reference","title":"MLJModelInterface.synthesize_docstring","text":"synthesize_docstring\n\nPrivate method.\n\nGenerates a value for the docstring trait for use with a model which does not have a standard document string, to use as the fallback. 
See metadata_model.\n\n\n\n\n\n","category":"method"},{"location":"training_losses/#Training-losses","page":"Training losses","title":"Training losses","text":"","category":"section"},{"location":"training_losses/","page":"Training losses","title":"Training losses","text":"MLJModelInterface.training_losses","category":"page"},{"location":"training_losses/#MLJModelInterface.training_losses-training_losses","page":"Training losses","title":"MLJModelInterface.training_losses","text":"MLJModelInterface.training_losses(model::M, report)\n\nIf M is an iterative model type which calculates training losses, implement this method to return an AbstractVector of the losses in historical order. If the model calculates scores instead, then the sign of the scores should be reversed.\n\nThe following trait overload is also required: MLJModelInterface.supports_training_losses(::Type{<:M}) = true.\n\n\n\n\n\n","category":"function"},{"location":"training_losses/","page":"Training losses","title":"Training losses","text":"Trait values can also be set using the metadata_model method; see below.","category":"page"},{"location":"supervised_models_with_transform/#Supervised-models-with-a-transform-method","page":"Supervised models with a transform method","title":"Supervised models with a transform method","text":"","category":"section"},{"location":"supervised_models_with_transform/","page":"Supervised models with a transform method","title":"Supervised models with a transform method","text":"A supervised model may optionally implement a transform method, whose signature is the same as predict. In that case, the implementation should define a value for the output_scitype trait. A declaration","category":"page"},{"location":"supervised_models_with_transform/","page":"Supervised models with a transform method","title":"Supervised models with a transform method","text":"output_scitype(::Type{<:SomeSupervisedModel}) = T","category":"page"},{"location":"supervised_models_with_transform/","page":"Supervised models with a transform method","title":"Supervised models with a transform method","text":"is an assurance that scitype(transform(model, fitresult, Xnew)) <: T always holds, for any model of type SomeSupervisedModel.","category":"page"},{"location":"supervised_models_with_transform/","page":"Supervised models with a transform method","title":"Supervised models with a transform method","text":"A use-case for a transform method for a supervised model is a neural network that learns feature embeddings for categorical input features as part of overall training. Such a model becomes a transformer that other supervised models can use to transform the categorical features (instead of applying the higher-dimensional one-hot encoding representations).","category":"page"},{"location":"document_strings/#Document-strings","page":"Document strings","title":"Document strings","text":"","category":"section"},{"location":"document_strings/","page":"Document strings","title":"Document strings","text":"To be registered, MLJ models must include a detailed document string for the model type, and this must conform to the standard outlined below. We recommend you simply adapt an existing compliant document string, reading the requirements below if you're not sure of a point, or using them as a checklist. 
Here are examples of compliant doc-strings (go to the end of the linked files):","category":"page"},{"location":"document_strings/","page":"Document strings","title":"Document strings","text":"Regular supervised models (classifiers and regressors): MLJDecisionTreeInterface.jl (see the end of the file)\nTransformers: MLJModels.jl","category":"page"},{"location":"document_strings/","page":"Document strings","title":"Document strings","text":"A utility function is available for generating a standardized header for your doc-strings (but you provide most detail by hand):","category":"page"},{"location":"document_strings/","page":"Document strings","title":"Document strings","text":"MLJModelInterface.doc_header","category":"page"},{"location":"document_strings/#MLJModelInterface.doc_header","page":"Document strings","title":"MLJModelInterface.doc_header","text":"MLJModelInterface.doc_header(SomeModelType; augment=false)\n\nReturn a string suitable for interpolation in the document string of an MLJ model type. In the example given below, the header expands to something like this:\n\nFooRegressorA model type for constructing a foo regressor, based on FooRegressorPkg.jl.From MLJ, the type can be imported usingFooRegressor = @load FooRegressor pkg=FooRegressorPkgConstruct an instance with default hyper-parameters using the syntax model = FooRegressor(). Provide keyword arguments to override hyper-parameter defaults, as in FooRegressor(a=...).\n\nOrdinarily, doc_header is used in document strings defined after the model type definition, as doc_header assumes model traits (in particular, package_name and package_url) to be defined; see also MLJModelInterface.metadata_pkg.\n\nExample\n\nSuppose a model type and traits have been defined by:\n\nmutable struct FooRegressor\n a::Int\n b::Float64\nend\n\nmetadata_pkg(FooRegressor,\n name=\"FooRegressorPkg\",\n uuid=\"10745b16-79ce-11e8-11f9-7d13ad32a3b2\",\n url=\"http://existentialcomics.com/\",\n )\nmetadata_model(FooRegressor,\n input=Table(Continuous),\n target=AbstractVector{Continuous})\n\nThen the docstring is defined after these declarations with the following code:\n\n\"\"\"\n$(MLJModelInterface.doc_header(FooRegressor))\n\n### Training data\n\nIn MLJ or MLJBase, bind an instance `model` ...\n\n\n\n\"\"\"\nFooRegressor\n\n\nVariation to augment existing document string\n\nFor models that have a native API with separate documentation, one may want to call doc_header(FooRegressor, augment=true) instead. In that case, the output will look like this:\n\nFrom MLJ, the FooRegressor type can be imported usingFooRegressor = @load FooRegressor pkg=FooRegressorPkgConstruct an instance with default hyper-parameters using the syntax model = FooRegressor(). Provide keyword arguments to override hyper-parameter defaults, as in FooRegressor(a=...).\n\n\n\n\n\n","category":"function"},{"location":"document_strings/#The-document-string-standard","page":"Document strings","title":"The document string standard","text":"","category":"section"},{"location":"document_strings/","page":"Document strings","title":"Document strings","text":"Your document string must include the following components, in order:","category":"page"},{"location":"document_strings/","page":"Document strings","title":"Document strings","text":"A header, closely matching the example given above.\nA reference describing the algorithm or an actual description of the algorithm, if necessary. Detail any non-standard aspects of the implementation. 
Generally, defer details on the role of hyperparameters to the \"Hyperparameters\" section (see below).\nInstructions on how to import the model type from MLJ (because a user can already inspect the doc-string in the Model Registry, without having loaded the code-providing package).\nInstructions on how to instantiate with default hyperparameters or with keywords.\nA Training data section: explains how to bind a model to data in a machine with all possible signatures (e.g., machine(model, X, y) but also machine(model, X, y, w) if, say, weights are supported); the role and scitype requirements for each data argument should be itemized.\nInstructions on how to fit the machine (in the same section).\nA Hyperparameters section (unless there aren't any): an itemized list of the parameters, with defaults given.\nAn Operations section: each implemented operation (predict, predict_mode, transform, inverse_transform, etc.) is itemized and explained. This should include operations with no data arguments, such as training_losses and feature_importances.\nA Fitted parameters section: To explain what is returned by fitted_params(mach) (the same as MLJModelInterface.fitted_params(model, fitresult) - see later) with the fields of that named tuple itemized.\nA Report section (if report is non-empty): To explain what, if anything, is included in the report(mach) (the same as the report return value of MLJModelInterface.fit) with the fields itemized.\nAn optional but highly recommended Examples section, which includes MLJ examples, but which could also include others if the model type also implements a second \"local\" interface, i.e., defined in the same module. (Note that each module referring to a type can declare separate doc-strings which appear concatenated in doc-string queries.)\nA closing \"See also\" sentence which includes a @ref link to the raw model type (if you are wrapping one).","category":"page"},{"location":"feature_importances/#Feature-importances","page":"Feature importances","title":"Feature importances","text":"","category":"section"},{"location":"feature_importances/","page":"Feature importances","title":"Feature importances","text":"MLJModelInterface.feature_importances","category":"page"},{"location":"feature_importances/#MLJModelInterface.feature_importances-feature_importances","page":"Feature importances","title":"MLJModelInterface.feature_importances","text":"feature_importances(model::M, fitresult, report)\n\nFor a given model of model type M supporting intrinsic feature importances, calculate the feature importances from the model's fitresult and report as an abstract vector of feature::Symbol => importance::Real pairs (e.g., [:gender => 0.23, :height => 0.7, :weight => 0.1]).\n\nNew model implementations\n\nThe following trait overload is also required: MLJModelInterface.reports_feature_importances(::Type{<:M}) = true\n\nIf for some reason a model is sometimes unable to report feature importances then feature_importances should return all importances as 0.0, as in [:gender => 0.0, :height => 0.0, :weight => 0.0].\n\n\n\n\n\n","category":"function"},{"location":"feature_importances/","page":"Feature importances","title":"Feature importances","text":"Trait values can also be set using the metadata_model method; see below.","category":"page"},{"location":"#Adding-Models-for-General-Use","page":"Home","title":"Adding Models for General Use","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"The machine learning tools provided by MLJ can be applied to the 
models in any package that imports MLJModelInterface and implements the API defined there, as outlined in this document. ","category":"page"},{"location":"","page":"Home","title":"Home","text":"tip: Tip\nThis is a reference document, which has become rather sprawling over the evolution of the MLJ project. We recommend starting with Quick start guide, which covers the main points relevant to most new model implementations. Most topics are only detailed for Supervised models, so if you are implementing another kind of model, you may still need to refer to the Supervised models section. ","category":"page"},{"location":"","page":"Home","title":"Home","text":"Interface code can be hosted by the package providing the core machine learning algorithm, or by a stand-alone \"interface-only\" package, using the template MLJExampleInterface.jl (see Where to place code implementing new models below). For a list of packages implementing the MLJ model API (natively, and in interface packages) see here.","category":"page"},{"location":"#Important","page":"Home","title":"Important","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"MLJModelInterface is a very light-weight interface allowing you to define your interface, but does not provide the functionality required to use or test your interface; this requires MLJBase. So, while you only need to add MLJModelInterface to your project's [deps], for testing purposes you need to add MLJBase to your project's [extras] and [targets]. In testing, simply use MLJBase in place of MLJModelInterface.","category":"page"},{"location":"","page":"Home","title":"Home","text":"It is assumed the reader has read the Getting Started section of the MLJ manual. To implement the API described here, some familiarity with the following packages is also helpful:","category":"page"},{"location":"","page":"Home","title":"Home","text":"ScientificTypes.jl (for specifying model requirements of data)\nDistributions.jl (for probabilistic predictions)\nCategoricalArrays.jl (essential if you are implementing a model handling data of Multiclass or OrderedFactor scitype; familiarity with CategoricalPool objects required)\nTables.jl (if your algorithm needs input data in a novel format).","category":"page"},{"location":"","page":"Home","title":"Home","text":"In MLJ, the basic interface exposed to the user, built atop the model interface described here, is the machine interface. After a first reading of this document, the reader may wish to refer to MLJ Internals for context.","category":"page"},{"location":"the_predict_joint_method/#The-predict_joint-method","page":"The predict_joint method","title":"The predict_joint method","text":"","category":"section"},{"location":"the_predict_joint_method/","page":"The predict_joint method","title":"The predict_joint method","text":"warning: Experimental\nThe following API is experimental. 
It is subject to breaking changes during minor or major releases without warning.","category":"page"},{"location":"the_predict_joint_method/","page":"The predict_joint method","title":"The predict_joint method","text":"MMI.predict_joint(model::SomeSupervisedModel, fitresult, Xnew) -> yhat","category":"page"},{"location":"the_predict_joint_method/","page":"The predict_joint method","title":"The predict_joint method","text":"Any Probabilistic model type SomeModel may optionally implement a predict_joint method, which has the same signature as predict, but whose predictions are a single distribution (rather than a vector of per-observation distributions).","category":"page"},{"location":"the_predict_joint_method/","page":"The predict_joint method","title":"The predict_joint method","text":"Specifically, the output yhat of predict_joint should be an instance of Distributions.Sampleable{<:Multivariate,V}, where scitype(V) = target_scitype(SomeModel) and samples have length n, where n is the number of observations in Xnew.","category":"page"},{"location":"the_predict_joint_method/","page":"The predict_joint method","title":"The predict_joint method","text":"If a new model type subtypes JointProbabilistic <: Probabilistic then implementation of predict_joint is compulsory.","category":"page"}] +[{"location":"the_model_type_hierarchy/#The-model-type-hierarchy","page":"The model type hierarchy","title":"The model type hierarchy","text":"","category":"section"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"A model is an object storing hyperparameters associated with some machine learning algorithm, and that is all. In MLJ, hyperparameters include configuration parameters, like the number of threads, and special instructions, such as \"compute feature rankings\", which may or may not affect the final learning outcome. However, the logging level (verbosity below) is excluded. Learned parameters (such as the coefficients in a linear model) have no place in the model struct.","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"The name of the Julia type associated with a model indicates the associated algorithm (e.g., DecisionTreeClassifier). The outcome of training a learning algorithm is called a fitresult. For ordinary multivariate regression, for example, this would be the coefficients and intercept. 
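For concreteness, such a fitresult might be bundled as a named tuple (a purely illustrative sketch; the API places no restriction on the form of a fitresult, and the names here are not prescribed):\n\n# the value returned by fit as its fitresult component, later consumed by predict:\nfitresult = (coefficients = [0.4, -1.3], intercept = 0.7)\n\n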
For a general supervised model, it is the (generally minimal) information needed to make new predictions.","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"The ultimate supertype of all models is MLJModelInterface.Model, which has two abstract subtypes:","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"abstract type Supervised <: Model end\nabstract type Unsupervised <: Model end","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"Supervised models are further divided according to whether they are able to furnish probabilistic predictions of the target (which they will then do by default) or directly predict \"point\" estimates, for each new input pattern:","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"abstract type Probabilistic <: Supervised end\nabstract type Deterministic <: Supervised end","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"Further division of model types is realized through Trait declarations.","category":"page"},{"location":"the_model_type_hierarchy/","page":"The model type hierarchy","title":"The model type hierarchy","text":"Associated with every concrete subtype of Model there must be a fit method, which implements the associated algorithm to produce the fitresult. Additionally, every Supervised model has a predict method, while Unsupervised models must have a transform method. More generally, methods such as these, that are dispatched on a model instance and a fitresult (plus other data), are called operations. Probabilistic supervised models optionally implement a predict_mode operation (in the case of classifiers) or predict_mean and/or predict_median operations (in the case of regressors), although MLJModelInterface also provides fallbacks that will suffice in most cases. Unsupervised models may implement an inverse_transform operation.","category":"page"},{"location":"quick_start_guide/#Quick-start-guide","page":"Quick-start guide","title":"Quick start guide","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"The following are condensed and informal instructions for implementing the MLJ model interface for a new machine learning model. We assume: (i) you have a registered Julia package YourPackage.jl implementing some machine learning models; (ii) that you would like to interface and register these models with MLJ; and (iii) that you have a rough understanding of how things work with MLJ. In particular, you are familiar with:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"what scientific types are\nwhat Probabilistic, Deterministic and Unsupervised models are\nthe fact that MLJ generally works with tables rather than matrices. 
Here a table is a container X implementing the Tables.jl API and satisfying Tables.istable(X) == true (e.g., DataFrame, JuliaDB table, CSV file, named tuple of equal-length vectors)\nCategoricalArrays.jl, if working with finite discrete data, e.g., doing classification; see also the Working with Categorical Data section of the MLJ manual.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"If you're not familiar with any one of these points, the Getting Started section of the MLJ manual may help.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"But tables don't make sense for my model! If a case can be made that tabular input does not make sense for your particular model, then MLJ can still handle this; you just need to define a non-tabular input_scitype trait. However, you should probably open an issue to clarify the appropriate declaration. The discussion below assumes input data is tabular.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"For simplicity, this document assumes no data front-end is to be defined for your model. A data front-end, which offers the MLJ user some performance benefits, is easy to add post-facto, and is described in Implementing a data front-end.","category":"page"},{"location":"quick_start_guide/#Overview","page":"Quick-start guide","title":"Overview","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"To write an interface, create a file or a module in your package which includes:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"a using MLJModelInterface or import MLJModelInterface statement\nMLJ-compatible model types and constructors,\nimplementation of fit, predict/transform and optionally fitted_params for your models,\nmetadata for your package and for each of your models","category":"page"},{"location":"quick_start_guide/#Important","page":"Quick-start guide","title":"Important","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"MLJModelInterface is a very light-weight interface allowing you to define your interface, but does not provide the functionality required to use or test your interface; this requires MLJBase. So, while you only need to add MLJModelInterface to your project's [deps], for testing purposes you need to add MLJBase to your project's [extras] and [targets]. In testing, simply use MLJBase in place of MLJModelInterface.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"We give some details for each step below, along with a few examples each time that you can mimic. 
The instructions are intentionally brief.","category":"page"},{"location":"quick_start_guide/#Model-type-and-constructor","page":"Quick-start guide","title":"Model type and constructor","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Your MLJ-compatible model types, and their constructors, need to meet the following requirements:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"be mutable structs,\nbe subtypes of MLJModelInterface.Probabilistic or MLJModelInterface.Deterministic or MLJModelInterface.Unsupervised,\nhave fields corresponding exclusively to hyperparameters,\nhave a keyword constructor assigning default values to all hyperparameters.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"You may use the @mlj_model macro from MLJModelInterface to declare a (non-parametric) model type:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"MLJModelInterface.@mlj_model mutable struct YourModel <: MLJModelInterface.Deterministic\n a::Float64 = 0.5::(_ > 0)\n b::String = \"svd\"::(_ in (\"svd\",\"qr\"))\nend","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"That macro specifies:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"A keyword constructor (here YourModel(; a=..., b=...)),\nDefault values for the hyperparameters,\nConstraints on the hyperparameters where _ refers to the value passed.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Further to the last point, a::Float64 = 0.5::(_ > 0) indicates that the field a is a Float64, takes 0.5 as its default value, and expects its value to be positive.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Please see this issue for a known problem, and its workaround, relating to the use of @mlj_model with negative defaults.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"If you decide not to use the @mlj_model macro (e.g. in the case of a parametric type), you will need to write a keyword constructor and a clean! method:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"mutable struct YourModel <: MLJModelInterface.Deterministic\n a::Float64\nend\nfunction YourModel(; a=0.5)\n model = YourModel(a)\n message = MLJModelInterface.clean!(model)\n isempty(message) || @warn message\n return model\nend\nfunction MLJModelInterface.clean!(m::YourModel)\n warning = \"\"\n if m.a <= 0\n warning *= \"Parameter `a` expected to be positive, resetting to 0.5\"\n m.a = 0.5\n end\n return warning\nend","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Additional notes:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Please annotate all fields with concrete types, if possible, using type parameters if necessary.\nPlease prefer Symbol over String if you can (e.g. 
to pass the name of a solver).\nPlease add constraints to your fields even if they seem obvious to you.\nYour model may have zero fields; that's fine.\nAlthough not essential, try to avoid Union types for model fields. For example, a field declaration features::Vector{Symbol} with a default of Symbol[] (detected with the isempty method) is preferred to features::Union{Vector{Symbol}, Nothing} with a default of nothing.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Examples:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"KNNClassifier which uses @mlj_model,\nXGBoostRegressor which does not.","category":"page"},{"location":"quick_start_guide/#Fit","page":"Quick-start guide","title":"Fit","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"The implementation of fit will look like","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"function MLJModelInterface.fit(m::YourModel, verbosity, X, y, w=nothing)\n # body ...\n return (fitresult, cache, report)\nend","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"where y should only be there for a supervised model and w for a supervised model that supports sample weights. You must annotate verbosity with Int, but you must not type-annotate X, y or w (MLJ handles that).","category":"page"},{"location":"quick_start_guide/#Regressor","page":"Quick-start guide","title":"Regressor","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"In the body of the fit function, you should assume that X is a table and that y is an AbstractVector (for multitask regression it may be a table).","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Typical steps in the body of the fit function will be:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"forming a matrix-view of the data, possibly transposed if your model expects a p x n layout (MLJ assumes columns are features by default, i.e., n x p); use MLJModelInterface.matrix for this,\npassing the data to your model,\nreturning the results as a tuple (fitresult, cache, report).","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"The fitresult part should contain everything that is needed at the predict or transform step; it is not expected to be accessed by users. The cache should be left as nothing for now. The report should be a NamedTuple with any auxiliary useful information that a user would want to know about the fit (e.g., feature rankings). A minimal sketch of such a fit implementation is given below. 
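Here is the sketch, for a hypothetical deterministic regressor (the type name YourRegressor, the least-squares \"core algorithm\" and the report contents are all illustrative assumptions, not prescribed by the API):\n\nimport MLJModelInterface as MMI\nusing LinearAlgebra\n\nfunction MMI.fit(m::YourRegressor, verbosity::Int, X, y)\n    Xmat = MMI.matrix(X)               # n x p matrix view of the table\n    coefficients = pinv(Xmat) * y      # stand-in for the core algorithm (least squares)\n    cache = nothing\n    report = (dof = size(Xmat, 2),)    # auxiliary information for the user\n    return coefficients, cache, report # the coefficients serve as the fitresult\nend\n\n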
See more on this below.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Example: GLM's LinearRegressor","category":"page"},{"location":"quick_start_guide/#Classifier","page":"Quick-start guide","title":"Classifier","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"For a classifier, the steps are fairly similar to a regressor with these differences:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"y will be a categorical vector and you will typically want to use the integer encoding of y instead of CategoricalValues; use MLJModelInterface.int for this.\nYou will need to pass the full pool of target labels (not just those observed in the training data) and additionally, in the Deterministic case, the encoding, to make these available to predict. A simple way to do this is to pass y[1] in the fitresult, for then MLJModelInterface.classes(y[1]) is a complete list of possible categorical elements, and d = MLJModelInterface.decoder(y[1]) is a method for recovering categorical elements from their integer representations (e.g., d(2) is the categorical element with 2 as encoding).\nIn the case of a probabilistic classifier you should pass all probabilities simultaneously to the UnivariateFinite constructor to get an abstract UnivariateFinite vector (type UnivariateFiniteArray) rather than use comprehension or broadcasting to get a vanilla vector. This is for performance reasons.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"If implementing a classifier, you should probably consult the more detailed instructions at The predict method.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Examples:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"GLM's BinaryClassifier (Probabilistic)\nLIBSVM's SVC (Deterministic)","category":"page"},{"location":"quick_start_guide/#Transformer","page":"Quick-start guide","title":"Transformer","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Nothing special for a transformer.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Example: FillImputer","category":"page"},{"location":"quick_start_guide/#Fitted-parameters","page":"Quick-start guide","title":"Fitted parameters","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"There is a function you can optionally implement which will return the learned parameters of your model for user inspection. For instance, in the case of a linear regression, the user may want to get direct access to the coefficients and intercept. 
This should be as human and machine-readable as practical (not a graphical representation) and the information should be combined in the form of a named tuple.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"The function will always look like:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"function MLJModelInterface.fitted_params(model::YourModel, fitresult)\n # extract what's relevant from `fitresult`\n # ...\n # then return as a NamedTuple\n return (learned_param1 = ..., learned_param2 = ...)\nend","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Example: for GLM models","category":"page"},{"location":"quick_start_guide/#Summary-of-user-interface-points-(or,-What-to-put-where?)","page":"Quick-start guide","title":"Summary of user interface points (or, What to put where?)","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Recall that the fitresult returned as part of fit represents everything needed by predict (or transform) to make new predictions. It is not intended to be directly inspected by the user. Here is a summary of the interface points for users that your implementation creates:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Use fitted_params to expose learned parameters, such as linear coefficients, to the user in a machine and human-readable form (for re-use in another model, for example).\nUse the fields of your model struct for hyperparameters, i.e., those parameters declared by the user ahead of time that generally affect the outcome of training. It is okay to add \"control\" parameters (such as an acceleration parameter specifying computational resources, as here).\nUse report to return everything else, including model-specific methods (or other callable objects). This includes feature rankings, decision boundaries, SVM support vectors, clustering centres, methods for visualizing training outcomes, methods for saving learned parameters in a custom format, degrees of freedom, deviance, etc. If there is a performance cost to extra functionality you want to expose, the functionality can be toggled on/off through a hyperparameter, but this should otherwise be avoided. 
For example, in a decision tree model report.print_tree(depth) might generate a pretty tree representation of the learned tree, up to the specified depth.","category":"page"},{"location":"quick_start_guide/#Predict/Transform","page":"Quick-start guide","title":"Predict/Transform","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"The implementation of predict (for a supervised model) or transform (for an unsupervised one) will look like:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"function MLJModelInterface.predict(m::YourModel, fitresult, Xnew)\n # ...\nend","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Here Xnew is expected to be a table and part of the logic in predict or transform may be similar to that in fit.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"The values returned should be:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"model subtype | return value of predict/transform\nDeterministic | vector of values (or table, if multi-target)\nProbabilistic | vector of Distribution objects; for classifiers in particular, a vector of UnivariateFinite\nUnsupervised | table","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"In the case of a Probabilistic model, you may further want to implement a predict_mean or a predict_mode. However, MLJModelInterface provides fallbacks, defined in terms of predict, whose performance may suffice.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Examples","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Deterministic regression: KNNRegressor\nProbabilistic regression: LinearRegressor and the predict_mean\nProbabilistic classification: LogisticClassifier","category":"page"},{"location":"quick_start_guide/#Metadata-(traits)","page":"Quick-start guide","title":"Metadata (traits)","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Adding metadata for your model(s) is crucial for the discoverability of your package and its models, and for making sure your model is used with data it can handle. You can individually overload a number of trait functions that encode this metadata by following the instructions in Adding Models for General Use, which also explains these traits in more detail. 
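For instance, individual declarations might look like this (an illustrative sketch only, reusing the YourModel type from earlier; input_scitype, target_scitype and load_path are genuine traits):\n\nMLJModelInterface.input_scitype(::Type{<:YourModel}) = MLJModelInterface.Table(MLJModelInterface.Continuous)\nMLJModelInterface.target_scitype(::Type{<:YourModel}) = AbstractVector{MLJModelInterface.Continuous}\nMLJModelInterface.load_path(::Type{<:YourModel}) = \"YourPackage.YourModel\"\n\n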
However, your most convenient option is to use the metadata_model and metadata_pkg functionalities from MLJModelInterface to do this:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"const ALL_MODELS = Union{YourModel1, YourModel2, ...}\n\nMLJModelInterface.metadata_pkg.(ALL_MODELS,\n name = \"YourPackage\",\n uuid = \"6ee0df7b-...\", # see your Project.toml\n url = \"https://...\", # URL to your package repo\n julia = true, # is it written entirely in Julia?\n license = \"MIT\", # your package license\n is_wrapper = false, # does it wrap around some other package?\n)\n\n# Then for each model,\nMLJModelInterface.metadata_model(YourModel1,\n input_scitype = MLJModelInterface.Table(MLJModelInterface.Continuous), # what input data is supported?\n target_scitype = AbstractVector{MLJModelInterface.Continuous}, # for a supervised model, what target?\n output_scitype = MLJModelInterface.Table(MLJModelInterface.Continuous), # for an unsupervised, what output?\n supports_weights = false, # does the model support sample weights?\n descr = \"A short description of your model\",\n load_path = \"YourPackage.SubModuleContainingModelStructDefinition.YourModel1\"\n)","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Important. Do not omit the load_path specification. Without a correct load_path MLJ will be unable to import your model.","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"Examples:","category":"page"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"package metadata\nGLM\nMLJLinearModels\nmodel metadata\nLinearRegressor\nDecisionTree\nA series of regressors","category":"page"},{"location":"quick_start_guide/#Adding-a-model-to-the-model-registry","page":"Quick-start guide","title":"Adding a model to the model registry","text":"","category":"section"},{"location":"quick_start_guide/","page":"Quick-start guide","title":"Quick-start guide","text":"See How to add models to the MLJ model registry.","category":"page"},{"location":"convenience_methods/#Convenience-methods","page":"Convenience methods","title":"Convenience methods","text":"","category":"section"},{"location":"convenience_methods/","page":"Convenience methods","title":"Convenience methods","text":"MMI.table\nMMI.matrix\nMMI.int\nMMI.UnivariateFinite\nMMI.classes\nMMI.decoder\nMMI.select\nMMI.selectrows\nMMI.selectcols\nMMI.UnivariateFinite","category":"page"},{"location":"convenience_methods/#MLJModelInterface.table-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.table","text":"table(columntable; prototype=nothing)\n\nConvert a named tuple of vectors or tuples, columntable, into a table of the \"preferred sink type\" of prototype. This is often the type of prototype itself, when prototype is a sink; see the Tables.jl documentation. If prototype is not specified, then a named tuple of vectors is returned.\n\ntable(A::AbstractMatrix; names=nothing, prototype=nothing)\n\nWrap an abstract matrix A as a Tables.jl compatible table with the specified column names (a tuple of symbols). If names are not specified, names=(:x1, :x2, ..., :xn) is used, where n=size(A, 2).\n\nIf a prototype is specified, then the matrix is materialized as a table of the preferred sink type of prototype, rather than wrapped. 
Note that if prototype is not specified, then matrix(table(A)) is essentially a no-op.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.matrix-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.matrix","text":"matrix(X; transpose=false)\n\nIf X isa AbstractMatrix, return X or permutedims(X) if transpose=true. Otherwise if X is a Tables.jl compatible table source, convert X into a Matrix.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.int-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.int","text":"int(x)\n\nThe positional integer of the CategoricalString or CategoricalValue x, in the ordering defined by the pool of x. The type of int(x) is the reference type of x.\n\nNot to be confused with x.ref, which is unchanged by reordering of the pool of x, but has the same type.\n\nint(X::CategoricalArray)\nint(W::Array{<:CategoricalString})\nint(W::Array{<:CategoricalValue})\n\nBroadcasted versions of int.\n\njulia> v = categorical([\"c\", \"b\", \"c\", \"a\"])\n4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"c\"\n \"b\"\n \"c\"\n \"a\"\n\njulia> levels(v)\n3-element Vector{String}:\n \"a\"\n \"b\"\n \"c\"\n\njulia> int(v)\n4-element Vector{UInt32}:\n 0x00000003\n 0x00000002\n 0x00000003\n 0x00000001\n\nSee also: decoder.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.UnivariateFinite-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.UnivariateFinite","text":"UnivariateFinite(\n support,\n probs;\n pool=nothing,\n augmented=false,\n ordered=false\n)\n\nConstruct a discrete univariate distribution whose finite support is the elements of the vector support, and whose corresponding probabilities are elements of the vector probs. Alternatively, construct an abstract array of UnivariateFinite distributions by choosing probs to be an array of one higher dimension than the array generated.\n\nHere the word \"probabilities\" is an abuse of terminology as there is no requirement that probabilities actually sum to one, only that they be non-negative. So UnivariateFinite objects actually implement arbitrary non-negative measures over finite sets of labelled points. A UnivariateDistribution will be a bona fide probability measure when constructed using the augment=true option (see below) or when fit to data.\n\nUnless pool is specified, support should have type AbstractVector{<:CategoricalValue} and all elements are assumed to share the same categorical pool, which may be larger than support.\n\nImportant. All levels of the common pool have associated probabilities, not just those in the specified support. However, these probabilities are always zero (see example below).\n\nIf probs is a matrix, it should have a column for each class in support (or one less, if augment=true). 
More generally, probs will be an array whose size is of the form (n1, n2, ..., nk, c), where c = length(support) (or one less, if augment=true) and the constructor then returns an array of UnivariateFinite distributions of size (n1, n2, ..., nk).\n\nExamples\n\njulia> v = categorical([\"x\", \"x\", \"y\", \"x\", \"z\"])\n5-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"x\"\n \"y\"\n \"x\"\n \"z\"\n\njulia> UnivariateFinite(classes(v), [0.2, 0.3, 0.5])\nUnivariateFinite{Multiclass{3}}(x=>0.2, y=>0.3, z=>0.5)\n\njulia> d = UnivariateFinite([v[1], v[end]], [0.1, 0.9])\nUnivariateFinite{Multiclass{3}}(x=>0.1, z=>0.9)\n\njulia> rand(d, 3)\n3-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"z\"\n \"x\"\n\njulia> levels(d)\n3-element Vector{String}:\n \"x\"\n \"y\"\n \"z\"\n\njulia> pdf(d, \"y\")\n0.0\n\n\nSpecifying a pool\n\nAlternatively, support may be a list of raw (non-categorical) elements if pool is:\n\nsome CategoricalArray, CategoricalValue or CategoricalPool, such that support is a subset of levels(pool)\nmissing, in which case a new categorical pool is created which has support as its only levels.\n\nIn the last case, specify ordered=true if the pool is to be considered ordered.\n\njulia> UnivariateFinite([\"x\", \"z\"], [0.1, 0.9], pool=missing, ordered=true)\nUnivariateFinite{OrderedFactor{2}}(x=>0.1, z=>0.9)\n\njulia> d = UnivariateFinite([\"x\", \"z\"], [0.1, 0.9], pool=v) # v defined above\nUnivariateFinite{Multiclass{3}}(x=>0.1, z=>0.9)\n\njulia> pdf(d, \"y\") # allowed as `\"y\" in levels(v)`\n0.0\n\njulia> v = categorical([\"x\", \"x\", \"y\", \"x\", \"z\", \"w\"])\n6-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"x\"\n \"y\"\n \"x\"\n \"z\"\n \"w\"\n\njulia> probs = rand(100, 3); probs = probs ./ sum(probs, dims=2);\n\njulia> UnivariateFinite([\"x\", \"y\", \"z\"], probs, pool=v)\n100-element UnivariateFiniteVector{Multiclass{4}, String, UInt32, Float64}:\n UnivariateFinite{Multiclass{4}}(x=>0.194, y=>0.3, z=>0.505)\n UnivariateFinite{Multiclass{4}}(x=>0.727, y=>0.234, z=>0.0391)\n UnivariateFinite{Multiclass{4}}(x=>0.674, y=>0.00535, z=>0.321)\n ⋮\n UnivariateFinite{Multiclass{4}}(x=>0.292, y=>0.339, z=>0.369)\n\nProbability augmentation\n\nIf augment=true the provided array is augmented by inserting appropriate elements ahead of those provided, along the last dimension of the array. This means the user only provides probabilities for the classes c2, c3, ..., cn. The class c1 probabilities are chosen so that each UnivariateFinite distribution in the returned array is a bona fide probability distribution.\n\n\n\nUnivariateFinite(prob_given_class; pool=nothing, ordered=false)\n\nConstruct a discrete univariate distribution whose finite support is the set of keys of the provided dictionary, prob_given_class, and whose values specify the corresponding probabilities.\n\nThe type requirements on the keys of the dictionary are the same as the elements of support given above with this exception: if non-categorical elements (raw labels) are used as keys, then pool=... 
must be specified and cannot be missing.\n\nIf the values (probabilities) are arrays instead of scalars, then an abstract array of UnivariateFinite elements is created, with the same size as the array.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.classes-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.classes","text":"classes(x)\n\nAll the categorical elements with the same pool as x (including x), returned as a list, with an ordering consistent with the pool. Here x has CategoricalValue type, and classes(x) is a vector of the same eltype. Note that x in classes(x) is always true.\n\nNot to be confused with levels(x.pool). See the example below.\n\njulia> v = categorical([\"c\", \"b\", \"c\", \"a\"])\n4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"c\"\n \"b\"\n \"c\"\n \"a\"\n\njulia> levels(v)\n3-element Vector{String}:\n \"a\"\n \"b\"\n \"c\"\n\njulia> x = v[4]\nCategoricalArrays.CategoricalValue{String, UInt32} \"a\"\n\njulia> classes(x)\n3-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"a\"\n \"b\"\n \"c\"\n\njulia> levels(x.pool)\n3-element Vector{String}:\n \"a\"\n \"b\"\n \"c\"\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.decoder-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.decoder","text":"decoder(x)\n\nReturn a callable object for decoding the integer representation of a CategoricalValue sharing the same pool as the CategoricalValue x. Specifically, one has decoder(x)(int(y)) == y for all CategoricalValues y having the same pool as x. One can also call decoder(x) on integer arrays, in which case decoder(x) is broadcast over all elements.\n\nExamples\n\njulia> v = categorical([\"c\", \"b\", \"c\", \"a\"])\n4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"c\"\n \"b\"\n \"c\"\n \"a\"\n\njulia> int(v)\n4-element Vector{UInt32}:\n 0x00000003\n 0x00000002\n 0x00000003\n 0x00000001\n\njulia> d = decoder(v[3]);\n\njulia> d(int(v)) == v\ntrue\n\nWarning:\n\nIt is not true that int(d(u)) == u always holds.\n\nSee also: int.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.select-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.select","text":"select(X, r, c)\n\nSelect element(s) of a table or matrix at row(s) r and column(s) c. An object of the sink type of X (or a matrix) is returned unless c is a single integer or symbol. In that case a vector is returned, unless r is a single integer, in which case a single element is returned.\n\nSee also: selectrows, selectcols.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.selectrows-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.selectrows","text":"selectrows(X, r)\n\nSelect single or multiple rows from a table, abstract vector or matrix X. If X is tabular, the object returned is a table of the preferred sink type of typeof(X), even if only a single row is selected.\n\nIf the object is neither a table, abstract vector or matrix, X is returned and r is ignored.\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.selectcols-convenience_methods","page":"Convenience methods","title":"MLJModelInterface.selectcols","text":"selectcols(X, c)\n\nSelect single or multiple columns from a matrix or table X. If c is an abstract vector of integers or symbols, then the object returned is a table of the preferred sink type of typeof(X). If c is a single integer or symbol, then an AbstractVector is returned.
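\n\nFor illustration, if X is the column table (x1 = [1, 2, 3], x2 = [:a, :b, :c]), then (a sketch):\n\njulia> selectcols(X, :x1)\n3-element Vector{Int64}:\n 1\n 2\n 3\n\njulia> selectcols(X, [:x1])\n(x1 = [1, 2, 3],)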
\n\n\n\n\n\n","category":"function"},{"location":"convenience_methods/#MLJModelInterface.UnivariateFinite-convenience_methods-2","page":"Convenience methods","title":"MLJModelInterface.UnivariateFinite","text":"UnivariateFinite(\n support,\n probs;\n pool=nothing,\n augmented=false,\n ordered=false\n)\n\nConstruct a discrete univariate distribution whose finite support is the elements of the vector support, and whose corresponding probabilities are elements of the vector probs. Alternatively, construct an abstract array of UnivariateFinite distributions by choosing probs to be an array of one higher dimension than the array generated.\n\nHere the word \"probabilities\" is an abuse of terminology as there is no requirement that probabilities actually sum to one, only that they be non-negative. So UnivariateFinite objects actually implement arbitrary non-negative measures over finite sets of labelled points. A UnivariateDistribution will be a bona fide probability measure when constructed using the augment=true option (see below) or when fit to data.\n\nUnless pool is specified, support should have type AbstractVector{<:CategoricalValue} and all elements are assumed to share the same categorical pool, which may be larger than support.\n\nImportant. All levels of the common pool have associated probabilities, not just those in the specified support. However, these probabilities are always zero (see example below).\n\nIf probs is a matrix, it should have a column for each class in support (or one less, if augment=true). More generally, probs will be an array whose size is of the form (n1, n2, ..., nk, c), where c = length(support) (or one less, if augment=true) and the constructor then returns an array of UnivariateFinite distributions of size (n1, n2, ..., nk).\n\nExamples\n\njulia> v = categorical([\"x\", \"x\", \"y\", \"x\", \"z\"])\n5-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"x\"\n \"y\"\n \"x\"\n \"z\"\n\njulia> UnivariateFinite(classes(v), [0.2, 0.3, 0.5])\nUnivariateFinite{Multiclass{3}}(x=>0.2, y=>0.3, z=>0.5)\n\njulia> d = UnivariateFinite([v[1], v[end]], [0.1, 0.9])\nUnivariateFinite{Multiclass{3}}(x=>0.1, z=>0.9)\n\njulia> rand(d, 3)\n3-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"z\"\n \"x\"\n\njulia> levels(d)\n3-element Vector{String}:\n \"x\"\n \"y\"\n \"z\"\n\njulia> pdf(d, \"y\")\n0.0\n\n\nSpecifying a pool\n\nAlternatively, support may be a list of raw (non-categorical) elements if pool is:\n\nsome CategoricalArray, CategoricalValue or CategoricalPool, such that support is a subset of levels(pool)\nmissing, in which case a new categorical pool is created which has support as its only levels.\n\nIn the last case, specify ordered=true if the pool is to be considered ordered.\n\njulia> UnivariateFinite([\"x\", \"z\"], [0.1, 0.9], pool=missing, ordered=true)\nUnivariateFinite{OrderedFactor{2}}(x=>0.1, z=>0.9)\n\njulia> d = UnivariateFinite([\"x\", \"z\"], [0.1, 0.9], pool=v) # v defined above\nUnivariateFinite{Multiclass{3}}(x=>0.1, z=>0.9)\n\njulia> pdf(d, \"y\") # allowed as `\"y\" in levels(v)`\n0.0\n\njulia> v = categorical([\"x\", \"x\", \"y\", \"x\", \"z\", \"w\"])\n6-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"x\"\n \"y\"\n \"x\"\n \"z\"\n \"w\"\n\njulia> probs = rand(100, 3); probs = probs 
./ sum(probs, dims=2);\n\njulia> UnivariateFinite([\"x\", \"y\", \"z\"], probs, pool=v)\n100-element UnivariateFiniteVector{Multiclass{4}, String, UInt32, Float64}:\n UnivariateFinite{Multiclass{4}}(x=>0.194, y=>0.3, z=>0.505)\n UnivariateFinite{Multiclass{4}}(x=>0.727, y=>0.234, z=>0.0391)\n UnivariateFinite{Multiclass{4}}(x=>0.674, y=>0.00535, z=>0.321)\n ⋮\n UnivariateFinite{Multiclass{4}}(x=>0.292, y=>0.339, z=>0.369)\n\nProbability augmentation\n\nIf augment=true the provided array is augmented by inserting appropriate elements ahead of those provided, along the last dimension of the array. This means the user only provides probabilities for the classes c2, c3, ..., cn. The class c1 probabilities are chosen so that each UnivariateFinite distribution in the returned array is a bona fide probability distribution.\n\n\n\nUnivariateFinite(prob_given_class; pool=nothing, ordered=false)\n\nConstruct a discrete univariate distribution whose finite support is the set of keys of the provided dictionary, prob_given_class, and whose values specify the corresponding probabilities.\n\nThe type requirements on the keys of the dictionary are the same as the elements of support given above with this exception: if non-categorical elements (raw labels) are used as keys, then pool=... must be specified and cannot be missing.\n\nIf the values (probabilities) are arrays instead of scalars, then an abstract array of UnivariateFinite elements is created, with the same size as the array.\n\n\n\n\n\n","category":"function"},{"location":"form_of_data/#The-form-of-data-for-fitting-and-predicting","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"","category":"section"},{"location":"form_of_data/","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"The model implementer does not have absolute control over the types of data X, y and Xnew appearing in the fit and predict methods they must implement. Rather, they can specify the scientific type of this data by making appropriate declarations of the traits input_scitype and target_scitype discussed later under Trait declarations.","category":"page"},{"location":"form_of_data/","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"Important Note. Unless it genuinely makes little sense to do so, the MLJ recommendation is to specify a Table scientific type for X (and hence Xnew) and an AbstractVector scientific type (e.g., AbstractVector{Continuous}) for targets y. Algorithms requiring matrix input can coerce their inputs appropriately; see below.","category":"page"},{"location":"form_of_data/#Additional-type-coercions","page":"The form of data for fitting and predicting","title":"Additional type coercions","text":"","category":"section"},{"location":"form_of_data/","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"If the core algorithm being wrapped requires data in a different or more specific form, then fit will need to coerce the table into the form desired (and the same coercions applied to X will have to be repeated for Xnew in predict). To assist with common cases, MLJ provides the convenience method MMI.matrix. MMI.matrix(Xtable) has type Matrix{T} where T is the tightest common type of elements of Xtable, and Xtable is any table. 
(If Xtable is itself just a wrapped matrix, Xtable=Tables.table(A), then A=MMI.matrix(Xtable) will be returned without any copying.)","category":"page"},{"location":"form_of_data/","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"Alternatively, a more performant option is to implement a data front-end for your model; see Implementing a data front-end.","category":"page"},{"location":"form_of_data/","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"Other auxiliary methods provided by MLJModelInterface for handling tabular data are: selectrows, selectcols, select and schema (for extracting the size, names and eltypes of a table's columns). See Convenience methods below for details.","category":"page"},{"location":"form_of_data/#Important-convention","page":"The form of data for fitting and predicting","title":"Important convention","text":"","category":"section"},{"location":"form_of_data/","page":"The form of data for fitting and predicting","title":"The form of data for fitting and predicting","text":"It is to be understood that the columns of table X correspond to features and the rows to observations. So, for example, the predict method for a linear regression model might look like predict(model, w, Xnew) = MMI.matrix(Xnew)*w, where w is the vector of learned coefficients.","category":"page"},{"location":"serialization/#Serialization","page":"Serialization","title":"Serialization","text":"","category":"section"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"warning: New in MLJBase 0.20\nThe following API is incompatible with versions of MLJBase < 0.20, even for model implementations compatible with MLJModelInterface 1.x","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"This section may be occasionally relevant when wrapping models implemented in languages other than Julia.","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"The MLJ user can serialize and deserialize machines, as she would any other Julia object. (This user has the option of first removing data from the machine. See the Saving machines section of the MLJ manual for details.) However, a problem can occur if a model's fitresult (see The fit method) is not a persistent object. For example, it might be a C pointer that would have no meaning in a new Julia session.","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"If that is the case a model implementation needs to implement save and restore methods for switching between a fitresult and some persistent, serializable representation of that result.","category":"page"},{"location":"serialization/#The-save-method","page":"Serialization","title":"The save method","text":"","category":"section"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"MMI.save(model::SomeModel, fitresult; kwargs...) -> serializable_fitresult","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"Implement this method to return a persistent serializable representation of the fitresult component of the MMI.fit return value.","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"The fallback of save performs no action and returns fitresult.","category":"page"},{"location":"serialization/#The-restore-method","page":"Serialization","title":"The restore method","text":"","category":"section"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"MMI.restore(model::SomeModel, serializable_fitresult) -> fitresult","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"Implement this method to reconstruct a valid fitresult (as would be returned by MMI.fit) from a persistent representation constructed using MMI.save as described above.","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"The fallback of restore performs no action and returns serializable_fitresult.","category":"page"},
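{"location":"serialization/","page":"Serialization","title":"Serialization","text":"For orientation, the pair of overloads might look something like this (a hypothetical sketch: SomeModel is your model type, and to_bytes and from_bytes stand in for whatever persistent encoding the wrapped library provides):\n\nimport MLJModelInterface as MMI\n\nfunction MMI.save(model::SomeModel, fitresult; kwargs...)\n    return to_bytes(fitresult)   # hypothetical: encode fitresult as a persistent Vector{UInt8}\nend\n\nfunction MMI.restore(model::SomeModel, serializable_fitresult)\n    return from_bytes(serializable_fitresult)   # hypothetical inverse of to_bytes\nend","category":"page"},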
-> serializable_fitresult","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"Implement this method to return a persistent serializable representation of the fitresult component of the MMI.fit return value.","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"The fallback of save performs no action and returns fitresult.","category":"page"},{"location":"serialization/#The-restore-method","page":"Serialization","title":"The restore method","text":"","category":"section"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"MMI.restore(model::SomeModel, serializable_fitresult) -> fitresult","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"Implement this method to reconstruct a valid fitresult (as would be returned by MMI.fit) from a persistent representation constructed using MMI.save as described above.","category":"page"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"The fallback of restore performs no action and returns serializable_fitresult.","category":"page"},{"location":"serialization/#Example","page":"Serialization","title":"Example","text":"","category":"section"},{"location":"serialization/","page":"Serialization","title":"Serialization","text":"Refer to the model implementations at MLJXGBoostInterface.jl.","category":"page"},{"location":"iterative_models/#Iterative-models-and-the-update!-method","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"","category":"section"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"An update method may be optionally overloaded to enable a call by MLJ to retrain a model (on the same training data) to avoid repeating computations unnecessarily.","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y) -> fitresult, cache, report\nMMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y, w=nothing) -> fitresult, cache, report","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"Here the second variation applies if SomeSupervisedModel supports sample weights.","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"If an MLJ Machine is being fit! and it is not the first time, then update is called instead of fit, unless the machine fit! has been called with a new rows keyword argument. However, MLJModelInterface defines a fallback for update which just calls fit. For context, see the Internals section of the MLJ manual.","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"Learning networks wrapped as models constitute one use case (see the Composing Models section of the MLJ manual): one would like each component model to be retrained only when hyperparameter changes \"upstream\" make this necessary. 
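","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"For illustration, here is a sketch of an update implementation for a hypothetical iterative model (SomeIterativeModel, its iteration-count field n, and add_iterations! are illustrative assumptions; the sketch also assumes fit returns cache=(model=deepcopy(model),)):","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"const MMI = MLJModelInterface\n\nfunction MMI.update(model::SomeIterativeModel, verbosity, old_fitresult, old_cache, X, y)\n old_model = old_cache.model\n if MMI.is_same_except(model, old_model, :n) && model.n >= old_model.n\n # only the iteration count has increased; train the extra iterations:\n fitresult = add_iterations!(old_fitresult, X, y, model.n - old_model.n)\n return fitresult, (model=deepcopy(model),), NamedTuple()\n end\n return MMI.fit(model, verbosity, X, y) # cold restart\nend","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"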
In the learning network case, MLJ provides a fallback (specifically, the fallback is for any subtype of SupervisedNetwork = Union{DeterministicNetwork,ProbabilisticNetwork}). A second, more generally relevant, use case is iterative models, where calls to increase the number of iterations only restart the iterative procedure if other hyperparameters have also changed. (A useful method for inspecting model changes in such cases is MLJModelInterface.is_same_except.) For an example, see MLJEnsembles.jl.","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"A third use case is to avoid repeating the time-consuming preprocessing of X and y required by some models.","category":"page"},{"location":"iterative_models/","page":"Iterative models and the update! method","title":"Iterative models and the update! method","text":"If the argument fitresult (returned by a preceding call to fit) is not sufficient for performing an update, the author can arrange for fit to output in its cache return value any additional information required (for example, pre-processed versions of X and y), as this is also passed as an argument to the update method.","category":"page"},{"location":"fitting_distributions/#Models-that-learn-a-probability-distribution","page":"Models that learn a probability distribution","title":"Models that learn a probability distribution","text":"","category":"section"},{"location":"fitting_distributions/","page":"Models that learn a probability distribution","title":"Models that learn a probability distribution","text":"warning: Experimental\nThe following API is experimental. It is subject to breaking changes during minor or major releases without warning. Models implementing this interface will not work with MLJBase versions earlier than 0.17.5.","category":"page"},{"location":"fitting_distributions/","page":"Models that learn a probability distribution","title":"Models that learn a probability distribution","text":"Models that fit a probability distribution to some data should be regarded as Probabilistic <: Supervised models with target y = data and X = nothing.","category":"page"},{"location":"fitting_distributions/","page":"Models that learn a probability distribution","title":"Models that learn a probability distribution","text":"The predict method should return a single distribution.","category":"page"},{"location":"fitting_distributions/","page":"Models that learn a probability distribution","title":"Models that learn a probability distribution","text":"A working implementation of a model that fits a UnivariateFinite distribution to some categorical data using Laplace smoothing controlled by a hyperparameter alpha is given here.","category":"page"},{"location":"supervised_models/#Supervised-models","page":"Introduction","title":"Supervised models","text":"","category":"section"},{"location":"supervised_models/#Mathematical-assumptions","page":"Introduction","title":"Mathematical assumptions","text":"","category":"section"},{"location":"supervised_models/","page":"Introduction","title":"Introduction","text":"At present, MLJ's performance estimate functionality (resampling using evaluate/evaluate!) tacitly assumes that feature-label pairs of observations (X1, y1), (X2, y2), (X3, y3), ... are being modelled as independent and identically distributed (i.i.d.) random variables, and constructs some kind of representation of an estimate of the conditional probability p(y | X) (y and X single observations). 
It may be that a model implementing the MLJ interface has the potential to make predictions under weaker assumptions (e.g., time series forecasting models). However, the output of the compulsory predict method described below should be the output of the model under the i.i.d. assumption.","category":"page"},{"location":"supervised_models/","page":"Introduction","title":"Introduction","text":"In the future, new methods may be introduced to handle weaker assumptions (see, e.g., The predict_joint method below).","category":"page"},{"location":"supervised_models/","page":"Introduction","title":"Introduction","text":"The following sections were written with Supervised models in mind, but also cover material relevant to general models:","category":"page"},{"location":"supervised_models/","page":"Introduction","title":"Introduction","text":"Summary of methods\nThe form of data for fitting and predicting\nThe fit method\nThe fitted_params method\nThe predict method\nThe predict_joint method\nTraining losses\nFeature importances\nTrait declarations\nIterative models and the update! method\nImplementing a data front end\nSupervised models with a transform method\nModels that learn a probability distribution","category":"page"},{"location":"implementing_a_data_front_end/#Implementing-a-data-front-end","page":"Implementing a data front end","title":"Implementing a data front-end","text":"","category":"section"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"note: Note\nIt is suggested that packages implementing MLJ's model API that later implement a data front-end should tag their changes in a breaking release. (The changes will not break the use of models for the ordinary MLJ user, who interacts with models exclusively through the machine interface. However, it will break usage for some external packages that have chosen to depend directly on the model API.)","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"MLJModelInterface.reformat(model, args...) -> data\nMLJModelInterface.selectrows(::Model, I, data...) -> sampled_data","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Models optionally overload reformat to define transformations of user-supplied data into some model-specific representation (e.g., from a table to a matrix). Computational overheads associated with multiple fit!/predict/transform calls (on MLJ machines) are then avoided when memory resources allow. The fallback returns args (no transformation).","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"The selectrows(::Model, I, data...) method is overloaded to specify how the model-specific data is to be subsampled, for some observation indices I (a colon, :, or instance of AbstractVector{<:Integer}). In this way, implementing a data front-end also allows more efficient resampling of data (in user calls to evaluate!).","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"After detailing formal requirements for implementing a data front-end, we give a Sample implementation. 
A simple implementation also appears in the MLJDecisionTreeInterface.jl package.","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Here \"user-supplied data\" is what the MLJ user supplies when constructing a machine, as in machine(model, args...), which coincides with the arguments expected by fit(model, verbosity, args...) when reformat is not overloaded.","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Overloading reformat is permitted for any Model subtype, except for subtypes of Static. Here is a complete list of responsibilities for such an implementation, for some model::SomeModelType (a sample implementation follows below):","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"A reformat(model::SomeModelType, args...) -> data method must be implemented for each form of args... appearing in a valid machine construction machine(model, args...) (there will be one for each possible signature of fit(::SomeModelType, ...)).\nAdditionally, if not included above, there must be a single-argument form of reformat, reformat(model::SomeModelType, arg) -> (data,), serving as a data front-end for operations like predict. It must always hold that reformat(model, args...)[1] = reformat(model, args[1])[1].","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"The fallback is reformat(model, args...) = args (i.e., slurps provided data).","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Important. reformat(model::SomeModelType, args...) must always return a tuple, even if this has length one. The length of the tuple need not match length(args).","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"fit(model::SomeModelType, verbosity, data...) should be implemented as if data is the output of reformat(model, args...), where args is the data an MLJ user has bound to model in some machine. The same applies to any overloading of update.\nEach implemented operation, such as predict and transform - but excluding inverse_transform - must be defined as if its data arguments are reformatted versions of user-supplied data. For example, in the supervised case, data_new in predict(model::SomeModelType, fitresult, data_new) is reformat(model, Xnew), where Xnew is the data provided by the MLJ user in a call predict(mach, Xnew) (mach.model == model).\nTo specify how the model-specific representation of data is to be resampled, implement selectrows(model::SomeModelType, I, data...) -> resampled_data for each overloading of reformat(model::SomeModelType, args...) -> data above. Here I is an arbitrary abstract integer vector or : (type Colon).","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Important. selectrows(model::SomeModelType, I, args...) 
must always return a tuple of the same length as args, even if this has length one.","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"The fallback for selectrows is described at selectrows.","category":"page"},{"location":"implementing_a_data_front_end/#Sample-implementation","page":"Implementing a data front end","title":"Sample implementation","text":"","category":"section"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Suppose a supervised model type SomeSupervised supports sample weights, leading to two different fit signatures, and that it has a single operation predict:","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"fit(model::SomeSupervised, verbosity, X, y)\nfit(model::SomeSupervised, verbosity, X, y, w)\n\npredict(model::SomeSupervised, fitresult, Xnew)","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"Without a data front-end implemented, suppose X is expected to be a table and y a vector, but suppose the core algorithm always converts X to a matrix with features as rows (so that each record in the table corresponds to a column of the matrix). Then a new data front-end might look like this:","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"const MMI = MLJModelInterface\n\n# for fit:\nMMI.reformat(::SomeSupervised, X, y) = (MMI.matrix(X)', y)\nMMI.reformat(::SomeSupervised, X, y, w) = (MMI.matrix(X)', y, w)\nMMI.selectrows(::SomeSupervised, I, Xmatrix, y) =\n (view(Xmatrix, :, I), view(y, I))\nMMI.selectrows(::SomeSupervised, I, Xmatrix, y, w) =\n (view(Xmatrix, :, I), view(y, I), view(w, I))\n\n# for predict:\nMMI.reformat(::SomeSupervised, X) = (MMI.matrix(X)',)\nMMI.selectrows(::SomeSupervised, I, Xmatrix) = (view(Xmatrix, :, I),)","category":"page"},{"location":"implementing_a_data_front_end/","page":"Implementing a data front end","title":"Implementing a data front end","text":"With these additions, fit and predict are refactored, so that X and Xnew represent matrices with features as rows.","category":"page"},{"location":"the_fitted_params_method/#The-fitted_params-method","page":"The fitted_params method","title":"The fitted_params method","text":"","category":"section"},{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"A fitted_params method may be optionally overloaded. Its purpose is to provide MLJ access to a user-friendly representation of the learned parameters of the model (as opposed to the hyperparameters). 
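","category":"page"},{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"For example, for a hypothetical linear model SomeLinearModel whose fitresult is a (coefficients, bias) tuple, one might write this sketch:","category":"page"},{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"function MMI.fitted_params(model::SomeLinearModel, fitresult)\n coefs, bias = fitresult\n return (coefs=coefs, bias=bias)\nend","category":"page"},{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"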
The learned parameters must be extractable from fitresult.","category":"page"},{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"MMI.fitted_params(model::SomeSupervisedModel, fitresult) -> friendly_fitresult::NamedTuple","category":"page"},{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"For a linear model, for example, one might declare something like friendly_fitresult=(coefs=[...], bias=...), as in the sketch above.","category":"page"},{"location":"the_fitted_params_method/","page":"The fitted_params method","title":"The fitted_params method","text":"The fallback is to return (fitresult=fitresult,).","category":"page"},{"location":"unsupervised_models/#Unsupervised-models","page":"Unsupervised models","title":"Unsupervised models","text":"","category":"section"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"Unsupervised models implement the MLJ model interface in a very similar fashion. The main differences are:","category":"page"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"The fit method, which still returns (fitresult, cache, report), will typically have only one training argument X, as in MLJModelInterface.fit(model, verbosity, X), although this is not a hard requirement; see Transformers requiring a target variable in training below. Furthermore, in the case of models that subtype Static <: Unsupervised (see Static models), fit has no training arguments at all, but does not need to be implemented, as a fallback returns (nothing, nothing, nothing).\nA transform and/or predict method is implemented, and has the same signature as predict does in the supervised case, as in MLJModelInterface.transform(model, fitresult, Xnew). However, it may only have one data argument Xnew, unless model <: Static, in which case there is no restriction. A use-case for predict is K-means clustering that predicts labels and transforms input features into a space of lower dimension. See the Transformers that also predict section of the MLJ manual for an example.\nThe target_scitype refers to the output of predict, if implemented. A new trait, output_scitype, is for the output of transform. Unless the model is Static (see Static models), the trait input_scitype is for the single data argument of transform (and predict, if implemented). If fit has more than one data argument, you must overload the trait fit_data_scitype, which bounds the allowed data passed to fit(model, verbosity, data...) and will always be a Tuple type.\nAn inverse_transform can be optionally implemented. 
The signature is the same as transform, as in MLJModelInterface.inverse_transform(model::MyUnsupervisedModel, fitresult, Xout), which:\nmust make sense for any Xout for which scitype(Xout) <: output_scitype(MyUnsupervisedModel); and\nmust return an object Xin satisfying scitype(Xin) <: input_scitype(MyUnsupervisedModel).","category":"page"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"For sample implementations, see MLJ's built-in transformers and the clustering models at MLJClusteringInterface.jl.","category":"page"},{"location":"unsupervised_models/#Transformers-requiring-a-target-variable-in-training","page":"Unsupervised models","title":"Transformers requiring a target variable in training","text":"","category":"section"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"An Unsupervised model that is not Static may include a second argument y in its fit signature, as in fit(::MyTransformer, verbosity, X, y). For example, some feature selection tools require a target variable y in training. (Unlike Supervised models, an Unsupervised model is not required to implement predict, and in pipelines it is the output of transform, and not predict, that is always propagated to the next model.) Such a model should overload the trait target_in_fit, as in this example:","category":"page"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"MLJModelInterface.target_in_fit(::Type{<:MyTransformer}) = true","category":"page"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"This ensures that such models can appear in pipelines, and that a target provided to the pipeline model is passed on to the model in training. ","category":"page"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"If the model implements more than one fit signature (e.g., one with a target y and one without) then fit_data_scitype must also be overloaded, as in this example:","category":"page"},{"location":"unsupervised_models/","page":"Unsupervised models","title":"Unsupervised models","text":"MLJModelInterface.fit_data_scitype(::Type{<:MyTransformer}) = Union{\n Tuple{Table(Continuous)},\n Tuple{Table(Continuous), AbstractVector{<:Finite}},\n}","category":"page"},{"location":"how_to_register/#How-to-add-models-to-the-MLJ-model-registry","page":"How to add models to the MLJ Model Registry","title":"How to add models to the MLJ model registry","text":"","category":"section"},{"location":"how_to_register/","page":"How to add models to the MLJ Model Registry","title":"How to add models to the MLJ Model Registry","text":"The MLJ model registry is located in the MLJModels.jl repository. To add a model, you need to follow these steps:","category":"page"},{"location":"how_to_register/","page":"How to add models to the MLJ Model Registry","title":"How to add models to the MLJ Model Registry","text":"Ensure your model conforms to the interface defined above\nRaise an issue at MLJModels.jl and point out where the MLJ-interface implementation is, e.g. 
by providing a link to the code.\nAn administrator will then review your implementation and work with you to add the model to the registry","category":"page"},{"location":"summary_of_methods/#Summary-of-methods","page":"Summary of methods","title":"Summary of methods","text":"","category":"section"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"The compulsory and optional methods to be implemented for each concrete type SomeSupervisedModel <: MMI.Supervised are summarized below.","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"An = indicates the return value for a fallback version of the method.","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Compulsory:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.fit(model::SomeSupervisedModel, verbosity, X, y) -> fitresult, cache, report\nMMI.predict(model::SomeSupervisedModel, fitresult, Xnew) -> yhat","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optional, to check and correct invalid hyperparameter values:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.clean!(model::SomeSupervisedModel) = \"\"","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optional, to return user-friendly form of fitted parameters:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.fitted_params(model::SomeSupervisedModel, fitresult) = fitresult","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optional, to avoid redundant calculations when re-fitting machines associated with a model:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y) =\n MMI.fit(model, verbosity, X, y)","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optional, to specify default hyperparameter ranges (for use in tuning):","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.hyperparameter_ranges(T::Type) = Tuple(fill(nothing, length(fieldnames(T))))","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optional, if SomeSupervisedModel <: Probabilistic:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.predict_mode(model::SomeSupervisedModel, fitresult, Xnew) =\n mode.(predict(model, fitresult, Xnew))\nMMI.predict_mean(model::SomeSupervisedModel, fitresult, Xnew) =\n mean.(predict(model, fitresult, Xnew))\nMMI.predict_median(model::SomeSupervisedModel, fitresult, Xnew) =\n median.(predict(model, fitresult, Xnew))","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Required, if the model is to be registered (findable by general users):","category":"page"},{"location":"summary_of_methods/","page":"Summary of 
methods","title":"Summary of methods","text":"MMI.load_path(::Type{<:SomeSupervisedModel}) = \"\"\nMMI.package_name(::Type{<:SomeSupervisedModel}) = \"Unknown\"\nMMI.package_uuid(::Type{<:SomeSupervisedModel}) = \"Unknown\"","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.input_scitype(::Type{<:SomeSupervisedModel}) = Unknown","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Strongly recommended, to constrain the form of target data passed to fit:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.target_scitype(::Type{<:SomeSupervisedModel}) = Unknown","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optional but recommended:","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.package_url(::Type{<:SomeSupervisedModel}) = \"unknown\"\nMMI.is_pure_julia(::Type{<:SomeSupervisedModel}) = false\nMMI.package_license(::Type{<:SomeSupervisedModel}) = \"unknown\"","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"If SomeSupervisedModel supports sample weights or class weights, then instead of the fit above, one implements","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.fit(model::SomeSupervisedModel, verbosity, X, y, w=nothing) -> fitresult, cache, report","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"and, if appropriate","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y, w=nothing) =\n MMI.fit(model, verbosity, X, y, w)","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Additionally, if SomeSupervisedModel supports sample weights, one must declare","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.supports_weights(model::Type{<:SomeSupervisedModel}) = true","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optionally, an implementation may add a data front-end, for transforming user data (such as a table) into some model-specific format (such as a matrix), and/or add methods to specify how reformatted data is resampled. This alters the interpretation of the data arguments of fit, update and predict, whose number may also change. See Implementing a data front-end for details). A data front-end provides the MLJ user certain performance advantages when retraining a machine.","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Third-party packages that interact directly with models using the MLJModelInterface.jl API, rather than through the machine interface, will also need to understand how the data front-end works, so they incorporate reformat into their fit/update/predict calls. 
See also this issue.","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MLJModelInterface.reformat(model::SomeSupervisedModel, args...) = args\nMLJModelInterface.selectrows(model::SomeSupervisedModel, I, data...) = data","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"Optionally, to customize support for serialization of machines (see Serialization), overload","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.save(model::SomeModel, fitresult; kwargs...) = fitresult","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"and possibly","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"MMI.restore(model::SomeModel, serializable_fitresult) -> serializable_fitresult","category":"page"},{"location":"summary_of_methods/","page":"Summary of methods","title":"Summary of methods","text":"These last two are unlikely to be needed if wrapping pure Julia code.","category":"page"},{"location":"the_fit_method/#The-fit-method","page":"The fit method","title":"The fit method","text":"","category":"section"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"A compulsory fit method returns three objects:","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"MMI.fit(model::SomeSupervisedModel, verbosity, X, y) -> fitresult, cache, report","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"fitresult is the fitresult in the sense above (which becomes an argument for predict discussed below).\nreport is a (possibly empty) NamedTuple, for example, report=(deviance=..., dof_residual=..., stderror=..., vcov=...). Any training-related statistics, such as internal estimates of the generalization error, and feature rankings, should be returned in the report tuple. How, or if, these are generated should be controlled by hyperparameters (the fields of model). Fitted parameters, such as the coefficients of a linear model, do not go in the report as they will be extractable from fitresult (and accessible to MLJ through the fitted_params method described below).\nThe value of cache can be nothing, unless one is also defining an update method (see below). The Julia type of cache is not presently restricted.","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"note: Note\nThe fit (and update) methods should not mutate the model. If necessary, fit can create a deepcopy of model first.","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"It is not necessary for fit to provide type or dimension checks on X or y or to call clean! on the model; MLJ will carry out such checks.","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"The types of X and y are constrained by the input_scitype and target_scitype trait declarations; see Trait declarations below. 
(That is, unless a data front-end is implemented, in which case these traits refer instead to the arguments of the overloaded reformat method, and the types of X and y are determined by the output of reformat.)","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"The method fit should never alter hyperparameter values, the sole exception being fields of type <:AbstractRNG. If the package is able to suggest better hyperparameters, as a byproduct of training, return these in the report field.","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"The verbosity level (0 for silent) is for passing to the learning algorithm itself. A fit method wrapping such an algorithm should generally avoid doing any of its own logging.","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"Sample weight support. If supports_weights(::Type{<:SomeSupervisedModel}) has been declared true, then one instead implements the following variation on the above fit:","category":"page"},{"location":"the_fit_method/","page":"The fit method","title":"The fit method","text":"MMI.fit(model::SomeSupervisedModel, verbosity, X, y, w=nothing) -> fitresult, cache, report","category":"page"},{"location":"model_wrappers/#Model-wrappers","page":"Model wrappers","title":"Model wrappers","text":"","category":"section"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"A model that can have one or more other models as hyper-parameters should overload the trait is_wrapper, as in this example:","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"MLJModelInterface.is_wrapper(::Type{<:MyWrapper}) = true","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"The constructor for such a model does not need to provide default values for the model-valued hyper-parameters. If only a single model is wrapped, then the hyper-parameter should have the name :model and this should be an optional positional argument, as well as a keyword argument.","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"For example, EnsembleModel is a model wrapper, and we can construct an instance like this:","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"using MLJ\natom = ConstantClassifier()\nEnsembleModel(atom, n=100)","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"but also like this:","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"EnsembleModel(model=atom, n=100)","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"This is the only case in MLJ where positional arguments in a model constructor are allowed.","category":"page"},{"location":"model_wrappers/#Handling-generic-constructors","page":"Model wrappers","title":"Handling generic constructors","text":"","category":"section"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"Model wrappers frequently have a public-facing constructor with a name different from that of the model type constructed. For example, TunedModel(model, ...) 
is a constructor that will construct either an instance of DeterministicTunedModel or ProbabilisticTunedModel, depending on the type of model. In this case it is necessary to overload the constructor trait, which here looks like this:","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"MLJModelInterface.constructor(::Type{<:Union{\n DeterministicTunedModel,\n ProbabilisticTunedModel,\n}}) = TunedModel","category":"page"},{"location":"model_wrappers/","page":"Model wrappers","title":"Model wrappers","text":"This allows the MLJ Model Registry to correctly associate model metadata to the constructor, rather than the (private) types.","category":"page"},{"location":"trait_declarations/#Trait-declarations","page":"Trait declarations","title":"Trait declarations","text":"","category":"section"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"Two trait functions allow the implementer to restrict the types of data X, y and Xnew discussed above. The MLJ task interface uses these traits for data type checks but also for model search. If they are omitted (and your model is registered) then a general user may attempt to use your model with inappropriately typed data.","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"The trait functions input_scitype and target_scitype take scientific data types as values. We assume here familiarity with ScientificTypes.jl (see Getting Started for the basics).","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"For example, to ensure that the X presented to the DecisionTreeClassifier fit method is a table whose columns all have Continuous element type (and hence AbstractFloat machine type), one declares","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.input_scitype(::Type{<:DecisionTreeClassifier}) = MMI.Table(MMI.Continuous)","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"or, equivalently,","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.input_scitype(::Type{<:DecisionTreeClassifier}) = Table(Continuous)","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"If, instead, columns were allowed to have either: (i) a mixture of Continuous and Missing values, or (ii) Count (i.e., integer) values, then the declaration would be","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.input_scitype(::Type{<:DecisionTreeClassifier}) = Table(Union{Continuous,Missing}, Count)","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"Similarly, to ensure the target is an AbstractVector whose elements have Finite scitype (and hence CategoricalValue machine type), we declare","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.target_scitype(::Type{<:DecisionTreeClassifier}) = AbstractVector{<:Finite}","category":"page"},{"location":"trait_declarations/#Multivariate-targets","page":"Trait declarations","title":"Multivariate 
targets","text":"","category":"section"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"The above remarks continue to hold unchanged for the case multivariate targets. For example, if we declare","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"target_scitype(SomeSupervisedModel) = Table(Continuous)","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"then this constrains the target to be any table whose columns have Continuous element scitype (i.e., AbstractFloat), while","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"target_scitype(SomeSupervisedModel) = Table(Continuous, Finite{2})","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"restricts to tables with continuous or binary (ordered or unordered) columns.","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"For predicting variable length sequences of, say, binary values (CategoricalValues) with some common size-two pool) we declare","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"target_scitype(SomeSupervisedModel) = AbstractVector{<:NTuple{<:Finite{2}}}","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"The trait functions controlling the form of data are summarized as follows:","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"method return type declarable return values fallback value\ninput_scitype Type some scientific type Unknown\ntarget_scitype Type some scientific type Unknown","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"Additional trait functions tell MLJ's @load macro how to find your model if it is registered, and provide other self-explanatory metadata about the model:","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"method return type declarable return values fallback value\nload_path String unrestricted \"unknown\"\npackage_name String unrestricted \"unknown\"\npackage_uuid String unrestricted \"unknown\"\npackage_url String unrestricted \"unknown\"\npackage_license String unrestricted \"unknown\"\nis_pure_julia Bool true or false false\nsupports_weights Bool true or false false\nsupports_class_weights Bool true or false false\nsupports_training_losses Bool true or false false\nreports_feature_importances Bool true or false false","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"Here is the complete list of trait function declarations for DecisionTreeClassifier, whose core algorithms are provided by DecisionTree.jl, but whose interface actually lives at MLJDecisionTreeInterface.jl.","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.input_scitype(::Type{<:DecisionTreeClassifier}) = MMI.Table(MMI.Continuous)\nMMI.target_scitype(::Type{<:DecisionTreeClassifier}) = AbstractVector{<:MMI.Finite}\nMMI.load_path(::Type{<:DecisionTreeClassifier}) = 
\"MLJDecisionTreeInterface.DecisionTreeClassifier\"\nMMI.package_name(::Type{<:DecisionTreeClassifier}) = \"DecisionTree\"\nMMI.package_uuid(::Type{<:DecisionTreeClassifier}) = \"7806a523-6efd-50cb-b5f6-3fa6f1930dbb\"\nMMI.package_url(::Type{<:DecisionTreeClassifier}) = \"https://github.com/bensadeghi/DecisionTree.jl\"\nMMI.is_pure_julia(::Type{<:DecisionTreeClassifier}) = true","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"Alternatively, these traits can also be declared using MMI.metadata_pkg and MMI.metadata_model helper functions as:","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.metadata_pkg(\n DecisionTreeClassifier,\n name=\"DecisionTree\",\n package_uuid=\"7806a523-6efd-50cb-b5f6-3fa6f1930dbb\",\n package_url=\"https://github.com/bensadeghi/DecisionTree.jl\",\n is_pure_julia=true\n)\n\nMMI.metadata_model(\n DecisionTreeClassifier,\n input_scitype=MMI.Table(MMI.Continuous),\n target_scitype=AbstractVector{<:MMI.Finite},\n load_path=\"MLJDecisionTreeInterface.DecisionTreeClassifier\"\n)","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"Important. Do not omit the load_path specification. If unsure what it should be, post an issue at MLJ.","category":"page"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.metadata_pkg","category":"page"},{"location":"trait_declarations/#MLJModelInterface.metadata_pkg","page":"Trait declarations","title":"MLJModelInterface.metadata_pkg","text":"metadata_pkg(T; args...)\n\nHelper function to write the metadata for a package providing model T. Use it with broadcasting to define the metadata of the package providing a series of models.\n\nKeywords\n\npackage_name=\"unknown\" : package name\npackage_uuid=\"unknown\" : package uuid\npackage_url=\"unknown\" : package url\nis_pure_julia=missing : whether the package is pure julia\npackage_license=\"unknown\": package license\nis_wrapper=false : whether the package is a wrapper\n\nExample\n\nmetadata_pkg.((KNNRegressor, KNNClassifier),\n package_name=\"NearestNeighbors\",\n package_uuid=\"b8a86587-4115-5ab1-83bc-aa920d37bbce\",\n package_url=\"https://github.com/KristofferC/NearestNeighbors.jl\",\n is_pure_julia=true,\n package_license=\"MIT\",\n is_wrapper=false)\n\n\n\n\n\n","category":"function"},{"location":"trait_declarations/","page":"Trait declarations","title":"Trait declarations","text":"MMI.metadata_model","category":"page"},{"location":"trait_declarations/#MLJModelInterface.metadata_model","page":"Trait declarations","title":"MLJModelInterface.metadata_model","text":"metadata_model(T; args...)\n\nHelper function to write the metadata for a model T.\n\nKeywords\n\ninput_scitype=Unknown: allowed scientific type of the input data\ntarget_scitype=Unknown: allowed scitype of the target (supervised)\noutput_scitype=Unknown: allowed scitype of the transformed data (unsupervised)\nsupports_weights=false: whether the model supports sample weights\nsupports_class_weights=false: whether the model supports class weights\nload_path=\"unknown\": where the model is (usually PackageName.ModelName)\nhuman_name=nothing: human name of the model\nsupports_training_losses=nothing: whether the (necessarily iterative) model can report training losses\nreports_feature_importances=nothing: whether the model reports feature 
importances\n\nExample\n\nmetadata_model(KNNRegressor,\n input_scitype=MLJModelInterface.Table(MLJModelInterface.Continuous),\n target_scitype=AbstractVector{MLJModelInterface.Continuous},\n supports_weights=true,\n load_path=\"NearestNeighbors.KNNRegressor\")\n\n\n\n\n\n","category":"function"},{"location":"type_declarations/#New-model-type-declarations","page":"New model type declarations","title":"New model type declarations","text":"","category":"section"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"Here is an example of a concrete supervised model type declaration, for a model with a single hyperparameter:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"import MLJModelInterface\nconst MMI = MLJModelInterface\n\nmutable struct RidgeRegressor <: MMI.Deterministic\n lambda::Float64\nend","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"Models (which are mutable) should not be given internal constructors. It is recommended that they be given an external lazy keyword constructor of the same name. This constructor defines default values for every field, and optionally corrects invalid field values by calling a clean! method (whose fallback returns an empty message string):","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"function MMI.clean!(model::RidgeRegressor)\n warning = \"\"\n if model.lambda < 0\n warning *= \"Need lambda ≥ 0. Resetting lambda=0. \"\n model.lambda = 0\n end\n return warning\nend\n\n# keyword constructor\nfunction RidgeRegressor(; lambda=0.0)\n model = RidgeRegressor(lambda)\n message = MMI.clean!(model)\n isempty(message) || @warn message\n return model\nend","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"Important. Performing clean!(model) a second time should not mutate model. That is, this test should hold:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"clean!(model)\nclone = deepcopy(model)\nclean!(model)\n@test model == clone","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"Although not essential, try to avoid Union types for model fields. For example, a field declaration features::Vector{Symbol} with a default of Symbol[] (detected with isempty method) is preferred to features::Union{Vector{Symbol}, Nothing} with a default of nothing.","category":"page"},{"location":"type_declarations/#Hyperparameters-for-parallelization-options","page":"New model type declarations","title":"Hyperparameters for parallelization options","text":"","category":"section"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"The section Acceleration and Parallelism of the MLJ manual indicates how users specify an option to run an algorithm using distributed processing or multithreading. A hyperparameter specifying such an option should be called acceleration. Its value a should satisfy a isa AbstractResource where AbstractResource is defined in the ComputationalResources.jl package. 
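","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"For example (a hypothetical model type; CPU1 and AbstractResource come from ComputationalResources.jl):","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"using ComputationalResources\n\nmutable struct SomeEnsemble <: MMI.Deterministic\n n::Int\n acceleration::AbstractResource\nend\n\n# hypothetical keyword constructor, defaulting to single-process CPU:\nSomeEnsemble(; n=100, acceleration=CPU1()) = SomeEnsemble(n, acceleration)","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"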
An option to run on a GPU is ordinarily indicated with the CUDALibs() resource.","category":"page"},{"location":"type_declarations/#hyperparameter-access-and-mutation","page":"New model type declarations","title":"hyperparameter access and mutation","text":"","category":"section"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"To support hyperparameter optimization (see the Tuning Models section of the MLJ manual) any hyperparameter to be individually controlled must be:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"property-accessible; nested property access allowed, as in model.detector.K\nmutable","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"For an un-nested hyperparameter, the requirement is that getproperty(model, :param_name) and setproperty!(model, :param_name, value) have the expected behavior.","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"Combining hyperparameters in a named tuple does not generally work: although property-accessible (with nesting), an individual value cannot be mutated.","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"For a suggested way to deal with hyperparameters varying in number, see the implementation of Stack, where the model struct stores a varying number of base models internally as a vector, but components are named at construction and accessed by overloading getproperty/setproperty! appropriately.","category":"page"},{"location":"type_declarations/#Macro-shortcut","page":"New model type declarations","title":"Macro shortcut","text":"","category":"section"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"An alternative to declaring the model struct, clean! 
method and keyword constructor, is to use the @mlj_model macro, as in the following example:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"@mlj_model mutable struct YourModel <: MMI.Deterministic\n a::Float64 = 0.5::(_ > 0)\n b::String = \"svd\"::(_ in (\"svd\",\"qr\"))\nend","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"This declaration specifies:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"A keyword constructor (here YourModel(; a=..., b=...)),\nDefault values for the hyperparameters,\nConstraints on the hyperparameters, where _ refers to the value passed.","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"For example, a::Float64 = 0.5::(_ > 0) indicates that the field a is a Float64, takes 0.5 as its default value, and expects its value to be positive.","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"You cannot use the @mlj_model macro if your model struct has type parameters.","category":"page"},{"location":"type_declarations/#Known-issue-with-@mlj_macro","page":"New model type declarations","title":"Known issue with the @mlj_model macro","text":"","category":"section"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"Defaults with negative values can trip up the @mlj_model macro (see this issue). So, for example, this does not work:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"@mlj_model mutable struct Bar\n a::Int = -1::(_ > -2)\nend","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"But this does:","category":"page"},{"location":"type_declarations/","page":"New model type declarations","title":"New model type declarations","text":"@mlj_model mutable struct Bar\n a::Int = (-)(1)::(_ > -2)\nend","category":"page"},{"location":"where_to_put_code/#Where-to-place-code-implementing-new-models","page":"Where to place code implementing new models","title":"Where to place code implementing new models","text":"","category":"section"},{"location":"where_to_put_code/","page":"Where to place code implementing new models","title":"Where to place code implementing new models","text":"Note that different packages can implement models having the same name without causing conflicts, although an MLJ user cannot simultaneously load two such models.","category":"page"},{"location":"where_to_put_code/","page":"Where to place code implementing new models","title":"Where to place code implementing new models","text":"There are two options for making a new model implementation available to all MLJ users:","category":"page"},{"location":"where_to_put_code/","page":"Where to place code implementing new models","title":"Where to place code implementing new models","text":"Native implementations (preferred option). The implementation code lives in the same package that contains the learning algorithms implementing the interface. An example is EvoTrees.jl. In this case, it is sufficient to open an issue at MLJ requesting the package to be registered with MLJ. 
Registering a package allows the MLJ user to access its models' metadata and to selectively load them.\nSeparate interface package. Implementation code lives in a separate interface package, which has the algorithm-providing package as a dependency. See the template repository MLJExampleInterface.jl.","category":"page"},{"location":"where_to_put_code/","page":"Where to place code implementing new models","title":"Where to place code implementing new models","text":"Additionally, one needs to ensure that the implementation code defines the package_name and load_path model traits appropriately, so that MLJ's @load macro can find the necessary code (see MLJModels/src for examples).","category":"page"},{"location":"the_predict_method/#The-predict-method","page":"The predict method","title":"The predict method","text":"","category":"section"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"A compulsory predict method has the form","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"MMI.predict(model::SomeSupervisedModel, fitresult, Xnew) -> yhat","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Here Xnew will have the same form as the X passed to fit.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Note that while Xnew generally consists of multiple observations (e.g., has multiple rows in the case of a table), it is assumed, in view of the i.i.d. assumption recalled above, that calling predict(..., Xnew) is equivalent to broadcasting some method predict_one(..., x) over the individual observations x in Xnew (a method implementing the conditional probability distribution p(y | X) above).","category":"page"},{"location":"the_predict_method/#Prediction-types-for-deterministic-responses.","page":"The predict method","title":"Prediction types for deterministic responses.","text":"","category":"section"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"In the case of Deterministic models, yhat should have the same scitype as the y passed to fit (see above). If y is a CategoricalVector (classification), then elements of the prediction yhat must have a pool equal (==) to the pool of the target y presented in training, even if not all levels appear in the training data or prediction itself.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Unfortunately, code not written with the preservation of categorical levels in mind poses special problems. To help with this, MLJModelInterface provides some utilities: MLJModelInterface.int (for converting a CategoricalValue into an integer, the ordering of these integers being consistent with that of the pool) and MLJModelInterface.decoder (for constructing a callable object that decodes the integers back into CategoricalValue objects). Refer to Convenience methods below for important details.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Note that a decoder created during fit may need to be bundled with fitresult to make it available to predict during re-encoding. 
So, for example, if the core algorithm being wrapped by fit expects a nominal target yint of type Vector{<:Integer} then a fit method may look something like this:","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"function MMI.fit(model::SomeSupervisedModel, verbosity, X, y)\n yint = MMI.int(y)\n a_target_element = y[1] # a CategoricalValue/String\n decode = MMI.decoder(a_target_element) # can be called on integers\n\n core_fitresult = SomePackage.fit(X, yint, verbosity=verbosity)\n\n fitresult = (decode, core_fitresult)\n cache = nothing\n report = nothing\n return fitresult, cache, report\nend","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"while a corresponding deterministic predict operation might look like this:","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"function MMI.predict(model::SomeSupervisedModel, fitresult, Xnew)\n decode, core_fitresult = fitresult\n yhat = SomePackage.predict(core_fitresult, Xnew)\n return decode.(yhat)\nend","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"For a concrete example, refer to the code for SVMClassifier.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Of course, if you are coding a learning algorithm from scratch, rather than wrapping an existing one, these extra measures may be unnecessary.","category":"page"},{"location":"the_predict_method/#Prediction-types-for-probabilistic-responses","page":"The predict method","title":"Prediction types for probabilistic responses","text":"","category":"section"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"In the case of Probabilistic models with univariate targets, yhat must be an AbstractVector or table whose elements are distributions. In the common case of a vector (single target), this means one distribution per row of Xnew.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"A distribution is some object that, at the least, implements Base.rand (i.e., is something that can be sampled). Currently, all performance measures (metrics) defined in MLJBase.jl additionally assume that a distribution is either:","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"An instance of some subtype of Distributions.Distribution, an abstract type defined in the Distributions.jl package; or\nAn instance of CategoricalDistributions.UnivariateFinite, from the CategoricalDistributions.jl package, which should be used for all probabilistic classifiers, i.e., for predictors whose target has scientific type <:AbstractVector{<:Finite}.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"All such distributions implement the probability mass or density function Distributions.pdf. 
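For example, a probabilistic regressor wrapping a hypothetical core algorithm could return a vector of Distributions.Normal objects, one per observation (a minimal sketch only; SomeProbabilisticRegressor and the fitresult layout shown are placeholders, not from the manual):\n\nusing Distributions\n\nfunction MMI.predict(model::SomeProbabilisticRegressor, fitresult, Xnew)\n    coefs, sigma = fitresult              # hypothetical: linear coefficients, noise scale\n    mu = MMI.matrix(Xnew)*coefs           # one mean per observation\n    return [Normal(m, sigma) for m in mu] # one distribution per observation\nend\n\n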
If your model's predictions cannot be represented by objects of this form, then you will need to implement appropriate performance measures to buy into MLJ's performance evaluation apparatus.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"An implementation can avoid CategoricalDistributions.jl as a dependency by using the \"dummy\" constructor MLJModelInterface.UnivariateFinite, which is bound to the true one when MLJBase.jl is loaded.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"For efficiency, one should not construct UnivariateFinite instances one at a time. Rather, once a probability vector, matrix, or dictionary is known, construct an instance of UnivariateFiniteVector <: AbstractArray{<:UnivariateFinite,1} to return. Both UnivariateFinite and UnivariateFiniteVector objects are constructed using the single UnivariateFinite function.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"For example, suppose the target y arrives as a subsample of some ybig and is missing some classes:","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"ybig = categorical([:a, :b, :a, :a, :b, :a, :rare, :a, :b])\ny = ybig[1:6]","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Your fit method has bundled the first element of y with the fitresult to make it available to predict for purposes of tracking the complete pool of classes. Let's call this an_element = y[1]. Then, supposing the corresponding probabilities of the observed classes [:a, :b] are in an n x 2 matrix probs (where n is the number of rows of Xnew), you return","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"yhat = MLJModelInterface.UnivariateFinite([:a, :b], probs, pool=an_element)","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"This object automatically assigns zero-probability to the unseen class :rare (i.e., pdf.(yhat, :rare) works and returns a zero vector). If you would like to assign :rare non-zero probabilities, simply add it to the first vector (the support) and supply a larger probs matrix.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"In a binary classification problem, it suffices to specify a single vector of probabilities, provided you specify augment=true, as in the following example, and note carefully that these probabilities are associated with the last (second) class you specify in the constructor:","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"y = categorical([:TRUE, :FALSE, :FALSE, :TRUE, :TRUE])\nan_element = y[1]\nprobs = rand(10)\nyhat = MLJModelInterface.UnivariateFinite([:FALSE, :TRUE], probs, augment=true, pool=an_element)","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"The constructor has a lot of options, including passing a dictionary instead of vectors. 
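To give the flavor of the dictionary form (a hedged sketch reusing an_element from the binary example above, with arbitrary probabilities; the keys form the support and each value holds one probability per new observation):\n\nprob_given_class = Dict(:FALSE => [0.9, 0.3], :TRUE => [0.1, 0.7])\nyhat = MLJModelInterface.UnivariateFinite(prob_given_class, pool=an_element)\n\nHere the values are vectors, so yhat is a UnivariateFiniteVector of two distributions. 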
See CategoricalDistributions.UnivariateFinite for details.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"See LinearBinaryClassifier for an example of a Probabilistic classifier implementation.","category":"page"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"Important note on binary classifiers. There is no \"Binary\" scitype distinct from Multiclass{2} or OrderedFactor{2}; Binary is just an alias for Union{Multiclass{2},OrderedFactor{2}}. The target_scitype of a binary classifier will generally be AbstractVector{<:Binary} and, according to the MLJ scitype convention, elements of y have type CategoricalValue, and not Bool. See BinaryClassifier for an example.","category":"page"},{"location":"the_predict_method/#Report-items-returned-by-predict","page":"The predict method","title":"Report items returned by predict","text":"","category":"section"},{"location":"the_predict_method/","page":"The predict method","title":"The predict method","text":"A predict method, or other operation such as transform, can contribute to the report accessible in any machine associated with a model. See Reporting byproducts of a static transformation below for details.","category":"page"},{"location":"static_models/#Static-models","page":"Static models","title":"Static models","text":"","category":"section"},{"location":"static_models/","page":"Static models","title":"Static models","text":"A model type subtypes Static <: Unsupervised if it does not generalize to new data but nevertheless has hyperparameters. See the Static transformers section of the MLJ manual for examples. In the Static case, transform can have multiple arguments and input_scitype refers to the allowed scitype of the slurped data, even if there is only a single argument. For example, if the signature is transform(static_model, X1, X2), then the allowed input_scitype might be Tuple{Table(Continuous), Table(Continuous)}; if the signature is transform(static_model, X), the allowed input_scitype might be Tuple{Table(Continuous)}. The other traits are as for regular Unsupervised models.","category":"page"},{"location":"static_models/#Reporting-byproducts-of-a-static-transformation","page":"Static models","title":"Reporting byproducts of a static transformation","text":"","category":"section"},{"location":"static_models/","page":"Static models","title":"Static models","text":"As a static transformer does not implement fit, the usual mechanism for creating a report is not available. Instead, byproducts of the computation performed by transform can be returned by transform itself by returning a pair (output, report) instead of just output. Here report should be a named tuple. In fact, any operation (e.g., predict) can do this for any model type. However, this exceptional behavior must be flagged with an appropriate trait declaration, as in","category":"page"},{"location":"static_models/","page":"Static models","title":"Static models","text":"MLJModelInterface.reporting_operations(::Type{<:SomeModelType}) = (:transform,)","category":"page"},{"location":"static_models/","page":"Static models","title":"Static models","text":"If mach is a machine wrapping a model of this kind, then the report(mach) will include the report items from transform's output. 
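To illustrate the pattern (a minimal hypothetical sketch, not code from the manual), here is a static transformer that smooths a vector and reports the window it used:\n\nusing Statistics\n\nmutable struct Smoother <: MMI.Static\n    window::Int\nend\n\nfunction MMI.transform(model::Smoother, fitresult, v)\n    output = [mean(v[max(1, i - model.window):i]) for i in eachindex(v)]\n    report = (window = model.window,)\n    return output, report  # returning a pair flags a report contribution\nend\n\nMLJModelInterface.reporting_operations(::Type{<:Smoother}) = (:transform,)\n\n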
For sample implementations, see this issue or the code for DBSCAN clustering.","category":"page"},{"location":"outlier_detection_models/#Outlier-detection-models","page":"Outlier detection models","title":"Outlier detection models","text":"","category":"section"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"warning: Experimental API\nThe Outlier Detection API is experimental and may change in future releases of MLJ.","category":"page"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"Outlier detection or anomaly detection is predominantly an unsupervised learning task, transforming each data point to an outlier score quantifying the level of \"outlierness\". However, because detectors can also be semi-supervised or supervised, MLJModelInterface provides a collection of abstract model types that capture these different characteristics, namely:","category":"page"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"MLJModelInterface.SupervisedDetector\nMLJModelInterface.UnsupervisedDetector\nMLJModelInterface.ProbabilisticSupervisedDetector\nMLJModelInterface.ProbabilisticUnsupervisedDetector\nMLJModelInterface.DeterministicSupervisedDetector\nMLJModelInterface.DeterministicUnsupervisedDetector","category":"page"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"All outlier detection models subtyping from any of the above supertypes have to implement MLJModelInterface.fit(model, verbosity, X, [y]). Models subtyping from either SupervisedDetector or UnsupervisedDetector have to implement MLJModelInterface.transform(model, fitresult, Xnew), which should return the raw outlier scores (<:Continuous) of all points in Xnew.","category":"page"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"Probabilistic and deterministic outlier detection models provide an additional option to predict a normalized estimate of outlierness or a concrete outlier label, and thus enable evaluation of those models. All corresponding supertypes have to implement (in addition to the previously described fit and transform) MLJModelInterface.predict(model, fitresult, Xnew), with deterministic predictions conforming to OrderedFactor{2}, where the first class is the normal class and the second the outlier class. Probabilistic models predict a UnivariateFinite estimate of those classes.","category":"page"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"It is typically possible to automatically convert an outlier detection model to a probabilistic or deterministic model if the training scores are stored in the model's report. The OutlierDetection.jl package mentioned below, for example, stores the training scores under the scores key in the report returned from fit. 
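In other words, a detector's fit might end along the following lines (a hedged sketch; SomeDetector and SomeCorePackage are placeholders, not real APIs):\n\nfunction MMI.fit(detector::SomeDetector, verbosity, X)\n    core_fitresult, training_scores = SomeCorePackage.fit(MMI.matrix(X))  # hypothetical core call\n    report = (scores = training_scores,)  # the key OutlierDetection.jl expects\n    return core_fitresult, nothing, report\nend\n\n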
It is then possible to use model wrappers such as OutlierDetection.ProbabilisticDetector to automatically convert a model to enable predictions of the required output type.","category":"page"},{"location":"outlier_detection_models/","page":"Outlier detection models","title":"Outlier detection models","text":"note: External outlier detection packages\nOutlierDetection.jl provides an opinionated interface on top of MLJ for outlier detection models, standardizing things like class names, dealing with training scores, score normalization and more.","category":"page"},{"location":"reference/#Reference","page":"Reference","title":"Reference","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"Pages = [\"reference.md\"]","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Modules = [MLJModelInterface,]\nPrivate = false\nOrder = [:constant, :type, :function, :macro, :module]","category":"page"},{"location":"reference/#MLJModelInterface.UnivariateFinite","page":"Reference","title":"MLJModelInterface.UnivariateFinite","text":"UnivariateFinite(\n support,\n probs;\n pool=nothing,\n augment=false,\n ordered=false\n)\n\nConstruct a discrete univariate distribution whose finite support is the elements of the vector support, and whose corresponding probabilities are elements of the vector probs. Alternatively, construct an abstract array of UnivariateFinite distributions by choosing probs to be an array of one higher dimension than the array generated.\n\nHere the word \"probabilities\" is an abuse of terminology as there is no requirement that probabilities actually sum to one, only that they be non-negative. So UnivariateFinite objects actually implement arbitrary non-negative measures over finite sets of labelled points. A UnivariateFinite distribution will be a bona fide probability measure when constructed using the augment=true option (see below) or when fit to data.\n\nUnless pool is specified, support should have type AbstractVector{<:CategoricalValue} and all elements are assumed to share the same categorical pool, which may be larger than support.\n\nImportant. All levels of the common pool have associated probabilities, not just those in the specified support. However, these probabilities are always zero (see example below).\n\nIf probs is a matrix, it should have a column for each class in support (or one less, if augment=true). 
More generally, probs will be an array whose size is of the form (n1, n2, ..., nk, c), where c = length(support) (or one less, if augment=true) and the constructor then returns an array of UnivariateFinite distributions of size (n1, n2, ..., nk).\n\nExamples\n\njulia> v = categorical([\"x\", \"x\", \"y\", \"x\", \"z\"])\n5-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"x\"\n \"y\"\n \"x\"\n \"z\"\n\njulia> UnivariateFinite(classes(v), [0.2, 0.3, 0.5])\nUnivariateFinite{Multiclass{3}}(x=>0.2, y=>0.3, z=>0.5)\n\njulia> d = UnivariateFinite([v[1], v[end]], [0.1, 0.9])\nUnivariateFinite{Multiclass{3}}(x=>0.1, z=>0.9)\n\njulia> rand(d, 3)\n3-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"z\"\n \"x\"\n\njulia> levels(d)\n3-element Vector{String}:\n \"x\"\n \"y\"\n \"z\"\n\njulia> pdf(d, \"y\")\n0.0\n\n\nSpecifying a pool\n\nAlternatively, support may be a list of raw (non-categorical) elements if pool is:\n\nsome CategoricalArray, CategoricalValue or CategoricalPool, such that support is a subset of levels(pool)\nmissing, in which case a new categorical pool is created which has support as its only levels.\n\nIn the last case, specify ordered=true if the pool is to be considered ordered.\n\njulia> UnivariateFinite([\"x\", \"z\"], [0.1, 0.9], pool=missing, ordered=true)\nUnivariateFinite{OrderedFactor{2}}(x=>0.1, z=>0.9)\n\njulia> d = UnivariateFinite([\"x\", \"z\"], [0.1, 0.9], pool=v) # v defined above\nUnivariateFinite{Multiclass{3}}(x=>0.1, z=>0.9)\n\njulia> pdf(d, \"y\") # allowed as `\"y\" in levels(v)`\n0.0\n\njulia> v = categorical([\"x\", \"x\", \"y\", \"x\", \"z\", \"w\"])\n6-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"x\"\n \"x\"\n \"y\"\n \"x\"\n \"z\"\n \"w\"\n\njulia> probs = rand(100, 3); probs = probs ./ sum(probs, dims=2);\n\njulia> UnivariateFinite([\"x\", \"y\", \"z\"], probs, pool=v)\n100-element UnivariateFiniteVector{Multiclass{4}, String, UInt32, Float64}:\n UnivariateFinite{Multiclass{4}}(x=>0.194, y=>0.3, z=>0.505)\n UnivariateFinite{Multiclass{4}}(x=>0.727, y=>0.234, z=>0.0391)\n UnivariateFinite{Multiclass{4}}(x=>0.674, y=>0.00535, z=>0.321)\n ⋮\n UnivariateFinite{Multiclass{4}}(x=>0.292, y=>0.339, z=>0.369)\n\nProbability augmentation\n\nIf augment=true the provided array is augmented by inserting appropriate elements ahead of those provided, along the last dimension of the array. This means the user only provides probabilities for the classes c2, c3, ..., cn. The class c1 probabilities are chosen so that each UnivariateFinite distribution in the returned array is a bona fide probability distribution.\n\n\n\nUnivariateFinite(prob_given_class; pool=nothing, ordered=false)\n\nConstruct a discrete univariate distribution whose finite support is the set of keys of the provided dictionary, prob_given_class, and whose values specify the corresponding probabilities.\n\nThe type requirements on the keys of the dictionary are the same as the elements of support given above with this exception: if non-categorical elements (raw labels) are used as keys, then pool=... 
must be specified and cannot be missing.\n\nIf the values (probabilities) are arrays instead of scalars, then an abstract array of UnivariateFinite elements is created, with the same size as the array.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.classes-Tuple{Any}","page":"Reference","title":"MLJModelInterface.classes","text":"classes(x)\n\nAll the categorical elements with the same pool as x (including x), returned as a list, with an ordering consistent with the pool. Here x has CategoricalValue type, and classes(x) is a vector of the same eltype. Note that x in classes(x) is always true.\n\nNot to be confused with levels(x.pool). See the example below.\n\njulia> v = categorical([\"c\", \"b\", \"c\", \"a\"])\n4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"c\"\n \"b\"\n \"c\"\n \"a\"\n\njulia> levels(v)\n3-element Vector{String}:\n \"a\"\n \"b\"\n \"c\"\n\njulia> x = v[4]\nCategoricalArrays.CategoricalValue{String, UInt32} \"a\"\n\njulia> classes(x)\n3-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"a\"\n \"b\"\n \"c\"\n\njulia> levels(x.pool)\n3-element Vector{String}:\n \"a\"\n \"b\"\n \"c\"\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.decoder-Tuple{Any}","page":"Reference","title":"MLJModelInterface.decoder","text":"decoder(x)\n\nReturn a callable object for decoding the integer representation of a CategoricalValue sharing the same pool as the CategoricalValue x. Specifically, one has decoder(x)(int(y)) == y for all CategoricalValues y having the same pool as x. One can also call decoder(x) on integer arrays, in which case decoder(x) is broadcast over all elements.\n\nExamples\n\njulia> v = categorical([\"c\", \"b\", \"c\", \"a\"])\n4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"c\"\n \"b\"\n \"c\"\n \"a\"\n\njulia> int(v)\n4-element Vector{UInt32}:\n 0x00000003\n 0x00000002\n 0x00000003\n 0x00000001\n\njulia> d = decoder(v[3]);\n\njulia> d(int(v)) == v\ntrue\n\nWarning:\n\nIt is not true that int(d(u)) == u always holds.\n\nSee also: int.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.evaluate","page":"Reference","title":"MLJModelInterface.evaluate","text":"Some meta-models may choose to implement the evaluate operation.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.fit","page":"Reference","title":"MLJModelInterface.fit","text":"MLJModelInterface.fit(model, verbosity, data...) -> fitresult, cache, report\n\nAll models must implement a fit method. Here data is the output of reformat on user-provided data, or some resampling thereof. The fallback of reformat returns the user-provided data (eg, a table).\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.fitted_params-Tuple{Model, Any}","page":"Reference","title":"MLJModelInterface.fitted_params","text":"fitted_params(model, fitresult) -> human_readable_fitresult # named_tuple\n\nModels may overload fitted_params. The fallback returns (fitresult=fitresult,).\n\nOther training-related outcomes should be returned in the report part of the tuple returned by fit.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.int-Tuple{Any}","page":"Reference","title":"MLJModelInterface.int","text":"int(x)\n\nThe positional integer of the CategoricalString or CategoricalValue x, in the ordering defined by the pool of x. 
The type of int(x) is the reference type of x.\n\nNot to be confused with x.ref, which is unchanged by reordering of the pool of x, but has the same type.\n\nint(X::CategoricalArray)\nint(W::Array{<:CategoricalString})\nint(W::Array{<:CategoricalValue})\n\nBroadcasted versions of int.\n\njulia> v = categorical([\"c\", \"b\", \"c\", \"a\"])\n4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:\n \"c\"\n \"b\"\n \"c\"\n \"a\"\n\njulia> levels(v)\n3-element Vector{String}:\n \"a\"\n \"b\"\n \"c\"\n\njulia> int(v)\n4-element Vector{UInt32}:\n 0x00000003\n 0x00000002\n 0x00000003\n 0x00000001\n\nSee also: decoder.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.inverse_transform","page":"Reference","title":"MLJModelInterface.inverse_transform","text":"Unsupervised models may implement the inverse_transform operation.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.is_same_except-Tuple{Any, Any}","page":"Reference","title":"MLJModelInterface.is_same_except","text":"is_same_except(m1, m2, exceptions::Symbol...; deep_properties=Symbol[])\n\nIf both m1 and m2 are of MLJType, return true if the following conditions all hold, and false otherwise:\n\ntypeof(m1) === typeof(m2)\npropertynames(m1) === propertynames(m2)\nwith the exception of properties listed as exceptions or bound to an AbstractRNG, each pair of corresponding property values is either \"equal\" or both undefined. (If a property appears as a propertyname but not a fieldname, it is deemed as always defined.)\n\nThe meaning of \"equal\" depends on the type of the property value:\n\nvalues that are themselves of MLJType are \"equal\" if they are equal in the sense of is_same_except with no exceptions.\nvalues that are not of MLJType are \"equal\" if they are ==.\n\nIn the special case of a \"deep\" property, \"equal\" has a different meaning; see deep_properties for details.\n\nIf m1 or m2 are not MLJType objects, then return ==(m1, m2).\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.isrepresented-Tuple{MLJType, Nothing}","page":"Reference","title":"MLJModelInterface.isrepresented","text":"isrepresented(object::MLJType, objects)\n\nTest if object has a representative in the iterable objects. This is a weaker requirement than object in objects.\n\nHere we say m1 represents m2 if is_same_except(m1, m2) is true.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.matrix-Tuple{Any}","page":"Reference","title":"MLJModelInterface.matrix","text":"matrix(X; transpose=false)\n\nIf X isa AbstractMatrix, return X or permutedims(X) if transpose=true. 
Otherwise if X is a Tables.jl compatible table source, convert X into a Matrix.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.metadata_model-Tuple{Any}","page":"Reference","title":"MLJModelInterface.metadata_model","text":"metadata_model(T; args...)\n\nHelper function to write the metadata for a model T.\n\nKeywords\n\ninput_scitype=Unknown: allowed scientific type of the input data\ntarget_scitype=Unknown: allowed scitype of the target (supervised)\noutput_scitype=Unknown: allowed scitype of the transformed data (unsupervised)\nsupports_weights=false: whether the model supports sample weights\nsupports_class_weights=false: whether the model supports class weights\nload_path=\"unknown\": where the model is (usually PackageName.ModelName)\nhuman_name=nothing: human name of the model\nsupports_training_losses=nothing: whether the (necessarily iterative) model can report training losses\nreports_feature_importances=nothing: whether the model reports feature importances\n\nExample\n\nmetadata_model(KNNRegressor,\n input_scitype=MLJModelInterface.Table(MLJModelInterface.Continuous),\n target_scitype=AbstractVector{MLJModelInterface.Continuous},\n supports_weights=true,\n load_path=\"NearestNeighbors.KNNRegressor\")\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.metadata_pkg-Tuple{Any}","page":"Reference","title":"MLJModelInterface.metadata_pkg","text":"metadata_pkg(T; args...)\n\nHelper function to write the metadata for a package providing model T. Use it with broadcasting to define the metadata of the package providing a series of models.\n\nKeywords\n\npackage_name=\"unknown\" : package name\npackage_uuid=\"unknown\" : package uuid\npackage_url=\"unknown\" : package url\nis_pure_julia=missing : whether the package is pure julia\npackage_license=\"unknown\": package license\nis_wrapper=false : whether the package is a wrapper\n\nExample\n\nmetadata_pkg.((KNNRegressor, KNNClassifier),\n package_name=\"NearestNeighbors\",\n package_uuid=\"b8a86587-4115-5ab1-83bc-aa920d37bbce\",\n package_url=\"https://github.com/KristofferC/NearestNeighbors.jl\",\n is_pure_julia=true,\n package_license=\"MIT\",\n is_wrapper=false)\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.nrows-Tuple{Any}","page":"Reference","title":"MLJModelInterface.nrows","text":"nrows(X)\n\nReturn the number of rows for a table, AbstractVector or AbstractMatrix, X.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.params-Tuple{Any}","page":"Reference","title":"MLJModelInterface.params","text":"params(m::MLJType)\n\nRecursively convert any transparent object m into a named tuple, keyed on the fields of m. An object is transparent if MLJModelInterface.istransparent(m) == true. The named tuple is possibly nested because params is recursively applied to the field values, which themselves might be transparent.\n\nMost objects of type MLJType are transparent.\n\njulia> params(EnsembleModel(model=ConstantClassifier()))\n(model = (target_type = Bool,),\n weights = Float64[],\n bagging_fraction = 0.8,\n rng_seed = 0,\n n = 100,\n parallel = true,)\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.predict","page":"Reference","title":"MLJModelInterface.predict","text":"predict(model, fitresult, new_data...)\n\nSupervised and SupervisedAnnotator models must implement the predict operation. 
Here new_data is the output of reformat called on user-specified data.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.predict_joint","page":"Reference","title":"MLJModelInterface.predict_joint","text":"JointProbabilistic supervised models MUST overload predict_joint.\n\nProbabilistic supervised models MAY overload predict_joint.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.predict_mean","page":"Reference","title":"MLJModelInterface.predict_mean","text":"Model types M for which prediction_type(M) == :probabilistic may overload predict_mean.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.predict_median","page":"Reference","title":"MLJModelInterface.predict_median","text":"Model types M for which prediction_type(M) == :probabilistic may overload predict_median.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.predict_mode","page":"Reference","title":"MLJModelInterface.predict_mode","text":"Model types M for which prediction_type(M) == :probabilistic may overload predict_mode.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.reformat-Tuple{Model, Vararg{Any}}","page":"Reference","title":"MLJModelInterface.reformat","text":"MLJModelInterface.reformat(model, args...) -> data\n\nModels optionally overload reformat to define transformations of user-supplied data into some model-specific representation (e.g., from a table to a matrix). When implemented, the MLJ user can avoid repeating such transformations unnecessarily, and can additionally make use of more efficient row subsampling, which is then based on the model-specific representation of data, rather than the user-representation. When reformat is overloaded, selectrows(::Model, ...) must be as well (see selectrows). Furthermore, the model fit method(s), and operations, such as predict and transform, must be refactored to act on the model-specific representations of the data.\n\nTo implement the reformat data front-end for a model, refer to \"Implementing a data front-end\" in the MLJ manual.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.scitype-Tuple{Any}","page":"Reference","title":"MLJModelInterface.scitype","text":"scitype(X)\n\nThe scientific type (interpretation) of X, distinct from its machine type.\n\nExamples\n\njulia> scitype(3.14)\nContinuous\n\njulia> scitype([1, 2, missing])\nAbstractVector{Union{Missing, Count}} \n\njulia> scitype((5, \"beige\"))\nTuple{Count, Textual}\n\njulia> using CategoricalArrays\n\njulia> X = (gender = categorical(['M', 'M', 'F', 'M', 'F']),\n ndevices = [1, 3, 2, 3, 2]);\n\njulia> scitype(X)\nTable{Union{AbstractVector{Count}, AbstractVector{Multiclass{2}}}}\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.select","page":"Reference","title":"MLJModelInterface.select","text":"select(X, r, c)\n\nSelect element(s) of a table or matrix at row(s) r and column(s) c. An object of the sink type of X (or a matrix) is returned unless c is a single integer or symbol. In that case a vector is returned, unless r is a single integer, in which case a single element is returned.\n\nSee also: selectrows, selectcols.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.selectcols","page":"Reference","title":"MLJModelInterface.selectcols","text":"selectcols(X, c)\n\nSelect single or multiple columns from a matrix or table X. 
If c is an abstract vector of integers or symbols, then the object returned is a table of the preferred sink type of typeof(X). If c is a single integer or symbol, then an AbstractVector is returned.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.selectrows","page":"Reference","title":"MLJModelInterface.selectrows","text":"selectrows(X, r)\n\nSelect single or multiple rows from a table, abstract vector or matrix X. If X is tabular, the object returned is a table of the preferred sink type of typeof(X), even if only a single row is selected.\n\nIf the object is neither a table, abstract vector or matrix, X is returned and r is ignored.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.selectrows-Tuple{Model, Any, Vararg{Any}}","page":"Reference","title":"MLJModelInterface.selectrows","text":"MLJModelInterface.selectrows(::Model, I, data...) -> sampled_data\n\nA model overloads selectrows whenever it buys into the optional reformat front-end for data preprocessing. See reformat for details. The fallback assumes data is a tuple and calls selectrows(X, I) for each X in data, returning the results in a new tuple of the same length. This call makes sense when X is a table, abstract vector or abstract matrix. In the last two cases, a new object and not a view is returned.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.table-Tuple{Any}","page":"Reference","title":"MLJModelInterface.table","text":"table(columntable; prototype=nothing)\n\nConvert a named tuple of vectors or tuples columntable, into a table of the \"preferred sink type\" of prototype. This is often the type of prototype itself, when prototype is a sink; see the Tables.jl documentation. If prototype is not specified, then a named tuple of vectors is returned.\n\ntable(A::AbstractMatrix; names=nothing, prototype=nothing)\n\nWrap an abstract matrix A as a Tables.jl compatible table with the specified column names (a tuple of symbols). If names are not specified, names=(:x1, :x2, ..., :xn) is used, where n=size(A, 2).\n\nIf a prototype is specified, then the matrix is materialized as a table of the preferred sink type of prototype, rather than wrapped. Note that if prototype is not specified, then matrix(table(A)) is essentially a no-op.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.training_losses-Tuple{Any, Any}","page":"Reference","title":"MLJModelInterface.training_losses","text":"MLJModelInterface.training_losses(model::M, report)\n\nIf M is an iterative model type which calculates training losses, implement this method to return an AbstractVector of the losses in historical order. If the model calculates scores instead, then the sign of the scores should be reversed.\n\nThe following trait overload is also required: MLJModelInterface.supports_training_losses(::Type{<:M}) = true.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.transform","page":"Reference","title":"MLJModelInterface.transform","text":"Unsupervised models must implement the transform operation.\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.update-Tuple{Model, Any, Any, Any, Vararg{Any}}","page":"Reference","title":"MLJModelInterface.update","text":"MLJModelInterface.update(model, verbosity, fitresult, cache, data...)\n\nModels may optionally implement an update method. 
The fallback calls fit.\n\n\n\n\n\n","category":"method"},{"location":"reference/#StatisticalTraits.deep_properties","page":"Reference","title":"StatisticalTraits.deep_properties","text":"deep_properties(::Type{<:MLJType})\n\nGiven an MLJType subtype M, the value of this trait should be a tuple of any properties of M to be regarded as \"deep\".\n\nWhen two instances of type M are to be tested for equality, in the sense of == or is_same_except, then the values of a \"deep\" property (whose values are assumed to be of composite type) are deemed to agree if all corresponding properties of those property values are ==.\n\nAny property of M whose values are themselves of MLJType are \"deep\" automatically, and should not be included in the trait return value.\n\nSee also is_same_except\n\nExample\n\nConsider an MLJType subtype Foo, with a single field of type Bar which is not a subtype of MLJType:\n\nmutable struct Bar\n x::Int\nend\n\nmutable struct Foo <: MLJType\n bar::Bar\nend\n\nThen the mutability of Bar implies Bar(1) != Bar(1) and so, by the definition of == for MLJType objects (see is_same_except) we have\n\nFoo(Bar(1)) != Foo(Bar(1))\n\nHowever after the declaration\n\nMLJModelInterface.deep_properties(::Type{<:Foo}) = (:bar,)\n\nWe have\n\nFoo(Bar(1)) == Foo(Bar(1))\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.@mlj_model-Tuple{Any}","page":"Reference","title":"MLJModelInterface.@mlj_model","text":"@mlj_model\n\nMacro to help define MLJ models with constraints on the default parameters.\n\n\n\n\n\n","category":"macro"},{"location":"reference/","page":"Reference","title":"Reference","text":"Modules = [MLJModelInterface,]\nPublic = false\nOrder = [:constant, :type, :function, :macro, :module]","category":"page"},{"location":"reference/#MLJModelInterface._model_cleaner-Tuple{Any, Any, Any}","page":"Reference","title":"MLJModelInterface._model_cleaner","text":"_model_cleaner(modelname, defaults, constraints)\n\nBuild the expression of the cleaner associated with the constraints specified in a model def.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface._model_constructor-Tuple{Any, Any, Any}","page":"Reference","title":"MLJModelInterface._model_constructor","text":"_model_constructor(modelname, params, defaults)\n\nBuild the expression of the keyword constructor associated with a model definition. When the constructor is called, the clean! function is called as well to check that parameter assignments are valid.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface._process_model_def-Tuple{Any, Any}","page":"Reference","title":"MLJModelInterface._process_model_def","text":"_process_model_def(modl, ex)\n\nTake an expression defining a model (mutable struct Model ...) and unpack key elements for further processing:\n\nModel name (modelname)\nNames of parameters (params)\nDefault values (defaults)\nConstraints (constraints)\n\nWhen no default field value is given, a heuristic is used to guess an appropriate default (eg, zero for a Float64 parameter). To this end, the specified type expression is evaluated in the module modl.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface._unpack!-Tuple{Expr, Any}","page":"Reference","title":"MLJModelInterface._unpack!","text":"_unpack!(ex, rep)\n\nInternal function to read a constraint given after a default value for a parameter and transform it into an executable condition (which is returned to be executed later). 
For instance, if we have\n\nalpha::Int = 0.5::(arg > 0.0)\n\nThen it would transform (arg > 0.0) into (alpha > 0.0), which is executable.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.doc_header-Tuple{Any}","page":"Reference","title":"MLJModelInterface.doc_header","text":"MLJModelInterface.doc_header(SomeModelType; augment=false)\n\nReturn a string suitable for interpolation in the document string of an MLJ model type. In the example given below, the header expands to something like this:\n\nFooRegressorA model type for constructing a foo regressor, based on FooRegressorPkg.jl.From MLJ, the type can be imported usingFooRegressor = @load FooRegressor pkg=FooRegressorPkgConstruct an instance with default hyper-parameters using the syntax model = FooRegressor(). Provide keyword arguments to override hyper-parameter defaults, as in FooRegressor(a=...).\n\nOrdinarily, doc_header is used in document strings defined after the model type definition, as doc_header assumes model traits (in particular, package_name and package_url) to be defined; see also MLJModelInterface.metadata_pkg.\n\nExample\n\nSuppose a model type and traits have been defined by:\n\nmutable struct FooRegressor\n a::Int\n b::Float64\nend\n\nmetadata_pkg(FooRegressor,\n name=\"FooRegressorPkg\",\n uuid=\"10745b16-79ce-11e8-11f9-7d13ad32a3b2\",\n url=\"http://existentialcomics.com/\",\n )\nmetadata_model(FooRegressor,\n input=Table(Continuous),\n target=AbstractVector{Continuous})\n\nThen the docstring is defined after these declarations with the following code:\n\n\"\"\"\n$(MLJModelInterface.doc_header(FooRegressor))\n\n### Training data\n\nIn MLJ or MLJBase, bind an instance `model` ...\n\n\n\n\"\"\"\nFooRegressor\n\n\nVariation to augment existing document string\n\nFor models that have a native API with separate documentation, one may want to call doc_header(FooRegressor, augment=true) instead. In that case, the output will look like this:\n\nFrom MLJ, the FooRegressor type can be imported usingFooRegressor = @load FooRegressor pkg=FooRegressorPkgConstruct an instance with default hyper-parameters using the syntax model = FooRegressor(). Provide keyword arguments to override hyper-parameter defaults, as in FooRegressor(a=...).\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.feature_importances","page":"Reference","title":"MLJModelInterface.feature_importances","text":"feature_importances(model::M, fitresult, report)\n\nFor a given model of model type M supporting intrinsic feature importances, calculate the feature importances from the model's fitresult and report as an abstract vector of feature::Symbol => importance::Real pairs (e.g. [:gender => 0.23, :height => 0.7, :weight => 0.1]).\n\nNew model implementations\n\nThe following trait overload is also required: MLJModelInterface.reports_feature_importances(::Type{<:M}) = true\n\nIf for some reason a model is sometimes unable to report feature importances, then feature_importances should return all importances as 0.0, as in [:gender => 0.0, :height => 0.0, :weight => 0.0].\n\n\n\n\n\n","category":"function"},{"location":"reference/#MLJModelInterface.flat_params-Tuple{Any}","page":"Reference","title":"MLJModelInterface.flat_params","text":"flat_params(m::Model)\n\nDeconstruct any Model instance m as a flat named tuple, keyed on property names. Properties of nested model instances are recursively exposed, as shown in the example below. 
For most Model objects, properties are synonymous with fields, but this is not a hard requirement.\n\njulia> using MLJModels\njulia> using EnsembleModels\njulia> tree = (@load DecisionTreeClassifier pkg=DecisionTree)();\n\njulia> flat_params(EnsembleModel(model=tree))\n(model__max_depth = -1,\n model__min_samples_leaf = 1,\n model__min_samples_split = 2,\n model__min_purity_increase = 0.0,\n model__n_subfeatures = 0,\n model__post_prune = false,\n model__merge_purity_threshold = 1.0,\n model__display_depth = 5,\n model__feature_importance = :impurity,\n model__rng = Random._GLOBAL_RNG(),\n atomic_weights = Float64[],\n bagging_fraction = 0.8,\n rng = Random._GLOBAL_RNG(),\n n = 100,\n acceleration = CPU1{Nothing}(nothing),\n out_of_bag_measure = Any[],)\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.istable-Tuple{Any}","page":"Reference","title":"MLJModelInterface.istable","text":"istable(X)\n\nReturn true if X is tabular.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.report-Tuple{Any, Any}","page":"Reference","title":"MLJModelInterface.report","text":"MLJModelInterface.report(model, report_given_method)\n\nMerge the reports in the dictionary report_given_method into a single property-accessible object. It is supposed that each key of the dictionary is either :fit or the name of an operation, such as :predict or :transform. Each value will be the report component returned by a training method (fit or update) dispatched on the model type, in the case of :fit, or the report component returned by an operation that supports reporting.\n\nNew model implementations\n\nOverloading this method is optional, unless the model generates reports that are neither named tuples nor nothing.\n\nAssuming each value in the report_given_method dictionary is either a named tuple or nothing, and there are no conflicts between the keys of the dictionary values (the individual reports), the fallback returns the usual named tuple merge of the dictionary values, ignoring any nothing value. If there is a key conflict, all operation reports are first wrapped in a named tuple of length one, as in (predict=predict_report,). A :fit report is never wrapped.\n\nIf any dictionary value is neither a named tuple nor nothing, it is first wrapped as (report=value, ) before merging.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.schema-Tuple{Any}","page":"Reference","title":"MLJModelInterface.schema","text":"schema(X)\n\nInspect the column types and scitypes of a tabular object, returning nothing if the column types and scitypes can't be inspected.\n\n\n\n\n\n","category":"method"},{"location":"reference/#MLJModelInterface.synthesize_docstring-Tuple{Any}","page":"Reference","title":"MLJModelInterface.synthesize_docstring","text":"synthesize_docstring\n\nPrivate method.\n\nGenerates a value for the docstring trait for use with a model which does not have a standard document string, to use as the fallback. 
See metadata_model.\n\n\n\n\n\n","category":"method"},{"location":"training_losses/#Training-losses","page":"Training losses","title":"Training losses","text":"","category":"section"},{"location":"training_losses/","page":"Training losses","title":"Training losses","text":"MLJModelInterface.training_losses","category":"page"},{"location":"training_losses/#MLJModelInterface.training_losses-training_losses","page":"Training losses","title":"MLJModelInterface.training_losses","text":"MLJModelInterface.training_losses(model::M, report)\n\nIf M is an iterative model type which calculates training losses, implement this method to return an AbstractVector of the losses in historical order. If the model calculates scores instead, then the sign of the scores should be reversed.\n\nThe following trait overload is also required: MLJModelInterface.supports_training_losses(::Type{<:M}) = true.\n\n\n\n\n\n","category":"function"},{"location":"training_losses/","page":"Training losses","title":"Training losses","text":"Trait values can also be set using the metadata_model method; see below.","category":"page"},{"location":"supervised_models_with_transform/#Supervised-models-with-a-transform-method","page":"Supervised models with a transform method","title":"Supervised models with a transform method","text":"","category":"section"},{"location":"supervised_models_with_transform/","page":"Supervised models with a transform method","title":"Supervised models with a transform method","text":"A supervised model may optionally implement a transform method, whose signature is the same as predict. In that case, the implementation should define a value for the output_scitype trait. A declaration","category":"page"},{"location":"supervised_models_with_transform/","page":"Supervised models with a transform method","title":"Supervised models with a transform method","text":"output_scitype(::Type{<:SomeSupervisedModel}) = T","category":"page"},{"location":"supervised_models_with_transform/","page":"Supervised models with a transform method","title":"Supervised models with a transform method","text":"is an assurance that scitype(transform(model, fitresult, Xnew)) <: T always holds, for any model of type SomeSupervisedModel.","category":"page"},{"location":"supervised_models_with_transform/","page":"Supervised models with a transform method","title":"Supervised models with a transform method","text":"A use-case for a transform method for a supervised model is a neural network that learns feature embeddings for categorical input features as part of overall training. Such a model becomes a transformer that other supervised models can use to transform the categorical features (instead of applying the higher-dimensional one-hot encoding representations).","category":"page"},{"location":"document_strings/#Document-strings","page":"Document strings","title":"Document strings","text":"","category":"section"},{"location":"document_strings/","page":"Document strings","title":"Document strings","text":"To be registered, MLJ models must include a detailed document string for the model type, and this must conform to the standard outlined below. We recommend you simply adapt an existing compliant document string, reading the requirements below if you're not sure, or using them as a checklist. 
Here are examples of compliant doc-strings (go to the end of the linked files):","category":"page"},{"location":"document_strings/","page":"Document strings","title":"Document strings","text":"Regular supervised models (classifiers and regressors): MLJDecisionTreeInterface.jl (see the end of the file)\nTransformers: MLJModels.jl","category":"page"},{"location":"document_strings/","page":"Document strings","title":"Document strings","text":"A utility function is available for generating a standardized header for your doc-strings (but you provide most detail by hand):","category":"page"},{"location":"document_strings/","page":"Document strings","title":"Document strings","text":"MLJModelInterface.doc_header","category":"page"},{"location":"document_strings/#MLJModelInterface.doc_header","page":"Document strings","title":"MLJModelInterface.doc_header","text":"MLJModelInterface.doc_header(SomeModelType; augment=false)\n\nReturn a string suitable for interpolation in the document string of an MLJ model type. In the example given below, the header expands to something like this:\n\nFooRegressorA model type for constructing a foo regressor, based on FooRegressorPkg.jl.From MLJ, the type can be imported usingFooRegressor = @load FooRegressor pkg=FooRegressorPkgConstruct an instance with default hyper-parameters using the syntax model = FooRegressor(). Provide keyword arguments to override hyper-parameter defaults, as in FooRegressor(a=...).\n\nOrdinarily, doc_header is used in document strings defined after the model type definition, as doc_header assumes model traits (in particular, package_name and package_url) to be defined; see also MLJModelInterface.metadata_pkg.\n\nExample\n\nSuppose a model type and traits have been defined by:\n\nmutable struct FooRegressor\n a::Int\n b::Float64\nend\n\nmetadata_pkg(FooRegressor,\n name=\"FooRegressorPkg\",\n uuid=\"10745b16-79ce-11e8-11f9-7d13ad32a3b2\",\n url=\"http://existentialcomics.com/\",\n )\nmetadata_model(FooRegressor,\n input=Table(Continuous),\n target=AbstractVector{Continuous})\n\nThen the docstring is defined after these declarations with the following code:\n\n\"\"\"\n$(MLJModelInterface.doc_header(FooRegressor))\n\n### Training data\n\nIn MLJ or MLJBase, bind an instance `model` ...\n\n\n\n\"\"\"\nFooRegressor\n\n\nVariation to augment existing document string\n\nFor models that have a native API with separate documentation, one may want to call doc_header(FooRegressor, augment=true) instead. In that case, the output will look like this:\n\nFrom MLJ, the FooRegressor type can be imported usingFooRegressor = @load FooRegressor pkg=FooRegressorPkgConstruct an instance with default hyper-parameters using the syntax model = FooRegressor(). Provide keyword arguments to override hyper-parameter defaults, as in FooRegressor(a=...).\n\n\n\n\n\n","category":"function"},{"location":"document_strings/#The-document-string-standard","page":"Document strings","title":"The document string standard","text":"","category":"section"},{"location":"document_strings/","page":"Document strings","title":"Document strings","text":"Your document string must include the following components, in order:","category":"page"},{"location":"document_strings/","page":"Document strings","title":"Document strings","text":"A header, closely matching the example given above.\nA reference describing the algorithm or an actual description of the algorithm, if necessary. Detail any non-standard aspects of the implementation. 
Generally, defer details on the role of hyperparameters to the \"Hyperparameters\" section (see below).\nInstructions on how to import the model type from MLJ (because a user can already inspect the doc-string in the Model Registry, without having loaded the code-providing package).\nInstructions on how to instantiate with default hyperparameters or with keywords.\nA Training data section: explains how to bind a model to data in a machine with all possible signatures (eg, machine(model, X, y) but also machine(model, X, y, w) if, say, weights are supported); the role and scitype requirements for each data argument should be itemized.\nInstructions on how to fit the machine (in the same section).\nA Hyperparameters section (unless there aren't any): an itemized list of the parameters, with defaults given.\nAn Operations section: each implemented operation (predict, predict_mode, transform, inverse_transform, etc.) is itemized and explained. This should include operations with no data arguments, such as training_losses and feature_importances.\nA Fitted parameters section: To explain what is returned by fitted_params(mach) (the same as MLJModelInterface.fitted_params(model, fitresult) - see later) with the fields of that named tuple itemized.\nA Report section (if report is non-empty): To explain what, if anything, is included in the report(mach) (the same as the report return value of MLJModelInterface.fit) with the fields itemized.\nAn optional but highly recommended Examples section, which includes MLJ examples, but which could also include others if the model type also implements a second \"local\" interface, i.e., defined in the same module. (Note that each module referring to a type can declare separate doc-strings which appear concatenated in doc-string queries.)\nA closing \"See also\" sentence which includes a @ref link to the raw model type (if you are wrapping one).","category":"page"},{"location":"feature_importances/#Feature-importances","page":"Feature importances","title":"Feature importances","text":"","category":"section"},{"location":"feature_importances/","page":"Feature importances","title":"Feature importances","text":"MLJModelInterface.feature_importances","category":"page"},{"location":"feature_importances/#MLJModelInterface.feature_importances-feature_importances","page":"Feature importances","title":"MLJModelInterface.feature_importances","text":"feature_importances(model::M, fitresult, report)\n\nFor a given model of model type M supporting intrinsic feature importances, calculate the feature importances from the model's fitresult and report as an abstract vector of feature::Symbol => importance::Real pairs (e.g. [:gender => 0.23, :height => 0.7, :weight => 0.1]).\n\nNew model implementations\n\nThe following trait overload is also required: MLJModelInterface.reports_feature_importances(::Type{<:M}) = true\n\nIf for some reason a model is sometimes unable to report feature importances, then feature_importances should return all importances as 0.0, as in [:gender => 0.0, :height => 0.0, :weight => 0.0].\n\n\n\n\n\n","category":"function"},{"location":"feature_importances/","page":"Feature importances","title":"Feature importances","text":"Trait values can also be set using the metadata_model method; see below.","category":"page"},{"location":"#Adding-Models-for-General-Use","page":"Home","title":"Adding Models for General Use","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"The machine learning tools provided by MLJ can be applied to the 

Trait values can also be set using the metadata_model method; see below.

Adding Models for General Use

The machine learning tools provided by MLJ can be applied to models in any package that imports MLJModelInterface and implements the API defined there, as outlined in this document.

Tip

This is a reference document, which has become rather sprawling over the evolution of the MLJ project. We recommend starting with the Quick start guide, which covers the main points relevant to most new model implementations. Most topics are only detailed for Supervised models, so if you are implementing another kind of model, you may still need to refer to the Supervised models section.

Interface code can be hosted by the package providing the core machine learning algorithm, or by a stand-alone "interface-only" package, using the template MLJExampleInterface.jl (see Where to place code implementing new models below). For a list of packages implementing the MLJ model API (natively, and in interface packages), see here.

Important

MLJModelInterface is a very light-weight interface allowing you to define your interface, but it does not provide the functionality required to use or test your interface; this requires MLJBase. So, while you only need to add MLJModelInterface to your project's [deps], for testing purposes you need to add MLJBase to your project's [extras] and [targets]. In testing, simply use MLJBase in place of MLJModelInterface.

It is assumed the reader has read the Getting Started section of the MLJ manual. To implement the API described here, some familiarity with the following packages is also helpful:

  • ScientificTypes.jl (for specifying model requirements of data)

  • Distributions.jl (for probabilistic predictions)

  • CategoricalArrays.jl (essential if you are implementing a model handling data of Multiclass or OrderedFactor scitype; familiarity with CategoricalPool objects required)

  • Tables.jl (if your algorithm needs input data in a novel format).

In MLJ, the basic interface exposed to the user, built atop the model interface described here, is the machine interface. After a first reading of this document, the reader may wish to refer to MLJ Internals for context.

Serialization

New in MLJBase 0.20

The following API is incompatible with versions of MLJBase < 0.20, even for model implementations compatible with MLJModelInterface 1.

This section may be occasionally relevant when wrapping models implemented in languages other than Julia.

The MLJ user can serialize and deserialize machines, as they would any other Julia object. (The user has the option of first removing data from the machine. See the Saving machines section of the MLJ manual for details.) However, a problem can occur if a model's fitresult (see The fit method) is not a persistent object. For example, it might be a C pointer that would have no meaning in a new Julia session.

If that is the case, the model implementation needs to implement save and restore methods for switching between a fitresult and some persistent, serializable representation of that result.

The save method

MMI.save(model::SomeModel, fitresult; kwargs...) -> serializable_fitresult

Implement this method to return a persistent serializable representation of the fitresult component of the MMI.fit return value.

The fallback of save performs no action and returns fitresult.

The restore method

MMI.restore(model::SomeModel, serializable_fitresult) -> fitresult

Implement this method to reconstruct a valid fitresult (as would be returned by MMI.fit) from a persistent representation constructed using MMI.save as described above.

The fallback of restore performs no action and returns serializable_fitresult.
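
For instance, here is a minimal sketch for a model whose fitresult wraps a handle into an external (non-Julia) library; serialize_handle and deserialize_handle are hypothetical helpers assumed to be provided by the wrapped library:

function MMI.save(model::SomeModel, fitresult; kwargs...)
    # convert the non-persistent handle into a plain byte vector:
    return serialize_handle(fitresult)
end

function MMI.restore(model::SomeModel, serializable_fitresult)
    # rebuild a live handle from the persisted bytes:
    return deserialize_handle(serializable_fitresult)
end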

Example

Refer to the model implementations at MLJXGBoostInterface.jl.


Static models

A model type subtypes Static <: Unsupervised if it does not generalize to new data but nevertheless has hyperparameters. See the Static transformers section of the MLJ manual for examples. In the Static case, transform can have multiple arguments and input_scitype refers to the allowed scitype of the slurped data, even if there is only a single argument. For example, if the signature is transform(static_model, X1, X2), then the allowed input_scitype might be Tuple{Table(Continuous), Table(Continuous)}; if the signature is transform(static_model, X), the allowed input_scitype might be Tuple{Table(Continuous)}. The other traits are as for regular Unsupervised models.

Reporting byproducts of a static transformation

As a static transformer does not implement fit, the usual mechanism for creating a report is not available. Instead, byproducts of the computation performed by transform can be returned by transform itself, by returning a pair (output, report) instead of just output. Here report should be a named tuple. In fact, any operation (e.g., predict) can do this for any model type. However, this exceptional behavior must be flagged with an appropriate trait declaration, as in

MLJModelInterface.reporting_operations(::Type{<:SomeModelType}) = (:transform,)

If mach is a machine wrapping a model of this kind, then report(mach) will include the report item from transform's output. For sample implementations, see this issue or the code for DBSCAN clustering.
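
For instance, here is a minimal sketch using a hypothetical static scaler (all names invented for illustration):

import MLJModelInterface as MMI

mutable struct Scaler <: MMI.Static
    factor::Float64
end

function MMI.transform(model::Scaler, ::Nothing, X)
    Xout = model.factor .* MMI.matrix(X)
    report = (nrows=size(Xout, 1),)  # byproduct surfaced in report(mach)
    return (Xout, report)
end

MMI.reporting_operations(::Type{<:Scaler}) = (:transform,)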

diff --git a/dev/summary_of_methods/index.html b/dev/summary_of_methods/index.html index a85f04f..e5360b6 100644 --- a/dev/summary_of_methods/index.html +++ b/dev/summary_of_methods/index.html @@ -12,4 +12,4 @@ MMI.is_pure_julia(::Type{<:SomeSupervisedModel}) = false MMI.package_license(::Type{<:SomeSupervisedModel}) = "unknown"

If SomeSupervisedModel supports sample weights or class weights, then instead of the fit above, one implements

MMI.fit(model::SomeSupervisedModel, verbosity, X, y, w=nothing) -> fitresult, cache, report

and, if appropriate

MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y, w=nothing) =
    MMI.fit(model, verbosity, X, y, w)

Additionally, if SomeSupervisedModel supports sample weights, one must declare

MMI.supports_weights(model::Type{<:SomeSupervisedModel}) = true

Optionally, an implementation may add a data front-end, for transforming user data (such as a table) into some model-specific format (such as a matrix), and/or add methods to specify how reformatted data is resampled. This alters the interpretation of the data arguments of fit, update and predict, whose number may also change (see Implementing a data front-end for details). A data front-end provides the MLJ user certain performance advantages when retraining a machine.

Third-party packages that interact directly with models using the MLJModelInterface.jl API, rather than through the machine interface, will also need to understand how the data front-end works, so that they can incorporate reformat into their fit/update/predict calls. See also this issue.

MLJModelInterface.reformat(model::SomeSupervisedModel, args...) = args
MLJModelInterface.selectrows(model::SomeSupervisedModel, I, data...) = data
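
For example, a front-end that converts tabular input to a transposed matrix (so that observations are stored as columns, a common internal format) might look like this sketch:

MMI.reformat(::SomeSupervisedModel, X, y) =
    (MMI.matrix(X; transpose=true), y)
MMI.reformat(::SomeSupervisedModel, X) = (MMI.matrix(X; transpose=true),)

# observations are columns here, so resampling selects columns of the matrix:
MMI.selectrows(::SomeSupervisedModel, I, Xmatrix, y) =
    (view(Xmatrix, :, I), view(y, I))
MMI.selectrows(::SomeSupervisedModel, I, Xmatrix) = (view(Xmatrix, :, I),)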

Optionally, to customize support for serialization of machines (see Serialization), overload

MMI.save(model::SomeModel, fitresult; kwargs...) = fitresult

and possibly

MMI.restore(model::SomeModel, serializable_fitresult) -> fitresult

These last two are unlikely to be needed if wrapping pure Julia code.


Supervised models

Mathematical assumptions

At present, MLJ's performance estimate functionality (resampling using evaluate/evaluate!) tacitly assumes that feature-label pairs of observations (X1, y1), (X2, y2), (X3, y3), ... are being modelled as independent and identically distributed (i.i.d.) random variables, and constructs some kind of representation of an estimate of the conditional probability p(y | X) (y and X being single observations). It may be that a model implementing the MLJ interface has the potential to make predictions under weaker assumptions (e.g., time series forecasting models). However, the output of the compulsory predict method described below should be the output of the model under the i.i.d. assumption.

In the future, newer methods may be introduced to handle weaker assumptions (see, e.g., The predict_joint method below).

The following sections were written with Supervised models in mind, but also cover material relevant to general models:


Supervised models with a transform method

A supervised model may optionally implement a transform method, whose signature is the same as predict. In that case, the implementation should define a value for the output_scitype trait. A declaration

output_scitype(::Type{<:SomeSupervisedModel}) = T

is an assurance that scitype(transform(model, fitresult, Xnew)) <: T always holds, for any model of type SomeSupervisedModel.

A use-case for a transform method for a supervised model is a neural network that learns feature embeddings for categorical input features as part of overall training. Such a model becomes a transformer that other supervised models can use to transform the categorical features (instead of applying the higher-dimensional one-hot encoding representations).
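
As a sketch (MyEmbedder and the helper embed are hypothetical; the embeddings are returned as a table of Continuous features):

MMI.output_scitype(::Type{<:MyEmbedder}) = MMI.Table(MMI.Continuous)

function MMI.transform(model::MyEmbedder, fitresult, Xnew)
    # `embed` (hypothetical) maps raw features to the learned embedding matrix:
    return MMI.table(embed(fitresult, MMI.matrix(Xnew)))
end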


The fit method

A compulsory fit method returns three objects:

MMI.fit(model::SomeSupervisedModel, verbosity, X, y) -> fitresult, cache, report
  1. fitresult is the fitresult in the sense above (which becomes an argument of predict, discussed below).

  2. report is a (possibly empty) NamedTuple, for example, report=(deviance=..., dof_residual=..., stderror=..., vcov=...). Any training-related statistics, such as internal estimates of the generalization error, and feature rankings, should be returned in the report tuple. How, or if, these are generated should be controlled by hyperparameters (the fields of model). Fitted parameters, such as the coefficients of a linear model, do not go in the report as they will be extractable from fitresult (and accessible to MLJ through the fitted_params method described below).

  3. The value of cache can be nothing, unless one is also defining an update method (see below). The Julia type of cache is not presently restricted.

Note

The fit (and update) methods should not mutate the model. If necessary, fit can create a deepcopy of model first.

It is not necessary for fit to provide type or dimension checks on X or y or to call clean! on the model; MLJ will carry out such checks.

The types of X and y are constrained by the input_scitype and target_scitype trait declarations; see Trait declarations below. (That is, unless a data front-end is implemented, in which case these traits refer instead to the arguments of the overloaded reformat method, and the types of X and y are determined by the output of reformat.)

The method fit should never alter hyperparameter values, the sole exception being fields of type <:AbstractRNG. If the package is able to suggest better hyperparameters, as a byproduct of training, return these in the report field.

The verbosity level (0 for silent) is for passing to the learning algorithm itself. A fit method wrapping such an algorithm should generally avoid doing any of its own logging.
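
For instance, here is a minimal sketch for a hypothetical deterministic ridge regressor MyRidge, with a single hyperparameter, lambda:

using LinearAlgebra

function MMI.fit(model::MyRidge, verbosity, X, y)
    Xmatrix = MMI.matrix(X)
    # learned coefficients (these constitute the fitresult):
    coefs = (Xmatrix'Xmatrix + model.lambda*I) \ (Xmatrix'y)
    cache = nothing
    report = (training_rmse=sqrt(sum(abs2, Xmatrix*coefs - y)/length(y)),)
    return coefs, cache, report
end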

Sample weight support. If supports_weights(::Type{<:SomeSupervisedModel}) has been declared true, then one instead implements the following variation on the above fit:

MMI.fit(model::SomeSupervisedModel, verbosity, X, y, w=nothing) -> fitresult, cache, report

The fitted_params method

A fitted_params method may optionally be overloaded. Its purpose is to provide MLJ access to a user-friendly representation of the learned parameters of the model (as opposed to the hyperparameters). They must be extractable from fitresult.

MMI.fitted_params(model::SomeSupervisedModel, fitresult) -> friendly_fitresult::NamedTuple

For a linear model, for example, one might declare something like friendly_fitresult=(coefs=[...], bias=...).

The fallback is to return (fitresult=fitresult,).
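
Continuing the hypothetical ridge regressor used in sketches elsewhere in this document, whose fitresult is just the coefficient vector:

MMI.fitted_params(model::MyRidge, fitresult) = (coefs=fitresult,)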


The model type hierarchy

A model is an object storing hyperparameters associated with some machine learning algorithm, and that is all. In MLJ, hyperparameters include configuration parameters, like the number of threads, and special instructions, such as "compute feature rankings", which may or may not affect the final learning outcome. However, the logging level (verbosity below) is excluded. Learned parameters (such as the coefficients in a linear model) have no place in the model struct.

The name of the Julia type associated with a model indicates the associated algorithm (e.g., DecisionTreeClassifier). The outcome of training a learning algorithm is called a fitresult. For ordinary multivariate regression, for example, this would be the coefficients and intercept. For a general supervised model, it is the (generally minimal) information needed to make new predictions.

The ultimate supertype of all models is MLJModelInterface.Model, which has two abstract subtypes:

abstract type Supervised <: Model end
 abstract type Unsupervised <: Model end

Supervised models are further divided according to whether they are able to furnish probabilistic predictions of the target (which they will then do by default) or directly predict "point" estimates, for each new input pattern:

abstract type Probabilistic <: Supervised end
abstract type Deterministic <: Supervised end

Further division of model types is realized through Trait declarations.

Associated with every concrete subtype of Model there must be a fit method, which implements the associated algorithm to produce the fitresult. Additionally, every Supervised model has a predict method, while Unsupervised models must have a transform method. More generally, methods such as these, that are dispatched on a model instance and a fitresult (plus other data), are called operations. Probabilistic supervised models optionally implement a predict_mode operation (in the case of classifiers) or predict_mean and/or predict_median operations (in the case of regressors), although MLJModelInterface also provides fallbacks that will suffice in most cases. Unsupervised models may implement an inverse_transform operation.
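
For example, a hypothetical deterministic (point-predicting) regressor would be declared along these lines:

import MLJModelInterface as MMI

mutable struct MyRidge <: MMI.Deterministic
    lambda::Float64  # a hyperparameter, not a learned parameter
end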


The predict_joint method

Experimental

The following API is experimental. It is subject to breaking changes during minor or major releases without warning.

MMI.predict_joint(model::SomeSupervisedModel, fitresult, Xnew) -> yhat

Any Probabilistic model type SomeModel may optionally implement a predict_joint method, which has the same signature as predict, but whose predictions are a single distribution (rather than a vector of per-observation distributions).

Specifically, the output yhat of predict_joint should be an instance of Distributions.Sampleable{<:Multivariate,V}, where scitype(V) = target_scitype(SomeModel) and samples have length n, where n is the number of observations in Xnew.

If a new model type subtypes JointProbabilistic <: Probabilistic then implementation of predict_joint is compulsory.
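
A minimal sketch, assuming a hypothetical Gaussian-process regressor MyGP, with joint_moments a hypothetical helper returning the joint predictive mean vector and covariance matrix at Xnew:

import Distributions

function MMI.predict_joint(model::MyGP, fitresult, Xnew)
    mu, Sigma = joint_moments(fitresult, MMI.matrix(Xnew))
    return Distributions.MvNormal(mu, Sigma)
end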

diff --git a/dev/the_predict_method/index.html b/dev/the_predict_method/index.html index f6eca93..2fdff6e 100644 --- a/dev/the_predict_method/index.html +++ b/dev/the_predict_method/index.html @@ -18,4 +18,4 @@ y = ybig[1:6]

Your fit method has bundled the first element of y with the fitresult to make it available to predict for purposes of tracking the complete pool of classes. Let's call this an_element = y[1]. Then, supposing the corresponding probabilities of the observed classes [:a, :b] are in an n x 2 matrix probs (where n is the number of rows of Xnew), you return

yhat = MLJModelInterface.UnivariateFinite([:a, :b], probs, pool=an_element)

This object automatically assigns zero probability to the unseen class :rare (i.e., pdf.(yhat, :rare) works and returns a zero vector). If you would like to assign :rare non-zero probabilities, simply add it to the first vector (the support) and supply a larger probs matrix.
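
For example, assuming probs3 is a hypothetical n x 3 matrix of probabilities whose rows sum to one:

yhat = MLJModelInterface.UnivariateFinite([:a, :b, :rare], probs3, pool=an_element)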

In a binary classification problem, it suffices to specify a single vector of probabilities, provided you specify augment=true, as in the following example, and note carefully that these probabilities are associated with the last (second) class you specify in the constructor:

y = categorical([:TRUE, :FALSE, :FALSE, :TRUE, :TRUE])
 an_element = y[1]
 probs = rand(10)
yhat = MLJModelInterface.UnivariateFinite([:FALSE, :TRUE], probs, augment=true, pool=an_element)

The constructor has a lot of options, including passing a dictionary instead of vectors. See CategoricalDistributions.UnivariateFinite for details.

See LinearBinaryClassifier for an example of a Probabilistic classifier implementation.

Important note on binary classifiers. There is no "Binary" scitype distinct from Multiclass{2} or OrderedFactor{2}; Binary is just an alias for Union{Multiclass{2},OrderedFactor{2}}. The target_scitype of a binary classifier will generally be AbstractVector{<:Binary} and, according to the MLJ scitype convention, elements of y have type CategoricalValue, and not Bool. See BinaryClassifier for an example.

Report items returned by predict

A predict method, or other operation such as transform, can contribute to the report accessible in any machine associated with a model. See Reporting byproducts of a static transformation below for details.


Training losses

MLJModelInterface.training_losses (Function)
MLJModelInterface.training_losses(model::M, report)

If M is an iterative model type which calculates training losses, implement this method to return an AbstractVector of the losses in historical order. If the model calculates scores instead, then the sign of the scores should be reversed.

The following trait overload is also required: MLJModelInterface.supports_training_losses(::Type{<:M}) = true.

source
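
For instance, a minimal sketch, assuming a hypothetical iterative model MyBooster whose fit stores the per-iteration losses in the report:

MMI.training_losses(model::MyBooster, report) = report.losses
MMI.supports_training_losses(::Type{<:MyBooster}) = true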

Trait values can also be set using the metadata_model method; see below.

diff --git a/dev/trait_declarations/index.html b/dev/trait_declarations/index.html index 43ff237..09e4da6 100644 --- a/dev/trait_declarations/index.html +++ b/dev/trait_declarations/index.html @@ -24,8 +24,8 @@ package_url="https://github.com/KristofferC/NearestNeighbors.jl", is_pure_julia=true, package_license="MIT", - is_wrapper=false)source
    is_wrapper=false)
source
MLJModelInterface.metadata_model (Function)
metadata_model(T; args...)

Helper function to write the metadata for a model T.

Keywords

  • input_scitype=Unknown: allowed scientific type of the input data
  • target_scitype=Unknown: allowed scitype of the target (supervised)
  • output_scitype=Unknown: allowed scitype of the transformed data (unsupervised)
  • supports_weights=false: whether the model supports sample weights
  • supports_class_weights=false: whether the model supports class weights
  • load_path="unknown": where the model type is defined (usually PackageName.ModelName)
  • human_name=nothing: human name of the model
  • supports_training_losses=nothing: whether the (necessarily iterative) model can report training losses
  • reports_feature_importances=nothing: whether the model reports feature importances

Example

metadata_model(KNNRegressor,
     input_scitype=MLJModelInterface.Table(MLJModelInterface.Continuous),
     target_scitype=AbstractVector{MLJModelInterface.Continuous},
     supports_weights=true,
    load_path="NearestNeighbors.KNNRegressor")

source

But this does:

@mlj_model mutable struct Bar
     a::Int = (-)(1)::(_ > -2)
end

Unsupervised models

Unsupervised models implement the MLJ model interface in a very similar fashion. The main differences are:

  • The fit method, which still returns (fitresult, cache, report), will typically have only one training argument X, as in MLJModelInterface.fit(model, verbosity, X), although this is not a hard requirement; see Transformers requiring a target variable in training below. Furthermore, in the case of models that subtype Static <: Unsupervised (see Static models), fit has no training arguments at all, and it need not be implemented, as a fallback returns (nothing, nothing, nothing).

  • A transform and/or predict method is implemented, and has the same signature as predict does in the supervised case, as in MLJModelInterface.transform(model, fitresult, Xnew). However, it may only have one data argument Xnew, unless model <: Static, in which case there is no restriction. A use-case for predict is K-means clustering that predicts labels and transforms input features into a space of lower dimension. See the Transformers that also predict section of the MLJ manual for an example.

  • The target_scitype refers to the output of predict, if implemented. A new trait, output_scitype, is for the output of transform. Unless the model is Static (see Static models) the trait input_scitype is for the single data argument of transform (and predict, if implemented). If fit has more than one data argument, you must overload the trait fit_data_scitype, which bounds the allowed data passed to fit(model, verbosity, data...) and will always be a Tuple type.

  • An inverse_transform can optionally be implemented. The signature is the same as transform, as in MLJModelInterface.inverse_transform(model::MyUnsupervisedModel, fitresult, Xout), which:

    • must make sense for any Xout for which scitype(Xout) <: output_scitype(MyUnsupervisedModel); and
    • must return an object Xin satisfying scitype(Xin) <: input_scitype(MyUnsupervisedModel).

For sample implementations, see MLJ's built-in transformers and the clustering models at MLJClusteringInterface.jl.
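
For a quick sketch, here is a hypothetical transformer that centers the columns of a table (trait declarations omitted, and raw matrices returned, for brevity):

import MLJModelInterface as MMI
using Statistics

mutable struct Centerer <: MMI.Unsupervised end

function MMI.fit(::Centerer, verbosity, X)
    means = Statistics.mean(MMI.matrix(X), dims=1)  # fitresult: 1 x p matrix of column means
    return means, nothing, NamedTuple()
end

MMI.transform(::Centerer, means, Xnew) = MMI.matrix(Xnew) .- means

MMI.inverse_transform(::Centerer, means, Xout) = Xout .+ means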

Transformers requiring a target variable in training

An Unsupervised model that is not Static may include a second argument y in its fit signature, as in fit(::MyTransformer, verbosity, X, y). For example, some feature selection tools require a target variable y in training. (Unlike Supervised models, an Unsupervised model is not required to implement predict, and in pipelines it is the output of transform, and not predict, that is always propagated to the next model.) Such a model should overload the trait target_in_fit, as in this example:

MLJModelInterface.target_in_fit(::Type{<:MyTransformer}) = true

This ensures that such models can appear in pipelines, and that a target provided to the pipeline model is passed on to the model in training.

If the model implements more than one fit signature (e.g., one with a target y and one without), then fit_data_scitype must also be overloaded, as in this example:

MLJModelInterface.fit_data_scitype(::Type{<:MyTransformer}) = Union{
    Tuple{Table(Continuous)},
    Tuple{Table(Continuous), AbstractVector{<:Finite}},
}

Where to place code implementing new models

Note that different packages can implement models having the same name without causing conflicts, although an MLJ user cannot simultaneously load two such models.

There are two options for making a new model implementation available to all MLJ users:

  1. Native implementations (preferred option). The implementation code lives in the same package that contains the learning algorithms implementing the interface. An example is EvoTrees.jl. In this case, it is sufficient to open an issue at MLJ requesting the package to be registered with MLJ. Registering a package allows the MLJ user to access its models' metadata and to selectively load them.

  2. Separate interface package. Implementation code lives in a separate interface package, which has the algorithm-providing package as a dependency. See the template repository MLJExampleInterface.jl.

Additionally, one needs to ensure that the implementation code defines the package_name and load_path model traits appropriately, so that MLJ's @load macro can find the necessary code (see MLJModels/src for examples).
