Releases · dmlc/xgboost
1.3.1 Patch Release
- Enable loading model from <1.0.0 trained with `objective='binary:logitraw'` (#6517)
- Fix handling of print period in `EvaluationMonitor` (#6499)
- Fix a bug in metric configuration after loading model. (#6504)
- Fix `save_best` early stopping option (#6523)
- Remove `cupy.array_equal`, since it's not compatible with cuPy 7.8 (#6528)
You can verify the downloaded source code `xgboost.tar.gz` by running this on your Unix shell:
echo "fd51e844dd0291fd9e7129407be85aaeeda2309381a6e3fc104938b27fb09279 *xgboost.tar.gz" | shasum -a 256 --check
Release 1.3.0 stable
XGBoost4J-Spark: Exceptions should cancel jobs gracefully instead of killing SparkContext (#6019).
- By default, exceptions in XGBoost4J-Spark cause the whole SparkContext to shut down, necessitating the restart of the Spark cluster. This behavior is often a major inconvenience.
- Starting from the 1.3.0 release, XGBoost adds a new parameter `killSparkContextOnWorkerFailure` to optionally prevent killing the SparkContext. If this parameter is set, exceptions will gracefully cancel training jobs instead of killing the SparkContext.
GPUTreeSHAP: GPU acceleration of the TreeSHAP algorithm (#6038, #6064, #6087, #6099, #6163, #6281, #6332)
- SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain predictions of machine learning models. It computes feature importance scores for individual examples, establishing how each feature influences a particular prediction. TreeSHAP is an optimized SHAP algorithm specifically designed for decision tree ensembles.
- Starting with 1.3.0 release, it is now possible to leverage CUDA-capable GPUs to accelerate the TreeSHAP algorithm. Check out the demo notebook.
- The CUDA implementation of the TreeSHAP algorithm is hosted at rapidsai/GPUTreeSHAP. XGBoost imports it as a Git submodule.
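A minimal sketch (not the official demo notebook) of GPU-accelerated TreeSHAP, assuming a CUDA-capable GPU and using synthetic random data; `gpu_hist` and `gpu_predictor` are the 1.x-era parameter names for selecting the GPU backend, and `pred_contribs=True` requests per-feature SHAP contributions:

```python
# Sketch: compute SHAP contributions on the GPU via pred_contribs.
import numpy as np
import xgboost as xgb

X = np.random.rand(1000, 10)   # synthetic data for illustration
y = np.random.rand(1000)
dtrain = xgb.DMatrix(X, label=y)

params = {"tree_method": "gpu_hist", "predictor": "gpu_predictor"}
bst = xgb.train(params, dtrain, num_boost_round=50)

# One SHAP value per feature plus a final bias column.
shap_values = bst.predict(dtrain, pred_contribs=True)
print(shap_values.shape)  # (1000, 11)
```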
New style Python callback API (#6199, #6270, #6320, #6348, #6376, #6399, #6441)
- The XGBoost Python package now offers a re-designed callback API. The new callback API lets you design various extensions of training in idiomatic Python. In addition, the new callback API allows you to use early stopping with the native Dask API (`xgboost.dask`). Check out the tutorial and the demo (see also the sketch below).
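A minimal sketch of the new-style callback API, assuming the `TrainingCallback` interface introduced here; the callback class and the synthetic data are made up for illustration:

```python
# Sketch: a custom callback that prints the evaluation log periodically.
import numpy as np
import xgboost as xgb


class PrintEvalCallback(xgb.callback.TrainingCallback):
    """Print the accumulated evaluation log every `period` rounds."""

    def __init__(self, period=10):
        self.period = period

    def after_iteration(self, model, epoch, evals_log):
        if epoch % self.period == 0:
            print(epoch, {name: dict(metrics) for name, metrics in evals_log.items()})
        return False  # returning True would stop training early


X, y = np.random.rand(500, 8), np.random.rand(500)
dtrain = xgb.DMatrix(X, label=y)
xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=50,
          evals=[(dtrain, "train")], callbacks=[PrintEvalCallback(period=10)])
```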
Enable the use of `DeviceQuantileDMatrix` / `DaskDeviceQuantileDMatrix` with large data (#6201, #6229, #6234)
- `DeviceQuantileDMatrix` can achieve memory saving by avoiding extra copies of the training data, and the saving is bigger for large data. Unfortunately, large data with more than 2^31 elements was triggering integer overflow bugs in CUB and Thrust. Tracking issue: #6228.
- This release contains a series of work-arounds to allow the use of `DeviceQuantileDMatrix` with large data (see the sketch below).
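A minimal sketch of constructing a `DeviceQuantileDMatrix` from data already resident on the GPU, assuming the `cupy` package and a CUDA GPU are available; the array sizes are illustrative only:

```python
# Sketch: build the quantile sketch directly on the GPU to avoid extra copies.
import cupy as cp
import xgboost as xgb

X = cp.random.rand(100_000, 50)   # data already in GPU memory
y = cp.random.rand(100_000)

dtrain = xgb.DeviceQuantileDMatrix(X, label=y)
bst = xgb.train({"tree_method": "gpu_hist"}, dtrain, num_boost_round=10)
```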
Support slicing of tree models (#6302)
- Accessing the best iteration of a model after the application of early stopping used to be error-prone, as you needed to manually pass the `ntree_limit` argument to the `predict()` function.
- Now we provide a simple interface to slice tree models by specifying a range of boosting rounds. The tree ensemble can be split into multiple sub-ensembles via the slicing interface. Check out an example (see also the sketch after this list).
- In addition, the early stopping callback now supports the `save_best` option. When enabled, XGBoost will save (persist) the model at the best boosting round and discard the trees that were fit subsequent to the best round.
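A minimal sketch of the slicing interface, assuming the `Booster.__getitem__` slicing syntax added in this release and synthetic random data:

```python
# Sketch: slice a trained booster into sub-ensembles by boosting round.
import numpy as np
import xgboost as xgb

X, y = np.random.rand(500, 8), np.random.rand(500)
dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=100)

first_30 = bst[:30]      # sub-ensemble containing rounds 0-29
middle = bst[30:60]      # rounds 30-59
preds = first_30.predict(dtrain)
```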
Weighted subsampling of features (columns) (#5962)
- It is now possible to sample features (columns) via weighted subsampling, in which features with higher weights are more likely to be selected in the sample. Weighted subsampling allows you to encode domain knowledge by emphasizing a particular set of features in the choice of tree splits. In addition, you can prevent particular features from being used in any splits, by assigning them zero weights.
- Check out the demo.
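A minimal sketch of weighted column subsampling, assuming the `feature_weights` field of `DMatrix.set_info()` added in #5962; the data and weights are made up for illustration:

```python
# Sketch: bias column sampling toward high-weight features; weight 0 excludes
# a feature from splits entirely.
import numpy as np
import xgboost as xgb

X, y = np.random.rand(1000, 5), np.random.rand(1000)
dtrain = xgb.DMatrix(X, label=y)

fw = np.array([1.0, 2.0, 0.0, 1.0, 4.0])   # one weight per column
dtrain.set_info(feature_weights=fw)

# Weights take effect when column subsampling is enabled.
params = {"colsample_bynode": 0.5}
bst = xgb.train(params, dtrain, num_boost_round=20)
```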
Improved integration with Dask
- Support reverse-proxy environment such as Google Kubernetes Engine (#6343, #6475)
- An XGBoost training job will no longer use all available workers. Instead, it will only use the workers that contain input data (#6343).
- The new callback API works well with the Dask training API.
- The `predict()` and `fit()` functions of `DaskXGBClassifier` and `DaskXGBRegressor` now accept a base margin (#6155); see the sketch after this list.
- Support more meta data in the Dask API (#6130, #6132, #6333).
- Allow passing extra keyword arguments as `kwargs` in `predict()` (#6117)
- Fix typo in dask interface: `sample_weights` -> `sample_weight` (#6240)
- Allow empty data matrix in AFT survival, as Dask may produce empty partitions (#6379)
- Speed up prediction by overlapping prediction jobs in all workers (#6412)
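A minimal sketch of passing a base margin through the Dask scikit-learn wrappers, assuming a local Dask cluster; the cluster setup, data, and the all-zero margin are illustrative only:

```python
# Sketch: supply a per-row base margin to fit()/predict() of DaskXGBRegressor.
import dask.array as da
from dask.distributed import Client, LocalCluster
from xgboost.dask import DaskXGBRegressor

if __name__ == "__main__":
    with Client(LocalCluster(n_workers=2)) as client:
        X = da.random.random((10_000, 10), chunks=(1_000, 10))
        y = da.random.random(10_000, chunks=1_000)
        margin = da.zeros(10_000, chunks=1_000)  # per-row starting score

        reg = DaskXGBRegressor(n_estimators=20, tree_method="hist")
        reg.fit(X, y, base_margin=margin)
        preds = reg.predict(X, base_margin=margin)
```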
Experimental support for direct splits with categorical features (#6028, #6128, #6137, #6140, #6164, #6165, #6166, #6179, #6194, #6219)
- Currently, XGBoost requires users to one-hot-encode categorical variables. This has adverse performance implications, as the creation of many dummy variables results in higher memory consumption and may require fitting deeper trees to achieve equivalent model accuracy.
- The 1.3.0 release of XGBoost contains experimental support for direct handling of categorical variables in test nodes. Each test node will have a condition of the form `feature_value \in match_set`, where the `match_set` on the right-hand side contains one or more matching categories. The matching categories in `match_set` represent the condition for traversing to the right child node. Currently, XGBoost will only generate categorical splits with a single matching category ("one-vs-rest split"). In a future release, we plan to remove this restriction and produce splits with multiple matching categories in `match_set`.
- The categorical split requires the use of JSON model serialization. The legacy binary serialization method cannot be used to save (persist) models with categorical splits.
- Note. This feature is currently highly experimental. Use it at your own risk. See the detailed list of limitations at #5949.
Experimental plugin for RAPIDS Memory Manager (#5873, #6131, #6146, #6150, #6182)
- RAPIDS Memory Manager library (rapidsai/rmm) provides a collection of efficient memory allocators for NVIDIA GPUs. It is now possible to use XGBoost with memory allocators provided by RMM, by enabling the RMM integration plugin. With this plugin, XGBoost is now able to share a common GPU memory pool with other applications using RMM, such as the RAPIDS data science packages.
- See the demo for a working example, as well as directions for building XGBoost with the RMM plugin.
- The plugin will soon be considered non-experimental, once #6297 is resolved.
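A minimal sketch of using the plugin, assuming XGBoost was built with the RMM integration enabled and the `rmm` Python package is installed; the pool size and synthetic data are illustrative only:

```python
# Sketch: set up an RMM pool allocator that XGBoost's GPU code can share
# with other RAPIDS libraries.
import numpy as np
import rmm
import xgboost as xgb

# Create the pooled allocator before any GPU work happens.
rmm.reinitialize(pool_allocator=True, initial_pool_size=2 ** 30)  # 1 GiB pool

X, y = np.random.rand(10_000, 20), np.random.rand(10_000)
dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train({"tree_method": "gpu_hist"}, dtrain, num_boost_round=20)
```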
Experimental plugin for oneAPI programming model (#5825)
- oneAPI is a programming interface developed by Intel aimed at providing one programming model for many types of hardware such as CPU, GPU, FPGA and other hardware accelerators.
- XGBoost now includes an experimental plugin for using oneAPI for the predictor and objective functions. The plugin is hosted in the directory `plugin/updater_oneapi`.
- Roadmap: #5442
Pickling the XGBoost model will now trigger JSON serialization (#6027)
- The pickle will now contain the JSON string representation of the XGBoost model, as well as related configuration.
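A minimal sketch of pickling a trained booster; the synthetic data is illustrative, and the point is only that the pickle round-trips through the JSON representation:

```python
# Sketch: pickle/unpickle a Booster; the pickle carries the JSON model string
# plus related configuration.
import pickle

import numpy as np
import xgboost as xgb

X, y = np.random.rand(200, 4), np.random.rand(200)
bst = xgb.train({"objective": "reg:squarederror"}, xgb.DMatrix(X, label=y),
                num_boost_round=10)

blob = pickle.dumps(bst)
restored = pickle.loads(blob)
```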
Performance improvements
- Various performance improvement on multi-core CPUs
- Optimize DMatrix build time by up to 3.7x. (#5877)
- CPU predict performance improvement, by up to 3.6x. (#6127)
- Optimize CPU sketch allreduce for sparse data (#6009)
- Thread local memory allocation for BuildHist, leading to speedup up to 1.7x. (#6358)
- Disable hyperthreading for DMatrix creation (#6386). This speeds up DMatrix creation by up to 2x.
- Simple fix for static schedule in predict (#6357)
- Unify thread configuration, to make it easy to utilize all CPU cores (#6186)
- [jvm-packages] Clean the way deterministic partitioning is computed (#6033)
- Speed up JSON serialization by implementing an intrusive pointer class (#6129). It leads to 1.5x-2x performance boost.
API additions
- [R] Add SHAP summary plot using ggplot2 (#5882)
- Modin DataFrame can now be used as input (#6055)
- [jvm-packages] Add `getNumFeature` method (#6075)
- Add MAPE metric (#6119)
- Implement GPU predict leaf. (#6187)
- Enable cuDF/cuPy inputs in `XGBClassifier` (#6269)
- Document tree method for feature weights. (#6312)
- Add `fail_on_invalid_gpu_id` parameter, which will cause XGBoost to terminate upon seeing an invalid value of `gpu_id` (#6342)
Breaking: the default evaluation metric for classification is changed to `logloss` / `mlogloss` (#6183)
- The default metric used to be accuracy, and it is not statistically consistent to perform early stopping with the accuracy metric when we are really optimizing the log loss for the `binary:logistic` objective.
- For statistical consistency, the default metric for classification has been changed to `logloss`. Users may choose to preserve the old behavior by explicitly specifying `eval_metric` (see the sketch after this list).
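A minimal sketch of keeping the pre-1.3 behavior by explicitly requesting the old default metric `error` (the classification error rate); the data here is synthetic:

```python
# Sketch: opt back into the old accuracy-style metric instead of logloss.
import numpy as np
import xgboost as xgb

X, y = np.random.rand(500, 6), np.random.randint(0, 2, 500)
dtrain = xgb.DMatrix(X, label=y)

params = {"objective": "binary:logistic", "eval_metric": "error"}
bst = xgb.train(params, dtrain, num_boost_round=20, evals=[(dtrain, "train")])
```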
Breaking: `skmaker` is now removed (#5971)
- The `skmaker` updater has not been documented nor tested.
Breaking: the JSON model format no longer stores the leaf child count (#6094).
- The leaf child count field has been deprecated and is not used anywhere in the XGBoost codebase.
Breaking: XGBoost now requires MacOS 10.14 (Mojave) and later.
- Homebrew has dropped support for MacOS 10.13 (High Sierra), so we are not able to install the OpenMP runtime (`libomp`) from Homebrew on MacOS 10.13. Please use MacOS 10.14 (Mojave) or later.
Deprecation notices
- The use of `LabelEncoder` in `XGBClassifier` is now deprecated and will be re...
Release Candidate of version 1.3.0
R package: xgboost_1.3.0.1.tar.gz
1.2.1 Patch Release
This patch release applies the following patches to 1.2.0 release:
- Hide C++ symbols from dmlc-core (#6188)
Release 1.2.0 stable
XGBoost4J-Spark now supports the GPU algorithm (#5171)
- Now XGBoost4J-Spark is able to leverage NVIDIA GPU hardware to speed up training.
- There is on-going work for accelerating the rest of the data pipeline with NVIDIA GPUs (#5950, #5972).
XGBoost now supports CUDA 11 (#5808)
- It is now possible to build XGBoost with CUDA 11. Note that we do not yet distribute pre-built binaries built with CUDA 11; all current distributions use CUDA 10.0.
Better guidance for persisting XGBoost models in an R environment (#5940, #5964)
- Users are strongly encouraged to use `xgb.save()` and `xgb.save.raw()` instead of `saveRDS()`. This is so that the persisted models can be accessed with future releases of XGBoost.
- The previous release (1.1.0) had problems loading models that were saved with `saveRDS()`. This release adds a compatibility layer to restore access to the old RDS files. Note that this is meant to be a temporary measure; users are advised to stop using `saveRDS()` and migrate to `xgb.save()` and `xgb.save.raw()`.
New objectives and metrics
- The pseudo-Huber loss `reg:pseudohubererror` is added (#5647). The corresponding metric is `mphe`. Right now, the slope is hard-coded to 1. (See the sketch after this list.)
- The Accelerated Failure Time objective for survival analysis (`survival:aft`) is now accelerated on GPUs (#5714, #5716). The survival metrics `aft-nloglik` and `interval-regression-accuracy` are also accelerated on GPUs.
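A minimal sketch of the new pseudo-Huber objective and its `mphe` metric, using synthetic random data:

```python
# Sketch: train with the pseudo-Huber loss and monitor mean pseudo-Huber error.
import numpy as np
import xgboost as xgb

X, y = np.random.rand(500, 6), np.random.rand(500)
dtrain = xgb.DMatrix(X, label=y)

params = {"objective": "reg:pseudohubererror", "eval_metric": "mphe"}
bst = xgb.train(params, dtrain, num_boost_round=20, evals=[(dtrain, "train")])
```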
Improved integration with scikit-learn
- Added the `n_features_in_` attribute to the scikit-learn interface to store the number of features used (#5780). This is useful for integrating with some scikit-learn features such as `StackingClassifier`. See this link for more details.
- `XGBoostError` now inherits from `ValueError`, which conforms to scikit-learn's exception requirement (#5696).
Improved integration with Dask
- The XGBoost Dask API now exposes an asynchronous interface (#5862). See the document for details.
- Zero-copy ingestion of GPU arrays via `DaskDeviceQuantileDMatrix` (#5623, #5799, #5800, #5803, #5837, #5874, #5901): Previously, the Dask interface had to make 2 data copies: one for concatenating the Dask partition/block into a single block and another for internal representation. To save memory, we introduce `DaskDeviceQuantileDMatrix`. As long as Dask partitions are resident in the GPU memory, `DaskDeviceQuantileDMatrix` is able to ingest them directly without making copies. This matrix type wraps `DeviceQuantileDMatrix`.
- The prediction function now returns GPU Series type if the input is from Dask-cuDF (#5710). This is to preserve the input data type.
Robust handling of external data types (#5689, #5893)
- As we support more and more external data types, the handling logic has proliferated all over the code base and became hard to keep track of. It also became unclear how missing values and threads are handled. We refactored the Python package code to collect all data handling logic in a central location, and now we have an explicit list of all supported data types.
Improvements in GPU-side data matrix (`DeviceQuantileDMatrix`)
- The GPU-side data matrix now implements its own quantile sketching logic, so that data don't have to be transported back to the main memory (#5700, #5747, #5760, #5846, #5870, #5898). The GK sketching algorithm is also now better documented.
- Now we can load extremely sparse datasets such as the URL dataset, although performance is still sub-optimal.
- The GPU-side data matrix now exposes an iterative interface (#5783), so that users are able to construct a matrix from a data iterator. See the Python demo.
New language binding: Swift (#5728)
- Visit https://github.com/kongzii/SwiftXGBoost for more details.
Robust model serialization with JSON (#5772, #5804, #5831, #5857, #5934)
- We continue efforts from the 1.0.0 release to adopt JSON as the format to save and load models robustly.
- JSON model IO is significantly faster and produces smaller model files.
- Round-trip reproducibility is guaranteed, via the introduction of an efficient float-to-string conversion algorithm known as the Ryū algorithm. The conversion is locale-independent, producing consistent numeric representation regardless of the locale setting of the user's machine.
- We fixed an issue in loading large JSON files to memory.
- It is now possible to load a JSON file from a remote source such as S3.
Performance improvements
- CPU hist tree method optimization
- Skip missing lookup in hist row partitioning if data is dense. (#5644)
- Specialize training procedures for CPU hist tree method on distributed environment. (#5557)
- Add single point histogram for CPU hist. Previously the gradient histogram for CPU hist was hard-coded to be 64-bit; now users can specify the parameter `single_precision_histogram` to use a 32-bit histogram instead for faster training performance. (#5624, #5811)
- GPU hist tree method optimization
- Removed some unnecessary synchronizations and better memory allocation pattern. (#5707)
- Optimize GPU hist for wide datasets. Previously, for wide datasets the atomic operation was performed on global memory; now it can run on shared memory for faster histogram building. But there's a known small regression on GeForce cards with dense data. (#5795, #5926, #5948, #5631)
API additions
- Support passing fmap to importance plot (#5719). Now importance plot can show actual names of features instead of default ones.
- Support 64bit seed. (#5643)
- A new C API `XGBoosterGetNumFeature` is added for getting the number of features in the booster (#5856).
- Feature names and feature types are now stored in the C++ core and saved in binary DMatrix (#5858).
Breaking: The `predict()` method of `DaskXGBClassifier` now produces class predictions (#5986). Use `predict_proba()` to obtain probability predictions.
- Previously, `DaskXGBClassifier.predict()` produced probability predictions. This is inconsistent with the behavior of other scikit-learn classifiers, where `predict()` returns class predictions. We make a breaking change in the 1.2.0 release so that `DaskXGBClassifier.predict()` now correctly produces class predictions and thus behaves like other scikit-learn classifiers. Furthermore, we introduce the `predict_proba()` method for obtaining probability predictions, again to be in line with other scikit-learn classifiers. (See the sketch below.)
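A minimal sketch of the new behavior, assuming a local Dask cluster; the cluster setup and synthetic data are illustrative only:

```python
# Sketch: predict() now returns class labels; predict_proba() returns probabilities.
import dask.array as da
from dask.distributed import Client, LocalCluster
from xgboost.dask import DaskXGBClassifier

if __name__ == "__main__":
    with Client(LocalCluster(n_workers=2)) as client:
        X = da.random.random((5_000, 10), chunks=(1_000, 10))
        y = (da.random.random(5_000, chunks=1_000) > 0.5).astype(int)

        clf = DaskXGBClassifier(n_estimators=10)
        clf.fit(X, y)
        labels = clf.predict(X)        # class predictions (0 or 1)
        proba = clf.predict_proba(X)   # probability predictions
```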
Breaking: Custom evaluation metric now receives raw prediction (#5954)
- Previously, the custom evaluation metric received a transformed prediction result when used with a classifier. Now the custom metric will receive a raw (untransformed) prediction and will need to transform the prediction itself. See demo/guide-python/custom_softmax.py for an example.
- This change is to make the custom metric behave consistently with the custom objective, which already receives raw prediction (#5564).
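A minimal sketch (not the bundled `custom_softmax.py` demo) of a custom metric under the new behavior, assuming `binary:logistic` and synthetic data: the metric receives the raw margin and must apply the sigmoid itself.

```python
# Sketch: custom metric that transforms the raw margin before computing error.
import numpy as np
import xgboost as xgb


def error_from_margin(raw_margin, dtrain):
    prob = 1.0 / (1.0 + np.exp(-raw_margin))   # raw margin -> probability
    labels = dtrain.get_label()
    return "my-error", float(np.mean((prob > 0.5) != labels))


X, y = np.random.rand(500, 6), np.random.randint(0, 2, 500)
dtrain = xgb.DMatrix(X, label=y)
xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=20,
          evals=[(dtrain, "train")], feval=error_from_margin)
```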
Breaking: XGBoost4J-Spark now requires Spark 3.0 and Scala 2.12 (#5836, #5890)
- Starting with version 3.0, Spark can manage GPU resources and allocate them among executors.
- Spark 3.0 dropped support for Scala 2.11 and now only supports Scala 2.12. Thus, XGBoost4J-Spark also only supports Scala 2.12.
Breaking: XGBoost Python package now requires Python 3.6 and later (#5715)
- Python 3.6 has many useful features such as f-strings.
Breaking: XGBoost now adopts the C++14 standard (#5664)
- Make sure to use a sufficiently modern C++ compiler that supports C++14, such as Visual Studio 2017, GCC 5.0+, and Clang 3.4+.
Bug-fixes
- Fix a data race in the prediction function (#5853). As a byproduct, the prediction function now uses a thread-local data store and became thread-safe.
- Restore capability to run prediction when the test input has fewer features than the training data (#5955). This capability is necessary to support predicting with LIBSVM inputs. The previous release (1.1) had broken this capability, so we restore it in this version with better tests.
- Fix OpenMP build with CMake for R package, to support CMake 3.13 (#5895).
- Fix Windows 2016 build (#5902, #5918).
- Fix edge cases in scikit-learn interface with Pandas input by disabling feature validation. (#5953)
- [R] Enable weighted learning to rank (#5945)
- [R] Fix early stopping with custom objective (#5923)
- Fix NDK Build (#5886)
- Add missing explicit template specializations for greater portability (#5921)
- Handle empty rows in data iterators correctly (#5929). This bug affects file loader and JVM data frames.
- Fix `IsDense` (#5702)
- [jvm-packages] Fix wrong method name `setAllowZeroForMissingValue` (#5740)
- Fix shape inference for Dask predict (#5989)
Usability Improvements, Documentation
- [Doc] Document that CUDA 10.0 is required (#5872)
- Refactored command line interface (CLI). Now the CLI is able to handle user errors and output basic documentation. (#5574)
- Better error handling in Python: use `raise from` syntax to preserve full stacktrace (#5787).
- The JSON model dump now has a formal schema (#5660, #5818). The benefit is to prevent the `dump_model()` function from breaking. See this document to understand the difference between saving and dumping models.
- Add a reference to the GPU external memory paper (#5684)
- Document more objective parameters in the R package (#5682)
- Document the existence of pre-built binary wheels for MacOS (#5711)
- Remove `m...
Release Candidate 2 of version 1.2.0
R package: xgboost_1.2.0.1.tar.gz (Manual: xgboost_1.2.0.1-manual.pdf)
Release Candidate of version 1.2.0
R package: xgboost_1.2.0.1.tar.gz (Manual: xgboost_1.2.0.1-manual.pdf)
1.1.1 Patch Release
Release 1.1.0 stable
Better performance on multi-core CPUs (#5244, #5334, #5522)
- Poor performance scaling of the `hist` algorithm for multi-core CPUs has been under investigation (#3810). #5244 concludes the ongoing effort to improve performance scaling on multi-core CPUs, in particular Intel CPUs. Roadmap: #5104
- #5334 makes steps toward reducing memory consumption for the `hist` tree method on CPU.
- #5522 optimizes random number generation for data sampling.
Deterministic GPU algorithm for regression and classification (#5361)
- GPU algorithm for regression and classification tasks is now deterministic.
- Roadmap: #5023. Currently only single-GPU training is deterministic. Distributed training with multiple GPUs is not yet deterministic.
Improve external memory support on GPUs (#5093, #5365)
- Starting from 1.0.0 release, we added support for external memory on GPUs to enable training with larger datasets. Gradient-based sampling (#5093) speeds up the external memory algorithm by intelligently sampling a subset of the training data to copy into the GPU memory. Learn more about out-of-core GPU gradient boosting.
- GPU-side data sketching now works with data from external memory (#5365).
Parameter validation: detection of unused or incorrect parameters (#5477, #5569, #5508)
- Mis-spelled training parameters are a common user mistake. In previous versions of XGBoost, mis-spelled parameters were silently ignored. Starting with the 1.0.0 release, XGBoost will produce a warning message if there are any unused training parameters. The 1.1.0 release makes parameter validation available to the scikit-learn interface (#5477) and the R binding (#5569).
Thread-safe, in-place prediction method (#5389, #5512)
- Previously, the prediction method was not thread-safe (#5339). This release adds a new API function `inplace_predict()` that is thread-safe. It is now possible to serve concurrent requests for prediction using a shared model object (see the sketch below).
- It is now possible to compute prediction in-place for selected data formats (`numpy.ndarray` / `scipy.sparse.csr_matrix` / `cupy.ndarray` / `cudf.DataFrame` / `pd.DataFrame`) without creating a `DMatrix` object.
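A minimal sketch of in-place prediction on a NumPy array, using synthetic data for illustration:

```python
# Sketch: predict directly from a NumPy array without building a DMatrix.
import numpy as np
import xgboost as xgb

X, y = np.random.rand(1000, 8), np.random.rand(1000)
bst = xgb.train({"objective": "reg:squarederror"}, xgb.DMatrix(X, label=y),
                num_boost_round=20)

new_rows = np.random.rand(5, 8)
preds = bst.inplace_predict(new_rows)  # thread-safe, no DMatrix needed
```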
Addition of Accelerated Failure Time objective for survival analysis (#4763, #5473, #5486, #5552, #5553)
- Survival analysis (regression) models the time it takes for an event of interest to occur. The target label is potentially censored, i.e. the label is a range rather than a single number. We added a new objective `survival:aft` to support survival analysis. Also added is the new API to specify the ranged labels. Check out the tutorial and the demos (see also the sketch below).
- GPU support is work in progress (#5714).
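A minimal sketch of the AFT objective with ranged labels set through the lower/upper bound fields of `DMatrix`; the synthetic bounds and the distribution parameters shown are illustrative:

```python
# Sketch: train survival:aft with censored (ranged) labels.
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 5)
y_lower = np.random.rand(100) * 10            # observed lower bounds
y_upper = y_lower + np.random.rand(100) * 5   # upper bounds (np.inf = right-censored)

dtrain = xgb.DMatrix(X)
dtrain.set_float_info("label_lower_bound", y_lower)
dtrain.set_float_info("label_upper_bound", y_upper)

params = {"objective": "survival:aft", "eval_metric": "aft-nloglik",
          "aft_loss_distribution": "normal", "aft_loss_distribution_scale": 1.0}
bst = xgb.train(params, dtrain, num_boost_round=20, evals=[(dtrain, "train")])
```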
Improved installation experience on Mac OSX (#5597, #5602, #5606, #5701)
- It only takes two commands to install the XGBoost Python package: `brew install libomp` followed by `pip install xgboost`. The installed XGBoost will use all CPU cores. Even better, starting with this release, we distribute pre-compiled binary wheels targeting Mac OSX. Now the install command `pip install xgboost` finishes instantly, as it no longer compiles the C++ source of XGBoost. The last three Mac versions (High Sierra, Mojave, Catalina) are supported.
- R package: the 1.1.0 release fixes the error `Initializing libomp.dylib, but found libomp.dylib already initialized` (#5701)
Ranking metrics are now accelerated on GPUs (#5380, #5387, #5398)
GPU-side data matrix to ingest data directly from other GPU libraries (#5420, #5465)
- Previously, data on GPU memory had to be copied back to the main memory before it could be used by XGBoost. Starting with the 1.1.0 release, XGBoost provides a dedicated interface (`DeviceQuantileDMatrix`) so that it can ingest data from GPU memory directly. The result is that XGBoost interoperates better with GPU-accelerated data science libraries, such as cuDF, cuPy, and PyTorch.
- Set device in device dmatrix. (#5596)
Robust model serialization with JSON (#5123, #5217)
- We continue efforts from the 1.0.0 release to adopt JSON as the format to save and load models robustly. Refer to the release note for 1.0.0 to learn more.
- It is now possible to store the internal configuration of the trained model (`Booster`) object in R as a JSON string (#5123, #5217).
Improved integration with Dask
- Pass through the `verbose` parameter for Dask fit (#5413)
- Use `DMLC_TASK_ID`. (#5415)
- Order the prediction result. (#5416)
- Honor `nthreads` from Dask workers. (#5414)
- Enable grid searching with scikit-learn. (#5417)
- Check non-equal when setting threads. (#5421)
- Accept other inputs for prediction. (#5428)
- Fix missing value for scikit-learn interface. (#5435)
XGBoost4J-Spark: Check number of columns in the data iterator (#5202, #5303)
- Before, the native layer in XGBoost did not know the number of columns (features) ahead of time and had to guess the number of columns by counting the feature index when ingesting data. This method has a failure mode in a distributed setting: if the training data is highly sparse, some features may be completely missing in one or more worker partitions. Thus, one or more workers may deduce an incorrect data shape, leading to crashes or silently wrong models.
- Enforce correct data shape by passing the number of columns explicitly from the JVM layer into the native layer.
Major refactoring of the `DMatrix` class
- Continued from 1.0.0 release.
- Remove update prediction cache from predictors. (#5312)
- Predict on Ellpack. (#5327)
- Partial rewrite EllpackPage (#5352)
- Use ellpack for prediction only when sparsepage doesn't exist. (#5504)
- RFC: #4354, Roadmap: #5143
Breaking: XGBoost Python package now requires Pip 19.0 and higher (#5589)
- Your Linux machine may have an old version of Pip and may attempt to install a source package, leading to long installation time. This is because we are now using the `manylinux2010` tag in the binary wheel release. Ensure you have Pip 19.0 or newer by running `python3 -m pip -V` to check the version. Upgrade Pip with the command `python3 -m pip install --upgrade pip`. Upgrading to the latest Pip allows us to depend on newer versions of system libraries. TensorFlow also requires Pip 19.0+.
Breaking: GPU algorithm now requires CUDA 10.0 and higher (#5649)
- CUDA 10.0 is necessary to make the GPU algorithm deterministic (#5361).
Breaking: `silent` parameter is now removed (#5476)
- Please use `verbosity` instead.
Breaking: Set `output_margin` to True for custom objectives (#5564)
- Now both R and Python interface custom objectives get un-transformed (raw) prediction outputs.
Breaking: `Makefile` is now removed. We use CMake exclusively to build XGBoost (#5513)
- Exception: the R package uses Autotools, as the CRAN ecosystem did not yet adopt CMake widely.
Breaking: `distcol` updater is now removed (#5507)
- The `distcol` updater has long been broken, and currently we lack the resources to implement a working replacement from scratch.
Deprecation notices
- Python 3.5. This release is the last release to support Python 3.5. The following release (1.2.0) will require Python 3.6.
- Scala 2.11. Currently XGBoost4J supports Scala 2.11. However, if a future release of XGBoost adopts Spark 3, it will not support Scala 2.11, as Spark 3 requires Scala 2.12+. We do not yet know which XGBoost release will adopt Spark 3.
Known limitations
- (Python package) When early stopping is activated with `early_stopping_rounds` at training time, the prediction method (`xgb.predict()`) behaves in a surprising way. If XGBoost runs for M rounds and chooses iteration N (N < M) as the best iteration, then the prediction method will use M trees by default. To use the best iteration (N trees), users will need to manually take the best iteration field `bst.best_iteration` and pass it as the `ntree_limit` argument to `xgb.predict()`. See #5209 and #4052 for additional context.
- GPU ranking objective is currently not deterministic (#5561).
- When the training parameter `reg_lambda` is set to zero, some leaf nodes may be assigned a NaN value. (See discussion.) For now, please set `reg_lambda` to a nonzero value.
Community and Governance
- The XGBoost Project Management Committee (PMC) is pleased to announce a new committer: Egor Smirnov (@SmirnovEgorRu). He has led a major initiative to improve the performance of XGBoost on multi-core CPUs.
Bug-fixes
- Improved compatibility with scikit-learn (#5255, #5505, #5538)
- Remove f-string, since it's not supported by Python 3.5 (#5330). Note that Python 3.5 support is deprecated and scheduled to be dropped in the upcoming release (1.2.0).
- Fix the pruner so that it doesn't prune the same branch twice (#5335)
- Enforce only major version in JSON model schema (#5336). Any major revision of the model schema would bump up the major version.
- Fix a small typo in sklearn.py that broke multiple eval metrics (#5341)
- Restore loading model from a memory buffer (#5360)
- Define lazy isinstance for Python compat (#5364)
- [R] fixed uses of `class()` (#5426)
- Force compressed buffer to be 4 bytes aligned, to keep cuda-memcheck happy (#5441)
- Remove warning for calling host function (`std::max`) on a GPU device (#5453)
- Fix uninitialized value bug in xgboost callback (#5463)
- Fix model dump in CLI (#5485)
- Fix out-of-bound array access in `WQSummary::SetPrune()` (#5493)
- Ensure that configured `dmlc/build_config.h` is picked up by Rabit and XGBoost, to fix build on Alpine (#5514)
- Fix a misspelled method, made i...
Release Candidate 2 of version 1.1.0
R package: xgboost_1.1.0.1.tar.gz