From 7935b026f02c9b05e2e637ad732dcc9c5c368da5 Mon Sep 17 00:00:00 2001 From: Annica Gehlen Date: Sun, 26 Jun 2022 16:57:50 +0200 Subject: [PATCH 1/8] Add copy buttons for documentation. --- docs/rtd_environment.yml | 1 + docs/source/algorithms.rst | 212 +++++++++++++++++++ docs/source/conf.py | 1 + docs/source/getting_started/installation.rst | 36 +++- environment.yml | 1 + 5 files changed, 242 insertions(+), 9 deletions(-) diff --git a/docs/rtd_environment.yml b/docs/rtd_environment.yml index a1fd1e856..3c7a04b7a 100644 --- a/docs/rtd_environment.yml +++ b/docs/rtd_environment.yml @@ -11,6 +11,7 @@ dependencies: - black - sphinx - sphinxcontrib-bibtex + - sphinx-copybutton - sphinx-panels - ipython - ipython_genutils diff --git a/docs/source/algorithms.rst b/docs/source/algorithms.rst index f34e09dac..ccd2680ab 100644 --- a/docs/source/algorithms.rst +++ b/docs/source/algorithms.rst @@ -20,6 +20,10 @@ you install estimagic. .. dropdown:: scipy_lbfgsb + .. code-block:: + + scipy_lbfgsb + Minimize a scalar function of one or more variables using the L-BFGS-B algorithm. The optimizer is taken from scipy, which calls the Fortran code written by the @@ -64,6 +68,10 @@ you install estimagic. .. dropdown:: scipy_slsqp + .. code-block:: + + scipy_slsqp + Minimize a scalar function of one or more variables using the SLSQP algorithm. SLSQP stands for Sequential Least Squares Programming. @@ -84,6 +92,10 @@ you install estimagic. .. dropdown:: scipy_neldermead + + .. code-block:: + + scipy_neldermead Minimize a scalar function using the Nelder-Mead algorithm. @@ -117,6 +129,10 @@ you install estimagic. .. dropdown:: scipy_powell + .. code-block:: + + scipy_powell + Minimize a scalar function using the modified Powell method. .. warning:: @@ -151,6 +167,10 @@ you install estimagic. .. dropdown:: scipy_bfgs + + .. code-block:: + + scipy_bfgs Minimize a scalar function of one or more variables using the BFGS algorithm. @@ -171,6 +191,10 @@ you install estimagic. .. 
dropdown:: scipy_conjugate_gradient + .. code-block:: + + scipy_conjugate_gradient + Minimize a function using a nonlinear conjugate gradient algorithm. The conjugate gradient method finds functions' local optima using just the gradient. @@ -200,6 +224,10 @@ you install estimagic. .. dropdown:: scipy_newton_cg + .. code-block:: + + scipy_newton_cg + Minimize a scalar function using Newton's conjugate gradient algorithm. .. warning:: @@ -242,6 +270,10 @@ you install estimagic. .. dropdown:: scipy_cobyla + .. code-block:: + + scipy_cobyla + Minimize a scalar function of one or more variables using the COBYLA algorithm. COBYLA stands for Constrained Optimization By Linear Approximation. @@ -272,6 +304,10 @@ you install estimagic. .. dropdown:: scipy_truncated_newton + .. code-block:: + + scipy_truncated_newton + Minimize a scalar function using truncated Newton algorithm. This function differs from scipy_newton_cg because @@ -334,6 +370,10 @@ you install estimagic. .. dropdown:: scipy_trust_constr + .. code-block:: + + scipy_trust_constr + Minimize a scalar function of one or more variables subject to constraints. .. warning:: @@ -387,6 +427,10 @@ you install estimagic. .. dropdown:: scipy_ls_dogbox + .. code-block:: + + scipy_ls_dogbox + Minimize a nonlinear least squares problem using a rectangular trust region method. Typical use case is small problems with bounds. Not recommended for problems with @@ -423,6 +467,10 @@ you install estimagic. .. dropdown:: scipy_ls_trf + .. code-block:: + + scipy_ls_trf + Minimize a nonlinear least squares problem using a trustregion reflective method. Trust Region Reflective algorithm, particularly suitable for large sparse problems @@ -459,6 +507,10 @@ you install estimagic. .. dropdown:: scipy_ls_lm + .. code-block:: + + scipy_ls_lm + Minimize a nonlinear least squares problem using a Levenberg-Marquardt method. Does not handle bounds and sparse Jacobians. 
Usually the most efficient method for @@ -503,6 +555,10 @@ We implement a few algorithms from scratch. They are currently considered experi .. dropdown:: bhhh + .. code-block:: + + bhhh + Minimize a likelihood function using the BHHH algorithm. BHHH (:cite:`Berndt1974`) can - and should ONLY - be used for minimizing @@ -526,6 +582,10 @@ We implement a few algorithms from scratch. They are currently considered experi .. dropdown:: neldermead_parallel + .. code-block:: + + neldermead_parallel + Minimize a function using the neldermead_parallel algorithm. This is a parallel Nelder-Mead algorithm following Lee D., Wiswall M., A parallel @@ -561,6 +621,10 @@ We implement a few algorithms from scratch. They are currently considered experi .. dropdown:: pounders + .. code-block:: + + pounders + Minimize a function using the POUNDERS algorithm. POUNDERs (:cite:`Benson2017`, :cite:`Wild2015`, `GitHub repository @@ -664,6 +728,10 @@ you need to have `petsc4py `_ installed. .. dropdown:: tao_pounders + .. code-block:: + + tao_pounders + Minimize a function using the POUNDERs algorithm. POUNDERs (:cite:`Benson2017`, :cite:`Wild2015`, `GitHub repository @@ -744,6 +812,10 @@ install each of them separately: .. dropdown:: nag_dfols + .. code-block:: + + nag_dfols + Minimize a function with least squares structure using DFO-LS. The DFO-LS algorithm :cite:`Cartis2018b` is designed to solve the nonlinear @@ -878,6 +950,10 @@ install each of them separately: .. dropdown:: nag_pybobyqa + .. code-block:: + + nag_pybobyqa + Minimize a function using the BOBYQA algorithm. BOBYQA (:cite:`Powell2009`, :cite:`Cartis2018`, :cite:`Cartis2018a`) is a @@ -1001,6 +1077,10 @@ optimizers. .. dropdown:: pygmo_gaco + .. code-block:: + + pygmo_gaco + Minimize a scalar function using the generalized ant colony algorithm. The version available through pygmo is an generalized version of the @@ -1065,6 +1145,10 @@ optimizers. .. dropdown:: pygmo_bee_colony + .. 
code-block:: + + pygmo_bee_colony + Minimize a scalar function using the artifical bee colony algorithm. The Artificial Bee Colony Algorithm was originally proposed by @@ -1085,6 +1169,10 @@ optimizers. .. dropdown:: pygmo_de + .. code-block:: + + pygmo_de + Minimize a scalar function using the differential evolution algorithm. Differential Evolution is a heuristic optimizer originally presented in @@ -1125,6 +1213,10 @@ optimizers. .. dropdown:: pygmo_sea + .. code-block:: + + pygmo_sea + Minimize a scalar function using the (N+1)-ES simple evolutionary algorithm. This algorithm represents the simplest evolutionary strategy, where a population of @@ -1148,6 +1240,10 @@ optimizers. .. dropdown:: pygmo_sga + .. code-block:: + + pygmo_sga + Minimize a scalar function using a simple genetic algorithm. A detailed description of the algorithm can be found `in the pagmo2 documentation @@ -1180,6 +1276,10 @@ optimizers. .. dropdown:: pygmo_sade + .. code-block:: + + pygmo_sade + Minimize a scalar function using Self-adaptive Differential Evolution. The original Differential Evolution algorithm (pygmo_de) can be significantly @@ -1239,6 +1339,10 @@ optimizers. .. dropdown:: pygmo_cmaes + .. code-block:: + + pygmo_cmaes + Minimize a scalar function using the Covariance Matrix Evolutionary Strategy. CMA-ES is one of the most successful algorithm, classified as an Evolutionary @@ -1277,6 +1381,10 @@ optimizers. .. dropdown:: pygmo_simulated_annealing + .. code-block:: + + pygmo_simulated_annealing + Minimize a function with the simulated annealing algorithm. This version of the simulated annealing algorithm is, essentially, an iterative @@ -1308,6 +1416,10 @@ optimizers. .. dropdown:: pygmo_pso + .. code-block:: + + pygmo_pso + Minimize a scalar function using Particle Swarm Optimization. Particle swarm optimization (PSO) is a population based algorithm inspired by the @@ -1369,6 +1481,10 @@ optimizers. .. dropdown:: pygmo_pso_gen + .. 
code-block:: + + pygmo_pso_gen + Minimize a scalar function with generational Particle Swarm Optimization. Particle Swarm Optimization (generational) is identical to pso, but does update the @@ -1436,6 +1552,10 @@ optimizers. .. dropdown:: pygmo_mbh + .. code-block:: + + pygmo_mbh + Minimize a scalar function using generalized Monotonic Basin Hopping. Monotonic basin hopping, or simply, basin hopping, is an algorithm rooted in the @@ -1465,6 +1585,10 @@ optimizers. .. dropdown:: pygmo_xnes + .. code-block:: + + pygmo_xnes + Minimize a scalar function using Exponential Evolution Strategies. Exponential Natural Evolution Strategies is an algorithm closely related to CMAES @@ -1505,6 +1629,10 @@ optimizers. .. dropdown:: pygmo_gwo + .. code-block:: + + pygmo_gwo + Minimize a scalar function usinng the Grey Wolf Optimizer. The grey wolf optimizer was proposed by :cite:`Mirjalili2014`. The pygmo @@ -1532,6 +1660,10 @@ optimizers. .. dropdown:: pygmo_compass_search + .. code-block:: + + pygmo_compass_search + Minimize a scalar function using compass search. The algorithm is described in :cite:`Kolda2003`. @@ -1551,6 +1683,10 @@ optimizers. .. dropdown:: pygmo_ihs + .. code-block:: + + pygmo_ihs + Minimize a scalar function using the improved harmony search algorithm. Improved harmony search (IHS) was introduced by :cite:`Mahdavi2007`. @@ -1576,6 +1712,10 @@ optimizers. .. dropdown:: pygmo_de1220 + .. code-block:: + + pygmo_de1220 + Minimize a scalar function using Self-adaptive Differential Evolution, pygmo flavor. See `the PAGMO documentation for details @@ -1640,6 +1780,10 @@ cyipopt``). .. dropdown:: ipopt + .. code-block:: + + ipopt + Minimize a scalar function using the Interior Point Optimizer. This implementation of the Interior Point Optimizer (:cite:`Waechter2005`, @@ -2836,6 +2980,10 @@ fides>=0.7.4``, make sure you have at least 0.7.1). .. dropdown:: fides + .. 
code-block:: + + fides + `Fides `_ implements an Interior Trust Region Reflective for boundary costrained optimization problems based on the papers :cite:`Coleman1994` and :cite:`Coleman1996`. Accordingly, Fides is named after @@ -2941,6 +3089,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_bobyqa + .. code-block:: + + nlopt_bobyqa + Minimize a scalar function using the BOBYQA algorithm. The implementation is derived from the BOBYQA subroutine of M. J. D. Powell. @@ -2966,6 +3118,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_neldermead + .. code-block:: + + nlopt_neldermead + Minimize a scalar function using the Nelder-Mead simplex algorithm. The basic algorithm is described in :cite:`Nelder1965`. @@ -2988,6 +3144,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_praxis + .. code-block:: + + nlopt_praxis + Minimize a scalar function using principal-axis method. This is a gradient-free local optimizer originally described in :cite:`Brent1972`. @@ -3027,6 +3187,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_cobyla + .. code-block:: + + nlopt_cobyla + Minimize a scalar function using the cobyla method. The alggorithm is derived from Powell's Constrained Optimization BY Linear @@ -3063,6 +3227,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_sbplx + .. code-block:: + + nlopt_sbplx + Minimize a scalar function using the "Subplex" algorithm. The alggorithm is a reimplementation of Tom Rowan's "Subplex" algorithm. @@ -3089,6 +3257,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_newuoa + .. code-block:: + + nlopt_newuoa + Minimize a scalar function using the NEWUOA algorithm. 
The algorithm is derived from the NEWUOA subroutine of M.J.D Powell which @@ -3118,6 +3290,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_tnewton + .. code-block:: + + nlopt_tnewton + Minimize a scalar function using the "TNEWTON" algorithm. The alggorithm is based on a Fortran implementation of a preconditioned @@ -3144,6 +3320,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_lbfgs + .. code-block:: + + nlopt_lbfgs + Minimize a scalar function using the "LBFGS" algorithm. The alggorithm is based on a Fortran implementation of low storage BFGS algorithm @@ -3170,6 +3350,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_ccsaq + .. code-block:: + + nlopt_ccsaq + Minimize a scalar function using CCSAQ algorithm. CCSAQ uses the quadratic variant of the conservative convex separable approximation. @@ -3196,6 +3380,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_mma + .. code-block:: + + nlopt_mma + Minimize a scalar function using the method of moving asymptotes (MMA). The implementation is based on an algorithm described in :cite:`Svanberg2002`. @@ -3222,6 +3410,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_var + .. code-block:: + + nlopt_var + Minimize a scalar function limited memory switching variable-metric method. The algorithm relies on saving only limited number M of past updates of the @@ -3246,6 +3438,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_slsqp + .. code-block:: + + nlopt_slsqp + Optimize a scalar function based on SLSQP method. SLSQP solves gradient based nonlinearly constrained optimization problems. @@ -3269,6 +3465,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_direct + .. 
code-block:: + + nlopt_direct + Optimize a scalar function based on DIRECT method. DIRECT is the DIviding RECTangles algorithm for global optimization, described @@ -3311,6 +3511,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_esch + .. code-block:: + + nlopt_esch + Optimize a scalar function using the ESCH algorithm. ESCH is an evolutionary algorithm that supports bound constraints only. Specifi @@ -3333,6 +3537,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_isres + .. code-block:: + + nlopt_isres + Optimize a scalar function using the ISRES algorithm. ISRES is an implementation of "Improved Stochastic Evolution Strategy" @@ -3358,6 +3566,10 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_crs2_lm + .. code-block:: + + nlopt_crs2_lm + Optimize a scalar function using the CRS2_LM algorithm. This implementation of controlled random search method with local mutation is based diff --git a/docs/source/conf.py b/docs/source/conf.py index 6f1ac807a..2daa84804 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -48,6 +48,7 @@ "nbsphinx", "sphinxcontrib.bibtex", "sphinx_panels", + "sphinx_copybutton", ] bibtex_bibfiles = ["refs.bib"] diff --git a/docs/source/getting_started/installation.rst b/docs/source/getting_started/installation.rst index 495ce492b..990768f63 100644 --- a/docs/source/getting_started/installation.rst +++ b/docs/source/getting_started/installation.rst @@ -9,9 +9,13 @@ Basic installation The package can be installed via conda. To do so, type the following commands in a terminal or shell: -``$ conda config --add channels conda-forge`` +.. code-block:: -``$ conda install estimagic`` + conda config --add channels conda-forge + +.. code-block:: + + conda install estimagic The first line adds conda-forge to your conda channels. This is necessary for conda to find all dependencies of estimagic. 
The second line installs estimagic @@ -33,16 +37,30 @@ see :ref:`list_of_algorithms`. To enable all algorithms at once, do the following: -``conda install nlopt`` +.. code-block:: + + conda install nlopt + +.. code-block:: + + pip install Py-BOBYQA + +.. code-block:: + + pip install DFO-LS + +.. code-block:: + + conda install petsc4py (Not available on Windows) -``pip install Py-BOBYQA`` +.. code-block:: -``pip install DFO-LS`` + conda install cyipopt -``conda install petsc4py`` (Not available on Windows) +.. code-block:: -``conda install cyipopt`` + conda install pygmo -``conda install pygmo`` +.. code-block:: -``pip install fides>=0.7.4 (Make sure you have at least 0.7.1)`` + pip install fides>=0.7.4 (Make sure you have at least 0.7.1) diff --git a/environment.yml b/environment.yml index b59413ca9..6fe5e4a58 100644 --- a/environment.yml +++ b/environment.yml @@ -41,6 +41,7 @@ dependencies: - pytask>=0.0.11 - nlopt - sphinx-panels + - sphinx-copybutton - pygmo - nb_black - pybaum>=0.1.2 From 032c3c3a3e1497954eea55986a5d8e319d728afd Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Sun, 26 Jun 2022 15:03:25 +0000 Subject: [PATCH 2/8] [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --- docs/source/algorithms.rst | 106 +++++++++---------- docs/source/getting_started/installation.rst | 6 +- 2 files changed, 56 insertions(+), 56 deletions(-) diff --git a/docs/source/algorithms.rst b/docs/source/algorithms.rst index ccd2680ab..d68841f78 100644 --- a/docs/source/algorithms.rst +++ b/docs/source/algorithms.rst @@ -69,7 +69,7 @@ you install estimagic. .. dropdown:: scipy_slsqp .. code-block:: - + scipy_slsqp Minimize a scalar function of one or more variables using the SLSQP algorithm. @@ -92,7 +92,7 @@ you install estimagic. .. dropdown:: scipy_neldermead - + .. code-block:: scipy_neldermead @@ -167,9 +167,9 @@ you install estimagic. .. 
dropdown:: scipy_bfgs - + .. code-block:: - + scipy_bfgs Minimize a scalar function of one or more variables using the BFGS algorithm. @@ -193,7 +193,7 @@ you install estimagic. .. code-block:: - scipy_conjugate_gradient + scipy_conjugate_gradient Minimize a function using a nonlinear conjugate gradient algorithm. @@ -225,7 +225,7 @@ you install estimagic. .. dropdown:: scipy_newton_cg .. code-block:: - + scipy_newton_cg Minimize a scalar function using Newton's conjugate gradient algorithm. @@ -271,7 +271,7 @@ you install estimagic. .. dropdown:: scipy_cobyla .. code-block:: - + scipy_cobyla Minimize a scalar function of one or more variables using the COBYLA algorithm. @@ -305,7 +305,7 @@ you install estimagic. .. dropdown:: scipy_truncated_newton .. code-block:: - + scipy_truncated_newton Minimize a scalar function using truncated Newton algorithm. @@ -371,7 +371,7 @@ you install estimagic. .. dropdown:: scipy_trust_constr .. code-block:: - + scipy_trust_constr Minimize a scalar function of one or more variables subject to constraints. @@ -428,7 +428,7 @@ you install estimagic. .. dropdown:: scipy_ls_dogbox .. code-block:: - + scipy_ls_dogbox Minimize a nonlinear least squares problem using a rectangular trust region method. @@ -468,7 +468,7 @@ you install estimagic. .. dropdown:: scipy_ls_trf .. code-block:: - + scipy_ls_trf Minimize a nonlinear least squares problem using a trustregion reflective method. @@ -508,7 +508,7 @@ you install estimagic. .. dropdown:: scipy_ls_lm .. code-block:: - + scipy_ls_lm Minimize a nonlinear least squares problem using a Levenberg-Marquardt method. @@ -556,7 +556,7 @@ We implement a few algorithms from scratch. They are currently considered experi .. dropdown:: bhhh .. code-block:: - + bhhh Minimize a likelihood function using the BHHH algorithm. @@ -583,7 +583,7 @@ We implement a few algorithms from scratch. They are currently considered experi .. dropdown:: neldermead_parallel .. 
code-block:: - + neldermead_parallel Minimize a function using the neldermead_parallel algorithm. @@ -622,7 +622,7 @@ We implement a few algorithms from scratch. They are currently considered experi .. dropdown:: pounders .. code-block:: - + pounders Minimize a function using the POUNDERS algorithm. @@ -729,7 +729,7 @@ you need to have `petsc4py `_ installed. .. dropdown:: tao_pounders .. code-block:: - + tao_pounders Minimize a function using the POUNDERs algorithm. @@ -813,7 +813,7 @@ install each of them separately: .. dropdown:: nag_dfols .. code-block:: - + nag_dfols Minimize a function with least squares structure using DFO-LS. @@ -951,7 +951,7 @@ install each of them separately: .. dropdown:: nag_pybobyqa .. code-block:: - + nag_pybobyqa Minimize a function using the BOBYQA algorithm. @@ -1078,7 +1078,7 @@ optimizers. .. dropdown:: pygmo_gaco .. code-block:: - + pygmo_gaco Minimize a scalar function using the generalized ant colony algorithm. @@ -1146,7 +1146,7 @@ optimizers. .. dropdown:: pygmo_bee_colony .. code-block:: - + pygmo_bee_colony Minimize a scalar function using the artifical bee colony algorithm. @@ -1170,7 +1170,7 @@ optimizers. .. dropdown:: pygmo_de .. code-block:: - + pygmo_de Minimize a scalar function using the differential evolution algorithm. @@ -1214,7 +1214,7 @@ optimizers. .. dropdown:: pygmo_sea .. code-block:: - + pygmo_sea Minimize a scalar function using the (N+1)-ES simple evolutionary algorithm. @@ -1241,7 +1241,7 @@ optimizers. .. dropdown:: pygmo_sga .. code-block:: - + pygmo_sga Minimize a scalar function using a simple genetic algorithm. @@ -1277,7 +1277,7 @@ optimizers. .. dropdown:: pygmo_sade .. code-block:: - + pygmo_sade Minimize a scalar function using Self-adaptive Differential Evolution. @@ -1340,7 +1340,7 @@ optimizers. .. dropdown:: pygmo_cmaes .. code-block:: - + pygmo_cmaes Minimize a scalar function using the Covariance Matrix Evolutionary Strategy. @@ -1382,7 +1382,7 @@ optimizers. .. 
dropdown:: pygmo_simulated_annealing .. code-block:: - + pygmo_simulated_annealing Minimize a function with the simulated annealing algorithm. @@ -1417,7 +1417,7 @@ optimizers. .. dropdown:: pygmo_pso .. code-block:: - + pygmo_pso Minimize a scalar function using Particle Swarm Optimization. @@ -1482,7 +1482,7 @@ optimizers. .. dropdown:: pygmo_pso_gen .. code-block:: - + pygmo_pso_gen Minimize a scalar function with generational Particle Swarm Optimization. @@ -1553,9 +1553,9 @@ optimizers. .. dropdown:: pygmo_mbh .. code-block:: - + pygmo_mbh - + Minimize a scalar function using generalized Monotonic Basin Hopping. Monotonic basin hopping, or simply, basin hopping, is an algorithm rooted in the @@ -1586,7 +1586,7 @@ optimizers. .. dropdown:: pygmo_xnes .. code-block:: - + pygmo_xnes Minimize a scalar function using Exponential Evolution Strategies. @@ -1630,7 +1630,7 @@ optimizers. .. dropdown:: pygmo_gwo .. code-block:: - + pygmo_gwo Minimize a scalar function usinng the Grey Wolf Optimizer. @@ -1661,7 +1661,7 @@ optimizers. .. dropdown:: pygmo_compass_search .. code-block:: - + pygmo_compass_search Minimize a scalar function using compass search. @@ -1684,7 +1684,7 @@ optimizers. .. dropdown:: pygmo_ihs .. code-block:: - + pygmo_ihs Minimize a scalar function using the improved harmony search algorithm. @@ -1713,7 +1713,7 @@ optimizers. .. dropdown:: pygmo_de1220 .. code-block:: - + pygmo_de1220 Minimize a scalar function using Self-adaptive Differential Evolution, pygmo flavor. @@ -1781,7 +1781,7 @@ cyipopt``). .. dropdown:: ipopt .. code-block:: - + ipopt Minimize a scalar function using the Interior Point Optimizer. @@ -2981,7 +2981,7 @@ fides>=0.7.4``, make sure you have at least 0.7.1). .. dropdown:: fides .. code-block:: - + fides `Fides `_ implements an Interior @@ -3090,7 +3090,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_bobyqa .. 
code-block:: - + nlopt_bobyqa Minimize a scalar function using the BOBYQA algorithm. @@ -3119,7 +3119,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_neldermead .. code-block:: - + nlopt_neldermead Minimize a scalar function using the Nelder-Mead simplex algorithm. @@ -3145,7 +3145,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_praxis .. code-block:: - + nlopt_praxis Minimize a scalar function using principal-axis method. @@ -3188,7 +3188,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_cobyla .. code-block:: - + nlopt_cobyla Minimize a scalar function using the cobyla method. @@ -3228,7 +3228,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_sbplx .. code-block:: - + nlopt_sbplx Minimize a scalar function using the "Subplex" algorithm. @@ -3258,7 +3258,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_newuoa .. code-block:: - + nlopt_newuoa Minimize a scalar function using the NEWUOA algorithm. @@ -3291,7 +3291,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_tnewton .. code-block:: - + nlopt_tnewton Minimize a scalar function using the "TNEWTON" algorithm. @@ -3321,7 +3321,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_lbfgs .. code-block:: - + nlopt_lbfgs Minimize a scalar function using the "LBFGS" algorithm. @@ -3351,7 +3351,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_ccsaq .. code-block:: - + nlopt_ccsaq Minimize a scalar function using CCSAQ algorithm. @@ -3381,7 +3381,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_mma .. code-block:: - + nlopt_mma Minimize a scalar function using the method of moving asymptotes (MMA). 
@@ -3411,7 +3411,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_var .. code-block:: - + nlopt_var Minimize a scalar function limited memory switching variable-metric method. @@ -3439,7 +3439,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_slsqp .. code-block:: - + nlopt_slsqp Optimize a scalar function based on SLSQP method. @@ -3466,7 +3466,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_direct .. code-block:: - + nlopt_direct Optimize a scalar function based on DIRECT method. @@ -3512,7 +3512,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_esch .. code-block:: - + nlopt_esch Optimize a scalar function using the ESCH algorithm. @@ -3538,7 +3538,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_isres .. code-block:: - + nlopt_isres Optimize a scalar function using the ISRES algorithm. @@ -3567,7 +3567,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. dropdown:: nlopt_crs2_lm .. code-block:: - + nlopt_crs2_lm Optimize a scalar function using the CRS2_LM algorithm. diff --git a/docs/source/getting_started/installation.rst b/docs/source/getting_started/installation.rst index 990768f63..bd12c5996 100644 --- a/docs/source/getting_started/installation.rst +++ b/docs/source/getting_started/installation.rst @@ -11,10 +11,10 @@ a terminal or shell: .. code-block:: - conda config --add channels conda-forge + conda config --add channels conda-forge .. code-block:: - + conda install estimagic The first line adds conda-forge to your conda channels. This is necessary for @@ -38,7 +38,7 @@ see :ref:`list_of_algorithms`. To enable all algorithms at once, do the following: .. code-block:: - + conda install nlopt .. 
code-block:: From 61f83594826205fffba732cc643a33640fe9c759 Mon Sep 17 00:00:00 2001 From: Annica Gehlen Date: Sun, 26 Jun 2022 18:29:07 +0200 Subject: [PATCH 3/8] Edit installation file. --- docs/source/getting_started/installation.rst | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/docs/source/getting_started/installation.rst b/docs/source/getting_started/installation.rst index 990768f63..a6a96fef9 100644 --- a/docs/source/getting_started/installation.rst +++ b/docs/source/getting_started/installation.rst @@ -14,7 +14,7 @@ a terminal or shell: conda config --add channels conda-forge .. code-block:: - + conda install estimagic The first line adds conda-forge to your conda channels. This is necessary for @@ -51,7 +51,9 @@ To enable all algorithms at once, do the following: .. code-block:: - conda install petsc4py (Not available on Windows) + conda install petsc4py + +*Note*: ```petsc4py``` is not available on Windows. .. code-block:: @@ -63,4 +65,6 @@ To enable all algorithms at once, do the following: .. code-block:: - pip install fides>=0.7.4 (Make sure you have at least 0.7.1) + pip install fides>=0.7.4 + +*Note*: Make sure you have at least 0.7.1. \ No newline at end of file From 2148f610e0d4131c1cecbda04fa1704a208f5b30 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Sun, 26 Jun 2022 16:30:10 +0000 Subject: [PATCH 4/8] [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --- docs/source/getting_started/installation.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/source/getting_started/installation.rst b/docs/source/getting_started/installation.rst index 5cdce27c5..124353106 100644 --- a/docs/source/getting_started/installation.rst +++ b/docs/source/getting_started/installation.rst @@ -51,7 +51,7 @@ To enable all algorithms at once, do the following: .. 
code-block:: - conda install petsc4py + conda install petsc4py *Note*: ```petsc4py``` is not available on Windows. @@ -67,4 +67,4 @@ To enable all algorithms at once, do the following: pip install fides>=0.7.4 -*Note*: Make sure you have at least 0.7.1. \ No newline at end of file +*Note*: Make sure you have at least 0.7.1. From bd6163916f21658e140e6de50903dbf9bf4fbe12 Mon Sep 17 00:00:00 2001 From: Annica Gehlen Date: Tue, 28 Jun 2022 18:47:05 +0200 Subject: [PATCH 5/8] Add quotes around algorithms. --- docs/source/algorithms.rst | 106 ++++++++++++++++++------------------- 1 file changed, 53 insertions(+), 53 deletions(-) diff --git a/docs/source/algorithms.rst b/docs/source/algorithms.rst index d68841f78..d2f1aeb3f 100--- a/docs/source/algorithms.rst +++ b/docs/source/algorithms.rst @@ -22,7 +22,7 @@ you install estimagic. .. code-block:: - scipy_lbfgsb + "scipy_lbfgsb" Minimize a scalar function of one or more variables using the L-BFGS-B algorithm. @@ -70,7 +70,7 @@ you install estimagic. .. code-block:: - scipy_slsqp + "scipy_slsqp" Minimize a scalar function of one or more variables using the SLSQP algorithm. @@ -95,7 +95,7 @@ you install estimagic. .. code-block:: - scipy_neldermead + "scipy_neldermead" Minimize a scalar function using the Nelder-Mead algorithm. @@ -131,7 +131,7 @@ you install estimagic. .. code-block:: - scipy_powell + "scipy_powell" Minimize a scalar function using the modified Powell method. @@ -170,7 +170,7 @@ you install estimagic. .. code-block:: - scipy_bfgs + "scipy_bfgs" Minimize a scalar function of one or more variables using the BFGS algorithm. @@ -193,7 +193,7 @@ you install estimagic. .. code-block:: - scipy_conjugate_gradient + "scipy_conjugate_gradient" Minimize a function using a nonlinear conjugate gradient algorithm. @@ -226,7 +226,7 @@ you install estimagic. .. code-block:: - scipy_newton_cg + "scipy_newton_cg" Minimize a scalar function using Newton's conjugate gradient algorithm. 
@@ -272,7 +272,7 @@ you install estimagic. .. code-block:: - scipy_cobyla + "scipy_cobyla" Minimize a scalar function of one or more variables using the COBYLA algorithm. @@ -306,7 +306,7 @@ you install estimagic. .. code-block:: - scipy_truncated_newton + "scipy_truncated_newton" Minimize a scalar function using truncated Newton algorithm. @@ -372,7 +372,7 @@ you install estimagic. .. code-block:: - scipy_trust_constr + "scipy_trust_constr" Minimize a scalar function of one or more variables subject to constraints. @@ -429,7 +429,7 @@ you install estimagic. .. code-block:: - scipy_ls_dogbox + "scipy_ls_dogbox" Minimize a nonlinear least squares problem using a rectangular trust region method. @@ -469,7 +469,7 @@ you install estimagic. .. code-block:: - scipy_ls_trf + "scipy_ls_trf" Minimize a nonlinear least squares problem using a trustregion reflective method. @@ -509,7 +509,7 @@ you install estimagic. .. code-block:: - scipy_ls_lm + "scipy_ls_lm" Minimize a nonlinear least squares problem using a Levenberg-Marquardt method. @@ -557,7 +557,7 @@ We implement a few algorithms from scratch. They are currently considered experi .. code-block:: - bhhh + "bhhh" Minimize a likelihood function using the BHHH algorithm. @@ -584,7 +584,7 @@ We implement a few algorithms from scratch. They are currently considered experi .. code-block:: - neldermead_parallel + "neldermead_parallel" Minimize a function using the neldermead_parallel algorithm. @@ -623,7 +623,7 @@ We implement a few algorithms from scratch. They are currently considered experi .. code-block:: - pounders + "pounders" Minimize a function using the POUNDERS algorithm. @@ -730,7 +730,7 @@ you need to have `petsc4py `_ installed. .. code-block:: - tao_pounders + "tao_pounders" Minimize a function using the POUNDERs algorithm. @@ -814,7 +814,7 @@ install each of them separately: .. code-block:: - nag_dfols + "nag_dfols" Minimize a function with least squares structure using DFO-LS. 
@@ -952,7 +952,7 @@ install each of them separately: .. code-block:: - nag_pybobyqa + "nag_pybobyqa" Minimize a function using the BOBYQA algorithm. @@ -1079,7 +1079,7 @@ optimizers. .. code-block:: - pygmo_gaco + "pygmo_gaco" Minimize a scalar function using the generalized ant colony algorithm. @@ -1147,7 +1147,7 @@ optimizers. .. code-block:: - pygmo_bee_colony + "pygmo_bee_colony" Minimize a scalar function using the artifical bee colony algorithm. @@ -1171,7 +1171,7 @@ optimizers. .. code-block:: - pygmo_de + "pygmo_de" Minimize a scalar function using the differential evolution algorithm. @@ -1215,7 +1215,7 @@ optimizers. .. code-block:: - pygmo_sea + "pygmo_sea" Minimize a scalar function using the (N+1)-ES simple evolutionary algorithm. @@ -1242,7 +1242,7 @@ optimizers. .. code-block:: - pygmo_sga + "pygmo_sga" Minimize a scalar function using a simple genetic algorithm. @@ -1278,7 +1278,7 @@ optimizers. .. code-block:: - pygmo_sade + "pygmo_sade" Minimize a scalar function using Self-adaptive Differential Evolution. @@ -1341,7 +1341,7 @@ optimizers. .. code-block:: - pygmo_cmaes + "pygmo_cmaes" Minimize a scalar function using the Covariance Matrix Evolutionary Strategy. @@ -1383,7 +1383,7 @@ optimizers. .. code-block:: - pygmo_simulated_annealing + "pygmo_simulated_annealing" Minimize a function with the simulated annealing algorithm. @@ -1418,7 +1418,7 @@ optimizers. .. code-block:: - pygmo_pso + "pygmo_pso" Minimize a scalar function using Particle Swarm Optimization. @@ -1483,7 +1483,7 @@ optimizers. .. code-block:: - pygmo_pso_gen + "pygmo_pso_gen" Minimize a scalar function with generational Particle Swarm Optimization. @@ -1554,7 +1554,7 @@ optimizers. .. code-block:: - pygmo_mbh + "pygmo_mbh" Minimize a scalar function using generalized Monotonic Basin Hopping. @@ -1587,7 +1587,7 @@ optimizers. .. code-block:: - pygmo_xnes + "pygmo_xnes" Minimize a scalar function using Exponential Evolution Strategies. @@ -1631,7 +1631,7 @@ optimizers. .. 
code-block:: - pygmo_gwo + "pygmo_gwo" Minimize a scalar function usinng the Grey Wolf Optimizer. @@ -1662,7 +1662,7 @@ optimizers. .. code-block:: - pygmo_compass_search + "pygmo_compass_search" Minimize a scalar function using compass search. @@ -1685,7 +1685,7 @@ optimizers. .. code-block:: - pygmo_ihs + "pygmo_ihs" Minimize a scalar function using the improved harmony search algorithm. @@ -1714,7 +1714,7 @@ optimizers. .. code-block:: - pygmo_de1220 + "pygmo_de1220" Minimize a scalar function using Self-adaptive Differential Evolution, pygmo flavor. @@ -1782,7 +1782,7 @@ cyipopt``). .. code-block:: - ipopt + "ipopt" Minimize a scalar function using the Interior Point Optimizer. @@ -2982,7 +2982,7 @@ fides>=0.7.4``, make sure you have at least 0.7.1). .. code-block:: - fides + "fides" `Fides `_ implements an Interior Trust Region Reflective for boundary costrained optimization problems based on the @@ -3091,7 +3091,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_bobyqa + "nlopt_bobyqa" Minimize a scalar function using the BOBYQA algorithm. @@ -3120,7 +3120,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_neldermead + "nlopt_neldermead" Minimize a scalar function using the Nelder-Mead simplex algorithm. @@ -3146,7 +3146,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_praxis + "nlopt_praxis" Minimize a scalar function using principal-axis method. @@ -3189,7 +3189,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_cobyla + "nlopt_cobyla" Minimize a scalar function using the cobyla method. @@ -3229,7 +3229,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_sbplx + "nlopt_sbplx" Minimize a scalar function using the "Subplex" algorithm. @@ -3259,7 +3259,7 @@ using an NLOPT algorithm. 
To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_newuoa + "nlopt_newuoa" Minimize a scalar function using the NEWUOA algorithm. @@ -3292,7 +3292,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_tnewton + "nlopt_tnewton" Minimize a scalar function using the "TNEWTON" algorithm. @@ -3322,7 +3322,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_lbfgs + "nlopt_lbfgs" Minimize a scalar function using the "LBFGS" algorithm. @@ -3352,7 +3352,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_ccsaq + "nlopt_ccsaq" Minimize a scalar function using CCSAQ algorithm. @@ -3382,7 +3382,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_mma + "nlopt_mma" Minimize a scalar function using the method of moving asymptotes (MMA). @@ -3412,7 +3412,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_var + "nlopt_var" Minimize a scalar function limited memory switching variable-metric method. @@ -3440,7 +3440,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_slsqp + "nlopt_slsqp" Optimize a scalar function based on SLSQP method. @@ -3467,7 +3467,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_direct + "nlopt_direct" Optimize a scalar function based on DIRECT method. @@ -3513,7 +3513,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_esch + "nlopt_esch" Optimize a scalar function using the ESCH algorithm. @@ -3539,7 +3539,7 @@ using an NLOPT algorithm. To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_isres + "nlopt_isres" Optimize a scalar function using the ISRES algorithm. @@ -3568,7 +3568,7 @@ using an NLOPT algorithm. 
To install nlopt run ``conda install nlopt``. .. code-block:: - nlopt_crs2_lm + "nlopt_crs2_lm" Optimize a scalar function using the CRS2_LM algorithm. From a5dcccf65fdcf36739b15f2e0eabb335c2c66db6 Mon Sep 17 00:00:00 2001 From: Annica Gehlen Date: Tue, 28 Jun 2022 19:11:45 +0200 Subject: [PATCH 6/8] Version cyiopt. --- environment.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/environment.yml b/environment.yml index 6fe5e4a58..8d9ffeccb 100644 --- a/environment.yml +++ b/environment.yml @@ -16,7 +16,7 @@ dependencies: - click - conda-build - conda-verify - - cyipopt + - cyipopt<0.3.0 - fuzzywuzzy - joblib - cloudpickle From 28ca5a26e58d2f5b8a06927ac209c33cf2d1cae4 Mon Sep 17 00:00:00 2001 From: Annica Gehlen <39128048+amageh@users.noreply.github.com> Date: Tue, 28 Jun 2022 21:34:26 +0200 Subject: [PATCH 7/8] Update environment.yml --- environment.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/environment.yml b/environment.yml index 8d9ffeccb..6fe5e4a58 100644 --- a/environment.yml +++ b/environment.yml @@ -16,7 +16,7 @@ dependencies: - click - conda-build - conda-verify - - cyipopt<0.3.0 + - cyipopt - fuzzywuzzy - joblib - cloudpickle From 5b6610f60af3a379af440ffa00f85f872f392121 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Thu, 30 Jun 2022 20:23:47 +0000 Subject: [PATCH 8/8] [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --- docs/source/getting_started/installation.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/getting_started/installation.rst b/docs/source/getting_started/installation.rst index b47e4e4df..4314a984a 100644 --- a/docs/source/getting_started/installation.rst +++ b/docs/source/getting_started/installation.rst @@ -67,4 +67,4 @@ To enable all algorithms at once, do the following: pip install fides>=0.7.4 -*Note*: Make sure you have at least 0.7.1. 
\ No newline at end of file +*Note*: Make sure you have at least 0.7.1.
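The point of quoting the algorithm names in these patches is that the new copy button then copies a valid Python string, which users pass as the `algorithm` argument of estimagic's `minimize`. As a minimal sketch of what the first quoted name refers to (using plain scipy rather than estimagic, with an assumed sphere criterion — `"scipy_lbfgsb"` wraps scipy's `"L-BFGS-B"` method):

```python
import numpy as np
from scipy.optimize import minimize


def sphere(x):
    # Assumed toy criterion with its minimum at the origin.
    return np.sum(x**2)


# estimagic's "scipy_lbfgsb" corresponds to scipy's bounded L-BFGS-B method;
# the quoted string in the docs is what the copy button hands to the user.
res = minimize(
    sphere,
    x0=np.array([3.0, -2.0]),
    method="L-BFGS-B",
    bounds=[(-5.0, 5.0)] * 2,
)
print(res.x)  # close to the minimum at [0, 0]
```

The same pattern applies to every quoted name above: the string selects the wrapped optimizer, and the underlying library must be installed (scipy ships with estimagic; nlopt, pygmo, etc. are optional extras as described in the patched installation guide).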