[Doc] Consistently format Python module names using italics
The exception to this is where the reference is specifically to the
package name, for instance when describing installation with pip or conda.
speth committed Feb 16, 2022
1 parent a7c2f32 commit fcaccbd
Showing 4 changed files with 37 additions and 37 deletions.
36 changes: 18 additions & 18 deletions interfaces/cython/cantera/composite.py
@@ -357,7 +357,7 @@ class SolutionArray:
 array of states.
 `SolutionArray` can represent both 1D and multi-dimensional arrays of states,
-with shapes described in the same way as Numpy arrays. All of the states
+with shapes described in the same way as *NumPy* arrays. All of the states
 can be set in a single call::
 >>> gas = ct.Solution('gri30.yaml')
@@ -367,13 +367,13 @@ class SolutionArray:
 >>> X = 'CH4:1.0, O2:1.0, N2:3.76'
 >>> states.TPX = T, P, X
-Similar to Numpy arrays, input with fewer non-singleton dimensions than the
+Similar to *NumPy* arrays, input with fewer non-singleton dimensions than the
 `SolutionArray` is 'broadcast' to generate input of the appropriate shape. In
 the above example, the single value for the mole fraction input is applied
 to each input, while each row has a constant temperature and each column has
 a constant pressure.
-Computed properties are returned as Numpy arrays with the same shape as the
+Computed properties are returned as *NumPy* arrays with the same shape as the
 array of states, with additional dimensions appended as necessary for non-
 scalar output (e.g. per-species or per-reaction properties)::
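The broadcasting rule the docstring describes is NumPy's own. A minimal standalone sketch of the same (6, 10) grid using only NumPy (the temperature and pressure values are hypothetical, and no Cantera installation is needed):

```python
import numpy as np

# A hypothetical 6x10 grid of states, as in the SolutionArray example:
# one temperature per row, one pressure per column.
T = np.linspace(300.0, 1000.0, 6)[:, np.newaxis]  # shape (6, 1)
P = np.linspace(1e5, 1e6, 10)[np.newaxis, :]      # shape (1, 10)

# Broadcasting expands both inputs to the full (6, 10) grid, analogous
# to what `states.TPX = T, P, X` does for a SolutionArray.
TP = np.broadcast_arrays(T, P)
assert TP[0].shape == (6, 10) and TP[1].shape == (6, 10)

# Each row has a constant temperature, each column a constant pressure.
assert np.all(TP[0][2, :] == T[2, 0])
assert np.all(TP[1][:, 5] == P[0, 5])
```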
@@ -394,7 +394,7 @@ class SolutionArray:
 >>> states.equilibrate('HP')
 >>> states.T # -> adiabatic flame temperature at various equivalence ratios
-`SolutionArray` objects can also be 'sliced' like Numpy arrays, which can be
+`SolutionArray` objects can also be 'sliced' like *NumPy* arrays, which can be
 used both for accessing and setting properties::
 >>> states = ct.SolutionArray(gas, (6, 10))
@@ -449,8 +449,8 @@ class SolutionArray:
 For HDF export and import, the (optional) keyword argument ``group`` allows
 for saving and accessing of multiple solutions in a single container file.
-Note that `write_hdf` and `read_hdf` require a working installation of h5py.
-The package `h5py` can be installed using pip or conda.
+Note that `write_hdf` and `read_hdf` require a working installation of *h5py*.
+The package *h5py* can be installed using pip or conda.
 :param phase: The `Solution` object used to compute the thermodynamic,
 kinetic, and transport properties
@@ -1141,11 +1141,11 @@ def read_csv(self, filename, normalize=True):

 def to_pandas(self, cols=None, *args, **kwargs):
 """
-Returns the data specified by ``cols`` in a single pandas DataFrame.
+Returns the data specified by ``cols`` in a single `pandas.DataFrame`.
 Additional arguments are passed on to `collect_data`. This method works
-only with 1D `SolutionArray` objects and requires a working pandas
-installation. Use pip or conda to install `pandas` to enable this method.
+only with 1D `SolutionArray` objects and requires a working *pandas*
+installation. Use pip or conda to install ``pandas`` to enable this method.
 """

 if isinstance(_pandas, ImportError):
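The guard ``if isinstance(_pandas, ImportError)`` visible here is a lazy optional-dependency pattern: the import error is stored at import time and raised only when the optional feature is actually used. A generic sketch of the pattern (the helper names below are hypothetical, not Cantera API):

```python
import importlib

def optional_import(name):
    # Try the import once; keep the ImportError instead of raising,
    # mirroring the `_pandas` / `_h5py` guards in the diff above.
    try:
        return importlib.import_module(name)
    except ImportError as err:
        return err

_pandas = optional_import("pandas")

def to_pandas_stub(data):
    # Raise the stored error only when the optional feature is used.
    if isinstance(_pandas, ImportError):
        raise _pandas
    return _pandas.DataFrame(data)
```

This keeps the module importable on systems without the optional package, while still giving users the original, informative ImportError at the point of use.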
@@ -1158,11 +1158,11 @@ def to_pandas(self, cols=None, *args, **kwargs):

 def from_pandas(self, df, normalize=True):
 """
-Restores `SolutionArray` data from a pandas DataFrame ``df``.
+Restores `SolutionArray` data from a `pandas.DataFrame` ``df``.
 This method is intended for loading of data that were previously
-exported by `to_pandas`. The method requires a working pandas
-installation. The package 'pandas' can be installed using pip or conda.
+exported by `to_pandas`. The method requires a working *pandas*
+installation. The package ``pandas`` can be installed using pip or conda.
 The ``normalize`` argument is passed on to `restore_data` to normalize
 mole or mass fractions. By default, ``normalize`` is ``True``.
@@ -1219,29 +1219,29 @@ def write_hdf(self, filename, *args, cols=None, group=None, subgroup=None,
 Dictionary of user-defined attributes added at the group level
 (typically used in conjunction with a subgroup argument).
 :param mode:
-Mode h5py uses to open the output file {'a' to read/write if file
+Mode *h5py* uses to open the output file {'a' to read/write if file
 exists, create otherwise (default); 'w' to create file, truncate if
 exists; 'r+' to read/write, file must exist}.
 :param append:
 If False, the content of a pre-existing group is deleted before
 writing the `SolutionArray` in the first position. If True, the
 current `SolutionArray` object is appended to the group.
 :param compression:
-Pre-defined h5py compression filters {None, 'gzip', 'lzf', 'szip'}
+Pre-defined *h5py* compression filters {None, 'gzip', 'lzf', 'szip'}
 used for data compression.
 :param compression_opts:
-Options for the h5py compression filter; for 'gzip', this
+Options for the *h5py* compression filter; for 'gzip', this
 corresponds to the compression level {None, 0-9}.
 :return:
 Group identifier used for storing HDF data.
 Arguments ``compression`` and ``compression_opts`` are mapped to parameters
 for `h5py.create_dataset`; in both cases, the choice of ``None`` results
-in default values set by h5py.
+in default values set by *h5py*.
 Additional arguments (that is, ``*args`` and ``**kwargs``) are passed on to
 `collect_data`; see `collect_data` for further information. This method
-requires a working installation of h5py (`h5py` can be installed using
+requires a working installation of *h5py* (``h5py`` can be installed using
 pip or conda).
 """
 if isinstance(_h5py, ImportError):
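The ``mode``, ``compression``, and ``compression_opts`` choices documented above map directly onto `h5py.File` and `create_dataset` arguments. A minimal standalone sketch (hypothetical file and dataset names; assumes *h5py* is installed, no Cantera needed):

```python
import os
import tempfile

import h5py
import numpy as np

data = np.arange(12.0).reshape(3, 4)
path = os.path.join(tempfile.mkdtemp(), "states.h5")

# mode='a': read/write if the file exists, create otherwise (the default above).
with h5py.File(path, "a") as f:
    grp = f.create_group("group0")
    # compression/compression_opts pass straight to create_dataset;
    # for 'gzip', compression_opts is the compression level 0-9.
    grp.create_dataset("T", data=data, compression="gzip", compression_opts=4)

with h5py.File(path, "r") as f:
    restored = f["group0/T"][()]

assert np.array_equal(restored, data)
```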
@@ -1326,7 +1326,7 @@ def read_hdf(self, filename, group=None, subgroup=None, force=False, normalize=T
 the `SolutionArray` information.
 The method imports data using `restore_data` and requires a working
-installation of h5py (`h5py` can be installed using pip or conda).
+installation of *h5py* (``h5py`` can be installed using pip or conda).
 """
 if isinstance(_h5py, ImportError):
 raise _h5py
10 changes: 5 additions & 5 deletions interfaces/cython/cantera/ctml2yaml.py
@@ -172,11 +172,11 @@ def float2string(data: float) -> str:
 :param data: The floating point data to be formatted.
-Uses NumPy's ``format_float_positional()`` and ``format_float_scientific()`` if they
-are is available, requires NumPy >= 1.14. In that case, values with magnitude
-between 0.01 and 10000 are formatted using ``format_float_positional ()`` and other
-values are formatted using ``format_float_scientific()``. If those NumPy functions
-are not available, returns the ``repr`` of the input.
+Uses *NumPy*'s ``format_float_positional()`` and ``format_float_scientific()`` if
+they are available, requires ``numpy >= 1.14``. In that case, values with
+magnitude between 0.01 and 10000 are formatted using ``format_float_positional()``
+and other values are formatted using ``format_float_scientific()``. If those *NumPy*
+functions are not available, returns the ``repr`` of the input.
 """
 if not HAS_FMT_FLT_POS:
 return repr(data)
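The behavior described for ``float2string`` can be sketched as a standalone function, assuming a NumPy new enough to provide the two formatters (this mirrors the documented logic rather than reproducing the module verbatim):

```python
import numpy as np

def float2string(data: float) -> str:
    """Format a float as documented above: positional notation for
    magnitudes in [0.01, 10000), scientific notation otherwise."""
    if data == 0:
        return "0.0"
    elif 0.01 <= abs(data) < 10000:
        return np.format_float_positional(data, trim="0")
    else:
        return np.format_float_scientific(data, trim="0")
```

``trim="0"`` keeps one digit after the decimal point, so round values such as 100 still read unambiguously as floats.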
26 changes: 13 additions & 13 deletions interfaces/cython/cantera/onedim.py
@@ -104,11 +104,11 @@ def set_initial_guess(self, *args, data=None, group=None, **kwargs):
 :param data:
 Restart data, which are typically based on an earlier simulation
 result. Restart data may be specified using a `SolutionArray`,
-pandas' DataFrame, or previously saved CSV or HDF container files.
+`pandas.DataFrame`, or previously saved CSV or HDF container files.
 Note that restart data do not overwrite boundary conditions.
-DataFrame input requires a working installation of pandas, whereas
-HDF input requires an installation of h5py. These packages can be
-installed using pip or conda (`pandas` and `h5py`, respectively).
+DataFrame input requires a working installation of *pandas*, whereas
+HDF input requires an installation of *h5py*. These packages can be
+installed using pip or conda (``pandas`` and ``h5py``, respectively).
 :param key:
 Group identifier within an HDF container file (only used in
 combination with HDF restart data).
@@ -463,8 +463,8 @@ def to_pandas(self, species='X', normalize=True):
 Boolean flag to indicate whether the mole/mass fractions should
 be normalized (default is ``True``)
-This method uses `to_solution_array` and requires a working pandas
-installation. Use pip or conda to install `pandas` to enable this
+This method uses `to_solution_array` and requires a working *pandas*
+installation. Use pip or conda to install ``pandas`` to enable this
 method.
 """
 cols = ('extra', 'T', 'D', species)
@@ -485,7 +485,7 @@ def from_pandas(self, df, restore_boundaries=True, settings=None):
 This method is intended for loading of data that were previously
 exported by `to_pandas`. The method uses `from_solution_array` and
-requires a working pandas installation. The package 'pandas' can be
+requires a working *pandas* installation. The package ``pandas`` can be
 installed using pip or conda.
 """
 arr = SolutionArray(self.gas, extra=self.other_components())
@@ -543,16 +543,16 @@ def write_hdf(self, filename, *args, group=None, species='X', mode='a',
 Attribute to use for obtaining species profiles, e.g. ``X`` for
 mole fractions or ``Y`` for mass fractions.
 :param mode:
-Mode h5py uses to open the output file {'a' to read/write if file
+Mode *h5py* uses to open the output file {'a' to read/write if file
 exists, create otherwise (default); 'w' to create file, truncate if
 exists; 'r+' to read/write, file must exist}.
 :param description:
 Custom comment describing the dataset to be stored.
 :param compression:
-Pre-defined h5py compression filters {None, 'gzip', 'lzf', 'szip'}
+Pre-defined *h5py* compression filters {None, 'gzip', 'lzf', 'szip'}
 used for data compression.
 :param compression_opts:
-Options for the h5py compression filter; for 'gzip', this
+Options for the *h5py* compression filter; for 'gzip', this
 corresponds to the compression level {None, 0-9}.
 :param quiet:
 Suppress message confirming successful file output.
@@ -563,7 +563,7 @@ def write_hdf(self, filename, *args, group=None, species='X', mode='a',
 Additional arguments (that is, ``*args`` and ``**kwargs``) are passed on to
 `SolutionArray.collect_data`. The method exports data using
 `SolutionArray.write_hdf` via `to_solution_array` and requires a working
-installation of h5py (`h5py` can be installed using pip or conda).
+installation of *h5py* (``h5py`` can be installed using pip or conda).
 """
 cols = ('extra', 'T', 'D', species)
 meta = self.settings
@@ -603,8 +603,8 @@ def read_hdf(self, filename, group=None, restore_boundaries=True, normalize=True
 be normalized (default is ``True``)
 The method imports data using `SolutionArray.read_hdf` via
-`from_solution_array` and requires a working installation of h5py
-(`h5py` can be installed using pip or conda).
+`from_solution_array` and requires a working installation of *h5py*
+(``h5py`` can be installed using pip or conda).
 """
 if restore_boundaries:
 domains = range(3)
2 changes: 1 addition & 1 deletion interfaces/cython/cantera/utils.pyx
@@ -56,7 +56,7 @@ def appdelete():
 def use_sparse(sparse=True):
 """
 Enable sparse output using `scipy.sparse`. Sparse output requires a working
-`scipy` installation. Use pip or conda to install `scipy` to enable this method.
+*SciPy* installation. Use pip or conda to install ``scipy`` to enable this method.
 """
 global _USE_SPARSE
 if sparse and isinstance(_scipy_sparse, ImportError):
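`use_sparse` switches certain matrix outputs from dense NumPy arrays to `scipy.sparse` matrices. A standalone sketch of what sparse storage buys for mostly-zero data (the coefficient values below are hypothetical; no Cantera needed):

```python
import numpy as np
from scipy import sparse

# A mostly-zero matrix, e.g. a stoichiometric-coefficient table where
# each reaction involves only a few species (hypothetical values).
dense = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 2.0],
                  [0.0, 0.0, 0.0]])

csc = sparse.csc_matrix(dense)   # compressed sparse column storage
assert csc.nnz == 2              # only the nonzero entries are stored
assert np.array_equal(csc.toarray(), dense)
```

Sparse storage keeps only the nonzero entries, which matters for large mechanisms where each reaction touches a handful of species.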
