
Commit

add "max" as valid optimize_optimal target
jcmgray committed Sep 30, 2024
1 parent 70628cd commit beac8f6
Showing 2 changed files with 50 additions and 19 deletions.
63 changes: 44 additions & 19 deletions cotengra/pathfinders/path_basic.py
@@ -133,6 +133,28 @@ def compute_con_cost_flops(
return iscore + jscore + cost


def compute_con_cost_max(
temp_legs,
appearances,
sizes,
iscore,
jscore,
):
"""Compute the max flops cost of a contraction given by temporary legs,
also removing any contracted indices from the temporary legs.
"""
cost = 1
for i in range(len(temp_legs) - 1, -1, -1):
ix, ix_count = temp_legs[i]
d = sizes[ix]
cost *= d
if ix_count == appearances[ix]:
# contracted index, remove
del temp_legs[i]

return max((iscore, jscore, cost))


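For reference, here is a self-contained run of the new cost function. The diff above omits its call sites, so the inputs below are illustrative assumptions, not values from the commit:

```python
def compute_con_cost_max(temp_legs, appearances, sizes, iscore, jscore):
    # same logic as the function added in the diff above
    cost = 1
    for i in range(len(temp_legs) - 1, -1, -1):
        ix, ix_count = temp_legs[i]
        d = sizes[ix]
        cost *= d
        if ix_count == appearances[ix]:
            # contracted index, remove
            del temp_legs[i]
    return max((iscore, jscore, cost))

# hypothetical inputs: the merged legs of two tensors sharing index "b"
appearances = {"a": 2, "b": 2, "c": 2}      # total occurrences of each index
sizes = {"a": 2, "b": 3, "c": 4}            # index dimensions
temp_legs = [("a", 1), ("b", 2), ("c", 1)]  # "b" has reached its full count
cost = compute_con_cost_max(temp_legs, appearances, sizes, 0, 0)
# cost is 2 * 3 * 4 = 24, and "b" has been removed from temp_legs
```

Note the cost of a single step is the product of the dimensions of *all* legs involved, and the running score is the maximum over steps, rather than the sum used by `compute_con_cost_flops`.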
def compute_con_cost_size(
temp_legs,
appearances,
@@ -247,6 +269,7 @@ def parse_minimize_for_optimal(minimize):
contraction. The string can be one of the following:
- "flops": compute_con_cost_flops
- "max": compute_con_cost_max
- "size": compute_con_cost_size
- "write": compute_con_cost_write
- "combo": compute_con_cost_combo
@@ -260,6 +283,8 @@ def parse_minimize_for_optimal(minimize):

if minimize == "flops":
return compute_con_cost_flops
elif minimize == "max":
return compute_con_cost_max
elif minimize == "size":
return compute_con_cost_size
elif minimize == "write":
@@ -1082,20 +1107,20 @@ def optimize_optimal(
size_dict : dict[str, int]
A dictionary mapping indices to their dimension.
minimize : str, optional
The cost function to minimize. The options are:
- "flops": minimize with respect to total operation count only
  (also known as contraction cost)
- "size": minimize with respect to maximum intermediate size only
  (also known as contraction width)
- "max": minimize the single most expensive contraction, i.e. the
  asymptotic (in index size) scaling of the contraction
- "write": minimize the sum of all tensor sizes, i.e. memory written
- "combo" or "combo={factor}": minimize the sum of
  FLOPS + factor * WRITE, with a default factor of 64.
- "limit" or "limit={factor}": minimize the sum of
  MAX(FLOPS, factor * WRITE) for each individual contraction, with a
  default factor of 64.
"combo" is generally a good default in terms of practical hardware
performance, where both memory bandwidth and compute are limited.
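The "combo" and "limit" scores described above can be sketched directly. This is not cotengra's internal code, just the two formulas from the docstring applied to a single hypothetical contraction:

```python
def combo_score(flops, write, factor=64):
    # "combo": weighted sum of operation count and memory written
    return flops + factor * write

def limit_score(flops, write, factor=64):
    # "limit": per-contraction max of the two weighted quantities
    return max(flops, factor * write)

# e.g. a (100, 100) @ (100, 100) matmul: ~100**3 multiplications,
# writing a 100 * 100 output tensor
flops = 100**3
write = 100**2
combo = combo_score(flops, write)  # 1_000_000 + 64 * 10_000 = 1_640_000
limit = limit_score(flops, write)  # max(1_000_000, 640_000) = 1_000_000
```

For a full path, "limit" sums this per-contraction maximum over every step, so it penalizes whichever resource (compute or bandwidth) dominates each individual contraction.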
@@ -1110,11 +1135,11 @@ def optimize_optimal(
simplify : bool, optional
Whether to perform simplifications before optimizing. These are:
- ignore any indices that appear in all terms
- combine any repeated indices within a single term
- reduce any non-output indices that only appear on a single term
- combine any scalar terms
- combine any tensors with matching indices (hadamard products)
Such simplifications may be required in the general case for the proper
functioning of the core optimization, but may be skipped if the input
6 changes: 6 additions & 0 deletions docs/changelog.md
@@ -6,10 +6,16 @@

- Add [`cmaes`](https://github.com/CyberAgentAILab/cmaes) as an `optlib` method, use it by default for the `'auto'` preset if available, since it has less overhead than `optuna`.
- Add [`HyperOptimizer.plot_parameters_parallel`](cotengra.plot.plot_parameters_parallel) for plotting the sampled parameter space of a hyper optimizer method in parallel coordinates.
- Add [`ncon`](cotengra.ncon) interface.
- Add [`utils.save_to_json`](cotengra.utils.save_to_json) and [`utils.load_from_json`](cotengra.utils.load_from_json) for saving and loading contractions to/from json.
- Add `examples/benchmarks` with various json benchmark contractions.
- Add [`utils.networkx_graph_to_equation`](cotengra.utils.networkx_graph_to_equation) for converting a networkx graph to cotengra style `inputs`, `output` and `size_dict`.
- Add `"max"` as a valid `minimize` option for `optimize_optimal` (also added to `cotengrust`), which minimizes the single most expensive contraction (i.e. the cost *scaling*).
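As a rough illustration of what the `"max"` objective measures (hypothetical numbers, not code from the library): with every index of dimension `d`, the score reduces to the flop count of the single largest step, whose exponent gives the asymptotic scaling.

```python
from math import prod

d = 10
sizes = {ix: d for ix in "abcde"}
# hypothetical per-step index sets along some contraction path
steps = [{"a", "b", "c"}, {"a", "c", "d", "e"}]
step_flops = [prod(sizes[ix] for ix in s) for s in steps]
max_cost = max(step_flops)  # dominated by the 4-index step: d**4
```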

**Bug fixes**

- Fix [`HyperGraph.plot`](cotengra.plot.plot_hypergraph) when nodes are not labelled as consecutive integers ({issue}`36`)
- Fix [`ContractionTreeCompressed.windowed_reconfigure`] not propagating the default objective


## v0.6.2 (2024-05-21)
