Sparsify messaging #287

Draft
wants to merge 31 commits into base: main

Commits (31)
814abc9
starting SolutionReceiver
phstratmann Nov 30, 2023
313881a
transferring unit test from spikeIO
phstratmann Dec 5, 2023
597e9d5
spikeIO working via LMT
phstratmann Dec 5, 2023
3b70b6d
starting unittest for SolutionReadout
phstratmann Dec 5, 2023
2646faf
Remove NxToPy adapters from SolutionReadout
AlessandroPierro Dec 7, 2023
02def67
Implement SolutionReceiver with a single port
AlessandroPierro Dec 7, 2023
20e2675
Incremental update to spikeio readout
AlessandroPierro Dec 8, 2023
e80b96e
revising CostIntegrator
phstratmann Dec 12, 2023
341e778
ensure that run_async also stops if no solution found
phstratmann Dec 12, 2023
f34707d
passing unit tests for new CostIntegrator
phstratmann Dec 13, 2023
3808fc2
nebm_sa: first revision, untested
phstratmann Dec 14, 2023
add481c
spikeIO running in unit test, but no msgs arriving yet
phstratmann Dec 15, 2023
8b7e118
Merge remote-tracking branch 'optimization_origin/qubo_readout' into …
phstratmann Dec 15, 2023
e840f92
spikeIO msgs arriving
phstratmann Dec 15, 2023
c946c7d
unit tests passing for SolutionReceiver and SolutionReadout, Spiker32…
phstratmann Dec 21, 2023
8bcfbe9
revised nebm_sa, old unit tests pass
phstratmann Dec 22, 2023
6dc1607
Add synapses NEBM -> SpikeIntegrators
AlessandroPierro Jan 12, 2024
8a13b22
Merge branch 'main' into qubo_readout
phstratmann Jan 15, 2024
2784092
SolutionReadout functional and tested
phstratmann Jan 16, 2024
d17e085
Renamed SolutionReadout
phstratmann Jan 18, 2024
e35212d
Bug fix in SolutionReadout
AlessandroPierro Jan 19, 2024
0216983
refactor
phstratmann Jan 22, 2024
40617b8
refactor
phstratmann Jan 22, 2024
a34329f
linting
phstratmann Jan 22, 2024
53ab268
input validation and docstrings
phstratmann Jan 23, 2024
87f14e1
added timeout in SolutionReceiver
phstratmann Jan 23, 2024
b532d53
linting
phstratmann Jan 23, 2024
4bc6814
Merge branch 'main' into qubo_readout
phstratmann Jan 23, 2024
9fa1245
licenses
phstratmann Jan 24, 2024
6b7bc37
Merge remote-tracking branch 'optimization_origin/qubo_readout' into …
phstratmann Jan 24, 2024
d36c489
CostIntegrator accepts sigma input
phstratmann Feb 1, 2024
@@ -1,4 +1,4 @@
# Copyright (C) 2022 Intel Corporation
# Copyright (C) 2022-2024 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
# See: https://spdx.org/licenses/
import numpy as np
4 changes: 4 additions & 0 deletions src/lava/lib/optimization/solvers/generic/nebm/models.py
@@ -1,3 +1,7 @@
# Copyright (C) 2022-2024 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
# See: https://spdx.org/licenses/

import numpy as np

from lava.lib.optimization.solvers.generic.nebm.process import NEBM
178 changes: 178 additions & 0 deletions src/lava/lib/optimization/solvers/qubo/cost_integrator/process.py
@@ -0,0 +1,178 @@
# Copyright (C) 2022-2024 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
# See: https://spdx.org/licenses/
import typing as ty
import numpy as np

from lava.magma.core.process.ports.ports import InPort, OutPort
from lava.magma.core.process.process import AbstractProcess, LogConfig
from lava.magma.core.process.variable import Var


class CostIntegrator(AbstractProcess):
"""Node that monitors execution of the QUBOSolver. It integrates the cost
components from all variables. Whenever a new better solution is found,
it stores the new best cost and the associated timestep, while triggering
the variable neurons to store the new best state. Waits for stopping
criteria to be reached, either cost_target or timeout. Once reached,
it spikes out the best cost, timestep, and a trigger for the variable
neurons to spike out the best state.

Parameters
----------
shape : tuple(int)
The expected number and topology of the input cost components.
cost_target: int
Target cost of the QUBO solver. Once reached, the best_cost,
best_timestep, and best_state are spiked out.
cost_init: int
The initial value of the cost. Only required if mode=='iterative'.
is_iterative: str
Determines if the CostIntegrator receives the cost in a
sparsified form. If set to False, the cost_in port provides the
total cost of the current state. If set to True,
the CostIntegrator receives only the changes since the last state.
timeout: int
Timeout of the QUBO solver. Once reached, the best_cost,
best_timestep, and best_state are spiked out.
name : str, optional
Name of the Process. Default is 'Process_ID', where ID is an
integer value that is determined automatically.
log_config: Configuration options for logging.

InPorts
-------
cost_in
input from the variable neurons. Added, this input denotes
the total cost of the current variable assignment.

OutPorts
--------
control_states_out
Port to the variable neurons.
Can send either of the following three values:
1 -> store the state, since it is the new best state
2 -> store the state and spike it, since stopping criteria reached
3 -> spike the best state
best_cost_out
Port to the SolutionReadout. Sends the best cost found.
best_timestep_out
Port to the SolutionReadout. Sends the timestep when the best cost
was found.

Vars
----
timestep
Holds current timestep
cost_min_last_bytes
Current minimum cost, i.e., the lowest reported cost so far.
Saves the last 3 bytes.
cost_min = cost_min_first_byte << 24 + cost_min_last_bytes
cost_min_first_byte
Current minimum cost, i.e., the lowest reported cost so far.
Saves the first byte.
cost_last_bytes
Current cost.
Saves the last 3 bytes.
cost_min = cost_min_first_byte << 24 + cost_min_last_bytes
cost_first_byte
Current cost.
Saves the first byte.
"""

    def __init__(
        self,
        *,
        cost_target: int,
        timeout: int,
        is_iterative: bool = False,
        cost_init: ty.Optional[int] = None,
        shape: ty.Tuple[int, ...] = (1,),
        name: ty.Optional[str] = None,
        log_config: ty.Optional[LogConfig] = None,
    ) -> None:
        self._input_validation(cost_target=cost_target,
                               timeout=timeout,
                               is_iterative=is_iterative,
                               cost_init=cost_init)
        cost_init_first_byte, cost_init_last_bytes, cost_target_first_byte, \
            cost_target_last_bytes = self._transform_inputs(
                cost_init, cost_target)

        super().__init__(shape=shape,
                         cost_target_first_byte=cost_target_first_byte,
                         cost_target_last_bytes=cost_target_last_bytes,
                         is_iterative=is_iterative,
                         timeout=timeout,
                         name=name,
                         log_config=log_config)
        self.cost_in = InPort(shape=shape)
        self.control_states_out = OutPort(shape=shape)
        self.best_cost_out = OutPort(shape=shape)
        self.best_timestep_out = OutPort(shape=shape)

        # Counter for timesteps
        self.timestep = Var(shape=shape, init=0)
        # Storage for the best timestep found so far
        self.best_timestep = Var(shape=shape, init=0)

        # Vars to store the current cost
        # Note: Total cost = (cost_first_byte << 24) + cost_last_bytes
        # last 24 bit of cost
        self.cost_last_bytes = Var(shape=shape, init=cost_init_last_bytes)
        # first 8 bit of cost
        self.cost_first_byte = Var(shape=shape, init=cost_init_first_byte)

        # Vars to store the best cost found so far
        # Note: min cost = (cost_min_first_byte << 24) + cost_min_last_bytes
        # last 24 bit of cost
        self.cost_min_last_bytes = Var(shape=shape, init=0)
        # first 8 bit of cost
        self.cost_min_first_byte = Var(shape=shape, init=0)

    @staticmethod
    def _input_validation(cost_target,
                          timeout,
                          is_iterative,
                          cost_init) -> None:
        if cost_target is None or timeout is None:
            raise ValueError(
                "Both the cost_target and the timeout must be defined.")
        if cost_target > 0 or cost_target < -2 ** 31 + 1:
            raise ValueError(
                f"The target cost must be in the range [-2**31 + 1, 0], "
                f"but is {cost_target}.")
        if timeout <= 0 or timeout > 2 ** 24 - 1:
            raise ValueError(
                f"The timeout must be in the range (0, 2**24 - 1], but is "
                f"{timeout}.")
        if is_iterative and cost_init is None:
            raise ValueError(
                "For iterative cost integration, an initial cost must be "
                "provided.")

    def _transform_inputs(self, cost_init, cost_target
                          ) -> ty.Tuple[int, int, int, int]:
        if cost_init is None:
            cost_init = 0

        return *self.split_cost(cost_init), *self.split_cost(cost_target)

    @staticmethod
    def split_cost(cost_32_bit) -> ty.Tuple[int, int]:
        """Splits a 32-bit cost into an 8-bit and a 24-bit component, so that
        it can be stored and processed in ucode. The split follows
        cost = (cost_first_byte << 24) + cost_last_bytes"""

        cost = np.int32(cost_32_bit)
        cost_first_byte = cost >> 24
        cost_last_bytes = cost - (cost_first_byte << 24)

        # Ensure that the transformation is correct.
        cost_check = (cost_first_byte << 24) + cost_last_bytes
        if cost_32_bit != cost_check:
            raise ValueError(f"Transformation in CostIntegrator was "
                             f"incorrect. Most likely, the cost "
                             f"{cost_32_bit} is not a 32-bit integer.")

        return cost_first_byte, cost_last_bytes
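
As a side note on the 8/24-bit split used above: the following standalone sketch reproduces the arithmetic of split_cost outside of Lava, using an illustrative cost value of -5. It is not part of the diff; the helper name and the example value are assumptions chosen for illustration.

import numpy as np

def split_cost(cost_32_bit):
    # Same arithmetic as CostIntegrator.split_cost:
    # cost = (cost_first_byte << 24) + cost_last_bytes
    cost = np.int32(cost_32_bit)
    cost_first_byte = cost >> 24  # arithmetic shift keeps the sign
    cost_last_bytes = cost - (cost_first_byte << 24)
    return int(cost_first_byte), int(cost_last_bytes)

first, last = split_cost(-5)
assert (first << 24) + last == -5  # reconstruction is exact
print(first, last)  # -1 16777211
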
157 changes: 157 additions & 0 deletions src/lava/lib/optimization/solvers/qubo/simulated_annealing/process.py
@@ -0,0 +1,157 @@
# Copyright (C) 2022-2024 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
# See: https://spdx.org/licenses/

import numpy as np
import typing as ty
from numpy import typing as npty

from lava.magma.core.process.ports.ports import InPort, OutPort
from lava.magma.core.process.process import AbstractProcess
from lava.magma.core.process.variable import Var


class SimulatedAnnealingLocal(AbstractProcess):
"""
Non-equilibrium Boltzmann (NEBM) neuron model to solve QUBO problems.
This model uses purely information available at the level of individual
neurons to decide whether to switch or not, in contrast to the inheriting
Process NEBMSimulatedAnnealing.
"""

    def __init__(
        self,
        *,
        shape: ty.Tuple[int, ...],
        cost_diagonal: npty.ArrayLike,
        max_temperature: npty.ArrayLike,
        refract_scaling: ty.Union[npty.ArrayLike, None],
        refract_seed: int,
        init_value: npty.ArrayLike,
        init_state: npty.ArrayLike,
    ):
"""
SA Process.

Parameters
----------
shape: Tuple
Number of neurons. Default is (1,).

refract_scaling : ArrayLike
After a neuron has switched its binary variable, it remains in a
refractory state that prevents any variable switching for a
number of time steps. This number of time steps is determined by
rand(0, 255) >> refract_scaling
Refract_scaling thus denotes the order of magnitude of timesteps a
neuron remains in a state after a transition.
refract_seed : int
Random seed to initialize the refractory periods. Allows
repeatability.
init_value : ArrayLike
The spiking history with which the network is initialized
init_state : ArrayLike
The state of neurons with which the network is initialized
neuron_model : str
The neuron model to be used. The latest list of allowed values
can be found in NEBMSimulatedAnnealing.enabled_neuron_models.
"""

        super().__init__(
            shape=shape,
            cost_diagonal=cost_diagonal,
            refract_scaling=refract_scaling,
        )

        self.a_in = InPort(shape=shape)
        self.delta_temperature_in = InPort(shape=shape)
        self.control_cost_integrator = InPort(shape=shape)
        self.s_sig_out = OutPort(shape=shape)
        self.s_wta_out = OutPort(shape=shape)
        self.best_state_out = OutPort(shape=shape)

        self.spk_hist = Var(
            shape=shape, init=(np.zeros(shape=shape) + init_value).astype(int)
        )

        self.temperature = Var(shape=shape, init=int(max_temperature))

        np.random.seed(refract_seed)
        self.refract_counter = Var(
            shape=shape,
            init=0 + np.right_shift(
                np.random.randint(0, 2**8, size=shape), (refract_scaling or 0)
            ),
        )
        # Storage for the best state. Gets updated whenever a better
        # state is found.
        # Default is all zeros.
        self.best_state = Var(shape=shape,
                              init=np.zeros(shape=shape, dtype=int))
        # Initial state determined in DiscreteVariables
        self.state = Var(
            shape=shape,
            init=init_state.astype(int)
            if init_state is not None
            else np.zeros(shape=shape, dtype=int),
        )

    @property
    def shape(self) -> ty.Tuple[int, ...]:
        return self.proc_params["shape"]


class SimulatedAnnealing(SimulatedAnnealingLocal):
"""
Non-equilibrium Boltzmann (NEBM) neuron model to solve QUBO problems.
This model combines the switching intentions of all NEBM neurons to
decide whether to switch or not, to avoid conflicting variable switches.
"""

    def __init__(
        self,
        *,
        shape: ty.Tuple[int, ...],
        cost_diagonal: npty.ArrayLike,
        max_temperature: npty.ArrayLike,
        init_value: npty.ArrayLike,
        init_state: npty.ArrayLike,
    ):
"""
SA Process.

Parameters
----------
shape: Tuple
Number of neurons. Default is (1,).

refract_scaling : ArrayLike
After a neuron has switched its binary variable, it remains in a
refractory state that prevents any variable switching for a
number of time steps. This number of time steps is determined by
rand(0, 255) >> refract_scaling
Refract_scaling thus denotes the order of magnitude of timesteps a
neuron remains in a state after a transition.
init_value : ArrayLike
The spiking history with which the network is initialized
init_state : ArrayLike
The state of neurons with which the network is initialized
neuron_model : str
The neuron model to be used. The latest list of allowed values
can be found in NEBMSimulatedAnnealing.enabled_neuron_models.
"""

        super().__init__(
            shape=shape,
            cost_diagonal=cost_diagonal,
            max_temperature=max_temperature,
            refract_scaling=None,
            refract_seed=0,
            init_value=init_value,
            init_state=init_state,
        )

        # number of NEBM neurons that suggest switching in a time step
        self.n_switches_in = InPort(shape=shape)
        # port to notify other NEBM neurons of switching intentions
        self.suggest_switch_out = OutPort(shape=shape)
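
For reference, a brief standalone sketch of how SimulatedAnnealingLocal seeds its refractory counters: an 8-bit random draw is right-shifted by refract_scaling, so larger refract_scaling values give shorter refractory periods. The shape, refract_scaling, and refract_seed values below are illustrative assumptions, not taken from the PR.

import numpy as np

# Illustrative values (assumptions for this sketch only)
shape = (4,)
refract_scaling = 3
refract_seed = 0

np.random.seed(refract_seed)
# Same expression as in SimulatedAnnealingLocal.__init__:
# rand(0, 255) >> refract_scaling
refract_counter = np.right_shift(
    np.random.randint(0, 2**8, size=shape), refract_scaling or 0
)
print(refract_counter)  # four values in [0, 31], since 255 >> 3 == 31
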