Graded (lava-nc#734)
* prod neuron

* trying to get prod neuron to work...

* trying to get prod neuron cpu to work...

* prod neuron process cpu backend working with unit test

* remove init file from prod_neuron

* gradedvec process and test

* working on norm vec

* fixed prod neuron license headers

* invsqrt model and tests, reconfigured to process and models

* normvecdelay and tests, timing weirdness with normvecdelay

* test for second channel of norm vec

* renamed to prodneuron.

* fixing some linting errors

* cleanup

* adding some docstring, fixing unused imports

* Fix conv python model to send() before recv() (lava-nc#751)

Co-authored-by: Gavin Parpart <[email protected]>

* Adds support for Monitor a Port to observe if it is blocked (lava-nc#755)

* Adds support for Monitor a Port to observe if it is blocked

* Fix lint issues

* Redesigned Watchdog to use Multiprocessing Manager; Invoke only 2 Event Monitors and use 2 queues for watching events; Configs are piped in via compiler now

* Incorporate Codacy Suggestions

* Fix lint comments

* Fix failing unit tests to add the watchdog builder

* Code review comments

* Set version to dev0 in pyproject.toml

* Update README.md

Updated version in install instructions.

* Update README.md (lava-nc#758)

Updated the installation branch to the most recent version.

Co-authored-by: PhilippPlank <[email protected]>

* Fix DelayDense buffer issue (lava-nc#767)

* update refport unittest to always wait when it writes to port for consistent behavior

Signed-off-by: bamsumit <[email protected]>

* Removed pyproject changes

Signed-off-by: bamsumit <[email protected]>

* Fix to convolution tests. Fixed incompatible mnist_pretrained for old Python versions.

Signed-off-by: bamsumit <[email protected]>

* Missing module parent fix

Signed-off-by: bamsumit <[email protected]>

* Added ConvVarModel

* Added iterable callback function

Signed-off-by: bamsumit <[email protected]>

* Fix codacy issues in callback_fx.py

* Fix linting in callback_fx.py

* Fix codacy sig issue in callback_fx.py

* Bugfix to pass the args by keyword

* Delay Dense PyModel fix

Signed-off-by: bamsumit <[email protected]>

* Fixed unittests

Signed-off-by: bamsumit <[email protected]>

* Fixed sparse delay

Signed-off-by: bamsumit <[email protected]>

---------

Signed-off-by: bamsumit <[email protected]>
Co-authored-by: Joyesh Mishra <[email protected]>
Co-authored-by: Marcus G K Williams <[email protected]>

* Allow np.array as input weights for Sparse (lava-nc#772)

* ndarray as input weights for Sparse

* docs

* codacy

* remove implementation details from docstring and from tests

* move tests to corresponding classes

* put weight casting into extra method

* Removed unused import

---------

Co-authored-by: Mathis Richter <[email protected]>

* Bump tornado from 6.3.2 to 6.3.3 (lava-nc#778)

Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.3.2 to 6.3.3.
- [Changelog](https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst)
- [Commits](tornadoweb/tornado@v6.3.2...v6.3.3)

---
updated-dependencies:
- dependency-name: tornado
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump cryptography from 41.0.2 to 41.0.3 (lava-nc#779)

Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.2 to 41.0.3.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](pyca/cryptography@41.0.2...41.0.3)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mathis Richter <[email protected]>

* small docstring, typing and other formatting changes

* small docstring, typing and other formatting changes

* doc strings for graded vec

* removed prodneuron tests

---------

Signed-off-by: bamsumit <[email protected]>
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Gavin Parpart <[email protected]>
Co-authored-by: Gavin Parpart <[email protected]>
Co-authored-by: Joyesh Mishra <[email protected]>
Co-authored-by: Marcus G K Williams <[email protected]>
Co-authored-by: PhilippPlank <[email protected]>
Co-authored-by: Alexander Henkes <[email protected]>
Co-authored-by: bamsumit <[email protected]>
Co-authored-by: Svea Marie Meyer <[email protected]>
Co-authored-by: Mathis Richter <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
11 people authored Oct 6, 2023
1 parent 9cbe5a0 commit 8a9861f
Showing 3 changed files with 596 additions and 0 deletions.
179 changes: 179 additions & 0 deletions src/lava/proc/graded/models.py
@@ -0,0 +1,179 @@
# Copyright (C) 2023 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
# See: https://spdx.org/licenses/

import numpy as np

from lava.magma.core.sync.protocols.loihi_protocol import LoihiProtocol
from lava.magma.core.model.py.ports import PyInPort, PyOutPort
from lava.magma.core.model.py.type import LavaPyType
from lava.magma.core.resources import CPU
from lava.magma.core.decorator import implements, requires, tag
from lava.magma.core.model.py.model import PyLoihiProcessModel

from lava.proc.graded.process import GradedVec, NormVecDelay, InvSqrt


class AbstractGradedVecModel(PyLoihiProcessModel):
    """Abstract implementation of GradedVec."""

    # Declared here; concrete types are assigned by subclasses via LavaPyType.
    a_in = None
    s_out = None
    v = None
    vth = None
    exp = None

    def run_spk(self) -> None:
        """The run function that performs the actual computation during
        execution orchestrated by a PyLoihiProcessModel using the
        LoihiProtocol.
        """
        a_in_data = self.a_in.recv()
        self.v += a_in_data

        # Spike wherever the accumulated value exceeds the threshold in
        # magnitude; the graded value itself is transmitted.
        is_spike = np.abs(self.v) > self.vth
        sp_out = self.v * is_spike

        # Reset the accumulator each timestep (no dynamics).
        self.v[:] = 0

        self.s_out.send(sp_out)


@implements(proc=GradedVec, protocol=LoihiProtocol)
@requires(CPU)
@tag('fixed_pt')
class PyGradedVecModelFixed(AbstractGradedVecModel):
    """Fixed-point implementation of GradedVec."""

    a_in = LavaPyType(PyInPort.VEC_DENSE, np.int32, precision=24)
    s_out = LavaPyType(PyOutPort.VEC_DENSE, np.int32, precision=24)
    vth: np.ndarray = LavaPyType(np.ndarray, np.int32, precision=24)
    v: np.ndarray = LavaPyType(np.ndarray, np.int32, precision=24)
    exp: np.ndarray = LavaPyType(np.ndarray, np.int32, precision=24)


@implements(proc=NormVecDelay, protocol=LoihiProtocol)
@requires(CPU)
@tag('fixed_pt')
class NormVecDelayModel(PyLoihiProcessModel):
    """Implementation of NormVecDelay. This process is typically part of
    a network for normalization.
    """

    a_in1 = LavaPyType(PyInPort.VEC_DENSE, np.int32, precision=24)
    a_in2 = LavaPyType(PyInPort.VEC_DENSE, np.int32, precision=24)
    s_out = LavaPyType(PyOutPort.VEC_DENSE, np.int32, precision=24)
    s2_out = LavaPyType(PyOutPort.VEC_DENSE, np.int32, precision=24)

    vth: np.ndarray = LavaPyType(np.ndarray, np.int32, precision=24)
    exp: np.ndarray = LavaPyType(np.ndarray, np.int32, precision=24)
    v: np.ndarray = LavaPyType(np.ndarray, np.int32, precision=24)
    v2: np.ndarray = LavaPyType(np.ndarray, np.int32, precision=24)

    def run_spk(self) -> None:
        """The run function that performs the actual computation during
        execution orchestrated by a PyLoihiProcessModel using the
        LoihiProtocol.
        """
        a_in_data1 = self.a_in1.recv()
        a_in_data2 = self.a_in2.recv()

        # Send the squared input to the InvSqrt neuron on the second channel.
        vsq = a_in_data1 ** 2
        self.s2_out.send(vsq)

        # Shift the input through a delay buffer so it can be paired with
        # the inverse square root value arriving later on a_in2.
        self.v2 = self.v
        self.v = a_in_data1

        output = self.v2 * a_in_data2

        is_spike = np.abs(output) > self.vth
        sp_out = output * is_spike

        self.s_out.send(sp_out)


@implements(proc=InvSqrt, protocol=LoihiProtocol)
@requires(CPU)
@tag('float')
class InvSqrtModelFloat(PyLoihiProcessModel):
    """Implementation of InvSqrt in floating point."""

    a_in = LavaPyType(PyInPort.VEC_DENSE, float)
    s_out = LavaPyType(PyOutPort.VEC_DENSE, float)

    fp_base: np.ndarray = LavaPyType(np.ndarray, np.int32, precision=24)

    def run_spk(self) -> None:
        """The run function that performs the actual computation during
        execution orchestrated by a PyLoihiProcessModel using the
        LoihiProtocol.
        """
        a_in_data = self.a_in.recv()
        sp_out = 1 / (a_in_data ** 0.5)

        self.s_out.send(sp_out)


def make_fpinv_table(fp_base):
    """Create the lookup table of initial estimates for the fixed-point
    inverse square root algorithm, indexed by the count of leading zeros
    of the input.
    """
    n_bits = 24
    B = 2 ** fp_base

    Y_est = np.zeros((n_bits,), dtype='int')
    n_adj = 1.238982962  # empirical adjustment constant

    for m in range(n_bits):  # span the 24 bits, negate the decimal base
        Y_est[n_bits - m - 1] = 2 * int(B / (2 ** ((m - fp_base) / 2) * n_adj))

    return Y_est


def clz(val):
    """Count the leading zeros of a 24-bit value."""
    return 24 - (int(np.log2(val)) + 1)


def inv_sqrt(s_fp, n_iters=5, b_fraction=12):
    """Run the fixed-point inverse square root algorithm.

    A lookup table provides the initial estimate from the leading zeros
    of the input; Newton-Raphson-style iterations then refine it.
    """
    Y_est = make_fpinv_table(b_fraction)
    m = clz(s_fp)
    b_i = int(s_fp)
    Y_i = Y_est[m]
    y_i = Y_i // 2

    for _ in range(n_iters):
        b_i = np.right_shift(np.right_shift(b_i * Y_i,
                                            b_fraction + 1) * Y_i,
                             b_fraction + 1)
        Y_i = np.left_shift(3, b_fraction) - b_i
        y_i = np.right_shift(y_i * Y_i, b_fraction + 1)

    return y_i


@implements(proc=InvSqrt, protocol=LoihiProtocol)
@requires(CPU)
@tag('fixed_pt')
class InvSqrtModelFP(PyLoihiProcessModel):
    """Implementation of InvSqrt in fixed point."""

    a_in = LavaPyType(PyInPort.VEC_DENSE, np.int32, precision=24)
    s_out = LavaPyType(PyOutPort.VEC_DENSE, np.int32, precision=24)
    fp_base: np.ndarray = LavaPyType(np.ndarray, np.int32, precision=24)

    def run_spk(self) -> None:
        """The run function that performs the actual computation during
        execution orchestrated by a PyLoihiProcessModel using the
        LoihiProtocol.
        """
        a_in_data = self.a_in.recv()

        # Guard against division by zero: emit zeros if any input is zero.
        if np.any(a_in_data == 0):
            sp_out = 0 * a_in_data
        else:
            sp_out = np.array([inv_sqrt(a_in_data, 5)])

        self.s_out.send(sp_out)
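
A quick numeric sanity check of the fixed-point routine above. This is a minimal sketch, assuming the helpers are importable from lava.proc.graded.models as defined in this diff, and that the result shares the input's Q-format (b_fraction fractional bits); the test values are illustrative only:

import numpy as np

from lava.proc.graded.models import inv_sqrt

b_fraction = 12  # fractional bits, matching the default fp_base

for s in (0.25, 1.0, 4.0, 100.0):
    s_fp = int(s * 2 ** b_fraction)   # encode in Q12 fixed point
    y_fp = inv_sqrt(s_fp, n_iters=5, b_fraction=b_fraction)
    approx = y_fp / 2 ** b_fraction   # decode back to float
    print(f"1/sqrt({s}) ~ {approx:.4f} (exact {1 / np.sqrt(s):.4f})")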
144 changes: 144 additions & 0 deletions src/lava/proc/graded/process.py
@@ -0,0 +1,144 @@
# Copyright (C) 2023 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
# See: https://spdx.org/licenses/

import numpy as np
import typing as ty

from lava.magma.core.process.process import AbstractProcess
from lava.magma.core.process.variable import Var
from lava.magma.core.process.ports.ports import InPort, OutPort


def loihi2round(vv):
    """Round values in a numpy array the way Loihi 2 performs
    rounding/truncation (round half away from zero).

    E.g. loihi2round(np.array([0.4, 0.5, -0.5, 1.5])) -> [0, 1, -1, 2]
    """
    return np.fix(vv + (vv > 0) - 0.5).astype('int')


class GradedVec(AbstractProcess):
    """GradedVec: graded spike vector layer.

    Transmits accumulated input as a graded spike with no dynamics.

    v[t] = a_in
    s_out = v[t] * (v[t] > vth)

    Parameters
    ----------
    shape : tuple(int)
        Number and topology of neurons.
    vth : int
        Threshold for spiking.
    exp : int
        Fixed-point base.
    """

    def __init__(
            self,
            shape: ty.Tuple[int, ...],
            vth: ty.Optional[int] = 1,
            exp: ty.Optional[int] = 0) -> None:
        super().__init__(shape=shape)

        self.a_in = InPort(shape=shape)
        self.s_out = OutPort(shape=shape)

        self.v = Var(shape=shape, init=0)
        self.vth = Var(shape=(1,), init=vth)
        self.exp = Var(shape=(1,), init=exp)

    @property
    def shape(self) -> ty.Tuple[int, ...]:
        """Return shape of the Process."""
        return self.proc_params['shape']


class NormVecDelay(AbstractProcess):
    """NormVecDelay: normalizable graded spike vector.

    Used in conjunction with the InvSqrt process to create a
    normalization layer. When configured with InvSqrt, the process
    outputs a normalized vector with graded spike values; the output
    is delayed by two timesteps.

    NormVecDelay has two input and two output channels. The process
    outputs the squared input values on the second channel to the
    InvSqrt neuron. The process waits two timesteps to receive the
    inverse square root value returned by the InvSqrt process. The
    value received on the second input channel is multiplied by the
    primary input value, and the result is output on the primary
    output channel.

    v[t] = a_in1
    s2_out = v[t] ** 2
    s_out = v[t-2] * a_in2

    Parameters
    ----------
    shape : tuple(int)
        Number and topology of neurons.
    vth : int
        Threshold for spiking.
    exp : int
        Fixed-point base.
    """

    def __init__(
            self,
            shape: ty.Tuple[int, ...],
            vth: ty.Optional[int] = 1,
            exp: ty.Optional[int] = 0) -> None:
        super().__init__(shape=shape)

        self.a_in1 = InPort(shape=shape)
        self.a_in2 = InPort(shape=shape)

        self.s_out = OutPort(shape=shape)
        self.s2_out = OutPort(shape=shape)

        self.vth = Var(shape=(1,), init=vth)
        self.exp = Var(shape=(1,), init=exp)

        self.v = Var(shape=shape, init=np.zeros(shape, 'int32'))
        self.v2 = Var(shape=shape, init=np.zeros(shape, 'int32'))

    @property
    def shape(self) -> ty.Tuple[int, ...]:
        """Return shape of the Process."""
        return self.proc_params['shape']


class InvSqrt(AbstractProcess):
    """InvSqrt: neuron model for computing the inverse square root of
    24-bit fixed-point values. Designed to be used in conjunction with
    NormVecDelay.

    v[t] = a_in
    s_out = 1 / sqrt(v[t])

    Parameters
    ----------
    shape : tuple(int)
        Number and topology of neurons.
    fp_base : int
        Base of the fixed-point representation.
    """

    def __init__(
            self,
            shape: ty.Tuple[int, ...],
            fp_base: ty.Optional[int] = 12) -> None:
        super().__init__(shape=shape)

        # Base of the decimal point.
        self.fp_base = Var(shape=(1,), init=fp_base)
        self.a_in = InPort(shape=shape)
        self.s_out = OutPort(shape=shape)

    @property
    def shape(self) -> ty.Tuple[int, ...]:
        """Return shape of the Process."""
        return self.proc_params['shape']
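
The docstrings above describe a normalization motif: NormVecDelay streams squared values out to InvSqrt and multiplies the returned inverse square root into its delayed primary input. Below is a minimal wiring sketch of that loop. It is hedged: the RingBuffer source/sink processes and the Loihi2SimCfg run configuration come from the wider Lava library rather than this diff, and the input data, shape, and step count are illustrative assumptions only:

import numpy as np

from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi2SimCfg
from lava.proc.graded.process import NormVecDelay, InvSqrt
from lava.proc.io.source import RingBuffer as SourceBuffer
from lava.proc.io.sink import RingBuffer as SinkBuffer

shape = (1,)
num_steps = 10

# Drive the primary input with a constant graded value.
in_data = np.full(shape + (num_steps,), 64, dtype=np.int32)
source = SourceBuffer(data=in_data)
sink = SinkBuffer(shape=shape, buffer=num_steps)

norm = NormVecDelay(shape=shape, vth=1)
inv = InvSqrt(shape=shape, fp_base=12)

# Feedback loop from the NormVecDelay docstring: squared values go out
# on s2_out; the inverse square root comes back on a_in2.
source.s_out.connect(norm.a_in1)
norm.s2_out.connect(inv.a_in)
inv.s_out.connect(norm.a_in2)
norm.s_out.connect(sink.a_in)

norm.run(condition=RunSteps(num_steps=num_steps),
         run_cfg=Loihi2SimCfg(select_tag='fixed_pt'))
result = sink.data.get()  # normalized output, delayed by two timesteps
norm.stop()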