Describe the bug
Please review and confirm whether the bug described below is legitimate.
When using lava.proc.io.dataloader.SpikeDataloader with lava.proc.dense.process.LearningDense:
1] `__getitem__` of the dataset object is called twice, at every SpikeDataloader reset-interval timestep
2] the post_guard method of the SpikeDataloader ProcessModel runs twice and returns True twice
3] the run_post_mgmt method of the SpikeDataloader ProcessModel runs twice, because post_guard runs twice
4] the data-sample fetch is not linear: the network is trained on the odd indexes of the dataset first, then the even ones
There is NO such issue when using lava.proc.io.dataloader.SpikeDataloader with lava.proc.dense.process.Dense.
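The odd-then-even order in 4] is consistent with the double `__getitem__` call in 1]: if each reset interval triggers two fetches and only the second fetched sample stays in the output buffer, every other index is skipped and later picked up on wrap-around. A minimal pure-Python sketch of this hypothesized mechanism (an illustration only, not the actual Lava implementation):

```python
def fetch_sequence(n_samples, n_intervals, calls_per_interval=2):
    """Return the sample indices that would end up injected into the
    network if __getitem__ advances the index on every call but only
    the last fetch per interval survives in the buffer."""
    sample_id = 0
    injected = []
    for _ in range(n_intervals):
        fetched = None
        for _ in range(calls_per_interval):
            fetched = sample_id            # stands in for dataset.__getitem__(sample_id)
            sample_id = (sample_id + 1) % n_samples
        injected.append(fetched)           # only the last fetch survives
    return injected

print(fetch_sequence(n_samples=5, n_intervals=5))  # → [1, 3, 0, 2, 4]
```

With one call per interval the sequence is the expected linear `[0, 1, 2, 3, 4]`; with two calls it becomes odd indexes first, then even, matching the observed training order.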
To reproduce current behavior
Steps to reproduce the behavior:
When I run the Python code below...
```python
import os
import math
import time
import typing as ty

import numpy as np

from misc.my_dataset import RandomDataset
from misc.my_dataloader import SpikeDataloader
from lava.proc.lif.process import LIFReset, LearningLIF
from lava.proc.dense.process import Dense, LearningDense
from misc.hyper_params import model_name, n_neurons, stdp_params, lif1_params

results_path = "./results/"
if not os.path.isdir(results_path + model_name):
    os.mkdir(results_path + model_name)

from lava.proc.learning_rules.stdp_learning_rule import STDPLoihi

stdp = STDPLoihi(learning_rate=stdp_params["learning_rate"],
                 A_plus=stdp_params["A_plus"],
                 A_minus=stdp_params["A_minus"],
                 tau_plus=stdp_params["tau_plus"],
                 tau_minus=stdp_params["tau_minus"],
                 t_epoch=stdp_params["t_epoch"])

n_samples = 5
T_per_sample = 20
sim_time = n_samples * T_per_sample
dataset = RandomDataset(n_samples=n_samples, n_dim=n_neurons["input"],
                        spike_len=T_per_sample)
print(f"[TRAIN] Samples: {n_samples}, T: {T_per_sample}, Sim-Time: {sim_time}")

np.random.seed(22)
init_weights = np.random.rand(n_neurons["hidden"], n_neurons["input"])
print(f"[BEFORE training] weights.min(): {init_weights.min()}, "
      f"weights.max(): {init_weights.max()}, weights.sum(): {init_weights.sum()}")

# Instantiate SpikeGenerator
spike_gen = SpikeDataloader(dataset=dataset, interval=T_per_sample, offset=0)
plastic_dense = LearningDense(weights=init_weights, learning_rule=stdp,
                              name='plastic_dense')
lif1 = LIFReset(shape=(n_neurons["hidden"], ),  # Number of units in this process
                vth=lif1_params["vth"],  # Membrane threshold; higher threshold means fewer spikes
                dv=lif1_params["dv"],    # Inverse membrane time-constant; smaller value means smaller decay
                du=lif1_params["du"],    # Inverse synaptic time-constant; smaller value means smaller decay
                reset_interval=T_per_sample,
                reset_offset=3,
                name='lif1')

print("\nHyperparameters: ", model_name, n_neurons, stdp_params, lif1_params)

# Connect spike_gen to dense_input
spike_gen.s_out.connect(plastic_dense.s_in)
# Connect dense_input to LIF1 population
plastic_dense.a_out.connect(lif1.a_in)
lif1.s_out.connect(plastic_dense.s_in_bap)

from lava.magma.core.run_conditions import RunSteps, RunContinuous
from lava.magma.core.run_configs import Loihi1SimCfg, Loihi2SimCfg

tick = time.time()
spike_gen.run(condition=RunSteps(num_steps=sim_time),
              run_cfg=Loihi1SimCfg(select_tag="floating_pt"))
tock = time.time()
print("\nRUN time:", (tock - tick) / 60, " min")

lif1_weights = plastic_dense.weights.get()
spike_gen.stop()
print(f"[AFTER training] weights.min(): {lif1_weights.min()}, "
      f"weights.max(): {lif1_weights.max()}, weights.sum(): {lif1_weights.sum()}")
```
I get the output below, where at every reset timestep the post_guard method of SpikeDataloader is executed twice. As a result, SpikeDataloader loads data in the wrong sequence (if the dataset iterable has 5 samples, the odd indexes are fetched and injected into the network first, then the even ones). The print statements below were obtained by adding print statements to the ProcessModel of SpikeDataloader.
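An alternative way to confirm the double fetch without editing the installed ProcessModel is to wrap the dataset itself and record every `__getitem__` call. `LoggingDataset` is a hypothetical helper written for this report, not part of Lava; it can wrap any indexable dataset before passing it to SpikeDataloader:

```python
class LoggingDataset:
    """Wraps any indexable dataset and records every __getitem__ call."""

    def __init__(self, dataset):
        self.dataset = dataset
        self.calls = []  # index of every __getitem__ call, in order

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        self.calls.append(idx)
        return self.dataset[idx]


# Stand-in for a real spike dataset; any indexable object works.
base = ["sample-%d" % i for i in range(5)]
ds = LoggingDataset(base)
_ = ds[0]
_ = ds[1]
print(ds.calls)  # duplicated or out-of-order indices reveal the faulty fetch pattern
```

After a run, `ds.calls` shows exactly which indices were fetched and how often, independent of any prints inside the ProcessModel.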
Afterwards I used lava.proc.io.source.RingBuffer to avoid the wrong data-fetch sequence of the SpikeDataloader ProcessModel. Along with that, I wrote a custom Process and ProcessModel (WeightSnapshot) to access the weight matrix of LearningDense (via a RefPort). This custom process with a RefPort also has post_guard and run_post_mgmt methods, and the double execution of post_guard and run_post_mgmt still persists there.
There is NO such issue when the custom process with a RefPort is used with lava.proc.dense.process.Dense. The bug therefore seems specific to LearningDense (it appears whenever LearningDense is present in the process graph).
Expected behavior
When LearningDense is used in the network process graph, all processes with post_guard and run_post_mgmt run twice. This should not happen. It also breaks the expected behaviour of SpikeDataloader: data samples should be loaded in a linear sequence, and post_guard should run only once per reset interval.
Environment (please complete the following information):
Device: Laptop
OS: Ubuntu 20.04 LTS, Python 3.8.10
Lava 0.6.0 (installed from lava_nc-0.6.0.tar.gz)
Additional context
I have discussed this bug with Sumedh and Sumit (Intel NCL Team).