Save Model Inputs, Model Outputs, Gradients, Custom Tensors, Layer Inputs, Layer Outputs #282

Merged: 155 commits, Jul 28, 2020

Changes from all commits (155 commits)
All commits are by NihalHarish.

be4f48a  save outputs (May 29, 2020)
d32d017  assert updates (Jun 3, 2020)
8e95f12  update assert (Jun 3, 2020)
48f45d6  cleanup (Jun 3, 2020)
55f10d4  as_dtype: (Jun 3, 2020)
ec82021  model outputs are now constants (Jun 4, 2020)
666bcd4  update to test (Jun 4, 2020)
d867a9b  update import statement (Jun 4, 2020)
5fd3a74  tmp (Jun 5, 2020)
11c20c6  Revert "tmp" (Jun 8, 2020)
7f260e5  str_to_mode (Jun 8, 2020)
345f785  add tensor (Jun 8, 2020)
beaa68d  add tensor (Jun 8, 2020)
ab3d5c1  add dist tensor: (Jun 8, 2020)
61372e8  add tensor (Jun 8, 2020)
46c5e0f  for-loop (Jun 8, 2020)
650fd6a  fix append (Jun 8, 2020)
42fdc3a  fix assert (Jun 8, 2020)
16b38d1  add (Jun 8, 2020)
9e1d2c5  model output (Jun 8, 2020)
14d911b  rename (Jun 8, 2020)
20d0413  add to all collections (Jun 9, 2020)
d46ebb6  revert (Jun 9, 2020)
960d383  add to all (Jun 9, 2020)
67f4efc  helper fn (Jun 9, 2020)
2df341e  helper fn (Jun 9, 2020)
94765d2  extend returns none (Jun 9, 2020)
9eff79b  ypred (Jun 9, 2020)
61d94e1  ypred (Jun 9, 2020)
07d72d3  change assert (Jun 9, 2020)
d8a8ea9  init (Jun 10, 2020)
f7ead88  do not match in metric (Jun 10, 2020)
6e24ca8  update (Jun 10, 2020)
cda4e3e  inputs (Jun 10, 2020)
9b59d0d  id (Jun 10, 2020)
9e5606e  save outputs (May 29, 2020)
11ddcdd  assert updates (Jun 3, 2020)
34d2294  update assert (Jun 3, 2020)
f87ce01  cleanup (Jun 3, 2020)
bbb0dc6  as_dtype: (Jun 3, 2020)
82f0531  model outputs are now constants (Jun 4, 2020)
4663370  update to test (Jun 4, 2020)
c64a7a1  update import statement (Jun 4, 2020)
15c1d61  tmp (Jun 5, 2020)
be6186f  Revert "tmp" (Jun 8, 2020)
ae8f96b  str_to_mode (Jun 8, 2020)
30bd425  add tensor (Jun 8, 2020)
1e7aa1b  add tensor (Jun 8, 2020)
85ea95a  add dist tensor: (Jun 8, 2020)
95b8bcc  add tensor (Jun 8, 2020)
07fd399  for-loop (Jun 8, 2020)
7151978  fix append (Jun 8, 2020)
72a7256  fix assert (Jun 8, 2020)
046d165  add (Jun 8, 2020)
070cd6f  model output (Jun 8, 2020)
8af4ce8  rename (Jun 8, 2020)
1761ca2  add to all collections (Jun 9, 2020)
6b581bf  revert (Jun 9, 2020)
6b14ee7  add to all (Jun 9, 2020)
5c89dff  helper fn (Jun 9, 2020)
cc13566  helper fn (Jun 9, 2020)
d07dd47  extend returns none (Jun 9, 2020)
766902a  ypred (Jun 9, 2020)
4e1b802  ypred (Jun 9, 2020)
5782846  change assert (Jun 9, 2020)
f745186  Merge branch 'y_pred' of https://github.com/awslabs/sagemaker-debugge… (Jun 13, 2020)
07c6e75  init (Jun 10, 2020)
0d8c6cb  do not match in metric (Jun 10, 2020)
ae526c0  update (Jun 10, 2020)
bf82f9c  inputs (Jun 10, 2020)
101fcb2  id (Jun 10, 2020)
cdaf7f8  Merge branch 'save_model_inputs' of https://github.com/awslabs/sagema… (Jun 13, 2020)
bc84269  test (Jun 13, 2020)
5091415  fuse model inputs and outputs (Jun 14, 2020)
13ce988  set fix (Jun 15, 2020)
460e0e0  add tests (Jun 15, 2020)
c20cc75  update test (Jun 15, 2020)
5766aa2  eager mode (Jun 15, 2020)
0428d62  update tests (Jun 15, 2020)
54ad7a5  rename fn (Jun 15, 2020)
40ded77  remove unused imports (Jun 15, 2020)
9ead6fa  save custom tensor fn (Jun 16, 2020)
c9a6198  test_ (Jun 16, 2020)
7c7fbb3  revert tests (Jun 16, 2020)
ab8d103  save custom tensor fn (Jun 16, 2020)
63babf7  test_ (Jun 16, 2020)
9633e2e  save custom tensor (Jun 16, 2020)
a997bfa  save custom tensor (Jun 16, 2020)
1376045  init (Jun 17, 2020)
05b28c5  save gradients (Jun 17, 2020)
9ae86df  ignore smdebug metrics (Jun 17, 2020)
c8a0844  update assert (Jun 17, 2020)
3db6856  gradients (Jun 17, 2020)
32affd2  save inputs (Jun 19, 2020)
582cd6e  merge master (Jun 26, 2020)
ccde310  checks (Jun 26, 2020)
4e14182  change assert (Jun 26, 2020)
a68dc3e  check if collection should be saved (Jun 26, 2020)
712f94b  set (Jun 26, 2020)
cdb0882  revert assert (Jun 26, 2020)
c692d8f  revert assert (Jun 26, 2020)
cac439d  save inputs (Jun 26, 2020)
cd36430  change regex (Jun 26, 2020)
60d671b  modify tests (Jun 26, 2020)
73b5362  collection (Jun 26, 2020)
abdc64b  save fn (Jun 26, 2020)
027b022  move test (Jun 26, 2020)
6c5e4c9  run only for tf2 (Jun 26, 2020)
29e1319  mark skip (Jun 26, 2020)
9e9092b  fn rename (Jun 26, 2020)
e97de64  rename fn (Jun 26, 2020)
cec3e09  correct boolean logic (Jun 26, 2020)
90a8f23  fix input output logic (Jun 26, 2020)
06ebf84  comments (Jun 26, 2020)
15851de  grad tape example (Jun 26, 2020)
41ca695  save layers (Jun 26, 2020)
af1e411  rename (Jun 26, 2020)
8cdd13e  change boolean logic (Jun 26, 2020)
03e4f18  bug fix (Jun 26, 2020)
2660a76  retrigger CI (Jun 29, 2020)
fccf7e8  fix flag (Jul 2, 2020)
f221f74  duplicate set (Jul 3, 2020)
480db00  pred (Jul 7, 2020)
c0817b9  nit (Jul 8, 2020)
cb79e19  Merge remote-tracking branch 'origin' into save_inputs (Jul 8, 2020)
80a65c7  update (Jul 10, 2020)
e7cb92a  rename default collection (Jul 15, 2020)
39b65df  model inputs (Jul 15, 2020)
ca68f77  lint (Jul 15, 2020)
281011d  update tests (Jul 15, 2020)
74de9c9  modify assert (Jul 15, 2020)
6dd95d7  Merge remote-tracking branch 'origin' into save_inputs (Jul 15, 2020)
9abe494  modify assert (Jul 16, 2020)
33c21c0  save Layers (Jul 16, 2020)
7bd87c8  clear saved collections after saving (Jul 17, 2020)
651d6ea  refactor (Jul 17, 2020)
1aaabe7  nit (Jul 17, 2020)
6d3b733  pr comments (Jul 22, 2020)
0f08773  save tensor api (Jul 22, 2020)
3015cae  revert typo (Jul 22, 2020)
cca7fea  save custom tensors (Jul 22, 2020)
bbf1bf6  pr comments (Jul 22, 2020)
a32a8d4  len (Jul 22, 2020)
259414a  default (Jul 23, 2020)
b1ad7a0  save smdebug logs (Jul 23, 2020)
d3b54c3  comments (Jul 23, 2020)
fb548a9  update (Jul 23, 2020)
7ca2942  constants (Jul 27, 2020)
2df55e0  Implement Save Tensor For Mxnet and Pytorch (#291) (Jul 28, 2020)
067e724  parameterize test keras fit (Jul 28, 2020)
49550e8  tf eager (Jul 28, 2020)
cb44a7d  nit (Jul 28, 2020)
b67fa45  nit and remove duped fn (Jul 28, 2020)
075b2a0  refactor (Jul 28, 2020)
1a1838e  retrigger CI (Jul 28, 2020)
examples/tensorflow2/scripts/tf2_save_metrics.py (new file: 209 additions, 0 deletions)
@@ -0,0 +1,209 @@
"""
This file is temporary, for testing with 2.X.
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

so these files will be deleted just before merge?

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

These files are currently run on the AWS TF test pipeline.
They will be either modified or deleted after merging.

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

create a issue to followup for the testing after PR is approved.

We'll need to integrate a more robust testing pipeline and make this part of pytest
before pushing to master.

This was tested with TensorFlow 2.1, by running
`python tests/tensorflow2/test_keras.py` from the main directory.
"""
# Standard Library
import shutil

# Third Party
import pytest
import tensorflow.compat.v2 as tf

# First Party
import smdebug.tensorflow as smd
from smdebug.core.collection import CollectionKeys
from smdebug.tensorflow import SaveConfig


@pytest.fixture(scope="function")
def out_dir():
    """Use this method to construct an out_dir.

    Then it will be automatically cleaned up for you, passed into the test method,
    and we'll have fewer folders lying around.
    """
    out_dir = "/tmp/test"
    shutil.rmtree(out_dir, ignore_errors=True)
    return out_dir


def helper_keras_fit(
    trial_dir,
    save_all=False,
    include_collections=None,
    reduction_config=None,
    save_config=None,
    hook=None,
    steps=None,
    add_callbacks=None,
    run_eagerly=False,
):
    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255, x_test / 255

    model = tf.keras.models.Sequential(
        [
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(10, activation="softmax"),
        ]
    )

    if hook is None:
        if save_config is None:
            save_config = SaveConfig(save_interval=3)

        hook = smd.KerasHook(
            trial_dir,
            save_config=save_config,
            save_all=save_all,
            include_collections=include_collections,
            reduction_config=reduction_config,
        )

        if not save_all and include_collections is not None:
            for cname in hook.include_collections:
                if cname not in include_collections:
                    hook.get_collection(cname).save_config = SaveConfig(end_step=0)

    opt = tf.keras.optimizers.Adam()

    opt = hook.wrap_optimizer(opt)
    model.compile(
        optimizer=opt,
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
        run_eagerly=run_eagerly,
    )
    hooks = []
    if add_callbacks:
        if "tensorboard" in add_callbacks:
            hooks.append(
                tf.keras.callbacks.TensorBoard(
                    log_dir="/tmp/logs", histogram_freq=1, write_grads=True, write_images=True
                )
            )
    hooks.append(hook)

    if steps is None:
        steps = ["train"]
    for step in steps:
        if step == "train":
            model.fit(x_train, y_train, epochs=1, steps_per_epoch=10, callbacks=hooks, verbose=0)
        elif step == "eval":
            model.evaluate(x_test, y_test, steps=10, callbacks=hooks, verbose=0)
        elif step == "predict":
            model.predict(x_test[:100], callbacks=hooks, verbose=0)

    hook.close()


def test_keras_fit_eager(out_dir, tf_eager_mode=True):
    test_include_collections = [
        CollectionKeys.LOSSES,
        CollectionKeys.METRICS,
        CollectionKeys.WEIGHTS,
        CollectionKeys.BIASES,
        CollectionKeys.GRADIENTS,
        CollectionKeys.INPUTS,
        CollectionKeys.OUTPUTS,
        CollectionKeys.LAYERS,
        CollectionKeys.OPTIMIZER_VARIABLES,
    ]
    hook = smd.KerasHook(out_dir=out_dir, include_collections=test_include_collections)
    helper_keras_fit(
        include_collections=test_include_collections,
        trial_dir=out_dir,
        hook=hook,
        run_eagerly=tf_eager_mode,
        steps=["train", "eval", "predict", "train"],
    )
    trial = smd.create_trial(path=out_dir)

    # We first assert that none of the collections we requested are empty
    assert len(trial.tensor_names(collection=CollectionKeys.LOSSES)) == 1
    assert len(trial.tensor_names(collection=CollectionKeys.METRICS)) == 2
    assert len(trial.tensor_names(collection=CollectionKeys.WEIGHTS)) == 2
    assert len(trial.tensor_names(collection=CollectionKeys.BIASES)) == 2
    assert len(trial.tensor_names(collection=CollectionKeys.GRADIENTS)) == 4
    assert len(trial.tensor_names(collection=CollectionKeys.INPUTS)) == 1  # 1 model input
    assert len(trial.tensor_names(collection=CollectionKeys.OUTPUTS)) == 2  # 2 model outputs
    assert len(trial.tensor_names(collection=CollectionKeys.OPTIMIZER_VARIABLES)) == 5

    # We assert that all the tensors saved have a valid value
    for tname in trial.tensor_names():
        assert trial.tensor(tname).value(0) is not None

    # We then analyse layer inputs and layer outputs:
    # check that the output of each layer is equal to the input of the next
    boolean_matrix = trial.tensor("flatten/outputs").value(0) == trial.tensor("dense/inputs").value(0)
    assert boolean_matrix.all()
    boolean_matrix = trial.tensor("dense/outputs").value(0) == trial.tensor("dropout/inputs").value(0)
    assert boolean_matrix.all()
    boolean_matrix = trial.tensor("dropout/outputs").value(0) == trial.tensor("dense_1/inputs").value(0)
    assert boolean_matrix.all()


def test_keras_fit_false(out_dir, tf_eager_mode=False):
    test_include_collections = [
        CollectionKeys.LOSSES,
        CollectionKeys.METRICS,
        CollectionKeys.WEIGHTS,
        CollectionKeys.BIASES,
        CollectionKeys.GRADIENTS,
        CollectionKeys.INPUTS,
        CollectionKeys.OUTPUTS,
        CollectionKeys.LAYERS,
        CollectionKeys.OPTIMIZER_VARIABLES,
    ]
    hook = smd.KerasHook(out_dir=out_dir, include_collections=test_include_collections)
    helper_keras_fit(
        include_collections=test_include_collections,
        trial_dir=out_dir,
        hook=hook,
        run_eagerly=tf_eager_mode,
        steps=["train", "eval", "predict", "train"],
    )
    trial = smd.create_trial(path=out_dir)

    # We first assert that none of the collections we requested are empty
    assert len(trial.tensor_names(collection=CollectionKeys.LOSSES)) == 1
    assert len(trial.tensor_names(collection=CollectionKeys.METRICS)) == 2
    assert len(trial.tensor_names(collection=CollectionKeys.WEIGHTS)) == 2
    assert len(trial.tensor_names(collection=CollectionKeys.BIASES)) == 2
    assert len(trial.tensor_names(collection=CollectionKeys.GRADIENTS)) == 4
    assert len(trial.tensor_names(collection=CollectionKeys.INPUTS)) == 1  # 1 model input
    assert len(trial.tensor_names(collection=CollectionKeys.OUTPUTS)) == 2  # 2 model outputs
    assert len(trial.tensor_names(collection=CollectionKeys.OPTIMIZER_VARIABLES)) == 5

    # We assert that all the tensors saved have a valid value
    for tname in trial.tensor_names():
        assert trial.tensor(tname).value(0) is not None

    # We then analyse layer inputs and layer outputs:
    # check that the output of each layer is equal to the input of the next
    boolean_matrix = trial.tensor("flatten_1/outputs").value(0) == trial.tensor("dense_2/inputs").value(0)
    assert boolean_matrix.all()
    boolean_matrix = trial.tensor("dense_2/outputs").value(0) == trial.tensor("dropout_1/inputs").value(0)
    assert boolean_matrix.all()
    boolean_matrix = trial.tensor("dropout_1/outputs").value(0) == trial.tensor("dense_3/inputs").value(0)
    assert boolean_matrix.all()
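
Both tests above assert on the INPUTS and OUTPUTS collections this PR starts populating. A minimal sketch of inspecting them after a run, assuming the /tmp/test trial directory the out_dir fixture creates; only APIs already used in the tests above appear here:

import smdebug.tensorflow as smd
from smdebug.core.collection import CollectionKeys

# Assumes /tmp/test was written by one of the tests above (placeholder path).
trial = smd.create_trial(path="/tmp/test")

# Per the asserts above: one model-input tensor and two model-output tensors.
print(trial.tensor_names(collection=CollectionKeys.INPUTS))
print(trial.tensor_names(collection=CollectionKeys.OUTPUTS))

# Values at a saved step come back as numpy arrays.
for tname in trial.tensor_names(collection=CollectionKeys.INPUTS):
    print(tname, trial.tensor(tname).value(0).shape)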
examples/tensorflow2/scripts/tf_keras_gradienttape.py (3 additions, 1 deletion)
@@ -65,7 +65,9 @@ def train(batch_size, n_epochs, model, hook):
             optimizer.apply_gradients(zip(grads, model.trainable_variables))
             acc = train_acc_metric(dataset_labels, logits)
             # save metrics value
-            hook.record_tensor_value(tensor_name="accuracy", tensor_value=acc)
+            hook.save_tensor(
+                tensor_name="accuracy", tensor_value=acc, collections_to_write="metrics"
+            )
             values = [("Accuracy", train_acc_metric.result())]
             progBar.update(idx * batch_size, values=values)
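
The hunk above swaps the old `record_tensor_value` call for the `save_tensor` API this PR introduces. A minimal sketch of the call outside a full training loop, assuming a hook configured as in the scripts in this PR (the /tmp/demo path and the constant metric value are placeholders, not part of the change):

import tensorflow.compat.v2 as tf

import smdebug.tensorflow as smd

# Placeholder hook; any KerasHook configured as in the scripts above would do.
hook = smd.KerasHook("/tmp/demo", include_collections=["metrics"])

acc = tf.constant(0.93)  # stand-in for a metric computed during training

# collections_to_write names the collection(s) to write the tensor to; the
# diff above passes "metrics", and the gradient-tape script below passes
# CollectionKeys.METRICS as well as "outputs" for a custom tensor.
hook.save_tensor(tensor_name="accuracy", tensor_value=acc, collections_to_write="metrics")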
examples/tensorflow2/scripts/tf_save_metrics_gradient_tape.py (new file: 161 additions, 0 deletions)
@@ -0,0 +1,161 @@
"""
This file is temporary, for testing with 2.X.
We'll need to integrate a more robust testing pipeline and make this part of pytest
before pushing to master.
"""
# Standard Library
import shutil

# Third Party
import pytest
import tensorflow.compat.v2 as tf

# First Party
import smdebug.tensorflow as smd
from smdebug.core.collection import CollectionKeys
from smdebug.tensorflow import SaveConfig


@pytest.fixture(scope="function")
def out_dir():
""" Use this method to construct an out_dir.

Then it will be automatically cleaned up for you, passed into the test method, and we'll have
fewer folders lying around.
"""
out_dir = "/tmp/test"
shutil.rmtree(out_dir, ignore_errors=True)
return out_dir


def helper_keras_gradtape(
trial_dir,
save_all=False,
include_collections=None,
reduction_config=None,
save_config=None,
hook=None,
batch_size=64,
persistent=False,
):
mnist = tf.keras.datasets.mnist
(x_train, y_train), _ = mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(x_train[..., tf.newaxis] / 255, tf.float32), tf.cast(y_train, tf.int64))
)
dataset = dataset.shuffle(1000).batch(batch_size)

model = tf.keras.models.Sequential(
[
# WA for TF issue https://github.com/tensorflow/tensorflow/issues/36279
tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
tf.keras.layers.Dense(128, activation="relu"),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation="softmax"),
]
)

if hook is None:
if save_config is None:
save_config = SaveConfig(save_interval=3)

hook = smd.KerasHook(
trial_dir,
save_config=save_config,
save_all=save_all,
include_collections=include_collections,
reduction_config=reduction_config,
)

if not save_all and include_collections is not None:
for cname in hook.include_collections:
if cname not in include_collections:
hook.get_collection(cname).save_config = SaveConfig(end_step=0)

opt = tf.keras.optimizers.Adam()
hook.wrap_optimizer(opt)
hook.register_model(model) # Can be skipped in ZCC

cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
train_acc_metric = tf.keras.metrics.SparseCategoricalAccuracy()

n_epochs = 1
for epoch in range(n_epochs):
for data, labels in dataset:
dataset_labels = labels
labels = tf.one_hot(labels, depth=10)
with hook.wrap_tape(tf.GradientTape(persistent=persistent)) as tape:
logits = model(data, training=True)
loss_value = cce(labels, logits)
hook.save_tensor("y_labels", labels, "outputs")
grads = tape.gradient(loss_value, model.variables)

# By default, the resources held by a GradientTape are released as
# soon as GradientTape.gradient() method is called. To compute
# multiple gradients over the same computation, create a persistent
# gradient tape. This allows multiple calls to the gradient() method
# as resources are released when the tape object is garbage collected.
if persistent:
_ = tape.gradient(loss_value, model.variables)
opt.apply_gradients(zip(grads, model.variables))
acc = train_acc_metric(dataset_labels, logits)
hook.save_tensor(
tensor_name="accuracy",
tensor_value=acc,
collections_to_write=CollectionKeys.METRICS,
)
train_acc_metric.reset_states()

hook.close()


def test_keras_gradtape(out_dir):
"""
Test save all and save default collection
"""
include_collections = [
CollectionKeys.WEIGHTS,
CollectionKeys.BIASES,
CollectionKeys.GRADIENTS,
CollectionKeys.LAYERS,
CollectionKeys.LOSSES,
CollectionKeys.INPUTS,
CollectionKeys.OUTPUTS,
CollectionKeys.METRICS,
CollectionKeys.OPTIMIZER_VARIABLES,
]
hook = smd.KerasHook(
out_dir=out_dir,
save_config=SaveConfig(save_interval=1),
include_collections=include_collections,
)
helper_keras_gradtape(trial_dir=out_dir, hook=hook)

trial = smd.create_trial(path=out_dir)
assert len(trial.tensor_names(collection=CollectionKeys.BIASES)) == 2
assert len(trial.tensor_names(collection=CollectionKeys.WEIGHTS)) == 2
assert len(trial.tensor_names(collection=CollectionKeys.OPTIMIZER_VARIABLES)) == 5
assert len(trial.tensor_names(collection=CollectionKeys.LAYERS)) == 8
assert len(trial.tensor_names(collection=CollectionKeys.OUTPUTS)) == 2
assert len(trial.tensor_names(collection=CollectionKeys.INPUTS)) == 1
assert len(trial.tensor_names(collection=CollectionKeys.LOSSES)) == 1
assert len(trial.tensor_names(collection=CollectionKeys.METRICS)) == 1

# We assert that all the tensors saved have a valid value
for tname in trial.tensor_names():
assert trial.tensor(tname).value(5) is not None

# We then analyse Layer Inputs and Layer Outputs
# Check that output of a layer is equal to the input of the next
boolean_matrix = trial.tensor("flatten/outputs").value(0) == trial.tensor("dense/inputs").value(
0
)
assert boolean_matrix.all()
boolean_matrix = trial.tensor("dense/outputs").value(0) == trial.tensor("dropout/inputs").value(
0
)
assert boolean_matrix.all()
boolean_matrix = trial.tensor("dropout/outputs").value(0) == trial.tensor(
"dense_1/inputs"
).value(0)
assert boolean_matrix.all()
smdebug/core/collection.py (2 additions, 0 deletions)
@@ -26,6 +26,7 @@ class CollectionKeys:
     GRADIENTS = "gradients"
     LOSSES = "losses"
     BIASES = "biases"
+    LAYERS = "layers"

     # Use this collection to log scalars other than losses/metrics to SageMaker.
     # Mainly for Tensorflow. For all other frameworks, call save_scalar() API
@@ -75,6 +76,7 @@ class CollectionKeys:
     CollectionKeys.METRICS,
     CollectionKeys.INPUTS,
     CollectionKeys.OUTPUTS,
+    CollectionKeys.LAYERS,
     CollectionKeys.SM_METRICS,
     CollectionKeys.OPTIMIZER_VARIABLES,
 }
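
These two additions register the new LAYERS collection key and add it to the set of collections the hook saves. A short sketch of reading layer inputs and outputs back through it, assuming a /tmp/test trial produced by one of the scripts above (the path is a placeholder):

import smdebug.tensorflow as smd
from smdebug.core.collection import CollectionKeys

# Assumes /tmp/test was written by one of the example scripts above.
trial = smd.create_trial(path="/tmp/test")

# Layer tensors are saved under names like "dense/inputs" and "dense/outputs",
# which the layer-chaining asserts in the tests above rely on.
for tname in trial.tensor_names(collection=CollectionKeys.LAYERS):
    print(tname, trial.tensor(tname).value(0).shape)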