
NotImplementedError: Cannot convert a symbolic Tensor (Train_gpu0/Loss_R1/gradients/Train_gpu0/Augment_1/transform/ImageProjectiveTransformV2_grad/flat_transforms_to_matrices/strided_slice:0) to a numpy array. #112

Open
kay-aki opened this issue Jun 13, 2022 · 0 comments


kay-aki commented Jun 13, 2022

Hi,

I am trying to train a GAN, but this error occurs every time. I do not know whether it is a bug or whether I am doing something wrong. I am using TensorFlow 1.x on Google Colab.

The command I am running is:
!python train.py --outdir='/content/drive/MyDrive/stylegan2-ada/training-runs' --gpus=1 --data='/content/drive/MyDrive/stylegan2-ada/datasets/{dataset_name}'
I tried several other training configurations, but the same error occurred every time.

Here is the output of the program:

/content/drive/MyDrive/stylegan2-ada
tcmalloc: large alloc 4294967296 bytes == 0x6ec6000 @ 0x7f71755b4001 0x7f71727db1af 0x7f7172831c23 0x7f7172832a87 0x7f71728d4823 0x5936cc 0x548c51 0x5127f1 0x549e0e 0x4bca8a 0x532b86 0x594a96 0x548cc1 0x5127f1 0x549576 0x4bca8a 0x5134a6 0x549576 0x4bca8a 0x5134a6 0x549e0e 0x4bca8a 0x5134a6 0x593dd7 0x5118f8 0x549576 0x604173 0x5f5506 0x5f8c6c 0x5f9206 0x64faf2
tcmalloc: large alloc 4294967296 bytes == 0x7f6fa9dc6000 @ 0x7f71755b21e7 0x7f71727db0ce 0x7f7172831cf5 0x7f7172831f4f 0x7f71728d4673 0x5936cc 0x548c51 0x5127f1 0x549576 0x593fce 0x548ae9 0x5127f1 0x549576 0x593fce 0x548ae9 0x5127f1 0x549576 0x593fce 0x548ae9 0x5127f1 0x593dd7 0x5118f8 0x549576 0x593fce 0x548ae9 0x51566f 0x549576 0x593fce 0x548ae9 0x5127f1 0x549e0e
tcmalloc: large alloc 4294967296 bytes == 0x7f6ea8dc4000 @ 0x7f71755b21e7 0x7f71727db0ce 0x7f7172831cf5 0x7f7172831f4f 0x7f7135e07235 0x7f713578a792 0x7f713578ad42 0x7f7135743aee 0x59371f 0x548c51 0x51566f 0x593dd7 0x511e2c 0x549e0e 0x4bcb19 0x5134a6 0x549576 0x593fce 0x511e2c 0x549e0e 0x593fce 0x511e2c 0x593dd7 0x511e2c 0x549576 0x4bcb19 0x59c019 0x595ef6 0x5134a6 0x549576 0x593fce

Training options:
{
  "G_args": {
    "func_name": "training.networks.G_main",
    "fmap_base": 8192,
    "fmap_max": 512,
    "mapping_layers": 2,
    "num_fp16_res": 4,
    "conv_clamp": 256
  },
  "D_args": {
    "func_name": "training.networks.D_main",
    "mbstd_group_size": 4,
    "fmap_base": 8192,
    "fmap_max": 512,
    "num_fp16_res": 4,
    "conv_clamp": 256
  },
  "G_opt_args": {
    "beta1": 0.0,
    "beta2": 0.99,
    "learning_rate": 0.0025
  },
  "D_opt_args": {
    "beta1": 0.0,
    "beta2": 0.99,
    "learning_rate": 0.0025
  },
  "loss_args": {
    "func_name": "training.loss.stylegan2",
    "r1_gamma": 0.8192
  },
  "augment_args": {
    "class_name": "training.augment.AdaptiveAugment",
    "tune_heuristic": "rt",
    "tune_target": 0.6,
    "apply_func": "training.augment.augment_pipeline",
    "apply_args": {
      "xflip": 1,
      "rotate90": 1,
      "xint": 1,
      "scale": 1,
      "rotate": 1,
      "aniso": 1,
      "xfrac": 1,
      "brightness": 1,
      "contrast": 1,
      "lumaflip": 1,
      "hue": 1,
      "saturation": 1
    }
  },
  "num_gpus": 1,
  "image_snapshot_ticks": 50,
  "network_snapshot_ticks": 50,
  "train_dataset_args": {
    "path": "/content/drive/MyDrive/stylegan2-ada/datasets/Pferde",
    "max_label_size": 0,
    "resolution": 256,
    "mirror_augment": false
  },
  "metric_arg_list": [
    {
      "name": "fid50k_full",
      "class_name": "metrics.frechet_inception_distance.FID",
      "max_reals": null,
      "num_fakes": 50000,
      "minibatch_per_gpu": 8,
      "force_dataset_args": {
        "shuffle": false,
        "max_images": null,
        "repeat": false,
        "mirror_augment": false
      }
    }
  ],
  "metric_dataset_args": {
    "path": "/content/drive/MyDrive/stylegan2-ada/datasets/Pferde",
    "max_label_size": 0,
    "resolution": 256,
    "mirror_augment": false
  },
  "total_kimg": 25000,
  "minibatch_size": 16,
  "minibatch_gpu": 16,
  "G_smoothing_kimg": 5.0,
  "G_smoothing_rampup": 0.05,
  "run_dir": "/content/drive/MyDrive/stylegan2-ada/training-runs/00001-Pferde-auto1"
}

Output directory: /content/drive/MyDrive/stylegan2-ada/training-runs/00001-Pferde-auto1
Training data: /content/drive/MyDrive/stylegan2-ada/datasets/Pferde
Training length: 25000 kimg
Resolution: 256
Number of GPUs: 1

Creating output directory...
Loading training set...
tcmalloc: large alloc 4294967296 bytes == 0x6ec6000 @ 0x7f71755b4001 0x7f71727db1af 0x7f7172831c23 0x7f7172832a87 0x7f71728d4823 0x5936cc 0x548c51 0x5127f1 0x549e0e 0x4bca8a 0x532b86 0x594a96 0x548cc1 0x5127f1 0x549576 0x4bca8a 0x5134a6 0x549576 0x4bca8a 0x5134a6 0x549e0e 0x4bca8a 0x5134a6 0x593dd7 0x5118f8 0x549576 0x604173 0x5f5506 0x5f8c6c 0x5f9206 0x64faf2
tcmalloc: large alloc 4294967296 bytes == 0x7f6d9d770000 @ 0x7f71755b21e7 0x7f71727db0ce 0x7f7172831cf5 0x7f7172831f4f 0x7f71728d4673 0x5936cc 0x548c51 0x5127f1 0x549576 0x593fce 0x548ae9 0x5127f1 0x549576 0x593fce 0x548ae9 0x5127f1 0x549576 0x593fce 0x548ae9 0x5127f1 0x593dd7 0x5118f8 0x549576 0x593fce 0x548ae9 0x51566f 0x549576 0x593fce 0x548ae9 0x5127f1 0x549e0e
tcmalloc: large alloc 4294967296 bytes == 0x7f6d9d770000 @ 0x7f71755b21e7 0x7f71727db0ce 0x7f7172831cf5 0x7f7172831f4f 0x7f7135e07235 0x7f713578a792 0x7f713578ad42 0x7f7135743aee 0x59371f 0x548c51 0x51566f 0x593dd7 0x511e2c 0x549e0e 0x4bcb19 0x5134a6 0x549576 0x593fce 0x511e2c 0x549e0e 0x593fce 0x511e2c 0x593dd7 0x511e2c 0x549576 0x4bcb19 0x59c019 0x595ef6 0x5134a6 0x549576 0x593fce
Image shape: [3, 256, 256]
Label shape: [0]

Constructing networks...
Setting up TensorFlow plugin "fused_bias_act.cu": Loading... Done.
Setting up TensorFlow plugin "upfirdn_2d.cu": Loading... Done.

G Params OutputShape WeightShape


latents_in - (?, 512) -
labels_in - (?, 0) -
G_mapping/Normalize - (?, 512) -
G_mapping/Dense0 262656 (?, 512) (512, 512)
G_mapping/Dense1 262656 (?, 512) (512, 512)
G_mapping/Broadcast - (?, 14, 512) -
dlatent_avg - (512,) -
Truncation/Lerp - (?, 14, 512) -
G_synthesis/4x4/Const 8192 (?, 512, 4, 4) (1, 512, 4, 4)
G_synthesis/4x4/Conv 2622465 (?, 512, 4, 4) (3, 3, 512, 512)
G_synthesis/4x4/ToRGB 264195 (?, 3, 4, 4) (1, 1, 512, 3)
G_synthesis/8x8/Conv0_up 2622465 (?, 512, 8, 8) (3, 3, 512, 512)
G_synthesis/8x8/Conv1 2622465 (?, 512, 8, 8) (3, 3, 512, 512)
G_synthesis/8x8/Upsample - (?, 3, 8, 8) -
G_synthesis/8x8/ToRGB 264195 (?, 3, 8, 8) (1, 1, 512, 3)
G_synthesis/16x16/Conv0_up 2622465 (?, 512, 16, 16) (3, 3, 512, 512)
G_synthesis/16x16/Conv1 2622465 (?, 512, 16, 16) (3, 3, 512, 512)
G_synthesis/16x16/Upsample - (?, 3, 16, 16) -
G_synthesis/16x16/ToRGB 264195 (?, 3, 16, 16) (1, 1, 512, 3)
G_synthesis/32x32/Conv0_up 2622465 (?, 512, 32, 32) (3, 3, 512, 512)
G_synthesis/32x32/Conv1 2622465 (?, 512, 32, 32) (3, 3, 512, 512)
G_synthesis/32x32/Upsample - (?, 3, 32, 32) -
G_synthesis/32x32/ToRGB 264195 (?, 3, 32, 32) (1, 1, 512, 3)
G_synthesis/64x64/Conv0_up 1442561 (?, 256, 64, 64) (3, 3, 512, 256)
G_synthesis/64x64/Conv1 721409 (?, 256, 64, 64) (3, 3, 256, 256)
G_synthesis/64x64/Upsample - (?, 3, 64, 64) -
G_synthesis/64x64/ToRGB 132099 (?, 3, 64, 64) (1, 1, 256, 3)
G_synthesis/128x128/Conv0_up 426369 (?, 128, 128, 128) (3, 3, 256, 128)
G_synthesis/128x128/Conv1 213249 (?, 128, 128, 128) (3, 3, 128, 128)
G_synthesis/128x128/Upsample - (?, 3, 128, 128) -
G_synthesis/128x128/ToRGB 66051 (?, 3, 128, 128) (1, 1, 128, 3)
G_synthesis/256x256/Conv0_up 139457 (?, 64, 256, 256) (3, 3, 128, 64)
G_synthesis/256x256/Conv1 69761 (?, 64, 256, 256) (3, 3, 64, 64)
G_synthesis/256x256/Upsample - (?, 3, 256, 256) -
G_synthesis/256x256/ToRGB 33027 (?, 3, 256, 256) (1, 1, 64, 3)


Total 23191522

D Params OutputShape WeightShape


images_in - (?, 3, 256, 256) -
labels_in - (?, 0) -
256x256/FromRGB 256 (?, 64, 256, 256) (1, 1, 3, 64)
256x256/Conv0 36928 (?, 64, 256, 256) (3, 3, 64, 64)
256x256/Conv1_down 73856 (?, 128, 128, 128) (3, 3, 64, 128)
256x256/Skip 8192 (?, 128, 128, 128) (1, 1, 64, 128)
128x128/Conv0 147584 (?, 128, 128, 128) (3, 3, 128, 128)
128x128/Conv1_down 295168 (?, 256, 64, 64) (3, 3, 128, 256)
128x128/Skip 32768 (?, 256, 64, 64) (1, 1, 128, 256)
64x64/Conv0 590080 (?, 256, 64, 64) (3, 3, 256, 256)
64x64/Conv1_down 1180160 (?, 512, 32, 32) (3, 3, 256, 512)
64x64/Skip 131072 (?, 512, 32, 32) (1, 1, 256, 512)
32x32/Conv0 2359808 (?, 512, 32, 32) (3, 3, 512, 512)
32x32/Conv1_down 2359808 (?, 512, 16, 16) (3, 3, 512, 512)
32x32/Skip 262144 (?, 512, 16, 16) (1, 1, 512, 512)
16x16/Conv0 2359808 (?, 512, 16, 16) (3, 3, 512, 512)
16x16/Conv1_down 2359808 (?, 512, 8, 8) (3, 3, 512, 512)
16x16/Skip 262144 (?, 512, 8, 8) (1, 1, 512, 512)
8x8/Conv0 2359808 (?, 512, 8, 8) (3, 3, 512, 512)
8x8/Conv1_down 2359808 (?, 512, 4, 4) (3, 3, 512, 512)
8x8/Skip 262144 (?, 512, 4, 4) (1, 1, 512, 512)
4x4/MinibatchStddev - (?, 513, 4, 4) -
4x4/Conv 2364416 (?, 512, 4, 4) (3, 3, 513, 512)
4x4/Dense0 4194816 (?, 512) (8192, 512)
Output 513 (?, 1) (512, 1)


Total 24001089

Exporting sample images...
Replicating networks across 1 GPUs...
Initializing augmentations...
Setting up optimizers...
Constructing training graph...
Traceback (most recent call last):
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py", line 2380, in get_attr
    c_api.TF_OperationGetAttrValueProto(self._c_op, name, buf)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Operation 'Train_gpu0/Augment_1/transform/ImageProjectiveTransformV2' has no attr named '_XlaCompile'.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/gradients_util.py", line 345, in _MaybeCompile
    xla_compile = op.get_attr("_XlaCompile")
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py", line 2384, in get_attr
    raise ValueError(str(e))
ValueError: Operation 'Train_gpu0/Augment_1/transform/ImageProjectiveTransformV2' has no attr named '_XlaCompile'.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 561, in <module>
    main()
  File "train.py", line 553, in main
    run_training(**vars(args))
  File "train.py", line 451, in run_training
    training_loop.training_loop(**training_options)
  File "/content/drive/MyDrive/stylegan2-ada/training/training_loop.py", line 187, in training_loop
    terms = dnnlib.util.call_func_by_name(G=G_gpu, D=D_gpu, aug=aug, fake_labels=fake_labels, real_images=real_images_var, real_labels=real_labels_var, **loss_args)
  File "/content/drive/MyDrive/stylegan2-ada/dnnlib/util.py", line 281, in call_func_by_name
    return func_obj(*args, **kwargs)
  File "/content/drive/MyDrive/stylegan2-ada/training/loss.py", line 110, in stylegan2
    r1_grads = tf.gradients(tf.reduce_sum(D_real.scores), [real_images])[0]
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/gradients_impl.py", line 158, in gradients
    unconnected_gradients)
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/gradients_util.py", line 679, in _GradientsHelper
    lambda: grad_fn(op, *out_grads))
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/gradients_util.py", line 350, in _MaybeCompile
    return grad_fn()  # Exit early
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/gradients_util.py", line 679, in <lambda>
    lambda: grad_fn(op, *out_grads))
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/contrib/image/python/ops/image_ops.py", line 420, in _image_projective_transform_grad
    transforms = flat_transforms_to_matrices(transforms=transforms)
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/contrib/image/python/ops/image_ops.py", line 362, in flat_transforms_to_matrices
    [transforms, array_ops.ones([num_transforms, 1])], axis=1),
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/array_ops.py", line 2560, in ones
    output = _constant_if_small(one, shape, dtype, name)
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/array_ops.py", line 2295, in _constant_if_small
    if np.prod(shape) < 1000:
  File "<__array_function__ internals>", line 6, in prod
  File "/usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py", line 3052, in prod
    keepdims=keepdims, initial=initial, where=where)
  File "/usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py", line 736, in __array__
    " array.".format(self.name))
NotImplementedError: Cannot convert a symbolic Tensor (Train_gpu0/Loss_R1/gradients/Train_gpu0/Augment_1/transform/ImageProjectiveTransformV2_grad/flat_transforms_to_matrices/strided_slice:0) to a numpy array.

Kind regards!
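Editor's note: the traceback above ends in TF 1.15's `_constant_if_small`, which calls `np.prod()` on a symbolic tensor shape. This is a known incompatibility between TensorFlow 1.x and NumPy >= 1.20, where the `__array_function__` dispatch path forces a conversion that symbolic tensors refuse. The sketch below is not part of the original report; it is a minimal, hypothetical version check one might run in the Colab environment before training, assuming the usual workaround of pinning NumPy below 1.20 applies here.

```python
def numpy_compatible_with_tf1(version: str) -> bool:
    """Heuristic check: TF 1.15 graph-mode code is known to break with
    NumPy >= 1.20, which refuses to convert symbolic tensors to arrays.
    Returns True when the given NumPy version string predates 1.20."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) < (1, 20)


if __name__ == "__main__":
    import numpy as np
    if not numpy_compatible_with_tf1(np.__version__):
        # Commonly suggested workaround (restart the Colab runtime afterwards):
        #   !pip install "numpy<1.20"
        print(f"NumPy {np.__version__} likely incompatible with TF 1.15")
```

With NumPy 1.19.x installed, `np.prod(shape)` in `_constant_if_small` no longer triggers the `NotImplementedError` conversion path, which is why downgrading is the commonly cited fix in stylegan2-ada Colab setups.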
