
Cannot feed value of shape (784,) for Tensor 'input:0', which has shape '(?, 784)' #40

tmagg opened this issue Feb 25, 2019 · 0 comments

tmagg commented Feb 25, 2019

I can run the MNIST training code below (segment 1) without any error. However, when restoring the model in segment 2 below, I get the error: Cannot feed value of shape (784,) for Tensor 'input:0', which has shape '(?, 784)'. Kindly suggest a fix.

Segment 1

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
from random import randint
import numpy as np

logs_path = 'log_mnist_softmax'
batch_size = 100
learning_rate = 0.5
training_epochs = 10
mnist = input_data.read_data_sets("data", one_hot=True)

X = tf.placeholder(tf.float32, [None, 784], name = "input")
Y_ = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
XX = tf.reshape(X, [-1, 784])

Y = tf.matmul(X, W) + b
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_, logits=Y), name = "output")
correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)

tf.summary.scalar("cost", cross_entropy)
tf.summary.scalar("accuracy", accuracy)
summary_op = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
    for epoch in range(training_epochs):
        batch_count = int(mnist.train.num_examples / batch_size)
        for i in range(batch_count):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            _, summary = sess.run([train_step, summary_op],
                                  feed_dict={X: batch_x, Y_: batch_y})
            writer.add_summary(summary, epoch * batch_count + i)
        print("Epoch: ", epoch)

    print("Accuracy: ", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))
    print("done")

    num = randint(0, mnist.test.images.shape[0])
    img = mnist.test.images[num]

    classification = sess.run(tf.argmax(Y, 1), feed_dict={X: [img]})
    print('Neural Network predicted', classification[0])
    print('Real label is:', np.argmax(mnist.test.labels[num]))

    saver = tf.train.Saver()
    save_path = saver.save(sess, "data/saved_mnist_cnn.ckpt")
    print("Model saved to %s" % save_path)

Segment 2

import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('data', one_hot=True)
sess = tf.InteractiveSession()
new_saver = tf.train.import_meta_graph('data\saved_mnist_cnn.ckpt.meta')
new_saver.restore(sess, 'data\saved_mnist_cnn.ckpt')
tf.get_default_graph().as_graph_def()

x = sess.graph.get_tensor_by_name("input:0")

y_conv = sess.graph.get_tensor_by_name("output:0")
image_b = mnist.test.images[100]
result = sess.run(y_conv, feed_dict={x:image_b})
print(result)
print(sess.run(tf.argmax(result, 1)))

plt.imshow(image_b.reshape([28, 28]), cmap='Greys')
plt.show()
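
A note on the error itself: the placeholder is defined in segment 1 as tf.placeholder(tf.float32, [None, 784], name="input"), so it expects a 2-D batch, while segment 2 feeds a single flattened image of shape (784,). Segment 1 avoids this by wrapping the image in a list (feed_dict={X: [img]}). Below is a minimal sketch of one possible fix for segment 2, assuming the goal is to classify a single test image; the reshape call is just one way to add the batch dimension.

# Sketch only: feed a (1, 784) batch instead of a bare (784,) vector.
image_b = mnist.test.images[100]                 # shape (784,)
image_batch = image_b.reshape(1, 784)            # or: np.expand_dims(image_b, axis=0), or [image_b]
result = sess.run(y_conv, feed_dict={x: image_batch})
print(result)

Note also that in segment 1 the name "output" is attached to the cross-entropy op (tf.reduce_mean(..., name="output")), not to the logits Y, so "output:0" in segment 2 may not be the prediction tensor you expect; giving Y (or tf.nn.softmax(Y)) an explicit name before saving, and restoring that tensor instead, would likely be needed to get per-class scores.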
