In my experiment, I feed data into the model with feed_dict. The session code is as follows:
with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    for epoch in range(a.epochs):
        input, target = load_batch_data(batch_size=16, a=a)   # returns NumPy arrays
        batch_input = input.astype(np.float32)
        batch_target = target.astype(np.float32)
        sess.run(predict_real, feed_dict={input: batch_input, target: batch_target})
Running this raises: TypeError: unhashable type: 'numpy.ndarray'
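The message itself comes from Python's dictionary machinery: the keys of feed_dict must be hashable, and a NumPy array is not, so it cannot be used as a key. A minimal sketch, independent of TensorFlow:

import numpy as np

key = np.zeros((2, 2), dtype=np.float32)
try:
    d = {key: 'some value'}   # dict keys must be hashable
except TypeError as err:
    print(err)                # unhashable type: 'numpy.ndarray'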
Later, I found the cause. Outside the session, input and target are defined as placeholders:
input = tf.placeholder(dtype=tf.float32, shape=[None, image_size, image_size, num_channels])
target = tf.placeholder(dtype=tf.float32, shape=[None, image_size, image_size, num_channels])
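For reference, these placeholders are Tensors in the graph; they are hashable and are exactly what feed_dict expects as keys, with NumPy arrays as the values. A minimal TF 1.x sketch (image_size and num_channels set to illustrative values):

import numpy as np
import tensorflow as tf

image_size, num_channels = 64, 3
input = tf.placeholder(dtype=tf.float32, shape=[None, image_size, image_size, num_channels])
doubled = input * 2.0    # any op built on the placeholder

with tf.Session() as sess:
    batch = np.zeros((4, image_size, image_size, num_channels), dtype=np.float32)
    out = sess.run(doubled, feed_dict={input: batch})   # placeholder is the key, ndarray is the value
    print(out.shape)                                    # (4, 64, 64, 3)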
However, inside the session I rebound the names input and target: load_batch_data returns NumPy arrays, and assigning them to input and target means those names no longer refer to the placeholders. So when the following line runs:
sess.run(predict_real, feed_dict={input: batch_input, target: batch_target})
the keys of feed_dict are NumPy arrays rather than the placeholders defined outside the session, and Python raises TypeError: unhashable type: 'numpy.ndarray' because an array cannot be a dictionary key. Once the reason is clear, the fix is easy: simply use different names for the loaded batches inside the session, as follows (a minimal sketch that reproduces the problem is given after the corrected code):
with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    if a.mode == 'train':
        for epoch in range(a.epochs):
            batch_input, batch_target = load_batch_data(a=a)
            batch_input = batch_input.astype(np.float32)
            batch_target = batch_target.astype(np.float32)
            sess.run(model, feed_dict={input: batch_input, target: batch_target})
            print('epoch' + str(epoch) + ':')
        saver.save(sess, 'model_parameter/train.ckpt')
        print('training finished!!!')
    elif a.mode == 'test':
        # test
        ckpt = tf.train.latest_checkpoint(a.checkpoint)
        saver.restore(sess, ckpt)
        # Get the images for testing, together with their labels
        batch_input, _ = load_batch_data(a=a)
        # batch_input = batch_input / 255.
        batch_input = batch_input.astype(np.float32)
        generator_output = sess.run(test_output, feed_dict={input: batch_input})
        # Process the result, taking the 3 image channels to obtain the RGB image
        result = process_generator_output(generator_output)
        if result:
            print('Done!')
    else:
        print('the MODE is not available...')
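To make the name shadowing explicit, here is a minimal sketch (not the original model; the placeholder and the op are stand-ins) that reproduces the error and shows why keeping separate names for the loaded batches fixes it:

import numpy as np
import tensorflow as tf

input = tf.placeholder(tf.float32, shape=[None, 2])
output = input * 2.0

with tf.Session() as sess:
    # Correct: keep the name `input` bound to the placeholder and feed the array as the value.
    batch_input = np.ones((1, 2), dtype=np.float32)
    print(sess.run(output, feed_dict={input: batch_input}))   # [[2. 2.]]

    # Bug reproduced: rebinding `input` to the loaded array shadows the placeholder,
    # so the feed_dict key becomes an ndarray and building the dict fails.
    input = batch_input
    try:
        sess.run(output, feed_dict={input: batch_input})
    except TypeError as err:
        print(err)                                            # unhashable type: 'numpy.ndarray'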