
[Solved] Tensorflow Error: ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray)

Problem description

When an array is passed to TensorFlow for training, the following error is reported:

ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray)

Each element of the array is itself an array, and the shapes of the elements are inconsistent. For example:

cropImg[0].shape = (13, 13, 3)
cropImg[1].shape = (14, 13, 3)
cropImg[2].shape = (12, 13, 3)

Environment

python 3.7.9

tensorflow 2.6.0

keras 2.6.0

Solution:

There are many similar error reports on Stack Overflow, which roughly say that the data type is wrong: the object being converted is not of the type named in parentheses. Read literally, "Unsupported object type numpy.ndarray" would mean the cropImg array elements are not of type numpy.ndarray.

I was puzzled and tried many checks. They all showed that the cropImg array elements were indeed of type numpy.ndarray, yet the error persisted.

Later, I suddenly realized that when generating the cropImg array, there was a warning:

VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
  cropImg_ar = np.array(img_list)

The cropImg array elements are arrays with inconsistent shapes, which means the dtype of the outer array is actually object. Is the error caused by TensorFlow not accepting object-dtype data?
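A quick check reproduces this (a minimal sketch; the shapes mirror the cropImg example above):

import numpy as np

img_list = [np.zeros((13, 13, 3)), np.zeros((14, 13, 3))]  # ragged shapes
cropImg_ar = np.array(img_list, dtype=object)  # as the warning demands
print(cropImg_ar.dtype)  # object - this is what TensorFlow refuses to convert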

After converting the cropImg array elements to a consistent shape, the problem was solved.
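As a minimal sketch of the fix (the 13x13 target size and tf.image.resize are my choices for illustration; any method that makes the shapes match works), resize each element to a common shape before building the array:

import numpy as np
import tensorflow as tf

# Ragged crops mimicking the cropImg example above
img_list = [np.random.rand(13, 13, 3),
            np.random.rand(14, 13, 3),
            np.random.rand(12, 13, 3)]

# Resize every crop to one common shape, then stack into a regular array
cropImg_ar = np.array([tf.image.resize(img, (13, 13)).numpy() for img in img_list])
print(cropImg_ar.shape)  # (3, 13, 13, 3) - dtype is float32, no longer object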

 

[Solved] TensorFlow Error: InternalError: Failed copying input tensor

When training a model on the GPU with TensorFlow, the following error is reported:

InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run _EagerConst: Dst tensor is not initialized.

Solution:

Link: https://stackoverflow.com/questions/37313818/tensorflow-dst-tensor-is-not-initialized

The main cause is that the batch_size is too large to fit in GPU memory. If batch_size is reduced appropriately, training runs normally.
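For example (the toy model and data here are placeholders; only the batch_size argument matters):

import numpy as np
import tensorflow as tf

x_train = np.random.rand(1024, 32).astype(np.float32)
y_train = np.random.randint(0, 2, size=(1024, 1))

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')

# If a large batch_size triggers the copy error, a smaller value such as 32
# shrinks each tensor that must be copied to the GPU
model.fit(x_train, y_train, batch_size=32, epochs=1)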

By default, TensorFlow allocates as much GPU memory as possible. The GPU config can be adjusted so that memory is allocated on demand instead; see the TensorFlow GPU guide and the sketch below.
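A minimal sketch of on-demand allocation with the TF 2.x API (this must run before the GPU is first used):

import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing it all at startup
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)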


Also, during long-term model training in a Jupyter notebook, this error may be caused by GPU memory not being released in time. Following a Stack Overflow answer, the problem can be solved by defining the following function:

import gc

import tensorflow as tf
# In TF 2.x the session helpers live under the compat.v1 Keras backend
from tensorflow.compat.v1.keras.backend import set_session, clear_session, get_session

# Reset the Keras session and release GPU memory
def reset_keras():
    sess = get_session()
    clear_session()
    sess.close()
    sess = get_session()

    try:
        del classifier  # a model held in global scope - change this as you need
    except NameError:
        pass

    print(gc.collect())  # if anything was collected you should see a number as output

    # use the same config as you used to create the original session
    config = tf.compat.v1.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 1
    config.gpu_options.visible_device_list = "0"
    set_session(tf.compat.v1.Session(config=config))

Call the reset_keras function directly whenever GPU memory needs to be cleared. For example:

dense_layers = [0, 1, 2]
layer_sizes = [32, 64, 128]
conv_layers = [1, 2, 3]

for dense_layer in dense_layers:
    for layer_size in layer_sizes:
        for conv_layer in conv_layers:
            reset_keras()
            # training your model here