
The solution to "CUDA Error: out of memory" in YOLOv3 training


1. CUDA Error: out of memory (darknet: ./src/cuda.c:36: check_error: Assertion `0' failed.)

You need to modify the subdivisions parameter in the .cfg file of the model you are using, for example changing subdivisions=8 to subdivisions=64.

subdivisions: this parameter is interesting. It keeps each batch from being fed to the network all at once; instead, the batch is split into subdivisions parts, which are run one after another and then accumulated to complete one iteration. This reduces GPU memory usage. If the parameter is set to 1, all images of the batch are pushed through the network at once; if it is set to 2, half of them are pushed through at a time, and so on (see the sketch below).
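For reference, a minimal sketch of the relevant [net] section of a darknet .cfg file; the batch value shown is just the common default and may differ in your file:

[net]
batch=64
# split each batch of 64 images into 64 mini-batches of 1 image each,
# so only one image is resident on the GPU at a time
subdivisions=64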

http://blog.csdn.net/renhanchi/article/details/71077830?locationNum=11&fps…

If the above method does not solve the problem:

The real reason for the CUDA error is as follows:

The server's GPU has M of memory available.

TensorFlow can only allocate N (N < M).

In other words, TensorFlow is telling you that it cannot allocate all of the GPU's resources, so it quits.

Solution:

Find where the session is created in the code and add the following before the Session is defined:

import tensorflow as tf

# Use at most 70% of the GPU memory
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.7)
config = tf.ConfigProto(allow_soft_placement=True, gpu_options=gpu_options)

# Instead of giving TensorFlow all GPU memory at the start, grow the allocation as needed
config.gpu_options.allow_growth = True

sess = tf.Session(config=config)

With this configuration in place, the problem goes away.

In fact, TensorFlow is greedy about GPU memory by default.

Even if you pin the program to one GPU with a device ID, it will still occupy the video memory of the other GPUs, so before running the program you must

execute export CUDA_VISIBLE_DEVICES=n (where n is the index of the GPU that should be visible).

Then run the Python script; it will not occupy the resources of the other GPUs.

I started using TensorFlow recently; before that I used Caffe.

Being reported by people in the lab three days in a row this week for taking up too many server resources got tiring, so I just use the above method.

That is, execute export CUDA_VISIBLE_DEVICES=n before running the code,

so that only one GPU (or the selected GPUs) is visible and the other GPUs cannot be seen.
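If you prefer to do this from inside the script instead of the shell, here is a minimal sketch; the variable must be set before TensorFlow initializes the GPU (i.e. before the first session is created), and the index "0" is only an example:

import os
# Make only GPU 0 visible to this process; memory on the other GPUs is left untouched
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf
sess = tf.Session()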

http://blog.csdn.net/wangkun1340378/article/details/72782593

Solution: running tiny-yolo-voc.cfg does not hit this situation, but yolo-voc.cfg and other deeper models do run into the CUDA "out of memory" error. Within my ability, I did not find where in Darknet to modify the GPU resource allocation described in the material above, so you need to check GPU usage with nvidia-smi before running yolo-voc.cfg. It only runs normally when the GPU is completely idle (if someone else is running other programs on it at the same time, it will not work).

2. Before using the detector recall function, you need to modify the code in examples/detector.c

// change to the full path of infrared_val.txt