Clearing Tensorflow GPU memory after model execution

Question

I've trained 3 models and am now running code that loads each of the 3 checkpoints in sequence and runs predictions using them. I'm using the GPU.

When the first model is loaded it pre-allocates the entire GPU memory (which I want, for working through the first batch of data), but it doesn't release that memory when it's finished. When the second model is loaded, even using both tf.reset_default_graph() and with tf.Graph().as_default(), the GPU memory is still fully consumed by the first model, and the second model is starved of memory.

Is there a way to resolve this, other than using Python subprocesses or multiprocessing to work around the problem (the only solution I've found via Google searches)?
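For reference, the multiprocessing workaround looks roughly like this (a minimal sketch; the checkpoint paths and the prediction step are placeholders):

import multiprocessing as mp

def predict_with_checkpoint(checkpoint_path):
    # Import TensorFlow inside the child so the GPU is only
    # initialized in the process that actually uses it.
    import tensorflow as tf
    with tf.Graph().as_default():
        saver = tf.train.import_meta_graph(checkpoint_path + '.meta')
        with tf.Session() as sess:
            saver.restore(sess, checkpoint_path)
            # ... run predictions with this model here ...

if __name__ == '__main__':
    for ckpt in ['model_a.ckpt', 'model_b.ckpt', 'model_c.ckpt']:
        # Each model runs in its own process; the driver reclaims
        # all GPU memory when that process exits.
        p = mp.Process(target=predict_with_checkpoint, args=(ckpt,))
        p.start()
        p.join()

This works, but spawning a process per model is exactly the overhead I'd like to avoid.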



Answers

GPU memory allocated by tensors is released as soon as a tensor is no longer needed (before the .run call terminates). GPU memory allocated for variables is released when their variable containers are destroyed. In the case of DirectSession (i.e., sess = tf.Session("")), that happens when the session is closed or explicitly reset (reset was added in 62c159ff).
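A minimal sketch of that advice, assuming TF 1.x and tf.train.Saver checkpoints (the checkpoint paths are placeholders): give each model its own graph and session, and close the session before loading the next checkpoint so its variable containers are destroyed.

import tensorflow as tf

checkpoints = ['model_a.ckpt', 'model_b.ckpt', 'model_c.ckpt']

for ckpt in checkpoints:
    # A fresh graph per model, so nothing from the previous model lingers.
    with tf.Graph().as_default():
        saver = tf.train.import_meta_graph(ckpt + '.meta')
        # Exiting this `with` block closes the session, which is when
        # the variable containers (and their GPU memory) are released.
        with tf.Session() as sess:
            saver.restore(sess, ckpt)
            # ... run predictions with this model here ...
    # Per the note above, an explicit reset should have the same effect
    # for a DirectSession: tf.Session.reset("")

Whether the allocator actually hands the memory back to the driver within the same process can depend on the TensorFlow build; if it doesn't, running each model in its own process (as in the question) remains the reliable fallback.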





