How to optimize for inference a simple, saved TensorFlow 1.0.1 graph?

I cannot successfully run the optimize_for_inference module on a simple, saved TensorFlow graph (Python 2.7; package installed by pip install tensorflow-gpu==1.0.1).

Background

Saving TensorFlow Graph

Here’s my Python script to generate and save a simple graph that adds 5 to my input placeholder operation x.

import tensorflow as tf

# make and save a simple graph
G = tf.Graph()
with G.as_default():
    x = tf.placeholder(dtype=tf.float32, shape=(), name="x")
    a = tf.Variable(5.0, name="a")
    y = tf.add(a, x, name="y")
    saver = tf.train.Saver()

with tf.Session(graph=G) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(fetches=[y], feed_dict={x: 1.0})
    print(out)
    saver.save(sess=sess, save_path="test_model")

Restoring TensorFlow Graph

I have a simple restore script that recreates the saved graph and restores the graph parameters. Both the save and restore scripts produce the same output.

import tensorflow as tf

# Restore simple graph and test model output
G = tf.Graph()

with tf.Session(graph=G) as sess:
    # recreate saved graph (structure)
    saver = tf.train.import_meta_graph('./test_model.meta')
    # restore net params
    saver.restore(sess, tf.train.latest_checkpoint('./'))

    x = G.get_operation_by_name("x").outputs[0]
    y = G.get_operation_by_name("y").outputs[0]
    out = sess.run(fetches=[y], feed_dict={x: 1.0})
    print(out)

Optimization Attempt

But, while I don’t expect much optimization for a graph this small, when I try to optimize the graph for inference I get the following error message. The expected output node y does not appear to be in the saved graph.

$ python -m tensorflow.python.tools.optimize_for_inference --input test_model.data-00000-of-00001 --output opt_model --input_names=x --output_names=y  
Traceback (most recent call last):  
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main  
    "__main__", fname, loader, pkg_name)  
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code  
    exec code in run_globals  
  File "/data/tdube/.virtualenvs/tf-1.0/lib/python2.7/site-packages/tensorflow/python/tools/optimize_for_inference.py", line 141, in <module>  
    app.run(main=main, argv=[sys.argv[0]] + unparsed)  
  File "/data/tdube/.virtualenvs/tf-1.0/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run  
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/data/tdube/.virtualenvs/tf-1.0/lib/python2.7/site-packages/tensorflow/python/tools/optimize_for_inference.py", line 90, in main  
    FLAGS.output_names.split(","), FLAGS.placeholder_type_enum)  
  File "/data/tdube/.virtualenvs/tf-1.0/local/lib/python2.7/site-packages/tensorflow/python/tools/optimize_for_inference_lib.py", line 91, in optimize_for_inference  
    placeholder_type_enum)  
  File "/data/tdube/.virtualenvs/tf-1.0/local/lib/python2.7/site-packages/tensorflow/python/tools/strip_unused_lib.py", line 71, in strip_unused  
    output_node_names)  
  File "/data/tdube/.virtualenvs/tf-1.0/local/lib/python2.7/site-packages/tensorflow/python/framework/graph_util_impl.py", line 141, in extract_sub_graph  
    assert d in name_to_node_map, "%s is not in graph" % d  
AssertionError: y is not in graph  

Further investigation led me to inspect the checkpoint of the saved graph, which shows only one tensor (a; no x and no y).

(tf-1.0.1) $ python -m tensorflow.python.tools.inspect_checkpoint --file_name ./test_model --all_tensors
tensor_name:  a
5.0

Specific Questions

Why do I not see x and y in the checkpoint? Is it because they are operations and not tensors?

Since I need to provide input and output names to the optimize_for_inference module, how do I build the graph so I can reference the input and output nodes?

-----

Note: @Ishant's answer is correct, and I feel the bounty should be awarded to him.

Here is the detailed guide on how to optimize for inference:

The optimize_for_inference module takes a frozen binary GraphDef file as input and outputs an optimized GraphDef file which you can use for inference. To get the frozen binary GraphDef file you need to use the freeze_graph module, which takes a GraphDef proto, a SaverDef proto, and a set of variables stored in a checkpoint file. The steps to achieve that are given below:

1. Saving tensorflow graph

import tensorflow as tf

# make and save a simple graph
G = tf.Graph()
with G.as_default():
    x = tf.placeholder(dtype=tf.float32, shape=(), name="x")
    a = tf.Variable(5.0, name="a")
    y = tf.add(a, x, name="y")
    saver = tf.train.Saver()

with tf.Session(graph=G) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(fetches=[y], feed_dict={x: 1.0})

    # Save GraphDef (graph structure, no variable values)
    tf.train.write_graph(sess.graph_def, '.', 'graph.pb')
    # Save checkpoint (variable values)
    saver.save(sess=sess, save_path="test_model")

2. Freeze graph

python -m tensorflow.python.tools.freeze_graph --input_graph graph.pb --input_checkpoint test_model --output_graph graph_frozen.pb --output_node_names=y

3. Optimize for inference

python -m tensorflow.python.tools.optimize_for_inference --input graph_frozen.pb --output graph_optimized.pb --input_names=x --output_names=y

4. Using Optimized graph

import tensorflow as tf

with tf.gfile.GFile('graph_optimized.pb', 'rb') as f:
    graph_def_optimized = tf.GraphDef()
    graph_def_optimized.ParseFromString(f.read())

G = tf.Graph()

with tf.Session(graph=G) as sess:
    y, = tf.import_graph_def(graph_def_optimized, return_elements=['y:0'])
    print('Operations in Optimized Graph:')
    print([op.name for op in G.get_operations()])
    x = G.get_tensor_by_name('import/x:0')
    tf.global_variables_initializer().run()
    out = sess.run(y, feed_dict={x: 1.0})
    print(out)

#Output
#Operations in Optimized Graph:
#['import/x', 'import/a', 'import/y']
#6.0

This is a very helpful, constructive and detailed answer. Thank you! – Sycorax Aug 3 at 20:30

Thanks, I am glad I could help. – vijay m Aug 3 at 20:35


Excellent answer! Your comments were invaluable as I didn’t realize you had to save the graph and checkpoint separately. BTW, I did have to change --input_checkpoint test_model to --input_checkpoint ./test_model to get freeze_graph to work. – tdube Aug 3 at 20:46

In y, = tf.import_graph_def(graph_def_optimized, return_elements=['y:0']), is y, a typo, or am I missing something subtle? – Sycorax yesterday


You are doing it wrong: --input expects a GraphDef file for the script, not the data part of the checkpoint. You need to freeze the model to a .pb file, or get the prototxt for the graph, and then use the optimize_for_inference script.

This script takes either a frozen binary GraphDef file (where the weight variables have been converted into constants by the freeze_graph script), or a text GraphDef proto file (the weight variables are stored in a separate checkpoint file), and outputs a new GraphDef with the optimizations applied.

Get the graph proto file using write_graph.

Get the frozen model using freeze_graph.
