# In TensorFlow, what is tf.identity used for?

I've seen `tf.identity` used in a few places, such as the official CIFAR-10 tutorial and the batch-normalization implementation on Stack Overflow, but I don't see why it's necessary.

What's it used for? Can anyone give a use case or two?

One proposed answer is that it can be used for transfers between the CPU and the GPU, but that isn't clear to me. To extend the question based on this: in the CIFAR-10 multi-GPU code, `loss = tower_loss(scope)` sits inside the GPU device block, which suggests to me that all operators defined in `tower_loss` are mapped to the GPU. Then, at the end of `tower_loss`, we see `total_loss = tf.identity(total_loss)` before it's returned. Why? What would be the flaw in not using `tf.identity` here?
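
As a partial answer on the device-transfer point, here is a minimal sketch of how `tf.identity` can pin an explicit copy of a tensor onto another device. The variable name `v`, the device strings, and `allow_soft_placement` (so it still runs on a CPU-only machine) are my choices, not code from the tutorial; I've used the v1 compat API so it runs on current TensorFlow:

```python
import tensorflow as tf

tf1 = tf.compat.v1            # v1 graph-mode API, as in the original post
tf1.disable_eager_execution()

graph = tf1.Graph()
with graph.as_default():
    # Pin a variable to the CPU.
    with tf.device("/cpu:0"):
        v = tf1.get_variable("v", initializer=3.0)

    # tf.identity inside a device block creates an explicit copy op on that
    # device, so reading v through v_copy forces a CPU-to-GPU transfer.
    with tf.device("/gpu:0"):
        v_copy = tf.identity(v)

    init = tf1.global_variables_initializer()

# allow_soft_placement lets the graph run even on a machine with no GPU.
config = tf1.ConfigProto(allow_soft_placement=True)
with tf1.Session(graph=graph, config=config) as sess:
    sess.run(init)
    print(sess.run(v_copy))   # 3.0
    print(v_copy.device)      # the device requested for the copy op
```

Without the `tf.identity`, reading `v` from GPU ops would still trigger implicit transfers, but there would be no single graph node representing the copy that you can place, name, or hang control dependencies on.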

After some stumbling, I think I've identified a single use case that covers all the examples I've seen. If there are other use cases, please elaborate with an example.

Use case:

Suppose you'd like to run an operator every time a particular variable is evaluated. For example, say you'd like to add one to `x` every time the variable `y` is evaluated. It might seem like this will work:

```
import tensorflow as tf

x = tf.Variable(0.0)
x_plus_1 = tf.assign_add(x, 1)

with tf.control_dependencies([x_plus_1]):
    y = x  # plain assignment: y is just another name for x, no new op

init = tf.initialize_all_variables()

with tf.Session() as session:
    init.run()
    for i in range(5):
        print(y.eval())
```

It doesn't: it prints 0, 0, 0, 0, 0. Instead, it seems that we need to add a new node to the graph within the `control_dependencies` block. So we use this trick:

```
import tensorflow as tf

x = tf.Variable(0.0)
x_plus_1 = tf.assign_add(x, 1)

with tf.control_dependencies([x_plus_1]):
    y = tf.identity(x)  # a new op inside the block, so the dependency applies

init = tf.initialize_all_variables()

with tf.Session() as session:
    init.run()
    for i in range(5):
        print(y.eval())
```

This works: it prints 1, 2, 3, 4, 5.

If in the CIFAR-10 tutorial we dropped `tf.identity`, then `loss_averages_op` would never run.
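
To make that concrete, here is a minimal sketch of the tutorial's pattern, where evaluating the returned loss also updates a moving average of it. The name `loss_var` and the constant loss values are mine, and I've used the v1 compat API so it runs on current TensorFlow; this is a sketch of the pattern, not the tutorial's exact `_add_loss_summaries` code:

```python
import tensorflow as tf

tf1 = tf.compat.v1            # v1 graph-mode API, as in the original post
tf1.disable_eager_execution()

graph = tf1.Graph()
with graph.as_default():
    loss_var = tf1.get_variable("loss", initializer=10.0)

    # Track a moving average of the loss, as the CIFAR-10 tutorial does.
    loss_averages = tf.train.ExponentialMovingAverage(0.9)
    loss_averages_op = loss_averages.apply([loss_var])

    # The identity op is the new node created inside the dependency block,
    # so every evaluation of total_loss also runs loss_averages_op.
    # With `total_loss = loss_var` instead, the averaging op would never fire.
    with tf.control_dependencies([loss_averages_op]):
        total_loss = tf.identity(loss_var)

    avg = loss_averages.average(loss_var)  # the shadow (averaged) variable
    init = tf1.global_variables_initializer()

with tf1.Session(graph=graph) as sess:
    sess.run(init)
    sess.run(total_loss)              # loss is 10.0; average stays at 10.0
    sess.run(loss_var.assign(0.0))
    sess.run(total_loss)              # average moves: 0.9 * 10 + 0.1 * 0
    print(sess.run(avg))              # ~9.0
```

Every `sess.run(total_loss)` fires the averaging update as a side effect, which is exactly what the control-dependency-plus-identity idiom buys you.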

From: stackoverflow.com/q/34877523