Keras: How to get the output of each layer?

I have trained a binary classification model with a CNN; here is my code:

    from keras.models import Sequential
    from keras.layers import (Convolution2D, Activation, MaxPooling2D,
                              Flatten, Dense, Dropout)

    model = Sequential()
    model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
                            border_mode='valid',
                            input_shape=input_shape))
    model.add(Activation('relu'))
    model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=pool_size))
    # (16, 16, 32)
    model.add(Convolution2D(nb_filters*2, kernel_size[0], kernel_size[1]))
    model.add(Activation('relu'))
    model.add(Convolution2D(nb_filters*2, kernel_size[0], kernel_size[1]))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=pool_size))
    # (8, 8, 64) = (2048)
    model.add(Flatten())
    model.add(Dense(1024))
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(2))  # define a binary classification problem
    model.add(Activation('softmax'))

    model.compile(loss='categorical_crossentropy',
                  optimizer='adadelta',
                  metrics=['accuracy'])
    model.fit(x_train, y_train,
              batch_size=batch_size,
              nb_epoch=nb_epoch,
              verbose=1,
              validation_data=(x_test, y_test))

Now I want to get the output of each layer, just like in TensorFlow. How can I do that?

You can easily get the output of any layer by using model.layers[index].output
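
For instance, a minimal sketch (assuming the model and x_test from the question above) that evaluates only the first convolution layer:

    from keras import backend as K

    # Function mapping the model input to the output of layer 0 (the first Convolution2D).
    # learning_phase is included because the model contains a Dropout layer.
    get_first_layer_output = K.function([model.input, K.learning_phase()],
                                        [model.layers[0].output])

    first_layer_out = get_first_layer_output([x_test[:1], 0.])[0]   # 0. = test mode
    print(first_layer_out.shape)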

For all layers, use this:

    from keras import backend as K
    import numpy as np

    inp = model.input                                           # input placeholder
    outputs = [layer.output for layer in model.layers]          # all layer outputs
    functors = [K.function([inp, K.learning_phase()], [out]) for out in outputs]    # evaluation functions

    # Testing
    test = np.random.random(input_shape)[np.newaxis,...]
    layer_outs = [func([test, 1.]) for func in functors]
    print(layer_outs)

Note: to simulate Dropout at work, use 1. as the learning_phase value when computing layer_outs; otherwise use 0.
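
For instance, with the functors and test array built above (the new variable names are just illustrative):

    layer_outs_train = [func([test, 1.]) for func in functors]   # training mode: Dropout active
    layer_outs_test  = [func([test, 0.]) for func in functors]   # test mode: Dropout disabled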

Edit: (based on comments)

K.function creates Theano/TensorFlow tensor functions, which are later used to get the output from the symbolic graph given the input.

Now K.learning_phase() is required as an input because many Keras layers, such as Dropout/BatchNormalization, depend on it to change their behavior between training and test time.

So if you remove the Dropout layer from your code, you can simply use:

    from keras import backend as K
    import numpy as np

    inp = model.input                                           # input placeholder
    outputs = [layer.output for layer in model.layers]          # all layer outputs
    functors = [K.function([inp], [out]) for out in outputs]    # evaluation functions

    # Testing
    test = np.random.random(input_shape)[np.newaxis,...]
    layer_outs = [func([test]) for func in functors]
    print(layer_outs)

Edit 2: More optimized

I just realized that the previous answer is not optimal: for each function evaluation, the data is transferred from CPU to GPU memory, and the tensor calculations for the lower layers are repeated over and over.

Instead, this is a much better way, as you don't need multiple functions but a single function that gives you the list of all outputs:

    from keras import backend as K
    import numpy as np

    inp = model.input                                           # input placeholder
    outputs = [layer.output for layer in model.layers]          # all layer outputs
    functor = K.function([inp, K.learning_phase()], outputs)    # evaluation function

    # Testing
    test = np.random.random(input_shape)[np.newaxis,...]
    layer_outs = functor([test, 1.])
    print(layer_outs)
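
As a further alternative, here is a sketch using Keras' functional Model API, which avoids K.function entirely; predict runs in test mode (Dropout disabled) automatically:

    from keras.models import Model
    import numpy as np

    # A model mapping the original input to every intermediate layer output
    activation_model = Model(model.input, [layer.output for layer in model.layers])

    test = np.random.random(input_shape)[np.newaxis, ...]
    layer_outs = activation_model.predict(test)   # list of arrays, one per layer
    print(layer_outs)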

From: stackoverflow.com/q/41711190