Loading a trained Keras model and continuing training

I was wondering if it was possible to save a partly trained Keras model and continue the training after loading the model again.

The reason for this is that I will have more training data in the future and I do not want to retrain the whole model again.

The functions I am using are:

    #Partly train model
    model.fit(first_training, first_classes, batch_size=32, epochs=20)

    #Save partly trained model
    model.save('partly_trained.h5')

    #Load partly trained model
    from keras.models import load_model
    model = load_model('partly_trained.h5')

    #Continue training
    model.fit(second_training, second_classes, batch_size=32, epochs=20)

Edit 1: added fully working example

With the first dataset, the loss at the last of 10 epochs is 0.0748 and the accuracy is 0.9863.

After saving, deleting and reloading the model, and then training on the second dataset, the final loss and accuracy are 0.1711 and 0.9504 respectively.

Is this caused by the new training data, or has the model been completely re-trained from scratch? (See the check sketched after the example below.)

    """
    Model by: http://machinelearningmastery.com/
    """
    # load (downloaded if needed) the MNIST dataset
    import numpy
    from keras.datasets import mnist
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.utils import to_categorical
    from keras.models import load_model
    numpy.random.seed(7)

    def baseline_model(num_pixels, num_classes):
        # Simple MLP: one hidden layer the size of the input, softmax output
        model = Sequential()
        model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='random_normal', activation='relu'))
        model.add(Dense(num_classes, kernel_initializer='random_normal', activation='softmax'))
        model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
        return model

    if __name__ == '__main__':
        # load data
        (X_train, y_train), (X_test, y_test) = mnist.load_data()

        # flatten 28*28 images to a 784 vector for each image
        num_pixels = X_train.shape[1] * X_train.shape[2]
        X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
        X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
        # normalize inputs from 0-255 to 0-1
        X_train = X_train / 255
        X_test = X_test / 255
        # one hot encode outputs
        y_train = to_categorical(y_train)
        y_test = to_categorical(y_test)
        num_classes = y_test.shape[1]

        # build the model
        model = baseline_model(num_pixels, num_classes)

        #Partly train model
        dataset1_x = X_train[:3000]
        dataset1_y = y_train[:3000]
        model.fit(dataset1_x, dataset1_y, epochs=10, batch_size=200, verbose=2)

        # Final evaluation of the model
        scores = model.evaluate(X_test, y_test, verbose=0)
        print("Baseline Error: %.2f%%" % (100-scores[1]*100))

        #Save partly trained model
        model.save('partly_trained.h5')
        del model

        #Reload model
        model = load_model('partly_trained.h5')

        #Continue training
        dataset2_x = X_train[3000:]
        dataset2_y = y_train[3000:]
        model.fit(dataset2_x, dataset2_y, epochs=10, batch_size=200, verbose=2)
        scores = model.evaluate(X_test, y_test, verbose=0)
        print("Baseline Error: %.2f%%" % (100-scores[1]*100))

Actually, model.save saves all the information needed to restart training in your case. The only thing that could be spoiled by reloading the model is the optimizer state. To check that, try saving and reloading the model and then training it on the same training data.
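
A minimal sketch of that check, reusing the variables from the example above (the file name roundtrip_check.h5 is just an illustration): snapshot the weights and the test scores, save and reload, and confirm both are unchanged. Any difference that only shows up in the next fit call would then point to lost optimizer state rather than lost weights.

    import numpy as np
    from keras.models import load_model

    # Snapshot weights and scores before the save/load round trip
    weights_before = model.get_weights()
    scores_before = model.evaluate(X_test, y_test, verbose=0)

    model.save('roundtrip_check.h5')
    model = load_model('roundtrip_check.h5')

    # The restored weights should match the saved ones exactly
    weights_after = model.get_weights()
    assert all(np.array_equal(b, a) for b, a in zip(weights_before, weights_after))

    # Test scores should be identical as well
    scores_after = model.evaluate(X_test, y_test, verbose=0)
    print(scores_before, scores_after)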

From: stackoverflow.com/q/42666046