Keras is a high-level Python library for working with neural networks. In this module, we’ll demonstrate using Keras to solve an image classification problem.
This is the seventh module in our series on learning Python and its use in machine learning and AI. In the previous one, we discussed the use of NLTK for text analysis. Let’s now dive into Keras, a high-level library for neural networks.
Installation
Installing Keras is easy with Anaconda's conda install command:
conda install keras
This immediately installs all the dependencies you'll need.
Backend Configuration
Keras can use one of several available libraries as its backend, which is the part that handles low-level operations such as tensors. We'll use TensorFlow, which is the default.
To start, we're going to slightly tweak the TensorFlow configuration. Specifically, we'll set the allow_growth option to true, which lets TensorFlow grow its GPU memory usage dynamically rather than allocating everything up front. If we don't, TensorFlow may try to allocate so much memory that your GPU runs out immediately (it did for me). To do this, put the following code at the beginning of your file:
If your backend is TensorFlow 1.x:
from keras.backend.tensorflow_backend import set_session
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
set_session(tf.Session(config=config))
For TensorFlow 2.x, you'll have to call the set_memory_growth function for each of your GPUs. You can read more about the details behind this in the tf.config.experimental.set_memory_growth documentation.
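A minimal sketch of that configuration for TensorFlow 2.x looks something like this (assuming at least one GPU is visible to TensorFlow):

import tensorflow as tf

# Enable memory growth on every GPU TensorFlow can see, so memory is
# allocated on demand instead of claiming the whole GPU up front
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)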
Which version do you have? Run conda list to see which TensorFlow version you have installed. I got 1.13.1 on one computer and 2.1.0 on another, even with the same Anaconda installation command.
If you want to force Keras to use your CPU instead of your GPU, add this before the first Keras import:
import os
# Hide all GPUs from TensorFlow so it falls back to the CPU
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
Running on the CPU is a lot slower than on the GPU, but it's an option if you don't have much GPU memory.
Image Classification with Keras
We're going to demonstrate Keras for image classification, using the Intel Image Classification dataset. This dataset contains images of six classes, separated into six different directories, which is very handy because Keras offers built-in functionality to work with data in that format.
While you don't have to worry too much about the math behind neural networks, you do need enough understanding of their structure, because you have to specify exactly which layers your model consists of.
One of the models Keras offers is the sequential model, a stack of layers. Creating a sequential model and adding layers is very easy:
from keras.models import Sequential
model = Sequential()
model.add(layer_one)
model.add(layer_two)
Here’s what our model will look like for the image classification:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.layers.normalization import BatchNormalization
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation="relu", input_shape=(150, 150, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(64, kernel_size=(3, 3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(128, kernel_size=(3, 3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dropout(0.15))
model.add(Dense(64, activation="relu"))
model.add(Dense(6, activation="softmax"))
The construction of a neural network model is beyond the scope of this module, but in short: a repeated pattern of a convolution layer (with an increasing number of filters, because the patterns it detects get more complex), a max pooling layer, and batch normalization is a common first stage in image classification problems. The output of that stage is multi-dimensional, and we flatten it into a one-dimensional vector with a Flatten layer. We end with a couple of densely connected layers, with a dropout layer in between to help fight overfitting. The last layer must output a vector with six elements because we have six categories.
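If you want to double-check the structure you just built, model.summary() prints an overview of the layers, their output shapes, and the number of trainable parameters:

# Print a layer-by-layer overview of the model
model.summary()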
Next, we compile the model, meaning we configure it for training:
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
Our loss function, sparse_categorical_crossentropy, is a good fit for classification problems whose categories don't overlap. See the Keras documentation for a full discussion of loss functions. We're using the Adam optimizer; a full list of optimizers can be found in the relevant documentation. Finally, metrics specifies what the model evaluates during training and testing.
Fitting the Model
Now that we have the structure of our model, we can fit it to our dataset.
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(validation_split=0.2)

train_generator = datagen.flow_from_directory(
    "intel-images/seg_train/seg_train",
    batch_size=32, target_size=(150, 150), class_mode="sparse",
    subset="training")

validation_generator = datagen.flow_from_directory(
    "intel-images/seg_train/seg_train",
    batch_size=32, target_size=(150, 150), class_mode="sparse",
    subset="validation")

model.fit_generator(train_generator, epochs=40,
                    validation_data=validation_generator)
(Feel free to decrease the number of epochs if you want to go through the process a little faster, with lower accuracy, especially if you’re using CPU. 40 epochs take quite a while.)
As per the Keras documentation, this is what an ImageDataGenerator does:
Generate batches of tensor image data with real-time data augmentation. The data will be looped over (in batches).
Our example doesn’t do any data augmentation. We'll take a look at that feature a little later.
In the preceding code, validation_split=0.2 means we'll use 20% of the training set for validation. Since the dataset only comes with a training set and a test set, we have to use a subset of the training set as a validation set.
flow_from_directory is a neat built-in function that is a perfect fit for a dataset structured like ours, with a subdirectory for each category.
class_mode="sparse" means we're working with one-dimensional integer labels.
fit_generator fits the model to the data produced by the generator, for the number of epochs we specify.
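If you're curious which integer label each category got, you can inspect the generator's class_indices mapping; the exact names depend on the subdirectory names in your copy of the dataset:

# Maps each category (subdirectory name) to its integer label,
# for example {'buildings': 0, 'forest': 1, ...}
print(train_generator.class_indices)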
Testing the Model
After training, we can test the model with the evaluate_generator function. Just like fit_generator, it takes a generator as an argument. We can create one for our test data, similar to what we did for the training data:
test_datagen = ImageDataGenerator()

test_generator = test_datagen.flow_from_directory(
    "intel-images/seg_test/seg_test",
    target_size=(150, 150), class_mode="sparse")

print(model.evaluate_generator(test_generator))
For our example, this returns an array of two values: loss and accuracy. (You can verify that by looking at the model.metrics_names value.)
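If you'd like the metric names next to their values, one way (a small sketch on top of the example above) is to pair them up:

results = model.evaluate_generator(test_generator)
# Pair each metric name with its value, e.g. {'loss': ..., 'acc': ...}
print(dict(zip(model.metrics_names, results)))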
I had an accuracy of 81.7%, which isn’t too bad for a relatively simple model on a non-trivial dataset.
Generating Predictions
You can now use the model.predict method to generate predictions for any image. It takes a NumPy array of images as input, where each image is itself a NumPy array of shape (150, 150, 3). To make a prediction for a single image, you can do this:
import skimage.io
import numpy as np
model.predict(np.expand_dims(skimage.io.imread("file.jpg"), axis=0))
skimage.io.imread reads the image and expand_dims adds another dimension to the array, turning the single image into a batch of one. The output is an array of predictions, where each prediction is an array of values indicating the probability of each category.
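To turn those probabilities into a category name, you can take the index of the highest value and look it up in the class_indices mapping of the training generator. A small sketch, assuming train_generator is still available and the image has the 150x150 size the model expects:

predictions = model.predict(np.expand_dims(skimage.io.imread("file.jpg"), axis=0))
# Index of the most probable category for our single image
predicted_index = np.argmax(predictions[0])
# Invert the {name: index} mapping to get {index: name}
index_to_class = {v: k for k, v in train_generator.class_indices.items()}
print(index_to_class[predicted_index])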
Data Augmentation
Data augmentation is when you generate new training data based on your existing training set, to improve accuracy and generalization and reduce overfitting.
For images, data augmentation can be done with several transformations on the images: rotations, flipping, zooming, shifting, shearing, and so on.
ImageDataGenerator makes this easy. To apply data augmentation to our model, only two lines of code need to change.
First, instantiate ImageDataGenerator with more parameters, specifying the transformations we want:
datagen = ImageDataGenerator(rotation_range=30,
                             horizontal_flip=True,
                             zoom_range=0.2,
                             shear_range=0.2)
There are a lot more possibilities — see the Image Preprocessing documentation for a full list of parameters.
The second change is the fit_generator call. This function has optional steps_per_epoch and validation_steps parameters that we could leave out before because we had a fixed number of training samples. With data augmentation, we have a potentially much larger number of training samples, so we have to specify how many to use per epoch. If we don't, the function will just use our fixed-size sample set. One step corresponds to one batch of the given batch_size.
model.fit_generator(train_generator, epochs=40,
                    validation_data=validation_generator,
                    steps_per_epoch=1600, validation_steps=32)
Again, feel free to reduce the number of epochs or the number of steps if you’d like the process to be faster. I had an accuracy of 85.5% with this, after 2-3 hours of training.
Saving and Restoring a Model
Keras allows you to save your trained model in the HDF5 format:
model.save("images_model.h5")
Restoring the model is a single line as well:
import keras.models
model = keras.models.load_model("images_model.h5")
This requires the h5py package, which should already be installed if you used conda install. If you don't have it, run this pip command in a Jupyter Notebook cell:
!pip install --upgrade h5py
Next Steps
In this module, we walked through the use of Keras on an image classification problem. Keras has many more layers available than the ones we used here. If you want to dive deeper, the Keras documentation is a good starting point; it also offers plenty of examples of common deep learning problems.
In the next module, we'll take a brief look at TensorFlow, NumPy, and scikit-learn.