
Disease Detection in Plants



We will use Keras to build this program. Keras is a deep learning API written in Python, running on top of the machine learning platform TensorFlow. It was developed with a focus on enabling fast experimentation. Keras follows best practices for reducing cognitive load: it offers consistent & simple APIs, it minimises the number of user actions required for common use cases, and it provides clear & actionable error messages. It also has extensive documentation and developer guides.
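
To give a flavour of the API, here is a toy model definition (an illustrative sketch only, not part of the disease-detection program we build below):

import tensorflow as tf

# A toy Keras model: two Dense layers grouped in a Sequential container
toy = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
toy.compile(optimizer='adam', loss='binary_crossentropy')
toy.summary()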


First we will import all the libraries we need. The dataset itself consists of plant-leaf images organised on disk into one sub-folder per class (the three apple-leaf disease categories used below); we will read the images with OpenCV.

import os
import random

import cv2
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

from tensorflow import keras
from tensorflow.keras import optimizers
from tensorflow.keras.applications import vgg16
from tensorflow.keras.layers import Dense, Dropout, Flatten, InputLayer
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.preprocessing import image

It can take a lot of time (hours to days) to create neural network models and to train them using the traditional methods. But if one has pre-constructed network structure and pre-trained weights then it may take just a few seconds to do the same. This way, learning outcomes are transferred between different parties.Transfer learning generally refers to a process where a model which is trained on one problem is used in some way on another problem which is relatable. Furthermore, you don’t need to have a large scale training data set once learning outcomes transferred.
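
As a brief sketch of the pattern in Keras (illustrative only; the full version for our data appears later in the post): load a network with pre-trained ImageNet weights, freeze it, and attach a small trainable layer on top.

# Illustrative transfer-learning pattern (assumes ImageNet weights and a
# hypothetical 3-class problem); not the exact model built later
base = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet',
                                         input_shape=(299, 299, 3), pooling='avg')
base.trainable = False  # keep the pre-trained weights fixed
head = tf.keras.Sequential([base, tf.keras.layers.Dense(3, activation='softmax')])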


Inception V3 is a type of CNN (Convolutional Neural Network) which consists of many convolution and max-pooling layers, followed by fully connected layers.

We don't need to know its structure by heart to work with it; all of that is handled by Keras.


We will import Inception V3 and then construct a model using it as follows:

from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input
from tensorflow.keras.applications.inception_v3 import decode_predictions

model = InceptionV3(weights='imagenet', include_top=True)

model.summary()  # prints the layer-by-layer structure of the network

Now we have a pre-constructed network structure and pre-trained weights for this ImageNet-winning model. We can ask Inception V3 to classify anything.
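
For example, classifying a single image takes only a few lines ('leaf.jpg' below is a placeholder path; substitute any image on disk):

# Classify one image with the pre-trained network ('leaf.jpg' is a placeholder)
img = image.load_img('leaf.jpg', target_size=(299, 299))  # InceptionV3 expects 299x299
x = np.expand_dims(image.img_to_array(img), axis=0)
x = preprocess_input(x)                  # scale pixels to the range the model expects
preds = model.predict(x)
print(decode_predictions(preds, top=3))  # three most probable ImageNet labels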


Now, we will define a function named classification_v3 which will collect the training data and print the 3 most probable ImageNet candidates for each image in the categories Apple___Apple_scab, Apple___Cedar_apple_rust and Apple_Frogeye_Spot, displaying each image together with its predictions.

datadir = loc  # `loc` should hold the path to the root folder of the dataset
# specify sub-folder names in the list below
categories = ['Apple___Apple_scab', 'Apple___Cedar_apple_rust', 'Apple_Frogeye_Spot']

img_size = 299  # InceptionV3 expects 299x299 input images
training_data = []

def classification_v3():
    for category in categories:
        path = os.path.join(datadir, category)
        classnum = categories.index(category)
        for img in os.listdir(path):
            img_arr = cv2.imread(os.path.join(path, img))
            img_arr = cv2.cvtColor(img_arr, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; the model expects RGB
            new_arr = cv2.resize(img_arr, (img_size, img_size))

            x = np.expand_dims(new_arr.astype('float32'), axis=0)
            x = preprocess_input(x)

            features = model.predict(x)
            print(decode_predictions(features, top=3))

            plt.imshow(image.load_img(os.path.join(path, img)))
            plt.show()

            # store the resized image and its class index for training later
            training_data.append([new_arr, classnum])

classification_v3()
print(len(training_data))
random.shuffle(training_data)

Now that we have constructed a model and collected the training data (each resized image paired with its class label), we will prepare it for training by reshaping and scaling the images, and we will define a function get_features that returns the frozen network's output for a batch of preprocessed images.


The main concept is stacking convolutional layers to create a deep neural network. Here we use InceptionV3 as the frozen feature-extractor base; a VGG16 (Visual Geometry Group) base is kept in the code as a commented-out alternative.

x = []
y = []
for img_arr, label in training_data:
    x.append(img_arr)
    y.append(label)

X = np.array(x).reshape(-1, img_size, img_size, 3)  # 3 channels for RGB colour images
y = np.array(y)
print(X.shape)

train_imgs_scaled = X.astype('float32')
train_imgs_scaled /= 255  # scale pixel values to [0, 1]

batch_size = 30
num_classes = 3  # the three apple-leaf categories
epochs = 30
input_shape = (img_size, img_size, 3)

# Frozen InceptionV3 base used purely as a feature extractor
base = tf.keras.applications.InceptionV3(
    include_top=False, weights='imagenet', input_tensor=None,
    input_shape=input_shape, pooling='avg'
)
'''
# A VGG16 base can be swapped in instead:
base = vgg16.VGG16(include_top=False, weights='imagenet',
                   input_shape=input_shape)
'''
output = base.layers[-1].output
output = keras.layers.Flatten()(output)

feature_model = Model(base.input, output)
feature_model.trainable = False

for layer in feature_model.layers:
    layer.trainable = False


def get_features(model, input_imgs):
    features = model.predict(input_imgs, verbose=0)
    return features


train_features = get_features(feature_model, train_imgs_scaled)

feature_dim = feature_model.output_shape[1]
model = Sequential()
model.add(InputLayer(input_shape=(feature_dim,)))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss='sparse_categorical_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-4),
              metrics=['accuracy'])

history = model.fit(x=train_features, y=y,
                    validation_split=0.3,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1)

The Sequential() function groups a linear stack of layers into a tf.keras.Model.

We created an instance of Sequential named 'model' and added two hidden Dense layers with 'relu' activation, each followed by Dropout, and an output layer with 'softmax' activation. We then configured training with model.compile and fitted the model on the extracted features.


We will now check the accuracy and loss of the model and will plot the same.

f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)

epoch_list = list(range(1, epochs + 1))
ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy')
ax1.set_xticks(np.arange(0, 31, 5))
ax1.set_ylabel('Accuracy Value')
ax1.set_xlabel('Epoch')
ax1.set_title('Accuracy')
l1 = ax1.legend(loc="best")

ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
ax2.set_xticks(np.arange(0, 31, 5))
ax2.set_ylabel('Loss Value')
ax2.set_xlabel('Epoch')
ax2.set_title('Loss')
l2 = ax2.legend(loc="best")

We can see that the model reaches high accuracy on the training data.
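
Since fit() held out 30% of the data through validation_split, we can also read the final validation accuracy straight from the history object as a quick sanity check:

# Final-epoch metrics recorded during training
print('train accuracy:', history.history['accuracy'][-1])
print('val accuracy:', history.history['val_accuracy'][-1])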


GitHub link:

