In this article we will build a real-time gender detection system using a Convolutional Neural Network (CNN). Let's start.
Gender is part of our identity and a significant element of our social lives. Artificial-intelligence gender recognition can be used in many fields, such as smart human-machine interfaces, health care, cosmetics, e-commerce, and more. Recognizing a person's gender from a facial image is an active, ongoing research problem. Researchers have proposed a number of methods to solve it, but the criteria and actual performance are still inadequate. This project proposes a statistical pattern-recognition approach to the problem. A Convolutional Neural Network (ConvNet/CNN), a deep learning algorithm, is used as the feature extractor in the proposed solution. A CNN takes an input image, assigns importance (learnable weights and biases) to various aspects/objects in the image, and can differentiate between them. A ConvNet requires much less preprocessing than other classification algorithms: while filters are hand-engineered in primitive methods, ConvNets can learn these filters/features with adequate training. In this work, face images of individuals are used to train a convolutional neural network, and gender is predicted with a high rate of success. The dataset contains more than 20,000 images annotated with age, gender, and ethnicity, covering a wide range of poses, facial expressions, lighting, occlusion, and resolution.
Introduction
The aim is to predict the gender of individuals from image datasets. A growing number of applications, especially after the rise of social networks and social media, are concerned with automatic gender classification. Age and gender are two of the most fundamental facial attributes in social interaction. In smart applications such as access control, human-computer interaction, law enforcement, marketing intelligence, and visual surveillance, it is important to recognize gender from a single facial image.
Machine learning offers many algorithms for this task. The Convolutional Neural Network (CNN) is one of the most prevalent and has earned a strong reputation for image feature extraction, which is why it is used in this project.
Methodology
Architecture
Gender recognition using a Convolutional Neural Network: A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm that takes an input image, assigns importance (learnable weights and biases) to various aspects/objects in the image, and can differentiate one from another. A ConvNet requires much less preprocessing than other classification algorithms: while filters are hand-engineered in primitive methods, ConvNets can learn these filters/features with adequate training. The ConvNet architecture is analogous to the connectivity pattern of neurons in the human brain and was inspired by the organization of the visual cortex. Individual neurons respond to stimuli only within a restricted region of the visual field known as the receptive field; a collection of such fields overlaps to cover the entire visual area.
About Dataset
The data collection consists of more than 20,000 facial images with age, gender, and ethnicity annotations. The images cover a wide range of poses, facial expressions, lighting, occlusion, and resolution, and can be used for a variety of tasks such as face detection, age estimation, gender recognition, and landmark localization. In this project the dataset is used to train a convolutional neural network (CNN) architecture for gender detection. We need to split our dataset into three parts: a training set, a test set, and a validation set. The purpose of splitting the data is to avoid overfitting, i.e. paying attention to minor details/noise and optimizing only the training accuracy. We need a model that also performs well on data it has never seen (the test data), which is called generalization. The training set is the subset of the dataset we use to train the model; the model observes and learns from this data and optimizes its parameters. The validation set is used to select hyperparameters (learning rate, regularization parameters, etc.); when the model performs well enough on the validation set, we can stop training. The test set is the remaining subset, used to provide an unbiased evaluation of the final model fit on the training data.
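To make the three-way split concrete, here is a minimal sketch using scikit-learn's train_test_split. The 10%/20% ratios are illustrative assumptions, and data and target refer to the arrays built later in this walkthrough (where the author actually performs a single train/test split and holds out validation data during fitting):
from sklearn.model_selection import train_test_split

# First hold out a test set, then carve a validation set out of the remainder.
train_val_data, test_data, train_val_target, test_target = train_test_split(
    data, target, test_size=0.1)
train_data, val_data, train_target, val_target = train_test_split(
    train_val_data, train_val_target, test_size=0.2)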
Description of Library used
Numpy: NumPy is a basic yet powerful open-source Python package for mathematical and scientific computing and data manipulation.
Cv2: OpenCV is a free, open-source, high-performance library for digital image processing and computer vision.
Matplotlib: Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy.
OS: The os module in Python provides a way of using operating-system-dependent functionality.
Keras: Keras is an open-source, high-level neural network API written in Python. It allows easy and fast prototyping.
Technology
Image Processing : Image processing is a method to perform some operations on an image, in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which input is an image and output may be image or characteristics/features associated with that image.
Computer Vision : Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos and deep learning models, machines can accurately identify and classify objects.
Tools Used
Anaconda Navigator : Anaconda is a free and open-source distribution of the Python and R programming languages for scientific computing (data science, machine learning applications, large-scale data processing, predictive analytics, etc.), that aims to simplify package management and deployment.
Jupyter Notebook IDE: Jupyter Notebook is an open-source web-based application that allows you to create and share documents containing live code, equations, visualizations, and narrative text. Common uses include data cleaning and transformation, numerical simulation, statistical modelling, data visualization, and many others.
Import libraries
#import libraries
import cv2,os
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Conv2D, Flatten, MaxPooling2D
from keras.callbacks import ModelCheckpoint
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
from keras.models import load_model
from random import shuffle
from scipy import misc
import imageio
from keras.utils.np_utils import to_categorical
Procedure
First we set data_path to the folder where our image dataset (UTKFace) is stored, and then we get the list of all the files in that folder using os.listdir( ); len( ) gives the total number of image files.
#use the file path where your dataset is stored
data_path = 'UTKFace'
img_files = os.listdir(data_path)
len(img_files)
shuffle( ) is used to randomize the order of the image files; we then extract the gender field from each filename with a list comprehension.
shuffle(img_files)
gender = [i.split('_')[1] for i in img_files]
In the UTKFace dataset each filename follows the pattern [age]_[gender]_[race]_[date&time].jpg, where the gender field is 0 for male and 1 for female, so split('_')[1] gives the gender label of each image. We then convert these labels to integers and store them in the target list:
target = []
for i in gender:
    i = int(i)
    target.append(i)
Using cv2.imread( ), cv2.cvtColor( ) and cv2.resize( ), we read each image file as an array, convert it to grayscale, resize it to 32x32, and store it in a list named data.
data = []
img_size = 32

for img_name in img_files:
    img_path = os.path.join(data_path, img_name)
    img = cv2.imread(img_path)
    try:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        resized = cv2.resize(gray, (img_size, img_size))
        data.append(resized)
    except Exception as e:
        print("Exception: ", e)
Next we normalize the data by converting the list to a NumPy array and dividing by 255, reshape it to add a single channel dimension, and convert the class vector (integers) into a binary class matrix using to_categorical( ) before training the model. We also save the arrays with np.save( ) for later reuse.
#data values are normalized
data = np.array(data)/255.0
# Reshaping of data
data = np.reshape(data,(data.shape[0],img_size,img_size,1))
new_target = to_categorical(target, num_classes=2)
# saving the file
np.save('target',new_target)
np.save('data',data)
There are two ways to build Keras models: sequential and functional. In this project we use the Sequential API, which builds the model layer by layer.
When Conv2D is used as the first layer, we must define the input shape. This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs.
The Activation layers apply the ReLU non-linearity after each convolution.
The MaxPooling2D layer downsamples spatial data.
The Dropout layer helps prevent the model from overfitting.
A Dense layer is a linear operation in which every input is connected to every output by a weight.
# build the convolutional neural network layers
model = Sequential()
model.add(Conv2D(100,(3,3),input_shape=data.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(1,1)))
model.add(Conv2D(100,(2,2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(1,1)))
model.add(Conv2D(100,(2,2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(400,(2,2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(1,1)))
model.add(Flatten())
model.add(Dropout(0.3))
model.add(Dense(50,activation='relu'))
model.add(Dense(2,activation='softmax'))
Next we compile and fit the model (a sketch of these calls is shown after the data split below).
We summarize the model using model.summary( ): Total params: 3,621,752; Trainable params: 3,621,752; Non-trainable params: 0.
The total length of the dataset is 23,708 images, of which 90% is used for training and 10% for testing; during fitting, 20% of the training portion is further held out for validation.
#Split the data
train_data, test_data, train_target, test_target = train_test_split(data, new_target, test_size=0.1)
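The article does not show the compile and fit calls themselves, so here is a minimal sketch. The loss, optimizer, epoch count, and checkpoint filename pattern are assumptions (the pattern 'model-{epoch:02d}.model' is chosen only so that it yields names like the 'model-10.model' loaded later); the author's exact settings may differ:
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# save a checkpoint whenever the validation loss improves
checkpoint = ModelCheckpoint('model-{epoch:02d}.model', monitor='val_loss',
                             save_best_only=True, verbose=1)

history = model.fit(train_data, train_target, epochs=20,
                    validation_split=0.2, callbacks=[checkpoint])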
We then evaluate the model on the test set and obtain an accuracy of about 0.88.
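A minimal sketch of that evaluation call (the exact code is not shown in the article; variable names follow the split above):
loss, accuracy = model.evaluate(test_data, test_target, verbose=0)
print('Test accuracy:', accuracy)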
Now the model is trained and it's ready to recognize the gender of any random image from a dataset.
We plot a random sample of 15 test images along with their predicted labels and ground truth, displaying each image with its title.
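The plotting code is not included in the article; a minimal sketch with Matplotlib might look like this (the label mapping assumes 0 = male and 1 = female, as in the real-time section below):
labels = {0: 'Male', 1: 'Female'}

# pick 15 random test images and compare predictions with ground truth
idx = np.random.choice(len(test_data), 15, replace=False)
preds = np.argmax(model.predict(test_data[idx]), axis=1)
truth = np.argmax(test_target[idx], axis=1)

plt.figure(figsize=(12, 8))
for i, j in enumerate(idx):
    plt.subplot(3, 5, i + 1)
    plt.imshow(test_data[j].reshape(img_size, img_size), cmap='gray')
    plt.title('pred: {} / true: {}'.format(labels[preds[i]], labels[truth[i]]), fontsize=8)
    plt.axis('off')
plt.tight_layout()
plt.show()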
In this project, Keras is used. Keras is an open-source neural network library. It is user friendly, provides features such as activation functions, layers, and optimizers, and supports CNNs. Keras models can also be deployed to mobile platforms such as iOS and Android and run on the JVM (Java Virtual Machine). Through Keras, the model can apply random transformations and normalization operations to batches of image data using attributes such as height shift, width shift, rotation range, rescale, shear range, zoom range, horizontal flip, and fill mode. Using these attributes the system can automatically rotate, translate, rescale, and zoom into or out of images, as well as apply shearing transformations, flip images horizontally, fill in newly created pixels, and so on. For the purpose of image classification, a ConvNet is used.
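Those augmentation attributes correspond to Keras's ImageDataGenerator. A brief sketch follows; the numeric values are illustrative assumptions, not settings taken from the article:
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=20,        # random rotations in degrees
    width_shift_range=0.1,    # horizontal translations
    height_shift_range=0.1,   # vertical translations
    rescale=1. / 255,         # pixel normalization (skip if the data is already scaled)
    shear_range=0.2,          # shearing transformations
    zoom_range=0.2,           # random zoom in/out
    horizontal_flip=True,     # random horizontal flips
    fill_mode='nearest')      # how newly created pixels are filled

# Example usage with the arrays built earlier (augmented batches of 32 images):
# model.fit(datagen.flow(train_data, train_target, batch_size=32), epochs=20)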
Training Dataset : The training dataset is the set of examples used to train the model, i.e. to fit its parameters.
Validation Dataset : A validation dataset is used to fit the hyper-parameters of the classifier. A validation dataset is necessary because it helps in the reduction of overfitting. The validation dataset is independent of the training dataset.
Test Dataset : The test dataset is used to test the performance of the classifier or model and to check the performance of characteristics such as accuracy, loss, sensitivity, etc. It is independent of the training and validation dataset.
Real time Gender Recognition
Steps to follow :
Face detection with Haar cascade
Gender Recognition with CNN
1. Face detection with Haar cascades :
This is a part most of us have at least heard of. OpenCV/JavaCV provides direct methods to load Haar cascades and use them to detect faces.
2. Gender Recognition with CNN :
The CNN is used for gender recognition. Its output (probability) layer consists of two classes, 'Male' and 'Female'.
img_size=32
#load the best model
model = load_model('model-10.model')
faceCascade=cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# open the video file (use 0 instead to capture from the webcam)
cap = cv2.VideoCapture('vp_debate.mp4')

# label and color for each class
labels_dict = {0:'Male', 1:'Female'}
color_dict = {0:(0,0,255), 1:(0,255,0)}

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, 1.3, 5)

    for x, y, w, h in faces:
        # crop the detected face, then resize and normalize it to match the training data
        face_img = gray[y:y+h, x:x+w]
        resized = cv2.resize(face_img, (img_size, img_size))
        normalized = resized / 255.0
        reshaped = np.reshape(normalized, (1, img_size, img_size, 1))

        # predict the gender and draw the box and label on the frame
        result = model.predict(reshaped)
        label = np.argmax(result, axis=1)[0]
        cv2.rectangle(frame, (x, y), (x+w, y+h), color_dict[label], 2)
        cv2.putText(frame, labels_dict[label], (x, y-10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255,255,255), 2)

    cv2.imshow('Video', frame)
    key = cv2.waitKey(1)
    if key == 27:   # press Esc to stop
        break

cap.release()
cv2.destroyAllWindows()
Conclusion
We proposed a model to classify gender by feeding face images to a CNN, a deep learning algorithm, trained on a broad face dataset. Overall, we think the accuracy of the model is decent, but it can be further improved by using more data, data augmentation, and a better network architecture.