Home automation is a popular topic in IoT development, and facial recognition is a popular application of machine learning. This project combines the two, using well-known facial recognition techniques to build the "welcome home" system: a system designed to recognise the faces of members of the household as they come home and set the home to their desired "settings". These settings might include turning on lights, switching on the kettle or putting their TV show on the TV.
Problem statement:
Many people come home and want some activity to greet them, or want home automation that saves them time; in particular, automation that sets the house up the moment they arrive home. The difficulty is that different members of the household want different settings applied. The problem is therefore to identify each member using facial recognition and then trigger the micro-controller to set the home to that person's desired settings.
Solution:
The proposed solution is called "The Welcome Home System". It is a system designed to recognise the user's face and activate home automation features based on their profile. A picture of the finished system is shown below.
When the system is triggered by facial recognition, the lights in the picture above turn on to simulate a home automation device being switched on.
The system is administered using the “Welcome Home Control panel” pictured below, which is a Tkinter application. In the control panel you can add or remove features from each user’s profile. Features are home automation devices that the user wishes to turn on or off. Currently there are only two users in the system.
Note:
This project uses a LOT of downloaded Python packages, and sometimes the packages DO NOT play nicely with each other; there are also dependencies in the list of software packages that need to be installed. Also, for any Python package, if you are using Python 3 be sure to use pip3 install instead of pip install.
Hardware Requirements:
As discussed, the system uses the Raspberry Pi, the Particle Argon, the RPi 5MP camera, two breadboards and five LED lights. The Raspberry Pi acts as the face detector; in a full implementation it would be positioned at the entrance to the home (or behind the peephole in the door), equipped with a motion sensor to detect when a person has come home, and would communicate the name of the recognised person to the Particle Argon via MQTT. The Particle Argon is the controller: it receives a message from the RPi saying which face has been recognised and then sends instructions to the devices in the home based on the user profile created in the Welcome Home control panel above.
Installing software on the RPi:
As discussed, this guide assumes that you are in possession of an RPi with the latest version of Raspbian installed.
Software Requirements:
To get the project to run you will ultimately need to install the facial recognition package, plus some additional software required to make it work and to process images from the RPi camera.
Facial recognition is run through a Python package called face_recognition 1.3.0 ("face_recognition"). For this package to run, OpenCV2 also needs to be installed, as well as imutils and numpy.
Paho-mqtt, a Python package for sending and receiving MQTT messages, is also required.
Download the following python packages:
• Paho-mqtt
• openCV2
• face_recognition 1.3.0
• numpy
• imutils
• pickle
To install the packages above, the following dependencies need to be installed. This list also includes the commands required to install some of the software mentioned previously.
• sudo apt-get update
• pip3 install opencv-python
• sudo apt-get install libatlas-base-dev
• sudo apt-get install libqt4-test
• sudo pip3 install imutils
• sudo modprobe bcm2835-v4l2
• sudo apt-get install libhdf5-dev
• sudo apt-get install libgfortran3
• sudo apt install gfortran
• sudo pip3 install numpy
• sudo pip3 install face_recognition
To install face_recognition, run the following commands on the RPi command line interface (CLI). This is an updated version of the installation guide provided on the official GitHub page for the facial recognition API.
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential \
    gfortran \
    graphicsmagick \
    libgraphicsmagick1-dev \
    libatlas-base-dev \
    libavcodec-dev \
    libavformat-dev \
    libboost-all-dev \
    libgtk2.0-dev \
    libjpeg-dev \
    liblapack-dev \
    libswscale-dev
sudo pip3 install face_recognition
The majority of the packages installed are dependencies; they are not directly used to get facial recognition working, but they must nonetheless be installed for the face_recognition package to work. If the entire list of dependencies is not installed, the face_recognition installation will not complete. Now, with the required software installed, we can move on to setting up the hardware.
Setting up the hardware
A useful item not included in the list of items is a board to attach the components to; in the example photographed, a small white board was used.
- First, attach the Particle Argon to one of the breadboards and place it in the left-hand corner of the white board.
- Secondly, attach the RPi to the right hand side of the white board.
- Attach the third bread board to the center of the white board.
- Next apply the labels "LOUNGE", "STUDY", "KETTLE", "ALTERED-CARBON" and "ATYPICAL" to the breadboard.
- In between the labels, place the LED lights, oriented on their sides. These will act as proxies for the home automation devices that the system is meant to control.
- Next, connect the power supply to both the Particle Argon and the RPi.
This guide assumes that you have already set up the Particle Argon and have a Particle.io account. If not, please refer to the following helpful guide:
https://docs.particle.io/quickstart/argon/
Once the Particle Argon has been set up you will need to enter the following code into a new code window and flash it to your device. This code subscribes to the WELCOME_HOME event published by the RPi (which will come later in the guide), converts the payload into a C string (an array of characters) and compares it to a set of pre-defined keywords. If a match occurs, an LED is turned on to simulate home automation.
#include <MQTT.h>

// Connect to the public Mosquitto test broker on the default MQTT port
MQTT client("test.mosquitto.org", 1883, callback);

// Called whenever a message arrives on a subscribed topic
void callback(char* topic, byte* payload, unsigned int length)
{
    // Copy the payload into a null-terminated C string
    char p[length + 1];
    memcpy(p, payload, length);
    p[length] = '\0';
    Particle.publish(p);

    // Compare the message against the pre-defined keywords and switch on the matching LED
    if (!strcmp(p, "-atypical-on")){
        Particle.publish("atypical");
        digitalWrite(D2, HIGH);
    }
    if (!strcmp(p, "-altered-carbon-on")){
        Particle.publish("alteredcarbon");
        digitalWrite(D3, HIGH);
    }
    if (!strcmp(p, "-kettle-on")){
        Particle.publish("kettle");
        digitalWrite(D4, HIGH);
    }
    if (!strcmp(p, "-study-light-on")){
        Particle.publish("study");
        digitalWrite(D5, HIGH);
    }
    if (!strcmp(p, "-loung-light-on")){
        Particle.publish("lounge");
        digitalWrite(D6, HIGH);
    }
    if (!strcmp(p, "-all-off")){
        // "Going Out": turn every LED off
        Particle.publish("off");
        digitalWrite(D2, LOW);
        digitalWrite(D3, LOW);
        digitalWrite(D4, LOW);
        digitalWrite(D5, LOW);
        digitalWrite(D6, LOW);
    }
}

void setup()
{
    // Connect to the broker and subscribe to the topic published by the RPi
    client.connect("WELCOME_HOME_CONTROL");
    client.subscribe("WELCOME_HOME");

    // The LED pins that stand in for home automation devices
    pinMode(D2, OUTPUT);
    pinMode(D3, OUTPUT);
    pinMode(D4, OUTPUT);
    pinMode(D5, OUTPUT);
    pinMode(D6, OUTPUT);
}

void loop()
{
    // Process incoming MQTT messages while connected
    if (client.isConnected()){
        client.loop();
    }
}
Now that the Particle has been flashed and is ready to receive messages from the RPi, we need to connect the LED lights. Connect pin D6 on the Particle to the top-most LED, pin D5 to the second LED, and so on until pin D2 has been connected. Then connect the power supply for the Particle. Note that in the code above, when a particular message is received from the RPi, the corresponding LED is turned on.
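Before moving on, it can help to check that the Argon reacts to messages. The snippet below is a quick test (not part of the original programs) that uses paho-mqtt on the RPi to publish one of the pre-defined keywords to the WELCOME_HOME topic on test.mosquitto.org; if everything is wired correctly, the kettle LED on pin D4 should light up.

# test_publish.py - quick wiring check (assumed helper script, not from the original project)
import paho.mqtt.publish as publish

# Publish one of the keywords the Argon firmware listens for
publish.single("WELCOME_HOME", "-kettle-on", hostname="test.mosquitto.org")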
Setting up the RPi
The RPi only needs the camera and the power supply connected for it to work in this project. The camera can be connected by lifting the release on the camera connector and inserting the camera's ribbon cable. Make sure that the release is pressed back down to properly seat the camera module.
The RPi is the main driver of the WELCOME HOME application. It stores the programs used to encode images of the users, and the application used to match frames from the camera against the images stored on the RPi using the facial recognition package.
The first thing to do is explore the folder structure of the WELCOME HOME application; the file structure should be as below. The folder called "photos" is where the photographs of the users should be stored.
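As a rough illustration only, based on the files referenced throughout this guide (your copy may use different names), the application folder looks something like this:

WELCOMEHOME/
    photos/                               # one photo per user
    WELLCOMEHOMEIMAGEENCODING.py          # encodes the photos into face encodings
    WELCOMEHOMESYSTEM.py                  # main recognition program
    encodings.pickle                      # face encodings produced by the encoding program (assumed name)
    users.pkl                             # user preferences created by the control panel GUI
    haarcascade_frontalface_default.xml   # XML face detector file (assumed name)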
Below is the photos folder, with a photo for each user following the naming convention. The naming convention is important, because the file name is what identifies the user.
This leads into the first Python program, which is used to "encode" each photo into a 128-dimensional encoding (a compact numerical representation of the face produced by the face_recognition package). The mathematics of how this encoding is produced is not the point of the task; what matters is that the application needs this "data" to compare faces against.
The line of code above shows the image being converted to an encoding. This happens inside a loop that iterates over all files in the directory, so it is important that only pictures of faces are stored in this folder, otherwise the program will crash; it may take several attempts to get a set of working photos.
The code above saves the encodings as a pickle file. Pickle is Python's serialisation module, which lets objects such as lists and arrays be stored for later use; the pickle file (called "pickle" in this project) will be used to store the face encodings. The full WELLCOMEHOMEIMAGEENCODING.py is attached.
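Since the full listing is attached rather than reproduced here, the following is a minimal sketch of what the encoding program does, assuming the photos live in a "photos" folder, the file name (minus extension) is the user's name, and the output file is called encodings.pickle; the actual WELLCOMEHOMEIMAGEENCODING.py may differ in its details.

# Minimal sketch of the encoding step (assumed layout: photos/<username>.jpg)
import os
import pickle
import face_recognition

known_encodings = []
known_names = []

for filename in os.listdir("photos"):
    # The file name (without extension) is assumed to identify the user
    name = os.path.splitext(filename)[0]

    # Load the photo and compute its 128-dimensional face encoding
    image = face_recognition.load_image_file(os.path.join("photos", filename))
    encodings = face_recognition.face_encodings(image)

    if len(encodings) == 0:
        # No face found in this photo - skip it rather than crash
        continue

    known_encodings.append(encodings[0])
    known_names.append(name)

# Save the encodings and names for later use by the main program
data = {"encodings": known_encodings, "names": known_names}
with open("encodings.pickle", "wb") as f:
    f.write(pickle.dumps(data))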
The next program to look at is the GUI program, which is used to set the user preferences.
The GUI program saves the user preferences to a pickle file; the user preferences define which messages are sent by the RPi to the Particle once a face is detected.
The function above demonstrates saving the user preferences to the "users.pkl" file. The user is selected from the userchosen control and the actions are appended to a list based on the actionchosen control on the Tkinter GUI, as can be seen in the code below.
Each of the functions defined below is attached to a button on the GUI form with the ultimate goal of creating a list of actions to take once a face is recognised by the WELCOMEHOME.py program.
The code below is for the controls which call the above functions based on buttons added to the GUI.
The GUI can also be used to turn off all of the lights on the prototype using the "Going Out" button, which publishes "-all-off" to the MQTT topic; the Argon then sets all of its pins to LOW.
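The control-panel code itself is only shown in the screenshots, so the following is a small sketch of the pattern described above. The control names userchosen and actionchosen come from the text; the user names, action list, file name and overall layout are illustrative assumptions.

# Minimal sketch of the Welcome Home Control panel (illustrative only)
import pickle
import tkinter as tk
from tkinter import ttk
import paho.mqtt.publish as publish

USERS_FILE = "users.pkl"      # file name referenced in the text
ACTIONS = ["-lounge-light-on", "-study-light-on", "-kettle-on",
           "-altered-carbon-on", "-atypical-on"]

# user name -> list of actions (MQTT messages) to apply when recognised
profiles = {"User1": [], "User2": []}

def add_action():
    # Append the selected action to the selected user's profile
    profiles[userchosen.get()].append(actionchosen.get())

def save_profiles():
    # Serialise the profiles to users.pkl for the main recognition program
    with open(USERS_FILE, "wb") as f:
        pickle.dump(profiles, f)

def going_out():
    # Publish "-all-off" so the Argon turns every LED off
    publish.single("WELCOME_HOME", "-all-off", hostname="test.mosquitto.org")

root = tk.Tk()
root.title("Welcome Home Control panel")

userchosen = ttk.Combobox(root, values=list(profiles.keys()))
actionchosen = ttk.Combobox(root, values=ACTIONS)
userchosen.pack()
actionchosen.pack()

tk.Button(root, text="Add action", command=add_action).pack()
tk.Button(root, text="Save", command=save_profiles).pack()
tk.Button(root, text="Going Out", command=going_out).pack()

root.mainloop()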
As mentioned earlier, the main purpose of the GUI is to create the users.pkl file, which is then used by the WELCOMEHOMESYSTEM.py program. The code which reads the user profiles is shown below:
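In sketch form, reading the profiles back is a single pickle load (file name as used in the GUI section above):

# Load the user profiles written by the control panel GUI
import pickle

with open("users.pkl", "rb") as f:
    users = pickle.load(f)   # e.g. {"User1": ["-kettle-on", ...], "User2": [...]}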
The loaded profile is then used in the following function:
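A rough sketch of what that function might look like, given the loaded profiles and paho-mqtt (the function name and exact structure are assumptions, not the original code):

import time
import paho.mqtt.publish as publish

def apply_settings(name):
    # Publish each action in the recognised user's profile to the Argon
    for action in users.get(name, []):
        publish.single("WELCOME_HOME", action, hostname="test.mosquitto.org")
        time.sleep(1)   # brief pause between messages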
This function publishes an MQTT event to the Particle device (as shown earlier, the Particle then turns on the lights based on the user profile). The way the main program works is that it continuously reads frames from the camera using vs.read(); each frame is re-sized and the "detector" is used to detect a face based on the HOG method, using the XML file provided in the application folder, which provides the coordinates of the face in the image. The program then loops over each of the encodings computed from the camera frame and compares them to the encodings we loaded earlier (the ones saved to the pickle file in a previous step). If a match is found the loop breaks and the camera turns off. Note that the function:
matches = face_recognition.compare_faces(data["encodings"], encoding)
name = "Unknown"
returns a list of matches (one boolean per known encoding); however, in this program the loop terminates as soon as a single match is found.
Once a match is found, the message "Your settings have been applied, welcome home X" is shown on the screen, a delay is added using time.sleep(1), and the routine published to the Argon is printed on the screen.
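Pulling the pieces together, below is a condensed sketch of the main recognition loop. It assumes the imutils VideoStream camera wrapper and the file names used earlier, and it uses face_recognition's built-in HOG face detector rather than the XML cascade file mentioned above; the real WELCOMEHOMESYSTEM.py will differ in its details.

# Condensed sketch of the main recognition loop (illustrative, not the original listing)
import time
import pickle
import cv2
import imutils
from imutils.video import VideoStream
import face_recognition
import paho.mqtt.publish as publish

# Load the known face encodings and the user profiles created earlier
data = pickle.loads(open("encodings.pickle", "rb").read())
users = pickle.load(open("users.pkl", "rb"))

vs = VideoStream(usePiCamera=True).start()
time.sleep(2.0)   # let the camera warm up

recognised = None
while recognised is None:
    # Grab a frame and shrink it to speed up detection
    frame = vs.read()
    frame = imutils.resize(frame, width=500)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Find faces in the frame and compute their encodings
    boxes = face_recognition.face_locations(rgb)
    encodings = face_recognition.face_encodings(rgb, boxes)

    for encoding in encodings:
        matches = face_recognition.compare_faces(data["encodings"], encoding)
        name = "Unknown"
        if True in matches:
            # Take the first matching known face and stop looking
            name = data["names"][matches.index(True)]
            recognised = name
            break

vs.stop()   # turn the camera off once a match is found

# Apply the recognised user's settings via MQTT
print("Your settings have been applied, welcome home " + recognised)
time.sleep(1)
for action in users.get(recognised, []):
    print(action)
    publish.single("WELCOME_HOME", action, hostname="test.mosquitto.org")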
Sub routine running
Facial detection
Lights turning on:
This concludes the guide on how to set up and run the WELCOMEHOMESYSTEM. I hope that it was helpful!