Build an emotion recognition application with TensorFlow.js
You will need Node 8.9+ and Yarn installed on your machine.
In this tutorial, we will look at how to use TensorFlow.js and Pusher to build a realtime emotion recognition application that accepts a face image of a user, predicts their facial emotion and then updates a dashboard with the detected emotions in realtime. A practical use case of this application would be a company getting realtime feedback from users when it rolls out incremental updates to its application.
With the rapid increase in computing power and the ability of machines to make sense of what is going on around them, users now interact with intelligent systems in many of their daily interactions. From Spotify’s awesomely accurate Discover Weekly playlists to Google Photos being able to show you all pictures of “Gaby” in your gallery after identifying her in one picture, companies are now interested in ways they can leverage this “silver bullet” in their service delivery.
What we’ll build
The best part of this is that recognizing a user’s emotion happens right on the client side; the user’s image is never sent over to the server. All that is sent to the server is the detected emotion. This means your users never have to worry about you storing their images on your server. Let’s get to the good stuff now!
Prerequisites
- Node installed on your machine (version 8.9 or above)
- Yarn installed on your machine
- Basic knowledge of JavaScript
What is TensorFlow.js?
TensorFlow.js is a JavaScript library that allows developers to train and use machine learning models in the browser. This really changes the game because it means that users no longer need “super” machines to be able to run our models: as long as they have a browser, they can get stuff done. It also allows developers who are more familiar with JavaScript to get into building and using machine learning models without having to learn a new programming language.
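To make this concrete, here is a tiny standalone example (not part of the app we are building) that creates a tensor and doubles it entirely in the user's browser:
// A quick standalone taste of TensorFlow.js, separate from the tutorial code.
import * as tf from '@tensorflow/tfjs';

// Create a 2x2 tensor and multiply every element by 2, all on the client.
const input = tf.tensor2d([[1, 2], [3, 4]]);
const doubled = input.mul(2);
doubled.print(); // logs [[2, 4], [6, 8]] to the browser console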
Getting started
To build the interface of our application, we are going to use Vue.js. Vue.js is a web framework used to build interactive interfaces with JavaScript. To get started, install the Vue CLI using the command:
yarn global add @vue/cli
Afterwards, create a new Vue project using the command:
vue create realtime-feedback
Follow the prompts to create the application using the Vue Router preset. This creates a starter Vue.js project which we will then update to fit our application.
Install the other JavaScript libraries you are going to use:
yarn add axios @tensorflow/tfjs @tensorflow-models/knn-classifier @tensorflow-models/mobilenet
To get users’ images and feed them to our model, we are going to make use of a webcam class. Fetch the file from here and add it to your realtime-feedback/src/assets directory. Afterwards, go ahead and get the Pusher logo from here and place it in the same realtime-feedback/src/assets directory.
Creating the homepage component
In the src/components folder, create a component titled Camera. Components allow us to split the user interface of the application into reusable parts. Add the following markup to the new component:
<!-- src/components/Camera.vue -->
<template>
<div>
<video autoplay playsinline muted id="webcam" width="250" height="250"></video>
</div>
</template>
[...]
Add the following code below the closing template tag:
// src/components/Camera.vue
[...]
<script>
import {Webcam} from '../assets/webcam'
export default {
name: "Camera",
data: function(){
return {
webcam: null,
}
},
mounted: function(){
this.loadWebcam();
},
methods: {
loadWebcam: function(){
this.webcam = new Webcam(document.getElementById('webcam'));
this.webcam.setup();
}
}
};
</script>
When this component is mounted, the webcam is set up and the user can see a live feed from their camera.
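If you are curious about what the webcam helper you downloaded earlier provides, here is a simplified sketch of the interface this component relies on (a constructor that takes the video element, a setup() method, and a webcamElement property). This is only an illustration of the assumed API; the downloaded file remains the source of truth.
// Simplified sketch only: the downloaded src/assets/webcam.js is the actual implementation.
export class Webcam {
  constructor(webcamElement) {
    // the <video> element the camera stream is attached to
    this.webcamElement = webcamElement;
  }
  async setup() {
    // request camera access and pipe the stream into the video element
    const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false });
    this.webcamElement.srcObject = stream;
    return new Promise(resolve => {
      this.webcamElement.onloadedmetadata = () => resolve();
    });
  }
}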
Application views
Our application will have two basic views:
- Homepage, where users will interact with the camera and take pictures of themselves.
- Dashboard, where you can see a summary of the emotions recognized in realtime.
Configuring the router
To allow for navigation between pages, we are going to make use of the Vue Router in our application. Go ahead and edit your router.js
file to specify what pages to show on different routes:
// src/router.js
import Vue from "vue";
import Router from "vue-router";
import Home from "./views/Home.vue";
import Dashboard from "./views/Dashboard.vue";
Vue.use(Router);
export default new Router({
mode: "history",
base: process.env.BASE_URL,
routes: [
{
path: "/",
name: "home",
component: Home
},
{
path: "/dashboard",
name: "dashboard",
component: Dashboard
}
]
});
Also, you need to ensure that you include the router in your src/main.js
file like this:
// src/main.js
import Vue from "vue";
import App from "./App.vue";
import router from "./router";
Vue.config.productionTip = false;
new Vue({
router,
render: h => h(App)
}).$mount("#app");
Creating the homepage
On the homepage, there are two basic modes: train mode and test mode. To be able to recognize emotions, we pass each image through a pretrained MobileNet model and use the resulting activations to train a KNN classifier for our different moods. In simpler terms, MobileNet is responsible for extracting activations from the image, and the KNN classifier accepts the activation for a particular image and predicts which class it belongs to by selecting the class whose stored examples it is closest to.
More explanation on how predictions are generated will be shared later on in the article.
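As a rough preview of that flow, here is what the interaction between the two models looks like. This is conceptual only; the full component code is shown later in the article, and the snippet assumes mobilenet and classifier have already been loaded inside an async method.
// Conceptual flow only; see the Home component below for the real code.
const activation = mobilenet.infer(imageTensor, 'conv_preds'); // MobileNet embedding for the image
classifier.addExample(activation, classIndex);                 // train mode: store a labelled example
const prediction = await classifier.predictClass(activation);  // test mode: nearest-neighbour lookup
console.log(prediction.classIndex);                            // index of the closest class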
Create a new view in the src/views/
directory of the project:
touch src/views/Home.vue
The homepage has the following template:
<!-- src/views/Home.vue -->
<template>
<div class="train">
<template v-if="mode == 'train'">
<h1>Take pictures that define your different moods in the dropdown</h1>
</template>
<template v-else>
<h1>Take a picture to let us know how you feel about our service</h1>
</template>
<select id="use_case" v-on:change="changeOption()">
<option value="train">Train</option>
<option value="test">Test</option>
</select>
<Camera></Camera>
<template v-if="mode == 'train'">
<select id="emotion_options">
<template v-for="(emotion, index) in emotions">
<option :key="index" :value="index">{{emotion}}</option>
</template>
</select>
<button v-on:click="trainModel()">Train Model</button>
</template>
<template v-else>
<button v-on:click="getEmotion()">Get Emotion</button>
</template>
<h1>{{ detected_e }}</h1>
</div>
</template>
[...]
If the selected mode is train mode, the camera feed is displayed and a dropdown is presented for the user to train the different available classes.
Note: In a real-world application, you’ll likely want to train your model before porting it to the web.
If test mode is selected, the user is shown a button prompting them to take a picture of their face and let the model predict their emotion.
Now, let’s take a look at the rest of the Home
component and see how it all works:
<!-- src/views/Home.vue -->
[...]
<script>
// @ is an alias to /src
import Camera from "@/components/Camera.vue";
import * as tf from '@tensorflow/tfjs';
import * as mobilenetModule from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';
import axios from 'axios';
[...]
First, import the Camera component, the TensorFlow.js library, the MobileNet model and the KNNClassifier. There are other open-source models available on the TensorFlow GitHub repository.
Afterwards, specify the data to be rendered to the DOM. Notice that there’s an array of the emotions that we train the model to recognize and predict. The other data properties include:
- classifier - which will represent the KNNClassifier.
- mobilenet - which will represent the loaded MobileNet model.
- class - which represents the class to train. Used in train mode.
- detected_e - which represents the emotion that the model predicts. Used in test mode.
- mode - which represents what mode is in use.
// src/views/Home.vue
[...]
export default {
name: "Home"
components: {
Camera
},
data: function(){
return {
emotions: ['angry','neutral', 'happy'],
classifier: null,
mobilenet: null,
class: null,
detected_e: null,
mode: 'train',
}
},
[...]
Let’s also add the methods to the Home
component:
// src/views/Home.vue
[...]
mounted: function(){
this.init();
},
methods: {
async init(){
// load the MobileNet model and create a KNN classifier
this.classifier = knnClassifier.create();
this.mobilenet = await mobilenetModule.load();
},
trainModel(){
let selected = document.getElementById("emotion_options");
this.class = selected.options[selected.selectedIndex].value;
this.addExample();
},
addExample(){
const img= tf.fromPixels(this.$children[0].webcam.webcamElement);
const logits = this.mobilenet.infer(img, 'conv_preds');
this.classifier.addExample(logits, parseInt(this.class));
},
[...]
When the component mounts on the DOM, the init() function is called. This creates an empty KNN classifier and loads the pretrained MobileNet model. When trainModel() is called, we fetch the image from the camera element and feed it to the MobileNet model for inference. This returns intermediate activations (logits) as TensorFlow tensors, which we then add as an example for the selected class in the classifier. What we have just done is also known as transfer learning.
Let’s take a look at the methods that are called in test mode. When the getEmotion() method is called, we fetch the image and obtain its logits. Then we call the predictClass method of the classifier to fetch the class the image belongs to.
After the emotion is obtained, we also call registerEmotion(), which sends the detected emotion over to a backend server.
Notice here that the user’s image is never sent anywhere, only the predicted emotion.
// src/views/Home.vue
[...]
async getEmotion(){
const img = tf.fromPixels(this.$children[0].webcam.webcamElement);
const logits = this.mobilenet.infer(img, 'conv_preds');
const pred = await this.classifier.predictClass(logits);
this.detected_e = this.emotions[pred.classIndex];
this.registerEmotion();
},
changeOption(){
const selected = document.getElementById("use_case");
this.mode = selected.options[selected.selectedIndex].value;
},
registerEmotion(){
axios.post('http://localhost:3128/callback', {
'emotion': this.detected_e
}).then( () => {
alert('Thanks for letting us know how you feel');
});
}
}
};
</script>
Adding realtime functionality with Pusher
Building the backend server
Let’s see how to create the backend server that triggers events in realtime. Create a server
folder inside your realtime-feedback
folder and initialize an empty node project:
mkdir server && cd server
yarn init
Install the necessary modules for the backend server:
yarn add body-parser cors dotenv express pusher
We need a way to trigger realtime events in our application when a new emotion is predicted. To do this, let’s use Pusher. Pusher allows you to seamlessly add realtime features to your applications without worrying about infrastructure. To get started, create a developer account. Once that is done, create your application and obtain your application keys.
Create a .env file in your server directory to hold the environment variables for this application:
touch .env
Add the following to the .env
file:
PUSHER_APPID='YOUR_APP_ID'
PUSHER_APPKEY='YOUR_APP_KEY'
PUSHER_APPSECRET='YOUR_APP_SECRET'
PUSHER_APPCLUSTER='YOUR_APP_CLUSTER'
Afterward, create an index.js
file in the server
directory and add the following to it:
// server/index.js
require("dotenv").config();
const express = require("express");
const cors = require("cors");
const bodyParser = require("body-parser");
const Pusher = require("pusher");
// create express application
const app = express();
app.use(cors());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());
// initialize pusher
const pusher = new Pusher({
appId: process.env.PUSHER_APPID,
key: process.env.PUSHER_APPKEY,
secret: process.env.PUSHER_APPSECRET,
cluster: process.env.PUSHER_APPCLUSTER,
encrypted: true
});
// create application routes
app.post("/callback", function(req, res) {
// trigger a new_emotion event on the emotion_channel with the detected emotion
pusher.trigger("emotion_channel", "new_emotion", {
emotion: req.body.emotion
});
return res.json({ status: true });
});
app.listen("3128");
We create a simple Express application, then initialize Pusher using the environment variables specified in the .env file. Afterwards, we create a simple /callback route that is responsible for triggering a new_emotion event on the emotion_channel with the detected emotion passed in the body.
On the dashboard, we will listen on the emotion_channel for new_emotion events. Let’s see how to do this:
Displaying detected emotions in realtime on the dashboard
Firstly, add the Pusher minified script to your index.html
file for use in our application:
<!-- public/index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width,initial-scale=1.0">
<link rel="icon" href="<%= BASE_URL %>favicon.ico">
<title>Realtime Emotion Recognition Feedback Application</title>
<script src="https://js.pusher.com/4.3/pusher.min.js"></script>
</head>
<body>
<noscript>
<strong>We're sorry but the application doesn't work properly without JavaScript enabled. Please enable it to continue.</strong>
</noscript>
<div id="app"></div>
<!-- built files will be auto injected -->
</body>
</html>
Create a new Dashboard view in the src/views
directory of the realtime-feedback
application:
touch src/views/Dashboard.vue
The dashboard page has the following template:
<!-- src/views/Dashboard.vue -->
<template>
<div class="dashboard">
<h1>Here's a summary of how users feel about your service in realtime</h1>
<div>
<template v-for="(emotion, index) in emotions">
<div :key="index">
<strong>{{index}}</strong> clients: {{ emotion }}
</div>
</template>
</div>
</div>
</template>
[...]
The component has only one method, init(), which we call when the component is mounted. It creates a new Pusher object, subscribes to the emotion_channel, and listens for new_emotion events, updating the feedback summary on the dashboard in realtime without any need to refresh the page.
Add the following to the Dashboard view:
<!-- src/views/Dashboard.vue -->
[...]
<script>
export default {
name: "Dashboard",
data: function(){
return {
emotions: {
angry: 0,
neutral: 0,
happy: 0
},
pusher_obj: null,
e_channel: null,
}
},
mounted: function(){
this.init();
},
methods: {
init (){
// create a new pusher object
// PUSHER_APPKEY should be your pusher application key
this.pusher_obj = new Pusher('PUSHER_APPKEY',{
cluster: 'PUSHER_APPCLUSTER',
encrypted: true
});
// subscribe to channel
this.e_channel = this.pusher_obj.subscribe('emotion_channel');
// bind the channel to the new event and specify what should be done
let self = this;
this.e_channel.bind('new_emotion', function(data) {
// increment the count for the emotion by one
self.emotions[`${data.emotion}`] += 1;
});
},
},
}
</script>
Note: You’ll need to replace PUSHER_APPKEY and PUSHER_APPCLUSTER with your application key and cluster.
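As an optional improvement (this is a suggestion, not part of the original setup): since the project was created with Vue CLI, you could keep these values out of your source code by defining variables prefixed with VUE_APP_ in a .env file at the root of the realtime-feedback project (separate from server/.env) and reading them at build time. The variable names below are hypothetical.
// Hypothetical names: define VUE_APP_PUSHER_KEY and VUE_APP_PUSHER_CLUSTER
// in realtime-feedback/.env so Vue CLI embeds them into the client bundle.
this.pusher_obj = new Pusher(process.env.VUE_APP_PUSHER_KEY, {
  cluster: process.env.VUE_APP_PUSHER_CLUSTER,
  encrypted: true
});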
Finally, the src/App.vue
is responsible for rendering all our views and components. Edit your src/App.vue
to look like this:
<!-- src/App.vue -->
<template>
<div id="app">
<img alt="Pusher logo" src="./assets/pusher.jpg" height="100px">
<router-view/>
</div>
</template>
<style>
#app {
font-family: 'Avenir', Helvetica, Arial, sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
text-align: center;
color: #2c3e50;
}
#nav {
padding: 30px;
}
#nav a {
font-weight: bold;
color: #2c3e50;
}
#nav a.router-link-exact-active {
color: #42b983;
}
</style>
Now, we can take our application for a spin! Run the frontend server using the command:
yarn serve
And in another terminal tab, navigate to the server/
directory and then run the backend server using the command:
node index.js
Now head over to your application by navigating to http://localhost:8080 in your browser to view the homepage.
Open the http://localhost:8080/dashboard route in another browser tab so you can see your results in realtime on the dashboard.
Note: To see the training and testing process in action, you’ll need to train at least 10 samples for each of the 3 classes. Also, the training data is lost when you refresh your browser. If you want to persist the trained model, you can save it to your browser’s local storage.
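Here is a rough sketch of how that persistence could look. This is not part of the tutorial code; it assumes the version of @tensorflow-models/knn-classifier you installed exposes getClassifierDataset() and setClassifierDataset(), so double-check against the package version you have.
// Sketch only: serialize the classifier's examples to localStorage and restore them later.
import * as tf from '@tensorflow/tfjs';

function saveClassifier(classifier) {
  const dataset = classifier.getClassifierDataset(); // { classId: Tensor2D of stored activations }
  const serialized = Object.entries(dataset).map(([classId, tensor]) => ({
    classId,
    data: Array.from(tensor.dataSync()), // flatten the tensor values into a plain array
    shape: tensor.shape
  }));
  localStorage.setItem('knn-dataset', JSON.stringify(serialized));
}

function loadClassifier(classifier) {
  const stored = localStorage.getItem('knn-dataset');
  if (!stored) return;
  const dataset = {};
  JSON.parse(stored).forEach(({ classId, data, shape }) => {
    dataset[classId] = tf.tensor2d(data, shape); // rebuild each Tensor2D
  });
  classifier.setClassifierDataset(dataset);
}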
Conclusion
In this tutorial, we went through how to build a realtime emotion recognition application using Pusher, TensorFlow.js and Vue.js in the browser, without needing to send the image of the user to any external service. Feel free to explore more on machine learning and play with some awesome demos here. Here’s a link to the GitHub repository. Happy hacking!
28 August 2018
by Oreoluwa Ogundipe