
Abstract— Intelligent Assistants are becoming an essential part of our lives. They are present in smartphones (the iPhone has Siri, Android has Google Assistant) and are now becoming a core part of computer operating systems (Windows has Cortana, Linux has Stella). These Intelligent Assistants aim to make life easier by helping the user with routine tasks, much as a human personal assistant does. However, they are unable to take the place of human personal assistants, and one of the main reasons is that they cannot understand human emotions. In this paper we propose a method that makes these Intelligent Assistants capable of understanding human emotions. The proposed method uses the open-source Intelligent Assistant "Open Assistant": when the user calls the assistant, a real-time picture of the user is taken using the OpenCV library, and a logistic regression model, trained on a provided dataset, evaluates the new image. The model detects whether the user is smiling or not, and this information is passed to Open Assistant so that it can react according to the user's emotion.

1. Introduction

Nowadays, Intelligent Assistants are becoming a core part of our lives. These personal assistants can remind you about your daily tasks, send emails, and even order lunch for you. They are equipped with a degree of artificial intelligence (AI): they can understand human language, respond to general queries, and use their AI to keep their responses in line with the expectations of the user. However, these assistants lack an understanding of human emotions. In 1968, Albert Mehrabian [1] pointed out that in human-to-human interaction 7% of communication is contributed by verbal cues, 38% by vocal cues, and the major portion, 55%, by facial expressions; these facial expressions are used to understand emotions. If a machine can identify human emotions, it can understand human behavior better, thus improving task efficiency [2]. A lot of work has been done on making Intelligent Assistants efficient, but techniques for making them capable of understanding human emotions are not yet available.

In this paper we propose a technique that makes Intelligent Assistants capable of understanding human emotions. We use the open-source Intelligent Assistant "Open Assistant": when the user calls Open Assistant, the webcam takes the user's picture, the OpenCV computer vision library and a logistic regression algorithm are used to detect whether the user is smiling or neutral, and this information is forwarded to Open Assistant, which executes the corresponding code. This technique can be applied to any Intelligent Assistant to make it more efficient and more human-like.

2. Related Work

Technology giants such as Google and Microsoft are working on making their Intelligent Assistants more human-like in order to improve task efficiency. Their work is not publicly available, but the general understanding is that they use the data stored on users' devices and their daily device-usage patterns to keep assistant responses in line with the expectations of the user. At the time of writing this paper, no research work was found on making Intelligent Assistants capable of understanding human emotions.
3. Methodology

The following steps make an Intelligent Assistant capable of understanding human emotions; illustrative code sketches of steps (a)–(d) are given at the end of this section.

Fig.: Block diagram of the emotion recognition system.

a) OpenCV

The Intelligent Assistant is started by a keyword spoken by the user. As the user speaks the keyword, a picture of the user, with either a smiling or a neutral facial expression, is taken using a webcam; this is done before the Intelligent Assistant performs any further processing. Our program then uses an algorithm adopted from the OpenCV library to localize the part of the face containing the mouth.

b) Vectorization

The image is resized so that the output is a 28 pixel by 10 pixel image containing only the person's mouth. The image is then converted to grayscale and flattened into a vector of length 280, where each entry of the vector represents the grayscale value of one pixel.

c) Logistic Regression

The image vector is given to a logistic regression model, which computes whether the person is smiling or neutral. The model takes an input of length 280, applies a set of weights and a bias to that input, and yields a single scalar activation. Depending on whether the activation is closer to 0 or to 1, the model determines whether the person is smiling or not. Before classifying user images, the model is trained using gradient descent: we use 64 neutral images and 43 smiley images from an online dataset to fit the weights. With the learned weights and bias, we can feed the user's processed face image into the model, and it predicts whether the user is smiling or not.

d) Open Assistant

The responses of Open Assistant are classified according to the emotion of the user, with one portion of code for a neutral user and one portion for a happy user. When a smile is detected, the portion of Open Assistant code for a happy user is executed.
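The paper does not include an implementation of step (a), so the following is a minimal sketch under stated assumptions: it uses OpenCV's stock Haar cascades ("haarcascade_frontalface_default.xml" to find the face and "haarcascade_smile.xml" to find the mouth region), which may differ from the detector the authors adopted, and the function name "capture_mouth_region" is hypothetical.

# Sketch of step (a): capture a webcam frame and localize the mouth region.
# Uses OpenCV's stock Haar cascades; the cascade the authors adopted may differ.
import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
SMILE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def capture_mouth_region():
    """Grab one frame from the default webcam; return the mouth crop (BGR) or None."""
    cam = cv2.VideoCapture(0)                      # default webcam
    ok, frame = cam.read()
    cam.release()
    if not ok:
        return None

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None

    # Take the largest detected face and search its lower half for the mouth.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    lower_half = gray[y + h // 2 : y + h, x : x + w]
    mouths = SMILE_CASCADE.detectMultiScale(lower_half, scaleFactor=1.7, minNeighbors=20)
    if len(mouths) == 0:
        return None

    mx, my, mw, mh = max(mouths, key=lambda m: m[2] * m[3])
    return frame[y + h // 2 + my : y + h // 2 + my + mh, x + mx : x + mx + mw]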
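Step (b) maps directly onto a few OpenCV and NumPy calls. A minimal sketch follows, assuming the mouth crop returned by the previous sketch; the 28 by 10 target size is interpreted here as 28 pixels wide and 10 pixels high, which gives the stated vector length of 280, and the scaling of pixel values to [0, 1] is an added convention rather than something stated in the paper.

# Sketch of step (b): resize the mouth crop to 28x10, convert to grayscale,
# and flatten it into a 280-dimensional feature vector.
import cv2
import numpy as np

def vectorize_mouth(mouth_bgr):
    """Return a float vector of length 280 with grayscale values scaled to [0, 1]."""
    gray = cv2.cvtColor(mouth_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (28, 10), interpolation=cv2.INTER_AREA)  # (width, height)
    return small.astype(np.float32).flatten() / 255.0                 # shape (280,)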
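The paper trains the logistic regression classifier with gradient descent but gives no implementation, so the following NumPy sketch is only illustrative; the learning rate, iteration count, decision threshold, and the label convention (1 for smiling, 0 for neutral) are assumptions rather than values taken from the paper. With the 64 neutral and 43 smiley training images mentioned above, X would be a 107 by 280 matrix.

# Sketch of step (c): logistic regression trained with batch gradient descent.
# X has shape (n_samples, 280); y holds 1 for "smiling" and 0 for "neutral".
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, n_iters=2000):
    """Fit the weight vector w (length 280) and bias b by minimizing cross-entropy."""
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    for _ in range(n_iters):
        activation = sigmoid(X @ w + b)        # scalar activation per sample, in (0, 1)
        error = activation - y                 # gradient of the loss w.r.t. the pre-activation
        w -= lr * (X.T @ error) / n_samples    # gradient descent step on the weights
        b -= lr * error.mean()                 # gradient descent step on the bias
    return w, b

def is_smiling(x, w, b, threshold=0.5):
    """Classify one 280-dimensional mouth vector: True if the activation is closer to 1."""
    return sigmoid(x @ w + b) >= threshold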
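Step (d) is essentially a branch on the classifier output. How the result is handed to Open Assistant is not specified in the paper, so this last sketch only ties the previous sketches together and shows the control flow; the "respond_happy" and "respond_neutral" handlers are hypothetical placeholders for the two portions of Open Assistant code described above.

# Sketch of step (d): route the detected emotion to the corresponding portion of
# Open Assistant code. The two handler callbacks are hypothetical placeholders.
def handle_wake_word(weights, bias, respond_happy, respond_neutral):
    mouth = capture_mouth_region()              # step (a): webcam capture and mouth crop
    if mouth is None:
        respond_neutral()                       # fall back if no face or mouth was found
        return
    features = vectorize_mouth(mouth)           # step (b): 280-dimensional vector
    if is_smiling(features, weights, bias):     # step (c): logistic regression
        respond_happy()                         # Open Assistant code path for a happy user
    else:
        respond_neutral()                       # Open Assistant code path for a neutral user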

4. Conclusion

In this paper we have used the OpenCV computer vision library, a logistic regression classifier, and Open Assistant to propose a method that gives emotion understanding to Intelligent Assistants, which are becoming an essential part of our lives.
