A conjectural schema using quantum mechanics and AI to express and interpret emotional intellect in a social robot

This paper presents a theoretical conceptualization of how quantum mechanics and AI, combined with psychology, could improve the way social robots interpret and devise emotions. The framework develops emotions encoded as information and probabilities, and manipulates the transitions between emotional states through quantum and AI measurements. To support this development, the paper introduces an interpretation of quantum mechanics linked with quantum science, systems science, and a conceptualized neural network. Understanding emotions in this way is expected to improve a robot's technical capability to sense, capture, and act in its environment through the optimization of quantum bits: the robot performs these emotions through a network that allows interaction with a given target, changing the robot's response state according to optimizations and gates driven by the target itself, and computing the robot's emotional performance from measurements and mathematical expressions that change with the available information. This method is intended to demonstrate the viability and effectiveness of synthesizing emotional intellect in social robots.


Introduction
Today's technologies and innovations have gradually shifted toward a new centre: human-robot interactional development. Emotional interaction is the core and most important aspect of human-robot interaction; it lets robots communicate in a more human-like manner by processing emotional information and displaying the emotion apt for it. The big question is: can a robot develop compassion? If so, how, and by what methods? Modern autonomous robots such as social robots require fast vision and human-robot interaction capabilities to perceive and assess their environment in the presence of humans. Making a robot intelligent has become ever more driven since Artificial Intelligence and Deep Learning matured in technology and in industry. The outline of this paper is to integrate "Emotional Intellect + Quantum measurements + Artificial Intelligence", which will impact both the technical and the thinking capability of the robot. [1] Emotional intelligence was first described as a mixture of abilities consisting of learning, planning, reasoning, and problem solving. The year 2021 has seen an increase in the number of social robots with these abilities appearing in everyday life. These robots are mobile, have a physical appearance, and carry some form of emotional-information framework that can convey and understand human emotions by following the pattern of a conversation. For example, Buddy, said to be the first emotional-intellect robot, manufactured by Blue Frog Robotics, can express emotions through its interactions with people. Sophia, by Hanson Robotics, is the first humanoid robot in history to obtain citizenship; it can recognise almost 62 facial expressions, understand language, and remember previous interactions with humans.
[2] Existing emotional robot-interaction systems pave a pathway for this study: a robot needs to perceive a human's emotional state and, based on the interactions and the feedback it recognises in the form of emotions, adjust its behaviour according to the behavioural response acquired from the user. [2] [3]

Relationship with Human Behaviour
Emotion in a robot can be achieved through various methods. This research focuses mostly on human recognition and the generation of emotion through augmentation; here, augmentation is achieved through learning algorithms operating on an emotional scale, which is of major importance to the schema [4]. Figure 1 shows the link between emotional recognition, emotional augmentation, and emotional generation. The findings published here emphasize how a quantum-based derivation could identify and predict an emotion and express it more accurately. Emotion generation begins when a human meets the robot: the robot designates this person as a target, and the target produces an emotional expression depending on the conversation. This expression is augmented through perception along two pathways: the first a quantum-based derivation using emotion wheels, the second predictive machine learning. Both lead to reasoning over which emotional response the robot should display, and the robot then generates that response for the target. In simpler terms, once the robot recognizes the form of expression from the target, it generates a response to convey its own emotion to the target through facial expressions [3] [4]. The challenge lies in proving the theoretical analysis behind combining the different modules and the two pathways, while keeping the research current on automatic emotion recognition and generation for social robots. [5]

Social Robot
These so-called agents have appeared in an increasing variety of applications over the last few decades. They perform and execute tasks, and their popularity depends on the environment in which the agent is placed, such as therapy for autistic children, education, assistance, and patrol.
[6] For example, a study on therapy for autistic children shows that an agent can communicate with children in rehabilitation and therapy and has produced improvements in patients' mentality. All of this is achieved through the robot interacting with the user (figure 2), whether in an interpersonal manner or by generally addressing a gathering, but always with social-emotional behaviour expressed by the agent itself. [7] Figure 2. Interaction-based context model. Still, the main issues to overcome are that the robot finds it hard to understand the vivid emotions of humans and to generate a proper response to them, and that the emotions it can give in response are limited. These shortcomings degrade communication between the target and the robot. To respond spontaneously, the agent must also make its own decisions based on subsequent reactions from the target, providing an emotional response in line with the target's expectations. Hence, the human or robot makes decisions that depend partially or completely on the other party, and this leads to mimicking the human's behavioural response. [6] The human emotional response begins with an emotional stimulus received from the other party; in the brain there are said to be two routes to deliver this information (figure 3). The first is a "short route from the thalamus to the amygdala directly inside the brain, and the long route from the thalamus to the amygdala through the sensory cortex". Because the information bypasses the cortex, the short route executes faster by using predictions based on earlier expressions, but it may or may not be accurate, whereas a stimulus passing through the sensory cortex along the long route is said to provide the "control of emotion".
Since the main function of the amygdala in the brain is to produce emotions, the long route is more accurate. [8] Figure 3. Thalamus to amygdala (stimulus to response) [9].
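The two-route idea above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the short route is modelled as a cached prediction from earlier expressions (fast, possibly stale), and the long route as full processing of the stimulus; all function and variable names here are hypothetical.

```python
# Hypothetical sketch of the dual-route stimulus-to-response model.

def short_route(stimulus, memory):
    """Fast thalamus->amygdala path: reuse a prediction from earlier expressions."""
    return memory.get(stimulus)  # may be missing or inaccurate

def long_route(stimulus):
    """Slower thalamus->cortex->amygdala path: full processing ('control of emotion')."""
    analysis = stimulus.lower().strip()  # stand-in for sensory-cortex analysis
    return {"smile": "joy", "frown": "sadness"}.get(analysis, "neutral")

def respond(stimulus, memory):
    cached = short_route(stimulus, memory)
    if cached is not None:
        return cached                  # fast path
    emotion = long_route(stimulus)     # accurate, slower path
    memory[stimulus] = emotion         # later stimuli can take the short route
    return emotion

memory = {}
print(respond("smile", memory))  # long route -> "joy"
print(respond("smile", memory))  # short route (cached) -> "joy"
```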
Consider the behaviour of two people, Chuck and Grace, and define 1 = trust, 0 = disgust. Their behaviour is entangled, i.e., each one's behaviour affects the other's mode. After the gate operation, there is a 50% chance that Chuck and Grace will talk to each other and reach mutual trust; on the other hand, despite their conversation, they might both remain disgusted. [9] As shown in figure 4, person-to-person communication happens through facial expression and conversation; the communication can take various forms, but the expression determines how it is delivered. Analysis of the delivered content starts from the assumption that expressions correspond to unique configurations of the face. This categorization has always focused on two major perspectives: pleasure and arousal. Pleasure denotes the inherent attraction or revulsion of an object, event, or situation, whereas arousal indicates whether the object, event, or situation provokes any stimulus-based response, and how active one is in that situation.
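The Chuck-and-Grace example can be made concrete with a standard Bell-state preparation. This is a sketch under the assumed encoding |1〉 = trust, |0〉 = disgust: a Hadamard gate followed by a CNOT produces (|00〉 + |11〉)/√2, so measurement yields mutual trust or mutual disgust with 50% probability each, and never a mixed outcome.

```python
import numpy as np

# Assumed encoding: |1> = trust, |0> = disgust for each person.
ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                    # control: Chuck, target: Grace

chuck = H @ ket0                        # put Chuck's behaviour in superposition
state = CNOT @ np.kron(chuck, ket0)     # entangle Grace's behaviour with Chuck's

probs = state ** 2                      # measurement probabilities (real amplitudes)
# 50% |00> (mutual disgust) and 50% |11> (mutual trust); |01> and |10> never occur.
print({"00 mutual disgust": float(probs[0]), "11 mutual trust": float(probs[3])})
```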

Geneva Emotion Wheel
It is highly important that the process by which a robot recognizes and responds to an emotion is accurate and precise. To satisfy these conditions, the Geneva Emotion Wheel is incorporated: a theoretically derived tool to measure emotional reactions to humans and situations. The Geneva Emotion Wheel (figure 5) resembles the circumplex model, in which every emotion is a mixture of arousal (high-low) and pleasure (positive-negative); in the Geneva Emotion Wheel, however, each emotion is assigned a set of radial segments converging on the center, which depicts neutral emotion or any emotion outside the range of the wheel. The smallest radial segment, closest to the center, represents low intensity of the emotion, while the outermost segment represents high intensity. The factor that convinces us to use the Geneva Emotion Wheel is precisely this incorporation of emotional intensity, which serves as the first layer.

Plutchik Emotion Wheel
The complete concept in this paper revolves around the structure of figure 6, the Plutchik emotion wheel. Once the Geneva Emotion Wheel has plotted the intensity on its wheel, that intensity is projected onto the Plutchik wheel, which sits below the Geneva Emotion Wheel as layer 2. The Plutchik wheel provides a deeper subdivision that helps deliver the exact emotion the robot needs to express, and it carries a unique color code for each emotion to aid the robot's cognition process. The mapped intensity chart points to a specific emotion on the Plutchik wheel, whose labelled color code is given as output to the PAD model, which decides the robot's response to the human [10][11][12]. Table 1. Mapping of the emotion wheel [6].


Representation of quantum merging wheels
Emotion plays a central role in the interplay of this dual-framework, two-layer schema. The schema works through rules and layers that determine whether a proposed rule or logic is rational to implement, and it calculates and optimises the reaction the robot needs to give in return. [14] The Geneva Emotion Wheel serves as the first layer, an intensity estimator that predicts the received emotion, while the second layer, the Plutchik wheel, uses the angles to deliver the subsequent emotion depending on the first layer (figure 7).
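The two-layer lookup can be sketched as follows. This is an illustrative sketch only: the sector angles and colour codes below are hypothetical placeholders (the paper's Table 1 values are not reproduced here), and the intensity bucketing is an assumed three-ring simplification of the Geneva wheel's radial segments.

```python
# Layer 2 (Plutchik wheel): each recognised emotion gets a sector angle and a
# colour code. Values below are illustrative placeholders, not Table 1 data.
PLUTCHIK_LAYER = {               # emotion: (sector centre in degrees, colour)
    "joy":          (90.0,  "yellow"),
    "trust":        (45.0,  "light-green"),
    "fear":         (0.0,   "dark-green"),
    "anticipation": (135.0, "orange"),
}

def layer1_intensity(signal_strength):
    """Layer 1 (Geneva wheel): bucket a 0..1 recognition signal into rings."""
    if signal_strength < 0.33:
        return "low"
    if signal_strength < 0.66:
        return "medium"
    return "high"

def project(emotion, signal_strength):
    """Project the Layer-1 intensity onto the Layer-2 Plutchik sector."""
    angle, colour = PLUTCHIK_LAYER[emotion]
    return {"emotion": emotion, "intensity": layer1_intensity(signal_strength),
            "angle": angle, "colour": colour}

print(project("joy", 0.8))
```

The returned colour code is what layer 2 hands onward to the PAD reasoning stage.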

Quantum Effect
In classical computing, the basic unit of information is the bit; its counterpart in quantum computing is the qubit. A qubit is a two-level system; however, the properties of quantum mechanics allow qubits to exist in superposition, i.e., to be present in both states simultaneously.

|Ψ〉=α|0〉+β|1〉
where α and β are the respective state amplitudes, constrained by the normalization condition |α|² + |β|² = 1. The states |0〉 and |1〉 form an orthonormal basis for the given vector space. The evolution of qubits is governed by unitary operations, and a sequence of unitary operators is called a quantum circuit. Substituting α = cos θ and β = sin θ with θ ∈ [0, 2π], the initial quantum state becomes |ψ〉 = cos θ|0〉 + sin θ|1〉. Hence, to map the emotions (the intensity map) from the Geneva Emotion Wheel onto the Plutchik wheel, a bijection can be used to achieve the transformation, as the angles in the two intervals are continuous. The quantum emotion of a robot can thus be represented by such an equation for all 16 emotions.
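The encoding above can be checked numerically. This sketch assumes the 16 emotions are mapped bijectively onto 16 equally spaced angles in [0, 2π); the ordering of emotion labels on the wheel is not specified here, so only indices are used.

```python
import numpy as np

def emotion_state(theta):
    """Return (alpha, beta) for |psi> = cos(theta)|0> + sin(theta)|1>."""
    return np.array([np.cos(theta), np.sin(theta)])

def angle_of(k):
    """Assumed bijection: emotion index k in 0..15 <-> angle 2*pi*k/16."""
    return 2 * np.pi * k / 16

# Every emotion state satisfies the normalization |alpha|^2 + |beta|^2 = 1.
for k in range(16):
    alpha, beta = emotion_state(angle_of(k))
    assert abs(alpha**2 + beta**2 - 1) < 1e-12

print("all 16 emotion states are normalised")
```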

Gate Equation Matrix
Now let us understand how the quantum emotion state of a robot is devised and applied. Consider an angle δ ∈ [0, π/4] and the rotation matrix (a rotation about the y-axis of the Bloch sphere by angle 4δ)

R_y(4δ) = [[cos 2δ, −sin 2δ], [sin 2δ, cos 2δ]].

This transforms the computational basis state into the required quantum emotion state, expressed as

|E(δ)〉 = R_y(4δ)|0〉 = cos 2δ|0〉 + sin 2δ|1〉.

Several single-qubit gates can now be applied to transform the emotional state. The X gate functions as an emotion inverse operator that flips any given emotion:

X|E(δ)〉 = sin 2δ|0〉 + cos 2δ|1〉 = |E(π/4 − δ)〉.

The Z gate exhibits the property of a negation operator, with Z|0〉 = |0〉 and Z|1〉 = −|1〉, hence the transformation

Z|E(δ)〉 = cos 2δ|0〉 − sin 2δ|1〉 = |E(−δ)〉.

The H gate implements the conversion H|0〉 = (|0〉 + |1〉)/√2, which helps neutralise an emotional state; applied to the emotion state it yields

H|E(δ)〉 = |E(π/8 − δ)〉.

Combining these three transformations, the resulting general form can be represented by the unitary matrix

C(4η) = [[cos 2η, sin 2η], [sin 2η, −cos 2η]], with η ∈ [0, π/4].

When C(4η) is applied to the robot's emotion state it gives C(4η)|E(δ)〉 = |E(η − δ)〉; the C(4η) operator thus transforms the existing emotion into a new one within the bound given by δ and η − δ. Applying the Z gate first and then C(4η) transforms the current emotion state into a new state whose angle is η + δ, since C(4η)Z|E(δ)〉 = C(4η)|E(−δ)〉 = |E(η + δ)〉. X, Z, and H can be considered specializations of C(4η) with η equal to π/4, 0, and π/8, respectively. [15] Moreover, with the help of Pauli's Z matrix property and the definition of C(4η), the operator factorises as C(4η) = R_y(4η)Z. This shows that increasing or decreasing an emotion from δ to δ + η or from δ to δ − η is achieved by implementing R_y(4η) or R_y(−4η).
This can be expressed through the rotation matrix

R_y(4η) = [[cos 2η, −sin 2η], [sin 2η, cos 2η]],

which, being a rotation, has unit determinant. To shift between emotions once the response is given by the user, the second emotion can be reached with the rotation matrix R_y(4(Ⲅ − δ)), since R_y(4(Ⲅ − δ))|E(δ)〉 = |E(Ⲅ)〉, where δ is the angle of the first emotion and Ⲅ the angle of the second.
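The gate identities above can be verified numerically. This is a verification sketch of the reconstruction given here (using the convention |E(δ)〉 = cos 2δ|0〉 + sin 2δ|1〉), not code from the paper.

```python
import numpy as np

def Ry(theta):
    """Standard y-axis rotation: Ry(theta) rotates the Bloch sphere by theta."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def E(d):
    """Quantum emotion state |E(d)> = cos(2d)|0> + sin(2d)|1> = Ry(4d)|0>."""
    return Ry(4 * d) @ np.array([1.0, 0.0])

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def C(four_eta):
    """C(4*eta) = Ry(4*eta) @ Z, the combined emotion-transform operator."""
    return Ry(four_eta) @ Z

d, eta = 0.10, 0.20
assert np.allclose(X @ E(d), E(np.pi / 4 - d))            # X: emotion inverse
assert np.allclose(Z @ E(d), E(-d))                       # Z: negation
assert np.allclose(C(4 * eta) @ E(d), E(eta - d))         # C(4n): d -> n - d
assert np.allclose(C(4 * eta) @ Z @ E(d), E(eta + d))     # Z then C: d -> n + d
assert np.allclose(Ry(4 * eta) @ E(d), E(d + eta))        # shift d -> d + eta
print("gate identities verified")
```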

AI & Machine Learning based emotional response
The perspective here is to integrate affective-computing principles into machine learning to increase the efficiency of learning, prediction, and response. Emotional augmentation using machine learning applies the emotional pathway to more general goal achievement: existing emotional augmentation is used for efficient learning, and the principle of the short route between thalamus and amygdala is included in the AI, which requires an adaptation of artificial neural networks. Concretely, convolutional neural networks are applied as the machine-learning algorithm, predicting from pre-recorded images and their associated expressions and emotions, and choosing the viable path. This gives a lower-cost computational emotional intuition for learning, even though a better solution may exist, and it conceptualises an overall solution whose confidence is measured as the learning algorithm predicts and displays an emotional state. Emotions will be embedded as pictures for training the machine-learning algorithm with convolutional neural networks, and a backpropagation learning algorithm, enriched by varied outcomes, is later brought in to deliver near-exact emotions. A stored image then yields a higher recognition rate and faster processing time when displaying a relatable emotional response.

CNN Based Recognition model
The convolutional neural network (CNN) model is to be trained on a dataset of facial-expression images covering the 16 different emotions (figure 8). Each image falls into one of these 16 categories depending on the intensity of emotion received from the target, and the network refers to the emotion given by the user to predict and display an emotion. Before applying the CNN algorithm, once the images are uploaded, image optimization and geometrical points are considered to keep the faces constant under the different transformations that can be performed on the images. The images in the dataset are then fed to the CNN and passed on to facial-recognition analysis. The architecture consists of 4 convolutional layers, 3 pooling layers, and 2 fully connected layers for differentiating the image classes. Figure 9. Convolutional neural network (CNN) architecture using building blocks [16].
In figure 9, the initial layer uses 3×3 kernels to generate 64 feature maps from 64 filters; it is followed by the 2nd, 3rd, and 4th layers, with 128 filters of size 5×5 and two layers of 512 filters of size 3×3. After every convolutional feature map, normalization is executed. All layers use the rectified linear unit (ReLU) function to add non-linearity to the network. Max pooling of 2×2 retains the largest element from each region of the rectified feature maps. Fully connected ReLU layers of 256 and 512 neurons summarize the features and compute the class scores. In the last step, a softmax output of 6 classes assigns a value from 0 to 1 to each of the emotions of figure 4 for the shorter travel route. [16] With the CNN outputs now constrained to 0 to 1, the loss implemented is categorical cross-entropy. To drive the CNN toward consistent results, inputs are fed in batch samples of 128 images with dropout to avoid overfitting, and batch normalization after each step is likewise expected to keep the model from overfitting. Theoretically this will work better than a fixed learning rate: the learning-rate hyperparameter is reduced by a factor of 0.1 whenever learning stops improving. The metrics used here are accuracy and intensity, obtained by counting correct predictions over the total batch samples, with an argmax evaluation determining the output. [16] Figure 10. Heatmap representing each of the different layers in the CNN model [16].
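The building blocks just described (convolution, ReLU, 2×2 max pooling, softmax, cross-entropy) can be sketched with plain NumPy stand-ins. This is an illustrative sketch of the operations only, not the paper's Keras implementation; the toy image, filter, and 3-class head at the end are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def conv2d(image, kernel):
    """Valid convolution of a 2-D image with a small 2-D kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2x2(x):
    """2x2 max pooling: keep the largest element in each 2x2 block."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(probs, true_class):
    """Categorical cross-entropy for a single example."""
    return -np.log(probs[true_class])

# One forward pass through a single 3x3 averaging filter, as in the first layer.
image = np.random.default_rng(0).random((8, 8))
feat = max_pool2x2(relu(conv2d(image, np.ones((3, 3)) / 9.0)))
scores = softmax(np.array([feat.sum(), 1.0, 0.5]))   # hypothetical 3-class head
print(feat.shape, int(np.argmax(scores)))
```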
For each emotion derivation, the heatmap over the images is used in decision making so the agent provides the right classification (figure 10). Though this is not fully accurate, it is left to the PAD stage to conclude on the result achieved from the augmentation phase. This CNN model can be implemented using Python and the Keras API with a TensorFlow backend, and evaluated through the scikit-learn library. This method is not absolute and can be approached with many different layer configurations and other approaches. [16][19]

Reasoning using PAD
The bidirectional approach, with an AI-based ML (convolutional neural network) pathway and an emotion-wheel-based quantum pathway, can produce the same result or different results. The PAD model distributes emotions in an emotional-space diagram according to the P, A, and D attributes and their related emotions, which helps in reasoning about and differentiating emotions according to their position in that space. That emotions are not independent is also shown by the fact that complex emotions can be displayed; this continuous development means more emotions can be distributed within the space. [17] The emotions are quantified along coordinate axes graded with (+/−), the positive side expressing the positive pole and the negative side the negative pole of pleasure, arousal, and dominance. This model was proposed by Mehrabian as an extension of the existing PA-plane model from Russell's 2D emotional model: "P" stands for pleasure versus displeasure, "A" for arousal versus non-arousal, and "D" for dominance versus submissiveness. In addition to opening new applications for quantum methods in other areas, including machine learning and artificial emotional intellect, the proposed 3D PAD model brings completeness, as the robot's intelligence can distinguish between the two routes and choose the exact right emotion. The path chosen by the heatmap CNN is not always accurate, and some of the exact characteristics can also be derived in terms of quantum computing for robots in the emotional plane. Observing these trends, the quantum mechanics could also be cast as a fuzzy logic with an algorithm to form the communication and emotional agent for a robot.
Here the QCA algorithm is the solution that brings the pathways together, but for implementation this theoretical paper prefers the BS and AR axes as the better solution, where the robot's state is determined by itself under the influence of rational thinking, choosing the right emotion. [17] This provides the computational logic for the bidirectional approach combining the AI-based ML (CNN) pathway and the emotion-wheel-based quantum pathway. For example, suppose the target displays an emotional factor through expression. Perception takes it as input and sends it to the CNN, which looks for the expressed emotion using the heatmap and normalization and concludes that the expression is joy, meaning the target is happy; as a result, our candidate outcomes are joy or trust. The quantum model, however, depicts the intensity over anticipation, joy, fear, and trust: it takes the input, plots it in the space diagram, and identifies the relative emotion, though here the expression from the robot's side can only be joy. Once the results come in, PAD examines the plotted points and possible outcomes, reasons over them to pick the ideal expression, expresses it to the target, and the process carries on. Given that the initial expression was joy and the agent expressed joy, the expectation is that the next phase will again be joy, until the user displays a different emotion. [20] The question is that all of this still deals with the 8 basic emotions, whereas the proposal was for 16; this is where the PWE axis comes into play, displaying a hybrid of both basic and complex emotions, under the same idea that the response need not be just joy but may be a form of expression that cannot be conveyed by singular emotions.
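The PAD reasoning step in the example above can be sketched as a nearest-neighbour choice in (Pleasure, Arousal, Dominance) space. This is an illustrative sketch: the PAD coordinates below are hypothetical placeholders, not values from the paper, and the candidate lists mirror the worked example in the text.

```python
import numpy as np

# Hypothetical (Pleasure, Arousal, Dominance) coordinates, each in [-1, 1].
PAD = {
    "joy":          ( 0.8,  0.5,  0.4),
    "trust":        ( 0.7,  0.2,  0.3),
    "fear":         (-0.6,  0.6, -0.4),
    "anticipation": ( 0.3,  0.4,  0.2),
}

def reason(candidates, observed_pad):
    """Pick, among the pathways' candidates, the emotion closest in PAD space."""
    return min(candidates,
               key=lambda e: np.linalg.norm(np.array(PAD[e]) - np.array(observed_pad)))

cnn_candidates = ["joy", "trust"]                              # CNN pathway
quantum_candidates = ["anticipation", "joy", "fear", "trust"]  # quantum pathway
common = [e for e in cnn_candidates if e in quantum_candidates]

print(reason(common or cnn_candidates, observed_pad=(0.75, 0.45, 0.35)))  # -> joy
```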

Complex Emotions
Complex emotions can be derived from the standalone emotions and their PAD representation. From a theoretical point of view this looks like a reasonable development, but in quantum terms it is futile, as the combination is not a unitary operator. Nevertheless, the complex emotions are meant to cover a wide range of emotions: not only the 8 basic standalone emotions, but combinations of 2 standalone emotions producing a combined emotion, which gives the robot a broader range and increases its emotional capability. In this paper the complex emotional dyad sounds feasible, but another possibility in quantum terms has to be considered for the schema to function. [10][18][21] The complex emotions of figure 12, made from the standalone emotions of figure 7, should align with the convolutional neural network, which can display 16 emotions. The emotions have been derived along with their PAD representation; this is a modelling of uncertainty that can yield more emotions for the robot to display during human-robot conversation. However, unitary quantum operators cannot be used to build a precise physical model, because emotions, as Plutchik considered, are a "complex chain of connected events which is of feelings, psychological behavior, impulses to action and goal-oriented behavior." [10][18][21] The emphasis on complex emotion is also controversial, with no definite way to identify and classify anything other than standalone emotions, since complex emotions are displayed only at stages after the initial conversation builds up [10].
Cowen and Keltner, for example, recognised no complex emotions, treating all as standalone emotions ranging over "admiration, adoration, aesthetic appreciation, amusement, anger, anxiety, awe, awkwardness, boredom, calmness, confusion, craving, disgust, empathic pain, entrancement, excitement, fear, horror, interest, joy, nostalgia, relief, romance, sadness, satisfaction, sexual desire, surprise"; in this interpretation, adding multiple standalone emotions does not lead to a new emotion. In Plutchik's view, by contrast, combined emotions depict a stronger response: 2 standalone segments deliver a union of the 2 nearby suitable emotions, displaying the versatility of integrating multiple emotions. Table 5. Complex emotions and opposite emotions [21].
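Plutchik's pairing of adjacent standalone emotions into primary dyads can be written as a simple lookup. The dyad pairings below follow Plutchik's wheel (joy + trust = love, and so on); the dictionary structure and function name are illustrative.

```python
# Plutchik's primary dyads: adjacent standalone emotions combine into a
# complex emotion. frozenset keys make the pairing order-independent.
PRIMARY_DYADS = {
    frozenset({"joy", "trust"}):          "love",
    frozenset({"trust", "fear"}):         "submission",
    frozenset({"fear", "surprise"}):      "awe",
    frozenset({"surprise", "sadness"}):   "disapproval",
    frozenset({"sadness", "disgust"}):    "remorse",
    frozenset({"disgust", "anger"}):      "contempt",
    frozenset({"anger", "anticipation"}): "aggressiveness",
    frozenset({"anticipation", "joy"}):   "optimism",
}

def combine(e1, e2):
    """Return the complex emotion formed by two standalone emotions, if any."""
    return PRIMARY_DYADS.get(frozenset({e1, e2}))

print(combine("joy", "trust"))    # love
print(combine("joy", "disgust"))  # None: not an adjacent pair
```

Note that this table lookup is classical; as discussed above, the combination is not a unitary operator, so it cannot be realised directly as a quantum gate.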

Model comparison
Development from the existing models provides a pathway to research further human-robot interaction models. The major points of focus for developing an expressive social robot are cognitive recognition, internal/external feelings, behavioural action, expression, and passion/desire. When all of these are coordinated in working order, such social robot agents can display emotions at will. Adding further processes would complicate the design, hence these 5 are considered the most important.
Realisation of emotional communication with the target does not have to be purely cognitive; it can take the form of speech recognition, facial recognition, emotional expression, or behavioural action. Speech-pattern recognition interprets and translates human language for the robot, while facial recognition identifies human emotion from patterns in the face; emotional expression is a series of behaviours that communicate with the robot, and behavioural action reflects body movement corresponding to the emotion. The data presented in table 6 summarise designs from 2013 to 2019, listing each robot's name, type classification, year and country of manufacture, and manufacturer, along with a picture. These robots are more user-focused than other robots, so their development has been upgraded with each new innovation; table 6 lists some robots that have seen constant development and improvement over the years, and their understanding and functionality are still being enhanced. Consider the first model, which appeared in 1999: it had speech recognition and behaviour-action mapping with 16 degrees of freedom. Such robots are made to serve and perform one or a variety of operations. Its updated version in 2018 had 22 degrees of freedom, over 50 voice commands, 60 emotional expressions overall, and could see up to 100 faces at the same time; the initial build follows the same idea from scratch even though the end goal may change during development. Emotional and social robots are available in many places and are considered to improve human life. For a robot to express emotions, it must exhibit both basic and complex emotions, and since a conversation builds on the responses themselves, a facial-expression-based approach is more ideal.
The various augmentations are parameterised differently: Kismet used a technique based on interpolation in a three-dimensional space, while the Probo robot used a circumplex model of emotions with 2 dimensions. Since emotions are complex, with uncertainty and limitless boundaries, they can also be tackled with mathematical probabilistic, fuzzy, and boundary set theories, which can narrow down and predict emotions for recognition agents to recognise and express.
Pepper (table 6) was mainly programmed with Choregraphe, which has a good graphical interface and builds animations for display; Buddy (table 6) uses the Unity engine with an SDK that can perform high-level functions like emotional behaviour; and LOVOT (table 6) is a cuddle robot made for highly developed emotional connections, bridging with the user and helping them stay positive, using ML to make decisions. Likewise, our schema proposes emotional augmentation using quantum-based calculations and a CNN with ML to communicate and decide the emotions to send as responses and to continue a conversation, with the aim of carrying this schema into a social-robot application.

Summary and Future Scope
This paper has focused on designing a fundamental schema for emotional augmentation that lets the robot understand and perform its desired actions depending on the human's relevant expressions and emotions. It has reviewed in detail the augmentation of emotions from the point of perception, in comparison with the human brain, combining the Plutchik and Geneva models, introducing convolutional neural networks alongside quantum-derived emotions, and employing the 3D space diagram with the QCA algorithm along with the BS and AR axes. The most anticipated development was the comparison of how the human brain corresponds to the schema developed for the robot and emotional agent. Emotional robots need to express themselves vividly using multiple models, more advanced control theory, and a speculative system comparable to the human cerebral cortex, with processing and display speeds that can yield more natural communication between human and robot. The important objective reached here is the pair of variable pathways and the option to choose the displayed expression either by the robot itself, using the quantum pathway, or by reference to existing images, cross-checking geometrical points and the heatmap along with emotional-space modelling, together with the induction of more emotions than the 8 standalone ones. These support the development of the current schema and of human-robot interaction in emotional augmentation. With these insights into current and possible future directions, developing a social robot for try-outs with emotional intellect is therefore possible, with the expectation of quite complex and frequent changes.
With the foundational fundamentals settled, the schema could still be improved in many areas to provide the robot with more emotions or other possible models, while exploiting its roots in quantum computing, psychology, and convolutional neural-network-based AI and ML, aided by behavioural science, control systems, and brain science. Numerous models and schemas are currently available for emotional intellect in human-robot interaction; the next possible step is to cross-reference, compare, and upgrade the existing schema. Since this is a cross-disciplinary method, many outcomes and build-ups were considered to facilitate a robot that can be used for close and effective interaction with a human user, for the purpose of giving assistance or encouraging emotional expression. From an application perspective, the foundation is not ready for application use yet; rather, it is a stepping stone from the current initiative, and the various domains where it could be applied must be considered in further build-up. To develop a mature robot at the level of human intelligence, there is still room for growth in this exciting development of emotional intellect using quantum methods and convolutional neural networks with ML and AI. While not exhaustive, we hope the paper justifies and delivers adequate basic initial set-ups; the goal is to develop the above-mentioned foundations, harness the potential, and deliver a social robot accordingly.

Acknowledgment
We would like to extend our thanks to Dr. Balakrishnan, VIT Vellore, for his extensive knowledge and guidance in the field of quantum mechanics for this paper.