The present disclosure relates to applications, methods and apparatus for signal processing of biometric sensor data to detect neuro-physiological state in communication enhancement and game applications.
Many humans are adept at empathizing with the feelings of others during communication; others less so. As electronic media have become increasingly common for interpersonal and human-machine communication, emotional signaling through visible and audible cues has become more difficult or impossible between people using electronic communication media. In text media, users resort to emojis or other manual signals. Often, emotional communication fails, and users misunderstand one another's intent. In addition, some people are adept at disguising their feelings, and sometimes use their skills to deceive or mislead others. Equipment such as lie detectors is used to address these problems in limited contexts but is too cumbersome and intrusive for widespread use.
In a related problem, many computer games are unresponsive to the user's emotional signals, which may cause players to lose interest in game play over time.
It would be desirable, therefore, to develop new methods and other new technologies for enhanced interpersonal or human-machine communication and games that overcome these and other limitations of the prior art and help producers deliver more compelling entertainment experiences for the audiences of tomorrow.
This summary and the following detailed description should be interpreted as complementary parts of an integrated disclosure, which parts may include redundant subject matter and/or supplemental subject matter. An omission in either section does not indicate priority or relative importance of any element described in the integrated application. Differences between the sections may include supplemental disclosures of alternative embodiments, additional details, or alternative descriptions of identical embodiments using different terminology, as should be apparent from the respective disclosures. A previous application, Ser. No. 62/661,556 filed Apr. 23, 2018, lays a foundation for digitally representing user engagement with audio-video content, including but not limited to digital representation of Content Engagement Power (CEP) based on the sensor data, similar to Composite Neuro-physiological State (CNS) described in the present application. As described more fully in the earlier application, a computer process develops CEP for content based on sensor data from at least one sensor positioned to sense an involuntary response of one or more users while engaged with the audio-video output. For example, the sensor data may include one or more of electroencephalographic (EEG) data, galvanic skin response (GSR) data, facial electromyography (fEMG) data, electrocardiogram (EKG) data, video facial action unit (FAU) data, brain machine interface (BMI) data, video pulse detection (VPD) data, pupil dilation data, functional magnetic resonance imaging (fMRI) data, body chemical sensing data and functional near-infrared data (fNIR) received from corresponding sensors. The same or similar sensors may be used for calculation of CNS. “User” means an audience member, a person experiencing a video game or other application facilitating social interaction as a consumer for entertainment purposes. The present application builds on that foundation, making use of CNS in various applications summarized below.
CNS is an objective, algorithmic and digital electronic measure of a user's biometric state that correlates to a neuro-physiological state of the user during social interaction, for example while playing a video game or participating in an application facilitating social interaction. As used herein, “social interaction” includes any game in which two or more people interact, and other forms of social interaction such as interpersonal communication or simulated social interaction as when a user plays against a non-player character operated by a computer or against (e.g., in comparison with) prior performances by herself. In a given social interaction, the user may be concerned with learning how an inner neuro-physiological state corresponds to an outward effect detectable by sensors. The state of interest may be the user's own neuro-physiological state, or that of another user. As used herein, “neuro-physiological” means indicating or originating from a person's physiological state, neurological state, or both states. “Biometric” means a measure of a biological state, which encompasses “neuro-physiological” and may encompass other information, for example, identity information. Some data, for example, images of people's faces or other body portions, may indicate both identity and neuro-physiological state. As used herein, “biometric” always includes “neuro-physiological.”
CNS expresses at least two orthogonal measures, for example, arousal and valence. As used herein, “arousal” means a state or condition of being physiologically alert, awake and attentive, in accordance with its meaning in psychology. High arousal indicates interest and attention, low arousal indicates boredom and lack of interest. “Valence” is also used here in its psychological sense of attractiveness or goodness. Positive valence indicates attraction, and negative valence indicates aversion.
In an aspect, a method for controlling a social interaction application based on a representation (e.g., a quantitative measure or a qualitative symbol) of a neuro-physiological state of a user may include monitoring, by at least one processor, digital data from a social interaction, e.g., a game or unstructured chat. The method may include receiving sensor data from at least one sensor positioned to sense a neuro-physiological response of at least one user during the social interaction. The method may include determining a Composite Neuro-physiological State (CNS) value, based on the sensor data and recording the CNS value in a computer memory and/or communicating a representation of the CNS value to the user, or to another participant in the social interaction. In an aspect, determining the CNS value may further include determining arousal values based on the sensor data and comparing a stimulation average arousal based on the sensor data with an expectation average arousal. The sensor data may include one or more of electroencephalographic (EEG) data, galvanic skin response (GSR) data, facial electromyography (fEMG) data, electrocardiogram (EKG) data, video facial action unit (FAU) data, brain machine interface (BMI) data, video pulse detection (VPD) data, pupil dilation data, functional magnetic resonance imaging (fMRI) data, and functional near-infrared data (fNIR).
In an aspect, calculating a Composite Neuro-physiological State (CNS) may be based on the cognitive appraisal model. In addition, calculating the CNS value may include determining valence values based on the sensor data and including the valence values in determining the measure of a neuro-physiological state. Determining valence values may be based on sensor data including one or more of electroencephalographic (EEG) data, facial electromyography (fEMG) data, video facial action unit (FAU) data, brain machine interface (BMI) data, functional magnetic resonance imaging (fMRI) data, functional near-infrared data (fNIR), and positron emission tomography (PET).
In a related aspect, the method may include determining the expectation average arousal based on further sensor data measuring a like involuntary response of the recipient while engaged with known audio-video stimuli. Accordingly, the method may include playing the known audio-video stimuli comprising a known non-arousing stimulus and a known arousing stimulus. More detailed aspects of determining the CNS value, calculating one of multiple event powers for each of the one or more users, assigning weights to each of the event powers based on one or more source identities for the sensor data, determining the expectation average arousal, and determining valence values based on the sensor data may be as described for other applications herein above or in the more detailed description below.
The foregoing methods may be implemented in any suitable programmable computing apparatus, by provision of program instructions in a non-transitory computer-readable medium that, when executed by a computer processor, cause the apparatus to perform the described operations. The processor may be local to the apparatus and user, located remotely, or may include a combination of local and remote processors. An apparatus may include a computer or set of connected computers used in measuring and communicating CNS or like engagement measures for content output devices. A content output device may include, for example, a personal computer, mobile phone, an audio receiver (e.g., a Bluetooth earpiece), notepad computer, a television or computer monitor, a projector, a virtual reality device, augmented reality device, or haptic feedback device. Other elements of the apparatus may include, for example, an audio output device and a user input device, which participate in the execution of the method. An apparatus may include a virtual or augmented reality device, such as a headset or other display that reacts to movements of a user's head and other body parts. The apparatus may include biometric sensors that provide data used by a controller to determine a digital representation of CNS.
To the accomplishment of the foregoing and related ends, one or more examples comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative aspects and are indicative of but a few of the various ways in which the principles of the examples may be employed. Other advantages and novel features will become apparent from the following detailed description when considered in conjunction with the drawings and the disclosed examples, which encompass all such aspects and their equivalents.
The features, nature, and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify like elements correspondingly throughout the specification and drawings.
Various aspects are now described with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that the various aspects may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form to facilitate describing these aspects.
Referring to FIG. 1, a suitable client-server environment 100 may include various computer servers and client entities in communication via one or more networks, for example a Wide Area Network (WAN) 102 (e.g., the Internet) and/or a wireless communication network (WCN) 104, for example a cellular telephone network. Computer servers may be implemented in various architectures. For example, the environment 100 may include one or more Web/application servers 124 containing documents and application code compatible with World Wide Web protocols, including but not limited to HTML, XML, PHP and Javascript documents or executable scripts, for example. The Web/application servers 124 may serve applications for outputting a video game or other application facilitating social interaction and for collecting biometric sensor data from users experiencing the content. In an alternative, data collection applications may be served from a math server 110, cloud server 122, blockchain entity 128, or content data server 126. As described in more detail herein below, the environment for experiencing a video game or other application facilitating social interaction may include a physical set for live interactive theater, or a combination of one or more data collection clients feeding data to a modeling and rendering engine that serves a virtual theater.
The environment 100 may include one or more data servers 126 for holding data, for example video, audio-video, audio, and graphical content components of game or social media application content for consumption using a client device, software for execution on or in conjunction with client devices, for example sensor control and sensor signal processing applications, and data collected from users or client devices. Data collected from client devices or users may include, for example, sensor data and application data. Sensor data may be collected by a background (not user-facing) application operating on the client device, and transmitted to a data sink, for example, a cloud-based data server 122 or discrete data server 126. Application data means application state data, including but not limited to records of user interactions with an application or other application inputs, outputs or internal states. Applications may include software for video games, social interaction, or personal training. Applications and data may be served from other types of servers, for example, any server accessing a distributed blockchain data structure 128, or a peer-to-peer (P2P) server 116 such as may be provided by a set of client devices 118, 120 operating contemporaneously as micro-servers or clients.
As used herein, “users” are always consumers of video games or social interaction applications from which a system node collects neuro-physiological response data for use in determining a digital representation of emotional state for use in the game or other social interaction. When actively participating in a game or social experience via an avatar or other agency, users may also be referred to herein as player actors. Viewers are not always users. For example, a bystander may be a passive viewer from which the system collects no biometric response data. As used herein, a “node” includes a client or server participating in a computer network.
The network environment 100 may include various client devices, for example a mobile smart phone client 106 and notepad client 108 connecting to servers via the WCN 104 and WAN 102 or a mixed reality (e.g., virtual reality or augmented reality) client device 114 connecting to servers via a router 112 and the WAN 102. In general, client devices may be, or may include, computers used by users to access video games or other applications facilitating social interaction provided via a server or from local storage. In an aspect, the data processing server 110 may determine digital representations of biometric data for use in real-time or offline applications. Real-time applications may include, for example, video games, in-person social games with emotional feedback via a client device, applications for personal training and self-improvement, and applications for live social interaction, e.g., text chat, voice chat, video conferencing, and virtual presence conferencing. Offline applications may include, for example, ‘green lighting’ production proposals, automated screening of production proposals prior to green lighting, automated or semi-automated packaging of promotional content such as trailers or video ads, and customized editing or design of content for targeted users or user cohorts (both automated and semi-automated).
The server 200 may include a network interface 218 for sending and receiving applications and data, including but not limited to sensor and application data used for digitally representing user neuro-physiological state during a game or social interaction in a computer memory based on biometric sensor data. The content may be served from the server 200 to a client device or stored locally by the client device. If stored local to the client device, the client and server 200 may cooperate to handle collection of sensor data and transmission to the server 200 for processing.
Each processor 202, 214 of the server 200 may be operatively coupled to at least one memory 204 holding functional modules 206, 208, 210, 212 of an application or applications for performing a method as described herein. The modules may include, for example, a correlation module 206 that correlates biometric feedback to one or more metrics such as arousal or valence. The correlation module 206 may include instructions that when executed by the processor 202 and/or 214 cause the server to correlate biometric sensor data to one or more neuro-physiological (e.g., emotional) states of the user, using machine learning (ML) or other processes. An event detection module 208 may include functions for detecting events based on a measure or indicator of one or more biometric sensor inputs exceeding a data threshold. The modules may further include, for example, a normalization module 210. The normalization module 210 may include instructions that when executed by the processor 202 and/or 214 cause the server to normalize measures of valence, arousal, or other values using a baseline input. The modules may further include a calculation function 212 that when executed by the processor causes the server to calculate a Composite Neuro-physiological State (CNS) based on the sensor data and other output from upstream modules. Details of determining a CNS are disclosed later herein. The memory 204 may contain additional instructions, for example an operating system, and supporting modules.
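For purposes of illustration only, the chain of modules 206, 208, 210 and 212 may be expressed as a simple processing pipeline, as in the following Python sketch. The class and function names are assumptions of this description, not an interface defined by the server 200.

    class CnsPipeline:
        """Illustrative chain of the server modules described above (assumed names)."""
        def __init__(self, correlator, event_detector, normalizer, calculator):
            self.correlator = correlator          # cf. correlation module 206
            self.event_detector = event_detector  # cf. event detection module 208
            self.normalizer = normalizer          # cf. normalization module 210
            self.calculator = calculator          # cf. calculation function 212

        def process(self, sensor_frames):
            indicators = self.correlator(sensor_frames)   # biometric data -> arousal/valence
            events = self.event_detector(indicators)      # threshold crossings
            normalized = self.normalizer(events)          # baseline-relative values
            return self.calculator(normalized)            # -> CNS value

    # Example wiring with stub functions standing in for the real modules:
    pipeline = CnsPipeline(lambda f: f, lambda i: i, lambda e: e, lambda n: 1.2)
    print(pipeline.process([0.4, 0.6]))    # 1.2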
Referring to FIG. 3, a user interface device 324 may be coupled to the processor 302 for providing user control input to a media player and data collection process. The process may include outputting video and audio for a display screen or projection display device. In some embodiments, a video game or other application facilitating social interaction control process may be, or may include, audio-video output for an immersive mixed reality content display process operated by a mixed reality immersive display engine executing on the processor 302.
User control input may include, for example, selections from a graphical user interface or other input (e.g., textual or directional commands) generated via a touch screen, keyboard, pointing device (e.g., game controller), microphone, motion sensor, camera, or some combination of these or other input devices represented by block 324. Such user interface device 324 may be coupled to the processor 302 via an input/output port 326, for example, a Universal Serial Bus (USB) or equivalent port. Control input may also be provided via a sensor 328 coupled to the processor 302. A sensor 328 may be or may include, for example, a motion sensor (e.g., an accelerometer), a position sensor, a camera or camera array (e.g., stereoscopic array), a biometric temperature or pulse sensor, a touch (pressure) sensor, an altimeter, a location sensor (for example, a Global Positioning System (GPS) receiver and controller), a proximity sensor, a motion sensor, a smoke or vapor detector, a gyroscopic position sensor, a radio receiver, a multi-camera tracking sensor/controller, an eye-tracking sensor, a microphone or a microphone array, an electroencephalographic (EEG) sensor, a galvanic skin response (GSR) sensor, a facial electromyography (fEMG) sensor, an electrocardiogram (EKG) sensor, a video facial action unit (FAU) sensor, a brain machine interface (BMI) sensor, a video pulse detection (VPD) sensor, a pupil dilation sensor, a body chemical sensor, a functional magnetic resonance imaging (fMRI) sensor, a photoplethysmography (PPG) sensor, phased-array radar (PAR) sensor, or a functional near-infrared data (fNIR) sensor. Any one or more of an eye-tracking sensor, FAU sensor, PAR sensor, pupil dilation sensor or heartrate sensor may be or may include, for example, a front-facing (or rear-facing) stereoscopic camera such as used in the iPhone 10 and other smartphones for facial recognition. Likewise, cameras in a smartphone or similar device may be used for ambient light detection, for example, to detect ambient light changes for correlating to changes in pupil dilation.
The sensor or sensors 328 may detect biometric data used as an indicator of the user's neuro-physiological state, for example, one or more of facial expression, skin temperature, pupil dilation, respiration rate, muscle tension, nervous system activity, pulse, EEG data, GSR data, fEMG data, EKG data, FAU data, BMI data, pupil dilation data, chemical detection (e.g., oxytocin) data, fMRI data, PPG data or fNIR data. In addition, the sensor(s) 328 may detect a user's context, for example an identity, position, size, orientation and movement of the user's physical environment and of objects in the environment, or motion or other state of a user interface display, for example, motion of a virtual-reality headset. Sensors may be built into wearable gear, into non-wearable equipment including a display device, or into auxiliary equipment such as a smart phone, smart watch, or implanted medical monitoring device. Sensors may also be placed in nearby devices such as, for example, an Internet-connected microphone and/or camera array device used for hands-free network access, or in an array over a physical set.
Sensor data from the one or more sensors 328 may be processed locally by the CPU 302 to control display output, and/or transmitted to a server 200 for processing by the server in real time, or for non-real-time processing. As used herein, "real time" refers to processing responsive to user input without any arbitrary delay between inputs and outputs; that is, processing that reacts as soon as technically feasible. "Non-real time" or "offline" refers to batch processing or other use of sensor data that is not used to provide immediate control input for controlling the display, but that may control the display after some arbitrary amount of delay.
To enable communication with another node of a computer network, for example a video game or other application facilitating social interaction server 200, the client 300 may include a network interface 322, e.g., an Ethernet port, wired or wireless. Network communication may be used, for example, to enable multiplayer experiences, including immersive or non-immersive experiences of a video game or other application facilitating social interaction such as non-directed multi-user applications, for example social networking, group entertainment experiences, instructional environments, video gaming, and so forth. Network communication can also be used for data transfer between the client and other nodes of the network, for purposes including data processing, content delivery, content control, and tracking. The client may manage communications with other network nodes using a communications module 306 that handles application-level communication needs and lower-level communications protocols, preferably without requiring user management.
A display 320 may be coupled to the processor 302, for example via a graphics processing unit 318 integrated in the processor 302 or in a separate chip. The display 320 may include, for example, a flat screen color liquid crystal display (LCD) illuminated by light-emitting diodes (LEDs) or other lamps, a projector driven by an LCD or by a digital light processing (DLP) unit, a laser projector, or other digital display device. The display device 320 may be incorporated into a virtual reality headset or other immersive display system, or may be a computer monitor, home theater or television screen, or projector in a screening room or theater. In a real social interaction application, clients for users and actors may avoid using a display in favor of audible input through an earpiece or the like, or tactile impressions through a tactile suit.
In virtual social interaction applications, video output driven by a mixed reality display engine operating on the processor 302, or other application for coordinating user inputs with an immersive content display and/or generating the display, may be provided to the display device 320 and output as a video display to the user. Similarly, an amplifier/speaker or other audio output transducer 316 may be coupled to the processor 302 via an audio processor 312. Audio output correlated to the video output and generated by the media player module 308, a video game or other application facilitating social interaction or other application may be provided to the audio transducer 316 and output as audible sound to the user. The audio processor 312 may receive an analog audio signal from a microphone 314 and convert it to a digital signal for processing by the processor 302. The microphone can be used as a sensor for detection of neuro-physiological (e.g., emotional) state and as a device for user input of verbal commands, or for social verbal responses to other users.
The 3D environment apparatus 300 may further include a random-access memory (RAM) 304 holding program instructions and data for rapid execution or processing by the processor during controlling of a video game or other application facilitating social interaction in response to biosensor data collected from a user. When the device 300 is powered off or in an inactive state, program instructions and data may be stored in a long-term memory, for example, a non-volatile magnetic, optical, or electronic memory storage device (not shown). Either or both RAM 304 or the storage device may comprise a non-transitory computer-readable medium holding program instructions, that when executed by the processor 302, cause the device 300 to perform a method or operations as described herein. Program instructions may be written in any suitable high-level language, for example, C, C++, C#, JavaScript, PHP, or Java™, and compiled to produce machine-language code for execution by the processor.
Program instructions may be grouped into functional modules 306, 308, to facilitate coding efficiency and comprehensibility. A communication module 306 may include instructions for coordinating communication of biometric sensor data or metadata to a calculation server. A sensor control module 308 may include instructions for controlling sensor operation and processing raw sensor data for transmission to a calculation server. The modules 306, 308, even if discernable as divisions or groupings in source code, are not necessarily distinguishable as separate code blocks in machine-level coding. Code bundles directed toward a specific type of function may be considered to comprise a module, regardless of whether machine code in the bundle can be executed independently of other machine code. The modules may be high-level modules only. The media player module 308 may perform operations of any method described herein, and equivalent methods, in whole or in part. Operations may be performed independently or in cooperation with another network node or nodes, for example, the server 200.
The content control methods disclosed herein may be used with Virtual Reality (VR) or Augmented Reality (AR) output devices, for example in virtual live or robotic interactive theater.
Whether in an immersive environment or non-immersive environment, the application may control the appearance, behavior, and capabilities of a computer-controlled non-player character in response to real-time CNS data from one or more users. For example, if CNS data indicates low arousal, a controller may increase difficulty or pace of the experience, may modify characteristics of avatars, non-player characters, the playing environment, or a combination of the foregoing. For further example, if CNS data indicates excessive tension or frustration, the controller may similarly reduce difficulty or pace of the experience.
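By way of non-limiting illustration, the foregoing control logic may resemble the following Python sketch. The threshold values and the controller interface are assumptions of this description, not parameters defined by the disclosure.

    class GameController:
        """Hypothetical stand-in for a game engine difficulty interface."""
        def increase_pace(self):
            print("raising difficulty or pace")
        def decrease_pace(self):
            print("easing difficulty or pace")

    def react_to_cns(controller, cns, valence):
        # A CNS near 1.0 indicates a response near baseline (see Equation 8 below).
        if cns < 0.8:                     # low arousal: possible boredom
            controller.increase_pace()
        elif cns > 1.5 and valence < 0:   # high arousal, negative valence: frustration
            controller.decrease_pace()

    react_to_cns(GameController(), cns=0.6, valence=0.1)   # prints "raising difficulty or pace"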
The immersive VR stereoscopic display device 400 may include a tablet support structure made of an opaque lightweight structural material (e.g., a rigid polymer, aluminum or cardboard) configured for supporting and allowing for removable placement of a portable tablet computing or smartphone device including a high-resolution display screen, for example, an LCD. The device 400 is designed to be worn close to the user's face, enabling a wide field of view using a small screen size such as in a smartphone. The support structure 426 holds a pair of lenses 422 in relation to the display screen 412. The lenses may be configured to enable the user to comfortably focus on the display screen 412 which may be held approximately one to three inches from the user's eyes.
The device 400 may further include a viewing shroud (not shown) coupled to the support structure 426 and configured of a soft, flexible or other suitable opaque material for form fitting to the user's face and blocking outside light. The shroud may be configured to ensure that the only visible light source to the user is the display screen 412, enhancing the immersive effect of using the device 400. A screen divider may be used to separate the screen 412 into independently driven stereoscopic regions, each of which is visible only through a corresponding one of the lenses 422. Hence, the immersive VR stereoscopic display device 400 may be used to provide stereoscopic display output, providing a more realistic perception of 3D space for the user.
The immersive VR stereoscopic display device 400 may further comprise a bridge (not shown) for positioning over the user's nose, to facilitate accurate positioning of the lenses 422 with respect to the user's eyes. The device 400 may further comprise an elastic strap or band 424, or other headwear for fitting around the user's head and holding the device 400 to the user's head.
The immersive VR stereoscopic display device 400 may include additional electronic components of a display and communications unit 402 (e.g., a tablet computer or smartphone) in relation to a user's head 430. When wearing the support 426, the user views the display 412 through the pair of lenses 422. The display 412 may be driven by the Central Processing Unit (CPU) 403 and/or Graphics Processing Unit (GPU) 410 via an internal bus 417. Components of the display and communications unit 402 may further include, for example, a transmit/receive component or components 418, enabling wireless communication between the CPU and an external server via a wireless coupling. The transmit/receive component 418 may operate using any suitable high-bandwidth wireless technology or protocol, including, for example, cellular telephone technologies of the 3rd, 4th, or 5th generation (3G, 4G, or 5G), such as 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE), Global System for Mobile communications (GSM) or Universal Mobile Telecommunications System (UMTS), and/or a wireless local area network (WLAN) technology, for example using a protocol such as Institute of Electrical and Electronics Engineers (IEEE) 802.11. The transmit/receive component or components 418 may enable streaming of video data to the display and communications unit 402 from a local or remote video server, and uplink transmission of sensor and other data to the local or remote video server for control or audience response techniques as described herein.
Components of the display and communications unit 402 may further include, for example, one or more sensors 414 coupled to the CPU 403 via the communications bus 417. Such sensors may include, for example, an accelerometer/inclinometer array providing orientation data for indicating an orientation of the display and communications unit 402. As the display and communications unit 402 is fixed to the user's head 430, this data may also be calibrated to indicate an orientation of the head 430. The one or more sensors 414 may further include, for example, a Global Positioning System (GPS) sensor indicating a geographic position of the user. The one or more sensors 414 may further include, for example, a camera or image sensor positioned to detect an orientation of one or more of the user's eyes, or to capture video images of the user's physical environment (for VR mixed reality), or both. In some embodiments, a camera, image sensor, or other sensor configured to detect a user's eyes or eye movements may be mounted in the support structure 426 and coupled to the CPU 403 via the bus 416 and a serial bus port (not shown), for example, a Universal Serial Bus (USB) or other suitable communications port. The one or more sensors 414 may further include, for example, an interferometer positioned in the support structure 404 and configured to indicate a surface contour of the user's eyes. The one or more sensors 414 may further include, for example, a microphone, an array of microphones, or other audio input transducer for detecting spoken user commands or verbal and non-verbal audible reactions to display output. The one or more sensors may include a subvocalization mask using electrodes as described by Arnav Kapur, Pattie Maes and Shreyas Kapur in a paper presented at the Association for Computing Machinery's ACM Intelligent User Interface conference in 2018. Subvocalized words might be used as command input, as indications of arousal or valence, or both. The one or more sensors may include, for example, electrodes or a microphone to sense heart rate, a temperature sensor configured for sensing skin or body temperature of the user, an image sensor coupled to an analysis module to detect facial expression or pupil dilation, or a microphone to detect verbal and nonverbal utterances. Other biometric sensors for collecting biofeedback data, including nervous system responses capable of indicating emotion via algorithmic processing, may include any sensor as already described in connection with FIG. 3.
Components of the display and communications unit 402 may further include, for example, an audio output transducer 420, for example a speaker or piezoelectric transducer in the display and communications unit 402 or audio output port for headphones or other audio output transducer mounted in headgear 424 or the like. The audio output device may provide surround sound, multichannel audio, so-called ‘object-oriented audio’, or other audio track output accompanying stereoscopic immersive VR video display content. Components of the display and communications unit 402 may further include, for example, a memory device 408 coupled to the CPU 403 via a memory bus. The memory 408 may store, for example, program instructions that when executed by the processor cause the apparatus 400 to perform operations as described herein. The memory 408 may also store data, for example, audio-video data in a library or buffered during streaming from a network node.
Having described examples of suitable clients, servers, and networks for performing signal processing of biometric sensor data for detection of neuro-physiological state in communication enhancement applications, more detailed aspects of suitable signal processing methods will be addressed.
A correlating operation 510 uses an algorithm to correlate biometric data for a user or user cohort to a neuro-physiological indicator. Optionally, the algorithm may be a machine-learning algorithm configured to process context-indicating data in addition to biometric data, which may improve accuracy. Context-indicating data may include, for example, user location, user position, time-of-day, day-of-week, ambient light level, ambient noise level, and so forth. For example, if the user's context is full of distractions, biofeedback data may have a different significance from that in a quiet environment.
As used herein, a “neuro-physiological indicator” is a machine-readable symbolic value that relates to a real-time neuro-physiological state of a user engaged in a social interaction. The indicator may have constituent elements, which may be quantitative or non-quantitative. For example, an indicator may be designed as a multi-dimensional vector with values representing intensity of psychological qualities such as cognitive load, arousal, and valence. “Valence” in psychology and as used herein means the state of attractiveness or desirability of an event, object or situation; valence is said to be positive when a subject feels something is good or attractive and negative when the subject feels the object is repellant or bad. “Arousal” in psychology and as used herein means the state of alertness and attentiveness of the subject. A machine learning algorithm may include at least one supervised machine learning (SML) algorithm, for example, one or more of a linear regression algorithm, a neural network algorithm, a support vector algorithm, a naïve Bayes algorithm, a linear classification module or a random forest algorithm.
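For illustration only, a supervised model of the kind listed above (here, a random forest) might be trained to map biometric features to a valence score, as in the following Python sketch. The feature layout and training values are assumptions of this description.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Each row: [mean GSR, EEG alpha power, fEMG zygomaticus level, pupil dilation (mm)]
    X_train = np.array([
        [0.42, 0.31, 0.10, 3.1],
        [0.55, 0.22, 0.45, 3.8],
        [0.18, 0.40, 0.05, 2.9],
    ])
    y_valence = np.array([0.1, 0.8, -0.3])    # human-scored labels from training data

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_valence)
    print(model.predict([[0.50, 0.25, 0.30, 3.5]]))   # estimated valence for new sensor data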
An event detection operation 520 analyzes a time-correlated signal from one or more sensors during output of a video game or other application facilitating social interaction to a user and detects events wherein the signal exceeds a threshold. The threshold may be a fixed predetermined value, or a variable number such as a rolling average. An example for GSR (galvanic skin response) data is provided herein below. Discrete measures of neuro-physiological response may be quantified for each event. Neuro-physiological state cannot be measured directly; therefore, sensor data indicates sentic modulation. Sentic modulations are modulations of biometric waveforms attributed to neuro-physiological states or changes in neuro-physiological states. In an aspect, to obtain baseline correlations between sentic modulations and neuro-physiological states, player actors may be shown a known visual stimulus (e.g., from focus group testing or a personal calibration session) to elicit a certain type of emotion. While under the stimulus, the test module may capture the player actor's biometric data and compare stimulus biometric data to resting biometric data to identify sentic modulation in biometric data waveforms.
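A minimal Python sketch of event detection against a rolling-average threshold follows; the window length, threshold factor and simulated GSR trace are illustrative assumptions of this description.

    import numpy as np

    def detect_events(signal, window=50, factor=1.5):
        """Return indices where the signal exceeds `factor` times its rolling average."""
        events = []
        for i in range(window, len(signal)):
            baseline = np.mean(signal[i - window:i])   # rolling-average threshold
            if signal[i] > factor * baseline:
                events.append(i)
        return events

    gsr = np.abs(np.random.default_rng(0).normal(0.3, 0.05, 1000))   # simulated GSR trace
    gsr[400:420] += 0.5                 # simulated arousal response
    print(detect_events(gsr))           # indices concentrated near sample 400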
CNS measurement and related methods may be used as a driver or control parameter for social interaction applications. Measured errors between intended effects and group response may be useful for informing design of a video game or other application facilitating social interaction, distribution and marketing, or any activity that is influenced by a cohort's neuro-physiological response to a social interaction application. In addition, the measured errors can be used in a computer-implemented application module to control or influence real-time operation of a social interaction application experience. Use of smartphones or tablets may be useful during focus group testing because such programmable devices already include one or more sensors for collection of biometric data. For example, Apple's iPhone™ includes front-facing stereoscopic cameras that may be useful for eye tracking, FAU detection, pupil dilation measurement, heart rate measurement and ambient light tracking. Participants in the focus group may view the social interaction application on the smartphone or similar device, which collects biometric data, with each participant's permission, via a focus group application operating on the viewing device.
A normalization operation 530 performs an arithmetic or other numeric comparison between test data for known stimuli and the measured signal for the user and normalizes the measured value for the event. Normalization compensates for variation in individual responses and provides a more useful output. Once the input sensor events are detected and normalized, a calculation operation 540 determines a CNS value for a user or user cohort and records the values in a time-correlated record in a computer memory.
Machine learning, a form of artificial intelligence (AI), can be an efficient tool for uncovering correlations between complex phenomena. As shown in FIG. 6, a machine learning (ML) training process 630 may be used to discover such correlations between biometric sensor data and neuro-physiological states.
The ML training process 630 compares human and machine-determined scores of social interactions and uses iterative machine learning methods as known in the art to reduce error between the training data and its own estimates. Creative content analysts may score data from multiple users based on their professional judgment and experience. Individual users may score their own social interactions. For example, users willing to assist in training their personal ‘director software’ to recognize their neuro-physiological states might score their own emotions while playing a game or engaging in other social interaction. A problem with this approach is that the user scoring may interfere with their normal reactions, misleading the machine learning algorithm. Other training approaches include clinical testing of subject biometric responses over short social interactions, followed by surveying the clinical subjects regarding their neuro-physiological states. A combination of these and other approaches may be used to develop training data for the machine learning process 630.
Composite Neuro-physiological State is a measure of composite neuro-physiological response throughout the user experience of a video game or other application facilitating social interaction, which may be monitored and scored during or after completion of the experience for different time periods. Overall user enjoyment is measured as the difference between expectation biometric data modulation power (as measured during calibration) and the average sustained biometric data modulation power. Measures of user engagement may be made by other methods and correlated to Composite Neuro-physiological State or made a part of scoring Composite Neuro-physiological State. For example, exit interview responses or acceptance of offers to purchase, subscribe, or follow may be included in or used to tune calculation of Composite Neuro-physiological State. Offer-response rates may be used during or after participation in a social interaction experience to provide a more complete measure of user neuro-physiological response. However, it should be appreciated that the purpose of calculating CNS does not necessarily include increasing user engagement with passive content, but may be primarily directed to controlling aspects of game play for providing a different and more engaging user experience of social interactions.
The user's mood going into the interaction affects how the narrative entertainment is interpreted, so the computation of CNS might calibrate mood out. If a process is unable to calibrate out mood, it may take mood into account in the operation of the social media application. For example, if a user's mood is depressed, a social interaction application might favor more positively valenced interactions or matching to more sympathetic partners. For further example, if a user's mood is elevated, the application might favor more challenging encounters. The systems and methods of the present disclosure will work best for healthy and calm individuals, though they will enable use of CNS in controlling operation of social interaction applications for everyone who partakes.
Neuro-physiological spaces may be characterized by more than two axes.
In the following detailed example, neuro-physiological state determination from biometric sensors is based on the valence/arousal neuro-physiological model, where valence is positive/negative and arousal is magnitude. From this model, producers of social interaction applications and other creative productions can verify the intention of the social experience by measuring social theory constructs such as tension (hope vs. fear), rising tension (increase in arousal over time) and more. During social interaction mediated through the application, an algorithm can use the neuro-physiological model to operate the application dynamically based on the psychological state or predisposition of the user. The inventive concepts described herein are not limited to the CNS neuro-physiological model described herein and may be adapted for use with any useful neuro-physiological model characterized by quantifiable parameters.
In a test environment, electrodes and other sensors can be placed manually on subject users in a clinical setting. For consumer applications, sensor placement should be less intrusive and more convenient. For example, image sensors in visible and infrared wavelengths can be built into display equipment. For further example, a phased-array radar emitter may be fabricated as a microdevice and placed behind the display screen of a mobile phone or tablet, for detecting biometric data such as facial action units or pupil dilation. Where a user wears gear or grasps a controller, as when using VR equipment, electrodes can be built into headgear, controllers, and other wearable gear to measure skin conductivity, pulse, and electrical activity.
Target story arcs based on a video game or other application facilitating social interaction can be stored in a computer database as a sequence of targeted values in any useful neuro-physiological model for representing user neuro-physiological state in a social interaction, for example a valence/arousal model. Using the example of a valence/arousal model, a server may perform a difference calculation to determine the error between the planned/predicted and measured arousal and valence. The error may be used in application control or for generating an easily understood representation. Once a delta between the predicted and measured values passes a threshold, the social interaction application software may command a branching action. For example, if the user's valence is in the 'wrong' direction based on the game design, the processor may change the content by the following logic: if the absolute value of (Valence Predicted−Valence Measured) exceeds a threshold, then change content. The change in content may be any of several different items selected based on what the software has learned about the player-actor, or may be a trial or recommendation from an AI process. Likewise, if the arousal error exceeds a threshold fraction (e.g., 50%) of the predicted value (absolute value of (error)>0.50×Predicted), the processor may change the content.
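Expressed as code, the branching test may resemble the following Python sketch; the tolerance values are illustrative assumptions of this description.

    def should_branch(valence_pred, valence_meas, arousal_pred, arousal_meas,
                      valence_tol=0.2, arousal_tol=0.5):
        valence_error = abs(valence_pred - valence_meas)
        arousal_error = abs(arousal_pred - arousal_meas)
        # Branch when the valence error passes a fixed threshold, or the arousal
        # error exceeds a fraction (e.g., 50%) of the predicted arousal.
        return (valence_error > valence_tol or
                arousal_error > arousal_tol * abs(arousal_pred))

    if should_branch(0.6, -0.1, 0.7, 0.2):
        print("command branching action: change content")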
Likewise, expectation power Px covers a period 'tx' that equals a sum of 'm' number of event power periods Δtx for the expectation content:

tx=Δtx1+Δtx2+ . . . +Δtxm  Eq. 2
Each of powers Pv and Px is, for any given event 'n' or 'm', a dot product of a power vector P̄ and a weighting vector W̄ of dimension i, as follows:

Pvn=P̄n·W̄  Eq. 3

Pxm=P̄m·W̄  Eq. 4
In general, the power vector can be defined variously. In any given computation of CNS the power vectors for the social interaction event and the expectation baseline should be defined consistently with one another, and the weighting vectors should be identical. A power vector may include arousal measures only, valence values only, a combination of arousal measures and valence measures, or a combination of any of the foregoing with other measures, for example a confidence measure. A processor may compute multiple different power vectors for the same user at the same time, based on different combinations of sensor data, expectation baselines, and weighting vectors. In one embodiment, CNS is calculated using power vectors defined by a combination of ‘j’ arousal measures ‘aj’ and ‘k’ valence measures ‘vk’, each of which is adjusted by a calibration offset ‘C’ from a known stimulus, wherein j and k are any non-negative integer, as follows:
P̄C=(a1C1, . . . ,ajCj, . . . ,vkCj+k)  Eq. 5

wherein Cj=Sj−SjOj=Sj(1−Oj)  Eq. 6
The index ‘j’ in Equation 6 signifies an index from 1 to j+k, Sj signifies a scaling factor and Oj signifies the offset between the minimum of the sensor data range and its true minimum. A weighting vector corresponding to the power vector of Equation 5 may be expressed as:
W̄=(w1, . . . ,wj,wj+1, . . . ,wj+k)  Eq. 7
wherein each weight value scales its corresponding factor in proportion to the factor's relative estimated reliability.
With calibrated dot products Pvn and Pxm, a processor may compute CNS as a ratio of aggregate event power to aggregate expectation power, scaled by the ratio of the respective time periods:

CNS=(tx/tv)·(ΣnPvn)/(ΣmPxm)  Eq. 8
The ratio tx/tv normalizes inequality in the disparate time series sums and renders the ratio unitless. A user CNS value greater than 1 indicates that a user/player actor/viewer is experiencing a neuro-physiological response greater than baseline for the type of social interaction at issue. A user CNS value less than 1 indicates a neuro-physiological response less than baseline for the type of social interaction. A processor may compute multiple different CNS values for the same user at the same time, based on different power vectors.
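A worked numeric sketch of Equations 5 through 8 in Python follows, under stated assumptions: two arousal measures and one valence measure per power vector, calibration offsets already applied, and illustrative weights and time periods.

    import numpy as np

    W = np.array([0.5, 0.3, 0.2])    # weighting vector (Eq. 7), reliability-based

    def event_power(calibrated_factors):
        """Dot product of a calibrated power vector with W (cf. Eqs. 3-5)."""
        return float(np.dot(calibrated_factors, W))

    # Event powers for n=2 social interaction events and m=1 expectation event.
    Pv = [event_power([0.7, 0.6, 0.4]), event_power([0.9, 0.8, 0.5])]
    Px = [event_power([0.4, 0.5, 0.3])]
    tv, tx = 20.0, 15.0               # total event periods, in seconds

    cns = (tx / tv) * (sum(Pv) / sum(Px))    # Eq. 8
    print(f"CNS = {cns:.2f}")                # > 1: response above baseline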
Equation 5 describes a calibrated power vector made up of arousal and valence measures derived from biometric sensor data. In an alternative, the processor may define a partially uncalibrated power vector in which the sensor data signal is scaled as part of lower-level digital signal processing before conversion to a digital value but not offset for a user as follows:
P̄=(a1, . . . ,aj,v1, . . . ,vk)  Eq. 9
If using a partially uncalibrated power vector, an aggregate calibration offset may be computed for each factor and subtracted from the dot products Pvn and Pxm. In such case, a calibrated value of the power vector dot product Pv may be computed as:

Pv(cal)=Pv−Cv  Eq. 11

wherein Cv denotes the aggregate calibration offset. The calibrated power vector dot product Px may be determined in like manner and used with Pv in Equation 8 to compute CNS.
Referring again to the method 800 in which the foregoing expressions can be used (FIG. 8), the process may begin with calibrating the sensor data.
Calibration can have both scaling and offset characteristics. To be useful as an indicator of arousal, valence, or other psychological state, sensor data may need calibrating with both scaling and offset factors. For example, GSR may in theory vary between zero and 1, but in practice depends on fixed and variable conditions of human skin that vary across individuals and with time. In any given session, a subject's GSR may range between some GSRmin>0 and some GSRmax<1. Both the magnitude of the range and its scale may be measured by exposing the subject to known stimuli and estimating the magnitude and scale of the calibration factor by comparing the results from the session with known stimuli to the expected range for a sensor of the same type. In many cases, the reliability of calibration may be doubtful or calibration data may be unavailable, making it necessary to estimate calibration factors from live data. In some embodiments, sensor data might be pre-calibrated using an adaptive machine learning algorithm that adjusts calibration factors for each data stream as more data is received, sparing higher-level processing from the task of adjusting for calibration.
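For illustration, a scale-and-offset calibration of a single GSR stream, of the general kind contemplated above, might be estimated as in the following Python sketch; the session extremes and nominal range are assumptions of this description.

    observed_min, observed_max = 0.15, 0.65   # subject's GSR range under known stimuli
    nominal_min, nominal_max = 0.0, 1.0       # expected range for this sensor type

    S = (nominal_max - nominal_min) / (observed_max - observed_min)   # scaling factor
    O = observed_min                                                  # offset

    def calibrate(raw):
        return (raw - O) * S + nominal_min    # map session range onto nominal range

    print(calibrate(0.40))    # 0.5: mid-range response for this subject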
Once sensors are calibrated, the system normalizes the sensor response data for genre differences at 812, for example using Equation 8. Different types of social interactions produce different valence and arousal scores. For example, first-person shooter games have a different pace, focus, and intensity from online Poker or social chat. Thus, engagement power cannot be compared across different application types unless the engagement profile of the application type is considered. Genre normalization scores the application relative to applications of the same type, enabling comparison on an equivalent basis across genres. Normalization 812 may be performed on a user or users before beginning play. For example, users may play a trial, simulated game before the real game, and a processor may use data from the simulated game for normalization. In an alternative, a processor may use archived data for the same users or same user cohort to calculate expectation power. Expectation power is calculated using the same algorithms as are used or will be used for measurements of event power, and can be adjusted using the same calibration coefficients 816. The processor stores the expectation power 818 for later use.
At 820, a processor receives sensor data during play of the subject content and calculates event power for each measure of concern, such as arousal and one or more valence qualities. At 828, the processor sums or otherwise aggregates the event power for the content after play is concluded, or on a running basis during play. At 830, the processor calculates a representation of the user's neuro-physiological state, for example, Composite Neuro-physiological State (CNS) as previously described. The processor first applies applicable calibration coefficients and then calculates the CNS by dividing the aggregated event power by the expectation power as described above.
Optionally, the calculation function 820 may include comparing, at 824, an event power for each detected event, or for a lesser subset of detected events, to a reference for a social/game experience. A reference may be, for example, a baseline defined by a game designer or by the user's prior data. For example, in Poker or similar wagering games, bluffing is a significant part of game play. A game designer may compare a current event power (e.g., measured when a user is placing a bet) with a baseline reference (e.g., measured between hands or prior to the game). At 826, the processor may save, increment or otherwise accumulate an error vector value describing the error for one or more variables. The error vector may include a difference between the references and a measured response for each measured value (e.g., arousal and valence values) for a specified event or period of a social interaction. The error vector and matrix of vectors may be useful for content evaluation or content control.
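A minimal Python sketch of accumulating the error vector of step 826 follows; the reference values and storage layout are assumptions of this description.

    errors = []    # one error vector per detected event

    def record_error(ref, measured):
        """Append (arousal error, valence error) for one event; ref and measured
        are (arousal, valence) pairs."""
        errors.append((ref[0] - measured[0], ref[1] - measured[1]))

    record_error(ref=(0.6, 0.2), measured=(0.9, -0.1))   # e.g., a bet-placing event
    record_error(ref=(0.6, 0.2), measured=(0.7, 0.1))
    print(errors)    # matrix of error vectors for evaluation or control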
Error measurements may include or augment other metrics for content evaluation. Composite Neuro-physiological State and error measurements may be compared to purchases, subscriptions, or other conversions related to presented content. The system may also measure consistency in audience response, using standard deviation or other statistical measures. The system may measure Composite Neuro-physiological State, valence and arousal for individuals, cohorts, and aggregate audiences. Error vectors and CNS may be used for a variety of real-time and offline tasks.
Accessories such as headphones 920, hats or VR headsets may be equipped with EEG sensors 922. A processor of the mobile device may detect arousal by pupil dilation via the 3D cameras 908, 910, which also provide eye tracking data. A calibration scheme may be used to discriminate pupil dilation by aperture (light changes) from changes due to emotional arousal. Both front and back cameras of the device 904 may be used for ambient light detection, and for calibration of pupil dilation detection factoring out dilation caused by lighting changes. For example, a measure of pupil dilation distance (mm) versus the dynamic range of light expected during the performance for anticipated ambient light conditions may be made during a calibration sequence. From this, a processor may separate dilation effects caused by lighting from effects caused by emotion or cognitive workload arising from the design of the narrative, by measuring the extra dilation displacement from narrative elements against the results from the calibration signal tests.
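The lighting component of pupil dilation might, for example, be fit during the calibration sequence and subtracted at run time, as in the following Python sketch; the linear log-lux model and the sample values are assumptions of this description.

    import numpy as np

    # Calibration sweep: ambient light (lux) versus measured pupil diameter (mm).
    lux = np.array([10, 50, 100, 200, 400])
    diameter_mm = np.array([6.5, 5.6, 4.9, 4.1, 3.4])
    slope, intercept = np.polyfit(np.log(lux), diameter_mm, 1)

    def arousal_dilation(measured_mm, current_lux):
        expected = slope * np.log(current_lux) + intercept   # lighting-only estimate
        return measured_mm - expected    # residual dilation attributable to arousal

    print(arousal_dilation(5.4, 100))    # positive residual suggests arousal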
Instead of, or in addition to, a stereoscopic camera 908 or 910, a mobile device 904 may include a radar sensor 930, for example a multi-element microchip array radar (MEMAR), to create and track facial action units and pupil dilation. The radar sensor 930 can be embedded underneath the screen 906 of a mobile device 904 and can sense through the screen with or without visible light on the subject. The screen 906 is invisible to the RF spectrum radiated by the imaging radar arrays, which can thereby perform radar imaging through the screen in any amount of light or darkness. In an aspect, the MEMAR sensor 930 may include two small RF radar chip antennas with six elements each, which together create an imaging radar. An advantage of the MEMAR sensor 930 over optical sensors 908, 910 is that illumination of the face is not needed, so sensing of facial action units, pupil dilation and eye tracking is not impeded by darkness. While only one 6-chip MEMAR array 930 is shown, a mobile device may be equipped with two or more similar arrays for more robust sensing capabilities.
The method 1000 may be used for competitive bluffing games with or without monetary wagers, for example Poker, Werewolf™, Balderdash™ and similar games. In these games, players compete to fool other players. The method may be used in a training mode wherein only the user sees his or her own CNS indicators, in a competitive mode wherein every player sees the other players' CNS indicators, in a perquisite mode wherein players may win or be randomly awarded access to another player's CNS indicators, in an interactive mode wherein the processor modifies game play based on one or more players' CNS indicators, a spectator mode in which CNS values are provided to spectators, or any combination of the foregoing.
The method 1000 may be used for any game or other social interaction to improve the user experience in response to CNS indicators. For example, if CNS indicators show frustration, the processor may ease game play; if the indicators show boredom, the processor may introduce new elements, change technical parameters of the game affecting appearance and pacing, or provide a different challenge. In an aspect, a processor may apply a machine learning algorithm to optimize any desired parameter (e.g., user engagement) based on correlating CNS data to game play.
The method 1000 may be used in social games involving sharing preferences for any subject matter, for example, picking a preferred friend or date, or choosing a favorite item of clothing or merchandise, meme, video clip, photograph, art piece, or other stimulus, with or without revealing a user's CNS data to other players. Such social games may be played with or without a competitive element, such as electing a most favored person or thing.
The method 1000 may be used in social games for enhancing interpersonal communication, by allowing participants to better understand the emotional impact of their social interactions, and to adjust their behavior accordingly.
The method 1000 may be used in social games in which, like bluffing, the object includes concealing the player's emotional state, or in games in which the object includes revealing the player's emotional state. In either case, the CNS data may provide a quantitative or qualitative basis for comparing the performances of different players.
The method 1000 may be used in athletic contests. A processor may provide the CNS to a device belonging to each competitor or competitor's team for managing play. In an alternative, or in addition, a processor may provide the CNS to a device belonging to one or more referees or spectators to improve safety or enjoyment of the contest. In an alternative, or in addition, a processor may provide the CNS to a device belonging to an opponent or the opponent's team, to enable new styles of play.
The method 1000 may include, at 1002, a processor determining, obtaining, or assigning one or more players' identifications and corresponding baseline neuro-physiological responses to stimuli that simulate one or more social interactions that may occur during a social interaction application. For example, in an aspect, the baselines may include baseline arousal and valence values. In an aspect, the baseline neuro-physiological responses may be obtained from a database of biometric data (e.g., 610:
At 1004, the processor initiates play of the social interaction application in which the one or more players (including human and/or computer players) participate. At 1006, the processor determines whether an event as previously described herein has occurred, such that a measurement of CNS would be triggered. To do so, for example, the processor may monitor the behavior or neuro-physiological state of the one or more players, using sensors and client devices as described herein. For example, the behavior of players may be monitored using sensors described below with respect to the example implementation in a casino game room. If no event is detected, at 1008, the processor continues to wait until one is detected. If an event is detected, at 1010, the processor proceeds to calculate the measurement of the CNS value for the one or more players.
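A minimal control-flow sketch of operations 1004 through 1012 (ending per the check at 1030) might resemble the following Python; `detect_event`, `compute_cns`, and the other names are hypothetical stand-ins for the sensor and CNS machinery described herein, not the actual algorithm.

```python
import time

def detect_event(samples):
    """Stand-in event detector (1006): returns an event label or None."""
    if any(s.get("bet_delta", 0) > 0 for s in samples.values()):
        return "wager_raised"
    return None

def compute_cns(sample):
    """Stand-in CNS calculation from one player's sensor sample."""
    return 0.5 * sample.get("arousal", 0.0) + 0.5 * sample.get("valence", 0.0)

def run_session(players, read_sensors, session_over, store, poll_s=0.1):
    """Event loop for operations 1004-1012."""
    while not session_over():
        samples = {p: read_sensors(p) for p in players}   # monitor players (1006)
        event = detect_event(samples)
        if event is None:
            time.sleep(poll_s)                            # keep waiting (1008)
            continue
        for p in players:
            cns = compute_cns(samples[p])                 # calculate CNS (1010)
            store.append((event, p, cns))                 # store measurement (1012)
```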
For example, in the method 1000 involving a game of Poker among a first player and two or more other players (including a dealer), suppose the first player is "under the gun" (meaning required to match another player's bet or leave the game), immediately following the player that has posted a big blind in the amount of $75. The hand begins, and the dealer deals two down cards to each player. The first player, under the gun, calls and raises the bet by placing chips in an amount greater than the big blind, e.g., $5,000, as detected by the system and sensors described herein. In such case, the processor at 1006 determines that an event (e.g., a wager is raised) has occurred. At 1010, the processor calculates the measurement of CNS value for the first player upon the event, and at 1012, the processor stores a set of data that represents the measured CNS in a memory.
In an aspect, at 1014, the processor determines whether to output the calculated CNS to the first player. For example, suppose the hand in the just-described game of Poker was previously designated by the first player as a training session against a computer algorithm. In such case, the processor determines that the CNS calculated at 1010 should be output to the first player at 1014.
At 1016, the first player may perceive or sense the output of the calculated CNS in any one or more suitable qualitative or quantitative forms, including, for example, digital representations (e.g., numerical values of arousal or valence, or other biometric data such as temperature, perspiration, facial expressions, postures, gestures, etc.), percentages, colors, sounds (e.g., audio feedback or music), tactile feedback, and the like. For example, suppose the first player was bluffing when he raised the bet to $5,000, and has exhibited neuro-physiological signs, detectable by biometric sensors of the present disclosure, consistent with an event of bluffing. In such case, in an implementation of the training mode, the processor may provide to the first player an audio feedback, "bluffing," a recognizable tactile feedback suggesting that the bluff has been detected, an alert message on a display showing the text "bluffing," and the like.
When the processor determines that the calculated CNS should not be outputted to the first player, the processor at 1018 determines whether the calculated CNS should be outputted to other players. For example, continuing the example of the Poker game in training mode, wherein at 1014 the processor has determined that the first player is bluffing, but in an alternative training mode where the detection of bluffing is not revealed or outputted to the first player, the processor may instead output the calculated CNS to the other players, as part of the training mode programming. At 1020, the calculated CNS may be output to the other players in a manner similar to the output to the first player at 1016.
At 1022, the processor may change the play of the social interaction application. For example, continuing the example of the game of Poker in training mode, the processor may determine the course of action of one or more computer algorithm players participating in the Poker game, after the bluffing by the first player is detected as described above. For example, suppose the computer algorithm player, prior to the first player raising the bet, was prepared to call the bet by matching the big blind ($75). Instead, the processor changes the play: the computer algorithm player calls the first player's bet and raises it to $5,100.
At 1024, the processor may calculate an error vector for the measurement of CNS. For example, continuing the example of the Poker game, assume that at the end of the entire hand, the first player, whom the processor previously determined to be "bluffing," turns out to win the hand. Then, at 1024, the processor calculates the error vector for the "bluffing" determination. At 1026, the processor selects an action based on the calculated error. For example, the processor at 1026 may update the "bluffing" parametric values so that, for the same set of biometric data previously flagged as "bluffing," the processor would no longer deem the set as "bluffing." At 1028, the processor may implement a new action. For example, continuing the Poker game example where the parameters for detecting "bluffing" have been updated, in a future round in which the first player participates, the processor would not flag the same set of biometric data as "bluffing"; instead, the computer algorithm player may, for example, decide to fold in case the same set of biometric data is detected from the first player.
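One simple way such a parametric update could work is sketched below, with the error vector reduced to a single scalar for clarity; the threshold representation, score, and margin are illustrative assumptions rather than the disclosure's actual parameters.

```python
def update_bluff_threshold(threshold: float, flagged_bluff: bool,
                           player_won_hand: bool, bluff_score: float,
                           margin: float = 0.01) -> float:
    """Operations 1024-1028: a hand flagged as a bluff that the player
    actually won counts as an error, so the detection threshold is raised
    past the observed score."""
    error = flagged_bluff and player_won_hand      # 1024: calculate error
    if error:                                      # 1026: select corrective action
        threshold = max(threshold, bluff_score) + margin
    return threshold                               # 1028: new parameters take effect

# e.g., a bluff score of 0.62 tripped a 0.60 threshold, but the player won:
new_threshold = update_bluff_threshold(0.60, True, True, 0.62)
# now 0.62 no longer exceeds the threshold, so the same biometric data
# would not be deemed "bluffing" in a future round
```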
The operation 1024 may be performed for other reasons also. For example, in a social introduction game, the processor may determine, based on a high error value, that one or more participants in a social introduction session are uneasy. Then, at 1026, the processor may select an operation to reduce the detected discomfort. For example, the processor may execute an intervention script to detect and reduce the source of uneasiness, up to and including, at 1028, removing a participant from the session and placing the removed participant in a new session with different people. For further example, if the processor determines that a player of an action game is frustrated or bored, it may reduce or increase the level of challenge presented by the game to increase the player's interest and time of play.
At 1030, the processor monitors whether the social interaction application is finished. For example, continuing the example of the Poker game, when the first player playing against other computer algorithm players leaves the table or otherwise stops participating in the game, the game is terminated.
Specific embodiments of the method 1000 may include using biometric feedback to improve the accuracy of estimates of player intent in casino games involving obfuscation of the strength of a player's standing, including anticipation of bets, raises, and bluffs. Systems and sensors as described herein may be used to record biometrics and/or player behaviors, gestures, postures, facial expressions, and other biometric indicators, through video and audio capture, thermal imaging, breath monitoring, and other biometrics while a player is engaged in a casino game. A processor may record the CNS score in reference to calibration with emotional feedback when subjects are calm compared to when they are bluffing or otherwise engaging in acts of deceit. For example, an implementation in a game room of a casino or the like may include: 1) deploying front-facing stereo cameras (eye tracking, pupil dilation, FAU), microphones (audio speech analysis, NLP word analysis), phased array sensors (eye tracking, pupil dilation, FAU), IR sensors (fNIR), and laser breath monitors in a casino setting at Poker and Poker-derivative games involving bets and bluffing, providing real-time analysis and feedback to casino managers and dealers; 2) improving Poker-playing computer algorithms by supplying otherwise missing information about human opponents' biometric status; and 3) using machine learning to enable a Poker-playing computer to detect human intent and anticipate bets, raises, and bluffs. Other applications may include, for example, deploying Poker-playing computers against human champion Poker players in a tournament setting, i.e., to test ultimate human versus computer Poker skill. In some implementations, a provider may package a hardware kit including stereo cameras, microphones, phased array, IR, and laser sensors for use by Poker-playing professionals to train against a computer algorithm that uses biometrics to detect human intent, for the purpose of improving their game.
Other applications may use biometric feedback in a strategy game involving obfuscation of the strength of a player's standing, for example, regarding military unit or equipment strength, anticipation of attacks, retreats, ruses, ambushes, and bluffs. Biometric feedback for all the players in the strategy game may be provided to human or computer opponents to enhance the accuracy of determining an opponent's state and intent, to increase the challenge of the game, or to offer new forms of play based entirely around bluffing.
Referring to
In a related aspect, the method 1000 may include, at 1120, determining the measure of composite neuro-physiological state at least in part by detecting one or more stimulus events based on the sensor data exceeding a threshold value for a time period. In a related aspect, the method 1000 may include, at 1130, calculating one of multiple event powers for each of the one or more audience members and for each of the stimulus events and aggregating the event powers. In an aspect, the method 1000 may include assigning, by the at least one processor, weights to each of the event powers based on one or more source identities for the sensor data. At 1140, the method 1000 may further include determining the measure of composite neuro-physiological state at least in part by determining valence values based on the sensor data and including the valence values in determining the measure of composite neuro-physiological state. A list of non-limiting examples of suitable sensors is provided above in connection with
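For illustration, a sketch of threshold-based event detection (1120) and weighted aggregation of event powers (1130) might look like the following; defining an event's "power" as its area above threshold, and the weight values themselves, are assumptions made for this example.

```python
import numpy as np

def event_powers(signal, threshold, min_samples=5):
    """Detect stimulus events as runs where the signal exceeds `threshold`
    for at least `min_samples`; each event's power is its area above
    threshold (an assumed definition for this sketch)."""
    powers, run = [], []
    for value in signal:
        if value > threshold:
            run.append(value - threshold)
        else:
            if len(run) >= min_samples:
                powers.append(sum(run))
            run = []
    if len(run) >= min_samples:
        powers.append(sum(run))
    return powers

def aggregate(powers_by_source, weights):
    """Weight event powers by sensor-source identity, then sum (1130)."""
    return sum(weights[src] * sum(p) for src, p in powers_by_source.items())

# Hypothetical GSR trace with one 10-sample excursion above threshold:
gsr = np.concatenate([np.zeros(20), 0.4 * np.ones(10), np.zeros(20)])
powers = {"GSR": event_powers(gsr, threshold=0.1)}
weighted_total = aggregate(powers, weights={"GSR": 0.6})
```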
The method and apparatus described herein for controlling a social interaction application production may be adapted for improving person-to-person communication in virtual or real environments.
Each client 1206, 1216 may output the measures via output devices 1204, 1214, for example a display screen, as a graphical display 1240 or in another useful format (e.g., audible output). The display 1240 or other output may report neuro-physiological state measures for conversation sequence statements or groups of statements. For example, a display 1240 may include an indication of arousal 1246, 1250 or valence 1248, 1252. The system 1200 may provide an alert whenever there is a rapid increase in arousal and also report the valence associated with the increase. A human can then appraise the alert for meaning. The system 1200 may be especially useful for human-to-human communication between players or actors within a virtual immersive experience and may find application in other contexts also.
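A minimal sketch of such an arousal-spike alert might use a short sliding window, as below; the slope rule, window size, and limit are assumed for illustration and are not specified by the disclosure.

```python
from collections import deque

class ArousalAlert:
    """Flag a rapid arousal increase and report the associated valence."""
    def __init__(self, window=10, slope_limit=0.05):
        self.arousal = deque(maxlen=window)
        self.valence = deque(maxlen=window)
        self.slope_limit = slope_limit

    def update(self, arousal: float, valence: float):
        self.arousal.append(arousal)
        self.valence.append(valence)
        if len(self.arousal) == self.arousal.maxlen:
            slope = (self.arousal[-1] - self.arousal[0]) / len(self.arousal)
            if slope > self.slope_limit:
                mean_valence = sum(self.valence) / len(self.valence)
                return {"alert": "rapid arousal increase",
                        "valence": mean_valence}
        return None  # no alert this sample
```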
In view of the foregoing, and by way of additional example,
Referring to
The method 1300 may include, at 1320, receiving sensor data from at least one sensor positioned to sense a neuro-physiological response of the user related to the social interaction. The sensor data may include any one or more of the data described herein for arousal, valence, or other measures.
The method 1300 may include at 1330 determining a Composite Neuro-physiological State (CNS) value for the social interaction, based on the sensor data, using an algorithm as described herein above. In an alternative, the method may determine a different measure for neuro-physiological response. The method may include at 1340 recording the CNS value or other neuro-physiological measure correlated to the social interaction in a computer memory. In an alternative, the method may include indicating the CNS value or other neuro-physiological measure to the user and/or recipient. In an alternative, the method may include controlling progress of the social interaction application based at least in part on the CNS value.
Referring to
In another aspect, the method 1300 may include, at 1430, playing the known audio-video stimuli comprising a known non-arousing stimulus and a known arousing stimulus. The method 1300 may include, at 1440, determining the CNS value at least in part by detecting one or more stimulus events based on the sensor data exceeding a threshold value for a time period. The method 1300 may include, at 1450, calculating one of multiple event powers for each of the one or more users and for each of the stimulus events and aggregating the event powers. The method 1300 may include, at 1460, assigning weights to each of the event powers based on one or more source identities for the sensor data.
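For example, a per-user normalization anchored by the two known stimuli at 1430 might be sketched as follows, assuming a simple linear scale between the non-arousing and arousing calibration responses; the function names and values are hypothetical.

```python
def make_normalizer(non_arousing_mean: float, arousing_mean: float):
    """Anchor a per-user arousal scale between the responses to the known
    non-arousing and known arousing calibration stimuli (1430)."""
    lo, hi = non_arousing_mean, arousing_mean
    span = max(hi - lo, 1e-9)       # guard against degenerate calibration
    def normalize(raw: float) -> float:
        return (raw - lo) / span    # 0 ~ calm baseline, 1 ~ known arousing
    return normalize

normalize = make_normalizer(non_arousing_mean=0.8, arousing_mean=2.4)
normalize(1.6)   # -> 0.5 on this user's calibrated arousal scale
```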
Referring to
In a related aspect, the method 1300 may include, at 1530, determining valence values based on the sensor data. The sensor data for valence may include one or more of electroencephalographic (EEG) data, facial electromyography (fEMG) data, video facial action unit (FAU) data, brain machine interface (BMI) data, functional magnetic resonance imaging (fMRI) data, functional near-infrared (fNIR) data, and positron emission tomography (PET) data. The method 1300 may include, at 1540, normalizing the valence values based on like values collected for the known audio-video stimuli. The method 1300 may include, at 1550, determining a valence error measurement based on comparing the valence values to a targeted valence for the social interaction.
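A compact sketch of the normalization at 1540 and the error measurement at 1550 might read as follows, assuming z-score normalization against statistics collected for the known calibration stimuli and a mean absolute deviation from the targeted valence; both choices are illustrative assumptions.

```python
import numpy as np

def valence_error(measured, baseline_mean, baseline_std, target):
    """Normalize valence values against calibration statistics (1540) and
    return the mean absolute deviation from the targeted valence (1550)."""
    normalized = (np.asarray(measured) - baseline_mean) / baseline_std
    return float(np.mean(np.abs(normalized - target)))

# Hypothetical values: three valence samples, calibration mean/std, target +1
err = valence_error([0.2, 0.5, 0.1], baseline_mean=0.0,
                    baseline_std=0.25, target=1.0)
```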
Referring to
As illustrated in
The apparatus 1700 may further include an electrical component 1704 for receiving sensor data from at least one sensor positioned to sense a neuro-physiological response of the user related to the social interaction. The component 1704 may be, or may include, a means for said receiving. Said means may include the processor 1710 coupled to the memory 1716, the processor executing an algorithm based on program instructions stored in the memory. Such algorithm may include a sequence of more detailed operations, for example, configuring a data port to receive sensor data from a known sensor, configuring a connection to the sensor, receiving digital data at the port, and interpreting the digital data as sensor data.
The apparatus 1700 may further include an electrical component 1706 for determining a Composite Neuro-physiological State (CNS) value for the social interaction, based on the sensor data. The component 1706 may be, or may include, a means for said determining. Said means may include the processor 1710 coupled to the memory 1716, the processor executing an algorithm based on program instructions stored in the memory. Such algorithm may include a sequence of more detailed operations, for example, as described in connection with
The apparatus 1700 may further include an electrical component 1708 for at least one of: recording the CNS value correlated to the social interaction in a computer memory; indicating the CNS value to the user; indicating the CNS value to another participant in the social interaction; or controlling progress of the social interaction application based at least in part on the CNS value. The component 1708 may be, or may include, a means for said recording or indicating. Said means may include the processor 1710 coupled to the memory 1716, the processor executing an algorithm based on program instructions stored in the memory. Such algorithm may include a sequence of more detailed operations, for example, encoding the CNS value and storing the encoded value in a computer memory, or sending the encoded value to an output device for presentation to the user.
The apparatus 1700 may optionally include a processor module 1710 having at least one processor. The processor 1710 may be in operative communication with the modules 1702-1708 via a bus 1713 or similar communication coupling. In the alternative, one or more of the modules may be instantiated as functional modules in a memory of the processor. The processor 1710 may initiate and schedule the processes or functions performed by electrical components 1702-1708.
In related aspects, the apparatus 1700 may include a network interface module 1712 or equivalent I/O port operable for communicating with system components over a computer network. A network interface module may be, or may include, for example, an Ethernet port or serial port (e.g., a Universal Serial Bus (USB) port), a Wi-Fi interface, or a cellular telephone interface. In further related aspects, the apparatus 1700 may optionally include a module for storing information, such as, for example, a memory device 1716. The computer readable medium or the memory module 1716 may be operatively coupled to the other components of the apparatus 1700 via the bus 1713 or the like. The memory module 1716 may be adapted to store computer readable instructions and data for effecting the processes and behavior of the modules 1702-1708, and subcomponents thereof, or the processor 1710, the method 1300 and one or more of the additional operations 1400-1600 disclosed herein, or any method for performance by a media player described herein. The memory module 1716 may retain instructions for executing functions associated with the modules 1702-1708. While shown as being external to the memory 1716, it is to be understood that the modules 1702-1708 can exist within the memory 1716 or an on-chip memory of the processor 1710.
The apparatus 1700 may include, or may be connected to, one or more biometric sensors 1714, which may be of any suitable types. Various examples of suitable biometric sensors are described herein above. In alternative embodiments, the processor 1710 may include networked microprocessors from devices operating over a computer network. In addition, the apparatus 1700 may connect to an output device as described herein, via the I/O module 1712 or other output port.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
As used in this application, the terms “component”, “module”, “system”, and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component or a module may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component or a module. One or more components or modules may reside within a process and/or thread of execution, and a component or module may be localized on one computer and/or distributed between two or more computers.
Various aspects will be presented in terms of systems that may include several components, modules, and the like. It is to be understood and appreciated that the various systems may include additional components, modules, etc. and/or may not include all the components, modules, etc. discussed in connection with the figures. A combination of these approaches may also be used. The various aspects disclosed herein can be performed on electrical devices including devices that utilize touch screen display technologies, heads-up user interfaces, wearable interfaces, and/or mouse-and-keyboard type interfaces. Examples of such devices include VR output devices (e.g., VR headsets), AR output devices (e.g., AR headsets), computers (desktop and mobile), televisions, digital projectors, smart phones, personal digital assistants (PDAs), and other electronic devices both wired and wireless.
In addition, the various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD) or complex PLD (CPLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
Operational aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, digital versatile disk (DVD), Blu-ray™, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a client device or server. In the alternative, the processor and the storage medium may reside as discrete components in a client device or server.
Furthermore, the one or more versions may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed aspects. Non-transitory computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, or other format), optical disks (e.g., compact disk (CD), DVD, Blu-ray™ or other format), smart cards, and flash memory devices (e.g., card, stick, or other format). Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope of the disclosed aspects.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
In view of the exemplary systems described supra, methodologies that may be implemented in accordance with the disclosed subject matter have been described with reference to several flow diagrams. While for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the blocks, as some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described herein. Moreover, not all illustrated blocks may be required to implement the methodologies described herein. Additionally, it should be further appreciated that the methodologies disclosed herein are capable of being stored on an article of manufacture to facilitate transporting and transferring such methodologies to computers.
The present application is a continuation of international (PCT) application No. US2019012567 filed Jan. 7, 2019, which claims priority to U.S. provisional patent application Ser. Nos. 62/614,811 filed Jan. 8, 2018, 62/661,556 filed Apr. 23, 2018, and 62/715,766 filed Aug. 7, 2018, which applications are incorporated herein by reference in their entireties.
Number | Date | Country
---|---|---
62/715,766 | Aug. 7, 2018 | US
62/661,556 | Apr. 23, 2018 | US
62/614,811 | Jan. 8, 2018 | US

Number | Date | Country
---|---|---
Parent: PCT/US2019/012567 | Jan. 7, 2019 | US
Child: 16/923,033 | | US