Head-mounted display (HMD) devices can be used in various applications, including military, aviation, medicine, video gaming, entertainment, sports, and so forth. See-through HMD devices allow the user to observe the physical world, while optical elements add light from one or more small micro-displays into the user's visual path, to provide an augmented reality image.
A head mounted display (HMD) device and systems and methods that use HMDs in education and instruction are provided. The HMD may be used in a classroom teaching environment. The HMD may be used to provide holographic instruction. The HMD may be used to provide social coaching. The education or instruction may be tailored to the individual based on known skills, learning styles, or characteristics of the individual.
One embodiment includes a method of enhancing an experience. Monitoring of one or more individuals engaged in the experience is performed. The monitoring is based on data from one or more sensors. The monitoring is analyzed to determine how to enhance the experience. The experience is enhanced based on the analyzing. Enhancing the experience includes presenting a signal to at least one see-through head mounted display worn by one of the individuals.
One embodiment includes a system for enhancing education or instruction. The system includes a processor and a processor readable storage device coupled to the processor. The processor readable storage device has stored thereon instructions which, when executed on the processor, cause the processor to receive sensor data from at least one HMD. Further, the processor monitors one or more individuals based on the sensor data. The processor analyzes the monitoring to determine how to enhance an educational or instructional experience engaged in by the one or more individuals. The processor enhances the educational or instructional experience based on the analyzing. In one embodiment, the enhancing includes providing information to at least one see-through head mounted display (HMD) worn by one or more of the individuals.
One embodiment includes a processor readable storage device that has instructions which, when executed on a processor, cause the processor to perform a method of enhancing an individual's performance. The method includes accessing sensor data, and processing the sensor data to determine information indicative of performance of the individual. The information indicative of performance of the individual is analyzed to determine how to enhance the individual's performance. A signal is provided in a head mounted display worn by the individual to enhance the individual's performance.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
See-through HMD devices can use optical elements such as mirrors, prisms, and holographic lenses to add light from one or two small micro-displays into a user's visual path. The light provides holographic images to the user's eyes via see-through lenses. Technology disclosed herein provides for use of HMDs in a classroom setting. Technology disclosed herein provides for HMD use for holographic instruction. In one embodiment, the HMD is used for social coaching. User profile information may be used to tailor instruction to a specific user based on known skills, learning styles, and/or characteristics.
In one embodiment, the students 13(b) each wear an HMD 2, but the teacher 13(a) is not required to. In this depicted example, the teacher 13(a) is live in-person. However, instruction may be provided without a live teacher. For example, a student 13(b) might access a recorded lesson and view that in their HMD 2. Note the students and teachers are examples of individuals who are engaged in an experience.
Each user has a processing unit 4 that is associated with the HMD 2, in this example. Further details are explained below, but briefly the processing unit 4 may communicate wirelessly or by wireline with its associated HMD 2 to provide processing for the HMD 2. The processing unit 4 may also communicate wirelessly or by wireline over network 50. Therefore, data may be shared. As one example, an HMD 2 may have sensors that can monitor the wearer, as well as a region around (e.g., in front of) the wearer. The sensor data may be analyzed to determine comprehension and/or attention of the student. This information may be provided to the teacher 13(a). Computing system 12 may provide additional processing power. Computing system 12 could be local (e.g., in the classroom or building) or remote (e.g., over the Internet). In one embodiment, the teacher 13(a) can access student profile information from computer system 12, or elsewhere.
In one embodiment, a student 13(b) has access to a presentation device 9. The student may receive instruction and/or feedback through the presentation device 9. In one embodiment, this feedback is based on analyzing sensor data from an HMD 2 worn by an instructor 13(a). The presentation device 9 could have a visual display and/or audio transducer. Examples of presentation device 9 include, but are not limited to,
In one embodiment, the HMD 2 is used for information provision. An example of information provision is to help the wearer prepare a meal. The instructions may be provided to the user via the HMD 2. A computer system 12 may track the progress of the meal preparation using a 3D camera on the front of the HMD 2. The system may access user profile information in this, or other embodiments. For example, if the user is an expert chef, then the system does not need to tell the user how to sauté mushrooms, just to add some sautéed mushrooms.
The sensors 51 could include cameras (2D, 3D, RGB, IR, etc.), heart rate monitors, gyros, GPS, etc. Some of the sensors 51 could be part of an HMD 2. For example, an HMD 2 may have sensors (e.g., image sensors) for tracking eye gaze, as well as front facing cameras. The sensors 51 may be used to collect biometric data regarding the individuals. The sensors 51 also may be used to develop a model of the environment around the individual.
In one embodiment, the eye tracking 53 receives sensor data and determines one or more vectors that define the direction an individual is looking. The environmental modeling 55 receives sensor data and determines a model of the environment. This might be a 2D or 3D model. The model could be of an environment around a user (e.g., an individual, student, or teacher) or of the user. The biometric analyzer 63 is able to analyze biometric data pertaining to an individual. The biometric data could be sensor data including, but not limited to, heart rate, audio, and image data. In one embodiment, the biometric analyzer determines a level of student comprehension. In one embodiment, the biometric analyzer determines a level of student attention.
The error detection/correction 65 may input instructions/models 57(n). The instructions/models may describe a proper solution to a problem, instructions for assembling a product, etc. They may also include models, such as 2D or 3D models that can be displayed in the HMD 2. In one embodiment, the error detection/correction 65 is able to determine correctness of a solution to a problem, based on the instructions 57(n). For example, the error detection/correction 65 may determine whether an individual is solving a math problem correctly. A possible solution or hint may be generated to help the individual solve the problem. In one embodiment, the error detection/correction 65 is able to determine correctness of assembly of an apparatus, based on the instructions/model. The error detection/correction 65 can send an image to HMD content display 59 to cause the HMD 2 to display an image that shows a holographic image to assist in assembling the product. In one embodiment, the error detection/correction 65 is able to analyze performance of the individual, based on the instructions/model. For example, the error detection/correction 65 could analyze the individual's efforts at forming letters, in a handwriting embodiment.
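As a rough illustration of how error detection/correction 65 might compare an individual's work against stored instructions 57(n), the following Python sketch checks observed solution steps against a reference sequence and produces a hint for the first deviation; the step representation and function are hypothetical, not the disclosed implementation.

```python
# Illustrative sketch only: compares observed solution steps against a
# stored reference sequence (e.g., from instructions/models 57(n)) and
# returns a hint for the first step that deviates.

def check_solution(observed_steps, reference_steps):
    """Return (is_correct_so_far, hint_or_None)."""
    for i, observed in enumerate(observed_steps):
        if i >= len(reference_steps):
            return False, "Extra step detected; review the final steps."
        if observed.strip().lower() != reference_steps[i].strip().lower():
            return False, f"Step {i + 1} looks incorrect; expected something like: {reference_steps[i]}"
    if len(observed_steps) < len(reference_steps):
        return True, f"Next, try: {reference_steps[len(observed_steps)]}"
    return True, None  # solution complete and correct


if __name__ == "__main__":
    reference = ["2x + 4 = 10", "2x = 6", "x = 3"]
    observed = ["2x + 4 = 10", "2x = 14"]
    print(check_solution(observed, reference))
```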
The feedback 61 may generate feedback to provide in the HMD 2. In one embodiment, the feedback 61 receives eye tracking data to determine which student 13(b) the teacher 13(a) is looking at, and then accesses the individual profile database 57(2) to provide a GPA, course attendance, etc. The feedback 61 may also receive input from the biometric analyzer 63 to report to the teacher 13(a) (via their HMD 2) which students are comprehending the subject matter.
The individual profile data 57(2) may include information regarding learning styles, characteristics, and/or known skills of individuals 13. In one embodiment, the system builds up this database 57(2). However, the database 57(2) may be built in whole or in part without the aid of the system. As one example, the system could determine whether an individual 13 is right or left handed, strength, agility, etc. and store this for future use. The database 57(2) may contain a level of competency for an individual 13. The system can tailor instruction based thereon.
The teachers/lessons 57(3) may contain recorded lectures, lessons, etc. In one embodiment, the system 75 offers the student 13(b) an option to view a lecture by a different teacher 13 in their HMD 2. This may be based on the profile data 57(2). In one embodiment, the system selects an appropriate lesson for the student 13(b) based on their profile. For example, the same content in a math course might be presented differently based on the individual profile.
The individual record 57(1) contains a record (e.g., recording) of the individual's activities, performance, interactions with others, etc. The computer system 12 may generate the individual record using sensors on the HMD 2, although other sensors may be used. In one embodiment, the individual record allows a parent or teacher 13(a) to analyze the individual's activities, interactions, and performance at a later time. In one embodiment, the individual record 57(1) contains a 3D image of an individual's performance, such as a golf swing. This may be developed by sensors 51 such as a camera system capable of generating 3D images. The computer system 12 may input this performance and generate a holographic image of it for presentation on the HMD 2.
In one embodiment, the computer system 12 may broadcast information. For example, if an individual 13 does particularly well, the computer system 12 might post this on a web page of a social network. Therefore, the individual 13 obtains a sense of accomplishment. This may be especially important for challenged learners. In one embodiment, the computer system 12 may work as an enhanced match maker. For example, if the individual 13 wishes to share information on their likes, dislikes, interests, availability, etc., this may be broadcast to various sites on the network 50.
Note that the foregoing description of the system of
In step 150, one or more individuals that are engaged in some experience are monitored. This may be an educational or instructional experience. The monitoring may involve using one or more sensors to collect sensor data, and providing that sensor data to a computer system 12. The sensor data may include data from camera systems, heart rate sensors, audio sensors, GPS, etc. Note that step 150 may include some processing of the sensor data. In one embodiment, the monitoring includes building a 3D model of the environment. The monitoring may include detecting eye gaze of a wearer of an HMD 2. In one embodiment, the monitoring includes producing one or more images of a student's face. In one embodiment, the monitoring includes receiving sensor data from at least one HMD 2.
In step 152, data generated from the monitoring step is analyzed to determine how to enhance the experience. This data could be raw sensor data (e.g., heart rate) or processed sensor data (e.g., a 3D model). In one embodiment, step 152 includes analyzing facial expressions, heart rate, etc. to determine whether a student 13(b) comprehends subject matter. In one embodiment, step 152 includes inputting data (e.g., instructions/models 57(n)) and using that to analyze data that was captured or produced in step 150. For example, step 152 may include determining whether a student 13(b) is solving a math problem correctly, and determining a hint at how to proceed. The data may be transferred over a network to a remote computing device for analysis. Many additional examples are provided below.
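A minimal sketch of the monitor/analyze/enhance loop of steps 150, 152, and 154 follows, with sensor access and the HMD output stubbed out; the heuristic and the names used are assumptions for illustration only.

```python
# Hypothetical sketch of the monitor (step 150) / analyze (step 152) /
# enhance (step 154) loop; sensor access and HMD output are stubbed out.

import time


def monitor(sensors):
    """Step 150: collect raw or lightly processed sensor data."""
    return {name: read() for name, read in sensors.items()}


def analyze(data):
    """Step 152: decide how to enhance the experience (toy heuristic)."""
    if data.get("heart_rate", 70) > 100:
        return "Suggest a short break."
    return None


def enhance(hmd_display, suggestion):
    """Step 154: present a signal in the see-through HMD."""
    if suggestion:
        hmd_display(suggestion)


if __name__ == "__main__":
    sensors = {"heart_rate": lambda: 105}           # stand-in for sensors 51
    hmd_display = lambda text: print("HMD:", text)  # stand-in for HMD content display 59
    for _ in range(3):
        enhance(hmd_display, analyze(monitor(sensors)))
        time.sleep(0.1)
```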
In step 154, the experience (e.g., education or instruction) is enhanced based on the analysis of step 152. With respect to the environment of
The HMD device can be worn on the head of a user so that the user can see through a display and thereby see a real-world scene which includes an image which is not generated by the HMD device. The HMD device 2 can be self-contained so that all of its components are carried by, e.g., physically supported by, the frame 3. Optionally, one or more components of the HMD device are not carried by the frame. For example, one or more components which are not carried by the frame can be physically attached by a wire to a component carried by the frame. The clip-shaped sensor 7, attached by a wire 5, is one such example. The sensor 7 is a biological sensor such as a heart rate sensor which can be clipped to the user's ear. One example of a heart rate sensor emits infrared light at one side of the ear and senses, from the other side, the intensity of the light which is transmitted through the vascular tissue in the ear. There will be variations in the intensity due to variations in blood volume which correspond to the heart rate. Another example of a heart rate sensor attaches to the fingertip. Another example of a heart rate sensor uses a chest strap to detect EKG signals which can be transmitted wirelessly or by wire to receiving and processing circuitry of the HMD device. In addition to a level of the heart rate, e.g., the pulse rate, the regularity of the heart rate can be determined. A heart rate can be classified as regular or jittery, for instance.
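For illustration only, a simple way such an intensity signal could be reduced to a pulse rate and a regular/jittery classification is to count peaks and examine the variability of the beat-to-beat intervals; the thresholds below are assumptions, not values from this disclosure.

```python
# Illustrative only: estimate pulse rate from an ear-clip intensity signal by
# counting peaks, and classify the rhythm as regular or jittery from the
# variability of the intervals between beats. Thresholds are assumptions.

import numpy as np


def heart_rate_from_intensity(signal, sample_rate_hz):
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()
    # A sample is a peak if it exceeds a threshold and both of its neighbors.
    threshold = 0.5 * signal.std()
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > threshold
             and signal[i] > signal[i - 1]
             and signal[i] >= signal[i + 1]]
    if len(peaks) < 2:
        return None, None
    intervals = np.diff(peaks) / sample_rate_hz           # seconds between beats
    bpm = 60.0 / intervals.mean()
    regularity = "regular" if intervals.std() / intervals.mean() < 0.15 else "jittery"
    return bpm, regularity


if __name__ == "__main__":
    fs = 100                                              # samples per second
    t = np.arange(0, 10, 1.0 / fs)
    simulated = np.sin(2 * np.pi * 1.2 * t)               # ~72 beats per minute
    print(heart_rate_from_intensity(simulated, fs))
```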
Heart rate could also be detected from images of the eye which are obtained from eye tracking camera 134B, described below. For example, US2006/0149154, “Method and apparatus for measuring tissue perfusion,” incorporated herein by reference, measures microcirculatory flow of a target tissue such as the surface of the retina without the need to contact the tissue. A pulsed source of light irradiates the tissue, and a matched sensor transduces variations in the reflected light to an electric signal which is indicative of a heart rate and a tissue perfusion index. Another example of a heart rate sensor uses a sensor at the nose bridge, such as discussed in U.S. Pat. No. 6,431,705, “Eyewear heart rate monitor,” incorporated herein by reference.
Further, one or more components which are not carried by the frame can be in wireless communication with a component carried by the frame, and not physically attached by a wire or otherwise to a component carried by the frame. The one or more components which are not carried by the frame can be carried by the user, in one approach, such as on the wrist. The processing unit 4 could be connected to a component in the frame via a wire or via a wireless link. The term “HMD device” can encompass both on-frame and off-frame components.
The processing unit 4 includes much of the computing power used to operate HMD device 2. The processor may execute instructions stored on a processor readable storage device for performing the processes described herein. In one embodiment, the processing unit 4 communicates wirelessly (e.g., using Wi-Fi®, BLUETOOTH®, infrared (e.g., IrDA® or INFRARED DATA ASSOCIATION® standard), or other wireless communication means) to one or more hub computing systems 12.
Control circuits 136 provide various electronics that support the other components of HMD device 2.
Hub computing system 12 may be a computer, a gaming system or console, or the like. According to an example embodiment, the hub computing system 12 may include hardware components and/or software components to execute applications such as gaming applications, non-gaming applications, or the like. The hub computing system 12 may include a processor that may execute instructions stored on a processor readable storage device for performing the processes described herein.
Hub computing system 12 further includes one or more capture devices, such as a capture device 20. The capture device 20 may be, for example, a camera that visually monitors one or more users (e.g., individuals 13, students 13(b) or teachers 11) and the surrounding space such that gestures and/or movements performed by the one or more users, as well as the structure of the surrounding space, may be captured, analyzed, and tracked to perform one or more controls or actions.
Hub computing system 12 may be connected to an audiovisual device 16 such as a television, a monitor, a high-definition television (HDTV), or the like that may provide game or application visuals. For example, hub computing system 12 may include a video adapter such as a graphics card and/or an audio adapter such as a sound card that may provide audiovisual signals associated with the game application, non-game application, etc. The audiovisual device 16 may receive the audiovisual signals from hub computing system 12 and may then output the game or application visuals and/or audio associated with the audiovisual signals.
Hub computing device 10, with capture device 20, may be used to recognize, analyze, and/or track human (and other types of) targets. For example, a user wearing the HMD device 2 may be tracked using the capture device 20 such that the gestures and/or movements of the user may be captured to animate an avatar or on-screen character and/or may be interpreted as controls that may be used to affect the application being executed by hub computing system 12.
A portion of the frame of HMD device 2 surrounds a display that includes one or more lenses. To show the components of HMD device 2, a portion of the frame surrounding the display is not depicted. The display includes a light guide optical element 112, opacity filter 114, see-through lens 116 and see-through lens 118. In one embodiment, opacity filter 114 is behind and aligned with see-through lens 116, light guide optical element 112 is behind and aligned with opacity filter 114, and see-through lens 118 is behind and aligned with light guide optical element 112. See-through lenses 116 and 118 are standard lenses used in eye glasses and can be made to any prescription (including no prescription). In one embodiment, see-through lenses 116 and 118 can be replaced by a variable prescription lens. In some embodiments, HMD device 2 will include only one see-through lens or no see-through lenses. In another alternative, a prescription lens can go inside light guide optical element 112. Opacity filter 114 filters out natural light (either on a per pixel basis or uniformly) to enhance the contrast of the augmented reality imagery. Light guide optical element 112 channels artificial light to the eye.
Mounted to or inside temple 102 is an image source, which (in one embodiment) includes microdisplay 120 for projecting an augmented reality image and lens 122 for directing images from microdisplay 120 into light guide optical element 112. In one embodiment, lens 122 is a collimating lens. An augmented reality emitter can include microdisplay 120, one or more optical components such as the lens 122 and light guide 112, and associated electronics such as a driver. Such an augmented reality emitter is associated with the HMD device, and emits light to a user's eye, where the light represents augmented reality still or video images.
Control circuits 136 provide various electronics that support the other components of HMD device 2. More details of control circuits 136 are provided below with respect to
Microdisplay 120 projects an image through lens 122. Different image generation technologies can be used. For example, with a transmissive projection technology, the light source is modulated by optically active material, and backlit with white light. These technologies are usually implemented using LCD type displays with powerful backlights and high optical energy densities. With a reflective technology, external light is reflected and modulated by an optically active material. The illumination is forward lit by either a white source or RGB source, depending on the technology. Digital light processing (DLP), liquid crystal on silicon (LCOS) and MIRASOL® (a display technology from QUALCOMM®, INC.) are all examples of reflective technologies which are efficient as most energy is reflected away from the modulated structure. With an emissive technology, light is generated by the display. For example, a PicoP™ display engine (available from MICROVISION, INC.) emits a laser signal with a micro mirror steering it either onto a tiny screen that acts as a transmissive element or directly into the eye.
Light guide optical element 112 transmits light from microdisplay 120 to the eye 140 of the user wearing the HMD device 2. Light guide optical element 112 also allows light from in front of the HMD device 2 to be transmitted through light guide optical element 112 to eye 140, as depicted by arrow 142, thereby allowing the user to have an actual direct view of the space in front of HMD device 2, in addition to receiving an augmented reality image from microdisplay 120. Thus, the walls of light guide optical element 112 are see-through. Light guide optical element 112 includes a first reflecting surface 124 (e.g., a mirror or other surface). Light from microdisplay 120 passes through lens 122 and is incident on reflecting surface 124. The reflecting surface 124 reflects the incident light from the microdisplay 120 such that light is trapped inside a planar substrate comprising light guide optical element 112 by internal reflection. After several reflections off the surfaces of the substrate, the trapped light waves reach an array of selectively reflecting surfaces, including example surface 126.
Reflecting surfaces 126 couple the light waves incident upon those reflecting surfaces out of the substrate into the eye 140 of the user. As different light rays will travel and bounce off the inside of the substrate at different angles, the different rays will hit the various reflecting surfaces 126 at different angles. Therefore, different light rays will be reflected out of the substrate by different ones of the reflecting surfaces. The selection of which light rays will be reflected out of the substrate by which surface 126 is engineered by selecting an appropriate angle of the surfaces 126. More details of a light guide optical element can be found in U.S. Patent Application Publication 2008/0285140, published on Nov. 20, 2008, incorporated herein by reference in its entirety. In one embodiment, each eye will have its own light guide optical element 112. When the HMD device has two light guide optical elements, each eye can have its own microdisplay 120 that can display the same image in both eyes or different images in the two eyes. In another embodiment, there can be one light guide optical element which reflects light into both eyes.
Opacity filter 114, which is aligned with light guide optical element 112, selectively blocks natural light, either uniformly or on a per-pixel basis, from passing through light guide optical element 112. In one embodiment, the opacity filter can be a see-through LCD panel, electrochromic film, or similar device. A see-through LCD panel can be obtained by removing various layers of substrate, backlight and diffusers from a conventional LCD. The LCD panel can include one or more light-transmissive LCD chips which allow light to pass through the liquid crystal. Such chips are used in LCD projectors, for instance.
Opacity filter 114 can include a dense grid of pixels, where the light transmissivity of each pixel is individually controllable between minimum and maximum transmissivities. A transmissivity can be set for each pixel by the opacity filter control circuit 224, described below. More details of an opacity filter are provided in U.S. patent application Ser. No. 12/887,426, “Opacity Filter For See-Through Mounted Display,” filed on Sep. 21, 2010, incorporated herein by reference in its entirety.
In one embodiment, the display and the opacity filter are rendered simultaneously and are calibrated to a user's precise position in space to compensate for angle-offset issues. Eye tracking (e.g., using eye tracking camera 134) can be employed to compute the correct image offset at the extremities of the viewing field.
In the example of
In one example, a visible light camera also commonly referred to as an RGB camera may be the sensor, and an example of an optical element or light directing element is a visible light reflecting mirror which is partially transmissive and partially reflective. The visible light camera provides image data of the pupil of the user's eye, while IR photodetectors 162 capture glints which are reflections in the IR portion of the spectrum. If a visible light camera is used, reflections of virtual images may appear in the eye data captured by the camera. An image filtering technique may be used to remove the virtual image reflections if desired. An IR camera is not sensitive to the virtual image reflections on the eye.
In one embodiment, the at least one sensor 134 is an IR camera or a position sensitive detector (PSD) to which IR radiation may be directed. For example, a hot reflecting surface may transmit visible light but reflect IR radiation. The IR radiation reflected from the eye may be from incident radiation of the illuminators 153, other IR illuminators (not shown) or from ambient IR radiation reflected off the eye. In some examples, sensor 134 may be a combination of an RGB and an IR camera, and the optical light directing elements may include a visible light reflecting or diverting element and an IR radiation reflecting or diverting element. In some examples, the camera may be small, e.g., 2 millimeters (mm) by 2 mm; an example of such a camera sensor is the Omnivision OV7727. In other examples, the camera (e.g., the Omnivision OV7727) may be small enough that the image sensor or camera 134 may be centered on the optical axis or other location of the display optical system 14. For example, the camera 134 may be embedded within a lens of the system 14. Additionally, an image filtering technique may be applied to blend the camera into a user's field of view to lessen any distraction to the user.
In the example of
As mentioned above, in some embodiments which calculate a cornea center as part of determining a gaze vector, two glints, and therefore two illuminators, will suffice. However, other embodiments may use additional glints in determining a pupil position and hence a gaze vector. As eye data representing the glints is repeatedly captured, for example at 30 frames a second or greater, data for one glint may be blocked by an eyelid or even an eyelash, but data may be gathered from a glint generated by another illuminator.
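The cornea-center geometry itself is not reproduced here, but as a loosely related sketch, a simplified pupil-center/corneal-reflection style mapping from the glint-to-pupil offset to gaze angles might look as follows; the linear calibration coefficients and image coordinates are hypothetical.

```python
# Simplified, hypothetical pupil-center/corneal-reflection (PCCR) style sketch:
# the vector from the glint centroid to the pupil center in the eye image is
# mapped to a gaze direction with per-user calibration coefficients. The
# cornea-center computation described in the text is not reproduced here.

import numpy as np


def gaze_angles(pupil_px, glints_px, calib):
    """Return (horizontal, vertical) gaze angles in degrees."""
    glints = np.asarray(glints_px, dtype=float)
    # Using the centroid of the available glints keeps the estimate usable
    # when one glint is lost to an eyelid or eyelash.
    glint_centroid = glints.mean(axis=0)
    dx, dy = np.asarray(pupil_px, dtype=float) - glint_centroid
    ax, bx, ay, by = calib                       # assumed linear calibration
    return ax * dx + bx, ay * dy + by


if __name__ == "__main__":
    pupil = (322.0, 240.0)
    glints = [(310.0, 236.0), (318.0, 236.0)]    # two IR glints
    calib = (0.35, 0.0, 0.35, 0.0)               # degrees per pixel (made up)
    print(gaze_angles(pupil, glints, calib))
```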
Note that some of the components of
In another approach, two or more cameras with a known spacing between them are used as a depth camera to also obtain depth data for objects in a room, indicating the distance from the cameras/HMD device to the object. The cameras of the HMD device can essentially duplicate the functionality of the depth camera provided by the computer hub 12 (see also capture device 20 of
Display out interface 328 and display in interface 330 communicate with band interface 332, which is an interface to processing unit 4 when the processing unit is attached to the frame of the HMD device by a wire, or communicates with it by a wireless link, and is worn on the wrist of the user on a wrist band. This approach reduces the weight of the frame-carried components of the HMD device. In other approaches, as mentioned, the processing unit can be carried by the frame and a band interface is not used.
Power management circuit 302 includes voltage regulator 334, eye tracking illumination driver 336, audio DAC and amplifier 338, microphone preamplifier and audio ADC 340, biological sensor interface 342 and clock generator 345. Voltage regulator 334 receives power from processing unit 4 via band interface 332 and provides that power to the other components of HMD device 2. Eye tracking illumination driver 336 provides the infrared (IR) light source for eye tracking illumination 134A, as described above. Audio DAC and amplifier 338 provides audio information to the earphones 130. Microphone preamplifier and audio ADC 340 provides an interface for microphone 110. Biological sensor interface 342 is an interface for biological sensor 138. Power management unit 302 also provides power and receives data back from three-axis magnetometer 132A, three-axis gyroscope 132B and three axis accelerometer 132C.
In one embodiment, wireless communication component 446 can include a Wi-Fi® enabled communication device, BLUETOOTH® communication device, infrared communication device, etc. The wireless communication component 446 is a wireless communication interface which, in one implementation, receives data in synchronism with the content displayed by the audiovisual device 16. Further, augmented reality images may be displayed in response to the received data. In one approach, such data is received from the hub computing system 12.
The USB port can be used to dock the processing unit 4 to hub computing device 12 to load data or software onto processing unit 4, as well as charge processing unit 4. In one embodiment, CPU 420 and GPU 422 are the main workhorses for determining where, when and how to insert images into the view of the user. More details are provided below.
Power management circuit 406 includes clock generator 460, analog to digital converter 462, battery charger 464, voltage regulator 466, HMD power source 476, and biological sensor interface 472 in communication with biological sensor 474. Analog to digital converter 462 is connected to a charging jack 470 for receiving an AC supply and creating a DC supply for the system. Voltage regulator 466 is in communication with battery 468 for supplying power to the system. Battery charger 464 is used to charge battery 468 (via voltage regulator 466) upon receiving power from charging jack 470. HMD power source 476 provides power to the HMD device 2.
The calculations that determine where, how and when to insert an image may be performed by the HMD device 2 and/or the hub computing device 12.
In one example embodiment, hub computing device 12 will create a model of the environment that the user is in and track various moving objects in that environment. In addition, hub computing device 12 tracks the field of view of the HMD device 2 by tracking the position and orientation of HMD device 2. The model and the tracking information are provided from hub computing device 12 to processing unit 4. Sensor information obtained by HMD device 2 is transmitted to processing unit 4. Processing unit 4 then uses additional sensor information it receives from HMD device 2 to refine the field of view of the user and provide instructions to HMD device 2 on how, where and when to insert the image.
Capture device 20 may include a camera component 523, which may be or may include a depth camera that may capture a depth image of a scene. The depth image may include a two-dimensional (2-D) pixel area of the captured scene where each pixel in the 2-D pixel area may represent a depth value such as a distance in, for example, centimeters, millimeters, or the like of an object in the captured scene from the camera.
Camera component 523 may include an infrared (IR) light component 525, an infrared camera 526, and an RGB (visual image) camera 528 that may be used to capture the depth image of a scene. A 3-D camera is formed by the combination of the IR light component 525 and the infrared camera 526. For example, in time-of-flight analysis, the IR light component 525 of the capture device 20 may emit an infrared light onto the scene and may then use sensors (in some embodiments, including sensors not shown) to detect the backscattered light from the surface of one or more targets and objects in the scene using, for example, the 3-D camera 526 and/or the RGB camera 528. In some embodiments, pulsed infrared light may be used such that the time between an outgoing light pulse and a corresponding incoming light pulse may be measured and used to determine a physical distance from the capture device 20 to a particular location on the targets or objects in the scene. Additionally, the phase of the outgoing light wave may be compared to the phase of the incoming light wave to determine a phase shift. The phase shift may then be used to determine a physical distance from the capture device to a particular location on the targets or objects.
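The pulsed and phase-based time-of-flight relationships described above can be summarized as follows; the example numbers are illustrative.

```python
# Illustrative time-of-flight relationships from the description above:
# pulsed mode uses the round-trip time of the light pulse, and phase mode
# uses the phase shift between outgoing and incoming modulated light.

import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second


def distance_from_pulse(round_trip_seconds):
    """Distance = c * t / 2 (the light travels out and back)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0


def distance_from_phase(phase_shift_rad, modulation_hz):
    """Distance = c * phase / (4 * pi * f), valid within one ambiguity interval."""
    return SPEED_OF_LIGHT * phase_shift_rad / (4.0 * math.pi * modulation_hz)


if __name__ == "__main__":
    print(distance_from_pulse(20e-9))              # 20 ns round trip ~ 3 m
    print(distance_from_phase(math.pi / 2, 30e6))  # quarter-cycle shift at 30 MHz ~ 1.25 m
```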
A time-of-flight analysis may be used to indirectly determine a physical distance from the capture device 20 to a particular location on the targets or objects by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging.
The capture device 20 may use structured light to capture depth information. In such an analysis, patterned light (i.e., light displayed as a known pattern such as a grid pattern, a stripe pattern, or a different pattern) may be projected onto the scene via, for example, the IR light component 525. Upon striking the surface of one or more targets or objects in the scene, the pattern may become deformed in response. Such a deformation of the pattern may be captured by, for example, the 3-D camera 526 and/or the RGB camera 528 (and/or other sensor) and may then be analyzed to determine a physical distance from the capture device to a particular location on the targets or objects. In some implementations, the IR light component 525 is displaced from the cameras 526 and 528 so that triangulation can be used to determine distance from cameras 526 and 528. In some implementations, the capture device 20 will include a dedicated IR sensor to sense the IR light, or a sensor with an IR filter.
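As a hedged sketch of the triangulation mentioned above, once a pattern feature is associated with a ray from the displaced IR light component 525 and a ray from the camera, the two rays can be intersected over the known baseline; the baseline and angles below are made-up values.

```python
# Hedged triangulation sketch for structured light: the IR projector and the
# camera are a known baseline apart; each pattern feature is seen at one angle
# from the projector and another from the camera, and the two rays are
# intersected to recover depth. Angles are measured from the baseline.

import math


def triangulate_depth(baseline_m, projector_angle_rad, camera_angle_rad):
    """Perpendicular distance of the pattern feature from the baseline."""
    return (baseline_m * math.sin(projector_angle_rad) * math.sin(camera_angle_rad)
            / math.sin(projector_angle_rad + camera_angle_rad))


if __name__ == "__main__":
    # 7.5 cm projector-to-camera baseline, both rays nearly perpendicular to
    # the baseline, giving a point roughly 2.3 m away.
    print(triangulate_depth(0.075, math.radians(88.9), math.radians(89.2)))
```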
The capture device 20 may include two or more physically separated cameras that may view a scene from different angles to obtain visual stereo data that may be resolved to generate depth information. Other types of depth image sensors can also be used to create a depth image.
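For the stereo case, the underlying relationship is that depth is proportional to focal length times baseline divided by disparity; the following minimal sketch assumes rectified cameras and illustrative numbers.

```python
# Hedged sketch of depth from stereo: with two cameras a known baseline apart,
# depth equals focal length (in pixels) times baseline divided by disparity
# (the horizontal pixel shift of the same point between the two images).

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Depth (meters) of a point seen by both cameras."""
    if disparity_px <= 0:
        raise ValueError("Point must have positive disparity.")
    return focal_length_px * baseline_m / disparity_px


if __name__ == "__main__":
    # e.g., a 700-pixel focal length, a 6 cm baseline, and a 20-pixel
    # disparity gives a point roughly 2.1 m away.
    print(depth_from_disparity(700.0, 0.06, 20.0))
```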
The capture device 20 may further include a microphone 530, which includes a transducer or sensor that may receive and convert sound into an electrical signal. Microphone 530 may be used to receive audio signals that may also be provided to hub computing system 12.
A processor 532 is in communication with the image camera component 523. Processor 532 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions including, for example, instructions for receiving a depth image, generating the appropriate data format (e.g., frame) and transmitting the data to hub computing system 12.
A memory 534 stores the instructions that are executed by processor 532, images or frames of images captured by the 3-D camera and/or RGB camera, or any other suitable information, images, or the like. According to an example embodiment, memory 534 may include RAM, ROM, cache, flash memory, a hard disk, or any other suitable storage component. Memory 534 may be a separate component in communication with the image capture component 523 and processor 532. According to another embodiment, the memory 534 may be integrated into processor 532 and/or the image capture component 523.
Capture device 20 is in communication with hub computing system 12 via a communication link 536. The communication link 536 may be a wired connection including, for example, a USB connection, a FireWire connection, an Ethernet cable connection, or the like and/or a wireless connection such as a wireless 802.11b, g, a, or n connection. According to one embodiment, hub computing system 12 may provide a clock to capture device 20 that may be used to determine when to capture, for example, a scene via the communication link 536. Additionally, the capture device 20 provides the depth information and visual (e.g., RGB or other color) images captured by, for example, the 3-D camera 526 and/or the RGB camera 528 to hub computing system 12 via the communication link 536. In one embodiment, the depth images and visual images are transmitted at 30 frames per second; however, other frame rates can be used. Hub computing system 12 may then create and use a model, depth information, and captured images to, for example, control an application such as a game or word processor and/or animate an avatar or on-screen character.
Hub computing system 12 includes depth image processing and skeletal tracking module 550, which uses the depth images to track one or more persons detectable by the depth camera function of capture device 20. Module 550 provides the tracking information to application 552, which can be educational software, a video game, a productivity application, a communications application, or another software application, etc. The audio data and visual image data are also provided to application 552 and module 550. Application 552 provides the tracking information, audio data and visual image data to recognizer engine 554. In another embodiment, recognizer engine 554 receives the tracking information directly from module 550 and receives the audio data and visual image data directly from capture device 20.
Recognizer engine 554 is associated with a collection of filters 560, 562, 564, . . . , 566 each comprising information concerning a gesture, action or condition that may be performed by any person or object detectable by capture device 20. For example, the data from capture device 20 may be processed by filters 560, 562, 564, . . . , 566 to identify when a user or group of users has performed one or more gestures or other actions. Those gestures may be associated with various controls, objects or conditions of application 552. Thus, hub computing system 12 may use the recognizer engine 554, with the filters, to interpret and track movement of objects (including people).
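A hypothetical sketch of the recognizer-engine/filter arrangement follows, in which each filter inspects recent tracking data and reports whether its gesture, action, or condition occurred; the filters and thresholds are invented for illustration.

```python
# Hypothetical sketch: each filter inspects recent tracking frames and reports
# whether its gesture or condition occurred; the engine runs every filter.

def wave_filter(frames):
    """Toy filter: a 'wave' is a hand that moves left-right past a threshold."""
    xs = [f["right_hand_x"] for f in frames]
    return (max(xs) - min(xs)) > 0.3                      # meters, assumed


def raise_hand_filter(frames):
    """Toy filter: hand above head in the most recent frame."""
    last = frames[-1]
    return last["right_hand_y"] > last["head_y"]


class RecognizerEngine:
    def __init__(self):
        self.filters = {}                                 # name -> filter callable

    def register(self, name, fn):
        self.filters[name] = fn

    def recognize(self, frames):
        return [name for name, fn in self.filters.items() if fn(frames)]


if __name__ == "__main__":
    engine = RecognizerEngine()
    engine.register("wave", wave_filter)
    engine.register("raise_hand", raise_hand_filter)
    frames = [{"right_hand_x": 0.0, "right_hand_y": 1.2, "head_y": 1.6},
              {"right_hand_x": 0.4, "right_hand_y": 1.8, "head_y": 1.6}]
    print(engine.recognize(frames))   # -> ['wave', 'raise_hand']
```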
Capture device 20 provides RGB images (or visual images in other formats or color spaces) and depth images to hub computing system 12. The depth image may be a set of observed pixels where each observed pixel has an observed depth value. For example, the depth image may include a two-dimensional (2-D) pixel area of the captured scene where each pixel in the 2-D pixel area may have a depth value such as distance of an object in the captured scene from the capture device. Hub computing system 12 will use the RGB images and depth images to track a user's or object's movements.
Hub computing system 12 also has modules for eye tracking 53, environmental modeling 55, HMD content display 59, feedback 61, biometric analyzer 63, and error detection/correction 65, which have been described with respect to
A GPU 608 and a video encoder/video codec (coder/decoder) 614 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the graphics processing unit 608 to the video encoder/video codec 614 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 640 for transmission to a television or other display. A memory controller 610 is connected to the GPU 608 to facilitate processor access to various types of memory 612, e.g., RAM.
The multimedia console 600 includes an I/O controller 620, a system management controller 622, an audio processing unit 623, a network (NW) interface (I/F) 624, a first USB host controller 626, a second USB controller 628 and a front panel I/O subassembly 630 that are preferably implemented on a module 618. The USB controllers 626 and 628 serve as hosts for peripheral controllers 642 and 643, a wireless adapter 648, and an external memory device 646 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.). The network interface 624 and/or wireless adapter 648 provide access to a network (e.g., the Internet, home network, etc.) and may be any of a wide variety of various wired or wireless adapter components including an Ethernet card, a modem, a BLUETOOTH® module, a cable modem, and the like.
System memory 643 is provided to store application data that is loaded during the boot process. A media drive 644 is provided and may comprise a DVD/CD drive, Blu-Ray Disk™ drive, hard disk drive, or other removable media drive, etc. The media drive 644 may be internal or external to the multimedia console 600. Application data may be accessed via the media drive 644 for execution, playback, etc. by the multimedia console 600. The media drive 644 is connected to the I/O controller 620 via a bus, such as a Serial ATA bus or other high speed connection (e.g., IEEE 1394 serial bus interface).
The system management controller 622 provides a variety of service functions related to assuring availability of the multimedia console 600. The audio processing unit 623 and an audio codec 632 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 623 and the audio codec 632 via a communication link. The audio processing pipeline outputs data to the A/V port 640 for reproduction by an external audio player or device having audio capabilities.
The front panel I/O subassembly 630 supports the functionality of the power button 650 and the eject button 652, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 600. A system power supply module 636 provides power to the components of the multimedia console 600. A fan 638 cools the circuitry within the multimedia console 600.
The CPU 601, GPU 608, memory controller 610, and various other components within the multimedia console 600 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. Such architectures can include a PCI bus, PCI-Express bus, etc.
When the multimedia console 600 is powered on, application data may be loaded from the system memory 643 into memory 612 and/or caches 602, 604 and executed on the CPU 601. The application may present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 600. In operation, applications and/or other media contained within the media drive 644 may be launched or played from the media drive 644 to provide additional functionalities to the multimedia console 600.
The multimedia console 600 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 600 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 624 or the wireless adapter 648, the multimedia console 600 may further be operated as a participant in a larger network community. Additionally, multimedia console 600 can communicate with processing unit 4 via wireless adaptor 648.
In step 704, the computer system 12 determines which student 13(b) the teacher 13(a) is looking at. The 3D model of the classroom may include, or be augmented with, the position of each student 13(b). In one embodiment, the HMD 2 of each student 13(b) can uniquely be identified by a signal it transmits or by some physical marker. In one embodiment, each seat in the classroom has a known 3D position, and the computer system 12 knows which student 13(b) is expected to be in each seat. Many other techniques could be used. Steps 702-704 are one embodiment of step 150.
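One hedged way step 704 could be realized when each seat has a known 3D position is to cast the teacher's gaze ray and select the seat the ray passes nearest; the geometry, tolerance, and names below are assumptions.

```python
# Hedged sketch: cast the teacher's gaze ray from the HMD position and pick
# the known seat position closest to that ray within a tolerance.

import numpy as np


def student_in_gaze(hmd_pos, gaze_dir, seat_positions, max_offset_m=0.5):
    """Return the id of the seat the gaze ray passes nearest, or None."""
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze = gaze / np.linalg.norm(gaze)
    best_id, best_offset = None, max_offset_m
    for seat_id, seat in seat_positions.items():
        to_seat = np.asarray(seat, dtype=float) - np.asarray(hmd_pos, dtype=float)
        along = np.dot(to_seat, gaze)
        if along <= 0:                                    # seat is behind the teacher
            continue
        offset = np.linalg.norm(to_seat - along * gaze)   # distance of seat from ray
        if offset < best_offset:
            best_id, best_offset = seat_id, offset
    return best_id


if __name__ == "__main__":
    seats = {"13(b)-1": (1.0, 0.0, 3.0), "13(b)-2": (-1.0, 0.0, 3.0)}
    print(student_in_gaze((0, 1.5, 0), (0.3, -0.4, 1.0), seats))   # -> '13(b)-1'
```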
In step 706, information regarding the student 13(b) is determined. In one embodiment, the computer system 12 accesses a profile database 57(2) to retrieve student GPA, name, etc. In one embodiment, the computer system 12 reports whether the student 13(b) comprehends the material or is paying attention. Further details of making this determination are discussed below. Step 706 is one embodiment of step 152.
In step 708, the teacher 13(a) is provided with the information regarding the student 13(b). This information may be provided on the teacher's HMD 2. The information could be audio or visual. Step 708 is one embodiment of step 154.
In step 804, the system determines comprehension and/or attention of the one or more students 13(b). In one embodiment, 3D image data is analyzed to determine facial expressions of a student 13(b), from which comprehension may be inferred. In one embodiment, eye tracking is used to estimate whether the student is paying attention to the lesson or possibly something else. For example, the student might be spending substantial time looking out the window. Note that steps 802 and 804 may be performed over any time interval. Step 804 is one embodiment of step 152.
In step 806, the computer system 12 determines metrics pertaining to collective comprehension of the students 13(b). For example, the computer system 12 may determine what percentage of the students 13(b) comprehend the subject matter. The computer system 12 may determine what portions of the subject matter are understood and which are not. Levels of comprehension may be used to express results. Step 806 is one embodiment of step 152.
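Assuming per-student comprehension scores have already been estimated, the collective metrics of step 806 might be computed along these lines; the threshold and levels are illustrative.

```python
# Minimal sketch of the collective metrics in step 806, assuming per-student
# comprehension scores in the range 0..1 have already been estimated.

def class_comprehension(scores, threshold=0.6):
    """Return (percent_comprehending, level) for the group of students."""
    if not scores:
        return 0.0, "unknown"
    percent = 100.0 * sum(1 for s in scores.values() if s >= threshold) / len(scores)
    level = "high" if percent >= 80 else "medium" if percent >= 50 else "low"
    return percent, level


if __name__ == "__main__":
    scores = {"13(b)-1": 0.9, "13(b)-2": 0.4, "13(b)-3": 0.7}
    print(class_comprehension(scores))   # -> (66.66..., 'medium')
```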
In step 808, the teacher 13(a) is provided with information regarding comprehension and/or attention. This may be the information from step 804 and/or 806. This information may be provided on the teacher's HMD 2. Step 808 is one embodiment of step 154.
In one embodiment, the computer system 12 determines whether or not a student has completed some task, problem, assignment, etc. This may be similar to the process of
In step 902, biometric data pertaining to one or more of the students 13(b) is collected. Step 902 is one embodiment of step 150. In step 904, the biometric data is analyzed to determine comprehension by the student 13(b). Steps 902-904 may be similar to steps 802-804. In step 906, a determination is made whether the comprehension is below a threshold. For example, comprehension might be rated on a scale based on experimental data from test subjects by correlating facial expressions to stated or tested levels of comprehension. If the comprehension is below a threshold, then the student 13(b) may be offered the option of a new teacher 13(a).
In step 908, a student profile is accessed from database 57(2). The student profile may have been developed, at least in part, by the computer system 12. However, the student profile may be developed, at least in part, without aid of the computer system 12. The user profile may indicate known learning styles of the student, as one example. For example, some students may prefer a linear learning style, whereas others prefer a non-linear learning style. As another example, a student 13(b) may have a preference for visual, aural or text based learning. Steps 904-908 are one embodiment of step 152 from
In step 910, a different teacher 13(a) is suggested to the student 13(b) based on their user profile. The student 13(b) might be presented with one or more possible teachers 11 to choose from, possibly with a brief description of that teacher 13(a) or why they are being suggested for this student 13(b).
In step 912, the student 13(b) may select a new teacher 13(a). If so, then in step 914, the selected teacher 13(a) is provided to the student 13(b). In one embodiment, the computer system 12 accesses a lecture given by the selected teacher 13(a) from database 57(3). This could be a video recording, which is provided on the HMD 2, along with audio. Note that the student 13(b) may choose to pause, fast forward, rewind, etc. the video recording. Step 914 is one embodiment of step 154 from
A variation of the process of
In step 1002, individual performance is tracked. In one embodiment, sensor data is processed to determine information indicative of performance of the individual. The performance might be the individual 13 assembling a product, swinging a golf club, playing music, or baking a cake, as a few examples. In step 1002, one or more cameras may be used to build a 3D model. Also, other sensors such as those to capture audio may be used. Step 1002 is one embodiment of step 150 from
In optional step 1004, instructions and/or a model for the performance are accessed. For example, a model of what the assembled product should look like, or step-by-step instructions for assembly, may be accessed from database 57(n). As noted above, this may involve access over a network, such as the Internet.
In step 1006, the performance is analyzed to determine how to enhance the individual's performance. The analysis may include determining correctness of the assembly of the product, analysis of flaws in a golf swing, determination of the next note to play in a song, how to bake a cake, etc. Step 1006 is one embodiment of step 152 from
In step 1008, instruction (e.g., holographic instruction) is provided in the HMD 2. For example, the individual 13 is shown a holographic image of two pieces that should be connected together. The holographic image may show the two pieces at first separated, and then coming together. In one embodiment, the individual 13 can manipulate the image by “picking up” a virtual object. As another example, the individual is able to walk around a holographic image of their golf swing and receive feedback as to swing flaws. In one embodiment, a holographic image is presented to demonstrate the next step in baking a cake. Step 1008 is one embodiment of step 154 from
In one embodiment of step 1008, the individual 13 is permitted to ask for help or instructions. For example, the individual 13 might utter a key word such as “instruct”, which triggers the computer system 12 to provide instructions. In one embodiment of step 1008, the individual 13 uses some physical gesture to request help. Multiple gestures and/or voice commands may be used for different requests. Note that gestures and/or voice commands may be used to request help for other embodiments discussed herein.
In step 1102, a 3D model of the individual's environment is built. More specifically, a 3D model of the product being assembled may be built based on image data captured from sensors. In step 1104, the individual's eye gaze is determined. In step 1106, the eye gaze is registered to the 3D model. For example, eye vectors are correlated to a 3D position to determine what the individual 13 is looking at. Steps 1102-1106 are one embodiment of step 1002, as well as one embodiment of step 150.
In step 1108, instructions for assembling the product and/or a model of what the assembled product should look like are accessed from database 57(n). Note that this may be accessed via a network. In one embodiment, the product manufacturer provides such instructions and/or a model on a web site. Step 1108 is one embodiment of step 1004.
In step 1110, feedback is determined. Step 1110 may include comparing the individual's version of the assembled product (at this stage) with the accessed 3D model to determine errors. For example, if the individual is on step three, and the computer system 12 determines that the model is not being assembled correctly, a suitable warning may be determined. The computer system 12 might also determine a possible remedy. For example, the computer system 12 might determine that step two was missed by the individual 13. If the individual 13 has stopped assembling, the system might determine whether too much time has passed, which may suggest the individual 13 is stuck. Step 1110 is one embodiment of step 1006, as well as one embodiment of step 152.
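A minimal sketch of this feedback logic follows, assuming the observed assembly has been reduced to a set of recognized parts; the step list, timing threshold, and messages are hypothetical.

```python
# Hedged sketch of the feedback logic in step 1110: compare which parts appear
# assembled against the step the individual should be on, flag a likely missed
# step, and infer "stuck" from elapsed idle time.

import time


def assembly_feedback(observed_parts, steps, current_step, last_change_time,
                      stuck_after_s=120):
    """steps is an ordered list of part names; observed_parts is a set."""
    # A missed step is an earlier part that is still absent from the assembly.
    for i in range(current_step):
        if steps[i] not in observed_parts:
            return f"I think that you skipped step {i + 1} ({steps[i]})."
    if time.time() - last_change_time > stuck_after_s:
        return "It looks like you may be stuck. Would you like help?"
    return None


if __name__ == "__main__":
    steps = ["base plate", "left bracket", "right bracket", "shelf"]
    observed = {"base plate", "right bracket"}            # left bracket missing
    print(assembly_feedback(observed, steps, current_step=3,
                            last_change_time=time.time()))
```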
In step 1112, an image is provided in the HMD 2 to provide feedback. For example, the individual 13 is shown a holographic image of two pieces that should be connected together. The holographic image may show the two pieces at first separated, and then coming together. Step 1112 may also include providing audio feedback. For example, the system could say, “I think that you skipped step two.” In this case, step two could be presented as a holographic image in the HMD 2. Note that this holographic image may be an animation. The system might ask the individual 13 if they are stuck and need help.
In one embodiment, step 1112 includes providing a holographic image that the individual can view from different perspectives by either moving a virtual object or moving their vantage point. For example, the individual might pick up one or more virtual objects being presented in the HMD 2 and manipulate those objects. Sensors are able to track the individual's hand positions to determine how to display the virtual objects. Step 1112 is one embodiment of step 1008, as well as step 154.
In step 1202, a 3D model of the individual's environment is built. More specifically, a 3D model of an instrument being played may be built from image data captured from sensors. The computer system 12 may also determine how the user is interacting with the instrument. For example, the computer system 12 may determine what notes the user is playing on the piano. This may be performed using an audio sensor or by analyzing an image. In step 1202, the computer system 12 may also determine how the HMD 2 is oriented relative to the instrument such that later an image can be presented that matches up with the instrument. For example, a key on a piano can be highlighted by providing a holographic image in the HMD 2. Step 1202 is one embodiment of step 1002, as well as one embodiment of step 150.
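As one illustrative possibility for the audio path, the note being played could be estimated by finding the dominant frequency and mapping it to the nearest equal-tempered note (A4 = 440 Hz assumed); this is a sketch, not the disclosed method.

```python
# Illustrative sketch: find the dominant frequency of an audio window with an
# FFT and map it to the nearest equal-tempered note name (A4 = 440 Hz).

import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]


def detect_note(samples, sample_rate_hz):
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    peak_hz = freqs[np.argmax(spectrum[1:]) + 1]          # skip the DC bin
    # Distance in semitones from A4 (440 Hz), rounded to the nearest note.
    semitones = int(round(12 * np.log2(peak_hz / 440.0)))
    name = NOTE_NAMES[(semitones + 9) % 12]               # A sits 9 steps above C
    octave = 4 + (semitones + 9) // 12
    return name + str(octave), peak_hz


if __name__ == "__main__":
    fs = 8000
    t = np.arange(0, 0.5, 1.0 / fs)
    middle_c = np.sin(2 * np.pi * 261.63 * t)
    print(detect_note(middle_c, fs))    # expect ('C4', ~262 Hz)
```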
In step 1204, a music score is accessed from a database 57. This may be the music being played. Step 1204 is one embodiment of step 1004.
In step 1206, feedback is determined. Step 1206 may include determining which note to play next, determining that an incorrect note was played, determining that fingers are being positioned incorrectly on the guitar strings, etc. Step 1206 is one embodiment of step 1006, as well as one embodiment of step 152.
In step 1208, an image is provided in the HMD 2 to provide feedback. For example, the individual 13 is shown a holographic image that indicates which note to play next. The individual 13 might be shown a holographic image of how they were positioning their fingers on the guitar strings followed by a holographic image of how they should be positioned. Step 1208 may also include providing audio feedback. Step 1208 is one embodiment of step 1008, as well as one embodiment of step 154.
In step 1302, a 3D model of the individual's golf swing is built from image data captured from sensors. This may be similar to building a 3D model of an environment that is discussed elsewhere herein. The individual 13 typically is not wearing the HMD 2 when swinging the golf club, but that is a possibility. The 3D model may be built based on images captured by one or more cameras. In one embodiment, either the 3D model or simply a 2D video stream is stored in database 57(1), such that it is available for a teacher 13(a) to analyze. Step 1302 is one embodiment of step 1002, as well as one embodiment of step 150.
In optional step 1304, a stored model of a golf swing is accessed. This model may be a template for comparison purposes with the individual's swing. Rather than accessing a pre-defined model, a model might be generated on the fly, based on user parameters, the selected club, type of shot (draw, fade, etc.). Also, note that there is typically not one accepted model for a golf swing. Moreover, note that the model could be for any portion of the golf swing (e.g., address position, position at top, position in hitting zone, etc.).
In step 1306, the golf swing is analyzed. Step 1306 may be performed by software running on computer system 12. In one embodiment, the computer system 12 compares the individual's swing with the model accessed in step 1304. Step 1306 may be performed by a human teacher. The analysis could be a critique of where the swing is going wrong, a deviation of the swing from an ideal (e.g., model) swing, etc. Step 1306 is one embodiment of step 1006, as well as one embodiment of step 152.
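A hedged sketch of one possible comparison in step 1306: measure how far key joint angles in the individual's swing deviate from the stored model swing at a given checkpoint; the joints, angles, and tolerance are assumptions.

```python
# Illustrative sketch: report joint angles at a swing checkpoint (e.g., top of
# the backswing) that deviate from a stored model by more than a tolerance.

def swing_deviation(measured_angles, model_angles, tolerance_deg=5.0):
    """Return a list of (joint, measured, model) entries that exceed tolerance."""
    flaws = []
    for joint, model_value in model_angles.items():
        measured = measured_angles.get(joint)
        if measured is not None and abs(measured - model_value) > tolerance_deg:
            flaws.append((joint, measured, model_value))
    return flaws


if __name__ == "__main__":
    model = {"right_wrist_hinge": 80.0, "shoulder_turn": 90.0, "hip_turn": 45.0}
    measured = {"right_wrist_hinge": 62.0, "shoulder_turn": 88.0, "hip_turn": 55.0}
    for joint, got, want in swing_deviation(measured, model):
        print(f"{joint}: {got:.0f} degrees vs. model {want:.0f} degrees")
```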
In step 1308, a holographic image of the golf swing is provided in the HMD 2. This image may be a still image of any selected portion of the golf swing, or a video stream. In step 1310, analysis of the golf swing is provided. Thus, the individual receives instruction. In one embodiment, the holographic image highlights some region (e.g., the right wrist position) to show the individual what needs to be corrected. The image may also show the individual the correct (or better) position for the wrist. Step 1308 may include providing a holographic image that the individual can view from different perspectives by moving their vantage point. In one embodiment, audio commentary is provided while the individual is permitted to “walk around” the holographic image to examine the golf swing from different perspectives. The individual's location may be determined by tracking cameras and/or GPS. Tracking cameras can be used to determine a precise 3D coordinate for the individual's eyes. Steps 1308-1310 are one embodiment of step 1008, as well as one embodiment of step 154.
In step 1352, efforts of the wearer of the HMD 2 to cook are tracked. In one embodiment, sensor data is processed to determine information indicative of performance of the individual. In step 1352, one or more cameras on the HMD 2 may be used. Step 1352 is one embodiment of step 150 from
In step 1354, instructions for cooking are accessed. As noted above, this may involve access over a network, such as the Internet.
In step 1356, the progress in cooking is analyzed to determine how to enhance the individual's performance. The analysis may include determining where the individual 13 is in the process to determine the next step, determining whether the process is being performed correctly, etc. Step 1356 is one embodiment of step 152 from
In step 1358, instruction (e.g., holographic instruction) is provided in the HMD 2 to help the user to cook. For example, the individual 13 is shown a holographic image of the next step in the cooking process. In one embodiment, the individual 13 can manipulate the holographic image. For example, the individual 13 could perform a dry run of slicing radishes in a fancy pattern. Step 1358 may include providing audio advice. For example, an audio signal can be played in the HMD 2 instructing the individual to go to the cupboard and get some flour. Step 1358 is one embodiment of step 154 from
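A minimal sketch of the progress-to-instruction logic for the cooking example is given below in Python. The recipe contents, the assumption that completed steps arrive as a set of strings from the camera pipeline, and the next_instruction helper are illustrative placeholders.

    RECIPE = [
        "measure flour",
        "mix flour and water",
        "knead dough",
        "slice radishes",
    ]

    def next_instruction(completed_steps):
        """Return the display text and audio prompt for the first unfinished step."""
        for step in RECIPE:
            if step not in completed_steps:
                return {"display": f"Next: {step}",
                        "audio": f"Please {step} now."}
        return {"display": "All steps complete", "audio": "You are done."}

    print(next_instruction({"measure flour"}))
    # -> {'display': 'Next: mix flour and water', 'audio': 'Please mix flour and water now.'}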
In step 1402, an individual's efforts to solve a math problem are tracked. In one embodiment, the individual 13 is working out a math problem on a piece of paper. Optical character recognition may be used to determine how the individual is proceeding to solve the problem. In one embodiment, the individual 13 is working on some electronic device, such as a note pad computer. In this case, tracking the individual's efforts might still be performed using a camera and optical character recognition. Alternatively, data from the electronic device (e.g., the note pad computer) may be accessed to track the individual's progress. Step 1402 is one embodiment of step 150 from
In step 1404, the individual's efforts to solve the problem are analyzed. In one embodiment, step 1404 includes determining the correctness of the individual's efforts to solve the problem. The computer system 12 may have access to a proper solution to the problem, as well as suitable steps to solve the problem. However, note that some problems can be solved in numerous ways. Therefore, the computer system 12 may have access to numerous possible solutions. In one embodiment, database 57(n) is accessed to obtain the solution(s). In step 1404, the computer system 12 may determine whether the individual 13 is proceeding properly. If not, a suitable suggestion for proceeding may be determined. Step 1404 is one embodiment of step 152 from
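As a hedged illustration of the correctness check in step 1404, the following Python sketch uses the SymPy library to test whether a line the individual has written is algebraically equivalent to the corresponding line of a stored solution. Treating the OCR output as clean expression strings is an assumption made for the example.

    import sympy

    def step_is_equivalent(student_expr, solution_expr):
        """Return True if the two expressions are algebraically the same."""
        diff = sympy.simplify(sympy.sympify(student_expr) - sympy.sympify(solution_expr))
        return diff == 0

    # e.g. the stored solution has "2*(x + 3)" and the student wrote "2*x + 6"
    print(step_is_equivalent("2*x + 6", "2*(x + 3)"))   # -> True
    print(step_is_equivalent("2*x + 5", "2*(x + 3)"))   # -> False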
In step 1406, the individual 13 is provided instruction using the HMD 2. Step 1406 may include providing one or more images in a see-through HMD 2 that is worn by the individual to provide assistance in solving the problem. In one embodiment, the individual 13 is provided a visualization of an equation and what it means. For example, if a math problem involves solving a simultaneous set of two linear equations, a line that represents each linear equation could be shown along with their intersection. As another example, a math word problem could be visualized as a real-life example (e.g., two trains approaching one another). Step 1406 is one embodiment of step 154 from
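For the simultaneous-equations visualization mentioned above, the intersection point to be overlaid could be computed as in the short SymPy sketch below; the particular equations are placeholders chosen for illustration.

    import sympy

    x, y = sympy.symbols("x y")
    eq1 = sympy.Eq(y, 2 * x + 1)      # first linear equation (placeholder)
    eq2 = sympy.Eq(y, -x + 4)         # second linear equation (placeholder)
    intersection = sympy.solve([eq1, eq2], [x, y])
    print(intersection)               # -> {x: 1, y: 3}, the point where the lines cross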
In step 1502, an individual's efforts to write are tracked. This is one example of tracking an individual's efforts to perform a task or process. In one embodiment, the individual 13 is writing on a piece of paper. A 3D image may be generated using one or more cameras to capture the individual's efforts. In one embodiment, the individual 13 is writing on a display of some electronic device. In this case, data from the device might be accessed to capture the individual's efforts. Step 1502 is one embodiment of step 150 from
In step 1504, the accuracy of the individual's writing is analyzed by computer system 12. Step 1504 may include determining feedback to improve the individual's writing. The computer system 12 may have access to valid ways of forming letters, as well as acceptable tolerances (e.g., in databases 57(n)). However, note that some variation may be permitted for individual style. Step 1504 is one embodiment of step 152 from
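One way the accuracy analysis of step 1504 could be scored is sketched below in Python: the captured stroke and a stored template letter are resampled to the same number of points and the mean point-to-point distance is compared against a tolerance. The stroke format, the resample helper, and the tolerance value are illustrative assumptions.

    import numpy as np

    def resample(stroke, n=32):
        """stroke: (m, 2) array of x,y points; returns n points evenly spaced along it."""
        stroke = np.asarray(stroke, dtype=float)
        seg = np.linalg.norm(np.diff(stroke, axis=0), axis=1)
        dist = np.concatenate([[0.0], np.cumsum(seg)])
        targets = np.linspace(0.0, dist[-1], n)
        xs = np.interp(targets, dist, stroke[:, 0])
        ys = np.interp(targets, dist, stroke[:, 1])
        return np.stack([xs, ys], axis=1)

    def writing_feedback(captured_stroke, template_stroke, tolerance=0.15):
        err = np.mean(np.linalg.norm(resample(captured_stroke) - resample(template_stroke), axis=1))
        return {"mean_error": float(err), "acceptable": bool(err <= tolerance)}

    template = [(0.0, 0.0), (0.0, 1.0)]                 # a vertical stroke
    captured = [(0.05, 0.0), (0.1, 0.5), (0.05, 1.0)]   # a slightly bowed attempt
    print(writing_feedback(captured, template))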
In step 1506, the individual 13 is provided feedback or instruction using the HMD 2. In one embodiment, the individual 13 is shown how the letter(s) or characters should be formed. Step 1506 is one embodiment of step 154 from
The foregoing examples are just a few of the ways in which instruction may be provided. Note that the instruction may be tailored to the individual. For example, the pace of instruction may be adjusted to suit each individual's ability.
In step 1604, a suitable prompt is determined for the individual 13. For example, the computer system 12 determines that the individual 13 might want to comment on the clothes being worn. In one embodiment, the computer system 12 has access to information such as where the individual 13 knows the other person from, how that person might be important to a business deal, etc. Step 1604 is one embodiment of step 152 of
In step 1606, a social prompt is provided to the individual's HMD 2. The prompt could be to show the other person's name overlaying their shirt so that the individual 13 does not forget names. The computer system 12 may remember the types of prompts that are important or useful to the individual 13, such as whether the individual 13 easily forgets names, etc. Thus, in one embodiment, the computer system 12 captures actions and/or conversation. The prompt could also be an audio signal played in an audio transducer that is on the HMD 2. Step 1606 is one embodiment of step 154 of
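A simple illustrative sketch of this prompt selection follows in Python. The WEARER_PROFILE fields and the build_social_prompt helper are invented for the example and do not reflect the system's actual data model.

    WEARER_PROFILE = {"forgets_names": True, "prefers_audio": False}

    def build_social_prompt(person_name, context_note=None):
        """Return a list of prompt descriptors tailored to the wearer's needs."""
        prompts = []
        if WEARER_PROFILE["forgets_names"]:
            prompts.append({"type": "overlay_text", "text": person_name,
                            "anchor": "shirt"})        # rendered over the other person
        if context_note:
            kind = "audio" if WEARER_PROFILE["prefers_audio"] else "overlay_text"
            prompts.append({"type": kind, "text": context_note})
        return prompts

    print(build_social_prompt("Alex", "Met at the sales conference last spring"))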
In step 1612, data that is pertinent to interactions with others around the individual 13 is captured. Step 1612 may involve capturing video using one or more cameras on the HMD 2, capturing a conversation using an audio sensor on the HMD 2, reading facial expressions using a 3D camera on the HMD 2, etc. Step 1612 is one embodiment of step 150 of
In step 1614, the data from step 1612 is transferred over a network to a remote electronic device. For example, a video stream (which may contain audio) is provided over a network, such that a social coach has access. Some of the data may be transferred to a computing device that may analyze things such as facial expressions. This analysis could be provided to the social coach. Note that such analysis may also be provided locally (e.g., in a computing device near the HMD 2).
In step 1616, a suitable prompt for the individual 13 is received over the network. Step 1616 may include receiving a prompt from the social coach. For example, the social coach may determine that the individual 13 might want to comment on the clothes being worn by their date. Steps 1614-1616 are one embodiment of step 152 of
In step 1618, a social prompt is provided to the individual's HMD 2. The prompt could be an audio signal in an audio transducer of the HMD 2, which might be played in the wearer's ear. The prompt may be text displayed in the HMD 2. The prompt could be any other signal provided by the HMD 2. Step 1618 is one embodiment of step 154 of
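Steps 1614 through 1618 could be realized, under assumed infrastructure, along the lines of the Python sketch below, in which a captured frame and transcript are posted to a remote coaching service and the returned prompt is handed to the HMD 2. The URL, the JSON shape, and the render_prompt placeholder are hypothetical and not part of the disclosure.

    import requests

    COACH_URL = "https://example.com/social-coach"      # hypothetical endpoint

    def request_prompt(frame_jpeg_bytes, transcript_text):
        """Post captured data to the remote coach and return the suggested prompt."""
        response = requests.post(
            COACH_URL,
            files={"frame": ("frame.jpg", frame_jpeg_bytes, "image/jpeg")},
            data={"transcript": transcript_text},
            timeout=5,
        )
        response.raise_for_status()
        return response.json().get("prompt")            # e.g. {"type": "audio", "text": "..."}

    def render_prompt(prompt):
        # Placeholder for handing the prompt to the HMD's display or audio transducer.
        print("HMD prompt:", prompt)

    # Usage, with data captured in step 1612:
    # render_prompt(request_prompt(jpeg_bytes, "Nice to see you again..."))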
In step 1622, a person (“subject”) near the wearer of the HMD 2 is identified. Step 1622 may involve determining which subject the wearer of the HMD 2 is looking at. Step 1622 may be performed using one or more cameras. In one embodiment, facial recognition is used. In one embodiment, the computer system 12 recognizes the subject's name being spoken. In one embodiment, the subject has some sort of tag that can be identified. In one embodiment, the subject is wearing an HMD 2, which may be connected over a network to a computer system; in this manner, the subject may be identified to the computer system. Step 1622 may be performed locally or remotely. Step 1622 is one embodiment of step 150 of
In step 1624, a suitable prompt is determined for the individual 13. Step 1624 may include accessing one or more databases. This may include accessing a public database, a private database, or a semi-private database, such as a corporate database. For example, the public database might be any database that could be located with an Internet search. The private database might include data about interactions that the wearer of the HMD 2 has had with various people. The corporate database might be useful if both the wearer and the subject work for the same corporation. Step 1624 is one embodiment of step 152 of
In step 1626, a social prompt is provided to the individual's HMD 2. The prompt could be text in the HMD 2, a holographic image in the HMD 2, an audio signal, or any other signal that the HMD is capable of providing. The prompt could be a holographic image that shows a nametag on the subject. Step 1626 is one embodiment of step 154 of
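The multi-database lookup of step 1624 and the nametag prompt of step 1626 might be organized as in the following Python sketch, where public, corporate, and private sources are merged in a priority order to build a nametag prompt. The dictionary contents stand in for databases 57(n) and are invented for the example.

    PRIVATE_DB = {"Alex Chen": {"last_met": "2024-03-12", "note": "prefers email"}}
    CORPORATE_DB = {"Alex Chen": {"title": "Account Manager", "team": "EMEA sales"}}
    PUBLIC_DB = {"Alex Chen": {"employer": "Contoso"}}

    def lookup_subject(name):
        """Merge records from the available databases; later sources take precedence."""
        record = {}
        for source in (PUBLIC_DB, CORPORATE_DB, PRIVATE_DB):
            record.update(source.get(name, {}))
        return record

    def nametag_prompt(name):
        info = lookup_subject(name)
        line2 = info.get("title") or info.get("employer") or ""
        return {"type": "holographic_nametag", "lines": [name, line2]}

    print(nametag_prompt("Alex Chen"))
    # -> {'type': 'holographic_nametag', 'lines': ['Alex Chen', 'Account Manager']}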
In one embodiment, a recording of the social interactions can be provided to a parent or teacher for evaluation.
In some embodiments such as in
The method embodiment in
The foregoing detailed description of the technology herein has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the technology to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen to best explain the principles of the technology and its practical application to thereby enable others skilled in the art to best utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the technology be defined by the claims appended hereto.