This relates generally to systems and methods of detecting, presenting, and logging relevant user health information based on a context of an electronic device in a three-dimensional environment.
Some computer graphical environments provide two-dimensional and/or three-dimensional environments where at least some objects displayed for a user's viewing are virtual and generated by a computer. Some existing food logging applications require that a user manually enter a description of a food item and/or scan a barcode of the food item using their mobile device; however, these applications do not provide an efficient way to automatically determine and log the particular food the user is consuming without requiring such further manual inputs from the user. Additionally, these applications do not provide a way to disambiguate between individual food items of a plurality of food items when determining which food item the user of the electronic device is consuming. Thus, there is a need for systems and methods that automatically detect and log the particular food item the user is consuming.
Some examples of the disclosure are directed to systems and methods for displaying a representation of a prediction of a food being consumed by a user of an electronic device in a computer-generated environment. In some examples, an electronic device is in communication with one or more displays and one or more input devices. In some examples, the electronic device detects that a user of the electronic device is initiating consumption of a first object. In some examples, in response to the electronic device detecting that the user of the electronic device is initiating consumption of the first object, the electronic device captures, using the one or more input devices, audio and one or more images of the first object and obtains a first prediction of the first object based on a sound print of the first object included in the audio. In some examples, in accordance with a determination that the first prediction of the first object satisfies one or more criteria, the electronic device initiates a process to analyze the one or more images of the first object.
Some examples of the disclosure are directed to systems and methods for displaying an indication of possible non-compliance of medication based on a context of an electronic device in a computer-generated environment. In some examples, an electronic device in communication with one or more displays and one or more input devices obtains medication information associated with a user of the electronic device. In some examples, while the medication information of the user indicates a dose within a predetermined period of time, the electronic device detects, via the one or more input devices, a change in contextual information. In some examples, in response to the electronic device detecting the change in contextual information, in accordance with a determination that one or more criteria are satisfied, including a criterion that is satisfied when the change in contextual information is associated with possible non-compliance of the medication, the electronic device presents an indication in a computer-generated environment of the possible non-compliance. In some examples, in response to the electronic device detecting the change in contextual information, in accordance with a determination that the one or more criteria are not satisfied, the electronic device foregoes presenting the indication in the computer-generated environment.
The full descriptions of these examples are provided in the Drawings and the Detailed Description, and it is understood that this Summary does not limit the scope of the disclosure in any way.
For improved understanding of the various examples described herein, reference should be made to the Detailed Description below along with the following drawings. Like reference numerals often refer to corresponding parts throughout the drawings.
Some examples of the disclosure are directed to systems and methods for displaying a representation of a prediction of a food being consumed by a user of an electronic device in a computer-generated environment. In some examples, an electronic device is in communication with one or more displays and one or more input devices. In some examples, the electronic device detects that a user of the electronic device is initiating consumption of a first object. In some examples, in response to the electronic device detecting that the user of the electronic device is initiating consumption of the first object, the electronic device captures, using the one or more input devices, audio and one or more images of the first object and obtains a first prediction of the first object based on a sound print of the first object included in the audio. In some examples, in accordance with a determination that the first prediction of the first object satisfies one or more criteria, the electronic device initiates a process to analyze the one or more images of the first object.
Some examples of the disclosure are directed to systems and methods for displaying an indication of possible non-compliance of medication in the computer-generated environment. In some examples, an electronic device in communication with one or more displays and one or more input devices obtains medication information associated with a user of the electronic device. In some examples, while the medication information of the user indicates a dose within a predetermined period of time, the electronic device detects, via the one or more input devices, a change in contextual information. In some examples, in response to the electronic device detecting the change in contextual information, in accordance with a determination that one or more criteria are satisfied, including a criterion that is satisfied when the change in contextual information is associated with possible non-compliance of the medication, the electronic device presents an indication in a computer-generated environment of the possible non-compliance. In some examples, in response to the electronic device detecting the change in contextual information, in accordance with a determination that the one or more criteria are not satisfied, the electronic device foregoes presenting the indication in the computer-generated environment.
In some examples, a three-dimensional object is displayed in a computer-generated three-dimensional environment with a particular orientation that controls one or more behaviors of the three-dimensional object (e.g., when the three-dimensional object is moved within the three-dimensional environment). In some examples, the orientation in which the three-dimensional object is displayed in the three-dimensional environment is selected by a user of the electronic device or automatically selected by the electronic device. For example, when initiating presentation of the three-dimensional object in the three-dimensional environment, the user may select a particular orientation for the three-dimensional object or the electronic device may automatically select the orientation for the three-dimensional object (e.g., based on a type of the three-dimensional object).
In some examples, a three-dimensional object can be displayed in the three-dimensional environment in a world-locked orientation, a body-locked orientation, a tilt-locked orientation, or a head-locked orientation, as described below. As used herein, an object that is displayed in a body-locked orientation in a three-dimensional environment has a distance and orientation offset relative to a portion of the user's body (e.g., the user's torso). Alternatively, in some examples, a body-locked object has a fixed distance from the user without the orientation of the content being referenced to any portion of the user's body (e.g., may be displayed in the same cardinal direction relative to the user, regardless of head and/or body movement). Additionally or alternatively, in some examples, the body-locked object may be configured to always remain gravity or horizon (e.g., normal to gravity) aligned, such that head and/or body changes in the roll direction would not cause the body-locked object to move within the three-dimensional environment. Rather, translational movement in either configuration would cause the body-locked object to be repositioned within the three-dimensional environment to maintain the distance offset.
As used herein, an object that is displayed in a head-locked orientation in a three-dimensional environment has a distance and orientation offset relative to the user's head. In some examples, a head-locked object moves within the three-dimensional environment as the user's head moves (as the viewpoint of the user changes).
As used herein, an object that is displayed in a world-locked orientation in a three-dimensional environment does not have a distance or orientation offset relative to the user.
As used herein, an object that is displayed in a tilt-locked orientation in a three-dimensional environment (referred to herein as a tilt-locked object) has a distance offset relative to the user, such as a portion of the user's body (e.g., the user's torso) or the user's head. In some examples, a tilt-locked object is displayed at a fixed orientation relative to the three-dimensional environment. In some examples, a tilt-locked object moves according to a polar (e.g., spherical) coordinate system centered at a pole through the user (e.g., the user's head). For example, the tilt-locked object is moved in the three-dimensional environment based on movement of the user's head within a spherical space surrounding (e.g., centered at) the user's head. Accordingly, if the user tilts their head (e.g., upward or downward in the pitch direction) relative to gravity, the tilt-locked object would follow the head tilt and move radially along a sphere, such that the tilt-locked object is repositioned within the three-dimensional environment to be the same distance offset relative to the user as before the head tilt while optionally maintaining the same orientation relative to the three-dimensional environment. In some examples, if the user moves their head in the roll direction (e.g., clockwise or counterclockwise) relative to gravity, the tilt-locked object is not repositioned within the three-dimensional environment.
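By way of illustration only, the tilt-locked repositioning described above can be sketched as a small computation in which the object is kept at a fixed radial offset from the user's head, following head pitch and yaw but ignoring roll. The coordinate convention, type names, and function below are hypothetical and are not part of the examples described herein.

import Foundation

// A hypothetical 3D vector type; x is right, y is up, z is forward.
struct Vector3 {
    var x, y, z: Double
}

// Repositions a tilt-locked object so it stays distanceOffset away from the
// user's head, moving radially along a sphere centered at the head. Roll is
// intentionally ignored: rolling the head does not move a tilt-locked object.
func tiltLockedPosition(headPosition: Vector3,
                        headPitch: Double,   // radians; positive looks up
                        headYaw: Double,     // radians; positive turns left
                        distanceOffset: Double) -> Vector3 {
    let dx = distanceOffset * cos(headPitch) * sin(headYaw)
    let dy = distanceOffset * sin(headPitch)
    let dz = distanceOffset * cos(headPitch) * cos(headYaw)
    return Vector3(x: headPosition.x + dx,
                   y: headPosition.y + dy,
                   z: headPosition.z + dz)
}

In this sketch, the object's orientation is left unchanged, consistent with the tilt-locked object maintaining the same orientation relative to the three-dimensional environment.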
In some examples, as shown in
In some examples, display 120 has a field of view visible to the user (e.g., that may or may not correspond to a field of view of external image sensors 114b and 114c). Because display 120 is optionally part of a head-mounted device, the field of view of display 120 is optionally the same as or similar to the field of view of the user's eyes. In other examples, the field of view of display 120 may be smaller than the field of view of the user's eyes. In some examples, electronic device 101 may be an optical see-through device in which display 120 is a transparent or translucent display through which portions of the physical environment may be directly viewed. In some examples, display 120 may be included within a transparent lens and may overlap all or only a portion of the transparent lens. In other examples, electronic device 101 may be a video-passthrough device in which display 120 is an opaque display configured to display images of the physical environment captured by external image sensors 114b and 114c.
In some examples, in response to a trigger, the electronic device 101 may be configured to display a virtual object 104 in the XR environment represented by a cube illustrated in
It should be understood that virtual object 104 is a representative virtual object and one or more different virtual objects (e.g., of various dimensionality such as two-dimensional or other three-dimensional virtual objects) can be included and rendered in a three-dimensional XR environment. For example, the virtual object can represent an application or a user interface displayed in the XR environment. In some examples, the virtual object can represent content corresponding to the application and/or displayed via the user interface in the XR environment. In some examples, the virtual object 104 is optionally configured to be interactive and responsive to user input (e.g., air gestures, such as air pinch gestures, air tap gestures, and/or air touch gestures), such that a user may virtually touch, tap, move, rotate, or otherwise interact with, the virtual object 104.
In some examples, displaying an object in a three-dimensional environment may include interaction with one or more user interface objects in the three-dimensional environment. For example, initiation of display of the object in the three-dimensional environment can include interaction with one or more virtual options/affordances displayed in the three-dimensional environment. In some examples, a user's gaze may be tracked by the electronic device as an input for identifying one or more virtual options/affordances targeted for selection when initiating display of an object in the three-dimensional environment. For example, gaze can be used to identify one or more virtual options/affordances targeted for selection using another selection input. In some examples, a virtual option/affordance may be selected using hand-tracking input detected via an input device in communication with the electronic device. In some examples, objects displayed in the three-dimensional environment may be moved and/or reoriented in the three-dimensional environment in accordance with movement input detected via the input device.
In the discussion that follows, an electronic device that is in communication with a display generation component and one or more input devices is described. It should be understood that the electronic device optionally is in communication with one or more other physical user-interface devices, such as a touch-sensitive surface, a physical keyboard, a mouse, a joystick, a hand tracking device, an eye tracking device, a stylus, etc. Further, as described above, it should be understood that the described electronic device, display and touch-sensitive surface are optionally distributed amongst two or more devices. Therefore, as used in this disclosure, information displayed on the electronic device or by the electronic device is optionally used to describe information outputted by the electronic device for display on a separate display device (touch-sensitive or not). Similarly, as used in this disclosure, input received on the electronic device (e.g., touch input received on a touch-sensitive surface of the electronic device, or touch input received on the surface of a stylus) is optionally used to describe input received on a separate input device, from which the electronic device receives input information.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, a television channel browsing application, and/or a digital video player application.
As illustrated in
Communication circuitry 222 optionally includes circuitry for communicating with electronic devices, networks, such as the Internet, intranets, a wired network and/or a wireless network, cellular networks, and wireless local area networks (LANs). Communication circuitry 222 optionally includes circuitry for communicating using near-field communication (NFC) and/or short-range communication, such as Bluetooth®.
Processor(s) 218 include one or more general processors, one or more graphics processors, and/or one or more digital signal processors. In some examples, memory 220 is a non-transitory computer-readable storage medium (e.g., flash memory, random access memory, or other volatile or non-volatile memory or storage) that stores computer-readable instructions configured to be executed by processor(s) 218 to perform the techniques, processes, and/or methods described below. In some examples, memory 220 can include more than one non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium can be any medium (e.g., excluding a signal) that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on compact disc (CD), digital versatile disc (DVD), or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like.
In some examples, display generation component(s) 214 include a single display (e.g., a liquid-crystal display (LCD), an organic light-emitting diode (OLED) display, or other type of display). In some examples, display generation component(s) 214 include multiple displays. In some examples, display generation component(s) 214 can include a display with touch capability (e.g., a touch screen), a projector, a holographic projector, a retinal projector, a transparent or translucent display, etc. In some examples, electronic device 201 includes touch-sensitive surface(s) 209 for receiving user inputs, such as tap inputs and swipe inputs or other gestures. In some examples, display generation component(s) 214 and touch-sensitive surface(s) 209 form touch-sensitive display(s) (e.g., a touch screen integrated with electronic device 201 or external to electronic device 201 that is in communication with electronic device 201).
Electronic device 201 optionally includes image sensor(s) 206. Image sensor(s) 206 optionally include one or more visible light image sensors, such as charge-coupled device (CCD) sensors, and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects from the real-world environment. Image sensor(s) 206 also optionally include one or more infrared (IR) sensors, such as a passive or an active IR sensor, for detecting infrared light from the real-world environment. For example, an active IR sensor includes an IR emitter for emitting infrared light into the real-world environment. Image sensor(s) 206 also optionally include one or more cameras configured to capture movement of physical objects in the real-world environment. Image sensor(s) 206 also optionally include one or more depth sensors configured to detect the distance of physical objects from electronic device 201. In some examples, information from one or more depth sensors can allow the device to identify and differentiate objects in the real-world environment from other objects in the real-world environment. In some examples, one or more depth sensors can allow the device to determine the texture and/or topography of objects in the real-world environment.
In some examples, electronic device 201 uses CCD sensors, event cameras, and depth sensors in combination to detect the physical environment around electronic device 201. In some examples, image sensor(s) 206 include a first image sensor and a second image sensor. The first image sensor and the second image sensor work in tandem and are optionally configured to capture different information of physical objects in the real-world environment. In some examples, the first image sensor is a visible light image sensor and the second image sensor is a depth sensor. In some examples, electronic device 201 uses image sensor(s) 206 to detect the position and orientation of electronic device 201 and/or display generation component(s) 214 in the real-world environment. For example, electronic device 201 uses image sensor(s) 206 to track the position and orientation of display generation component(s) 214 relative to one or more fixed objects in the real-world environment.
In some examples, electronic device 201 includes microphone(s) 213 or other audio sensors. Electronic device 201 optionally uses microphone(s) 213 to detect sound from the user and/or the real-world environment of the user. In some examples, microphone(s) 213 includes an array of microphones (a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of sound in space of the real-world environment.
Electronic device 201 includes location sensor(s) 204 for detecting a location of electronic device 201 and/or display generation component(s) 214. For example, location sensor(s) 204 can include a GPS receiver that receives data from one or more satellites and allows electronic device 201 to determine the device's absolute position in the physical world.
Electronic device 201 includes orientation sensor(s) 210 for detecting orientation and/or movement of electronic device 201 and/or display generation component(s) 214. For example, electronic device 201 uses orientation sensor(s) 210 to track changes in the position and/or orientation of electronic device 201 and/or display generation component(s) 214, such as with respect to physical objects in the real-world environment. Orientation sensor(s) 210 optionally include one or more gyroscopes and/or one or more accelerometers.
Electronic device 201 includes hand tracking sensor(s) 202 and/or eye tracking sensor(s) 212 (and/or other body tracking sensor(s), such as leg, torso and/or head tracking sensor(s)), in some examples. Hand tracking sensor(s) 202 are configured to track the position/location of one or more portions of the user's hands, and/or motions of one or more portions of the user's hands with respect to the extended reality environment, relative to the display generation component(s) 214, and/or relative to another defined coordinate system. Eye tracking sensor(s) 212 are configured to track the position and movement of a user's gaze (eyes, face, or head, more generally) with respect to the real-world or extended reality environment and/or relative to the display generation component(s) 214. In some examples, hand tracking sensor(s) 202 and/or eye tracking sensor(s) 212 are implemented together with the display generation component(s) 214. In some examples, the hand tracking sensor(s) 202 and/or eye tracking sensor(s) 212 are implemented separate from the display generation component(s) 214.
In some examples, the hand tracking sensor(s) 202 (and/or other body tracking sensor(s), such as leg, torso and/or head tracking sensor(s)) can use image sensor(s) 206 (e.g., one or more IR cameras, 3D cameras, depth cameras, etc.) that capture three-dimensional information from the real-world including one or more body parts (e.g., leg, torso, head, or hand of a human user). In some examples, the hands can be resolved with sufficient resolution to distinguish fingers and their respective positions. In some examples, one or more image sensors 206 are positioned relative to the user to define a field of view of the image sensor(s) 206 and an interaction space in which finger/hand position, orientation and/or movement captured by the image sensors are used as inputs (e.g., to distinguish from a user's resting hand or other hands of other persons in the real-world environment). Tracking the fingers/hands for input (e.g., gestures, touch, tap, etc.) can be advantageous in that it does not require the user to touch, hold or wear any sort of beacon, sensor, or other marker.
In some examples, eye tracking sensor(s) 212 includes at least one eye tracking camera (e.g., infrared (IR) cameras) and/or illumination sources (e.g., IR light sources, such as LEDs) that emit light towards a user's eyes. The eye tracking cameras may be pointed towards a user's eyes to receive reflected IR light from the light sources directly or indirectly from the eyes. In some examples, both eyes are tracked separately by respective eye tracking cameras and illumination sources, and a focus/gaze can be determined from tracking both eyes. In some examples, one eye (e.g., a dominant eye) is tracked by one or more respective eye tracking cameras/illumination sources.
Electronic device 201 is not limited to the components and configuration of
Attention is now directed towards examples of displaying a representation of a prediction of a food being consumed by a user of the electronic device and an indication of possible non-compliance of medication based on a context of an electronic device in a computer-generated environment.
As shown in
In some examples, the region defined by the viewport boundary is larger than a range of vision of the user in one or more dimensions (e.g., based on the range of vision of the user, size, optical properties or other physical characteristics of the one or more displays, and/or the location and/or orientation of the one or more displays relative to the eyes of the user). The viewport and viewport boundary typically move as the one or more displays move (e.g., moving with a head of the user for a head-mounted device or moving with a hand of a user for a handheld device such as a tablet or smartphone). A viewpoint of a user determines what content is visible in the viewport; a viewpoint generally specifies a location and a direction relative to the three-dimensional environment, and as the viewpoint shifts, the view of the three-dimensional environment will also shift in the viewport. For a head-mounted device, a viewpoint is typically based on a location and a direction of the head, face, and/or eyes of a user to provide a view of the three-dimensional environment that is perceptually accurate and provides an immersive experience when the user is using the head-mounted device.
For a handheld or stationed device, the viewpoint shifts as the handheld or stationed device is moved and/or as a position of a user relative to the handheld or stationed device changes (e.g., a user moving toward, away from, up, down, to the right, and/or to the left of the device). For devices that include displays with video passthrough, portions of the physical environment that are visible (e.g., displayed and/or projected) via the one or more displays are based on a field of view of one or more cameras in communication with the displays, which typically move with the displays (e.g., moving with a head of the user for a head-mounted device or moving with a hand of a user for a handheld device such as a tablet or smartphone), because the viewpoint of the user moves as the field of view of the one or more cameras moves. The appearance of one or more virtual objects displayed via the one or more displays is updated based on the viewpoint of the user (e.g., displayed positions and poses of the virtual objects are updated based on the movement of the viewpoint of the user).
For displays with optical see-through, portions of the physical environment that are visible (e.g., optically visible through one or more partially or fully transparent portions of the display generation component) via the one or more displays are based on a field of view of a user through the partially or fully transparent portion(s) of the display generation component, which typically moves with the display (e.g., moving with a head of the user for a head-mounted device or moving with a hand of a user for a handheld device such as a tablet or smartphone), because the viewpoint of the user moves as the field of view of the user through the partially or fully transparent portions of the displays moves. The appearance of one or more virtual objects is updated based on the viewpoint of the user.
In
As shown in
In some examples, the electronic device 101 detects that a user of the electronic device 101 is initiating consumption of a consumable object (e.g., consumable objects 304, 306, 308, and/or 310). In some examples, consumable objects include food items, drink items, supplements, medication, and/or other substances which may be consumed (e.g., eaten and/or drunk) by the user. In some examples, prior to detecting that the user of the electronic device 101 is initiating consumption of a consumable object, the electronic device 101 determines that one or more first criteria (e.g., consumption criteria) are satisfied, including a criterion that is satisfied when a location of the electronic device 101 corresponds to a particular location (e.g., dining room, kitchen, cafeteria, restaurant, breakroom, or other location where users typically initiate consumption of a consumable object). In some examples, the one or more first criteria include a criterion that is satisfied when the electronic device 101 detects a particular posture of the user of the electronic device that is indicative of the user initiating consumption of a consumable object. For example, the posture of the user corresponds to a sitting position, a position leaning towards table 302 or other position indicative of initiating consumption of a consumable object. In some examples, the one or more first criteria include a criterion that is satisfied when the electronic device 101 detects that a time of day at the electronic device 101 is a particular time of day associated with mealtime. For example, when the electronic device 101 detects that the time of day at the electronic device 101 corresponds to a range of time (e.g., between 7:00-8:30 am, 11:30 am-2:00 pm, and/or 7:00-8:30 pm), the electronic device 101 determines that the user of the electronic device 101 is initiating consumption of a consumable object. In some examples, the electronic device determines that the one or more first criteria described above are satisfied based on data (e.g., signals) received from a subset of the one or more input devices, such as one or more location sensors 204, one or more motion and/or orientation sensors 210, and/or a clock of the electronic device 101.
In some examples, the one or more first criteria include a criterion that is satisfied when the electronic device 101 captures, using one or more input devices (e.g., external image sensors 114b and 114c), consumable objects (e.g., consumable objects 304, 306, 308, or 310) and/or physical objects indicative of initiating consumption of a consumable object in the three-dimensional environment 300 (e.g., table 302, utensils, a plate, a microwave, or other physical object indicative of initiating consumption of a consumable object). In some examples, the various criteria described above may be based on learned trends from historical data collections. For example, if the user of the electronic device 101 typically washes their hands before initiating consumption of a consumable object, the electronic device 101 may determine that the one or more criteria are satisfied after capturing, using one or more input devices (e.g., external image sensors 114b and 114c), a combination of hand movements associated with washing the user's hands and/or the presence of water and/or hand soap. In another example, if the user of the electronic device 101 typically interacts with a secondary device, such as a mobile phone or tablet, to watch a television show, listen to a podcast, or otherwise interact with their secondary device when initiating consumption of a consumable object, the electronic device 101 may determine that the one or more criteria are satisfied after detecting actuation of a physical input device of or in communication with the secondary device. In some examples, the one or more first criteria include a criterion that is satisfied when the electronic device 101 detects user input indicative of initiating consumption of a consumable object, such as a gaze-based input corresponding to the consumable object, audio input associated with the consumable object (e.g., food packaging being opened), a predefined gesture associated with the consumable object (e.g., picking up the consumable object), and/or a voice input from the user associated with the consumable object (e.g., spoken food-related keywords or phrases, such as “Let's eat”, “I'm hungry”, “I'm thirsty”, “What's for lunch?” and/or the like).
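As a minimal sketch of how the first criteria described above might be combined, the consumption check could resemble the following. The type names, example locations, meal hours, and object labels are hypothetical and are used for illustration only; they are not part of the examples themselves.

// Hypothetical snapshot of sensor-derived context used when deciding whether
// the user appears to be initiating consumption of a consumable object.
struct ConsumptionContext {
    var locationCategory: String      // e.g., "kitchen", "dining room", "office"
    var postureIsSeatedOrLeaning: Bool
    var hour: Int                     // 0-23, local time at the device
    var detectedObjects: Set<String>  // e.g., ["table", "plate", "utensils"]
}

// Returns true when at least one of the first criteria described above holds.
func consumptionCriteriaSatisfied(_ context: ConsumptionContext) -> Bool {
    let mealLocations: Set<String> = ["kitchen", "dining room", "cafeteria",
                                      "restaurant", "breakroom"]
    let mealHours: Set<Int> = [7, 8, 11, 12, 13, 14, 19, 20]
    let mealObjects: Set<String> = ["table", "plate", "utensils", "microwave"]

    let locationCriterion = mealLocations.contains(context.locationCategory)
    let postureCriterion = context.postureIsSeatedOrLeaning
    let timeCriterion = mealHours.contains(context.hour)
    let objectCriterion = !context.detectedObjects.isDisjoint(with: mealObjects)

    return locationCriterion || postureCriterion || timeCriterion || objectCriterion
}

In practice, the criteria could equally be combined with learned weights or required in combination rather than individually, as the historical-trend examples above suggest.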
In some examples, in accordance with a determination that the one or more criteria are not satisfied (e.g., the location of the electronic device 101 does not correspond to a kitchen or dining area), the electronic device 101 does not detect that the user of the electronic device 101 is initiating consumption of a first object. In some examples, the electronic device 101 does not initiate capturing, using one or more input devices (e.g., external image sensors 114b and 114c), audio and/or imagery of the three-dimensional environment 300 as indicated by microphone indicator 312 and image sensor indicator 314 remaining and/or being deactivated in
As shown in
In some examples, after the electronic device 101 captures the audio (and optionally the one or more images of the first object, as described herein), the electronic device 101 obtains a first prediction of the first object based on a sound print of the first object included in the audio. For example, in
In some examples, obtaining the first prediction (e.g., illustrated via representation 320a) of the first object (e.g., consumable object 310) based on the sound print 318 of the first object included in the audio includes the electronic device 101 identifying an object from a plurality of objects that has a respective sound print that matches the sound print of the first object when a score of the object is within a predetermined score. For example, the electronic device 101 transmits the sound print 318 of the first object included in the captured audio and optionally the one or more captured images of the first object for look-up in the remote server/database and/or the local database. In some examples, the electronic device 101 searches for a substantially matching sound print within a database of sound print and object pairs. For example, the first prediction corresponds to a first match of the sound print 318 and a known sound print of a respective object. In some examples, the first prediction is selected because a score of the first match is within a predetermined score range (e.g., between 60 and 100 points) or greater than a score threshold. In some examples, the score indicates a probability that the same sound print describes (e.g., is a match for) at least two objects. In this case, and in some examples, the electronic device 101 measures the score against a threshold. If the score is below the threshold, the electronic device 101 initiates a process to analyze the one or more images of the first object to identify the first object unambiguously, as will be described below. In some examples, if the score meets or exceeds the predetermined threshold, then the electronic device 101 foregoes initiating the process to analyze the one or more images of the first object, and the respective prediction is saved to the database. In some examples, the electronic device 101 saves the respective prediction and the sound print as a sound print and object pair in the database, such that the respective prediction is associated with the sound print.
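The sound-print lookup and score-threshold behavior described above can be illustrated with the following sketch. The feature-vector representation of a sound print, the inverse-distance score, and the 60-point default threshold are assumptions chosen for illustration; the examples above do not prescribe a particular matching model.

// A hypothetical entry pairing a known sound print with the object it describes.
struct SoundPrintEntry {
    var objectName: String
    var soundPrint: [Double]   // e.g., a fixed-length acoustic feature vector
}

// Scores how well two sound prints match on a 0-100 scale; a simple
// inverse-distance score stands in for whatever matching model is used.
func matchScore(_ a: [Double], _ b: [Double]) -> Double {
    guard a.count == b.count, !a.isEmpty else { return 0 }
    let squaredDistance = zip(a, b).map { ($0.0 - $0.1) * ($0.0 - $0.1) }.reduce(0, +)
    return 100.0 / (1.0 + squaredDistance.squareRoot())
}

// Returns the best-matching object and its score, or nil when no entry scores
// at or above the threshold (e.g., the 60-100 point range mentioned above),
// in which case the device falls back to analyzing the captured images.
func firstPrediction(for capturedPrint: [Double],
                     in database: [SoundPrintEntry],
                     scoreThreshold: Double = 60) -> (objectName: String, score: Double)? {
    let scored = database.map { (name: $0.objectName, score: matchScore(capturedPrint, $0.soundPrint)) }
    guard let best = scored.max(by: { $0.score < $1.score }), best.score >= scoreThreshold else {
        return nil
    }
    return (objectName: best.name, score: best.score)
}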
In
In some examples, initiating a process to analyze the one or more captured images of the first object includes identifying a plurality of objects included in the one or more images. For example, the electronic device 101 may apply computer vision processing, optical character recognition (OCR), or other recognition technique to detect and/or identify the plurality of objects. In some examples, the electronic device 101 transmits the one or more captured images of the three-dimensional environment 300 for look-up in the remote server/database and/or the local database discussed above to identify the plurality of objects in the captured images (e.g., table 302 and consumable objects 304, 306, 308, and 310). In
In some examples, and as shown in
In some examples, obtaining the second prediction of the first object (e.g., illustrated via representation 322a) includes identifying a second object of the plurality of objects included in the one or more images which has a respective sound print that matches the sound print of the first object when a score of the second object is within a predetermined score. For example, based on the image data analysis, the electronic device 101 identifies consumable object 310 (e.g., “Iced Coffee”). In some examples, the electronic device 101 transmits data describing consumable object 310 for look-up in the remote server/database and/or the local database to identify a corresponding sound print. In some examples, the electronic device 101 compares the corresponding sound print with the sound print of the first object (e.g., sound print 318), and if the sound prints substantially match because a score of the match is within the predetermined score range as described above, the electronic device 101 saves the sound print of the first object (e.g., sound print 318) in the database, such that the respective object is associated with this newly captured sound print (e.g., sound print 318).
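Continuing the illustrative sketch above (and reusing its hypothetical SoundPrintEntry type), the image-based fallback and the pairing of a newly captured sound print with the identified object might be organized as follows. The choice of simply taking the first recognized object as the fallback is an assumption made for brevity.

// Hypothetical set of object names recognized in the captured images,
// e.g., via computer vision or optical character recognition.
typealias RecognizedObjects = Set<String>

// Keeps the sound-based prediction when it corresponds to a recognized object,
// otherwise falls back to an image-based prediction; in both branches the
// newly captured sound print is saved as a sound print and object pair.
func resolvePrediction(soundPrediction: String?,
                       capturedPrint: [Double],
                       recognized: RecognizedObjects,
                       database: inout [SoundPrintEntry]) -> String? {
    if let predicted = soundPrediction, recognized.contains(predicted) {
        database.append(SoundPrintEntry(objectName: predicted, soundPrint: capturedPrint))
        return predicted
    }
    // Second prediction: pick a recognized object (here, simply the first one)
    // and associate it with the newly captured sound print.
    guard let fallback = recognized.first else { return nil }
    database.append(SoundPrintEntry(objectName: fallback, soundPrint: capturedPrint))
    return fallback
}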
As discussed above, the first prediction or the second prediction is optionally saved to the database that is remote or local to the electronic device 101. In some examples, the electronic device 101 automatically adds and/or saves data corresponding to the first prediction or the second prediction to a digital journal accessible on the electronic device 101. For example, in
In
In some examples, the electronic device 101 obtains a first prediction of the second object based on a sound print of the second object included in the audio. For example, in
In some examples, power consumption can be reduced by implementing one or more power saving mitigations. For example, power saving mitigations optionally include ceasing image data acquisition and/or analysis in response to a determination that the first prediction (e.g., representation 336a in
In some examples, the electronic device 101 displays, via the display 120, a user interface of the digital journal, such as user interface 340a in
In some examples, the electronic device 101 displays user interface 340a in response to user interaction corresponding to a request to display the user interface 340a. In some examples, user interaction includes: a gaze of the user; a finger of a hand and/or the hand touching, grabbing, holding a physical object; a finger of the hand directed to or within a threshold distance (e.g., 0, 1, 2, 3, 5, 10, 15, 20, 25, 30, or 50 cm) from a location corresponding to a virtual object selectable to display user interface 340a; a finger of the hand touching physical buttons of the electronic device 101; a contact on a touch-sensitive surface; actuation of a physical input device; a predefined gesture, such as a pinch gesture or air tap gesture; and/or a voice input from the user of the electronic device 101 corresponding to the request to display user interface 340a. In some examples, the electronic device 101 displays user interface 340a in response to satisfying one or more conditions. For example, satisfaction of the one or more conditions is based on a predetermined date and/or time (e.g., morning time, end of the day at 5:30 pm, end of the week, or end of the month) and/or a detected event (e.g., before a grocery shopping event, or at the start or end of an eating episode). In some examples, the electronic device 101 displays user interface 340a in response to adding and/or saving data to a digital journal corresponding to a prediction of a food being consumed by the user as described above.
In some examples and as illustrated in
In some examples, a journal entry includes information related to the respective consumption episode, such as environmental contextual information and user state information. Environmental contextual information and user state information optionally describe one or more detected activities of the user during the respective consumption episode (e.g., watching television, reading emails, consuming media content, talking on the phone, interacting with a secondary device, different from electronic device 101, and/or the like). Environmental contextual information and user state information optionally include indications of other users detected during the respective consumption episode (e.g., users physically present in the environment surrounding electronic device 101 or users present in a remote or virtual manner). For example, the electronic device 101 detects, using one or more input devices (e.g., external image sensors 114b and 114c) users physically present in the three-dimensional environment 300 who are within the user's field of view and/or are within a predetermined distance (e.g., 50, 100, 150, 200, 250, 300, 350, 400, 450, or 500 cm) from the user of the electronic device 101. In another example, the electronic device 101 determines that the user of the electronic device 101 is interacting with remote users virtually based on engagement with a video telephony application, videoconference application, and/or the like on the electronic device 101. In some examples, environmental contextual information and user state information include the mood or physiological condition of the user of the electronic device during the respective consumption episode as determined through detected user data related to the user's heart rate, eye gaze, tone of voice, breathing, temperature, and/or posture. In some examples, the above environmental contextual information and user state information related to the respective consumption episode may be utilized by the electronic device 101 to generate trends and/or other user characteristics, such as, for example, the likelihood the user consumes a particular type of food, an amount of a particular food, time and/or location of food intake, and/or the like. For example, if the user always consumes food and/or drink that is high in added sugar when the user is in a stressed user state, the electronic device 101 may determine that the user will likely consume food and/or drink with added sugar after detecting a combination of physiological characteristics (e.g., high heart rate and/or change in tone of voice) and/or other historical data, and in response, the electronic device 101 may notify the user of such trend and optionally recommend different foods and/or drinks that are healthier than the food and/or drink with added sugar. In another example, if the user always consumes food and/or drink that is high in added sugar when the user is engaged in a particular activity, such as watching television, the electronic device 101 may determine that the user will likely consume food and/or drink with added sugar after detecting activation of the television and in response, the electronic device 101 may notify the user of said determined trend and optionally recommend different foods and/or drinks that are healthier than the food and/or drink with added sugar (e.g., foods with lower sugar or no added sugar).
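A journal entry of the kind described above might be modeled with a simple data structure, together with a basic trend query over logged episodes. The field names and the co-occurrence metric below are hypothetical; the examples above do not prescribe a particular schema or trend computation.

import Foundation

// A hypothetical journal entry for one consumption episode, capturing the
// kinds of environmental context and user state described above.
struct ConsumptionJournalEntry {
    var timestamp: Date
    var predictedObject: String            // e.g., "Iced Coffee"
    var detectedActivities: [String]       // e.g., ["watching television"]
    var nearbyUsers: Int                   // physically present or remote
    var heartRate: Double?                 // beats per minute, if available
    var inferredMood: String?              // e.g., "stressed", "relaxed"
    var location: String?                  // e.g., "kitchen"
}

// A simple trend query: how often does consumption of a given item co-occur
// with a given inferred mood across the logged episodes?
func cooccurrenceRate(of item: String, withMood mood: String,
                      in entries: [ConsumptionJournalEntry]) -> Double {
    let matchingMood = entries.filter { $0.inferredMood == mood }
    guard !matchingMood.isEmpty else { return 0 }
    let withItem = matchingMood.filter { $0.predictedObject == item }
    return Double(withItem.count) / Double(matchingMood.count)
}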
In some examples, the electronic device 101 may be configured to display, via the display 120, content related to a user's medication regimen (e.g., reminders to the user of the electronic device 101 to take their medication). For example, in
In some examples, the user of the electronic device 101 may have a medication profile that is accessible and/or stored by the electronic device 101. For example, the user of the electronic device 101 can create or update a medication profile for storage by the electronic device 101 (e.g., by the remote server/database and/or local database described above). In some examples, and as shown in
In some examples, upon receipt of the medication information via user interface 342a, the electronic device 101 may look up, in the above-mentioned databases, information about the medication, such as side effects, interactions, precautions, and/or best use. In some examples, the electronic device 101 may assign or associate the medication to a specification or strategy used as a reference in order to provide notifications to the user related to side effects, interactions, precautions, and/or best use. The notifications optionally include content related to possible non-compliance of taking the medication and/or are presented to the user automatically at favorable times given contextual information, as discussed below.
In some examples, the electronic device 101 initiates the collection of contextual information from the various sensors described herein in accordance with a determination that the medication information indicates a prescribed or recommended dose within a predetermined period of time (e.g., 0.5, 3, 6, 12, or 24 hours). In some examples, the electronic device 101 begins collecting data from the various sensors after the user has already consumed the medication. For example, some medications require a period of rest before the user engages in activities such as consuming foods or exercising. In some examples, the electronic device 101 does not begin collecting data from the various sensors if the current time is outside the predetermined period of time associated with consumption of the medication. In some examples, the electronic device 101 does not begin collecting data from the various sensors if the medication has not yet been consumed.
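For illustration, the gating of sensor data collection on the predetermined period of time around a dose could be expressed as a simple time-window check. The six-hour default below is one of the example windows mentioned above and is otherwise an assumption.

import Foundation

// Returns true when the dose time is within the predetermined period of time
// (e.g., 6 hours) of the current time, whether the dose is upcoming or was
// recently consumed; sensor data collection is gated on this check.
func shouldCollectContext(now: Date,
                          doseTime: Date,
                          window: TimeInterval = 6 * 60 * 60) -> Bool {
    return abs(doseTime.timeIntervalSince(now)) <= window
}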
In some examples, while the medication information of the user indicates a dose within the predetermined period of time described above, the electronic device 101 detects, via the one or more input devices, a change in contextual information. For example, the change in contextual information relates to detected activity of the user during the predetermined period of time of the dose. Detected activity optionally includes the user exercising, eating, drinking, consuming other medication, shopping, interacting with machinery, and/or the like. In some examples, the change in contextual information relates to one or more physical characteristics of the user during the predetermined period of time of the dose, such as, for example, the user's heart rate, breathing pattern, temperature, eye gaze, and/or posture. In some examples, the change in contextual information relates to location information corresponding to a physical environment of the user of the electronic device during the predetermined period of time of the dose. In some examples, the change in contextual information corresponding to the physical environment includes changes in the physical location, temperature, amount of sun exposure, and/or the like.
In some examples, in response to detecting the change in contextual information, the electronic device 101 determines whether one or more medication criteria are satisfied, including a criterion that is satisfied when the change in contextual information is associated with possible non-compliance of the medication. For example, in
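A sketch of the non-compliance criterion described above is given below. The rule structure (prohibited activities and heart-rate or sun-exposure limits) is hypothetical and stands in for whatever specification or strategy the medication is associated with.

// Hypothetical description of a detected change in contextual information
// during the predetermined period of time around a dose.
struct ContextChange {
    var activity: String?          // e.g., "exercising", "drinking alcohol"
    var heartRate: Double?         // beats per minute
    var sunExposureMinutes: Double?
}

// Hypothetical per-medication rules looked up from the medication profile,
// e.g., activities or conditions that conflict with the medication.
struct MedicationRules {
    var prohibitedActivities: Set<String>
    var maxHeartRate: Double?
    var maxSunExposureMinutes: Double?
}

// One criterion of the one or more criteria: is the change in contextual
// information associated with possible non-compliance of the medication?
func indicatesPossibleNonCompliance(_ change: ContextChange,
                                    rules: MedicationRules) -> Bool {
    if let activity = change.activity, rules.prohibitedActivities.contains(activity) {
        return true
    }
    if let hr = change.heartRate, let limit = rules.maxHeartRate, hr > limit {
        return true
    }
    if let sun = change.sunExposureMinutes, let limit = rules.maxSunExposureMinutes, sun > limit {
        return true
    }
    return false
}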
In some examples, while presenting the indication of the possible non-compliance (e.g., representation 346), the electronic device 101 detects, via the one or more input devices (e.g., image sensors 114a, 114b, and/or 114c), a second change in contextual information. For example, in
As mentioned above, the electronic device 101 may determine a favorable time to prompt the user of the electronic device 101 to take their medication. For example, the electronic device 101 determines that a current time of day at the electronic device 101 is within the predetermined period of time of a dose of the medication, and in response to the determination that the current time of day at the electronic device 101 is within the predetermined period of time of the dose of the medication, the electronic device 101 presents an indication in the three-dimensional environment 300 prompting the user of the electronic device 101 to initiate consumption of the medication. For example,
In some examples, the favorable time to prompt the user of the electronic device 101 to take their medication may be based on a location of the user of the electronic device 101 (e.g., and thus the location of the electronic device 101). For example, some medications are better absorbed when taken with food. Accordingly, as shown in
In some examples, the favorable time to prompt the user of the electronic device 101 to take their medication may be based on user state information, such as the mood or physiological condition of the user of the electronic device. In some examples, the electronic device 101 detects that the change in contextual information relates to the user state. For example, in
Accordingly, various examples for displaying representations of predictions of foods being consumed by a user of an electronic device and, in turn, saving the foods consumed and providing trends related to the foods consumed in a digital journal enable a user to keep track of and view information about foods consumed by the user, thereby simplifying the presentation of information to the user and interactions with the user, which enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. The electronic device may additionally provide indications of possible non-compliance of medication. These indications may be generated based, at least in part, on contextual information including a determined state of the user and/or the environment of the user of the electronic device. Moreover, the indications provide the user with information regarding the potential impact of their actions and/or the environment on the user's medicinal therapy.
Attention is now directed to examples of displaying user interfaces based on an electronic device initiating a smoking detection mode according to some examples of the disclosure.
As shown in
In
In some examples, and as described below with reference to
In some examples, other data from the user may indicate the user is smoking. For example, in
In yet another example, if the electronic device 101 detects movement and/or pose of the user (e.g., hand 408 and/or arm gestures and/or movement), facial movement patterns (e.g., pursed lips as illustrated by representations 418a and 418b), and/or thermal image patterns (e.g., change in pixel characteristics as illustrated by representation 420a) indicative of the smoking patterns and/or characteristics, the electronic device 101 may confirm that the user is indeed smoking and initiate activation of the smoking detection mode. For example, representation 420a illustrates a thermal image of heat distribution of the user 420c and a cigarette 420b. In some examples, the electronic device 101 analyzes the heat distribution and determines a “hot spot” or non-uniformity that may indicate the presence of a cigarette 420b. In some examples, other data analyzed by the electronic device 101 to determine a smoking event includes the location of the user of the electronic device 101 and/or a time of day. For example, if the user of the electronic device 101 typically smokes outside the user's workplace building (e.g., known from user profile data and/or provided by a navigation application) and/or after a particular time of day (e.g., afternoon), the electronic device 101 may initiate activation of the smoking detection mode in response to detecting that a current time of day at the electronic device corresponds to afternoon and/or in response to detecting that the current location of the user is outside their workplace building. In some examples, the electronic device 101 may correlate the captured data of the user and/or the environment described above with confirmation that the user is smoking to further train the electronic device 101 (e.g., via machine learning) on smoking habits/characteristics of the user and detection of smoking events.
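For illustration only, the fusion of the smoking-related signals described above into a decision to activate the smoking detection mode might resemble the following sketch; the individual weights and the confidence threshold are assumptions rather than values from the examples.

// Hypothetical fused signals used when deciding whether to activate the
// smoking detection mode.
struct SmokingSignals {
    var handToMouthGesture: Bool
    var pursedLipsDetected: Bool
    var thermalHotSpotDetected: Bool
    var atTypicalSmokingLocation: Bool
    var atTypicalSmokingTime: Bool
}

// Combines the signals into an illustrative confidence score and compares it
// against a threshold; a learned model could replace these fixed weights.
func shouldActivateSmokingDetectionMode(_ s: SmokingSignals) -> Bool {
    var score = 0.0
    if s.handToMouthGesture       { score += 0.3 }
    if s.pursedLipsDetected       { score += 0.2 }
    if s.thermalHotSpotDetected   { score += 0.3 }
    if s.atTypicalSmokingLocation { score += 0.1 }
    if s.atTypicalSmokingTime     { score += 0.1 }
    return score >= 0.5   // illustrative confidence threshold
}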
In some examples, while operating in the smoking detection mode, the electronic device monitors and logs information about the smoking event to generate insights with regard to the user's smoking habits. In some examples, the electronic device may log and/or monitor smoking data such as the frequency and/or timing of smoking sessions to provide insights about the user's smoking habits (e.g., the user smokes while at work but not at home, the user does not smoke or smokes less when the outside temperature is below 40 degrees, the user smokes an average of five cigarettes a day, the user spends approximately $300 a month on cigarettes, the user's heart rate increases a certain amount while smoking, and/or the like). In
Obtaining a prediction of an object being consumed by the user of the electronic device based on a sound print avoids additional interaction between the user and the electronic device associated with manually inputting a description of the object when automatic detection and logging of the object is desired, thereby reducing errors in the interaction between the user and the electronic device and reducing inputs needed to correct such errors.
It is understood that process 500 is an example and that more, fewer, or different operations can be performed in the same or in a different order. Additionally, the operations in process 500 described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
Automatically presenting an indication of possible non-compliance of medication when a change in contextual information is associated with possible non-compliance of the medication promotes the user's adherence with the prescribed regimen of the medication, thereby complying with the prescribed medicinal regimen and facilitating user actions to resolve and/or avoid such possible non-compliance.
It is understood that process 600 is an example and that more, fewer, or different operations can be performed in the same or in a different order. Additionally, the operations in process 600 described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
Therefore, according to the above, some examples of the disclosure are directed to a method, comprising at an electronic device in communication with one or more displays, and one or more input devices: in accordance with a determination that one or more first criteria are satisfied, using a subset of the one or more input devices, determining a user of the electronic device is initiating consumption of a first object; and in response to determining that the user of the electronic device is initiating consumption of the first object: capturing, using the one or more input devices, audio of the consumption of the first object; and obtaining a first prediction of the first object based on a sound print of the first object included in the audio, including: in accordance with a determination that the first prediction of the first object satisfies one or more second criteria, initiating a process to analyze one or more images of the first object captured by the electronic device.
Additionally or alternatively, in some examples, the one or more first criteria include a criterion that is satisfied when a location of the electronic device, a posture of the user of the electronic device, or a time of day at the electronic device indicates initiating consumption of a first object. Additionally or alternatively, in some examples, the method further comprises: in accordance with a determination that the first prediction of the first object does not satisfy the one or more second criteria, saving the sound print of the first object to a database. Additionally or alternatively, in some examples, the method further comprises: in accordance with a determination that the first prediction of the first object does not satisfy the one or more second criteria, adding the first prediction to a digital journal accessible on the electronic device.
Additionally or alternatively, in some examples, adding the first prediction to the digital journal further includes adding contextual information associated with a physical environment surrounding the electronic device during the initiation of the consumption of the first object. Additionally or alternatively, in some examples, adding the first prediction to the digital journal further includes adding information associated with the user of the electronic device corresponding to one or more physical characteristics of the user during the initiation of the consumption of the first object. Additionally or alternatively, in some examples, obtaining the first prediction of the first object based on the sound print of the first object included in the audio includes: identifying an object from a plurality of objects that has a respective sound print that matches the sound print of the first object when a score of the object is within a predetermined score.
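As a non-limiting sketch of the matching step described above, the following Swift example scores each known object's reference sound print against the captured sound print and returns a match only when the score is within the predetermined score; the Euclidean scoring function and type names are illustrative assumptions rather than the actual scoring used by the system.

```swift
// Each known object's reference sound print is scored against the captured
// sound print, and the best-scoring object is returned only when its score is
// within the predetermined score. The Euclidean distance used here is an
// illustrative stand-in for whatever scoring the system actually uses.
struct KnownObject {
    let name: String
    let referencePrint: [Double]
}

func matchSoundPrint(_ captured: [Double],
                     against knownObjects: [KnownObject],
                     predeterminedScore: Double) -> KnownObject? {
    func score(_ a: [Double], _ b: [Double]) -> Double {
        var sum = 0.0
        for (x, y) in zip(a, b) { sum += (x - y) * (x - y) }
        return sum.squareRoot()
    }
    let best = knownObjects
        .map { (object: $0, score: score(captured, $0.referencePrint)) }
        .min { $0.score < $1.score }
    guard let candidate = best, candidate.score <= predeterminedScore else { return nil }
    return candidate.object
}
```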
Additionally or alternatively, in some examples, initiating the process to analyze the one or more images of the first object includes: identifying a plurality of objects included in the one or more images; in accordance with a determination that the first prediction of the first object corresponds to a respective object of the plurality of objects, saving the sound print of the first object; and in accordance with a determination that the first prediction of the first object does not correspond to a respective object of the plurality of objects, obtaining a second prediction of the first object based on one or more of the plurality of objects included in the one or more images.
Additionally or alternatively, in some examples, obtaining the second prediction of the first object includes: identifying a second object of the plurality of objects included in the one or more images which has a respective sound print that matches the sound print of the first object when a score of the second object is within a predetermined score; and associating the second object with the sound print of the first object. Additionally or alternatively, in some examples, the method further comprises: after obtaining the first prediction of the first object, in accordance with a determination that the first prediction of the first object does not satisfy the one or more second criteria, presenting, via the one or more displays, an indication in a computer-generated environment that the first prediction has been added to a digital journal accessible on the electronic device.
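The following Swift sketch illustrates, under the same assumptions, the reconciliation described in the two preceding paragraphs: if the audio-based prediction corresponds to an object recognized in the images, the sound print is saved; otherwise a second prediction is taken from the recognized object whose reference sound print scores within the predetermined score, and the sound print is associated with that object. Object recognition itself is assumed to be performed elsewhere, and all names are hypothetical.

```swift
// If the audio-based prediction corresponds to an object recognized in the
// images, the captured sound print is saved for that object; otherwise a
// second prediction is taken from the recognized object whose reference sound
// print scores within the predetermined score, and the sound print is
// associated with that object.
struct RecognizedObject {
    let name: String
    let referencePrint: [Double]
}

func reconcile(prediction: String,
               capturedPrint: [Double],
               recognizedObjects: [RecognizedObject],
               predeterminedScore: Double,
               saveSoundPrint: ([Double], String) -> Void) {
    // Illustrative distance between two sound-print feature vectors.
    func distance(_ a: [Double], _ b: [Double]) -> Double {
        var sum = 0.0
        for (x, y) in zip(a, b) { sum += (x - y) * (x - y) }
        return sum.squareRoot()
    }
    if recognizedObjects.contains(where: { $0.name == prediction }) {
        // First prediction corroborated by the images: save the sound print.
        saveSoundPrint(capturedPrint, prediction)
    } else if let second = recognizedObjects
                .map({ (object: $0, score: distance(capturedPrint, $0.referencePrint)) })
                .min(by: { $0.score < $1.score }),
              second.score <= predeterminedScore {
        // Second prediction: associate the sound print with this object.
        saveSoundPrint(capturedPrint, second.object.name)
    }
}
```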
Additionally or alternatively, in some examples, the method further comprises: after obtaining the first prediction of the first object, presenting, in a computer-generated environment, a request for user confirmation of the first prediction. Additionally or alternatively, in some examples, the method further comprises: after obtaining the first prediction of the first object, in accordance with a determination that the first prediction of the first object does not satisfy the one or more second criteria, presenting, via the one or more displays, a user interface of a digital journal in a computer-generated environment, wherein the user interface includes a representation of a comparison between first data corresponding to the consumption of the first object and second data corresponding to consumption of a second object, different from the first object, and wherein the representation of the comparison indicates a consumption trend.
Additionally or alternatively, in some examples, the method further comprises: after obtaining the first prediction of the first object, in accordance with a determination that the first prediction of the first object does not satisfy the one or more second criteria, presenting, via the one or more displays, a user interface of a digital journal in a computer-generated environment, wherein the user interface includes a representation of a second object, different from the first object, that is recommended for the user of the electronic device based on at least the initiation of the consumption of the first object.
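As a non-limiting sketch of the digital-journal comparison described above, the following Swift example aggregates hypothetical journal entries for two different objects and reports a simple consumption trend between them; the entry fields and the trend rule are illustrative assumptions, and a real user interface would render this comparison rather than return a string.

```swift
import Foundation

// Illustrative journal entry and trend comparison; the entry fields and the
// trend rule are assumptions, and a real interface would render the comparison.
struct JournalEntry {
    let objectName: String
    let date: Date
    let quantity: Double
}

// Compares total logged quantities of two different objects and reports which
// one trends higher over the logged period.
func consumptionTrend(entries: [JournalEntry], first: String, second: String) -> String {
    let firstTotal = entries.filter { $0.objectName == first }.reduce(0) { $0 + $1.quantity }
    let secondTotal = entries.filter { $0.objectName == second }.reduce(0) { $0 + $1.quantity }
    if firstTotal > secondTotal {
        return "\(first) is consumed more than \(second) over this period"
    } else if secondTotal > firstTotal {
        return "\(second) is consumed more than \(first) over this period"
    }
    return "\(first) and \(second) are consumed about equally over this period"
}
```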
Some examples of the disclosure are directed to a method, comprising, at an electronic device in communication with one or more displays and one or more input devices: obtaining medication information associated with a user of the electronic device; while the medication information of the user indicates a dose within a predetermined period of time, detecting, via the one or more input devices, a change in contextual information; and in response to detecting the change in contextual information: in accordance with a determination that one or more criteria are satisfied, including a criterion that is satisfied when the change in contextual information is associated with possible non-compliance of the medication, presenting an indication in a computer-generated environment of the possible non-compliance; and in accordance with a determination that the one or more criteria are not satisfied, foregoing presenting the indication in the computer-generated environment.
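The following Swift sketch illustrates one possible shape of this compliance check; the dose-window representation, the contextual signal, and the rule associating a context change with possible non-compliance are assumptions for illustration only and are not prescribed by this disclosure.

```swift
import Foundation

// Hypothetical representations of the medication information and contextual
// signal; the disclosure does not prescribe a specific rule for deciding that
// a change in context is associated with possible non-compliance.
struct MedicationInfo {
    let name: String
    let doseWindow: ClosedRange<Date>   // predetermined period of time for the dose
}

struct ContextChange {
    let suggestsNonCompliance: Bool     // e.g., the user leaving home without the medication
}

enum ComplianceUI {
    case nonComplianceIndication(String)
    case noIndication
}

// An indication is presented only while the dose window is active and the
// change in contextual information is associated with possible non-compliance;
// otherwise presentation is foregone, matching the two branches above.
func handle(change: ContextChange, medication: MedicationInfo, now: Date) -> ComplianceUI {
    guard medication.doseWindow.contains(now), change.suggestsNonCompliance else {
        return .noIndication
    }
    return .nonComplianceIndication("Possible non-compliance: \(medication.name)")
}
```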
Additionally or alternatively, in some examples, the contextual information includes a detected activity of the user of the electronic device during the predetermined period of time of the dose. Additionally or alternatively, in some examples, the contextual information includes information associated with the user of the electronic device corresponding to one or more physical characteristics of the user during the predetermined period of time of the dose. Additionally or alternatively, in some examples, the contextual information includes location information corresponding to a physical environment of the user of the electronic device during the predetermined period of time of the dose. Additionally or alternatively, in some examples, the indication includes information indicative of an end of the predetermined period of time of the dose.
Additionally or alternatively, in some examples, the indication includes information corresponding to a recommendation based on at least the predetermined period of time of the dose. Additionally or alternatively, in some examples, the method further comprises: while presenting the indication in the computer-generated environment of the possible non-compliance, detecting, via the one or more input devices, a second change in contextual information; and in response to detecting the second change in contextual information: in accordance with a determination that one or more second criteria are satisfied, including a criterion that is satisfied when the second change in contextual information is associated with possible non-compliance of the medication, maintaining presentation of the indication in the computer-generated environment of the possible non-compliance; and in accordance with a determination that the one or more second criteria are not satisfied, ceasing presentation of the indication in the computer-generated environment.
Additionally or alternatively, in some examples, the method further comprises: in response to detecting the change in contextual information: in accordance with a determination that one or more second criteria are satisfied, including a second criterion that is satisfied when a current time of day at the electronic device is within the predetermined period of time of the dose, presenting a second indication in the computer-generated environment prompting the user of the electronic device to initiate consumption of the medication; and in accordance with a determination that the one or more second criteria are not satisfied, foregoing presenting the second indication.
Additionally or alternatively, in some examples, the method further comprises: in response to detecting the change in contextual information: in accordance with a determination that one or more second criteria are satisfied, including a criterion that is satisfied when an opportunity to consume the medication is detected, presenting a second indication in the computer-generated environment prompting the user of the electronic device to initiate consumption of the medication; and in accordance with a determination that the one or more second criteria are not satisfied, foregoing presenting the second indication.
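As a non-limiting sketch of the additional behaviors described in the preceding paragraphs, the following Swift example models the second indication prompting consumption (shown when the current time is within the dose window or when an opportunity to consume the medication is detected) and the decision to maintain or cease the non-compliance indication after a second change in contextual information; all signals and names are hypothetical.

```swift
import Foundation

// Second indication prompting consumption: shown when the current time is
// within the dose window or when an opportunity to consume the medication is
// detected (the opportunity signal is purely illustrative).
func shouldPromptConsumption(doseWindow: ClosedRange<Date>,
                             now: Date,
                             opportunityDetected: Bool) -> Bool {
    return doseWindow.contains(now) || opportunityDetected
}

// Maintaining or ceasing the non-compliance indication after a second change
// in contextual information is detected.
func updateIndication(currentlyPresented: Bool,
                      secondChangeSuggestsNonCompliance: Bool) -> Bool {
    guard currentlyPresented else { return false }
    return secondChangeSuggestsNonCompliance   // maintain if still associated with non-compliance, else cease
}
```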
Some examples of the disclosure are directed to an electronic device, comprising: one or more processors; memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the above methods.
Some examples of the disclosure are directed to a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform any of the above methods.
Some examples of the disclosure are directed to an electronic device, comprising one or more processors, memory, and means for performing any of the above methods.
Some examples of the disclosure are directed to an information processing apparatus for use in an electronic device, the information processing apparatus comprising means for performing any of the above methods.
The present disclosure contemplates that in some instances, the data utilized may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, content consumption activity, location-based data, telephone numbers, email addresses, Twitter IDs, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information. Specifically, as described herein, one aspect of the present disclosure is tracking a user's biometric data.
The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, personal information data may be used to display a visual indication based on changes in a user's biometric data. For example, the visual indication includes a recommendation for the user to visit or contact a health professional as a result of detecting an abnormality compared with baseline biometric data.
The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country.
Despite the foregoing, the present disclosure also contemplates examples in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In another example, users can select not to enable recording of personal information data in a specific application (e.g., first application and/or second application). In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon initiating collection that their personal information data will be accessed and then reminded again just before personal information data is accessed by the device(s).
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
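As a non-limiting sketch of the de-identification techniques mentioned above, the following Swift example drops direct identifiers, retains only city-level location, and aggregates values across users; the record fields are illustrative assumptions and do not reflect any particular data schema.

```swift
import Foundation

// Illustrative record shapes; the fields are assumptions. Direct identifiers
// are dropped, location is coarsened to the city level, and values are
// aggregated across users so that only per-city averages are retained.
struct HealthRecord {
    let userID: String
    let dateOfBirth: Date?
    let streetAddress: String?
    let city: String
    let heartRate: Double
}

struct DeidentifiedRecord {
    let city: String        // city-level location only
    let heartRate: Double
}

func deidentify(_ record: HealthRecord) -> DeidentifiedRecord {
    DeidentifiedRecord(city: record.city, heartRate: record.heartRate)
}

func averageHeartRateByCity(_ records: [DeidentifiedRecord]) -> [String: Double] {
    var sums: [String: (total: Double, count: Int)] = [:]
    for record in records {
        let entry = sums[record.city] ?? (0, 0)
        sums[record.city] = (entry.total + record.heartRate, entry.count + 1)
    }
    return sums.mapValues { $0.total / Double($0.count) }
}
```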
The foregoing description, for purpose of explanation, has been described with reference to specific examples. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The examples were chosen and described in order to best explain the principles of the disclosure and its practical applications, to thereby enable others skilled in the art to best use the disclosure and various described examples with various modifications as are suited to the particular use contemplated.
This application claims the benefit of U.S. Provisional Application No. 63/586,341, filed Sep. 28, 2023, the content of which is herein incorporated by reference in its entirety for all purposes.