This relates generally to electronic devices and, more particularly, to devices having multiple subsystems.
An electronic device such as a head-mounted device can include multiple subsystems that are configured to provide different functions for a user. The power consumption of a first subsystem in an electronic device can be controlled using a first control loop, whereas the power consumption of a second subsystem in the electronic device can be controlled using a second control loop independent of the first control loop. Managing power consumption of the various subsystems using separate decentralized control loops may be inefficient.
It is within this context that the embodiments herein arise.
An electronic device may include a centralized user experience manager configured to manage power, thermal, and performance tradeoffs among various subsystems on the electronic device. An aspect of the disclosure provides a method that includes identifying a user context based on a behavior of a plurality of subsystems and placing a restriction on at least one of the plurality of subsystems based on the identified user context using the centralized user experience manager operable to control the plurality of subsystems. Identifying a user context can include monitoring an assertion from one or more of the subsystems or monitoring the behavior of one or more of the subsystems. The centralized user experience manager can place restriction(s), including limiting a frequency, limiting the dynamic behavior, limiting a resolution, or disabling a function of the at least one of the plurality of subsystems based on the identified user context, or switching from a first reduced power mode to a second reduced power mode. The centralized user experience manager can perform thermal mitigation operations when detecting a rising temperature and/or can perform user discomfort mitigation operations when detecting or predicting that the user is experiencing visual discomfort.
An aspect of the disclosure provides a method that includes identifying a current use case based on one or more signals output from the plurality of subsystems, measuring an internal temperature of the electronic device, and using a centralized user experience manager operable to control the plurality of subsystems to perform thermal mitigation operations while minimizing impact on the current use case in response to detecting that the internal temperature is rising. The centralized user experience manager can adjust thermal mitigation knobs from a list to control the plurality of subsystems, where the list of thermal mitigation knobs is ordered based on an amount of impact each of the subsystem knobs has on the current use case. The list of thermal mitigation knobs can include limiting a frame rate of a graphics rendering subsystem, limiting automatic speech recognition (ASR), text-to-speech (TTS), and dictation quality of a voice-controlled automated assistant subsystem, limiting a streaming quality of a media streaming subsystem, limiting a frequency of a face and body tracking subsystem, limiting a number of foreground applications currently presented to a user of the electronic device, limiting a frequency of a hands tracking subsystem, or limiting a frequency of a gaze tracking subsystem.
An aspect of the disclosure provides a method that includes identifying a user mode based on one or more signals output from the plurality of subsystems, adjusting a behavior of one or more of the plurality of subsystems based on a current operating condition, and placing a restriction on at least one of the plurality of subsystems based on the identified user mode using a centralized user experience manager. The method can optionally include idling a hands tracking subsystem in the plurality of subsystems when no hands are being detected and using the centralized user experience manager to limit a maximum frequency of the hands tracking subsystem. The method can optionally include adjusting a fan speed of a fan subsystem based on an internal temperature of the electronic device and proactively increasing or decreasing the fan speed of the fan subsystem based on the identified user mode. The method can optionally include freezing an application that is outside a user's field of view and using the centralized user experience manager to freeze an application that is inside the user's field of view.
An aspect of the disclosure provides a method that includes monitoring one or more states of the subsystems, predicting a user context based on the monitored states, estimating a power consumption level of the predicted user context, and selectively adjusting, with the centralized user experience manager, at least one of the subsystems based on the estimated power consumption level of the predicted user context. The method can further include maintaining historical data of past monitored states of the subsystems, monitoring a usage of one or more applications running on the electronic device, predicting the user context based on the monitored usage of one or more applications running on the electronic device, and maintaining historical data of past monitored usage of one or more applications running on the electronic device. The method can further include predicting a time when the electronic device will be plugged in to charge a battery in the electronic device and selectively adjusting at least one of the subsystems based on the predicted time of when the electronic device will be charged.
An aspect of the disclosure provides a method that includes monitoring one or more states of the various subsystems, identifying a user context based on the monitored states of the plurality of subsystems, estimating a power consumption level of the identified user context, and reducing, with the centralized user experience manager, power drawn from the battery by adjusting at least one of the plurality of subsystems based on the estimated power consumption level of the identified user context. The method can further include reducing a frame rate of one or more sensors in the device and/or deactivating or limiting resource usage of an algorithm for processing sensor data on the device to reduce the power drawn from the battery. The method for reducing power drawn from the battery can optionally include operating the electronic device in an audio-only feedback mode during which the electronic device outputs only audio or haptic alerts without outputting any visual alerts, operating the electronic device in a single color mode during which the electronic device displays only black and white or grayscale content, reducing display brightness, constraining or deactivating a graphics rendering subsystem in the device, constraining or deactivating a scene understanding subsystem in the device, limiting wireless connectivity of the device, and/or reducing a pull rate with which the device checks for application notifications.
An aspect of the disclosure provides a method that includes capturing images of a scene, outputting the captured images as a passthrough video feed, determining whether the passthrough video feed is being displayed to a user, and adjusting at least one of the subsystems associated with processing the passthrough video feed in response to determining that the passthrough video feed is not being displayed to the user. Adjusting the at least one of the subsystems associated with processing the passthrough video feed can include throttling down one or more of: a scene understanding subsystem, a subsystem configured to perform point-of-view correction, and a subsystem configured to model environment lighting.
An illustrative electronic device is shown in
As shown in
To support communications between device 10 and external equipment, control circuitry 20 may communicate using communications circuitry 22. Communications circuitry 22 may include antennas, radio-frequency transceiver circuitry, and other wireless communications circuitry and/or wired communications circuitry. Circuitry 22, which may sometimes be referred to as control circuitry, control and communications circuitry, and/or a communications subsystem, may support bidirectional wireless communications between device 10 and external equipment (e.g., a companion device such as a computer, cellular telephone, or other electronic device, an accessory such as a pointing device or a controller, computer stylus, or other input device, speakers or other output devices, etc.) over a wireless link.
For example, communications circuitry 22 may include radio-frequency transceiver circuitry such as wireless local area network transceiver circuitry configured to support communications over a wireless local area network link, near-field communications transceiver circuitry configured to support communications over a near-field communications link, cellular telephone transceiver circuitry configured to support communications over a cellular telephone link, or transceiver circuitry configured to support communications over any other suitable wired or wireless communications link. Wireless communications may, for example, be supported over a Bluetooth® link, a WiFi® link, a wireless link operating at a frequency between 10 GHz and 400 GHz, a 60 GHz link, or other millimeter wave link, a cellular telephone link, or other wireless communications link. Device 10 may, if desired, include power circuits for transmitting and/or receiving wired and/or wireless power and may include batteries or other energy storage devices. For example, device 10 may include a coil and rectifier to receive wireless power that is provided to circuitry in device 10.
Electronic device 10 may also include input-output circuitry 24. Input-output circuitry 24 may be used to allow data to be received by electronic device 10 from external equipment (e.g., a tethered computer, a portable device such as a handheld device or laptop computer, or other electrical equipment) and to allow a user to provide electronic device 10 with user input. Input-output circuitry 24 may also be used to gather information on the environment in which electronic device 10 is operating. Output components in circuitry 24 may allow electronic device 10 to provide a user with output and may be used to communicate with external electrical equipment.
As shown in
Alternatively, display 14 may be an opaque display that blocks light from physical objects when a user operates electronic device 10. In this type of arrangement, a passthrough or front-facing camera may be used to capture images of the physical environment, and the physical environment images may be displayed on the display for viewing by the user. The real-world content being captured by the front-facing cameras is therefore sometimes referred to as a camera passthrough feed, a (live) video passthrough feed, or a passthrough video feed (stream). Additional computer-generated content (e.g., text, game-content, other visual content, etc.) may optionally be overlaid over the physical environment images to provide an extended reality environment for the user. When display 14 is opaque, the display may also optionally display entirely computer-generated content (e.g., without displaying images of the physical environment).
Display 14 may include one or more optical systems (e.g., lenses) (sometimes referred to as optical assemblies) that allow a viewer to view images on display(s) 14. A single display 14 may produce images for both eyes or a pair of displays 14 may be used to display images. In configurations with multiple displays (e.g., left and right eye displays), the focal length and positions of the lenses may be selected so that any gap present between the displays will not be visible to a user (e.g., so that the images of the left and right displays overlap or merge seamlessly). Display modules (sometimes referred to as display assemblies) that generate different images for the left and right eyes of the user may be referred to as stereoscopic displays. The stereoscopic displays may be capable of presenting two-dimensional content (e.g., a user notification with text) and three-dimensional content (e.g., a simulation of a physical object such as a cube).
Input-output circuitry 24 may include various other input-output devices. For example, input-output circuitry 24 may include sensors 16. Sensors 16 may include one or more outward-facing cameras (that face the physical environment around the user when electronic device 10 is mounted on the user's head, as one example). The cameras may capture visible light images, infrared images, or images of any other desired type. The cameras may be stereo cameras if desired. Outward-facing cameras may capture pass-through video for device 10.
Sensors 16 may also include position and motion sensors (e.g., compasses, gyroscopes, accelerometers, and/or other devices for monitoring the location, orientation, and movement of electronic device 10, satellite navigation system circuitry such as Global Positioning System circuitry for monitoring user location, etc.). With such position and motion sensors, for example, control circuitry 20 can monitor the current direction in which a user's head is oriented relative to the surrounding environment (e.g., a user's head pose). The outward-facing cameras may also be considered part of the position and motion sensors. The outward-facing cameras may be used for face tracking (e.g., by capturing images of the user's jaw, mouth, etc. while the device is worn on the head of the user), body tracking (e.g., by capturing images of the user's torso, arms, hands, legs, etc. while the device is worn on the head of the user), and/or for localization (e.g., using visual odometry, visual inertial odometry, or other simultaneous localization and mapping (SLAM) technique).
Input-output circuitry 24 may also include other sensors and input-output components 18 if desired (e.g., gaze tracking sensors, ambient light sensors, force sensors, temperature sensors, touch sensors, image sensors for detecting hand gestures or body poses, buttons, capacitive proximity sensors, light-based proximity sensors, other proximity sensors, strain gauges, gas sensors, pressure sensors, moisture sensors, magnetic sensors, microphones, speakers, audio components, haptic output devices such as actuators, light-emitting diodes, other light sources, etc.).
Device 10 may further include an energy storage device such as battery 26. Battery 26 may be a rechargeable energy storage device configured to provide electrical power to operate the various components and subsystems within device 10. Battery 26 may have a battery capacity that is typically measured in milliampere-hours (mAh). The current level of charge in battery 26 can be referred to herein as a “battery level” or a “state of charge.” To ensure longevity of battery 26, device 10 may follow best practices for battery management, such as avoiding extreme operating temperatures, avoiding overcharging the battery, and minimizing full charge and discharge cycles. Components 18 can optionally include a battery sensor configured to detect the state of charge of battery 26, a power sensor configured to detect a power drawn from battery 26, a voltage sensor configured to detect a voltage at battery 26, and/or a current sensor configured to detect a current drawn from battery 26. Components 18 can also include one or more temperature sensor(s) for measuring a temperature of battery 26 or other components within device 10.
Multiuser communication session subsystem 58 can be used to establish a multiuser communication session. Herein, a multiuser communication session refers to a communication session in which two or more devices are participating in an extended reality (XR) environment. The multiuser communication session subsystem 58 may control the XR content presented using device 10 during a multiuser communication session. During a multiuser communication session, multiple electronic devices can be connected via a network. Some of the electronic devices (and corresponding users) may be located in different physical environments, whereas some of the electronic devices (and corresponding users) in the multiuser communication session may be located in the same physical environment.
Voice-controlled automated assistant subsystem 59 can be used to monitor voice commands from a user and perform various functions in response to the user's voice commands. For example, voice-controlled automated assistant 59 can be used to, in response to a voice command, play content such as audio and video content, run games or other software, process online database search requests (e.g., internet searches), process orders for products and services, provide a user with calendar information, receive and process email, handle audio communications (e.g., telephone calls), handle video calls (e.g., multiuser communication sessions with accompanying audio), and/or handle other tasks.
The example of
Still referring to
Graphics rendering subsystem 32 can be configured to render or generate virtual reality (VR) content, augmented reality (AR) content, mixed reality (MR) content, extended reality (XR) content, or may be used to carry out other graphics processing functions. Rendering subsystem 32 can synthesize photorealistic or non-photorealistic images from one or more 2-dimensional or 3-dimensional model(s) defined in a scene file that contains information on how to simulate a variety of features such as information on shading (e.g., how color and brightness of a surface varies with lighting), shadows (e.g., how to cast shadows across an object), texture mapping (e.g., how to apply detail to surfaces), reflection, transparency or opacity (e.g., how light is transmitted through a solid object), translucency (e.g., how light is scattered through a solid object), refraction and diffraction, depth of field (e.g., how certain objects can appear out of focus when outside the depth of view), motion blur (e.g., how certain objects can appear blurry due to fast motion), and/or other visible features relating to the lighting or physical characteristics of objects in a scene. Rendering subsystem 32 can apply rendering algorithms such as rasterization, ray casting, ray tracing, and/or radiosity.
Foveation subsystem 34 (sometimes referred to as a dynamic foveation block) can be configured to adjust the detail or quality of a video feed based on the user's gaze, for example by increasing image detail or resolution of a video feed in the area of the user's gaze and/or reducing image detail or resolution of the video feed in areas not aligned with the user's gaze. Scene understanding subsystem 36 can be configured to detect various types of objects in an image (e.g., to detect whether a static object is a wall, to detect whether a moving object is a dog, to determine where each detected object is located in a scene, etc.), maintain a representation of the user's environment over time, and/or detect the location of the user and objects in an image relative to the user's environment.
Camera(s) 38 may include one or more outward-facing cameras (e.g., cameras that face the physical environment around the user when the electronic device is mounted on the user's head, as one example). Camera(s) 38 include image sensors that capture visible light images, infrared images, or images of any other desired type. Camera(s) 38 may be stereo cameras if desired. Outward-facing cameras 38 may capture a pass-through video feed for device 10. Device 10 may have any suitable number of cameras 38. For example, device 10 may have K cameras, where the value of K is at least one, at least two, at least four, at least six, at least eight, at least ten, at least 12, less than 20, less than 14, less than 12, less than 10, 4-10, or other suitable value.
Device 10 may also include additional light sensors such as one or more ambient light sensor(s) 40. Ambient light sensor 40 can have a wider dynamic range and better spectral resolution than a color image sensor within camera 38 and can therefore be used to more effectively capture the absolute light level and color information from the real-world environment. Since ambient light sensor 40 has a larger dynamic range and enhanced spectral resolution than camera 38, ambient light sensor 40 can provide additional useful information for modeling the environment even when measurements from cameras 38 saturate.
Face and body tracking subsystem 42 may include one or more outward-facing cameras that are used for face tracking (e.g., by capturing images of the user's jaw, mouth, etc. while the device is worn on the head of the user) and body tracking (e.g., by capturing images of the user's torso, arms, hands, legs, etc. while the device is worn on the head of the user). Face and body tracking subsystem 42 can therefore sometimes be referred to as being part of camera(s) 38. If desired, body tracking subsystem 42 can also track the user's head pose by directly determining any movement, yaw, pitch, roll, etc. for device 10. The yaw, roll, and pitch of the user's head may collectively define a user's head pose.
Hands tracker 44 may include one or more sensors 72 configured to monitor a user's hand motion/gesture to obtain hand gestures data. For example, hands tracker 44 may include a camera and/or other gestures tracking components (e.g., outward facing components and/or light sources that emit beams of light so that reflections of the beams from a user's hand may be detected) to monitor the user's hand(s). One or more hands-tracking sensor(s) may be directed towards a user's hands and may track the motion associated with the user's hand(s), may determine whether the user is performing a swiping motion with his/her hand(s), may determine whether the user is performing a non-contact button press or object selection operation with his/her hand(s), may determine whether the user is performing a grabbing or gripping motion with his/her hand(s), may determine whether the user is pointing at a given object that is presented on display 14 using his/her hand(s) or fingers, may determine whether the user is performing a waving or bumping motion with his/her hand(s), or may generally measure/monitor three-dimensional non-contact gestures (“air gestures”) associated with the user's hand(s).
The hand gestures information gathered using hands tracker 44 may be used to provide user input to electronic device 10. For example, a user's hand or finger may serve as a cursor that selects a region of interest on display 14. Non-contact air gestures information is a useful user input technique in extended reality systems with displays that present images close to a user's eyes (and direct contact touch input is therefore not practical). If desired, hands tracker 44 may also track the motion of a controller if the user is holding such controller to control the operation of device 10.
Gaze tracker 46 can be configured to gather gaze information or point of gaze information. Gaze tracker 46 may include one or more inward facing camera(s) and/or other gaze-tracking components (e.g., eye-facing components and/or other light sources that emit beams of light so that reflections of the beams from a user's eyes may be detected) to monitor the user's eyes. One or more gaze-tracking sensor(s) may face a user's eyes and may track a user's gaze. A camera in gaze-tracking subsystem 46 may determine the location of a user's eyes (e.g., the centers of the user's pupils), may determine the direction in which the user's eyes are oriented (the direction of the user's gaze), may determine the user's pupil size (e.g., so that light modulation and/or other optical parameters and/or the amount of gradualness with which one or more of these parameters is spatially adjusted and/or the area in which one or more of these optical parameters is adjusted based on the pupil size), may be used in monitoring the current focus of the lenses in the user's eyes (e.g., whether the user is focusing in the near field or far field, which may be used to assess whether a user is day dreaming or is thinking strategically or tactically), and/or other gaze information. Cameras in gaze tracker 46 may sometimes be referred to as inward-facing cameras, gaze-detection cameras, eye-tracking cameras, gaze-tracking cameras, or eye-monitoring cameras. If desired, other types of image sensors (e.g., infrared and/or visible light-emitting diodes and light detectors, etc.) may also be used in monitoring a user's gaze.
The user's point of gaze gathered using gaze tracker 46 may be used to provide user input to electronic device 10. For example, a user's point of gaze may serve as a cursor that selects a region of interest on display 14. Point of gaze is a useful user input technique in extended reality systems with displays that present images close to a user's eyes (and touch input is therefore not practical).
Audio sensor(s) 48 can be a microphone that is used to measure an ambient noise level of the environment surrounding device 10. Fan subsystem 50 may have a fan speed that optionally depends on the amount of ambient noise measured by audio sensor 48. Fan 50 can be physically coupled to one or more processor(s) within device 10 such as a central processing unit (CPU), a graphics processing unit (GPU), and/or other high performance computing component that can generate a substantial amount of heat that needs to be actively dissipated by fan 50. Device 10 can include more than one fan subsystem 50 (e.g., device 10 might include two or more fans, three or more fans, two to five fans, five to ten fans, or more than ten fans).
Fan 50 may have a fan speed that is controlled based on a temperature level measured using an associated temperature sensor 52. Temperature sensor 52 may monitor the temperature of an associated CPU or GPU or may monitor an ambient temperature level within the housing of device 10. Device 10 can optionally include more than one temperature sensor 52 (e.g., device 10 might include two or more temperature sensors disposed at different portions within the device, three or more temperature sensors, two to five temperature sensors, five to ten temperature sensors, or more than ten temperature sensors disposed at various locations within the device). When the temperature measurement obtained using sensor 52 rises, the speed of fan 50 can be increased to help lower the temperature. When the temperature measurement obtained using sensor 52 falls, the fan speed of fan 50 can be lowered to help conserve power and minimize fan noise.
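As a purely illustrative sketch of the temperature-driven fan speed control described above, combined with the ambient-noise consideration mentioned in connection with audio sensor 48, the example below maps a measured temperature to a fan duty cycle; the type name, temperature thresholds, noise threshold, and duty-cycle cap are assumptions chosen for the example and are not taken from this disclosure.

```swift
// Illustrative only: temperature-driven fan control with a noise-aware ceiling.
struct FanController {
    // Map a measured temperature (deg C) and ambient noise level (dB) to a
    // target fan duty cycle in the range 0.0 ... 1.0.
    func targetDuty(temperatureC: Double, ambientNoiseDB: Double) -> Double {
        let idleTemp = 35.0   // below this, the fan can stay at minimum speed
        let maxTemp = 55.0    // at or above this, the fan ramps to full speed
        let ramp = min(max((temperatureC - idleTemp) / (maxTemp - idleTemp), 0.0), 1.0)
        // In a quiet environment, cap the duty cycle to limit audible fan
        // noise; in a loud environment the cap is relaxed.
        let noiseCap = ambientNoiseDB > 60.0 ? 1.0 : 0.7
        return min(ramp, noiseCap)
    }
}

let fan = FanController()
print(fan.targetDuty(temperatureC: 52.0, ambientNoiseDB: 45.0)) // 0.7 (quiet room, capped)
print(fan.targetDuty(temperatureC: 52.0, ambientNoiseDB: 70.0)) // 0.85 (noisy room, cap relaxed)
```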
Performance controller 54 can be configured to control the performance of one or more low-level subsystems within device 10. The example of
Device 10 may also be provided with a user discomfort monitoring subsystem such as user discomfort detector 55. Detector 55 may be configured to assess the user's current comfort level. Since it may take some time for a user to experience or notice discomfort such as motion sickness, detector 55 can collect measurements or data associated with the user over a period of time to predict whether the user will experience discomfort. For instance, detector 55 can collect and analyze data gathered over a period of 10-20 minutes, at least 5 minutes, at least 10 minutes, 5-10 minutes, or other suitable duration to predict whether a user is likely to experience some level of visual discomfort. Detector 55 can therefore sometimes be referred to as a user discomfort prediction subsystem.
Whether or not visual discomfort arises may also depend on, or be personalized to, each user. For example, a first user might be more prone to experiencing visual discomfort, whereas a second user might be less prone to experiencing visual discomfort given the same use case. Detector 55 can optionally maintain a user discomfort model that is personalized to each user and can use current and historical telemetry of user-related data to adjust a baseline user discomfort model.
In one embodiment, user discomfort detector 55 may employ gaze tracker 46 to monitor a user's gaze to determine whether the user might be experiencing discomfort. For example, if the user is constantly looking away from the primary content, is closing his or her eyes for an extended period of time, is blinking more than usual, or is acting in other ways indicating that the user is experiencing eye strain (as detected using gaze tracker 46), discomfort detector 55 can make a prediction that the user is experiencing an increasing level of discomfort. In another embodiment, user discomfort detector 55 may employ a breathing tracker to monitor a user's breathing rate or pattern. For example, if the user is breathing faster or if the cadence of the user's breath is becoming more irregular (as detected by the breath tracker), discomfort detector 55 can make a prediction that the user is experiencing an increasing level of discomfort. If desired, user discomfort detector 55 can employ other sensors to monitor visual or motion cues associated with the user (e.g., to measure a user's body motion, pupil motion, head motion, balance, facial expression, heart rate, perspiration, or other biometric data).
The examples above in which user discomfort detector 55 records and monitors measurements directly related to the user are merely illustrative. If desired, user discomfort detector (predictor) 55 can additionally or alternatively monitor parameters associated with one or more subsystems within device 10 as a proxy for determining or predicting whether a user is likely to experience visual discomfort. As an example, detector 55 can predict that the user is likely to experience discomfort when detecting that one or more displayed frames are being dropped, when detecting that repeated frames are being displayed, or when detecting that the display frame rate is unstable. These display anomalies can occur when one or more subsystems in the display pipeline and/or the graphics rendering pipeline have crashed or are otherwise experiencing some type of slowdown. As another example, detector 55 can predict that the user is likely to experience visual discomfort when detecting some level of movement (or optical flow) in the periphery of the user's current field of view. As another example, detector 55 can predict that the user is likely to experience visual discomfort when detecting that latencies associated with cameras 38, ambient light sensors 40, face and body tracker 42, hands tracker 44, gaze tracker 46, and/or other sensors 16 (see
Device 10 may also be provided with a localization block such as localization subsystem 53. Localization subsystem 53 can be configured to receive depth/distance information from one or more depth sensor(s), position and motion data from an inertial measurement unit, and optionally images from one or more external-facing cameras 38. Subsystem 53 can include visual-inertial odometry (VIO) components that combine the visual information from cameras 38, the data from the inertial measurement unit, and optionally the depth information to estimate the motion of device 10. Additionally or alternatively, subsystem 53 can include simultaneous localization and mapping (SLAM) components that combine the visual information from cameras 38, the data from the inertial measurement unit, and the depth information to construct a 2D or 3D map of a physical environment while simultaneously tracking the location and/or orientation of device 10 within that environment. Configured in this way, subsystem 53 (sometimes referred to as a VIO/SLAM block or a motion and location determination subsystem) can be configured to output motion information, location information, pose/orientation information, and other position-related information associated with device 10 within a physical environment.
Device 10 may also include a power controlling subsystem such as power controller block 51. Power controller 51 can be configured to detect when the state of charge of battery 26 is low or below a threshold level, when a temperature of battery 26 is low or below a temperature threshold, and/or other conditions where brown out might occur. “Brown out” can refer to a phenomenon that occurs in a battery-powered device where the voltage supplied by the battery decreases, temporarily or unintentionally, to a level that may not be sufficient for proper functionality of that device. Brown out events can lead to issues such as reduced performance, erratic behavior, or even unexpected shutdowns when the battery level becomes critically low. Power controller 51 can take suitable preventative or remedial actions to avoid such brown out occurrences.
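A minimal sketch of the kind of preventative check power controller 51 might perform is shown below; the voltage, state-of-charge, and temperature thresholds, as well as the action names, are hypothetical assumptions rather than values from this disclosure.

```swift
// Illustrative brown-out avoidance check (hypothetical thresholds and actions).
enum PowerAction { case noAction, limitPeakPerformance, forceLowPowerMode }

func brownOutMitigation(stateOfCharge: Double,   // 0.0 ... 1.0
                        batteryTempC: Double,
                        supplyVoltage: Double) -> PowerAction {
    // A nearly empty battery or a sagging supply voltage calls for the most
    // aggressive response before an unexpected shutdown can occur.
    if supplyVoltage < 3.2 || stateOfCharge < 0.05 {
        return .forceLowPowerMode
    }
    // A cold or low battery sags under load, so peak draw is limited early.
    if stateOfCharge < 0.15 || batteryTempC < 0.0 {
        return .limitPeakPerformance
    }
    return .noAction
}
```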
As shown in
In accordance with an embodiment, electronic device 10 can be provided with a centralized user experience manager such as centralized user experience manager 30 that maintains a centralized policy for balancing performance and power settings for all of the various subsystems within device 10 simultaneously while optimizing for user experience for a particular user context (use case). In other words, centralized user experience manager 30 can detect the current use case and identify a corresponding set of restrictions for any related or unrelated subsystems to optimize for the best user experience for the current user context given the limited available (compute) resources on device 10. Using a centralized user experience manager such as user experience manager 30 to manage power consumption and thermal conditions across a wide range of software and hardware subsystems can provide a more efficient way of balancing performance and power consumption while optimizing the user experience for a particular user context. The subsystems being monitored and/or controlled by centralized user experience manager 30 can include components within input-output devices 24.
As an example, centralized user experience manager 30 receiving an assertion from media player subsystem 56 when subsystem 56 launches a media player or receiving an assertion from media streaming subsystem 57 when a user actively launches a media streaming application may be indicative that the user will be operating device 10 in an immersive media mode. During the immersive media mode, the user may be presented with movie (cinematic) content, gaming content, or other immersive XR content. These examples in which device 10 can operate in a mode that offers the user an immersive experience for consuming media content are merely illustrative. More generally, device 10 can be operated in an immersive mode in which a user can launch any application that provides an immersive experience for the user to consume other types of content.
As another example, centralized user experience manager 30 receiving an assertion from multiuser communication session subsystem 58 when a user actively launches a multiuser communication session call may be indicative that the user will be operating device 10 in a multiuser communication session mode. As another example, centralized user experience manager 30 receiving an assertion from camera(s) subsystem 38 when a user actively launches a camera application and presses a record button may be indicative that the user will be operating device 10 in a spatial capture mode. The spatial capture mode may employ a recording subsystem that records the content that is currently being displayed by device 10, where the recording can be later played back on device 10 or can be viewable on another device. The immersive (media) mode, the multiuser communication session mode, and the spatial capture mode described above are merely illustrative. Device 10 can be operated under any suitable number of extended reality modes (e.g., a travel mode when sensor data indicates that the user's physical location is moving). These examples in which centralized user experience manager 30 identifies a particular user context based on received assertions are merely illustrative. In other embodiments, centralized user experience manager 30 can also leverage existing mechanisms on device 10 that monitor the behavior of the various subsystems on device 10 without any subsystem(s) explicitly outputting an assertion to manager 30.
Referring back to block 64 of
As an example, upon launch of the immersive media mode or other immersive mode, user experience manager 30 can proactively limit the performance and/or frequency of scene understanding subsystem 36, can disable mesh shadows and depth mitigation functions and/or can limit a temporal anti-aliasing (TAA) coverage provided by rendering subsystem 32, can limit the resolution of a passthrough feed obtained by camera(s) 38, can limit the frequency of the environment sensors (e.g., ambient light sensor 40, audio sensor 48, temperature sensor 52, or other sensors 16), can limit the performance and/or frequency of hands tracking subsystem 44, can disable other image correction functions such as point of view correction (PoVC), can switch between different reduced power modes (e.g., to switch from operating using a first low power algorithm to using a second low power algorithm), and/or can set other system constraints without degrading the user experience of the immersive (media) mode. By limiting the performance of subsystems that are less critical to the user experience of the identified mode, the user experience manager can proactively allocate limited system resources for the identified mode.
As another example, upon launch of the multiuser communication session mode, user experience manager 30 can proactively limit the performance of voice-controlled automated assistant 59 (e.g., by limiting the automatic speech recognition, text-to-speech, and dictation quality of subsystem 59), can adjust a phase audio rendering quality, and/or can set other system constraints without degrading the user experience of the multiuser communication session mode.
As another example, upon launch of the spatial capture mode, user experience manager 30 can proactively disable mesh shadows and depth mitigation functions and/or can limit a temporal anti-aliasing (TAA) coverage provided by rendering subsystem 32, can render applications at a lower framerate, can limit the performance and/or frequency of hands tracking subsystem 44, can switch from one reduced power mode to another reduced power mode (e.g., to switch from a first low power mode to a second low power mode), and/or can set other system constraints without degrading the user experience of the spatial capture mode. A reduced or low power mode can refer to or be defined as a mode of operation during which device 10 is configured to prioritize power savings over performance and thus relies on one or more power savings algorithm(s) to help reduce power consumption relative to normal operations. These various tradeoffs established at the launch of any given user experience or context can represent an initial starting point which can evolve over time as system conditions change.
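The launch-time tradeoffs described in the three examples above can be summarized with the illustrative sketch below; the context names, restriction fields, and numeric limits are assumptions chosen for the example rather than the actual policy.

```swift
// Illustrative mapping from an identified user context to an initial set of
// subsystem restrictions applied at launch (names and limits are examples).
enum UserContext { case immersiveMedia, multiuserSession, spatialCapture }

struct InitialRestrictions {
    var maxRenderFPS = 90
    var handsTrackingHz = 30
    var sceneUnderstandingLimited = false
    var meshShadowsDisabled = false
    var assistantQualityReduced = false
}

func initialRestrictions(for context: UserContext) -> InitialRestrictions {
    var r = InitialRestrictions()
    switch context {
    case .immersiveMedia:
        // Hands and scene understanding are less critical while consuming media.
        r.handsTrackingHz = 10
        r.sceneUnderstandingLimited = true
        r.meshShadowsDisabled = true
    case .multiuserSession:
        // Trim assistant speech-processing quality rather than tracking.
        r.assistantQualityReduced = true
    case .spatialCapture:
        // Free up headroom for the recording pipeline.
        r.maxRenderFPS = 45
        r.handsTrackingHz = 15
        r.meshShadowsDisabled = true
    }
    return r
}
```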
Referring back to block 66 of
As an example, centralized user experience manager 30 can lower the maximum frame rate of graphics rendering subsystem 32 from 90 fps (frames per second) to 45 fps or to other lower limits. Such rendering limits can be optionally imposed upon detecting that the rendered content has not changed for some period of time. As another example, centralized user experience manager 30 can lower the maximum hands tracking frequency from 30 Hz to less than 20 Hz, to less than 10 Hz, or to 0 Hz. Such hands tracking frequency limit can be optionally imposed upon detecting that no hands are being detected. As another example, centralized user experience manager 30 can limit the algorithm mode of scene understanding subsystem 36. As another example, background applications or applications outside a user's field of view can be opportunistically frozen or idled. In some embodiments, centralized user experience manager 30 can further extend this constraint to freeze applications that are within the user's field of view if needed. If desired, centralized user experience manager 30 can also set rendering frequency limits on background applications or on active applications with which the user is currently engaged.
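A minimal sketch of such opportunistic runtime limits follows; the five-second threshold and the specific rates below are assumptions used only for illustration.

```swift
// Illustrative runtime adjustments driven by observed subsystem behavior.
func adjustedRenderFPS(contentUnchangedSeconds: Double, baseFPS: Int) -> Int {
    // Static content does not benefit from a high render rate.
    contentUnchangedSeconds > 5.0 ? min(baseFPS, 45) : baseFPS
}

func adjustedHandsTrackingHz(handsVisible: Bool, baseHz: Int) -> Int {
    // Idle the hands tracker when no hands are currently in view.
    handsVisible ? baseHz : 0
}
```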
As another example, centralized user experience manager 30 can opportunistically raise the fan speed of fan 50 upon detecting using audio sensor 48 that the ambient noise level is high (since the user likely would not hear the increased fan noise anyway due to the high ambient noise). As another example, centralized user experience manager 30 can proactively raise the fan speed of fan 50 upon detecting that the current user context is the immersive media mode or other immersive mode (since the user likely would not hear the increased fan noise anyway due to the loud sound levels played during a movie or a game). Conversely, centralized user experience manager 30 can proactively lower the fan speed of fan 50 upon detecting that the current user context is a meditation mode to allow the user to concentrate and relax during a meditation session. Control of the fan speed by the centralized user experience manager 30 based on the current user context can optionally override the control of the fan speed based on the current internal temperature of device 10. For example, during the immersive media mode, manager 30 might proactively raise the fan speed even though the current internal temperature of device 10 is still relatively low.
During the operations of block 68, centralized user experience manager 30 can perform thermal mitigation techniques as the temperature of one or more internal components rises within device 10. An internal temperature or the temperature of individual subsystems can be measured using one or more temperature sensor(s) 52 (see
For example, a first thermal mitigation knob can be to limit or restrict the frame rate of the graphics rendering subsystem 32. A second thermal mitigation knob listed after the first thermal mitigation knob can be to limit the performance of the voice-controlled automated assistant 59 (e.g., to limit the automatic speech recognition, text-to-speech, and dictation quality of subsystem 59). A third thermal mitigation knob after the second thermal mitigation knob can be to limit the performance and frequency of the scene understanding subsystem 36. A fourth thermal mitigation knob following the third thermal mitigation knob can be to limit the streaming tier or resolution of media streaming subsystem 57. A fifth thermal mitigation knob following the fourth thermal mitigation knob can be to adjust the phase audio rendering quality of an audio rendering subsystem within device 10. A sixth thermal mitigation knob following the fifth thermal mitigation knob can be to limit the face and body tracking frequency of subsystem 42.
A seventh thermal mitigation knob following the sixth thermal mitigation knob can be to disable mesh shadows and depth mitigation functions provided by rendering subsystem 32. An eighth thermal mitigation knob following the seventh thermal mitigation knob can be to limit the number of foreground applications presented to the user. A ninth thermal mitigation knob following the eighth thermal mitigation knob can be to limit a temporal anti-aliasing (TAA) coverage provided by rendering subsystem 32. A tenth thermal mitigation knob following the ninth thermal mitigation knob can be to limit the frequency and/or resolution of one or more environmental sensors (e.g., ambient light sensor 40, audio sensor 48, temperature sensor 52, or other sensors 16) within device 10. An eleventh thermal mitigation knob following the tenth thermal mitigation knob can be to limit the foveation size or tiers of dynamic foveation subsystem 34. A twelfth thermal mitigation knob following the eleventh thermal mitigation knob can be to limit the performance or frequency of the hands tracking subsystem 44. A thirteenth thermal mitigation knob following the twelfth thermal mitigation knob can be to limit the performance or frequency of the gaze tracking subsystem 46. Yet another thermal mitigation knob can include switching between different power modes (e.g., to switch from a first low power mode to a second low power mode).
The above thermal mitigation schemes are merely illustrative and do not represent an exhaustive list. The order of these thermal mitigation knobs can optionally be adjusted based on the current user context. The list and order of thermal mitigation knobs can also be tailored based on the current user context (e.g., different use cases can each rely on a different list of thermal mitigation knobs that least impact that particular use case). As an example, for an immersive mode or user context, limiting the performance or frequency of the hands tracking subsystem 44 can be moved higher up in the ordered list of thermal mitigation knobs (i.e., such limitation can be activated at a lower temperature threshold) since tracking the user's hand might not be as critical during the immersive mode. As another example, for the multiuser communication session mode or user context, adjusting the phase audio rendering quality can be moved higher up in the ordered list of thermal mitigation knobs. As another example, for the spatial capture mode or user context, disabling the mesh shadows and depth mitigation functions provided by rendering subsystem 32 can be moved higher up in the ordered list of thermal mitigation knobs since such functions may be less important during the capture mode. The operations of block 66 can occur in tandem or in parallel with the thermal mitigation operations of block 68, as indicated by path 69.
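One illustrative way to represent an ordered, context-dependent list of thermal mitigation knobs is sketched below; the knob names, default ordering, and reordering rules loosely mirror the examples above and are assumptions rather than an actual implementation.

```swift
// Illustrative ordered list of thermal mitigation knobs, reordered per context.
enum ThermalKnob {
    case renderFrameRate, assistantQuality, sceneUnderstanding, streamingTier,
         audioRenderingQuality, faceBodyTrackingRate, meshShadowsAndDepth,
         foregroundAppLimit, taaCoverage, environmentSensorRate,
         foveationSize, handsTrackingRate, gazeTrackingRate
}

enum Context { case immersiveMedia, multiuserSession, spatialCapture }

func mitigationOrder(for context: Context) -> [ThermalKnob] {
    // Default order: knobs with the least impact on most use cases come first.
    var order: [ThermalKnob] = [.renderFrameRate, .assistantQuality, .sceneUnderstanding,
                                .streamingTier, .audioRenderingQuality, .faceBodyTrackingRate,
                                .meshShadowsAndDepth, .foregroundAppLimit, .taaCoverage,
                                .environmentSensorRate, .foveationSize, .handsTrackingRate,
                                .gazeTrackingRate]
    // Moving a knob toward the front means it is engaged at a lower
    // temperature threshold because it matters less for that context.
    func promote(_ knob: ThermalKnob) {
        order.removeAll { $0 == knob }
        order.insert(knob, at: 0)
    }
    switch context {
    case .immersiveMedia:   promote(.handsTrackingRate)
    case .multiuserSession: promote(.audioRenderingQuality)
    case .spatialCapture:   promote(.meshShadowsAndDepth)
    }
    return order
}

// Knobs are engaged one at a time from the front of the list until the
// measured temperature stops rising.
print(mitigationOrder(for: .immersiveMedia).prefix(3))
```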
During the operations of block 70, centralized user experience manager 30 can perform user discomfort mitigation operations in response to detecting or predicting that the user will be experiencing visual discomfort (e.g., using discomfort monitoring subsystem 55 of
In response to detecting that the user might be or will be experiencing visual discomfort, device 10 can be configured to take suitable action to prioritize or to optimize for visual comfort.
For example, in response to detecting unstable frame rates, centralized user experience manager 30 may prioritize or reserve computing power to the rendering and display pipelines to help improve user comfort. As another example, in response to detecting movement or optical flow in the periphery of a user's field of view, centralized user experience manager 30 may reduce field of view or reduce the foveation size to help improve user comfort. As another example, in response to detecting increased latency levels associated with the graphics rendering pipeline, centralized user experience manager 30 may reduce the graphics rendering frame rate or simply fall back on only displaying the passthrough video feed without overlaying any rendered XR content. As another example, in response to detecting that the user is closing his or her eyes for an extended period of time, is blinking more than usual, or is acting in other ways indicating that the user is experiencing eye strain, centralized user experience manager 30 may limit a frequency, limit a resolution, and/or disable a function of at least one of the plurality of subsystems in device 10 to help improve user comfort.
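The discomfort-mitigation examples above can be condensed into an illustrative mapping; the signal and mitigation names below are hypothetical labels for the cases just described.

```swift
// Illustrative mapping from a detected or predicted discomfort signal to a
// mitigation action (names are hypothetical).
enum DiscomfortSignal { case unstableFrameRate, peripheralOpticalFlow, renderLatency, eyeStrain }

enum Mitigation {
    case reserveComputeForDisplayPipeline
    case reduceFieldOfViewOrFoveationSize
    case fallBackToPassthroughOnly
    case limitOrDisableSubsystems
}

func mitigation(for signal: DiscomfortSignal) -> Mitigation {
    switch signal {
    case .unstableFrameRate:     return .reserveComputeForDisplayPipeline
    case .peripheralOpticalFlow: return .reduceFieldOfViewOrFoveationSize
    case .renderLatency:         return .fallBackToPassthroughOnly
    case .eyeStrain:             return .limitOrDisableSubsystems
    }
}
```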
Referring back to
Such estimation and prediction of user context at engine 200 may be based on signal assertions from one or more subsystems (see, e.g., blocks 60 and 62 in
The passthrough state might indicate whether a passthrough video feed is currently being displayed to the user. Such passthrough state might be more relevant for device 10 of the type that includes an opaque display that blocks ambient light in the physical scene from directly reaching a user's eyes when the user operates such device 10. If the passthrough state indicates that the passthrough video feed is currently visible to the user, then any subsystem relevant to the processing of the passthrough feed should be taken into consideration by the centralized user experience manager 30 when making system-level adjustments. If the passthrough state indicates that the passthrough video feed is currently not visible to the user, then any subsystem relevant to the processing of the passthrough feed might be given lower priority by the centralized user experience manager 30 when making system-level adjustments.
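An illustrative sketch of this passthrough-state-based prioritization follows; the subsystem names and the numeric priority scale are assumptions for the example.

```swift
// Illustrative deprioritization of passthrough-only subsystems when the
// passthrough feed is not visible to the user.
struct SubsystemPriority { let name: String; var priority: Int } // 0 = lowest

func passthroughPriorities(passthroughVisible: Bool) -> [SubsystemPriority] {
    // Subsystems that primarily serve the passthrough feed.
    let passthroughOnly = ["sceneUnderstanding", "pointOfViewCorrection", "environmentLighting"]
    return passthroughOnly.map {
        SubsystemPriority(name: $0, priority: passthroughVisible ? 2 : 0)
    }
}
```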
During the operations of block 212, context engine 200 may monitor user activity and maintain a history of user activities. For example, context engine 200 might monitor application usage (e.g., when certain applications are active or idle), the location of device 10 throughout the day, the location of device 10 when certain applications are activated/launched, and/or other information relating to the user or device 10 when certain activities occur. Although the operations of block 212 are shown as occurring after block 210, the operations of block 212 can optionally occur in parallel (simultaneously) with or before the operations of block 210.
During the operations of block 214, context engine 200 may predict a future user activity based on the historical data (e.g., based on the history of monitored system states obtained during block 210 and/or based on the history of user activities obtained during block 212) and/or based on other user data. The other user data can include information gleaned from the user's calendar (sometimes referred to as calendar information) or reminders, which might indicate when the user will be traveling and so will be more likely to run a map application, a GPS (Global Positioning System) application, or other navigation application (as an example). If desired, the prediction performed by context engine 200 may be based on a machine learning (ML) model that is trained on historical data (e.g., historical system state and user activity information) associated with a broad base of users or with only a subset of all users. If desired, context engine 200 can also predict when the user is likely to charge battery 26 of device 10. For example, historical data might indicate that the user tends to plug in device 10 for charging at around 8 AM on the weekdays and at around 10 PM every night throughout the week.
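As a purely illustrative stand-in for the prediction described above, a simple per-hour histogram over past plug-in times can suggest when the user is most likely to charge device 10; the function name and sample data are hypothetical.

```swift
// Illustrative charge-time prediction from a history of past plug-in hours.
func likelyChargeHour(pastPlugInHours: [Int]) -> Int? {
    var counts = [Int: Int]()
    for hour in pastPlugInHours { counts[hour, default: 0] += 1 }
    // Return the hour of day (0-23) at which the user has most often plugged in.
    return counts.max { $0.value < $1.value }?.key
}

// Example: the user has mostly plugged in around 22:00 (10 PM).
print(likelyChargeHour(pastPlugInHours: [22, 22, 8, 22, 23, 22]) ?? -1) // 22
```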
During the operations of block 216, context engine 200 may estimate the power consumption for the current system operation (e.g., to estimate the power consumption of the various subsystems within device 10 in running the current user context or activity) and may estimate the power consumption for an upcoming (future) system operation (e.g., to estimate the power consumption of the various subsystems within device 10 to run the predicted user context or activity). For example, if the predicted user activity includes running a GPS navigation application, then context engine 200 can estimate the power usage of one or more subsystems involved in running the GPS navigation application.
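A rough sketch of such an estimate is shown below; the subsystem names and milliwatt figures are placeholder assumptions rather than measured values.

```swift
// Illustrative per-subsystem power estimates (milliwatts) for a predicted context.
let estimatedPowerMilliwatts: [String: Double] = [
    "display": 900, "rendering": 800, "gpsNavigation": 350,
    "sceneUnderstanding": 250, "handsTracking": 120
]

func estimatedPower(forActiveSubsystems active: [String]) -> Double {
    // Sum the estimates for the subsystems expected to be active.
    active.compactMap { estimatedPowerMilliwatts[$0] }.reduce(0, +)
}

// Predicted context: running a navigation application with the display on.
print(estimatedPower(forActiveSubsystems: ["display", "gpsNavigation"])) // 1250.0
```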
During the operations of block 218, centralized user experience manager 30 can adjust one or more subsystems based on the current battery level (state of charge of battery 26 in
Adjustment of subsystem knobs can optionally be based on the prediction of when the user is likely to charge (or plug in) device 10 as determined by context engine 200 during block 214. For example, if context engine 200 predicts that the user will be charging device 10 relatively soon (e.g., in less than one hour, less than two hours, less than three hours, etc.) and that the current battery level is sufficiently high to support the predicted user activity, then centralized user experience manager 30 need not be overly concerned about conserving power. On the other hand, if context engine 200 predicts that the user will not be charging device 10 for an extended period of time and the current battery level is limited, then centralized user experience manager 30 might adjust certain knobs to proactively reduce power consumption.
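One illustrative way to combine the remaining battery energy, the estimated demand of the predicted activity, and the predicted time until the next charge is sketched below; the 20% margin and the policy names are assumptions.

```swift
// Illustrative decision: restrict subsystems only when the remaining energy
// may not cover the predicted demand until the next expected charge.
enum PowerPolicy { case unrestricted, conservePower }

func policy(wattHoursRemaining: Double,
            predictedDemandWatts: Double,
            hoursUntilPredictedCharge: Double) -> PowerPolicy {
    let requiredWattHours = predictedDemandWatts * hoursUntilPredictedCharge
    // Keep a 20% margin before deciding that no restriction is needed.
    return wattHoursRemaining > requiredWattHours * 1.2 ? .unrestricted : .conservePower
}
```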
During the operations of block 220, centralized user experience manager 30 can optionally perform system adjustments based on whether the passthrough video feed is currently active (e.g., whether the passthrough feed is currently visible to the user). If the passthrough video feed is deactivated (e.g., the passthrough feed is not visible to the user), then centralized user experience manager 30 can automatically scale down (constrain or throttle down) subsystems associated with the processing of the passthrough feed. For example, the scene understanding block 36 (
In accordance with another embodiment associated with input sensor processing, a gaze-to-wake control knob associated with gaze tracking subsystem 46 can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). The gaze-to-wake control knob may determine the rate at which the user's gaze location is tracked to wake up device 10 from an idle mode. For example, device 10 may wake up from an idle state in response to the gaze tracker determining that the user's gaze location (or point of gaze) is aligned with a predetermined location in the user's field of view. This gaze-to-wake function may have a corresponding low thermal impact level on the overall thermals of device 10 and a corresponding high battery impact level on the battery usage of device 10. Such thermal and battery impact levels are exemplary. To help conserve battery life for device 10, the frequency (or frame rate) at which the gaze-to-wake function is being performed can optionally be reduced, limited, scaled back, constrained, or deactivated. For example, the gaze-to-wake frequency can be reduced from a nominal frequency of 10 Hz to less than 5 Hz, less than 4 Hz, less than 3 Hz, less than 2 Hz, or less than 1 Hz.
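A minimal sketch of scaling the gaze-to-wake polling rate with the battery's state of charge follows; the 10 Hz nominal rate comes from the example above, while the scaling thresholds are assumptions.

```swift
// Illustrative gaze-to-wake polling rate as a function of state of charge.
func gazeToWakeHz(stateOfCharge: Double, nominalHz: Double = 10) -> Double {
    switch stateOfCharge {
    case ..<0.1: return 0          // deactivate when the battery is nearly empty
    case ..<0.3: return 1          // aggressive reduction
    case ..<0.5: return 5          // moderate reduction
    default:     return nominalHz  // no restriction
    }
}
```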
If desired, a Siri-to-wake control knob can similarly be adjusted to conserve battery life. Unlike the gaze-to-wake feature that wakes up device 10 based on gaze data, the Siri-to-wake feature wakes up device 10 from the idle mode in response to detecting a voice command such as “Hey Siri.” For instance, the frequency (or frame rate) at which the Siri-to-wake function is being performed can optionally be reduced, limited, scaled back, constrained, or deactivated. The gaze-to-wake and the Siri-to-wake features can be supported using a subsystem configured to detect a user input (e.g., gaze or voice command) for waking up device 10.
In accordance with another embodiment associated with input sensing, a hands tracking control knob associated with hands tracking subsystem 44 can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). The hands tracking control knob may determine the rate at which the user's hands are being tracked by hands tracker 44. To help conserve battery life for device 10, the frequency (or frame rate) at which the hands tracking function is being performed can optionally be reduced, limited, scaled back, constrained, or deactivated. As another example, consider a scenario in which device 10 is being operated by a hearing-impaired user who might communicate using American Sign Language (ASL) in certain scenarios. The hearing-impaired user can input an accessibility setting so that centralized user experience manager 30 is made aware of such condition. Hands tracking subsystem 44 in device 10 can be used to detect the American Sign Language or other sign language. Depending on the current or predicted user context, centralized user experience manager 30 can selectively adjust the hands tracking subsystem 44 to optimize for the current use case or to save power. For instance, if the hearing-impaired user is running a productivity application where the primary user interaction is with a keyboard, then centralized user experience manager 30 may direct hands tracking subsystem 44 to run at a nominal/standard frame rate (or frequency). If the hearing-impaired user is in a multiuser communication session and is communicating with other users in the session via American Sign Language (ASL), centralized user experience manager 30 may scale up the hands tracking subsystem 44 (e.g., by increasing the frame rate of hands tracker 44) to assist with decoding the ASL based on knowledge of the accessibility setting and the current user context. As another example, consider a scenario in which device 10 is being operated by a visually-impaired user. In such scenario, the scene understanding subsystem 36 and/or other computer vision related subsystems (algorithms) can be scaled up to assist the visually-impaired user with navigating or interacting with the environment. If desired, device 10 may be operated in other accessibility modes depending on the currently identified user context (activity) and/or the predicted user context (activity).
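An illustrative sketch of choosing a hands-tracking rate from the accessibility setting and the current activity follows; the activity names and the specific rates are assumptions for the example.

```swift
// Illustrative hands-tracking rate selection based on an accessibility
// setting and the current activity.
enum Activity { case productivityWithKeyboard, multiuserSessionUsingSignLanguage, other }

func handsTrackingHz(signLanguageAccessibilityEnabled: Bool, activity: Activity) -> Int {
    switch (signLanguageAccessibilityEnabled, activity) {
    case (true, .multiuserSessionUsingSignLanguage):
        return 60  // scale up to help decode sign language
    default:
        return 30  // nominal rate for other combinations
    }
}
```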
In accordance with another embodiment associated with input sensing, a knob for controlling background outward-facing camera (OFC) vision can optionally be adjusted by centralized user experience manager 30 based on the current user context (activity) and/or based on the predicted user context (activity). The background OFC vision capability/function can leverage scene understanding subsystem 36 to detect objects of interest in the scene captured by one or more outward-facing cameras of device 10.
As an example, the background OFC vision feature can detect one or more screens, displays, and/or other planar surfaces that actively emit light in a scene. As another example, the background OFC vision feature can detect one or more faces in a scene (e.g., to detect whether the user of device 10 is having a conversation or is otherwise interacting with another person in the physical environment). As another example, the background OFC vision feature can detect one or more smart home devices in the scene that can be paired with and/or controlled by device 10 (e.g., the background OFC vision feature can detect one or more HomeKit-enabled devices that are compatible with the HomeKit or Apple Home software framework developed by Apple Inc.). Adjusting the background OFC vision capability in this way may have a corresponding medium thermal impact level on the overall thermals of device 10 and a corresponding high battery impact level on the overall battery usage of device 10. Such thermal and battery impact levels are exemplary. To help conserve battery life for device 10, the camera rate that the background OFC vision feature uses to capture and process scene images can be reduced (e.g., by constraining or deactivating the scene understanding subsystem). For example, the number of frames being captured and processed by the background OFC vision feature can be reduced from a nominal number of over 40000 frames/day to less than 20000 frames/day, to less than 10000 frames/day, or to less than 5000 frames/day.
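One simple way to express such a per-day frame limit is a budget counter, sketched below; the FrameBudget type and its interface are hypothetical, while the budget values mirror the illustrative frames-per-day figures above.

```swift
// Hypothetical sketch of a daily frame budget for the background OFC vision
// feature. The type is an assumption; the limits echo the figures in the text.

struct FrameBudget {
    var framesUsedToday = 0
    let dailyLimit: Int            // e.g., over 40_000 nominally, 10_000 when conserving battery

    /// Returns true if another frame may be captured and processed today.
    mutating func requestFrame() -> Bool {
        guard framesUsedToday < dailyLimit else { return false }  // budget exhausted
        framesUsedToday += 1
        return true
    }
}

var budget = FrameBudget(dailyLimit: 10_000)   // battery-saving budget
print(budget.requestFrame())                    // true until 10,000 frames are consumed
```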
In accordance with another embodiment associated with input sensing, a knob for controlling a localization algorithm on device 10 can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). For example, the localization algorithm can leverage one or more sensors 16 to estimate the position and/or orientation of device 10 within its environment. To help conserve battery life for device 10, the rate at which the localization algorithm is run can optionally be reduced, limited, scaled back, or otherwise constrained.
In accordance with an embodiment associated with input sensing, an audio algorithms control knob for processing input audio signals received by a microphone or output audio signals output by a speaker can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). The audio algorithms control knob may determine the quality with which audio signals are being processed at device 10. To help conserve battery life for device 10, the resource usage of the audio algorithms can optionally be reduced, limited, scaled back, or otherwise constrained. For example, context engine 200 can limit the amount by which the audio algorithms employ a central processing unit (CPU), application processor, graphics processing unit (GPU), neural processing unit (NPU), and/or other processing resources on device 10. As another example, consider a scenario in which a hearing-impaired user of device 10 is in a multiuser communication session and is communicating with other users in the session via American Sign Language (ASL). In such a scenario, the audio algorithms of device 10 can optionally be scaled back or otherwise constrained since the user will not be communicating with his/her voice.
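As a rough, purely illustrative sketch of constraining the processing budget granted to audio algorithms based on context, consider the following; the AudioProcessingBudget type, the "share" abstraction, and the specific fractions are hypothetical assumptions rather than anything specified above.

```swift
// Hypothetical sketch of constraining the processing budget of audio algorithms
// based on user context. The budget abstraction and limits are illustrative only.

struct AudioProcessingBudget {
    /// Fraction of the available processing budget granted to audio algorithms (0.0 ... 1.0).
    func allowedShare(userCommunicatesViaASL: Bool, batteryIsLow: Bool) -> Double {
        if userCommunicatesViaASL { return 0.10 }  // voice path largely unused; scale back
        if batteryIsLow           { return 0.25 }  // constrain to conserve battery
        return 0.50                                // assumed nominal share
    }
}

print(AudioProcessingBudget().allowedShare(userCommunicatesViaASL: true, batteryIsLow: false))   // 0.1
```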
The knobs above relating to input sensing and processing are illustrative. A second category of knobs might include one or more knobs related to the processing and output of information (e.g., for outputting visual and audio information to the user). The second category of knobs can have a second set of corresponding thermal impact levels T2 and a second set of corresponding battery impact levels B2. In accordance with an embodiment associated with an output function, a knob for controlling whether device 10 operates in an audio-only feedback mode can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). When the audio-only feedback mode (or feature) is deactivated, user alerts output by device 10 can include a graphical, textual, or other visual notification. Such visual alerts will require graphics rendering at subsystem 32 and processing by the display pipeline and will thus consume more power. When the audio-only feedback mode is activated, all user alerts can be simplified to audio-only feedback such as an audio message or a chime without displaying any visual alert (e.g., no change to the display is needed). Activating the audio-only feedback mode/feature can thus help circumvent any additional graphics rendering at subsystem 32 and display pipeline processes associated with rendering and outputting graphical, textual, or other visual alerts and can therefore reduce power consumption and conserve battery. Toggling the audio-only feedback mode control knob may have a corresponding high thermal impact level on the overall thermals of device 10 and a corresponding high battery impact level on the overall battery usage of device 10. Such thermal and battery impact levels are exemplary. If desired, the audio-only feedback mode can optionally be extended to also include haptic feedback (e.g., vibrational, touch, or other tactile feedback).
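The routing decision made by such an audio-only feedback knob can be sketched as follows; the Alert and Feedback types and the function name are illustrative assumptions, not the actual alert pipeline.

```swift
// Hypothetical sketch of routing user alerts based on the audio-only feedback
// mode. The Alert and Feedback types are illustrative assumptions.

struct Alert { let title: String }

enum Feedback {
    case audioChime                  // no change to the display is needed
    case visualNotification(String)  // requires graphics rendering and display pipeline work
}

func feedback(for alert: Alert, audioOnlyModeActive: Bool) -> Feedback {
    // When the audio-only mode is active, skip rendering a visual alert entirely.
    audioOnlyModeActive ? .audioChime : .visualNotification(alert.title)
}

print(feedback(for: Alert(title: "New message"), audioOnlyModeActive: true))   // audioChime
```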
In accordance with another embodiment associated with an output function (or mode), a knob for controlling whether device 10 operates in a low light mode can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). The “low light mode” of device 10 can refer to and be defined herein as a mode during which certain subsystems within device 10 are activated to enhance the user's vision in a dark environment when the scene light level is below a threshold. For example, the low light mode might boost the overall image brightness to enhance visibility at the expense of noise. As another example, the low light mode might highlight edges of physical objects in the scene to enhance visibility of the physical objects. As another example, the low light mode might activate image stabilization and/or multi-frame integration to improve the quality of captured images in low light. Activating and deactivating the low light mode may have a corresponding high thermal impact level on the overall thermals of device 10 and a corresponding high battery impact level on the overall battery usage of device 10. Such thermal and battery impact levels are exemplary. To help conserve battery life for device 10, the frame rate (or frequency) at which the low light mode is being operated by device 10 in dark environments can optionally be reduced, limited, scaled back, or otherwise constrained. For example, the camera rate at which the low light mode obtains and processes images can be reduced from a nominal frame rate of 30 fps (frames per second) to less than 20 fps, less than 10 fps, or less than 5 fps.
In accordance with another embodiment associated with an output mode for presenting visual information, a knob for controlling whether device 10 operates in a magnify mode can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). The “magnify mode” or magnifying mode of device 10 can refer to and be defined herein as a mode during which certain content in the user's field of view (FOV) is magnified for enhanced visibility and a more comfortable user experience. As an example, small text can be magnified to help prevent the user from having to squint to read the small text. Activating and deactivating the magnify mode may have a corresponding high thermal impact level on the overall thermals of device 10 and a corresponding high battery impact level on the overall battery usage of device 10. Such thermal and battery impact levels are exemplary. To help conserve battery life for device 10, the frame rate at which the magnifying mode is being operated by device 10 can optionally be reduced, limited, scaled back, or otherwise constrained. For example, the camera rate at which the magnify mode obtains and processes (magnifies) images can be reduced from a nominal frame rate of 30 fps to less than 20 fps, less than 10 fps, or less than 5 fps.
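Both the low light mode and the magnify mode follow the same frame-rate-clamping pattern, which can be sketched as below; the CameraMode enumeration and the specific reduced rates are hypothetical examples chosen to fall within the illustrative ranges given above.

```swift
// Hypothetical sketch of clamping the camera frame rate used by camera-driven
// modes such as the low light mode and the magnify mode. Rates are illustrative.

enum CameraMode { case lowLight, magnify }

func frameRate(for mode: CameraMode, conserveBattery: Bool) -> Double {
    let nominalFps = 30.0                         // nominal rate noted in the text
    guard conserveBattery else { return nominalFps }
    switch mode {
    case .lowLight: return 8.0                    // assumed reduced rate (below 10 fps)
    case .magnify:  return 15.0                   // assumed reduced rate (below 20 fps)
    }
}

print(frameRate(for: .lowLight, conserveBattery: true))   // 8.0
```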
In accordance with another embodiment associated with an output mode for presenting visual information, a knob for controlling whether device 10 operates in a single color mode can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). The “single color mode” of device 10 can refer to and be defined herein as a mode during which display content is limited to a single color. As an example, when the single color mode is activated, notifications and other information being displayed by device 10 may be limited to a black and white or grayscale output format. When the single color mode is deactivated, device 10 can be operated in a “full color mode” during which display content output from device 10 can exhibit the full color spectrum. Activating and deactivating the single color mode may have a corresponding medium thermal impact level on the overall thermals of device 10 and a corresponding low battery impact level on the overall battery usage of device 10. Such thermal and battery impact levels are exemplary. To help conserve battery life for device 10, device 10 may be operated in the single color mode (e.g., by activating the single color mode).
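One common way (among others) to map display content to a grayscale output format uses the Rec. 709 luma weights, sketched below; the PixelRGB type is illustrative, and nothing in the embodiments above prescribes this particular conversion.

```swift
// Hypothetical sketch of converting a pixel to a grayscale output format using
// the Rec. 709 luma weights. The PixelRGB type is an illustrative assumption.

struct PixelRGB { var r, g, b: Double }   // components in 0.0 ... 1.0

func grayscale(_ p: PixelRGB) -> PixelRGB {
    let y = 0.2126 * p.r + 0.7152 * p.g + 0.0722 * p.b   // relative luminance
    return PixelRGB(r: y, g: y, b: y)
}

print(grayscale(PixelRGB(r: 1.0, g: 0.5, b: 0.0)))   // mid-gray pixel
```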
In accordance with another embodiment associated with an output function, a knob for controlling the brightness of display(s) 14 in device 10 can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). For example, the display brightness can be increased to enhance visibility or can be decreased to conserve power at the expense of some visibility. Adjusting the display brightness may have a corresponding medium thermal impact level on the overall thermals of device 10 and a corresponding low battery impact level on the overall battery usage of device 10. Such thermal and battery impact levels are exemplary. To help conserve battery life for device 10, the display brightness of device 10 may be reduced.
In accordance with another embodiment associated with outputting visual information, a knob for controlling the rendering of user interface (UI) elements or other virtual content at graphics rendering subsystem 32 can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). To help conserve battery life for device 10, the resolution, frame rate, and/or visual complexity with which such UI elements or other virtual content are rendered can optionally be reduced, limited, scaled back, or otherwise constrained.
The second category of knobs relating to the output of visual and audio information is illustrative. In accordance with another embodiment, a knob for controlling the wireless connectivity of communications circuitry 20 in device 10 can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). For example, a Bluetooth transceiver in circuitry 20 can be temporarily deactivated to limit the Bluetooth connection between device 10 and a companion device. As another example, a WiFi transceiver in circuitry 20 can be temporarily deactivated to limit the WiFi connection between device 10 and a companion device. Adjusting the wireless connectivity of device 10 with one or more external devices may have a corresponding low thermal impact level T3 on the overall thermals of device 10 and a corresponding medium battery impact level B3 on the overall battery usage of device 10. Such thermal and battery impact levels are exemplary. To help conserve battery life for device 10, the wireless connectivity of device 10 may be reduced, limited, scaled back, or otherwise constrained.
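A minimal sketch of such a connectivity knob is shown below; the ConnectivityKnob type and its battery-saving policy (keeping only one of the two links active) are hypothetical illustrations, not the behavior of communications circuitry 20 itself.

```swift
// Hypothetical sketch of a wireless connectivity knob that temporarily limits
// the Bluetooth and/or WiFi links to a companion device. Types are illustrative.

struct ConnectivityKnob {
    var bluetoothEnabled = true
    var wifiEnabled = true

    /// Constrains connectivity when conserving battery, keeping only one of the two links active.
    mutating func applyBatterySaving(keepWiFi: Bool) {
        bluetoothEnabled = !keepWiFi
        wifiEnabled = keepWiFi
    }
}

var connectivity = ConnectivityKnob()
connectivity.applyBatterySaving(keepWiFi: true)
print(connectivity.bluetoothEnabled, connectivity.wifiEnabled)   // false true
```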
In accordance with another embodiment, a knob for controlling a notification rate at which device 10 checks for new notifications or updates from one or more applications running on device 10 can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). Such notification rate is sometimes referred to as a notification “pull rate.” For example, device 10 can check for new emails using a first pull rate, for new messages using a second pull rate equal to or different than the first pull rate, for application updates using a third pull rate equal to or different than the first and second pull rates, and/or can check for other notifications. Adjusting the pull rate(s) with which device 10 checks for new notifications or updates from the various applications running on device 10 may have a corresponding low thermal impact level T4 on the overall thermals of device 10 and a corresponding high battery impact level B4 on the overall battery usage of device 10. Such thermal and battery impact levels are exemplary. To help conserve battery life for device 10, the notification pull rate of device 10 may be reduced, limited, scaled back, or otherwise constrained (e.g., reduced from a nominal pull rate of 2000 pulls per day to less than 1000 pulls/day, to less than 500 pulls/day, to less than 200 pulls/day, or to less than 100 pulls/day).
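Translating a daily pull budget into a polling interval is a simple division, sketched below; the function is an illustrative helper, while the pull-rate figures mirror the illustrative values above.

```swift
// Hypothetical sketch of converting a daily notification pull budget into a
// polling interval in seconds (86,400 seconds per day).

func pullInterval(pullsPerDay: Double) -> Double {
    86_400.0 / pullsPerDay          // seconds between pulls
}

print(pullInterval(pullsPerDay: 2_000))   // 43.2 s at the nominal rate
print(pullInterval(pullsPerDay: 100))     // 864.0 s when heavily constrained
```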
In accordance with another embodiment, one or more knobs for controlling a logging algorithm for tracking information for the user of device 10 can optionally be adjusted by centralized user experience manager 30 or context engine 200 based on the current user context (activity) and/or based on the predicted user context (activity). Such a logging algorithm can help track certain activities relating to the user's health (as an example). To help conserve battery life for device 10, the resource usage of such a logging algorithm can optionally be reduced, limited, scaled back, or otherwise constrained. For example, context engine 200 can limit the amount by which the logging algorithm(s) employ a CPU, application processor, GPU, NPU, and/or other processing resources on device 10.
The knobs described above are merely illustrative. If desired, centralized user experience manager 30 or context engine 200 can adjust one or more other knobs to control the behavior of the various subsystems in device 10.
In general, one or more of the adjustments made by centralized user experience manager 30 can be made in accordance with user experience policies that are customizable based on different user identities, different user classifications or groups, and/or different geographies. For example, at least some of the operations described above can be tuned differently for different users, for different classes or groups of users, and/or for users in different geographic regions.
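As a purely illustrative sketch of policy customization, a policy could be looked up by user group and geography as shown below; the UXPolicy fields, the group and region keys, and all values are hypothetical assumptions and do not reflect any actual policy store.

```swift
// Hypothetical sketch of selecting a user experience policy keyed on user group
// and geography. Policy fields, keys, and values are illustrative assumptions.

struct UXPolicy {
    var maxHandsTrackingFps: Double
    var audioOnlyAlertsAllowed: Bool
}

func policy(forGroup group: String, geography: String) -> UXPolicy {
    switch (group, geography) {
    case ("accessibility-asl", _):
        return UXPolicy(maxHandsTrackingFps: 60, audioOnlyAlertsAllowed: false)  // assumed ASL group policy
    case (_, "EU"):
        return UXPolicy(maxHandsTrackingFps: 30, audioOnlyAlertsAllowed: true)   // assumed regional policy
    default:
        return UXPolicy(maxHandsTrackingFps: 45, audioOnlyAlertsAllowed: true)   // assumed default policy
    }
}

print(policy(forGroup: "accessibility-asl", geography: "US").maxHandsTrackingFps)   // 60.0
```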
The methods and operations described above may be performed by the components of device 10 using software, firmware, and/or hardware (e.g., dedicated circuitry or hardware). Software code for performing these operations may be stored on non-transitory computer readable storage media and may be executed by processing circuitry in device 10.
A physical environment refers to a physical world that people can sense and/or interact with without the aid of an electronic device. In contrast, an extended reality (XR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic device. For example, the XR environment may include augmented reality (AR) content, mixed reality (MR) content, virtual reality (VR) content, and/or the like. With an XR system, a subset of a person's physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the XR environment are adjusted in a manner that comports with at least one law of physics.
Many different types of electronic systems can enable a person to sense and/or interact with various XR environments. Examples include head mountable systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers.
To help protect the privacy of users, any personal user information that is gathered by sensors may be handled using best practices. These best practices include meeting or exceeding any applicable privacy regulations. Opt-in and opt-out options and/or other options may be provided that allow users to control usage of their personal data.
The foregoing is merely illustrative and various modifications can be made to the described embodiments. The foregoing embodiments may be implemented individually or in any combination.
This application claims the benefit of provisional patent application No. 63/607,635, filed Dec. 8, 2023, and provisional patent application No. 63/433,383, filed Dec. 16, 2022, which are hereby incorporated by reference herein in their entireties.