This disclosure generally relates to artificial reality, in particular to generating free-viewpoint videos.
Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect for the viewer). Artificial reality may be associated with applications, products, accessories, services, or some combination thereof, that are, e.g., used to create content in an artificial reality and/or used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
Particular embodiments described herein relate to systems and methods of generating subframes at a high frame rate based on real-time or close-real-time view directions of the user. The system may generate or receive mainframes that are generated at a mainframe rate. The mainframes may be generated by a remote or local computer based on the content data and may be generated at a relatively low frame rate (e.g., 30 Hz) compared to the subframe rate used to accommodate the user's head motion. The system may use a display engine to generate composited frames based on the received mainframes. The composited frames may be generated by the display engine using a ray-casting and sampling process at a higher frame rate (e.g., 90 Hz). The frame rate of the composited frames may be limited by the processing speed of the graphic pipeline of the display engine. Then, the system may store the composited frame in a frame buffer and use the pixel data in the frame buffer to generate subframes at an even higher frame rate according to the real-time or close-real-time view directions of the user.
At a high level, the method may use two alternative memory organization frameworks to convert a composed frame (composed by the display engine based on a mainframe) into multiple subframes that are adjusted for changes in the view direction of the user. The first memory organization framework may use a frame buffer memory local to the display panel having the LEDs (e.g., located in the same die as the LEDs or in a die stacked behind the LEDs and aligned to the LEDs). Under the first memory organization framework, the system may shift the pixel data stored in the buffer memory according to the approximate view direction (e.g., the view direction as measured in real-time or close-real-time as it changes). For example, the system may use the first memory architecture to generate 100 subframes per composited frame, resulting in a 9 kHz subframe rate. The second memory organization framework may use a frame buffer memory remote to the display panel hosting the LEDs. For example, the frame buffer may be located in the same die as the renderer in the display engine, which is remote to but in communication with the display panel hosting the LEDs. The system may shift the address offsets used for reading the frame buffer according to the approximate view direction of the user and read the pixel data from the frame buffer memory to generate the new subframes. For example, the system may use this memory architecture to generate 4 subframes per composited frame, resulting in a 360 Hz subframe rate.
To allow the subframe to be correctly generated by shifting the pixel data in the frame buffer or shifting the reading offset for reading the pixel data, the composited frame and the subframe generated according to the user's view direction may include pixel data corresponding to a number of pixel positions on the view plane that are uniformly distributed in an angle space (rather than in a tangent space). Then, the pixel data may be stored in a frame buffer (e.g., integrated with the display panel having the light-emitting elements or integrated with the display engine which is remote to the display panel with the light-emitting elements). When the system detects the user's head motion, the system may generate the corresponding subframe in response to the user's head motion and in accordance with an approximate view direction of the user by adjusting pixel data stored in the frame buffer or adjusting address offsets for the pixel data according to the view direction of the user as it changes over time. The approximate view direction of the user may be a real-time or close-real-time view direction of the user as measured by the head-tracking system rather than predicted based on head direction data of previous frames.
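As an example and not by way of limitation, the following C sketch illustrates what pixel positions that are "uniformly distributed in an angle space" look like on the view plane: uniform angular steps map to tangent-space positions through tan(θ), so the positions are not uniformly spaced on the view plane itself. The function and parameter names are illustrative assumptions rather than part of this disclosure.

    #include <math.h>

    /* A minimal sketch, assuming a symmetric horizontal FOV (radians)
     * and n >= 2 pixels: compute view-plane x-positions for pixels at
     * uniform angular steps. The tan() mapping means equal angle steps
     * are NOT equal distances on the view plane (tangent space).      */
    void angle_space_positions(double fov, int n, double *x_out)
    {
        double step = fov / (double)(n - 1);   /* uniform angular step */
        double theta0 = -fov / 2.0;            /* leftmost view angle  */
        for (int i = 0; i < n; i++) {
            double theta = theta0 + (double)i * step;
            x_out[i] = tan(theta);             /* tangent-space position */
        }
    }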
When the pixel values are to be output to LEDs, the system may use the distortion correction block, which samples the pixel values based on the LED locations/lens distortion characteristics, to correct such distortions. Thus, the system can use the sampling process to account for the fractional differences in angles considering both the lens distortions and the LED location distortions. The rates at which the mainframes and composited frames are rendered and the rate at which subframes are generated may be adjusted dynamically and independently. When the upstream system indicates that there is fast-changing content (e.g., fast-moving objects) or there is likely to be occlusion changes (e.g., changing FOVs), the system may increase the render rate of the mainframes, but the subframe rate, being independent of the mainframe rate or/and the composited frame rate, may be kept the same because the user's view direction is not changing that much. On the other hand, when the user's head moves rapidly, the system may increase the subframe rate independently without increasing the mainframe rate or/and the composited frame rate because the content itself is not changing that much. As a result, the system may allow the subframes to be generated at higher frame rates (e.g., subframes at 360 Hz on the basis of 4 subframes per composed frame with a frame rate of 90 Hz) to reduce the flashing and flickering artifacts. This may also allow LEDs to be turned on for more of the display time (e.g., 100% duty cycle), which can improve brightness and reduce power consumption because of the reduction in driving current levels. The system may allow the frame distortion correction to be made based on late-latched eye velocity, rather than eye velocity predicted in advance of rendering each frame. The system may allow the display rate to be adaptive to the amount of head motion and allow the render rate to be adaptive to the rate at which the scene and its occlusions are changing.
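As an example and not by way of limitation, the following C sketch summarizes the independent rate policy described above, where the mainframe rate reacts to content changes and the subframe rate reacts to head motion. The threshold values, types, and function names are illustrative assumptions and not part of this disclosure.

    /* A minimal sketch of the independent rate policy: the two rates
     * are chosen from separate inputs and never constrain each other. */
    typedef struct {
        int mainframe_hz;   /* content render rate                     */
        int subframe_hz;    /* display update (subframe) rate          */
    } rate_policy_t;

    rate_policy_t choose_rates(double content_change, double head_speed_dps)
    {
        rate_policy_t r = { 30, 90 };          /* baseline rates       */
        if (content_change > 0.5)              /* fast-changing content */
            r.mainframe_hz = 90;               /* raise render rate    */
        if (head_speed_dps > 100.0)            /* rapid head motion    */
            r.subframe_hz = 360;               /* raise subframe rate  */
        return r;
    }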
The embodiments disclosed herein are only examples, and the scope of this disclosure is not limited to them. Particular embodiments may include all, some, or none of the components, elements, features, functions, operations, or steps of the embodiments disclosed above. Embodiments according to the invention are in particular disclosed in the attached claims directed to a method, a storage medium, a system and a computer program product, wherein any feature mentioned in one claim category, e.g. method, can be claimed in another claim category, e.g. system, as well. The dependencies or references back in the attached claims are chosen for formal reasons only. However, any subject matter resulting from a deliberate reference back to any previous claims (in particular multiple dependencies) can be claimed as well, so that any combination of claims and the features thereof are disclosed and can be claimed regardless of the dependencies chosen in the attached claims. The subject-matter which can be claimed comprises not only the combinations of features as set out in the attached claims but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. Furthermore, any of the embodiments and features described or depicted herein can be claimed in a separate claim and/or in any combination with any embodiment or feature described or depicted herein or with any of the features of the attached claims.
FIG. 7 illustrates an example method of adjusting display content according to the user's view directions.
In particular embodiments, the display engine 130 may include a control block (not shown). The control block may receive data and control packets such as position data and surface information from controllers external to the display engine 130 through one or more data buses. For example, the control block may receive input stream data from a body wearable computing system. The input data stream may include a series of mainframe images generated at a mainframe rate of 30-90 Hz. The input stream data including the mainframe images may be converted to the required format and stored into the texture memory 132. In particular embodiments, the control block may receive input from the body wearable computing system and initialize the graphic pipelines in the display engine to prepare and finalize the image data for rendering on the display. The data and control packets may include information related to, for example, one or more surfaces including texel data, position data, and additional rendering instructions. The control block may distribute data as needed to one or more other blocks of the display engine 130. The control block may initiate the graphic pipelines for processing one or more frames to be displayed. In particular embodiments, the graphic pipelines for the two eye display systems may each include a control block or share the same control block.
In particular embodiments, the transform block 133 may determine initial visibility information for surfaces to be displayed in the artificial reality scene. In general, the transform block 133 may cast rays from pixel locations on the screen and produce filter commands (e.g., filtering based on bilinear or other types of interpolation techniques) to send to the pixel block 134. The transform block 133 may perform ray casting from the current viewpoint of the user (e.g., determined using the headset's inertial measurement units, eye tracking sensors, and/or any suitable tracking/localization algorithms, such as simultaneous localization and mapping (SLAM)) into the artificial scene where surfaces are positioned and may produce tile/surface pairs 144 to send to the pixel block 134. In particular embodiments, the transform block 133 may include a four-stage pipeline as follows. A ray caster may issue ray bundles corresponding to arrays of one or more aligned pixels, referred to as tiles (e.g., each tile may include 16×16 aligned pixels). The ray bundles may be warped, before entering the artificial reality scene, according to one or more distortion meshes. The distortion meshes may be configured to correct geometric distortion effects stemming from, at least, the eye display systems of the headset system. The transform block 133 may determine whether each ray bundle intersects with surfaces in the scene by comparing a bounding box of each tile to bounding boxes for the surfaces. If a ray bundle does not intersect with an object, it may be discarded. After the tile-surface intersections are detected, the corresponding tile/surface pairs may be passed to the pixel block 134.
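As an example and not by way of limitation, the tile/surface pairing described above may be sketched in C as a bounding-box overlap test; any ray bundle (tile) whose bounding box misses every surface is simply discarded. The types and function names are illustrative assumptions, not the actual transform block design.

    typedef struct { float xmin, ymin, xmax, ymax; } bbox_t;

    /* A tile/surface pair survives only if the tile's bounding box
     * (after warping by the distortion meshes) overlaps the surface's
     * bounding box.                                                   */
    static int bbox_overlap(bbox_t a, bbox_t b)
    {
        return a.xmin <= b.xmax && a.xmax >= b.xmin &&
               a.ymin <= b.ymax && a.ymax >= b.ymin;
    }

    void emit_tile_surface_pairs(const bbox_t *tiles, int n_tiles,
                                 const bbox_t *surfs, int n_surfs,
                                 void (*emit)(int tile, int surf))
    {
        for (int t = 0; t < n_tiles; t++)
            for (int s = 0; s < n_surfs; s++)
                if (bbox_overlap(tiles[t], surfs[s]))
                    emit(t, s);  /* pass the pair to the pixel block   */
    }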
In particular embodiments, the pixel block 134 may determine color values or grayscale values for the pixels based on the tile-surface pairs. The color values for each pixel may be sampled from the texel data of surfaces received and stored in texture memory 132. The pixel block 134 may receive tile-surface pairs from the transform block 133 and may schedule bilinear filtering using one or more filter blocks. For each tile-surface pair, the pixel block 134 may sample color information for the pixels within the tile using color values corresponding to where the projected tile intersects the surface. The pixel block 134 may determine pixel values based on the retrieved texels (e.g., using bilinear interpolation). In particular embodiments, the pixel block 134 may process the red, green, and blue color components separately for each pixel. In particular embodiments, the display may include two pixel blocks for the two eye display systems. The two pixel blocks of the two eye display systems may work independently and in parallel with each other. The pixel block 134 may then output its color determinations (e.g., pixels 138) to the display block 135. In particular embodiments, the pixel block 134 may composite two or more surfaces into one surface when the two or more surfaces have overlapping areas. A composed surface may need less computational resources (e.g., computational units, memory, power, etc.) for the resampling process.
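As an example and not by way of limitation, the bilinear filtering performed by the pixel block may be sketched for a single color channel as follows; the texture layout and function name are illustrative assumptions, not the actual filter block design. Consistent with the per-channel processing described above, such a sample may be run once per red, green, and blue component.

    /* A minimal bilinear sample of one color channel: tex is a w-by-h
     * single-channel texture in row-major order, and (u, v) is a
     * fractional texel coordinate with u in [0, w-1], v in [0, h-1].  */
    float sample_bilinear(const float *tex, int w, int h, float u, float v)
    {
        int x0 = (int)u, y0 = (int)v;
        int x1 = x0 + 1 < w ? x0 + 1 : x0;     /* clamp at the edge    */
        int y1 = y0 + 1 < h ? y0 + 1 : y0;
        float fx = u - (float)x0, fy = v - (float)y0;
        float top = tex[y0 * w + x0] * (1.0f - fx) + tex[y0 * w + x1] * fx;
        float bot = tex[y1 * w + x0] * (1.0f - fx) + tex[y1 * w + x1] * fx;
        return top * (1.0f - fy) + bot * fy;   /* blend the two rows   */
    }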
In particular embodiments, the display block 135 may receive pixel color values from the pixel block 134, convert the format of the data to be more suitable for the scanline output of the display, apply one or more brightness corrections to the pixel color values, and prepare the pixel color values for output to the display. In particular embodiments, the display block 135 may include a row buffer and may process and store the pixel data received from the pixel block 134. The pixel data may be organized in quads (e.g., 2×2 pixels per quad) and tiles (e.g., 16×16 pixels per tile). The display block 135 may convert tile-order pixel color values generated by the pixel block 134 into scanline or row-order data, which may be required by the physical displays. The brightness corrections may include any required brightness correction, gamma mapping, and dithering. The display block 135 may output the corrected pixel color values directly to the driver of the physical display (e.g., pupil display) or may output the pixel values to a block external to the display engine 130 in a variety of formats. For example, the eye display systems of the headset system may include additional hardware or software to further customize backend color processing, to support a wider interface to the display, or to optimize display speed or fidelity.
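As an example and not by way of limitation, converting tile-order pixel data into the scanline (row-order) layout required by the physical display may be sketched in C as follows; the buffer layout and 8-bit pixel type are illustrative assumptions.

    #include <stdint.h>

    #define TILE 16

    /* Convert tile-order data (16x16 tiles, each stored contiguously
     * in row-major order) into the row-order layout a scanline display
     * expects. Both dimensions are assumed to be multiples of TILE.   */
    void tiles_to_scanlines(const uint8_t *tile_buf, uint8_t *row_buf,
                            int width, int height)
    {
        int tiles_per_row = width / TILE;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int tx = x / TILE, ty = y / TILE;     /* which tile     */
                int ox = x % TILE, oy = y % TILE;     /* offset in tile */
                int src = (ty * tiles_per_row + tx) * TILE * TILE
                          + oy * TILE + ox;
                row_buf[y * width + x] = tile_buf[src];
            }
        }
    }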
In particular embodiments, the dithering methods and processes (e.g., spatial dithering methods, temporal dithering methods, and spatio-temporal methods) as described in this disclosure may be embodied or implemented in the display block 135 of the display engine 130. In particular embodiments, the display block 135 may include a model-based dithering algorithm or a dithering model for each color channel and send the dithered results of the respective color channels to the respective display driver ICs (DDIs) (e.g., 142A, 142B, 142C) of display system 140. In particular embodiments, before sending the pixel values to the respective display driver ICs (e.g., 142A, 142B, 142C), the display block 135 may further include one or more algorithms for correcting, for example, pixel non-uniformity, LED non-ideality, waveguide non-uniformity, display defects (e.g., dead pixels), display degradation, etc. U.S. patent application Ser. No. 16/998,860, entitled “Display Degradation Compensation,” first named inventor “Edward Buckley,” filed on 20 Aug. 2020, which discloses example systems, methods, and processes for display degradation compensation, is incorporated herein by reference.
In particular embodiments, graphics applications (e.g., games, maps, content-providing apps, etc.) may build a scene graph, which is used together with a given view position and point in time to generate primitives to render on a GPU or display engine. The scene graph may define the logical and/or spatial relationship between objects in the scene. In particular embodiments, the display engine 130 may also generate and store a scene graph that is a simplified form of the full application scene graph. The simplified scene graph may be used to specify the logical and/or spatial relationships between surfaces (e.g., the primitives rendered by the display engine 130, such as quadrilaterals or contours, defined in 3D space, that have corresponding textures generated based on the mainframe rendered by the application). Storing a scene graph allows the display engine 130 to render the scene to multiple display frames and to adjust each element in the scene graph for the current viewpoint (e.g., head position), the current object positions (e.g., they could be moving relative to each other) and other factors that change per display frame. In addition, based on the scene graph, the display engine 130 may also adjust for the geometric and color distortion introduced by the display subsystem and then composite the objects together to generate a frame. Storing a scene graph allows the display engine 130 to approximate the result of doing a full render at the desired high frame rate, while actually running the GPU or display engine 130 at a significantly lower rate.
In particular embodiments, the graphic pipeline 100D may include a resampling step 153, where the display engine 130 may determine the color values from the tile-surface pairs to produce pixel color values. The resampling step 153 may be performed by the pixel block 134 in
In particular embodiments, the graphic pipeline 100D may include a bend step 154, a correction and dithering step 155, a serialization step 156, etc. In particular embodiments, the bend step, correction and dithering step, and serialization steps of 154, 155, and 156 may be performed by the display block (e.g., 135 in
Traditional AR/VR systems may render frames according to the user's view directions that are predicted based on head-tracking data associated with previous frames. However, it can be difficult to predict the view direction accurately far enough into the future to cover the time period needed for the rendering process. For example, it may be necessary to use the head position at the start of the frame and the predicted head position at the end of the frame to allow smoothly changing the head position as the frame is scanned out. At 100 frames per second, this delay may be 10 ms. It can be hard to accurately predict the user's view direction 10 ms into the future because the user may arbitrarily change the head motion at any time. This inaccuracy in the predicted view direction of the user may negatively affect the quality of the rendered frames. Furthermore, the head/eye tracking system used by AR/VR systems can track and predict the user's head/eye motion only up to a certain speed limit, and the display engine or rendering pipeline may also have a rendering speed limit. Because of these speed limits, AR/VR systems may have an upper limit for their highest subframe rate. As a result, when the user moves his head/eye rapidly, the user may perceive artifacts (e.g., flickers or warping) due to the inaccurate view direction prediction and the limited subframe rate of the AR/VR system.
To solve this problem, particular embodiments of the system may generate subframes at a high frame rate based on the view directions of the user as measured by the eye/head tracking system in real-time or close-real-time. At a high level, the method may use two alternative memory organization frameworks to convert a composed frame (e.g., a frame composed by the display engine based on a mainframe) into multiple subframes that are adjusted for changes in the view direction of the user. The first memory organization framework may use a frame buffer memory local to the display panel having the LEDs (e.g., located in the same die as the LEDs or in a die stacked behind the LEDs and aligned to the LEDs). Under the first memory organization framework, the system may shift the pixel data stored in the buffer memory according to the approximate view direction (e.g., the view direction as measured in real-time or close-real-time as it changes). For example, the system may use this memory architecture to generate 100 subframes for each composited frame, which has a frame rate of 90 Hz, resulting in a subframe rate of 9 kHz. The second memory organization framework may use a frame buffer memory remote to the display panel hosting the LEDs. For example, the frame buffer may be located in the same die as the renderer in the display engine, which is remote to (e.g., connected by cables or wireless communication channels) but in communication with the display panel hosting the LEDs. The system may shift the address offsets used for reading the frame buffer according to the approximate view direction of the user and read the pixel data from the frame buffer memory to generate the new subframes. For example, the system may use this memory architecture to generate 4 subframes for each composited frame, which has a frame rate of 90 Hz, resulting in a subframe rate of 360 Hz.
To allow the subframe to be correctly generated by shifting the pixel data in the frame buffer or shifting the reading offset for reading the pixel data, the composited frame and the subframe generated according to the user's view direction may include pixel data corresponding to a number of pixel positions on the view plane that are uniformly distributed in an angle space (rather than in a tangent space). Then, the pixel data may be stored in a frame buffer (e.g., integrated with the display panel having the light-emitting elements or integrated with the display engine which is remote to the display panel with the light-emitting elements). When the system detects the user's head motion, the system may generate the corresponding subframe in response to the user's head motion and in accordance with an approximate view direction of the user (as measured by the head tracking system) by adjusting pixel data stored in the frame buffer or adjusting address offsets for the pixel data according to the view direction of the user as it changes over time. The approximate view direction of the user may be a real-time or close-real-time view direction of the user as measured by the head tracking system rather than view directions that are predicted based on head direction data of previous frames.
Particular embodiments of the system may use either of the two memory architectures to generate subframes at a high frame rate and according to the user's view directions as measured in real-time or close-real-time. By avoiding the use of predicted view directions, which may not be accurate and may compromise the quality of the display content, the system may achieve better display quality with reduced flashing and flickering artifacts and provide better user experience. By using a higher subframe rate, which is independent of the mainframe rate, particular embodiments of the system may allow LEDs to be turned on for a longer display time during each display period (e.g., 100% duty cycle) and can improve brightness and reduce power consumption due to the reduction in driving current levels. By resampling the pixel values based on the actual LED locations and the distortions of the system, particular embodiments of the system may allow the frame distortion to be corrected, and thus may provide improved display quality. By using independent mainframe and subframe rates, particular embodiments of the system may allow the display rate to be adaptive to the amount of the user's head motion and allow the render rate to be adaptive to the rate at which the scene and its occlusions are changing, providing optimal performance and optimized computational resource allocations.
In this disclosure, the term “mainframe rate” may refer to a frame rate that is used by the upper stream computing system to generate the mainframes. The term “composited frames” may refer to the frames that are rendered or generated by the renderer, such as a GPU or display engine. A “composited frame” may also be referred to as a “rendered frame.” The term “rendering frame rate” may refer to a frame rate used by the renderer (e.g., the display engine or GPU) to render or compose composited frames (e.g., based on mainframes received from an upper stream computing system such as a headset or main computer). The term “display frame rate” may refer to a frame rate that is used for updating the uLEDs and may be referred to as “display updating frame rate.” The display frame rate or display updating frame rate may be equal to the “subframe rate” of the subframes generated by the system to update the uLEDs. In this disclosure, the term “display panel” or “display chip” may refer to a physical panel, a silicon chip, or a display component hosting an array of uLEDs or other types of LEDs. In this disclosure, the term “LED” may refer to any type of light-emitting elements including, for example, but not limited to, micro-LEDs (uLEDs). In this disclosure, the terms “pixel memory unit” and “pixel block” may be used interchangeably.
In particular embodiments, the system may use a frame buffer memory to support rendering frames at a rendering frame rate that is different from a subframe rate (also referred to as a display frame rate) used for updating the uLED array. For example, the system may generate composited frames (using a display engine or a GPU for rendering display content) at a frame rate of 90 Hz, which may be a compromise result considering two competing factors: (1) the rendering rate needs to be slow enough to reduce the cost of rendering (e.g., ideally at a frame rate less than 60 Hz); and (2) the rate needs to be fast enough to reduce blur when the user's head moves (e.g., ideally at a frame rate up to 360 Hz). In particular embodiments, by using a 90 Hz display frame rate, the system may use a duty cycle of 10% to drive the uLEDs to reduce the blur to an acceptable level when the user moves his head at a high speed. However, the 10% duty cycle may result in strobing artifacts and may require significantly higher current levels to drive the uLEDs, which may increase the drive transistor size and reduce power efficiency.
In particular embodiments, the system may solve this problem by allowing the rendering frame rate used by the renderer (e.g., a GPU or display engine) to be decoupled from the subframe rate that is used to update the uLEDs. Both the rendering frame rate used by the renderer and the subframe rate used for updating the uLEDs may be set to values that are suitable for their respective diverging requirements. Further, both the rendering frame rate and the subframe rate may be adaptive to support different workloads. For example, the rendering frame rate may be adaptive to the display content (e.g., based on whether there is a FOV change or whether there is a fast-moving object in the scene). As another example, the subframe rate for updating the uLEDs may be adaptive to the user's head motion speed.
In particular embodiments, the system may decouple rendering frame rate and subframe rate by building a specialized tile processor array, including a number of tile processors, into the silicon chip that drives the uLEDs. The tile processors may collectively store a full frame of pixel data. The system may shift or/and rotate the pixel data in the frame buffer as the user's head moves (e.g., along the left/right or/and up/down directions), so that these head movements can be accounted for with no need to re-render the scene at the subframe rate. In particular embodiments, even for VR displays that use large (e.g. 1″×1″) silicon chips to drive uLED arrays, the area on the silicon chip that drives the uLEDs may be entirely used by the drive transistors, leaving no room for the frame buffer. As an alternative, particular embodiments of the system may use a specialized buffer memory that could be built into the display engine (e.g., GPUs, graphics XRU chips) that drives the uLED array. The display engine may be remote (i.e., in different components) to the silicon chip that drives the uLEDs. The system may adjust the rendered frame to account for the user's head motion (e.g., along the left/right or/and up/down directions) by shifting the address offset used for reading the pixel data from the frame buffer which is remote to the uLED drivers. As a result, the subframes used for updating the uLEDs may account for the user's head motion and the frame rendering process by the display engine may not need to account for the user's head motion.
In particular embodiments, when the frame buffer is located on the silicon chip hosting the uLED array, the system may shift pixels within the memory array to generate subframes. Specific regions of memory may be used to generate brightness for specific tiles of uLEDs. In particular embodiments, when the frame buffer is located at a different component remote to the silicon chip hosting the uLED array, the system may shift an address offset (Xs, Ys) that specifies the position of the origin within the memory array for reading the frame buffer to generate the subframes. When accessing location (X, Y) within the array, the corresponding memory address may be computed as follows:
address = ((X + Xs) mod W) + ((Y + Ys) mod H) × W    (1)
where (Xs, Ys) is the address offset, (X, Y) is the current address, W is the width of the frame buffer memory (e.g., as measured by the number of memory units corresponding to pixels), and H is the height of the frame buffer memory (e.g., as measured by the number of memory units corresponding to pixels). In particular embodiments, the frame buffer position in memory may rotate in the left/right direction with changes in Xs and may rotate in the up/down direction with changes in Ys, all without actually moving any of the data already stored in memory. It is notable that W and H may not be limited to the width and height of the uLED array, but may include an overflow region, so that the frame buffer on a VR device may be larger than the LED array to permit shifting the data with head movement.
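As an example and not by way of limitation, equation (1) may be read directly as the following C function; because the origin (Xs, Ys) wraps around, shifting the view becomes a change of offset rather than a copy of pixel data.

    /* Direct reading of equation (1): map an array location (x, y) to
     * a memory address given the wrap-around origin (xs, ys). w and h
     * include the overflow region, so they may exceed the uLED array
     * dimensions.                                                     */
    unsigned fb_address(unsigned x, unsigned y,
                        unsigned xs, unsigned ys,
                        unsigned w, unsigned h)
    {
        return ((x + xs) % w) + ((y + ys) % h) * w;
    }

Incrementing xs by one rotates the frame buffer one pixel in the left/right direction, and incrementing ys rotates it up/down, without moving any stored data.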
In particular embodiments, the system may use an array processor that is designed to be integrated into the silicon chip hosting an array of LEDs (e.g., uOLEDs). In particular embodiments, the system may allow the LEDs to be on close to 100% of the time of the display cycle (i.e., 100% duty cycle) and may provide desired brightness levels, without introducing blur due to head motion. In particular embodiments, the system may eliminate strobing and warping artifacts due to head motion and reduce LED power consumption. In particular embodiments, the system may include elements including, for example, but not limited to, a pixel data input module, a pixel memory array and an array access interface, tile processors to compute LED driving signal parameter values, an LED data output interface, etc. In particular embodiments, the system may be bonded to a die having an array of LEDs (e.g., a 3000×3000 array of uOLEDs). The system may use a VR display with a pancake lens and a 90-degree field of view (FOV). The system may use this high resolution to produce a retinal display, where individual LEDs may not be distinguishable by the viewer. In particular embodiments, the system may use four display chips to produce a larger array of LEDs (e.g., a 6000×6000 array of uOLEDs). In particular embodiments, the system may support a flexible or variable display frame rate (e.g., up to 100 fps) for the rates of the mainframes and subframes. Changes in occlusion due to head movement and object movement/changes may be computed at the display frame rate. At 100 fps, changes in occlusion may not be visible to the viewer. The display frame rate need not be fixed but could be varied depending on the magnitude of occlusion changes. In particular embodiments, the system may load frames of pixel data into the array processor, which may adjust the pixel data to generate subframes at a significantly higher subframe rate than 100 fps to account for changes in the user's viewpoint angle. In particular embodiments, the system may not support head position changes or object changes because those would change occlusion. In particular embodiments, the system may support head position changes or object changes by having a frame buffer storing pixel data that covers a larger area than the actually displayed area of the scene.
In particular embodiments, the system may need extra power for introducing the buffer memory into the graphic pipeline. Much of the power for reading and writing operations of memory units may already be accounted for, since the buffer replaces a multi-line buffer in the renderer/display interface. Power per byte access may increase, since the memory array may be large. However, if both the line buffer and the specialized frame buffer are built from SRAM, the power difference may be controlled within an acceptable range. Whether the data is stored in a line buffer or a frame buffer, each pixel may be written to SRAM once and read from SRAM once during the frame, so the active read/write power may be the same. Leakage power may be greater for the larger frame buffer memory than for the smaller line buffer, but this can be reduced significantly by turning off the address drivers for portions of the frame buffer that are not currently being accessed. A more serious challenge may be the extra power required to read and transmit data to the uLED array chip at a higher subframe rate. Inter-chip driving power may be dramatically greater than the power for reading SRAM. Thus, the extra power can become a critical issue. In particular embodiments, the system may adopt a solution which continually alters the subframe rate based on the amount of head movement. For example, when the user's head is relatively still, the subframe rate may be 90 fps (i.e., 90 Hz) or less. Only when the head is moving quickly would the subframe rate increase, for example, to 360 fps (i.e., 360 Hz) for the fastest head movements. In particular embodiments, if the frame buffer allows a subframe rate of up to 4 times the rendering frame rate, the uLED duty cycle may be increased from 10% to 40% of the frame time. This may reduce the current level required to drive the uLEDs, which may reduce the power required and dramatically reduce the size of the drive transistors. In particular embodiments, the uLED duty cycle may be increased up to 100%, which may further reduce the current level and thus the power that is needed to drive the uLEDs.
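As an example and not by way of limitation, the duty-cycle arithmetic above may be sketched as follows, assuming perceived brightness scales approximately linearly with time-averaged current; under that assumption the peak drive current scales as the inverse of the duty cycle, so raising the duty cycle from 10% to 40% cuts the required peak current by a factor of 4. The values shown are illustrative.

    /* Peak LED drive current needed to reach a target time-averaged
     * current at a given duty cycle (assumes linear brightness vs.
     * average current).                                               */
    double peak_current_ma(double avg_current_ma, double duty_cycle)
    {
        return avg_current_ma / duty_cycle; /* e.g., 1.0 mA average:   */
    }                                       /* 0.10 duty -> 10.0 mA    */
                                            /* 0.40 duty ->  2.5 mA    */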
In particular embodiments, the system may encounter user head rotation as fast as 300 degrees/sec. The system may load the display frame data into pixel memory at a loading speed up to 100 fps. Therefore, there may be up to 3 degrees of viewpoint angle change per display frame. As an example and not by way of limitation, with 3000 uOLEDs in a 90-degree FOV, the movement of 3 degrees per frame may roughly correspond to 100 uOLEDs per frame. Therefore, to avoid aliasing, uOLED values may be computed at least 100 times per frame, or 10,000 times per second. If a 3000×3000 display is processed in tiles of 32×32 uOLEDs per tile, there may be almost 100 horizontal swaths of tiles. This suggests that the display frame time may be divided into up to 100 subframe times, where one swath of pixel values may be loaded per subframe, replacing the entire pixel memory over the course of a display time. Individual swaths of uOLEDs could remain on except during the one or two subframes while their pixel memory is being accessed. In particular embodiments, the pixel memory may be increased by the worst-case supported change in view angle. Thus, supporting 3 degrees of angle change per display frame may require an extra 100 pixels on all four edges of the pixel array. As another example, with 6000 uOLEDs in a 90-degree FOV, the movement of 3 degrees per frame may roughly correspond to 200 uOLEDs per frame. Therefore, to avoid aliasing, uOLED values may be computed at least 200 times per frame, or 20,000 times per second.
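As an example and not by way of limitation, the arithmetic above may be checked with the following short C program; the variable names are illustrative.

    #include <stdio.h>

    int main(void)
    {
        double head_dps  = 300.0;  /* peak head rotation, degrees/sec  */
        double frame_hz  = 100.0;  /* display frame load rate          */
        double fov_deg   = 90.0;   /* field of view                    */
        double px_across = 3000.0; /* uOLEDs across the FOV            */

        double deg_per_frame = head_dps / frame_hz;             /* 3   */
        double px_per_frame  = deg_per_frame * px_across / fov_deg;
        double updates_per_s = px_per_frame * frame_hz;

        /* prints: 100 px/frame -> 10000 uOLED updates/sec             */
        printf("%.0f px/frame -> %.0f uOLED updates/sec\n",
               px_per_frame, updates_per_s);
        return 0;
    }

Doubling px_across to 6000.0 reproduces the 200 uOLEDs per frame and 20,000 updates per second of the second example.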
In particular embodiments, the system may use an array processor that is integrated into the silicon chip hosting the array of LEDs (e.g., uOLEDs). The system may generate subframes that are adjusted for the user's view angle changes at a subframe rate and may correct all other effects related to, for example, but not limited to, object movement or changes in view position. For example, the system may correct pixel misalignment when generating the subframes. As another example, the system may generate the subframes in accordance with the view angle changes of the user while a display frame is being displayed. The view angle changes may be yaw (i.e., rotation along the horizontal direction) or pitch (i.e., rotation along the vertical direction). In particular embodiments, the complete view position updates may occur at the display frame rate (i.e., subframe rate), for example, with yaw and pitch being corrected at up to 100 subframes per display frame. In particular embodiments, the system may generate the subframes with corrections or adjustments according to torsion changes of the user. Torsion may be turning the head sideways and may occur mostly as part of turning the head to look at something up or down and to the side. The peak torsion angular speed may correspond to a fraction of a pixel of offset at the edges of the screen in a single frame time. In particular embodiments, the system may generate the subframes with corrections or adjustments for translation changes. Translation changes may include moving the head in space (translation in head position) and may affect parallax and occlusion. The largest translation may occur when the user's head is also rotating. Peak display translation may be caused by fast head movement and may be measured by the number of pixels of change in parallax and inter-object occlusion.
In particular embodiments, the system may generate subframes with corrections or adjustments based on the eye movement of the user during the display frame. The eye movement may be a significant issue for raster scanned displays, since raster scanning can result in an appearance of vertical lines tilting on left/right eye movement or objects expanding or shrinking (with corresponding changes in brightness) on up/down eye movement. These effects may not occur when the system uses LEDs because the LEDs may be flashed on all together after loading the image, rather than using raster scanning. As a result, eye movement may produce a blur effect instead of a raster scanning effect, just as the real world tends to blur with fast eye movement. The human brain may correct for this real-world blur, and the system may provide the same correction for always-on LEDs to eliminate the blur effect.
In particular embodiments, the pixel array stored in the frame buffer may cover a larger area than the area to be actually displayed to include overflow pixels on the edges for facilitating the pixel shifting operations. As an example and not by way of limitation, the pixel array may cover a larger area than the view plane 330 and the covered area may extend beyond all edges of the view plane 330. It is notable that, although
In particular embodiments, the pixel array may not need to be the same size as the LED array, even discounting overflow pixels around the edges, because the system may use a resampling process to determine the LED values based on the pixel values in the pixel array. By using the resampling process, the pixel array size may be either larger or smaller than the size of the LED array. For example, the angle space pixel arrays as shown in
It is notable that, in particular embodiments, the pixels on the respective view plane may correspond to pixel values that are computed to represent a scene to be displayed, and the pixel positions on the view plane may be the intersecting positions as determined using a ray casting process; the pixel positions may not be aligned to the actual LED positions in the LED array. This may be true for the pixels on the view plane both before and after the rotation of the view plane. To solve this problem, the system may resample the pixel values in the pixel array to determine the LED values for the LED array. In particular embodiments, the LED values may include any suitable parameters for the driving signals for the LEDs including, for example, but not limited to, a current level, a voltage level, a duty cycle, a display period duration, etc. As illustrated in
If the pixels were uniformly spaced along the view plane (i.e., in tangent space), the pixels would be farther apart in angle at one portion of the view plane than at another portion, and a memory shifting solution would have to shift pixels by different amounts at different places in the array. In particular embodiments, by using pixels uniformly distributed in the angle space, the system may allow uniform shifts of pixels for generating subframes in response to the user's view angle changes. Furthermore, because uniformly spaced pixels in the angle space result in denser pixels in the central areas of the FOV, the angle space pixel array may provide a foveation (e.g., 2:1 foveation) from the center to the edges of the view plane. In general, the system may have the highest resolution at the center of the array and may tolerate lower resolution at the edges. This may be true even without eye tracking since the user's eyes seldom move very far from the center for a very long time before moving back to near the center.
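As an example and not by way of limitation, the 2:1 foveation figure may be verified numerically: with pixels uniform in angle space, the view-plane spacing grows as sec²(θ), and at the edge of a 90-degree FOV (θ = 45 degrees) the spacing is exactly twice the spacing at the center.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double pi = acos(-1.0);
        double edge = 45.0 * pi / 180.0;            /* edge of 90-deg FOV */
        double ratio = 1.0 / (cos(edge) * cos(edge)); /* sec^2(theta)     */
        printf("edge/center pixel spacing = %.2f\n", ratio); /* 2.00      */
        return 0;
    }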
In particular embodiments, each tile processor may access a defined region of memory plus one pixel along the edges from adjacent tile processors. However, in particular embodiments, a much larger variation may be supported due to lens distortion. In general, a lens may produce pincushion distortion that varies for different frequencies of light. In particular embodiments, the pincushion distortion may be corrected by barrel distorting the pixels prior to display when a standard VR system is used. In particular embodiments, the barrel distorting may not work because the system may need to keep the pixels in angle space to use the pixel shifting method to generate subframes in response to changes of the view angle. As a result, the system may use the memory array to allow each tile processor to access pixels in a local region around that tile processor, depending on the magnitude of the distortion that can occur in that tile processor's row or column, and the system may use the system architectures described in this disclosure to support this function. As discussed earlier in this disclosure, in particular embodiments, the pixel array stored in the memory may not be aligned with the LED array. The system may use a resampling process to determine the LED values based on the pixel array and the relative positions of the pixels in the array and the LED positions. The pixel positions for the pixel array may be with respect to the view plane and may be determined using a ray casting process and/or a rotation process. In particular embodiments, the system may correct the lens distortion during the resampling process, taking into consideration the LED positions as distorted by the lens.
In particular embodiments, depending on the change of the user's view angle, the pixels in the pixel array may need to be shifted by a non-integer number of pixel units. In this scenario, the system may first shift the pixels by an integer number of pixel units, using the integer closest to the target shifting offset. Then, the system may factor in the fraction of a pixel unit corresponding to the difference between the actually shifted offset and the target offset during the resampling process for determining LED values based on the pixel array and the relative positions of the pixel positions and LED positions. As an example and not by way of limitation, the system may need to shift the pixels in the array by 2.75 pixel units toward the left. The system may first shift the pixel array by 3 pixel units toward the left. Then, the system may factor in the 0.25 position difference during the resampling process. As a result, the pixel values in the generated subframes may be correctly calculated corresponding to the 2.75 pixel units. As another example, the system may need to shift the pixel array by 2.1 pixel units toward the right. The system may first shift the pixel array by 2 pixel units and may factor in the 0.1 pixel unit during the resampling process. As a result, the pixel values in the generated subframes may be correctly determined corresponding to the 2.1 pixel units. During the resampling process, the system may use an interpolation operation to determine an LED value based on a corresponding 2×2 block of pixels. The interpolation may be based on the relative positions of the 2×2 pixels with respect to the position of the LED, taking into consideration (1) the fractional difference between the target shifting offset and the actually shifted offset; and (2) the lens distortion effect that distorts the relative positions of the pixels and LEDs.
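As an example and not by way of limitation, splitting a non-integer shift into an integer memory shift plus a fractional sampling correction may be sketched in one dimension as follows; the function names are illustrative assumptions.

    #include <math.h>

    /* Split a target shift (e.g., 2.75 pixel units) into the nearest
     * integer shift (3) applied in memory and the signed residual
     * (-0.25) folded into the resampling interpolation.               */
    void split_shift(double target_px, int *int_shift, double *frac)
    {
        *int_shift = (int)lround(target_px);
        *frac = target_px - (double)*int_shift;
    }

    /* 1-D linear resample of the already integer-shifted array at a
     * fractional offset; i is assumed to be an interior index.        */
    double resample_1d(const double *px, int i, double frac)
    {
        if (frac >= 0.0)
            return px[i] * (1.0 - frac) + px[i + 1] * frac;
        return px[i] * (1.0 + frac) + px[i - 1] * (-frac);
    }

For the 2.1-pixel example, split_shift yields an integer shift of 2 and a residual of 0.1, matching the behavior described above.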
In particular embodiments, the system may use a bilinear interpolation process to resample the pixel array to determine the LED values. To determine the values for one LED, the system may need to access an unaligned 2×2 block of pixels. This may be accomplished in a single clock by dividing the 64×64 pixel block into four interleaved blocks. One pixel memory unit or block that stores pixels may be used as a reference unit and may have even horizontal (U) and vertical (V) addresses. The other three memory units may store pixels with the other combinations of even and odd (U, V) address values. A single (U, V) address may then be used to compute an unaligned 2×2 block that is accessed by the four memory units. As a result, the tile processor may access a 2×2 block of pixels in a single cycle, regardless of which of the connected pixel array memory units the desired pixels are in or whether they are in two or all four of the memory units.
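As an example and not by way of limitation, the four-way interleave may be sketched as follows: pixels are split into four banks by the parity of their (U, V) addresses, so any unaligned 2×2 block touches each bank exactly once and all four pixels can be read in one clock. The bank layout and names are illustrative assumptions.

    #include <stdint.h>

    typedef struct {
        uint16_t *bank[2][2];  /* indexed [v & 1][u & 1], each W/2 x H/2 */
        int half_w;            /* W / 2                                  */
    } interleaved_mem_t;

    /* Fetch the 2x2 block whose top-left corner is (u, v). The pixels
     * (u,v), (u+1,v), (u,v+1), (u+1,v+1) have four distinct parity
     * combinations, so each read hits a different bank.                */
    void fetch_2x2(const interleaved_mem_t *m, int u, int v,
                   uint16_t out[2][2])
    {
        for (int dv = 0; dv < 2; dv++) {
            for (int du = 0; du < 2; du++) {
                int uu = u + du, vv = v + dv;
                const uint16_t *bank = m->bank[vv & 1][uu & 1];
                out[dv][du] = bank[(vv >> 1) * m->half_w + (uu >> 1)];
            }
        }
    }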
In particular embodiments, the system may have pixel memory units with pre-determined sizes to arrange for no more than four tile processors to connect to each memory unit. In that case, on each clock, one fourth of the tile processors may read from the attached memories, so that it takes four clocks to read the pixel data that is needed to determine the value for one LED. In particular embodiments, the system may have about 1000 LEDs per tile processor, 100 subframes per composited/rendered frame, and 100 rendering frames per second. The system may need 40M operations per second for the interpolation process. When the system runs at 200 MHz, reading pixels for the LEDs may need 20% of the processing time. In particular embodiments, the system may also support interpolation on 4×4 blocks of pixels. With the memory design as described above, the system may need 16 accesses per tile processor. This may increase the requirement to 160M accesses per second, or 80% of the processing time when the clock rate is 200 MHz.
In particular embodiments, the system may support changes of view direction while the display frame is being output to the LED array. At the nominal peak head rotation rate of 300 degrees per second, a nominal pixel array size of 3000×3000 pixels, a 90-degree field of view, and 100 fps, the view may change by 3 degrees per frame. As a result, the pixels may shift by up to 100 positions over the course of a display frame. Building the pixel array as an explicit shifter may be expensive. The shift may need to occur 10,000 times per second (100 fps rendering rate and 100 subframes per rendered frame). With an array that is 2,560 LEDs wide, shifting a single line by one position may require 2,560 reads and 2,560 writes, or 256,000 reads and writes per rendered frame. Instead, in particular embodiments, the memory may be built in blocks of a size of, for example, 64×64. This may allow 63 pixels per row to be accessed at offset positions within the block. Only the pixels at the edges of each block may need to be shifted to another block, reducing the number of reads and writes by a factor of 64. As a result, it may only take about 4,000 reads and 4,000 writes per rendered frame to shift each row of the array.
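As an example and not by way of limitation, the block-based shifting described above may be sketched in one dimension as follows: each 64-pixel block is read through a per-block rotation offset, so shifting a row costs one boundary copy per block rather than 64 moves. The structure and names are illustrative assumptions.

    #include <stdint.h>

    #define BLK 64

    /* One 64-pixel block of a row; logical pixel i lives at
     * data[(offset + i) % BLK].                                       */
    typedef struct {
        uint16_t data[BLK];
        int offset;
    } row_block_t;

    /* Shift one row left by one position: only the pixel crossing each
     * block boundary is physically copied; the other 63 positions are
     * absorbed by bumping the block's offset.                         */
    void shift_row_left_one(row_block_t *blocks, int n_blocks)
    {
        for (int b = 0; b < n_blocks; b++) {
            if (b + 1 < n_blocks) {
                const row_block_t *next = &blocks[b + 1];
                /* old logical pixel 0 of the next block becomes this
                 * block's new last logical pixel                      */
                blocks[b].data[blocks[b].offset] = next->data[next->offset];
            } /* the last block's final pixel would come from the
                 overflow region in a full implementation              */
            blocks[b].offset = (blocks[b].offset + 1) % BLK;
        }
    }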
In particular embodiments, the display frame may be updated at a nominal rate of 100 fps. This may occur in parallel with displaying the previous frame, so that throughout the frame the LEDs may display a mix of data from the prior and current frames. In particular embodiments, the system may use an interleave of old and new frames for translation and torsion. Translation and torsion may account for all kinds of head movement except changes in the pitch (vertical) and yaw (horizontal) of the view angle. The system may ensure that the display frame can be updated while accounting for changes in pitch and yaw during the frame.
FIG. 7 illustrates an example method 700 of adjusting display content according to the user's view directions. The method may begin at step 710, where a computing system may store, in a memory unit, a first array of pixel values to represent a scene as viewed from a viewpoint along a first viewing direction. The first array of pixel values may correspond to a number of positions on a view plane. The positions may be uniformly distributed in an angle space. At step 720, the system may determine, based on sensor data, an angular displacement from the first viewing direction to a second viewing direction. At step 730, the system may determine a second array of pixel values to represent the scene as viewed from the viewpoint along the second viewing direction. The second array of pixel values may be determined by: (1) shifting a portion of the first array of pixel values in the memory unit based on the angular displacement, or (2) reading a portion of the first array of pixel values from the memory unit using an address offset determined based on the angular displacement. At step 740, the system may output the second array of pixel values to a display.
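As an example and not by way of limitation, method 700 under the address-offset option of step 730 may be sketched in C as follows; the helper and parameter names are illustrative assumptions rather than limitations of the method.

    #include <stdint.h>

    /* Steps 710-740 with the frame buffer read through an address
     * offset per equation (1). shift_x/shift_y are the angular
     * displacement of step 720 already converted to whole pixel units
     * of yaw and pitch; w and h include the overflow region.          */
    void generate_subframe(const uint16_t *frame_buf,     /* step 710  */
                           unsigned w, unsigned h,
                           int shift_x, int shift_y,      /* step 720  */
                           uint16_t *subframe,
                           unsigned out_w, unsigned out_h)
    {
        unsigned xs = (unsigned)(((shift_x % (int)w) + (int)w) % (int)w);
        unsigned ys = (unsigned)(((shift_y % (int)h) + (int)h) % (int)h);
        for (unsigned y = 0; y < out_h; y++)              /* step 730  */
            for (unsigned x = 0; x < out_w; x++)
                subframe[y * out_w + x] =
                    frame_buf[((x + xs) % w) + ((y + ys) % h) * w];
        /* step 740: the subframe is ready to output to the display    */
    }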
In particular embodiments, the pixels on the respective view plane may correspond to pixel values that are computed to represent a scene to be displayed, and the pixel positions on the view plane may be the intersecting positions as determined using a ray casting process; the pixel positions may not be aligned to the actual LED positions in the LED array. This may be true for the pixels on the view plane both before and after the rotation of the view plane. In particular embodiments, the system may resample the pixel values in the pixel array to determine the LED values for the LED array. In particular embodiments, the LED values may include any suitable parameters for the driving signals for the LEDs including, for example, but not limited to, a current level, a voltage level, a duty cycle, a display period duration, etc. The system may interpolate pixel values in the pixel array to produce LED values based on the relative positions of the pixels and the LEDs. The system may specify the positions of the LEDs in angle space. In particular embodiments, the system may use the tile processor to access the pixel data in pixel memory units, shift the pixels according to the changes of the user's view angles, and resample the pixel array to determine the corresponding LED brightness values. In particular embodiments, the system may use a bilinear interpolation process to resample the pixel array to determine the LED values.
In particular embodiments, the first array of pixel values may be determined by casting rays from the viewpoint to the scene. The positions on the view plane may correspond to intersections of the cast rays and the view plane. The cast rays may be uniformly distributed in the angle space, with each two adjacent rays separated by a same angle equal to an angle unit. In particular embodiments, the angular displacement may be equal to an integer multiple of the angle unit. In particular embodiments, the second array of pixel values may be determined by shifting the portion of the first array of pixel values in the memory unit by the same integer number of pixel units. In particular embodiments, the address offset may correspond to the integer number of pixel units. In particular embodiments, the angular displacement may be equal to an integer multiple of the angle unit plus a fraction of the angle unit. In particular embodiments, the second array of pixel values may be determined by: shifting the portion of the first array of pixel values in the memory unit by the integer number of pixel units; and sampling the second array of pixel values with a position shift equal to the fraction of a pixel unit. In particular embodiments, the address offset for reading the first array of pixel values from the memory unit may be determined based on the integer number of pixel units. The system may sample the second array of pixel values with a position shift equal to the fraction of a pixel unit. In particular embodiments, the display may have an array of light-emitting elements. Outputting the second array of pixel values to the display may include: sampling the second array of pixel values based on LED positions of the array of light-emitting elements; determining driving parameters for the array of light-emitting elements based on the sampling results; and outputting the driving parameters to the array of light-emitting elements. In particular embodiments, the driving parameters for the array of light-emitting elements may include a driving current, a driving voltage, and a duty cycle.
In particular embodiments, the system may determine a distortion mesh for distortions caused by one or more optical components. The LED positions may be adjusted based on the distortion mesh. The sampling results may be corrected for the distortions caused by the one or more optical components. In particular embodiments, the first memory unit may be located on a component of the display comprising an array of light-emitting elements. In particular embodiments, the memory unit storing the first array of pixel values may be integrated with a display engine in communication with and may be remote (e.g., not in the same physical component) to the display. In particular embodiments, the array of light-emitting elements may be uniformly distributed on a display panel of the display. In particular embodiments, the display may provide a foveation ratio of approximately 2:1 from a center of the display to edges of the display. In particular embodiments, the first array of pixel values may correspond to a scene area that is larger than an actually displayed scene area on the display. In particular embodiments, the second array of pixel values may correspond to a subframe to represent the scene. The subframe may be generated at a subframe rate higher than a mainframe rate. In particular embodiments, the memory unit may have extra storage space to catch overflow pixel values. One or more pixel values in the first array of pixel values may be shifted to the extra storage space of the memory unit.
Particular embodiments may repeat one or more steps of the method of
This disclosure contemplates any suitable number of computer systems 800. This disclosure contemplates computer system 800 taking any suitable physical form. As example and not by way of limitation, computer system 800 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these. Where appropriate, computer system 800 may include one or more computer systems 800; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 800 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems 800 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 800 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
In particular embodiments, computer system 800 includes a processor 802, memory 804, storage 806, an input/output (I/O) interface 808, a communication interface 810, and a bus 812. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
In particular embodiments, processor 802 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 802 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 804, or storage 806; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 804, or storage 806. In particular embodiments, processor 802 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 802 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor 802 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 804 or storage 806, and the instruction caches may speed up retrieval of those instructions by processor 802. Data in the data caches may be copies of data in memory 804 or storage 806 for instructions executing at processor 802 to operate on; the results of previous instructions executed at processor 802 for access by subsequent instructions executing at processor 802 or for writing to memory 804 or storage 806; or other suitable data. The data caches may speed up read or write operations by processor 802. The TLBs may speed up virtual-address translation for processor 802. In particular embodiments, processor 802 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor 802 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 802 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 802. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.
In particular embodiments, memory 804 includes main memory for storing instructions for processor 802 to execute or data for processor 802 to operate on. As an example and not by way of limitation, computer system 800 may load instructions from storage 806 or another source (such as, for example, another computer system 800) to memory 804. Processor 802 may then load the instructions from memory 804 to an internal register or internal cache. To execute the instructions, processor 802 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor 802 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor 802 may then write one or more of those results to memory 804. In particular embodiments, processor 802 executes only instructions in one or more internal registers or internal caches or in memory 804 (as opposed to storage 806 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 804 (as opposed to storage 806 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 802 to memory 804. Bus 812 may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor 802 and memory 804 and facilitate accesses to memory 804 requested by processor 802. In particular embodiments, memory 804 includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory 804 may include one or more memories 804, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory.
In particular embodiments, storage 806 includes mass storage for data or instructions. As an example and not by way of limitation, storage 806 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage 806 may include removable or non-removable (or fixed) media, where appropriate. Storage 806 may be internal or external to computer system 800, where appropriate. In particular embodiments, storage 806 is non-volatile, solid-state memory. In particular embodiments, storage 806 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage 806 taking any suitable physical form. Storage 806 may include one or more storage control units facilitating communication between processor 802 and storage 806, where appropriate. Where appropriate, storage 806 may include one or more storages 806. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage.
In particular embodiments, I/O interface 808 includes hardware, software, or both, providing one or more interfaces for communication between computer system 800 and one or more I/O devices. Computer system 800 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system 800. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 808 for them. Where appropriate, I/O interface 808 may include one or more device or software drivers enabling processor 802 to drive one or more of these I/O devices. I/O interface 808 may include one or more I/O interfaces 808, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface.
In particular embodiments, communication interface 810 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 800 and one or more other computer systems 800 or one or more networks. As an example and not by way of limitation, communication interface 810 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface 810 for it. As an example and not by way of limitation, computer system 800 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system 800 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. Computer system 800 may include any suitable communication interface 810 for any of these networks, where appropriate. Communication interface 810 may include one or more communication interfaces 810, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface.
In particular embodiments, bus 812 includes hardware, software, or both coupling components of computer system 800 to each other. As an example and not by way of limitation, bus 812 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus 812 may include one or more buses 812, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect.
Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.
Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context.
The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages.