This disclosure generally relates to virtual reality, augmented reality, or mixed reality.
The artificial reality system 9100 may further include a computing system 9108. The computing system 9108 may be a stand-alone unit that is physically separate from the HMD, or it may be integrated with the HMD. In embodiments where the computing system 9108 is a separate unit, it may be communicatively coupled to the HMD via a wireless or wired link. The computing system 9108 may be a high-performance device, such as a desktop or laptop, or a resource-limited device, such as a mobile phone. A high-performance device may have a dedicated GPU and a high-capacity or constant power source. A resource-limited device, on the other hand, may not have a GPU and may have limited battery capacity. As such, the algorithms that could be practically used by an artificial reality system 9100 depend on the capabilities of its computing system 9108.
The augmented reality system 1000 may further include a controller comprising a trackpad and one or more buttons. The controller may receive inputs from users and relay the inputs to the computing system 1020. The controller may also provide haptic feedback to users. The computing system 1020 may be connected to the AR HMD 1010 and the controller through cables or wireless connections. The computing system 1020 may control the AR HMD 1010 and the controller to provide the augmented reality content to and receive inputs from users. The computing system 1020 may be a standalone host computer system, an on-board computer system integrated with the AR HMD 1010, a mobile device, or any other hardware platform capable of providing artificial reality content to and receiving inputs from users.
The HMD may have external-facing cameras, such as the two forward-facing cameras 9105A and 9105B shown in
In particular embodiments, the pose (e.g., position and orientation) of the HMD within the environment may be needed. For example, in order to render an appropriate display for the user 9102 while he is moving about in a virtual or augmented reality environment, the system 9100 would need to determine his position and orientation at any moment. Based on the pose of the HMD, the system 9100 may further determine the viewpoint of either of the cameras 9105A and 9105B or either of the user's eyes. In particular embodiments, the HMD may be equipped with inertial-measurement units (“IMU”). The data generated by the IMU, along with the stereo imagery captured by the external-facing cameras 9105A-B, allow the system 9100 to compute the pose of the HMD using, for example, SLAM (simultaneous localization and mapping) or other suitable techniques.
Traditional three-dimensional artificial reality environment reconstruction techniques and algorithms may integrate depth information about the real environment gradually over time to create a 3D representation (e.g., voxel grid, point cloud, or mesh) of the world, which can be used to re-render the environment as the user perspective changes. However, these methods are inherently too slow for certain applications, for example augmented reality applications that must quickly respond to changes in a user's pose or objects in the environment that result in rapidly changing viewpoints. For example, users may suddenly move their heads around when viewing a scene and the rendered image may need to respond immediately to the changed perspective by adjusting the view of one or more virtual representations presented to the user. Moreover, traditional artificial reality environment reconstruction techniques and algorithms may require expensive computing resources that limit the ability to recreate the artificial reality environment using components that are compact enough to be housed within an HMD, especially an AR HMD with a small form factor.
One solution to these problems involves generating and resampling "surfaces." A surface may be one or more primitives rendered by a display engine, such as quadrilaterals or contours, defined in 3D space, that have corresponding textures generated based on the main frame rendered by the application. In particular embodiments, one or more surfaces may represent a particular view of an object within the artificial reality environment, where a surface corresponds to one or more objects that are expected to move/translate, skew, scale, distort, or otherwise change in appearance together, as one unit, as a result of a change in perspective. This method may provide an efficient shortcut for adjusting a view in response to head movements of the user and/or one or more movements of the objects, and may significantly reduce the required processing power by rendering at a lower frame rate (e.g., 60 Hz, or once every 1/60th of a second) and using the surfaces to adjust or interpolate the view to account for rapid movements by the user, thus ensuring that the view is updated quickly enough to sufficiently reduce latency. This may further conserve computing resources, which may be important for AR systems that utilize less-powerful components that are compact enough to be housed within an HMD, especially an AR HMD with a small form factor. Alternatively, the computing system may be capable of rendering the surfaces at a rate that matches the display rate of the HMD (e.g., 4200 Hz, once every 1/4200th of a second). This prevents the user from perceiving latency and thereby avoids or sufficiently reduces sensory dissonance. Methods for generating and modifying representations of objects in an augmented-reality or virtual reality environment are disclosed in U.S. patent application Ser. No. 16/4586,4590, filed 27 Sep. 2019, which is incorporated by reference.
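For illustration, the following sketch shows how such a surface might be represented and cheaply re-posed for a new viewpoint. The field and function names are assumptions for this example, not the actual display-engine interfaces.

```python
# Minimal sketch of the "surface" primitive and the resampling shortcut: a textured
# quad posed in 3D is re-projected for a new head pose by transforming only its four
# corners, while its texture (rendered at the lower application frame rate) is reused.
from dataclasses import dataclass
import numpy as np

@dataclass
class Surface:
    texels: np.ndarray          # (H, W, 4) RGBA texture produced from the application's main frame
    corners_world: np.ndarray   # (4, 3) corner positions of the quad in world space

def reproject_corners(surface: Surface, view_proj: np.ndarray) -> np.ndarray:
    """Project the quad's corners with the latest 4x4 view-projection matrix.
    Only this cheap transform runs at the display rate; the texture is reused."""
    homog = np.hstack([surface.corners_world, np.ones((4, 1))])  # homogeneous coordinates
    clip = homog @ view_proj.T                                   # project into clip space
    return clip[:, :2] / clip[:, 3:4]                            # normalized device coordinates
```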
These two-dimensional surfaces may be used to represent one or more virtual or physical objects in the artificial reality environment as they would appear to a user from a particular viewpoint, and as such, may account for the user's perspective of the one or more objects from the viewpoint at a particular time. A two-dimensional occlusion surface's texture data may be made up of one or more subparts, referred to herein as “texels.” These texels may be blocks (e.g., rectangular blocks) that come together to create a texel array that makes up a two-dimensional occlusion surface. As an example and not by way of limitation, they may be contiguous blocks that make up a surface. For illustrative purposes, a texel of a surface may be conceptualized as being analogous to a pixel of an image. A two-dimensional occlusion surface may be generated by any suitable device. In particular embodiments, the surface may encode for visual information (RGBA) (e.g., as a texture) for one or more of its texels. The alpha component may be a value that specifies a level of transparency that is to be accorded to a texel. As an example and not by way of limitation, an alpha value of 0 may indicate that a texel is fully transparent, an alpha value of 1 may indicate that a texel is opaque, and alpha values in between may indicate a transparency level that is in between (the exact transparency level being determined by the value). A two-dimensional surface could represent virtual objects or physical objects. A surface representing a virtual object could be a snapshot of the virtual object as viewed from a particular viewpoint. A surface representing a physical object could be used as an occlusion surface for determining whether any virtual object or surface representing the virtual object is occluded by the physical object. If an opaque occlusion surface is in front of a surface representing a virtual object, the pixels at which the opaque occlusion surface appears would be turned off to allow the user to see through the display. The end effect is that the user would see the physical object through the display, rather than the virtual object behind the physical object.
A two-dimensional occlusion surface may support any suitable image format. To conserve resources, the two-dimensional occlusion surface may be transmitted as an alpha-mask that represents the occlusion and blending (e.g., transparency) of each pixel in the segmentation mask. The alpha-mask may be a low-resolution texture (e.g., 64×64 pixels), which reduces power consumption, provides for fuzzy (but aesthetically pleasing) borders when overlaid on an occluding object of interest 4210, reduces latency due to the smaller resolution, and provides for better scalability.
An occlusion surface could be generated based on sensor measurements of the real-world environment. For example, depth measurements could be used to generate voxels with occupancy values or a 3D mesh. However, simply generating surfaces by projecting the occupied voxels or vertices of triangles of a three-dimensional model of the real environment onto an image plane associated with a viewpoint and rasterizing the pixels to determine whether each pixel is part of an object may result in aliasing artifacts or edges in the surface when the occlusion surface is viewed from different angles. Additionally, unless the surface is generated at a high resolution, magnifying (e.g., zooming in on) an edge of a surface may result in an aliased, jagged line that is displeasing to the user. As such, it may be more visually appealing to the user to represent the contour of the edge or border of each physical object represented in the surface as anti-aliased, fuzzy boundaries. However, it is often computationally expensive to generate a fuzzy anti-aliasing region at the edges of a surface that can be utilized for transitional blending.
One such technique to obtain aesthetically pleasing surfaces while conserving computing resources is to generate an occlusion surface using a signed distance field (SDF) and subsequently use the SDF information to provide transparency information (alpha) (e.g., as a texture) for one or more of its texels. This technique permits magnification without noticeable curvature or distortion of the edge or boundary. Generating a surface representing the view of an artificial reality environment using this method permits smooth edges or borders of each physical object represented in a surface to be displayed to the user of an artificial reality system without the computational costs associated with traditional techniques. This results in improved smoothness and aesthetics of the generated artificial reality environment when compared to a traditional surface that may result in jagged, aliased edges. In addition, the method described herein allows the boundaries of the occlusion mask encoded within an occlusion surface to be adjusted (e.g., the occlusion mask could be made slightly larger than where the physical object appears in the surface plane). The alpha values of the boundaries may also be adjusted to provide an outward fading effect.
At a high level, a surface is generated by utilizing a depth map of the environment, which permits the computing system to determine a distance from a particular coordinate in a real environment to the edge of a particular surface. Based on these distances, the computing system may generate SDF values for each of the texels, which may then be used to generate visual information (e.g., alpha values) for each of the texels in an array. This effectively generates a fuzzy, smoothed blend region at the edges or borders of the surface without sacrificing significant computing resources. This anti-aliased edge may provide a realistic and more immersive experience when applied to surfaces that are displayed to a user of an artificial reality environment.
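As a concrete illustration of this pipeline, the sketch below starts from a binary occupancy mask indicating where a physical object appears on the surface plane, derives per-texel signed distances, and maps them to alpha values with a smooth falloff at the border. The mask input, the one-texel falloff, and the use of SciPy's distance transform are assumptions made for this example.

```python
# Per-texel signed distances to the object border are converted into alpha values
# that fade out smoothly at the edge of the occlusion surface.
import numpy as np
from scipy.ndimage import distance_transform_edt

def sdf_from_mask(occupied: np.ndarray) -> np.ndarray:
    """Signed distance (in texels) to the object border: negative inside, positive outside."""
    outside = distance_transform_edt(~occupied)  # distance from empty texels to nearest occupied texel
    inside = distance_transform_edt(occupied)    # distance from occupied texels to nearest empty texel
    return outside - inside

def alpha_from_sdf(sdf: np.ndarray, falloff: float = 1.0) -> np.ndarray:
    """Map SDF values to alpha: opaque well inside the object, transparent well outside,
    with a smooth ramp of about `falloff` texels at the border."""
    return np.clip(0.5 - sdf / falloff, 0.0, 1.0)

# Example: a crude circular "object" rasterized into a 64x64 occlusion surface.
yy, xx = np.mgrid[:64, :64]
mask = (xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2
alpha = alpha_from_sdf(sdf_from_mask(mask))
```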
In particular embodiments, a computing system associated with an artificial reality system may receive one or more depth measurements of a real environment. In particular embodiments, the one or more depth measurements may be determined based on sensor or image data (e.g., images captured by one or more cameras worn by the user or connected to a head-mounted display, LIDAR, pre-generated stored depth maps of a real environment, etc.). In particular embodiments the depth measurements may be sparse depth measurements. Using this image or sensor data, the computing system may detect one or more physical objects (e.g., a table, chair, etc.) in the real world.
The computing system may further generate a three-dimensional depth map, point cloud, or similar volume representing the real environment based on the one or more depth measurements. In particular embodiments the depth map may be a three-dimensional grid space (with each cubic volume of the grid being a predetermined size, e.g., 5 cm×5 cm×5 cm). The number of observed depth measurements encapsulated within each cell of the depth map reflects the likelihood of that cell being occupied by a physical object. Each cell coordinate in the depth map may include distance information, for example an SDF value, based on the depth measurements of the real environment. As an example and not by way of limitation, each coordinate in the depth map may comprise a distance from the closest occupied cell in the depth map. In particular embodiments the data for each coordinate may comprise a distance to the closest physical object in the real environment from a particular coordinate in the depth map. In particular embodiments this depth map may be generated based on known or stored properties of the real environment, without the need for receiving images of the real environment. For example, the user may store information related to a particular room, such that the computing system can access a pre-stored depth map of a particular environment. In particular embodiments, the computing system may further generate a 3D mesh of the environment based on the depth map or point cloud. Although this disclosure describes certain techniques for generating or accessing a depth map or point cloud of a real environment, it should be appreciated that any suitable techniques for determining depth measurements utilizing image data or sensor data from an artificial reality system can be utilized.
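A hedged sketch of how such a volumetric depth map might be built from depth measurements is shown below; the grid extent, the hit-count occupancy test, and the helper names are illustrative assumptions (only the 5 cm cell size comes from the example above).

```python
# Depth points are binned into a coarse voxel grid and each cell stores its distance
# (in meters) to the nearest occupied cell.
import numpy as np
from scipy.ndimage import distance_transform_edt

CELL = 0.05  # meters per voxel, per the 5 cm x 5 cm x 5 cm example
GRID = 64    # illustrative grid extent (64 cells ~ 3.2 m per axis)

def build_depth_map(points_xyz: np.ndarray, min_hits: int = 3) -> np.ndarray:
    """Return per-cell distance to the nearest occupied cell. A cell counts as occupied
    once it has accumulated `min_hits` depth samples (a simple occupancy likelihood).
    Points are assumed to be expressed in a non-negative, grid-local frame."""
    counts = np.zeros((GRID, GRID, GRID), dtype=np.int32)
    idx = np.clip((points_xyz / CELL).astype(int), 0, GRID - 1)
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    occupied = counts >= min_hits
    return distance_transform_edt(~occupied) * CELL
```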
In particular embodiments, the computing system may continuously update the depth map as the user moves throughout the real environment. For example, the computing system may continuously receive image data or sensor data that indicates that the current depth map is inaccurate or outdated, due to for example the relocation of one or more physical objects in the real environment since the depth map was generated or last updated.
In particular embodiments the computing system may generate an occlusion surface based on the three-dimensional depth map and representing a current viewpoint of a user of the artificial reality environment. A surface may be a two-dimensional surface generated by the computing system using sensor data or image data captured by one or more cameras or sensors associated with the artificial reality system (e.g., a received image captured by the one or more cameras at a particular time). A “viewpoint” of an artificial reality environment may refer to a user perspective of the artificial reality environment, which may, for example, be determined based on a current position and orientation of an HMD. A surface's texture data may be made up of one or more subparts, referred to herein as “texels.” These texels may be blocks (e.g., rectangular blocks) that come together to create a texel array that makes up a surface. As an example and not by way of limitation, they may be contiguous blocks that make up a surface. For illustrative purposes, a texel of a surface may be conceptualized as being analogous to a pixel of an image. A surface may be generated by any suitable device. In particular embodiments, the surface may encode for information for one or more of its texels.
Particular embodiments described herein support a technique that is termed "beam racing." In the graphics rendering pipeline, each primitive is rendered in memory before the scene is rasterized. In other words, pixels in the final scene are generated one by one after objects in the scene have been rendered. The pixels are displayed together and assumed to represent the scene at a particular instant in time. However, since it takes time to generate the pixels, there may be a significant time lag (e.g., 11 milliseconds) between the time when objects are rendered and the time when the pixels are displayed. In conventional display contexts (e.g., movies, animation, etc.), the lag may not be noticeable. This is not the case in the VR/AR context, however. In VR/AR, a user expects immediate feedback between movement and visual perception. For example, as the user turns his head, he expects the scene to change at that instant and the current display to reflect his current point of view. Any delays, such as the time lag for generating and outputting pixels after rendering, may negatively affect the user experience. For example, if at time t0 the user is standing up, the system may begin to render a scene based on the elevated perspective of the user. However, by the time the pixels of the scene are output at time t0 + 11 ms, the user may be sitting down. Since the user is now expecting to see a scene from a lower vantage point, seeing a scene that does not reflect such expectation would negatively affect the VR experience and may even cause dizziness or nausea.
Particular embodiments reduce the latency between rendering and image display by outputting pixels scan line by scan line, where each line is generated based on renderings made when it is that line's turn to be output. For example, the system may render at time t0 and ray cast line 0 (rather than the whole scene) based on the t0 rendering; render at time t1 and ray cast line 1 based on the t1 rendering; and so on. Since the system is only processing one line at a time (or multiple predetermined lines at a time, but not all the lines together), the delay between render time and pixel-output time becomes much shorter, and the renderings would be based on the latest movement/perspective of the user. As a result, real-time scene changes would be much more reactive. This "beam racing" technique has the potential to significantly reduce the head-movement-to-photon latency. Even significant batching, such as hundreds of lines (hundreds of thousands of pixels), could provide large multiplicative reductions in latency over waiting for the full frame before scan-out. In particular embodiments, the system may schedule rendering and ray casting tasks with respect to the scan-out clock.
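The loop below is a minimal sketch of this scan-line-block scheduling, not the actual hardware pipeline; the callables it takes (pose prediction, ray casting, scan-out, and the scan-out clock) are placeholders for whatever the system provides.

```python
# Each block of scan lines is rendered from the freshest pose prediction available
# just before that block is scanned out ("beam racing").
from typing import Callable, Sequence

def render_frame_beam_racing(
    total_lines: int,
    lines_per_block: int,
    predict_pose: Callable[[float], object],                 # pose predicted for a scan-out time
    raycast_block: Callable[[object, int, int], Sequence],   # (pose, first_line, count) -> pixel lines
    scan_out: Callable[[Sequence, int], None],               # push a block of lines to the display
    scan_out_time: Callable[[int], float],                   # wall-clock time line `first` reaches the display
) -> None:
    """Render and output a frame one block of scan lines at a time, re-predicting
    the head pose immediately before each block so each block reflects the most
    recent sensor data."""
    for first in range(0, total_lines, lines_per_block):
        pose = predict_pose(scan_out_time(first))      # e.g., lines 0-9 use the pose predicted for t0
        block = raycast_block(pose, first, lines_per_block)
        scan_out(block, first)
```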
Referring again to
In particular embodiments, after the system generates lines 1-10, it may proceed to generate lines 11-20, as represented by timeline 2241. The process may begin at time t1, at which time the system may perform visibility tests based on the latest sensor data available at that time. The system may again go through the process of shading and ray casting, and then output lines 11-20 at time t2. Thus, at time t2, the user is presented with lines 11-20 that are generated based on sensory data from time t1. The system may then repeat the process to generate lines 21-30 using the timeline 2242, starting from time t2 and ending at time t3, and then generate lines 31-40 using the timeline 2243, starting from time t3 and ending at time t4. Thus, at time t4, the user is presented with a video frame that includes much more current information (e.g., as early as time t3), compared to the scenario presented by the pipeline 2220, where the user at time t4 is presented with a frame generated based on t0 data.
In particular embodiments, the rendering system may further predict a user's head position/orientation (head pose) to output scenes that match a user's expectations. For example, if the user is in the process of turning, the system may predict that the user would continue to turn in the next frame and begin rendering a scene based on the predicted camera position/orientation. If the latency is large, the system would have to predict farther ahead, which is more difficult and likely more erroneous. If the latency is significantly reduced (e.g., to 1 ms), the system would only need to predict 1 ms ahead. This makes the prediction task much easier and less error-prone.
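For illustration only, a constant-angular-velocity extrapolation such as the one sketched below could serve as the prediction step; real systems may use richer motion models, and the quaternion convention here is an assumption.

```python
# Extrapolate an orientation forward by dt seconds using gyroscope data; shorter
# prediction horizons (e.g., 1 ms vs. 11 ms) accumulate less extrapolation error.
import numpy as np

def predict_orientation(quat: np.ndarray, gyro_rad_s: np.ndarray, dt_s: float) -> np.ndarray:
    """Extrapolate an orientation quaternion (w, x, y, z) forward by dt_s seconds
    assuming constant angular velocity (rad/s, body frame)."""
    angle = np.linalg.norm(gyro_rad_s) * dt_s
    if angle < 1e-9:
        return quat
    axis = gyro_rad_s / np.linalg.norm(gyro_rad_s)
    dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    w1, x1, y1, z1 = quat
    w2, x2, y2, z2 = dq
    # Hamilton product quat * dq
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])
```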
In particular embodiments, the on-board compute unit of an AR/VR headset may receive image assets (e.g., patches or surfaces of images) from a separate compute unit (e.g., a mobile phone, laptop, or any other type of compute unit). The separate compute unit may render the image assets from the perspective of the user's predicted head pose at the time when the first display lines are scanned out. The image assets may then be transmitted to the AR/VR headset. The AR/VR headset is tasked with re-rendering or reprojecting the received image assets onto more up-to-date predictions of the user's head pose for each block of scan lines.
At step 3330, the system may determine a visibility of one or more objects (e.g., the image assets obtained from the separate compute unit) defined within the 3D space by projecting rays based on the predicted head pose to test for intersection with the one or more objects. For example, based on the orientation of the virtual camera in 3D space, the system may project rays into the 3D space to test for intersections with any object that is defined therein. In particular embodiments, the direction of the rays may be based on a focal surface map (or multiple focal surface maps, one per primary color), as described herein. The density of the rays may also be defined by the focal surface map or a separate importance map, as described herein.
At step 3340, the system may generate n lines (e.g., 1, 3, 5, 10, 100 lines, etc.) of pixels based on the determined visibility of the one or more objects. In particular embodiments, the generation of the pixels may be the result of shading and rasterization processes. The n number of lines that is generated may be a subset of the total lines of pixels in the AR/VR display. Continuing the example from above, if the AR/VR display has a total of 40 lines, at this stage the system may generate lines 1-10 of the display. In other words, the number of lines generated may be a subset of the total number of lines.
At step 3350, the system may output the n generated lines of pixels for display by the AR/VR device. As previously described, rather than updating all the lines of the display based on the same virtual camera orientation, the system in particular embodiments may only update a subset, such as lines 1-10.
In particular embodiments, the process of generating and outputting a subset of lines may then be repeated for the next subset of lines. For example, at step 3310, the system may obtain the latest sensor data from the AR/VR device (e.g., the sensor data associated with time t1). At step 3320, the system may again predict a second head pose of the user in the 3D space at the time when the next subset of lines are displayed. At step 3330, the system may then proceed to determine a visibility of any objects defined within the 3D space by projecting rays based on the second head pose to test for intersection with the objects. At step 3340, the system may generate another n lines of pixels (e.g., lines 11-20) based on the determined second visibility of the one or more objects. At step 3350, the system may output the n lines of pixels for display by the virtual reality device. As shown in
The steps illustrated in
In
In particular embodiments, the number of lines generated per iteration may be equal to a predetermined number. For example, n may be preset to be 1, 5, 10, 20, etc. In particular embodiments, the graphics system may predefine this number. Alternatively or additionally, the graphics system may allow an application to specify the number of lines that should be generated per iteration. For example, an application requesting the graphics system to generate a scene may specify the number of lines that it wishes to generate per iteration. The application may communicate this information to the graphics system through an API, for example.
Particular embodiments may repeat one or more steps of the method of
At step 4410 of
At step 4420 of
At step 4430 of
At step 7010 of
At step 7030 of
At step 7040 of
At step 7050 of
At step 7060 of
At step 7070 of
According to
Modern LED visual displays are composed of a large number of LEDs. The LEDs within the display may degrade at different rates because, as previously mentioned, LEDs degrade based on usage. Depending on the type and usage of the visual display, certain regions of the display may experience greater usage compared to other regions. For example, a particular visual display, among other things, may contain a tool bar at one of the borders and display video content towards the center of the display. On this particular visual display, LED usage in displaying the tool bar may be different compared to LED usage in displaying video content. Therefore, the aforementioned LEDs may degrade at various rates because of their potentially different usage.
Since degraded LEDs become less bright, the performance of the visual display will decline unless the visual display system makes adjustments. In order to increase the brightness of degraded LEDs, the visual display system may increase electrical inputs such as the intensity of current or pulse width supplied to the LEDs. Since visual display systems, if programmed properly, have the ability to compensate for degraded LEDs within their display, designers of LED visual display systems may be interested in understanding comparative LED degradation within the various regions of their visual display. The visual display system may be able to automatically provide adjustments in electrical outputs to degraded LEDs in order to prevent significant decreases in performance of the visual display.
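One way such automatic adjustment might be implemented is per-pixel scaling of the output image, as sketched below; representing degradation as a per-pixel remaining-output fraction, the clamped inverse scaling, and the specific threshold value are assumptions for this example.

```python
# Boost pixel values driving degraded LEDs so perceived brightness stays level.
import numpy as np

def compensate_image(
    image: np.ndarray,             # (H, W) or (H, W, 3) grayscale values in [0, 1]
    remaining_output: np.ndarray,  # (H, W) fraction of original LED brightness, e.g. 0.75
    threshold: float = 0.75,       # only compensate LEDs degraded below this fraction
) -> np.ndarray:
    """Apply per-pixel scaling factors that invert the estimated degradation."""
    scale = np.where(remaining_output < threshold,
                     1.0 / np.maximum(remaining_output, 1e-3),  # avoid division by zero
                     1.0)
    if image.ndim == 3:
        scale = scale[..., None]  # broadcast over color channels
    return np.clip(image * scale, 0.0, 1.0)
```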
Through research and development, designers may observe how much a particular type of LED will degrade over time and how much the electrical inputs need to be adjusted in order to compensate for the degraded output. Once developers understand how much the electrical inputs need to be adjusted, they can program the visual display system to accurately compensate for LED degradation and avoid declines in performance of the visual display. Therefore, accurately measuring and/or predicting the amount of LED usage across all regions of the visual display is critical to designing accurate compensation mechanisms.
The challenge of measuring or predicting LED usage may be exacerbated when applied to certain types of visual display systems where regions of LEDs within the display may experience vastly different usage rates compared to other regions. For example, a visual display system in the form of a wearable heads up display, such as but not limited to wearable augmented reality glasses, may be a particularly challenging system. Within such a display, for example, at a particular time, the majority of LEDs within the display may not be in use as the user is simply looking through the display at the real world. In such instances, only peripheral regions of the display projecting a menu bar may be in use whereas LEDs within the center regions of the display may not be in use. But, at another point in time, the user may be viewing world-locked content that could appear anywhere within the display, depending on the orientation of the user. Such visual displays with varied usage rates across regions within the display may be contrasted with displays such as a smart phone screen, where usage is more constant as the entire screen is often actively displaying an object or image.
While it is possible to measure the performance of every single LED within a display, doing so is not efficient in terms of either cost or data consumption. A solution is to subsample the visual display and measure usage for a plurality of tiled regions of LEDs within the display. A tiled region is a small section of LEDs on a visual display (e.g., 16×16 pixels, 32×32 pixels, etc.). The benefits of sampling the greyscale only for tiled regions of a display, rather than for the entire display, may include, but are not limited to, power and data usage savings while still being able to estimate LED degradation.
In order to measure LED greyscale, there needs to be one or more sensors that can measure the brightness of the display. Each sensor may be located in front of the display (i.e., on the side from which a user will view the display), behind the display (opposite the side from which a user will view the display), or embedded within the display. The one or more sensors may measure the current and pulse-width modulation supplied to the tiled regions to determine the brightness of the selected tiled regions within the visual display. The measured brightness of the sampled tiled regions can, over time, represent a usage pattern of portions of the display. To estimate a usage pattern of the entire display, the display system could match the measured usage pattern against pre-determined usage patterns (or heat maps) of the whole display (e.g., the manufacturer could measure a variety of usage patterns before shipping the system to the end users). The pre-determined usage pattern that best matches the measured usage patterns at the selected tiled regions could be used to estimate the LEDs' degradation based on known characteristics of the display. From there, the display system may use the LED degradation profile corresponding to the pre-determined usage pattern to compensate for the degraded LEDs. This may be done by applying per-pixel scaling factors to compensate for the degraded LEDs. A key step in this invention is to choose effective tiled regions of LEDs within the display for sampling. The tiled regions may be fixed or, preferably, may be relocated within the display based on user activity.
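A minimal sketch of this matching step is shown below, assuming the pre-determined heat maps are stored as per-tile arrays and that a least-squares comparison at the sampled tiles is acceptable; the names are illustrative.

```python
# Accumulated brightness at the sampled tiled regions is compared against the same
# regions of pre-measured whole-display usage patterns; the closest pattern is selected.
import numpy as np

def match_usage_pattern(
    sampled: dict[tuple[int, int], float],  # {(tile_row, tile_col): accumulated usage}
    candidate_patterns: list[np.ndarray],   # pre-measured whole-display heat maps (per tile)
) -> np.ndarray:
    """Return the pre-determined heat map whose values at the sampled tiles are
    closest (least-squares) to the measurements."""
    coords = list(sampled.keys())
    measured = np.array([sampled[c] for c in coords])
    errors = [
        np.sum((np.array([p[r, c] for r, c in coords]) - measured) ** 2)
        for p in candidate_patterns
    ]
    return candidate_patterns[int(np.argmin(errors))]
```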
In one embodiment, the plurality of tiled regions to be sampled may be preselected and fixed during the production of the visual display system. In this embodiment, the sampled tiled regions may not be moved, so the sensor will record the brightness output from the same regions of the visual display for the entire life of the display.
In another embodiment, there may be a plurality of preselected tiled regions for sampling, but one or all of the tiled regions may be manually relocated within the visual display. For example, if the user feels that they are frequently using a certain region of the visual display, the user may decide to place sampled tiled regions within that certain region of the display. In another embodiment, the regions sampled could be random.
In a preferred embodiment, the locations of the tiled regions may be automatically relocated by the visual display system based on usage. The system's goal may be to select tiled regions that are most frequently used by the display system. This would be advantageous because, as previously mentioned, the most used LEDs within the display will experience the most degradation, and the system will have to compensate the current intensity or pulse width sent to those LEDs to avoid performance decreases of the visual display. For example and not by way of limitation, when the visual display system is a pair of wearable augmented reality glasses, if the user engages in activity where the center of the visual display is mostly utilized, such as when watching a video, the visual display system may select a plurality of tiled regions to sample which are located more toward the center of the visual display. When a user engages in such activity, the display system may recognize that the user is utilizing a head-locked usage pattern since the displayed object is locked to a fixed position relative to the visual display. Whenever the display system is utilized for head-locked display, the system may select a certain plurality of tiled regions for brightness sampling.
For example and not by way of limitation, when the visual display system is a pair of wearable augmented reality glasses, if the user engages in activity where more peripheral regions of the visual display may be utilized, such as playing an augmented reality game where objects or characters may appear as if in the real world, the visual display system may select a plurality of tiled regions to sample which are located more toward the periphery of the display. When a user engages in such activity, the display system may recognize that the user is utilizing a world-locked usage pattern since the displayed object is locked to a fixed position relative to the real world and not the visual display. In a world-locked scenario, when a user moves their display, the object or image will move relative to the display because it is fixed in location relative to the world. Whenever the display system is utilized for world-locked display, the system may select a certain plurality of tiled regions for brightness sampling.
For example and not by way of limitation, the location of the plurality of tiled regions for sampling may be adjusted by the display system based on user preferences or options elected by the user. One such situation may occur when, for example, the user decides to move the location of a visible menu bar within the display. When this occurs, the system may decide to sample one or more tiled regions within the menu bar, which would be useful since the menu bar will cause the same LEDs to be used repeatedly, causing degradation.
At step 8003, one or more sensors are used to measure the brightness or greyscale of the LEDs within the tiled regions. The measurements may be performed based on the electrical current and pulse-width modulation used to illuminate the LEDs. The measurements may be aggregated over time. At step 8004, the system computer determines a usage pattern of the display based on the measurements made. For example, the system computer may determine a usage pattern for the whole display by matching the measurements of the selected tiled regions to pre-determined usage patterns of the whole display. At step 8005, the system computer may determine that one or more LEDs have degraded beyond a threshold value. For example, the usage pattern of the display may correspond to an estimated degradation pattern of the LEDs. If the degradation is not yet significant, the system computer would not need to compensate for it. However, if the degradation is significant (e.g., an LED degraded beyond a threshold value), then the system computer would compensate for the degradation using step 8006 below. The threshold value may be any amount of degradation. For example and not by way of limitation, the computer may only identify LEDs that have degraded to the point where their output is below 75% of their expected, non-degraded greyscale output. At step 8006, the system computer adjusts an output image to compensate for LED degradation. For example, given an output image to be displayed by the LEDs, the system computer may adjust the grayscale of the pixels according to the estimated degradation pattern of the display (e.g., for LEDs that have degraded, the corresponding image pixels that would be displayed by those LEDs may be scaled to be brighter). The adjusted pixel values, in turn, would cause electrical outputs to be adjusted for the degraded LEDs (electrical adjustments may include but are not limited to electrical current and pulse width). Particular embodiments may repeat one or more steps of the method of
Although this disclosure describes and illustrates particular steps of the method of
The embodiments disclosed herein are only examples, and the scope of this disclosure is not limited to them. Particular embodiments may include all, some, or none of the components, elements, features, functions, operations, or steps of the embodiments disclosed herein. Embodiments according to the invention are in particular disclosed in the attached claims directed to a method, a storage medium, a system and a computer program product, wherein any feature mentioned in one claim category, e.g. method, can be claimed in another claim category, e.g. system, as well. The dependencies or references back in the attached claims are chosen for formal reasons only. However any subject matter resulting from a deliberate reference back to any previous claims (in particular multiple dependencies) can be claimed as well, so that any combination of claims and the features thereof are disclosed and can be claimed regardless of the dependencies chosen in the attached claims. The subject-matter which can be claimed comprises not only the combinations of features as set out in the attached claims but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. Furthermore, any of the embodiments and features described or depicted herein can be claimed in a separate claim and/or in any combination with any embodiment or feature described or depicted herein or with any of the features of the attached claims.
In particular embodiments the computing system may generate occlusion surfaces by projecting the depth map (e.g., a voxel grid with SDF values) of physical objects onto an image plane associated with a particular viewpoint of the user. The projection could be rasterized to generate the SDF values of the texels in the occlusion surfaces.
In particular embodiments the computing system may utilize a ray-casting or other rendering process for sampling the surfaces (including occlusion surfaces and surfaces that represent virtual objects) to determine the final pixel values. In particular embodiments, a computing system (e.g., a laptop, a cellphone, a desktop, a wearable device) may perform this first ray-casting process to sample the virtual-object surfaces and occlusion surfaces. This use of the ray-casting process may be referred to herein as a "visibility test," because it may be used to determine a visibility of virtual-object surfaces and occlusion surfaces as observed from a particular viewpoint. The computing system may cast a ray from the viewpoint toward each pixel in the imaginary image screen and determine the intersection between the ray and surfaces positioned in the 3D space. The point of intersection within an occlusion surface (e.g., surface 1600 shown in
In particular embodiments the computing system may utilize the sampled SDF values in order to generate an alpha value for each pixel.
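The sampling described above might look like the following sketch, which bilinearly interpolates the SDF stored in the occlusion surface's texel array at a continuous intersection point (u, v); the texel-center and coordinate conventions are assumptions.

```python
# The ray-surface intersection point falls between four texel centers whose stored
# SDF values are blended according to their distances to the point.
import numpy as np

def sample_sdf_bilinear(sdf_texels: np.ndarray, u: float, v: float) -> float:
    """Interpolate the SDF stored in an (H, W) texel array at continuous (u, v)."""
    h, w = sdf_texels.shape
    u = np.clip(u, 0.0, w - 1.0)
    v = np.clip(v, 0.0, h - 1.0)
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = u - x0, v - y0
    top = (1 - fx) * sdf_texels[y0, x0] + fx * sdf_texels[y0, x1]
    bot = (1 - fx) * sdf_texels[y1, x0] + fx * sdf_texels[y1, x1]
    return float((1 - fy) * top + fy * bot)
```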
In particular embodiments the computing system may achieve different blending effects using different transform functions (e.g., the SDF may be scaled and/or offset) to map interpolated SDF values into corresponding alpha values. As an example, the computing system may adjust the alpha value of every pixel with an SDF value between 0 and 1 (SDF values greater than 1 would be deemed to have a "transparent" alpha value). If an alpha value is either 0 (totally transparent) or 1 (totally opaque), the SDF could be rounded up or down based on a threshold rounding value of 0.5 SDF (e.g., an SDF value of 0.6 would be rounded up to SDF 1, which would translate to alpha 0, or totally transparent). This adjustment may provide for a smooth anti-aliased edge that is 1 pixel wide. The computing system may adjust the size of this anti-aliased edge by altering the predetermined distance from the edge for which texels are adjusted. For example, to add a buffer around the alpha mask, an offset could be applied to the SDF values (e.g., subtracting 3 from an SDF value would effectively extend the alpha mask by 3 pixels outward to include pixels that would not otherwise be deemed opaque). As another example, the computing system may adjust the alpha value of every texel that is located within 16 pixels of the edge of the object. This will result in a wider, "fuzzier" edge on the surface. For example, if an alpha value could be any value between 0 and 1, the SDF values could be scaled to create the desired blurring effect (e.g., SDF values between 0 and 16 could be scaled to be within alpha 0 and 1).
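The transform functions discussed above could, for example, be expressed as follows; the convention that SDF values are in pixels and increase outward from the object, and the exact clamping, are assumptions for this sketch.

```python
# Three illustrative SDF-to-alpha transforms mirroring the examples above.
import numpy as np

def alpha_one_pixel_ramp(sdf: np.ndarray) -> np.ndarray:
    """Pixels with SDF between 0 and 1 fade from opaque to transparent,
    giving a smooth anti-aliased edge roughly one pixel wide."""
    return np.clip(1.0 - sdf, 0.0, 1.0)

def alpha_binary_rounded(sdf: np.ndarray) -> np.ndarray:
    """If only fully transparent/opaque alphas are allowed, round the SDF at 0.5
    (e.g., SDF 0.6 rounds up to 1, i.e., alpha 0)."""
    return np.where(sdf >= 0.5, 0.0, 1.0)

def alpha_scaled_offset(sdf: np.ndarray, width: float = 16.0, offset: float = 0.0) -> np.ndarray:
    """Scale and offset the SDF before mapping to alpha: `width` widens the fuzzy edge
    (e.g., 16 pixels) and a negative `offset` grows the opaque region outward
    (e.g., offset=-3 extends the mask by about 3 pixels)."""
    return np.clip(1.0 - (sdf + offset) / width, 0.0, 1.0)
```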
The resulting pixels, which include alpha values for each pixel and are generated based on the corresponding interpolated SDF values sampled from the texels of the occlusion surface, offer aesthetic and computational benefits over traditional methods. Computationally, the disclosed methods permit generating surfaces without having to project vertices of triangles of a three-dimensional model of the real environment onto an image plane associated with a viewpoint and rasterizing the pixels to determine whether each texel in the surface is part of an object. Aesthetically, the surface retains straight edges of objects, even when the image is magnified or zoomed in on for display. With traditional techniques, if the edge of an object is curved and the surface is over-magnified (e.g., the zoom exceeds a certain threshold), the curved edges will begin to appear polygonized. With the disclosed methods, because an SDF is used to represent the contours of the surface, the edges will be smoother and only appear polygonized once individual pixels have been magnified to be multiple pixels in size. The disclosed methods thus permit using significantly smaller array data because the surface can be significantly more magnified without a loss in image quality.
In particular embodiments the components of the device that generated the surface may also sample the surface to generate the SDF values and alpha values for the one or more texels comprising the surface. As another example and not by way of limitation, an onboard computing system of an HMD may sample the one or more surfaces and generate the SDF values and alpha values after it receives the surface generated from a separate computing system (e.g., from a CPU or GPU of a wearable, handheld, or laptop device). In particular embodiments, there may be a predefined maximum number of surfaces that may be generated for a view (e.g., 16 surfaces) for efficiency purposes. Although this disclosure focuses on displaying an output image to a user on an AR HMD, it contemplates displaying the output image to a user on a VR display or any other suitable device.
In particular embodiments, a surface may be positioned and oriented in a coordinate system in three-dimensional space. In particular embodiments the coordinate system may correspond to the real environment, for example known world-locked coordinates (x, y). The world-coordinates of the surface may be based on an absolute coordinate in the artificial reality environment (e.g., at a particular x, y coordinate), or the world-coordinates of the surface may be determined relative to the pose of the HMD, the user, a particular point on the user (e.g., an eyeball of the user), or one or more other surfaces or virtual objects in the artificial reality environment (e.g., posed at a coordinate relative to a wall or virtual coffee table in the artificial reality environment). The depth of a surface permits the computing system to position the surface in the artificial reality environment relative to, for example and not by way of limitation, one or more other real objects or virtual object representations in the environment. In particular embodiments the virtual object representations may be two-dimensional surfaces as viewed from the viewpoint of the user.
In particular embodiments, a computing system may generate an output image of a viewpoint of a scene of an artificial reality environment for display to a user that may include, for example, one or more surfaces as described herein. Generating the output image may be done on the GPU of the computing system by rendering a surface as viewed from the user's current viewpoint for display. As an example and not by way of limitation, this output image of a viewpoint may include a set of virtual objects. The output image may comprise a set of image pixels that correspond to the portion of the surface that is determined to be visible from a current viewpoint of the user. The output image may be configured to cause a display to turn off a set of corresponding display pixels such that the visible portions of the object are visible to the user when the generated output image is displayed to the user. In particular embodiments the output image may be transmitted to the HMD for display. This allows for an immersive artificial reality environment to be displayed to the user.
The output image may correspond to a viewpoint of the user based on the relative occlusions of the surfaces relative to one or more virtual objects or real objects in the artificial reality environment. The computing system may utilize a ray-casting or other rendering process, such as ray tracing, for determining visual information and location information of one or more virtual objects that are to be displayed within the initial output image of a viewpoint of a scene of an artificial reality environment. In particular embodiments, the first computing system (e.g., a laptop, a cellphone, a desktop, a wearable device) may perform this first ray-casting process to generate an output image of a viewpoint of an artificial reality environment. A “viewpoint” of an artificial reality environment may refer to a user perspective of the artificial reality environment, which may, for example, be determined based on a current position and orientation of an HMD. This use of the ray-casting process may be referred to herein as a “visibility test,” because it may be used to determine a visibility of a virtual object relative to a real object in the real environment by comparing a model of the virtual object with the SDF surface. The ray-casting process may ultimately be used to associate pixels of the screen with points of intersection on any objects that would be visible from a particular viewpoint of an artificial reality environment.
In particular embodiments the generated output image may be rendered by one or more components (e.g., CPU, GPU, etc.) of the computing system physically connected to the HMD. However, the HMD may have limited system resources and a limited power supply, and these limitations may not be appreciably reduced without resulting in too much weight, size, and/or heat for the user's comfort. As a result, it may not be feasible for the HMD to unilaterally handle all the processing tasks involved in rendering an output image of a viewpoint of an artificial reality environment. In particular embodiments, the one or more components may be associated with a device (e.g., a laptop, a cellphone, a desktop, a wearable device) that may be used to render the output image (e.g., perform the ray-casting process). In particular embodiments, the device is in communication with a computing system on the HMD but may be otherwise physically separated from the HMD. As an example and not by way of limitation, this device may be a laptop device that is wired to the HMD or communicates wirelessly with the HMD. As another example and not by way of limitation, the device may be a wearable (e.g., a device strapped to a wrist), handheld device (e.g., a phone), or some other suitable device (e.g., a laptop, a tablet, a desktop) that is wired to the HMD or communicates wirelessly with the HMD. In particular embodiments the device may send this output image to the HMD for display.
In particular embodiments the components of the device that generated the output image may also generate the one or more surfaces. As another example and not by way of limitation, an onboard computing system of an HMD may generate the one or more SDF surfaces after it receives the output image from a separate computing system (e.g., from a CPU or GPU of a wearable, handheld, or laptop device). In particular embodiments, there may be a predefined maximum number of surfaces that may be generated for a view (e.g., 16 surfaces) for efficiency purposes. Although this disclosure focuses on displaying an output image to a user on an AR HMD, it contemplates displaying the output image to a user on a VR display or any other suitable device.
At step 1320, a computing system may generate, based on the depth measurements, an occlusion surface representing one or more physical objects in the real environment as seen from a viewpoint of a user of an artificial reality environment, the occlusion surface comprising a plurality of texels.
At step 1330, a computing system may generate a signed distance field (SDF) value for each of the plurality of texels, the SDF value of each texel representing a distance from that texel to a closest texel at which the one or more physical objects appear in the occlusion surface.
At step 1340, a computing system may pose the occlusion surface in a three-dimensional space.
At step 1350, a computing system may sample the SDF values of the plurality of texels of the posed occlusion surface to generate an interpolated SDF value for each of a plurality of pixels.
At step 1360, a computing system may generate, for each of the plurality of pixels, an alpha value based on the interpolated SDF value associated with the pixel.
At step 1370, a computing system may generate an output image based on the alpha values of the plurality of pixels.
In particular embodiments, a computing system may receive one or more signals from one or more sensors associated with an artificial reality system. The system may determine one or more parameters associated with display content for the artificial reality system based on the one or more signals of the one or more sensors associated with the artificial reality system. The system may generate the display content based on the one or more parameters. The system may output the display content to a display of the artificial reality system. In particular embodiments, the system may predict a first head pose of a user of the artificial reality system in a three-dimensional (3D) space at a first time, the first time corresponding to when a first set of lines of a frame is to be output by the display of the artificial reality system. The system may determine a first visibility of one or more objects defined within the 3D space based on the first head pose of the user. The system may generate the first set of lines of the frame based on the determined first visibility of the one or more objects. The system may output the first set of lines using the display of the artificial reality system. In particular embodiments, the system may predict one or more second head poses of the user of the artificial reality system in the 3D space based on the first head pose and a predicted six degrees-of-freedom (6DoF) movement of the artificial reality system at one or more subsequent times to the first time, the one or more subsequent times respectively corresponding to when one or more second sets of lines of the frame are to be output by the display. The system may determine, based on the one or more second head poses, one or more second visibilities of the one or more objects defined within the 3D space. The system may generate the one or more second sets of lines of the frame using, respectively, the one or more second visibilities of the one or more objects. The system may output, using the display of the artificial reality system, the one or more second sets of lines of the frame at the one or more subsequent times, respectively.
In particular embodiments, the system may predict a first head pose of a user of the artificial reality system in a three-dimensional (3D) space at a first time, the first time corresponding to when a first line of a first set of lines of a plurality of sets of lines of a frame is to be output by a display of the artificial reality system. The system may determine a plurality of subframes that partition the frame, wherein each subframe comprises a set of lines of the plurality of sets of lines of the frame to be output by the display of the artificial reality system. The system may predict a second head pose of the user of the artificial reality system in the three-dimensional (3D) space at a second time, the second time corresponding to when a final line of the first set of lines of the plurality of sets of lines of the frame is to be output by the display of the artificial reality system. The system may determine a first plurality of visibilities of one or more objects defined within the 3D space based on the first head pose and the second head pose. The system may generate the first set of lines of the frame based on the determined first plurality of visibilities of the one or more objects, wherein the first set of lines of the frame corresponds to a first subframe of the plurality of subframes. The system may output the first set of lines using the display of the artificial reality system.
In particular embodiments, the system may predict a third head pose of the artificial reality system in the three-dimensional (3D) space at a third time, the third time corresponding to when a first line of a second set of lines of the plurality of sets of lines of the frame is to be output by the display of the artificial reality system. The system may predict a fourth head pose of the artificial reality device in the three-dimensional (3D) space at a fourth time, the fourth time corresponding to when a final line of the second set of lines of the plurality of sets of lines of the frame is to be output by the display of the artificial reality system. The system may determine a second plurality of visibilities of the one or more objects defined within the 3D space based on the second head pose, the third head pose, and the fourth head pose. The system may generate the second set of lines of the frame based on the determined second plurality of visibilities of the one or more objects, wherein the second set of lines of the frame corresponds to the second subframe of the plurality of subframes. The system may output the second set of lines using the display of the artificial reality system.
In particular embodiments, the system may receive one or more depth measurements of a real environment. The system may generate, based on the depth measurements, an occlusion surface representing one or more physical objects in the real environment as seen from a viewpoint of a user of an artificial reality environment, the occlusion surface comprising a plurality of texels. The system may generate a signed distance field (SDF) value for each of the plurality of texels, the SDF value of each texel representing a distance from that texel to a closest texel at which the one or more physical objects appear in the occlusion surface. The system may pose the occlusion surface in a three-dimensional space. The system may sample the SDF values of the plurality of texels of the posed occlusion surface to generate an interpolated SDF value for each of a plurality of pixels. The system may generate, for each of the plurality of pixels, an alpha value based on the interpolated SDF value associated with the pixel. The system may generate an output image based on the alpha values of the plurality of pixels.
In particular embodiments, the system may select a plurality of tiled regions within the display for sampling. The system may measure a brightness of each light-emitting element in the plurality of tiled regions. The system may determine a usage pattern of the display based on the measured brightness of the light-emitting elements in the plurality of tiled regions. The system may adjust, based on the usage pattern, an output image to compensate for degradations of the display.
Particular embodiments may repeat one or more steps of the method of
In particular embodiments, the control block 1410 may receive an input data stream 1460 from a primary rendering component and initialize a pipeline in the display engine 1400 to finalize the rendering for display. In particular embodiments, the input data stream 1460 may comprise data and control packets from the primary rendering component. The data and control packets may include information such as one or more surfaces comprising texture data and position data and additional rendering instructions. The control block 1410 may distribute data as needed to one or more other blocks of the display engine 1400. The control block 1410 may initiate pipeline processing for one or more frames to be displayed. In particular embodiments, an HMD may comprise multiple display engines 1400 and each may comprise its own control block 1410.
In particular embodiments, transform blocks 1420a and 1420b may determine initial visibility information for surfaces to be displayed in the view of the artificial reality environment. In general, transform blocks (e.g., the transform blocks 1420a and 1420b) may cast rays from pixel locations on the screen and produce filter commands (e.g., filtering based on bilinear or other types of interpolation techniques) to send to pixel blocks 1430a and 1430b. Transform blocks 1420a and 1420b may perform ray casting from the current viewpoint of the user (e.g., determined using inertial measurement units, eye trackers, and/or any suitable tracking/localization algorithms, such as simultaneous localization and mapping (SLAM)) into the artificial scene where surfaces are positioned and may produce results to send to the respective pixel blocks (1430a and 1430b).
In general, transform blocks 1420a and 1420b may each comprise a four-stage pipeline, in accordance with particular embodiments. The stages of a transform block may proceed as follows. A ray caster may issue ray bundles corresponding to arrays of one or more aligned pixels, referred to as tiles (e.g., each tile may include 16×16 aligned pixels). The ray bundles may be warped, before entering the artificial reality environment, according to one or more distortion meshes. The distortion meshes may be configured to correct geometric distortion effects stemming from, at least, the displays 1450a and 1450b of the HMD. Transform blocks 1420a and 1420b may determine whether each ray bundle intersects with surfaces in the artificial reality environment by comparing a bounding box of each tile to bounding boxes for each surface. If a ray bundle does not intersect with any surface, it may be discarded. Tile-surface intersections are detected, and corresponding tile-surface pairs 1425a and 1425b are passed to pixel blocks 1430a and 1430b.
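The tile-surface intersection test described above can be illustrated with a simple axis-aligned bounding-box comparison. The AABB type and pair_tiles_with_surfaces helper below are hypothetical names introduced for illustration, and the sketch omits the ray-bundle warping and distortion-mesh correction.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AABB:
    min_x: float
    min_y: float
    max_x: float
    max_y: float

    def overlaps(self, other: "AABB") -> bool:
        # Two axis-aligned boxes intersect if they overlap on both axes.
        return (self.min_x <= other.max_x and self.max_x >= other.min_x and
                self.min_y <= other.max_y and self.max_y >= other.min_y)

def pair_tiles_with_surfaces(tiles: List[Tuple[int, AABB]],
                             surfaces: List[Tuple[int, AABB]]) -> List[Tuple[int, int]]:
    """Return (tile_id, surface_id) pairs whose bounding boxes intersect.
    Ray bundles (tiles) that intersect no surface are simply discarded."""
    pairs = []
    for tile_id, tile_box in tiles:
        for surface_id, surface_box in surfaces:
            if tile_box.overlaps(surface_box):
                pairs.append((tile_id, surface_id))
    return pairs
```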
In general, pixel blocks 1430a and 1430b determine color values from the tile-surface pairs 1425a and 1425b to produce pixel color values, in accordance with particular embodiments. The color values for each pixel are sampled from the texture data of surfaces received and stored by the control block 1410 (e.g., as part of input data stream 1460). Pixel blocks 1430a and 1430b receive tile-surface pairs 1425a and 1425b from transform blocks 1420a and 1420b, respectively, and schedule bilinear filtering. For each tile-surface pair 1425a and 1425b, pixel blocks 1430a and 1430b may sample color information for the pixels within the tile using color values corresponding to where the projected tile intersects the surface. In particular embodiments, pixel blocks 1430a and 1430b may process the red, green, and blue color components separately for each pixel. Pixel blocks 1430a and 1430b may then output pixel color values 1435a and 1435b, respectively, to display blocks 1440a and 1440b.
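A minimal sketch of the bilinear filtering step is shown below. It assumes the projected tile's intersection with a surface has already been resolved to fractional texel coordinates (u, v) for each pixel in the tile; the function name and array layout are illustrative assumptions, and operating on an H x W x 3 texture array interpolates each color component independently, as described above.

```python
import numpy as np

def bilinear_sample(texture, u, v):
    """Sample an H x W x 3 texture at fractional texel coordinates.
    u, v: arrays of x (column) and y (row) coordinates, one entry per pixel."""
    h, w = texture.shape[:2]
    u = np.clip(np.asarray(u, dtype=float), 0.0, w - 1.0)
    v = np.clip(np.asarray(v, dtype=float), 0.0, h - 1.0)
    x0 = np.minimum(np.floor(u).astype(int), w - 2)
    y0 = np.minimum(np.floor(v).astype(int), h - 2)
    fx = (u - x0)[..., None]            # fractional offsets, broadcast over RGB
    fy = (v - y0)[..., None]
    c00 = texture[y0, x0]               # the four neighboring texels
    c10 = texture[y0, x0 + 1]
    c01 = texture[y0 + 1, x0]
    c11 = texture[y0 + 1, x0 + 1]
    top = c00 * (1 - fx) + c10 * fx     # each color channel interpolated independently
    bottom = c01 * (1 - fx) + c11 * fx
    return top * (1 - fy) + bottom * fy
```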
In general, display blocks 1440a and 1440b may receive pixel color values 1435a and 1435b from pixel blocks 1430a and 1430b, convert the format of the data to be more suitable for the scanline output of the display, apply one or more brightness corrections to the pixel color values 1435a and 1435b, and prepare the pixel color values 1435a and 1435b for output to the displays 1450a and 1450b. Display blocks 1440a and 1440b may convert tile-order pixel color values 1435a and 1435b generated by pixel blocks 1430a and 1430b into scanline- or row-order data, which may be required by the displays 1450a and 1450b. The brightness corrections may include any required brightness correction, gamma mapping, and dithering. Display blocks 1440a and 1440b may provide pixel output 1445a and 1445b, such as the corrected pixel color values, directly to displays 1450a and 1450b or may provide the pixel output 1445a and 1445b to a block external to the display engine 1400 in a variety of formats. For example, the HMD may comprise additional hardware or software to further customize backend color processing, to support a wider interface to the display, or to optimize display speed or fidelity.
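For illustration only, the reordering from tile-order data to the scanline (row-major) order a display expects, followed by a simple gain-plus-gamma brightness correction, might look like the following. The (num_tiles, tile_size, tile_size, 3) layout and the gamma model are assumptions, and dithering is omitted.

```python
import numpy as np

def tiles_to_scanlines(tile_colors, tiles_per_row, tile_size=16):
    """tile_colors: (num_tiles, tile_size, tile_size, 3), tiles in row-major tile
    order with num_tiles divisible by tiles_per_row. Returns an (H, W, 3) frame."""
    tile_rows = tile_colors.shape[0] // tiles_per_row
    return (tile_colors
            .reshape(tile_rows, tiles_per_row, tile_size, tile_size, 3)
            .transpose(0, 2, 1, 3, 4)   # (tileRow, pixRow, tileCol, pixCol, rgb)
            .reshape(tile_rows * tile_size, tiles_per_row * tile_size, 3))

def brightness_correct(frame, gain=1.0, gamma=2.2):
    """Apply a gain and then a gamma mapping to linear [0, 1] colors before scan-out."""
    return np.clip(frame * gain, 0.0, 1.0) ** (1.0 / gamma)
```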
This disclosure contemplates any suitable network 1510. As an example and not by way of limitation, one or more portions of network 1510 may include an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. Network 1510 may include one or more networks 1510.
Links 1550 may connect client system 1530, social-networking system 1560, and third-party system 1570 to communication network 1510 or to each other. This disclosure contemplates any suitable links 1550. In particular embodiments, one or more links 1550 include one or more wireline (such as, for example, Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (such as, for example, Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical (such as, for example, Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In particular embodiments, one or more links 1550 each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular technology-based network, a satellite communications technology-based network, another link 1550, or a combination of two or more such links 1550. Links 1550 need not necessarily be the same throughout network environment 1500. One or more first links 1550 may differ in one or more respects from one or more second links 1550.
In particular embodiments, client system 1530 may be an electronic device including hardware, software, or embedded logic components or a combination of two or more such components and capable of carrying out the appropriate functionalities implemented or supported by client system 1530. As an example and not by way of limitation, a client system 1530 may include a computer system such as a desktop computer, notebook or laptop computer, netbook, a tablet computer, e-book reader, GPS device, camera, personal digital assistant (PDA), handheld electronic device, cellular telephone, smartphone, augmented/virtual reality device, other suitable electronic device, or any suitable combination thereof. This disclosure contemplates any suitable client systems 1530. A client system 1530 may enable a network user at client system 1530 to access network 1510. A client system 1530 may enable its user to communicate with other users at other client systems 1530.
In particular embodiments, client system 1530 may include a web browser 1532, and may have one or more add-ons, plug-ins, or other extensions. A user at client system 1530 may enter a Uniform Resource Locator (URL) or other address directing the web browser 1532 to a particular server (such as server 1562, or a server associated with a third-party system 1570), and the web browser 1532 may generate a Hyper Text Transfer Protocol (HTTP) request and communicate the HTTP request to the server. The server may accept the HTTP request and communicate to client system 1530 one or more Hyper Text Markup Language (HTML) files responsive to the HTTP request. Client system 1530 may render a webpage based on the HTML files from the server for presentation to the user. This disclosure contemplates any suitable webpage files. As an example and not by way of limitation, webpages may render from HTML files, Extensible Hyper Text Markup Language (XHTML) files, or Extensible Markup Language (XML) files, according to particular needs. Such pages may also execute scripts, combinations of markup language and scripts, and the like. Herein, reference to a webpage encompasses one or more corresponding webpage files (which a browser may use to render the webpage) and vice versa, where appropriate.
In particular embodiments, social-networking system 1560 may be a network-addressable computing system that can host an online social network. Social-networking system 1560 may generate, store, receive, and send social-networking data, such as, for example, user-profile data, concept-profile data, social-graph information, or other suitable data related to the online social network. Social-networking system 1560 may be accessed by the other components of network environment 1500 either directly or via network 1510. As an example and not by way of limitation, client system 1530 may access social-networking system 1560 using a web browser 1532, or a native application associated with social-networking system 1560 (e.g., a mobile social-networking application, a messaging application, another suitable application, or any combination thereof) either directly or via network 1510. In particular embodiments, social-networking system 1560 may include one or more servers 1562. Each server 1562 may be a unitary server or a distributed server spanning multiple computers or multiple datacenters. Servers 1562 may be of various types, such as, for example and without limitation, web server, news server, mail server, message server, advertising server, file server, application server, exchange server, database server, proxy server, another server suitable for performing functions or processes described herein, or any combination thereof. In particular embodiments, each server 1562 may include hardware, software, or embedded logic components or a combination of two or more such components for carrying out the appropriate functionalities implemented or supported by server 1562. In particular embodiments, social-networking system 1560 may include one or more data stores 1564. Data stores 1564 may be used to store various types of information. In particular embodiments, the information stored in data stores 1564 may be organized according to specific data structures. In particular embodiments, each data store 1564 may be a relational, columnar, correlation, or other suitable database. Although this disclosure describes or illustrates particular types of databases, this disclosure contemplates any suitable types of databases. Particular embodiments may provide interfaces that enable a client system 1530, a social-networking system 1560, or a third-party system 1570 to manage, retrieve, modify, add, or delete the information stored in data store 1564.
In particular embodiments, social-networking system 1560 may store one or more social graphs in one or more data stores 1564. In particular embodiments, a social graph may include multiple nodes—which may include multiple user nodes (each corresponding to a particular user) or multiple concept nodes (each corresponding to a particular concept)—and multiple edges connecting the nodes. Social-networking system 1560 may provide users of the online social network the ability to communicate and interact with other users. In particular embodiments, users may join the online social network via social-networking system 1560 and then add connections (e.g., relationships) to a number of other users of social-networking system 1560 to whom they want to be connected. Herein, the term “friend” may refer to any other user of social-networking system 1560 with whom a user has formed a connection, association, or relationship via social-networking system 1560.
In particular embodiments, social-networking system 1560 may provide users with the ability to take actions on various types of items or objects, supported by social-networking system 1560. As an example and not by way of limitation, the items and objects may include groups or social networks to which users of social-networking system 1560 may belong, events or calendar entries in which a user might be interested, computer-based applications that a user may use, transactions that allow users to buy or sell items via the service, interactions with advertisements that a user may perform, or other suitable items or objects. A user may interact with anything that is capable of being represented in social-networking system 1560 or by an external system of third-party system 1570, which is separate from social-networking system 1560 and coupled to social-networking system 1560 via a network 1510.
In particular embodiments, social-networking system 1560 may be capable of linking a variety of entities. As an example and not by way of limitation, social-networking system 1560 may enable users to interact with each other as well as receive content from third-party systems 1570 or other entities, or to allow users to interact with these entities through an application programming interface (API) or other communication channels.
In particular embodiments, a third-party system 1570 may include one or more types of servers, one or more data stores, one or more interfaces, including but not limited to APIs, one or more web services, one or more content sources, one or more networks, or any other suitable components, e.g., that servers may communicate with. A third-party system 1570 may be operated by a different entity from an entity operating social-networking system 1560. In particular embodiments, however, social-networking system 1560 and third-party systems 1570 may operate in conjunction with each other to provide social-networking services to users of social-networking system 1560 or third-party systems 1570. In this sense, social-networking system 1560 may provide a platform, or backbone, which other systems, such as third-party systems 1570, may use to provide social-networking services and functionality to users across the Internet.
In particular embodiments, a third-party system 1570 may include a third-party content object provider. A third-party content object provider may include one or more sources of content objects, which may be communicated to a client system 1530. As an example and not by way of limitation, content objects may include information regarding things or activities of interest to the user, such as, for example, movie show times, movie reviews, restaurant reviews, restaurant menus, product information and reviews, or other suitable information. As another example and not by way of limitation, content objects may include incentive content objects, such as coupons, discount tickets, gift certificates, or other suitable incentive objects.
In particular embodiments, social-networking system 1560 also includes user-generated content objects, which may enhance a user's interactions with social-networking system 1560. User-generated content may include anything a user can add, upload, send, or “post” to social-networking system 1560. As an example and not by way of limitation, a user communicates posts to social-networking system 1560 from a client system 1530. Posts may include data such as status updates or other textual data, location information, photos, videos, links, music or other similar data or media. Content may also be added to social-networking system 1560 by a third-party through a “communication channel,” such as a newsfeed or stream.
In particular embodiments, social-networking system 1560 may include a variety of servers, sub-systems, programs, modules, logs, and data stores. In particular embodiments, social-networking system 1560 may include one or more of the following: a web server, action logger, API-request server, relevance-and-ranking engine, content-object classifier, notification controller, action log, third-party-content-object-exposure log, inference module, authorization/privacy server, search module, advertisement-targeting module, user-interface module, user-profile store, connection store, third-party content store, or location store. Social-networking system 1560 may also include suitable components such as network interfaces, security mechanisms, load balancers, failover servers, management-and-network-operations consoles, other suitable components, or any suitable combination thereof. In particular embodiments, social-networking system 1560 may include one or more user-profile stores for storing user profiles. A user profile may include, for example, biographic information, demographic information, behavioral information, social information, or other types of descriptive information, such as work experience, educational history, hobbies or preferences, interests, affinities, or location. Interest information may include interests related to one or more categories. Categories may be general or specific. As an example and not by way of limitation, if a user “likes” an article about a brand of shoes, the category may be the brand, or the general category of “shoes” or “clothing.” A connection store may be used for storing connection information about users. The connection information may indicate users who have similar or common work experience, group memberships, hobbies, educational history, or are in any way related or share common attributes. The connection information may also include user-defined connections between different users and content (both internal and external). A web server may be used for linking social-networking system 1560 to one or more client systems 1530 or one or more third-party systems 1570 via network 1510. The web server may include a mail server or other messaging functionality for receiving and routing messages between social-networking system 1560 and one or more client systems 1530. An API-request server may allow a third-party system 1570 to access information from social-networking system 1560 by calling one or more APIs. An action logger may be used to receive communications from a web server about a user's actions on or off social-networking system 1560. In conjunction with the action log, a third-party-content-object log may be maintained of user exposures to third-party-content objects. A notification controller may provide information regarding content objects to a client system 1530. Information may be pushed to a client system 1530 as notifications, or information may be pulled from client system 1530 responsive to a request received from client system 1530. Authorization servers may be used to enforce one or more privacy settings of the users of social-networking system 1560. A privacy setting of a user determines how particular information associated with a user can be shared. The authorization server may allow users to opt in to or opt out of having their actions logged by social-networking system 1560 or shared with other systems (e.g., third-party system 1570), such as, for example, by setting appropriate privacy settings.
Third-party-content-object stores may be used to store content objects received from third parties, such as a third-party system 1570. Location stores may be used for storing location information received from client systems 1530 associated with users. Advertisement-pricing modules may combine social information, the current time, location information, or other suitable information to provide relevant advertisements, in the form of notifications, to a user.
This disclosure contemplates any suitable number of computer systems 1100. This disclosure contemplates computer system 1100 taking any suitable physical form. As an example and not by way of limitation, computer system 1100 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these. Where appropriate, computer system 1100 may include one or more computer systems 1100; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 1100 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems 1100 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 1100 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
In particular embodiments, computer system 1100 includes a processor 1102, memory 1104, storage 1106, an input/output (I/O) interface 1108, a communication interface 1110, and a bus 1112. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
In particular embodiments, processor 1102 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 1102 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 1104, or storage 1106; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 1104, or storage 1106. In particular embodiments, processor 1102 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 1102 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor 1102 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 1104 or storage 1106, and the instruction caches may speed up retrieval of those instructions by processor 1102. Data in the data caches may be copies of data in memory 1104 or storage 1106 for instructions executing at processor 1102 to operate on; the results of previous instructions executed at processor 1102 for access by subsequent instructions executing at processor 1102 or for writing to memory 1104 or storage 1106; or other suitable data. The data caches may speed up read or write operations by processor 1102. The TLBs may speed up virtual-address translation for processor 1102. In particular embodiments, processor 1102 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor 1102 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 1102 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 1102. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.
In particular embodiments, memory 1104 includes main memory for storing instructions for processor 1102 to execute or data for processor 1102 to operate on. As an example and not by way of limitation, computer system 1100 may load instructions from storage 1106 or another source (such as, for example, another computer system 1100) to memory 1104. Processor 1102 may then load the instructions from memory 1104 to an internal register or internal cache. To execute the instructions, processor 1102 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor 1102 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor 1102 may then write one or more of those results to memory 1104. In particular embodiments, processor 1102 executes only instructions in one or more internal registers or internal caches or in memory 1104 (as opposed to storage 1106 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 1104 (as opposed to storage 1106 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 1102 to memory 1104. Bus 1112 may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor 1102 and memory 1104 and facilitate accesses to memory 1104 requested by processor 1102. In particular embodiments, memory 1104 includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory 1104 may include one or more memories 1104, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory.
In particular embodiments, storage 1106 includes mass storage for data or instructions. As an example and not by way of limitation, storage 1106 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage 1106 may include removable or non-removable (or fixed) media, where appropriate. Storage 1106 may be internal or external to computer system 1100, where appropriate. In particular embodiments, storage 1106 is non-volatile, solid-state memory. In particular embodiments, storage 1106 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage 1106 taking any suitable physical form. Storage 1106 may include one or more storage control units facilitating communication between processor 1102 and storage 1106, where appropriate. Where appropriate, storage 1106 may include one or more storages 1106. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage.
In particular embodiments, I/O interface 1108 includes hardware, software, or both, providing one or more interfaces for communication between computer system 1100 and one or more I/O devices. Computer system 1100 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system 1100. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 1108 for them. Where appropriate, I/O interface 1108 may include one or more device or software drivers enabling processor 1102 to drive one or more of these I/O devices. I/O interface 1108 may include one or more I/O interfaces 1108, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface.
In particular embodiments, communication interface 1110 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 1100 and one or more other computer systems 1100 or one or more networks. As an example and not by way of limitation, communication interface 1110 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface 1110 for it. As an example and not by way of limitation, computer system 1100 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system 1100 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. Computer system 1100 may include any suitable communication interface 1110 for any of these networks, where appropriate. Communication interface 1110 may include one or more communication interfaces 1110, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface.
In particular embodiments, bus 1112 includes hardware, software, or both coupling components of computer system 1100 to each other. As an example and not by way of limitation, bus 1112 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus 1112 may include one or more buses 1112, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect.
Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.
Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context.
The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages.
This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application No. 63/173,945, filed 12 Apr. 2021, U.S. Provisional Patent Application No. 63/173,946, filed 12 Apr. 2021, U.S. Provisional Patent Application No. 63/208,121, filed 8 Jun. 2021, and U.S. Provisional Patent Application No. 63/174,455, filed 13 Apr. 2021, all of which are incorporated herein by reference.