The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.
Throughout the drawings and appendices, identical reference characters and descriptions may indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the appendices and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within this disclosure.
Pan-tilt-zoom (PTZ) cameras are increasingly utilized in a variety of environments because they are capable of providing good coverage of a room and can typically provide 10-20× optical zoom. However, existing PTZ cameras are commonly bulky, heavy, and operationally complex, relying on moving parts to provide the required degrees of freedom for use in various contexts. Thus, it would be beneficial to achieve effective results similar to those obtained with conventional PTZ cameras while reducing the complexity and size of the camera devices.
The present disclosure is generally directed to multi-sensor camera devices (i.e., virtual PTZs) that provide pan, tilt, and zoom functionality in a reduced-size article that does not utilize moving mechanical parts to achieve various levels of zoom. In some embodiments, the disclosed PTZ approach may use a large number of image sensors with overlapping horizontal fields of view arranged in tiers. The image sensors and corresponding lenses utilized in the systems described herein may be significantly smaller than conventional image sensors and lenses. Each tier may, for example, have increasingly more sensors with narrowing fields of view. A mixture of digital and fixed optical zoom positions utilized in the disclosed systems may provide high-resolution coverage of an environmental space at a variety of positions. Multiplexing/switching at an electrical interface may be used to connect a large number of sensors to system on a chip (SOC) or universal serial bus (USB) interface devices. Position-aware n-from-m selection of sensors may be used to select a current sensor used to provide a displayed image and to prepare the next left or right and/or zoom-in or zoom-out sensor.
SOC devices used in camera applications typically support up to 3 or 4 image sensors, so building a camera capable of directly connecting to a larger number of sensors would typically not be feasible without a custom application-specific integrated circuit (ASIC) and/or field-programmable gate array (FPGA). However, such a setup would likely be inefficient in terms of high-speed interfaces and replication of logical functions. Additionally, such ASICs can be relatively expensive, making them impractical for implementation in many scenarios. Single-sensor interfaces also tend to be too slow for switching between camera sensors in practice (e.g., due to delays from electrical interface initialization, sensor setup, white balance, etc.), resulting in undesirable image stalling and/or corruption during switching.
However, in the disclosed embodiments discussed below, it may not be necessary for all sensors to be active at the same time as a camera view pans and/or zooms around and captures different portions of a scene. Rather, only the currently active sensor(s) may be required at any one position and time for image capture, and image sensors that might be utilized next due to proximity may also be turned on and ready to go. In one embodiment, a small number (n) of active sensors may be selected from the total number (m) of sensors. For example, the active sensors utilized at a particular time and position may include a currently used sensor (i.e., the sensor actively capturing an image in the selected FOV), the next left or right sensor, and/or the next zoom-in or zoom-out sensor. Selection may be based on various factors, including the current position of the virtual PTZ camera. In some examples, movement of the camera view may be relatively slow, allowing the sensor-switching latency (e.g., approximately 1-2 seconds) to be effectively hidden.
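By way of non-limiting illustration, the following Python sketch shows one way such position-aware n-from-m selection might be implemented. The tier layout, sensor fields, and function names are assumptions for illustration only and do not describe any particular embodiment.

```python
# Hypothetical sketch of position-aware n-from-m sensor selection.
# Assumes each tier is an ordered row of sensors, each with a known horizontal
# pointing angle and field of view; all names here are illustrative.

from dataclasses import dataclass

@dataclass
class Sensor:
    sensor_id: str
    tier: int          # 0 = widest tier, higher values = more zoomed tiers
    center_deg: float  # horizontal pointing angle of the sensor
    hfov_deg: float    # maximum horizontal field of view

def covers(sensor: Sensor, pan_deg: float) -> bool:
    """True if the requested pan angle falls inside the sensor's FOV."""
    return abs(pan_deg - sensor.center_deg) <= sensor.hfov_deg / 2.0

def select_active_sensors(sensors, current: Sensor, pan_deg: float, n: int = 4):
    """Pick the current sensor plus the neighbors most likely to be used next:
    the adjacent left/right sensors in the same tier and the closest sensors in
    the next zoom-in / zoom-out tiers that cover the current pan angle."""
    same_tier = sorted((s for s in sensors if s.tier == current.tier),
                       key=lambda s: s.center_deg)
    idx = same_tier.index(current)
    candidates = [current]
    # Next left / right sensors in the same tier (pan candidates).
    if idx > 0:
        candidates.append(same_tier[idx - 1])
    if idx < len(same_tier) - 1:
        candidates.append(same_tier[idx + 1])
    # Nearest sensors in adjacent tiers covering the pan angle (zoom candidates).
    for tier in (current.tier - 1, current.tier + 1):
        covering = [s for s in sensors if s.tier == tier and covers(s, pan_deg)]
        if covering:
            candidates.append(min(covering, key=lambda s: abs(pan_deg - s.center_deg)))
    return candidates[:n]  # keep only n of the m sensors powered and streaming
```

In this sketch, only the returned n sensors would be powered and streaming at any one time, while the remaining sensors stay idle until the virtual camera position approaches them.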
Virtual PTZ camera pan and tilt ranges might be excessive when the FOV is focused deeper into a room or space. In some embodiments, each tier of sensors in the camera can narrow down its total FOV so as to reduce the number of lenses and improve angular resolution. Multiple tiers may each be optimized for part of the zoom range to allow fixed focus lenses to be optimized. A 90-degree rotation of the image sensors (e.g., between landscape and portrait modes) for later tiers may provide higher vertical FOV, which may help avoid overlapping in the vertical plane. Using a fisheye lens in the primary tier may provide a wider overall FOV than conventional PTZs. Additionally, the fisheye lens may be used to sense objects/people to direct the framing and selection of image sensors in other tiers corresponding to higher levels of zoom.
As discussed in greater detail below, secondary cameras 106A-106C may cover a range of an environment that partially or fully overlaps a portion of the environment captured by primary camera 104, with secondary cameras 106A-106C covering adjacent regions having FOVs that partially overlap to provide combined coverage of a region. In some examples, primary camera 104 and one or more of secondary cameras 106A, 106B, and 106C may have optical axes that are oriented parallel or substantially parallel to each other, with the respective camera lenses aligned along a common plane.
In certain examples, as shown in
In some examples, a virtual PTZ approach may use multiple sensors with at least partially overlapping horizontal FOVs arranged in multiple tiers of cameras, with each tier having increasingly more sensors with narrowing fields of view. A mixture of digital and fixed optical zoom positions may provide coverage of an environmental space at various levels of detail and scope. In some embodiments, multiplexing and/or switching at an electrical interface may be used to connect the large number of sensors to SOCs or USB interface devices. Position-aware n-from-m selection of sensors may be used to select the current sensor and prepare the next (e.g., the nearest) left or right and/or the next zoom-in or zoom-out sensor.
For example,
Using multiple lenses to cover the zoom range for suitable PTZ functionality may require a large number of sensors. However, sensors with smaller lenses may be significantly less expensive than larger sensors used in combination with larger lenses and motors (e.g., as used in conventional PTZ cameras). If sensors overlap enough for the desired image width, then images may be effectively captured without stitching images simultaneously captured by two or more adjacent sensors. The suitable amount of overlap may depend on sensor horizontal resolution and desired image width. For example, there may need to be enough overlap in the next tier to maintain the FOV in the previous tier at the desired width. In at least one example, a mixture of fisheye and rectilinear projection lenses may be utilized to meet specified FOV requirements at each tier.
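As an illustrative sketch only, the required overlap can be checked against the narrowest view the previous tier is expected to deliver at full digital zoom. The helper names and the numeric example below are assumptions rather than values from any specific embodiment, and the linear angle-per-pixel relation is only a rough approximation.

```python
# Illustrative overlap check (hypothetical numbers and names).
# A tier-N sensor digitally zoomed to its minimum acceptable resolution spans
# roughly max_hfov * (output_width / sensor_width) degrees; the overlap of
# adjacent tier-(N+1) sensors should be at least that wide so the zoomed view
# always fits inside a single next-tier sensor without stitching.

def min_hfov_deg(max_hfov_deg: float, sensor_width_px: int, output_width_px: int) -> float:
    """Approximate narrowest FOV a sensor can deliver by digital zoom alone
    while still providing the desired output width in native pixels."""
    return max_hfov_deg * (output_width_px / sensor_width_px)

def overlap_is_sufficient(prev_tier_min_hfov_deg: float, next_tier_overlap_deg: float) -> bool:
    return next_tier_overlap_deg >= prev_tier_min_hfov_deg

# Example with assumed values: a 120-degree, 4000-pixel-wide sensor producing
# 1920-pixel-wide output can digitally zoom to roughly 58 degrees, so adjacent
# next-tier sensors would need roughly 58 degrees of overlapped FOV.
narrowest = min_hfov_deg(120.0, 4000, 1920)
print(round(narrowest, 1), overlap_is_sufficient(narrowest, 60.0))
```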
As shown in
The second tier of camera system 700 may include multiple second-tier cameras, such as a pair of second-tier cameras (e.g., second-tier cameras 406/506/606 shown in
In various embodiments, as shown, the overlapped horizontal FOV 720 of the second-tier cameras may be at least as large as the minimum horizontal FOV 714 of the first-tier camera. Accordingly, the overlapped horizontal FOV 720 may provide enough coverage for the desired image width such that images may be effectively captured by the second-tier cameras without requiring stitching of images simultaneously captured by two or more adjacent sensors. The suitable amount of overlap may depend on sensor horizontal resolution and desired image width. For example, the overlapped horizontal FOV 720 may provide enough overlap in the second tier to maintain the FOV provided in the first tier at the desired width. As such, when the first-tier camera is digitally zoomed to capture an area corresponding to the minimum horizontal FOV 714 in a region within the maximum horizontal FOV 712, the minimum horizontal FOV 714 of the first-tier camera will be narrow enough to fit within the overlapped horizontal FOV 720 and the view may be lined up with a view captured by one or both of the second-tier cameras without requiring stitching together two or more separate views from adjacent second-tier cameras.
In one example, images captured by the first-tier camera may be utilized to produce primary images for display on a screen. An image captured by the first-tier camera may be zoomed until it is at or near the minimum horizontal FOV 714. At that point, in order to further zoom the image or increase the image resolution provided at that level of zoom, the current image feed may be switched at a point when the displayed image region captured by the first-tier camera corresponds to a region being captured by one or both of the second-tier cameras (i.e., an image of the region within a second-tier camera range 706 of one or both of the second-tier cameras). The second-tier cameras may be utilized to produce secondary images for display. In order to keep a smooth flow in the image feed prior to and following the transition between cameras, the first-tier camera and one or both of the second-tier cameras may be activated simultaneously such that the relevant first- and second-tier cameras are capturing images at the same time prior to the transition. By ensuring the displayed regions from the first- and second-tier cameras are aligned or substantially aligned prior to switching, the displayed images may be presented to a viewer with little or no noticeable impact as the images are switched from one camera to another between frames. Selection and activation of one or more of the cameras in tiers 1-4 may be accomplished in any suitable manner by, for example, an image controller (see, e.g.,
Moreover, an image captured by two or more of the second-tier cameras, at a level of zoom corresponding to the minimum horizontal FOV 714 of the first-tier camera, may be panned horizontally between the second-tier camera ranges without stitching images captured by the second-tier cameras. This may be accomplished, for example, by activating both second-tier cameras simultaneously such that both cameras are capturing images at the same time. In this example, as an image view is panned between the two second-tier camera ranges 706 covered by respective second-tier cameras, an image feed sent to a display may be switched from an initial second-tier camera to a succeeding second-tier camera when the image covers an area corresponding to the overlapped horizontal FOV 720. Thus, rather than stitching together images or portions of images individually captured by the two second-tier cameras, the current image feed may be switched at a point when the displayed image region corresponds to a region being captured by both of the second-tier cameras (i.e., an image of the region within the overlapped horizontal FOV 720). By ensuring the displayed region from the two second-tier cameras is aligned or substantially aligned prior to switching, the displayed images may be presented to a viewer with little or no noticeable impact as the images are switched from one camera to another between frames. This same technique for switching between cameras during panning and zooming may be carried out in the same or similar fashion for the third- and fourth-tier cameras in the third and fourth tiers.
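The switching decision described above can be sketched, under assumed names and a simplified one-dimensional angular model, as follows; the feed changes only when the displayed region is fully contained in the FOV of an already-active candidate camera, so no stitching is needed.

```python
# Simplified, hypothetical switching logic for the pan/zoom transitions described
# above. Camera coverage is modeled as a 1-D horizontal interval in degrees.

def region(center_deg: float, hfov_deg: float):
    return (center_deg - hfov_deg / 2.0, center_deg + hfov_deg / 2.0)

def fully_contains(outer, inner, margin_deg: float = 0.5) -> bool:
    """True if the displayed (inner) region fits inside a camera's (outer) FOV
    with a small alignment margin, so no stitching would be required."""
    return outer[0] - margin_deg <= inner[0] and inner[1] <= outer[1] + margin_deg

def choose_feed(displayed_region, active_cameras):
    """Return the already-active camera whose FOV fully contains the displayed
    region, preferring the narrowest (most zoomed, highest-detail) such camera;
    return None if no single active camera covers the region."""
    candidates = [c for c in active_cameras
                  if fully_contains(region(c["center_deg"], c["hfov_deg"]), displayed_region)]
    return min(candidates, key=lambda c: c["hfov_deg"]) if candidates else None
```

Because the candidate cameras are already streaming before the switch, changing the selected feed between frames in this way may produce little or no visible discontinuity.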
The third tier of camera system 700 may include multiple third-tier cameras, such as three third-tier cameras (e.g., third-tier cameras 408/508/608 shown in
In various embodiments, as shown, the overlapped horizontal FOVs 726 of adjacent third-tier cameras may be at least as large as the minimum horizontal FOV 718 of one or more of the second-tier cameras. Accordingly, the overlapped horizontal FOVs 726 may provide enough coverage for the desired image width such that images may be effectively captured by the third-tier cameras without requiring stitching of images simultaneously captured by two or more adjacent sensors. In one example, the overlapped horizontal FOVs 726 may each provide enough overlap in the third tier to maintain the overall FOV provided in the second tier at the desired width. As such, when a second-tier camera is digitally zoomed to capture an area corresponding to the minimum horizontal FOV 718, the minimum horizontal FOV 718 of the second-tier camera will be narrow enough to fit within a corresponding overlapped horizontal FOV 726 and the view may be lined up with a view captured by at least one of the third-tier cameras without requiring stitching together of two or more separate views from adjacent third-tier cameras, regardless of where the zoom action is performed. Accordingly, an image captured by a second-tier camera may be zoomed until it is at or near the minimum horizontal FOV 718.
The image may be further zoomed and/or the image resolution provided at that level of zoom may be increased in the same or similar manner to that described above for zooming between the first and second tiers. For example, the current image feed may be switched at a point when the displayed image region captured by a second-tier camera corresponds to a region simultaneously captured by one or more of the third-tier cameras, thereby maintaining a smooth flow in the image feed prior to and following the transition between camera tiers. Moreover, images captured by two or more of the third-tier cameras, at a level of zoom corresponding to the minimum horizontal FOV 718 of the second-tier cameras, may be panned horizontally between the third-tier camera ranges without stitching together images captured by the third-tier cameras in the same or similar manner as that discussed above in relation to the second-tier cameras.
The fourth tier of camera system 700 may include multiple fourth-tier cameras, such as five fourth-tier cameras (e.g., fourth-tier cameras 410/510/610 shown in
In various embodiments, as shown, the overlapped horizontal FOVs 732 of adjacent fourth-tier cameras may be at least as large as the minimum horizontal FOV 724 of one or more of the third-tier cameras. Accordingly, the overlapped horizontal FOVs 732 may provide enough coverage for the desired image width such that images may be effectively captured by the fourth-tier cameras without requiring stitching of images simultaneously captured by two or more adjacent sensors. In one example, the overlapped horizontal FOVs 732 may each provide enough overlap in the fourth tier to maintain the overall FOV provided in the third tier at the desired width. As such, when a third-tier camera is digitally zoomed to capture an area corresponding to the minimum horizontal FOV 724, the minimum horizontal FOV 724 of the third-tier camera will be narrow enough to fit within a corresponding overlapped horizontal FOV 732 and the view may be lined up with a view captured by at least one of the fourth-tier cameras without requiring stitching together of two or more separate views from adjacent fourth-tier cameras, regardless of where the zoom action is performed. Accordingly, an image captured by a third-tier camera may be zoomed until it is at or near the minimum horizontal FOV 724.
The image may be further zoomed and/or the image resolution provided at that level of zoom may be increased in the same or similar manner to that described above for zooming between the first and second tiers and/or between the second and third tiers. For example, the current image feed may be switched at a point when the displayed image region captured by a third-tier camera corresponds to a region simultaneously captured by one or more of the fourth-tier cameras, thereby maintaining a smooth flow in the image feed prior to and following the transition between camera tiers. Moreover, images captured by two or more of the fourth-tier cameras, at a level of zoom corresponding to the minimum horizontal FOV 724 of the third-tier cameras, may be panned horizontally between the fourth-tier camera ranges without stitching together images captured by the fourth-tier cameras in the same or similar manner as that discussed above in relation to the second- and third-tier cameras.
Single or multiple sensor cameras may be used in a variety of devices, such as smart phones, interactive screen devices, web cameras, head-mounted displays, video conferencing systems, etc. In some examples, a large number of sensors may be required in a single device to achieve a desired level of image capture detail and/or FOV range. SOC devices may commonly be used, for example, in camera applications that only support a single image sensor. In such conventional SOC systems, it is typically not feasible to simply switch between sensors, as this may require an unsuitable interval of time (e.g., for electrical interface initialization, sensor setup, white balance adjustment, etc.), and the image would likely stall and/or be corrupted during a transition. In some conventional systems, a custom ASIC/FPGA may be utilized to enable a camera to directly connect to a larger number of sensors simultaneously. However, such a custom ASIC or FPGA would likely be inefficient in terms of high-speed interfaces and replication of logical functions.
Sensor selection may be based, for example, on the current image position of the virtual PTZ camera. By moving relatively slowly during panning, tilting, and/or zooming of the image, the sensor-switching latency (e.g., approximately 1-2 seconds) can be effectively hidden when switching between displayed cameras. According to one example, as shown in
In some examples, as shown in
As illustrated in
The routing of the sensors may be selected and laid out to ensure that the active sensors queued up at a particular time have the highest potential of being the next image target and to further ensure that any two potential image targets are connected to different multiplexers when possible. Accordingly, adjacent sensors along a potential zoom and/or pan path may be selectively routed to multiplexers 1048A-C so as to ensure that a multiplexer used to receive a currently displayed image is different than the next potential multiplexer used to receive a succeeding image. The sensors may be connected to the multiplexers in such a way that, when a currently displayed image is received and transmitted by one multiplexer, the other two selected multiplexers are configured to receive data from two sensors that are likely to be utilized next. For example, image data from one or more adjacent cameras in the same tier and/or one or more cameras in one or more adjacent tiers covering an overlapping or nearby FOV may be received at the other multiplexers that are not currently being utilized to provide displayed images. Such a setup may facilitate the selection and activation (i.e., active queuing) of sensors that are likely to be utilized in succession, thus facilitating a smooth transition via switching between sensors during pan, zoom, and tilt movements within the imaged environment. Final selection of the current active sensor may be made downstream of the multiplexers inside an SOC (e.g., SOC 834 in
In at least one example, when first-tier sensor 1004, which is an “A” sensor, is activated and used to generate a currently-displayed image that is sent to multiplexer 1048A, second-tier sensors 1006, which are “B” and “C” sensors routed to multiplexers 1048B and 1048C, may also be activated. Accordingly, when a current target image is zoomed, the image data may be smoothly switched from that received by multiplexer 1048A to image data received by multiplexer 1048B or 1048C from a corresponding one of second-tier sensors 1006. Since the sensors 1006 respectively connected to multiplexers 1048B and 1048C are already active and transmitting image data prior to such a transition, any noticeable lag between display of the resulting images may be reduced or eliminated. Similarly, when, for example, the central third-tier sensor 1008, which is an “A” sensor, is activated and used to generate a currently-displayed image that is sent to multiplexer 1048A, adjacent third-tier sensors 1008, which are “B” and “C” sensors routed to multiplexers 1048B and 1048C, may also be activated. Accordingly, when a current target image is panned, the image data may be smoothly switched from that received by multiplexer 1048A to image data received by multiplexer 1048B or 1048C from a corresponding one of the adjacent third-tier sensors 1008. Since the adjacent third-tier sensors 1008 respectively connected to multiplexers 1048B and 1048C are already active and transmitting image data prior to such a transition from multiplexer 1048A, any noticeable lag between display of the resulting images may be reduced or eliminated.
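One non-limiting way to express the routing constraint and active-queuing behavior described above is sketched below. The A/B/C labels mirror the description, while the data structures and function names are assumptions for illustration.

```python
# Hypothetical sketch of A/B/C multiplexer routing and active-sensor queuing.
# Each sensor is statically wired to one of three multiplexers; the wiring is
# chosen so that a sensor and its likely successors (same-tier neighbors and
# overlapping sensors in adjacent tiers) land on different multiplexers.

MUXES = ("A", "B", "C")

def validate_routing(routing, likely_next):
    """routing maps sensor_id -> mux label; likely_next maps sensor_id -> list
    of sensors that could be displayed next. Returns pairs that violate the
    rule that a sensor and its likely successors use different multiplexers."""
    violations = []
    for sensor, successors in likely_next.items():
        for nxt in successors:
            if routing[sensor] == routing[nxt]:
                violations.append((sensor, nxt))
    return violations

def active_set(current_sensor, routing, likely_next):
    """Keep the current sensor streaming on its multiplexer and pre-activate one
    likely successor on each of the other two multiplexers."""
    active = {routing[current_sensor]: current_sensor}
    for nxt in likely_next.get(current_sensor, []):
        mux = routing[nxt]
        if mux not in active:
            active[mux] = nxt
        if len(active) == len(MUXES):
            break
    return active  # e.g., {"A": "tier1_0", "B": "tier2_0", "C": "tier2_1"}
```

Final selection of which pre-activated stream to display would then be made downstream, for example inside the SOC, consistent with the description above.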
In some embodiments, as shown in
Having multiple tiers that are each optimized for part of the zoom range may allow fixed-focus lenses to be effectively utilized and optimized. In some embodiments, the asymmetric aspect ratio and a 90-degree rotation of the image sensors (e.g., during a rotation of sensors and/or a sensor array from landscape to portrait mode) for later tiers may also provide a higher vertical FOV. Additionally, as shown in
The third tier may have a total horizontal FOV of approximately 90-110 degrees (e.g., approximately 100 degrees), with each of the sensors in the third tier having maximum horizontal FOVs 1522 of approximately 65-75 degrees (e.g., approximately 71 degrees). The fourth tier may have a total horizontal FOV of approximately 70-90 degrees (e.g., approximately 80 degrees), with each of the sensors in the fourth tier having maximum horizontal FOVs 1528 of approximately 50-60 degrees (e.g., approximately 56 degrees). The fifth tier may have a total horizontal FOV of approximately 50-70 degrees (e.g., approximately 60 degrees), with each of the sensors in the fifth tier having maximum horizontal FOVs 1560 of approximately 35-45 degrees (e.g., approximately 42 degrees). The sixth tier may have a total horizontal FOV of approximately 30-50 degrees (e.g., approximately 40 degrees), with each of the sensors in the sixth tier having maximum horizontal FOVs 1566 of approximately 25-35 degrees (e.g., approximately 29 degrees). The sensors may be arranged such that the physical distance between a particular sensor and the sensors which may be used next (e.g., left, right and n−1 tier, n+1 tier) is minimized. This may, for example, reduce parallax effects and make switching between sensor images less jarring, particularly at UHD resolutions.
Additionally, as shown in
Moreover, the second-tier sensors may have an overlapped horizontal FOV 1520 of approximately 70-85 degrees or more, adjacent third-tier sensors may have an overlapped horizontal FOV 1526 of approximately 55-65 degrees or more, adjacent fourth-tier sensors may have an overlapped horizontal FOV 1532 of approximately 40-50 degrees or more, adjacent fifth-tier sensors may have an overlapped horizontal FOV 1564 of approximately 30-40 degrees or more, and adjacent sixth-tier sensors may have an overlapped horizontal FOV 1570 of approximately 20-30 degrees or more.
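For illustration only, the approximate example figures above for the third through sixth tiers can be collected into a small configuration table. The dictionary layout, the chosen midpoint overlap values, and the tier-selection helper below are assumptions rather than required parameters of any embodiment.

```python
# Illustrative tier table (degrees) using approximate example figures from the
# description above; overlap values are assumed midpoints of the stated ranges.

TIERS = {
    # tier: per-sensor maximum horizontal FOV, overlap of adjacent sensors
    3: {"sensor_max_hfov": 71.0, "adjacent_overlap": 60.0},
    4: {"sensor_max_hfov": 56.0, "adjacent_overlap": 45.0},
    5: {"sensor_max_hfov": 42.0, "adjacent_overlap": 35.0},
    6: {"sensor_max_hfov": 29.0, "adjacent_overlap": 25.0},
}

def tier_for_requested_hfov(requested_hfov_deg: float) -> int:
    """Pick the most zoomed tier whose individual sensors can still cover the
    requested horizontal FOV with a single sensor (i.e., no stitching)."""
    usable = [t for t, p in TIERS.items() if p["sensor_max_hfov"] >= requested_hfov_deg]
    return max(usable) if usable else min(TIERS)  # fall back to the widest tier
```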
In certain embodiments, instead of utilizing a single sensor at a time, multiple sensors may be utilized to simultaneously capture multiple images. For example, two sensors may provide both a people view and a separate whiteboard view in a split or multi-screen view. In this example, one or both of the active cameras providing the displayed images may function with restrictions limiting how freely and/or seamlessly the sensors are able to move around and/or change views (e.g., by switching between sensors as described herein).
According to some embodiments, a virtual PTZ camera system may use multiple cameras with different fields of view in a scalable architecture that can achieve large levels of zoom without any moving parts. The multiple cameras may be controlled by software that chooses a subset of the cameras and uses image processing to render an image that could support a “virtual” digital pan-tilt-zoom camera type experience (along with other experiences in various examples). Benefits of such technology may include the ability to provide a zoomed view of any portion of a room while simultaneously maintaining awareness through a separate camera capturing a full view of the room. Additionally, the described systems may provide the ability to move a virtual camera view without user intervention and fade across multiple different cameras in a way that is seamless to the user and seems like a single camera. Additionally, users in a field of view of the system may be tracked with much lower latency than a conventional PTZ due to the use of all-digital imaging that does not rely on mechanical motors to move cameras or follow users. Moreover, the described systems may use lower cost camera modules, which, in combination, may achieve image quality competitive with a higher end digital PTZ camera that utilizes a more costly camera and components.
According to various embodiments, the described technology may be utilized in interactive smart devices and workplace communication applications. Additionally, the same technologies may be used for other applications such as AR/VR, security cameras, surveillance, or any other suitable application that can benefit from the use of multiple cameras. The described systems may be well-suited for implementation on mobile devices, leveraging technology developed primarily for the mobile device space.
Returning to
In the example shown, two neighboring secondary cameras may have overlapping FOVs 1608B and 1608C such that a 16:9 video framing a crop of the torso of individual 1604 at a distance of, for example, approximately 2 meters would be guaranteed to fall within FOVs 1608B and 1608C. Such a framing would allow displayed images to be transitioned between the cameras with a sharp cut, cross-fade, or any other suitable view interpolation method as described herein. If the cameras have sufficient redundant overlap, then the camera input to an application processor may be switched between the center and right cameras having FOVs 1608B and 1608C during a frame transition, with little or no delay as described above. Accordingly, such a system may be capable of operating with only two camera inputs, including one input for a wide-angle view and the other input for either a right or center view, depending on the user's current position.
Inputs to the system may be an array of individual cameras streaming video images, which may be synchronized in real-time to an application processor that uses the constituent views to synthesize an output video. The final video feed may be considered a video image (i.e., a virtual camera view) that is synthesized from one or more of the multiple cameras. The control of the virtual camera view placement and camera parameters may be managed by the manual intervention of a local or remote user (e.g., from a console), or under the direction of an automated algorithm, such as an artificial intelligence (AI)-directed Smart Camera, that determines a desired virtual camera to render given contextual information and data gathered about the scene. The contextual information from the scene can be aggregated from the multiple cameras and other sensing devices of different modalities (e.g., computer-vision cameras or microphone arrays) that can provide additional inputs to the application processor.
One or more individuals (or other relevant objects of salient interest, such as pets, a birthday cake, etc.) that may influence the placement of a final video feed may be detected from some subset of the available cameras in order to build an understanding of the scene and its relevant objects. This detection may use an AI-based deep-learning method such as that used in pose estimation for a smart camera device. The results of the detection operation may be used by an automatic algorithm to determine the final desired camera view parameters and which of the physical cameras should be activated or prioritized in the processing to achieve the desired virtual camera view.
In one configuration, as illustrated in
In some embodiments, there may be no single camera that has a view of everything, so an AI detection task may be distributed or rotated over multiple cameras in a temporal manner. For example, AI detection may run once on a frame from one camera, next on a frame from another camera, and so on (e.g., in round-robin fashion) in order to build a larger model of the scene than may be achieved from a single camera. The system may also use detections from multiple cameras to obtain a more accurate distance to a detected object through triangulation from a stereo or other multiple-camera method. In one case, the AI detection task may be rotated periodically amongst zoomed and wide views in order to detect relevant objects that may be too distant (i.e., captured at too low a resolution) to result in successful detection in wider-FOV cameras.
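A minimal sketch of such round-robin scheduling is shown below; the object attributes, detector interface, and scene-model structure are assumptions for illustration and not a prescribed implementation.

```python
# Hypothetical round-robin scheduling of a single AI detector across multiple
# cameras; one camera's latest frame is processed per step, and the results are
# merged into a shared scene model keyed by camera.

import time
from itertools import cycle

def run_round_robin(cameras, detector, scene_model, steps):
    """cameras: objects exposing .camera_id and .latest_frame();
    detector: callable mapping a frame to a list of detections;
    scene_model: dict accumulating per-camera observations."""
    order = cycle(cameras)
    for _ in range(steps):
        cam = next(order)
        detections = detector(cam.latest_frame())
        # Timestamping lets stale observations from rarely scheduled cameras decay.
        scene_model[cam.camera_id] = {"detections": detections,
                                      "timestamp": time.time()}
    return scene_model
```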
In another embodiment, one or more of the physical cameras may have their own dedicated built-in real-time AI-enabled co-processing or detection hardware and may stream detection results without necessarily providing an image. A processing unit may gather the metadata and object detection information from the distributed cameras and use them to aggregate its model of the scene, or to control and down-select which cameras provide image data to a more powerful AI detection algorithm. The AI detection software may use detection metadata from the different cameras to determine how to temporally share limited AI resources across multiple cameras (e.g., making the ‘round-robin’ rotation of cameras through the AI detection dependent on individual detections from the particular cameras). In another method, environment detection data from the individual cameras may be used to set a reduced region of interest to save bandwidth and power or conserve processing resources when streaming images to an application processor.
Accordingly, the widest-FOV camera in the virtual PTZ camera system may be used for AI understanding of the scene because it can broadly image objects in the environment and because, on a mobile device, there are typically not enough AI resources to process all of the camera feeds. However, in some cases involving multiple cameras, the AI and detection tasks may need to be rotated across different cameras, or the processing may be partially or wholly distributed to various individual cameras.
Many hardware devices that may implement an AI-detection algorithm may only support a limited number of camera inputs. In some cases, as shown in
In one embodiment, with virtual PTZ camera system 1801 having more cameras than inputs, access to the camera port on a mobile application processor may be mediated by another separate hardware device (e.g., a multiplexer, MUX, as shown in
A technique, such as an algorithmic technique, for selecting limited inputs from a plurality of inputs may be carried out as follows. Pose and detection information of one or more subjects, such as individuals 1810 in a scene of environment 1800, may be identified through AI detection in the widest frame and may include information concerning the position and relevant key points or bounding boxes of one or more of individuals 1810 (e.g., identification of shoulders, head, etc. as depicted in
A “virtual” camera may be thought of as a specification of an effective camera (e.g., derived from intrinsics, extrinsics, and/or a projection model) for which an image may be generated by the system from the multiple cameras. In practice, a virtual camera may have a projection model for the generated image (such as a Mercator projection model) that is not physically realizable in a real-camera lens. An automatic algorithm (e.g., a smart camera) may determine parameters of the virtual camera that will be rendered based on scene content and AI detections. In many cases, the position, projection, and rotation of the virtual camera may simply match the parameters of one of the physical cameras in the system. In other cases, or for selected periods of time, the parameters of the virtual camera, such as position, rotation, and zoom setting, may be some value not physically matched to any real camera in the system. In such cases, the desired virtual camera view may be synthesized through software processing using image data from a subset of the available multiple cameras (i.e., some sort of view interpolation).
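One possible way to represent such a virtual camera specification in software is sketched below; the field names, units, and the matching test are illustrative assumptions rather than a required format.

```python
# Illustrative data structure for a "virtual" camera specification.

from dataclasses import dataclass

@dataclass
class VirtualCamera:
    position: tuple          # (x, y, z) extrinsic position in a shared world frame
    yaw_pitch_roll: tuple    # rotation of the virtual view, in degrees
    hfov_deg: float          # effective zoom expressed as a horizontal FOV
    projection: str          # e.g., "rectilinear" or "mercator"

def matches_physical(virtual: VirtualCamera, physical: VirtualCamera,
                     pos_tol: float = 0.01, ang_tol: float = 1.0) -> bool:
    """True when the requested virtual view can simply be served by one physical
    camera (possibly with a digital crop) instead of view interpolation."""
    pos_close = all(abs(a - b) <= pos_tol
                    for a, b in zip(virtual.position, physical.position))
    ang_close = all(abs(a - b) <= ang_tol
                    for a, b in zip(virtual.yaw_pitch_roll, physical.yaw_pitch_roll))
    return pos_close and ang_close and virtual.hfov_deg <= physical.hfov_deg
```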
Once the pose and detection information of the person(s) or object(s) of interest in the scene is known relative to the wide-angle camera corresponding to maximum horizontal FOV 1802 shown in
If a further reduced subset of cameras is required due to limited inputs or limitations of the processing platform, then, for example, the algorithm may choose a subset of cameras of virtual PTZ camera system 1801 using additional criteria, such as best stitching and/or best quality criteria. Best stitching criteria may be used to calculate a set of cameras of highest zoom that may synthesize the desired virtual camera view in the union of their coverage when blended or stitched together. Best quality criteria may be used to determine a camera having the best quality (e.g., the most ‘zoomed’ camera) that still fully (or mostly) overlaps the required virtual camera view.
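The "best quality" and "best stitching" criteria described above might be realized, for example, as sketched below; the one-dimensional interval model of camera coverage and the greedy cover are simplifying assumptions for illustration.

```python
# Hypothetical down-selection of cameras for a requested virtual view. Each
# camera's horizontal coverage is modeled as an interval (left, right) in degrees.

def best_quality(cameras, view):
    """Most zoomed (narrowest-FOV) camera that still fully contains the view."""
    containing = [c for c in cameras if c["left"] <= view[0] and view[1] <= c["right"]]
    return min(containing, key=lambda c: c["right"] - c["left"]) if containing else None

def best_stitching(cameras, view):
    """Greedy cover: repeatedly take a camera covering the current left edge of
    the uncovered span that extends coverage furthest to the right."""
    chosen, edge = [], view[0]
    while edge < view[1]:
        candidates = [c for c in cameras if c["left"] <= edge < c["right"]]
        if not candidates:
            return None  # the requested view cannot be covered by these cameras
        cam = max(candidates, key=lambda c: c["right"])
        chosen.append(cam)
        edge = cam["right"]
    return chosen
```

Additional tie-breaking (e.g., preferring more zoomed cameras among those that extend coverage equally) could be layered onto the greedy cover without changing its structure.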
In cases when the virtual camera view is in a position not coinciding with the physical cameras, a view may be synthesized from a subset of camera views (e.g., from two or more cameras) neighboring the position of the desired virtual camera. The virtual view may be generated by any suitable technique. For example, the virtual view may be generated using homography based on motion vectors or feature correspondences between two or more cameras. In at least one example, the virtual view may be generated using adaptive image fusion of two or more cameras. Additionally or alternatively, the virtual view may be generated using any other suitable view interpolation method, including, without limitation, (1) depth-from-stereo view interpolation, (2) sparse or dense motion vectors between two or more cameras, (3) synthetic aperture blending (image-based techniques), and/or (4) deep-learning-based view interpolation.
In the methods for view interpolation, as described above, depth or sparse distance information may be necessary and/or may improve the quality of the image operations. In one embodiment, a multi-view stereo depth detection or feature correspondence may be performed on the multiple camera streams to generate a depth map or multiple depth maps of the world space covered by the multiple cameras. In some examples, one or more depth maps may be calculated at a lower frame rate or resolution. In additional examples, a 3D or volumetric model of the scene may be constructed over multiple frames and refined over time to improve the depth needed to generate clean view interpolations. In at least one example, AI processing of single or multiple RGB images may be used to estimate the depth of key objects or persons of interest in the scene. Additionally or alternatively, multi-modal signals from a system, such as a microphone array, may be used to estimate the depth to one or more subjects in a scene. In another example, depth information may be provided by actively illuminated sensors for depth such as structured light, time-of-flight (TOF), and/or light detection and ranging (Lidar).
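A minimal sketch of one such depth estimate is shown below, assuming OpenCV is available and that the two neighboring camera views are rectified with a known focal length and baseline; the parameter values and function name are illustrative.

```python
# Coarse stereo depth map from a pair of rectified neighboring camera views.

import cv2
import numpy as np

def coarse_depth_map(left_gray, right_gray, focal_px: float, baseline_m: float):
    """left_gray/right_gray: rectified single-channel images from two cameras."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # mark invalid / unmatched pixels
    return focal_px * baseline_m / disparity    # depth in meters
```

As noted above, such a depth map might be computed at a reduced frame rate or resolution and refined over time rather than recomputed densely for every output frame.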
The simplest realization of the above framework is a dual camera system that includes one wide-angle camera with a full view of the scene and one narrower-angle camera with better zoom. If two cameras are utilized in the system, the wide-angle camera may be set up to take over when a user is outside the maximum FOV of the narrow camera. If a user is inside the FOV of the narrower camera, then the narrower camera may be used to generate the output video because it has the higher image quality and resolution of the two cameras. There are two main options that may be considered for the final virtual camera view in this scenario. In the first option, the virtual camera may always rest on the position of the wider of the two cameras, and the narrower camera information may be constantly fused into that of the wider camera through depth projection. In the second option, the virtual camera may transition from the position of one camera to the other camera. For the time in between, a view may be interpolated during a video transition. Once the transition is over, the new camera may become the primary view position. An advantage of the second option may be higher image quality and fewer artifacts because the period of view interpolation is limited to transitions between the cameras. This may reduce the chance that a user will perceive the differences or artifacts between the two cameras during the transition period.
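The dual-camera selection rule described above might be expressed, for example, as in the sketch below; the normalized-coordinate model and the hysteresis margin are assumptions added for illustration.

```python
# Hypothetical selection logic for the simple dual-camera case: use the narrow
# (zoomed) camera whenever the subject is safely inside its FOV, otherwise fall
# back to the wide camera; a hysteresis margin avoids rapid back-and-forth.

def pick_primary_camera(subject_box, narrow_fov_box, current: str,
                        margin: float = 0.05) -> str:
    """Boxes are (left, top, right, bottom) in normalized wide-camera coordinates."""
    l, t, r, b = subject_box
    nl, nt, nr, nb = narrow_fov_box
    inside = (l >= nl + margin and r <= nr - margin and
              t >= nt + margin and b <= nb - margin)
    outside = (l < nl or r > nr or t < nt or b > nb)
    if inside:
        return "narrow"
    if outside:
        return "wide"
    return current  # within the hysteresis band, keep the current camera
```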
Because of the potential risks and expensive processing often needed for view interpolation, the following additional strategies may make view interpolation more practical to realize on a real device. In one example, video effects, such as cross-fade, may be used to transition all the way from one camera to the other. This may avoid costly processing associated with view interpolation because it relies only on simpler operations such as alpha blending. According to some examples, the transitions may be triggered to coincide with other camera movements, such as zooming, in order to hide the noticeability of switching the cameras. In additional examples, the camera may be controlled to transition only when the transition is least likely to be noticeable.
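Such an alpha-blended cross-fade is straightforward; a minimal sketch is shown below, assuming the two camera frames have already been resized to the same output dimensions.

```python
# Minimal cross-fade sketch: alpha-blend frames from the outgoing and incoming
# cameras over a short transition window instead of performing full view
# interpolation. Frames are assumed to be equally sized numpy image arrays.

import numpy as np

def cross_fade(frame_out, frame_in, step: int, total_steps: int):
    """Return the blended frame for transition step `step` of `total_steps`."""
    alpha = step / float(total_steps)
    blended = ((1.0 - alpha) * frame_out.astype(np.float32)
               + alpha * frame_in.astype(np.float32))
    return blended.astype(frame_out.dtype)
```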
In some embodiments, the following potentially less-expensive strategies may be utilized instead of or in addition to view interpolation. In one example, a quick cut may simply be performed between the two cameras, with no transition or a limited transition period. In another example, a simple cross fade may be performed between the two cameras while applying a homography to one of the two images to prioritize keeping the face and body aligned between the two frames. In a further example, a cross fade may be performed while mesh-warping keypoints from the start image to the end image. According to at least one example, a more expensive view interpolation may be performed for the transition (as noted above). Additionally, in some cases, to create the virtual output image, multiple cameras may be stitched or fused together constantly in a way that may spatially vary across the frame. For example, a method may be used to fuse key content at higher resolution, such that only the face comes from one camera and the rest of the content comes from another camera.
Multi-sensor camera devices, systems, and methods, as disclosed herein, may provide virtual pan, tilt, and zoom functionality without the need for moving parts, thereby reducing the space requirements and overall complexity in comparison to conventional PTZ camera systems. In some embodiments, the approach may use a large number of smaller image sensors with overlapping horizontal fields of view arranged in tiers, with the sensors and lenses being more cost-effective than larger sensors and/or lens configurations, particularly in cases where, for example, up to four or more separate sensors may be included in a single SOC component. A mixture of digital and fixed optical zoom positions may provide substantial coverage of an environmental space at various levels of zoom and detail. Multiplexing/switching at the electrical interface may be used to connect the large number of sensors to SOCs or USB interface devices.
The systems and apparatuses described herein may perform steps 2110 and 2120 in a variety of ways. In one example, image data may be received by a physical layer switch 832 from sensors 804 of a primary camera and a plurality of secondary cameras (see, e.g.,
At step 2130 in
At step 2230 in
As shown, for example, in
In at least one embodiment, a camera device 2400 of
A camera system may include a primary camera and a plurality of secondary cameras that each have a maximum horizontal FOV that is less than a maximum horizontal FOV of the primary camera. Two of the plurality of secondary cameras may be positioned such that their maximum horizontal FOVs overlap in an overlapped horizontal FOV and the overlapped horizontal FOV may be at least as large as a minimum horizontal FOV of the primary camera. The camera system may also include an image controller that simultaneously activates two or more of the primary camera and the plurality of secondary cameras when capturing images from a portion of an environment included within the overlapped horizontal FOV.
The camera system of example 1, wherein at least one of the primary camera and the plurality of secondary cameras may include a fixed lens camera.
The camera system of example 1, wherein the primary camera may include a fisheye lens.
The camera system of example 1, wherein the secondary cameras may each have a greater focal length than the primary camera.
The camera system of example 1, wherein the image controller may be configured to digitally zoom at least one of the primary camera and the plurality of secondary cameras by 1) receiving image data from the at least one of the primary camera and the plurality of secondary cameras and 2) producing images that correspond to a selected portion of the corresponding maximum horizontal FOV of the at least one of the primary camera and the plurality of secondary cameras.
The camera system of example 5, wherein, when the image controller digitally zooms the primary camera to a maximum extent, the corresponding image produced by the image controller may cover a portion of the environment that does not extend outside the minimum horizontal FOV.
The camera system of example 5, wherein the image controller may be configured to digitally zoom the at least one of the primary camera and the plurality of secondary cameras to a maximum zoom level corresponding to a minimum threshold image resolution.
The camera system of example 5, wherein the image controller may be configured to digitally zoom between the primary camera and at least one secondary camera of the plurality of secondary cameras by 1) receiving image data from both the primary camera and the at least one secondary camera simultaneously, 2) producing primary images based on the image data received from the primary camera when a zoom level specified by the image controller corresponds to an imaged horizontal FOV that is greater than the overlapped horizontal FOV, and 3) producing secondary images based on the image data received from the at least one secondary camera when the zoom level specified by the image controller corresponds to an imaged horizontal FOV that is not greater than the overlapped horizontal FOV.
The camera system of example 5, wherein the image controller may be configured to digitally pan horizontally between the plurality of secondary cameras when the images produced by the image controller correspond to an imaged horizontal FOV that is less than the overlapped horizontal FOV.
The camera system of example 9, wherein the image controller may pan horizontally between an initial camera and a succeeding camera of the two secondary cameras by 1) receiving image data from both the initial camera and the succeeding camera simultaneously, 2) producing initial images based on the image data received from the initial camera when at least a portion of the imaged horizontal FOV is outside the overlapped horizontal FOV and within the maximum horizontal FOV of the initial camera, and 3) producing succeeding images based on the image data received from the succeeding camera when the imaged horizontal FOV is within the overlapped horizontal FOV.
The camera system of example 1, further including a plurality of camera interfaces, wherein each of the primary camera and the two secondary cameras may send image data to a separate one of the plurality of camera interfaces.
The camera system of example 11, wherein the image controller may selectively produce images corresponding to one of the plurality of camera interfaces.
The camera system of example 11, wherein 1) each of the plurality of camera interfaces may be communicatively coupled to multiple additional cameras and 2) the image controller may selectively activate a single camera connected to each of the plurality of camera interfaces and deactivate the remaining cameras at a given time.
The camera system of example 1, further including a plurality of tertiary cameras that each have a maximum horizontal FOV that is less than the maximum horizontal FOV of each of the secondary cameras, wherein two of the plurality of tertiary cameras are positioned such that their maximum horizontal FOVs overlap in an overlapped horizontal FOV.
The camera system of example 14, wherein 1) the primary, secondary, and tertiary cameras may be respectively included within primary, secondary, and tertiary tiers of cameras and 2) the camera system may further include one or more additional tiers of cameras that each include multiple cameras.
The camera system of example 1, wherein an optical axis of the primary camera may be oriented at a different angle than an optical axis of at least one of the secondary cameras.
The camera system of example 1, wherein the primary camera and the plurality of secondary cameras may be oriented such that the horizontal FOV extends in a non-horizontal direction.
A camera system may include a primary camera and a plurality of secondary cameras that each have a maximum horizontal FOV that is less than a maximum horizontal FOV of the primary camera, wherein two of the plurality of secondary cameras may be positioned such that their maximum horizontal FOVs overlap. The camera system may also include an image controller that simultaneously activates two or more of the primary camera and the plurality of secondary cameras when capturing images from a portion of an environment to produce a virtual camera image formed by a combination of image elements captured by the two or more of the primary camera and the plurality of secondary cameras.
The camera system of example 18, wherein the image controller may further 1) detect at least one object of interest in the environment based on image data received from the primary camera, 2) determine a virtual camera view based on the detection of the at least one object of interest, and 3) generate the virtual camera image corresponding to the virtual camera view using image data received from at least one of the activated plurality of secondary cameras.
A method may include 1) receiving image data from a primary camera and 2) receiving image data from a plurality of secondary cameras that each have a maximum horizontal FOV that is less than a maximum horizontal FOV of the primary camera. Two of the plurality of secondary cameras may be positioned such that their maximum horizontal FOVs overlap in an overlapped horizontal FOV and the overlapped horizontal FOV may be at least as large as a minimum horizontal FOV of the primary camera. The method may further include simultaneously activating, by an image controller, two or more of the primary camera and the plurality of secondary cameras when capturing images from a portion of an environment included within the overlapped horizontal FOV.
Embodiments of the present disclosure may include or be implemented in conjunction with various types of artificial-reality systems. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, for example, a virtual reality, an augmented reality, a mixed reality, a hybrid reality, or some combination and/or derivative thereof. Artificial-reality content may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content. The artificial-reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in an artificial reality and/or are otherwise used in (e.g., to perform activities in) an artificial reality.
Artificial-reality systems may be implemented in a variety of different form factors and configurations. Some artificial-reality systems may be designed to work without near-eye displays (NEDs). Other artificial-reality systems may include an NED that also provides visibility into the real world (such as, e.g., augmented-reality system 2500 in
Turning to
In some embodiments, augmented-reality system 2500 may include one or more sensors, such as sensor 2540. Sensor 2540 may generate measurement signals in response to motion of augmented-reality system 2500 and may be located on substantially any portion of frame 2510. Sensor 2540 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof. In some embodiments, augmented-reality system 2500 may or may not include sensor 2540 or may include more than one sensor. In embodiments in which sensor 2540 includes an IMU, the IMU may generate calibration data based on measurement signals from sensor 2540. Examples of sensor 2540 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.
In some examples, augmented-reality system 2500 may also include a microphone array with a plurality of acoustic transducers 2520(A)-2520(J), referred to collectively as acoustic transducers 2520. Acoustic transducers 2520 may represent transducers that detect air pressure variations induced by sound waves. Each acoustic transducer 2520 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format). The microphone array in
In some embodiments, one or more of acoustic transducers 2520(A)-(J) may be used as output transducers (e.g., speakers). For example, acoustic transducers 2520(A) and/or 2520(B) may be earbuds or any other suitable type of headphone or speaker.
The configuration of acoustic transducers 2520 of the microphone array may vary. While augmented-reality system 2500 is shown in
Acoustic transducers 2520(A) and 2520(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Or, there may be additional acoustic transducers 2520 on or surrounding the ear in addition to acoustic transducers 2520 inside the ear canal. Having an acoustic transducer 2520 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal. By positioning at least two of acoustic transducers 2520 on either side of a user's head (e.g., as binaural microphones), augmented-reality system 2500 may simulate binaural hearing and capture a 3D stereo sound field around a user's head. In some embodiments, acoustic transducers 2520(A) and 2520(B) may be connected to augmented-reality system 2500 via a wired connection 2530, and in other embodiments acoustic transducers 2520(A) and 2520(B) may be connected to augmented-reality system 2500 via a wireless connection (e.g., a BLUETOOTH connection). In still other embodiments, acoustic transducers 2520(A) and 2520(B) may not be used at all in conjunction with augmented-reality system 2500.
Acoustic transducers 2520 on frame 2510 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 2515(A) and 2515(B), or some combination thereof. Acoustic transducers 2520 may also be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing the augmented-reality system 2500. In some embodiments, an optimization process may be performed during manufacturing of augmented-reality system 2500 to determine relative positioning of each acoustic transducer 2520 in the microphone array.
In some examples, augmented-reality system 2500 may include or be connected to an external device (e.g., a paired device), such as neckband 2505. Neckband 2505 generally represents any type or form of paired device. Thus, the following discussion of neckband 2505 may also apply to various other paired devices, such as charging cases, smart watches, smart phones, wrist bands, other wearable devices, hand-held controllers, tablet computers, laptop computers, other external compute devices, etc.
As shown, neckband 2505 may be coupled to eyewear device 2502 via one or more connectors. The connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components. In some cases, eyewear device 2502 and neckband 2505 may operate independently without any wired or wireless connection between them. While
Pairing external devices, such as neckband 2505, with augmented-reality eyewear devices may enable the eyewear devices to achieve the form factor of a pair of glasses while still providing sufficient battery and computation power for expanded capabilities. Some or all of the battery power, computational resources, and/or additional features of augmented-reality system 2500 may be provided by a paired device or shared between a paired device and an eyewear device, thus reducing the weight, heat profile, and form factor of the eyewear device overall while still retaining desired functionality. For example, neckband 2505 may allow components that would otherwise be included on an eyewear device to be included in neckband 2505 since users may tolerate a heavier weight load on their shoulders than they would tolerate on their heads. Neckband 2505 may also have a larger surface area over which to diffuse and disperse heat to the ambient environment. Thus, neckband 2505 may allow for greater battery and computation capacity than might otherwise have been possible on a stand-alone eyewear device. Since weight carried in neckband 2505 may be less invasive to a user than weight carried in eyewear device 2502, a user may tolerate wearing a lighter eyewear device and carrying or wearing the paired device for greater lengths of time than a user would tolerate wearing a heavy standalone eyewear device, thereby enabling users to more fully incorporate artificial-reality environments into their day-to-day activities.
Neckband 2505 may be communicatively coupled with eyewear device 2502 and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented-reality system 2500. In the embodiment of
Acoustic transducers 2520(I) and 2520(J) of neckband 2505 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital). In the embodiment of
Controller 2525 of neckband 2505 may process information generated by the sensors on neckband 2505 and/or augmented-reality system 2500. For example, controller 2525 may process information from the microphone array that describes sounds detected by the microphone array. For each detected sound, controller 2525 may perform a direction-of-arrival (DOA) estimation to estimate a direction from which the detected sound arrived at the microphone array. As the microphone array detects sounds, controller 2525 may populate an audio data set with the information. In embodiments in which augmented-reality system 2500 includes an inertial measurement unit, controller 2525 may compute all inertial and spatial calculations from the IMU located on eyewear device 2502. A connector may convey information between augmented-reality system 2500 and neckband 2505 and between augmented-reality system 2500 and controller 2525. The information may be in the form of optical data, electrical data, wireless data, or any other transmittable data form. Moving the processing of information generated by augmented-reality system 2500 to neckband 2505 may reduce weight and heat in eyewear device 2502, making it more comfortable to the user.
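As a generic illustration only (not necessarily the method used by controller 2525), a direction-of-arrival estimate for a pair of microphones might be computed with a GCC-PHAT-style cross-correlation as sketched below; the signal layout, sample rate, and microphone spacing are assumed inputs.

```python
# Minimal GCC-PHAT sketch for estimating direction of arrival from two
# microphones separated by a known distance (generic, illustrative approach).

import numpy as np

def doa_two_mics(sig_a, sig_b, fs: float, mic_distance_m: float, c: float = 343.0):
    """Return the estimated arrival angle in degrees relative to broadside."""
    n = len(sig_a) + len(sig_b)
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12                       # PHAT weighting
    corr = np.fft.fftshift(np.fft.irfft(cross, n=n))     # zero lag at center
    delay = (int(np.argmax(np.abs(corr))) - n // 2) / fs  # time difference of arrival
    sin_theta = np.clip(delay * c / mic_distance_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```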
Power source 2535 in neckband 2505 may provide power to eyewear device 2502 and/or to neckband 2505. Power source 2535 may include, without limitation, lithium ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage. In some cases, power source 2535 may be a wired power source. Including power source 2535 on neckband 2505 instead of on eyewear device 2502 may help better distribute the weight and heat generated by power source 2535.
As noted, some artificial-reality systems may, instead of blending an artificial reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience. One example of this type of system is a head-worn display system, such as virtual-reality system 2600.
Artificial-reality systems may include a variety of types of visual feedback mechanisms. For example, display devices in augmented-reality system 2500 and/or virtual-reality system 2600 may include one or more liquid crystal displays (LCDs), light emitting diode (LED) displays, microLED displays, organic LED (OLED) displays, digital light projection (DLP) micro-displays, liquid crystal on silicon (LCoS) micro-displays, and/or any other suitable type of display screen. These artificial-reality systems may include a single display screen for both eyes or may provide a display screen for each eye, which may allow for additional flexibility for varifocal adjustments or for correcting a user's refractive error. Some of these artificial-reality systems may also include optical subsystems having one or more lenses (e.g., conventional concave or convex lenses, Fresnel lenses, adjustable liquid lenses, etc.) through which a user may view a display screen. These optical subsystems may serve a variety of purposes, including to collimate (e.g., make an object appear at a greater distance than its physical distance), to magnify (e.g., make an object appear larger than its actual size), and/or to relay light (to, e.g., the viewer's eyes). These optical subsystems may be used in a non-pupil-forming architecture (such as a single lens configuration that directly collimates light but results in so-called pincushion distortion) and/or a pupil-forming architecture (such as a multi-lens configuration that produces so-called barrel distortion to nullify pincushion distortion).
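As a purely schematic illustration of how rendering software might nullify lens-induced pincushion distortion by pre-applying barrel distortion, the Python sketch below uses a single-coefficient radial distortion model on normalized image coordinates. The model, the coefficient value, and the function names are assumptions chosen for illustration rather than parameters of any disclosed embodiment.

```python
import numpy as np


def radial_distort(points: np.ndarray, k1: float) -> np.ndarray:
    """Apply single-coefficient radial distortion to normalized (x, y)
    coordinates. In this model, k1 > 0 pushes points outward more strongly
    with radius (pincushion); k1 < 0 pulls them inward (barrel)."""
    r2 = np.sum(points ** 2, axis=-1, keepdims=True)
    return points * (1.0 + k1 * r2)


def pre_distort(points: np.ndarray, k1_lens: float, iterations: int = 10) -> np.ndarray:
    """Find source coordinates that, after the lens applies +k1_lens
    pincushion distortion, land on the desired target coordinates.
    Solved by fixed-point iteration of the inverse mapping."""
    undistorted = points.copy()
    for _ in range(iterations):
        r2 = np.sum(undistorted ** 2, axis=-1, keepdims=True)
        undistorted = points / (1.0 + k1_lens * r2)
    return undistorted


if __name__ == "__main__":
    k1 = 0.25  # hypothetical pincushion coefficient for the lens
    target = np.array([[0.5, 0.5], [0.8, 0.0], [0.0, -0.9]])
    rendered = pre_distort(target, k1)        # barrel-shaped rendered points
    observed = radial_distort(rendered, k1)   # what the lens does to them
    print("max residual:", np.max(np.abs(observed - target)))
```

The residual printed at the end should be near zero, indicating that the pre-distorted (barrel) render, once passed through the pincushion-producing optics, arrives at the intended screen positions.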
In addition to or instead of using display screens, some of the artificial-reality systems described herein may include one or more projection systems. For example, display devices in augmented-reality system 2500 and/or virtual-reality system 2600 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through. The display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both artificial-reality content and the real world. The display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc. Artificial-reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.
The artificial-reality systems described herein may also include various types of computer vision components and subsystems. For example, augmented-reality system 2500 and/or virtual-reality system 2600 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras, structured light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. An artificial-reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.
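As a simplified, hypothetical illustration of how data from a time-of-flight depth sensor might be processed, the Python sketch below converts per-pixel round-trip times into depths and back-projects them into a coarse camera-frame point cloud using a pinhole model. The intrinsic parameters, array sizes, and function names are arbitrary values chosen for illustration and are not taken from the disclosure.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second


def tof_depth_map(round_trip_times_s: np.ndarray) -> np.ndarray:
    """Convert per-pixel round-trip times (seconds) from a time-of-flight
    sensor into per-pixel depths (meters). The emitted light travels to the
    surface and back, so the one-way distance is half of the speed of light
    multiplied by the measured time."""
    return 0.5 * SPEED_OF_LIGHT * round_trip_times_s


def backproject(depth_m: np.ndarray, fx: float, fy: float,
                cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map into camera-frame 3D points (x, y, z)
    using a pinhole camera model with the given intrinsics."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    return np.stack([x, y, depth_m], axis=-1)


if __name__ == "__main__":
    # Hypothetical 2x3 grid of round-trip times; 10-20 ns corresponds to
    # surfaces roughly 1.5-3 meters away.
    times = np.array([[10e-9, 12e-9, 14e-9],
                      [16e-9, 18e-9, 20e-9]])
    depth = tof_depth_map(times)
    cloud = backproject(depth, fx=2.0, fy=2.0, cx=1.0, cy=0.5)
    print(np.round(depth, 2))
    print(np.round(cloud, 2))
```

A point cloud of this kind is one possible input to the localization and mapping functions mentioned above; structured-light and LiDAR sensors would produce depth through different measurement principles but yield similar downstream data.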
The artificial-reality systems described herein may also include one or more input and/or output audio transducers. Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer. Similarly, input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer. In some embodiments, a single transducer may be used for both audio input and audio output.
In some embodiments, the artificial-reality systems described herein may also include tactile (i.e., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floormats, etc.), and/or any other type of device or system. Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms. Haptic feedback systems may be implemented independently of other artificial-reality devices, within other artificial-reality devices, and/or in conjunction with other artificial-reality devices.
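As a minimal sketch, under assumed parameters, of how a vibrotactile effect might be generated for one of the actuators noted above, the Python example below synthesizes an amplitude-enveloped sine drive signal that a motor or piezoelectric driver could consume. The 170 Hz carrier frequency, sample rate, and function names are illustrative assumptions only.

```python
import numpy as np


def vibration_waveform(frequency_hz: float, duration_s: float,
                       peak_amplitude: float = 1.0,
                       sample_rate_hz: float = 8000.0) -> np.ndarray:
    """Generate a sine-wave drive signal with a linear attack/decay envelope,
    suitable for a simple vibrotactile actuator driver."""
    t = np.arange(0.0, duration_s, 1.0 / sample_rate_hz)
    carrier = np.sin(2.0 * np.pi * frequency_hz * t)
    # Ramp the amplitude up over the first 10% and down over the last 10%
    # of the effect so the actuator does not produce an audible click.
    envelope = np.minimum(1.0, np.minimum(t / (0.1 * duration_s),
                                          (duration_s - t) / (0.1 * duration_s)))
    return peak_amplitude * envelope * carrier


if __name__ == "__main__":
    wave = vibration_waveform(frequency_hz=170.0, duration_s=0.25)
    print(f"{wave.size} samples, peak {np.max(np.abs(wave)):.2f}")
```

More elaborate effects (texture, compliance, temperature) would require different actuators and drive strategies, but many vibrotactile cues reduce to parameterized waveforms of this general form.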
By providing haptic sensations, audible content, and/or visual content, artificial-reality systems may create an entire virtual experience or enhance a user's real-world experience in a variety of contexts and environments. For example, artificial-reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world. Artificial-reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). The embodiments disclosed herein may enable or enhance a user's artificial-reality experience in one or more of these contexts and environments and/or in other contexts and environments.
Computing devices and systems described and/or illustrated herein, such as those included in the illustrated display devices, broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor.
In some examples, the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
In some examples, the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
In some embodiments, the term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to any claims appended hereto and their equivalents in determining the scope of the present disclosure.
Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and/or claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and/or claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and/or claims, are interchangeable with and have the same meaning as the word “comprising.”
This application claims the benefit of priority to U.S. Provisional Application No. 63/086,980, filed Oct. 2, 2020, and U.S. Provisional Application No. 63/132,982, filed Dec. 31, 2020, the disclosures of each of which are incorporated herein, in their entirety, by this reference.