Embodiments of the present disclosure generally relate to video generating systems, and more particularly, to video generating systems with background replacement or modification capabilities.
Video generating systems (e.g., video conferencing equipment) have become more popular in recent years, due in large part to the declining costs of video generating equipment, the proliferation of high-speed Internet, and a global movement towards remote work situations. As familiarity with video generating systems increases, so does demand for more sophisticated video streaming features, such as background removal, modification, and/or replacement schemes for these video applications.
Conventional methods of background replacement rely on chroma key compositing, in which two or more video streams are layered together based on color hue. Chroma key compositing requires the use of a monochrome background screen, e.g., a green screen, and even, bright lighting to avoid shadows, which might otherwise present as a darker color and not register for replacement, and to prevent undesirably high amounts of noise by providing a bright and unsaturated image. However, chroma key compositing is generally disfavored for occasional use, such as individual video conferencing use, due to the unwieldy and unattractive nature of the required background screens and due to the expensive professional-level lighting requirements associated therewith.
Due to the undesirability of chroma key compositing for individual use, such as with a remote work situation, users have shown increasing interest in virtual backgrounds. Virtual background schemes typically provide background removal, modification, and/or replacement using software executed on a user device, e.g., a personal computer, a laptop, or a gaming console.
Unfortunately, the cost, time, and technical complexity of implementing conventional virtual background replacement has proven prohibitive to potential users who may otherwise desire the privacy and other benefits afforded thereby. For example, users of such virtual background schemes frequently complain (1) that the increased computing power requirements may be more than is available for a typical individual remote office setup, (2) that the virtual background replacement software may be incompatible for use with readily available video generating software, such as readily available video conferencing software applications, and (3) that the software introduces an undesirable lag to a live video stream and/or to the separation of the user from the background.
Accordingly, there is a need in the art for video generating equipment (e.g., video conferencing equipment) and related methods that solve the problems described above.
Embodiments herein generally relate to video generating systems, and more particularly, to advanced camera devices with integrated background differentiation capabilities, such as background removal, background replacement, and/or background blur capabilities, suitable for use in a video application (e.g., video conferencing).
Embodiments of the disclosure include a method of generating an image by receiving, by one or more sensors, electromagnetic radiation from a first environment, wherein the electromagnetic radiation comprises radiation within a first range of wavelengths and radiation within a second range of wavelengths, generating visible image data from the electromagnetic radiation received in the first range of wavelengths, detecting, by a first sensor of the one or more sensors, an intensity of the electromagnetic radiation received in the second range of wavelengths from the first environment, identifying a first portion of the first environment based on values relating to the detected intensity of the electromagnetic radiation received in the second range of wavelengths, generating a first subset of the visible image data based on the identification of the first portion of the first environment, wherein the first subset of the visible image data corresponds to visible image data configured to generate a visible image of the first portion of the first environment, and generating a first visible image of the first portion of the first environment from the first subset of the visible image data.
Embodiments of the disclosure further include a camera device for use with a video streaming system, the camera device including a lens, one or more sensors configured to generate image data from electromagnetic radiation received from a first environment, a controller comprising a processor and a non-transitory computer readable medium that includes instructions which when executed by the processor are configured to cause the camera device to: receive, by the one or more sensors, electromagnetic radiation from the first environment, wherein the electromagnetic radiation comprises radiation within a first range of wavelengths and radiation within a second range of wavelengths; generate visible image data from the electromagnetic radiation received in the first range of wavelengths; detect, by a first sensor of the one or more sensors, an intensity of the electromagnetic radiation received in the second range of wavelengths from the first environment; identify a first portion of the first environment based on values relating to the detected intensity of the electromagnetic radiation received in the second range of wavelengths; generate a first subset of the visible image data based on the identification of the first portion of the first environment, wherein the first subset of the visible image data corresponds to visible image data configured to generate a visible image of the first portion of the first environment; and generate a first visible image of the first portion of the first environment from the first subset of the visible image data.
Embodiments of the disclosure further include a method of generating an image, comprising the following operations. Receiving, on a plurality of sensing elements of a sensor array, electromagnetic radiation during a first time period from a first environment, wherein the electromagnetic radiation comprises radiation within a first range of wavelengths and radiation within a second range of wavelengths. Generating, during a second time period, a first set of visible image data in response to the electromagnetic radiation received in the first range of wavelengths on a first portion of the sensor array during the first time period. Generating, during the second time period, a first set of electromagnetic image data in response to the electromagnetic radiation received in the second range of wavelengths on the first portion of the sensor array during the first time period, wherein the first set of electromagnetic image data includes information relating to the intensities of the electromagnetic radiation in the second range of wavelengths received at the first portion of the sensor array during the first time period. Replacing or modifying at least some of the first set of visible image data generated during the second time period based on the first set of electromagnetic image data generated during the second time period. Then, generating, during a third time period, a second set of visible image data in response to the electromagnetic radiation received in the first range of wavelengths on a second portion of the sensor array during the first time period, wherein the second time period occurs after the first time period, and the third time period occurs after the second time period.
Embodiments of the disclosure further include a camera device for use with a video streaming system, the camera device comprising a lens, a sensor including a sensor array configured to generate image data from electromagnetic radiation received from a first environment, and a controller comprising a processor and a non-transitory computer readable medium that includes instructions stored therein. The instructions which when executed by the processor are configured to cause the camera device to receive, on a plurality of sensing elements of the sensor array, electromagnetic radiation during a first time period from a first environment, wherein the electromagnetic radiation comprises radiation within a first range of wavelengths and radiation within a second range of wavelengths, generate, during a second time period, a first set of visible image data in response to the electromagnetic radiation received in the first range of wavelengths on a first portion of the sensor array during the first time period, generate, during the second time period, a first set of electromagnetic image data in response to the electromagnetic radiation received in the second range of wavelengths on the first portion of the sensor array during the first time period, wherein the first set of electromagnetic image data includes information relating to the intensities of the electromagnetic radiation in the second range of wavelengths received at the first portion of the sensor array during the first time period, replace or modify at least some of the first set of visible image data generated during the second time period based on the first set of electromagnetic image data generated during the second time period; and generate, during a third time period, a second set of visible image data in response to the electromagnetic radiation received in the first range of wavelengths on a second portion of the sensor array during the first time period, wherein the second time period occurs after the first time period, and the third time period occurs after the second time period.
So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements and features of one embodiment may be beneficially incorporated in other embodiments without further recitation.
Embodiments herein generally relate to video generating systems, and more particularly, to video generating equipment with integrated background differentiation capabilities, such as background replacement and/or background modification, which are suitable for use in video applications, such as video conferencing applications. Although the following disclosure is largely described in reference to video conferencing systems and related methods, the benefits of the disclosure are not limited to video conferencing applications and can be applied to any system or method in which video is generated, such as video streaming, video recording, and videotelephony. Furthermore, although the following is largely described in reference to repeatedly replacing or modifying a portion (e.g., a background) in a video stream, the benefits of these processes can also be applied to replacing or modifying a portion (e.g., a background) in one or more images that are less than a full video stream, such as a single still image or a series of still images.
In the embodiments described below undesired portions of a video conference environment (e.g., a background behind a user) are separated from desired portions of the video conference environment (e.g., a foreground including the user(s)) in a video feed for a video conference by taking advantage of a decay of the intensity of generated electromagnetic radiation from an illuminator over a distance. An illuminator directs electromagnetic radiation (e.g., infrared radiation) having one or more wavelengths at the video conference environment. This electromagnetic radiation is then reflected back to a sensor. The undesired background is located at a greater distance from the sensor and illuminator compared to the desired foreground that includes the user(s). Because the intensity of the generated electromagnetic radiation decays with distance, the electromagnetic radiation reflected from the undesired background has a lower intensity when received by the sensor than the electromagnetic radiation reflected from the desired foreground. This difference in intensity at the one or more wavelengths can then be used to separate the undesired background from the desired foreground, for example on a pixel by pixel basis in an infrared image. After this separation, the undesired background in a corresponding visible image can then be modified (e.g., blurred) or removed and replaced with a different background, for example also on a pixel by pixel basis. By repeating this method, for example on a frame by frame basis, visible images of the desired foreground including the user(s) can then be transmitted along with the modified or replacement background as a video feed for the video conference.
In some embodiments, the background differentiation and/or background replacement methods are performed, using the camera device, before encoding the video stream for transmission of the video stream therefrom. By providing for pre-encoding and thus pre-compression background differentiation, the advanced camera devices described herein desirably avoid accumulated latencies that would otherwise propagate with a background replacement software executing on an operating system of a user device separate from, but communicatively coupled to, the camera device.
The pre-encoding and pre-compression background differentiation techniques disclosed herein also reduce the amount of information that needs to be transmitted from the camera device, since the unnecessary background information is removed prior to transmission from the camera device. The techniques disclosed herein thus reduce the hardware and data transmission protocol (e.g., USB 2.0 versus USB 3.0) requirements needed to transmit the relevant video conferencing information from the camera device to one or more external electronic devices. Therefore, removal of undesired information relating to the background from the video stream at the camera device substantially reduces the bandwidth otherwise required for transmission of an unmodified video stream. In some embodiments, the increased bandwidth availability provided by the advanced camera device may be used to transmit portions of higher resolution images, e.g., 4K or more, between the advanced camera device and the user device while using less complex and lower cost data transmission hardware and transmission techniques. The background differentiation methods may be used with, but are generally invisible to, video conferencing software applications, such as Microsoft® Skype®, Apple® FaceTime®, and applications available from Zoom® Video Communications, Inc., which advantageously facilitates seamless integration therewith. Furthermore, having the camera device perform the background replacement or modification can have security benefits as well. For example, when an image of a background including personal information is never transmitted from the camera device to another device, the likelihood of this personal information falling into the wrong hands is substantially reduced. Moreover, as described below, in some embodiments the camera device itself never generates a visible image of the background, because the camera device can generate the visible image on a pixel by pixel basis and, in doing so, can generate visible image pixels only for areas of the video conference environment, such as the foreground, that have received an electromagnetic intensity above a given threshold. In such embodiments, in which a visible image of the background portion of the video conference environment is never generated, the security of any personal information in the background of the video conference environment is better preserved.
The video conferencing system 100 further includes a network 106 that facilitates communication between the first video conferencing endpoint 101 and the second video conferencing endpoint 102. The network 106 generally represents any data communications network suitable for the transmission of video and audio data (e.g., the Internet). Corresponding communication links 108, 109 are used to support the transmission of video conference feeds that include audio and video streams between the respective video conferencing endpoints 101, 102 and the network 106. These communication links 108, 109 can be, for example, communication links to a Local Area Network (LAN) or a Wide Area Network (WAN).
The following describes how the first video conferencing endpoint 101 is used to modify or replace the background of the local environment L, but the description is applicable for modifying or replacing any video conferencing background with similar equipment and methods.
The first video conferencing endpoint 101 includes a user device 110, a display 112, and a camera device 200. The camera device 200 includes a sensor 250 and an illuminator 270. The illuminator 270 directs electromagnetic radiation E (e.g., infrared radiation) having one or more wavelengths at a portion of the local environment L. While not intending to be limiting as to the scope of the disclosure provided herein, for simplicity of the disclosure, the electromagnetic radiation emitted at the one or more wavelengths by the illuminator 270 is also sometimes referred to herein as the infrared radiation.
The generated electromagnetic radiation E directed at the local environment L from the illuminator 270 reflects off of the surfaces in the local environment L. Portions of the reflected electromagnetic radiation are received by the sensor 250. As described in additional detail below, the sensor 250 is configured to (1) receive visible light for generating visible images of the local environment L and (2) detect intensities of the electromagnetic radiation E at the one or more wavelengths reflected from surfaces in the local environment L. As used herein, electromagnetic radiation used to generate a visible image is referred to as electromagnetic radiation within a first range of wavelengths. Similarly, the electromagnetic radiation directed from the illuminator is also referred to as electromagnetic radiation within a second range of wavelengths. In some embodiments, the first range of wavelengths and the second range of wavelengths are completely separate with no overlap between the ranges, such as when the first range is in the visible spectrum and the second range is in a non-visible portion of the electromagnetic spectrum (e.g., the infrared spectrum). However, in other embodiments the first range and the second range can include some overlap. For example, some overlap can occur when the illuminator 270 emits radiation (i.e., radiation within the second range) and the visible image is generated mostly from visible light, but the visible image is also influenced by the radiation (e.g., near infrared radiation) emitted from the illuminator, such that the first range of wavelengths includes a range extending from visible light to one or more of the wavelength(s) emitted by the illuminator 270.
The differences in intensities of the electromagnetic radiation E received at the sensor 250 are then used to separate low-intensity regions of the local environment L (e.g., the background) from high-intensity regions of the local environment L (e.g., the foreground), so that visible images of the local environment L can be generated without the visible areas corresponding to the low-intensity regions of the local environment L or with a modified visible version (e.g., a blurred background) of the areas corresponding to the low-intensity regions of the local environment L.
The user device 110 represents any computing device capable of transmitting a video stream to a remote video conferencing device (e.g., the second video conferencing endpoint 102) via the communication link 108 that is in communication with the network 106. Examples of the user device 110 can include, without limitation, a laptop, a personal computer, a tablet, and a smart phone. The user device 110 includes a processor 114, a memory 116, support circuits 118, and a video conferencing software application 120 stored in the memory 116. The memory 116 can include non-volatile memory to store the video conferencing software application 120. The processor 114 can be used to execute the video conferencing software application 120 stored in the memory 116. Execution of the video conferencing software application 120 can enable the user device 110 to transmit data (e.g., audio and video data) received from the equipment (e.g., the camera device 200) in the first video conferencing endpoint 101 to the second video conferencing endpoint 102 via the communication link 108. Additionally, execution of the video conferencing software application 120 can also enable the user device 110 to receive data (e.g., audio and video data) from the second video conferencing endpoint 102, via the network 106 and the communication links 108, 109. Examples of video conferencing software application 120 include, without limitation, Microsoft® Skype®, Apple® FaceTime®, and applications available from Zoom® Video Communications, Inc. More generally, however, any video conferencing software application capable of receiving video data and transmitting video data to a remote site can be used, consistent with the functionality described herein. The user device 110 can further include audio speakers (not shown) for generating audio, for example audio of the user(s) speaking in the remote environment R, for the user 50 during the video conference.
In some embodiments, for example as shown in
The first video conferencing endpoint 101 can further include a communication link 113 for enabling communication between the camera device 200 and the user device 110. The communication link 113 may be wired or wireless. In some embodiments, the communication link 113 is a USB communication link selected from the industry standards of USB 2.0, 3.0, and 3.1 having one or more of a combination of type A, B, C, mini-A, mini-B, micro-A, and micro-B plugs.
In the local environment L, the user 50 is shown seated on a chair 55 at a desk 60. The user is holding a cup 65. The camera device 200 is positioned to view the user 50 and the user's surroundings. The local environment L further includes a back wall 75 located behind the user 50. The back wall 75 forms at least part of the undesired background that can be replaced or modified using the techniques described herein.
Before providing additional detail on the background modification and replacement performed by the camera device 200, the hardware features of the camera device 200 are described in reference to
The illuminator 270 is configured to direct electromagnetic radiation E having one or more wavelengths at the local environment L. In general, the illuminator 270 is configured to deliver one or more wavelengths of electromagnetic energy to the local environment L that is characterized by a significant drop of intensity as a function of distance travelled from the energy source (i.e., the illuminator 270), such as electromagnetic wavelengths that are more strongly absorbed in air under normal atmospheric conditions. In some embodiments, the illuminator 270 is configured to deliver one or more wavelengths within the infrared range, such as one or more wavelengths from about 700 nm to about 1 mm. For example, in one embodiment, one or more wavelengths within the far-infrared range of 10 μm to 1 mm is emitted from the illuminator 270. In another embodiment, one or more wavelengths within the near-infrared spectrum range of 750 nm to 1400 nm is emitted from the illuminator 270. In one such embodiment, the illuminator 270 is configured to emit one or more wavelengths of energy from about 800 nm to about 950 nm, such as 850 nm and 900 nm. In other embodiments, forms of electromagnetic radiation other than infrared radiation can be directed from the illuminator.
Although much of this disclosure describes using an illuminator, such as the illuminator 270, and then detecting reflections of the electromagnetic radiation E emitted from the illuminator to perform the methods described herein, in some embodiments, the illuminator can be omitted. For example, in one embodiment, one or more sensors are configured to detect ambient levels of infrared energy, such as infrared energy emitted as a result of a user's body heat and infrared energy emitted from surrounding objects. The intensity of infrared energy emitted from user(s) and objects in the video conference environment also decays with distance in the same way that the reflected electromagnetic radiation E emitted from the illuminator 270 decays with distance, and thus the methods described herein can also be applied to perform background replacement or modification when an illuminator is not used. In one embodiment in which the illuminator is omitted, an infrared sensor configured to detect infrared energy from about 900 nm to about 2500 nm can be used to perform the methods described herein. Although the illuminator can be omitted, the remainder of the disclosure is described with reference to embodiments in which an illuminator, such as the illuminator 270, is used.
The sensor 250 is configured to (1) receive visible light for generating visible images and (2) detect intensities of the electromagnetic radiation E reflected from surfaces in the local environment L to generate, for example, infrared image data. Typically, the sensor 250 is a digital device. In one embodiment in which the illuminator 270 is an infrared illuminator, the sensor 250 is a multispectral sensor, such as a combination red, green, blue, infrared (RGB-IR) sensor. In some embodiments, the multispectral sensor can include an array of complementary metal oxide semiconductor (CMOS) sensing elements or an array of charge-coupled device (CCD) sensing elements.
Using the sensor array 251 allows for the resolution of the infrared images generated from the infrared sensing elements 264 of the sensor 250 to match the resolution of the visible images generated from the RGB sensing elements 261-263 of the sensor 250. This matching resolution allows for pixels in the visible image generated from the visible light sensing elements 261-263 to be replaced or adjusted based on corresponding pixels from the infrared image generated by the infrared sensing elements 264. Replacing or adjusting pixels in the visible image based on pixels in the infrared image is discussed in further detail below.
Although the matching resolution between the visible image and the infrared image can simplify the process of adjusting the visible image based on the infrared image, it is also common for these resolutions to be different, such as a visible image with a greater resolution than the corresponding infrared image. In such embodiments, the lower resolution of the infrared image will typically cause the detected edge between the background region and the desired foreground region to be more granular (i.e., an edge determined from larger pixels has a coarser appearance to a user) than the edge in the visible image, which is formed of smaller pixels. In these cases, the more granular infrared image, having fewer but larger pixels, can be scaled up to a resolution matching the visible image using well-established image scaling methods, such as bilinear, bicubic, nearest neighbor, and mipmap interpolation.
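For illustration only, the following minimal sketch (not part of the disclosed camera firmware) shows one way such scaling could be performed, here as a simple nearest-neighbor upscale implemented with the numpy library; the array sizes are assumed for the example.

```python
import numpy as np

def upscale_nearest(ir_image: np.ndarray, scale: int) -> np.ndarray:
    # Nearest-neighbor scaling: each coarse infrared pixel is repeated
    # scale x scale times so the result matches the visible image resolution.
    return np.repeat(np.repeat(ir_image, scale, axis=0), scale, axis=1)

# Example (assumed sizes): a 540x960 infrared image scaled 2x to match a
# 1080x1920 visible image.
ir_coarse = np.random.rand(540, 960)
ir_matched = upscale_nearest(ir_coarse, 2)
assert ir_matched.shape == (1080, 1920)
```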
Using a multispectral sensor with a single sensor array, such as the sensor array 251, is one option for generating (1) a visible image and (2) a corresponding image from the reflected electromagnetic radiation after being directed from the illuminator 270. Other sensor arrangements are discussed below. Furthermore, although the visible light sensing elements 261-263 in the sensor array 251 are described as RGB-sensing elements, other arrangements of visible light sensing elements can also be used, such as arrangements based on CYYM, RRYB, etc.
The sensor array 251 in
Although the sensor array 251 shows an array of (1) red-light sensing elements 261, (2) green-light sensing elements 262, (3) blue-light sensing elements 263, and (4) infrared sensing elements 264, an intensity value for infrared energy can be determined for each sensing element 261-264 location on the sensor array 251. Similarly, an intensity value for each of the red light, green light, and blue light can be determined for every sensing element 261-264 location on the sensor array 251. Thus, in some embodiments an intensity value can be determined for each type of electromagnetic energy (e.g., red-light, green-light, blue-light, and IR) at each sensing element 261-264 location of the sensor array 251. Various well-known interpolation techniques (e.g., bilinear, bicubic) as well as techniques built into today's devices (e.g., a proprietary image processing technique built into a multispectral camera) can be used to perform a demosaicing process, so that an intensity value can be determined for each type of electromagnetic energy (e.g., red-light, green-light, blue-light, and IR) at each sensing element 261-264 location. After the demosaicing process, a visible image can be generated with input at each visible image pixel from each of the three sensing element colors (RGB), and an infrared image can be generated with the same resolution.
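As a conceptual illustration of the demosaicing of the infrared channel (the actual sensor pattern and any proprietary processing are not specified by this description), the sketch below assumes a hypothetical 2x2 RGB-IR tile with the infrared element at position (1, 1) of each tile and fills in the remaining locations by averaging nearby infrared samples.

```python
import numpy as np

def interpolate_ir(raw_mosaic: np.ndarray) -> np.ndarray:
    # Estimate an infrared intensity value at every sensing-element location.
    # Assumption (for illustration): the IR element occupies position (1, 1)
    # of each 2x2 tile of the mosaic.
    h, w = raw_mosaic.shape
    ir = np.full((h, w), np.nan)
    ir[1::2, 1::2] = raw_mosaic[1::2, 1::2]        # known IR samples
    filled = ir.copy()
    for r in range(h):
        for c in range(w):
            if np.isnan(filled[r, c]):
                # Average the available IR samples in the surrounding 3x3 window.
                window = ir[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
                filled[r, c] = np.nanmean(window)
    return filled
```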
Because a visible image and an infrared image having matching resolutions can be created from the intensity values for each sensing element 261-264 location, the infrared intensity value (also referred to as an infrared pixel) for each sensing element 261-264 location in the infrared image constructed from the demosaicing process can then be used as a switch to control whether the corresponding pixel in the visible image generated by the measurements of the sensor 250 is modified (e.g., replaced or blurred) or not for the video conference. In the following disclosure, the use of infrared detection values as a switch is generally described as a mask (see
Referring to
In some embodiments, the lens 204 may be selected for a desired blur or “bokeh” effect and/or to assist in facilitating the background differentiation methods described herein. For example, in some embodiments, the lens 204 may be of a type commonly used in portrait photography where an aperture of the lens 204 is selected to provide a relatively shallow depth of field so that the one or more conference participants stand out against a blurred background. In embodiments herein, the aperture of the lens 204 may be finely controlled, using the aperture adjustment mechanism 208, to allow for changes to the depth of field and to assist in facilitating the background differentiation methods described below. In other embodiments, the background can be blurred using software, for example software executed by the controller 212 of the camera device 200.
The aperture adjustment mechanism 208 can be used to change the aperture of the lens 204 by restricting the size of the opening having light passing therethrough, e.g., by use of a flexible diaphragm. In some embodiments, the AF system 206 may be used in combination with the aperture adjustment mechanism 208 to respectively focus on the desired portions of a scene and defocus or blur undesired portions of a scene.
The controller 212 is an electronic device that includes a processor 222, memory 224, support circuits 226, input/output devices 228, a video streaming device 230, and a communications device 232. The processor 222 may be any one or combination of a programmable central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or other hardware implementation(s) suitable for performing the methods set forth herein, or portions thereof.
The memory 224, coupled to the processor 222, is non-transitory and represents any non-volatile type of memory of a size suitable for storing one or a combination of an operating system 234, one or more software applications, e.g., software application 236, background differentiation information 238 generated using the methods set forth herein, and one or more replacement backgrounds 240. The background differentiation information 238 can include, for example, information relating to which portions of an image are desired foreground and which portions are undesired background.
Examples of suitable memory that may be used as the memory 224 include readily available memory devices, such as random access memory (RAM), flash memory, a hard disk, or a combination of different hardware devices configured to store data. In some embodiments, the memory 224 includes memory devices external to the controller 212 and in communication therewith. In some embodiments, the software application 236 stored in memory 224 can include instructions which when executed by the processor 222 are configured to perform the portions of the methods described herein that are described as being performed by the camera device 200 or the alternative camera devices 201, 202 described below in reference to
The video streaming device 230 is coupled to the processor 222 and is generally used to encode video data acquired from the sensor 250 in a desired encoding format and at a desired bitrate. Generally, bitrate describes how much video data a video stream contains, where higher resolution, higher frame rates, and lower compression each require an increased bitrate. Typically, the acquired video data is encoded into a desired encoding format, at a desired resolution, and at a desired frame rate. The desired resolution may be about 720p, 1080p, 1440p, 3840p (4K), 7680p (8K), or more for a display device having an aspect ratio of about 4:3, 16:9, or 21:9. The desired frame rate is typically greater than about 30 frames per second (fps), and may be within a range from about 30 fps to about 60 fps or more.
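For context, the rough arithmetic below (illustrative numbers only, not vendor figures) shows why higher resolution and frame rate drive the bitrate up and why encoding is needed before transmission.

```python
# Uncompressed baseline for a 1080p, 30 fps, 8-bit RGB stream.
width, height = 1920, 1080
fps = 30
bits_per_pixel = 24

raw_bps = width * height * fps * bits_per_pixel
print(f"Raw: {raw_bps / 1e6:.0f} Mbit/s")          # ~1493 Mbit/s

# Encoders compress far below the raw rate; the ratios here are illustrative.
for ratio in (100, 300):
    print(f"~{ratio}:1 compression -> {raw_bps / ratio / 1e6:.1f} Mbit/s")
```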
Here, the communications device 232, communicatively coupled to the video streaming device 230, delivers the encoded video data to the user device 110 using a wireless connection, such as WiFi or Bluetooth®, or a wired connection, such as the communication link 113 described above in reference to
In some embodiments, the user device 110 then transmits the video data to a remote video conferencing endpoint, such as the second video conferencing endpoint 102, using the video conferencing software application 120. Typically, the desired encoding format, bit rates, and/or frame rates of the to-be-transmitted video data are established between the controller 212 and the video conferencing software application 120 of the user device 110 before full communication begins therebetween, e.g., by a handshake protocol. In other embodiments, video data is transmitted to a remote video conferencing endpoint(s) using conventional communication devices and protocols. For example, the video data may be transmitted to a remote video conferencing endpoint using a network interface card, Ethernet card, modem, wireless network hardware and/or other conventional computing device communication hardware.
The visible image 130 is formed of pixels. Four exemplary visible pixels 131-134 are shown. These four visible pixels 131-134 include (1) a first visible pixel 131 showing a portion 1 of the user's shirt 51, (2) a second visible pixel 132 showing a portion 2 of the chair 55, (3) a third visible pixel 133 showing a portion 3 of the back wall 75, and (4) a fourth pixel 134 showing a portion 4 of the fourth picture frame 84. For ease of illustration, the pixels shown are larger than those that would be used in an actual image. The locations of these portions 1-4 in the Y-direction and Z-direction are also shown in
With reference to
Referring to
The electromagnetic radiation E directed from the illuminator 270 then reflects off of surfaces in the foreground portion F and the background portion B. The reflected electromagnetic radiation E is then received by the sensor 250. The sensor 250 can detect the intensity of the reflected electromagnetic radiation E across the foreground portion F and the background portion B of the local environment L that are in view of the camera device 200. For example, the intensity of the reflected electromagnetic radiation E can be detected across the local environment L using the infrared sensing elements 264 in the sensor array 251 described above in reference to
The surfaces in the foreground portion F are located substantially closer to the camera device 200 (i.e., closer to the illuminator 270 and the sensor 250) than the surfaces in the background portion B. The relevant distances here that affect the decay of the electromagnetic radiation E are (1) the distance between the surface in the local environment L (e.g., portion 1) and the illuminator 270, and (2) the distance between the surface in the local environment L (e.g., portion 1) and the sensor 250. However, because the distance between the illuminator 270 and the sensor 250 is a minimal distance inside the camera device 200 in this example, the distance discussed below is shortened to the distance between the surface in the local environment L (e.g., portion 1) and the camera device 200.
Referring to
The view of the fourth picture frame 84 is blocked by the third picture frame 83 in the side view of
The intensity of electromagnetic radiation, such as infrared radiation, decays with distance. More specifically, the intensity of electromagnetic radiation falls off with the square of the distance from the source of the electromagnetic radiation (e.g., the illuminator 270). Additionally, with reference to the well-known equation using Planck's constant (E = hv), where E is energy, h is Planck's constant, and v is the frequency of the radiation, it is known that radiation with a lower frequency (v), and thus a longer wavelength, carries less energy (E) than radiation with a higher frequency and shorter wavelength. Thus, forms of electromagnetic energy with longer wavelengths tend to decay at a greater rate over distance compared to electromagnetic energy with shorter wavelengths. For example, because infrared wavelengths are longer than the wavelengths within the visible range, the intensity of the generated infrared electromagnetic radiation decays more over a given distance than electromagnetic radiation within the visible range. Moreover, certain wavelengths are also preferentially absorbed by the medium through which they pass, such as in the video conferencing case where the medium is air and infrared is preferentially absorbed by one or more components within the air when compared to visible light.
The rate at which the intensity of the infrared wavelengths decay is suitable for the distances often encountered during video conferences. These distances can include (1) the distance between the camera device and the user(s) and other objects in the foreground, (2) the distance between the camera device and the background, and (3) the distance between the foreground and the background. The distance between the camera device and the background is the greatest of these three distances and this distance can range from about a few feet to about fifty feet, such as from about three feet to about fifteen feet, such as about five feet.
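Using the idealized square-of-distance decay described above (and ignoring surface reflectivity and the return path), a quick calculation with example distances from this range illustrates how large the resulting intensity difference can be.

```python
# Example distances (assumed for illustration): foreground at 3 ft, background at 15 ft.
d_foreground_ft = 3.0
d_background_ft = 15.0

# Relative illumination reaching the background versus the foreground.
relative = (d_foreground_ft / d_background_ft) ** 2
print(f"Background illumination is ~{relative:.2f}x that of the foreground")
# -> ~0.04x, i.e. roughly 25x weaker, which is the difference the
#    intensity threshold described below exploits.
```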
The illuminator 270 and the sensor 250 of the camera device can be configured so that there is a meaningful decay across distances within these ranges. This meaningful decay can assist the sensor 250 in distinguishing between the foreground and background. Although generally not required, in some embodiments the intensity and/or wavelength of the energy directed by the illuminator 270 can be adjusted to provide for a more substantial decay across the distances of a particular video conference environment. Using well-established techniques for autofocus or low-cost depth sensors, the depth of objects and user(s) in the foreground can be estimated, and the infrared intensity from the illuminator can be adjusted to increase the difference in infrared intensity measurements at the sensor between the foreground and background elements in a given environment. For example, when the user(s) and other objects in the foreground are determined to be further away from the camera device 200 relative to a standard distance (e.g., 5 feet), then the intensity of the infrared energy can be increased relative to the intensity used for the standard distance. Conversely, when the user(s) and other objects in the foreground are determined to be closer to the camera device 200 relative to a standard distance (e.g., 5 feet), then the intensity of the infrared energy can be decreased relative to the intensity used for the standard distance. Similarly, longer wavelengths can be used for a faster decay when the user(s) and other objects in the foreground are determined to be closer to the camera device 200 relative to a standard distance (e.g., 5 feet), and shorter wavelengths can be used for a slower decay when the user(s) and other objects in the foreground are determined to be further from the camera device 200 relative to a standard distance (e.g., 5 feet).
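One simple way such an adjustment could be implemented is sketched below; the distance estimate, the helper name, and the quadratic scaling rule are assumptions made for illustration and are not prescribed by the disclosure.

```python
def adjust_illuminator_power(estimated_foreground_ft: float,
                             standard_ft: float = 5.0,
                             standard_power: float = 1.0) -> float:
    # Scale the emitted power with the square of the estimated foreground distance,
    # mirroring the square-of-distance decay so the infrared return from the
    # foreground stays roughly constant: farther foregrounds get more power,
    # closer foregrounds get less.
    return standard_power * (estimated_foreground_ft / standard_ft) ** 2

# Usage: a foreground estimated at 10 ft would call for ~4x the standard power.
print(adjust_illuminator_power(10.0))   # 4.0
```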
The decay of electromagnetic radiation described above causes the electromagnetic radiation E reflected from the surfaces in the foreground portion F to have a higher intensity when received by the sensor 250 than the intensity of the electromagnetic radiation E reflected from the surfaces in the background portion B. This intensity difference can be seen in an infrared image, such as the image shown in
The white areas 140W correspond to areas of the local environment L from which the sensor 250 receives infrared radiation at an intensity above a specified intensity threshold value. The hatched areas 140H correspond to areas of the local environment L from which the sensor 250 receives infrared radiation at an intensity that is not above the specified intensity threshold value. As described above, this infrared radiation received at the sensor 250, which is used to form the infrared image 140, is primarily (e.g., >90%) infrared radiation that is initially directed from the illuminator 270 and then reflected from surfaces in the local environment L to the sensor 250.
The intensity for each pixel 140W, 140H can generally correspond to measurements (e.g., weighted average measurements) performed by infrared sensing elements 264 on locations of the sensor array 251 that are nearby the corresponding location of the pixel in the infrared image. As described above, an infrared intensity value is determined for each sensing element 261-264 location using algorithms, such as well-known interpolation techniques, so that the infrared image has more pixels (e.g., four times as many) than the number of infrared sensing elements 264 in the sensor array 251. Generally, the intensity measured by each infrared sensing element 264 corresponds to a charge that accumulates on the infrared sensing element 264 during an exposure time period of the infrared sensing elements 264 (e.g., scan rate of pixels within the sensing element). This charge for each infrared sensing element 264 is then converted to a digital infrared intensity value. These digital infrared intensity values can then be used with digital infrared intensity values from surrounding infrared sensing elements 264 (e.g., all the infrared intensity values in a 5×5 square or a 15×15 square on the sensor array 251) in the algorithms mentioned above (e.g., interpolation techniques) to generate the infrared intensity value for each sensing element 261-264 location of the sensor array 251. Having the infrared intensity value for each sensing element 261-264 location on the sensor array 251 creates an infrared intensity value for each pixel in the infrared image.
Because the pixels in the infrared image correspond to the pixels in the visible image generated from the sensor array 251, the pixels in the infrared image can be used as a switch to control whether the pixels in the visible image are to be considered part of the foreground or background. For example, the intensity value for each sensing element 261-264 location can be compared to an intensity threshold value (i.e., also a digital value) stored in memory to determine whether the pixel corresponds to a high-intensity area (i.e., white pixels 140W) or a low-intensity area (i.e., hatched pixels 140H). This intensity threshold value can be adjusted manually by a user to provide the desired separation of the foreground and background with the generation of the white pixels 140W and the hatched pixels 140H. The user can also adjust one or more of the intensity of the energy emitted by the illuminator 270 and the exposure time that the infrared sensing elements are exposed for each measurement, so that the desired separation of the foreground and background is achieved by the generation of the white pixels 140W and the hatched pixels 140H. In some embodiments, the camera device 200 can automatically make adjustments to one or more of the intensity of the energy emitted by the illuminator 270, the intensity threshold value, and the exposure time that the infrared sensing elements 264 are exposed for each measurement, so that the desired separation of the foreground and background can be achieved by the generation of the white pixels 140W and the hatched pixels 140H.
These adjustments of intensity threshold value, intensity emitted by the illuminator 270, and exposure time initiated by the user or the camera device 200 can also be useful when ambient levels of infrared energy change. For example, increases in ambient levels of infrared energy at wavelengths around the wavelength(s) emitted by the illuminator 270 can cause more infrared sensing elements 264 to output measurements that are above the specified intensity threshold, which can cause portions of the background to be incorrectly determined as foreground portions unless an adjustment to one or more of the intensity threshold value, intensity emitted by the illuminator 270, and exposure time is made.
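A minimal sketch of this thresholding and of one possible automatic adjustment is shown below; the percentile-based rule is only an assumed example of how a camera device might compensate for a uniform rise in ambient infrared, not the disclosed algorithm.

```python
import numpy as np

def classify_foreground(ir_image: np.ndarray, threshold: float) -> np.ndarray:
    # True where the per-pixel infrared intensity exceeds the threshold
    # (candidate foreground); False elsewhere (candidate background).
    return ir_image > threshold

def adapt_threshold(ir_image: np.ndarray, foreground_fraction: float = 0.35) -> float:
    # Assumed heuristic: choose the threshold as the intensity percentile that keeps
    # an expected fraction of the frame classified as foreground, so a uniform
    # increase in ambient infrared does not pull background pixels above the cutoff.
    return float(np.quantile(ir_image, 1.0 - foreground_fraction))
```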
The white areas 140W for which infrared radiation is received at the sensor 250 at intensities above the infrared intensity threshold value are designated as the desired foreground for the video conference while the hatched areas 140H for which infrared radiation is received at the sensor 250 at intensities that are not above the infrared intensity threshold value are designated as the undesired background for the video conference. In some embodiments, the intensity threshold value for determining what portion of the video conference environment to include in the foreground, such as the user(s), can also be adjusted to account for varying distances between the user(s) and the camera device 200 as well as how close the background is to the user(s). Furthermore, in some embodiments, the intensity and/or the wavelength of the electromagnetic energy emitted from the illuminator 270 can be adjusted to account for changes in the distance between the user(s) and the camera device 200 as well as how close the background is to the user(s).
Furthermore, the infrared image 140 from
As mentioned above, the distance between a surface in the local environment L to the camera device 200 has a large effect on whether infrared radiation reflected from that surface to the sensor 250 is above the infrared intensity threshold value being used to distinguish between foreground and background surfaces. As shown in
On the other hand, the distance D3 from the portion 3 of the back wall 75 is substantially further from the camera device 200 relative to first and second distances D1, D2. Due to this further distance, the intensity of the infrared radiation reflected from the portion 3 of the back wall 75 is substantially below the intensity threshold value for this example. The portion 4 of the fourth picture frame 84 is located within a few inches of the portion 3 of the back wall 75, and thus the distance between the portion 4 of the fourth picture frame 84 to the camera device 200 is within a few inches of D3. Therefore, the intensity of the infrared radiation reflected from the portion 4 is also substantially below the intensity threshold value for this example. Due to the lower intensities as received by the sensor 250 of infrared radiation reflected from the portions 3, 4, the corresponding pixels 143, 144 for these portions 3, 4 are both shown as hatched in the hatched areas 140H of the infrared image 140 of
The method 4000 is described as being executed in a data acquisition portion 4000A and an image processing portion 4000B. In the data acquisition portion 4000A visible image data and infrared image data are generated based on sensor detections, and a mask is generated from the infrared image data. In the image processing portion 4000B, the mask generated in the data acquisition portion 4000A is applied to modify the visible image data (e.g., background replacement) acquired in the data acquisition portion 4000A, and a modified image is transmitted as part of the video conference.
Block 4002 begins the data acquisition portion 4000A. At block 4002, the illuminator 270 of the camera device 200 illuminates the local environment L with the electromagnetic radiation (i.e., radiation within the second range of wavelengths) at the one or more emitted wavelengths provided from the illuminator 270, which, while not intending to be limiting to the disclosure provided herein, for simplicity of discussion is also sometimes referred to herein as the infrared radiation E illustrated in
At block 4004, the sensor 250 is exposed to receive visible light (i.e., radiation within the first range of wavelengths) and the electromagnetic radiation E (i.e., radiation within the second range of wavelengths) as shown in
The visible image data generated from detections of the RGB sensing elements 261-263 at block 4004 can be generated in a format that can subsequently be used to generate a visible image, such as a visible image formed of pixels. Here, the visible image data captured by the RGB sensing elements 261-263 corresponds to the visible image 130 from
At block 4006, a mask is generated based on the infrared image data generated at block 4004. After receiving the infrared image data from the sensor 250 at block 4004, the controller 212 can generate the mask from the infrared image data as part of executing the software application 236. The mask generated here separates the infrared image data into a first group of high-intensity detections (see white pixels 140W of the foreground from
As described below, the mask is applied to control which pixels in the visible image 130 of
From a data standpoint, the mask generated here can be a single bit for each pixel location, such as a “one” for the location of each high-intensity white pixel 140W and a “zero” for the location of each low-intensity hatched pixel 140H. As discussed below in block 4008, the location of the “ones” in the mask function to “mask on” the pixels from the visible image 130 corresponding to the white pixels 140W of the infrared image 140. Conversely, the location of the “zeroes” in the mask function to “mask off” the pixels from the visible image 130 corresponding to the hatched pixels 140H (i.e., the pixels showing the undesired background) of the infrared image 140. The completion of block 4006 is the end of the data acquisition portion 4000A.
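In terms of data, the mask described above could be represented as shown in this sketch (numpy is assumed; one byte per pixel as stored here, or literally one bit per pixel if packed).

```python
import numpy as np

ir_intensity = np.random.rand(1080, 1920)   # per-pixel IR values (assumed resolution)
threshold = 0.5                             # specified intensity threshold value

# 1 = "mask on" (high-intensity white pixel, desired foreground),
# 0 = "mask off" (low-intensity hatched pixel, undesired background).
mask = (ir_intensity > threshold).astype(np.uint8)

packed = np.packbits(mask)                  # optional: store a single bit per pixel
```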
In some embodiments, separating the foreground portion F from the background portion B includes using an edge detection method to detect the peripheral edges of objects in the foreground portion F, and thus define the boundaries between the foreground portion F and the background portion B. Typically, an edge is defined as a boundary between two regions having distinct intensity level properties, i.e., pixels where the brightness or intensity thereof changes abruptly across the boundary region. Edge detection algorithms may be used to locate these abrupt changes in the detected pixel intensity within the infrared image 140. For example, at least portions of block 4006 are performed using one or more edge detection algorithms, e.g., by use of a binary map or a Laplacian operator. In some embodiments, at least portions of block 4006 are performed using one or more edge detection algorithms to determine the edges between the foreground portion F and the background portion B of the local environment L and/or to filter the background portion B from the local environment L. For example, in some embodiments the edge detection algorithm uses a binary mask (morphological image processing), a differentiation operator, such as the Prewitt, Sobel, or Kayyali operators, or a transform, such as a discrete Fourier transform or a Laplacian transform. The one or more edge detection algorithms may be stored in the memory of the camera device 200 as a software application 236.
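As one concrete example of the operators named above, the sketch below applies a Sobel gradient to the infrared image and thresholds the gradient magnitude to mark the boundary; scipy is assumed to be available, and the threshold value is illustrative.

```python
import numpy as np
from scipy import ndimage

def detect_boundary(ir_image: np.ndarray, edge_threshold: float) -> np.ndarray:
    # Sobel gradients along each axis; a large gradient magnitude marks pixels where
    # the infrared intensity changes abruptly, i.e. the likely edge between the
    # foreground portion F and the background portion B.
    gx = ndimage.sobel(ir_image.astype(float), axis=1)
    gy = ndimage.sobel(ir_image.astype(float), axis=0)
    return np.hypot(gx, gy) > edge_threshold
```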
Block 4008 begins the image processing portion 4000B. At block 4008, the mask generated at block 4006 is applied to the visible image data (i.e., visible image 130) generated at block 4004 to generate a first subset of visible image data. The controller 212 can apply the mask to the visible image data as part of the execution of the software application 236. The first subset of visible image data generated at block 4008 corresponds to the pixels of visible image 130 to include without any modification in an image for the video conference (i.e., the pixels of the desired foreground). Because application of the mask results in masking-on the pixels in visible image 130 corresponding to the white pixels 140W in infrared image 140 as discussed above, applying the mask results in generating a first subset of visible image data corresponding to the image shown in
The process described above of masking on the white pixels 140W and masking off the hatched pixels 140H is one process for controlling which pixels from the visible image are to be included in the video conference without any modification. In other embodiments, the white pixels 140W can be masked off and the hatched pixels 140H can be masked on, and the corresponding logic can be reversed to arrive at the same result so that the visible pixels corresponding to the white pixels 140W end up in the video conference without any modification and the visible pixels corresponding to the hatched pixels 140H are replaced or modified for the video conference. Furthermore, in some embodiments, it may simply be sufficient to use the infrared image data to identify only the low-intensity areas (i.e., the hatched pixels 140H) or only the high-intensity areas (i.e., the white pixels 140W) and then the visible image can be modified based on this single identification.
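The masking-on and masking-off described above could look like the following sketch, in which the mask is simply broadcast across the color channels of the visible image (numpy assumed, placeholder arrays).

```python
import numpy as np

visible = np.zeros((1080, 1920, 3), dtype=np.uint8)   # placeholder visible image 130
mask = np.zeros((1080, 1920), dtype=np.uint8)         # 1 = white pixel, 0 = hatched pixel

# "Mask on": the first subset of visible image data (desired foreground, unmodified).
foreground_subset = visible * mask[..., np.newaxis]

# Reversed logic: the pixels to be replaced or modified (undesired background).
background_subset = visible * (1 - mask)[..., np.newaxis]
```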
On the other hand, with reference to
In the method 4000, only one electromagnetic radiation intensity threshold value is used (e.g., one infrared intensity threshold value). Thus, all electromagnetic radiation intensity detections fall into one of two groups, with one group being greater than the threshold value and the second being less than or equal to this one electromagnetic radiation intensity threshold value. In this method 4000, the single intensity threshold value is used to place each of the detections into one of the two groups (the foreground or the background), with the first subset of visible image data corresponding to the foreground and the second subset of visible image data corresponding to the background. In other embodiments, two or more intensity threshold values could be used. For example, if two intensity threshold values were used, portions of the video conference environment with intensities greater than the higher threshold value could be designated as a foreground while portions of the video conference environment with intensities between the higher and lower threshold values could be designated as a middle ground, and portions of the video conference environment with intensities below the lower threshold value could be designated as the background. The corresponding portions of the visible images could then be modified based on these designations, such as not modifying the foreground, blurring the middle ground, and replacing the background. Furthermore, in some embodiments the intensity threshold value(s) could be used in different ways. For example, in one embodiment using a single intensity threshold value that separates the foreground from the background, the camera device could alternatively modify or replace the foreground while leaving the background unmodified.
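A sketch of the two-threshold variant described above is given below; the label values and the threshold ordering (low less than high) are assumptions made for illustration.

```python
import numpy as np

def classify_three_way(ir_image: np.ndarray, low: float, high: float) -> np.ndarray:
    # 0 = background (at or below low), 1 = middle ground (between low and high),
    # 2 = foreground (above high); assumes low < high.
    labels = np.zeros(ir_image.shape, dtype=np.uint8)
    labels[ir_image > low] = 1
    labels[ir_image > high] = 2
    return labels
```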
At block 4010, the controller 212 determines whether background replacement or background modification has been selected, for example as part of the execution of software application 236. This selection can be made by the user, for example by interacting with the user device 110, which is in communication with the camera device 200 or by the user directly interacting with the camera device 200. If background replacement is selected, then the method 4000 proceeds to block 4012. If background modification is selected, then the method 4000 proceeds to block 4016.
At block 4012, when background replacement is selected, a replacement background image is retrieved from memory 224. The controller 212 can retrieve the replacement background image from the replacement backgrounds 240 of the memory 224 as part of the execution of the software application 236.
At block 4014, a composite image 190 shown in
On the other hand, when background modification is selected instead of background replacement, the method 4000 proceeds from block 4010 to block 4016. At block 4016, a modified visible image 130B is generated as shown in
In the modified visible image 130B of
After block 4014 for background replacement or block 4016 for background modification is executed, the method 4000 proceeds to block 4018. At block 4018, the composite image 190 from block 4014 or the modified visible image 130B from block 4016 can be transmitted by the camera device 200 to the user device 110 and ultimately to the second video conferencing endpoint 102 (
The method 6000 is described as being executed by operations included in a data acquisition portion 6000A and by operations included in the image processing portion 4000B from
Block 6002 begins the data acquisition portion 6000A. At block 6002, the visible light sensor 255 is exposed to receive visible light (i.e., radiation within the first range of wavelengths) that corresponds to the visible image 130 shown in
At block 6004, with the camera device 201 having replaced the camera device 200 in
At block 6006, with the camera device 201 having replaced the camera device 200 in
In some embodiments, the visible image data generated at block 6002 is generated at a time in which electromagnetic radiation from the illuminator 270 is not directed at the local environment L. Although electromagnetic radiation outside of the visible spectrum is generally invisible to people, electromagnetic radiation can cause color changes and other distortions to visible images generated by cameras, such as the camera device 201. Thus, in some embodiments, the camera device 201 can alternate between time periods of (1) acquiring visible image data at block 6002 and (2) directing electromagnetic radiation from the illuminator 270 at block 6004 and acquiring electromagnetic radiation image data at block 6006. For example, in one embodiment the camera device 201 can switch back and forth between generating visible images and electromagnetic radiation images for every other frame acquired by the camera device 201. This technique of obtaining visible images when the illuminator 270 is not active and switching between generating visible image data and electromagnetic radiation image data can also be applied when executing similar methods using other camera devices, such as the camera device 200.
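The alternating acquisition described above can be sketched, for illustration only, as a simple capture loop; the illuminator class and the read_frame callable below are hypothetical stand-ins rather than the illuminator 270 or the camera device 201 interfaces.

```python
class HypotheticalIlluminator:   # stand-in only; not the illuminator 270 API
    def on(self): pass
    def off(self): pass

def alternating_capture(read_frame, illuminator, n_pairs):
    """read_frame() is a hypothetical callable returning one raw frame from the sensor."""
    pairs = []
    for _ in range(n_pairs):
        illuminator.off()
        visible = read_frame()   # even frame: visible image data (cf. block 6002), illuminator dark
        illuminator.on()
        ir = read_frame()        # odd frame: IR image data (cf. blocks 6004/6006), scene illuminated
        pairs.append((visible, ir))
    illuminator.off()
    return pairs

# Example: capture three visible/IR frame pairs with dummy frames.
frames = alternating_capture(lambda: object(), HypotheticalIlluminator(), 3)
```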
At block 6008, a mask is generated based on the electromagnetic radiation image data generated at block 6006. After receiving the electromagnetic radiation image data from the electromagnetic radiation sensor 256 at block 6006, the controller 212 can generate the mask from the electromagnetic radiation image data as part of executing the software application 236. Like the method 4000 described above, the mask generated here separates the electromagnetic radiation image data into a first group of high-intensity detections (see white pixels 140W of the foreground from FIG. 3B) above a specified electromagnetic radiation intensity threshold value and into a second group of low-intensity detections (see hatched pixels 140H of the background from
A visible representation of the mask generated at block 6008 is shown in the electromagnetic radiation image 140 of
The remainder of the method 6000 is directed to the image processing portion 4000B which is the same image processing portion 4000B from
Block 4008 begins the image processing portion 4000B. At block 4008, the mask generated at block 6008 is applied to the visible image data (i.e., visible image 130) generated at block 6002 to generate the first subset of visible image data. Because the same mask is applied on the same set of visible image data, the application of the mask here produces the same result as the application of the mask in block 4008 of the method 4000. This same result is the first subset of visible image data which is visibly represented by the modified visible image 130A shown in
After block 4008, the controller 212 executes block 4010 to determine whether to perform background replacement or background modification, for example based on user selection. For background replacement, the controller 212 can execute blocks 4012 and 4014 to use the replacement background image 180 and the subset of visible image data generated at block 4008 by application of the mask (i.e., the modified visible image 130A of
After block 4014 for background replacement or block 4016 for background modification, the method 6000 proceeds to block 4018. At block 4018, the composite image 190 from block 4014 or the modified visible image 130B from block 4016 can be transmitted by the camera device 201 to the user device 110 and ultimately to the second video conferencing endpoint 102 (
The camera device 202 additionally includes a near-infrared bandpass filter 258, such as an 850 nm bandpass filter. The bandpass filter 258 can be configured to allow a band of electromagnetic radiation (e.g., 750-950 nm) centered around a wavelength (e.g., 850 nm) to be received by the visible light sensor 257 without allowing electromagnetic radiation outside of the band (e.g., <750 nm or >950 nm) to be received by the visible light sensor 257 when electromagnetic radiation is directed to the visible light sensor 257 through the bandpass filter 258. The camera device 202 can be configured to switch between (1) only allowing electromagnetic radiation that does not pass through the bandpass filter 258 to be received by the visible light sensor 257 and (2) only allowing electromagnetic radiation that does pass through the bandpass filter 258 to be received by the visible light sensor 257. For example, in one embodiment a component in the camera device 202 is rotated to control whether or not the electromagnetic energy reaches the visible light sensor 257 through the bandpass filter 258. The rotation can be continuous, for example, so that each frame captured by the camera device 202 switches back and forth between an image generated from electromagnetic energy that does not pass through the bandpass filter 258 and an image generated from electromagnetic energy that passes through the bandpass filter 258. In one embodiment, the bandpass filter 258 can be the component that is rotated to control whether or not the electromagnetic energy that reaches the visible light sensor 257 passes through the bandpass filter 258.
The visible light sensor 257 includes an array of sensing elements (not shown), such as RGB sensing elements. Because these same sensing elements are used to generate both the visible light images and the electromagnetic radiation (e.g., infrared) images, the visible light images and the electromagnetic radiation images generated from the detections of the visible light sensor 257 have the same resolution, allowing pixels in the visible image generated from the visible light sensor 257 to be replaced or adjusted based on corresponding pixels from the electromagnetic radiation image generated by the visible light sensor 257 when the visible light sensor 257 is exposed to radiation transmitted through the bandpass filter 258.
The method 7000 is described as being executed by operations included in a data acquisition portion 7000A and by operations included in the image processing portion 4000B from
Block 7002 begins the data acquisition portion 7000A. At block 7002, the visible light sensor 257 is exposed to receive visible light (i.e., radiation within the first range of wavelengths) that corresponds to the visible image 130 shown in
At block 7004, with the camera device 202 having replaced the camera device 200 in
At block 7006, with the camera device 202 having replaced the camera device 200 in
As similarly described above in reference to the method 6000, in some embodiments, the visible image data generated at block 7002 is generated at a time in which electromagnetic radiation from the illuminator 270 is not directed at the local environment L. Thus, in some embodiments, the camera device 202, like the camera device 201 described above, can alternate between time periods of (1) acquiring visible image data at block 7002 and (2) directing electromagnetic radiation from the illuminator 270 at block 7004 and acquiring electromagnetic radiation image data at block 7006. For example, in one embodiment the camera device 202 can switch back and forth between generating visible images and electromagnetic radiation images for every other frame acquired by the camera device 202.
At block 7008, a mask is generated based on the electromagnetic radiation image data generated at block 7006. After receiving the electromagnetic radiation image data from the visible light sensor 257 at block 7006, the controller 212 can generate the mask from the electromagnetic radiation image data as part of executing the software application 236. Like the methods 4000 and 6000 described above, the mask generated here separates the electromagnetic radiation image data into a first group of high-intensity detections (see white pixels 140W of the foreground from
A visible representation of the mask generated at block 7008 is shown in the infrared image 140 of
The remainder of the method 7000 is directed to the image processing portion 4000B which is the same image processing portion 4000B from
Block 4008 begins the image processing portion 4000B. At block 4008, the mask generated at block 7008 is applied to the visible image data (i.e., visible image 130) generated at block 7002 to generate the first subset of visible image data. Because the same mask is applied on the same set of visible image data here, the application of the mask here produces the same result as the application of the mask in block 4008 of the method 4000. This same result is the first subset of visible image data which is visibly represented by the modified visible image 130A shown in
After block 4008, the controller 212 executes block 4010 to determine whether to perform background replacement or background modification, for example based on user selection. For background replacement, the controller 212 can execute blocks 4012 and 4014 to use the replacement background image 180 and the subset of visible image data generated at block 4008 by application of the mask (i.e., the modified visible image 130A of
After block 4014 for background replacement or block 4016 for background modification, the method 7000 proceeds to block 4018. At block 4018, the composite image 190 from block 4014 or the modified visible image 130B from block 4016 can be transmitted by the camera device 202 to the user device 110 and ultimately to the second video conferencing endpoint 102 (
The camera device 200 uses the hardware and software components shown in
For example, in the method 8000 the controller 212 can obtain detection values from the sensing elements 261-264 of a group of lines in the sensor array 251 (e.g., ten horizontal rows) and begin the process of analyzing the visible image data and infrared image data from this group of lines to begin the background replacement or modification process before or as the values from the next group of lines (e.g., the next ten horizontal rows) are obtained from the sensor array 251. This process of obtaining visible image data and infrared image data detection values from a group of lines in the sensor array 251 and performing the background replacement or modification can then be repeated until the last group of lines (e.g., last ten horizontal rows) in the sensor array 251 is read. As mentioned above, although
The method 8000 begins the same as the method 4000 with an illumination of the local environment L. At block 8002, the illuminator 270 of the camera device 200 illuminates the local environment L with the electromagnetic radiation E at the one or more emitted wavelengths provided from the illuminator 270, which, while not intending to be limiting to the disclosure provided herein, for simplicity of discussion is also sometimes referred to herein as the infrared radiation E illustrated in
At block 8004, the sensor 250 is exposed to receive visible light (i.e., radiation within the first range of wavelengths) and the electromagnetic radiation E (i.e., radiation within the second range of wavelengths) as shown in
At block 8006, visible image data and infrared image data are generated from the detections of the respective RGB and IR sensing elements 261-264 (see
In the method 8000, all of the visible image data corresponding to the visible image 130 is obtained as execution of block 8006 is repeated as described in fuller detail below. Similarly, infrared image data is generated for all of the rows in the sensor array 251 as execution of block 8006 is repeated as described in fuller detail below. As an example, for a sensor configured to generate 1080 p images, block 8006 can be repeated 108 times for each exposure of the sensor 250 (i.e., for each execution of block 8004) when block 8006 is executed on a portion of the sensor array 251 having a size of ten horizontal rows for each repetition of block 8006.
Generating the visible image data and the infrared image data can include generating intensity values for each type of electromagnetic energy (i.e., RGB and IR) for each sensing element 261-264 location as described above. Algorithms, such as the interpolation techniques referenced above, can be used to generate these intensity values for each sensing element 261-264 location of the sensor array 251. Reading ten rows of sensing elements 261-264 at a time is given as an example here, but more or fewer rows of sensing elements 261-264 can be read during each execution of block 8006. In some embodiments, the number of rows of sensing elements 261-264 read during block 8006 can correspond or be related to the number of rows used in one or more of the interpolation techniques used to generate the RGB intensity values and the infrared intensity values for each sensing element 261-264 location on the sensor array 251. For example, if the process of generating an infrared intensity value for each sensing element 261-264 location on the sensor array 251 involves performing an interpolation technique using a 10×10 square of nearest neighbors of sensing elements 261-264, then it can be useful to read ten rows or some multiple of ten rows at each execution of block 8006.
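As one hedged example of such an interpolation (not necessarily the technique used by the camera device 200), an infrared intensity value can be estimated at every sensing-element location by averaging the IR samples within a k-by-k neighborhood; the sketch below uses NumPy and SciPy with a hypothetical map of which array locations hold IR sensing elements.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Hedged sketch of one simple interpolation: estimate an IR intensity at every
# sensing-element location by averaging the IR samples in a k x k neighborhood
# (k=10 echoes the 10x10 nearest-neighbor example above).
def interpolate_ir(raw, ir_element_mask, k=10):
    """raw: 2D array of sensor readings; ir_element_mask: True where the element is an IR element."""
    ir_samples = np.where(ir_element_mask, raw, 0.0)
    ir_sum = uniform_filter(ir_samples, size=k, mode="nearest") * (k * k)
    ir_count = uniform_filter(ir_element_mask.astype(float), size=k, mode="nearest") * (k * k)
    return ir_sum / np.maximum(ir_count, 1.0)   # mean of the IR samples near each location
```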
The following describes generating a mask from the infrared image data generated at block 8006, and applying the mask to the visible image data generated at block 8006, but in some embodiments, the process may need to be somewhat staggered, so that there is enough data generated at block 8006 to perform the interpolation techniques used to generate the RGB and IR intensity value for each sensing element 261-264 location on the sensor array 251. For example, in one embodiment, it may be useful to perform block 8006 one or more times after the initial execution of block 8006 for each exposure of the sensor 250 at block 8004 before proceeding to block 8008 to ensure that there is enough data to perform the interpolation technique being used and to ensure there is enough data to perform the blocks in the method 8000 following block 8006. For example, if ten rows of the sensor array 251 are read during an execution of block 8006 and the interpolation technique being used is performed for each sensing element 261-264 location by using the five nearest rows, then it can be beneficial to perform block 8006 twice before proceeding. By performing block 8006 twice, there will be enough data to perform the interpolation technique on the tenth row in the set of ten rows, and the intensity values can be generated for all of the sensing element 261-264 locations in the first ten rows before proceeding to the initial execution of block 8008.
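The staggering described above can be illustrated with the following back-of-the-envelope sketch, which assumes ten-row chunks and an interpolation that needs rows within five rows of each location; the helper name is hypothetical.

```python
# Hedged sketch of the staggered schedule: with 10-row chunks and an interpolation
# that needs rows within +/-5 of each location, a chunk can only be fully
# interpolated once the following chunk has also been read.
def rows_ready_for_interpolation(rows_read, halo=5):
    """Number of rows whose +/-halo neighborhood has been fully read."""
    return max(0, rows_read - halo)

chunk = 10
rows_read = 0
for step in range(1, 4):
    rows_read += chunk
    print(f"after read #{step}: {rows_read} rows read, "
          f"{rows_ready_for_interpolation(rows_read)} rows ready for the mask")
# after read #1: 10 rows read, 5 rows ready  -> not yet enough for the first 10-row mask
# after read #2: 20 rows read, 15 rows ready -> the first 10-row chunk can proceed to block 8008
```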
At block 8008, a mask is generated from the infrared image data generated from the most recent execution of block 8006, where the generated infrared image data includes an infrared intensity value for each sensing element 261-264. The controller 212 can generate the mask from the infrared image data generated at block 8006 as part of executing the software application 236. On the initial execution of block 8008, the initial mask can correspond to mask 811 shown in
At block 8010, the mask generated at the most recent execution of block 8008 is applied to the visible image data generated at the most recent execution of block 8006, which is the visible image data generated for each sensing element 261-264 location that has been determined from the interpolation technique being used. For example, on the initial execution of block 8010, the mask 811 (
At block 8012, the controller 212 determines whether background replacement or background modification has been selected, for example as part of the execution of software application 236. This selection can be made by the user, for example by interacting with the user device 110, which is in communication with the camera device 200 or by the user directly interacting with the camera device 200. If background replacement is selected, then the method 8000 proceeds to block 8014. If background modification is selected, then the method 8000 proceeds to block 8018.
At block 8014, when background replacement is selected, a replacement background image is retrieved from memory 224. The controller 212 can retrieve the replacement background image from the replacement backgrounds 240 of the memory 224 as part of the execution of the software application 236. As introduced above,
At block 8016, a section of the composite image 190 shown in
Subsequent applications of other masks during repeated executions of block 8016 as additional rows of the sensor array 251 are read can result in sections being formed in the composite image 190 that include pixels from the visible image data captured by the camera device 200 (i.e., data from the first subset of visible image data) and/or pixels from the replacement background image 180. For example, subsequent executions of block 8016 can form sections 842 and 843 as shown in
On the other hand, when background modification is selected instead of background replacement, the method 8000 proceeds from block 8012 to block 8018. At block 8018, a section of the modified visible image 130B is generated as shown in
After block 8016 for background replacement or block 8018 for background modification is executed, the method 8000 proceeds to block 8020. At block 8020, the controller 212 determines whether detection values from the end of the sensor array 251 were generated during the most recent execution of block 8006. If detection values were not obtained from the end of the sensor array 251 during the last execution of block 8006, then the method 8000 repeats the execution of blocks 8006-8020, including execution of blocks 8014 and 8016 for background replacement or block 8018 for background modification. As used herein, execution of blocks 8006-8020 refers to completing the execution of each of blocks 8006-8012 and 8020 and either (1) blocks 8014 and 8016 for background replacement or (2) block 8018 for background modification. Repeating blocks 8006-8020 until the end of the sensor array 251 is reached results in either the formation of the composite image 190 for background replacement or the formation of the modified visible image 130B for background modification. After the formation of the composite image 190 or the modified visible image 130B, a frame for the video conference is ready to be transmitted as part of the video conference. When detection values were obtained from the end of the sensor array 251 during the last execution of block 8006, the method 8000 proceeds to block 8022.
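For illustration, the repeated execution of blocks 8006-8020 can be sketched as a chunked loop over the rows of the sensor array 251; the reader callable, the threshold value, and the darkening used as the background modification below are hypothetical placeholders rather than the disclosed implementation.

```python
import numpy as np

# High-level sketch only (hypothetical helper names; not the software application 236):
# read the sensor a few rows at a time and perform replacement/modification per chunk.
def process_exposure(read_rows, total_rows, chunk, threshold, replacement, modify_background):
    """read_rows(start, count) -> (visible_chunk, ir_chunk) is a hypothetical reader."""
    sections = []
    for start in range(0, total_rows, chunk):                  # repeated block 8006
        visible, ir = read_rows(start, min(chunk, total_rows - start))
        mask = ir > threshold                                   # cf. block 8008: per-chunk mask
        if modify_background:                                   # cf. block 8018: placeholder modification (darken)
            section = np.where(mask[..., None], visible, visible // 2)
        else:                                                   # cf. blocks 8014/8016: replace background
            section = np.where(mask[..., None], visible,
                               replacement[start:start + visible.shape[0]])
        sections.append(section)                                # one section of image 190 or 130B
    return np.concatenate(sections, axis=0)                     # full frame ready (cf. block 8022)
```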
At block 8022, the composite image 190 or the modified visible image 130B can be transmitted by the camera device 200 to the user device 110 and ultimately to the second video conferencing endpoint 102 (
Using the method 8000, the camera device 200 can transmit a video frame for a video conference more quickly than other methods, which perform background replacement or modification only after data corresponding to an entire visible image has been obtained, such as the method 4000 described above. The method 8000 can begin the background replacement or modification process when a relatively small amount of the visible image data and electromagnetic image data from the sensor 250 has been obtained. For example, in some embodiments the background replacement or modification process can begin when the proportion of data that has been obtained from the sensor array 251 for a single exposure of the sensor array 251 is from about 1% to about 5%, from about 0.1% to about 1%, or even less than 0.1%. For example, for a sensor configured to generate 1080 p images using a method in which ten horizontal rows of the sensor array 251 are obtained during each repetition, such as the method 8000 described above, the background replacement or modification process can begin when detection values from only 10 of the 1080 horizontal rows of the sensor array have been obtained, representing approximately 0.93% of the horizontal rows of the sensor array.
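The proportion quoted above can be checked with simple arithmetic (assuming a 1080-row sensor array read ten rows at a time):

```python
# Worked check of the proportion quoted above.
rows_per_chunk = 10
total_rows = 1080
proportion = rows_per_chunk / total_rows
print(f"{proportion:.2%}")   # ~0.93% of the rows are read before processing can begin
```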
Although the method 8000 is described as being performed by repeating blocks 8006-8020 in a sequential manner, this is largely for ease of description and is not meant to be limiting in any way. For example, the generation of the detection values for the visible image data and infrared image data for the next portion of the sensor array 251 (e.g., the next ten horizontal rows) can begin as soon as the generation of the detection values for the visible image data and infrared image data for the previous portion of the sensor array 251 (e.g., the previous ten horizontal rows) is completed. Thus, there is no need to wait for the execution of any of blocks 8008-8020 on the previous portion before executing block 8006 again, and some portions of the method 8000 can be performed in parallel with each other.
Although portions of the method 8000 can be performed in parallel, much of the background replacement or modification can be performed before large portions of the sensor array 251 are read for a given exposure of the sensor 250. The following is an illustrative example of the timing for when different portions of the method 8000 can be performed. During a first time period, the sensor 250 is exposed to the visible light (electromagnetic radiation in the first range of wavelengths) and electromagnetic energy E (electromagnetic radiation in the second range of wavelengths) at block 8004. During a second time period occurring after the first time period, a first set of visible image data can be generated, for example at block 8006, in response to the electromagnetic radiation received in the first range of wavelengths on a first portion of the sensor array 251 (e.g., the first ten rows of the sensor array 251) during the first time period. Also during the second time period, a first set of electromagnetic image data can be generated, for example at block 8006, in response to the electromagnetic radiation received in the second range of wavelengths on the first portion of the sensor array 251 during the first time period, wherein the first set of electromagnetic image data includes information relating to the intensities (e.g., intensity values) of the electromagnetic energy in the second range of wavelengths received at the first portion of the sensor array 251 during the first time period. Then, also during the second time period, at least some of the first set of visible image data generated during the second time period can be replaced or modified, for example during execution of blocks 8014 and 8016 or of block 8018, based on the first set of electromagnetic image data generated during the second time period.
Then, during a third time period occurring after the second time period, a second set of visible image data can be generated, for example during a repeated execution of block 8006, from a second portion of the sensor array 251 (e.g., rows 31-40 in the sensor array 251) in response to the electromagnetic radiation received in the first range of wavelengths on the second portion of the sensor array 251 during the first time period (i.e., the same execution of block 8004 used to generate the first set of visible image data). Also during the third time period, a second set of electromagnetic image data can be generated in response to the electromagnetic radiation received in the second range of wavelengths on the second portion of the sensor array 251 during the first time period, wherein the second set of electromagnetic image data includes information relating to the intensities (e.g., intensity values) of the electromagnetic energy in the second range of wavelengths received at the second portion of the sensor array 251 during the first time period. Then, also during the third time period, at least some of the second set of visible image data generated during the third time period can be replaced or modified, for example during repeated execution of blocks 8014 and 8016 or of block 8018, based on the second set of electromagnetic image data generated during the third time period.
Continuing the example from above, during a fourth time period occurring after the third time period, a third set of visible image data can be generated, for example during a repeated execution of block 8006, from a third portion of the sensor array 251 (e.g., rows 61-70 in the sensor array 251) in response to the electromagnetic radiation received in the first range of wavelengths on the third portion of the sensor array 251 during the first time period (i.e., the same execution of block 8004 used to generate the first set of visible image data). This process can continue to be repeated during subsequent periods of time for a single exposure of the sensor array 251 until the end of the sensor array 251 is read and the background is replaced or modified as appropriate. A key feature is that large portions of the sensor array 251 remain to be read after the background replacement or modification process has started, so that by the time the end of the sensor array 251 is read, the remaining background replacement or modification is substantially reduced, which enables a frame for the video conference including the replaced or modified background to be quickly transmitted as part of the video conference.
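The overlap of reading and processing described in this example can be sketched, purely for illustration, as a two-thread pipeline in which a reader thread delivers chunks of rows while a worker masks and composites the chunks already delivered; the read_chunk and process_chunk callables below are hypothetical.

```python
import queue
import threading

# Hedged sketch of the overlap described above: one thread keeps reading chunks of
# rows from the (hypothetical) sensor while the main thread masks and composites the
# chunks already read, so most of the processing finishes before the last rows arrive.
def pipelined_frame(read_chunk, process_chunk, num_chunks):
    q = queue.Queue()

    def reader():
        for i in range(num_chunks):
            q.put(read_chunk(i))      # e.g., rows 10*i .. 10*i+9 of the sensor array
        q.put(None)                   # end-of-array sentinel

    sections = []
    t = threading.Thread(target=reader)
    t.start()
    while (chunk := q.get()) is not None:
        sections.append(process_chunk(chunk))   # mask, then replace/modify this section
    t.join()
    return sections
```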
Beginning the background replacement or modification process when only a relatively small amount of the data from the sensor array 251 has been obtained (e.g., <1%) can reduce the latency of the video conferencing system 100 when transmitting recently captured images for the video conference. Reducing latency helps a video conference feel more like a conversation between people in the same room. As mentioned above, conventional background replacement and modification processes wait for all of the visible image data to be read from the corresponding image sensor before performing the background replacement or modification process, which can result in a delay lasting the duration of a typical frame of the video conference or longer, often producing a lag that is noticeable to the user(s). Conversely, the method 8000 can significantly reduce the duration of this delay by beginning the process of background replacement or modification when only a relatively small amount of the visible image data has been obtained, such as <1%. For example, in some embodiments, the delay caused by the background replacement or modification performed by the method 8000 can be reduced to a small portion of the duration of the frame, such as a delay that is from about 5% to about 20% of the duration of the frame. Thus, compared to the frame-length delay of conventional processes, the delay caused by the background replacement or modification process can be reduced by a factor of about 5 to a factor of about 20.
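As a rough worked example (assuming a 30 frames-per-second video conference, a rate not specified by the disclosure), the delay figures above translate to the following:

```python
# Rough numbers only, assuming a 30 fps video conference.
frame_duration_ms = 1000 / 30                    # ~33.3 ms per frame
conventional_delay_ms = frame_duration_ms        # wait for the whole frame before processing
method_8000_delay_ms = (0.05 * frame_duration_ms, 0.20 * frame_duration_ms)
print(f"~{method_8000_delay_ms[0]:.1f}-{method_8000_delay_ms[1]:.1f} ms "
      f"instead of ~{conventional_delay_ms:.1f} ms")   # roughly a 5x to 20x reduction
```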
In the methods described above, the undesired portions of a video conference environment (e.g., an undesired background) can be removed or modified by taking advantage of the relatively fast decay of electromagnetic radiation (e.g., infrared radiation) over distance. In a typical video conference environment, infrared radiation decays at a rate that is sufficient to separate undesired portions of the video conference environment (e.g., an undesired background) from desired portions of the video conference environment (e.g., a desired foreground including the user(s)) based on the differences in the intensities of the infrared radiation, measured by a sensor, received from across the video conference environment. Performing background modification using the methods and camera devices described above, which take advantage of this intensity decay of electromagnetic radiation over distance, is significantly less cumbersome for many users (e.g., a remote worker) than conventional methods, such as chroma key compositing, which requires a monochrome background screen, such as a green screen. Furthermore, the camera devices described above can perform the methods without the use of artificial intelligence algorithms that often require computational power exceeding what a user's equipment can provide.
The identification and removal or modification of the undesired background is performed by the camera devices 200-202 described above, so the methods 4000, 6000, 7000, and 8000 described above can be performed with any user device (e.g., smart phone, laptop, tablet, etc.) that can perform a video conference without background replacement or modification. Furthermore, because the identification and removal or modification of the undesired background is performed by the camera devices 200-202 described above, the video feed already having the modified or replaced background can be fed to numerous video conferencing applications (e.g., Microsoft® Skype®, Apple® FaceTime® and applications available from Zoom® Video Communications) on one or more user devices for use in the video conference. This can allow user(s) to switch between a first video conferencing application and a second video conferencing application, for example on user device 110, without having to perform any additional configuration steps on the new video conferencing application to achieve the desired background replacement or modification because the background replacement or modification is performed by the peripheral camera device (e.g., any of camera device 200-202) and not by the user device 110.
Furthermore, because the undesired background is removed or modified by the respective camera devices 200-202 and not by another device, the data corresponding to the undesired background is not transmitted to another device. Conventional techniques generally perform the background removal and modification using a device (e.g., a server or personal computer) other than the peripheral camera device, such as the camera devices 200-202, that originally captures the images for the video feed. Removing the undesired background from the video stream at the camera device substantially reduces the bandwidth required relative to other methods, which remove the undesired background only after it has been transmitted to another device.
While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
This application is a divisional application of co-pending U.S. patent application Ser. No. 17/191,143, filed Mar. 3, 2021, which is a continuation of U.S. patent application Ser. No. 17/184,573, filed Feb. 24, 2021, each of which is herein incorporated by reference.