This disclosure relates generally to camera selection in a videoconference.
The most common configuration of a conference room for videoconferencing has a single camera adjacent a monitor or television that sits at one end of the room. One drawback to this configuration is that if a speaker is looking at someone else in the conference room while talking, the speaker does not face the camera. This means that the far end only sees a side view of the speaker, so the speaker does not appear to be speaking to the far end.
Efforts have been made to address this problem by providing multiple cameras in the conference room. The idea is to point the cameras in different directions and then select the camera that provides the best view of the speaker, preferably zooming in on and framing the speaker. These efforts improved the view of the speaker, but only in single-individual settings, which were rarely a problem in the first place because a lone speaker usually looks at the monitor and hence at the single camera. When multiple individuals were present in the conference room and visible in the various camera views, these efforts did not provide good results.
A method, non-transitory processor readable memory, and system are provided that comprise: identifying the locations of a plurality of cameras other than a primary camera using an image from the video stream of the primary camera; performing sound source localization using a microphone array on the primary camera to determine direction information; identifying a speaker in a group of individuals using the sound source localization direction information and an image from the video stream of the primary camera; determining a facial pose of the speaker in the image from the video stream; and selecting a camera from the plurality of cameras to provide a video stream for provision to the far end based on the locations of the plurality of cameras other than the primary camera and the facial pose of the speaker.
The present invention may be better understood, and its numerous objects, features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference number throughout the several figures designates a like or similar element.
Implementations provide for a plurality of cameras, embodied in various devices, to be placed in an environment such as a conference room. One of the cameras is designated as a primary camera and includes a microphone array. A video stream from the primary camera is sent to a far end site. An image from the video stream is used to identify the locations of the other cameras. Sound source localization using the microphone array is used to determine sound direction information. A speaker in a group of individuals or participants is identified using the sound source localization direction information. A facial pose of the speaker is determined from the video stream. A camera from the group of cameras, including the primary camera, is selected to provide the video stream based on the identified camera locations and the determined facial pose.
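As an illustration only, the final selection step can be reduced to a simple geometric comparison: choose the camera whose direction, as seen from the speaker, is closest to the direction the speaker is facing. The following Python sketch is a minimal, self-contained example of that idea; the data structure, angle values, and angular-error scoring are assumptions made for illustration and are not the complete selection logic described below.

from dataclasses import dataclass


@dataclass
class Camera:
    cam_id: str
    azimuth_deg: float  # direction of the camera as seen from the speaker


def best_camera(cameras, speaker_gaze_deg):
    # Return the camera whose direction is closest to where the speaker is facing.
    def angular_error(cam):
        diff = (cam.azimuth_deg - speaker_gaze_deg + 180.0) % 360.0 - 180.0
        return abs(diff)
    return min(cameras, key=angular_error)


cams = [Camera("primary", 0.0), Camera("left", -70.0), Camera("right", 70.0)]
# Suppose facial pose analysis indicates the speaker is looking roughly 60 degrees to the right.
print(best_camera(cams, speaker_gaze_deg=60.0).cam_id)  # prints "right"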
In the drawings and the description of the drawings herein, certain terminology is used for convenience only and is not to be taken as limiting the examples of the present disclosure. In the drawings and the description below, like numerals indicate like elements throughout.
Throughout this disclosure, terms are used in a manner consistent with their use by those of skill in the art, for example:
Computer vision is an interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. Computer vision seeks to automate tasks imitative of the human visual system. Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world to produce numerical or symbolic information. Computer vision is concerned with artificial systems that extract information from images. Computer vision includes algorithms which receive a video frame as input and produce data detailing the visual characteristics that a system has been trained to detect.
Machine learning includes neural networks. A convolutional neural network is a class of deep neural network which can be applied to analyzing visual imagery. A deep neural network is an artificial neural network with multiple layers between the input and output layers.
Artificial neural networks are computing systems inspired by the biological neural networks that constitute animal brains. Artificial neural networks exist as code being executed on one or more processors. An artificial neural network is based on a collection of connected units or nodes called artificial neurons, which mimic the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a ‘signal’ to other neurons. An artificial neuron that receives a signal then processes it and can signal neurons connected to it. The signal at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges have weights, the value of which is adjusted as ‘learning’ proceeds and/or as new data is received by a state system. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold.
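As a concrete illustration of the neuron model just described (a weighted sum of inputs passed through a non-linear function, with an optional firing threshold), the following short Python example may be helpful; the specific inputs, weights, sigmoid activation, and threshold value are arbitrary choices for illustration.

import math


def neuron(inputs, weights, bias, threshold=0.5):
    # Weighted sum of the input signals plus a bias term.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Non-linear function (sigmoid) of the sum of the inputs.
    activation = 1.0 / (1.0 + math.exp(-z))
    # The neuron sends a signal only if the aggregate crosses the threshold.
    return activation if activation >= threshold else 0.0


# Example: three input signals with example learned weights.
print(neuron([0.2, 0.9, 0.4], [0.5, -0.3, 0.8], bias=0.1))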
Referring now to
Turning now to
In
It is noted in
In
In an example, the processing of the audio and video and selection of a desired camera is split between the primary camera 1116B and a codec 1100. Referring to
It is understood that the SSL determination, face detection, and facial pose analysis are performed only periodically, not for every video frame, such as once every one second to once every five seconds in some examples. This is satisfactory because the speaker and the individuals' locations do not change much faster than those periods, and because camera switching should not be performed rapidly, to avoid disorienting the far end.
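As a sketch of how such periodic processing might be arranged, the following example throttles the analysis to a fixed interval and otherwise keeps the current camera; the two-second interval and the analyze callable are assumptions for illustration.

import time

ANALYSIS_INTERVAL_S = 2.0   # within the one-to-five-second range discussed above
_last_analysis = 0.0
_current_camera = "primary"


def maybe_reanalyze(frame, audio, analyze):
    # Run the SSL, face detection, and pose analysis only periodically, not per
    # video frame, so that camera switches remain infrequent. 'analyze' is a
    # hypothetical callable that returns the preferred camera ID.
    global _last_analysis, _current_camera
    now = time.monotonic()
    if now - _last_analysis >= ANALYSIS_INTERVAL_S:
        _last_analysis = now
        _current_camera = analyze(frame, audio)
    return _current_camera


# Toy usage: the analysis callback here always prefers the left camera.
print(maybe_reanalyze(None, None, lambda f, a: "left"))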
It is understood that steps 1004 and 1006 are illustrated as separate steps. The face detection and facial pose determination can instead be combined in a single neural network, so that steps 1004 and 1006 are merged. Such a single neural network would combine the SSL direction information and the video image to determine both the speaker from among the individuals and the facial pose of that individual. The single neural network may not operate in the serial order illustrated by steps 1004 and 1006, as it may process all of the input data in parallel, but the functional result of its operation is the same as the series operation of steps 1004 and 1006, namely the facial features of the speaker.
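One way to combine the SSL direction information with the face detections, whether detection and pose estimation run as one network or two, is to pick the detected face whose horizontal position in the image best matches the reported sound direction. The following is a minimal sketch under assumed data structures and a simple pinhole approximation; the field of view, bounding-box format, and angle values are illustrative.

from dataclasses import dataclass


@dataclass
class Face:
    bbox: tuple     # (x, y, w, h) in pixels
    yaw_deg: float  # facial pose (head yaw) reported by the pose network


def face_azimuth_deg(face, image_width, horizontal_fov_deg=90.0):
    # Approximate horizontal angle of the face center relative to the camera axis.
    x, _, w, _ = face.bbox
    center = x + w / 2.0
    return (center / image_width - 0.5) * horizontal_fov_deg


def pick_speaker(faces, ssl_azimuth_deg, image_width):
    # Choose the face whose direction best matches the SSL direction.
    return min(faces, key=lambda f: abs(face_azimuth_deg(f, image_width) - ssl_azimuth_deg))


faces = [Face((100, 80, 60, 60), yaw_deg=-20.0), Face((900, 90, 70, 70), yaw_deg=35.0)]
speaker = pick_speaker(faces, ssl_azimuth_deg=28.0, image_width=1280)
print(speaker.yaw_deg)  # the right-hand face lies closest to the sound direction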
In step 1010, the codec 1100 uses the video from the primary camera 1116B to determine the locations of the other cameras 1116A, 1116C. This operation is detailed in
It is understood that the codec 1100 may perform framing operations on the video stream from the selected camera if desired, rather than providing the entire image from the selected camera. The framing process is simplified by utilizing the bounding boxes from the cameras. Additionally, the codec 1100 may provide video from other cameras based on framing considerations, such as if two individuals are having a conversation. The steps of
Referring to
Referring to
In one example, THRESHOLD is set at 2.5, so that a poseScore is computed only when the probability of a face is higher than 50%. Different weights are used for each facial keypoint, as some keypoints, such as the nose, are more important. The cameraScore is the sum of the poseScores for each face in the camera image. In step 909, the camera with the highest cameraScore is selected.
In some examples, because distances from the cameras vary and camera settings vary, correction factors are applied to each poseScore. Each poseScore as computed above is multiplied by a sizeScaleFactor and a brightnessScaleFactor. The sizeScaleFactor is computed by comparing the face bounding boxes of two poses. The brightnessScaleFactor is computed by comparing the average luminance levels of the corresponding face bounding boxes of two poses.
sizeScaleFactor=(pose1FaceBoundingBoxArea/pose2FaceBoundingBoxArea);
brightnessScaleFactor=(pose1FaceBoundingBoxBrightness/pose2FaceBoundingBoxBrightness);
Other normalization methods can be applied in calculation of poseScore from the primary camera.
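To illustrate the shape of this scoring, the following sketch computes a weighted poseScore per face, applies the size and brightness scale factors, sums the poseScores into a cameraScore, and selects the camera with the highest score. The keypoint weights, the exact interpretation of THRESHOLD, and the data layout are assumptions made for the example.

THRESHOLD = 2.5
KEYPOINT_WEIGHTS = {"nose": 2.0, "left_eye": 1.0, "right_eye": 1.0,
                    "left_ear": 0.5, "right_ear": 0.5}  # nose weighted highest


def pose_score(keypoint_confidences, size_scale=1.0, brightness_scale=1.0):
    # keypoint_confidences: per-keypoint confidences in [0, 1] for one face.
    if sum(keypoint_confidences.values()) <= THRESHOLD:
        return 0.0  # the face is not confident enough to score
    raw = sum(KEYPOINT_WEIGHTS.get(k, 1.0) * c for k, c in keypoint_confidences.items())
    return raw * size_scale * brightness_scale


def camera_score(faces):
    # faces: list of (keypoint_confidences, sizeScaleFactor, brightnessScaleFactor).
    return sum(pose_score(kp, s, b) for kp, s, b in faces)


def select_best_camera(per_camera_faces):
    # per_camera_faces: {camera_id: faces list as above}; the highest cameraScore wins.
    return max(per_camera_faces, key=lambda cam: camera_score(per_camera_faces[cam]))


faces_a = [({"nose": 0.9, "left_eye": 0.8, "right_eye": 0.8, "left_ear": 0.5, "right_ear": 0.5}, 1.0, 1.0)]
faces_b = [({"nose": 0.4, "left_eye": 0.3, "right_eye": 0.3, "left_ear": 0.2, "right_ear": 0.2}, 1.0, 1.0)]
print(select_best_camera({"left": faces_a, "right": faces_b}))  # prints "left"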
In step 910, the determined best camera ID is set. If there is an active speaker in step 904, in step 908 the facial pose information and the determined camera locations are evaluated to determine which camera has the best view of the face of the speaker. In some instances, the facial pose information may indicate a particular camera would have the best view of the speaker, but the speaker might be blocked in the field-of-view of that camera for some reason (e.g., another person in the room blocking the speaker from the camera). Referring to
The processing unit 1102 can include digital signal processors (DSPs), central processing units (CPUs), graphics processing units (GPUs), dedicated hardware elements, such as neural network accelerators and hardware codecs, and the like in any desired combination.
The flash memory 1104 stores modules of varying functionality in the form of software and firmware, generically programs, for controlling the codec 1100. Illustrated modules include a video codec 1150, camera control 1152, face and body finding 1153, neural network models 1155, framing 1154, other video processing 1156, audio codec 1158, audio processing 1160, network operations 1166, user interface 1168 and operating system and various other modules 1170. The RAM 1105 is used for storing any of the modules in the flash memory 1104 when the module is executing, storing video images of video streams and audio samples of audio streams and can be used for scratchpad operation of the processing unit 1102. The face and body finding 1153 and neural network models 1155 are used in the various operations of the codec 1100, such as the face detection step 1004, the pose determination step 1006, the object detection step 802 and the depth estimation step 806.
The network interface 1108 enables communications between the codec 1100 and other devices and can be wired, wireless or a combination. In one example, the network interface 1108 is connected or coupled to the Internet 1130 to communicate with remote endpoints 1140 in a videoconference. In one or more examples, the general interface 1110 provides data transmission with local devices such as a keyboard, mouse, printer, projector, display, external loudspeakers, additional cameras, and microphone pods, etc.
In one example, the cameras 1116A, 1116B, 1116C and the microphones 1114 capture video and audio, respectively, in the videoconference environment and produce video and audio streams or signals transmitted through the bus 1115 to the processing unit 1102. In at least one example of this disclosure, the processing unit 1102 processes the video and audio using algorithms in the modules stored in the flash memory 1104. Processed audio and video streams can be sent to and received from remote devices coupled to network interface 1108 and devices coupled to general interface 1110. This is just one example of the configuration of a codec 1100.
The processing unit 1202 can include digital signal processors (DSPs), central processing units (CPUs), graphics processing units (GPUs), dedicated hardware elements, such as neural network accelerators and hardware codecs, and the like in any desired combination.
The flash memory 1204 stores modules of varying functionality in the form of software and firmware, generically programs, for controlling the camera 1200. Illustrated modules include camera control 1252, sound source localization 1260 and operating system and various other modules 1270. The RAM 1205 is used for storing any of the modules in the flash memory 1204 when the module is executing, storing video images of video streams and audio samples of audio streams and can be used for scratchpad operation of the processing unit 1202.
In a second configuration, only the primary camera 1116B includes the microphone array 1214 and the sound source localization module 1260. Cameras 1116A, 1116C are then just simple cameras. The prior examples allowed primary camera selection to be done after the cameras are installed, as all of the cameras are the same. In this configuration, during setup of the conference room C, the primary camera, with its extra functions, must be identified and properly placed. In a third configuration, the sound source localization is also performed by the codec, with the primary camera 1116B providing the audio streams from each microphone.
Other configurations, with differing components and arrangement of components, are well known for both videoconferencing endpoints and for devices used in other manners.
A graphics acceleration module 1324 is connected to the high-speed interconnect 1308. A display subsystem 1326 is connected to the high-speed interconnect 1308 to allow operation with and connection to various video monitors. A system services block 1332, which includes items such as DMA controllers, memory management units, general-purpose I/O's, mailboxes and the like, is provided for normal SoC 1300 operation. A serial connectivity module 1334 is connected to the high-speed interconnect 1308 and includes modules as normal in an SoC. A vehicle connectivity module 1336 provides interconnects for external communication interfaces, such as PCIe block 1338, USB block 1340 and an Ethernet switch 1342. A capture/MIPI module 1344 includes a four-lane CSI-2 compliant transmit block 1346 and a four-lane CSI-2 receive module and hub.
An MCU island 1360 is provided as a secondary subsystem and handles operation of the integrated SoC 1300 when the other components are powered down to save energy. An MCU ARM processor 1362, such as one or more ARM R5F cores, operates as a master and is coupled to the high-speed interconnect 1308 through an isolation interface 1361. An MCU general purpose I/O (GPIO) block 1364 operates as a slave. MCU RAM 1366 is provided to act as local memory for the MCU ARM processor 1362. A CAN bus block 1368, an additional external communication interface, is connected to allow operation with a conventional CAN bus environment in a vehicle. An Ethernet MAC (media access control) block 1370 is provided for further connectivity. External memory, generally non-volatile memory (NVM) such as flash memory 104, is connected to the MCU ARM processor 1362 via an external memory interface 1369 to store instructions loaded into the various other memories for execution by the various appropriate processors. The MCU ARM processor 1362 operates as a safety processor, monitoring operations of the SoC 1300 to ensure proper operation of the SoC 1300.
It is understood that this is one example of an SoC provided for explanation and many other SoC examples are possible, with varying numbers of processors, DSPs, accelerators and the like.
The implementations described above discuss determining an active speaker's best view. It is also contemplated that, in certain implementations, the best view of each individual or participant in a conference room is determined at any given time, regardless of whether that individual is an active speaker or not.
In various implementations, a single stream video composition is provided to the far end conference site or sites (i.e., far end as described above). As further described herein, a best view of each of the individuals or participants in the conference room is taken, and a composite of the views is provided.
For example, in a conference room with one camera, wherein the camera is implemented in a device such as a video bar as described above, six individuals or participants enter the conference room. A composited video stream of the six individuals or participants is sent or fed to the far end. Implementations further provide for multiple streams to be provided, as well as the use of more than one camera (i.e., multiple cameras), where the best view from the best camera of the individuals or participants is used. As further described below, various embodiments provide for a production module to perform such functions.
Various implementations described above provide that, where more than one camera (i.e., multiple cameras) is used, the multi-camera selection algorithm provides that secondary cameras do not implement the described machine learning features that use neural networks and are considered “dumb” secondary cameras. Only the primary camera implements the described machine learning that includes neural networks. For example, implementations include third party USB cameras as “dumb” secondary cameras. As discussed, in various embodiments, the primary camera can be a video bar, a pan-tilt-zoom camera, or another type of camera. Implementations further provide that such cameras are connected to a computing device, such as a codec.
In the following implementations, all of the plurality of cameras implement the use of machine learning. In various implementations, a checking camera operation is performed. The checking camera operation monitors a chosen camera, identified by its camera ID, and determines whether that camera is no longer the best camera option to perform video streaming. If it is not, a new best camera, with a new camera ID, is found as described above.
Implementations provide that over a certain time period (e.g., 1 or 2 seconds), determined facial keypoints and sound levels of the chosen camera are checked as described in
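A minimal sketch of such a checking loop follows; the check period, the switching margin, and the callable hooks into the rest of the system are assumptions for illustration, not the claimed mechanism.

import time

CHECK_PERIOD_S = 1.5   # within the one-to-two-second window mentioned above
MARGIN = 1.2           # require a clearly better camera before switching (assumption)


def checking_camera_loop(get_camera_scores, get_current_id, switch_to, stop):
    # Periodically re-score all cameras from their keypoint and sound information,
    # and switch only if another camera beats the current one by a margin, so the
    # far end is not disoriented by rapid switching. All arguments are hypothetical
    # callables standing in for the rest of the system.
    while not stop():
        time.sleep(CHECK_PERIOD_S)
        scores = get_camera_scores()       # e.g., cameraScore per camera ID
        current = get_current_id()
        best = max(scores, key=scores.get)
        if best != current and scores[best] > MARGIN * scores.get(current, 0.0):
            switch_to(best)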
Implementations further provide a determination of a front view of a speaker as described in
It is contemplated that in other implementations, such secondary cameras implement machine learning as described herein, including face detection, pose detection, etc.
Implementations provide for each of the cameras 1602, 1604A, and 1604B to include respective camera components 1606A, 1606B, and 1606C. Implementations also provide for each of the cameras 1602, 1604A, and 1604B to include respective machine learning (ML) sub-systems 1608A, 1608B, and 1608C.
The respective camera components 1606 send video frames to their respective ML subsystems 1608. The ML subsystems 1608 implement the described ML models for face and pose detection and SSL, feed the ML output to a Gamma block, and send the filtered output from the Gamma block, as described above, to a production module 1610 of the primary camera 1602. The Gamma block filters the ML bounding boxes described in
Implementations provide for the production module 1610 to be a central processing unit. The production module 1610 determines the best camera selection and the composition of all framing rectangles into a proper frame to be sent to the far side.
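As a simplified sketch of composing framing rectangles into a single output frame, the following example crops each rectangle from one source frame, rescales the crops with a simple nearest-neighbor resize, and tiles them side by side; in practice the rectangles may come from different cameras, and the sizes shown are arbitrary.

import numpy as np


def compose_frame(source_frame, rects, out_h=360, tile_w=320):
    # Crop each framing rectangle (x, y, w, h), resize it to a common tile size
    # using nearest-neighbor index sampling, and stack the tiles horizontally.
    tiles = []
    for (x, y, w, h) in rects:
        crop = source_frame[y:y + h, x:x + w]
        ys = np.linspace(0, h - 1, out_h).astype(int)
        xs = np.linspace(0, w - 1, tile_w).astype(int)
        tiles.append(crop[ys][:, xs])
    return np.hstack(tiles)


# Toy usage with a synthetic 720p frame and two framing rectangles.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
composite = compose_frame(frame, [(100, 100, 200, 300), (800, 120, 220, 320)])
print(composite.shape)  # (360, 640, 3)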
Implementations provide for a machine learning-based approach for automatic re-identification of the individuals or participants across multiple cameras of multi-camera system 1600. In such implementations, the same individual or participant receives an identical identifier across the multiple cameras.
The production module 1610 receives bounding boxes, feature vectors, poses, and SSL information for the detected faces of individuals or participants. Re-identification is performed through, for example, cosine distance matching of feature vectors. Pose information is used to find the best frontal view of participants. SSL, as described above, can be used to find the active speaker. The production module 1610 combines individual or participant identification, the best frontal view, and the active speaker to determine the best camera selection and the composition of framing for the far side. For certain implementations, if there is an active speaker, the best frontal view of the active speaker is shown. If there is no active speaker, the best frontal view of each participant or of the whole group is shown.
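As an illustrative sketch of cosine-distance re-identification, the following example matches new feature vectors to known participant IDs and assigns a fresh ID when nothing is close enough; the distance threshold and data layout are assumptions for illustration.

import numpy as np


def cosine_distance(a, b):
    # One minus the cosine similarity between two feature vectors.
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def reidentify(known, new_vectors, max_distance=0.4):
    # known: {participant_id: feature_vector}. Each new vector is matched to the
    # closest known participant; if no match is close enough, a new ID is created,
    # so the same individual keeps an identical identifier across cameras.
    assignments = {}
    next_id = max(known, default=0) + 1
    for idx, vec in enumerate(new_vectors):
        if known:
            best_id = min(known, key=lambda pid: cosine_distance(known[pid], vec))
            if cosine_distance(known[best_id], vec) <= max_distance:
                assignments[idx] = best_id
                continue
        assignments[idx] = next_id
        known[next_id] = vec
        next_id += 1
    return assignments


known = {1: np.array([1.0, 0.0, 0.0]), 2: np.array([0.0, 1.0, 0.0])}
print(reidentify(known, [np.array([0.9, 0.1, 0.0]), np.array([0.0, 0.0, 1.0])]))
# {0: 1, 1: 3}: the first vector matches participant 1, the second is a new participant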
While the above description has used a conference room as the exemplary environment, the environment can be any setting where multiple cameras can provide different views of a group of individuals.
While the above description has used three cameras as an example, it is understood that different numbers of cameras can be utilized from two to a limit depending on the processing capabilities and the particular environment. For example, in a larger venue with more varied seating, more cameras may be necessary to cover all individuals that may speak.
While the above description had the camera selection being performed in a codec, it is understood that different items can perform the camera selection. In one example, one camera of the number of cameras can be selected to perform the camera selection and to interact with the other cameras to control the provision of video streams from the cameras. In another example, a separate video mixing unit can perform the camera selection and other video processing and the codec can simply encode the selected camera video stream.
The various examples described are provided by way of illustration and should not be construed to limit the scope of the disclosure. Various modifications and changes can be made to the principles and examples described herein without departing from the scope of the disclosure and without departing from the claims which follow.
Computer program instructions may be stored in a non-transitory processor readable memory that can direct a computer or other programmable data processing apparatus, processor or processors, to function in a particular manner, such that the instructions stored in the non-transitory processor readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The present invention is well adapted to attain the advantages mentioned as well as others inherent therein. While the present invention has been depicted, described, and is defined by reference to particular embodiments of the invention, such references do not imply a limitation on the invention, and no such limitation is to be inferred. The invention is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent arts. The depicted and described embodiments are examples only and are not exhaustive of the scope of the invention.
Consequently, the invention is intended to be limited only by the spirit and scope of the appended claims, giving full cognizance to equivalents in all respects.
This application claims the benefit of U.S. Provisional Application No. 63/202,566, filed Jun. 16, 2021, which is incorporated by reference herein in its entirety.