The disclosure relates to the field of media processing and more particularly to low light media enhancement.
Videos captured under low light conditions or captured using low quality sensors may suffer from various issues:
high noise: a maximum exposure time of the videos may be restricted by a desired frames per second (FPS) rate, which results in high noise in the low light conditions;
low brightness: in the low light conditions, a lack of sufficient ambient light results in a dark video;
color artifacts: the accuracy with which the sensor captures color drops with a decrease in the number of photons captured, resulting in a loss of color accuracy;
difficulty in obtaining good output quality with low complexity Artificial Intelligence (AI) video processing (full HD at 30 FPS);
power and memory constraints to handle long duration video capture;
flickering due to temporal consistency issues; and
a lack of real-world datasets for training.
In related art methods, spatial or temporal filters may be used to denoise/enhance the video captured under the low light conditions. However, the spatial or temporal filters may not remove noise from the video efficiently, when the video is captured under the low light conditions or using the low-quality sensors.
In some related art methods, deep Convolutional Neural Networks (CNNs) may be used to enhance the video. However, the deep CNNs used in the related art methods may be too computationally heavy and memory intensive to be deployed in real time on an electronic device/mobile phone. The video enhanced using the deep CNNs may also suffer from flickering due to inconsistent denoising of consecutive video frames.
Provided are methods and systems for enhancing media captured under low light conditions and using inferior sensors.
Another aspect of the embodiments herein is to provide methods and systems for switching between a first, second, and third neural network to enhance the media, by analyzing parameters of a plurality of frames of the video, wherein the parameters include shot boundary detection and artificial light flickering, wherein the first neural network is a high complexity neural network (HCN) with one input frame, the second neural network is a temporally guided lower complexity neural network (TG-LCN) with a ‘q’ number of input frames and a previous output frame for joint deflickering or joint denoising, and the third neural network is a neural network with a ‘p’ number of input frames and the previous output frame for denoising, wherein ‘p’ is less than ‘q’.
Another aspect of the embodiments herein is to provide methods and systems for training the first/second/third neural network using a multi-frame Siamese training method.
According to an aspect of the disclosure, a method for enhancing media includes: receiving, by an electronic device, a media stream; performing, by the electronic device, an alignment of a plurality of frames of the media stream; correcting, by the electronic device, a brightness of the plurality of frames; selecting, by the electronic device, one of a first neural network, a second neural network, or a third neural network, by analyzing parameters of the plurality of frames having the corrected brightness, wherein the parameters include at least one of shot boundary detection and artificial light flickering; and generating, by the electronic device, an output media stream by processing the plurality of frames of the media stream using the selected one of the first neural network, the second neural network, or the third neural network.
The media stream may be captured under low light conditions, and the media stream may include at least one of noise, low brightness, artificial flickering, and color artifacts.
The output media stream may be a denoised media stream with enhanced brightness and zero flicker.
The correcting the brightness of the plurality of frames of the media stream may include: identifying a single frame or the plurality of frames of the media stream as an input frame; linearizing the input frame using an Inverse Camera Response Function (ICRF); selecting a brightness multiplication factor for correcting the brightness of the input frame using a future temporal guidance; applying a linear boost on the input frame based on the brightness multiplication factor; and applying a Camera Response Function (CRF) on the input frame to correct the brightness of the input frame, wherein the CRF is a function of a sensor type and metadata, wherein the metadata includes an exposure value and International Standard Organization (ISO), and the CRF and the ICRF are stored as Look-up-tables (LUTs).
The selecting the brightness multiplication factor may include: analyzing the brightness of the input frame; identifying a maximum constant boost value as the brightness multiplication factor, based on the brightness of the input frame being less than a threshold and a brightness of all frames in a future temporal buffer being less than the threshold; identifying a boost value of monotonically decreasing function between maximum constant boost value and 1 as the brightness multiplication factor, based on the brightness of the input frame being less than the threshold, and the brightness of all the frames in the future temporal buffer being greater than the threshold; identifying a unit gain boost value as the brightness multiplication factor, based on the brightness of the input frame being greater than the threshold and the brightness of all the frames in the future temporal buffer being greater than the threshold; and identifying a boost value of monotonically increasing function between 1 and the maximum constant boost value as the brightness multiplication factor, based on the brightness of the input frame being greater than the threshold, and the brightness of the frames in the future temporal buffer being less than the threshold.
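As an illustration of the selection logic above, a minimal sketch is given below; the brightness measure, the threshold, the maximum boost value, and the linear ramps standing in for the monotonic functions are assumptions, not values fixed by this disclosure.

```python
import numpy as np

def mean_brightness(frame):
    # Illustrative brightness measure: average value of an 8-bit frame, normalized to [0, 1].
    return float(np.mean(frame)) / 255.0

def select_boost_factor(input_frame, future_buffer, threshold=0.25, max_boost=4.0):
    """Choose the brightness multiplication factor using future temporal guidance.

    threshold, max_boost, and the linear ramps standing in for the monotonic
    functions are assumptions; only the four cases are fixed by the description.
    """
    cur_dark = mean_brightness(input_frame) < threshold
    n = max(len(future_buffer), 1)
    dark_frac = sum(mean_brightness(f) < threshold for f in future_buffer) / n

    if cur_dark and dark_frac == 1.0:
        return max_boost                                  # dark now and ahead: maximum constant boost
    if cur_dark:
        # scene is brightening: monotonically decreasing ramp from max_boost towards 1
        return 1.0 + (max_boost - 1.0) * dark_frac
    if dark_frac == 0.0:
        return 1.0                                        # bright now and ahead: unit gain, no boost
    # scene is darkening: monotonically increasing ramp from 1 towards max_boost
    return 1.0 + (max_boost - 1.0) * dark_frac
```

In this sketch the ramp position is derived from the fraction of dark frames remaining in the future temporal buffer, so the boost changes smoothly as a scene transition approaches.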
The selecting, by the electronic device, one of the first neural network, the second neural network or the third neural network may include: analyzing each frame with respect to earlier frames to determine whether the shot boundary detection is associated with each of the plurality of frames; selecting the first neural network for generating the output media stream by processing the plurality of frames of the media stream, based on the shot boundary detection being associated with the plurality of frames; analyzing a presence of the artificial light flickering in the plurality of frames, based on the shot boundary detection not being associated with the plurality of frames; selecting the second neural network for generating the output media stream by processing the plurality of frames of the media stream, based on the artificial light flickering being present in the plurality of frames; and selecting the third neural network for generating the output media stream by processing the plurality of frames of the media stream, based on the artificial light flickering not being present in the plurality of frames.
The first neural network may be a high complexity neural network with one input frame, the second neural network may be a temporally guided lower complexity neural network with ‘q’ number of input frames and a previous output frame for joint deflickering or joint denoising, and
the third neural network may be a neural network with ‘p’ number of input frames and the previous output frame for denoising, wherein ‘p’ is less than ‘q’.
The first neural network may include multiple residual blocks at a lowest level for enhancing noise removal capabilities, and the second neural network may include at least one convolution operation with fewer feature maps and the previous output frame as a guide for processing the plurality of input frames.
The first neural network, the second neural network and the third neural network may be trained using a multi-frame Siamese training method to generate the output media stream by processing the plurality of frames of the media stream.
The method may further include training a neural network of at least one of the first neural network, the second neural network and the third neural network by: creating a dataset for training the neural network, wherein the dataset includes one of a local dataset and a global dataset; selecting at least two sets of frames from the created dataset, wherein each set includes at least three frames; adding a synthetic motion to the selected at least two sets of frames, wherein the at least two sets of frames added with the synthetic motion include different noise realizations; and performing a Siamese training of the neural network using a ground truth media and the at least two sets of frames added with the synthetic motion.
The creating the dataset may include: capturing burst datasets, wherein a burst dataset includes one of low light static media with noise inputs and a clean ground truth frame; simulating a global motion and a local motion of each burst dataset using a synthetic trajectory generation and a synthetic stop motion, respectively; removing at least one burst dataset with structural and brightness mismatches between the clean ground truth frame and the low light static media; and creating the dataset by including the at least one burst dataset that does not include the structural and brightness mismatches between the clean ground truth frame and the low light static media.
The simulating the global motion of each burst dataset may include: estimating a polynomial coefficient range based on parameters including a maximum translation and a maximum rotation; generating 3rd order polynomial trajectories using the estimated polynomial coefficient range; approximating a 3rd order trajectory using a maximum depth and the generated 3rd order polynomial trajectories; generating uniform sample points based on a pre-defined sampling rate and the approximated 3D trajectory; generating ‘n’ affine transformations based on the generated uniform sample points; and applying the generated n affine transformations on each burst dataset.
The simulating the local motion of each burst dataset includes: capturing local object motion from each burst dataset in a static scene using the synthetic stop motion, the capturing the local object motion including: capturing an input and ground truth scene with a background scene; capturing an input and ground truth scene with a foreground object; cropping out the foreground object; and creating synthetic scenes by positioning the foreground object at different locations of the background scene; and simulating a motion blur for each local object motion by averaging a pre-defined number of frames of the burst dataset.
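A compact sketch of the synthetic stop-motion compositing and motion-blur averaging described above is given below; the alpha-mask representation of the cropped foreground object and the blur window length are illustrative assumptions.

```python
import numpy as np

def composite(background, fg_rgb, fg_alpha, top_left):
    """Paste a cropped foreground (RGB plus [0, 1] alpha mask) onto the background at (y, x)."""
    out = background.astype(np.float32).copy()
    h, w = fg_rgb.shape[:2]
    y, x = top_left
    roi = out[y:y + h, x:x + w]
    alpha = fg_alpha.astype(np.float32)[..., None]
    out[y:y + h, x:x + w] = alpha * fg_rgb + (1.0 - alpha) * roi
    return out

def synth_stop_motion(background, fg_rgb, fg_alpha, positions, blur_window=3):
    """Create a local-motion sequence by repositioning the foreground object, then simulate
    motion blur by averaging `blur_window` consecutive composited frames (an assumption)."""
    frames = [composite(background, fg_rgb, fg_alpha, pos) for pos in positions]
    blurred = [np.mean(frames[max(0, i - blur_window + 1):i + 1], axis=0) for i in range(len(frames))]
    return [np.clip(f, 0, 255).astype(np.uint8) for f in blurred]
```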
The performing the Siamese training of the neural network may include: passing the at least two sets of frames with the different noise realizations to the neural network to generate at least two sets of output frames; computing a Siamese loss by computing a loss between the at least two sets of output frames; computing a pixel loss between an average of the at least two sets of output frames and a ground truth; computing a total loss using the Siamese loss and the pixel loss; and training the neural network using the computed total loss.
According to an aspect of the disclosure, an electronic device includes: a memory; and a processor coupled to the memory and configured to: receive a media stream; perform an alignment of a plurality of frames of the media stream; correct a brightness of the plurality of frames; select one of a first neural network, a second neural network, or a third neural network, by analyzing parameters of the plurality of frames having the corrected brightness, wherein the parameters include at least one of shot boundary detection, and artificial light flickering; and generate an output media stream by processing the plurality of frames of the media stream using the selected one of the first neural network, the second neural network, or the third neural network.
The above and other aspects, features, and advantages of certain embodiments of the present disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
The example embodiments and the various aspects, features and advantageous details thereof are explained more fully with reference to the accompanying drawings in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The description herein is intended merely to facilitate an understanding of ways in which the example embodiments herein can be practiced and to further enable those of skill in the art to practice the example embodiments herein. Accordingly, this disclosure should not be construed as limiting the scope of the embodiments.
Embodiments of the disclosure provide methods and systems for enhancing media/video in real-time using temporal guided adaptive Convolutional Neural Network (CNN) switching, wherein the media may be captured in extreme low light conditions, under high noise conditions, and/or captured using inferior/low quality sensors.
Further, embodiments of the disclosure provide methods and systems for using a deep learning based pipeline to enhance the media while minimizing noise and flicker artifacts.
Further still, embodiments of the disclosure provide methods and systems for choosing between high and low complexity networks by analyzing temporal consistency of input frames of the media, thus reducing average time and power required to process the media.
Further still, embodiments of the disclosure provide methods and systems for using a Siamese training method to reduce the flicker.
Embodiments of the disclosure will now be described with reference to the drawings, where similar reference characters denote similar features.
Examples of the electronic device 100 may be, but are not limited to, a cloud computing device (which may be a part of a public cloud or a private cloud), a server, a database, a computing device, and so on. The server may be at least one of a standalone server, a server on a cloud, or the like. The computing device may be, but is not limited to, a personal computer, a notebook, a tablet, a desktop computer, a laptop, a handheld device, a mobile device, a camera, an Internet of Things (IoT) device, an Augmented Reality (AR)/Virtual Reality (VR) device, and so on. Also, the electronic device 100 may be at least one of a microcontroller, a processor, a System on Chip (SoC), an integrated chip (IC), a microprocessor based programmable consumer electronic device, and so on.
Examples of the media/media stream may be, but are not limited to, video, animated images, Graphic Interchange Formats (GIFs), a batch of moving images, and so on. In an example, the video may include a low frame per second (FPS) video, an indoor slow motion video, a High Dynamic Range (HDR) video, and so on. In an example, the media may be captured under low light conditions. In another example, the media may be captured using inferior/low quality sensors. In an example, the media may include, but is not limited to, at least one of noise, low brightness, artificial light flickering, color artifacts, and so on. Embodiments herein use the terms such as “media”, “video”, “media stream”, “video stream”, “image frames”, and so on, interchangeably throughout the disclosure.
The electronic device 100 may enhance the media/media stream stored in a memory or received from at least one external device. Alternatively, the electronic device 100 may enhance the media being captured in real-time. Enhancing the media refers to denoising the media and removing the different artifacts (such as, the artificial light flickering, the color artifacts, and so on) from the media.
The electronic device 100 includes a memory 102, a communication interface 104, a camera (camera sensor) 106, a display 108, and a controller (processor) 110. The electronic device 100 may also communicate with one or more external devices using a communication network to receive the media for enhancing. Examples of the external devices may be, but are not limited to, a server, a database, and so on. The communication network may include, but is not limited to, at least one of a wired network, a value added network, a wireless network, a satellite network, or a combination thereof. Examples of the wired network may be, but are not limited to, a Local Area Network (LAN), a Wide Area Network (WAN), an Ethernet, and so on. Examples of the wireless network may be, but are not limited to, a cellular network, a wireless LAN (Wi-Fi), Bluetooth, Bluetooth low energy, Zigbee, Wi-Fi Direct (WFD), Ultra-wideband (UWB), infrared data association (IrDA), near field communication (NFC), and so on.
The memory 102 may include at least one type of storage medium, from among a flash memory type storage medium, a hard disk type storage medium, a multi-media card micro type storage medium, a card type memory (for example, an SD or an XD memory), random-access memory (RAM), static RAM (SRAM), read-only memory (ROM), electrically erasable programmable ROM (EEPROM), programmable ROM (PROM), a magnetic memory, a magnetic disk, and/or an optical disk.
The memory 102 may store at least one of the media, an input media stream received for enhancing, an output media stream (i.e., the enhanced media stream), and so on.
The memory 102 may also store a first neural network 202a, a second neural network 202b, and a third neural network 202c, which may be used to generate the output media stream by processing the input media stream. In an embodiment, the first neural network 202a may be a high complexity neural network (HCN) with one input frame of the media. In an embodiment, the second neural network 202b may be a temporally guided lower complexity neural network (TG-LCN) with ‘q’ number of input frames and a previous output frame for joint deflickering or joint denoising. In an embodiment, the third neural network 202c may be a neural network with ‘p’ number of input frames and the previous output frame for denoising, wherein ‘p’ is less than ‘q’. Each neural network is described later.
Examples of the first, second, and third neural networks (202a, 202b, and 202c) may be, but are not limited to, a deep neural network (DNN), an Artificial Intelligence (AI) model, a machine learning (ML) model, a multi-class Support Vector Machine (SVM) model, a Convolutional Neural Network (CNN) model, a recurrent neural network (RNN), a stacked hourglass network, a restricted Boltzmann Machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), a generative adversarial network (GAN), a regression based neural network, a deep reinforcement model (with ReLU activation), a deep Q-network, a residual network, a Conditional Generative Adversarial Network (CGAN), and so on.
The first, second, and third neural networks (202a, 202b, and 202c) may include a plurality of nodes, which may be arranged in the layers. Examples of the layers may be, but are not limited to, a convolutional layer, an activation layer, an average pool layer, a max pool layer, a concatenated layer, a dropout layer, a fully connected (FC) layer, a SoftMax layer, and so on. Each layer has a plurality of weight values and performs a layer operation through calculation of a previous layer and an operation of a plurality of weights/coefficients. A topology of the layers of the first, second, and third neural networks (202a, 202b, and 202c) may vary based on the type of the respective network. In an example, the first, second, and third neural networks (202a, 202b, and 202c) may include an input layer, an output layer, and a hidden layer. The input layer receives a layer input and forwards the received layer input to the hidden layer. The hidden layer transforms the layer input received from the input layer into a representation, which can be used for generating the output in the output layer. The hidden layers extract useful/low level features from the input, introduce non-linearity in the network and reduce a feature dimension to make the features equivariant to scale and translation. The nodes of the layers may be fully connected via edges to the nodes in adjacent layers. The input received at the nodes of the input layer may be propagated to the nodes of the output layer via an activation function that calculates the states of the nodes of each successive layer in the network based on coefficients/weights respectively associated with each of the edges connecting the layers.
The first, second, and third neural networks (202a, 202b, and 202c) may be trained using at least one learning method to perform at least one intended function. Examples of the learning method may be, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, regression-based learning, and so on. The trained first, second, and third neural networks (202a, 202b, and 202c) may be a neural network model in which a number of layers, a sequence for processing the layers and parameters related to each layer may be known and fixed for performing the at least one intended function. Examples of the parameters related to each layer may be, but are not limited to, activation functions, biases, input weights, output weights, and so on, related to the layers of the first, second, and third neural networks (202a, 202b, and 202c). A function associated with the learning method may be performed through the non-volatile memory, the volatile memory, and the controller 110. The controller 110 may include one or more processors. At this time, the one or more processors may be a general purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an Artificial Intelligence (AI)-dedicated processor such as a neural processing unit (NPU).
The one or more processors may perform the at least one intended function in accordance with a predefined operating rule of the first, second, and third neural networks (202a, 202b, and 202c), stored in the non-volatile memory and the volatile memory. The predefined operating rules of the first, second, and third neural networks (202a, 202b, and 202c) are provided through training the modules using the learning method.
Herein, being provided through learning means that, by applying the learning method to a plurality of learning data, a predefined operating rule, or the first, second, and third neural networks (202a, 202b, and 202c) of a desired characteristic is made. The intended functions of the first, second, and third neural networks (202a, 202b, and 202c) may be performed in the electronic device 100 itself in which the learning according to an embodiment is performed, and/or may be implemented through a separate server/system.
The communication interface 104 may be configured to communicate with the one or more external devices using communication methods that have been supported by the communication network. The communication interface 104 may include components such as a wired communicator, a short-range communicator, a mobile/wireless communicator, and a broadcasting receiver. The wired communicator may enable the electronic device 100 to communicate with the external devices using communication methods such as, but not limited to, wired LAN, the Ethernet, and so on. The short-range communicator may enable the electronic device 100 to communicate with the external devices using communication methods such as, but not limited to, Bluetooth low energy (BLE), near field communication (NFC), WLAN (or Wi-Fi), Zigbee, infrared data association (IrDA), Wi-Fi direct (WFD), UWB communication, Ant+ (interoperable wireless transfer capability) communication, shared wireless access protocol (SWAP), wireless broadband internet (Wibro), wireless gigabit alliance (WiGiG), and so on. The mobile communicator may transceive wireless signals with at least one of a base station, an external terminal, or a server on a mobile communication network/cellular network. In an example, the wireless signal may include a speech call signal, a video telephone call signal, or various types of data, according to transceiving of text/multimedia messages. The broadcasting receiver may receive a broadcasting signal and/or broadcasting-related information from the outside through broadcasting channels. The broadcasting channels may include satellite channels and ground wave channels. In an embodiment, the electronic device 100 may or may not include the broadcasting receiver.
The camera sensor 106 may be configured to capture the media.
The display 108 may be configured to enable a user to interact with the electronic device 100. The display 108 may also be configured to display the output media stream to the user.
The controller 110 may be configured to enhance the media/media stream in real-time. In an embodiment, the controller 110 may enhance the media using a temporal guided adaptive neural network/CNN switching. The temporal guided adaptive neural network switching refers to switching between the first, second, and third neural networks (202a-202c) to enhance the media.
For enhancing the media stream, the controller 110 receives the media stream. In an example, the controller 110 may receive the media stream from the memory 102. In another example, the controller 110 may receive the media stream from the external device. In another example, the controller 110 may receive the media stream from the camera 106. Embodiments herein use the terms such as “media”, “media stream”, “input media stream”, “input video”, “input video frames”, “input video sequence” and so on, interchangeably to refer to media captured under the low light conditions or using the low quality sensors.
When the media stream is received, the controller 110 performs an alignment of a plurality of frames of the media stream. In an example, the plurality of frames may correspond to a plurality of image frames.
After aligning the plurality of frames of the media stream, the controller 110 corrects the brightness of the plurality of frames. For correcting the plurality of frames, the controller 110 identifies a single frame of the plurality of frames or the plurality of frames of the media stream as an input frame. The controller 110 linearizes the input frame using an Inverse Camera Response Function (ICRF). The controller 110 chooses a brightness value for correcting the brightness of the input frame using a future temporal guidance, on linearizing the input frame. For choosing the brightness value in accordance with the future temporal guidance, the controller 110 analyzes the brightness of the input frame and a future temporal buffer. The future temporal buffer is the next n frames after the input frame. For example, if the most recently captured frame is the t-th frame, the current input frame will be the (t-n)th frame, and the frames from (t-n) to t constitute the future temporal buffer. There may therefore be a delay of n frames between the camera stream and the output. The controller 110 chooses a constant boost value as the brightness value, based on analyzing that the brightness of the input frame is less than a threshold and the brightness of all frames in the future temporal buffer is less than the threshold. In an embodiment, the threshold value can be set empirically after experiments. The controller 110 chooses a boost value of a monotonically decreasing function as the brightness value, based on analyzing that the brightness of the input frame is less than the threshold and the brightness of all the frames in the future temporal buffer is greater than the threshold. The controller 110 chooses a unit gain (no boost) value as the brightness value, based on analyzing that the brightness of the input frame is greater than the threshold and the brightness of all the frames in the future temporal buffer is greater than the threshold. Thereby, the controller 110 does not boost the brightness of the input frame when the unit gain value is chosen. The controller 110 chooses a boost value of a monotonically increasing function as the brightness value, based on analyzing that the brightness of the input frame is greater than the threshold, and the brightness of the frames in the future temporal buffer is less than the threshold.

After choosing the brightness value, the controller 110 applies a linear boost on the input frame based on the chosen brightness value. The controller 110 then applies a Camera Response Function (CRF) on the input frame to correct the brightness of the input frame. The CRF may be a function of a type of the camera 106 used to capture the media stream (hereinafter referred to as a sensor type) and metadata. The metadata includes an exposure value and International Standard Organization (ISO). The CRF and the ICRF may be characterized and stored in Look-up-tables (LUTs).
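For illustration, the linearize-boost-remap flow described above may be sketched as follows, assuming 8-bit frames and 256-entry LUTs; the gamma-curve LUTs at the end are stand-ins for the sensor-characterized ICRF/CRF, which in practice depend on the sensor type and the metadata (exposure value, ISO).

```python
import numpy as np

def correct_brightness(frame_u8, boost, icrf_lut, crf_lut):
    """Linearize with the ICRF LUT, apply the linear boost chosen via future
    temporal guidance, then re-map the frame with the CRF LUT."""
    linear = icrf_lut[frame_u8]                                # inverse camera response: code value -> linear
    boosted = np.clip(linear * boost, 0.0, 1.0)                # brightness boost in the linear domain
    codes = np.clip(boosted * 255.0, 0, 255).astype(np.uint8)
    return crf_lut[codes]                                      # camera response applied back

# Stand-in LUTs (a simple gamma curve); real LUTs are characterized per sensor
# type and metadata (exposure value, ISO) and stored as look-up tables.
gamma = 2.2
icrf_lut = (np.arange(256) / 255.0) ** gamma
crf_lut = np.clip((np.arange(256) / 255.0) ** (1.0 / gamma) * 255.0, 0, 255).astype(np.uint8)
```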
After correcting the brightness of the plurality of frames of the media stream, the controller 110 selects one of the first neural network 202a, the second neural network 202b, and the third neural network 202c for processing the media stream. The controller 110 selects one of the three neural networks (202a, 202b, and 202c) by analyzing parameters of the plurality of frames of the media stream. Examples of the parameters may be, but are not limited to, shot boundary detection, and artificial light flickering.
For selecting one among the three neural networks (202a, 202b, and 202c) for processing the plurality of frames of the media stream, the controller 110 analyzes each frame with respect to earlier frames to check if the shot boundary detection is associated with each of the plurality of frames. The shot boundary detection may be checked by analyzing temporal similarity between the plurality of frames. The controller 110 may analyze that the shot boundary detection is associated with each of the plurality of frames based on an absence of the temporal similarity between the plurality of frames. If the shot boundary detection is associated with the plurality of frames, the controller 110 selects the first neural network 202a for processing the plurality of frames of the media stream. If the shot boundary detection is not associated with the plurality of frames, the controller 110 analyzes a presence of the artificial light flickering in the plurality of frames. If the artificial light flickering is present in the plurality of frames, the controller 110 selects the second neural network 202b for processing the plurality of frames of the media stream. If the artificial light flickering is not present in the plurality of frames, the controller 110 selects the third neural network 202c for processing the plurality of frames of the media stream.
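A high-level sketch of this switching logic is shown below; the mean-absolute-difference shot-boundary test, the brightness-variance flicker test, the thresholds, and the network call signatures are assumptions made only to keep the example self-contained.

```python
import numpy as np

def is_shot_boundary(cur, prev, thresh=30.0):
    # Assumed heuristic: a large mean absolute difference means low temporal similarity.
    if prev is None:
        return True
    return np.mean(np.abs(cur.astype(np.float32) - prev.astype(np.float32))) > thresh

def has_artificial_flicker(frames, thresh=4.0):
    # Assumed heuristic: strong frame-to-frame variation of the global brightness.
    means = np.array([float(f.mean()) for f in frames])
    return float(np.std(means)) > thresh

def enhance_frame(frames_q, frames_p, prev_frame, prev_output, hcn, tg_lcn_q, tg_lcn_p):
    """frames_q: the 'q' aligned, brightness-corrected frames around the current frame;
    frames_p: the smaller 'p'-frame subset; hcn / tg_lcn_q / tg_lcn_p: callables standing
    in for the trained networks 202a, 202b, and 202c."""
    cur = frames_q[len(frames_q) // 2]
    if is_shot_boundary(cur, prev_frame):
        return hcn(cur)                              # high complexity single-frame network
    if has_artificial_flicker(frames_q):
        return tg_lcn_q(frames_q, prev_output)       # q-frame joint deflickering and denoising
    return tg_lcn_p(frames_p, prev_output)           # p-frame denoising (p < q)
```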
In an embodiment, the first neural network 202a may be the high complexity network (HCN) with one input frame (current frame) of the media. The first neural network 202a includes multiple residual blocks at the lowest level for enhancing noise removal capabilities. Embodiments herein use the terms such as “first neural network”, “HCN”, “high complexity CNN”, and so on, interchangeably throughout the disclosure.
In an embodiment, the second neural network 202b may be the temporally guided lower complexity neural network (TG-LCN) with the ‘q’ number of input frames and the previous output frame for joint deflickering or joint denoising. The second neural network 202b includes at least one convolution operation with fewer feature maps and a previous output frame as a guide for processing the plurality of input frames. Embodiments herein use the terms such as “second neural network”, “TG-LCN (n=q)”, “TG-LCN”, “‘q’ frame flicker reduction denoiser”, and so on, interchangeably throughout the disclosure.
In an embodiment, the third neural network 202c may be the neural network with the ‘p’ number of input frames and the previous output frame for denoising, wherein the ‘p’ is less than the ‘q’ (i.e., the number of frames of the media stream). In an example, consider that the media stream may include 5 frames (i.e., ‘q’=5). In such a scenario, ‘p’ may be equal to 3 frames (i.e., ‘p’=3). Embodiments herein use the terms such as “third neural network”, “TG-LCN (n=p, p<q)”, “TG-LCN (n=p)”, and so on, interchangeably throughout the disclosure.
The first, second, and third neural networks (202a, 202b, and 202c) may be trained neural networks. In an embodiment, the controller 110 may train the first, second, and third neural networks (202a, 202b, and 202c) using a multi-frame Siamese training method.
For training the first, second, and third neural networks (202a, 202b, and 202c), the controller 110 creates a dataset. The dataset includes one of a local dataset and a global dataset.
For creating the dataset, the controller 110 captures burst datasets, or alternatively, the controller 110 may receive the burst datasets from the external devices. The burst datasets include, but are not limited to, one of low light static media with noise inputs and a clean ground truth frame, or the like. The clean ground truth means a ground truth image without noise. The clean ground truth can be obtained by averaging the individual frames in the burst. After capturing the burst datasets, the controller 110 simulates a global motion and a local motion of each burst dataset using a synthetic trajectory generation and a synthetic stop motion, respectively.

For simulating the global motion of each burst dataset, the controller 110 estimates a polynomial coefficient range based on parameters including a maximum translation and a maximum rotation from the burst dataset. The maximum translation and rotation signify the maximum motion that the camera can undergo during a capture session. This can be used for creating synthetic motion, and can be set empirically after experiments. The controller 110 generates a 3rd order polynomial trajectory using the estimated polynomial coefficient range and approximates the 3rd order trajectory using a maximum depth. The maximum depth determines the distance of the scene from the camera for approximately planar scenes. The maximum depth can be set empirically after experiments. In an example herein, the 3rd order polynomial trajectory may be a trajectory used by the camera 106 to capture the burst dataset. The controller 110 generates uniform sample points based on a pre-defined sampling rate and the approximated 3D trajectory. The pre-defined sampling rate may be a sampling rate that controls a smoothness between the frames of each burst dataset. The controller 110 generates ‘n’ affine transformations based on the generated uniform sample points and applies the generated ‘n’ affine transformations on each burst dataset, thereby creating the global dataset by simulating the global motion of each burst dataset.

For simulating the local motion of each burst dataset, the controller 110 captures a local object motion from each burst dataset in a static scene using the synthetic stop motion. For capturing the local object motion, the controller 110 captures an input and ground truth scene with a background scene from each burst dataset. The controller 110 also captures an input and ground truth scene with a foreground object from each burst dataset. The controller 110 crops out the foreground object and creates synthetic scenes by positioning the foreground object at different locations of the background scene. On capturing the local object motion, the controller 110 simulates a motion blur for each local object motion by averaging a pre-defined number of frames of the burst dataset, thereby creating the local dataset by simulating the local motion of each burst dataset.

After simulating the global motion and the local motion of each burst dataset, the controller 110 removes one or more burst datasets having structural and brightness mismatches between the clean ground truth frame and the low light static media. The controller 110 creates the dataset by including the one or more burst datasets that do not include the structural and brightness mismatches between the clean ground truth frame and the low light static media.
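A condensed sketch of the global-motion simulation is given below: random third-order polynomials for translation and rotation are sampled at uniform points along the trajectory and converted into affine warps applied to the burst frames. The coefficient scaling, the omission of depth-dependent scaling, and the use of OpenCV's warpAffine are simplifying assumptions.

```python
import numpy as np
import cv2

def random_poly(max_value, rng):
    # 3rd order polynomial, rescaled so that |p(t)| <= max_value on t in [0, 1].
    coeffs = rng.uniform(-1.0, 1.0, size=4)
    coeffs *= max_value / (np.sum(np.abs(coeffs)) + 1e-8)
    return np.poly1d(coeffs)

def simulate_global_motion(frames, max_translation=8.0, max_rotation_deg=2.0, seed=0):
    """Warp a static burst along a synthetic camera trajectory (translation in pixels,
    rotation in degrees) sampled at uniform points, one affine transform per frame."""
    rng = np.random.default_rng(seed)
    tx, ty, rot = (random_poly(max_translation, rng),
                   random_poly(max_translation, rng),
                   random_poly(max_rotation_deg, rng))
    h, w = frames[0].shape[:2]
    warped = []
    for i, frame in enumerate(frames):
        t = i / max(len(frames) - 1, 1)                      # uniform sample point along the trajectory
        m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), float(rot(t)), 1.0)
        m[0, 2] += float(tx(t))                              # translation components of the affine transform
        m[1, 2] += float(ty(t))
        warped.append(cv2.warpAffine(frame, m, (w, h), flags=cv2.INTER_LINEAR))
    return warped
```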
After creating the dataset, the controller 110 selects at least two sets of frames from the created dataset. Each set includes at least three frames. The controller 110 adds a synthetic motion to the selected at least two sets of frames. The at least two sets of frames added with the synthetic motion include different noise realizations. The controller 110 performs a Siamese training of the first, second, and third neural networks (202a, 202b, and 202c) using a ground truth and the at least two sets of frames added with the synthetic motion. The ground truth can be used for loss computations for training the neural network. For performing the Siamese training of the first, second, and third neural networks (202a, 202b, and 202c), the controller 110 passes the at least two sets of frames with the different noise realizations to the neural network being trained (202a, 202b, or 202c) to generate at least two sets of output frames. The controller 110 computes a Siamese loss by computing an L2 loss between the at least two sets of output frames. The controller 110 computes a pixel loss between an average of the at least two sets of output frames and a ground truth corresponding to the output frames. The controller 110 computes a total loss using the Siamese loss and the pixel loss and trains the first, second, and third neural networks (202a, 202b, and 202c) using the computed total loss.
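A PyTorch sketch of one Siamese training step as described above is shown below, assuming the two noise realizations pass through the same weight-sharing network; the L1 form of the pixel term and the relative weight of the Siamese term are assumptions.

```python
import torch
import torch.nn.functional as F

def siamese_training_step(model, optimizer, frames_a, frames_b, ground_truth, siamese_weight=1.0):
    """One Siamese training step. frames_a / frames_b: the same clip with two different
    noise realizations, shaped [batch, frames * channels, H, W]; ground_truth: clean target."""
    out_a = model(frames_a)                                       # same weight-sharing network for both passes
    out_b = model(frames_b)

    siamese_loss = F.mse_loss(out_a, out_b)                       # L2 loss between the two outputs
    pixel_loss = F.l1_loss((out_a + out_b) / 2.0, ground_truth)   # average of the outputs vs. the ground truth
    total_loss = pixel_loss + siamese_weight * siamese_loss

    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return float(total_loss.item())
```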
After selecting the neural network among the first, second, and third neural networks (202a, 202b, and 202c), the controller 110 generates the output media stream by processing the plurality of frames of the media stream using the selected neural network (202a, 202b, or 202c). The output media stream may be a denoised media stream with enhanced brightness and zero flicker. Embodiments herein use the terms such as “output media stream”, “output”, “output video stream”, “output video frames”, “output image frames”, “denoised media/video”, “enhanced media/video”, and so on, interchangeably to refer to media including zero noise and zero artificial flickering (i.e., including zero artifacts), and the corrected brightness.
For generating the output media stream, the controller 110 selects the single or the plurality of frames of the media stream as an input processing frame. The controller 110 down samples the input processing frame over multiple scales to generate a low-resolution input. In an example herein, the controller 110 down samples the input processing frame two times. On generating the low-resolution input, the controller 110 processes the low-resolution input using the selected one of the first neural network 202a, the second neural network 202b, or the third neural network 202c at a lower resolution to generate a low-resolution output. The controller 110 then upscales the processed low-resolution output over multiple scales to generate the output media stream. For example, the controller 110 upscales the low-resolution output two times. A number of scales for which the down sampling has been performed may be equal to a number of scales for which the upscaling has been performed.
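The down-sample, process, up-sample structure can be sketched as below; for brevity the guided upsampling of the embodiments is replaced here with plain bilinear interpolation (a guided-filter variant is sketched later, after the convolution guided filter description), so only the overall multi-scale structure is illustrated.

```python
import torch
import torch.nn.functional as F

def process_multiscale(network, frames, num_scales=2):
    """frames: tensor [batch, channels, H, W]. Downsample by 2 `num_scales` times, run the
    selected network at low resolution, then upsample back to the original resolution."""
    full_size = frames.shape[-2:]
    low = frames
    for _ in range(num_scales):
        low = F.interpolate(low, scale_factor=0.5, mode='bilinear', align_corners=False)
    low_out = network(low)                                   # low-resolution enhancement
    return F.interpolate(low_out, size=full_size, mode='bilinear', align_corners=False)
```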
The controller 110 may be further configured to dynamically vary a complexity of the selected one of first neural network 202a, the second neural network 202b, or the third neural network 202c by changing a number of scales for processing the low-resolution input. The complexity of the selected one of the first neural network 202a, the second neural network 202b, or the third neural network 202c may be varied with an inverse relationship with respect to the number of frames of the media stream.
The controller 110 saves/stores the generated output media stream in the memory 102.
The reception and aligner module 204 may be configured to receive the media stream/input media for enhancing and to perform the alignment of the plurality of frames of the media stream.
The brightness correction module 206 may be configured to correct the brightness of the plurality of frames of the media stream. The brightness correction module 206 identifies the single frame or the plurality of frames of the media stream as the input frame. The brightness correction module 206 linearizes the input frame using the ICRF and chooses the brightness value for correcting the brightness of the input frame using the future temporal guidance. After choosing the brightness value, the brightness correction module 206 applies the linear boost on the input frame based on the chosen brightness value. The brightness correction module 206 applies the CRF on the input frame to correct the brightness of the input frame.
The neural network selection module 208 may be configured to select one among the first, second, and third neural networks (202a, 202b, and 202c) for processing the plurality of frames of the media stream. For selecting the neural network (202a, 202b, or 202c), the neural network selection module 208 analyzes each frame with respect to earlier frames to check if the shot boundary detection is associated with each of the plurality of frames. If the shot boundary detection is associated with the plurality of frames, the neural network selection module 208 selects the first neural network 202a for processing the plurality of frames of the media stream. If the shot boundary detection is not associated with the plurality of frames, the neural network selection module 208 analyzes the presence of the artificial light flickering in the plurality of frames of the media stream. If the artificial light flickering is present in the plurality of frames, the neural network selection module 208 selects the second neural network 202b for processing the plurality of frames of the media stream. If the artificial light flickering is not present in the plurality of frames, the neural network selection module 208 selects the third neural network 202c for processing the plurality of frames of the media stream.
The output generation module 210 may be configured to generate the output media stream by processing the plurality of frames of the media stream using the selected first neural network 202a, or the second neural network 202b, or the third neural network 202c. The output generation module 210 selects the single or the plurality of frames of the media stream as the input processing frame. The output generation module 210 down samples the input processing frame over the multiple scales to generate the low-resolution input. The output generation module 210 processes the low-resolution input using the selected first neural network 202a, or the second neural network 202b, or the third neural network 202c. The output generation module 210 upscales the low-resolution output over the multiple scales using the higher resolution frames as the guide to generate the output media stream.
The training module 212 may be configured to train the first, second, and third neural networks (202a, 202b, and 202c) using the multi-frame Siamese training method. For training the first, second, and third neural networks (202a, 202b, and 202c), the training module 212 creates the dataset for training the first, second, and third neural networks (202a, 202b, and 202c). The dataset includes one of the local dataset and the global dataset. The training module 212 selects the at least two sets of frames from the created dataset and adds the synthetic motion to the selected at least two sets of frames. On adding the synthetic motion to the selected at least two sets of frames, the training module 212 performs the Siamese training of the first, second, and third neural networks (202a, 202b, and 202c) using the ground truth media and the at least two sets of frames added with the synthetic motion.
Embodiments herein further describe the enhancement of the media by considering the media as a video for example, but it may be obvious to a person skilled in the art that any other type of media may be considered.
The electronic device 100 identifies key frames of the video by computing the temporal similarity between the frames of the video. The key frames refer to the frames of the video that are associated with the shot boundary detection. The electronic device 100 uses the HCN 202a for denoising the key frames of the video. The electronic device 100 uses the TG-LCN for denoising non-key frames of the video using a temporal output guidance. The non-key frames of the video may be the frames of the video including the artificial light flickering. The temporal output guidance may refer to the previous output frame, which has been used as the guide. Both the HCN 202a and the TG-LCN 202b may be composed of multi-scale inputs with convolutional guided filters for fast processing and reduced memory. The electronic device 100 may use the third neural network 202c for denoising the frames of the video that do not include the artificial light flickering and are not associated with the shot boundary detection (i.e., have the temporal similarity with the other frames).
The electronic device 100 receives the video for enhancing, wherein the received video may be captured under the low light conditions or using the low quality sensors. On receiving the video, the electronic device 100 may perform Vision Defect Identification System (VDIS) processing (an optional step) on the video to detect and correct any defects in the video.
After performing the VDIS, the electronic device 100 aligns the frames of the video (using any suitable existing methods). In an example herein, consider that the received video may include five consecutive frames (It−2, It−1, It, It+1, It+2) (i.e., q=5) (referred to as input frames). After aligning the input frames of the video, the electronic device 100 performs the brightness correction on the input frames of the video.
After performing the brightness correction, the electronic device 100 checks if the shot boundary detection is associated with the input frames by checking the temporal similarity between the input frames of the video. If the input frames are not similar (i.e., the shot boundary detection is associated with the input frames), the electronic device 100 uses the HCN to generate the output frames for the input frames that are not similar. In an example herein, consider that the input frame (It) is associated with the shot boundary detection. In such a scenario, the electronic device 100 uses the HCN to generate the output frame (Ot) by denoising the input frame (It).
If the input frames are similar (i.e., absence of the shot boundary detection), the electronic device 100 checks the input frames (‘q’ frames) to detect whether the artificial light flickering is present due to artificial lights. If the artificial light flickering is present in the ‘q’ (i.e., 5) input frames, the electronic device 100 uses the TG-LCN/‘q’ frame flicker reduction denoiser 202b (n=q) to generate the output frame (Ot) using the ‘q’ input frames and the previous output frame ‘Ot−1’. The ‘q’ frame flicker reduction denoiser (TG-LCN) (n=q) performs the denoising and flicker elimination on the ‘q’ input frames. If artificial flickering due to the artificial lights is not present in the ‘q’ input frames, the electronic device 100 uses the third neural network/TG-LCN (n=p) 202c to generate the output frame (Ot), using ‘p’ input frames (for example, ‘p’=3 in the example shown in
The electronic device 100 receives the single or the multiple frames of the video/video sequence as the input. The electronic device 100 linearizes the input frame using the ICRF. The electronic device 100 then chooses the brightness value using the future temporal guidance. Choosing the brightness value is shown in
As shown in
where ‘n’ indicates a number of frames of the video.
After choosing the brightness value, the electronic device 100 applies the linear boost on the input frame based on the chosen brightness value. The electronic device 100 applies the CRF on the input frame to correct the brightness of the input frame. The CRF is the function of the sensor type and the metadata.
The HCN 202a may be a single frame denoising network. The HCN 202a includes multiple residual blocks at the lowest level to improve noise removal capabilities. The HCN 202a may process the input frames of the video that do not have the temporal similarity (i.e., associated with shot boundary detection) to generate the output video, which is the denoised video.
The TG-LCN/TG-LCN (n) may be a multi-frame denoising network, wherein ‘n’ denotes the number of input frames of the video. The TG-LCN 202b uses the previous output frame as the guide to process the input frames of the video to generate the output video, which allows the TG-LCN to be of much lower complexity than the HCN 202a. The TG-LCN does not use the residual blocks. The convolutional operations involved in the TG-LCN may contain fewer feature maps to reduce computation.
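A toy PyTorch module illustrating how the previous output frame can serve as a guide: the q input frames and the previous output are concatenated along the channel axis and processed by a few convolutions with a small number of feature maps. The layer count and channel widths are arbitrary choices for illustration and are not the embodiments' actual architecture.

```python
import torch
import torch.nn as nn

class ToyTGLCN(nn.Module):
    """Toy temporally guided low-complexity denoiser: q input frames plus the previous
    output frame are concatenated channel-wise and mapped to a single output frame."""
    def __init__(self, q=5, channels=3, features=16):
        super().__init__()
        in_ch = (q + 1) * channels                      # q input frames + previous output frame as guide
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, input_frames, prev_output):
        # input_frames: [batch, q * channels, H, W]; prev_output: [batch, channels, H, W]
        return self.body(torch.cat([input_frames, prev_output], dim=1))
```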
The electronic device 100 receives the single frame or plurality of frames of the video as the input processing frame/input frames. The electronic device 100 down samples the input processing frame over multiple scales at the lower resolutions to generate the low-resolution input. The input processing frame may be down sampled at
resolution. The electronic device 100 uses the selected HCN 202a or the TG-LCN (n=q) 202b, or the third neural network 202c for processing the low-resolution input to generate the low-resolution output. The electronic device 100 upscales/up samples the low-resolution output at every lower level over the multiple scales using Convolution Guided Filters (CGF) to generate the output video. The CGF accepts the higher resolution input set, the low-resolution input and the low-resolution output to generate the output video having the higher resolution output image.
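The embodiments use learned convolution guided filters; as a non-learned stand-in that conveys the same idea of transferring a low-resolution result to high resolution with a high-resolution guide, a classical guided-filter upsampling sketch (single-channel, applied per channel in practice) is shown below. The radius and regularization values are assumptions.

```python
import cv2
import numpy as np

def box(img, radius):
    return cv2.boxFilter(img, -1, (2 * radius + 1, 2 * radius + 1))

def guided_upsample(guide_lr, out_lr, guide_hr, radius=2, eps=1e-4):
    """Classical guided-filter upsampling (single-channel float32 images in [0, 1]):
    fit out_lr ~ a * guide_lr + b locally at low resolution, upsample the coefficients,
    and apply them to the high-resolution guide frame."""
    mean_i = box(guide_lr, radius)
    mean_p = box(out_lr, radius)
    cov_ip = box(guide_lr * out_lr, radius) - mean_i * mean_p
    var_i = box(guide_lr * guide_lr, radius) - mean_i * mean_i
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    h, w = guide_hr.shape[:2]
    a_hr = cv2.resize(box(a, radius), (w, h), interpolation=cv2.INTER_LINEAR)
    b_hr = cv2.resize(box(b, radius), (w, h), interpolation=cv2.INTER_LINEAR)
    return a_hr * guide_hr + b_hr
```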
In an embodiment herein, the network, where the multi-scale pyramidal approach is applied to the HCN 202a may be denoted by HCN′, and the network where the multi-scale pyramidal approach is applied to TG-LCN 202b is denoted by TG-LCN′. The electronic device 100 dynamically varies the complexity of the HCN 202a, the TG-LCN 202b, or the third neural network 202c with the inverse relationship with respect to the number of frames of the video.
For training the first, second, and third neural networks (202a, 202b, and 202c), the electronic device 100 creates the dataset using low and higher exposure burst shots and uses self-supervised approaches for refining the dataset. The electronic device 100 corrects the brightness of the created dataset. The electronic device 100 then trains the first, second, and third neural networks (202a, 202b, and 202c) using the multi-frame Siamese training method and self-similarity loss for temporal consistency.
The electronic device 100 first creates the dataset for training the first, second, and third neural networks (202a, 202b, and 202c). Creating the dataset is shown in
ISO of input k >>1. In an example herein, every capture of the burst dataset may consist of a set of 5×j noisy inputs and k≤j clean ground truth frames. In an example herein, the electronic device 100 may use any application like a custom dump application for capturing the burst dataset. After capturing the dataset, the electronic device 100 simulates the global motion and the local motion of each burst dataset using the synthetic trajectory generation and the synthetic stop motion, respectively, thereby creating the local dataset and the global dataset. Simulating the global motion and the local motion is shown in
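Since the clean ground truth frame is obtained by averaging the frames of a static burst (as noted earlier), the corresponding step reduces to a short NumPy routine; the uint8 frame format is an assumption.

```python
import numpy as np

def clean_ground_truth(burst_frames):
    """Average a static noisy burst (a list of HxWx3 uint8 frames) into a clean ground truth frame."""
    stack = np.stack(burst_frames).astype(np.float32)
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)
```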
Once the dataset has been created, the electronic device 100 selects two sets of frames from the created dataset and adds the synthetic motion to the selected sets, such that the two sets contain different noise realizations of the same content. The electronic device 100 passes the two sets of input frames through the neural network being trained to generate two outputs (output 1 and output 2).
The electronic device 100 computes the Siamese loss by computing the L2 loss between the output 1 and the output 2. The electronic device 100 also computes the pixel loss between the average of the output 1 and the output 2, and the ground truth. The electronic device 100 computes the total loss using the Siamese loss and the pixel loss. The electronic device 100 trains the first, second, and third neural networks (202a, 202b, and 202c) using the computed total loss.
Consider an example scenario, wherein the electronic device 100 receives the low FPS video captured under the low light conditions for enhancement, wherein the low FPS video refers to the video with the FPS of up to 60. In such a scenario, the electronic device performs the VDIS and aligns the input frames of the video. The electronic device 100 performs the brightness correction on the input frames of the video and appends the aligned input frames of the video to form the input video sequence.
The electronic device 100 checks if the shot boundary detection is associated with the input frames of the input video sequence by analyzing the temporal similarity between the input frames. In an example herein, consider that the input frame (It) does not have the temporal similarity with the other input frames
In such a scenario, the electronic device 100 selects the HCN 202a for processing the input frame (It) and the third neural network/TG-LCN (n=p) 202c for processing the input frames
to generate the output video Ot.
In an embodiment, since the artificial light flickering is minimal in the low FPS video, the electronic device 100 does not check the presence of the artificial light flickering in the input frames of the input video sequence.
Consider an example scenario, wherein the electronic device 100 receives the indoor slow motion video captured at high frame rates (240/960 FPS), thereby resulting in the video frames with noise. In such a scenario, the electronic device 100 enhances the indoor slow motion video by denoising and removing the artificial light flickering from the slow motion video. The electronic device 100 checks the input frames of the slow motion video for the temporal similarity. If the input frames are not similar (i.e., presence of the shot boundary detection), the electronic device 100 uses the HCN 202a to generate the output frame (Ot), wherein the current frame (It) serves as input to the HCN′ 202a and the HCN′ 202a denoises the current frame (It). If the input frames of the slow motion video are similar (i.e., absence of the shot boundary detection), the electronic device 100 checks if the artificial light flickering is present in the input frames (‘q’ input frames) of the slow motion video due to the artificial lights. If the artificial light flickering is present in the ‘q’ input video frames, the electronic device 100 selects the second neural network/‘q’ frame flicker reduction denoiser (TG-LCN′) (n=q) 202b to generate the output video frame (Ot) using the ‘q’ input frames and the previous output frame Ot−1. The ‘q’ frame flicker reduction denoiser (TG-LCN′) (n=q) 202b performs the denoising and flicker elimination on the ‘q’ input video frames. If artificial light flickering is not present in the ‘q’ input video frames, the electronic device 100 uses the third neural network/TG-LCN′ (n=p) 202c to generate the output video frame (Ot), using ‘p’ input frames (p=3 in the shown example: (It−1, It, It+1)) and the previous output frame Ot−1. Using Ot−1 as the guide allows a network of much lower complexity to be used and also helps in removing the artificial light flickering.
At step 1802, the method includes receiving, by an electronic device 100, the media stream. At step 1804, the method includes performing, by the electronic device 100, the alignment of the plurality of frames of the media stream. At step 1806, the method includes correcting, by the electronic device 100, the brightness of the plurality of frames.
At step 1808, the method includes selecting, by the electronic device 100, one of the first neural network 202a, the second neural network 202b, or the third neural network 202c, by analyzing parameters of the plurality of frames, after correcting the brightness of the plurality of frames. The parameters include at least one of the shot boundary detection and the artificial light flickering.
At step 1810, the method includes generating, by the electronic device 100, the output media stream by processing the plurality of frames of the media stream using the selected one of the first neural network 202a, the second neural network 202b, or the third neural network 202c. The various actions in method 1800 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in
Embodiments herein provide methods and systems for enhancing videos in real time using temporal guided adaptive CNN switching, wherein the videos have been captured in extreme low light, under high noise conditions and/or captured using inferior sensors. Embodiments herein provide a deep learning based pipeline to achieve real time video enhancement while minimizing noise and flicker artifacts. Embodiments herein provide methods and systems to choose between high and low complexity networks by analyzing temporal consistency of input frames, thus reducing average time and power required to process a video. Embodiments herein provide use of Siamese training to reduce flicker.
Embodiments herein provide a method for low light video enhancement. The method comprises receiving an input video stream corrupted by noise, low brightness or color artifacts. The brightness is boosted to desired levels using a pre-tuned look up table. The temporal similarity of consecutive frames is analyzed. If there are dissimilar frames (based on the analysis of the consecutive frames), a high complexity single frame DNN model is deployed. If there are similar frames (based on the analysis of the consecutive frames), a lower complexity multi frame (p) DNN model (for example, a 3 frame DNN model) guided by previous output is deployed. On detecting artificial light flickering in the input video stream, an input comprising multiple frames (q, where q > p) (for example, an input comprising five frames) is used to perform flicker removal along with noise reduction. The output from one of the pathways is saved to the output video stream.
Embodiments herein provide a method for fast video denoising. The method includes receiving single or multiple frames as input. The frames are down sampled to lower resolutions using multiple scales. The video frames are processed at a lower resolution, generating a low resolution output. The low resolution output is upscaled over multiple levels using higher resolution frames as a guide. A low resolution network for down sampling and up sampling may be trained using a Siamese training approach for temporal consistency.
Embodiments herein provide a deep learning based pipeline to achieve real-time video enhancement, while minimizing noise and flicker artifacts.
Embodiments herein provide a method to choose between high and low complexity networks by analyzing temporal consistency of input frames, thus reducing an average time required to process a video/media.
Embodiments herein provide a method to dynamically change network complexity at inference.
Embodiments herein provide a method for employing Siamese training to reduce the flicker.
The embodiments herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the elements. The elements shown in
The embodiments of the disclosure provide methods and systems for low light media enhancement. Therefore, it is understood that the scope of the protection is extended to such a program and in addition to a computer readable means having a message therein, such computer readable storage means contain program code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The method is implemented in an embodiment through or together with a software program written in, e.g., Very high speed integrated circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL or several software modules being executed on at least one hardware device. The hardware device may be any kind of portable device that may be programmed. The device may also include an ASIC, or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. The method embodiments described herein could be implemented partly in hardware and partly in software. Alternatively, the invention may be implemented on different hardware devices, e.g., using a plurality of CPUs.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments have been described, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
Number | Date | Country | Kind |
---|---|---|---|
202141026673 | Jun 2021 | IN | national |
2021 41026673 | Apr 2022 | IN | national |
This application is a continuation application of International Application No. PCT/KR2022/008294, filed on Jun. 13, 2022, which is based on and claims priority to Indian Provisional Patent Application No. 202141026673, filed on Jun. 15, 2021, in the Indian Intellectual Property Office, and to Indian Complete Patent Application No. 202141026673, filed on Apr. 11, 2022, in the Indian Intellectual Property Office, the disclosures of which are incorporated by reference herein in their entireties.
 | Number | Date | Country
---|---|---|---
Parent | PCT/KR2022/008294 | Jun 2022 | US
Child | 17889988 | | US