The present disclosure relates to image processing, and more particularly to methods and systems for detecting artifacts in media and enhancing the media by removing the artifacts using at least one artificial intelligence technique.
Images stored in a user device may comprise low quality images and high quality images. If the user device has efficient processing and computational capabilities, media captured using a camera of the user device may be of high quality.
When social networking applications are accessed using the user device by connecting to the Internet, the user device can receive media, which may be stored in the user device. The media received from the social networking applications may typically be of low quality, as significant compression may be applied to the media, for example, to save bandwidth during transfer of the media. Due to the compression, the resolution of the media may decrease. Thus, the media stored in the user device may span a varying range of qualities.
If or when a user migrates to a new user device that comprises a camera having more advanced features than the camera of the earlier user device, and if or when the processing and computational capabilities of the new user device are more efficient than those of the earlier user device, then media captured using the camera of the new user device may be of higher quality. Therefore, if or when the user transfers the media stored in the earlier user device to the new user device, the range of variation in the qualities of the media stored in the new user device may become greater.
The media transferred from the earlier used user device may have artifacts created during the capturing of the media. Low sensitivity of the camera and single frame processing may result in artifacts such as noise in the captured media if or when the media has been captured in low light conditions. Alternatively or additionally, motion of the camera may result in artifacts such as blur in the captured media. Moreover, poor environmental conditions and/or an unstable capturing position may result in artifacts such as reflections and/or shadows in the captured media. Currently, there are no means available to the new user device to improve or enhance the media stored in the new user device.
Aspects of the present disclosure provide methods and systems for enhancing the quality of media stored in a device and/or a cloud by detecting artifacts and/or degradations in the media, identifying at least one Artificial Intelligence (AI)-based media processing model for nullifying the detected artifacts and/or degradations, and enhancing the media by applying the at least one AI-based media processing model in a predetermined order.
Some embodiments of the present disclosure may comprise triggering the detection of artifacts and/or degradations in the media stored in a device. The triggering may be performed automatically or may be invoked manually by a user of the device. The device according to some embodiments may be configured to automatically trigger the detection of the artifacts if or when the device is idle, if or when the device is not being utilized, or if or when the media is stored in the device.
Alternative or additional embodiments of the present disclosure may comprise generating artifact/quality tag information associated with the media to indicate specific artifacts and/or degradations included in the media, and storing the artifact/quality tag information along with the media as metadata and/or in a dedicated database.
Alternative or additional embodiments of the present disclosure may comprise identifying, based on the artifact/quality tag information associated with the media, the at least one AI-based media processing model that needs to be applied to the media to enhance the media.
Alternative or additional embodiments of the present disclosure may comprise selecting a pipeline of AI-based media processing models arranged in a predetermined order. The AI-based media processing models can be applied to the media in the predetermined order, indicated in the pipeline, to enhance the media. The pipeline may be obtained based on feature vectors of the media, such as the artifact/quality tag information associated with the media, the identified AI-based media processing models to be applied to the media, dependencies among the identified AI-based media processing models, an aesthetic score of the media, media content, and the like. The pipeline may be obtained using a previous result from enhancing a reference media, having the same and/or similar feature vectors as the current media to be enhanced, by applying the AI-based media processing models in the predetermined order.
Alternative or additional embodiments of the present disclosure may ensure optimality of the enhancement by determining that an aesthetic score of the media has reached a maximum value after the enhancement, wherein the AI-based media processing models are applied recursively to the media, to enhance the media, until the aesthetic score of the media has reached the maximum value.
Alternative or additional embodiments herein may perform at least one operation comprising detecting artifacts in the media, generating artifact tag information associated with the media, and enhancing the media using at least one identified AI-based media processing model, in at least one of the device and the cloud.
Alternative or additional embodiments herein may perform the at least one operation in the background automatically or in the foreground on receiving commands from a user of the device to perform the at least one operation.
Accordingly, the embodiments of the present disclosure provide methods and systems for enhancing quality of media by detecting presence of artifacts and/or degradations in the media and nullifying the artifacts and the degradations using one or more AI-based media processing models.
In some embodiments, a method for enhancing media is provided. The method comprises detecting at least one artifact included in the media based on tag information indicating the at least one artifact included in the media, identifying at least one AI-based media enhancement model for enhancing the detected at least one artifact, and applying the at least one AI-based media enhancement model to the media for enhancing the media.
In some embodiments, the tag information regarding the media is encrypted, and the tag information is stored with the media as metadata of the media.
In some embodiments, the at least one artifact in the media is detected if or when an aesthetic score of the media is less than a predefined threshold. In some embodiments, the identifying of the at least one AI-based media enhancement model further comprises identifying a type of the at least one artifact included in the media based on the tag information and determining the at least one AI-based media enhancement model according to the identified type of the at least one artifact.
In some embodiments, the determining the at least one AI-based media enhancement model comprises determining a type of the at least one AI-based media enhancement model and an order of the at least one AI-based media enhancement model. If or when a plurality of AI-based media enhancement models are determined for enhancing the at least one artifact detected in the media, the plurality of AI-based media enhancement models are applied to the media in a predetermined order.
In some embodiments, the determining the at least one AI-based media enhancement model further comprises: determining a type of the at least one AI-based media enhancement model and an order of the at least one AI-based media enhancement model for enhancing a reference media, storing the determined type and the order of the at least one AI-based media enhancement model for enhancing the reference media in a database, obtaining feature vectors of the media, and determining the type and the order of the at least one AI-based media enhancement model for enhancing the media based on the determined type and the order of the at least one AI-based media enhancement model for enhancing the reference media, wherein the reference media has equal or similar feature vectors to the media. The feature vectors comprise at least one of metadata of the media, the tag information pertaining to the media, an aesthetic score of the media, the plurality of AI-based media processing models to be applied to the media, dependencies among the plurality of AI-based media processing models, and the media.
In some embodiments, detection of the at least one artifact in the media, identification of the at least one AI-based media enhancement model, and application of the at least one AI-based media enhancement model to the media are performed in an electronic device of a user. Alternatively, the detection of the at least one artifact in the media, the identification of the at least one AI-based media enhancement model, and the application of the at least one AI-based media enhancement model to the media are performed in a cloud, wherein the detection of the at least one artifact in the media is initiated after the media is uploaded to the cloud.
In some embodiments, an electronic device for enhancing media is provided. In such embodiments, the electronic device comprises a memory and one or more processors communicatively connected to the memory, wherein the one or more processors are configured to: detect at least one artifact included in the media based on tag information indicating the at least one artifact included in the media, identify at least one AI-based media enhancement model for enhancing the detected at least one artifact, and apply the at least one AI-based media enhancement model to the media for enhancing the media.
In some embodiments, the one or more processors are configured to encrypt the tag information regarding the media and store the tag information with the media as metadata of the media.
In some embodiments, the at least one artifact in the media is detected if or when an aesthetic score of the media is less than a predefined threshold.
In an embodiment, the one or more processors are further configured to: identify a type of the at least one artifact included in the media based on the tag information, and determine the at least one AI-based media enhancement model according to the identified type of the at least one artifact.
In some embodiments, the one or more processors are further configured to determine a type of the at least one AI-based media enhancement model and an order of the at least one AI-based media enhancement model. If or when a plurality of AI-based media enhancement models are determined for enhancing the at least one artifact detected in the media, the plurality of AI-based media enhancement models are applied to the media in a predetermined order.
In some embodiments, the one or more processors are further configured to: determine a type of the at least one AI-based media enhancement model and an order of the at least one AI-based media enhancement model for enhancing a reference media, store the determined type and the order of the at least one AI-based media enhancement model for enhancing the reference media in a database, obtain feature vectors of the media, and determine the type and the order of the at least one AI-based media enhancement model for enhancing the media based on the determined type and the order of the at least one AI-based media enhancement model for enhancing the reference media, wherein the reference media has equal or similar feature vectors to the media. The feature vectors comprise at least one of metadata of the media, the tag information pertaining to the media, an aesthetic score of the media, the plurality of AI-based media processing models to be applied to the media, dependencies among the plurality of AI-based media processing models, and the media.
In some embodiments, the electronic device is located on a cloud. The one or more processors are configured to initiate the detection of the at least one artifact in the media either automatically when the electronic device is in idle status or on receiving commands from a user.
Some embodiments may comprise analyzing the media to detect the artifacts and/or the degradations, wherein the analysis can be triggered automatically or manually. Alternative or additional embodiments may comprise determining aesthetic scores of the media and saliency of the media. Alternative or additional embodiments may comprise prioritizing the media for enhancement based on the aesthetic scores and the saliency of the media. Alternative or additional embodiments may comprise generating artifact/quality tag information, which indicates the artifacts and/or degradations that have been detected in the media. The artifact/quality tag information allows association between the media and the artifacts and/or degradations that have been detected in the media. The artifact/quality tag information may be stored along with the media as metadata. Alternatively or additionally, the artifact/quality tag information may be stored in a dedicated database. The database may indicate the media and the artifacts and/or degradations associated with the media. The artifact/quality tag information may allow users to classify media based on specific artifacts and/or degradations present in the media and to initiate enhancement of media having specific artifacts and/or degradations.
In some embodiments, notifications can be provided to the users for indicating the media that can be enhanced. Alternative or additional embodiments may comprise identifying one or more AI-based media processing models for enhancing the media. Alternative or additional embodiments may comprise enhancing the media (e.g., improving the quality of the media) by applying the one or more AI-based media processing models (AI-based enhancement and artifact removal models) to the media. The identification of the one or more AI-based media enhancement models can be initiated on receiving commands (e.g., from the users). Alternatively or additionally, the one or more AI-based media processing models can be automatically identified. Alternative or additional embodiments may comprise identifying the AI-based media processing models that need to be applied to the media to enhance the media based on the artifact/quality tag information associated with the media.
Some embodiments may comprise creating a pipeline of the AI-based media processing models, which may be applied to the media to enhance the media (e.g., if or when multiple AI-based media processing models need to be applied to the media to enhance the media). In alternative or additional embodiments, the AI-based media processing models may be applied to the media in a predetermined order as indicated in the pipeline. The pipeline can be created offline (e.g., in a training phase), wherein a correspondence is created between media and sequences of AI-based media processing models to be applied to the media (e.g., for enhancing the media). The sequences may be determined during the training phase and can be referred to as the predetermined order during the application phase. The pipeline can be created using an AI system, which is trained with different varieties of degraded media and enhancement of the media, wherein the enhancement involves creating multiple enhancement pipelines comprising AI-based media processing models arranged in different orders, and finding the optimal enhancement pipeline for the media. Alternative or additional embodiments may comprise creating the correspondence based on the artifact tag information associated with the media, the identified AI-based media processing models to be applied to the media, dependencies among the identified AI-based media processing models, an aesthetic score of the media, media content, and the like.
Some embodiments may comprise ensuring the optimality of the enhancement of the media by determining that an aesthetic score of the media has reached a maximum value after the enhancement. Alternative or additional embodiments may comprise applying the AI-based media processing models recursively to the media and determining the aesthetic score of the media, until the aesthetic score of the media has reached the maximum value. In some embodiments, the operations comprising detecting artifacts and/or degradations in the media, generating artifact/quality tag information associated with the media, identifying one or more AI-based media processing models for enhancing the media, and enhancing the media using the identified AI-based media processing models, can be performed in a device or a cloud.
An example embodiment includes a method for enhancing media, comprising detecting at least one artifact included in the media based on tag information indicating the at least one artifact included in the media. The method includes identifying at least one AI-based media enhancement model, the at least one AI-based media enhancement model being configured to enhance the at least one artifact detected in the media. The method further includes enhancing the media by applying the at least one AI-based media enhancement model to the media.
Another example embodiment includes an electronic device for enhancing media, comprising a memory and one or more processors communicatively connected to the memory. The one or more processors are configured to detect at least one artifact included in the media based on tag information indicating the at least one artifact included in the media. The one or more processors are configured to identify at least one AI-based media enhancement model, the at least one AI-based media enhancement model being configured to enhance the at least one artifact detected in the media. The one or more processors are further configured to enhance the media by applying the at least one AI-based media enhancement model to the media.
These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
Embodiments herein are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
The aspects described herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments may be practiced and to further enable those of skill in the art to practice the embodiments. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein. Further, expressions such as “at least one of a, b, and c” should be understood as including only a, only b, only c, both a and b, both a and c, both b and c, all of a, b, and c, or other variations thereof.
Embodiments herein disclose methods and systems for enhancing quality of media by detecting presence of artifacts and/or degradations in the media and nullifying the artifacts and/or the degradations using one or more Artificial Intelligence (AI)-based media enhancement models. The triggering of the detection of artifacts and/or degradations in the media can be automatic or manual. Some embodiments may comprise generating artifact/quality tag information associated with the media for indicating specific artifacts and/or degradations present in the media and/or storing the artifact/quality tag information along with the media as metadata, and/or in a dedicated database. Some embodiments may comprise triggering initiation of the media enhancement. The media enhancement may comprise identifying at least one AI-based media enhancement model that needs to be applied to the media to enhance the media. The at least one AI-based media enhancement model may be identified based on the artifact/quality tag information associated with the media.
Alternative or additional embodiments may comprise creating a pipeline, which may comprise AI-based media enhancement models. The AI-based media enhancement models can be applied to the media in a sequential order, as indicated in the pipeline, to enhance the media. In some embodiments, the creation of the pipeline may be based on the artifact/quality tag information associated with the media, the identified AI-based media enhancement models to be applied to the media, dependencies among the identified AI-based media enhancement models, an aesthetic score of the media, media content, and the like. Alternative or additional embodiments may comprise computing the aesthetic scores of the media prior to, and/or after, the identified AI-based media enhancement models are applied to the media. Alternative or additional embodiments may comprise determining whether the aesthetic scores have improved after media enhancement. If or when the aesthetic scores improve, some embodiments may comprise applying the identified AI-based media enhancement models to the media recursively, until the aesthetic scores stop improving. That is, the enhancing process using the identified AI-based media enhancement models may be applied recursively until the aesthetic scores of the media have reached a maximum value. Thus, the optimality of the media enhancement can be determined by determining that the aesthetic score of the media has reached a maximum value after the enhancement. The AI-based media enhancement models can be applied recursively to the media, to enhance the media, until the aesthetic score of the media reaches the maximum value. If or when no further enhancements are made, application of the AI-based media enhancement models may be stopped.
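The recursive application described above can be sketched as follows. This is a minimal illustration rather than the disclosed implementation; `apply_pipeline` and `aesthetic_score` are hypothetical helpers standing in for the AI-based media enhancement models and the aesthetic scoring described herein:

```python
def enhance_until_saturation(media, pipeline, apply_pipeline, aesthetic_score,
                             min_gain=1e-3, max_rounds=10):
    """Recursively apply the enhancement pipeline until the aesthetic
    score of the media stops improving (reaches its maximum value)."""
    best_media, best_score = media, aesthetic_score(media)
    for _ in range(max_rounds):
        candidate = apply_pipeline(best_media, pipeline)
        score = aesthetic_score(candidate)
        if score - best_score < min_gain:  # no further enhancement: stop
            break
        best_media, best_score = candidate, score
    return best_media, best_score
```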
In some embodiments, at least one operation comprising detecting artifacts in the media, generating artifact/quality tag information associated with the media, identifying at least one AI-based media enhancement model to enhance the media, and enhancing the media using the at least one identified AI-based media enhancement model, can be performed in at least one of a user device and a cloud. In alternative or additional embodiments, the at least one operation can be performed in the user device automatically in the background, or in the foreground on receiving commands from a user of the device to perform the at least one operation.
If or when the media enhancement is performed in the cloud, the user can retrieve or download the enhanced media from the cloud. In some embodiments, the at least one operation may be performed in the cloud automatically, if or when the media is stored in the cloud. In alternative or additional embodiments, the at least one operation may be performed in the cloud in response to receiving one or more commands from the user to perform the at least one operation, if or when the media is stored in the cloud. In alternative or additional embodiments, the at least one operation may be performed in the cloud after the media has been uploaded from the user device to the cloud. The media may not be required to be stored in the cloud, and after AI processing, the media can be stored in a separate database and/or retransmitted to the user device. The at least one operation may be performed in the cloud either automatically or in response to receiving the one or more user commands to perform the at least one operation.
Referring now to the drawings, and more particularly to
In some embodiments, the controller 101, the controller memory 102, the detector 103, the AI media enhancement unit 104, the memory 105, the display 106, the communication interface 107, and the AI enhancement mapper 108 can be implemented in the electronic device 100. Examples of the electronic device 100 can be, but are not limited to, a smart phone, a Personal Computer (PC), a laptop, a desktop, an Internet of Things (IoT) device, and the like.
In other embodiments, the controller 101, the controller memory 102, the detector 103, the AI media enhancement unit 104, and the AI enhancement mapper 108 can be implemented in an electronic device of a cloud (e.g., virtual device, not shown). The electronic device 100 may comprise the memory 105, the display 106, and the communication interface 107. The cloud device may comprise a memory. The electronic device 100 can store media (e.g., originally stored in the memory 105 of the device) in the cloud memory by sending the media to the cloud using the communication interface 107. The portion of the memory 105 storing the media can be synchronized with the cloud memory for enabling automatic transfer (upload) of media from the electronic device 100 to the cloud. Once the media has been enhanced (e.g., quality of the media has been improved), the enhanced media can be stored in the cloud memory. The electronic device 100 can receive (e.g., download) the enhanced media from the cloud using the communication interface 107 and store the enhanced media in the memory 105.
In other embodiments, the AI media enhancement unit 104 and the AI enhancement mapper 108 can be stored in the cloud. In such embodiments, the electronic device 100 may comprise the controller 101, the controller memory 102, the detector 103, the memory 105, the display 106, and the communication interface 107. The electronic device 100 can send selected media and the impairments detected in the selected media to the cloud, for enhancement of the media using particular AI-based media enhancement models. The AI media enhancement unit 104 stored in the cloud, which may comprise the AI-based media enhancement blocks 104A-104N, can apply the particular AI-based media enhancement models to the selected media. As such, the electronic device 100 may perform media enhancement using AI-based media enhancement models that can be considered as overly burdensome for the electronic device 100, particularly in terms of processing, computational, and storage requirements. The electronic device 100 can impose constraints on AI-based media enhancement blocks 104A-104N (for enhancing the media using the particular AI-based media enhancement models), if or when the AI-based media enhancement blocks 104A-104N and the AI enhancement mapper 108 are stored in the electronic device 100. The electronic device 100 can receive, using the communication interface 107, enhanced media from the cloud, and store the enhanced media in the memory 105.
The controller 101 can trigger detection of impairments included in the media. The impairments may comprise artifacts and/or degradations. The media can refer to images and videos stored in the memory 105 of the electronic device 100. The media stored in the memory 105 may comprise media captured using a camera (not shown) of the electronic device 100, media obtained from other devices, media obtained through social media applications/services, and the like. In some embodiments, the controller 101 can automatically trigger the detection of artifacts and/or degradations. For example, the detection can be triggered at a specific time of the day if or when the device is not likely to be in use. Alternatively or additionally, the detection can be triggered if or when the processing and/or computational load on the electronic device 100 is less than (e.g., does not exceed) a predefined threshold. Alternatively or additionally, the detection can be triggered if or when the electronic device 100 is in an idle state. In other embodiments, the controller 101 can trigger the detection of artifacts and/or degradations in the media in response to receiving a command (e.g., from an user) to trigger the detection.
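As an illustrative sketch only, the triggering policy above could be expressed as follows; the quiet hours, the load threshold, and the function name are assumptions of this example, not values given in the disclosure:

```python
import datetime

def should_trigger_detection(is_idle: bool, cpu_load: float,
                             load_threshold: float = 0.3,
                             quiet_hours: range = range(2, 5)) -> bool:
    """Decide whether to automatically trigger artifact/degradation detection."""
    in_quiet_hours = datetime.datetime.now().hour in quiet_hours  # device unlikely in use
    light_load = cpu_load < load_threshold  # load does not exceed the threshold
    return is_idle or light_load or in_quiet_hours
```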
In some embodiments, the controller 101 may be present in the cloud, and the electronic device 100 can send selected media to be enhanced to the cloud. For example, the user of the electronic device 100 can connect to the cloud and send at least one command to the cloud to trigger the detection of artifacts and/or degradations in the media sent to the cloud.
In other embodiments, the electronic device 100 can prioritize media stored in the memory 105 for media enhancement. For example, the electronic device 100 can determine aesthetic scores and saliency of the media stored in the memory 105, and media with low aesthetic scores and/or with moderate-to-high saliency can be prioritized for media enhancement.
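A possible prioritization is sketched below, under the assumption that `aesthetic_score_of` and `saliency_of` are available scoring callables (hypothetical names for the measurements described above):

```python
def prioritize_for_enhancement(media_items, aesthetic_score_of, saliency_of):
    """Order media so that items with low aesthetic scores and high
    saliency come first, matching the prioritization described above."""
    # A low aesthetic score raises priority; high saliency raises it further,
    # hence saliency is subtracted in the ascending sort key.
    return sorted(media_items,
                  key=lambda m: aesthetic_score_of(m) - saliency_of(m))
```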
If or when the controller 101 has triggered the detection of artifacts and/or degradations included in the media, the detector 103 can analyze the media. The analysis may comprise detecting artifacts and/or degradations included in the media stored in the memory 105. The detector 103 may comprise one or more AI modules to detect the artifacts and/or degradations in the media. The detector 103 may be configured to mark the media for enhancement if or when the detector 103 detects artifacts and/or degradations in the media. In some embodiments, the detector 103 can be a single monolithic deep neural network, which can detect and/or identify artifacts and/or degradations included in the media. Examples of artifacts included in the media may comprise shadows and reflections. Examples of degradations present in the media may comprise a presence of blur and/or noise in the media, under- or over-exposure, low resolution, low light (insufficient brightness), and the like.
The detector 103 can determine the resolutions of the media (e.g., images, videos) based on camera intrinsic parameters, which can be stored along with the media as metadata. The detector 103 can determine image type (e.g., color image, graphics image, grey scale image, and the like), and effects applied to the images (such as a “beauty” effect and/or a Bokeh effect). In some embodiments, the detector 103 can compute the aesthetic scores of the images. For example, the aesthetic scores may fall within a range, such as from 1 (worst) to 10 (best). In another example, the aesthetic scores may fall within a range of 1 (best) to 10 (worst). In yet another example, the aesthetic scores may fall within a range of 1 to 100. In some embodiments, the detector 103 can determine histograms pertaining to the images (e.g., media) for determining the pixel distributions in the images. The histograms of the images may be used by the detector 103 to determine corresponding exposure levels of the images. For example, the detector 103 may assign an exposure level to each image, such as a normal exposure level (uniform distribution), an over exposure level, an under exposure level, and/or both an under exposure level and an over exposure level.
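The histogram-based exposure analysis might look like the following sketch; the bin boundaries and the 25% tail-mass threshold are illustrative assumptions rather than parameters taken from this disclosure:

```python
import numpy as np

def classify_exposure(gray_image: np.ndarray, tail_mass: float = 0.25) -> str:
    """Assign an exposure level to an 8-bit grayscale image from its histogram."""
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    total = hist.sum()
    dark = hist[:64].sum() / total      # fraction of pixels in the darkest levels
    bright = hist[192:].sum() / total   # fraction of pixels in the brightest levels
    if dark > tail_mass and bright > tail_mass:
        return "under and over exposed"
    if dark > tail_mass:
        return "under exposed"
    if bright > tail_mass:
        return "over exposed"
    return "normal"                     # roughly uniform distribution
```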
The detector 103 can perform object detection on the media. That is, the detector 103 may identify objects in the images and/or classify the objects according to a type of the identified objects, such as presence of humans, animals, things, and the like. In some embodiments, the detector 103 can perform other operations on the media, such as face recognition to detect and/or identify humans in the images. Alternatively or additionally, the detector 103 can also perform image segmentation.
In some embodiments, the detector 103 may comprise a low-light classifier, a blur classifier, and/or a noise classifier. The low-light classifier of the detector 103 may determine whether the image has been captured in low-light and/or whether the brightness of the image is sufficient. For example, the detector 103 may indicate a result of the determination of whether the image has been captured in low-light as a Boolean flag (e.g., ‘true’ or ‘false’). That is, if or when an image has been captured under low-light conditions, a low-light tag indicating the low-light condition of the image may be set as ‘true’ or a binary value of one. In another example, if or when the image has been captured under normal lighting conditions, the low-light tag indicating the low-light condition of the image may be set as ‘false’ or a binary value of zero.
In some embodiments, the blur classifier of the detector 103 may utilize the results obtained from the object detection and/or the image segmentation to determine whether or not there is a presence of blur in the image and the type of blur (if or when blur is present) in the image. For example, the type of blur in an image may be indicated as at least one of ‘Defocus’, ‘Motion Blur’, ‘False’ (no Blur), ‘Studio Blur’, and ‘Bokeh blur’. That is, a blur tag of the image may be set according to the classification of the blur type.
In some embodiments, the noise classifier of the detector 103 may utilize the results obtained from the object detection and/or the image segmentation to determine whether or not there is a presence of noise in the image. For example, a determination of whether noise is present in the image may be indicated by a Boolean flag (e.g., ‘true’ or ‘false’). That is, if or when noise is present in the image, a noise tag indicating whether the noise is present in the image may be set as ‘true’ or a binary value of one. In another example, if or when the noise is not present in the image, the noise tag of the image may be set as ‘false’ or a binary value of zero.
In some embodiments, the detector 103 may provide the results produced by the face detection and instance segmentation process 210 to a noise classifier 212. That is, the detector 103 may, with the noise classifier 212, detect a presence of noise in the media, based on the results produced by the face detection and instance segmentation process 210. For example, the detector 103 may generate, using the noise classifier 212, an indication of a noise presence (e.g., ‘true’, ‘false’) and add the output of the noise classifier 212 to the artifact/quality tag information 250, as noise tag 255.
In other embodiments, the detector 103 may measure an aesthetic score 220 of the media and add the aesthetic score 220 to the artifact/quality tag information 250 as score tag 257. Alternatively or additionally, the detector 103 may be configured to perform a histogram analysis 230 to measure the quality of the media, such as an exposure level, for example. The detector 103 may be configured to add the exposure level to the artifact/quality tag information 250 as exposure tag 256.
In some embodiments, the detector 103 may determine, using a low-light classifier 240, whether the media has been captured under low-light conditions. Alternatively or additionally, the detector 103 may be configured to add the result of the low-light condition determination made by the low-light classifier 240 to the artifact/quality tag information 250 as low-light tag 253.
Consequently, the quality of the media may be determined based on the presence of artifacts such as reflection and shadow, the presence of blur and the type of blur, the presence of noise, whether the media was captured in low-light, a resolution (e.g., high, low) of the media, an exposure level of the media, and/or an aesthetic score of the media. For example, the quality of an image may be considered as low if or when the blur type is ‘defocus’ or ‘motion’, noise is present in the image, the image has been captured in low-light, the resolution of the image is low, the exposure level of the image is not normal (e.g., ‘under exposed’ or ‘over exposed’), and the aesthetic score is low. In some embodiments, the blur in the image may be a result of a lack of focus or motion of a camera while the image was captured. That is, the factors degrading the quality of the image can be considered as degradations.
In some embodiments, the detector 103 may generate artifact/quality tag information 250 indicating characteristics of the image and/or defects included in the image. For example, the artifact/quality tag information 250 may comprise an image type (e.g., image-type tag 251), low-resolution information on whether the resolution of the image is low (e.g., low-resolution tag 252), low-light information on whether the image has been captured in a low-light condition (e.g., low-light tag 253), a type of blur of the image (e.g., blur-type tag 254), noise information (e.g., noise tag 255), exposure information indicating an exposure level of the image (e.g., exposure tag 256), aesthetic score information (e.g., score tag 257), information indicating whether the image needs to be revitalized (e.g., revitalization tag 258), and a revitalized thumbnail image (e.g., revitalized-thumbnail tag 259).
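For illustration, the artifact/quality tag information 250 could be modeled as the record below; the field names and default values are assumptions chosen to mirror tags 251-259, not definitions from this disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArtifactQualityTags:
    """Sketch of artifact/quality tag information 250 (tags 251-259)."""
    image_type: str = "color"                # image-type tag 251
    low_resolution: bool = False             # low-resolution tag 252
    low_light: bool = False                  # low-light tag 253
    blur_type: str = "False"                 # blur-type tag 254 ('Defocus', 'Motion Blur', ...)
    noise: bool = False                      # noise tag 255
    exposure: str = "normal"                 # exposure tag 256
    aesthetic_score: float = 0.0             # score tag 257
    needs_revitalization: bool = False       # revitalization tag 258
    revitalized_thumbnail: Optional[bytes] = None  # revitalized-thumbnail tag 259
```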
In some embodiments, the detector 103 may output the artifact/quality tag information 250 to the controller 101. The controller 101 may store the artifact/quality tag information 250, obtained from the detector 103, in the controller memory 102 and/or the memory 105. Alternatively or additionally, the detector 103 may store the artifact/quality tag information 250 in a memory storage separate from the electronic device 100, such as a cloud database, for example. In other embodiments, the controller 101 may generate a database for storing the media and the related artifact/quality tag information. Alternatively or additionally, the media stored in the database may be linked with the associated artifact/quality tag information pertaining to the media. In some embodiments, the database may be stored in the controller memory 102. In other embodiments, the detector 103 may store the artifact/quality tag information 250 associated with the media along with the media in the memory 105. Alternatively or additionally, the artifact/quality tag information may be embedded with the media in an exchangeable media file format and/or in an extended media file format. That is, the artifact/quality tag information may be stored as metadata of the media file. In some embodiments, the media may be stored in a database outside of the electronic device 100 such as cloud storage. That is, the media and the related artifact/quality tag information may be stored in the cloud storage.
In some embodiments, the artifact/quality tag information may be encrypted.
In some embodiments, transmission of the media 262 from the electronic device 100 to the other devices (having the controller 101 and the detector 103) may cause a loss in the quality of the transferred media due to noise, compression, and other such factors. In such embodiments, the artifact/quality tag information may need to be regenerated regarding the transferred media. However, the regeneration latency at the other devices may be reduced, as the other devices may not need to detect the presence of artifacts, such as shadows and/or reflections, as well as degradations, such as low-light conditions. That is, the electronic device 100 may send the encrypted artifact/quality tag information 264 of the media 262 along with the media 262, and the other devices (if or when authorized by the electronic device 100) may decrypt the encrypted artifact/quality tag information 264 of the media 262. Thus, the other devices may regenerate and/or update the artifact/quality tag information 250 using the encrypted artifact/quality tag information 264 transferred along with the media 262.
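One way to encrypt tag information for transfer and decrypt it on an authorized device is sketched below with the third-party cryptography package; the JSON encoding and key handling are assumptions of this example, not details taken from the disclosure:

```python
import json
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def encrypt_tags(tags: dict, key: bytes) -> bytes:
    """Serialize and encrypt artifact/quality tag information for transfer."""
    return Fernet(key).encrypt(json.dumps(tags).encode("utf-8"))

def decrypt_tags(token: bytes, key: bytes) -> dict:
    """Decrypt tag information on a device authorized to hold the key."""
    return json.loads(Fernet(key).decrypt(token))

key = Fernet.generate_key()  # in practice, shared only with authorized devices
token = encrypt_tags({"low_light": True, "noise": False}, key)
assert decrypt_tags(token, key) == {"low_light": True, "noise": False}
```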
The electronic device 300 may classify the images stored in the electronic device 300 based on the type of artifacts and degradations. Then, the electronic device 300 may display grouped images based on the type of artifacts and degradations. For example, the electronic device 300 may display low resolution images 310 and blur/noisy images 320 in groups as illustrated in
Alternatively or additionally, the images associated with the artifact/quality tag information 250 indicating a presence of artifacts (e.g., reflections and/or shadows) and/or a presence of degradations (e.g., low-light conditions) may be displayed in groups classified according to the type of the artifact and/or degradation. In some embodiments, the electronic device 300 may be configured to display a User Interface (UI) (e.g., on the display 106) indicating clusters of images with similar artifacts and degradations (e.g., low resolution images 310, blur/noisy images 320). As such, selection of media (e.g., images or videos) that needs to be enhanced may be facilitated.
In some embodiments, the controller 101 may trigger the initiation of media enhancement. The initiation of media enhancement may be triggered manually or automatically. That is, the electronic device 300 may be ready to receive commands requesting to initiate enhancement of the images displayed in the clusters of images after the clusters of images with similar artifacts and/or degradations have been generated and displayed. For example, the user may select the images to be enhanced and input a request to enhance the selected image using the UI provided by the display 106. In response to receiving the request to enhance the images selected by the user, the electronic device 300 may initiate the media enhancement process.
In some embodiments, an automatic triggering of the detection, by the controller 101, of artifacts and/or degradations in media may also cause an automatic triggering of the initiation of the media enhancement process, by the controller 101. In other embodiments, the media enhancement process may be performed by the controller 101 in the cloud. In such embodiments, the electronic device 300 storing the media may send the selected media to be enhanced, along with the artifact/quality tag information 250 associated with the selected media, to the cloud. For example, the user of the electronic device 300 may connect to the cloud and send at least one command to the cloud to trigger the initiation of the media enhancement process on the selected media stored in the electronic device 300.
The media enhancement process may comprise identifying at least one AI-based media enhancement model that needs to be applied to the media 262 to enhance the media 262. In some embodiments, in response to the controller 101 being triggered to initiate the media enhancement process, the AI media enhancement unit 104 may start identifying one or more AI-based media enhancement models to be applied to the media 262 to enhance the media 262. For example, the AI media enhancement unit 104 may determine the type of the artifact or the degradation included in the image based on the artifact/quality tag information 250 associated with the media 262, and identify the AI-based media enhancement model based on the determined type of the artifact or the degradation associated with the media 262. Alternatively or additionally, one or more AI-based media enhancement blocks 104A-104N may be applied as an AI-based media enhancement model to the media 262.
In some embodiments, one or more AI-based media enhancement models (e.g., 421-426) may be required to be applied to the media 262 for enhancing the media 262. That is, the one or more AI-based media enhancement models 421-426 may be configured to remove and/or nullify the artifacts and/or the degradations present in the media 262. For example, an AI-based media enhancement model (e.g., 421-426) may need to be applied for enhancing the media 262, as determined based on the artifact/quality tag information 250 associated with the media 262. As such, the media 262 may be sent to a corresponding AI-based media enhancement block (e.g., 104A-104N) for applying the one or more AI-based media enhancement models (e.g., 421-426) assigned to the corresponding AI-based media enhancement block (e.g., 104A-104N). By applying the one or more AI-based media enhancement models 421-426 to the image, according to the type of the artifact or the degradation of the image, the quality of the image may be enhanced. In some embodiments, the AI media enhancement unit 104 and the corresponding AI-based media enhancement blocks (e.g., 104A-104N) may be implemented in the cloud. In such embodiments, the enhancement process may be performed on the cloud, and the enhanced media may be obtained from the cloud. That is, the electronic device 100 may send the media to be enhanced to the cloud and receive the enhanced media from the cloud.
In some embodiments, an image may comprise a plurality of artifact and/or degradation types. In such embodiments, a plurality of AI-based media enhancement models 421-426 may be required to be applied to the media 262 for enhancing the media 262. In other embodiments, the AI media enhancement unit 104 may select a pipeline including a plurality of AI-based media enhancement models 421-426. The AI media enhancement unit 104 may determine one or more AI-based media enhancement models 421-426 to be applied to the media 262 based on the artifact/quality tag information 250, and may determine an order for applying the one or more AI-based media enhancement models 421-426. The media 262 may be sent to one or more of the AI-based media enhancement blocks 104A-104N, and the AI-based media enhancement models 421-426 may be applied to the media 262 in a predetermined order, as indicated by the pipeline. For example, if or when the artifact/quality tag information 250 associated with an image indicates that the exposure of the image is ‘low’ and the resolution of the image is ‘low’, the image may be sent to the AI color correction in HDR model 423 followed by the AI super resolution model 425. The AI color correction in HDR model 423 may enhance the image by adjusting the exposure of the image, and the AI super resolution model 425 may enhance the image by upscaling the image. In such an example, the sequence of the AI-based media enhancement models to be applied in the pipeline may be the AI color correction in HDR model 423 followed by the AI super resolution model 425.
In another example, if or when the artifact/quality tag information 250 associated with an image indicates that the image has been captured in low light conditions (e.g., low-light tag 253 is set to ‘true’), the image is a blurred image (e.g., blur-type tag 254 is set to ‘defocus’), and there are noisy artifacts present in the image (e.g., noise tag 255 is set to ‘true’), the image may be sent to the AI denoising model 421, followed by the AI deblurring model 422, which in turn may be followed by the AI low-light enhancement model 424. In such an example, the sequence of the pipeline may be set from the AI denoising model 421 to the AI deblurring model 422 and then to the AI low-light enhancement model 424. The sequence of AI-based media enhancement models to be applied may change, and is not limited by the examples above.
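The two examples above can be generalized into a rule-of-thumb mapping from tags to an ordered pipeline. The rules below and their ordering are illustrative assumptions; in this disclosure the order is ultimately selected by the AI enhancement mapper 108 rather than by fixed rules:

```python
def select_pipeline(tags: dict) -> list:
    """Map artifact/quality tags to an ordered list of enhancement models."""
    pipeline = []
    if tags.get("noise"):                    # denoise first so later models do
        pipeline.append("AI denoising 421")  # not amplify the noise
    if tags.get("blur_type") in ("defocus", "motion"):
        pipeline.append("AI deblurring 422")
    if tags.get("exposure") != "normal":
        pipeline.append("AI color correction in HDR 423")
    if tags.get("low_light"):
        pipeline.append("AI low-light enhancement 424")
    if tags.get("low_resolution"):
        pipeline.append("AI super resolution 425")  # upscale last, after cleanup
    if tags.get("reflection"):
        pipeline.insert(0, "AI reflection removal 426")  # remove reflections first
    return pipeline
```

Under these illustrative rules, the low-light, defocus-blurred, noisy image of the preceding example yields [AI denoising 421, AI deblurring 422, AI low-light enhancement 424], matching the sequence described above.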
The pipeline, which may include one or more AI-based media enhancement models 421-426 for enhancing media 262, may be dynamically updated based on the artifacts and/or degradations present in the media 262. In some embodiments, the pipeline may be created by the AI enhancement mapper 108. For example, the AI enhancement mapper 108, in cooperation with the AI media enhancement unit 104, may be trained to find an optimal dynamic enhancement pipeline including a plurality of AI-based media enhancement models 421-426, to enhance the media 262. In some embodiments, the AI enhancement mapper 108 may be trained with a plurality of images and corresponding tag information provided to the AI enhancement mapper 108. In such embodiments, the AI enhancement mapper 108 may determine an optimal dynamic enhancement pipeline for the plurality of images and corresponding tag information that has been provided. After training of the AI enhancement mapper 108 has been completed, the AI enhancement mapper 108 may generate a similar optimal dynamic pipeline to be applied to images with similar characteristics and/or similar corresponding tag information to the images and corresponding tag information provided during the training period.
In some embodiments, the AI enhancement mapper 108 may generate pipelines based on, but not limited to, artifact/quality tag information 250 associated with the media 262, identified AI-based media enhancement models 421-426 to be applied to the media 262, dependency among the identified AI-based media enhancement models 421-426, aesthetic score of the media 262, content of the media 262, and the like.
The AI enhancement mapper 108 may be trained to generate sequences/orders (e.g., pipelines) of the AI-based media enhancement models applied to the media 262 for enhancing the media 262. The training of the AI enhancement mapper 108 may identify and/or create correlations between media having particular artifacts and/or degradations and particular sequences of the AI-based media enhancement models 421-426 to be added to the pipeline and applied to the media 262 in a particular order, for enhancing the media 262. For example, a media 262 having a reflection artifact and a low-resolution degradation may correlate with a pipeline sequence such as [AI reflection removal model 426—AI super resolution 425]. That is, after the AI enhancement mapper 108 has been trained and has been installed with the AI media enhancement unit 104, the AI enhancement mapper 108 may select pipelines to enhance the media 262, which may be stored in the memory 105, for example. In some embodiments, the AI enhancement mapper 108 may select pipelines comprising AI-based media enhancement models 421-426 that may correlate with particular artifacts and/or degradations present in the media 262. The AI-based media enhancement models 421-426 of the selected pipelines may be applied to the media 262 to enhance the media 262.
For example, the AI media enhancement unit 104 may determine, based on the artifact/quality tag information 250 associated with an image, that a reflection artifact exists in the image, that the exposure of the image is ‘low’, that blur and noise are present in the image, and that the resolution of the image is ‘low’. In such an example, as illustrated in
Alternatively or additionally, the AI media enhancement unit 104 may arrange the AI-based media enhancement blocks 104A-104N, that are applying the selected AI-based media enhancement models 421-426, in a pipeline in a predetermined order. As described above in reference to
Continuing to refer to
Referring to
Alternatively or additionally, the AI media enhancement unit 104 may arrange the AI-based media enhancement blocks 104A-104N, that are applying the selected AI-based media enhancement models 421-426, in a pipeline in a predetermined order. For example, the pipeline sequence selected by the AI media enhancement unit 104 may be [AI denoising model 421—AI low-light enhancement model 424 (e.g., AI night shot)—AI deblurring model 422—AI super resolution model 425 (e.g., AI upscaler)]. That is, the pipeline may indicate that the image may be sent to a first AI-based media enhancement block (e.g., 104A) that has been assigned the AI denoising model 421, followed by a second AI-based media enhancement block (e.g., 104B) that has been assigned the AI low-light enhancement model 424, and followed by a third AI-based media enhancement block (e.g., 104N) that has been assigned the AI deblurring model 422 and the AI super resolution model 425. As such, an enhanced version of the image may be obtained by applying the AI-based media enhancement models 421-426 to the image in the selected order according to the pipeline.
In other embodiments, the AI enhancement mapper 108 may extract a saliency map of the reference image. Alternatively or additionally, the AI enhancement mapper 108 may identify the AI-based media enhancement models 421-426 that need to be applied to the reference image, for enhancement of the reference image. That is, the AI enhancement mapper 108 may identify one or more AI-based media enhancement models 421-426 for nullifying the effects of artifacts and/or degradations that may be included in the reference image. The AI enhancement mapper 108 may utilize the artifact/quality tag information 250 associated with the reference image for determining the artifacts and/or the degradations included in the reference image. The AI enhancement mapper 108 may determine dependencies among the AI-based media enhancement models 421-426 to be applied to the image for enhancement of the reference image. The generic features, deep features, saliency map, AI-based media enhancement models to be applied for enhancement of the reference image, and the dependencies among the AI-based media enhancement models, may be considered as feature vectors.
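Gathering the quantities named above into a single feature-vector record might look like the sketch below; the dictionary layout and parameter names are assumptions of this illustration:

```python
def build_feature_vector(generic_features, deep_features, saliency_map,
                         tags, models, dependencies):
    """Bundle the quantities treated as feature vectors of a reference image."""
    return {
        "generic_features": generic_features,  # e.g., histograms, resolution
        "deep_features": deep_features,        # embeddings from a backbone network
        "saliency_map": saliency_map,
        "tags": tags,                          # artifact/quality tag information 250
        "models": models,                      # identified enhancement models 421-426
        "dependencies": dependencies,          # ordering constraints among the models
    }
```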
As depicted in
Alternatively or additionally, if or when the AI enhancement mapper 108 determines that the aesthetic score of the reference image has not increased (and/or decreased) compared to the aesthetic score of the reference image prior to the application of the identified AI-based media enhancement models 421-426, the pipeline may be updated by changing the order of placement of the identified AI-based media enhancement models 421-426. Thereafter, the identified AI-based media enhancement models 421-426 may be reapplied to the reference image in the updated order, and the aesthetic score may be re-evaluated. If or when the aesthetic score improves, application of the identified AI-based media enhancement models in the updated order, to the reference image, may be continued until the aesthetic score reaches the saturation value.
In some embodiments, the AI enhancement mapper 108 may generate multiple pipelines by varying the placement of the identified AI-based media enhancement models 421-426 in the pipelines. Alternatively or additionally, the AI enhancement mapper 108 may obtain aesthetic scores after applying the identified AI-based media enhancement models 421-426 to the reference image in the orders indicated in each of the pipelines. The AI enhancement mapper 108 may select at least one pipeline based on an improvement to the aesthetic score of the reference image, wherein the improvement in the aesthetic score is obtained by applying the identified AI-based media enhancement models 421-426 to the reference image in the order corresponding to the selected pipeline. The AI enhancement mapper 108 may select an order of AI-based media enhancement models 421-426 of the pipeline, among the at least one selected orders of the pipeline, which maximizes the aesthetic score of the reference image if or when the AI-based media enhancement models 421-426 are applied to the reference image in that corresponding order.
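During training, the order search can be sketched as an exhaustive permutation over the identified models, keeping the ordering that maximizes the aesthetic score; `apply_model` and `aesthetic_score` are hypothetical helpers assumed for this illustration:

```python
from itertools import permutations

def find_optimal_order(reference, models, apply_model, aesthetic_score):
    """Return the ordering of models that maximizes the aesthetic score
    of the reference media, together with that score (training phase)."""
    best_order, best_score = None, aesthetic_score(reference)
    for order in permutations(models):
        enhanced = reference
        for model in order:                # apply models in this candidate order
            enhanced = apply_model(enhanced, model)
        score = aesthetic_score(enhanced)
        if score > best_score:             # keep the ordering that improves most
            best_order, best_score = list(order), score
    return best_order, best_score          # best_order is None if nothing improves
```

Exhaustive permutation grows factorially with the number of models, so in practice the mapper would presumably prune candidate orders using the dependencies among the models.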
In some embodiments, the AI enhancement mapper 108 may determine that an optimal pipeline comprising an optimal set of the AI-based media enhancement models 421-426, which may have been identified based on the feature vectors of the reference image and may be arranged in an optimal order, cause the aesthetic score of the reference image to reach the saturation value (e.g., a maximum value). The AI enhancement mapper 108 may be configured, in a synthesis phase, to use the optimal pipeline to enhance media 262 having similar feature vectors as the reference image. That is, after the AI enhancement mapper 108 has been trained, the AI enhancement mapper 108 may utilize the optimal pipeline for enhancement of media 262, if or when feature vectors of the media 262 match or relate to the feature vectors of the reference image. For example, the AI enhancement mapper 108 may apply the optimal set of the AI-based media enhancement models 421-426 in the optimal order indicated in the optimal pipeline to the media 262 for media enhancement. Consequently, the AI enhancement mapper 108 may apply the same optimal pipeline to images with similar characteristics (e.g., feature vectors).
Continuing to refer to
Referring to
The AI enhancement mapper 108 may be further configured to create a pipeline 740 of the AI-based media enhancement blocks 104A-104N implementing the AI-based media enhancement models. For example, as shown in
Referring to
Referring to
Referring to
In some embodiments, the AI enhancement mapper 108 may stop changing the pipeline if or when applying the AI-based media processing blocks in the order indicated in the pipeline allows for maximizing the aesthetic score of the input image. In other embodiments, an operator and/or trainer may select the pipeline for enhancing the input image based on the feature vectors associated with the input image. In the synthesis phase, the AI enhancement mapper 108 may select the same pipeline for enhancing an image, if or when the feature vectors of the image are identical with, or similar to, the feature vectors of the input image used for training.
In some embodiments, the AI enhancement mapper 108 may evaluate the aesthetic score V1 of the regression 1 enhanced image after the low quality input image has been enhanced according to the order of AI-based media enhancement models indicated by the pipeline. For example, the AI enhancement mapper 108 may compare the aesthetic score V1 of the regression 1 enhanced image with the aesthetic score V0 of the low quality input image, after applying the Enhancement A, the Enhancement B, and the Enhancement C to the low quality input image in the order indicated in the pipeline. If or when the AI enhancement mapper 108 determines that there is no significant improvement in the aesthetic score (e.g., a difference between V1 and V0 is less than a threshold), the AI enhancement mapper 108 may change the sequence (e.g., order) for applying the AI-based media enhancement models to the image. For example, the AI enhancement mapper 108 may change, in an Nth recursion, the pipeline to have an order for applying the AI-based media enhancement models of [Enhancement B—Enhancement C—Enhancement A]. As such, the AI enhancement mapper 108 may apply the AI-based media enhancement models to the low quality input image, according to the order indicated by the changed pipeline, resulting in a regression N enhanced image. Alternatively or additionally, the AI enhancement mapper 108 may calculate an aesthetic score VN for the regression N enhanced image. In some embodiments, the aesthetic score VN may correspond to a highest or maximum value that the aesthetic score of the low quality input image may attain.
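The regression loop above may be sketched as follows, assuming the enhancement models are callables; the left-rotation of the order (which turns [A, B, C] into [B, C, A], as in the example above), the threshold value, and the stopping rule are illustrative assumptions.

```python
def enhance_recursively(image, pipeline, aesthetic_score, threshold=0.01):
    """Illustrative sketch of the regression loop: apply the pipeline,
    and if the aesthetic score shows no significant improvement, change
    the order and retry until the score saturates."""
    pipeline = list(pipeline)
    v_prev = aesthetic_score(image)              # V0 of the low quality input
    stale = 0                                    # orders tried without improvement
    while stale < len(pipeline):
        enhanced = image
        for model in pipeline:                   # e.g., [B, C, A] in the Nth recursion
            enhanced = model(enhanced)
        v_n = aesthetic_score(enhanced)          # VN of the regression N image
        if v_n - v_prev < threshold:             # no significant improvement
            pipeline = pipeline[1:] + pipeline[:1]   # try a different order
            stale += 1
        else:                                    # keep the improvement and continue
            image, v_prev, stale = enhanced, v_n, 0
    return image, v_prev, pipeline               # v_prev approximates the saturation value
```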
In some embodiments, the AI enhancement mapper 108 may create a correspondence between the low quality input image and the selected pipeline [Enhancement B—Enhancement C—Enhancement A]. During the synthesis phase, if or when an input image having similar artifacts and/or degradations needs to be enhanced and the feature vectors of the input image and the feature vectors of the image used for training are similar (or the same), the AI enhancement mapper 108 may select the pipeline [Enhancement B—Enhancement C—Enhancement A] for enhancing the image.
If or when the pipeline is created, the AI enhancement mapper 108 may create a correspondence between the image and the pipeline [Enhancement A—Enhancement B—Enhancement C]. During the synthesis phase, if or when an input image having similar artifacts and/or degradations needs to be enhanced and the feature vectors of the input image and the feature vectors of the image used for training are detected to be similar (or the same), the AI enhancement mapper 108 may select the pipeline [Enhancement A—Enhancement B—Enhancement C] for enhancing the image.
In other embodiments, the images that have been enhanced may be marked and indicated to the user, for example, through a UI of the device.
At operation 1101, the method 1100 comprises detecting a presence of artifacts and/or degradations in the media 262. The triggering of the detection of the artifacts and/or the degradations may be automatic or manual. In some embodiments, the detecting at operation 1101 may comprise determining aesthetic scores of the media 262 and saliency of the media 262. In optional or additional embodiments, the detecting at operation 1101 may comprise prioritizing the media 262 for enhancement based on the aesthetic scores and the saliency of the media 262. For example, media 262 having a low aesthetic score and a high degree of saliency may be prioritized. The prioritization may allow for indicating the media 262 that is available for enhancement, which may further allow for manual and/or automatic triggering of the detection of the artifacts and/or the degradations in the media 262.
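As an illustrative sketch of the prioritization at operation 1101, assuming hypothetical `aesthetic_score` and `saliency` functions that each return a value in [0, 1]:

```python
def prioritize_media(media_items, aesthetic_score, saliency):
    """Illustrative sketch: rank media for enhancement. A low aesthetic
    score combined with high saliency yields a high priority, matching
    the example above."""
    def priority(item):
        return saliency(item) * (1.0 - aesthetic_score(item))
    # Highest-priority media first (most salient, lowest quality).
    return sorted(media_items, key=priority, reverse=True)
```

The product of saliency and the aesthetic-score deficit is only one possible weighting; any monotone combination of the two signals would serve the same purpose.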
At operation 1102, the method comprises generating artifact/quality tag information 250, which may indicate the artifacts and/or degradations detected in the media 262. In some embodiments, the generating at operation 1102 may comprise creating a mapping between media 262 and artifact/quality tag information 250 associated with the media 262 (e.g., artifacts and/or degradations that have been detected in the media 262). In optional or additional embodiments, the generating at operation 1102 may comprise storing the artifact/quality tag information 250 along with the media 262 as metadata, and/or in a dedicated database. The database may indicate the media 262 and the artifacts and/or degradations associated with the media 262. The artifact/quality tag information 250 may allow classification of media based on specific artifacts and/or degradations present in the media 262.
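A minimal sketch of the dedicated-database option at operation 1102 is shown below; the SQLite schema, column names, and tag strings are assumptions for illustration only.

```python
import json
import sqlite3

def store_tag_info(db_path, media_id, artifacts):
    """Illustrative sketch: persist artifact/quality tag information 250
    keyed by media item, enabling classification of media by the
    specific artifacts and/or degradations detected."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS tag_info "
                "(media_id TEXT PRIMARY KEY, artifacts TEXT)")
    con.execute("INSERT OR REPLACE INTO tag_info VALUES (?, ?)",
                (media_id, json.dumps(artifacts)))
    con.commit()
    con.close()

# Hypothetical usage: record that noise and blur were detected in an image.
store_tag_info("tags.db", "IMG_0001", ["noise", "blur"])
```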
At operation 1103, the method comprises identifying one or more AI-based media enhancement models 421-426 for enhancing the media 262 (e.g., improving the quality of the media 262) based on the artifact/quality tag information 250. In some embodiments, the identifying at operation 1103 may comprise identifying the one or more AI-based media enhancement models 421-426, which may need to be applied to the media 262 to enhance the media 262, based on the artifact/quality tag information 250 associated with the media 262. In optional or additional embodiments, the identifying at operation 1103 may comprise applying the one or more identified AI-based media enhancement models 421-426 for removing or nullifying the artifacts and/or degradations that have been detected in the media 262. In other optional or additional embodiments, the identifying at operation 1103 may be triggered manually or automatically. In other optional or additional embodiments, the identifying at operation 1103 may be triggered automatically if or when the detection of the artifacts and/or degradations in the media 262 is configured to be triggered automatically. In other optional or additional embodiments, the identifying at operation 1103 may be triggered manually in response to receiving one or more commands from a user.
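The identification at operation 1103 may be sketched as a lookup from artifact tags to enhancement models; the mapping, tag strings, and model names below are hypothetical placeholders for the AI-based media enhancement models 421-426.

```python
# Hypothetical mapping from detected artifact tags to enhancement models;
# the names are illustrative only.
TAG_TO_MODEL = {
    "noise": "denoise_model",
    "blur": "deblur_model",
    "shadow": "shadow_removal_model",
    "reflection": "reflection_removal_model",
    "low_resolution": "super_resolution_model",
}

def identify_models(tag_info):
    """Return the models needed to nullify the detected artifacts,
    based on the artifact/quality tag information."""
    return [TAG_TO_MODEL[tag] for tag in tag_info if tag in TAG_TO_MODEL]
```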
At operation 1104, the method comprises applying the identified one or more AI-based media enhancement models 421-426 to the media 262 in a predetermined order. In some embodiments, a single AI-based media enhancement model 421-426 may be identified for nullifying the artifacts and/or degradations that have been detected in the media 262, such that the identified single AI-based media enhancement model 421-426 may be applied directly to the media 262. In other embodiments, multiple AI-based media enhancement models 421-426 may have been identified for application to the media 262, such that the AI-based media enhancement models 421-426 may need to be applied to the media 262 in the predetermined order/sequence. In optional or additional embodiments, the applying at operation 1104 may comprise selecting a pipeline of the AI-based media enhancement models 421-426, wherein the identified AI-based media enhancement models 421-426 may be arranged in a predetermined order. In optional or additional embodiments, the applying at operation 1104 may comprise updating the pipelines based on the identified AI-based media enhancement models 421-426 required to be applied to the media 262 to enhance the media 262.
In optional or additional embodiments, the applying at operation 1104 may comprise creating pipelines of the AI-based media enhancement models 421-426, to be applied to the media 262 to enhance the media 262. The pipelines may be created offline (e.g., during a training phase), wherein correspondences may be created between media with specific artifacts and/or degradations (which have been detected in the media) and specific sequences of AI-based media enhancement models 421-426. In such embodiments, the AI-based media enhancement models 421-426 may be applied to the media 262 (for enhancing the media 262) in the specific sequences. The sequences may be determined during the training phase and may be referred to as the predetermined order during the synthesis phase.
In optional or additional embodiments, the applying at operation 1104 may comprise creating the correspondences based on the feature vectors of the media 262, such as the artifact/quality tag information 250 associated with the media 262, the identified AI-based media enhancement models 421-426 to be applied to the media 262, dependencies amongst the identified AI-based media enhancement models 421-426, the aesthetic score of the media 262, the content of the media 262, and the like.
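By way of illustration, the correspondence signals listed above may be concatenated into a single feature vector; all encodings below (one-hot flags, a content class index) are assumptions, as the disclosure does not fix a particular representation.

```python
import numpy as np

def build_feature_vector(tag_flags, model_flags, aesthetic_score, content_class):
    """Illustrative sketch: concatenate the correspondence signals into
    one fixed-length vector suitable for similarity matching."""
    return np.concatenate([
        np.asarray(tag_flags, dtype=float),    # artifact/quality tags (one-hot)
        np.asarray(model_flags, dtype=float),  # identified models (one-hot)
        [float(aesthetic_score)],              # current aesthetic score
        [float(content_class)],                # content category index
    ])
```

A vector of this form could serve as the key registered and looked up in the pipeline registry sketched earlier.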
In optional or additional embodiments, the applying at operation 1104 may comprise ensuring the optimality of the enhancement of the media 262 by determining that the aesthetic score of the media 262 has reached a maximum value after the enhancement. In optional or additional embodiments, the applying at operation 1104 may comprise applying the AI-based media enhancement models 421-426 recursively to the media 262 and determining the aesthetic score of the media 262, until the aesthetic score of the media 262 has reached a maximum value.
The various actions in the flowchart 1100 may be performed in the order presented, in a different order, or simultaneously. Further, in some embodiments, some actions listed in the flowchart 1100 may be omitted.
The processor 1210 may be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. The processor 1210 may implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the present disclosure. The processor 1210 may also be a combination of processors implementing computing functions, such as a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and the like.
The bus 1240 may comprise a path for communicating information between the above components. The bus 1240 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus. The bus 1240 may be divided into an address bus, a data bus, a control bus, and the like. For the sake of simplicity, the bus 1240 may be depicted as a single bus; however, this does not imply that there is only one bus or only one type of bus.
The memory 1220 may be a read only memory (ROM) and/or other type of static storage device that may store static information and instructions, or a random access memory (RAM) and/or other type of dynamic storage device that may store information and instructions. Alternatively or additionally, the memory 1220 may comprise an electrically erasable programmable read only memory (EEPROM), a compact disc read only memory (CD-ROM) and/or other optical disc storage (such as compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), magnetic disk storage media and/or other magnetic storage devices, and/or any other non-transitory computer-readable storage medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer; however, the memory 1220 is not limited thereto. Non-transitory computer-readable storage media may exclude transitory signals.
The memory 1220 may be used to store application program code that, when executed by the processor 1210, may implement one or more embodiments of the present disclosure. The processor 1210 may be configured to execute application program code stored in the memory 1220 to implement the features described in any of the foregoing embodiments.
In some embodiments, the electronic device 1200 may comprise, but is not limited to, a mobile terminal, such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a portable android device (PAD), a portable multimedia player (PMP), an in-vehicle terminal (for example, a car navigation terminal) and the like, as well as a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device 1200 described above is merely an example and is not intended to limit the functionality and scope of use of the embodiments of the present disclosure.
The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the network elements. The computing elements described herein include blocks that may be at least one of a hardware device, or a combination of a hardware device and a software module.
The embodiments disclosed herein describe methods and systems for enhancing quality of media stored in a device or cloud by detecting artifacts and/or degradations in the media, identifying at least one AI-based media enhancement model for nullifying the detected artifacts and/or degradations, and enhancing the media by applying the at least one AI-based media enhancement model in a predetermined order. Therefore, it is understood that the scope of the protection extends to such a program, and to a computer-readable storage means containing program code means for implementing one or more steps of the method when the program runs on a server, a mobile device, or any suitable programmable device. In a preferred embodiment, the method is implemented through, or together with, a software program written in, for example, Very high speed integrated circuit Hardware Description Language (VHDL) or any other programming language, or implemented by one or more VHDL modules or several software modules executed on at least one hardware device. The hardware device can be any kind of portable device that can be programmed. The device may further comprise means, which could be, for example, a hardware means such as an Application-specific Integrated Circuit (ASIC), a combination of hardware and software means such as an ASIC and a Field Programmable Gate Array (FPGA), or at least one microprocessor and at least one memory with software modules located therein. The method embodiments described herein could be implemented partly in hardware and partly in software. Alternatively, the invention may be implemented on different hardware devices, e.g., using a plurality of Central Processing Units (CPUs).
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the scope of the embodiments as described herein.
Number | Date | Country | Kind
---|---|---|---
202041039989 | Sep 2020 | IN | national
202041039989 | Jul 2021 | IN | national
This application is a Continuation Application of International Application PCT/KR2021/012602, filed on Sep. 15, 2021, which claims the benefit of priority from Indian Patent Application No. 202041039989, filed on Jul. 19, 2021, and Indian Provisional Patent Application No. 202041039989, filed on Sep. 15, 2020, the disclosures of which are incorporated herein in their entireties by reference.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/KR2021/012602 | Sep 2021 | US
Child | 17550751 | | US