The present application relates generally to systems and methods of objective video quality measurement, and more specifically to systems and methods of objective video quality measurement that employ a reduced-reference approach.
Systems and methods are known that employ a full-reference approach, a no-reference approach, and a reduced-reference approach to video quality measurement. For example, systems that employ a full-reference approach to video quality measurement typically receive target video content (also referred to herein as a/the “target video”) whose perceptual quality is to be measured, and compare information from the target video to corresponding information from a reference version (also referred to herein as a/the “reference video”) of the target video to provide a measurement of the perceptual quality of the target video. In such systems that employ a full-reference approach to video quality measurement, it is generally assumed that the systems have full access to all of the information from the reference video for comparison to the target video information. However, transmitting all of the information from the reference video over a network for comparison to the target video information at an endpoint device, such as a mobile phone, can consume an undesirably excessive amount of network bandwidth. Such a full-reference approach to video quality measurement is therefore generally considered to be impractical for use in measuring the perceptual quality of a target video at such an endpoint device.
In systems that employ a no-reference approach to video quality measurement, it is generally assumed that no information from any reference video is available to the systems for comparison to the target video information. Such systems that employ a no-reference approach to video quality measurement therefore typically provide measurements of the perceptual quality of the target video using only information from the target video. However, such systems that employ a no-reference approach to video quality measurement may be inaccurate, since the assumptions that such systems must make about the target video, in the absence of any reference information, may not hold in practice.
Systems that employ a reduced-reference approach to video quality measurement typically have access to a reduced amount of information from the reference video for comparison to the target video information. For example, such information from the reference video can include a limited number of characteristics of the reference video, such as its spectral components, its variation of energy level, and/or its energy distribution in the frequency domain, each of which may be sensitive to degradation during processing and/or transmission of the target video. However, such known systems that employ a reduced-reference approach to video quality measurement can also be impractical for use in measuring the perceptual quality of a target video following its transmission over a network to an endpoint device, such as a mobile phone, due at least in part to constraints in the network bandwidth, and/or because of the limited processing power that is typically available in the endpoint device to perform the video quality measurement.
It would therefore be desirable to have improved systems and methods of objective video quality measurement that avoid at least some of the drawbacks of the various known video quality measurement systems and methods described above.
In accordance with the present application, systems and methods of objective video quality measurement are disclosed that employ a reduced-reference approach. Such systems and methods of objective video quality measurement can extract information pertaining to one or more features (also referred to herein as “target features”) of a target video whose perceptual quality is to be measured, extract corresponding information pertaining to one or more features (also referred to herein as “reference features”) of a reference video, and employ one or more prediction functions involving the target features and the reference features to provide a measurement of the perceptual quality of the target video.
In accordance with a first aspect, a system for measuring the perceptual quality of a target video that employs a reduced-reference approach to video quality measurement comprises a plurality of functional components, including a target feature extractor, a reference feature extractor, and a quality assessor. The target feature extractor is operative to extract one or more target features from the target video by performing one or more objective measurements with regard to the target video. Similarly, the reference feature extractor is operative to extract one or more reference features from the reference video by performing one or more objective measurements with regard to the reference video. Such objective measurements performed on the target video and the reference video can include objective measurements of blocking artifacts in the respective target and reference videos (also referred to herein as “blockiness measurements”), objective measurements of blur in the respective target and reference videos (also referred to herein as “blurriness measurements”), objective measurements of an average quantization index for the respective target and reference videos, as examples, and/or any other suitable types of objective measurements. Such objective measurements can result in target features and reference features that can be represented by compact data sets, which may be transmitted over a network without consuming an undesirably excessive amount of network bandwidth. As employed herein, the term “quantization index” (also referred to herein as a/the “QI”) corresponds to any suitable parameter that can be adjusted to control the quantization step-size used by a video encoder. For example, such a QI can correspond to a quantization parameter (also referred to herein as a/the “QP”) for video bitstreams compressed according to the H.264 coding format, a quantization scale for video bitstreams compressed according to the MPEG-2 coding format, or any other suitable parameter for video bitstreams compressed according to any other suitable coding format. The quality assessor is operative to provide an assessment of the perceptual quality of the target video following its transmission over the network to an endpoint device, using one or more prediction functions involving the target features and the reference features. In accordance with an exemplary aspect, one or more of the prediction functions can be linear prediction functions or non-linear prediction functions. For example, such an endpoint device can be a mobile phone, a mobile or non-mobile computer, a tablet computer, or any other suitable type of mobile or non-mobile endpoint device capable of displaying video.
In accordance with another exemplary aspect, the perceptual quality of each of the target video and the reference video can be represented by a quality assessment score, such as a predicted mean opinion score (MOS). In accordance with such an exemplary aspect, the quality assessor is operative to estimate the perceptual quality of the target video by obtaining a difference between an estimate of the perceptual quality of the reference video, and an estimate of the predicted differential MOS (also referred to herein as a/the “DMOS”) between at least a portion of the reference video and at least a portion of the target video. For example, the estimate of the perceptual quality of the target video (also referred to herein as a/the “$\hat{Q}_{tar}$”) can be expressed as
$$\hat{Q}_{tar} = \hat{Q}_{ref} - \widehat{\Delta Q},$$

in which “$\hat{Q}_{ref}$” corresponds to the estimate of the perceptual quality of the reference video, and “$\widehat{\Delta Q}$” corresponds to the estimate of the predicted DMOS between the reference video and the target video, which, in turn, can be expressed as

$$\Delta Q = Q_{ref} - Q_{tar}.$$
The quality assessor is further operative to calculate or otherwise determine the $\hat{Q}_{ref}$ using a first prediction function for a predetermined segment from a corresponding time frame within the reference video and the target video. For example, the $\hat{Q}_{ref}$ can be expressed as

$$\hat{Q}_{ref} = f_1(QI_{ref}),$$

in which “$f_1(QI_{ref})$” corresponds to the first prediction function, and “$QI_{ref}$” corresponds to the QI for the reference video. For example, the first prediction function, $f_1(QI_{ref})$, can be a linear function of the QI for the reference video, and/or any other suitable reference feature(s). The quality assessor is also operative to calculate or otherwise determine the $\widehat{\Delta Q}$ using a second prediction function for the predetermined segment. For example, the $\widehat{\Delta Q}$ can be expressed as

$$\widehat{\Delta Q} = f_2(\Delta blr, \Delta blk),$$
in which “$f_2(\Delta blr, \Delta blk)$” corresponds to the second prediction function, “$\Delta blr$” corresponds to the average change in frame-wise blurriness measurements between the reference video and the target video for the predetermined segment, and “$\Delta blk$” corresponds to the average change in frame-wise blockiness measurements between the reference video and the target video for the predetermined segment. For example, the second prediction function, $f_2(\Delta blr, \Delta blk)$, can be a linear function of the average change in the frame-wise blurriness measurements for the respective target and reference videos, the average change in the frame-wise blockiness measurements for the respective target and reference videos, and/or any other suitable reference feature(s) and target feature(s).
In accordance with a further exemplary aspect, the quality assessor is operative to estimate the perceptual quality of the target video, $\hat{Q}_{tar}$, using a third prediction function based on the first prediction function, $f_1(QI_{ref})$, and the second prediction function, $f_2(\Delta blr, \Delta blk)$. For example, the $\hat{Q}_{tar}$ can be expressed as

$$\hat{Q}_{tar} = f_3(QI_{ref}, \Delta blr, \Delta blk) = a_3 \cdot QI_{ref} + b_3 \cdot \Delta blr + c_3 \cdot \Delta blk + d_3,$$

in which “$f_3(QI_{ref}, \Delta blr, \Delta blk)$” corresponds to the third prediction function, and “$a_3$,” “$b_3$,” “$c_3$,” and “$d_3$” each correspond to a parameter coefficient of the third prediction function. For example, the value of each of the parameter coefficients $a_3$, $b_3$, $c_3$, and $d_3$ can be determined using a multi-variate linear regression approach, based on a plurality of predetermined target video bitstreams and their corresponding reference video bitstreams, and ground truth quality values, or any other suitable technique.
In accordance with another aspect of the disclosed systems and methods, the target feature extractor, the reference feature extractor, and the quality assessor can be implemented in a distributed fashion within a video communications environment. In accordance with an exemplary aspect, the target feature extractor can be located proximate to or co-located with the quality assessor, such as within the endpoint device, and the reference feature extractor can be disposed at a distal or geographically remote location from the target feature extractor and the quality assessor. In accordance with such an exemplary aspect, the disclosed system can transmit the reference features from the reference feature extractor at the distal or geographically remote location to the quality assessor, which, in turn, can access the target features from the target feature extractor proximate thereto or co-located therewith for estimating the perceptual quality of the target video. In accordance with another exemplary aspect, the reference feature extractor can be located proximate to or co-located with the quality assessor, and the target feature extractor can be disposed at a distal or geographically remote location from the reference feature extractor and the quality assessor. In accordance with such an exemplary aspect, the disclosed system can transmit the target features from the target feature extractor at the distal or geographically remote location to the quality assessor, which, in turn, can access the reference features from the reference feature extractor proximate thereto or co-located therewith for estimating the perceptual quality of the target video. In accordance with a further exemplary aspect, the quality assessor may be disposed at a centralized location that is geographically remote from the target feature extractor and the reference feature extractor. In accordance with such an exemplary aspect, the disclosed system can transmit the target features from the target feature extractor to the quality assessor, and transmit the reference features from the reference feature extractor to the quality assessor, for estimating the perceptual quality of the target video within the quality assessor at the geographically remote, centralized location.
By extracting reference features and target features from a reference video and a target video, respectively, and representing the respective reference and target features as compact data sets, the disclosed systems and methods can operate to transmit the reference features and/or the target features over a network to a quality assessor for assessing the perceptual quality of the target video, without consuming an undesirably excessive amount of network bandwidth. Further, by providing for such a perceptual quality assessment of the target video using one or more prediction functions, the perceptual quality assessment of the target video can be performed within an endpoint device having limited processing power. Moreover, by using the average values of frame-wise objective measurements for a predetermined segment from a corresponding time frame within the reference video and the target video, fluctuations in the perceptual quality assessment of the target video can be reduced.
Other features, functions, and aspects of the invention will be evident from the Drawings and/or the Detailed Description of the Invention that follow.
The invention will be more fully understood with reference to the following Detailed Description of the Invention in conjunction with the drawings of which:
FIG. 1 is a block diagram of an exemplary video communications environment that includes an exemplary video quality measurement system employing a reduced-reference approach;

FIG. 2a is a block diagram of an exemplary target feature extractor, an exemplary reference feature extractor, and an exemplary quality assessor included within the system of FIG. 1, illustrating a first exemplary method of providing the target features and the reference features to the quality assessor;

FIG. 2b is a block diagram of the exemplary target feature extractor, the exemplary reference feature extractor, and the exemplary quality assessor included within the system of FIG. 1, illustrating a second exemplary method of providing the target features and the reference features to the quality assessor;

FIG. 2c is a block diagram of the exemplary target feature extractor, the exemplary reference feature extractor, and the exemplary quality assessor included within the system of FIG. 1, illustrating a third exemplary method of providing the target features and the reference features to the quality assessor; and

FIG. 3 is a flow diagram of an illustrative method of operating the video quality measurement system of FIG. 1.
Systems and methods of objective video quality measurement are disclosed that employ a reduced-reference approach. Such systems and methods of objective video quality measurement can extract information pertaining to one or more features (also referred to herein as “target features”) of a target video whose perceptual quality is to be measured, extract corresponding information pertaining to one or more features (also referred to herein as “reference features”) of a reference video, and employ one or more prediction functions involving the target features and the reference features to provide a measurement of the perceptual quality of the target video.
It is noted that one or more types of degradation may be introduced into the source video during its processing within the video encoder 102 to generate the reference video. One or more types of degradation may also be introduced into the reference video during its processing within the video transcoder 104, and/or its transmission over the communication channel 106. By way of non-limiting example, such degradation of the source video and/or the reference video may be due to image rotation, additive noise, low-pass filtering, compression losses, transmission losses, and/or any other possible type of degradation. For example, the perceptual quality of each of the source video, the reference video, and the target video can be represented by a predicted mean opinion score (MOS), or any other suitable type of quality assessment score. It is noted that the perceptual quality of the reference video can be represented by a predetermined constant value. It is further noted that the source video is assumed to have the highest perceptual quality in comparison to the reference video and the target video.
It is noted that such objective measurements performed with regard to the respective target and reference videos can result in target features and reference features that can be represented by compact data sets, which may be transmitted over a network without consuming an undesirably excessive amount of network bandwidth. It is further noted that the term “quantization index” (also referred to herein as a/the “QI”), as employed herein, corresponds to any suitable parameter that can be adjusted to control the quantization step-size used by a video encoder, such as the video encoder 102, or a video encoder (not shown) within the video transcoder 104. For example, such a QI can correspond to a quantization parameter (also referred to herein as a/the “QP”) for a video bitstream compressed according to the H.264 coding format, a quantization scale for a video bitstream compressed according to the MPEG-2 coding format, or any other suitable parameter for a video bitstream compressed according to any other suitable coding format.
The quality assessor 116 is operative to provide an assessment of the perceptual quality of the target video, after its having been processed and transmitted within the video communications environment 100, using one or more prediction functions involving the target features and the reference features. For example, one or more of the prediction functions can be linear prediction functions or non-linear prediction functions. In accordance with the illustrative embodiment of FIG. 1, the quality assessor 116 is operative to estimate the perceptual quality of the target video by obtaining a difference between an estimate of the perceptual quality of the reference video, and an estimate of the predicted DMOS between at least a portion of the reference video and at least a portion of the target video. For example, the estimate of the perceptual quality of the target video, $\hat{Q}_{tar}$, can be expressed as
$$\hat{Q}_{tar} = \hat{Q}_{ref} - \widehat{\Delta Q}, \qquad (1a)$$

in which “$\hat{Q}_{ref}$” corresponds to the estimate of the perceptual quality of the reference video, and “$\widehat{\Delta Q}$” corresponds to the estimate of the predicted DMOS between the reference video and the target video, which, in turn, can be expressed as

$$\Delta Q = Q_{ref} - Q_{tar}. \qquad (1b)$$
The quality assessor 116 is further operative to calculate or otherwise determine the $\hat{Q}_{ref}$ using a first prediction function for a predetermined segment from a corresponding time frame within the reference video and the target video. For example, such a predetermined segment can have a duration of about 5 seconds, or any other suitable duration.
Moreover, the $\hat{Q}_{ref}$ can be expressed as

$$\hat{Q}_{ref} = f_1(QI_{ref}), \qquad (2)$$
in which “$f_1(QI_{ref})$” corresponds to the first prediction function, and “$QI_{ref}$” corresponds to the QI for the reference video. For example, the first prediction function, $f_1(QI_{ref})$, can be a linear function of the QI for the reference video, and/or any other suitable reference feature(s). The quality assessor 116 is further operative to calculate or otherwise determine the $\widehat{\Delta Q}$ using a second prediction function for the predetermined segment. For example, the $\widehat{\Delta Q}$ can be expressed as

$$\widehat{\Delta Q} = f_2(\Delta blr, \Delta blk), \qquad (3)$$
in which “$f_2(\Delta blr, \Delta blk)$” corresponds to the second prediction function, “$\Delta blr$” corresponds to the average change in the blurriness measurements between the reference video and the target video for the predetermined segment, and “$\Delta blk$” corresponds to the average change in the blockiness measurements between the reference video and the target video for the predetermined segment.

For example, the second prediction function, $f_2(\Delta blr, \Delta blk)$, can be a linear function of the average change in the blurriness measurements for the respective target and reference videos, the average change in the blockiness measurements for the respective target and reference videos, and/or any other suitable reference feature(s) and target feature(s). Such blurriness measurements for the respective target and reference videos can be performed using any suitable technique, such as the techniques described in U.S. patent application Ser. No. 12/706,165, filed Feb. 16, 2010, entitled UNIVERSAL BLURRINESS MEASUREMENT APPROACH FOR DIGITAL IMAGERY, which is assigned to the same assignee of the present application, and which is hereby incorporated herein by reference in its entirety. Further, such blockiness measurements for the respective target and reference videos can be performed using any suitable technique, such as the techniques described in U.S. patent application Ser. No. 12/757,389, filed Apr. 9, 2010, entitled BLIND BLOCKING ARTIFACT MEASUREMENT APPROACHES FOR DIGITAL IMAGERY, which is assigned to the same assignee of the present application, and which is hereby incorporated herein by reference in its entirety.
In accordance with the illustrative embodiment of FIG. 1, to obtain the average change, $\Delta blr$, in the blurriness measurements for the reference video and the target video, the quality assessor 116 is operative, for each predetermined segment from a corresponding time frame within the reference video and the target video, to perform blurriness measurements in a frame-wise fashion, and to take the average of the frame-wise blurriness measurements to obtain the average blurriness measurements, $blr_{ref}$ and $blr_{tar}$, for the reference video and the target video, respectively. The quality assessor 116 is further operative to take the difference between the average blurriness measurements, $blr_{ref}$ and $blr_{tar}$, as follows,

$$\Delta blr = blr_{ref} - blr_{tar}. \qquad (4)$$
Similarly, to obtain the average change, $\Delta blk$, in the blockiness measurements for the reference video and the target video, the quality assessor 116 is operative, for each predetermined segment from a corresponding time frame within the reference video and the target video, to perform blockiness measurements in a frame-wise fashion, and to take the average of the frame-wise blockiness measurements to obtain the average blockiness measurements, $blk_{ref}$ and $blk_{tar}$, for the reference video and the target video, respectively. The quality assessor 116 is further operative to take the difference between the average blockiness measurements, $blk_{ref}$ and $blk_{tar}$, as follows,

$$\Delta blk = blk_{ref} - blk_{tar}. \qquad (5)$$
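By way of non-limiting illustration, the following Python sketch shows how the segment-level averages and differences of equations (4) and (5) can be computed from frame-wise measurements; the function name and the per-frame input arrays are hypothetical, and the blurriness and blockiness estimators themselves (such as those referenced above) are outside the scope of this sketch.

```python
import numpy as np

def segment_feature_deltas(blr_ref, blr_tar, blk_ref, blk_tar):
    """Compute the average changes of equations (4) and (5) for one
    predetermined segment.

    Each argument is a 1-D array of frame-wise measurements over the
    segment (hypothetical outputs of blurriness/blockiness estimators).
    """
    # Average the frame-wise measurements over the segment to obtain
    # blr_ref, blr_tar, blk_ref, and blk_tar.
    blr_ref_avg, blr_tar_avg = np.mean(blr_ref), np.mean(blr_tar)
    blk_ref_avg, blk_tar_avg = np.mean(blk_ref), np.mean(blk_tar)

    # Equation (4): delta_blr = blr_ref - blr_tar.
    delta_blr = blr_ref_avg - blr_tar_avg
    # Equation (5): delta_blk = blk_ref - blk_tar.
    delta_blk = blk_ref_avg - blk_tar_avg
    return delta_blr, delta_blk
```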
In further accordance with the illustrative embodiment of FIG. 1, because the reference video and the target video may have different frame resolutions, the average change, $\Delta blr$, in the blurriness measurements can be normalized with regard to the widths of the respective video frames, for example, as

$$\Delta blr = \frac{blr_{ref}}{w_{ref}} - \frac{blr_{tar}}{w_{tar}}, \qquad (6)$$

in which “$w_{ref}$” is the width of each video frame in the predetermined segment corresponding to the reference video, and “$w_{tar}$” is the width of each video frame in the predetermined segment corresponding to the target video.
Moreover, to obtain the $QI_{ref}$, the quality assessor 116 is further operative, for each predetermined segment from a corresponding time frame within the reference video and the target video, to obtain the QI in a frame-wise fashion, and to take the average of the frame-wise QIs to obtain the $QI_{ref}$ for the reference video. For example, each frame-wise QI can be determined by taking the average of the QIs for all of the coded macroblocks in a corresponding video frame.
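A corresponding minimal sketch of the frame-wise and segment-level QI averaging described above is given below; the nested macroblock-QI arrays are hypothetical stand-ins for values parsed from the coded reference bitstream.

```python
import numpy as np

def segment_qi_ref(frame_mb_qis):
    """Compute QI_ref for one predetermined segment.

    frame_mb_qis: a list with one 1-D array per video frame, holding
    the QIs (e.g., H.264 QPs) of all coded macroblocks in that frame.
    """
    # Frame-wise QI: the average of the QIs of all coded macroblocks
    # in the corresponding video frame.
    frame_qis = [np.mean(mb_qis) for mb_qis in frame_mb_qis]
    # Segment-level QI_ref: the average of the frame-wise QIs.
    return float(np.mean(frame_qis))
```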
In further accordance with the illustrative embodiment of FIG. 1, the quality assessor 116 is operative to estimate the perceptual quality of the target video, $\hat{Q}_{tar}$, using a third prediction function based on the first prediction function, $f_1(QI_{ref})$, and the second prediction function, $f_2(\Delta blr, \Delta blk)$. For example, the $\hat{Q}_{tar}$ can be expressed as

$$\hat{Q}_{tar} = f_3(QI_{ref}, \Delta blr, \Delta blk) = a_3 \cdot QI_{ref} + b_3 \cdot \Delta blr + c_3 \cdot \Delta blk + d_3, \qquad (7)$$

in which “$f_3(QI_{ref}, \Delta blr, \Delta blk)$” corresponds to the third prediction function, and “$a_3$,” “$b_3$,” “$c_3$,” and “$d_3$” each correspond to a parameter coefficient of the third prediction function. It is noted that the value of each of the parameter coefficients $a_3$, $b_3$, $c_3$, and $d_3$ can be determined using a multi-variate linear regression approach, based on a plurality of predetermined target video bitstreams and their corresponding reference video bitstreams, and ground truth quality values, or any other suitable technique.
By way of example, one such technique for determining the parameter coefficients, $a_3$, $b_3$, $c_3$, and $d_3$, includes, for a large number (e.g., greater than about 500) of predetermined segments, collecting the corresponding ground truth quality values and objective feature values for $QI_{ref}$, $\Delta blr$, and $\Delta blk$. A matrix, $X$, can then be formed, as follows,

$$X = [\, QI_{ref} \mid \Delta blr \mid \Delta blk \mid \mathbf{1} \,], \qquad (8)$$

in which “$QI_{ref}$” is a vector of all of the corresponding $QI_{ref}$ values, “$\Delta blr$” is a vector of all of the corresponding $\Delta blr$ values, “$\Delta blk$” is a vector of all of the corresponding $\Delta blk$ values, “$\mathbf{1}$” is a vector of 1s, and “$\mid$” indicates column-wise concatenation of the vectors, $QI_{ref}$, $\Delta blr$, $\Delta blk$, and $\mathbf{1}$. Next, a vector, $y$, of all of the ground truth quality values can be formed, and can be related to the matrix, $X$, as follows,

$$y = X p, \qquad (9)$$

in which “$p$” is a parameter vector, which can be expressed as

$$p = (a_3, b_3, c_3, d_3)^T. \qquad (10)$$

For example, the parameter vector, $p$, can be determined using a least-squares linear regression approach, as follows,

$$p = (X^T X)^{-1} X^T y. \qquad (11)$$
In this way, exemplary values for the parameter coefficients, a3, b3, c3, and d3, can be determined to be equal to about 0.025, 0.19, 1.85, and 4.28, respectively, or any other suitable values.
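By way of non-limiting illustration, a Python sketch of the regression of equations (8)-(11) and of the evaluation of the third prediction function follows; the function names are hypothetical, and the training arrays are placeholders for the collected feature values and ground truth quality values.

```python
import numpy as np

def fit_f3(qi_ref, d_blr, d_blk, ground_truth):
    """Solve equations (8)-(11) for p = (a3, b3, c3, d3)^T.

    qi_ref, d_blr, d_blk: 1-D arrays of per-segment feature values.
    ground_truth: 1-D array of the corresponding quality values (y).
    """
    # Equation (8): column-wise concatenation, with a final column of 1s.
    X = np.column_stack([qi_ref, d_blr, d_blk, np.ones_like(qi_ref)])
    # Equation (11): p = (X^T X)^{-1} X^T y; lstsq solves the same
    # least-squares problem in a numerically stabler way.
    p, *_ = np.linalg.lstsq(X, ground_truth, rcond=None)
    return p

def predict_f3(p, qi_ref, d_blr, d_blk):
    """Evaluate the third prediction function, per the linear form
    reconstructed in equation (7) above."""
    a3, b3, c3, d3 = p
    return a3 * qi_ref + b3 * d_blr + c3 * d_blk + d3
```

For instance, with the exemplary values noted above, predict_f3((0.025, 0.19, 1.85, 4.28), qi_ref, d_blr, d_blk) would yield the segment-level estimate $\hat{Q}_{tar}$ under the reconstructed linear form.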
FIGS. 2a-2c each depict an exemplary method of providing the target features and the reference features from the target feature extractor 112 and the reference feature extractor 114, respectively, to the quality assessor 116 (see also FIG. 1).
As shown in FIG. 2a, the target feature extractor 112 can be located proximate to or co-located with the quality assessor 116, such as within the endpoint device, while the reference feature extractor 114 is disposed at a distal or geographically remote location. In accordance with such an exemplary method, the reference features can be transmitted from the reference feature extractor 114 over the network to the quality assessor 116, which, in turn, can access the target features from the target feature extractor 112 proximate thereto or co-located therewith for estimating the perceptual quality of the target video.
In addition, and as shown in FIG. 2b, the reference feature extractor 114 can be located proximate to or co-located with the quality assessor 116, while the target feature extractor 112 is disposed at a distal or geographically remote location, in which case the target features can be transmitted from the target feature extractor 112 over the network to the quality assessor 116. Moreover, as shown in FIG. 2c, the quality assessor 116 can be disposed at a centralized location that is geographically remote from both the target feature extractor 112 and the reference feature extractor 114, in which case the target features and the reference features can each be transmitted over the network to the quality assessor 116 for estimating the perceptual quality of the target video.
Having described the above illustrative embodiments of the video quality measurement system 101, other alternative embodiments or variations may be made. In accordance with one or more such alternative embodiments, the target feature extractor 112 and the reference feature extractor 114 can extract target features from the target video, and reference features from the reference video, respectively, by performing objective measurements with regard to the respective target and reference videos involving one or more additional temporal quality factors including, but not limited to, temporal quality factors relating to frame rates, video motion properties including jerkiness motion, frame dropping impairments, packet loss impairments, freezing impairments, and/or ringing impairments.
For example, taking into account the frame rate, in frames per second (fps), of the target video (also referred to herein as “$fps_{tar}$”), and the frame rate, in frames per second (fps), of the reference video (also referred to herein as “$fps_{ref}$”), the estimate of the predicted DMOS between the reference video and the target video, $\widehat{\Delta Q}$, can be determined using a modified second prediction function, as follows,

$$\widehat{\Delta Q} = f_2(\Delta blr, \Delta blk, \Delta fps), \qquad (12)$$

in which “$\Delta blr$” corresponds to the average change in the blurriness measurements between the reference video and the target video for a predetermined segment from a corresponding time frame within the reference video and the target video, “$\Delta blk$” corresponds to the average change in the blockiness measurements between the reference video and the target video for the predetermined segment, and “$\Delta fps$” corresponds to the average change in the frame rates between the reference video and the target video, $(fps_{ref} - fps_{tar})$, for the predetermined segment. For example, the $\Delta fps$ can have a minimum bound of 0, such that there is essentially no penalty or benefit for the frame rate of the target video being higher than the frame rate of the reference video.
Accordingly, the quality assessor 116 can estimate the perceptual quality of the target video, $\hat{Q}_{tar}$, using a fourth prediction function based on the first prediction function, $f_1(QI_{ref})$ (see equation (2) above), and the modified second prediction function, $f_2(\Delta blr, \Delta blk, \Delta fps)$ (see equation (12) above). For example, the $\hat{Q}_{tar}$ can be expressed as

$$\hat{Q}_{tar} = f_4(QI_{ref}, \Delta blr, \Delta blk, \Delta fps) = a_4 \cdot QI_{ref} + b_4 \cdot \Delta blr + c_4 \cdot \Delta blk + d_4 \cdot \Delta fps + e_4, \qquad (13)$$

in which “$f_4(QI_{ref}, \Delta blr, \Delta blk, \Delta fps)$” corresponds to the fourth prediction function, “$QI_{ref}$” corresponds to the QI for the reference video, and “$a_4$,” “$b_4$,” “$c_4$,” “$d_4$,” and “$e_4$” each correspond to a parameter coefficient of the fourth prediction function. For example, the value of each of the parameter coefficients $a_4$, $b_4$, $c_4$, $d_4$, and $e_4$ can be determined using a multi-variate linear regression approach, based on a plurality of predetermined target video bitstreams and their corresponding reference video bitstreams, and ground truth quality values, or any other suitable technique. For example, the parameter coefficients $a_4$, $b_4$, $c_4$, $d_4$, and $e_4$ may be determined or set to be equal to about 0.048, 0.19, 1.08, -0.044, and 5.23, respectively, or any other suitable values.
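A minimal sketch of the fourth prediction function follows, using the exemplary coefficients noted above as defaults and clamping $\Delta fps$ at its minimum bound of 0; the linear form mirrors the reconstruction of equation (13) above.

```python
def predict_f4(qi_ref, d_blr, d_blk, fps_ref, fps_tar,
               a4=0.048, b4=0.19, c4=1.08, d4=-0.044, e4=5.23):
    """Evaluate the fourth prediction function (equation (13), as
    reconstructed above) with the exemplary coefficients as defaults."""
    # Minimum bound of 0: no penalty or benefit when the target frame
    # rate exceeds the reference frame rate.
    d_fps = max(fps_ref - fps_tar, 0.0)
    return a4 * qi_ref + b4 * d_blr + c4 * d_blk + d4 * d_fps + e4
```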
In accordance with one or more further alternative embodiments, the video encoder 102 may be omitted from the video communications environment 100, allowing the source video to take on the role of the reference video. In such a case, the estimate of the perceptual quality of the reference video, $\hat{Q}_{ref}$, is assumed to be fixed and known. Further, because the source video is assumed to have the highest perceptual quality, the $\hat{Q}_{ref}$ estimate now corresponds to the highest perceptual quality in comparison to at least the estimate of the perceptual quality of the target video, $\hat{Q}_{tar}$.
Accordingly, the quality assessor 116 can estimate the perceptual quality of the target video, $\hat{Q}_{tar}$, using a fifth prediction function based on the modified second prediction function, $f_2(\Delta blr, \Delta blk, \Delta fps)$ (see equation (12) above). For example, the $\hat{Q}_{tar}$ can be expressed as

$$\hat{Q}_{tar} = f_5(\Delta blr, \Delta blk, \Delta fps) = a_5 \cdot \Delta blr + b_5 \cdot \Delta blk + c_5 \cdot \Delta fps + d_5, \qquad (14)$$

in which “$f_5(\Delta blr, \Delta blk, \Delta fps)$” corresponds to the fifth prediction function, and “$a_5$,” “$b_5$,” “$c_5$,” and “$d_5$” each correspond to a parameter coefficient of the fifth prediction function. For example, the value of each of the parameter coefficients $a_5$, $b_5$, $c_5$, and $d_5$ can be determined using a multi-variate linear regression approach, based on a plurality of predetermined target video bitstreams and their corresponding reference video bitstreams, and ground truth quality values, or any other suitable technique. It is noted that, in the fifth prediction function (see equation (14) above), the fixed, known estimate of the perceptual quality of the reference video, $\hat{Q}_{ref}$, can be incorporated into the parameter coefficient, $d_5$. For example, the parameter coefficients $a_5$, $b_5$, $c_5$, and $d_5$ may be determined or set to be equal to about 0.17, 1.81, -0.04, and 4.1, respectively, or any other suitable values.
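A corresponding sketch of the fifth prediction function follows; note that the constant term $d_5$ absorbs the fixed, known $\hat{Q}_{ref}$, so no reference-side quality estimate is passed in.

```python
def predict_f5(d_blr, d_blk, fps_ref, fps_tar,
               a5=0.17, b5=1.81, c5=-0.04, d5=4.1):
    """Evaluate the fifth prediction function (equation (14), as
    reconstructed above); d5 incorporates the fixed, known Q_ref."""
    # Delta-fps is clamped at its minimum bound of 0, as above.
    d_fps = max(fps_ref - fps_tar, 0.0)
    return a5 * d_blr + b5 * d_blk + c5 * d_fps + d5
```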
In accordance with one or more additional alternative embodiments, taking into account the video motion properties of the target video, the $\widehat{\Delta Q}$ can be expressed as

$$\widehat{\Delta Q} = f_2(\Delta blr, \Delta blk, \hat{Q}_{tar\_t}), \qquad (15)$$

in which “$\hat{Q}_{tar\_t}$” corresponds to a temporal quality factor that takes into account the video motion properties and frame rate of the target video. For example, the $\hat{Q}_{tar\_t}$ can be expressed as

$$\hat{Q}_{tar\_t} = \frac{1 - e^{-d \cdot \frac{f_{tar}}{f_{max}}}}{1 - e^{-d}}, \qquad (16)$$

in which “$f_{tar}$” corresponds to the frame rate of the target video, and “$f_{max}$” corresponds to a maximum frame rate. Further, in equation (16) above, “$d$” can be expressed as

$$d = \alpha \cdot e^{\beta \cdot AMD}, \qquad (17)$$
in which “α” and “β” are constants, and “AMD” is a temporal quality factor that can be obtained by taking the sum of absolute mean differences of pixel values in a block-wise fashion between consecutive video frames in the predetermined segment of the target video. For example, for video frames in the common intermediate format (CIF), the constants α and β may be set to be equal to about 9.412 and −0.1347, respectively, or any other suitable values. Further, for video frames in the video graphics array (VGA) format, the constants α and β may be set to be equal to about 8.526 and −0.0575, respectively, or any other suitable values. Moreover, for video frames in the high definition (HD) format, the constants α and β may be set to be equal to about 6.283 and −0.1105, respectively, or any other suitable values.
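By way of non-limiting illustration, a Python sketch of equations (16) and (17) is given below, with the exemplary format-dependent constants noted above; the exponential form of equation (16) is an assumption, reconstructed from the variables defined above, and the AMD value is a hypothetical input.

```python
import math

# Exemplary (alpha, beta) constants for each video frame format,
# per the values noted above.
AMD_CONSTANTS = {
    "CIF": (9.412, -0.1347),
    "VGA": (8.526, -0.0575),
    "HD":  (6.283, -0.1105),
}

def q_tar_t(amd, f_tar, f_max, video_format="CIF"):
    """Temporal quality factor of equations (16) and (17), as
    reconstructed above (an assumed form, not a verbatim formula)."""
    alpha, beta = AMD_CONSTANTS[video_format]
    d = alpha * math.exp(beta * amd)  # equation (17)
    # Equation (16): quality retained at frame rate f_tar relative to
    # the maximum frame rate f_max; equals 1 when f_tar == f_max.
    return (1.0 - math.exp(-d * (f_tar / f_max))) / (1.0 - math.exp(-d))
```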
Accordingly, the quality assessor 116 can estimate the perceptual quality of the target video, $\hat{Q}_{tar}$, using a sixth prediction function based on the modified second prediction function, $f_2(\Delta blr, \Delta blk, \hat{Q}_{tar\_t})$ (see equation (15) above). For example, the $\hat{Q}_{tar}$ can be expressed as

$$\hat{Q}_{tar} = f_6(\Delta blr, \Delta blk, \hat{Q}_{tar\_t}) = a_6 \cdot \Delta blr + b_6 \cdot \Delta blk + c_6 \cdot \hat{Q}_{tar\_t} + d_6, \qquad (18)$$

in which “$f_6(\Delta blr, \Delta blk, \hat{Q}_{tar\_t})$” corresponds to the sixth prediction function, and “$a_6$,” “$b_6$,” “$c_6$,” and “$d_6$” each correspond to a parameter coefficient of the sixth prediction function. For example, the value of each of the parameter coefficients $a_6$, $b_6$, $c_6$, and $d_6$ can be determined using a multi-variate linear regression approach, based on a plurality of predetermined target video bitstreams and their corresponding reference video bitstreams, and ground truth quality values, or any other suitable technique.
In accordance with one or more further alternative embodiments, taking into account the frame dropping impairments of the target video, the $\widehat{\Delta Q}$ can be expressed as

$$\widehat{\Delta Q} = f_2(\Delta blr, \Delta blk, NIFVQ(fps_{tar})), \qquad (19)$$

in which “$NIFVQ(fps_{tar})$” is a temporal quality factor representative of the negative impact of such frame dropping impairments on the perceptual quality of the target video for a predetermined segment from a corresponding time frame within the reference video and the target video, and “$fps_{tar}$” corresponds to the frame rate of the target video, for a current video frame in the predetermined segment. For example, $NIFVQ(fps_{tar})$ can be expressed as

$$NIFVQ(fps_{tar}) = \log(30) - \log(fps_{tar}) \qquad (20)$$
or
$$NIFVQ(fps_{tar}) = AMD \cdot \left[ \log(30) - \log(fps_{tar}) \right], \qquad (21)$$
in which “AMD” is the temporal quality factor that can be obtained by taking the sum of absolute mean differences of pixel values in a block-wise fashion between consecutive video frames in the predetermined segment of the target video. It is noted that, in the exemplary equations (20) and (21) above, it has been assumed that the maximum frame rate of the reference video is 30 frames per second.
Accordingly, the quality assessor 116 can estimate the perceptual quality of the target video, $\hat{Q}_{tar}$, using a seventh prediction function based on the modified second prediction function, $f_2(\Delta blr, \Delta blk, NIFVQ(fps_{tar}))$ (see equation (19) above). For example, the $\hat{Q}_{tar}$ can be expressed as

$$\hat{Q}_{tar} = f_7(\Delta blr, \Delta blk, NIFVQ(fps_{tar})) = a_7 \cdot \Delta blr + b_7 \cdot \Delta blk + c_7 \cdot NIFVQ(fps_{tar}) + d_7, \qquad (22)$$

in which “$f_7(\Delta blr, \Delta blk, NIFVQ(fps_{tar}))$” corresponds to the seventh prediction function, and “$a_7$,” “$b_7$,” “$c_7$,” and “$d_7$” each correspond to a parameter coefficient of the seventh prediction function. For example, the value of each of the parameter coefficients $a_7$, $b_7$, $c_7$, and $d_7$ can be determined using a multi-variate linear regression approach, based on a plurality of predetermined target video bitstreams and their corresponding reference video bitstreams, and ground truth quality values, or any other suitable technique. For example, in the event the temporal quality factor, $NIFVQ(fps_{tar})$, is determined using equation (20) above, the parameter coefficients $a_7$, $b_7$, $c_7$, and $d_7$ may be determined or set to be equal to about 0.17, 1.82, -0.68, and 4.07, respectively, or any other suitable values. Further, in the event the temporal quality factor, $NIFVQ(fps_{tar})$, is determined using equation (21) above, the parameter coefficients $a_7$, $b_7$, $c_7$, and $d_7$ may be determined or set to be equal to about 0.18, 1.84, -0.02, and 3.87, respectively, or any other suitable values.
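A sketch of the two NIFVQ variants of equations (20) and (21), together with the seventh prediction function of equation (22) as reconstructed above, follows; natural logarithms and a 30 fps maximum reference frame rate are assumed.

```python
import math

def nifvq(fps_tar, amd=None):
    """Equations (20)/(21): negative impact of frame dropping,
    assuming natural logarithms and a 30 fps maximum frame rate."""
    base = math.log(30.0) - math.log(fps_tar)
    # Equation (20) when no AMD weighting is used; equation (21) otherwise.
    return base if amd is None else amd * base

def predict_f7(d_blr, d_blk, nifvq_value, a7, b7, c7, d7):
    """Evaluate the seventh prediction function (equation (22), as
    reconstructed above)."""
    return a7 * d_blr + b7 * d_blk + c7 * nifvq_value + d7

# Exemplary coefficient sets noted above:
#   with equation (20): (a7, b7, c7, d7) = (0.17, 1.82, -0.68, 4.07)
#   with equation (21): (a7, b7, c7, d7) = (0.18, 1.84, -0.02, 3.87)
```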
In accordance with one or more additional alternative embodiments, taking into account the jerkiness motion in the target video, the $\widehat{\Delta Q}$ can be expressed as

$$\widehat{\Delta Q} = f_2(\Delta blr, \Delta blk, JM), \qquad (23)$$

in which “JM” is a temporal quality factor representative of a jerkiness measurement performed with regard to the target video for a predetermined segment from a corresponding time frame within the reference video and the target video. For example, JM can be expressed as

$$JM = \frac{1}{fps_{tar}} \cdot \frac{1}{M \cdot N} \sum_{x=1}^{M} \sum_{y=1}^{N} \left| f_i(x,y) - f_{i-1}(x,y) \right|, \qquad (24)$$

in which “$fps_{tar}$” is the frame rate of the target video, “M” and “N” are the dimensions of each video frame in the target video, and “$|f_i(x,y) - f_{i-1}(x,y)|$” represents the direct frame difference between consecutive video frames at times “i” and “i-1” in the target video. It is noted that the temporal quality factor, AMD, may be similar to the direct frame difference, $|f_i(x,y) - f_{i-1}(x,y)|$, employed in equation (24) above. The temporal quality factor, JM, can therefore be alternatively expressed in terms of the AMD as follows,

$$JM = \frac{1}{fps_{tar}} \cdot AMD. \qquad (25)$$
Accordingly, the quality assessor 116 can estimate the perceptual quality of the target video, $\hat{Q}_{tar}$, using an eighth prediction function based on the modified second prediction function, $f_2(\Delta blr, \Delta blk, JM)$ (see equation (23) above). For example, the $\hat{Q}_{tar}$ can be expressed as

$$\hat{Q}_{tar} = f_8(\Delta blr, \Delta blk, JM) = a_8 \cdot \Delta blr + b_8 \cdot \Delta blk + c_8 \cdot JM + d_8, \qquad (26)$$

in which “$f_8(\Delta blr, \Delta blk, JM)$” corresponds to the eighth prediction function, and “$a_8$,” “$b_8$,” “$c_8$,” and “$d_8$” each correspond to a parameter coefficient of the eighth prediction function. For example, the value of each of the parameter coefficients $a_8$, $b_8$, $c_8$, and $d_8$ can be determined using a multi-variate linear regression approach, based on a plurality of predetermined target video bitstreams and their corresponding reference video bitstreams, and ground truth quality values, or any other suitable technique. For example, the parameter coefficients $a_8$, $b_8$, $c_8$, and $d_8$ may be set to be equal to about 0.19, 1.92, -0.006, and 3.87, respectively, or any other suitable values.
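Finally, a sketch of the jerkiness-based eighth prediction function follows, using the AMD-based form of JM from equation (25) as reconstructed above (an assumption) and the exemplary coefficients as defaults.

```python
def predict_f8(d_blr, d_blk, amd, fps_tar,
               a8=0.19, b8=1.92, c8=-0.006, d8=3.87):
    """Evaluate the eighth prediction function (equation (26), as
    reconstructed above) with the exemplary coefficients as defaults."""
    jm = amd / fps_tar  # equation (25), as reconstructed above
    return a8 * d_blr + b8 * d_blk + c8 * jm + d8
```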
An illustrative method of operating the video quality measurement system 101 of FIG. 1 is described below with reference to FIG. 3.
It is noted that the operations depicted and/or described herein are purely exemplary, and imply no particular order. Further, the operations can be used in any sequence, when appropriate, and/or can be partially used. With the above illustrative embodiments in mind, it should be understood that such illustrative embodiments can employ various computer-implemented operations involving data transferred or stored in computer systems. Such operations are those requiring physical manipulation of physical quantities. Typically, though not necessarily, such quantities take the form of electrical, magnetic, and/or optical signals capable of being stored, transferred, combined, compared, and/or otherwise manipulated.
Further, any of the operations depicted and/or described herein that form part of the illustrative embodiments are useful machine operations. The illustrative embodiments also relate to a device or an apparatus for performing such operations. The apparatus can be specially constructed for the required purpose, or can be a general-purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general-purpose machines employing one or more processors coupled to one or more computer readable media can be used with computer programs written in accordance with the teachings disclosed herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
The presently disclosed systems and methods can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data, which can thereafter be read by a computer system. Examples of such computer readable media include hard drives, read-only memory (ROM), random-access memory (RAM), CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and/or any other suitable optical or non-optical data storage devices. The computer readable media can also be distributed over a network-coupled computer system, so that the computer readable code can be stored and/or executed in a distributed fashion.
The foregoing description has been directed to particular illustrative embodiments of this disclosure. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their associated advantages. Moreover, the procedures, processes, and/or modules described herein may be implemented in hardware, software, embodied as a computer-readable medium having program instructions, firmware, or a combination thereof. For example, the functions described herein may be performed by a processor executing program instructions out of a memory or other storage device.
It will be appreciated by those skilled in the art that modifications to and variations of the above-described systems and methods may be made without departing from the inventive concepts disclosed herein. Accordingly, the disclosure should not be viewed as limited except as by the scope and spirit of the appended claims.