This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2011-249176, filed on Nov. 14, 2011, the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to an image processing apparatus and an image processing method.
Conventionally, to enhance the resolution and the quality of an image, various types of image processing are performed in cameras, television receivers, and the like. One aspect of such image processing is a technology for adding a high-frequency image component, such as a texture, to a frame image. In the conventional technology, for example, a texture image is generated for each frame image and added to the frame image, whereby it is possible to improve the texture.
In the conventional technology, if an analysis is performed so as to generate a high-frequency image such as a texture for each frame image, for example, a processing load increases.
A general architecture that implements the various features of the invention will now be described with reference to the drawings. The drawings and the associated descriptions are provided to illustrate embodiments of the invention and not to limit the scope of the invention.
In general, according to one embodiment, an image processing apparatus comprises: a characteristic calculator configured to calculate, in a unit of a predetermined display area, characteristic information indicating a characteristic included in the predetermined display area of first frame image information contained in moving image information; a search module configured to search for motion of a pixel between the first frame image information and second frame image information that is contained in the moving image information and that is posterior to the first frame image information; an estimating module configured to estimate characteristic information in the unit of the predetermined display area of the second frame image information based on the motion of the pixel searched for by the search module and the characteristic information in the unit of the predetermined display area of the first frame image information; a generator configured to generate high-frequency image information in which a high-frequency component varies depending on the characteristic information in the unit of the predetermined display area of the second frame image information; and a blending module configured to blend the high-frequency image information on the second frame image information.
The image processing apparatus 100 according to the first embodiment performs processing in units of frame image data contained in moving image data in chronological order. The image processing apparatus 100 is included in a camera and a television receiver, for example. The image processing apparatus 100 performs various types of image processing on frame image data, and outputs the frame image data on which the image processing is performed.
The frame buffer 101 temporarily stores therein the frame image data thus received.
The characteristic amount calculator 102 calculates a characteristic amount indicating a characteristic included in a display area of the frame image data contained in the moving image data in units of predetermined display areas. In the first embodiment, the display size of the predetermined display area is 16×16 dots (hereinafter, also referred to as a reliability block for calculating reliability). However, the display size is not limited thereto. The characteristic amount calculated by the characteristic amount calculator 102 is represented by activity, for example. The activity is the degree of luminance fluctuation in the display area of 16×16 dots (reliability block). To calculate the activity, various methods, such as the method disclosed in Japanese Patent Application Laid-open No. 2008-310117, may be used.
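The per-block activity computation can be sketched as follows. This is a minimal illustration only: it assumes activity is measured as the mean absolute deviation of luminance within each 16×16 reliability block, whereas the cited method may define the fluctuation measure differently.

```python
import numpy as np

def block_activity(luma, block=16):
    """Activity (degree of luminance fluctuation) per reliability block.

    Assumption: activity is sketched here as the mean absolute deviation
    of luminance within each block; the embodiment only requires some
    measure of local fluctuation, not this exact formula.
    """
    h, w = luma.shape
    bh, bw = h // block, w // block
    act = np.empty((bh, bw))
    for by in range(bh):
        for bx in range(bw):
            tile = luma[by * block:(by + 1) * block,
                        bx * block:(bx + 1) * block]
            act[by, bx] = np.abs(tile - tile.mean()).mean()
    return act
```

A flat area yields activity 0.0, while a textured or ramped area yields a positive value, which is the behavior the reliability conversion later relies on.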
Furthermore, a motion vector calculated by performing motion search may be used as the characteristic amount. If such a motion vector is used, the characteristic amount calculator 102 may determine intensity based on the magnitude of the motion to be the characteristic amount (e.g., illustration of FIG. 5 in Japanese Patent Application Laid-open No. 2011-35450).
The characteristic amount is not limited to the activity and the motion vector, and various parameters may be used for the characteristic amount. Furthermore, the characteristic amount may be a plurality of types of parameters. If a plurality of types of parameters are used, for example, the final reliability can be determined by multiplying reliability calculated from each characteristic amount.
The image processing apparatus 100 according to the first embodiment blends texture image data on frame image data. To generate the texture image data, the characteristic amount calculator 102 calculates the characteristic amount for each reliability block (16×16 dots).
In the method disclosed in Japanese Patent Application Laid-open No. 2008-310117 and other methods, activity is calculated in order to blend a subtle texture, generated to improve the texture of an image, on frame image data. Subsequently, if it is determined based on the activity that fluctuation in the pixel value is nearly constant, control is performed such that the blend ratio of the subtle texture increases. To calculate the activity, these methods, including the disclosed method, need to analyze signals in a certain adjacent area, which increases the processing load. Because the calculation of the characteristic amount causes a heavy processing load in this manner, the characteristic amount calculator 102 in the first embodiment calculates the characteristic amount not for all the frame image data, but only for every m-th piece of frame image data contained in the moving image data, where m is a predetermined number.
The characteristic amount storage 103 stores therein the characteristic amount calculated by the characteristic amount calculator 102.
The characteristic amount is calculated by the characteristic amount calculator 102 only for every m-th piece of frame image data. However, the texture image data needs to be generated for each piece of frame image data, and therefore the characteristic amount of all the frame image data is necessary. To obtain it, in the image processing apparatus 100 according to the first embodiment, motion search is performed among a plurality of pieces of frame image data. Based on the search result and the characteristic amount calculated in one piece of frame image data, the characteristic amount of another piece of frame image data is calculated.
The motion search module 104 searches for a motion vector of a pixel between arbitrary frame image data and frame image data prior to the arbitrary frame image data among the frame image data contained in the moving image data. The motion search module 104 according to the first embodiment searches for the motion vector of the pixel in units of 8×8 dot blocks (hereinafter, also referred to as motion search blocks) obtained by dividing the reliability block (16×16 dots) in the arbitrary frame image data. The motion search module 104 according to the first embodiment calculates the motion vector in units of 8×8 dot blocks. Alternatively, the motion vector may be calculated in units of another display area size. Furthermore, the motion search may be performed with an accuracy of one-pixel units, or with a sub-pixel accuracy that is more minute than one pixel.
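The block motion search described above can be sketched as a full search with one-pixel accuracy. The search radius and the sum-of-absolute-differences (SAD) cost are assumptions for illustration; the embodiment does not fix the matching cost or search strategy, and also allows sub-pixel accuracy.

```python
import numpy as np

def search_motion(cur, ref, block=8, radius=4):
    """Full-search block matching, a sketch of motion search module 104.

    For each 8x8 motion-search block of `cur`, find the displacement
    (dy, dx) into `ref`, within +/-radius pixels, that minimises the
    sum of absolute differences (SAD). One-pixel accuracy only.
    """
    h, w = cur.shape
    vectors = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tgt = cur[y:y + block, x:x + block]
            best, best_mv = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ry, rx = y + dy, x + dx
                    if ry < 0 or rx < 0 or ry + block > h or rx + block > w:
                        continue
                    sad = np.abs(tgt - ref[ry:ry + block, rx:rx + block]).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[(y, x)] = best_mv
    return vectors
```

When `cur` is `ref` shifted by two pixels, the returned vectors point back to the corresponding blocks in `ref`, which is exactly the mapping (block P to corresponding block P′) used later for estimating characteristic amounts.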
Based on the motion vector of the pixel among a plurality of frames searched for by the motion search module 104 and the characteristic amount of the frame image data in units of the display area, the characteristic amount estimating module 106 estimates the characteristic amount of another piece of frame image data in units of the display area. The motion search module 104 searches for the motion of the pixel between one piece of frame image data and another piece of frame image data.
In other words, the motion search module 104 performs the motion search between the frame image data, and the characteristic amount calculator 102 calculates the characteristic amount with high accuracy by a signal analysis in frame image data n. Subsequently, the characteristic amount estimating module 106 uses a motion vector between frame image data n+1 and the frame image data n to estimate the characteristic amount of the frame image data n+1 from the characteristic amount of the frame image data n. With this estimation, the characteristic amount need not be calculated for each frame image data, whereby it is possible to reduce the processing load.
The reliability calculator 105 calculates reliability for each display area (reliability block) in each piece of frame image data based on the characteristic amount thus calculated or estimated. The reliability according to the first embodiment is the degree to which a texture component is to be added, and represents a value from 0.0 to 1.0, for example. If the characteristic amount is represented by the activity, for example, the reliability calculator 105 may perform non-linear conversion on the value of the activity, thereby converting the activity into the reliability.
The gradient feature calculator 107 calculates gradient feature data for each pixel included in the frame image data. The gradient feature data is the amount of change indicating a change in the pixel value in a predetermined display area near a pixel as a gradient for each pixel included in the frame image data. The gradient feature calculator 107 uses a differential filter to calculate the gradient feature data for each pixel included in the frame image data, for example. In the first embodiment, the gradient feature calculator 107 uses a horizontal direction differential filter or a vertical direction differential filter to calculate horizontal direction gradient feature data and vertical direction gradient feature data for each pixel. While the size of the filter used for the calculation is approximately 3×3 to 5×5, for example, the size is not limited thereto. In the description below, the horizontal direction gradient feature may be referred to as “Fx”, and the vertical direction gradient feature may be referred to as “Fy”. Furthermore, in the first embodiment, an explanation is made of the example in which the gradient feature data for each pixel is used. However, it is not limited to the gradient feature data, and any data may be used as long as the data indicates the amount of change representing a change in the pixel value in the predetermined display area.
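The gradient feature computation can be sketched with a concrete filter pair. The 3×3 Sobel kernels below are one example of the 3×3 to 5×5 differential filters mentioned above, not necessarily the filters used in the embodiment.

```python
import numpy as np

def gradient_features(luma):
    """Per-pixel gradient features (Fx, Fy), a sketch of calculator 107.

    Assumption: a 3x3 Sobel pair stands in for the horizontal and
    vertical differential filters; edge padding keeps the output the
    same size as the input frame.
    """
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T  # vertical differential filter
    h, w = luma.shape
    pad = np.pad(luma, 1, mode="edge")
    Fx = np.zeros((h, w))
    Fy = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3]
            Fx[y, x] = (win * kx).sum()
            Fy[y, x] = (win * ky).sum()
    return Fx, Fy
```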
The generator 108 calculates gradient intensity of a local gradient pattern, which is a weight related to a high-frequency component of each pixel included in the frame image data, based on: a probability distribution indicating a distribution of relative values of gradient feature data of a high-frequency component for each pixel included in learning image data with respect to gradient feature data for each pixel included in the learning image data; and the gradient feature data (Fx, Fy) calculated for each pixel included in the frame image data. The learning image data according to the first embodiment has the same resolution (size of the display area) as that of the frame image data.
The local gradient pattern according to the first embodiment is a predetermined image pattern indicating a pattern of a change in a predetermined pixel value (e.g., a luminance value). The gradient intensity is a weight that is related to a high-frequency component for each pixel included in the frame image data and that is calculated based on the gradient feature. The gradient intensity is used for generating the high-frequency component of the frame image data.
The generator 108 then weights the local gradient pattern with the gradient intensity, and generates texture image data indicating the high-frequency component for the frame image data based on the reliability calculated for each display area. The local gradient pattern and the gradient intensity will be described later in detail.
At this time, by generating the texture image data based on the reliability, the generator 108 according to the first embodiment can generate texture image data in which the texture component varies depending on the characteristic amount of the frame image data in units of the reliability block (16×16 dots).
The probability distribution according to the first embodiment is the distribution of the relative values described above, that is, the distribution of the relative angles and relative magnitudes of the gradient of pixels in learning high-frequency component image data with respect to the gradient of each pixel in the learning image data. The probability distribution will now be described.
As illustrated in
As illustrated in
The distribution calculator 125 calculates the vector of the gradient of the high-frequency component for each pixel as described above, thereby calculating a probability distribution surrounded by the dashed line in
In the image processing apparatus 100 according to the first embodiment, the probability distribution calculated in the processing described above is stored in the probability distribution storage 109 in advance.
The generator 108 then uses the probability distribution and the gradient feature data to calculate the gradient intensity. The average of the “normal distribution N1” is “μ1”, and the standard deviation thereof is “σ1”. The average of the “normal distribution N2” is “μ2”, and the standard deviation thereof is “σ2”. The generator 108 acquires a random variable “α” from the “normal distribution N1”, and acquires a random variable “β” from the “normal distribution N2”. The generator 108 then calculates the gradient intensity of the high-frequency component by substituting the random variable “α”, the random variable “β”, and the gradient feature data (Fx, Fy) into Equation (1):
fx=αFx+βFy, fy=αFy−βFx (1)
where “fx” represents the horizontal direction gradient intensity, and “fy” represents the vertical direction gradient intensity.
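Equation (1) can be sketched directly. The distribution parameters passed in are placeholders; in the embodiment, μ1, σ1, μ2, and σ2 come from the stored probability distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_intensity(Fx, Fy, mu1, sigma1, mu2, sigma2):
    """Equation (1): combine the input gradient (Fx, Fy) with random
    variables alpha and beta drawn from the two learned normal
    distributions N1 and N2.

    Assumption: the distribution parameters are supplied by the caller;
    in the embodiment they are learned offline and stored.
    """
    alpha = rng.normal(mu1, sigma1)  # random variable from N1
    beta = rng.normal(mu2, sigma2)   # random variable from N2
    fx = alpha * Fx + beta * Fy      # horizontal gradient intensity
    fy = alpha * Fy - beta * Fx      # vertical gradient intensity
    return fx, fy
```

With zero standard deviations the draw is deterministic, which makes the rotation-and-scaling structure of Equation (1) easy to verify.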
Subsequently, based on the gradient intensity of the high-frequency component (horizontal direction: fx, and vertical direction: fy) and the local gradient patterns (horizontal direction: Gx, and vertical direction: Gy), the generator 108 generates a high-frequency component for the input image data. “Gx” and “Gy” are predetermined image patterns indicating patterns of a change in a predetermined pixel value. In the first embodiment, “Gx” and “Gy” are base patterns having the same luminance change as that of the filter used for the calculation of the gradient of the learning high-frequency component image performed by the distribution calculator 125.
In other words, the generator 108 calculates a high-frequency component “T” for each pixel included in the frame image data by substituting the gradient intensity (horizontal direction: fx, and vertical direction: fy) and the local gradient patterns (horizontal direction: Gx, and vertical direction: Gy) into Equation (2):
T=fx·Gx+fy·Gy (2)
Subsequently, the generator 108 changes the high-frequency component “T” based on the reliability. If reliability α represents a value from 0.0 to 1.0, for example, the generator 108 may perform processing using Equation (3):
T′=αT (3)
As a result, it is possible to vary the high-frequency component to be added to the frame image data in accordance with the reliability, that is, the characteristic amount.
In the first embodiment, high-frequency component image data composed of the high-frequency component “T′” calculated for each pixel by the generator 108 is texture image data. In the first embodiment, the display area of the texture image data has the same size as that of the frame image data.
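Equations (2) and (3) together can be sketched per pixel as follows. The 3×3 base patterns used in the test are hypothetical placeholders for Gx and Gy; the embodiment derives them from the learning-stage filter.

```python
import numpy as np

def texture_pixel(fx, fy, Gx, Gy, reliability):
    """Equations (2) and (3): weight the local gradient patterns Gx and
    Gy by the gradient intensities fx and fy to obtain the
    high-frequency component T, then scale T by the per-block
    reliability (a value from 0.0 to 1.0)."""
    T = fx * Gx + fy * Gy   # Equation (2)
    return reliability * T  # Equation (3): T' = alpha * T
```

With reliability 0.0 the texture contribution vanishes, and with reliability 1.0 the full component T is blended, which is how the characteristic amount controls the added texture.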
Subsequently, the blending module 110 blends the texture image data corresponding to the frame image data for each frame image data. As a result, it is possible to improve the texture, thereby achieving high image quality.
An explanation will be made of processing for blending the texture image data on the frame image data in the image processing apparatus 100 according to the first embodiment.
The image processing apparatus 100 reads frame image data (frame number i) from outside thereof (S601). i is an arbitrary integer uniquely assigned to each frame image data. The image processing apparatus 100 accumulates the frame image data (frame number i) in the frame buffer 101 (S602).
Subsequently, the characteristic amount calculator 102 determines whether the frame number i is a multiple of a constant m (S603). In other words, in the present processing flow, the characteristic amount is calculated only when the frame number i is a multiple of the constant m.
If the characteristic amount calculator 102 determines that the frame number i is a multiple of the constant m (Yes at S603), the calculation of a characteristic amount chai−m, which was started m frames earlier by a signal analysis for each reliability block performed by the characteristic amount calculator 102, is completed (S604). The characteristic amount chai−m thus calculated is stored in the characteristic amount storage 103. The processing for calculating the characteristic amount according to the first embodiment is performed as a separate thread in a divided manner over a time period for inputting a plurality of frames equal to or less than m frames. As a result, it is possible to smooth the processing load in each frame section.
As illustrated in
In the example illustrated in
Subsequently, the motion search module 104 performs motion search between frame image data (frame number i−m) and the frame image data (frame number i) stored in the frame buffer 101 (S605). In the motion search performed by the motion search module 104, a corresponding block in the frame image data (frame number i−m) is calculated for each motion search block of 8×8 dots in the frame image data (frame number i). The result thereof is represented as a motion vector mvi−m, i.
Based on a pair of frame image data (e.g., frame image data of the frame number i and the frame number i−m), the motion search module 104 searches for the positions of corresponding blocks P′, Q′, R′, and S′ having pixel patterns similar to those of the motion search blocks P, Q, R, and S, respectively. In the first embodiment, the motion search blocks P, Q, R, and S are included in the frame image data (frame number i (i is a multiple of m)), and the corresponding blocks P′, Q′, R′, and S′ are included in the frame image data (frame number i−m). An arrow vector illustrated in
If i is smaller than m, the frame image data (frame number i−m) is yet to be buffered in the frame buffer 101. Therefore, the subsequent processing is skipped, and the texture image data may be generated considering that no characteristic amount is obtained. Alternatively, the texture image data may be generated by calculating the characteristic amount using the frame image data itself as the reference frame image data.
In other words, frame image data of a frame number n−3 is paired with frame image data of a frame number n−6, frame image data of a frame number n is paired with the frame image data of the frame number n−3, and frame image data of a frame number n+3 is paired with the frame image data of the frame number n.
This is because processing time for 3 frames is required for the characteristic amount calculator 102 to calculate the characteristic amount as illustrated in
If motion search is performed between the frame image data of the frame number i and the frame image data m pieces prior thereto, the motion search may possibly be less accurate because the frame image data m pieces prior thereto is used. To prevent this, the calculation of the characteristic amount by a signal analysis of the characteristic amount calculator 102 may be performed in a time period of one frame.
In the example illustrated in
Referring back to
At this time, the characteristic amount estimating module 106 repeats the loop processing 1052 of S1001 and S1002 for each of all the motion search blocks (8×8 dots) included in the reliability block.
In other words, the characteristic amount estimating module 106 reads a characteristic amount chaj of a reliability block adjacent to a corresponding block (block prior to moving) specified by a motion vector mvj, i on a motion search reference frame j for each motion search block (S1001). In the case of S606 in
In the example illustrated in
Subsequently, based on the distances between the corresponding block (e.g., the corresponding block P′), whose position corresponds to the position of the motion search block (e.g., the motion search block P) before the motion vector moves, and the reliability blocks adjacent thereto, the characteristic amount estimating module 106 calculates the weighted average of the characteristic amounts of the reliability blocks adjacent to the corresponding block, thereby calculating the characteristic amount of the motion search block (e.g., the motion search block P) (S1002).
In the example illustrated in
chai−m,P′ = (a·chai−m,a + b·chai−m,b + c·chai−m,c + d·chai−m,d)/(a + b + c + d) (4)
The characteristic amount estimating module 106 then estimates the characteristic amount chai−m, P′ of the corresponding block P′ thus calculated to be the characteristic amount of the motion search block P, which is the position posterior to moving.
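Equation (4) is a distance-weighted average, which can be sketched as follows. The weights a to d stand for the (distance-derived) contributions of the reliability blocks adjacent to the corresponding block P′; how the distances map to weights is left open by the text.

```python
def estimate_block_characteristic(adjacent_chas, weights):
    """Equation (4): estimate the characteristic amount of corresponding
    block P' as the weighted average of the characteristic amounts of
    the adjacent reliability blocks a-d; the result is then carried
    over to motion search block P.

    Assumption: `weights` are precomputed from the distances between
    P' and the adjacent blocks.
    """
    numerator = sum(w * cha for w, cha in zip(weights, adjacent_chas))
    return numerator / sum(weights)
```

With equal weights the estimate reduces to the plain mean of the four adjacent blocks, and a dominant weight pulls the estimate toward the nearest block, matching the intent of the distance weighting.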
Referring back to
Subsequently, the reliability calculator 105 converts the characteristic amount chai into the reliability reli (S1004). To convert the characteristic amount chai into the reliability reli, various types of methods can be employed. If the activity is used as the characteristic amount as disclosed in Japanese Patent Application Laid-open No. 2008-310117, for example, the reliability calculator 105 may perform the conversion as follows: the smaller the characteristic amount chai is, the closer to 1.0 the reliability reli is; and the larger the characteristic amount chai is, the closer to 0.0 the reliability reli is.
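One possible non-linear conversion with the behavior described above (small activity maps near 1.0, large activity maps near 0.0) can be sketched as follows. The exponential form and the `scale` parameter are assumptions; the text only fixes the monotone behavior and the 0.0 to 1.0 range.

```python
import math

def activity_to_reliability(cha, scale=10.0):
    """Monotone non-linear conversion of a characteristic amount
    (activity) cha into reliability rel in [0.0, 1.0].

    Assumption: an exponential fall-off with a hypothetical `scale`
    parameter; any monotone decreasing map onto [0.0, 1.0] would fit
    the description in the text.
    """
    return math.exp(-cha / scale)
```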
In the process described above, the reliability calculator 105 calculates the reliability reli for each of all the reliability blocks included in the frame image data. After all the processing illustrated in
Referring back to
By contrast, if the characteristic amount calculator 102 determines that the frame number i is not a multiple of the constant m at S603 (No at S603), the motion search module 104 performs motion search between the frame image data of the frame number i−1 and the frame image data of the frame number i stored in the frame buffer 101 (S610). The process of the motion search is the same as that of S605 except that the number of the reference frame is i−1.
The characteristic amount estimating module 106 and the motion search module 104 function to estimate the characteristic amount chai and calculate the reliability reli from the characteristic amount chai−1 stored in the characteristic amount storage 103 and the motion vector mvi−1, i calculated at S610 (S611). The characteristic amount chai thus calculated is stored in the characteristic amount storage 103. In other words, based on the pairs of the frame image data illustrated in
Subsequently, the characteristic amount estimating module 106 deletes the frame image data of the frame number i−1 from the frame buffer 101 (S612).
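The per-frame control flow of S601 to S612 can be sketched as follows. The module calls are stubs standing in for the characteristic amount calculator 102, the motion search module 104, and the characteristic amount estimating module 106; the buffer cleanup in the multiple-of-m branch is assumed to mirror S612.

```python
class _StubModules:
    """Placeholder stand-ins for modules 102, 104, and 106. A real
    implementation would perform the signal analysis, block matching,
    and weighted-average estimation described in the text."""
    def finish_analysis(self, i):   # calculator 102, S604
        return 1.0
    def motion_search(self, j, i):  # search module 104, S605/S610
        return {}
    def estimate(self, cha, mv):    # estimating module 106, S606/S611
        return cha

class State:
    def __init__(self):
        self.frame_buffer = {}  # frame buffer 101
        self.cha_storage = {}   # characteristic amount storage 103
        self.modules = _StubModules()

def process_frame(i, m, frame, state):
    """Control-flow sketch of S601-S612 for frame number i."""
    mod = state.modules
    state.frame_buffer[i] = frame                          # S602
    if i < m:
        return                                             # reference frame not yet buffered; skip
    if i % m == 0:                                         # S603
        cha = mod.finish_analysis(i - m)                   # S604 (analysis started m frames earlier)
        state.cha_storage[i - m] = cha
        mv = mod.motion_search(i - m, i)                   # S605
        state.cha_storage[i] = mod.estimate(cha, mv)       # S606
        del state.frame_buffer[i - m]                      # S607 (assumed buffer cleanup)
    else:
        mv = mod.motion_search(i - 1, i)                   # S610
        state.cha_storage[i] = mod.estimate(
            state.cha_storage[i - 1], mv)                  # S611
        del state.frame_buffer[i - 1]                      # S612
```

Running the sketch over a short frame sequence shows that the expensive analysis path is taken only at frame numbers that are multiples of m, while the other frames reuse the stored characteristic amount of the immediately preceding frame.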
After S607 and S612, the generator 108 generates texture image data under control based on the reliability reli (S608).
An explanation will be made of the generation processing of the texture image data in the generator 108 according to the first embodiment.
The gradient feature calculator 107 uses the horizontal direction differential filter or the vertical direction differential filter to calculate a horizontal direction gradient feature and a vertical direction gradient feature for each pixel in the frame image data (S1201).
Subsequently, the generator 108 calculates the gradient intensity of each pixel in the frame image data based on a random variable derived from a probability distribution indicating a distribution of vectors, each of which is relative magnitude and a relative angle of the gradient feature of each pixel included in a high-frequency component of learning image information with respect to the gradient feature of each pixel included in the learning image information and on the gradient feature calculated by the gradient feature calculator 107 (S1202).
The generator 108 then generates texture image data indicating a high-frequency component for the frame image data based on the gradient intensity related to the high-frequency component for each pixel of the frame image data, the local gradient pattern, and the reliability calculated for each reliability block in the frame image data (S1203).
In the process described above, the texture image data in which the texture component to be added varies depending on the reliability of each reliability block is generated.
Referring back to
In the process described above, the texture image data in which the texture component to be added varies for each reliability block depending on the characteristic amount of each reliability block can be generated and added to the frame image data. Therefore, it is possible to improve the texture.
In the image processing apparatus according to the first embodiment, an explanation has been made of the example in which, to reduce the processing load, the characteristic amount calculated for every predetermined number of frame images is stored and used for calculating the reliability for each frame image data. In the first embodiment, the parameter stored to reduce the processing load is not limited to the characteristic amount. In an image processing apparatus according to a second embodiment, an example will be explained in which the reliability is stored.
The characteristic amount calculator 1301 calculates the characteristic amount in units of reliability blocks for every predetermined number of pieces of frame image data.
Every time the frame image data is read, the motion search module 1302 searches for a motion vector of a pixel among a plurality of pieces of frame image data. The motion search module 1302 according to the second embodiment searches for the motion vector for each motion search block obtained by dividing a reliability block in the same manner as in the first embodiment.
The reliability calculator 1303 calculates reliability for each reliability block of each frame image data based on the characteristic amount calculated by the characteristic amount calculator 1301.
The reliability storage 1304 stores therein the reliability calculated by the reliability calculator 1303.
Based on the reliability of each reliability block of arbitrary frame image data stored in the reliability storage 1304 and the motion vector searched for by the motion search module 1302, the reliability estimating module 1305 estimates the reliability for each reliability block in the frame image data read subsequently to the arbitrary frame image data.
The generator 108 uses the reliability estimated by the reliability estimating module 1305 to generate texture image data for each frame image data.
An explanation will be made of processing for blending the texture image data on the frame image data in the image processing apparatus 1300 according to the second embodiment.
In the same manner as that at S601 to S603 in the first embodiment illustrated in
If the characteristic amount calculator 1301 determines that the frame number i is a multiple of the constant m (Yes at S1403), a characteristic amount chai−m is calculated by the characteristic amount calculator 1301, and reliability reli−m is calculated by the reliability calculator 1303 (S1404).
The reliability reli−m thus calculated is stored in the reliability storage 1304. The processing for calculating the characteristic amount chai−m is performed as a separate thread in a divided manner over a time period for inputting a plurality of frames equal to or less than m frames. As a result, it is possible to smooth the processing load in each frame section.
As illustrated in
Subsequently, the reliability calculator 1303 converts the characteristic amount chai−m thus calculated into the reliability reli−m (S1502).
By performing the process described above, the reliability reli−m of the frame image data of the frame number i−m is calculated.
Referring back to
Subsequently, the reliability estimating module 1305 estimates the reliability reli based on the reliability reli−m calculated at S1404 and a motion vector mvi−m, i calculated at S1405 (S1406).
Furthermore, the reliability estimating module 1305 repeats the loop processing 1652 of S1601 and S1602 for each of all the motion search blocks included in the reliability block.
In other words, the reliability estimating module 1305 reads reliability relj of a reliability block adjacent to a corresponding block (block prior to moving) specified by a motion vector mvj, i on a motion search reference frame j for each motion search block (S1601). In the case of S1406 in
In the example illustrated in
Subsequently, based on the distances between the corresponding block (e.g., the corresponding block P′), whose position corresponds to the position of the motion search block (e.g., the motion search block P) before the motion vector moves, and the reliability blocks adjacent thereto, the reliability estimating module 1305 calculates the weighted average of the reliability of the reliability blocks adjacent to the corresponding block, thereby calculating the reliability of the motion search block (e.g., the motion search block P) (S1602).
The reliability estimating module 1305 averages out the reliability calculated for the motion search blocks included in the reliability block, thereby generating the reliability reli of the reliability block (S1603).
In the process described above, the reliability estimating module 1305 calculates the reliability reli for all the reliability blocks included in the frame image data. Subsequently, through the processing flow illustrated in
Referring back to
By contrast, if the characteristic amount calculator 1301 determines that the frame number i is not a multiple of the constant m at S1403 (No at S1403), the motion search module 1302 performs motion search between the frame image data of the frame number i−1 and the frame image data of the frame number i stored in the frame buffer 101 (S1410). The process of the motion search is the same as that of S1405 except that the number of the reference frame is i−1.
The reliability estimating module 1305 then estimates the reliability reli from the reliability reli−1 stored in the reliability storage 1304 and the motion vector mvi−1, i calculated at S1410 (S1411). The reliability reli thus calculated is stored in the reliability storage 1304.
Subsequently, the characteristic amount calculator 1301 deletes the frame image data of the frame number i−1 from the frame buffer 101 (S1412).
After S1407 and S1412, the generator 108 generates texture image data under control based on the reliability reli (S1408).
Subsequently, the blending module 110 blends the texture image data thus generated on the frame image data (S1409).
The image processing apparatuses according to the embodiments estimate the characteristic amount or the reliability by performing motion search. As a result, a signal analysis for calculating the characteristic amount need not be performed on all the frame image data, whereby it is possible to reduce the processing load. In particular, in an image processing apparatus that originally performs motion search between frames, sharing the motion search result is especially effective for reducing the processing load.
The image processing apparatuses according to the embodiments control the degree to which a texture component is added by using the reliability in the high-quality image processing on the texture image data and the like. In this case, the image processing apparatuses calculate the characteristic amount by a signal analysis only every few frames, and estimate the characteristic amount or the reliability in the other frames by using a motion search result between the frames obtained separately. With this configuration, the image processing apparatuses according to the embodiments can not only reduce the whole load related to generation of the texture image data, but also keep the influence on the image quality small.
In the embodiments, the texture generation processing is explained as an example of the high-quality image processing. However, the technology may be applied to super-resolution processing and edge enhancement processing for increasing the sharpness, for example.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel apparatuses and methods described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the apparatuses and methods described herein may be made without departing from the spirit of the inventions. Moreover, the apparatuses and methods described herein may be combined as appropriate without being inconsistent with the content. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
The functions of the image processing apparatuses explained in the embodiments may be included in a camera, a television receiver, and the like as components. Alternatively, the functions may be realized by executing an image processing program prepared in advance in a computer, such as a personal computer and a work station.
The image processing program executed in the computer may be distributed over a network such as the Internet. Furthermore, the image processing program may be recorded in a computer-readable recording medium, such as a hard disk, a flexible disk (FD), a compact disk read-only memory (CD-ROM), a magneto-optical disk (MO), and a digital versatile disc (DVD), and executed by being read from the recording medium by the computer.
Moreover, the various modules of the systems described herein can be implemented as software applications, hardware and/or software modules, or components on one or more computers, such as servers. While the various modules are illustrated separately, they may share some or all of the same underlying logic or code.