This technology relates to an image processing technique.
For example, a “1-segment receiving service for a cellular phone or mobile terminal” (also called “One Seg”) is being provided for mobile terminals such as cellular phones.
By the way, there are mobile terminals which are capable of handling “One Seg” but have a small display screen, and such mobile terminals have a function for expanding and displaying part of a video image. For example, when the display is expanded based on the center of the video image, some areas in a peripheral area of the video image protrude out of the display frame, and a caption that is inserted in the peripheral area of the video image is not displayed. Incidentally, the caption is often inserted in the peripheral area of the video image. In addition, this problem is not limited to mobile terminals that are capable of handling “One Seg”, and may also occur on other terminals that perform screen displays.
On the other hand, there is a conventional technique for moving a strip-shaped area 101 (hereafter, called a caption strip) on a screen such as illustrated in
However, in the conventional techniques, because the area of the movement destination is replaced with the entire caption strip or the entire caption area, the video image to be originally displayed in the area of the movement destination is not displayed at all. Particularly, when the display screen is small, the video image to be originally displayed is greatly affected.
This caption movement processing apparatus includes: a caption extraction unit to identify first pixels belonging to a first portion regarded as a character string that is inserted with an overlap on a background in an expanded image generated by expanding a specific frame image included in video image data; a caption movement calculation unit to determine whether or not any one of the first pixels is out of a display area that is a part of the expanded image and to calculate a movement amount for moving the first portion so as to make all of the first pixels or at least a main portion of the first pixels accommodated in the display area when it is determined that any one of the first pixels is out of the display area; and a caption drawing unit to identify a movement destination pixel for each of the first pixels or each of pixels belonging to the character string represented by a predetermined font, according to the calculated movement amount, and to replace a color of the movement destination pixel with a predetermined color.
The object and advantages of the embodiment will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the embodiment, as claimed.
The input unit 1 sequentially receives plural frame images relating to a certain video image, and stores those frame images into the frame image storage unit 3. The image expansion processing unit 5 uses the frame images that are stored in the frame image storage unit 3, and by performing an image expansion processing that will be explained later, the image expansion processing unit 5 generates expanded images that correspond to the frame images, then stores the expanded images into the expanded image storage unit 7.

By using the expanded images that are stored in the expanded image storage unit 7 to perform a caption extraction processing that will be explained later, the caption extractor 9 extracts a portion regarded as a character string that was inserted with an overlap on the background (hereafter, this may also be called the “caption character portion”), then generates a mask image that will be explained later and stores the mask image into the mask image storage unit 11. The font dictionary storage unit 13 stores font dictionaries that include, for each character code, a character image of a character, which is expressed using a predetermined font. By using the mask image that is stored in the mask image storage unit 11 and the font dictionary that is stored in the font dictionary storage unit 13 to perform a font generation processing that will be explained later, the caption generator 15 updates the mask image.

By using the mask image that is stored in the mask image storage unit 11 and the expanded image that is stored in the expanded image storage unit 7 to perform a caption feature calculation processing that will be explained later, the caption feature calculation unit 17 identifies the circumscribed rectangle of the caption character portion, and calculates the average color of pixels belonging to the caption character portion. By using the mask image that is stored in the mask image storage unit 11 to perform a caption movement calculation processing that will be explained later, the caption movement calculation unit 19 calculates a movement amount of the caption character portion. By using the mask image that is stored in the mask image storage unit 11 and the movement amount calculated by the caption movement calculation unit 19 to perform a caption drawing processing that will be explained later, the caption drawing unit 21 generates an output image and stores the generated output image into the output image storage unit 23.

The caption processing unit 25 performs a caption processing that will be explained later on the output image that is stored in the output image storage unit 23, and updates the output image. The output unit 27 outputs the output image that is stored in the output image storage unit 23 to a display device.
Next, the processing by the caption movement processing apparatus illustrated in
First, the image expansion processing unit 5 acquires the size of the read frame image I and the expansion rate p (
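As a reference, an expansion step of this kind can be sketched as follows in Python. OpenCV and bilinear interpolation are assumptions for illustration; the embodiment only states that an expanded image is generated from the frame image I according to the expansion rate p.

```python
import cv2  # assumption: OpenCV is available; any resampling method could be used


def expand_image(frame, p):
    """Expand a frame image by the expansion rate p (e.g. p = 2.0 doubles both sides)."""
    # Bilinear interpolation is an illustrative choice; the embodiment does not
    # specify the interpolation method used to generate the expanded image M.
    return cv2.resize(frame, None, fx=p, fy=p, interpolation=cv2.INTER_LINEAR)
```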
Returning to the explanation of
First, the caption extractor 9 identifies the caption character portion in the expanded image M (
Returning to the explanation of
First, based on the mask image m, the caption feature calculation unit 17 identifies, from among the pixels that belong to the caption character portion (in other words, pixels for which m(x, y, t)=1), the pixel whose x coordinate value is the minimum, and sets the x coordinate value of the identified pixel to a variable msx (
The caption feature calculation unit 17 then identifies, from among the pixels that belong to the caption character portion (in other words, pixels for which m(x, y, t)=1), the pixel whose x coordinate value is the maximum based on the mask image m, and sets the x coordinate value of the identified pixel to a variable mex (step S43). In other words, the x coordinate value of the pixel on the furthest right end among the pixels that belong to the caption character portion is set to the variable mex.
The caption feature calculation unit 17 then, based on the mask image m, identifies the pixel from among the pixels that belong to the caption character section (in other words, pixels for which m(x, y, t)=1) whose y coordinate value is a minimum, and sets the y coordinate value of the identified pixel as a variable msy (step S45). In other words, the y coordinate value of the pixel on the top end among the pixels that belong to the caption character portion is set to the variable msy.
The caption feature calculation unit 17 then identifies, from among the pixels that belong to the caption character portion (in other words, pixels for which m(x, y, t)=1), the pixel whose y coordinate value is the maximum based on the mask image m, and sets the y coordinate value of the identified pixel to the variable mey (step S47). In other words, the y coordinate value of the pixel on the bottom end among the pixels that belong to the caption character portion is set to the variable mey.
When the processing of the step S41 to step S47 has been carried out, the circumscribed rectangle of the caption character portion is identified as illustrated in
The caption feature calculation unit 17 then calculates the average color μ of the pixels that belong to the caption character portion and stores the calculation result into a storage device (step S49). For example, in the case of an image that is expressed using RGB, the caption feature calculation unit 17 calculates the average value of each color component, and the average color is μ=(ru, gu, bu). The caption feature calculation processing then ends, and the processing returns to the calling-source processing.
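The processing of the steps S41 to S49 can be sketched compactly. The following Python fragment is illustrative only; it assumes the mask image m and the expanded image M are held as numpy arrays indexed as [y, x], which the embodiment does not specify.

```python
import numpy as np


def caption_features(m, M):
    """Circumscribed rectangle and average color of the caption character portion.
    m: mask image (1 = caption character pixel), M: expanded image (RGB).
    Assumes the mask contains at least one caption character pixel."""
    ys, xs = np.nonzero(m == 1)
    msx, mex = xs.min(), xs.max()  # steps S41 and S43: leftmost and rightmost pixels
    msy, mey = ys.min(), ys.max()  # steps S45 and S47: top end and bottom end pixels
    mu = M[ys, xs].mean(axis=0)    # step S49: average of each color component (ru, gu, bu)
    return (msx, msy, mex, mey), mu
```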
Returning to the explanation of
First, the caption movement calculation unit 19 sets “0” to a variable yflag (
The caption movement calculation unit 19 then determines whether or not msy is less than sy+ymargin (step S55). In other words, the caption movement calculation unit 19 determines whether or not the caption character portion protrudes out in the upward direction. Here, ymargin represents the size of the margin area that is provided on the inside from the edge of the display area (top end and bottom end) in the y-axis direction, and is set beforehand. In this embodiment, the caption character portion is displayed at a position determined by an extra amount ymargin from the edge of the display area in the y-axis direction. For example, as illustrated in
When it is determined that msy is less than sy+ymargin (step S55: YES route), it is determined that the caption character portion protrudes out in the upward direction, and the caption movement calculation unit 19 sets “1” to the variable yflag (step S57). On the other hand, when it is determined that msy is equal to or greater than sy+ymargin (step S55: NO route), the processing of step S57 is skipped, and the processing moves to the processing of step S59.
The caption movement calculation unit 19 then determines whether or not mey is greater than ey−ymargin (step S59). In other words, the caption movement calculation unit 19 determines whether or not the caption character portion protrudes out in the downward direction. When mey is determined to be greater than ey−ymargin (step S59: YES route), it is determined that the caption character portion protrudes out in the downward direction, and the caption movement calculation unit 19 adds “2” to the yflag (step S61). On the other hand, when it is determined that mey is equal to or less than ey−ymargin (step S59: NO route), the processing of step S61 is skipped, and the processing moves to the processing of step S63.
Therefore, when the caption character portion protrudes out in only the upward direction, yflag becomes 1. When the caption character portion protrudes out in only the downward direction, yflag becomes 2. Furthermore, when the caption character portion protrudes out in both the upward direction and downward direction, yflag becomes 3.
The caption movement calculation unit 19 then determines whether or not msx is less than sx+xmargin (step S63). In other words, the caption movement calculation unit 19 determines whether or not the caption character portion protrudes out in the left direction. Here, the xmargin represents the size of a margin area that is provided on the inside from the left end and right end of the display area, and is set beforehand. In this embodiment, the caption character portion is displayed at a position also determined by an extra amount xmargin in the x-axis direction.
When it is determined that msx is less than sx+xmargin (step S63: YES route), it is determined that the caption character portion protrudes out in the left direction, and the caption movement calculation unit 19 sets “1” to xflag (step S65). On the other hand, when it is determined that msx is equal to or greater than sx+xmargin (step S63: NO route), the processing of step S65 is skipped, and the processing moves to the processing of step S67.
The caption movement calculation unit 19 then determines whether or not mex is greater than ex−xmargin (step S67). In other words, the caption movement calculation unit 19 determines whether or not the caption character portion protrudes out in the right direction. When it is determined that mex is greater than ex−xmargin (step S67: YES route), it is determined that the caption character portion protrudes out in the right direction, and the caption movement calculation unit 19 adds “2” to xflag (step S69). On the other hand, when it is determined that mex is equal to or less than ex−xmargin (step S67: NO route), the processing of step S69 is skipped. After that, the processing moves to the processing of step S71 (
Therefore, when the caption character portion protrudes out only in the left direction, xflag becomes 1. Moreover, when the caption character portion protrudes out only in the right direction, xflag becomes 2. Furthermore, when the caption character portion protrudes out in both the left direction and right direction, xflag becomes 3.
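The flag calculation of the steps S51 to S69 amounts to a few comparisons. Below is a minimal sketch, where (sx, sy) and (ex, ey) are assumed to be the top-left and bottom-right coordinates of the display area, consistent with how they are used in the text.

```python
def protrusion_flags(msx, msy, mex, mey, sx, sy, ex, ey, xmargin, ymargin):
    """Encode which sides of the display area the caption character portion
    protrudes past: 0 = neither, 1 = top (or left) only, 2 = bottom (or right)
    only, 3 = both sides."""
    yflag = 0
    if msy < sy + ymargin:   # step S55: protrudes out in the upward direction
        yflag = 1            # step S57
    if mey > ey - ymargin:   # step S59: protrudes out in the downward direction
        yflag += 2           # step S61
    xflag = 0
    if msx < sx + xmargin:   # step S63: protrudes out in the left direction
        xflag = 1            # step S65
    if mex > ex - xmargin:   # step S67: protrudes out in the right direction
        xflag += 2           # step S69
    return xflag, yflag
```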
Moving to explanation of

The caption movement calculation unit 19 determines whether or not yflag is 0 (step S71). When it is determined that yflag is 0 (step S71: YES route), the processing moves to the step S81.
On the other hand, when it is determined that yflag is not 0 (step S71: NO route), the caption movement calculation unit 19 determines whether or not yflag is 1 (step S73). When it is determined that yflag is 1 (step S73: YES route), the caption movement calculation unit 19 calculates “sy−msy+ymargin”, and sets the calculation result to the movement amount gy in the y-axis direction (step S75). When the movement amount gy is a positive value, the value represents a movement amount in the downward direction, and when the movement amount gy is a negative value, the value represents a movement amount in the upward direction. As described above, yflag becomes 1 when the caption character portion protrudes out in only the upward direction, and the movement amount gy that is set at the step S75 becomes a positive value. After that, the processing moves to step S83.
On the other hand, when it is determined that the yflag is not 1 (step S73: NO route), the caption movement calculation unit 19 determines whether or not yflag is 2 (step S77). When it is determined that yflag is 2 (step S77: YES route), the caption movement calculation unit 19 calculates “ey−mey−ymargin”, and sets the calculation result to the movement amount gy in the y-axis direction (step S79). As described above, yflag is 2 when the caption character portion protrudes out only in the downward direction, and the movement amount gy that is calculated at the step S79 is a negative value. After that, the processing moves to the step S83.
On the other hand, when it is determined that yflag is not 2 (step S77: NO route), in other words, when yflag is 3, the caption movement calculation unit 19 sets “0” to the movement amount gy in the y-axis direction (step S81). Even when yflag is determined to be 0 at the step S71, the processing of this step is carried out. As described above, yflag is 3 when the caption character portion protrudes out in both the upward and downward direction. On the other hand, yflag is 0 when the caption character portion does not protrude out in either the upward direction or downward direction. In these cases, because it is meaningless to move the caption character portion in the y-axis direction, “0” is set to the movement amount gy.
The caption movement calculation unit 19 determines whether or not xflag is 0 (step S83). When it is determined that xflag is 0 (step S83: YES route), the processing moves to step S93.
On the other hand, when it is determined that xflag is not 0 (step S83: NO route), the caption movement calculation unit 19 determines whether or not xflag is 1 (step S85). When it is determined that xflag is 1 (step S85: YES route), the caption movement calculation unit 19 calculates “sx−msx+xmargin”, and sets the calculation result to the movement amount gx in the x-axis direction (step S87). When the movement amount gx is a positive value, the value represents a movement amount in the right direction, and when the movement amount gx is a negative value, the value represents a movement amount in the left direction. As described above, xflag is 1 when the caption character portion protrudes out only in the left direction, and the movement amount gx, which is set at the step S87, is a positive value. After that, the processing moves to the processing of step S95 (
On the other hand, when it is determined that xflag is not 1 (step S85: NO route), the caption movement calculation unit 19 determines whether or not xflag is 2 (step S89). When it is determined that xflag is 2 (step S89: YES route), the caption movement calculation unit 19 calculates “ex−mex−xmargin”, and sets the calculation result to the movement amount gx in the x-axis direction (step S91). As described above, xflag is 2 when the caption character portion protrudes out only in the right direction, and the movement amount gx, which is calculated at the step S91, is a negative value. After that, processing moves to the processing of step S95 (
On the other hand, when it is determined that xflag is not 2 (step S89: NO route), in other words, when xflag is 3, the caption movement calculation unit 19 sets “0” to the movement amount gx in the x-axis direction (step S93). The processing of this step is also carried out when it is determined at the step S83 that xflag is 0. As described above, xflag is 3 when the caption character portion protrudes out in both the left direction and right direction. On the other hand, xflag is 0 when the caption character portion does not protrude in either the left direction or right direction. In these cases, because movement in the x-axis direction is meaningless, “0” is set to the movement amount gx.
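The branching of the steps S71 to S93 reduces to the following sketch, using the same variable names as the text:

```python
def movement_amounts(xflag, yflag, msx, msy, mex, mey, sx, sy, ex, ey,
                     xmargin, ymargin):
    """Movement amounts that pull the caption character portion inside the
    margins. Positive gy moves down, negative gy moves up; positive gx moves
    right, negative gx moves left."""
    if yflag == 1:                # protrudes upward only
        gy = sy - msy + ymargin   # step S75: positive value
    elif yflag == 2:              # protrudes downward only
        gy = ey - mey - ymargin   # step S79: negative value
    else:                         # yflag is 0 or 3: moving is meaningless
        gy = 0                    # step S81
    if xflag == 1:                # protrudes to the left only
        gx = sx - msx + xmargin   # step S87: positive value
    elif xflag == 2:              # protrudes to the right only
        gx = ex - mex - xmargin   # step S91: negative value
    else:                         # xflag is 0 or 3
        gx = 0                    # step S93
    return gx, gy
```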
Moving to explanation of

The caption movement calculation unit 19 determines whether or not the condition that gy is less than “old_gy+th_y”, and gy is greater than “old_gy−th_y” is satisfied (step S95). Here, old_gy is the movement amount of the previous frame image in the y-axis direction. In other words, at the step S95, the caption movement calculation unit 19 determines whether or not the difference between gy and old_gy is less than a predetermined threshold value th_y. When the condition is satisfied (step S95: YES route), the caption movement calculation unit 19 sets old_gy to gy (step S97). In other words, in order to prevent the caption after the movement from flickering, the movement amount old_gy of the previous frame image is used as the movement amount gy. The processing then moves to the processing of step S101.
On the other hand, when the condition that gy is less than old_gy+th_y, and gy is greater than old_gy−th_y is not satisfied (step S95: NO route), the caption movement calculation unit 19 sets gy to old_gy (step S99). In other words, for the processing of the next frame image (in other words, the frame image at time (t+1)), gy is stored as old_gy. The processing then moves to the processing of step S101.
The caption movement calculation unit 19 then determines whether or not the condition that gx is less than “old_gx+th_x”, and gx is greater than “old_gx−th_x” is satisfied (step S101). Here, old_gx is the movement amount of the previous frame image in the x-axis direction. In other words, at the step S101, the caption movement calculation unit 19 determines whether or not the difference between gx and old_gx is less than a predetermined threshold value th_x. When the condition that gx is less than “old_gx+th_x”, and gx is greater than “old_gx−th_x” is satisfied (step S101: YES route), the caption movement calculation unit 19 sets old_gx to gx (step S103). In this embodiment, in order to prevent the caption after the movement from flickering, when the difference between the movement amount gx and the movement amount old_gx of the previous frame image is less than the predetermined threshold value th_x, the movement amount old_gx of the previous frame image is used as the movement amount gx. The caption movement calculation processing then ends, and the processing returns to the calling-source processing.
On the other hand, when the condition that gx is less than “old_gx+th_x”, and gx is greater than “old_gx−th_x” is not satisfied (step S101: NO route), the caption movement calculation unit 19 sets gx to old_gx (step S105). In other words, for the processing of the next frame image, gx is stored as old_gx. The caption movement calculation processing then ends, and the processing returns to the calling-source processing.
By performing the processing such as described above, it is possible to calculate the movement amount in the x-axis direction and y-axis direction. Moreover, when the difference between the calculated movement amount and the movement amount of the previous frame image is small, the movement amount of the previous frame image is used. Therefore, it is possible to prevent the display of the caption character portion after the movement from flickering.
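The anti-flicker logic of the steps S95 to S105 can be sketched as follows, assuming the previous frame's movement amounts old_gx and old_gy are kept in a small state dictionary between frames (the embodiment does not specify how they are stored):

```python
def smooth_movement(gx, gy, state, th_x, th_y):
    """Reuse the previous frame's movement amount when the change is small,
    so that the moved caption does not flicker between frames."""
    if abs(gy - state["old_gy"]) < th_y:
        gy = state["old_gy"]       # step S97: keep the previous amount
    else:
        state["old_gy"] = gy       # step S99: store gy for the next frame
    if abs(gx - state["old_gx"]) < th_x:
        gx = state["old_gx"]       # step S103: keep the previous amount
    else:
        state["old_gx"] = gx       # step S105: store gx for the next frame
    return gx, gy
```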
Returning to the explanation of

It is then determined whether or not the caption character portion is to be reformed (step S11). When it is determined that the caption character portion is not reformed (step S11: NO route), the processing of the step S13 is skipped.
On the other hand, when it is determined that the caption character portion is reformed (step S11: YES route), the caption generator 15 uses the mask image m that is stored in the mask image storage unit 11, and the font dictionary that is stored in the font dictionary storage unit 13 to carry out a caption generation processing (step S13). As illustrated in
First, the caption generator 15 uses the mask image m to carry out a character recognition processing on the caption character portion, and acquires the circumscribed rectangle and character code of each character (
The caption generator 15 then identifies an unprocessed character from among characters that are included in the caption character portion (step S113). The caption generator 15 then acquires the character image f of a character that corresponds to the character code of the identified character from the font dictionary, and expands or reduces the size of the acquired character image f so that the size of the character image f matches with the size of the circumscribed rectangle of the identified character (step S115). An example of the character image f is illustrated in
The caption generator 15 then sets “0” to the counter i (step S117). The caption generator 15 also sets “0” to the counter j (step S119). Then, the processing moves to the processing of step S121 (
Moving to explanation of
On the other hand, when it is determined that counter j is equal to or greater than “cex−csx” (step S127: NO route), the caption generator 15 increments counter i by “1” (step S129), and determines whether or not the counter i is less than “cey−csy” (step S131). When it is determined that the counter i is less than “cey−csy” (step S131: YES route), the processing returns to the processing of the step S119 (
For example, when the processing such as described above is carried out on part of the mask image m using the character image f illustrated in
On the other hand, when it is determined that i is equal to or greater than “cey−csy” (step S131: NO route), the caption generator 15 updates the mask image m (step S133). In this processing, the pixel value of each pixel whose pixel value is “1” is changed to “0”, and the pixel value of each pixel whose pixel value is “2” or “3” is changed to “1”. For example, the processing of this step is carried out for the mask image m illustrated in
The caption generator 15 then determines whether or not the processing is complete for all characters (step S135). When the processing is not complete for all characters (step S135: NO route), the processing returns to the processing of step S113 (
By carrying out the processing as described above, it is possible to display captions in an output image with characters, which are easy to view as will be described below, even when bleeding of the characters or the like occur due to the expansion of the video image, for example.
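The caption generation processing can be sketched as below. The character recognition step (S111) and the glyph lookup are taken as inputs here; the marking scheme, in which a glyph pixel adds 2 to the mask value so that 0 becomes 2 and 1 becomes 3, is an assumption that is consistent with the remapping described for the step S133.

```python
import numpy as np


def resize_glyph(f, w, h):
    """Nearest-neighbour resize of a binary glyph to w x h (illustrative helper,
    standing in for the expansion or reduction of the step S115)."""
    ys = np.arange(h) * f.shape[0] // h
    xs = np.arange(w) * f.shape[1] // w
    return f[np.ix_(ys, xs)]


def redraw_caption_with_font(m, chars, font_images):
    """Replace the extracted caption pixels with clean font glyphs.
    chars: list of (character code, (csx, csy, cex, cey)) from recognition (step S111).
    font_images: mapping from character code to a binary glyph array (font dictionary)."""
    m = m.copy()
    for code, (csx, csy, cex, cey) in chars:
        f = resize_glyph(font_images[code], cex - csx, cey - csy)  # step S115
        for i in range(cey - csy):            # steps S117 to S131: scan the rectangle
            for j in range(cex - csx):
                if f[i, j]:
                    m[csy + i, csx + j] += 2  # assumed marking: 0 -> 2, 1 -> 3
    # Step S133: old caption pixels (1) are cleared, glyph pixels (2 or 3) become 1.
    m[m == 1] = 0
    m[m >= 2] = 1
    return m
```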
Returning to the explanation of
First, the caption drawing unit 21 generates a transformed mask image m′, which has the same size as that of the output image O and stores the generated image into the output image storage unit 23. At this point, the value of each of the pixels in the output image O and the value of each of the pixels in the transformed mask image m′ are all “0”. The caption drawing unit 21 then sets a counter i to “0” (
The caption drawing unit 21 determines whether or not m(j, i, t) is “1” (step S145). When it is determined that m(j, i, t) is “1” (step S145: YES route), the caption drawing unit 21 sets the average color μ to M(j+gx, i+gy, t) (step S147). In other words, the color of the pixel at the movement destination in the expanded image M is replaced with the average color μ. The pixel at the movement destination is identified by moving from the current position by an amount gx in the x-axis direction, and an amount gy in the y-axis direction.
The caption drawing unit 21 then sets “1” to m′(j+gx−sx, i+gy−sy, t) (step S149). In other words, “1” is set to the pixels at the movement destination in the transformed mask image m′. Here, the reason for subtracting sx and sy is that, as illustrated in
On the other hand, when it is determined that m(j, i, t) is not “1” (step S145: NO route), the processing of the steps S147 and S149 is skipped, and the processing moves to the processing of step S151.
Then, the caption drawing unit 21 increments the counter j by “1” (step S151), and determines whether or not the counter j is less than mx (step S153). When it is determined that counter j is less than mx (step S153: YES route), the processing returns to the processing of step S145, and the processing from the step S145 to step S153 is repeated.
However, when it is determined that counter j is equal to or greater than mx (step S153: NO route), the caption drawing unit 21 increments the counter i by “1” (step S155), and determines whether or not the counter i is less than my (step S157). When it is determined that counter i is less than my (step S157: YES route), the processing returns to the processing of step S143, and the processing from the step S143 to step S157 is repeated.
On the other hand, when it is determined that i is equal to or greater than my (step S157: NO route), the caption drawing unit 21 copies the values of the pixels in the display area of the expanded image M to the output image O (step S159). An example of the output image O is illustrated in
By performing the processing such as described above, it is possible to generate the output image O in which only the pixels belonging to the caption character portion are moved. In other words, it is possible to display captions while keeping any effect on the image to be originally displayed to the minimum. When there are pixels for which m(j, i, t)=1 in the display area, the caption character portion before the movement is not displayed in the output image O, because M(j, i, t) is replaced with the average color of the pixels surrounding those pixels.
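A vectorized sketch of the caption drawing processing (steps S141 to S159) follows. It assumes the display area is the rectangle from (sx, sy) to (ex, ey) in the expanded image, and it omits the filling-in of the original caption position with the surrounding average color that is mentioned above.

```python
import numpy as np


def draw_caption(M, m, gx, gy, mu, sx, sy, ex, ey):
    """Move only the caption pixels by (gx, gy) and cut out the display area.
    Returns the output image O and the transformed mask image m_prime, which is
    expressed in display-area coordinates (hence the subtraction of sx and sy)."""
    M = M.copy()
    m_prime = np.zeros((ey - sy, ex - sx), dtype=np.uint8)
    ys, xs = np.nonzero(m == 1)
    # Steps S147 and S149; the movement calculation guarantees that the
    # destinations fall inside the display area.
    M[ys + gy, xs + gx] = mu
    m_prime[ys + gy - sy, xs + gx - sx] = 1
    O = M[sy:ey, sx:ex].copy()   # step S159: copy the display area to the output
    return O, m_prime
```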
Returning to the explanation of
First, the caption processing unit 25 reads out the transformed mask image m′ from the output image storage unit 23. Then, for each of the pixels for which m′(x, y, t)=0, the caption processing unit 25 calculates the shortest distance from that pixel to the pixels for which m′(x, y, t)=1 (
The shortest distance may be calculated using another method. For example, it is assumed that the processing of step S161 is carried out for the transformed mask image m′ illustrated in
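The distance calculation of the step S161 is a standard distance transform. A sketch using scipy is shown below; marking the caption pixels themselves with −1 is an assumption that makes the later test “d(j, i) is equal to or greater than 0” exclude the caption character portion itself.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt


def caption_distance(m_prime):
    """For each pixel with m_prime == 0, the shortest (Euclidean) distance to a
    pixel with m_prime == 1; caption pixels are marked with -1 (assumption)."""
    d = distance_transform_edt(m_prime == 0)  # distance to the nearest 1-pixel
    d[m_prime == 1] = -1
    return d
```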
The caption processing unit 25 sets a counter i to “0” (step S163). The caption processing unit 25 also sets a counter j to “0” (step S165). The caption processing unit 25 then determines whether or not the condition that d(j, i) is less than a predetermined threshold value Th_d and d(j, i) is equal to or greater than “0” is satisfied (step S167). When it is determined that the condition that d(j, i) is less than the predetermined threshold value Th_d and d(j, i) is equal to or greater than “0” is not satisfied (step S167: NO route), the processing from step S169 to step S175 is skipped, and the processing moves to the processing of step S177 (
On the other hand, when it is determined that the condition that d(j, i) is less than a predetermined threshold value Th_d and d(j, i) is equal to or greater than “0” is satisfied (step S167: YES route), the caption processing unit 25 calculates the color difference degree s (step S169). When the color is expressed by RGB, for example, the color difference degree s can be calculated as s=|r−ru|+|g−gu|+|b−bu|. Here, r, g and b represent the color components of O(j, i, t), and ru, gu and bu represent the color components of the average color μ.
The caption processing unit 25 then determines whether or not the color difference degree s is less than a predetermined reference value (step S171). When it is determined that the color difference degree s is equal to or greater than the predetermined reference value (step S171: NO route), the processing of step S173 and step S175 is skipped, and the processing moves to the processing of step S177 (
On the other hand, when it is determined that the color difference degree s is less than the predetermined reference value (step S171: YES route), the caption processing unit 25 generates a processed color c (step S173), and sets that processed color c to O(j, i, t) (step S175). For example, when the processed color c is taken to be (rc, gc, bc), each color component can be calculated by rc=mod(r+128, 255), gc=mod(g+128, 255) and bc=mod(b+128, 255). Thus, it is possible to replace the color of O(j, i, t) with the opposite color of O(j, i, t) (in other words, a color whose RGB values each differ by 128). In addition, each color component may be calculated by rc=mod(ru+128, 255), gc=mod(gu+128, 255) and bc=mod(bu+128, 255). Thus, it is possible to replace the color of O(j, i, t) with the opposite color of the average color μ. After that, the processing moves to the processing of step S177 (
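The per-pixel test of the steps S169 to S175 follows the formulas given above directly:

```python
def border_pixel(color, mu, reference):
    """If the pixel color is too close to the average caption color mu, return
    the processed (roughly opposite) color; otherwise return the color unchanged."""
    r, g, b = (int(v) for v in color)
    ru, gu, bu = (int(v) for v in mu)
    s = abs(r - ru) + abs(g - gu) + abs(b - bu)   # step S169: color difference degree
    if s < reference:                             # step S171
        # steps S173 and S175: rc = mod(r + 128, 255), etc.
        return ((r + 128) % 255, (g + 128) % 255, (b + 128) % 255)
    return (r, g, b)
```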
Moving to an explanation of

The caption processing unit 25 increments the counter j by “1” (step S177), and determines whether or not the counter j is less than mx′ (step S179). Here, mx′ is the width of the output image O. When it is determined that the counter j is less than mx′ (step S179: YES route), the processing returns to the processing of the step S167, and the processing from the step S167 is repeated.
On the other hand, when it is determined that the counter j is equal to or greater than mx′ (step S179: NO route), the caption processing unit 25 increments the counter i by “1” (step S181), and determines whether or not the counter i is less than my′ (step S183). Here, my′ is the height of the output image O. When it is determined that the counter i is less than my′ (step S183: YES route), the processing returns to the processing of step S165 (
On the other hand, when it is determined that the counter i is equal to or greater than my′ (step S183: NO route), the caption processing ends, and the processing returns to the calling-source processing. For example, when the surrounding pixels having a distance of 2 or less are transformed according to the distance-transformed image d illustrated in
By performing a processing such as described above, a border is formed for each character of the caption character portion using a color that is different from the character color. Therefore, the caption after the movement becomes easier to see.
Returning to the explanation of
Although one embodiment of this technique was explained, this technique is not limited to this. For example, the functional block diagram of the aforementioned caption movement processing apparatus does not always correspond to an actual program module configuration. Furthermore, in the processing flows, as long as the processing results do not change, the order of the steps may be exchanged. Moreover, the steps may be executed in parallel.
Although an example was explained of calculating the movement amount so that all of the pixels that belong to a caption character portion fit within a display area, it is not absolutely necessary that all of those pixels fit within the display area. For example, as long as the caption character portion is recognizable even though some of the pixels that belong to it are missing, it is possible to calculate the movement amount so that the main portion of the pixels, excluding some pixels, fits within the display area.
Moreover, although an example was explained above in which the caption generation processing was performed after the caption movement calculation processing, the caption generation processing can be performed first. In this case, the movement amount can be calculated based on the reformed caption character portion.
Incidentally, it is possible to create a program for causing hardware such as a computer to realize the caption movement processing apparatus, and such a program is stored in a storage medium or storage device such as a flexible disk, CD-ROM, magneto-optical disk, semiconductor memory, hard disk or the like. Moreover, the intermediate processing results are stored in a storage device such as a main memory.
This embodiment is outlined as follows:
A caption movement processing apparatus of the embodiment includes: a caption extraction unit to identify first pixels belonging to a first portion regarded as a character string that is inserted with an overlap on a background in an expanded image generated by expanding a specific frame image included in video image data; a caption movement calculation unit to determine whether or not any one of the first pixels is out of a display area that is a part of the expanded image and to calculate a movement amount for moving the first portion so as to make all of the first pixels or at least a main portion of the first pixels accommodated in the display area when it is determined that any one of the first pixels is out of the display area; and a caption drawing unit to identify a movement destination pixel for each of the first pixels or each of pixels belonging to the character string represented by a predetermined font, according to the calculated movement amount, and to replace a color of the movement destination pixel with a predetermined color.
Thus, even when the character string, which is inserted as a caption, protrudes out of the display area along with the expansion of the video image, for example, it becomes possible to display the character string within the display area. Incidentally, because only the pixels included in the character string are replaced, the influence on the video image to be originally displayed is minimized.
The caption movement processing apparatus may further include: a caption processing unit to replace a color of a second pixel, whose distance to the movement destination pixel is equal to or less than a predetermined distance, among pixels other than the movement destination pixels, with a color different from the color of the movement destination pixel. Thus, each character included in the character string is fringed with a color different from the color of the character. Therefore, it becomes easy to see the character string.
The caption movement processing apparatus may further include: a font storage unit to store, for each character code, a character image of a character, which is represented by a predetermined font; and a caption generation unit to obtain a character code of each character included in the character string by carrying out a character recognition processing for the first portion, to extract a character image corresponding to the obtained character code for each character included in the character string from the font storage unit, and to replace data of the character included in the character string with the extracted character image. Thus, even when a blur of the character occurs due to the expansion of the video image, for example, it becomes possible to display the character string with characters that are easy to see.
Moreover, the caption movement calculation unit may include: a unit to calculate a difference between a second movement amount relating to a frame image immediately before the specific frame image and the calculated movement amount, and to determine whether or not the calculated difference is less than a predetermined value; and a unit to replace the calculated movement amount with the second movement amount when it is determined that the calculated difference is less than the predetermined value. Thus, when the difference is less than the predetermined value, the movement amount relating to the frame image immediately before the specific frame image is used. Therefore, it becomes possible to prevent the display of the character string after the movement from flickering.
Furthermore, the caption movement processing apparatus may further include: a unit to calculate an average color of the first pixels. Then, the caption drawing unit may replace a color of the movement destination pixel with the average color.
The caption processing unit may include: a unit to calculate a difference degree between a color of the second pixel and a color of the movement destination pixel for each second pixel; and a unit to replace the color of the second pixel whose difference degree is less than a predetermined reference with a color different from the color of the movement destination pixel.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiments of the present inventions have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
This application is a continuing application, filed under 35 U.S.C. section 111(a), of International Application PCT/JP2008/070608, filed Nov. 12, 2008.
Parent: PCT/JP2008/070608, Nov 2008, US
Child: 13101214, US