Field of the Invention
The present invention relates to information processing apparatuses and methods, and programs, and more particularly, to an information processing apparatus and method, and a program in which it is possible to provide information for presenting characters appearing in moving picture content and positions of the characters to users such that the users can easily understand and recognize them.
Description of the Related Art
In most cases, moving picture content includes various characters. Accordingly, there is an increasing demand among users to understand the type of content from its characters, or to search for scenes of a specific character from among the various characters and play them back.
For handling moving picture content including various characters, as discussed above, the following techniques, for example, disclosed in U.S. Pat. No. 3,315,888 and Japanese Unexamined Patent Application Publication No. 2004-363775 are known.
In the known techniques disclosed in those publications, however, it is difficult to respond to the above-described demand sufficiently. With a mere combination of known techniques disclosed in the above-described publications, it is difficult to present characters appearing in moving picture content and positions of the characters to users such that the users can easily understand and recognize them.
It is thus desirable to provide information for presenting characters appearing in moving picture content and positions of the characters to users such that the users can easily understand and recognize them.
According to an embodiment of the present invention, there is provided an information processing apparatus that may generate resource information used for playing back image content that can be divided into a plurality of zones. The information processing apparatus may include image generating means for generating a still image from each of the plurality of zones, face processing means for setting each of the plurality of zones to be a target zone and for determining whether a face of a specific character which is determined to continuously appear in at least one zone before the target zone is contained in the still image generated from the target zone by the image generating means, and information generating means for specifying, on the basis of a determination result obtained for each of the plurality of zones by the face processing means, at least one zone in which the face of the specific character continuously appears as a face zone, and for generating information concerning the face zone as one item of the resource information.
The information generating means may generate, as another item of the resource information, a face thumbnail image, which serves as an index of the face zone, on the basis of the face of the specific character contained in a predetermined one of at least one still image generated by the image generating means from each of at least one zone included in the face zone.
If at least one zone which is determined to contain the face of the specific character by the face processing means is continuous, the information generating means may specify the at least one continuous zone as the face zone.
If a third continuous zone including at least one zone which is determined not to contain the face of the specific character by the face processing means is present between a first continuous zone and a second continuous zone, each including at least one zone which is determined to contain the face of the specific character by the face processing means, and if a length of the third continuous zone is smaller than or equal to a threshold, the information generating means may specify the first continuous zone, the third continuous zone, and the second continuous zone as one face zone, and, if the length of the third continuous zone exceeds the threshold, the information generating means may specify the first continuous zone and the second continuous zone as different face zones.
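The merging rule described above may be sketched, for illustration only, as follows. The function name, the boolean per-zone representation, and the threshold parameter are assumptions introduced for this sketch, not part of the claimed apparatus.

```python
# Illustrative sketch of the face-zone merging rule: runs of zones in
# which the specific character's face is detected are merged into one
# face zone when the gap between them is at most `gap_threshold` zones,
# and kept as separate face zones when the gap exceeds it.

def merge_face_zones(detected, gap_threshold):
    """detected: list of booleans, one per zone (True = face detected).
    Returns face zones as (start, end) zone-index pairs, inclusive."""
    zones = []
    start = None          # start index of the current face zone
    last_hit = None       # index of the most recent detection
    for i, hit in enumerate(detected):
        if hit:
            if start is None:
                start = i
            elif i - last_hit - 1 > gap_threshold:
                # The gap (the "third continuous zone") exceeds the
                # threshold: close the first face zone and start another.
                zones.append((start, last_hit))
                start = i
            last_hit = i
    if start is not None:
        zones.append((start, last_hit))
    return zones
```

With a gap threshold of one zone, a single undetected zone between two detections is absorbed into one face zone; with a threshold of zero, the same input yields two separate face zones.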
The face processing means may attempt to detect a face of a specific character from the still image generated from the target zone, and if a face is detected, the face processing means may generate a face image by extracting an area containing the detected face from the still image generated from the target zone. Then, the face processing means may set the face image generated from the target zone to be a first comparison subject and may set the latest face image selected from among face images of the specific character generated from zones before the target zone to be a second comparison subject, and may compare the first comparison subject with the second comparison subject, and may determine on the basis of comparison results whether the specific character is contained in the still image generated from the target zone.
The face processing means may include a table representing information concerning the specific character and may list information concerning whether the specific character is contained in the still image generated from the target zone in the table. The information generating means may specify the face zone on the basis of the information of the table listed by the face processing means.
Information concerning each of at least one specific character may be listed in the table. If at least one face image is generated as a result of detecting at least one face from the still image generated from the target zone, the face processing means may set each of at least one face image generated from the still image generated from the target zone to be the first comparison subject and may set each of at least one face image of at least one specific character contained in the table to be the second comparison subject. The face processing means may perform matching processing on all combinations of at least one face image in the first comparison subject and at least one face image in the second comparison subject to calculate scores as a result of the matching processing, and may select at least one matching pair which is determined to be a combination of face images of an identical person from all the combinations. The face processing means may determine that, among at least one specific character contained in the table, a specific character of each of at least one matching pair corresponding to the second comparison subject is contained in the still image generated from the target zone, and may determine that other characters are not contained in the still image, and may list determination results in the table.
Concerning a combination which is not selected as a matching pair, the face processing means may add a character corresponding to a face image contained in the first comparison subject to the table as a new character.
The face processing means may exclude, among all the combinations, a combination whose score is smaller than or equal to a predetermined threshold from selection candidates of the matching pair.
If a zone in which a first comparison subject is generated and a zone in which a second comparison subject is generated are separated from each other by a predetermined interval or greater, the face processing means may exclude a combination including the first comparison subject and the second comparison subject from selection candidates of the matching pair.
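The matching processing described above, with both the score-threshold exclusion and the zone-interval exclusion, may be sketched as follows. The score function, threshold, interval limit, and the greedy one-match-per-face selection are assumptions made for this sketch.

```python
# Illustrative sketch: score all combinations of face images in the
# first comparison subject (from the target zone) and the second
# comparison subject (from the table), exclude low-scoring or
# temporally distant combinations, and select matching pairs.

def select_matching_pairs(new_faces, table_faces, score_fn,
                          score_threshold, max_interval):
    """new_faces:   list of (zone_index, face_image) from the target zone
       table_faces: list of (zone_index, face_image), one per character
       Returns matching pairs as (new_index, table_index) tuples."""
    candidates = []
    for i, (zi, img_a) in enumerate(new_faces):
        for j, (zj, img_b) in enumerate(table_faces):
            # Exclude combinations whose zones are separated by the
            # predetermined interval or greater.
            if abs(zi - zj) >= max_interval:
                continue
            score = score_fn(img_a, img_b)
            # Exclude combinations whose score is at or below threshold.
            if score <= score_threshold:
                continue
            candidates.append((score, i, j))
    # Greedily take the best-scoring candidates, at most one match
    # per face image on each side.
    candidates.sort(reverse=True)
    used_i, used_j, pairs = set(), set(), []
    for score, i, j in candidates:
        if i not in used_i and j not in used_j:
            pairs.append((i, j))
            used_i.add(i)
            used_j.add(j)
    return pairs
```

Face images left unmatched in `new_faces` would then be added to the table as new characters, per the preceding paragraphs.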
According to another embodiment of the present invention, there is provided an information processing method for an information processing apparatus that may generate resource information used for playing back image content that can be divided into a plurality of zones. The information processing method may include setting each of the plurality of zones to be a target zone, generating a still image from the target zone, determining whether a face of a specific character which is determined to continuously appear in at least one zone before the target zone is contained in the still image generated from the target zone, and specifying, on the basis of a determination result obtained for each of the plurality of zones, at least one zone in which the face of the specific character continuously appears as a face zone and generating information concerning the face zone as one item of the resource information.
According to another embodiment of the present invention, there is provided a program corresponding to the above-described information processing method.
According to the information processing apparatus and method and program, resource information used for playing back image content that can be divided into a plurality of zones may be generated as follows. Each of the plurality of zones is set to be a target zone, and a still image is generated from the target zone. It is determined whether a face of a specific character which is determined to continuously appear in at least one zone before the target zone is contained in the still image generated from the target zone. On the basis of a determination result obtained for each of the plurality of zones, at least one zone in which the face of the specific character continuously appears is specified as a face zone and information concerning the face zone is generated as one item of the resource information.
According to an embodiment of the present invention, resource information can be provided as information for presenting characters appearing in moving picture content and positions of the characters to users. In particular, it is possible to provide resource information for presenting characters appearing in moving picture content and positions of the characters to users so that the users can easily understand and recognize them.
Before describing an embodiment of the present invention, the correspondence between the features of the claims and the embodiment disclosed in the present invention is discussed below. This description is intended to assure that the embodiment supporting the claimed invention is described in this specification. Thus, even if an element in the following embodiment is not described as relating to a certain feature of the present invention, that does not necessarily mean that the element does not relate to that feature of the claims. Conversely, even if an element is described herein as relating to a certain feature of the claims, that does not necessarily mean that the element does not relate to other features of the claims.
An information processing apparatus (e.g., an image recording apparatus 401 shown in
The information generating means may generate, as another item of the resource information, a face thumbnail image (e.g., face thumbnail images 522-A through 522-E shown in
If at least one zone which is determined to contain the face of the specific character by the face processing means is continuous (e.g., in the example shown in
If a third continuous zone including at least one zone which is determined not to contain the face of the specific character by the face processing means is present between a first continuous zone and a second continuous zone, each including at least one zone which is determined to contain the face of the specific character by the face processing means, and if a length of the third continuous zone is smaller than or equal to a threshold (e.g., concerning person a in
The face processing means may attempt to detect a face of a specific character from the still image generated from the target zone (e.g., executing step S104 in
The face processing means may include a table representing information concerning the specific character (e.g., a face image table including specific characters A through H shown in
Information concerning each of at least one specific character (e.g., in the example shown in
Concerning a combination which is not selected as a matching pair, the face processing means may add a character corresponding to a face image contained in the first comparison subject to the table as a new character (e.g., in the example in
The face processing means may exclude, among all the combinations, a combination whose score is smaller than or equal to a predetermined threshold from selection candidates of the matching pair (e.g., in the example in
If a zone in which a first comparison subject is generated and a zone in which a second comparison subject is generated are separated from each other by a predetermined interval or greater (e.g., in the example in
According to another embodiment of the present invention, there is provided an information processing method (e.g., a method implemented by the resource data generating/recording processing shown in
According to another embodiment of the present invention, there is provided a program corresponding to the above-described information processing method, which may be executed by, for example, a personal computer shown in
For easy understanding of the present invention, reference is first given to
Content encompasses products created by human creative activity, and among such content, content that includes at least images is referred to as “image content”. Image content can be largely divided into moving picture content, which mainly includes moving pictures, and still image content, which mainly includes still images. In this specification, so-called “content data”, namely, products created by human creative activity and converted into a form that can be processed by a machine, such as electric signals or data recorded on recording media, is also referred to as “content”.
In the following description, content is recorded on a recording medium on a file-by-file basis, and the number of content pieces is also indicated by the number of files. That is, one piece of content is content that can be recorded on a recording medium as a single file.
The following playback instruction operation for giving an instruction to play back a desired piece of content from among a plurality of moving picture content pieces is known. A list of thumbnail images, which serves as an index of a plurality of moving picture content pieces, is presented to a user. The user then selects the thumbnail image corresponding to a desired piece of moving picture content from the list.
In this embodiment of the present invention, in addition to the above-described playback instruction operation, the playback instruction operation shown in
In the example shown in
As in the zone 11, a zone in which a specific character continuously appears in moving pictures is hereinafter referred to as a “face zone”. If the image of a face is represented by a thumbnail image, as in the thumbnail image 21, the thumbnail image is referred to as a “face thumbnail image”.
In addition to the face zone 11, the moving picture content 1 includes face zones 12 through 14, and face thumbnail images 22 through 24 are associated with the face zones 12 through 14, respectively.
In the above-configured moving picture content 1, the following playback instruction operation can be implemented. A list of the face thumbnail images 21 through 24 is presented to the user as a list of the indexes of the face zones 11 through 14 of the moving picture content 1. The user then selects the face thumbnail image corresponding to a desired face zone from the list. Details of this playback instruction operation and processing performed by an apparatus in response to this instruction operation are discussed below with reference to
In other words, in order to present characters and positions thereof appearing in moving picture content to a user such that the user can easily understand and recognize them, a face zone is introduced as one of the measures to indicate the position at which the character appears, and a face thumbnail image including the face of the character appearing in the face zone is introduced as the index of the face zone. Accordingly, the use of face thumbnail images and face zones makes it possible to respond to the above-described demand that the user wishes to understand the type of content from characters or to search for scenes of a specific character from among various characters and to play them back.
An overview of a technique for specifying face zones and a technique for generating face thumbnail images are discussed below with reference to
In the example shown in
In this case, the apparatus divides the moving picture content 1 into predetermined base units and generates one still image from each base unit. The base unit for dividing the moving picture content 1 is not particularly restricted. Hereinafter, the base unit including one GOP is referred to as the “1GOP unit”, and the base unit including k GOPs is referred to as the “kGOP unit”; in another example discussed below, one GOP is used as the base unit. In this example, two GOPs are used as the base unit, i.e., one still image is generated from each 2GOP unit.
The approach to generating still images is not particularly restricted, and, in the example shown in
In
In this case, for example, a still image is generated from a B-picture. More specifically, the apparatus reads the I-picture as a reference picture and then generates a still image from the B-picture on the basis of the reference I-picture. In the example shown in
Alternatively, the apparatus may sequentially read the pictures from the head of the GOP, and when reading the I-picture for the first time, it can generate a still image from the read I-picture. In the example shown in
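The second approach described above, i.e., reading pictures from the head of the GOP and generating the still image from the first I-picture encountered, may be sketched as follows. The picture objects and the `decode` callable are illustrative stand-ins, not an actual MPEG decoder interface.

```python
# Illustrative sketch: scan a GOP in stream order and decode the first
# I-picture found into a still image. An I-picture is self-contained,
# so it can be decoded without any reference picture.

def still_from_gop(gop, decode):
    """gop: list of pictures in stream order, each with a .type
       attribute ('I', 'P', or 'B'); decode(picture, reference)
       returns a decoded still image."""
    for pic in gop:
        if pic.type == "I":
            return decode(pic, None)   # no reference picture needed
    return None  # no I-picture in this GOP (unusual for MPEG streams)
```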
Referring back to
It is now assumed that the third 2GOP unit from the left in
In this case, if no face is detected from the 2GOP unit immediately before the target unit, i.e., the second 2GOP unit in the example shown in
If, for example, the base unit for generating a still image is the 2GOP unit or a greater number of GOPs, there is a possibility that the face of the same character may be contained in the second and subsequent GOPs of the previous unit. Thus, instead of determining the head position of the third 2GOP unit as the start position of the face zone 11, the apparatus determines the head position of the third 2GOP unit merely as a candidate for the start position. The apparatus then makes a further determination as to whether the face of the same character is contained in several GOPs before the candidate. If no GOP containing the face of the same character is found, the apparatus sets the candidate as the start position of the face zone 11. If such a GOP is found, the apparatus sets the head position of the first GOP containing the face of the same character as the start position of the face zone 11.
After setting the start position of the face zone 11, the apparatus sequentially sets every 2GOP unit as the target unit, generates a still image from each target unit, and then extracts a face image from the still image. This procedure is repeatedly performed.
In the example in
Similarly, the apparatus sets the fifth 2GOP unit as the target unit, and generates a still image 31-3 and extracts a face image 32-3 from the still image 31-3. Then, the apparatus compares the face image 32-3 with the previous face image 32-2, and if the two face images indicate the face of the same character, the apparatus determines that the 2GOP unit from which the face image 32-3 is extracted, i.e., the fifth 2GOP unit, is also contained within the face zone 11.
The apparatus then sets the sixth 2GOP unit as the target unit, and generates a still image 31-4 and attempts to detect a face in the still image 31-4. However, since no face is contained in the still image 31-4, no face is detected. Accordingly, the apparatus sets the head position of the 2GOP unit from which no face is detected, i.e., the head position of the sixth 2GOP unit, namely, the end position of the fifth 2GOP unit, as a candidate of the end position of the face zone 11.
The reason for setting the head position of the sixth 2GOP unit merely as a candidate is as follows. To humans, the time corresponding to the 2GOP unit is a short period of time, and even if no face is detected for such a short period of time, the user can feel as if a face zone were still continuing if the face of the same character is detected once again after the short period of time. That is, even after determining a candidate for the end position of the face zone 11, the apparatus continues detecting faces for subsequent several 2GOP units, and if the period during which no faces are detected continues for a certain period of time, the apparatus sets the candidate as the end position of the face zone 11.
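The scan described in the preceding paragraphs — a candidate end position that is confirmed only after the face stays absent for a certain number of consecutive units — may be sketched as follows. The unit granularity, the `same_face` comparator, and the `tolerance` parameter are assumptions introduced for illustration.

```python
# Illustrative sketch of the face-zone scan: the first unit containing
# a face opens the zone; the head of the first unit in which the face
# disappears becomes a *candidate* end position, and the candidate is
# confirmed only after the face stays absent for more than `tolerance`
# consecutive units. A reappearance before that cancels the candidate.

def scan_face_zone(units, same_face, tolerance):
    """units: iterable of face images (None when no face is detected)
       same_face(a, b): True if both images show the same character
       Returns (start_unit, end_unit) of the first face zone, or None."""
    start = None
    last_face = None
    end_candidate = None
    missing = 0
    for i, face in enumerate(units):
        if face is not None and (last_face is None
                                 or same_face(last_face, face)):
            start = i if start is None else start
            last_face = face
            end_candidate = None        # the face zone is continuing
            missing = 0
        elif start is not None:
            if end_candidate is None:
                end_candidate = i - 1   # candidate end: previous unit
            missing += 1
            if missing > tolerance:     # absence lasted long enough
                return (start, end_candidate)
    if start is not None:
        return (start, end_candidate if end_candidate is not None
                else len(units) - 1)
    return None
```

In this sketch, a single faceless unit followed by a reappearance of the same face (with a tolerance of one unit) keeps the face zone continuous, mirroring the behavior described above.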
After specifying the face zone 11, the apparatus generates the face thumbnail image 21 as the index of the face zone 11 and associates the face thumbnail image 21 with the face zone 11. The face thumbnail image 21 of the face zone 11 is not particularly restricted as long as the user can recognize it as showing the same face as the faces detected in the face images 32-1 through 32-3. Accordingly, after determining the face zone 11, the apparatus may generate a new face image from the face zone 11 and use the generated image as the face thumbnail image 21. Alternatively, the apparatus may set one of the face images used while specifying the face zone 11 as the face thumbnail image 21. In the example shown in
For the user's convenience, the apparatus may generate, not only the face thumbnail image 21, but also a thumbnail image 41 as the index of the face zone 11, from the entire still image including the face of the face thumbnail image 21. In the example shown in
That is, as the indexes of the face zones 11 through 14 of the moving picture content 1, not only the face thumbnail images 21 through 24, but also thumbnail images generated from the associated entire still images, may be provided, though the latter are not shown in
An overview of a technique for specifying face zones and a technique for generating face thumbnail images have been described with reference to
When recording content on a recording medium so that a face zone can be played back by specifying a face thumbnail image, the recording medium having a structure, such as that shown in
On a recording medium 51 shown in
In the real data area 52, N (N is an integer of 0 or greater) pieces of content 64-1 through 64-N are recorded. In the example shown in
For one piece of content 64-K, resource data 65-K necessary for playing back the content 64-K is recorded in the resource data area 53.
The resource data 65-K includes information and data concerning the content 64-K, such as management information, thumbnail information and individual thumbnails, content-meta information, face-zone-meta information, and face thumbnails.
The management information includes a set of various items of information for managing the entirety of the content 64-K.
The thumbnail is an image, such as the thumbnail image 41 shown in
The thumbnail information includes a set of various items of information concerning the above-described individual thumbnails.
The content-meta information is meta-information concerning the content 64-K, and includes, for example, basic information necessary for playing back the content 64-K, except for the subsequent face-zone-meta information.
The face-zone-meta information includes a set of various items of information necessary for playing back face zones, such as the face zones 11 through 14 shown in
Referring back to
In the example shown in
In other words, the management information concerning the content 64-1 through the management information concerning the content 64-N are collectively included in the management file 61. The thumbnail information and the corresponding thumbnails of the content 64-1 through the content 64-N are collectively included in the thumbnail image file 62. The content-meta information, the face-zone-meta information, and the face thumbnails of the content 64-1 through the content 64-N are collectively included in the meta-file 63.
That is, a plurality of pairs of face-zone-meta information and associated face thumbnails are included in the meta-file 63 as one piece of meta-information concerning the content 64-K. More specifically, if the content 64-K is the moving picture content 1 shown in
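The layout of the resource data area described above may be sketched as a data model, for illustration only. The class and field names are assumptions; the actual on-medium format of the management file, thumbnail image file, and meta-file is not specified here.

```python
# Hypothetical data-model sketch of the resource data area: one
# management file, one thumbnail image file, and one meta-file
# collectively hold the resource data of all N content pieces.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FaceZoneMeta:
    start: int                 # playback start position of the face zone
    end: int                   # playback end position of the face zone
    face_thumbnail: bytes      # face thumbnail serving as the zone's index

@dataclass
class ContentResource:
    management_info: dict             # goes into the management file
    thumbnail_info: dict              # goes into the thumbnail image file
    thumbnails: List[bytes]
    content_meta: dict                # goes into the meta-file, together
    face_zones: List[FaceZoneMeta]    # with the face-zone-meta pairs

@dataclass
class ResourceDataArea:
    contents: List[ContentResource] = field(default_factory=list)
```

For the moving picture content 1, for example, the `face_zones` list would hold four pairs, one for each of the face zones 11 through 14 and their face thumbnail images 21 through 24.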
The use of resource data, such as the resource data 65-K, enables high-speed display of a list of the faces of a character and their positions without the need to search the content again. Additionally, in response to a search request for information concerning content from an external source, it is sufficient to send the resource data. As a result, responsiveness can be improved.
The use of resource data also makes it possible to extract a portion in which a specific character is recorded and to generate new moving picture data, or to play back or backup moving picture data in which only a specific character is recorded. Accordingly, resource data can be used for extracting data when automatic editing, automatic playback, or automatic backup is performed.
By using the playback apparatus shown in
To implement the above-described processing, the playback apparatus shown in
The control unit 71 includes a system controller 91, a user interface (UI) controller 92, and a content management information controller 93. The RAM 73 includes a management information area 101 and a real data area 102.
The system controller 91 executes various types of control processing on the audio decoder 74, the video decoder 76, the still image decoder 77, and the image blender 78.
The UI controller 92 performs various types of control on graphical user interfaces (GUIs) by using the operation unit 80 and GUI images displayed on the image display unit 79, for example, GUI images shown in
If necessary, the content management information controller 93 expands management information for playing back the content 64-K shown in
The demultiplexer 72 reads at least part of the content 64-K to be played back among the content 64-1 through the content 64-N recorded on the recording medium 51. The demultiplexer 72 then demultiplexes the read data into audio data and video data and stores them in the real data area 102 of the RAM 73. The term “reading at least part of the content 64-K” is used because a certain zone of the content 64-K, for example, a face zone, can be played back.
In the real data area 102 of the RAM 73, data, such as video data and audio data, read from the recording medium 51 is stored. In the management information area 101, several pieces of information among the resource data 65-K recorded on the recording medium 51 are stored as management information. Details of the management information area 101 are further discussed below with reference to
Under the control of the system controller 91, the audio decoder 74 reads out audio data from the real data area 102 and converts the audio data into a sound signal having a format that is compatible with the sound output unit 75, and supplies the resulting sound signal to the sound output unit 75. The sound output unit 75 outputs the sound corresponding to the sound signal supplied from the audio decoder 74, i.e., the sound corresponding to the audio data of the content 64-K.
Under the control of the system controller 91, the video decoder 76 reads out video data from the real data area 102 and converts the video data into an image signal having a format that is compatible with the image display unit 79. If the video data is MPEG data, the video decoder 76 performs MPEG decode processing and supplies the resulting image signal to the image blender 78.
Under the control of the system controller 91, the still image decoder 77 reads out still image data, for example, a GUI still image, from the management information area 101, and converts the still image data into an image signal having a format that is compatible with the image display unit 79, and supplies the resulting image signal to the image blender 78.
The image blender 78 combines the image signal output from the video decoder 76 with the image signal output from the still image decoder 77 and provides the resulting composite image signal to the image display unit 79. The image display unit 79 displays the image corresponding to the image signal output from the image blender 78, i.e., moving pictures corresponding to the video data of the content 64-K or GUI images, such as those shown in
Details of the management information area 101 of the RAM 73 are given below with reference to
The management information area 101 includes, as shown in
In the image processing area 111, the image data of the GUI image data shown in
In the property area 112, common information necessary for accessing the recording medium 51 shown in
The entry is a zone for which a playback instruction is given, and for example, the face zones 11 through 14 shown in
In the thumbnail area 113, information concerning the thumbnail of each entry (hereinafter referred to as a “thumbnail entry”) is stored.
In the meta-area 114, information concerning metadata of each entry (hereinafter referred to as a “meta-entry”) is stored. If a face zone is an entry, a pair of corresponding face-zone-meta information and a face thumbnail (see
It should be noted that the property entry, the thumbnail entry, and the meta-entry of each entry are not separately stored in the property area 112, the thumbnail area 113, and the meta-area 114, respectively, and instead, they are stored for each entry in association with each other, as indicated by the arrows in
A description is given below of the playback instruction operation performed by the user by using the playback apparatus shown in
In
For example, in response to a start operation for giving a playback instruction by the operation unit 80, the UI controller 92 determines that the state transition condition C1 is satisfied and shifts the state of the playback apparatus to a media list display state S1.
After the state of the playback apparatus is shifted to the media list display state S1, the content management information controller 93 generates a media list GUI image in the management information area 101 of the RAM 73 in the form of image data, and supplies the media list GUI image to the still image decoder 77. The system controller 91 controls the still image decoder 77 and the image blender 78 to convert the media list GUI image from the image data into an image signal, and supplies the converted media list GUI image to the image display unit 79. Then, on the image display unit 79, the media list GUI image is displayed. As a result, a GUI using the media list GUI image and the operation unit 80 can be implemented.
The media list is an index list of each recording medium that can be played back by the playback apparatus. That is, a GUI image that allows the media list to be displayed and that receives an operation for selecting the index corresponding to a desired recording medium from the list is the media list GUI image, though it is not shown.
A series of processing for displaying another GUI image on the image display unit 79 and implementing a GUI using the GUI image and the operation unit 80 is basically similar to the processing for the above-described media list GUI image. Thus, such a series of processing is simply referred to as the “GUI image display processing” and a detailed explanation is omitted.
After the operation for selecting a desired recording medium from the media list is performed through the operation unit 80, the UI controller 92 determines that the state transition condition C2 is satisfied, and shifts the state of the playback apparatus from the media list display state S1 to the file/folder list display state S2.
After the state of the playback apparatus is shifted to the file/folder list display state S2, the control unit 71 executes processing for displaying a file/folder list GUI image.
The file/folder list is a list of icons of the folders and files contained in the selected recording medium, in the form of a tree structure. That is, a GUI image that allows the file/folder list to be displayed and that receives an operation for selecting the icon corresponding to a desired file or folder is the file/folder list GUI image, though it is not shown.
The file/folder list GUI image contains a software button for redisplaying, for example, the media list GUI image. If the software button is operated, it is determined that the state transition condition C3 is satisfied, and the state of the playback apparatus is shifted from the file/folder list display state S2 to the media list display state S1.
In response to an operation for selecting a desired file/folder from the file/folder list GUI image performed through the operation unit 80, the UI controller 92 determines that the state transition condition C4 is satisfied, and shifts the state of the playback apparatus from the file/folder list display state S2 to the folder file display state S3.
More specifically, in this embodiment, the folder file display state S3 includes three states, such as a general display state S3-1, a moving-picture selection screen display state S3-2, and a still-image selection screen display state S3-3. Among the three states, the state of the playback apparatus shifts from the file/folder list display state S2 to the general display state S3-1. That is, the general display state S3-1 is the default state of the folder file display state S3.
After the state of the playback apparatus is shifted to the general display state S3-1, the control unit 71 executes processing for displaying a file selection GUI image.
The file selection GUI image contains a software button for redisplaying, for example, the file/folder list GUI image, though it is not shown. If the software button is operated, it is determined that the state transition condition C5 is satisfied, and the state of the playback apparatus is shifted from the general display state S3-1 to the file/folder list display state S2.
The file selection GUI image is a GUI image that allows icons of individual files contained in the folder of the selected recording medium to be displayed and that receives an operation for selecting a predetermined icon. To select an icon is to select the file associated with the icon.
In this case, if the selected recording medium is the recording medium 51 shown in
However, it is difficult for the user to visually determine whether an icon represents a file of moving picture content or a file of another type of content. Even if the user can identify that the icon represents a file of moving picture content, it is very difficult to understand the type of moving picture content.
Accordingly, in this embodiment, a moving-picture selection GUI image 150, such as that shown in
The file selection GUI image contains a software button for displaying, for example, the moving-picture selection GUI image 150, though it is not shown. If the software button is operated, it is determined that the state transition condition C6 shown in
After the state of the apparatus is shifted to the moving-picture selection screen display state S3-2, the control unit 71 performs processing for displaying the moving-picture selection GUI image 150 shown in
The moving-picture selection GUI image contains a software button for redisplaying, for example, the file selection GUI image, though it is not shown. If the software button is operated, it is determined that the state transition condition C7 is satisfied, and the state of the playback apparatus is shifted from the moving-picture selection screen display state S3-2 to the general display state S3-1.
While the moving-picture selection GUI image 150 shown in
In this case, to select the thumbnail image 151 is to give an instruction to play back the moving picture content associated with the thumbnail image 151. In this embodiment, there are at least two types of playback instruction operations, as discussed with reference to
Accordingly, the selection operation for the thumbnail image 151 also includes two types of playback instruction operations, such as a first selection operation corresponding to the entire playback instruction operation and a second selection operation corresponding to the face-zone playback instruction operation.
If the first selection operation is performed, it is determined that the state transition condition C10 shown in
After the state of the playback apparatus is shifted to the moving-picture playback state S6, the control unit 71 plays back the moving picture content for which the entire playback instruction operation is performed. That is, the entire moving picture content is read from the recording medium and is played back from the start. The playback operation for the entire moving picture content can be easily understood from the description of the playback apparatus with reference to
After the playback of the moving picture content is finished or in response to an operation for interrupting the playback of the moving picture content, it is determined that the state transition condition C11 is satisfied, and the state of the playback apparatus is shifted from the moving-picture playback state S6 to the moving-picture selection screen display state S3-2.
In contrast, if the second selection operation corresponding to the face-zone playback instruction operation is performed, it is determined that the state transition condition C12 is satisfied and the state of the playback apparatus is shifted from the moving-picture selection screen display state S3-2 to the face-zone playback selection screen display state S5.
After the state of the playback apparatus is shifted to the face-zone playback selection screen display state S5, processing for displaying the face-zone playback selection GUI images is performed.
The face-zone playback selection GUI images are GUI images that instruct the user to select a desired face zone from the moving picture content selected in the moving-picture selection screen GUI image.
In this embodiment, as the face-zone playback selection GUI images, three GUI images 161 through 163, such as those shown in
In
In the example shown in
While the face thumbnail GUI image 161 is being displayed, the user can operate the operation unit 80 to move a cursor 201 to a desired face thumbnail image, as shown in
In the example shown in
An example of the technique for specifying a range of the moving picture content 1 to be read as the face zone 12 is briefly discussed below. As stated above, it is possible to specify the range of the moving picture content 1 corresponding to the face zone 12 by means of the face-zone-meta information (
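The role of the face-zone-meta information in specifying the range to be read can be sketched as follows. This is an illustrative sketch only; the field names (`start_sec`, `duration_sec`) are assumptions for illustration, not the actual meta-information format.

```python
# Sketch: resolving the playback range of a face zone from hypothetical
# face-zone-meta information. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class FaceZoneMeta:
    start_sec: float      # playback start position of the face zone
    duration_sec: float   # length of the face zone

def zone_range(meta):
    """Return the (start, end) playback positions of the face zone."""
    return meta.start_sec, meta.start_sec + meta.duration_sec

meta = FaceZoneMeta(start_sec=12.0, duration_sec=5.5)
assert zone_range(meta) == (12.0, 17.5)
```

Given such a pair, the playback apparatus knows exactly which portion of the moving picture content 1 to read from the recording medium.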
In contrast to the face thumbnail GUI image 161, the GUI image 162 contains, instead of face images, thumbnail images corresponding to the entireties of the source-scene still images from which the face images are extracted. The GUI image 162 is hereinafter referred to as the “source-scene GUI image 162”.
In the example shown in
Accordingly, while the source-scene GUI image 162 is being displayed, the user can operate the operation unit 80 to move the cursor 201 to a desired thumbnail image, as shown in
In the example shown in
In this embodiment, after locating the cursor 201 at a desired face thumbnail image in the face thumbnail GUI image 161, or at a desired thumbnail image in the source-scene GUI image 162, that is, after selecting a desired face zone in the face thumbnail GUI image 161 or the source-scene GUI image 162, the user can perform a predetermined operation through the operation unit 80 to display a GUI image 163 including a time line 191 indicating the time position of the selected face zone in the moving picture content 1. In the example shown in
While the time-line GUI image 163 is being displayed, the user can perform a predetermined operation through the operation unit 80 to give an instruction to play back the face zone represented by the image displayed in the time line 191.
In the example shown in
In the example shown in
In this case, the face zones selected by one playback instruction operation, for example, in
As discussed above, in this embodiment, as the face zone playback selection GUI images, three GUI images, such as the face thumbnail GUI image 161, the source-scene GUI image 162, and the time-line GUI image 163 shown in
If the above-described face-zone playback instruction operation is performed at least once in the face-thumbnail screen display state S5-1, the source-scene screen display state S5-2, or the time-line screen display state S5-3, it is determined that the state transition condition C14-1, C14-2, or C14-3, respectively, is satisfied, and the state of the playback apparatus is shifted to the moving-picture playback state S6.
After the state of the playback apparatus is shifted to the moving-picture playback state S6, the control unit 71 continuously plays back one or more face zones for which the playback instruction operation is performed, as discussed above.
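The continuous playback of the selected face zones can be sketched as follows; `decode_range` is a hypothetical stand-in for the actual playback path of the apparatus.

```python
# Sketch: play each face zone for which the playback instruction
# operation was performed, from start to end, in order. Zones are
# (start, end) positions; decode_range stands in for real playback.
def play_zones_continuously(zones, decode_range):
    for start, end in zones:
        decode_range(start, end)   # each zone is played in full

played = []
play_zones_continuously([(0.0, 3.0), (10.0, 12.5)],
                        lambda s, e: played.append((s, e)))
assert played == [(0.0, 3.0), (10.0, 12.5)]
```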
After finishing the playback operation for one or more face zones to the end or in response to an instruction to stop the playback operation, it is determined that the state transition condition C11 is satisfied, and the state of the playback apparatus is shifted from the moving-picture playback state S6 to the moving-picture selection screen display state S3-2.
A software button for redisplaying, for example, the moving-picture selection GUI image 150, is contained in each of the face thumbnail GUI image 161, the source-scene GUI image 162, and the time-line GUI image 163, though it is not shown. If the software button is operated, it is determined that the state transition condition C13-1, C13-2, or C13-3 is satisfied, and the state of the playback apparatus is shifted to the moving-picture selection screen display state S3-2.
Another software button for redisplaying, for example, another face-zone selection GUI image, is contained in each of the face thumbnail GUI image 161, the source-scene GUI image 162, and the time-line GUI image 163, though it is not shown. If the software button is operated, it is determined that one of the state transition conditions C21 through C26 is satisfied, and the state of the playback apparatus is shifted to the corresponding one of the face-thumbnail screen display state S5-1, the source-scene screen display state S5-2, and the time-line screen display state S5-3.
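The state transitions walked through above can be summarized as a lookup table. The sketch below is illustrative, covers only the subset of conditions already named in this description, and the table layout itself is an assumption, not part of the apparatus.

```python
# Sketch of the GUI state transitions: (current state, condition) -> next state.
# State and condition labels follow the description above.
TRANSITIONS = {
    ("S1", "C2"): "S2",      # media list -> file/folder list
    ("S2", "C3"): "S1",      # back to media list
    ("S2", "C4"): "S3-1",    # default state of the folder file display
    ("S3-1", "C5"): "S2",    # back to file/folder list
    ("S3-1", "C6"): "S3-2",  # moving-picture selection screen
    ("S3-2", "C7"): "S3-1",  # back to general display
    ("S3-2", "C10"): "S6",   # entire playback instruction
    ("S6", "C11"): "S3-2",   # playback finished or interrupted
    ("S3-2", "C12"): "S5",   # face-zone playback selection screen
}

def next_state(state, condition):
    """Shift state if the condition is defined; otherwise stay put."""
    return TRANSITIONS.get((state, condition), state)
```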
As discussed above, the above-configured playback apparatus shown in
The user may select a plurality of face thumbnail images. In this case, a plurality of face zones corresponding to the selected plurality of face thumbnail images are continuously played back. In the example shown in
Additionally, one or more selected face zones may be formed into a new piece of content, i.e., a new file, and recorded on an external medium, such as the recording medium 51. Alternatively, one or more selected face zones may be transferred to an external apparatus (not shown) via a network. Such a technique is referred to as a “collective write/transfer technique”.
More specifically, in the example shown in
To apply the collective write/transfer technique to the playback apparatus, in addition to the above-described configuration and operation discussed with reference to
That is, in order to implement the collective write/transfer technique by using the face thumbnail GUI image 161 and the source-scene GUI image 162 shown in
The software buttons 251-1 through 253-1 and 251-2 through 253-2 are hereinafter simply referred to as the “software buttons 251 through 253” unless it is necessary to individually distinguish them from each other. The software buttons 251 through 253 are also referred to as a playback button 251, a file generate button 252, and an external write button 253, respectively, as shown in
The playback button 251 is a software button for performing a playback instruction operation for continuously playing back, in order, one or more face zones associated with one or more selected face thumbnail images or thumbnail images.
The file generate button 252 is a software button for generating new content, as a new file, from one or more face zones associated with one or more selected face thumbnail images or thumbnail images and for storing the new file in a built-in memory of the playback apparatus, for example, the RAM 73 shown in
The external write button 253 is a software button for generating new content, as a new file, from one or more face zones associated with one or more selected face thumbnail images or thumbnail images and for recording the generated new content in an external medium, such as the recording medium 51, or transferring the new content to an external apparatus via a network. By operating the external write button 253, the user can record new content, such as the new content 211 shown in
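The behavior of the three software buttons can be sketched as a simple dispatch; the action labels returned below are illustrative names, not identifiers from the apparatus.

```python
# Sketch: map each software button to its action on the selected
# face zones. Action names are assumptions for illustration.
def on_button(button, zones):
    actions = {
        "playback": "play_continuously",               # playback button 251
        "file_generate": "store_in_built_in_memory",   # file generate button 252
        "external_write": "write_or_transfer_externally",  # external write button 253
    }
    return (actions[button], zones)

assert on_button("playback", ["zone11"]) == ("play_continuously", ["zone11"])
```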
In the recording/playback apparatus shown in
As stated above, according to the collective write/transfer technique, one or more face zones selected by using face thumbnail images or thumbnail images can be formed into one piece of content, i.e., a new file. In this case, however, if more than one face zone is selected, an editing operation for splicing those face zones becomes necessary. In this embodiment, this editing operation is performed on face zones when they are in the form of a baseband signal. Thus, in the recording/playback apparatus shown in
When generating, for example, the new content 211 shown in
If sound is included in the new content 211, the audio encoder/decoder 261 performs processing in a manner similar to the video encoder/decoder 262.
In the recording/playback apparatus shown in
The above-configured recording/playback apparatus shown in
The recording/playback apparatus shown in
The collective write processing is executed when the state of the recording/playback apparatus is the face-zone playback selection screen display state S5 shown in
In step S21, the UI controller 92 of the control unit shown in
If it is determined in step S21 that neither the file generate button 252 nor the external write button 253 has been operated, the process returns to step S21, and the determination processing in step S21 is repeated.
If it is determined in step S21 that the file generate button 252 or the external write button 253 has been operated, the process proceeds to step S22.
In step S22, the content management information controller 93 generates resource data for continuously playing back the portions corresponding to the selected face thumbnail images or thumbnail images.
In step S23, the content management information controller 93 generates a temporary folder including the resource data in the management information area 101 of the RAM 73.
More specifically, it is now assumed, for example, that after the face thumbnail images 21 through 24 are selected by using the face thumbnail GUI image 161, the file generate button 252-1 or the external write button 253-1 is operated.
In this case, the face thumbnail images 21 through 24 serve as the indexes of the face zones 11 through 14, respectively. Accordingly, in step S22, resource data items 271 through 274 for playing back the face zones 11 through 14, respectively, are generated. Then, in step S23, a temporary folder 261 including the resource data items 271 through 274 is recorded in the management information area 101 of the RAM 73. The resource data items 271 through 274 are data including pairs of face thumbnails and face-zone-meta information concerning the face zones 11 through 14, respectively (see
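Steps S22 and S23 can be sketched as follows: for each selected face thumbnail image, a resource data item pairing the face thumbnail with its face-zone-meta information is generated, and the items are gathered in a temporary folder. The dictionary layout is an assumption for illustration.

```python
# Sketch: build a temporary folder of resource data items, one per
# selected face thumbnail. Layout is an assumption, not the actual
# management-information format.
def build_temporary_folder(selected_pairs):
    resource_data = [{"face_thumbnail": thumb, "face_zone_meta": meta}
                     for thumb, meta in selected_pairs]
    return {"kind": "temporary", "resource_data": resource_data}

folder = build_temporary_folder([("thumb21", "meta11"), ("thumb22", "meta12")])
assert len(folder["resource_data"]) == 2
```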
In step S24 in
If it is determined in step S24 that the file generate button 252 has been operated, the process proceeds to step S25.
In step S25, the content management information controller 93 writes information concerning the temporary folder into the common management information area of the RAM so that the temporary folder can be converted into a permanent folder. Then, the collective write processing is completed.
If it is determined in step S24 that the operated button is not the file generate button 252, i.e., the operated button is the external write button 253, the process proceeds to step S26.
In step S26, the content management information controller 93 and the system controller 91 combine the resource data in the temporary folder with the real data to generate new content as a new file. The real data generating processing has been discussed above through a description of the video encoder/decoder 262.
In step S27, the content management information controller 93 records the new content on an external medium, such as the recording medium 51, as a new file. Then, the collective write processing is completed.
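The overall collective write processing (steps S22 through S27) can be sketched as follows. The callbacks stand in for the content management information controller 93 and the system controller 91; their names and the return labels are assumptions for illustration.

```python
# Sketch of the collective write processing flow (steps S22-S27).
def collective_write(button, selected_pairs,
                     make_folder_permanent, write_to_external_medium):
    # S22-S23: generate resource data and place it in a temporary folder
    folder = {"resource_data": [{"face_thumbnail": t, "face_zone_meta": m}
                                for t, m in selected_pairs]}
    # S24: branch on which button was operated
    if button == "file_generate":
        make_folder_permanent(folder)              # S25
        return "permanent_folder"
    # S26: combine the resource data with the real data into new content
    new_content = {"resource": folder, "real_data": "spliced face zones"}
    write_to_external_medium(new_content)          # S27
    return "external_file"
```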
If the recording/playback apparatus shown in
In the above-described example, face thumbnail images are associated with moving picture content. However, they may be associated with still image content. Then, as in the moving picture content, the user can perform a search operation or a playback instruction operation on still image content by using a GUI image including a list of face thumbnail images.
In most cases, the number of pieces of still image content is much greater than the number of pieces of moving picture content. Accordingly, instead of displaying a list of face thumbnail images associated with all pieces of still image content, the following approach to arranging still image content pieces is more convenient for users. Still image content pieces that are determined to contain the same character are formed into one group, and then, one face thumbnail image is extracted from each group as a typical face thumbnail image. Then, a list of typical face thumbnail images is displayed.
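This grouping approach can be sketched as follows. The `character_of` callback stands in for the actual same-character determination and is an assumption; here the first group member serves as the typical face thumbnail, which is also an illustrative choice.

```python
# Sketch: group still images by the character they contain and pick one
# typical face thumbnail per group. character_of is a stand-in for the
# real same-character determination.
def group_and_pick_typical(stills, character_of):
    groups = {}
    for still in stills:
        groups.setdefault(character_of(still), []).append(still)
    typical = {char: members[0] for char, members in groups.items()}
    return groups, typical

groups, typical = group_and_pick_typical(
    ["a1", "b1", "a2"], lambda s: s[0])   # first letter = character id
assert groups == {"a": ["a1", "a2"], "b": ["b1"]}
assert typical == {"a": "a1", "b": "b1"}
```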
More specifically, if still image content pieces recorded on the recording medium 51 belong to three groups indicating three characters, as shown in
In this case, the character folders 271 through 273 may be stored in the recording medium 51 beforehand, or may be generated by the playback apparatus shown in
A description is now given of an example of processing performed by the playback apparatus shown in
In this case, among the states shown in
The still-image selection screen display state S3-3 is the state in which a still-image selection GUI image is displayed.
The still-image selection GUI image is a GUI image that allows a corresponding typical face thumbnail image to be displayed as the index of each character file and that receives an operation for selecting a typical face thumbnail image associated with a desired character file. In this case, after selecting one character file from the GUI image, a list of thumbnail images included in the selected character file may be displayed as a GUI image, and such a GUI image is also referred to as the still-image selection GUI image. Specific examples of the still-image selection GUI image are described below with reference to
A software button for displaying such a still-image selection GUI image is included in the file selection GUI image displayed in the general display state S3-1, though it is not shown. If the software button is operated, it is determined that the state transition condition C8 is satisfied, and the state of the playback apparatus is shifted from the general display state S3-1 to the still-image selection screen display state S3-3.
After the playback apparatus is shifted to the still-image selection screen display state S3-3, the control unit 71 shown in
More specifically, in this embodiment, the still-image selection screen display state S3-3 includes, as shown in
The playback apparatus shifts from the general display state S3-1 to the still-image list screen display state S31. That is, the still-image list screen display state S31 is the default state of the still-image selection screen display state S3-3.
After the playback apparatus is shifted to the still-image list screen display state S31, the control unit 71 performs display processing for a still-image list GUI image.
A software button for redisplaying, for example, a file selection GUI image, is contained in the still-image list GUI image. If the software button is operated, it is determined that the state transition condition C9 is satisfied, and the state of the playback apparatus is shifted from the still-image list screen display state S31 to the general display state S3-1.
The still-image list GUI image is a GUI image that allows a list of thumbnail images associated with all pieces of still-image content included in a selected folder to be displayed as the index of the still-image content pieces and that receives an operation for selecting a desired thumbnail image from the list, though it is not shown.
If a predetermined thumbnail image is selected while the still-image list GUI image is being displayed, that is, if an instruction to play back the still image content corresponding to the selected thumbnail image is given, it is determined that the state transition condition C15-1 is satisfied, and the state of the playback apparatus is shifted from the still-image list screen display state S31 to the still-image display state S4.
After the playback apparatus is shifted to the still-image display state S4, the control unit 71 plays back the still-image content for which a playback instruction operation has been performed. That is, if the state of the playback apparatus is shifted to the still-image display state S4 since the state transition condition C15-1 is satisfied, the still-image content is read from the recording medium 51 and is played back. More specifically, the still image is displayed on the image display unit 79 shown in
If an instruction to stop the playback operation for the still-image content is given, it is determined that the state transition condition C16-1 is satisfied, and the state of the playback apparatus is shifted from the still-image display state S4 to the still-image list screen display state S31.
As stated above, in the still-image list GUI image, thumbnail images for all pieces of still-image content are displayed. Accordingly, if the number of still-image content pieces is large, the number of thumbnail images also becomes large. It is thus hard for the user to select a desired one from among many thumbnail images.
Thus, in this embodiment, as stated above, a typical face thumbnail image representing each character is associated with the corresponding character folder. A GUI image that allows a list of the typical face thumbnail images to be displayed and receives an operation for selecting a desired typical face thumbnail image is provided. Such a GUI image is also referred to as the “face thumbnail GUI image”.
A software button for displaying, for example, a face thumbnail GUI image, is contained in the still-image list GUI image, though it is not shown. If the software button is operated, it is determined that the state transition condition C51 is satisfied, and the state of the playback apparatus is shifted from the still-image list screen display state S31 to the face-thumbnail screen display state S32.
After the playback apparatus is shifted to the face-thumbnail screen display state S32, the control unit 71 executes display processing for a face thumbnail GUI image.
Then, a face thumbnail GUI image 301, such as that shown in
In each of the typical face thumbnail images 311 through 314, an image of the corresponding character's face is contained. For easy understanding, however, in the example shown in
In this embodiment, not only typical face images, but also a GUI image 302 including source still images from which the typical face images are extracted are displayed, as shown in
In the example shown in
A software button for displaying the face-thumbnail source image GUI image 302 shown in
After the playback apparatus is shifted to the face-thumbnail source image screen display state S33, the control unit 71 executes display processing for the face-thumbnail source image GUI image 302. Then, the face-thumbnail source image GUI image 302 shown in
A software button for displaying the face thumbnail GUI image 301 shown in
While the face thumbnail GUI image 301 shown in
Similarly, while the face-thumbnail source image GUI image 302 shown in
In this manner, if an instruction to play back the character folder of the character α is given in the state in which the face thumbnail GUI image 301 shown in
After the playback apparatus is shifted to the selected-character source image list screen display state S34, the control unit 71 executes display processing for the selected-character source image list GUI image.
The selected-character source image list GUI image is a GUI image that allows all still-image content pieces contained in the selected character folder, i.e., the still image content pieces containing the selected character, to be displayed as a list of thumbnail images and that receives an operation for selecting a predetermined thumbnail image.
For example, since the character folder of the character α is selected, still images 351 through 356 containing the images of the character α are displayed, as shown in
A software button for displaying, for example, the face thumbnail GUI image 301 shown in
While the selected-character source image list GUI image 303 shown in
In response to an instruction to play back still image content, it is determined that the state transition condition C15-2 shown in
After the playback apparatus is shifted to the still image display state S4, the control unit 71 plays back the still image content for which a playback instruction operation is performed. That is, if the playback apparatus is shifted to the still image display state S4 since the state transition condition C15-2 is satisfied, the still image content is read from the recording medium 51 and is played back. More specifically, a GUI 304 including the still image 356 to be played back is displayed, as shown in
For simplicity of description, the still images 356 shown in
If an instruction to stop playing back the still image content is given, it is determined that the state transition condition C16-2 shown in
The processing to be performed in response to an instruction to play back still image content has been discussed with reference to
In this case, the collective write/transfer technique discussed with reference to
More specifically, as shown in
In the example shown in
It is now assumed, for example, that after selecting all the thumbnail images 371 through 374 in the still-image list GUI image 362, the file generate button 252-3 or the external write button 253-3 is operated.
In this case, the control unit 71 of the recording/playback apparatus shown in
If the file generate button 252-3 is operated, the content management information controller 93 shown in
In contrast, if the external write button 253-3 is operated, the content management information controller 93 and the system controller 91 shown in
A description has been given of, as an application of an information processing apparatus of an embodiment of the present invention, an apparatus that can play back moving picture content or still image content recorded on the recording medium 51 shown in
A description is now given of, as another application of an information processing apparatus of an embodiment of the present invention, an image recording apparatus that records moving picture content or still image content on the recording medium 51 so that playback instruction operation GUIs utilizing face thumbnail images can be presented.
The image recording apparatus 401 includes a controller 411, an image capturing unit 412, an image processor 413, an image compressor 414, and a writer 415.
The controller 411 includes, for example, a central processing unit (CPU), and executes various types of control processing in accordance with a program stored in, for example, a read only memory (ROM) (not shown). That is, the controller 411 controls the operations of the image capturing unit 412, the image processor 413, the image compressor 414, and the writer 415.
The image capturing unit 412 includes, for example, a digital video camera, and captures an image of a subject and provides an image signal obtained as a result of the capturing operation to the image processor 413 in the form of a baseband signal.
The image processor 413 performs various types of image processing on the image signal supplied from the image capturing unit 412 so that the moving picture or still image corresponding to the image signal can be processed. The image processor 413 then supplies the resulting image signal to the image compressor 414 in the form of a baseband signal. In this description, the image processing performed by the image processor 413 includes the above-described processing, such as specifying face zones and forming face thumbnail images and thumbnail images as the indexes of the face zones.
Information concerning the face zones and the face thumbnail images and thumbnail images is also output from the image processor 413 to the controller 411 or the writer 415 as predetermined data. Details of the image processing are discussed below.
The image compressor 414 performs predetermined compression-coding processing on the image signal supplied from the image capturing unit 412 via the image processor 413 in the form of a baseband signal. More specifically, the image compressor 414 performs, for example, MPEG encode processing if the image signal is a moving picture signal, and then supplies the resulting compressed image data to the writer 415. The compressed image data, for example, MPEG data, may sometimes be supplied to the image processor 413 as the image signal for detecting a face zone.
The writer 415 writes the image data supplied from the image compressor 414 into the recording medium 51 as image content, i.e., a file, and also writes the resource data of the image content on the recording medium 51. The resource data includes information concerning the face thumbnail images and face zones supplied from the image processor 413 or the controller 411. That is, the information concerning the face thumbnail images and face zones is written into the resource data area 53 of the recording medium 51 as pairs of face thumbnails and face-zone-meta information discussed with reference to
In the example shown in
The noise eliminator 421 performs noise elimination processing for eliminating unwanted noise contained in the captured image corresponding to the image signal supplied from the image capturing unit 412. The noise eliminator 421 then supplies the resulting image signal to the enlargement/reduction unit 423.
More specifically, the noise eliminator 421 performs the following processing as the noise elimination processing by using the frame memory 422. By using the image signal of the previous frame (one frame before a target frame) read from the frame memory 422 and the image signal of the target frame input from the image capturing unit 412, the noise eliminator 421 obtains noise components from the two image signals and then eliminates them from the image signal of the target frame. The noise eliminator 421 then supplies the image signal without noise to the enlargement/reduction unit 423. This image signal is written back to the frame memory 422 so that it can be used as the image signal one frame before the subsequent target frame. In this case, the parameter for adjusting the degree to which noise is eliminated is provided from the controller 411, and thus, the noise eliminator 421 can perform noise elimination processing in accordance with the image captured by the image capturing unit 412.
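The frame-recursive noise elimination described here can be sketched as follows: each target frame is blended with the previous, already cleaned frame held in the frame memory, and the result is written back for use with the subsequent frame. The blend weight `k` stands in for the noise-elimination parameter supplied by the controller 411; representing frames as flat lists of pixel values is a simplification.

```python
# Sketch: frame-recursive noise reduction using a one-frame memory.
# k is the (assumed) parameter controlling how strongly the previous
# frame is mixed in; the cleaned frame is written back to the memory.
def denoise_sequence(frames, k=0.25):
    cleaned = []
    prev = None                       # contents of the frame memory
    for frame in frames:
        if prev is None:
            out = list(frame)         # first frame: nothing to blend with
        else:
            out = [(1 - k) * x + k * p for x, p in zip(frame, prev)]
        cleaned.append(out)
        prev = out                    # written back to the frame memory
    return cleaned
```

Because static image content is identical between frames while noise is not, averaging with the previous frame suppresses the noise components.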
The enlargement/reduction unit 423 performs enlargement/reduction processing on the image signal supplied from the noise eliminator 421 in accordance with the predetermined enlargement/reduction ratio supplied from the controller 411, and then supplies the resulting image signal to the signal converter 424. If no instruction is given from the controller 411 or if the enlargement/reduction ratio is 100%, the enlargement/reduction unit 423 directly supplies the image signal to the signal converter 424 without changing the size thereof.
The signal converter 424 performs image processing concerning the type of video effect instructed by the controller 411 on the image signal supplied from the enlargement/reduction unit 423. The signal converter 424 then supplies the resulting image signal to the image blender 425 and also to the image information detector 427. The type of image processing performed by the signal converter 424 is not particularly restricted, and may be color conversion into a sepia or black-and-white color, or negative-positive inversion, or mosaic or blurring processing. The signal converter 424 may supply the image signal output from the enlargement/reduction unit 423 to the image blender 425 or the image information detector 427 without performing any image processing on the image signal.
The image blender 425 performs blending processing on the image signal supplied from the signal converter 424 in accordance with the type of blending processing instructed by the controller 411, and then supplies the resulting image signal to the image compressor 414. The type of image processing performed by the image blender 425 is not particularly restricted, and may be transparency blending processing by means of alpha (α) blending with a graphic image stored in the frame memory 426 or fader blending processing for gradually fading in or fading out with an image stored in the frame memory 426 along the time axis. The image blender 425 may output the image signal supplied from the signal converter 424 to the image compressor 414 without performing any image processing on the image signal.
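The two kinds of blending mentioned above can be illustrated in one sketch. Alpha blending mixes the image with a stored graphic at a fixed transparency, and fader blending simply steps the same transparency along the time axis; the function name and list-of-lists pixel format are assumptions for illustration.

```python
def alpha_blend(image, graphic, alpha):
    """Transparency (alpha) blending of a stored graphic over an
    image signal.  alpha = 0.0 shows only the image; alpha = 1.0
    shows only the graphic."""
    return [
        [(1 - alpha) * i + alpha * g for i, g in zip(ri, rg)]
        for ri, rg in zip(image, graphic)
    ]

image = [[100.0, 200.0]]
graphic = [[0.0, 0.0]]
# Fader blending: alpha ramps along the time axis over successive
# frames, producing a gradual fade-out toward the stored image.
faded = [alpha_blend(image, graphic, a) for a in (0.0, 0.5, 1.0)]
```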
Under the control of the controller 411, the image information detector 427 performs various types of image processing on the image signal supplied from the signal converter 424 or the image compressor 414 to extract character information or face information, and provides the extracted information to the writer 415 or the controller 411. The face information includes information concerning the above-described face zones and face thumbnail images.
The image information detector 427 includes, as shown in
The still image generator 431 generates a still image in the format of image data from the image signal supplied from the signal converter 424 or the image compressor 414, and then supplies the generated still image to the face image processor 432. If the image signal corresponding to a still image is supplied from the signal converter 424 or the image compressor 414, the image signal is directly supplied to the face image processor 432.
Under the control of the controller 411, the face image processor 432 performs various types of processing, such as detecting a character's face from the still image supplied from the still image generator 431 and extracting the face image from the still image. The results of the image processing are supplied to the controller 411 and the thumbnail generator 433. The results of the image processing include a face image table, which is described below with reference to
Under the control of the controller 411, the thumbnail generator 433 specifies a face zone and generates a face thumbnail image as the index of the face zone by the use of the information from the face image processor 432 or the controller 411, and supplies information concerning the face zone and the face thumbnail image to the controller 411 and the writer 415.
An example of resource data generating/recording processing performed by the image information detector 427 or the controller 411 shown in
The resource data generating/recording processing is processing for the resource data 65-K, in particular, a pair of a face thumbnail and face-zone-meta information of the resource data 65-K, to be recorded, together with the moving picture content 64-K, on the recording medium 51 shown in
In the processing indicated by the flowchart shown in
In step S101, the still image generator 431 of the image information detector 427 sets the latest GOP to be the target GOP.
In step S102, the still image generator 431 generates a still image from the target GOP. In step S102, the approach to generating a still image is not particularly restricted. In this case, since MPEG data is supplied in the form of GOPs, the approach discussed with reference to
In step S103, the still image generator 431 changes the size of the still image and supplies the resulting still image to the face image processor 432.
Then, in step S104, the face image processor 432 attempts to detect faces from the still image. The technique for detecting faces is not particularly restricted. In this case, if a plurality of faces are contained in the still image, they are detected one by one.
In step S105, the face image processor 432 determines whether a face has been detected.
If it is determined in step S105 that one face has been detected, the process proceeds to step S106. In step S106, the face image processor 432 generates face detection information concerning the detected face, which is discussed below.
Then, the process returns to step S104. That is, if a plurality of faces are contained in the still image, they are sequentially detected, and face detection information concerning each of the plurality of faces is generated.
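The detection loop of steps S104 through S106 can be sketched as follows, assuming a hypothetical `detect_faces` routine that returns one bounding box per face. Since the actual detection technique is expressly not restricted, the faces here are simply pre-annotated on the image object for illustration.

```python
def detect_faces(still_image):
    """Hypothetical detector: returns one (x, y, width, height)
    bounding box per face, or an empty list when no face remains."""
    return list(still_image.get("faces", []))

def generate_face_detection_info(still_image):
    """Mirror of steps S104 through S106: faces contained in the still
    image are detected one by one, and one item of face detection
    information is generated per detected face."""
    info = []
    for x, y, w, h in detect_faces(still_image):
        info.append({"x": x, "y": y, "width": w, "height": h})
    return info

still = {"faces": [(10, 20, 30, 30), (60, 20, 28, 28)]}
records = generate_face_detection_info(still)
# Two faces in the still image yield two items of face detection
# information, after which the loop falls through to step S107.
```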
Specific examples of the face detection information are described below with reference to
For example, in step S101, the GOP shown in
The still image 502 contains face areas 502-1 and 502-2 (hereinafter simply referred to as the “faces 502-1 and 502-2”), as shown in
In this case, in step S104, the face 502-1 is detected. After it is determined in step S105 that a face has been detected, in step S106, face detection information 503-1 is generated, as shown in
In the example shown in
After generating the face detection information 503-1, the process returns to step S104 in which the face 502-2 is detected. After it is determined in step S105 that a face has been detected, in step S106, face detection information 503-2 shown in
Then, the process returns to step S104, and at this point, no face is contained in the still image 502. Accordingly, it is determined in step S105 that no face has been detected, and the process proceeds to step S107.
In step S107, the face image processor 432 determines whether at least one item of face detection information has been generated.
If no face is contained in the still image, though such a case is not shown, no face is detected and no face detection information is generated. In this case, it is determined in step S107 that face detection information has not been generated, and the process proceeds to step S110.
In contrast, if at least one face is contained in the still image and has successfully been detected, at least one item of face detection information is generated. It is thus determined in step S107 that at least one item of face detection information has been generated, and the process proceeds to step S108.
In step S108, the face image processor 432 extracts each of at least one face image from the still image on the basis of the corresponding item of face detection information.
More specifically, if two items of face detection information 503-1 and 503-2, such as those shown in
In step S109, the face image processor 432 performs various types of processing necessary for generating or updating a table, such as that shown in
If the GOP number of the target GOP is 1, a new table is generated. At this stage, the table includes columns representing the typical face image of the GOP number 1 and only one row representing the GOP number 1. Then, when the GOP number of the target GOP is i (in the example shown in
More specifically, in the table shown in
For example, in the leftmost column of the table in
In the column representing the character A, “face information” or “no face information” is indicated in the line i. If “face information” is indicated in the line i, it means that the character A is contained in the still image generated from the GOP number i. In contrast, if “no face information” is indicated in the line i, it means that the character A is not contained in the still image generated from the GOP number i. An approach to determining whether or not the character A is contained is not particularly restricted. A specific example of this approach is discussed below with reference to
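The face image table described above can be modeled, for illustration, as a mapping from each registered character to its per-GOP entries. The dictionary layout and helper name are assumptions, not the disclosed data structure.

```python
# One row per GOP number, one column per registered character; each
# cell records whether a face of that character is contained in the
# still image generated from that GOP.
face_image_table = {
    "A": {1: "face information", 2: "face information",
          3: "no face information"},
    "B": {1: "no face information", 2: "face information",
          3: "face information"},
}

def appears_in_gop(table, character, gop_number):
    """True when 'face information' is indicated for the character in
    the row of the given GOP number."""
    return table[character].get(gop_number) == "face information"
```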
The table shown in
After step S109, in step S110, the face image processor 432 determines whether the target GOP is the final GOP.
If the target GOP is not the final GOP, the process returns to step S101. In the example shown in
After finishing the face image table generating/updating processing for the final GOP number n in step S109, the face image table is updated as shown in
Then, in step S111, the thumbnail generator 433 generates a face thumbnail image and specifies a face zone for each character on the basis of the face image table. Step S111 is performed for each character registered in the face image table, more specifically, for each of the character A through the character H in the example shown in
Step S111 is described in detail below with reference to
It should be noted that, in the example in
Concerning the person a, face images are contained in the GOP numbers 1 through 3, and no face image is contained in the GOP number 4. Then, face images are again contained in the GOP numbers 5 and 6. In this manner, if face images are absent only for a short interval, i.e., for a few GOPs, it is considered that the same character A is still appearing. That is, in this case, the thumbnail generator 433 specifies the face zone of the character A, not as a zone from the GOP 1 to the GOP 3, but as a zone 521-A from the GOP 1 to GOP 6, as shown in
The technique for generating the face thumbnail image 522-A is not particularly limited. For example, a new thumbnail image may be generated. In this case, however, since the typical image of the character A is contained in the face image table shown in
Concerning the person b, face images are contained in the GOP numbers 1 through 4, then no face images are contained for a long interval, and face images are again contained in the GOP numbers n−5 through n. In this manner, if face images have not been generated for a long interval, the face image processor 432 determines that the face images generated in one set of GOPs and the face images generated in another set of GOPs belong to different characters B and D. As a result, in the face image table in
In this case, the thumbnail generator 433 specifies, as shown in
The thumbnail generator 433 also specifies, as shown in
The interval, which is used as the reference for determining whether characters appearing in different zones are the same person, is not particularly restricted. That is, if the interval for which no face image appears is only a short interval, it is considered to be part of a continuous face zone. In contrast, if such an interval is a long interval, it is not considered to be part of a continuous face zone. An approach to determining whether such an interval is a short interval or a long interval is not particularly restricted. The following approach, for example, may be employed. A predetermined integral multiple of the base unit for generating a face image, that is, in the example shown in
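Under the assumption of a fixed gap threshold, the zone-specification rule described above can be sketched as follows; the function name and the tuple representation of a zone are illustrative.

```python
def specify_face_zones(gop_presence, max_gap):
    """Group the GOP numbers in which a character's face appears into
    face zones: a gap of at most `max_gap` consecutive face-less GOPs
    is absorbed into the same zone, while a longer gap closes the
    current zone and starts a new one (in the scheme above, a new
    character entry).  Returns (start_gop, end_gop) pairs."""
    zones = []
    start = prev = None
    for gop in sorted(gop_presence):
        if start is None:
            start = prev = gop
        elif gop - prev - 1 <= max_gap:
            prev = gop          # short gap: same continuous face zone
        else:
            zones.append((start, prev))   # long gap: close the zone
            start = prev = gop
    if start is not None:
        zones.append((start, prev))
    return zones

# Character A: faces in GOPs 1-3 and 5-6; the one-GOP gap is absorbed.
zones_a = specify_face_zones([1, 2, 3, 5, 6], max_gap=2)        # [(1, 6)]
# Person b: a long gap splits the appearances into two separate zones.
zones_bd = specify_face_zones([1, 2, 3, 4, 20, 21], max_gap=2)  # [(1, 4), (20, 21)]
```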
Concerning other persons, face zones are specified and face thumbnails are generated in a manner similar to that described above.
More specifically, concerning the person c, the thumbnail generator 433 specifies, as shown in
Concerning the person d, the thumbnail generator 433 specifies, as shown in
Concerning the characters F through H shown in
Then, step S111 in
In step S112, the controller 411 or the writer 415 generates meta-information including the face thumbnail and the face zone of each character, i.e., meta-information including each pair of face-zone-meta information and a face thumbnail. If the meta-information is generated by the controller 411, it is supplied to the writer 415.
In step S113, the writer 415 records the meta-information generated in step S112, together with the management information, on the recording medium 51 as the resource data of the content.
Then, the resource data generating/recording processing is completed.
A description is now given of, with reference to
As discussed through the use of the face image table shown in
In the face image table generating/updating processing, a determination as to whether a face image of a specific character appears in the target GOP number i is necessary. The algorithm for this determination, i.e., a determination technique, is not particularly restricted. In this embodiment, the technique shown in
In
A still image 601 is a still image generated from the GOP one before the target GOP number i, i.e., from the GOP number i−1. Face images 611-1 and 611-2 are face images corresponding to faces 601-1 and 601-2, respectively, detected from the still image 601. That is, the face images extracted from the still image 601 are the face images 611-1 and 611-2.
In this case, the face image processor 432 shown in
The matching processing technique is not particularly restricted. In this embodiment, for example, the following technique is employed. Concerning a combination k (k is a combination number and may be any value from 1 to the total number of combinations), a predetermined computation is performed by the use of the similarity (hereinafter referred to as “ak”) between a face image in the target GOP number i and a face image in the previous GOP number i−1 and the distance (hereinafter referred to as “bk”) between the coordinates of the face image in the still image in the target GOP number i and the coordinates of the face image in the still image in the previous GOP number i−1. The score Sk as a result of performing the computation is used for determining whether the face image in the target GOP number i and the face image in the previous GOP number i−1 belong to the same person.
To compute the score Sk, it is sufficient that ak and bk are used. In this embodiment, for example, the score Sk is computed by the following equation (1):
Sk = √(α·ak + β·bk)  (1)
where α and β indicate parameters for allowing the distance and the similarity to be used for comparison processing, i.e., the parameters for normalization.
The computation method for ak is not particularly restricted, either, and, for example, a similarity computation method using principal component analysis may be employed. For the computation method for bk, the following method may be used. Face images to be compared are generated on the basis of face detection information associated with the face images. The face detection information includes, as shown in
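One reading of equation (1) and of the coordinate-based computation for bk is sketched below. Whether the radical covers the whole sum, and how the distance is oriented so that nearer faces yield a higher score, are left in the text to the normalization parameters α and β, so both points are assumptions here.

```python
import math

def face_distance(info_i, info_j):
    """Distance bk between the coordinates of two face images, computed
    from the (x, y) positions carried in their items of face detection
    information (Euclidean distance, as an illustrative choice)."""
    return math.hypot(info_i["x"] - info_j["x"], info_i["y"] - info_j["y"])

def score(ak, bk, alpha=1.0, beta=1.0):
    """One reading of equation (1): Sk = sqrt(alpha*ak + beta*bk),
    where ak is the similarity between the two face images and bk the
    distance between their positions.  alpha and beta normalize the
    two quantities onto comparable scales."""
    return math.sqrt(alpha * ak + beta * bk)
```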
In this manner, the face image processor 432 shown in
More specifically, in this embodiment, matching pairs are selected, as shown in
In
As indicated by the double-headed arrows shown in
More specifically, as indicated by the leftmost side of
Then, the face image processor 432 excludes all the combinations including one of the face images 611-1 and 612-1, which are selected as a matching pair, from the candidates for the next matching pair. As a result, as indicated by the rightmost side in
In the example in
The principle of selecting matching pairs has been discussed with reference to
In the example shown in
More specifically, in the example in
In this case, the latest face images of the character A and the character B are the face images 611-1 and 611-2, respectively. Accordingly, as in the example in
In this manner, after determining face images to be compared with face images of the target GOP, in principle, the matching processing discussed with reference to
In the example shown in
Then, all combinations including at least one of the face images 611-1 and 612-1, which are selected as a matching pair, are excluded from possible candidates of the next matching pair. Then, in the example shown in
However, even if the combination of the face images 612-2 and 611-2 has the highest score Sk among the existing combinations, if the highest score Sk is very low, it is difficult to determine that the face image 612-2 of the GOP number i is the face image of the character B. In this embodiment, by taking such a case into consideration, if the score Sk of a combination of face images is lower than or equal to a predetermined threshold, such a combination is not determined to be a matching pair (is excluded from matching-pair candidates), and the face image included in the combination is determined to be a face image of another character.
In the example shown in
Additionally, as shown in
The interval used for determining whether two face images appearing therebetween are different characters is not particularly restricted, and may be a desired value, such as an interval equivalent to three seconds.
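Putting the pieces together, the greedy selection of matching pairs, with both the score threshold and the time-interval threshold applied, can be sketched as follows. The tuple layout of a combination and all names are illustrative assumptions; the behavior when a rejected candidate still has lower-scoring combinations is an interpretation of the flowchart.

```python
def select_matching_pairs(combinations, score_threshold, max_interval):
    """Repeatedly take the highest-scoring remaining combination of a
    target-GOP face image and an earlier face image; accept it as a
    matching pair only if its score Sk exceeds the score threshold and
    the GOP interval between the two face images does not exceed
    `max_interval`; otherwise register the target face image as the
    face of a new character.  Each combination is a tuple
    (score, interval, target_face, previous_face)."""
    pairs, new_characters = [], []
    used = set()
    for s, interval, target, previous in sorted(
            combinations, key=lambda c: c[0], reverse=True):
        if target in used or previous in used:
            continue  # combinations containing a face already in a
                      # matching pair are excluded from the candidates
        if s > score_threshold and interval <= max_interval:
            pairs.append((target, previous))
            used.add(previous)
        else:
            # low score or too long an interval: treated as another
            # character rather than forced into a matching pair
            new_characters.append(target)
        used.add(target)
    return pairs, new_characters

pairs, new_chars = select_matching_pairs(
    [(0.9, 1, "face_i_1", "face_A"),
     (0.8, 1, "face_i_2", "face_A"),
     (0.3, 1, "face_i_2", "face_B")],
    score_threshold=0.5, max_interval=3)
```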
The score of a matching pair selected as described above is listed in the field of “similarity (score)” (see
The face-image presence/absence determination technique, which is one of the techniques according to an embodiment of the present invention, has been discussed above with reference to
Details of step S109 in
In step S121, the face image processor 432 shown in
More specifically, at least one face image of a target GOP is compared with each of the face images, which serve as the latest images, of all the characters registered in the face image table. All the characters registered in the face image table are the characters A through H in the example in
In step S122, the face image processor 432 sets the combination having the highest score Sk to be a candidate pair.
In step S123, the face image processor 432 determines whether the score Sk of the candidate pair exceeds the threshold.
If it is determined in step S123 that the score Sk exceeds the threshold, the process proceeds to step S124 to determine whether the time interval of the candidate pair is smaller than or equal to a predetermined threshold. The time interval is the interval between the two GOPs from which the two face images of the candidate pair are generated. More specifically, if the target GOP number is i and if the GOP number having the face image included in the candidate pair is j (j is an integer smaller than or equal to i−1), i−j can be used as the time interval.
If the time interval of the candidate pair is found to be smaller than or equal to the threshold in step S124, the process proceeds to step S125.
In step S125, the face image processor 432 sets the candidate pair to be a matching pair.
Then, in step S126, the face image processor 432 determines that the character of the face image of the target GOP contained in the matching pair and the character of the other face image contained in the matching pair are the same person.
In step S127, the face image processor 432 indicates “face information” in the column of the target GOP of the character associated with the face image contained in the matching pair.
In step S128, the face image processor 432 excludes all combinations (the matching pair itself included) that contain at least one of the face images in the matching pair.
In step S129, the face image processor 432 determines whether there is any combination to be processed.
If it is determined in step S129 that there is a combination to be processed, the process returns to step S122. More specifically, in step S122, the combination having the highest score Sk is selected, and steps S123 through S130 are repeated.
If it is determined in step S123 that the score Sk is lower than or equal to the threshold, or if it is determined in step S124 that the time interval exceeds the threshold, the process proceeds to step S130.
In step S130, the face image processor 432 registers the face image contained in the candidate pair as the typical image of a new character. Then, the process proceeds to step S129.
If it is determined in step S129 that there is no combination to be processed after executing loop processing from step S122 to step S130, the process proceeds to step S131.
In step S131, the face image processor 432 indicates “no face information” in the remaining columns of the target GOP of the face image table.
Then, the face image table generating/updating processing is completed. That is, step S109 in
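The table-update portion of this processing (steps S127, S130, and S131) can be sketched as follows, modeling the face image table as a mapping from each character to its per-GOP entries; the layout and names are illustrative assumptions.

```python
def update_face_image_table(table, gop_number, matched_characters,
                            new_characters):
    """Sketch of steps S127, S130, and S131 for one target GOP."""
    for name in matched_characters:
        # step S127: indicate "face information" in the target GOP's
        # column for each character matched by a matching pair
        table[name][gop_number] = "face information"
    for name in new_characters:
        # step S130: register each unmatched face image as a new
        # character, its face image serving as the typical image
        table[name] = {gop_number: "face information"}
    for rows in table.values():
        # step S131: indicate "no face information" in the remaining
        # columns of the target GOP
        rows.setdefault(gop_number, "no face information")
    return table

face_table = {"A": {1: "face information"}, "B": {1: "face information"}}
update_face_image_table(face_table, 2,
                        matched_characters=["A"], new_characters=["C"])
```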
The processing for generating and recording resource data for moving picture content by the image information detector 427 of the image processor 413 shown in
By performing still-image resource data generating/recording processing indicated by the flowchart in
In step S151, the image information detector 427 sets a predetermined still image of at least one piece of still image content to be a target still image. At least one piece of still image content means all pieces of still image content recorded on the recording medium 51 in the example shown in
After step S151, steps S152 through S157 are executed. Steps S152 through S157 are basically similar to steps S103 through S108, respectively, in
After at least one face image is extracted from the target still image in step S157, the process proceeds to step S158.
In step S158, the image information detector 427 sets a predetermined face image to be a target face image.
Then, in step S159, the image information detector 427 determines whether the target face image is a face image of a new character.
If it is determined in step S159 that the target face image is a face image of a new character, the process proceeds to step S160. In step S160, the image information detector 427 produces a character folder for the new character.
If it is determined in step S159 that the target face image is not a face image of a new character, i.e., that the target face image is a face image of an existing character, the image information detector 427 proceeds to step S161 by skipping step S160.
In step S161, the image information detector 427 generates still-image resource data, such as a face thumbnail image, from the target face image. Then, in step S162, the image information detector 427 inputs the still-image resource data into the corresponding character folder.
In step S163, the image information detector 427 determines whether there is any face image that has not been set.
If it is determined in step S163 that there is a face image that has not been set among at least one face image extracted from the target still image in step S157, the process returns to step S158.
That is, for each of at least one face image extracted from the target still image in step S157, loop processing from step S158 to step S163 is repeated. Then, it is determined in step S163 that there is no face image that has not been set, and the process proceeds to step S164.
As stated above, if it is determined in step S156 that no face detection information has been generated, the process also proceeds to step S164.
In step S164, the image information detector 427 determines whether there is any piece of still image content that has not been set.
If there is a piece of still image content that has not been set, the process returns to step S151.
That is, for each of at least one piece of still image content, loop processing from step S151 to step S164 is repeated. Then, it is determined in step S164 that there is no piece of still image content that has not been set, and the process proceeds to step S165.
In step S165, the image information detector 427 records each character folder, together with the management information, on the recording medium 51 as still image resource data.
Then, the still-image resource data generating processing is completed.
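The folder-filing loop of steps S158 through S162 can be sketched as follows, with a hypothetical `classify` routine standing in for the new-character determination of step S159; all names are illustrative assumptions.

```python
def build_character_folders(face_images, classify):
    """For each extracted face image, `classify` returns a character
    name; a character folder is produced the first time a character
    appears (step S160), and the still-image resource data generated
    from the face image (here a thumbnail placeholder) is input into
    that character's folder (steps S161 and S162)."""
    folders = {}
    for image in face_images:
        name = classify(image)
        if name not in folders:            # new character: step S160
            folders[name] = []
        folders[name].append({"face_thumbnail": image})
    return folders

folders = build_character_folders(
    ["img1", "img2", "img3"],
    classify=lambda img: "A" if img != "img2" else "B",
)
```

Each resulting folder corresponds to one character and would be recorded, together with the management information, as still-image resource data in step S165.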
The above-described series of processing may be executed by hardware or software. If software is used for executing the series of processing, a corresponding software program is installed from a program recording medium into a computer built in dedicated hardware or a personal computer, such as a general-purpose computer, that can execute various functions by installing various programs thereinto.
In
An input/output interface 705 is connected to the CPU 701 with the bus 704 therebetween. The input/output interface 705 is also connected to an input unit 706 including a keyboard, a mouse, a microphone, etc., and an output unit 707 including a display, a speaker, etc. The CPU 701 executes various types of processing in response to an instruction input from the input unit 706. The CPU 701 then outputs a processing result to the output unit 707.
The storage unit 708 connected to the input/output interface 705 is formed of, for example, a hard disk, and stores programs and various data to be executed by the CPU 701. A communication unit 709 communicates with external devices via a network, such as the Internet or a local area network (LAN).
A program may be obtained via the communication unit 709 and be stored in the storage unit 708.
A drive 710 connected to the input/output interface 705 drives a removable medium 711, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, loaded in the drive 710, and obtains a program or data stored in the removable medium 711. The obtained program or data is transferred to and stored in the storage unit 708 if necessary.
A program recording medium storing a program that is installed into a computer and is executable by the computer includes, as shown in
In this specification, steps forming the program stored in the program recording medium include processing executed in time-series manner in accordance with the order indicated in the specification. However, they are not restricted to processing executed in time-series manner, and may include processing executed in parallel or individually.
In this specification, the system is an entire apparatus or circuit including a plurality of apparatuses or circuits.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Number | Date | Country | Kind |
---|---|---|---|
P2006-184686 | Jul 2006 | JP | national |
This application is a continuation of U.S. patent application Ser. No. 11/825,117, filed on Jul. 3, 2007, which claims priority from Japanese Patent Application No. JP 2006-184686 filed in the Japanese Patent Office on Jul. 4, 2006, the entire content of which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
7508961 | Chen et al. | Mar 2009 | B2 |
7668401 | Marugame | Feb 2010 | B2 |
20020171648 | Inoue | Nov 2002 | A1 |
20020175997 | Takata et al. | Nov 2002 | A1 |
20040109587 | Segawa | Jun 2004 | A1 |
20060025968 | Sano | Feb 2006 | A1 |
20060035259 | Yokouchi et al. | Feb 2006 | A1 |
20060093189 | Kato | May 2006 | A1 |
20060168298 | Aoki et al. | Jul 2006 | A1 |
20060192784 | Yamaji et al. | Aug 2006 | A1 |
20060280445 | Masaki et al. | Dec 2006 | A1 |
20070053660 | Abe | Mar 2007 | A1 |
20090309897 | Morita et al. | Dec 2009 | A1 |
Number | Date | Country |
---|---|---|
2001167110 | Jun 2001 | JP |
3315888 | Jun 2002 | JP |
2002342357 | Nov 2002 | JP |
2004173112 | Jun 2004 | JP |
2004-363775 | Dec 2004 | JP |
2005333381 | Dec 2005 | JP |
2006-133946 | May 2006 | JP |
2006178516 | Jul 2006 | JP |
2007280325 | Oct 2007 | JP |
2008017042 | Jan 2008 | JP |
2006025272 | Mar 2006 | WO |
Entry |
---|
Office Action from Japanese Application No. 2006-184686, dated Jul. 6, 2010. |
Office Action from Japanese Application No. 2008-318543, dated Oct. 12, 2010. |
Office Action from Japanese Application No. 2008-318543, dated Nov. 10, 2011. |
Number | Date | Country | |
---|---|---|---|
20150178551 A1 | Jun 2015 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11825117 | Jul 2007 | US |
Child | 14642113 | US |