Before describing embodiments of the present invention, examples of correspondence between the features of the present invention and embodiments described in the specification or shown in the drawings will be described below. This description is intended to assure that embodiments supporting the present invention are described in this specification or shown in the drawings. Thus, even if a certain embodiment is not described in this specification or shown in the drawings as corresponding to certain features of the present invention, that does not necessarily mean that the embodiment does not correspond to those features. Conversely, even if an embodiment is described or shown as corresponding to certain features, that does not necessarily mean that the embodiment does not correspond to other features.
An information processing apparatus (e.g., a recording and reproducing apparatus 1 shown in
An information processing method according to an embodiment of the present invention is an information processing method of an information processing apparatus (e.g., the recording and reproducing apparatus 1 shown in
A program according to an embodiment of the present invention is a program including the step of the information processing method described above, and is executed, for example, by a computer including a controller 11 shown in
Next, embodiments of the present invention will be described with reference to the drawings.
In this specification, “video signals” refers not only to signals corresponding to video content itself but also to signals, such as audio data, that are used (e.g., listened to) by a user together with video content. That is, the data that is actually recorded or reproduced includes audio data or the like as well as video content, and such data, including the audio data or the like, will be simply referred to as video content for simplicity of description.
Referring to
The recording and reproducing apparatus 1 includes a controller 11, a read-only memory (ROM) 12, a random access memory (RAM) 13, a communication controller 14, a codec chip 15, a storage unit 16, and a drive 17.
The controller 11 controls the operation of the recording and reproducing apparatus 1 as a whole. For example, the controller 11 controls the operations of the codec chip 15, the communication controller 14, and so forth, which will be described later. When exercising the control, the controller 11 can execute various types of processing according to programs stored in the ROM 12 or the storage unit 16 as needed. The RAM 13 stores programs executed by the controller 11, data, and so forth as needed.
The communication controller 14 controls communications with external devices. In the case of the example shown in
Furthermore, for example, when video signals have been supplied from the camcorder 2, the communication controller 14 can supply the video signals to the codec chip 15. Conversely, when video signals have been supplied from the codec chip 15, the communication controller 14 can supply the video signals to the camcorder 2.
Furthermore, although not shown, for example, the communication controller 14 can receive broadcast signals (e.g., terrestrial analog broadcast signals, broadcast-satellite analog broadcast signals, terrestrial digital broadcast signals, or broadcast-satellite digital broadcast signals), and send corresponding video signals of television programs to the codec chip 15.
Furthermore, the communication controller 14 is capable of connecting to a network, such as the Internet, and the communication controller 14 can receive, for example, certain data transmitted by multicasting via a certain network and supply the data to the codec chip 15.
The codec chip 15 includes an encoder/decoder 21 and a recording and reproduction controller 22.
In a recording operation, the encoder/decoder 21 encodes video signals supplied from the communication controller 14, for example, according to an MPEG (Moving Picture Experts Group) compression algorithm, and supplies the resulting encoded data (hereinafter referred to as video data) to the recording and reproduction controller 22. Then, the recording and reproduction controller 22 stores the video data in the storage unit 16 or records the video data on the removable medium 31 via the drive 17. That is, video content is recorded on the removable medium 31 or stored in the storage unit 16 in the form of video data.
In this embodiment, for example, when video content is dubbed from the digital video tape 32 to the removable medium 31, the controller 11 automatically generates, as a playlist of the video content, a playlist in which, in addition to original titles, titles can be managed on the basis of individual dates of recording on the digital video tape 32, i.e., on the basis of individual dates of imaging by the camcorder 2 when video content captured by the camcorder 2 is recorded on the digital video tape 32 (hereinafter referred to as a date playlist), and records the date playlist on the removable medium 31 via the drive 17. However, processing for creating the date playlist need not necessarily be executed by the controller 11, and may be executed, for example, by the recording and reproduction controller 22. Furthermore, although the date playlist is saved on the removable medium 31 in this embodiment, without limitation to this embodiment, the date playlist may be saved within the recording and reproducing apparatus 1, for example, in the storage unit 16. The date playlist will be described later in detail with reference to
In a reproducing operation, the recording and reproduction controller 22 reads video data from the storage unit 16 or reads video data from the removable medium 31 via the drive 17, and supplies the video data to the encoder/decoder 21. Then, the encoder/decoder 21 decodes the video data according to a decoding algorithm corresponding to the compression algorithm described earlier, and supplies the resulting video signals to the communication controller 14.
At this time, if the removable medium 31 has the date playlist recorded thereon, the recording and reproduction controller 22 can read the corresponding video data from the removable medium 31 via the drive 17 according to the date playlist, and supply the video data to the encoder/decoder 21. The date playlist will be described later in detail with reference to
The storage unit 16 is formed of, for example, a hard disk drive (HDD), and stores various types of information, such as video data supplied from the codec chip 15. Furthermore, the storage unit 16 reads video data or the like stored therein, and supplies the video data to the codec chip 15.
The drive 17 records video data or the like supplied from the codec chip 15 on the removable medium 31. Furthermore, the drive 17 reads video data or the like recorded on the removable medium 31 and supplies the video data or the like to the codec chip 15.
The removable medium 31 may be, for example, a magnetic disc (e.g., a flexible disc), an optical disc (e.g., a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), or a Blu-ray disc (BD)), a magneto-optical disc (e.g., a mini disc (MD)), a magnetic tape, or a semiconductor memory.
It is assumed herein that the removable medium 31 in this embodiment is a DVD or a BD. The data structure of video data or playlists differs between DVD and BD. This difference will be described later with reference to
Next, an overview of the date playlist will be described with reference to
In an example shown in
“gap” above the bar 41 indicates a gap point of “REC TIME” (recording date and time information) on the digital video tape 32. “Chapter MarkK” (where K is an integer) and a triangle mark placed in the proximity thereof indicate a location at which a chapter mark is written (chapter mark point), i.e., a point corresponding to the beginning of a chapter. That is, in this embodiment, a chapter mark is written at each gap point.
Furthermore, in a rectangle shown below “Chapter MarkK”, the content of “REC TIME” of “gap” associated with the “Chapter MarkK”, i.e., the date and time of recording of the gap point (year/month/day time AM or PM) is shown. More specifically, of two events preceding and succeeding the gap point, the date and time of recording of the succeeding event is shown. For example, an event refers to video content captured by the camcorder 2 during a single imaging operation, i.e., between an imaging start operation and an imaging end operation, and recorded on the digital video tape 32 in the form of video data. Thus, the recording date and time can be considered as an imaging date and time representing an imaging time period of the event. That is, “REC TIME” can be considered as information representing an imaging date and time from the viewpoint of imaging. For example, the video content in the period between the “gap” associated with “Chapter Mark1” and the “gap” associated with “Chapter Mark2” constitutes an event, and the imaging start time of the event is the “REC TIME” of the “gap” associated with “Chapter Mark1”, i.e., “200x/x/3 10:00 AM”.
In this embodiment, as shown below the bars 41 and 42, a date playlist 43 including playlist titles 1 to 3 is generated.
In this specification, a set of one or more titles created according to rules (restrictions) described later will be referred to as a playlist title, and a set of one or more playlist titles will be referred to as a date playlist. That is, although a single title is sometimes referred to as a playlist title, in this specification, a playlist title is clearly distinguished from a date playlist, which refers to a set of one or more playlist titles.
Now, rules for creating each playlist title in a date playlist will be described.
Basically, a playlist title is information for reproducing one or more scenes having imaging time periods with the same date, arranged in ascending order of time. A scene herein refers to video content between “Chapter MarkK” and “Chapter MarkK+1”, i.e., video content corresponding to the event identified by the “gap” associated with “Chapter MarkK” and the “gap” associated with “Chapter MarkK+1”.
In this case, when a plurality of scenes having imaging time periods with the same date exist, even when the locations of streams corresponding to the plurality of scenes on the removable medium 31 are separate, the plurality of scenes are sorted in order of time and combined into a single playlist title. This constitutes the first rule.
In the case of the example shown in
When a playlist is created according to the first rule and video content on the removable medium 31 is reproduced according to the playlist, scenes with the same date, i.e., events with the same date, are sequentially reproduced in order of their imaging time periods. As described above, the first rule is the most fundamental rule that serves as a basis for creating each playlist title in a date playlist.
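The first rule can be illustrated with a short sketch. The scene representation (a start date and time paired with a stream location) and the function name are hypothetical, chosen only for illustration; the sketch shows how scenes having the same date are combined into one playlist title in ascending order of time even when their stream locations on the medium are separate.

```python
from collections import defaultdict
from datetime import datetime

def build_playlist_titles(scenes):
    """Group scenes by imaging date and sort each group in ascending
    order of time (the first rule).  Each scene is a hypothetical
    (start_datetime, stream_location) pair."""
    by_date = defaultdict(list)
    for start, location in scenes:
        by_date[start.date()].append((start, location))
    # One playlist title per date; scenes are sorted in order of time
    # even when their stream locations on the medium are separate.
    return {
        date: [loc for _, loc in sorted(group)]
        for date, group in by_date.items()
    }

scenes = [
    (datetime(2006, 7, 1, 10, 0), "stream_3"),
    (datetime(2006, 7, 2, 9, 0),  "stream_2"),
    (datetime(2006, 7, 1, 8, 0),  "stream_1"),
]
titles = build_playlist_titles(scenes)
```

With these sample scenes, the title for the first date reproduces stream_1 before stream_3, regardless of the order in which the corresponding streams lie on the medium.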
Furthermore, in this embodiment, for example, the following second to eighth rules are defined. However, rules other than the first rule are not limited to the second to eighth rules in this embodiment, and rules other than the first rule may be defined as desired by a designer or the like.
The maximum number of scenes that can be managed is defined as Smax+1, where Smax is a predetermined integer (e.g., 300). “+1” indicates that the (Smax+1)-th and subsequent scenes (the 301st and subsequent scenes when Smax is 300) are internally managed collectively as a single scene.
A scene shorter than a predetermined time, e.g., a scene shorter than 2 seconds, is not included in a playlist title.
The maximum number of scenes having the same date is defined as SSmax, which is a predetermined integer, e.g., 99. That is, when SSmax+1 (100 when SSmax=99) or more scenes having the same date exist, the scenes are sorted in order of time, and the first to SSmax-th scenes among the sorted scenes are combined to form a playlist title of the date, and the (SSmax+1)-th and subsequent scenes are not included in the playlist title.
The maximum number of playlist titles that can be created at once is restricted by the number of titles that can be generated on the medium used, e.g., 30. That is, when 30 playlist titles have been generated on the medium, further playlist titles are not generated. The medium in this embodiment refers to the removable medium 31.
Even when an original title created by dubbing is composed only of events (scenes) of one day, a date playlist is created. The original title refers to a title that is created in advance at the time of assignment of the “Chapter MarkK”. The relationship between the original title and playlist titles in a date playlist will be described later with reference to
A segment in which “REC TIME” is not obtained is not included in a date playlist.
When the time of gap points goes backward, separate playlist titles are not created.
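A few of these rules can be pictured as a simple filter over the scenes of one date. The tuple representation and the function name are hypothetical, and the sketch covers only the minimum-length rule, the “REC TIME” rule, and the SSmax rule:

```python
def apply_scene_rules(scenes, min_seconds=2, ssmax=99):
    """Apply some of the second to eighth rules to the scenes of one
    date, already sorted in order of time.  Each scene is a
    hypothetical (duration_seconds, has_rec_time) pair."""
    # A scene shorter than a predetermined time (e.g., 2 seconds)
    # is not included in a playlist title.
    kept = [s for s in scenes if s[0] >= min_seconds]
    # A segment in which "REC TIME" is not obtained is not included.
    kept = [s for s in kept if s[1]]
    # At most SSmax scenes of the same date are combined; the
    # (SSmax+1)-th and subsequent scenes are not included.
    return kept[:ssmax]

scenes = [(10, True), (1, True), (30, False), (25, True)]
kept = apply_scene_rules(scenes)
```

Here the 1-second scene and the scene without “REC TIME” are excluded, leaving two scenes for the playlist title of that date.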
Now, a series of processing steps (hereinafter referred to as a playlist generating process for dubbing) executed by the recording and reproducing apparatus 1 shown in
In step S1, the controller 11 of the recording and reproducing apparatus 1 shown in
Let it be supposed that video signals including a video stream and data attached to the video stream are supplied from the camcorder 2 to the recording and reproducing apparatus 1.
The data attached to the video stream includes, for example, an “aspect ratio” indicating an aspect ratio of 4:3, 16:9, etc., an “audio emphasis” representing setting information as to whether emphasis is to be applied, an “audio mode” indicating an audio mode, such as stereo or bilingual, “copy control information”, a “sampling frequency”, a “tape speed”, and a “REC TIME” indicating a date and time (year, month, day, hour, minute, and second) of recording on the digital video tape 32.
As described earlier, a date playlist is generated using “REC TIME” among these pieces of data.
In step S2, the controller 11 exercises control so that the communication controller 14, using an AVC command, requests the camcorder 2 to rewind the digital video tape 32 (hereinafter simply referred to as the tape 32). The AVC command refers to a command in a set of commands that allows operating the camcorder 2 or obtaining status information of the camcorder 2 via an i.LINK cable.
In response to the AVC command, the camcorder 2 rewinds the tape 32.
In step S3, the controller 11 checks whether the tape 32 has been rewound to the beginning.
When it is determined that the tape 32 has not been rewound to the beginning, step S3 results in NO, so that the checking in step S3 is executed again. That is, the checking in step S3 is repeated until the tape 32 is rewound to the beginning. When the tape 32 has been rewound to the beginning, step S3 results in YES, and the process proceeds to step S4.
In step S4, the controller 11 starts recording.
In step S5, the controller 11 exercises control so that the communication controller 14, using an AVC command, requests the camcorder 2 to reproduce data on the tape 32.
In response to the AVC command, the camcorder 2 reproduces data on the tape 32. As a result, video content recorded on the tape 32 is supplied from the camcorder 2 to the recording and reproducing apparatus 1 in the form of a video stream. The video stream has attached thereto the various types of data described earlier, including “REC TIME”.
The controller 11 controls the communication controller 14 and the codec chip 15 so that the video stream supplied from the camcorder 2 is sequentially recorded on the removable medium 31 in the form of video data.
Furthermore, during the recording, in step S6 shown in
In step S7, the controller 11 checks whether “REC TIME” has become discontinuous.
When “REC TIME” obtained in step S6 in a current iteration represents a time continuous with “REC TIME” obtained in step S6 in a previous iteration, step S7 results in NO, and the process proceeds to step S8.
Furthermore, for example, when “REC TIME” is absent as in a case described later with reference to
In step S8, the controller 11 checks whether the status of no recording has continued for 5 minutes or longer, whether the camcorder 2 has stopped, and whether the user has stopped dubbing.
When the status of no recording has continued for 5 minutes or longer, when the camcorder 2 has stopped, or when the user has stopped dubbing, step S8 results in YES, and the process proceeds to step S13 in
On the other hand, when the status of no recording has not continued for 5 minutes or longer, the camcorder 2 has not stopped, and the user has not stopped dubbing, step S8 results in NO, and the process returns to step S6 and the subsequent steps are repeated.
That is, as long as the status of no recording has not continued for 5 minutes or longer, the camcorder 2 has not stopped, the user has not stopped dubbing, and “REC TIME” obtained in step S6 in the current iteration indicates a time continuous with “REC TIME” obtained in step S6 in the previous iteration, the process repeats the loop of steps S6, S7 (NO), and S8 (NO).
When the time represented by “REC TIME” in step S6 in the current iteration has become discontinuous with the time represented by “REC TIME” obtained in step S6 in the previous iteration, step S7 results in YES, and the process proceeds to step S9.
Furthermore, for example, when the process has proceeded to step S7 as a result of step S6 in an initial iteration, when the obtainment of “REC TIME” has succeeded in step S6 after failures in previous iterations, or conversely when the obtainment of “REC TIME” has failed in step S6 after successes in previous iterations, step S7 results in YES, and the process proceeds to step S9.
In step S9, the controller 11 converts presentation time stamps (PTSs) on the tape 32 into PTSs on the original title. That is, step S9 is executed since it is not always possible to reproduce the portion of the video stream corresponding to the “REC TIME” just obtained in step S6.
In step S10, the controller 11 saves the PTS associated with a discontinuity on the original title as a gap point, and also saves preceding and succeeding “REC TIME”.
In step S11, the controller 11 checks whether the number of chapters has already reached 99.
When the number of chapters has already reached 99, step S11 results in YES, and the process returns to step S6 and the subsequent steps are repeated.
On the other hand, when the number of chapters is less than or equal to 98, step S11 results in NO, and the process proceeds to step S12. In step S12, the controller 11 places a chapter mark in the portion of the gap point. The process then returns to step S6, and the subsequent steps are repeated.
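The loop of steps S6 to S12 can be sketched as follows, under the hypothetical assumptions that the “REC TIME” samples are integer seconds (None where “REC TIME” is absent) and that a difference of more than one unit counts as discontinuous:

```python
def find_gap_points(rec_times, max_chapters=99):
    """Walk successive "REC TIME" samples and record a gap point at
    every discontinuity (steps S7 and S10), placing a chapter mark at
    each gap point until 99 chapters exist (steps S11 and S12).  The
    representation and thresholds are assumptions for illustration."""
    gap_points, chapter_marks = [], []
    prev = None
    for index, current in enumerate(rec_times):
        discontinuous = (
            index == 0                                 # initial iteration: YES
            or (prev is None) != (current is None)     # "REC TIME" appears/disappears
            or (prev is not None and current is not None
                and abs(current - prev) > 1)           # time jump
        )
        if discontinuous:
            gap_points.append(index)                   # step S10: save gap point
            if len(chapter_marks) < max_chapters:      # step S11: 99-chapter limit
                chapter_marks.append(index)            # step S12: place chapter mark
        prev = current
    return gap_points, chapter_marks

gaps, marks = find_gap_points([0, 1, 2, 10, 11, None, None, 20])
```

Note that a point lying in the middle of a run with no “REC TIME” (index 6 here) is not detected as a gap point, since no continuity check is possible there; gap points arise only where “REC TIME” disappears or reappears, or where its value jumps.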
When the status of no recording has continued for 5 minutes or longer, when the camcorder 2 has stopped, or when the user has stopped dubbing, step S8 results in YES, and the process proceeds to step S13 shown in
In step S13, the controller 11 stops recording, and sets an original title name. The method of setting the original title name is not particularly limited. For example, in this embodiment, a newest time and an oldest time are obtained from values of “REC TIME” and the original title name is set using the newest time and the oldest time.
In step S14, the controller 11 controls the communication controller 14 to check whether the camcorder 2 has stopped.
When it is determined in step S14 that the camcorder 2 has not stopped, in step S15, the controller 11 controls the communication controller 14 to request, using an AVC command, that the camcorder 2 be stopped. Then, the process proceeds to step S16.
On the other hand, when it is determined in step S14 that the camcorder 2 has stopped, the process skips step S15 and directly proceeds to step S16.
That is, step S16 is executed when the camcorder 2 has stopped.
In step S16, the controller 11 creates scenes using information such as “REC TIME” saved in step S10 shown in
Then, the process proceeds to step S17 shown in
Before describing step S17 and the subsequent steps shown in
For example, let it be assumed that video content indicated as “tape content” in
The reason that a point with “PTS” indicating “DDD” is not a gap point is as follows. Since “REC TIME” does not exist in the period with “PTS” indicating “CCC” to “EEE”, it is not possible to execute the checking in step S7 shown in
Furthermore, regarding periods preceding and succeeding “PTS” indicating “EEE”, “REC TIME” does not exist in the period preceding “EEE”, whereas “REC TIME” exists immediately after “EEE”. In this case, step S7 results in YES, so that steps S9 to S12 are executed. Accordingly, “gap point 5” is detected.
When the video stream corresponding to the “tape content” shown in
“PTS” in
When the information shown in
That is, first, using “PTS” and preceding and succeeding “REC TIME” included in the information shown in
For example, scene data of a scene M (M is an integer, and is one of the values 1 to 4 in the example shown in
When M is 1, the first “PTS” (“0” in the example shown in
In this manner, individual scene data of “scene 1” to “scene 4” shown in the table in the lower part of
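As one way to picture this part of step S16, the scene data can be derived from consecutive gap points; the (pts, preceding “REC TIME”, succeeding “REC TIME”) tuple is a hypothetical representation of the information saved in step S10:

```python
def build_scene_data(gap_points):
    """Derive per-scene data from saved gap points (a sketch of part
    of step S16).  Each gap point is a hypothetical tuple of
    (pts, preceding_rec_time, succeeding_rec_time); a scene spans
    from one gap point to the next."""
    scenes = []
    for n in range(len(gap_points) - 1):
        pts, _, first_rec = gap_points[n]
        next_pts, last_rec, _ = gap_points[n + 1]
        scenes.append({
            "first_pts": pts,
            "last_pts": next_pts,
            # The succeeding "REC TIME" of this gap point and the
            # preceding "REC TIME" of the next one bound the scene.
            "first_recording_datetime": first_rec,
            "last_recording_datetime": last_rec,
        })
    return scenes

gaps = [(0, None, "2006/7/1 8:00"),
        (100, "2006/7/1 8:30", "2006/7/2 9:00"),
        (250, "2006/7/2 9:40", None)]
scene_data = build_scene_data(gaps)
```

With three gap points, two scenes are produced, each bounded by the PTSs of its surrounding gap points.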
Next, as shown in an upper part of
In the data 61 and the data 62, a “pointer to scene M” refers to information pointing to scene data of the scene M. That is, since inclusion of scene data in the data 61 or the data 62 results in doubly holding the same scene data in a memory such as the RAM 13 shown in
In generating data classified on the basis of individual dates, scenes having invalid values as the “first recording date and time” or the “last recording date and time”, such as “scene 4” shown in
Furthermore, from the data classified on the basis of individual dates, the controller 11 generates data in which individual scenes are sorted in order of time. This data serves as date-title creating data for each date.
In the case of the example shown in
Furthermore, from the data 62 for “scene 2” having the date “2006/7/2”, the date-title creating data 72 for “2006/7/2” is generated. Since “scene 2” is the only scene having the date “2006/7/2”, the date-title creating data 72 is substantially the same as the data 62.
The date-title creating data of each date is generated as a result of step S16 shown in
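The classification and sorting described above can be sketched as follows. Holding list indices as the “pointer to scene M” avoids doubly holding the same scene data in memory, and scenes with invalid recording dates and times are excluded, as with “scene 4”; the representation is hypothetical.

```python
from datetime import datetime

def build_date_title_creating_data(scene_data):
    """Classify pointers (here, list indices) to scene data on the
    basis of individual dates, excluding scenes with invalid (None)
    recording dates and times, and sort each date's pointers in
    order of time (a sketch of part of step S16)."""
    by_date = {}
    for i, scene in enumerate(scene_data):
        start = scene["first_recording_datetime"]
        if start is None or scene["last_recording_datetime"] is None:
            continue  # invalid values: excluded, as with "scene 4"
        by_date.setdefault(start.date(), []).append(i)
    return {
        d: sorted(ptrs,
                  key=lambda i: scene_data[i]["first_recording_datetime"])
        for d, ptrs in by_date.items()
    }

scene_data = [
    {"first_recording_datetime": datetime(2006, 7, 1, 15, 0),
     "last_recording_datetime": datetime(2006, 7, 1, 15, 20)},  # scene 1
    {"first_recording_datetime": datetime(2006, 7, 2, 9, 0),
     "last_recording_datetime": datetime(2006, 7, 2, 9, 30)},   # scene 2
    {"first_recording_datetime": datetime(2006, 7, 1, 10, 0),
     "last_recording_datetime": datetime(2006, 7, 1, 10, 45)},  # scene 3
    {"first_recording_datetime": None,
     "last_recording_datetime": None},                          # scene 4 (invalid)
]
date_title_data = build_date_title_creating_data(scene_data)
```

For this sample, the data for “2006/7/1” points to scene 3 and then scene 1, in order of time, matching the example described above.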
In step S17, the controller 11 calculates a restriction of the medium (i.e., the number of titles that can be newly created on the medium). For example, in this embodiment, the controller 11 calculates a restriction of the removable medium 31 shown in
In step S18, the controller 11 reads date-title creating data of a specific date. In the case of the example shown in
In step S19, the controller 11 creates a playlist title of the specific date using the first scene data in the date-title creating data of the specific date, more specifically, scene data indicated by the first pointer in the date-title creating data of the specific date.
For example, when the date-title creating data 71 of the date “2006/7/1” is read in step S18, in step S19, a playlist title of the date “2006/7/1” is created using the scene data of “Scene 3”.
On the other hand, when the date-title creating data 72 of the date “2006/7/2” is read in step S18, in step S19, a playlist title of the date “2006/7/2” is created using the scene data of “Scene 2”.
In step S20, the controller 11 checks whether the scene data is the last scene data in the date-title creating data of the specific date.
When it is determined in step S20 that the scene data is not the last scene data in the date-title creating data of the specific date, the process proceeds to step S21. In step S21, the controller 11 merges the next scene data in the date-title creating data of the specific date, more specifically, scene data indicated by the next pointer in the date-title creating data of the specific date, with the playlist title of the specific date.
Then, the process returns to step S20, and the subsequent steps are repeated. That is, pieces of scene data in the date-title creating data of the specific date, more specifically, pieces of scene data indicated individually by pointers in the date-title creating data of the specific date are sequentially merged with the playlist title of the specific date in order of time. When the last scene data has been merged, step S20 results in YES, and the process proceeds to step S22.
For example, when the date-title creating data 71 of the date “2006/7/1” is read in step S18 and a playlist title of the date “2006/7/1” is created using the scene data of “Scene 3”, the scene data of “Scene 1” remains. Thus, step S20 results in NO, and in step S21, the scene data of “Scene 1” is merged with the playlist title of the date “2006/7/1”. Since the scene data of “Scene 1” is the last scene data, step S20 in the next iteration results in YES, and the process proceeds to step S22.
On the other hand, when the date-title creating data 72 of the date “2006/7/2” is read in step S18 and a playlist title of the date “2006/7/2” is created using the scene data of “Scene 2” in step S19, no other scene data exists, i.e., the scene data of “Scene 2” is the last scene data. Thus, step S20 immediately results in YES, and the process proceeds to step S22 without executing step S21 at all.
In step S22, the controller 11 sets a name of the playlist title of the specific date.
The method of setting the name is not particularly limited. For example, in this embodiment, a name 101 shown in
More specifically, a character string 102 of the first two characters represents a type of a video stream supplied from the camcorder 2. In the example shown in
A character string 103 indicates an earliest recording date and time (year/month/day time AM or PM) of video content that is reproduced according to the playlist title of the specific date. A character string 104 indicates a latest recording date and time (time AM or PM) of video content that is reproduced according to the playlist title of the specific date. That is, according to the playlist title having the name 101, video content from the recording date and time indicated by the character string 103 to the recording date and time indicated by the character string 104 is reproduced. In the case of the example shown in
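The naming scheme can be sketched as follows; the exact separators, the stream-type prefix, and the AM/PM rendering are assumptions, since only the meaning of the character strings 102 to 104 is described above:

```python
from datetime import datetime

def make_playlist_title_name(stream_type, earliest, latest):
    """Build a playlist-title name from a two-character stream-type
    prefix (character string 102), the earliest recording date and
    time (character string 103), and the latest recording time only
    (character string 104).  Formatting details are hypothetical."""
    s103 = earliest.strftime("%Y/%m/%d %I:%M %p")   # year/month/day time AM/PM
    s104 = latest.strftime("%I:%M %p")              # time AM/PM only
    return f"{stream_type} {s103} - {s104}"

# "DV" is a hypothetical two-character stream-type prefix.
name = make_playlist_title_name("DV",
                                datetime(2006, 7, 1, 8, 0),
                                datetime(2006, 7, 1, 11, 30))
```

The resulting name identifies the time span of the video content reproduced according to the playlist title, from the earliest to the latest recording date and time.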
Referring back to
In step S23, the controller 11 controls the codec chip 15 so that the playlist title of the specific date is written to the removable medium 31 via the drive 17.
In step S24, the controller 11 checks whether date-title creating data for which a playlist title has not yet been created exists and whether creating a further playlist title would not violate the media restriction calculated in step S17.
When such date-title creating data exists and the media restriction calculated in step S17 is not violated, the process returns to step S18, and the subsequent steps are repeated.
As described above, through repeated execution of the loop formed of steps S18 to S24, playlist titles of individual dates are created. That is, a date playlist including the playlist titles of the individual dates is generated. When the date playlist has been generated, step S24 results in NO, and the process proceeds to step S25.
In step S25, the controller 11 controls the codec chip 15 so that flushing of the removable medium 31 (fixing of the filesystem) is executed.
When the flushing is finished, the entire playlist generating process for dubbing is finished.
When the playlist generating process for dubbing has been executed as described above, video data dubbed from the digital video tape 32, original titles, a date playlist, and so forth are recorded on the removable medium 31.
Since the directory structure differs between a case where the removable medium 31 is a DVD and a case where the removable medium 31 is a BD, the structure of arrangement of various types of data also differs between these cases.
Thus, overviews of the structure of data arrangement in a BD and the structure of data arrangement in a DVD will be described with reference to
In the example shown in
In the example shown in
In “PLAYLIST”, original titles are stored in files having extensions “rpls”, such as “01001.rpls” and “02002.rpls”. On the other hand, playlist titles of a specific date in a date playlist are stored in a file having an extension “vpls”, such as “9999.vpls”. That is, a date playlist is a set of files having extensions “vpls”.
In “STREAM”, files having extensions “m2ts”, such as “01000.m2ts”, “02000.m2ts”, and “03000.m2ts”, store actual video data (MPEG-TS). That is, when the playlist generating process for dubbing, described earlier, is executed once, video data dubbed from the digital video tape 32 is recorded under “STREAM” in the form of a single file having an extension “m2ts”.
Furthermore, additional information of each piece of video data is recorded under “CLIPINF” in the form of a file having a name corresponding to the file name of the video data and having an extension “clip”. More specifically, in the case of the example shown in
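The correspondence between a stream file under “STREAM” and its additional-information file under “CLIPINF” can be pictured as a simple name mapping; the helper function is hypothetical:

```python
def clip_info_name(stream_name):
    """Map a stream file name under "STREAM" (extension "m2ts") to
    the name of its additional-information file under "CLIPINF"
    (extension "clip"), following the correspondence described
    above.  A sketch, not part of the embodiment itself."""
    stem, ext = stream_name.rsplit(".", 1)
    if ext != "m2ts":
        raise ValueError("not a stream file: " + stream_name)
    return stem + ".clip"

info = clip_info_name("01000.m2ts")
```

For instance, the additional information of “01000.m2ts” is found in “01000.clip”.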
In
Furthermore, a “Clip AV stream” in “STREAM” represents content of a file having an extension “m2ts”, i.e., actual video data corresponding to a file (MPEG-TS). A piece of “Clip information” in “CLIPINF” on “Clip AV stream” represents additional information of associated video data, i.e., content of a file having a name corresponding to the file name of the video data and having an extension “clip”.
As shown in
Furthermore, in the example shown in
As opposed to the structure of data arrangement in a BD, described above,
In the example shown in
“DVD_RTAV” includes five types of files, namely, “VR_MANGR.IFO”, “VR_MOVIE.VRO”, “VR_STILL.VRO”, “VR_AUDIO.VRO”, and “VR_MANGR.BUP”.
“VR_MANGR.IFO” includes title management data, i.e., management data of original titles, and management data of playlist titles of each date in a date playlist. “VR_MANGR.BUP” is a backup file for “VR_MANGR.IFO”.
“VR_MOVIE.VRO” stores actual video data (moving-picture and audio data) (MPEG-PS). “VR_STILL.VRO” stores actual still-picture data. “VR_AUDIO.VRO” stores actual attached audio data.
In “VR_MANGR.IFO”, “ORIGINAL PGCI” represents management information of an original title. On the other hand, “User Define PGCI” represents management information of a user-defined title. Thus, “User Define PGCI” can be used as management information of each playlist title in a date playlist. “M_VOBI” stores additional information of associated video data.
The series of processes described above can be executed either by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed from a program recording medium onto a computer embedded in special hardware, or onto a computer including the codec chip 15, the controller 11, or the like of the recording and reproducing apparatus 1 shown in
As shown in
In this specification, the steps defining the program stored on the program recording medium need not necessarily be executed sequentially in the order described herein, and may include steps that are executed in parallel or individually.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Number | Date | Country | Kind |
---|---|---|---
P2006-245390 | Sep 2006 | JP | national |