This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2006-353421 filed on Dec. 27, 2006; the entire contents of which are incorporated herein by this reference.
1. Field of the Invention
The present invention relates to a video contents display apparatus, a video contents display method, and a program therefor.
2. Description of the Related Art
Recently, equipment capable of recording video contents such as TV programs for a long time has become widespread. Examples of such recording equipment include a hard disk recorder (hereinafter abbreviated to HDD recorder), a home server, and a personal computer (hereinafter abbreviated to PC), each containing a hard disk device. This trend stems from the larger storage capacity and lower cost of information recording devices such as hard disk devices.
Using the functions of a common HDD recorder, a user selects a desired program to view by narrowing down a listing display of program names etc. of plural recorded programs. At this time, the list of plural selectable programs is displayed in a so-called thumbnail format, and the user selects a program while checking the thumbnail images.
In addition, an apparatus capable of recording plural programs being broadcast simultaneously using built-in tuners has recently been put to practical use. For example, refer to the URL http://www.vaio.sony.co.jp/Products?/VGX.X90P/. The display of plural programs on such a device is similar to a weekly program table in a newspaper.
However, in the above-mentioned conventional devices, although plural video contents are recorded on an HDD recorder, a home server, etc., related scenes cannot be retrieved from among the recorded video contents.
In retrieving video contents, a list of titles of plural video contents can be displayed along a time axis of the date and time of recording. However, retrieval cannot be performed with various time relations taken into account. For example, it is possible to retrieve “contents recorded in the year of XX” from a database storing plural video contents by setting the “year of XX” in the retrieval conditions. However, it has not been possible to retrieve contents with plural time relations taken into account, such as retrieving video contents by setting, as a period, the time in which specific video contents were viewed.
The video contents display apparatus according to an aspect of the present invention includes: a static image generation unit for generating a predetermined number of static images from information about recorded video contents by considering a lapse of time; an image conversion unit for converting, from among the predetermined number of generated static images, a static image other than at least one specified static image into an image reduced in a predetermined format; and a display unit for displaying a sequence of images by arranging the at least one static image and the other static images along a predetermined path on a screen by considering the lapse of time.
The video contents display method according to an aspect of the present invention is a method of displaying video contents, and includes: generating a predetermined number of static images from information about recorded video contents by considering a lapse of time; converting, from among the predetermined number of generated static images, a static image other than at least one specified static image into an image reduced in a predetermined format; and displaying the at least one static image and the other reduced static images as a sequence of images arranged along a predetermined path on a screen by considering the lapse of time.
Embodiments of the present invention are described below with reference to the attached drawings.
First, the configuration of the video contents display system according to an embodiment of the present invention is described below with reference to
A video contents display apparatus 1 as a video contents display system includes a contents storage unit 10, a display generation unit 11, an input device 12, and an output device 13.
The contents storage unit 10 is a processing unit for digitizing video contents, and recording and accumulating the resultant contents in a storage device 10A such as an internal hard disk or an external large-capacity memory (that can be connected over a network). The plural video contents accumulated or recorded in the contents storage unit 10 can be various video contents such as contents obtained by recording a broadcast program, distributed toll or free contents, contents captured by each user on a home video device, contents shared and accumulated with friends or at home, contents obtained by recording contents distributed through a packet medium, contents generated or edited by equipment at home, etc.
The display generation unit 11 is a processing unit having a central processing unit (CPU) described later. Using the information input from the input device 12 and internally held information about three-dimensional display, it subjects the contents accumulated in the contents storage unit 10 to a conversion for projecting a three-dimensional image onto a two-dimensional plane, a conversion for displaying plural static images in an image sequence format, various modifications, application of effects, a superposing process, etc., so as to generate a screen of a three-dimensional graphical user interface (hereinafter referred to as a GUI for short).
The input device 12 is, for example, a keyboard and a mouse of a computer, a remote controller of a television (TV), a device having the function of a remote controller, etc., and is a device for input for specifying a display method, and for input for a GUI command.
The output device 13 is, for example, a display device or a TV screen display device, and displays the screen of a two-dimensional or three-dimensional GUI. In addition to a display, the output device 13 includes an audio output unit such as a speaker for outputting the voice included in video contents.
The descriptions of the functions and processing methods for recording, playing back, editing, and transferring video contents in the video contents display apparatus 1 are omitted here. The video contents display apparatus 1 shown in
A user can record information about video contents (hereinafter referred to simply as contents) to the storage device 10A through the contents storage unit 10 by transmitting a predetermined command to the display generation unit 11 by operating the input device 12. Then, the user operates the input device 12 and transmits the predetermined command to the video contents display apparatus 1, thereby retrieving and playing back the contents to be viewed from among the plural contents recorded on the storage device 10A through the contents storage unit 10, displaying the contents on the screen of the output device 13, and successfully viewing the contents.
Various processes performed in the video contents display apparatus 1 are integrally executed by the display generation unit 11. The display generation unit 11 includes a CPU, a ROM, a RAM, etc., not shown in the attached drawings. The display generation unit 11 realizes the functions corresponding to various processes such as recording and playing back by the CPU executing a software program stored in advance in the ROM etc.
In the present embodiment, the CPU has, for example, a multi-core multiprocessor architecture capable of performing parallel processes and executing a real-time OS (operating system). Therefore, the display generation unit 11 can process a large amount of data, especially viewer data in parallel at a high speed.
Practically, the display generation unit 11 is configured by a processor capable of performing parallel processes, formed by integrating on one chip a total of nine processors: a 64-bit CPU core and eight independent signal processors, SPEs (synergistic processing elements), each handling 128-bit registers. The SPE is appropriate for processing multimedia data and streaming data. Each SPE has single-port SRAM for pipeline operation as 256-Kbyte local memory, and the SPEs perform different signal processes in parallel.
The core CPU 73 includes secondary cache 73a, primary cache 73b, and an arithmetic operation unit 73c. The interface unit 74 is a DRAM interface of the two-channel XDR as a memory interface. The interface unit 75 is a Flex IO interface as a system interface.
Using a processor of a multi-core multiprocessor architecture capable of performing parallel processes, the parallel processes of generating, retrieving, and displaying thumbnail images described later can be performed smoothly. The CPU may be not only a one-chip processor but also a combination of plural processors.
The remote controller 12A includes a power supply button 91, a channel button 92, a volume button 93, a channel direct switch button 94, a cross key 95 for moving a cursor up, down, right, and left, a home button 96, a program table button 97, a submenu button 97, a return button 98, and a group 99 of various recording and playback function keys.
The cross key 95 has double ring-shaped keys (hereinafter referred to as ring keys) 95a and 95b. Inside the inner ring key 95a, an execution key 95c for the function of selection, that is, execution, is provided.
Furthermore, the remote controller 12A includes a GUI1 button 95d and a GUI2 button 95e. The functions of the GUI1 button 95d and the GUI2 button 95e are described later. The remote controller 12A further includes a GUI3 button 95f, but the GUI3 button 95f is described with reference to the variation example described later.
In the following explanation, the input device 12 is the remote controller 12A shown in
Described below is the information about the contents stored in the storage device 10A (animation contents in the present embodiment).
Each of the contents stored in the storage device 10A is assigned the contents information as shown in
The data structure shown in
As shown in
The data shown in
In the data structure shown in
The details of the time axis data shown in
The numeral data shown in
The text data shown in
Furthermore, as described later, if, in addition to the persons who hold the contents, for example, friends can share the contents, the contents information can be improved in cooperation, and a display screen can be obtained that is easy to use and makes contents easy to search and retrieve. Since a program distributed over a network includes common contents held by each user, a database of meta-data (contents information) of contents may be structured on a network server such that friends or an indefinite number of members can write data to be shared.
As shown in
The acquisition information about contents varies depending on the input means. For example, contents distributed over a network have a date and time of download as acquisition information. Toll contents in a network distribution format or a package distribution format include a date and time of purchase as acquisition information. If a broadcast is recorded by a video recorder with a built-in HDD etc., the recorded data includes a date and time of recording as acquisition information. Thus, the acquisition information relates to information about a time axis such as a date and time of download, a date and time of purchase, a date and time of recording, etc. As described later, the date and time can include a year, a month, a day, an hour, a minute, and a second, or can include only a year, or only a year and a month, as a time indicating a period having a length in time. If the time information such as a period setting is vague, or the information indicates not a time point but a time length such as an event, period data can be registered so that the data can be easily extracted when retrieved later. Therefore, on a time axis such as a period setting, “the year of 1600” does not indicate the momentary time point of 0:00 of Jan. 1, 1600, but indicates period data such as “0:00:00 of Jan. 1, 1600 to 23:59:59 of Dec. 31, 1600”. Furthermore, precise time data may not be acquired for a date and time of recording, a date and time of production, etc. In this case as well, period data can be set so that the data can be easily extracted when searched for.
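The period handling described above can be sketched as follows. This is an illustrative sketch only; the function name and the input string formats are assumptions, not part of the specification.

```python
from datetime import datetime, timedelta

def to_period(time_text):
    """Convert a possibly imprecise time description into a
    (start, end) period so it can be matched in later retrieval.
    A bare year such as "1600" covers the whole year; a year-month
    such as "2000-02" covers the month; a full date collapses to a
    zero-length period."""
    if len(time_text) == 4:  # year only, e.g. "1600"
        year = int(time_text)
        start = datetime(year, 1, 1, 0, 0, 0)
        end = datetime(year, 12, 31, 23, 59, 59)
    elif len(time_text) == 7:  # year and month, e.g. "2000-02"
        year, month = int(time_text[:4]), int(time_text[5:7])
        start = datetime(year, month, 1)
        if month == 12:
            end = datetime(year, 12, 31, 23, 59, 59)
        else:
            # last second of the month
            end = datetime(year, month + 1, 1) - timedelta(seconds=1)
    else:  # full date, e.g. "2006-12-27"
        start = end = datetime.strptime(time_text, "%Y-%m-%d")
    return start, end
```

With such period data, a retrieval condition like “the year of 1600” matches any content whose period overlaps that interval, even when the precise time point is unknown.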
The production information about contents is information about a time axis such as a date and time of production, a date and time of shooting, a date and time of editing, a date and time of publishing (for example, for movie contents, the release date in theaters, and for a DVD, the starting date of sales), a date and time of broadcast (for a TV broadcast, the first date and time of broadcast or the date and time of re-broadcast), etc.
The time axis information about the detailed contents can be, for example, information about a time axis such as the date and time of the period set in the contents (for example, a date and time in the Edo period for a period drama, and a date and time in the Heian period for the war between the Genji and the Heishi).
The time axis information includes information (for example, a date and time of shooting) that cannot be acquired unless a contents provider or a contents mediator provides it, and information that can be acquired by a contents viewer (contents consumer). There is also data specific to each copy of a content (for example, a date and time of recording from TV), and data to be shared with friends who hold the same contents (for example, the first date and time of broadcast of the contents).
That is, the contents information includes various data such as numeral data, text data, time axis data, viewer data described later, etc. Data to be shared can be shared over a network, and data provided by a provider of the contents can be acquired and registered through a necessary path. If data is not provided by the provider (for example, a date and time of shooting of movie contents), the corresponding item is left blank, or, if a viewer is to input the information, the viewer inputs it. That is, various types of information are collected and registered as much as possible; as the information improves in quantity and quality, contents can be retrieved by various co-occurrence relationships, that is, retrieval by association can be realized when time is represented in plural dimensions (three dimensions in the following descriptions) as described later.
In other words, as time axis data, various time axes including (1) a time counter of contents, (2) a date and time of viewing of the contents, (3) a date and time of recording the contents, (4) a date and time of acquiring the contents, (5) a year or a date and time set by the contents or the scene, (6) a date and time of production of the contents, (7) a date and time of broadcast, (8) a time axis of the life of the user, etc. can be prepared.
Since the association performed when video contents are searched for based on memory proceeds along a time axis in many cases, and since the thinking method a person uses to raise an association or an idea relies on relationships and associations in various aspects, preparing various types of time axes allows a user to easily retrieve a desired content or scene.
Furthermore, if video contents are to be sorted using, for example, a type, a keyword for a character, etc. as in the conventional method, one coordinate axis is not sufficient, and a coordinate value cannot be uniquely determined.
However, coordinates can be uniquely obtained by each video content using a time axis.
Therefore, preparing various time axes allows a user to retrieve contents with free association.
With the contents information having the data structures shown in
Using the plural contents stored in the storage device 10A and each of contents information about the plural contents, the video contents display apparatus 1 displays on the display screen of the output device 13 the three-dimensional display screen shown in
Described below is the effect of the video contents display apparatus 1 with the above-mentioned configuration.
First, the screen of the GUI1 as a three-dimensional display screen is described below. When viewing or having completely viewed a content, a user presses the GUI1 button 95d of the remote controller 12A, resulting in the screen shown in
In
The size of each block shown in
On the screen shown in
Furthermore, whether or not each axis is displayed can be selected, or a ruler display (for example, a display of “the year of 1999 from this point”) can be added so that a user can determine the scale of each axis.
The arrangement of contents is described below with reference to
Note that there may be a case where plural contents are positioned considerably far apart on a time axis, depending on the time axis, such as a time axis of a set period. In this case, the time axis scale can be, for example, a logarithmic scale, and the scale can be changed such that the positions of the contents are brought closer together. With this configuration, for example, the time density is higher for time points closer to the current time, and lower for time points farther in the past or the future.
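The logarithmic compression described above can be sketched as follows. This is a minimal sketch of one possible mapping, centred on the current time; the function name and scale factor are assumptions, not part of the specification.

```python
import math
from datetime import datetime

def log_axis_position(t, now, scale=1.0):
    """Map a timestamp onto a logarithmically compressed axis
    centred on the current time: points near 'now' keep high
    resolution, while distant past or future points are pulled
    in toward the centre.  log1p keeps the mapping continuous
    through zero elapsed time."""
    seconds = (t - now).total_seconds()
    sign = 1 if seconds >= 0 else -1
    return sign * scale * math.log1p(abs(seconds))
```

Under this mapping, a content one day away and a content one year away land only a few units apart on the axis, instead of hundreds of times farther.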
In addition, when the date and time of a set period is used, there is a tendency for a certain period to have a large number of contents and another to have few. For example, there are a number of contents covering the era from Nobunaga Oda to Ieyasu Tokugawa, but a decreasing number of contents covering the stable Edo period. In this case, only the time order is held, and the intervals of the plots on the axes can be set such that the contents are displayed at equal intervals on the time axis.
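The equal-interval arrangement that keeps only the time order can be sketched as follows. This is an illustrative sketch; the field names are assumptions, not part of the specification.

```python
def ordinal_positions(contents):
    """Keep only the time order of the contents and place them at
    equal intervals (rank positions) on the axis, so that crowded
    periods and sparse periods are displayed with the same visual
    density."""
    ordered = sorted(contents, key=lambda c: c["period_start"])
    return {c["title"]: rank for rank, c in enumerate(ordered)}
```

A content set at 1582 and one set at 1600 thus sit one slot apart, just as contents set centuries apart do, while their chronological order is preserved.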
Furthermore, some time axis data includes only year data, or year and month data, without full year-month-day data. In this case, the display generation unit 11 determines the time axis data for the display of the GUI1 according to predetermined rules. For example, if the time axis data is “February in 2000”, the data is processed as the data of “Feb. 1, 2000”. According to such a rule, the display generation unit 11 can arrange each block.
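The defaulting rule above can be sketched as follows. This is an illustrative sketch; the function name and the choice of the first day as the default are taken from the example in the text, while the argument style is an assumption.

```python
from datetime import datetime

def normalize_for_display(year, month=None, day=None):
    """Apply the rule used when arranging blocks on the GUI1 axes:
    a missing month or day defaults to its first value, so
    "February in 2000" is plotted as Feb. 1, 2000."""
    return datetime(year, month or 1, day or 1)
```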
In the display state shown in
The content in the focus state is displayed in a display mode different from those of the other contents to indicate the focus state, for example, by adding a yellow frame to its thumbnail image or increasing its brightness.
The view point of the screen shown in
The movement (selection) of the focused content and the movement of the viewpoint position may be made up and down, left and right, and backward and forward using the two ring keys 95a and 95b marked with arrows on the remote controller 12A.
Otherwise, the movements may also be made by displaying a submenu and selecting a moving direction from the submenu. Practically, by specifying the two (positive and negative) directions of each of the axes (a total of six directions), the view point direction can be selected, thereby allowing a user to use the function conveniently.
In addition, the size of a user view space may be set by various methods. For example, the settings can be: 1) a predetermined time width (for example, three preceding or subsequent days) common to each axis; 2) a different time width for each axis (for example, three preceding or subsequent days for the X axis, five preceding or subsequent days for the Y axis, and three years for the Z axis); 3) a different scale for each axis (a linear scale for the X axis, a log scale for the Y axis, etc.); 4) the range in which a predetermined number (for example, 5) of contents preceding and following the focused content are extracted for each axis (in this case, if plural contents are positioned close to each other, the range is smaller, and if they are positioned sparsely, the range is larger); 5) the order of determining the range of each axis being changeable when a predetermined number of contents including the focused content are extracted for each axis (the range of the first axis can be amended when the range of the second axis is determined); and 6) when a predetermined number or more of contents exist, only sampled contents being displayed, or the size of the block indicating each content being changed.
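Setting option 4) above, extracting a fixed number of neighbors around the focused content on one axis, can be sketched as follows. This is an illustrative sketch; the function name and the flat list of positions are assumptions, not part of the specification.

```python
def view_range(positions, focus, n=5):
    """Determine the display range on one axis by taking the n
    nearest contents on each side of the focused content:
    the range shrinks where contents are dense and widens where
    they are sparse, so roughly 2n+1 contents are always shown."""
    ordered = sorted(positions)
    i = ordered.index(focus)
    lo = ordered[max(0, i - n)]
    hi = ordered[min(len(ordered) - 1, i + n)]
    return lo, hi
```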
As shown in
In
The display generation unit 11 can generate the three-dimensional display screen shown in
Thus, by changing the viewpoint position, view direction, or viewing angle, a user can look down on a contents group from a desired viewpoint. In addition, if a time axis configuring the space is converted into another time axis, for example, the date and time of purchase of contents, then the user can easily retrieve contents, that is, search for contents purchased in the same period.
In addition, for example, the date and time of the birthday of the user as a viewer is used as a reference position; the intersection position of the three axes is specified there, and plural contents are rearranged on each time axis. Then, the user can compare the contents with the video contents shot by the user, and easily search for a TV program frequently viewed around the time those contents were recorded.
The origin position of time axis data, that is, the intersection of the three time axes, can be optionally set in each time axis. For example, in the case shown in
Furthermore, for example, in
In the example shown in
In
Furthermore, when a time axis or the viewpoint position is changed, the appearance of the two-dimensional projection image changes. At this time, the direction of the thumbnail image of each content may be fixed with respect to a predetermined time axis in the three-dimensional space. In that case, the thumbnail image can be viewed at a tilt or from the back, so the appearance of the thumbnail image changes. Alternatively, even if a time axis etc. is changed, the direction of a thumbnail image may be fixed on the two-dimensional projection image; for example, when images are displayed in a two-dimensional array, a thumbnail image may be fixed to constantly face forward. In the case where the direction of the thumbnail image of each content is fixed with respect to a predetermined time axis in the three-dimensional space, for example, by preparing a button of “changing a thumbnail image to face forward” on the input device 12, the user can change the direction of a thumbnail image in a desired state and at a desired timing.
Furthermore, as a variation example of a display mode, the display method as shown in
In the information about a time axis, the information about the first date and time of viewing by a user is blank if the contents have not been viewed. When contents are sorted by the time axis of the date and time of first viewing, a future date and time is virtually set for such contents. For example, contents that have not been viewed can be arranged at a position of a predetermined time, such as five minutes after the current time. If there are plural contents that have not been viewed, the contents can be sorted by virtually setting future dates and times at equal intervals in the order of the activation date and time (the date and time of purchase for package contents, the date and time of reception for network-received contents, the date and time of recording for contents recorded from broadcasts, and the date and time of shooting for contents shot by the user).
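The virtual-future-time rule above can be sketched as follows. This is an illustrative sketch; the function name, field names, and five-minute interval (taken from the example in the text) are assumptions, not part of the specification.

```python
from datetime import datetime, timedelta

def first_viewing_sort_key(contents, now, gap=timedelta(minutes=5)):
    """Build a sort key for the 'date and time of first viewing'
    axis: unviewed contents receive virtual future times at equal
    intervals after 'now', ordered by their activation
    (purchase/reception/recording/shooting) date."""
    unviewed = sorted((c for c in contents if c["first_viewed"] is None),
                      key=lambda c: c["activated"])
    virtual = {id(c): now + gap * (i + 1) for i, c in enumerate(unviewed)}
    return lambda c: c["first_viewed"] or virtual[id(c)]
```

Viewed contents keep their real first-viewing times; unviewed ones line up just after the current time in activation order, so they appear at the near end of the axis instead of disappearing from the sort.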
5.2 Software of Display Generation Unit about GUI1
When a user presses the GUI1 button 95d of the remote controller 12A, the display generation unit 11 performs the process shown in
In the following example, the process shown in
First, the display generation unit 11 acquires time axis data of the contents information about plural contents stored in the storage device (step S1). Since the time axis information is stored in the storage device 10A as the time axis data about the contents information as shown in
The display generation unit 11 determines the position in the absolute time space of each content based on the acquired time axis data (step S2). That is, for each piece of time axis data, the display generation unit 11 determines the position in the absolute time space, that is, the time, of each content, so that the position of each content is determined on each time axis. The determined position information about each content on each time axis is stored in the RAM or the storage device 10A. Step S2 corresponds to a position determination unit for determining, for each of the plural video contents, the positions on plural time axes according to the time information about the plural video contents.
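Step S2 can be sketched as follows. This is an illustrative sketch only; the function name, field names, and the choice of a reference epoch are assumptions, not part of the specification.

```python
from datetime import datetime

EPOCH = datetime(2000, 1, 1)  # assumed reference origin of the absolute time space

def positions_in_absolute_time_space(contents, axes):
    """Step S2 sketch: for each content, compute its coordinate
    (seconds from the reference epoch) on every available time
    axis.  A content lacking data for an axis simply has no
    coordinate on that axis."""
    table = {}
    for c in contents:
        table[c["title"]] = {
            axis: (c[axis] - EPOCH).total_seconds()
            for axis in axes if c.get(axis) is not None
        }
    return table
```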
Next, it is determined whether or not the past view information is to be used (step S3). The view information includes the information about the view point, the origin (intersection), three time axes, that is, the first to third time axes, and the display range of each time axis when the display shown in
Whether or not the past view information is to be used may be set by a user in advance in the storage device 10A, and a display unit such as a subwindow etc. may be provided for selection on the display screen as to whether or not the past view information is to be used. A user makes the selection and determines whether or not the past view information is to be used.
If YES in step S3, that is, if the past view information is used, then the display generation unit 11 determines a user view space from the past view information (step S4).
In step S2, the position in the absolute time space ATS of each content C is determined. The user view space UVS is determined according to the set various types of information, that is, the information about the view point, the origin (intersection), three time axes, that is, the first to third time axes, and the display range of each time axis. The display generation unit 11 can generate the screen data (practically, the data of the projection image to the two-dimensional plane in the three-dimensional space) for display shown in
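The projection of the three-dimensional user view space onto the two-dimensional display plane mentioned above can be sketched as a minimal perspective projection. This is an illustrative sketch; the function name, viewpoint placement, and focal length are assumptions, not part of the specification.

```python
def project(point, viewpoint_z=-10.0, focal=5.0):
    """Minimal perspective projection of a point in the
    three-time-axis space onto the two-dimensional screen plane:
    coordinates are divided by the distance from the viewpoint,
    so contents farther along the depth axis appear smaller and
    closer to the screen centre."""
    x, y, z = point
    depth = z - viewpoint_z  # distance from the assumed viewpoint
    return (focal * x / depth, focal * y / depth)
```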
Thus, the display generation unit 11 displays the user view space on the screen of the output device 13 (step S5). The user view space includes the graphics of plural blocks indicating the respective contents. As a result, the display as shown in
Next, it is determined whether or not the user has selected a function of changing the screen display (step S6). To change the screen display, for example, the user operates the remote controller 12A to display a predetermined subwindow on the display screen, and selects a predetermined function for the change.
If YES in step S6, that is, if a user issues an instruction to change screen display, control is returned to step S3. In step S3, it is determined whether or not the past view information is to be used. If the past view information is used (YES in step S3), and if there are plural pieces of past view information, then another piece of past view information is selected, or if the past view information is not used, a process of changing view information is performed (step S10).
If NO in step S6, that is, if a user does not issue an instruction to change screen display, it is determined whether or not a content has been selected (step S7). If a content is not selected, it is determined NO in step S7, and control is returned to step S6. A content is selected by a user using, for example, an arrow key of the remote controller 12A to move a cursor to the place of a content to be viewed and select the content.
If YES in step S7, that is, if a content is selected, the display generation unit 11 stores the view information about the user view space displayed in step S5 in the storage device 10A (step S8). The view information includes a view point, an origin, and first to third time axes, and further includes information about the display range of each of the first to third time axes. The information about the view point includes, for example, information as to where the view point is positioned, forward or backward, with respect to the first to third time axes; the information about the origin is date-and-time information such as a year, a month, etc. The information about the display range of each time axis includes scale information.
After step S8, the display generation unit 11 passes control to the GUI2 display processing (step S9). The transfer to the GUI2 display processing is performed by pressing the GUI2 button 95e.
If NO in step S3, that is, if the past view information is not used, view information change processing is performed (step S10). In the view information change processing, a subwindow screen (not shown in the attached drawings) is displayed to set each parameter on the display screen, to allow a user to set or input the information about the display range of the first to third time axes in addition to the above-mentioned view information, origin information, first time axis information, second time axis information, and third time axis information described above.
After the user changes the view information, control is passed to step S5, and the user view space is displayed on the screen of the output device 13 according to the view information changed in step S10.
Thus, plural contents in a predetermined period of each of the three time axes are arranged in a three-dimensional array and displayed on the display screen of the display device of the output device 13. When the user requests to view one of the contents, the user selects the content, and the content is played back.
Since the video contents display apparatus 1 can display plural contents in relation to plural time axes as shown in
As described above, when a user as a viewer selects three desired time axes from among plural time axes as an axis of the information about a three-dimensional space in which video contents or a scene in the video contents are browsed, the video contents display apparatus 1 configures a virtual three-dimensional space based on the selected time axes, and displays the video contents and the scene as a static image or animation at a predetermined position in the three-dimensional space according to the time axis information. By operating the remote controller 12A, the user can browse the space from any viewpoint position in the three-dimensional space. Then, the video contents display apparatus 1 can perform the viewing operation such as presenting, playing back, temporarily stopping the playback, stopping the playback, fast playing back, returning the playback, storing and calling a playback position, etc. of the information about the contents and the scene with respect to the set of the video contents and scene selected by the user from the display state on the screen shown in
In the conventional two-dimensional GUI, there are only two sorting references (the date and time of recording and the last date and time of viewing) for the displayed rearrangement. Therefore, when the rearrangement reference is changed, it is necessary to press a page switch button or a mode change button.
Although there is a three-dimensional GUI for displaying video in a three-dimensional space, the three dimensions carry no meaning; the GUI merely has a three-dimensional appearance.
In the conventional GUI, a content provided with various information such as a type name, the name of a character, a place, or the meaning and substance of the content or scene cannot be arranged on an evaluation axis. When contents are arranged according to such information, each content may not be uniquely plotted.
However, using plural time axes as in the present embodiment, a unique plot (an assignment of coordinates) can be allotted on each time axis. Therefore, it is effective to sort animation contents using a time axis.
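As a minimal sketch of this idea, the following illustrates how each content receives exactly one coordinate per time axis and can therefore be sorted unambiguously along any chosen axis. The field names (`recorded`, `published`, `set_year`) are hypothetical and not taken from the embodiment.

```python
from datetime import datetime

# Hypothetical content records: every time attribute yields exactly one
# coordinate per axis, so a unique plot on any chosen time axis exists.
contents = [
    {"title": "Drama A", "recorded": datetime(2006, 12, 1, 21, 0),
     "published": datetime(2006, 11, 30, 21, 0), "set_year": 1603},
    {"title": "News B", "recorded": datetime(2006, 12, 2, 19, 0),
     "published": datetime(2006, 12, 2, 19, 0), "set_year": 2006},
]

def plot_on_axis(contents, axis):
    """Return (coordinate, title) pairs sorted along the chosen time axis."""
    return sorted((c[axis], c["title"]) for c in contents)
```

A historical drama set in the Edo period, for example, plots far to the left on the `set_year` axis yet near the present on the `recorded` axis, which is exactly why combining axes gives retrieval cues that a single recording-time axis cannot.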
Conventionally, a sorting method or a retrieving method using one or two types of axes (concepts of time), such as a recording time and a playback time, has been provided. The conventional sorting method has no retrieval key such as the date and time in which the content is set (for example, the Edo period for a historical drama) as described above, the date and time on which the content was published, the date and time of acquiring the content, or the date and time of recording the content. In the conventional sorting method, a user first selects a recording day from the listed contents, selects a recording channel, and selects a content; then a scene is retrieved. Thus, a content can be retrieved only in this fixed retrieving procedure.
However, in the method above, in a case where a scene can be recollected but the recording day is vague, it is difficult to select the scene.
In addition, for example, a request for "the contents broadcast when the current content was previously viewed" cannot practically be satisfied. A user having such a request would have to recollect the date and time of previously viewing the current video contents, select each of the video contents from a list of plural viewable video contents, compare that date and time with the displayed date and time of broadcast, and repeat these operations until the video contents broadcast on the desired date and time are retrieved. The more video contents there are to view, the more impractical the above-mentioned operation becomes. Thus, most users give up the viewing.
However, a person vaguely remembers the co-occurrence relations and the relations between contents on various time axes, and may in some cases associate various time axes or co-occurrences with other contents while viewing contents. Conventionally, there is no method of retrieving and viewing contents based on such various time axes or co-occurrences, and no system for providing a retrieving method using combined time axes such as the GUI according to the present embodiment.
The three-dimensional GUI as shown in
As shown in
That is, by the display shown in
As described above, according to the GUI1, the video contents or scene can be easily retrieved from plural video contents with the time relations taken into account.
Described next is the method of retrieving a scene in selected contents.
In the display state shown in
In the focus state, when a user operates the remote controller 12A and specifies the display of the submenu, a submenu window 102 as shown in
The user can operate, for example, the cross key of the remote controller 12A to move the cursor to a desired selection portion from among the plural selection portions, and select a desired command.
If the execution key 95c of the remote controller 12A is pressed in the state in which the selection portion (“collecting the programs of the same production year”) is selected, the programs of the same series as the selected content 112d are retrieved and extracted as related contents, and the screen as shown in
In
Among the thumbnail images of the four contents 121a, 121b, 121d, and 121e, the leftmost thumbnail image in each content is a target image that is not horizontally reduced. The frame F1 indicating a non-reduced image is added to the leftmost thumbnail image. The frame F1 is a mark indicating the target image that is displayed unreduced in each content.
In the central, selected content 121c, the leftmost thumbnail image is displayed unreduced, like those of the other contents 121a, 121b, 121d, and 121e to which the frame F1 is added, and the frame F2 indicating the image at the cursor position is added when the screen shown in
In the state above, when the user moves the cursor using the remote controller 12A, the thumbnail image (hereinafter referred to as a focus image) at a position (focus position) of the moved cursor is displayed in an unreduced state.
Note that the frames F1 and F2 are displayed in the display mode in which the frames can be discriminated from each other, for example, using different thicknesses, colors, etc. so that a target image can be discriminated from a focus image.
Further note that, in the explanation above, a target image is described as being displayed unreduced. However, it is not essential to display an unreduced image; any conspicuous representation is acceptable.
The focus image shown in
A target image and a focus image are displayed simply reduced in size without changing the aspect ratio. The thumbnail images at positions other than those of the target image and the focus image are reduced in a predetermined direction, that is, horizontally in this embodiment, and displayed as tall portrait images.
As shown in
Furthermore, a target image may be displayed with higher brightness so that it appears brighter than the surrounding images. Alternatively, the thumbnail images other than the target image and the focus image may be displayed with lower brightness.
The image reducing direction may be vertical instead of horizontal. The images may also be arranged and displayed by overlapping the thumbnail images such that only the rightmost or leftmost edge of each can be viewed, instead of reducing the thumbnail images.
When the screen shown in
As described above, by arranging and displaying each content as a continuous sequence of plural static images in a predetermined direction, the user can browse the overall flow of scenes in the entire content, or roughly grasp the scene changes. A user can detect a scene change by the position where the color of the entire sequence of static images changes. If the static images in the sequence are arranged at equal time intervals (equal interval mode), the user can immediately grasp the total length (length in time) of each content. The static images can also be arranged at unequal time intervals, with a required number of images arranged from the leftmost point to the rightmost point (equal image number mode). Otherwise, the reduction rate of the static images may be changed with the total length of each content fixed, such that the time intervals of the static images are equal (equal total length mode).
As described later, the user can operate the remote controller 12A and move the cursor position in the content, thereby changing the target image and the focus image. When thumbnail images are displayed in the equal interval mode, the target image or the focus image is skipped at predetermined time intervals, for example, every three minutes. When thumbnail images are displayed in the equal image number mode, the target image or the focus image of a content of any time length is skipped at a predetermined rate, for example, 2%.
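The difference between the two sampling modes can be sketched as follows. This is a simplified illustration under the assumption that sampling positions are expressed in seconds; the function names are hypothetical.

```python
def sample_times_equal_interval(total_sec, interval_sec):
    """Equal interval mode: one thumbnail every interval_sec seconds;
    a longer content simply yields a longer sequence of images."""
    return list(range(0, total_sec, interval_sec))

def sample_times_equal_count(total_sec, count):
    """Equal image number mode: a fixed number of thumbnails per content,
    so the time interval between images depends on the content length."""
    step = total_sec / count
    return [int(i * step) for i in range(count)]
```

In the equal interval mode the sequence length conveys the content's duration at a glance; in the equal image number mode every content occupies the same number of images, so cursor movement by a fixed percentage (for example, 2%) always skips the same number of thumbnails.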
As described above, in the present embodiment, the sequence of images of each content shown in
Back to
In the display state shown in
As described above, the user can extract the desired content 112d from plural contents displayed on the three-dimensional display screen shown in
There is a case in which a user requests to retrieve a desired related scene associated with a scene in plural related contents as shown in
A user can operate the cross key of the remote controller 12A in the display state shown in
In the state, if the user operates the remote controller 12A and issues an instruction to display a submenu to retrieve the related scene, a submenu window 123 as shown in
A user can operate the cross key of the remote controller 12A, move a cursor to a desired selection portion from plural selection portions, and select a desired command.
If the execution key 95c of the remote controller 12A is pressed in the state in which the selection portion (for "searching for a similar scene") is selected, then a scene similar to the scene indicated by the thumbnail image TN1 as a focus image is retrieved, and the screen as shown in
A similar scene can be retrieved by analyzing each frame of each content or a thumbnail image, and by detecting the presence/absence of similar images (for example, characters similar to those in the thumbnail image TN1).
Since an extracted related scene is displayed in an unreduced state as the result of retrieving a specified related scene, the user can easily confirm the retrieved scene, as shown in
In response to the command for "searching for a scene of high excitement", assuming the excitement level is proportional to the volume level included in the content, a scene having a high volume level is extracted. In response to the command for "searching for a scene including the same person", the amount of feature is determined from an image of the face etc. of a person appearing in the specified thumbnail image in the image analyzing process, and an image having an equal or substantially equal amount of feature is extracted. In response to the command for "searching for the boundary between scenes", an image whose amount of feature differs largely from that of the adjacent framed image is extracted in the image analyzing process.
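The boundary-search command described above can be sketched roughly as follows, assuming one feature vector per frame (for example, a color histogram); the threshold and feature representation are illustrative, not from the embodiment.

```python
def scene_boundaries(features, threshold):
    """Flag frames whose feature amount differs largely from the previous
    frame's, as in the "searching for the boundary between scenes" command.
    `features` holds one numeric feature vector per frame."""
    boundaries = []
    for i in range(1, len(features)):
        diff = sum(abs(a - b) for a, b in zip(features[i - 1], features[i]))
        if diff >= threshold:
            boundaries.append(i)
    return boundaries
```

The same frame-by-frame feature comparison, with the inequality reversed into a similarity test against the focus image's feature vector, serves the "searching for a similar scene" and "same person" commands.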
The above-mentioned example retrieves a similar scene, etc. As an application example, the same specific corner in the same program broadcast every day, week, or month can be retrieved.
In the description shown in
Furthermore, in the voice sound processing, a corner starting with the "same music" can be retrieved. For example, a weather forecast corner may start with the same music. A corner appearing with the same superimposed mark can also be retrieved. Although the superimposition cannot be read as text, it can be recognized as the "same mark". In this case, the superimposition is recognized and retrieved as a mark. Furthermore, in the speech recognition processing, a corner starting with the "same words" can be retrieved. For example, when a corner starts with the fixed words "Here goes the corner of A", the fixed words are retrieved to retrieve the corner. Thus, if there are any common points in images or words, as with such fixed corners, the common features can be retrieved.
In the display state as shown in
In
In the initial display state SS0, when the right cursor portion IR of the ring key 95a inside the cross key 95 is continuously pressed, the focus position moves rightward in the display state SS1 from the thumbnail image 141 through the thumbnail images to its right in the selected content C2. While the right cursor portion IR is pressed, the thumbnail image at the cursor position changes one after another, and the focus image moves right without changing its size. In
Although not shown in the attached drawings, in the initial display state SS0, when the up or down cursor portion IU or ID of the ring key 95a is pressed, the focus image moves to the thumbnail image at the same position as the related content C1 or C3 above or below the cursor position regardless of a thumbnail image of a highlight scene.
Furthermore, if the movement of the focus image to the up or down related contents stops, and the right cursor portion IR is pressed from the position, the focus image moves right, and if the left cursor portion IL is pressed, the focus image moves left. That is, the left and right cursor portions IR and IL have the function of moving right or left the focus image, that is, in the same content. The up and down cursor portions IU and ID have the function of moving up and down the focus image, that is, between the contents.
Next, in the initial display state SS0, when the up cursor portion OU of the ring key 95b outside the cross key 95 is pressed, the focus moves to the thumbnail image 142 of the highlight scene of the related content C1 displayed above the thumbnail image 141 of the focus image in the selected content C2, thereby entering the display state SS2. If the up cursor portion OU is pressed, the cursor does not move from the thumbnail image 141 to 143 because the thumbnail image 142 is closest to the thumbnail image 141 on the display screen. If the cursor is placed at the thumbnail image 144 in the state shown in
Then, although not shown in the attached drawings, if the cursor portion OD is pressed in the initial display state SS0, the cursor moves to the thumbnail image 145 of the related content C3 displayed below.
If the cursor portion OD is pressed when the cursor is placed at the thumbnail image 142 of the related content C1, then the cursor moves to the thumbnail image 141 of the selected content C2 displayed below, and if the cursor portion OD is further pressed, then the cursor moves to the thumbnail image 145 of the related content C3 displayed below.
Similarly, if the cursor portion OU is pressed when the cursor is placed at the thumbnail image 145 of the related content C3, then the cursor moves to the thumbnail image 144 of the selected content C2 displayed above. If the cursor portion OU is further pressed, then the cursor moves to the thumbnail image 143 of the highlight scene in the related content C1 displayed above. That is, the up and down cursor portions OU and OD have the function of moving (that is, jumping) the cursor up and down, that is, between the contents, to the thumbnail image of the highlight scene.
In the initial display state SS0, if the right cursor portion OR of the ring key 95b outside the cross key 95 is pressed, then the cursor moves from the thumbnail image 141 of the highlight scene on which the cursor is placed in the selected content C2 to the thumbnail image 144 of the highlight scene of the selected content C2, thereby entering the display state SS3.
Then, in the display state SS3, when the left cursor portion OL is pressed, the cursor returns to the thumbnail image 141 of the highlight scene of the selected content C2. That is, the left and right cursor portions OR and OL have the function of moving (that is, jumping) the cursor left and right, that is, to the thumbnail image of the highlight scene in the same content.
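The key assignments described above — the inner ring stepping one thumbnail or one content at a time, the outer ring jumping between highlight thumbnails — can be modeled roughly as follows. This is only a sketch: the row/highlight data structures and key names are hypothetical, and the highlight index lists are assumed to be sorted.

```python
def move_focus(pos, key, highlights, num_thumbs, num_rows):
    """pos = (row, idx).  Inner ring keys IL/IR step one thumbnail within a
    content and IU/ID step between contents; outer ring keys OL/OR jump to
    the previous/next highlight thumbnail in the same content, and OU/OD
    jump to the nearest highlight of the row above/below."""
    row, idx = pos
    if key == "IR":
        return row, min(idx + 1, num_thumbs - 1)
    if key == "IL":
        return row, max(idx - 1, 0)
    if key in ("IU", "ID"):
        row += -1 if key == "IU" else 1
        return max(0, min(row, num_rows - 1)), idx
    if key == "OR":
        nxt = [h for h in highlights[row] if h > idx]
        return row, nxt[0] if nxt else idx
    if key == "OL":
        prev = [h for h in highlights[row] if h < idx]
        return row, prev[-1] if prev else idx
    if key in ("OU", "OD"):
        row = max(0, min(row + (-1 if key == "OU" else 1), num_rows - 1))
        hl = highlights[row]
        return row, min(hl, key=lambda h: abs(h - idx)) if hl else idx
    return pos
```

The `OU`/`OD` branch reflects the behavior described for the thumbnail image 142: the cursor lands on the highlight closest to the current position on the screen, not on the first highlight of the other content.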
As shown in the display example shown in
In the example above, on the screen on which the related scenes extracted and specified in the submenu window 123 shown in
The display generation unit 11 can change the related contents displayed with the selected contents, according to the contents of focus images or the contents information. For example, when the focus image displays the face of a talent of a comedy program, the contents of a program in which the talent plays a role are extracted and displayed as related contents. Otherwise, when a focus image displays the face of a player in a live broadcast of a golf tournament, the contents of a program in which the player plays a role are extracted and displayed as related contents. Furthermore, when a focus image displays a goal scene of a team in a football game, the contents of a program in which the team has a goal scene are extracted and displayed, etc.
Furthermore, in the displayed selected contents and the related contents, the scenes in which the same talent or the same player is displayed are displayed as related scenes. In the display state, the operation by the cross key 95 as shown in
In such a display state, the function may be suppressed by selecting whether or not the function of the outside ring key 95b is made effective.
In addition, with a change of the focus image, related contents may be dynamically changed, and related scenes may also be dynamically changed.
Furthermore, there may be a switching function between enabling and disabling the function of dynamically changing related contents with a change of the focus image, and in addition, there may be a switching function between enabling and disabling the function of dynamically changing related scenes.
Furthermore, if the image of the weather forecast corner in a news program is the focus image, the related contents above and below are displayed with the images of similar weather forecast corners in other programs as target images. The user can perform an operation of moving only among the images of the weather forecast corners by moving the focus up and down. Otherwise, if a close-up of a talent in a drama is the focus image, the related contents above and below are displayed with close-ups of the same talent in other programs as target images. When the user moves the focus up and down, a target image of a close-up of the same talent in another program can be displayed.
If related scenes are dynamically changed depending on the movement of the focus image, then the display generation unit 11 can generate list data of the cast in the program in a background process, thereby more quickly performing the dynamic change and display processing.
Thus, if related contents can be dynamically changed according to the contents of a focus image or the contents information, the related scene of the changed related contents can be retrieved.
Therefore, a user as a viewer can easily retrieve a scene, or enjoy retrieving a scene.
In addition, if animation contents are a set of cuts and chapters, the cuts and chapters in the contents can be regarded as a unit of contents as well as the original contents. In this case, if the cuts and chapters are designed to have the structure of content data as shown in
That is, depending on the position of the cursor or a so-called focus image, the contents information included in each content changes. Therefore, for example, the related contents arranged up and down can be more dynamically changed depending on the movement of the position of the focus image on the thumbnail images as shown in
When the related contents arranged up and down are dynamically changed, for example, the following display can be performed.
1) Programs of other channels at the same recording time are arranged in order of channels.
2) Same programs (daily or weekly recorded) are arranged in order of recording time.
3) Same corners (for example, a weather forecast, a today's stock market, etc.) are arranged in order of date.
4) Programs with the same cast are arranged in order of time regardless of the titles of programs.
5) Contents captured at the same place are arranged in order of time.
6) Contents of the same type of sports are arranged in order of time.
7) Same situations and same scenes (chances, pinches, goal scenes) of the same type of sports are arranged in order of time.
8) The contents arranged above and below are not only the same in contents information, but may also, for example, differ in scene in sports, such as the first goal scene, the second goal scene, the third goal scene, etc. in the same contents, arranged in order based on a specific condition.
In the example in (8) above, in the case of sports contents, the same type of sports is arranged, or the same type of sports with chance scenes is arranged. Thus, if there are plural methods of arranging scenes, a system of specifying the arranging method can be incorporated into a context menu of the GUI.
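Arrangements 1) through 8) above share one pattern: select the contents sharing some attribute with the focused content, then order them along a time attribute. A minimal sketch of that pattern, with hypothetical attribute names, could look like this:

```python
def related_rows(contents, focus, share_key, order_key):
    """Pick the contents sharing attribute `share_key` with the focused
    content (e.g. the same cast, corner, or sport), then arrange them in
    order of `order_key` (e.g. recording time), as in arrangements 1)-8)."""
    rows = [c for c in contents
            if c is not focus and c[share_key] == focus[share_key]]
    return sorted(rows, key=lambda c: c[order_key])
```

Switching the arrangement method from the context menu then amounts to passing a different `share_key`/`order_key` pair, e.g. (`"channel"`, `"recorded"`) for arrangement 1) or (`"cast"`, `"recorded"`) for arrangement 4).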
Furthermore, as one method of using a sequence of images, there are fast forward and fast return bars in playing back contents.
While playing back contents, the thumbnail image corresponding to the scene 141 being played back is displayed on the thumbnail image display unit 143, but if the user operates the remote controller 12A, and moves the cursor position of the image sequence display unit 142, then the display generation unit 11 displays the thumbnail image corresponding to the moved position on the thumbnail image display unit 143, and displays on the scene display unit 140 the scene 141 of the contents corresponding to the position displayed on the thumbnail image display unit 143. What is called a fast forward or fast return is realized by the image sequence display unit 142 and a cursor moving operation.
The user can rotate the tetrahedron 151 in a virtual space and change the surface viewed from the user by operating the cross key 95 of the remote controller 12A. For example, when the up cursor portion OU of the outside ring key 95b is pressed, the tetrahedron 151 rotates so that the surface 151d can be viewed from the front in place of the surface 151a which has been viewed from the front up to this point. As a result, the user can view the sequence of images of the contents 131d. Furthermore, when the up cursor portion OU is pressed again, the tetrahedron 151 rotates so that the surface 151c can be viewed from the front in place of the surface 151d which has been viewed from the front up to this point. As a result, the user can view the sequence of images of the contents 131c.
On the other hand, when the down cursor portion OD of the outside ring key 95b is pressed, the tetrahedron 151 rotates so that the surface 151b can be viewed from the front in place of the surface 151a which has been viewed from the front up to this point. As a result, the user can view the sequence of images of the contents 131b. Furthermore, when the down cursor portion OD is pressed again, the tetrahedron 151 rotates so that the surface 151c can be viewed from the front in place of the surface 151b which has been viewed from the front up to this point. As a result, the user can view the sequence of images of the contents 131c. As described above, the user can operate the remote controller 12A and switch the displayed sequence of images by rotating the tetrahedron 151 like a cylinder.
The operation of moving the highlight scene shown in
In the display state shown in
Furthermore, as a variation example of the displays shown in
The viewership data r of the contents changes corresponding to the elapsed time t of the playback of the contents. With the change, the thumbnail images TN11 and TN12 corresponding to two large values are displayed without horizontal reduction. The size of the thumbnail image TN11 corresponds to the viewership r1. The size of the thumbnail image TN12 corresponds to the viewership r2. In
There are various methods of determining which scene's thumbnail image in the sequence of images is to be displayed unreduced in the horizontal direction: for example, selecting scenes whose viewership data r is equal to or higher than a predetermined threshold, or selecting a predetermined number of scenes having the highest viewership.
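Both selection methods mentioned above can be sketched in one small function; the function name and the choice of exposing them as keyword arguments are illustrative only.

```python
def unreduced_scenes(viewership, threshold=None, top_n=None):
    """Indices of scenes whose thumbnails stay unreduced: either every
    scene whose viewership meets the threshold, or the top_n scenes
    having the highest viewership."""
    if threshold is not None:
        return [i for i, r in enumerate(viewership) if r >= threshold]
    ranked = sorted(range(len(viewership)), key=lambda i: -viewership[i])
    return sorted(ranked[:top_n])
```

With the viewership curve of the figure, the two peaks r1 and r2 would be returned by either method, producing the unreduced thumbnail images TN11 and TN12.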
At this time, the additional information is, for example, the information based on the time series data in the text format or numeric value format as shown in
For example, the magnification or reduction rate of the thumbnail images is changed according to the information (time series data) below.
1) level of excitement from acclamation
2) level of BGM and sound effects
3) level of laughter and applause
4) density of conversation
5) viewership
6) number of recorded user members
7) number of links if there are links in the scene in animation
8) frequency of viewing of scene
9) determination value of specific detected character (probability of appearance of specific character)
10) size of detected face
11) number of detected persons
12) hit rate of keyword retrieval
13) determination value of scene change
14) highlight portion in music program
15) important portion in educational program
The higher the values of the information above, the higher the magnification of the thumbnail images.
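Any of the time-series levels listed above can drive the thumbnail size through a simple linear mapping; the sketch below assumes widths in pixels and a linear scale, both of which are illustrative choices rather than details of the embodiment.

```python
def thumb_widths(levels, base_width, max_width):
    """Map each scene's time-series level (excitement, viewership, etc.)
    linearly to a thumbnail width: the higher the level, the wider."""
    lo, hi = min(levels), max(levels)
    span = (hi - lo) or 1  # avoid division by zero for flat data
    return [round(base_width + (v - lo) / span * (max_width - base_width))
            for v in levels]
```

The same mapping could equally drive brightness, frame thickness, or vertical offset, as noted below.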
For example, in the contents of a sports match, the excitement level of the match can be digitized by analyzing the volume of the cheering in the contents in the voice sound processing. The thumbnail image is displayed larger depending on the excitement level. That is, the higher the excitement level, the larger the thumbnail image, while the lower the excitement level, the smaller the thumbnail image. With this display method, the user can immediately recognize the contents, thereby easily selecting desired scenes.
The methods of representing the additional information in the contents include, in addition to changing the reduction or magnification rate of a thumbnail image in the sequence of images, controlling the brightness of a thumbnail image, the thickness of the frame of a thumbnail image, the color or brightness of the frame of a thumbnail image, shifting a thumbnail image up or down, etc.
Conventionally, an excitement scene could not be recognized without a troublesome process of, for example, identifying high levels of voice data from the waveform of audio data and then retrieving the image corresponding to the waveform position. However, according to
Furthermore, the displays as shown in
There may be provided plural modes such as a mode in which an excitement scene is enlarged, a mode in which a serious scene is enlarged, etc., such that the modes can be switched to display the representation shown in
Described next is the display processing of the sequence of images of the contents displayed by the output device 13.
When a user presses the GUI2 button 95e of the remote controller 12A, the process shown in
First, the display generation unit 11 selects a content displayed at a predetermined position, for example, at the central position shown in
Next, the display generation unit 11 selects contents to be displayed in other positions than the predetermined position, for example, above or below in
The content to be displayed at the central row shown in
The display generation unit 11 performs the display processing for displaying sequence of images based on the information about a predetermined display system and the parameter for display (step S23). As a result of the display processing, a thumbnail image generated from each framed image in the sequence of images of each content is arranged in a predetermined direction in a predetermined format. The display system refers to a display mode of the entire screen as to whether or not contents are to be displayed in a plural row format as shown in
The display processing in step S23 is described below with reference to
First, the display generation unit 11 generates a predetermined number of static images, that is, thumbnail images, along the lapse of time of the contents forming a sequence of images from the storage device 10A (step S41). The step S41 corresponds to a static image generation unit.
Next, the display generation unit 11 converts thumbnail images other than at least one predetermined and specified thumbnail image (a target image in the example above) from among a predetermined number of generated thumbnail images into reduced images in a predetermined format (step S42). The step S42 corresponds to an image conversion unit.
Then, the display generation unit 11 displays the at least one thumbnail image and the other converted thumbnail images as a sequence of thumbnail images arranged along a predetermined path on the screen (horizontally in the example above) and along the lapse of time (step S43). The step S43 corresponds to a display unit.
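The three steps S41 to S43 can be summarized in one sketch. The function below only models the data flow — generation, selective reduction, and arrangement along the time axis — with hypothetical names; widths are relative values where 1.0 means unreduced.

```python
def display_sequence(frame_times, target_index, reduce_ratio=0.25):
    """S41: generate one thumbnail per sampled frame time; S42: horizontally
    reduce every thumbnail except the target image; S43: return them
    arranged along the lapse of time."""
    thumbs = [{"time": t, "width": 1.0} for t in frame_times]   # S41
    for i, th in enumerate(thumbs):                             # S42
        if i != target_index:
            th["width"] = reduce_ratio
    return thumbs                                               # S43
```

A focus image would be handled the same way as the target image in step S42, with its frame F2 drawn instead of F1.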
In step S23 in which the process shown in
Then, the display generation unit 11 determines whether or not a user has issued a focus move instruction (step S24). The presence/absence of a focus move instruction is determined depending on whether or not the cross key 95 of the remote controller 12A has been operated. If the cross key 95 has been operated, control is passed to step S25.
As described above with reference to
If the up or down cursor portion IU or ID is pressed, rather than the right cursor portion IR or the left cursor portion IL, then it is determined NO in step S25 and YES in step S27, and the display generation unit 11 changes the content. If the up cursor portion IU is pressed, the display generation unit 11 selects the content in the upper row displayed on the screen. If the down cursor portion ID is pressed, the display generation unit 11 selects the content in the lower row displayed on the screen. Since the content is changed, the display generation unit 11 changes the time of the focus image into the starting time of the content after the change (step S29).
As a result, if the up cursor portion IU is pressed, then the focus moves to the content 121b, the frame F2 indicating the focus image is added to that content, and the thumbnail image as the leftmost framed image shown in
In the example above, when the content is changed, the time of the focus image becomes the starting time of the content after the change. However, the time of a focus image may be changed such that the time can be set not as a starting time, but for the same position of the focus image before the change as in the vertical direction on the screen, or for the position of the same elapsed time from the starting time of the content.
If the right cursor portion OR or the left cursor portion OL of the outside ring key 95b is pressed, rather than the right cursor portion IR, the left cursor portion IL, or the up or down cursor portion IU or ID, then the determination in steps S25 and S27 is NO, the determination in step S30 is YES, and the display generation unit 11 changes the time for display of the focus image into the highlight time of the next (that is, the adjacent) target image (step S31). In
Thus, the focus image is transferred between the target images in the content, that is, between the highlight scenes in this example.
If the up cursor portion OU or the down cursor portion OD of the outside ring key 95b, not the right cursor portion IR or the left cursor portion IL, nor the up cursor portion IU or the down cursor portion ID, is pressed, then it is determined NO in steps S25, S27, and S30, and the display generation unit 11 changes the content (step S32). If the up cursor portion OU is pressed, the display generation unit 11 can select the content in the upper row displayed on the screen (step S32). When the down cursor portion OD is pressed, the display generation unit 11 selects the content in the lower row displayed on the screen. In addition, since the content is changed, the time of the focus image is changed to the time of the highlight scene in the content after the change (step S33).
As a result, when the up cursor portion OU is pressed, the frame F2 indicating the focus moves to the content 121b in
Thus, a focus image moves to the highlight scene of another content.
If it is determined NO in step S24, that is, if the user instruction is not a focus move instruction, the display generation unit 11 determines whether or not it is a specification of an action on a content (step S34). The specification of an action on a content is a content playback instruction, a fast forward instruction, an erase instruction, etc. If it is determined YES in step S34, it is determined whether or not the instruction is a content playback instruction (step S35). If the instruction is a content playback instruction, the display generation unit 11 plays back the content pointed to by the cursor from the time position of the focus image (step S36). If the instruction is other than a content playback instruction, then the display generation unit 11 performs another process corresponding to the contents of the instruction (step S37).
As described above, in the process shown in
Next, the processing of selecting related contents using the information about the framed image at the position of the focus image is described below.
First, the display generation unit 11 determines whether or not the information about the framed image at the position corresponding to the focus image is to be used in selecting related contents (step S51). Whether or not the information about the framed image at the focus position is to be used in selecting related contents is predetermined and stored in the display generation unit 11 or the storage device, and the display generation unit 11 makes the determination based on the set information.
If it is determined YES in step S51, the display generation unit 11 acquires the information about a character at the time of the focus position (step S52). The information is acquired by, for example, retrieving the information about the character in the text data shown in
Then, the contents in which the character appears are selected (step S53). Practically, the display generation unit 11 searches the character column of the text data, and the contents storing the character name in that column are retrieved and selected.
Then, the display generation unit 11 performs rearrangement processing by sorting the plural selected contents in a predetermined order, for example, in order of recording time (step S54). From among the rearranged contents, a predetermined number of contents to be displayed, that is, four contents above and below in this example, are selected (step S55). As a result, the four selected related contents are displayed above and below the selected content serving as the focus image on the screen.
If it is determined NO in step S51, the related contents are selected under the initial condition (step S56), and control is passed to step S54.
Since the processes above are executed each time a focus image is changed, the related contents above and below are dynamically reselected, changed, and displayed.
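The selection of steps S51 through S56 could be sketched as follows. The field names (`characters`, `recorded`) and the default of four displayed contents are assumptions drawn from this example, not a definitive implementation.

```python
# Hypothetical sketch of related-content selection (steps S51-S56).

def select_related(contents, character, limit=4):
    # step S53: contents whose text data lists the character
    hits = [c for c in contents if character in c['characters']]
    # step S54: rearrange in recording-time order
    hits.sort(key=lambda c: c['recorded'])
    # step S55: keep only the number of contents to be displayed
    return hits[:limit]
```

Because this runs on every focus change, the displayed neighbors are recomputed dynamically, as the text notes.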
If the selection portion for “searching for a similar scene” has been selected (diagonal lines are added) as shown in
Next, the highlight display processing as shown in
First, the display generation unit 11 determines the total number of thumbnail images for display of the sequence of images of contents, and the size of displayed sequence of images (step S61). Then, the display generation unit 11 acquires the time series data based on which the display size of each thumbnail image is determined (step S62). The time series data is the data in the contents information set and stored in the display generation unit 11 or the storage device.
The display generation unit 11 reads and acquires the data of one thumbnail image of the target contents (step S63).
It is determined whether or not the acquired thumbnail image data of the target contents is data to be displayed as highlighted (step S64).
If the data of the thumbnail images is the data to be displayed as highlighted, the amount of scaling is set for the highlight size (step S65). If the data is not to be displayed as highlighted (if NO in step S64), then the amount of scaling of the thumbnail image is determined based on the time series data (step S66).
Next, it is determined whether or not all thumbnail images have been processed (step S67). If not, it is determined NO in step S67, and control is passed to step S63.
If all thumbnail images have been processed, it is determined YES in step S67, and the amounts of scaling of all images are amended so that, when all thumbnail images are displayed, they fit within a predetermined display width (step S68). Thus, each thumbnail image can be kept within the predetermined display width.
Then, the scaling processing of all thumbnail images is performed (step S69). In addition, the size of the display as a sequence of images is adjusted.
Then, the display generation unit 11 displays all thumbnail images (step S70).
In the above-mentioned process, the sequence of images of one content is displayed highlighted as shown in
In the above-mentioned example, thumbnail images are read and processed one by one in step S63, but all thumbnail images may instead be read at once, and a predetermined number of scenes ranked by the time series data, for example, the top 10 scenes, may be highlighted and displayed.
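The sizing logic of steps S61 through S69 can be illustrated as follows. The base widths, score-to-scale mapping, and the highlight factor of 2.0 are illustrative assumptions; the essential points are the two scaling branches (steps S65 and S66) and the final amendment so the row fits the predetermined display width (step S68).

```python
# Hypothetical sketch of the thumbnail highlight sizing (steps S61-S69).

def thumbnail_scales(thumbs, display_width, highlight_scale=2.0):
    scales = []
    for t in thumbs:
        if t['highlight']:                 # steps S64/S65: highlight size
            scales.append(highlight_scale)
        else:                              # step S66: scale from time series data
            scales.append(1.0 + t['score'])
    # step S68: amend all scales so the row fits the predetermined display width
    total = sum(s * t['width'] for s, t in zip(scales, thumbs))
    factor = display_width / total
    return [s * factor for s in scales]
```

Highlighted thumbnails stay proportionally larger after the amendment, while the row as a whole always occupies exactly the display width.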
As described above, using the GUI1 and GUI2, the user can retrieve interesting contents in fewer steps, following a natural associative method for a person. Practically, the following processes can be performed.
(1) A video content is searched for in a three-dimensional time-axis space by the GUI1.
(2) With a time axis taken into account, contents are rearranged by the GUI1 in a three-dimensional space including that time axis.
(3) The title of an interesting content is called up by the GUI1.
(4) A scene is selected while browsing the entire contents by the GUI2.
(5) After browsing the scenes by the GUI2, a content in which the same character appeared on the preceding day is retrieved.
As described above, the video contents display apparatus 1 of the present embodiment can provide a graphical user interface capable of easily and pleasantly selecting and viewing a desired video content, and a desired scene in that content, from among plural video contents.
Described next are variation examples of the GUI1.
There are following cases in generating a screen of the GUI1 shown in
(Case 1-1) “A user requests to view the content B produced in the same period as the content A viewed in those days or at that time.”
(Case 1-2) “A user requests to view other video contents D or scenes E having the same period background as the scene C.”
In Case 1-1, the data such as the date and time of production is stored as common time axis data in the contents information about the contents A and B. Therefore, by searching the date-and-time-of-production data, the contents “produced in the same period” can be extracted, and the extracted contents can be displayed as a list.
In Case 1-2, the data such as period settings is stored as common time axis data in the contents information about the scene C, the content D, and the scene E. Therefore, by searching the period-setting data, the contents “having the same period background” can be extracted, and the extracted contents can be displayed as a list.
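Both cases amount to a search over a single common time axis of the contents information. A minimal sketch, assuming each content is a record keyed by axis names such as `produced` or `period_setting` (names chosen here for illustration), could look like this:

```python
# Hypothetical sketch of Case 1-1 / Case 1-2 extraction: find contents whose
# value on one common time axis matches (within a tolerance) the reference.

def same_period(contents, reference, axis, tolerance=0):
    ref = reference[axis]
    return [c for c in contents
            if c is not reference and abs(c[axis] - ref) <= tolerance]
```

With `axis='produced'` this yields the Case 1-1 list; with `axis='period_setting'` the Case 1-2 list.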
Therefore, in this case, if various data such as the date and time of production, the date and time of broadcast, the date and time of shooting, the date and time of recording, the date and time of viewing, etc. are set as the time axis data, then the user can use the data of the GUI1 to select the selection portion shown in
That is, when a user as a person memorizes an event etc. along a time axis, the device according to the present embodiment provides a screen display shown in
However, there are also the following cases.
(Case 2-1) “The user requests to view the content Q frequently viewed when the content P was purchased, and the content R having the same period settings.”
(Case 2-2) “The user requests to view the content B broadcast when the content A was previously viewed.”
In Case 2-1, the time (date and time) when the content P was purchased matches the time (date and time) when the content Q was viewed, but the time axes of the two times are different from each other. One is the date and time of purchase, and the other is the date and time of viewing. Therefore, in the user view space 101 as shown in
In Case 2-2, the time (date and time) when the content A was previously (or last) viewed is close to the time (date and time) when the content B was broadcast, but the time axes of the two times are different from each other. One is the last date and time of viewing, and the other is the date and time of broadcast. Therefore, in the user view space 101 of the GUI1 described above, the contents A and B are not necessarily arranged close to each other.
These cases are described below with reference to the attached drawings.
In
However, in Case 2-2, “the content B broadcast when the content A was previously viewed”, the content B cannot be retrieved from the contents information about the content A.
Four methods of solving the above-mentioned problems are described below.
First, the first solution is described.
Additionally, in Case 2-1, although not shown in the attached drawings, a selection portion to “collect contents having the same period settings as the content broadcast on the purchase day of the content” is added. In addition, for example, selection portions can change the view point position by “moving to the view point centered on the date and time on the time axis B (axis of the date and time of broadcast) that is the same as the date and time on the time axis A (axis of the date and time of previous viewing)”, “moving to the view point centered on the date and time on the time axis C that is the same as the date and time on the time axis A (axis of the date and time of previous viewing)”, etc.
As described above, by using the commands of these selection portions and retrieving data on another time axis using the time axis data in the contents information about the content in the focus state, related contents can be retrieved in Cases 2-1 and 2-2.
As described above, plural selection portions corresponding to combinations of anticipated retrievals may be prepared, such that the plural selection portions can be displayed on the screen as a selection menu; alternatively, a screen on which related combinations can be selected may be displayed, such that selecting a combination generates the retrieval command.
Next, the second solution is described below.
In the above-mentioned Case 2-2, the content in the focus state has the data of the last date and time of viewing, that is, “two years ago” in this embodiment, and the data of the date and time of broadcast, that is, “three years ago” in this embodiment. The second solution uses the time data of “two years” and “three years” to expand the display range of the user view space. That is, the second solution determines the display range of the user view space using only the time data relating to the retrieval condition of Case 2-2 etc. (in the example above, the time data of “two years” regardless of its time axis of “date and time of viewing”, and the time data of “three years” regardless of its time axis of “date and time of broadcast”), irrespective of the time axes to which those data belong, and expands the display to the time range in which the content to be retrieved and viewed can exist.
From the time data of “two years” and “three years”, the display range of the user view space 101 is set on each time axis to the one-year span from two years ago to three years ago, the data of the user view space is generated, and the user view space is displayed. As a result, in the user view space 101, only the contents in the display range are displayed, and the user can easily find a target content. In
When there are three or more pieces of time data, the maximum and minimum values of those data may simply be used as the display range data for all three time axes of the user view space 101B.
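The second solution thus reduces to taking the minimum and maximum of the condition-related time data, ignoring which axis each datum belongs to, and applying that single range to every axis of the view space. A sketch under that assumption:

```python
# Hypothetical sketch of the second solution: the display range is the
# min/max envelope of the time data relating to the retrieval condition,
# regardless of the time axis each datum belongs to.

def display_range(time_values):
    """Range (lo, hi) applied to every time axis of the user view space."""
    return min(time_values), max(time_values)

def contents_in_range(contents, axes, lo, hi):
    # a content is shown if any of its time data on the displayed axes falls
    # inside the range (an assumption about the selection rule)
    return [c for c in contents
            if any(lo <= c[a] <= hi for a in axes if a in c)]
```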
If a large number of contents still remain although the display range is limited, the types of contents may be narrowed using the user taste data, that is, the user taste profile, thereby decreasing the number of contents to be displayed.
Otherwise, an upper limit may be placed on the number of contents to be displayed, such that if the upper limit is exceeded, exactly the upper-limit number of contents is extracted by random sampling, thereby limiting the number of contents to be displayed.
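The random-sampling cap described above is a one-liner in practice; a sketch (the seed parameter is added here only to make the behavior reproducible for illustration):

```python
# Hypothetical sketch of the upper-limit rule: when the number of contents
# exceeds the limit, a random sample of exactly the limit is displayed.
import random

def cap_contents(contents, limit, seed=None):
    if len(contents) <= limit:
        return list(contents)
    rng = random.Random(seed)
    return rng.sample(contents, limit)   # sampling without replacement
```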
As described above, the time data relating to the retrieval condition is extracted regardless of its time axis, and is used in determining the display range of the user view space. Thus, the display range can be limited to the time range in which the content to be retrieved and viewed can exist, and the user can retrieve related contents in Cases 2-1 and 2-2.
Described below is the third solution.
In Case 2-2 above, the content in the focus state has three pieces of time data on three time axes. In the third solution, contents having time data the same as or similar to each of the three pieces of time data, but on a different time axis, are retrieved and extracted, and the retrieved contents are displayed in the user view space. That is, according to the third solution, displayed are contents having, on the two time axes other than the axis to which a given piece of time data belongs, time data the same as or similar to that piece of time data of the content in the focus state.
Practically, if the three time axes of the content in the focus state are the date and time of production, the date and time of broadcast, and the final date and time of playback, a retrieval over other time axes is performed using those three pieces of data. That is, contents having time data the same as or similar to the time data relating to the X axis, and also having time data relating to the Y and Z axes, are retrieved. Similarly, contents having time data the same as or similar to the time data relating to the Y axis and also having time data relating to the X and Z axes are retrieved. Similarly, contents having time data the same as or similar to the time data relating to the Z axis and also having time data relating to the X and Y axes are retrieved, and they are displayed with the content in the focus state in the user view space. As a result, contents having time data the same as or similar to the three pieces of data can be retrieved. Then, the extracted and acquired contents are displayed in the screen format as shown in
Thus, the user can easily retrieve the contents relating to the contents in the focus state.
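The cross-axis matching of the third solution can be sketched as follows. The axis names and tolerance are assumptions for illustration; the essential behavior is that each time datum of the focus content is compared against the *other* axes of every candidate content.

```python
# Hypothetical sketch of the third solution: for each of the focus content's
# time data, retrieve contents whose time data on a DIFFERENT axis is the
# same or similar (within a tolerance).

def cross_axis_related(contents, focus, axes, tolerance=1):
    related = []
    for a in axes:
        ref = focus[a]
        for c in contents:
            if c is focus or c in related:
                continue
            # compare ref against every axis of c other than a
            if any(b != a and b in c and abs(c[b] - ref) <= tolerance
                   for b in axes):
                related.append(c)
    return related
```

In Case 2-2 terms, a content whose date of broadcast matches the focus content's date of previous viewing is now retrieved, even though the two data lie on different axes.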
Described below is the fourth solution.
The fourth solution is to include, in the contents information for each content (or in association with the contents information), the date and time of occurrence of an event expressed in an absolute time, and to display the contents on the screen such that the concurrence and the relation of the dates and times of occurrence of events between the contents can be clearly expressed. That is, the fourth solution stores one or more events occurring in a content in association with time information (event time information) in a reference time (in the following example, an absolute time such as the global standard time) indicating the time of occurrence, and displays the events on the screen such that the concurrence etc. of the dates and times of occurrence of the events between the contents can be clearly expressed.
Described practically below is the fourth solution.
First, the contents information relating to the fourth solution is described below.
That is, the targets to be stored as event information are predetermined, and if an operation etc. by the user on the TV recorder etc. serving as the video contents display apparatus corresponds to a predetermined event, event information is generated in association with the content for which the event has occurred based on the operation, and the information is added to the contents information in the storage device 10A.
As described above, for some contents or time axes, it may not be necessary to store the time data down to the hour, minute, and second for the date and time data in the event information. In this case, only the year, or only the year and month, may be stored as period data. For example, for the time axis of the period setting of a history drama, time data of only the year, or period data of only the year and month, is recorded.
Furthermore, the data structure may be a table format relating each content, as a key, to a sequence of events, or may be expressed in an XML format.
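As one illustration of such an XML expression, a content keyed to its sequence of events might be serialized as below. The element and attribute names are assumptions for this sketch, not the actual schema of the apparatus.

```python
# Hypothetical sketch: event information keyed by content, serialized to XML.
import xml.etree.ElementTree as ET

def events_to_xml(content_id, events):
    """events: list of (event_name, datetime_string) tuples for one content."""
    root = ET.Element('content', {'id': content_id})
    for name, when in events:
        # period data may carry only the year, or year and month, as noted above
        ET.SubElement(root, 'event', {'name': name, 'time': when})
    return ET.tostring(root, encoding='unicode')
```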
The video contents display apparatus 1 can display the three-dimensional display screen as shown in
In
The axes other than the Z axis need not relate to time. For example, the X axis and the Y axis may arrange titles in the order of the Japanese syllabary, in alphabetical order, in the order of user viewing frequency, etc.
Practically, as shown in
Furthermore, each content is arranged in a corresponding position on each time axis with respect to the other user-selected time axes (the X axis and the Y axis). In the case shown in
In
Similarly, the other contents 202 and 203 are displayed. Practically, the content 202 includes three events, and the three blocks respectively indicating those events are connected by the bar unit 211. The content 203 includes four events, and the four blocks indicating those events are connected by the bar unit 211.
In this example, when plural contents are displayed in a predetermined display mode, an event is represented in a block form, and the connection between the blocks is indicated by a bar unit. The predetermined display mode may be any display mode other than the display mode shown in
In the display state shown in
In
Practically, a content 201 has three events 201A, 201B, and 201C. In
There is a portion having a predetermined width at the center of the screen. This portion indicates the attention range IR, shown as the portion indicated by diagonal lines in
In
A method of specifying a selected event can be, as described above, to automatically specify, as the selected event, the event at or substantially at the center of the plural events of the content when a predetermined operation is performed on the screen display shown in
In the above-mentioned example, the event at or substantially at the center of the plural events of the content in the focus state is the selected event, but another event (for example, the event with the earliest date and time of occurrence (the event 201A in the content 201)) can be the selected event.
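Both default rules above amount to picking one event from the content's time-ordered event list; a minimal sketch, assuming events are given as comparable time values:

```python
# Hypothetical sketch of the default selected-event rule: the central event
# (or, alternatively, the earliest event) of the content in the focus state.

def default_selected_event(events, earliest=False):
    events = sorted(events)                     # time order
    return events[0] if earliest else events[len(events) // 2]
```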
Furthermore, in a state in which a once-selected event is displayed as included in the attention range IR, a predetermined operation can be performed using a mouse etc. to define another event as the selected event. For example, in the display state of the display screen shown in
Furthermore, the selection may be performed by pressing the left and right arrow keys on the keyboard etc. to move the viewpoint position by a predetermined amount, or continuously while the key is pressed, in the direction selected by the left or right key. At this time, the attention range IR also changes on the absolute time axis with the movement of the viewpoint position. When an event of the content in the focus state is positioned in the attention range IR, that event is regarded as selected, and each content enters the display state as shown in
In the description above, the contents 202 and 203, each including an event having a date and time of occurrence the same as or close to that of the event 201B, are displayed. However, a content (for example, the content 204 indicated by the dotted line in
Therefore, depending on the change of the selected event, the contents including an event having a date and time of occurrence the same as or close to that of the selected event, and the contents including no event having a date and time of occurrence in the attention range IR, change, and the display mode of each content changes dramatically.
As described above, according to the display screen shown in
If a user requests to “view the content B broadcast when the content A was previously viewed” as in Case 2-2, the user can easily extract or identify it by viewing the screen shown in
Similarly, in Case 2-1, “the user requests to view the content Q frequently viewed when the content P was purchased, and the content R having the same period settings”, the user can easily make the determination by checking the screen shown in
Also in the display state shown in
In the case shown in
A practical example is described below with reference to
When a content 401 is produced, it is assumed that the past scene is shot on location in two separate parts. In this case, as shown in
With this display, a user can easily recognize a change in time on the time axis even within one event.
As described above, as shown in
In
Next, the process of the screen display shown in
First, when the GUI3 button 95f is pressed, the display generation unit 11 determines whether or not the view point for the user view space is fixed to the direction orthogonal to the absolute time axis (step S101). The determination is performed according to information preset by the user in the memory of the display generation unit 11, for example, a rewritable memory. For example, if the user sets the display shown in
The GUI3 button 95f may also be designed so that, when no preset exists, pressing it displays a popup window that allows the user to select one of the displays of
Next, the display generation unit 11 reads the time axis data of the content in the focus state, that is, the reference content (step S102). The read time axis data is a time axis data including the event information shown in
Next, the display generation unit 11 determines the time axes of the X axis and the Y axis (step S103). The determination is performed according to information preset by the user in the memory of the display generation unit 11, for example, a rewritable memory. For example, if the user sets the time axes of the X axis and the Y axis shown in
When settings are not preset, a predetermined popup window may be displayed on the screen to allow the user to select the time axes of the X axis and the Y axis.
Next, the display generation unit 11 determines the display range of the absolute time axis (step S104). The display range of the absolute time axis can be determined by the data indicating the range, for example, from “1990” to “1999”. The display range in the Z axis direction in the user view space shown in
Next, the display generation unit 11 determines the attention range IR (step S105). The attention range IR in the Z direction within the user view space in
Further, the display generation unit 11 uses the time axis keys of the X and Y axes to retrieve contents in the range of the user view space, in order to extract and select the contents in the user view range (step S106). The display generation unit 11 determines the position of each content in the user view space in
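Steps S104 through S106 can be sketched as fixing a display range on the absolute time axis, deriving a narrower attention range around its center, and selecting the contents with at least one event inside the display range. The data layout (each content carrying a list of event times) and the centering rule for the attention range are assumptions of this sketch.

```python
# Hypothetical sketch of steps S104-S106 of the GUI3 screen generation.

def build_view(contents, z_lo, z_hi, ir_width):
    center = (z_lo + z_hi) / 2
    ir = (center - ir_width / 2, center + ir_width / 2)   # attention range IR
    # select contents having at least one event inside the display range
    visible = [c for c in contents
               if any(z_lo <= t <= z_hi for t in c['events'])]
    return ir, visible
```

Moving the viewpoint would simply shift `z_lo`/`z_hi` (and hence the attention range) and rerun this selection, which matches the re-entry to step S101 described below.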
The user can manipulate the arrow keys, the mouse, and the like to change the display range of the attention range in the user view space while viewing the user view space in
In response to such manipulation, the user view space or the attention range is changed. Accordingly, the display generation unit 11 determines whether or not the user view space or the attention range has been changed (step S108). When the display generation unit 11 determines that such a change has been made, which is indicated by YES in step S108, the process returns to step S101. Alternatively, when YES in step S108, the process may return to step S104 or another step.
When such a change has not been made, which is indicated by NO in step S108, the display generation unit 11 determines whether or not one of the contents has been selected (step S109). Once a content has been selected, the display generation unit 11 performs a process for displaying the GUI2 (such as
When NO in step S101, the process continues with the process in
The display generation unit 11 reads time axis data for all contents (step S121). The display generation unit 11 then determines time axes of the X and Y axes (step S122). Similarly to step S103 as described above, this determination may also be made based on information predefined by the user in the memory, e.g. a rewritable memory, of the display generation unit 11, or by displaying a predetermined pop-up window on the screen to allow the user to select respective time axes of the X and Y axes.
Next, the display generation unit 11 determines and generates an X, Y, and Z three-dimensional time space, with the Z axis as the absolute time (step S123).
The display generation unit 11 then determines whether or not past view information is used (step S124). When past view information is used, which is indicated by YES in step S124, the display generation unit 11 determines the position of each content in the user view space and the position of each event (step S125). Step S125 corresponds to the position determination unit that determines positions on plural time axes for each of plural video contents, and a position on the absolute time axis for each of plural events, based on the time information of the plural video contents and the event time information.
The view origin may default to centering on the current date. In addition, the scale of each axis may be selectable, for example, in units of hours, weeks, months, or other units.
The display generation unit 11 then saves each parameter of the view information in the storage device 10A (step S126).
When NO in step S124, that is, when past view information is not used, the display generation unit 11 performs a process to change the view information. In this process, a pop-up window having plural input fields for the parameters of the view information can be displayed to allow the user to input each parameter and finally operate a confirmation button or the like to complete the setting. Alternatively, the plural parameters may be set separately by the user.
Therefore, a determination is initially made whether or not the viewpoint is changed (step S127). When the viewpoint is to be changed, which is indicated by YES in step S127, the display generation unit 11 performs a process to change the parameters for the viewpoint (step S128).
After steps S127 and S128, a determination is made whether or not the view origin is changed (step S129). When the view origin is to be changed, which is indicated by YES in step S129, the display generation unit 11 performs a process to change the parameters for the view origin (step S130).
Similarly, after steps S129 and S130, a determination is made whether or not the display range of Z axis is changed (step S131). When the display range of Z axis is to be changed, which is indicated by YES in step S131, the display generation unit 11 performs a process to change the parameters for the display range of Z axis (step S132).
Similarly, after steps S131 and S132, a determination is made whether or not the display range of X or Y axis is changed (step S133). When the display range of X or Y axis is to be changed, which is indicated by YES in step S133, the display generation unit 11 performs a process to change the parameters for the display range of X or Y axis (step S134).
Incidentally, the change of the display range in steps S132 and S134 may be performed using either data for a specific time segment, for example, the years from “1990” to “1995”, or ratio or scale data.
After steps S133 and S134, a determination is made whether or not the change process for the view information is completed (step S135). The determination can be made based on, for example, whether or not the confirmation button is pressed as described above. When NO in step S135, the process returns to step S127. When YES in step S135, the process continues with step S126.
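The parameter-update loop of steps S127 through S135 checks each parameter group in turn and applies the requested change; a minimal sketch, with parameter names assumed for illustration:

```python
# Hypothetical sketch of the view-information change process (steps S127-S135):
# each parameter group is checked and updated in turn before confirmation.

def apply_view_changes(view, changes):
    """view: dict of current parameters; changes: dict of requested updates."""
    for key in ('viewpoint', 'view_origin', 'z_range', 'xy_range'):
        if key in changes:                 # steps S127/S129/S131/S133
            view[key] = changes[key]       # steps S128/S130/S132/S134
    return view
```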
When the view information is changed from steps S127 to S135, it is possible to make a display viewed in the direction perpendicular to the XZ, XY, or YZ plane as shown in
After the step S126 process, the process continues with step S107 in
In this way, the screens as shown in
Incidentally, in the case of the processes in
On the other hand, in the case of the processes in
Therefore, the process to generate the display screen in
Incidentally, in making a display such as in
As described above, according to the fourth solution, the date and time of occurrence of an event relative to a reference time is included in (or associated with) the contents information for each content, and is displayed on the screen so that the concurrence and association of event occurrence dates and times between contents can be recognized; thereby the user can easily retrieve a desired scene or content from plural video contents.
A program that performs the operations described above may be entirely or partially recorded or stored on portable media such as a flexible disk, a CD-ROM, and the like, or on a storage device such as a hard disk, and can be provided as a program product. The program is read by a computer to execute the operations entirely or partially. The program can also be entirely or partially distributed or provided through a communication network. The user can easily realize the video contents display apparatus according to the present invention by downloading the program through the communication network and installing it on a computer, or by installing the program on a computer from a recording medium.
Although the foregoing embodiment has been described using video contents by way of example, the present invention may be applicable to music contents having time-related information such as the production date and the playback date, and may further be applicable to document files such as document data, presentation data, and project management data, which have time-related information on, e.g., creation and modification. Alternatively, the invention may be applicable to a case where a device for displaying video contents is provided on a server or the like to provide a video contents display through a network.
As described above, according to the foregoing embodiment, a video contents display device can be realized with which a desired scene or contents can be easily retrieved from plural video contents.
The present invention is not limited to the embodiment described above, and various changes and modifications may be made within the scope of the invention without departing from its spirit.
Number | Date | Country | Kind |
---|---|---|---
2006-353421 | Dec 2006 | JP | national |