Apparatus and method for separately viewing multimedia content desired by a user

Information

  • Patent Grant
  • Patent Number
    9,355,682
  • Date Filed
    Friday, October 29, 2010
  • Date Issued
    Tuesday, May 31, 2016
Abstract
An apparatus and method for reproducing multimedia content are provided. Content selected through a user input unit is reproduced, and if it is requested that part of the reproduced content be registered as content of interest, metadata about that part is generated and stored using metadata about the reproduced content. The generated metadata is re-constructed and stored as metadata of interest about the content of interest according to input from the user input unit.
Description
PRIORITY

This application claims priority under 35 U.S.C. §119(a) to an application filed in the Korean Intellectual Property Office on Oct. 30, 2009 and assigned Serial No. 10-2009-0104527, the contents of which are incorporated herein by reference.


BACKGROUND OF THE INVENTION

1. Field of the Invention


The present invention relates generally to a multimedia content reproducing apparatus and method, and more particularly, to an apparatus and method which can separately view only multimedia content desired by a user.


2. Description of the Related Art


With the development of the Internet, users may access the Internet through communication devices such as personal computers, notebook computers, cellular phones, etc. to receive various multimedia content such as sound, images, data, etc. Recently, such multimedia content can be received even through a vehicle-mounted or portable Digital Multimedia Broadcasting (DMB) receiver via satellite/terrestrial DMB.


However, to view multimedia content through the Internet, a user has to wait until the multimedia content is buffered by accessing a server in which the content is stored. Further, in order to view a desired scene, a user has to search for the desired scene by reproducing the corresponding content from the beginning or by using a Fast Forward (FF) or Rewind (REW) button, and has to wait until the desired scene is buffered.


Therefore, a method which can easily extract and view desired sections or scenes from a plurality of multimedia content is needed.


SUMMARY OF THE INVENTION

An aspect of the present invention is to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present invention provides a multimedia reproducing apparatus and method which can separately reproduce only desired scenes from content desired by a user.


Another aspect of the present invention provides a multimedia reproducing apparatus and method which can separately reproduce only desired scenes from content desired by a user by managing metadata about the content desired by the user.


In accordance with an aspect of the present invention, a method for reproducing multimedia content, provided through the Internet, includes reproducing content selected through a user input unit, if it is requested that part of the reproduced content be registered as content of interest, generating and storing metadata about the part of the reproduced content, using metadata about the reproduced content, and re-constructing metadata which is generated with respect to a plurality of content of interest as metadata of interest according to input of the user input unit and storing the metadata of interest.


In accordance with another aspect of an embodiment of the present invention, an apparatus for reproducing multimedia content provided through the Internet includes a user input unit for receiving reproduction information about multimedia content and information about parts desired to be managed as content of interest out of currently reproduced content, a metadata generator for generating information about the content of interest selected by the user input unit as metadata and re-constructing metadata which is generated with respect to a plurality of the content of interest as metadata of interest, a metadata storage unit for storing the metadata generated from the metadata generator and the metadata of interest, a content reproducing unit for confirming the metadata of interest in the metadata storage unit and controlling the content of interest to be reproduced according to the metadata of interest, when a request that the content of interest be reproduced is received through the user input unit, and a display unit for displaying the content of interest which is reproduced by the control of the content reproducing unit so that a user can view the content of interest.





BRIEF DESCRIPTION OF THE DRAWINGS

The above and other aspects, features and advantages of certain embodiments of the present invention will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:



FIG. 1 is a diagram illustrating an entire construction of a content providing system according to an embodiment of the present invention;



FIG. 2 is a block diagram illustrating a construction of a user terminal according to an embodiment of the present invention;



FIG. 3 is a diagram illustrating a user interface screen for registering and reproducing content of interest according to an embodiment of the present invention;



FIG. 4 is a diagram illustrating a user interface screen for registering and reproducing content of interest according to another embodiment of the present invention; and



FIG. 5 is a flow chart illustrating a process for registering content of interest in a user terminal according to an embodiment of the present invention.





Throughout the drawings, the same drawing reference numerals will be understood to refer to the same elements, features and structures.


DETAILED DESCRIPTION OF EMBODIMENTS OF THE PRESENT INVENTION

Reference will now be made in detail to the embodiments of the present invention with reference to the accompanying drawings. The following detailed description includes specific details in order to provide a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without such specific details.


Generally, various multimedia content, such as moving picture content, provided on a website, is transmitted to a user terminal in the form of a streaming service in order to provide a real-time service. Accordingly, if a user desires to re-view a specific image section or scene while viewing multimedia content, the user should download the whole multimedia content or manipulate the seek bar of the user terminal while viewing the multimedia content.


To solve such inconvenience, an embodiment of the present invention provides a multimedia content reproducing method in which information about image sections or scenes (hereinafter, "sections of interest") that a user desires to re-view out of a plurality of multimedia content is registered, and, if the user requests that the sections of interest be reproduced, the previously registered sections of interest (hereinafter, "content of interest") can be reproduced successively or selectively, based on the section of interest information.


To register the sections of interest, the user terminal may be any of various terminals capable of reproducing multimedia content, such as a cellular phone, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a notebook computer, a palmtop, etc. The user terminal includes a client program to register and store the sections of interest.


The client program is configured to successively reproduce a plurality of content of interest or selectively reproduce at least one content of interest, according to a reproduction request through the user terminal.


If a user employs a plurality of user terminals, section of interest information registered through those user terminals may be uploaded to a prescribed website of which the user is a member, and other users authenticated through the website may browse the sections of interest by accessing the website.


In an embodiment of the present invention, the section of interest information may include information about location, storage region, use rights etc. of content of interest so that the content of interest can be reproduced through download from a website in a streaming form without being directly stored in the user terminal. The section of interest information may be comprised of metadata.


The client program may be configured so as to reproduce content of interest after directly storing a plurality of multimedia content in a user terminal and registering a plurality of section of interest information about the plurality of stored multimedia content.


In an embodiment of the present invention, if the section of interest information is managed on a predetermined website, content of interest can be reproduced using a general terminal in which the client program is not installed. In this case, the website which manages the section of interest information extracts only content of interest from multimedia content and provides the content of interest to the user terminal in the form of a streaming service.


It is assumed in an embodiment of the present invention that the section of interest information is comprised of metadata. If the section of interest information can be downloaded in a streaming form through a website, a variety of known data formats can be used.


Metadata refers to data that is assigned to content according to a predetermined rule in order to efficiently search for and use desired information out of a large quantity of information. At least one attribute, such as the location and details of content, information about a content maker, right condition of use rights, use condition, and use history, may be recorded in the metadata. Using the metadata, a user can reduce the time spent searching for desired content by confirming desired content of interest and directly accessing a server which provides the content of interest.



FIG. 1 is a diagram illustrating an entire construction of a content providing system according to an embodiment of the present invention.


Referring to FIG. 1, the content providing system includes a user terminal 110, a content server 120, a metadata server 130, and a billing and authentication server 140, which are connected to each other through the Internet 100.


User terminal 110 can reproduce desired content by accessing content server 120 through the Internet 100 based on an Internet Protocol (IP).


Content server 120 provides content and a user interface to user terminal 110 so that the user can manage the desired part of content through metadata.


Metadata server 130 provides metadata about content offered to the user to content server 120.


Billing and authentication server 140 performs billing and payment functions for content provided to the user.


Although content server 120, metadata server 130, and billing and authentication server 140 are separately represented in FIG. 1, they may be constructed in one device.



FIG. 2 is a block diagram illustrating a construction of user terminal 110 according to an embodiment of the present invention.


As illustrated in FIG. 2, user terminal 110 includes a user input unit 111, a metadata generator 112, a metadata storage unit 113, a content reproducing unit 114, and a display unit 115. At least one of the constituent elements, of user terminal 110, may be implemented by a client program. Herein, the term “unit” refers to a hardware device or a combination of hardware and software.


User input unit 111 receives information about play/FF/REW of content from a user and receives time information about the start and end of content desired by the user to be classified as content of interest.


Metadata generator 112 receives information about the content of interest selected by a user from the user input unit 111 and generates metadata about the corresponding information. The metadata is about content of interest and is generated by processing metadata about content offered by content server 120 according to information received from user input unit 111. Metadata generator 112 generates metadata about a plurality of content of interest according to information from user input unit 111 and re-constructs the generated metadata as one metadata (hereinafter, "metadata of interest"). The re-constructed metadata of interest is integrated metadata about the content of interest requested by the user, enabling multiple content of interest to be reproduced successively in a single session. The metadata of interest includes information about storage location, storage region, type, use rights, etc. of the corresponding content.
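As a non-limiting illustration of how metadata generator 112 might combine per-section records into one metadata of interest, the following Python sketch defines illustrative data structures; the class and field names are assumptions made for explanation only and are not prescribed by this description.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SectionMetadata:
    """Metadata for one section of interest, derived from the source content's metadata."""
    content_uri: str       # location of the source content (e.g., on content server 120)
    title: str             # label shown in the content of interest list
    start_sec: float       # start of the section within the content
    end_sec: float         # end of the section within the content
    use_rights: str = ""   # optional rights information carried over from the source metadata

@dataclass
class MetadataOfInterest:
    """One integrated ("re-constructed") record covering several sections of interest."""
    sections: List[SectionMetadata] = field(default_factory=list)

    def add_section(self, section: SectionMetadata) -> None:
        # The registration order is kept, which is also the order of successive reproduction.
        self.sections.append(section)

# Example: two sections from content held on two different servers merged into one record.
interest = MetadataOfInterest()
interest.add_section(SectionMetadata("http://server-a.example/lecture1.mp4", "Lecture 1 intro", 30.0, 95.0))
interest.add_section(SectionMetadata("http://server-b.example/match.mp4", "Second-half goal", 2710.0, 2745.0))
```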


Metadata storage unit 113 stores the metadata of interest which is generated from and re-constructed in metadata generator 112.


Content reproducing unit 114 confirms, from metadata storage unit 113, metadata about content requested to be reproduced according to input from user input unit 111 and accesses a content server according to the metadata so that corresponding content can be reproduced through user terminal 110. If the requested content is content of interest, content reproducing unit 114 may initially buffer content to be subsequently reproduced while current content is reproduced in order to reproduce content provided by different content servers without discontinuity.
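A minimal sketch of this buffering behavior is given below; open_stream and play_stream are hypothetical callables standing in for the terminal's network and rendering layers, and prefetching the next section on a worker thread is only one possible realization.

```python
from concurrent.futures import ThreadPoolExecutor

def play_sections(sections, open_stream, play_stream):
    """Reproduce registered sections back to back, buffering the next section
    while the current one plays so that content provided by different content
    servers can be reproduced without discontinuity."""
    if not sections:
        return
    with ThreadPoolExecutor(max_workers=1) as prefetcher:
        current = open_stream(sections[0])
        for i, section in enumerate(sections):
            nxt = None
            if i + 1 < len(sections):
                # Begin buffering the following section in the background.
                nxt = prefetcher.submit(open_stream, sections[i + 1])
            play_stream(current, section)   # blocks until this section finishes
            if nxt is not None:
                current = nxt.result()      # the next stream is already (partly) buffered
```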


Display unit 115 displays content which is reproduced by the control of content reproducing unit 114, on a screen, so that a user can view the content. Information about content of interest stored by a user as well as the content is displayed on the display unit 115 so that the user can select desired content of interest through user input unit 111 from among a plurality of content of interest.



FIG. 3 is a diagram illustrating a user interface screen for registering and reproducing content of interest according to an embodiment of the present invention.


As illustrated in FIG. 3, a user interface includes a screen display region 310 for displaying a content reproducing screen and a metadata display region 320.


Screen display region 310 displays various icons for reproducing content selected from metadata display region 320, such as a screen size control icon, a volume control icon, play time display information, and the like.


Metadata display region 320 may display content of interest lists 323 and 324 selected and stored by a user. Content of interest may be classified by a user and managed as folders as indicated by reference numeral 322. Metadata display region 320 may display content of interest registration icon 321 so as to store a part of currently reproduced content as content of interest. A user can register part of an image displayed on screen display region 310 as the content of interest by clicking the content of interest registration icon 321. Moreover, part of currently reproduced content may be added to an already registered content of interest. An undesired part may be deleted from the registered content of interest and to this end, metadata display region 320 may include a content of interest delete icon (not shown).



FIG. 4 is a diagram illustrating a user interface screen for registering and reproducing content of interest according to another embodiment of the present invention and shows an example applying the present invention to video education content.


As illustrated in FIG. 4, a user interface includes a screen display region 410 for displaying a content reproducing screen and a metadata display region 420.


Metadata display region 420 is constructed similarly to the metadata display region 320 of FIG. 3.


According to characteristics of the video education content, screen display region 410 includes a moving picture display part 411 for displaying moving pictures and a text display part 412 for displaying the content of video lectures as text. Content of interest may be classified by a user and managed as folders as indicated by reference numeral 422. Metadata display region 420 may display content of interest registration icon 421 so as to store a part of currently reproduced content as content of interest. Although a user may register a desired part as content of interest using a scroll bar for video playback, text materials corresponding to a desired part of a video lecture may also be selected, by clicking them or through a drag function, and registered as content of interest together with the video part corresponding to those text materials. In this case, main text materials as well as main screens of content may be displayed on content of interest lists 423 and 424 of the metadata display region 420.



FIG. 5 is a flow chart illustrating a process for registering content of interest in a user terminal according to an embodiment of the present invention.


Referring to FIG. 5, a user terminal reproduces content selected through a user input unit in step 510. If a request to register part of the currently reproduced content as content of interest is received through the user input unit in step 520, a metadata generator generates metadata about the part of the currently reproduced content using metadata about the currently reproduced content received from a content server and stores the generated metadata in a metadata storage unit in step 530.


In step 540, it is determined whether there is content to be added as content of interest out of the currently reproduced content. If content to be added is present, steps 520 and 530 are repeated. If no content to be added is present, the generated metadata is stored as metadata of interest about the content of interest, or the metadata of interest is updated by adding the generated metadata to already generated and stored metadata of interest, according to user selection, in step 550.
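The flow of FIG. 5 can be summarized in the following sketch; the callables are hypothetical placeholders for the units described above (content reproducing unit, user input unit, metadata generator, metadata storage unit) and are assumptions made for illustration only.

```python
def register_content_of_interest(reproduce, next_registration_request,
                                 build_section_metadata, store_metadata):
    """Sketch of the FIG. 5 flow (steps 510-550)."""
    reproduce()                                # step 510: reproduce the selected content
    collected = []
    while True:
        request = next_registration_request()  # step 520: part requested as content of interest?
        if request is None:                    # step 540: nothing more to add
            break
        # Step 530: generate metadata about the selected part from the metadata of the
        # currently reproduced content and store it.
        section_metadata = build_section_metadata(request)
        store_metadata(section_metadata)
        collected.append(section_metadata)
    # Step 550: store the collected records as metadata of interest, or merge them
    # into already stored metadata of interest, according to user selection.
    return collected
```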


Meanwhile, the metadata of interest generated in step 530 may be stored in a blog etc. of a website rather than the metadata storage unit of a user terminal to share the metadata of interest with users of the website. In this case, the metadata of interest in which information about a section of interest of each content is stored may be generated or a Uniform Resource Identifier (URI) of each section of interest may be generated and stored.


When generating metadata of interest, a URI of the corresponding content and information about the section of interest within the content are included in the metadata of interest. The URI of the content denotes the location of the content. In the metadata of interest, information about the start point of a section of interest and about the selected section may be stored as a byte offset, a time, or a frame.
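For example, the start and end of a selected section might be recorded in any of the following illustrative forms (the keys and URI are assumptions, not a prescribed schema):

```python
# Three illustrative ways of recording the same kind of section of interest in metadata of interest.
section_by_time  = {"uri": "http://server.example/movie.mp4", "unit": "seconds", "start": 120.0, "end": 180.0}
section_by_bytes = {"uri": "http://server.example/movie.mp4", "unit": "bytes",   "start": 1_048_576, "end": 4_194_304}
section_by_frame = {"uri": "http://server.example/movie.mp4", "unit": "frames",  "start": 3600, "end": 5400}
```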


When generating a URI of each section of interest, the URI is generated according to the specification of the World Wide Web Consortium (W3C) Media Fragments Working Group.
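For instance, the W3C Media Fragments URI syntax expresses a temporal section with a #t=start,end fragment (times given in Normal Play Time seconds by default); a minimal sketch of building such a URI for a section of interest is shown below, with an illustrative content URI.

```python
def media_fragment_uri(content_uri: str, start_sec: float, end_sec: float) -> str:
    """Append a W3C Media Fragments temporal fragment to a content URI."""
    return f"{content_uri}#t={start_sec:g},{end_sec:g}"

# e.g., http://server.example/lecture1.mp4#t=30,95
print(media_fragment_uri("http://server.example/lecture1.mp4", 30, 95))
```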


When metadata of interest is stored in a blog, a user may access the corresponding blog, perform user authentication, confirm the content of interest list stored in the blog, and select the desired content of interest to reproduce it on the web.


A user may edit the metadata of interest stored in the blog after user authentication, re-construct it in a desired form, and store the re-constructed metadata as a content of interest list. If the user then accesses the corresponding blog through user authentication, the user may confirm the stored content of interest list and reproduce the content of interest on the web by selecting it from the list.


According to embodiments of the present invention, since only desired scenes of content can be separately reproduced, a user does not need to reproduce content from the beginning in order to view a desired scene of desired content, nor to wait, after using an FF or REW button, until the desired scene is buffered, whenever viewing the corresponding content. Therefore, the user can quickly view the desired scene of the desired content without discontinuity.


Although the embodiments of the present invention have been disclosed for illustrative purposes, various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims. Accordingly, the scope of the present invention should not be limited to the description of the embodiment, but defined by the accompanying claims and equivalents thereof.

Claims
  • 1. A method for reproducing a multimedia content, the method comprising: reproducing a first multimedia content received from a first server; based on a selection of a first portion of the first multimedia content while reproducing the first multimedia content, generating first metadata including time information about the selected first portion within the first multimedia content and location information of the first multimedia content; reproducing a second multimedia content received from a second server; based on a selection of a second portion of the second multimedia content while reproducing the second multimedia content, generating second metadata including time information about the selected second portion within the second multimedia content and location information of the second multimedia content; generating one integrated metadata including the first metadata and the second metadata; accessing the first multimedia content stored in the first server and the second multimedia content stored in the second server using the location information included in the one integrated metadata; and buffering the second portion of the second multimedia content while reproducing the first portion of the first multimedia content using the time information about the first portion within the first multimedia content and the time information about the second portion within the second multimedia content.
  • 2. The method of claim 1, wherein the one integrated metadata further includes information for at least one of a content maker, a content type, right condition of use rights, use condition and use history for each of the first portion and the second portion.
  • 3. The method of claim 1, further comprising receiving information about a first text content that corresponds to the first portion.
  • 4. The method of claim 3, wherein the generated one integrated metadata further includes the first text content and a second text content that corresponds to the second portion, if the information about the first text content is received.
  • 5. The method of claim 4, wherein the buffering of the second portion buffers the second portion with the second text content while reproducing the first portion with the first text content using the time information about the first portion within the first multimedia content and the time information about the second portion within the second multimedia content.
  • 6. An apparatus for reproducing a multimedia content, comprising: a display unit; and a control unit configured to reproduce a first multimedia content received from a first server via the display unit, based on a selection of a first portion of the first multimedia content while reproducing the first multimedia content, generate first metadata including time information about the selected first portion within the first multimedia content and location information of the first multimedia content, reproduce a second multimedia content received from a second server, based on a selection of a second portion of the second multimedia content while reproducing the second multimedia content, generate second metadata including time information about the selected second portion within the second multimedia content and location information of the second multimedia content, generate one integrated metadata including the first metadata and the second metadata, access the first multimedia content stored in the first server and the second multimedia content stored in the second server using the location information included in the one integrated metadata, and buffer the second portion of the second multimedia content while reproducing the first portion of the first multimedia content using the time information about the first portion within the first multimedia content and the time information about the second portion within the second multimedia content.
  • 7. The apparatus of claim 6, wherein the one integrated metadata includes information for at least one of a content maker, a content type, right condition of use rights, use condition and use history for each of the first portion and the second portion.
  • 8. The apparatus of claim 6, wherein the control unit is further configured to receive information about a first text content that corresponds to the first portion.
  • 9. The apparatus of claim 8, wherein the control unit is further configured to generate the one integrated metadata further including the first text content and a second text content that corresponds to the second portion, when the information about the first text content is received.
  • 10. The apparatus of claim 9, wherein the control unit is further configured to buffer the second portion with the second text content while reproducing the first portion with the first text content using the time information about the first portion within the first multimedia content and the time information about the second portion within the second multimedia content.
Priority Claims (1)
Number Date Country Kind
10-2009-0104527 Oct 2009 KR national
US Referenced Citations (126)
Number Name Date Kind
5109482 Bohrman Apr 1992 A
5329320 Yifrach Jul 1994 A
5414808 Williams May 1995 A
5467288 Fasciano et al. Nov 1995 A
5973723 DeLuca Oct 1999 A
6119154 Weaver et al. Sep 2000 A
6125229 Dimitrova et al. Sep 2000 A
6137544 Dimitrova et al. Oct 2000 A
6272566 Craft Aug 2001 B1
6285361 Brewer et al. Sep 2001 B1
6289346 Milewski et al. Sep 2001 B1
6321024 Fujita et al. Nov 2001 B1
6332144 deVries et al. Dec 2001 B1
6332147 Moran et al. Dec 2001 B1
6360234 Jain et al. Mar 2002 B2
6404978 Abe Jun 2002 B1
6519603 Bays et al. Feb 2003 B1
6549922 Srivastava et al. Apr 2003 B1
6551357 Madduri Apr 2003 B1
6681398 Verna Jan 2004 B1
6754389 Dimitrova et al. Jun 2004 B1
6799298 deVries et al. Sep 2004 B2
6842190 Lord et al. Jan 2005 B1
6917965 Gupta et al. Jul 2005 B2
6956593 Gupta et al. Oct 2005 B1
7051275 Gupta et al. May 2006 B2
7111009 Gupta et al. Sep 2006 B1
7131059 Obrador Oct 2006 B2
7143353 McGee et al. Nov 2006 B2
7149755 Obrador Dec 2006 B2
7162690 Gupta et al. Jan 2007 B2
7274864 Hsiao et al. Sep 2007 B2
7280738 Kauffman et al. Oct 2007 B2
7320134 Tomsen et al. Jan 2008 B1
7320137 Novak et al. Jan 2008 B1
7506262 Gupta et al. Mar 2009 B2
7540011 Wixson et al. May 2009 B2
7616946 Park et al. Nov 2009 B2
7735104 Dow et al. Jun 2010 B2
7777121 Asano Aug 2010 B2
7793212 Adams et al. Sep 2010 B2
7836473 Tecot et al. Nov 2010 B2
7844820 Martinez Nov 2010 B2
7870475 Schachter Jan 2011 B2
8005841 Walsh et al. Aug 2011 B1
8082504 Tischer Dec 2011 B1
8103646 Brown Jan 2012 B2
8122474 Tecot et al. Feb 2012 B2
8161387 Tischer Apr 2012 B1
8166305 Martinez Apr 2012 B2
8191103 Hofrichter et al. May 2012 B2
8209397 Ahn et al. Jun 2012 B2
8214463 Ahn et al. Jul 2012 B2
8214519 Ahn et al. Jul 2012 B2
8224925 Ahn et al. Jul 2012 B2
8413182 Bill Apr 2013 B2
8451380 Zalewski May 2013 B2
8495675 Philpott et al. Jul 2013 B1
8566865 Zalewski et al. Oct 2013 B2
8646002 Lee Feb 2014 B2
8676900 Yruski Mar 2014 B2
8683538 Tucker Mar 2014 B2
8745657 Chalozin et al. Jun 2014 B2
8745660 Bill Jun 2014 B2
8751607 Jenkins Jun 2014 B2
8843957 Lemire et al. Sep 2014 B2
20010044808 Milewski et al. Nov 2001 A1
20020069218 Sull et al. Jun 2002 A1
20020100041 Rosenberg et al. Jul 2002 A1
20020112249 Hendricks et al. Aug 2002 A1
20020184195 Qian Dec 2002 A1
20030028873 Lemmons Feb 2003 A1
20030061369 Aksu Mar 2003 A1
20030177503 Sull et al. Sep 2003 A1
20040194123 Fredlund et al. Sep 2004 A1
20040194127 Patton et al. Sep 2004 A1
20040194128 McIntyre Sep 2004 A1
20050005289 Adolph Jan 2005 A1
20050149557 Moriya et al. Jul 2005 A1
20050251832 Chiueh Nov 2005 A1
20060026628 Wan et al. Feb 2006 A1
20060085826 Funk Apr 2006 A1
20060294538 Li et al. Dec 2006 A1
20070015457 Krampf Jan 2007 A1
20070055985 Schiller et al. Mar 2007 A1
20070078904 Yoon Apr 2007 A1
20070083762 Martinez Apr 2007 A1
20070094082 Yruski Apr 2007 A1
20070226761 Zalewski et al. Sep 2007 A1
20080036917 Pascarella et al. Feb 2008 A1
20080040741 Matsumoto Feb 2008 A1
20080046920 Bill Feb 2008 A1
20080065691 Suitts et al. Mar 2008 A1
20080071837 Moriya et al. Mar 2008 A1
20080134277 Tucker Jun 2008 A1
20080140853 Harrison Jun 2008 A1
20080141134 Miyazaki et al. Jun 2008 A1
20080155627 O'Connor et al. Jun 2008 A1
20080269931 Martinez Oct 2008 A1
20090049115 Jenkins Feb 2009 A1
20090055006 Asano Feb 2009 A1
20090083788 Russell Mar 2009 A1
20090094637 Lemmons Apr 2009 A1
20090157753 Lee Jun 2009 A1
20090172724 Ergen et al. Jul 2009 A1
20090199230 Kumar et al. Aug 2009 A1
20090288112 Kandekar et al. Nov 2009 A1
20090300508 Krampf Dec 2009 A1
20090313654 Paila et al. Dec 2009 A1
20090327346 Teinila et al. Dec 2009 A1
20100169786 O'Brien Jul 2010 A1
20110001758 Chalozin et al. Jan 2011 A1
20110016487 Chalozin et al. Jan 2011 A1
20110184964 Li Jul 2011 A1
20110271116 Martinez Nov 2011 A1
20110314493 Lemire et al. Dec 2011 A1
20120272262 Alexander et al. Oct 2012 A1
20120317302 Silvestri et al. Dec 2012 A1
20130185749 Bill Jul 2013 A1
20140040944 Zalewski et al. Feb 2014 A1
20140130084 Zalewski May 2014 A1
20140196085 Dunker et al. Jul 2014 A1
20140380355 Hellier et al. Dec 2014 A1
20150007218 Neumann et al. Jan 2015 A1
20150128171 Zalewski May 2015 A1
20150304710 Zalewski Oct 2015 A1
Foreign Referenced Citations (11)
Number Date Country
1647528 Jul 2005 CN
101373476 Feb 2009 CN
1 496 701 Jan 2005 EP
1020010073436 Aug 2001 KR
1020040108726 Dec 2004 KR
1020050114169 Dec 2005 KR
1020070061145 Jun 2007 KR
100838524 Jun 2008 KR
100872708 Dec 2008 KR
WO 2008060655 May 2008 WO
WO 2010019143 Feb 2010 WO
Non-Patent Literature Citations (3)
Entry
David Bargeron et al., “Annotations for Streaming Video on the Web: System Design and Usage Studies”, Computer Networks, vol. 31, May 17, 1999.
Herng-Yow Chen et al., “A Synchronized and Retrievable Video/HTML Lecture System for Industry Employee Training”, Nov. 29, 1999.
European Search Report dated Oct. 20, 2014 issued in counterpart application No. 10827136.
Related Publications (1)
Number Date Country
20110106879 A1 May 2011 US