Method, apparatus and system for facilitating navigation in an extended scene

Information

  • Patent Grant
  • Patent Number
    11,699,266
  • Date Filed
    Friday, August 26, 2016
  • Date Issued
    Tuesday, July 11, 2023
Abstract
A method, apparatus and system for facilitating navigation toward a region of interest in an extended scene of video content include determining a timeline including information regarding at least one region of interest in the video content and displaying, in a portion of the video content currently being displayed, a visual indicator indicating a direction in which to move in the video content to cause the display of the at least one region of interest. In one embodiment of the present principles a timeline is attached to the content and carries information evolving over time about the region(s) of interest. A renderer processes the timeline and provides navigation information to a user using available means such as a graphical representation or haptic information, or a combination of several means.
Description

This application is a national stage application under 35 U.S.C. § 371 of International Application PCT/EP2016/070181, filed Aug. 26, 2016, which was published in accordance with PCT Article 21(2) on Mar. 9, 2017, in English, and which claims the benefit of European patent application No. 15306349.0 filed Sep. 2, 2015.


TECHNICAL FIELD

The present principles relate generally to navigating through video content and, more particularly, to facilitating navigation in an extended scene in video content.


BACKGROUND

Recently there has been a growth of available large field-of-view content (up to 360°). Such content is potentially not fully visible by a user watching the content on common devices such as Head Mounted Displays, Oculus Rift, smart glasses, PC screens, tablets, smartphones and the like. That means that at a given moment, a user may only be viewing a part of the content, and oftentimes a part of the content not important to the storyline. Although a user can navigate within the content by various means such as head movement, mouse movement, touch screen, voice and the like, if the content represents a dynamic scene (e.g., a movie) with events happening at different moments and at different locations in the content, the user cannot be sure of looking at a relevant part of the scene and may miss important events/interesting sequences if they occur outside of his/her current field of view.


SUMMARY OF THE INVENTION

These and other drawbacks and disadvantages of the prior art are addressed by the present principles, which are directed to a method, apparatus and system for facilitating navigation in a wide scene and directing a user's attention to a region of interest.


In one embodiment of the present principles, a timeline is attached to the content and carries information evolving over time about the region(s) of interest and, more particularly, about a location or object ID, the associated optimal viewpoint(s) and level(s) of interest. On the device, a renderer (e.g., a 3D engine or video player) processes the timeline and provides navigation information to a user using available means (e.g., a graphical representation, haptic information, or a combination of several means).





BRIEF DESCRIPTION OF THE DRAWINGS

The teachings of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:



FIG. 1 depicts a pictorial representation of a user's view of a portion of a total available content;



FIG. 2 depicts a timing diagram/timeline of two ROIs identified by an object ID in accordance with an embodiment of the present principles;



FIG. 3 depicts a representative syntax for providing the information in the timing diagram of FIG. 2 in accordance with an embodiment of the present principles;



FIG. 4 depicts a version of the syntax of FIG. 3 reduced in accordance with an embodiment of the present principles;



FIG. 5 depicts a timing diagram/timeline of two ROIs identified by an object shape in accordance with an embodiment of the present principles;



FIG. 6 depicts a representative syntax for providing the information in the timing diagram of FIG. 5 in accordance with an embodiment of the present principles;



FIG. 7 depicts a portion of a scene of content including a bar at the edge of a screen to indicate to a user in which direction the user should look/navigate the scene;



FIG. 8 depicts a high level block diagram of a renderer in accordance with an embodiment of the present principles; and



FIG. 9 depicts a flow diagram of a method for facilitating navigation toward a region of interest in an extended scene of video content in accordance with an embodiment of the present principles.





To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. The drawings are not to scale, and one or more features may be expanded or reduced for clarity.


DETAILED DESCRIPTION

Embodiments of the present principles advantageously provide a method, an apparatus and a system facilitating navigation in a wide scene and directing a user's attention to a region of interest. Although the present principles will be described primarily within the context of specific visual indicators and directing a user's view in a horizontal direction, the specific embodiments of the present principles should not be treated as limiting the scope of the invention. It will be appreciated by those skilled in the art and informed by the teachings of the present principles that the concepts of the present principles can be advantageously applied to any visual indicators that can be used to direct a user's attention to any portion of the video content whether it be in the horizontal, vertical and/or diagonal direction.


Embodiments of the present principles are directed to facilitating a user's navigation in a wide rendered scene towards a location, or optimal viewpoint (OV), from which the user will be able to watch region(s) of interest (ROI) of a scene. That is, at a given moment, several parts of a scene could be of interest to a user. As such, in accordance with embodiments of the present principles it is important to inform a user that several ROIs are present at the same time. Such ROIs can be of various degrees of interest and, as such, embodiments of the present principles include associating a rank to each ROI indicating its level of interest (LOI). The LOI of a ROI can also evolve over time. The various LOI values can be the same for all users or be personalized with respect to the type of ROIs for which the user has previously indicated interest. In various embodiments of the present principles, using the LOI, a user can decide to navigate towards the ROI or, on the contrary, can estimate that it is of no interest at the moment.



FIG. 1 depicts a pictorial representation of a user's view of a portion of a total available content. That is, in FIG. 1, a black rectangular outlined box represents a portion of a total content within a user's field of view. Embodiments of the present principles combine both the notion of ROI and OV in a virtual scene by, for example, having a timeline indicating at each moment what the ROI is (e.g., the virtual object identifier or shape coordinates) as well as the associated OV(s). That is, in accordance with embodiments of the present principles, the notion of optimal viewpoint (OV) comprises a location and direction (orientation) in which to direct a user's attention. In various embodiments the OV can coincide with the ROI. In alternate embodiments, the OV can include a trade-off direction allowing a user to watch two different ROIs simultaneously. In addition, in various embodiments of the present principles, an OV can evolve over time and be associated with changes related to ROI(s). In such embodiments, it is conceivable to provide not all the coordinates but only a subset of coordinates along with a means to move from one coordinate to the other (i.e., the trajectory to follow). For example, a first position, a last position and intermediate position(s) are provided as well as a trajectory function to apply. In such embodiments, the trajectory can include a straight line between two points, a Bezier curve, and the like. A renderer would then interpolate all the intermediate positions. Such a solution in accordance with the present principles significantly reduces the amount of data to be provided. Such a solution can also be applied to the direction.
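
By way of illustration only (this sketch is not reproduced from the patent, and all names and signatures are assumptions), a renderer receiving a first, intermediate and last OV position plus a trajectory function could reconstruct the in-between positions as follows:

```python
# Illustrative sketch: interpolating intermediate OV positions from a
# sparse set of control points and a trajectory function. Names and
# signatures are assumptions, not the patent's implementation.
from typing import List, Tuple

Point = Tuple[float, float, float]

def lerp(a: Point, b: Point, t: float) -> Point:
    """Linearly interpolate between two 3D points, t in [0, 1]."""
    return tuple(pa + (pb - pa) * t for pa, pb in zip(a, b))

def quadratic_bezier(p0: Point, p1: Point, p2: Point, t: float) -> Point:
    """Quadratic Bezier curve; p1 acts as the intermediate control point."""
    u = 1.0 - t
    return tuple(u * u * a + 2.0 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

def sample_trajectory(first: Point, mid: Point, last: Point,
                      steps: int, kind: str = "bezier") -> List[Point]:
    """Reconstruct the positions between the few provided ones (steps >= 2)."""
    ts = [i / (steps - 1) for i in range(steps)]
    if kind == "line":
        return [lerp(first, last, t) for t in ts]
    return [quadratic_bezier(first, mid, last, t) for t in ts]
```

The same scheme can be applied to the viewing direction, interpolating orientations instead of positions.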


In accordance with various embodiments of the present principles, for video that is 2D content, the ROI can be, for example, a rectangle whose coordinates can include the upper left and lower right corners of the rectangle. For a 3D scene, the ROI can be represented by a bounding box or a more complex shape. In such embodiments, information provided to a renderer can include the coordinates of the shape or, alternatively, an identifier of an object or group of objects/shapes.


In embodiments of the present principles, a main difference between providing the ID of an object/group of objects and the coordinates of a 2D or 3D shape is that in the first case the ID and timing information (start time and duration) indicating when and for how long a ROI is active have to be provided only once, whereas in the second case the coordinates and timing information have to be provided to a renderer each time the ROI changes (potentially at each frame). In the first case, the renderer knows at every moment the location of the object with respect to the user's view and/or virtual location in the related scene. In the second case, the solution proposed above for reducing the amount of data can also be applied to model the ROI trajectory.
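
The two representations can be contrasted with a short, hedged sketch (the data layout is assumed for illustration; the actual syntax is the one of FIGS. 3-6):

```python
# Illustrative sketch of the two ROI descriptions discussed above.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RoiById:
    """ID-based ROI: provided once; the renderer tracks the object itself."""
    object_id: str
    start_time: float   # when the ROI becomes active
    duration: float     # for how long it stays active

@dataclass
class RoiByShape:
    """Shape-based ROI: coordinates restated each time the ROI changes."""
    # 2D case: upper-left and lower-right rectangle corners per timestamp;
    # a 3D scene would carry a bounding box or a more complex shape.
    timed_coords: List[Tuple[float, Tuple[float, float], Tuple[float, float]]]
```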


In various embodiments of the present principles, the content may have some periods without any interesting events, and in such cases there is no information about a ROI. In alternate embodiments, several ROIs could be present simultaneously. In such embodiments the ROIs could have different levels of interest (LOI). In accordance with embodiments of the present principles, signaling such simultaneous ROIs with an associated LOI can be accomplished using visual messages, haptic messages or a combination of messages.


That is, in various embodiments of the present principles, a LOI includes data related to a ROI and typically indicates information about a level of interest associated with the ROI. The LOI can include discrete information, for example, in one embodiment, a value in a range of 1 to 5, where 1 indicates a low level of interest in a ROI and 5 indicates a high level of interest, or vice versa. It should be noted that in accordance with various embodiments of the present principles, a LOI can evolve over time.


In accordance with the present principles, information regarding the OV, ROI and LOI is predetermined and available to a renderer before a ROI becomes active in content. That is, in accordance with the present principles, a renderer is able to begin to signal a user about a ROI to be presented. A goal is to enable the user to anticipate the movements required to bring a ROI into the user's field of view with enough time so as not to miss the beginning of a sequence or object of interest in the ROI. In one embodiment of the present principles, during a preparation step, the renderer can use the LOI associated with the start time of the sequence. In alternate embodiments, the renderer can use a global LOI. A global LOI value can be the mean of the values the LOI takes over time or a value set by a content creator. The global LOI provides an overview of the global level of interest of a sequence, which can differ from the first LOI value of the sequence, a value that is not necessarily representative of the whole sequence.
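
As a hedged illustration of one reading of the global LOI ("the mean of the values the LOI takes over time"), a time-weighted mean could be computed as follows; the function and field names are assumptions:

```python
# Illustrative sketch: one reading of the global LOI as a time-weighted
# mean of the LOI values over the ROI's active interval. A content
# creator may instead set the global LOI explicitly.
def global_loi(timed_lois, stop_time):
    """timed_lois: list of (start_time, loi_value), sorted by start time."""
    total, weighted = 0.0, 0.0
    for (t0, loi), nxt in zip(timed_lois, timed_lois[1:] + [(stop_time, None)]):
        dt = nxt[0] - t0
        total += dt
        weighted += loi * dt
    return weighted / total if total else 0.0

# e.g. LOI = 2 from t=0 to t=10, then LOI = 4 until the ROI stops at t=20
print(global_loi([(0.0, 2), (10.0, 4)], stop_time=20.0))  # -> 3.0
```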



FIG. 2 depicts a timing diagram/timeline of two ROIs identified by an object ID in accordance with an embodiment of the present principles. In the example of FIG. 2, the first ROI has a global LOI of 3 associated with it. The first ROI also has an associated OV that is the same at all times and is equal to OV11. In the example of FIG. 2, the first ROI has an associated LOI that evolves over time and takes the values LOI11 and LOI12 at timestamps t11 and t12.


Further, in the embodiment of FIG. 2, the second ROI has a global LOI of 4. The OV of the second ROI of FIG. 2 evolves over time and takes the values OV21 and OV22, and the LOI evolves over time and takes the values LOI21 and LOI22 at timestamps t21 and t22. As recited above, the ROI is an object identified by its ID.
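
Since FIGS. 2 and 3 are not reproduced here, the sketch below encodes the timeline just described in an assumed JSON-like structure; the actual syntax is the one depicted in FIG. 3:

```python
# Illustrative sketch: the two-ROI timeline of FIG. 2 in an assumed
# JSON-like encoding (the actual syntax is depicted in FIG. 3).
timeline = [
    {
        "roi": {"objectId": "object1"},
        "globalLOI": 3,
        "ov":  [{"startTime": "t11", "value": "OV11"}],   # constant OV
        "loi": [{"startTime": "t11", "value": "LOI11"},
                {"startTime": "t12", "value": "LOI12"}],  # LOI evolves
    },
    {
        "roi": {"objectId": "object2"},
        "globalLOI": 4,
        "ov":  [{"startTime": "t21", "value": "OV21"},
                {"startTime": "t22", "value": "OV22"}],   # OV evolves
        "loi": [{"startTime": "t21", "value": "LOI21"},
                {"startTime": "t22", "value": "LOI22"}],  # LOI evolves
    },
]
```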



FIG. 3 depicts a representative syntax for providing the information in the timing diagram of FIG. 2 in accordance with an embodiment of the present principles. That is, the syntax of FIG. 3 can be used to provide the information of the example of FIG. 2 to a renderer for use as will be described below.


In accordance with various embodiments of the present principles, the syntax of FIG. 3 can be reduced for efficiency. More specifically, FIG. 4 depicts a version of the syntax of FIG. 3 reduced in accordance with an embodiment of the present principles. That is, in the reduced syntax of FIG. 4, redundant information has been removed. For example, in the reduced syntax of FIG. 4, the stopTime is not set for values (e.g., LOI11 and LOI12) of a same field (e.g., LOI) that come one after another. In the reduced syntax of FIG. 4, the stopTime is equal to the following value's startTime (LOI11's stopTime=LOI12's startTime) or to the parent element's stopTime (LOI12's stopTime=ROI's stopTime).
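
A minimal sketch of how a renderer might restore the omitted stopTime values when parsing the reduced syntax (the field names follow the description above; the parsing context is assumed):

```python
# Illustrative sketch: restoring stopTime values omitted by the reduced
# syntax. A missing stopTime defaults to the next entry's startTime; the
# last entry inherits the parent element's stopTime.
def resolve_stop_times(entries, parent_stop_time):
    """entries: dicts with 'startTime' and an optional 'stopTime'."""
    resolved = []
    for i, entry in enumerate(entries):
        stop = entry.get("stopTime")
        if stop is None:
            is_last = i == len(entries) - 1
            stop = parent_stop_time if is_last else entries[i + 1]["startTime"]
        resolved.append({**entry, "stopTime": stop})
    return resolved

lois = [{"startTime": 0.0}, {"startTime": 10.0}]          # LOI11, LOI12
print(resolve_stop_times(lois, parent_stop_time=20.0))
# LOI11 stops at LOI12's startTime; LOI12 at the ROI's stopTime
```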



FIG. 5 depicts a timing diagram/timeline of two ROIs identified by an object shape in accordance with an embodiment of the present principles. In the example of FIG. 5, the first ROI has a global LOI of 3 associated with it. The first ROI also has an associated OV that is the same at all times and is equal to OV11. In the example of FIG. 5, the first ROI has an associated LOI that evolves over time and takes the values LOI11 and LOI12 at timestamps t11 and t12.


Further, in the embodiment of FIG. 5, the second ROI has a global LOI of 4. The OV of the second ROI of FIG. 5 evolves over time and takes the values OV21 and OV22, and the LOI evolves over time and takes the values LOI21 and LOI22 at timestamps t21 and t22. As recited above, in both cases the ROI is identified by its shape, for which the location is provided (rather than by an object ID).


A difference between the examples of FIG. 2 and FIG. 5 is that in FIG. 5, in which the ROIs are identified by a shape, a field indicating the coordinates of the ROI is present; such a field was not needed in the example of FIG. 2. In the example of FIG. 5, the first ROI (1) takes 3 different positions (coords11, coords12 and coords13) and the second ROI (2) takes 2 different positions (coords21 and coords22).



FIG. 6 depicts a representative syntax for providing the information in the timing diagram of FIG. 5 in accordance with an embodiment of the present principles. That is, the syntax of FIG. 6 can be used to provide the information of the example of FIG. 5 to a renderer for use as will be described below.


In accordance with various embodiments of the present principles, the syntax of FIG. 6 can be reduced for efficiency as described above with respect to FIG. 4.


In various embodiments of the present principles, the information of the timing diagram (timeline) is provided to a rendering device such as a video player, 3D engine, processing engine and the like. The renderer analyzes the information in the timing diagram and determines (see the sketch after this list):

    • when a ROI will become active and for how long
    • what is its position
    • what is its OV and how it evolves over time
    • what is its global LOI and how LOI evolves over time
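
A hedged sketch of this analysis step (the timeline entry layout is assumed, matching the earlier sketches):

```python
# Illustrative sketch: what a renderer might extract from one timeline
# entry before deciding how to guide the user (entry layout assumed).
def analyze_roi(roi_entry, now):
    start, stop = roi_entry["startTime"], roi_entry["stopTime"]
    return {
        "becomes_active_in": max(0.0, start - now),  # when it becomes active
        "active_for": stop - start,                  # and for how long
        "position": roi_entry.get("coords") or roi_entry.get("objectId"),
        "ov_over_time": roi_entry["ov"],             # OV and its evolution
        "global_loi": roi_entry["globalLOI"],
        "loi_over_time": roi_entry["loi"],           # LOI and its evolution
    }
```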


The renderer knows the current pose and orientation of a user in the rendered scene using techniques known in the art. Such techniques will not be described herein. Such information enables a determination of a path a user should follow to reach the OV and a direction in which a user should look to view the ROI.
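
For illustration only (a real renderer would rely on its own scene-graph and math utilities), the path and viewing direction could be derived as follows:

```python
# Illustrative sketch: from the user's current pose, derive the path to
# the OV location and the direction in which to look to view the ROI.
# All coordinates are assumed 3D tuples; names are not from the patent.
import math

def navigation_hint(user_pos, ov_pos, roi_pos):
    path = tuple(o - u for u, o in zip(user_pos, ov_pos))   # user -> OV
    look = tuple(r - o for o, r in zip(ov_pos, roi_pos))    # OV -> ROI
    distance = math.sqrt(sum(c * c for c in path))
    return {"move_along": path, "look_toward": look, "distance": distance}
```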


In various embodiments of the present principles, a user can be alerted to look in a particular direction or, more particularly, navigation information can be provided to a user using a visual indicator such as at least one or a combination of the following:

    • a compass;
    • a bar located at the edge of the screen which moves towards the direction to follow;
    • in the case of a scene in which the ROIs are identified by object IDs, a miniature of the asset or group of assets representing the ROI;
    • footprint symbols showing one or more path(s) to follow (to reach the optimal viewpoint location for the ROI(s)), in which a color pattern is linked to the type(s) of objects of interest to which the OV is related.


      For example, FIG. 7 depicts a portion of a scene of content including a bar at the edge of a screen to indicate to a user in which direction the user should look/navigate the scene in accordance with an embodiment of the present principles. More specifically, in FIG. 7, the bar at the bottom left edge of the screen indicates to the user to follow the bottom left direction. Although in the embodiment of FIG. 7 the bar is depicted as being positioned in the bottom left of the content, directing the user to look in the left direction, in alternate embodiments of the present principles a user's attention can be directed toward any portion of the video content and in any direction of the video content using a visual indicator of the present principles.


In various embodiments of the present principles, a user can have the option to select the desired type of navigation indicator. For example, in one embodiment of the present principles, a drop-down menu can be populated with several visual indicator options and a user can select a visual indicator to use.


In addition, in accordance with various embodiments of the present principles, a notion of distance can be associated with a visual indicator of the present principles. More specifically, in one embodiment the distance to a desired ROI can be expressed by the size of the visual indicator or, alternatively, can be expressed using a color of the visual indicator (e.g., red when far from the ROI and green when near, or vice versa).
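
A minimal sketch of such a distance mapping, with assumed thresholds:

```python
# Illustrative sketch: expressing the distance to a ROI through the
# indicator's size or a red-to-green color ramp. The 50-unit range and
# pixel sizes are assumed values, not taken from the patent.
def indicator_style(distance, max_distance=50.0):
    closeness = max(0.0, min(1.0, 1.0 - distance / max_distance))
    size_px = 16 + 32 * closeness                    # larger when nearer
    red, green = int(255 * (1 - closeness)), int(255 * closeness)
    return {"size_px": size_px, "color_rgb": (red, green, 0)}
```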


In one embodiment of the present principles, a renderer, in a preparation step, displays navigation information a couple of seconds before a ROI begins to be active. As previously stated, because of the preparation step, a user can anticipate his/her movement towards the ROI/OV such that the user's viewpoint includes the ROI before a sequence of interest is displayed/processed. In such embodiments, the preparation step can further include a color code, a specific symbol, a countdown or another kind of indicator alerting a user that a preparation step is in progress. The parameters of the preparation step, such as its duration, can either be hard-coded or set by a user.
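
A hedged sketch of the timing of the preparation step (the five-second default is an assumption standing in for a hard-coded or user-set parameter):

```python
# Illustrative sketch: entering the preparation step a configurable number
# of seconds before the ROI becomes active (5.0 is an assumed default),
# optionally exposing a countdown to display to the user.
def preparation_state(now, roi_start, prep_duration=5.0):
    if roi_start - prep_duration <= now < roi_start:
        return {"preparing": True, "countdown": roi_start - now}
    return {"preparing": False, "countdown": None}
```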


In alternate embodiments of the present principles a renderer can display to a user information regarding a global LOI during the preparation step. Information regarding a specific LOI for a portion of the content can be presented to a user using a specific symbol or color code related to the value of the LOI. Such convention can be hard-coded or can be a parameter selectable by a user.


In an embodiment in which several simultaneous OVs exist, an indication for each of the OVs can be presented. Alternatively, an indication of only the most interesting one or ones can be displayed. In an embodiment in which an indication of more than one OV is presented, a user has the ability to decide which indicator to follow to view a desired ROI. The number of simultaneous ROIs can either be hard-coded or be a parameter the user can set.
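
A minimal sketch of limiting the display to the most interesting OVs, ranked by global LOI (the entry layout is assumed):

```python
# Illustrative sketch: when several OVs/ROIs are simultaneous, display
# only the N most interesting indicators, ranked by global LOI (N can be
# hard-coded or user-set; the entry layout is assumed).
def select_indicators(active_rois, max_indicators=3):
    ranked = sorted(active_rois, key=lambda r: r["globalLOI"], reverse=True)
    return ranked[:max_indicators]
```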


In various embodiments of the present principles, data associated with embodiments of the present principles can be stored in a metadata component, similar to subtitle components.



FIG. 8 depicts a high level block diagram of a renderer for implementing the features of the present principles in accordance with an embodiment of the present principles. The renderer of FIG. 8 comprises a processor 810 as well as a memory 820 for storing control programs, instructions, software, video content, data and the like. The processor 810 cooperates with conventional support circuitry 830 such as power supplies, clock circuits, cache memory and the like as well as circuits that assist in executing the software routines stored in the memory 820. As such, it is contemplated that some of the process steps discussed herein as software processes may be implemented within hardware, for example, as circuitry that cooperates with the processor 810 to perform various steps. The renderer of FIG. 8 also includes input-output circuitry 840 that forms an interface between the various respective functional elements communicating with the renderer.


Although the renderer of FIG. 8 is depicted as a general purpose computer that is programmed to perform various control functions in accordance with the present principles, the invention can be implemented in hardware, for example, as an application specific integrated circuit (ASIC). As such, the process steps described herein are intended to be broadly interpreted as being equivalently performed by software, hardware, or a combination thereof.



FIG. 9 depicts a flow diagram of a method for facilitating navigation toward a region of interest in an extended scene of video content in accordance with an embodiment of the present principles. The method 900 begins at step 902 during which a timeline including information regarding at least one region of interest in the video content is determined. The method 900 can then proceed to step 904.


At step 904, a visual indicator indicating a direction in which to move in the video content to cause the display of the region of interest is displayed in a portion of the video content currently being displayed. The method 900 can then optionally include any of the other features of the present principles described above. For example, the method 900 can further include the determination of an OV and LOI as described above.

Claims
  • 1. A method of rendering at least one indicator when rendering a portion of a video content, the method comprising:
    obtaining data representative of a timeline from a metadata component of the video content, wherein the timeline comprises information representative of a time and a location at which a sequence of interest appears within a virtual scene in the video content, wherein the data is obtained before the time at which the sequence of interest appears in the virtual scene, wherein a user navigates inside the virtual scene in the video content from a current location of a current viewpoint at a current time to a subsequent location of a subsequent viewpoint before the time at which the sequence of interest appears;
    processing the timeline to identify the sequence of interest within the video content before the sequence of interest appears;
    determining the at least one indicator to direct attention toward the subsequent viewpoint from which to view the sequence of interest within the virtual scene, the at least one indicator being determined according to the current viewpoint in the virtual scene and the location of the sequence of interest within the virtual scene, wherein the current viewpoint has the current location and a current viewing direction, and the subsequent viewpoint has the subsequent location and a subsequent viewing direction within the virtual scene and wherein the subsequent location is different from the current location of the current viewpoint; and
    rendering the at least one indicator within a current field of view inside the virtual scene while rendering the portion of the virtual scene, wherein the at least one indicator is rendered prior to the time at which the sequence of interest appears and in time for a user to move within the virtual scene following a trajectory from the current location of the current viewpoint to the subsequent location of the subsequent viewpoint before the time at which the sequence of interest appears in the virtual scene.
  • 2. The method of claim 1, wherein the sequence of interest within the video content is further associated, in the timeline, with at least one rank indicative of a level of interest among a plurality of different levels of interest and wherein the at least one indicator is further determined according to the at least one level of interest.
  • 3. The method of claim 2, wherein the at least one rank indicative of a level of interest is included in the metadata.
  • 4. The method of claim 1, wherein the location of the sequence of interest is determined according to a description of a shape of a two-dimension part of the video content.
  • 5. The method of claim 1, wherein the at least one indicator includes one or more visual objects to be overlaid on the rendered portion of the video content.
  • 6. The method of claim 1, wherein the at least one indicator comprises a haptic effect.
  • 7. The method of claim 1, wherein the video content is a projection of a dynamic three-dimension scene and wherein the location of the sequence of interest is determined according to an object of the three-dimension scene.
  • 8. The method of claim 1, wherein the user navigates continuously inside the virtual scene in the video content from the current location of the current viewpoint at the current time to the subsequent location of the subsequent viewpoint before the time at which the sequence of interest appears.
  • 9. The method of claim 1, wherein the subsequent location is identified by coordinates of an object or shape within the virtual scene in the video content.
  • 10. The method of claim 1, wherein the current location and the subsequent location are identified by coordinates within the virtual scene, and a trajectory function from coordinates of the current location to coordinates of the subsequent location are provided.
  • 11. An apparatus comprising at least one processor and at least one memory having stored instructions operative, when executed by the at least one processor to cause the apparatus to:
    obtain data representative of a timeline from a metadata component of a video content, wherein the timeline comprises information representative of a time and a location at which a sequence of interest appears within a virtual scene in the video content, wherein the data is obtained before the time at which the sequence of interest appears in the virtual scene, wherein a user navigates inside the virtual scene in the video content from a current location of a current viewpoint at a current time to a subsequent location of a subsequent viewpoint before the time at which the sequence of interest appears;
    process the timeline to identify the sequence of interest within the video content before the sequence of interest appears;
    determine at least one indicator to direct attention toward the subsequent viewpoint from which to view the sequence of interest within the virtual scene, the at least one indicator being determined according to the current viewpoint in the virtual scene and the location of the sequence of interest within the virtual scene, wherein the current viewpoint has the current location and a current viewing direction, and the subsequent viewpoint has the subsequent location and a subsequent viewing direction within the virtual scene and wherein the subsequent location is different from the current location of the current viewpoint; and
    render the at least one indicator within a current field of view inside the virtual scene while rendering the portion of the virtual scene, wherein the at least one indicator is rendered prior to the time at which the sequence of interest appears and in time for a user to move within the virtual scene following a trajectory from the current location of the current viewpoint to the subsequent location of the subsequent viewpoint before the event time at which the sequence of interest appears in the virtual scene.
  • 12. The apparatus of claim 11, wherein the sequence of interest within the video content is further associated, in the timeline, with at least one rank indicative of a level of interest among a plurality of different levels of interest and wherein the instructions are further operative to determine the at least one indicator according to the at least one level of interest.
  • 13. The apparatus of claim 12, wherein the at least one rank indicative of a level of interest is included in the metadata.
  • 14. The apparatus of claim 12, wherein the at least one indicator includes at least one of a color code, a specific symbol, a countdown, and a haptic effect.
  • 15. The apparatus of claim 11, wherein the at least one indicator includes one or more visual objects to be overlaid on the rendered portion of the video content.
  • 16. The apparatus of claim 11, wherein the at least one indicator comprises a haptic effect.
  • 17. The apparatus of claim 16, further comprising haptic effectors, wherein the instructions are further operative to render the haptic effects of the indicators on the haptic effectors.
  • 18. The apparatus of claim 11, wherein the video content is a projection of a dynamic three-dimension scene and wherein the location of the sequence of interest is determined according to an object of the three-dimension scene.
  • 19. The apparatus of claim 11, wherein the location of the sequence of interest is determined according to a description of a shape of a two-dimension part of the video content.
  • 20. The apparatus of claim 11, wherein the current location and the subsequent location are one of a user location and a camera location.
Priority Claims (1)
Number Date Country Kind
15306349 Sep 2015 EP regional
PCT Information
Filing Document Filing Date Country Kind
PCT/EP2016/070181 8/26/2016 WO
Publishing Document Publishing Date Country Kind
WO2017/036953 3/9/2017 WO A
US Referenced Citations (445)
Number Name Date Kind
5644694 Appleton Jul 1997 A
5714997 Anderson Feb 1998 A
5977978 Carey Nov 1999 A
6040841 Cohen Mar 2000 A
6324336 Kanda Nov 2001 B1
6366296 Boreczky Apr 2002 B1
6392651 Stradley May 2002 B1
6563529 Jongerius May 2003 B1
6585521 Obrador Jul 2003 B1
6636210 Cheng Oct 2003 B1
6636238 Amir Oct 2003 B1
6762757 Sander Jul 2004 B1
6768486 Szabo Jul 2004 B1
7411594 Endo Aug 2008 B2
7672864 Nair Mar 2010 B2
8224154 Shinkai Jul 2012 B2
8438595 Kannan May 2013 B1
8745499 Pendergast Jun 2014 B2
8769421 Meaney Jul 2014 B2
9082018 Laska Jul 2015 B1
9110988 Tan Aug 2015 B1
9269239 Jensen Feb 2016 B1
9305400 Kochi Apr 2016 B2
9313556 Borel Apr 2016 B1
9380274 McLean Jun 2016 B1
9392212 Ross Jul 2016 B1
9396401 Maruoka Jul 2016 B2
9563201 Tofte Feb 2017 B1
9672865 Fundament Jun 2017 B2
9699465 Ouedraogo Jul 2017 B2
9703369 Mullen Jul 2017 B1
9721611 Matias Aug 2017 B2
9767610 Takeuchi Sep 2017 B2
9781342 Turley Oct 2017 B1
9794541 Gay Oct 2017 B2
9836652 Lection Dec 2017 B2
9836885 Eraker Dec 2017 B1
9851936 Nakagawa Dec 2017 B2
9871994 Vaden Jan 2018 B1
9886794 van Os Feb 2018 B2
9898868 Aonuma Feb 2018 B2
9996149 Martin Jun 2018 B1
10009596 Jayaram Jun 2018 B2
10049493 Verizzo Aug 2018 B1
10055887 Gil Aug 2018 B1
10062414 Westphal Aug 2018 B1
10078644 Newman Sep 2018 B1
10078917 Gaeta Sep 2018 B1
10115149 Deem Oct 2018 B1
10139985 Mildrew Nov 2018 B2
10151599 Meador Dec 2018 B1
10152815 Tinsman Dec 2018 B2
10169678 Sachdeva Jan 2019 B1
10176640 Tierney Jan 2019 B2
10186298 Matias Jan 2019 B1
10228760 Ross Mar 2019 B1
10235788 Tinsman Mar 2019 B2
10261966 Liu Apr 2019 B2
10303421 Nakagawa May 2019 B2
10303435 Sendai May 2019 B2
10373342 Perez, III Aug 2019 B1
10388176 Wallace Aug 2019 B2
10397666 Thomas Aug 2019 B2
10419716 Tanumihardja Sep 2019 B1
10447394 Kasilya Sudarsan Oct 2019 B2
10491817 You Nov 2019 B2
10602300 Lyren Mar 2020 B1
10602302 Lyren Mar 2020 B1
10659679 Beeler May 2020 B1
10701450 Selvaraj Jun 2020 B2
10708631 Taibi Jul 2020 B2
10715843 Van Brandenburg Jul 2020 B2
10748577 Matias Aug 2020 B2
10911658 Kim Feb 2021 B2
10962780 Ambrus Mar 2021 B2
10974147 Sarria, Jr. Apr 2021 B1
11006089 Arai May 2021 B2
11079753 Roy Aug 2021 B1
11113885 Cordes Sep 2021 B1
11180159 Post Nov 2021 B1
11190753 Meier Nov 2021 B1
11205404 Kim Dec 2021 B2
11216857 Lokesh Jan 2022 B2
11303881 Zhou Apr 2022 B2
11496788 Takase Nov 2022 B2
11587292 Crocker Feb 2023 B2
20010034607 Perschbacher, III Oct 2001 A1
20020012526 Sai Jan 2002 A1
20020069218 Sull Jun 2002 A1
20030054882 Suzuki Mar 2003 A1
20030086686 Matsui May 2003 A1
20040090424 Hurley May 2004 A1
20040125133 Pea Jul 2004 A1
20040125148 Pea Jul 2004 A1
20040183826 Taylor Sep 2004 A1
20050002648 Hoshino Jan 2005 A1
20050010409 Hull Jan 2005 A1
20050116964 Kotake Jun 2005 A1
20060015925 Logan Jan 2006 A1
20060036513 Whatley Feb 2006 A1
20060106625 Brown May 2006 A1
20060120610 Kong Jun 2006 A1
20060132482 Oh Jun 2006 A1
20060178902 Vicars Aug 2006 A1
20070052724 Graham Mar 2007 A1
20070085141 Ando Apr 2007 A1
20070103461 Suzuno May 2007 A1
20070219645 Thomas Sep 2007 A1
20080088627 Shimizu Apr 2008 A1
20080126206 Jarrell May 2008 A1
20080168350 Jones Jul 2008 A1
20080252640 Williams Oct 2008 A1
20080291217 Vincent Nov 2008 A1
20080294663 Heinley Nov 2008 A1
20090009522 Endo Jan 2009 A1
20090024619 Dallmeier Jan 2009 A1
20090031246 Cowtan Jan 2009 A1
20090079731 Fitzmaurice Mar 2009 A1
20090100366 Fitzmaurice Apr 2009 A1
20090102711 Elwell, Jr. Apr 2009 A1
20090106671 Olson Apr 2009 A1
20090160856 Hoguet Jun 2009 A1
20090195652 Gal Aug 2009 A1
20090237411 Gossweiler, III Sep 2009 A1
20090256904 Krill Oct 2009 A1
20090309975 Gordon Dec 2009 A1
20090319178 Khosravy Dec 2009 A1
20100005424 Sundaresan Jan 2010 A1
20100045666 Kornmann Feb 2010 A1
20100064596 Bowsher Mar 2010 A1
20100092156 McCrossan Apr 2010 A1
20100115459 Kinnunen May 2010 A1
20100122208 Herr May 2010 A1
20100123737 Williamson May 2010 A1
20100134264 Nagamine Jun 2010 A1
20100158099 Kalva Jun 2010 A1
20100188503 Tsai Jul 2010 A1
20100223004 Kondo Sep 2010 A1
20100232504 Feng Sep 2010 A1
20100241525 Aguera y Arcas Sep 2010 A1
20100245584 Minasyan Sep 2010 A1
20110018895 Buzyn Jan 2011 A1
20110052144 Abbas Mar 2011 A1
20110141141 Kankainen Jun 2011 A1
20110210962 Horan Sep 2011 A1
20110299832 Butcher Dec 2011 A1
20110300522 Faubert Dec 2011 A1
20120016578 Coppens Jan 2012 A1
20120105474 Cudalbu May 2012 A1
20120127284 Bar-Zeev May 2012 A1
20120176525 Garin Jul 2012 A1
20120182377 Wang Jul 2012 A1
20120194547 Johnson Aug 2012 A1
20120233000 Fisher Sep 2012 A1
20120264510 Wigdor Oct 2012 A1
20120272173 Grossman Oct 2012 A1
20120281500 Hoekstra Nov 2012 A1
20120323891 Jacobson Dec 2012 A1
20130063558 Phipps Mar 2013 A1
20130086109 Huang Apr 2013 A1
20130086517 Van Lancker Apr 2013 A1
20130120387 Mueller May 2013 A1
20130124997 Speir May 2013 A1
20130129308 Karn May 2013 A1
20130142384 Ofek Jun 2013 A1
20130162632 Varga Jun 2013 A1
20130169685 Lynch Jul 2013 A1
20130170813 Woods Jul 2013 A1
20130179841 Mutton Jul 2013 A1
20130202265 Arrasvuori Aug 2013 A1
20130222364 Kraus Aug 2013 A1
20130235270 Sasaki Sep 2013 A1
20130290876 Anderson Oct 2013 A1
20130297706 Arme Nov 2013 A1
20130311575 Woods Nov 2013 A1
20130321400 van Os Dec 2013 A1
20130322844 Suzuki Dec 2013 A1
20130326407 van Os Dec 2013 A1
20130326425 Forstall Dec 2013 A1
20130330055 Zimmermann Dec 2013 A1
20140040833 McLean Feb 2014 A1
20140046550 Palmer Feb 2014 A1
20140047371 Palmer Feb 2014 A1
20140074855 Zhao Mar 2014 A1
20140079126 Ye Mar 2014 A1
20140089990 van Deventer Mar 2014 A1
20140095122 Appleman Apr 2014 A1
20140113660 Park Apr 2014 A1
20140192087 Frost Jul 2014 A1
20140199050 Khalsa et al. Jul 2014 A1
20140207307 Jonsson Jul 2014 A1
20140208239 Barker Jul 2014 A1
20140215317 Floyd Jul 2014 A1
20140215318 Floyd Jul 2014 A1
20140240313 Varga Aug 2014 A1
20140274387 Lewis Sep 2014 A1
20140300636 Miyazaya Oct 2014 A1
20140313203 Shugart Oct 2014 A1
20140327792 Mulloni Nov 2014 A1
20140341549 Hattori Nov 2014 A1
20140354683 Suzuki Dec 2014 A1
20140355823 Kwon Dec 2014 A1
20140359653 Thorpe Dec 2014 A1
20140375683 Salter Dec 2014 A1
20140378222 Balakrishnan Dec 2014 A1
20150046537 Rakib Feb 2015 A1
20150067556 Tibrewal Mar 2015 A1
20150070388 Sheaffer Mar 2015 A1
20150081706 Elmqvist Wulcan et al. Mar 2015 A1
20150093029 Tijssen Apr 2015 A1
20150105934 Palmer Apr 2015 A1
20150139608 Theobalt May 2015 A1
20150172627 Lee Jun 2015 A1
20150212719 Gottschlag Jul 2015 A1
20150235672 Cudak Aug 2015 A1
20150237166 Denoual Aug 2015 A1
20150244969 Fisher Aug 2015 A1
20150262616 Jaime Sep 2015 A1
20150264296 Devaux Sep 2015 A1
20150271570 Pomeroy Sep 2015 A1
20150271571 Laksono Sep 2015 A1
20150279120 Sakuragi Oct 2015 A1
20150286875 Land Oct 2015 A1
20150317832 Ebstyne Nov 2015 A1
20150324940 Samson Nov 2015 A1
20150325268 Berger Nov 2015 A1
20150346955 Fundament Dec 2015 A1
20150348326 Sanders Dec 2015 A1
20150350628 Sanders Dec 2015 A1
20150362520 Wells Dec 2015 A1
20150363966 Wells Dec 2015 A1
20150373281 White Dec 2015 A1
20160005229 Lee Jan 2016 A1
20160026874 Hodulik Jan 2016 A1
20160050349 Vance Feb 2016 A1
20160054889 Hadley Feb 2016 A1
20160063103 Bostick Mar 2016 A1
20160093105 Rimon Mar 2016 A1
20160148417 Kim May 2016 A1
20160155260 Jenkins Jun 2016 A1
20160163107 Chen Jun 2016 A1
20160165309 Van Brandenburg Jun 2016 A1
20160225179 Sheppard Aug 2016 A1
20160227262 Grant Aug 2016 A1
20160239252 Nakagawa Aug 2016 A1
20160259854 Liu Sep 2016 A1
20160267720 Mandella Sep 2016 A1
20160292511 Ayalasomayajula Oct 2016 A1
20160343107 Newman Nov 2016 A1
20160343351 Chen Nov 2016 A1
20160350969 Castillo Dec 2016 A1
20160350972 Kauffmann Dec 2016 A1
20160360266 Wilms Dec 2016 A1
20160371882 Ege Dec 2016 A1
20160373828 Seol Dec 2016 A1
20160379682 Williams Dec 2016 A1
20160381290 Prayle Dec 2016 A1
20160381306 Yang Dec 2016 A1
20170025152 Jaime Jan 2017 A1
20170026577 You Jan 2017 A1
20170053545 Yang Feb 2017 A1
20170061038 Ruiz Mar 2017 A1
20170076408 D'Souza Mar 2017 A1
20170076571 Borel Mar 2017 A1
20170078767 Borel Mar 2017 A1
20170090196 Hendron Mar 2017 A1
20170103571 Beaurepaire Apr 2017 A1
20170109585 Matias Apr 2017 A1
20170110151 Matias Apr 2017 A1
20170118540 Thomas Apr 2017 A1
20170134714 Soni May 2017 A1
20170140796 Fontenot May 2017 A1
20170155912 Thomas Jun 2017 A1
20170169125 Greco Jun 2017 A1
20170180444 Denoual Jun 2017 A1
20170180780 Jeffries Jun 2017 A1
20170182406 Castiglia Jun 2017 A1
20170192637 Ren Jul 2017 A1
20170206798 Newman Jul 2017 A1
20170229147 McKaskle Aug 2017 A1
20170244959 Ranjeet Aug 2017 A1
20170249839 Becker Aug 2017 A1
20170255372 Hsu Sep 2017 A1
20170264864 Mcnelley Sep 2017 A1
20170280166 Walkingshaw Sep 2017 A1
20170285737 Khalid Oct 2017 A1
20170285738 Khalid Oct 2017 A1
20170287357 Weiss Oct 2017 A1
20170289219 Khalid Oct 2017 A1
20170311035 Lewis Oct 2017 A1
20170315697 Jacobson Nov 2017 A1
20170316806 Warren Nov 2017 A1
20170322622 Hong Nov 2017 A1
20170328733 Gotoh Nov 2017 A1
20170337776 Herring Nov 2017 A1
20170352196 Chen Dec 2017 A1
20170354883 Benedetto Dec 2017 A1
20170354884 Benedetto Dec 2017 A1
20170354888 Benedetto Dec 2017 A1
20170356753 Findley Dec 2017 A1
20180001200 Tokgoz Jan 2018 A1
20180005430 Griffith Jan 2018 A1
20180018510 Williams Jan 2018 A1
20180020162 Turley Jan 2018 A1
20180021684 Benedetto Jan 2018 A1
20180025500 Nielsen Jan 2018 A1
20180025542 Upendran Jan 2018 A1
20180025649 Contreras Jan 2018 A1
20180041750 Kim Feb 2018 A1
20180053130 Pettersson Feb 2018 A1
20180061116 Mitchell Mar 2018 A1
20180070113 Phillips Mar 2018 A1
20180070119 Phillips Mar 2018 A1
20180077440 Wadhera Mar 2018 A1
20180077451 Yip Mar 2018 A1
20180081520 Han Mar 2018 A1
20180089901 Rober Mar 2018 A1
20180095616 Valdivia Apr 2018 A1
20180095635 Valdivia Apr 2018 A1
20180095636 Valdivia Apr 2018 A1
20180096507 Valdivia Apr 2018 A1
20180098059 Valdivia Apr 2018 A1
20180101293 Fang Apr 2018 A1
20180107211 Schubert Apr 2018 A1
20180122422 Allison May 2018 A1
20180130255 Hazeghi May 2018 A1
20180143023 Bjorke May 2018 A1
20180143756 Mildrew May 2018 A1
20180144547 Shakib May 2018 A1
20180150204 Macgillivray May 2018 A1
20180160123 Van Der Auwera Jun 2018 A1
20180164588 Leppanen Jun 2018 A1
20180176661 Varndell Jun 2018 A1
20180182146 Laaksonen Jun 2018 A1
20180189958 Budagavi Jul 2018 A1
20180190323 de Jong Jul 2018 A1
20180199080 Jackson, Jr. Jul 2018 A1
20180204362 Tinsman Jul 2018 A1
20180204385 Sarangdhar Jul 2018 A1
20180210627 Woo Jul 2018 A1
20180224945 Hardie-Bick Aug 2018 A1
20180225870 Upendran Aug 2018 A1
20180242028 Van Brandenburg Aug 2018 A1
20180255332 Heusser Sep 2018 A1
20180261255 Goshen Sep 2018 A1
20180271740 Lydecker Sep 2018 A1
20180275745 Crisler Sep 2018 A1
20180278993 Crisler Sep 2018 A1
20180284974 Meganathan Oct 2018 A1
20180295400 Thomas Oct 2018 A1
20180308187 Rotem Oct 2018 A1
20180308523 Silvestri Oct 2018 A1
20180310116 Arteaga Oct 2018 A1
20180314322 Tseng Nov 2018 A1
20180316853 Liang Nov 2018 A1
20180318716 Benedetto Nov 2018 A1
20180321798 Kawamura Nov 2018 A1
20180326286 Rathi Nov 2018 A1
20180338111 Mourkogiannis Nov 2018 A1
20180341811 Bendale Nov 2018 A1
20180343387 Bostick Nov 2018 A1
20180349703 Rathod Dec 2018 A1
20180350125 Duong Dec 2018 A1
20180350144 Rathod Dec 2018 A1
20180357820 Tytgat Dec 2018 A1
20180357825 Hofmann Dec 2018 A1
20180365496 Hovden Dec 2018 A1
20180365855 Laurent Dec 2018 A1
20180374276 Powers Dec 2018 A1
20190004618 Tadros Jan 2019 A1
20190005719 Fleischman Jan 2019 A1
20190026944 Laaksonen Jan 2019 A1
20190033989 Wang Jan 2019 A1
20190043259 Wang Feb 2019 A1
20190045268 Veeramani Feb 2019 A1
20190051051 Kaufman Feb 2019 A1
20190056848 DiVerdi Feb 2019 A1
20190072405 Luchner Mar 2019 A1
20190087067 Hovden Mar 2019 A1
20190104316 Da Silva Pratas Gabriel Apr 2019 A1
20190114803 Liu Apr 2019 A1
20190122699 Matias Apr 2019 A1
20190124316 Yoshimura Apr 2019 A1
20190128765 Hadj-Rabah May 2019 A1
20190129602 Siwak May 2019 A1
20190139305 Sakamoto May 2019 A1
20190156495 Altuev May 2019 A1
20190172265 Cossairt Jun 2019 A1
20190179145 Ibrahim Jun 2019 A1
20190200058 Hall Jun 2019 A1
20190213211 Zhao Jul 2019 A1
20190228230 Onuma Jul 2019 A1
20190259424 Lintz Aug 2019 A1
20190273864 Bostick Sep 2019 A1
20190273865 Turley Sep 2019 A1
20190289341 Vasco de Oliveira Redol Sep 2019 A1
20190304188 Bridgeman Oct 2019 A1
20190318540 Piemonte Oct 2019 A1
20190324718 Francisco Oct 2019 A1
20190356894 Oh Nov 2019 A1
20190373293 Bortman Dec 2019 A1
20190394500 Sugimoto Dec 2019 A1
20200035025 Crocker Jan 2020 A1
20200035026 Demirchian Jan 2020 A1
20200045359 Tokumo Feb 2020 A1
20200053336 Kawai Feb 2020 A1
20200107003 Phillips Apr 2020 A1
20200118342 Varshney Apr 2020 A1
20200189459 Bush Jun 2020 A1
20200198660 Bellet Jun 2020 A1
20200201512 Faulkner Jun 2020 A1
20200245093 Lyren Jul 2020 A1
20200252741 Lyren Aug 2020 A1
20200258278 Mirhosseini Aug 2020 A1
20200273235 Emami Aug 2020 A1
20200285784 Isbel Sep 2020 A1
20200286526 Osler Sep 2020 A1
20200289935 Azmandian Sep 2020 A1
20200342673 Lohr Oct 2020 A1
20200349751 Bentovim Nov 2020 A1
20200351525 Sugimoto Nov 2020 A1
20200363940 DiVerdi Nov 2020 A1
20200372935 Matias Nov 2020 A1
20200380733 Inatani Dec 2020 A1
20210055787 Chhabra Feb 2021 A1
20210058733 Lyren Feb 2021 A1
20210110610 Xu Apr 2021 A1
20210150223 Onuma May 2021 A1
20210183114 Corson Jun 2021 A1
20210192385 Farré Guiu Jun 2021 A1
20210204087 Lyren Jul 2021 A1
20210252398 Benedetto Aug 2021 A1
20210256597 Soppin Aug 2021 A1
20210283497 Gullicksen Sep 2021 A1
20210283514 Benedetto Sep 2021 A1
20210372810 Hato Dec 2021 A1
20210382305 Chang Dec 2021 A1
20220058825 Chen Feb 2022 A1
20220058844 Chen Feb 2022 A1
20220148254 Sorkine Hornung May 2022 A1
20220161140 Benedetto May 2022 A1
20220165306 Hamada May 2022 A1
20220305385 Konno Sep 2022 A1
20220343590 Jutan Oct 2022 A1
20230007231 Kadam Jan 2023 A1
Foreign Referenced Citations (19)
Number Date Country
102244807 Nov 2011 CN
102369497 Mar 2012 CN
102611872 Jul 2012 CN
104185078 Dec 2014 CN
104685893 Jun 2015 CN
1376587 Jan 2004 EP
2669866 Dec 2013 EP
2868103 Dec 2016 EP
2002514875 May 2002 JP
2013538377 Oct 2013 JP
2013250830 Dec 2013 JP
2014075743 Apr 2014 JP
2014215828 Nov 2014 JP
2014235469 Dec 2014 JP
2015057706 Mar 2015 JP
20090040462 Apr 2009 KR
9959026 Nov 1999 WO
2008033853 Mar 2008 WO
WO2014202486 Dec 2014 WO
Non-Patent Literature Citations (9)
Anonymous, “Digital Video Broadcasting (DVB); Specification for the use of Video and Audio Coding in Broadcasting Applications based on the MPEG-2 Transport Stream”, European Telecommunications Standards Institute, ETSI Technical Specification ETSI TS 101 154 v2.2.1, Jun. 2015, pp. 1-242.
Anonymous, “High Efficiency Video Coding”, ITU-T H.265, Telecommunication Standardization Sector of ITU, Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services—Coding of moving video, Apr. 2015, pp. 1-634.
Anonymous, “SubRip”, Wikipedia, https://en.wikipedia.org/w/index.php?title=SubRip&oldid=667432445, Jun. 18, 2015, pp. 1-6.
Armstrong et al., “TVX2014 Short Paper—Enhancing Subtitles”, BBC R&D, http://www.bbc.co.uk/rd/blog/2014/10/tvx2014-short-paper-enhancing-subtitles, Nov. 3, 2014, pp. 1-7.
Kim et al., “Scene Graph for Dynamic Virtual Environment: Spangraph”, International Journal of Virtual Reality (IJVR), vol. 4, No. 2, Dec. 2015, pp. 23-36.
CN102369497 A, Translated “Method and apparatus for creating a zone of interest in a video display” Mar. 7, 2012.
JP2013538377, Translated “Content mapping of mobile device-based for the augmented reality environment” Oct. 10, 2013.
JP2014215828, Translated “Image Data Reproduction Device, and Viewpoint Information Generation Device” Nov. 17, 2014.
JP2014075743, Translated “Video Viewing History Analysis Device, Video Viewing History Analysis Method and Video Viewing History Analysis Program” Apr. 24, 2014.
Related Publications (1)
Number Date Country
20180182168 A1 Jun 2018 US