The disclosure relates generally to a virtual reality system and the video associated with virtual reality.
Virtual reality brings new challenges to video compression because the content (the uncompressed video) is viewed within a head-mounted display, placing the video only a few inches away from the user's eye. Because the video is so close to the eye, and the eye is sensitive to the artifacts that occur during video compression, the user is more likely to see artifacts and impairments in the video that were unnoticeable when watching a TV set from a few feet away.
Extensive research has been conducted on the visual acuity of human beings. That research reveals that a human being has better visual acuity at the center of their vision than at the edges of their vision. More specifically, the visual acuity of the human eye is concentrated in the middle 10° of the horizontal view (±5°) due to the high density of cones on the retina (shown as the green line in the accompanying graph).
The technical problem with existing virtual reality systems and methods is that they use existing video compression techniques that provide acceptable quality when used to compress television signals. However, these techniques do not provide the level of video quality needed for virtual reality systems, for the reasons described above. Thus, it is desirable to improve the quality of the video used for virtual reality systems.
The disclosure is particularly applicable to a streaming virtual reality system that has a client/server type architecture and it is in this context that the disclosure will be described. It will be appreciated, however, that the system and method for improved video quality has greater utility since it may be used with other streaming virtual reality systems that may utilize a different architecture (peer to peer, single computer, mainframe computer, etc.) and also may be used with other systems in which it is desirable to be able to generate improved video quality to be displayed to a user.
In a streaming system 300, as shown in the accompanying figure, one or more virtual reality devices 302 may connect to and request virtual reality data from a backend 306.
Each virtual reality device 302 may be a device that is capable of receiving virtual reality streaming data, processing the virtual reality streaming data (including possibly decompressing that data) and displaying the virtual reality streaming data to a user using some type of virtual reality viewing device. Each virtual reality device may further directly deliver an immersive visual experience to the eyes of the user based on positional sensors of the virtual reality device that detect the position of the virtual reality device and affect the virtual reality data being displayed to the user. Each virtual reality device 302 may include at least a processor, memory, one or more sensors, such as an accelerometer, for detecting and generating data about a current position/orientation of the virtual reality device 302, and a display for displaying the virtual reality streaming data. For example, each virtual reality device 302 may be a virtual reality headset, a computer with an attached virtual reality headset, a mobile phone with a virtual reality viewing accessory or any other display device capable of displaying video or images. As a specific example, each virtual reality device 302 may be a computing device, such as a smartphone, personal computer, laptop computer, tablet computer, etc. that has an attached virtual reality headset 304A1, or may be a self-contained virtual reality headset 304AN.
The system 300 may further comprise the backend 306 that may be implemented using computing resources, such as a server computer, a computer system, a processor, memory, a blade server, a database server, an application server and/or various cloud computing resources. The backend 306 may be implemented using a plurality of lines of computer code/instructions that may be stored in a memory of the computing resource and executed by a processor of the computing resource so that the computer system with the processor and memory is configured to perform the functions and operations of the system as described below. The backend 306 may also be implemented as a piece of hardware that has processing capabilities that perform the backend virtual reality data functions and operations described below. Generally, the backend 306 may receive a request for streamed virtual reality data from a virtual reality device (a request that may contain data about the virtual reality device) and perform the technical task of virtual reality data preparation (using one or more rules or lines of instructions/computer code). The VR data preparation may include generating the stream of known in-view and out-of-view virtual reality data as well as the one or more pieces of optimized virtual reality data (collectively, the "optimized streamed virtual reality data," which includes improved content quality) based on each request for streamed virtual reality data from each virtual reality device 302. The backend 306 may then stream that optimized streamed virtual reality data to each virtual reality device 302 that requested the virtual reality data. The optimized streamed virtual reality data is used to solve the technical problem, described above, of poor and noticeable VR data quality in VR systems. A simplified sketch of this request handling follows.
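To make the request/response flow above concrete, here is a minimal Python sketch of how a backend might select an optimized stream from the device data in each request. Everything in it (the `StreamRequest` fields, `OptimizedStream`, the nearest-viewpoint selection rule) is a hypothetical illustration, not an implementation taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class StreamRequest:
    """Hypothetical request sent by a virtual reality device 302."""
    device_id: str
    yaw_deg: float          # horizontal direction the device is facing
    max_bitrate_kbps: int   # bandwidth available to this device

@dataclass
class OptimizedStream:
    """Full-quality in-view data plus reduced-quality out-of-view data."""
    in_view_url: str
    out_of_view_url: str

def angular_distance(a_deg: float, b_deg: float) -> float:
    """Shortest angular distance between two yaw angles, in degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

def handle_request(req: StreamRequest,
                   catalog: dict[float, OptimizedStream]) -> OptimizedStream:
    # Pick the pre-prepared viewpoint whose center is closest to where the
    # device is currently looking.  Nearest-viewpoint selection is an
    # assumption; the disclosure only says the backend uses the device data
    # in the request to prepare the optimized streamed virtual reality data.
    best_center = min(catalog, key=lambda c: angular_distance(c, req.yaw_deg))
    return catalog[best_center]
```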
The backend system 306 may further include a video quality improvement engine 501 and a video encoding engine 506 that implement one or more different methods/processes that improve the quality of the virtual reality data to provide a technical solution to the above described problem of conventional virtual reality systems. The video quality improvement engine 501 and the video encoding engine 506 each may be implemented using a plurality of lines of computer code/instructions as described above or using a hardware device as described above. The video quality improvement engine 501 may further include a quantization parameter engine 502 and a gradient scaling engine 504 that may be coupled to each other. Each of these modules/components/engines 502-506 may be implemented in hardware or software. If a module/component/engine is implemented in software, the module/component/engine comprises a plurality of lines of computer code/instructions that may be executed by a processor of the backend system to perform the function and operation of that module as described below. If a component/module/engine is implemented in hardware, the module may be a piece of hardware, such as a micro-controller, processor, programmed logic device, etc. that is programmed/manufactured so that the module performs the function and operation of that module as described below. The important aspect of these modules is the operations/processes they perform to improve the quality of the virtual reality data; the particular implementation of the modules using well-known computer elements, such as a processor or server computer, is not important to the disclosure and does not affect those operations.
The quantization parameter engine 502 may execute a process/method using an algorithm to adjust the quantization parameters of the virtual reality data sent to each device 302. An example of the processes performed by the quantization parameter engine 502 is described in more detail below.
As described above, the characteristics of a human eye (the rods and cones) are known and this method also leverages those known characteristics of the human eye. Thus, the method may then, based on the lens characteristics and the human eye characteristics, adjust the quantization parameters (604) in the spatial domain to improve the quality of the virtual reality data sent to each device 302. The method may then output the adjusted quantization parameters for each device 302 (606).
The adjusted quantization parameters take into account the impact of the lenses. Specifically, due to lens distortion, the quality as seen by the eye is optimal in the center, where the majority of users are looking, and then degrades the further away from the center the user looks. Thus, the adjusted quantization parameters take advantage of this characteristic and apply a QP (quantization parameter) map (a matrix that contains the adjusted quantization parameters for the different regions of the virtual reality data) that matches these lens characteristics.
The adjusted quantization parameters also take into account the impact of the rod and cone density of the human retina. The cones of the retina provide the eye with high visual acuity and a fast response (10 ms integration time). These cones are highly concentrated in the middle 10° of human vision. The rods, which have less acuity and a slower response time (100 ms integration time), peak at ±30° from center and fall off toward zero at ±140° from center. Thus, the method may provide a QP map that matches these human eye characteristics (a sketch of such a map appears below).
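As a rough illustration of the two preceding paragraphs, the following Python sketch builds a per-block QP map that is finest in the center and coarser toward the edges, assuming an H.264/HEVC-style QP scale (roughly 0-51, where larger values mean coarser quantization). The falloff function and all constants are illustrative assumptions standing in for the measured lens and rod/cone characteristics; they are not values given in the disclosure.

```python
import numpy as np

def acuity_weight(ecc_deg: np.ndarray) -> np.ndarray:
    """Relative visual acuity versus eccentricity (degrees from view center):
    ~1.0 inside the cone-dominated central +/-5 degrees, then decaying toward
    the rod-dominated periphery.  The exponential falloff constant is a crude
    stand-in for the measured rod/cone density curves referenced above."""
    return np.where(ecc_deg <= 5.0, 1.0, np.exp(-(ecc_deg - 5.0) / 30.0))

def build_qp_map(width_blocks: int, height_blocks: int, fov_deg: float = 90.0,
                 base_qp: int = 22, max_qp: int = 40) -> np.ndarray:
    """Per-block QP map: finest quantization (lowest QP) at the center of the
    viewport, progressively coarser toward the edges of the frame."""
    ys, xs = np.mgrid[0:height_blocks, 0:width_blocks]
    # Approximate angular offset of each block from the center of the frame.
    ex = (xs - (width_blocks - 1) / 2.0) / width_blocks * fov_deg
    ey = (ys - (height_blocks - 1) / 2.0) / height_blocks * fov_deg
    ecc = np.hypot(ex, ey)
    w = acuity_weight(ecc)                      # 1.0 at center -> ~0 at edges
    qp = base_qp + (1.0 - w) * (max_qp - base_qp)
    return np.rint(qp).astype(int)
```

An encoder that accepts per-block QP offsets (a capability many H.264/HEVC encoders expose) could then consume such a map, so that the bits saved at the edges of the frame are spent where the user is actually looking.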
The method may obtain a frame of virtual reality data (802) and may generate the gradient scaling for the frame (804). In the method, gradient scaling is applied beyond the actual field of view (FOV). Specifically, the method implements gradient scaling to encode the original full 360 degree video frame for a specific view point, using gradient scaling from the center where the user is currently looking all the way to the edge of the current 360 degree view (an example of which is shown in the accompanying figure and sketched in code below).
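Here is a minimal sketch of one way such gradient scaling could be computed, assuming that "pixel ratio" means the output:source sampling density along each axis: full density inside the FOV, then a linear taper out to the edge of the 360 degree frame. The taper shape, the `edge_density` floor and the sampling-grid implementation are all assumptions made for illustration.

```python
import numpy as np

def gradient_sample_grid(n_src: int, fov_frac: float = 0.3,
                         edge_density: float = 0.25) -> np.ndarray:
    """Source-pixel coordinates for one line of a gradient-scaled output.

    Density is the output:source pixel ratio along the line: 1.0 (no scaling)
    inside the central fov_frac where the user is looking, falling linearly
    to edge_density at the edges of the 360 degree frame, so the out-of-view
    region is packed into fewer output pixels."""
    x = np.linspace(-1.0, 1.0, n_src)        # normalized source coordinates
    falloff = (np.abs(x) - fov_frac) / (1.0 - fov_frac)
    density = np.where(np.abs(x) <= fov_frac, 1.0,
                       1.0 - (1.0 - edge_density) * falloff)
    n_out = int(round(density.mean() * n_src))   # size of the smaller output
    # Invert the cumulative density: uniform steps in the output land close
    # together where density is high (center) and far apart where it is low.
    cdf = np.cumsum(density)
    cdf /= cdf[-1]
    targets = np.linspace(cdf[0], 1.0, n_out)
    return np.interp(targets, cdf, np.arange(n_src, dtype=float))
```

A full frame could then be resampled with such grids, one per axis (for example with OpenCV's `cv2.remap`), and the client would presumably invert the same map after decoding so that the out-of-view region is stretched back out if the user turns before the next view arrives.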
For an adaptive streaming virtual reality system, the gradient scaling method may provide a new level of flexibility to adaptive streaming solutions. Legacy adaptive streaming systems only vary between different video profiles in resolution, frame rate, and bitrate. Using the gradient method described above, the method may use multiple FOVs to cover the full 360 original video so that different gradient scaling sizes per bitrate range may be used. For example, for the premium profile at a high bitrate, the method may use fewer FOVs to cover the 360 original video, since more bits and more pixels can be spent per FOV, while many more FOVs are used when going down in bitrates, where fewer bits and fewer pixels must be used. A sketch of such a profile ladder follows.
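The following Python sketch shows the profile ladder this paragraph implies: at high bitrates a few wide FOVs cover the full 360 video, and the FOV count grows as the bitrate budget shrinks. The bitrate breakpoints, FOV counts and angles below are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    bitrate_kbps: int
    num_fovs: int        # viewpoints generated to cover the full 360 video
    fov_deg: float       # width of the full-quality region per viewpoint

# Hypothetical ladder: fewer, wider FOVs when bits are plentiful;
# more, narrower FOVs as the bitrate budget shrinks.
LADDER = [
    Profile(bitrate_kbps=20000, num_fovs=4,  fov_deg=120.0),  # premium
    Profile(bitrate_kbps=8000,  num_fovs=8,  fov_deg=90.0),
    Profile(bitrate_kbps=3000,  num_fovs=16, fov_deg=60.0),
]

def pick_profile(available_kbps: int) -> Profile:
    """Standard adaptive-streaming selection: highest profile that fits."""
    for p in LADDER:
        if available_kbps >= p.bitrate_kbps:
            return p
    return LADDER[-1]
```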
In some embodiments, the disclosed method provides a method for encoding of virtual reality data in which gradient scaling is performed for each piece of virtual reality content that has a field of view portion and an out of view portion that surrounds the field of view portion. The disclosed gradient scaling maintains a pixel ratio in the field of view portion during encoding while decreasing the pixel ratio in the out of view portion during the encoding. In the method, the virtual reality data is encoded into a smaller image using the gradient scaled pieces of virtual reality data content. The method may also gradually decrease the pixel ratio as a portion of the out of view portion gets farther away from the field of view portion.
In another aspect of the disclosed method for encoding of virtual reality data, the method may generate one or more fields of view for a piece of virtual reality data based on a bitrate budget for the virtual reality system and then encode the virtual reality data using the one or more generated fields of view. The method also may generate a single field of view with a high bitrate covering the piece of virtual reality data and generate a plurality of fields of view with a lower bitrate, each field of view of the plurality covering less of the virtual reality data.
In another aspect of the disclosed method for encoding of virtual reality data, the method may receive viewing data from a plurality of virtual reality devices, wherein the viewing data indicates the content being viewed using the plurality of virtual reality devices. The method may then encode, at a backend remote from the plurality of virtual reality devices, the virtual reality data using the received viewing data. The viewing data may be viewing data over a period of time, or may be analytic data that is one of a most viewed scene of the virtual reality data, a heat map of the virtual reality data, and a movement of each virtual reality device indicating a scene of the virtual reality data currently being viewed by each virtual reality device, used to determine an area of the scene in the virtual reality data most viewed. The method may encode the area of the scene in the virtual reality data most viewed at a higher quality based on the analytic data and encode a plurality of scenes in the virtual reality data that are not most viewed at a lower quality. A sketch of this analytics-driven step follows.
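The following Python sketch illustrates one way the analytics-driven encoding could work, assuming the viewing data arrives as per-device yaw samples that the backend aggregates into a heat map over the 360 degree horizontal span; regions with above-average popularity then get the high-quality (low) QP. The histogram aggregation and the threshold rule are assumptions, not details from the disclosure.

```python
import numpy as np

def heat_map(yaw_samples_deg: list[float], n_bins: int = 36) -> np.ndarray:
    """Aggregate reported view directions from many devices into a
    normalized popularity histogram over the 360 degree horizontal span."""
    hist, _ = np.histogram(np.mod(yaw_samples_deg, 360.0),
                           bins=n_bins, range=(0.0, 360.0))
    return hist / max(hist.sum(), 1)

def region_qps(heat: np.ndarray, qp_hot: int = 22, qp_cold: int = 38,
               hot_threshold: float | None = None) -> np.ndarray:
    """Assign a QP per horizontal region: most-viewed regions are encoded
    at higher quality (lower QP), rarely viewed regions at lower quality."""
    if hot_threshold is None:
        hot_threshold = 1.0 / len(heat)          # above-average popularity
    return np.where(heat >= hot_threshold, qp_hot, qp_cold)
```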
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical applications, to thereby enable others skilled in the art to best utilize the disclosure and various embodiments with various modifications as are suited to the particular use contemplated.
The system and method disclosed herein may be implemented via one or more components, systems, servers, appliances, other subcomponents, or distributed between such elements. When implemented as a system, such systems may include and/or involve, inter alia, components such as software modules, general-purpose CPU, RAM, etc. found in general-purpose computers. In implementations where the innovations reside on a server, such a server may include or involve components such as CPU, RAM, etc., such as those found in general-purpose computers.
Additionally, the system and method herein may be achieved via implementations with disparate or entirely different software, hardware and/or firmware components, beyond that set forth above. With regard to such other components (e.g., software, processing components, etc.) and/or computer-readable media associated with or embodying the present inventions, for example, aspects of the innovations herein may be implemented consistent with numerous general purpose or special purpose computing systems or configurations. Various exemplary computing systems, environments, and/or configurations that may be suitable for use with the innovations herein may include, but are not limited to: software or other components within or embodied on personal computers, servers or server computing devices such as routing/connectivity components, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, consumer electronic devices, network PCs, other existing computer platforms, distributed computing environments that include one or more of the above systems or devices, etc.
In some instances, aspects of the system and method may be achieved via or performed by logic and/or logic instructions including program modules, executed in association with such components or circuitry, for example. In general, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular instructions herein. The inventions may also be practiced in the context of distributed software, computer, or circuit settings where circuitry is connected via communication buses, circuitry or links. In distributed settings, control/instructions may occur from both local and remote computer storage media including memory storage devices.
The software, circuitry and components herein may also include and/or utilize one or more types of computer readable media. Computer readable media can be any available media that is resident on, associable with, or can be accessed by such circuits and/or computing components. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and can be accessed by a computing component. Communication media may comprise computer readable instructions, data structures, program modules and/or other components. Further, communication media may include wired media such as a wired network or direct-wired connection; however, no media of any such type herein includes transitory media. Combinations of any of the above are also included within the scope of computer readable media.
In the present description, the terms component, module, device, etc. may refer to any type of logical or functional software elements, circuits, blocks and/or processes that may be implemented in a variety of ways. For example, the functions of various circuits and/or blocks can be combined with one another into any other number of modules. Each module may even be implemented as a software program stored on a tangible memory (e.g., random access memory, read only memory, CD-ROM memory, hard disk drive, etc.) to be read by a central processing unit to implement the functions of the innovations herein. Or, the modules can comprise programming instructions transmitted to a general purpose computer or to processing/graphics hardware via a transmission carrier wave. Also, the modules can be implemented as hardware logic circuitry implementing the functions encompassed by the innovations herein. Finally, the modules can be implemented using special purpose instructions (e.g., SIMD instructions), field programmable logic arrays or any mix thereof which provides the desired level of performance and cost.
As disclosed herein, features consistent with the disclosure may be implemented via computer-hardware, software and/or firmware. For example, the systems and methods disclosed herein may be embodied in various forms including, for example, a data processor, such as a computer that also includes a database, digital electronic circuitry, firmware, software, or in combinations of them. Further, while some of the disclosed implementations describe specific hardware components, systems and methods consistent with the innovations herein may be implemented with any combination of hardware, software and/or firmware. Moreover, the above-noted features and other aspects and principles of the innovations herein may be implemented in various environments. Such environments and related applications may be specially constructed for performing the various routines, processes and/or operations according to the invention or they may include a general-purpose computer or computing platform selectively activated or reconfigured by code to provide the necessary functionality. The processes disclosed herein are not inherently related to any particular computer, network, architecture, environment, or other apparatus, and may be implemented by a suitable combination of hardware, software, and/or firmware. For example, various general-purpose machines may be used with programs written in accordance with teachings of the invention, or it may be more convenient to construct a specialized apparatus or system to perform the required methods and techniques.
Aspects of the method and system described herein, such as the logic, may also be implemented as functionality programmed into any of a variety of circuitry, including programmable logic devices (“PLDs”), such as field programmable gate arrays (“FPGAs”), programmable array logic (“PAL”) devices, electrically programmable logic and memory devices and standard cell-based devices, as well as application specific integrated circuits. Some other possibilities for implementing aspects include: memory devices, micro-controllers with memory (such as EEPROM), embedded microprocessors, firmware, software, etc. Furthermore, aspects may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. The underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (“MOSFET”) technologies like complementary metal-oxide semiconductor (“CMOS”), bipolar technologies like emitter-coupled logic (“ECL”), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, and so on.
It should also be noted that the various logic and/or functions disclosed herein may be enabled using any number of combinations of hardware, firmware, and/or as data and/or instructions embodied in various machine-readable or computer-readable media, in terms of their behavioral, register transfer, logic component, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media), though again does not include transitory media. Unless the context clearly requires otherwise, throughout the description, the words "comprise," "comprising," and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of "including, but not limited to." Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words "herein," "hereunder," "above," "below," and words of similar import refer to this application as a whole and not to any particular portions of this application. When the word "or" is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
Although certain presently preferred implementations of the invention have been specifically described herein, it will be apparent to those skilled in the art to which the invention pertains that variations and modifications of the various implementations shown and described herein may be made without departing from the spirit and scope of the invention. Accordingly, it is intended that the invention be limited only to the extent required by the applicable rules of law.
While the foregoing has been with reference to a particular embodiment of the disclosure, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the disclosure, the scope of which is defined by the appended claims.
This application claims the benefit under 35 USC 119(e) of U.S. Provisional Patent Application Ser. No. 62/503,560, filed May 9, 2017 and entitled "Video Quality Improvements System and Method for Virtual Reality", the entirety of which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5596659 | Normile et al. | Jan 1997 | A |
5703799 | Ohta | Dec 1997 | A |
5856832 | Pakenham et al. | Jan 1999 | A |
5872572 | Rossignac | Feb 1999 | A |
5900849 | Gallery | May 1999 | A |
6016360 | Nguyen et al. | Jan 2000 | A |
6052205 | Matsuura | Apr 2000 | A |
6128407 | Inoue et al. | Oct 2000 | A |
6304295 | Krishnamurthy et al. | Oct 2001 | B1 |
6393156 | Nguyen et al. | May 2002 | B1 |
6463178 | Kondo et al. | Oct 2002 | B1 |
6466254 | Furlan et al. | Oct 2002 | B1 |
6625221 | Knee et al. | Sep 2003 | B2 |
6690731 | Gough et al. | Feb 2004 | B1 |
6697521 | Islam et al. | Feb 2004 | B2 |
6715003 | Safai | Mar 2004 | B1 |
6792153 | Tsujii | Sep 2004 | B1 |
6925193 | Farmer | Aug 2005 | B2 |
6938073 | Mendhekar | Aug 2005 | B1 |
7003167 | Mukherjee | Feb 2006 | B2 |
7149811 | Wise et al. | Dec 2006 | B2 |
7791508 | Wegener | Sep 2010 | B2 |
7894680 | Moon et al. | Feb 2011 | B2 |
7916960 | Mizuno | Mar 2011 | B2 |
7965900 | Maurer et al. | Jun 2011 | B2 |
8077990 | Islam | Dec 2011 | B2 |
8130828 | Hsu et al. | Mar 2012 | B2 |
8194989 | Lee et al. | Jun 2012 | B2 |
8265144 | Christoffersen | Sep 2012 | B2 |
8347329 | Dawson | Jan 2013 | B2 |
8422804 | Islam | Apr 2013 | B2 |
8463033 | Islam | Jun 2013 | B2 |
8639057 | Mendhekar | Jan 2014 | B1 |
8811736 | Islam | Aug 2014 | B2 |
9042644 | Mendhekar | May 2015 | B2 |
9230341 | Islam | Jan 2016 | B2 |
9917877 | Adams et al. | Mar 2018 | B2 |
10015506 | Mendhekar | Jul 2018 | B2 |
10015507 | Mendhekar | Jul 2018 | B2 |
10460700 | Mendhekar et al. | Oct 2019 | B1 |
10462477 | Mendhekar | Oct 2019 | B2 |
10482653 | Xu et al. | Nov 2019 | B1 |
10595069 | Swaminathan et al. | Mar 2020 | B2 |
10742704 | Guardini et al. | Aug 2020 | B2 |
10743004 | Waggoner | Aug 2020 | B1 |
10944971 | Guardini et al. | Mar 2021 | B1 |
20010031009 | Knee et al. | Oct 2001 | A1 |
20010041011 | Passagio et al. | Nov 2001 | A1 |
20010048770 | Maeda | Dec 2001 | A1 |
20020108118 | Cohen et al. | Aug 2002 | A1 |
20030002734 | Islam et al. | Jan 2003 | A1 |
20030108099 | Nagumo | Jun 2003 | A1 |
20030202579 | Lin et al. | Oct 2003 | A1 |
20030202581 | Kodama | Oct 2003 | A1 |
20030206590 | Krishnamachari | Nov 2003 | A1 |
20040137886 | Ross et al. | Jul 2004 | A1 |
20040264793 | Okubo | Dec 2004 | A1 |
20050063599 | Sato | Mar 2005 | A1 |
20060039473 | Filippini et al. | Feb 2006 | A1 |
20060115166 | Sung et al. | Jun 2006 | A1 |
20060177145 | Lee et al. | Aug 2006 | A1 |
20060285587 | Luo et al. | Dec 2006 | A1 |
20070019875 | Sung et al. | Jan 2007 | A1 |
20070064800 | Ha | Mar 2007 | A1 |
20070082742 | Takizawa | Apr 2007 | A1 |
20070113250 | Logan | May 2007 | A1 |
20070206871 | Jalil et al. | Sep 2007 | A1 |
20070237237 | Chang et al. | Oct 2007 | A1 |
20070248163 | Zuo et al. | Oct 2007 | A1 |
20080032739 | Soskov et al. | Feb 2008 | A1 |
20080211788 | Ting | Sep 2008 | A1 |
20080240239 | Stuart | Oct 2008 | A1 |
20080247658 | Lee et al. | Oct 2008 | A1 |
20100007738 | Lehnert | Jan 2010 | A1 |
20100020868 | Ayres, Jr. et al. | Jan 2010 | A1 |
20100066912 | Kumwilaisak | Mar 2010 | A1 |
20100110163 | Bruls et al. | May 2010 | A1 |
20100266008 | Reznik | Oct 2010 | A1 |
20100272174 | Toma et al. | Oct 2010 | A1 |
20100299630 | McCutchen et al. | Nov 2010 | A1 |
20100329358 | Zhang et al. | Dec 2010 | A1 |
20110103445 | Jax et al. | May 2011 | A1 |
20110200262 | Canel-Katz | Aug 2011 | A1 |
20110206287 | Islam | Aug 2011 | A1 |
20110307685 | Song | Dec 2011 | A1 |
20120007947 | Costa | Jan 2012 | A1 |
20120026157 | Unkel et al. | Feb 2012 | A1 |
20120086815 | Cooper et al. | Apr 2012 | A1 |
20120120251 | Sun | May 2012 | A1 |
20120183053 | Lu | Jul 2012 | A1 |
20130024898 | Munetsugu et al. | Jan 2013 | A1 |
20130072299 | Lee | Mar 2013 | A1 |
20130094590 | Laksono et al. | Apr 2013 | A1 |
20130286160 | Sasaki et al. | Oct 2013 | A1 |
20130315573 | Sasaki et al. | Nov 2013 | A1 |
20140028721 | Kalva | Jan 2014 | A1 |
20140341303 | Mendhekar | Jan 2014 | A1 |
20140341304 | Mendhekar | Jan 2014 | A1 |
20140133583 | Lin et al. | May 2014 | A1 |
20140188451 | Asahara et al. | Jul 2014 | A1 |
20140282736 | Elstermann | Sep 2014 | A1 |
20140306954 | Kao | Oct 2014 | A1 |
20150055937 | Van Hoff et al. | Feb 2015 | A1 |
20150229948 | Puri | Aug 2015 | A1 |
20150235453 | Schowengerdt et al. | Aug 2015 | A1 |
20150264296 | Devaux | Sep 2015 | A1 |
20150279022 | Shuster et al. | Oct 2015 | A1 |
20150338204 | Richert et al. | Nov 2015 | A1 |
20150348558 | Riedmiller et al. | Dec 2015 | A1 |
20150362733 | Spivack | Dec 2015 | A1 |
20160013244 | Sutton | Jan 2016 | A1 |
20160073114 | Kawamura | Mar 2016 | A1 |
20160140733 | Gu et al. | May 2016 | A1 |
20160150230 | He | May 2016 | A1 |
20160198140 | Nadler | Jul 2016 | A1 |
20160247250 | Mendhekar | Aug 2016 | A1 |
20160282433 | Kannengiesser | Sep 2016 | A1 |
20160307297 | Akenine-Moller | Oct 2016 | A1 |
20170034501 | McDevitt | Feb 2017 | A1 |
20170064294 | Priede | Mar 2017 | A1 |
20170103577 | Mendhekar et al. | Apr 2017 | A1 |
20170188007 | Bae et al. | Jun 2017 | A1 |
20170206707 | Guay | Jul 2017 | A1 |
20170244775 | Ha | Aug 2017 | A1 |
20170302918 | Mammou et al. | Oct 2017 | A1 |
20180035134 | Pang et al. | Feb 2018 | A1 |
20180077209 | So et al. | Mar 2018 | A1 |
20180160160 | Swaminathan | Jun 2018 | A1 |
20180192058 | Chen et al. | Jul 2018 | A1 |
20180232955 | Namgoong et al. | Aug 2018 | A1 |
20180262687 | Hildreth | Sep 2018 | A1 |
20180270471 | Luo et al. | Sep 2018 | A1 |
20180300564 | Kwant | Oct 2018 | A1 |
20180310013 | Tanner | Oct 2018 | A1 |
20190012822 | Seigneurbieux | Jan 2019 | A1 |
20190045222 | Yip | Feb 2019 | A1 |
20190051058 | Robinson | Feb 2019 | A1 |
20190102944 | Han et al. | Apr 2019 | A1 |
20190113966 | Connellan et al. | Apr 2019 | A1 |
20190173929 | Guardini et al. | Jun 2019 | A1 |
20190174125 | Ninan | Jun 2019 | A1 |
20190200084 | Gilson et al. | Jun 2019 | A1 |
20190258058 | Fortin-Deschênes | Aug 2019 | A1 |
20190310472 | Schilt et al. | Oct 2019 | A1 |
20190362151 | Stokking et al. | Nov 2019 | A1 |
20190364204 | Wozniak et al. | Nov 2019 | A1 |
20190364205 | Wozniak et al. | Nov 2019 | A1 |
20200050884 | Han et al. | Feb 2020 | A1 |
20200152105 | Ishii | May 2020 | A1 |
Number | Date | Country |
---|---|---|
2003-018412 | Jan 2003 | JP |
2004-173205 | Jun 2004 | JP |
2007-104645 | Apr 2007 | JP |
2007-318711 | Dec 2007 | JP |
2003-0007080 | Jan 2003 | KR |
2010-0918377 | Sep 2009 | KR |
WO 2003103295 | Dec 2003 | WO |
WO 2006061734 | Jun 2006 | WO |
Entry |
---|
Augustine, P., et al., entitled "Anatomical Distribution of Rods and Cones"—National Institutes of Health, Neuroscience, 2nd edition, Sunderland (MA): Sinauer Associates, dated 2001, retrieved from the web at https://www.ncbi.nlm.nih.gov/books/NBK10848/ (2 pgs.). |
Anonymous, entitled "Rods & Cones" - retrieved from the web on May 7, 2018 at https://www.cis.rit.edu/people/faculty/montag/vandplite/pages/chap_9/ch9p1.html (8 pgs.). |
Charalampos Patrikakis, Nikolaos Papaoulakis, Panagiotis Papageorgiou, Aristodemos Pnevmatikakis, Paul Chippendale, Mario S. Nunes, Rui Santos Cruz, Stefan Poslad, Zhenchen Wang, “Personalized Coverage of Large Athletic Events”, Nov. 9, 2010, IEEE, IEEE MultiMedia, vol. 18, issue 4. |
Omar A. Niamut, Axel Kochale, Javier Ruiz Hidalgo, Rene Kaiser, Jens Spille, Jean-Francois Macq, Gert Kienast, Oliver Schreer, Ben Shirley, “Towards A Format-agnostic Approach for Production, Delivery and Rendering of Immersive Media”, Mar. 1, 2013, ACM, Proceedings of the 4th ACM Multimedia Systems Conference. |
Rene Kaiser, Marcus Thaler, Andreas Kriechbaum, Hannes Fassold, Werner Bailer, Jakub Rosner, “Real-time Person Tracking in High-resolution Panoramic Video for Automated Broadcast Production”, Nov. 17, 2011, IEEE, 2011 Conference for Visual Media Production, pp. 21-29. |
Number | Date | Country | |
---|---|---|---|
62503560 | May 2017 | US |