The application relates generally to controlling a large screen display using a wireless portable computer such as a tablet or laptop computer interfacing with a display controller such as a game console.
A computer ecosystem, or digital ecosystem, is an adaptive and distributed socio-technical system that is characterized by its sustainability, self-organization, and scalability. Inspired by environmental ecosystems, which consist of biotic and abiotic components that interact through nutrient cycles and energy flows, complete computer ecosystems consist of hardware, software, and services that in some cases may be provided by one company, such as Sony. The goal of each computer ecosystem is to provide consumers with everything that may be desired, at least in part through services and/or software that may be exchanged via the Internet. Moreover, interconnectedness and sharing among elements of an ecosystem, such as applications within a computing cloud, provide consumers with increased capability to organize and access data, and are seen as characteristics of efficient, integrative ecosystems of the future.
Two general types of computer ecosystems exist: vertical and horizontal computer ecosystems. In the vertical approach, virtually all aspects of the ecosystem are owned and controlled by one company, and are specifically designed to seamlessly interact with one another. Horizontal ecosystems, on the other hand, integrate aspects such as hardware and software that are created by other entities into one unified ecosystem. The horizontal approach allows for a greater variety of input from consumers and manufacturers, increasing the capacity for novel innovations and adaptations to changing demands.
An example ecosystem that is pertinent here is an entertainment ecosystem in the home or in a luxury suite at a stadium that includes a large screen high definition display controlled by a controller such as a personal computer (PC) or game console which receives commands from a portable control device such as a tablet computer.
Accordingly, a control device includes at least one computer readable storage medium bearing instructions executable by a processor, and at least one processor configured for accessing the computer readable storage medium to execute the instructions to configure the processor for presenting on a display of the control device a user interface (UI) presenting at least one video image of content and a border superimposed on a portion of the video image to define a portion of video. The portion of video is smaller than the video image. The instructions when executed by the processor configure the processor for receiving a user input to move the border from a first portion of video in the video image to a second portion of video in the video image. Also, responsive to the user input, the instructions when executed by the processor configure the processor for sending a command to a controller of a display device presenting the content separately from the control device to pan the video image on the display device from the first portion to the second portion.
In some embodiments, the instructions when executed by the processor configure the processor for, responsive to the user input, sending a command to the controller of the display device presenting the content separately from the control device to cause the controller to pan and zoom a portion of the video image, such that the content related to the video image on the display device is entirely established, in temporal sequence, by a zoomed presentation of the first portion, then a moving pan across at least part of the video image on the display device in concert with the user input to move the border to the second portion of the video image on the control device. The instructions when executed by the processor may also configure the processor for, responsive to the user input dropping the border on the second portion of the video image on the control device, sending a command to the controller to cause presentation of the video image on the display device to be entirely established by a zoomed presentation of the second portion.
In examples, the instructions when executed by the processor configure the processor for presenting both the video image of the content and the border superimposed on the portion of the video image as the user input causes the border to move across the video image of the content, while causing the controller to present on the display device only content from the video image corresponding to content within the border on the control device. The control device can be established by a portable computer and the display device can include an ultra high definition display. The control device may communicate with the controller only, and not with the display device directly.
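The border-to-display mapping described above can be sketched as a simple coordinate scaling from the control device's lower-resolution image to the display device's video. This is a hypothetical illustration only; the function name, coordinate convention, and pixel dimensions are assumptions, not details taken from the disclosure.

```javascript
// Hypothetical sketch: map the border rectangle drawn on the control
// device's low-resolution video to the corresponding crop region on the
// higher-resolution display device video.
function borderToDisplayRegion(border, controlSize, displaySize) {
  const sx = displaySize.width / controlSize.width;   // horizontal scale factor
  const sy = displaySize.height / controlSize.height; // vertical scale factor
  return {
    x: Math.round(border.x * sx),
    y: Math.round(border.y * sy),
    width: Math.round(border.width * sx),
    height: Math.round(border.height * sy),
  };
}

// Example: a 320x180 border at (100, 50) on a 1280x720 control-device
// image maps to a region on a 3840x2160 (4K) display.
const region = borderToDisplayRegion(
  { x: 100, y: 50, width: 320, height: 180 },
  { width: 1280, height: 720 },
  { width: 3840, height: 2160 }
);
// region is { x: 300, y: 150, width: 960, height: 540 }
```

The controller would then zoom the display-device video so that this region fills the screen, consistent with the zoomed presentation described above.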
In another aspect, a method includes receiving at a control device a user-input drag and drop between first and second portions of a video image of content presented on the control device. Responsive to the drag and drop, the method entails commanding a controller of a display device presenting the content to pan the presentation on the display device from the first portion to the second portion.
In another aspect, a system includes a display device configured for presenting video content, a controller configured for controlling the display device, and a control device configured for communicating commands to the controller to control presentation on the display device. The control device is configured to present a user interface (UI) having an indicator on a portion of video content presented on the control device. The video content presented on the control device is the same as the video content presented on the display device. The control device commands the controller to pan and zoom the video content on the display device responsive to a user moving the indicator on the control device.
The details of the present invention, both as to its structure and operation, can be best understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:
This disclosure relates generally to computer ecosystems including aspects of consumer electronics (CE) device based user information in computer ecosystems. A system herein may include server and client components, connected over a network such that data may be exchanged between the client and server components. The client components may include one or more computing devices including portable televisions (e.g. smart TVs, Internet-enabled TVs), portable computers such as laptops and tablet computers, and other mobile devices including smart phones and additional examples discussed below. These client devices may operate with a variety of operating environments. For example, some of the client computers may employ, as examples, operating systems from Microsoft, or a Unix operating system, or operating systems produced by Apple Computer or Google. These operating environments may be used to execute one or more browsing programs, such as a browser made by Microsoft or Google or Mozilla or other browser program that can access web applications hosted by the Internet servers discussed below.
Servers may include one or more processors executing instructions that configure the servers to receive and transmit data over a network such as the Internet. Or, a client and server can be connected over a local intranet or a virtual private network. A server or controller may be instantiated by a game console such as a Sony Playstation (trademarked), a personal computer, etc.
Information may be exchanged over a network between the clients and servers. To this end and for security, servers and/or clients can include firewalls, load balancers, temporary storages, proxies, and other network infrastructure for reliability and security. One or more servers may form an apparatus that implements methods of providing a secure community such as an online social website to network members.
As used herein, instructions refer to computer-implemented steps for processing information in the system. Instructions can be implemented in software, firmware or hardware and include any type of programmed step undertaken by components of the system.
A processor may be any conventional general purpose single- or multi-chip processor that can execute logic by means of various lines such as address lines, data lines, and control lines and registers and shift registers.
Software modules described by way of the flow charts and user interfaces herein can include various sub-routines, procedures, etc. Without limiting the disclosure, logic stated to be executed by a particular module can be redistributed to other software modules and/or combined together in a single module and/or made available in a shareable library.
Present principles described herein can be implemented as hardware, software, firmware, or combinations thereof; hence, illustrative components, blocks, modules, circuits, and steps are set forth in terms of their functionality.
Further to what has been alluded to above, logical blocks, modules, and circuits described below can be implemented or performed with a general purpose processor, a digital signal processor (DSP), a field programmable gate array (FPGA) or other programmable logic device such as an application specific integrated circuit (ASIC), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be implemented by a controller or state machine or a combination of computing devices.
The functions and methods described below, when implemented in software, can be written in an appropriate language such as but not limited to C# or C++, and can be stored on or transmitted through a computer-readable storage medium such as a random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disk read-only memory (CD-ROM) or other optical disk storage such as digital versatile disc (DVD), magnetic disk storage or other magnetic storage devices including removable thumb drives, etc. A connection may establish a computer-readable medium. Such connections can include, as examples, hard-wired cables including fiber optics and coaxial wires and digital subscriber line (DSL) and twisted pair wires. Such connections may include wireless communication connections including infrared and radio.
Components included in one embodiment can be used in other embodiments in any appropriate combination. For example, any of the various components described herein and/or depicted in the Figures may be combined, interchanged or excluded from other embodiments.
“A system having at least one of A, B, and C” (likewise “a system having at least one of A, B, or C” and “a system having at least one of A, B, C”) includes systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.
Now specifically referring to
Accordingly, to undertake such principles the AVDD 12 can be established by some or all of the components shown in
In addition to the foregoing, the AVDD 12 may also include one or more input ports 26 such as, e.g., a USB port to physically connect (e.g. using a wired connection) to another CE device and/or a headphone port to connect headphones to the AVDD 12 for presentation of audio from the AVDD 12 to a user through the headphones. The AVDD 12 may further include one or more tangible computer readable storage medium 28 such as disk-based or solid state storage. Also in some embodiments, the AVDD 12 can include a position or location receiver such as but not limited to a cellphone receiver, GPS receiver and/or altimeter 30 that is configured to e.g. receive geographic position information from at least one satellite or cellphone tower and provide the information to the processor 24 and/or determine an altitude at which the AVDD 12 is disposed in conjunction with the processor 24. However, it is to be understood that another suitable position receiver other than a cellphone receiver, GPS receiver and/or altimeter may be used in accordance with present principles to e.g. determine the location of the AVDD 12 in e.g. all three dimensions.
Continuing the description of the AVDD 12, in some embodiments the AVDD 12 may include one or more cameras 32 that may be, e.g., a thermal imaging camera, a digital camera such as a webcam, and/or a camera integrated into the AVDD 12 and controllable by the processor 24 to gather pictures/images and/or video in accordance with present principles. Also included on the AVDD 12 may be a Bluetooth transceiver 34 and other Near Field Communication (NFC) element 36 for communication with other devices using Bluetooth and/or NFC technology, respectively. An example NFC element can be a radio frequency identification (RFID) element.
Further still, the AVDD 12 may include one or more auxiliary sensors 37 (e.g., a motion sensor such as an accelerometer, gyroscope, cyclometer, or a magnetic sensor, an infrared (IR) sensor, an optical sensor, a speed and/or cadence sensor, a gesture sensor (e.g. for sensing gesture command), etc.) providing input to the processor 24. The AVDD 12 may include still other sensors such as e.g. one or more climate sensors 38 (e.g. barometers, humidity sensors, wind sensors, light sensors, temperature sensors, etc.) and/or one or more biometric sensors 40 providing input to the processor 24. In addition to the foregoing, it is noted that the AVDD 12 may also include an infrared (IR) transmitter and/or IR receiver and/or IR transceiver 42 such as an IR data association (IRDA) device. A battery (not shown) may be provided for powering the AVDD 12.
Still referring to
In the example shown, to illustrate present principles all three devices 12, 44, 46 are assumed to be members of an entertainment network in, e.g., a luxury suite of the stadium, or in a home, or at least to be present in proximity to each other in a location such as a house. However, for illustrating present principles the first CE device 44 is assumed to be in the same room as the AVDD 12, bounded by walls illustrated by dashed lines 48.
The example non-limiting first CE device 44 may be established by any one of the above-mentioned devices, for example, a portable wireless laptop computer or notebook computer, and accordingly may have one or more of the components described below. The second CE device 46 without limitation may be established by a wireless telephone.
The first CE device 44 may include one or more displays 50 that may be touch-enabled for receiving user input signals via touches on the display. The first CE device 44 may include one or more speakers 52 for outputting audio in accordance with present principles, and at least one additional input device 54 such as e.g. an audio receiver/microphone for e.g. entering audible commands to the first CE device 44 to control the device 44. The example first CE device 44 may also include one or more network interfaces 56 for communication over the network 22 under control of one or more CE device processors 58. Thus, the interface 56 may be, without limitation, a Wi-Fi transceiver, which is an example of a wireless computer network interface. It is to be understood that the processor 58 controls the first CE device 44 to undertake present principles, including the other elements of the first CE device 44 described herein such as e.g. controlling the display 50 to present images thereon and receiving input therefrom. Furthermore, note the network interface 56 may be, e.g., a wired or wireless modem or router, or other appropriate interface such as, e.g., a wireless telephony transceiver, or Wi-Fi transceiver as mentioned above, etc.
In addition to the foregoing, the first CE device 44 may also include one or more input ports 60 such as, e.g., a USB port to physically connect (e.g. using a wired connection) to another CE device and/or a headphone port to connect headphones to the first CE device 44 for presentation of audio from the first CE device 44 to a user through the headphones. The first CE device 44 may further include one or more tangible computer readable storage medium 62 such as disk-based or solid state storage. Also in some embodiments, the first CE device 44 can include a position or location receiver such as but not limited to a cellphone and/or GPS receiver and/or altimeter 64 that is configured to e.g. receive geographic position information from at least one satellite and/or cell tower, using triangulation, and provide the information to the CE device processor 58 and/or determine an altitude at which the first CE device 44 is disposed in conjunction with the CE device processor 58. However, it is to be understood that another suitable position receiver other than a cellphone and/or GPS receiver and/or altimeter may be used in accordance with present principles to e.g. determine the location of the first CE device 44 in e.g. all three dimensions.
Continuing the description of the first CE device 44, in some embodiments the first CE device 44 may include one or more cameras 66 that may be, e.g., a thermal imaging camera, a digital camera such as a webcam, and/or a camera integrated into the first CE device 44 and controllable by the CE device processor 58 to gather pictures/images and/or video in accordance with present principles. Also included on the first CE device 44 may be a Bluetooth transceiver 68 and other Near Field Communication (NFC) element 70 for communication with other devices using Bluetooth and/or NFC technology, respectively. An example NFC element can be a radio frequency identification (RFID) element.
Further still, the first CE device 44 may include one or more auxiliary sensors 72 (e.g., a motion sensor such as an accelerometer, gyroscope, cyclometer, or a magnetic sensor, an infrared (IR) sensor, an optical sensor, a speed and/or cadence sensor, a gesture sensor (e.g. for sensing gesture command), etc.) providing input to the CE device processor 58. The first CE device 44 may include still other sensors such as e.g. one or more climate sensors 74 (e.g. barometers, humidity sensors, wind sensors, light sensors, temperature sensors, etc.) and/or one or more biometric sensors 76 providing input to the CE device processor 58. In addition to the foregoing, it is noted that in some embodiments the first CE device 44 may also include an infrared (IR) transmitter and/or IR receiver and/or IR transceiver 78 such as an IR data association (IRDA) device. A battery (not shown) may be provided for powering the first CE device 44.
The second CE device 46 may include some or all of the components shown for the CE device 44.
Now in reference to the afore-mentioned at least one server 80, it includes at least one server processor 82, at least one tangible computer readable storage medium 84 such as disk-based or solid state storage, and at least one network interface 86 that, under control of the server processor 82, allows for communication with the other devices of
Accordingly, in some embodiments the server 80 may be an Internet server, and may include and perform “cloud” functions such that the devices of the system 10 may access a “cloud” environment via the server 80 in example embodiments. Or, the server 80 may be implemented by a game console or other computer in the same room as the other devices shown in
The control devices 108, 110 may be, without limitation, portable computers such as tablet computers or laptop computers (also including notebook computers) or other devices with one or more of the CE device 44 components shown in
The controller 106 may receive video from plural video cameras 114. In the stadium context, a first camera 114 may image a first half of a sports field, racetrack, or other action venue whereas a second camera 114 may image the other half of the action venue, with the feeds from the two cameras being combined before being sent to the controller 106 or combined by the controller 106 and "stitched" to present a single video view of both halves of the action venue on one or both of the displays 102, 104. That is, the combined feed from both cameras may be presented on a single display in an 8K mode, or the combined feed may be spread across the juxtaposed displays such that one display shows one half of the action venue and the other display shows the other half. It will be appreciated that the feeds sent to the controller preferably are HD or more preferably UHD.
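The stitching geometry described above can be sketched as placing the two half-venue feeds side by side. This is a hypothetical illustration; the function name and the assumption that each camera supplies a 3840x2160 (4K) feed are for illustration only.

```javascript
// Hypothetical sketch of the stitched-feed layout: two camera feeds, one
// per half of the action venue, juxtaposed horizontally into one wide
// frame that can be shown on a single display in an 8K-wide mode or split
// one half per display.
function stitchLayout(leftFeed, rightFeed) {
  return {
    width: leftFeed.width + rightFeed.width, // halves placed side by side
    height: Math.max(leftFeed.height, rightFeed.height),
    // x offsets where each half is drawn in the combined frame
    offsets: [0, leftFeed.width],
  };
}

// Two assumed 4K feeds combine into a 7680x2160 frame, which can be
// spread across two juxtaposed 4K displays, one half per display.
const combined = stitchLayout(
  { width: 3840, height: 2160 },
  { width: 3840, height: 2160 }
);
```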
As well, the cameras 114 (through appropriate image processing down-resolution components) can present the same video feeds albeit at a lower resolution to the control devices 108, 110. The UHD feeds may be sent to the controller 106 over a network from a network address while the lower resolution feeds of the same content may be simultaneously sent to the control devices 108, 110 over the network from the same or a different network address, such that the video content on the control devices is the same as the video content presentable on the displays, albeit typically of a lower resolution.
Note that a dedicated local server or PS4 may not be required in some embodiments to manage the 4K and thumbnail feeds or to analyze the commands coming from the tablet. Instead, this can happen in the cloud, with the 4K TV and tablet having their own MAC addresses and the cloud server acting as though it were local, permitting control of 4K monitors in remote locations as well.
A location sensing system such as any of those described above may be used to determine where the control device is relative to multiple 4K display locations to allow the user to roam and have the 4K content follow him. This provides for multiple 4K clusters in a stadium suite, each showing the same or different content. In this case, what is showing on a particular 4K TV cluster can drive the UI on the tablet, or the other way around.
In
When a user has instantiated the border 118 (by, e.g., selecting a "pan and zoom" selector 122), the control device in response sends a command to the controller to cause a large display such as the display 102 in
In one example, the screen of the control device 108 is a touch screen display, and a user may touch the border 118 and/or portion enclosed thereby and drag (as indicated by the arrow 126) the border to a new portion of the video as indicated by the dotted line box 128, releasing the user touch once the border has been dragged to the desired part of the video shown on the control device. In the new portion 128, two defensive players “X” are shown, denoted by subscripts “2” to distinguish them. As indicated by the arrow 130, this drag and drop causes the controller to pan the zoomed video from the first portion to the second portion in the direction of the drag until the second portion of the (higher definition) video substantially fills the screen of the display device 102 as shown at 132 in the figure.
Thus, responsive to the drag and drop of the border 118 on the control device 108, the content related to the video image on the display device 102 is entirely established, in temporal sequence, by a zoomed presentation of the first portion, then a moving pan across at least part of the video image on the display device in concert with the user input to move the border to the second portion of the video image on the control device, to end at a zoomed presentation on the display device of the second portion. During the drag and drop process, the control device presents both the entire video image of the content and the border 118 superimposed on the portion 120 of the video image as the user input causes the border to move across the video image of the content, whereas the display device 102 is caused to present only content from the video image corresponding to content 120 within the border 118 on the control device.
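The moving pan described above, in which the display's zoomed crop tracks the border as the user drags it, can be sketched as an interpolation between the first and second portions. Linear interpolation is an assumption here; the disclosure does not specify the pan trajectory or easing.

```javascript
// Hypothetical sketch of the pan step: as the user drags the border from
// the first portion toward the second, the crop origin on the display
// device moves in concert.
function panStep(from, to, t) {
  // t runs from 0 (drag start, first portion) to 1 (drop, second portion)
  return {
    x: Math.round(from.x + (to.x - from.x) * t),
    y: Math.round(from.y + (to.y - from.y) * t),
  };
}

// Mid-drag (t = 0.5) between a crop at (0, 0) and one at (1920, 540):
const mid = panStep({ x: 0, y: 0 }, { x: 1920, y: 540 }, 0.5);
// mid is { x: 960, y: 270 }
```

On drop, t reaches 1 and the display device ends at a zoomed presentation of the second portion, matching the temporal sequence described above.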
HTML5 may be used along with JavaScript (including some JavaScript libraries) and CSS in one implementation. Video files may be stored locally on the control device and played in the browser using the HTML5 video tag. Live streaming files from a local streaming server, streaming files from the Internet, and a live tuner signal can also be used as the source. To select a different file, a user drags and drops a tile; based on the id of the tile, the path of the video in the quad portion (on which the tile is dropped) of the display presenting video in four quadrants is changed to the correct video, and this new video is played. A full screen API may not be used since it requires user interaction to allow full screen on the control device. Accordingly, as a workaround for full screen, all videos can be paused, and the selected video can then be scaled by the browser to 4K resolution. If a 4K file is present, the 4K file is used and no browser scaling is needed. Websocket may be used to communicate through IP from the control device to the controller to control the display device. Messages may be broadcast to all the display devices, with each display device browser using the message it needs. Drag and drop can be done using the jQuery UI library, and scrolling can be done using CSS position updating. The stitch image zoom effect can be done by drawing video on the HTML5 canvas, sending coordinates from the control device to the controller so the controller knows which portion of the video to zoom on in the display device.
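The coordinate messaging just described might look like the sketch below. The message shape, field names, and WebSocket address are assumptions for illustration; the disclosure states only that coordinates are sent from the control device to the controller via Websocket over IP, with messages broadcast to all display devices.

```javascript
// Hypothetical sketch: build the zoom-coordinate message the control
// device could broadcast so the controller knows which portion of the
// stitched video to zoom on in the display device.
function makeZoomMessage(deviceId, region) {
  return JSON.stringify({
    type: 'zoom',     // command type; each display browser keeps the messages it needs
    source: deviceId, // which control device issued the command
    x: region.x,      // crop origin and size in display-video coordinates
    y: region.y,
    width: region.width,
    height: region.height,
  });
}

// Usage on the control device might be (address is an assumption):
//   const ws = new WebSocket('ws://controller.local:8080');
//   ws.send(makeZoomMessage('tablet-1', { x: 300, y: 150, width: 960, height: 540 }));
const msg = makeZoomMessage('tablet-1', { x: 300, y: 150, width: 960, height: 540 });
```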
A phone application may also be implemented in HTML5, allowing audio files from the server to be played on a speaker, e.g., of the display device or other device, through IP. The phone application audio matches the audio for the four videos played in the quad view, and each audio file can be selected for playback. To select an external device connected to a different HDMI input of the display device (such as a video disk player, a satellite feed, etc.), a user drags the appropriate tile for the external device, and the control device may send IP commands to the display device (via the controller) to change input. If a tile corresponding to a video is dragged and dropped, another IP command can be sent to the display device (via the controller) to change input back to the PC and/or controller, and the selected video file is played from the PC and/or controller.
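The tile-drop behavior above amounts to a small dispatch: an external-device tile produces an input-change command, while a video tile switches input back to the PC/controller and names the file to play. The command strings and tile fields below are assumptions for illustration, not the actual IP command set.

```javascript
// Hypothetical sketch of the tile-drop dispatch: decide which IP command
// the control device should send (via the controller) for a dropped tile.
function commandForTile(tile) {
  if (tile.kind === 'hdmi') {
    // external device (disc player, satellite feed, ...) on an HDMI input
    return { cmd: 'set-input', input: tile.hdmiInput };
  }
  // video file tile: switch back to the PC/controller input and play the file
  return { cmd: 'set-input', input: 'pc', play: tile.videoPath };
}

const a = commandForTile({ kind: 'hdmi', hdmiInput: 'HDMI2' });
const b = commandForTile({ kind: 'video', videoPath: 'clip-P5.mp4' });
```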
A UI 402 is presented on the display of the control device 108. As shown, the UI 402 includes plural main selectors 404 arranged in a layout, preferably the same layout as the windows 400 on the display device 102 as shown. Each main selector 404 is established by a respective video feed, in the example shown, the same content albeit perhaps in lower resolution as the four videos in the quad view of the display device 102, as duly indicated by use of the same video program designators P1-P4.
The UI 402 may further include a row 406 of additional content selectors 408 apart from the programs P1-P4 shown in the main selectors 404, although in the embodiment shown, for ease of disclosure, the same four programs P1-P4 establish the first four content selectors 408 in the row 406, while the last two content selectors indicate they may be selected to present content from two additional programs P5 and P6. In some embodiments, unlike the main selectors 404, which, recall, are established by moving video, the content selectors 408 in the row 406 may be established by still image thumbnails.
Furthermore, a column 410 of audio selectors 412 may be presented on the UI 402. Each audio selector 412 in the column 410 may correspond to a respective content in the content selectors 408 in the row 406. Each audio selector 412 may include a respective audio on/off symbol 414, with all of the symbols 414 except one having a line through them indicating that the audio represented by those selectors is not being played on the display device 102. In contrast, in the example shown the symbol 414 of the top audio selector 412 does not have a line through it, indicating that the audio from the program associated with the top selector, in the example shown, program P1, is being played on the display device 102. Touching an audio selector 412 on the control device 108 causes the control device to command the controller to switch audio play on the display device 102 to the audio represented by the touched audio selector on the control device 108. This also causes the line through the respective symbol 414 of the touched selector to be removed, and a line placed onto the symbol 414 of the selector 412 representing the replaced audio.
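The one-active-audio rule above, where touching a selector makes it the only one whose symbol lacks a line through it, can be sketched as a simple state update. The function and field names are illustrative assumptions.

```javascript
// Hypothetical sketch of the single-active-audio rule: when a user touches
// an audio selector, only that selector plays; every other selector's
// symbol gets the "muted" line through it.
function selectAudio(selectors, touchedId) {
  // Return a new state in which only the touched selector is unmuted.
  return selectors.map(s => ({ ...s, muted: s.id !== touchedId }));
}

const before = [
  { id: 'P1', muted: false }, // currently playing on the display device
  { id: 'P2', muted: true },
  { id: 'P3', muted: true },
];
const after = selectAudio(before, 'P3');
// after the touch, only P3 has muted === false
```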
The right side of
The left side of
Although not shown in
Alternatively, throwing a main selector 404 to cause the display device to enter full screen mode as described may not alter the appearance of the main selectors 404, which can remain in the quad view shown on the control device. Subsequently touching any one of the main selectors 404 on the control device may result in the control device commanding the controller to resume the quad view presentation on the display device. Or, if desired, as shown on the bottom right of
To view the details of any content represented by a content selector 408, as shown in the left side of
When a single control device 108 is used to control both display device 102 shown in
The bottom two screen shots in
While a four screen quad view is discussed and shown, any number of windows in a multi-window arrangement may be used.
While the particular CONTROL OF LARGE SCREEN DISPLAY USING WIRELESS PORTABLE COMPUTER TO PAN AND ZOOM ON LARGE SCREEN DISPLAY is herein shown and described in detail, it is to be understood that the subject matter which is encompassed by the present invention is limited only by the claims.
Number | Name | Date | Kind |
---|---|---|---|
4090504 | Nathan | May 1978 | A |
4546349 | Prohofsky et al. | Oct 1985 | A |
5630040 | Furuya | May 1997 | A |
5901178 | Lee et al. | May 1999 | A |
6388684 | Iwamura et al. | May 2002 | B1 |
6400378 | Snook | Jun 2002 | B1 |
6400852 | Edward et al. | Jun 2002 | B1 |
6710815 | Billmaier et al. | Mar 2004 | B1 |
6931656 | Eshelman et al. | Aug 2005 | B1 |
7077271 | Hamilton | Jul 2006 | B1 |
7219309 | Kaasila et al. | May 2007 | B2 |
7312764 | Driver et al. | Dec 2007 | B2 |
D595288 | Roach et al. | Jun 2009 | S |
7631277 | Nie et al. | Dec 2009 | B1 |
8045844 | Sasaki et al. | Oct 2011 | B2 |
8085851 | Toma et al. | Dec 2011 | B2 |
8144191 | Kawanishi et al. | Mar 2012 | B2 |
8149267 | Sasaki et al. | Apr 2012 | B2 |
8150238 | Sasaki et al. | Apr 2012 | B2 |
8164619 | Sasaki et al. | Apr 2012 | B2 |
8208790 | Toma et al. | Jun 2012 | B2 |
8270807 | Sasaki et al. | Sep 2012 | B2 |
8290338 | Sasaki et al. | Oct 2012 | B2 |
8326529 | Kang | Dec 2012 | B2 |
8400476 | Iijima | Mar 2013 | B2 |
8429174 | Ramani et al. | Apr 2013 | B2 |
8467664 | Sasaki et al. | Jun 2013 | B2 |
8493282 | Moran | Jul 2013 | B2 |
8520055 | Sasaki et al. | Aug 2013 | B2 |
8520056 | Sasaki et al. | Aug 2013 | B2 |
8538234 | Sasaki et al. | Sep 2013 | B2 |
8559737 | Sugio et al. | Oct 2013 | B2 |
8660189 | Toma et al. | Feb 2014 | B2 |
8666231 | Sasaki et al. | Mar 2014 | B2 |
8667017 | Forney et al. | Mar 2014 | B1 |
8824754 | Halmann | Sep 2014 | B2 |
8978075 | Kaiser et al. | Mar 2015 | B1 |
9019315 | Tsuda et al. | Apr 2015 | B2 |
9078082 | Gill et al. | Jul 2015 | B2 |
9271048 | Yee et al. | Feb 2016 | B2 |
9332303 | Holland | May 2016 | B2 |
9554061 | Proctor et al. | Jan 2017 | B1 |
9712266 | Linde et al. | Jul 2017 | B2 |
9954923 | Lee et al. | Apr 2018 | B2 |
20020012450 | Tsujii | Jan 2002 | A1 |
20020049722 | Lekuch et al. | Apr 2002 | A1 |
20020057341 | Tanaka | May 2002 | A1 |
20020057496 | Kanai | May 2002 | A1 |
20020069265 | Bountour et al. | Jun 2002 | A1 |
20030038707 | Geller | Feb 2003 | A1 |
20030038804 | Hontao | Feb 2003 | A1 |
20030041871 | Endo et al. | Mar 2003 | A1 |
20030137522 | Kaasila et al. | Jul 2003 | A1 |
20030229900 | Reisman | Dec 2003 | A1 |
20040039920 | Kim et al. | Feb 2004 | A1 |
20040227836 | Tanaka | Nov 2004 | A1 |
20050019015 | Ackley et al. | Jan 2005 | A1 |
20050019016 | Nakashika et al. | Jan 2005 | A1 |
20050071775 | Kaneko | Mar 2005 | A1 |
20050174482 | Yamada et al. | Aug 2005 | A1 |
20050227673 | Hallensleben | Oct 2005 | A1 |
20050241462 | Hirano | Nov 2005 | A1 |
20060050090 | Ahmed | Mar 2006 | A1 |
20070092217 | Nakashika et al. | Apr 2007 | A1 |
20070199035 | Schwartz et al. | Aug 2007 | A1 |
20070209009 | Huang | Sep 2007 | A1 |
20080022352 | Seo et al. | Jan 2008 | A1 |
20080040759 | She et al. | Feb 2008 | A1 |
20080092172 | Guo et al. | Apr 2008 | A1 |
20080259205 | Fukuda et al. | Oct 2008 | A1 |
20080267588 | Iwase et al. | Oct 2008 | A1 |
20080291266 | Burckart et al. | Nov 2008 | A1 |
20090122085 | Iijima | May 2009 | A1 |
20090125961 | Perlman et al. | May 2009 | A1 |
20090256864 | Borgaonkar | Oct 2009 | A1 |
20090274326 | Jia et al. | Nov 2009 | A1 |
20100021145 | Oashi et al. | Jan 2010 | A1 |
20100086285 | Sasaki et al. | Apr 2010 | A1 |
20100094542 | Kang | Apr 2010 | A1 |
20100111370 | Black et al. | May 2010 | A1 |
20100123643 | Moran | May 2010 | A1 |
20100142723 | Bucklen | Jun 2010 | A1 |
20100165083 | Sasaki et al. | Jul 2010 | A1 |
20100199214 | Mikawa | Aug 2010 | A1 |
20100202759 | Sasaki et al. | Aug 2010 | A1 |
20100232767 | Sasaki et al. | Sep 2010 | A1 |
20100254679 | Sasaki et al. | Oct 2010 | A1 |
20100283800 | Cragun et al. | Nov 2010 | A1 |
20100302130 | Kikuchi | Dec 2010 | A1 |
20100303444 | Sasaki et al. | Dec 2010 | A1 |
20110008024 | Sasaki et al. | Jan 2011 | A1 |
20110013890 | Sasaki et al. | Jan 2011 | A1 |
20110050847 | Sasaki et al. | Mar 2011 | A1 |
20110052144 | Abbas et al. | Mar 2011 | A1 |
20110064387 | Mendeloff et al. | Mar 2011 | A1 |
20110119611 | Ahn et al. | May 2011 | A1 |
20110129198 | Toma et al. | Jun 2011 | A1 |
20110142426 | Sasaki et al. | Jun 2011 | A1 |
20110149049 | Sasaki et al. | Jun 2011 | A1 |
20110158604 | Sasaki et al. | Jun 2011 | A1 |
20110161843 | Bennett et al. | Jun 2011 | A1 |
20110164111 | Karaoguz et al. | Jul 2011 | A1 |
20110164115 | Bennett et al. | Jul 2011 | A1 |
20110185312 | Lanier et al. | Jul 2011 | A1 |
20110187817 | Sasaki et al. | Aug 2011 | A1 |
20110225544 | Demar et al. | Sep 2011 | A1 |
20110231791 | Itahana | Sep 2011 | A1 |
20110252317 | Keranen | Oct 2011 | A1 |
20110254929 | Yang et al. | Oct 2011 | A1 |
20110299832 | Butcher | Dec 2011 | A1 |
20110304773 | Okumura | Dec 2011 | A1 |
20110305443 | Sasaki et al. | Dec 2011 | A1 |
20110310235 | Sasaki et al. | Dec 2011 | A1 |
20120011550 | Holland | Jan 2012 | A1 |
20120016917 | Priddle et al. | Jan 2012 | A1 |
20120033039 | Sasaki et al. | Feb 2012 | A1 |
20120044324 | Lee et al. | Feb 2012 | A1 |
20120075436 | Chen et al. | Mar 2012 | A1 |
20120082424 | Hubner et al. | Apr 2012 | A1 |
20120106921 | Sasaki et al. | May 2012 | A1 |
20120131458 | Hayes | May 2012 | A1 |
20120133736 | Nishi et al. | May 2012 | A1 |
20120140117 | Waites | Jun 2012 | A1 |
20120147141 | Sasaki et al. | Jun 2012 | A1 |
20120148218 | Sasaki et al. | Jun 2012 | A1 |
20120177343 | Sasaki et al. | Jul 2012 | A1 |
20120182203 | Yoshikawa | Jul 2012 | A1 |
20120189274 | Toma et al. | Jul 2012 | A1 |
20120198386 | Hautala | Aug 2012 | A1 |
20120227098 | Obasanjo et al. | Sep 2012 | A1 |
20120229518 | Chowdhry | Sep 2012 | A1 |
20120260198 | Choi | Oct 2012 | A1 |
20120274850 | Hawkins et al. | Nov 2012 | A1 |
20120300031 | Horlander | Nov 2012 | A1 |
20120314965 | Kashiwagi et al. | Dec 2012 | A1 |
20120319927 | Khatib | Dec 2012 | A1 |
20120321275 | Sly et al. | Dec 2012 | A1 |
20130002821 | Okuda | Jan 2013 | A1 |
20130003848 | Sugio et al. | Jan 2013 | A1 |
20130004093 | Sugio et al. | Jan 2013 | A1 |
20130009997 | Boak et al. | Jan 2013 | A1 |
20130055129 | Lee et al. | Feb 2013 | A1 |
20130093672 | Ichieda | Apr 2013 | A1 |
20130113717 | Eerd et al. | May 2013 | A1 |
20130141533 | Suh et al. | Jun 2013 | A1 |
20130147832 | Patel | Jun 2013 | A1 |
20130167070 | Tsuda | Jun 2013 | A1 |
20130169765 | Park et al. | Jul 2013 | A1 |
20130191861 | Sasaki et al. | Jul 2013 | A1 |
20130194378 | Brown | Aug 2013 | A1 |
20130223456 | Kim et al. | Aug 2013 | A1 |
20130223539 | Lee et al. | Aug 2013 | A1 |
20130229368 | Harada | Sep 2013 | A1 |
20130235270 | Sasaki et al. | Sep 2013 | A1 |
20130236158 | Lynch et al. | Sep 2013 | A1 |
20130238758 | Lee et al. | Sep 2013 | A1 |
20130243227 | Kinsbergen et al. | Sep 2013 | A1 |
20130262997 | Markworth et al. | Oct 2013 | A1 |
20130279883 | Ogawa et al. | Oct 2013 | A1 |
20130287090 | Sasaki et al. | Oct 2013 | A1 |
20130290847 | Hooven | Oct 2013 | A1 |
20130290848 | Billings et al. | Oct 2013 | A1 |
20130293676 | Sugio et al. | Nov 2013 | A1 |
20130305138 | Gicovate | Nov 2013 | A1 |
20130307929 | Hattori et al. | Nov 2013 | A1 |
20130308703 | Sugio et al. | Nov 2013 | A1 |
20130308706 | Sugio et al. | Nov 2013 | A1 |
20130313313 | Boudville | Nov 2013 | A1 |
20130314514 | Mochinaga et al. | Nov 2013 | A1 |
20130315472 | Hattori et al. | Nov 2013 | A1 |
20130335525 | Hattori et al. | Dec 2013 | A1 |
20140033127 | Choi | Jan 2014 | A1 |
20140035855 | Feldman | Feb 2014 | A1 |
20140036033 | Takashi et al. | Feb 2014 | A1 |
20140037007 | Lee et al. | Feb 2014 | A1 |
20140037011 | Lim et al. | Feb 2014 | A1 |
20140043652 | Kyoda et al. | Feb 2014 | A1 |
20140050458 | Mochinaga et al. | Feb 2014 | A1 |
20140055561 | Tsukagoshi | Feb 2014 | A1 |
20140056577 | Ogawa et al. | Feb 2014 | A1 |
20140063002 | Nagae | Mar 2014 | A1 |
20140064023 | Nagae | Mar 2014 | A1 |
20140089847 | Seo | Mar 2014 | A1 |
20140096167 | Lang et al. | Apr 2014 | A1 |
20140098715 | Morsy et al. | Apr 2014 | A1 |
20140104137 | Brown | Apr 2014 | A1 |
20140125866 | Davy et al. | May 2014 | A1 |
20140145969 | DeLuca | May 2014 | A1 |
20140149596 | Emerson | May 2014 | A1 |
20140168277 | Ashley et al. | Jun 2014 | A1 |
20140173467 | Clavel et al. | Jun 2014 | A1 |
20140253802 | Clift et al. | Sep 2014 | A1 |
20140267026 | Olsen | Sep 2014 | A1 |
20140282069 | Canetti et al. | Sep 2014 | A1 |
20140320587 | Oyman | Oct 2014 | A1 |
20140344736 | Ryman et al. | Nov 2014 | A1 |
20140359681 | Amidei | Dec 2014 | A1 |
20150081207 | Briant | Mar 2015 | A1 |
20150089372 | Mandalia et al. | Mar 2015 | A1 |
20150106012 | Kandangath et al. | Apr 2015 | A1 |
20150120953 | Crowe et al. | Apr 2015 | A1 |
20150124171 | King | May 2015 | A1 |
20150135206 | Reisman | May 2015 | A1 |
20150195490 | Oyman | Jul 2015 | A1 |
20150208103 | Guntur et al. | Jul 2015 | A1 |
20150213776 | Sharma et al. | Jul 2015 | A1 |
20150242179 | Benson | Aug 2015 | A1 |
20150297949 | Aman et al. | Oct 2015 | A1 |
20160049155 | Siemes | Feb 2016 | A1 |
20160057494 | Hwang | Feb 2016 | A1 |
20160165185 | Oyman | Jun 2016 | A1 |
20160165309 | Brandenburg et al. | Jun 2016 | A1 |
20160302166 | Dang et al. | Oct 2016 | A1 |
Number | Date | Country |
---|---|---|
101430620 | May 2009 | CN |
2478651 | Jun 2016 | GB |
2012075030 | Apr 2012 | JP |
2014025319 | Feb 2014 | WO |
2015197815 | Dec 2015 | WO |
2018049321 | Mar 2018 | WO |
Entry |
---|
Panning (camera), Feb. 9, 2014, Wikipedia (https://en.wikipedia.org/w/index.php?title=Panning_(camera)&oldid=594670928) (Year: 2014). |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related U.S. Appl. No. 14/271,282, Non-Final Office Action dated Sep. 3, 2015. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related U.S. Appl. No. 14/271,282, Applicant's response to Non-Final Office Action filed Sep. 11, 2015. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related U.S. Appl. No. 14/271,282, Final Office Action dated Dec. 4, 2015. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related U.S. Appl. No. 14/271,282, Applicant's response to Final Office Action dated Dec. 10, 2015. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related pending U.S. Appl. No. 15/139,642, filed Apr. 27, 2016. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related U.S. Appl. No. 15/139,642, Non-Final Office Action dated Sep. 14, 2016. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related U.S. Appl. No. 15/139,642, Applicant's response to Non-Final Office Action filed Oct. 10, 2016. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, file history of related U.S. Appl. No. 14/503,819, filed Oct. 1, 2014. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Final Office Action dated Nov. 18, 2016. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Applicant's response to Final Office Action filed Nov. 22, 2016. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related U.S. Appl. No. 15/139,642, Final Office Action dated Jan. 10, 2017. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related U.S. Appl. No. 15/139,642, Applicant's response to the Final Office Action filed Jan. 11, 2017. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related U.S. Appl. No. 15/139,642, Non-Final Office Action dated Feb. 7, 2017. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related U.S. Appl. No. 15/139,642, Applicant's response to Non-Final Office Action filed Feb. 9, 2017. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related U.S. Appl. No. 15/139,642, Final Office Action dated Mar. 2, 2017. |
David Andrew Young, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer Interfacing with Display Controller”, related U.S. Appl. No. 14/271,156, Non-Final Office Action dated Mar. 13, 2017. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related U.S. Appl. No. 15/139,642, Applicant's response to Final Office Action filed Mar. 21, 2017. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Non-Final Office Action dated Apr. 17, 2017. |
David Andrew Young, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer Interfacing with Display Controller”, related pending U.S. Appl. No. 14/271,156, filed May 6, 2014. |
David Andrew Young, Liviu Burciu, Louis Le, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, related pending U.S. Appl. No. 14/271,282, filed May 6, 2014. |
Affinity Labs of Texas, LLC v. Amazon.com, CAFC Appeal No. 2015-2080. |
David Andrew Young, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer Interfacing with Display Controller”, related U.S. Appl. No. 14/271,156, Final Office Action dated Jun. 28, 2017. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Final Office Action dated Jun. 16, 2017. |
David Andrew Young, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer Interfacing with Display Controller”, related U.S. Appl. No. 14/271,156, Applicant's response to Final Office Action filed Jul. 17, 2017. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Applicant's response to Final Office Action filed Jul. 17, 2017. |
David Andrew Young, Liviu Burciu, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer and Facilitating Selection of Audio on a Headphone”, Reply Brief filed by Applicant Aug. 14, 2017. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Applicant's response to Non-Final Office Action filed May 9, 2017. |
David Andrew Young, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer Interfacing with Display Controller”, related pending U.S. Appl. No. 14/271,156, Non-Final Office Action dated Aug. 25, 2017. |
David Andrew Young, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer Interfacing with Display Controller”, related pending U.S. Appl. No. 14/271,156, Applicant's response to Non-Final Office Action filed September. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Non-Final Office Action dated Oct. 6, 2017. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Applicant's response to Non-Final Office Action filed Oct. 17, 2017. |
David Andrew Young, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer Interfacing with Display Controller”, related U.S. Appl. No. 14/271,156, Final Office Action dated Dec. 13, 2017. |
David Andrew Young, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer Interfacing with Display Controller”, related U.S. Appl. No. 14/271,156, Applicant's response to Final Office Action filed Dec. 18, 2017. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Non-Final Office Action dated Mar. 1, 2018. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related pending U.S. Appl. No. 14/503,819, Applicant's response to Non-Final Office Action filed Mar. 5, 2018. |
David Andrew Young, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer Interfacing with Display Controller”, related U.S. Appl. No. 14/271,156, Non-Final Office Action dated Jan. 26, 2018. |
David Andrew Young, Louis Le, Steven Martin Richman, “Control of Large Screen Display Using Wireless Portable Computer Interfacing with Display Controller”, related U.S. Appl. No. 14/271,156, Applicant's response to Non-Final Office Action filed Jan. 30, 2018. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Final Office Action dated Jan. 25, 2019. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Applicant's response to Final Office Action filed Mar. 4, 2019. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Non-Final Office Action dated Sep. 20, 2018. |
Peter Shintani, Brant Candelore, “Presentation of Enlarged Content on Companion Display Device”, related U.S. Appl. No. 14/503,819, Applicant's response to Non-Final Office Action filed Sep. 24, 2018. |
Handley et al., “SDP: Session Description Protocol”, Network Working Group, Jul. 2006. |
Number | Date | Country | |
---|---|---|---|
20150256592 A1 | Sep 2015 | US |
Number | Date | Country | |
---|---|---|---|
61949545 | Mar 2014 | US |