The present invention relates to the field of displaying presentations associated with graphics processing units.
Electronic systems and circuits have made a significant contribution towards the advancement of modern society and are utilized in a number of applications to achieve advantageous results. Numerous electronic technologies such as digital computers, calculators, audio devices, video equipment, and telephone systems facilitate increased productivity and cost reduction in analyzing and communicating data, ideas and trends in most areas of business, science, education and entertainment. These activities frequently involve the presentation of graphics information on a display.
Graphics applications associated with display presentations can have different characteristics and features. For example, graphics applications can have different processing requirements, different quality features, involve different levels of complexity, and so on. A system may include multiple graphics processing units, and the graphics processing units can also have different processing capabilities and characteristics. In addition, control software and hardware for each processor may be entirely different (e.g., processors manufactured by different vendors, etc.) and not able to be controlled identically. Furthermore, displays typically can handle input from only one graphics processing unit at a time and often have particular interface requirements. For example, if signaling is not performed correctly, damage to the panel may result, the user may observe disturbing visual artifacts, or the panel controller may force a failsafe shutdown.
Ensuring that timing requirements are met when changing between active graphics processing units can be complicated. Before a transition occurs, one GPU is driving the LVDS interface, and there is an extremely small chance that its signals are aligned with those of the other GPU; the misalignment creates flickering and substantial delays while the panel's Timing Controller (TCON) resynchronizes with the second source. For example, when a switch occurs, one GPU may be at the end of a raster scan while the other is at the beginning. Displays typically have panel power sequencing specifications that indicate signal activation timing requirements. For example, the Standard Panels Working Group (SPWG) indicates general mechanical and interface specifications (e.g., the SPWG specification, http://www.spwg.org) for displays used in notebook computers. Detrimental impacts to images and the display itself can occur if timing requirements are not adhered to. In addition, if the panel power sequencing specifications are not adhered to, the TCON may take multiple frames until it re-acquires the vertical sync (e.g., as indicated by the assertion of the VSync signal or the de-assertion of Display Enable), whereupon it can begin to re-synchronize to the second GPU's signal. If the mis-synchronization lasts too long, the TCON can misinterpret the condition as a loss of timing and enter a fail-safe mode wherein the LCD is safely shut down; the panel must then be powered off and on before it will re-enable the display.
Systems and methods for utilizing multiple graphics processing units for controlling presentations on a display are presented. In one embodiment, a dual graphics processing system includes a first graphics processing unit for processing graphics information; a second graphics processing unit for processing graphics information; a component for synchronizing transmission of display component information from the first graphics processing unit and the second graphics processing unit; and a component for controlling switching between said first graphics processing unit and said second graphics processing unit. In one embodiment, the component for synchronizing transmission of display component information adjusts (e.g., delays, speeds up, etc.) the occurrence or duration of a corresponding graphics presentation characteristic (e.g., end of frame, end of line, vertical blanking period, horizontal blanking period, etc.) in signals from multiple graphics processing units.
The accompanying drawings, which are incorporated in and form a part of this specification, are included for exemplary illustration of the principles of the present invention and not intended to limit the present invention to the particular implementations illustrated therein. The drawings are not to scale unless otherwise specifically indicated.
Reference will now be made in detail to the preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be obvious to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the current invention.
Portions of the detailed description that follows are presented and discussed in terms of a method. Although steps and sequencing thereof are disclosed in figures herein describing the operations of this method, such steps and sequencing are exemplary. Embodiments are well suited to performing various other steps or variations of the steps recited in the flowchart of the figure herein, and in a sequence other than that depicted and described herein. Some portions of the detailed description are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits that can be performed within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer-executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic, optical or quantum signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout, discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining”, “displaying”, “accessing,” “writing,” “including,” “storing,” “transmitting,” “traversing,” “associating,” “identifying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Computing devices typically include at least some form of computer readable media. Computer readable media can be any available media that can be accessed by a computing device. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computing device. Communication media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
Some embodiments may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
The present invention facilitates efficient and effective utilization of multiple graphics processing units or a hybrid graphics processing system. In one embodiment, a dual graphics processing system includes a first graphics processing unit for processing graphics information, a second graphics processing unit for processing graphics information, a component for synchronizing transmission of display component information from the first graphics processing unit and the second graphics processing unit, and a component for controlling switching between the first graphics processing unit and the second graphics processing unit. In one embodiment, blanking intervals are adjusted to facilitate synchronization and simulate a “genlock” without use of conventional genlock hardware. For example, adjustable-timing components of one or more GPUs can be leveraged to adjust (e.g., slide, etc.) the timing alignment of one pixel source to match another pixel source, and then a transition from one source to the other can be made with minimal or no transition artifacts and the independent timing of the pixel sources can be restored after the transition. In an exemplary implementation, the first graphics processing unit is an integrated graphics processing unit and the second graphics processing unit is a discrete graphics processing unit.
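By way of illustration only, the “simulated genlock” described above can be sketched as a small timing computation. The function and parameter names, and the fixed 16 ms frame period, are hypothetical assumptions for this sketch and not an actual driver interface.

```python
def simulated_genlock_transition(phase_a_ms, phase_b_ms, frame_ms=16.0):
    """Return the one-off frame period for pixel source B that makes its
    next vertical blank coincide with pixel source A's, after which B's
    nominal frame period can be restored."""
    # Time remaining until each source's next vertical blank.
    next_a = frame_ms - phase_a_ms
    next_b = frame_ms - phase_b_ms
    # Extend B's current frame by the offset so both blanks land together
    # (the dot clock and other timing parameters are left untouched).
    offset = (next_a - next_b) % frame_ms
    return frame_ms + offset
```

For instance, if source B's raster is 8 ms ahead of source A's, one 24 ms frame on B brings the two vertical blanks into coincidence, after which the transition can be made and B's independent timing restored.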
The components of exemplary computer system 10 cooperatively operate to synchronize a first stream of display information with a second stream of display information during a switch between the streams of display information. Synchronizing component 30 adjusts transmission timing of display component information from the first stream of display signals 11 and/or the second stream of display signals 12 within a synchronization tolerance. In one embodiment, the synchronization tolerance (e.g., within the vertical blanking period of a display, etc.) is one that permits a transition between the graphics streams without introducing artifacts or glitches. Switching component 40 switches communication of signals to display 50 between the first stream of display signals and the second stream of display signals.
It is appreciated that a variety of techniques are available for synchronizing component 30 to synchronize the signals from the plurality of processors. In one embodiment, synchronizing component 30 introduces timing adjustments to the signals. For example, synchronizing component 30 can introduce an adjustment to the duration of a timing factor (e.g., a vertical blanking interval, horizontal blanking interval, etc.) in one and/or both of the signals so that graphics information or a display characteristic is within synchronization tolerance or concurrence in time. In one exemplary implementation, adjusting the timing of the signals from the graphics processing units creates a sliding timing window in which alignment of signals from multiple graphics processing units can be realized, allowing the transition between the graphics processing units to be completed when the outputs of the two graphics processing units are aligned. It is appreciated that embodiments can be readily adapted to implement a variety of adjustments (e.g., delay, increase, quicken, shorten, stall, etc.) to a time duration between the occurrences of various display characteristics (e.g., a frame-based characteristic, a line-based characteristic, various interrupts, a refresh indication, etc.). It is also appreciated that the timing adjustments can be made over multiple characteristics (e.g., over multiple frames, multiple lines, etc.) and can be incremental adjustments. For example, the amount of adjustment in one frame or line duration can be relatively large (e.g., when relatively far out of sync, etc.) compared to the amount of adjustment in another frame or line duration (e.g., when relatively close to sync, etc.).
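The incremental adjustment idea above (a bounded correction per frame, so large misalignments are closed in several steps while small ones are fine-tuned) can be modeled as follows. The per-frame limit and tolerance values are hypothetical example numbers.

```python
def align_incrementally(delta_ms, max_adj_ms=4.0, tol_ms=0.1):
    """Simulate closing a timing misalignment over successive frames,
    adjusting no more than max_adj_ms per frame in either direction.
    Returns the number of frames needed to come within tolerance."""
    frames = 0
    while abs(delta_ms) > tol_ms:
        # Clamp the per-frame correction: a large residual misalignment
        # gets the maximum step, a small one gets a fine-grained step.
        step = max(-max_adj_ms, min(max_adj_ms, delta_ms))
        delta_ms -= step
        frames += 1
    return frames
```

With these example numbers, an 8 ms misalignment is closed over two frames of 4 ms each, while a 3 ms misalignment is closed in a single, smaller step.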
It is also appreciated that a variety of mechanisms can be utilized to facilitate introduction of the timing adjustment. In one embodiment, NVIDIA Display Power Management Saving (NVDPS) is utilized to facilitate timing adjustments. In one exemplary implementation, adjustment of the horizontal blanking interval is made to effect a lower overall timing base (and refresh rate) without requiring the dot clock and other timing parameters to be completely reprogrammed.
The components of exemplary computer system 100 cooperatively operate to arbitrate control of a display between two graphics controllers. First graphics processing unit iGPU 110 processes graphics information. Second graphics processing unit dGPU 120 processes graphics information. Memory 112 and memory 122 store information, including data and instructions for use by first graphics processing unit iGPU 110 and second graphics processing unit dGPU 120. It is appreciated that memory 112 and 122 can include a variety of configurations. For example, memory 112 and memory 122 can each be a single memory or a plurality of memories and can include random access memory, hard drive media, removable media, flash memory, compact disk, etc. Synchronizing detection component 130 detects synchronization differences in the transmission of display component information from the first graphics processing unit iGPU 110 and the second graphics processing unit dGPU 120. Switching component 140 controls switching between synchronized transmission of display component information from the iGPU 110 and the dGPU 120. Switching component 140 forwards display component signals from the first graphics processing unit and the second graphics processing unit in accordance with a graphics processing unit selection indication. In one embodiment, switching component 140 is a multiplexer. LCD 150 displays presentations (e.g., text, video, graphics, etc.). It is appreciated that the display can have a variety of configurations (e.g., integrated, external, etc.).
In one embodiment, synchronizing detection component 130 detects the timing difference between the output signals from the two graphics processing units. In one embodiment, the synchronizing detection component 130 detects an indication of a corresponding video or display characteristic (e.g., a display synchronization pulse, a horizontal blank interrupt, a vertical blank interrupt, an interrupt associated with a refresh, etc.) from multiple graphics processors. The detected indications of corresponding video or display characteristics are utilized to determine the timing difference between the outputs from the graphics processing units. In one embodiment, the detected occurrences of corresponding video or display characteristics are forwarded for utilization in determining the timing difference between the outputs from the graphics processing units. In one exemplary implementation, the detected occurrences of the corresponding video or display characteristics (e.g., blanking interrupts, sync pulses, etc.) are forwarded for utilization by a microcontroller in determining the timing difference between the outputs from the graphics processing units.
In one embodiment, the iGPU 110 includes a microcontroller PMU 111 and the dGPU 120 includes microcontroller PMU 121 that run software and/or firmware that directs adjustments to the timing of a GPU output. In one embodiment, the microcontrollers are on chip and run independently. In one embodiment, a GPU can include a state machine that receives the timing information and directs adjustments to the timing of a GPU output. In one embodiment, the microcontroller receives the notification of the indications and directs storage of absolute times associated with the occurrence of the respective corresponding video or display characteristics. The microcontroller also determines the timing difference or delta between occurrences of the respective corresponding video or display characteristics. In one exemplary implementation, the microcontroller determines the timing difference or delta between occurrences of the respective corresponding video or display characteristics by subtracting the absolute time associated with the occurrence of a video or display characteristic in the signals from a first graphics processor from the absolute time associated with an occurrence of a corresponding video or display characteristic in the signals from a second graphics processor. In one embodiment, the microcontroller operates in accordance with software instructions stored on a computer readable medium.
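The delta computation described above, subtracting absolute times of corresponding vertical blank occurrences, can be sketched as follows. Folding the raw subtraction into plus or minus half a frame period (so the smaller of a speed-up or slow-down correction can be chosen) is an added assumption of this sketch, as are the microsecond units and the 16000 µs frame period.

```python
def vblank_delta_us(t_first_us, t_second_us, frame_us=16000):
    """Signed offset of the second GPU's vertical blank relative to the
    first GPU's, folded into +/- half a frame period."""
    delta = (t_second_us - t_first_us) % frame_us
    if delta > frame_us // 2:
        delta -= frame_us  # closer to the previous blank than to the next
    return delta
```

For example, a second-GPU blank timestamped 8000 µs after the first yields a delta of 8000 µs, while one timestamped 500 µs before the first's next blank yields -500 µs.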
In one embodiment, the timing difference or delta between occurrences of the respective corresponding video or display characteristics is determined by the synchronizing detection component 130. In one exemplary implementation, the difference is determined by a counter that uses the pixel clock as a time base and counts pixel clock pulses. In one exemplary implementation, the count is provided back to a component (e.g., PMU 111, PMU 121, etc.) as a delta between the two signals. The timing difference or delta between occurrences of the respective corresponding video or display characteristics is utilized in determining an adjustment to the stream of display signals. For example, the delta or timing difference information is forwarded to a PMU, and software and/or firmware running on the PMU can direct how much timing adjustment (e.g., delay, etc.) to introduce based upon the received delta or count. In one embodiment, the PMU and synchronizing detection component 130 form a synchronizing component (e.g., synchronizing component 30, etc.).
Conventional attempts at dealing with these artifacts are often very complicated. Conventional LVDS systems do not typically have a readily adaptable genlock feature, in which controllers arrive at synchronized timing by sharing sync pulses and one controller slaves off the other controller's pixel clock. While synchronizing the outputs of multiple graphics processors may be accomplished utilizing a genlock technique that detects a sync signal and alters the timing generators to match, attempting to implement such a feature in an LVDS system can be very complex. Reprogramming timing generators (e.g., dot clock, PLL, etc.) can be an exhaustive effort involving significant overhead and glitches.
While exemplary timing diagram 300 indicates the m and d signals are in sync after time 305 for frames m+5, m+6 and m+7, it is appreciated that the iGPU and dGPU can operate independently and the m and d signals can fall out of sync after the transition at time 305. If the signals do fall out of sync and a transition request back to the iGPU is received, a similar synchronization process of introducing timing adjustments can be implemented to get the signals m and d back in sync for the transition.
With reference back to
In block 510 graphics processing is performed on a first graphics processing unit and results of the graphics processing from the first graphics processing unit are forwarded to a display.
In block 520 graphics processing is performed on a second graphics processing unit.
A synchronizing process is performed in block 530 in which graphics signals from the first graphics processing unit are synchronized within an acceptable tolerance to graphics signals from the second graphics processing unit. In one embodiment, the synchronization includes adjusting a duration between the occurrences of a display characteristic in at least one of said graphics signals. In one exemplary implementation, a duration of a timing factor (e.g., a blanking interval, etc.) is adjusted in at least one of the graphics signals.
At block 540, a graphics processing unit change over process is performed in which results of the graphics processing from the second graphics processing unit are forwarded to the display instead of the results of the graphics processing from the first graphics processing unit.
In block 610, a difference in the timing of a corresponding graphics presentation characteristic is determined. In one embodiment, a timing difference is determined between a corresponding graphics presentation characteristic of the graphics information processed on the first processor and graphics information processed on the second processor. In one exemplary implementation, the timing of occurrences of the corresponding graphics presentation characteristic is determined by a hardware component, and the information is forwarded to another component which utilizes software and/or firmware to direct determination of a delta between the occurrence timings and a corresponding adjustment to the signals. In another exemplary implementation, timing differences are determined by hardware (e.g., a counter, etc.) and the delta information is forwarded to another component which utilizes software and/or firmware to direct a corresponding adjustment to the signals.
In block 620, at least one signal from at least one of the first processor and the second processor is adjusted to synchronize the occurrence of a corresponding graphics presentation characteristic. In one embodiment, signals are adjusted to reduce the timing difference between the corresponding graphics presentation characteristics. It is appreciated that the corresponding graphics presentation characteristic can include a variety of characteristics. For example, the characteristics can include a frame blanking interval, a line blanking interval, etc. It is also appreciated that a variety of adjustments can be made. For example, the adjustments can include delaying or speeding up the occurrence or duration of one or more blanking intervals (e.g., a frame blanking interval, a vertical blanking interval, a line blanking interval, a horizontal blanking interval, etc.). In one exemplary implementation, adjusting the signals can include shortening and/or extending the occurrence or duration of a blanking interval.
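One way such an extension of a blanking interval could be realized, sketched here under assumed numbers, is to convert the required frame-time change into a count of extra blanking lines, leaving the dot clock untouched. The 12.8 µs line period is a hypothetical example value, not a figure from the specification.

```python
import math

def extra_blanking_lines(delta_us, line_us=12.8):
    """Number of whole vertical-blanking lines to add in order to
    lengthen one frame by at least delta_us, given the line period."""
    return math.ceil(delta_us / line_us)
```

With the assumed 12.8 µs line, absorbing an 8 ms misalignment in a single frame would take 625 extra blanking lines; shortening a frame would correspondingly remove lines.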
In block 710, an indication to initiate a switch between graphics processors is received. In one embodiment, the indication to initiate a switch between graphics processors is a request to complete a hybrid transition. In one exemplary implementation, the indication also initiates a power up of the second processor if the second processor is not already powered on. For example, the second GPU's LVDS display output is enabled (even though not yet forwarded to the display).
In block 720, the vertical blank interrupt is hooked on the first GPU and second GPU.
In block 730, the temporal misalignment is quantified. In one embodiment, the temporal misalignment is quantified prior to making any adjustments. In one exemplary implementation, an estimate of how much the timing of the second GPU needs to be altered may be generated and used to determine how much the timing is adjusted in order to quickly achieve synchronization within an acceptable range (e.g., a blanking time, etc.).
In block 740, a timing factor (e.g., the horizontal blanking interval, vertical blanking interval, etc.) on one of the processors is adjusted (e.g., delayed, sped up, etc.). In one embodiment, adjustment of the timing factor impacts (e.g., slows down, etc.) the overall display rate.
In block 750 a vertical blank indication is received and a corresponding absolute time indication is tracked. In one embodiment, software receives an indication of a vertical blank interrupt and directs retrieval and storage of a corresponding absolute time (e.g., from GPU clock, process clock, system timer, etc.). The difference between absolute times is utilized to determine or quantify the temporal misalignment between the vertical blank interrupts.
In block 760, a determination is made that the signals from both GPUs align. In one embodiment, the vertical blank interrupts occur at approximately the same time (e.g., the remaining misalignment falls within an acceptable range so the vertical syncs occur at roughly the same time). In one exemplary implementation, the horizontal blank extensions are reduced enough to allow horizontal sync alignment to be subsequently achieved using a similar process. In one exemplary implementation, timings are restored so that the second GPU does not necessarily match the first.
In block 770, a switch is made between the processors. In one embodiment a MUX is switched completing the transition of LVDS ownership. In one exemplary implementation, the transition to the second GPU is completed during the vertical blanking interval so as to further reduce artifacts.
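The sequence of blocks 730 through 770 can be illustrated with a toy model. The class, the phase bookkeeping, and the tolerance value are simplifying assumptions for illustration, not an actual hardware or driver interface.

```python
class GpuModel:
    """Toy raster model: phase_ms is the elapsed time into the current
    frame, so the next vertical blank occurs at frame_ms - phase_ms."""
    def __init__(self, phase_ms, frame_ms=16.0):
        self.phase_ms = phase_ms
        self.frame_ms = frame_ms

def switch_ownership(first, second, tol_ms=0.5):
    """Blocks 730-770 in miniature: quantify the misalignment, stretch
    the second GPU's frame once, verify alignment, then switch the mux."""
    # Block 730: quantify the temporal misalignment of the vertical blanks.
    delta = (first.phase_ms - second.phase_ms) % first.frame_ms
    # Block 740: extend the second GPU's blanking by delta for one frame.
    second.phase_ms = (second.phase_ms + delta) % first.frame_ms
    # Block 760: determine that the signals from both GPUs now align.
    aligned = abs(first.phase_ms - second.phase_ms) <= tol_ms
    # Block 770: switch the mux only once aligned (ideally inside the blank).
    return "dGPU" if aligned else "iGPU"
```

In this toy model a single stretched frame always closes the gap; a real implementation, as noted above, may spread the correction over several frames when the misalignment exceeds the adjustment ability.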
It is appreciated that present invention systems and methods can include a variety of implementations. For example, present invention systems and methods or processes can include adjustment to a variety of intervals (e.g., a frame blanking interval, vertical blanking interval, a line blanking interval, a horizontal blanking interval, etc.). It is also appreciated that embodiments of the present invention can be implemented in a variety of configurations including hardware, software, firmware and combinations thereof. For example, a computer readable medium (e.g., memory communicatively coupled to system 10, 100, etc.) with instructions embedded therein can be utilized for directing components (e.g., processors in system 10, 100, etc.) participating in present systems, methods or processes (e.g., 500, 600, 700, etc.).
In one exemplary implementation, both GPUs start at approximately 62 Hz, or a 16 ms vertical sync interval. If initially the second GPU's VBI occurs 8 ms after the first GPU's, the second GPU's timings are increased to create a vertical interval of 24 ms (e.g., 16+8), or approximately 41.67 Hz. Then, on the next vertical blank, the two signals should align. If the misalignment exceeds the timing adjustment ability, a smaller adjustment may be utilized so as to achieve alignment after two or more vertical blanks. Additionally, a smaller adjustment may be used (e.g., 23.9 ms) so that as the vertical blank interrupts come very close together, the timing can be further adjusted to optimally bring the horizontal timings into sync. In one embodiment, the time taken to align the displays is dependent on the precision with which the timing can be adjusted and on operating system overhead that may delay processing of the vertical blank interrupts.
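The arithmetic of the example above can be checked directly. An exact 16 ms frame period (62.5 Hz, rather than "approximately 62 Hz") is assumed here for round numbers.

```python
frame_ms = 16.0                       # nominal vertical interval (~62 Hz)
offset_ms = 8.0                       # second GPU's VBI lags by 8 ms
stretched_ms = frame_ms + offset_ms   # one-off 24 ms vertical interval
stretched_hz = 1000.0 / stretched_ms  # ~41.67 Hz during the stretched frame
# After this single stretched frame the two VBIs coincide, and the
# second GPU's timings can be restored to the nominal 16 ms interval.
```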
In one embodiment, hardware is utilized to detect alignment by electrically monitoring the dGPU and mGPU outputs directly. In this situation, a delta D or delta M signal may be produced depending on the direction of the transition. An agent in the GPU such as a microcontroller (e.g., a PMU) can respond to the signal and perform the timing adjustment steps described above without the need for driver or operating system control.
It is appreciated that the present change over systems and methods enable each processor to operate independently when not transitioning. In addition, by synchronizing the signals, the present approach facilitates reduction of possible panel control signal excursion, panel failure and possible damage associated with non-deterministic timing in signals associated with the transition from one processor to another. For example, during a transition interval, the time taken to otherwise re-apply valid timings on the panel interface is affected by driver software response time, which in turn is affected by operating system response time and other activities on the system; without the present invention, the timing could be non-deterministic. Operating systems are often not real time and do not typically have guaranteed latency. Without the present invention, adverse impacts could occur if a system becomes busy in the middle of the transition, exceeding the maximum allowed interval set forth in the panel specification.
In one embodiment, precise coordinated control of internal sequencing on both the integrated and discrete graphics processing units is available and the LCD power enable is kept applied while the LVDS signals are modulated. In one exemplary implementation, the mode on the other GPU is set prior to transition as set forth above.
Thus, the present invention facilitates efficient and effective utilization of multiple processors with a display. Each processor can start LVDS frame timings at a random point in time, and the present processor change over approach facilitates synchronization of the LVDS signals and avoidance of artifacts on the panel. For example, it avoids the artifacts associated with the several frames that could otherwise pass before the panel controller re-syncs to the alternate processor's timing. In addition, main timing characteristics (e.g., dot clock, phase lock loops, etc.) do not need to be disturbed, and conventional panel power sequencing is not needed for transitions between the processors.
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the Claims appended hereto and their equivalents. The listing of steps within method claims does not imply any particular order to performing the steps, unless explicitly stated in the claim.
7129909 | Dong et al. | Oct 2006 | B1 |
7212174 | Johnston et al. | May 2007 | B2 |
7269797 | Bertocci et al. | Sep 2007 | B1 |
7359998 | Chan et al. | Apr 2008 | B2 |
7383412 | Diard | Jun 2008 | B1 |
7450084 | Fuller et al. | Nov 2008 | B2 |
7486279 | Wong et al. | Feb 2009 | B2 |
7509444 | Chiu et al. | Mar 2009 | B2 |
7522167 | Diard et al. | Apr 2009 | B1 |
7552391 | Evans et al. | Jun 2009 | B2 |
7558884 | Fuller et al. | Jul 2009 | B2 |
7612783 | Koduri et al. | Nov 2009 | B2 |
8176155 | Yang et al. | May 2012 | B2 |
8766989 | Wyatt et al. | Jul 2014 | B2 |
20010028366 | Ohki et al. | Oct 2001 | A1 |
20020087225 | Howard | Jul 2002 | A1 |
20020128288 | Kyle et al. | Sep 2002 | A1 |
20020129288 | Loh et al. | Sep 2002 | A1 |
20020140627 | Ohki et al. | Oct 2002 | A1 |
20020163513 | Tsuji | Nov 2002 | A1 |
20020182980 | Van Rompay | Dec 2002 | A1 |
20020186257 | Cadiz et al. | Dec 2002 | A1 |
20030016205 | Kawabata et al. | Jan 2003 | A1 |
20030025689 | Kim | Feb 2003 | A1 |
20030041206 | Dickie | Feb 2003 | A1 |
20030065934 | Angelo et al. | Apr 2003 | A1 |
20030084181 | Wilt | May 2003 | A1 |
20030088800 | Cai | May 2003 | A1 |
20030090508 | Keohane et al. | May 2003 | A1 |
20030122836 | Doyle et al. | Jul 2003 | A1 |
20030126335 | Silvester | Jul 2003 | A1 |
20030188144 | Du et al. | Oct 2003 | A1 |
20030189597 | Anderson et al. | Oct 2003 | A1 |
20030195950 | Huang et al. | Oct 2003 | A1 |
20030197739 | Bauer | Oct 2003 | A1 |
20030200435 | England et al. | Oct 2003 | A1 |
20030222876 | Giemborek et al. | Dec 2003 | A1 |
20040001069 | Snyder et al. | Jan 2004 | A1 |
20040019724 | Singleton, Jr. et al. | Jan 2004 | A1 |
20040027315 | Senda et al. | Feb 2004 | A1 |
20040080482 | Magendanz et al. | Apr 2004 | A1 |
20040085328 | Maruyama et al. | May 2004 | A1 |
20040184523 | Dawson et al. | Sep 2004 | A1 |
20040222978 | Bear et al. | Nov 2004 | A1 |
20040224638 | Fadell et al. | Nov 2004 | A1 |
20040225901 | Bear et al. | Nov 2004 | A1 |
20040225907 | Jain et al. | Nov 2004 | A1 |
20040235532 | Matthews et al. | Nov 2004 | A1 |
20040268004 | Oakley | Dec 2004 | A1 |
20050012749 | Gonzalez et al. | Jan 2005 | A1 |
20050025071 | Miyake et al. | Feb 2005 | A1 |
20050052446 | Plut | Mar 2005 | A1 |
20050059346 | Gupta et al. | Mar 2005 | A1 |
20050064911 | Chen et al. | Mar 2005 | A1 |
20050066209 | Kee et al. | Mar 2005 | A1 |
20050073515 | Kee et al. | Apr 2005 | A1 |
20050076088 | Kee et al. | Apr 2005 | A1 |
20050076256 | Fleck et al. | Apr 2005 | A1 |
20050097506 | Heumesser | May 2005 | A1 |
20050140566 | Kim et al. | Jun 2005 | A1 |
20050182980 | Sutardja | Aug 2005 | A1 |
20050240538 | Ranganathan | Oct 2005 | A1 |
20050262302 | Fuller et al. | Nov 2005 | A1 |
20060001595 | Aoki | Jan 2006 | A1 |
20060007051 | Bear et al. | Jan 2006 | A1 |
20060010261 | Bonola et al. | Jan 2006 | A1 |
20060085760 | Anderson et al. | Apr 2006 | A1 |
20060095617 | Hung | May 2006 | A1 |
20060119537 | Vong et al. | Jun 2006 | A1 |
20060119538 | Vong et al. | Jun 2006 | A1 |
20060119602 | Fisher et al. | Jun 2006 | A1 |
20060125784 | Jang et al. | Jun 2006 | A1 |
20060129855 | Rhoten et al. | Jun 2006 | A1 |
20060130075 | Rhoten et al. | Jun 2006 | A1 |
20060150230 | Chung et al. | Jul 2006 | A1 |
20060164324 | Polivy et al. | Jul 2006 | A1 |
20060200751 | Underwood et al. | Sep 2006 | A1 |
20060232494 | Lund et al. | Oct 2006 | A1 |
20060250320 | Fuller et al. | Nov 2006 | A1 |
20060267857 | Zhang et al. | Nov 2006 | A1 |
20060267987 | Litchmanov | Nov 2006 | A1 |
20060267992 | Kelley et al. | Nov 2006 | A1 |
20060282855 | Margulis | Dec 2006 | A1 |
20070046562 | Polivy et al. | Mar 2007 | A1 |
20070052615 | Van Dongen et al. | Mar 2007 | A1 |
20070067655 | Shuster | Mar 2007 | A1 |
20070079030 | Okuley et al. | Apr 2007 | A1 |
20070083785 | Sutardja | Apr 2007 | A1 |
20070091098 | Zhang et al. | Apr 2007 | A1 |
20070103383 | Sposato et al. | May 2007 | A1 |
20070129990 | Tzruya et al. | Jun 2007 | A1 |
20070153007 | Booth et al. | Jul 2007 | A1 |
20070195007 | Bear et al. | Aug 2007 | A1 |
20070273699 | Sasaki et al. | Nov 2007 | A1 |
20080130543 | Singh et al. | Jun 2008 | A1 |
20080155478 | Stross | Jun 2008 | A1 |
20080158233 | Shah et al. | Jul 2008 | A1 |
20080172626 | Wu | Jul 2008 | A1 |
20080297433 | Heller et al. | Dec 2008 | A1 |
20080320321 | Sutardja | Dec 2008 | A1 |
20090021450 | Heller et al. | Jan 2009 | A1 |
20090031329 | Kim | Jan 2009 | A1 |
20090059496 | Lee | Mar 2009 | A1 |
20090109159 | Tsai | Apr 2009 | A1 |
20090153540 | Blinzer et al. | Jun 2009 | A1 |
20090160865 | Grossman | Jun 2009 | A1 |
20090172450 | Wong et al. | Jul 2009 | A1 |
20090193243 | Ely | Jul 2009 | A1 |
20100010653 | Bear et al. | Jan 2010 | A1 |
20100033433 | Utz et al. | Feb 2010 | A1 |
20100033916 | Douglas et al. | Feb 2010 | A1 |
20100085280 | Lambert et al. | Apr 2010 | A1 |
20110102446 | Oterhals et al. | May 2011 | A1 |
20110141133 | Sankuratri et al. | Jun 2011 | A1 |
20120108330 | Dietrich, Jr. et al. | May 2012 | A1 |
20120162238 | Fleck et al. | Jun 2012 | A1 |
20120268480 | Cooksey et al. | Oct 2012 | A1 |
20140168229 | Ungureanu et al. | Jun 2014 | A1 |
20140184611 | Wyatt et al. | Jul 2014 | A1 |
20140184629 | Wyatt et al. | Jul 2014 | A1 |
Number | Date | Country |
---|---|---|
2005026918 | Mar 2005 | WO |
Entry |
---|
“Epson; EMP Monitor V4.10 Operation Guide”, by Seiko Epson Corp., 2006 http://support.epson.ru/products/manuals/100396/Manual/EMPMonitor.pdf. |
“Virtual Network Computing”, http://en.wikipedia.org/wiki/Vnc, Downloaded Circa: Dec. 18, 2008, pp. 1-4. |
Andrew Fuller; “Auxiliary Display Platform in Longhorn”; Microsoft Corporation; The Microsoft Hardware Engineering Conference Apr. 25-27, 2005; slides 1-29. |
McFedries, ebook, titled “Complete Idiot's Guide to Windows XP”, published Oct. 3, 2001, pp. 1-7. |
PCWorld.com, “Microsoft Pitches Display for Laptop Lids” dated Feb. 10, 2005, pp. 1-2, downloaded from the Internet on Mar. 8, 2006 from http://www.pcworld.com/resources/article/aid/119644.asp. |
Vulcan, Inc., “Product Features: Size and performance”, p. 1; downloaded from the internet on Sep. 20, 2005 from http://www.flipstartpc.com/aboutproduct_features_sizeandpower.asp. |
Vulcan, Inc., “Product Features: LID Module”, p. 1, downloaded from the Internet on Sep. 19, 2005 from http://www.flipstartpc.com/aboutproduct_features_lidmodule.asp. |
Vulcan, Inc., “Software FAQ”, p. 1, downloaded from the internet on Sep. 20, 2005 from http://www.flipstartpc.com/faq_software.asp. |
“System Management Bus (SMBus) Specification,” Version 2.0, Aug. 3, 2000; pp. 1-59. |
Handtops.com, “FlipStart PC in Detail” pp. 1-4, downloaded from the internet on Sep. 20, 2005 from http://www.handtops.com/show/news/5. |
Microsoft Corporation, “Microsoft Windows Hardware Showcase”, dated Apr. 28, 2005; pp. 1-5; downloaded from the internet on Sep. 15, 2005, from http://www.microsoft.com/whdc/winhec/hwshowcase05.mspx. |
Paul Thurrott's SuperSite for Windows, “WinHEC 2004 Longhorn Prototypes Gallery”, dated May 10, 2004, pp. 1-4, downloaded from the internet on Sep. 15, 2005 from http://www.winsupersite.com/showcase/loghorn_winhec_proto.asp. |
“The Java Tutorial: How to Use Combo Boxes”, Archived Mar. 5, 2006 by archive.org, Downloaded Jun. 30, 2011, http://web.archive.org/web/20050305000852/http://www-mips.unice.fr/Doc/Java/Tutorial/uiswing/components/combobox.html. |
Vulcan Inc., “Connectivity FAQ”, p. 1, downloaded from the internet on Sep. 20, 2005 from http://www.flipstartpc.com/faq_connectivity.asp. |
“Usage: NVIDIA GeForce 6800—PCIe x16”, Dell, archived Jan. 15, 2006 by archive.org, Downloaded Jun. 29, 2011, http://web.archive.org/web/20060115050119/http://support.dell.com/support/edocs/video/P82192/en/usage.html. |
“Graphics: Intel® 82852/82855 Graphics Controller Family”, Intel, Archived Nov. 2, 2006 by archive.org, Downloaded Jun. 30, 2011, http://web.archive.org/web/20061103045644/http://www.intel.com/support/graphics/intel852gm/sb/CS-009064.html. |
Texas Instruments, “TMS320VC5501/5502 DSP Direct Memory Access (DMA) Controller Reference Guide”, Sections 1, 2, 4, 11, and 12; Literature No. SPRU613G, Mar. 2005. |
Number | Date | Country | |
---|---|---|---|
20100315427 A1 | Dec 2010 | US |