This description relates to techniques for adjusting the rendering of content based upon environmental conditions.
With the increased use of electronically presented content for conveying information, more electronic displays are being incorporated into objects (e.g., vehicle dashboards, entertainment systems, cellular telephones, eReaders, etc.) or produced for standalone use (e.g., televisions, computer displays, etc.). With such a variety of uses, electronic displays may be found in nearly every geographical location, in stationary applications (e.g., presenting imagery in homes, offices, etc.), mobile applications (e.g., presenting imagery in cars, airplanes, etc.), etc. Further, such displays may be used for presenting various types of content such as still imagery, textual content (e.g., electronic mail (email), documents, web pages, electronic books (ebooks), magazines), and video, along with other types of content such as audio.
The systems and techniques described here relate to appropriately adjusting the rendering of content based upon environmental conditions and/or potentially other types of data to dynamically provide a reasonably consistent viewing experience to a viewer.
In one aspect, a computing device-implemented method includes receiving information representative of one or more environmental conditions. The method also includes determining one or more adjustments for rendering content on one or more electronic displays based upon the received information representative of the one or more environmental conditions, and adjusting the rendering of the content for being presented on at least one display based upon the received information representing the one or more environmental conditions.
Implementations may include one or more of the following features. Adjusting the rendering of the content may include adjusting one or more rendering parameters. The content for being presented on the at least one display may include graphics, text, or other types of individual content or combinations of content. At least one of the environmental conditions may be collected from a sensor or multiple sensors that employ one or more sensing techniques. At least one of the environmental conditions may be user-provided or provided from another source or a combination of sources. At least one of the environmental conditions may represent ambient light or another type of condition or multiple conditions. At least one of the environmental conditions may represent an artificial light source or other type of source. The artificial light source may be a computing device or other type of device. The one or more electronic displays may include a printer.
In another aspect, a system includes a computing device that includes a memory configured to store instructions. The computing device also includes a processor to execute the instructions to perform a method that includes receiving information representative of one or more environmental conditions. The method also includes determining one or more adjustments for rendering content on one or more electronic displays based upon the received information representative of the one or more environmental conditions, and adjusting the rendering of the content for being presented on at least one display based upon the received information representing the one or more environmental conditions.
Implementations may include one or more of the following features. Adjusting the rendering of the content may include adjusting one or more rendering parameters. The content for being presented on the at least one display may include graphics, text, or other types of individual content or combinations of content. At least one of the environmental conditions may be collected from a sensor or multiple sensors that employ one or more sensing techniques. At least one of the environmental conditions may be user-provided or provided from another source or a combination of sources. At least one of the environmental conditions may represent ambient light or another type of condition or multiple conditions. At least one of the environmental conditions may represent an artificial light source or other type of source. The artificial light source may be a computing device or other type of device. The one or more electronic displays may include a printer.
In another aspect, one or more computer readable media store instructions that are executable by a processing device and that, upon such execution, cause the processing device to perform operations that include receiving information representative of one or more environmental conditions. Operations also include determining one or more adjustments for rendering content on one or more electronic displays based upon the received information representative of the one or more environmental conditions, and adjusting the rendering of the content for being presented on at least one display based upon the received information representing the one or more environmental conditions.
Implementations may include one or more of the following features. Adjusting the rendering of the content may include adjusting one or more rendering parameters. The content for being presented on the at least one display may include graphics, text, or other types of individual content or combinations of content. At least one of the environmental conditions may be collected from a sensor or multiple sensors that employ one or more sensing techniques. At least one of the environmental conditions may be user-provided or provided from another source or a combination of sources. At least one of the environmental conditions may represent ambient light or another type of condition or multiple conditions. At least one of the environmental conditions may represent an artificial light source or other type of source. The artificial light source may be a computing device or other type of device. The one or more electronic displays may include a printer.
These and other aspects and features and various combinations of them may be expressed as methods, apparatus, systems, means for performing functions, program products, and in other ways.
Other features and advantages will be apparent from the description and the claims.
Referring to FIG. 2, an electronic display 202 is incorporated into the dashboard of a vehicle 200 for presenting content to the vehicle's occupants.
To sense environmental conditions that may affect the presentation of content, one or more techniques and methodologies may be implemented. For example, one or more types of sensing techniques may be used for collecting information reflective of the environmental conditions experienced by electronic displays; passive and active sensor technology may be utilized to collect such information. In this illustrated example, a sensor 206 (e.g., a light sensor) is embedded into the dashboard of the vehicle 200 at a location that is relatively proximate to the electronic display 202. In some arrangements, one or more such sensors may be located closer to or farther from the electronic display. Sensors may also be included in the electronic display itself; for example, one or more light sensors may be incorporated such that their sensing surfaces are substantially flush with the surface of the electronic display. Sensors and/or arrays of sensors may be mounted throughout the vehicle 200 for collecting such information (e.g., sensing devices, sensing material, etc. may be embedded into windows of the vehicle, mounted onto various internal and external surfaces of the vehicle, etc.). Sensing functionality may also be provided by other devices, for example, devices that include sensors not incorporated into the vehicle. For example, the sensing capability of computing devices (e.g., a cellular telephone 208) may be exploited for collecting environmental condition information. Once the information is collected, the computing device may provide it for assessing the environmental conditions (e.g., incident ambient light) being experienced by the electronic display. In the illustrated example, the cellular telephone 208 may collect and provide environmental condition information for assessing the current conditions being experienced by the electronic display 202. Various types of technology may be used to provide this information; for example, one or more wireless links (e.g., radio frequency, light emissions, etc.) may be established and protocols (e.g., Bluetooth, etc.) used to transfer the collected information.
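As a non-limiting sketch, a handheld device might sample an ambient light sensor and relay readings to a display-side rendering engine as follows. The read_ambient_lux() and send_reading() functions are hypothetical stand-ins (simulated here), not the sensor or Bluetooth API of any particular platform.

```python
import json
import random
import time

def read_ambient_lux() -> float:
    # Stand-in for a platform light-sensor API; here we simulate a
    # daylight-range reading in lux.
    return random.uniform(100.0, 20000.0)

def send_reading(payload: bytes) -> None:
    # Stand-in for a wireless transport (e.g., a Bluetooth serial channel);
    # here we simply print the payload that would be transmitted.
    print(payload.decode("utf-8"))

def relay_light_samples(period_s: float = 0.5, count: int = 3) -> None:
    # Periodically sample ambient light and forward it to the display side.
    for _ in range(count):
        sample = {"sensor": "ambient_light",
                  "lux": round(read_ambient_lux(), 1),
                  "timestamp": time.time()}
        send_reading(json.dumps(sample).encode("utf-8"))
        time.sleep(period_s)

if __name__ == "__main__":
    relay_light_samples()
```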
Along with natural conditions (e.g., ambient light, etc.), environmental conditions may also include other types of information. For example, information associated with one or more viewers of the electronic display may be collected and used for presenting content. Viewer-related information may be collected, for example, from the viewer or from information sources associated with the viewer. With reference to the illustrated vehicle 200, information may be collected for estimating the perspective from which the viewer sees the electronic display 202. For example, information may be provided based upon actions of the viewer (e.g., the position of a car seat 210 used by the viewer, any adjustments to the position of the seat as controlled by the viewer, etc.). In some arrangements, multiple viewers (e.g., present in the vehicle 200) may be monitored and one or more displays may be adjusted (e.g., adjusting the content rendering on the respective display being viewed). For example, a heads-up display may be adjusted for the driver of a vehicle while a display incorporated into the rear of the driver's seat may be adjusted for a backseat viewer. Viewer activity may also be considered an environmental condition that can be monitored and provide a trigger event for adjusting the rendering of content on one or more displays. Such activities may be associated with controlling conditions internal or external to the vehicle 200 (e.g., weather conditions, time of day, season of year, etc.). For example, lighting conditions within the cabin of the vehicle 200 (e.g., turning on one or more lights, raising or lowering the roof of a convertible vehicle, etc.) may be controlled by the viewer and used to represent the environmental conditions. In some arrangements, viewer activities may also include relatively simple viewer movements. For example, the eyes of a viewer (e.g., the driver of a vehicle) may be tracked (e.g., by a visual eye tracking system incorporated into the dashboard of a vehicle) and corresponding adjustments made to the rendering of display content (e.g., adjusting content rendering during time periods when the driver is focused on the display).
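The gaze-gated triggering described above might be sketched as follows; the GazeSample structure, the display region coordinates, and the is_gaze_on_display() test are illustrative assumptions rather than the interface of an actual eye tracking system.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float  # normalized horizontal gaze coordinate, 0 (left) to 1 (right)
    y: float  # normalized vertical gaze coordinate, 0 (top) to 1 (bottom)

# Assumed region of the scene occupied by the electronic display.
DISPLAY_REGION = (0.6, 0.8, 0.2, 0.4)  # x_min, x_max, y_min, y_max

def is_gaze_on_display(sample: GazeSample) -> bool:
    x_min, x_max, y_min, y_max = DISPLAY_REGION
    return x_min <= sample.x <= x_max and y_min <= sample.y <= y_max

def maybe_adjust(sample: GazeSample, adjust) -> bool:
    # Only execute rendering adjustments while the viewer's gaze
    # actually rests on the display.
    if is_gaze_on_display(sample):
        adjust()
        return True
    return False

print(maybe_adjust(GazeSample(0.7, 0.3), lambda: print("adjusting rendering")))
```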
Other information associated with one or more viewers of the electronic display may also be collected. For example, characteristics of each viewer (e.g., height, gender, location in a vehicle, one or more quantities representing their eyesight, etc.) may be collected, along with information that further describes the viewer's vision (e.g., the viewer wears prescription glasses, contacts, or sunglasses, has one or more medical conditions, etc.). Viewer characteristics may also be passively collected from the viewer, as compared to being actively provided by the viewer. For example, a facial recognition system (e.g., incorporated into the vehicle, a device residing within the vehicle, etc.) may be used to detect the face of one or more viewers (e.g., the driver of the vehicle). The facial expression of the viewer may also be identified by the system and corresponding action taken (e.g., if the viewer's eyes are squinted or if an angry facial expression is detected, the rendering of the content presented on the electronic display is appropriately adjusted). One or more feedback techniques may be implemented to adjust content rendering based upon, for example, viewer reaction to previous adjustments (e.g., the facial expression of an angry viewer changes to indicate pleasure, more intense anger, etc.). Other types of information may also be collected from the viewer; for example, audio signals such as speech may be collected (e.g., from one or more audio sensors) and used to determine whether content rendering should be adjusted to assist the viewer. Other types of audio content may also be collected; for example, audio signals may be collected from other passengers in the vehicle to determine whether rendering should be adjusted (e.g., if many passengers are talking in the vehicle, the content rendering may be adjusted to ease the driver's ability to read the content). Audio content may also be collected external to the vehicle to provide a measure of the vehicle's environment (e.g., a busy urban setting, a relatively quiet rural location, etc.). Position information provided from one or more systems (e.g., a global positioning system (GPS)) present within the vehicle and/or located external to the vehicle may also be used to provide information regarding environmental conditions (e.g., the position of the vehicle) and to determine whether content rendering should be adjusted. In this particular example, a content rendering engine 212 is included within the dashboard of the vehicle 200 and processes the provided environmental information, adjusting the presented content as needed. One or more computing devices incorporated into the vehicle 200 may provide a portion of the functionality of the content rendering engine 212. Computing devices separate from the vehicle may also be used to provide the functionality; for example, one or more computing devices external to the vehicle (e.g., one or more remotely located servers) may be used in isolation or in concert with the computational capability included in the vehicle. One or more devices present within the vehicle (e.g., the cellular telephone 208) may also be utilized for providing the functionality of the content rendering engine 212.
Environmental conditions may also include other types of detected information, such as information associated with the platform within which content is being displayed. For example, similar to detecting changes in sunlight while the vehicle is being driven, objects such as traffic signs, construction site warning lights, store fronts, etc. may be detected (e.g., by one or more image collecting devices incorporated into the exterior or interior of a vehicle) and representations prepared for presenting to occupants of the vehicle (e.g., the driver). Based upon the identified content, the rendering of the corresponding representations may be adjusted, for example, to quickly grab the attention of the vehicle driver (e.g., to warn that the vehicle is approaching a construction site, a potential or impending accident with another car, etc.). In some arrangements, input provided by an occupant (e.g., indicating an interest in finding a particular restaurant, style of restaurant, etc.) may be used to signify when rendering adjustments should be executed (e.g., when a Chinese restaurant is detected by the vehicle cameras, rendering is adjusted to alert the driver to the nearby restaurant).
Referring to FIG. 4, a computer system 400 includes a content rendering engine 402 for determining and executing adjustments to presented content.
One or more techniques and methodologies may be used by the content rendering engine 402 to adjust the presentation of content. For example, the content to be presented may be adjusted to improve its legibility based upon the provided environmental conditions. Adjustments may include changes to the rendering of the content being presented. For example, for textual content, the brightness of the text may be controlled. Similarly, the contrast between brighter and dimmer portions of the text may be adjusted to improve legibility. Linear and nonlinear operations associated with coding and decoding values such as luminance values (e.g., gamma correction) may similarly be adjusted for textual content. Pixel geometry and geometrical shapes associated with text (e.g., line thickness, font type, etc.), along with visual characteristics (e.g., text color, shadowing, shading, font hinting, etc.), may be adjusted by the content rendering engine 402.
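By way of a non-limiting sketch, such text-rendering adjustments might be applied to normalized (0 to 1) pixel values as shown below; the brightness, contrast, and gamma parameter values are illustrative assumptions rather than values prescribed by the techniques described here.

```python
# A minimal sketch of per-pixel text-rendering adjustments: brightness
# scaling, contrast stretching about mid-gray, and gamma encoding.

def adjust_channel(value: float, brightness: float = 1.0,
                   contrast: float = 1.0, gamma: float = 2.2) -> float:
    v = value * brightness            # brightness: linear scaling
    v = (v - 0.5) * contrast + 0.5    # contrast: stretch about mid-gray
    v = max(0.0, min(1.0, v))         # clamp to the displayable range
    return v ** (1.0 / gamma)         # gamma correction for the display

# Example: brighten and add contrast to anti-aliased text pixels when
# high ambient light is reported.
pixels = [0.12, 0.35, 0.58, 0.80]
print([round(adjust_channel(p, brightness=1.3, contrast=1.2), 3) for p in pixels])
```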
The techniques and methodologies for adjusting content presentation may also include adjusting parameters of the one or more electronic displays being used to present the content. For example, lighting parameters of a display (e.g., foreground lighting levels, back lighting levels, etc.), resolution of the display, the number of bits used to represent the color of a pixel (e.g., color depth), colors associated with the display (e.g., color maps), and other parameters may be changed for adjusting the presented content.
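A display-side parameter adjustment of this kind might, as one sketch, map a reported ambient light level onto a backlight level. The logarithmic mapping and its endpoints below are assumptions (human brightness perception is roughly logarithmic, so a log-shaped curve is a common starting point), not values taken from the description above.

```python
import math

def backlight_from_lux(lux: float, min_level: float = 0.05,
                       max_level: float = 1.0) -> float:
    # Map ~1 lux (dark cabin) through ~30000 lux (direct sun) onto the
    # backlight range on a logarithmic scale.
    lux = max(1.0, min(30000.0, lux))
    fraction = math.log10(lux) / math.log10(30000.0)
    return min_level + fraction * (max_level - min_level)

print(round(backlight_from_lux(50.0), 2))     # dim interior
print(round(backlight_from_lux(20000.0), 2))  # bright daylight
```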
One or more operations and algorithms may be implemented to identify appropriate adjustments for content presentation. For example, based upon one or more of the provided environmental conditions and the content (e.g., text) to be presented, one or more substantially optimal rendering parameters, along with appropriate values, may be identified by the content rendering engine 402. Once identified, the parameters may be used by the computer system 400, provided to one or more other computing devices, etc., for adjusting the content for presentation on one or more electronic displays. One or more techniques may be utilized to trigger the determination of the presentation adjustments; for example, one or more detected events (e.g., a user input selection, etc.) may be defined to initiate the operations of the content rendering engine 402. Adjustments may also be determined and acted upon in a predefined manner. For example, adjustments may be determined and executed in a periodic manner (e.g., every second or fraction of a second) so that a viewer (or viewers) is given the impression that environmental conditions are periodically sampled and adjustments are regularly executed. In some arrangements, the frequency of the executed adjustments may be increased such that the viewer or viewers perceive the adjustments as occurring nearly in real time. Adjustments may also be executed during one or more particular time periods, for example, in a piecewise manner. For example, adjustments may be executed more frequently during time periods when the experienced environmental conditions are more troublesome (e.g., lower incident angles of the sun during the summer) and less frequently during time periods when troublesome environmental conditions are generally not experienced (e.g., periods of less glare).
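One possible sketch of these triggering policies appears below: adjustments are executed on a periodic schedule whose sampling interval shortens while troublesome conditions (e.g., heavy glare) are reported and lengthens otherwise. The glare threshold and interval values are illustrative assumptions.

```python
import time

def adjustment_loop(read_conditions, apply_adjustments,
                    calm_interval_s: float = 1.0,
                    glare_interval_s: float = 0.25,
                    iterations: int = 4) -> None:
    # Periodically sample conditions and adjust; sample more often while
    # glare makes legibility volatile.
    for _ in range(iterations):
        conditions = read_conditions()
        apply_adjustments(conditions)
        glare = conditions.get("lux", 0.0) > 10000.0
        time.sleep(glare_interval_s if glare else calm_interval_s)

# Demonstration with stand-in callables.
adjustment_loop(lambda: {"lux": 12000.0},
                lambda c: print("adjusting for", c),
                iterations=2)
```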
Referring to FIG. 5, a flowchart 500 represents some of the operations of a content rendering engine (e.g., the content rendering engine 402 of FIG. 4).
Operations may include receiving 502 information (e.g., data) representative of one or more environmental conditions. For example, the ambient light level incident upon one or more electronic displays, the position and viewing angle of one or more viewers, etc. may be received by a content rendering engine. Operations may also include determining 504 one or more adjustments for rendering content on one or more electronic displays based upon the received information representative of the one or more environmental conditions. For example, brightness, sharpness, contrast, font type, style, line width, etc. may be identified and adjusted for rendering the content (e.g., text). Operations may also include adjusting 506 the rendering of the content for presentation on the one or more electronic displays. In some arrangements, the operations may be executed over a relatively short period of time and in a repetitive manner such that rendering adjustments may be executed nearly in real time.
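A minimal skeleton mirroring these three operations, receiving 502, determining 504, and adjusting 506, might look as follows; the condition fields, thresholds, and parameter names are assumptions for illustration.

```python
# Sketch of the flowchart's operations; values are illustrative stand-ins.

def receive_conditions() -> dict:
    # Operation 502: receive environmental information (stand-in values).
    return {"lux": 15000.0, "viewing_angle_deg": 20.0}

def determine_adjustments(conditions: dict) -> dict:
    # Operation 504: determine rendering adjustments from the conditions.
    bright = conditions["lux"] > 10000.0
    return {"brightness": 1.3 if bright else 1.0,
            "contrast": 1.2 if bright else 1.0,
            "line_width_px": 2 if bright else 1}

def adjust_rendering(adjustments: dict) -> None:
    # Operation 506: hand the parameters to the renderer (stand-in).
    print("rendering with", adjustments)

adjust_rendering(determine_adjustments(receive_conditions()))
```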
Computing device 600 includes processor 602, memory 604, storage device 606, high-speed controller 608 connecting to memory 604 and high-speed expansion ports 610, and low-speed controller 612 connecting to low-speed expansion port 614 and storage device 606. Each of components 602, 604, 606, 608, 610, and 612 is interconnected using various buses, and can be mounted on a common motherboard or in other manners as appropriate. Processor 602 can process instructions for execution within computing device 600, including instructions stored in memory 604 or on storage device 606 to display graphical data for a GUI on an external input/output device, including, e.g., display 616 coupled to high-speed controller 608. In other implementations, multiple processors and/or multiple buses can be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 600 can be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
Memory 604 stores data within computing device 600. In one implementation, memory 604 is a volatile memory unit or units. In another implementation, memory 604 is a non-volatile memory unit or units. Memory 604 also can be another form of computer-readable medium, including, e.g., a magnetic or optical disk.
Storage device 606 is capable of providing mass storage for computing device 600. In one implementation, storage device 606 can be or contain a computer-readable medium, including, e.g., a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in a data carrier. The computer program product also can contain instructions that, when executed, perform one or more methods, including, e.g., those described above. The data carrier is a computer- or machine-readable medium, including, e.g., memory 604, storage device 606, memory on processor 602, and the like.
High-speed controller 608 manages bandwidth-intensive operations for computing device 600, while low speed controller 612 manages lower bandwidth-intensive operations. Such allocation of functions is an example only. In one implementation, high-speed controller 608 is coupled to memory 604, display 616 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 610, which can accept various expansion cards (not shown). In the implementation, low-speed controller 612 is coupled to storage device 606 and low-speed expansion port 614. The low-speed expansion port, which can include various communication ports (e.g., USB, BLUETOOTH®, Ethernet, wireless Ethernet), can be coupled to one or more input/output devices, including, e.g., a keyboard, a pointing device, a scanner, or a networking device including, e.g., a switch or router, e.g., through a network adapter.
Computing device 600 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as standard server 620, or multiple times in a group of such servers. It also can be implemented as part of rack server system 624. In addition or as an alternative, it can be implemented in a personal computer including, e.g., laptop computer 622. In some examples, components from computing device 600 can be combined with other components in a mobile device (not shown), including, e.g., device 650. Each of such devices can contain one or more of computing device 600, 650, and an entire system can be made up of multiple computing devices 600, 650 communicating with each other.
Computing device 650 includes processor 652, memory 664, an input/output device including, e.g., display 654, communication interface 666, and transceiver 668, among other components. Device 650 also can be provided with a storage device, including, e.g., a microdrive or other device, to provide additional storage. Each of components 650, 652, 664, 654, 666, and 668 is interconnected using various buses, and several of the components can be mounted on a common motherboard or in other manners as appropriate.
Processor 652 can execute instructions within computing device 650, including instructions stored in memory 664. The processor can be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor can provide, for example, for coordination of the other components of device 650, including, e.g., control of user interfaces, applications run by device 650, and wireless communication by device 650.
Processor 652 can communicate with a user through control interface 658 and display interface 656 coupled to display 654. Display 654 can be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. Display interface 656 can comprise appropriate circuitry for driving display 654 to present graphical and other data to a user. Control interface 658 can receive commands from a user and convert them for submission to processor 652. In addition, external interface 662 can communicate with processor 652, so as to enable near area communication of device 650 with other devices. External interface 662 can provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces also can be used.
Memory 664 stores data within computing device 650. Memory 664 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 674 also can be provided and connected to device 650 through expansion interface 672, which can include, for example, a SIMM (Single In-Line Memory Module) card interface. Such expansion memory 674 can provide extra storage space for device 650, or also can store applications or other data for device 650. Specifically, expansion memory 674 can include instructions to carry out or supplement the processes described above, and can include secure data also. Thus, for example, expansion memory 674 can be provided as a security module for device 650, and can be programmed with instructions that permit secure use of device 650. In addition, secure applications can be provided through the SIMM cards, along with additional data, including, e.g., placing identifying data on the SIMM card in a non-hackable manner.
The memory can include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in a data carrier. The computer program product contains instructions that, when executed, perform one or more methods, including, e.g., those described above. The data carrier is a computer- or machine-readable medium, including, e.g., memory 664, expansion memory 674, and/or memory on processor 652, which can be received, for example, over transceiver 668 or external interface 662.
Device 650 can communicate wirelessly through communication interface 666, which can include digital signal processing circuitry where necessary. Communication interface 666 can provide for communications under various modes or protocols, including, e.g., GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication can occur, for example, through radio-frequency transceiver 668. In addition, short-range communication can occur, including, e.g., using a BLUETOOTH®, WIFI, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 670 can provide additional navigation- and location-related wireless data to device 650, which can be used as appropriate by applications running on device 650.
Device 650 also can communicate audibly using audio codec 660, which can receive spoken data from a user and convert it to usable digital data. Audio codec 660 can likewise generate audible sound for a user, including, e.g., through a speaker, e.g., in a handset of device 650. Such sound can include sound from voice telephone calls, can include recorded sound (e.g., voice messages, music files, and the like) and also can include sound generated by applications operating on device 650.
Computing device 650 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as cellular telephone 680. It also can be implemented as part of smartphone 682, personal digital assistant, or other similar mobile device.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms machine-readable medium and computer-readable medium refer to a computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying data to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or a combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In some implementations, the engines described herein can be separated, combined or incorporated into a single or combined engine. The engines depicted in the figures are not intended to limit the systems described here to the software architectures shown in the figures.
A number of embodiments have been described. Nevertheless, it will be understood that various modifications can be made without departing from the spirit and scope of the processes and techniques described herein. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps can be provided, or steps can be eliminated, from the described flows, and other components can be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims.