Some touch-sensitive displays may recognize gestures that are at least partially performed outside of an area in which graphical content is displayed. For example, aspects of the graphical content may be affected by a gesture that starts and/or ends outside of an active area of the display. To facilitate gesture detection outside of the active area, the touch-sensitive region of the display may be expanded by extending a touch sensor beyond the active area. This expansion, however, constrains the mechanical and industrial design of the display, for example by significantly increasing the size of a bezel and/or cover glass of the display. These issues are exacerbated in arrays of multiple touch-sensitive displays, as the expansion of touch sensing outside of the active display area of the overall array increases the amount by which adjacent individual active display areas are separated by non-active display areas (e.g., bezels).
Embodiments are disclosed that relate to electrostatic communication among displays. For example, one disclosed embodiment provides a multi-touch display comprising a display stack having a display surface and one or more side surfaces bounding the display surface, a touch sensing layer comprising a plurality of transmit electrodes positioned opposite a plurality of receive electrodes, the touch sensing layer spanning the display surface and bending to extend along at least a portion of the one or more side surfaces of the display, and a controller configured to suppress driving the plurality of transmit electrodes of the touch sensing layer for an interval, and during that interval, receive configuration information from a transmit electrode of a touch sensing layer in a side surface of an adjacent display.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
As described above, some touch-sensitive displays may recognize gestures that are at least partially performed outside of an area in which graphical content is displayed, referred to herein as an “active display area”. A gesture that starts and/or ends outside of the active display area may prompt the display of an element of a graphical user interface (GUI), for example. To facilitate gesture detection and general touch sensing outside of the active display area, the touch-sensitive region of the display may be expanded by extending a touch sensor beyond the active display area. Such expansion, however, constrains the mechanical and industrial design of the display, for example by significantly increasing the size of a bezel of the display housing the extended touch sensor. A similarly problematic increase in the size of components may occur in displays that do not include a bezel—for example, the size of a black mask positioned along the border of such a display and configured to reduce the perceptibility of routing, pads, fiducials, etc. may increase as a touch sensor is expanded beyond the active display area. In both cases, the display design is constrained and the material cost of a substrate (e.g., glass) increased due to touch sensor expansion. These issues are exacerbated when attempting to form an array of multiple touch-sensitive displays, as the expansion of the touch sensors in each display increases the amount by which adjacent active display areas are separated by non-active display areas (e.g., bezels), interrupting the visual continuity of the array and degrading the user experience.
Accordingly, implementations are disclosed herein that relate to electrostatic communication among displays. This may allow rapid, ad-hoc formation of a display array and generation of appropriate portions of graphical content for each display. Moreover, data used to calibrate display output in response to touch input for one display in the display array may be communicated to other displays in the array such that accurate touch sensing throughout the entire array may be provided by calibrating a single display.
Each display 104 may utilize various suitable display technologies to facilitate graphical output, including but not limited to liquid-crystal or organic light-emitting diode display technologies. While each display 104 is shown as being operatively coupled to display controller 106, two or more display controllers may be operatively coupled to the displays, and in some examples, each display may be operatively coupled to a unique display controller. In some implementations, display array 102 may present graphical content that is discontinuous across one or more displays 104, unlike the graphical content shown in
In this example, each display 104 includes a touch sensor (e.g., touch sensor 108, represented in
Each touch sensor 108 further extends beyond its respective display surface 109 and bends to extend along at least a portion of one or more side surfaces (e.g., side surface 118) that bound the display surface. Side surfaces 118 in this example are substantially perpendicular (e.g., within 5°) to display surface 109, though other angular orientations are possible including those in which a side surface's angular orientation is variable. In the example depicted in
Other actions may be executed in display array 102 in response to detection of touch input along side surfaces 118. For example, virtual buttons 124 may be placed along side surfaces 118 and activated in response to detecting input proximate the virtual buttons via regions of touch sensors 108 positioned along side surfaces 118. Virtual buttons 124 may be operable to control a wide range of functions of an underlying GUI and/or OS, including but not limited to adjusting the volume of audio, switching among video sources that provide graphical content to one or more displays 104, etc. Analogous virtual button functionality, and/or general touch sensing functionality, may be provided at the rear surfaces of displays 104 for implementations in which their respective touch sensors extend to the rear surfaces.
Touch sensors 108 may further be used to form electrostatic communication links between adjacent displays 104 to thereby transmit information among the displays. Information transmitted among displays 104 may be used to automatically configure display array 102—that is, determine the number and arrangement (e.g., relative position) of the displays, and communicate this configuration information to display controller 106 so that the display controller may determine the appropriate portions of graphical content to send to each display as described above.
In one implementation, display 104B may receive configuration information from display 104A placed adjacent to and bordering display 104B on a predefined side (e.g., left side) of display 104A. The configuration information may be transmitted between displays 104A and 104B via an electrostatic communication link formed between their respective touch sensors 108. Turning now to
As shown in
In the implementation depicted in
A plurality of inter-column jumpers (e.g., inter-column jumper 213B) may be positioned between adjacent transmit electrodes 202. Unlike intra-column jumpers 213A, inter-column jumpers 213B include a plurality of electrical discontinuities (e.g., discontinuity 214) that render each overall inter-column jumper electrically non-conductive. Being aligned (e.g., horizontally in
Touch sensor and electrode configurations other than those shown in
In some implementations, touch sensor 108A may transmit data indicating its presence to touch sensor 108B via electrostatic link 215, for example by sending a display identifier, as discussed below. The transmitted data may further indicate a sequence used to scan touch sensor 108A—particularly, a temporal position within the sequence indicating the one or more transmit electrodes 202 being driven may be transmitted to touch sensor 108B, allowing touch sensors 108A and 108B to become synchronized in time. Synchronization between touch sensors 108A and 108B may allow, for a given temporal position in a scanning sequence, controller 212 of touch sensor 108B to suppress driving of the plurality of transmit electrodes 202 for an interval during which configuration information may be received from driven transmit electrodes 202 of touch sensor 108A. In this way, data may be transmitted via electrostatic links established between respective touch sensors of adjacent displays without adversely affecting touch sensing in either display, and without confounding configuration information by driving transmit electrodes when they should not be driven.
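The timing of this scheme can be sketched as follows. The slot count, class structure, and hardware hooks below are hypothetical stand-ins chosen for illustration; the disclosure specifies only that transmit driving is suppressed for an interval during which the neighbor's transmit electrodes are sensed.

```python
# Minimal sketch of the scan-synchronization scheme described above. The
# slot count and method names are hypothetical, not the disclosed design.

SCAN_SLOTS = 16  # hypothetical length of one scan sequence


class TouchSensorController:
    def __init__(self):
        self.slot = 0            # current temporal position in the scan
        self.listen_slot = None  # slot reserved for receiving from a neighbor

    def synchronize(self, neighbor_slot):
        """Adopt the temporal position reported by the adjacent display so
        both sensors agree on when the listen interval occurs."""
        self.slot = neighbor_slot
        self.listen_slot = (neighbor_slot + 1) % SCAN_SLOTS

    def tick(self):
        """Advance one slot: either drive transmit electrodes for touch
        sensing, or suppress driving and listen for configuration data."""
        if self.slot == self.listen_slot:
            self.drive_transmit(False)  # suppress own transmit electrodes
            data = self.read_edge_receive_electrodes()
        else:
            self.drive_transmit(True)   # normal touch scanning
        self.slot = (self.slot + 1) % SCAN_SLOTS

    # Stand-ins for hardware access; a real controller would talk to the
    # analog front end here.
    def drive_transmit(self, enabled):
        pass

    def read_edge_receive_electrodes(self):
        return b""  # placeholder for demodulated configuration bytes
```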
As described in more detail below, each display 104 in a display array 102 will attempt communication with surrounding displays on each side surface 118 of its perimeter. Accordingly, each display 104 will gather data indicating, for each side surface, a display identifier for the adjacent display on that side surface. Each display may transmit this information to the display controller 106, so that display controller 106 may generate an accurate map of the display array, including the display identifier and position of each display in the array. Using this map, display controller 106 can generate an appropriate display signal for the display array 102.
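The per-display data gathered in this step can be pictured as a small record. The sketch below shows one possible shape for such a report; the field and side names are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical shape of the per-display report described above: for each
# side surface, the identifier of the adjacent display, or None where no
# configuration information was received.
from dataclasses import dataclass


@dataclass
class NeighborReport:
    display_id: str
    neighbors: dict  # side ("left", "top", "right", "bottom") -> id or None


# Example: a corner display with neighbors on its right and bottom sides only.
report = NeighborReport(
    display_id="104A",
    neighbors={"left": None, "top": None, "right": "104B", "bottom": "104C"},
)
```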
Inter-display communication in the manner described above may be used to automatically configure a display array such that appropriate portions of graphical content may be sent to each display. Such automatic configuration may be particularly useful, for example, when a display array is permanently installed in a new location, or when a display array is set up on an ad-hoc basis for temporary use, such as at a trade show, exhibition, conference, etc. By such automatic configuration, painstaking programming of the display controller may be omitted, since the displays self-report their relative positions in the array to the display controller.
Further, it will be appreciated that bi-directional communication between adjacent displays may be enabled by alternating roles. During a first interval, a first display of an adjacent display pair functions as the receiving display and suppresses driving of the transmit electrodes positioned along its side surface. During a second interval, the first display functions as the transmitting display, and the adjacent display functions as the receiving display, suppressing driving of its own transmit electrodes along the corresponding side surface in order to better receive data via the electrostatic link.
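One simple way to realize these alternating intervals is a fixed even/odd schedule, as in the sketch below. The even/odd rule and function names are assumptions made for illustration; the disclosure requires only two complementary intervals.

```python
# Sketch of a half-duplex schedule for an adjacent display pair: during even
# intervals the first display transmits while the second suppresses its edge
# transmit electrodes to listen; during odd intervals the roles swap.

def role_for_interval(interval, is_first_display):
    transmitting = (interval % 2 == 0) == is_first_display
    return "transmit" if transmitting else "receive (suppress edge TX)"


for interval in range(4):
    print(interval,
          "display A:", role_for_interval(interval, True),
          "| display B:", role_for_interval(interval, False))
```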
Next, at 410 of method 400, configuration information from each of the adjacent displays is received by the first display via electrostatic links formed therebetween. Receiving the configuration information may include, at 412, receiving the configuration information via the receive electrodes of the first display at one or more of the side surfaces. Conversely, the absence of configuration information at one or more side surfaces may be used to determine the relative positioning of a display. Identification of corner displays (e.g., display 104A) in the display array, for example, may be performed by determining that configuration information is not being received at two of the side surfaces (e.g., the left and top side surfaces). Receiving the configuration information may also include, at 414, suppressing driving of the transmit electrodes of the first display for an interval so that reception of the configuration information is not confounded. The interval during which transmit electrode driving is suppressed may be determined based on the received configuration information, and particularly the scanning data.
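As an illustration of this inference, the following sketch classifies a display from the set of side surfaces at which no configuration information arrived. The side names and classification rule are hypothetical; the disclosure describes only identifying, e.g., corner displays from two sides at which nothing is received.

```python
# Illustrative position inference from missing configuration information.

def classify(missing_sides):
    """missing_sides: set of sides with no adjacent display detected."""
    # Two adjacent missing sides indicate a corner; opposite missing sides
    # (e.g., top and bottom in a single-row array) are treated as edges here.
    adjacent_pairs = [{"left", "top"}, {"top", "right"},
                      {"right", "bottom"}, {"bottom", "left"}]
    if any(pair <= missing_sides for pair in adjacent_pairs):
        return "corner"
    if missing_sides:
        return "edge"
    return "interior"


print(classify({"left", "top"}))  # "corner", like display 104A above
```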
Next, at 416 of method 400, the configuration information received at 410 by the first display is communicated to a display controller. The first display may communicate the configuration information to the display controller via a touch sensing controller through a suitable communication interface, for example. Communicating the configuration information may include, at 418, sending display identifiers for each of the adjacent displays in addition to the side surface at which each display identifier was received. Each display identifier and associated side surface at which the identifier was received may be sent to the display controller as a pair. Sending the display identifiers at 418 may also include communicating, from the first display, a display identifier identifying itself (e.g., an identifier identifying the first display). As a non-limiting example, display 104A in display array 102 may communicate to display controller 106 a display identifier identifying display 104A, a display identifier identifying display 104B and data indicating that this display identifier was received at the right side surface 118 of display 104A, and a display identifier identifying a display 104C and data indicating that this display identifier was received at the bottom side surface 118 of display 104A. In this example, display 104A may also send to display controller 106 data indicating that display identifiers were not received at the top or left side surfaces 118.
Continuing with
At 420 of method 400, the relative position of each display in the display array is determined by the display controller. The display controller may determine, for a given display, its relative position in the display array by analyzing the display identifiers it received, the side surfaces at which they were received, and any side surfaces at which display identifiers were not received.
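One plausible realization of this determination is a breadth-first walk of the adjacency reports, anchored at a corner display. The sketch below makes several assumptions beyond what the disclosure specifies: the report format, the side names, a rectangular grid, and the hypothetical fourth display "104D".

```python
# Sketch of how a display controller might recover each display's
# (row, column) position from per-display neighbor reports.
from collections import deque

OFFSETS = {"right": (0, 1), "left": (0, -1), "bottom": (1, 0), "top": (-1, 0)}


def solve_layout(reports):
    """reports: dict display_id -> {side: neighbor_id or None}."""
    # Anchor at a corner display: one with no neighbor on its left or top.
    start = next(d for d, n in reports.items()
                 if n.get("left") is None and n.get("top") is None)
    positions = {start: (0, 0)}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        row, col = positions[current]
        for side, neighbor in reports[current].items():
            if neighbor is not None and neighbor not in positions:
                dr, dc = OFFSETS[side]
                positions[neighbor] = (row + dr, col + dc)
                queue.append(neighbor)
    return positions


layout = solve_layout({
    "104A": {"left": None, "top": None, "right": "104B", "bottom": "104C"},
    "104B": {"left": "104A", "top": None, "right": None, "bottom": "104D"},
    "104C": {"left": None, "top": "104A", "right": "104D", "bottom": None},
    "104D": {"left": "104C", "top": "104B", "right": None, "bottom": None},
})
print(layout)  # {'104A': (0, 0), '104B': (0, 1), '104C': (1, 0), '104D': (1, 1)}
```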
Next, at 422 of method 400, a respective portion of graphical content is determined for each display based on their relative positions determined at 420. Determination of the respective graphical content portions may be performed in various suitable manners. In a display array having displays of equal size positioned at the same orientation (e.g., landscape), the graphical content may be divided into equal portions, for example.
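For such an equal division, each display's portion is simply a grid-aligned crop of the source content, as in this sketch; the resolution, grid size, and function name are illustrative assumptions.

```python
# Sketch of equal partitioning for a grid of same-size, same-orientation
# displays: each display receives the crop rectangle of the source content
# corresponding to its (row, column) position.

def crop_rect(row, col, rows, cols, width, height):
    """Return (x, y, w, h) of the content portion for the display at
    (row, col) in a rows x cols array rendering width x height content."""
    w, h = width // cols, height // rows
    return (col * w, row * h, w, h)


# A 2 x 2 array presenting 3840 x 2160 content: one quadrant per display.
for row in range(2):
    for col in range(2):
        print((row, col), crop_rect(row, col, 2, 2, 3840, 2160))
```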
Finally, at 424 of method 400, the portions of graphical content are sent to their respective displays.
Method 400 as shown and described may facilitate rapid, ad-hoc formation of a display array and correspondingly rapid distribution of appropriate graphical content to each display in the array. Using method 400, a display array may include a plurality of displays, where each display is configured to communicate display identifiers and positions of adjacent displays to a display controller based on configuration information received from the adjacent displays via corresponding electrostatic links formed between touch sensor regions on a side surface of each display pair. Method 400, however, may also be applied to other types of devices having displays, such as portable personal computers, smartphones, tablets, and other movable electronic devices with displays. Thus, displays 104 described above may be displays housed in smartphones, tablets, or laptop computers, for example.
Touch sensor 508 comprises a sensor film 510, a transmit electrode layer 512 comprising a plurality of transmit electrodes, and a receive electrode layer 514 comprising a plurality of receive electrodes. Film 510 and layers 512 and 514 may be integrally formed as a single layer by depositing layer 512 on a top surface of film 510, and by depositing layer 514 on a bottom surface of the film. In other implementations, layers 512 and 514 may be formed as separate layers and subsequently bonded via an OCA layer.
Transmit and receive electrode layers 512 and 514 may be formed by a variety of suitable processes. Such processes may include deposition of metallic wires onto the surface of an adhesive, dielectric substrate; patterned deposition of a material that selectively catalyzes the subsequent deposition of a metal film (e.g., via plating); photoetching; patterned deposition of a conductive ink (e.g., via inkjet, offset, relief, or intaglio printing); filling grooves in a dielectric substrate with conductive ink; selective optical exposure (e.g., through a mask or via laser writing) of an electrically conductive photoresist followed by chemical development to remove unexposed photoresist; and selective optical exposure of a silver halide emulsion followed by chemical development of the latent image to metallic silver, in turn followed by chemical fixing. In one example, metalized sensor films may be disposed on a user-facing side of a substrate, with the metal facing away from the user, or alternatively facing toward the user with a protective sheet (e.g., comprised of polyethylene terephthalate (PET)) between the user and the metal. Although transparent conductive oxide (TCO) is typically not used in the electrodes, partial use of TCO to form a portion of the electrodes, with other portions being formed of metal, is possible. In one example, the electrodes may be thin metal of substantially constant cross section, and may be sized such that they are not optically resolvable and may thus be unobtrusive from the perspective of a user. Suitable materials from which electrodes may be formed include various suitable metals (e.g., aluminum, copper, nickel, silver, gold, etc.), metallic alloys, conductive allotropes of carbon (e.g., graphite, fullerenes, amorphous carbon, etc.), conductive polymers, and conductive inks (e.g., made conductive via the addition of metal or carbon particles).
The materials that comprise film 510 and layers 512 and 514 may be particularly chosen to allow touch sensor 508 to be bent along at least a portion of the side surfaces of the display, and optionally to extend to the rear surface of the display. For example, film 510 may be comprised of cyclic olefin copolymer (COC), polyethylene terephthalate (PET), or polycarbonate (PC).
A second OCA layer 516 bonds the bottom surface of touch sensor 508 to the top surface of a substrate 518, which may be comprised of various suitable materials including but not limited to glass, acrylic, or PC. A third OCA layer 520 bonds the bottom surface of substrate 518 to the top surface of a display stack 522, which may be a liquid crystal display (LCD) stack, organic light-emitting diode (OLED) stack, plasma display panel (PDP), or other flat panel display stack. For implementations in which display stack 522 is an OLED stack, substrate 518 may be omitted, in which case a single OCA layer may be interposed between touch sensor 508 and the display stack. Regardless, display stack 522 is operable to emit visible light L upwards through stack 500 and top surface 504 such that graphical content may be perceived by a user.
As seen in
While shown as including a bezel, it will be appreciated that housing 528 may include other components positioned around its perimeter and not a bezel in other implementations. For example, housing 528 may include a black mask positioned along its border and configured to reduce the perceptibility of components in stack 500. The touch sensor configuration shown in
The bezel, and portions 530, may be used to restrain touch sensor 508, and particularly its bent portions along side surfaces 524 and optionally along rear surface 526, to ensure that the desired positioning is maintained. For example, double-sided adhesive may be attached to touch sensor 508 at one side and to the bezel at the other side to restrain touch sensor 508. In another example, mechanical clamping may be used. In yet another implementation, the bezel itself, when placed around bent touch sensor 508, may restrain the touch sensor.
It will be appreciated that the various views of stack 500 shown in
In some implementations, the methods and processes described herein may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), a library, and/or other computer-program product.
Computing system 600 includes a logic machine 602 and a storage machine 604. Computing system 600 may optionally include a display subsystem 606, input subsystem 608, communication subsystem 610, and/or other components not shown in
Logic machine 602 includes one or more physical devices configured to execute instructions. For example, the logic machine may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
The logic machine may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic machine may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of the logic machine may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic machine optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic machine may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
Storage machine 604 includes one or more physical devices configured to hold instructions executable by the logic machine to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage machine 604 may be transformed—e.g., to hold different data.
Storage machine 604 may include removable and/or built-in devices. Storage machine 604 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Storage machine 604 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that storage machine 604 includes one or more physical devices. However, aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a finite duration.
Aspects of logic machine 602 and storage machine 604 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), systems-on-a-chip (SOCs), and complex programmable logic devices (CPLDs), for example.
The terms “module,” “program,” and “engine” may be used to describe an aspect of computing system 600 implemented to perform a particular function. In some cases, a module, program, or engine may be instantiated via logic machine 602 executing instructions held by storage machine 604. It will be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms “module,” “program,” and “engine” may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
It will be appreciated that a “service”, as used herein, is an application program executable across multiple user sessions. A service may be available to one or more system components, programs, and/or other services. In some implementations, a service may run on one or more server-computing devices.
When included, display subsystem 606 may be used to present a visual representation of data held by storage machine 604. This visual representation may take the form of a graphical user interface (GUI). As the herein described methods and processes change the data held by the storage machine, and thus transform the state of the storage machine, the state of display subsystem 606 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 606 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic machine 602 and/or storage machine 604 in a shared enclosure, or such display devices may be peripheral display devices.
When included, input subsystem 608 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some implementations, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.
When included, communication subsystem 610 may be configured to communicatively couple computing system 600 with one or more other computing devices. Communication subsystem 610 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network. In some implementations, the communication subsystem may allow computing system 600 to send and/or receive messages to and/or from other devices via a network such as the Internet.
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific implementations or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.