This disclosure relates generally to touch screen devices and related methods, including but not limited to touch screen devices and methods that operate in modes using fingerprint sensor systems, optical sensor systems, force sensor systems, and other sensor systems for detecting touch.
Touch screen devices are commonly featured in a variety of devices. Many touch screen devices including smartphones use capacitive touch screen technology to register user inputs on the touch screen device. Capacitive touch screen technology is largely displacing resistive touch screens due to industrial design, durability, and performance considerations. Generally, capacitive touch screens require “bare-handed” contact to sense a touch because a change in capacitance on the touch screen is induced by a small electrical charge drawn to a fingertip at a point of contact. This gives rise to a problem when a user is operating the touch screen device underwater or when the user is wearing protective gloves.
The systems, methods and devices of the disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.
One innovative aspect of the subject matter described in this disclosure may be implemented in an apparatus. The apparatus may include a display, a capacitive touch sensor system, and one or more of the following: a fingerprint sensor system, an optical sensor system, or a force sensor system. The apparatus may further include a control system configured for communication with the capacitive touch sensor system, the fingerprint sensor system, and the display, where the control system is further configured to: determine that a user context of the apparatus is incompatible with touch sensing for the capacitive touch sensor system, and present a graphical user interface on the display of the apparatus, where the graphical user interface comprises a first portion configured for sensing touch using one or more of the following: the fingerprint sensor system, the optical sensor system, or the force sensor system.
According to some implementations, the first portion comprises a region of the display coextensive with an active area of the fingerprint sensor system. In some implementations, the graphical user interface further comprises a second portion, where the second portion of the graphical user interface is configured to display a video or image. In some implementations, the control system configured to determine that the user context is incompatible with touch sensing for the capacitive touch sensor system is configured to determine that the user context comprises one or more of the following: (i) the display of the apparatus is covered or submerged in water, (ii) a user wearing gloves, or (iii) a user utilizing a prosthetic or robotic hand.
According to some implementations, the control system is further configured to: obtain a location of a touch input on the graphical user interface using the fingerprint sensor system. In some implementations, the control system configured to obtain the location of the touch input on the graphical user interface using the fingerprint sensor system is configured to: divide the first portion of the graphical user interface into an array of tiles corresponding to active areas of the fingerprint sensor system, perform a scan of one or more tiles of the array of tiles using the fingerprint sensor system to identify a user's finger, and correlate the user's finger with one of the tiles in the first portion of the graphical user interface to obtain the location of the touch input. In some implementations, the control system configured to obtain the location of the touch input on the graphical user interface using the fingerprint sensor system is configured to: detect an applied force on the display using one or more pressure or force sensors associated with the force sensor system or fingerprint sensor system, perform a scan using the fingerprint sensor system to identify a user's finger, and correlate the user's finger with at least one tile in an array of tiles in the first portion of the graphical user interface to obtain the location of the touch input, where the array of tiles divides the first portion into a grid.
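The tile-based localization described above can be illustrated with a minimal sketch. This is not an implementation from the disclosure; the function names, the tile-grid dimensions, and the representation of per-tile scan results are all assumptions made for illustration.

```python
# Sketch: divide a GUI region into a grid of tiles corresponding to
# fingerprint-sensor active areas, then correlate a detected finger
# with a tile to obtain a touch location (tile-center coordinates).
from dataclasses import dataclass

@dataclass
class Tile:
    row: int
    col: int
    x: float  # tile-center x coordinate, in display pixels
    y: float  # tile-center y coordinate, in display pixels

def build_tile_grid(region_w, region_h, rows, cols):
    """Divide the first portion of the graphical user interface into
    a rows x cols array of tiles."""
    tile_w, tile_h = region_w / cols, region_h / rows
    return [Tile(r, c, (c + 0.5) * tile_w, (r + 0.5) * tile_h)
            for r in range(rows) for c in range(cols)]

def locate_touch(tiles, scan_results):
    """scan_results maps (row, col) to whether a per-tile fingerprint
    scan identified a finger. Returns the center of the first tile in
    which a finger was found, or None if no tile registered a finger."""
    for t in tiles:
        if scan_results.get((t.row, t.col)):
            return (t.x, t.y)
    return None
```

For example, with a 1080 x 360 pixel first portion divided into a 2 x 6 grid, a finger identified in tile (1, 3) would be correlated to that tile's center coordinates.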
According to some implementations, the fingerprint sensor system comprises an ultrasonic fingerprint sensor system. The control system configured to obtain the location of the touch input on the graphical user interface using the fingerprint sensor system may be configured to: cause an ultrasonic transmitter or transceiver in the ultrasonic fingerprint sensor system to generate ultrasonic waves, receive reflections of the ultrasonic waves at a piezoelectric receiver array to acquire image data that identifies a stylus or pen in contact with a surface of the apparatus, and correlate the stylus or pen to a position in the first portion of the graphical user interface to obtain the location of the touch input using the image data. In some implementations, the control system configured to obtain the location of the touch input on the graphical user interface using the fingerprint sensor system may be configured to: receive an indication of mechanical deformation at a piezoelectric receiver array of the ultrasonic fingerprint sensor system to acquire image data that identifies a stylus or pen in contact with a surface of the apparatus, and correlate the stylus or pen to a position in the first portion of the graphical user interface to obtain the location of the touch input using the image data.
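One simple way to correlate a stylus or pen with a position from acquired image data is an amplitude-weighted centroid over the receiver array. The sketch below is an illustrative assumption, not the disclosed method: it assumes the image data arrives as a 2D grid of reflection amplitudes in which the stylus contact produces a localized peak.

```python
# Sketch: locate a stylus tip in ultrasonic image data by computing the
# centroid of pixels whose reflection amplitude exceeds a threshold.
def locate_stylus(image, threshold=0.5):
    """image: 2D list of reflection amplitudes from the piezoelectric
    receiver array. Returns the centroid (row, col) of above-threshold
    pixels, or None if no contact is detected."""
    total = sum_r = sum_c = 0.0
    for r, row in enumerate(image):
        for c, amp in enumerate(row):
            if amp >= threshold:
                total += amp
                sum_r += amp * r
                sum_c += amp * c
    if total == 0:
        return None
    return (sum_r / total, sum_c / total)
```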
According to some implementations, the control system is further configured to: obtain a location of a touch input on the graphical user interface using the force sensor system, where the control system configured to obtain the location of the touch input using the force sensor system is configured to: detect an applied force from an object on the display, obtain a plurality of force measurements from a plurality of force sensors associated with the force sensor system, each of the plurality of force sensors positioned at different locations of the apparatus, and estimate a position of the object in the first portion of the graphical user interface based on the plurality of force measurements to obtain a location of the touch input.
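The force-based position estimate above can be sketched as a force-weighted centroid of the sensor locations. The sensor layout, the weighting scheme, and the minimum-force cutoff are assumptions for illustration rather than details from the disclosure.

```python
# Sketch: estimate the position of an object pressing on the display
# from a plurality of force sensors at known, different locations.
def estimate_touch_position(sensors, min_force=0.1):
    """sensors: list of (x, y, force) readings, one per force sensor.
    Returns an estimated (x, y) touch position, or None if the total
    applied force is below the detection cutoff."""
    total = sum(f for _, _, f in sensors)
    if total < min_force:
        return None  # no significant applied force detected
    x = sum(x * f for x, _, f in sensors) / total
    y = sum(y * f for _, y, f in sensors) / total
    return (x, y)
```

With equal forces at the four corners of a region, for instance, the estimate falls at the region's center; unequal forces pull the estimate toward the sensors reading higher force.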
Other innovative aspects of the subject matter described in this disclosure may be implemented in a method. The method may include determining, using a control system, that a user context of a touch screen device is incompatible with touch sensing for a capacitive touch sensor system of the touch screen device, and presenting, using the control system, a graphical user interface on a display of the touch screen device, where the graphical user interface comprises a first portion configured for sensing touch using one or more of the following: a fingerprint sensor system, an optical sensor system, or a force sensor system of the touch screen device.
According to some implementations, the first portion of the graphical user interface comprises a region of the display coextensive with an active area of the fingerprint sensor system. In some implementations, the graphical user interface further comprises a second portion, where the second portion of the graphical user interface is configured to display a video or image. In some implementations, the user context that is incompatible with touch sensing for the capacitive touch sensor system comprises one or more of the following: (i) the display of the touch screen device is covered or submerged in water, (ii) a user wearing gloves, or (iii) a user utilizing a prosthetic or robotic hand.
According to some implementations, the method further includes obtaining, using the control system, a location of a touch input on the graphical user interface using the fingerprint sensor system. In some implementations, obtaining the location of the touch input on the graphical user interface using the fingerprint sensor system comprises: dividing the first portion of the graphical user interface into an array of tiles corresponding to active areas of the fingerprint sensor system, performing a scan of one or more tiles of the array of tiles using the fingerprint sensor system to identify a user's finger, and correlating the user's finger with one of the tiles in the first portion of the graphical user interface to obtain the location of the touch input. In some implementations, obtaining the location of the touch input on the graphical user interface using the fingerprint sensor system comprises: detecting an applied force on the display using one or more pressure or force sensors associated with the force sensor system or fingerprint sensor system, performing a scan using the fingerprint sensor system to identify a user's finger, and correlating the user's finger with at least one tile in an array of tiles in the first portion of the graphical user interface to obtain the location of the touch input, where the array of tiles divides the first portion into a grid.
According to some implementations, the fingerprint sensor system comprises an ultrasonic fingerprint sensor system. In some implementations, obtaining the location of the touch input on the graphical user interface using the fingerprint sensor system comprises: causing an ultrasonic transmitter or transceiver in the ultrasonic fingerprint sensor system to generate ultrasonic waves, receiving reflections of the ultrasonic waves at a piezoelectric receiver array to acquire image data that identifies a stylus or pen in contact with a surface of the touch screen device, and correlating the stylus or pen to a position in the first portion of the graphical user interface to obtain the location of the touch input using the image data. In some implementations, obtaining the location of the touch input on the graphical user interface using the fingerprint sensor system comprises: receiving an indication of mechanical deformation at a piezoelectric receiver array of the ultrasonic fingerprint sensor system to acquire image data that identifies a stylus or pen in contact with a surface of the touch screen device, and correlating the stylus or pen to a position in the first portion of the graphical user interface to obtain the location of the touch input using the image data.
According to some implementations, the method further includes obtaining a location of a touch input on the graphical user interface using the optical sensor system.
According to some implementations, the method further includes obtaining a location of a touch input on the graphical user interface using the force sensor system. Obtaining the location of the touch input using the force sensor system may comprise: detecting an applied force from an object on the display, obtaining a plurality of force measurements from a plurality of force sensors associated with the force sensor system, each of the plurality of force sensors positioned at different locations of the touch screen device, and estimating a position of the object in the first portion of the graphical user interface based on the plurality of force measurements to obtain a location of the touch input.
Some or all of the operations, functions and/or methods described herein may be performed by one or more devices according to instructions (e.g., software) stored on one or more non-transitory media. Such non-transitory media may include memory devices such as those described herein, including but not limited to random access memory (RAM) devices, read-only memory (ROM) devices, etc. Accordingly, some innovative aspects of the subject matter described in this disclosure can be implemented in one or more non-transitory media having software stored thereon.
For example, the software may include instructions for controlling one or more devices to perform a method. In some implementations, the method may involve determining that a user context of a touch screen device is incompatible with touch sensing for a capacitive touch sensor system of the touch screen device, and presenting a graphical user interface on a display of the touch screen device, where the graphical user interface comprises a first portion configured for sensing touch using one or more of the following: a fingerprint sensor system, an optical sensor system, or a force sensor system of the touch screen device.
According to some implementations, the user context that is incompatible with touch sensing for the capacitive touch sensor system comprises one or more of the following: (i) the display of the touch screen device is covered or submerged in water, (ii) a user wearing gloves, or (iii) a user utilizing a prosthetic or robotic hand.
In some implementations, the method may further involve obtaining a location of a touch input on the graphical user interface using the fingerprint sensor system. In some implementations, the method may further involve obtaining a location of a touch input on the graphical user interface using the force sensor system.
Details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Note that the relative dimensions of the following figures may not be drawn to scale. Like reference numbers and designations in the various drawings indicate like elements.
The following description is directed to certain implementations for the purposes of describing the innovative aspects of this disclosure. However, a person having ordinary skill in the art will readily recognize that the teachings herein may be applied in a multitude of different ways. The described implementations may be implemented in any device, apparatus, or system that includes a touch sensing system as disclosed herein. In addition, it is contemplated that the described implementations may be included in or associated with a variety of electronic devices such as, but not limited to: mobile devices, multimedia Internet enabled cellular telephones, mobile television receivers, wireless devices, smartphones, smart cards, wearable devices such as bracelets, armbands, wristbands, rings, headbands, patches, etc., Bluetooth® devices, personal data assistants (PDAs), wireless electronic mail receivers, hand-held or portable computers, netbooks, notebooks, smartbooks, tablets, printers, copiers, scanners, facsimile devices, global positioning system (GPS) receivers/navigators, cameras, digital media players (such as MP3 players), camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, electronic reading devices (e.g., e-readers), mobile health devices, computer monitors, auto displays (including odometer and speedometer displays, etc.), cockpit controls and/or displays, camera view displays (such as the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, microwaves, refrigerators, stereo systems, cassette recorders or players, DVD players, CD players, VCRs, radios, portable memory chips, washers, dryers, washer/dryers, parking meters, packaging (such as in electromechanical systems (EMS) applications including microelectromechanical systems (MEMS) applications, as well as non-EMS applications), aesthetic structures (such as display 
of images on a piece of jewelry or clothing) and a variety of EMS devices. The teachings herein also may be used in applications such as, but not limited to, electronic switching devices, radio frequency filters, sensors, accelerometers, gyroscopes, motion-sensing devices, magnetometers, inertial components for consumer electronics, parts of consumer electronics products, steering wheels or other automobile parts, varactors, liquid crystal devices, electrophoretic devices, drive schemes, manufacturing processes and electronic test equipment. Thus, the teachings are not intended to be limited to the implementations depicted solely in the Figures, but instead have wide applicability as will be readily apparent to one having ordinary skill in the art.
An increasing number of devices incorporate a touch screen that enables a user to interact with a graphical user interface via touch as an input. A user may rely on touch screen inputs as an alternative to physical buttons, keyboards, and other input devices. Many touch screen devices operate using capacitive touch screen technology. Typically, a capacitive touch screen device comprises an insulator (e.g., glass) coated with a transparent conductor (e.g., indium tin oxide). Since the human body can act as an electrical conductor, touching a touch screen results in a distortion of the touch screen's electrostatic field. The distortion is measurable as a change in capacitance, which, in turn, can be used to determine the location of contact on the touch screen.
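The change-in-capacitance principle above can be sketched in simplified form. Real projected-capacitive controllers scan a grid of electrode intersections and interpolate between nodes; the version below is an assumption-laden minimal illustration that simply reports the node with the largest above-threshold deviation from a per-node baseline.

```python
# Simplified sketch: localize a touch by comparing measured capacitance
# at each electrode intersection against its baseline and reporting the
# node with the largest change.
def find_touch(baseline, measured, threshold=5.0):
    """baseline, measured: 2D lists of capacitance values, same shape.
    Returns (row, col) of the largest above-threshold delta, or None
    if no node deviates by more than the threshold."""
    best, best_pos = threshold, None
    for r in range(len(measured)):
        for c in range(len(measured[r])):
            delta = abs(measured[r][c] - baseline[r][c])
            if delta > best:
                best, best_pos = delta, (r, c)
    return best_pos
```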
There are various user contexts that may render a capacitive touch screen device inoperable. One situation occurs when the touch screen device is submerged under water. In such instances, the touch screen device is overwhelmed by the change in capacitance and essentially functions as if a huge finger is covering the touch screen uniformly, thereby hindering touch detection. Even for devices that claim to be water-resistant or waterproof, touch detection becomes poor or non-functional when the device is covered in water. Another situation occurs when a user is operating the touch screen device with gloves. Often, the material of a glove is an electrical insulator that insulates the user's fingers. This prevents a capacitive touch screen from detecting the conductivity of a fingertip through the glove. Thus, the capacitive touch screen may be inoperable while the user is wearing gloves. This presents a problem for users that wear gloves in cold weather conditions and for users that wear gloves in certain occupations or recreational activities to protect their hands. Similarly, another situation occurs when the user is operating the touch screen device with a prosthetic. Many prosthetic hands are made of a material that is electrically nonconductive, so that the prosthetic hands are incapable of interacting with capacitive touch screen technologies.
Some of the disclosed methods, devices, and apparatuses involve operation of a touch screen device in a non-capacitive touch mode using alternative sensor systems such as a fingerprint sensor system, optical sensor system, or force sensor system. The touch screen device may detect a user context that is incompatible with touch sensing for a touch sensor system (e.g., capacitive touch sensor system) of the device. Upon detection of such a user context, the touch screen device may enter a non-capacitive touch mode where a designated portion of a graphical user interface is configured for sensing touch using alternative sensor systems to a capacitive touch sensor system. In some implementations, the designated portion may enable touch sensing using a fingerprint sensor system such as an ultrasonic fingerprint sensor system. In some examples, the fingerprint sensor system may perform a scan that identifies a user's finger on an active tile and then correlates the active tile to a location on the graphical user interface. In some examples, the fingerprint sensor system receives reflections of the ultrasonic waves to identify an object (e.g., stylus) to locate a position of the object, or the fingerprint sensor system may receive an indication of mechanical deformation caused by an object (e.g., stylus) to locate a position of the object. In some implementations, the designated portion may enable touch sensing using an optical sensor system to generate images to determine a location of user input. In some implementations, the designated portion may enable touch sensing using a force sensor system employing a plurality of force sensors to determine a location of a user input.
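The mode-selection idea above can be summarized in a small sketch. The context labels, the set of fallback backends, and the priority order (fingerprint, then force, then optical) are illustrative assumptions, not choices specified by the disclosure.

```python
# Sketch: when the detected user context is incompatible with capacitive
# sensing, fall back to an alternative sensor system for touch sensing.
CAPACITIVE_INCOMPATIBLE = {"underwater", "gloved", "prosthetic"}

def select_touch_mode(context, available_sensors):
    """Pick a touch-sensing backend given a user context and the set of
    alternative sensor systems present on the device."""
    if context not in CAPACITIVE_INCOMPATIBLE:
        return "capacitive"
    # Assumed preference order among the alternative systems.
    for backend in ("fingerprint", "force", "optical"):
        if backend in available_sensors:
            return backend
    return "capacitive"  # no alternative system is available
```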
Particular implementations of the subject matter described in this disclosure may be implemented to realize one or more of the following potential advantages. User experience is enhanced by enabling touch sensing and recognition in an underwater environment, environments (e.g., cold weather) where the user is wearing gloves, situations where the user has prosthetic hands or robotic hands, or other environments that may ordinarily render touch sensing inoperable. This allows users to continue to operate their smartphones or other devices while swimming, while participating in water activities, while wearing gloves, while using prosthetic or robotic hands, etc. Functionality is improved by designating portions of a graphical user interface for touch sensing that correspond to active areas of alternative sensor systems. Less power is wasted by disabling a capacitive touch sensor system and enabling touch operation only in designated portions that correspond to active areas of alternative sensor systems. When using a fingerprint sensor system for touch sensing, power may be saved by reducing usage of the fingerprint sensor transmitter during scanning and reducing usage of fingerprint image processing. Touch recognition is also improved by dividing the designated portion of the graphical user interface into an array of tiles to assist in correlating user input with a location of the user input.
According to some examples, the fingerprint sensor system 102 may be, or may include, an ultrasonic fingerprint sensor. Alternatively, or additionally, in some implementations the fingerprint sensor system 102 may be, or may include, another type of fingerprint sensor, such as an optical fingerprint sensor, a photoacoustic fingerprint sensor, etc. In some examples, an ultrasonic version of the fingerprint sensor system 102 may include an ultrasonic receiver and a separate ultrasonic transmitter. In some such examples, the ultrasonic transmitter may include an ultrasonic plane-wave generator. However, various examples of ultrasonic fingerprint sensors are disclosed herein, some of which may include a separate ultrasonic transmitter and some of which may not. For example, in some implementations, the fingerprint sensor system 102 may include a piezoelectric receiver layer, such as a layer of polyvinylidene fluoride (PVDF) polymer or a layer of polyvinylidene fluoride-trifluoroethylene (PVDF-TrFE) copolymer. In some implementations, a separate piezoelectric layer may serve as the ultrasonic transmitter. In some implementations, a single piezoelectric layer may serve as both a transmitter and a receiver. The fingerprint sensor system 102 may, in some examples, include an array of ultrasonic transducer elements, such as an array of piezoelectric micromachined ultrasonic transducers (PMUTs), an array of capacitive micromachined ultrasonic transducers (CMUTs), etc. In some such examples, PMUT elements in a single-layer array of PMUTs or CMUT elements in a single-layer array of CMUTs may be used as ultrasonic transmitters as well as ultrasonic receivers.
Data received from the fingerprint sensor system 102 may sometimes be referred to herein as “fingerprint sensor data,” “fingerprint image data,” etc., although the data will generally be received from the fingerprint sensor system in the form of electrical signals. Accordingly, without additional processing such image data would not necessarily be perceivable by a human being as an image.
The touch sensor system 103 may be, or may include, a capacitive touch sensor system such as a surface capacitive touch sensor system or a projected capacitive touch sensor system. The touch sensor system 103 may be, or may include, other types of touch sensor systems such as a resistive touch sensor system, a surface acoustic wave touch sensor system, an infrared touch sensor system, or other suitable type of touch sensor system. Some of the aforementioned touch sensor systems may be rendered inoperable for touch recognition depending on a user context (e.g., underwater). In some implementations, an area of the touch sensor system 103 may extend over most or all of a display portion of the display system 110.
In some examples, the interface system 104 may include a wireless interface system. In some implementations, the interface system 104 may include a user interface system, one or more network interfaces, one or more interfaces between the control system 106 and the fingerprint sensor system 102, one or more interfaces between the control system 106 and the touch sensor system 103, one or more interfaces between the control system 106 and the memory system 108, one or more interfaces between the control system 106 and the display system 110, one or more interfaces between the control system 106 and the optical sensor system 112, one or more interfaces between the control system 106 and the force sensor system 114, one or more interfaces between the control system 106 and the gesture sensor system 116 and/or one or more interfaces between the control system 106 and one or more external device interfaces (e.g., ports or applications processors).
The interface system 104 may be configured to provide communication (which may include wired or wireless communication, electrical communication, radio communication, etc.) between components of the apparatus 101. In some such examples, the interface system 104 may be configured to provide communication between the control system 106 and the fingerprint sensor system 102. According to some such examples, the interface system 104 may couple at least a portion of the control system 106 to the fingerprint sensor system 102 and the interface system 104 may couple at least a portion of the control system 106 to the touch sensor system 103, e.g., via electrically conducting material (e.g., via conductive metal wires or traces). According to some examples, the interface system 104 may be configured to provide communication between the apparatus 101 and other devices and/or human beings. In some such examples, the interface system 104 may include one or more user interfaces. The interface system 104 may, in some examples, include one or more network interfaces and/or one or more external device interfaces (such as one or more universal serial bus (USB) interfaces or a serial peripheral interface (SPI)).
The control system 106 may include one or more general purpose single- or multi-chip processors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or other programmable logic devices, discrete gates or transistor logic, discrete hardware components, or combinations thereof. According to some examples, the control system 106 also may include one or more memory devices, such as one or more random access memory (RAM) devices, read-only memory (ROM) devices, etc. In this example, the control system 106 is configured for communication with, and for controlling, the display system 110. In implementations wherein the apparatus 101 includes a fingerprint sensor system 102, the control system 106 is configured for communication with, and for controlling, the fingerprint sensor system 102. In implementations wherein the apparatus 101 includes a touch sensor system 103, the control system 106 is configured for communication with, and for controlling, the touch sensor system 103. In implementations wherein the apparatus 101 includes a memory system 108 that is separate from the control system 106, the control system 106 also may be configured for communication with the memory system 108. In implementations wherein the apparatus 101 includes an optical sensor system 112, the control system 106 is configured for communication with, and for controlling, the optical sensor system 112. In implementations wherein the apparatus 101 includes a force sensor system 114, the control system 106 is configured for communication with, and for controlling, the force sensor system 114. According to some examples, the control system 106 may include one or more dedicated components that are configured for controlling the fingerprint sensor system 102, the touch sensor system 103, the memory system 108, the display system 110, the optical sensor system 112 and/or the force sensor system 114.
Some examples of the apparatus 101 may include dedicated components that are configured for controlling at least a portion of the fingerprint sensor system 102 (and/or for processing fingerprint image data received from the fingerprint sensor system 102). Although the control system 106 and the fingerprint sensor system 102 are shown as separate components in
In some examples, the memory system 108 may include one or more memory devices, such as one or more RAM devices, ROM devices, etc. In some implementations, the memory system 108 may include one or more computer-readable media and/or storage media. Computer-readable media include both computer storage media and communication media including any medium that may be enabled to transfer a computer program from one place to another. Storage media may be any available media that may be accessed by a computer. In some examples, the memory system 108 may include one or more non-transitory media. By way of example, and not limitation, non-transitory media may include RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disc ROM (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer.
In some examples, the apparatus 101 includes a display system 110, which may include one or more displays. In some examples, the display system 110 may be, or may include, a light-emitting diode (LED) display, such as an organic light-emitting diode (OLED) display. In some such examples, the display system 110 may include layers, which may be referred to collectively as a “display stack.”
In some implementations, the apparatus 101 may include an optical sensor system 112. The optical sensor system 112 may include one or more arrays of optical sensor pixels. The one or more arrays of optical sensor pixels may include one or more arrays of active pixel sensors, which may include complementary metal oxide semiconductor (CMOS) sensors, charge coupled device (CCD) image sensors, charge injection device (CID) sensors, or any other sensors capable of detecting changes in light. The optical sensor system 112 may be configured to capture an image using one or more optical sensors (e.g., camera).
In some implementations, the apparatus 101 may include a force sensor system 114. The force sensor system 114 may include one or more force sensors. In some cases, the one or more force sensors may be a plurality of force sensors positioned at different locations of the apparatus 101 under a display of the display system 110. The force sensors may be piezo-resistive sensors, capacitive sensors, thin film sensors (e.g., polymer-based thin film sensors), or other types of suitable force sensors. If the force sensor is a piezo-resistive sensor, the piezo-resistive sensor may include silicon, metal, polysilicon, and/or glass. In some implementations, the one or more force sensors may be mechanically coupled with the fingerprint sensor system 102. In some implementations, the one or more force sensors may be separate from the fingerprint sensor system 102. The one or more force sensors of the force sensor system 114 may be coupled to a portion of the control system 106.
In some implementations, the apparatus 101 may include a gesture sensor system 116. The gesture sensor system 116 may be, or may include, an ultrasonic gesture sensor system, an optical gesture sensor system or any other suitable type of gesture sensor system.
The apparatus 101 may be used in a variety of different contexts, some examples of which are disclosed herein. For example, in some implementations a mobile device may include at least a portion of the apparatus 101. In some implementations, a wearable device may include at least a portion of the apparatus 101. The wearable device may, for example, be a bracelet, an armband, a wristband, a ring, a headband or a patch. In some implementations, the control system 106 may reside in more than one device. For example, a portion of the control system 106 may reside in a wearable device and another portion of the control system 106 may reside in another device, such as a mobile device (e.g., a smartphone). The interface system 104 also may, in some such examples, reside in more than one device.
At block 210 of the process 200, a control system of an apparatus may detect a change in pressure. The apparatus may be, or may include, a touch screen device such as a smartphone. The apparatus may be equipped with one or more pressure sensors and/or one or more force sensors, which may detect a change in pressure exerted on the apparatus. Detection of a change in pressure may be accomplished using only one force sensor or only one pressure sensor, since the pressure and/or force sensors need not estimate position for touch sensing at this stage. When the pressure or force applied to the apparatus exceeds a threshold amount, the control system may initiate a determination of whether to enter a non-capacitive touch mode. The change in pressure may be induced by a finger tap on the screen, by a water droplet, by an object pressed against the screen, or by any other external or environmental force. The change in pressure may be indicative of a user context that would render touch sensing using a capacitive touch sensor system inoperable.
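The threshold check of block 210 may be sketched as follows. This is a minimal illustration only; the function name, the constant, and the units are assumptions introduced for the example and are not part of this disclosure.

```python
# Illustrative sketch of the block-210 trigger. PRESSURE_THRESHOLD and
# should_evaluate_touch_mode are hypothetical names; a real device would
# use a calibrated, device-specific threshold.

PRESSURE_THRESHOLD = 0.5  # arbitrary units for illustration


def should_evaluate_touch_mode(baseline_pressure, current_pressure):
    """Return True when the change in pressure exceeds the threshold,
    prompting the control system to decide whether to enter a
    non-capacitive touch mode (blocks 220/240) or remain in the
    capacitive touch mode (block 230)."""
    return abs(current_pressure - baseline_pressure) > PRESSURE_THRESHOLD
```

Because only the magnitude of the change matters at this stage, a single sensor reading suffices; no position estimate is needed.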
In some implementations, the control system may be configured to detect a change in an environmental condition such as humidity, temperature, or ambient light. Detecting such a change in an environmental condition may be performed instead of, or in addition to, detecting a change in pressure. For instance, the control system may detect a change in humidity that may be indicative of the apparatus being in wet conditions (e.g., submerged in water). The control system may detect a change in temperature that may be indicative of the apparatus being in cold weather, where the user may be more likely to wear gloves. The control system may detect a change in ambient light that may be indicative of an object being in proximity to the apparatus or of the apparatus being underwater.
With a detected change in pressure or other specified environmental condition at block 210, the process 200 may proceed to block 220. Without a detected change in pressure or other specified environmental condition, the process 200 may proceed to block 230. At block 230, the apparatus remains in a capacitive touch mode for touch sensing.
At block 220 of the process 200, the control system of the apparatus may identify a source of the change in pressure or environmental condition. Identification of the source of the change may occur by identifying an object in proximity to the display of the apparatus. Identification of the object may be triggered by the change in pressure or specified environmental condition at block 210. In some implementations, identification of the object may proceed by scanning one or more regions of the display using a fingerprint sensor system or optical sensor system. The fingerprint sensor system or the optical sensor system may be configured to scan localized or partial regions of the display to obtain data (e.g., image data) regarding the object in proximity to the display. The data may be used to classify the type of object in proximity to the display of the apparatus. For example, the fingerprint sensor system may be an ultrasonic fingerprint sensor system configured to generate ultrasonic waves. Reflections of the ultrasonic waves may provide unique signal characteristics that can be used to identify the type of object. Different materials reflect acoustic waves differently, so the reflections carry material-dependent acoustic properties. These acoustic properties, such as the frequencies of the reflected waves, may be analyzed to identify the object in proximity to the display of the apparatus. Using the acoustic properties, the control system may identify the type of object that is in proximity to the display, such as whether the object is solid or liquid, or the type of material that the object is made of. Accordingly, the control system may detect whether the apparatus is underwater, whether the user is wearing gloves, or whether the user is utilizing a prosthetic or robotic hand.
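The classification step of block 220 may be sketched as a simple decision over reflection features. The feature names and thresholds below are invented for illustration; a real system would classify learned or calibrated acoustic signatures rather than fixed cutoffs.

```python
# Illustrative object classifier for block 220. The features
# (reflection_amplitude, dominant_frequency_hz) and all thresholds are
# hypothetical assumptions, not values from this disclosure.

def classify_object(reflection_amplitude, dominant_frequency_hz):
    """Map acoustic reflection features to a coarse object type.

    Weak reflections are treated here as indicating liquid (e.g., the
    device underwater); a low dominant frequency as a soft insulating
    material (e.g., a glove); anything else as a bare finger."""
    if reflection_amplitude < 0.2:
        return "liquid"
    if dominant_frequency_hz < 1e6:
        return "glove"
    return "finger"
```

A "liquid" or "glove" classification would correspond to a user context that renders capacitive touch sensing inoperable, steering the process toward the non-capacitive touch mode at block 240.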
In some implementations, the change in pressure or specified environmental condition detected at block 210 may alternatively trigger presentation of a pop-up, window, or other interface by which the user can select the user context that corresponds to a non-capacitive touch mode or directly select the non-capacitive touch mode. For instance, the user may determine that the apparatus is underwater, that the user is wearing gloves, that the user is utilizing a prosthetic or robotic hand, or that the user context otherwise renders capacitive touch sensing inoperable. As such, a detected change in pressure or specified environmental condition may trigger an opportunity for the user to initiate having the apparatus enter a non-capacitive touch mode at block 240 or remain in a capacitive touch mode at block 230.
After determining that the apparatus is compatible with touch sensing using a capacitive touch sensor system at block 210 or block 220, the control system may enable touch sensing using the capacitive touch mode at block 230. In the capacitive touch mode, capacitive touch sensors may receive user input that provides a touch location corresponding to an x,y coordinate of a touch sensor coordinate system. Using the coordinates, touch drivers of the control system act on the given coordinates at block 250. However, if the apparatus is determined to be incompatible with touch sensing using a capacitive touch sensor system at block 210 or block 220, the control system may enable touch sensing using the non-capacitive touch mode at block 240. In the non-capacitive touch mode, a fingerprint sensor system, an optical sensor system, a force sensor system, or other sensor system may be employed to receive user input to provide a touch location. In some examples, the touch location may correspond to an active tile in an array of tiles in a designated portion of a graphical user interface. Using the active tile, touch drivers coupled to the control system act on the active tile at block 250.
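The overall decision flow of blocks 210 through 240 may be condensed into a single sketch. The function name and boolean inputs are assumptions for illustration; they stand in for the pressure trigger of block 210 and the context identification of block 220.

```python
# Illustrative condensation of the mode-selection flow. The inputs are
# hypothetical: pressure_change_detected stands for the block-210
# trigger, context_incompatible for the block-220 determination.

def select_touch_mode(pressure_change_detected, context_incompatible):
    """Return which touch mode the control system enables."""
    if not pressure_change_detected:
        return "capacitive"       # no trigger: remain at block 230
    if context_incompatible:
        return "non-capacitive"   # e.g., underwater or gloved: block 240
    return "capacitive"           # trigger was benign: block 230
```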
At block 310 of the process 300, a user context of a touch screen device is determined, by a control system, to be incompatible with touch sensing of a touch sensor system. In some implementations, the touch sensor system is a capacitive touch sensor system. Depending on the user context that the touch screen device is operating in, the touch sensor system may be rendered inoperable or substantially inoperable.
As used herein, “user context” generally describes the environment in which the user is located, the current activity of the user, and/or the circumstances in which the user is operating. The terms “context,” “user context,” and “user environment” are used interchangeably in this disclosure. To list some non-limiting examples, user context may refer to the physical environment in which the user is located, the physical activity of the user, the attire of the user, the transportation mode of the user, and/or the physiological condition of the user.
In some implementations, the determination can be made by a user-initiated action. The control system may receive a user input indicating that the user context is incompatible with touch sensing of the touch sensor system, or at least indicating that the user would like to enter a “special” or “non-capacitive” touch mode.
In some implementations, the determination can be made in an automated manner by the control system. The control system may determine a change in an environmental condition and initiate a scan using a fingerprint sensor system or optical sensor system to obtain image data regarding an object in proximity to the touch screen device, where the image data can be used to identify the type of object. A pressure sensor or force sensor may detect a change in pressure when a force applied to the touch screen device exceeds a threshold amount. Alternatively, another environmental sensor such as a temperature sensor, humidity sensor, or optical sensor may detect a change in a specified environmental condition. The change in pressure or specified environmental condition may be indicative of a change in user context, and may trigger the control system of the touch screen device to identify the source of the change. In some instances, identification of the source may occur by performing a scan (e.g., a low-resolution scan) of certain activated regions of the fingerprint sensor system or optical sensor system to obtain image data regarding an object in proximity to the touch screen device. The image data can be processed to determine if the type of object in proximity to the touch screen device renders the touch sensor system inoperable or substantially inoperable. Examples of user contexts that would render the touch sensor system inoperable or substantially inoperable include but are not limited to: (i) the touch screen device being covered by or submerged in water, (ii) the user wearing gloves, and (iii) the user utilizing a prosthetic or robotic hand. As such, the image data can be processed to determine if the phase of the object is solid or liquid, if a material of the object is electrically insulating, if the object is water, etc.
By way of illustration, ultrasonic images may be captured using an ultrasonic fingerprint sensor system. Properties of the ultrasonic images may be analyzed to ascertain information regarding the user context, including information regarding a material of an object in proximity to the touch screen device, a relative size and shape of the object, and a relative proximity of the object.
Returning to
In some implementations, the graphical user interface includes the first portion and a second portion, where the second portion may be a viewable region for displaying an image or video. The graphical user interface may be divided between a non-touch-sensitive region (the second portion) and a touch-sensitive region (the first portion). In some implementations, the first portion may occupy half, more than half, less than half, or some other fraction of the display area while the second portion may occupy a remainder of the display area.
In some implementations, the first portion occupies a region of the display area that corresponds to an active region of the fingerprint sensor system. Thus, the fingerprint sensor system may be coextensive with the first portion. The active region of the fingerprint sensor system may be an area in which an array of fingerprint sensor pixels resides. By way of an example, 4 mm×9 mm or 8 mm×8 mm fingerprint sensors may represent the active region of the fingerprint sensor system in some touch screen devices, which may represent a small fraction of the display area. Touch sensing using the fingerprint sensor system may thus be limited to a small area of the display area. However, as the active region of the fingerprint sensor system increases, the touch-sensitive region in the non-capacitive touch mode increases.
Depending on the application, the graphical user interface of the application can be repurposed or reformatted in its presentation to accommodate touch-sensitive controls within the active region of the fingerprint sensor system. In some cases, the graphical user interface can present the application and its controls in a window or frame that is coextensive with the active region of the fingerprint sensor system. In some other cases, the graphical user interface can present only the touch-sensitive controls of the application in a window or frame that is coextensive with the active region of the fingerprint sensor system.
In
Returning to
Regardless of whether the designated portion occupies an entirety or only a fraction of the graphical user interface, it is possible that touch sensing using the fingerprint sensor system, the optical sensor system, and/or the force sensor system may not be as accurate as touch sensing using a capacitive touch sensor system. For example, touch resolution for pinpointing touch location using a fingerprint sensor system may be less accurate than touch resolution for pinpointing touch location using a capacitive touch sensor system. In some implementations, the graphical user interface may present larger user interface controls, such as larger buttons and icons, when operating in the non-capacitive touch mode. This improves usability and user satisfaction when registering user inputs with low touch resolution. In addition or in the alternative, the graphical user interface may present a grid or array of M×N tiles for receiving a user input when operating in the non-capacitive touch mode, where each tile defines a sufficiently large space for receiving a touch input with low touch resolution.
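Mapping a coarse touch estimate onto an M×N tile grid may be sketched as follows. The function and parameter names are assumptions introduced for the example; the disclosure itself does not specify any particular mapping.

```python
# Illustrative mapping from a coarse touch location to a tile in an
# M x N grid. All names are hypothetical; width/height describe the
# touch-sensitive first portion, m rows by n columns of tiles.

def tile_for_touch(x, y, width, height, m, n):
    """Return the (row, col) tile containing the point (x, y).

    Because each tile is large relative to the touch-resolution error,
    a low-resolution location estimate still lands in the right tile."""
    row = min(int(y / height * m), m - 1)
    col = min(int(x / width * n), n - 1)
    return row, col
```

The `min(...)` clamps keep a touch exactly on the far edge of the portion inside the last tile rather than producing an out-of-range index.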
Referring to
In contrast, graphical user interfaces 630b, 630c, 630d, 630e presented on the display system 620 of
The plurality of user interface controls 640b, 640c, 640d, 640e may be arranged within a grid or an array of tiles 650b, 650c, 650d, 650e, respectively. The array of tiles 650b, 650c, 650d, 650e may occupy an area that is coextensive with the active area of the fingerprint sensor system 610. As discussed below, when touch or force is applied on the display system 620, fingerprint sensor pixels corresponding to one or more tiles 650b may be scanned to identify a user's finger and determine touch location. In these instances, the graphical user interface 630b is divided into the array of tiles 650b first, and then a scan is performed to determine touch location. Alternatively, when touch or force is applied on the display system 620, fingerprint sensor pixels are scanned or at least partially scanned to identify a user's finger, which is then correlated to a particular tile in the array of tiles 650b to determine touch location. In such instances, the scan is performed first, and then the graphical user interface 630b is divided into the array of tiles 650b to determine touch location.
As illustrated in
Returning to
In some implementations, the location of the touch input can be obtained using the fingerprint sensor system. The fingerprint sensor system may perform a scan of one or more regions of the first portion to identify an object (e.g., user's finger). The scanned object can be correlated with a coordinate in the first portion or with an active tile in an array of tiles, thereby obtaining the location of the touch input.
The fingerprint sensor system may be an ultrasonic fingerprint sensor system. The ultrasonic fingerprint sensor system may include an ultrasonic transmitter or transceiver configured to generate ultrasonic waves. The ultrasonic fingerprint sensor system may include an ultrasonic receiver or transceiver configured to receive reflections of the ultrasonic waves, where the ultrasonic receiver may include a piezoelectric receiver array. In some cases, the piezoelectric receiver array may be divided into segments so that some or all segments are scanned for ultrasonic imaging. In some implementations, the control system may scan all areas of the piezoelectric receiver array. In some implementations, the control system may select a scanning area of the piezoelectric receiver array. For instance, different areas may be scanned until a user's finger is identified in one of the scanned areas.
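The segment-by-segment scanning strategy described above may be sketched as a simple search loop. The function names and the shape of the inputs are assumptions for illustration; `looks_like_finger` stands in for whatever fingerprint-detection test the control system applies to each segment's image data.

```python
# Illustrative segment scan for an ultrasonic fingerprint sensor system.
# segments is a hypothetical sequence of per-segment image data;
# looks_like_finger is a caller-supplied predicate standing in for the
# control system's fingerprint-detection test.

def find_finger_segment(segments, looks_like_finger):
    """Scan receiver-array segments one at a time and return the index
    of the first segment whose image data resembles a finger, or None
    if no segment does."""
    for index, segment_image in enumerate(segments):
        if looks_like_finger(segment_image):
            return index
    return None
```

Scanning segments lazily in this way lets the control system stop as soon as a finger is found, rather than always imaging the full piezoelectric receiver array.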
In one example, one or more fingerprint sensor pixels are scanned until a user's finger is identified. The scan may be a full scan or a partial scan of the active region of the fingerprint sensor system. The first portion of the graphical user interface may be divided into an M×N array of tiles. The fingerprint sensor pixel that scanned the user's finger may be correlated to an active tile in the M×N array of tiles. The location of the active tile provides the location of the touch input.
In another example, the first portion of the graphical user interface may be divided into an M×N array of tiles. The fingerprint sensor system performs a scan of one or more tiles until a user's finger is identified. The user's finger is correlated to an active tile in the M×N array of tiles, and the location of the active tile provides the location of the touch input.
In some implementations, the control system scans one or more areas of the fingerprint sensor system to acquire image data. The scan may be initiated by a stylus, pen, or other object in contact with the display of the touch screen device. The acquired image data may correspond to signals produced by a piezoelectric receiver array in response to an ultrasonic signal and/or a mechanical deformation caused by the object in contact with a surface such as a cover glass of the touch screen device. In some instances, the object is a stylus. The control system may be configured to detect a doublet pattern or ring pattern in the acquired image data. Using the acquired image data, the control system may be configured to detect a position of the object on the surface of the touch screen device to provide a location of the touch input.
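One simple way to turn acquired image data into a contact position, consistent with the description above, is to take the centroid of pixels whose signal exceeds a threshold; for a ring or doublet pattern, this centroid falls near the pattern's center. This sketch is an assumption for illustration, not the detection method of the disclosure.

```python
# Illustrative position estimate from acquired image data. image is a
# hypothetical 2-D list of signal values; threshold separates pattern
# pixels from background.

def contact_position(image, threshold):
    """Return the (row, col) centroid of above-threshold pixels, or
    None if no pixel exceeds the threshold. For a ring or doublet
    pattern, the centroid approximates the pattern center, i.e. the
    stylus contact point."""
    points = [(r, c)
              for r, row in enumerate(image)
              for c, value in enumerate(row)
              if value > threshold]
    if not points:
        return None
    row = sum(p[0] for p in points) / len(points)
    col = sum(p[1] for p in points) / len(points)
    return row, col
```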
In
In
Returning to
In some implementations, the location of the touch input can be obtained using the force sensor system. The force sensor system may include a plurality of force sensors positioned at different locations of the touch screen device. When force is applied at a particular location on the touch screen device, each of the plurality of force sensors may obtain a force measurement. The force measurements at the different locations may be used to calculate the location of maximum force, which corresponds to the location of the object applying the force. In effect, the force measurements from the plurality of force sensors are used to triangulate the location of touch, so the location of the object applying the force can be estimated from the plurality of force measurements at different locations.
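One common way to combine such measurements, offered here only as an illustrative stand-in for the triangulation described, is a force-weighted centroid of the sensor positions: sensors closer to the touch read higher force and pull the estimate toward themselves. The function and parameter names are assumptions.

```python
# Illustrative force-weighted centroid. sensor_positions is a
# hypothetical list of (x, y) sensor locations; forces holds the
# corresponding force measurements.

def estimate_touch_location(sensor_positions, forces):
    """Estimate the touch location as the force-weighted centroid of
    the sensor positions; return None if no force is registered."""
    total = sum(forces)
    if total == 0:
        return None
    x = sum(p[0] * f for p, f in zip(sensor_positions, forces)) / total
    y = sum(p[1] * f for p, f in zip(sensor_positions, forces)) / total
    return x, y
```

With four sensors at the corners of the display and equal readings, the estimate falls at the center; as one sensor's reading grows, the estimate shifts toward that corner.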
Returning to
As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.
The various illustrative logics, logical blocks, modules, circuits and algorithm processes described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally, in terms of functionality, and illustrated in the various illustrative components, blocks, modules, circuits and processes described above. Whether such functionality is implemented in hardware or software depends upon the particular application and design constraints imposed on the overall system.
The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or, any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some implementations, particular processes and methods may be performed by circuitry that is specific to a given function.
In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware, including the structures disclosed in this specification and their structural equivalents, or in any combination thereof. Implementations of the subject matter described in this specification also may be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage media for execution by, or to control the operation of, data processing apparatus.
If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium, such as a non-transitory medium. The processes of a method or algorithm disclosed herein may be implemented in a processor-executable software module which may reside on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that may be enabled to transfer a computer program from one place to another. Storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, non-transitory media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection may be properly termed a computer-readable medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and instructions on a machine-readable medium and computer-readable medium, which may be incorporated into a computer program product.
Various modifications to the implementations described in this disclosure may be readily apparent to those having ordinary skill in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the disclosure is not intended to be limited to the implementations shown herein, but is to be accorded the widest scope consistent with the claims, the principles and the novel features disclosed herein. The word “exemplary” is used exclusively herein, if at all, to mean “serving as an example, instance, or illustration.” Any implementation described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other implementations.
Certain features that are described in this specification in the context of separate implementations also may be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also may be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve desirable results.
It will be understood that unless features in any of the particular described implementations are expressly identified as incompatible with one another or the surrounding context implies that they are mutually exclusive and not readily combinable in a complementary and/or supportive sense, the totality of this disclosure contemplates and envisions that specific features of those complementary implementations may be selectively combined to provide one or more comprehensive, but slightly different, technical solutions. It will therefore be further appreciated that the above description has been given by way of example only and that modifications in detail may be made within the scope of this disclosure.