This disclosure relates to display devices, including but not limited to display devices that incorporate touch screens.
Electromechanical systems (EMS) include devices having electrical and mechanical elements, actuators, transducers, sensors, optical components (e.g., mirrors) and electronics. EMS can be manufactured at a variety of scales including, but not limited to, microscales and nanoscales. For example, microelectromechanical systems (MEMS) devices can include structures having sizes ranging from about a micron to hundreds of microns or more. Nanoelectromechanical systems (NEMS) devices can include structures having sizes smaller than a micron including, for example, sizes smaller than several hundred nanometers. Electromechanical elements may be created using deposition, etching, lithography, and/or other micromachining processes that etch away parts of substrates and/or deposited material layers, or that add layers to form electrical and electromechanical devices.
One type of EMS device is called an interferometric modulator (IMOD). As used herein, the term IMOD or interferometric light modulator refers to a device that selectively absorbs and/or reflects light using the principles of optical interference. In some implementations, an IMOD may include a pair of conductive plates, one or both of which may be transparent and/or reflective, wholly or in part, and capable of relative motion upon application of an appropriate electrical signal. In an implementation, one plate may include a stationary layer deposited on a substrate and the other plate may include a reflective membrane separated from the stationary layer by an air gap. The position of one plate in relation to another can change the optical interference of light incident on the IMOD. IMOD devices have a wide range of applications, and are anticipated to be used in improving existing products and creating new products, especially those with display capabilities.
In the past, users of cellular telephones (also referred to herein as cell phones) generally held the cell phone next to the ear when using the cell phone. However, it is becoming more common for cell phone users to look at video or other content on their cell phone display, with the cell phone held away from the ear, even while having a cell phone conversation. If the user switches between watching the display and holding the cell phone next to the ear, the audio level and/or sound directivity from the cell phone's speaker may require adjustment. In some situations, a user may benefit from being able to invoke a cell phone operation by some means other than pressing a button or performing a gesture with one or more fingers on a touch screen.
The systems, methods and devices of the disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.
One innovative aspect of the subject matter described in this disclosure can be implemented in an apparatus which includes a mobile device, such as a cell phone, having a sensor array. The sensor array may include a touch sensor array. The mobile device may be configured to determine whether sensor signals from the sensor array indicate an ear gesture and/or the presence of an ear. One or more device operations may be invoked according to the determination.
Another innovative aspect of the subject matter described in this disclosure can be implemented in a method that involves scanning a sensor array, detecting array capacitances of the sensor array, analyzing the array capacitances, determining whether the array capacitances indicate the presence of an ear and invoking a device operation if the presence of the ear is indicated. The method may involve receiving a sensor signal from a sensor device and determining whether the sensor signal indicates the presence of the ear. In some implementations, the sensor array may be a projected capacitive touch sensor array.
The invoked device operation may involve unlocking a mobile device. The device operation may be a cell phone operation. For example, the cell phone operation may involve controlling at least one speaker of the cell phone, controlling voice recognition functionality of the cell phone and/or controlling other functionality of the cell phone.
Another innovative aspect of the subject matter described in this disclosure can be implemented in a mobile device that includes a projected capacitive touch sensor array and a logic system. The logic system may be configured for scanning the sensor array, detecting array capacitances of the sensor array, analyzing the array capacitances, determining whether the array capacitances indicate the presence of an ear and invoking a device operation if the presence of the ear is indicated.
According to some implementations, the mobile device may include a cell phone. The device operation may be a cell phone operation. The cell phone operation may involve controlling at least one speaker of the cell phone. The cell phone operation may involve unlocking the cell phone. The cell phone operation may involve controlling voice recognition functionality of the cell phone.
Another innovative aspect of the subject matter described in this disclosure can be implemented in a method of gesture detection that involves scanning a sensor array of a mobile device, detecting sensor signals from the sensor array, analyzing the sensor signals, determining whether the sensor signals indicate an ear gesture and invoking a device operation based on the ear gesture indication. The ear gesture may be an ear touch, an ear press, an ear pressure, an ear swipe, an ear rotation, an ear position, an ear distance and/or an ear motion. In some implementations, the sensor array may be a projected capacitive touch sensor array. The sensor signals may be capacitance signals.
The device operation may involve switching to a speaker phone mode, switching to a normal audio mode, adjusting a volume of an audio output device, adjusting a directionality of an audio output device, adjusting a directionality of a microphone, recognizing an ear, detecting a left ear, detecting a right ear, recognizing a particular ear, using an ear recognition as a PIN, accessing a cell phone, unlocking a cell phone, receiving a phone call, initiating a phone call, terminating a phone call, turning on a voice-recognition feature, turning off a voice-recognition feature, recognizing a characteristic pattern of an ear and a portion of a face, learning an ear gesture and/or tracking an ear position. The device operation may be a cell phone operation. The cell phone operation may involve modifying a volume level of at least one speaker of the cell phone, changing a voice recognition functionality of the cell phone, etc.
The method may involve receiving a supplemental sensor signal from a supplemental sensor device of the mobile device and validating the presence of the ear with the supplemental sensor signal. The supplemental sensor signal may be a signal from a pressure sensor, an infrared (IR) sensor, an accelerometer, a gyroscope, an orientation sensor and/or a camera of the mobile device.
Another innovative aspect of the subject matter described in this disclosure can be implemented in a non-transitory medium having software stored therein. The software may include instructions to control a mobile device to scan a projected capacitive touch sensor array of the mobile device, detect capacitance signals from the sensor array, analyze the capacitance signals, determine whether the capacitance signals indicate an ear gesture and invoke a device operation based on the ear gesture indication.
The device operation may be a cell phone operation. The cell phone operation may involve modifying a volume level of at least one speaker of the cell phone. The cell phone operation may involve unlocking the cell phone. The cell phone operation may involve changing a voice recognition functionality of the cell phone.
Details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Although the examples provided in this summary are primarily described in terms of MEMS-based displays, the concepts provided herein may apply to other types of displays, such as liquid crystal displays (LCD), organic light-emitting diode (OLED) displays, electrophoretic displays, and field emission displays. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Note that the relative dimensions of the following figures may not be drawn to scale.
Like reference numbers and designations in the various drawings indicate like elements.
The following description is directed to certain implementations for the purposes of describing the innovative aspects of this disclosure. However, a person having ordinary skill in the art will readily recognize that the teachings herein can be applied in a multitude of different ways. The described implementations may be implemented in any device or system that can be configured to display an image, whether in motion (e.g., video) or stationary (e.g., still image), and whether textual, graphical or pictorial. More particularly, it is contemplated that the described implementations may be included in or associated with a variety of electronic devices such as, but not limited to: mobile telephones, multimedia Internet enabled cellular telephones, mobile television receivers, wireless devices, smartphones, Bluetooth® devices, personal data assistants (PDAs), wireless electronic mail receivers, hand-held or portable computers, netbooks, notebooks, smartbooks, tablets, printers, copiers, scanners, facsimile devices, GPS receivers/navigators, cameras, MP3 players, camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, electronic reading devices (i.e., e-readers), computer monitors, auto displays (including odometer and speedometer displays, etc.), cockpit controls and/or displays, camera view displays (such as the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, microwaves, refrigerators, stereo systems, cassette recorders or players, DVD players, CD players, VCRs, radios, portable memory chips, washers, dryers, washer/dryers, parking meters, packaging (such as in electromechanical systems (EMS), microelectromechanical systems (MEMS) and non-MEMS applications), aesthetic structures (e.g., display of images on a piece of jewelry) and a variety of EMS devices. The teachings herein also can be used in non-display applications such as, but not limited to, electronic switching devices, radio frequency filters, sensors, accelerometers, gyroscopes, motion-sensing devices, magnetometers, inertial components for consumer electronics, parts of consumer electronics products, varactors, liquid crystal devices, electrophoretic devices, drive schemes, manufacturing processes and electronic test equipment. Thus, the teachings are not intended to be limited to the implementations depicted solely in the Figures, but instead have wide applicability as will be readily apparent to one having ordinary skill in the art.
According to some implementations provided herein, a mobile device, such as a cell phone, may include one or more sensors. In some implementations, the mobile device may include a sensor array. The sensor array may include a touch sensor array, such as a projected capacitive touch (PCT) sensor array. The mobile device may be configured to determine whether one or more sensor signals from the sensor array indicate an ear gesture and/or the presence of an ear. One or more device operations may be invoked according to the determination.
The device operation may involve controlling at least one speaker of a cell phone. The device operation may involve switching to a speaker phone mode, switching to a normal audio mode, adjusting a volume of an audio output device, adjusting a directionality of an audio output device, adjusting a directionality of a microphone, etc. For example, if the presence of an ear is detected, the volume of a cell phone speaker may be reduced. The device operation may involve tracking an ear position and/or orientation. Microphone, speaker and/or other device functionality may be adjusted according to the ear position and/or orientation.
Alternatively, or additionally, the device operation may involve recognizing an ear, recognizing a characteristic pattern of an ear and a portion of a face, detecting a left ear, detecting a right ear, recognizing a particular ear, etc. In some such implementations, ear recognition may be used as a type of user authentication. For example, an ear recognition process may be used in lieu of (or in addition to) an authorization code, such as a personal identification number (PIN). In some implementations, the ear recognition process may invoke device operations for accessing a mobile device, unlocking a mobile device, etc.
The device operation may involve learning processes. For example, the device operation may involve learning a characteristic pattern of an ear and/or a portion of a face, storing ear pattern data and/or face pattern data, etc. Some implementations may involve associating an ear gesture with a device operation. Learning processes may include receiving and storing user input regarding device functionality. For example, some such processes may involve receiving user input regarding a first desired speaker volume level to be applied when a cell phone is next to the user's ear and/or regarding a second desired speaker volume level to be applied when the cell phone is not next to the user's ear.
In some implementations, a device may control voice recognition functionality according to whether an ear and/or an ear gesture is detected. For example, a voice-recognition feature may be turned on or turned off if an ear is detected.
Particular implementations of the subject matter described in this disclosure can be implemented to realize one or more of the following potential advantages. If a user switches between watching a cell phone display and holding the cell phone next to the ear, the audio level and/or sound directivity from the cell phone's speaker may be automatically adjusted upon detecting the presence or absence of an ear. Such functionality can eliminate the need for a user to manually change the audio settings. Providing various types of cell phone functionality according to detected ear gestures may allow a user to unlock a cell phone, receive a phone call, initiate a phone call, terminate a phone call, etc., without requiring the use of two hands or one or more fingers touching the surface of a touch screen.
Implementations that enable ear and/or face recognition to be used as a type of user authentication can provide varying levels of device security. In some implementations, an ear recognition process alone may invoke device operations for accessing a mobile device, unlocking a mobile device, etc. Using an ear recognition process in addition to an authorization code can provide a higher level of security. In some implementations, ear or ear gesture recognition capability may allow a user to interact with a cell phone without excessive glancing at the phone or the need to touch the face of the display with a finger, which can add convenience and safety while in a moving vehicle, for example.
The IMOD display device can include a row/column array of IMODs. Each IMOD can include a pair of reflective layers, i.e., a movable reflective layer and a fixed partially reflective layer, positioned at a variable and controllable distance from each other to form an air gap (also referred to as an optical gap or cavity). The movable reflective layer may be moved between at least two positions. In a first position, i.e., a relaxed position, the movable reflective layer can be positioned at a relatively large distance from the fixed partially reflective layer. In a second position, i.e., an actuated position, the movable reflective layer can be positioned more closely to the partially reflective layer. Incident light that reflects from the two layers can interfere constructively or destructively depending on the position of the movable reflective layer, producing either an overall reflective or non-reflective state for each pixel. In some implementations, the IMOD may be in a reflective state when unactuated, reflecting light within the visible spectrum, and may be in a dark state when actuated, reflecting light outside of the visible range (e.g., infrared light). In some other implementations, however, an IMOD may be in a dark state when unactuated, and in a reflective state when actuated. In some implementations, the introduction of an applied voltage can drive the pixels to change states. In some other implementations, an applied charge can drive the pixels to change states.
The depicted portion of the pixel array in
In
The optical stack 16 can include a single layer or several layers. The layer(s) can include one or more of an electrode layer, a partially reflective and partially transmissive layer and a transparent dielectric layer. In some implementations, the optical stack 16 is electrically conductive, partially transparent and partially reflective, and may be fabricated, for example, by depositing one or more of the above layers onto a transparent substrate 20. The electrode layer can be formed from a variety of materials, such as various metals, for example indium tin oxide (ITO). The partially reflective layer can be formed from a variety of materials that are partially reflective, such as various metals, e.g., chromium (Cr), semiconductors, and dielectrics. The partially reflective layer can be formed of one or more layers of materials, and each of the layers can be formed of a single material or a combination of materials. In some implementations, the optical stack 16 can include a single semi-transparent thickness of metal or semiconductor which serves as both an optical absorber and conductor, while different, more conductive layers or portions (e.g., of the optical stack 16 or of other structures of the IMOD) can serve to bus signals between IMOD pixels. The optical stack 16 also can include one or more insulating or dielectric layers covering one or more conductive layers or a conductive/absorptive layer.
In some implementations, the layer(s) of the optical stack 16 can be patterned into parallel strips, and may form row electrodes in a display device as described further below. As will be understood by one having skill in the art, the term “patterned” is used herein to refer to masking as well as etching processes. In some implementations, a highly conductive and reflective material, such as aluminum (Al), may be used for the movable reflective layer 14, and these strips may form column electrodes in a display device. The movable reflective layer 14 may be formed as a series of parallel strips of a deposited metal layer or layers (orthogonal to the row electrodes of the optical stack 16) to form columns deposited on top of posts 18 and an intervening sacrificial material deposited between the posts 18. When the sacrificial material is etched away, a defined gap 19, or optical cavity, can be formed between the movable reflective layer 14 and the optical stack 16. In some implementations, the spacing between posts 18 may be approximately 1–1000 μm, while the gap 19 may be less than 10,000 Angstroms (Å).
In some implementations, each pixel of the IMOD, whether in the actuated or relaxed state, is essentially a capacitor formed by the fixed and moving reflective layers. When no voltage is applied, the movable reflective layer 14 remains in a mechanically relaxed state, as illustrated by the IMOD 12 on the left in
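The hysteresis behavior exploited by the drive schemes described below can be made quantitative with the standard parallel-plate electrostatic actuator model. The following expressions are a textbook approximation offered only for illustration; they are not formulas given in this disclosure. With plate area A, instantaneous gap g, unactuated gap g_0, effective spring constant k of the movable layer, and permittivity of free space ε_0, the electrostatic force on the movable layer and the pull-in voltage at which it snaps toward the fixed layer are approximately

```latex
F_{\mathrm{elec}} = \frac{\epsilon_0 A V^2}{2 g^2},
\qquad
V_{\mathrm{PI}} = \sqrt{\frac{8\,k\,g_0^{3}}{27\,\epsilon_0\,A}}.
```

Because the gap is much smaller in the actuated position, a voltage well below V_PI suffices to hold the movable layer down once it has snapped in, which is the origin of the stability (hysteresis) window that the hold and address voltages discussed below are designed around.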
The processor 21 can be configured to communicate with an array driver 22. The array driver 22 can include a row driver circuit 24 and a column driver circuit 26 that provide signals to, e.g., a display array or panel 30. The cross section of the IMOD display device illustrated in
In some implementations, a frame of an image may be created by applying data signals in the form of “segment” voltages along the set of column electrodes, in accordance with the desired change (if any) to the state of the pixels in a given row. Each row of the array can be addressed in turn, such that the frame is written one row at a time. To write the desired data to the pixels in a first row, segment voltages corresponding to the desired state of the pixels in the first row can be applied on the column electrodes, and a first row pulse in the form of a specific “common” voltage or signal can be applied to the first row electrode. The set of segment voltages can then be changed to correspond to the desired change (if any) to the state of the pixels in the second row, and a second common voltage can be applied to the second row electrode. In some implementations, the pixels in the first row are unaffected by the change in the segment voltages applied along the column electrodes, and remain in the state they were set to during the first common voltage row pulse. This process may be repeated for the entire series of rows, or alternatively, columns, in a sequential fashion to produce the image frame. The frames can be refreshed and/or updated with new image data by continually repeating this process at some desired number of frames per second.
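The line-at-a-time write procedure can be illustrated with the following minimal Python sketch. The voltage levels, the release pass and the function names are hypothetical and chosen only to exhibit the hysteresis-window logic; they are not drive levels from this disclosure.

```python
V_ADDRESS = 10.0   # "common" address pulse (hypothetical level)
V_HOLD = 5.0       # hold voltage, inside the stability window
V_SEG_HI = 2.0     # segment levels chosen so that only pixels in the
V_SEG_LO = 0.0     # addressed row can be pushed past the window
V_ACTUATE = 9.0    # a pixel actuates when its net voltage exceeds this

def write_frame(desired, state):
    """desired/state: 2D lists of booleans (True = actuated)."""
    rows, cols = len(desired), len(desired[0])
    for r in range(rows):
        # Segment voltages encode the desired data for the addressed row.
        segs = [V_SEG_LO if desired[r][c] else V_SEG_HI for c in range(cols)]
        # Release pass: drive the addressed line near 0 V so that every
        # pixel on it relaxes before new data are written.
        for c in range(cols):
            state[r][c] = False
        # Address pulse: only pixels whose segment voltage yields a net
        # voltage beyond the stability window actuate.
        for c in range(cols):
            if abs(V_ADDRESS - segs[c]) > V_ACTUATE:
                state[r][c] = True
        # All other rows sit at V_HOLD, inside the hysteresis window,
        # so their pixels keep their previous states unchanged.
    return state

state = write_frame([[True, False], [False, True]],
                    [[False, False], [False, False]])
print(state)   # [[True, False], [False, True]]
```

One row is written per pass while the hold voltage keeps every other row's pixels inside their stability window, mirroring the row-by-row frame write described above.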
The combination of segment and common signals applied across each pixel (that is, the potential difference across each pixel) determines the resulting state of each pixel.
As illustrated in
When a hold voltage is applied on a common line, such as a high hold voltage VCHOLD
When an addressing, or actuation, voltage is applied on a common line, such as a high addressing voltage VCADD
In some implementations, hold voltages, address voltages, and segment voltages may be used which always produce the same polarity potential difference across the modulators. In some other implementations, signals can be used which alternate the polarity of the potential difference of the modulators. Alternation of the polarity across the modulators (that is, alternation of the polarity of write procedures) may reduce or inhibit charge accumulation which could occur after repeated write operations of a single polarity.
During the first line time 60a, a release voltage 70 is applied on common line 1; the voltage applied on common line 2 begins at a high hold voltage 72 and moves to a release voltage 70; and a low hold voltage 76 is applied along common line 3. Thus, the modulators (common 1, segment 1), (1,2) and (1,3) along common line 1 remain in a relaxed, or unactuated, state for the duration of the first line time 60a, the modulators (2,1), (2,2) and (2,3) along common line 2 will move to a relaxed state, and the modulators (3,1), (3,2) and (3,3) along common line 3 will remain in their previous state. With reference to
During the second line time 60b, the voltage on common line 1 moves to a high hold voltage 72, and all modulators along common line 1 remain in a relaxed state regardless of the segment voltage applied because no addressing, or actuation, voltage was applied on the common line 1. The modulators along common line 2 remain in a relaxed state due to the application of the release voltage 70, and the modulators (3,1), (3,2) and (3,3) along common line 3 will relax when the voltage along common line 3 moves to a release voltage 70.
During the third line time 60c, common line 1 is addressed by applying a high address voltage 74 on common line 1. Because a low segment voltage 64 is applied along segment lines 1 and 2 during the application of this address voltage, the pixel voltage across modulators (1,1) and (1,2) is greater than the high end of the positive stability window (i.e., the voltage differential exceeds a predefined threshold) of the modulators, and the modulators (1,1) and (1,2) are actuated. Conversely, because a high segment voltage 62 is applied along segment line 3, the pixel voltage across modulator (1,3) is less than that of modulators (1,1) and (1,2), and remains within the positive stability window of the modulator; modulator (1,3) thus remains relaxed. Also during line time 60c, the voltage along common line 2 decreases to a low hold voltage 76, and the voltage along common line 3 remains at a release voltage 70, leaving the modulators along common lines 2 and 3 in a relaxed position.
During the fourth line time 60d, the voltage on common line 1 returns to a high hold voltage 72, leaving the modulators along common line 1 in their respective addressed states. The voltage on common line 2 is decreased to a low address voltage 78. Because a high segment voltage 62 is applied along segment line 2, the pixel voltage across modulator (2,2) is below the lower end of the negative stability window of the modulator, causing the modulator (2,2) to actuate. Conversely, because a low segment voltage 64 is applied along segment lines 1 and 3, the modulators (2,1) and (2,3) remain in a relaxed position. The voltage on common line 3 increases to a high hold voltage 72, leaving the modulators along common line 3 in a relaxed state.
Finally, during the fifth line time 60e, the voltage on common line 1 remains at high hold voltage 72, and the voltage on common line 2 remains at a low hold voltage 76, leaving the modulators along common lines 1 and 2 in their respective addressed states. The voltage on common line 3 increases to a high address voltage 74 to address the modulators along common line 3. As a low segment voltage 64 is applied on segment lines 2 and 3, the modulators (3,2) and (3,3) actuate, while the high segment voltage 62 applied along segment line 1 causes modulator (3,1) to remain in a relaxed position. Thus, at the end of the fifth line time 60e, the 3×3 pixel array is in the state shown in
In the timing diagram of
The details of the structure of IMODs that operate in accordance with the principles set forth above may vary widely. For example,
As illustrated in
In implementations such as those shown in
The process 80 continues at block 84 with the formation of a sacrificial layer 25 over the optical stack 16. The sacrificial layer 25 is later removed (e.g., at block 90) to form the cavity 19 and thus the sacrificial layer 25 is not shown in the resulting IMODs 12 illustrated in
The process 80 continues at block 86 with the formation of a support structure, e.g., a post 18, as illustrated in
The process 80 continues at block 88 with the formation of a movable reflective layer or membrane such as the movable reflective layer 14 illustrated in
The process 80 continues at block 90 with the formation of a cavity, e.g., cavity 19 as illustrated in
In this example, the method 900 begins with a process of scanning a sensor array (block 905). In some implementations, block 905 involves scanning a touch sensor array, such as a projected capacitive touch sensor array. Accordingly, in this example block 910 involves detecting array capacitances of a touch sensor array. However, blocks 905 and/or 910 also may involve receiving sensor signals from other types of sensors, such as a pressure sensor, an infrared (IR) sensor, an accelerometer, a gyroscope, an orientation sensor, and/or a camera. In some implementations, the sensor signals from the other types of sensors may be received to augment the signals from the touch sensor array.
The sensor signals may then be analyzed. In the example shown in block 915, array capacitances of the touch sensor array are analyzed. It may then be determined whether the array capacitances indicate the presence of an ear, such as an ear of a user of a mobile device. Blocks 915 and/or 920 may involve a number of sub-processes, such as determining a pattern of array capacitance values and comparing the pattern to ear pattern data and/or face pattern data stored in a memory. The ear pattern data and/or face pattern data may have been previously acquired and stored during a “set-up” or registration process. Some examples are described below with reference to
When it is determined in block 920 that the sensor signals (in this example, the array capacitances) indicate the presence of an ear, one or more device operations may be invoked in block 925. A device operation may involve controlling at least one speaker of a cell phone. A device operation may involve switching to a speaker phone mode, switching to a normal audio mode, adjusting a volume of an audio output device, adjusting a directionality of an audio output device, adjusting a directionality of a microphone, etc. For example, when the presence of an ear is detected, the volume of a cell phone speaker may be reduced. In a second example, when the presence of an ear or of a particular ear is detected, the cell phone may be unlocked or powered up. In a third example, a voice recognition capability may be invoked or negated when the presence of an ear is detected. Other examples are described below.
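A minimal sketch of the flow of blocks 905-925 follows, assuming a Python environment with NumPy. The thresholds, the normalized cross-correlation matching step, and the scan_array and invoke interfaces are illustrative assumptions, not the algorithm of this disclosure.

```python
import numpy as np

TOUCH_THRESHOLD = 0.2   # normalized capacitance delta (assumed)
MATCH_THRESHOLD = 0.8   # similarity required to report an ear (assumed)

def detect_ear(scan_array, ear_template, invoke):
    """scan_array() returns a 2D array of sensel capacitances with the
    same shape as ear_template (previously stored ear pattern data)."""
    caps = scan_array()                    # blocks 905/910: scan, detect
    touched = caps > TOUCH_THRESHOLD       # block 915: contact region
    if not touched.any():
        return False
    # Block 920: compare the contact pattern against the stored ear
    # pattern, here with a simple normalized cross-correlation.
    a = touched.astype(float) - touched.mean()
    b = ear_template - ear_template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    score = float((a * b).sum() / denom) if denom else 0.0
    if score > MATCH_THRESHOLD:
        invoke("reduce_speaker_volume")    # block 925: device operation
        return True
    return False
```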
In block 930, it is determined whether the method 900 will continue. For example, in block 930, a logic system of a mobile device (such as the display devices 40 shown in
The row 1010 includes a rectangle for each of
The touch sensor array 1000 includes a plurality of sensor elements or “sensels” 1005. In
In the example shown by
In
In
In
In alternative implementations, other device operations may be invoked in block 925. In some such implementations, the device operation may involve tracking an ear position and/or orientation. Microphone, speaker and/or other device functionality may be adjusted according to the ear position and/or orientation. For example, referring to
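One possible form of such tracking is sketched below: the centroid of the contact region stands in for the ear position, and a simple, invented policy maps that position to a speaker volume. Both the policy and the set_speaker_volume interface are assumptions for illustration.

```python
import numpy as np

def track_ear_and_adjust(caps, set_speaker_volume, threshold=0.2):
    """caps: 2D array of sensel capacitances; row 0 is taken to be the
    edge of the array nearest the earpiece speaker (an assumption)."""
    touched = caps > threshold
    if not touched.any():
        set_speaker_volume(1.0)            # no ear present: media volume
        return None
    ys, xs = np.nonzero(touched)
    centroid = (ys.mean(), xs.mean())      # ear position on the array
    # Example policy: the closer the contact centroid is to the
    # earpiece, the less volume is needed.
    proximity = 1.0 - centroid[0] / caps.shape[0]
    set_speaker_volume(0.2 + 0.3 * (1.0 - proximity))
    return centroid
```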
In some implementations, the device operations invoked in block 925 may involve voice commands and/or voice recognition functionality. According to some such implementations, a device may control voice recognition functionality according to whether an ear and/or an ear gesture is detected. For example, a voice recognition feature may be turned on or turned off when an ear is detected.
Alternatively, or additionally, the device operation of block 925 may involve recognizing a characteristic pattern of an ear and/or a portion of a face. In some implementations, block 925 may involve detecting a left ear, detecting a right ear, and/or recognizing a particular ear. In some such implementations, ear recognition may be used as a type of user authentication. For example, an ear recognition process may be used in lieu of (or in addition to) an authorization code, such as a personal identification number (PIN). In some implementations, the ear recognition process may invoke device operations for accessing a mobile device, unlocking a mobile device, etc.
In block 1110, the logic system determines whether the sensor data indicated the presence of an ear. If so, ear pattern and/or face pattern data are accessed by the logic system in block 1115. Such data may be stored in a storage medium of the display device or of another device, e.g., by a storage device accessible by the logic system via a network.
Implementations that enable ear and/or face recognition to be used as a type of user authentication can provide varying levels of device security. In some implementations, an ear recognition/authentication process alone may be sufficient to invoke device operations, such as allowing device access or unlocking or powering up a mobile device. Using an ear recognition process in addition to an authorization code can provide a relatively higher level of security. However, requiring the use of an authorization code may be less convenient for users. Accordingly, in some implementations method 1100 includes an optional process of receiving an additional authorization code, such as a PIN, an alphanumeric password or passcode, a voice recognition input, etc. as shown in optional block 1120.
In block 1125, the logic system determines whether the stored ear pattern data match the sensor data received in block 1105. When an authorization code is received in optional block 1120, the logic system also may determine whether the authorization code is correct.
If the authentication process of block 1125 is successful, one or more device operations may be invoked in block 1130. In some implementations, block 1130 may involve allowing access to other functions of a mobile device. A user may, for example, be able to initiate a cell phone call, unlock a device, use a web browser, access an account, etc.
In this example, method 1100 ends (block 1135) after the device operation is invoked. Method 1100 also ends if the authentication process of block 1125 fails, e.g., for a predetermined number of times or the sensor data do not indicate the presence of an ear in block 1110. However, in alternative implementations, method 1100 may continue if, for example, the sensor data do not initially indicate the presence of an ear in block 1110. Sensor data may continue to be received in block 1105 and evaluated in block 1110 for a predetermined time and/or until the occurrence of one or more predetermined conditions.
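The flow of blocks 1105 through 1130 might be sketched as follows. The retry limit, the match threshold and every helper callable are hypothetical placeholders rather than elements of this disclosure.

```python
MAX_ATTEMPTS = 3        # predetermined number of tries (assumed)
MATCH_THRESHOLD = 0.9   # required pattern similarity (assumed)

def authenticate(get_sensor_data, indicates_ear, match_score,
                 stored_patterns, get_pin=None, stored_pin=None):
    """Returns True if the ear (and optional PIN) checks succeed."""
    for _ in range(MAX_ATTEMPTS):
        data = get_sensor_data()                    # block 1105
        if not indicates_ear(data):                 # block 1110
            continue
        # Blocks 1115/1125: compare against stored ear/face patterns.
        best = max(match_score(data, p) for p in stored_patterns)
        if best < MATCH_THRESHOLD:
            continue
        if get_pin is not None and get_pin() != stored_pin:
            continue                                # optional block 1120
        return True   # block 1130: unlock, allow access, etc.
    return False
```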
The ear authentication method described above involves the use of previously-acquired ear pattern and/or face pattern data. Some implementations described herein provide methods for acquiring and storing such data.
In this example, method 1200 begins with optional block 1205, in which a user is prompted to enter a user identification code and/or a password. Such information may, for example, be used to associate a particular user with a set of ear pattern and/or face pattern data. In block 1210, a user is prompted to position an ear for acquiring sensor data. The prompts may, for example, indicate where the user's ear should be positioned. In some implementations, block 1210 may involve a visual prompt that is displayed on a display device.
Alternatively, or additionally, block 1210 may involve audio prompts. Audio prompts may be advantageous if ear pattern data are to be acquired from a sensor or a sensor array that is located near a display. For example, audio prompts may be advantageous if ear pattern data are to be acquired from a touch sensor array, because a user will not generally be able to see the touch sensor array when the user's ear is pressed against the display device. Even if the sensor data will be acquired by another type of sensor, audio prompts may still be advantageous. Due to the small size of many display devices 40, it may be difficult for a user to see prompts displayed on the display array 30 while sensor data are being acquired from the user's ear.
If ear pattern data are to be acquired from a touch sensor array, in some implementations the prompts may indicate how hard the user should press an ear against the touch sensor array. For example, the display device 40 may include one or more pressure or force sensors. When a user is pressing the ear against the touch sensor array, the pressure sensor(s) may indicate corresponding pressure data. A logic system of the display device 40 may be configured to receive the pressure data from the pressure sensor(s), to determine whether the ear is being pressed hard enough against the touch sensor array, too hard, etc. In some implementations, the logic system may be configured to control the speaker 45 to provide corresponding voice prompts to the user.
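Such a pressure-feedback loop could take a form like the following sketch, in which the normalized pressure bounds and the read_pressure and say interfaces are invented for illustration.

```python
MIN_PRESSURE, MAX_PRESSURE = 0.3, 0.8   # assumed normalized bounds

def ear_pressure_ok(read_pressure, say):
    """Voice-prompt the user until the ear presses with usable force."""
    pressure = read_pressure()
    if pressure < MIN_PRESSURE:
        say("Please press your ear a little harder against the screen.")
        return False
    if pressure > MAX_PRESSURE:
        say("Please press a little more gently.")
        return False
    return True   # acceptable pressure: sensor data may be acquired
```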
When the ear is positioned properly, the logic system may control the sensor(s) to acquire the sensor data (block 1215). In some implementations, the raw sensor data may be stored. In alternative implementations, as here, a logic system will receive the sensor data (block 1220) and determine ear pattern data and/or face pattern data from the sensor data (block 1225). In some implementations, the logic system may determine the ear pattern data and/or face pattern data according to an algorithm, such as a contouring or pattern recognition algorithm. In some such implementations, sensor array data, such as array capacitances, may be input into the algorithm. The ear touch zones 1020a-1020c shown in
In block 1235, it is determined whether additional sensor data will be acquired. This determination may be made by the logic system and/or according to user input. In some implementations, more than one type of sensor data will be acquired for a user. In other implementations, multiple instances of the same type of sensor data will be acquired for a user. For example, a user may be prompted so that sensor data may be acquired with the user's ear in more than one position. The user may be prompted so that data may be acquired for a left ear and a right ear. The user may be prompted so that data may be acquired at varying pressures, such as the varying pressures indicated in
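Blocks 1210 through 1235 might be realized along the lines of the following sketch, in which simple frame averaging stands in for the contouring or pattern-recognition algorithm mentioned above. The pose names and the prompt, acquire_frame and store interfaces are assumptions.

```python
import numpy as np

def enroll(user_id, prompt, acquire_frame, store,
           poses=("left_ear", "right_ear"), frames_per_pose=3):
    """Acquire several scans per ear pose and store averaged templates."""
    templates = {}
    for pose in poses:
        prompt(pose)                                # block 1210: position ear
        frames = [np.asarray(acquire_frame(), dtype=float)
                  for _ in range(frames_per_pose)]  # blocks 1215/1220
        template = np.mean(frames, axis=0)          # block 1225: pattern data
        peak = template.max()
        if peak > 0:
            template /= peak                        # normalize (assumption)
        templates[pose] = template
    store(user_id, templates)                       # persist for later matching
    return templates
```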
Some implementations involve detecting an ear gesture and controlling a device according to the ear gesture.
The sensor signals may then be analyzed (block 1315). In the example shown in block 1315, array capacitances of the touch sensor array are analyzed. It may then be determined whether the array capacitances indicate not only the presence of an ear, but of an ear gesture (block 1320). The ear gesture may be, for example, an ear touch, an ear press, an ear swipe, an ear rotation, an ear position, an ear distance, and/or an ear motion. Although the term “ear gesture” is used herein, an ear gesture may be caused primarily by force and/or motion of a hand holding a mobile device against the ear while the ear remains relatively stationary. Alternatively, or additionally, the ear gesture may actually be caused, at least in part, by force and/or motion of the ear while the mobile device remains relatively stationary.
The direction of the swipe may or may not matter, depending on the implementation. In some implementations, for example, the same device operation(s) may be associated with the ear gesture 1405a, regardless of whether an upward or a downward ear swipe is detected. In alternative implementations, a downward swipe may be associated with a first device operation and an upward swipe may be associated with a second device operation.
When one of these ear gestures is detected, the logic system may determine that the corresponding sensor signals indicate a type of ear gesture in block 1320 (see
Other ear gestures do not necessarily involve swipes along substantially straight lines. For example, referring to
In some implementations, an ear gesture may be associated with the shape of a pattern and not necessarily with the orientation of the pattern. For example, in some implementations a triangular ear gesture may be recognized by a detected triangular pattern, regardless of the orientation of each side of the triangle relative to rows or columns of a sensor array.
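A trajectory-based classifier along the lines suggested by blocks 1315 and 1320 might look like the following sketch. The thresholds, the class names and the straightness heuristic are illustrative assumptions; note that the rotation test keys on the shape of the path (a nearly closed loop) rather than its orientation, in keeping with the orientation independence just described.

```python
import numpy as np

def classify_ear_gesture(centroids):
    """centroids: list of (row, col) contact centroids, one per scan.
    Row indices are assumed to increase toward the bottom of the array."""
    pts = np.asarray(centroids, dtype=float)
    if len(pts) < 2 or np.ptp(pts, axis=0).max() < 2.0:
        return "ear_press"                  # little motion: touch/press
    disp = pts[-1] - pts[0]                 # net displacement (chord)
    steps = np.diff(pts, axis=0)
    path = np.linalg.norm(steps, axis=1).sum()
    straightness = np.linalg.norm(disp) / path
    if straightness > 0.8:                  # nearly straight: a swipe
        return "swipe_down" if disp[0] > 0 else "swipe_up"
    if np.linalg.norm(disp) < 0.2 * path:   # path returns near its start
        return "ear_rotation"               # circular/oval-type gesture
    return "unknown"
```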
Various other types of ear gestures are provided herein. Some such ear gestures do not necessarily involve swipes along substantially straight or curved lines. For example, ear gesture 1405f of
In block 1330 of
Some implementations may involve machine learning processes for associating a detected ear or a detected ear gesture with a device operation. Some such learning processes may include receiving and storing user input regarding device functionality. Some implementations may include registration or calibration procedures.
In this example, the user is prompted to select an ear gesture type and a device operation to associate with the ear gesture (block 1510). For example, the user may be prompted to indicate whether the ear gesture will be a substantially linear ear swipe, a curved ear swipe, a pattern (circular, oval, triangular, etc.), an ear press, a sequence of gestures, etc. In some implementations, block 1510 may involve receiving user input regarding a first desired speaker volume level to be applied when a cell phone is pressed against the user's ear with a first pressure and/or regarding a second desired speaker volume level to be applied when the cell phone is pressed against the user's ear with a second pressure.
In some implementations, however, the user may not be prompted to indicate the type of ear gesture. Instead, the ear gesture trajectory and/or pattern type may be determined according to received sensor data.
Some implementations also may involve associating the ear gesture type and the device operation(s) with a particular user. For example, the user may be prompted to enter user information, such as a user name, a user ID and/or a password or passcode.
In block 1515, the user may be prompted to make the ear gesture. One or more sensors may be controlled to acquire sensor data (block 1520), which may be received by a logic system in block 1525. The logic system may analyze the sensor data to determine a corresponding ear gesture trajectory and/or pattern (block 1530). For example, block 1530 may involve determining whether the ear gesture trajectory and/or pattern type detected by the sensor(s) corresponds to the type indicated by the user in block 1510. If not, the logic system may determine that additional sensor data should be acquired (block 1535). Accordingly, the process may revert to block 1515. In some implementations, the logic system may acquire multiple instances of an ear gesture trajectory and/or pattern even if the first instance is satisfactory.
If it is determined in block 1535 that no additional sensor data will be acquired for the ear gesture trajectory and/or pattern, the ear gesture trajectory and/or pattern data may be stored and associated with the indicated device operation(s), as shown in block 1540. In block 1545, it is determined whether the process will continue. For example, the logic system may prompt the user for input regarding whether additional ear gesture trajectory and/or pattern data is or will be acquired. If so, the process may revert to block 1510. If not, the process may end, as in block 1550.
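The association step of block 1540 and the later dispatch of a learned gesture might be sketched as follows, with a plain in-memory dictionary standing in for the nonvolatile storage a real device would use; all of the names are hypothetical.

```python
gesture_registry = {}   # (user_id, gesture_name) -> (template, operation)

def register_gesture(user_id, gesture_name, template, operation):
    """Block 1540: store the trajectory/pattern with its operation."""
    gesture_registry[(user_id, gesture_name)] = (template, operation)

def invoke_for_gesture(user_id, gesture_name, dispatch):
    """Later, when the gesture is recognized, invoke its operation."""
    entry = gesture_registry.get((user_id, gesture_name))
    if entry is not None:
        _template, operation = entry
        dispatch(operation)   # e.g. "switch_to_speaker_phone"

register_gesture("user1", "ear_swipe_down", None, "reduce_volume")
invoke_for_gesture("user1", "ear_swipe_down", print)  # prints reduce_volume
```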
The display device 40 includes a housing 41, a display 30, a touch sensor array 1000, an antenna 43, a speaker 45, an input device 48, and a microphone 46. The housing 41 can be formed by any of a variety of manufacturing processes, including injection molding and vacuum forming. In addition, the housing 41 may be made from any of a variety of materials, including, but not limited to: plastic, metal, glass, rubber, and ceramic, or a combination thereof. The housing 41 can include removable portions (not shown) that may be interchanged with other removable portions of different color, or containing different logos, pictures, or symbols.
The display 30 may be any of a variety of displays, including a bi-stable or analog display, as described herein. The display 30 also can be configured to include a flat-panel display, such as plasma, EL, OLED, STN LCD, or TFT LCD, or a non-flat-panel display, such as a CRT or other tube device. In addition, the display 30 can include an IMOD display, as described herein.
The components of the display device 40 are schematically illustrated in
In this example, the display device 40 also includes a sensor system 77. In this example, the sensor system 77 includes the touch sensor array 1000. The sensor system 77 also may include other types of sensors, such as one or more cameras, pressure sensors, infrared (IR) sensors, accelerometers, gyroscopes, orientation sensors, etc. In some implementations, the sensor system 77 may include part of the logic system of the display device 40. For example, the sensor system 77 may include a touch controller that is configured to control, at least in part, the operations of the touch sensor array 1000. In alternative implementations, however, the processor 21 (or another such device) may be configured to provide some or all of this functionality.
The network interface 27 includes the antenna 43 and the transceiver 47 so that the display device 40 can communicate with one or more devices over a network. The network interface 27 also may have some processing capabilities to relieve, e.g., data processing requirements of the processor 21. The antenna 43 can transmit and receive signals. In some implementations, the antenna 43 transmits and receives RF signals according to the IEEE 16.11 standard, including IEEE 16.11(a), (b), or (g), or the IEEE 802.11 standard, including IEEE 802.11a, b, g or n. In some other implementations, the antenna 43 transmits and receives RF signals according to the BLUETOOTH standard. In the case of a cellular telephone, the antenna 43 is designed to receive code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), Global System for Mobile communications (GSM), GSM/General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), Terrestrial Trunked Radio (TETRA), Wideband-CDMA (W-CDMA), Evolution Data Optimized (EV-DO), 1xEV-DO, EV-DO Rev A, EV-DO Rev B, High Speed Packet Access (HSPA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolved High Speed Packet Access (HSPA+), Long Term Evolution (LTE), AMPS, or other known signals that are used to communicate within a wireless network, such as a system utilizing 3G or 4G technology. The transceiver 47 can pre-process the signals received from the antenna 43 so that they may be received by and further manipulated by the processor 21. The transceiver 47 also can process signals received from the processor 21 so that they may be transmitted from the display device 40 via the antenna 43. The processor 21 may be configured to receive time data, e.g., from a time server, via the network interface 27.
In some implementations, the transceiver 47 can be replaced by a receiver. In addition, the network interface 27 can be replaced by an image source, which can store or generate image data to be sent to the processor 21. The processor 21 can control the overall operation of the display device 40. The processor 21 receives data, such as compressed image data from the network interface 27 or an image source, and processes the data into raw image data or into a format that is readily processed into raw image data. The processor 21 can send the processed data to the driver controller 29 or to the frame buffer 28 for storage. Raw data typically refers to the information that identifies the image characteristics at each location within an image. For example, such image characteristics can include color, saturation, and gray-scale level.
The processor 21 can include a microcontroller, CPU, or logic unit to control operation of the display device 40. The conditioning hardware 52 may include amplifiers and filters for transmitting signals to the speaker 45, and for receiving signals from the microphone 46. The conditioning hardware 52 may be discrete components within the display device 40, or may be incorporated within the processor 21 or other components.
The driver controller 29 can take the raw image data generated by the processor 21 either directly from the processor 21 or from the frame buffer 28 and can re-format the raw image data appropriately for high speed transmission to the array driver 22. In some implementations, the driver controller 29 can re-format the raw image data into a data flow having a raster-like format, such that it has a time order suitable for scanning across the display array 30. Then the driver controller 29 sends the formatted information to the array driver 22. Although a driver controller 29, such as an LCD controller, is often associated with the system processor 21 as a stand-alone integrated circuit (IC), such controllers may be implemented in many ways. For example, controllers may be embedded in the processor 21 as hardware, embedded in the processor 21 as software, or fully integrated in hardware with the array driver 22.
The array driver 22 can receive the formatted information from the driver controller 29 and can re-format the video data into a parallel set of waveforms that are applied many times per second to the hundreds, and sometimes thousands (or more), of leads coming from the display's x-y matrix of pixels.
In some implementations, the driver controller 29, the array driver 22, and the display array 30 are appropriate for any of the types of displays described herein. For example, the driver controller 29 can be a conventional display controller or a bi-stable display controller (e.g., an IMOD controller). Additionally, the array driver 22 can be a conventional driver or a bi-stable display driver (e.g., an IMOD display driver). Moreover, the display array 30 can be a conventional display array or a bi-stable display array (e.g., a display including an array of IMODs). In some implementations, the driver controller 29 can be integrated with the array driver 22. Such an implementation is common in highly integrated systems such as cellular phones, watches and other small-area displays.
In some implementations, the input device 48 can be configured to allow, e.g., a user to control the operation of the display device 40. The input device 48 can include a keypad, such as a QWERTY keyboard or a telephone keypad, a button, a switch, a rocker, a touch-sensitive screen, or a pressure- or heat-sensitive membrane. The microphone 46 can be configured as an input device for the display device 40. In some implementations, voice commands through the microphone 46 can be used for controlling operations of the display device 40.
The power supply 50 can include a variety of energy storage devices as are well known in the art. For example, the power supply 50 can be a rechargeable battery, such as a nickel-cadmium battery or a lithium-ion battery. The power supply 50 also can be a renewable energy source, a capacitor, or a solar cell, including a plastic solar cell or solar-cell paint. The power supply 50 also can be configured to receive power from a wall outlet.
In some implementations, control programmability resides in the driver controller 29 which can be located in several places in the electronic display system. In some other implementations, control programmability resides in the array driver 22. The above-described optimization may be implemented in any number of hardware and/or software components and in various configurations.
The various illustrative logics, logical blocks, modules, circuits and algorithm processes described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally, in terms of functionality, and illustrated in the various illustrative components, blocks, modules, circuits and processes described above. Whether such functionality is implemented in hardware or software depends upon the particular application and design constraints imposed on the overall system.
The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor or any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some implementations, particular processes and methods may be performed by circuitry that is specific to a given function.
In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware, including the structures disclosed in this specification and their structural equivalents, or in any combination thereof. Implementations of the subject matter described in this specification also can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage media for execution by, or to control the operation of, data processing apparatus.
If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a non-transitory computer-readable medium. The processes of a method or algorithm disclosed herein may be implemented in a processor-executable software module which may reside on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that can be enabled to transfer a computer program from one place to another. Storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such non-transitory computer-readable media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection can be properly termed a computer-readable medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and instructions on a machine-readable medium and computer-readable medium, which may be incorporated into a computer program product.
Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the claims are not intended to be limited to the implementations shown herein, but are to be accorded the widest scope consistent with this disclosure, the principles and the novel features disclosed herein.
Additionally, as a person having ordinary skill in the art will readily appreciate, the terms “upper” and “lower” are sometimes used for ease of describing the figures, and indicate relative positions corresponding to the orientation of the figure on a properly oriented page, and may not reflect the proper orientation of the IMOD (or any other device) as implemented.
Certain features that are described in this specification in the context of separate implementations also can be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also can be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Further, the drawings may schematically depict one or more example processes in the form of a flow diagram. However, other operations that are not depicted can be incorporated in the example processes that are schematically illustrated. For example, one or more additional operations can be performed before, after, simultaneously, or between any of the illustrated operations. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results.