Automated response to and sensing of user activity in portable devices

Information

  • Patent Grant
  • Patent Number
    9,958,987
  • Date Filed
    Monday, April 10, 2017
  • Date Issued
    Tuesday, May 1, 2018
Abstract
The various methods and devices described herein relate to devices which, in at least certain embodiments, may include one or more sensors for providing data relating to user activity and at least one processor for causing the device to respond based on the user activity which was determined, at least in part, through the sensors. The response by the device may include a change of state of the device, and the response may be automatically performed after the user activity is determined.
Description
FIELD OF THE INVENTION

This invention relates to the field of portable devices and, in particular, to systems and methods for sensing or determining user activities and responding to the user's activities.


BACKGROUND OF THE INVENTION

Portable devices, such as cell phones, are becoming increasingly common. These portable devices have grown more complex over time, incorporating many features including, for example, MP3 player capabilities, web browsing capabilities, capabilities of personal digital assistants (PDAs) and the like.


The battery life of these portable devices, however, can be limited. Often, the primary draw on battery power is the display device and, in particular, the backlight which illuminates the display. FIG. 1 shows an exemplary process 10, used in many current portable devices, for illuminating the display. In this process, the backlight is activated (e.g. turned on to generate light) when a user enters a key input into the portable device, as shown in operation 12. A timer is started at operation 14 in response to activating the backlight, and the portable device determines, at operation 16, whether a time out of the timer has occurred. If the portable device receives a user key input at operation 18, the timer restarts (in operation 14) and the process continues as described above. If the portable device does not receive a user input while the timer is counting, the time out will occur, and the backlight is deactivated at operation 20.
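
The patent supplies no code for this prior-art process; the following Python sketch merely illustrates the timer-driven behavior described above, with an assumed timeout value and hypothetical method names.

```python
import time

BACKLIGHT_TIMEOUT_SECONDS = 10.0  # assumed timeout, for illustration only


class PriorArtBacklight:
    """Sketch of prior-art process 10: a key press lights the backlight and
    (re)starts a timer; the backlight is deactivated when the timer expires
    without further input."""

    def __init__(self):
        self.backlight_on = False
        self._deadline = 0.0

    def on_key_input(self):
        # Operations 12, 14 and 18: activate the backlight and (re)start the timer.
        self.backlight_on = True
        self._deadline = time.monotonic() + BACKLIGHT_TIMEOUT_SECONDS

    def tick(self):
        # Operations 16 and 20: on time out with no new input, deactivate the backlight.
        if self.backlight_on and time.monotonic() >= self._deadline:
            self.backlight_on = False
```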


In addition, users often inadvertently enter key inputs on the input device of the portable device, which can result in the telephone performing operations which the user did not intend. For example, the backlight may illuminate, a telephone number may be dialed, a ring setting may be changed, etc., because the user accidentally contacts a key on the input device. The inadvertent entry of an input is particularly troublesome for a touch screen device, especially one which may receive an inadvertent input when a user has the portable device placed next to the user's ear.


SUMMARY OF THE DESCRIPTION

The various methods and devices described herein relate to devices which, in at least certain embodiments, may include one or more sensors, such as a proximity sensor, for providing data relating to user activity and at least one processor for causing the device to respond based on the user activity which was determined, at least in part, through the sensors. The response by the device may include a change of state of the device, and the response may be automatically performed after the user activity is determined.


According to one embodiment of the inventions, a method includes receiving data from at least one sensor, such as a proximity sensor, at a device, analyzing the data to determine activity of a user of the device, and modifying, in response to the analyzing, at least one setting of the device, such as a setting relating to how input data from an input device is processed. The device may be a telephone, such as a cell phone, or an integrated telephone and media player, or other types of devices which can include the sensor and logic to process data from the sensor, and the input device may be a keypad or a multi-touch input panel. The sensors may be any one of a variety of sensors including, for example, a proximity sensor, a temperature sensor, an accelerometer, a light sensor, a position determination device, an orientation determination device, a radio frequency electromagnetic wave sensor, a touch input panel, a motion sensor, or a sound sensor. In certain embodiments, the device may include a plurality of sensors which together provide data to at least one processor which analyzes the data to determine activity of the user. In at least certain embodiments, the analyzing may use an artificial intelligence process which includes a comparison of data, derived from the one or more sensors, to a threshold; for example, the data from a proximity sensor may be compared to a threshold value which represents a distance between a portion of the device and the user's ear or head. The data from a sensor may represent an analog value detected by the sensor; the analog value may indicate any one of a variety of analog values including, for example, a distance, a temperature, a light intensity, a measurement of motion or orientation, a measurement of sound intensity or an RF electromagnetic measurement. The artificial intelligence process, in at least certain embodiments, may be required to receive the data and to select an interpretation of the data from a set of possible interpretations, and the interpretation which is selected will in turn be used to decide how to modify the at least one setting. The setting which is modified may be any one of a variety of settings including, for example, a setting of a display illuminator, a setting of a sound input or sound output parameter, a setting of a state of an input device relative to receipt of user input, a setting of a communication parameter, a setting of a processor, or a setting of the mode of the device (e.g. cell phone mode or media player mode).


Various devices which perform one or more of the foregoing methods, and machine readable media containing executable instructions which, when executed by a processing system, cause the processing system to perform these methods, are also described.


According to another embodiment of the inventions, a method of operating a portable device having a hinge includes sensing a state of the hinge, analyzing data from a proximity sensor when the state of the hinge shows that the portable device is in an open configuration, and modifying at least one setting of the portable device in response to the analyzing. In at least certain embodiments, when the state of the hinge indicates that the portable device is in a closed configuration, the data from the proximity sensor is either ignored or not processed to determine whether a user's head or ear is near the proximity sensor; further, when in this closed configuration, the proximity sensor is placed in a reduced power consumption state. An implementation of one or more of these methods may be performed by a cellular telephone which typically includes a first housing, a second housing, a hinge which couples the first housing to the second housing, a hinge state detector to detect a state of the hinge, a proximity sensor, and at least one processor coupled to the proximity sensor and to the hinge state detector.


Other methods, devices and machine readable media are also described.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention is described by way of example with reference to the accompanying drawings, wherein:



FIG. 1 is a flow chart of a prior art method for responding to user input and controlling the backlight of a display in response to the user input;



FIG. 2 is a perspective view of a portable device in accordance with one embodiment of the present invention;



FIG. 3 is a perspective view of a portable device in accordance with one embodiment of the present invention;



FIG. 4 is a perspective view of a portable device in accordance with one embodiment of the present invention;



FIG. 5A is a perspective view of a portable device in a first configuration (e.g. in an open configuration) in accordance with one embodiment of the present invention;



FIG. 5B is a perspective view of the portable device of FIG. 5A in a second configuration (e.g. a closed configuration) in accordance with one embodiment of the present invention;



FIG. 6 is a block diagram of a system in which embodiments of the present invention can be implemented;



FIG. 7A is a schematic side view of a proximity sensor in accordance with one embodiment of the present invention;



FIG. 7B is a schematic side view of an alternative proximity sensor in accordance with one embodiment of the present invention;



FIG. 8 is a block diagram of inputs and outputs for logic, such as artificial intelligence logic, in accordance with embodiments of the present invention;



FIGS. 9A-C are views of user activities in accordance with embodiments of the present invention;



FIG. 10 is a flow chart of a method that includes automated responses to user activity in accordance with embodiments of the present invention;



FIGS. 11A-F are flow charts of combinations of sensing to determine user activity and performing automated responses in accordance with embodiments of the present invention; and



FIG. 12 is a block diagram of a digital processing system in accordance with one embodiment of the present invention.





DETAILED DESCRIPTION

Various embodiments and aspects of the inventions will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative of the invention and are not to be construed as limiting the invention. Numerous specific details are described to provide a thorough understanding of various embodiments of the present invention. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments of the present inventions.


Some portions of the detailed descriptions which follow are presented in terms of algorithms which include operations on data stored within a computer memory. An algorithm is generally a self-consistent sequence of operations leading to a desired result. The operations typically require or involve physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.


It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, can refer to the action and processes of a data processing system, or similar electronic device, that manipulates and transforms data represented as physical (electronic) quantities within the system's registers and memories into other data similarly represented as physical quantities within the system's memories or registers or other such information storage, transmission or display devices.


The present invention can relate to an apparatus for performing one or more of the operations described herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a machine (e.g. computer) readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a bus.


A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read only memory (“ROM”); random access memory (“RAM”); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); etc.


At least certain embodiments of the present inventions include one or more sensors to monitor user activity. At least certain embodiments of the present inventions also include automatically changing a state of the portable device based on user activity, such as, for example, automatically activating or deactivating a backlight of a display device of the portable device or setting an input device of the portable device to a particular state, based on certain predetermined user activities.


At least certain embodiments of the inventions may be part of a digital media player, such as a portable music and/or video media player, which may include a media processing system to present the media, a storage device to store the media and may further include a radio frequency (RF) transceiver (e.g., an RF transceiver for a cellular telephone) coupled with an antenna system and the media processing system. In certain embodiments, media stored on a remote storage device may be transmitted to the media player through the RF transceiver. The media may be, for example, one or more of music or other audio, still pictures, or motion pictures.


The portable media player may include a media selection device, such as a click wheel input device on an iPod® or iPod Nano® media player from Apple Computer, Inc. of Cupertino, Calif., a touch screen input device, pushbutton device, movable pointing input device or other input device. The media selection device may be used to select the media stored on the storage device and/or the remote storage device. The portable media player may, in at least certain embodiments, include a display device which is coupled to the media processing system to display titles or other indicators of media being selected through the input device and being presented, either through a speaker or earphone(s), or on the display device, or on both the display device and a speaker or earphone(s). Examples of a portable media player are described in published U.S. patent application numbers 2003/0095096 and 2004/0224638, both of which are incorporated herein by reference.


Embodiments of the inventions described herein may be part of other types of data processing systems, such as, for example, entertainment systems or personal digital assistants (PDAs), or general purpose computer systems, or special purpose computer systems, or an embedded device within another device, or cellular telephones which do not include media players, or devices which combine aspects or functions of these devices (e.g., a media player, such as an iPod®, combined with a PDA, an entertainment system, and a cellular telephone in one portable device).



FIG. 2 illustrates a portable device 30 according to one embodiment of the invention. FIG. 2 shows a wireless device in a telephone configuration having a “candy-bar” style. In FIG. 2, the wireless device 30 may include a housing 32, a display device 34, an input device 36 which may be an alphanumeric keypad, a speaker 38, a microphone 40 and an antenna 42. The wireless device 30 also may include a proximity sensor 44 and an accelerometer 46. It will be appreciated that the embodiment of FIG. 2 may use more or fewer sensors and may have a different form factor from the form factor shown in FIG. 2.


The display device 34 is shown positioned at an upper portion of the housing 32, and the input device 36 is shown positioned at a lower portion of the housing 32. The antenna 42 is shown extending from the housing 32 at an upper portion of the housing 32. The speaker 38 is also shown at an upper portion of the housing 32 above the display device 34. The microphone 40 is shown at a lower portion of the housing 32, below the input device 36. It will be appreciated that the speaker 38 and microphone 40 can be positioned at any location on the housing, but are typically positioned in accordance with a user's ear and mouth, respectively. The proximity sensor 44 is shown at or near the speaker 38 and at least partially within the housing 32. The accelerometer 46 is shown at a lower portion of the housing 32 and within the housing 32. It will be appreciated that the particular locations of the above-described features may vary in alternative embodiments.


The display device 34 may be, for example, a liquid crystal display (LCD) which does not include the ability to accept inputs or a touch input screen which also includes an LCD. The input device 36 may include, for example, buttons, switches, dials, sliders, keys or keypad, navigation pad, touch pad, touch screen, and the like.


Any well-known speaker, microphone and antenna can be used for speaker 38, microphone 40 and antenna 42, respectively.


The proximity sensor 44 may detect location (e.g. X, Y, Z), direction, speed, orientation (e.g., roll, pitch, yaw), etc. of objects relative to the wireless device 30. A location of an object relative to the wireless device can be represented as a distance in at least certain embodiments. The proximity sensor may generate location or movement data or both, which may be used to determine the location of objects relative to the portable device 30 and/or proximity sensor 44. An example of a proximity sensor is shown in FIG. 7A.


In addition, a processing device (not shown) is coupled to the proximity sensor(s) 44. The processing device may be used to determine the location of objects relative to the portable device 30 or proximity sensor 44 or both based on the location and/or movement data provided by the proximity sensor 44. The proximity sensor may continuously or periodically monitor the object location. The proximity sensor may also be able to determine the type of object it is detecting.


Additional information about proximity sensors can be found in U.S. patent application Ser. No. 11/241,839, titled “PROXIMITY DETECTOR IN HANDHELD DEVICE,” and U.S. patent application Ser. No. 11/240,788, titled “PROXIMITY DETECTOR IN HANDHELD DEVICE;” U.S. patent application Ser. No. 11/165,958, titled “METHODS AND APPARATUS FOR REMOTELY DETECTING PRESENCE,” filed Jun. 23, 2005; and U.S. Pat. No. 6,583,676, titled “PROXIMITY/TOUCH DETECTOR AND CALIBRATION CIRCUIT,” issued Jun. 24, 2003, all of which are incorporated herein by reference in their entirety.


According to one embodiment, the accelerometer 46 is able to detect a movement including an acceleration or deceleration of the wireless device. The accelerometer 46 may generate movement data for multiple dimensions, which may be used to determine a direction of movement of the wireless device. For example, the accelerometer 46 may generate X, Y and Z axis acceleration information when the accelerometer 46 detects that the portable device is moved. In one embodiment, the accelerometer 46 may be implemented as described in U.S. Pat. No. 6,520,013, which is incorporated herein by reference in its entirety. Alternatively, the accelerometer 46 may be a KGF01 accelerometer from Kionix or an ADXL311 accelerometer from Analog Devices or other accelerometers which are known in the art.


In addition, a processing device (not shown) is coupled to the accelerometer(s) 46. The processing device may be used to calculate a direction of movement, also referred to as a movement vector of the wireless device 30. The movement vector may be determined according to one or more predetermined formulas based on the movement data (e.g., movement in X, Y and Z) provided by accelerometer 46. The processing device may be integrated with the accelerometer 46 or integrated with other components, such as, for example, a chipset of a microprocessor, of the portable device.
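
The patent does not specify the "predetermined formulas" used to compute the movement vector; a minimal Python sketch, assuming a single three-axis sample and treating the normalized acceleration as the movement direction, might look like the following (the names are illustrative, not from the patent):

```python
import math
from dataclasses import dataclass
from typing import Tuple


@dataclass
class AccelSample:
    x: float  # acceleration along the X axis (e.g. in m/s^2)
    y: float
    z: float


def movement_vector(sample: AccelSample) -> Tuple[float, float, float]:
    """Return a unit vector for the direction of movement implied by one
    accelerometer sample; a simplified stand-in for the predetermined formulas."""
    magnitude = math.sqrt(sample.x ** 2 + sample.y ** 2 + sample.z ** 2)
    if magnitude == 0.0:
        return (0.0, 0.0, 0.0)
    return (sample.x / magnitude, sample.y / magnitude, sample.z / magnitude)
```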


The accelerometer 46 may continuously or periodically monitor the movement of the portable device. As a result, an orientation of the portable device prior to the movement and after the movement may be determined based on the movement data provided by the accelerometer attached to the portable device.


Additional information about accelerometers can be found in co-pending U.S. patent application Ser. No. 10/986,730, filed Nov. 12, 2004, which is hereby incorporated herein by reference in its entirety.


The data acquired from the proximity sensor 44 and the accelerometer 46 can be combined together, or used alone, to gather information about the user's activities. The data from the proximity sensor 44, the accelerometer 46 or both can be used, for example, to activate/deactivate a display backlight, initiate commands, make selections, control scrolling or other movement in a display, control input device settings, or to make other changes to one or more settings of the device.



FIG. 3 shows an alternative portable device 30a, which is similar to the portable device 30 illustrated in FIG. 2. The portable device 30a shown in FIG. 3 can differ from the portable device 30 shown in FIG. 2 in that the proximity sensor 44a (FIG. 3) is located at or near the microphone 40.



FIG. 4 shows a portable device 50 in accordance with one embodiment of the invention. The portable device 50 may include a housing 52, a display/input device 54, a speaker 56, a microphone 58 and an optional antenna 60 (which may be visible on the exterior of the housing or may be concealed within the housing). The portable device 50 also may include a proximity sensor 62 and an accelerometer 64. The portable device 50 may be a cellular telephone, or a device which is an integrated PDA and a cellular telephone, or a device which is an integrated media player and a cellular telephone, or a device which is both an entertainment system (e.g. for playing games) and a cellular telephone, or the portable device 50 may be other types of devices described herein. In one particular embodiment, the portable device 50 may include a cellular telephone and a media player and a PDA, all contained within the housing 52. The portable device 50 may have a form factor which is small enough that it fits within the hand of a normal adult and is light enough that it can be carried in one hand by an adult. It will be appreciated that the term “portable” means the device can be easily held in an adult user's hands (one or both); for example, a laptop computer and an iPod are portable devices.


In one embodiment, the display/input device 54 may include a multi-point touch input screen in addition to being a display, such as an LCD. In one embodiment, the multi-point touch screen is a capacitive sensing medium configured to detect multiple touches (e.g., blobs on the display from a user's face or multiple fingers concurrently touching or nearly touching the display) or near touches (e.g., blobs on the display) that occur at the same time and at distinct locations in the plane of the touch panel and to produce distinct signals representative of the location of the touches on the plane of the touch panel for each of the multiple touches. Additional information about multi-point input touch screens can be found in co-pending U.S. patent application Ser. No. 10/840,862, filed May 6, 2004 (see published U.S. patent application 20060097991), which is incorporated herein by reference in its entirety. A multi-point input touch screen may also be referred to as a multi-touch input panel.
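
The referenced applications describe the actual multi-touch hardware; purely as a rough illustration, assume the panel exposes a two-dimensional grid of normalized capacitance readings. A sketch that groups adjacent above-threshold cells into "blobs" and reports one distinct point per concurrent touch might be:

```python
from collections import deque

TOUCH_THRESHOLD = 0.5  # assumed normalized capacitance threshold


def detect_blobs(grid):
    """Group adjacent above-threshold cells into blobs and return one
    (row, col) centroid per blob, i.e. one distinct point per touch."""
    if not grid or not grid[0]:
        return []
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] < TOUCH_THRESHOLD or seen[r][c]:
                continue
            # Breadth-first search over the cells of this blob.
            queue, cells = deque([(r, c)]), []
            seen[r][c] = True
            while queue:
                cr, cc = queue.popleft()
                cells.append((cr, cc))
                for nr, nc in ((cr + 1, cc), (cr - 1, cc), (cr, cc + 1), (cr, cc - 1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not seen[nr][nc]
                            and grid[nr][nc] >= TOUCH_THRESHOLD):
                        seen[nr][nc] = True
                        queue.append((nr, nc))
            centroids.append((sum(p[0] for p in cells) / len(cells),
                              sum(p[1] for p in cells) / len(cells)))
    return centroids
```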


A processing device (not shown) may be coupled to the display/input device 54. The processing device may be used to calculate touches on the touch panel. The display/input device 54 can use the detected touch (e.g., blob or blobs from a user's face) data to, for example, identify the location of certain objects and to also identify the type of object touching (or nearly touching) the display/input device 54.


The data acquired from the proximity sensor 62 and the display/input device 54 can be combined to gather information about the user's activities as described herein. The data from the proximity sensor 62 and the display/input device 54 can be used to change one or more settings of the portable device 50, such as, for example, an illumination setting of the display/input device 54.


In one embodiment, as shown in FIG. 4, the display/input device 54 occupies a large portion of one surface (e.g. the top surface) of the housing 52 of the portable device 50. In one embodiment, the display/input device 54 consumes substantially the entire front surface of the portable device 50. In another embodiment, the display/input device 54 consumes, for example, at least 75% of a front surface of the housing 52 of the portable device 50. In alternative embodiments, the portable device 50 may include a display which does not have input capabilities, but the display still occupies a large portion of one surface of the portable device 50. In this case, the portable device 50 may include other types of input devices such as a QWERTY keyboard or other types of keyboard which slide out or swing out from a portion of the portable device 50.



FIGS. 5A and 5B illustrate a portable device 70 according to one embodiment of the invention. The portable device 70 may be a cellular telephone which includes a hinge 87 that couples a display housing 89 to a keypad housing 91. The hinge 87 allows a user to open and close the cellular telephone so that it can be placed in at least one of two different configurations shown in FIGS. 5A and 5B. In one particular embodiment, the hinge 87 may rotatably couple the display housing to the keypad housing. In particular, a user can open the cellular telephone to place it in the open configuration shown in FIG. 5A and can close the cellular telephone to place it in the closed configuration shown in FIG. 5B. The keypad housing 91 may include a keypad 95 which receives inputs (e.g. telephone number inputs or other alphanumeric inputs) from a user and a microphone 97 which receives voice input from the user. The display housing 89 may include, on its interior surface, a display 93 (e.g. an LCD), a speaker 98 and a proximity sensor 84; on its exterior surface, the display housing 89 may include a speaker 96, a temperature sensor 94, a display 88 (e.g. another LCD), an ambient light sensor 92, and a proximity sensor 84A. Hence, in this embodiment, the display housing 89 may include a first proximity sensor on its interior surface and a second proximity sensor on its exterior surface. The first proximity sensor may be used to detect a user's head or ear being within a certain distance of the first proximity sensor and to cause an illumination setting of displays 93 and 88 to be changed automatically in response to this detecting (e.g. the illumination for both displays is turned off or otherwise set to a reduced power state). Data from the second proximity sensor, along with data from the ambient light sensor 92 and data from the temperature sensor 94, may be used to detect that the cellular telephone has been placed into the user's pocket.


In at least certain embodiments, the portable device 70 may contain components which provide one or more of the functions of a wireless communication device such as a cellular telephone, a media player, an entertainment system, a PDA, or other types of devices described herein. In one implementation of an embodiment, the portable device 70 may be a cellular telephone integrated with a media player which plays MP3 files, such as MP3 music files.


Each of the devices shown in FIGS. 2, 3, 4, 5A and 5B may be a wireless communication device, such as a cellular telephone, and may include a plurality of components which provide a capability for wireless communication. FIG. 6 shows an embodiment of a wireless device 100 which includes the capability for wireless communication. The wireless device 100 may be included in any one of the devices shown in FIGS. 2, 3, 4, 5A and 5B, although alternative embodiments of those devices of FIGS. 2-5B may include more or fewer components than the wireless device 100.


Wireless device 100 may include an antenna system 101. Wireless device 100 may also include a digital and/or analog radio frequency (RF) transceiver 102, coupled to the antenna system 101, to transmit and/or receive voice, digital data and/or media signals through antenna system 101.


Wireless device 100 may also include a digital processing system 103 to control the digital RF transceiver and to manage the voice, digital data and/or media signals. Digital processing system 103 may be a general purpose processing device, such as a microprocessor or controller for example. Digital processing system 103 may also be a special purpose processing device, such as an ASIC (application specific integrated circuit), FPGA (field-programmable gate array) or DSP (digital signal processor). Digital processing system 103 may also include other devices, as are known in the art, to interface with other components of wireless device 100. For example, digital processing system 103 may include analog-to-digital and digital-to-analog converters to interface with other components of wireless device 100. Digital processing system 103 may include a media processing system 109, which may also include a general purpose or special purpose processing device to manage media, such as files of audio data.


Wireless device 100 may also include a storage device 104, coupled to the digital processing system, to store data and/or operating programs for the wireless device 100. Storage device 104 may be, for example, any type of solid-state or magnetic memory device.


Wireless device 100 may also include one or more input devices 105, coupled to the digital processing system 103, to accept user inputs (e.g., telephone numbers, names, addresses, media selections, etc.). Input device 105 may be, for example, one or more of a keypad, a touchpad, a touch screen, a pointing device in combination with a display device, or similar input device.


Wireless device 100 may also include at least one display device 106, coupled to the digital processing system 103, to display information such as messages, telephone call information, contact information, pictures, movies and/or titles or other indicators of media being selected via the input device 105. Display device 106 may be, for example, an LCD display device. In one embodiment, display device 106 and input device 105 may be integrated together in the same device (e.g., a touch screen LCD such as a multi-touch input panel which is integrated with a display device, such as an LCD display device). Examples of a touch input panel and a display integrated together are shown in U.S. published application No. 20060097991. The display device 106 may include a backlight 106a to illuminate the display device 106 under certain circumstances. It will be appreciated that the wireless device 100 may include multiple displays.


Wireless device 100 may also include a battery 107 to supply operating power to components of the system including digital RF transceiver 102, digital processing system 103, storage device 104, input device 105, microphone 105A, audio transducer 108, media processing system 109, sensor(s) 110, and display device 106. Battery 107 may be, for example, a rechargeable or non-rechargeable lithium or nickel metal hydride battery.


Wireless device 100 may also include audio transducers 108, which may include one or more speakers, and at least one microphone 105A.


Wireless device 100 may also include one or more sensors 110 coupled to the digital processing system 103. The sensor(s) 110 may include, for example, one or more of a proximity sensor, accelerometer, touch input panel, ambient light sensor, ambient noise sensor, temperature sensor, gyroscope, a hinge detector, a position determination device, an orientation determination device, a motion sensor, a sound sensor, a radio frequency electromagnetic wave sensor, and other types of sensors and combinations thereof. Based on the data acquired by the sensor(s) 110, various responses may be performed automatically by the digital processing system, such as, for example, activating or deactivating the backlight 106a, changing a setting of the input device 105 (e.g. switching between processing or not processing, as an intentional user input, any input data from an input device), and other responses and combinations thereof.


In one embodiment, digital RF transceiver 102, digital processing system 103 and/or storage device 104 may include one or more integrated circuits disposed on a printed circuit board (PCB).



FIGS. 7A and 7B illustrate exemplary proximity sensors in accordance with embodiments of the invention. It will be appreciated that, in alternative embodiments, other types of proximity sensors, such as capacitive sensors or sonar-like sensors, may be used rather than the proximity sensors shown in FIGS. 7A and 7B. In FIG. 7A, the proximity sensor 120 includes an emitter 122, a detector 124, and a window 126. The emitter 122 generates light in the infrared (IR) bands, and may be, for example, a Light Emitting Diode (LED). The detector 124 is configured to detect changes in light intensity and may be, for example, a phototransistor. The window 126 may be formed from translucent or semi-translucent material. In one embodiment, the window 126 is an acoustic mesh, such as, for example, a mesh typically found with a microphone or speaker of the portable device. In other embodiments, the window 126 may be MicroPerf, IR transparent strands wound in a mesh, or a cold mirror.


During operation, the light from the emitter 122 hits an object 128 and scatters when the object is present above the window 126. The light from the emitter may be emitted in square wave pulses which have a known frequency, thereby allowing the detector 124 to distinguish between ambient light and light from emitter 122 which is reflected by an object, such as the user's head or ear or a material in a user's pocket, back to the detector 124. At least a portion of the scattered light is reflected towards the detector 124. The increase in light intensity is detected by the detector 124, and this is interpreted by a processing system (not shown in FIG. 7A) to mean an object is present within a short distance of the detector 124. If no object is present or the object is beyond a certain distance from the detector 124, an insufficient or smaller amount of the emitted light is reflected back towards the detector 124, and this is interpreted by the processing system (not shown in FIG. 7A) to mean that an object is not present or is at a relatively large distance. In each case, the proximity sensor is measuring the intensity of reflected light which is related to the distance between the object which reflects the light and detector 124.
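
A hedged sketch of that detection scheme, assuming the detector can be sampled while an emitter pulse is on and again between pulses (the threshold value is invented for illustration and would be calibrated per device in practice):

```python
NEAR_INTENSITY_THRESHOLD = 0.3  # assumed normalized reflected-intensity threshold


def reflected_intensity(samples_emitter_on, samples_emitter_off):
    """Estimate reflected light by subtracting the ambient level measured
    between emitter pulses from the level measured during the pulses."""
    ambient = sum(samples_emitter_off) / len(samples_emitter_off)
    during_pulse = sum(samples_emitter_on) / len(samples_emitter_on)
    return max(0.0, during_pulse - ambient)


def object_is_near(samples_emitter_on, samples_emitter_off):
    """Interpret a large reflected intensity as an object (e.g. the user's
    ear or head) being within a short distance of the detector."""
    return reflected_intensity(samples_emitter_on, samples_emitter_off) >= NEAR_INTENSITY_THRESHOLD
```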


In one embodiment, the emitter 122 and detector 124 are disposed within the housing of a portable device, as described above with reference to FIGS. 2-5B.


In FIG. 7B, the emitter 122 and detector 124 of the proximity sensor are angled inward towards one another to improve detection of the reflected light, but the proximity sensor of FIG. 7B otherwise operates in a manner similar to the proximity sensor of FIG. 7A.


It will be appreciated that at least some of the sensors which are used with embodiments of the inventions may determine or provide data which represents an analog value. In other words, the data represents a value which can be any one of a set of possible values which can vary continuously or substantially continuously, rather than being discrete values which have quantum, discrete jumps from one value to the next value. Further, the value represented by the data may not be predetermined. For example, in the case of a distance measured by a proximity sensor, the distance is not predetermined, unlike values of keys on a keypad which represent a predetermined value. For example, a proximity sensor may determine or provide data that represents a distance which can vary continuously or nearly continuously in an analog fashion; in the case of such a proximity sensor, the distance may correspond to the intensity of reflected light which originated from the emitter of the proximity sensor. A temperature sensor may determine or provide data that represents a temperature, which is an analog value. A light sensor, such as an ambient light sensor, may determine or provide data that represents a light intensity which is an analog value. A motion sensor, such as an accelerometer, may determine or provide data which represents a measurement of motion (e.g. velocity or acceleration or both). A gyroscope may determine or provide data which represents a measurement of orientation (e.g. amount of pitch or yaw or roll). A sound sensor may determine or provide data which represents a measurement of sound intensity. For other types of sensors, the data determined or provided by the sensor may represent an analog value.



FIG. 8 shows a diagram of various inputs from sensors that can be used and actions that can be performed in accordance with at least one embodiment of the invention. Any one of the devices described herein, including the devices shown in FIGS. 2, 3, 4, 5A and 5B, may operate in accordance with the use of artificial intelligence as represented by FIG. 8. One or more inputs on the left side of FIG. 8 are received from various sensors of a device and are input into the artificial intelligence (AI) logic. One or more actions on the right side of FIG. 8 may be implemented by the AI logic automatically in response to any combination of the inputs. In one implementation of this embodiment, the actions are implemented substantially immediately after the data is sensed by one or more sensors.


Exemplary inputs of FIG. 8 may include, for example, proximity data, proximity data and blob detect data (e.g., from a multipoint touch input screen), proximity data and accelerometer data, accelerometer data and blob detect data, proximity data and temperature data, proximity data and ambient light data, and numerous other possible combinations.


Exemplary actions of FIG. 8 may include, for example, turning off the backlight of the portable device's display, suppressing the user's ability to input at the user interface (e.g., locking the input device), changing the telephone's mode, and the like. It will be appreciated that combinations of the above actions may also be implemented by the AI logic. For example, the AI logic may both turn off the display's backlight and suppress the user's ability to input at the user interface.


AI logic of FIG. 8 performs an AI (artificial intelligence) process. In certain embodiments, the AI process may be performed without a specific, intentional user input or without user inputs having predetermined data associated therewith (e.g., key inputs). The artificial intelligence process performed by the AI logic of FIG. 8 may use a variety of traditional AI logic processing, including pattern recognition and/or interpretation of data. For example, the AI logic may receive data from one or more sensors and compare the data to one or more threshold values and, based on those comparisons, determine how to interpret the data. In one embodiment, a threshold value may represent a distance which is compared to a value derived from a light intensity measurement in a proximity sensor. A light intensity measurement which represents a distance larger than the threshold value indicates that the object (which reflected the emitter's light) is not near, and a light intensity measurement which represents a distance smaller than the threshold value indicates that the object is near. Further, the input data may be subject to at least two interpretations (e.g. the data from a proximity sensor indicates that the user's head is near to the sensor, so turn off the back light, or the data from the proximity sensor indicates the user's head is not near, so leave the backlight under the control of a display timer), and the AI process attempts to select from the at least two interpretations to pick an interpretation that predicts a user activity. In response to the interpretation (e.g. the selection of one interpretation), the AI logic causes an action to be performed as indicated in FIG. 8, wherein the action may modify one or more settings of the device. In at least certain embodiments, the AI logic may perform an AI process which interprets the data from one or more sensors (which interpretation requires the AI process to select between at least two possible interpretations) and which selects an action (e.g. modifying a setting of the device) based on both the interpretation of the sensor data and the current state of the device; the method shown in FIG. 11A is an example of the use of information about the current state of the device (e.g. whether the user is currently communicating through the telephone in the device) along with an interpretation of sensor data (proximity data in the case of FIG. 11A).
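
As a concrete, non-authoritative example of such a threshold comparison and interpretation selection, the following sketch assumes the proximity data has already been converted to a distance and that the device knows whether a call is in progress; the threshold and action names are invented:

```python
HEAD_NEAR_THRESHOLD_CM = 2.0  # assumed threshold distance


def interpret_proximity(distance_cm, in_call):
    """Choose between two interpretations of proximity data and map the
    chosen interpretation to an action, taking the current device state
    (whether the user is in a call) into account."""
    if distance_cm <= HEAD_NEAR_THRESHOLD_CM and in_call:
        # Interpretation 1: the user's head or ear is against the device.
        return "turn_off_backlight_and_suppress_input"
    # Interpretation 2: nothing is near, or the user is not in a call;
    # leave the backlight under the control of the display timer.
    return "leave_backlight_to_display_timer"
```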


In certain embodiments, the AI process may perform traditional methods of pattern recognition on the sensor data. For example, the rate of change of the distance between the device and the user's ear may have a pattern (e.g. revealing a deceleration as the user moves the device closer to their ear), and this pattern in the rate of change of distance may be detected by a pattern matching algorithm. The phrase “artificial intelligence” is used throughout to mean that a conclusion (whether explicit or implicit) can be drawn from data available from one or more sensors about a mode of usage by the user of the device. This conclusion may or may not be expressed in the device (e.g., “the user is talking on the phone”) but it will be mapped to specific actions or settings for the device that would be appropriate if the user was using the device in that way. For example, a telephone may be pre-programmed such that whenever it detects (1) a voice being spoken into the microphone, (2) that the phone is connected to a network, and (3) the proximity sensor is active, then the screen backlight will be dimmed. Such pre-programming may involve simple logic (e.g. simple combinatorial logic), but would nonetheless be within the scope of artificial intelligence as used herein. While learning, statistical analysis, iteration, and other complex aspects of AI can be used with the present invention, they are not required for the basic artificial intelligence contemplated. Likewise, the word “analyze” does not imply sophisticated statistical or other analysis, but may involve observation of only a single threshold or datum.
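
The pre-programmed rule in the example above is simple combinatorial logic; rendered literally as a hypothetical Python function, it would be:

```python
def should_dim_backlight(voice_detected, connected_to_network, proximity_active):
    """Combinatorial rule from the example above: dim the screen backlight
    whenever a voice is being spoken into the microphone, the phone is
    connected to a network, and the proximity sensor is active."""
    return voice_detected and connected_to_network and proximity_active
```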


The AI processing, in at least certain embodiments, may be performed by a processor or processing system, such as digital processing system 103, which is coupled to the one or more sensors that provide the data which form the inputs to the AI process. It will be appreciated that an AI process may be part of one or more of the methods shown in FIGS. 10 and 11A-11F.


In at least certain embodiments, the device, which operates according to any of those methods, may have at least one input device (e.g. a keypad or keyboard or touch input panel) which is designed to receive intentional user inputs (e.g. which specify a specific user entry) in addition to one or more sensors which are distinct and separate from the at least one input device and which sensors are not designed to receive intentional user inputs. In fact, a user may not even be aware of the presence of the one or more sensors on the device.



FIGS. 9A-C illustrate exemplary user activities that can be determined based on input data acquired by the one or more sensors of the portable device. Exemplary user activities include, but are not limited to, the user looking directly at the portable device (FIG. 9A), the user holding the portable device at or near their ear (FIG. 9B), the user putting the portable device in a pocket or purse (FIG. 9C), and the like.


Additional information about user activities and/or gestures that can be monitored in accordance with embodiments of the present invention is disclosed in U.S. patent application Ser. No. 10/903,964, titled “GESTURES FOR TOUCH SENSITIVE INPUT DEVICES,” filed Jul. 30, 2004, and U.S. patent application Ser. No. 11/038,590, titled “MODE-BASED GRAPHICAL USER INTERFACES FOR TOUCH SENSITIVE INPUT DEVICES,” filed Jan. 18, 2005, both of which are incorporated herein by reference in their entirety.



FIG. 10 is a flowchart illustrating a method 200 for automatically responding to certain user activities with respect to a portable device. In one embodiment, method 200 includes, but is not limited to, gathering sensor data designed to indicate user activity with respect to a portable device, and executing machine-executable code to perform one or more predetermined automated actions in response to the detection of the user activity.


The method 200 may be performed by any one of the devices shown in FIGS. 2, 3, 4, 5A, 5B, 6 and 12 and may or may not use the artificial intelligence process shown in FIG. 8. Operation 202 gathers sensor data, from one or more sensors; the sensor data provides information about user activity. For example, a proximity sensor may indicate whether the device is near the user's ear; a temperature sensor, an ambient light sensor (or a differential ambient light sensor) and a proximity sensor may together indicate that the device is in the user's pocket; a gyroscope and a proximity sensor may together indicate that the user is looking at the device. In operation 204, the data from the one or more sensors is analyzed; this analysis may be performed by one or more processors within the device, including a processor within one or more of the sensors. The analysis attempts to predict user activity based on the sensor data. It will be appreciated that a prediction from this analysis may, in some cases, be wrong. For example, if a user places a finger over a proximity sensor when the user holds the device, this may cause the analysis to incorrectly conclude that the device is near the user's head or ear. In operation 206, one or more device settings may be adjusted based upon, at least in part, the analysis of the data from the one or more sensors. This adjusting may include changing an illumination setting of the device or other actions described herein.
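
Method 200 is described only at the level of operations 202-206; the sketch below shows one hypothetical way those operations could be strung together. The sensor names, thresholds, and device methods such as `set_backlight` are assumptions made for this example, not part of the patent:

```python
def method_200(sensors, device):
    """Illustrative flow for method 200: gather sensor data (operation 202),
    analyze it to predict user activity (operation 204), and adjust one or
    more device settings (operation 206)."""
    # Operation 202: gather data from one or more sensors.
    readings = {name: sensor.read() for name, sensor in sensors.items()}

    # Operation 204: analyze the data to predict the user's activity; as
    # noted above, the prediction may sometimes be wrong.
    if readings.get("proximity_cm", float("inf")) < 2.0:
        activity = "device_at_ear"
    elif readings.get("ambient_light", 1.0) < 0.05 and readings.get("temperature_c", 20.0) > 30.0:
        activity = "device_in_pocket"
    else:
        activity = "device_in_view"

    # Operation 206: adjust settings based, at least in part, on the analysis.
    if activity in ("device_at_ear", "device_in_pocket"):
        device.set_backlight(False)
        device.suppress_intentional_input(True)
    else:
        device.set_backlight(True)
        device.suppress_intentional_input(False)
    return activity
```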



FIGS. 11A-F illustrate exemplary methods for sensing data and automatically responding to the sensed data, and these methods may be performed by any one of the devices shown in FIGS. 2, 3, 4, 5A, 5B, 6 and 12 and may or may not use the artificial intelligence process shown in FIG. 8. It will be appreciated that several variations can be made to the illustrated methods, including variations to the data sensed, analysis of the data and the response(s) to the sensed data.


The method of FIG. 11A includes optional operation 220 in which the device determines if the user is communicating through the telephone within the device. This may be performed by conventional techniques known in the art which can sense when a telephone call is in progress or when the user is otherwise communicating through the telephone or other communication device. In operation 222, proximity sensor data is received from one or more proximity sensors on the device. Then in operation 224, the proximity sensor data is analyzed. For example, the data is analyzed to determine whether an object, such as the user's ear or head, is near the device. This analysis is used to decide whether and how to adjust the device's settings as shown in operation 226. One or more settings of the device may be automatically adjusted based on the analysis of the proximity sensor data and optionally based on whether or not the user is communicating through the telephone or other communication device. For example, if the proximity sensor indicates that the device is near the user's head or ear and it has been determined that the user is communicating through the telephone, then the device determines that the user is talking or otherwise communicating on the telephone or other communication device by having the device next to the user's ear as shown in FIG. 9B. In this situation, the device automatically changes the manner in which data from one or more input devices is processed, such as suppressing a user's ability to make intentional inputs on an input device, such as a keypad or a touch input panel on the device. In addition to suppressing intentional inputs, the device may automatically adjust a power setting of one or more displays of the device. If, on the other hand, the device determines that the user is not communicating through the telephone while the proximity sensor data indicates that an object is near to the device, the device may decide not to modify an illumination setting of the display and to not suppress the user's ability to enter intentional user inputs on an input device. The suppressing of inputs may occur in one of a variety of ways; for example, inputs may be suppressed by turning off or reducing power to the input device such that it is not operational while in this mode; in another example, inputs may be suppressed while in this mode by not processing any inputs which are received by a fully powered input device; in yet another example, inputs are not processed as intentional inputs but are processed to confirm they are “blobs” resulting from touches or near touches on the input device. In the last example, even though an input appears to be an activation of a key (the “3” button on a keypad) or other user interface item, the input is not processed as an activation of that key but rather is processed to determine whether it is a “blob.”
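
The three suppression styles listed above (powering down the input device, ignoring its inputs, or processing them only as "blobs") can be thought of as distinct input-handling modes. A hypothetical sketch, with invented mode names and an assumed contact-area test for blobs:

```python
from enum import Enum, auto


class InputMode(Enum):
    NORMAL = auto()        # inputs processed as intentional user inputs
    POWERED_DOWN = auto()  # input device powered off or at reduced power
    IGNORED = auto()       # device fully powered, but inputs discarded
    BLOB_ONLY = auto()     # inputs checked only to confirm face "blobs"


def process_touch(event, mode):
    """Route one touch event (a dict) according to the active suppression mode."""
    if mode is InputMode.NORMAL:
        return ("intentional_input", event)
    if mode in (InputMode.POWERED_DOWN, InputMode.IGNORED):
        return ("dropped", None)
    # BLOB_ONLY: do not treat the event as a key activation (e.g. the "3"
    # key); only classify whether it looks like a blob from a face.
    is_blob = event.get("contact_area_cm2", 0.0) > 1.5  # assumed blob threshold
    return ("blob" if is_blob else "dropped", None)
```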



FIG. 11B shows a method of an embodiment of the present inventions which relates to a technique for controlling when data from an input device is processed as an input and when it is ignored as an intentional user input. In operation 230, the device receives movement data from one or more sensors. These sensors may include an accelerometer or a motion sensor or other types of sensors which indicate movement data. These sensors may be designed to distinguish between rapid movements and slow movements. This is particularly true if the movements involve high levels of acceleration. It is assumed in this embodiment that rapid movements may be so rapid that it is unlikely the user could be intending to enter a user input and hence the device may decide to ignore inputs which occur when such sensors indicate that the movement is faster than a threshold movement value. The movement data is analyzed in operation 232 to determine whether or not to automatically suppress a user's ability to input key inputs or other inputs based on the device's movement. In operation 234, the device may automatically suppress a user's ability to enter inputs on an input device in response to the analysis in operation 232.
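
A minimal sketch of the FIG. 11B decision, assuming the movement data has been reduced to a single acceleration magnitude and using a made-up threshold:

```python
MOVEMENT_THRESHOLD = 15.0  # assumed acceleration magnitude in m/s^2


def suppress_inputs_for_movement(acceleration_magnitude):
    """Operations 232 and 234: if the device is moving faster than a
    threshold, assume the user cannot be intentionally entering inputs
    and suppress inputs from the input device."""
    return acceleration_magnitude > MOVEMENT_THRESHOLD
```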



FIG. 11C relates to an embodiment of the present inventions in which data relating to a location of the device and data relating to movement of the device are analyzed to determine whether or not to adjust one or more settings of the device. In operation 260, data relating to the location of the device is received; this data may, for example, be provided by a proximity sensor. In operation 262, data relating to device movement is also received. This data may be from a motion sensor or from an accelerometer. In operation 264, the data relating to location and the data relating to device movement are analyzed to determine whether or not to adjust a setting of the device. This analysis may be performed in a variety of different ways. For example, the data relating to device motion may show a pattern of movement which matches the movement which occurs when a user moves the device from the user's pocket to the user's head. The analysis may further determine that the proximity data or other data relating to location showed that the device was not near the user's head or another object until near the end of the movement. In such a situation, the analysis would determine that the user has pulled the device from their pocket and placed it against the user's ear. In operation 266, one or more settings of the device are adjusted automatically, without any intentional user input, based upon the analysis. For example, an adjustment may be made in the manner in which data from an input device, such as a touch input panel, is processed. For example, inputs to the input device are not processed as intentional user inputs, effectively suppressing the inputs. In addition, a display's illumination setting may be adjusted. For example, if the analysis of operation 264 determines the user has moved the device from a location away from the ear to a location close to the ear then, in one embodiment, an illumination setting may be adjusted and the user's ability to enter intentional inputs into an input device may be suppressed.
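
One hypothetical way to test for the pocket-to-ear pattern described above is to require that the proximity sensor reported "near" only toward the end of a detected movement; representing the proximity history as a list of booleans is an assumption made for this sketch:

```python
def pulled_from_pocket_to_ear(movement_detected, proximity_near_history):
    """Rough test for the FIG. 11C pattern: the device moved, and proximity
    reported 'near' only toward the end of the movement.
    `proximity_near_history` is a chronological list of booleans sampled
    while the movement was in progress (an assumed representation)."""
    if not movement_detected or len(proximity_near_history) < 4:
        return False
    tail = proximity_near_history[-2:]   # samples at the end of the movement
    head = proximity_near_history[:-2]   # samples earlier in the movement
    return all(tail) and not any(head)
```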



FIG. 11D shows an embodiment of the present inventions in which data relating to location and data relating to temperature are processed through an analysis to determine whether or not to adjust one or more settings of the device. In operation 270, data relating to location, such as data from a proximity sensor, is received. In operation 272, data relating to temperature, such as temperature data or temperature differential data, is received. In operation 274, the data relating to location and the data relating to temperature are analyzed to determine whether to adjust one or more settings of the device. In operation 276, one or more device settings are adjusted in response to the analysis of operation 274.



FIG. 11E shows an embodiment of the present inventions in which data relating to location of a device and data relating to touches on a touch input panel of the device are analyzed to determine whether to adjust a setting of the device. In this embodiment, data relating to location of the device is received in operation 290 and data relating to touches on a touch input panel is received in operation 292. The data relating to location may be from a proximity sensor. The data relating to touches on a touch input panel may be from a multi-point touch input panel which is capable of detecting multiple point touches which may occur when a user's face is pressed against or is otherwise near the touch input panel. In operation 294, the data relating to location and the data relating to touches are analyzed to determine whether to adjust a setting of the device. As a result of this analysis, in operation 296, one or more device settings are adjusted. For example, the adjustment may include automatically reducing power to the backlight of a display or changing the manner in which data from the touch input panel is processed, or both adjustments.


A mode of the device may be used in order to determine whether or how to adjust a setting of the device. The mode of the device may include any one of a variety of modes or conditions, such as speakerphone mode or non-speakerphone mode, battery powered mode or not battery powered mode, call waiting mode or not call waiting mode, an alert mode in which the device may make a sound, such as the sound of an alarm, etc. The data relating to user activity (e.g. data from one or more sensors, such as a proximity sensor and/or a touch input panel, which is capable of detecting blobs from a face) is analyzed relative to the mode of the device, and the analysis attempts to determine whether to adjust a setting of the device. One or more device settings may be adjusted based on the sensed user activity and the device mode. For example, the device may automatically switch from speakerphone mode to non-speakerphone mode when proximity data, and optionally other data (e.g. data from a motion sensor and an ambient light sensor), indicate the user has placed the device, which in this case may be a telephone, next to the user's ear. In this example, the device has automatically switched from speakerphone mode to non-speakerphone mode without any intentional input from the user which indicates that the switch should occur. Another method involves adjusting an alert or alarm volume depending on whether or not the device is near to the user's ear. In this example, if the data relating to user activity indicates that the device is adjacent to the user's ear and if the mode of the device is set such that alarms or alerts will cause the device to make a sound, then the device will automatically change the volume level for an alert or an alarm from a first level to a second level which is not as loud as the first level.
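
As an illustration of these mode-aware adjustments, the sketch below assumes the device mode is available as a simple dictionary and returns the setting changes the analysis would apply; the names are invented for this example:

```python
def adjust_for_mode(device_mode, ear_is_near):
    """Return illustrative setting changes based on sensed user activity and
    the current device mode: leave speakerphone mode when the device is at
    the ear, and quiet alerts that would otherwise sound next to it."""
    changes = {}
    if ear_is_near and device_mode.get("speakerphone"):
        changes["speakerphone"] = False        # switch to the earpiece
    if ear_is_near and device_mode.get("alerts_audible"):
        changes["alert_volume"] = "reduced"    # second, quieter volume level
    return changes
```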



FIG. 11F shows an embodiment of the inventions in which data from a device configuration detector, such as a hinge detector, is used to determine how to process data from one or more sensors on the device. In one embodiment, the method shown in FIG. 11F may be used with the device shown in FIGS. 5A and 5B (and the proximity sensor referred to in FIG. 11F may be proximity sensor 84 in FIG. 5A). In particular, a hinge detector which is coupled to the hinge 87 may detect whether the device is open as shown in FIG. 5A or closed as shown in FIG. 5B. Other configuration detectors may indicate whether a slide out input device (e.g. a slide out keyboard) or other type of input device has been pulled out (or swung out) from a portion of the device. In operation 320, the device determines whether data from a hinge detector shows that the device is open. If the device is not open, then in operation 322, data from a proximity sensor is ignored if the proximity sensor is disposed on an interior surface of the device. Optionally, the power to the proximity sensor may be reduced by, for example, turning off the proximity sensor when the device is in a closed state. If it is determined in operation 320 that the device is open, then in operation 324, data from the proximity sensor is processed to determine whether the device is placed near an object, such as the user's ear. If it is determined from the processing of operation 324 that the device is not near the user's ear, then the display timer, which controls the time that the display is illuminated, is allowed to continue to run in operation 326. This display timer may be similar to the display timer which operates in the method shown in FIG. 1. If, in operation 324, it is determined that the device is near the user's ear, then in operation 328, power to an illuminator of the display is reduced. This may be performed by setting the display timer's value to a time out state to thereby cause the display's illuminator to be powered off. It will be appreciated that the method of FIG. 11F may save additional battery life by reducing power to the illuminator of the display before the display timer runs out.
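
The decision flow of FIG. 11F can be paraphrased in a few lines of Python; the DisplayTimer class and its force_timeout method below are hypothetical stand-ins for the display timer and illuminator control described above, not an actual implementation from the disclosure.

    # Hypothetical sketch of the FIG. 11F flow using a hinge (configuration)
    # detector to gate how proximity data is processed.

    class DisplayTimer:
        def __init__(self, timeout_s=10.0):
            self.remaining_s = timeout_s
            self.illuminator_on = True

        def force_timeout(self):
            # Set the timer to its time-out state so the illuminator powers off.
            self.remaining_s = 0.0
            self.illuminator_on = False

    def handle_proximity(hinge_open, near_ear, timer):
        if not hinge_open:          # operation 320: the device is closed
            return                  # operation 322: ignore (or power down) the interior sensor
        if near_ear:                # operation 324: an object such as an ear is detected
            timer.force_timeout()   # operation 328: cut illuminator power before time out
        # otherwise (operation 326) the display timer simply continues to run

    timer = DisplayTimer()
    handle_proximity(hinge_open=True, near_ear=True, timer=timer)
    # timer.illuminator_on is now False, saving battery before the normal time out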


It will be appreciated that a method which is similar to the method shown in FIG. 1 may be used in addition to at least certain embodiments of the inventions which adjust illumination settings. For example, in the embodiment shown in FIG. 11A, a display timer which has been started (e.g. in operation 14 of FIG. 1) may continue to count while the method shown in FIG. 11A is performed. The display timer will count, while the method of FIG. 11A is being performed, until its time out state is reached and, upon reaching that state, may cause the illumination setting to be changed before the method of FIG. 11A is completed. In this case, the illumination setting is controlled by both the display timer and one or more sensors of at least certain embodiments of the inventions which cause an adjusting of illumination settings based upon the analysis of data from one or more sensors.
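
A compact way to picture this combined control is a single update step in which either the timer expiring or the sensor analysis can dim the display; the field names below are illustrative assumptions only.

    # Hypothetical sketch: the illumination setting is controlled by both a
    # display timer and sensor-based analysis, whichever triggers first.

    def update_illumination(device, elapsed_s, sensor_indicates_near_ear):
        device["timer_remaining_s"] -= elapsed_s
        if device["timer_remaining_s"] <= 0 or sensor_indicates_near_ear:
            device["illuminator_on"] = False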


The phrase “proximity sensor” is used throughout to mean a sensor, such as a capacitive, temperature, inductive, infrared or other variety of sensor, which is capable of detecting whether an object is present within a certain distance of the sensor. A primary object of this detecting may be the head of the user (or any other object that would prevent viewing of the display screen).


Any of the embodiments of the inventions may include one or more user interface controls which allow a user to override a result caused by one or more sensors. For example, a control, such as a button, may be pressed by the user to cause the display to return to full power after a proximity sensor has caused the display to enter a reduced power consumption state. In another example, the user interface control may be a sensor (or group of sensors), such as an accelerometer, which detects a user interaction with the device (e.g. shaking the device), and the user interaction has been set up to cause an overriding of a state caused by one or more sensors.
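
Such an override might look like the following sketch, in which either a button press or a detected shake restores the display; the field names are hypothetical and chosen only to mirror the examples above.

    # Hypothetical sketch of a user-interface override of a sensor-driven state.

    def apply_override(device, button_pressed, shake_detected):
        if device.get("dimmed_by_sensor") and (button_pressed or shake_detected):
            device["dimmed_by_sensor"] = False
            device["illuminator_on"] = True   # return the display to full power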


Certain embodiments of the inventions may employ one or more light sensors which provide data relating to light, which data is analyzed to determine whether or not to adjust one or more settings of a device, such as wireless device 100. Ambient light level data may be provided by an ambient light sensor which indicates the level of light intensity surrounding that sensor. Ambient light differential data may be obtained from two or more ambient light sensors which are disposed at different positions on the device. For example, one ambient light sensor may be on one side of the device, and another ambient light sensor may be on another side of the device. A difference in the light intensity levels may be determined by comparing the data from these two ambient light sensors on two different sides or surfaces of the device. There are a variety of possible uses of a light sensor. A light sensor may be used with a proximity sensor to determine when a device is placed in a pocket to cause the device to be set in vibrate mode only or vibrate mode with audible ringing. In another example, in response to a light sensor determining that the ambient light is very low, and optionally in response to a user having set the device to visibly light up to show an incoming call when the ambient light is very low, the device may automatically be put in a “light ring” mode when it is dark so that instead of an audible ring from the device, the display flashes visibly (e.g. by repeatedly turning on and off the backlight) to indicate an incoming call. Another exemplary use of a light sensor involves using it as an alarm indicating that a dark room (or environment) has become brighter (e.g. the sun has risen or a door to a darkened room is opened to let light into the room). A light sensor may also be used to cause a device to automatically act as a source of light (e.g. as a flashlight, in effect) upon sensing a low ambient light level.
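
The light-sensor behaviors described above might be sketched as follows; the lux threshold, setting names, and priority order are assumptions made for illustration, not values from the disclosure.

    # Hypothetical sketch of several light-sensor uses: ambient light
    # differential, pocket detection, a "light ring" mode, and a flashlight mode.

    DARK_LUX = 5.0   # assumed threshold for "very low" ambient light

    def ambient_light_differential(lux_side_a, lux_side_b):
        # Compare two ambient light sensors on different surfaces of the device.
        return abs(lux_side_a - lux_side_b)

    def adjust_for_light(device, lux, proximity_near, incoming_call):
        if lux >= DARK_LUX:
            return
        if proximity_near:
            device["ring_mode"] = "vibrate"        # likely in a pocket
        elif incoming_call and device.get("light_ring_enabled"):
            device["ring_mode"] = "light_ring"     # flash the backlight instead of ringing
        elif device.get("flashlight_when_dark"):
            device["backlight_level"] = "max"      # act, in effect, as a flashlight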



FIG. 12 shows another example of a device according to an embodiment of the inventions. This device may include a processor, such as microprocessor 402, and a memory 404, which are coupled to each other through a bus 406. The device 400 may optionally include a cache 408 which is coupled to the microprocessor 402. This device may also optionally include a display controller and display device 410 which is coupled to the other components through the bus 406. One or more input/output controllers 412 are also coupled to the bus 406 to provide an interface for input/output devices 414 and to provide an interface for one or more sensors 416 which are for sensing user activity. The bus 406 may include one or more buses connected to each other through various bridges, controllers, and/or adapters as is well known in the art. The input/output devices 414 may include a keypad or keyboard or a cursor control device such as a touch input panel. Furthermore, the input/output devices 414 may include a network interface which is either for a wired network or a wireless network (e.g. an RF transceiver). The sensors 416 may be any one of the sensors described herein including, for example, a proximity sensor or an ambient light sensor. In at least certain implementations of the device 400, the microprocessor 402 may receive data from one or more sensors 416 and may perform the analysis of that data in the manner described herein. For example, the data may be analyzed through an artificial intelligence process or in the other ways described herein. As a result of that analysis, the microprocessor 402 may then automatically cause an adjustment in one or more settings of the device.
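
The composition in FIG. 12 might be modeled loosely as below, with the processor polling the sensors through the input/output path and adjusting settings after analysis; the class name, stubbed sensor, and threshold are illustrative assumptions only.

    # Hypothetical structural sketch of device 400: a processor that reads data
    # from one or more sensors (416) and automatically adjusts settings.

    class Device400:
        def __init__(self, sensors):
            self.sensors = sensors        # callables standing in for sensors 416
            self.settings = {"backlight_on": True}

        def poll_and_adjust(self):
            # Microprocessor 402 receives the sensor data and analyzes it; as a
            # result of that analysis it may adjust one or more settings.
            readings = [read() for read in self.sensors]
            if readings and min(readings) < 3.0:   # e.g. proximity below a threshold
                self.settings["backlight_on"] = False

    device = Device400(sensors=[lambda: 1.5])      # a stubbed proximity reading in cm
    device.poll_and_adjust()
    # device.settings["backlight_on"] is now False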


In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the invention as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

Claims
  • 1. A method, comprising: at an electronic device with one or more processors, memory, and a touch-sensitive display: receiving data relating to at least one of device movement and device proximity; receiving a touch input at a location on the touch-sensitive display; processing the received data relating to the at least one of device movement and device proximity to determine whether the electronic device is in one of a first or second state based on the received data relating to at least one of device movement and device proximity; if it is determined that the electronic device is in the first state, processing the touch input, including determining the location the touch input was received on the touch-sensitive display, and performing a function associated with the location of the processed touch input; and if it is determined that the electronic device is in the second state, processing the touch input, including determining the location the touch input was received on the touch-sensitive display, and forgoing the function associated with the location of the processed touch input.
  • 2. The method of claim 1, wherein the data relating to device proximity represents at least one of a distance and a rate of change of distance.
  • 3. The method of claim 1, wherein the data related to device proximity corresponds to a location of the electronic device relative to a user.
  • 4. The method of claim 3, wherein the second state occurs when the data relating to device proximity indicates the electronic device is proximate to the user.
  • 5. The method of claim 1, wherein the data relating to device movement represents a speed of movement of the electronic device.
  • 6. The method of claim 5, wherein: it is determined that the electronic device is in the first state when the speed of movement is less than a threshold movement value; and it is determined that the electronic device is in the second state when the speed of movement is greater than the threshold movement value.
  • 7. The method of claim 1, further comprising: in response to detecting a first change in the received data relating to the at least one of device movement and device proximity, determining that the device has changed from the second state to the first state.
  • 8. The method of claim 7, further comprising: in response to detecting a second change in the received data relating to the at least one of device movement and device proximity, determining that the device has changed from the first state to the second state.
  • 9. The method of claim 1, wherein the data relating to device movement is received from an accelerometer.
  • 10. The method of claim 1, wherein the data relating to device proximity is received from a proximity sensor.
  • 11. A non-transitory computer readable storage medium storing executable program instructions, which, when executed by an electronic device with one or more processors, memory, and a touch-sensitive display, cause the electronic device to: receive data relating to at least one of device movement and device proximity; receive a touch input at a location on the touch-sensitive display; process the received data relating to the at least one of device movement and device proximity to determine whether the electronic device is in one of a first or second state based on the received data relating to at least one of device movement and device proximity; if it is determined that the electronic device is in the first state, process the touch input, including determining the location the touch input was received on the touch-sensitive display, and perform a function associated with the location of the processed touch input; and if it is determined that the electronic device is in the second state, process the touch input, including determining the location the touch input was received on the touch-sensitive display, and forgo the function associated with the location of the processed touch input.
  • 12. The non-transitory computer readable storage medium of claim 11, wherein the data relating to device proximity represents at least one of a distance and a rate of change of distance.
  • 13. The non-transitory computer readable storage medium of claim 11, wherein the data related to device proximity corresponds to a location of the electronic device relative to a user.
  • 14. The non-transitory computer readable storage medium of claim 11, wherein the data relating to device movement represents a speed of movement of the electronic device.
  • 15. The non-transitory computer readable storage medium of claim 11, wherein the executable program instructions further cause the electronic device to: in response to detecting a first change in the received data relating to the at least one of device movement and device proximity, determine that the device has changed from the second state to the first state.
  • 16. An electronic device, comprising: one or more processors; a touch-sensitive display; and memory storing one or more programs to be executed by the one or more processors, the one or more programs comprising instructions for: receiving data relating to at least one of device movement and device proximity; receiving a touch input at a location on the touch-sensitive display; processing the received data relating to the at least one of device movement and device proximity to determine whether the electronic device is in one of a first or second state based on the received data relating to at least one of device movement and device proximity; if it is determined that the electronic device is in the first state, processing the touch input, including determining the location the touch input was received on the touch-sensitive display, and performing a function associated with the location of the processed touch input; and if it is determined that the electronic device is in the second state, processing the touch input, including determining the location the touch input was received on the touch-sensitive display, and forgoing the function associated with the location of the processed touch input.
  • 17. The electronic device of claim 16, wherein the data relating to device proximity represents at least one of a distance and a rate of change of distance.
  • 18. The electronic device of claim 16, wherein the data related to device proximity corresponds to a location of the electronic device relative to a user.
  • 19. The electronic device of claim 16, wherein the data relating to device movement represents a speed of movement of the electronic device.
  • 20. The electronic device of claim 16, wherein the one or more programs further comprise instructions to: in response to detecting a first change in the received data relating to the at least one of device movement and device proximity, determine that the device has changed from the second state to the first state.
  • 21. The method of claim 1, wherein: a user interface item is located at the location on the touch-sensitive display, determining the location the touch input was received on the touch-sensitive display comprises determining the user interface item located at the location on the touch-sensitive display, and the function is associated with the user interface item.
  • 22. The non-transitory computer readable storage medium of claim 11, wherein: a user interface item is located at the location on the touch-sensitive display, determining the location the touch input was received on the touch-sensitive display comprises determining the user interface item located at the location on the touch-sensitive display, and the function is associated with the user interface item.
  • 23. The electronic device of claim 16, wherein a user interface item is located at the location on the touch-sensitive display, determining the location the touch input was received on the touch-sensitive display comprises determining the user interface item located at the location on the touch-sensitive display, and the function is associated with the user interface item.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 15/207,260, filed Jul. 11, 2016, which is a continuation of U.S. application Ser. No. 14/137,837, filed Dec. 20, 2013, which is a continuation of U.S. application Ser. No. 12/613,439, filed Nov. 5, 2009, which is a divisional of U.S. application Ser. No. 11/586,862 filed Oct. 24, 2006, now U.S. Pat. No. 7,633,076, which is a continuation-in-part of U.S. application Ser. No. 11/241,839, filed Sep. 30, 2005, now U.S. Pat. No. 7,653,883, and U.S. application Ser. No. 11/240,788, filed Sep. 30, 2005, now U.S. Pat. No. 8,381,135, which are incorporated herein by reference in their entireties.

US Referenced Citations (665)
Number Name Date Kind
3704345 Coker et al. Nov 1972 A
3828132 Flanagan et al. Aug 1974 A
3979557 Schulman et al. Sep 1976 A
4278838 Antonov Jul 1981 A
4282405 Taguchi Aug 1981 A
4310721 Manley et al. Jan 1982 A
4348553 Baker et al. Sep 1982 A
4653021 Takagi Mar 1987 A
4688195 Thompson et al. Aug 1987 A
4692941 Jacks et al. Sep 1987 A
4718094 Bahl et al. Jan 1988 A
4724542 Williford Feb 1988 A
4726065 Froessl Feb 1988 A
4727354 Lindsay Feb 1988 A
4776016 Hansen Oct 1988 A
4783807 Marley Nov 1988 A
4811243 Racine Mar 1989 A
4819271 Bahl et al. Apr 1989 A
4827520 Zeinstra May 1989 A
4829576 Porter May 1989 A
4833712 Bahl et al. May 1989 A
4839853 Deerwester et al. Jun 1989 A
4852168 Sprague Jul 1989 A
4862504 Nomura Aug 1989 A
4878230 Murakami et al. Oct 1989 A
4903305 Gillick et al. Feb 1990 A
4905163 Garber et al. Feb 1990 A
4914586 Swinehart et al. Apr 1990 A
4944013 Gouvianakis et al. Jul 1990 A
4965763 Zamora Oct 1990 A
4974191 Amirghodsi et al. Nov 1990 A
4977598 Doddington et al. Dec 1990 A
4992972 Brooks et al. Feb 1991 A
5010574 Wang Apr 1991 A
5020112 Chou May 1991 A
5021971 Lindsay Jun 1991 A
5022081 Hirose et al. Jun 1991 A
5027406 Roberts et al. Jun 1991 A
5031217 Nishimura Jul 1991 A
5032989 Tornetta Jul 1991 A
5040218 Vitale et al. Aug 1991 A
5072452 Brown et al. Dec 1991 A
5091945 Kleijn Feb 1992 A
5095945 Jensen Mar 1992 A
5103085 Zimmerman Apr 1992 A
5127053 Koch Jun 1992 A
5127055 Larkey Jun 1992 A
5128672 Kaehler Jul 1992 A
5133011 McKiel, Jr. Jul 1992 A
5142584 Ozawa Aug 1992 A
5164900 Bernath Nov 1992 A
5165007 Bahl et al. Nov 1992 A
5179652 Rozmanith et al. Jan 1993 A
5194950 Murakami et al. Mar 1993 A
5199077 Wilcox et al. Mar 1993 A
5202952 Gillick et al. Apr 1993 A
5208862 Ozawa May 1993 A
5216747 Hardwick et al. Jun 1993 A
5220639 Lee Jun 1993 A
5220657 Bly et al. Jun 1993 A
5222146 Bahl et al. Jun 1993 A
5230036 Akamine et al. Jul 1993 A
5231381 Duwaer Jul 1993 A
5235680 Bijnagte Aug 1993 A
5267345 Brown et al. Nov 1993 A
5268990 Cohen et al. Dec 1993 A
5270818 Ottenstein Dec 1993 A
5282265 Rohra et al. Jan 1994 A
RE34562 Murakami et al. Mar 1994 E
5291286 Murakami et al. Mar 1994 A
5293448 Honda Mar 1994 A
5293452 Picone et al. Mar 1994 A
5297170 Eyuboglu et al. Mar 1994 A
5301109 Landauer et al. Apr 1994 A
5303406 Hansen et al. Apr 1994 A
5317507 Gallant May 1994 A
5317647 Pagallo May 1994 A
5325297 Bird et al. Jun 1994 A
5325298 Gallant Jun 1994 A
5327498 Hamon Jul 1994 A
5333236 Bahl et al. Jul 1994 A
5333275 Wheatley et al. Jul 1994 A
5345536 Hoshimi et al. Sep 1994 A
5349645 Zhao Sep 1994 A
5353377 Kuroda et al. Oct 1994 A
5377301 Rosenberg et al. Dec 1994 A
5384892 Strong Jan 1995 A
5384893 Hutchins Jan 1995 A
5386494 White Jan 1995 A
5386556 Hedin et al. Jan 1995 A
5390279 Strong Feb 1995 A
5396625 Parkes Mar 1995 A
5400434 Pearson Mar 1995 A
5406305 Shimomura et al. Apr 1995 A
5424947 Nagao et al. Jun 1995 A
5434777 Luciw Jul 1995 A
5455888 Iyengar et al. Oct 1995 A
5469529 Bimbot et al. Nov 1995 A
5475587 Anick et al. Dec 1995 A
5479488 Lennig et al. Dec 1995 A
5483261 Yasutake Jan 1996 A
5488204 Mead et al. Jan 1996 A
5491772 Hardwick et al. Feb 1996 A
5502790 Yi Mar 1996 A
5502791 Nishimura et al. Mar 1996 A
5515475 Gupta et al. May 1996 A
5528266 Arbeitman et al. Jun 1996 A
5536902 Serra et al. Jul 1996 A
5574823 Hassanein et al. Nov 1996 A
5577241 Spencer Nov 1996 A
5579436 Chou et al. Nov 1996 A
5581655 Cohen et al. Dec 1996 A
5596676 Swaminathan et al. Jan 1997 A
5608624 Luciw Mar 1997 A
5613036 Strong Mar 1997 A
5617507 Lee et al. Apr 1997 A
5621859 Schwartz et al. Apr 1997 A
5642464 Yue et al. Jun 1997 A
5642519 Martin Jun 1997 A
5664055 Kroon Sep 1997 A
5675819 Schuetze Oct 1997 A
5682539 Conrad et al. Oct 1997 A
5684294 Kouhi Nov 1997 A
5687077 Gough, Jr. Nov 1997 A
5712957 Waibel et al. Jan 1998 A
5727950 Cook et al. Mar 1998 A
5729694 Holzrichter et al. Mar 1998 A
5732390 Katayanagi et al. Mar 1998 A
5734791 Acero et al. Mar 1998 A
5748974 Johnson May 1998 A
5786801 Ichise Jul 1998 A
5790978 Olive et al. Aug 1998 A
5794050 Dahlgren et al. Aug 1998 A
5794182 Manduchi et al. Aug 1998 A
5799276 Komissarchik et al. Aug 1998 A
5825352 Bisset et al. Oct 1998 A
5826261 Spencer Oct 1998 A
5828999 Bellegarda et al. Oct 1998 A
5835079 Shieh Nov 1998 A
5835893 Ushioda Nov 1998 A
5839106 Bellegarda Nov 1998 A
5860063 Gorin et al. Jan 1999 A
5864806 Mokbel et al. Jan 1999 A
5867799 Lang et al. Feb 1999 A
5873056 Liddy et al. Feb 1999 A
5880411 Gillespie et al. Mar 1999 A
5884156 Gordon Mar 1999 A
5894298 Hoeksma Apr 1999 A
5895466 Goldberg et al. Apr 1999 A
5899972 Miyazawa et al. May 1999 A
5913193 Huang et al. Jun 1999 A
5915249 Spencer Jun 1999 A
5943670 Prager Aug 1999 A
5952992 Helms Sep 1999 A
5987404 Della Pietra et al. Nov 1999 A
6016471 Kuhn et al. Jan 2000 A
6029132 Kuhn et al. Feb 2000 A
6038533 Buchsbaum et al. Mar 2000 A
6052656 Suda et al. Apr 2000 A
6064960 Bellegarda et al. May 2000 A
6081750 Hoffberg et al. Jun 2000 A
6088731 Kiraly et al. Jul 2000 A
6108028 Skarbo et al. Aug 2000 A
6108627 Sabourin Aug 2000 A
6122616 Henton Sep 2000 A
6144938 Surace et al. Nov 2000 A
6173261 Arai et al. Jan 2001 B1
6188391 Seely et al. Feb 2001 B1
6188999 Moody Feb 2001 B1
6195641 Loring et al. Feb 2001 B1
6208330 Hasegawa et al. Mar 2001 B1
6208854 Roberts et al. Mar 2001 B1
6208971 Bellegarda et al. Mar 2001 B1
6233559 Balakrishnan May 2001 B1
6246862 Grivas et al. Jun 2001 B1
6246981 Papineni et al. Jun 2001 B1
6266637 Donovan et al. Jul 2001 B1
6285786 Seni et al. Sep 2001 B1
6289453 Walker et al. Sep 2001 B1
6308149 Gaussier et al. Oct 2001 B1
6310610 Beaton et al. Oct 2001 B1
6317594 Gossman et al. Nov 2001 B1
6317707 Bangalore et al. Nov 2001 B1
6317831 King Nov 2001 B1
6321092 Fitch et al. Nov 2001 B1
6323846 Westerman et al. Nov 2001 B1
6334103 Surace et al. Dec 2001 B1
6356854 Schubert et al. Mar 2002 B1
6366883 Campbell et al. Apr 2002 B1
6366884 Bellegarda et al. Apr 2002 B1
6373612 Hoffman et al. Apr 2002 B1
6421672 McAllister et al. Jul 2002 B1
6434524 Weber Aug 2002 B1
6446076 Burkey et al. Sep 2002 B1
6453292 Ramaswamy et al. Sep 2002 B2
6466654 Cooper et al. Oct 2002 B1
6477374 Shaffer et al. Nov 2002 B1
6477488 Bellegarda Nov 2002 B1
6487534 Thelen et al. Nov 2002 B1
6499013 Weber Dec 2002 B1
6501937 Ho et al. Dec 2002 B1
6505158 Conkie Jan 2003 B1
6513063 Julia et al. Jan 2003 B1
6520013 Wehrenberg Feb 2003 B1
6522697 Spickermann Feb 2003 B1
6523061 Halverson et al. Feb 2003 B1
6526395 Morris Feb 2003 B1
6532444 Weber Mar 2003 B1
6532446 King Mar 2003 B1
6553344 Bellegarda et al. Apr 2003 B2
6583676 Krah et al. Jun 2003 B2
6598039 Livowsky Jul 2003 B1
6601012 Horvitz et al. Jul 2003 B1
6601026 Appelt et al. Jul 2003 B2
6604059 Strubbe et al. Aug 2003 B2
6615172 Bennett et al. Sep 2003 B1
6615175 Gazdzinski Sep 2003 B1
6631346 Karaorman et al. Oct 2003 B1
6633846 Bennett et al. Oct 2003 B1
6647260 Dusse et al. Nov 2003 B2
6650735 Burton et al. Nov 2003 B2
6654740 Tokuda et al. Nov 2003 B2
6664744 Dietz Dec 2003 B2
6665639 Mozer et al. Dec 2003 B2
6665640 Bennett et al. Dec 2003 B1
6665641 Coorman et al. Dec 2003 B1
6684187 Conkie Jan 2004 B1
6690387 Zimmerman et al. Feb 2004 B2
6691111 Lazaridis et al. Feb 2004 B2
6691151 Cheyer et al. Feb 2004 B1
6697780 Beutnagel et al. Feb 2004 B1
6735632 Kiraly et al. May 2004 B1
6742021 Halverson et al. May 2004 B1
6757362 Cooper et al. Jun 2004 B1
6757718 Halverson et al. Jun 2004 B1
6778951 Contractor Aug 2004 B1
6778952 Bellegarda Aug 2004 B2
6778962 Kasai et al. Aug 2004 B1
6792082 Levine Sep 2004 B1
6803920 Gossett et al. Oct 2004 B2
6807574 Partovi et al. Oct 2004 B1
6810379 Vermeulen et al. Oct 2004 B1
6812466 O'Connor et al. Nov 2004 B2
6813491 McKinney Nov 2004 B1
6822635 Shahoian et al. Nov 2004 B2
6832194 Mozer et al. Dec 2004 B1
6842767 Partovi et al. Jan 2005 B1
6847966 Sommer et al. Jan 2005 B1
6851115 Cheyer et al. Feb 2005 B1
6856259 Sharp Feb 2005 B1
6859931 Cheyer et al. Feb 2005 B1
6895380 Sepe, Jr. May 2005 B2
6895558 Loveland May 2005 B1
6912499 Sabourin et al. Jun 2005 B1
6928614 Everhart Aug 2005 B1
6937975 Elworthy Aug 2005 B1
6937986 Denenberg et al. Aug 2005 B2
6947017 Gettemy Sep 2005 B1
6947571 Rhoads et al. Sep 2005 B1
6956564 Williams Oct 2005 B1
6964023 Maes et al. Nov 2005 B2
6980949 Ford Dec 2005 B2
6980955 Okutani et al. Dec 2005 B2
6985865 Packingham et al. Jan 2006 B1
6988071 Gazdzinski Jan 2006 B1
6996531 Korall et al. Feb 2006 B2
6999927 Mozer et al. Feb 2006 B2
7015894 Morohoshi Mar 2006 B2
7016705 Bahl et al. Mar 2006 B2
7019622 Orr et al. Mar 2006 B2
7020685 Chen et al. Mar 2006 B1
7027974 Busch et al. Apr 2006 B1
7036128 Julia et al. Apr 2006 B1
7050977 Bennett May 2006 B1
7058569 Coorman et al. Jun 2006 B2
7062428 Hogenhout et al. Jun 2006 B2
7069560 Cheyer et al. Jun 2006 B1
7092887 Mozer et al. Aug 2006 B2
7092928 Elad et al. Aug 2006 B1
7093693 Gazdzinski Aug 2006 B1
7117021 Shearer et al. Oct 2006 B2
7117380 Kangas Oct 2006 B2
7127046 Smith et al. Oct 2006 B1
7136710 Hoffberg et al. Nov 2006 B1
7137126 Coffman et al. Nov 2006 B1
7139714 Bennett et al. Nov 2006 B2
7139722 Perrella et al. Nov 2006 B2
7171221 Amin et al. Jan 2007 B1
7177664 Weinzweig et al. Feb 2007 B2
7177798 Hsu et al. Feb 2007 B2
7184064 Zimmerman et al. Feb 2007 B2
7197460 Gupta et al. Mar 2007 B1
7200559 Wang Apr 2007 B2
7203646 Bennett Apr 2007 B2
7209719 Liebenow Apr 2007 B2
7216073 Lavi et al. May 2007 B2
7216080 Tsiao et al. May 2007 B2
7225125 Bennett et al. May 2007 B2
7233790 Kjellberg et al. Jun 2007 B2
7233904 Luisi Jun 2007 B2
7266496 Wang et al. Sep 2007 B2
7277854 Bennett et al. Oct 2007 B2
7290039 Lisitsa et al. Oct 2007 B1
7299033 Kjellberg et al. Nov 2007 B2
7310600 Garner et al. Dec 2007 B1
7324947 Jordan et al. Jan 2008 B2
7349953 Lisitsa et al. Mar 2008 B2
7376556 Bennett May 2008 B2
7376645 Bernard May 2008 B2
7379874 Schmid et al. May 2008 B2
7386449 Sun et al. Jun 2008 B2
7392185 Bennett Jun 2008 B2
7398209 Kennewick et al. Jul 2008 B2
7403938 Harrison et al. Jul 2008 B2
7409337 Potter et al. Aug 2008 B1
7415100 Cooper et al. Aug 2008 B2
7418392 Mozer et al. Aug 2008 B1
7426467 Nashida et al. Sep 2008 B2
7427024 Gazdzinski et al. Sep 2008 B1
7447635 Konopka et al. Nov 2008 B1
7454351 Jeschke et al. Nov 2008 B2
7467087 Gillick et al. Dec 2008 B1
7475010 Chao Jan 2009 B2
7483894 Cao Jan 2009 B2
7487089 Mozer Feb 2009 B2
7496498 Chu et al. Feb 2009 B2
7496512 Zhao et al. Feb 2009 B2
7502738 Kennewick et al. Mar 2009 B2
7508373 Lin et al. Mar 2009 B2
7522065 Falcon Apr 2009 B2
7522927 Fitch et al. Apr 2009 B2
7523108 Cao Apr 2009 B2
7526466 Au Apr 2009 B2
7529671 Rockenbeck et al. May 2009 B2
7529676 Koyama May 2009 B2
7539656 Fratkina et al. May 2009 B2
7546382 Healey et al. Jun 2009 B2
7548895 Pulsipher Jun 2009 B2
7555431 Bennett Jun 2009 B2
7558730 Davis et al. Jul 2009 B2
7571106 Cao et al. Aug 2009 B2
7599918 Shen et al. Oct 2009 B2
7605693 Kulas Oct 2009 B2
7620549 Di Cristo et al. Nov 2009 B2
7624007 Bennett Nov 2009 B2
7633076 Huppi et al. Dec 2009 B2
7634409 Kennewick et al. Dec 2009 B2
7636657 Ju et al. Dec 2009 B2
7640160 Di Cristo et al. Dec 2009 B2
7647225 Bennett et al. Jan 2010 B2
7653883 Hotelling et al. Jan 2010 B2
7657424 Bennett Feb 2010 B2
7663607 Hotelling et al. Feb 2010 B2
7672841 Bennett Mar 2010 B2
7676026 Baxter, Jr. Mar 2010 B1
7684985 Dominach et al. Mar 2010 B2
7693715 Hwang et al. Apr 2010 B2
7693720 Kennewick et al. Apr 2010 B2
7698131 Bennett Apr 2010 B2
7702500 Blaedow Apr 2010 B2
7702508 Bennett Apr 2010 B2
7707027 Balchandran et al. Apr 2010 B2
7707032 Wang et al. Apr 2010 B2
7707267 Lisitsa et al. Apr 2010 B2
7711565 Gazdzinski May 2010 B1
7711672 Au May 2010 B2
7715790 Kennedy May 2010 B1
7716056 Weng et al. May 2010 B2
7720674 Kaiser et al. May 2010 B2
7720683 Vermeulen et al. May 2010 B1
7725307 Bennett May 2010 B2
7725318 Gavalda et al. May 2010 B2
7725320 Bennett May 2010 B2
7725321 Bennett May 2010 B2
7729904 Bennett Jun 2010 B2
7729916 Coffman et al. Jun 2010 B2
7734461 Kwak et al. Jun 2010 B2
7752152 Paek et al. Jul 2010 B2
7752623 Crawford, Jr. Jul 2010 B1
7774204 Mozer et al. Aug 2010 B2
7783486 Rosser et al. Aug 2010 B2
7801729 Mozer Sep 2010 B2
7809570 Kennewick et al. Oct 2010 B2
7809610 Cao Oct 2010 B2
7818176 Freeman et al. Oct 2010 B2
7822608 Cross, Jr. et al. Oct 2010 B2
7826945 Zhang et al. Nov 2010 B2
7831426 Bennett Nov 2010 B2
7840400 Lavi et al. Nov 2010 B2
7840447 Kleinrock et al. Nov 2010 B2
7873519 Bennett Jan 2011 B2
7873654 Bernard Jan 2011 B2
7881936 Longe et al. Feb 2011 B2
7912702 Bennett Mar 2011 B2
7916473 Jang Mar 2011 B2
7917367 Di Cristo et al. Mar 2011 B2
7917497 Harrison et al. Mar 2011 B2
7920678 Cooper et al. Apr 2011 B2
7925525 Chin Apr 2011 B2
7930168 Weng et al. Apr 2011 B2
7949529 Weider et al. May 2011 B2
7949534 Davis et al. May 2011 B2
7974844 Sumita Jul 2011 B2
7974972 Cao Jul 2011 B2
7983915 Knight et al. Jul 2011 B2
7983917 Kennewick et al. Jul 2011 B2
7983997 Allen et al. Jul 2011 B2
7987151 Schott et al. Jul 2011 B2
8000453 Cooper et al. Aug 2011 B2
8005679 Jordan et al. Aug 2011 B2
8015006 Kennewick et al. Sep 2011 B2
8019390 Sindhu Sep 2011 B2
8024195 Mozer et al. Sep 2011 B2
8036901 Mozer Oct 2011 B2
8041570 Mirkovic et al. Oct 2011 B2
8041611 Kleinrock et al. Oct 2011 B2
8055708 Chitsaz et al. Nov 2011 B2
8065155 Gazdzinski Nov 2011 B1
8065156 Gazdzinski Nov 2011 B2
8069046 Kennewick et al. Nov 2011 B2
8073681 Baldwin et al. Dec 2011 B2
8078473 Gazdzinski Dec 2011 B1
8082153 Coffman et al. Dec 2011 B2
8095364 Longe et al. Jan 2012 B2
8099289 Mozer et al. Jan 2012 B2
8107401 John et al. Jan 2012 B2
8112275 Kennewick et al. Feb 2012 B2
8112280 Lu Feb 2012 B2
8117037 Gazdzinski Feb 2012 B2
8130211 Abernathy Mar 2012 B2
8131557 Davis et al. Mar 2012 B2
8140335 Kennewick et al. Mar 2012 B2
8165886 Gagnon et al. Apr 2012 B1
8166019 Lee et al. Apr 2012 B1
8190359 Bourne May 2012 B2
8195467 Mozer et al. Jun 2012 B2
8204238 Mozer Jun 2012 B2
8205788 Gazdzinski et al. Jun 2012 B1
8219407 Roy et al. Jul 2012 B1
8285551 Gazdzinski Oct 2012 B2
8285553 Gazdzinski Oct 2012 B2
8290778 Gazdzinski Oct 2012 B2
8290781 Gazdzinski Oct 2012 B2
8296146 Gazdzinski Oct 2012 B2
8296153 Gazdzinski Oct 2012 B2
8301456 Gazdzinski Oct 2012 B2
8311834 Gazdzinski Nov 2012 B1
8370158 Gazdzinski Feb 2013 B2
8371503 Gazdzinski Feb 2013 B2
8381135 Hotelling et al. Feb 2013 B2
8447612 Gazdzinski May 2013 B2
8479122 Hotelling et al. Jul 2013 B2
8614431 Huppi Dec 2013 B2
9389729 Huppi et al. Jul 2016 B2
9619079 Huppi et al. Apr 2017 B2
9740268 Parivar et al. Aug 2017 B2
20010031633 Tuomela et al. Oct 2001 A1
20010031645 Jarrett Oct 2001 A1
20020018050 Turner Feb 2002 A1
20020019249 Kashu et al. Feb 2002 A1
20020032564 Ensani et al. Mar 2002 A1
20020046025 Hain Apr 2002 A1
20020065099 Bjorndahl May 2002 A1
20020069063 Buchner et al. Jun 2002 A1
20020077817 Atal Jun 2002 A1
20020167488 Hinckley et al. Nov 2002 A1
20030022666 Sato Jan 2003 A1
20030022671 Huomo et al. Jan 2003 A1
20030085870 Hinckley May 2003 A1
20030095096 Robbin et al. May 2003 A1
20030108300 Walker, Jr. et al. Jun 2003 A1
20030197597 Bahl et al. Oct 2003 A1
20030224726 Shearer et al. Dec 2003 A1
20040012556 Yong et al. Jan 2004 A1
20040110472 Witkowski et al. Jun 2004 A1
20040135701 Yasuda et al. Jul 2004 A1
20040163003 Dutton et al. Aug 2004 A1
20040176047 Trively Sep 2004 A1
20040180649 Vogel et al. Sep 2004 A1
20040203351 Shearer et al. Oct 2004 A1
20040213576 Tan et al. Oct 2004 A1
20040224638 Fadell et al. Nov 2004 A1
20040233153 Robinson Nov 2004 A1
20040245438 Payne et al. Dec 2004 A1
20050057169 Noguchi et al. Mar 2005 A1
20050071332 Ortega et al. Mar 2005 A1
20050080625 Bennett et al. Apr 2005 A1
20050119897 Bennett et al. Jun 2005 A1
20050132416 Wasilewski Jun 2005 A1
20050143057 Shiraga et al. Jun 2005 A1
20050143972 Gopalakrishnan et al. Jun 2005 A1
20050168658 Woolgar et al. Aug 2005 A1
20050171662 Strege et al. Aug 2005 A1
20050182629 Coorman et al. Aug 2005 A1
20050190142 Ferguson Sep 2005 A1
20050196733 Budra et al. Sep 2005 A1
20050219223 Kotzin et al. Oct 2005 A1
20050219228 Alameh et al. Oct 2005 A1
20050219394 Du et al. Oct 2005 A1
20050221791 Angelhag Oct 2005 A1
20050253317 Rytivaara et al. Nov 2005 A1
20060007107 Ferguson Jan 2006 A1
20060017692 Wehrenberg et al. Jan 2006 A1
20060018492 Chiu et al. Jan 2006 A1
20060026521 Hotelling et al. Feb 2006 A1
20060026535 Hotelling et al. Feb 2006 A1
20060052141 Suzuki Mar 2006 A1
20060060762 Chan et al. Mar 2006 A1
20060087245 Ng et al. Apr 2006 A1
20060097991 Hotelling et al. May 2006 A1
20060101293 Chandley et al. May 2006 A1
20060116175 Chu Jun 2006 A1
20060117108 Salisbury et al. Jun 2006 A1
20060122834 Bennett Jun 2006 A1
20060143007 Koh et al. Jun 2006 A1
20060146012 Arneson et al. Jul 2006 A1
20060161870 Hotelling et al. Jul 2006 A1
20060161871 Hotelling et al. Jul 2006 A1
20060164241 Makela et al. Jul 2006 A1
20060166702 Dietz et al. Jul 2006 A1
20060174201 Zaner-Godsey et al. Aug 2006 A1
20060197753 Hotelling Sep 2006 A1
20060199536 Eisenbach Sep 2006 A1
20060229101 LaBauve et al. Oct 2006 A1
20060279548 Geaghan Dec 2006 A1
20060290921 Hotelling et al. Dec 2006 A1
20060291863 Chan et al. Dec 2006 A1
20070003289 Tan et al. Jan 2007 A1
20070042714 Ayed Feb 2007 A1
20070046629 Chi-Boon et al. Mar 2007 A1
20070050191 Weider et al. Mar 2007 A1
20070054651 Farmer et al. Mar 2007 A1
20070055529 Kanevsky et al. Mar 2007 A1
20070057773 Hsieh et al. Mar 2007 A1
20070058832 Hug et al. Mar 2007 A1
20070075965 Huppi et al. Apr 2007 A1
20070085157 Fadell et al. Apr 2007 A1
20070088556 Andrew Apr 2007 A1
20070099574 Wang May 2007 A1
20070100480 Sinclair et al. May 2007 A1
20070100790 Cheyer et al. May 2007 A1
20070118377 Badino et al. May 2007 A1
20070123171 Slamka et al. May 2007 A1
20070123287 Mock et al. May 2007 A1
20070135091 Wassingbo Jun 2007 A1
20070135151 Dendy Jun 2007 A1
20070161410 Huang et al. Jul 2007 A1
20070174188 Fish Jul 2007 A1
20070185917 Prahlad et al. Aug 2007 A1
20070225047 Bakos Sep 2007 A1
20070233759 Tomlinson et al. Oct 2007 A1
20070239903 Bhardwaj et al. Oct 2007 A1
20070266185 Goddi et al. Nov 2007 A1
20070282595 Tunning et al. Dec 2007 A1
20070288779 Kim Dec 2007 A1
20070293188 Houghton et al. Dec 2007 A1
20080006762 Fadell et al. Jan 2008 A1
20080015864 Ross et al. Jan 2008 A1
20080021708 Bennett et al. Jan 2008 A1
20080031206 Sharma Feb 2008 A1
20080034032 Healey et al. Feb 2008 A1
20080052063 Bennett et al. Feb 2008 A1
20080090617 Sutardja Apr 2008 A1
20080102882 Sutardja May 2008 A1
20080113618 De Leon et al. May 2008 A1
20080120112 Jordan et al. May 2008 A1
20080129520 Lee Jun 2008 A1
20080140657 Azvine et al. Jun 2008 A1
20080158172 Hotelling et al. Jul 2008 A1
20080192129 Walker et al. Aug 2008 A1
20080204359 Tsui et al. Aug 2008 A1
20080221903 Kanevsky et al. Sep 2008 A1
20080228496 Yu et al. Sep 2008 A1
20080247519 Abella et al. Oct 2008 A1
20080249770 Kim et al. Oct 2008 A1
20080297487 Hotelling et al. Dec 2008 A1
20080300878 Bennett Dec 2008 A1
20090006100 Badger et al. Jan 2009 A1
20090006343 Platt et al. Jan 2009 A1
20090019448 Bouge et al. Jan 2009 A1
20090030800 Grois Jan 2009 A1
20090047904 Preston et al. Feb 2009 A1
20090048709 DeLine Feb 2009 A1
20090058823 Kocienda Mar 2009 A1
20090076796 Daraselia Mar 2009 A1
20090098865 Vaghi et al. Apr 2009 A1
20090100049 Cao Apr 2009 A1
20090112677 Rhett Apr 2009 A1
20090140863 Liu et al. Jun 2009 A1
20090150156 Kennewick et al. Jun 2009 A1
20090157401 Bennett Jun 2009 A1
20090164441 Cheyer Jun 2009 A1
20090171664 Kennewick et al. Jul 2009 A1
20090244092 Hotelling Oct 2009 A1
20090290718 Kahn et al. Nov 2009 A1
20090299745 Kennewick et al. Dec 2009 A1
20090299849 Cao et al. Dec 2009 A1
20090313473 Walker et al. Dec 2009 A1
20090322683 Tsuji et al. Dec 2009 A1
20100005081 Bennett Jan 2010 A1
20100023320 Di Cristo et al. Jan 2010 A1
20100036660 Bennett Feb 2010 A1
20100042400 Block et al. Feb 2010 A1
20100079508 Hodge et al. Apr 2010 A1
20100088020 Sano et al. Apr 2010 A1
20100134423 Brisebois et al. Jun 2010 A1
20100138680 Brisebois et al. Jun 2010 A1
20100145700 Kennewick et al. Jun 2010 A1
20100153553 Sheth et al. Jun 2010 A1
20100156790 Su et al. Jun 2010 A1
20100164887 Nakamura Jul 2010 A1
20100204986 Kennewick et al. Aug 2010 A1
20100217604 Baldwin et al. Aug 2010 A1
20100228540 Bennett Sep 2010 A1
20100235341 Bennett Sep 2010 A1
20100257160 Cao Oct 2010 A1
20100277579 Cho et al. Nov 2010 A1
20100280983 Cho et al. Nov 2010 A1
20100286985 Kennewick et al. Nov 2010 A1
20100299142 Freeman et al. Nov 2010 A1
20100312547 Van Os et al. Dec 2010 A1
20100318576 Kim Dec 2010 A1
20100332235 David Dec 2010 A1
20100332348 Cao Dec 2010 A1
20110060807 Martin et al. Mar 2011 A1
20110082688 Kim et al. Apr 2011 A1
20110112827 Kennewick et al. May 2011 A1
20110112921 Kennewick et al. May 2011 A1
20110119049 Ylonen May 2011 A1
20110125540 Jang et al. May 2011 A1
20110130958 Stahl et al. Jun 2011 A1
20110131036 DiCristo et al. Jun 2011 A1
20110131045 Cristo et al. Jun 2011 A1
20110144999 Jang et al. Jun 2011 A1
20110161076 Davis et al. Jun 2011 A1
20110163976 Barnhoefer et al. Jul 2011 A1
20110175810 Markovic et al. Jul 2011 A1
20110184730 LeBeau et al. Jul 2011 A1
20110218855 Cao et al. Sep 2011 A1
20110231182 Weider et al. Sep 2011 A1
20110231188 Kennewick et al. Sep 2011 A1
20110264643 Cao Oct 2011 A1
20110279368 Klein et al. Nov 2011 A1
20110306426 Novak et al. Dec 2011 A1
20120002820 Leichter Jan 2012 A1
20120016678 Gruber et al. Jan 2012 A1
20120020490 Leichter Jan 2012 A1
20120022787 LeBeau et al. Jan 2012 A1
20120022857 Baldwin et al. Jan 2012 A1
20120022860 Lloyd et al. Jan 2012 A1
20120022868 LeBeau et al. Jan 2012 A1
20120022869 Lloyd et al. Jan 2012 A1
20120022870 Kristjansson et al. Jan 2012 A1
20120022874 Lloyd et al. Jan 2012 A1
20120022876 LeBeau et al. Jan 2012 A1
20120023088 Cheng et al. Jan 2012 A1
20120032894 Parivar et al. Feb 2012 A1
20120034904 LeBeau et al. Feb 2012 A1
20120035908 LeBeau et al. Feb 2012 A1
20120035924 Jitkoff et al. Feb 2012 A1
20120035931 LeBeau et al. Feb 2012 A1
20120035932 Jitkoff et al. Feb 2012 A1
20120042343 Laligand et al. Feb 2012 A1
20120271676 Aravamudan et al. Oct 2012 A1
20120311583 Gruber et al. Dec 2012 A1
Foreign Referenced Citations (85)
Number Date Country
1263385 Aug 2000 CN
1581052 Feb 2005 CN
3837590 May 1990 DE
19841541 Dec 2007 DE
138061 Apr 1985 EP
218859 Apr 1987 EP
262938 Apr 1988 EP
138061 Jun 1988 EP
293259 Nov 1988 EP
299572 Jan 1989 EP
313975 May 1989 EP
314908 May 1989 EP
327408 Aug 1989 EP
389271 Sep 1990 EP
411675 Feb 1991 EP
559349 Sep 1993 EP
570660 Nov 1993 EP
559349 Jan 1999 EP
0992969 Apr 2000 EP
1185058 Mar 2002 EP
1245023 Oct 2002 EP
1335430 Aug 2003 EP
1355223 Oct 2003 EP
1445922 Aug 2004 EP
1452988 Sep 2004 EP
1465462 Oct 2004 EP
1507132 Feb 2005 EP
1507196 Feb 2005 EP
1650938 Apr 2006 EP
1667103 Jun 2006 EP
1696414 Aug 2006 EP
2 081 113 Jul 2009 EP
2 081 113 Jul 2009 EP
2346500 Aug 2000 GB
2418808 Apr 2006 GB
5323277 Dec 1993 JP
6-19965 Jan 1994 JP
9-18566 Jan 1997 JP
11-203045 Jul 1999 JP
2000-13478 Jan 2000 JP
2000-163031 Jun 2000 JP
2000-286924 Oct 2000 JP
2001-111661 Apr 2001 JP
2001-125896 May 2001 JP
2001-244498 Sep 2001 JP
2001-352395 Dec 2001 JP
2002-024212 Jan 2002 JP
2002-57791 Feb 2002 JP
2002-135369 May 2002 JP
2002-342033 Nov 2002 JP
2003-517158 May 2003 JP
2003-204390 Jul 2003 JP
2003-219024 Jul 2003 JP
2003-309644 Oct 2003 JP
2003-345476 Dec 2003 JP
2004-021476 Jan 2004 JP
2004-501586 Jan 2004 JP
2004-159028 Jun 2004 JP
2004-357193 Dec 2004 JP
2005-223629 Aug 2005 JP
2005-260996 Sep 2005 JP
2005-278043 Oct 2005 JP
2006-109004 Apr 2006 JP
2006-115043 Apr 2006 JP
2007-163872 Jun 2007 JP
2009-036999 Feb 2009 JP
10-2005-0113650 Dec 2005 KR
10-0776800 Nov 2007 KR
10-0810500 Mar 2008 KR
10-2008-0109322 Dec 2008 KR
10-2009-0086805 Aug 2009 KR
10-0920267 Oct 2009 KR
10-2011-0113414 Oct 2011 KR
200 919 281 May 2009 TW
2000079766 Dec 2000 WO
2004093045 Oct 2004 WO
2005101176 Oct 2005 WO
2005114369 Dec 2005 WO
2006129967 Dec 2006 WO
WO-2009062176 May 2009 WO
WO-2009062176 May 2009 WO
WO-2009096643 Aug 2009 WO
WO-2010051001 May 2010 WO
2011088053 Jul 2011 WO
WO-2012019153 Feb 2012 WO
Non-Patent Literature Citations (303)
Entry
Acero et al., “Environmental Robustness in Automatic Speech Recognition”, International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), Apr. 1990, pp. 849-852.
Acero et al., “Robust Speech Recognition by Normalization of The Acoustic Space”, International Conference on Acoustics, Speech, and Signal Processing, 1991, pp. 893-896.
Agilent Technologies Inc., “Agilent Unveils Optical Proximity Sensor for Mobile Appliances”, available at <http:/www.embeddedstar.com/press/content/2004/8/embedded16015.html>, Aug. 31, 2004, 2 pages.
Ahlbom et al., “Modeling Spectral Speech Transitions Using Temporal Decomposition Techniques”, IEEE International Conference of Acoustics, Speech, and Signal Processing (ICASSP'87), vol. 12, Apr. 1987, pp. 13-16.
Aikawa, et al., “Speech Recognition Using Time-Warping Neural Networks”, Proceedings of the 1991, IEEE Workshop on Neural Networks for Signal Processing, Sep. 30-Oct. 1, 1991, pp. 337-346.
Alfred App, “Alfred”, available at <http://www.alfredapp.com/>, retrieved on Feb. 8, 2012, 5 pages.
Ambite et al., “Design and Implementation of the CALO Query Manager”, American Association for Artificial Intelligence, 2006, 8 pages.
Ambite et al., “Integration of Heterogeneous Knowledge Sources in the CALO Query Manager”, The 4th International Conference on Ontologies, DataBases, and Applications of Semantics (ODBASE), 2005, 18 pages.
Anastasakos, et al., “Duration Modeling in Large Vocabulary Speech Recognition”, International Conference on Acoustics, Speech, and Signal Processing, vol. 1, May 9-15, 1995, pp. 628-631.
Anderson, R. H, “Syntax-Directed Recognition of Hand-Printed Two-Dimensional Mathematics”, In Proceedings of Symposium on Interactive Systems for Experimental Applied Mathematics, Proceedings of the Association for Computing Machinery Inc. Symposium, 1967, pp. 436-459.
Ansari, et al., “Pitch Modification of Speech using a Low-Sensitivity Inverse Filter Approach”, IEEE Signal Processing Letters, vol. 5, No. 3, Mar. 1998, pp. 60-62.
Anthony, et al., “Supervised Adaption for Signature Verification System”, IBM Technical Disclosure, Jun. 1, 1978, 3 pages.
Apple Computer, “Guide Maker User's Guide”, Apple Computer, Inc., Apr. 27, 1994, 8 pages.
Apple Computer, “Introduction to Apple Guide”, Apple Computer, Inc., Apr. 28, 1994, 20 pages.
Asanovic, et al., “Experimental Determination of Precision Requirements for Back-Propagation Training of Artificial Neural Networks”, In Proceedings of the 2nd International Conference of Microelectronics for Neural Networks, 1991, www.ICSI.Berkelev.EDU, 1991, 7 pages.
Atal, et al., “Efficient Coding of LPC Parameters by Temporal Decomposition”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'83), Apr. 1983, pp. 81-84.
Bahl, et al., “A Maximum Likelihood Approach to Continuous Speech Recognition”, IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 2, Mar. 1983, pp. 179-190.
Bahl, et al., “A Tree-Based Statistical Language Model for Natural Language Speech Recognition”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, No. 7, Jul. 1989, pp. 507-514.
Bahl, et al., “Acoustic Markov Models Used in the Tangora Speech Recognition System”, In Proceeding of International Conference on Acoustics, Speech, and Signal Processing (ICASSP'88), vol. 1, Apr. 1988, pp. 497-500.
Bahl, et al., “Large Vocabulary Natural Language Continuous Speech Recognition,”, In Proceedings of 1989 International Conference on Acoustics, Speech, and Signal Processing, vol. 1, May 1989, pp. 465-467.
Bahl, et al., “Multonic Markov Word Models for Large Vocabulary Continuous Speech Recognition”, IEEE Transactions on Speech and Audio Processing, vol. 1, No. 3, Jul. 1993, pp. 334-344.
Bahl, et al., “Speech Recognition with Continuous-Parameter Hidden Markov Models”, Proceeding of International Conference on Acoustics, Speech, and Signal Processing (ICASSP'88), vol. 1, Apr. 1988, pp. 332-339.
Banbrook, M., “Nonlinear Analysis of Speech from a Synthesis Perspective”, A Thesis Submitted for the Degree of Doctor of Philosophy, The University of Edinburgh, Oct. 15, 1996, 35 pages.
Belaid, et al., “A Syntactic Approach for Handwritten Mathematical Formula Recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, No. 1, Jan. 1984, pp. 105-111.
Bellegarda et al., “Experiments Using Data Augmentation for Speaker Adaptation”, International Conference on Acoustics, Speech, and Signal Processing (ICASSP'95), May 9-12, 1995, pp. 692-695.
Bellegarda et al., “A Latent Semantic Analysis Framework for Large-Span Language Modeling”, 5th European Conference on Speech, Communication and Technology, (EuroSpeech'97), Sep. 22-25, 1997, 4 pages.
Bellegarda et al., “A Multispan Language Modeling Framework for Large Vocabulary Speech Recognition”, IEEE Transactions on Speech and Audio Processing. vol. 6, No. 5, Sep. 1998, pp. 456-467.
Bellegarda et al., “A Novel Word Clustering Algorithm Based on Latent Semantic Analysis”, In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, May 7-10, 1996, pp. 172-175.
Bellegarda et al., “On-Line Handwriting Recognition Using Statistical Mixtures”, Advances in Handwriting and Drawings: A Multidisciplinary Approach, Europia, 6th International IGS Conference on Handwriting and Drawing, Paris-France, Jul. 1993, pp. 99-116.
Bellegarda et al., “Performance of the IBM Large Vocabulary Continuous Speech Recognition System on the ARPA Wall Street Journal Task”, Signal Processing VII: Theories and Applications, European Association for Signal Processing, 1994, pp. 123-126.
Bellegarda et al., “The Metamorphic Algorithm: A Speaker Mapping Approach to Data Augmentation”, IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, pp. 413-420.
Bellegarda, J. R., “Exploiting Both Local and Global Constraints for Multi-Span Statistical Language Modeling”, Proceeding of the 1998 IEEE International Conference on Acoustics, Speech, and Signal Processing (1CASSP'98), vol. 2, May 1998, pp. 677-680.
Bellegarda, J. R., “Interaction-Driven Speech Input-A Data-Driven Approach to the Capture of both Local and Global Language Constraints”, available at <http://old.sig.chi.ora/bulletin/1998.2/bellegarda.html>, 1992, 7 pages.
Bellegarda, J. R., “Large Vocabulary Speech Recognition with Multispan Statistical Language Models”, IEEE Transactions on Speech and Audio Processing, vol. 8, No. 1, Jan. 2000, pp. 76-84.
Bellegarda, Jerome R., “Exploiting Latent Semantic Information in Statistical Language Modeling”, Proceedings of the IEEE, vol. 88, No. 8, Aug. 2000, pp. 1-18.
Belvin et al., “Development of the HRL Route Navigation Dialogue System”, Proceedings of the First International Conference on Human Language Technology Research, Paper, 2001, 5 pages.
Berry et al., “PTIME: Personalized Assistance for Calendaring”, ACM Transactions on Intelligent Systems and Technology, vol. 2, No. 4, Article 40, Jul. 2011, pp. 1-22.
Black et al., “Automatically Clustering Similar Units for Unit Selection in Speech Synthesis”, Proceedings of Eurospeech, vol. 2, 1997, 4 pages.
Blair et al., “An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System”, Communications of the ACM, vol. 28, No. 3, Mar. 1985, pp. 289-299.
Briner, L. L., “Identifying Keywords in Text Data Processing”, In Zelkowitz, Marvin V., ED, Directions and Challenges, 15th Annual Technical Symposium, Gaithersbury, Maryland, Jun. 17, 1976, pp. 85-90.
Bulyko et al., “Joint Prosody Prediction and Unit Selection for Concatenative Speech Synthesis”, Electrical Engineering Department, University of Washington, Seattle, 2001, 4 pages.
Bussey et al., “Service Architecture, Prototype Description and Network Implications of a Personalized Information Grazing Service”, INFOCOM'90, Ninth Annual Joint Conference of the IEEE Computer and Communication Societies, available at <http://slrohall.com/oublications/>, Jun. 1990, pp. 1046-1053.
Bussler et al., “Web Service Execution Environment (WSMX)”, retrieved from Internet on Sep. 17, 2012, available at <http://www.w3.org/Submission/WSMX>, Jun. 3, 2005, 29 pages.
Butcher, Mike, “EVI Arrives in Town to go Toe-to-Toe with Siri”, TechCrunch, available at <http://techcrunch .com/20 12/01/23/evi-arrives-in-town-to-go-toe-to-toe-with-siri/> Jan. 23, 2012, pp. 1-2.
Buzo et al., “Speech Coding Based Upon Vector Quantization”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. Assp-28, No. 5, Oct. 1980, pp. 562-574.
Caminero-Gil et al., “Data-Driven Discourse Modeling for Semantic Interpretation”, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 1, May 1996, pp. 401-404.
Cawley G. C., “The Application of Neural Networks to Phonetic Modelling”, PhD Thesis, University of Essex, Mar. 1996, 13 pages.
Chang et al., “A Segment-based Speech Recognition System for Isolated Mandarin Syllables”, Proceedings TEN CON '93, IEEE Region 10 conference on Computer, Communication, Control and Power Engineering, vol. 3, Oct. 1993, 6 pages.
Chen, Yi, “Multimedia Siri Finds and Plays Whatever You Ask For”, PSFK Report, Feb. 9, 2012, pp. 1-9.
Cheyer et al., “Spoken Language and Multimodal Applications for Electronic Realties”, Virtual Reality, vol. 3, 1999, pp. 1-15.
Cheyer, Adam, “A Perspective on AI & Agent Technologies for SCM”, VerticalNet presentation, 2001, 22 pages.
Cheyer, Adam, “About Adam Cheyer”, available at <http://www.adam.cheyer.com/about.html>, retrieved on Sep. 17, 2012, pp. 1-2.
CNET news.com, “Reinventing the Scroll Wheel”, Photo 1, available at <http://news.com.com/2300-1041_3-6107951-1.html?tag=ne.gall.pg>, Aug. 22, 2006, 2 pages.
CNET news.com, “Reinventing the Scroll Wheel”, Photo 2, available at <http://news.com.com/2300-1041_3-6107951-2.html?tag=ne.gall.pg>, Aug. 22, 2006, 2 pages.
CNET news.com, “Reinventing the Scroll Wheel”, Photo 3, available at <http://news.com.com/2300-1041_3-6107951-3.html?tag=ne.gall.pg>, Aug. 22, 2006, 2 pages.
CNET news.com, “Reinventing the Scroll Wheel”, Photo 4, available at <http://news.com.com/2300-1041_3-6107951-4.html?tag=ne.gall.pg>, Aug. 22, 2006, 2 pages.
CNET news.com, “Reinventing the Scroll Wheel”, Photo 5, available at <http://news.com.com/2300-1041_3-6107951-5.html?tag=ne.gall.pg>, Aug. 22, 2006, 2 pages.
CNET news.com, “Reinventing the Scroll Wheel”, Photo 6, available at <http://news.com.com/2300-1041_3-6107951-6.html?tag=ne.gall.pg>, Aug. 22, 2006, 2 pages.
CNET news.com, “Reinventing the Scroll Wheel”, Photo 7, available at <http://news.com.com/2300-1041_3-6107951-7.html?tag=ne.gall.pg>, Aug. 22, 2006, 2 pages.
CNET news.com, “Reinventing the Scroll Wheel”, Photo 8, available at <http://news.com.com/2300-1041_3-6107951-8.html?tag=ne.gall.pg>, Aug. 22, 2006, 2 pages.
Conklin, J., “Hypertext: An Introduction and Survey”, COMPUTER Magazine, Sep. 1987, pp. 17-40.
Connolly et al., “Fast Algorithms for Complex Matrix Multiplication Using Surrogates”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, No. 6, Jun. 1989, pp. 938-949.
Cutkosky et al., “PACT: An Experiment in Integrating Concurrent Engineering Systems”, Journal, Computer, vol. 26, No. 1, Jan. 1993, 14 pages.
Decision to Grant received for European Patent Application No. 07839698.3, mailed on Jul. 31, 2014, 1 page.
Deerwester et al., “Indexing by Latent Semantic Analysis”, Journal of the American Society for Information Science, vol. 41, No. 6, Sep. 1990, pp. 391-407.
Deller, Jr. et al., “Discrete-Time Processing of Speech Signals”, Prentice Hall, ISBN: 0-02-328301-7, 1987, 14 pages.
Digital Equipment Corporation, “Open VMS Software Overview”, Software Manual, Dec. 1995, 159 pages.
Domingue et al., “Web Service Modeling Ontology (WSMO)-An Ontology for Semantic Web Services”, Position Paper at the W3C Workshop on Frameworks for Semantics in Web Services, Innsbruck, Austria, Jun. 2005, 6 pages.
Donovan, R. E., “A New Distance Measure for Costing Spectral Discontinuities in Concatenative Speech Synthesisers”, available at <http://citeseerx.ist.osu.edu/viewdoc/summarv?doi=1 0.1.1.21.6398>, 2001, 4 pages.
Elio et al., “On Abstract Task Models and Conversation Policies”, Proc. Workshop on Specifying and Implementing Conversation Policies, Autonomous Agents'99 Conference, 1999, pp. 1-10.
Ericsson et al., “Software Illustrating a Unified Approach to Multimodality and Multilinguality in the in-home Domain”, Talk and Look: Tools for Ambient Linguistic Knowledge, Dec. 2006, 127 pages.
Evi, “Meet Evi: The One Mobile Application that Provides Solutions for your Everyday Problems”, Feb. 2012, 3 pages.
Extended European Search Report (includes Partial European Search Report and European Search Opinion) received for European Patent Application No. 13169672.6, dated Aug. 14, 2013, 11 pages.
Feigenbaum et al., “Computer-Assisted Semantic Annotation of Scientific Life Works”, Oct. 15, 2007, 22 pages.
Final Office Action received for U.S. Appl. No. 11/650,014, dated Nov. 15, 2010, 16 pages.
Final Office Action received for U.S. Appl. No. 11/620,702, dated Nov. 19, 2009, 12 pages.
Frisse, M. E., “Searching for Information in a Hypertext Medical Handbook”, Communications of the ACM, vol. 31, No. 7, Jul. 1988, 8 pages.
Gannes, Liz , “Alfred App Gives Personalized Restaurant Recommendations”, allthingsd.com, Jul. 18, 2011, Available at <http://allthingsd.com/20110718/alfred-app-gives-personalized-restaurant-recommendations/>, 3 pages.
Gautier et al., “Generating Explanations of Device Behavior Using Compositional Modeling and Causal Ordering”, CiteSeerx, 1993, pp. 89-97.
Gervasio et al., “Active Preference Learning for Personalized Calendar Scheduling Assistance”, CiteSeerx, Proceedings of IUI'05, Jan. 9-12, 2005, pp. 90-97.
Glass et al., “Multilingual Spoken-Language Understanding in the Mit Voyager System”, Available online at <http://groups.csail.mit.edu/sis/publications/1995/speechcomm95-voyager.pdf>, Aug. 1995, pp. 1-29.
Glass, Alyssa, “Explaining Preference Learning”, CiteSeerx, 2006, pp. 1-5.
Goddeau et al., “A Form-Based Dialogue Manager for Spoken Language Applications”, Available online at <http://phasedance.com/pdflicslp96.pdf>, Oct. 1996, 4 pages.
Goddeau et al., “Galaxy: A Human-Language Interface to On-Line Travel Information”, International Conference on Spoken Language Processing, Yokohama, 1994, pp. 707-710.
Goldberg et al., “Using Collaborative Filtering to Weave an Information Tapestry”, Communications of the ACM, vol. 35, No. 12, Dec. 1992, 10 pages.
Gorin et al., “On Adaptive Acquisition of Language”, International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), vol. 1, Apr. 1990, 5 pages.
Gotoh et al., “Document Space Models Using Latent Semantic Analysis”, In Proceedings of Eurospeech, 1997, 4 pages.
Gray, R. M., “Vector Quantization”, IEEE ASSP Magazine, Apr. 1984, 26 pages.
Gruber et al., “A Translation Approach to Portable Ontology Specifications”, Knowledge Acquisition, vol. 5, No. 2, Jun. 1993, pp. 199-220.
Gruber et al., “An Ontology for Engineering Mathematics”, Fourth International Conference on Principles of Knowledge Representation and Reasoning, Available at <http://www-ksl.stanford.edu/knowledge-sharing/papers/engmath.html>, 1994, pp. 1-22.
Gruber et al., “Generative Design Rationale: Beyond the Record and Replay Paradigm”, Knowledge Systems Laboratory, Technical Report KSL 92-59, Dec. 1991, Updated Feb. 1993, 24 pages.
Gruber et al., “Machine-generated Explanations of Engineering Models: A Compositional Modeling Approach”, Proceedings of International Joint Conference on Artificial Intelligence, 1993, 7 pages.
Gruber et al., “NIKE: A National Infrastructure for Knowledge Exchange”, A Whitepaper Advocating and ATP Initiative on Technologies for Lifelong Learning, Oct. 1994, pp. 1-10.
Gruber et al., “Toward a Knowledge Medium for Collaborative Product Development”, Proceedings of the Second International Conference on Artificial Intelligence in Design, Jun. 22-25, 1992, pp. 1-19.
Gruber, Thomas R., “(Avoiding) the Travesty of the Commons”, Presentation at NPUC, New Paradigms for User Computing, IBM Almaden Research Center, Jul. 24, 2006, 52 pages.
Gruber, Thomas R., “2021: Mass Collaboration and the Really New Economy”, TNTY Futures, vol. 1, No. 6, Available at <http://tomgruber.org/writing/tnty2001.htm>, Aug. 2001, 5 pages.
Gruber, Thomas R., “Automated Knowledge Acquisition for Strategic Knowledge”, Machine Learning, vol. 4, 1989, pp. 293-336.
Gruber, Thomas R., “Big Think Small Screen: How Semantic Computing in the Cloud will Revolutionize the Consumer Experience on the Phone”, Keynote presentation at Web 3.0 conference, Jan. 2010, 41 pages.
Gruber, Thomas R., “Collaborating Around Shared Content on the WWW, W3C Workshop on WWW and Collaboration”, available at <http://www.w3.org/Collaboration/Workshop/Proceedings/P9.html>, Sep. 1995, 1 page.
Gruber, Thomas R., “Collective Knowledge Systems: Where the Social Web meets the Semantic Web”, Web Semantics: Science, Services and Agents on the World Wide Web, 2007, pp. 1-19.
Gruber, Thomas R., “Despite our Best Efforts, Ontologies are not the Problem”, AAAI Spring Symposium, Available at <http://tomgruber.org/writing/aaai-ss08.htm>, Mar. 2008, pp. 1-40.
Gruber, Thomas R., “Enterprise Collaboration Management with Intraspect”, Intraspect Technical White Paper, Jul. 2001, pp. 1-24.
Gruber, Thomas R., “Every Ontology is a Treaty—A Social Agreement—Among People with Some Common Motive in Sharing”, Official Quarterly Bulletin of AIS Special Interest Group on Semantic Web and Information Systems, vol. 1, No. 3, 2004, pp. 1-5.
Gruber, Thomas R., “Helping Organizations Collaborate, Communicate, and Learn”, Presentation to NASA Ames Research, available at <http://tomgruber.org/writing/organizational-intelligence-talk.htm>, Mar.-Oct. 2003, 30 pages.
Gruber, Thomas R., “Intelligence at the Interface: Semantic Technology and the Consumer Internet Experience”, Presentation at Semantic Technologies Conference, Available online at <http://tomgruber.org/writing/semtech08.htm>, May 20, 2008, pp. 1-40.
Gruber, Thomas R., “Interactive Acquisition of Justifications: Learning “Why” by Being Told “What””, Knowledge Systems Laboratory, Technical Report KSL 91-17, Original Oct. 1990, Revised Feb. 1991, 24 pages.
Gruber, Thomas R., “It Is What It Does: The Pragmatics of Ontology for Knowledge Sharing”, Proceedings of the International CIDOC CRM Symposium, Available at <http://tomgruber.org/writing/cidoc-ontology.htm>, Mar. 26, 2003, 21 pages.
Gruber, Thomas R., “Ontologies, Web 2.0 and Beyond”, Ontology Summit, available at <http://tomgruber.org/writing/ontolog-social-web-keynote.htm>, Apr. 24, 2007, 17 pages.
Gruber, Thomas R., “Ontology of Folksonomy: A Mash-up of Apples and Oranges”, Int'l Journal on Semantic Web & Information Systems, vol. 3, No. 2, 2007, 7 pages.
Gruber, Thomas R., “Siri, A Virtual Personal Assistant-Bringing Intelligence to the Interface”, Semantic Technologies conference, Jun. 16, 2009, 21 pages.
Gruber, Thomas R., “TagOntology”, Presentation to Tag Camp, Oct. 29, 2005, 20 pages.
Gruber, Thomas R., “Toward Principles for the Design of Ontologies Used for Knowledge Sharing”, International Journal of Human-Computer Studies, vol. 43, No. 5-6, Nov. 1995, pp. 907-928.
Gruber, Thomas R., “Where the Social Web meets the Semantic Web”, Presentation at the 5th International Semantic Web Conference, Nov. 2006, 38 pages.
Guzzoni et al., “A Unified Platform for Building Intelligent Web Interaction Assistants”, Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Computer Society, 2006, 4 pages.
Guzzoni et al., “Active, A Platform for Building Intelligent Operating Rooms”, Surgetica 2007 Computer-Aided Medical Interventions: tools and Applications, 2007, pp. 191-198.
Guzzoni et al., “Active, A Tool for Building Intelligent User Interfaces”, ASC 2007, Palma de Mallorca, Aug. 2007, 6 pages.
Guzzoni et al., “Modeling Human-Agent Interaction with Active Ontologies”, AAAI Spring Symposium, Interaction Challenges for Intelligent Assistants, Stanford University, Palo Alto, California, 2007, 8 pages.
Hardawar, Devindra, “Driving App Waze Builds its own Siri for Hands-Free Voice Control”, Available online at <http://venturebeat.com/2012/02/09/driving-app-waze-builds-its-own-siri-for-hands-free-voice-control/>, retrieved on Feb. 9, 2012, 4 pages.
Harris, F. J., “On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform”, In Proceedings of the IEEE, vol. 66, No. 1, Jan. 1978, 34 pages.
Helm et al., “Building Visual Language Parsers”, Proceedings of CHI'91, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1991, 8 pages.
Hermansky, H., “Perceptual Linear Predictive (PLP) Analysis of Speech”, Journal of the Acoustical Society of America, vol. 87, No. 4, Apr. 1990, 15 pages.
Hermansky, H., “Recognition of Speech in Additive and Convolutional Noise Based on Rasta Spectral Processing”, Proceedings of IEEE International Conference on Acoustics, speech and Signal Processing (ICASSP'93), Apr. 27-30, 1993, 4 pages.
Hewlett Packard, “IrDA Physical Layer Implementation for Hewlett Packard Infrared Products”, Application Note 1119, 1998, 4 pages.
Hoehfeld et al., “Learning with Limited Numerical Precision Using the Cascade-Correlation Algorithm”, IEEE Transactions on Neural Networks, vol. 3, No. 4, Jul. 1992, 18 pages.
Holmes, J. N., “Speech Synthesis and Recognition-Stochastic Models for Word Recognition”, Published by Chapman & Hall, London, ISBN 0 412 534304, 1998, 7 pages.
Hon et al., “CMU Robust Vocabulary-Independent Speech Recognition System”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-91), Apr. 14-17, 1991, 4 pages.
IBM, “Integrated Audio-Graphics User Interface”, IBM Technical Disclosure Bulletin, vol. 33, No. 11, Apr. 1991, 4 pages.
IBM, “Speech Editor”, IBM Technical Disclosure Bulletin, vol. 2, No. 10, Mar. 10, 1987, 3 pages.
IBM, “Speech Recognition with Hidden Markov Models of Speech Waveforms”, IBM Technical Disclosure Bulletin, vol. 34, No. 1, Jun. 1991, 10 pages.
Integration Associates Inc., “Proximity Sensor Demo Kit User Guide, Version 0.62-Preliminary”, 2004, 17 pages.
Intention to Grant received for European Patent Application No. 07839698.3, dated Mar. 21, 2014, 6 pages.
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US1993/012666, dated Mar. 1, 1995, 5 pages.
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US1993/012637, dated Apr. 10, 1995, 7 pages.
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US1994/011011, dated Feb. 28, 1996, 4 pages.
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US1995/008369, dated Oct. 9, 1996, 4 pages.
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2007/022335, dated Apr. 28, 2009, 13 pages.
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2007/023124, dated May 28, 2009, 8 pages.
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2007/026130, dated Jul. 7, 2009, 11 pages.
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2007/026141, dated Jul. 7, 2009, 5 pages.
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2007/026164, dated Jul. 7, 2009, 11 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2007/022335, dated Feb. 18, 2008, 15 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2007/023124, dated Jul. 3, 2008, 10 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2007/026130, dated Aug. 21, 2008, 12 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2007/026164, dated Jun. 3, 2008, 13 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2011/020861, dated Nov. 29, 2011, 15 pages.
International Search Report received for PCT Patent Application No. PCT/US1993/012666, dated Nov. 9, 1994, 8 pages.
International Search Report received for PCT Patent Application No. PCT/US1994/11011, dated Feb. 8, 1995, 3 pages (International Search Report only).
International Search Report received for PCT Patent Application No. PCT/US1995/008369, dated Nov. 8, 1995, 6 pages.
Intraspect Software, “The Intraspect Knowledge Management Solution: Technical Overview”, available at <http://tomgruber.org/writing/intraspect-whitepaper-1998.pdf>, 1998, 18 pages.
Iowegian International, “FIR Filter Properties”, DSPGuru, Digital Signal Processing Central, available at <http://www.dspguru.com/dsp/faq/fir/properties>, retrieved on Jul. 28, 2010, 6 pages.
Jacobs et al., “Scisor: Extracting Information from On-Line News”, Communications of the ACM, vol. 33, No. 11, Nov. 1990, 10 pages.
Jelinek, F., “Self-Organized Language Modeling for Speech Recognition”, Readings in Speech Recognition, edited by Alex Waibel and Kai-Fu Lee, Morgan Kaufmann Publishers, Inc., ISBN: 1-55860-124-4, 1990, 63 pages.
Jennings et al., “A Personal News Service Based on a User Model Neural Network”, IEICE Transactions on Information and Systems, vol. E75-D, No. 2, Mar. 1992, 12 pages.
Ji et al., “A Method for Chinese Syllables Recognition Based upon Sub-syllable Hidden Markov Model”, 1994 International Symposium on Speech, Image Processing and Neural Networks, Hong Kong, Apr. 1994, 4 pages.
Jones, J., “Speech Recognition for Cyclone”, Apple Computer, Inc., E.R.S. Revision 2.9, Sep. 10, 1992, 93 pages.
Julia et al., “Un Editeur Interactif De Tableaux Dessines a Main Levee (An Interactive Editor for Hand-Sketched Tables)”, Traitement du Signal, vol. 12, No. 6, 1995, pp. 619-626.
Karp, P. D., “A Generic Knowledge-Base Access Protocol”, Available online at <http://lecture.cs.buu.ac.th/-f50353/Document/gfp.pdf>, May 12, 1994, 66 pages.
Katz, S. M., “Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-35, No. 3, Mar. 1987, 3 pages.
Kitano, H., “PhiDM-Dialog, An Experimental Speech-to-Speech Dialog Translation System”, COMPUTER, vol. 24, No. 6, Jun. 1991, 13 pages.
Klabbers et al., “Reducing Audible Spectral Discontinuities”, IEEE Transactions on Speech and Audio Processing, vol. 9, No. 1, Jan. 2001, 13 pages.
Klatt et al., “Linguistic Uses of Segmental Duration in English: Acoustic and Perceptual Evidence”, Journal of the Acoustical Society of America, vol. 59, No. 5, May 1976, 16 pages.
Knownav, “Knowledge Navigator”, YouTube Video available at <http://www.youtube.com/watch?v=QRH8eimU_20>, Apr. 29, 2008, 1 page.
Kominek et al., “Impact of Durational Outlier Removal from Unit Selection Catalogs”, 5th ISCA Speech Synthesis Workshop, Jun. 14-16, 2004, 6 pages.
Kubala et al., “Speaker Adaptation from a Speaker-Independent Training Corpus”, International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), Apr. 1990, 4 pages.
Kubala et al., “The Hub and Spoke Paradigm for CSR Evaluation”, Proceedings of the Spoken Language Technology Workshop, Mar. 1994, 9 pages.
Lee et al., “A Real-Time Mandarin Dictation Machine for Chinese Language with Unlimited Texts and Very Large Vocabulary”, International Conference on Acoustics, Speech and Signal Processing, vol. 1, Apr. 1990, 5 pages.
Lee et al., “Golden Mandarin(II)-An Improved Single-Chip Real-Time Mandarin Dictation Machine for Chinese Language with Very Large Vocabulary”, IEEE International Conference of Acoustics, Speech and Signal Processing, vol. 2, 1993, 4 pages.
Lee et al., “Golden Mandarin(II)-An Intelligent Mandarin Dictation Machine for Chinese Character Input with Adaptation/Learning Functions”, International Symposium on Speech, Image Processing and Neural Networks, Hong Kong, Apr. 1994, 5 pages.
Lee et al., “System Description of Golden Mandarin (I) Voice Input for Unlimited Chinese Characters”, International Conference on Computer Processing of Chinese & Oriental Languages, vol. 5, Nos. 3 & 4, Nov. 1991, 16 pages.
Lee, Kai F., “Large-Vocabulary Speaker-Independent Continuous Speech Recognition: The SPHINX System”, Partial Fulfilment of the Requirements for the Degree of Doctor of Philosophy, Computer Science Department, Carnegie Mellon University, Apr. 1988, 195 pages.
Lemon et al., “Multithreaded Context for Robust Conversational Interfaces: Context-Sensitive Speech Recognition and Interpretation of Corrective Fragments”, ACM Transactions on Computer-Human Interaction, vol. 11, No. 3, Sep. 2004, pp. 241-267.
Leong et al., “CASIS: A Context-Aware Speech Interface System”, Proceedings of the 10th International Conference on Intelligent user Interfaces, Jan. 9-12, 2005, pp. 231-238.
Lieberman et al., “Out of Context: Computer Systems that Adapt to, and Learn from, Context”, IBM Systems Journal, vol. 39, Nos. 3&4, 2000, pp. 617-632.
Lin et al., “A Distributed Architecture for Cooperative Spoken Dialogue Agents with Coherent Dialogue State and History”, Available online at <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.272>, 1999, 4 pages.
Lin et al., “A New Framework for Recognition of Mandarin Syllables with Tones Using Sub-syllabic Units”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-93), Apr. 27-30, 1993, 4 pages.
Linde et al., “An Algorithm for Vector Quantizer Design”, IEEE Transactions on Communications, vol. 28, No. 1, Jan. 1980, 12 pages.
Liu et al., “Efficient Joint Compensation of Speech for the Effects of Additive Noise and Linear Filtering”, IEEE International Conference of Acoustics, Speech, and Signal Processing, ICASSP-92, Mar. 23-26, 1992, 4 pages.
Logan, B., “Mel Frequency Cepstral Co-efficients for Music Modeling”, International Symposium on Music Information Retrieval, 2000, 2 pages.
Lowerre, B. T., “The HARPY Speech Recognition System”, Doctoral Dissertation, Department of Computer Science, Carnegie Mellon University, Apr. 1976, 20 pages.
Maghbouleh, A., “An Empirical Comparison of Automatic Decision Tree and Linear Regression Models for Vowel Durations”, Revised Version of a Paper Presented at the Computational Phonology in Speech Technology workshop, 1996 annual meeting of the Association for Computational Linguistics in Santa Cruz, California, 7 pages.
Markel et al., “Linear Prediction of Speech”, Springer-Verlag, Berlin Heidelberg New York, 1976, 12 pages.
Martin et al., “The Open Agent Architecture: A Framework for Building Distributed Software Systems”, Applied Artificial Intelligence: An International Journal, vol. 13, No. 1-2, available at <http://adam.cheyer.com/papers/oaa.pdf>, Jan.-Mar. 1999, 38 pages.
McGuire et al., “Shade: Technology for Knowledge-Based Collaborative Engineering”, Journal of Concurrent Engineering, Applications and Research (CERA), 1993, 18 pages.
“Mel Scale”, Wikipedia the Free Encyclopedia, last modified on Oct. 13, 2009 and retrieved on Jul. 28, 2010, available online <http://en.wikipedia.org/wiki/Mel_scale>, 2 pages.
Meng et al., “Wheels: A Conversational System in the Automobile Classified Domain”, Proceedings of Fourth International Conference on Spoken Language, ICSLP 96, vol. 1, Oct. 1996, pp. 1-4.
Milward et al., “D2.2: Dynamic Multimodal Interface Reconfiguration, Talk and Look: Tools for Ambient Linguistic Knowledge”, available at <http://www.ihmc.us/users/nblaylock!Pubs/Files/talk d2.2.pdf>, Aug. 8, 2006, 69 pages.
“Minimum Phase”, Wikipedia the free Encyclopedia, Last Modified on Jan. 12, 2010 and retrieved on Jul. 28, 2010, available online at <http://en.wikipedia.org/wiki/Minimum_phase>, 8 pages.
Mitra et al., “A Graph-Oriented Model for Articulation of Ontology Interdependencies”, Advances in Database Technology, Lecture Notes in Computer Science, vol. 1777, 2000, pp. 1-15.
Moran et al., “Multimodal User Interfaces in the Open Agent Architecture”, International Conference on Intelligent User Interfaces (IUI97), 1997, 8 pages.
Morgan, B., “Business Objects (Business Objects for Windows) Business Objects Inc.”, DBMS, vol. 5, No. 10, Sep. 1992, 3 pages.
Mountford et al., “Talking and Listening to Computers”, The Art of Human-Computer Interface Design, Addison-Wesley Publishing Company, Inc, 1990, pp. 319-334.
Mozer, Michael C., “An Intelligent Environment Must be Adaptive”, IEEE Intelligent Systems, Mar./Apr. 1999, pp. 11-13.
Murty et al., “Combining Evidence from Residual Phase and MFCC Features for Speaker Recognition”, IEEE Signal Processing Letters, vol. 13, No. 1, Jan. 2006, 4 pages.
Murveit et al., “Integrating Natural Language Constraints into HMM-based Speech Recognition”, International Conference on Acoustics, Speech, and Signal Processing, Apr. 3-6, 1990, 5 pages.
Nakagawa et al., “Speaker Recognition by Combining MFCC and Phase Information”, IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), Mar. 14-19, 2010, 4 pages.
Naone, Erica, “TR10: Intelligent Software Assistant”, Technology Review, Mar.-Apr. 2009, 2 pages.
Neches et al., “Enabling Technology for Knowledge Sharing”, Fall, 1991, pp. 37-56.
Niesler et al., “A Variable-Length Category-Based N-Gram Language Model”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'96), vol. 1, May 7-10, 1996, 6 pages.
Non-Final Office Action received for U.S. Appl. No. 15/207,260, dated Aug. 11, 2016, 5 pages.
Non-Final Office Action received for U.S. Appl. No. 11/638,251, dated Sep. 29, 2009, 7 pages.
Non-Final Office Action received for U.S. Appl. No. 11/650,014, dated May 24, 2010, 16 pages.
Non-Final Office Action received for U.S. Appl. No. 11/770,614, dated Nov. 15, 2010, 16 pages.
Non-Final Office Action received for U.S. Appl. No. 11/871,725, dated May 18, 2010, 11 pages.
Non-Final Office Action received for U.S. Appl. No. 11/871,725, dated Oct. 27, 2010, 26 pages.
Non-Final Office Action received for U.S. Appl. No. 11/586,862, dated Mar. 31, 2009, 11 pages.
Non-Final Office Action received for U.S. Appl. No. 11/620,702, dated Mar. 8, 2010, 16 pages.
Non-Final Office Action received for U.S. Appl. No. 11/638,251, dated Mar. 22, 2010, 11 pages.
Non-Final Office Action received for U.S. Appl. No. 12/613,439, dated Jun. 6, 2013, 9 pages.
Noth et al., “Verbmobil: The Use of Prosody in the Linguistic Components of a Speech Understanding System”, IEEE Transactions on Speech and Audio Processing, vol. 8, No. 5, Sep. 2000, pp. 519-532.
Notice of Allowance received for Canadian Patent Application No. 2,812,011, dated Jan. 5, 2016, 1 page.
Notice of Allowance received for Chinese Patent Application No. 201410051717.9, dated Feb. 3, 2016, 4 pages (2 pages of English Translation and 2 pages of Official Copy).
Notice of Allowance received for Japanese Patent Application No. 2013-087347, dated Jan. 5, 2016, 3 pages (Official Copy Only).
Notice of Allowance received for U.S. Appl. No. 11/586,862, dated Sep. 15, 2009, 6 pages.
Notice of Allowance received for U.S. Appl. No. 12/613,439, dated Oct. 2, 2013, 9 pages.
Notice of Allowance received for U.S. Appl. No. 14/137,837, dated Mar. 11, 2016, 9 pages.
Notice of Allowance Received for U.S. Appl. No. 15/207,260, dated Nov. 23, 2016, 9 pages.
Office Action received for Canadian Patent Application No. 2,812,011, dated Nov. 24, 2014, 5 pages.
Office Action received for Chinese Patent Application No. 201410051717.9, dated May 27, 2015, 8 pages (3 pages of English Translation and 5 pages of Official copy).
Office Action received for European Patent Application No. 07839698.3, dated Aug. 13, 2013, 6 pages.
Office Action received for European Patent Application No. 07839698.3, dated Jun. 8, 2011, 6 pages.
Office Action received for European Patent Application No. 07839698.3, dated Mar. 2, 2012, 6 pages.
Office Action received for Indian Patent Application No. 2571/DELNP/2009, dated Sep. 30, 2016, 8 pages.
Office Action received for Japanese Patent Application No. 2013-087347, dated Apr. 8, 2014, 4 pages (English Translation only).
Office Action received for Japanese Patent Application No. 2013-087347, dated Mar. 30, 2015, 9 pages (6 pages of English Translation and 3 pages of Official Copy).
Papadimitriou et al., “Latent Semantic Indexing: A Probabilistic Analysis”, available online at <http://citeseerx.ist.psu.edu/messaqes/downloadsexceeded.html>, Nov. 14, 1997, 21 pages.
Parsons, T. W., “Voice and Speech Processing”, Linguistics and Technical Fundamentals, Articulatory Phonetics and Phonemics, 1987 McGraw-Hill, Inc., ISBN: 0-07-0485541-0, pp. 92-97.
Parsons, Thomas W., “Voice and Speech Processing”, Pitch and Formant Estimation, McGraw-Hill, Inc., ISBN: 0-07-0485541-0, 1987, 15 pages.
Phoenix Solutions, Inc., “Declaration of Christopher Schmandt Regarding the MIT Galaxy System”, West Interactive Corp., a Delaware Corporation, Document 40, Jul. 2, 2010, 162 pages.
Picone, J., “Continuous Speech Recognition Using Hidden Markov Models”, IEEE ASSP Magazine, vol. 7, No. 3, Jul. 1990, 16 pages.
Rabiner et al., “Fundamentals of Speech Recognition”, AT&T, Published by Prentice-Hall, Inc., ISBN: 0-13-285826-6, 1993, 17 pages.
Rabiner et al., “Note on the Properties of a Vector Quantizer for LPC Coefficients”, Bell System Technical Journal, vol. 62, No. 8, Oct. 1983, pp. 2602-2617.
Ratcliffe, M., “ClearAccess 2.0 Allows SQL Searches Off-Line (Structured Query Language) (ClearAccess Corp. Preparing New Version of Data-Access Application with Simplified User Interface, New Features) (Product Announcement)”, MacWeek, vol. 6, No. 41, Nov. 16, 1992, 2 pages.
Remde et al., “SuperBook: An Automatic Tool for Information Exploration-Hypertext?”, In Proceedings of Hypertext '87 Papers, Nov. 13-15, 1987, 14 pages.
Reynolds, C. F., “On-Line Reviews: A New Application of the HICOM Conferencing System”, IEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems, Feb. 3, 1989, 4 pages.
Rice et al., “Monthly Program: Nov. 14, 1995”, The San Francisco Bay Area Chapter of ACM SIGCHI, Available at <http://www.baychi.org/calendar/19951114>, Nov. 14, 1995, 2 pages.
Rice et al., “Using the Web instead of a Window System”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI'96, 1996, pp. 1-14.
Rigoll, G., “Speaker Adaptation for Large Vocabulary Speech Recognition Systems Using Speaker Markov Models”, International Conference on Acoustics, Speech, and Signal Processing (ICASSP'89), May 23-26, 1989, 4 pages.
Riley, M. D., “Tree-Based Modelling of Segmental Durations”, Talking Machines: Theories, Models, and Designs, Elsevier Science Publishers B.V., North-Holland, ISBN: 08-444-89115.3, 1992, 15 pages.
Rivlin et al., “Maestro: Conductor of Multimedia Analysis Technologies”, SRI International, 1999, 7 pages.
Rivoira et al., “Syntax and Semantics in a Word-Sequence Recognition System”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'79), Apr. 1979, 5 pages.
Roddy et al., “Communication and Collaboration in a Landscape of B2B eMarketplaces”, VerticalNet Solutions, A Business White Paper, Jun. 15, 2000, 23 pages.
Roos, Gina, “Agilent's new proximity sensor beats the fumble-fingered competition hands down . . . literally”, eeProductCenter, available at <http://www.eeproductcenter.com/showArticle.jhtml?articleID_46200544>, Sep. 1, 2004, 3 pages.
Rosenfeld, R., “A Maximum Entropy Approach to Adaptive Statistical Language Modelling”, Computer Speech and Language, vol. 10, No. 3, Jul. 1996, 25 pages.
Roszkiewicz, A., “Extending your Apple”, Back Talk—Lip Service, A+ Magazine, The Independent Guide for Apple Computing, vol. 2, No. 2, Feb. 1984, pp. 3-7.
Sakoe et al., “Dynamic Programming Algorithm Optimization for Spoken Word Recognition”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-26, No. 1, Feb. 1978, 8 pages.
Salton et al., “On the Application of Syntactic Methodologies in Automatic Text Analysis”, Information Processing and Management, vol. 26, No. 1, Great Britain, 1990, 22 pages.
Savoy, J., “Searching Information in Hypertext Systems Using Multiple Sources of Evidence”, International Journal of Man-Machine Studies, vol. 38, No. 6, Jun. 1996, 15 pages.
Scagliola, C., “Language Models and Search Algorithms for Real-Time Speech Recognition”, International Journal of Man-Machine Studies, vol. 22, No. 5, 1985, 25 pages.
Schmandt et al., “Augmenting a Window System with Speech Input”, IEEE Computer Society, Computer, vol. 23, No. 8, Aug. 1990, 8 pages.
Schnelle, Dirk, “Context Aware Voice User Interfaces for Workflow Support”, Dissertation paper, Aug. 27, 2007, 254 pages.
Schutze, H., “Dimensions of Meaning”, Proceedings of Supercomputing'92 Conference, Nov. 16-20, 1992, 10 pages.
Seneff et al., “A New Restaurant Guide Conversational System: Issues in Rapid Prototyping for Specialized Domains”, retrieved from Internet at <citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.16 . . . rep . . . ,>, Oct. 1996, 4 pages.
Sheth et al., “Evolving Agents for Personalized Information Filtering,”, In Proceedings of the Ninth Conference on Artificial Intelligence for Applications, Mar. 1-5, 1993, 9 pages.
Sheth et al., “Relationships at the Heart of Semantic Web: Modeling, Discovering, and Exploiting Complex Semantic Relationships”, Enhancing the Power of the Internet: Studies in Fuzziness and Soft Computing, Oct. 13, 2002, pp. 1-38.
Shikano et al., “Speaker Adaptation Through Vector Quantization”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'86), vol. 11, Apr. 1986, 4 pages.
Sigurdsson et al., “Mel Frequency Cepstral Co-efficients: An Evaluation of Robustness of MP3 Encoded Music”, Proceedings of the 7th International Conference on Music Information Retrieval, 2006, 4 pages.
Silverman et al., “Using a Sigmoid Transformation for Improved Modeling of Phoneme Duration”, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Mar. 15-19, 1999, 5 pages.
Simonite, Tom, “One Easy Way to Make Siri Smarter”, Technology Review, Oct. 18, 2011, 2 pages.
Stent et al., “The CommandTalk Spoken Dialogue System”, SRI International, 1999, pp. 183-190.
Supplemental European Search Report received for European Application No. 07863192.6, dated Apr. 4, 2011, 8 pages.
Tenenbaum et al., “Data Structures Using Pascal”, Prentice-Hall, Inc., 1981, pp. 252-283.
Textndrive, “Text'nDrive App Demo—Listen and Reply to your Messages by Voice while Driving!”, YouTube Video available at <http://www.youtube.com/watch?v=WaGfzoHsAMw>, Apr. 27, 2010, 1 page.
Tofel, Kevin C. et al., “SpeakToit: A Personal Assistant for Older iPhones, iPads”, Apple News, Tips and Reviews, Feb. 9, 2012, 7 pages.
Tsai et al., “Attributed Grammar—A Tool for Combining Syntactic and Statistical Approaches to Pattern Recognition”, IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-10, No. 12, Dec. 1980, 13 pages.
Tucker, Joshua, “Too Lazy to Grab Your TV Remote? Use Siri Instead”, Engadget, Nov. 30, 2011, pp. 1-8.
Tur et al., “The CALO Meeting Assistant System”, IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, No. 6, Aug. 2010, pp. 1601-1611.
Tur et al., “The CALO Meeting Speech Recognition and Understanding System”, Proc. IEEE Spoken Language Technology Workshop, 2008, 4 pages.
Udell, J., “Computer Telephony”, BYTE, vol. 19, No. 7, Jul. 1, 1994, 9 pages.
Universal Remote Control, Inc., “All Complete Control Remotes Now Use Narrow Band RF”, available at <http://www.universalremote.com/corporate/press_release.php?press=13>, 2008.
Universal Remote Control, Inc., “Operating System with the Aurora MX-950”, MX-950 Owners Manual, 2005.
Universal Remote Control, Inc., “MX-950 (The Aurora)”, available at <www.unversalremote.com>, 2005.
Van Santen, J. P., “Contextual Effects on Vowel Duration”, Journal Speech Communication, vol. 11, No. 6, Dec. 1992, pp. 513-546.
Vepa et al., “New Objective Distance Measures for Spectral Discontinuities in Concatenative Speech Synthesis”, Proceedings of the IEEE 2002 Workshop on Speech Synthesis, 2002, 4 pages.
Verschelde, J., “MATLAB Lecture 8. Special Matrices in MATLAB,”, UIC Dept. of Math., Stat. & C.S., MCS 320, Introduction to Symbolic Computation, Nov. 23, 2005, 4 pages.
Vingron, M., “Near-Optimal Sequence Alignment”, Current Opinion in Structural Biology, vol. 6, No. 3, Jun. 1996, pp. 346-352.
Vlingo InCar, “Distracted Driving Solution with Vlingo InCar”, YouTube Video, Available at <http://www.youtube.com/watch?v=Vqs8XfXxgz4>, Oct. 2010, 2 pages.
Vlingo, “Vlingo Launches Voice Enablement Application on Apple App Store”, Press Release, Dec. 3, 2008, 2 pages.
Voiceassist, “Send Text, Listen to and Send E-Mail by Voice”, YouTube Video available at <http://www.youtube.com/watch?v=0tEU61nHHA4>, Jul. 30, 2009, 1 page.
Voiceonthego, “Voice On The Go (BlackBerry)”, YouTube Video, available at <http://www.youtube.com/watch?v=pJqpWgQS98w>, Jul. 27, 2009, 1 page.
Werner et al., “Prosodic Aspects of Speech”, Universite de Lausanne, Fundamentals of Speech Synthesis and Speech Recognition: Basic Concepts, State of the Art and Future Challenges, 1994, 18 pages.
Wolff, M., “Post Structuralism and the ARTFUL Database: Some Theoretical Considerations”, Information Technology and Libraries, vol. 13, No. 1, Mar. 1994, 10 pages.
Wu, Min, “Digital Speech Processing and Coding”, Multimedia Signal Processing, Lecture-2 Course Presentation, University of Maryland, College Park, 2003, 8 pages.
Wu, Min, “Speech Recognition, Synthesis, and H.C.I.”, Multimedia Signal Processing, Lecture-3 Course Presentation, University of Maryland, College Park, 2003, pp. 1-11.
Wyle, M. F., “A Wide Area Network Information Filter”, Proceedings of First International Conference on Artificial Intelligence on Wall Street, Oct. 9-11, 1991, 6 pages.
Yankelovich et al., “Intermedia: The Concept and the Construction of a Seamless Information Environment”, COMPUTER Magazine, IEEE, Jan. 1988, 16 pages.
Yoon et al., “Letter-to-Sound Rules for Korean”, Department of Linguistics, The Ohio State University, 2002, 4 pages.
Zhao, Y., “An Acoustic-Phonetic-Based Speaker Adaptation Technique for Improving Speaker-Independent Continuous Speech Recognition”, IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, pp. 380-394.
Zovato et al., “Towards Emotional Speech Synthesis: A Rule based Approach”, Proceedings of 5th ISCA Speech Synthesis Workshop—Pittsburgh, 2004, pp. 219-220.
Zue, Victor W., “Toward Systems that Understand Spoken Language”, ARPA Strategic Computing Institute, Feb. 1994, 9 pages.
Zue, Victor, “Conversational Interfaces: Advances and Challenges”, Spoken Language System Group, Sep. 1997, 10 pages.
Office Action received for Japanese Patent Application No. 2015-150013, dated Feb. 3, 2017, 10 pages (5 pages of English Translation and 5 pages of Official Copy).
Final Office Action dated Nov. 23, 2012, for U.S. Appl. No. 12/852,056, filed Aug. 6, 2010, 13 pages.
Final Office Action dated Apr. 8, 2014, for U.S. Appl. No. 12/852,056, filed Aug. 6, 2010, 13 pages.
International Search Report dated Dec. 2, 2011, for PCT Application No. PCT/US2011/046840, filed Aug. 5, 2011, 7 pages.
Lee, S.K. et al. (Apr. 1985). “A Multi-Touch Three Dimensional Touch-Sensitive Tablet,” Proceedings of CHI: ACM Conference on Human Factors in Computing Systems, pp. 21-25.
Non-Final Office Action dated Jul. 6, 2012, for U.S. Appl. No. 12/852,056, filed Aug. 6, 2010, 14 pages.
Non-Final Office Action dated Oct. 18, 2013, for U.S. Appl. No. 12/852,056, filed Aug. 6, 2010, 14 pages.
Non-Final Office Action dated Aug. 25, 2014, for U.S. Appl. No. 12/852,056, filed Aug. 6, 2010, 14 pages.
Non-Final Office Action dated Oct. 31, 2016, for U.S. Appl. No. 14/598,044, filed Jan. 15, 2015, 12 pages.
Notice of Allowance dated Apr. 20, 2017, for U.S. Appl. No. 14/598,044, filed Jan. 15, 2015, 5 pages.
Rubine, D.H. (Dec. 1991). “The Automatic Recognition of Gestures,” CMU-CS-91-202, Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Computer Science at Carnegie Mellon University, 285 pages.
Rubine, D.H. (May 1992). “Combining Gestures and Direct Manipulation,” CHI '92, pp. 659-660.
Westerman, W. (Spring 1999). “Hand Tracking, Finger Identification, and Chordic Manipulation on a Multi-Touch Surface,” A Dissertation Submitted to the Faculty of the University of Delaware in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Electrical Engineering, 364 pages.
Related Publications (1)
Number Date Country
20170220199 A1 Aug 2017 US
Divisions (1)
Number Date Country
Parent 11586862 Oct 2006 US
Child 12613439 US
Continuations (3)
Number Date Country
Parent 15207260 Jul 2016 US
Child 15484018 US
Parent 14137837 Dec 2013 US
Child 15207260 US
Parent 12613439 Nov 2009 US
Child 14137837 US
Continuation in Parts (2)
Number Date Country
Parent 11241839 Sep 2005 US
Child 11586862 US
Parent 11240788 Sep 2005 US
Child 11241839 US