1. Field
The present disclosure relates to a method and device for augmented handling of multiple calls with gestures.
2. Introduction
There are a number of ways to handle multiple calls on wireless communication devices. Most require a user's attention, touch or voice, which is often inconvenient and can cause user distraction, and voice is not practical in a crowd. In more detail, in conventional mobile devices, when a device is engaged in an active call and another call comes in, the user has to switch back and forth between the two numbers to decide which call to proceed with and which to dismiss. This process entails use of the device's touch interface, manual communication with the caller, conversation interruptions, and call-handling delays. It becomes burdensome and distracting when the user is preoccupied with other tasks, near other people, in a noisy environment, listening to the radio, or when the device is placed in a dock out of the user's reach.
Thus, in connection with handling multiple calls on wireless communication devices, there is a need to eliminate or minimize touch commands and the manual actuation of commands.
Thus, there is a need for augmented handling of multiple calls with intuitive gestures for electronic devices, such as wireless communication devices.
There is also a need for simplified and intuitive ways to handle multiple incoming calls effectively, quickly, easily and reliably, without requiring painstaking attention to the operation of the device.
Thus, a method and device with augmented ways of handling multiple calls with intuitive gestures that addresses these needs would be considered an improvement in the art.
In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the disclosure briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the disclosure will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
As shown in the drawings, in one embodiment, a wireless communication device 200 can include a controller 220, audio input and output circuitry 230, a display 240, a transceiver 250, a user interface 260, a memory 270, a gesture module 290 and a sensing assembly 295.
In one embodiment, the module 290 can reside within the controller 220, can reside within the memory 270, can be an autonomous module, can be software, can be hardware, or can be in any other format useful for a module on a wireless communication device 200.
The display 240 can be a liquid crystal display (LCD), a light emitting diode (LED) display, a plasma display, a touch screen display or any other means for displaying information. The transceiver 250 may include a transmitter and/or a receiver. The audio input and output circuitry 230 can include a microphone, a speaker, a transducer, or any other audio input and output circuitry. The user interface 260 can include a keypad, buttons, a touch screen or pad, a joystick, an additional display, or any other device useful for providing an interface between a user and an electronic device. The memory 270 may include a random access memory, a read only memory, an optical memory or any other memory that can be coupled to a wireless communication device.
A block diagram of a wireless communication method 300 is shown in the drawings. In one embodiment, the method 300 can include a receiving step 320 and an evaluating step 330, as detailed below.
Advantageously, the method 300 provides a simplified way to handle multiple calls that minimizes distractions and simplifies the actuation of commands via gestures. In one embodiment, the method 300 can utilize intuitive gesturing to handle multiple incoming calls effectively, quickly, easily and reliably, without requiring a user's undivided attention on a touch screen.
In one use case, the receiving step 320 can include presenting a representation of the incoming voice call and a representation of the existing voice call, defining a first region 242 in the form of an incoming call region and a second region 242 in the form of an existing call region, in a substantially side-by-side arrangement on a display 240, shown in landscape mode in the drawings.
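By way of a non-limiting illustration, the following Python sketch shows one way such side-by-side call regions could be defined; the helper name, coordinate convention and display dimensions are assumptions for illustration and are not part of the disclosure.

```python
# Illustrative sketch only: split a landscape display 240 into an
# incoming call region and an existing call region, side by side.

def define_call_regions(display_width, display_height):
    """Return two side-by-side (x, y, width, height) regions."""
    half = display_width // 2
    incoming_call_region = (0, 0, half, display_height)                     # first region
    existing_call_region = (half, 0, display_width - half, display_height)  # second region
    return incoming_call_region, existing_call_region

# Example: a hypothetical 1280x720 display presented in landscape mode.
incoming, existing = define_call_regions(1280, 720)
print(incoming, existing)  # (0, 0, 640, 720) (640, 0, 640, 720)
```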
In one embodiment, the evaluating step 330 includes utilizing the sensing assembly 295 to determine which gesture was performed. As should be understood, the sensing assembly 295 can vary, as will be detailed below; it can be modular, or it can include discrete components located at various locations. In more detail, the evaluating step 330 includes: utilizing the sensing assembly 295 to determine which gesture was performed, and actuating a command to send the incoming voice call to voicemail in response to the send gesture, to answer the incoming voice call in response to the answer gesture, or to conference the existing voice call with the incoming voice call in response to the conference gesture. This feature provides a simple way to quickly and easily actuate and execute a user command based on a certain gesture.
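As a hedged sketch of this gesture-to-command mapping, the evaluating step could be expressed as a simple dispatch; the gesture labels and handler functions below are illustrative stand-ins, not the disclosed implementation.

```python
# Sketch of evaluating step 330: dispatch a detected gesture to a command.
# The handlers are hypothetical stand-ins for the device's telephony calls.

def send_to_voicemail(call):
    print(f"Sending {call} to voicemail")

def answer_call(call):
    print(f"Answering {call}")

def conference_calls(existing, incoming):
    print(f"Conferencing {existing} with {incoming}")

def evaluate_gesture(gesture, existing_call, incoming_call):
    if gesture == "send":          # send gesture reported by sensing assembly 295
        send_to_voicemail(incoming_call)
    elif gesture == "answer":      # answer gesture
        answer_call(incoming_call)
    elif gesture == "conference":  # conference gesture
        conference_calls(existing_call, incoming_call)

evaluate_gesture("answer", "existing call", "incoming call")
```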
The evaluating step 330 can include an object touching the display, the display including a touch screen display. As should be understood, Applicant's intuitive gesturing can vary: in a preferred use case it can be touch free, and in an alternative embodiment it can include touch, by use of a touch screen display interface.
In an alternative embodiment, a wireless communication device 120 or 200 is shown, for example, in the drawings.
As shown in the drawings, the wireless communication device or terminal 120 can include a display 240 configured to present content in a portrait mode.
The wireless communication device 120 can include the send gesture comprising passing an object in a positive X direction, as indicated by arrow 508, the answer gesture comprising passing an object in a negative X direction, as indicated by arrow 502, and the conference gesture comprising passing an object in a negative Y direction, as indicated by arrow 510, or passing an object in a circular direction, as indicated by arrow 512, above the wireless communication device. Advantageously, a user can easily handle multiple calls while not looking at the device, such as when working out, riding a bike, and the like. In one embodiment, the gesture module 290 is coupled to the sensing assembly 295 to determine which gesture was performed, and is configured to actuate a command: to send the incoming voice call to voicemail in response to the send gesture, to answer the incoming voice call in response to the answer gesture, or to conference the existing voice call with the incoming voice call in response to the conference gesture. This arrangement provides a reliable structure for call handling on a device.
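To illustrate these direction conventions, a minimal classification sketch follows; the displacement threshold, units and axis orientation are assumptions, not disclosed values.

```python
# Sketch: classify an object's net motion above the device into a gesture.
# A 0.5 threshold in arbitrary displacement units is an assumed value.

def classify_gesture(dx, dy, circular=False):
    """Map net displacement (dx, dy) or a circular pass to a gesture name."""
    if circular or dy < -0.5:   # arrow 512 (circular) or arrow 510 (negative Y)
        return "conference"
    if dx > 0.5:                # arrow 508: positive X direction
        return "send"
    if dx < -0.5:               # arrow 502: negative X direction
        return "answer"
    return None                 # no recognized gesture

print(classify_gesture(0.9, 0.0))        # -> "send"
print(classify_gesture(0.0, 0.0, True))  # -> "conference"
```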
In an alternative embodiment, the display 240 includes a touch screen display that can read touch gestures, represented by arrows 502, 508, 510 and 512, as previously detailed.
As shown in the drawings, the sensing assembly 400 includes multiple phototransmitters arranged about, and equally spaced about, a single photoreceiver that is centrally positioned between the phototransmitters.
The drawings further show the photoreceiver 492, centrally positioned within the sensing assembly 400.
As understood by those skilled in the art, the photoreceiver 492 need not extend up to the very outer surfaces of the sensing assembly, and can be positioned behind a transparent window or wall, so as to provide protection for the photoreceiver and/or provide desired optical properties. Further, depending upon the embodiment, the photoreceivers can take a variety of forms including, for example, angle-diversity receivers or fly-eye receivers. Depending upon the embodiment, various filters can be employed above the photoreceivers and/or phototransmitters to filter out undesired light. Different filters can in some circumstances be employed with different ones of the phototransmitters/photoreceivers, for example, to allow for different colors of light to be associated with, transmitted by, or received by, the different components.
In one embodiment, the sensing assembly 400 can include multiple phototransmitters and/or photoreceivers co-located in a single, shared small region, and can be mounted on a circuit board along with other circuit components.
Also, the sensing assembly 400 can potentially be a discrete structure that can be implemented in relation to many different types of existing electronic devices, by way of a relatively simple installation process, as an add-on or even after-market device. Generally, a slide or swipe gesture can be defined as movement of an object in a defined plane across an electronic device, preferably at a generally constant distance from (typically above) the electronic device. For example, a left-to-right slide gesture can be defined by movement of an object across the sensing device in a positive x direction (as indicated by arrow 508).
Similarly, a right-to-left slide gesture can be defined by movement of an object across the sensing device in a negative x direction (as indicated by arrow 502).
Similarly, a top-to-bottom (or bottom-to-top) slide gesture can be defined by movement of an object across the sensing device, such as from a top side of the electronic device, in a negative y direction (as indicated by arrow 510), to a bottom side of the electronic device, as shown in the drawings.
Basically, in order to detect gestures, one or more phototransmitters of the sensing assembly are controlled by a processor to emit light over sequential time periods as a gesture is being performed, and one or more photoreceivers of the sensing assembly receive any light that is emitted from a corresponding phototransmitter and is then reflected by the object (prior to being received by a photoreceiver) to generate measured signals. The processor, which preferably includes an analog-to-digital converter, receives these measured signals from the one or more photoreceivers and converts them to a digital form, such as 10-bit digital measured signals. The processor then analyzes all or a portion of these digital measured signals over time to detect the predefined gesture. The analysis can be accomplished by determining specific patterns or features in one or more of the measured signal sets or in modified or calculated signal sets. In some cases, the timing of detected patterns or features in a measured signal set can be compared to the timing of detected patterns or features in other measured signal sets. Other data manipulation can also be performed. The predefined basic gestures can be individually detected or can be detected in predefined combinations, allowing for intuitive and complex control of the electronic device.
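A sketch of this sampling loop follows; the emit and read_adc_10bit functions, the transmitter count and the sample count are hypothetical stand-ins for the device's driver layer and are not taken from the disclosure.

```python
# Sketch: pulse each phototransmitter in turn over sequential time periods
# and digitize the photoreceiver's reflected-light reading into one
# measured signal set per phototransmitter.
import random

NUM_TRANSMITTERS = 4   # assumed: several phototransmitters around one photoreceiver
NUM_SAMPLES = 100      # assumed: samples captured while a gesture is performed

def capture_signal_sets(emit, read_adc_10bit):
    """Return one list of 10-bit samples (0-1023) per phototransmitter."""
    signal_sets = [[] for _ in range(NUM_TRANSMITTERS)]
    for _ in range(NUM_SAMPLES):                      # sequential time periods
        for tx in range(NUM_TRANSMITTERS):
            emit(tx)                                  # pulse phototransmitter tx
            signal_sets[tx].append(read_adc_10bit())  # reflected intensity
    return signal_sets

# Example with stubbed hardware functions.
sets = capture_signal_sets(lambda tx: None, lambda: random.randint(0, 1023))
```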
As previously detailed, in a preferred embodiment a specific gesture can be used to intuitively, easily and quickly select one or more items displayed on a display screen of the electronic device in a touchless manner. Since predefined gestures are detectable in a three-dimensional space, various menus or displays of items, such as contacts, icons or pictures, can be arranged in various desired manners on a display screen of the electronic device. Specific items can be selected through the use of one or more predefined gestures, including push/pull gestures (in the x direction or the y direction) and circular or hover-type gestures, for controlling and providing commands to an electronic device.
As mentioned earlier, various gesture detection routines, including various processing steps, can be performed to evaluate the measured signals, for example, assuming the use of a sensing assembly 400 as shown in the drawings.
With respect to a slide gesture, assuming that a z-axis distance of the object from the sensing assembly remains relatively constant, the occurrence of a slide gesture and its direction can be determined by examining the timing of the occurrence of intensity peaks in corresponding measured signal sets with respect to one or more of the other measured signal sets. As an object gets closer to a specific phototransmitter's central axis of transmission, more light from that transmitter is reflected and received by a photoreceiver, such as the photoreceiver 492 of the sensing assembly 400 shown in the drawings.
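The peak-timing comparison described here might be sketched as follows; the left/right placement of the two phototransmitters and the signal values are assumptions for illustration.

```python
# Sketch: infer slide direction from which measured signal set peaks first,
# assuming the object's z-axis distance stays roughly constant.

def slide_direction(left_signals, right_signals):
    """Compare the sample indices of the intensity peaks of two signal sets."""
    t_left = left_signals.index(max(left_signals))     # time of left peak
    t_right = right_signals.index(max(right_signals))  # time of right peak
    if t_left < t_right:
        return "left-to-right"   # object crossed the left transmitter's axis first
    if t_right < t_left:
        return "right-to-left"
    return "indeterminate"

# Example: the left set peaks earlier, so the slide reads left-to-right.
print(slide_direction([1, 8, 3, 2], [1, 2, 8, 3]))
```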
Similarly, the timing of intensity peaks in the measured signal sets can be examined to detect a top-to-bottom slide gesture or a circular gesture.
The devices 120, 200 and 400 and method 300 are preferably implemented on a programmed processor. However, the controllers, flowcharts, and modules may also be implemented on a general purpose or special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit elements, an integrated circuit, a hardware electronic or logic circuit such as a discrete element circuit, a programmable logic device, or the like. In general, any device on which resides a finite state machine capable of implementing the flowcharts shown in the figures may be used to implement the processor functions of this disclosure.
While this disclosure has been described with specific embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. For example, various components of the embodiments may be interchanged, added, or substituted in the other embodiments. Also, all of the elements of each figure are not necessary for operation of the disclosed embodiments. For example, one of ordinary skill in the art of the disclosed embodiments would be enabled to make and use the teachings of the disclosure by simply employing the elements of the independent claims. Accordingly, the preferred embodiments of the disclosure as set forth herein are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the disclosure.

In this document, relational terms such as “first,” “second,” and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “a,” “an,” or the like does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element. Also, the term “another” is defined as at least a second or more. The terms “including,” “having,” and the like, as used herein, are defined as “comprising.”
Number | Date | Country
---|---|---
61836792 | Jun 2013 | US