Finger- or stylus-operated graphical touch-screen keyboards (sometimes referred to as virtual keyboards or digital keyboards) present some challenging design problems, especially on small form factors such as a mobile phone, where screen real estate is limited and the keyboard and the application compete for it.
From the perspective of the keyboard, the designer is confronted by a number of tradeoffs. For a given footprint, the designer has to choose between more but smaller keys, or fewer but bigger keys. Having more keys on a keyboard means less of the expensive, time-consuming navigation from one graphical keyboard (e.g., the primary) to another (e.g., the secondary or tertiary keyboard character sets and so on). However, the potential to shrink the keys in order to present the additional keys from other keyboards is very limited, because the smaller the keys, the harder it is for users to accurately tap the desired key in a timely manner.
As a result, the keys can only be shrunk so far, and designs typically resort to limiting the number of keys available at any one time and employing a multiple-keyboard strategy. Moving from keyboard to keyboard imposes an extra burden on the user, both in time-motion terms (i.e., the hand movement and keystrokes needed to navigate from one to the other) and in cognitive terms (i.e., remembering where characters are located and/or searching for them). Additional cognitive load is imposed by the disruption of flow and of context, and the associated need to assimilate the new menu—as well as the cost of switching back to the standard keyboard when finished.
Thus, access to the full character set comes at the cost of user overhead in switching from keyboard to keyboard, knowing (or hunting for) which keyboard contains the character or characters that need to be entered, and the disruption of attention and working memory imposed by switching contexts. As one example, one mobile smartphone device uses four separate graphical keyboards, including a main alphabetic keyboard, an emoticon keyboard, a first numeric/special character keyboard and a second numeric/special character keyboard.
This Summary is provided to introduce a selection of representative concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in any way that would limit the scope of the claimed subject matter.
Briefly, various aspects of the subject matter described herein are directed towards a technology in which a graphical or printed keyboard is provided on a touch-sensitive surface at which tap input and gesture input are received. The keyboard is configured with a removed key set comprising at least one removed or substantially removed key, in which each key of the removed key set corresponds to a character, action, or command code that is enterable via a gesture.
In one aspect, a keyboard is provided, in which the keyboard includes alphabetic keys and numeric keys in a same-sized or substantially same-sized touch-sensitive area relative to a different keyboard that includes alphabetic keys and does not include numeric keys, and in which the keyboard and the different keyboard have same-sized or substantially same-sized alphabetic keys. The keyboard is provided by removing one or more keys from the keyboard that are made redundant by gesture input.
In one aspect, there is described receiving data corresponding to interaction with a key of a keyboard, in which at least one key represents at least three characters (including letters, numbers, special characters and/or commands). If the data indicates that the interaction represents a first gesture, a first character value is output. If the data indicates that the interaction represents a second gesture (that is different from the first gesture), a second character value is output. If the data indicates that the interaction represents a tap, a tap-related character value represented by the key may be output.
Other advantages may become apparent from the following detailed description when taken in conjunction with the drawings.
The present invention is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:
Various aspects of the technology described herein are generally directed towards a touch-sensitive graphical or printed keyboard technology in which gestures replace certain keys on the keyboard, e.g., those that are made unnecessary (that is, made otherwise redundant) by the gestures. The removal of otherwise redundant keys allows providing more keys on the provided keyboard in the same touch-sensitive real estate, providing larger keys in the same touch-sensitive real estate, and/or reducing the amount of touch-sensitive real estate consumed by the keyboard. Note that as used herein, a “graphical” keyboard is one that is rendered on a touch-sensitive display surface, and can therefore programmatically change its appearance. A “printed” keyboard is one associated with a pressure-sensitive surface or the like (e.g., built into the cover of a slate computing device) that is not programmatically changeable in appearance, e.g., a keyboard printed, embossed, physically overlaid as a template, or otherwise affixed to or part of a pressure-sensitive surface. As will be understood, the keyboards described herein generally may be either graphical keyboards or printed keyboards, except for those aspects that rely on programmatically changing the appearance, which apply only to graphical keyboards.
Another aspect is directed towards the use of additional gestures to allow a single displayed key to represent multiple characters, e.g., three or four. As used herein, “character” refers to anything that may be entered into a system via a key, including alphabetic characters, numeric characters, symbols, special characters, and commands. For example, a key may display one character for a “tap” input, and three characters for three differentiated upward gestures, namely one for a generally upward-left gesture, one for a generally straight up gesture, and one for a generally upward-right gesture.
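By way of example and not limitation, the differentiation among the three upward gestures may be implemented as a simple angular classification of the stroke vector. In the following sketch, the sector boundaries and the per-key character assignments are illustrative assumptions, not required values:

```python
import math

# Illustrative angular sectors for the three differentiated upward gestures
# (assumed boundaries; an implementation would tune these empirically).
UPWARD_SECTORS = (
    (112.5, 157.5, "up_left"),   # generally upward-left
    (67.5, 112.5, "up"),         # generally straight up
    (22.5, 67.5, "up_right"),    # generally upward-right
)

def classify_upward_stroke(dx: float, dy: float):
    """Classify a stroke vector into one of three upward directions.

    dx, dy are screen deltas; screen y grows downward, so dy is negated
    to obtain a conventional angle (0 degrees = east, 90 = north).
    """
    angle = math.degrees(math.atan2(-dy, dx))
    for lo, hi, name in UPWARD_SECTORS:
        if lo <= angle < hi:
            return name
    return None  # not a recognized upward stroke

def character_for_key(key_chars: dict, dx: float, dy: float):
    """Return the character an upward stroke selects on a multi-character key."""
    direction = classify_upward_stroke(dx, dy)
    return key_chars.get(direction) if direction else None

# Example with assumed per-key assignments for the three upward characters.
key = {"up_left": "&", "up": "7", "up_right": "*"}
print(character_for_key(key, dx=-8.0, dy=-10.0))  # leans up-left -> "&"
```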
Another aspect is directed towards providing a virtual touchpad or the like that facilitates text editing. A gesture may be used to invoke the virtual touchpad and enter an editing mode. The gesture may be the same as another, existing gesture, with the two similar gestures distinguished by their starting locations on the keyboard, or by whether the gesture crosses the surface boundary (bezel), for example.
It should be understood that any of the examples herein are non-limiting. For instance, the keyboards and gestures exemplified herein are only for purposes of illustration; other keys made redundant by other gestures may be removed, and/or not all those shown herein need be removed. Different keyboard layouts—or different device dimensions, physical form factors, and/or device usage postures or grips, in addition to those exemplified herein—will benefit from the technology described herein. Different gestures other than and/or in addition to one or more of those exemplified also may be used; further, the gestures may be “air” gestures, not necessarily on a touch-sensitive surface, such as sensed by a Kinect™ device or the like. As another example, finger input is generally described; however, a mechanical intermediary such as a plastic stick/stylus or a capacitive pen that is basically indistinguishable from a finger, or a battery-powered or inductively coupled stylus that can be distinguished from a finger, are among the possible alternatives that may be used. Moreover, the input may be refined (e.g., hover feedback may be received for the gestural commands superimposed on the keys), and/or different length and/or accuracy constraints may be applied to a stroke gesture depending on whether a pen or finger is known to be performing the interaction (which may be detected by contact area). As such, the present invention is not limited to any particular embodiments, aspects, concepts, structures, functionalities or examples described herein. Rather, any of the embodiments, aspects, concepts, structures, functionalities or examples described herein are non-limiting, and the present invention may be used in various ways that provide benefits and advantages in computers and keyboard and gesture technology in general.
In general, radial or “marking” menus provide for conventional tapping on the keyboard 106 to be augmented by the use of gestures, such as simple strokes (comprising detected finger or pen movement in one general direction), received in the same area. Taps versus strokes typically may be distinguished by a minimum time of finger or stylus contact and/or a threshold on the total distance moved by the finger or other input mechanism (e.g., stylus). This is generally because “taps” may inadvertently slide a little, and thus very short strokes are treated as taps in one implementation. Further, long strokes may return to (near) the starting point. This reverse gesture may be used as a way to “cancel” a stroke gesture in progress in one implementation, before the finger or other input mechanism is lifted. In this situation, no input to the buffer occurs (i.e., these are neither taps nor gestures). Similarly, a user may initiate a shift with an upward gesture on a key and decide not to use the shifted key; the user may stroke downward back to around the initial position of the touch (e.g., without having lifted the finger) and then release. This reverse gesture may output the lowercase character; note that the character displayed on the key may reflect the current state (e.g., showing the shifted character when the finger is above the key beyond a certain threshold, and the lowercase character when the finger is close to the initial position).
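One non-limiting way to realize the tap/stroke/cancel distinction described above is sketched below using distance criteria only; a minimum-contact-time test could be added similarly. The threshold values are illustrative assumptions that an implementation would tune for finger versus stylus input:

```python
import math

# Threshold values are illustrative assumptions, tuned in practice for
# finger versus stylus input (e.g., distinguished by contact area).
TAP_MAX_PATH = 10.0      # px; short inadvertent slides still count as taps
CANCEL_RADIUS = 15.0     # px; ending this close to the start cancels a stroke

def classify_trace(points):
    """Classify a touch trace, given as a list of (x, y) samples, as a
    'tap', a 'stroke', or a 'cancel' (a long stroke returning near its
    starting point, which produces no buffer input)."""
    path_length = sum(
        math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)
    )
    end_offset = math.dist(points[0], points[-1])
    if path_length <= TAP_MAX_PATH:
        return "tap"      # very short strokes are treated as taps
    if end_offset <= CANCEL_RADIUS:
        return "cancel"   # reverse gesture: neither tap nor stroke
    return "stroke"
```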
In one implementation, tapping on any alphabetic key of the keyboard 106 outputs the lower-case character associated with that key, whereas an upward stroke initiated on the same key results in the shifted value (e.g., uppercase) of the associated character being output, thus avoiding the need for a separate tap on a Shift key. A stroke to the right initiated anywhere on the keyboard 106 outputs a Space. Likewise, a stroke to the left initiated anywhere on the keyboard 106 outputs a Backspace, while one slanting down to the left (e.g., initiated anywhere on the keyboard 106) outputs Enter. In some embodiments, the standard stroke gestures are enabled on the central cluster of alphanumeric characters, whereas one or more peripheral keys (e.g., specific keys, such as Backspace or Ctrl, or specific regions, such as a numeric keypad or a touchpad area for cursor control, if any) may have different or only partially overlapping stroke gestures assigned to them, including no gestures at all, e.g., in the case of a cursor-control touchpad starting region as exemplified below. Thus, the stroke menus may be spatially multiplexed (e.g., potentially different for some keys, or for certain sets of keys). Also, for keys near the keyboard edge, gestures in certain directions may not be possible due to lack of space (e.g., a right stroke from a key on the right edge of the surface), whereby the user may start the gesture more toward the center to enter the input.
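The keyboard-wide mapping of stroke directions to Shift, Space, Backspace and Enter may, for example, be realized with an angular dispatch along the following lines; the 45-degree sectors are an illustrative assumption, and unassigned directions produce no action:

```python
import math

def stroke_action(dx: float, dy: float):
    """Map a stroke vector to a keyboard-wide action (sketch).

    Screen y grows downward, so dy is negated; 0 degrees = east, 90 = north.
    """
    angle = math.degrees(math.atan2(-dy, dx)) % 360
    if angle < 22.5 or angle >= 337.5:
        return "SPACE"        # stroke to the right, initiated anywhere
    if 67.5 <= angle < 112.5:
        return "SHIFT"        # upward stroke on a key -> shifted character
    if 157.5 <= angle < 202.5:
        return "BACKSPACE"    # stroke to the left, initiated anywhere
    if 202.5 <= angle < 247.5:
        return "ENTER"        # stroke slanting down to the left
    return None               # direction not assigned a keyboard-wide action

print(stroke_action(10.0, 0.0))    # SPACE
print(stroke_action(-10.0, 10.0))  # ENTER (down-left)
```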
Note that gestures also may be used to input other non-character actions (not only backspace), such as user interface commands in general (e.g., Prev/Next fields in form-filling, Go commands, Search commands, and so forth) which sometimes have representations on soft keyboards. Still further, richer or more general commands (such as Cut/Copy/Paste) may also be entered by gestures, macros may be invoked by gestures, and so forth.
To this end, as shown in
Note that gestures are generally based upon the North-South-East-West (NSEW) directions of the displayed keyboard. However, the NSEW axis may be rotated by some amount (in opposite, mirrored directions for each hand), particularly for thumb-based gestures, because users intending to gesture up with the right thumb actually tend to gesture more NE or NNE; similarly, the left thumb tends to gesture more NW or NNW.
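A non-limiting sketch of such mirrored axis rotation follows; the 15-degree magnitude and the means of detecting which hand is gesturing are assumptions for illustration only:

```python
import math

THUMB_ROTATION_DEG = 15.0  # illustrative magnitude; tuned empirically

def normalize_for_thumb(dx: float, dy: float, hand: str):
    """Rotate a stroke vector so thumb strokes classify as intended.

    Right-thumb "up" strokes drift toward NE/NNE, so they are rotated back
    toward N; left-thumb strokes drift NW/NNW and are rotated the mirrored
    amount. Screen coordinates are assumed, with y growing downward.
    """
    theta = math.radians(
        -THUMB_ROTATION_DEG if hand == "right" else THUMB_ROTATION_DEG
    )
    rx = dx * math.cos(theta) - dy * math.sin(theta)
    ry = dx * math.sin(theta) + dy * math.cos(theta)
    return rx, ry

# A right-thumb NE drift (dx > 0, dy < 0) is pulled back toward straight up.
print(normalize_for_thumb(3.0, -10.0, "right"))
```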
Further, as described herein, the tap or gesture handling logic 108 provides a user with a mechanism for entering an edit mode in which a virtual editing touchpad 116 or the like is made available to the user, along with a mechanism for exiting the edit mode. As also described herein, taps, movements and gestures on the virtual editing touchpad 116 are handled by a touchpad manager 118 and may result in character values and/or pointer events entered into the buffer 114. Note that in another implementation, a touchpad is always visible (at least for one associated keyboard), and there is no need to switch modes.
Because of the ability to use gestures for certain keys, those keys become unnecessary/otherwise redundant for entering their corresponding characters. Described herein is the removal of those keys from the keyboard, thus providing a number of benefits.
As can be seen, via the removal, numerical/special characters may be substituted, e.g., the top row of the standard QWERTY keyboard (the digits one through nine and zero, as well as the shifted characters above them) is provided in the space freed up by removing the redundant keys. In one implementation, employing the uppercase and lowercase symbols of the added keys moves a total of twenty-six characters to the primary keyboard from a secondary one. Note that other characters that appear on a physical QWERTY keyboard also appear to the right and lower left. By removing the Space, Enter, Shift and Backspace keys, this keyboard provides far more characters while consuming the same touch-sensitive surface real estate and having the same size of keys as, for example, other keyboards with far fewer characters. The immediate access to those common characters produces a very significant increase in text entry speed, and reduces complexity.
The increase in entry speed may be accomplished without changing the size of the keys or the amount of real-estate consumed by the keyboard. Furthermore, the technology reduces or even eliminates the frequency of shifting from one graphical keyboard to another, while building on existing user skills rather than requiring a significant user investment in learning new ones. Users may start to benefit virtually immediately.
In
In another embodiment, a gesture may be used to initiate an action, with a holding action after initiation being used to enter a control state. For example, a stroke to the left, when the finger is lifted, may be recognized as a backspace, whereas the same stroke, followed by holding at the end position of the stroke instead of lifting, initiates an auto-repeat backspace. Moving left after this point may be used to speed up the auto-repeat. Moving right may be used to slow down the auto-repeat, and potentially reverse it so as to restore deleted characters.
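The following sketch illustrates one possible realization of this control state, assuming a host that delivers move events and fires a timer at the controller's current interval; the interval values, the scaling rule, and the callback names are illustrative assumptions:

```python
class BackspaceAutoRepeat:
    """Sketch of the hold-to-auto-repeat control state described above."""

    BASE_INTERVAL = 0.5   # seconds between deletions while holding still
    MIN_INTERVAL = 0.05   # fastest assumed repeat rate

    def __init__(self, delete_char, restore_char):
        self.delete_char = delete_char    # callback: delete one character
        self.restore_char = restore_char  # callback: put one character back
        self.anchor_x = 0.0
        self.interval = self.BASE_INTERVAL

    def start_hold(self, x: float):
        """Entered when a leftward stroke ends in a hold rather than a lift."""
        self.anchor_x = x

    def on_move(self, x: float):
        """Left of the anchor speeds up deletion; right of it reverses."""
        delta = x - self.anchor_x
        if delta <= 0:
            # Further left -> shorter interval -> faster auto-repeat.
            self.interval = max(
                self.MIN_INTERVAL, self.BASE_INTERVAL + delta * 0.005
            )
        else:
            # Right of the anchor: restore a deleted character, then
            # ratchet the anchor so each rightward step restores once.
            self.restore_char()
            self.anchor_x = x

    def on_tick(self):
        """Called by the host every self.interval seconds while holding."""
        self.delete_char()
```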
The arrow labeled 331 shows how an upward stroke gesture is processed into the shifted version of the character. That is, instead of tapping, if the user does an upward stroke, the shifted version of that character results. In the example of
Note that in an alternative embodiment (or in the same implementation but from a certain starting area), a generic upward gesture may be used to engage a shift state for the entire keyboard (rather than requiring a targeted gesture to produce the shift character). This helps with edge gesture detection where users need to gesture from the bottom row of keys (which may inadvertently invoke other functionality). Also, an upward gesture with two fingers instead of one (and initiated anywhere on the keyboard) may cause a Caps Lock instead of Shift (and a downward gesture with two fingers may restore the default state). Instead of a two-finger gesture, a single-finger gesture made while another finger is pressing on the keyboard may be interpreted to have a different meaning from a similar single-finger gesture.
In one example implementation, if a user touches anywhere on the keyboard and does a stroke to the right, a Space character results. This is illustrated by arrow 332 in
Note that because the SPACE, BACKSPACE and ENTER strokes can be initiated anywhere on the keyboard, which is a large target, and because their direction is both easy to articulate and has strong mnemonic value, they can be articulated using an open-loop ballistic action (ballistic gestures not requiring any fine motor control), rather than a closed-loop attentive key press. The result is an easy-to-learn way to significantly increase text entry rates. Thus, also described herein is improving the overall performance of entering alphanumeric text with a keyboard. The technique achieves improvements by significantly reducing the number of keystrokes required to enter almost any character string, and also significantly reduces the need to move back and forth between the primary QWERTY keyboard and secondary keyboards with special characters. Avoiding switching keyboards not only increases performance because there is no need to tap on a dedicated key, but also because it avoids the visual parsing of the keyboard layout on every switch. The size of the QWERTY keyboard may be unchanged, as may be the size of the keys.
Furthermore, the technique is designed to build upon existing skills, such as familiarity with the QWERTY layout. The technique is easily discoverable, can be learned easily, and unlike other techniques (which can enable far faster speeds than the technique proposed, but only for relatively very few users), this technique benefits users almost immediately. Example ways to facilitate discovery are described in U.S. Pat. No. 8,196,042, and U.S. published patent applications nos. 20090187824 and 20120240043. Such assistance may illustrate the gestures, as well as particular manual strategies for articulating them, such as entering the space (right stroke) with the left thumb, and the backspace (left stroke) with the right thumb, which has been found to encourage an efficient typing rhythm.
Thus, the technology described herein increases text entry speed, and unlike previous implementations, makes the new gesture technique very discoverable. As described herein, keys from the keyboard that are made redundant by the strokes are removed. Doing so frees up valuable screen or surface real estate for other keys, e.g., by removing an entire row from the keyboard. However, what remains is still immediately recognizable as a QWERTY keyboard. Any missing keys are quickly noticed as soon as one wants to use them, which facilitates discoverability of the new technique. For example, via a HELP key/HELP key combination/HELP gesture or other referenced ways to facilitate discovery, the gestures (e.g., single strokes) are explained and almost immediately remembered, thereby enabling the user to use the keyboard productively. Further, context may be used to explain the gestures; for example, if the system knows that a user has never used the new keyboard and there is a long pause before an expected space character, the system may conclude that the user is most likely looking for the space key, thus triggering a visual explanation for the space gesture (and possibly explaining other available gestures at the same time).
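By way of illustration only, the pause-triggered explanation may be structured as follows; the pause threshold and the heuristic for an “expected space” are assumptions, as are the callback names:

```python
import time

PAUSE_THRESHOLD = 3.0  # seconds; an assumed value

class GestureHintTrigger:
    """Shows a gesture explanation when a new user pauses where a space
    character would be expected. The word-boundary heuristic below is a
    crude illustrative assumption."""

    def __init__(self, show_hint, is_new_user: bool):
        self.show_hint = show_hint        # callback: display the explanation
        self.is_new_user = is_new_user
        self.last_key_time = time.monotonic()
        self.last_char = ""

    def on_character(self, ch: str):
        self.last_key_time = time.monotonic()
        self.last_char = ch

    def on_idle_check(self):
        """Polled periodically by the keyboard host."""
        pause = time.monotonic() - self.last_key_time
        expecting_space = self.last_char.isalpha()  # likely mid-sentence
        if self.is_new_user and expecting_space and pause >= PAUSE_THRESHOLD:
            self.show_hint("space")   # e.g., animate the rightward stroke
            self.is_new_user = False  # explain once, then stop interrupting
```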
Turning to aspects of reducing key count and/or menu count, the technology described herein also may eliminate duplicated keys, as there are some characters that conventionally appear on more than one keyboard. For example, the ten digits often appear on multiple numeric keyboards, as do the period “.” and comma “,” characters. Duplicates of such keys may be eliminated. This may be used to significantly reduce the number of overall keys needed by a system, while still supporting all of the keys and functions of the current keyboard. Furthermore, in so doing, the number and/or size of any secondary, tertiary (and/or other) keyboards may be reduced, or the secondary, tertiary (and/or other) keyboards may be eliminated because they are no longer necessary.
Note that two (or more) simultaneous finger gestures may be used with such a three (or more) character key. This may be used to enter commands, or to provide for even more characters per key than a single-finger gesture allows.
By this technique, all shifted characters are accessible, yet a secondary keyboard that would otherwise provide such characters may be eliminated (which is also true of the example keyboards of
In summary, a hybrid tap/stroke keyboard is provided which augments a QWERTY tap keyboard with gestures (e.g., strokes) that provide alternatives for the frequently used Space, Backspace, Shift, and Enter keys. The keys made redundant by the strokes are removed from the keyboard. This frees up surface real estate, e.g., a whole row, into which the set of numbers and special characters or the like may appear on the primary keyboard, without impacting key size or overall keyboard footprint. Different upward strokes provide for an even richer character set.
Having created space by eliminating keys, the ten vacant keys in the top row may be populated in a manner consistent with the top row of the standard QWERTY Keyboard, with the ten digits in the lower-case positions, and the usual characters occupying the upper case positions. Likewise, the three unused keys in the bottom row may be populated with the six characters (three upper-case and three lower-case) typically found in the bottom row of a standard QWERTY keyboard. As with the general shift character concept described above, for alphabetic characters tapping outputs the lower-case character, while an upward stroke starting on a particular key outputs the associated shifted (e.g., uppercase) character.
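For example, the repopulated rows may be represented as simple tap/stroke pairs mirroring the standard QWERTY assignments; the top-row pairs below follow the standard US layout, while the three bottom-row pairs are typical but assumed for illustration:

```python
# Tap value -> upward-stroke (shifted) value, mirroring the standard QWERTY
# top row; the bottom-row pairs are typical selections, assumed here.
TOP_ROW = {
    "1": "!", "2": "@", "3": "#", "4": "$", "5": "%",
    "6": "^", "7": "&", "8": "*", "9": "(", "0": ")",
}
BOTTOM_ROW = {",": "<", ".": ">", "/": "?"}

def key_output(key: str, upward_stroke: bool) -> str:
    """Tap outputs the lower-case value; an upward stroke starting on the
    key outputs the associated shifted value."""
    table = {**TOP_ROW, **BOTTOM_ROW}
    return table[key] if upward_stroke and key in table else key

print(key_output("7", upward_stroke=False))  # '7'
print(key_output("7", upward_stroke=True))   # '&'
```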
By the removal of keys made redundant by gestures in this example graphical keyboard, twenty-six new characters are added that are directly accessible from the main keyboard. In so doing, the standard layout of the traditional QWERTY keyboard is basically retained, thereby reducing problems of visual search for users familiar with the standard layout and significantly reducing the frequency with which users have to go to a secondary keyboard in order to type a message. Furthermore, the more efficient gestural means of articulating the SHIFT, SPACE, BACKSPACE and ENTER keys are integrated.
One way to accommodate other characters is to add a second graphical keyboard, such as is done in contemporary phone implementations. However, rather than a whole new graphical keyboard, in one implementation only selected keys may change (e.g.,
An emoticon keyboard, such as the example graphical emoticon keyboard 660 of
Note that as in the tablet (or slate) style keyboard of
Turning to aspects related to editing, described herein is a virtual touchpad, which may include cursor keys and/or be used to enter pointer events, for example.
Then, for example, a left stroke 881 in the region to the left of the dashed line is still a Backspace. However, instead of a right-to-left stroke anywhere on the graphical keyboard always being a Backspace, spatial multiplexing may be used, e.g., the same gesture 882 starting in the region/keys to the right of the dashed line may instead have a different meaning. For example, on a graphical keyboard, such a gesture to the right of the dashed line may bring up a virtual touchpad (cursor mode) 990, as generally represented in
As can be readily appreciated, this is only one example; alternatively, a different gesture (e.g., a stroke straight down) or a more elaborate gesture (e.g., a circular or zigzag gesture, or a gesture with two or more fingers) may be used to bring up the virtual touchpad without having different regions. Stroking on the keyboard with two fingers in contact offers another example, which may eliminate the intermediate step of bringing up the virtual touchpad (e.g., a two-finger movement, or movement with one finger held down while the other finger or a stylus enters a gesture, may be directly interpreted as cursor-mode input). Another gesture (possibly the same one) or interaction with another part of the keyboard may be used to remove the virtual touchpad (cursor mode) 990 to resume typing.
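One non-limiting sketch of the region-based (spatially multiplexed) interpretation of the leftward stroke follows; the divider position and the action names are illustrative assumptions:

```python
TOUCHPAD_REGION_X = 0.75  # assumed divider, as a fraction of keyboard width

def leftward_stroke_action(start_x_frac: float, in_edit_mode: bool) -> str:
    """Interpret a leftward stroke by where it started (spatial multiplexing).

    start_x_frac is the stroke's starting x position, normalized to [0, 1).
    """
    if start_x_frac < TOUCHPAD_REGION_X:
        return "BACKSPACE"        # the usual keyboard-wide gesture
    if not in_edit_mode:
        return "SHOW_TOUCHPAD"    # bring up the virtual touchpad (cursor mode)
    return "HIDE_TOUCHPAD"        # a later left stroke exits the mode
```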
The keys shown in the virtual touchpad (cursor mode) 990 are only examples of one possible implementation, with cursor, home and end keys allowing for cursor movement. A Select key may toggle between a cursor movement mode and a mode in which text is highlighted for selection as the user moves over it via the cursor keys, for example.
A Pointer Mode key may be used to toggle from the virtual touchpad cursor mode into a pointer mode, in which a user enters pointer events by dragging a finger or stylus, tapping, double-tapping and so forth, as with existing touchpad mechanisms. One such virtual touchpad pointer mode 1090 is exemplified in
In this example implementation, more than two characters may be available on a given key, with the selected one corresponding to up-left, up, and up-right gestures. Thus, if a generally upward gesture is detected at step 1112, steps 1114 and 1116 handle such a straight-up gesture by outputting the center key's character value (of the shifted key). Steps 1118 and 1120 output the leftmost upper key's character value (of the shifted key), and step 1122 outputs the rightmost upper key's character value (of the shifted key). Note that rather than left, “leftmost” is exemplified because not all keys need have a left character, and similarly “rightmost” is used for the same reason. For example, in
Steps 1124 and 1126 handle the output of the Enter character. Step 1128 detects a left gesture for handling as generally shown in
If the left stroke started in the right region (using the example of
If not in the editing mode at step 1206, step 1210 enters the editing mode, including by displaying the virtual touchpad. Step 1212 represents operating in the editing mode, including its cursor key sub-mode and pointer sub-mode (as well as possibly one or more other sub-modes), which continues until a user exits the mode via a left gesture at step 1214. Again, the stroke may have to clearly exit the virtual touchpad area, particularly if the user is in the pointer entry sub-mode. In another instance, the virtual touchpad may be large enough to present the editing mode and pointer mode together, in which event there is no need for sub-modes because the editing mode and pointer sub-mode are visible at the same time.
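The mode and sub-mode handling of steps 1206 through 1214 may, for example, be structured as a small state machine; the sketch below uses assumed state names and is illustrative only:

```python
class EditingModeController:
    """Sketch of the mode flow of steps 1206-1214; state names are assumed."""

    def __init__(self):
        self.mode = "typing"       # "typing" or "editing"
        self.sub_mode = "cursor"   # "cursor" or "pointer" while editing

    def on_left_stroke_from_touchpad_region(self):
        """Step 1210 enters the editing mode (showing the virtual touchpad);
        step 1214 exits it on a later left gesture."""
        self.mode = "editing" if self.mode == "typing" else "typing"

    def on_pointer_mode_key(self):
        """Toggle between the cursor-key sub-mode and the pointer sub-mode."""
        if self.mode == "editing":
            self.sub_mode = "pointer" if self.sub_mode == "cursor" else "cursor"
```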
As can be seen, there are shown implementations of graphical and/or printed keyboards that provide access to more of the character set than other known keyboards. At the same time, the real-estate footprint of the keyboard may remain unchanged, and/or the footprint can be reduced. The key size may remain constant. Further, not only is time saved by not having to navigate between character sets, but typing speed tends to increase due to using directional stroke gestures for Space, Backspace, Shift, and Enter, including that Space, Backspace and Shift may be entered without having to look at the keyboard. A standard QWERTY keyboard layout may be used, in which event users will recognize the keyboard when they encounter it. Similar situations exist for keyboards of other countries/character sets.
Unlike prior keyboards, the otherwise redundant keys are removed from the layout, whereby discovering the gestures is inherent. For example, this frees up a row on the keyboard, whereby the numeric, punctuation and special characters typically on one or more secondary keyboards fit into the resulting freed-up space.
With reference to
Components of the device 1500 may include, but are not limited to, a processing unit 1505, system memory 1510, and a bus 1515 that couples various system components including the system memory 1510 to the processing unit 1505. The bus 1515 may include any of several types of bus structures including a memory bus, memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures, and the like. The bus 1515 allows data to be transmitted between various components of the mobile device 1500.
The mobile device 1500 may include a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the mobile device 1500 and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the mobile device 1500.
Communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, Bluetooth®, Wireless USB, infrared, Wi-Fi, WiMAX, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
The system memory 1510 includes computer storage media in the form of volatile and/or nonvolatile memory and may include read only memory (ROM) and random access memory (RAM). On a mobile device such as a cell phone, operating system code 1520 is sometimes included in ROM although, in other embodiments, this is not required. Similarly, application programs 1525 are often placed in RAM although again, in other embodiments, application programs may be placed in ROM or in other computer-readable memory. The heap 1530 provides memory for state associated with the operating system 1520 and the application programs 1525. For example, the operating system 1520 and application programs 1525 may store variables and data structures in the heap 1530 during their operations.
The mobile device 1500 may also include other removable/non-removable, volatile/nonvolatile memory. By way of example,
In some embodiments, the hard disk drive 1536 may be connected in such a way as to be more permanently attached to the mobile device 1500. For example, the hard disk drive 1536 may be connected to an interface such as parallel advanced technology attachment (PATA), serial advanced technology attachment (SATA) or otherwise, which may be connected to the bus 1515. In such embodiments, removing the hard drive may involve removing a cover of the mobile device 1500 and removing screws or other fasteners that connect the hard drive 1536 to support structures within the mobile device 1500.
The removable memory devices 1535-1537 and their associated computer storage media, discussed above and illustrated in
A user may enter commands and information into the mobile device 1500 through input devices such as a key pad 1541, which may be a printed keyboard, and the microphone 1542. In some embodiments, the display 1543 may be a touch-sensitive screen (or even support pen and/or touch) and may allow a user to enter commands and information thereon. The key pad 1541 and display 1543 may be connected to the processing unit 1505 through a user input interface 1550 that is coupled to the bus 1515, but may also be connected by other interface and bus structures, such as the communications module(s) 1532 and wired port(s) 1540. Motion detection 1552 can be used to determine gestures made with the device 1500.
A user may communicate with other users via speaking into the microphone 1542 and via text messages that are entered on the key pad 1541 or a touch sensitive display 1543, for example. The audio unit 1555 may provide electrical signals to drive the speaker 1544 as well as receive and digitize audio signals received from the microphone 1542.
The mobile device 1500 may include a video unit 1560 that provides signals to drive a camera 1561. The video unit 1560 may also receive images obtained by the camera 1561 and provide these images to the processing unit 1505 and/or memory included on the mobile device 1500. The images obtained by the camera 1561 may comprise video, one or more images that do not form a video, or some combination thereof.
The communication module(s) 1532 may provide signals to and receive signals from one or more antenna(s) 1565. One of the antenna(s) 1565 may transmit and receive messages for a cell phone network. Another antenna may transmit and receive Bluetooth® messages. Yet another antenna (or a shared antenna) may transmit and receive network messages via a wireless Ethernet network standard.
Still further, an antenna provides location-based information, e.g., GPS signals to a GPS interface and mechanism 1572. In turn, the GPS mechanism 1572 makes available the corresponding GPS data (e.g., time and coordinates) for processing.
In some embodiments, a single antenna may be used to transmit and/or receive messages for more than one type of network. For example, a single antenna may transmit and receive voice and packet messages.
When operated in a networked environment, the mobile device 1500 may connect to one or more remote devices. The remote devices may include a personal computer, a server, a router, a network PC, a cell phone, a media playback device, a peer device or other common network node, and typically includes many or all of the elements described above relative to the mobile device 1500.
Aspects of the subject matter described herein are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with aspects of the subject matter described herein include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microcontroller-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Aspects of the subject matter described herein may be described in the general context of computer-executable instructions, such as program modules, being executed by a mobile device. Generally, program modules include routines, programs, objects, components, data structures, and so forth, which perform particular tasks or implement particular abstract data types. Aspects of the subject matter described herein may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Furthermore, although the term server may be used herein, it will be recognized that this term may also encompass a client, a set of one or more processes distributed on one or more computers, one or more stand-alone storage devices, a set of one or more other devices, a combination of one or more of the above, and the like.
While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.
Number | Date | Country
---|---|---
61720335 | Oct 2012 | US