BODY MOVEMENT BASED USAGE OF MOBILE DEVICE

Abstract
Body-based sensors are used to interact with one or more mobile devices. This interaction can be an alternative to, or a supplement to, traditional input methods on mobile devices. To facilitate everyday use, sensors can be hidden in jewelry or other fashion accessories normally worn by a user and wirelessly coupled to the mobile devices. When a certain body movement is detected, a mobile device can automatically initiate one or more processes that perform various actions, such as actions on that mobile device or actions on communicatively coupled devices. The body movements can be user-defined so that a user can customize interaction with the mobile device to meet the user's particular needs. The sensor array can also be adapted to turn on or off depending on the current environment.
Description
TECHNICAL FIELD

The subject disclosure relates to user interaction with computing devices, and more particularly, to body-based user interaction with mobile devices.


BACKGROUND

Mobile devices, such as cell phones, portable media players, personal digital assistants (PDAs), messaging devices, and portable game players, are ubiquitous. Many users carry multiple mobile devices with them. Input devices for these mobile devices, however, are usually inadequate for a number of reasons.


With traditional keypads, there is a tradeoff between the size and the number of buttons. For example, in today's cell phones, at one end of the spectrum, the desire for a palm-sized device with a full QWERTY keypad drives button size to the limit of smallness. At the other end of the spectrum, character groupings for buttons (e.g., the grouping ABC assigned to a single key) save space and allow for bigger buttons, but may require tedious multiple presses of a button to achieve input precision for a single character.


Moreover, since keypads are often integrated into the devices, user input often cannot be personalized for a particular user of the device. Therefore, people with limited eyesight, for instance, often have problems finding devices with large buttons to meet their needs. In addition, buttons on the embedded keypad often cannot be reassigned by the user to perform user-defined functions.


Audio input and commands also suffer from disadvantages relating to precision of input and may not be intuitive across different languages or dialects. Furthermore, audio input suffers from a lack of privacy and the potential to inadvertently control other nearby mobile devices within earshot. Thus, audio input is inappropriate in a number of different environments (e.g., a public bus, office cubicles).


In addition, today's mobile devices usually do not share their user input devices with other computing devices, such as the user's other mobile devices. For example, a user's portable media player is often unaware that a user is making or receiving a call on the user's cell phone. Thus, improved ways to interact with mobile devices are desirable.


The above-described deficiencies of interacting with mobile devices are merely intended to provide an overview of some of the problems of interacting with today's mobile devices, and are not intended to be exhaustive. Other problems with the state of the art may become further apparent upon review of the description of various non-limiting embodiments that follows.


SUMMARY

The following presents a simplified summary in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is intended neither to identify key or critical elements of the invention nor to delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.


Body movements or other body information are captured as input to one or more mobile devices, such as a cell phone. This type of interaction can be an alternative to, or a supplement to, traditional keypad and audio interaction with a mobile device. The sensors can include orientation sensors that track movement of various parts of the body, and the tracked movement can be used as input to the one or more mobile devices.


In various embodiments, sensors are placed on or proximate to one or more places of the body (e.g., fingers, legs, torso, head, etc.), capturing some or all of a user's body movements and conditions. The body movements sensed by the sensors can be used as input to one or more mobile devices for a variety of tasks, such as making finger graffiti in the air for phone input to make phone calls, playing games such as a boxing game that senses when a punch is thrown, receiving sports instruction (e.g., how to improve a golf swing or a bowling ball roll), and a variety of other tasks and services as well.


The sensors are advantageously wirelessly coupled to one or more mobile devices, although some or all of the sensors can be connected by wire to the mobile device. The sensors can also advantageously be included as part of jewelry (e.g., rings, watches, bracelets, necklaces, pins, cuff links, etc.) or other fashion accessories (e.g., shoes, belts, foot bands, socks, pocket squares, head bands, hats, etc.).


In one embodiment, as mobile devices become connected to other autonomous computing devices and networks, certain gestures by the user can also initiate a series of actions and/or communications that cause an external response. For instance, a user might drive up to his or her garage door and, through a body movement gesture captured by body sensors communicatively coupled to the mobile device, initiate a sequence of operations that opens the garage door.


In one embodiment, one or more ring sensors for sensing finger movement are particularly advantageous because many people already wear rings, and such sensors can be used to determine what input is being conveyed to a virtual input device. For example, the ring sensors can determine which keys of a virtual “air” keyboard a user is pressing or where to move the cursor for a virtual mouse or trackball.


In at least some embodiments, the sensor array can be turned on or off, for example automatically depending on the environment, for privacy reasons and/or to prevent undesired actions from being performed. For example, the array can be automatically turned off when the user enters a bathroom, provided a sensor is available to detect that the user is in a bathroom. Similarly, body-based movement interaction can be turned off when it is detected that the user is performing an activity that involves body movement unrelated to input to the mobile device (e.g., driving, dancing, playing an instrument, etc.).


To the accomplishment of the foregoing and related ends, certain illustrative aspects of the invention are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles of the invention may be employed and the present invention is intended to include all such aspects and their equivalents. Other advantages and novel features of the invention may become apparent from the following detailed description of the invention when considered in conjunction with the drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an exemplary non-limiting block diagram of a mobile device that can receive body movement based interaction.



FIG. 2 is an exemplary non-limiting block diagram showing illustrative aspects of embodiments in the context of body movement based usage of a mobile device.



FIG. 3 is an exemplary non-limiting block diagram illustrating body movement based usage of multiple mobile devices.



FIGS. 4A-4C illustrate block diagrams of sensors that are embedded in jewelry or fashion accessories according to an embodiment.



FIG. 5 is an exemplary non-limiting block diagram showing illustrative aspects of embodiments in the context of body movement based usage of other computing devices and/or appliances via the mobile device according to one aspect.



FIGS. 6A-6C illustrate various exemplary body movements that can be used to initiate a process on a mobile device.



FIG. 7 is a flow diagram of virtual device input according to one aspect.



FIG. 8 is a diagram of an exemplary virtual input device that can be used to interact with a mobile device according to one embodiment.



FIG. 9 is a flow diagram showing illustrative aspects of embodiments in the context of body movement based usage of a mobile device.



FIG. 10 is a flow diagram illustrating aspects of embodiments that use a virtual input device for input.



FIG. 11 is a flow diagram illustrating aspects of embodiments that automatically turn on/off body-based movement interaction depending on the user's environment.



FIG. 12 is a block diagram of an input processing component for a mobile device according to one aspect.





DETAILED DESCRIPTION

The present invention is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It may be evident, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the present invention.


As discussed in the background, today's input devices for cell phones are inadequate due to the technological and engineering tradeoffs involved. In consideration of these limitations on input techniques for mobile devices, the invention captures body movements or other body information as input to a mobile device, such as a cell phone. In various embodiments, sensors are placed on one or more places of the body (e.g., fingers, legs, torso, head, etc.), capturing some or all of a user's body movements and conditions. The body movement sensed by the sensors can be used as input to a mobile device for a variety of tasks, such as making finger graffiti in the air for phone input to make phone calls, playing games such as a boxing game that senses when a punch is thrown, receiving sports instruction (e.g., how to improve a golf swing), communicating a need to drink more fluids, and a variety of other tasks and services as well.


Turning to FIG. 1, an exemplary non-limiting mobile computing system environment in which the present invention may be implemented is illustrated. Even though a general-purpose mobile computing device is illustrated, one will appreciate that any mobile computing device is contemplated, including mobile computing devices implemented using multiple processors, a system on a chip, or wearable form factors. Although not required, the invention can be implemented partly via software (e.g., firmware). Software may be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers.



FIG. 1 thus illustrates an example of a mobile computing device. Those skilled in the art will appreciate that the invention may be practiced with any suitable computing system environment; the computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100.


With reference to FIG. 1, an example of a computing device for implementing the invention includes a general-purpose mobile computing device in the form of a computer 110. Components of computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120. The system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.


Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 110. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile as well as removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.


The system memory 130 may include computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) and/or random access memory (RAM). A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, may be stored in memory 130. Memory 130 typically also contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation, memory 130 may also include an operating system, application programs, other program modules, and program data.


The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. For example, computer 110 could include a flash memory that reads from or writes to non-removable, nonvolatile media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and/or an optical disk drive that reads from or writes to a removable, nonvolatile optical disk, such as a CD-ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, digital versatile disks, digital video tape, solid state RAM, solid state ROM and the like.


A user may enter commands and information into the computer 110 through input devices. Input devices are often connected to the processing unit 120 through user input 140 and associated interface(s) that are coupled to the system bus 121, but may be connected by other interface and bus structures, in a wired or wireless manner, such as a parallel port, game port, universal serial bus (USB), wireless USB, or Bluetooth. A graphics subsystem may also be connected to the system bus 121. One or more remote sensors, including orientation sensors 145, are also connected to system bus 121 via input interface 140. At least one of the sensors is attached or proximate to the user's body, and each sensor is communicatively coupled to the computer via wired or wireless means. A monitor or other type of remote output device 155 may also be connected to the system bus 121 via an interface, such as output interface 150, which may in turn communicate with video memory. In addition to a monitor, computer 110 may also include other peripheral output devices, which may be connected through output interface 150.


The computer 110 may operate in a networked or distributed environment using logical connections to one or more other remote computers, such as remote computer 170, which may in turn have capabilities different from device 110. The logical connections depicted in FIG. 1 include a network 171. The network 171 can include both the wireless network described herein as well as other networks, such as a personal area network (PAN), a local area network (LAN), or a wide area network (WAN).


When used in a PAN networking environment, the computer 110 is connected to the PAN through a network interface or adapter, such as a Bluetooth or Wireless USB adapter. When used in a LAN networking environment, the computer 110 is connected to the LAN through a network interface or adapter. When used in a WAN networking environment, the computer 110 typically includes a communications component, such as a modem, or other means for establishing communications over the WAN, such as the Internet. A communications component, such as a network interface card, which may be internal or external, wired or wireless, may be connected to the system bus 121 via the user input interface of input 140, or other appropriate mechanism. In a networked environment, program modules, or portions thereof, may be stored in a remote memory storage device.



FIG. 2 illustrates a variety of body sensors S1, S2, S3, S4, S5, S6, S7, etc., including but not limited to orientation sensors on body parts, making input to device 210, which processes the input at 220, e.g., tracks movement of various body parts via the sensors. If a response condition is detected at 230 with the input, automatic action is taken by device 210 on behalf of the user 200 at 240. Response conditions can include, but are not limited to, predefined finger movements, digit taps, arm movements, head movements, etc. In at least some embodiments, some or all of the movements are user-defined. Automatic actions to be performed can be mobile device-dependent. For a cell phone, the automatic actions can include, but are not limited to, initiating a phone call, hanging up a phone call, calling a particular individual, retrieving information from the cell phone's phonebook, etc. For a portable game player, the automatic actions can include moving a game piece according to the body movement, quitting the game, saving the game, etc. As an additional example, automatic actions for a portable media player can include resuming playback, stopping playback, pausing playback, start recording, fast forward, rewind, etc.


Otherwise, device 210 continues to monitor the input from the sensors S1 to S7. In one embodiment, the sensors include orientation sensors that track movement of various parts of the body, and the tracked movement can be used as input to the mobile device. The device 210 can be a wearable computer or a more traditional handheld mobile device that is carried in a purse, pocket, or belt holder/holster.
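
By way of example, and not limitation, the following Python sketch outlines the monitor/detect/act loop of blocks 220-240 of FIG. 2. The pattern names, the classification heuristic, and the action names are hypothetical assumptions for illustration, not part of the disclosure.

```python
# Minimal sketch of the FIG. 2 processing loop (blocks 220-240); pattern
# names, thresholds, and actions are illustrative assumptions.

RESPONSE_CONDITIONS = {
    "double_tap": "initiate_phone_call",
    "wrist_flick": "hang_up_phone_call",
}

def classify_movement(samples):
    """Toy classifier: name a pattern from total orientation change."""
    total = sum(abs(b - a) for a, b in zip(samples, samples[1:]))
    if total > 2.0:
        return "wrist_flick"
    if total > 0.5:
        return "double_tap"
    return None  # no recognizable movement; keep monitoring (220)

def process_input(samples):
    """Detect a response condition (230) and act automatically (240)."""
    action = RESPONSE_CONDITIONS.get(classify_movement(samples))
    if action:
        print(f"automatic action on behalf of user: {action}")

process_input([0.0, 0.9, 0.1, 1.2, 0.0])  # e.g., a wrist flick
```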


In at least one embodiment, one or more body-movement sensors are shared between multiple mobile devices. Advantageously, this allows each mobile device to respond appropriately based on the user's movement. For example, assume that a user has both a cell phone and a portable video player. When body movements indicate the user is answering the cell phone, the portable video player can be paused. Once the user hangs up the cell phone, the portable video player can resume playback. One will appreciate that this functionality can be used in other scenarios and with different device pairs, such as a portable media player and a portable game player, a cell phone and a portable game player, etc.



FIG. 3 illustrates this scenario. Similar to FIG. 2, user 300 has sensors S1-S7. However, these sensors are shared via communication framework 310 with mobile devices 320 and 330. The communication framework can be wired or wireless, such as Bluetooth, wireless USB, or a wired USB bus. Mobile devices 320 and 330 share at least some of the sensors and can, as discussed supra, initiate different actions on each of the two devices. One will appreciate that although only two devices are shown and described for the sake of simplicity, any number of mobile devices can share one or more body sensors.
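
By way of non-limiting illustration, the following sketch shows how a communication framework such as 310 might fan a shared sensor reading out to multiple subscribed devices, each reacting in its own way. The class names and the "hand_to_ear" reading are assumptions.

```python
# Sketch of sensor sharing over a communication framework (FIG. 3);
# device behaviors and the reading vocabulary are assumptions.
class SensorHub:
    """Fans sensor readings out to every subscribed mobile device."""
    def __init__(self):
        self.devices = []

    def subscribe(self, device):
        self.devices.append(device)

    def publish(self, sensor_id, reading):
        for device in self.devices:
            device.on_sensor(sensor_id, reading)

class CellPhone:
    def on_sensor(self, sensor_id, reading):
        if reading == "hand_to_ear":
            print("cell phone: answering call")

class VideoPlayer:
    def on_sensor(self, sensor_id, reading):
        # The same shared movement pauses playback, so the two
        # devices cooperate without talking to each other directly.
        if reading == "hand_to_ear":
            print("video player: pausing playback")

hub = SensorHub()
hub.subscribe(CellPhone())
hub.subscribe(VideoPlayer())
hub.publish("S3", "hand_to_ear")
```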


Sensors that sense body position or other user information can advantageously be placed in or attached to jewelry, such as rings, watches, or necklaces; fashion accessories, such as belts, headbands, or hats; or garments, such as shoes. Accordingly, there may be no need to look for or hold a special device when body-based movement is used for input.



FIGS. 4A-4C illustrate block diagrams of various sensors embedded in various jewelry and fashion accessories. In particular, FIG. 4A illustrates a sensor S3 placed into a ring 400. FIG. 4B illustrates a sensor S6 embedded in a watch. The orientation of a finger can be sensed via the sensors in these two pieces of jewelry so that the direction a user is pointing can be determined by the mobile device. FIG. 4C illustrates sensor S6 embedded in a belt buckle of a belt 440. These sensors can be embedded upon manufacture of the jewelry or fashion accessories or can be attached subsequently, such as by the user or a retailer (e.g., a jewelry retailer, clothing retailer, or mobile device retailer).


One will also appreciate that one or more sensors can be embedded or attached to traditional input/output devices for mobile devices that are worn by the user. For example, one or more sensors can be embedded into a Bluetooth or other wired/wireless headset or wireless/wired headphones.


As mobile devices become connected to other autonomous computing devices and networks, certain gestures by the user can also initiate a series of actions and/or communications that cause an external response. For instance, a user might drive up to his or her garage door and, through a body movement gesture captured by body sensors communicatively coupled to the mobile device, initiate a sequence of operations that opens the garage door. As a second example, a user might operate remote-controlled devices (e.g., a television, DVR, DVD player, VCR, stereo, set-top box, etc.) through a body movement gesture captured by the body sensors, initiating a sequence of operations on a nearby universal remote that operates the remote-controlled device.


This aspect is illustrated in FIG. 5. FIG. 5 illustrates sensors S1-S7 on or proximate to user 500. These sensors are communicatively attached to mobile device 510. Mobile device 510 is then communicatively coupled to at least one of an appliance 520 (e.g., a garage door, smart refrigerator, smart microwave, oven, air conditioner, etc.) or a non-portable computing device 530, such as a desktop computer or flat-panel display. Upon receiving a specific body gesture, the mobile device 510 can initiate communication with and control appliance 520 or non-portable computing device 530. This communication can include the command to perform and may also include identifying information for the mobile device 510.


For example, the mobile device can interpret pointing at an appliance or non-portable computing device as a desire to control the appliance or device. Combining the orientation of a finger/hand with a map (e.g., a 3D map) of nearby objects, one can activate/operate the appliance or computing device with a natural pointing gesture. For example, one can point to an air conditioner thermostat at a distance to turn the air conditioning on/off in an indoor environment even without a specific remote controller at hand.
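
As a minimal sketch of this idea, simplified to two dimensions, the following code resolves a sensed finger direction against a stored map of nearby objects. The object names, coordinates, and angular tolerance are assumptions for illustration only.

```python
import math

# Positions of controllable objects from a (simplified, 2D) map of the
# user's surroundings; names and coordinates are assumptions.
OBJECT_MAP = {
    "thermostat": (4.0, 2.0),
    "garage_door": (-3.0, 5.0),
}

def resolve_pointing(origin, direction, tolerance_deg=10.0):
    """Return the mapped object whose bearing from the user best matches
    the sensed finger direction, within an angular tolerance."""
    pointing = math.atan2(direction[1], direction[0])
    best, best_diff = None, tolerance_deg
    for name, (x, y) in OBJECT_MAP.items():
        bearing = math.atan2(y - origin[1], x - origin[0])
        # Normalize the angular difference into [-pi, pi] before comparing.
        diff = abs((bearing - pointing + math.pi) % (2 * math.pi) - math.pi)
        diff = math.degrees(diff)
        if diff < best_diff:
            best, best_diff = name, diff
    return best

target = resolve_pointing(origin=(0.0, 0.0), direction=(0.8, 0.4))
if target:
    print(f"mobile device sends on/off command to {target}")  # thermostat
```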


The communicative coupling of the sensors to the mobile device can be different than the communicative coupling of the mobile device 510 to either the appliance or the non-portable computing device. For example, the sensors can be connected to the mobile device via Bluetooth and the mobile device to the appliance via wireless Ethernet.


Finger or arm movement sensors communicatively coupled to a mobile device are conducive to recognizing unique hand gestures. In this manner, multiple sensors for different body locations can together form a single input to a mobile device. As a simple example, doing “jumping jacks” requires movement of both the arms and legs in a certain harmonic manner, which could be detected as a unique gesture input to the mobile device by positioning sensors on both the arms and legs. In this fashion, unique gestures may be defined by a user of the mobile device for performing common actions with the mobile device, such as “Call my friend Lee.”
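
The following sketch illustrates one way such multi-sensor fusion might work, checking that arm and leg readings oscillate together before declaring a "jumping jacks" gesture. The correlation heuristic, sample format, and threshold are assumptions.

```python
# Sketch of combining arm and leg sensor streams into one "jumping
# jacks" gesture; the agreement heuristic is an illustrative assumption.
def is_jumping_jacks(arm_samples, leg_samples, min_strength=0.8):
    """Arms and legs must move harmonically: count how often the two
    signals rise and fall in the same direction between samples."""
    agree = sum(
        1 for (a0, a1), (l0, l1) in zip(
            zip(arm_samples, arm_samples[1:]),
            zip(leg_samples, leg_samples[1:]))
        if (a1 - a0) * (l1 - l0) > 0)
    return agree / (len(arm_samples) - 1) >= min_strength

arms = [0, 1, 2, 1, 0, 1, 2, 1, 0]
legs = [0, 2, 4, 2, 0, 2, 4, 2, 0]
if is_jumping_jacks(arms, legs):
    print("unique gesture recognized -> call my friend Lee")
```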



FIGS. 6A-6C illustrate exemplary finger/hand gestures that can be used to initiate a process. The hand gestures form a gesture-based input language. These hand gestures may need to occur within a predetermined period of time, such as 3 seconds. Depending on the number of sensors available, the same gesture can have a different meaning for different hands, different fingers, or different finger combinations.
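
A minimal sketch of enforcing such a predetermined time window on a multi-part gesture follows; the gesture names and the (timestamp, gesture) event format are assumptions.

```python
# Sketch of matching a gesture sequence within a time window (e.g., 3 s);
# the sequence and event format are illustrative assumptions.
GESTURE_SEQUENCE = ["pinch", "swipe_right", "pinch"]
WINDOW_SECONDS = 3.0

def matches_within_window(events):
    """events: list of (timestamp, gesture_name) in arrival order."""
    names = [name for _, name in events]
    if names != GESTURE_SEQUENCE:
        return False
    return events[-1][0] - events[0][0] <= WINDOW_SECONDS

print(matches_within_window(
    [(10.0, "pinch"), (11.2, "swipe_right"), (12.5, "pinch")]))  # True
print(matches_within_window(
    [(10.0, "pinch"), (12.0, "swipe_right"), (14.0, "pinch")]))  # False
```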


One will appreciate however, that other body gestures can be used as well. For example, actions can be initiated based on a number of taps of fingers, feet, or hands. Some body gestures may involve the use of two different body areas, such as the “jumping jacks” movement above.


In addition, the gesture-based input language can be customized to the user's needs. For example, a person lacking one or more fingers may choose to use leg gestures instead of hand or finger gestures for gesture-based input. As a second example, a handicapped person may use a standardized sign language as a gesture-based input language.


One or more ring sensors for sensing finger movement are particularly advantageous because many people already wear rings, and such sensors can be used to determine, for example, which keys of a virtual “air” keyboard a user is pressing. Key input of a mobile device is thus advantageously complemented or replaced by the supplemental mobile device input sensors of the invention.



FIG. 7 illustrates input via such virtual input devices according to one embodiment. User 700 has sensors S1-S7 and two output display devices O1 and O2. Output device O1, such as a wearable display, or output device O2, such as a mini wearable projector, can be used to display a virtual input device, such as a keypad or mouse/trackball, for mobile device 710. The virtual input device is displayed at 720. Movement is tracked by sensors at 730, and input corresponding to the tracked body movement is determined at 740. The movement can correspond to the pressing of a virtual keypad or the movement of the virtual mouse/trackball.



FIG. 8 illustrates an exemplary virtual “air” keyboard 800 that is displayed to the user of a cell phone via an output display device, such as O1 or O2 of FIG. 7. The illustrated keypad displayed via the output display device includes a standard telephone keypad 802 and four additional keys 804, 806, 808, 810. Additional keys 804, 806, 808 are user-defined. Key 810 allows body movement input to be temporarily turned off, such as when the user is dancing, driving, or in a bathroom.
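
By way of illustration, the following sketch hit-tests a tracked fingertip position against a layout like that of FIG. 8. The coordinate system, key size, and key labels are assumptions, not the disclosed layout.

```python
# Sketch of mapping a tracked fingertip onto the virtual keypad of
# FIG. 8; key size, coordinates, and labels are assumptions.
KEY_SIZE = 1.0  # virtual units per key cell

# Rows of the displayed keypad 802 plus the extra keys 804-810.
LAYOUT = [
    ["1", "2", "3"],
    ["4", "5", "6"],
    ["7", "8", "9"],
    ["*", "0", "#"],
    ["user1", "user2", "user3"],  # user-defined keys 804, 806, 808
    ["toggle_input"],             # key 810: suspend body input
]

def key_at(x, y):
    """Return the key under fingertip coordinates (x, y), or None."""
    row, col = int(y // KEY_SIZE), int(x // KEY_SIZE)
    if 0 <= row < len(LAYOUT) and 0 <= col < len(LAYOUT[row]):
        return LAYOUT[row][col]
    return None

print(key_at(1.4, 0.2))  # "2" -- middle key of the top row
print(key_at(0.5, 5.5))  # "toggle_input" -- key 810
```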


Although FIG. 8 illustrates additional keys being added to a traditional keypad layout for a telephone, one will appreciate that various other layouts, not including the traditional keys or key locations, are possible in other embodiments. For example, a virtual keyboard with the letters in alphabetical order rather than a QWERTY keyboard can be created. As a second example, virtual keypads can be constructed that are entirely composed of virtual keys associated with user-defined functionality.


One will also appreciate that although a virtual input device is described as being displayed to the user on an output display device, in other embodiments, the virtual input device can be a paper printout of an input device, provided the layout is previously known to the mobile device or can be automatically determined (e.g., via picture recognition or via a barcode or other machine-readable tag). Accordingly, virtual keypads of any size, shape, or button configuration can be created.


Turning briefly to FIGS. 9-11, methodologies that may be implemented in accordance with the present invention are illustrated. While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the present invention is not limited by the order of the blocks, as some blocks may, in accordance with the present invention, occur in different orders and/or concurrently with other blocks from that shown and described herein. Moreover, not all illustrated blocks may be required to implement the methodologies in accordance with the present invention.



FIG. 9 is an exemplary non-limiting flow diagram showing a method 900 for use in connection with a mobile device in accordance with the invention. At 910, orientation input data is received relating to an orientation of body part(s) of a user from input sensor(s), including orientation sensor(s), attachable to or near the body part(s) of the user. At 920, the movement of the input sensor(s) is tracked via the orientation sensor(s). At 930, pattern(s) in the orientation input data generated by the at least one input sensor are recognized. At 940, one or more processes are automatically initiated by the mobile device in response to the pattern(s) recognized in the input data.
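
By way of example, and not limitation, method 900 can be sketched as a simple pipeline; each function body below is a hypothetical stand-in for the corresponding block of FIG. 9.

```python
# End-to-end sketch of method 900; the data format and the toy
# recognizer are assumptions standing in for blocks 910-940.
def receive_orientation_input():           # 910
    return [{"sensor": "S3", "pitch": 0.1, "roll": 0.8}]

def track_movement(samples):               # 920
    return [(s["sensor"], s["pitch"], s["roll"]) for s in samples]

def recognize_patterns(track):             # 930
    return ["flick"] if any(roll > 0.5 for _, _, roll in track) else []

def initiate_processes(patterns):          # 940
    for pattern in patterns:
        print(f"mobile device initiates process for pattern {pattern!r}")

initiate_processes(
    recognize_patterns(track_movement(receive_orientation_input())))
```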



FIG. 10 is an exemplary non-limiting flow diagram of a method 1000 for using a virtual input device. At 1010, the virtual input device is displayed to the user. As discussed supra, this display can be via an output display device communicatively coupled to the mobile device, or the virtual input device may be a printed piece of paper. At 1020, movement of input sensors is tracked via one or more orientation sensors with regard to the displayed virtual input device. At 1030, input data from the input sensors and information associated with the displayed virtual input device are used to determine the user input. Although not shown, this method can be performed repeatedly to receive multiple inputs (e.g., multiple key inputs) from the virtual input device.



FIG. 11 is an exemplary non-limiting flow diagram of a method 1100 of automatically turning on/off body-based movement as input depending on the user's environment. Some user environments, based on the user location (e.g., bathroom) or current activity (e.g., driving, dancing) are not conducive to body-based movement being used as input. One will appreciate that body-based movement can also be turned on/off manually by the user.


At 1110, the user's current environment is determined based on one or more sensors, such as global positioning sensors or sensors that read location-specific identifiers (e.g., RFID tags or wireless network gateway MAC addresses, etc.). This can be performed periodically, such as once every minute, or be determined based on the amount or nature of body movements. At 1120, it is determined if the current user environment is appropriate for body-based movement. At 1130, depending on the current state of body-based movement input and the determination made at 1120, body-based movement is turned on/off as appropriate.
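
A minimal sketch of method 1100 follows; the environment classifier is a stub, and the set of "inappropriate" environments is an assumption rather than a fixed list.

```python
# Sketch of method 1100 (blocks 1110-1130); environment names and the
# lookup mechanism are illustrative assumptions.
INAPPROPRIATE = {"bathroom", "driving", "dancing"}

def determine_environment(sensor_readings):          # 1110
    # e.g., a GPS fix, an RFID tag, or a wireless gateway MAC lookup
    return sensor_readings.get("location_tag", "unknown")

def update_body_input(state, sensor_readings):       # 1120-1130
    appropriate = determine_environment(sensor_readings) not in INAPPROPRIATE
    if appropriate != state["enabled"]:
        state["enabled"] = appropriate
        print("body-based movement input turned",
              "on" if appropriate else "off")
    return state

state = {"enabled": True}
state = update_body_input(state, {"location_tag": "bathroom"})  # -> off
state = update_body_input(state, {"location_tag": "office"})    # -> on
```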



FIG. 12 is a block diagram of an input processing component 1210 according to one embodiment. The sensor tracking component 1214 is communicatively coupled to at least one input sensor and tracks the movement of the input sensor via an orientation sensor. The pattern recognition component 1212 then recognizes at least one pattern in input data generated by at least one input sensor. In response to a pattern being recognized in the input data, one or more processes are automatically initiated by the mobile device via the input indication component 1216. A user-defined function component allows a user to store and manage body gestures that will be recognized by the pattern recognition component and to associate those gestures with processes to be automatically performed when the pattern is recognized. Optional virtual input device component 1220 facilitates display of virtual devices and stores and manages corresponding processes to perform when certain body gestures are received. For example, the virtual input device component can store the layout of one or more virtual keypads.
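
As a structural sketch, the components of FIG. 12 can be modeled as cooperating classes; the class and method names mirror the figure's reference numerals, but the bodies are hypothetical assumptions.

```python
# Structural sketch of input processing component 1210; behaviors are
# illustrative stand-ins for the components of FIG. 12.
class SensorTrackingComponent:             # 1214
    def track(self, sensor_id, reading):
        return (sensor_id, reading)

class PatternRecognitionComponent:         # 1212
    def __init__(self, user_defined):
        self.patterns = user_defined       # gesture -> process name

    def recognize(self, tracked):
        _, reading = tracked
        return self.patterns.get(reading)

class InputIndicationComponent:            # 1216
    def initiate(self, process_name):
        if process_name:
            print(f"initiating process: {process_name}")

class InputProcessingComponent:            # 1210
    def __init__(self, user_defined):
        self.tracker = SensorTrackingComponent()
        self.recognizer = PatternRecognitionComponent(user_defined)
        self.indicator = InputIndicationComponent()

    def handle(self, sensor_id, reading):
        tracked = self.tracker.track(sensor_id, reading)
        self.indicator.initiate(self.recognizer.recognize(tracked))

# A user-defined gesture mapped to a process, per the description above.
ipc = InputProcessingComponent({"double_tap": "initiate_phone_call"})
ipc.handle("S3", "double_tap")
```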


The present invention has been described herein by way of examples. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, for the avoidance of doubt, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.


Various implementations of the invention described herein may have aspects that are wholly in hardware, partly in hardware and partly in software, as well as in software. As used herein, the terms “component,” “system” and the like are likewise intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on computer and the computer can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.


Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.


Furthermore, the invention may be described in the general context of computer-executable instructions, such as program modules, executed by one or more components. Generally, program modules include routines, programs, objects, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments. Furthermore, as will be appreciated various portions of the disclosed systems above and methods below may include or consist of artificial intelligence or knowledge or rule based components, sub-components, processes, means, methodologies, or mechanisms (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines, classifiers . . . ). Such components, inter alia, can automate certain mechanisms or processes performed thereby to make portions of the systems and methods more adaptive as well as efficient and intelligent.


Additionally, the disclosed subject matter may be implemented at least partially as a system, method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer or processor based device to implement aspects detailed herein. The terms “article of manufacture,” “computer program product” or similar terms, where used herein, are intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick). Additionally, it is known that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN).


The aforementioned systems have been described with respect to interaction between several components. It can be appreciated that such systems and components can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components, e.g., according to a hierarchical arrangement. Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described herein may also interact with one or more other components not specifically described herein but generally known by those of skill in the art.

Claims
  • 1. A mobile device system comprising: at least one input sensor attachable to or near at least one body part of the user, including at least one orientation sensor; and an input processing component of a first mobile device communicatively coupled to the at least one input sensor that tracks the movement of the at least one input sensor via the at least one orientation sensor, wherein the input processing component recognizes at least one pattern in input data generated by the at least one input sensor and communicated to the input processing component, whereby in response to the at least one pattern recognized in the input data, one or more processes are automatically initiated by the first mobile device.
  • 2. The mobile device system of claim 1, wherein the at least one input sensor is in or proximate to a piece of jewelry, and the piece of jewelry is proximate to at least one body part of the user.
  • 3. The mobile device system of claim 1, further comprising an input processing component of a second mobile device communicatively coupled to the at least one input sensor that tracks the movement of the at least one input sensor via the at least one orientation sensor, wherein the input processing component recognizes at least one pattern in input data generated by the at least one input sensor and communicated to the input processing component, whereby in response to the at least one pattern recognized in the input data, one or more processes are automatically initiated by the second mobile device.
  • 4. The mobile device system of claim 1, wherein at least one input sensor attachable to or near at least one body part of the user is communicatively coupled to the first mobile device wirelessly.
  • 5. The mobile device system of claim 1, wherein the at least one input sensor is in or proximate to a fashion accessory, and the fashion accessory is proximate to at least one body part of the user.
  • 6. The mobile device system of claim 1, wherein the input processing component recognizes at least one user-defined pattern in input data generated by the at least one input sensor.
  • 7. The mobile device system of claim 1, wherein the one or more processes automatically initiated by the first mobile device comprises communicating with a remote computing device to automatically initiate one or more processes on the remote computing device.
  • 8. A method for use in connection with a mobile device, comprising: displaying a virtual input device to a user; receiving orientation input data relating to an orientation of at least one body part of a user from at least one input sensor, including at least one orientation sensor attachable proximate to at least one body part of the user; tracking the movement of the at least one input sensor via the at least one orientation sensor in relation to the displayed virtual input device; and recognizing input to the displayed virtual input device based on the tracked movement.
  • 9. The method of claim 8, further comprising automatically initiating one or more processes by the mobile device in response to the recognized input.
  • 10. The method of claim 8, wherein the displaying of a virtual input device comprises displaying at least one of a virtual mouse or a virtual trackball.
  • 11. The method of claim 8, wherein the displaying of a virtual input device comprises displaying the virtual input device via a wearable output display.
  • 12. The method of claim 8, wherein the displaying of a virtual input device comprises displaying a virtual keypad with at least one user-defined key.
  • 13. The method of claim 8, wherein the receiving orientation input data relating to an orientation of at least one body part of a user from at least one input sensor comprises receiving orientation input data from at least one input sensor in a piece of jewelry.
  • 14. The method of claim 8, wherein the tracking of the movement of the at least one input sensor via the at least one orientation sensor in relation to the displayed virtual input device comprises tracking the movement of one or more fingers of a user.
  • 15. The method of claim 8, wherein the displaying of a virtual input device comprises displaying the virtual input device on an external output device communicatively coupled to the mobile device.
  • 16. A method of automatically switching on or off body-based movement interaction with a mobile device, comprising: determining a current environment for a user based on one or more sensors; determining whether the current environment is appropriate for body-based movement; when it is determined that the current environment is not appropriate for body-based movement, turning body-based movement interaction off; and when it is determined that the current environment is appropriate for body-based movement, turning body-based movement interaction on.
  • 17. The method of claim 16, wherein the determining of a current environment for a user based on one or more sensors comprises determining a current environment for user based on at least one sensor attached to a body of the user.
  • 18. The method of claim 16, wherein the determining of a current environment for a user based on one or more sensors comprises determining a current location for the user based on at least one sensor attached to a body of the user.
  • 19. The method of claim 16, wherein the determining of a current environment for a user based on one or more sensors comprises determining a current activity for the user based on at least one sensor attached to a body of the user.
  • 20. The method of claim 16, wherein the determining of a current environment for a user based on one or more sensors comprises determining a current activity for the user based on at least one wireless sensor proximate to at least one of a piece of jewelry or fashion accessory.
CROSS-REFERENCE TO RELATED APPLICATIONS

This non-provisional application claims benefit under 35 U.S.C. §119(e) of U.S. provisional Application No. 60/910,109, filed Apr. 4, 2007.

Provisional Applications (1)
Number Date Country
60910109 Apr 2007 US