Embodiments of the present invention are directed to mobile devices and, more particularly, to deriving contexts from nearby mobile devices to change a current state of other mobile devices.
In many cases, actions performed on mobile devices (such as setting operational modes) require explicit user interaction, although the action to be performed could in principle be deduced from the device's context.
For example, nearly everybody attending a conference, cultural event, theater performance, or the like manually sets their phone to “mute”. This needs to be done explicitly, because the phone has no way of knowing by itself that it would be appropriate not to ring. Inevitably, several phones will ring and disrupt the event despite a prior announcement or signs informing people to mute their phones.
Deriving the current context and appropriate actions is a difficult challenge for mobile devices, as every “kind” of context exhibits different properties that cannot be uniformly or cheaply measured. In many cases, the kinds of contexts a device is expected to react to may not even be known at design time, but may instead be defined by later software additions (i.e., apps).
One approach used to automatically set device modes based on the device's environment relies on complex sensors and sophisticated data processing to accurately deduce the current context from sensor data. For example, to determine a suitable recording mode for a digital camera, complex scene analysis algorithms are used to “guess” the nature of the scene. However, this requires that the device have the right set of sensors and sufficient processing capability to deduce the specific context and automatically invoke appropriate actions.
In the case of phone muting, it has been suggested to use GPS or other location data to determine when a phone is in an area where it should be muted. However, such solutions may fall short, since it may not always be necessary to mute in a given location.
The foregoing and a better understanding of the present invention may become apparent from the following detailed description of arrangements and example embodiments and the claims when read in connection with the accompanying drawings, all forming a part of the disclosure of this invention. While the foregoing and following written and illustrated disclosure focuses on disclosing arrangements and example embodiments of the invention, it should be clearly understood that the same is by way of illustration and example only and the invention is not limited thereto.
Described is a scheme to record the context state decisions of other users, based on the state of the mobile devices in the vicinity, and to determine whether it is reasonable to have one's own device make or suggest a similar state change. By broadcasting state changes or identifiable actions to all other devices in the vicinity using short-range communications, devices can anonymously notify others in their vicinity of actions they or their users have taken. By collecting and analyzing these notifications, devices can then build their own understanding of the current context and autonomously decide on appropriate actions to take for themselves.
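By way of non-limiting illustration, the collect-and-decide portion of this scheme might be sketched as follows (in Python; the class name, time window, and threshold value are assumptions introduced here purely for illustration and are not tied to any particular device platform):

    # Purely illustrative sketch; names and values are assumptions.
    import time

    class ContextDeriver:
        def __init__(self, window_seconds=120, threshold=5):
            # Notifications received from nearby devices within a recent
            # time window are used to derive the current context.
            self.window_seconds = window_seconds
            self.threshold = threshold
            self.notifications = []   # (timestamp, sender_token, action)

        def record(self, sender_token, action):
            # Called whenever a short-range notification is received.
            self.notifications.append((time.time(), sender_token, action))

        def suggested_action(self):
            # Count distinct nearby devices reporting each action within
            # the window; suggest the most common action if it meets the
            # threshold, otherwise suggest nothing.
            now = time.time()
            recent = {(sender, action)
                      for (ts, sender, action) in self.notifications
                      if now - ts <= self.window_seconds}
            counts = {}
            for _sender, action in recent:
                counts[action] = counts.get(action, 0) + 1
            if not counts:
                return None
            action, count = max(counts.items(), key=lambda item: item[1])
            return action if count >= self.threshold else None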
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The mobile device 100 may further include one or more memories or sets of registers 112, which may include non-volatile memory, such as flash memory, and other types of memory. The memory or registers 112 may include one or more groups of settings for the device 114, including default settings, user-set settings established by a user of the mobile device, and enterprise-set settings established by an enterprise, such as an employer, who is responsible for IT (information technology) support. The memory 112 may further include one or more applications 116, including applications that support or control operations to send or receive state change or current mode information according to embodiments. The memory 112 may further include user data 118, including data that may affect limitations of functionality of the mobile device and interpretations of the circumstances of use of the mobile device. For example, the user data 118 may include calendar data, contact data, address book data, pictures and video files, etc.
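The groups of settings 114 might, purely as an illustration, be represented along the following lines (the field names and the precedence order shown are assumptions introduced here, not requirements of any embodiment):

    # Illustrative only; field names and precedence are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class DeviceSettings:
        # Settings 114 may include factory defaults, settings established
        # by the user, and settings established by an enterprise.
        default_settings: dict = field(default_factory=dict)
        user_settings: dict = field(default_factory=dict)
        enterprise_settings: dict = field(default_factory=dict)

        def effective(self, key):
            # One possible precedence: enterprise over user over default.
            for group in (self.enterprise_settings,
                          self.user_settings,
                          self.default_settings):
                if key in group:
                    return group[key]
            return None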
The mobile device 100 may include various elements that are related to the functions of the system. For example, the mobile device may include a display 120 and display circuitry 121; a microphone and speaker 122 and audio circuitry 123, including audible signaling (e.g., ringers); a camera 124 and camera circuitry 125; and other functional elements such as a table of state changes or modes of nearby devices 126, according to one embodiment. The mobile device may further include one or more processors 128 to execute instructions and to control the various functional modules of the device.
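The table of state changes or modes of nearby devices 126 might, for example, hold one entry per received notification; the following sketch is illustrative only, and the record layout and names are assumptions:

    # Illustrative record layout for the table 126; names are assumptions.
    from dataclasses import dataclass

    @dataclass
    class NearbyStateEntry:
        received_at: float    # local timestamp when the notification arrived
        sender_token: str     # anonymous token, not a persistent device ID
        new_mode: str         # e.g. "mute", "landscape", "flash_off"

    nearby_state_table = []   # list of NearbyStateEntry records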
Referring now to
According to embodiments, the mobile device 200 may record decisions that other users, via devices 202, 204, 206, 208, and 210 in the vicinity, have taken, and use this information to deduce an appropriate context and action that may also be adopted by device 200. By broadcasting state changes 212 or identifiable actions to all other devices in the vicinity using short-range communications, devices can anonymously notify others in their vicinity of actions they or their users have taken (e.g., mute phone), possibly in response to a specific context (e.g., a conference presentation about to start and phones should be muted). By collecting and analyzing these notifications, devices can then build their own understanding of the current context and autonomously decide on appropriate actions to take for themselves.
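The broadcast side of such a notification might be sketched as follows; the broadcast() argument stands in for whatever short-range mechanism is used (for example, a Bluetooth LE advertisement), and the payload layout and field names are assumptions introduced only for illustration:

    # Illustrative notification payload and broadcast; names are assumptions.
    import json, time

    def notify_nearby(broadcast, new_mode, context_hint=None):
        payload = {
            "event": "mode_change",   # user action performed on the device
            "mode": new_mode,          # e.g. "mute"
            "hint": context_hint,      # optional, e.g. "presentation_start"
            "ts": int(time.time()),
        }
        # Intentionally contains no persistent device or user identifier,
        # so that the notification remains anonymous.
        broadcast(json.dumps(payload).encode("utf-8"))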
Usable information includes, for example, user actions performed on mobile devices (mode/state changes) or events detected by infrastructure components (e.g., device log-on, device shutdown, etc.).
Referring to
Referring now to
Referring now to
Referring now to
Likewise, camera modes of nearby cameras may be monitored, as shown in the example in
Referring now to
This approach has the distinct advantage of being uniformly applicable to all kinds of contexts, as their detection is done purely by analyzing notifications received via a communication link and does not depend on the presence of a specific sensor. The definition of contexts and notifications can be done purely in software, and can be changed over the lifetime of the device (e.g., based on installed applications, etc.). Such an approach also may require far less computational complexity than the analysis of complex and real-time sensor data, thus saving energy and extending battery life.
Also, this method uses the distributed intelligence of other users instead of relying on hardcoded context detection algorithms. That is, it could be considered an application of “crowd sourcing”, as the actual “detection events” used for deriving the context are collected from other devices/users; an important distinction from existing applications, though, is that relevant data is only collected in the device's vicinity. Generally speaking, more data points (more generated events and notifications) may improve the quality and reliability of the context derivation process. Given that the confidence in the derived context is high enough, an appropriate response might be to simply take the exact same action indicated by the received notifications (i.e., in the example, if many nearby phones are going mute, simply mute this phone as well).
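For the phone-muting example, the decision might be sketched as follows, with confidence expressed as the fraction of distinct nearby phones that have recently reported going mute; the time window, fraction, and names are assumptions introduced for illustration only:

    # Illustrative only; window, fraction, and names are assumptions.
    import time

    def should_mute(notifications, nearby_count, window_s=300, min_fraction=0.5):
        # notifications: list of (timestamp, sender_token, mode) entries
        now = time.time()
        muted = {sender for (ts, sender, mode) in notifications
                 if mode == "mute" and now - ts <= window_s}
        if nearby_count == 0:
            return False
        # More data points may improve confidence, as noted above.
        return len(muted) / nearby_count >= min_fraction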
In one example, at least one machine readable storage medium comprises a set of instructions which, when executed by a processor, cause a first mobile device to receive mode information from a plurality of other mobile devices, store the mode information in a memory, and determine from the mode information if the first mobile device should change mode.
In another example the mode information comprises a change of mode.
In another example, the first mobile device comprises a mobile phone and wherein the mode information comprises ones of the plurality of other devices changing to a mute mode.
In another example, the first mobile device comprises a mobile camera and wherein the mode information comprises ones of the plurality of other devices changing to a particular photography mode.
In another example, the photography mode comprises landscape mode or portrait mode.
In another example, the photography mode comprises flash or no flash.
In another example, the first mobile device is associated with a vehicle and the mode information comprises sensed deceleration.
In another example, a method for changing a mode of a first mobile device, comprises: receiving mode information from a plurality of other mobile devices, storing the mode information, analyzing the mode information to determine if a threshold number of the plurality of other mobile devices have entered a same mode within a threshold time period, and determining from the analysis if the first mobile device should change to the same mode.
In another example the first mobile device comprises a mobile phone and wherein the mode information comprises ones of the plurality of other mobile devices changing to a mute mode.
In another example, the first mobile device comprises a mobile camera and wherein the mode information comprises ones of the plurality of other mobile devices changing to a particular photography mode.
In another example the photography mode comprises landscape mode or portrait mode.
In another example, the photography mode comprises flash or no flash.
In another example, the first mobile device is associated with a vehicle and wherein the mode information comprises sensed deceleration.
In another example, a mobile device comprises a plurality of mode settings, a receiver to receive mode information from other mobile devices, a memory to store the mode information, and a processor to analyze the mode information to change the mode of the mobile device based on the mode information from the other mobile devices.
In another example, the mobile device comprises a mobile phone and the mode information comprises a plurality of the other mobile devices in mute mode.
In another example, the mobile device comprises a mobile camera and wherein the mode information comprises a plurality of the other mobile devices changing to a particular photography mode.
In another example, the photography mode comprises landscape mode or portrait mode.
In another example the photography mode comprises flash or no flash.
In another example, the mobile device comprises an in-vehicle infotainment (IVI) system and wherein the mode information comprises sensed deceleration.
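As a purely illustrative sketch of the threshold-based analysis described in the method example above, applied to the IVI case, a mode change could be suggested when a threshold number of nearby devices report sensed deceleration within a threshold time period; the function name, threshold, and time window below are assumptions:

    # Illustrative only; threshold, window, and names are assumptions.
    import time

    def deceleration_context_detected(notifications, threshold_count=3, window_s=10):
        # notifications: list of (timestamp, sender_token, mode) entries
        now = time.time()
        decelerating = {sender for (ts, sender, mode) in notifications
                        if mode == "deceleration" and now - ts <= window_s}
        return len(decelerating) >= threshold_count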
The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.
These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.