Currently, the typical user interface for mobile devices, and for computers in general, is centered on apps (applications), more specifically on app icons and functional buttons. In simplified terms, the user interface is “clicking app icons and buttons”. For example, when a user wants to listen to music, the user clicks an audio app and then clicks some buttons to select a playlist and play the music. Such a user interface, “clicking app icons and buttons”, has been the dominant computing practice since the introduction of the Macintosh 128K in 1984 with its graphical user interface and mouse.
Although many new user interface techniques have been introduced since 1984, such as the major breakthrough of touch screens and touch gesture recognition with the Apple iPhone in 2007, the fundamentals of “clicking app icons and buttons” have not changed. There are now taps, swipes, pinches, and the like, but these new interface techniques are fundamentally easier ways of “clicking app icons and buttons”. Furthermore, the typical user interface for electronic devices in general consists of a power switch and functional buttons; in simplified terms, the user interface is “clicking power switches and buttons”. Therefore, the user interface fundamentals for mobile devices and for electronic devices in general are similar in that, in simplified terms, both amount to “clicking activation and functional buttons”.
The recent movement toward smart electronic devices is adding computing capabilities and network connectivity to almost all electronic devices, thus extending functionality beyond the devices themselves toward the collective capabilities of all devices within the network. For example, smart phones are now often connected to home appliances, home automation systems, automotive vehicle electronics, or any other type of networked electronic device. This movement has added ever more functionality, but with more complexity in the user interface. The systems and methods used to connect to and control other electronic devices differ in most cases and require more icons, switches, and buttons; they thus add complexity and degrade the value of interconnectivity and control.
The current user interface can be described as a “functional” user interface: the user constantly changes the functionality of a mobile device or other electronic devices to suit the user's needs by “clicking activation and functional buttons”. However, the complexity of functionality within a single electronic device, let alone across networked electronic devices taken together, reveals the limitation of a merely “functional” user interface. Speech and gesture recognition are gaining technological maturity and popularity for their ease of use, yet once again they merely augment “clicking switches and buttons”.
Historically, more functionality has brought more complexity, and the usual solution has been a new user interface. Now is the time for a new user interface beyond “clicking app icons and buttons” or “clicking power switches and buttons”. This invention reengineers systems and methods for the user interface so that the mobile experience, and the electronic device experience in general, is better suited to the recent complexity in functionality.
The complexity of functionality in mobile devices, and in modern electronic devices in general, awaits a new user interface. Resolving complexity requires simplification, and simplification involves common denominators. This invention adopts context, more specifically user context, as the common denominator to simplify the complexity of functionality in electronic devices.
Context, by definition, is the surroundings, circumstances, environment, background, or settings that determine, specify, or clarify the meaning of an event or other occurrence. In modern computing, user context is often discussed in terms of context awareness or location awareness; the user's whereabouts has been the most common user context since the proliferation of mobile devices.
This invention defines user context by the user's activity and/or the user's intention toward an activity. (From here onward, “user's activity and/or user's intention toward an activity” is shortened to “user's activity”.) This invention identifies the user's activity as the common denominator for simplifying the complexity of functionality in electronic devices. This identification assumes that changes in the user's activity are accompanied by functional changes to electronic devices. For example, when the user is cooking, the user may want to use the smart phone hands-free, turn off all lights except the dining room, and play classical music. But when the user is watching TV, the user may want vibration mode instead of a ring tone, dimmed living room lights, and the audio stopped. The user wants functional changes to electronic devices, or different behaviors from electronic devices, when the user shifts activities.
In other words, complex functional changes to electronic devices stem from a single change in the user's activity. This one-to-many relationship between an activity and the functions of electronic devices is the key to simplifying complexity, and it is captured in a “mode of operation”. Each mode of operation includes the functional changes of electronic devices required by a change in the user's activity. Thus, each activity has a respective mode of operation, defined in this invention as an activity-centric contextual mode of operation for electronic devices.
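For illustration only, this one-to-many relationship may be sketched in Python as follows. All device names and setting keys here are hypothetical placeholders, not part of the invention; the sketch merely shows one activity fanning out to settings on several devices.

```python
# A minimal sketch of the one-to-many relationship between a user-activity
# and per-device functions and settings. Device names and settings are
# hypothetical illustrations.
MODES_OF_OPERATION = {
    "cooking": {
        "smart_phone": {"hands_free": True},
        "lighting": {"all_off_except": ["dining_room"]},
        "audio": {"play": "classical_playlist"},
    },
    "watching_tv": {
        "smart_phone": {"ringer": "vibrate"},
        "lighting": {"dim": ["living_room"]},
        "audio": {"play": None},  # stop audio
    },
}

def apply_mode(activity: str) -> None:
    """Apply every device setting predefined for one activity."""
    for device, settings in MODES_OF_OPERATION[activity].items():
        print(f"{device} <- {settings}")  # stand-in for real device control

apply_mode("watching_tv")
```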
In order to use the user's activity as user context, defined in this invention as activity-centric context, the principle of 5W1H (When, Where, Who, What, Why, How) from linguistic grammar is used to gather information and obtain the complete story of a user-activity. Thus, the details of a user-activity may include, but are not limited to, time (When), place (Where), user group (Who), object (What), intention (Why), and other contextual information (How), which may be defined in a data structure. A standardized data structure may be required to gather, save, access, and communicate user-activity information within and among electronic devices.
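One possible standardized record, sketched below in Python, captures the six 5W1H dimensions. The field names and types are illustrative assumptions rather than a prescribed format.

```python
# A sketch of one possible standardized 5W1H record for a user-activity.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class UserActivity:
    activity_id: str                               # e.g. "jog"
    when: Optional[datetime] = None                # When
    where: Optional[str] = None                    # Where (GPS fix, place name)
    who: list[str] = field(default_factory=list)   # Who (user or user group)
    what: list[str] = field(default_factory=list)  # What (associated objects)
    why: Optional[str] = None                      # Why (explicit or inferred)
    how: dict = field(default_factory=dict)        # How (mood, weather, ...)

jog = UserActivity("jog", when=datetime.now(), where="park",
                   who=["john"], what=["running_shoes"], why="exercise")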
As the user-activity is defined as activity-centric context with the principle of 5W1H, the respective functional changes of electronic devices are defined as a mode of operation, called in this invention an “activity-centric contextual mode of operation”. A mode of operation may include, but is not limited to, functions and settings for software as well as for hardware. For example, a mode of operation for a smart phone may include the playlist and volume for an audio app, or the ring tone setting of the smart phone itself. A mode of operation for a smart-home lighting control system may include dimming settings. Thus, when the user shifts from the “cooking” user-activity to “watching TV”, the smart phone automatically sets itself to vibration mode and stops the audio app while the smart-home lighting control system automatically dims the dining room lights, because said changes to functions and settings are predefined in the contextual mode of operation for the “watching TV” user-activity.
The user only needs to predefine, as a contextual mode of operation, how electronic devices should function for each user-activity. Then, whenever the user engages in a predefined user-activity, the electronic devices change to the predefined functions and settings. This invention therefore simplifies the user's experience with mobile devices and with electronic devices in general: the user can control smart phones, wearable devices, home appliances, home automation devices, automotive vehicle electronics, and so on with a single simple interaction, changing the user-activity.
Systems and methods for supporting activity-centric contextual modes of operation for one or more electronic devices are provided and described with reference to the accompanying figures.
Steps 102 to 128 relate to the user's jog activity; throughout these steps the user interacts with three different electronic devices: a smart phone, a heater control system, and a lighting control system. The user's interaction with said three devices is to change their functions and settings to suit the user's jog activity.
Steps 130 to 136 show a change in the user's activity: after a hot shower, said user decides to go to sleep and changes the functions and settings of electronic devices to suit the new activity, “sleep”. For the user-activity “sleep”, the user turns on the audio, selects a music playlist appropriate before sleep, and sets an audio-off timer for 30 minutes in step 132. In steps 134 and 136, the user sets a lights-off timer for 30 minutes and goes to sleep. From step 130 to step 136, the user interacts with two electronic devices: the lighting control system and the audio.
Illustrative user scenario 100 is an example of the current user experience with electronic devices: it involves two user-activities (jog and sleep), four different electronic devices (smart phone, heater control system, lighting control system, and audio), and numerous changes to the functions and settings of said four devices. Between shifts in activities, the user constantly needs to activate or deactivate electronic devices and change their functions and settings to suit the new activity. As more electronic devices become smarter and networked in the future, the user may enjoy ever more functionality but will need to constantly change functions and settings between shifts in activities, which adds complexity to the user interface.
This invention proposes a solution to this complexity in the user interface. To resolve the complexity, this invention uses the user-activity as a common denominator to sort the complex functions and settings of electronic devices and to group with the user's current activity only the functions and settings that activity requires.
This invention identifies the fact that the user changes the functions and settings of electronic devices because the user wants different behaviors from those devices for different user-activities, as described in illustrative scenario 100. A shift in user-activity is therefore the root cause of multiple changes to the functions and settings of electronic devices. This invention provides systems and methods that detect the user-activity and automatically apply the changes to functions and settings of electronic devices predefined for said user-activity. The predefined changes are called “modes of operation” in this invention. Different user-activities have different modes of operation for electronic devices. Since this invention defines the user-activity as the user context, the “modes of operation” for a respective user-activity are defined as “activity-centric contextual modes of operation”. Activity-centric contextual modes of operation contain information on the mode or state to which electronic devices change their functions and settings when the user shifts activity. Changing functions and settings may involve disabling, enabling, or restricting access to one or more functionalities, applications, or assets of the electronic devices.
The functionalities may include, but are not limited to, any input functionalities (e.g., microphone), any output functionalities (e.g., audio level), any communication functionalities (e.g., Bluetooth), any graphics functionalities (e.g., display brightness), or any combination of the aforementioned types of functionalities. For example, a contextual mode of operation for a “secret meeting” activity may disable the microphone to prevent recording conversations and disable the camera to prevent taking pictures.
A contextual mode of operation may alter the priority or the availability of one or more assets accessible to the user of the electronic devices. Assets may include, but are not limited to, any media assets (e.g., songs, videos), any electronic communication assets (e.g., e-mails, text messages, contact information), any other assets (e.g., “favorite” links for an internet browser), or any combination of the aforementioned types of assets. For example, a contextual mode of operation for an “at-home” activity may disable work-related or corporate confidential e-mails, contacts, or favorite links.
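For illustration, the following sketch shows how a contextual mode of operation might disable functionalities and filter assets, in the spirit of the “secret meeting” and “at-home” examples above. The Device class, tag names, and mode contents are hypothetical.

```python
# A sketch of a contextual mode that restricts functionalities and assets.
class Device:
    def __init__(self):
        self.functions = {"microphone": True, "camera": True}
        self.assets = [
            {"name": "quarterly_report.eml", "tags": {"work"}},
            {"name": "family_photos", "tags": {"personal"}},
        ]

    def apply_mode(self, mode: dict) -> None:
        for fn in mode.get("disable_functions", []):
            self.functions[fn] = False            # e.g. block recording
        hidden = set(mode.get("hide_asset_tags", []))
        self.assets = [a for a in self.assets if not (a["tags"] & hidden)]

SECRET_MEETING = {"disable_functions": ["microphone", "camera"]}
AT_HOME = {"hide_asset_tags": ["work"]}

d = Device()
d.apply_mode(SECRET_MEETING)   # microphone and camera now disabled
d.apply_mode(AT_HOME)          # work-related assets no longer listed
```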
Electronic device 400 may be any portable, hand-held, wearable, implanted, or other embodiment that allows the user to use the device wherever said user travels. Alternatively, electronic device 400 may not be portable at all but may instead be generally stationary, such as a smart TV or an HVAC (heating, ventilation, and air conditioning) system. Moreover, electronic device 400 may be neither portable nor stationary but mobile, such as the electronics of a transportation vehicle (a car navigation system, a vehicle dynamics control system, or a control system for airplane seats).
Electronic device 400 may include, among other components, input component 402, output component 404, control module 406, graphics module 408, bus 410, memory 412, storage device 414, communication module 416, and activity-centric contextual mode of operation control module 418. Input component 402 may include a touch interface, a GPS sensor, a microphone, a camera, neural sensors, or other means of detecting human activity and intention toward an activity. Output component 404 may include a display, a speaker, or other means of presenting information or media to the user. Electronic device 400 may include an operating system or applications; said operating system or applications, running on control module 406, may control the functions and settings of electronic device 400 and may be stored in memory 412 or storage device 414. Graphics module 408 may include systems, software, and other means of presenting visual information or media to the user. Electronic device 400 may communicate with one or more other electronic devices through communication module 416, which may interface with the communications network using any suitable communications protocol including, but not limited to, Wi-Fi, Ethernet, Bluetooth, NFC, infrared, cellular, any other communication protocol, or any combination thereof. Activity-centric contextual mode of operation control module 418 may be implemented in software in some embodiments, or in hardware, firmware, or any combination of software, hardware, and firmware in other embodiments. Activity-centric contextual mode of operation control module 418 may use information from other components of electronic device 400 (e.g., input component 402, control module 406, communication module 416) to detect a new user-activity. For example, GPS information from the GPS sensor of input component 402 may be used to detect a “study” user-activity when the location is a library, and communication module 416 may receive a “house cleaning” user-activity from a vacuum cleaner.
Systems and methods for activity-centric contextual modes of operation may include a single electronic device in an embodiment identical to electronic device 400, a single electronic device in another embodiment, or a plurality of electronic devices in embodiments identical to or different from electronic device 400.
This invention provides systems and methods that detect the user-activity as user context. Detecting the user-activity may rely on the user's explicit input or on implicit inference; in other words, the user may manually input the activity, or electronic devices may infer the user-activity by analyzing available information. An illustrative diagram 600 of potential options for detecting user-activities is shown in the accompanying figure.
Any means of explicit manual input by the user may include, but is not limited to, internal input using input components of said electronic device itself (e.g., touch screen input on the user's smart phone) or external input using input components of other electronic devices connected to said electronic device via the network (e.g., input from a wearable device such as a smart watch connected to the user's smart phone).
Any means of automatic detection may include, but is not limited to, location-based sensors, any means of tagging, calendar entries, or other detection circuitry (e.g., a “work” activity from GPS within office premises, a “coffee break” activity from a Starbucks Wi-Fi connection, a “shopping” activity from an NFC tag in a shopping mall, a “meeting” activity from a calendar entry, a “wake-up” activity from an alarm clock entry, or a “driving” activity from an in-car Bluetooth connection). As technologies mature, motion analysis, neural analysis, and other means of detecting a user-activity, or an intention toward one, may also be used for automatic detection.
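A minimal sketch of such criteria-based automatic detection follows. The criterion types and example values are hypothetical illustrations of the detection options named above, not a prescribed rule format.

```python
# A sketch of automatic user-activity detection from predefined criteria.
DETECTION_CRITERIA = [
    {"activity": "work",         "type": "gps_zone",  "value": "office"},
    {"activity": "coffee_break", "type": "wifi_ssid", "value": "Starbucks"},
    {"activity": "shopping",     "type": "nfc_tag",   "value": "mall_entrance"},
    {"activity": "driving",      "type": "bluetooth", "value": "car_audio"},
]

def detect_activity(event: dict) -> str | None:
    """Return the first activity whose criterion matches the sensor event."""
    for c in DETECTION_CRITERIA:
        if event.get(c["type"]) == c["value"]:
            return c["activity"]
    return None

print(detect_activity({"wifi_ssid": "Starbucks"}))  # -> "coffee_break"
```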
User-activity detection 602 may take place at any time within the lifecycle of said new user-activity. For example, a new user-activity may be detected in advance, in transition, in progress, at the end, or after said new user-activity.
When a new user-activity is detected, a comprehensive user context is identified using the principle of 5W1H (When, Where, Who, What, Why, How) from linguistic grammar to gather information and obtain the complete story of the user-activity. Therefore, when the systems and methods of this invention detect a user-activity, they may also detect and record, but are not limited to, time (When), place (Where), user group (Who), object (What), intention (Why), and other contextual information (How) associated with the detected user-activity.
Using the principle of 5W1H to identify the user-activity as user context is a substantial innovation over current practice. Since smart phones were introduced, location awareness has been the most prevalent form of user context. However, as the breadth and functionality of electronic devices grow, identifying user context from location information alone has shown its limitations. Different technologies, such as accelerometers, gesture recognition, and video analytics, have been developed to identify user context beyond location awareness, but these technologies have lacked a framework for defining a comprehensive user context as a whole. Using the principle of 5W1H as that framework allows this invention to gather information and obtain the complete story of the user's current context, as the principle does in linguistic grammar.
To detect the time (When of 5W1H) of a user-activity, the timer or clock function of electronic devices may be used to measure relative or absolute time information. To detect the place or location (Where of 5W1H), location awareness technologies such as, but not limited to, GPS, Bluetooth, Wi-Fi, NFC tags, or combinations thereof may be used. User group information (Who of 5W1H) may use, but is not limited to, user identification information stored in electronic devices, NFC tags, RFID chips, barcodes, facial recognition, fingerprint detection, iris recognition, or other biometric identification technologies to detect a user or a group of users associated with the user-activity. Object information (What of 5W1H) may include, but is not limited to, information about any objects associated with the user-activity, such as what the user is carrying during the “jogging” user-activity or what the user is eating during the “dining” user-activity. User intention (Why of 5W1H) may come from the user's explicit input or be inferred from implicit information; when the user is engaged in the “shower” user-activity, the user may input “after jog” as the intention, or the electronic device may infer the “after jog” intention from the previous “jog” user-activity. The How of 5W1H may include a multitude of other contextual information on the user or the environment, such as, but not limited to, the user's mood or the weather. Systems and methods of this invention may include all or part of the 5W1H depending on the requirements for user context awareness.
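Building on the UserActivity record sketched earlier, the following hypothetical example shows how the 5W1H fields might be populated from the detection sources just described. The sensor-reading functions are stand-ins for real platform APIs, not actual device interfaces.

```python
# A sketch of assembling the 5W1H record (UserActivity, defined earlier)
# from detection sources. The reader functions are hypothetical stubs.
from datetime import datetime

def read_gps() -> str: return "park"                           # Where
def read_user_id() -> str: return "john"                       # Who (biometric)
def read_nearby_tags() -> list[str]: return ["running_shoes"]  # What

def build_context(activity_id: str, why: str | None = None) -> "UserActivity":
    return UserActivity(
        activity_id=activity_id,
        when=datetime.now(),        # When: device clock or timer
        where=read_gps(),           # Where: GPS / Wi-Fi / NFC
        who=[read_user_id()],       # Who: identification technology
        what=read_nearby_tags(),    # What: tagged or sensed objects
        why=why,                    # Why: explicit input or inference
        how={"weather": "sunny"},   # How: other contextual information
    )

jog_context = build_context("jog", why="exercise")
```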
The profile or definition associated with a user-activity may take a standardized format, as in data structure 700 of the accompanying figure.
When a new user-activity is detected along with its 5W1H data as user context, the changes to the functions and settings of electronic devices for the detected user-activity are applied. These changes are defined in this invention as “activity-centric contextual modes of operation”, which represent the modes of operation for a given user-activity as user context. Activity-centric contextual modes of operation are predefined for each user-activity, so that when a new user-activity is detected, the respective contextual modes of operation are accessed from memory 412 or storage device 414 and applied to the electronic devices.
The data structure for a contextual mode of operation may include, but is not limited to, mode ID 802, user-activity ID 804, device ID 806, electronic device 808, mode name 810, mode description 812, mode owner 814, a public-versus-private mode identifier 816, the device functions and settings 818 that need to change, and mode priority 820. Mode ID 802 has a corresponding user-activity ID 804, so that when a new user-activity with the corresponding user-activity ID is detected, the contextual mode of operation with the corresponding mode ID 802 may be applied to the electronic device with the corresponding device ID 806. Rows 822, 824, 826, 828, and 830 illustrate exemplary entries of this data structure.
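The following sketch renders fields 802 through 820 as a record type, with one hypothetical entry standing in for a row of the data structure. The field values are illustrative only.

```python
# A sketch of the contextual-mode-of-operation record (fields 802-820).
from dataclasses import dataclass

@dataclass
class ContextualMode:
    mode_id: str            # 802
    user_activity_id: str   # 804
    device_id: str          # 806
    device: str             # 808
    mode_name: str          # 810
    mode_description: str   # 812
    mode_owner: str         # 814
    public: bool            # 816 (public vs. private)
    settings: dict          # 818 (functions and settings to change)
    priority: int           # 820

example_row = ContextualMode(
    mode_id="M-001", user_activity_id="jog", device_id="D-17",
    device="smart_phone", mode_name="jog mode",
    mode_description="hands-free, jogging playlist",
    mode_owner="john", public=False,
    settings={"hands_free": True, "playlist": "jogging"}, priority=1,
)
```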
A custom, user-defined activity-centric contextual mode of operation for a user-activity may be defined in some embodiments. Mode owner information 814 identifies the original editor of the mode. The private/public identifier 816 defines whether an activity-centric contextual mode of operation is for private use or public use. The user may define all or part of the information for an activity-centric contextual mode of operation, and said mode may be “published” to a network server so that other users may use the custom activity-centric contextual mode of operation, and vice versa.
In the same manner as with the user-activity “jog”, when the user decides to go to sleep as in step 1028, the user only needs to select the new user-activity “sleep” on the primary device (smart phone) as in step 1030, and the primary device sends the new user-activity to the secondary/peripheral devices via the network as in step 1032. Consequently, as secondary/peripheral electronic devices, the audio and lighting controllers automatically play the predefined playlist and turn themselves off after the predefined time delay as in steps 1034 and 1036. Finally, the user may go to sleep as in step 1038.
In this scenario 1000 with activity-centric contextual modes of operation, the user does not need to wrestle with complex functionality. The user's main interactions are selecting “jog” and “sleep” from the user-activity list in steps 1004 and 1030, respectively. Once the activity-centric contextual mode of operation controller detects the user-activity “jog” or “sleep”, the primary and secondary/peripheral electronic devices automatically apply the respective activity-centric contextual modes of operation. Compared with scenario 1000 of this invention, current-practice scenario 100 involves far more user interaction with the apps, functions, and settings of electronic devices: in scenario 100, the user keeps activating or deactivating devices and changing individual functions and settings as the activity changes. In scenario 1000, the user's interaction is minimal and intuitive; the user simply selects a user-activity, and almost all required changes are applied automatically as predefined.
This invention thus changes user interaction from adjusting individual functions and settings to selecting a user-activity.
Once activity-centric contextual modes of operation and automatic detection criteria are defined in step 1302, the electronic device waits for detection of a new user-activity as in step 1306 while in stand-by as in step 1304. A user-activity may be detected in various ways, as shown in steps 1308, 1310, and 1312. As in step 1308, the user may manually select a user-activity from a predefined list; the user's manual selection is an explicit expression of a new user-activity or of the user's intention toward one. As in step 1310, electronic devices of this invention may automatically detect a new user-activity by monitoring the criteria defined in step 1302; when those predefined criteria are met, the electronic device may confirm the new user-activity with the user as in step 1320.
As in step 1312, secondary/peripheral electronic devices may give notification of a new user-activity, and memory 412 or storage 414 may be scanned to check whether a predefined activity-centric contextual mode of operation exists for it, as in step 1314. If no predefined activity-centric contextual mode of operation exists in memory 412 or storage 414, the user may edit a new contextual mode of operation as in step 1318.
Once the activity-centric contextual mode of operation for the new user-activity is accessed from memory 412 or storage device 414 as in step 1316, or newly defined as in step 1318, user confirmation step 1320 is executed before shifting to the new user-activity. After confirmation, the current activity-centric contextual mode of operation is backed up for possible later use to memory 412, storage device 414, network storage, or combinations thereof as in step 1322. As in step 1324, the activity-centric contextual mode of operation is applied to the electronic device; that is, the predefined changes to the functionalities of said electronic device are applied. Finally, the confirmed new user-activity is sent as a notification to other electronic devices as in step 1326, and the electronic device returns to stand-by step 1304.
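The flow of steps 1302 through 1326 may be sketched as a simple loop, as below. All helper behavior (detection, confirmation, application, notification) is reduced to hypothetical stand-ins for the mechanisms described above.

```python
# A sketch of the detection / confirmation / application flow (steps 1302-1326).
def wait_for_detection():                     # step 1306 via 1308/1310/1312
    return input("detected or selected activity: ")

def confirm_with_user(activity: str) -> bool:  # step 1320
    return input(f"start '{activity}'? [y/n] ") == "y"

def run_device(modes: dict) -> None:
    backups = []
    while True:                                # step 1304: stand-by loop
        activity = wait_for_detection()
        mode = modes.get(activity)             # step 1314: look up stored mode
        if mode is None:
            mode = {"settings": {}}            # step 1318: user edits new mode
            modes[activity] = mode
        if not confirm_with_user(activity):
            continue
        backups.append(dict(mode))             # step 1322: back up current mode
        print("applying", mode["settings"])    # step 1324: apply to device
        print("notifying network of", activity)  # step 1326: notify others
```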
The user may add new automatic detection criteria via the “add new automatic detection criteria” button 1610, which may be used to define the conditions under which an embodiment of this invention automatically detects a new user-activity, as in step 1310 of the flow described above.
As shown in application activation cell 1714 as an example, activating the navigation application triggers automatic detection of the user-activity “driving”. The user may add more applications to application activation cell 1714 through the “add application activation” button 1712, which segues to an illustrative screenshot of application activation editing screen 1800 of the accompanying figure.
As shown in sensor input cell 1718 as an example, a predefined Bluetooth input triggers automatic detection of the user-activity “driving”. The user may add sensor inputs to the automatic detection criteria through the “add sensor input” button 1716, which segues to an illustrative screenshot of sensor input editing screen 1900 of the accompanying figure.
Current activity-centric contextual mode of operation cell 1616 shows exemplary activity-centric contextual modes of operation for the user-activity “driving”. The navigation application, primary device settings, and garage gate control are defined in these exemplary modes. Thus, when the exemplary user-activity “driving” is detected, the predefined changes to functionalities are applied automatically to the navigation application, primary device settings, and garage gate control, as shown in cell 1616.
To edit activity-centric contextual modes of operation, the user may use the “add activity-centric contextual mode of operation” button 1614 of the accompanying figure.
In the example shown in screen 2100, the navigation application is defined as part of the activity-centric contextual mode of operation. Therefore, when the user-activity “driving” is detected, the navigation application is automatically activated. To add more applications to be activated for the user-activity “driving”, the “add application” button 2108 may be used, which may segue to an illustrative screenshot of application activation editing screen 2200 of the accompanying figure.
Activity-centric contextual modes of operation may also define primary device settings, as shown in 2110, so that when the exemplary user-activity “driving” is detected, the predefined primary device settings are applied. In the example, pressing arrow button 2112 may segue to an illustrative screenshot of primary device settings editing screen 2300 of the accompanying figure.
Activity-centric contextual modes of operation may also include secondary/peripheral devices, so that when the exemplary user-activity “driving” is detected, predefined settings are applied to the secondary/peripheral devices. In the example of the user-activity “driving”, the garage gate control system is selected as a secondary/peripheral device to be activated when the user-activity “driving” is detected, as shown in the accompanying figure.
If the notified user-activity exists in the primary electronic device, the primary electronic device may show new user-activity notification alert 2902, user-activity description 2904, activity-centric contextual mode of operation information 2906, cancel button 2908, and accept button 2910. The user may ignore the notification alert and return to the previous screen using cancel button 2908, or accept the notification alert and start the notified user-activity using accept button 2910, as in step 1320 of the flow described above.
If the notified user-activity does not exist in the primary electronic device, the primary electronic device may show new user-activity notification alert 3002 with a message that no activity-centric contextual mode of operation is available for the notified user-activity on the primary electronic device. The primary electronic device may also display contextual mode edit button 3004 and cancel button 3006. Contextual mode edit button 3004 may segue to a new user-activity editing screen, identical to screen 1600 of the accompanying figure.
First screen 1500 of this invention may have settings button 1508, which may segue to illustrative settings editing screen 2700 of the accompanying figure.
Electronic devices have become smarter and now use sensors to monitor the environment in order to operate differently under different environmental conditions. This invention identifies the user's activity and/or the user's intention toward an activity as the key environmental condition. Thus, electronic devices of this invention detect the user's activity and/or the user's intention toward an activity, and operate differently for different user-activities and intentions. User-activities are defined using the principle of 5W1H (When, Where, Who, What, Why, How) from linguistic grammar to describe the full details of a user-activity. For each user-activity, there are one or more respective activity-centric contextual modes of operation, which define how said electronic devices operate for that user-activity. This invention also provides how activity-centric contextual modes of operation may be implemented across a plurality of electronic devices within a network: when one electronic device detects a new user-activity, said electronic device may notify other electronic devices within the network, and the plurality of electronic devices may each shift to their respective contextual modes of operation for said new user-activity.
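As one hypothetical realization of this network notification, the sketch below broadcasts a detected user-activity so that peer devices may apply their respective modes. The UDP transport, port number, and message format are illustrative assumptions, not a protocol defined by the invention.

```python
# A sketch of propagating a detected user-activity to networked devices.
import json
import socket

PORT = 50007  # hypothetical port

def notify_network(activity_id: str) -> None:
    """Broadcast a new user-activity so peers can shift modes (step 1326)."""
    msg = json.dumps({"user_activity": activity_id}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(msg, ("255.255.255.255", PORT))

def listen_and_apply(modes: dict) -> None:
    """On each peer: receive the activity and apply its predefined mode."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("", PORT))
        data, _ = s.recvfrom(1024)
        activity = json.loads(data)["user_activity"]
        if activity in modes:
            print("applying mode for", activity)
```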
Electronic devices are now flooded with functionality. Complexity in functionality, and thus difficulty in the user experience, undermines the full potential of the electronic devices. For example, although smart phones have changed people's everyday lives with ever more extensive functionality, many users still struggle to exploit even part of that potential and may find smart phones harder to use than conventional flip phones. This invention defines systems and methods that simplify user interaction with electronic devices. Once activity-centric contextual modes of operation are defined for different user-activities, electronic devices of this invention may change their functionality automatically when a new user-activity is detected. The user's interaction with said electronic devices may therefore become as simple as selecting a new user-activity, or even simpler when the electronic devices detect a new user-activity automatically. As technologies such as motion analysis and voice recognition mature, a home automation system of this invention may recognize a user's intention to jog by analyzing the user's motion while wearing running shoes, ask “John, are you ready for jogging?” for confirmation, and the user's entire interaction may become saying “yes” or “no” in reply.
While the detailed description of this invention contains embodiments with a smart phone as the primary electronic device, these embodiments should not be construed as limitations on the scope, but rather as typical exemplifications, since the smart phone is the major user interface device among current electronic devices. Thus, this invention may have embodiments with a voice-recognition smart watch as the primary electronic device; in this case, the user may “talk to” the smart watch to express his or her intention for a new user-activity, and the secondary/peripheral electronic devices of this invention will change their functionality to the predefined activity-centric contextual modes of operation. Accordingly, the scope should be determined not by the embodiment(s) illustrated, but by the appended claims and their legal equivalents.