SYSTEM AND METHODS FOR DETECTION AND HANDLING OF FOCUS ELEMENTS

Information

  • Patent Application
  • Publication Number
    20170024086
  • Date Filed
    February 25, 2016
  • Date Published
    January 26, 2017
Abstract
The present disclosure relates to detection and handling of focus elements associated with an application. In an embodiment, a device presents at least one graphical entry space for entry of focus elements and detects an input. The device categorizes the input, including determining a focus element type for the input and assigning a focus element type to the input. The device creates a focus element, based on the input and the focus element type. The device displays a graphical representation of the focus element, including a graphical symbol identifying the focus element type. The graphical representation of the focus element is presented in a list of one or more focus elements.
Description
FIELD

The present disclosure relates to operation of computing devices with displays, and in particular, detection and handling of focus elements associated with an application.


BACKGROUND

Information associated with electronic devices, and in particular personal electronic devices, is both voluminous and varied. Many electronic devices can be used to view or access hundreds, if not thousands (or more), of instances of applications and websites every day. Information is varied in that a personal electronic device receives and processes many types of information from any number of sources, such as text, communications, location data, photographs, web browsing, etc. Beyond varying in type, information can vary in its degree of importance or priority; some pieces of information are more important than others. There exists a need for device configurations that allow for access to and storage of information based on priority or importance.


Tracking information and inputs across a device can be overwhelming with conventional devices and methods. Conventional devices typically store information based on its type. For example, contacts may be stored in a particular application of a device. As such, with conventional devices, users must actively determine where particular types of information are stored. In addition, conventional applications are configured to receive only a particular type of input. For example, a photo application for a device is not capable of saving, processing, and interacting with text information, address information, contact information, etc. Each of these particular types of data inputs requires its own additional application, focused on those particular types of data inputs. Storage of data inputs, and accessibility of stored data inputs, becomes more and more difficult as the number of different types of data inputs grows. Furthermore, as the number of different types of data inputs grows, so too does the number of applications with which the user must interact. For these reasons, there exists a need for devices to detect information and allow for its characterization. There also exists a need to address storage of and access to information within a device in a manner that addresses the user interface deficiencies of conventional devices. While conventional computing devices allow for file folders and conventional mobile devices provide user interface layouts, these configurations are limited in the presentation of and access to inputs. There is a desire for devices and methods that detect and characterize input to a device.


BRIEF SUMMARY OF THE EMBODIMENTS

Disclosed and claimed herein are methods and devices for detection and handling of focus elements associated with an application. In one embodiment, a method for detection and handling of focus elements associated with an application includes presenting, by a device, at least one graphical entry space for entry of focus elements on a display of the device. The method also includes detecting, by the device, an input to the at least one graphical entry space. The method also includes categorizing, by the device, the input. Categorizing includes determining a focus element type for the input and assigning the focus element type to the input. The method also includes creating, by the device, a focus element based on the input and the focus element type. The method also includes displaying, by the device, a graphical representation of the focus element, including the input and at least one graphical symbol identifying the focus element type. The graphical representation of the focus element is presented in a list of one or more focus elements.


In one embodiment, the input is one of typed data, copy and paste data, audio data, image data, video data, and location data.


In one embodiment, the focus element type is one of a note, event, contact, website, audio recording, location, photo, video, task, message, and barcode.


In one embodiment, the graphical entry space includes a text entry area on the display of the device.


In one embodiment, the graphical entry space includes a plurality of selectable elements, wherein each selectable element is associated with one of a plurality of predefined focus element types.


In one embodiment, categorizing further includes matching at least a portion of the input to one or more data patterns associated with a plurality of predefined focus element types.


In one embodiment, categorizing further includes updating the graphical representation of the focus element.


In one embodiment, creating includes storing, by the device, the focus element, the input, and the focus element type, in an input list.


In one embodiment, displaying includes displaying the graphical representation of the focus element in addition to a plurality of previously created focus elements, wherein each of the plurality of previously created focus elements includes the input and the at least one graphical symbol identifying the focus element type.


In one embodiment, the method also includes detecting a selection of the graphical representation of the focus element and transferring an input for a selected focus element to an application, wherein the application is associated with the focus element type for the selected focus element.


Another embodiment is directed to a device including an input, a display configured for presentation of a user interface, and a controller configured to communicate with the input and the display. The controller is further configured to control presentation of at least one graphical entry space for entry of focus elements on the display. The controller is further configured to detect the input to the at least one graphical entry space. The controller is further configured to categorize the input, wherein categorizing includes determining a focus element type for the input and assigning the focus element type to the input. The controller is further configured to control creation of a focus element based on the input and the focus element type. The controller is further configured to control display of a graphical representation of the focus element, including the input and at least one graphical symbol identifying the focus element type, wherein the graphical representation of the focus element is presented in a list of one or more focus elements.


In one embodiment, the input is one of typed data, copy and paste data, audio data, image data, video data, and location data.


In one embodiment, the focus element type is one of a note, event, contact, website, audio recording, location, photo, video, task, message, and barcode.


In one embodiment, the graphical entry space includes a text entry area on the display of the device.


In one embodiment, the graphical entry space includes a plurality of selectable elements, wherein each selectable element is associated with one of a plurality of predefined focus element types.


In one embodiment, categorizing further includes matching at least a portion of the input to one or more data patterns associated with a plurality of predefined focus element types.


In one embodiment, categorizing further includes updating the graphical representation of the focus element.


In one embodiment, controlling creation includes storing the focus element, the input, and the focus element type, in an input list.


In one embodiment, controlling display includes displaying the graphical representation of the focus element in addition to a plurality of previously created focus elements, wherein each of the plurality of previously created focus elements includes the input and the at least one graphical symbol identifying the focus element type.


In one embodiment, controlling also includes detecting a selection of the graphical representation of the focus element and transferring an input for a selected focus element to an application, wherein the application is associated with the focus element type for the selected focus element.


Other aspects, features, and techniques will be apparent to one skilled in the relevant art in view of the following detailed description of the embodiments.





BRIEF DESCRIPTION OF THE DRAWINGS

The features, objects, and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify correspondingly throughout and wherein:



FIGS. 1A-1E depict graphical representations of a device with focus element entry according to one or more embodiments;



FIG. 2 depicts a graphical representation of a device with a focus element and a list of focus elements according to one or more embodiments;



FIG. 3 depicts a graphical representation of a process of detection and handling of focus elements according to one or more embodiments;



FIG. 4 depicts a simplified diagram of a device according to one or more embodiments;



FIG. 5 depicts a graphical representation of the focus application according to one or more embodiments;



FIG. 6 depicts a graphical representation of the focus application according to one or more embodiments;



FIG. 7 depicts a graphical representation of focus element types according to one or more embodiments; and



FIG. 8 depicts a process of detection and handling of focus elements according to one or more embodiments.





DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS
Overview and Terminology

Disclosed herein are methods and devices for the detection and handling of focus elements. One aspect of this disclosure relates to detection and handling of inputs, including data and inputs which can vary both in type and in priority. Personal electronic devices, such as phones, tablets, laptops, personal computers, televisions, gaming systems and other electronic display devices, can receive a massive amount of information or data input every day. Inputs to a device may be detected and one or more focus elements may be generated based on the inputs.


In one embodiment, inputs relate to any particular information to be stored on a device. For example, inputs can include typed data, copy and paste data, audio data, image data, video data, and location data. Data can be user generated, received from other users, or obtained from other sources (e.g., from the Internet). In certain embodiments, inputs relate to entries and/or data supplied to a particular application of a device, such as a Focus application. In other embodiments, inputs in general to the device and/or data that is associated with Focus application types may be stored as focus elements. In one embodiment, various inputs are identified and categorized into particular focus element types. For example, focus element types can include note, event, contact, website, audio recording, location, photo, video, task, message, and barcode. In one embodiment, from this categorization, focus elements are generated.


Another aspect of this disclosure relates to an application for detection and handling of focus elements. In one embodiment, the Focus application detects and handles focus elements that are processed on a device. Implementation may be system-wide, across both the device and all related applications on the device. Input detection is built into the system, such that the device can dynamically identify, categorize, and generate focus elements. In an embodiment, the Focus application runs underneath the typical user interface of the device. This allows the Focus application to operate while the device is running other applications on the user interface. In another embodiment, the Focus application runs as a full application on the device. Likewise, the user has the ability to transition between these modes of operation.


Another aspect of this disclosure relates to a device including a display configured for presentation of the user interface, and a controller configured to communicate with the display. In an embodiment, the device detects and handles focus elements through implementation of the Focus application. In a different embodiment, the Focus application operates across multiple devices. For example, the user can save pertinent information, derived from inputs on a mobile device, to a networked list in the Focus application.


Through implementation of the Focus application, a device can identify, for the user, different types of information. Generation and use of focus elements, including graphical symbols, allows the user to quickly assess information in an efficient manner. For example, the user is no longer required to self-identify whether text is merely text, or whether it includes a web address. Through the Focus application, the device will identify the important aspects of a given piece of information. As important information is identified, the device provides for recordation in a central location. The user is no longer burdened by having to save information to its respective application location (e.g., saving a picture to the photo application); likewise, the user is no longer burdened by having to transition between a multitude of different applications. The user can quickly and efficiently record important information to a centralized location. Providing a centralized location allows the user to recover previously saved information without having to search for where it is located. Categorizing saved information into different focus element types allows for quick and efficient navigation. Additionally, saving all pertinent information to a centralized location creates a timeline of relevant content, as dictated by the user. In this way, the Focus application acts as an aggregator for important user-specific information.


As used herein, a focus element is derived from information that is on a device. More particularly, focus elements include a data input, which is a portion of relevant data associated with a particular focus element type. The data input can be user generated or can be received from other sources, both within the device and from sources beyond the device. Graphically, a focus element will include the data input and a graphical symbol. The graphical symbol, like the focus element itself, is associated with a particular focus element type.
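As a concrete sketch, the focus element described above could be modeled as a small data structure pairing a data input with its type and graphical symbol. The type names follow the disclosure; the structure itself and the symbol mapping are illustrative assumptions, not the claimed implementation (the "calendar", "face", "cloud", and "moon" symbols mirror the figure descriptions below).

```python
from dataclasses import dataclass

# Focus element types named in the disclosure.
FOCUS_TYPES = {
    "note", "event", "contact", "website", "audio recording",
    "location", "photo", "video", "task", "message", "barcode",
}

# Type-to-symbol mapping; symbol names follow the figures described
# in this disclosure where available, otherwise they are guesses.
TYPE_SYMBOLS = {
    "event": "calendar", "contact": "face",
    "website": "cloud", "location": "moon", "note": "note",
}

@dataclass
class FocusElement:
    data_input: str      # the relevant portion of the detected input
    element_type: str    # one of FOCUS_TYPES
    symbol: str = ""     # graphical symbol identifying the type

    def __post_init__(self):
        if self.element_type not in FOCUS_TYPES:
            raise ValueError(f"unknown focus element type: {self.element_type}")
        if not self.symbol:
            self.symbol = TYPE_SYMBOLS.get(self.element_type, "note")

# The "office meeting" example of FIG. 1C: an event carrying a calendar symbol.
fe = FocusElement("office meeting at 11:30 a.m.", "event")
```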


As used herein, a focus element type is a category of focus element. Focus elements are grouped into specific categories, or types. As an example, focus element types can include note, event, contact, website, audio recording, location, photo, video, task, message, and barcode. Focus element types are used in subsequent interactions with focus elements.


As used herein, the Focus application is the source of identification, categorization, and generation for all focus elements. Likewise, the Focus application is the centralized location where the list of focus elements is stored. The Focus application may be run as a discrete application, accessed like any other typical application on a device. Alternatively, the Focus application may be constantly running underneath the typical user interface of the device. Updates to the Focus application can implement changes to the identification, categorization, and generation of focus elements. For example, application updates can add additional focus element types to the Focus application, add additional pattern matching parameters to improve categorization accuracy, etc.


As used herein, the terms “a” or “an” shall mean one or more than one. The term “plurality” shall mean two or more than two. The term “another” is defined as a second or more. The terms “including” and/or “having” are open ended (e.g., comprising). The term “or” as used herein is to be interpreted as inclusive or meaning any one or any combination. Therefore, “A, B or C” means “any of the following: A; B; C; A and B; A and C; B and C; A, B and C”. An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.


Reference throughout this document to “one embodiment,” “certain embodiments,” “an embodiment,” or similar term means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner on one or more embodiments without limitation.


EXEMPLARY EMBODIMENTS

Referring now to the figures, FIGS. 1A-1E depict graphical representations of a device with focus element entry according to one or more embodiments. As depicted in FIG. 1A, device 100 may be configured for presentation of focus elements on a display 101. The device 100 may be any one of a phone, tablet, laptop, personal computer, television, gaming system, and other electronic display device. Device 100 includes a controller (not shown). Display 101 may additionally include a plurality of inputs 115. Device 100 may further include a data entry field 110 configured to allow for the user to enter information into the device 100.


According to one embodiment, device 100 is configured to detect and characterize inputs to the device, including text information (e.g., txt format), websites (e.g., html format), pictures (e.g., jpeg files), etc. Unlike typical applications for personal electronic devices, which are designed solely for particular types of data inputs (such as a photo application designed to save, process, and interact only with pictures, e.g., jpeg files), device 100 may be configured to collect different types of input in a single application. In addition to collection, device 100 may be configured to detect and characterize the inputs. Device 100 is configured to process data inputs across a variety of different types. Device 100 can categorize different types of data inputs and provide a central location from which other applications can be conveniently accessed.


As depicted in FIG. 1B, the display 101 may additionally include a data input interface 102 (e.g., keyboard, free-form pad, etc.). The data input interface 102 can be a part of the display 101 (e.g., touch-screen keyboard). Alternatively, the data input interface 102 can be separate from the display 101 (e.g., a physical keyboard). Information can be entered into the device 100, at the data entry field 110, via the data input interface 102 (e.g., typed data). For example, the text “office meeting” is added into the data entry field 110. Alternatively, information can be entered into the device 100 via the plurality of inputs 115 (e.g. audio data, image data, video data, location data, etc.). Likewise, information can be entered into the device 100 via copy and paste data. This information entered into the data entry field 110 is processed, by the device 100, into a focus element 120.


As depicted in FIG. 1C, information entered into the data entry field 110 is processed, by the device 100, into a focus element 120-1. The focus element 120-1 includes a data input 121-1 and a graphical symbol 122-1, which is associated with a specific type of focus element. For example, the text “office meeting at 11:30 a.m.” is the data input 121-1 for the focus element 120-1. The symbol of a calendar is the graphical symbol 122-1 for focus element 120-1. In an embodiment, data input 121-1 is one of typed data, copy and paste data, audio data, image data, video data, and location data. Data input 121-1 can be user generated (e.g., via the keyboard), entered into the device 100 (e.g., via the plurality of inputs 115), or received from other sources (e.g., via a received SMS text message).


As depicted in FIGS. 1D-1E, the device 100 will detect the data input 121-2, which is at least a portion of the information in the data entry field 110. This data input 121-2 is subsequently used by the device 100 to generate the focus element 120-2. For example, in FIG. 1D the information “frank's phone 416-358-8543” is entered into the data entry field 110. In FIG. 1E, this information is subsequently converted to the data input 121-2 of the focus element 120-2.


To generate the focus element 120-2, the device 100 must categorize the data input 121-2. Categorizing includes determining a specific type of focus element for the data input 121-2. There are a number of specific types of focus elements with which the data input 121-2 may be associated. In an embodiment, focus element types include notes, events, contacts, websites, audio recordings, location, photo, video, task, message, and barcode. A camera on the device 100 can dynamically determine whether something being viewed is a standard photo or, alternatively, a barcode. Barcode information is translated into ASCII or other textual information. To determine a specific type of focus element for the data input 121-2, the device 100 may match at least a portion of the input to one or more data patterns. These data patterns can be associated with a plurality of predefined focus element types. For example, names and phone numbers are associated with contacts; cities/states are associated with locations, etc. In this way, through pattern matching, the device 100 can determine a specific type of focus element for the data input 121-2. Once a specific type of focus element has been determined for the data input 121-2, the device assigns the focus element type to the data input 121-2. Using this newly determined information, the data input 121-2 and the focus element type, the device generates the focus element 120-2.
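The pattern-matching categorization described above can be sketched as an ordered scan of type-specific data patterns. The regular expressions below are hypothetical stand-ins, not the patterns used by the disclosed device; an actual implementation would use a far richer pattern library and fall back to a default type (here, "note") when nothing matches.

```python
import re

# Illustrative data patterns associated with predefined focus element types.
PATTERNS = [
    ("contact",  re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")),           # phone number
    ("website",  re.compile(r"\bhttps?://\S+|\bwww\.\S+", re.I)),           # URL
    ("event",    re.compile(r"\b\d{1,2}:\d{2}\s*(a\.?m\.?|p\.?m\.?)", re.I)),  # time of day
    ("location", re.compile(r"\b\d+\s+\w+(\s\w+)*\s(Ave|St|Rd|Blvd)\.?\b", re.I)),  # street address
]

def categorize(data_input: str) -> str:
    """Match at least a portion of the input against type-specific
    data patterns; fall back to the 'note' type when nothing matches."""
    for element_type, pattern in PATTERNS:
        if pattern.search(data_input):
            return element_type
    return "note"
```

Run against the examples from FIGS. 1B-1E, "office meeting at 11:30 a.m." matches the time-of-day pattern and is categorized as an event, while "frank's phone 416-358-8543" matches the phone-number pattern and is categorized as a contact.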


Once generated, the focus element 120-2 is depicted graphically on the device 100. The graphical representation of the focus element 120-2 includes the data input 121-2 and the graphical symbol 122-2. The graphical symbol 122-2 is used to identify the focus element type. More particularly, the graphical symbol 122-2 helps the user quickly identify the type of focus element through visual cues. For example, the user can quickly determine that focus element 120-1 is of the focus element type of events by seeing the graphical symbol 122-1 of a calendar. Likewise, the user can quickly determine that focus element 120-2 is of the focus element type of contacts by seeing the graphical symbol 122-2 of a face.


In an embodiment, as new information is added to the data entry field 110, the focus element 120 may be updated. For example, as information is first added to the data entry field 110, the device 100 will dynamically identify, categorize, and generate focus elements (as described above and in greater detail below). Suppose the user enters the information “Becky . . . ” into the data entry field 110. The device 100 may determine that the data input 121 is a name: Becky. Through pattern matching, the device 100 may assign the focus element type of contact, and subsequently assign the graphical symbol 122 of a face, associated with the focus element type of contacts. Thus, a focus element 120 has been dynamically created, based on information the user has added to the data entry field 110. However, the user continues to enter more information into the same data entry field 110. Suppose that “Becky . . . ” is now changed, by the user, to “Becky, 1600 Pennsylvania Ave., Washington D.C.” The device 100 may determine that the data input 121, which was previously a name, is now actually an address. Through pattern matching, the device 100 may assign a new focus element type of location, and subsequently assign the graphical symbol 122 of a moon, associated with the focus element type of location. Thus, the focus element 120 that was dynamically created, as a contact, has now been dynamically re-categorized. The graphical representation of the focus element 120 is updated to reflect this dynamic re-categorization.
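The dynamic re-categorization above can be sketched as re-running categorization each time the entry changes. The patterns below are deliberately minimal stand-ins (they do not recognize bare names, so the initial entry falls back to a note rather than a contact as in the "Becky" example), and the symbol names mirror the examples in this disclosure.

```python
import re

def categorize(text: str) -> str:
    """Minimal stand-in categorizer: street addresses map to 'location',
    phone numbers to 'contact'; everything else falls back to 'note'."""
    if re.search(r"\b\d+\s+\w+(\s\w+)*\s+Ave\b", text, re.I):
        return "location"
    if re.search(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", text):
        return "contact"
    return "note"

SYMBOLS = {"location": "moon", "contact": "face", "note": "note"}

def update_focus_element(element: dict, new_text: str) -> dict:
    """Re-categorize the focus element as the user extends the entry,
    updating its type and graphical symbol to reflect the new match."""
    new_type = categorize(new_text)
    element.update(data_input=new_text, element_type=new_type,
                   symbol=SYMBOLS[new_type])
    return element

# The entry starts as plain text, then grows into an address and is
# dynamically re-categorized as a location.
element = {"data_input": "Becky", "element_type": "note", "symbol": "note"}
element = update_focus_element(
    element, "Becky, 1600 Pennsylvania Ave., Washington D.C.")
```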


In certain embodiments, new information is not required to be entered into the data entry field 110 by the user in order to be identified, categorized, and generated into a focus element 120. User entry is only one way that data can be processed by device 100. Beyond data typed by the user, information can be entered into the device 100 through the plurality of inputs 115, including audio data, image data, video data, and location data. Likewise, information can be entered into the device via copy and paste data. In an exemplary scenario, device 100 can detect online article browsing using a web browser application for the device 100, including content associated with “The White House” in Washington, D.C. The online article may note in part that “The White House is located at 1600 Pennsylvania Ave.” In response to user action including copy and paste of the address “1600 Pennsylvania Ave.” to the device 100, the device 100 may determine that the data input is an address: 1600 Pennsylvania Ave. Through pattern matching, the device 100 may assign the focus element type of location, and subsequently assign the graphical symbol of a moon, associated with the focus element type of location. Thus, a focus element has been dynamically created, based on information input by the user via a copy and paste action on the device 100. In this way, the device can detect clipboard information to identify, categorize, and generate additional focus elements. Likewise, focus elements can be identified, categorized, and generated from audio data, image data, video data, and location data. This includes data received from other sources, such as the Internet, or from devices of other users.


With the focus element 120 being presented graphically, the device 100 can take a number of different actions. In one embodiment, the focus element 120 is automatically stored, by the device 100, once it is created. In a different embodiment, the focus element 120 is not stored, by the device 100, until the user performs some additional action. For example, the user swipes the focus element 120 to the right to store the focus element 120 to the device 100. Alternative commands to store a focus element 120 can include swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons.


In an embodiment, focus element identification, categorization, and generation, can continually operate underneath the typical user interface of the device. For example, the interface could be a text message conversation; typically, a text message conversation will take place on a text message application. Though the device 100 is running a text message application, the device 100 may still identify pertinent information and generate a focus element 120. Continual uninterrupted analysis of information, for identification, categorization, and generation of focus elements, is beneficial in many ways. Through continual analysis, the device 100 may automatically detect content that is categorized by one of the specific types of focus elements (e.g., addresses, contact information, websites, etc.). Categorization utilizes pattern-matching for all content automatically processed. By graphically categorizing content, including adding graphical symbols for each category, the user can instantly see whether information on the device is relevant (i.e., information is a specific type of focus element) or irrelevant (i.e., information is not a specific type of focus element). Furthermore, dynamic categorization enables the user to push content from specific applications (e.g., a text message application) into a centralized list. Likewise, as discussed below, the user has the ability to modify the categorization for any focus element. The list may be stored on a specific Focus application on the device 100. The Focus application is described in greater detail below with respect to FIGS. 5-7.



FIG. 2 depicts a graphical representation of a display device with a focus element and a list of focus elements according to one or more embodiments. Device 200 may be configured for presentation of focus elements on a display 201 that includes a data input interface 202. Display 201 can additionally include a plurality of inputs 215. Device 200 may store a focus element 220, including the data input 221 and a graphical symbol 222 for the focus element type, in an input list. For example, focus element 220 can include a graphical symbol 222 of a face, which is associated with the focus element type of contacts. Often, this input list is displayed on a specific application: the Focus application. Device 200 may display the focus element 220 in the Focus application. Additionally, by displaying the graphical representation of the focus element 220, the device may additionally display a plurality of previously created focus elements 230 in the Focus application. Each of the plurality of previously created focus elements 230 may, likewise, have a data input and a graphical symbol. For example, the plurality of previously created focus elements 230 each have a graphical symbol (e.g., cloud, note, moon), which is associated with a respective focus element type (e.g., websites, notes, location). It should be appreciated that the graphical symbols used herein are merely examples. A number of other illustrative graphics and symbols could be used.
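The centralized input list described above could be sketched as a simple container that stores each focus element alongside its input and symbol, and renders one display row per element. The class and its rendering format are illustrative assumptions, not the disclosed design.

```python
class FocusList:
    """A minimal sketch of the centralized input list of the Focus application."""

    def __init__(self):
        self._elements = []  # previously created focus elements, oldest first

    def store(self, data_input: str, element_type: str, symbol: str) -> None:
        """Store the focus element, the input, and the focus element type."""
        self._elements.append({
            "data_input": data_input,
            "element_type": element_type,
            "symbol": symbol,
        })

    def render(self) -> list:
        """One display row per element: its graphical symbol plus its input."""
        return [f"[{e['symbol']}] {e['data_input']}" for e in self._elements]

# Mirroring FIG. 2: a website element (cloud symbol) followed by a newly
# created contact element (face symbol).
focus_list = FocusList()
focus_list.store("www.example.com", "website", "cloud")
focus_list.store("frank's phone 416-358-8543", "contact", "face")
```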


By providing the focus element 220 and the plurality of previously created focus elements 230 in a central location (e.g., the list on the Focus application), the user is able to view all information that the user has saved as important. Because focus elements are displayed to include graphical symbols, the user can quickly navigate among all saved information to find specific information. Likewise, and as discussed below, the user is able to interact with all information that the user has saved as important, from one centrally organized location. In this sense, the Focus application acts as a gateway to a number of related applications. The Focus application itself can be invoked, by the user, by selecting an icon on the device 200 that is associated with the Focus application (e.g., an app icon). Alternatively, the Focus application can be invoked via swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons.
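The gateway behavior above, where selecting a focus element transfers its input to the application associated with its focus element type, could be sketched as a dispatch table. The handler functions are hypothetical placeholders; on a real device they would invoke the calendar, contacts, or browser applications.

```python
# Hypothetical per-type handlers standing in for real device applications.
def open_calendar(data: str) -> str:
    return f"calendar app opened with: {data}"

def open_contacts(data: str) -> str:
    return f"contacts app opened with: {data}"

def open_browser(data: str) -> str:
    return f"browser opened at: {data}"

# Each focus element type maps to its associated application.
APP_FOR_TYPE = {
    "event": open_calendar,
    "contact": open_contacts,
    "website": open_browser,
}

def select_focus_element(element: dict) -> str:
    """Transfer the selected element's input to the application
    associated with its focus element type."""
    handler = APP_FOR_TYPE[element["element_type"]]
    return handler(element["data_input"])
```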



FIG. 3 depicts a graphical representation of a process of detection and handling of focus elements according to one or more embodiments. Although process 300 is described with reference to the flowchart illustrated in FIG. 3, it will be appreciated that many other processes of performing the acts associated with the process 300 may be used. The process 300 may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software, or a combination of both. Process 300 may be performed by a device such as device 100 of FIG. 1A.


At block 305, at least one graphical entry space for entry of focus elements is presented on a display of the device. In some embodiments, this graphical entry space is an area where the user can type or paste information. In other embodiments, the input to the graphical entry space can be generated or received copy and paste data, audio data, image data, video data, or location data. At block 310, an input to the at least one graphical entry space is detected. The input may be any type of data in the graphical entry space, as described above.


The device will then categorize the input. Categorization may involve both a determination and an assignment, by the device, at block 315. Categorization may include determining a focus element type for the input. Pre-defined focus element types may include notes, events, contacts, websites, audio recordings, location, photo, video, task, message, and barcode. The particular focus element type assigned to an input is determined, by the device, through pattern matching. For example, the device will match at least a portion of the input to one of the pre-defined focus element types. The device may also assign the focus element type to the input. While assignment is made by the device, it should be noted that the user can modify the assignment, for any input, to a different focus element type.
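The pattern-matching categorization described above can be sketched as follows. The regular expressions and type names here are illustrative assumptions for a few of the pre-defined focus element types, not the device's actual matching scheme:

```python
import re

# Illustrative patterns for a few of the pre-defined focus element types.
# These regular expressions are examples only; any matching scheme could
# be used by the device.
FOCUS_PATTERNS = [
    ("website", re.compile(r"https?://\S+|www\.\S+")),
    ("contact", re.compile(r"\b\d{3}[-.]\d{4}\b")),              # e.g. "555-5555"
    ("event", re.compile(r"\b\d{1,2}/\d{1,2}(/\d{2,4})?\b")),    # e.g. "6/23"
]

def categorize(text):
    """Match at least a portion of the input against each pattern;
    fall back to the default type 'note' if nothing matches."""
    for element_type, pattern in FOCUS_PATTERNS:
        if pattern.search(text):
            return element_type
    return "note"
```

For example, under these assumed patterns, `categorize("Becky: 555-5555")` yields the "contact" type, while text with no recognizable pattern falls back to "note".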


At block 320, the device will create a focus element, based on the input and the focus element type. At this stage, the device now recognizes a new element: a focus element, which is based on the categorization of the input, as discussed above. The device may store the input, the focus element type, and the graphical symbol in an input list. In an example embodiment, this input list is accessed, modified, and interacted with via a Focus application that is running on the device.
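A minimal sketch of a focus element record and the input list that stores it is shown below; the field and function names are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class FocusElement:
    """A created focus element: the raw input, its assigned focus element
    type, and the graphical symbol identifying that type."""
    data_input: str
    element_type: str
    symbol: str

# The input list maintained by the Focus application.
input_list = []

def create_focus_element(data_input, element_type, symbol):
    """Block 320: create a focus element from the input and its type,
    and store the input, type, and symbol together in the input list."""
    element = FocusElement(data_input, element_type, symbol)
    input_list.append(element)
    return element
```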


At block 325, the device will display a graphical representation of the focus element. This could include the input and at least one graphical symbol identifying the focus element type. In an example embodiment, this graphical representation of the focus element is presented dynamically, in another application on the device. For example, the focus element could be shown by the device while the user is accessing a text message application. In a different example embodiment, this graphical representation of the focus element is presented alone. In a different example embodiment, this graphical representation of the focus element is presented in a list. The list may include a plurality of previously created focus elements. The list may be accessed through the Focus application on the device.



FIG. 4 depicts a simplified diagram of a device according to one or more embodiments. Device 400 may relate to one or more devices for providing an application, such as a Focus application. In one embodiment, device 400 relates to a device including a display, such as a phone, tablet, laptop, personal computer, television, gaming system, and other electronic display device. As shown in FIG. 4, device 400 includes controller 405, user interface 410, communications unit 415, and memory 420.


Controller 405 may be configured to execute code stored in memory 420 for operation of device 400, including presentation of a graphical user interface. Controller 405 may include a processor and/or one or more processing elements. In one embodiment, controller 405 may include one or more of hardware, software, firmware, and/or processing components in general. According to one embodiment, controller 405 may be configured to perform one or more processes described herein. Controller 405 may be configured to run a Focus application, the Focus application including one or more focus elements and a Focus application user interface configuration.


User interface 410 may be configured to receive one or more commands via an input/output (I/O) interface 425, which may include one or more inputs or terminals to receive user commands. When device 400 relates to a display device, I/O interface 425 may receive one or more remote control commands. Likewise, user interface 410 may be configured to receive one or more commands from a display 430. In one embodiment, commands from the display 430 are sent to the controller 405 via user interaction with a touch screen.


Communications unit 415 may be configured for wired and/or wireless communication with one or more network elements, such as servers. Memory 420 may include non-transitory RAM and/or ROM memory for storing executable instructions, operating instructions and content for display.



FIG. 5 depicts a graphical representation of the Focus application according to one or more embodiments. Focus element identification, categorization, and generation can be triggered, on the device 500, by the Focus application. The Focus application provides the user with the ability to view all focus elements, in a list. Because focus elements typically represent key pieces of information, the user may find it useful to view and interact with all key information from one central location. The Focus application can include a focus menu 510, which acts as a precursor to viewing all focus elements in a list.


The focus menu 510 provides the user with the opportunity to quickly access the plurality of inputs 515-1 to 515-5, and subsequently create a focus element by entering data into the device 500. In an embodiment, clicking the location input button 515-1 would enter data related to the user's current location into the device 500 as a focus element. In another embodiment, clicking the photo input button 515-2 would enter picture data into the device 500 as a focus element. In another embodiment, clicking the video input button 515-3 would enter video data into the device 500 as a focus element. In another embodiment, clicking the microphone input button 515-4 would input audio data into the device 500 as a focus element. In another embodiment, clicking the note input button 515-5 would input text data into the device 500 as a note. It should be appreciated that the plurality of inputs 515-1 to 515-5 discussed herein are merely examples. A number of other illustrative graphics and symbols could be used.



FIG. 6 depicts a graphical representation of the Focus application according to one or more embodiments. Focus element identification, categorization, and generation can be triggered, on the device 600, by the Focus application. The Focus application, as shown graphically on the display 601, provides the user with the ability to view all focus elements, in a list. The Focus application displays, to the user, a focus element 620, including the data input 621 and a graphical symbol 622 for the focus element type, in an input list. For example, focus element 620 can include a graphical symbol 622 of a calendar, which is associated with the focus element type of events. Because focus elements typically represent key pieces of information, the user may find it useful to view and interact with all key information from one central location.


The Focus application interface allows the user to add additional focus elements directly into the Focus application (e.g., into the list of focus elements). The user may enter information into the Focus application via the data entry field 610, such that the device 600 will dynamically identify, categorize, and generate a focus element. Likewise, the user may enter information into the Focus application via the plurality of inputs 615, such that the device 600 will dynamically identify, categorize, and generate a focus element. In addition, the user is still able to add focus elements from other applications (e.g., a text message application), because the device constantly analyzes information for identification, categorization, and generation of focus elements, as described above. The user can add information in a number of different ways (e.g., typed data, copy and paste data, audio data, image data, video data, and location data).


As shown in FIG. 6, device 600 includes a plurality of previously created focus elements 630 in a list. Device 600 also includes the data entry field 610. The data entry field 610 is a text entry area on the device 600. The user may enter text into the data entry field 610 (e.g., “Buy Stamps”). In an example embodiment, the device 600 will identify information in the data entry field 610 via pattern matching (as discussed above) in order to categorize a focus element type. In a different example embodiment, the user has the ability to set or change the focus element type for the data entry field 610 through use of the plurality of inputs 615. Data in the data entry field 610 can be categorized by the user, using the plurality of inputs 615 to represent each of the focus element types. Likewise, the user can select one of the plurality of previously created focus elements 630 and re-categorize the element, using the plurality of inputs 615. In this way, the user has the capability to override the pattern matching typically done by the device 600.


More specifically, beyond allowing the user to add additional focus elements directly into a list, the Focus application interface gives the user the ability to categorize focus elements, and re-categorize previous focus elements. While pattern matching is an ideal way to dynamically categorize information, the user may prefer certain information or focus elements to be categorized in a different way. The Focus application interface gives the user the ability to customize focus elements to the user's individual preferences. Finally, the Focus application interface allows the user to access, through the user selection command, third party applications that are linked to focus elements and associated with specific focus element types. By providing focus elements in a central location, and allowing the user to take action with respect to individual focus elements, the Focus application effectively links important information to the third party applications.


In an example embodiment, the device 600 will use the “note” as the default focus element type, if pattern matching does not identify another focus element type. Likewise, with dynamic identification, categorization, and generation, the device may re-categorize a default “note” based on additional information that is added by the user (e.g., the note changes to maps, once the user types an address). As previously mentioned, the user may have the capability to override the pattern matching through a plurality of selectable elements via the Focus application.
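The dynamic re-categorization described above can be sketched as re-running the match as the user types; the address pattern and helper name below are hypothetical illustrations, not the device's actual algorithm:

```python
import re

# Hypothetical pattern that recognizes a street address, e.g. "123 Main St".
ADDRESS_PATTERN = re.compile(r"\d+\s+\w+\s+(St|Ave|Rd|Blvd)\b", re.IGNORECASE)

def recategorize(text, current_type="note"):
    """Re-run matching as the user adds text: a default 'note' is
    upgraded once the text begins to look like an address, so the
    focus element changes to a location/maps type."""
    if ADDRESS_PATTERN.search(text):
        return "location"
    return current_type
```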



FIG. 7 depicts a graphical representation of focus element types according to one or more embodiments. In an embodiment, focus element 701 is categorized as the focus element type of a note. In an embodiment, focus element 702 is categorized as the focus element type of an event. In an embodiment, focus element 703 is categorized as the focus element type of a contact. In an embodiment, focus element 704 is categorized as the focus element type of a website. In an embodiment, focus element 705 is categorized as the focus element type of an audio recording. In an embodiment, focus element 706 is categorized as the focus element type of a location. In an embodiment, focus element 707 is categorized as the focus element type of a photo. In an embodiment, focus element 708 is categorized as the focus element type of a task. In an embodiment, focus element 709 is categorized as the focus element type of a message. In an embodiment, focus element 710 is categorized as the focus element type of a barcode.



FIG. 8 depicts a graphical representation of a process of detection and handling of focus elements according to one or more embodiments. Although process 800 is described with reference to the flowchart illustrated in FIG. 8, it will be appreciated that many other processes of performing the acts associated with the process 800 may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, and some of the blocks described are optional. The process 800 may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software, or a combination of both. Process 800 may be performed by a device such as device 100 of FIG. 1A.


The device 100 will detect an input at block 810. The input may be any type of data. In an embodiment, the input is received in at least one graphical entry space for entry of focus elements on a display of the device. In some embodiments, this graphical entry space is an area where the user can type or paste information. In different embodiments, the graphical entry space can receive typed data, copy and paste data, audio data, image data, video data, or location data.


The device will match at least a portion of the input with a data pattern at block 820. This matching process requires a determination of whether the input can be assigned one of the focus element types. In an embodiment, pre-defined focus element types may include notes, events, contacts, websites, audio recordings, location, photo, video, task, message, and barcode. The particular focus element type assigned to an input is determined, by the device 100, through pattern matching. For example, the device 100 will match at least a portion of the input to one of the pre-defined focus element types.


The device 100 determines at decision block 825 whether, in fact, the input is assignable. Responsive to determining that the input is assignable, the device 100 assigns, to the input, a focus element type and a graphical symbol at block 830. The graphical symbol is a symbol associated with a specific type of focus element. Responsive to determining that the input is not assignable, the device 100 skips block 830. In an embodiment, if the input is not assignable, the device 100 assigns, to the input, a focus element type of a “note”; in this embodiment, the device 100 will use the “note” as the default focus element type, if pattern matching does not identify another focus element type. In an embodiment, while assignment is made by the device 100, it should be noted that the user can modify the assignment, for any input, to a different focus element type.
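The decision at blocks 825-830 can be sketched as a lookup that attaches a type and a graphical symbol, falling back to the default "note" when the input is not assignable; the symbol names and the mapping itself are illustrative placeholders:

```python
# Hypothetical mapping from focus element type to its graphical symbol.
TYPE_SYMBOLS = {
    "note": "notepad", "event": "calendar", "contact": "face",
    "website": "globe", "location": "pin", "photo": "camera",
}

def assign(input_text, element_type):
    """Blocks 825-830: if the input is assignable, attach its type and
    the symbol associated with that type; otherwise fall back to the
    default 'note' type, per the embodiment described above."""
    if element_type is None or element_type not in TYPE_SYMBOLS:
        element_type = "note"
    return element_type, TYPE_SYMBOLS[element_type]
```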


At block 840, the device 100 displays the input with the graphical symbol. In an embodiment, display of the input with the graphical symbol is characterized as display of the focus element. At this stage, the device 100 now recognizes a new element: a focus element, which is based on the categorization of the input, as discussed above.


The device will identify a user storage command at block 850. In an embodiment, the user swipes the focus element on the device 100 to the right to store the focus element to the device 100. Alternative commands to store a focus element can include swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons. At block 860, the device 100 will store the input, the focus element type, and the graphical symbol in an input list. In an example embodiment, this input list is accessed, modified, and interacted with via a Focus application that is running on the device 100.


At block 870, the device will display the input list. In an embodiment, display of the input list includes display of a graphical representation of the focus element, including the input and at least one graphical symbol identifying the focus element type, and display of additional focus elements. In an example embodiment, this graphical representation of the focus element is presented dynamically, in another application on the device 100. For example, the focus element could be shown by the device while the user is accessing a text message application. In a different example embodiment, this graphical representation of the focus element is presented alone. In a different example embodiment, this graphical representation of the focus element is presented in a list. The list may include a plurality of previously created focus elements. The list may be accessed through the Focus application on the device 100.


The device 100 identifies a user selection command at block 880. In an embodiment, the user swipes the focus element to the right on the device 100, for user selection of the focus element. Alternative commands for selection of a focus element can include swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons. It should be noted that user selection (e.g., block 880), as described herein, is different from the user storage command (e.g., block 850).


Through a user selection command, the device 100 will transfer the focus element to a third party application at block 890. The device 100, post-user selection command, transfers the data input for the selected focus element to a third party application. The third party application is associated with the focus element type for the selected focus element. For example, if the selected focus element is a website focus element type, the third party application associated with the selected focus element would be a web browser application on the device 100. The device 100 displays the application associated with the selected focus element on the display of the device 100. In an embodiment, the third party application is located on the device 100 (e.g., an app on the device). In a different embodiment, the third party application is located on an external network (e.g., the Internet).


As examples, the third party application may be one of a notepad application, calendar application, contacts application, web browser application, microphone application, camera application, map application, navigation application, task list application, email application, text message application, telephone application, and bar code reader application.
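The transfer at block 890 amounts to dispatching on the focus element type. A sketch follows, in which the application names are examples drawn from the list above and the mapping itself is an assumption:

```python
# Example mapping from focus element type to an associated third party
# application (names drawn from the examples listed above).
TYPE_TO_APPLICATION = {
    "note": "notepad application",
    "event": "calendar application",
    "contact": "contacts application",
    "website": "web browser application",
    "location": "map application",
    "barcode": "bar code reader application",
}

def transfer(element_type, data_input):
    """Block 890: select the third party application associated with the
    selected focus element's type and hand it the data input. A real
    device would launch the application; here we report the routing."""
    application = TYPE_TO_APPLICATION[element_type]
    return f"sending {data_input!r} to the {application}"
```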


While FIGS. 5-8 illustrate selection of a focus element within the Focus application interface, it should be appreciated that the user can select a focus element at any other point in time, from any other interface (e.g., from a text message application), so long as the focus element has been identified, categorized, and generated. In an example embodiment, suppose that the user is in a text message application and the user types "Becky: 555-5555." In other embodiments, the user receives the information "Becky: 555-5555" from another person, via a received SMS message. Regardless of the information source, the device will identify the data input, categorize the input based on a focus element type (e.g., contacts), and assign a graphical symbol (e.g., a face). At this point, a focus element has been generated even though the user is still in the text message application. Responsive to a focus element being generated, the user can take a number of additional actions. One action that the user can take is to store the focus element in a list, within the Focus application. Another action that the user can take is to immediately act on the focus element, through user selection. For example, if the user takes immediate action with the "Becky: 555-5555" focus element, which is a "contact" focus element, the device will transition directly to the contacts application for the device. The user does not have to go through the process of selecting the text message information, copying the text message information, leaving the text message application, opening the contacts application, and then saving the text message information. Rather, by taking action on the focus element, the user immediately transitions the information to the appropriate application (as dictated by the focus element type).


Gestures for user selection of the focus element can include swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons. In other related embodiments, different gestures can trigger different actions for focus elements. Actions may include saving the focus element to the focus element list, transitioning to the Focus application, saving the focus element and transitioning to the Focus application, taking immediate action with the focus element, saving the focus element and taking immediate action with the focus element, etc. Likewise, different gestures can trigger different applications that might be related. For example, with a “contact” focus element, swiping a first direction may send the focus element to the contacts application, swiping a second direction may send the focus element to the telephone application, swiping a third direction may send the focus element to the Focus application, etc. With other types of focus elements, swipes could trigger different functionalities and applications. It should be appreciated that various different actions for focus elements can be combined as well.
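The per-type gesture mapping described above for a "contact" focus element might be sketched as a simple dispatch table; the directions and destinations here are illustrative assumptions:

```python
# Hypothetical gesture map for a 'contact' focus element:
# swipe direction -> destination application or action.
CONTACT_GESTURES = {
    "swipe_right": "contacts application",
    "swipe_left": "telephone application",
    "swipe_up": "Focus application",
}

def handle_gesture(gesture):
    """Route a gesture on a 'contact' focus element to its associated
    application; unrecognized gestures result in no action."""
    return CONTACT_GESTURES.get(gesture, "no action")
```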


It will be appreciated that all of the disclosed methods and procedures described herein can be implemented using one or more computer programs or components. These components may be provided as a series of computer instructions on any conventional computer-readable medium, including RAM, ROM, flash memory, magnetic or optical disks, optical memory, or other storage media. The instructions may be configured to be executed by a processor, which, when executing the series of computer instructions, performs or facilitates the performance of all or part of the disclosed methods and procedures.


It should be understood that various changes and modifications to the example embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present subject matter and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims.


While this disclosure has been particularly shown and described with references to exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the claimed embodiments.

Claims
  • 1. A method for detection and handling of focus elements associated with an application, the method comprising: presenting, by a device, at least one graphical entry space for entry of focus elements on a display of the device; detecting, by the device, an input to the at least one graphical entry space; categorizing, by the device, the input, wherein categorizing includes determining a focus element type for the input, and assigning the focus element type to the input; creating, by the device, a focus element based on the input and the focus element type; and displaying, by the device, a graphical representation of the focus element, including the input and at least one graphical symbol identifying the focus element type, wherein the graphical representation of the focus element is presented in a list of one or more focus elements.
  • 2. The method of claim 1, wherein the input is one of typed data, copy and paste data, audio data, image data, video data, and location data.
  • 3. The method of claim 1, wherein the focus element type is one of a note, event, contact, website, audio recording, location, photo, video, task, message, and barcode.
  • 4. The method of claim 1, wherein the graphical entry space includes a text entry area on the display of the device.
  • 5. The method of claim 1, wherein the graphical entry space includes a plurality of selectable elements, wherein each selectable element is associated with one of a plurality of predefined focus element types.
  • 6. The method of claim 1, wherein categorizing further includes matching at least a portion of the input to one or more data patterns associated with a plurality of predefined focus element types.
  • 7. The method of claim 6, wherein categorizing further includes updating the graphical representation of the focus element.
  • 8. The method of claim 1, wherein creating includes storing, by the device, the focus element, the input, and the focus element type, in an input list.
  • 9. The method of claim 1, wherein displaying includes displaying the graphical representation of the focus element in addition to a plurality of previously created focus elements, wherein each of the plurality of previously created focus elements includes the input and the at least one graphical symbol identifying the focus element type.
  • 10. The method of claim 1, further comprising: detecting a selection of the graphical representation of the focus element; and transferring an input for a selected focus element to an application, wherein the application is associated with the focus element type for the selected focus element.
  • 11. A device comprising: a display configured for presentation of a graphical user interface; and a controller configured to communicate with the display, wherein the controller is further configured to: control presentation of at least one graphical entry space for entry of focus elements on the display; detect an input to the at least one graphical entry space; categorize the input, wherein categorizing includes determining a focus element type for the input, and assigning the focus element type to the input; control creation of a focus element based on the input and the focus element type; and control display of a graphical representation of the focus element, including the input and at least one graphical symbol identifying the focus element type, wherein the graphical representation of the focus element is presented in a list of one or more focus elements.
  • 12. The device of claim 11, wherein the input is one of typed data, copy and paste data, audio data, image data, video data, and location data.
  • 13. The device of claim 11, wherein the focus element type is one of a note, event, contact, website, audio recording, location, photo, video, task, message, and barcode.
  • 14. The device of claim 11, wherein the graphical entry space includes a text entry area on the display of the device.
  • 15. The device of claim 11, wherein the graphical entry space includes a plurality of selectable elements, wherein each selectable element is associated with one of a plurality of predefined focus element types.
  • 16. The device of claim 11, wherein categorizing further includes matching at least a portion of the input to one or more data patterns associated with a plurality of predefined focus element types.
  • 17. The device of claim 16, wherein categorizing further includes updating the graphical representation of the focus element.
  • 18. The device of claim 11, wherein controlling creation includes storing the focus element, the input, and the focus element type, in an input list.
  • 19. The device of claim 11, wherein controlling display includes displaying the graphical representation of the focus element in addition to a plurality of previously created focus elements, wherein each of the plurality of previously created focus elements includes the input and the at least one graphical symbol identifying the focus element type.
  • 20. The device of claim 11, further comprising: detecting a selection of the graphical representation of the focus element; and transferring an input for a selected focus element to an application, wherein the application is associated with the focus element type for the selected focus element.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Application No. 62/183,613 titled SYSTEM AND METHODS FOR A USER INTERFACE AND DEVICE OPERATION filed on Jun. 23, 2015, and U.S. Provisional Application No. 62/184,476 titled SYSTEM AND METHODS FOR A USER INTERFACE AND DEVICE OPERATION filed on Jun. 25, 2015, the contents of which are expressly incorporated by reference in their entirety.

Provisional Applications (2)
Number Date Country
62183613 Jun 2015 US
62184476 Jun 2015 US