1. Technical Field
The present disclosure relates generally to computerized systems and methods for scheduling events. More particularly, and without limitation, the present disclosure relates to systems and methods for scheduling, viewing, updating, and managing events with gesture-based input.
2. Background Information
Often people want to schedule events for themselves or a group, for example, their friends or family. Scheduling an event used to require writing down the event in a calendar or appointment book. Today, people use computing devices (such as mobile phones and tablets) to manage their daily activities and scheduled events. Event scheduling, however, can be cumbersome and require a detailed process, which, to date, often involves multiple screens and use of a keyboard for input.
Additionally, users desire more functionality with regard to scheduling events to better streamline appointments, communications, etc. Further, as increasing numbers of people, including business groups, athletic teams, social groups, families and the like, associate and communicate with one another, the need for improved systems and methods for efficiently scheduling events grows.
Embodiments of the present disclosure relate to computerized systems and methods for scheduling events. Embodiments of the present disclosure also encompass systems and methods for gesture-based input for scheduling events and manipulating a timeline. Further, some embodiments of the present disclosure relate to systems and methods for updating at least one graphical object associated with a scheduled event.
In accordance with certain embodiments, a computerized method is provided for scheduling events. The method includes receiving an indication of a gesture via a multi-touch display of a computing device, wherein the indication of the gesture comprises data representing a starting location and data representing a directional vector. The method also includes identifying a first graphical object associated with the gesture. Further, the method includes displaying an event context menu in response to the received gesture and receiving a selection of an event from the event context menu, the selected event corresponding to a second graphical object. In addition, the method includes displaying, on the multi-touch display, the second graphical object in place of the first graphical object to confirm the event selection.
In accordance with additional embodiments of the present disclosure, a computer-implemented system is provided for scheduling events. The system may comprise at least one processor and a memory device that stores instructions which, when executed by the at least one processor, cause the at least one processor to perform a plurality of operations, including receiving an indication of a gesture via a multi-touch display of a computing device, wherein the indication of the gesture comprises data representing a starting location and data representing a directional vector. The operations performed by the at least one processor also include identifying a first graphical object associated with the gesture. Further, the operations performed by the at least one processor include displaying an event context menu in response to the received gesture and receiving a selection of an event from the event context menu, the selected event corresponding to a second graphical object. In addition, the operations performed by the at least one processor include displaying, on the multi-touch display, the second graphical object in place of the first graphical object to confirm the event selection.
In accordance with further embodiments of the present disclosure, a computerized method is provided for manipulating a timeline. The method includes displaying, on a multi-touch display, a plurality of content areas, each content area corresponding to a starting graphical object and an associated amount of time. The method also includes receiving an indication of a pinch or spread gesture via the multi-touch display, the indication of the pinch or spread gesture comprising data representing a first location and a first direction, wherein a first set of graphical objects comprises first plural graphical objects fully displayed on the multi-touch display, and wherein a second set of graphical objects comprises second plural graphical objects depicting time not yet displayed on the multi-touch display. Further, the method includes updating the content area corresponding to the starting graphical object to depict the time to display the second set of graphical objects.
In accordance with additional embodiments of the present disclosure, a system is provided for manipulating a timeline. The system may comprise at least one processor and a memory device that stores instructions which, when executed by the at least one processor, cause the at least one processor to perform a plurality of operations, including displaying, on a multi-touch display, a plurality of content areas, each content area corresponding to a starting graphical object and an associated amount of time. Further, the operations performed by the at least one processor may also include receiving an indication of a pinch or spread gesture via the multi-touch display, the indication of the pinch or spread gesture comprising data representing a first location and a first direction, wherein a first set of graphical objects comprises first plural graphical objects fully displayed on the multi-touch display, and wherein a second set of graphical objects comprises second plural graphical objects depicting time not yet displayed on the multi-touch display. The operations performed by the at least one processor also include updating the content area corresponding to the starting graphical object to depict the time to display the second set of graphical objects.
In accordance with further embodiments of the present disclosure, a computerized method is provided for updating a graphical object associated with a scheduled event. The method includes scheduling an event, the event being associated with a start time. The method also includes displaying at least one graphical object corresponding to the event, the at least one graphical object being displayed in at least one color. Further, the method includes updating, progressively, the at least one color of the at least one graphical object as the current time approaches the start time of the scheduled event.
In accordance with additional embodiments of the present disclosure, a system is provided for updating a graphical object associated with a scheduled event. The system includes at least one processor and a memory device that stores instructions which, when executed by the at least one processor, cause the at least one processor to perform a plurality of operations. The operations include scheduling an event, the event being associated with a start time. Further, the operations performed by the at least one processor include displaying at least one graphical object corresponding to the event, the at least one graphical object being displayed in at least one color. The operations performed by the at least one processor also include updating, progressively, the at least one color of the at least one graphical object as the current time approaches the start time of the scheduled event.
In accordance with further embodiments of the present disclosure, a computerized method is provided for scheduling events with at least one participant. The method includes receiving an indication of a gesture via a multi-touch display of a computing device, wherein the indication of the gesture comprises data representing a starting location and data representing a directional vector. The method also includes identifying a first graphical object and the at least one participant associated with the gesture. Further, the method includes displaying an event context menu in response to the received gesture and receiving a selection of an event from the event context menu, the selected event corresponding to a second graphical object. The method also includes displaying, on the multi-touch display, the second graphical object in place of the first graphical object to confirm the event selection. In addition, the method includes generating a notification for the scheduled event including the at least one participant associated with the gesture.
In accordance with further embodiments of the present disclosure, a system is provided for scheduling events with at least one participant. The system may comprise at least one processor and a memory device that stores instructions which, when executed by the at least one processor, cause the at least one processor to perform a plurality of operations, including receiving an indication of a gesture via a multi-touch display of a computing device, wherein the indication of the gesture comprises data representing a starting location and data representing a directional vector. The operations performed by the at least one processor also include identifying a first graphical object and the at least one participant associated with the gesture. Further, the operations performed by the at least one processor include displaying an event context menu in response to the received gesture and receiving a selection of an event from the event context menu, the selected event corresponding to a second graphical object. The operations performed by the at least one processor also include displaying, on the multi-touch display, the second graphical object in place of the first graphical object to confirm the event selection. In addition, the operations performed by the at least one processor also include generating a notification for the scheduled event including the at least one participant associated with the gesture.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only, and are not restrictive of embodiments consistent with the present disclosure. Further, the accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the present disclosure and together with the description, serve to explain principles of the present disclosure.
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate several embodiments and aspects of the present disclosure, and together with the description, serve to explain the principles of the presently disclosed embodiments. In the drawings:
The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations and other implementations are possible. For example, substitutions, additions, or modifications may be made to the components illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limiting of the disclosed embodiments. Instead, the proper scope is defined by the appended claims.
In this application, the use of the singular includes the plural unless specifically stated otherwise. In this application, the use of “or” means “and/or” unless stated otherwise. Furthermore, the use of the term “including,” as well as other forms such as “includes” and “included,” is not limiting. In addition, terms such as “element” or “component” encompass both elements and components comprising one unit, and elements and components that comprise more than one subunit, unless specifically stated otherwise. Additionally, the section headings used herein are for organizational purposes only, and are not to be construed as limiting the subject matter described.
As shown in FIG. 1, exemplary system environment 100 includes a computing device 110.
Computing device 110 may be configured to receive, process, transmit, and display data, including scheduling data. Computing device 110 may also be configured to receive gesture inputs from a user. The gestures may include a predefined set of inputs via a multi-touch display (not illustrated) including, for example, hold, pinch, spread, swipe, scroll, rotate, or drag. In addition, the gestures may include a learned set of moves corresponding to inputs. Gestures may also be symbolic. Computing device 110 may capture and generate depth images and a three-dimensional representation of a capture area including, for example, a human target gesturing. Gesture input devices may include a stylus, remote controls, visual eye cues, and/or voice-guided gestures. In addition, gestures may be input via sensory information. For example, computing device 110 may monitor neural sensors of a user and process the information to input the associated gesture from the user's thoughts.
In the exemplary embodiment of FIG. 1, computing device 110 may be a mobile phone, tablet, or other personal computing device.
Computing device 110 may include a multi-touch display (not illustrated). Multi-touch display may be used to receive input gestures from a user of computing device 110. Multi-touch display may be implemented by or with a trackpad and/or mouse capable of receiving multi-touch gestures. Multi-touch display may also be implemented by or with, for example, a liquid crystal display, a light-emitting diode display, a cathode-ray tube, etc.
In still additional embodiments, computing device 110 may include physical input devices (not illustrated), such as a mouse, a keyboard, a trackpad, one or more buttons, a microphone, an eye tracking device, and the like. These physical input devices may be integrated into the computing device 110 or may be connected to the computing device 110, such as an external trackpad. Connections for external devices may be conventional electrical connections that are implemented with wired or wireless arrangements.
In an exemplary embodiment, computing device 110 may be a device that receives, stores, and/or executes applications. Computing device 110 may be configured with storage or a memory device that stores one or more operating systems that perform known operating system functions when executed by one or more processors, as well as one or more software processes configured to be executed to run an application.
The exemplary system environment 100 of FIG. 1 also includes an event server 120, which may store or have access to calendars and calendar data for users.
Although system environment 100 is illustrated in FIG. 1 with a single computing device 110 and a single event server 120, the disclosed embodiments may include any number of such devices and servers.
The various components of the system of FIG. 1 may communicate with one another through conventional wired or wireless connections and may be implemented with any suitable combination of hardware and software.
As further shown in FIG. 1, computing device 110 may communicate with event server 120 to send and receive scheduling data, such as schedule requests.
The above system, components, and software associated with FIG. 1 may be used to perform exemplary process 200 of FIG. 2 for scheduling events, described below.
As part of process 200, computing device 110 may receive at least one gesture from a user (step 201). This may be performed by detecting n-contacts with the display surface. Once a contact is detected, the number of contacts (for example, the number of fingers in contact with the display surface) may be determined. In various embodiments, gesture inputs do not require contact with the display. For example, a user may swipe in mid-air to input a gesture. The received indication may include data representing a starting location. In one embodiment, the starting location may correspond to a single contact on the display surface, for example, a single finger in contact with the display. In another embodiment, there may be n starting locations corresponding to the n-contacts, for example, n-fingers having contacted the display. The received indication may also include data representing a directional vector. The directional vector may correspond to a type of motion, such as a rotating, twisting, swiping, pinching, spreading, holding, or dragging gesture. In additional embodiments, a directional vector may not exist, for example, when the gesture has no motion associated with it. When a directional vector does not exist, it may be determined that the gesture corresponds to a press and hold or a tap. In another embodiment, the directional vector may correspond to the starting location for gestures without motion.
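By way of illustration only, the following Python sketch shows one way a received gesture indication might be represented and its directional vector derived; the record fields, function names, and motion tolerance are hypothetical and not part of the disclosed embodiments.

```python
import math
from dataclasses import dataclass

@dataclass
class GestureIndication:
    """Hypothetical record for a received gesture indication."""
    start: tuple          # starting location (x, y) on the display surface
    end: tuple            # ending location (x, y) of the contact
    duration: float       # seconds the contact was maintained
    num_contacts: int     # n-contacts detected on the display surface

def directional_vector(g):
    """Derive the directional vector from the start and end locations."""
    return (g.end[0] - g.start[0], g.end[1] - g.start[1])

def has_motion(g, tolerance=2.0):
    """Treat movement below a small pixel tolerance as a motionless gesture."""
    dx, dy = directional_vector(g)
    return math.hypot(dx, dy) > tolerance
```

A gesture reported without motion would then be classified as a press and hold or a tap, as described further below.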
The received indication may include data representing a start and end time. For example, if a user wishes to schedule an event from 5 p.m. to 6 p.m., the received indication data may contain 5 p.m. as the start time and 6 p.m. as the end time. In further embodiments, when the proposed scheduled event corresponds to a plurality of users, the received indication data may contain information related to those users, such as their names, locations, and photos. However, it should be understood that this data might include more or less information.
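As a sketch of such indication data, with field names chosen only for illustration and not drawn from the disclosure:

```python
# Hypothetical payload for a proposed 5 p.m. to 6 p.m. event; a real
# implementation may carry more or less information than shown here.
indication = {
    "start_time": "17:00",   # 5 p.m.
    "end_time": "18:00",     # 6 p.m.
    "participants": [
        {"name": "Dad", "location": "Home", "photo": "dad.png"},
        {"name": "Daughter", "location": "School", "photo": "daughter.png"},
    ],
}
```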
Computing device 110 may receive the orientation of the device from the operating system. In another embodiment, the orientation of computing device 110 may be determined, and the corresponding direction vector for the input gesture may also be determined based on the orientation of the computing device. For example, if computing device 110 is vertical in orientation and receives a gesture from the left side of the display to the right side of the display, then the determined direction of the vector may correspond to a swipe gesture from left to right. As a further example, if computing device 110 is horizontal in orientation and the same start and end position is used, then it may be determined that the gesture corresponds to a swipe gesture from top to bottom of the display.
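One simplified way to fold orientation into the gesture's direction is to rotate the raw display-coordinate vector into the user's frame of reference. The sketch below assumes only two orientations, a 90-degree rotation, and a y-axis that grows downward; all names are hypothetical.

```python
def orient_vector(dx, dy, orientation):
    """Map a raw display-coordinate vector into the user's frame of
    reference based on device orientation (simplified sketch)."""
    if orientation == "portrait":
        return (dx, dy)
    if orientation == "landscape":
        # A left-to-right motion in display coordinates (dx > 0, dy == 0)
        # reads as top-to-bottom for the user: (0, dx).
        return (-dy, dx)
    raise ValueError("unknown orientation: " + orientation)
```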
When a gesture indication is received, computing device 110 may identify at least one graphical object associated with the gesture (step 203). In one embodiment, the identification of at least one graphical object may be based on the use of coordinates to identify where the graphical objects are in relation to the display. In another embodiment, the determination of at least one graphical object associated with a gesture may be calculated using the starting location data and the directional vector data. For example, if computing device 110 receives a swipe gesture from left to right, computing device 110 may use a formula to determine the number and position of objects between the starting location and the directional vector end location. Display 820 may be divided into sections, and computing device 810 may obtain the contact positions associated with each section and calculate, for example, the spread distance between the contact points for each section. In another embodiment, the graphical objects may lie along a directional vector corresponding to n-contact points with n-starting locations. For example, two fingers may contact the display, each having a starting location along the y-axis; when a swipe gesture is received, computing device 110 may determine the n graphical objects along the two-finger gesture path and store the received data in memory. In a further embodiment, the first graphical objects may have configured locations on the display, and the directional vector data may be matched to the configured locations of the first graphical objects.
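A minimal hit-testing sketch for a horizontal swipe, assuming each first graphical object has a configured rectangular location on the display; the object fields and helper name are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GraphicalObject:
    name: str
    x: float        # configured location and size on the display
    y: float
    width: float
    height: float

def objects_along_gesture(objects, start, end):
    """Return the objects lying between the starting location and the
    directional-vector end location of a horizontal swipe."""
    x_lo, x_hi = sorted((start[0], end[0]))
    return [o for o in objects
            if o.x <= x_hi and o.x + o.width >= x_lo      # overlaps x-range
            and o.y <= start[1] <= o.y + o.height]        # at the swipe's y
```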
In some embodiments, when a gesture is received, computing device 110 may identify at least one participant associated with the gesture. Each identified participant may be confirmed to the user or creator of the event. For example, a graphical object representing each identified participant may be displayed for confirmation.
Computing device 110 may determine that the received indication does not have motion associated with the gesture; however, if the gesture exceeds a threshold time, it may be determined that the gesture is associated with a press and hold. In one embodiment, the threshold time is fixed or predetermined. For example, the threshold time may be fixed at 0.2 seconds. In another embodiment, the threshold time is configurable by the application or user. For example, the threshold may have a default value of at least 0.2 seconds; however, the application or user may configure the threshold time to 0.5 seconds. In that case, computing device 110 would execute the associated command for a press and hold gesture when the contact has been determined to exceed 0.5 seconds. In a further embodiment, if motion is not detected, then a command is selected based on the number of contacts only and performed on the at least one graphical object (step 205) associated with that gesture. In a further embodiment, where the gesture has no motion and does not exceed a minimum threshold, it may be determined that the gesture is a tap.
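In code, the press-and-hold/tap distinction reduces to a single threshold comparison. The sketch below mirrors the 0.2-second default and 0.5-second override from the example above; the names are hypothetical.

```python
DEFAULT_HOLD_THRESHOLD = 0.2   # seconds; an application or user may override

def classify_motionless_gesture(duration, threshold=DEFAULT_HOLD_THRESHOLD):
    """Classify a contact that has no motion associated with it."""
    return "press_and_hold" if duration > threshold else "tap"

# With an application-configured threshold of 0.5 seconds, a 0.6-second
# contact is treated as a press and hold:
assert classify_motionless_gesture(0.6, threshold=0.5) == "press_and_hold"
```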
In various embodiments, an event context menu may be displayed in response to a received gesture (step 205). In certain embodiments, the event context menu may be overlaid over the first graphical objects. For example, a menu allowing a user to select an event type may be overlaid in front of the original screen where the gesture was initiated. In another example, the first graphical objects are dimmed behind the displayed context menu. The context menu may contain n-submenus. In some embodiments, the context menu is n-levels deep. The context menu may contain a plurality of graphical objects corresponding to events, and each graphical object may have an associated name. For example, a menu may be displayed with a graphical object corresponding to Exercise, and the sub-menu under Exercise may contain another set of graphical objects, such as Cardio, Bike, and Hike. Each event may have a corresponding graphical object associated with the event.
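The n-level menu could be modeled as a simple recursive structure; a sketch echoing the Exercise example, with illustrative entries and icon names that are not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class MenuEntry:
    """One entry in the event context menu; children form its sub-menu."""
    name: str
    icon: str                                     # associated graphical object
    children: list = field(default_factory=list)  # n-submenus, n-levels deep

exercise = MenuEntry("Exercise", "exercise.png", [
    MenuEntry("Cardio", "cardio.png"),
    MenuEntry("Bike", "bike.png"),
    MenuEntry("Hike", "hike.png"),
])
```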
In certain embodiments, the displayed context menu may be uniquely associated with the type of gesture. For example, a press and hold gesture may result in a different context menu than a swipe gesture. Where the context menu is activated by a gesture on a scheduled event, the context menu may contain information associated with the scheduled event. Such information may include, for example, the people scheduled to participate in the event, as well as its time, date, alert time, and location. In another embodiment, the context menu may allow users to chat with other members scheduled to participate in the event. The displayed context menu associated with a scheduled event may display the location of the event on a map, along with each user's location on the map in relation to the scheduled event's location. In addition, similar locations may be displayed on the map, along with user-favorite locations in proximity to the user or the event location. Recommendations may be displayed from saved favorites or locations corresponding to the event. Further, the displayed context menu associated with a scheduled event may allow the event to be deleted.
Computing device 110 may receive a selected event for scheduling. For example, a user may select dinner. In one embodiment, the event may have an associated second graphical object. Continuing with the dinner example, the user may select an event associated with dinner, for example, pizza. The second graphical object may be displayed in place of the first graphical object to confirm the scheduled event (step 209). For example, a graphical object associated with the pizza event may replace the first graphical object corresponding to the start and end time in the received data. Alternatively, the second graphical object may be displayed over the first graphical object in step 209. For example, a graphical object associated with the pizza event may be overlaid on top of the first graphical object corresponding to the start and end time in the received data.
As shown in FIG. 3A, first graphical object 301 corresponds to a time on display 320. Further, as depicted in FIG. 3A, additional graphical objects may represent the members whose time may be scheduled, as described below.
Along the top of the display, a graphical object 305 may represent each member whose time is capable of being scheduled. The graphical object may correspond to a column or row associated with that member's schedule. In one embodiment, the user may configure members of a group. In some embodiments, numerous groups may exist simultaneously. When a user opens their calendar application, for example, the user may be prompted to create one or more groups. A default graphical object may be assigned to each newly created member, or the user may assign a custom graphical object associated with that member. The user may select contacts from an address book, friends from social networks, or members from a photograph. For example, computing device 110 may use facial recognition when the user selects a photo. Computing device 110 may detect the faces in the photo, create graphical objects, and associate each face with a member profile. Once the faces are selected, other identifying information may be entered. In various embodiments, when the member corresponds to an individual in an address book or social network, the member fields may be populated for the user. The user may also select a graphical object to correspond to each member of the group. Group members may be created by the user or from pre-existing groups on event server 120 in FIG. 1. A pre-existing group, for example, may correspond to an e-mail listserv.
As shown in the accompanying figures, the event context menu may present a set of top-level graphical objects, each corresponding to a category of events.
Continuing with this example, a user may select a top-level graphical object to display a sub-menu of events associated with that category.
Each sub-menu may contain a plurality of second graphical objects 313 associated with its parent menu. For example, under the Exercise parent folder, the sub-menu may contain and display graphical objects associated with Cardio, Bike, Hike, Run, Lift, Swim, Walk, Weigh In, and Yoga events. The plurality of second graphical objects 313 may not all be displayed initially; however, a user may page through to display the second graphical objects 313 not yet displayed.
Once a user has made a selection of an event, a new context menu may be displayed to the user. In one embodiment, the user may again change the selection of participating members. For example, the user may add or remove members of the group before confirming the scheduled event.
The selected graphical object may replace the top-level menu graphical object. Each time the user selects an event the selected event may be tracked. The tracked selections may be used for creating favorite events or targeted advertisements. In some embodiments, advertisements related to the event selection by the user or the profile(s) of members in the group may be presented to the user.
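Selection tracking of this kind can be as simple as a counter keyed by event name, from which favorites may be derived; a minimal sketch with hypothetical names:

```python
from collections import Counter

selection_counts = Counter()

def track_selection(event_name):
    """Record each confirmed event selection."""
    selection_counts[event_name] += 1

def favorite_events(n=5):
    """Return the user's n most frequently selected events."""
    return [name for name, _ in selection_counts.most_common(n)]
```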
The confirmed event may be displayed to the user as a single graphical object. For example, a Pizza graphical object may be displayed under the daughter's column to reflect the confirmed event.
A schedule request may be created that includes a set of details for the event. The schedule request may be, for example, an HTTP (hypertext transfer protocol) request that includes the set of details for the event. The set of details may include information such as the date, time, and location of the event. The schedule request may also include an identifier of the event and an identifier of the event creator. In order to create the schedule request, the event details are parsed using parsing methods known in the art. The schedule request including the set of details may be sent to a server (e.g., server system 120) that stores or has access to the invitee's calendar and calendar data, as well as the calendar and calendar data for the event creator. The server system may send the schedule request to the selected group members. For example, Mom schedules dinner with Dad; Dad then receives a schedule invite including the event details from the server system. The server system may store, for example in a database, recurring appointments or scheduled events.
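As a sketch of such a schedule request using only the Python standard library, where the endpoint, field names, and payload shape are hypothetical rather than a disclosed format:

```python
import json
import urllib.request

def send_schedule_request(server_url, details):
    """POST the parsed set of event details to the event server."""
    body = json.dumps({
        "event_id": details["event_id"],
        "creator_id": details["creator_id"],
        "date": details["date"],
        "time": details["time"],
        "location": details["location"],
        "participants": details["participants"],   # e.g., ["Mom", "Dad"]
    }).encode("utf-8")
    request = urllib.request.Request(
        server_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```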
As shown in FIG. 4, exemplary process 400 may be used to manipulate a timeline. Process 400 may begin by displaying a plurality of content areas on a multi-touch display, for example, graphical objects corresponding to the days of the week, and receiving an indication of a pinch or spread gesture via the multi-touch display.
In one embodiment, process 400 may determine whether the gesture is towards the starting location (step 405). One way to determine that the gesture is a pinch gesture is, for example, to determine whether at least one contact has a directional vector toward the starting point associated with the contact. Another way to determine that the gesture is a pinch gesture is to determine that the area covered on the display decreases. In a further embodiment, the gesture may be associated with a pinch where it is determined that the amount of area between a plurality of first locations is less than the original area between the first locations. Conversely, determining whether a gesture is associated with a spread/anti-pinch gesture may encompass the opposite of the described methods for determining whether the gesture is associated with a pinch. For example, a user could place two fingers on the display and move the fingers away from one another; the area between the fingers' starting and ending locations may be greater than the starting area between the fingers. Where it is determined that the gesture is associated with a pinch, the viewable range may be decreased (step 407). Where it is determined that the gesture is associated with a spread/anti-pinch gesture, the viewable range may be increased (step 409). A pinch gesture may be used to condense a displayed object on the display, whereas a spread/anti-pinch gesture may be used to expand a displayed object for display. Pinch and spread/anti-pinch gestures may also semantically zoom through different levels of graphical objects not yet displayed. For example, continuing with the days of the week example above, where a user performs a spread gesture on the graphical object corresponding to Monday, the graphical object may be updated to display the hours of the day associated with Monday. In one embodiment, the display automatically displays the closest hour to the current time. In another embodiment, the start time displayed may be set as a default. For example, when viewing by hours within the day, the timeline always begins at 8 a.m.
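For a two-contact gesture, the pinch/spread test described above amounts to comparing the separation of the contacts at their starting and ending locations; a minimal sketch with hypothetical names:

```python
import math

def classify_pinch_or_spread(starts, ends):
    """Classify a two-contact gesture by comparing the distance between
    the contacts before and after the motion."""
    d_start = math.dist(starts[0], starts[1])
    d_end = math.dist(ends[0], ends[1])
    if d_end < d_start:
        return "pinch"    # contacts moved toward one another: condense view
    if d_end > d_start:
        return "spread"   # contacts moved apart: expand view
    return "none"

# Two fingers moving apart along the x-axis classify as a spread:
assert classify_pinch_or_spread([(10, 50), (20, 50)],
                                [(5, 50), (40, 50)]) == "spread"
```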
Process 400 may update the content area to depict the time to display, corresponding to a second set of graphical objects (step 411). In one embodiment, the updated content area is displayed simultaneously with the gesture input. For example, continuing with the days of the week example, where the user performs a spread gesture on Monday, the display may be updated to display the hours of the day for Monday. The display may show the events scheduled for Monday. If the user continues to perform a spread gesture, the display may continue to update and replace the first set of graphical objects with a second set of graphical objects not yet displayed. The user may continually zoom the timeline, for example, from a day view to a 15-minute interval view of a single day.
As shown in the accompanying drawings, at least one graphical object corresponding to a scheduled event may be displayed in at least one color, and the at least one color may be updated progressively as the current time approaches the start time of the scheduled event.
In some embodiments, the progressive update of the at least one graphical object may result from brightening the graphical objects, changing the transparency of the graphical object, or changing the overlaid shadow. In additional embodiments, the progressive update may include updating the graphical object by changing the color of the graphical objects as the start time approaches.
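One way to realize such a progressive update is linear interpolation between two endpoint colors over the interval from scheduling to start time; the RGB endpoints and names below are arbitrary illustrations, not disclosed values.

```python
def progressive_color(now, scheduled_at, start_time,
                      far=(80, 80, 200), near=(220, 60, 60)):
    """Interpolate the object's RGB color from `far` toward `near` as the
    current time moves from the moment of scheduling to the start time."""
    if start_time <= scheduled_at:
        return near
    t = (now - scheduled_at) / (start_time - scheduled_at)
    t = min(max(t, 0.0), 1.0)   # clamp outside the interval
    return tuple(round(f + t * (n - f)) for f, n in zip(far, near))

# Halfway to the start time, the color is midway between the endpoints:
assert progressive_color(50, 0, 100) == (150, 70, 130)
```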
As shown in FIG. 11, computing device 1110 may include one or more processors 1140, one or more memory devices, and one or more interfaces 1180.
Processor(s) 1140 may include one or more known processing devices, such as a microprocessor from the Pentium™ or Xeon™ family manufactured by Intel™, the Turion™ family manufactured by AMD™, or any of various processors manufactured by Sun Microsystems. The disclosed embodiments are not limited to any type of processor(s) configured in computing device 1110.
Interfaces 1180 may be one or more devices configured to allow data to be received and/or transmitted by computing device 1110. Interfaces 1180 may include one or more digital and/or analog communication devices that allow computing device 1110 to communicate with other machines and devices.
The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments. For example, systems and methods consistent with the disclosed embodiments may be implemented as a combination of hardware and software or in hardware alone. Examples of hardware include computing or processing systems, including personal computers, laptops, mainframes, microprocessors, and the like. Additionally, although aspects are described as being stored in memory, one skilled in the art will appreciate that these aspects can also be stored on other types of computer-readable media, such as secondary storage devices, for example, hard disks, floppy disks, or CD-ROM, or other forms of RAM or ROM.
Programmable instructions, including computer programs, based on the written description and disclosed embodiments are within the skill of an experienced developer. The various programs or program modules may be created using any of the techniques known to one skilled in the art or may be designed in connection with existing software. For example, program sections or program modules may be designed in or by means of C#, Java, C++, HTML, XML, CSS, JavaScript, or HTML with included Java applets. One or more of such software sections or modules may be integrated into a computer system or browser software or application.
In some embodiments disclosed herein, some, none, or all of the logic for the above-described techniques may be implemented as a computer program or application or as a plug-in module or subcomponent of another application. The described techniques may be varied and are not limited to the examples or descriptions provided. In some embodiments, applications may be developed for download to mobile communications and computing devices (e.g., laptops, mobile computers, tablet computers, smart phones, etc.) and made available for download by the user either directly from the device or through a website.
The claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification, which examples are to be construed as non-exclusive. Further, the steps of the disclosed methods may be modified in any manner, including by reordering steps and/or inserting or deleting steps.
It is intended, therefore, that the specification and examples be considered as exemplary only. Additional embodiments are within the purview of the present disclosure and sample claims.