The disclosed embodiments relate generally to presenting data to a computer operator of a system through components that enable interaction by means of nonverbal representations and symbols. More particularly, the preferred embodiments relate to a client component that has a graphical user interface that allows the user to interact with the client component through gestures, both to draw new shapes/objects and to issue control commands in the same editing mode.
Flow chart diagramming tools, such as Microsoft Visio, exist in the industry that allow users to create flow and process diagrams by dragging and dropping various components from a list of components onto a work grid and creating any appropriate connections between the components.
However, this form of interaction and diagram creation is not ideal on touch based devices. Society is becoming increasingly mobile and touch based systems such as smartphones and tablets are becoming more pervasive among users. As portable electronic devices become more common in our society, there is an increased demand to allow users to leverage such devices and systems to create, modify, interact with, transmit and receive content.
While touch based systems that allow diagramming by users do exist in the current market, they do not integrate control operations into the same input flow as drawing the shapes/components. For example, to delete an existing object, some applications may require the user to select the object and then click on a delete button with the mouse, or alternatively to hit the delete key. However, on a tablet device, a physical keyboard may not be available and it may be inconvenient to move away from the work grid and possibly scroll through menus to find the delete button.
One of the goals of this document is to alleviate such inconveniences by introducing gesture based draw and control input operations that can co-exist in the same work flow. With this approach, gestures that are considered to be control gestures are drawn on the same draw grid that contains the shapes and components that are drawn with draw gestures.
The present invention improves upon the existing touch based systems by introducing new mechanisms of interaction with the user that can improve the productivity of users by supporting draw shape gestures and control gesture operations, both drawn on the same draw grid. It is important to note that a control gesture operation, as the term is used in this document, does not include clicking or selecting a button that is outside the draw grid to achieve a particular control objective. Moreover, as described in the detailed description, manipulating a control widget would also not be considered to be a control gesture operation.
In one embodiment of the present invention there is provided a primary client system that enables gesture based interaction with a primary user, the primary client system comprising 1) a primary draw grid that enables interaction between the primary client system and the primary user by means of input gesture operations and 2) a primary pattern recognition component that interprets the input gestures. Here, the input gesture operations include one or more draw shape gestures and one or more control gesture operations that are drawn within the boundaries of the primary draw grid.
In another embodiment of the present invention there is provided a method that enables gesture based interaction between a primary user and a primary client system that includes a primary draw grid. The method comprises 1) the primary client system receiving from the primary user two or more input gesture operations and 2) a pattern recognition component of the primary client system determining two or more recognized gesture commands, by determining, for each of the two or more input gesture operations, one or more recognized gesture commands. Here, each input gesture operation is entered by the primary user through the primary draw grid and the two or more input gesture operations include one or more draw shape gestures and one or more control gesture operations drawn within the boundaries of the primary draw grid. Also, the two or more recognized gesture commands include one or more draw shape commands and one or more recognized control commands.
In another embodiment of the present invention there is provided a computer program product for use on a primary client system that enables gesture based interaction with a primary user. The computer program product comprises a non-transitory recording medium and instructions recorded on the non-transitory recording medium for instructing the primary client system to receive from the primary user two or more input gesture operations and to determine two or more recognized gesture commands, by determining, for each of the two or more input gesture operations one or more recognized gesture commands. Here, each input gesture operation is entered by the primary user through a primary draw grid of the primary client system and the two or more input gesture operations include one or more draw shape gestures and one or more control gesture operations. Also, the two or more recognized gesture commands include one or more draw shape commands and one or more recognized control commands.
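By way of illustration only, the following sketch shows one way the described components could be organized in code. The class and method names (PrimaryClientSystem, PatternRecognitionComponent, handle_gesture, and so on) are hypothetical and are not part of the described embodiments; the sketch merely shows draw shape gestures and control gesture operations arriving through the same draw grid and passing through the same recognition step.

```python
# Illustrative sketch only; the names and structure are assumptions, not the claimed design.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class GestureStroke:
    points: List[Point]                     # sampled pointer positions on the primary draw grid

@dataclass
class RecognizedCommand:
    kind: str                               # "draw_shape" or "control"
    name: str                               # e.g. "rectangle", "copy", "delete"

class PatternRecognitionComponent:
    def recognize(self, strokes: List[GestureStroke]) -> List[RecognizedCommand]:
        """Map one input gesture operation to one or more recognized gesture commands."""
        # A real recognizer would classify the stroke geometry; only the interface is shown here.
        raise NotImplementedError

class PrimaryClientSystem:
    def __init__(self, recognizer: PatternRecognitionComponent):
        self.recognizer = recognizer
        self.draw_grid_objects: list = []   # shapes and connectors currently on the draw grid

    def handle_gesture(self, strokes: List[GestureStroke]) -> List[RecognizedCommand]:
        # Draw shape gestures and control gesture operations arrive through the same
        # draw grid and pass through the same recognition step.
        return self.recognizer.recognize(strokes)
```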
Embodiments of the present invention will be described more fully with reference to the accompanying drawings.
In another preferred embodiment, the primary client system may be a portable electronic device that creates a virtual view for the user, the primary graphical user interface may be a virtual area created by the primary client system for the purposes of interaction with the user, the primary draw grid may be a sub-area or volume within the primary graphical user interface, and the pointer used by the user to “draw” on the draw grid may be a wearable device containing an accelerometer and/or other location sensitive components to ascertain the position of the pointer relative to the draw grid.
The primary user may interact with the primary client system by drawing input gesture operations on the primary draw grid, wherein the input gesture operations include one or more draw shape gestures and one or more control gesture operations.
A control gesture operation can involve multiple gesture strokes, which, in combination, comprise the complete control gesture operation.
The primary pattern recognition component (PPRC) may utilize one or more processors of the primary client system to perform its functions.
It is important to note that a control gesture operation, as the term is used in this document, does not include clicking or selecting a button that is outside the draw grid to achieve a particular control objective. For example, after selecting some pre-existing shapes, clicking a delete button outside the draw grid (but still in the primary graphical user interface) would not be considered to be a control gesture operation. Moreover, manipulating a control widget would also not be considered to be a control gesture operation. For example, if a pre-existing shape object is selected, it is possible for a draw system to “pop-up” a re-size or move widget that can be further manipulated by the user to achieve a control objective. This approach would also not constitute a control gesture operation in the context of this discussion, as it would involve the manipulation of a control widget. To summarize, in the context of the embodiments described in this document, a control gesture operation is drawn on the draw grid itself to manipulate selected or unselected pre-existing shapes/objects to achieve a particular control objective (e.g. delete, move, change connector type, change connector end type, morph the selected objects, etc.).
The process of copying a collection of objects on a draw grid to another area of the draw grid may comprise the steps of (a) recognizing the outline drawn on the draw grid by the user, where the outline may or may not be a completely closed curve, (b) identifying which existing objects on the draw grid fall within the boundary of the outline drawn in the previous step, (c) recognizing that the user draws an approximate “+” symbol, multi-touch or otherwise, on the draw grid subsequent to the selection of existing objects in the previous step, and (d) duplicating the objects selected in step (b) at the location on the draw grid where the user drew the approximate “+” symbol. In this embodiment of the invention, in step (a), where the outline drawn by the user is not in closed form, for the purposes of identifying the “selected” objects in step (b), the curve may be completed virtually, with or without the completion shown on the draw grid. Alternatively, instead of selecting the shapes to be copied through the outline that was described in steps (a) and (b) above, the user may select the shapes one at a time before proceeding to step (c).
An embodiment of the primary client system that allows the user to copy a collection of objects may comprise one or more processors for performing the steps of: (a) recognizing the outline drawn on the draw grid by the user, where the outline may or may not be a completely closed curve, (b) identifying which existing objects on the draw grid fall within the boundary of the outline drawn in the previous step, (c) recognizing that the user draws an approximate “+” symbol, multi-touch or otherwise, on the draw grid subsequent to the selection of existing objects in the previous step, and (d) duplicating the objects selected in step (b) at the location on the draw grid where the user drew the approximate “+” symbol. In this embodiment of the invention, in step (a), where the outline drawn by the user is not in closed form, for the purposes of identifying the “selected” objects in step (b), the curve may be completed virtually, with or without the completion shown on the draw grid.
Alternatively, the processor of the system may perform the steps of: (a) recognizing the shape object selected by the user using the pointer, (b) recognizing that the user draws an approximate “+” symbol, multi-touch or otherwise, on the draw grid subsequent to the selection of the existing object in the previous step, and (c) duplicating the object selected in step (a) at the location on the draw grid where the user drew the approximate “+” symbol.
Alternatively, as can be seen in steps 303-304, the user can select multiple objects by drawing an outline around existing objects on the draw grid and the system will recognize the objects that fall within the boundary. If the outline is not closed, there may be an additional step prior to the recognition of the enclosed objects, where the outline is “virtually” completed to achieve a closed boundary. Subsequently in 304, the user draws an approximate “+” symbol, multi-touch or otherwise, on the drawing surface and the primary pattern recognition component recognizes the gesture as an attempt to copy. In 305, the primary client system creates a new copy of the selected objects where the approximate “+” symbol was drawn.
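As a non-limiting illustration of the copy operation described above, the following sketch shows one way the outline selection, the virtual closing of an open outline, and the duplication at the “+” location could be implemented. The helper names, the use of each object's center point for the inside-outline test, and the duplicated_at helper are assumptions made for this example only.

```python
# Hypothetical sketch of the copy operation; helper names, the use of each object's
# center point, and the duplicated_at helper are assumptions for illustration only.
from typing import List, Tuple

Point = Tuple[float, float]

def close_outline(outline: List[Point]) -> List[Point]:
    """Virtually complete an outline that the user did not fully close."""
    if outline and outline[0] != outline[-1]:
        return outline + [outline[0]]
    return outline

def point_in_outline(p: Point, polygon: List[Point]) -> bool:
    """Ray-casting test: does the point lie inside the closed outline?"""
    x, y = p
    inside = False
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:]):
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def select_objects_in_outline(outline: List[Point], objects: list) -> list:
    """Identify existing objects whose centers fall within the (possibly virtually closed) outline."""
    boundary = close_outline(outline)
    return [obj for obj in objects if point_in_outline(obj.center, boundary)]

def copy_at_plus_symbol(selected: list, plus_location: Point, draw_grid_objects: list) -> None:
    """Duplicate the selected objects at the location where the approximate '+' was drawn."""
    for obj in selected:
        draw_grid_objects.append(obj.duplicated_at(plus_location))  # assumed duplication helper
```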
The process of deleting an object shown on the draw grid may comprise the steps of: (a) selecting an object on the draw grid by the user, (b) recognizing that the user draws an approximate “x” symbol, multi-touch or otherwise, on the draw grid subsequent to the selection of the object in the previous step, and (c) deleting the selected object and any associated connectors from the draw grid. Alternatively, the user may delete an object by drawing an approximate “x” symbol, multi-touch or otherwise, on an object that is not selected and the primary client system will recognize the request and delete the object in question that has the most overlap with the approximate “x” symbol. With this alternative method, part of the approximate “x” symbol may be on the outside of the shape to be deleted.
An embodiment of the primary client system that allows the user to delete an object may comprise one or more processors for performing the steps of: (a) recognizing the shape object selected by the user using the pointer, (b) recognizing that the user draws an approximate “x” symbol, multi-touch or otherwise, on the draw grid subsequent to the selection of the object in the previous step, and (c) deleting the selected object and any associated connectors from the draw grid. Alternatively, the user may delete an object by drawing an approximate “x” symbol, multi-touch or otherwise, on an object that is not selected and the system will recognize the request and delete the object in question that has the most overlap with the approximate “x” symbol. With this alternative method, part of the approximate “x” symbol may be on the outside of the shape to be deleted.
Alternatively, as can be seen in steps 403-404, the user can select multiple objects by drawing an outline around existing objects on the draw grid and the system will recognize the shapes that fall within the boundary. If the outline is not closed, there may be an additional step prior to the recognition of the enclosed objects, where the outline is “virtually” completed to achieve a closed boundary. Subsequently in 404, the user draws an approximate “x” symbol, multi-touch or otherwise, on the drawing surface and the primary client system recognizes the gesture as an attempt to delete object(s). In 405, the primary client system deletes the objects that were previously selected.
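As a non-limiting illustration of the delete operation described above, the following sketch shows one way the "most overlap" rule and the removal of associated connectors could be implemented. The bounding-box overlap metric and the object and connector fields (bounds, source, target) are assumptions made for this example only.

```python
# Hypothetical sketch of the delete operation; the bounding-box overlap metric and the
# object and connector fields (bounds, source, target) are assumptions for illustration only.
from typing import Tuple

Rect = Tuple[float, float, float, float]    # (x_min, y_min, x_max, y_max)

def overlap_area(a: Rect, b: Rect) -> float:
    """Area of intersection of two axis-aligned bounding boxes."""
    width = min(a[2], b[2]) - max(a[0], b[0])
    height = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, width) * max(0.0, height)

def delete_by_x_gesture(x_symbol_bounds: Rect, objects: list, connectors: list) -> None:
    """Delete the object that overlaps the approximate 'x' the most, plus its connectors."""
    if not objects:
        return
    target = max(objects, key=lambda obj: overlap_area(obj.bounds, x_symbol_bounds))
    if overlap_area(target.bounds, x_symbol_bounds) == 0.0:
        return                              # the 'x' gesture did not touch any object
    objects.remove(target)
    # Remove any connectors attached to the deleted object as well.
    connectors[:] = [c for c in connectors if target not in (c.source, c.target)]
```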
The process of changing the line type of a connector between two objects to a dashed line from a solid line or to a solid line from a dashed line may comprise the steps of: (a) the user drawing two approximately parallel lines in close proximity to each other that are both relatively perpendicular to the connector line at the point of intersection and (b) changing the line type to solid if the current line type is dashed or changing the line type to dashed if the current line type is solid. In a preferred embodiment, the two lines will be considered to be approximately parallel if the angles the two lines make with a base axis are within 20 degrees of each other. Moreover, the two lines will be considered to be relatively perpendicular to the connector line if each of the two lines is between 70 and 110 degrees (20 degrees from perpendicular) from the connector line or curve at the point of intersection.
An embodiment of the primary client system that allows the user to change the line type of a connector between two objects to a dashed line from a solid line or to a solid line from a dashed line may comprise one or more processors for performing the steps of: (a) recognizing that the user has drawn two approximately parallel lines that are in close proximity to each other that are both relatively perpendicular to the connector line or curve at the point of intersection and (b) changing the line type to solid if the current line type is dashed or changing the line type to dashed if the current line type is solid.
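As a non-limiting illustration, the following sketch applies the 20-degree tolerances described above to decide whether two strokes toggle a connector's line type. The stroke representation and the line_type attribute are assumptions made for this example only.

```python
# Hypothetical sketch of the line-type toggle; the 20-degree tolerances follow the text above,
# while the stroke representation and the line_type attribute are assumptions for illustration only.
import math
from typing import Tuple

Point = Tuple[float, float]

def line_angle_deg(p1: Point, p2: Point) -> float:
    """Angle of a stroke relative to a horizontal base axis, folded into [0, 180)."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0])) % 180.0

def angle_difference_deg(a: float, b: float) -> float:
    """Smallest difference between two line angles, accounting for 180-degree wraparound."""
    d = abs(a - b) % 180.0
    return min(d, 180.0 - d)

def is_line_type_toggle(stroke1: Tuple[Point, Point], stroke2: Tuple[Point, Point],
                        connector_angle_at_intersection: float) -> bool:
    a1, a2 = line_angle_deg(*stroke1), line_angle_deg(*stroke2)
    # Approximately parallel: the two strokes are within 20 degrees of each other.
    parallel = angle_difference_deg(a1, a2) <= 20.0
    # Relatively perpendicular: each stroke is 70 to 110 degrees from the connector at the intersection.
    perpendicular = all(70.0 <= abs(a - connector_angle_at_intersection) % 180.0 <= 110.0
                        for a in (a1, a2))
    return parallel and perpendicular

def toggle_line_type(connector) -> None:
    """Dashed becomes solid and solid becomes dashed."""
    connector.line_type = "solid" if connector.line_type == "dashed" else "dashed"
```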
The process of changing the termination type of an existing connector object may comprise the steps of: (a) receiving from the primary user an arrow gesture that overlaps the existing connector object, anywhere on the length of the existing connector, (b) identifying a termination end of the existing connector that corresponds to the direction pointed to by the arrow gesture, wherein the termination end is one of the two endpoints of the existing connector, and (c) changing the termination type of the termination end to the next termination type on a termination type list. In a preferred embodiment, the arrow gesture will take the form of an approximate “>” or “<” gesture and the termination type list will include a list of the permitted termination types.
An embodiment of the primary client system that allows the user to change the termination type of an existing connector object may comprise one or more processors for performing the steps of: (a) receiving from the primary user an arrow gesture that overlaps the existing connector object, anywhere on the length of the existing connector, (b) identifying a termination end of the existing connector that corresponds to the direction pointed to by the arrow gesture, wherein the termination end is one of the two endpoints of the existing connector, and (c) changing the termination type of the termination end to the next termination type on a termination type list.
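As a non-limiting illustration, the following sketch shows one way the pointed-to termination end could be identified and advanced to the next entry on a termination type list. The example list contents, the dot-product direction test, and the connector attribute names are assumptions made for this example only.

```python
# Hypothetical sketch of the termination-type change; the example termination type list,
# the dot-product direction test, and the connector attribute names are assumptions only.
from typing import List, Tuple

Point = Tuple[float, float]

TERMINATION_TYPES: List[str] = ["none", "open_arrow", "filled_arrow", "diamond"]  # example list

def pointed_endpoint(arrow_direction: Point, connector_start: Point, connector_end: Point) -> str:
    """Pick the connector endpoint that lies in the direction the '>' or '<' gesture points."""
    dx, dy = arrow_direction
    ex, ey = connector_end[0] - connector_start[0], connector_end[1] - connector_start[1]
    # If the gesture direction roughly agrees with the start-to-end vector, the end terminal is meant.
    return "end" if (dx * ex + dy * ey) > 0 else "start"

def cycle_termination_type(connector, which_end: str) -> None:
    """Advance the chosen end to the next termination type on the list."""
    attribute = which_end + "_termination"              # assumed attribute naming
    current = getattr(connector, attribute)
    setattr(connector, attribute,
            TERMINATION_TYPES[(TERMINATION_TYPES.index(current) + 1) % len(TERMINATION_TYPES)])
```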
The process of creating a new sequence diagram may comprise the steps of: (a) the user drawing an object on the draw grid that is appropriately converted into a standard shape, such as a rectangle, circle or ellipse, (b) the user drawing a relatively horizontal or vertical line with a starting point inside the object drawn in the previous step that extends to outside the shape, (c) the touch based system inquiring from the user whether a sequence diagram is desired, and (d) if the user answers in the affirmative to the inquiry in the previous step, the touch based system converting the object with a line originating from it into one of the columns or rows of a standard sequence diagram.
An embodiment of the primary client system that allows the user to create a new sequence diagram may comprise one or more processors for performing the steps of: (a) recognizing that the user has drawn an object on the draw grid that is interpreted as a standard shape, such as a rectangle, circle or ellipse, (b) recognizing that the user has drawn a relatively horizontal or vertical line with a starting point inside the object drawn in the previous step that extends to outside the shape, (c) displaying a query to the user inquiring whether a sequence diagram is desired, (d) receiving the input from the user to the inquiry from the previous step, and (e) if the user answers in the affirmative to the inquiry in the previous step, converting the object with a line originating from it into one of the columns or rows of a standard sequence diagram.
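As a non-limiting illustration of steps (a)-(b) above, the following sketch tests whether a relatively horizontal or vertical line starts inside a standard shape and extends outside it, which would trigger the sequence diagram inquiry. The 20-degree tolerance and the helper names are assumptions made for this example only.

```python
# Hypothetical sketch of the sequence-diagram trigger; the 20-degree tolerance and helper
# names are assumptions for illustration only.
import math
from typing import Tuple

Point = Tuple[float, float]
Rect = Tuple[float, float, float, float]    # (x_min, y_min, x_max, y_max)

def inside(p: Point, r: Rect) -> bool:
    return r[0] <= p[0] <= r[2] and r[1] <= p[1] <= r[3]

def is_roughly_axis_aligned(p1: Point, p2: Point, tolerance_deg: float = 20.0) -> bool:
    """Relatively horizontal or vertical: within a tolerance of 0 or 90 degrees."""
    angle = math.degrees(math.atan2(abs(p2[1] - p1[1]), abs(p2[0] - p1[0])))
    return angle <= tolerance_deg or angle >= 90.0 - tolerance_deg

def suggests_sequence_diagram(shape_bounds: Rect, line_start: Point, line_end: Point) -> bool:
    """A relatively horizontal or vertical line that starts inside the shape and extends outside it."""
    return (inside(line_start, shape_bounds)
            and not inside(line_end, shape_bounds)
            and is_roughly_axis_aligned(line_start, line_end))

# On a positive result, the system would ask the user whether a sequence diagram is desired
# and, on an affirmative answer, convert the shape and line into a sequence diagram column or row.
```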
In a preferred embodiment of the invention, the primary user may use the primary client system for the purposes of drawing and designing flow chart diagrams, use case diagrams, mind maps, and other relational diagrams.
This application claims priority to U.S. Provisional Application No. 61/761,664, filed on Feb. 6, 2013.