1. Field of the Invention
The invention relates generally to computer operating environments, and more particularly to a method for performing operations in a computer operating environment.
2. Description of Related Art
A newly introduced computer operating arrangement known as Blackspace™ has been created to enable computer users to direct a computer to perform according to graphic inputs made by a computer user. One aspect of Blackspace is generally described as a method for creating user-defined computer operations that involve drawing an arrow in response to user input and associating at least one graphic to the arrow to designate a transaction for the arrow. The transaction is designated for the arrow after analyzing the graphic object and the arrow to determine if the transaction is valid for the arrow. The following patents describe this system generally: U.S. Pat. No. 6,883,145, issued Apr. 19, 2005, titled Arrow Logic System for Creating and Operating Control Systems; U.S. Pat. No. 7,240,300, issued Jul. 3, 2007, titled Method for Creating User-Defined Computer Operations Using Arrows. These patents are incorporated herein by reference in their entireties. The present invention comprises improvements and applications of these system concepts.
The present invention generally comprises a computer control environment that builds on the Blackspace™ software system to provide further functionality and flexibility in directing a computer. It employs graphic inputs drawn by a user and known as gestures to replace the pop-up and pull-down menus known in the prior art.
The present invention generally comprises various embodiments of the Gestures computer control environment that permit a user to have increased efficiency for operating a computer. The description of these embodiments utilizes the Blackspace environment for purposes of example and illustration only. These embodiments are not limited to the Blackspace environment. Indeed these embodiments have application to the operation of virtually any computer and computer environment and any software that is used to operate, control, direct, or cause actions, functions, operations or the like, including desktops, web pages, software applications, and the like.
Key areas of focus include:
1) Removing the need for text in menus, represented in Blackspace as IVDACCs. IVDACC is an acronym for “Information VDACC,” and VDACC is an acronym for “Virtual Display and Control Canvas.”
2) Removing the need for menus altogether.
Regarding word processing: A VDACC is an object found in Blackspace. As an object it can be used to manage other objects on one or more canvases. A VDACC also has properties which enable it to display margins for text. In other software applications, dedicated word processing windows are used for text. Many of the embodiments found herein can apply to both VDACC type word processing and windows type word processing. Subsequent sections in this provisional application include embodiments that permit users to program computers via graphical means, verbal means, drag and drop means, and gesture means.
There are two considerations regarding menus: (1) Removing the need for language in menus, and (2) removing the need for menu entries entirely. Regarding VDACCs and IVDACCs, see “Intuitive Graphic User Interface with Universal Tools,” Pub. No.: US 2005/0034083, Pub. Date: Feb. 10, 2005, incorporated herein by reference.
This invention includes various embodiments that fall into both categories. The result of the designs described below is to greatly reduce the number of menu entries and menus required to operate a computer and at the same time to increase the speed and efficiency of its operation. The operations, functions, applications, methods, actions and the like described herein apply to all software and to all computer environments. Blackspace is used as an example only. The embodiments described herein employ the following: drawing input, verbal (vocal) input, new uses of graphics, all picture types (including GIF animations), video, gestures, 3-D and user-defined recognized objects.
As illustrated in
The processing device 708 of the computer system 700 includes a disk drive 710, memory 712, a processor 714, an input interface 716, an audio interface 718 and a video driver 720. The processing device 708 further includes a Blackspace Operating System (OS) 722, which includes an arrow logic module 724. The Blackspace OS provides the computer operating environment in which arrow logics are used. The arrow logic module 724 performs operations associated with arrow logic as described herein. In an embodiment, the arrow logic module 724 is implemented as software. However, the arrow logic module 724 may be implemented in any combination of hardware, firmware and/or software.
The disk drive 710, the memory 712, the processor 714, the input interface 716, the audio interface 718 and the video driver 720 are components that are commonly found in personal computers. The disk drive 710 provides a means to input data and to install programs into the system 700 from an external computer readable storage medium. As an example, the disk drive 710 may be a CD drive to read data contained therein. The memory 712 is a storage medium to store various data utilized by the computer system 700. The memory may be a hard disk drive, read-only memory (ROM) or other forms of memory. The processor 714 may be any type of digital signal processor that can run the Blackspace OS 722, including the arrow logic module 724. The input interface 716 provides an interface between the processor 714 and the input device 702. The audio interface 718 provides an interface between the processor 714 and the microphone 704 so that a user can input audio or vocal commands. The video driver 720 drives the display device 706. In order to simplify the figure, additional components that are commonly found in a processing device of a personal computer system are not shown or described.
One solution is to rescale the picture's top edge just enough so the line of text above the picture does not wrap. A far better solution would be for the software to accomplish this automatically. One way to do this is for the software to analyze the vertical space above and below any object wrapped in text. If a space, like what is shown above, is produced, namely, the object just barely impinges the lower edge of a line of text, then the software would automatically adjust the vertical height of the object to a position that does not cause the line of text to wrap around the object. A user-adjustable maximum distance could be used to determine when the software would engage this function. For instance if a picture (wrapped in a text object) impinges the line of text above it by less than 15%, this software feature would be automatically engaged. The height of the picture would be reduced and the line of text directly above the picture would no longer wrap around the picture.
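As a rough sketch of this rule (the names and coordinate convention are illustrative, assuming screen coordinates where y grows downward and the 15% example threshold above):

```python
def auto_adjust_object_top(obj_top, line_top, line_bottom, max_impinge=0.15):
    """If the object's top edge pokes into the text line above it by
    less than max_impinge of that line's height, move the top edge
    down so the line no longer wraps around the object.

    Returns the (possibly unchanged) y coordinate of the object's top.
    """
    line_height = line_bottom - line_top
    overlap = line_bottom - obj_top  # how far the object impinges the line
    if 0 < overlap < max_impinge * line_height:
        return line_bottom  # just below the line: the wrap is released
    return obj_top  # no overlap, or a large one the user presumably wants
```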
(1) Drawing a vertical line (preferably drawn as a perfectly straight line—but the software should be able to interpret a hand drawn line that is reasonably straight—like what you would draw to create a fader).
(2) Having the drawn line intersect text that is wrapped around at least one object or having the drawn line be within a certain number of pixels from such an object. Note: (3) below is optional.
(3) Having the line be of a certain color. This may not be necessary. It could be determined that any color line drawn in the above two described contexts will comprise a reliably recognizable context. The benefit of using a specific color (i.e., one of the 34 Onscreen Inkwell colors) is that it would distinguish a “border distance” line from a purely graphical line drawn for some other purpose alongside a picture wrapped in text.
Once the line is drawn and an upclick is performed, the software will recognize the line as a programming tool, and the text that is wrapped on the side of the picture where the line was drawn will move its wrap to the location marked by the line. As an alternative, a user action could be required, for example, dragging the line at least one pixel or double-clicking on the line to enable the text to be rewrapped by the software.
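A minimal recognizer for these contexts might look like the sketch below; every name and threshold is illustrative rather than an actual Blackspace interface:

```python
import math

def is_border_distance_line(p1, p2, object_rect, line_color,
                            max_tilt_deg=5.0, max_gap_px=30,
                            required_color=None):
    """Test contexts (1)-(3) above for a drawn line from p1 to p2:
    reasonably vertical, near or intersecting the object wrapped in
    text (object_rect = (left, top, right, bottom)), and optionally
    of a specific inkwell color."""
    (x1, y1), (x2, y2) = p1, p2
    tilt = math.degrees(math.atan2(abs(x2 - x1), abs(y2 - y1)))
    if tilt > max_tilt_deg:                                    # (1)
        return False
    left, _, right, _ = object_rect
    if not (left - max_gap_px <= x1 <= right + max_gap_px):    # (2)
        return False
    if required_color is not None and line_color != required_color:
        return False                                           # (3)
    return True
```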
In the example of
To fix this problem the software automatically (or by user input) rescales these words by elongating each individual character and increasing the space between the characters (the kerning). One benefit of this solution is that the increase in kerning is not done according to a set percentage. Instead it is done according to the individual widths of the characters. So the rescaling of the spaces between these characters can be non-linear. In addition, the software maintains the same weight of the text such that it matches the text around it. When text is rescaled wider, it usually increases in weight (the line thickness of the text increases). This makes the text appear bulkier and it no longer matches the text around it. This is taken into account by the software when it rescales text, and as part of the rescaling process the line thickness of the rescaled text remains the same as the original text in the rest of the text object.
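The kerning half of this behavior can be sketched briefly. The helper below (a hypothetical name, not a shipped API) spreads the needed extra width across the inter-character gaps in proportion to the widths of the neighboring characters, which is what makes the rescaling non-linear; elongating the glyphs while holding their stroke weight constant is left to the text renderer.

```python
def non_linear_kerning(char_widths, target_width):
    """Return the extra space to add after each character (one entry
    per gap) so the line reaches target_width, weighting wide glyph
    pairs more heavily than narrow ones."""
    current = sum(char_widths)
    extra = target_width - current
    if extra <= 0 or len(char_widths) < 2:
        return [0.0] * max(len(char_widths) - 1, 0)
    # weight each gap by the mean width of the two glyphs flanking it
    weights = [(a + b) / 2.0 for a, b in zip(char_widths, char_widths[1:])]
    total = sum(weights)
    return [extra * w / total for w in weights]

# e.g. non_linear_kerning([10, 4, 12], 30) widens the gap next to the
# 12-unit glyph more than the gap next to the 4-unit glyph.
```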
With regard to
To accomplish this without resorting to menu (IVDACC) entries, drag the object (for which “wrap to square” is desired) in the rectangular motion gesture (drag path) over the text object (
NOTE: When you drag an object, in this case a star, in a rectangular gesture, the ending position for the “wrapped to square” object is the original position of the object as it was wrapped in the text before you dragged it to create the “wrap to square” gesture.
(1) Use the circular arrow gesture of
(2) Use a verbal command, i.e., “show border values”, “show values”, etc.
(3) Double click on the star graphic to toggle the parameters on and off.
(4) Use a traditional menu (Info Canvas) with the four Wrap to Square entries—but this is what we wish to eliminate.
(5) Click on the star graphic and then push a key to toggle between “show” and “hide.”
(6) Float the mouse over any edge of the wrap square and a pop up tooltip appears showing the value that is set for that edge.
The following examples illustrate eliminating the need for vertical margin menu entries. Vertical margin menu entries (IVDACCs) can be removed by the following means. Use any line OR use a gesture line that invokes “margins,” e.g., from a “personal objects toolbox.” This could be a line with a special color or line style or both.
Using this line, draw a horizontal line that impinges a VDACC or word processor environment.
Alternatively, draw a horizontal line that is above or below or that impinges a text object that is not in a VDACC. Note: objects that are not in VDACCs are in Primary Blackspace. A simple line can be drawn. Then type or draw a specifier graphic, i.e., the letter “m” for margin. Either draw this specifier graphic directly over the drawn line or drag the specifier object to intersect the line. If a gesture line that invokes margins is used, then no specifier would be needed. Determine if the horizontal line is above or below a first drawn horizontal line. This determination is simply to decide if a drawn horizontal line is the top or bottom margin for a given page of text or text object. There are many ways to do this. For example, if there is only one drawn horizontal line, it could be determined to be the top margin if it is above a point that equals 50% of the height of the page or the height of the text object not in a VDACC, and it will be determined to be a bottom margin if it is below that 50% point. If there is no page then it will be measured according to the text object's height.
If it is desired to have a top margin that is below this 50% point, then a more specific specifier will be needed for the drawn line. An example would be “tm” for “top margin,” rather than just “m.” Or “bm” or “btm” for bottom margin, etc. Note: The above described items would apply to one or more lines drawn to determine clipping regions for a text object.
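Stated as code, the determination might look like this sketch (hypothetical names; screen coordinates with y growing downward):

```python
def classify_margin_line(line_y, page_top, page_height, specifier="m"):
    """Decide whether a drawn horizontal line is a top or bottom
    margin. The generic "m" specifier uses the 50% rule described
    above; "tm"/"bm"/"btm" override it for margins past the midpoint."""
    if specifier == "tm":
        return "top"
    if specifier in ("bm", "btm"):
        return "bottom"
    # for a text object not in a VDACC, page_height is the text's height
    midpoint = page_top + 0.5 * page_height
    return "top" if line_y < midpoint else "bottom"
```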
With regard to
margin line for this same text. This is a text object typed in Primary Blackspace. It is not in a VDACC. This is a change in how text processing works. Here a user can do effective word processing without a VDACC or window. The advantage is that users can very quickly create a text object and apply margins to that text object without having to first create a VDACC and then place text in that VDACC. This opens up many new possibilities for the creation of text and supports a greater independence for text objects. The idea here is that a user can create a text object by typing onscreen and then by drawing lines in association with that text object can create margins for that text object. The association of drawn lines with a text object can be by spatial distance, e.g., default distance saved in software, or a user defined distance, by intersection with the bounding rectangle for a text object whose size is user-definable. In other words, the size of the invisible bounding rectangle around a text object can be altered by user input. This input could be by dragging, drawing, verbal and the like. In addition to the placement of margins, clip regions can become part of a text object's properties. These clip regions would also enable the scrolling of a text object inside its own clip regions, which are now a part of it as a text object.
Creating margins for a text object in Primary Blackspace or its equivalent can be done with single stroke lines. Below is shown a loop in a line to designate “margin”. In this example a line containing an upper loop is a top margin and a line containing a bottom loop is a bottom margin. Also drawn are two clip lines, each drawn as a line with a recognized shape as part of the line. In this case the shape means “clip.” This is a text object typed in Primary Blackspace. It is not in a VDACC. Here a user can do effective word processing without a window or without a VDACC object. The advantage is that users can very quickly create a text object with the use of margins without having to first create a VDACC object and then place the text in that VDACC object.
This opens up many new possibilities for the creation of text and supports a greater independence for text. So the idea here is that a user creates a text object by typing or otherwise presenting it in a computer environment and then draws a line above and, if desired, below the text object. The “shape” used in the line determines the action of the line. Thus the recognition of lines by the software is facilitated by using shapes or gestures in the lines that are recognizable by the software. In addition, these gestures can be programmed by a user to look and work in a manner desirable to the user.
The drawing of a recognized modifier object, like the “C” in this example, turns a simple line style into a programming line, like a “gesture line.” The software recognizes the drawing of this line, impinged by the “C”, as a modifier for the text object. This could produce many results. For example, other objects could be drawn, dragged or otherwise presented within the text object's clipping region and these objects would immediately become controlled (managed) by the text object. As another example, if the text object itself were duplicated, these clipping regions could define the size of the text object's invisible bounding rectangle. A wide variety of inputs (beyond the drawing of a “C”) could be used to modify a line such that it can be used to program an object. These inputs include: verbal inputs, gestures, composite objects (i.e., glued objects, or objects in a container of some sort) and assigned objects dragged to impinge a line.
When a clip region is created for a text object this clip region becomes part of the property of that text object and a VDACC is not needed. So there is no longer a separate object needed to manage the text object. The text object itself becomes the manager and can be used to manage other text objects, graphic objects, video objects, devices, web objects and the like.
The look of the text object's clip region can be anything. It could look like a rectangular VDACC. Or a simple look would be to just have vertical lines placed above and below the text object. These lines would indicate where the text would disappear as it scrolls outside the text's clip region. Another approach would be to have invisible boundaries appear visibly only when they are floated over with a cursor, hand (as with gesturing controls), wand, stylus, or any other suitable control in either a 2-D or 3-D environment.
With regard to top and bottom clip boundaries, it would be feasible for such a text object to have no vertical clip boundaries on its right or left side. The text's width would be entirely controlled by vertical margins, not the edges of a VDACC or a computer environment. If there were no vertical margins, then the “clip” boundaries could be the width of a user's computer screen, or handheld screen, like a cell phone screen.
It is important to set forth how the software knows which objects a text object is managing. Whatever objects fall within a text object's clip region or margins could be managed by that text object. A text object that manages other objects is being called a “primary text object” or “master text object.” If clip regions are created for a primary text object and objects fall outside these clip regions, then these objects would not be managed by the primary text object.
A text object can manage any type object, including pictures, devices (switches, faders, joysticks, etc.), animations, videos, drawings, recognized objects and the like.
Other methods can be employed to cause a text object to manage other objects. These methods could include but are not limited to: (1) lassoing a group of objects and selecting a menu entry or issuing a verbal command to cause the primary text object to manage these other objects, (2) drawing a line that impinges a text object and that also impinges one or more other objects for which the text object is to take ownership, where such line would convey an action, like “control”, (3) impinging a primary text object with a second object that is programmed to cause the primary text object to become a “manager” for a group of objects assigned to such second object.
Text objects may take ownership of one or more other objects. There are many ways for a text object to take ownership of one or more objects. One method discussed above is to enable a text object to have its own clipping regions as part of its object properties. This can be activated for a text object or for other objects, like pictures, recognized geometric objects, i.e., stars, ellipses, squares, etc., videos, lines, and the like. So any object can take ownership of one or more other objects. Therefore, the embodiments herein can be applied to any object. But the text object will be used for purposes of illustration.
Definition of object “ownership”: This means that the functions, actions, operations, characteristics, qualities, attributes, features, logics, identities and the like, that are part of the properties or behaviors of one object, can be applied to or used to control, affect, create one or more contexts for, or otherwise influence one or more other objects.
For instance, if an object that has ownership of other objects (“primary object”) is moved, all objects that it “owns” will be moved by the same distance and angle. If a primary object's layer is changed, the objects it “owns” would have their layer changed. If a primary object were rescaled, any one or more objects that it owns would be rescaled by the same amount and proportion, unless any of these “owned” objects were in a mode that prevented them from being rescaled, i.e., they have “prevent rescale” or “lock size” turned on.
The invention provides methods for activating an object to take ownership of one or more other objects.
Menu: Activate a menu entry for a primary object that enables it to have ownership of other objects.
Verbal command: An object could be selected, then a command could be spoken, like “take ownership”, then each object that is desired to be “owned” by the selected object would in turn be selected.
Lasso: Lasso one or more objects where one of the objects is a primary object. The lassoing of other objects included with a primary object could automatically cause all lassoed objects to become “owned” by the primary object. Alternately, a user input could be used to cause the ownership. One or more objects could be lassoed and then dragged as a group to impinge a primary object.
In
Some pictures cause very undesirable text wrap because of their uneven edges. However, putting them into a wrap square is not always the desired look. In these cases, being able to draw a custom wrap border for a picture or other object and edit that wrap border can be used to achieve the desired result.
Draw one or more gesture lines that intersect the left edge of a VDACC containing a text object. The gesture line could be programmed with the following action: “Create a vertical margin line.” A gesture object could be used to cause a ruler to appear along the top and left edges of the VDACC. Below, two blue gesture lines have been drawn to cause a top and bottom margin line to appear and a gesture object has been drawn to cause rulers to appear. The result is shown in
Eliminating the menus for Snap (
Vocal commands.
Engaging snap is a prime candidate for the use of voice. To engage the snap function a user need only say “snap.” Voice can easily be used to engage new functions, like snapping one object to another where the size of the object being snapped is not changed. To engage this function a user could say: “snap without rescale” or “snap, no resize,” etc.
Graphic Activation of a Function.
This is a familiar operation in Blackspace. Using this a user would click on a switch or other graphic to turn on the snap function for an object. This is less elegant than voice and requires either placing an object onscreen or requiring the user to draw an object or enabling the user to create his own graphic equivalent for such object.
Programming Functions by Dragging Objects.
Another approach would be the combination of a voice command and the dragging of objects. One technique to make this work will eliminate the need for all Snap Info Canvases.
1) Issue a voice command, like: “set snap” or “set snap distance” or “program snap distance” or just “snap distance”. Equivalents are as usable for voice commands as they are for text and graphic commands in Blackspace.
2) Click on the object for which you want to program “snap.”
3) Issue a voice command, e.g., “set snap distances.” Select a first object to which this command is to be applied. [Or enable this command to be global for all objects, or select an object and then issue the voice command]. Drag a second object to the first object, but don't intersect the first object. The distance of this second object from the first object when a mouse upclick or its equivalent is performed determines the second object's position in relation to the first object. This distance programs the first object's snap distance.
If the drag of the second object was to a location to the right or left of the first object, this sets the horizontal snap distance for the first object. If the second object was dragged to a location below or above the first object, this sets the vertical snap distance for the first object. Let's say the drag is horizontal. Then if a user drags a third object to a vertical position near the first object, this sets the vertical snap distance for the first object.
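A sketch of the drop analysis on upclick, assuming axis-aligned bounding rectangles (left, top, right, bottom) in screen coordinates; the function name and the clamp against a user-set maximum (see the first condition below) are illustrative:

```python
def program_snap_distance(first_rect, dropped_rect, max_snap=100):
    """Derive an (axis, distance) snap setting from where the dragged
    object was released relative to the first object. A mostly
    left/right drop programs the horizontal snap distance; a mostly
    above/below drop programs the vertical one."""
    fl, ft, fr, fb = first_rect
    dl, dt, dr, db = dropped_rect
    gap_x = max(dl - fr, fl - dr, 0)  # gap to the right or to the left
    gap_y = max(dt - fb, ft - db, 0)  # gap below or above
    if gap_x >= gap_y:
        return ("horizontal", min(gap_x, max_snap))
    return ("vertical", min(gap_y, max_snap))
```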
Conditions:
User definable default maximum distance—a user preference can exist where a user can determine the maximum allowable snap distance for programming a snap space (horizontal or vertical) for a Blackspace object. So if an object drag determines a distance that is beyond a maximum set distance, that maximum distance will be set as the snap distance.
Change size condition—a user preference can exist where the user can determine if objects snapped to a first object change their size to match the size of the first object or not. If this feature is off, objects of the same type but of different sizes can be snapped to each other without causing any change in the size of either object.
Snapping different object types to each other—a user preference can exist where the user can determine if the snapping of objects of differing types will be allowed, i.e., snapping a switch to a picture or piece of text to a line, etc.
Saving snap distances. There are different possibilities here, which could apply to changing properties for any object in Blackspace.
Automatic save. A first object is put into a “program mode” or “set parameter mode.” This can be done with a voice command, i.e., “set snap space.” Then when a second object is dragged to within a maximum horizontal or vertical distance from this first object and a mouse upclick (or its equivalent) is performed, the horizontal or vertical snap distance is automatically saved for the first object or for all objects of its type, i.e., all square objects, all star objects, etc.
Drawing an arrow to save. In this approach a red arrow is drawn to impinge all of the objects that comprise a condition or set of conditions (a context) for the defining of one or more operations for one or more objects within this context.
In the example below, the context includes the following conditions:
Verbal save command. Here a user would need to tell the software what they want to save. In the case of the example above, a verbal utterance would be made to save the horizontal and vertical snap distances for the magenta square. There are many ways to do this. Below are two of them.
First Way: Utter the word “save” immediately after dragging the third object to the first to program a vertical snap distance.
Second Way: Click on the objects that represent the programming that you want to include in your save command. For example if you want to save both the horizontal and vertical snap distances, you could click only on the magenta square or on the magenta square and then on the green and orange rectangles that set the snap distances for the magenta square. If you wanted to only save the horizontal snap distance for the magenta square, you could click on the magenta square and then on the green rectangle or only on the green rectangle, as the subject of this save is already the magenta square.
Change Size Condition. A user can determine whether a snapped object must change its size to match the size of the object it is being snapped to or whether the snapped object should retain its original size and not be altered when it is snapped to another object. This can be programmed by the following methods:
Arrow—Draw an arrow to impinge the snap objects and then type, speak or draw an object that denotes the command: “match size” as a specifier of the arrow's action. As with all commands in Blackspace any equivalent that can be recognized by the software is viable here.
Verbal command—Say a command that causes the matching or not matching of sizes for snapped objects, i.e., “match size” or “don't match size.”
Draw one or more Gesture Objects—A gesture line can be used to program snap distance. It could consist of two equal or unequal length lines which would be hand drawn and recognized by the software as a gesture line. This would require the following:
(1) A first object exists with its snap function engaged (turned on).
(2) Two lines are drawn of essentially equal length (e.g. that are within 90% of the same length) to cause the action: “change the size of the dragged object to match the first object.” Or two lines of differing lengths are drawn to cause the opposite action.
(3) The two lines are drawn within a certain time period of each other, e.g., 1.5 seconds, in order to be recognized as a gesture object.
(4) Such recognized gesture object is drawn within a certain proximity to a first object with “snap” turned on. This distance could be an intersection or a minimum default distance to the object, like 20 pixels. These drawn objects don't have to be lines. In fact, using a recognized object could be easier to draw and to see onscreen. Below is the same operation as illustrated above, but instead of drawn lines, objects are used to recall gesture lines.
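These four requirements condense into a small test. The sketch below uses the example values from the list (90% length match, 1.5 seconds, 20 pixels) and assumes the caller has already verified requirement (1), that the first object has snap turned on; all names are illustrative:

```python
def classify_snap_gesture(len1, len2, t1, t2, gap_to_snap_object,
                          max_dt=1.5, proximity=20):
    """Return 'match_size', 'keep_size', or None for two drawn lines,
    per requirements (2)-(4) above. Times are in seconds; lengths and
    the gap are in pixels."""
    if abs(t1 - t2) > max_dt:           # (3) not drawn closely enough in time
        return None
    if gap_to_snap_object > proximity:  # (4) too far from the snap object
        return None
    nearly_equal = min(len1, len2) / max(len1, len2) >= 0.90
    return "match_size" if nearly_equal else "keep_size"   # (2)
```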
Pop Up VDACC. This is a traditional but useful method of programming various functions for snap. When an object is put into snap and a second object is dragged to within a desired proximity of that object, a pop up VDACC could appear with a short list of functions that can be selected.
Drawing to snap dissimilar objects to each other. One method would be to use a gesture object that has been programmed with the action “snap dissimilar type and/or size objects to each other.” The programming of gesture objects is discussed herein. Below a gesture line that equals the action: “turn on snap and permit objects of dissimilar types and sizes to be snapped to each other” has been drawn to impinge a star object. A green gesture line with a programmed action described above has been drawn to impinge a red star object. This changes the snap definition of the star from its default, which is to only permit like objects to be snapped to it, e.g., only star objects, to now permitting any type of object, like a picture, to be snapped to it. The picture object can then be dragged to intersect the star and this will result in the picture being snapped to the star. The snap distance can either be a property of the gesture line or a property of the default snap setting for the star, or set according to a user input.
The software accomplishes this by preventing the agglomeration of newly drawn objects with previously existing objects. One method to do this would be for the software to determine whether the time since a previously existing object was drawn exceeds a minimum time; if it does, the drawing of new objects that impinge that previously existing object will not result in the newly drawn objects agglomerating to it.
Definition of agglomeration: this provides that an object can be drawn to impinge an existing object, such that the newly drawn object, in combination with the previously existing object (“combination object”) can be recognized as a new object. The software's recognition of said new object results in the computer generation of the new object to replace the two or more objects comprising said combination object. Note: an object can be a line.
Notes for: “Preventing the agglomeration of newly drawn objects on previously existing objects” flow chart.
1. Has a new (first) object been drawn such that it impinges an existing object? An existing object is an object that was already in the computer environment before the first object was presented. An object can be “presented” by any of the following means: dragging means, verbal means, drawing means, context means, and assignment means.
2. A minimum time can be set either globally or for any individual object. This “time” is the difference between the time that a first object is presented (e.g., drawn) and the time that a previously existing object was presented in a computer environment.
3. Is the time that the previously existing object (that was impinged by the newly drawn “first” object) was originally presented in a computer environment greater than this minimum time?
4. Has a second object been presented such that it impinges the first object? For example, if the first object is a circle, then the second object could be a diagonal line drawn through the circle, like this:
5. The agglomeration of the first and second objects with the previously existing object is prevented. This way the drawing of the first and second objects can't agglomerate with the previously existing object and cause it to be turned into another object.
6. When the second object impinges the first object, can the computer recognize this impinging as a valid agglomeration of the two objects?
7. The impinging of the first object by the second object is recognized by the software, and as a result of this recognition the software replaces both the first and second objects with a new computer generated object.
8. Can the computer generated object convey an action to an object that it impinges? Note: turning a first and second object into a computer generated object, results in having that computer generated object impinge the same previously existing object that was impinged by the first and second objects.
9. Apply the action that can be conveyed by the computer generated graphic to the object that it is impinging. For instance, if the computer generated object conveyed the action “prevent,” then the previously existing object being impinged by the computer generated object would have the action “prevent” applied to it.
In this way a recognized graphic that conveys an action can be drawn over any existing object without the risk of any of the newly drawn strokes causing an agglomeration with the previously existing object.
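Read as code, the timing gate and recognition hand-off of the flow chart might look like this sketch; every name is hypothetical, and `recognizer` stands in for whatever shape recognition the system already provides:

```python
import time

def handle_new_strokes(strokes, existing_obj, recognizer, min_age=2.0):
    """Apply steps 1-9 above. `strokes` holds the newly drawn first
    and second objects; `existing_obj` is the impinged, previously
    existing object; `recognizer` returns (generated_object, action)
    when the strokes form a recognizable combination, else None."""
    age = time.time() - existing_obj["created_at"]
    if age <= min_age:
        return None  # young enough: ordinary agglomeration rules apply
    # Steps 3 and 5: too old, so the strokes must not merge with the
    # existing object -- though they may still merge with each other.
    result = recognizer(strokes)                 # steps 6-7
    if result is None:
        return strokes                           # nothing recognized
    generated, action = result
    if action is not None:                       # steps 8-9
        existing_obj.setdefault("actions", []).append(action)
    return generated
```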
The conditions of this new recognition are as follows:
(1) According to a determination of the software or via user input, the newly drawn one or more objects will not create an agglomeration with any previously existing object.
(2) The drawn circle can be drawn in the Recognize Draw Mode. The circle will be turned into a computer generated circle after it is drawn and recognized by the software.
(3) The diagonal line can be drawn through the recognized circle. But if the circle is not recognized, when the circle is intersected by the diagonal line no “prevent object” will be created.
(4) The diagonal line must intersect at least one portion of a recognized circle's circumference line (perimeter line) and extend to some user-definable length, like to a length equal to 90% of the diameter of the circle or to a definable distance from the opposing perimeter of the circle, like within 20 pixels of the opposing perimeter, as shown in
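Condition (4) amounts to a small geometric test. The sketch below uses the 90% and 20-pixel figures from the text as defaults and deliberately approximates “near the opposing perimeter”; all names are illustrative:

```python
import math

def is_prevent_object(center, radius, p1, p2,
                      min_span=0.90, edge_margin=20):
    """True when a diagonal stroke from p1 to p2 qualifies against a
    recognized circle: it crosses the perimeter, and it either spans
    at least min_span of the diameter or reaches to within
    edge_margin pixels of the far perimeter."""
    cx, cy = center
    d1 = math.hypot(p1[0] - cx, p1[1] - cy)
    d2 = math.hypot(p2[0] - cx, p2[1] - cy)
    crosses = (d1 <= radius) != (d2 <= radius)   # one end in, one end out
    span = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    long_enough = span >= min_span * 2 * radius
    # rough stand-in for "ends within edge_margin of the opposing side":
    near_far_side = min(d1, d2) >= radius - edge_margin and span >= radius
    return crosses and (long_enough or near_far_side)
```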
Prevent Assignment—to prevent any object from being assigned to another object, draw the “prevent object” to impinge the object. The default for drawing the prevent object to impinge another object can be “prevent assignment,” and the default for drawing the prevent object in blank space could be: “show a list of prevent functions.” Such defaults are user-definable by any known method.
The invention may also remove menus for the UNDO function and substitute graphic gesture methods. This is one of the most used functions in any program. These actions can be called forth by graphical drawing means.
Combining graphical means with a verbal command. If a user is required to first activate one or more drawing modes by clicking on a switch or on a graphical equivalent before they can draw, the drawing of objects for implementing software functions is not as efficient as it could be.
A potentially more efficient approach would be to enable users to turn on or off any software mode with a verbal command. Regarding the activation of the recognize draw mode, examples of verbal utterances that could be used are: “RDraw on”—“RDraw off” or “Recognize on”—“Recognize off”, etc.
Once the recognize mode is on, it is easy to draw an arrow curved to the right for Redo and an arrow curved to the left for Undo.
Combining drawing recognized objects with a switch on a keyboard or cell phone, etc. For hand held devices, it is not practical to have software mode switches onscreen. They take up too much space and will clutter the screen, thus becoming hard to use. But pushing various switches, like number switches, to engage various modes could be very practical and easy. Once the mode is engaged, in this case Recognize Draw, drawing an Undo and Redo graphic to impinge any object is easy.
Using programmed gesture lines. As explained herein a user can program a line or other objects that have recognizable properties, like a magenta dashed line, to invoke (or be the equivalent for) any definable action, like Undo or Redo. The one or more actions programmed for the gesture object would be applied to the one or more objects impinged by the drawing of the gesture object.
Multiple UNDOs and REDOs. One approach is to enable a user to modify a drawn graphic that causes a certain action to occur, like an arched arrow to cause Undo or Redo. First a graphic would be drawn to cause a desired action to be invoked. That graphic would be drawn to impinge one or more objects needing to be undone. Then this graphic can be modified by graphical or verbal means. For instance a number could be added to the drawn graphic, like a Redo arrow. This would Redo the last number of actions for that object. In
The removing of menus as a necessary vehicle for operating a computer serves many purposes: (a) it frees a user from having to look through a menu to find a function, (b) whenever possible, it eliminates the dependence upon language of any kind, (c) it simplifies user actions required to operate a computer, and (d) it replaces computer based operations with user-based operations.
Selecting Modes
A. Verbal—Say the name of the mode or an equivalent name, i.e., RDraw, Free Draw, Text, Edit, Recog, Lasso, etc., and the mode is engaged.
B. Draw an object—Draw an object that equals a Mode and the mode is activated.
C. A Mode can be invoked by a gesture line or object. —A gesture line can be drawn in a computer environment to activate one or more modes. A gesture object that can invoke one or more modes can be dragged or otherwise presented in a computer environment and then activated by some user action or context.
D. Using rhythms to activate computer operations—The tapping of a rhythm on a touch screen, or by pushing a key on a cell phone, keyboard, etc., or by using sound to detect a tap, e.g., tapping on the case of a device, or using a camera to detect a rhythmic tap in free space, can be used to activate a computer mode, action, operation, function or the like.
The embodiment described below enables a user to draw a single graphic that does the following things:
(a) It selects the objects to be contained in or managed by a VDACC object.
(b) It defines the visual size and shape of the VDACC object.
(c) It supports further modification to the type of VDACC to be created.
A graphic that can be drawn to accomplish these tasks is a rectangular arrow that points to its own tail. This free drawn object is recognized by the software and is turned into a recognized arrow with a white arrowhead. Click on the white arrowhead to place all of the objects impinged by this drawn graphic into a VDACC object.
A “place in VDACC” arrow may be modified, as shown in
Removing Flip menus. Below are various methods of removing the menus (IVDACCs) for flipping pictures and replacing them with gesture procedures. The embodiments below enable the flipping of any graphic object (i.e., all recognized objects), free drawn lines, pictures and even animations and videos.
Tap and drag—Tap or click on an edge of a graphic and then within a specified time period, like 1 second, drag in the direction that you wish to flip the object. See
Filling objects and changing their line color—This removes the need for Fill menus (IVDACCs). This idea utilizes a gesture that is much like what you would do to paint something. Here's how this works. Click on a color in an inkwell then float your mouse, finger, pen or the like over an object in the following pattern. This circular motion feels like painting on something, like filling it in with brush strokes. There are many ways of invoking this: (1) with a mouse float after selecting a color, (2) with a drawn line after selecting a color, (3) with a hand gesture in the air—recognized by a camera device, etc.
The best way to utilize the drawn line is to have a programmed line for “fill” in your personal object toolbox, accessed by drawing an object, like a green star, etc. These personal objects would have the mode that created them built into their object definition. So selecting them from your toolbox will automatically engage the required mode to draw them again. Utilizing this approach, you would click on a “fill” line in your tool box and draw as shown in
Removing the Invisible menu.—Verbal command: Say “invisible.” Or draw an “i” over the object you wish to make invisible. The “i” would be a letter that is recognized by the software. The idea here is that this letter can be hand drawn in a relatively large size, so it's easy to see and to draw, and then when it's recognized, the image that is impinged by this hand drawn letter is made invisible.
Removing the need for the “wrap to edge” menu item for text. This is a highly used action, so more than one alternative to an IVDACC makes good sense. There are two viable replacements for the “wrap to edge” IVDACC. Each serves a different purpose. They are illustrated in
Vocal command—Wrap to edge can be invoked by a verbal utterance, e.g., “wrap to edge.” A vocal command is only part of the solution here, because if you click on text and say “wrap to edge”, the text has to have something to wrap to. So if the text is in a VDACC or typed against the right side of one's computer monitor where the impinging of the monitor's edge by the text can cause “wrap to edge,” a vocal utterance can be a fast way of invoking this feature for the text object. But if a text object is not situated such that it can wrap to an “edge” of something, then a vocal utterance activating this “wrap to edge” will not be effective. So in these cases you need to be able to draw a vertical line in or near the text object to tell it where to wrap to. This, of course, is only for existing text objects. Otherwise, using the “wrap to edge” line as described under A above is a good solution for freshly typed text. But for existing text, drawing a vertical line through the text and then saying “wrap to edge” or its equivalent would be quite effective.
The software would recognize the vocal command, e.g., “wrap to edge,” and then look for a vertical line that is some minimum length (i.e., one half inch) and which impinges a text object.
Removing the IVDACCs for lock functions, such as move lock, copy lock, delete lock, etc. Distinguishing free drawn user inputs used to create a folder from free drawn user inputs used to create a lock object.
Currently drawing an arch over the left, center or right top edge of a rectangle results in the software's recognition of a folder. A modification to this recognition software provides that any rectangle that is impinged by a drawn arch that extends to within 15% of its left and right edges will not be recognized as a folder. Then drawing this will cause the software to recognize a lock object which can be used to activate any lock mode.
There are different ways to utilize the Lock recognized object.
a. Accessing a List of Choices
Draw a recognized lock object, and once it is recognized, click on it and the software will present a list of the available lock features in the software. These features can be presented as either text objects or graphical objects. Then select the desired lock object or text object.
b. Activating a Default Lock Choice.
With this idea the user sets one of the available lock choices as a default that will be activated when the user draws a “lock object” and then drags that object to impinge an object for which they wish to convey the default action for lock. Possible lock actions include: move lock, lock color, delete lock, and the like.
If the software finds these conditions, then it implements a wrap action for the text, such that the text wraps at the point where the vertical line has been drawn. If the software does not find this vertical line, it cannot activate the verbal “wrap to edge” command. In this case, a pop up notice may appear alerting the user to this problem. To fix the problem, the user would redraw a vertical line through the text object or to the right or left of the text object and restate: “wrap to edge.” See
In the above described embodiment, the line does not have to be drawn to intersect the text. If this were a requirement, then you could never make the wrap width wider than it already is for a text object. So the software needs to look to the right for a substantially vertical line. If it doesn't find it, it looks farther to the right for this line. If it finds a vertical line anywhere to the right of the text and that line impinges a horizontal plane defined by the text object, then the verbal command “wrap to edge” will be implemented.
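That search might be sketched as follows (hypothetical names; rectangles are (left, top, right, bottom), and the half-inch minimum is taken as roughly 36 pixels at 72 dpi):

```python
def find_wrap_line(text_rect, drawn_lines, min_len_px=36, max_tilt_px=5):
    """After the verbal 'wrap to edge' command, look through or to the
    right of the text object for a substantially vertical line that
    crosses the text's horizontal band. Returns the x position of the
    nearest such line, or None (which would trigger the pop up notice
    described above)."""
    left, top, right, bottom = text_rect
    candidates = []
    for (x1, y1), (x2, y2) in drawn_lines:
        if abs(x1 - x2) > max_tilt_px:        # not vertical enough
            continue
        if abs(y2 - y1) < min_len_px:         # shorter than the minimum
            continue
        if max(y1, y2) < top or min(y1, y2) > bottom:
            continue                          # misses the text's band
        if min(x1, x2) >= left:               # in or right of the text
            candidates.append(min(x1, x2))
    return min(candidates) if candidates else None
```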
Another way to invoke Lock Color would be to drag a lock object through the object you want to lock the color for and then drag the lock to intersect an inkwell. Below a lock object has been dragged to impinge two colored circle objects and then dragged to impinge the free draw inkwell. This locks the color of these two impinged objects.
Verbal commands. This is a very good candidate for verbal commands. Such verbal commands could include: “lock color,” “move lock,” “delete lock,” “copy lock,” etc.
Unique recognized objects. These would include hand drawn objects that would be recognized by the software.
Creating user-drawn recognized objects. This section describes a method to “teach” Blackspace how to recognize new hand drawn objects. This enables users to create new recognized objects, like a heart or other types of geometric objects. These objects need to be easy to draw again, so scribbles or complex objects with curves are not good candidates for this approach. What are good candidates are simple objects where the right and left halves of the object are exact matches.
This carries with it two advantages: (1) the user only has to draw the left half of the object, and (2) the user can immediately see if their hand drawn object has been recognized by the software. Here's how this works. A grid appears onscreen when a user selects a mode which can carry any name. Let's call it: “design an object.” So for instance, a user clicks on a switch labeled “design an object” or types this text or its equivalent in Blackspace, clicks on it, and a grid appears. This grid has a vertical line running down its center. The grid is comprised of relatively small grid squares, which are user-adjustable. These smaller squares (or rectangles) are for accuracy of drawing and accuracy of computer analysis.
The idea is this. A user draws the left half of the object they want to create. Then when they lift off their mouse (do an upclick or its equivalent) the software analyzes the left half of the user-drawn object and then automatically draws the second half of the object on the right side of the grid.
The user can see immediately if the software has properly recognized what they drew. If not, the user will probably need to simplify their drawing or draw it more accurately.
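The mirroring step itself is simple to sketch; the helper below (a hypothetical name) takes the stroke as a list of (x, y) points and the grid's center line as a vertical axis at axis_x:

```python
def mirror_left_half(points, axis_x):
    """Reflect a left-half stroke across the grid's vertical center
    line and append it, so the user immediately sees the full,
    symmetric object the software inferred from their half-drawing."""
    right_half = [(2 * axis_x - x, y) for (x, y) in points]
    right_half.reverse()   # keep the outline running continuously
    return points + right_half

# e.g. the left half of a heart, drawn from the bottom tip up and
# around, comes back as one closed, symmetric outline.
```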
For these new objects to have value to a user as operational tools, whatever is drawn needs to be repeatable. The idea is to give a user unique and familiar recognized objects to use as tools in a computer environment. So these new objects need to have a high degree of recognition accuracy.
Then when the user activates a recognize draw mode and draws the new object, in this case a heart, the computer creates a perfect computer rendered heart from the user's free drawn object. And the user would only need to draw half of the object. This process is shown in
The foregoing description of the preferred embodiments of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and many modifications and variations are possible in light of the above teaching without deviating from the spirit and the scope of the invention. The embodiment described is selected to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as suited to the particular purpose contemplated. It is intended that the scope of the invention be defined by the claims appended hereto.
This application claims the priority date benefit of Provisional Application No. 61/201,386, filed Dec. 9, 2008.