The present disclosure relates to an interface and method for interacting with an artificial intelligence engine on a virtual canvas.
One or more embodiments are directed to a method of collaborating between a first computer associated with a first display at a first location and a second computer associated with a second display at a second location, the method including establishing a connection between the first and second computers, starting a virtual canvas on the first computer, sending the virtual canvas from the first computer to the second computer, in response to an artificial intelligence icon being selected on the first computer or the second computer, opening a prompt tab to receive a prompt input, and in response to the prompt input and activation of a button, generating a response from an artificial intelligence engine in the prompt tab or in a new window on the canvas.
One or more embodiments are directed to a method of using an artificial intelligence engine in a virtual canvas running on a computer associated with a display, the method including starting a virtual canvas on the computer, in response to an artificial intelligence icon being selected in an icon window of the virtual canvas, opening a prompt tab to receive a prompt input, and in response to the prompt input and activation of a button, generating a response from an artificial intelligence engine in the prompt tab or in a new window on the canvas.
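The flow recited above may be pictured with a minimal sketch, assuming hypothetical stand-ins (AIEngine, PromptTab, CanvasWindow) for whatever types the canvas APP actually provides:

```typescript
// Hypothetical stand-ins for the canvas APP's own types.
interface AIEngine {
  generate(prompt: string): Promise<string>;
}

interface CanvasWindow {
  setContent(text: string): void;
}

class PromptTab {
  constructor(
    private engine: AIEngine,
    private openWindow: () => CanvasWindow,
  ) {}

  // Called when the user enters a prompt and activates the generate button.
  async onGenerate(prompt: string, target: "tab" | "window"): Promise<void> {
    const response = await this.engine.generate(prompt);
    if (target === "tab") {
      this.showInTab(response); // response appears in the prompt tab itself
    } else {
      this.openWindow().setContent(response); // or in a new window on the canvas
    }
  }

  private showInTab(text: string): void {
    console.log("prompt tab:", text);
  }
}
```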
The prompt input may be text or a sketch.
Each window on the canvas generated from the artificial intelligence engine may include a ranking metric for selection by a user.
In response to a user ranking one or more windows generated from the artificial intelligence engine on the canvas, the method may include feeding back the ranked windows to the artificial intelligence engine.
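The feedback of ranked windows might look like the following sketch; the RankedWindow shape and the engine's refine() call are assumptions for illustration, not a documented API:

```typescript
// Sketch of the ranking feedback loop under assumed types.
interface RankedWindow {
  content: string;
  rank: number; // set by the user via the window's ranking metric; 0 = unranked
}

async function feedBackRankings(
  engine: { refine(ranked: RankedWindow[]): Promise<string[]> },
  windows: RankedWindow[],
): Promise<string[]> {
  // Only windows the user actually ranked are fed back to the engine,
  // highest-ranked first, to steer the next round of generation.
  const ranked = windows
    .filter((w) => w.rank > 0)
    .sort((a, b) => b.rank - a.rank);
  return engine.refine(ranked);
}
```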
One or more embodiments are directed to a graphical user interface for using an artificial intelligence engine in a virtual canvas running on a computer associated with a display, the graphical user interface being displayed in response to an artificial intelligence icon being selected in an icon window of the virtual canvas. The graphical user interface may include a prompt tab for inputting a single prompt to retrieve a single text result from an artificial intelligence engine, a notes tab for inputting a single prompt to retrieve more than one text result from the artificial intelligence engine, and a media tab for inputting a single prompt to retrieve one or more images from the artificial intelligence engine.
In response to the prompt being input to the prompt tab, the GUI displays the single text result in the prompt tab; in response to the prompt being input to the notes tab, the GUI displays each text result in a separate window on the canvas; and in response to the prompt being input to the media tab, the GUI displays each image result in a separate window on the canvas.
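A minimal sketch of this routing rule follows; the type names and callbacks are invented for illustration:

```typescript
// Where results land, per tab, as described above.
type Tab = "prompt" | "notes" | "media";

interface AIResults {
  texts: string[];
  images: string[]; // e.g., image URLs or data URIs
}

function routeResults(
  tab: Tab,
  results: AIResults,
  openWindow: (content: string) => void, // opens a new window on the canvas
  showInTab: (content: string) => void,  // shows text inside the tab itself
): void {
  switch (tab) {
    case "prompt":
      showInTab(results.texts[0]); // single text result, shown in the prompt tab
      break;
    case "notes":
      results.texts.forEach(openWindow); // each text result gets its own window
      break;
    case "media":
      results.images.forEach(openWindow); // each image result gets its own window
      break;
  }
}
```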
The scope of the present disclosure is best understood from the following detailed description of exemplary embodiments when read in conjunction with the accompanying drawings.
As illustrated in
As used herein, a peer-to-peer connection means that the computers forming the peer-to-peer connection communicate with each other, either directly or through a relay server 300. When the relay server 300 is used, the relay server 300 merely relays data in real time to facilitate the communication, and does not rely on storing data to transfer information. In contrast, in a cloud-based communication protocol, each local computer communicates information to a cloud-based server. The cloud-based server has the master information, i.e., the master canvas and all objects on the canvas. Periodically, each local computer queries the cloud server as to whether any changes have been made to the master canvas and, if so, a synchronization occurs between the local computer and the cloud server.
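The contrast between the two protocols can be sketched as below; both server classes are simplified assumptions for illustration (a real relay would forward over network sockets rather than direct calls):

```typescript
// Peer-to-peer via relay: the relay forwards updates in real time and
// stores nothing.
class RelayServer {
  private peers: Array<(update: string) => void> = [];

  join(onUpdate: (update: string) => void): void {
    this.peers.push(onUpdate);
  }

  relay(from: (update: string) => void, update: string): void {
    for (const peer of this.peers) {
      if (peer !== from) peer(update); // forward immediately, no persistence
    }
  }
}

// Cloud-based: the server holds the master canvas; clients poll and sync.
class CloudServer {
  private version = 0;
  private master: string[] = []; // master list of canvas objects

  push(object: string): number {
    this.master.push(object);
    return ++this.version;
  }

  // A client periodically asks whether the master canvas has changed.
  changesSince(clientVersion: number): { version: number; objects: string[] } | null {
    if (clientVersion === this.version) return null; // nothing to sync
    return { version: this.version, objects: this.master.slice() };
  }
}
```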
In accordance with embodiments, initially a virtual session is started at a first location, with a computer, e.g., Comp1 120 or Comp2 220, that is running virtual canvas software, i.e., a canvas APP, e.g., ThinkHub® multi-site software by T1V®. In particular, a virtual canvas exists within the software and may be much larger than the physical display region 112, 212 on the display 110, 210. Any section of the virtual canvas can be viewed on the display region 112, 212 of the display 110, 210. Suppose that at first the entire virtual canvas is viewed on the display region 112, 212 and that each of the displays 110, 210 includes a touch or gesture sensor 116, 216 associated therewith. Then, by zooming into regions, a portion of the virtual canvas can be viewed. By pinching, zooming, and panning with gestures detected by the touch sensor 116, 216, a user can zoom into various regions of the virtual canvas to be shown on the display region 112, 212.
The virtual canvas is a virtual region that expands to greater than the physical area of the display regions 112, 212, e.g., any number of times the physical area, up to infinite. The use of the virtual canvas allows additional files to be accessible and saved, but off the display regions 112, 212. Gestures, such as pan, zoom, and pinch gestures, can be made to move and resize the scale of the virtual canvas, allowing the full canvas to be displayed all at once or only a small section thereof.
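One way to picture the relationship between the virtual canvas and the display region is a view transform, sketched below under an assumed Viewport shape (a canvas-space center point and a zoom factor); the field names are illustrative only:

```typescript
// The canvas is effectively unbounded; the display shows a pan/zoom
// window into it.
interface Viewport {
  centerX: number; // canvas coordinates shown at the center of the display
  centerY: number;
  zoom: number;    // display pixels per canvas unit; larger values zoom in
}

// Map a point on the virtual canvas to display-region pixels.
function canvasToDisplay(
  v: Viewport,
  displayW: number,
  displayH: number,
  x: number,
  y: number,
): { px: number; py: number } {
  return {
    px: displayW / 2 + (x - v.centerX) * v.zoom,
    py: displayH / 2 + (y - v.centerY) * v.zoom,
  };
}

// A pan gesture in display pixels shifts the canvas-space center.
function applyPan(v: Viewport, dxPixels: number, dyPixels: number): Viewport {
  return {
    ...v,
    centerX: v.centerX - dxPixels / v.zoom,
    centerY: v.centerY - dyPixels / v.zoom,
  };
}
```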
After the connection is established, a user, e.g., at Comp1 120, starts a multisite session and gives it a name. Then, a user at a second location with a computer, e.g., Comp2 220, that is also running the virtual canvas software can see a list of active multisite sessions that it has permission to access. The user at the second location taps on a button or on the name in the list on the virtual canvas to join the session. Once the second computer joins the session, the connection is established between Comp1 and Comp2, and the data for all the objects on Comp1 in the virtual canvas is then sent to Comp2, and vice versa, such that the virtual canvas is now a shared canvas. For example, when a window VCW1 is opened in Display1 at Location 1, the window VCW1 will also appear in Display2 at Location 2, and, when a window VCW2 is opened in Display2 at Location 2, the window VCW2 will also appear in Display1 at Location 1. From this point forward, any time a user on either computer (Comp1 or Comp2) enters any data onto the canvas or opens a new window, the entire data or window is transmitted to the other computer, so that at any time after the initial set-up all objects in either display are stored locally on both computers. Further, objects are at a single location on the shared canvas, and objects on the shared canvas can be manipulated from both the first and second display computers simultaneously.

File-based objects can be displayed and edited by both sides simultaneously by exchanging all file-based data about the object with both sides, and any changes are exchanged in real time. Examples of file-based objects are notes, images, sketches, and pdf files. These can be accessed and manipulated with native apps built into the canvas software, for example, a Note App, a Sketch App, an image viewer, or a pdf viewer. Live source objects, in contrast, may change continuously and may be edited or changed outside of the canvas APP. For example, if the canvas is closed and reopened later, the live source object may be different. What is stored on the canvas is the metadata, e.g., the location of the live source object on the canvas and a pointer to the source of the live source object. For file-based objects, the entire file is saved on the canvas, and the object information for that file can only be changed within the canvas APP: a file-based object is changed on one computer within the canvas APP, then updated on the virtual canvas stored in the cloud, and then all computers that connect to the canvas receive the updated file information.
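The difference in what is exchanged for the two object types might be sketched as follows; the object shapes are assumptions for illustration:

```typescript
// What travels between computers, per object type, per the description above.
type CanvasObject =
  | { kind: "file"; id: string; x: number; y: number; bytes: Uint8Array }   // whole file travels
  | { kind: "live"; id: string; x: number; y: number; sourceUrl: string };  // metadata + pointer only

function payloadFor(obj: CanvasObject): object {
  if (obj.kind === "file") {
    // File-based objects (notes, images, sketches, pdfs): the entire file is
    // exchanged so both sides can open and edit it with the built-in apps.
    return obj;
  }
  // Live source objects: only the canvas location and a pointer to the
  // source are stored and sent; the content itself may change outside
  // the canvas APP.
  return { kind: obj.kind, id: obj.id, x: obj.x, y: obj.y, sourceUrl: obj.sourceUrl };
}
```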
Native objects are objects that are controlled within the canvas APP, e.g., a sketch, a note, a pdf viewer, an image viewer, and so forth. While the above has shown the displays displaying the objects at the same locations on the two displays, this is true only if the displays are both displaying the same views of the canvas. In other words, while the objects are at the same location on the canvas, different views of the canvas may be viewed on the different displays. For example, if Display 2 were viewing the same general region of the virtual canvas, but with a different zoom factor and a slightly different center point, then on Display 2, VCW1 and VCW2 would appear at the same places on the virtual canvas but at different locations on the display and with different scale factors. Further, one display may be viewing a different portion of the shared canvas without altering the view on the other display. Alternatively, changing the portion being viewed on one display may alter the view on the other display.
U.S. Pat. Nos. 11,729,355 and 11,709,647, both of which are incorporated herein by reference in their entirety for all purposes, disclose the use of a canvas in which various types of windows are displayed, along with various manners in which to interact with these different types of windows. A Canvas is a virtual region that expands to greater than the physical area of a display region of a display, e.g., any number of times the physical area, up to infinite. The use of the Canvas allows additional files to be accessible and saved, but off the display region. Further discussion of the canvas may be found in U.S. Pat. No. 9,596,319, which is hereby incorporated by reference in its entirety for all purposes. Gestures, such as pan, zoom, and pinch gestures, can be made to move and resize the scale of the Canvas, allowing the full canvas to be displayed at once or only a small section thereof, while pan, zoom, and pinch gestures associated with a particular window may be used to move and resize only that window. Further, tapping or otherwise selecting a window may bring up menus or window trays associated with that window, adjacent to that window, in accordance with a type of the window. These different types of windows may include windows that display one of a mobile device stream, a white board, a video conference stream, a browser stream, a snapshot of a stream, and the like. The window trays may include an annotation tool, a virtual keyboard tool, a snapshot tool, and the like. Such a canvas may be used with only one user, i.e.,
With the introduction of numerous AI engines that allow images to be retrieved or created in response to user input, e.g., text, a way to efficiently select and refine the image output may be needed. The use of an AI engine within a canvas may allow such refinement to be readily achieved. Further, while the use of a virtual canvas is described above as facilitating collaboration between remote users, a canvas for use by a single user may also ease the use of an AI engine.
A detailed view of a virtual canvas window VCW is shown in
As shown in
This may be repeated numerous times until a desired image or images are output. This improves the convenience of interacting with the AI engine and allows refinement to be quickly realized.
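The repeat-until-satisfied loop could be sketched as below, assuming hypothetical generateImages() and user-review callbacks; neither is a documented part of the canvas APP:

```typescript
// Iterative refinement: regenerate, let the user keep favorites, repeat.
async function refineImages(
  generateImages: (prompt: string, seeds: string[]) => Promise<string[]>,
  review: (candidates: string[]) => Promise<{ keep: string[]; done: boolean }>,
  prompt: string,
): Promise<string[]> {
  let seeds: string[] = [];
  // Repeat generation, feeding the user's selections back in, until the
  // user indicates a desired image has been produced.
  for (;;) {
    const candidates = await generateImages(prompt, seeds);
    const { keep, done } = await review(candidates);
    if (done) return keep;
    seeds = keep; // refine the next round from the kept images
  }
}
```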
As a further alternative, as shown in
Another alternative is shown in
As shown in
If more than one AI engine is to be used or more than one response from an AI engine is desired, a notes tab 540 including a prompt entry section 542, a number of responses entry section 544, a sliding scale of the types of responses wanted 545, and a generate button 546 may be selected. A user will input a prompt in the prompt entry section 542, a number in the number of responses entry section 544, and a range of responses on the sliding scale of the types of responses wanted 545, and then select the generate button 546. Unlike the prompt tab 530, selection of the generate button 546 will automatically open a new notes window 548 for each response generated. These multiple windows may then be grouped or sorted as discussed above, e.g., in
If the AI engine is to be used to generate an image, a media tab 550 including a prompt entry section 552, a number of responses entry section 554, a sliding scale of the types of responses wanted 555, and a generate button 556 may be selected. A user will input a prompt in the prompt entry section 552, a number in the number of responses entry section 554, and a range of responses on the sliding scale of the types of responses wanted 555, and then select the generate button 556. Unlike the prompt tab 530 and the notes tab 540, selection of the generate button 556 will automatically open media windows 558 for each response generated. These multiple windows may then be grouped or sorted as discussed above, e.g., in
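One sketch can cover both the notes tab and the media tab, since each opens a separate window per response. Mapping the sliding scale 545/555 to a sampling temperature is an assumption for illustration; the disclosure only says the scale sets the types or range of responses wanted:

```typescript
// Shared sketch for the notes (540) and media (550) tabs.
interface MultiRequest {
  prompt: string;
  count: number;   // number of responses entry section 544/554
  variety: number; // sliding scale 545/555, normalized here to 0..1
}

async function generateIntoWindows(
  req: MultiRequest,
  generate: (prompt: string, temperature: number) => Promise<string>,
  openWindow: (content: string) => void,
): Promise<void> {
  // Assumed mapping: a low slider setting yields similar responses,
  // a high setting yields more varied ones.
  const temperature = 0.2 + 0.8 * req.variety;
  const results = await Promise.all(
    Array.from({ length: req.count }, () => generate(req.prompt, temperature)),
  );
  results.forEach(openWindow); // one notes/media window 548/558 per response
}
```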
Further, while the prompts in
Embodiments are described, and illustrated in the drawings, in terms of functional blocks, units and/or modules. Those skilled in the art will appreciate that these blocks, units and/or modules are physically implemented by electronic (or optical) circuits such as logic circuits, discrete components, microprocessors, hard-wired circuits, memory elements, wiring connections, and the like, which may be formed using semiconductor-based fabrication techniques or other manufacturing technologies. In the case of the blocks, units and/or modules being implemented by microprocessors or similar, they may be programmed using software (e.g., microcode) to perform various functions discussed herein and may optionally be driven by firmware and/or software. Alternatively, each block, unit and/or module may be implemented by dedicated hardware, or as a combination of dedicated hardware to perform some functions and a processor (e.g., one or more programmed microprocessors and associated circuitry) to perform other functions. Also, each block, unit and/or module of the embodiments may be physically separated into two or more interacting and discrete blocks, units and/or modules without departing from the scope of the inventive concepts. Further, the blocks, units and/or modules of the embodiments may be physically combined into more complex blocks, units and/or modules without departing from the scope of this disclosure.
The methods and processes described herein may be performed by code or instructions to be executed by a computer, processor, manager, or controller. Because the algorithms that form the basis of the methods (or operations of the computer, processor, or controller) are described in detail, the code or instructions for implementing the operations of the method embodiments may transform the computer, processor, or controller into a special-purpose processor for performing the methods described herein.
Also, another embodiment may include a computer-readable medium, e.g., a non-transitory computer-readable medium, for storing the code or instructions described above. The computer-readable medium may be a volatile or non-volatile memory or other storage device, which may be removably or fixedly coupled to the computer, processor, or controller which is to execute the code or instructions for performing the method embodiments described herein.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, as would be apparent to one of ordinary skill in the art as of the filing of the present application, features, characteristics, and/or elements described in connection with a particular embodiment may be used singly or in combination with features, characteristics, and/or elements described in connection with other embodiments unless otherwise specifically indicated. Accordingly, it will be understood by those of skill in the art that various changes in form and details may be made without departing from the spirit and scope of the present invention as set forth in the following claims.
The present application claims priority to Provisional Ser. No. 63/460,132 filed on Apr. 18, 2023, the entire contents of which are hereby incorporated by reference.