Interactive input systems that allow users to inject input (e.g., digital ink, mouse events, etc.) into an application program using an active pointer (e.g., a pointer that emits light, sound, or other signal), a passive pointer (e.g., a finger, cylinder or other suitable object) or other suitable input devices such as, for example, a mouse or trackball, are known. These interactive input systems include but are not limited to: touch systems comprising touch panels employing analog resistive or machine vision technology to register pointer input such as those disclosed in U.S. Pat. Nos. 5,448,263; 6,141,000; 6,337,681; 6,747,636; 6,803,906; 7,232,986; 7,236,162; and 7,274,356 and in U.S. Patent Application Publication No. 2004/0179001, all assigned to SMART Technologies ULC of Calgary, Alberta, Canada, assignee of the subject application, the entire disclosures of which are incorporated herein by reference; touch systems comprising touch panels employing electromagnetic, capacitive, acoustic or other technologies to register pointer input; tablet and laptop personal computers (PCs); smartphones; personal digital assistants (PDAs) and other handheld devices; and other similar devices.
Above-incorporated U.S. Pat. No. 6,803,906 to Morrison et al. discloses a touch system that employs machine vision to detect pointer interaction with a touch surface on which a computer-generated image is presented. A rectangular bezel or frame surrounds the touch surface and supports digital imaging devices at its corners. The digital imaging devices have overlapping fields of view that encompass and look generally across the touch surface. The digital imaging devices acquire images looking across the touch surface from different vantages and generate image data. Image data acquired by the digital imaging devices is processed by on-board digital signal processors to determine if a pointer exists in the captured image data. When it is determined that a pointer exists in the captured image data, the digital signal processors convey pointer characteristic data to a master controller, which in turn processes the pointer characteristic data to determine the location of the pointer in (x,y) coordinates relative to the touch surface using triangulation. The pointer coordinates are conveyed to a computer executing one or more application programs. The computer uses the pointer coordinates to update the computer-generated image that is presented on the touch surface. Pointer contacts on the touch surface can therefore be recorded as writing or drawing or used to control execution of application programs executed by the computer.
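By way of illustration only, the following sketch shows one way the bearing angles reported by two imaging devices could be triangulated into an (x,y) coordinate relative to the touch surface; the camera positions, angle convention and function names are assumptions for this example and are not taken from the referenced patents.

```typescript
// Minimal triangulation sketch: two imaging devices at known corners of the
// touch surface each report a bearing angle (radians, measured from the
// surface's x-axis) toward the pointer. The intersection of the two rays
// gives the pointer position. Names and geometry are illustrative only.
interface Camera {
  x: number;      // camera position on the touch surface, in surface units
  y: number;
  angle: number;  // bearing toward the pointer, in radians
}

function triangulate(a: Camera, b: Camera): { x: number; y: number } | null {
  // Each camera defines a ray: p = camera + t * (cos(angle), sin(angle)).
  const d1 = { x: Math.cos(a.angle), y: Math.sin(a.angle) };
  const d2 = { x: Math.cos(b.angle), y: Math.sin(b.angle) };
  const denom = d1.x * d2.y - d1.y * d2.x;
  if (Math.abs(denom) < 1e-9) return null; // rays are parallel: no fix
  const t = ((b.x - a.x) * d2.y - (b.y - a.y) * d2.x) / denom;
  return { x: a.x + t * d1.x, y: a.y + t * d1.y };
}

// Example: cameras at opposite top corners of a 100x75 surface.
console.log(triangulate(
  { x: 0,   y: 0, angle: Math.PI / 4 },       // sees pointer at 45 degrees
  { x: 100, y: 0, angle: (3 * Math.PI) / 4 }  // sees pointer at 135 degrees
)); // ≈ { x: 50, y: 50 }
```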
Multi-touch interactive input systems that receive and process input from multiple pointers using machine vision are also known. One such type of multi-touch interactive input system exploits the well-known optical phenomenon of frustrated total internal reflection (FTIR). According to the general principles of FTIR, the total internal reflection (TIR) of light traveling through an optical waveguide is frustrated when an object such as a pointer touches the waveguide surface, due to a change in the index of refraction of the waveguide, causing some light to escape from the touch point. In such a multi-touch interactive input system, the machine vision system captures images including the point(s) of escaped light, and processes the images to identify the touch position on the waveguide surface based on the point(s) of escaped light for use as input to application programs.
The application program with which the users interact provides a canvas for receiving user input. The canvas is configured to be extended in size within its two-dimensional plane to accommodate new input as needed. As will be understood, the ability of the canvas to be extended in size within the two-dimensional plane as needed causes the canvas to appear to be generally infinite in size. Accordingly, managing the collaboration session may become burdensome, resulting in a diminished user experience.
It is therefore an object to provide a novel method of navigation during an interactive input session and a novel interactive board employing the same.
According to an aspect there is provided a method for automatically grouping objects on a canvas in a collaborative workspace, the method comprising: defining at least one zone within the canvas into which a plurality of users can contribute content; in response to a user-based manipulation of the zone, automatically manipulating all of the content contained within the zone; and in response to a user-based manipulation of selected ones of the content within the zone, manipulating only the selected ones of the content.
If a plurality of zones has been defined, then at least a pair of the plurality of zones may overlap. The overlapping section of the pair of zones is subject to the combined set of restrictions of each of the pair of zones.
In accordance with another aspect, there is provided an interactive input system comprising: a touch surface; memory comprising computer readable instructions; and a processor configured to implement the computer readable instructions to: provide a canvas on the touch surface via which a plurality of users can collaborate; define at least one zone within the canvas into which users can contribute content; in response to a user-based manipulation of the at least one zone, automatically manipulate the content contained within the at least one zone; and in response to a user-based manipulation of selected ones of the content within the at least one zone, automatically manipulate only the selected ones of the content.
Embodiments of the invention will now be described by way of example only with reference to the accompanying drawings in which:
For convenience, like numerals in the description refer to like structures in the drawings. Referring to
The interactive board 22 employs machine vision to detect one or more pointers brought into a region of interest in proximity with the interactive surface 24. The interactive board 22 communicates with a general purpose computing device 28 executing one or more application programs via a universal serial bus (USB) cable 32 or other suitable wired or wireless communication link. General purpose computing device 28 processes the output of the interactive board 22 and adjusts image data that is output to the interactive board 22, if required, so that the image presented on the interactive surface 24 reflects pointer activity. In this manner, the interactive board 22 and general purpose computing device 28 allow pointer activity proximate to the interactive surface 24 to be recorded as writing or drawing or used to control execution of one or more application programs executed by the general purpose computing device 28.
Imaging assemblies (not shown) are accommodated by the bezel 26, with each imaging assembly being positioned adjacent a different corner of the bezel. Each imaging assembly comprises an image sensor and associated lens assembly that provides the image sensor with a field of view sufficiently large as to encompass the entire interactive surface 24. A digital signal processor (DSP) or other suitable processing device sends clock signals to the image sensor causing the image sensor to capture image frames at the desired frame rate. The imaging assemblies are oriented so that their fields of view overlap and look generally across the entire interactive surface 24. In this manner, any pointer such as for example a user's finger, a cylinder or other suitable object, a pen tool 40 or an eraser tool that is brought into proximity of the interactive surface 24 appears in the fields of view of the imaging assemblies and thus, is captured in image frames acquired by multiple imaging assemblies.
When the imaging assemblies acquire image frames in which a pointer exists, the imaging assemblies convey the image frames to a master controller. The master controller in turn processes the image frames to determine the position of the pointer in (x,y) coordinates relative to the interactive surface 24 using triangulation. The pointer coordinates are then conveyed to the general purpose computing device 28 which uses the pointer coordinates to update the image displayed on the interactive surface 24 if appropriate. Pointer contacts on the interactive surface 24 can therefore be recorded as writing or drawing or used to control execution of application programs running on the general purpose computing device 28.
The general purpose computing device 28 in this embodiment is a personal computer or other suitable processing device comprising, for example, a processing unit, system memory (volatile and/or non-volatile memory), other non-removable or removable memory (e.g., a hard disk drive, RAM, ROM, EEPROM, CD-ROM, DVD, flash memory, etc.) and a system bus coupling the various computing device components to the processing unit. The general purpose computing device 28 may also comprise networking capability using Ethernet, WiFi, and/or other network formats, to enable connection to shared or remote drives, one or more networked computers, or other networked devices. The general purpose computing device 28 is also connected to the World Wide Web via the Internet.
The interactive input system 20 is able to detect passive pointers such as for example, a user's finger, a cylinder or other suitable objects as well as passive and active pen tools 40 that are brought into proximity with the interactive surface 24 and within the fields of view of imaging assemblies. The user may also enter input or give commands through a mouse 34 or a keyboard (not shown) connected to the general purpose computing device 28. Other input techniques such as voice or gesture-based commands may also be used for user interaction with the interactive input system 20.
Referring to
One or more participants can join the collaboration session by connecting their respective client computing devices 60 to the cloud server 90 via web browser applications running thereon. Participants of the collaboration session can all be co-located at a common site, or can alternatively be located at different sites. It will be understood that the computing devices may run any operating system such as Microsoft Windows™, Apple iOS, Apple OS X, Linux, Android and the like. The web browser applications running on the computing devices provide an interface to the remote host server, regardless of the operating system.
When a computing device user wishes to join the collaborative session, the client collaboration application 70 is launched on the computing device. Since, in this embodiment, the client collaboration application is in the form of a web browser application, an address of an instance of the server collaboration application 92, usually in the form of a uniform resource locator (URL), is entered into the web browser. This action results in a collaborative session join request being sent to the cloud server 90. In response, the cloud server 90 returns code, such as HTML5 code, to the client computing device 60. The web browser application launched on the computing device 60 in turn parses and executes the received code to display a shared two-dimensional workspace of the collaboration application within a window provided by the web browser application. The web browser application also displays functional menu items, buttons and the like within the window for selection by the user. Each collaboration session has a unique identifier associated with it, allowing multiple users to remotely connect to the collaboration session. The unique identifier forms part of the URL address of the collaboration session. For example, the URL “canvas.smartlabs.mobi/default.cshtml?c=270” identifies a collaboration session that has an identifier 270. Session data may be stored on the cloud server 90 and may be associated with the session identified by the session identifier during hypertext transfer protocol (HTTP) requests from any of the client devices 60 that have joined the session.
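As a minimal sketch, assuming a query parameter named "c" as in the example URL above and a hypothetical join endpoint, the session identifier could be extracted and supplied with a join request as follows; the endpoint path and payload are illustrative assumptions only.

```typescript
// Sketch of how a client join request might carry the session identifier.
// The URL format (query parameter "c") follows the example in the text;
// endpoint names and payloads are assumptions for illustration.
function sessionIdFromUrl(url: string): string | null {
  const parsed = new URL(url, "https://canvas.smartlabs.mobi");
  return parsed.searchParams.get("c"); // e.g. "270"
}

async function joinSession(url: string, userName: string): Promise<void> {
  const sessionId = sessionIdFromUrl(url);
  if (sessionId === null) throw new Error("URL does not identify a session");
  // Hypothetical join endpoint; the server would associate subsequent HTTP
  // requests from this client with the stored session data.
  await fetch(`https://canvas.smartlabs.mobi/api/sessions/${sessionId}/join`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ user: userName }),
  });
}

console.log(sessionIdFromUrl("https://canvas.smartlabs.mobi/default.cshtml?c=270")); // "270"
```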
The server collaboration application 92 communicates with each computing device joined to the collaboration session, and shares content of the collaboration session therewith. During the collaboration session, the collaboration application provides the two-dimensional workspace, referred to herein as a canvas, onto which input may be made by participants of the collaboration session using their respective client devices 60. The canvas is shared by all computing devices joined to the collaboration session.
Referring to
Only a portion of the canvas 134 is displayed because the canvas 134 is configured to be extended in size within its two-dimensional plane to accommodate new input as needed during the collaboration session. As will be understood, the ability of the canvas 134 to be extended in size within the two-dimensional plane as needed causes the canvas to appear to be generally infinite in size.
Each of the participants in the collaboration application can change the portion of the canvas 134 presented on their computing devices, independently of the other participants, through pointer interaction therewith. For example, the collaboration application, in response to one finger held down on the canvas 134, pans the canvas 134 continuously. The collaboration application is also able to recognize a "flicking" gesture, namely movement of a finger in a quick sliding motion over the canvas 134. The collaboration application, in response to the flicking gesture, causes the canvas 134 to be smoothly moved so that a new portion is displayed within the web browser application window 130.
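A minimal sketch of how the press-and-hold pan and the "flicking" gesture could be distinguished is shown below; the velocity and time thresholds, as well as the pointer-sample shape, are assumptions for illustration and not values specified herein.

```typescript
// Illustrative distinction between a press-and-hold pan and a "flick":
// thresholds and event shape are assumptions, not values from the text.
interface PointerSample { x: number; y: number; t: number } // t in milliseconds

function classifyGesture(samples: PointerSample[]): "flick" | "pan" | "none" {
  if (samples.length < 2) return "none";
  const first = samples[0];
  const last = samples[samples.length - 1];
  const dt = last.t - first.t;
  const dist = Math.hypot(last.x - first.x, last.y - first.y);
  const speed = dt > 0 ? dist / dt : 0;        // pixels per millisecond
  if (speed > 1.0 && dt < 300) return "flick"; // quick sliding motion
  if (dist > 5) return "pan";                  // slower, sustained drag
  return "none";
}

// A flick smoothly scrolls the canvas to a new portion; a pan follows the finger.
console.log(classifyGesture([
  { x: 0, y: 0, t: 0 },
  { x: 240, y: 0, t: 150 },
])); // "flick"
```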
However, one of the challenges when working in an extremely large or infinite space is organizing and managing the large amounts of content that may be created or added. Furthermore, once that space becomes collaborative, the challenge of managing users is added. The terms “user” and “participant” will be used interchangeably herein. Accordingly, the canvas is divided into a number of zones. Each zone is a defined area within the canvas that can group both content and participants and provide different levels of restrictions on them. As will be described, using zones facilitates several techniques that can be used to help manage both content and participants in a large shared space.
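One possible representation of a zone as a data structure, grouping content and participants and carrying its own restrictions, is sketched below; the field names, the hit-test helper and the zone-kind labels (which correspond to the zone types described later in this section) are illustrative assumptions.

```typescript
// A possible shape for a zone: an area of the canvas that groups content and
// participants and carries its own restrictions. Field names are assumptions.
type ZoneKind = "basic" | "contribution" | "segregated";

interface Rect { x: number; y: number; width: number; height: number }

interface Zone {
  id: string;
  kind: ZoneKind;
  bounds: Rect;              // area the zone occupies on the canvas
  contentIds: string[];      // content grouped with the zone
  authorizedUsers: string[]; // users the zone's permissions apply to
}

function zoneContains(zone: Zone, point: { x: number; y: number }): boolean {
  const b = zone.bounds;
  return point.x >= b.x && point.x <= b.x + b.width &&
         point.y >= b.y && point.y <= b.y + b.height;
}
```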
Referring to
Any content 308 added to the zone 300 is automatically correlated with the zone 300. When manipulating the zone 300, all of its content 308 is treated as a group and can be moved, hidden, shown or modified as a single group. At the same time, the ability to manage individual content is retained.
Referring to
Returning to 503, if the received instructions relate to content creation for a specified zone, then, at 507, content is created within the zone. At 509 content data associated with the created content is communicated to the client collaboration application 70 for display in the zone on the client computing device 60.
Returning again to 503, if the received instructions relate to content manipulation, then, at 510, it is determined if the zone is to be manipulated. If the zone is to be manipulated then, at 512, all the content in the zone is automatically manipulated. This can be accomplished, for example, by registering event handlers of the content 308 with event handlers of the zone 300 when the content 308 is added to the zone 300. Thus, any manipulation of the zone 300 can be automatically communicated to the event handlers of the content 308. When the content 308 is deleted or removed from the zone 300, the corresponding event handlers of the removed content 308 are deregistered from the event handlers of the zone 300.
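The following sketch illustrates the register/deregister pattern just described, in which zone-level manipulations fan out to the event handlers of the grouped content 308; the class and method names are assumptions for illustration.

```typescript
// Sketch of the register/deregister pattern described above: when content is
// added to a zone, its handler is registered with the zone so that zone-level
// manipulations fan out to every grouped item. Names are illustrative.
type ManipulationHandler = (dx: number, dy: number) => void;

class ZoneEvents {
  private handlers = new Map<string, ManipulationHandler>();

  register(contentId: string, handler: ManipulationHandler): void {
    this.handlers.set(contentId, handler);       // content added to the zone
  }

  deregister(contentId: string): void {
    this.handlers.delete(contentId);             // content removed or deleted
  }

  moveZone(dx: number, dy: number): void {
    // Manipulating the zone automatically notifies every registered handler.
    for (const handler of this.handlers.values()) handler(dx, dy);
  }
}

// Usage: each item moves whenever its zone is moved.
const zone = new ZoneEvents();
zone.register("note-1", (dx, dy) => console.log(`note-1 moved by ${dx},${dy}`));
zone.moveZone(10, 0); // note-1 moved by 10,0
```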
If the zone is not to be manipulated then, at step 511, only the selected content is manipulated. At 514, the manipulated content is communicated to the client collaboration application 70 for display on the client computing device 60.
Returning again to 503, if the received instructions relate to something other than zone creation, content creation or content manipulation, then, at 516, the instructions are processed accordingly.
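A compressed sketch of the instruction dispatch described at 503 through 516 is given below; the instruction names and payload shapes are assumptions, and each branch is reduced to a descriptive result string for brevity.

```typescript
// A compressed sketch of the dispatch described in the preceding steps.
// Instruction names and payload shapes are assumptions for illustration.
type Instruction =
  | { type: "createZone"; zoneId: string }
  | { type: "createContent"; zoneId: string; contentId: string }
  | { type: "manipulateZone"; zoneId: string; dx: number; dy: number }
  | { type: "manipulateContent"; contentIds: string[]; dx: number; dy: number }
  | { type: "other"; payload: unknown };

function handleInstruction(instr: Instruction): string {
  switch (instr.type) {
    case "createZone":
      return "zone defined and communicated to clients";
    case "createContent":
      return "content created within the zone (507) and sent to clients (509)";
    case "manipulateZone":
      return "all content in the zone manipulated (512) and sent to clients";
    case "manipulateContent":
      return "only the selected content manipulated (511) and sent to clients (514)";
    default:
      return "instruction processed accordingly (516)";
  }
}

console.log(handleInstruction({ type: "manipulateZone", zoneId: "z1", dx: 5, dy: 0 }));
```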
The ability to automatically manipulate all of the content 308 within the zone 300 by manipulating the zone 300 provides the advantages of multiple object selection and grouping, without the difficulties inherent in those two actions. Specifically, multiple object selection typically involves complicated algorithms and modifier keys to achieve the desired effect, and is especially difficult on touch devices without modifier keys. Grouping often means that the group must be ungrouped to be edited and then the desired objects must be selected again to be regrouped. With the zones 300, as described above, both of these challenges are eased, while still allowing for easy grouping and reorganizing of items.
A number of different types of zone 300 can be defined, each type of zone differing in restrictions and permissions applied to the zone 300. The restrictions and permissions are applied to the users accessing the canvas within the collaboration application. However, an administrator of the collaboration application can define super users, to whom the restrictions and permissions of the different types of zones 300 do not apply. For example, in a classroom environment, students may be designated as users and a teacher may be designated as a super user. In this manner, the students will be restricted by the restrictions and permissions applied to the zones 300 and the teacher will not be bound by the same restrictions and permissions.
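An illustrative permission check reflecting this arrangement is sketched below, in which zone restrictions apply to ordinary users but are bypassed by designated super users; the field and function names are assumptions.

```typescript
// Illustrative permission check: zone restrictions apply to ordinary users but
// not to designated super users (e.g. a teacher). Names are assumptions.
interface SessionUser { id: string; isSuperUser: boolean }

interface ZonePermissions { authorizedUserIds: string[] }

function canContribute(user: SessionUser, zone: ZonePermissions): boolean {
  if (user.isSuperUser) return true;            // super users bypass restrictions
  return zone.authorizedUserIds.includes(user.id);
}

const teacher = { id: "teacher-1", isSuperUser: true };
const student = { id: "student-7", isSuperUser: false };
const zone = { authorizedUserIds: ["student-1", "student-2"] };
console.log(canContribute(teacher, zone)); // true
console.log(canContribute(student, zone)); // false
```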
For example, referring to
When a user who does not have access to the contribution zone 300′, referred to as an unauthorized user, attempts to provide content to contribution zone 300′, the content is not accepted. The unauthorized user may be presented with a notification, in the form of a pop-up text for example, advising the user that s/he is not permitted to add content to the contribution zone 300′. Alternatively, any content added to the contribution zone 300′ by an unauthorized user may be moved from the contribution zone 300′ and placed outside of it. The movement of the content from an unauthorized user may be performed after a small delay so as to create a “bouncing” or “repelling” visual effect from inside the contribution zone 300′ to outside the contribution zone. As shown in
If a user is assigned to only one contribution zone 300′, the content 308 added to the canvas by that user may automatically be placed within the assigned contribution zone 300′. In an embodiment, unauthorized users can view and interact with the contribution zone 300′. For example, although unauthorized users cannot contribute content to the contribution zone 300′, they may be permitted to manipulate content already included therein.
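The two rejection behaviours described above, namely a notification to the unauthorized user and a delayed "bounce" of the content to outside the contribution zone 300′, could be sketched as follows; the notification text, delay and repositioning offset are assumptions for illustration.

```typescript
// Sketch of the two rejection behaviours described above: notify the
// unauthorized user, or place the content and then "bounce" it outside the
// contribution zone after a small delay. All names and timings are assumptions.
interface Placement { x: number; y: number; zoneId: string | null }

function rejectWithNotice(notify: (msg: string) => void): void {
  notify("You are not permitted to add content to this contribution zone.");
}

function bounceOutside(
  placement: Placement,
  zoneBounds: { x: number; y: number; width: number; height: number },
  move: (p: Placement) => void,
  delayMs = 250,
): void {
  setTimeout(() => {
    // Reposition just outside the right edge of the contribution zone.
    move({ x: zoneBounds.x + zoneBounds.width + 20, y: placement.y, zoneId: null });
  }, delayMs);
}
```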
An example of dividing a canvas into a plurality of contribution zones 300′ is described as follows. Using a Cartesian coordinate representation for the canvas, with the origin proximate a centre of the canvas, each of the quadrants (x>0, y>0); (x>0, y<0); (x<0, y>0); and (x<0, y<0) may be defined as a contribution zone 300′ to which a different subset of users may be assigned. In one implementation, authorized users in one quadrant may view the other three quadrants and manipulate the content therein, but may only contribute content to the quadrant in which they are authorized. In another implementation, only authorized users can view and interact with the contribution zone 300′.
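A sketch of the quadrant example is given below, in which each Cartesian quadrant is treated as a contribution zone 300′ assigned to a subset of users; the quadrant labels and the example user assignments are assumptions for illustration.

```typescript
// Sketch of the quadrant example: with the origin near the centre of the
// canvas, each Cartesian quadrant is a contribution zone assigned to a subset
// of users. Quadrant naming and the example assignments are assumptions.
type Quadrant = "x>0,y>0" | "x>0,y<0" | "x<0,y>0" | "x<0,y<0" | "axis";

function quadrantOf(x: number, y: number): Quadrant {
  if (x === 0 || y === 0) return "axis"; // points on an axis fall in no quadrant
  if (x > 0) return y > 0 ? "x>0,y>0" : "x>0,y<0";
  return y > 0 ? "x<0,y>0" : "x<0,y<0";
}

// Example assignment of user subsets to quadrant contribution zones.
const assignments: Record<Exclude<Quadrant, "axis">, string[]> = {
  "x>0,y>0": ["AA", "BB"],
  "x>0,y<0": ["CC", "DD"],
  "x<0,y>0": ["EE", "FF"],
  "x<0,y<0": ["GG", "HH"],
};

console.log(quadrantOf(12, -3)); // "x>0,y<0" — only CC and DD may contribute here
console.log(assignments["x>0,y<0"]);
```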
As another example, referring to
For example, as illustrated in
The segregated zone 300″ may be converted to a contribution zone 300′ or basic zone 300 once a predefined task associated with the segregated zone 300″ is complete. Once the segregated zone 300″ is converted, the user will no longer be locked therein and will only be subject to the rules and restrictions of the zone to which the segregated zone is converted. For example, there may be no restrictions on the zone so that the users assigned to the zone may now freely use the entire workspace with full access to create, view, delete and manipulate content as well as pan and zoom-in/zoom-out throughout the workspace.
Referring once again to
If Zone 1 is converted to a contribution zone 300′, then the users AA and BB will be able to see other zones. However, only users AA and BB will be permitted to provide content to Zone 1. If Zone 2 is converted to a contribution zone 300′, then the users CC and DD will be able to see other zones. However, only users CC and DD will be able to provide content to Zone 2. If Zone 1 and Zone 2 are converted to contribution zones 300′, then the users AA, BB, CC, and DD will be able to see other zones, but only users AA and BB will be able to provide content to Zone 1 and only users CC and DD will be able to provide content to Zone 2.
The segregated zone 300″ can be converted into another type of zone in response to a number of different criteria. For example, the segregated zone 300″ can be converted automatically once the users assigned therein have provided content that meets predefined criteria. As another example, the segregated zone 300″ can be converted automatically after a predefined period of time. As yet another example, the super user can convert the segregated zone 300″ manually once the super user decides either enough time has passed or sufficient content has been provided by the users.
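These conversion criteria could be combined as in the following sketch, in which a segregated zone 300″ is converted when a content criterion is met, a predefined time limit expires, or a super user requests conversion manually; the field names and the single boolean test are assumptions.

```typescript
// Sketch of the conversion criteria listed above: content-based, time-based,
// or manual (super user). Field names and thresholds are assumptions.
interface SegregatedZoneState {
  contentCount: number;
  requiredContentCount: number;  // predefined content criterion
  elapsedMs: number;
  timeLimitMs: number;           // predefined period of time
  superUserRequestedConversion: boolean;
}

function shouldConvert(state: SegregatedZoneState): boolean {
  return (
    state.contentCount >= state.requiredContentCount ||   // criteria met
    state.elapsedMs >= state.timeLimitMs ||               // time expired
    state.superUserRequestedConversion                     // manual conversion
  );
}

console.log(shouldConvert({
  contentCount: 3, requiredContentCount: 5,
  elapsedMs: 600000, timeLimitMs: 600000,
  superUserRequestedConversion: false,
})); // true — the time limit has been reached
```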
Any of the basic zone 300, contribution zone 300′ and segregated zone 300″ can also be removed so that the content included therein becomes part of the canvas without any of the features and restrictions provided by the zones.
Yet further, the zones 300, 300′ and 300″ can overlap to provide additional levels of collaboration between users. Referring to
If the first zone 354 and the second zone 356 are both basic zones 300, then the behaviour of the overlap zone 352 is no different than the rest of the first zone 354 and the second zone 356.
If the first zone 354 is a basic zone 300 and the second zone 356 is a contribution zone 300′, then the behaviour of the overlap zone 352 mimics the first zone 354. This allows users of the second zone 356 to interact with other, unauthorized users within the second zone 356. Similarly, if the second zone 356 is a basic zone 300 and the first zone 354 is a contribution zone 300′, then the behaviour of the overlap zone 352 mimics the second zone 356.
If the first zone 354 is a basic zone 300 and the second zone 356 is a segregated zone 300″, then the behaviour of the overlap zone 352 mimics the first zone 354. This allows users of the second zone 356 to interact with other, unauthorized users who would otherwise be invisible to the users of the second zone 356. Similarly, if the second zone 356 is a basic zone 300 and the first zone 354 is a segregated zone 300″, then the behaviour of the overlap zone 352 mimics the second zone 356.
If both the first zone 354 and the second zone 356 are contribution zones 300′, then the behaviour of the overlap zone 352 mimics the contribution zone 300′. However, the users from both the first zone 354 and the second zone 356 can contribute content in the overlap zone 352.
If the first zone 354 is a contribution zone 300′ and the second zone 356 is a segregated zone 300″, then the behaviour of the overlap zone 352 mimics the first zone 354. This allows users of the second zone 356 to interact with the users of the first zone 354, who would otherwise be invisible to the users of the second zone 356. Similarly, if the second zone 356 is a contribution zone 300′ and the first zone 354 is a segregated zone 300″, then the behaviour of the overlap zone 352 mimics the second zone 356.
If both the first zone 354 and the second zone 356 are segregated zones 300″, then the behaviour of the overlap zone 352 mimics the segregated zone 300″. However, the users from both the first zone 354 and the second zone 356 are only visible to each other and can only contribute content in the overlap zone 352.
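One way to express the overlap behaviour described above is that the overlap zone 352 takes on the behaviour of the less restrictive of the two zone types, with the authorized users of both zones pooled; the following sketch encodes that reading, and the numeric ranking is an assumption rather than a definition from this description.

```typescript
// One way to express the overlap rules above: the overlap region takes the
// behaviour of the less restrictive of the two zone types, with the
// authorized users of both zones pooled. This encoding is an assumption.
type ZoneKind = "basic" | "contribution" | "segregated";

const restrictiveness: Record<ZoneKind, number> = {
  basic: 0,         // least restrictive
  contribution: 1,
  segregated: 2,    // most restrictive
};

function overlapKind(a: ZoneKind, b: ZoneKind): ZoneKind {
  return restrictiveness[a] <= restrictiveness[b] ? a : b;
}

function overlapUsers(usersA: string[], usersB: string[]): string[] {
  return Array.from(new Set([...usersA, ...usersB])); // pooled for the overlap
}

console.log(overlapKind("contribution", "segregated")); // "contribution"
console.log(overlapUsers(["AA", "BB"], ["CC", "DD"]));  // ["AA","BB","CC","DD"]
```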
As described above, different zone types can be made more or less restrictive depending on how the zones are to be used. For example, the zones can be restricted so that the authorized users can only view the zone to which their access is restricted. In cases where there is a clear leader, such as in a classroom environment with teachers and students, for example, the leader could be designated as the super user and given special privileges to control and monitor all zones, regardless of their restrictions and permissions.
Further, the zones 300, 300′, 300″ can be given backgrounds, including template backgrounds, thereby providing group or individual activity spaces within each zone 300, 300′, 300″.
Yet further, although the contribution zone 300′ and the segregated zone 300″ are described as types of zones, other types of zones will become apparent to a person skilled in the art.
Referring to
As described above, the collaboration application is executed via a web browser application executing on the user's computing device. In an alternative embodiment, the collaboration application is implemented as a standalone application running on the user's computing device. The user gives a command (such as by clicking an icon) to start the collaboration application. The collaboration application starts and connects to the remote host server using the URL. The collaboration application then displays the canvas to the user along with the functionality accessible through buttons and/or menu items.
In the embodiments described above, the content in the zone is automatically manipulated using event handlers. Alternatively, callback procedures may be used. In this implementation, each content object may register its event handler routine as a callback procedure with a contact event monitor. In the event that the zone is manipulated, the contact event monitor calls the registered callback procedures or routines for each of the affected content objects such that each graphical object is manipulated.
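A sketch of the callback alternative is shown below, in which each content object registers a callback with a contact event monitor that invokes all registered callbacks when the zone is manipulated; the class and method names are assumptions.

```typescript
// Sketch of the callback alternative: each content object registers its
// handler as a callback with a contact event monitor, which invokes the
// registered callbacks when the zone is manipulated. Names are assumptions.
type Callback = (event: { dx: number; dy: number }) => void;

class ContactEventMonitor {
  private callbacks: Callback[] = [];

  registerCallback(cb: Callback): void {
    this.callbacks.push(cb);
  }

  zoneManipulated(dx: number, dy: number): void {
    for (const cb of this.callbacks) cb({ dx, dy }); // each object is manipulated
  }
}

const monitor = new ContactEventMonitor();
monitor.registerCallback(({ dx, dy }) => console.log(`object moved ${dx},${dy}`));
monitor.zoneManipulated(4, 2);
```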
In another embodiment, bindings may be used. In this implementation, the event handlers of each content object may be bound to a function or routine that is provided, for example, in a library. When the zone it to be manipulated, the corresponding bound library routine is used to process the manipulation.
Although in embodiments described above, the interactive input system is described as being in the form of an LCD screen employing machine vision, those skilled in the art will appreciate that the interactive input system may take other forms and orientations. The interactive input system may employ FTIR, analog resistive, electromagnetic, capacitive, acoustic or other technologies to register input. For example, the interactive input system may employ: an LCD screen with camera-based touch detection (such as SMART Board™ Interactive Display model 8070i); a projector-based interactive whiteboard (IWB) employing analog resistive detection (such as SMART Board™ IWB Model 640); a projector-based IWB employing surface acoustic wave (SAW) detection; a projector-based IWB employing capacitive touch detection; a projector-based IWB employing camera-based detection (such as SMART Board™ model SBX885ix); a table (such as SMART Table™, described in U.S. Patent Application Publication No. 2011/069019 assigned to SMART Technologies ULC of Calgary); a slate computer (such as SMART Slate™ Wireless Slate Model WS200); and a podium-like product (such as SMART Podium™ Interactive Pen Display) adapted to detect passive touch (for example fingers, pointer, and the like, in addition to or instead of active pens); all of which are provided by SMART Technologies ULC of Calgary, Alberta, Canada.
Other interactive input systems that utilize touch interfaces such as for example tablets, smartphones with capacitive touch surfaces, flat panels having touch screens, track pads, interactive tables, and the like may embody the above described interactive interface.
Those skilled in the art will appreciate that the host application described above may comprise program modules including routines, object components, data structures, and the like, embodied as computer readable instructions stored on a non-transitory computer readable medium. The non-transitory computer readable medium is any data storage device that can store data. Examples of non-transitory computer readable media include for example read-only memory, random-access memory, CD-ROMs, magnetic tape, USB keys, flash drives and optical data storage devices. The computer readable instructions may also be distributed over a network including coupled computer systems so that the computer readable program code is stored and executed in a distributed fashion.
Although embodiments have been described above with reference to the accompanying drawings, those of skill in the art will appreciate that variations and modifications may be made without departing from the scope thereof as defined by the appended claims.
This application claims priority to U.S. Provisional Application No. 62/094,970 filed Dec. 20, 2014. The present invention relates generally to collaboration within an interactive workspace, and in particular to a system and method for facilitating collaboration by providing zones within the interactive workspace.