The entire disclosure of Japanese Patent Application No. 2015-049734 filed on Mar. 12, 2015, including description, claims, drawings, and abstract, is incorporated herein by reference in its entirety.
1. Technological Field
The present invention relates to a conference support apparatus, a conference support system, and a non-transitory computer-readable recording medium storing a conference support program.
2. Background
In recent years, conference support apparatuses have been proposed in which an electronic blackboard provided with a touch panel or the like is used to store screen transition in accordance with user operation and to manage the flow of the conference in a time-series manner (see, e.g., Japanese Patent Applications Laid-Open No. 2003-339033 and No. 2010-176216).
The conference support apparatuses manage the flow of the conference in a time-series manner and thus can reproduce the state of the screen at an arbitrary time point.
In general, a plurality of terminal apparatuses each having a display section and an operation section are connected to the conference support apparatus through a network such as the Internet or a wireless/wired LAN (Local Area Network). Hereinafter, a system in which a plurality of terminal apparatuses are communicably connected to the conference support apparatus is referred to as a "conference support system." Note that one of the terminal apparatuses may serve as the conference support apparatus.
In such a conference support system, the conference support apparatus manages a cooperative work region. Each of the plurality of terminal apparatuses displays part or all of the cooperative work region on its display section as an individual work region, and work is performed on each individual work region.
In such a conference support system, each user can perform, by operating his or her terminal apparatus, various object operations such as moving, expanding, and contracting with respect to various objects, for example, text boxes in which letters and marks are input, and diagrams. The object to be operated by the object operation (hereinafter referred to as "target object for operation") is selected by a touch operation or an enclosing operation, for example. The touch operation is an operation to directly select an object and includes an operation to touch the touch panel with a finger or stylus (including a multi-touch operation) and a mouse clicking operation. The enclosing operation is an operation to select the objects within an enclosed region at once and includes an operation to slide a finger or stylus on the touch panel and a mouse drag operation.
In a case where the operation section is formed by a large-screen touch panel such as an electronic blackboard and object operations are performed on this touch panel, it is easy to know whether a touch operation or an enclosing operation is about to be performed, i.e., it is easy to identify the target object for operation based on the position or line of sight of the user. Thus, it is unlikely that object operations for the same object or group are performed almost simultaneously. Hereinafter, the following object operations are referred to as "conflicting operations": a plurality of object operations that are performed on the same object or group almost simultaneously although they are not allowed to be performed simultaneously; and an object operation that is performed on an object or group targeted by another object operation, either before the screen reflecting the other object operation is displayed on the display section or while the other object operation is still in progress.
In a case where the users participate in a conference at various locations and perform object operations on their respective terminal apparatuses, it is difficult for each user to know the behavior of the other users, so that a conflicting operation is likely to occur. When the individual work regions of a plurality of terminal apparatuses overlap each other, the users may be mutually notified of the individual work regions; however, such notification merely informs the users that a conflicting operation may occur. In other words, each user cannot know an object operation performed on another terminal apparatus until an object selection operation such as a touch operation or an enclosing operation is reflected (until the selected state of the object is displayed) on the display section of his or her own terminal apparatus. For this reason, this attempt is not a sufficient solution for conflicting operations.
An object of the present invention is to provide a conference support apparatus, a conference support system, and a non-transitory computer-readable recording medium storing a conference support program which can improve the efficiency of a conference by predicting an object operation to be performed on a certain terminal apparatus and allowing another terminal apparatus to know the result of the prediction, thereby avoiding the occurrence of a conflicting operation.
To achieve the abovementioned object, a conference support apparatus reflecting one aspect of the present invention is used in a conference support system in which a plurality of terminal apparatuses configured to perform work on a cooperative work region are communicably connected to each other, each of the terminal apparatuses including: a display which displays an object; and an operation acceptor which accepts an operation of the object by a user, the conference support apparatus including a controller connected communicably to the display and the operation acceptor, the controller including at least a microprocessor, wherein the controller manages an individual work region that is applicable to a display region of each of the plurality of terminal apparatuses, analyzes an operation performed by a user via the operation acceptor, causes each of the displays to display a screen in which an object operation in each of the individual work regions is reflected, predicts a target object for operation based on a result of the analysis, and suppresses an operation of the predicted target object in a specific terminal apparatus that includes the predicted target object in the individual work region, the specific terminal apparatus being a terminal apparatus other than a terminal apparatus whose object has been predicted as the target object according to the operation thereon, among the plurality of terminal apparatuses.
A conference support system reflecting one aspect of the present invention includes: the conference support apparatus according to the aspect of the present invention mentioned above; and a terminal apparatus having at least one of the display and the operation acceptor, and communicably connected to the conference support apparatus.
A non-transitory computer-readable recording medium reflecting one aspect of the present invention is a computer-readable recording medium storing a conference support program configured to cause a computer of a conference support apparatus to execute processing, the conference support apparatus being used in a conference support system in which a plurality of terminal apparatuses configured to perform work on a cooperative work region are communicably connected to each other, each of the terminal apparatuses including: a display which displays an object; and an operation acceptor used to operate the object by a user, the processing including: managing an individual work region that is applicable to a display region of each of the plurality of terminal apparatuses, analyzing an operation performed by a user via the operation acceptor, causing each of the displays to display a screen in which an object operation in each of the individual work regions is reflected, predicting a target object that is a target for the object operation based on a result of the analysis, and suppressing an operation of the predicted target object in a specific terminal apparatus that includes the predicted target object in the individual work region, the specific terminal apparatus being a terminal apparatus other than a terminal apparatus whose object has been predicted as the target object according to the operation thereon, among the plurality of terminal apparatuses.
The advantages and features provided by one or more embodiments of the invention will become more fully understood from the detailed description given hereinbelow and the appended drawings, which are given by way of illustration only and thus are not intended as a definition of the limits of the present invention.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings. However, the scope of the invention is not limited to the illustrated examples.
As illustrated in the drawings, conference support system 1 includes conference support apparatus 10 and terminal apparatuses 20A and 20B, which are communicably connected to one another.
Conference support apparatus 10 is composed of an electronic whiteboard, a projector, a server apparatus, a personal computer, or a mobile terminal (such as a smartphone, tablet terminal, or notebook computer), for example. In the present embodiment, a description will be given of an exemplary case where an electronic whiteboard that functions as a display section and an operation section of conference support system 1 is adopted as conference support apparatus 10. Note that the display section and the operation section of conference support system 1 need not be included in conference support apparatus 10 as long as they are communicably connected to conference support apparatus 10.
Terminal apparatus 20 is composed of a mobile terminal such as a smartphone, tablet terminal, or notebook computer, or a desktop computer or the like. Terminal apparatus 20 functions as the display section or the operation section of conference support system 1.
As described above, conference support system 1 is composed of a plurality of terminal apparatuses (conference support apparatus 10 and terminal apparatuses 20A and 20B) that are communicably connected to each other, each of which includes a display section for displaying an object and an operation section for operating an object and is used for work on a cooperative work region. Note that terminal apparatus 20 may be placed in the same conference room as conference support apparatus 10, or may be placed remotely from conference support apparatus 10. In addition, the number of terminal apparatuses 20 is not limited to a particular number.
Conference support apparatus 10 stores screen transition based on user operations and manages the flow of the conference in a time-series manner. Specifically, when a user participating in a conference adds an object representing an element of the proceedings to the display region of display section 13 or 23, or operates a displayed object using operation section 12 or 22 of conference support apparatus 10 or terminal apparatus 20, information relating to the screen at this time (hereinafter referred to as "screen information") is stored. In addition, the screen displayed on display section 13 of conference support apparatus 10 is reflected on display section 23 of terminal apparatus 20.
Here, the object is data to be operated, and is displayed on display section 13 in the form of a text box in which a letter or sign is input, a graphic, a photographic image, a work region (window) of an application, or the like. In the present embodiment, the object is displayed in the form of a simple graphic. In addition, operations of changing the state of an object, such as adding (newly creating), moving, editing, resizing (expanding or contracting), deleting, grouping, and ungrouping objects, are referred to as "object operations." Such an object operation is performed after a target object for operation is selected by an object selection operation such as a touch operation or an enclosing operation.
Note that "grouping" is an operation of assigning a plurality of objects to one group. The objects thus grouped can be collectively moved, and can be simultaneously expanded or contracted. In addition, a touch operation and an enclosing operation can select not only a single object but also a plurality of objects.
In conference support system 1, conference support apparatus 10 manages the maximum work region (cooperative work region) to be handled by conference support system 1. Individual work region R1 of conference support apparatus 10 is assumed to be the same as the cooperative work region. Moreover, in terminal apparatuses 20A and 20B, the cooperative work region is partially or entirely displayed on display sections 23A and 23B as individual work regions R2 and R3, and work is to be done in individual work regions R2 and R3 (see the drawings).
In the illustrated example, display section 13 displays main screen MD, on which objects are displayed, and sub-screen SD, on which timeline TL and marker M are displayed.
The user performs an object operation on main screen MD, and a timeline operation on sub-screen SD. The timeline operation refers to an operation performed utilizing timeline TL, and includes an operation of moving marker M, and a branching operation of branching a discussion. For example, the user can reproduce a screen at an arbitrary time point on main screen MD by moving marker M on timeline TL.
As illustrated in the drawings, conference support apparatus 10 includes control section 11, operation section 12, display section 13, storage section 14, communication section 15, and approach-operation detection section 16, for example.
Control section 11 includes central processing unit (CPU) 111, such as a microprocessor, serving as a computing/controlling apparatus, and read only memory (ROM) 112 and random access memory (RAM) 113 serving as main storage apparatuses. ROM 112 stores therein basic setting data and a basic program called a basic input output system (BIOS). CPU 111 reads out a program suited to the processing contents from ROM 112 or storage section 14, deploys the program in RAM 113, and controls each block in cooperation with the deployed program.
Operation section 12 and display section 13 are composed of a flat panel display provided with a touch panel, for example. Various kinds of known devices, such as liquid crystal displays, organic EL displays, and electronic paper displays having a memory feature, are adaptable as the flat panel display. Hereinafter, a component element, such as a flat panel display, that serves as both operation section 12 and display section 13 is referred to as "operation display section 17."
Operation section 12 receives handwritten input, an object operation, and a timeline operation, as well as a touch operation and an enclosing operation (to be described hereinafter) performed by users, and outputs a signal in accordance with the received operation to control section 11. Display section 13 displays various kinds of information on main screen MD and sub-screen SD in accordance with the display control information input from control section 11. Note that, hereinafter, a description will be given based on the assumption that users mainly use fingers to perform the operations, but the users may perform the operations using a part of the body other than a finger or a contact member such as a stylus, and the same applies to each terminal apparatus. In addition, an input device such as a mouse or keyboard may be provided as operation section 12.
Storage section 14 is, for example, an auxiliary storage such as a hard disk drive (HDD), a solid state drive (SSD), or a secure digital (SD) card, and stores therein a conference support program and information relating to a screen, for example. Storage section 14 includes object information table 141, screen transition information table 142, timeline storage section 143, and individual work region table 144, for example (see the drawings).
Communication section 15 is, for example, a communication interface such as a network interface card (NIC), a modulator-demodulator (MODEM), or a universal serial bus (USB). Control section 11 transmits and receives various kinds of information to and from terminal apparatus 20 connected to a network such as a wired/wireless LAN through communication section 15. Communication section 15 may be composed of a near field wireless communication interface such as near field communication (NFC) or Bluetooth (registered trademark), for example.
Approach-operation detection section 16 is a sensor configured to detect a finger position of the user with respect to operation display section 17. Approach-operation detection section 16 acquires the x and y coordinates corresponding to the finger position of the user projected onto operation display section 17 (display section 13), and distance z from the tip of the finger to operation display section 17 (display section 13). Whether the user intends to select an object by a touch operation can be determined based on the three-dimensional coordinates (x, y, z) of the finger of the user. Kinect (registered trademark) may be applied to approach-operation detection section 16. Kinect measures the distance to a real object by projecting a special infrared pattern onto the object with an infrared projector, capturing the pattern distorted by the object with an infrared camera (depth sensor), and analyzing the captured pattern, for example.
Terminal apparatus 20 includes control section 21, operation section 22, display section 23, storage section 24, communication section 25, and approach-operation detection section 26, for example. A component element serving as operation section 22 and display section 23 is referred to as “operation display section 27.” The configurations of the blocks are substantially the same as those of blocks 11 to 17 of conference support apparatus 10, so that the description will not be repeated.
Control section 21 of terminal apparatus 20 transmits operation information (an object operation or a timeline operation) input from operation section 22 to conference support apparatus 10 through communication section 25 when a predetermined transmission operation is performed. The term "predetermined transmission operation" used herein refers to an operation of a transmission key displayed on display section 23 or a flick operation on operation display section 27, for example. Control section 21 receives, via communication section 25, display control information transmitted from conference support apparatus 10, and causes display section 23 to display the information.
In addition, control section 21 constantly transmits the approach operation information (three-dimensional information on the finger) acquired by approach-operation detection section 26 to conference support apparatus 10 via communication section 25.
In a case where conference support apparatus 10 and terminal apparatuses 20A and 20B in conference support system 1 are placed at different locations, it is difficult for each user to know the behavior of the other users. Accordingly, a conflicting operation easily occurs in this case. In the present embodiment, the target object for operation is predicted based on the behavior of the user at conference support apparatus 10 or terminal apparatus 20A or 20B. For example, the occurrence of a conflicting operation is avoided by allowing conference support apparatus 10 and terminal apparatus 20B to know the target object predicted on terminal apparatus 20A.
As illustrated in the drawings, control section 11 includes user operation analysis section 11A, screen information recording section 11B, branch information recording section 11C, timeline creation section 11D, individual work region recording section 11E, object prediction section 11F, object operation suppressing section 11G, and display control section 11H.
User operation analysis section 11A analyzes operation information input from operation section 12 or communication section 15, and identifies the operation performed by the user. Screen information recording section 11B, branch information recording section 11C, timeline creation section 11D, individual work region recording section 11E, object prediction section 11F, and display control section 11H execute predetermined processing preliminarily associated with the contents of operations (e.g., enlarging the object by pinching out or the like), based on the user operation identified by user operation analysis section 11A.
The term "user operation" used herein includes an operation that is about to be performed by the user on operation display section 17 (a finger approach operation with respect to operation display section 17) in addition to an operation actually performed by the user using operation section 12. More specifically, user operation analysis section 11A can determine whether an enclosing operation is being performed based on a finger slide operation (an operation to first touch the screen with a finger and then slide the finger on the screen) on operation display section 17. In addition, user operation analysis section 11A can determine whether a touch operation is about to be performed based on a finger approach operation with respect to operation display section 17, which is detected by approach-operation detection section 16 or 26.
Screen information recording section 11B records the flow of a conference (screen transition) based on the object operation by the user in storage section 14 as screen information. The screen information is information representing elements of a screen and the time when these elements are created and changed. The screen information includes object information for individually managing operations with respect to objects or groups, and screen transition information for managing the flow of a conference in a time-series manner. The object information is stored in object information table 141 of storage section 14, and the screen transition information is stored in screen transition information table 142 of storage section 14.
Branch information recording section 11C records branch information in screen transition information table 142 based on a branching operation (included in the timeline operation) performed by the user. The branching operation is an operation of generating a branch in timeline TL, and includes, for example, an object operation performed on main screen MD at an arbitrary time point displayed by moving marker M on timeline TL, and an operation of requesting creation of a branch on timeline TL (for example, an operation of selecting "create branch" from a context menu that is displayed in response to a press-and-hold operation at an arbitrary time point on timeline TL). Alternatively, a predetermined gesture operation on timeline TL may be assigned as the branching operation.
Timeline creation section 11D refers to the information of screen transition information table 142 and creates timeline TL. When branch information to be described hereinafter is recorded in screen transition information table 142, timeline TL having a branched structure is created. The information on timeline TL thus created is stored in timeline storage section 143 of storage section 14, for example. Timeline TL may include thumbnails of representative screens (for example, screens representing a conclusion and a branch point), and thumbnails of newly created objects. For example, timeline creation section 11D creates and updates timeline TL at predetermined time intervals or in response to an object operation by the user. Timeline TL may be displayed in a size that fits sub-screen SD by converting the time information into a time-axis length, or may be displayed such that its entirety can be viewed by scrolling on sub-screen SD.
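By way of illustration only, the conversion from time information to a time-axis length mentioned above could be sketched as follows. This is a minimal sketch, not part of the disclosure; the function name and the pixel-based axis length are assumptions.

```python
from datetime import datetime

def time_to_axis_position(t: datetime, t_start: datetime, t_end: datetime,
                          axis_length_px: float) -> float:
    """Map an operation time onto a timeline axis of the given length,
    e.g., to fit timeline TL to the width of sub-screen SD."""
    span = (t_end - t_start).total_seconds() or 1.0  # avoid division by zero
    return axis_length_px * (t - t_start).total_seconds() / span
```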
Individual work region recording section 11E records information on the respective individual work regions of conference support apparatus 10 and terminal apparatuses 20A and 20B in individual work region table 144 of storage section 14. The information on the individual work regions includes information indicating the position and size of each individual work region in the cooperative work region as well as information on the objects included in the individual work region. The individual work region management section is composed of individual work region recording section 11E and individual work region table 144.
Object prediction section 11F predicts a target object for operation that is a target for an object operation in each of the individual work regions based on the result of analysis of user operation analysis section 11A. Object prediction section 11F predicts, when an enclosing operation, for example, is performed, an object that is a target for the enclosing operation based on a trajectory of a slide operation or the like. In addition, object prediction section 11F predicts, when a touch operation, for example, is performed, an object that is a target for the touch operation based on spatial coordinates of a finger of the user (particularly, x and y coordinates).
Object operation suppressing section 11G suppresses, in a specific terminal apparatus whose individual work region includes an object predicted as a target object for operation by object prediction section 11F (hereinafter referred to as "predicted object"), operations on the predicted object. For example, object operation suppressing section 11G notifies the specific terminal apparatus including the predicted object in its individual work region that this predicted object may become a target object for operation.
Display control section 11H generates display control information (screen data) for displaying a screen based on the user operation, and causes display section 13 to perform a display operation based on the screen data, or transmits the screen data to terminal apparatus 20 through communication section 15, thereby causing display section 23 of terminal apparatus 20 to perform a display operation. When generating display control information, display control section 11H acquires required information from storage section 14. The display control information includes screen display control information for displaying a screen in which the object operation is reflected, and timeline display control information for displaying timeline TL created by timeline creation section 11D. When an enclosing operation is performed on conference support apparatus 10 or terminal apparatus 20A or 20B, for example, display control section 11H updates the trajectory of the enclosing operation as needed and causes display section 13 or 23 to display the trajectory.
The “object ID” is identification information that is given to each object when an object or a group is newly created. The “operation content” is information representing an operation performed on an object or a group. The “operation content” of an object includes new creation, movement, editing, resizing, deletion, grouping, ungrouping and the like, for example. The “operation content” of a group includes group creation, movement, editing, resizing, ungrouping and the like, for example.
The “operation time” is information representing the time at which an object operation is executed. The “meta data” is detailed information on an object or a group. The “meta data” of an object includes the image information, text information, position information (coordinates) and size of the object, for example. The “meta data” of a group includes the image information of the group region, position information (coordinates), size of the group region, and object IDs of objects of the group, for example. The “object ID,” “operation content,” “operation time,” and “meta data” are stored in storage section 14 by screen information recording section 11B.
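As a minimal sketch only, one row of object information table 141 might be modeled as follows. The dataclass representation and the concrete field types are assumptions based on the description above, not the disclosed implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ObjectRecord:
    """One row of object information table 141 (illustrative model only)."""
    object_id: str            # "object ID": given when an object or group is newly created
    operation_content: str    # e.g., new creation, movement, editing, resizing
    operation_time: datetime  # time at which the object operation was executed
    meta_data: dict = field(default_factory=dict)
    # "meta data": image/text information, position information (coordinates),
    # and size; for a group, also the object IDs of its member objects.
```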
According to the drawings, every time an object is added or changed, a record containing the "object ID," "operation content," "operation time," and "meta data" is added to object information table 141 in a time-series manner.
As described above, adding data in a time-series manner every time an object is added or changed makes the data configuration simple and enables easier recognition of the screen transition over the course of time.
As illustrated in the drawings, screen transition information table 142 stores a "branch ID" in association with the screen transition information.
The “branch ID” is identification information that is given when a branching operation is performed, and the same identification information is given to the screen on which the branching operation is performed. The “branch ID” is recorded in storage section 14 by branch information recording section 11C.
As illustrated in the drawings, individual work region table 144 stores, for each terminal apparatus, information on the "work region" and the "display object."
When a display region is changed by a scrolling operation or the like, or an object is added or deleted on display section 23A or 23B of terminal apparatus 20A or 20B, the "work region" and "display object" information is updated.
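The following is a hedged sketch of how individual work region table 144 and the update on scrolling might be realized; all type and function names here are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle within the cooperative work region."""
    x: float
    y: float
    width: float
    height: float

def rects_overlap(a: Rect, b: Rect) -> bool:
    """True when the two rectangles share any area."""
    return (a.x < b.x + b.width and b.x < a.x + a.width
            and a.y < b.y + b.height and b.y < a.y + a.height)

@dataclass
class IndividualWorkRegion:
    """One row of individual work region table 144 (illustrative model only)."""
    terminal_id: str
    work_region: Rect     # position and size within the cooperative work region
    display_objects: set  # object IDs currently shown in this region

def update_on_scroll(region: IndividualWorkRegion, new_rect: Rect,
                     object_rects: dict) -> None:
    """Recompute the 'work region' and 'display object' information when the
    display region of a terminal apparatus is changed by scrolling."""
    region.work_region = new_rect
    region.display_objects = {
        object_id for object_id, rect in object_rects.items()
        if rects_overlap(new_rect, rect)
    }
```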
At step S101, control section 11 determines whether a finger of the user approaches operation display section 17 of conference support apparatus 10 or operation display section 27 of terminal apparatus 20 (processing performed as user operation analysis section 11A). When the finger of the user approaches operation display section 17 or 27 ("YES" at step S101), the processing moves to step S102. When the finger of the user approaches neither operation display section 17 nor 27, the processing moves to step S109. When the finger of the user approaches operation display section 17 or 27, it can be predicted that the user is about to execute an operation on the terminal apparatus.
Control section 11 determines that the finger has approached when the distance from operation display section 17 or 27 to the finger decreases to a predetermined value (e.g., 3 cm) or less, based on the z coordinate of the finger included in the approach operation information obtained by approach-operation detection section 16 or 26, for example.
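A minimal sketch of this determination, assuming the example threshold above (the names are illustrative, not from the disclosure):

```python
APPROACH_THRESHOLD_CM = 3.0  # example value given above; an assumption here

def finger_has_approached(z_cm: float) -> bool:
    """Return True when the distance z from the fingertip to the operation
    display section has decreased to the predetermined value or less."""
    return z_cm <= APPROACH_THRESHOLD_CM
```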
At step S102, control section 11 determines whether the predicted user operation is a touch operation to select a target object for operation (processing performed as user operation analysis section 11A). When the predicted user operation is a touch operation ("YES" at step S102), the processing moves to step S103. When the predicted user operation is not a touch operation ("NO" at step S102), the processing moves to step S201.
Control section 11 compares the x and y coordinates included in the approach operation information with the position information and sizes of all objects (included in the "meta data" of object information table 141), and determines whether the x and y coordinates of the finger fall within any object region.
At step S103, control section 11 predicts the target object for operation selectable by the touch operation (processing performed as object prediction section 11F). The object whose region includes the x and y coordinates of the finger in the processing of step S102 becomes the predicted target object.
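A hedged sketch of the hit test in steps S102 and S103 might look as follows; the data layout is an assumption (object rectangles taken from the position and size information in the "meta data").

```python
def predict_touch_target(finger_x: float, finger_y: float,
                         object_rects: dict):
    """Sketch of steps S102-S103: return the ID of the object whose region
    contains the projected fingertip coordinates, or None when the finger
    is not over any object (i.e., no touch operation is predicted).

    `object_rects` maps object IDs to (x, y, width, height) tuples."""
    for object_id, (ox, oy, w, h) in object_rects.items():
        if ox <= finger_x <= ox + w and oy <= finger_y <= oy + h:
            return object_id  # predicted target object for operation
    return None
```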
At step S104, control section 11 determines whether the predicted target object for operation (predicted object) is operable by another terminal apparatus (a terminal apparatus other than the one for which the finger approach is detected), i.e., determines whether there is a risk of conflict (processing performed as object operation suppressing section 11G). When there is a risk of conflict ("YES" at step S104), the processing moves to step S105. When there is no risk of conflict ("NO" at step S104), the processing moves to step S106.
Control section 11 determines, with reference to the "display object" information of individual work region table 144, whether the predicted object is included in the individual work region of another terminal apparatus.
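As a sketch only, the conflict-risk check of step S104 could be expressed as follows; the mapping from terminal IDs to "display object" sets is an assumed simplification of individual work region table 144.

```python
def terminals_at_risk_of_conflict(predicted_object_id: str,
                                  display_objects_by_terminal: dict,
                                  operating_terminal: str) -> list:
    """Sketch of step S104: a risk of conflict exists when a terminal other
    than the one on which the operation was predicted includes the predicted
    object in its 'display object' information (individual work region)."""
    return [
        terminal
        for terminal, shown_objects in display_objects_by_terminal.items()
        if terminal != operating_terminal
        and predicted_object_id in shown_objects
    ]
```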
At step S105, control section 11 suppresses the operation of the predicted object (object F in the illustrated example) in a terminal apparatus having a risk of conflict (processing performed as object operation suppressing section 11G).
Control section 11 notifies a terminal apparatus having a risk of conflict that the individual work region of this terminal apparatus includes the predicted object. This notification is made by changing the way the object is displayed (e.g., changing the background color or frame color) or by displaying an alert (e.g., displaying a message or causing the object to blink). Accordingly, users can be alerted that they should not operate the predicted object.
In addition, control section 11 restricts object operations on the predicted object in a terminal apparatus having a risk of conflict, for example. In this case, it is preferable to prohibit operations that change the coordinates of the predicted object (e.g., moving, enlarging/contracting, rotating, deleting, undoing, and redoing) but to allow operations that edit the contents of the predicted object (e.g., changing the background color or the written contents).
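The restriction policy just described might be sketched as follows; the operation names are assumptions for illustration, not identifiers from the disclosure.

```python
# Illustrative suppression policy for a predicted object: operations that
# change the object's coordinates are prohibited, while operations that only
# edit its contents remain allowed, as described above.
COORDINATE_CHANGING_OPERATIONS = {
    "move", "enlarge", "contract", "rotate", "delete", "undo", "redo",
}
CONTENT_EDITING_OPERATIONS = {"change_background_color", "edit_contents"}

def is_operation_suppressed(operation: str) -> bool:
    """Return True when the operation must be suppressed on a predicted object."""
    return operation in COORDINATE_CHANGING_OPERATIONS
```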
In addition, as the prediction accuracy for target objects improves, the object operation of the predicted object may be restricted instead of, or in addition to, the notification indicating that the individual work region includes the predicted object.
At step S106, control section 11 determines whether a touch operation is performed within a predetermined time. When a touch operation is actually detected within the predetermined time ("YES" at step S106), the processing moves to step S107. When no touch operation is detected within the predetermined time ("NO" at step S106), the processing moves to step S108.
At step S107, control section 11 displays the object selected by the touch operation in a selected state. The selected object becomes the target for the object operation to be performed thereafter. An object operation on another terminal apparatus is no longer performed on the selected target object. Note that the selected state of the object is canceled when a selection cancelling operation (e.g., an operation to touch a region where no object exists) is performed or when a predetermined time elapses. In some cases, an object operation (e.g., moving the object or enlarging/contracting it) is performed immediately following the touch operation.
At step S108, control section 11 cancels the operation-restricted state of the predicted object. In other words, when the predicted object is not actually selected as the target object, the operation-restricted state is immediately cancelled so that another terminal apparatus can freely operate the predicted object.
When determining that the predicted user operation is not a touch operation at step S102, control section 11 determines, at step S201, whether a region other than an object is touched within a predetermined time (processing performed as user operation analysis section 11A). When a region other than an object is touched within the predetermined time ("YES" at step S201), the processing moves to step S202. When no region other than an object is touched within the predetermined time ("NO" at step S201), i.e., when the finger approaches operation display section 17 or 27 but does not actually touch it, the processing moves to step S109.
At step S202, control section 11 determines whether the predicted user operation is an enclosing operation of selecting the target object for operation (processing performed as user operation analysis section 11A). When the predicted user operation is an enclosing operation ("YES" at step S202), the processing moves to step S203. When the predicted user operation is not an enclosing operation ("NO" at step S202), i.e., when the predicted user operation is an operation of simply touching a region other than an object, the processing moves to step S109.
At step S203, control section 11 predicts the target object selectable by the enclosing operation (processing performed as object prediction section 11F). Control section 11 predicts the target object based on a finger slide operation on operation display section 17 or 27, for example. Features of the finger slide operation include, for example, the start point of the enclosing operation (the point first touched by the finger), the acceleration, the trajectory from the start point to the end point (the point currently touched by the finger) (a free curve), and the curvature of the trajectory.
For example, let us consider a case where an enclosing operation starts from one side of an object group including a plurality of objects C, D, F, and G arranged in a flying geese pattern among the objects within the work region, as illustrated in the drawings. In this case, the objects that the trajectory of the enclosing operation is expected to surround are predicted as the target objects.
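One possible realization of this prediction is sketched below, under the assumption that the partial trajectory is provisionally closed by joining its current end point back to its start point; this closing heuristic is an assumption, since the disclosure bases the prediction on the start point, acceleration, trajectory, and curvature.

```python
def point_in_polygon(px: float, py: float, polygon: list) -> bool:
    """Standard ray-casting test for point-in-polygon inclusion."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]  # wrap around: polygon is treated as closed
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def predict_enclosed_objects(trajectory: list, object_centers: dict) -> list:
    """Sketch of step S203: provisionally close the partial enclosing
    trajectory (a list of (x, y) points) and report the objects whose
    center coordinates fall inside the resulting region."""
    if len(trajectory) < 3:
        return []  # too short to enclose anything yet
    return [
        object_id
        for object_id, (cx, cy) in object_centers.items()
        if point_in_polygon(cx, cy, trajectory)
    ]
```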
The processing of steps S204 and S205 is similar to the processing of steps S104 and S105 described above.
Control section 11 determines, with reference to the "display object" information of individual work region table 144, whether each of the predicted objects is included in the individual work region of another terminal apparatus.
At step S205, control section 11 suppresses the operation of the predicted objects in terminal apparatuses having a risk of conflict, i.e., the specific terminal apparatuses including the predicted objects (conference support apparatus 10 and terminal apparatus 20A) (processing performed as object operation suppressing section 11G).
At step S206, control section 11 determines whether the enclosing operation is completed (processing performed as user operation analysis section 11A). When the enclosing operation is completed ("YES" at step S206), the processing moves to step S207. When the enclosing operation is not completed ("NO" at step S206), the processing moves to step S201, and the enclosing operation and the prediction based on the enclosing operation are continued. Control section 11 determines that the enclosing operation is completed when the end point of the enclosing operation returns to the start point and the enclosing operation specifies a closed region, as illustrated in the drawings.
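A minimal sketch of this completion check, assuming a small tolerance for "returning to the start point" (the tolerance value is an assumption):

```python
import math

def is_enclosing_complete(trajectory: list, tolerance: float = 10.0) -> bool:
    """Sketch of step S206: the enclosing operation is judged complete when
    the end point of the trajectory returns to within `tolerance` of the
    start point, so that a closed region is specified."""
    if len(trajectory) < 3:
        return False
    (x0, y0), (xn, yn) = trajectory[0], trajectory[-1]
    return math.hypot(xn - x0, yn - y0) <= tolerance
```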
At step S207, control section 11 displays the objects in the specified region surrounded by the enclosing operation in a selected state (see the drawings).
When the finger leaves operation display section 17 or 27 before completion of the enclosing operation, it is determined "NO" at step S201, and the processing moves to step S109.
When the enclosing operation is not completed, the processing of steps S201 to S206 is repeated, and every time the enclosing operation proceeds, the predicted objects are updated. For example, as the enclosing operation in individual work region R3 of terminal apparatus 20B proceeds, the set of predicted objects is updated accordingly.
Object operation suppressing section 11G suppresses operations of existing objects and takes no part in the creation of new objects. For this reason, a new object may be created, while the enclosing operation is in progress, in the specified region that may be surrounded by the enclosing operation. In this case, the newly added object may or may not be included in the predicted objects.
At step S109, control section 11 performs the processing for the case where no object selection operation is predicted.
As described above, conference support apparatus 10 includes an individual work region management section (individual work region recording section 11E and individual work region table 144) configured to manage individual work regions R1, R2, and R3, which serve as the display regions of the plurality of terminal apparatuses, respectively; user operation analysis section 11A configured to analyze the operations performed via operation sections 12 and 22; display control section 11H configured to cause display sections 13 and 23 to display screens on which the object operations in individual work regions R1, R2, and R3 are reflected; object prediction section 11F configured to predict a target object for operation that becomes the target for an object operation, based on the result of analysis of user operation analysis section 11A; and object operation suppressing section 11G configured to suppress the operation of the predicted object in a specific terminal apparatus that is a terminal apparatus other than the terminal apparatus on which the object operation by the user is predicted and that includes, in individual work region R1, R2, or R3, the predicted object predicted by object prediction section 11F, among the plurality of terminal apparatuses.
According to conference support apparatus 10, the target object for an object operation performed on a certain terminal apparatus is predicted, and the object operation on another terminal apparatus is restricted based on the prediction. Thus, the occurrence of a conflicting operation can be effectively avoided. Accordingly, the efficiency of conferences significantly improves.
While the invention made by the present inventor has been specifically described based on the preferred embodiment, it is not intended to limit the present invention to the above-mentioned preferred embodiment, and the present invention may be further modified within the scope and spirit of the invention defined by the appended claims.
For example, the present invention is applicable to a case where the target object for operation is selected by an object selection operation other than a touch operation or an enclosing operation (e.g., input of an object ID using a keyboard). In this case, an object selection may be predicted based on an object ID that has been input but not yet confirmed.
While control section 11 of conference support apparatus 10 executes the conference support program to achieve the conference support processing in the embodiment, the conference support processing may also be achieved using a hardware circuit. The conference support program may be stored in a non-transitory computer-readable storage medium such as a magnetic disk, an optical disk, or a flash memory so as to be provided to an apparatus (for example, a personal computer) that can be used as the conference support apparatus. Alternatively, the conference support program may be provided by downloading through communication lines such as the Internet.
The embodiment disclosed herein is merely an exemplification and should not be considered as limitative. The scope of the present invention is specified by the following claims, not by the above-mentioned description. It should be understood that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors in so far as they are within the scope of the appended claims or the equivalents thereof.
Although an embodiment of the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the scope of the present invention being interpreted by the terms of the appended claims.