The present invention relates to a control method, an electronic blackboard system, a display device, and a program.
Patent Literature 1 discloses an electronic blackboard system having the following functions: the system detects the color of an input object from a captured image of the input object used for specifying coordinates, and reflects the detection result in the colors rendered on the operation screen of the computer.
For example, the electronic blackboard system disclosed in Patent Literature 1 eliminates the necessity of selecting colors on an on-screen display menu. In addition, it eliminates the necessity of preparing multiple specially-designed pens for specifying colors. Therefore, the electronic blackboard system disclosed in Patent Literature 1 can simplify both operation and configuration.
The electronic blackboard system disclosed in Patent Literature 1 aims to automatically set the rendered color to the original color of an input object. For this reason, it is not easy for this system to set the rendered color differently from the original color of the input object; hence, it suffers from the problem of degraded operability.
The present invention is made in consideration of the aforementioned circumstances, and therefore it aims to provide a control method, an electronic blackboard system, a display device, and a program, which can solve the above problem.
To solve the above problem, an aspect of the present invention is directed to a control method, which includes a touch detecting step for detecting a touch operation using an input object; an image capturing step for capturing an image including at least part of the input object; and a process determination step for determining a process to be executed depending on the detection result of the touch detecting step and the captured image of the image capturing step.
Another aspect of the present invention is directed to an electronic blackboard system, which includes a detector configured to detect a touch operation using an input object; an image capture part configured to capture an image including at least part of the input object; a controller configured to determine a process depending on a detection result of the detector and a captured image of the image capture part; and a display configured to display an image according to the process determined by the controller.
A further aspect of the present invention is directed to a display device, which includes a display configured to display an image according to a process determined by a controller configured to determine the process to be executed depending on the detection result of a detector configured to detect a touch operation using an input object and the captured image of an image capture part configured to capture an image including at least part of the input object.
A still further aspect of the present invention is directed to a program causing a computer to implement a touch detecting step for detecting a touch operation using an input object; an image capturing step for capturing an image including at least part of the input object; and a process determination step for determining a process to be executed depending on the detection result of the touch detecting step and the captured image of the image capturing step.
According to the present invention, it is possible to determine the process to be executed based on the touch-detection result and the acquired image; hence, it is possible to improve operability with ease.
Hereinafter, the first embodiment of the present invention will be described with reference to the drawings.
For example, the control device 1 may be embodied using one or multiple computers, a peripheral device of a computer, and programs executed on a computer. Herein, the computer may be a personal computer, a terminal such as a smartphone, or an embedded computer such as a microcontroller. For example, the peripheral device may include a detection device for detecting a touch operation. Alternatively, the peripheral device may include an interface for inputting or outputting signals with a detection device, without including the detection device itself. For example, the peripheral device may include an imaging device for capturing images, or an interface for inputting or outputting signals with the imaging device. For example, the peripheral device may include a display device for displaying images, or an interface for inputting or outputting signals with the display device. The display device displays images according to the process determined by the process determination part 4, which will be described later. In this connection, the touch detector 2, the image capture part 3, and the process determination part 4 represent functions realized by executing predetermined programs on the computer and its peripheral device. In the present invention, the blocks corresponding to the functions of the image capture part 3 and the process determination part 4 are referred to as functional blocks.
The touch detector 2 detects touches on the detection screen by means of an input object such as a user's finger or hand or a pen, or it may receive signals representing detection results. The touch detector 2 sends to the process determination part 4 the information representing that the detection screen is being touched and the information representing one or multiple touched positions on the detection screen. For example, the touch detector 2 includes a display device and a detection device for detecting touch operations on a touch panel. Alternatively, the touch detector 2 may be an input interface for inputting signals output from a detection device for detecting touch operations.
The image capture part 3 captures an image including at least part of an input object that is touching or about to touch the touch detector 2. Herein, an image including at least part of an input object represents an image including enough of the input object that its feature data can be extracted. When the input object is a user's finger, for example, the image may cover the range from the fingertip to the wrist. For example, the image capture part 3 includes an imaging device. Alternatively, the image capture part 3 may be an input interface for inputting image data output from an imaging device.
The process determination part 4 determines a process to be executed depending on the detection result of the touch detector 2 and an image captured by the image capture part 3. The entity executing the process determined by the process determination part 4 may be the process determination part 4 itself, a functional block different from the process determination part 4, or both of them in combination. For example, the process to be executed may be a rendering process. In this case, the process determination part 4 determines the details of the rendering process, such as the rendering color and the shape of the rendering pen when rendering characters or lines responsive to touch operations, depending on the detection result of the touch detector 2 and an image captured by the image capture part 3. Alternatively, the process to be executed may be a process that recognizes an operation of the input object on the detection screen as an operation of a virtual mouse, so as to generate and output information representing the recognized mouse operation. In this case, the process determination part 4 determines the details of the information representing the click state and position of the mouse, which is generated based on the detection result of the touch detector 2 and an image captured by the image capture part 3. In this connection, the process determined by the process determination part 4 is not limited to these examples.
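Although the embodiments do not prescribe any particular implementation, the following minimal Python sketch illustrates one way such a process determination step could be organized; every name here (TouchResult, recognize_shape, STYLE_TABLE, determine_process) is a hypothetical stand-in rather than an element of the invention.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TouchResult:
    touched: bool                     # whether the detection screen is touched
    positions: List[Tuple[int, int]]  # one or multiple touched positions

# Hypothetical coordination table: recognized shape -> rendering details.
STYLE_TABLE = {"shape_A": {"color": "black"}, "shape_B": {"color": "red"}}

def recognize_shape(image) -> str:
    """Stand-in for image recognition: a real system would extract feature
    data from the image and return the identification information of the
    most similar stored entry."""
    return "shape_A"

def determine_process(touch: TouchResult, image,
                      virtual_mouse: bool = False) -> Optional[dict]:
    """Determine the process to execute from the detection result and image."""
    if not touch.touched:
        return None
    if virtual_mouse:
        # Recognize the touch as a virtual mouse operation and report it.
        return {"process": "mouse", "position": touch.positions[0],
                "clicked": True}
    # Otherwise determine the details of a rendering process.
    shape = recognize_shape(image)
    return {"process": "render", "positions": touch.positions,
            "details": STYLE_TABLE.get(shape, {"color": "black"})}
```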
In the control device 1 having the above configuration, the touch detector 2 executes a touch detecting step for detecting a touch operation by an input object or for inputting the detection result of a touch operation. In addition, the image capture part 3 executes an image capturing step for capturing an image including at least part of the input object. Subsequently, the process determination part 4 executes a process determination step for determining the process to be executed based on the detection result of the touch detecting step and the image captured in the image capturing step. Therefore, the present embodiment can determine the process to be executed based on the result of detecting a touch operation and the captured image. Thus, it is possible to flexibly deal with various processes and to thereby improve operability with ease.
Next, the second embodiment of the present invention will be described with reference to the drawings.
For example, the image pickup part 11 corresponds to a camera attached to the touch panel 13.
The image pickup part 11 may include multiple cameras. Multiple cameras can be attached to the upper side of the display 19 as well as the right side or the left side of the display 19. In this case, it is possible to capture images of an input object from different directions; hence, it is possible to accurately determine the shape and color of the input object. In addition, one or multiple cameras only need to be fixed at positions for capturing images of an input object in a region including the operational field; hence, they do not need to be attached to the touch panel 13.
The touch panel 13 having the detector 18 is attached to the display 19. The touch panel 13 and the display 19 can be integrally combined as a single device. The display 19 displays images according to image signals input from the controller 12. For example, the display 19 displays images according to a rendering process determined by the controller 12. For example, the display 19 is a liquid-crystal display. The detector 18 detects a touch operation, by means of an input object such as a user's finger or a pen, on the screen 13a of the touch panel 13, i.e. the display screen of the display 19. The detector 18 outputs signals representing the presence/absence of touching and the touched position to the controller 12 as its detection result. For example, the detector 18 is a transparent touch pad formed over the display screen of a liquid-crystal display.
The controller 12 is a computer, for example, which includes a CPU (Central Processing Unit), a storage device including volatile memory and nonvolatile memory, an input/output interface, and a communication device. The controller 12 includes an image recognition processor 14, a determination processing part 15, a rendering processor 17, and a coordinate storage media 16. The image recognition processor 14, the determination processing part 15, the rendering processor 17, and the coordinate storage media 16 are illustrated as the foregoing functional blocks.
The image recognition processor 14 temporarily stores image data obtained from the image pickup part 11 on a storage device inside the controller 12. The image recognition processor 14 carries out a process of recognizing the shape and color (i.e. the shape and/or the color) of an input object from an image captured when the detector 18 detects a touch operation of the input object. The image recognition processor 14 compares feature data representing a shape, extracted from the image serving as the recognition subject, with the feature extracting data of input objects stored on the coordinate storage media 16 in advance, and produces the identification information of the feature extracting data showing the highest similarity as its recognition result. Alternatively, the image recognition processor 14 compares pixel values for each color component occupying a certain region in the image serving as the recognition subject with pixel values for each color component stored in advance, and produces the identification information of the color showing the highest similarity as its recognition result.
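For illustration only, the color comparison described above might be sketched as follows; the per-channel reference values and the Euclidean distance measure are assumptions, not details drawn from the embodiment.

```python
import numpy as np

# Hypothetical stored references: per-channel pixel values for each color.
COLOR_REFERENCES = {
    "black": np.array([20, 20, 20]),
    "red":   np.array([200, 40, 40]),
    "blue":  np.array([40, 40, 200]),
}

def recognize_color(region: np.ndarray) -> str:
    """Return the identification information of the stored color most
    similar to the given image region (an H x W x 3 array of pixel values)."""
    mean = region.reshape(-1, 3).mean(axis=0)
    # Pick the reference whose per-channel pixel values are closest.
    return min(COLOR_REFERENCES,
               key=lambda name: np.linalg.norm(mean - COLOR_REFERENCES[name]))

# A region filled with reddish pixels is recognized as "red".
region = np.full((16, 16, 3), (190, 50, 50), dtype=np.uint8)
assert recognize_color(region) == "red"
```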
The determination processing part 15 determines the details of a rendering process for the display 19 based on the detection result of the detector 18 and the recognition result of the image recognition processor 14.
The determination processing part 15 sets the coordination between the shape and color of the recognized input object and the details of a rendering process based on the detection result of the detector 18 and the shape and color of the input object recognized by the image recognition processor 14. According to this setting, for example, the determination processing part 15 controls the rendering processor 17 to display a color setup menu 20 on the screen 13a. For example, the color setup menu 20 can be displayed on screen when a user presses a button (not shown) on the touch panel 13 or when a user performs a specific gesture in front of the image pickup part 11.
The coordinate storage media 16 stores information representing the coordination between the information representing the shape and color of an input object (i.e. the feature extracting data information) and the details of processing (i.e. the display information).
Before a user carries out the aforementioned setting process, for example, it is possible to store on the coordinate storage media 16, at the shipping stage of products, multiple sets of coordination between typical feature extracting data information and display information.
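As a minimal sketch, the coordination table held on the coordinate storage media 16 might look like the following, assuming a simple mapping from feature extracting data information to display information; the keys, values, and helper names are hypothetical.

```python
# Factory defaults stored at the shipping stage of products.
coordination_table = {
    "shape_A": {"color": "black", "line": "solid"},
    "shape_B": {"color": "red",   "line": "solid"},
}

def register_display_info(shape_id: str, color: str) -> None:
    """Register a user-designated color as the display information
    coordinated with the given feature extracting data information."""
    coordination_table.setdefault(shape_id, {})["color"] = color

def look_up_display_info(shape_id: str) -> dict:
    """Read the display information coordinated with a recognized shape."""
    return coordination_table.get(shape_id, {"color": "black", "line": "solid"})

# A user re-coordinates shape A with blue on the color setup menu.
register_display_info("shape_A", "blue")
assert look_up_display_info("shape_A")["color"] == "blue"
```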
The rendering processor 17 executes a rendering process on the display 19 according to the details determined by the determination processing part 15.
Next, an example of the operation of the electronic blackboard system 10 will be described.
When the detector 18 detects a touch on the screen 13a by an input object (step S11), the determination processing part 15 determines whether or not the color setup menu 20 is displayed on screen (step S12). When the color setup menu 20 is displayed on screen (i.e. Yes in step S12), the image recognition processor 14 obtains an image captured by the image pickup part 11 (step S13) and stores the image on a predetermined memory device so as to execute an image recognition process (step S14). Next, the determination processing part 15 compares the recognition result of the image recognition processor 14 with the setting value of the feature extracting data (i.e. the feature extracting data information) of an input object already stored on the coordinate storage media 16 (step S15). In step S15, for example, the determination processing part 15 may compare the recognition result and the setting value according to the table-lookup method.
When the recognized shape of the input object is determined to show the highest similarity to the shape A (i.e. shape A in step S15), the determination processing part 15 registers the color information, which a user designates on the color setup menu 20, in the display information coordinated with the feature extracting data information of the shape A (step S16).
On the other hand, when the color setup menu 20 is not displayed on screen (i.e. No in step S12), the image recognition processor 14 obtains an image captured by the image pickup part 11 (step S17) and thereby stores the image on a predetermined memory device so as to execute image recognition (step S18). Next, the determination processing part 15 compares the recognition result of the image recognition processor 14 with the setting data of the feature extracting data of an input object already stored on the coordinate storage media 16 (step S19). In step S19, for example, the determination processing part 15 may compare the recognition result and the setting value according to the table-lookup method.
When the recognized shape of the input object is determined to be most similar to the shape A (i.e. shape A in step S19), the determination processing part 15 reads the color information registered with the coordinate storage media 16 as the display information coordinated with the feature extracting data information of the shape A (step S20). Next, the determination processing part 15 controls the rendering processor 17 to execute a rendering process using the color designated by the color information read from the coordinate storage media 16 (step S21). When the shape 91 is input by means of the input object 31, for example, the shape 91 is displayed on the screen 13a in the color read from the coordinate storage media 16.
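Reusing the hypothetical recognize_shape, register_display_info, and look_up_display_info helpers sketched above, the branch of steps S11 through S21 can be summarized as follows; this is an illustrative sketch, not the implementation of the embodiment.

```python
def on_touch(menu_displayed: bool, image, selected_color: str = "black") -> str:
    """Handle one touch detected in step S11."""
    shape = recognize_shape(image)                    # steps S13-S14 / S17-S18
    if menu_displayed:                                # Yes in step S12
        # Setting process: register the designated color for the shape.
        register_display_info(shape, selected_color)  # steps S15-S16
        return f"registered {selected_color} for {shape}"
    # Rendering process: read the registered color and render with it.
    color = look_up_display_info(shape)["color"]      # steps S19-S20
    return f"render in {color}"                       # step S21
```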
According to the aforementioned operation, for example, it is possible to write an image in black by touching the screen 13a with the index finger of the right hand, while it is possible to write an image in red by touching the screen 13a with the middle finger of the right hand. In addition, it is possible for a user to arbitrarily set the correspondence between the shape of an input object and the rendering color.
As described above, the second embodiment allows the detector 18 to detect a touch operation of an input object. The image pickup part 11 captures an image including at least part of an input object. The controller 12 determines a process to be executed based on the detection result of the detector 18 and the captured image of the image pickup part 11. In addition, the display 19 displays an image according to the process determined by the controller 12. Therefore, it is possible to improve operability with ease according to the second embodiment that can determine the process to be executed based on the result of detecting a touch operation and the captured image.
For example, the second embodiment can be modified as follows.
In addition, it is possible to give the color setup menu 20 a hierarchical structure. After a user selects a color on the color setup menu 20, for example, the electronic blackboard system 10 may display a setup menu 20a showing shapes of lines.
It is possible to coordinate the details of a rendering process with the shape of an input object with reference to only the setup menu 20a instead of the color setup menu 20. When the electronic blackboard system 10 is used in a monochrome display mode, for example, the shape of a line and the shape of an input object are set up on the setup menu 20a.
In addition, it is possible to set the coordination between an input object and its color by use of general-purpose pens having different colors.
Moreover, an icon 81 or an icon 82 representing the shape and color currently recognized for the input object can be displayed on the screen 13a.
In this connection, it is possible to constantly display the icon 81 or the icon 82 on the screen 13a until another shape or color is recognized, or it is possible to display the icon 81 or the icon 82 on the screen 13a for a certain period of time when the icon changes in shape or color. When the icon is constantly displayed on the screen 13a, a user can always check the color currently in effect on the screen 13a; when that color is not the preferred color, for example, the user can take an action such as changing the shape of his/her finger again. On the other hand, when the icon is displayed on the screen 13a only for a certain period of time at the moment it changes in shape or color, the displayed icon is less likely to visually distract users.
Two scenarios can be provided for the timing of changing the rendering color: one scenario is to carry out a rendering process using the preset color upon starting a touch operation, while the other scenario is to change the current color to the preset color upon terminating a touch operation.
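A minimal sketch of the two timing scenarios, assuming a simple state holder (the class and attribute names are hypothetical):

```python
class RenderingColorTiming:
    """Holds the current rendering color and applies a newly recognized
    (preset) color either at touch start or at touch end."""

    def __init__(self, change_on_start: bool):
        self.change_on_start = change_on_start
        self.current_color = "black"
        self.pending_color = None

    def on_touch_start(self, preset_color: str) -> str:
        if self.change_on_start:
            # Scenario 1: render with the preset color from this stroke on.
            self.current_color = preset_color
        else:
            # Scenario 2: remember the color and switch after the touch ends.
            self.pending_color = preset_color
        return self.current_color

    def on_touch_end(self) -> None:
        if not self.change_on_start and self.pending_color is not None:
            self.current_color = self.pending_color
            self.pending_color = None
```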
The setting regarding the coordination between the details of a rendering process and the shape and color of an input object may be uniformly determined with respect to the entirety of the screen 13a. Alternatively, it is possible to divide the screen 13a into multiple partial regions so as to change settings for each partial region.
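Assuming, purely for illustration, a screen divided into left and right halves with independent settings, the per-region lookup might be sketched as follows (region names, shapes, and colors are hypothetical).

```python
# Hypothetical per-region coordination: each partial region of the screen
# maps recognized shapes to its own rendering colors.
REGION_SETTINGS = {
    "left":  {"shape_A": "black", "shape_B": "red"},
    "right": {"shape_A": "blue",  "shape_B": "green"},
}

def color_for_touch(x: int, screen_width: int, shape_id: str) -> str:
    """Select the rendering color from the partial region containing the
    touched position and the recognized shape of the input object."""
    region = "left" if x < screen_width // 2 else "right"
    return REGION_SETTINGS[region].get(shape_id, "black")

assert color_for_touch(100, 1920, "shape_A") == "black"
assert color_for_touch(1500, 1920, "shape_A") == "blue"
```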
The image pickup part 11 is not necessarily limited to cameras; the image pickup part 11 can be embodied using infrared sensors or combinations of cameras and infrared sensors. The electronic blackboard system is not necessarily limited to systems using liquid-crystal displays; the electronic blackboard system can be embodied using projectors. In addition, input objects are not limited to the foregoing ones; it is possible to employ any objects whose shapes and colors can be identified and which are unlikely to damage the screen 13a. For example, the touch panel 13 may be exemplified by touch panels installed in tablet terminals or smartphones. In this case, the image pickup part 11 may be formed using an in-camera embedded in a tablet terminal or a smartphone and a prism externally provided to capture an image of an input object.
It is possible to establish the correspondence between the constituent elements of the first embodiment and the constituent elements of the second embodiment as follows. The control device 1 corresponds to the electronic blackboard system 10; the touch detector 2 corresponds to the detector 18; the image capture part 3 corresponds to the image pickup part 11; and the process determination part 4 corresponds to the controller 12.
Next, the third embodiment of the present invention will be described with reference to the drawings.
The camera 100 includes an optical module 101 and a signal processor 104. The optical module 101 includes an optical system 102 and an image pickup device 103. The image pickup device 103 is a CMOS (Complementary Metal Oxide Semiconductor) image sensor, a CCD (Charge-Coupled Device) image sensor, or the like. The signal processor 104 reads pixel values from the image pickup device 103 and carries out signal processing on the read pixel values so as to convert them into video signals having a predetermined format, which are output from the signal processor 104. Based on control signals output from the CPU 200, the signal processor 104 controls the optical system 102, controls the image pickup device 103, and changes the details of signal processing.
The touch panel 300 includes a liquid-crystal display device 301 and a touch sensor 302. The liquid-crystal display device 301 displays videos based on video signals output from the PC 400. The touch sensor 302 detects a touch operation on the display screen of the liquid-crystal display device 301 so as to produce a touch detection signal representing the detected touch operation and screen coordinate data representing the touched position.
The CPU 200 includes a camera interface 201 and an arithmetic processing unit 202. The camera interface 201 is circuitry for inputting video signals output from the camera 100 into the arithmetic processing unit 202. The arithmetic processing unit 202 inputs a touch detection signal and screen coordinate data from the touch panel 300. For example, the arithmetic processing unit 202 outputs a control signal to the camera 100 so as to control the image capturing timing. In addition, the arithmetic processing unit 202 outputs a control signal to the PC 400 so as to indicate an image to be rendered.
The storage media 500 stores a table representing the correspondence between feature extracting data, such as the shape and color of an input object, and the process coordinated with that shape and color. For example, the storage media 500 is a rewritable nonvolatile memory device which can be detachably attached to the CPU 200.
According to a control signal input from the CPU 200 and the information representing an operation screen for videos and applications designated by a user, the PC 400 generates images to be displayed on the touch panel 300, thus outputting video signals having a predetermined format.
The operation regarding a setting process and a rendering process depending on a touch operation with the electronic blackboard system 10a according to the third embodiment is identical to the operation of the electronic blackboard system 10 according to the second embodiment. In this connection, the camera 100 of the third embodiment may correspond to the image pickup part 11 of the second embodiment. In addition, the touch panel 300 of the third embodiment may correspond to the touch panel 13 of the second embodiment. Moreover, a combination of the CPU 200, the PC 400, and the storage media 500 according to the third embodiment may correspond to the controller 12 of the second embodiment.
According to the third embodiment, similar to the second embodiment, it is possible to determine the processes to be executed depending on the result of detecting touch operations and the captured images; hence, it is possible to improve operability with ease. When the storage media 500 is detachably attached to the CPU 200, it is possible to easily update the information representing the coordination between the shape and color of an input object and its process. The third embodiment provides a simple configuration achieving a function of displaying an operation screen for application programs with the PC 400 and a function of displaying, on the touch panel 300, combinations of characters and lines written on the touch panel 300.
Heretofore, the foregoing embodiments of the present invention have been described in detail with reference to the drawings; however, the detailed configurations should not be limited to the foregoing embodiments; hence, the present invention may embrace any designs not departing from the essence of the invention.
Filing Document | Filing Date | Country | Kind |
---|---|---|---
PCT/JP2015/080563 | 10/29/2015 | WO | 00 |