The present disclosure relates to an image drawing method, and particularly to an image drawing method for deforming an image according to a gesture input by a user.
Remote desktop systems that display images on remote computers are becoming popular. Patent Document 1 discloses an example of a remote desktop system. This type of remote desktop system includes a host computer that generates an image for display on a display of a remote computer and the remote computer that displays an image supplied from the host computer.
Meanwhile, in some cases, a remote computer has an input sensor that allows a user to input information with a finger or a pen. When the remote computer having the input sensor detects that a position in the above-described image has been indicated by a finger or a pen, various types of data including the indicated position are transmitted to the host computer in the form of a report. Based on a series of reports thus received, the host computer detects a gesture (for example, a pinch-out gesture, pinch-in gesture, drag gesture, rotation gesture, or the like) and imparts a deformation (for example, enlargement, reduction, movement, rotation, or the like) according to the detected gesture to the above-described image. When the image thus deformed is supplied from the host computer to the remote computer, the deformed image can visually be recognized even on the remote computer.
However, according to the processing described above, since the communication between the remote computer and the host computer is required after the user makes a gesture in the remote computer until the deformation based on the gesture is reflected on the image displayed on the remote computer, a considerable processing delay occurs. Due to the processing delay, the user may wrongly recognize that the gestures are insufficient, although enough gestures have actually been made, and may make extra gestures in some cases. This causes the image to be excessively deformed as if inertia worked, and thus, improvement has been needed.
Therefore, embodiments of the present disclosure provide an image drawing method that can prevent a deformation of an image based on a gesture from becoming excessive due to a processing delay.
An image processing method according to a first aspect of the present disclosure is an image drawing method performed by a system including a host computer that executes an operating system and generates a plurality of images, and a remote computer that includes an input sensor having a sensor surface and a display and that displays at least some of the images generated by the host computer on the display. The method includes, by the remote computer, sequentially detecting a plurality of positions of an indicator on the sensor surface at a predetermined sampling rate using the input sensor and sequentially transmitting a plurality of pieces of report data including the plurality of positions to the host computer, wherein the remote computer transmits one of the pieces of report data each time one of the plurality of positions of the indicator is detected. The method also includes, by the host computer, detecting a gesture based on a series of the pieces of report data received from the remote computer, generating one or more deformed images by imparting a deformation corresponding to contents and an amount that are indicated by the gesture, to one or more of the images or a previously-transmitted deformed image, and transmitting the one or more deformed images to the remote computer. The method also includes, by the host computer, performing cancelation processing that cancels the generating of the one or more deformed images when one or more of the pieces of the report data received from the remote computer includes an end report indicating that the indicator has been moved away from the sensor surface.
An image processing method according to a second aspect of the present disclosure is an image drawing method performed by a system including a host computer that executes an operating system and generates a plurality of images, and a remote computer that includes an input sensor having a sensor surface and a display and that displays at least some of the images generated by the host computer on the display. The method includes, by the remote computer, sequentially detecting a plurality of positions of an indicator on the sensor surface at a predetermined sampling rate using the input sensor and determining whether a movement of the indicator indicated by the plurality of positions of the indicator corresponds to a fixed gesture, generating report data for the input sensor and transmitting the report data to the host computer in a case where the movement of the indicator is not determined to correspond to the fixed gesture, and generating report data for a device different from the input sensor and transmitting the report data to the host computer in a case where the movement of the indicator is determined to correspond to the fixed gesture.
According to the first aspect of the present disclosure, since the deformation of the image is canceled according to the end report, it is possible to prevent the deformation of the image based on the gesture from becoming excessive due to a processing delay.
According to the second aspect of the present disclosure, since the gesture is replaced with data having a small data size and is then transmitted, it is possible to prevent the deformation of the image based on the gesture from becoming excessive due to a processing delay.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the attached drawings.
The computer 1, which is a remote computer, includes a processor 10, a memory 11, a communication device 12, an input device 13, a display 14, and an input sensor 15, while the computer 2, which is a host computer, includes a processor 10, a memory 11, a communication device 12, and a display 14.
The processors 10 are central processing units of the computers 1 and 2, and are configured to read, from the memories 11, programs included in various device drivers and various applications in addition to the illustrated operating systems 30, and to execute the programs. The applications executed by the processor 10 of the computer 2 include a drawing application 31 having a function of generating an image. The image generated by the drawing application 31 includes an image to be deformed by a user. In addition, the operating system 30 includes a desktop window manager 30a that is a program for managing the drawing on the screen. The desktop window manager 30a plays a role of generating a video signal based on the image generated by the drawing application 31 and supplying the image to its own display 14 and the other computer, and also plays a role of supplying a video signal supplied from the other computer to its own display 14.
The memory 11 is a storage device that stores the programs to be executed by the processor 10 and that also stores various types of data to be referred to by the processor 10. More specifically, the memory 11 includes a main storage device such as a DRAM (Dynamic Random Access Memory) and an auxiliary storage device such as a hard disk.
The communication device 12 is a device having a function of communicating with the other computer via the Internet or an ad hoc network. The operating system 30 transmits and receives various types of data to/from the operating system 30 of the other computer via the communication device 12.
The input device 13 is a device for accepting input made by the user on the operating system 30, and includes, for example, a keyboard and a mouse. When the user operates the input device 13, the input device 13 generates data indicating the operation contents and supplies the data to the operating system 30. The operating system 30 accepts the user input based on the data thus supplied.
The display 14 is a device that visually outputs a video signal supplied from the operating system 30 (more specifically, the desktop window manager 30a). When the display 14 outputs the video signal in such a manner, the user can view the image generated by the drawing application 31.
The input sensor 15 is a device having a sensor surface and has a function of repeatedly and sequentially detecting the position of an indicator such as an illustrated pen P or a finger of the user on the sensor surface at a predetermined sampling rate and receiving various types of data from the pen P. The data received from the pen P includes a pen pressure detected by a pressure sensor built into the pen P. Although the specific method for detecting the pen P and the finger by the input sensor 15 is not limited to a particular method, for example, an active capacitive method or an electromagnetic induction method can suitably be used for the detection of the pen P, and a capacitance method can suitably be used for the detection of the finger. In the following, the explanation will be continued on the assumption that the detection of the pen P is performed by the active capacitive method and the detection of the finger is performed by the capacitance method.
The sensor surface of the input sensor 15 also typically serves as the display surface of the display 14. In this case, the user can operate the image displayed on the display 14, by using the indicator, as if the user directly touched and operated the image. However, the sensor surface of the input sensor 15 may be provided separately from the display surface of the display 14 as in the case of, for example, a touchpad of a notebook personal computer or an external digitizer.
Directly below the sensor surface, a plurality of x electrodes and a plurality of y electrodes are arranged. The x electrodes each extend in a y direction and are arranged at equal intervals in an x direction. The y electrodes each extend in the x direction and are arranged at equal intervals in the y direction. By using these electrodes, the input sensor 15 detects the position of the indicator and receives data from the pen P. Hereinafter, processing performed by the input sensor 15 will be described in detail separately for a case where the detection target is the pen P and a case where the detection target is the finger.
First, the case where the detection target is the pen P will be described. The input sensor 15 is configured to periodically transmit a predetermined uplink signal by using either the plurality of x electrodes or the plurality of y electrodes as transmission electrodes. The uplink signal includes a local ID (Identification) that is to be assigned to the undetected pen P.
When the pen P which is yet to be paired with the input sensor 15 receives the uplink signal through the capacitive coupling with the above-described x electrodes or y electrodes, the pen P extracts and acquires the local ID therefrom, and transmits a downlink signal at a timing corresponding to the reception timing of the uplink signal. The downlink signal transmitted by the pen P at this stage consists of only a position signal that is an unmodulated carrier signal. The input sensor 15 receives the position signal at each of the plurality of x electrodes and y electrodes in the input sensor 15, detects the position of the pen P based on the reception intensity at each electrode (global scan), and performs pairing with the pen P by using the succeeding uplink signals.
The pen P which has established pairing with the input sensor 15 performs processing of transmitting the downlink signal according to the uplink signal periodically received from the input sensor 15. Here, the downlink signal includes the above-described position signal and a data signal which is a carrier signal modulated by transmission data, such as a pen pressure. The input sensor 15 updates the position of the pen P by receiving the position signal thus transmitted, at each of a predetermined number of x electrodes and y electrodes in the vicinity of the previously detected position (local scan), and acquires the data transmitted by the pen P, by receiving the data signal. Then, the updated position and acquired data are supplied to the processor 10 together with the local ID assigned to the pen P. Accordingly, the operating system 30 can sequentially acquire the position and the data such as the pen pressure for each pen P.
Next, the case where the detection target is the finger of the user will be described. The input sensor 15 is configured to supply a predetermined finger detection signal to either the plurality of x electrodes or the plurality of y electrodes, and to periodically perform processing of receiving the signal at the other electrodes. When the finger is close to the intersection of an x electrode and a y electrode, the reception intensity of the finger detection signal received by the input sensor 15 becomes smaller due to the capacitance formed between the finger and the x and y electrodes. The input sensor 15 periodically detects the position of the finger of the user by detecting this change in reception intensity.
Here, the input sensor 15 is configured to perform tracking processing to track the detected position of the finger. Specifically, the input sensor 15 is configured to, when newly detecting the position of the finger, impart a finger ID to the detected position and supply the ID to the processor 10 together with the detected position. In the case where the finger is detected at the next timing at a position within a predetermined range with respect to the previously detected position, the same finger ID is imparted to the detected position and supplied to the processor 10 together with the detected position. Accordingly, even in the case where the positions of a plurality of fingers are detected, the operating system 30 can individually acquire the trajectory of each finger in reference to the finger ID.
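The tracking processing described above can be sketched as follows. This is an illustrative sketch only; the matching range, the coordinate representation, and the class interface are assumptions and are not part of the disclosure.

```python
import math

class FingerTracker:
    """Sketch of the tracking processing: a position detected within
    MATCH_RANGE of a previously tracked position keeps the same finger ID;
    otherwise a new finger ID is imparted."""

    MATCH_RANGE = 20.0  # assumed "predetermined range", in sensor units

    def __init__(self):
        self._next_id = 0
        self._tracks = {}  # finger ID -> last known (x, y)

    def update(self, detected_positions):
        """Return a list of (finger_id, position) for one sampling period."""
        results = []
        unmatched = dict(self._tracks)
        for pos in detected_positions:
            match = None
            for fid, prev in unmatched.items():
                if math.dist(prev, pos) <= self.MATCH_RANGE:
                    match = fid
                    break
            if match is None:            # newly detected finger
                match = self._next_id
                self._next_id += 1
            else:
                del unmatched[match]
            self._tracks[match] = pos
            results.append((match, pos))
        # fingers that were not re-detected are no longer tracked
        for fid in unmatched:
            del self._tracks[fid]
        return results
```

Because the finger ID is preserved across sampling periods, the operating system 30 can reconstruct the trajectory of each individual finger even when several fingers are detected at once.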
When the position and other data (including the data received from the pen P, the local ID, and the finger ID) are supplied from the input sensor 15 of the computer 1 as described above, the operating system 30 of the computer 1 performs processing of generating report data including the supplied position and data and transmitting the report data to the computer 2 via the communication device 12. Accordingly, the contents of the operation performed by the user on the sensor surface of the input sensor 15 of the computer 1 are supplied to the operating system 30 of the computer 2.
A start report SR generated for the pen P is report data R including, in addition to the position data PD, the pen pressure data PRE, and the local ID, pen-down information PenDown indicating that the pen P has come into contact with the sensor surface. The processor 10 is configured to generate the start report SR in response to a change in pen pressure data PRE from 0 to a value larger than 0.
In addition, an end report ER generated for the pen P is report data R including, in addition to the position data PD, the pen pressure data PRE, and the local ID, pen-up information PenUp indicating that the pen P has been moved away from the sensor surface. The processor 10 is configured to generate the end report ER in response to a change in pen pressure data PRE from a value larger than 0 to 0.
A start report SR generated for the finger includes, in addition to the position data PD and the finger ID, track start information Finger Track Start indicating that tracking of the finger has started. The processor 10 is configured to generate the start report SR in response to the start of new finger tracking processing by the input sensor 15.
An end report ER generated for the finger includes, in addition to the finger ID, loss information Finger Track Loss indicating that the movement of the finger tracked as a series of positions by the tracking processing has not been detected. The position data PD is not included in the end report ER generated for the finger. The processor 10 is configured to generate the end report ER in response to the completion of the tracking processing of the finger by the input sensor 15.
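The generation of the start report SR and end report ER for the pen P, which is driven by transitions of the pen pressure data PRE, can be sketched as follows. The dictionary field names are illustrative assumptions; the disclosure only specifies which information each report carries.

```python
def classify_pen_report(prev_pressure, pressure, position, local_id):
    """Build one piece of report data R for the pen P.  A start report SR
    is generated when the pen pressure changes from 0 to a positive value,
    and an end report ER when it changes from a positive value to 0."""
    report = {"position": position, "pressure": pressure, "local_id": local_id}
    if prev_pressure == 0 and pressure > 0:
        report["PenDown"] = True   # start report SR: pen touched the surface
    elif prev_pressure > 0 and pressure == 0:
        report["PenUp"] = True     # end report ER: pen moved away
    return report
```

A report carrying neither flag is an ordinary intermediate report transmitted while the pen stays in contact with the sensor surface.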
The drawing application 31 has a function of generating a deformed image by imparting, to the generated image, a deformation corresponding to the contents and amount that are indicated by the gesture detected as described above. The deformations thus imparted to the image include an enlargement, a reduction, a movement, rotation, scrolling, and the like. The deformed image generated by the drawing application 31 is supplied to the computer 1 as a video signal through the desktop window manager 30a of the computer 2. The desktop window manager 30a of the computer 1 updates the display of the display 14 of the computer 1 with the video signal thus supplied. Accordingly, the user of the computer 1 can visually recognize the deformed image generated as a result of the operation made by himself/herself. In addition, the desktop window manager 30a of the computer 2 also supplies the video signal of the deformed image to its own display 14. Accordingly, the user of the computer 2 can also visually recognize the deformed image in a similar manner to the user of the computer 1.
The drawing application 31 first generates an image PIC displayed on the display 14 of each of the computers 1 and 2 and writes the image PIC into the end of the transmission buffer 30c. Thereafter, when the report data R indicating the result of the operation by the user of the computer 1 is written into the reception buffer 30b, the report data R is read in order from the oldest one, and the gesture is detected based on a series of pieces of report data R including the past report data R read so far. Then, the latest image PIC is deformed based on the detected gesture and the image PIC indicating the result is written into the end of the transmission buffer 30c.
The desktop window manager 30a performs processing of sequentially reading the image PIC from the transmission buffer 30c, generating a video signal based on the read image PIC each time, and supplying the video signal to the communication device 12 and the display 14. The communication device 12 transmits the supplied video signal to the computer 1. Although not illustrated in the drawing, the desktop window manager 30a of the computer 1, which has received the video signal thus transmitted, updates the display of the display 14 of the computer 1 with the received video signal. Accordingly, the user of the computer 1 can visually recognize a state in which the result of the operation is reflected on the image PIC. In addition, the display 14 of the computer 2, which has been supplied with the video signal from the desktop window manager 30a, updates the displayed image with the supplied video signal. Accordingly, the user of the computer 2 can also visually recognize a state in which the result of the operation performed by the other user on the computer 1 is reflected on the image PIC.
Here, according to the above-described processing, since the communication between the computer 1 and the computer 2 is required after the user makes a gesture on the input sensor 15 of the computer 1 until the deformation based on the gesture is reflected on the image displayed on the display 14 of the computer 1, a considerable processing delay occurs. Due to the processing delay, the user may wrongly recognize that the gestures are insufficient, although enough gestures have actually been made, and may make extra gestures in some cases. This causes the image to be excessively deformed as if inertia worked.
Accordingly, the drawing application 31 according to the present embodiment performs cancelation processing that cancels the deformation of the image when the report data R received from the computer 1 is the end report ER. Specifically, the processing proceeds as follows.
The drawing application 31 first generates an image for display on the displays 14 of the computers 1 and 2 as a premise (S1) and writes the generated image into the transmission buffer 30c (S2). Thereafter, the drawing application 31 attempts to acquire the report data R from the reception buffer 30b (S3), and determines whether the report data R has been acquired (S4). In the case where the drawing application 31 determines at S4 that the report data R has not been acquired, it repeats S3. On the other hand, in the case where the drawing application 31 determines that the report data R has been acquired, the acquired report data R is temporarily stored in the memory 11 (S5). Then, the drawing application 31 determines whether the acquired report data R is the end report ER (S6).
In the case where the drawing application 31 determines at S6 that the acquired report data R is not the end report ER, it attempts to detect a gesture based on a series of pieces of report data R acquired so far (S7), and determines whether the gesture has been detected (S8). As a result, in the case where the gesture has not been detected, the flow returns to S3 to repeat the processing. On the other hand, in the case where the gesture has been detected, the deformation corresponding to the contents and amount that are indicated by the detected gesture is imparted to the image (S9). The image to be deformed here is the image that the drawing application 31 has previously written into the transmission buffer 30c. The drawing application 31 which has generated the deformed image at S9 writes the generated deformed image into the transmission buffer 30c (S10) and returns to S3.
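As one concrete illustration of the gesture detection attempted at S7, a pinch gesture can be recognized from the change in distance between two tracked finger positions. The ratio threshold below is an assumption introduced for illustration; it does not appear in the disclosure.

```python
import math

def detect_pinch(prev_positions, positions, threshold=1.05):
    """Illustrative gesture detection (S7): given the previous and current
    positions of two tracked fingers, report a pinch-out when the distance
    between them grows and a pinch-in when it shrinks."""
    d_prev = math.dist(*prev_positions)
    d_now = math.dist(*positions)
    if d_prev == 0:
        return None
    ratio = d_now / d_prev
    if ratio > threshold:
        return ("pinch-out", ratio)   # e.g., enlarge the image by `ratio`
    if ratio < 1 / threshold:
        return ("pinch-in", ratio)    # e.g., reduce the image by `ratio`
    return None                       # no gesture detected; return to S3
```

The returned contents and amount ("pinch-out", ratio) correspond to the deformation imparted to the image at S9.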
The drawing application 31 which has determined at S6 that the acquired report data R is the end report ER refers to the number n of canceled images preliminarily set in the memory 11 and determines whether n=0 is satisfied (S11). The details of the number n of canceled images will be described later.
The drawing application 31 which has determined at S11 that n=0 is satisfied returns the processing to S3. In this case, the cancelation processing that cancels the deformation of the image is not performed. On the other hand, the drawing application 31 which has determined at S11 that n=0 is not satisfied controls the desktop window manager 30a to stop the transmission of previous n images (S12). Accordingly, the images corresponding to n pieces of report data R before the end report ER are deleted from the transmission buffer 30c and are not reflected on the displays 14 of the computers 1 and 2.
Then, the drawing application 31 determines whether the control of the desktop window manager 30a has been performed in time (S13). That is, if the value of n is large, there is a possibility that some of the previous n images have already been transmitted at the time of executing S11. In such a case, the drawing application 31 determines that the control of the desktop window manager 30a has not been performed in time. The drawing application 31 which has determined at S13 that the control of the desktop window manager 30a has been performed in time returns the processing to S3. On the other hand, the drawing application 31 which has determined at S13 that the control of the desktop window manager 30a has not been performed in time regenerates the (n+1)-th previous image (rewind image to be displayed on the display 14 in the case where no gesture is made) based on the series of pieces of report data R stored in the memory 11 and writes the image into the transmission buffer 30c (S14). Accordingly, the image displayed on the display 14 is excessively deformed once, but immediately after that, it is possible to return to the image that has not excessively been deformed.
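The cancelation processing of S11 through S14 can be sketched as follows, modeling the transmission buffer 30c as a queue of untransmitted images. This is a sketch under assumed data structures; the actual cooperation between the drawing application 31 and the desktop window manager 30a is not reproduced here.

```python
from collections import deque

def cancel_deformation(transmission_buffer, n, rewind_image=None):
    """Sketch of the cancelation processing (S11-S14): remove the previous
    n deformed images from the transmission buffer so they are never
    displayed.  If fewer than n images remain (the control was not
    performed in time), append a rewind image regenerated from the stored
    report data.  Returns True when the control was performed in time."""
    if n == 0:
        return True                        # S11: cancelation not performed
    in_time = len(transmission_buffer) >= n
    for _ in range(min(n, len(transmission_buffer))):
        transmission_buffer.pop()          # S12: drop untransmitted images
    if not in_time and rewind_image is not None:
        transmission_buffer.append(rewind_image)   # S14: rewind image
    return in_time
```

When the control is not in time, the display briefly shows the excessively deformed image but immediately returns to the rewind image, matching the behavior described above.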
The environment information is information indicating the environment of the computer 1 and includes, for example, either the size of the display 14 of the computer 1 or a length of time T_delay required to display an image corresponding to the position of the indicator on the display 14 of the computer 1 after the computer 1 detects the position of the indicator. The drawing application 31 only needs to acquire the size of the display 14 of the computer 1 by receiving the information indicating the size of the display 14 from the computer 1. In addition, the drawing application 31 only needs to acquire the length of time T_delay by causing the operating system 30 of the computer 1 to measure the length of time T_delay and receiving the result from the computer 1. The length of time T_delay will be described in detail later.
Next, the drawing application 31 decides the number n of canceled images based on the acquired environment information (S21). In one example, in the case where the size of the display 14 of the computer 1 is small enough, n=0 is set (that is, the cancelation processing that cancels the deformation of the image is not performed), and as the size of the display 14 becomes larger, n may be increased. Since the excessive deformation of the image is more noticeable as the size of the display 14 on which it is displayed becomes larger, it is possible to keep the excessive deformation of the image to a less noticeable range by deciding the number n of canceled images in such a manner as described above.
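A minimal sketch of the decision at S21 follows. The breakpoints and the returned values of n are illustrative assumptions; the disclosure only states that n=0 for a small enough display and that n may increase with display size.

```python
def decide_canceled_images(display_diagonal_inches):
    """Illustrative decision of the number n of canceled images (S21):
    on a small enough display the excessive deformation is not noticeable,
    so n = 0; n grows as the display 14 becomes larger."""
    if display_diagonal_inches <= 10:
        return 0   # cancelation processing not performed
    if display_diagonal_inches <= 20:
        return 2
    return 4
```

Any monotonically non-decreasing mapping from display size to n would serve the same purpose.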
As described above, according to the image processing method of the present embodiment, since the deformation of the image is canceled according to the end report ER, it is possible to prevent the deformation of the image based on the gesture from becoming excessive due to a processing delay.
In addition, since the number n of canceled images is set based on the size of the display 14 of the computer 1, the length of time T_delay required to display the image corresponding to the position of the indicator on the display 14 of the computer 1 after the computer 1 detects the position of the indicator, and the like, it is possible to optimize the number n of canceled images according to the environment.
Further, since the test interface including the test image that can be deformed according to the gesture made by the user and the slider for selecting the number n of canceled images is displayed on the display of the computer 1, the user of the computer 1 can select the optimal value of the number n of canceled images while checking the actual deformation status.
It should be noted that, in the present embodiment described above, the drawing application 31 deforms the image. However, the desktop window manager 30a may deform the image.
In addition, although the example of setting the number n of canceled images based on the size of the display 14 of the computer 1 has been described in the present embodiment, the number n of canceled images may be set based on the moving speed of the indicator in the computer 1. It should be noted that the moving speed is the distance by which the indicator moves per unit time, and the unit of distance may be a physical length such as a centimeter or may be the number of pixels. In this case, it is preferable that the drawing application 31 causes the computer 1 to acquire the average value of the moving speed of the indicator, and that the number n of canceled images is set smaller as the acquired average value is smaller. As the moving speed of the indicator becomes smaller, the excessive deformation of the image becomes less noticeable, and thus, it is possible to cancel the image deformation only by an optimal amount even in such a manner as described above.
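The speed-based variant can be sketched as follows. Computing the average speed from positions sampled at the input sensor's sampling rate follows from the definition above; the speed thresholds for choosing n are illustrative assumptions.

```python
import math

def average_speed(positions, sampling_rate_hz):
    """Average moving speed of the indicator (distance units per second)
    from positions sampled at the input sensor's sampling rate."""
    if len(positions) < 2:
        return 0.0
    total = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    return total * sampling_rate_hz / (len(positions) - 1)

def decide_n_by_speed(avg_speed, slow=200.0, fast=1000.0):
    """The smaller the average speed, the smaller the number n of canceled
    images (thresholds are assumed values)."""
    if avg_speed < slow:
        return 0
    if avg_speed < fast:
        return 1
    return 3
```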
Next, an image processing method according to a second embodiment of the present disclosure will be described. The system configurations of the computers 1 and 2 executing the image processing method according to the present embodiment are similar to those in the first embodiment.
In the case where the operating system 30 of the computer 1 determines at S43 that the gesture has been detected, it determines whether the detected gesture corresponds to a fixed gesture stored in a correspondence table (S44). The details of the correspondence table will be described later.
Examples of the correspondence table will specifically be described below. Each entry of the correspondence table associates a fixed gesture, determined from the movement of the indicator, with report data for a device different from the input sensor 15, such as a keyboard or a mouse.
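The lookup at S44 and the subsequent report generation can be sketched as follows. The table entries shown here (for example, mapping a pinch to a Ctrl + mouse-wheel report) are hypothetical placeholders; the actual entries of the correspondence table are given in the drawings and are not reproduced here.

```python
# Illustrative correspondence table: each fixed gesture is mapped to compact
# report data for a device other than the input sensor (entries assumed).
CORRESPONDENCE_TABLE = {
    "pinch-out": {"device": "mouse", "wheel": +1, "ctrl": True},   # zoom in
    "pinch-in":  {"device": "mouse", "wheel": -1, "ctrl": True},   # zoom out
    "scroll-up": {"device": "keyboard", "key": "PageUp"},
}

def build_report(gesture, sensor_report):
    """If the detected movement corresponds to a fixed gesture, transmit
    report data for the other device; otherwise transmit the report data
    for the input sensor as-is."""
    return CORRESPONDENCE_TABLE.get(gesture, sensor_report)
```

Because the replacement report is a single small record rather than a long series of position reports, the amount of data transmitted for a recognized gesture is greatly reduced.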
As described above, according to the image processing method of the present embodiment, the gesture performed by the pen P or finger can be replaced with data having a small data size (specifically, report data for the keyboard or mouse) and transmitted. Since the processing delay can thus be reduced, it is possible to prevent the deformation of the image based on the gesture from becoming excessive due to the processing delay, similarly to the first embodiment.
Although the preferred embodiments of the present disclosure have been described above, it is obvious that the present disclosure is not limited to such embodiments at all, and the present disclosure can be carried out in various forms without departing from the gist thereof.
The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments.
These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
Number | Date | Country | Kind |
---|---|---|---|
2021-017152 | Feb 2021 | JP | national |
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/JP2022/001733 | Jan 2022 | US
Child | 18362820 | US |