This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2015-076822, filed on Apr. 3, 2015, the entire contents of which are incorporated herein by reference.
The embodiments discussed herein are related to a technique that controls display of content.
In a general window system, when a user designates the start of application (an application program, an application), a window is displayed at a predetermined position on a screen, and the content of the application is displayed. In addition, there are also techniques disclosed in Japanese Laid-open Patent Publication No. 2013-130915 and Japanese Laid-open Patent Publication No. 2001-5599.
According to an aspect of the invention, a display control system includes a communication device and a display control device configured to communicate with the communication device, wherein the communication device includes first circuitry configured to detect a specific event, and transmit notification information on the specific event to the display control device, and wherein the display control device includes second circuitry configured to detect drawing processing performed by a user on a screen, determine a window frame on the screen according to the drawing processing, and perform control to display a content in the window frame when the notification information is received, the content being designated as a display object by the communication device that has transmitted the notification information.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
In recent years, improvements in the performance of projectors and displays have made it possible to use almost any location in a space as a display. In such a situation, it is important to control how large a window is displayed, at which position, and on which display.
For instance, assume that a user desires to display an application screen on one of a plurality of displays installed in a room, using an application on a client terminal such as a smartphone at hand. If the position of another display is selected by default, the application screen may be displayed on a display far from the one the user intends to use. The user then has to perform an operation to move the window from the far display to the intended position on the intended display. Likewise, to set the size of the window, an operation has to be performed such as dragging the edge of the window.
In order to avoid such cumbersome operations, it is desirable that the system estimate the window area intended by the user. In general, however, it is difficult to automatically estimate a user's intention.
When there is such a high degree of freedom in choosing the display area of a window, it is desirable that the system estimate an appropriate area for displaying the window. Conventional techniques, however, do not provide sufficient usability.
Thus, as an aspect, the disclosed embodiment aims to improve the usability of content display coordination between a client terminal and a display.
Hereinafter, a preferred embodiment of the present disclosure will be described.
<Configuration>
The control device 2 includes a display control unit 21, an operation detection unit 22, a correspondence processing unit 23, and a terminal management and communication unit 24. The display control unit 21 has functions of outputting a video signal to the display 1 and controlling screen display. In particular, the display control unit 21 controls the display of content, distributed from a client terminal 4, on a window frame designated by a user on the display 1, according to the correspondence between a window ID and a terminal ID.
The operation detection unit 22 has functions of detecting a user's touch operation based on a sensor signal from the display 1 and interpreting the user's designation. In particular, the operation detection unit 22 detects a designation from a user as to the position, size, and other attributes of a window frame on the display 1. The correspondence processing unit 23 has a function of bringing a user-designated window frame detected by the operation detection unit 22 into correspondence with a client terminal 4 that is confirmed to be present in the vicinity by the terminal management and communication unit 24. The terminal management and communication unit 24 has functions of detecting the presence of the client terminal 4 via the access point 3, performing check-in (login) processing as needed, and communicating with the client terminal 4.
The client terminal 4 includes an application unit 41, a specific operation event transmitting unit 42, and a content transmitting unit 43. The application unit 41 has a function of executing any application (an application program, an application). The specific operation event transmitting unit 42 has a function of notifying the control device 2, as an event, that a specific operation has been executed when such an operation (for instance, a shake, that is, shaking the client terminal 4 by hand) is used to bring a user-designated window frame into correspondence with the client terminal 4. The content transmitting unit 43 has a function of transmitting content to be displayed in a window of the display 1 to the control device 2.
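For illustration only, the interaction between the client terminal 4 and the control device 2 may be pictured as two kinds of messages. The following Python sketch is not part of the embodiment; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SpecificOperationEvent:
    """Sent by the specific operation event transmitting unit 42,
    for instance when the client terminal 4 is shaken by hand."""
    terminal_id: str
    operation: str      # e.g. "shake"
    timestamp: float

@dataclass
class ContentFrame:
    """Sent by the content transmitting unit 43: the content to be
    displayed in the window paired with this terminal."""
    terminal_id: str
    payload: bytes
```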
It is to be noted that instead of transmitting the content from the client terminal 4 to the control device 2, the content may be obtained directly by transferring the application from the client terminal 4 to the control device 2 and executing it on the control device 2 in synchronization with the client terminal 4. In this case, transmission of the content via the access point 3 is unnecessary, and the content can thus be displayed without delay.
It is to be noted that the control device 2 and the client terminal 4 have a hardware configuration of a general computer device. In other words, the control device 2 and the client terminal 4 each have a central processing unit (CPU), a read only memory (ROM), a random access memory (RAM), a non-volatile random access memory (NVRAM), an auxiliary storage device, and a wireless interface.
The touch position detection unit 221 has a function of detecting a touch position (coordinates) based on a sensor signal from the display 1. The touch motion measurement unit 222 has a function of measuring touch motion (such as a path) based on the touch position detected by the touch position detection unit 221. The touch motion analysis and drawing designation unit 223 has functions of analyzing the touch motion measured by the touch motion measurement unit 222, giving a drawing designation to the display control unit 21, and giving a window drawing event to the display control unit 21 in the case of an operation of designating a window frame. The touch motion analysis and drawing designation unit 223 also notifies the correspondence processing unit 23 of an occurrence of a window drawing event.
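The division of labor among the units 221 to 223 may be sketched as follows. This is a hedged illustration with assumed names; in particular, looks_like_window_frame() is a crude stand-in for the analysis performed by the touch motion analysis and drawing designation unit 223.

```python
def looks_like_window_frame(path, close_radius=40.0):
    """Crude stand-in classifier: the path returns near its start point,
    that is, the user has drawn a roughly closed figure."""
    if len(path) < 10:
        return False
    (x0, y0), (x1, y1) = path[0], path[-1]
    return (x1 - x0) ** 2 + (y1 - y0) ** 2 <= close_radius ** 2

class TouchPipeline:
    def __init__(self, on_window_drawing_event):
        self.path = []                        # touch motion measured by unit 222
        self.notify = on_window_drawing_event

    def on_sensor_signal(self, x, y):
        """Unit 221: a detected touch position (coordinates) enters the pipeline."""
        self.path.append((x, y))

    def on_touch_released(self):
        """Unit 223: analyze the measured motion and, if it designates a
        window frame, raise a window drawing event (also reported to unit 23)."""
        if looks_like_window_frame(self.path):
            self.notify(list(self.path))
        self.path = []
```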
<Overall Processing>
Subsequently, when receiving a notification of an occurrence of a window drawing event from the operation detection unit 22 (yes in step S4), the correspondence processing unit 23 of the control device 2 checks, via the terminal management and communication unit 24, whether or not a predetermined operation (for instance, a shake) has been performed and reported by the specific operation event transmitting unit 42 of the client terminal 4 within a preset time (step S5).
When a predetermined operation has been performed (yes in step S5), the correspondence processing unit 23 brings the window ID for the window drawing event into correspondence with the terminal ID of the client terminal 4 on which the predetermined operation was performed, and sets the correspondence in the display control unit 21 (step S6). Consequently, the display control unit 21 displays the content received from the content transmitting unit 43 of the client terminal 4 in the window with the corresponding window ID.
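A minimal sketch of this pairing logic (steps S4 to S6) might look as follows; the five-second timeout and the dictionary names are assumptions, not values fixed by the embodiment.

```python
import time

PAIRING_TIMEOUT_SEC = 5.0   # assumed preset time for step S5

recent_shakes = {}          # terminal ID -> time of the last reported shake
window_to_terminal = {}     # window ID -> terminal ID, used by the display control unit

def on_shake_event(terminal_id):
    """Called when the specific operation event transmitting unit 42 reports a shake."""
    recent_shakes[terminal_id] = time.time()

def on_window_drawing_event(window_id):
    """Steps S4 to S6: bind the new window to a terminal shaken within the preset time."""
    now = time.time()
    for terminal_id, shaken_at in recent_shakes.items():
        if now - shaken_at <= PAIRING_TIMEOUT_SEC:       # yes in step S5
            window_to_terminal[window_id] = terminal_id  # step S6
            return True
    return False                                         # no in step S5
```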
Also, the number of angles and sides of the handwritten figure used in an operation of designating a window frame may be registered in advance in relation to a user, thereby making it possible to establish the correspondence without performing a specific operation such as a shake.
In this case, when receiving a notification of an occurrence of a window drawing event from the operation detection unit 22 (yes in step S7), the correspondence processing unit 23 of the control device 2 checks, based on the number of angles and sides obtained from the path, whether or not the client terminal 4 of the user registered for that number of angles and sides has already checked in (step S8).
When the client terminal 4 has already checked in (yes in step S8), the correspondence processing unit 23 brings the window ID for the window drawing event into correspondence with the terminal ID of that user's checked-in client terminal 4, and sets the correspondence in the display control unit 21 (step S9). Consequently, the display control unit 21 displays the content received from the content transmitting unit 43 of the client terminal 4 in the window with the corresponding window ID.
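The correspondence by figure shape (steps S7 to S9) could be sketched as follows; the registry contents and the crude corner counter are illustrative assumptions.

```python
import math

registered_figures = {4: "alice", 3: "bob"}  # corner count -> user (assumed registry)
checked_in = {"alice": "terminal-A"}         # user -> terminal ID of a checked-in terminal 4
window_to_terminal = {}                      # window ID -> terminal ID

def count_corners(path, turn_threshold_deg=60.0):
    """Count sharp direction changes along the path (a crude corner detector)."""
    corners = 0
    for (x0, y0), (x1, y1), (x2, y2) in zip(path, path[1:], path[2:]):
        turn = math.degrees(math.atan2(y2 - y1, x2 - x1) - math.atan2(y1 - y0, x1 - x0))
        turn = abs(turn) % 360.0
        if min(turn, 360.0 - turn) >= turn_threshold_deg:
            corners += 1
    return corners

def pair_by_figure(window_id, path):
    """Steps S8 and S9: pair the window with the user whose registered figure
    matches the corner count of the path, if that user has checked in."""
    user = registered_figures.get(count_corners(path))
    if user is None or user not in checked_in:        # no in step S8
        return False
    window_to_terminal[window_id] = checked_in[user]  # step S9
    return True
```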
Also, the path of a user's handwritten signature may be pre-registered, and the user may be authenticated using the signature drawn in an operation of designating a window frame. In this case, by pre-registering the terminal ID of the user's client terminal 4, or by prompting the user to perform a predetermined operation such as a shake, authentication and establishment of the correspondence between a window ID and a terminal ID can be performed at the same time.
<Processing of Operation Detection Unit>
[First Processing Example]
When it is determined that dragging is started (yes in step S102), the operation detection unit 22 draws a start circle centered on the position at which dragging is started (step S103). The start circle informs the user of the goal of the drag; it is not limited to a circle and may be any closed figure. Displaying such a figure also allows a window display designation operation to be distinguished from other pointing operations.
Subsequently, the operation detection unit 22 captures a touch position from a sensor signal of the display 1 (step S104), and draws a path (step S105). It is to be noted that drawing of a path may not be performed.
Subsequently, the operation detection unit 22 determines whether or not dragging is continued (step S106), and when it is determined that dragging is continued (yes in step S106), the flow returns to "capture touch position" (step S104).
When it is determined that dragging is not continued (no in step S106), the operation detection unit 22 checks a position at which dragging is interrupted (step S107), and determines whether or not the position is in the start circle (step S108).
When it is determined that the position is in the start circle (yes in step S108), the operation detection unit 22 estimates a window frame (step S109).
Subsequently, the operation detection unit 22 notifies the display control unit 21 of a window drawing event (step S110), erases the start circle and the path (if the path has been drawn) (step S111), and completes the processing.
When it is determined that the position at which dragging is interrupted is not in the start circle (no in step S108), estimation is not performed; the start circle and the path (if the path has been drawn) are erased (step S111), and the processing is completed. Thus, when the user wishes to change the designation of a window frame before the dragging reaches the start circle, the designation is effectively canceled by interrupting the dragging, and dragging again allows the processing to be performed once more from step S101.
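Condensing steps S101 to S111, one possible sketch of this processing example is the following. The drawing calls are omitted, and estimating the frame as the bounding box of the path is an assumption; the embodiment does not fix a particular estimation method.

```python
START_CIRCLE_RADIUS = 40.0  # assumed size of the closed start figure

def estimate_window_frame(path):
    """Assumed estimator: the axis-aligned bounding box (x, y, width, height) of the path."""
    xs = [x for x, _ in path]
    ys = [y for _, y in path]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

def run_window_designation(drag_samples):
    """Steps S101 to S111 in brief: returns a frame, or None if the designation is cancelled."""
    path = list(drag_samples)       # steps S104 to S106: positions captured while dragging
    if not path:
        return None
    start, end = path[0], path[-1]  # steps S102/S103 would draw the start circle at `start`
    dx, dy = end[0] - start[0], end[1] - start[1]
    if dx * dx + dy * dy > START_CIRCLE_RADIUS ** 2:
        return None                 # no in step S108: erase and cancel (step S111)
    return estimate_window_frame(path)  # step S109; step S110 notifies the display control unit

# Example: a roughly rectangular drag that returns to its start point
print(run_window_designation([(0, 0), (300, 0), (300, 200), (0, 200), (5, 5)]))
# -> (0, 0, 300, 200)
```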
[Second Processing Example]
This processing example allows input of a signature for identifying and authenticating a user when an operation of designating a window frame is performed by the user. Except for this, the second processing example is the same as the first processing example.
When it is determined that dragging is started (yes in step S202), the operation detection unit 22 draws a start circle centered on the position at which dragging is started (step S203).
Subsequently, the operation detection unit 22 captures a touch position from a sensor signal of the display 1 (step S204), and draws a path (step S205). It is to be noted that drawing of a path may not be performed.
Subsequently, the operation detection unit 22 determines whether or not dragging is continued (step S206), and when it is determined that dragging is continued (yes in step S206), the flow returns to “capture touch position” (step S204).
When it is determined that dragging is not continued (no in step S206), the operation detection unit 22 checks a position at which dragging is interrupted (step S207), and determines whether or not the position is in the start circle (step S208).
When it is determined that the position is in the start circle (yes in step S208), the operation detection unit 22 activates a timer (step S209), and stays on stand-by until a predetermined time elapses (step S210). It is to be noted that the predetermined time is provided for a user to write a signature in a window frame, and capturing a touch position and drawing a path are continued for obtaining the path of the signature.
When the predetermined time elapses (yes in step S210), the operation detection unit 22 estimates a window frame (step S211) and notifies the display control unit 21 of a window drawing event (step S212). In this step, the presence of a signature and the path of the signature (if a signature is provided) are included in the notification. The path of the signature notified to the display control unit 21 is compared with the signature path registered in advance in relation to the user, and is used for authentication.
Subsequently, the operation detection unit 22 erases the start circle and the path (step S213), and completes the processing.
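The stand-by of steps S209 and S210, during which the signature path is collected, might be sketched as follows; capture_touch and the three-second wait are assumptions.

```python
import time

SIGNATURE_WAIT_SEC = 3.0  # assumed value of the predetermined time (steps S209/S210)

def collect_signature(capture_touch, sample_interval=0.02):
    """Keep capturing touch positions for a fixed period so the user can write
    a signature inside the frame. `capture_touch` is a hypothetical callable
    returning the current (x, y), or None while nothing is touched."""
    deadline = time.time() + SIGNATURE_WAIT_SEC
    signature_path = []
    while time.time() < deadline:          # the stand-by of step S210
        point = capture_touch()
        if point is not None:
            signature_path.append(point)
        time.sleep(sample_interval)
    return signature_path                  # attached to the window drawing event (step S212)
```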
[Third Processing Example]
In this processing example, a frame probability (the likelihood that the path drawn by the user's dragging is recognizable as a predetermined figure from which a window frame may be estimated) is evaluated, and the size of the start circle is adjusted according to the evaluation. In other words, once the path drawn so far allows a window frame to be estimated, the operation may be completed without dragging all the way to the start circle at its original size, which saves time. Except for this, the third processing example is the same as the first processing example.
When it is determined that dragging is started (yes in step S302), the operation detection unit 22 draws a start circle centered on the position at which dragging is started (step S303).
Subsequently, the operation detection unit 22 captures a touch position from a sensor signal of the display 1 (step S304), and draws a path (step S305). It is to be noted that drawing of a path may not be performed.
Subsequently, the operation detection unit 22 evaluates the path and adjusts the size of the start circle (step S306).
Subsequently, the operation detection unit 22 determines whether or not dragging is continued (step S307), and when it is determined that dragging is continued (yes in step S307), the flow returns to "capture touch position" (step S304).
When it is determined that dragging is not continued (no in step S307), the operation detection unit 22 checks a position at which dragging is interrupted (step S308), and determines whether or not the position is in the start circle (step S309).
When it is determined that the position is in the start circle (yes in step S309), the operation detection unit 22 estimates a window frame (step S310).
Subsequently, the operation detection unit 22 notifies the display control unit 21 of a window drawing event (step S311), erases the start circle and the path (if the path has been drawn) (step S312), and completes the processing.
When it is determined that the position at which dragging is interrupted is not in the start circle (no in step S309), estimation is not performed; the start circle and the path (if the path has been drawn) are erased (step S312), and the processing is completed.
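The evaluation of the frame probability and the resizing of the start circle (step S306) could be sketched as follows, reusing the count_corners() helper from the pairing sketch above; the saturation at three corners and the radius bounds are assumptions.

```python
BASE_RADIUS = 40.0   # assumed original size of the start circle
MAX_RADIUS = 160.0   # assumed upper bound after enlargement

def frame_probability(path):
    """Crude stand-in for the frame probability: three detected corners are
    taken as enough to infer a rectangular frame, so the score saturates there.
    Reuses count_corners() from the earlier pairing sketch."""
    return min(count_corners(path), 3) / 3.0

def adjusted_start_circle_radius(path):
    """Step S306: the more confidently a frame can already be estimated from
    the path, the larger the start circle, so the drag can end earlier."""
    return BASE_RADIUS + (MAX_RADIUS - BASE_RADIUS) * frame_probability(path)
```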
[Modification of Third Processing Example]
In the third processing example described above, the size of the start circle (the closed start figure) is adjusted based on the evaluation of the frame probability. In addition, the following modifications are possible.
As an example, the currently drawn path is continuously evaluated, and when the forward direction of the path is toward the closed start figure, the size of the closed start figure is adjusted according to the distance between the current path position and the closed start figure. This makes it possible to reduce the time taken to draw the frame.
As another example, for the currently drawn figure, the number of angles is counted, and the size of the closed start figure is adjusted according to the number of angles. In other words, since the possibility of successful estimation increases as the number of angles of the path increases, the closed start figure is enlarged so that drawing of the path may be completed earlier.
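One plausible reading of the first modification is sketched below; the angular tolerance, the capture range, and the gain are assumptions. The second modification, enlargement by angle count, corresponds to the frame_probability() sketch shown earlier.

```python
import math

def heading_toward(path, center, tolerance_deg=30.0):
    """True when the forward direction of the current path points at the
    closed start figure (within an assumed angular tolerance)."""
    if len(path) < 2:
        return False
    (x0, y0), (x1, y1) = path[-2], path[-1]
    heading = math.atan2(y1 - y0, x1 - x0)
    to_center = math.atan2(center[1] - y1, center[0] - x1)
    diff = abs(math.degrees(heading - to_center)) % 360.0
    return min(diff, 360.0 - diff) <= tolerance_deg

def radius_for_approach(path, center, base=40.0, capture_range=300.0, gain=0.5):
    """Assumed rule: when the pen is heading for the closed start figure,
    enlarge the figure the closer the pen gets, so the two meet sooner."""
    if not heading_toward(path, center):
        return base
    x, y = path[-1]
    dist = math.hypot(center[0] - x, center[1] - y)
    return base + gain * max(0.0, capture_range - dist)
```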
[Fourth Processing Example]
In this processing example, when a user starts to drag (starts drawing), a closed figure such as a start circle is not displayed; instead, it is checked during the dragging whether the figure of a window frame may be estimated. When the estimation is possible, a candidate figure is displayed and is finalized after confirmation by the user.
When it is determined that a user has started drawing (yes in step S401), the operation detection unit 22 stores drawn points (step S402) and determines whether or not a figure may be estimated based on the stored drawn points (step S403). For this determination, the same technique as the evaluation of the frame probability in the third processing example may be used.
When it is determined that a figure may not be estimated (no in step S403), the operation detection unit 22 returns to the storing of drawn points (step S402).
When it is determined that a figure may be estimated (yes in step S403), the operation detection unit 22 estimates a figure based on the stored drawn points and draws the figure (step S404).
Subsequently, the operation detection unit 22 determines whether or not a user has decided to accept the drawn candidate figure (step S405).
When it is determined that a user has decided to accept the drawn candidate figure (yes in step S405), the operation detection unit 22 notifies the display control unit 21 of a window drawing event, erases the drawn figure (step S406), and completes the processing. It is to be noted that the candidate figure may be left on display until a window is drawn.
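Steps S401 to S406 could be condensed as follows, reusing count_corners() and estimate_window_frame() from the earlier sketches; the event-stream format and the three-corner threshold are assumptions.

```python
def designate_window_incremental(events, min_corners=3):
    """Sketch of steps S401 to S406. `events` is a hypothetical stream that
    yields ("point", (x, y)) while the user draws and ("accept",) when the
    user confirms the displayed candidate figure."""
    points, candidate = [], None
    for event in events:
        if event[0] == "point":
            points.append(event[1])                        # step S402
            if candidate is None and count_corners(points) >= min_corners:
                candidate = estimate_window_frame(points)  # steps S403/S404
        elif event[0] == "accept" and candidate is not None:
            return candidate                               # yes in step S405
    return None                                            # the candidate was never confirmed
```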
[Fifth Processing Example]
In this processing example, when a user has not completed the drawing, estimation of a figure continues. Except for this, the fifth processing example is the same as the fourth processing example.
When it is determined that a user has started drawing (yes in step S501), the operation detection unit 22 stores drawn points (step S502) and determines whether or not a figure may be estimated based on the stored drawn points (step S503).
When it is determined that a figure may not be estimated (no in step S503), the operation detection unit 22 returns to the storing of drawn points (step S502).
When it is determined that a figure may be estimated (yes in step S503), the operation detection unit 22 estimates a figure based on the stored drawn points and draws a candidate figure (step S504).
Subsequently, the operation detection unit 22 determines whether or not a user has completed the drawing (step S505).
When it is determined that the drawing is not completed (no in step S505), the operation detection unit 22 returns to the storing of drawn points (step S502).
When it is determined that the drawing is completed (yes in step S505), the operation detection unit 22 notifies the display control unit 21 of a window drawing event, erases the drawn figure (step S506), and completes the processing.
[Sixth Processing Example]
This processing example allows cancellation in the case where an estimated figure is not accepted by a user. Except for this, the sixth processing example is the same as the fifth processing example.
When it is determined that a user has started drawing (yes in step S601), the operation detection unit 22 stores drawn points (step S602) and determines whether or not a figure may be estimated based on the stored drawn points (step S603).
When it is determined that a figure may not be estimated (no in step S603), the operation detection unit 22 returns to the storing of drawn points (step S602).
When it is determined that a figure may be estimated (yes in step S603), the operation detection unit 22 estimates a figure based on the stored drawn points and draws a candidate figure (step S604).
Subsequently, the operation detection unit 22 determines whether or not a user has completed the drawing (step S605).
When it is determined that the drawing is not completed (no in step S605), the operation detection unit 22 returns to the storing of drawn points (step S602).
When it is determined that the drawing is completed (yes in step S605), the operation detection unit 22 determines whether or not the end position is in a cancellation area (step S606).
When it is determined that the end position is not in the cancellation area (no in step S606), the operation detection unit 22 notifies the display control unit 21 of a window drawing event, erases the drawn figure (step S607), and completes the processing.
When it is determined that the end position is in the cancellation area (yes in step S606), the operation detection unit 22 erases the drawn figure (step S608), and completes the processing.
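The cancellation check of steps S606 to S608 might be sketched as follows; treating a strip along the bottom edge of the screen as the cancellation area is an assumption, since the embodiment does not fix where that area lies.

```python
SCREEN_HEIGHT = 1080        # assumed display resolution
CANCEL_STRIP_HEIGHT = 100   # assumed height of the cancellation area

def in_cancellation_area(end_position):
    """Step S606: here the cancellation area is assumed to be a strip
    along the bottom edge of the screen."""
    _, y = end_position
    return y >= SCREEN_HEIGHT - CANCEL_STRIP_HEIGHT

def finish_drawing(candidate_frame, end_position, notify):
    """Steps S606 to S608: discard the figure if the drawing ends in the
    cancellation area; otherwise raise the window drawing event via `notify`
    (a stand-in for notifying the display control unit 21)."""
    if in_cancellation_area(end_position):   # yes in step S606
        return None                          # erase the drawn figure (step S608)
    notify(candidate_frame)                  # step S607
    return candidate_frame
```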
<Summation>
As described above, according to the present embodiment, it is possible to improve the usability of content display coordination between a client terminal and a display.
A preferred embodiment has been described above. Although specific examples have been depicted and described herein, it is apparent that various modifications and changes may be made on these examples without departing from the broad gist and scope defined in the appended claims. In other words, the details of specific examples and the accompanying drawings should not be construed as limiting the disclosure.
The control device 2 is an example of a content display control device. The terminal management and communication unit 24 is an example of a unit that detects a client terminal. The operation detection unit 22 is an example of a unit that detects a touch operation. The operation detection unit 22 is an example of a unit that estimates a window frame. The correspondence processing unit 23 is an example of a unit that establishes correspondence with a client terminal. The display control unit 21 is an example of a unit that displays content.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
Number | Date | Country | Kind
---|---|---|---
2015-076822 | Apr. 3, 2015 | JP | national