This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2014-030964 filed on Feb. 20, 2014; the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to a display apparatus, a display method, and a computer program product.
In a display apparatus provided with a pen input interface, there is a known technique of changing, through pen operation, the substance of content displayed in a region of interest by zooming (zooming in and zooming out) and the like.
However, in the above-described conventional art, changing the substance of the content displayed in a region of interest also changes the size of the region of interest. As a result, the visibility of the area outside the region of interest is reduced.
According to an embodiment, a display apparatus includes an acquisition controller, a region setting controller, a change controller, and a display controller. The acquisition controller sequentially acquires a point position of an input device on a display that displays content, and a pressure acting on the point position. The region setting controller sets a region of interest on the display, based on the point position and the pressure that are sequentially acquired. The change controller changes, based on the pressure, substance displayed in the region of interest, of the content, while the region of interest is fixed. The display controller displays the changed substance in the region of interest.
Embodiments will be described in detail below with reference to accompanying drawings.
The input unit 11 can be implemented by an input device capable of inputting by handwriting, such as an electronic pen, a touch panel, a touch pad and a mouse. The acquisition unit 13, the analyzer 15, the region setting unit 17, the state setting unit 19, the notification controller 21, the change unit 25, and the display controller 29 may be implemented by, for example, allowing a processor such as a CPU (Central Processing Unit) to execute a program, that is, by software; may be implemented by hardware such as an IC (Integrated Circuit); or may be implemented by a combination of software and hardware. The notification unit 23 can be implemented by a notification device such as a touch panel display, a speaker, a lamp and a vibrator. It is noted that when the notification unit 23 is implemented by a touch panel display, the display 31 may play a role of the notification unit 23. The storage 27 can be implemented by, for example, a magnetically, optically, or electrically storable storage device such as an HDD (Hard Disk Drive), an SSD (Solid State Drive), a memory card, an optical disk, a ROM (Read Only Memory), and a RAM (Random Access Memory). The display 31 can be implemented by, for example, a display device such as a touch panel display.
The input unit 11 sequentially inputs, to the display apparatus 10, a point position of the input unit 11 on the display 31 that displays content, and a pressure acting on the point position. The input unit 11 may further sequentially input at least one of a tilt angle that is an angle formed between the display 31 and the input unit 11, and an azimuth angle that is an angle formed between a straight line of the input unit 11 projected on the display 31 and a prescribed line.
For example, in the case of an electromagnetic induction system, the tilt angle and the azimuth angle can be detected on the antenna-coil side by scanning the positions of the electric power generated when a plurality of coils embedded in the input unit 11 (an electronic pen) resonate with an alternating current flowing through an antenna coil laid over the entire display 31 (a touch panel). Alternatively, for example, the tilt angle and the azimuth angle can be detected by embedding an acceleration sensor or an angular velocity sensor in the input unit 11.
In the first embodiment, the input unit 11 sequentially inputs a point position, a pressure acting on the point position, a tilt angle and an azimuth angle, but the information input by the input unit 11 is not limited thereto.
An example of the information (stroke information) input by the input unit 11 during the period from when it is brought into contact with the display 31 to when it is moved apart from the display 31 (from pen down to pen up) is {(x(1,1), y(1,1), p(1,1), r(1,1), s(1,1)), ..., (x(1,N(1)), y(1,N(1)), p(1,N(1)), r(1,N(1)), s(1,N(1)))}. Here, x indicates an x-coordinate of the point position; y indicates a y-coordinate of the point position; p indicates a pressure (writing pressure) acting on the point position; r indicates a tilt angle; and s indicates an azimuth angle. N(i) indicates the number of points sampled for stroke i (the i-th stroke).
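For illustration only, the stroke information above can be sketched as a simple Python structure. The class and field names are assumptions introduced here and are not part of the embodiment:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    x: float  # x-coordinate of the point position
    y: float  # y-coordinate of the point position
    p: float  # pressure (writing pressure) acting on the point position
    r: float  # tilt angle between the input unit and the display
    s: float  # azimuth angle of the input unit projected on the display

# A stroke is the sequence of samples from pen down to pen up;
# len(stroke) corresponds to N(i), the number of sampled points of stroke i.
Stroke = List[Sample]

stroke: Stroke = [Sample(10.0, 20.0, 0.4, 30.0, 90.0),
                  Sample(11.0, 21.0, 0.5, 31.0, 91.0)]
```
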
The acquisition unit 13 sequentially acquires the point position and the pressure acting on the point position that are input by the input unit 11. The acquisition unit 13 may further sequentially acquire at least one of the tilt angle and the azimuth angle.
In the first embodiment, the acquisition unit 13 is assumed to sequentially acquire the point position, the pressure acting on the point position, the tilt angle and the azimuth angle.
The analyzer 15 sequentially analyzes the point position and the pressure that are sequentially acquired by the acquisition unit 13. Specifically, the analyzer 15 sequentially analyzes the point position and the pressure sequentially acquired by the acquisition unit 13, to analyze a value and a change of the pressure at the point position.
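The pressure analysis described above can be sketched, under the assumption that the analyzer keeps a short history of pressure samples, as follows. The function name and window size are illustrative assumptions, not part of the embodiment:

```python
def pressure_change(pressures, window=3):
    """Return (latest pressure value, change over the last `window` samples).

    The pressure value supports region setting; the pressure change
    supports the change (zoom) operation described later.
    """
    if not pressures:
        return None, 0.0
    recent = pressures[-window:]
    return recent[-1], recent[-1] - recent[0]
```
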
The analyzer 15 further sequentially analyzes at least one of the tilt angle and the azimuth angle that are sequentially acquired by the acquisition unit 13. Specifically, the analyzer 15 further sequentially analyzes at least one of the tilt angle and the azimuth angle sequentially acquired by the acquisition unit 13, to analyze a gesture of the input unit 11. Examples of the gesture include movement and resting of the input unit 11.
The analyzer 15 may analyze the whole stroke acquired by the acquisition unit 13. An example thereof will be described later.
The region setting unit 17 sets a region of interest on the display 31, based on the analysis result by the analyzer 15. For example, the region setting unit 17 sets a region of interest on the display 31, based on the value of the pressure at the point position.
To suppress the effect of jitter in the point position 51, the centroid of the point positions 51 over a certain period may be used as the point position 51 for that period.
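The centroid smoothing just described could look like the following sketch; the function name is an assumption for illustration:

```python
def smoothed_position(points):
    """Centroid of the point positions over a certain period,
    used to suppress the effect of jitter in the point position."""
    n = len(points)
    xs = sum(x for x, _ in points) / n
    ys = sum(y for _, y in points) / n
    return xs, ys
```
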
Alternatively, for example, the region setting unit 17 may set a region of interest on the display 31 based on the movement and the resting of the input unit 11. For example, the region setting unit 17 enlarges a circular region centered on the point position while the input unit 11 is in the movement gesture, and fixes the circular region when the input unit 11 enters the resting gesture, thereby setting the region of interest.
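One possible reading of this movement/resting behavior is sketched below; the rest threshold and growth rate are illustrative assumptions, not values given in the embodiment:

```python
def set_region_of_interest(samples, rest_threshold=1.0, growth=2.0):
    """Grow a circular region centered on the current point position while
    the pen keeps moving; fix it once the pen comes to rest.
    Returns (cx, cy, radius) of the fixed region of interest."""
    cx, cy = samples[0]
    radius = 0.0
    for (x, y), (px, py) in zip(samples[1:], samples):
        moved = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
        if moved > rest_threshold:   # movement gesture: keep enlarging
            cx, cy = x, y
            radius += growth
        else:                        # resting gesture: fix the region
            break
    return cx, cy, radius
```
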
Still alternatively, for example, when the analysis result of the analyzer 15 shows that a stroke input by the input unit 11 constitutes a closed loop, the region setting unit 17 may set the closed-loop region as a region of interest. Here, a method by which the analyzer 15 analyzes whether or not a stroke constitutes a closed loop will be described.
The analyzer 15 can easily determine that a stroke constitutes a closed loop when the end points of the input stroke overlap each other. In practice, however, there are often cases where the end points of the input stroke do not overlap each other even though the user who input the stroke intended a closed loop.
For this reason, the analyzer 15 determines whether or not the distance between the end points of the input stroke is shorter than a reference length N, or whether or not the stroke intersects itself. For example, the reference length N can be set to 0.05 times the length of the input stroke, or 0.05 times the length of the short side of the circumscribed rectangle of the stroke.
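The end-point test can be sketched as follows, using the 0.05 × stroke-length reference mentioned above; the self-intersection test is omitted for brevity, and the function name is an assumption:

```python
def is_closed_loop(stroke):
    """Treat a stroke as a closed loop when its end points are closer
    than a reference length N = 0.05 x the stroke length."""
    length = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                 for (x1, y1), (x2, y2) in zip(stroke, stroke[1:]))
    n = 0.05 * length
    (sx, sy), (ex, ey) = stroke[0], stroke[-1]
    gap = ((ex - sx) ** 2 + (ey - sy) ** 2) ** 0.5
    return gap < n
```
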
Alternatively, for example, as a result of the analysis by the analyzer 15, the region setting unit 17 may set, as a region of interest, the circumscribed rectangle of the stroke input by the input unit 11.
The state setting unit 19 sets the region of interest to be in a change state, based on the analysis result of the analyzer 15 after the region of interest has been set by the region setting unit 17. For example, the state setting unit 19 sets the region of interest to be in the change state based on a change of the pressure at the point position after the region of interest has been set; for instance, it sets the region of interest to be in the change state if the pressure at the point position does not change for a certain period of time after the region of interest has been set. Alternatively, for example, the state setting unit 19 may set the region of interest to be in the change state based on the movement and the resting of the input unit 11: it may set the region of interest to be in the change state when the input unit 11 changes from the movement gesture to the resting gesture, and may release the change state when the input unit 11 changes from the resting gesture to the movement gesture.
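The "no pressure change for a certain period" condition could be sketched as below; the window size and tolerance are illustrative assumptions, not values given in the embodiment:

```python
def in_change_state(pressures, window=5, tolerance=0.05):
    """Enter the change state when the pressure has stayed within a
    tolerance band for the last `window` samples after the region of
    interest was set."""
    if len(pressures) < window:
        return False
    recent = pressures[-window:]
    return max(recent) - min(recent) <= tolerance
```
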
The notification controller 21 causes the notification unit 23 to notify that the region of interest has been set to be in the change state by the state setting unit 19. The notification may be screen output, speech output, light output or vibration output by the notification unit 23.
The change unit 25 changes the substance displayed in the region of interest, of the content displayed on the display 31, while the region of interest is fixed, based on the analysis result after the region of interest has been set by the region setting unit 17. Specifically, the change unit 25 changes the substance displayed in the region of interest, of the content, while the region of interest is fixed, based on the analysis result after the change state has been set by the state setting unit 19. For example, the change unit 25 changes the substance displayed in the region of interest, of the content, while the region of interest is fixed, based on the change of the pressure at the point position after the change state has been set.
In the first embodiment, the change state is a zoom state, and the change unit 25 zooms (zooms in or zooms out) the substance displayed in the region of interest, of the content, while the region of interest is fixed, based on the analysis result after the change state has been set by the state setting unit 19. An example thereof is illustrated in the accompanying drawings.
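One way to map the pressure change after the zoom state is set to a zoom factor is sketched below. The linear mapping, sensitivity, and lower bound are assumptions for illustration; the embodiment only specifies that the zoom is driven by the pressure change while the region stays fixed:

```python
def zoom_factor(base_pressure, current_pressure, sensitivity=2.0):
    """Map the pressure change after the zoom state is set to a zoom
    factor: pressing harder zooms in, easing off zooms out.  The region
    of interest itself stays fixed; only its displayed substance scales."""
    return max(0.1, 1.0 + sensitivity * (current_pressure - base_pressure))
```
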
The storage 27 stores content.
The display controller 29 acquires content from the storage 27, and displays the acquired content on the display 31. When the substance displayed in the region of interest of the content is changed by the change unit 25, the display controller 29 displays the changed substance in the region of interest.
First, the acquisition unit 13 acquires a stroke that is input by the input unit 11 (the point position, the pressure acting on the point position, the tilt angle, and the azimuth angle) (step S101).
Subsequently, the analyzer 15 analyzes the stroke acquired by the acquisition unit 13 (step S103).
Subsequently, the region setting unit 17 sets a region of interest on the display 31, based on the analysis result of the analyzer 15 (step S105).
First, the acquisition unit 13 acquires a stroke that is input by the input unit 11 (the point position, the pressure acting on the point position, the tilt angle, and the azimuth angle) (step S201).
Subsequently, the analyzer 15 analyzes the stroke acquired by the acquisition unit 13 (step S203).
Subsequently, the state setting unit 19 sets a region of interest to be in the zoom state, based on the analysis result of the analyzer 15 (step S205).
Subsequently, the notification controller 21 causes the notification unit 23 to notify that the region of interest has been set to be in the zoom state by the state setting unit 19 (step S207).
First, the acquisition unit 13 acquires a stroke that is input by the input unit 11 (the point position, the pressure acting on the point position, the tilt angle, and the azimuth angle) (step S301).
Subsequently, the analyzer 15 analyzes the stroke acquired by the acquisition unit 13 (step S303).
Subsequently, the change unit 25 zooms the substance displayed in the region of interest, of the content displayed on the display 31, while the region of interest is fixed, based on the analysis result (step S305).
Subsequently, when the substance displayed in the region of interest is zoomed by the change unit 25, the display controller 29 displays the zoomed substance in the region of interest (step S307).
As described above, according to the first embodiment, the substance displayed in the region of interest, of the content, is changed while the region of interest is fixed. Accordingly, the substance of the content displayed in the region of interest can be changed without reducing the visibility outside the region of interest. In particular, even when the substance displayed in the region of interest is zoomed in, the region of interest remains fixed, so that the area outside the region of interest is not hidden.
Further, according to the first embodiment, when the substance displayed in the region of interest is zoomed out, the substance outside the region of interest is also zoomed out. This is suitable for zooming the region of interest while keeping an overview of the substance outside the region of interest.
Furthermore, according to the first embodiment, since the substance displayed in the region of interest is changed based on the change of the pressure (in particular, based on the change after the pressure has stabilized), operating the input unit for zooming also becomes easier.
In a second embodiment, an example of inputting a stroke in a region of interest will be described. Hereinafter, a difference from the first embodiment will be mainly described. Constituents having the same functions as in the first embodiment are assigned with names and reference numerals similar to those in the first embodiment, and a description thereof will be omitted.
The acquisition unit 13 acquires a stroke that is input from the input unit 11 to a region of interest.
The analyzer 115 analyzes whether or not the stroke acquired by the acquisition unit 13 is a stroke input to the region of interest.
Specifically, the analyzer 115 determines whether the input stroke is a stroke for operation or a stroke for writing, based on the trace, the end-point positions, the circumscribed rectangle, and the like of the input stroke. For example, the analyzer 115 can determine that the input stroke is a stroke for operation when the trace from one end point to the other falls within a certain range, or that it is a stroke for the operation of setting a region of interest when the trace from one end point to the other constitutes a closed loop. When any other stroke, that is, a stroke other than the strokes for operation, is input, the analyzer 115 determines that the input stroke is a stroke for writing.
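The operation-versus-writing decision above can be sketched as follows; the threshold value and function name are illustrative assumptions, not part of the embodiment:

```python
def classify_stroke(stroke, range_threshold=3.0):
    """Classify a stroke as 'operation' when its trace stays within a
    small range or its end points (nearly) meet in a closed loop;
    otherwise classify it as 'writing'."""
    xs = [x for x, _ in stroke]
    ys = [y for _, y in stroke]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    if width <= range_threshold and height <= range_threshold:
        return "operation"        # trace falls within a certain range
    (sx, sy), (ex, ey) = stroke[0], stroke[-1]
    if ((ex - sx) ** 2 + (ey - sy) ** 2) ** 0.5 <= range_threshold:
        return "operation"        # end points meet: closed-loop trace
    return "writing"              # any other stroke is for writing
```
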
The display controller 129 further displays a stroke acquired by the acquisition unit 13 in the region of interest. Specifically, when the analyzer 115 determines that the input stroke is a stroke input to the region of interest, the display controller 129 further displays the stroke in the region of interest.
When a text stroke 161 for writing is input in the region of interest 52, the display controller 129 displays the text stroke 161 in the region of interest 52.
As above, according to the second embodiment, with the same input unit, the change of the substance of content displayed in a region of interest and the writing to the region of interest can be achieved.
In the above-mentioned embodiments, a change along a spatial axis, that is, zooming, has been described as an example of the change state. However, the change state may be a change along a temporal axis (past, present, future (estimated)). In this case, the change unit 25 may change, along the temporal axis, the substance displayed in the region of interest, of the content, while the region of interest is fixed, based on the analysis result after the change state has been set by the state setting unit 19.
Hardware Configuration
A program to be executed in the display apparatus according to each of the embodiments and the modification described above is provided by being previously incorporated into a ROM or the like.
Further, a program to be executed in the display apparatus according to each of the embodiments and the modification described above may be provided by being stored in a storage medium that can be read by a computer in a file of an installable format or an executable format. Examples of such a storage medium may include a CD-ROM, a CD-R, a memory card, a DVD and a flexible disk (FD).
Furthermore, a program to be executed in the display apparatus according to each of the embodiments and the modification described above may be provided by being stored on a computer connected to a network such as the Internet and being downloaded via a network. A program to be executed in the display apparatus according to each of the embodiments and the modification described above may be provided or distributed via a network such as the Internet.
A program to be executed in the display apparatus according to each of the embodiments and the modification described above has a module structure for achieving the above-described units on a computer. As actual hardware, for example, the controller 901 retrieves a program from the external storage device 903 to the storage device 902, and executes the retrieved program, thereby to achieve the above-described units on a computer.
Alternatively, the functions of the display apparatus according to each of the embodiments and the modification described above may be distributed among and executed by a plurality of apparatuses, as illustrated in the accompanying drawings.
As described above, according to each of the embodiments and the modification described above, the substance of the content displayed in the region of interest can be changed without reducing the visibility of the outside of the region of interest.
For example, the steps in the flow charts of the above-described embodiments may be changed in execution order, may be executed simultaneously in plural, or may be executed in a different order for each implementation, as long as the nature of the steps is not violated.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.