The following relates to a user interface having a plurality of display units, and to a method for positioning content in working environments having a plurality of display units.
Working environments having a plurality of display units, for example monitors or big screens, depict a wide range of content and are often used in control rooms. In their specific arrangement, the display units form a multi-monitor working environment. They may be any desired screens, big screens or projectors that receive image information from a computer and output it.
It is known to position content on the various display units by way of a computer mouse. To this end, a user moves an item of content, possibly even over a plurality of monitors, to the desired position. By way of example, the Microsoft Windows operating system allows the desktop to be expanded over a plurality of monitors and windows to be moved over the monitor boundaries.
The following is intended to create a user interface having a plurality of display units and a method for positioning content on a plurality of display units, which interface and method provide an alternative to the prior art.
An aspect relates to a user interface having a plurality of display units. The user interface has an operator display unit and at least one processor, which is programmed to output a graphical user interface on the operator display unit, wherein the graphical user interface contains a number of windows. The user interface furthermore has at least one further display unit, which is connected to the processor, and is characterized in that the processor is programmed to select a window to be distributed from the number of windows depending on a first user action on the graphical user interface, select a display unit from the further display units depending on a second user action on the graphical user interface, and output the selected window to be distributed on the selected display unit.
In the method for positioning content on a plurality of display units, at least one processor displays a graphical user interface on an operator display unit, which graphical user interface depicts a number of windows that are able to be positioned on at least one further display unit. The processor selects a window to be distributed from the number of windows on the graphical user interface on the basis of a first user action. It then selects a display unit from the further display units on the basis of a second user action on the graphical user interface. It then outputs the selected window to be distributed on the selected display unit.
This may involve a single processor. The processor may for example execute an operating system that provides the graphical user interface. The processor, which is for example a microprocessor of a computer, may however be supported in the execution of the method by further processors to any desired extent. For example, the processor is part of a single-chip system that contains at least one graphics processor. Likewise, the computer system in which the processor works may have one or more separate graphics cards. The graphics processors or graphics cards may in this case take over tasks in outputting the graphical user interface and controlling the display units, thereby relieving the processor. The same also applies to the embodiments and developments of the invention.
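By way of illustration only, the two-step sequence of the method may be sketched in a few lines of Python; the names Window, DisplayUnit and Distributor below are hypothetical and do not stem from the description:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Window:
    title: str


@dataclass
class DisplayUnit:
    name: str
    content: Optional[Window] = None


class Distributor:
    """Carries out the two selections and the final output step."""

    def __init__(self, displays: List[DisplayUnit]) -> None:
        self.displays = displays
        self.pending: Optional[Window] = None

    def first_user_action(self, window: Window) -> None:
        # Step 1: select the window to be distributed.
        self.pending = window

    def second_user_action(self, display: DisplayUnit) -> None:
        # Step 2: select the target display unit and output the window there.
        if self.pending is not None:
            display.content = self.pending
            self.pending = None


big_screen = DisplayUnit("big screen")
distributor = Distributor([big_screen])
distributor.first_user_action(Window("window 15"))
distributor.second_user_action(big_screen)
print(big_screen.content)  # Window(title='window 15')
```

The essential point of the sketch is that the second user action completes the operation: no window is output until a target display unit has been selected.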
The advantages mentioned below do not necessarily have to be achieved through the subject matter of the independent patent claims. Rather, they may also be advantages that are achieved only through individual embodiments, variants or developments.
The user interface and the method allow a user to select the window to be distributed on the graphical user interface by way of the first user action, and to select a target display unit, for example a monitor or a big screen, on which the window to be distributed is intended to be depicted, by way of the second user action.
The user interface and the method thus offer a user the advantage of being able to position content, from the operator display unit, onto any desired display units of a multi-monitor working environment, including any big screens that may be present. In this case, the user no longer has to devote his attention to the moving process, for example by keeping a mouse button pressed over a long path in order to move a window over several monitors, but is able to concentrate entirely on positioning the window on the display units of the multi-monitor working environment. The positioning is in this case accurate and intuitive to operate. Positioning is also possible when the target display unit is not in the preferred field of view of the user or lies completely outside his field of vision. The positioning is ergonomic, as the user is able to work on the operator display unit in his preferred field of view.
Furthermore, the user interface and the method for the first time allow such positioning by touch input, even though the target for the positioning lies outside the operator display unit. In the case of touchscreens, no input device such as a mouse is provided for moving the content, and it is therefore not possible to use touch gestures to work or to position content beyond the physical boundaries of the touchscreen.
However, since all of the user actions take place on the operator display unit, the user interface and the method also allow positioning by touch input. They are therefore suitable for various input methods such as touch or mouse operation, and are independent of the specific input medium, so that other input methods, for example keypad inputs, are also able to be used.
According to one embodiment, the at least one processor outputs a schematic depiction of the spatial arrangement of the further display units on the graphical user interface, wherein each of the further display units in the schematic depiction is able to be selected by way of the second user action.
The schematic depiction is a scaled-down depiction or a schematic portrayal of a multi-monitor working environment that is formed from the display units of the user interface. The schematic depiction in this case forms an operating element of the graphical user interface, which operating element is able to be actuated for example by a mouse or by touch.
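One possible model of such a schematic depiction, assuming hypothetical names (Rect, Schematic) and invented tile coordinates, is a set of miniature tiles that are hit-tested against the coordinates of the second user action:

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass(frozen=True)
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h


class Schematic:
    """Maps each further display unit to a miniature tile on the operator display."""

    def __init__(self, tiles: Dict[str, Rect]) -> None:
        # display-unit name -> tile rectangle in operator-display coordinates
        self.tiles = tiles

    def display_at(self, px: float, py: float) -> Optional[str]:
        # Returns the display unit whose tile the user action hit, if any.
        for name, rect in self.tiles.items():
            if rect.contains(px, py):
                return name
        return None


schematic = Schematic({
    "display unit 2": Rect(0, 0, 160, 90),
    "display unit 3": Rect(170, 0, 160, 90),
})
print(schematic.display_at(200, 40))  # display unit 3
```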
According to one variant of this embodiment, the first user action forms the start and the second user action forms the end of a drag-and-drop operation, which is able to be performed in particular by mouse operation or, if the operator display unit is a touchscreen, by touch. As an alternative, the second user action forms a drag-and-drop operation that is able to be executed only after the first user action has concluded.
This variant makes it possible to drag the item of content to be distributed by drag and drop onto one of the display units in the schematic depiction.
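A sketch of this drag-and-drop variant, again with purely illustrative names and coordinates (the event handlers below do not belong to any actual GUI toolkit), might look as follows:

```python
from typing import Callable, Dict, Optional, Tuple

# tile name -> (x, y, width, height) of its miniature on the operator display
Tiles = Dict[str, Tuple[float, float, float, float]]


class DragDrop:
    def __init__(self, tiles: Tiles, output: Callable[[str, str], None]) -> None:
        self.tiles = tiles
        self.output = output               # called as output(window, display)
        self.dragged: Optional[str] = None

    def on_press(self, window: str) -> None:
        # First user action: pressing on a window starts the drag.
        self.dragged = window

    def on_release(self, x: float, y: float) -> None:
        # Second user action: the drop position selects the display unit.
        if self.dragged is None:
            return
        for name, (tx, ty, tw, th) in self.tiles.items():
            if tx <= x < tx + tw and ty <= y < ty + th:
                self.output(self.dragged, name)
                break
        self.dragged = None


dnd = DragDrop({"display unit 3": (170, 0, 160, 90)},
               lambda w, d: print(f"{w!r} -> {d}"))
dnd.on_press("window 15")
dnd.on_release(200, 40)  # 'window 15' -> display unit 3
```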
According to one development, the processor allows successive selection of a plurality of the further display units during the second user action, wherein the respectively last selected display unit is the selected display unit. During the second user action, the processor outputs visual feedback on the respectively currently selected display unit, which temporarily highlights said display unit.
By way of example, in the context of the second user action, the user touches one of the further display units in the schematic depiction on the operator display unit (which is configured as a touchscreen in this case). While his finger is still touching the touchscreen, the currently selected display unit is visually highlighted, for example by a colored marking or a change in brightness or contrast. This visual feedback is effected both in the schematic depiction and on the actual selected display unit. On the basis of this visual feedback, the user is able to move his fingertip on the operator display unit, in the context of the second user action, until it comes to lie on another display unit in the schematic depiction. The visual feedback is then output on the newly selected display unit. In the same way, the user could also sweep over the display units in the schematic depiction with the mouse button pressed. The final selection results from the display unit selected last when the user releases the mouse button or lifts his finger from the operator display unit.
Furthermore, the window to be distributed may also be dragged onto the schematic depiction by way of a drag-and-drop operation, wherein here too both touch operation and mouse operation are able to be implemented. In this case, the visual feedback may be displayed successively before the end of the drag-and-drop operation, i.e. before the release. The visual feedback is effected without any noticeable time delay and follows the movement of the second user action. It supports collaborative working, since all of the users of the multi-monitor working environment are able to follow the second user action and to comment on and support the selection. Furthermore, the visual feedback also allows the user himself to direct his view to the multi-monitor working environment, i.e. to the actual further display units, during the second user action.
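The successive selection with live feedback can be modeled as a small state machine: every movement event may change the currently selected display unit, the highlight follows it, and releasing confirms whichever unit was selected last. The class and method names below are assumptions for illustration; highlight and unhighlight stand in for whatever image-parameter change the real system applies:

```python
from typing import Optional


class FeedbackSelector:
    """Tracks the display unit currently selected during the second user action."""

    def __init__(self) -> None:
        self.current: Optional[str] = None

    def on_move(self, hovered: Optional[str]) -> None:
        # Called continuously while the finger or pressed mouse button moves.
        if hovered == self.current:
            return
        if self.current is not None:
            self.unhighlight(self.current)
        if hovered is not None:
            self.highlight(hovered)  # feedback on the miniature and the real display
        self.current = hovered

    def on_release(self) -> Optional[str]:
        # Lifting the finger or releasing the button finalizes the selection.
        final = self.current
        if final is not None:
            self.unhighlight(final)
        self.current = None
        return final

    def highlight(self, name: str) -> None:
        print(f"highlight {name}")  # e.g. colored marking, brightness change

    def unhighlight(self, name: str) -> None:
        print(f"remove highlight from {name}")


selector = FeedbackSelector()
selector.on_move("display unit 2")  # feedback appears on display unit 2
selector.on_move("display unit 3")  # feedback jumps to display unit 3
print(selector.on_release())        # display unit 3 -- the last selected unit
```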
According to one embodiment, at least one of the further display units has a graphical user interface that is divided into a plurality of display areas. In the context of the second user action, one of the display areas is able to be selected. The selected window to be distributed is output on the selected display area.
In combination with the schematic depiction explained above, this results in a variant in which the display areas are also identified in the schematic depiction. Furthermore, in combination with the visual feedback explained above, a variant results in which the user sweeps over various display areas in the schematic depiction in the context of the second user action, wherein visual feedback is output in each case on the currently selected display area. Not all of the display units have to be divided into a plurality of display areas.
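To illustrate selectable display areas, a display unit's tile can simply be subdivided before hit-testing; the quadrant split below mirrors the example given later in the description, and all names and coordinates are again illustrative:

```python
from typing import Dict, Optional, Tuple

Rect = Tuple[float, float, float, float]  # x, y, width, height


def quadrants(rect: Rect) -> Dict[str, Rect]:
    # Splits a display unit's tile into four equally sized display areas.
    x, y, w, h = rect
    return {
        "upper left": (x, y, w / 2, h / 2),
        "upper right": (x + w / 2, y, w / 2, h / 2),
        "lower left": (x, y + h / 2, w / 2, h / 2),
        "lower right": (x + w / 2, y + h / 2, w / 2, h / 2),
    }


def area_at(areas: Dict[str, Rect], px: float, py: float) -> Optional[str]:
    # Hit-tests a point of the second user action against the display areas.
    for name, (x, y, w, h) in areas.items():
        if x <= px < x + w and y <= py < y + h:
            return name
    return None


# The tile of a display unit in the schematic depiction, split into quadrants:
areas = quadrants((0, 0, 160, 90))
print(area_at(areas, 120, 70))  # lower right
```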
In one development, before the first user action, for each window of the number of windows, an operating element is displayed on the graphical user interface, wherein each of the operating elements is able to be chosen by way of the first user action.
According to one embodiment, the schematic depiction is displayed immediately after the first user action, overlapping or adjacent to the chosen operating element.
In one development, a distribution mode is activated beforehand, following detection of an initial user action, wherein the windows from the number of windows are visually identified on the graphical user interface as able to be distributed only in the distribution mode. In one variant, the abovementioned operating elements are displayed only in the distribution mode. In a further variant, the abovementioned schematic depiction is displayed only in the distribution mode.
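A sketch of such a distribution mode, with hypothetical names and print statements standing in for actual display operations, might be:

```python
from typing import List


class DistributionMode:
    """Shows operating elements on distributable windows only while active."""

    def __init__(self, distributable_windows: List[str]) -> None:
        self.windows = distributable_windows
        self.active = False

    def toggle(self) -> None:
        # The initial user action (e.g. actuating a switch) flips the mode.
        self.active = not self.active
        for window in self.windows:
            if self.active:
                self.show_operating_element(window)
            else:
                self.hide_operating_element(window)

    def show_operating_element(self, window: str) -> None:
        print(f"show distribute handle on {window}")

    def hide_operating_element(self, window: str) -> None:
        print(f"hide distribute handle on {window}")


mode = DistributionMode(["window 15", "window 16"])
mode.toggle()  # handles appear; the standard mode stays uncluttered otherwise
mode.toggle()  # handles disappear again
```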
The distribution mode enables the windows able to be distributed to be visually highlighted and identified to the user as able to be manipulated. As a result, the windows able to be distributed in the multi-monitor working environment are indicated unambiguously, in a manner that does not additionally overload the complex graphical user interface with permanently visible operating elements. This is because the windows able to be distributed are visually highlighted only after activation of the distribution mode. The legibility of the graphical user interface in a standard mode is thus not restricted, as no additional operating elements or icons other than an operating element for the mode change have to be displayed there.
According to one embodiment, a switch is displayed on the graphical user interface, wherein the initial user action is detected when the switch is actuated.
In one development, the user interface has an electrical switch, which is arranged in the vicinity of the operator display unit. The processor is programmed to detect the initial user action on the basis of actuation of the switch.
According to one embodiment, the operator display unit is a touchscreen that is set up to detect the first user action and the second user action as touch inputs.
A computer program is recorded on a computer-readable data medium; when it is run in the processor, it executes the method.
Some of the embodiments will be described in detail, with reference to the Figures, wherein like designations denote like members.
Of course, the user interface may have further display units that are controlled and selected in the same way as described in the exemplary embodiments. In principle, each of the display units may also have a plurality of partial areas, for example quadrants, which in turn are able to be treated and selected as separate display units. Conversely, the second display unit 2 and the third display unit 3 may also be just partial areas of a big screen. These generalizations and variants apply to all of the exemplary embodiments.
The operator display unit 1, the second display unit 2 and the third display unit 3, and possibly further display units, form a multi-monitor working environment in their spatial arrangement. The second display unit 2 is in this case by way of example divided into four quadrants, which each form a display area for one of the windows 15, 16, 17, 18 on the operator display unit 1, whereas each of these windows would be depicted full-screen on the third display unit 3. Of course, such a division or a full-screen depiction may be selected for each of the display units 2, 3 depending on requirements.
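Such a per-display arrangement could be captured, purely for illustration, in a small data structure; the format below is an assumption and not the configuration mechanism of the system described:

```python
# Hypothetical layout description: the second display unit divided into four
# display areas, the third display unit used full-screen.
layout = {
    "display unit 2": {"mode": "quadrants",
                       "areas": ["upper left", "upper right",
                                 "lower left", "lower right"]},
    "display unit 3": {"mode": "full-screen",
                       "areas": ["whole screen"]},
}

for name, config in layout.items():
    print(name, "->", config["mode"], "with areas:", ", ".join(config["areas"]))
```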
Within the schematic depiction 13, the user now selects, by way of a second user action 42, the lower right-hand quadrant of the second display unit 2 as display area for the window 15 to be distributed. Visual feedback 21 is then output on the second display unit 2, which feedback for example changes a background color, a contrast, a brightness or similar image parameters of the lower right-hand quadrant or displays a marking.
All of the user actions may be executed as touch inputs if the operator display unit 1 is a touchscreen. By way of example, the operator display unit 1 could be part of a tablet computer. However, the user actions may also be performed by mouse, keypad or any other suitable input device that allows windows of the graphical user interface 11 to be chosen. These generalizations and variants apply to all of the exemplary embodiments.
The difference between a preliminary selection and a final selection becomes apparent here: as long as the second user action 42 continues, the selection remains merely preliminary and the visual feedback 21 follows the movement. The user then touches, within the schematic depiction 13, the third display unit 3 and thereby makes his final selection.
The depiction on the third display unit 3 is then replaced by the window 15 to be distributed, which the user had selected by way of the first user action 41.
The user then ends the distribution mode by way of a final user action 44, deactivating the switch 14 for the distribution mode.
The switch 14 does not necessarily have to be an element of the graphical user interface 11. It may also be configured as a physical electrical switch and for example be formed by a special button on a keypad.
It is possible in principle for the computer 9 to control the second display unit 2 and the third display unit 3 with image information directly by way of a graphics card. The result in this case is, for example, an expanded desktop whose various display units are managed and controlled by the graphics card.
Although the invention has been illustrated and described in greater detail with reference to the preferred exemplary embodiment, the invention is not limited to the examples disclosed, and further variations can be inferred by a person skilled in the art, without departing from the scope of protection of the invention.
For the sake of clarity, it is to be understood that the use of “a” or “an” throughout this application does not exclude a plurality, and “comprising” does not exclude other steps or elements.
This application claims priority to PCT Application No. PCT/EP2017/053194, having a filing date of Feb. 14, 2017, based on German Application No. 10 2016 202 694.1, having a filing date of Feb. 22, 2016, the entire contents both of which are hereby incorporated by reference.