This application claims priority to European Patent Application No. EP 20155164.5 filed Feb. 3, 2020, the disclosure of which is incorporated herein by reference in its entirety and for all purposes.
The present invention relates to systems, methods and arrangements for selecting objects, and in particular for selecting objects on a remote display using a user electronic device.
As more and more content, trade, and even social interaction move into the digital arena, many new applications are being developed to enhance the user experience. Furthermore, as we move towards digital solutions, trust issues will become increasingly important. When a user interacts with a software solution, there is always a risk that the software is programmed to deliver content and solutions that are biased towards the expectations and benefit of the content provider.
This is, for instance, a highly relevant issue in areas that depend on some kind of random generator; there is often a risk that these types of solutions will be perceived as biased towards the service provider's benefit. One such technology area can be seen in the gaming industry, such as online casinos providing randomized games such as roulette, card games, craps games, and so on. For this purpose, parts of the online gaming industry combine digital and real-world experiences by filming actual casino games, with real people acting as service providers at the casino game tables, and providing the user with the possibility to interact with real-life games digitally and remotely. Thus, the user is provided with a gaming experience as if he or she were present in the casino, with the same randomness as a live casino experience, from home or on a mobile user device.
The same holds true for other areas where users interact with the digital world: combining digital and live real-world content can provide an enhanced experience and improved user-machine interaction. Furthermore, behavioural studies show that by gamifying tasks, users become more efficient in performing them, since the level of satisfaction and acknowledgement increases. Thus, there exists a need for improving human-machine interactions and increasing trust in digital solutions.
It is therefore an object to obviate at least some of the above disadvantages and provide improved devices and methods for improving the user experience.
One aspect of the present invention is a method as defined in independent claim 1. Other aspects of the invention are an electronic device, a computer readable storage medium, and a system as defined in independent claims 13 to 15, respectively. Further aspects of the invention are the subject of the dependent claims. Any reference throughout this disclosure to an embodiment may point to alternative aspects relating to the invention which are not necessarily embodiments encompassed by the claims, but rather examples and technical descriptions useful for understanding the invention. The scope of the present invention is defined by the claims.
This is provided in a number of embodiments, such as a method for selecting objects displayed on a remote display. The method comprises, at a user electronic device having one or more processors, at least one memory, a user display, and at least one user input device providing user input coordinates and events: displaying a video stream of the remote display in a remote display window on the user display; detecting a user interface input coordinate and, if the user input coordinate is located in the remote display window, displaying a user selection affordance comprising a focus area overlaid over the video stream; obtaining a position of the focus area in relation to the video stream area; displaying in the focus area a virtual rendering of the remote display corresponding to the area under the focus area; receiving a user selection event from the user input device; determining the current location of the focus area in relation to the video stream; and, if an object is located at the current location of the focus area, selecting the object displayed at the current location of the video stream.
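By way of a non-limiting illustration, the input-handling part of these steps may be sketched in Python as follows; all identifiers (`inside`, `on_pointer_move`, the `state` dictionary) are hypothetical and form no part of the claims:

```python
# Illustrative sketch only; names and data structures are assumptions.

def inside(window, x, y):
    """Return True if an input coordinate falls inside the remote
    display window, given as (left, top, width, height)."""
    wx, wy, ww, wh = window
    return wx <= x < wx + ww and wy <= y < wy + wh

def on_pointer_move(window, x, y, state):
    """Show the focus area (selection affordance) only while the user
    input coordinate is over the remote display window, and track its
    position relative to the video stream."""
    if inside(window, x, y):
        state["focus_visible"] = True
        # Store the focus position relative to the window origin.
        state["focus_pos"] = (x - window[0], y - window[1])
    else:
        state["focus_visible"] = False
    return state
```

The selection affordance is thus only shown while the user input coordinate lies within the remote display window, and its position is tracked relative to the video stream.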
The focus area may be depicted as a virtual sniper scope. Furthermore, the focus area may display an enlarged portion of the remote display corresponding to the location of the focus area, i.e. providing a zoom functionality.
In the method, the electronic device may be arranged to display a video stream that has been obtained by recording the remote display using an external video recording device.
The objects may be stationary or non-stationary in the video stream during the selection period or before the selection period.
The method may further comprise a step of presenting a reward when an object has been selected. Presenting the reward may comprise changing the symbol of the object to alert the user of the selection.
The method may further comprise receiving information about all objects on the remote screen from the server device, wherein the information about all objects comprises at least one of a position, a type of object, and a reward associated with the object.
The method may further comprise steps, in the user electronic device, of analysing the video stream, determining image data related to the current position of the focus area, and creating the virtual rendering of the remote display corresponding to the area under the focus area.
Another aspect of the present invention is provided, an electronic device comprising a display, at least one user input device, one or more processors, at least one memory, and one or more programs, stored in the memory, to be executed by the one or more processors, the one or more programs including instruction sets for performing the method as defined in claims 1 to 12.
Yet another aspect of the present invention is provided, a computer readable storage medium storing one or more programs, the one or more programs comprising instruction sets for performing the method as defined in claims 1 to 12.
Furthermore, a system is provided for selecting objects on a display at a remote location using a user electronic device. The system comprises a remote display presenting objects to be selected, a camera recording a video stream of the remote display, a server obtaining the video stream and providing the video stream to user electronic devices, and a user electronic device comprising one or more processors, at least one memory, a communication interface, a display, and at least one user input device. The processor of the user electronic device is arranged to operate instruction sets stored in the memory for displaying the video stream and detecting user input events from the at least one user input device, and is further arranged to operate the method as defined in claims 1 to 12.
The proposed solution makes it possible to achieve a more efficient human-machine interaction. It also provides an efficient way of interacting with the user on all types of devices, for instance by making efficient use of a user electronic device display. By providing an efficient interaction interface, the user will be less prone to making mistakes and will operate the electronic device more efficiently. This in turn reduces battery consumption. Furthermore, the proposed solution provides a way of enhancing the overall efficiency in solving particular tasks by gamifying them.
In the following the invention will be described in a non-limiting way and in more detail with reference to exemplary embodiments illustrated in the enclosed drawings, in which:
The proposed solution is designed to be flexible so that it can adapt to any location of stream elements within the stream itself. UI elements are moved, scaled and transformed using configuration variables that are specific to the particular video stream produced by a camera in a physical studio located remotely from the user device. The analysis and any transformations of the video stream may be executed by instruction sets in the processor of the user electronic device. The user electronic device may perform all or some of the calculations on the user device to scale the video stream up or down and to relocate the UI elements (focus area) depending on the user device or game stage.
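As a non-limiting sketch of such a configuration-driven transformation, a window-relative input coordinate may be mapped to native video-stream coordinates, compensating for how the stream is scaled on the user device; the function name and parameters below are illustrative assumptions, not part of the claims:

```python
def to_stream_coords(x, y, window, stream_size):
    """Map a display coordinate to native video-stream coordinates.

    window      -- (left, top, width, height) of the remote display
                   window as shown on the user display
    stream_size -- (width, height) of the video stream in native pixels
    """
    wx, wy, ww, wh = window
    sw, sh = stream_size
    # Linear rescaling from window space to stream space.
    return ((x - wx) * sw / ww, (y - wy) * sh / wh)
```

The inverse mapping (stream to window coordinates) would be used when relocating UI elements over the scaled stream.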
The server device 103 is shown in the drawings.
The processors used in the user electronic device and server device may for instance comprise microprocessors such as a central processing unit (CPU), application specific integrated circuits (ASIC), digital signal processors (DSP), field programmable gate arrays (FPGA), graphics processing units (GPU), or any other processing device for running instruction sets for operating different software functions, or a combination of these processors. The memory may be of a volatile or non-volatile type and of a transitory or non-transitory type. For instance, the memory may comprise a random access memory (RAM) of any suitable type, solid state memory types, flash memory, or magnetic disk storage devices. Access to memory by other components of the device, such as the CPUs, is optionally controlled by a memory controller (not shown). A peripherals interface of the user electronic device can be used to couple input and output peripherals of the device to the CPUs 201 and memory 202. The one or more processors may run or execute various software programs and/or sets of instructions stored in memory to perform various functions for the device 101, 103 and to process and/or store data related to the operation of the device(s).
In some embodiments, the user electronic device is a desktop computer. In some embodiments, the device is portable (e.g., a notebook computer, tablet computer, personal digital assistant (PDA), or handheld device such as a mobile phone or smartphone). In some embodiments, the device is a personal electronic device (e.g., a wearable electronic device, such as a watch). In some embodiments, the device has a touchpad. In some embodiments, the device has a touch-sensitive display (also known as a “touch screen” or “touch-screen display”). In some embodiments, the device has a graphical user interface (GUI), one or more processors, memory and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions.
The user electronic device may be wirelessly connected to a network 102. The wireless communication optionally uses any of a plurality of communications standards, protocols and technologies, including but not limited to ETSI based cellular communication standards such as the Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution-Data Only (EV-DO), HSPA, HSPA+, Dual-Cell HSDPA (DC-HSDPA), long term evolution (LTE), 5G and NR protocols, near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, IEEE based communication protocols such as Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), voice over Internet Protocol (VoIP), or WiMAX. It should be noted that other communication protocols may be used, including communication protocols not yet developed. In the case of a wired communication link, the communication protocol may for instance be an Ethernet link, an optical fibre, or another suitable physical link.
The display may be divided into different areas for different types of interaction with the user, some parts may be for input from the user and other parts may be for outputting visual information to the user. Some areas of the display may be combined for input and output. The pointer may change appearance and functions depending on the location on the display and in which part of the display the pointer currently is located.
The virtual rendering may be obtained in the user electronic device by analysing the video stream and keeping track of the position of the user pointer in relation to the video stream. For instance, the electronic device may determine a specific x and y coordinate of the user pointer in the remote display window and then determine image data of the video stream for the same coordinate and for an area around that coordinate corresponding to the focus area/pointer area. The electronic device then calculates a virtual rendering of the video stream data of that area and displays it in the focus area of the sniper scope. The user electronic device can enlarge, i.e. zoom into, the area and show an enlarged image in the focus area of the sniper scope.
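A minimal, hypothetical sketch of how the zoomed source region under the sniper scope could be computed is given below; the identifiers and the circular-scope model are illustrative assumptions, not part of the claims:

```python
def scope_crop(frame_w, frame_h, cx, cy, scope_r, zoom=2.0):
    """Compute the source rectangle (left, top, width, height) of the
    video frame to render, enlarged, inside the sniper-scope focus area.

    (cx, cy) -- pointer position in stream coordinates
    scope_r  -- radius of the focus area on screen, in stream pixels
    zoom     -- enlargement factor; a smaller source region rendered
                into the same scope area produces the zoom effect
    """
    src_r = scope_r / zoom
    # Clamp so the crop never leaves the frame boundaries.
    left = min(max(cx - src_r, 0), frame_w - 2 * src_r)
    top = min(max(cy - src_r, 0), frame_h - 2 * src_r)
    return (left, top, 2 * src_r, 2 * src_r)
```

The returned rectangle would then be cropped from the current video frame and scaled up to fill the focus area.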
Successfully selecting an object may trigger a signal response internally in the user electronic device or send information to the server device; the signal may include information about which of the objects was selected, and the user device or server device may in turn trigger a function in response. The server may provide information that a particular object has been successfully selected to the remote display control device, which in turn will trigger a response on the remote display, e.g. by changing the appearance of the selected object. The user electronic device may be provided with information about the objects on the remote display from the server device, e.g. the type of object, the location of each object in coordinates of the remote display, any reward associated with each object, and so on. This information may be used for the virtual rendering of the data in the sniper scope.
For instance, the system may be used for playing a game or performing a task where the user is to select a cash prize or some other type of reward.
In one exemplary embodiment, the user is playing a game in a web browser or a stand-alone application on the user electronic device, and if the user wins a particular game or successfully completes a task, the user may be given the opportunity to select an object 405 with an undisclosed prize/reward associated with it. The cash/reward can be randomly selected using a random number generator (RNG) operated in the server device 103 or some other device 110 on the operator side of the system. If the user successfully aims at and hits an object, by aiming using the sniper scope and inputting a user event at the correct location of the display, such as tapping the touch-sensitive display or clicking a mouse button, that particular object is selected and the user wins a prize.
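A non-limiting sketch of the server-side prize randomization is shown below, using Python's standard `random` module as a stand-in for the RNG; the function name and data shapes are illustrative assumptions:

```python
import random

def assign_rewards(objects, prizes, rng=None):
    """Randomly pair each selectable object with an undisclosed prize
    using a (server-side) random number generator."""
    if len(objects) != len(prizes):
        raise ValueError("need exactly one prize per object")
    rng = rng or random.Random()
    shuffled = list(prizes)
    rng.shuffle(shuffled)  # the pairing, not the prize pool, is random
    return dict(zip(objects, shuffled))
```

A seeded `random.Random` instance is accepted so that, for auditing, a given seed reproduces the same object-to-prize mapping.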
The objects in the remote display can be made non-stationary; for instance, they may float around in a random manner or in a systematic manner (for instance moving back and forth between two positions for each line of objects) while the user tries to select the objects, thus further enhancing the user experience by providing an increased level of difficulty in selecting the objects. The level of movement can optionally be related to how successfully the previous task was completed: for instance, if the previous task was completed with high success, the movement can be slower, increasing the user's chance of hitting or successfully selecting an object, whereas if the previous task was completed with lower success, the movement can be more rapid and/or random to reflect this outcome, thus decreasing the user's chance of hitting or selecting an object.
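As a hypothetical illustration of this coupling, the object movement speed could be scaled inversely with the success of the previous task; the function name and the scaling factors below are illustrative assumptions, not part of the claims:

```python
def movement_speed(base_speed, success_ratio, min_factor=0.5, max_factor=2.0):
    """Scale object movement speed inversely with how well the previous
    task was completed: high success => slower, easier-to-hit objects,
    low success => faster, harder-to-hit objects."""
    success_ratio = min(max(success_ratio, 0.0), 1.0)  # clamp to [0, 1]
    # Linear interpolation from max_factor (no success) to min_factor
    # (full success).
    factor = max_factor - (max_factor - min_factor) * success_ratio
    return base_speed * factor
```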
In one embodiment, the user is briefly presented with the cash or reward symbols for each object, and then the symbols are changed to anonymous symbols, e.g. ducks, rabbits, cacti, and stars, as shown in the drawings.
The user electronic device is arranged to perform the proposed solution as method steps in instruction sets and software code executed in the processor. In a first step, the device is arranged to display 501 a video stream of the remote display in a remote display window 410 on the user electronic device display 402. It should be noted that the video stream as shown in the user display may be arranged to cover an entire remote display or only a part of the remote display, and the user electronic device may be arranged to zoom and pan over the remote display.
The user electronic device is furthermore arranged to detect 502 a user interface input coordinate; for instance, a finger moving on a touch-sensitive display or a mouse moving in the display 402 will provide continuous coordinates of the current position of the pointer associated with the input coordinate. If the user input coordinate is located in the remote display window, the device displays a user selection affordance comprising a focus area overlaid over the video stream and following the movement of the user input coordinates. The focus area may optionally be designed with a virtual sniper scope-like function and appearance.
The device obtains 503 a position of the focus area, e.g. in the form of a sniper scope, in relation to the video stream area, i.e. it synchronizes the position of the pointer with the video stream, and in particular with the part of the video stream that covers the remote display. Synchronization may be performed, for instance, by analysing the video stream data and identifying objects and locations, or by (continuously) obtaining information from the server relating to the types and locations of objects. The device may continuously obtain positions of the focus area as the user moves the pointer (focus area) over time. If the pointer is moved outside or inside the remote display window, the appearance of the pointer may change; for instance, when moving into the remote display window the appearance may change into the sniper scope design, and when moving out of the remote display window the pointer may change into a default design such as an arrow or similar appearance.
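By way of a non-limiting example of the second synchronization variant, server-supplied object positions (given in remote-display coordinates) may be projected onto the region of the video stream showing that display; the function below is an illustrative sketch with assumed names and data shapes:

```python
def project_objects(objects, remote_size, stream_rect):
    """Project object positions from remote-display coordinates onto
    the stream region showing that display.

    objects     -- dicts with at least "x" and "y" in remote-display space
    remote_size -- (width, height) of the remote display
    stream_rect -- (left, top, width, height) of the region of the
                   stream covering the remote display
    """
    rw, rh = remote_size
    sx, sy, sw, sh = stream_rect
    return [
        # Keep any extra keys (id, type, reward, ...) untouched.
        {**o, "x": sx + o["x"] * sw / rw, "y": sy + o["y"] * sh / rh}
        for o in objects
    ]
```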
The device further displays 504 in the focus area a virtual rendering of the remote display corresponding to the area under the focus area. With such a virtual rendering, it is possible to provide, in the focus area, an enlarged rendering, thus mimicking a zoom functionality and a telescopic appearance that enhances the user's ability to aim. Furthermore, virtually rendering the remote screen in the focus area makes it possible to provide the video stream to a plurality of users, where each user will only see what he or she selects. In relation to step 504, the user electronic device may optionally be arranged to analyse 504′ the video stream, determine 504″ image data related to the current position of the focus area, and create 504′″ the virtual rendering of the remote display corresponding to the area under the focus area.
During operation, the user electronic device receives 505 a user selection event from the user input device. Such a user selection event comprises, for instance but not limited to, a hard press on the touch-sensitive display, a quick tap on the touch-sensitive display, a click on a mouse button, a tap on a trackpad, a key stroke on a keyboard, and so on. The user may be provided with a time period during which it is possible to select an object; this time period may be illustrated in the user interface by some type of time period indicator 730.
When detecting a user selection event in the remote display window, the electronic device determines 506 the current location of the focus area in relation to the video stream, and if an object is located at the current location of the focus area, it selects 507 the object displayed at the current location of the video stream and focus area. In some embodiments, the objects are non-stationary in the video stream in order to increase the difficulty of selecting an object, providing a richer user experience.
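A minimal, hypothetical sketch of the hit test performed at the moment of the selection event is given below; the circular focus-area model, the default radius, and all names are illustrative assumptions, not part of the claims:

```python
def hit_test(focus_x, focus_y, objects, radius=20.0):
    """Return the object currently displayed under the focus area at
    the moment of the selection event, or None if the user missed.
    Objects and the focus position share the same coordinate space."""
    for obj in objects:
        dx, dy = obj["x"] - focus_x, obj["y"] - focus_y
        # Compare squared distances to avoid a square root.
        if dx * dx + dy * dy <= radius * radius:
            return obj
    return None
```

For non-stationary objects, the object positions passed in would be those valid at the timestamp of the selection event.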
Optionally, if an object has been successfully selected, the user may be presented 508 with a reward. This can be done by changing the appearance of the object to a symbol representing the reward; for instance, for a cash reward, the amount won may be shown in the symbol.
In accordance with some embodiments, a computer readable storage medium has stored therein instructions which when executed by an electronic device with a display, and a user input device may cause the device to perform or cause performance of the operations of any of the methods described herein.
It should be understood that at least parts of the solution may be arranged at the server side of the system; for instance, the user electronic device may be arranged to only provide a current location of the pointer relative to the remote display, and the server may be arranged to determine whether an object has been successfully selected and to change the appearance of the object as discussed above. This also applies to other functionality, such as the virtual rendering of the area in the focus area, and so on.
To further increase the human-machine interaction, the remote display can be changed so that all the symbols in the display are shown as grey-scale symbols during the selection period, whereas they are otherwise shown in colour. During the selection period, the rendered image in the sniper scope can show the symbols in colour, reducing the cognitive burden on the user and enhancing the human-machine interaction.
The user electronic device may further be arranged to determine the selected cash multiplier and send this information to the server and/or to update a cash wallet locally. It should be noted that other types of rewards than cash may be used, for instance product rewards, extra time for playing and so on.
The object selection solution presented herein may be used in other areas than casino gaming solutions. Such tasks may be connected to many different areas of interest, including but not limited to repetitive computer-related tasks, responding to surveys, or other tasks that benefit from a more motivated user. By gamifying a task, the user is more interested in performing it and can potentially be more efficient in doing so. The suggested solution according to the present invention provides an effective way of gamifying tasks and an efficient human-machine interaction functionality. This may also reduce the risk of the user making mistakes when operating the user electronic device and make the user more efficient in general with respect to tasks on the device; this may reduce the computational needs of the device and reduce power consumption, increasing the time between charging in case the user device is battery operated.
User interface elements are responsible for the interaction with the user, while the video stream delivers a rich visual experience. Some elements or zones of the video stream visually match the user interface elements, which creates the feeling of something solid and whole.
The solution can be used for live game streaming, for instance as a web-based application. UI elements are shown as a second layer, overlaid above the video stream. The UI visually copies graphics shown in the video stream and allows the user to interact with it to accomplish game-related activities.
The proposed solution makes it possible to dramatically save space on the screen of the user's device by matching the UI layer over the visual (video stream) layer, which is extremely important nowadays for mobile devices. At the same time, the solution creates a virtual experience for the user by giving him or her the feeling of interacting with the physical (studio-based) elements being streamed through the video.
The proposed solution uses different techniques to make the configuration and camera installation process easy.
Preferably, having the user electronic device handle the processing of the video stream and the user affordance provides a faster and more realistic user experience.
It should be noted that the word “comprising” does not exclude the presence of other elements or steps than those listed and the words “a” or “an” preceding an element do not exclude the presence of a plurality of such elements. It should further be noted that any reference signs do not limit the scope of the claims, that the invention may be at least in part implemented by means of both hardware and software, and that several “means” or “units” may be represented by the same item of hardware.
The above mentioned and described embodiments are only given as examples and should not limit the present invention. Other solutions, uses, objectives, and functions within the scope of the invention as claimed in the appended claims should be apparent to the person skilled in the art. The scope of the present invention is defined by the claims.
CMOS Complementary metal-oxide-semiconductor
DSLR Digital Single-lens reflex (camera)