The invention concerns in general the technical field of elevators. More particularly, the invention concerns the control of elevator systems.
Elevator systems interact with users through user interfaces allowing input and output of information between the parties. A typical example of such a user interface is an elevator calling device, such as a car operating panel (COP) or a destination operating panel (DOP). An interaction with the user interface is typically performed with a finger of the user when inputting e.g. service calls to the elevator system. In other words, the user touches an area of the user interface corresponding to her/his need, and the user interface generates an internal control signal in accordance with the input received from the user through the user interface. A typical way to implement the user interface is either a panel with one or more buttons or a touch screen. Such static implementations may be problematic e.g. for users with special needs, but also if the space in which the user interface resides is crowded and access to the panel is difficult.
Further, other ways to provide input to the elevator system have been developed. For example, the use of eye tracking technology has been introduced, e.g. in document EP 3575257 A1, for helping persons with special needs to provide input, such as service calls, to the elevator system. Eye tracking as such is of interest also to other users than those with special needs, since the technology helps to avoid physical contact between the user and the user interface, which may turn out to be necessary for health-related reasons, for example if a pandemic disease spreads in the population through contact.
Hence, there is a need to introduce solutions for mitigating the above-described drawbacks as well as for taking novel approaches in the area.
The following presents a simplified summary in order to provide a basic understanding of some aspects of various invention embodiments. The summary is not an extensive overview of the invention. It is neither intended to identify key or critical elements of the invention nor to delineate the scope of the invention.
The following summary merely presents some concepts of the invention in a simplified form as a prelude to a more detailed description of exemplifying embodiments of the invention.
An object of the invention is to present an arrangement, a method, an elevator system, and a computer program product for controlling the elevator system.
The objects of the invention are reached by an arrangement, a method, an elevator system, and a computer program product as defined by the respective independent claims.
According to a first aspect, an arrangement of an elevator system for controlling the elevator system is provided, the arrangement comprising: at least one projector device for projecting a virtual user interface; at least one gaze tracking device for capturing a number of images representing at least one eye of a person; a control unit configured to, in response to a receipt of the number of images from the gaze tracking device, perform: detect a predefined input from image data received from the gaze tracking device; detect an intersection of the virtual user interface and a gaze point, the gaze point being related to a detection of the predefined input and determined from the image data received from the gaze tracking device; generate a control signal to the elevator system in accordance with a position of the gaze point intersecting the virtual user interface.
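By way of a non-limiting illustration only, the following Python sketch outlines the kind of control flow described above for the control unit: detecting a predefined input (here a blink), relating it to a gaze point from a preceding frame, checking the intersection with an area of the virtual user interface, and deriving a command from it. All names, the data formats of GazeSample and UiArea, and the rectangular shape of the area are assumptions made for the example and do not form part of the claimed arrangement.

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

Point = Tuple[float, float]

@dataclass
class GazeSample:
    """One analysed frame from the gaze tracking device (hypothetical format)."""
    gaze_point: Point      # gaze point in the coordinate system of the virtual user interface
    eye_closed: bool       # True if a blink is visible in this frame

@dataclass
class UiArea:
    """A rectangular selectable area of the virtual user interface (assumed shape)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    command: str           # e.g. a destination floor encoded as a string

    def contains(self, p: Point) -> bool:
        return self.x_min <= p[0] <= self.x_max and self.y_min <= p[1] <= self.y_max

def detect_predefined_input(samples: Sequence[GazeSample]) -> Optional[Point]:
    """Return the gaze point related to a detected blink, taken from the preceding frame."""
    for i in range(1, len(samples)):
        if samples[i].eye_closed:
            return samples[i - 1].gaze_point
    return None

def control_step(samples: Sequence[GazeSample], areas: Sequence[UiArea]) -> Optional[str]:
    """One control-unit cycle: detect the input, find the intersected area, return its command."""
    gaze_point = detect_predefined_input(samples)
    if gaze_point is None:
        return None
    for area in areas:
        if area.contains(gaze_point):
            return area.command    # in the arrangement this would trigger a control signal
    return None

# Example: a blink while looking inside the "floor 3" area produces the command "3".
areas = [UiArea(0.0, 0.0, 1.0, 1.0, command="3")]
frames = [GazeSample((0.4, 0.6), eye_closed=False), GazeSample((0.4, 0.6), eye_closed=True)]
print(control_step(frames, areas))   # -> "3"
```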
The control unit of the arrangement may further be configured to control an operation of the at least one projector device.
Moreover, the control unit of the arrangement may be configured to perform a detection of the predefined input from the image data based on a detection, from at least one image among the number of images, of at least one of the following: a blink of the at least one eye of the person; a position of the gaze point remaining stationary, within a predefined margin, for a period of time exceeding a predefined threshold time.
For example, the control unit of the arrangement may be configured to generate the control signal to the elevator system in accordance with the position of the gaze point intersecting the virtual user interface in response to a detection that the gaze point intersects a predefined area of the virtual user interface.
The control unit of the arrangement may also be configured to determine a relation of the gaze point to the detection of the predefined input by one of the following: the gaze point corresponds to a gaze point determined from at least one of the same images from which the predefined input is detected; the gaze point corresponds to a gaze point determined from at least one image preceding the image from which the predefined input is detected.
The projector device of the arrangement may be arranged to generate the virtual user interface by projecting it to at least one of: a physical surface, air.
For example, the projector device may be arranged to project the virtual user interface to the physical surface with a light beam.
Alternatively or in addition, the projector device may be arranged to project the virtual user interface to the air by applying a photophoretic optical trapping technique.
Still further, the projector device may be arranged to project the virtual user interface to the air by controlling a foam bead with ultrasound waves to meet a light generated by the projector device.
In still further example, the projector device may be arranged to project the virtual user interface to the air by utilizing a fog screen as a projecting surface in the air.
According to a second aspect, a method for controlling an elevator system is provided, the method, performed by an apparatus, comprising: receiving, from a gaze tracking device, a number of images representing at least one eye of a person; detecting a predefined input from image data received from the gaze tracking device; detecting an intersection of a virtual user interface and a gaze point, the gaze point being related to a detection of the predefined input and determined from the image data received from the gaze tracking device; and generating a control signal to the elevator system in accordance with a position of the gaze point intersecting the virtual user interface.
Further, the method may comprise: controlling an operation of at least one projector device.
A detection of the predefined input from the image data may e.g. be performed based on a detection, from at least one image among the number of images, of at least one of the following: a blink of the at least one eye of the person; a position of the gaze point remaining stationary, within a predefined margin, for a period of time exceeding a predefined threshold time.
For example, the control signal to the elevator system may be generated in accordance with the position of the gaze point intersecting the virtual user interface in response to a detection that the gaze point intersects a predefined area of the virtual user interface.
A relation of the gaze point to the detection of the predefined input may be determined by one of the following: the gaze point corresponds to a gaze point determined from at least one of the same images from which the predefined input is detected; the gaze point corresponds to a gaze point determined from at least one image preceding the image from which the predefined input is detected.
The virtual user interface may be generated by controlling the at least one projector device to project the virtual user interface to at least one of: a physical surface, air.
According to a third aspect, an elevator system is provided, the elevator system comprising an arrangement according to the first aspect as defined above.
According to a fourth aspect, a computer program is provided, the computer program comprising computer readable program code configured to cause performing of the method according to the second aspect as defined above when said program code is run on one or more computing apparatuses.
The expression “a number of” refers herein to any positive integer starting from one, e.g. to one, two, or three.
The expression “a plurality of” refers herein to any positive integer starting from two, e.g. to two, three, or four.
Various exemplifying and non-limiting embodiments of the invention both as to constructions and to methods of operation, together with additional objects and advantages thereof, will be best understood from the following description of specific exemplifying and non-limiting embodiments when read in connection with the accompanying drawings.
The verbs “to comprise” and “to include” are used in this document as open limitations that neither exclude nor require the existence of unrecited features. The features recited in dependent claims are mutually freely combinable unless otherwise explicitly stated. Furthermore, it is to be understood that the use of “a” or “an”, i.e. a singular form, throughout this document does not exclude a plurality.
The embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
The specific examples of the invention provided in the description given below should not be construed as limiting the scope and/or the applicability of the appended claims. Lists and groups of examples provided in the description given below are not exhaustive unless otherwise explicitly stated.
The projector device 150 may be a device configured to generate an image of a user interface 160 of the elevator system 100 in accordance with control by the control unit 140. The image of the user interface 160 is referred to herein as a virtual user interface 160. The virtual user interface 160 may refer to any user interface of the elevator system 100 through which the person may interact with the elevator system as described herein. The virtual user interface 160 may e.g. represent a car operating panel (COP), a destination operating panel (DOP), or any other user interface. The virtual user interface 160 may be projected to a medium suitable for projection by the projector device 150. In accordance with an example, the projector device 150 may be such that it generates the virtual user interface 160 by projecting a predefined image, accessible by the control unit 140 or by the projector device 150 itself, on a predefined surface, such as a wall. The predefined image may e.g. be stored in data storage, such as in an internal memory of the control unit 140 or the projector device 150, wherefrom the image may be retrieved e.g. in response to a control signal generated by the control unit 140. According to another example, the virtual user interface 160 may be projected to air as a 2-dimensional surface projection or even as a 3-dimensional volume projection. The projection of the virtual user interface 160 to air may be implemented by applying so-called photophoretic optical trapping, which may be implemented by the projector device 150 e.g. under control of the control unit 140. In another example, the projecting to the air may be achieved by establishing a so-called fog screen at a desired location in the space and projecting the virtual user interface 160 on the fog screen. In a still further example, the image in a 3-dimensional volume may be generated by using ultrasound waves to control a movement of a foam bead, such as a polystyrene bead, between a plurality of speakers, and by projecting light, e.g. with LEDs, into the volume so that an image is generated when the light ray hits the bead. In some examples, the virtual user interface 160 shall be understood as a hologram of the user interface. The listed techniques for generating the virtual user interface 160 are non-limiting examples and any applicable solution may be applied in this context.
In general, the image may be generated by transmitting electromagnetic radiation with an applicable wavelength, such as a wavelength of visible light or laser light, to a selected medium. Moreover, the image, i.e. the virtual user interface 160, has a known size and shape so as to visualize one or more areas in the image, such as areas representing destination floors for selection by the person in a manner to be described. Still further, the generated virtual user interface has a known location in a space defined by a coordinate system, by means of which the areas of the virtual user interface 160 may be defined as areas, or even as volumes, in the coordinate system (cf. 2-dimensional or 3-dimensional). The information relating to the location of the virtual user interface, as well as its size and shape, may be managed by the control unit 140.
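As a non-limiting sketch of how the known location, size and shape of the virtual user interface 160 could be represented in a coordinate system managed by the control unit 140, consider the following Python example; the class name, the field names and the numeric values are illustrative assumptions only.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VirtualInterfacePose:
    """Known location, orientation and size of the projected interface in the room
    coordinate system (names and units are illustrative assumptions)."""
    origin: np.ndarray      # lower-left corner of the projection, metres in room coordinates
    u_axis: np.ndarray      # unit vector along the interface width
    v_axis: np.ndarray      # unit vector along the interface height
    width_m: float
    height_m: float

    def to_room(self, u: float, v: float) -> np.ndarray:
        """Map normalised interface coordinates (0..1, 0..1) to room coordinates."""
        return self.origin + u * self.width_m * self.u_axis + v * self.height_m * self.v_axis

# A wall-projected 0.4 m x 0.3 m interface, one metre above the floor.
pose = VirtualInterfacePose(
    origin=np.array([2.0, 0.0, 1.0]),
    u_axis=np.array([1.0, 0.0, 0.0]),
    v_axis=np.array([0.0, 0.0, 1.0]),
    width_m=0.4,
    height_m=0.3,
)
print(pose.to_room(0.5, 0.5))   # centre of the interface in room coordinates
```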
Depending on the implementation of the arrangement in the elevator system 100, the projector device 150 may be arranged to generate the image at one or more predefined locations in the space. In case a plurality of images is generated, this may be performed by a single projector device 150, if the projector device 150 is suitable for such an operation, or with a plurality of projector devices 150 controllable by the control unit 140. Alternatively or in addition, the image may be generated at one predefined location among a plurality of possible locations with a respective projector device 150, such as one belonging to the arrangement whose beam may be directed to the predefined locations, or by controlling one of the projector devices 150 to perform the operation. The location of the virtual user interface 160 may be selected based on a receipt of predefined input data, such as data from one or more sensors. For example, the building may be equipped with one or more sensors, e.g. along a path towards the elevator, and based on a detection of the person on the basis of sensor data, the control unit 140 may be configured to generate a control signal to the respective projector device(s) 150 to generate the virtual user interface 160 at one of the locations and maintain the information for further processing.
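A minimal, non-limiting sketch of selecting one projection location among a plurality of possible locations on the basis of sensor data could look as follows; the sensor identifiers, the location names and the mapping between them are hypothetical.

```python
from typing import Dict, Optional

# Hypothetical mapping from a presence sensor to the projection location it covers.
SENSOR_TO_LOCATION: Dict[str, str] = {
    "corridor_sensor": "wall_by_corridor",
    "lobby_sensor": "wall_by_lobby",
}

def select_projection_location(sensor_readings: Dict[str, bool]) -> Optional[str]:
    """Return the first location whose sensor reports a detected person, else None."""
    for sensor, person_detected in sensor_readings.items():
        if person_detected and sensor in SENSOR_TO_LOCATION:
            return SENSOR_TO_LOCATION[sensor]
    return None

# The control unit would then command the projector device serving the returned location.
print(select_projection_location({"corridor_sensor": True, "lobby_sensor": False}))
```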
The arrangement may further comprise at least one gaze tracking device 170 arranged to capture data for determining the person's gaze point (black dot referred with 180 in
In accordance with the present application area, the arrangement may be configured to operate so that the control unit 140 is arranged to determine a position of the gaze point 180 with respect to the virtual user interface 160, and any areas thereof, generated by the projector device 150. This is possible because the control unit 140 is aware of the location of the virtual user interface 160 in the space, and by receiving the data representing the gaze point 180 it is possible to estimate the position on the virtual user interface 160 at which the person stares. The estimation of the position of the gaze point 180 with respect to the virtual user interface 160 may be performed by estimating an intersection point of the gaze of the at least one eye, derived from data obtained with the gaze tracking device 170, and the virtual user interface 160 generated by projecting the image representing the user interface to the known location. In other words, the determined gaze point 180 shall comprise a number of common position points with the virtual user interface 160 when these are determined in a common coordinate system. In some examples, the gaze point 180 may correspond to an intersection of a line of sight, determined from the data obtained from the gaze tracking device 170, with a surface representing the virtual user interface 160. For the sake of completeness, it is worthwhile to mention that if the virtual user interface 160 is implemented as a 3-dimensional object, the gaze point 180 is to be determined in a volume comprising the 3-dimensional object representing the virtual user interface 160 in order to enable a detection of the selection through the virtual user interface 160.
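For the case in which the virtual user interface 160 is projected onto a planar surface, the intersection of a line of sight with that surface may be estimated with a standard ray-plane computation. The following non-limiting Python sketch illustrates the idea; the coordinate values, and the assumption that the eye position and gaze direction are available in room coordinates, are made only for the example.

```python
import numpy as np

def gaze_intersection(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersection of a line of sight with the plane of the virtual user interface.

    eye_pos, gaze_dir: eye position and gaze direction in room coordinates.
    plane_point, plane_normal: any point on the interface plane and its normal.
    Returns the intersection point, or None if the gaze is parallel to the plane
    or points away from it.
    """
    eye_pos, gaze_dir = np.asarray(eye_pos, float), np.asarray(gaze_dir, float)
    plane_point, plane_normal = np.asarray(plane_point, float), np.asarray(plane_normal, float)
    denom = np.dot(gaze_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None                       # gaze parallel to the interface plane
    t = np.dot(plane_point - eye_pos, plane_normal) / denom
    if t < 0:
        return None                       # interface is behind the person
    return eye_pos + t * gaze_dir

# Person standing 1.5 m from a wall at x = 2.0 m, looking straight at it.
print(gaze_intersection([0.5, 0.0, 1.6], [1.0, 0.0, 0.0], [2.0, 0.0, 1.0], [1.0, 0.0, 0.0]))
```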
Now, at some point of time the person is willing to provide an indication of his/her selection to the elevator system 100. This may be performed by arranging the control unit 140 to detect, from data received from the gaze tracking device 170, a predefined input from the person indicating the selection. For example, the input may be an eye blink of at least one eye detectable from the image stream received with the gaze tracking device 170. In accordance with some other example, the input may be given by staring at the same position, within an applicable margin, for a time exceeding a threshold time set for the selection. In other words, if the person keeps her/his gaze at the same position on the virtual user interface 160 over a predefined time, it may be interpreted to correspond to a selection. For example, if the virtual user interface 160 represents destination floors and it is detected that the person stares at a certain area representing a certain floor, it may be concluded by the control unit 140 that the person is willing to travel to the respective floor, and the control unit 140 may be arranged to generate a control signal to the elevator controller 130 to indicate the destination floor in the elevator system 100. In this manner, the elevator controller 130 may perform an allocation of the elevator car 110 to serve the service call given in the form of the destination call through the arrangement.
It is worthwhile to mention that the number of gaze tracking devices 170 may be selected in accordance with the implementation. For example, if the arrangement is implemented so that there are a plurality of locations into which the virtual user interface 160 may be generated, there may be a need to arrange a plurality of gaze tracking devices 170 in a plurality of positions so that it is possible to monitor the person's eyes with respect to the plurality of virtual user interfaces 160 at the different possible locations.
Next, some aspects of an example are described by referring to
The gaze tracking device 170 may perform a monitoring of a movement of the gaze and generate an image stream to the control unit 140 accordingly. At some point the person may give an input to the arrangement with at least one predefined method, such as blinking at least one eye or staring at the same point over a threshold time, and the input may be detected 230 by the control unit 140 from one or more images. In response to the receipt of the input, the control unit 140 may be arranged to determine the gaze point 180 at the time of the input, or just before the input, from the received images. For example, if the input-giving method is based on a detection of the blink of the at least one eye, the control unit 140 may be arranged to define the gaze point 180 from at least one image frame preceding the image frame from which the blink of the at least one eye is detected. According to another example, if the input mechanism is based on a detection that the gaze point 180 remains at a certain position, within applicable margins, over the threshold time, the position of the gaze point 180 may be determined from the same image frame as the last one used for the decision-making of the input, or from any previous image frame in which the position of the gaze point 180 has remained the same.
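The following non-limiting Python sketch illustrates the dwell-based input detection discussed above, i.e. detecting that the gaze point has remained at the same position, within a margin, for longer than a threshold time, and returning the gaze point related to that input. The frame interval, margin and threshold values are illustrative assumptions, not values required by the arrangement.

```python
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def detect_dwell_input(gaze_points: List[Point],
                       frame_interval_s: float = 0.033,
                       margin: float = 0.02,
                       threshold_s: float = 1.0) -> Optional[Point]:
    """Return a gaze point once it has stayed within `margin` of the start of the
    dwell for at least `threshold_s`, otherwise None."""
    if not gaze_points:
        return None
    anchor = gaze_points[0]
    dwell_frames = 0
    for p in gaze_points:
        if abs(p[0] - anchor[0]) <= margin and abs(p[1] - anchor[1]) <= margin:
            dwell_frames += 1
        else:
            anchor = p            # gaze moved: restart the dwell from the new position
            dwell_frames = 1
        if dwell_frames * frame_interval_s >= threshold_s:
            return anchor         # input detected; this is the selected gaze point
    return None

# 40 frames (about 1.3 s) at roughly the same position exceed a one-second threshold.
print(detect_dwell_input([(0.30, 0.41)] * 40))
```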
In response to the receipt of the input, the control unit 140 may be configured to detect 240 whether the gaze point 180 indicated with the input is within an area, or a volume, of the virtual user interface 160, and at which position the gaze point intersects the virtual user interface 160 and/or any sub-area thereof, if any. In other words, the aim is to determine whether the gaze point resides at such a point within the virtual user interface 160 that causes a request in the elevator system 100. An example of this is schematically illustrated in
Circle 310: (x − a1)² + (y − b1)² = r², and
Circle 320: (x − a2)² + (y − b2)² = r²,
wherein (a1, b1) and (a2, b2) represent the coordinates of the centres of the respective circles, and r represents the radius of the circles, which in this non-limiting example is the same for both circles. In other words, the above equations define the respective circles, and thereby the areas they enclose, in the coordinate system.
Now, the position of the gaze point 180 in relation to the input as described is determined in the same coordinate system. For example, the gaze point 180 in the present example, in relation to the input under consideration, may be (a3, b3). Hence, the control unit 140 may be arranged to detect whether the gaze point 180 resides within the area of either of the virtual buttons 310, 320 by determining whether either of the following conditions holds at the position (a3, b3) in the coordinate system:
(a3 − a1)² + (b3 − b1)² ≤ r², or
(a3 − a2)² + (b3 − b2)² ≤ r².
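A non-limiting Python sketch of the containment check expressed by the above conditions could be the following; the button identifiers, centre coordinates and radius are fictitious values chosen only for the example.

```python
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]

def hit_button(gaze_point: Point,
               buttons: Dict[str, Point],
               radius: float) -> Optional[str]:
    """Return the identifier of the circular virtual button whose area contains the
    gaze point, i.e. the centre (a, b) for which (x - a)^2 + (y - b)^2 <= r^2."""
    x, y = gaze_point
    for name, (a, b) in buttons.items():
        if (x - a) ** 2 + (y - b) ** 2 <= radius ** 2:
            return name
    return None

# Two virtual buttons corresponding to the circles 310 and 320 of the example.
buttons = {"310": (0.2, 0.5), "320": (0.6, 0.5)}
print(hit_button((0.22, 0.48), buttons, radius=0.08))   # -> "310"
```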
In the fictitious non-limiting example of
In the foregoing description the step 240 of
For example, the control unit 140 may refer to a computing device, such as a server device, a laptop computer, or a PC, as schematically illustrated in
The memory 420 and a portion of the computer program code 425 stored therein may be further arranged, with the processor 410, to cause the apparatus, i.e. the device, to perform a method as described in the foregoing description. The processor 410 may be configured to read from and write to the memory 420. Although the processor 410 is depicted as a respective single component, it may be implemented as respective one or more separate processing components. Similarly, although the memory 420 is depicted as a respective single component, it may be implemented as respective one or more separate components, some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage.
The computer program code 425 may comprise computer-executable instructions that implement functions that correspond to steps of the method when loaded into the processor 410. As an example, the computer program code 425 may include a computer program consisting of one or more sequences of one or more instructions. The processor 410 is able to load and execute the computer program by reading the one or more sequences of one or more instructions included therein from the memory 420. The one or more sequences of one or more instructions may be configured to, when executed by the processor 410, cause the apparatus to perform the method as described. Hence, the apparatus may comprise at least one processor 410 and at least one memory 420 including the computer program code 425 for one or more programs, the at least one memory 420 and the computer program code 425 configured to, with the at least one processor 410, cause the apparatus to perform the method as described.
The computer program code 425 may be provided e.g. as a computer program product comprising at least one computer-readable non-transitory medium having the computer program code 425 stored thereon, which computer program code 425, when executed by the processor 410, causes the apparatus to perform the method. The computer-readable non-transitory medium may comprise a memory device or a record medium such as a CD-ROM, a DVD, a Blu-ray disc, or another article of manufacture that tangibly embodies the computer program. As another example, the computer program may be provided as a signal configured to reliably transfer the computer program.
Still further, the computer program code 425 may comprise a proprietary application, such as computer program code for executing the control of the elevator system in the manner as described.
Any of the programmed functions mentioned may also be performed in firmware or hardware adapted to or programmed to perform the necessary tasks.
Moreover, as mentioned, a functionality of the apparatus implementing the control unit 140 may be shared between a plurality of devices as a distributed computing environment. For example, the distributed computing environment may comprise a plurality of devices as schematically illustrated in
Some aspects relate to an elevator system 100 comprising the arrangement as described in the foregoing description and wherein the method as described may be performed in order to control the elevator system accordingly.
An advantage of the examples as described is that they provide a sophisticated solution for interacting with the elevator system 100. The solution provides a way to establish the user interface at an optimal position with respect to people flow in the premises, as well as enabling touchless interaction with the elevator system, which may e.g. prevent the spread of diseases.
The specific examples of the invention provided in the description given above should not be construed as limiting the applicability and/or the interpretation of the appended claims. Lists and groups of examples provided in the description given above are not exhaustive unless otherwise explicitly stated.
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | PCT/FI2020/050471 | Jun 2020 | US |
| Child | 17993792 | | US |