The present invention relates to a storage and order-picking system in which piece goods are manually manipulated and which is equipped with a motion-sensor system which allows conclusions to be drawn, by measuring the motions of an operator, on whether manipulation processes are conducted correctly. Further, piece goods can be measured by using the hands only, i.e. without additional aids. Finally, control instructions, which cause movements within the storage and order-picking system, can be generated, or triggered, by means of specific gestures of the operator alone.
In the field of intralogistics substantially two principles exist according to which goods are moved within a warehouse. The order-picking process either happens in accordance with the principle “man-to-goods” or in accordance with the principle “goods-to-man”. Additionally, a plurality of different order-picking systems, or order-picking guidance systems, exist which are designated by terms such as “Pick-to-Belt” or “Pick-by-Light” or the like.
Timm Gudehus describes in his book “Logistics” (Springer-Verlag, 2004, ISBN 3-540-00606-0) the term “Pick-to-Belt” as an order-picking method in which the picking happens in a decentralized manner and the articles are provided statically. Provision units (such as storage containers or piece goods) have a fixed location if picking happens in a decentralized manner. An order-picking person moves within a (decentralized) working area for the purpose of picking, the working area containing a certain number of access locations. Picking orders, with or without collecting containers, sequentially travel to corresponding order-picking zones (working area of the order-picking person) on a conveyor system. An order, or a picking order, is to be understood, for example, as a customer's order which includes one or more order positions (order lines) including a respective amount (removal quantity) of one article or one piece good. The orders stop in the order-picking zone until the required article amounts are removed and deposited. Then, the order can travel, if necessary, to a subsequent order-picking person, who operates an order-picking zone arranged downstream, for processing the next order lines. Advantages of the decentralized picking process are: short paths and continuous operation; no set-up times and waiting times at a central base; as well as a higher picking performance of the order-picking persons. Therefore, “batch picking” is often conducted with “Pick-to-Belt” applications, i.e. as many customer orders as possible, which contain a specific article type, are combined so that the order-picking person removes this article type for all of the customer orders. This reduces the walking path of the order-picking person.
Another order-picking method is designated as “Pick-by-Light” (source: Wikipedia). Pick-by-Light offers significant advantages in comparison to classic manual order-picking methods which require the presence of delivery notes or debit notes at the time of the order-picking process. With Pick-by-Light systems a signal lamp including a digital or alphanumeric display as well as at least one acknowledgement key and, if necessary, entry and correction keys are located at each of the access locations. If the order container, into which the articles are to be deposited, for example, from storage containers, arrives at an order-picking position, then the signal lamp of the access location is lit from which the articles or piece goods are to be removed. The number, which is to be removed, appears on the display. Then, the removal is confirmed by means of the acknowledgement key, and the inventory change can be reported back to the warehouse management system in real time. In most cases the Pick-by-Light systems are operated in accordance with the principle “man-to-goods”.
Further, paperless order-picking by means of “Pick-by-Voice” is known (source: Wikipedia). In this case communication between a data processing system and the order-picking person happens via voice. Most of the time the order-picking person works with a headset (earphone and microphone), which can be connected, for example, to a commercially available pocket PC, instead of using printed order-picking lists or data radio terminals (i.e. mobile data acquisition units, MDU). The orders are transmitted by radio from the warehouse management system, most of the time by means of WLAN/WiFi, to the order-picking person. Typically, a first voice output names the rack from which piece goods are to be removed. When the order-picking person has arrived at the rack, he/she can name a check digit attached to the rack, which allows the system to check the access location. If the correct check digit has been named, the removal quantity is announced to the order-picking person in a second voice output. If the rack comprises several access locations, the specific access location is, as a matter of course, also announced to the order-picking person in a voice output. After removal of the to-be-picked piece good, or piece goods, the order-picking person acknowledges this process by means of key words which are understood by a data processing device due to voice recognition.
In the applicant's systems, coordination of the processing of orders is conducted by an order processing system, which most of the time is integrated into an order-picking control that can also comprise, for example, a material management system. Further, a (warehouse) location management as well as an information-display system can be integrated into the order-picking control. The order-picking control is typically realized by a data processing system which preferably works online in order to transmit and process data without delay. One problem of the above-mentioned conventional order-picking methods is to be seen in the manner in which the order-picking person—i.e. the operator of a work station—communicates with the order-picking control. Another problem is to be seen in the checking and monitoring of the operator.
Often an order-picking process consists of a plurality of sequential operation and manipulation steps, wherein the piece goods are picked, for example, at a source location and delivered to a target location. It is not clear whether the operator accesses the right source location and delivers to the right target location, and therefore needs to be monitored (e.g. by means of light barriers). Further, deviations can occur between a number of to-be-manipulated piece goods and a number of actually manipulated piece goods. Therefore, also the number of manipulated piece goods is to be monitored.
In order to begin a manipulation, the operator needs to communicate with the order-picking control. The same applies with regard to indicating the end of a manipulation. Frequently the acknowledgement keys already mentioned above are used for this purpose. One disadvantage of the acknowledgement keys is that they are arranged in a stationary manner, so that the operator needs to walk to them in order to actuate them. This requires time. The more time is needed for each manipulation, the lower the picking performance (number of manipulations per unit of time).
The document U.S. Pat. No. 6,324,296 B1 discloses a distributed-processing motion capture system (and inherent method) comprising: plural light point devices, e.g., infrared LEDs, in a motion capture environment, each providing a unique sequence of light pulses representing a unique identity (ID) of a light point device; a first imaging device for imaging light along a first and second axis; and a second imaging device for imaging light along a third and fourth axis. Both of the imaging devices filter out information not corresponding to the light point devices, and output one-dimensional information that includes the ID of a light point device and a position of the light point device along one of the respective axes. The system also includes a processing device for triangulating three-dimensional positions of the light point devices based upon the one-dimensional information. The system is very fast because the necessary processing is distributed to be maximally parallel. The motion capture system uses a cylindrical collimating (CC) optics sub-system superimposed on a cylindrical telecentric (CT) optics sub-system. The outputs of the plural light point devices are modulated to provide a unique sequence of light pulses representing a unique identifier (ID) for each of the light point devices according to a predetermined cycle of modulation intervals based upon synchronization signals provided via RF communication. At least two of the light point devices concurrently provide light during the cycle.
The document U.S. Pat. No. 6,724,930 B1 discloses a three-dimensional position and orientation sensing apparatus including: an image input section which inputs an image acquired by an image acquisition apparatus and showing at least three markers having color or geometric characteristics as one image, three-dimensional positional information of the markers with respect to an object to be measured being known in advance; a region extracting section which extracts a region corresponding to each marker in the image; a marker identifying section which identifies the individual markers based on the color or geometric characteristics of the markers in the extracted regions; and a position and orientation calculating section which calculates the three-dimensional position and orientation of the object to be measured with respect to the image acquisition apparatus, by using positions of the identified markers in the image input to the image input section, and the positional information of the markers with respect to the object to be measured.
The document WO 2011/013079 A1 discloses a method for depth mapping which includes projecting a pattern of optical radiation onto an object. A first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object. A second image of the object is captured using a second image sensor, and the second image is processed together with another image to generate stereoscopic depth data with respect to the object. The pattern-based depth data is combined with the stereoscopic depth data to create a depth map of the object.
Therefore, it is an object to monitor the manipulations better and to facilitate the communication between the operator and the order-picking control, in particular if guidance of the operator with regard to the order-picking process is concerned.
According to a first aspect of the invention, a storage and order-picking system for storing and picking piece goods is disclosed, comprising: a manual work station comprising a defined working area, in which an operator is supposed to manipulate a piece good with his/her hands in a default manner, which is communicated to the operator visually and/or audibly, in that the operator moves the piece good within the working area; a motion-sensor system, which detects motions, preferably of the hands and/or forearms, of the operator within the working area of the work station and which converts same into corresponding motion signals; and a computing unit, which is data connected to the motion-sensor system and which is configured to convert the motion signals into corresponding, preferably time-dependent, trajectories in a virtual space, which is an image of the working area and in which the trajectories are compared to reference trajectories, or reference volumes, in order to generate and output control signals which indicate a correct or wrong performance of the default manipulation manner to the operator.
According to a second aspect of the invention, a storage and order-picking system for storing and picking piece goods is disclosed, comprising: a manually operated work station arranged in a fixed working area, in which an operator manipulates the piece goods with his/her hands in a default manipulation manner, which is communicated to the operator visually, or audibly, wherein the operator moves the piece goods within the working area; a motion-sensor system configured to detect the operator's motions within the working area of the work station, and to convert same into corresponding motion signals; and a computing unit, which is connected to the motion-sensor system and which is configured to convert the motion signals into corresponding trajectories in a virtual space, which represents an image of the working area in real space, wherein the converted trajectories are compared to reference trajectories, or reference volumes, in the virtual space, which is modeled in accordance with the real space as a reference model, the computing unit being further configured to generate and output control signals, based on the comparison, which indicate a correct or wrong performance of the default manipulation manner to the operator.
The invention tracks the operator's motion during an order-picking process, preferably in real time. If the operator gets a piece good from a wrong location, this can be recognized in the virtual world (3D reference model) of the work station immediately by comparison of a calculated position with a reference position. Of course, the same applies to the delivery and, if necessary, also to the movement of the piece good between the pick-up and the delivery. For example, it might happen that the piece good is to be rotated about a specific angle during the pick-up and the delivery, in order to be orientated better on an order pallet for subsequent stacking purposes. Modern packing software definitely considers such movements during the planning of a loading configuration.
It is clear that hereinafter a trajectory is not only to be understood as a time-dependent curve in space, which is typically caused by a (dynamic) motion of the operator, but the term “trajectory” can also include freezing at one location. In this case the trajectory does not represent a track extending through the space but represents the course of one point within a very small volume. Ideally, the point does not move at all in this case. In general, a “trajectory”, in terms of object tracking, is a time sequence of (3D) coordinates which represent the motion path of the object during a run time.
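Purely as an illustration of this notion, the following Python sketch represents a trajectory as a time-stamped sequence of 3D coordinates and compares it against an equally sampled reference trajectory; all names, coordinates, and the tolerance are illustrative assumptions and not part of the disclosed system.

```python
from dataclasses import dataclass
from math import dist
from typing import List, Tuple

# A sample is a time stamp (seconds) plus a 3D point in the virtual working area.
Sample = Tuple[float, Tuple[float, float, float]]

@dataclass
class Trajectory:
    samples: List[Sample]

    def max_deviation(self, reference: "Trajectory") -> float:
        """Largest point-to-point distance between two equally sampled trajectories."""
        return max(
            dist(p, q)
            for (_, p), (_, q) in zip(self.samples, reference.samples)
        )

# A "frozen" trajectory: the point barely moves, so all samples lie in a tiny volume.
resting = Trajectory([(t * 0.1, (0.50, 0.20, 1.00)) for t in range(10)])
reference = Trajectory([(t * 0.1, (0.50, 0.20, 1.00)) for t in range(10)])
print(resting.max_deviation(reference) < 0.05)  # True -> matches the reference
```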
With a preferred embodiment the storage and order-picking system comprises a goods receipt, a goods issue, at least one warehouse, and/or a conveyor system.
The invention can be used in each area of a storage and order-picking system and is not limited to specific locations, or areas.
In particular, the work station can be a packing station, an order-picking station, or a teach-in station (station for measuring piece goods), which preferably is operated in accordance with the principle “goods-to-man”.
With a preferred embodiment the motion-sensor system comprises a position-determining system, which comprises at least one camera and at least two light sources, wherein the at least two light sources are arranged at a fixed distance to each other, wherein either the camera or the two light sources are attached to the hands or forearms of the operator, preferably in parallel to the ulna or to the stretched index finger, and wherein the calculating unit is configured to perform, based on an image of the two light sources which is recorded by the camera, an absolute position determination of the hands and/or forearms within the working area.
The (absolute) position determination presently takes place in the so-called pointer mode. The light sources and the camera are orientated to each other and can “see” each other. The position determination happens in terms of triangulation, wherein the distance of the light sources relative to each other is already known in advance. In this context it is irrelevant whether the light sources rest and the camera moves, or whether the camera rests and the light sources move.
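Assuming a simple pinhole-camera model, the range from the camera to the light sources can be estimated from the known distance between the light sources and their apparent separation in the camera image; the following sketch is only one possible way to express such a triangulation-style estimate, and the focal length and distances used are made-up example values.

```python
def range_from_baseline(
    baseline_m: float,        # known, fixed distance between the two light sources
    pixel_separation: float,  # separation of the two light spots in the camera image (pixels)
    focal_length_px: float,   # camera focal length expressed in pixels (from calibration)
) -> float:
    """Estimate the camera-to-light-source range via similar triangles (pinhole model).

    This assumes the line connecting the light sources is roughly perpendicular
    to the optical axis; the real system may use a full triangulation instead.
    """
    return focal_length_px * baseline_m / pixel_separation

# Example: 0.3 m baseline, spots 120 px apart, 800 px focal length -> 2.0 m range.
print(range_from_baseline(0.3, 120.0, 800.0))
```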
Further, it is preferred if the at least one camera or the at least two light sources are respectively attached to a holding device, which preferably is flexible and formed such that the operator can wear the holding device permanently and captively during the manipulation of piece goods, and in a manner which keeps a fixed orientation. The above-mentioned position-determining system, or parts thereof, are to be attached to the operator in a preset orientation. The attachment happens, for example, by means of a glove, an arm gaiter, or the like, such as rubber ribbons or elastic rings. The index finger and the ulna are predestined for the attachment and orientation. A stretched index finger is typically orientated in parallel to the ulna when an extended arm points to an object.
As mentioned above in particular a glove, an arm gaiter, or a plurality of, preferably elastic, ribbons or rings are used as the holding device.
Further, it is advantageous to provide the motion-sensor system additionally with at least two motion sensors, which are orientated along different spatial directions and which generate the direction-dependent (motion and position) information, which can be transmitted to the calculating device, wherein the calculating device is configured to conduct a relative position determination of the operator within the working area based on the direction-dependent information.
Both translatory motions and rotatory motions can be detected by means of the motion sensors. If three motion sensors are provided, which are orientated along vectors which in turn span the space of the working area, each position change can be determined by calculation. If the system has been calibrated additionally in advance, by conducting an absolute position determination, then an absolute position can also be calculated over longer periods.
Therefore, motion sensors are ideally suited for being combined with the above-mentioned position-determining system, which, however, is only operable in the pointing mode without additional technical aids. If the pointing mode is left, the position determination can be continued—by calculation—based on the data delivered by the motion sensors.
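A minimal dead-reckoning sketch, assuming acceleration samples along the three base vectors and a fixed sampling interval, shows how the position estimate could be continued by double integration once the pointing mode is left; the function name and the numeric values are illustrative only.

```python
from typing import Iterable, Tuple

Vec3 = Tuple[float, float, float]

def dead_reckon(
    start_position: Vec3,           # last absolute position from the pointing-mode triangulation
    start_velocity: Vec3,
    accelerations: Iterable[Vec3],  # acceleration samples along X, Y, Z (m/s^2)
    dt: float,                      # sampling interval (s)
) -> Vec3:
    """Continue the position estimate by double integration of the accelerations."""
    px, py, pz = start_position
    vx, vy, vz = start_velocity
    for ax, ay, az in accelerations:
        vx, vy, vz = vx + ax * dt, vy + ay * dt, vz + az * dt
        px, py, pz = px + vx * dt, py + vy * dt, pz + vz * dt
    return (px, py, pz)

# 1 s of constant 0.5 m/s^2 acceleration along X from rest moves the hand roughly 0.25 m.
print(dead_reckon((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), [(0.5, 0.0, 0.0)] * 100, 0.01))
```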
Additionally, it is advantageous if the motion-sensor system comprises a position-determining system which comprises at least one stationary light source and at least one stationary camera, wherein each of the light sources illuminates the working area, wherein the at least one stationary camera is arranged such that at least some rays are detected, which are reflected by the operator and which are converted into reflection signals by the at least one stationary camera, wherein the calculating device is configured to conduct a relative position determination of the operator within the working area based on the reflection signals.
In this case, the invention utilizes the so-called “Motion Capturing Method”. Points which can be additionally marked by markers are permanently illuminated and reflections thereof are detected, in order to allow reconstruction of the motion of the points in space by calculation. This coarse position determination, which is typically slightly delayed in time, is sufficient for many applications in the field of intralogistics, in order to check approximately the quality (correctness) of the order-picking process, and to initiate correction measures, if necessary.
With another preferred embodiment the position-determining system comprises additional markers, wherein preferably each hand and/or each forearm of the operator is connected to one of the markers in an unchangeable preset orientation relative to the operator, and wherein the at least one stationary light source emits (isotropic) rays at a selected wavelength into the working area, which are not reflected at all, or only weakly, by the operator, the piece goods, and the work station, wherein the marker is made of a material which reflects the selected wavelength particularly well.
Thus, the operator does not necessarily need to be equipped with an actively transmitting marker in order to allow information on the position of the operator to be gained. Since the markers substantially reflect the rays of the stationary light source, time-consuming post-processing of the data and expensive filters for suppressing undesired signals can be omitted.
Besides this, the markers can be longitudinal flexible stripes which are attachable along the ulna, a thumb, or an index finger of the operator, or can be points attachable along a grid.
With another advantageous embodiment the stationary light source of the position-determining system transmits a plurality of separate rays in a predefined discrete pattern (anisotropically) into the working area, wherein at least two stationary cameras are provided, which are arranged in common with the at least one stationary light source along a straight line so that the at least two stationary cameras detect at least some of the separate rays reflected by the operator and convert the same into reflection signals, wherein the calculating unit is configured to conduct a relative position determination of the hands and/or forearms within the working area based on the reflection signals.
In the present case, again a passive system is described, in which the operator merely serves as a reflector. In this case it is not even necessarily required that the operator is equipped with corresponding markers. Since the light source emits a regular pattern, consisting of discrete rays, depth information on the reflecting object (operator) can already be obtained by means of one single camera alone. Due to the fact that the two cameras are arranged at the same height relative to the light source, the principles of stereoscopy can further be used for obtaining additional depth information. In addition, it is possible to conduct a relative position determination of the object, which moves within the illuminated field of view of the light source and thus also reflects radiation. The system can be calibrated in advance, in order to allow calculation of an absolute position. In this context, it is an advantage that the operator does not need to be provided with markers or the like, while information on a current position can be determined nevertheless. In any case, the operator can work in an undisturbed manner.
Further, it is advantageous if the at least two stationary cameras are operated in different frequency ranges, preferably in the infrared range and in the visible spectrum.
Infrared light does not disturb the operator during work. An RGB camera, which records visible light, can be used additionally for generating a normal video image besides the gain of depth information.
With another particular embodiment the system comprises a display device, which receives the control signals of the calculating unit and which communicates to the operator a manipulation manner recognized as being right or wrong.
In this manner it is possible to immediately intervene if an error is recognized. The operator can even be prevented from terminating an erroneous manipulation step. In this case the error does not happen at all. When carried out properly this can be communicated to the operator in a timely manner in terms of positive feedback.
Preferably, the system further comprises a video camera generating a real image of the working area, wherein the calculating unit is configured to generate image signals in real time and to transmit the same to the display device, which superimposes to the real image a reference source volume, a reference target volume as well as the recognized hands and/or forearms of the operator, and/or work instructions.
If such a video image is displayed to the operator while the desired manipulation is conducted within the working area, the operator can immediately recognize whether the desired manipulation is correctly conducted and what needs to be done. If piece goods are taken, the operator can see on the screen whether the piece goods are taken from the correct location, because the correct location needs to lie within the source volume, which is displayed in a superimposed manner. The same applies analogously during delivery, since the target volume is displayed in a superimposed manner. If a tilt or rotation of the piece good is to be conducted additionally on the path between the pick-up and the delivery, this can also be visualized (dynamically). This is particularly advantageous with regard to packing applications, since the piece goods in most cases need to be stacked in one single preset orientation onto the stack of piece goods already located on the order pallet. In this context it can be quite relevant whether the piece good is orientated correctly or stands upside down, because not every piece good has a homogeneous weight distribution.
Additionally, the system can further comprise a voice-guidance system, which comprises an earphone and a microphone, preferably in terms of a headset.
The manipulation steps can be controlled additionally by the voice-guidance system (Pick-by-Voice) by means of voice. This concerns both instructions, which are received by the operator audibly, and also instructions (e.g. a confirmation), which is directed by the operator to the order-picking control in terms of voice.
According to a third aspect of the invention, a method for monitoring and guiding a manual order-picking process is disclosed, wherein a piece good is manually picked up by an operator at a source location, in accordance with an order-picking task, and is delivered to a target location, the method comprising the steps of: assigning an order-picking task to the operator; visually or audibly communicating the task, preferably in terms of a sequence of manipulation steps, to the operator in the real space; picking up, moving, and delivering the piece good in the real space by the operator; scanning the actual movement, preferably of the hands and/or the forearms, of the operator in the real space by means of a motion-sensor system; converting the movements, scanned in the real space, into image points or into at least one trajectory in a virtual space, which is modeled in accordance with the real space as a reference model and in which the source location is defined as a reference-source volume and the destination location is defined as a reference-destination volume; checking by comparing whether the trajectory matches a reference trajectory, wherein the reference trajectory fully corresponds to a motion sequence in the virtual space in accordance with the communicated task, or whether the image points are located initially within the reference-source volume and later in the reference-destination volume; and outputting an error notification, or a correction notification, to the operator, if the step of checking has resulted in a deviation between the trajectory and the reference trajectory, or if the step of checking results in that the image points are not located in the reference-source volume and/or in the reference-destination volume.
According to a fourth aspect of the invention, a method for monitoring and guiding a manual order-picking process is disclosed, wherein in accordance with an order-picking task a piece good is manually picked up by an operator at a source location and delivered to a target location in real space, the method comprising the steps of: assigning an order-picking task to the operator; visually, or audibly, communicating the order-picking task to the operator in the real space; picking up, moving, and delivering the piece good in the real space by the operator; detecting the actual movement of the operator in the real space by means of a motion-sensor system; converting the detected movements into one of image points and at least one trajectory in a virtual space, which is modeled in accordance with the real space as a reference model and in which the source location is defined as a reference-source volume and the destination location is defined as a reference-destination volume, by means of a computing unit; checking, by means of the computing unit, by comparing: whether the at least one trajectory matches a reference trajectory, wherein the reference trajectory corresponds to a motion sequence in the virtual space in accordance with the communicated order-picking task, or whether the image points are located initially within the reference-source volume and later in the reference-destination volume; and outputting an error notification, or a correction notification, to the operator, if the step of checking has resulted in a deviation between the trajectory and the reference trajectory, or if the step of checking results in that the image points are not located in the reference-source volume and the reference-destination volume.
With the above-described method in accordance with the invention the motion of the operator is tracked (tracking) in the real space, which is mapped in terms of actual data into the virtual space and which is compared to nominal data there. The resolution is so good that it is possible to track the operator's hands alone. As soon as the operator does something unexpected, it is recognized and countermeasures can be initiated. The error rate can be drastically reduced in this manner. Therefore, acknowledgement keys or the like do not need to be actuated so that the operator can conduct the order-picking process completely undisturbed. The order-picking time is reduced. If a piece good is retrieved from the wrong source location, or is delivered to a wrong destination location, this is immediately registered (i.e., in real time) and communicated to the operator.
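As a rough illustration of this comparison in the virtual space, the following sketch models the reference-source and reference-destination volumes as axis-aligned boxes and checks that the tracked hand positions appear first in the source volume and later in the destination volume; the data structures, coordinates, and function names are assumptions for illustration, not the claimed implementation.

```python
from typing import Sequence, Tuple

Point = Tuple[float, float, float]
Box = Tuple[Point, Point]  # axis-aligned volume: (minimum corner, maximum corner)

def inside(point: Point, box: Box) -> bool:
    (x0, y0, z0), (x1, y1, z1) = box
    x, y, z = point
    return x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

def pick_is_plausible(hand_points: Sequence[Point], source: Box, target: Box) -> bool:
    """True if the tracked hand is seen first in the reference-source volume
    and later in the reference-destination volume."""
    first_in_source = next((i for i, p in enumerate(hand_points) if inside(p, source)), None)
    if first_in_source is None:
        return False
    return any(inside(p, target) for p in hand_points[first_in_source + 1:])

source_volume = ((0.0, 0.0, 0.0), (0.3, 0.3, 0.3))
target_volume = ((1.0, 0.0, 0.0), (1.3, 0.3, 0.3))
track = [(0.1, 0.1, 0.1), (0.6, 0.2, 0.1), (1.1, 0.1, 0.1)]
print(pick_is_plausible(track, source_volume, target_volume))  # True
```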
With a preferred embodiment at least one reference trajectory is calculated for each hand or forearm of the operator, which starts in the reference-source volume and ends in the reference-destination volume.
The order-picking control knows the pick-up location and the delivery location before the desired manipulation is conducted by the operator. Thus, it is possible to determine nominal motion sequences, which can be compared subsequently to actual motion sequences, in order to allow a determination of deviations.
Additionally, it is advantageous to further check whether the operator is picking up the correct number of piece goods, in that a distance between the hands of the operator is determined and compared, with regard to plausibility, to an integral multiple of one dimension of one of the piece goods, wherein, in accordance with the task, several piece goods of only one type are to be moved simultaneously.
The operator does not need to move the multiple piece goods individually in order to allow determination of whether the right number of piece goods has been manipulated (counting check). The order-picking person can simultaneously move all of the to-be-manipulated piece goods, if he/she is able to, wherein the actually grabbed piece goods are counted during the motion. In this context, it is not required that the operator stops the motion at a preset time or preset location. In this sense the operator can work undisturbed and conduct the motion continuously. The inventors have recognized that, when multiple piece goods are grabbed, most of the time both hands are used and keep a constant distance relative to each other during the motion sequence; this distance can be clearly recognized during analysis of the trajectories. If piece goods of only one type are manipulated, the basic dimensions (such as height, width, and depth) keep the number of possible combinations within reasonable limits and can enter into the analysis of the distance between the hands. In this manner it can be determined rapidly whether the operator has grabbed the right number and the right piece goods.
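The following sketch illustrates the simplest form of such a counting check, assuming the grabbed pieces are lined up along one basic dimension so that the hand distance should match the expected count times that dimension within a tolerance; names and numbers are illustrative, and the real check may also consider combinations of dimensions as described above.

```python
def count_from_hand_distance(
    hand_distance_m: float,     # constant distance between the hands during the motion
    piece_dimensions_m: tuple,  # (width, height, depth) of the single piece-good type
    expected_count: int,        # number of pieces the task asks for
    tolerance_m: float = 0.02,
) -> bool:
    """Plausibility check: does the hand distance match the expected number of pieces
    lined up along any one of the basic dimensions?"""
    return any(
        abs(hand_distance_m - expected_count * dim) <= tolerance_m
        for dim in piece_dimensions_m
    )

# Three 0.20 m wide cartons grabbed side by side -> hands roughly 0.60 m apart.
print(count_from_hand_distance(0.61, (0.20, 0.15, 0.30), 3))  # True
```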
According to a fifth aspect of the invention, a method for manually determining a dimension of a piece good is disclosed, wherein a system in accordance with the invention is used, and wherein the hands, in particular the index fingers, are provided with markers, the method comprising the steps of: selecting a basic-body shape of the piece good, wherein the basic-body shape is defined by a set of specific basic lengths; sequentially communicating the to-be-measured basic lengths to the operator; positioning the markers laterally to the piece good in the real world for determining each of the communicated basic lengths; and determining a distance between the markers in the virtual world, and assigning the so-determined distance to the respective to-be-measured basic length.
According to a sixth aspect of the invention, a method for manually determining a dimension of a piece good in a storage and order-picking system is disclosed, wherein an operator's hands, or index fingers, are provided with markers, the method comprising the steps of: selecting a basic body shape of the piece good, which is to be measured, wherein the basic body shape is defined by a set of specific basic lengths; sequentially communicating the to-be-measured basic lengths to the operator; positioning the markers laterally to the to-be-measured piece good in the real world for determining each of the communicated basic lengths; and determining a distance between the markers in the virtual world, which is modeled in accordance with the real space as a reference model, and assigning the so-determined distance to the respective to-be-measured basic length.
The operator does not need anything else but his/her hands for determining a length of a piece good. Any additional auxiliary tool can be omitted. The measuring of one of the piece goods happens rapidly since the hands only need to be in contact for a very short period of time.
Even more complex geometrical shapes such as a tetrahedron (pyramid) can be measured rapidly and easily. A selection of basic bodies can be displayed to the operator, from which the operator selects the shape of the piece good which is currently to be measured. As soon as one of the basic shapes is selected, it is automatically displayed to the operator which lengths are to be measured. In this context, the indication preferably happens visually by highlighting the respective points on the selected basic shape.
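A minimal sketch of the measuring step, assuming one marker-position pair per communicated basic length, could assign the marker-to-marker distance in the virtual world to the respective basic length as follows; the coordinates and the cuboid basic shape are example assumptions.

```python
from math import dist

def measure_basic_lengths(marker_pairs, basic_length_names):
    """Assign the marker-to-marker distance to each communicated basic length.

    marker_pairs: one (left_marker_xyz, right_marker_xyz) pair per measuring step,
                  taken while the operator holds the markers laterally to the piece good.
    basic_length_names: e.g. ("width", "height", "depth") for a cuboid basic shape.
    """
    return {
        name: dist(left, right)
        for name, (left, right) in zip(basic_length_names, marker_pairs)
    }

pairs = [((0.00, 0.0, 1.0), (0.30, 0.0, 1.0)),   # width measurement
         ((0.00, 0.0, 1.0), (0.00, 0.2, 1.0)),   # height measurement
         ((0.00, 0.0, 1.0), (0.00, 0.0, 1.4))]   # depth measurement
print(measure_basic_lengths(pairs, ("width", "height", "depth")))
```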
In addition to the index fingers, the thumbs can also be provided with at least one marker, wherein the index finger and the thumb of each hand are spread away from each other during the measuring process, preferably in a perpendicular manner.
In this case thumbs and index fingers span a plane which can be used for measuring the piece good. Additionally, angles can be indicated in a simple manner. Rotating and tilting the piece good, in order to measure each of the sides, is not necessarily required. The index fingers and thumbs do not necessarily need to be spread perpendicularly. Any arbitrary angle can be measured on the piece good by means of an arbitrary angle between the index finger and the thumb.
With another embodiment of the method the to-be-measured piece good is rotated about one of its axes of symmetry for determining a new basic length.
According to a seventh aspect of the invention, a method for controlling a storage and order-picking system in accordance with the invention is disclosed, comprising the steps of: defining a set of gestures, which respectively correspond to one unique motion or rest position of at least one arm and/or at least one hand of the operator and which are sufficiently distinct from normal motions, respectively, in the context of desired manipulations of the piece good in the working area; generating reference gestures in the virtual world, wherein at least one working-area control instruction is assigned to each of the reference gestures; scanning the actual motion of the operator in the real world, and converting same into at least one corresponding trajectory in the virtual world; comparing the trajectory to the reference gestures; and executing the assigned working-area control instruction if the comparison results in a sufficient match.
According to an eighth aspect of the invention, a method for controlling a storage and order-picking system, which comprises a work station arranged in a fixed working area in real space, is disclosed, comprising the steps of: defining a set of gestures, which respectively correspond to one unique motion, or rest position, of at least one arm and/or at least one hand of an operator and which are sufficiently distinct from normal motions, respectively, in the context of desired manipulations of a piece good in the working area; generating reference gestures in a virtual world, which is modeled in accordance with the real space as a reference model, wherein at least one working-area control instruction is assigned to each of the reference gestures; scanning the actual motion of the operator in the real world, and converting the scanned motion into at least one corresponding trajectory in the virtual world; comparing the trajectory to the reference gestures; and executing the assigned working-area control instruction if the comparison results in a sufficient match.
The operator can indicate to the order-picking control, by means of hand motions only, whether the operator has completed one of the partial manipulation steps or whether the operator wants to begin a new manipulation step. Acknowledgment keys, switches, light barriers, and the like can be omitted completely. A manipulation step can be conducted at a higher speed since the actuation of an acknowledgement key or the like, and in particular the walking paths associated therewith, is omitted.
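By way of illustration, gesture matching could be as simple as comparing the observed trajectory to each stored reference gesture and accepting the closest one if it matches sufficiently well; the following sketch uses an averaged point-to-point distance with a made-up threshold, whereas the actual system may use the pattern-recognition techniques described further below.

```python
from math import dist

def match_gesture(trajectory, reference_gestures, threshold=0.1):
    """Return the name of the reference gesture whose (equally sampled) trajectory is
    closest to the observed one, or None if no gesture matches sufficiently well."""
    best_name, best_score = None, float("inf")
    for name, reference in reference_gestures.items():
        score = sum(dist(p, q) for p, q in zip(trajectory, reference)) / len(reference)
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score <= threshold else None

references = {
    "acknowledge": [(0.0, 1.0, 1.0), (0.2, 1.2, 1.0), (0.4, 1.0, 1.0)],  # small wave
    "log_on":      [(0.0, 1.0, 1.0), (0.0, 1.5, 1.0), (0.0, 1.0, 1.0)],  # raise and lower hand
}
observed = [(0.02, 1.0, 1.0), (0.21, 1.18, 1.0), (0.41, 1.02, 1.0)]
print(match_gesture(observed, references))  # "acknowledge"
```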
In particular, the operator can log in at a superordinate control unit as soon as the operator enters a working cell for the first time.
The operator can easily identify himself/herself to the order-picking control by a “Log-on” or registration gesture, the order-picking control preferably being implemented within the control unit by means of hardware and/or software. Each of the operators can have a personal (unambiguous) identification gesture. In this manner each motion detected within a working cell can be assigned unambiguously to one of the operators.
Further, it is advantageous to attach at least one marker to each of the operator's hands, and/or to each of the operator's forearms, before the operator enters the working cell.
In this constellation operators are allowed to enter the working cell without being recognized if they do not have markers with them. Thus, differentiation between active and inactive operators is easily possible.
With another preferred embodiment the operator, and in particular the markers, are permanently scanned for recognizing a log-in gesture.
Additionally, it is generally advantageous to conduct the respective steps in real time.
Thus, it is possible to intervene at any time in a correcting manner and to inquire at any time which of the persons is currently conducting a process within the storage and order-picking system, where the person is located, where the person has been located before, how efficient the person works, and the like.
Further, it is generally preferred to conduct a position calibration in a first step.
Position calibration is particularly advantageous for determining an absolute position, because in this case absolute positions can be determined even by means of the relative position-determining systems.
In particular, the trajectories of the operator are stored and are associated with data on those piece goods which have been moved by the operator during a (work) shift, wherein in particular a work period, a motion path, particularly in horizontal and vertical directions, and a weight of each moved piece good are considered.
In many countries statutory provisions exist, for ergonomic reasons, according to which operators may not exceed fixedly preset limit values with regard to weights which need to be lifted or pushed during one work shift. Up to now it was almost impossible to determine the overall weight which has already been lifted or pushed by the operator during his/her work shift. In particular, it was almost impossible to reconstruct lifting motions. Here, the present invention provides a remedy. The properties (e.g. weight) of the piece goods are known. The motion of the operator is tracked. It is thus possible to immediately draw conclusions with regard to the corresponding values.
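A simple sketch, assuming that the weight and the vertical lift of each moved piece good are available from the stored trajectories and the known article data, shows how such a per-shift load summary could be accumulated; the limit value and the numbers are illustrative only.

```python
def shift_load_summary(handled_piece_goods, weight_limit_kg):
    """Accumulate the load handled during a shift from the tracked manipulations.

    handled_piece_goods: iterable of (weight_kg, vertical_lift_m) per moved piece good,
                         taken from the stored trajectories and the known article data.
    """
    total_weight = sum(w for w, _ in handled_piece_goods)
    total_lift = sum(w * h for w, h in handled_piece_goods)  # simple "weight x height" measure
    return {
        "total_weight_kg": total_weight,
        "lift_work_kg_m": total_lift,
        "limit_exceeded": total_weight > weight_limit_kg,
    }

moves = [(8.0, 0.4), (12.5, 0.2), (5.0, 0.9)]
print(shift_load_summary(moves, weight_limit_kg=2000.0))
```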
With another advantageous embodiment a video image of the working area is generated additionally, to which the source volume, the target volume, the scanned hands, the scanned forearms, and/or the scanned operator is/are superimposed and subsequently displayed to the operator via a display device in real time.
It is clear that the above-mentioned and hereinafter still to be explained features cannot only be used in the respectively given combination but also in other combinations or alone, without departing from the scope of the present invention.
Embodiments of the invention are depicted in the drawings and will be explained below in further detail, wherein:
FIGS. 12a and 12b show perspective illustrations of a counting check during a transfer process; and
FIGS. 13a to 13c show perspective views of a sequence of measuring processes.
During the following description of the figures identical elements, units, features, and the like will be designated by the same reference numerals.
The invention is used in the field of intralogistics and substantially concerns three aspects interacting with each other, namely i) order-picking guidance (in terms of an order-picking guidance system), ii) checking and monitoring employees (order-picking persons and operators), and iii) control of different components of a work station, or of an entire storage and order-picking system by means of gesture recognition.
The term “gesture recognition” is to be understood subsequently as an automatic recognition of gestures by means of an electronic data processing system (computer) which runs corresponding software. The gestures can be carried out by human beings (order-picking persons or operators). Gestures, which are recognized, are used for human-computer interaction. Each (rigid) posture and each (dynamic) body motion can represent a gesture in principle. A particular focus will be put below on the recognition of hand and arm gestures.
In the context of human-computer interaction a gesture can be defined as a motion of the body which contains information. For example, waving can represent a gesture. Pushing a button on a keyboard does not represent a gesture, since the motion of a finger towards a key is not relevant; the only thing which counts in this example is the fact that the key is pressed. However, gestures are not limited to motions; a gesture can also consist of a static (hand) posture. In order to detect the gesture, an (active) sensor technology can be attached directly to the operator's body. Alternatively (and supplementarily), the operator's gestures can also be observed only (in a passive manner) by means of an external sensor technology. The sensor systems, which will be explained hereinafter, are worn on the body of the operator, in particular on the hands and/or forearms. The operator can wear, for example, a data glove, arm gaiters, rings, ribbons, and the like. Alternatively, systems can be used which are guided manually. Systems including external sensor technology are most of the time camera-aided systems. The cameras are used for generating images of the operator, which are subsequently analyzed by means of software for recognizing motions and postures of the operator.
During the actual recognition of gestures, information from the sensor technology is used in algorithms which analyze the raw data and recognize gestures. In this context, algorithms for pattern recognition are used. The input data are often filtered, and pre-processed if necessary, in order to suppress noise and reduce the amount of data. Then gesture-relevant features are extracted and classified. In this context, for example, neural networks (artificial intelligence) are used.
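The following toy sketch mirrors this chain of filtering, feature extraction, and classification on a single coordinate of a hand motion; it uses a nearest-prototype classifier purely as a stand-in for the pattern-recognition or neural-network methods mentioned above, and all prototypes and values are invented for illustration.

```python
def moving_average(samples, window=3):
    """Simple noise filter over a 1D sequence of sensor values."""
    return [
        sum(samples[max(0, i - window + 1): i + 1]) / len(samples[max(0, i - window + 1): i + 1])
        for i in range(len(samples))
    ]

def extract_features(filtered):
    """Toy features: overall range and mean of the filtered signal."""
    return (max(filtered) - min(filtered), sum(filtered) / len(filtered))

def classify(features, prototypes):
    """Nearest-prototype classification; a neural network could take this place."""
    span, mean = features
    return min(
        prototypes,
        key=lambda name: (prototypes[name][0] - span) ** 2 + (prototypes[name][1] - mean) ** 2,
    )

raw = [0.0, 0.1, 0.9, 1.0, 0.8, 0.1, 0.0]            # e.g. one coordinate of a waving hand
prototypes = {"wave": (0.8, 0.4), "rest": (0.05, 0.0)}
print(classify(extract_features(moving_average(raw)), prototypes))  # "wave"
```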
With another passive approach of general motion recognition (without gesture recognition), which will be described in more detail below, for example, a depth-sensor camera and a color camera including corresponding software are used, as exemplarily described in the document WO 2011/013079 A1, which is completely incorporated herewith by reference. For example, an infrared laser projects a regular pattern, similar to a night sky, into a (working) area which is to be observed and within which the operator moves. The depth-sensor camera receives the reflected infrared light, for example, by means of a monochrome CMOS sensor. Hardware of the sensor compares an image, which is generated based on the reflected infrared rays, to a stored reference pattern. Additionally, an active stereo triangulation can calculate a so-called depth map based on the differences. The stereo triangulation records two images from different perspectives, searches for the points which correspond to each other, and uses their different positions within the two images in order to calculate the depth. Since the determination of corresponding points is generally difficult, in particular if the offered scene is completely unknown, illumination by means of a structured light pattern pays off. In principle, one camera is sufficient if the reflected pattern of a reference scene (e.g., a chessboard at a distance of one meter) is known. A second camera can be implemented in terms of an RGB camera.
In this manner both the shape (depth) of the operator and the distance relative to the cameras can be determined. After a short scan also the shape (contour) of the operator can be detected and stored. Then, it is not disturbing if different objects move through the image, or are put between the operator and the camera.
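The underlying stereo-triangulation relation can be sketched as follows, assuming a calibrated focal length and a known baseline between projector and camera; the numeric parameters are example values and do not correspond to any particular sensor.

```python
def depth_from_disparity(
    disparity_px: float,     # horizontal shift of a pattern point between the two views
    baseline_m: float,       # distance between the projector (or first camera) and the second camera
    focal_length_px: float,  # focal length expressed in pixels
) -> float:
    """Classical stereo-triangulation relation: depth = focal_length * baseline / disparity."""
    return focal_length_px * baseline_m / disparity_px

def depth_map(disparities, baseline_m=0.075, focal_length_px=580.0):
    """Convert a grid of disparities (pixels) into a grid of depths (metres)."""
    return [[depth_from_disparity(d, baseline_m, focal_length_px) for d in row]
            for row in disparities]

# A pattern point shifted by 43.5 px lies at roughly 1.0 m with these example parameters.
print(depth_map([[43.5, 21.75]]))  # ~[[1.0, 2.0]]
```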
With reference to
Basically, each component of the storage and order-picking system 10, which is involved in a material flow, can be connected through conveying systems, or conveyors 14, which are drivable in a bidirectional manner. The conveyors 14 are indicated by means of arrows in
Getting back to
An order-picking person, or an operator, 34 works in the working area 30 and is also designated as an employee MA below. The operator 34 substantially moves within the working area 30 for picking-up piece goods 40 from (storage) load supports 36, such as trays 38, and for retrieving the piece goods 40, which are conveyed into the working area 30 via a conveyor 14, as indicated by means of an arrow 39. In
The operator 34 moves (manipulates) piece goods 40 at the packing station 20 from the trays 38 to, for example, an order pallet 48 or another target (container, card, tray, etc.) where the piece goods 40 are stacked on top of each other in accordance with a loading configuration which is calculated in advance. In this context, the operator 34 can be (ergonomically) assisted by a loading-aid device 42. In
The different manipulation steps are visually indicated to the operator 34, for example, via a display device 52. The display device 52 can be a screen 54, which can be equipped with an entering unit 56 in terms of a keyboard 58. It can be visually indicated to the operator 34 via the screen 54 what the piece good 40 looks like (label, dimension, color, etc.), which one of the piece goods the operator 34 is supposed to pick up from an offered tray 38 and which one is to be put on the order pallet 48. Further, it can be displayed to the operator 34 where the piece good 40, which is to be picked up, is located on the tray 38. This is particularly advantageous if the trays 38 are not loaded with one article type only, i.e. carry piece goods 40 of different types. Further, a target region on the order pallet can be displayed in 3D to the operator 34 so that the operator 34 merely pulls one of the to-be-packed piece goods 40 from the tray 38, pushes the same over the board 44 to the order pallet 48, as indicated by means of a (motion) arrow 46, and puts the same to a location, in accordance with a loading configuration calculated in advance, on the already existing stack of piece goods on the order pallet 48. In this context, the conveyor 14 is preferably arranged at a height so that the operator 34 does not need to lift the piece goods 40 during the removal. The order pallet 48 in turn can be positioned on a lifting device (not illustrated), in order to allow transfer of the order pallet 48 to a height (direction Y) so that a to-be-packed piece good 40 can be dropped into the packing frame 50. The packing station 20 can have a structure as described in the German patent application DE 10 2010 056 520, which was filed on Dec. 21, 2010.
As it will be described in more detail below, the present invention allows detection, for example, of the motion 46 (transfer of one piece good 40 from the tray 38 onto the order pallet 48) in real time, allows checking, and allows superimposing the motion 46 to the visual work instructions, which are displayed to the operator 34 through the screen 54. For example, nominal positions or nominal motions of the hands of the operator 34 can be presented. A superordinated intelligence such as the control unit 24 can then check, based on the detected and recognized motion 46, whether the motion 46 is conducted correctly.
At least the working area 30, which exists in the real world, is reproduced in a virtual (data) world including substantial components thereof (e.g., the conveyor 14, the loading-aid device 42, the packing frame 50, and the order pallet 48). If the real motion 46 is mapped into the virtual world, it can be determined easily by comparison whether the motion 46 has started at a preset location (source location) and has stopped at another preset location (target location). It is clear that a spatially and temporally discrete comparison is already sufficient for this purpose, in order to allow the desired statements. Of course, motion sequences, i.e. the spatial position of an object dependent on time, i.e. trajectories, can be compared to each other as well.
Thus, additional information, besides the typical order-picking instructions such as the to-be-manipulated number of pieces and the type of piece goods, can be communicated to the operator 34 on the screen 54, the information increasing the quality and the speed of the order-picking process.
If the working area 30 is additionally recorded, for example, by means of a conventional (RGB) video camera, graphical symbols can be superimposed and displayed in this real image, the symbols corresponding to the expected source location, the target location (including the orientation of the to-be-packed piece good 40), and/or the expected motion sequence. In this manner the operator 34 can recognize relatively simply whether a piece good 40 is picked up at the correct (source) location, whether the picked-up piece good 40 is correctly moved and/or orientated (e.g. by rotation), or whether the to-be-packed piece good 40 is correctly positioned on the already existing stack of piece goods on the order pallet 48.
A first motion-sensor system 60 including a first absolute position-determining system 100-1 (
The motion-sensor system 60 of
The light sources 64-1 and 64-2 transmit rays, preferably isotropically. The light sources 64-1 and 64-2 are arranged in a stationary manner at a constant relative distance 76, preferably outside the working area 30. Of course, the relative distance between the light sources 64 and the camera 62 can vary, because the operator 34 moves. However, the relative distance between the light sources 64 and the camera 62 is selected, if possible, such that the camera 62 has both of the light sources 64-1 and 64-2 in its field of view at any time. It is clear that more than two light sources 64 can be utilized, which are arranged, in this case, along the virtual connection line between the light sources 64-1 and 64-2, preferably in accordance with a preset pattern.
If the camera 62 is directed towards the light sources 64, the camera 62 can see two “shining” points. Since the relative distance 76 is known, an absolute position determination can be performed by means of triangulation based on the distance of the light sources 64 in the image of the camera 62. Thus, in the present case the absolute position determination is achieved by triangulation.
Since it is possible that the camera 62 either does not “see” the light sources 64 at all or not in a sufficient number, another position-determining system 100-2 can be added to the position-determining system 100-1 of
The mobile sensor unit 80 of
A Cartesian coordinate system having base vectors X, Y, and Z is shown in
It can be derived from the data delivered by the acceleration sensors 82 how the mobile sensor unit 80 is moved, and has been moved, within space, in particular because the acceleration sensors 82 can also detect motions—in terms of corresponding accelerations—along the base vectors. Hence, if it comes to a situation in which the camera 62 of the first position-determining system 100-1 does no longer “see” the light sources 64, even the absolute position of the mobile sensor unit 80 can be at least calculated, until the light sources 64 return into the field of view of the camera 62, based on the relative position which can be calculated due to the acceleration sensors 82.
With reference to
The separate rays 104 can be reflected by the operator 34 within the working area 30. Reflected rays 106 are detected by the cameras 62-1 and 62-2 and can be evaluated in a manner as described in the above-cited WO application. In this manner first depth information is gained from the curvature of the pattern 102 on the operator 34. Other depth information can be achieved due to stereoscopy so that a relative position of the operator 34 can be calculated. If additional aids such as models of a skeleton are used during the image processing a relative motion of the operator 34 can be calculated almost in real time (e.g., 300 ms), which is sufficient for being used either for motion recognition or motion check. The resolution is sufficiently high, in order to also allow at least an isolated recognition of the motion of the individual hands of the operator 34.
The third position-determining system 100-3 shown in
The fourth position-determining system of
With the fourth position-determining system 100-4 the light sources 64-1 and 64-2 emit isotropic rays 108, which in turn are in the infrared range and are reflected by markers 130, which can be worn by the order-picking person 34 on his/her body. The motions of the order-picking person 34 are detected via the reflected rays and are converted into a computer-readable format so that they can be analyzed and transferred to 3D models (virtual world) generated in the computer. It goes without saying that frequencies other than infrared can also be used.
In
With reference to
In a first step S210 an (order-picking) task is assigned to the operator 34. The order-picking task can comprise a number of sequential manipulation steps such as the picking up of a piece good 40 from a source location, the moving of the piece good 40 to a target location, and the putting of the piece good 40 on the target location.
In a step S212 the task is visually and/or audibly (Pick-by-Voice) communicated to the operator 34. In a step S214 markers 130 are scanned at a scanning rate which can be selected freely. Dependent on whether a passive or an active sensor system is utilized, a marker 130 can also be represented by the operator 34, one or both hands 68, one or both forearms 66, a reflecting web, fixed reference points, a data glove having active sensors, an arm gaiter having active sensors, or the like.
In a step S216 it is checked during the picking or transferring of a piece good 40 whether at least one marker 130 such as the hand 68 of the order-picking person 34 is located within a source area. The source area corresponds to a source location or source volume where the picking up of a to-be-manipulated piece good 40 is to occur. For example, this can be a provision position of the trays 38 in
However, if the marker reaches the target area, preferably within the preset period of time Δt, the corresponding (partial) task is completed (cf. step S222). In another step S224 it can be inquired whether additional (partial) tasks exist. If another task exists, a return to step S212 is possible in step S228. Otherwise, the method ends in step S226. Additionally, the number of pieces can be determined as well, as will be explained below with reference to
With reference to
The method 300 shown in
If the marker(s) have been detected within the source area, in a step S320 it is again inquired whether and when the marker(s) have reached the target area.
At the same time, in a step S326 a determination of the number of pieces can be conducted, which will be described in more detail in the context of
In a step S322 it can be inquired whether additional tasks need to be performed. If additional tasks need to be performed, one returns to step S312 in step S324. Otherwise, the method ends in step S326.
Further, a conventional Pick-by-Light order-picking guidance system is shown in
The flow chart of
If a counting check is to be conducted (cf. inquiry S412), it is inquired in a step S414, at a freely selectable scanning rate, whether the markers 130 are at “rest” during a period of time Δt. During an order-picking process, “rest” means, for example, that the distance between the hands does not change for a longer time because the operator 34 transfers several piece goods 40 simultaneously by laterally enclosing a corresponding group of piece goods, as will be explained in more detail in the context of
If the markers 130 do not keep a fixed relative distance during the preset period of time, the piece goods 40 are likely not being manipulated for the time being, so that the counting check starts from the beginning.
However, if a relative distance is measured over a longer time, this relative distance forms the basis of the counting check in step S416 and is compared with multiples of the dimensions of the to-be-manipulated type of piece goods. If, for example, rectangular piece goods 40 are manipulated, the relative distance can be a multiple of the width, the height, and/or the depth of one piece good 40. Two piece goods (of one type only) can also be grabbed simultaneously so that the distance between the hands corresponds, for example, to the sum of a length and a width. However, since it is known how many piece goods are currently to be manipulated simultaneously, the number of possible combinations is limited and can be compared rapidly.
If the manipulated number of piece goods 40 corresponds to the expected number (cf. step S420), the counting check (S412) can start from the beginning. If the number of to-be-manipulated piece goods 40 is too large to be grabbed at once, the operator 34 can either indicate this, so that the sum of correspondingly more manipulation processes is evaluated, or the order-picking control autonomously recognizes the necessity of dividing the manipulation process.
If the grabbed number does not correspond to the expected number, an error is displayed in a step S422.
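The comparison underlying steps S414 to S422 can be sketched as follows; the tolerance value and the simplifying assumption that the grabbed goods are lined up along single edges are illustrative only and not part of the disclosure above.

    from itertools import combinations_with_replacement

    def distance_matches_count(hand_distance, dims, expected_count, tol=0.01):
        """Check whether the measured distance between the hand markers can be
        explained by 'expected_count' piece goods lined up along any mix of
        their edges (width, height, depth), within a tolerance in metres."""
        for combo in combinations_with_replacement(dims, expected_count):
            if abs(sum(combo) - hand_distance) <= tol:
                return True
        return False

    # Two goods grabbed together, one contributing its width (0.2 m) and one its
    # depth (0.15 m) to the overall span between the hands:
    print(distance_matches_count(0.35, dims=(0.2, 0.1, 0.15), expected_count=2))  # True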
As an alternative to the counting check, a piece good 40 can also be measured, as it will be described in more detail in the context of the
If a piece good 40 is to be measured, it is checked in a step S426, similar to step S414, whether the markers are at “rest” during a (shorter) period of time Δt, i.e. whether they keep an almost constant relative distance.
In this manner the height, width, diagonal, depth, diameter, or the like can be determined in a step S428. Then, in a step S430 the piece good 40 is rotated, and a new side of the piece good 40 is measured in the same manner.
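By way of example only, the measuring sequence of steps S426 to S430 could be captured as follows; the rest tolerance, the rounding and the sample values are assumptions chosen for illustration.

    import statistics

    def measure_side(samples, rest_tolerance=0.005):
        """Accept the hand-to-hand distance as one dimension of the piece good
        if it stayed (almost) constant over the period, otherwise return None."""
        if max(samples) - min(samples) > rest_tolerance:
            return None
        return statistics.mean(samples)

    dimensions = []
    # One list of distance samples per side; between the lists the good is rotated (step S430).
    for side_samples in ([0.201, 0.199, 0.200],
                         [0.149, 0.151, 0.150],
                         [0.100, 0.099, 0.101]):
        d = measure_side(side_samples)
        if d is not None:
            dimensions.append(round(d, 3))

    print(dimensions)  # e.g. [0.2, 0.15, 0.1], i.e. width, depth, height of the piece good 40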
With reference to
The index fingers of the order-picking person are each connected to an active marker 130, for example the mobile sensor unit 80, as shown in
With reference to
The thumbs 184 are also each provided with an additional marker 186, in addition to the markers 130 which are attached to the index fingers 140. In this context, again a mobile sensor unit 80 of
A length L is determined in
The piece good 180 without dimensions, which is shown in
The method of measuring a piece good 180 without dimensions, as shown in the
In
In
If several operators 34 work within the system 10, the markers 130 can be equipped with individualizing features so that an assignment of the marker(s) 130 to the respective operator 34 is possible. In this case also a marker number is stored.
The first data set 552 of the employee MA1 expresses that this employee has already been working for three hours and sixteen minutes in the working cell No. 13, has lifted an overall weight of 1352 kg by about one meter, and has pushed an overall weight of 542.3 kg over about one meter. The marker pair No. 1 is assigned to the employee MA1. The employee MA i has worked sixteen minutes in the working cell No. 12, one hour and twelve minutes in the working cell No. 14, and then again five minutes in the working cell No. 12. In this context, he/she has lifted an overall weight of 637.1 kg (by about one meter) and pushed 213.52 kg over about one meter. The marker pair having the number i is assigned to the employee MA i. Data generated in this manner can be used for manifold purposes (survey of handicapped people, ergonomic surveys, health surveys, anti-theft security, tool-issue surveys, tracking of work and break times, etc.).
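Purely for illustration, such a data set 552 could be stored in a structure like the following; the class and field names are assumptions and not part of the described system.

    from dataclasses import dataclass, field

    @dataclass
    class CellStay:
        cell_no: int
        minutes: int

    @dataclass
    class EmployeeRecord:
        employee_id: str
        marker_pair_no: int
        stays: list = field(default_factory=list)
        lifted_kg: float = 0.0   # overall weight lifted (by about one meter)
        pushed_kg: float = 0.0   # overall weight pushed (over about one meter)

    # Data set corresponding to the employee MA1 described above:
    ma1 = EmployeeRecord("MA1", marker_pair_no=1,
                         stays=[CellStay(cell_no=13, minutes=3 * 60 + 16)],
                         lifted_kg=1352.0, pushed_kg=542.3)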
With reference to
As soon as the markers 130 are detected in step S614, either a log-in sequence can be interrogated (step S616) or an employee-identification number can be retrieved automatically (step S620), thereby logging the employee MA on in the system (order-picking control) at the corresponding cell 31 or in the working area 30. If the employee MA leaves the current cell 31, this is detected by the inquiry of step S622, whereby the employee MA is logged off at the current cell 31 in step S626 so that the employee-cell assignment is closed. As long as the employee 34 stays within the cell 31 (step S624), he/she remains logged in at the current cell 31 and the assignment to this cell 31 is kept. Then, in a step S628 it can be inquired whether the employee MA has logged off, for example by performing a log-out gesture with his/her hands within the current cell 31. If he/she has performed a log-out gesture, the method ends in step S630. Otherwise it is inquired in step S632 whether the employee 34 has moved to an adjacent neighbor cell 31. In this case, the markers 130 of the employee 34 are detected in the neighbor cell 31 so that the employee 34 can be assigned to the new working cell 31 in step S634. Then, it is again inquired in cycles in step S622 whether the employee 34 has left the (new) current cell 31. The order-picking control has knowledge of the relative arrangement of the cells 31. Based on the motions of the employee MA it can be determined between which cells/working areas the employee MA has changed.
Thus, the motions of the employee 34 are detected and evaluated not only within the working area 30, or within one single cell 31, but also in cases where the employee 34 changes between areas 30/cells 31. Preferably the storage and order-picking system 10 comprises a plurality of adjacent cells 31. Of course, the cells 31 can also be arranged remotely from each other. In this manner it is possible to complete tasks extending over several cells 31, or over greater distances, within the storage and order-picking system (“man-to-goods”).
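A rough sketch of the cell assignment of steps S614 to S634 is given below; the stream of detected cells and both callbacks are placeholders introduced only for illustration.

    def track_cells(detected_cells, log_on, log_off):
        """Keep the employee/cell assignment up to date.

        detected_cells -- iterable of cell numbers in which the employee's markers
                          are currently detected (None once the employee has logged out)
        log_on/log_off -- callbacks into the order-picking control
        """
        current = None
        for cell in detected_cells:
            if cell == current:
                continue                 # employee stays in the current cell (cf. step S624)
            if current is not None:
                log_off(current)         # close the employee/cell assignment (cf. step S626)
            if cell is not None:
                log_on(cell)             # assign the employee to the new cell (cf. step S634)
            current = cell

    # Example: cell 12 -> cell 14 -> cell 12, then a log-out gesture (None):
    track_cells([12, 12, 14, 14, 12, None],
                log_on=lambda c: print("log on at cell", c),
                log_off=lambda c: print("log off at cell", c))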
During picking in accordance with the principle “man-to-goods” it can happen that the operator 34 walks through the aisles of a warehouse 12 with an order-picking trolley 142 in order to process multiple orders in parallel (collecting). For this purpose the operator 34 carries a number of order containers 122 on the order-picking trolley 142. Such a situation is shown in the perspective illustration of
In
The motion of the operator 34, or of his/her index fingers 140, is recorded by a camera 62 operated, for example, in the infrared range. Light sources 64, which are not depicted, transmit isotropic infrared rays 108 from the ceiling of the warehouse 12, which are reflected by the markers 130, as indicated by means of the dash-dotted arrows 106. If the operator 34 walks through the warehouse 12, the index fingers 140 describe the motion tracks (trajectories) 146-1 and 146-2 indicated by means of dashed lines. The motion tracks 146 represent points sampled in space at the scanning rate of the camera 62.
As an alternative to the just-described passive motion tracking, active motion tracking can also be performed by using, for example, mobile sensor units 80 as the markers 130. The direction in which the index finger 140 points is indicated by means of a dashed line 150 at the right hand 68 of the operator 34. Also in this case motion tracks 146 can be recorded and evaluated.
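For illustration only, a motion track 146 can be stored as a simple sequence of time-stamped positions; the position callback, the scanning rate and the example trajectory are assumptions.

    def record_track(sample_position, scan_rate_hz, duration_s):
        """Record a motion track as a list of (t, x, y, z) tuples.

        sample_position -- callable mapping a sample time to the marker position (x, y, z)
        """
        track = []
        for i in range(int(duration_s * scan_rate_hz)):
            t = i / scan_rate_hz
            x, y, z = sample_position(t)
            track.append((t, x, y, z))
        return track

    # Illustrative: an index finger moving along the rack direction X at 0.5 m/s,
    # sampled at 30 Hz for two seconds:
    track = record_track(lambda t: (0.5 * t, 1.2, 0.4), scan_rate_hz=30, duration_s=2.0)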
With reference to the illustrations of
In
In
The (static) gesture shown in
It is clear that the calculating unit 26 can evaluate and process both (static) positions and dynamic motion sequences in order to assess a situation (gesture).
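As a purely illustrative sketch of how static positions and dynamic motion sequences could be told apart, the following rule-based check classifies a recorded track; all thresholds and class labels are assumptions and do not represent the actual evaluation performed by the calculating unit 26.

    def classify_gesture(track, still_tolerance=0.02):
        """Classify a track of (t, x, y, z) samples.

        Returns 'static' if the hand barely moved, 'swipe_x' if it mainly moved
        along the rack direction X, otherwise 'unknown'."""
        xs = [p[1] for p in track]
        ys = [p[2] for p in track]
        zs = [p[3] for p in track]
        spans = (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
        if max(spans) < still_tolerance:
            return "static"
        if spans[0] > 2 * max(spans[1], spans[2]):
            return "swipe_x"
        return "unknown"

    print(classify_gesture([(0.0, 0.0, 1.2, 0.4),
                            (0.5, 0.3, 1.21, 0.4),
                            (1.0, 0.6, 1.2, 0.41)]))  # -> "swipe_x"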
In the above description of the figures the orientation of the coordinate system has been chosen in general correspondence with the designations typically used in intralogistics, so that the longitudinal direction of a rack is designated by X, the depth of the rack by Z, and the (vertical) height of the rack by Y. This applies analogously to the system 10.
Further, identical parts and features have been designated by the same reference numerals. The disclosures included in the description can be applied analogously to identical parts and features having the same reference numerals. Position and orientation information (such as “above”, “below”, “lateral”, “longitudinal”, “transversal”, “horizontal”, “vertical”, or the like) refers to the immediately described figure. If the position or orientation is changed, the information is to be applied analogously to the new position and orientation.
It is clear that the gestures mentioned with reference to the
Further, it is clear that the “manipulation” explained above can mean different actions which are performed in the storage and order-picking system. In particular, “manipulation” comprises the performance of an order-picking task, i.e. the picking up, moving and delivering of piece goods from source locations to target locations in accordance with an order. However, it can also mean the measuring of a piece good, i.e. taking, holding and rotating the piece good while the operator's hands are in contact with the to-be-measured piece good.
This is a continuation application of the co-pending international application WO 2012/123033 A1 (PCT/EP2011/054087), filed on Mar. 17, 2011, which is fully incorporated herein by reference.
Parent application: PCT/EP2011/054087, filed Mar. 2011 (US)
Child application: Ser. No. 14/028,727 (US)