This application claims priority based on a Japanese patent application, No. 2008-069474 filed on Mar. 18, 2008, the entire contents of which are incorporated herein by reference.
The present invention relates to a technique of grasping a posture of an object on the basis of outputs from directional sensors for detecting directions in space, the directional sensors being attached to some of the target parts of the object.
As a technique of grasping a posture of a human being or a device, there is a technique described in the following Patent Document 1, for example.
The technique described in Patent Document 1 involves attaching acceleration sensors to body parts of a human being as a target object in order to grasp motions of those body parts by using outputs from the acceleration sensors. First, according to this technique, outputs from the acceleration sensors during each type of motion are subjected to frequency analysis to obtain the output intensity at each frequency, and thus the relation between a motion and the output intensities of the respective frequencies is investigated. Further, according to this technique, a typical pattern of output intensities of frequencies is stored in a dictionary for each type of motion. A motion of a human being is then identified by performing frequency analysis on the actual outputs from the acceleration sensors attached to the body parts of the human being and judging which stored pattern the analysis result corresponds to.
Patent Document 1: Japanese Patent No. 3570163
However, according to the technique described in Patent Document 1, it is difficult to grasp the posture of a human being who remains in a stationary state, such as a state of stooping down or a state of sitting in a chair. Further, preparing the dictionary is very laborious, and a large number of man-hours are required to prepare it so as to grasp many types of motions and combined motions each consisting of many motions.
Noting these problems of the conventional technique, an object of the present invention is to make it possible to grasp the posture of an object whether the object is in motion or in a stationary state, while reducing the man-hours required for preparation such as creation of a dictionary.
To solve the above problems, according to the present invention:
a directional sensor for detecting a direction in space is attached to some target part among a plurality of target parts of a target object;
an output value from the directional sensor is acquired;
posture data indicating a direction of the target part, to which the directional sensor is attached, with reference to reference axes that are directed in previously-determined directions are calculated by using the output value from the directional sensor;
positional data of the target part in space are generated by using previously-stored shape data of the target part and the previously-calculated posture data of the target part, and by obtaining positional data in space of at least two representative points in the target part indicated in the shape data, with reference to a connecting point with another target part connected with the target part in question;
two-dimensional image data indicating the target part are generated by using the positional data in space of the target part and the previously-stored shape data of the target part; and
a two-dimensional image of the target part is outputted on a basis of the two-dimensional image data of the target part.
According to the present invention, it is possible to grasp the posture of a target object whether the target object is in motion or in a stationary state. Further, according to the present invention, by acquiring shape data of a target body part in advance, it is possible to grasp the posture of this target body part. Thus, the man-hours required for preparation (such as creation of a dictionary for grasping postures) can be greatly reduced.
In the following, embodiments of posture grasping system according to the present invention will be described referring to the drawings.
First, a first embodiment of posture grasping system will be described referring to
As shown in
The posture grasping apparatus 100 is a computer comprising: a mouse 101 and a keyboard 102 as input units; a display 103 as an output unit; a storage unit 110 such as a hard disk drive or a memory; a CPU 120 for executing various operations; a memory 131 as a work area for the CPU 120; a communication unit 132 for communicating with the outside; and an I/O interface circuit 133 as an interface circuit for input and output devices.
The communication unit 132 can receive sensor output values from the directional sensors 10 via a radio relay device 20.
The storage unit 110 stores shape data 111 concerning body parts of the worker W, a motion evaluation rule 112 as a rule for evaluating a motion of the worker W, and a motion grasping program P, in advance. In addition, the storage unit 110 stores an OS, a communication program, and so on, although not shown. Further, in the course of execution of the motion grasping program P, the storage unit 110 stores sensor data 113, posture data 114 indicating body parts' directions obtained on the basis of the sensor data 113, positional data 115 indicating positional coordinate values of representative points of the body parts, two-dimensional image data 116 for displaying the body parts on the display 103, motion evaluation data 117 i.e. motion levels of the body parts, and work time data 118 of the worker W.
The CPU 120 functionally comprises (i.e. functions as): a sensor data acquisition unit 121 for acquiring the sensor data from the directional sensors 10 through the communication unit 132; a posture data calculation unit 122 for calculating the posture data that indicate body parts' directions on the basis of the sensor data; a positional data generation unit 123 for generating positional data that indicate positional coordinate values of representative points of the body parts; a two-dimensional image data generation unit 124 for transforming body parts' coordinate data expressed as three-dimensional coordinate values into two-dimensional coordinate values; a motion evaluation data generation unit 125 for generating the motion evaluation data as motion levels of the body parts; an input control unit 127 for input control of the input units 101 and 102; and a display control unit 128 for controlling the display 103. Each of these functional units functions when the CPU 120 executes the motion grasping program P stored in the storage unit 110. In addition, the sensor data acquisition unit 121 functions when the motion grasping program P is executed under the OS and the communication program, and the input control unit 127 and the display control unit 128 function when the motion grasping program P is executed under the OS.
As shown in
The shape data 111, which have been previously stored in the storage unit 110, exist for each body part of the worker. As shown in
In the present embodiment, to express the body parts in a simplified manner, the trunk T1 and the head T2 are each expressed as an isosceles triangle, and the upper arms T3, T6, the forearms T4, T7 and the like are each expressed schematically as a line segment. Here, some points on the outline of each body part are taken as representative points, and the shape of each body part is defined by connecting these representative points with line segments. The shape of each part is thus extremely simplified; however, a more complex shape may be employed to better approximate the actual shape of the worker. For example, the trunk and the head may each be expressed as a three-dimensional shape.
In
As shown in
The representative point data 111a of each body part comprise a body part ID, representative point IDs, and X-, Y-, and Z-coordinate values of each representative point. For example, the representative point data of the trunk comprise the ID “T1” of the trunk, the IDs “P1”, “P2” and “P3” of three representative points of the trunk, and coordinate values of these representative points. And, the representative point data of the right forearm comprise the ID “T4” of the right forearm, the IDs “P9” and “P10” of two representative points of the right forearm, and coordinate values of these representative points.
The outline data 111b of each body part comprise the body part ID, line IDs of lines expressing the outline of the body part, IDs of the initial points of these lines, and IDs of the final points of these lines. For example, as for the trunk, it is shown that the trunk is expressed by three lines L1, L2 and L3, the line L1 having the initial point P1 and the final point P2, the line L2 having the initial point P2 and the final point P3, and the line L3 having the initial point P3 and the final point P1.
In the present embodiment, the coordinate values of a representative point of each body part are expressed in a local coordinate system for each body part. As shown in
Coordinate values of any representative point in each body part are indicated as coordinate values in its local coordinate system in the state of a reference posture. For example, as for the trunk T1, a reference posture is defined as a posture in which the three representative points P1, P2 and P3 are all located in the X1Y1 plane of the local coordinate system X1Y1Z1 and the Y1 coordinate values of the representative points P2 and P3 are the same. The coordinate values of the representative points in this reference posture constitute the representative point data 111a of the trunk T1. As for the forearm T4, a reference posture is defined as a posture in which both of the two representative points P9 and P10 are located on the Z4-axis of the local coordinate system X4Y4Z4, and the coordinate values of the representative points in this reference posture constitute the representative point data 111a of the forearm T4.
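For illustration only, the representative point data 111a and the outline data 111b described above might be held as follows in a Python-like form; the coordinate values and the line ID for the forearm are hypothetical, and this is not the stored format of the actual apparatus.

```python
# Hypothetical in-memory form of the shape data 111 (illustrative values only).
# Each body part keeps its representative points (local coordinates in the
# reference posture) and the outline lines connecting those points.
shape_data = {
    "T1": {  # trunk: an isosceles triangle lying in the X1-Y1 plane
        "points": {"P1": (0.0, 0.0, 0.0),
                   "P2": (-150.0, 400.0, 0.0),
                   "P3": (150.0, 400.0, 0.0)},      # P2 and P3 share the same Y1 value
        "lines": [("L1", "P1", "P2"), ("L2", "P2", "P3"), ("L3", "P3", "P1")],
    },
    "T4": {  # right forearm: a segment on the Z4-axis from P9 to P10
        "points": {"P9": (0.0, 0.0, 0.0),
                   "P10": (0.0, 0.0, 250.0)},
        "lines": [("L4", "P9", "P10")],             # line ID is hypothetical
    },
}
```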
As shown in
In this motion evaluation rule 112, as for the trunk T1 for example, when the angular displacement in the α direction is within the range of 60°-180° or 45°-60°, the motion level is "5" or "3", respectively. When the motion level "5" is displayed, display in "Red" is specified, while when the motion level "3" is displayed, display in "Yellow" is specified. Further, as for the right upper arm T3, the table shows that the motion level is "5" when the displacement magnitude of the representative point P8 in the Y-axis direction is 200 or more, and that its display color is "Red". Here, the displacement magnitude is measured relative to the above-mentioned reference posture of the body part in question.
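For illustration, such a rule could be held as a list of entries of the form (body part, evaluated quantity, value range, motion level, display color); the thresholds below follow the examples just described, while the field names and layout are assumptions of this sketch.

```python
# Hypothetical form of the motion evaluation rule 112 (field names assumed).
motion_evaluation_rule = [
    {"part": "T1", "quantity": "alpha_deg",         "range": (60.0, 180.0),         "level": 5, "color": "Red"},
    {"part": "T1", "quantity": "alpha_deg",         "range": (45.0, 60.0),          "level": 3, "color": "Yellow"},
    {"part": "T3", "quantity": "P8_y_displacement", "range": (200.0, float("inf")), "level": 5, "color": "Red"},
]
```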
Next, referring to flowcharts shown in
When the worker attaches a directional sensor 10 to his body part and turns on the switch 15 (
When the sensor data acquisition unit 121 of the posture grasping apparatus 100 receives the data from the directional sensor 10 through the communication unit 132, the sensor data acquisition unit 121 stores the data as sensor data 113 in the storage unit 110 (S10).
When the sensor data acquisition unit 121 receives data from a plurality of directional sensors 10 attached to a worker, it does not store these data in the storage unit 110 immediately. Only when it is confirmed that data have been received from all the directional sensors 10 attached to the worker does the sensor data acquisition unit 121 start storing the data from the directional sensors 10 in the storage unit 110, from that point of time onward. If data cannot be received from even one of the directional sensors 10 attached to the worker, the sensor data acquisition unit 121 does not store the data that have been received from the other directional sensors 10 at that point of time in the storage unit 110. In other words, the data are stored in the storage unit 110 only when data have been received from all the directional sensors 10 attached to the worker.
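The "store only complete sets" behaviour described above can be sketched as follows; the packet structure, sensor IDs and function name are hypothetical and are not taken from the actual apparatus.

```python
def collect_complete_set(pending, packet, expected_sensor_ids):
    """Buffer one received packet per directional sensor; return the complete
    set only once every sensor attached to the worker has reported, and
    return None (storing nothing) while the set is still incomplete."""
    pending[packet["sensor_id"]] = packet
    if set(pending) >= set(expected_sensor_ids):
        complete = dict(pending)     # this set would be stored as sensor data 113
        pending.clear()
        return complete
    return None
```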
As shown in
Next, the posture data calculation unit 122 of the posture grasping apparatus 100 calculates the respective directions of the body parts on the basis of the data shown in the sensor data 113 for each body part at each time, and stores, as posture data 114, data including the thus-calculated direction data in the storage unit 110 (S20).
As shown in
Now, a method of calculating the data stored in the direction data field 114d from the data stored in the acceleration sensor data field 113d and the magnetic sensor data field 113e of the sensor data 113 will be briefly described.
For example, in the case where the right forearm T4 is made stationary in the reference posture, the acceleration in the direction of the Y-axis is −1G due to gravity, and the accelerations in the directions of the X- and Z-axes are 0. Thus, output from the acceleration sensor is (0, −1G, 0). When the right forearm is tilted in the α direction from this reference posture state, it causes changes in the values from the acceleration sensor 11 in the directions of the Y- and Z-axes. At this time, the value of α in the local coordinate system is obtained from the following equation using the values in the directions of the Y- and Z-axes from the acceleration sensor 11.
α = sin⁻¹(z/√(z² + y²))
Similarly, when the right forearm T4 is tilted in the γ direction from the reference posture, the value of γ in the local coordinate system is obtained from the following equation using the values in the directions of the X- and Y-axes from the acceleration sensor 11.
γ = tan⁻¹(x/y)
Further, when the right forearm T4 is tilted in the β direction from the reference posture, the output values from the acceleration sensor 11 do not change but the values in the Z- and X-axes from the magnetic sensor 12 change. At this time, the value of β in the local coordinate system is obtained from the following equation using the values in the Z- and X-axes from the magnetic sensor 12.
β = sin⁻¹(x/√(x² + z²))
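The three equations above can be sketched together as one Python helper; the axis signs, the conversion to degrees and the omission of guards for degenerate cases (e.g. y = 0 in the γ formula) are assumptions of this sketch.

```python
import math

def posture_angles(acc, mag):
    """Compute (alpha, beta, gamma) in degrees for one body part from a single
    directional sensor sample, following the equations above.
    acc = (x, y, z) from the acceleration sensor 11 (in units of G),
    mag = (x, y, z) from the magnetic sensor 12."""
    ax, ay, az = acc
    mx, my, mz = mag
    alpha = math.degrees(math.asin(az / math.sqrt(az ** 2 + ay ** 2)))  # tilt in the alpha direction
    gamma = math.degrees(math.atan(ax / ay))                            # tilt in the gamma direction (ay != 0 assumed)
    beta = math.degrees(math.asin(mx / math.sqrt(mx ** 2 + mz ** 2)))   # horizontal rotation beta
    return alpha, beta, gamma

# In the reference posture of the right forearm T4, acc = (0, -1G, 0) gives alpha = gamma = 0.
print(posture_angles((0.0, -1.0, 0.0), (0.3, 0.0, 0.9)))
```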
Next, the positional data generation unit 123 of the posture grasping apparatus 100 obtains coordinate values of the representative points of the body parts in the common coordinate system by using the shape data 111 and the posture data 114 stored in the storage unit 110, and stores, as positional data 115, data including the thus-obtained coordinate values in the storage unit 110 (S30).
As shown in
Now, a method of obtaining the coordinate values of a representative point of a body part will be described referring to the flowchart shown in
First, among the posture data 114, the positional data generation unit 123 reads the data in the first record (the record at the first receipt time) of the trunk T1 from the storage unit 110 (S31). Next, the positional data generation unit 123 also reads the shape data 111 of the trunk T1 from the storage unit 110 (S32).
Next, the positional data generation unit 123 rotates the trunk T1 in the local coordinate system according to the posture data, and thereafter, translates the thus-rotated trunk T1 such that the origin P1 of the local coordinate system coincides with the origin of the common coordinate system, and obtains the coordinate values of the representative points of the trunk T1 in the common coordinate system at this point of time. In detail, first, the local coordinate values of the representative points P1, P2 and P3 of the trunk T1 are obtained by rotating the trunk T1 by the angles α, β and γ indicated in the posture data. Next, the coordinate values in the common coordinate system of the origin P1 of the local coordinate system are subtracted from these local coordinate values, to obtain the coordinate values in the common coordinate system (S33). Here, the local coordinate system of the trunk T1 and the common coordinate system coincide as described above, and thus it is not necessary to perform the translation processing in the case of the trunk T1.
Next, the positional data generation unit 123 stores the time data included in the posture data 114 in the time field 115a (
Next, the positional data generation unit 123 judges whether there is a body part whose positional data have not been obtained among the body parts connected to a body part whose positional data have been obtained (S35).
If there is such a body part, the flow returns to the step 31 again, to read the posture data 114 in the first record (the record at the first receipt time) of this body part from the storage unit 110 (S31). Further, the shape data 111 of this body part are also read from the storage unit 110 (S32). Here, it is assumed for example that the shape data and the posture data of the right upper arm T3 connected to the trunk T1 are read.
Next, the positional data generation unit 123 rotates the right upper arm T3 in the local coordinate system according to the posture data, and then translates the thus-rotated right upper arm T3 such that the origin (the representative point) P7 of this local coordinate system coincides with the representative point P3 of the trunk T1 whose position has been already determined in the common coordinate system, to obtain the coordinate values of the representative points of the right upper arm T3 in the common coordinate system at this point of time (S33).
Further, as for the right forearm T4, it is rotated in the local coordinate system according to the posture data, and thereafter the thus-rotated right forearm T4 is translated such that the origin (the representative point) P9 of its local coordinate system coincides with the representative point P8 of the right upper arm T3, whose position has already been determined in the common coordinate system. Then, the coordinate values in the common coordinate system of the representative points of the right forearm T4 at this time point are obtained.
Thereafter, the positional data generation unit 123 performs the processing in the steps 31-36 repeatedly until judging that there is no body part whose positional data have not been obtained among the body parts connected to a body part whose positional data have been obtained (S36). In this way, the coordinate values in the common coordinate system of a body part are obtained starting from the closest body part to the trunk T1.
Then, when the positional data generation unit 123 judges that there is no body part whose positional data have not been obtained among the body parts connected to a body part whose positional data have been obtained (S36), the positional data generation unit 123 judges whether there is a record of the trunk T1 at the next point of time in the posture data 114 (S37). If there is a record of the next point of time, the flow returns to the step 31 again, to obtain the positional data of the body parts at the next point of time. If it is judged that a record of the next time point does not exist, the positional data generation processing (S30) is ended.
Although a detailed method of the coordinate transformation relating to the above-mentioned rotation and translation of a body part in a three-dimensional space need not be described here, such a method is given in detail in, for example, Computer Graphics: A Programming Approach by Steven Harrington (McGraw-Hill, 1984; Japanese translation by KOHRIYAMA, Akira).
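As an illustration of the rotation-and-translation step (S33), the following sketch places the representative points of one body part in the common coordinate system; it assumes that α, β and γ are rotations about the local X-, Y- and Z-axes respectively and a particular composition order, and the numerical values are hypothetical.

```python
import numpy as np

def rot_x(a):   # rotation about the X-axis by angle a (radians)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):   # rotation about the Y-axis
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):   # rotation about the Z-axis
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def place_body_part(local_points, alpha, beta, gamma, attach_point):
    """Rotate the representative points (local coordinates, reference posture)
    by the posture angles, then translate so that the local origin coincides
    with attach_point, i.e. the already-determined common coordinates of the
    connecting point on the parent body part."""
    R = rot_y(beta) @ rot_x(alpha) @ rot_z(gamma)           # composition order assumed
    t = np.asarray(attach_point, dtype=float)
    return {pid: R @ np.asarray(p, dtype=float) + t for pid, p in local_points.items()}

# Example: place the right forearm T4 at the elbow point P8 of the upper arm T3.
forearm = {"P9": (0.0, 0.0, 0.0), "P10": (0.0, 0.0, 250.0)}   # illustrative shape data
elbow_in_common = (120.0, 900.0, 40.0)                        # hypothetical P8 position
placed = place_body_part(forearm, np.radians(30.0), 0.0, 0.0, elbow_in_common)
```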
As shown in the flowchart of
Also, a method of transforming three-dimensional image data into two-dimensional image data is well known and does not require detailed description here. For example, Japanese Patent No. 3056297 describes such a method in detail.
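As a minimal stand-in for that transformation, the following sketch performs a simple orthographic projection of common-coordinate points onto the screen plane; the actual apparatus may instead use the perspective and viewing transforms described in the cited references, and the scale and offset parameters are hypothetical.

```python
def project_to_2d(points_3d, scale=1.0, offset=(0.0, 0.0)):
    """Orthographic projection: drop the Z coordinate, scale, and map to
    screen coordinates (screen Y grows downward)."""
    ox, oy = offset
    return {pid: (ox + scale * x, oy - scale * y)
            for pid, (x, y, z) in points_3d.items()}

# Example: project two representative points of a body part.
print(project_to_2d({"P9": (120.0, 900.0, 40.0), "P10": (130.0, 700.0, 60.0)},
                    scale=0.5, offset=(400.0, 600.0)))
```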
Next, the motion evaluation data generation unit 125 generates the motion evaluation data 117 for each worker and work time data 118 for each worker, and stores the generated data 117 and 118 in the storage unit 110 (S50). The work time data 118 for each worker comprise a work start time and a work finish time for the worker in question. Among the times stored in the time field 113a of the sensor data 113 (
Next, the display control unit 128 displays the above processing results on the display 103 (S60).
As shown in
When an operator wishes to know detailed motion evaluation data of a specific worker, rather than the integrated motion evaluation data 157a of the workers, the operator clicks the motion evaluation data expansion instruction box 155 displayed in front of the name of the worker in question. Then, the motion evaluation data 157b1, 157b2, 157b3 and so on of the body parts of the worker in question are displayed.
As described above, motion evaluation data are generated by the motion evaluation data generation unit 125 in the step 50. The motion evaluation data generation unit 125 first refers to the motion evaluation rule 112 (
Then, in the same way, the motion evaluation data generation unit 125 obtains a motion level at each time for each body part.
Next, the motion evaluation data generation unit 125 generates integrated motion evaluation data for the worker in question. In the integrated motion evaluation data, the highest motion level among the motion levels of the body parts of the worker at each time becomes an integrated motion level, i.e. the integrated motion evaluation data at that time.
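The rule matching and the integration just described can be sketched as follows; the rule layout mirrors the hypothetical structure shown earlier, and the function names and measured-quantity keys are assumptions.

```python
def motion_level(part_id, measurements, rules):
    """Return the motion level of one body part at one time by matching its
    measured quantities (e.g. angular displacements) against the rule entries."""
    level = 0
    for rule in rules:
        if rule["part"] != part_id:
            continue
        value = measurements.get(rule["quantity"])
        lo, hi = rule["range"]
        if value is not None and lo <= value <= hi:
            level = max(level, rule["level"])
    return level

def integrated_motion_level(levels_by_part):
    """The integrated motion level at a time is the highest level among the
    motion levels of the worker's body parts at that time."""
    return max(levels_by_part.values())

rules = [{"part": "T1", "quantity": "alpha_deg", "range": (60.0, 180.0), "level": 5}]
print(motion_level("T1", {"alpha_deg": 75.0}, rules))            # -> 5
print(integrated_motion_level({"T1": 5, "T3": 3, "T4": 0}))      # -> 5
```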
The thus-generated motion evaluation data for the body parts and the thus-generated integrated motion evaluation data are stored as the motion evaluation data 117 of the worker in question in the storage unit 110. The display control unit 128 refers to the motion evaluation data 117 and displays in the output screen 150 the integrated motion evaluation data 157a for each worker and the motion evaluation data 157b1, 157b2, 157b3, and so on for the body parts of a specific worker. Here, in the motion evaluation data 157a, 157b1, and so on, time periods of the level 5 and the level 3 are displayed in the colors stored in the display color field 112e (
When the operator sees the motion evaluation data 157a and so on of each worker and wishes to see the motion of a specific worker at a specific point of time, the operator moves the time specifying mark 159 to the time in question on the time scale 153. Then, a schematic dynamic state screen 151 of the worker after that point of time is displayed in the output screen 150. This dynamic state screen 151 is displayed by the display control unit 128 on the basis of the worker's two-dimensional image data 116 at each time, which are stored in the storage unit 110. In this dynamic state screen 151, each body part of the worker is displayed in the color corresponding to its motion level. In this dynamic state screen 151, the representative point P1 of the trunk T1 of the worker is a fixed point, and the other body parts move and rotate relative to it. Accordingly, when the worker bends and stretches his legs, his loin (P1) does not go down and his feet go up, although his knees bend. If such dynamic display seems strange, the elevation of the feet at the time of the worker's bending and stretching can be resolved by translating the body parts in the generation of the positional data in the step 30 such that the Y coordinate values of the feet become 0.
As described above, in the present embodiment, the posture data are generated on the basis of the sensor data from the directional sensors 10 whether any body part of the worker is in motion or in a stationary state, and schematic image data of the worker are generated on the basis of the posture data. As a result, it is possible to grasp the posture of the body parts of the worker whether the worker is in motion or in a stationary state. Further, in the present embodiment, the posture of the body parts can be grasped by preparing the shape data 111 of the body parts in advance. Thus, the man-hours required for preparation (such as creation of a dictionary for grasping postures) can be greatly reduced.
Further, in the present embodiment, the motion evaluation level of each worker and the motion evaluation level of each body part of a designated worker are displayed at each time. Thus, it is possible to know which worker has a heavy workload at which time, and further which part of that worker bears the heavy workload. Further, since the work start time and the work finish time of each worker are also displayed, it is possible to manage the working hours of the workers.
Next, a second embodiment of posture grasping system will be described referring to
In the first embodiment, the directional sensors 10 are attached to all body parts of a worker, and the posture data and the positional data are obtained on the basis of the sensor data from those directional sensors. On the other hand, in the present embodiment, a directional sensor is not used for some of the body parts of a worker, and the posture data and the positional data of such body parts are estimated on the basis of the sensor data from the directional sensors 10 attached to the other target body parts.
To this end, in the present embodiment, body parts whose movement trails behind the movement of another body part are taken as trailing body parts, and no directional sensor is attached to these trailing body parts. On the other hand, the other body parts are taken as detection target body parts, and directional sensors are attached to the detection target body parts. Further, as shown in
As shown in
For example, when a forearm is lifted, the upper arm also trails the motion of the forearm and is lifted in many cases. In that case, the displacement magnitude of the upper arm is often smaller than the displacement magnitude of the forearm. Thus, here, if the rotation angles in the rotation directions α, β and γ of the forearms T4 and T7 as the detection target body parts are respectively a, b and c, then the rotation angles in the rotation directions α, β and γ of the upper arms T3 and T6 as the trailing body parts are deemed to be a/2, b/2 and c/2 respectively. Further, when a knee is bent, the upper limb and the lower limb often displace by the same angle in the opposite directions to each other. Thus, here, if the rotation angle in the rotation direction α of the lower limbs T10 and T12 as the detection target body parts is a, the rotation angle in the rotation direction α of the upper limbs T9 and T11 as the trailing body parts is deemed to be −a. Further, as for the other rotation directions β and γ, it is substantially impossible because of the knee structure that an upper limb and the lower limb have different rotation angles. Thus, if the rotation angles in the rotation directions β and γ of the lower limbs T10 and T12 as the detection target body parts are respectively b and c, the rotation angles in the rotation directions β and γ of the upper limbs T9 and T11 as the trailing body parts are deemed to be respectively b and c also.
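The trailing relation data 119 and their application might be sketched as follows; the multiplication factors follow the ratios described above, while the data layout and the function name are assumptions.

```python
# Hypothetical form of the trailing relation data 119: for each trailing body
# part, the detection target body part it follows and the factors applied to
# that part's rotation angles in the (alpha, beta, gamma) directions.
trailing_relation = {
    "T3":  {"follows": "T4",  "factors": (0.5, 0.5, 0.5)},    # upper arm trails forearm
    "T6":  {"follows": "T7",  "factors": (0.5, 0.5, 0.5)},
    "T9":  {"follows": "T10", "factors": (-1.0, 1.0, 1.0)},   # upper limb mirrors lower limb in alpha
    "T11": {"follows": "T12", "factors": (-1.0, 1.0, 1.0)},
}

def estimate_trailing_posture(part_id, detected_postures, relations=trailing_relation):
    """Estimate (alpha, beta, gamma) of a trailing body part from the posture
    of the detection target body part it follows."""
    rel = relations[part_id]
    a, b, c = detected_postures[rel["follows"]]
    fa, fb, fc = rel["factors"]
    return (fa * a, fb * b, fc * c)

# Example: a forearm posture of (40, 10, 20) degrees gives an upper arm estimate of (20, 5, 10).
print(estimate_trailing_posture("T3", {"T4": (40.0, 10.0, 20.0)}))
```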
Next, operation of the posture grasping apparatus 100a of the present embodiment will be described.
Similarly to the step 10 in the first embodiment, in the present embodiment also, first the sensor data acquisition unit 121 of the posture grasping apparatus 100a receives data from the directional sensors 10, and stores the received data as the sensor data 113 in the storage unit 110.
Next, using the sensor data 113 stored in the storage unit 110, the posture data calculation unit 122a of the posture grasping apparatus 100a generates the posture data 114 and stores the generated posture data 114 in the storage unit 110. In so doing, as for data concerning the body parts included in the sensor data 113, the posture data calculation unit 122a performs processing similar to that in the step 20 of the first embodiment, to generate posture data of these body parts. Further, as for data of the body parts that are not included in the sensor data 113, i.e. data of the trailing body parts, the posture data calculation unit 122a refers to the trailing relation data 119 stored in the storage unit 110, to generate their posture data.
In detail, in the case where a trailing body part is the upper arm T3, the posture data calculation unit 122a first refers to the trailing relation data 119, to determine the forearm T4 as the detection target body part that is trailed by the posture of the upper arm T3, and obtains the posture data of the forearm T4. Then, the posture data calculation unit 122a refers to the trailing relation data 119 again, to grasp the relation between the posture data of the forearm T4 and the posture data of the upper arm T3, and obtains the posture data of the upper arm T3 on the basis of that relation. Similarly, also in the case where a trailing body part is the upper limb T9, the posture data of the upper limb T9 are obtained on the basis of the trailing relation with the lower limb T10.
When the posture data of all the body parts are obtained in this way, the obtained data are stored as the posture data 114 in the storage unit 110.
Thereafter, the processing in the steps 30-60 is performed similarly to the first embodiment.
As described above, in the present embodiment, it is possible to reduce the number of directional sensors 10 attached to a worker.
Next, a third embodiment of posture grasping system will be described referring to
As shown in
Accordingly, the CPU 120 of the posture grasping apparatus 100b of the present embodiment functionally comprises (i.e. functions as), in addition to the functional units of the CPU 120 of the first embodiment, a second positional data generation unit 129 that generates second positional data indicating the location of the worker and the positions of the body parts by using outputs from the location sensor 30 and the positional data generated by the positional data generation unit 123. Further, the sensor data acquisition unit 121b of the present embodiment acquires outputs from the directional sensors 10 similarly to the sensor data acquisition unit 121 of the first embodiment, and in addition acquires the outputs from the location sensor 30. Further, unlike the two-dimensional image data generation unit 124 of the first embodiment, the two-dimensional image data generation unit 124b of the present embodiment does not use the positional data generated by the positional data generation unit 123, but uses the above-mentioned second positional data to generate the two-dimensional image data. Each of the above-mentioned functional units 121b, 124b and 129 functions when the CPU 120 executes the motion grasping program P, similarly to the other functional units. The storage unit 110 stores the second positional data 141 generated by the second positional data generation unit 129 in the course of execution of the motion grasping program P.
The location sensor 30 of the present embodiment comprises a sensor for detecting a location, in addition to a power supply, a switch and a radio communication unit as in the directional sensor 10 described referring to
Next, operation of the posture grasping apparatus 100b of the present embodiment will be described referring to the flowchart shown in
When the sensor data acquisition unit 121b of the posture grasping apparatus 100b receives data from the directional sensors 10 and the location sensor 30 through the communication unit 132, the sensor data acquisition unit 121b stores the data as the sensor data 113B in the storage unit 110 (S10b).
The sensor data 113B are expressed in the form of a table. As shown in
Although, here, the data from the directional sensors 10 and the data from the location sensor 30 are stored in the same table, a table may instead be provided for each sensor and the sensor data may be stored in the corresponding table. Further, although here the outputs from the location sensor 30 are expressed in an orthogonal coordinate system, the outputs may be expressed in a cylindrical coordinate system, a spherical coordinate system or the like. Further, in the case where a sensor detecting a two-dimensional location is used as the location sensor 30, the column for the Y-axis (the axis in the vertical direction) in the location sensor data field 113f may be omitted. Further, although here the cycle for acquiring data from the directional sensors 10 coincides with the cycle for acquiring data from the location sensor 30, the data acquisition cycles for the sensors 10 and 30 need not coincide. In that case, data from one type of sensor sometimes do not exist while data from the other type of sensor exist. In such a situation, it is preferable to fill in the missing data of the one type of sensor by linear interpolation between the data immediately before and after the gap.
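Such interpolation for one scalar sensor channel might look as follows; the helper is hypothetical and assumes that the first and last samples of the channel exist.

```python
def interpolate_missing(times, values):
    """Fill None entries by linear interpolation between the nearest existing
    samples before and after each gap (scalar values; vector samples would be
    handled per component)."""
    filled = list(values)
    for i, v in enumerate(filled):
        if v is None:
            prev_i = max(j for j in range(i) if filled[j] is not None)
            next_i = min(j for j in range(i + 1, len(values)) if values[j] is not None)
            t = (times[i] - times[prev_i]) / (times[next_i] - times[prev_i])
            filled[i] = filled[prev_i] + t * (values[next_i] - filled[prev_i])
    return filled

# Example: a location sample missing at time 2 is estimated from times 1 and 3.
print(interpolate_missing([1.0, 2.0, 3.0], [10.0, None, 14.0]))   # -> [10.0, 12.0, 14.0]
```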
Next, similarly to the first embodiment, the posture data calculation unit 122 performs calculation processing of the posture data 114 (S20), and the positional data generation unit 123 performs processing of generating the positional data 115 (S30).
Next, the second positional data generation unit 129 generates the above-mentioned second positional data 141 (S35).
In detail, as shown in
When the second positional data generation processing (S35) is finished, the two-dimensional image data generation unit 124b generates the two-dimensional image data 116B by using the second positional data 141 and the shape data 111 (S40b) as described above. The method of generating the two-dimensional image data 116B is the same as the method of generating the two-dimensional image data 116 by using the positional data 115 and the shape data 111 in the first embodiment.
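A possible sketch of the second positional data generation (S35) is given below, under the assumption that the location sensor output is simply added as an offset to the worker-relative positional data; the text does not spell out the exact combination rule, so this is an illustration rather than the disclosed method.

```python
def second_positional_data(positional_data, location):
    """Shift the body-part positional data (common coordinates relative to the
    worker, trunk point P1 at the origin) by the location sensor output so
    that the body parts are placed at the worker's location in the workshop."""
    lx, ly, lz = location
    return {part: {pid: (x + lx, y + ly, z + lz) for pid, (x, y, z) in pts.items()}
            for part, pts in positional_data.items()}
```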
Next, similarly to the first embodiment, motion evaluation data generation processing (S50) is performed and then output processing (S60b) is performed.
In this output processing (S60b), an output screen 150 such as shown in
Here, as shown in
As described above, according to the present embodiment, not only the postures of the body parts of the workers but also the location shifts of the workers and the articles can be grasped, and thus the behavior of a worker can be grasped more effectively in comparison with the first and second embodiments.
In the above embodiments, the motion evaluation data 157a, 157b1, and so on are obtained and displayed. However, these pieces of data may not be displayed, and simply the schematic dynamic screens 151, 161 of the worker may be displayed. Further, the output screen 150 displays the motion evaluation data 157a, 157b1, and so on, the schematic dynamic screen 151 of the worker, and the like. However, it is also possible to install a camera in the workshop and display a video image from the camera synchronously with the dynamic screens 151, 161.
Further, in the above embodiments, after the workers finish their work and the sensor data for the time period from the start to the finish of each worker's work are obtained (S10), the posture data calculation processing (S20), the positional data generation processing (S30) and so on are performed. However, the processing in and after the step 20 may be performed on the basis of already-acquired sensor data before the workers finish their work. Further, here, after the two-dimensional image data are generated for all the body parts over the whole time period (S40), the schematic dynamic screen 151 of a worker at and after a target time is displayed on the condition that the time specifying mark 159 is moved to the target time on the time scale 153 in the output processing (S60). However, it is also possible that, when the time specifying mark 159 is moved to a target time on the time scale 153, two-dimensional image data of the worker from the designated time onward are generated at that point of time and the schematic dynamic screen 151 of the worker is displayed by using the sequentially-generated two-dimensional image data.
Further, in the above embodiments, a sensor having an acceleration sensor 11 and a magnetic sensor 12 is used as the directional sensor 10. However, in the case where a posture change of a target object does not substantially include rotation about the Y-axis, i.e. horizontal rotation, or in the case where it is not necessary to generate posture data taking horizontal rotation into consideration, the magnetic sensor 12 may be omitted and the posture data may be generated by using only the sensor data from the acceleration sensor 11.
Priority application: 2008-069474, filed Mar. 2008, JP, national.
Filing document: PCT/JP2009/055346, filed 3/18/2009, WO, kind 00, 371(c) date 11/19/2010.