This patent application is based on and claims priority pursuant to 35 U.S.C. § 119(a) to Japanese Patent Application No. 2018-064693, filed on Mar. 29, 2018, in the Japan Patent Office, the entire disclosure of which is hereby incorporated by reference herein.
The present invention relates to a behavior recognition apparatus, a behavior recognition method, and a recording medium recording a computer-readable program for the behavior recognition apparatus and the behavior recognition method.
Visualizing the behavior of workers and improving production efficiency are important tasks in offices, factories, and other workplaces. To this end, it is effective to photograph a movie at a workplace with a camera and analyze the obtained movie to recognize and examine the behavior of a worker for a specific standard work (hereinafter referred to as standard work).
Nevertheless, visually analyzing the movies photographed at the workplace by the camera, extracting behaviors for a standard work performed in a certain procedure, measuring the time for each action, and then visualizing these actions requires a huge amount of analysis time and effort. Thus, conventionally, in order to automatically recognize a human behavior, a method has been proposed in which a person is recognized from a photographed movie, a movement trajectory of the recognized person is obtained from the center of gravity of the person, and a specific behavior is recognized from the movement trajectory.
However, in the workplace, the work postures that workers assume when performing a specific behavior are diverse, and it is difficult to recognize persons whose postures change in various ways. In addition, errors in human recognition greatly affect the detection result of the trajectory produced by a moving person. As a result, a large error occurs in the recognition of the specific behavior based on the trajectory of the movement of the person, and it is not practicable to accurately measure the start time and the required time for the specific behavior. As described above, it has conventionally been difficult, when a worker performs the standard work, to recognize the behavior for the standard work merely from the movement of the worker, particularly in a case where the worker carries an object or manipulates an object.
Example embodiments of the present invention include a behavior recognition apparatus, a behavior recognition method, and a recording medium storing a control program for performing the behavior recognition method, each of which: receives an input of a movie obtained by capturing images of a site; recognizes one or more element behaviors constituting a standard work of a worker included in the input movie; and determines a start time and a required time for the standard work from the recognized one or more element behaviors.
A more complete appreciation of the disclosure and many of the attendant advantages and features thereof can be readily obtained and understood from the following detailed description with reference to the accompanying drawings, wherein:
The accompanying drawings are intended to depict embodiments of the present invention and should not be interpreted to limit the scope thereof. The accompanying drawings are not to be considered as drawn to scale unless explicitly noted.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In describing embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this specification is not intended to be limited to the specific terminology so selected and it is to be understood that each specific element includes all technical equivalents that have a similar function, operate in a similar manner, and achieve a similar result.
Embodiments of a behavior recognition apparatus, a behavior recognition method, a computer-readable program for the behavior recognition apparatus and the behavior recognition method, and a recording medium recording the program will be described in detail below with reference to the accompanying drawings. As described below, in the present embodiments, a workplace is photographed by a camera, an element behavior of a standard work performed by a worker at the workplace is automatically recognized from the photographed movie, a behavior for the standard work is recognized from the element behavior, and the time for the standard work is automatically measured. The behavior of workers at a workplace is diverse, and even the behavior for a standard work involves various postures. Therefore, in the present embodiments, by decomposing the behavior for the standard work into a plurality of element behaviors and separately recognizing the decomposed element behaviors, it becomes easier to cope with diverse posture changes of the workers at the work site and to automatically measure the required time for the standard work. In the following description, “movie” includes not only “moving image (also referred to as video) data” but also “image data made up of a plurality of consecutive still images”, for example, image data obtained by executing photographing at a predetermined cycle.
The camera 20 is a photographing device capable of photographing a moving image, such as a video camera; the camera 20 is installed at a workplace to photograph workers working at the workplace and inputs the obtained movie to the recognition processing device 10. An example of the input movie of the workplace is illustrated in
However, with a method of directly recognizing behaviors of workers having many changes in posture, it is sometimes difficult to specify the standard work at the workplace. Therefore, in the present embodiment, the standard work is decomposed into element behaviors as illustrated in
Furthermore, the recognition processing device 10 illustrated in
Here, the action of the behavior recognition processor 12 according to the present embodiment will be described in detail with reference to the block diagram illustrated in
As illustrated in
Next, in order to recognize a workplace standard work, an action to recognize an element behavior is executed. Specifically, the spatiotemporal feature point extraction unit 102 executes a process of selecting sets of N image frames from the input movie and extracting feature points in spacetime (also referred to as spatiotemporal feature points) from each set of N image frames (step S102). First, in step S102, element behaviors as exemplified in
Subsequently, in step S102, the spatiotemporal feature point extraction unit 102 recognizes the defined element behavior. When a worker moves in the workplace, a change point is produced in the spatiotemporal image data illustrated in
Here, a feature point detection method according to the present embodiment will be described. In the present action, the spatiotemporal image data, which is image data in the spacetime made up of the N image frames, is divided into blocks, as illustrated in
In the example illustrated in
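Purely as an illustrative sketch (not part of the original disclosure), the block division described above can be pictured in Python as follows; the array layout (frames, height, width) and the block dimensions m, n, and t are assumptions.

```python
import numpy as np

def divide_into_blocks(volume, m, n, t):
    """Divide a spatiotemporal volume of shape (frames, height, width) into
    non-overlapping blocks of t frames x m rows x n columns.

    Trailing pixels/frames that do not fill a whole block are discarded
    in this sketch.
    """
    frames, height, width = volume.shape
    blocks = []
    for f0 in range(0, frames - t + 1, t):
        for y0 in range(0, height - m + 1, m):
            for x0 in range(0, width - n + 1, n):
                blocks.append(volume[f0:f0 + t, y0:y0 + m, x0:x0 + n])
    return np.stack(blocks)  # shape: (num_blocks, t, m, n)
```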
Subsequently, a method of extracting a block having a large amount of change as a feature point in step S102 will be described. In extracting a feature point from the spatiotemporal image data, the spatiotemporal feature point extraction unit 102 first performs a smoothing process for removing noise in a spatial direction, namely, in an (x, y) direction. In this smoothing process, following expression (1) is used.
[Mathematical Expression 1]
L(x,y,t)=I(x,y,t)*g(x,y) (1)
In expression (1), I(x, y, t) denotes the pixel value of a pixel at (x, y) coordinates in a frame at time t. In addition, g(x, y) denotes a kernel for the smoothing process. The asterisk (*) denotes a convolution process. The smoothing process may be simply a pixel averaging process or may be an existing Gaussian smoothing filtering process.
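For illustration, a minimal Python sketch of expression (1) follows, assuming the spatiotemporal image data is held as a NumPy array of shape (frames, height, width); Gaussian smoothing is used here, though, as noted above, a simple pixel averaging kernel would equally satisfy the description.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_spatial(volume, sigma=1.0):
    """Expression (1): L(x, y, t) = I(x, y, t) * g(x, y).

    Smoothing is applied only along the spatial axes (y, x); the time axis
    is left untouched by passing a zero sigma for axis 0.
    """
    return gaussian_filter(volume.astype(np.float64), sigma=(0, sigma, sigma))
```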
Next, in step S102, the spatiotemporal feature point extraction unit 102 performs a filtering process on the time axis. In this filtering process, a Gabor filtering process using following expression (2) is executed. Here, gev and god denote kernels of a Gabor filter indicated by expressions (3) and (4) to be described later. The asterisk (*) denotes a convolution process. Letters τ and ω denote parameters of the kernels of the Gabor filter.
[Mathematical Expression 2]
R(x,y,t)=(L(x,y,t)*gev)^2+(L(x,y,t)*god)^2 (2)
[Mathematical Expression 3]
gev(t;τ,ω)=−cos(2πtω)e^(−t^2/τ^2) (3)
[Mathematical Expression 4]
god(t;τ,ω)=−sin(2πtω)e^(−t^2/τ^2) (4)
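The temporal Gabor filtering of expressions (2) to (4) can be sketched as follows; the kernel half-length and the default values of τ and ω are assumptions made for illustration only.

```python
import numpy as np
from scipy.ndimage import convolve1d

def gabor_kernels(tau, omega, half_len=None):
    """Expressions (3) and (4): even/odd temporal Gabor kernels."""
    if half_len is None:
        half_len = int(np.ceil(2 * tau))  # assumed truncation of the kernel
    t = np.arange(-half_len, half_len + 1, dtype=np.float64)
    envelope = np.exp(-t**2 / tau**2)
    gev = -np.cos(2 * np.pi * t * omega) * envelope
    god = -np.sin(2 * np.pi * t * omega) * envelope
    return gev, god

def temporal_response(L, tau=2.0, omega=0.25):
    """Expression (2): R = (L * gev)^2 + (L * god)^2 along the time axis."""
    gev, god = gabor_kernels(tau, omega)
    r_ev = convolve1d(L, gev, axis=0)  # axis 0 is the time axis
    r_od = convolve1d(L, god, axis=0)
    return r_ev**2 + r_od**2
```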
Once the filtering process as described above is executed on all the pixels of the spatiotemporal image data illustrated in
Then, as illustrated in following expression (6), when the average value M(x, y, t) in the block is greater than a threshold value Thre_M, the spatiotemporal feature point extraction unit 102 assigns this block as a feature point.
[Mathematical Expression 6]
M(x,y,t)>Thre_M (6)
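Expression (5) is not reproduced in this text; from the description of expression (6), it is taken here to be the average of the response R(x, y, t) within each block. A sketch under that assumption:

```python
import numpy as np

def select_feature_blocks(R, m, n, t, thre_m):
    """Average the response R inside each block (expression (5), assumed)
    and keep blocks whose mean exceeds Thre_M (expression (6)) as feature
    points. Returns the (frame, row, col) origin of every selected block.
    """
    frames, height, width = R.shape
    origins = []
    for f0 in range(0, frames - t + 1, t):
        for y0 in range(0, height - m + 1, m):
            for x0 in range(0, width - n + 1, n):
                mean_val = R[f0:f0 + t, y0:y0 + m, x0:x0 + n].mean()
                if mean_val > thre_m:  # expression (6)
                    origins.append((f0, y0, x0))
    return origins
```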
Subsequently, a method of describing feature points extracted from the spatiotemporal image data as described above will be explained. When a feature point block is extracted from the spatiotemporal image data illustrated in
As a result, in the spatiotemporal image data illustrated in
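The extracted feature-point blocks are later handled as M×N×T×3-dimensional differential vectors (see the dictionary-creation steps below). The following sketch shows one plausible reading, gradients along x, y, and t for every pixel of the block; the exact differential operator is an assumption.

```python
import numpy as np

def block_descriptor(L, origin, m, n, t):
    """Describe one feature-point block as an M x N x T x 3 differential
    vector: the x, y, and t gradients of every pixel in the block,
    concatenated into a single vector.
    """
    f0, y0, x0 = origin
    block = L[f0:f0 + t, y0:y0 + m, x0:x0 + n]
    d_t, d_y, d_x = np.gradient(block)  # derivatives along t, y, x
    return np.stack([d_x, d_y, d_t], axis=-1).ravel()  # length m*n*t*3
```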
Next, prior to the execution of an element behavior recognition process, the element behavior recognition unit 103 creates an element behavior recognition histogram (step S103). In creating the element behavior recognition histogram, the element behavior recognition dictionary input unit 105 first acquires an element behavior recognition dictionary and inputs the acquired element behavior recognition dictionary to the element behavior recognition unit 103. The action of creating the element behavior recognition dictionary will be described later with reference to
Next, in step S103, the element behavior recognition unit 103 obtains a similarity S(T, H) between the feature point histogram T(k) of the test movie and a learning histogram H(k) of learning data using following expression (8).
Then, as indicated by following expression (9), the element behavior recognition unit 103 executes the element behavior recognition process to recognize that the test movie has the same element behavior as the learning data when the similarity S(T, H) between the feature point histogram T(k) of the test movie and the learning histogram H(k) of the learning data is greater than a certain threshold value Thre_S (step S104).
[Mathematical Expression 9]
S(T,H)>Thre_S (9)
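A sketch of steps S103 and S104 follows, assuming the feature-point histogram T(k) counts descriptors assigned to their nearest mean vector Vk; expression (8) is not reproduced in this text, so normalized histogram intersection is substituted here as one common similarity measure.

```python
import numpy as np

def feature_histogram(descriptors, mean_vectors):
    """Step S103: assign each test-movie descriptor to the nearest dictionary
    mean vector Vk and count the assignments per class, giving T(k)."""
    K = len(mean_vectors)
    hist = np.zeros(K)
    for d in descriptors:
        k = int(np.argmin([np.linalg.norm(d - v) for v in mean_vectors]))
        hist[k] += 1
    return hist

def recognize_element_behavior(T_hist, H_hist, thre_s):
    """Step S104: accept when the similarity S(T, H) exceeds Thre_S
    (expression (9)). Normalized histogram intersection stands in for
    expression (8), which is not reproduced in this text."""
    T_n = T_hist / max(T_hist.sum(), 1.0)
    H_n = H_hist / max(H_hist.sum(), 1.0)
    similarity = float(np.minimum(T_n, H_n).sum())
    return similarity > thre_s
```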
Next, the standard work recognition processing unit 104 executes a standard work recognition process (step S105). In the standard work recognition process, the standard work recognition processing unit 104 recognizes a workplace behavior associated with the element behavior recognized in step S104. For example, in the case of the standard work as exemplified in
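For illustration, a minimal sketch of step S105, assuming a hypothetical standard work decomposed into the ordered element behaviors s1, s2, and s3, with each detection given as a (label, start_seconds, end_seconds) tuple:

```python
# Hypothetical decomposition: one standard work = s1 -> s2 -> s3.
STANDARD_WORK = ["s1", "s2", "s3"]

def recognize_standard_work(detections):
    """Report the start time and required time of the standard work when the
    recognized element behaviors contain the expected sequence in order."""
    labels = [d[0] for d in detections]
    n = len(STANDARD_WORK)
    for i in range(len(labels) - n + 1):
        if labels[i:i + n] == STANDARD_WORK:
            start = detections[i][1]
            required = detections[i + n - 1][2] - start
            return start, required
    return None  # standard work not recognized in this movie
```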
Next, an action when an error occurs in the element behavior recognition process indicated in step S104 in
As illustrated in
In addition, when another element behavior of the standard work besides the walking behavior is regarded as a recognition target, the recognition process for that element behavior is also performed (S114). Likewise, when such another behavior is recognized (YES in S114), the element behavior recognition for the standard work is terminated and the recognition result for the standard work is output (S116).
On the other hand, when another element behavior is not recognized (NO in S114), an interval T of the element behaviors recognized in the element behavior recognition process (S104) is compared with a time threshold value Thre_time determined in advance (S115), and the element behavior recognition process (S104) or standard work recognition result output (S116) is executed on the basis of the result of the comparison.
Here, the action in step S115 will be described with reference to an example of the required time for the recognized standard work and element behaviors of the standard work illustrated in
On the other hand, as illustrated in
In addition, as illustrated in
However, in a case where two non-consecutive element behaviors s1 and s3 are recognized and the interval T between the top element behavior s1 and the last element behavior s3 is equal to or longer than the time threshold value Thre_time (NO in S115), the element behaviors s1 and s3 are regarded as the same type of behavior, but the behaviors themselves are regarded as different behaviors, such that the start time and the required time for each behavior are separately counted.
Meanwhile, as illustrated in
Then, as indicated in S116 in
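A simplified sketch of the grouping logic of steps S111 to S116: detections are merged into one behavior while the interval T measured from the first detection of the current group stays below Thre_time, and otherwise counted as a separate behavior whose time is measured independently. The (label, start, end) tuple layout is an assumption.

```python
def group_into_behaviors(detections, thre_time):
    """Group element-behavior detections (label, start, end) into behaviors:
    a detection whose interval T from the first detection of the current
    group reaches Thre_time starts a new, separately timed behavior."""
    behaviors = []
    group = []
    for det in sorted(detections, key=lambda d: d[1]):
        if group and (det[1] - group[0][1]) >= thre_time:
            behaviors.append(group)  # interval T >= Thre_time: new behavior
            group = []
        group.append(det)
    if group:
        behaviors.append(group)
    # One (start time, required time) pair per recognized behavior (S116).
    return [(g[0][1], g[-1][2] - g[0][1]) for g in behaviors]
```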
Next, the action of creating the element behavior recognition dictionary will be described in detail with reference to
As illustrated in
Next, the element behavior recognition dictionary input unit 105 gathers N image frames including the element behavior out of the input workplace learning movie into one piece of learning data and extracts the spatiotemporal feature point from the one piece of learning data (step S202). The method of extracting the spatiotemporal feature point may be the same as the method described above using step S102 in
Next, the element behavior recognition dictionary input unit 105 classifies (clusters) the spatiotemporal feature points extracted from all pieces of the learning data in step S202 (step S203). The element behavior recognition dictionary input unit 105 classifies the learned spatiotemporal feature points using, for example, the K-means clustering method. That is, the M×N×T×3-dimensional differential vectors are classified by the K-means clustering method. The number of resulting classes is assumed to be K. With this classification, the feature points extracted from the learning data are classified into K types, where feature points of the same type have similar features.
Next, the element behavior recognition dictionary input unit 105 averages M×N×T×3-dimensional edge vectors of the feature points of the same type for K types of spatiotemporal feature points to work out K mean vectors Vk (step S204). Each mean vector Vk is a vector serving as a representative of the feature points of the corresponding type.
Next, the element behavior recognition dictionary input unit 105 calculates a total number of blocks of each group for K types of spatiotemporal feature points to work out the learning histogram H(k) (step S205). H(k) denotes the frequency of a feature point k group.
Then, the element behavior recognition dictionary input unit 105 creates an element behavior recognition dictionary in which the mean vectors Vk and the learning histogram H(k) obtained from the learning data are accumulated as element behavior recognition dictionary data (step S206). The element behavior recognition dictionary thus created is input to the element behavior recognition unit 103 (see
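Steps S203 to S206 can be sketched with a plain NumPy K-means, standing in for whatever clustering implementation the embodiment actually uses; the initialization and iteration count are assumptions.

```python
import numpy as np

def build_dictionary(learning_descriptors, K, iters=20, seed=0):
    """Cluster the M*N*T*3-dimensional differential vectors with K-means
    (step S203), keep the K mean vectors Vk (step S204), and count cluster
    members to form the learning histogram H(k) (step S205)."""
    X = np.asarray(learning_descriptors, dtype=np.float64)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]  # initial Vk
    for _ in range(iters):
        # Assign each descriptor to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its members (step S204).
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    H = np.bincount(labels, minlength=K)  # learning histogram H(k), step S205
    return centers, H                     # dictionary data accumulated in S206
```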
Next, the camera 20 (see
The CCD 203 converts an optical image formed on an imaging surface into an electrical signal and outputs the converted electrical signal as analog image data. A noise component is removed from the image information output from the CCD 203 by a correlated double sampling (CDS) circuit 204; the image information after removal of the noise component is converted into a digital value by an analog-to-digital (A/D) converter 205 and then output to an image processing circuit 208.
The image processing circuit 208 performs various types of image processes such as a YCrCb conversion process, white balance control process, contrast correction process, edge enhancement process, and color conversion process, using a synchronous dynamic random access memory (SDRAM) 212 that provisionally retains the image data. The white balance process is an image process for adjusting the color density of the image information, and the contrast correction process is an image process for adjusting the contrast of the image information. The edge enhancement process is an image process for adjusting the sharpness of the image information, and the color conversion process is an image process for adjusting the color tone of the image information. In addition, the image processing circuit 208 displays the image information on which the signal processes and the image processes have been carried out on a liquid crystal display 216 (hereinafter abbreviated as LCD 216).
Furthermore, the image information on which the signal processes and the image processes have been carried out is recorded in a memory card 214 via a compressor/decompressor 213. The compressor/decompressor 213 is a circuit that compresses the image information output from the image processing circuit 208 and outputs it to the memory card 214, and also decompresses the image information read out from the memory card 214 and outputs it to the image processing circuit 208, according to an instruction acquired from an operation device 215.
A central processing unit (CPU) 209 controls the timing for the CCD 203, the CDS circuit 204, and the A/D converter 205 via a timing signal generator 207 that generates a timing signal. Furthermore, the CPU 209 also controls the image processing circuit 208, the compressor/decompressor 213, and the memory card 214.
In an image pickup apparatus, the CPU 209 performs various types of arithmetic operation processes in accordance with a program, and is built with a read-only memory (ROM) 211 that retains the program and the like, and a random access memory (RAM) 210, which is a freely readable and writable memory having a work area used in the course of various types of processes and a retention area for various types of data and the like. These built-in constituents are interconnected by a bus line.
Then, the output of the camera 20 described above is input to the behavior recognition processor 12 via the interface 11 of the recognition processing device 10 illustrated in
As described thus far, in the present embodiment, the standard work of the worker having a certain procedure is recognized from the movie obtained by photographing the workplace. The standard work is decomposed into a plurality of element behaviors, and the standard work is recognized through element behavior recognition. The time for each element behavior of the recognized standard work is measured and the work time for the entire standard work is calculated. A plurality of image frames is input and spatiotemporal feature points are extracted from these images. The feature amount of the element behavior of the standard work is obtained from the extracted feature points and the element behavior of the standard work is recognized. The standard work performed by the worker at the workplace is recognized according to the recognized element behaviors. With such a configuration, it becomes practicable to recognize the standard work performed by the worker at the workplace and to measure the work time for the standard work. As a result, it becomes feasible to implement a behavior recognition apparatus, a behavior recognition method, a computer-readable program for the behavior recognition apparatus and the behavior recognition method, and a recording medium recording the program, which are capable of coping with diverse posture changes of the workers at the work site and automatically measuring the required time for the standard work.
The workplace standard work recognition program executed by the recognition apparatus of the present embodiment has a module configuration including the workplace standard work recognition function described above and, in actual hardware of the recognition processing device 10 in
The behavior recognition apparatus according to the present embodiment includes a control device such as a CPU, a storage device such as a read only memory (ROM) and a RAM, an external storage device such as a hard disk drive (HDD) or a compact disc (CD) drive device, a display device such as a display monitor device, and an input device such as a keyboard and a mouse, which form a hardware configuration using an ordinary computer.
Furthermore, the program executed by the behavior recognition apparatus according to the present embodiment is provided as a file in an installable format or executable format recorded in a computer readable recording medium, such as a CD-ROM, a flexible disk (FD), a CD-R, or a digital versatile disk (DVD).
The program executed by the behavior recognition apparatus of the present embodiment may be configured so as to be retained on a computer connected to a network such as the Internet and provided by being downloaded by way of the network. The program executed by the behavior recognition apparatus of the present embodiment may be configured so as to be provided or distributed by way of a network such as the Internet.
The program executed by the behavior recognition apparatus according to the present embodiment may be configured so as to be provided by being incorporated in advance in a ROM or the like.
The program executed by the behavior recognition apparatus according to the present embodiment has a module configuration including the above-described respective units (the workplace photographing/movie input unit 101, the spatiotemporal feature point extraction unit 102, the element behavior recognition unit 103, the standard work recognition processing unit 104, the element behavior recognition dictionary input unit 105, and the standard work recognition result output unit 106). In actual hardware, when a CPU (processor) reads out the program from the above-mentioned storage medium to execute, the above-described respective units are loaded on the main storage device, such that the workplace photographing/movie input unit 101, the spatiotemporal feature point extraction unit 102, the element behavior recognition unit 103, the standard work recognition processing unit 104, the element behavior recognition dictionary input unit 105, and the standard work recognition result output unit 106 are generated on the main storage device.
The above-described embodiments are illustrative and do not limit the present invention. Thus, numerous additional modifications and variations are possible in light of the above teachings. For example, elements and/or features of different illustrative embodiments may be combined with each other and/or substituted for each other within the scope of the present invention.
Any one of the above-described operations may be performed in various other ways, for example, in an order different from the one described above.
Each of the functions of the described embodiments may be implemented by one or more processing circuits or circuitry. Processing circuitry includes a programmed processor, as a processor includes circuitry. A processing circuit also includes devices such as an application specific integrated circuit (ASIC), digital signal processor (DSP), field programmable gate array (FPGA), and conventional circuit components arranged to perform the recited functions.