The present invention relates to the technical field of passenger transport apparatuses, and more particularly, the present invention relates to a big data analysis and processing system and a big data processing method for a passenger transport apparatus. The passenger transport apparatus herein refers to an automatic escalator and a moving walkway.
At present, the operator of a passenger transport apparatus knows very little about its daily running conditions. The operator is unable to learn the daily running environment, load conditions, use time, down time, etc. of the passenger transport apparatus. The operator therefore dispatches technicians to the site for periodic maintenance even though sometimes no maintenance is actually needed on the site, or else the technical staff must go to the site for maintenance immediately after a client requests a repair.
An objective of the present invention is to solve or at least alleviate the problem existing in the prior art.
The present invention provides a big data analysis and processing system for a passenger transport apparatus, the big data analysis and processing system comprising:
a data collection module, the data collection module comprising:
a sensor assembly, configured to collect image data and/or depth map data; and
an image processing module, configured to process the image data and/or depth map data to acquire a plurality of types of data of the passenger transport apparatus, comprising one or more of device running data, load data, abnormal behavior data and contingency data;
a database, the database gathering and storing the plurality of types of data; and
a statistical analysis unit, the statistical analysis unit performing classification and statistics on the plurality of types of data according to a statistical analysis method, and generating an analysis report. In addition, the present invention further provides a big data analysis and processing method for a passenger transport apparatus.
With reference to the accompanying drawings, the above and other features of the present invention will become apparent, in which:
It should be readily understood that those of ordinary skill in the art may propose a plurality of interchangeable structures and implementation methods according to the technical solution of the present invention without departing from its essential spirit. Therefore, the following specific embodiments and accompanying drawings are merely an exemplary description of the technical solution of the present invention, and should not be deemed to constitute the entirety of the present invention or to limit or restrict the technical solution of the present invention.
The positional terms of up, down, left, right, front, back, top, bottom, etc. which are referred to or possibly referred to in the present description are defined with respect to the constructions shown in the various figures. They are relative concepts and may therefore change according to the positions and use states of the components concerned. Hence, these and other positional terms should not be construed as restrictive terms.
It should be understood that in the present invention, the term “passenger transport apparatus” only refers to an automatic escalator and a moving walkway.
It should be understood that in the present invention, the term “imaging sensor” may refer to various types of 2D image sensors. Any image sensor that is able to capture an image frame comprising pixel grey scale information is applicable here. Certainly, an image sensor that is able to capture an image frame comprising pixel grey scale information and color information (for example, RGB information) is also applicable here.
It should be understood that in the present invention, the term “depth sensing sensor” may be any 1D, 2D or 3D depth sensor or a combination thereof. Such a sensor may operate in the optical, electromagnetic or acoustic spectrum and is able to generate a depth map (also known as a point cloud or occupancy grid) of a corresponding size. Various depth sensing sensor techniques and apparatuses comprise, but are not limited to, structured light measurement, phase shift measurement, time-of-flight measurement, a stereo triangulation apparatus, an optical triangulation apparatus, a light field camera, a coded aperture camera, a computational imaging technique, simultaneous localization and mapping (SLAM), an imaging radar, an imaging sonar, an echo location apparatus, a scanning LIDAR, a flash LIDAR, a passive infra-red (PIR) sensor and a small focal plane array (FPA), or a combination comprising at least one of the aforementioned techniques or apparatuses. Different techniques may comprise active (transmitting and receiving a signal) or passive (only receiving a signal) operation in the electromagnetic or acoustic spectrum (such as the visual and infra-red bands). Using depth sensing may have particular advantages over conventional 2D imaging, and using infra-red sensing may have particular benefits over visible spectrum imaging. Alternatively or in addition, the sensor may be an infra-red sensor having one or more pixel space resolutions, for example, a passive infra-red (PIR) sensor or a small IR focal plane array (FPA).
It should be understood that the 2D imaging sensor (for example, a conventional security camera) and the 1D, 2D or 3D depth sensing sensor differ in both quality and quantity in the advantages they provide. In 2D imaging, the reflected color (a mixture of wavelengths) from the first object in each radial direction of the imager is captured, so the 2D image comprises the combined spectrum of the source illumination and the spectral reflection coefficients of objects in the scene; a 2D image can be interpreted as a picture by a person. In a 1D, 2D or 3D depth sensing sensor, no color (spectrum) information exists; rather, the distance (depth, range) from the sensor to the first reflective object in a radial direction (1D) or in each direction (2D, 3D) is captured. The 1D, 2D and 3D techniques may have an inherent maximum detectable range limitation and may have a spatial resolution relatively lower than that of a typical 2D imager. Compared to conventional 2D imaging, using 1D, 2D or 3D depth sensing may advantageously provide relative immunity to environmental illumination problems, improved operation, better separation of occluded objects and better privacy protection. Using infra-red sensing may have particular benefits over visible spectrum imaging. Note that a 2D image cannot be transformed into a depth map, and a depth map likewise cannot be transformed into a 2D image (artificially allocating continuous colors or grey scales to continuous depths may enable a person to roughly interpret a depth map in a way slightly similar to how a person sees a 2D image, but the result is not an image in the ordinary sense).
It should be understood that the imaging sensor and the depth sensing sensor can be integrated as an RGB-D sensor, which may acquire RGB information and depth (D) information at the same time.
It should be understood that the imaging sensor and/or depth sensing sensor may be integrated with an infra-red thermal imager so as to detect the temperature of a component and other information at the same time.
First referring to
In addition, although
By virtue of the imaging sensor and/or depth sensing sensor and the image processing module 102, the data collection module 100 is able to collect a plurality of types of data. Compared to conventional sensors which may only collect one type of data, a combination of an imaging sensor and/or depth sensing sensor and an image processing module may easily obtain a greater volume of data, so that an automatic escalator operator may have a greater amount of information, and based on classification, statistics and analysis of the large amount of data, the operator can be guided to provide a higher quality of state-based service. In some embodiments, the data collection module 100 comprises an RGB-D sensor integrating an imaging sensor and a depth sensing sensor and provided at the top of an entry end and an exit end of the automatic escalator 500.
In some embodiments, the device running data comprises, but is not limited to: a running speed (comprising a step tread speed, a handrail belt speed, etc.), a braking distance, tautness of the handrail belt, a component temperature, a running time, a down time and so on. For example, the running speed may be calculated by an optical flow-based image analysis method. The handrail belt speed is generally hard to determine by the optical flow method, since there is usually no significant feature on the handrail belt (which is generally black as a whole); in such a case, an extra mark or design detectable by the sensors 111 and 112, for example stripes, graphics or trademarks, may be added to the handrail belt. The detection of the braking distance will be described below. The tautness of the handrail belt may be detected by comparing the speed of the handrail belt with the step tread speed, or by self-comparison: generally, when the handrail belt speed is slower than the step tread speed, slower than its own previous speed, or alternates between slow and normal, the tautness of the handrail belt is low. In addition, the tautness of the handrail belt may also be detected from the physical position of the handrail belt, for example, whether the handrail belt droops or becomes slack. The component temperature may be detected by means of an imaging sensor operating in the infra-red portion of the electromagnetic spectrum. The running time and down time may be determined from the actual speed of the step tread and the expected speed of the step tread.
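The speed-comparison approach to handrail belt tautness described above can be sketched in a few lines. This is a minimal illustration only; the slip tolerance and the function name are assumptions for the example, not values taught by the invention.

```python
def handrail_tautness_ok(handrail_speed, step_speed, tolerance=0.05):
    """Judge handrail belt tautness by comparing its measured speed with
    the step tread speed: a handrail lagging the steps by more than
    `tolerance` (as a fraction of step speed) suggests low tautness."""
    if step_speed <= 0:
        return True  # apparatus stopped; no slip can be observed
    slip = (step_speed - handrail_speed) / step_speed
    return slip <= tolerance
```

The self-comparison variant mentioned above would use the handrail belt's own previous speed in place of `step_speed`.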
In some embodiments, the load data comprises but is not limited to: passenger load data comprising: the number of passengers, the body shape and appearance of a passenger, and a dress color of a passenger; and object load data comprising: an object shape, an object size and an object category. For example, whether a passenger is an elderly person, a disabled person, a child, etc. may be recognized, and a baby stroller, a wheelchair, etc. may be recognized.
In some embodiments, the contingency data comprises but is not limited to: accident data and component failure data. For example, the accident data comprises a passenger falling down, a passenger tumbling, a passenger being stuck, foreign matter involvement, a fire, etc., and the component failure data comprises a comb teeth defect, a damaged or missing step, a damaged cover plate, a damaged indicator light, etc.
The collection mentioned above generally comprises the image processing module comparing the image data and/or depth map data collected by the imaging sensor and/or depth sensing sensor with background image data and/or depth map data pre-stored in the system to judge a target feature, particularly comprising:
in some embodiments, the method for collecting the data mentioned above comprises:
a data frame acquisition step: sensing a monitoring region of the passenger transport apparatus to acquire a data frame;
a background acquisition step: acquiring a background model based on a data frame sensed in a normal state in the monitoring region;
a foreground detection step: performing differential processing on the data frame sensed in real time and the background model to obtain a foreground object;
a foreground feature extraction step: extracting a corresponding foreground object marked feature from the foreground object; and
a state judgement step: judging whether the foreground object belongs to an abnormal group at least based on the foreground object marked feature, and determining that the foreground object belongs to the abnormal group in the case that it is judged as “yes”.
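The steps above amount to a background-subtraction pipeline. The following sketch illustrates the foreground detection and foreground feature extraction steps using plain NumPy; the differencing threshold and the particular marked features chosen (pixel area and centroid) are illustrative assumptions, not limitations of the invention.

```python
import numpy as np

def detect_foreground(frame, background, thresh=25):
    """Foreground detection step: differential processing of the data
    frame sensed in real time against the background model; pixels
    differing by more than `thresh` grey levels form the foreground."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh  # boolean foreground mask

def foreground_features(mask):
    """Foreground feature extraction step: derive simple marked
    features (pixel area and centroid) from the foreground mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return {"area": 0, "centroid": None}
    return {"area": int(ys.size),
            "centroid": (float(ys.mean()), float(xs.mean()))}
```

The state judgement step would then compare such features against the relevant abnormal model.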
In some embodiments, the body shape and appearance of a passenger are defined based on the method mentioned above and based on a data frame sensed in a normal state in a monitoring region of the passenger transport apparatus and/or a skeleton graph model and/or a shading image model and/or a human body self-learning model. The dress color of a passenger may be directly collected by means of an imaging apparatus which is able to collect color information.
In some embodiments, the method further comprises the extracted foreground object marked feature comprising the color and/or size and/or speed of a human body in a foreground object; and judging whether the foreground object belongs to an abnormal group based on the color and/or size and/or speed of the human body in the foreground object.
In some embodiments, the method further comprises, when the color and/or size and/or speed of a human body in a foreground object fall/falls within the abnormal human body model, judging that the foreground object belongs to an abnormal group.
In some embodiments, the method further comprises an abnormal object model generation sub-step: configured to define an abnormal object model based on a data frame sensed in a normal state in a monitoring region of the passenger transport apparatus and/or an object self-learning model.
In some embodiments, the method further comprises the extracted foreground object marked feature further comprising the size and/or shape of an object in a foreground object; and
in some embodiments, the method further comprises judging whether the foreground object belongs to an abnormal group based on the size and/or shape of the object in the foreground object.
In some embodiments, the method further comprises, when the size and/or shape of an object in a foreground object fall/falls within the abnormal object model, judging that the foreground object belongs to an abnormal group.
In some embodiments, the method further comprises, when the foreground object belongs to the abnormal group, further extracting a foreground object marked feature corresponding to the surrounding of the abnormal group from the foreground object; and further judging that the foreground object is in an abnormal state at least based on the foreground object marked feature corresponding to the surrounding of the abnormal group, and determining that the foreground object is in the abnormal state in the case that it is judged as “yes”.
In some embodiments, the method further comprises a pet model generation sub-step, where a pet model is defined based on a data frame sensed in a normal state in a monitoring region of the passenger transport apparatus and/or a pet self-learning model.
In some embodiments, the extracted foreground object marked feature comprises the shape and/or size and/or color of a pet in a foreground object; and whether the foreground object belongs to an abnormal group is judged based on the shape and/or size and/or color of the pet in the foreground object.
In some embodiments, the method further comprises, when the size and/or shape of a pet in a foreground object fall/falls within the pet model, judging that the foreground object belongs to an abnormal group.
In some embodiments, the method further comprises a trajectory generation step: generating a change trajectory with regard to a foreground object marked feature according to the foreground object marked feature extracted in a foreground object corresponding to a plurality of continuous data frames respectively.
In some embodiments, the method further comprises judging whether a foreground object is about to belong to an abnormal group in advance based on a change trajectory of the foreground object marked feature, and determining that the foreground object is about to belong to the abnormal group in the case that it is judged as “yes”.
In some embodiments, the method further comprises determining that the foreground object belongs to an abnormal group when judgement results of at least two continuous data frames are both that the foreground object belongs to an abnormal group.
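The consecutive-frame rule above is a simple temporal filter; a sketch follows, where the function name and the list-of-judgements interface are assumptions for illustration.

```python
def confirm_abnormal(frame_judgements, n=2):
    """Confirm membership of the abnormal group only when at least `n`
    consecutive per-frame judgements are abnormal, suppressing
    single-frame false positives caused by noise."""
    run = 0
    for abnormal in frame_judgements:
        run = run + 1 if abnormal else 0
        if run >= n:
            return True
    return False
```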
In some embodiments, the method further comprises sensing and acquiring data frames within a pre-determined time period after every pre-determined time period for the processing apparatus to perform data processing.
In some embodiments, the method further comprises a warning step: triggering a warning unit to work in the case of determining that the foreground object belongs to an abnormal group.
In some embodiments, a method for detecting an engagement state between a step and a comb plate of a passenger transporter comprises:
sensing an engagement part of at least a step and a comb plate of the passenger transporter by a depth sensing sensor to acquire a depth map;
acquiring a background model based on a depth map sensed when the passenger transporter is in no load and the engagement state is in a normal state;
performing differential processing on the depth map sensed in real time and the background model to obtain a foreground object; and
performing data processing at least based on the foreground object to judge whether the engagement state is in the normal state.
In some embodiments, the method further comprises, sensing an engagement part of the step and the comb plate comprising sensing engagement teeth of the step, and in the step of judging the engagement state, judging the engagement state as an abnormal state when at least one of the engagement teeth is damaged.
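The engagement-state detection above can be illustrated as a depth-map differencing check against the no-load background model. The depth tolerance and the bad-pixel criterion below are placeholder assumptions, not values taught by the invention.

```python
import numpy as np

def comb_engagement_normal(depth_map, background_depth, tol=5.0,
                           max_bad_pixels=0):
    """Compare the live depth map of the step/comb plate engagement
    part with the background model acquired at no load and in a normal
    engagement state; any residual larger than `tol` (e.g. millimetres)
    suggests a damaged engagement tooth or trapped matter."""
    residual = np.abs(depth_map - background_depth)
    bad = int(np.count_nonzero(residual > tol))
    return bad <= max_bad_pixels
```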
In some embodiments, a method for detecting foreign matter involvement, for example, foreign matter involvement in an entry of a handrail, comprises:
sensing at least a part of a handrail entry region of the passenger transporter by means of the imaging sensor and/or depth sensing sensor to acquire a data frame; and
analyzing the data frame to monitor whether the handrail entry of the passenger transporter in running is in a normal state or an abnormal state,
wherein the normal state refers to a state in which no foreign matter is about to enter, and no part of any foreign matter is located in, a dangerous region of the handrail entry, and the abnormal state refers to a state in which a foreign matter is about to enter, or at least a part of a foreign matter is located in, the dangerous region of the handrail entry.
In some embodiments, the method further comprises:
acquiring a background model based on a data frame sensed when the handrail entry of the passenger transporter is in a normal state;
comparing the data frame sensed in real time and the background model to obtain a foreground object;
extracting a corresponding position feature from the foreground object; and
judging whether the foreground object is in a dangerous region of the handrail entry at least based on the position feature, and determining that the handrail entry is in an abnormal state in the case that it is judged as “yes”.
In addition, in some embodiments, a method for detecting that a step tread is missing comprises:
sensing a monitored object of the passenger transporter to acquire a data frame;
acquiring a background model in advance based on a data frame sensed when the monitored object is in a normal state or an abnormal state;
performing differential processing on the data frame sensed in real time and the background model to obtain a foreground object; and
performing data processing at least based on the foreground object to judge whether the monitored object is in the normal state.
In some embodiments, the method further comprises extracting a corresponding foreground feature from the foreground object according to the monitored object,
wherein in the judgement step, whether the monitored object is in a normal state is judged based on the foreground feature.
In some embodiments, the method further comprises, the monitored object comprising a landing plate of the passenger transporter, and in the judgement step, judging that the monitored object is in an abnormal state when the landing plate is shifted or missing.
In some embodiments, the method further comprises, in the step of extracting a foreground feature, the extracted foreground feature comprising the shape, size and position of a foreground object; and in the judgement step, judging whether the landing plate is shifted or missing based on the shape, size and position of the foreground object.
In some embodiments, the method further comprises, the monitored object comprising a security fence used by the passenger transporter in a working condition of maintenance and repair, and in the judgement step, judging that the monitored object is in an abnormal state when the security fence is missing and/or placed inappropriately.
In some embodiments, the abnormal behavior of a passenger comprises, but is not limited to: carrying a pet, a cart, a wheelchair, an over-size object or any other abnormal item; and climbing, travelling in the reverse direction, not holding the handrail, playing with a mobile phone, being in an abnormal position or any other dangerous behavior.
In some embodiments, collecting an abnormal behavior may be implemented by the following method, the method further comprising:
a data frame acquisition step: sensing a monitoring region of the passenger transport apparatus to acquire a data frame;
a background acquisition step: acquiring a background model based on a data frame sensed in a normal state in the monitoring region;
a foreground detection step: performing differential processing on the data frame sensed in real time and the background model to obtain a foreground object;
a foreground feature extraction step: extracting a corresponding foreground object state feature from the foreground object; and
a state judgement step: judging whether the foreground object is in an abnormal state at least based on the foreground object state feature, and determining that the foreground object is in the abnormal state in the case that it is judged as “yes”.
In some embodiments, the method further comprises a scene model generation sub-step: configured to define a dangerous region based on a data frame sensed when a monitoring region of the passenger transport apparatus is in a normal state and/or a scene self-learning model.
In some embodiments, the method further comprises, the extracted foreground object state feature comprising the speed and/or acceleration and/or target intensity of the foreground object, and judging whether the foreground object is in an abnormal state based on the speed and/or acceleration and/or target intensity of the foreground object in the dangerous region.
In some embodiments, the method further comprises judging that a foreground object is in an abnormal state when the speed of the foreground object in the extracted foreground object state feature exceeds a set speed threshold value; and/or
judging that the foreground object is in an abnormal state when the acceleration of the foreground object in the extracted foreground object state feature exceeds a set acceleration threshold value; and/or
judging that the foreground object is in an abnormal state when the target intensity of the foreground object in the extracted foreground object state feature exceeds a set target intensity threshold value.
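The three threshold judgements above combine with a logical OR; a sketch follows, in which the threshold values are placeholders and not values taught by the invention.

```python
def state_abnormal(speed, acceleration, target_intensity,
                   speed_max=2.0, accel_max=1.5, intensity_max=4.0):
    """Judge the foreground object to be in an abnormal state when any
    one of the extracted state features exceeds its set threshold."""
    return (speed > speed_max
            or acceleration > accel_max
            or target_intensity > intensity_max)
```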
In some embodiments, the method further comprises an abnormal human body action model generation sub-step: configured to define an abnormal human body action model based on a data frame sensed when a monitoring region of the passenger transport apparatus is in a normal state and/or a skeleton graph model and/or a shading image model and/or a human body self-learning model.
In some embodiments, the method further comprises, the extracted foreground object state feature comprising a human body action in the foreground object; and
judging whether the foreground object is in an abnormal state based on the human body action in the foreground object.
In some embodiments, the method further comprises, when the human body action in the foreground object falls within the abnormal human body action model, judging that the foreground object is in an abnormal state.
In some embodiments, the method further comprises a scene model generation sub-step: defining a dangerous region based on a data frame sensed when a monitoring region of the passenger transport apparatus is in a normal state and/or a scene self-learning model.
In some embodiments, the method further comprises:
the extracted foreground object state feature comprising a human body position and action in the foreground object, and
judging whether the foreground object is in an abnormal state based on the human body position and the human body action in the foreground object.
In some embodiments, the method further comprises, when the human body position in the foreground object falls within the dangerous region and the human body action in the foreground object falls within the abnormal human body action model, judging that the foreground object is in an abnormal state.
In some embodiments, the method further comprises a trajectory generation step: generating a change trajectory with regard to a foreground object state feature according to the foreground object state feature extracted in a foreground object corresponding to a plurality of continuous data frames respectively.
In some embodiments, the method further comprises judging whether the foreground object is about to enter an abnormal state in advance based on a change trajectory of the foreground object state feature, and determining that the foreground object is about to enter the abnormal state in the case that it is judged as “yes”.
In some embodiments, the method further comprises, the extracted foreground object state feature comprising the speed and/or acceleration and/or target intensity of the foreground object; and
judging whether the foreground object is about to enter an abnormal state based on a change trajectory of the speed and/or acceleration and/or target intensity of the foreground object in the dangerous region.
In some embodiments, the method further comprises judging that the foreground object is about to enter an abnormal state when a speed change trajectory of the foreground object exceeds a set speed trajectory threshold value within a pre-set time period; and/or
judging that the foreground object is about to enter an abnormal state when an acceleration change trajectory of the foreground object exceeds a set acceleration trajectory threshold value within a pre-set time period; and/or
judging that the foreground object is about to enter an abnormal state when a target intensity change trajectory of the foreground object exceeds a set target intensity trajectory threshold value within a pre-set time period.
In some embodiments, the method further comprises:
the foreground object state feature extracted by the foreground feature extraction module comprising a human body action in the foreground object; and
judging whether the foreground object is about to enter an abnormal state based on a change trajectory of the human body action in the foreground object.
In some embodiments, the method further comprises judging that the foreground object is about to enter an abnormal state when a change trajectory of a human body action in the foreground object exceeds a set action trajectory threshold value within a pre-set time period.
In some embodiments, the method further comprises:
the foreground object state feature extracted by the foreground feature extraction module further comprising a human body position in the foreground object; and
judging whether the foreground object is about to enter an abnormal state based on change trajectories of the human body position and the human body action in the foreground object.
In some embodiments, the method further comprises judging that the foreground object is about to enter an abnormal state when a change trajectory of a human body position in the foreground object approaches the dangerous region within a pre-set time period, and a change trajectory of a human body action in the foreground object exceeds a set action trajectory threshold value within the pre-set time period.
In some embodiments, the method further comprises determining that the foreground object is in an abnormal state when judgement results of at least two continuous data frames are both that the foreground object is in the abnormal state.
In some embodiments, the method further comprises sensing and acquiring data frames within a pre-determined time period after every pre-determined time period for the processing apparatus to perform data processing.
In some embodiments, the method further comprises a warning step: triggering the warning unit to work in the case of determining that the foreground object is in an abnormal state.
In one embodiment, the step of acquiring a running speed of an automatic escalator by virtue of the imaging sensor and/or depth sensing sensor and the image processing module 102 comprises:
sensing at least a part of the passenger transporter by means of the imaging sensor and/or depth sensing sensor to acquire sequence frames;
calculating a shift of a corresponding feature point (for example, a step tread or a passenger) in frame coordinates between any two frames in the sequence frames based on an optical flow method;
converting the shift of the feature point in the frame coordinates into a shift in global spatial coordinates by using scales under the imaging sensor or depth sensing sensor;
determining a time amount between any two frames in the sequence frames; and
obtaining by calculation speed information about corresponding time points of any two frames based on the shift of the feature point in the global spatial coordinates and the corresponding time amount, and further combining the same to obtain speed information about the sequence frames.
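The conversion from a frame-coordinate shift to a speed can be sketched as follows. A full optical flow computation is beyond a short example, so `estimate_shift` below is a toy one-dimensional cross-correlation stand-in for that step; the scale factor and timing interface are illustrative assumptions.

```python
import numpy as np

def estimate_shift(prev_row, next_row):
    """Toy stand-in for the optical flow step: the integer pixel shift
    that best aligns two scanlines, found by cross-correlation."""
    n = len(prev_row)
    best, best_score = 0, -np.inf
    for s in range(-n // 2, n // 2 + 1):
        score = float(np.sum(prev_row * np.roll(next_row, -s)))
        if score > best_score:
            best, best_score = s, score
    return best

def speed_from_frames(shift_px, metres_per_px, frame_dt):
    """Convert the shift in frame coordinates into global spatial
    coordinates using the sensor's scale, and divide by the
    inter-frame time to obtain a speed."""
    return shift_px * metres_per_px / frame_dt
```

In practice the shift of a feature point such as a step tread edge would come from an optical flow computation over the acquired sequence frames.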
In one embodiment, acquiring a braking distance of an automatic escalator by virtue of the imaging sensor and/or depth sensing sensor and the image processing module 102 comprises:
in the step of acquiring sequence frames, the imaging sensor and/or depth sensing sensor starting to acquire the sequence frames at the same time when a working condition of braking is triggered;
wherein the speed detection method further comprises the steps of:
obtaining by calculation, based on the speed information, speed change information about sequence frames corresponding to a time period from the time when the working condition of braking is triggered to the time when a step, a passenger or any other detectable point slows down to 0, and
obtaining by calculation, at least based on the speed information, braking distance information corresponding to a time period from the time when the working condition of braking is triggered to the time when the step slows down to 0.
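Given per-frame speed information obtained as above, the braking distance is the integral of speed over the deceleration interval; the trapezoidal-sum sketch below assumes uniformly spaced speed samples starting at the moment the braking condition is triggered.

```python
def braking_distance(speeds, frame_dt):
    """Integrate speed samples (m/s) taken at interval `frame_dt` (s)
    from the moment braking is triggered until the step slows to 0,
    using the trapezoidal rule, to obtain the braking distance (m)."""
    dist = 0.0
    for v0, v1 in zip(speeds, speeds[1:]):
        dist += 0.5 * (v0 + v1) * frame_dt
        if v1 <= 0.0:
            break  # the step has come to rest
    return dist
```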
In one embodiment, the method for acquiring foreign matter involvement, for example, foreign matter involvement in an entry of a handrail, by virtue of the imaging sensor and/or depth sensing sensor and the image processing module 102 comprises:
sensing at least a part of a handrail entry region of the passenger transporter by means of the imaging sensor and/or depth sensing sensor to acquire a data frame; and
analyzing the data frame to monitor whether the handrail entry of the passenger transporter in running is in a normal state or an abnormal state,
wherein the normal state refers to a state in which no foreign matter is about to enter, and no part of any foreign matter is located in, a dangerous region of the handrail entry, and the abnormal state refers to a state in which a foreign matter is about to enter, or at least a part of a foreign matter is located in, the dangerous region of the handrail entry.
In one embodiment, the method for acquiring a passenger being in an abnormal position by virtue of the imaging sensor and/or depth sensing sensor and the image processing module 102 comprises:
an image acquisition step: sensing a monitoring region of the passenger transport apparatus to acquire a data frame;
a background acquisition step: acquiring a background model based on a data frame sensed in a normal state in the monitoring region;
a foreground detection step: performing differential processing on the data frame sensed in real time and the background model to obtain a foreground object;
a foreground feature extraction step: extracting a corresponding foreground object state feature from the foreground object, the extracted foreground object state feature comprising a human body position and a human body action in the foreground object; and
a judgment step: judging whether the foreground object is in an abnormal state based on the human body position and the human body action in the foreground object.
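The background acquisition, foreground detection and feature extraction steps may be sketched as follows, assuming grayscale frames stored as NumPy arrays and a simple per-pixel difference threshold (the threshold value and the function names are illustrative assumptions of this sketch):

```python
import numpy as np

def foreground_mask(frame, background, threshold=25):
    """Foreground detection step: differential processing of the data
    frame sensed in real time against the background model; pixels that
    differ by more than `threshold` are marked as foreground."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return diff > threshold

def foreground_bounding_box(mask):
    """Feature extraction step: the bounding box of the foreground
    object, a simple stand-in for the human body position feature."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no foreground object detected
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

In practice the background model would be built from many frames sensed in the normal state, and the extracted features would also include action cues derived from the foreground object over time.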
It should be understood that the methods for acquiring the various data mentioned above are merely exemplary; those skilled in the art know or may devise more methods for acquiring a plurality of types of data by virtue of the imaging sensor and/or depth sensing sensor 101 and the image processing module 102, and the various methods cannot be completely enumerated herein.
Because of the cooperation of the imaging sensor and/or depth sensing sensor and the image processing module, a large volume of data of a plurality of types may be constantly collected for analysis and research, which is significantly superior to a traditional sensor or a single-purpose 2D imaging sensor. In addition, in some embodiments, the data collection module 100 of the present invention further comprises other sensors, so as to obtain more data. The various data may be fed back constantly to the statistical analysis unit 300 so as to provide an emergency measure in time, and the statistical analysis unit 300 may perform statistics and analysis on the big data based on any known or new technical means, provide a component health report, propose an improvement suggestion, and perform remote diagnosis on a device or dispatch a worker to the site for maintenance and replacement. As will be described in detail below, the statistical analysis unit 300 may further predict a failure based on the statistical analysis. For example, in some embodiments, the statistical analysis unit 300 may find, by statistical analysis of the big data, a relationship between particular data and a particular component failure, for example, a relationship between a component running time and a component failure, a relationship between a component down time and a component failure, a relationship between a braking distance change and a braking apparatus failure, a relationship between a component working temperature curve or the number of cold starts and a component failure, a relationship between the tautness of a handrail belt and a handrail belt failure, a relationship between the quantity of loaded passengers or a passenger load curve and a failure of a passenger-load-relevant component, a relationship between an abnormal passenger behavior and a failed component, and so on.
Based on these relationships, the failure prediction of the statistical analysis unit 300 may be Bayesian reasoning based on a physical model and/or an empirical model whose parameters are calculated from the big data by, for example, the least squares method, wherein the empirical model comprises a component ageing model, the component ageing model being a Weibull distribution, a Rayleigh model, a learned empirical distribution model, a high cycle fatigue (HCF) model, a low cycle fatigue (LCF) model and/or a small-probability-event statistical model, for example, an extreme value statistical model. The statistical analysis unit 300 may further comprise a learning module, the learning module comparing a predicted failure with an actual failure to correct the parameters of the physical model and/or empirical model. Practical maintenance data may be utilized, based on, for example, Bayesian estimation, to continuously update the model, and the physical model and/or empirical model may be corrected when there is a difference between the predicted failure and the actual failure, so as to improve the model.
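As a non-limiting illustration of the Weibull ageing model and of estimating its parameters from collected data (the closed-form averaged estimator below is a simple linearization-based stand-in for a full least squares fit; all function names and values are assumptions of this sketch):

```python
import math

def weibull_failure_probability(t, shape, scale):
    """Weibull ageing model: probability that a component has failed
    by running time t, F(t) = 1 - exp(-(t / scale) ** shape)."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def estimate_scale(times, failure_fracs, shape):
    """Estimate the Weibull scale parameter from observed
    (running time, failed fraction) pairs via the linearization
    -ln(1 - F) = (t / scale) ** shape; a simple averaged estimator
    standing in for a least squares fit over the big data."""
    estimates = [t / (-math.log(1.0 - f)) ** (1.0 / shape)
                 for t, f in zip(times, failure_fracs) if 0.0 < f < 1.0]
    return sum(estimates) / len(estimates)
```

A learning module in the sense described above would re-run such an estimate as new maintenance records arrive and the predicted failures are compared with actual ones.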
In some embodiments, the data collection module 100 is able to collect braking distance data of the passenger transport apparatus; the statistical analysis unit 300 performs statistics and analysis on the braking distance data, and provides a health report periodically; and the statistical analysis unit 300 predicts a failure in a braking apparatus of the passenger transport apparatus based on a change of the braking distance data and a physical model and/or empirical model.
In some embodiments, the data collection module 100 is further able to collect the number of borne people. For example, the data collection module 100 comprises: an imaging sensor and/or depth sensing sensor provided at the top of an entry end and an exit end of the passenger transport apparatus, configured to constantly collect image data and/or depth map data in an entry end region and an exit end region of the passenger transport apparatus; an image processing module, configured to process the image data and/or depth map data to acquire and record the shape and color of a target in the image data and/or depth map data, judge whether the target is a person, and make a record when a person is recognized; and a counter, the counter being set to zero when there is no person on the passenger transport apparatus. When a person who has not been recorded is recognized at the entry end, 1 is added to a boarding counter and time information is recorded; when a person who has been recorded is recognized at the exit end, 1 is added to a drop-off counter and time information is recorded; the difference between the boarding counter and the drop-off counter at any moment is the number of passengers at that time. The statistical analysis unit 300 is able to perform statistics and analysis on a curve of the number of borne people of the passenger transport apparatus in each time period, and determine an important component load curve on this basis. In some embodiments, the statistical analysis unit 300 is further connected with a passenger transport apparatus control system, the control system adjusting the running direction and running speed of the passenger transport apparatus based on the curve of the number of borne people.
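The counter logic described above may be sketched as follows, assuming the image processing module supplies a stable identifier for each recognized person (an assumption of this sketch; the class and method names are likewise illustrative):

```python
class PassengerCounter:
    """Boarding/drop-off counting: the difference between the boarding
    counter and the drop-off counter at any moment is the number of
    passengers on the passenger transport apparatus at that time."""

    def __init__(self):
        self.boarded = 0       # boarding counter
        self.dropped_off = 0   # drop-off counter
        self.records = {}      # person_id -> (board_time, exit_time)

    def person_at_entry(self, person_id, time):
        # a person who has not been recorded is recognized at the entry end
        if person_id not in self.records:
            self.boarded += 1
            self.records[person_id] = (time, None)

    def person_at_exit(self, person_id, time):
        # a person who has been recorded is recognized at the exit end
        rec = self.records.get(person_id)
        if rec is not None and rec[1] is None:
            self.dropped_off += 1
            self.records[person_id] = (rec[0], time)

    @property
    def current_load(self):
        return self.boarded - self.dropped_off
```

Sampling `current_load` over time yields the curve of the number of borne people that the statistical analysis unit 300 works on.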
Now, the big data analysis and processing method for a passenger transport apparatus according to the present invention comprises the following steps:
S1, collecting data;
S2, performing statistics and analysis; and
S3, providing a state-based service.
More specifically, steps S1, S2 and S3 described above are explained in detail with reference to the accompanying drawings.
It should be understood that a limited number of embodiments are adopted to describe the big data analysis and processing system and method for a passenger transport apparatus according to the present invention in detail; however, those skilled in the art may implement more applications of the system and method of the present invention based on the large volume and variety of data collected and the statistical analysis means, and these applications will not depart from the scope of the present invention.
It should be understood that in the applications mentioned above, merely one item of data within one type of data among the plurality of types of data is used for statistical analysis; however, in other embodiments, the analysis may also be performed based on a plurality of items of data within one type of data, or based on a plurality of items across a plurality of types of data. The collection of a large volume of data gives the various statistical analyses a solid foundation, so that better analysis conclusions and wider applications may be obtained.
The advantages of the big data analysis and processing system and method for a passenger transport apparatus of the present invention comprise but are not limited to: an improved service, better security and a reduced cost, more particularly comprising but not limited to: 1. providing a failure reminder in advance; 2. performing component maintenance in advance; 3. sharing data with a client to analyze the source of a failure or an accident; 4. providing a fast-reaction service; and 5. sharing data with a research department to improve the product design.
It should be understood that all the above preferred embodiments are exemplary rather than limiting, and various modifications or variants made on the specific embodiments described above by those skilled in the art within the concept of the present invention shall all fall within the scope of legal protection of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
201610610348.1 | Jul 2016 | CN | national |