The present invention belongs to the interdisciplinary field of space technology and pattern recognition, and more particularly to an attitude estimation method and system for an on-orbit three-dimensional space object, which is applicable to satellites, spacecraft, and the like.
A large quantity of space objects such as communication satellites and resource satellites launched around the world can be used in application scenarios such as network communication, remote sensing, and geodesy. For ground-based optoelectronic observation of these space objects, it is essential in this type of system to analyze and judge the attitudes thereof. Because the spatial resolution of a ground-based telescope system is limited and the atmospheric environment introduces random interference into long-distance optical imaging, a blurred object boundary easily occurs in an image acquired by a ground-based sensor. When the boundary of an imaged object is blurred, the accuracy of conventional attitude estimation and three-dimensional reconstruction algorithms based on feature point matching usually decreases rapidly as the blurring level of the object increases. Attitude estimation is to calculate, from a projection image of an object acquired in a two-dimensional camera coordinate system, a pitching angle α and a yaw angle β of the object in a three-dimensional object coordinate system, where a pair of angle values (α, β) corresponds to one attitude. The accuracy of attitude estimation is highly significant for analysis of component dimensions, relative position relationships of components of space objects, and functional attributes of the space objects. Therefore, it is necessary to carry out research on a robust attitude estimation algorithm under a condition of ground-based long-distance optical imaging.
Scholars around the world have conducted detailed research on attitude estimation algorithms for space objects under this type of imaging and have obtained related results. For example, "Method of Measuring Attitude Based on Inclined Angle of Segment Between Feature Points" by Zhao Rujin, Zhang Qiheng, and Xu Zhiyong, published in ACTA PHOTONICA SINICA (February 2010, Vol. 39, No. 2), studies an iterative solution method for the three-dimensional attitude of an object based on inclination angle information between object feature points. The method is applicable to solving an object attitude under conditions of a long-distance weak-perspective imaging object and unknown camera intrinsic parameters. However, the precision of the algorithm severely depends on the precision of the extracted edges, straight lines, and corner points. When the iteration initial value deviates from the actual attitude by a relatively large error, the algorithm requires a relatively large number of iterations, so that the computing quantity is large, and the iterations may fail to converge. In ground-based long-distance optical imaging, an object boundary is easily blurred, and the positioning precision of a feature point is affected; therefore, the precision of the algorithm is undesirable. "Mono-view image attitude determination method based on proportions of feature points of object" by Wang Kunpeng, Zhang Xiaohu, and Yu Qifeng, published in Journal of Applied Optics (November 2009, Vol. 30, No. 6), proposes a mono-view attitude determination method for a recorded live image, in which an object attitude parameter is obtained by iteratively solving a system of nonlinear equations using proportion information of coordinates of object feature points and the position and attitude parameter relationships between an object imaging model and a coordinate system. The algorithm has high solving precision and desirable robustness; however, marking points on the object need to be known in advance, so that the algorithm is not suitable for attitude solving of non-cooperative objects and unmarked objects, and therefore has undesirable adaptability. In "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography" by FISCHLER M A and BOLLES R C (Communications of the ACM, 1981, 24(6): 381-395), a large quantity of point pairs are extracted on an object and a projection image of the object, and consistent cross validation is used to select the fewest feature points to perform three-dimensional reconstruction of an attitude. The algorithm needs to extract a large quantity of feature point pairs and has a large computing quantity, and when the feature point pairs contain a matching error, the algorithm has a great error. The foregoing research results all propose respective solutions for special cases of this type of problem, and each solution has its own algorithm characteristic. However, the algorithms all have problems such as a large computing quantity, undesirable precision, or low adaptability.
To resolve problems of a large computing quantity, undesirable precision, or low adaptability of the existing methods, the present invention provides an attitude estimation method and system for an on-orbit three-dimensional space object, in which three-dimensional space attitude information of an object can be effectively estimated from a two-dimensional image of the space object, precision is high, a computing quantity is small, and adaptability is high.
An attitude estimation method for an on-orbit three-dimensional space object includes an offline feature library construction step and an online attitude estimation step, where
the offline feature library construction step specifically includes:
(A1) acquiring, according to a space object three-dimensional model, multi-viewpoint characteristic views of the object for characterizing various attitudes of the space object; and
(A2) extracting geometrical features from each space object multi-viewpoint characteristic view to form a geometrical feature library, where the geometrical features include an object main body height-width ratio Ti,1, an object longitudinal symmetry Ti,2, an object horizontal symmetry Ti,3, and an object main-axis inclination angle Ti,4, where the object main body height-width ratio Ti,1 refers to a height-width ratio of a minimum bounding rectangle of the object; the object longitudinal symmetry Ti,2 refers to a ratio of an area of the upper-half portion of the object to an area of the lower-half portion of the object within a rectangular region enclosed by the minimum bounding rectangle of the object; the object horizontal symmetry Ti,3 refers to a ratio of an area of the left-half portion of the object to an area of the right-half portion of the object within the rectangular region enclosed by the minimum bounding rectangle of the object; and the object main-axis inclination angle Ti,4 refers to an included angle between an object cylinder-body main axis and a view horizontal direction of a characteristic view; and
the online attitude estimation step specifically includes:
(B1) preprocessing an on-orbit space object image to be tested;
(B2) extracting features from the image to be tested after preprocessing, where the features are the same as the features extracted in Step (A2); and
(B3) matching the features extracted from the image to be tested in the geometrical feature library, where a space object attitude characterized by a characteristic view corresponding to a matching result is an object attitude in the image to be tested.
Furthermore, a manner of extracting the feature, the object main body height-width ratio Ti,1 includes:
(A2.1.1) obtaining a threshold Ti by using a threshold criterion of a maximum between-cluster variance for a characteristic view Fi, setting a pixel gray value fi(x, y) greater than the threshold Ti in the characteristic view Fi as 255, and setting a pixel gray value fi(x, y) less than or equal to the threshold Ti as zero, thereby obtaining a binary image Gi, where Gi is a pixel matrix whose width is n and height is m, and gi(x, y) is a pixel gray value at a point (x,y) in Gi;
(A2.1.2) scanning the binary image Gi in an order from top to bottom and from left to right, if a current point pixel value gi(x, y) is equal to 255, recording a current pixel horizontal coordinate x=Topj, and a vertical coordinate y=Topi, and stopping scanning;
(A2.1.3) scanning the binary image Gi in an order from bottom to top and from left to right, if a current point pixel value gi(x, y) is equal to 255, recording a current pixel horizontal coordinate x=Bntj, and a vertical coordinate y=Bnti, and stopping scanning;
(A2.1.4) scanning the binary image Gi in an order from left to right and from top to bottom, if a current point pixel value gi(x, y) is equal to 255, recording a current pixel horizontal coordinate x=Leftj, and a vertical coordinate y=Lefti, and stopping scanning;
(A2.1.5) scanning the binary image Gi in an order from right to left and from top to bottom, if a current point pixel value gi(x, y) is equal to 255, recording a current pixel horizontal coordinate x=Rightj, and a vertical coordinate y=Righti, and stopping scanning; and
(A2.1.6) defining the object main body height-width ratio of the characteristic view Fi as Ti,1=Hi/Wi, where Hi=|Topi−Bnti|, Wi=|Leftj−Rightj|, and the symbol |V| represents an absolute value of the variable V.
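As a non-limiting illustration of sub-steps (A2.1.1) to (A2.1.6), the following Python sketch binarizes a characteristic view with a maximum between-cluster variance (Otsu) threshold and derives the main body height-width ratio from the object bounding coordinates. It assumes the view is an 8-bit grayscale numpy array; all function and variable names are illustrative rather than taken from the present description.

```python
import numpy as np

def otsu_threshold(view):
    """Threshold chosen by the maximum between-cluster variance criterion (A2.1.1)."""
    hist = np.bincount(view.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

def height_width_ratio(view):
    t = otsu_threshold(view)
    binary = np.where(view > t, 255, 0)        # binary image Gi
    ys, xs = np.nonzero(binary == 255)         # coordinates of object pixels
    top, bottom = ys.min(), ys.max()           # Topi, Bnti (first/last object rows)
    left, right = xs.min(), xs.max()           # Leftj, Rightj (first/last object columns)
    h = abs(int(top) - int(bottom))            # Hi
    w = abs(int(left) - int(right))            # Wi
    return binary, (top, bottom, left, right), h / w   # Ti,1 = Hi / Wi
```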
Furthermore, a manner of extracting the feature, the object longitudinal symmetry Ti,2 includes:
(A2.2.1) calculating a horizontal coordinate Cix=└(Leftj+Rightj)/2┘ and a vertical coordinate Ciy=└(Topi+Bnti)/2┘ of a central point of the characteristic view Fi, where the symbol └V┘ represents taking an integral part for the variable V;
(A2.2.2) counting the number of pixel points whose gray value is 255 within a region where 1≤horizontal coordinate x≤n and 1≤vertical coordinate y≤Ciy in the binary image Gi, that is, the area STi of the upper-half portion of the object of the characteristic view Fi;
(A2.2.3) counting the number of pixel points whose gray value is 255 within a region where 1≤horizontal coordinate x≤n and Ciy+1≤vertical coordinate y≤m in the binary image Gi, that is, the area SDi of the lower-half portion of the object of the characteristic view Fi; and
(A2.2.4) calculating the object longitudinal symmetry Ti,2=STi/SDi of the characteristic view Fi.
Furthermore, a manner of extracting the feature, the object horizontal symmetry Ti,3 includes:
(A2.3.1) counting the number of pixel points whose gray value is 255 within a region where 1≤horizontal coordinate x≤Cix and 1≤vertical coordinate y≤m in the binary image Gi, that is, the area SLi of the left-half portion of the object of the characteristic view Fi;
(A2.3.2) counting the number of pixel points whose gray value is 255 within a region where Cix+1≤horizontal coordinate x≤n and 1≤vertical coordinate y≤m in the binary image Gi, that is, the area SRi of the right-half portion of the object of the characteristic view Fi; and
(A2.3.3) calculating the object horizontal symmetry Ti,3=SLi/SRi of the characteristic view Fi.
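Continuing the sketch given above, the longitudinal and horizontal symmetries of sub-steps (A2.2) and (A2.3) can be computed as pixel-count ratios. The boundary-row handling below follows the 1-based convention of the description only approximately (Python indexing is 0-based), and the names remain illustrative.

```python
import numpy as np

def symmetries(binary, top, bottom, left, right):
    cy = (top + bottom) // 2                   # Ciy (integer part of the midpoint)
    cx = (left + right) // 2                   # Cix
    obj = (binary == 255)
    st = obj[:cy + 1, :].sum()                 # upper-half area STi
    sd = obj[cy + 1:, :].sum()                 # lower-half area SDi
    sl = obj[:, :cx + 1].sum()                 # left-half area SLi
    sr = obj[:, cx + 1:].sum()                 # right-half area SRi
    return st / sd, sl / sr                    # Ti,2 and Ti,3
```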
Furthermore, a manner of extracting the feature, the object main-axis inclination angle Ti,4 includes:
(A2.4.1) calculating a horizontal coordinate xi0 and a vertical coordinate yi0 of a gravity center of the binary image Gi corresponding to the characteristic view Fi: xi0=mi(1,0)/mi(0,0) and yi0=mi(0,1)/mi(0,0), where in the formula, mi(k,j)=ΣxΣy x^k·y^j·gi(x,y) is an origin moment of the binary image Gi, the summations are taken over 1≤x≤n and 1≤y≤m, k=0, 1, and j=0, 1;
(A2.4.2) calculating a p+qth central moment μi(p,q) corresponding to the binary image Gi corresponding to the characteristic view Fi: μi(p,q)=ΣxΣy (x−xi0)^p·(y−yi0)^q·gi(x,y), where the summations are taken over 1≤x≤n and 1≤y≤m, p=0, 1, 2, and q=0, 1, 2;
(A2.4.3) constructing a real symmetrical matrix Mat whose first row is (μi(2,0), μi(1,1)) and whose second row is (μi(1,1), μi(0,2)), and calculating feature values V1 and V2 of the matrix Mat and feature vectors (Vx1, Vy1) and (Vx2, Vy2) corresponding to the feature values V1 and V2, respectively; and
(A2.4.4) calculating the object main-axis inclination angle Ti,4 of the characteristic view Fi: Ti,4=atan2(Vy, Vx)×180/π, where in the formula, (Vx, Vy) is the feature vector corresponding to the larger of the feature values V1 and V2, the symbol π represents a ratio of the circumference of a circle to the diameter thereof, and the symbol atan2 represents the arctangent function.
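The main-axis inclination angle of sub-step (A2.4) can be sketched as follows. Because the gray value of object pixels is the constant 255, that constant factor scales the moment matrix uniformly and cancels in the angle, so the moments are computed here from pixel coordinates only; the eigenvector attached to the larger feature value is taken as the cylinder-body main axis. This is a hedged illustration of the computation under those assumptions, not a verbatim implementation of the claimed step.

```python
import numpy as np

def main_axis_angle(binary):
    ys, xs = np.nonzero(binary == 255)
    x0, y0 = xs.mean(), ys.mean()              # gravity center (xi0, yi0)
    mu20 = ((xs - x0) ** 2).sum()              # central moment mu_i(2,0) / 255
    mu02 = ((ys - y0) ** 2).sum()              # central moment mu_i(0,2) / 255
    mu11 = ((xs - x0) * (ys - y0)).sum()       # central moment mu_i(1,1) / 255
    mat = np.array([[mu20, mu11],
                    [mu11, mu02]])             # real symmetrical matrix Mat
    vals, vecs = np.linalg.eigh(mat)           # feature values and feature vectors
    vx, vy = vecs[:, np.argmax(vals)]          # vector of the larger feature value
    angle = np.degrees(np.arctan2(vy, vx))     # atan2(Vy, Vx) * 180 / pi
    return angle % 180.0                       # fold into the 0 to 180 degree range
```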
Furthermore, normalization processing is further performed on the geometrical feature library constructed in Step (A2), and normalization processing is performed on the features extracted from the image to be tested in Step (B2).
Furthermore, a specific implementation manner of the acquiring, according to a space object three-dimensional model, multi-viewpoint characteristic views of the object for characterizing various attitudes of the object in Step (A1) includes:
dividing a Gaussian observation sphere into K two-dimensional planes at an angle interval of γ for the pitching angle α and at an angle interval of γ for the yaw angle β, where α=−180° to 0°, β=−180° to 180°, and K=360×180/γ²; and
placing the space object three-dimensional model OT at the spherical center of the Gaussian observation sphere, and performing orthographic projection of the three-dimensional model OT from the spherical center respectively onto the K two-dimensional planes, to obtain K multi-viewpoint characteristic views Fi of the three-dimensional template object in total, where each characteristic view Fi is a pixel matrix whose width is n and height is m, fi(x, y) is a pixel gray value at a point (x,y) in Fi, 1≤horizontal coordinate x≤n, 1≤vertical coordinate y≤m, and i=1, 2, . . . , and K.
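For illustration only, the K viewing directions on the Gaussian observation sphere can be enumerated as below. The step γ=5° is an assumption that is consistent with the 2592 characteristic views used in the embodiment later in this description; the rendering of each orthographic projection from the three-dimensional model is outside this sketch.

```python
gamma = 5  # assumed angular interval in degrees
attitudes = [(alpha, beta)
             for alpha in range(-180, 0, gamma)      # pitching angle alpha
             for beta in range(-180, 180, gamma)]    # yaw angle beta
K = len(attitudes)                                   # 360 * 180 / gamma**2 = 2592
```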
Furthermore, in Step (B1), noise suppression is first performed on the image to be tested by using non-local means filtering, and then deblurring is performed by using a maximum likelihood estimation algorithm.
Furthermore, a specific implementation manner of (B3) includes:
(B3.1) traversing the entire geometrical feature library SMF, and calculating Euclidean distances, represented as D1, . . . , and DK, between four geometrical features {SG1,SG2,SG3,SG4} of the image to be tested and each row of vectors in the geometrical feature library SMF, where K is a quantity of the multi-viewpoint characteristic views of the object; and
(B3.2) choosing four minimum values DS, Dt, Du, and Dv from the Euclidean distances D1, . . . , and DK, and calculating an arithmetic mean of four object attitudes corresponding to the four minimum values, where the arithmetic mean is an object attitude in the image to be tested.
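A minimal sketch of sub-steps (B3.1) and (B3.2) is given below, assuming SMF is a K×4 array of normalized template features, attitudes is the list of (α, β) pairs of the K characteristic views, and sg is the normalized 4-element feature vector of the image to be tested; the function name is illustrative.

```python
import numpy as np

def estimate_attitude(smf, attitudes, sg):
    d = np.linalg.norm(smf - np.asarray(sg), axis=1)   # Euclidean distances D1..DK
    nearest = np.argsort(d)[:4]                        # indices of the four minima
    picked = np.array([attitudes[i] for i in nearest], dtype=float)
    return picked.mean(axis=0)                         # arithmetic mean (alpha, beta)
```

Averaging the four nearest template attitudes, as stated in (B3.2), smooths the quantization of the viewpoint grid and improves the stability of the estimate.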
An attitude estimation system for an on-orbit three-dimensional space object includes an offline feature library construction module and an online attitude estimation module, where
the offline feature library construction module specifically includes:
a first sub-module, configured to acquire, according to a space object three-dimensional model, multi-viewpoint characteristic views of the object for characterizing various attitudes of the space object; and
a second sub-module, configured to extract geometrical features from each space object multi-viewpoint characteristic view to form a geometrical feature library, where the geometrical features include an object main body height-width ratio Ti,1, an object longitudinal symmetry Ti,2, an object horizontal symmetry Ti,3, and an object main-axis inclination angle Ti,4, where the object main body height-width ratio Ti,1 refers to a height-width ratio of a minimum bounding rectangle of the object; the object longitudinal symmetry Ti,2 refers to a ratio of an area of the upper-half portion of the object to an area of the lower-half portion of the object within a rectangular region enclosed by the minimum bounding rectangle of the object; the object horizontal symmetry Ti,3 refers to a ratio of an area of the left-half portion of the object to an area of the right-half portion of the object within the rectangular region enclosed by the minimum bounding rectangle of the object; and the object main-axis inclination angle Ti,4 refers to an included angle between an object cylinder-body main axis and a view horizontal direction of a characteristic view; and
the online attitude estimation module specifically includes:
a third sub-module, configured to preprocess an on-orbit space object image to be tested;
a fourth sub-module, configured to extract features from the image to be tested after preprocessing, where the features are the same as the features extracted by the second sub-module; and
a fifth sub-module, configured to match the features extracted from the image to be tested in the geometrical feature library, where a space object attitude characterized by a characteristic view corresponding to a matching result is an object attitude in the image to be tested.
Technical effects of the present invention lie in that:
In the present invention, Step (A1) and Step (A2) are an offline training stage, in which multi-viewpoint characteristic views of the object are acquired by using a three-dimensional template object model, geometrical features of the characteristic views are extracted, and further a geometrical feature library of the template object is established. Step (B1) to Step (B3) are an online estimation stage of the attitude of the image to be tested, in which the geometrical features of the image to be tested are compared with the geometrical feature library of the template object, so as to obtain the attitude of the image to be tested through estimation. The geometrical features used for matching in the present invention have scale invariance; therefore, as long as the relative dimension scales and position relationships between the various components of an object are accurately acquired in the three-dimensional modeling stage, relatively high matching precision can subsequently be ensured. The entire method is simple to implement and has desirable robustness, high attitude estimation precision, low susceptibility to imaging conditions, and desirable applicability.
As an optimization, normalization processing is performed on extracted geometrical features, so that influence of each characteristic quantity on attitude estimation can be effectively balanced; an operation of preprocessing the image to be tested is performed, and non-local means filtering and a maximum likelihood estimation algorithm are preferably chosen to perform denoising and deblurring processing on the image to be tested, thereby improving attitude estimation precision of the algorithm under a turbulence blurring imaging condition; and a weighted arithmetic mean of an attitude estimation result is calculated, thereby improving the stability of the attitude estimation algorithm.
To make the objectives, technical solutions, and advantages of the present invention clearer and more comprehensible, the present invention is further described below in detail with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described here are merely used to explain the present invention rather than to limit the present invention. In addition, the technical features involved in the implementation manners of the present invention described below can be combined with each other as long as the technical features do not conflict with each other.
In the present invention, an on-orbit three-dimensional space object is an on-orbit Hubble telescope, and the structure of a satellite platform of the Hubble telescope is a cylinder. Two rectangular solar panels are mainly carried on the satellite platform, and an object attitude that needs to be estimated refers to an attitude of the satellite platform in the three-dimensional object coordinate system.
The present invention is further described below in detail by using the structure of an object shown in
A procedure of the present invention is shown in
(A1) Step of acquiring multi-viewpoint characteristic views of a template object includes the following sub-steps:
(A1.1) Step of establishing a template object three-dimensional model:
For a cooperative space object, for example, a satellite object, the detailed three-dimensional structures of components such as the satellite platform and the load carried by the satellite, as well as the relative position relationships among the components of the satellite, can be precisely obtained. For a non-cooperative space object, approximate geometrical structures and relative position relationships of various components of the object are deduced from multi-viewpoint projection images of the object. By using a priori knowledge such as that, when an object satellite moves on an orbit, a connecting line between the center of mass of the satellite platform and the center of the earth is perpendicular to the satellite platform, and that a solar panel of the object satellite always points to the incident direction of sunlight, the spatial position relationships among the various components of the satellite are further determined. The three-dimensional modeling tool Multigen Creator is used to establish a three-dimensional model of the object satellite.
(A1.2) Step of acquiring multi-viewpoint characteristic views of the template object:
As shown in
In the present invention, a Hubble telescope simulated satellite is used as the template object. As shown in
(A2) Step of establishing a geometrical feature library of the template object includes the following sub-steps:
This example is described by using the characteristic view of the i=1886th frame among the 2592 frames of characteristic views as an example:
(A2.1) Calculate an object main body height-width ratio Ti,1 of each characteristic view Fi:
(A2.1.1) Obtain a threshold Ti=95 by using a threshold criterion of a maximum between-cluster variance for the input characteristic view Fi shown in
(A2.1.2) Scan the binary image Gi in an order from top to bottom and from left to right, if a current point pixel value gi(x, y) is equal to 255, record a current pixel horizontal coordinate x=Topj, and a vertical coordinate y=Topi, and stop scanning, where in this example, Topj=272, and Topi=87.
(A2.1.3) Scan the binary image Gi in an order from bottom to top and from left to right, if a current point pixel value gi(x, y) is equal to 255, record a current pixel horizontal coordinate x=Bntj, and a vertical coordinate y=Bnti, and stop scanning, where in this example, Bntj=330, and Bnti=315.
(A2.1.4) Scan the binary image Gi in an order from left to right and from top to bottom, if a current point pixel value gi(x, y) is equal to 255, record a current pixel horizontal coordinate x=Leftj, and a vertical coordinate y=Lefti, and stop scanning, where in this example, Leftj=152, and Lefti=139.
(A2.1.5) Scan the binary image Gi in an order from right to left and from top to bottom, if a current point pixel value gi(x, y) is equal to 255, record a current pixel horizontal coordinate x=Rightj, and a vertical coordinate y=Righti, and stop scanning, where in this example, Rightj=361, and Righti=282.
(A2.1.6) Define the object main body height-width ratio of the characteristic view Fi as the ratio Ti,1=Hi/Wi of an object height Hi to an object width Wi, where Hi=|Topi−Bnti|, Wi=|Leftj−Rightj|, and the symbol |V| represents an absolute value of the variable V. As shown in
(A2.2) Calculate an object longitudinal symmetry Ti,2 of each characteristic view Fi:
(A2.2.1) Calculate a horizontal coordinate Cix=└(Leftj+Rightj)/2┘ and a vertical coordinate Ciy=└(Topi+Bnti)/2┘ of a central point of the characteristic view Fi, where the symbol └V┘ represents taking an integral part for the variable V, where in this example, Cix=256, and Ciy=201.
(A2.2.2) Count the number of pixel points whose gray value gi(x, y) is 255 within a region where 1≤horizontal coordinate x≤500 and 1≤vertical coordinate y≤201 in the binary image Gi, that is, the area STi of the upper-half portion of the object of the characteristic view Fi. In this example, an area of a region enclosed by a rectangular box abcd in
(A2.2.3) Count the number of pixel points whose gray value gi(x, y) is 255 within a region where 1≤horizontal coordinate x≤500 and 202≤vertical coordinate y≤411 in the binary image Gi, that is, the area SDi of the lower-half portion of the object of the characteristic view Fi. In this example, an area of a region enclosed by a rectangular box cdef in
(A2.2.4) Calculate the object longitudinal symmetry Ti,2=STi/SDi of the characteristic view Fi.
The object longitudinal symmetry of the characteristic view Fi is defined as a ratio of an area STi of the upper-half portion of the object to an area SDi of the lower-half portion within a rectangular region enclosed by a minimum bounding rectangle of the object, where in this example, Ti,2=1.0873.
(A2.3) Calculate an object horizontal symmetry Ti,3 of each characteristic view Fi:
(A2.3.1) Count the number of pixel points whose gray value gi(x, y) is 255 within a region where 1≤horizontal coordinate x≤Cix and 1≤vertical coordinate y≤m in the binary image Gi, that is, the area SLi of the left-half portion of the object of the characteristic view Fi. In this example, an area of a region enclosed by a rectangular box hukv in
(A2.3.2) Count the number of pixel points whose gray value gi(x, y) is 255 within a region where Cix+1≤horizontal coordinate x≤n and 1≤vertical coordinate y≤m in the binary image Gi, that is, the area SRi of the right-half portion of the object of the characteristic view Fi. In this example, an area of a region enclosed by a rectangular box ujvl in
(A2.3.3) Calculate the object horizontal symmetry Ti,3=SLi/SRi of the characteristic view Fi.
The object horizontal symmetry of the characteristic view Fi is defined as a ratio of an area SLi of the left-half portion of the object to an area SRi of the right-half portion within a rectangular region enclosed by a minimum bounding rectangle of the object, where in this example, Ti,3=0.9909.
(A2.4) Calculate an object main-axis inclination angle Ti,4 of the characteristic view Fi:
The object main-axis inclination angle is defined as an included angle θ between the object cylinder-body axis of the characteristic view Fi and the image horizontal direction. This feature most distinctively characterizes the attitude of the object, has a value range of 0° to 180°, and is represented by a one-dimensional floating-point number.
(A2.4.1) Calculate a horizontal coordinate xi0 and a vertical coordinate yi0 of a gravity center of the binary image Gi corresponding to each characteristic view Fi, where in this example, xi0=252, and yi0=212.
(A2.4.2) Calculate a p+qth central moment μi(p, q) of the binary image Gi corresponding to the characteristic view Fi.
(A2.4.3) Construct a real symmetrical matrix Mat whose first row is (μi(2,0), μi(1,1)) and whose second row is (μi(1,1), μi(0,2)), and calculate feature values V1 and V2 of the matrix Mat and feature vectors (Vx1, Vy1) and (Vx2, Vy2) corresponding to the feature values, where in this example, the feature values are V1=6.2955×10^9 and V2=2.3455×10^10, and the feature vectors are
(A2.4.4) Calculate the object main-axis inclination angle Ti,4 shown in
Ti,4=atan2(Vy, Vx)×180/π,
where in the formula, (Vx, Vy) is the feature vector corresponding to the larger of the feature values V1 and V2, the symbol π represents a ratio of the circumference of a circle to the diameter thereof, and the symbol atan2 represents the arctangent function.
In this example, the object main-axis inclination angle Ti,4=50.005°.
(A2.5) Construct a geometrical feature library MF of the multi-viewpoint characteristic views Fi of the template object:
MF=[T1,1 T1,2 T1,3 T1,4; . . . ; Ti,1 Ti,2 Ti,3 Ti,4; . . . ; TK,1 TK,2 TK,3 TK,4],
where in the formula, the ith row {Ti,1, Ti,2, Ti,3, Ti,4} represents a geometrical feature of the characteristic view Fi of the ith frame, where in this example, as shown in
(A2.6) Normalization processing step:
Perform normalization processing on the geometrical feature library MF of the multi-viewpoint characteristic views Fi of the template object, to obtain a normalized geometrical feature library SMF of the template object:
SMFi,j=Ti,j/Vecj,
where in the formula, SMFi,j is the element in the ith row and the jth column of SMF, Vecj=Max{T1,j, T2,j, . . . , Ti,j, . . . , TK,j}, i=1, 2, . . . , and K, j=1, 2, 3, and 4; and the symbol Max{V} represents taking a maximum value in a set V.
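As an illustrative sketch of the normalization in step (A2.6), each column of the feature library is divided by its maximum over the K templates, and the divisors Vecj are retained so that the features of the image to be tested can be normalized in the same way in step (B2); the names are illustrative.

```python
import numpy as np

def normalize_library(mf):
    vec = mf.max(axis=0)          # Vec_j = Max{T_1,j, ..., T_K,j} for j = 1..4
    return mf / vec, vec          # SMF, plus the divisors for the test features
```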
An online attitude estimation step specifically includes:
(B1) Step of calculating geometrical features of the image to be tested, including the following sub-steps:
(B1.1) Step of preprocessing the image to be tested
Imaging data of a space object contains much noise, has a low signal-to-noise ratio, and is obviously blurred. Therefore, before subsequent processing is performed on the imaging data, it is necessary to preprocess the imaging data: denoising is performed first, and then, according to the characteristics of the imaging data, an effective restoration algorithm is used to perform image restoration processing on the image of the space object. In this example, non-local means filtering (with the following parameters: the size of a similarity window is 5×5, the size of a search window is 15×15, and an attenuation parameter is 15) is chosen to first perform noise suppression on the image to be tested.
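A hedged sketch of this preprocessing is given below: non-local means denoising with the stated parameters (5×5 similarity window, 15×15 search window, attenuation parameter 15), followed by a few Richardson-Lucy iterations as one common maximum likelihood deblurring scheme. The Gaussian point spread function in the usage lines is only a stand-in, since the description does not specify the blur kernel, and the function name is illustrative.

```python
import cv2
import numpy as np
from scipy.signal import fftconvolve

def preprocess(img_u8, psf, n_iter=20):
    # Non-local means denoising: h=15, template window 5x5, search window 15x15.
    den = cv2.fastNlMeansDenoising(img_u8, None, 15, 5, 15)
    f = den.astype(float) + 1e-6
    est = np.full_like(f, f.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):                    # Richardson-Lucy (maximum likelihood)
        blurred = fftconvolve(est, psf, mode='same') + 1e-6
        est *= fftconvolve(f / blurred, psf_mirror, mode='same')
    return est

# Usage with an assumed 5x5 Gaussian point spread function:
x = np.arange(-2, 3)
g = np.exp(-x ** 2 / 2.0)
psf = np.outer(g, g)
psf /= psf.sum()
```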
(B2) Step of extracting geometrical features from the image to be tested
Replace fi(x, y) with the image g(x, y) obtained after preprocessing, and perform sub-step (A2.1) to sub-step (A2.4), to obtain geometrical features {G1, G2, G3, G4} of the image to be tested; then perform normalization processing on the geometrical features {G1, G2, G3, G4}, to obtain normalized geometrical features {SG1, SG2, SG3, SG4} of the image to be tested, where SGj=Gj/Vecj, and j=1, 2, 3, 4.
(B3) Object attitude estimation step, including the following sub-steps:
(B3.1) Traverse the entire geometrical feature library SMF of the template object, and calculate Euclidean distances D1, . . . , and DK between geometrical features {SG1,SG2,SG3,SG4} of the image to be tested and each row of vectors in SMF; and
(B3.2) Choose four minimum values DS, Dt, Du, and Dv from the Euclidean distances D1, . . . , and DK, where the attitude of the image to be tested is set as the arithmetic mean of the template object attitudes corresponding to DS, Dt, Du, and Dv.
The results show that the estimation error of the pitching angle α is zero degrees, and the estimation error of the yaw angle β is within 10 degrees.
A person skilled in the art easily understands that the foregoing merely provides preferred embodiments of the present invention, which are not used to limit the present invention. Any modifications, equivalent replacements, and improvements made within the spirit and principle of the present invention shall all fall within the protection scope of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
201310740553.6 | Dec 2013 | CN | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2014/085717 | 9/2/2014 | WO | 00 |