This application claims priority to Taiwanese Patent Application No. 101136642, filed on Oct. 4, 2012.
1. Technical Field
The present disclosure relates to moving control devices, and, more particularly, to a moving control device for moving an autonomous mobile platform.
2. Background
Autonomous mobile platforms, such as Automatic Guided Vehicles (AGVs), are often used in manufacturing plants and warehouses for transporting goods, in order to save human resources and establish automated processes. In order for an AGV to “walk” automatically, a moving control device is usually installed in the AGV so as to control forward, backward, stop, or other movements of the AGV.
Traditionally, AGVs walk on pre-laid tracks, but such an arrangement makes the walking routes of the AGV fixed and unable to be changed on demand. Tracks have to be re-laid in order to change the routes of the AGV, and laying tracks incurs substantial costs in money, manpower, and time. Therefore, in recent years, automatic walking techniques have incorporated guiding methods that require no fixed tracks: the AGV detects specific signs on the ground that form the routes along which it walks, and the locations of these signs can be adjusted according to needs. For example, a plurality of guiding tapes can be adhered to the ground of an unmanned warehouse or factory, and an AGV may employ a sensor for optically or electromagnetically sensing these guiding tapes, so that the AGV walks along the route formed by the guiding tapes as they are detected. The guiding tapes can be removed and adhered to different locations on the ground to form different routes for the AGV to walk on.
In the automatic walking technique described above, when an obstacle is encountered on the path, the AGV must have a mechanism for detecting that there is an obstacle ahead so that it can stop moving. Since the sensor that follows the guiding tapes cannot also detect obstacles, a second detection device has to be installed for that purpose. However, such a method still has the following issues: two different sets of detection devices must be installed, which not only increases the building costs of the AGV and the material costs of the sensors, but also makes the AGV bulkier and harder to assemble, since it has to accommodate two sets of detection devices. Moreover, an image frame can only be used for a single identification task at a time.
Therefore, there is an urgent need for a single detection device with multiple detecting functions for existing AGVs, one that is more compact and easier to install, while improving the efficiency of transporting (or walking) and reducing construction costs.
The present disclosure provides a moving control device and an autonomous mobile platform, such as an automatic guided vehicle (AGV) and an automatic guided platform, having the same.
The present disclosure provides a moving control device applicable to an autonomous mobile platform, which may include: a light-emitting element for emitting a structured light with a predetermined wavelength; a filtering element for allowing the structured light with the predetermined wavelength to pass through while filtering out light without the predetermined wavelength; an image capturing unit for retrieving an external image, wherein the filtering element is provided in a portion at a front end of the image capturing unit, such that the external image retrieved by the image capturing unit includes a first region generated as a result of ambient light intersecting the filtering element and a second region generated as a result of ambient light not intersecting the filtering element; and a calculating unit for performing image recognition on the first region and the second region of the external image to generate a first identification result and a corresponding second identification result, respectively, to allow controlling movement of the autonomous mobile platform based on the first identification result and the second identification result.
The present disclosure further provides an autonomous mobile platform, which may include: a main body; and a moving control device provided on the main body. The moving control device may include: a light-emitting element for emitting a structured light with a predetermined wavelength; a filtering element for allowing the structured light with the predetermined wavelength to pass through while filtering out light without the predetermined wavelength; an image capturing unit for retrieving an external image, wherein the filtering element is provided in a portion at a front end of the image capturing unit, such that the external image retrieved by the image capturing unit includes a first region generated as a result of ambient light intersecting the filtering element and a second region generated as a result of ambient light not intersecting the filtering element; and a calculating unit for performing image recognition on the first region and the second region of the external image to generate a first identification result and a corresponding second identification result, respectively, to allow controlling movement of the autonomous mobile platform based on the first identification result and the second identification result.
The present disclosure can be more fully understood by reading the following detailed description of the preferred embodiments, with reference made to the accompanying drawings, wherein:
In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically shown in order to simplify the drawings.
The light-emitting element 10 is used for emitting a structured light with a predetermined wavelength. The filtering element 11 allows the structured light with the predetermined wavelength to pass therethrough, and filters out light without the predetermined wavelength. In an embodiment, the structured light is near infrared with a predetermined wavelength. Since the energy of sunlight in the infrared wavelength range of 700 nm to 1400 nm is lower than that in the wavelength range of 400 nm to 700 nm, using near infrared as the active structured light emitted by the light-emitting element 10 can resist the influence of sunlight at a lower transmission power. In particular, sunlight has especially low energy when the near infrared wavelength is in the range of about 780 nm to 950 nm. In other words, the use of near infrared with a specific wavelength allows the light-emitting element 10 to stably emit the structured light at the minimum transmission power. The filtering element 11 may be an optical filter, a filter, or an optical coating. More specifically, the filter can be a low-pass filter, a high-pass filter, a band-pass filter or the like, or a combination thereof, and the present disclosure is not limited thereto. In other words, the filtering element 11 in an embodiment can be an optical filter, a filter, or an optical coating with a wavelength range of 780 nm to 950 nm.
The image capturing unit 12 is used for capturing an external image. A part of the front end of the image capturing unit 12 is provided with the filtering element 11, such that the external image retrieved by the image capturing unit 12 has a first region formed by the intersection of the filtering element 11 and the light, and a second region formed by the light not intersecting with the filtering element 11. In an embodiment, the image capturing unit 12 is a CMOS sensing element or CCD sensing element, or a camera that employs a CMOS or CCD sensing element. Digital information about the space in front of the moving control device 1 is obtained by the CMOS or CCD sensing the light, and then converted into an external image. The external image will have a first region and a second region as a result of the filtering element 11.
The calculating unit 13 is connected to the image capturing unit 12 to receive the external image, and perform image recognition on the first region and the second region of the external image to produce a corresponding first identification result and a second identification result, respectively, so that the autonomous mobile platform can carry out moving control based on the first identification result and the second identification result.
The optical axis 24 of the light-emitting element 20 is parallel to the optical axis 25 of the image capturing unit 22, with the light-emitting element 20 and the image capturing unit 22 facing the same direction. By contrast, in the prior art an angle must be formed between the central line of the camera and the laser line. In an embodiment, the light-emitting element 20 is installed above the image capturing unit 22, and the filtering element 21 is located in front of the image capturing unit 22, in the upper half above the central line 25 of the image capturing unit 22. The image capturing unit 22 is used for capturing the image of a front space 28 in the travelling direction of the autonomous mobile platform. The front space 28 is divided into an upper half of the front space 281 and a lower half of the front space 282. The structured light 26 generated by the light-emitting element 20 may come from a point light source or a line light source. The present disclosure is not limited to the light-emitting element 20 emitting only one linear light source; a plurality of linear light sources may also be emitted. The structured light 26 is described herein using a linear light source as an example. As the light-emitting element 20 is disposed above the image capturing unit 22, and the optical axis 24 of the light-emitting element 20 is parallel to the optical axis 25 of the image capturing unit 22, when an obstacle appears in the front space 28 (such as the tree shown in the diagram), the linear light of the structured light 26 will only be reflected in the upper half of the front space 281, and not in the lower half of the front space 282. In other words, in the scene above the central line 25 of the image capturing unit 22, only an image generated by the structured light 26 will appear. Moreover, since the natural light 27 comes from light sources in the space in which the moving control device 2 resides, such as indoor lighting, sunlight or ambient light, the natural light 27 will appear in both the upper and lower halves of the front space 28.
The image capturing unit 22, when used in conjunction with the filtering element 21, can retrieve the structured light 26 reflected from the upper half of the front space 281. In an embodiment, the reflected range of the structured light 26 emitted by the light-emitting element 20 can fully cover the region of the filtering element 21 for receiving the structured light 26. When the structured light 26 passes through the filtering element 21 (i.e., the structured light 26 intersects the filtering element 21), light-sensed digital information of the upper half of the front space 281 is obtained by the image capturing unit 22, which in turn generates a first region 291 of the external image 29. In other words, the first region 291 of the external image 29 is the infrared image generated after the near infrared passing through the filtering element 21 is converted by the image capturing unit 22. A second region 292 of the external image 29 is generated by the natural light 27 reflected from the lower half of the front space 282. Thus, the second region 292 of the external image 29 is an ordinary natural-light image generated after the natural light 27 entering the image capturing unit 22 directly is converted.
In the present embodiment, the first region 291 is specifically the upper half of the external image 29 above a dividing line 293, while the second region 292 is specifically the lower half of the external image 29 below the dividing line 293. The external image 29 consists of the first region 291 and the second region 292 as a result of the filtering element 21 being provided in front of the image capturing unit 22, in the upper half above the central line 25. In other words, the present disclosure uses the location of the filtering element 21 to control the range of the first region 291 in the external image 29. The external image 29 is transmitted to the calculating unit 23 for calculation, i.e., for performing image recognition on the first region 291 of the external image 29 to produce the first identification result and performing image recognition on the second region 292 of the external image 29 to produce the second identification result. The first region 291 of the external image 29 is the infrared image generated by retrieving the near infrared. Upon finding an obstacle in the infrared image, the distance between the obstacle and the autonomous mobile platform can be calculated. Therefore, the first identification result is the distance information between the autonomous mobile platform and an obstacle calculated using the infrared image of the first region 291. This distance information is then used for automatically guiding the autonomous mobile platform to avoid the obstacle. In addition, the second region 292 of the external image 29 is an ordinary natural-light image generated by retrieving the natural light 27. This natural-light image can be used for image recognition or facial recognition. Taking image recognition as an example, the second identification result may be the identification of colored tapes on the ground on which the autonomous mobile platform resides. By determining a vector path of the colored tapes on the ground, the traveling direction of the autonomous mobile platform can be automatically guided. In other words, the second identification result can be used in navigation of the autonomous mobile platform. The second identification result is not limited to the identification of colored tapes, but may include the identification of other signs for guiding the autonomous mobile platform, such as the direction indicated by an arrow, or the identification of specific parts in the image, such as facial recognition and the like; the present disclosure is not limited as such. In summary, the autonomous mobile platform can avoid obstacles based on the first identification result while navigating based on the second identification result, thus achieving the goal of simultaneously providing multiple moving control functions, such as distance measuring and tracking, with a single detecting device.
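As a minimal sketch of how a single captured frame could be split at the dividing line and handed to the two identification tasks described above — where the frame size, the dividing-line row, and all function and variable names are illustrative assumptions rather than part of the disclosure — consider the following Python example:

```python
import numpy as np

def split_external_image(external_image: np.ndarray, dividing_row: int):
    """Split one captured frame at the dividing line into the two regions
    described for the external image 29. dividing_row (the dividing line 293)
    is assumed known from where the filtering element covers the lens."""
    first_region = external_image[:dividing_row, :]   # filtered: structured-light (IR) image
    second_region = external_image[dividing_row:, :]  # unfiltered: natural-light image
    return first_region, second_region

# Example: a 480 x 640 frame with the filter over the upper half of the lens.
frame = np.zeros((480, 640), dtype=np.uint8)
ir_image, visible_image = split_external_image(frame, dividing_row=240)
# ir_image would feed obstacle-distance estimation (first identification result);
# visible_image would feed guide-tape/sign recognition (second identification result).
```

Because both identification tasks draw on the same frame, a single image capturing unit can serve both obstacle avoidance and navigation.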
In a specific embodiment, the structured light with a predetermined wavelength is a line-shaped laser. The line-shaped laser is parallel to the horizontal plane corresponding to the image capturing unit 22. The first identification result is the distance from the autonomous mobile platform to an obstacle in the front space 28 estimated based on a line-shaped laser image received by the image capturing unit 22 using a distance sensing method.
Referring in conjunction to the accompanying drawings, the distance sensing method performed by the calculating unit 23 includes the following steps:
1). The calculating unit 23 receives a line-shaped laser image LI;
2). The calculating unit 23 segments the line-shaped laser image LI into a plurality of sub-line-shaped laser images LI(1)~LI(n), wherein n is a positive integer;
3). The calculating unit 23 calculates the vertical location of the laser light in the ith sub-line-shaped laser image LI(i) among the sub-line-shaped laser images LI(1)~LI(n), wherein i is a positive integer and 1≤i≤n; and
4). The calculating unit 23 outputs ith distance information based on the vertical location of the laser light in the ith sub-line-shaped laser image LI(i) and a conversion relationship, wherein the ith distance information is, for example, the distance between the moving control device 2 for an autonomous mobile platform and an obstacle in the front space 28, and the conversion relationship is, for example, a relationship curve (as shown in the accompanying drawings) between the vertical location of the laser light and the distance.
For example, the calculating unit 23 may output jth distance information based on the ith distance information, trigonometric functions, and the height of the laser light in the jth sub-line-shaped laser image LI(j) among the sub-line-shaped laser images LI(1)~LI(n).
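A minimal Python sketch of steps 1) through 4) follows, under the assumption that the conversion relationship has been sampled as row/distance calibration pairs; the numeric values and names below are hypothetical placeholders, not calibration data from the disclosure:

```python
import numpy as np

def estimate_distances(laser_image: np.ndarray, n: int,
                       calib_rows: np.ndarray, calib_dists: np.ndarray) -> list:
    """Steps 1)-4): segment the line-shaped laser image LI into n vertical
    sub-images LI(1)~LI(n), locate the laser row in each, and convert each
    row to a distance through the conversion relationship (here a sampled
    calibration curve interpolated with np.interp)."""
    height, width = laser_image.shape
    seg_width = width // n
    distances = []
    for i in range(n):
        sub = laser_image[:, i * seg_width:(i + 1) * seg_width]  # LI(i+1)
        y_i = int(np.argmax(sub.sum(axis=1)))                    # vertical laser position
        distances.append(float(np.interp(y_i, calib_rows, calib_dists)))
    return distances

# Hypothetical calibration: the laser appearing lower in the image means a
# nearer obstacle (placeholder values, not measured data).
rows = np.array([0, 60, 120, 180, 239])
dists_cm = np.array([500.0, 300.0, 180.0, 100.0, 50.0])
ir = np.zeros((240, 640), dtype=np.float32)
ir[120, :] = 255.0  # synthetic laser line across the image
print(estimate_distances(ir, n=8, calib_rows=rows, calib_dists=dists_cm))
# -> eight distance values of 180.0 cm, one per sub-image
```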
Referring in conjunction to the accompanying drawings, the manner in which the calculating unit 23 distinguishes the line-shaped laser from noise in the sub-line-shaped laser images is described below.
It should be noted that pixels containing noise generally do not exist continuously at the same horizontal position. Thus, in order to avoid misjudging noise as the line-shaped laser, in actual practice a maximum tolerable noise width ND can be appropriately defined. When the number of consecutive light spots in a sub-line-shaped laser image is equal to or greater than the maximum tolerable noise width ND, the calculating unit 23 determines that these light spots are part of the line-shaped laser. On the contrary, if the number of consecutive light spots in a sub-line-shaped laser image is less than the maximum tolerable noise width ND, the calculating unit 23 determines that these light spots are noise and not part of the line-shaped laser. For example, assume the maximum tolerable noise width ND is 3. When the number of consecutive light spots in a sub-line-shaped laser image is greater than or equal to 3, the calculating unit 23 determines that these light spots are part of the line-shaped laser. On the contrary, when the number of consecutive light spots in a sub-line-shaped laser image is less than 3, the calculating unit 23 determines that these light spots are noise and not part of the line-shaped laser. By segmenting a line-shaped laser image LI into sub-line-shaped laser images LI(1)~LI(n), noise interference can be further reduced.
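The run-length test described above can be sketched as follows, with the brightness threshold and the maximum tolerable noise width ND being assumed example values:

```python
import numpy as np

def is_laser_segment(row_pixels: np.ndarray, threshold: float, nd: int) -> bool:
    """Decide whether the bright spots on one image row belong to the
    line-shaped laser: only a run of >= nd consecutive light spots counts;
    shorter runs are treated as noise."""
    run = 0
    for value in row_pixels:
        run = run + 1 if value >= threshold else 0
        if run >= nd:
            return True
    return False

row = np.array([0, 255, 0, 200, 210, 220, 230, 0], dtype=np.uint8)
print(is_laser_segment(row, threshold=180, nd=3))  # True: 4 consecutive spots
print(is_laser_segment(row, threshold=180, nd=5))  # False: longest run is only 4
```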
The calculating unit 23 performs histogram statistics along the vertical direction of the ith sub-line-shaped laser image LI(i) to obtain the vertical position yi of the laser light in the sub-line-shaped laser image. For example, the calculating unit 23 computes the grayscale sum of the pixels in each row along the vertical direction of the ith sub-line-shaped laser image LI(i). The row whose grayscale sum is greater than that of every other row has the highest sum, and the laser light segment is determined to reside on that row of pixels.
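A short sketch of this histogram step, under the assumption that the laser appears as the brightest row of the sub-image:

```python
import numpy as np

def laser_row_by_histogram(sub_image: np.ndarray) -> int:
    """Histogram statistics along the vertical direction of one
    sub-line-shaped laser image LI(i): sum the grayscale values of each row
    and take the row with the largest sum as the laser position y_i."""
    row_sums = sub_image.sum(axis=1)  # one grayscale sum per row
    return int(np.argmax(row_sums))

sub = np.zeros((240, 80), dtype=np.float32)
sub[95, :] = 250.0  # synthetic laser segment on row 95
print(laser_row_by_histogram(sub))  # -> 95
```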
In another embodiment, in order to increase the accuracy of position representation, the calculating unit 23 may further use a brightness center algorithm to calculate the position of the laser light with sub-pixel accuracy. The brightness center may be calculated as:

Xc = Σ [ I(Xi, Yi) × Xi ] / Σ I(Xi, Yi)

Yc = Σ [ I(Xi, Yi) × Yi ] / Σ I(Xi, Yi)

wherein the summations are taken over the region of (2m+1)×(W/n) pixels centered at the vertical position y1.
In the above two equations, (Xc, Yc) indicates the calculated coordinates of the brightness center, W is the width of the line-shaped laser image LI, n is the number of sub-line-shaped laser images, m is a positive integer, y1 is the y-axis height of the laser light found by the histogram statistics in the first sub-line-shaped laser image LI(1), (Xi, Yi) indicates a coordinate in the region of (2m+1)×(W/n) pixels, and I(Xi, Yi) indicates the corresponding brightness value. Thereafter, the calculating unit 23 replaces the vertical position y1 of the laser light with the brightness center coordinate Yc, and the distance to the obstacle is then calculated using this brightness center coordinate Yc. Similarly, the brightness center coordinates of the second sub-line-shaped laser image LI(2) through the nth sub-line-shaped laser image LI(n) can be calculated using the above method.
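A sketch of this sub-pixel refinement, following the weighted-centroid equations reconstructed above, with the window height parameter m assumed; each coordinate in the (2m+1)×(W/n) window around y1 is weighted by its brightness:

```python
import numpy as np

def brightness_center(sub_image: np.ndarray, y1: int, m: int):
    """Brightness-center (weighted centroid) computation over the
    (2m+1) x (W/n) pixel window around the histogram row y1."""
    h, w = sub_image.shape
    top, bottom = max(0, y1 - m), min(h, y1 + m + 1)
    ys, xs = np.mgrid[top:bottom, 0:w]
    weights = sub_image[top:bottom, :].astype(np.float64)
    total = weights.sum()
    if total == 0:
        return (w - 1) / 2.0, float(y1)  # no light in the window: keep y1
    xc = (xs * weights).sum() / total
    yc = (ys * weights).sum() / total
    return xc, yc  # yc replaces y1 in the distance conversion

sub = np.zeros((240, 80), dtype=np.float32)
sub[95, :] = 100.0
sub[96, :] = 200.0  # a brighter neighboring row pulls the centroid downward
print(brightness_center(sub, y1=95, m=2))  # -> (39.5, ~95.67)
```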
In other specific embodiments, different corresponding external images can be generated based on different locations of the filtering element, as shown in the accompanying drawings.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments. It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims and their equivalents.