This application claims priority of Taiwanese Patent Application No. 105113909, filed on May 5, 2016.
The disclosure relates to a method and a system of automatic shoe lacing.
Some shoes are laced up in the factory before packaging. Traditionally, the lacing process is performed using manual labor, which is less efficient.
Recently, some factories have adopted an automatic shoe lacing method in which a contact three-dimensional scanner touches the shoelace holes of a shoe to acquire coordinates of the shoelace holes, and transmits the coordinates to a computer device that controls a robotic arm to pass a shoelace through the shoelace holes.
Although conventional automatic shoe lacing improves the efficiency of the lacing process in comparison to manual labor, the scanning may need to be repeated because shoes are often made of soft materials, which may deform when contacted by the scanner and thus introduce errors into the acquired coordinates. Accordingly, improving the acquisition of the coordinates of the shoelace holes may further promote the efficiency of automatic shoe lacing.
Therefore, an object of the disclosure is to provide a method and a system of automatic shoe lacing that can alleviate at least one of the drawbacks of the prior art.
According to the disclosure, the method of automatic shoe lacing includes the steps of: (a) capturing, by a camera system, at least two images of shoelace holes of a shoe from different positions relative to the shoe; (b) acquiring, by a computer device through conducting an analysis according to the shoelace-hole images in each of the at least two images of the shoe, coordinates of the shoelace holes relative to a robotic arm; and (c) lacing, by the robotic arm, the shoe according to the coordinates acquired in step (b).
According to the disclosure, the system of automatic shoe lacing includes a camera system, a robotic arm and a computer device. The camera system is for capturing at least two images of shoelace holes of a shoe from different positions relative to the shoe. The robotic arm is for lacing the shoe. The computer device is coupled to the camera system for receiving the at least two images therefrom, is coupled to the robotic arm, and is configured to acquire, through conducting an analysis according to the at least two images of the shoe, coordinates of the shoelace holes relative to the robotic arm, and to control the robotic arm to lace the shoe according to the coordinates thus acquired.
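By way of a non-limiting illustration, the following sketch outlines the capture-analyze-lace data flow described above. The object and method names (capture_images, locate_holes, lace) are hypothetical placeholders and are not part of the disclosure.

```python
# Illustrative sketch of the capture -> analyze -> lace flow of the
# disclosed system; every name below is a hypothetical placeholder.

def lace_shoe(camera_system, computer_device, robotic_arm):
    # (a) Capture at least two images of the shoelace holes from
    #     different positions relative to the shoe.
    images = camera_system.capture_images(min_views=2)
    # (b) Analyze the images to acquire coordinates of the shoelace
    #     holes relative to the robotic arm (e.g., an (N, 3) array).
    hole_coords = computer_device.locate_holes(images)
    # (c) Control the robotic arm to lace the shoe according to the
    #     acquired coordinates.
    robotic_arm.lace(hole_coords)
```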
Other features and advantages of the disclosure will become apparent in the following detailed description of the embodiment(s) with reference to the accompanying drawings, of which:
Before the disclosure is described in greater detail, it should be noted that where considered appropriate, reference numerals or terminal portions of reference numerals have been repeated among the figures to indicate corresponding or analogous elements, which may optionally have similar characteristics.
Referring to
Further referring to
In step S2, the computer device 1 issues an image capturing instruction to the processor 32 of each camera device 30, and the processor 32 controls the corresponding image capturing unit 31 to capture a respective image that contains the light spots indicative of the locations of the shoelace holes 42. The image captured by the image capturing unit 31 is then transmitted to the computer device 1 via the processor 32.
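By way of a non-limiting illustration, the light spots may be located in a captured image with simple brightness thresholding, as in the following sketch (using the OpenCV library); the threshold value and the minimum blob area are illustrative assumptions, not values prescribed by the disclosure.

```python
import cv2
import numpy as np

def find_light_spots(image_bgr, thresh=200, min_area=5.0):
    """Return pixel centroids of bright spots (assumed to mark shoelace holes)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Light shining through the shoelace holes appears as bright blobs.
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    spots = []
    for contour in contours:
        m = cv2.moments(contour)
        if m["m00"] >= min_area:  # ignore small specks of noise
            spots.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(spots, dtype=np.float64)  # one (x, y) centroid per spot
```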
In step S3, the computer device 1 acquires, by performing an analysis according to the images respectively captured by the image capturing units 31 of the two camera devices 30 (e.g., based on principles of 3D reconstruction from multiple images), individual coordinates of each of the shoelace holes 42 (represented by the light spots) with respect to the robotic arm 2.
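By way of a non-limiting illustration, one standard way to perform such a reconstruction from two views is triangulation, as sketched below. The projection matrices P1 and P2 and the camera-to-robot transform T_robot_from_cam are assumed to come from a prior calibration; the disclosure does not prescribe a specific reconstruction algorithm.

```python
import cv2
import numpy as np

def holes_in_robot_frame(pts1, pts2, P1, P2, T_robot_from_cam):
    """Triangulate matched spot centroids from two views and express the
    resulting 3D points in the robotic arm's coordinate frame."""
    # pts1, pts2: (N, 2) arrays of matched light-spot centroids (pixels).
    homog = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4 x N homogeneous
    xyz_cam = (homog[:3] / homog[3]).T                     # (N, 3), camera frame
    # Apply the rigid calibration transform (4 x 4) to reach the robot frame.
    xyz_h = np.hstack([xyz_cam, np.ones((len(xyz_cam), 1))])
    return (T_robot_from_cam @ xyz_h.T).T[:, :3]
```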
In step S4, the computer device 1 controls the robotic arm 2 to, according to the coordinates of the shoelace holes 42, approach desired ones of the shoelace holes 42, and lace the shoe 4 by passing the shoelace 41 through the desired ones (e.g., all) of the shoelace holes 42 sequentially in an order from the shoelace hole 42 which is closest to a toe cap of the shoe 4 to the shoelace hole 42 which is closest to a heel of the shoe 4.
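By way of a non-limiting illustration, the toe-to-heel ordering may be obtained by sorting the acquired coordinates along the shoe's lengthwise direction, as sketched below. The assumption that this direction coincides with one axis of the robot frame, and the motion commands move_to and thread_lace, are hypothetical.

```python
import numpy as np

def toe_to_heel_order(hole_coords):
    # hole_coords: (N, 3) hole positions relative to the robotic arm; the
    # shoe's toe-to-heel direction is assumed to lie along the x-axis.
    return hole_coords[np.argsort(hole_coords[:, 0])]

def lace(robotic_arm, hole_coords):
    for hole in toe_to_heel_order(hole_coords):
        robotic_arm.move_to(hole)        # hypothetical motion command
        robotic_arm.thread_lace(hole)    # hypothetical end-effector action
```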
It should be noted that the images of the shoelace holes 42 must be captured from different positions in order to correctly acquire the coordinates of the shoelace holes 42 in relation to the robotic arm 2, so the minimum required number of images is two. A greater number of images from different positions may assist the computer device 1 in acquiring the coordinates of the shoelace holes 42 with higher precision. This embodiment uses the light source 5 to create the light spots for the camera system 3 and/or the computer device 1 to identify the locations of the shoelace holes 42. In other embodiments, a characteristic recognition technique for the shoelace holes 42 may be applied. For example, a database of shoelace hole appearances may be built in advance, so that the camera system 3 and/or the computer device 1 may capture/identify the image of the shoelace holes 42 by comparison with reference to the database, as sketched below.
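By way of a non-limiting illustration, such a database comparison could be realized with template matching, as in the following sketch; template matching is merely one stand-in for the characteristic recognition technique, which the disclosure does not prescribe.

```python
import cv2
import numpy as np

def find_holes_by_template(gray_image, hole_templates, score_thresh=0.8):
    """Locate candidate shoelace holes by comparing the image against
    pre-stored grayscale templates of shoelace hole appearances."""
    candidates = []
    for template in hole_templates:   # templates from the prebuilt database
        scores = cv2.matchTemplate(gray_image, template, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(scores >= score_thresh)  # sufficiently similar spots
        h, w = template.shape
        candidates.extend((x + w / 2.0, y + h / 2.0) for x, y in zip(xs, ys))
    return candidates  # candidate hole centers in pixel coordinates
```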
In summary, since the automatic shoe lacing method according to this disclosure uses the camera system 3 to capture at least two images of the shoe 4 from different positions without contacting the shoe 4 for acquiring the coordinates of the shoelace holes 42, higher precision of the acquired coordinates of the shoelace holes 42 and higher efficiency of the automatic lacing process are achieved.
In the description above, for the purposes of explanation, numerous specific details have been set forth in order to provide a thorough understanding of the embodiment(s). It will be apparent, however, to one skilled in the art, that one or more other embodiments may be practiced without some of these specific details. It should also be appreciated that reference throughout this specification to “one embodiment,” “an embodiment,” an embodiment with an indication of an ordinal number and so forth means that a particular feature, structure, or characteristic may be included in the practice of the disclosure. It should be further appreciated that in the description, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects.
While the disclosure has been described in connection with what is (are) considered the exemplary embodiment(s), it is understood that this disclosure is not limited to the disclosed embodiment(s) but is intended to cover various arrangements included within the spirit and scope of the broadest interpretation so as to encompass all such modifications and equivalent arrangements.
Foreign Application Priority Data

| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 105113909 A | May 2016 | TW | national |
Publication Data

| Number | Date | Country |
| --- | --- | --- |
| 20170320214 A1 | Nov 2017 | US |