The present invention relates generally to stitching or sewing machines and, more particularly, to a hands-free control system for same.
Stitchers and sewing machines are increasingly complex pieces of machinery that frequently offer multiple modes of operation and varying features for use. Prior art stitchers and sewing machines utilize a variety of control mechanisms to allow a user to select modes of operation and available features and to generally control the machine. Those control mechanisms generally include a control panel of some type, in some cases with push buttons or a touch screen device.
However, a shared shortcoming of these prior art control mechanisms is the lack of audible, verbal confirmation of machine control commands input by a user.
Therefore, it would be advantageous to provide a control system for a stitcher or sewing machine that provides audible, verbal command confirmation to a user.
One aspect of the invention generally pertains to a voice command system for a stitcher that provides audible, verbal command confirmation to a user.
In accordance with one or more of the above aspects of the invention, there is provided a text-to-speech system for a stitcher that includes a tablet device in operative communication with the stitcher; the tablet device further comprising a display screen; a memory; a microprocessor; a communication module; a command input device; a speaker; and a text-to-speech algorithm.
In accordance with another aspect, there is provided an associated method that includes the steps of accepting a command for operation of the stitcher from a user; transmitting the command to the text-to-speech algorithm; and producing an audible confirmation of the command through the speaker.
These aspects are merely illustrative of the innumerable aspects associated with the present invention and should not be deemed as limiting in any manner. These and other aspects, features and advantages of the present invention will become apparent from the following detailed description when taken in conjunction with the referenced drawings.
Reference is now made more particularly to the drawings, which illustrate the best presently known mode of carrying out the invention and wherein similar reference characters indicate the same parts throughout the views.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. For example, the invention is not limited in scope to the particular type of industry application depicted in the figures. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.
During operation, the needle bar 28 moves up and down thereby moving the needle 30 to form a stitch in the fabric. The needle bar 28 can be adjusted up or down to provide a proper machine timing height. A small hole in the needle plate 34 restricts movement of the thread as the stitch is formed. The hopping foot 32 raises and lowers with the movement of the needle 30 to press and release the fabric as the stitch is formed. The hopping foot 32 is designed to be used with rulers and templates and has a height that can be adjusted for proper stitch formation. A control box 48 is provided to control the operation of the stitcher 10.
The tablet device 50 is programmed to provide hands-free manipulation of the modes of the stitcher 10, as well as requests for functions such as “needle up”, “needle down”, “single stitch”, and others. The tablet device 50 prompts the user to speak a command and then listens for the user's spoken command. After receiving the command through its microphone 52, the tablet device interprets the voice command. If the tablet device 50 is able to interpret the voice command, the command is executed according to the flow chart, as described below.
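As a minimal sketch of this prompt-and-listen step, assuming the Android speech recognition intent referenced later in this description, the device might launch the recognizer with an on-screen prompt. The activity name, request code, and prompt text below are illustrative assumptions and are not part of the disclosed system.

```java
// Illustrative sketch only: prompts the user and listens for a spoken
// stitcher command using Android's built-in speech recognizer.
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;

public class StitcherVoiceActivity extends Activity {

    // Hypothetical request code used to identify the recognition result.
    static final int VOICE_COMMAND_REQUEST = 1001;

    // Called when the tablet is ready to accept a spoken command.
    void promptForVoiceCommand() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        // Free-form model suits short phrases such as "needle up" or "single stitch".
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        // On-screen prompt shown while the device listens through its microphone.
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak a stitcher command");
        startActivityForResult(intent, VOICE_COMMAND_REQUEST);
    }
}
```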
First, the device 50 digitizes the voice command. The device 50 transmits the digitized command to a speech recognition algorithm. The algorithm may be resident on the device's processor or memory, or the device may transmit the voice command to a remote system having such an algorithm. In one embodiment, the device 50 utilizes its wireless connection to transmit the voice command to Google's® speech recognition algorithm via the Internet. The speech recognition algorithm converts the digitized speech to a list of words, providing an array of possible words or phrases spoken. In a preferred embodiment, the array is an ASCII array. The array is then parsed by the device 50 to identify recognized commands. A list of recognized commands is maintained in the memory of the device 50, and the array is compared to this list to find matches.
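A minimal sketch of this parsing step, assuming the recognizer's candidate phrases are delivered through the standard RecognizerIntent.EXTRA_RESULTS extra, follows. The CommandParser class and its sample command strings are hypothetical; the description does not enumerate the stored command list.

```java
// Illustrative sketch only: compares the recognizer's array of candidate
// phrases against a command list held in the device's memory.
import android.content.Intent;
import android.speech.RecognizerIntent;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public final class CommandParser {

    // Hypothetical sample of the recognized-command list kept in memory.
    private static final List<String> RECOGNIZED_COMMANDS =
            Arrays.asList("needle up", "needle down", "single stitch");

    // Parses the recognizer result (delivered to onActivityResult) and returns
    // the first candidate phrase that matches a recognized command, or null.
    public static String findRecognizedCommand(Intent recognizerResult) {
        ArrayList<String> candidates =
                recognizerResult.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
        if (candidates == null) {
            return null;
        }
        for (String phrase : candidates) {
            String normalized = phrase.trim().toLowerCase(Locale.US);
            if (RECOGNIZED_COMMANDS.contains(normalized)) {
                return normalized;
            }
        }
        return null;
    }
}
```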
Once a match is found within the list of recognized commands, the device 50 performs the identified command. The command may be a machine operation mode change or a command to manipulate the machine in some way such as raising or lowering the needle or making a single stitch. Once the command is initiated by the device 50, it returns to its ready state to receive another command.
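A possible sketch of this dispatch step is shown below. The StitcherLink interface and its method names are hypothetical stand-ins for the tablet's communication module, since the description does not specify the protocol between the tablet device 50 and the control box 48.

```java
// Illustrative sketch only: maps a matched command phrase to a machine
// operation sent over the tablet's communication link.
public final class CommandDispatcher {

    // Hypothetical abstraction over the link between the tablet and the stitcher.
    public interface StitcherLink {
        void raiseNeedle();
        void lowerNeedle();
        void singleStitch();
    }

    // Performs the identified command; the caller then re-prompts for the next one.
    public static void execute(String command, StitcherLink stitcher) {
        switch (command) {
            case "needle up":
                stitcher.raiseNeedle();
                break;
            case "needle down":
                stitcher.lowerNeedle();
                break;
            case "single stitch":
                stitcher.singleStitch();
                break;
            default:
                // Unmatched phrases are ignored; the device stays in its ready state.
                break;
        }
    }
}
```

A phrase matched by the parser above would be passed to execute(), after which the device returns to its ready state and prompts for the next command.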
The device 50 preferably is capable of interpreting voice commands in a variety of languages. In a preferred embodiment, the device 50 utilizes the voice recognition intent function of the Android API.
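If the Android recognition intent is used, the recognition language could, for example, be selected by adding a language extra when the intent is built. The helper class and the example language tag below are assumptions for illustration only.

```java
// Illustrative sketch only: builds a recognition intent for a given
// IETF language tag, e.g. "de-DE" for German or "en-US" for English.
import android.content.Intent;
import android.speech.RecognizerIntent;

public final class RecognitionIntents {

    public static Intent forLanguage(String languageTag) {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        // Requests recognition in the specified language rather than the default.
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, languageTag);
        return intent;
    }
}
```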
As noted above, the device 50 may offer the user the option to control machine functions by voice command or manual selection via a touch screen. In either case, in a preferred embodiment of the system, the device 50 is further provided with a speaker and a text-to-speech algorithm. In a preferred embodiment, the text-to-speech algorithm is a function of the Android operating system. The device 50 is thereby capable of providing audible feedback to the user via a speaker 56 to confirm the selection of commands by either voice command or manual selection. Following receipt of any command, the system responds verbally with words that describe the requested function or mode change.
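A minimal sketch of this audible confirmation, assuming the Android text-to-speech engine identified above, follows. The wrapper class name and the confirmation phrasing are illustrative assumptions.

```java
// Illustrative sketch only: speaks a short phrase describing the requested
// function or mode change after a command is received.
import android.content.Context;
import android.speech.tts.TextToSpeech;
import java.util.Locale;

public final class CommandConfirmer implements TextToSpeech.OnInitListener {

    private final TextToSpeech tts;
    private boolean ready = false;

    public CommandConfirmer(Context context) {
        tts = new TextToSpeech(context, this);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.US);
            ready = true;
        }
    }

    // For example, confirm("needle up") after that command is accepted,
    // whether it was entered by voice or by touch-screen selection.
    public void confirm(String commandDescription) {
        if (ready) {
            tts.speak(commandDescription, TextToSpeech.QUEUE_FLUSH, null);
        }
    }
}
```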
The preferred embodiments of the invention have been described above to explain the principles of the invention and its practical application to thereby enable others skilled in the art to utilize the invention in the best mode known to the inventors. However, as various modifications could be made in the constructions and methods herein described and illustrated without departing from the scope of the invention, it is intended that all matter contained in the foregoing description or shown in the accompanying drawings shall be interpreted as illustrative rather than limiting. Thus, the breadth and scope of the present invention should not be limited by the above-described exemplary embodiment, but should be defined only in accordance with the following claims appended hereto and their equivalents.
This application claims priority to U.S. Provisional Patent Application Ser. No. 61/719,191, filed Oct. 26, 2012.
Number | Date | Country
---|---|---
61/719,191 | Oct. 26, 2012 | US