The present invention is in the field of translation means. More specifically, the invention relates to a sign language translation method and system.
The widening use of smartphones and other mobile devices with integrated image acquiring, data processing, display, and communication capabilities enables multiple input and output operations, such as data input by touch presses and multiple gestures.
This advanced interactive user-computing device interface is highly beneficial in providing new expression and communication modes, allowing motion-, hearing-, and speech-challenged persons to express their thoughts as gestures that can be interpreted and converted into other forms of communication.
One of the beneficial uses of the abovementioned user-computing device interface is translating text or speech to sign language. Such a translation is highly desirable for enabling convenient and efficient communication between hearing-challenged and unchallenged community members, as few unchallenged community members are familiar with sign language, and fewer still are skilled in its fluent use. Also, a sufficiently accurate and fast translation is desirable on many occasions in which textual communication is cumbersome (e.g., when a tour guide wants to communicate efficiently with a deaf person).
Mobile text/speech input means (e.g., touch keyboard and/or mobile device's microphones) and display means (i.e., mobile devices' screens or touchscreens, e.g., as indicated by numeral 403 in
Therefore, it is an object of the present invention to provide a sign language translation method that enables an accurate translation into sign language signs, unaffected by similarly written words with different meanings and by phrases that are written identically in different contexts yet carry different meanings.
Other objects and advantages of the invention will become apparent as the description proceeds.
In one aspect, the present invention relates to a computer-implemented method for integrating a sign language translation system into a chat application, the method comprising:
In one aspect, the analysis of the text input is performed by a hierarchical analysis, including:
In one aspect, the method further comprises utilizing statistical tools to rate phrases and words according to their usage frequency in translation searches, thereby improving the speed of translation results.
In another aspect, the present invention relates to a chat application with integrated sign language translation capabilities, comprising:
In one aspect, the chat application with integrated sign language translation capabilities further comprises statistical tools for rating phrases and words based on usage frequency in translation searches.
In one aspect, the analysis of the text input comprises:
In yet another aspect, the present invention relates to a sign language translation method comprising: receiving a text to be translated and hierarchically analyzing one or more sections of said text from the phrase level through individual words, root words, and the composing morphemes and letters thereof while considering the context of the text.
In one aspect, the method comprises:
In another aspect, the method comprises providing a learning module configured to receive captured stills and video images of a person expressing sign language and store the captured images in conjunction with their corresponding textual meaning.
In one aspect, the images are captured by a camera of the user's device.
In one aspect, the corresponding textual meaning is one or more of the following: phrases, words, root words, morphemes, and letters.
In one aspect, the sign-language translation client application is embedded within an Internet browser, a webpage, or a website, thereby enabling a user to select text on a webpage to be translated into sign language, providing the selected text to the central computer via said sign-language translation client application, and displaying the text's translation in sign language via said Internet browser on the user device.
According to an embodiment of the invention, the translation can be from text to sign language or vice-versa.
In one aspect, the method further comprises providing/generating QR code(s) that link to sign language translation of text/speech.
In one aspect, the QR codes are adapted to be assigned to pharmaceutical packaging, prescriptions, and labels to communicate better with hearing-impaired people.
In yet another aspect, the present invention relates to a sign language translation system that comprises:
The above and other characteristics and advantages of the invention will be better understood through the following illustrative and non-limitative detailed description of preferred embodiments thereof, with reference to the appended drawings, wherein:
The present invention relates to a sign language translation method adapted to perform a hierarchical analysis of sections of an entered text to be translated, from the phrase level through individual words, root words, and the composing morphemes and letters thereof, thereby finding the corresponding sign language translation in a fast and accurate manner that is insusceptible to similarly written words with different meanings and to words that are written identically in different contexts yet carry different meanings, where the resulting sign language translation considers the context of the words. The invention also relates to a text-to-sign language translation system for realizing the proposed method. According to an embodiment of the invention, the invention relates to a method and system for integrating a sign language translation system into a chat application.
In the following detailed description, references are made to the accompanying drawings that form a part hereof and in which specific embodiments or examples are shown by way of illustration. These embodiments may be combined, other embodiments may be utilized, and other changes may be made without departing from the present invention's spirit or scope.
Any portion of the entered text that is not translated in step 101 (i.e., for which no phrases are identified in the corresponding phrases database) is analyzed in a second translation step 102, which involves a search for the individual words in a corresponding words database, followed by a search for root words 103 and for smaller structural units such as morphemes 104 and letters 105, until a sign language translation is found for the entire entered text. The outcome of this process is an inclusive sequence of sign language signs.
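The fallback cascade of steps 101-105 can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation: the database contents, sign identifiers (e.g., "THANK-YOU"), and function names are all hypothetical, and the letters database serves as the guaranteed fingerspelling fallback of step 105.

```python
# Hypothetical miniature databases; real systems would hold full lexicons.
PHRASES   = {"thank you": ["THANK-YOU"]}                 # step 101
WORDS     = {"good": ["GOOD"], "morning": ["MORNING"]}   # step 102
ROOTS     = {"walk": ["WALK"]}                           # step 103
MORPHEMES = {"ing": ["ING"]}                             # step 104
LETTERS   = {c: [c.upper()] for c in "abcdefghijklmnopqrstuvwxyz"}  # step 105

def translate_word(word: str) -> list[str]:
    # Step 103/104: strip a known morpheme to reach a root word.
    for suffix, suffix_signs in MORPHEMES.items():
        root = word[: -len(suffix)]
        if word.endswith(suffix) and root in ROOTS:
            return ROOTS[root] + suffix_signs
    # Step 105: fall back to fingerspelling, letter by letter.
    return [s for c in word if c in LETTERS for s in LETTERS[c]]

def translate(text: str) -> list[str]:
    tokens = text.lower().split()
    signs: list[str] = []
    i = 0
    while i < len(tokens):
        # Step 101: try the longest phrase starting at this token.
        matched = False
        for j in range(len(tokens), i, -1):
            phrase = " ".join(tokens[i:j])
            if phrase in PHRASES:
                signs += PHRASES[phrase]
                i, matched = j, True
                break
        if matched:
            continue
        word = tokens[i]
        # Step 102: whole-word lookup, then the word-level fallbacks.
        signs += WORDS[word] if word in WORDS else translate_word(word)
        i += 1
    return signs
```

Because each level only handles what the level above could not, the output sequence always covers the entire entered text, matching the "inclusive sequence" described above.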
According to an embodiment of the invention, method 100 further utilizes statistical tools to rate phrases and words according to their usage frequency in translation searches, thereby providing faster translation results.
In specific embodiments of the invention, there exists a capability to integrate the sign-language translation client application within various web interfaces. This includes integration within an Internet browser, a specific webpage, or an entire website. With this integration, users browsing a webpage can conveniently select a segment of text they wish to translate into sign language. Upon selection, the system facilitates the transfer of this selected text to the sign-language translation client application. As a response to this, an avatar, for instance, avatar 401, may be triggered to appear on the user's screen. This avatar is designed to visually demonstrate the translation of the selected text into sign language in real-time.
To further enhance user experience and interactivity, there's a potential use of JavaScript's Selection API. This API can be triggered when a user selects text on a webpage. It grants access to both the location and the content of the user's selected text. Leveraging this, a selection menu, possibly displayed directly above the selected text, can be introduced. This menu serves as a gateway for users to access the translation application directly and obtain the sign language translation for their chosen text. Moreover, users have multiple options for text input. They can either let the system automatically copy the selected text, opt to do it manually, or even manually input desired text for translation. Once provided, this text is placed into a dedicated input field within the translation application, ready for the translation process.
The present invention also includes an embodiment that integrates the sign language translation method and system into a chat application, such as WhatsApp or other instant messaging platforms. This integration allows users to translate their text messages into sign language directly within the chat application, providing a seamless and convenient translation experience for users.
To enable this integration, the translation method and system are adapted to work in conjunction with the chat application and its user interface. The following describes the technical details of the integration:
By integrating the sign language translation method and system into a chat application, users can easily communicate with each other in sign language, bridging the communication gap between sign language users and non-sign language users. This integration enhances accessibility and inclusivity in digital communication platforms, enabling effective communication for individuals who rely on sign language.
The chat application with sign language translation capabilities disclosed herein addresses the limitations of existing communication systems by providing an integrated solution that allows users to communicate seamlessly in both text and sign language formats. The application incorporates a sign language translation module that translates text-based messages into corresponding sign language representations in real-time. This enables users to communicate with each other, regardless of whether they are proficient in sign language or not, thus enhancing inclusivity and promoting effective communication.
In one embodiment, the chat application includes a user interface that displays chat messages and receives user inputs within the application. Users can enter text-based messages using a text input receiving means provided by the application. The sign language translation module performs analysis of the text input (e.g., hierarchical analysis), leveraging various databases to generate corresponding sign language translations.
According to an embodiment of the invention, the sign language translation module comprises a phrases database, a words database, a root words database, and morphemes and letters databases. The phrases database stores multiple phrases and their corresponding sign language translations, while the words database stores individual words and their corresponding sign language translations. The root words database stores root words and their corresponding sign language translations. The morphemes and letters databases store smaller structural units and their corresponding sign language translations. These databases allow the translation module to effectively translate text inputs into sign language representations.
According to some embodiments of the invention, to optimize the translation process, statistical tools are implemented within the translation module. These tools rate phrases and words based on their frequency of usage in translation searches, ensuring that commonly used expressions and terms are accurately translated. The translation module utilizes a search engine to search the databases and retrieve the appropriate sign language translations for the text input.
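One simple way to realize such frequency-based rating is a lookup wrapper that counts requests per entry and keeps the most frequent entries in a small hot cache checked before the full database search. The class below is a hedged sketch under that assumption; the class name, cache policy, and sizes are all illustrative, not part of the disclosed system.

```python
from collections import Counter


class RatedLookup:
    """Illustrative sketch: rate entries by request frequency and serve
    the most frequent ones from a small hot cache, checked before the
    (presumably slower) full database search."""

    def __init__(self, database: dict, cache_size: int = 8):
        self.database = database
        self.cache_size = cache_size
        self.usage = Counter()   # per-entry request frequency
        self.hot = {}            # cache of the most frequent entries

    def get(self, key):
        self.usage[key] += 1
        if key in self.hot:      # fast path for frequently used entries
            return self.hot[key]
        value = self.database.get(key)
        if value is not None:
            # Promote the entry if it is among the most requested ones.
            top = dict(self.usage.most_common(self.cache_size))
            if key in top:
                self.hot[key] = value
        return value
```

For example, wrapping the phrases database this way means common greetings are answered from the hot cache, while rare phrases still fall through to the full search.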
The chat application further includes a display module that presents the sign language translations in real-time within the chat interface. The sign language translations are seamlessly integrated alongside the corresponding text-based messages, allowing users to communicate effectively using both text and sign language formats. This integration enables real-time communication, enhancing the inclusivity of the chat application and facilitating communication between individuals proficient in different modes of communication.
The chat application with sign language translation capabilities described herein provides an innovative solution for inclusive communication. By seamlessly integrating sign language translations into a text-based chat application, it enables real-time communication between individuals using text and sign language formats. This promotes effective communication, enhances inclusivity, and improves accessibility for individuals with hearing impairments or those who prefer sign language as their mode of communication.
According to an embodiment of the invention, the system comprises QR codes that link to sign language translations of text/speech (e.g., a pre-recorded video clip). The QR codes can be added to different objects/items to enable hearing-impaired people to communicate better and understand information associated with such objects/items. For example, such QR codes can be used on pharmaceutical packaging, prescriptions, and labels to communicate better with hearing-impaired people.
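The payload such a QR code encodes is simply a URL that resolves to the translation clip for a given item. The sketch below assumes a hypothetical service endpoint (`example.com`) and query-parameter names; none of these are defined by the disclosure.

```python
from urllib.parse import urlencode

# Hypothetical base URL of the translation service; the real endpoint
# would be chosen by the system operator.
BASE_URL = "https://example.com/sign-translation"


def qr_payload(item_id: str, language: str = "isl") -> str:
    """Build the URL a printed QR code would encode, linking a physical
    item (e.g., a medicine package) to its pre-recorded sign language
    translation clip. Parameter names are illustrative."""
    return f"{BASE_URL}?{urlencode({'item': item_id, 'lang': language})}"
```

The returned string can then be rendered as a scannable image with any QR library (e.g., the third-party `qrcode` package) and printed on the packaging or label.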
For example, when translation engine 321 receives one or more text sentences from the input module 310, it executes step 101 (of
Finally, translation engine 321 assembles the sign language translation sequences in the original order (i.e., corresponding to the entered text's order) and submits the assembled translation to an output module 330, which submits the assembled translation to the sending user (e.g., by utilizing the communications means of computer 210).
Of course, the translation agent 300 may further comprise common operational software modules such as clients module 340 for managing users and client devices authorized to interact with translation agent 300 (e.g., through corresponding sign-language translation client applications running on mobile devices 220 and 230).
According to an embodiment of the invention, agent 211 may be adapted with a learning module 350, while applications 221 and/or 231 are adapted with a corresponding learning module 351 (not shown), which is configured to receive captured stills and video images of a person expressing sign language signs (i.e., captured by the camera(s) of a smartphone 220 or tablet 230 and correspondingly processed by module 351 of applications 221 and 231) and to store the captured images in conjunction with their corresponding textual meaning (i.e., phrases, words, root words, morphemes, and letters) determined by the person in databases 323-327 (i.e., by interacting with module 350). Thereby, modules 350 and 351 enable the expansion of translation databases 323-327. Furthermore, modules 350 and 351 may be utilized for adding further translation databases 323a-327a (not shown) associated with alternate languages.
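The core of this learning flow is storing captured frames under the textual meaning the person assigns them, in the database for the matching linguistic level. A minimal sketch, assuming the databases are keyed by level name and frame contents are opaque bytes (the function name and structure are hypothetical):

```python
def learn_sign(databases: dict, level: str, meaning: str, frames: list[bytes]) -> None:
    """Store captured frames of a person signing under the textual
    meaning they chose, in the database for the given linguistic level
    (phrases, words, root words, morphemes, or letters)."""
    if level not in databases:
        raise ValueError(f"unknown level: {level}")
    # Append, so repeated demonstrations of the same sign accumulate.
    databases[level].setdefault(meaning, []).extend(frames)
```

Accumulating multiple demonstrations per meaning is what lets the translation databases grow over time, and a parallel set of databases per language would support the alternate-language expansion mentioned above.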
According to another embodiment of the invention, one or more of databases 323-327 are stored locally, for example, on smartphone 220. For example, phrases database 323 can be stored locally; thus, the initial translation search (first step 101 of
Local databases (e.g., phrases database 323) of applications 221 and 231 may also be utilized for generating and maintaining local, personalized translation databases. For example, they may store specific terms that a particular user commonly uses. Such local databases may be uploaded to central computer 210 (e.g., for backup purposes).
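A local-first lookup with a fallback to the central computer can be sketched as below. This is illustrative only: `fetch_remote` stands in for the network call to computer 210, and caching the remote answer locally is an assumption, not a stated requirement.

```python
from typing import Callable, Optional


def lookup_phrase(
    phrase: str,
    local_db: dict,
    fetch_remote: Callable[[str], Optional[list]],
) -> tuple:
    """Check the locally stored phrases database first (step 101 on the
    device); only query the central computer when the phrase is missing
    locally. Returns (signs, source) for illustration."""
    signs = local_db.get(phrase)
    if signs is not None:
        return signs, "local"
    signs = fetch_remote(phrase)
    if signs is not None:
        local_db[phrase] = signs  # cache for future offline use
    return signs, "remote"
```

This keeps the most common (or user-specific) phrases answerable without network latency, while the central databases remain the authoritative source.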
According to an embodiment of the invention, input module 310 and learning modules 350 and 351 may be adapted for receiving speech input (e.g., recorded by a microphone of a mobile device such as smartphones 220 and tablets 230). Furthermore, central computer 210 and mobile devices 220 and 230 may be correspondingly configured with speech-to-sign language translation databases organized and utilized similarly, as described in
One skilled in the art will readily realize that the proposed method and system may also be used vice-versa, for instance, where input module 310 receives captured stills and video images of a person signing, processes them, and output module 330 returns the corresponding text. In some embodiments, a suitable mediating or processing module that may be required for processing input sign language is provided as a separate module from input module 310. According to some embodiments of the invention, distinctive body postures and facial expressions that accompany signing and are necessary to form words properly are also considered while processing the received images. Distinctive body postures and facial expressions constitute one of the five sign components, along with hand shape, orientation, location, and movement. For example, distinctive body postures and facial expressions can be detected by applying suitable Facial Expression Recognition (FER) algorithms (e.g., an Eigenspace technique to locate the face under variable pose, a Haar classifier, the AdaBoost method, etc.).
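A recognizer working in this sign-to-text direction would extract, per observed sign, the five components listed above. A minimal sketch of such a per-sign record (the class name and string encoding of each component are hypothetical; a real system would use feature vectors from the image-processing pipeline):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SignObservation:
    """One observed sign, decomposed into the five sign components:
    hand shape, orientation, location, movement, and the non-manual
    markers (body posture / facial expression, e.g., detected by FER)."""
    hand_shape: str
    orientation: str
    location: str
    movement: str
    non_manual: str  # e.g., "raised eyebrows" marking a yes/no question
```

Matching a sequence of such observations against the translation databases in reverse would then yield the corresponding text output.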
Although embodiments of the invention have been described by way of illustration, it will be understood that the invention may be carried out with many variations, modifications, and adaptations, without exceeding the scope of the claims.
Number | Date | Country | Kind |
---|---|---|---|
283626 | Jun 2021 | IL | national |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/IL22/50577 | May 2022 | US |
Child | 18523497 | US |