This invention relates to real time information retrieval of Quranic citations and explanations of the Holy Quran (Tafseer-ul-Quran) in the native language of the user and more particularly to an augmented reality representation and real time information retrieval of Quranic citations and explanations of the Holy Quran (Tafseer-ul-Quran) in the native language of the user corresponding to an image of the Arabic script.
The Holy Quran is the central religious text of Islam, containing the revelations of Allah (God) to Prophet Hazrat Muhammad (peace be upon him, SAW). The Holy Quran is partitioned into thirty (30) equal parts, each called “juz-un” in Arabic or Para in Farsi/Persian. The Holy Quran comprises 114 Sura'ht of unequal length, and each Sura'ht is numbered and displayed with a title. There are a total of about 6,327 Aya'hts in the Holy Quran. The term Ayaht-e-Karmiah refers to verses of the Holy Quran.
In addition to the above, a Hadith is a saying of the Prophet Muhammad (peace be upon him, SAW). The term Tafseer (Tafseer-ul-Quran) refers to interpretations of the Holy Quran based on knowledge of the Holy Quran and Hadith. Tashreh (Tashreh-ul-Quran) refers to further interpretation of the Holy Quran, Quranic verses and terms, while Diacritics refers to Quranic Arabic, i.e., the traditional Arabic grammar used to visualize Quranic syntax and to guide the reader's pronunciation.
U.S. Patent Publication No. 2011/0143325 of Awad H. Al-Khalaf and Anas L. Nayfeh discloses an Automatic Integrity Checking of Quran Script. The aforementioned publication allows people to check the correctness of printed verses against the authentic version of the Holy Quran. It also provides the ability to check verses of the Holy Quran as written in scientific papers and in web pages. Such a mechanism helps protect the Holy Quran from any distortion.
A more recent U.S. Pat. No. 8,150,160 of Husni A. Al-Muhtaseb, Sabri A. Mahmoud and Rami Qahwaji discloses an Automatic Arabic Text Image Optical Character Recognition Method. As disclosed, the method includes training a text recognition system using Arabic printed text, using the produced models for classification of newly unseen scanned Arabic text, and generating the corresponding textual information. Scanned images of Arabic text and copies of minimal Arabic text are used in the training sessions. Each page is segmented into lines, the features of each line are extracted, and the features are input to a Hidden Markov Model (HMM). Using all of the training data and training features, the HMM runs training algorithms to produce a codebook and language models. In the classification stage, new Arabic text is input in scanned form and passed through line segmentation, where the lines are extracted. In the feature stage, line features are extracted and input to the classification stage, where the corresponding Arabic text is generated.
Further, U.S. Patent Publication No. 2003/0200078 of Luo et al. discloses a System and Method for Language Translation of Character Strings Occurring in Captured Image Data. The aforementioned publication discloses a system and method capable of performing language translation of a graphical representation of a first-language character string within captured image data of a natural image by extracting the image data corresponding to the graphical representation of the text from the captured image data. The extracted graphical representation is then converted into first-language encoded character data that, in turn, is translated into second-language data. The translated text and the captured image can then be displayed together by overlaying the translated text over the graphical representation of the character string in the captured image.
Notwithstanding the above, it is presently believed that there is a current need and a potential commercial market for real time information retrieval of Quranic citations and explanations of the Holy Quran (Tafseer-ul-Quran) in the native language(s) of the reader (user). This need and commercial market should exist because the method and system in accordance with the present invention provide immediate access to real time interpretation and explanation of the Holy Quran, in the user's native language, as explained by authenticated scholars of Islam. The system is available over a handheld device, smartphone or the like, or is accessed through a network.
In essence, a system for real time information and augmented reality representations of Quranic citations and explanations of the Holy Quran (Tafseer-ul-Quran) in the native language of the user, corresponding to an image of the Arabic script, will be available. The system comprises or consists of a programmable computer having, in memory, a copy of the Holy Quran in Arabic script, an explanation of the Holy Quran (Tafseer-ul-Quran), a translation of the Holy Quran from the Arabic into the native language(s), and an explanation of the Holy Quran (Tafseer-ul-Quran) in the native language of the user. The system further comprises or consists of a device, preferably a remote device, for inputting queries regarding the Holy Quran to said programmable computer and for receiving and displaying responses from the computer. Further, the programmable computer performs the operations of taking real time images and inputs through a user interface, segregating a manuscript from the images, detecting the language of the manuscript characters, marking the manuscript characters differentiated by language(s) with at least one block marked as Arabic language script, and detecting the native language of the user. The system or method further comprises or consists of initiating separate search queries using the text of each manuscript block as a query term to locate the manuscript text blocks in interlinked internal and external data storage. Further, the system communicates the certified information for each segregated block to the user device, with a further step of displaying the certified information as augmented reality on the user's device.
The invention will now be described in connection with the accompanying drawing.
As illustrated in
In a first step 10, an image containing textual elements is scanned into the computer and stored in memory. The image is taken in real time via an internal or external input device (including an image scanning interface or camera) or is accessed from an internal or external memory.
If the real-time image is taken from an interface, an autofocus procedure may be initiated, including adjustments to aperture, zoom, brightness, and sharpness of resolution, to form a properly focused real-time image for better visibility and readability and for recognizing textual elements with the best possible accuracy.
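The focusing decision in this step can be illustrated with a simple sharpness score. The following is a minimal sketch, not part of the disclosed system: it scores a grayscale image by the variance of a discrete Laplacian, a common autofocus heuristic, and the function names and the threshold value are illustrative assumptions.

```python
def laplacian_variance(img):
    """Score image sharpness as the variance of a discrete Laplacian.

    img: a 2-D grid (list of rows) of grayscale intensities.
    Higher scores indicate more high-frequency detail, i.e. better focus.
    """
    rows, cols = len(img), len(img[0])
    vals = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # 4-neighbour Laplacian at the interior pixel (r, c).
            vals.append(img[r - 1][c] + img[r + 1][c] +
                        img[r][c - 1] + img[r][c + 1] - 4 * img[r][c])
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)


def is_in_focus(img, threshold=100.0):
    """Illustrative focus test: trigger refocusing below the threshold."""
    return laplacian_variance(img) >= threshold
```

In practice the autofocus loop would adjust aperture and zoom until the score stops improving; the threshold here is a placeholder, not a calibrated value.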
In a second step 12, the objects in the images are segregated by type, and the areas recognized as manuscript are tracked for the next step. In a third step 14, the text manuscript areas or regions are marked with dotted lines.
In a fourth step 16, the characters of the text manuscript are recognized, the language of each text manuscript block is identified, and each area of text manuscript is re-marked to differentiate the languages of the text manuscript areas. Step 16 thus involves two activities: the first activity is to identify the language(s) of the manuscript and, based on the result of the language identification, the second activity re-marks each area of different-language text with a different mark. In a fifth step 18, each text block is taken as a separate query term and interlinked with internal and/or external data stores corresponding to the language of the text block.
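The language-identification activity of step 16 can be sketched with a simple script classifier. The following is an illustrative sketch, not the disclosed recognition method: it tallies characters in the Arabic Unicode blocks against Latin letters and labels each block by the majority script; the function names and the "unknown" label are assumptions.

```python
def detect_script(text):
    """Classify a text block by its dominant Unicode script.

    Counts characters in the Arabic block (U+0600-U+06FF) and the Arabic
    Presentation Forms blocks (used in printed Quranic text) against
    Latin letters, and returns the majority script.
    """
    arabic = sum(1 for ch in text
                 if '\u0600' <= ch <= '\u06FF'
                 or '\uFB50' <= ch <= '\uFDFF'
                 or '\uFE70' <= ch <= '\uFEFF')
    latin = sum(1 for ch in text if ch.isascii() and ch.isalpha())
    if arabic == 0 and latin == 0:
        return "unknown"
    return "arabic" if arabic >= latin else "latin"


def mark_blocks(blocks):
    """Re-mark each detected manuscript block with its identified script."""
    return [(text, detect_script(text)) for text in blocks]
```

A full system would use the recognized OCR output rather than raw code-point ranges, but the per-block marking structure would be the same.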
A sixth step 20 involves initiating separate search queries for each query term to locate the query term values in the interlinked data storage. The sixth step 20 is followed by a seventh step 22 wherein a search query successfully finds the query term(s) in the stored records. Thus, each record containing a set of certified information (including a certified citation, certified key references, original references with human translations, and cross references and definitions with human translations, described and explained in different native languages in basic formats such as text, audio and video) corresponding to the related text (used as the query term) becomes accessible. Further, this step selects the citation references available in the native language of the client and returns the information to the user's device in a real-time environment.
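The lookup of steps 20 and 22 can be sketched as a query against per-language data stores. The following is a minimal illustration only: the store contents, record fields and function name are placeholders, not actual certified data or the disclosed storage layout.

```python
# Illustrative in-memory data stores keyed by manuscript language.
# Record fields loosely follow the "certified information" described
# above; all contents here are placeholders.
DATA_STORES = {
    "arabic": {
        "بسم الله الرحمن الرحيم": {
            "citation": "Surah 1, Ayah 1",
            "translations": {
                "english": "In the name of Allah, the Most Gracious, the Most Merciful.",
            },
        },
    },
}


def retrieve(query_term, block_language, native_language):
    """Look up a manuscript block in the store for its language.

    Returns the certified citation plus the translation in the user's
    native language, or None when no certified record is found.
    """
    store = DATA_STORES.get(block_language, {})
    record = store.get(query_term)
    if record is None:
        return None
    return {
        "citation": record["citation"],
        "translation": record["translations"].get(native_language),
    }
```

A deployed system would query interlinked internal and external stores rather than one dictionary, but the per-block query-term lookup and the native-language selection follow the same pattern.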
Finally, in an eighth step 24, the key information is displayed on the user's device over the same image as an augmented reality representation, and detailed information becomes available for further display on the same image or for associated applications (text matter is displayed by the default text viewer, readings and explanations available in audio are playable with the default audio player, and similarly explanations and interpretations available in video format are playable and streamed through the default audio/video player). The set of certified information found in the seventh step 22 is also available to third party applications, such as text editors or worksheet editors, to facilitate the users (as per user needs).
As an example of the practice of the invention consider the following.
In the case (one embodiment) of an Arabic language manuscript, the best utilization of the system is information retrieval related to the Holy Quran (the religious book of Muslims) from the certified data store containing the key reference citations (Para Number, Ruku Number, Manzil/Sur'ht and Ayaht Number/Label), original references (Ayaht-e-Karmiah and Sur'ht) with human translations, cross references (e.g. Hadith), and definitions/explanations (Tafseer-ul-Quran, Tashreh-ul-Quran and Tashreh-ul-Hadith) human translated and/or described in different local and native languages in basic formats such as text, audio and video by certified experts of the Holy Quran and Hadith and/or Islamic scholars. The set of key information will be displayed on the display of the client device along with the actual text manuscript (over the same image), while the user or other applications of the client device will be able to access further information through the related icons or symbols indicated or marked along with the key information displayed.
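The certified record described in this embodiment can be sketched as a data structure. The following schema is an illustrative assumption, with field names taken from the key references listed above; it is not the actual store layout of the invention.

```python
from dataclasses import dataclass, field


@dataclass
class CertifiedRecord:
    """Illustrative schema for one certified data-store record.

    Field names follow the key references in the description; the
    layout itself is a sketch, not the disclosed storage format.
    """
    para_number: int
    ruku_number: int
    surah: str
    ayah_number: int
    original_text: str
    translations: dict = field(default_factory=dict)      # language -> text
    cross_references: list = field(default_factory=list)  # e.g. Hadith
    tafseer: dict = field(default_factory=dict)           # language -> explanation

    def key_citation(self):
        """Format the key reference citation shown in the overlay."""
        return (f"Para {self.para_number}, Ruku {self.ruku_number}, "
                f"{self.surah}:{self.ayah_number}")
```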
The following illustrates a further embodiment of the invention.
The following glossary of terms is applicable to the present invention.
While the invention has been described in connection with the preferred embodiments, it should be recognized that changes and modifications may be made therein without departing from the scope of the appended claims.