The present invention generally relates to assisting in correcting speech and language disorders in children and adults. More specifically, the present invention relates to a computer-implemented speech and language system to assist in correcting speech and language disorders in children and adults.
A large number of children and adults struggle with speech and language development. Twenty percent of children between the ages of 3 and 10 exhibit speech and language issues. Delayed language and speaking milestones are generally a sign of a language or speech delay or disorder. Language and speech disorders can exist together or by themselves. Examples of speech disorders are difficulty forming specific words or sounds correctly, or difficulty with making or pronouncing full words or sentences, such as dysarthria and apraxia. Examples of language disorders are language development delays (the ability to understand and speak develops more slowly than is typical), dysphasia (an inability to produce words clearly and use verbal expressions to communicate wants and needs), and aphasia (difficulty understanding or speaking parts of language due, for example, to a brain injury).
Language and/or speech disorders can occur together with other learning disorders that affect reading and writing. Children with language disorders may feel frustrated that they cannot understand others or make themselves understood, and they may act out or withdraw. Language or speech disorders can also be present with emotional or behavioral disorders, such as attention-deficit/hyperactivity disorder (ADHD) or anxiety. Children with developmental disabilities including autism spectrum disorder may also have difficulties with speech and language. The combination of challenges can make it particularly hard for a child to succeed academically and socially. It is therefore crucial that a proper assessment be performed to establish the speech problem a child has, its etiology, and the method of treatment.
Unfortunately, there is a shortage of trained speech-language pathologists (SLPs) that can work with children suffering from speech and language disorders. This shortage is due, in part, to the limited number of openings in graduate programs and the increased need for SLPs as their scope of practice widens, the autism rate grows, and the population ages. Schools worldwide are feeling this shortage the most.
While types of treatment will typically depend on the severity and type of the speech and/or language disorder, most treatment options include physical exercises that focus on strengthening the muscles that produce speech sounds and speech therapy exercises that focus on building familiarity with certain words or sounds. For example, SLPs work with their patients on performing exercises for improving muscle strength, motor control, and breath control and saying word pairs or sentences that contain one or more different speech sounds.
Further, it is most important for the effective treatment of speech and language disorders that patients practice the required exercises daily and see their SLP regularly. However, the lack of SLPs, the expense of online and offline sessions, and, in young patients, an occasional lack of motivation to do the exercises all hinder the progress of treatment.
There is therefore a need to provide a system that would facilitate an effective treatment of speech and language disorders, and more specifically there is a need to provide a computer-implemented automated speech and language system to assist in treating speech and language disorders in children and adults with or without SLPs being present during a treatment session.
The system preferably can provide a decision support system for SLPs and institutional users, such as schools, speech centers, insurance companies, and the like. In particular, the system preferably identifies a baseline as a result of an initial assessment of a user, compares the user's results to age-expected levels of performance, and generates an individualized plan of care (IPOC) so that the user reaches age-expected levels of speech output. The IPOC assigns a series of exercises that can be modified by the system according to the user's progress. The system can allow a trained SLP to modify the IPOC based on her/his professional expertise. A progress report can be generated that includes the effectiveness of the specific treatment plan and exercises. This helps to eliminate issues related to subjective assessments of a treatment plan and progress by SLPs of varying qualifications, experience, and education.
In one aspect, the present invention provides a computer-implemented automated speech and language system to assist in correcting speech and language disorders in children and adults. The system has a device connected to a camera and a processor. The system also includes a non-transitory machine-readable medium comprising instructions stored therein, which, when executed by the processor, cause the processor to perform operations. The operations are: accessing or creating a user profile, selecting a recommended exercise to be performed by the user, detecting the user's face and alignment in front of the camera, determining face key point data, determining an actual data model based on the face key point data, determining a reference model based on a correct performance of the exercise scaled for physical characteristics of the user, comparing the actual data model with the reference model, interpreting whether a result of the comparison between the actual data model and the reference model is within predetermined parameters, and providing feedback in real-time based on the interpretation.
In another aspect, the present invention provides a computer-implemented automated method to assist in correcting speech and language disorders in children and adults. The method provides for accessing or creating a user profile and then selecting a recommended exercise to be performed by the user. Further, the method includes detecting the user's face and alignment in front of the camera and determining face key point data. The method includes determining an actual data model based on the face key point data and determining a reference model based on a correct performance of the exercise scaled for physical characteristics of the user. The method further includes comparing the actual data model with the reference model and interpreting whether a result of the comparison between the actual data model and the reference model is within predetermined parameters. Lastly, the method includes providing feedback in real-time based on the interpretation.
In order that the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, aspects of the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings.
Reference to “a specific embodiment” or a similar expression in the specification means that specific features, structures, or characteristics described in the specific embodiments are included in at least one specific embodiment of the present invention. Hence, the wording “in a specific embodiment” or a similar expression in this specification does not necessarily refer to the same specific embodiment.
Hereinafter, various embodiments of the present invention will be described in more detail with reference to the accompanying drawings. Nevertheless, it should be understood that the present invention could be modified by those skilled in the art in accordance with the following description to achieve the excellent results of the present invention. Therefore, the following description shall be considered an illustrative and explanatory description of the present invention for those skilled in the art, and is not intended to limit the claims of the present invention.
Reference to “an embodiment,” “a certain embodiment” or a similar expression in the specification means that related features, structures, or characteristics described in the embodiment are included in at least one embodiment of the present invention. Hence, the wording “in an embodiment,” “in a certain embodiment” or a similar expression in this specification does not necessarily refer to the same specific embodiment.
A large number of children and adults struggle with speech and language development. In children, delayed language and speaking milestones are generally a sign of a speech delay or disorder. Examples of speech disorders are difficulty forming specific words or sounds correctly, or difficulty with making or pronouncing full words or sentences, such as dysarthria (e.g., a speech disorder in which a child knows what to say and understands the message they are trying to deliver, but cannot do so due to neurological, physiological or anatomical difficulties and disorders, such as cleft lip/palate, neonatal asphyxia, and cerebral palsy). Examples of language disorders are language development delays (the ability to understand and speak develops more slowly than expected), auditory processing disorder (difficulty understanding the meaning of sounds), and aphasia (difficulty understanding or speaking parts of language due, for example, to a brain injury). Unfortunately, there is a shortage of trained speech-language pathologists (SLPs) who can work with children and adults suffering from speech and language disorders.
Most treatments for a language or speech delay or disorder include options such as speech therapy exercises that focus on physical exercises that strengthen the muscles that produce speech sounds and build familiarity with certain words or sounds. SLPs work with their patients on practicing at the sound, word, sentence, and free speech levels. These exercises contain targeted sounds in isolation, in combination with vowels, and in the initial, medial, and final positions of a word.
For the effective treatment of speech and language disorders it is imperative that patients practice daily. Moreover, the patients must perform the exercises correctly and focus on the details of the exercises that ensure progress and, ultimately, successful completion of the treatment. The user (patient) and/or his or her guardian often does not have the required expertise to correctly perform an exercise and identify mistakes while performing it. When the user is a child, parents also lack the proper training to guide the child to correctly perform the exercises. As access to SLPs is not always readily available, the exercises are often performed incorrectly, and even daily performance of the exercises does not bring the intended results.
In order to mitigate the foregoing issues, the present invention provides a computer-implemented system that facilitates an effective treatment of speech and language disorders in children and adults without SLPs being present during a treatment session. According to embodiments of the present invention, the system assists its user in improving and/or increasing muscle strength, agility and stability, which in turn helps users improve their speech output quality. The system provides visual cues and verbal feedback that help with self-correction and the training process, which leads to better carry-over and provides better results in therapeutic intervention. The system provides a decision support system for SLPs and institutional users, such as schools, speech centers, insurance companies, and the like. In particular, the system provides real-time feedback during performance of an exercise and grading upon completion of each exercise. Those parameters are included in progress reports generated by the system. During each session the user follows his or her exercise routine assigned at the time of the initial assessment described below. These reports are accessible by the treating SLP as well as other entities involved in the care of a user. This eliminates issues related to subjective assessments of a treatment plan and progress by SLPs of varying qualifications, experience, and education.
More specifically, the system of the present invention is configured to determine a baseline for each user by identifying problems to be corrected (e.g., which of the user's sounds are unexpected and/or disordered for the specific age). The system determines the baseline based on an initial evaluation of the user. Initial information is collected, such as name, age, gender, and the like. In addition, the following evaluation parameters are determined: evaluation of facial structure (symmetry, anomalous movement), jaw assessment (mobility and symmetry), bite and teeth assessment, lip assessment, sound assessment, and tongue assessment. Other assessments and information necessary to determine the baseline for a specific user can be included.
Further, the system sets a treatment goal based on a comparison between the baseline assessment and age-expected levels of performance, and automatically determines and recommends an individualized plan of care (IPOC) that contains a personalized set of exercises to be performed by the user to achieve the treatment goal, such as age-expected levels of speech output. The system allows a trained SLP to modify the IPOC based on her/his professional expertise.
Based on the individualized plan of care, the system guides the user through the set of exercises, assessing the precision of performance of the exercises using a computer vision (CV) and sound processing module with artificial neural networks (ANN). The system is configured to provide personalized real-time feedback and assessment for a specific sound production, word production, free speech output level, exercise, practice and/or exercise module via voice, text and animation. The system assesses each repeat (i.e., recitation) of the exercise or practice on a 0-100% scale as to how precisely the user performs the exercise compared to a model of the exercise performed by a trained SLP. There are, preferably, seven to ten repeats of each exercise and/or practice. The assessment is expressed as a precise number (e.g., 10%, 20%, 93%, and so on). Upon completion of the exercise (i.e., completion of all repeats), the system generates a rating (grading) for the overall performance of the exercise (of all the repeats). The rating has a scale of 0-100% and is preferably grouped as follows: 90%-100% (super), 70%-90% (good), 40%-70% (nice try), and less than 40% (too many mistakes). The system is configured to provide, upon completion of an exercise, daily log reports using the assessments and ratings that can be accessed at any time and are accumulated in a user's chart. In addition, the system generates progress reports upon completion of the exercise that can include all the foregoing assessments, ratings, and other data, such as recommendations and/or modifications of the individualized plan of care, whether the user followed the individualized plan of care, and how regularly the user performed the exercises.
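The per-repeat scores and the overall rating groups described above lend themselves to a straightforward mapping. The following Python sketch is purely illustrative; the function name and the choice of inclusive lower bounds at the 40%, 70% and 90% boundaries are assumptions, as the disclosure leaves boundary membership open:

```python
def grade_exercise(repeat_scores):
    """Average the per-repeat precision scores (each 0-100%) into an
    overall exercise rating and map it onto the disclosed groups."""
    overall = sum(repeat_scores) / len(repeat_scores)
    if overall >= 90:
        label = "super"
    elif overall >= 70:
        label = "good"
    elif overall >= 40:
        label = "nice try"
    else:
        label = "too many mistakes"
    return overall, label
```

For example, seven repeats scored at 80% each would average to 80% and yield the "good" group.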
To achieve a high level of precision in the user's performance of the exercise, the system is configured to create a reference model of the user performing the exercises and compare the reference model against an actual model of the user performing the exercise in real-time. The reference model is determined based on an optimal model of the exercise performed by a trained SLP but scaled for the user's actual facial contour, characteristics, and physical features.
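One simple way such scaling could be realized is a per-axis rescaling of the SLP's key-point coordinates by the ratio of the user's measured face dimensions to the SLP's. The Python sketch below is an illustrative assumption; the disclosure does not specify the scaling method, and the function and parameter names are hypothetical:

```python
def scale_reference_model(optimal_points, slp_face_size, user_face_size):
    """Rescale the SLP's optimal key points (expressed in the SLP's
    face coordinates) to the user's facial proportions, producing the
    per-user reference model.  Face sizes are (width, height) pairs."""
    sx = user_face_size[0] / slp_face_size[0]
    sy = user_face_size[1] / slp_face_size[1]
    return [(x * sx, y * sy) for (x, y) in optimal_points]
```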
The set of exercises can cover voice (pronunciation and volume of sounds), mimics (lips, tongue, cheeks, teeth position and movement), and gestures (fingers and hands positions and movements). According to embodiments of the invention, there can be eight exercise modules. For each exercise module there are exercises for the development of speech apparatus (e.g., facial expressions, tongue exercises, gestures and the like), the development of sounds (voice) by using specific sounds in various types of scenarios (such as in syllables, in words, in phrases, and in sentences and texts). Moreover, the specific sound that the user is working with will be used in different variations, such as at the beginning of the word, in the middle and at the end, and taking into account combinations of neighboring sounds, for example, nearby consonants or vowels. The system is also configured to allow additional texts and words to be added to the system by, for example, a user or SLP. For example, the following modules can be included:
To clarify, the system is configured to include additional exercises and/or modules to achieve various treatment goals according to the individualized plans of care.
According to embodiments of the invention, each of the constituent parts of the system 100 may be implemented on any computer system suitable for its purpose and known in the art. Such a computer system can include a device 110, such as a personal computer, mobile device (e.g., a mobile phone or tablet), workstation, embedded system, game console, television, set-top box, or any other computer system. Further, the device 110 can include a processor and memory for executing and storing instructions, software with one or more applications and an operating system, and hardware with a processor, memory and/or a graphical user interface display. The device 110 may also have multiple processors and multiple shared or separate memory components. For example, the computer system may be a clustered computing environment or server farm.
According to embodiments of the invention, the system 100 includes a front-facing camera 130. The camera 130 can be embedded into the device 110. For example, a desktop computer or a game console can be used as the device 110, such that the desktop computer or game console is connected by a wired or wireless connection to camera 130. In these cases, camera 130 may be a webcam, a camera gaming peripheral, or a similar type of camera. However, it should be noted that a wide variety of devices 110 and camera 130 implementations exist, and the examples presented herein are intended to be illustrative only.
It is preferred that the camera 130 is arranged at a distance from the user that allows the device 110 to acquire a sequence of images, such as a video sequence, of the user's face movement. Preferably, the user should maintain a constant orientation and position with respect to the camera 130 to allow for a steady sequence of images.
The device 110 has an implemented computer program 140 that operates one or more modules remotely via cloud computing services accessible via network connection. That is, the device 110 can be connected over a network to one or more servers (not shown).
According to embodiments of the present invention, the implemented computer program 140 has an optimal model 170 for each exercise. The optimal model 170 is predetermined and is based on the performance of the exercise by a trained treating professional, for example, a qualified SLP.
According to embodiments of the present invention, the implemented computer program 140 can include a computer vision (CV) module 150.
According to embodiments of the present invention, as illustrated in
As illustrated in the diagram shown in
The user can be provided with visual, voice and text aids to assist the user with proper positioning in front of the camera 130, i.e., the user looks directly into the camera without turning away and does not make any movements that are not related to the performance of the exercise. For example, a “mask” in the form of bunny ears, crowns, hats and the like can be used to provide the user with visual aids for proper positioning. The user can receive a message via voice or text if the CV module 150 detects a foreign object or another person in the frame of the camera.
According to embodiments of the present invention, the CV module 150 includes a set of custom connected algorithmic modules and artificial neural networks (ANN) configured for predicting, for a set of image frames, a set of key points and their temporal and semantic (meaningful features) parameters indicating the movements of the user's face features and muscles to determine a multi-dimensional face data model 235, including face mesh, temporal, semantic (meaningful, face-part-specific features) and key point data. In particular, 90-120 key points can be used. The general evaluation data structure and machine learning (ML) model arrangement for evaluating a specific motion pattern have been generally disclosed in U.S. Patent Publication 2021/0209349 A1 and U.S. Patent Publication 2021/0209350 A1, the entire disclosures of which are herein incorporated by reference.
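As an illustration of how such a face data model might be organized per image frame, the sketch below stores 90-120 key points per frame together with timestamps, from which simple temporal parameters (here, per-point velocities) can be derived. The class and method names are hypothetical and not taken from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class FaceFrame:
    timestamp: float   # capture time of the image frame, in seconds
    key_points: list   # 90-120 (x, y) points for this frame

@dataclass
class FaceDataModel:
    frames: list = field(default_factory=list)

    def add_frame(self, timestamp, key_points):
        # The disclosure states that 90-120 key points can be used.
        if not 90 <= len(key_points) <= 120:
            raise ValueError("expected 90-120 key points per frame")
        self.frames.append(FaceFrame(timestamp, key_points))

    def velocities(self, point_index):
        """Temporal parameter: per-frame velocity of one key point."""
        result = []
        for prev, cur in zip(self.frames, self.frames[1:]):
            dt = cur.timestamp - prev.timestamp
            px, py = prev.key_points[point_index]
            cx, cy = cur.key_points[point_index]
            result.append(((cx - px) / dt, (cy - py) / dt))
        return result
```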
As illustrated in
Further, the subset of facial key points can be selected automatically. For different exercises the same generalized facial temporal-spatial model is used, but different sub-meshes of the facial mesh can be selected for tracking the user's facial expressions.
It is important to note that in addition to the embodiment illustrated by this disclosure, any other representation of the user's face can be used for describing the user's face movement, such as a 2D, 3D or 3.5D mesh of the user's face. The 3.5D representation is preferred as it includes spatiotemporal trajectory features, which contain perspective-projected horizontal and vertical, time, and depth information, thereby providing the most accurate representation of the user's face movements and position. By tracking the positions of the face's key points, or any other representation of the user's face, in the sequence of image frames, the user's movements when performing the exercise can be evaluated. The representation can depend on the type of the camera 130, which can be, for example, a 2D-camera, a 2.5D-camera or a 3D-camera. That is, the face's key points predicted for each image frame can be, for example, 2D-points, 2.5D-points or 3D-points.
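A 3.5D trajectory of this kind can be represented, per key point, as a sequence of (x, y, t, depth) samples gathered across the image frames. The sketch below assumes each frame carries a timestamp and per-point depth; the dictionary layout is a hypothetical choice for illustration:

```python
def build_3_5d_trajectories(frames):
    """Assemble 3.5D spatiotemporal trajectory features: for every
    tracked key point, a list of (x, y, t, depth) samples ordered by
    frame time.  Each frame is {"t": seconds,
    "points": [(x, y, depth), ...]}."""
    n_points = len(frames[0]["points"])
    trajectories = [[] for _ in range(n_points)]
    for frame in sorted(frames, key=lambda f: f["t"]):
        for i, (x, y, depth) in enumerate(frame["points"]):
            trajectories[i].append((x, y, frame["t"], depth))
    return trajectories
```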
According to embodiments of the present invention, as shown in
In addition, to provide the most accurate actual data model 280, the CV module 150 can include a tongue-specific processor component 485, the exemplary diagram of which is shown in
As shown in
According to embodiments of the present invention, the face key point data model 235, the sound-specific data 215 and/or the tongue-specific data 420 are determined separately and simultaneously in real-time but can be interdependent. For example, when the user performs an exercise involving voice and facial expression (both the face key point data model 235 and the sound-specific data 215 are determined), if the user properly pronounces a sound but the muscles' movement is incorrect, the system determines that the exercise is being performed incorrectly. That is, the system 100 is configured to train the user to correctly use the articulatory apparatus (facial expressions) while properly pronouncing sounds (voice).
A set of specific labeled datasets is part of the technological stack, allowing the target ANN characteristics to be achieved. These datasets are semi-automatically and manually generated, gathered, labeled, validated and accessed. Specific ANNs and algorithms were created for preprocessing and filtering large raw datasets.
According to embodiments of the present invention, as illustrated on
The actual data model 280 is compared to the reference model 295 to determine mistakes made by the user during the performance of the exercise. More specifically, as shown in
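One plausible form of such a comparison is a per-key-point deviation test: a repeat is within the predetermined parameters when no tracked point strays farther from its reference position than an allowed tolerance, and the per-repeat precision percentage can be derived from the mean deviation. The following Python sketch is an assumption about the mechanism, not the disclosed implementation:

```python
import math

def compare_models(actual_points, reference_points, tolerance):
    """Compare the actual data model against the reference model.
    Returns (within_parameters, precision_percent): within_parameters
    is True when every key point deviates by at most `tolerance`;
    precision falls off linearly with the mean deviation."""
    deviations = [math.dist(a, r)
                  for a, r in zip(actual_points, reference_points)]
    within = max(deviations) <= tolerance
    mean_dev = sum(deviations) / len(deviations)
    precision = max(0.0, 1.0 - mean_dev / tolerance) * 100.0
    return within, precision
```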
According to embodiments of the present invention, as shown in
As illustrated in
According to embodiments of the present invention, as shown in
A specific pace of the exercise can be predetermined by the system 100 and can be regulated by a signal (e.g., a beeping sound). That is, if the system 100 determines that the user cannot perform the exercise at the recommended pace for the specific exercise, the system 100 will adjust the pace of the exercise by slowing down the pace of the signal.
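The pace adjustment described above can be sketched as a simple feedback rule on the interval between beeps. The multipliers and clamping bounds below are illustrative assumptions; the disclosure only states that the signal is slowed when the user cannot keep up:

```python
def adjust_beep_interval(interval, kept_pace,
                         min_interval=0.4, max_interval=2.0):
    """Lengthen the interval between pacing beeps when the user falls
    behind; shorten it back toward the recommended pace otherwise.
    The interval is clamped to a sensible range (in seconds)."""
    if kept_pace:
        interval *= 0.9   # gently return toward the recommended pace
    else:
        interval *= 1.25  # slow down for a struggling user
    return min(max(interval, min_interval), max_interval)
```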
According to embodiments of the present invention, the computer program 140 can derive a single time period for each exercise, determined as the time difference between a start point and an end point of the exercise. For each feedback 155, a single period for each exercise is evaluated.
According to embodiments of the invention, the system 100 can also include a virtual reality (VR) component 135. The VR component can be realized by the device 110 or, alternatively, by a separate VR device, for example, VR headsets offered by manufacturers such as Samsung, Oculus, Hewlett Packard and the like. The VR device, for example, can include one or more speakers, microphones, and/or headphones. A VR environment may be displayed on the display to provide a computer simulation of real-world elements. Such an immersive VR environment can aid and improve the user's cognitive interactions while performing the exercise. In particular, the VR environment can aid the user by demonstrating through animation how to properly perform the exercise.
The VR component can greatly aid users who suffer from attention deficit disorders (ADD), attention deficit hyperactivity disorder (ADHD) and/or autism spectrum disorders to focus on properly performing the exercise and follow the instructions provided by the system and/or a treating professional. The VR component can be used for individual sessions or group exercises.
As illustrated in
All of the above functionalities are fully automated.
One process that is not automated is U1_1 Expert Control: while the initial assessment is performed by the system and the IPOC is generated by the system, the SLP can manually modify both as she/he feels necessary. Furthermore, the SLP may communicate with a parent to receive any other feedback. The non-automated feedback is optional and is not required by the system 100.
As shown in
In addition, as shown in
Further, user A1_1 (human), who signs into the program, will enter his/her information, limited to name and age; the user A1_1 will then have the option of giving access to an SLP assigned to the user's case, as well as to the entity that covers/pays for the SLP's service (if applicable). That is, user A1_1 is connected to A1_2/SLP, who is connected to U1_1 Expert Control, all during the provision of speech therapy via the exercise routine and automatization practices during the full therapy cycle. The individualized plan of care is generated and recommended to the user A1_1 by the computer program 140 based on the data gathered during the initial assessment. This data will be transferred into a document that describes user A1_1's abilities and disabilities. This document contains the established baseline and the age-expected levels of performance of the user A1_1. The IPOC will then be designed based on this data. The data will be automatically accessible by A1_2/SLP, who is connected to U1_1, so that they can be involved in the process. This assures that Methodological Support and Progress Monitoring is automated /U1_1_1/. It will enable anyone, including but not limited to U1_3, U1_3_1, U1_3_2, U6, U2, to have access to IPOC goals, which will be constantly reassessed based on the assessments, ratings and related statistics and data. As illustrated in
In stage 720, CV module 150 detects the user's face position in front of the camera 130. For example, in stage 720, CV module 150, using information from camera 130, may use image processing techniques to establish that a face is properly positioned in front of camera 130. According to embodiments of the present invention, the system 100 is configured to assist the user, for example, in the form of an animation confirming proper face positioning in front of the camera 130. The animation can be in the form of a contour, or a mask (crown, hat or bunny ears), made visible on top of the user's head image when the user's head is properly positioned in front of the camera 130.
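A minimal positioning check of the kind stage 720 performs could test the size and centering of the detected face bounding box within the camera frame. The thresholds and names below are illustrative assumptions, not values from the disclosure:

```python
def face_properly_positioned(face_box, frame_w, frame_h,
                             min_area_fraction=0.2, center_tolerance=0.15):
    """Return True when the detected face bounding box (x, y, w, h)
    is large enough and roughly centered in the camera frame."""
    x, y, w, h = face_box
    # The face must fill a minimum fraction of the frame.
    if w * h < min_area_fraction * frame_w * frame_h:
        return False
    # The face center must lie near the frame center.
    cx = (x + w / 2) / frame_w
    cy = (y + h / 2) / frame_h
    return abs(cx - 0.5) <= center_tolerance and abs(cy - 0.5) <= center_tolerance
```

When the check fails, the system could trigger the animated mask overlay described above to guide the user back into position.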
In stage 740, CV module 150 determines a set of key points indicating the movements of the user's face features and muscles to provide the actual data model 280 of the user performing the exercise in real-time.
In stage 760, the computer program 140 compares in real-time the actual data model 280 to the reference model 295.
In stage 765, the computer program 140 interprets the comparison of the actual data model 280 to the reference model 295 to determine whether a result of the comparison between the actual data model 280 and the reference model 295 is within predetermined parameters.
In stage 770, the computer program 140 generates feedback 155 in real-time based on the interpretation. For example, the feedback may indicate whether or not the user is following the proper form of the exercise or properly making the required sound. Further, the feedback 155 can include recognition of mistakes made by the user during the performance of the exercise, and recommendations and instructions as to how to improve the user's performance. The feedback 155 can be in the form of text, voice, animation or a combination of various techniques known in the art.
In stage 790, the computer program 140 generates the reports 157. The reports 157 can include a real-time report, report of user's statistics, progress reports, recommendations and other information that can be used by treating professionals, such as SLPs, special care centers, schools, hospitals, insurance companies and the like.
The foregoing detailed description of the embodiments is used to further clearly describe the features and spirit of the present invention. The foregoing description for each embodiment is not intended to limit the scope of the present invention. All kinds of modifications made to the foregoing embodiments and equivalent arrangements should fall within the protected scope of the present invention. Hence, the scope of the present invention should be explained most widely according to the claims described thereafter in connection with the detailed description, and should cover all the possibly equivalent variations and equivalent arrangements.
The present invention can be a system, a method, and/or a computer program product. The computer program product can include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, mobile device, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.