A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owners have no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserve all copyright rights whatsoever. Unless otherwise stated, all trademarks and trade dress disclosed in this patent document, and other distinctive names, emblems, and designs associated with product or service descriptions, are subject to trademark rights. The trademark and trade dress owner also reserves all trademark rights whatsoever.
The present invention relates to systems and devices for facilitating practical learning of human languages and, in particular, to systems and devices for learning languages via remote videoconferencing.
Teleconferencing began with the inception of telephony in the late 19th century, when Alexander Graham Bell commercialized his patent on the telephone. After transcontinental phone infrastructure was laid, including multiplex switchboards, it became possible to connect more than two participants in an audio teleconference. In an early notable teleconference, Bell himself was connected with the mayors of San Francisco and New York and President Wilson, to commemorate the Panama-Pacific International Exposition of 1915. The call was extremely difficult and expensive to connect, and audio teleconferencing would not become accessible to the public for many years to come.
In the 1950s, Bell Labs and AT&T began creating teleconferencing systems for commercial use in earnest, including early video teleconferencing technologies. AT&T's PICTUREPHONE was displayed at the World's Fair in New York, in 1964. The PICTUREPHONE met with a lukewarm reception. The PICTUREPHONE system was extremely expensive and complicated to operate, and the video image quality was poor. AT&T eventually launched general commercial service for the PICTUREPHONE in 1970, but it was rarely used. Teleconferencing, and video teleconferencing in particular, remained expensive and unpopular for decades. But major electronics companies continued to develop new systems. For example, AT&T launched the VIDEOPHONE in 1992, but this teleconferencing system again had a high price tag, and still met with a tepid reception, forcing AT&T to offer large discounts.
In 1999, KYOCERA introduced what might have been the first general commercial offering of a camera-enabled wireless telephone—the VISUALPHONE VP-210. Operating at a rate of 2 frames per second (“FPS”), its front-facing camera allowed users to hold video teleconference calls from remote locations.
In modern times, teleconferencing may be carried out over several possible forms of communications networks, including wireless networks and the Internet. Teleconferencing systems such as GOOGLE MEET, ZOOM, MICROSOFT TEAMS and WEBEX are among the most popular software-based options available over the Internet, and can run at about 30 FPS. This frame rate is some 15× the output of the VISUALPHONE in 1999, and around modern cinematic frame rates. But issues like lag and freezing have remained persistent problems, frequently raised by users. Some teleconferencing software applications include texting (a.k.a. "chat") features and virtual whiteboards, allowing multiple team members on a project to collaborate on written work in real time, in addition to holding oral conversations. Recently, the popularity of these modern platforms has increased, albeit, perhaps, temporarily, in the wake of the ongoing Covid-19 global pandemic beginning in 2019, which has placed a premium on collaboration between people at a safe distance.
Despite increasing, long-felt needs for better teleconferencing systems, all modern offerings remain widely criticized in comparison to in-person conferencing, for several reasons. For example, "telepresence" through such systems may feel less genuine and familiar than actual, in-person meetings, and there are frequent technical issues inherent in teleconferencing technologies, as mentioned above.
Some new teleconferencing systems include robot-mounted cameras and video projection displays in place of a “head” of a robot, and allow the robot to serve as a telepresence avatar of a remote conference participant—moving as if it were that participant's body, through another teleconference participant's physical environment. However, these systems have not bridged the uncanny valley of teleconferencing—users remain uncomfortable with such teleconferencing systems, and there remains a strong, long-felt need for better teleconferencing systems, devices and methods.
In recent years, personal computers have become far more powerful. Extremely powerful, portable personal computers, including personal digital assistants ("PDAs"), have allowed users to record and manage personal information, and conduct complex communications in a wide variety of forms, for decades. For example, as early as the 1970s, small digital wristwatches allowed users to perform personal computing, such as financial arithmetic, and to store information related to personal contacts, such as names, addresses and phone numbers. Like desktop computers, even common smartphones have had enormous computing capabilities for some time.
A wide variety of specialized software products ("Apps") have been designed to be run on personal computers, in which case they are known as "Desktop Apps," on smartphones, in which case they are known as "Smartphone Apps," and/or on remote servers, via virtual computers, over the Internet, in which case they are known as "Web Apps." Apps allow users to provide and receive a wide variety of data, and perform a wide variety of functions based on those data, ranging from online banking to digital gaming.
Some Apps relate to learning languages, such as human languages. For example, many Web Apps are currently available to aid users in learning vocabulary, grammar rules, cultural customs and pronunciation. As another example, some Web Apps allow users to converse via teleconferencing techniques, either through those Web Apps, or with the aid of additional Web Apps (e.g., ZOOM, as discussed above, or DISCORD, as used by DEUTSCH GYM). At least one Web App, namely LEARNCUBE.COM, counts the amount of time that a user has spent speaking in the Web App.
Despite the wide availability and popularity of language learning Apps, and the availability of Apps including virtual classrooms and teleconferencing capabilities, results have been surprisingly poor. For example, one study of open online language learning courses noted completion rates of between 2.4% and 18.2%. See K. Fridriksdottir & B. Arnbjornsdottir, Tracking Student Retention in Open Online Language Courses (2015). And retention rates for language learning Apps appear to be far lower than indicated in that study. Language learning remains extremely challenging through Apps, and there remains a long-felt need for more effective language learning Apps, devices, systems and other techniques.
It should be understood that the disclosures in this application related to the background of the invention, in, but not limited to, this section titled "Background," do not necessarily set forth prior art or other known aspects exclusively, and should not be construed as an admission with respect thereto.
New systems, methods and devices for facilitating collaborative, immersive distance learning of human languages and, in particular, for the practical learning of human languages through conversational speaking via remote videoconferencing techniques, are provided. In some embodiments, a learning system including specialized computer hardware (including peripheral devices and/or sensors) and software creates a series of unique audio-visual ("A/V") conversation GUI tools for uniquely created cohorts of users, including both learning users and teaching users. In some embodiments, the learning system creates such cohorts based, at least in part, on one or more of: a speaking skills estimate for users in a goal learning language; a goal teaching language; another language indicated by users; a Learning Path selection; a real-time assessment of speaking skills; and a Confidence Score. In some embodiments, the learning system creates a series of A/V conversation chat room tools for the cohort of users, and monitors the performance of users in one or more A/V conversation session(s). In some embodiments, the learning system provides one or more users of such a cohort of users with access to additional resources, such as additional A/V chat sessions, if a pre-set threshold for Speaking Time is reached and/or exceeded.
In some embodiments, a human language learning system (the "learning system") includes software modules and GUI tools, aiding in qualifying and grouping cohorts of learning users and teaching users of a human language to be learned by the learning users. In some such embodiments, using the learning system, one or more learning user(s) select at least two languages: a first human language, which is a language in which the user has speaking fluency, and a second, goal learning human language (the "goal learning language"), which the user desires to learn and/or practice speaking. Also using the learning system, one or more teaching user(s) select at least two languages: a "goal teaching language," which is a human language in which such a teaching user has a high level of fluency; and other human language(s), which the teaching user will not be teaching, but in which the teaching user has some ability to converse for purposes of teaching students who are fluent in that language. In some embodiments, a level of language speaking ability is then determined for each user, in each user's goal learning language or goal teaching language, as the case may be. In addition, one or more Learning Path(s) are then selected and/or determined by each of the learning user(s) and the teaching user(s), which Learning Path(s) reflect subject matter interests. In some embodiments, particular learning user(s) are then grouped together in cohorts of users, by a user-matching software module of the learning system, based on such language speaking abilities and Learning Path(s). In some embodiments, the learning system creates and manages chat room(s) for hosting Audio/Video (A/V) Conversations, starting with a large, Group Immersion A/V chat room (e.g., hosting around 30 users), which large, Group Immersion A/V chat room hosts an initial, group immersion A/V Conversation session between those users of that cohort.
One or more language speaking ability assessment modules are provided within the Learning System, in some embodiments, which are then applied to monitored and/or recorded speaking performance of any such learning user and any such teaching user, respectively. In some embodiments, such an assessment module may, at least in part, include an artificial intelligence (e.g., machine learning) sub-module, which compares a variety of factors of each such user against such factors and/or performances of prior learning and teaching users, and their abilities as rated previously, over time (e.g., by administrators coaching the machine learning sub-module with examples of such high- and low-ability prior users). In some embodiments, such an assessment module may, at least in part, include a speaking time recording sub-module, in which each user's amount of time speaking, proportion of time speaking relative to other speakers, and verbal fluidity and talking speed, may be determined. For example, in some embodiments, such a sub-module first transcribes each user's speech during the A/V session into a text transcript, via automatic speech recognition ("ASR"), which transcript is then analyzed for grammatical and/or pronunciation errors, speech rate, and such speaking time and proportion of speaking time.
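By way of illustration only, the following minimal sketch (in Python) shows how such a speaking time recording sub-module might compute per-user speaking time, proportion of speaking time, and talking speed from diarized transcript segments. The segment format and field names are assumptions for illustration, not part of this specification; segment boundaries and word counts are assumed to come from an upstream speech-to-text service.

```python
# Illustrative sketch only: per-user speaking metrics from diarized
# transcript segments. The Segment fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Segment:
    user_id: str
    start_s: float   # segment start time, in seconds
    end_s: float     # segment end time, in seconds
    words: int       # number of words recognized in the segment

def speaking_metrics(segments: list[Segment]) -> dict[str, dict[str, float]]:
    totals: dict[str, dict[str, float]] = {}
    for seg in segments:
        t = totals.setdefault(seg.user_id, {"time_s": 0.0, "words": 0.0})
        t["time_s"] += seg.end_s - seg.start_s
        t["words"] += seg.words
    session_total = sum(t["time_s"] for t in totals.values()) or 1.0
    return {
        user: {
            "speaking_time_s": t["time_s"],
            "proportion": t["time_s"] / session_total,
            "words_per_minute": 60.0 * t["words"] / t["time_s"] if t["time_s"] else 0.0,
        }
        for user, t in totals.items()
    }
```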
In some embodiments, users may then be compared and associated with each other to different degrees, e.g., by a user matching module of the control system. In some embodiments, such a module applies an algorithm that creates a comparative coefficient of speaking ability in a human language (e.g., by regression analysis of a group of weighted comparisons of a group of statistical indicators, such as any of the statistical indicators of speaking performance set forth in this application), applied to each learning user and teaching user.
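As a hedged illustration of such a comparative coefficient, the sketch below combines normalized statistical indicators with fixed weights. In practice such weights might be fit by regression on prior users' rated abilities; the indicator names and weight values here are purely hypothetical.

```python
# Illustrative sketch only: a weighted linear combination of normalized
# indicators (each rescaled to 0..1) into one comparative coefficient.
WEIGHTS = {                # hypothetical weights, e.g., fit by prior regression
    "proportion": 0.4,     # share of session speaking time
    "words_per_minute": 0.2,
    "error_rate": -0.3,    # grammatical/pronunciation errors per minute
    "confidence": 0.1,     # self-reported Confidence Score, rescaled
}

def speaking_ability_coefficient(indicators: dict[str, float]) -> float:
    """Combine normalized indicators into one comparative score."""
    return sum(WEIGHTS[name] * indicators.get(name, 0.0) for name in WEIGHTS)

# Example: a user speaking 30% of the time, at a moderate rate,
# with few errors and high self-reported confidence.
score = speaking_ability_coefficient(
    {"proportion": 0.3, "words_per_minute": 0.5, "error_rate": 0.1, "confidence": 0.8}
)
print(round(score, 3))
```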
Based on the techniques above, in some embodiments, those users of that cohort sharing a particular Learning Path and general language speaking ability are then determined, and the learning system assigns sub-sets of the cohort of users to smaller, Breakout Cohorts (e.g., 3 to 4 users each), based on the results of applying a matching sub-module of the learning system. In some embodiments, each set of users assigned to a Breakout Cohort is then placed into one of several additional, smaller A/V chat rooms ("Breakout Rooms") and subjected to an additional A/V conversation session (a "Breakout Session"), together, and to speaking ability assessment module(s) of the Learning System. In some embodiments, the learning users are provided with tasks to complete during the Breakout Session, and their performance in speaking during the Breakout Session is determined. In some embodiments, at a conclusion of the Breakout Session, each learning user is returned to the Group Immersion A/V chat room from the Breakout Room(s), and provided with a GUI tool for reporting a Confidence Score, meaning a numerical or other self-rating (e.g., on a scale of 1-10) of their confidence that they have spoken well, and/or improved their spoken language skills, in their goal learning language. Also upon so returning each user to the first, Group Immersion A/V chat room, in some embodiments, the control system may report any or all of the ratings, scores, speaking times, and relative speaking times, for each user, in Score Reporting GUIs.
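One way such a matching sub-module might form Breakout Cohorts of 3 to 4 users is sketched below, assuming the users already share a Learning Path and are ranked by their coefficient of speaking ability; this is only one possible grouping strategy, not the system's prescribed algorithm.

```python
# Illustrative sketch only: grouping a cohort into Breakout Cohorts of
# roughly 3-4 users with similar ability, by sorting on coefficient.
def breakout_cohorts(users: list[tuple[str, float]], size: int = 4) -> list[list[str]]:
    """users: (user_id, coefficient) pairs; returns lists of user_ids."""
    ranked = sorted(users, key=lambda u: u[1], reverse=True)
    rooms = [ranked[i:i + size] for i in range(0, len(ranked), size)]
    # Avoid a leftover room of one or two users by merging it into the
    # previous room (which may then briefly exceed the nominal size).
    if len(rooms) > 1 and len(rooms[-1]) < 3:
        rooms[-2].extend(rooms.pop())
    return [[user_id for user_id, _ in room] for room in rooms]
```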
In some embodiments, each user must meet or exceed one or more thresholds for particular score(s) relating to their speech (e.g., a minimum speaking time, or proportion of speaking time, as discussed above), or they are then returned to steps 411 et seq., as set forth in this application, and repeat all such steps until they have so met or exceeded such threshold(s).
In some such embodiments, teaching users may be matched with learning users in such a tranche or cohort only if their assessed and determined language skill level (e.g., by their coefficient of speaking ability) is higher than that of all learning users in that tranche or cohort.
In some embodiments, the Learning System only groups users within a cohort or Breakout Cohort when particular criteria are met. For example, in some embodiments, the Learning System so matches teaching users to a cohort only if the teaching user is tested and verified to be fluent (e.g., a native speaker) in the goal learning language of each learning user of the cohort.
In some embodiments, additional forms of A/V chat rooms and learning tools are also provided by the Learning System, including: Learning Path Rooms, in which particular learning objective(s) are facilitated by the Systems, through guided A/V interactions; and Seasonal/Event Rooms, in which a Session is held on a particular date or occasion, or in connection with another social initiative of the users.
In some embodiments, Learning Users are presented with initial, introductory A/V experiences and instructions, at or about their initial connection to the learning system (e.g., a pre-recorded Welcome video).
In some embodiments, the Learning System facilitates oral conversations between users, in real time, over audio/video chat sessions. For example, the Systems may carry out any or all of the following learning facilitation steps: presenting language learning and interaction instructions, at the outset, when users are connected to Rooms and/or when a Session begins; setting out particular language learning tasks, to be carried out by each user, during the Session; identifying improper word choice, grammar and intonation, which may be detected by the systems and/or users (e.g., the second, teaching user); identifying and/or requiring a particular user to begin speaking and carrying out other language learning instructions; and tracking and calculating the Speaking Time of each user (e.g., with a "Speaking Tracker" aspect of the systems).
As mentioned above, in some embodiments, the Speaking Time for each user must reach a set threshold (e.g., a threshold preselected by the system and/or a user with administrative privileges), or the user is placed in another Room, for another, similar A/V chat session. In some embodiments, the Systems grant special access privileges to a user if the user reaches and/or exceeds the preselected threshold applicable to them. In other words, by reaching their threshold, the user is considered qualified for, and granted, additional access to special aspects of the System, in such embodiments.
In some embodiments, the Speaking Time is displayed as a score for each user, after a Session ends. In some such embodiments, the Speaking Time for each user, and whether they reached their applicable threshold, is displayed to all users, after a Session ends.
In some embodiments, users are provided with a User Interface Tool to rate their confidence in their speaking performance or abilities, at the end of the session. In some embodiments, the users provide a self-rated "Confidence Score" (e.g., from 0 to 100, 100 being the most confident, and 0 being no confidence) in their performance during an A/V chat session. In some embodiments, as with the Speaking Time parameter, above, the Systems grant special access privileges to a user if the user's Confidence Score reaches and/or exceeds the preselected threshold applicable to them (e.g., 70% confidence in their last performance at an A/V chat session). In other words, by reaching their threshold, the user is considered qualified for, and granted, additional access to special aspects of the System, in such embodiments.
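A minimal sketch of such threshold gating follows, covering both the Speaking Time and Confidence Score checks described above; the default threshold values are assumed examples (e.g., the 70% confidence figure above), not values fixed by the system.

```python
# Illustrative sketch only: gating special access on per-user thresholds.
# Default thresholds are hypothetical examples.
def grant_special_access(speaking_time_s: float, confidence: float,
                         min_speaking_time_s: float = 300.0,
                         min_confidence: float = 70.0) -> bool:
    """True if the user qualifies; otherwise the user repeats a session."""
    return speaking_time_s >= min_speaking_time_s and confidence >= min_confidence
```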
Within a GUI Tool for each user, cumulative Speaking Time, across all Sessions in which the user participates, is summed and displayed, among other things.
In some embodiments, a Leader Board is provided, showing the top-ranked speakers within a cohort or the community.
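For illustration, the cumulative Speaking Time and Leader Board ranking described above might be computed as in the following sketch, assuming per-session records of (user, seconds) pairs; the record format is an assumption for illustration.

```python
# Illustrative sketch only: summing Speaking Time across sessions and
# ranking a Leader Board of the top speakers.
from collections import defaultdict

def leader_board(session_records: list[tuple[str, float]], top_n: int = 10):
    cumulative: dict[str, float] = defaultdict(float)
    for user_id, seconds in session_records:
        cumulative[user_id] += seconds
    return sorted(cumulative.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
```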
In some embodiments, points are awarded to the second, teaching user, for aiding learners by speaking and otherwise helping in Sessions. In some embodiments, the points can be redeemed through the Systems for monetary or other awards.
In some embodiments, in addition to the A/V Conversations/Sessions, the Systems have several other features, facilitated by independent software modules which can be separately selected and used by users in a single integrated Learning Path (i.e., a unique learning experience for that user facilitated by the Systems), depending on their language learning goals. These modules include, by way of example:
Where any term is set forth in a sentence, clause or statement (“statement”) in this application, each possible meaning, significance and/or sense of any term used in this application should be read as if separately, conjunctively and/or alternatively set forth in additional statement(s), after the sentence, clause or statement, as necessary to exhaust the possible meanings of each such term and each such statement.
It should also be understood that, for convenience and readability, this application may set forth particular pronouns and other linguistic qualifiers of various specific gender and number, but, where this occurs, all other logically possible gender and number alternatives should also be read in as both conjunctive and alternative statements, as if equally, separately set forth therein.
The embodiments set forth in detail in this application are intended to ease the reader's understanding of the inventions set forth herein and, as such, are only examples of the virtually innumerable alternative embodiments falling within the scope of the application. No specific embodiment set forth in this application should be read as limiting the scope of any claimed inventions.
These and other aspects of the invention will be made clearer below, in other parts of this application. This Summary, the Abstract, and other parts of the application, are for ease of understanding only, and no part of this application should be read to limit the scope of the invention, whether or not it references matter also set forth in any other part.
The features and advantages of the example embodiments of the invention presented herein will become more apparent from the detailed description set forth below when taken in conjunction with the following drawings.
The example embodiments of the invention presented herein are directed to new systems, methods and devices for facilitating the practical learning of human languages and, in particular, to systems and devices for learning languages via remote videoconferencing, which are now described. This description is not intended to limit the application to the embodiments presented herein, which are only examples of the virtually unlimited possible embodiments falling within the scope of the present application. In fact, after reading the following description, it will be apparent to one skilled in the relevant art(s) how to implement the following example embodiments in alternative embodiments, including any possible order, number or other arrangement of components and sub-components (the following order, components, sub-components and/or relationships being non-limiting).
In some embodiments, and assuming that a user of the learning system (not pictured but, e.g., a person using computer 105) has already created and logged in to its account, as set forth in step 403 et seq., discussed below in reference to FIG. 4, the learning system may present a GUI 101, including example GUI tools 103, on a display 107.
In any event, as discussed above, GUI 101 includes several example GUI tools 103, adapted for the user to begin using the learning system (after proper log-in and authentication). For example, one such GUI tool is pictured as a main navigation selection menu 111 including a series of navigable link tools 113 which, when activated by the user, allow the user to navigate to different sets of GUI tools ("pages") presented by the learning system, including sets of GUI tools other than those pictured in the present figure as GUI tools 103 (e.g., presented on other "pages" created by HTML and other programming). For example, in some embodiments, the user may select and activate any of navigable link tools 113, e.g., by using mouse 115 and/or keyboard 116 to place a pointer tool, such as arrow-format pointer tool 117, presented on display 107, over any of the example navigable link tools 113 and "clicking on" them, using mouse 115, or otherwise selecting them with such a data input device. A discussion of some of the example navigable link tools 113, in some embodiments, directly follows. Homescreen navigation link tool 119, in some embodiments, when selected and activated by the user, causes the learning system to present a "home screen" or "dashboard" GUI, which may be a GUI similar to that pictured. In some embodiments, a branding page navigation tool 121 is also, or alternatively, provided, which, in some embodiments, when similarly selected and activated by the user, similarly causes the learning system to present a "home screen" or "dashboard" GUI, which may be a GUI similar to that pictured. A selected courses navigation link tool 123 is also provided, which, in some embodiments, when similarly selected and activated by the user, presents a GUI related to selecting, exploring and engaging in language learning courses, or learning paths, with the aid of the learning system. An extra practice navigation link tool 125 is provided, which, in some embodiments, when similarly selected and activated by the user, causes the learning system to present a GUI related to engaging in ad hoc and/or additional practical language training (other than that previously required and/or scheduled, or part of a planned curriculum or learning path) to be engaged in by the user with the aid of the learning system. An e-mail and/or text messaging inbox navigation link tool 127 is provided, which, in some embodiments, when similarly selected and activated by the user, causes the learning system to present a GUI related to viewing and composing e-mail(s) and/or text message(s) to and from other users of the learning system (or, in some embodiments, administrators of the learning system). A system help navigation link tool 129 is provided, which, in some embodiments, when similarly selected and activated by the user, causes the learning system to present a GUI related to requesting and viewing helpful information, messages and other assistance (e.g., from administrators of the learning system, via support tickets).
Also among the example GUI tools 103 is a scheduling GUI tool 130. In some embodiments, scheduling GUI tool 130 aids a user in creating and scheduling Audio-Visual ("A/V") conversation sessions and other language learning tools available through the learning system, on particular dates and at particular times. In some embodiments, scheduling GUI tool 130 may include navigable links, such as the example navigable links 131, which, when activated by the user, cause the learning system to present more detailed date-and-time scheduling GUI tools for each day of a calendar year(s), which may be used to create A/V conversation chat rooms, as well as to schedule conversation sessions at particular times, with particular participants (in some embodiments, other users). In some embodiments, scheduling GUI tool 130 may include GUI tools for inviting other users to participate in such conversations using such A/V conversation chat rooms.
In some embodiments, upon first accessing GUI 101, the user may be presented with a welcoming GUI tool 133. In some embodiments, welcoming GUI tool 133 introduces the user to GUI 101 and the learning system, with various welcome messages 134, and includes an initial language selection GUI sub-tool 135, for selecting an initial main human language (or, a human language in which the user wishes to receive messages from the learning system). In some embodiments, the learning system supports learning of, and such messaging in, a plurality of human languages. For example, in some such embodiments, the learning system supports learning of, and such messaging in, leading popular human languages. For example, in some such embodiments, the learning system supports learning of, and such messaging in, at least, English, German, Spanish and French. Thus, in some embodiments, after activation by the user, as pictured, and selection of such a first human language of the user, other functions and GUIs of the language learning system may incorporate and reflect that language, as a means for reliably communicating with the user and aiding the completion of further tasks and actions by that user. In some embodiments, welcoming GUI tool 133 also includes a location and time zone selection GUI sub-tool 137, which, when activated by the user, as pictured, allows the user to select their physical location and/or preferred local time zone. In some embodiments, all scheduling and time-based planning functions of the learning system, and related GUI tools presented by the learning system, account for and reflect the user's selected time zone and location. Upon such a selection, and/or upon completing and closing welcoming GUI tool 133, the learning system may proceed to other GUIs and GUI tools, some of which will be discussed below.
As will be apparent to those of ordinary skill in the art to which the present invention applies, a wide range of alternative GUIs and GUI tools may include, or be modified to include or combine with, the GUIs, aspects and techniques of the present invention, as set forth in this application, in some embodiments. The mention, depiction or discussion of any specific language, type or stylization of GUI tools and graphical features are only examples of the virtually unlimited alternatives falling within the scope of the invention.
Generally speaking, the exact detailed embodiments provided throughout this application, including the aspects and techniques set forth in the figures and discussed in detail in this application are, of course, examples, and not limiting. Rather, these embodiments are intended only as a reasonable set of possible example systems, graphics, structures, substructures, GUIs, methods, steps, techniques and other aspects of the present invention, among virtually infinite and innumerable possibilities for carrying out the present invention, to ease comprehension of the disclosure, as will be readily apparent to those of ordinary skill in the art. For example, the description of one particular order, number or other arrangement of any aspects of the present invention set forth herein is illustrative, not limiting, and all other possible orders, numbers, arrangements, etc., are also within the scope of the invention, as will be so readily apparent. Any aspect of the invention set forth in this application may be included with any other aspect, as well as any aspects known in the art, in any number, order, arrangement, or alternative configuration, in particular embodiments, while still carrying out, and falling within the scope of, the invention.
Generally, GUI tools 103, and any other GUI tools and sub-tools set forth in the present application may include a wide variety of alternate forms and media, and any known types of tools for making selections or activating processes and steps set forth in this application may be used in various embodiments, alternatively or in addition to the forms of GUI tools and sub-tools set forth herein. For example, in some embodiments, alternative visual forms of GUI tools, or auditory, haptic or other non-visual GUI tools may be provided.
In some embodiments, GUI 201 and GUI tools 203 include several of the same GUI tools set forth above, with reference to FIG. 1.
In some embodiments, GUI 301 and GUI tools 303 include several of the same GUI tools set forth above, with reference to FIGS. 1 and 2.
Beginning with step 401, in some embodiments, the control system begins by first determining if all required software modules and networks for running the language learning system are available (e.g., on-line and active, with no reported functional errors, in some embodiments). If so, the control system proceeds to subsequent step 403, in which it presents a GUI to one or more users seeking to log in and access resources of the language learning system through a learning user account, to learn the same, particular second, goal learning language, as discussed above. In some such embodiments, such a GUI includes sub-tools for creating and/or logging in to such a learning user account on the control system. In some embodiments, such sub-tools require the entry of secure login credentials for a learning user (e.g., with a username, password and/or 2-factor authentication method) prior to providing account access, or other access to further, encrypted resources of the control system dedicated to such learning users, such as any of such additional resources discussed in this application.
In some embodiments, a user may be presented with options for selecting different types of user accounts, such as accounts for learning users or teaching users, with short- or long-term durations and/or resources. For example, some types of user accounts may require a one-time payment or free trial, for accessing a limited amount of learning resources (e.g., the user's account being limited to 3 A/V conversation learning sessions), while other types of user accounts may require a payment for less limited use of the system, over a longer specified, or unspecified, period of time (e.g., a one-year subscription), in various embodiments. In some embodiments, if such a user selects or is otherwise identified as a teaching user, rather than a learning user, of the language learning system, the control system may proceed to step 429 et seq., in which it performs authentication and log-in steps similar to those set forth above, provided separately for teaching users. In any event, in some embodiments, the control system requires the user to accept legally binding terms and conditions (e.g., via electronic signature) for creating, configuring and using that type of account for purposes set forth in this application, before permitting the user to proceed to subsequent steps, in some such embodiments.
However, assuming that a learning user has successfully logged in to a learning user account, in step 403, and accepted such terms and conditions, in some embodiments, the control system then proceeds to step 405, in which the control system begins to present a series of GUIs to the learning user with GUI tools facilitating practical learning of human languages, in part, by remote teleconferencing, in accordance with any and/or all embodiments set forth in the present application for language learning platforms. In some embodiments, however, if, at steps 403 or 429, the user did not successfully log in to such an account, or accept the terms and conditions, the user may be denied access to any of those GUI tools (e.g., with different GUI aspects informing the user that no account has been set up and that the user has been barred from access to those GUI tools), and the control system returns to the starting position. Assuming, however, at step 403, that the user has accepted the terms and conditions, and the control system has proceeded to step 405, as discussed above, the control system may proceed by requesting that the learning user select a first ("main" or "native") human language, in which the learning user has native or another high level of fluency at the outset (i.e., prior to using the control system). For example, as discussed above, in some embodiments, the learning user may use language selection GUI sub-tool 135, for selecting such a first, main human language. Also in step 405, or, in some embodiments, in an immediately subsequent step, the control system assesses that native or other level of fluency of the user (e.g., by testing or requesting an indication and/or certification of that level of high proficiency in that first, main human language, in various embodiments).
Next, the control system may proceed to step 407, in which it requests that the learning user select a second, goal learning language, as discussed in this application, e.g., using second, goal learning language selection tool 205, as discussed above. Whereas the first, main human language selected by the learning user is, in some embodiments, the user's native human language, or another human language in which the learning user has fluency, the second language is a different human language, which the user desires to learn and/or practice, and in which the user may not have fluency, in some embodiments. For example, in some embodiments, the learning user may desire and/or be required to learn and increase their fluency in the second, goal learning language for any number of various reasons, including business, employment, academic, travel, personal, and/or other reasons. Also in step 407, or, in some embodiments, in an immediately subsequent step, the control system assesses the level of fluency and/or speaking ability of the user in that second, goal learning language (e.g., by testing or requesting an indication and/or certification of that level of proficiency in that second human language, in various embodiments).
Next, the control system may proceed to step 409, in which it requests that the learning user select a Learning Path. As discussed elsewhere in this application, in some embodiments, the learning user may desire and/or be required to learn and increase their fluency in the second, goal learning language for any number of various reasons, including business, employment, academic, travel, personal, and/or other reasons. And, in some such embodiments, the learning user may select and activate one or more "Learning Paths," based on and corresponding with those objectives, within the meaning of this application. In some embodiments, such a Learning Path includes a set of software and/or hardware modules maintained by the control system which, when activated, present GUIs of one or more types for carrying out specialized learning activities addressing the specific needs of each selected Learning Path. In some embodiments, upon the learning user selecting such a Learning Path, the control system creates groups or particular cohorts of users based on their sharing one or more Learning Path(s) (having each selected it at step 409), and then assigns common or group activities, GUIs and/or chat rooms for A/V conversations, as will be discussed below.
Similarly, and returning for the time being to step 429, if a user does not log in to the control system as a learning user, and instead successfully logs in as a teaching user, in some embodiments, the control system proceeds to step 431, in which it, similarly to step 405, requests that the teaching user select a first ("main" or "native") human language, in which the teaching user has native or another high level of fluency at the outset. For example, as discussed above, in some embodiments, the teaching user may use language selection GUI sub-tool 135, for selecting such a first, main human language. Also similarly, in step 431, or, in some embodiments, in an immediately subsequent step, the control system assesses that native or other level of fluency of the teaching user (e.g., by testing or requesting an indication and/or certification of that level of high proficiency in that first, main human language, in various embodiments). However, distinctly from step 405, the teaching user is requested to indicate that this first, native language is a "goal teaching" language, in some embodiments, meaning that the teaching user has an interest in teaching language skills (e.g., leading, assessing, correcting and otherwise guiding the learning of speaking skills of other, learning users) in that first, native language (e.g., using a GUI tool for indicating that such a first, native language is such a goal teaching language for the teaching user). In some embodiments, the control system may verify the native or other high level of fluency that the teaching user so indicates, in subsequent step 433. For example, in some embodiments, in step 433, the control system may conduct a verbal and/or oral proficiency test of the teaching user, and grade the teaching user's performance on such a test. In some embodiments, if, and only if, the teaching user's score or other assessment on such a test (e.g., via an AI sub-module) indicates a level of fluency of the teaching user exceeding a particular pre-set threshold, the control system completes the assignment of the language so selected by the teaching user as a first, native, goal teaching language for the teaching user. In some embodiments, in subsequent step 435, the control system also requests that the teaching user indicate and assign one or more other language(s), which the teaching user will not be teaching, but in which the teaching user has some ability to converse (e.g., a sufficient ability to communicate instructions to learning users in carrying out A/V conversations, as discussed in this application). Also, and similarly to step 409 with respect to learning users, in step 437, a teaching user may be asked to select one or more Learning Path(s), corresponding with their interest(s) and/or experience(s) in using their first, goal teaching language and/or their other language(s).
The control system may next proceed, either from step 409 or step 437, to step 411, in which it creates and schedules a first, "Group Immersion" A/V conversation session, held in a first, Group Immersion A/V chat room, at a pre-planned time and for a pre-planned duration (e.g., by a user using scheduling GUI tool 130, as discussed above). In some embodiments, a relatively large number of users, including both learning users and teaching users, may be assigned to such a Group Immersion A/V conversation session and corresponding chat room. For example, in some embodiments, between 10 and 50 such users are assigned to such a single Group Immersion A/V conversation session and corresponding chat room, at the same Universal Time (e.g., with invitations and chat room links corresponding to the local time zone selected by each user, matching that Universal Time for each user). As another example, in some embodiments, between, or between about, 20 and 40 users are so assigned to such a single Group Immersion A/V conversation session and corresponding chat room. As another example, in some embodiments, 30, or about 30, users are so assigned to such a single Group Immersion A/V conversation session and corresponding chat room. In some embodiments, in advance of the time for such a Group Immersion A/V conversation session, each user within such a cohort is sent an actuable link to such a chat room and, by "clicking on" that link, is next presented with a GUI for engaging in that Group Immersion A/V conversation session (e.g., with the aid of a microphone and/or, in some embodiments, a camera, of each respective user's personal computer, each sharing a communications network with the control system).
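As a sketch of the time-zone handling described above, the snippet below renders a single session time, fixed in Universal Time, into each user's selected local time zone; the user identifiers, zone names and session date are examples only.

```python
# Illustrative sketch only: one Group Immersion session at a single
# Universal Time, rendered per user's selected time zone.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

session_utc = datetime(2024, 5, 1, 17, 0, tzinfo=timezone.utc)  # example date
user_zones = {"user_a": "Europe/Berlin", "user_b": "America/New_York"}

for user_id, zone in user_zones.items():
    local = session_utc.astimezone(ZoneInfo(zone))
    print(f"{user_id}: session at {local:%Y-%m-%d %H:%M %Z}")
```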
In any event, upon entering such a Group Immersion A/V chat room and conversation session, each user of such a cohort may be presented with an introductory media and/or message GUI created by the control system (e.g., prior to entry into the chat room and being permitted to converse with other users, in some embodiments) in step 413. In some embodiments, such an introductory message includes instructions to each user concerning the speaking tasks assigned to them, and goals for each user (e.g., in terms of speaking time and/or as a percentage of total speaking time of all users). The control system may also allow the users to engage in a group discussion, such as an introductory conversation, with other users within the Group Immersion A/V chat room and conversation session, in step 415, in some embodiments. Based on the results of those introductory conversations, in some embodiments, the control system proceeds to steps 417 and 419, in parallel, in which it applies a learning user assessment module and a teaching user assessment module (included within, or connected for communications with, the control system), to the speaking performance of any such learning user and any such teaching user, respectively. In some embodiments, either such assessment module may, at least in part, include an artificial intelligence (e.g., machine learning) sub-module, which compares a variety of factors of each such user, based on performances of prior learning and teaching users, and their abilities as rated previously, over time (e.g., by administrators coaching the machine learning sub-module with examples of such high- and low-ability prior users). In some embodiments, either such assessment module may, at least in part, include a speaking time recording sub-module, in which each user's amount of time speaking, proportion of time speaking relative to other speakers, and/or verbal fluidity and talking speed (in various embodiments), may be determined. For example, in some embodiments, such a sub-module first transcribes each user's speech during the A/V session into a text transcript, via automatic speech recognition ("ASR"), which transcript is then analyzed for grammatical and/or pronunciation errors, speech rate, and such speaking time and proportion of speaking time. In some embodiments, in subsequent step 421, users may then be compared and associated with each other to different degrees, e.g., by a user matching module of the control system. In some embodiments, such a module applies an algorithm that creates a comparative coefficient of speaking ability in a human language (e.g., by regression analysis of a group of weighted comparisons of a group of statistical indicators, such as any of the statistical indicators of speaking performance set forth in this application), applied to each learning user and teaching user. In some such embodiments, such users will later be grouped together, in tranches or cohorts of users, based on having similar or otherwise related (e.g., opposing) coefficients of speaking ability, as will be discussed in greater detail below. However, in some such embodiments, teaching users may be matched with learning users in such a tranche or cohort only if their assessed and determined language skill level (e.g., by their coefficient of speaking ability) is higher than that of all learning users in that tranche or cohort.
In some such embodiments, in step 423, if a teaching user does not have such a higher assessed and determined language skill level than all users in at least one such tranche or cohort, the teaching user is eliminated from, or not so assigned to, all such tranches or cohorts, and re-enters the Group Immersion A/V chat room, where they may attempt to be matched with others again by the control system in such a tranche or cohort, in a subsequent Group Immersion A/V conversation session.
However, in some embodiments, the control system next proceeds, in step 425, to create additional A/V chat rooms, each for a single tranche or cohort of fewer users (e.g., 3 to 4 users each), known as "Breakout Rooms." The remaining teaching users and learning users assigned to such a tranche or cohort are then assigned to one such Breakout Room, and the control system proceeds to the process set forth below, in reference to FIG. 5.
It should be understood that the above steps, and the number and order of steps, are exemplary only of certain embodiments set forth in this application, and are not intended to limit the application in any way. In fact, virtually unlimited alternative orders, numbers and instances of the above steps, along with countless additional and alternative steps, may be performed, within the scope of the present application and inventions herein, as will be readily apparent to those of skill in the art.
As discussed above, in reference to FIG. 4, in some embodiments, the control system groups matched sub-sets of the larger cohort of users into smaller Breakout Cohorts (e.g., 3 to 4 users each).
Next, in step 513, in some embodiments, such Breakout Cohorts are assigned to separate Breakout Rooms (which, as discussed above, are additional, separate A/V chat rooms for hosting an A/V conversation session). In some embodiments, also in step 513, additional introductory message GUIs (one per each such Breakout Room) are created and provided by the control system (e.g., prior to entry into a chat room/Breakout Room, and/or prior to being permitted to converse with other users, in some embodiments). In some embodiments, such an introductory message includes instructions to each user of the Breakout Cohort concerning the speaking tasks assigned to them, and goals for each user (e.g., in terms of a speaking time, and/or a percentage of the total speaking time of all users, that the user should attempt to reach). The control system may also allow the users to engage in a group discussion, such as an introductory conversation, with other users within the Breakout Room, in some embodiments. However, in some embodiments, the control system may create and/or select particular learning tasks to be carried out by the learning users and teaching user(s) within the Breakout Room, in step 515. For example, in some embodiments, one or more users are instructed to begin and/or lead oral discussion of a subject matter topic, introduced by the control system via a GUI. For example, in some such embodiments, such a subject matter topic is introduced by a GUI tool of the control system displaying a title, description and/or other media signifying the subject matter to be discussed, and assigning it to one or more user(s) to begin and/or lead a discussion regarding that subject matter topic. In some embodiments, such a subject matter topic is selected based on the topic being related to a Learning Path selected by one or more users (and/or, in some embodiments, a Learning Path selected by all of the one or more users). In some embodiments, in subsequent step 517, the control system may assign more specific learning tasks and conversational challenges to one or more (or, in some embodiments, each) of the user(s). For example, in some embodiments, the control system may present one or more GUI tools instructing one or more users to complete a speaking task involving the answering of a question spoken (e.g., by a teaching user, by a recording, or by synthesized speech of the control system). As such users begin and lead such discussions, respond to such discussions, and/or complete such speaking tasks, in step 519, in some embodiments, the control system monitors and/or records, and applies a user assessment module to, the speaking performance of each user. In some embodiments, such a speaking performance assessment module may, at least in part, include an artificial intelligence (e.g., machine learning) sub-module, which compares a variety of factors of each such user, such as their patterns of speech. In some embodiments, such a comparison is based on indicators of performances of prior users of similar assessed speaking abilities in the goal learning language, and changes in those abilities as rated previously, over time (e.g., by administrators coaching the machine learning sub-module with examples of such high- and low-ability prior users).
In some embodiments, such an assessment module may, at least in part, include a speaking time recording sub-module, in which each user's amount of time speaking in the Breakout Room, proportion of time speaking in the Breakout Room relative to other speakers speaking in the Breakout Room, and the user's verbal fluidity and talking speed, may be determined. For example, in some embodiments, in step 521, such a sub-module first transcribes each user's speech during the A/V session into a separate text transcript, via automatic speech recognition ("ASR"). In some such embodiments, in step 523, each user's speech during the A/V conversation session (e.g., the transcript thereof, including time signatures for each word spoken) is then analyzed for grammatical errors, which may be identified and indicated to the user and/or a teaching user by specialized GUI tools, and a grammatical error recognition sub-module of the control system, in some embodiments. Also, in parallel, simultaneous step 525, in some embodiments, a pronunciation error recognition sub-module of the control system may review and analyze recordings of each user's speech from the A/V conversation session, and identify and indicate errors in pronunciation by each user, in additional GUI tools. Also, in another parallel, simultaneous step 527, in some embodiments, a syntax and/or word choice assessment sub-module of the control system may review and analyze recordings of each user's speech from the A/V conversation session, and identify and indicate errors or sub-optimal word choice of each user, in additional GUI tools. However, in some embodiments, such indications, as set forth in steps 523, 525 and 527, are not reported until after the Breakout Session has ended (e.g., in some embodiments, after the user has returned to the first, Group Immersion A/V chat room), as will be discussed further below.
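The parallel analysis passes of steps 523, 525 and 527 might be organized as in the following skeleton, in which each checker is a placeholder for a real grammar, pronunciation or word-choice model; no particular recognition library is implied, and the function names are assumptions for illustration.

```python
# Illustrative skeleton only: running the grammar, pronunciation and
# word-choice analyses of steps 523/525/527 in parallel over one transcript.
from concurrent.futures import ThreadPoolExecutor

def grammar_errors(transcript: str) -> list[str]:
    return []   # placeholder: a real grammar checker would go here

def pronunciation_errors(transcript: str) -> list[str]:
    return []   # placeholder: would analyze audio aligned to the transcript

def word_choice_issues(transcript: str) -> list[str]:
    return []   # placeholder: would flag sub-optimal word choices

def analyze(transcript: str) -> dict[str, list[str]]:
    checks = {"grammar": grammar_errors,
              "pronunciation": pronunciation_errors,
              "word_choice": word_choice_issues}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, transcript) for name, fn in checks.items()}
        return {name: f.result() for name, f in futures.items()}
```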
In some embodiments, in step 528, a Breakout Cohort performance score is calculated by the assessment sub-module for each Breakout Cohort, and reported to each user of the respective Cohort via GUI tools, at the end of its Breakout Session. For example, in some embodiments, each user's speaking time is reported via such GUI tools (e.g., as a percentage of total speaking time), as discussed above. As another example, in some embodiments, the assessment module reports whether each user's speaking time meets or exceeds a threshold for successfully completing the Breakout Session. In some embodiments, the assessment module determines whether all users within a Breakout Cohort have met or exceeded such a threshold and, if not, the assessment module determines that the Breakout Cohort has not successfully completed the Breakout Session. And, in some such embodiments, the assessment module reports whether the Breakout Cohort, as a whole, has successfully completed the Breakout Session, as at least part of a Breakout Cohort performance score report. In some embodiments, the assessment sub-module may report a total number of instances of any of the errors discussed above, and/or a total number of instances of correct performance of speaking tasks, as at least part of a Breakout Cohort performance score report through GUI tools, to the user(s), in step 528.
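A minimal sketch of the cohort-level determination described above follows: the Breakout Cohort passes only if every member's speaking time meets the threshold, and the report pairs individual results with the cohort-level outcome. The field names and threshold are assumptions for illustration.

```python
# Illustrative sketch only: pass/fail per user, plus a cohort-level
# outcome that requires every member to meet the speaking-time threshold.
def cohort_report(times_s: dict[str, float], threshold_s: float) -> dict:
    individual = {user: t >= threshold_s for user, t in times_s.items()}
    return {"individual_passed": individual,
            "cohort_passed": all(individual.values())}
```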
After the Breakout Session has concluded (e.g., after a preset duration of time has elapsed after the start time designated for the session), the control system may proceed to step 529, in some embodiments, in which it ends each of the Breakout Rooms, and returns all users to the larger first, Group Immersion A/V chat room, as discussed above in reference to FIG. 4.
In some embodiments, in subsequent step 530, the assessment module ranks and reports each Breakout Cohort's Breakout Cohort performance score report (e.g., on a leader board GUI tool). For example, in some embodiments, the assessment module reports whether each Breakout Cohort has successfully completed their respective Breakout Session (e.g., each user of such a cohort having spoken for a time or proportion of the Breakout Session's total time meeting or exceeding a pre-set minimum threshold for such speaking times.) In some embodiments, for each user of each Breakout Cohort, the assessment sub-module reports a total number of instances of any or all of the errors discussed above (determined in steps 523, 525 and 527) and/or a total number of instances of correct performance of speaking tasks, as a Breakout Session results report through GUI tools, to the user(s).
Also upon so returning each user of each Breakout Cohort to the first, Group Immersion A/V chat room ("Main Room"), in step 531, in some embodiments, the control system requests that each learning user report a Confidence Score, meaning a numerical or other rating (e.g., on a scale of 1-4) of their confidence that they have spoken well, and/or improved their spoken language skills, in their second, goal learning language. Also upon so returning each user to the first, Group Immersion A/V chat room ("Main Room"), in some embodiments, the control system may report any or all of the ratings, scores, speaking times, and relative speaking times, for each user, in Score Reporting GUIs, in step 533. In some embodiments, in step 535, each user must meet or exceed one or more thresholds for particular score(s) relating to their speech (e.g., a minimum speaking time, or proportion of speaking time, as discussed above), or they are then returned to steps 411 et seq., as set forth above, and repeat all such steps until they have so met or exceeded such threshold(s).
It should be understood that the above steps, and the number and order of steps, are exemplary only of certain embodiments set forth in this application, and are not intended to limit the application in any way. In fact, virtually unlimited alternative orders, numbers and instances of the above steps, along with countless additional and alternative steps, may be performed, within the scope of the present application and inventions herein, as will be readily apparent to those of skill in the art.
In some embodiments, GUI 601 and GUI tools 603 include several of the same GUI tools set forth above, with reference to the preceding figures.
In some embodiments, learning users and teaching users, alike, may select a Learning Path(s), corresponding with their interest(s), need(s) and/or experience(s) in using goal learning languages, goal teaching languages and/or other languages, in various embodiments. For example, a user may wish to work on language skills for such a language due to business, employment, academic, travel, personal, and/or other reasons. And, in some such embodiments, a user may select and activate one or more Learning Paths based on and corresponding with those reasons, within the meaning of this application. In some embodiments, such a Learning Path includes a set of software and/or hardware modules maintained by the control system which, when activated, present GUIs of one or more types for carrying out specialized learning activities addressing the specific needs of each selected Learning Path. In some embodiments, upon the learning user selecting such a Learning Path, the control system creates groups or particular cohorts of users based on their sharing one or more Learning Path(s) (having each selected it at step 409), and then assigns common or group activities, GUIs and/or chat rooms for A/V conversations to such users, as discussed above.
In the example pictured, language Learning Path selection tool 605, when activated, allows the selection of one or more such Learning Paths, from an assortment of Learning Path indicator tools, such as the following examples: an individually selectable everyday activities and language Learning Path Indicator button 607, which, when activated by the user, selects everyday activities and common language needs for ordinary living as a language Learning Path for the user selecting it via GUI 601; an individually selectable cultural activities and language Learning Path Indicator button 609, which, when activated by the user, selects cultural exploration and travel as a language Learning Path for the user selecting it via GUI 601; and an individually selectable business and employment Learning Path Indicator button 611, which, when activated by the user, selects business and employment language needs as a language Learning Path for the user selecting it via GUI 601.
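As a non-limiting illustration, the following Python sketch groups users into cohorts by shared Learning Path; the enumeration mirrors the three example Learning Paths pictured above, and all other names are assumptions offered for illustration.

```python
# Illustrative only: cohort grouping by shared Learning Path.
from collections import defaultdict
from enum import Enum

class LearningPath(Enum):
    EVERYDAY = "everyday activities and language"   # button 607
    CULTURAL = "cultural exploration and travel"    # button 609
    BUSINESS = "business and employment"            # button 611

def group_by_learning_path(user_selections):
    """user_selections: iterable of (user_id, set of LearningPath) pairs."""
    cohorts = defaultdict(list)
    for user_id, paths in user_selections:
        for path in paths:
            cohorts[path].append(user_id)
    return cohorts  # each value seeds a cohort sharing that Learning Path
```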
In some embodiments, GUI 701 and GUI tools 703 include several of the same GUI tools set forth above, with reference to the preceding figures.
In some embodiments, GUI tools 703 also include a scheduled A/V chat session recommendation tool 707. As mentioned above, in some embodiments, the learning system uses a matching algorithm to create one or more cohorts of users, and then schedules and creates an A/V chat room to host an A/V chat session for each of those cohorts of users. In some embodiments, where the learning system has already created a number of such cohorts and A/V chat rooms, and a new user signs up and selects a goal language, speaking ability and Learning Path, as discussed above, among other factors and personal characteristics, the learning system may match the new user to one or more such previously-created cohorts and A/V chat rooms and sessions. In some such embodiments, the learning system may then recommend and present selectable options for such a new user to join such cohort(s) and/or A/V chat rooms and sessions, using A/V chat session recommendation tool 707 (e.g., by clicking on any of a plurality of A/V chat session selection tools 709, each corresponding with such an A/V chat session scheduled on a plurality of future dates, as indicated).
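The following Python sketch is one non-limiting form such a matching algorithm might take, scoring previously-created cohorts against a new user's goal language, speaking ability and Learning Path; the weights and field names are assumptions for illustration.

```python
# Illustrative matching sketch; weights and field names are assumptions.
def match_score(user: dict, cohort: dict) -> float:
    score = 0.0
    if user["goal_language"] == cohort["goal_language"]:
        score += 2.0
    if user["learning_path"] in cohort["learning_paths"]:
        score += 1.0
    score -= abs(user["ability"] - cohort["avg_ability"])  # closer is better
    return score

def recommend_sessions(user: dict, cohorts: list, top_n: int = 3) -> list:
    """Return the best-matching scheduled sessions for tool 707 to display."""
    return sorted(cohorts, key=lambda c: match_score(user, c), reverse=True)[:top_n]
```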
In some embodiments, GUI 801 and GUI tools 803 include several of the same GUI tools set forth above, with reference to the preceding figures.
As discussed elsewhere in this application, in some embodiments, the learning system provides such a large, Group Immersion A/V chat room to host an initial, group immersion A/V Conversation session between the users of that cohort. In some embodiments, where the example learning user (shown by avatar or representative picture 805) is present in such a Group Immersion A/V chat room, they may view similar avatars or representative pictures of all other users of the cohort grouped by the learning system and/or hosted in the Group Immersion A/V chat room. Such avatars or representative pictures of the other users are shown, for example, as other user avatars 807.
Also as mentioned above, in some embodiments, at the outset of such a group immersion A/V Conversation session, instructions, tasks or other guidance may be provided to all users within the cohort, as shown by example instructions GUI tool 809.
Information concerning the cohort of users, including their common Learning Path, and a current subtopic (a.k.a., “chapter”) may be provided in some embodiments, for example, as cohort characteristics indicators 811. In addition, in some embodiments, A/V Conversation session time and date indicators 813 may be provided. Finally, an A/V Conversation time-remaining indicator 815 may be provided, in some embodiments, aiding users in understanding the length of time they have remaining to complete speaking before the session expires.
It should be noted that, during such an A/V Conversation session, any and all users of the cohort may speak (e.g., in succession) and listen to one another (e.g., using microphones and speakers within the control system of their local computer), in some embodiments.
In some embodiments, GUI 901 and GUI tools 903 may include some of the same GUI tools set forth above, with reference to the preceding figures.
As mentioned above, in some embodiments, at the outset of such a Breakout Session, additional instructions may be provided to all users within the cohort, as shown by example instructions GUI tool 905. In addition, and also as discussed above, specific speaking tasks and/or other guidance for the Breakout Cohort may be provided, such as example conversational questions, provided in questioning GUI tool 907, which may, in some embodiments, be spoken out loud to all users in the Breakout Cohort by a text-to-speech submodule of the learning system. The users of the Breakout Cohort may be reminded of the time remaining for the Breakout Session, for example, by a remaining time indicator tool 909, and are instructed to make efforts to complete their speaking tasks (e.g., reaching or exceeding a threshold Speaking Time or proportion) within that allotted time, in some embodiments.
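As one non-limiting illustration of such a text-to-speech submodule, the following Python sketch reads conversational questions aloud; the pyttsx3 library and the sample questions are assumptions offered for illustration, not components named in this application.

```python
# Illustrative only: pyttsx3 is an example TTS library, not one named here.
import pyttsx3

def speak_questions(questions):
    """Read each conversational question aloud to the Breakout Cohort."""
    engine = pyttsx3.init()
    for question in questions:
        engine.say(question)
    engine.runAndWait()  # blocks until all queued speech has been rendered

# Hypothetical example questions of the kind shown in questioning GUI tool 907:
speak_questions(["What did you do last weekend?",
                 "What dish would you order at a restaurant?"])
```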
In some embodiments, GUI 1001 and GUI tools 1003 may include some of the same GUI tools set forth above, with reference to the preceding figures.
In some embodiments, GUI 1101 and GUI tools 1103 may include some of the same GUI tools set forth above, with reference to the preceding figures.
In some embodiments, the learning system also provides parting reminders and instructions, in a reminders and instructions GUI tool 1107.
In some embodiments, GUI 1201 and GUI tools 1203 may include some of the same GUI tools set forth above, with reference to the preceding figures.
It should be understood, however, that the example Confidence Score eliciting GUI tool 1205 is only one example of the myriad possible Confidence Score eliciting GUI tools that fall within the scope of the inventions set forth in this application. For example, in alternative embodiments, a Confidence Score or other confidence assessment software module may be provided. In some such embodiments, the confidence assessment software module generates a Confidence Score by measuring different and/or additional data related to confidence, and records a Confidence Score based on an algorithm assessing confidence according to a weighting of such measurements. For example, in some embodiments, such measurements include a speaking hesitancy measurement, meaning an amount of time a user pauses during speaking. As another example, in some embodiments, such measurements include a speaking fluidity measurement, meaning an amount of incorrect or imprecise diction, in comparison to model recordings or data related to diction, for the same words spoken. As another example, in some embodiments, such measurements include at least one facial gesture recognition measurement, meaning a rating of apparent confidence based on recognized facial gestures. Of course, any other measurement known in the art may also be taken, and factored into such an algorithm, in various embodiments, as an alternative, or in addition, to any or all of the above-provided examples of such measurements, to generate a Confidence Score based, at least in part, on such measurements.
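As a non-limiting illustration of such a weighting algorithm, the following Python sketch combines the three example measurements named above into a single Confidence Score; the particular weights, the [0, 1] normalization of the inputs, and the 1-4 output scale are assumptions offered for illustration.

```python
# Illustrative weighting only; weights, normalization and scale are assumptions.
WEIGHTS = {
    "hesitancy": 0.4,  # pause time while speaking, normalized to [0, 1]
    "fluidity": 0.4,   # diction error rate vs. model recordings, in [0, 1]
    "facial": 0.2,     # facial-gesture confidence rating, in [0, 1]
}

def confidence_score(hesitancy: float, fluidity: float, facial: float) -> float:
    """Combine measurements into a Confidence Score on an assumed 1-4 scale."""
    raw = (WEIGHTS["hesitancy"] * (1.0 - hesitancy)   # less pausing => higher
           + WEIGHTS["fluidity"] * (1.0 - fluidity)   # fewer errors => higher
           + WEIGHTS["facial"] * facial)              # confident gestures => higher
    return 1.0 + 3.0 * raw  # map [0, 1] onto the 1-4 scale
```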
In any event, in some embodiments, the learning system may schedule and facilitate additional, specialized A/V interactions, to be held at a later time, based on such a Confidence Score. For example, in some embodiments, future breakout rooms including the user and other users having a similar Confidence Score may be provided by the learning system. And, in some embodiments, the learning system grants or denies special access privileges to the user depending on whether the Confidence Score falls below, reaches and/or exceeds a preselected threshold for confidence set for the user (e.g., with the aid of a Confidence Score threshold setting GUI tool, not pictured in the present figure) or otherwise assessed for the user.
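A non-limiting sketch of how such Confidence-Score-based scheduling and access gating might be structured follows; the bucket width and threshold value are assumptions for illustration.

```python
# Illustrative only; bucket width and threshold value are assumptions.
from collections import defaultdict

CONFIDENCE_THRESHOLD = 3.0  # assumed preselected threshold for special access

def bucket_by_confidence(scores: dict, width: float = 0.5) -> list:
    """scores: user_id -> Confidence Score (1-4). Each bucket groups users
    with similar scores and may seed a future breakout room."""
    buckets = defaultdict(list)
    for user_id, score in scores.items():
        buckets[round(score / width)].append(user_id)
    return list(buckets.values())

def has_special_access(score: float) -> bool:
    """Grant or deny special access privileges at the preselected threshold."""
    return score >= CONFIDENCE_THRESHOLD
```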
Control system 1300 includes an input/output device 1301, a memory device 1303, a long-term data storage device 1305, and processor(s) 1307. The processor(s) 1307 is (are) capable of receiving, interpreting, processing and manipulating signals and executing instructions for further processing and for output, pre-output and/or storage in and outside of the system. The processor(s) 1307 may be general or multipurpose, single- or multi-threaded, and may have a single core or several processor cores, including microprocessors. Among other things, the processor(s) 1307 is/are capable of processing signals and instructions for the input/output device 1301, to cause a user interface to be provided or modified for use by a user on hardware such as, but not limited to, computer system peripheral devices (e.g., a mouse, keyboard, touchscreen and/or other display device 1319), providing specialized tools (e.g., a graphical user interface, a.k.a. a "GUI," providing any of the GUI tools set forth in this application) providing input and output for, and otherwise aiding a user in, the practical teaching and learning of human language(s), at least in part, through A/V conversation sessions and other A/V interactions, with the aid of A/V chat rooms. In some embodiments, such signals and instructions are based on display-controlling and input-facilitating software (e.g., on local machine(s) 1311, display device 1319 or smartphone 1320).
For example, user interface aspects, such as graphical "windows," "buttons" and data entry fields, may present, via, for example, a display, any number of selectable options and/or data entry fields. When an option and/or data entry field is selected or data is entered by a user, such selection and/or data entry causes aspects of the control system to command other aspects of the control system to provide additional instructions, GUI tools or other techniques set forth in the present application related to aiding users in practicing, teaching and learning to speak human language(s), in some embodiments. In some embodiments, such selection and/or data entry causes aspects of the control system to provide access to particular GUI tools as set forth in this application for selecting and activating resources of a language Learning System, comprising or comprised in the control system. For example, and as explained in greater detail elsewhere in this application, the control system may provide GUI tools tailored to a Learning Path of one or more users. In some embodiments, the control system may provide GUI tools for facilitating A/V conversations in chat rooms, as set forth herein. In some embodiments, the control system may calculate ratings and other assessments of performance of users of the system in speaking or teaching a goal learning language (e.g., by monitoring, calculating and comparing such results to pre-set thresholds for completing tasks successfully, and accessing additional resources through the control system as a result, such as additional learning resources and/or monetary rewards).
The processor(s) 1307 may execute instructions stored in memory device 1303 and/or long-term data storage device 1305, and may communicate via system bus(ses) 1375. Input/output device 1301 is capable of input/output operations for the system, and may include and communicate through input and/or output hardware, and instances thereof, such as a computer mouse, scanning device or other sensors, actuator(s), communications antenna(ae), keyboard(s), smartphone(s) and/or PDA(s), networked or connected additional computer(s), camera(s) or microphone(s), mixing board(s), reel-to-reel tape recorder(s), external hard disk recorder(s), additional movie and/or sound editing system(s) or gear, speaker(s), external filter(s), amp(s), preamp(s), equalizer(s), computer display screen(s) or touch screen(s). Such input/output hardware could implement a program or user interface created, in part, by software, permitting the system and user to carry out the user settings and input discussed in this application. Input/output device 1301, memory device 1303, data storage device 1305, and processor(s) 1307 are connected and able to send and receive communications, transmissions and instructions via system bus(ses) 1375. In some embodiments, data storage device 1305 is capable of providing mass storage for the system, and may be or incorporate a computer-readable medium, may be a connected mass storage device (e.g., flash drive or other drive connected to a Universal Serial Bus (USB) port or Wi-Fi), may use back-end (with or without middle-ware) or cloud storage over a network (e.g., the Internet) as either a memory backup for an internal mass storage device or as a primary memory storage means, or may simply be an internal mass storage device, such as a computer hard drive or optical drive.
Generally speaking, the control system may be implemented as a client/server arrangement, where features of the system are performed on a remote server networked to the client, with software on both the client computer and server computer establishing the client and server roles. In any event, the control system may include, or include network connections (e.g., wired, WAN, LAN, 5G, Ethernet, satellite and/or Internet connections) with, any of the example devices or auxiliary devices and/or systems, shown as Internet server(s) 1309, local machine(s) 1311, cameras and microphones 1313, sensor(s) 1314, Internet of things or other ubiquitous computing devices 1315, Blockchain(s) 1317, mouse, keyboard, touchscreen and/or other display device 1319 and smartphone 1320. Similarly, the control system 1300 is capable of accepting input from any of those auxiliary devices and systems, and modifying stored data within them and within itself, based on any input or output sent through input/output device 1301.
Input and output devices may deliver their input and receive output by any known means, including, but not limited to, any of the hardware and/or software examples shown as Internet server(s) 1309, local machine(s) 1311, cameras and microphones 1313, sensor(s) 1314, Internet of things or other ubiquitous computing devices 1315, Blockchain(s) 1317, display device 1319 and smartphone 1320.
While the illustrated example of a control system 1300 in accordance with the present invention may be helpful to understand the implementation of aspects of the invention, any suitable form of computer system known in the art may be used—for example, in some embodiments, a simpler computer system containing just a processor for executing instructions from a memory or transmission source. The aspects or features set forth may be implemented with, and in any combination of, digital electronic circuitry, hardware, software, firmware, middleware or any other computing technology known in the art, any of which may be aided with external data from external hardware and software, optionally, by networked connection, such as by LAN, WAN, satellite communications networks, 5G or other cellular networks, and/or any of the many connections forming the Internet. The system can be embodied in a tangibly-stored computer program, as by a machine-readable medium and propagated signal, for execution by a programmable processor. The many possible method steps of the example embodiments presented herein may be performed by such a programmable processor, executing a program of instructions, operating on input and output, and generating output and stored data. A computer program includes instructions for a computer to carry out a particular activity to bring about a particular result, and may be written in any programming language, including compiled and uncompiled and interpreted languages and machine language, and can be deployed in any form, including a complete program, module, component, subroutine, or other suitable routine for a computer program.
This application claims the benefit of U.S. Provisional Application No. 63/439,255 filed Jan. 16, 2023, the entire contents of which are hereby incorporated by reference herein into the present application.