This disclosure relates to automatically providing learning activities to users watching a live multimedia lecture as an augmented reality (AR) overlay.
Watching a live lecture online, such as a course lecture, can be difficult. If a student loses attention for a few moments, or otherwise does not understand the content of the lecture even for a few minutes, the student may miss out on important concepts being presented by the professor or lecturer.
Currently, when live lectures are presented through a provider such as Zoom or Webex, students are able to raise their hand and ask the professor a question. In addition, the students are able to chat with each other, with the professor, or with the entire class. However, even if the professor records the lecture and the students are able to save the chats, the chats are not synchronized with the recorded lecture. Accordingly, there is a need for an online learning platform that augments the lecture with information that makes the material and concepts presented in the lecture easier for the student to understand.
The present disclosure, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The figures are provided for purposes of illustration only and merely depict typical or example embodiments.
The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed.
Reference will now be made in detail to various embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. According to the present disclosure, an augmented reality (AR) overlay augments traditional lecture content items (hereinafter referred to as “video lecture content”) with corresponding augmented content, thereby facilitating a user's learning process. Accordingly, the present disclosure provides an augmented reality application, where a live multimedia lecture is captured and recognized by a client device, and corresponding augmented content is retrieved and displayed in relation to a video recording of the video lecture content as an augmented reality (AR) overlay.
The overlay can be presented in any of a number of ways. For example, the overlay can be provided in a fixed space on a screen of the user's device, such as a mobile device. In other examples, the overlay may be presented in relation to the recognized content. For example, as a user scrolls through a recording of the video lecture content (whether it be snippets of the video lecture content or the entirety of the video lecture), augmented content retrieved for particular content (e.g., augmented content associated with the lecture content occurring at a runtime of 10 minutes) may appear as part of the AR overlay. As the recording of the lecture continues to play, and the particular content item (e.g., occurring at a runtime of 10 minutes) is no longer being rendered, the retrieved augmented content may disappear. Augmented content retrieved for other particular content (e.g., augmented content associated with the lecture content occurring at a runtime of 15 minutes) may then appear as part of the AR overlay. As the recording of the lecture continues to play further, and that particular content item (e.g., occurring at a runtime of 15 minutes) is no longer being rendered, the retrieved augmented content may be replaced in the AR overlay with more relevant supplemental content.
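By way of illustration only, the following Python sketch shows one possible way to decide which augmented content belongs in the AR overlay at a given playback position; the item fields, time windows, and example payloads are assumptions for illustration and are not prescribed by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class AugmentedItem:
    start_s: float      # runtime (seconds) at which the item becomes relevant
    end_s: float        # runtime after which the item is no longer relevant
    payload: str        # e.g., a link, note, or quiz reference

def items_for_playhead(items, playhead_s):
    """Return the augmented content that should currently appear in the AR overlay."""
    return [it for it in items if it.start_s <= playhead_s < it.end_s]

# Example: content tied to the 10-minute and 15-minute marks of the lecture.
overlay_pool = [
    AugmentedItem(600, 660, "Reference: Pythagorean theorem summary"),
    AugmentedItem(900, 960, "Practice quiz: right triangles"),
]
print(items_for_playhead(overlay_pool, 610))   # -> the 10-minute item
print(items_for_playhead(overlay_pool, 905))   # -> the 15-minute item
```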
The education platform 110 is communicatively coupled to client devices 130 and 132 via a network 140. A client 130 accesses digital content from education platform 110 through network 140 and presents video lecture content to a user 101. Example client devices 130 include a desktop, a laptop, a tablet, a mobile device, a smartphone, a smart television, a wearable device, a virtual reality device, etc. Client 130 may include software, such as a web browser or other application, for rendering the video lecture content.
User 101 uses computing device 130 to capture a recording of a video lecture 105. A live lecture corresponding to the video lecture 105 may be happening in a physical space, such as a classroom environment, a professor's home office, etc. User 101 may be physically present in the physical space and use their client device 130 to capture a recording of the live lecture, or the user 101 may not be physically present, in which case the video lecture 105 is delivered to client 130 over a network, such as network 150. As an example, users 101 and 102 may be students in a course, such as Geometry, and the live video lecture 105 may correspond to a lecture by a professor on the Pythagoras theorem. The user 101 uses a camera included in client 130 (or otherwise physically or communicatively coupled to client 130, such as via LAN, Bluetooth, etc.) to make a recording of the video lecture 105. The education platform 110 may identify associated content corresponding to the captured recording, and provide supplemental content for display on the client device 130. The client device 130 may display the associated content as an augmented reality overlay in relation to the captured recording.
The client computing devices 130 may include any of a variety of types of augmented reality enabled devices that are capable of capturing audio/video recordings, communicating over a network, and displaying content. By way of example and not limitation, such devices may include smart phones, cameras with wireless network access, laptops, smartwatches, tablets, head-mounted displays, gaming systems, AR glasses, etc. Each client computing device may include, for example, user input devices such as cameras, microphones, joysticks, buttons, touch screens, etc., as well as a display, which could include, for instance, an LED, LCD, plasma screen, projector, etc.
Network 140 enables communications among the entities connected to it through one or more local-area networks and/or wide-area networks. In one embodiment, network 140 is the Internet and uses standard wired and/or wireless communications technologies and/or protocols. Data exchanged over the network 140 can be represented using technologies and/or formats including hypertext markup language (HTML), extensible markup language (XML), and/or JavaScript Object Notation (JSON). In addition, all or some of the transmitted data can be encrypted using conventional encryption technologies such as the secure sockets layer (SSL), transport layer security (TLS), virtual private networks (VPNs), and/or Internet Protocol security (IPsec). In another embodiment, the entities use custom and/or dedicated data communications technologies instead of, or in addition to, the ones described above.
Education platform 110 stores educational content items and serves these items to users of client devices 130 in accordance with some implementations. In the illustrated embodiment, the education platform 110 includes a content repository 120, a user information repository 125, a repository of user customizable hands gesture commands 190, and an educational content and learning activities engine 115, referred to as the education engine 115 hereafter. In some implementations (not illustrated), content repository 120 or a portion thereof, is provided by a third-party (not shown) and may be communicatively networked with education engine 115, such as, via network 140.
In some implementations, content repository 120 may store a collection of educational content from various sources from which content is selected for the AR overlay. In some embodiments, content repository 120 includes a number of content entities, each content entity including content of a similar type, such as textbooks, courses, jobs, and videos. Accordingly, a textbooks content entity is a set of electronic textbooks or portions of textbooks. A courses content entity is a set of documents describing courses, such as course syllabi. A jobs content entity is a set of documents relating to jobs or job openings, such as descriptions of job openings. A videos content entity is a set of video files. An image content entity is a set of images, such as JPEGs, PNGs, etc. Content repository 120 may include numerous other content entities, such as, a massive online open course (MOOC) content entity, a question and answer content entity, a user-generated content entity, white papers, study guides, or web pages. Furthermore, custom content entities may be defined for a subset of users of the education platform 110, such as sets of documents associated with a particular topic, school, educational course, or professional organization. The documents associated with each content entity may be in a variety of different formats, such as plain text, HTML, JSON, XML, or others. Content repository 120 is discussed further with reference to
User information repository 125 stores information for each user of education platform 110, such as for users 101 and 102, and is discussed further with reference to
The repository of user customizable hands gesture commands 190 may store various hands gesture commands that have been defined by one or more users, such as user 102, and maintained by the education system 100. Thus, the education system 100 can implement a plurality of user customizable hands gesture commands that enable a user accessing the education platform 110, for instance user 102, to interact with the video lecture content and any supplemental content simply by entering a hand gesture. In some implementations, each user customizable hands gesture command correlates a pre-defined and recognized hand gesture to the automatic execution of a particular series of actions performed by the command, such as automatically recording a snippet of the video lecture content, or automatically capturing a user-defined note. The user customizable hands gesture commands may be automatically executed on the client device 130b. The user customizable hands gesture command features of the education system 100 are discussed further in detail with reference to
Education engine 115 provides personalized supplemental content for presentation in an AR overlay to users of education platform 110 and is discussed further with reference to
Many conventional features, such as firewalls, load balancers, application servers, failover servers, network management tools and so forth are not shown so as not to obscure the features of the system. A suitable service for implementation of the education platform is the CHEGG service, found at www.chegg.com; other education platform services are known as well, and can be adapted to operate according to the teaching disclosed here. The term “service” in the context of the education platform 110 represents any computer system adapted to serve content using any internetworking protocols and is not intended to be limited to content uploaded or downloaded via the Internet or the HTTP protocol. The term “module” refers to computer program logic for providing a specified functionality. A module can be implemented in hardware, firmware, and/or software. A module is typically stored on a computer-readable storage medium such as a storage device, loaded into a memory, and executed by a processor. In general, functions described in one embodiment as being performed on the server side can also be performed on the client side in other embodiments if appropriate. In addition, the functionality attributed to a particular component can be performed by different or multiple components operating together.
While in this example, and in other examples throughout the disclosure, the multimedia lecture content captured by the client device 130 corresponds to a lecture by an educator, it should be understood that the systems and methods described herein may be applied in any of a variety of other contexts, such as any multimedia recording that can be better understood when presented with an AR overlay of corresponding augmented content. For example, the video recording may be a recording of a group meeting, such as a meeting of a student group, a meeting of co-workers discussing HR policies, etc. Content in the recording may be recognized and used to retrieve an augmented content overlay. For example, in the context of a meeting between co-workers in which an update to the existing HR policy page is being discussed, augmented content such as a hyperlink to an HR policy page on the company's intranet, links to HR policies found online, notes from a previous meeting regarding the same topic, etc., may be overlaid in an AR overlay. Other use cases include live gaming events, live political rallies, live conference meetings, and other live events.
At 210, content processing module 160 extracts metadata from content items in content repository 120, such as, title, author, description, keywords, file size, file type, language of content, publisher, and the like. As an example, for a particular book in content repository 120, extracted metadata may include: “Title: Science 101: Biology”, “Author: Ochoa, George”, “Edition: 1”; “ISBN-13: 978-0060891350”; “ISBN-10: 0060891351”; “Series: Science 101”; “Publisher: Harper Perennial”; “Language: English”; “Date of Publication: Jun. 26, 2007”; “File type: Pdf”; “File Size: 3 GB”.
At 220, content processing module 160 generates and assigns concepts to content items using a learned model, according to one embodiment. The learned model may be generated by a model trainer using an ensemble method, such as linear support vector classification, logistic regression, k-nearest neighbor, naïve Bayes, or stochastic gradient descent. As an example, for a particular chapter (say, chapter 1) in a particular book in content repository 120, content processing module 160 assigns the following concepts: process of science, macromolecules, cell, membranes, energy, enzymes, cellular respiration, and photosynthesis.
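As a non-limiting sketch, the concept assignment at 220 could be implemented with one of the listed learners, such as linear support vector classification, wired into a multi-label pipeline; the training texts, concept labels, and the choice of the scikit-learn library below are illustrative assumptions, not requirements of the disclosure.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# Placeholder training data: text of content items and their assigned concepts.
texts = ["Cells are bounded by membranes ...", "Enzymes lower activation energy ..."]
labels = [["cell", "membranes"], ["enzymes", "energy"]]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(labels)

# Multi-label concept classifier: one linear SVM per concept over TF-IDF features.
model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
model.fit(texts, y)

# Assign concepts to a new content item (e.g., a book chapter).
pred = model.predict(["Cellular respiration converts glucose into energy ..."])
print(binarizer.inverse_transform(pred))
```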
In some embodiments, concepts generated by content processing module 160 are hierarchical in nature, and the hierarchy is utilized when assigning concepts to a particular content item. For example, if content processing module 160 assigns a child concept to a document, the corresponding parent concept is automatically assigned.
In some embodiments, concepts generated by content processing module 160 are seeded by information extracted at block 210. For example, say at 210, content processing module 160 extracts an index and a table of contents for a book. The information in the index and the table of contents is then used by content processing module 160 as seeds to generate the concepts assigned to that book.
In some embodiments, content processing module 160 identifies associations between concepts. Using the identified associations, content processing module 160 generates concept pairs, where concepts in a concept pair are related to each other. In one embodiment, content processing module 160 identifies associations between concepts based on a determination that two concepts that frequently appear in proximity to one another in content items are likely to be related. Accordingly, in one embodiment, the content processing module 160 identifies associations between concepts appearing in proximity to one another in content items of the content repository 120, such as concepts appearing on the same page or concepts appearing in the same section of a document. In one embodiment, the content processing module 160 applies an Apriori algorithm to identify concepts appearing in proximity to one another across multiple content items. Other algorithms identifying associations between concepts in the documents of the content repository 120 may alternatively be used.
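A simplified, frequency-counting stand-in for this association step (in place of a full Apriori implementation) might look as follows; the co-occurrence threshold and the example concept sets are assumptions for illustration.

```python
from collections import Counter
from itertools import combinations

def concept_pairs(docs, min_cooccurrence=2):
    """Identify associated concept pairs from per-document (or per-page) concept sets.

    docs: iterable of sets of concepts appearing in proximity (same page/section).
    Returns pairs that co-occur at least `min_cooccurrence` times.
    """
    counts = Counter()
    for concepts in docs:
        for a, b in combinations(sorted(concepts), 2):
            counts[(a, b)] += 1
    return {pair: n for pair, n in counts.items() if n >= min_cooccurrence}

pages = [
    {"cell", "membranes", "energy"},
    {"cell", "membranes"},
    {"enzymes", "energy"},
]
print(concept_pairs(pages))   # -> {('cell', 'membranes'): 2}
```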
In some embodiments, for concepts assigned to a particular content item, at 225, content processing module 160 also generates an indicator of a relative strength of association between the concepts and the particular content item. For example, for a first concept that is very strongly associated with the particular document, content processing module 160 may assign, say, a score of 0.99, while for a second concept that is only mildly associated with the particular content item, content processing module 160 may assign a score of 0.4.
In one embodiment, the content processing module 160 determines the indicators of relative strength (e.g., scores of 0 to 1) using one or more interestingness measures, such as support, confidence, lift, and conviction. The support supp(x) for a concept x is given by the probability P(x) of the concept occurring in a given document. The confidence conf(x→y) for a concept y occurring in a document given the occurrence of concept x in the document is defined by the conditional probability of y given x, or P(x and y)/P(x). The lift lift(x→y) for a concept y occurring in a document given the occurrence of concept x is given by the observed support for x and y in the document as a ratio of the expected support if x and y were independent concepts, or P(x and y)/[P(x)P(y)]. The conviction conv(x→y) is given by a ratio of the expected frequency of concept x occurring in a document without concept y (assuming x and y are independent concepts) to the observed frequency of x without y, or P(x)P(not y)/P(x and not y).
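The interestingness measures described above can be computed directly from document counts, for example as in the following sketch; the counts in the example call are hypothetical.

```python
def interestingness(n_docs, n_x, n_y, n_xy):
    """Support, confidence, lift, and conviction for the rule x -> y.

    n_docs: total documents; n_x / n_y: documents containing x / y; n_xy: containing both.
    """
    p_x, p_y, p_xy = n_x / n_docs, n_y / n_docs, n_xy / n_docs
    support = p_x                                   # supp(x) = P(x)
    confidence = p_xy / p_x                         # conf(x->y) = P(x and y) / P(x)
    lift = p_xy / (p_x * p_y)                       # lift(x->y) = P(x and y) / (P(x) P(y))
    p_x_not_y = p_x - p_xy
    conviction = (p_x * (1 - p_y)) / p_x_not_y if p_x_not_y else float("inf")
    return support, confidence, lift, conviction

# Example: concept y follows concept x in 40 of the 50 documents that mention x.
print(interestingness(n_docs=200, n_x=50, n_y=60, n_xy=40))
```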
At 230, for each content item in the content repository 120, content processing module 160 generates a content record 300, which is discussed further with reference to
The process shown in
Referring to type metadata 314, the type or format of a content unit is that of digital media and may include, without limitation, text content, image content (e.g., a JPEG or GIF file), audio content, video content, hypertext protocol content, and so on, or any combination of these kinds of content. Content may also include instruction-bearing content, such as machine-readable program code, markup language content, script content, and so forth.
Content record 300 also includes content 320. Content 320 may refer to a book of chapters (e.g., an “eBook”), a chapter of a book, a section within a chapter of a book (such as, questions at the end of the chapter), an issue of a magazine, an entire music album, a song in an album, user-generated notes, etc. In some implementations, the content record 300 does not include the content 320, but instead includes a pointer or link to or other address of content 320, in which case the content itself can be stored outside of the content repository 120, e.g., on a third party server or repository (not shown).
In some embodiments, one or more content records 300 can be broadly categorized as being one of a passive activity, a practice activity, and a user action activity. This category information may be stored as part of the record identifier 310.
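One possible in-memory shape for a content record 300, reflecting the identifier 310, type metadata 314, concept associations, category, and content 320 (or a pointer to it), is sketched below; the field names and the example URL are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ContentRecord:
    record_id: str                      # unique content identifier 310 (incl. category)
    category: str                       # "passive", "practice", or "user_action"
    content_type: str                   # type metadata 314: text, image, audio, video, ...
    concepts: dict = field(default_factory=dict)   # concept -> strength of association
    content: Optional[bytes] = None     # content 320 stored inline ...
    content_url: Optional[str] = None   # ... or referenced by a pointer/address

quiz = ContentRecord(
    record_id="rec-0042",
    category="practice",
    content_type="text",
    concepts={"photosynthesis": 0.99, "energy": 0.4},
    content_url="https://example.com/quizzes/photosynthesis",   # hypothetical address
)
```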
Passive activities include activities in which users interact with reference materials that help the user understand the material being delivered by the lecture. Such reference materials may include a summary, excerpts from books, essays, online documents, lecture notes, a Khan Academy lecture, flashcards, etc.
Practice activities are testing activities that provide the user an opportunity to practice their mastery of a concept and/or test their knowledge, including such activities as assignments, quizzes, multiple-choice exams, and other testing materials. In some embodiments, an educator, such as the one providing the video lecture 105, provides the practice activity to be stored as a content record 300, thereby making sure that the user is paying attention and learning the material being presented by the video lecture 105. For example, the practice activity may be a multiple-choice quiz, such that when the user 101 provides answers to a multiple-choice quiz, those answers are stored and/or processed by educational platform 110, including by providing the user-generated quiz responses and/or a success score or rating to the educator.
Further, in some embodiments, user action content records 300 include user action content that specifies any action that may be desired or required of the user. In some embodiments, user actions can be captured by the system from the user 101 or from the educator presenting the lecture. Examples of user action content include opening the textbook to a certain location, or creating an appointment, a reminder, or an automated email, etc. The appointment, reminder, etc., may be, for example, to turn in homework, open a particular page of a book, read certain secondary sources, answer a particular quiz, send an email, and so on. The user 101 may also create a user-generated note corresponding to a particular snippet. The user-generated content may include voice notes, written notes, etc. The ability to add user-generated content that can be presented as an AR overlay on the snippet of the lecture to which the user-generated content corresponds is an important aspect of the present disclosure.
Referring again to
A user record 400 may include: a unique record identifier 410 that identifies the particular user record; identification information 415 for the user, such as, the user's name, email address, address, mobile device number, etc.; educational biographical information 420; and historical access information 430 including records of user's activities on the educational platform 110.
Educational biographical information 420 may include historical and current biographical information, such as universities attended by the user, courses taken, grades in those courses, courses currently registered for, major(s) declared, degree(s) obtained, degree(s) user wishes to obtain, and so on.
Historical access information 430 indicates which content 320 (or associated content records 300 as identified by their unique content identifiers 310) in content repository 120 has been accessed by user 101. Access information 430 may also indicate the amount of time spent 424 by user 101 on each content item, and optional timestamps 426 of the time of access. Access information 430 may also indicate attributes of interaction 428 by user 101 with content 322.
A user record 400 may further include a learning profile 440 for user 101. The determination of learning profile 440 by education platform 110 is described further with reference to
In some embodiments, learning profile 440 optionally includes a differential learning profile 450. The determination of differential learning profile 450 by education platform 110 is described further with reference to
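Similarly, a user record 400 could be represented as follows; the field names are illustrative assumptions that mirror the elements 410-450 described above.

```python
from dataclasses import dataclass, field

@dataclass
class AccessEvent:
    content_id: str        # identifier 310 of the accessed content record
    seconds_spent: int     # time spent 424
    timestamp: float       # optional timestamp 426 of the access
    interaction: str       # attributes of interaction 428 (e.g., "viewed", "annotated")

@dataclass
class UserRecord:
    record_id: str                                        # unique record identifier 410
    identification: dict = field(default_factory=dict)    # name, email, device, ... (415)
    education_bio: dict = field(default_factory=dict)     # courses, grades, degrees (420)
    access_history: list = field(default_factory=list)    # AccessEvent items (430)
    learning_profile: dict = field(default_factory=dict)          # learning profile 440
    differential_profile: dict = field(default_factory=dict)      # 450: other user -> score
```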
In some implementations, learning profile module 170 generates a user learning profile using a learned model, according to one embodiment. The learned model may be generated by a model trainer using an ensemble method, such as linear support vector classification, logistic regression, k-nearest neighbor, naïve Bayes, or stochastic gradient descent.
At 510, learning profile module 170 analyzes a user's historical access information 430 to determine the user's preferences. The learning profile module 170 analyzes the user's activities (e.g., active, passive, or recall), type of content rendered (e.g., video, lecture, book, etc.), duration of activities, etc., to determine the user's preferences.
At 515, learning profile module 170 may optionally request that user 101 provide user input regarding that user's preferences. In some implementations, learning profile module 170 requests user 101 to provide user input if there is not enough information in the user's historical access information 430, as may be the case for a new or relatively new user 101 of education platform 110.
At 520, learning profile module 170 uses the user's preferences from 510 to determine other user records with similar preferences. In some implementations, learning profile module 170 compares the user's preferences with other users' preferences, and interprets a match over a certain threshold between these preferences as meaning that users have similar preferences.
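By way of example, the preference match at 520 could be computed with a set-overlap measure such as Jaccard similarity, as sketched below; the threshold value and the preference labels are assumptions for illustration.

```python
def similar_users(target_prefs, other_users, threshold=0.6):
    """Return users whose preference sets overlap with the target above a threshold.

    Uses Jaccard similarity as one possible match measure; the platform could
    substitute any other comparison (e.g., cosine similarity over weighted preferences).
    """
    matches = []
    for user_id, prefs in other_users.items():
        union = target_prefs | prefs
        score = len(target_prefs & prefs) / len(union) if union else 0.0
        if score >= threshold:
            matches.append((user_id, score))
    return sorted(matches, key=lambda m: m[1], reverse=True)

prefs_101 = {"video", "short_sessions", "practice_quizzes"}
others = {"user_102": {"video", "short_sessions", "practice_quizzes", "flashcards"},
          "user_103": {"textbook", "long_sessions"}}
print(similar_users(prefs_101, others))   # user_102 matches, user_103 does not
```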
At 530, learning profile module 170 generates a differential learning profile 450 for user 101. The differential learning profile 450 provides a snapshot of how the user's learning compares with one or more other users with respect to a particular subject, topic, course, concept, etc.
At 532, for two users (one of them being user 101), learning profile module 170 generates one or more metrics for completion of one or more sets of learning activities. Examples of metrics include: time taken to complete each learning activity individually; time taken to complete a particular set of learning activities in aggregate; completion velocity, referring to whether the user started slowly but then got faster (i.e., accelerated), started fast but then slowed (i.e., decelerated), or stayed the same (i.e., no change); outcome of recall activities in the set of learning activities; outcome of the set of learning activities in aggregate (e.g., student grade); and so on.
In one case, users for generating the differential learning profile 450 are selected based on overlap in learning activities with user 101. Accordingly, differential learning profiles 450 are generated for users with overlap in learning activities. For example, say user 101 is (or was) enrolled in Bio 101. Accordingly, learning profile module 170 may generate a differential learning profile 450 for user 101 that indicates a difference between the user's learning and that of other users in Bio 101 based on all of the users performing the same activities, such as completing the same assignments, reading the same textbook chapters, and so on. In another case, the user for whom the metrics are generated refers to an average of metrics of all other users, say in Bio 101.
At 534, learning profile module 170 compares the metrics generated at 532 and adjusts a score for user 101 accordingly. The score may be incremented when the metric comparison indicates that user 101 performed better than the other user (or average user), decremented when user 101 performed worse, and not adjusted when the performances were equivalent. The score represents a difference between the learning profile for user 101 and one other user (or average user). Learning profile module 170 may iteratively perform steps 532 and 534 until it determines n differential scores for user 101 representing the difference between the learning profile for user 101 and each other user (n−1) who has completed the one or more sets of learning activities at 532, such as each other user in Bio 101.
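A minimal sketch of the metric comparison and score adjustment at 532/534 is shown below; the metric names, values, and "higher is better" directions are hypothetical.

```python
def differential_score(metrics_101, metrics_other, higher_is_better):
    """Compare per-activity metrics for user 101 against one other (or average) user.

    The score is incremented when user 101 performs better on a metric, decremented
    when worse, and left unchanged when the performances are equivalent.
    """
    score = 0
    for name, value_101 in metrics_101.items():
        value_other = metrics_other[name]
        if value_101 == value_other:
            continue
        better = value_101 > value_other if higher_is_better[name] else value_101 < value_other
        score += 1 if better else -1
    return score

metrics_101 = {"minutes_to_complete": 42, "quiz_grade": 91}
metrics_avg = {"minutes_to_complete": 55, "quiz_grade": 88}
direction = {"minutes_to_complete": False, "quiz_grade": True}   # lower time is better
print(differential_score(metrics_101, metrics_avg, direction))   # -> 2
```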
At 536, learning profile module 170 stores the n differential scores for user 101 as the differential learning profile 450 for a particular subject, topic, course, concept, etc. In the example above, the n differential scores for user 101 are stored in association with the course Bio 101.
At 540, learning profile module 170 uses the user's preferences (from 510), the preferences for other similar users (from 520), and the user's differential learning profile 450 to generate a learning profile for the user. Accordingly, learning profile module 170 can expand and/or refine the learning profile for the user using the preferences for other similar users. For example, if a first user has learning preferences A, B, and C that match with a second user who also has learning preferences A, B, and C, but also has a learning preference D, then learning profile module 170 infers that the first user also has learning preference D.
At 550, learning profile module 170 updates the user's learning profile as the user's access information 430 changes, as other similar users' access information 430 changes, as new similar users are added, and as older similar users are no longer considered similar. At 550, learning profile module 170 also updates the user's learning profile based on results of an application of the user's learning profile. As discussed further with reference to
The process shown in
At 610, overlay generation module 170 receives a recording of a portion of a lecture 105. In some embodiments, user 101 makes a recording of at least a portion of the lecture 105 using a video capture device, and accordingly, the recording of the portion of a lecture 105 is received by overlay generation module 170 of education engine 115. A live lecture corresponding to the video lecture 105 may be happening in a physical space, such as a classroom environment, an educator's home office, etc. User 101 may be physically present in the physical space and use their client device 130 to capture a recording of the live lecture, or the user 101 may not be physically present, in which case the video lecture 105 is delivered to clients 130 over network 150. As an example, users 101 and 102 may be students in a course, such as Geometry, and the live video lecture 105 may correspond to a lecture by a professor on the Pythagoras theorem. The user 101 uses a camera included in client 130 (or otherwise communicatively coupled to client 130, such as via LAN, Bluetooth, etc.) to make a recording of the video lecture 105. In some embodiments, user 101 may use the camera functionality of the same client device 130 they are using to watch the lecture to create a recording of the video lecture 105. In other embodiments, the user 101 may be watching the video lecture 105 using a first client device 130, such as their laptop, and be making a video recording of the video lecture 105 using a second connected client device 130, such as their connected glasses, mobile device, etc.
As the user makes the recording, snippets of the recording are received at education platform 110. In some embodiments, the snippets are determined based on size. For example, as soon as a video recording reaches a pre-determined file size (e.g., 10 kb), it is transmitted by the client device 130 to education platform 110. As another example, in addition or in the alternative, as soon as a video recording reaches a pre-determined duration (e.g., 20 seconds long), it is transmitted by the client device 130 to education platform 110. In some embodiments, the pre-determined size or duration of a snippet is kept small so as to enable near real-time transmission from the video capture device to education platform 110. In other embodiments, the snippets are selected by the user 101.
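The size- and duration-based snippet transmission could be sketched as follows; the thresholds, the frame source, and the send callable are assumptions for illustration.

```python
import time

MAX_BYTES = 10 * 1024        # pre-determined size threshold (e.g., 10 kb)
MAX_SECONDS = 20             # pre-determined duration threshold (e.g., 20 seconds)

def stream_snippets(frame_source, send):
    """Accumulate recorded frames and transmit a snippet whenever either threshold is hit.

    frame_source: iterable yielding encoded frame bytes from the capture device.
    send: callable that uploads one snippet (bytes) to the education platform.
    """
    buffer = bytearray()
    started = time.monotonic()
    for frame in frame_source:
        buffer.extend(frame)
        if len(buffer) >= MAX_BYTES or time.monotonic() - started >= MAX_SECONDS:
            send(bytes(buffer))          # near real-time upload of the snippet
            buffer.clear()
            started = time.monotonic()
    if buffer:                            # flush whatever remains when recording stops
        send(bytes(buffer))

# Usage sketch with dummy frames:
stream_snippets((b"\x00" * 2048 for _ in range(30)),
                send=lambda b: print(len(b), "bytes sent"))
```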
At 615, the snippet is parsed, such as by using video recognition tools, voice recognition tools, image recognition tools, etc., to obtain keywords and key phrases, as well as to identify concepts associated with the content in the snippet. In some implementations, overlay generation module 170 determines the keywords, key phrases, and one or more concepts using a learned model. The learned model may be generated by a model trainer using an ensemble method, such as linear support vector classification, logistic regression, k-nearest neighbor, naïve Bayes, or stochastic gradient descent. For example, a particular snippet may include content associated with the following concepts: process of science, macromolecules, cell, membranes, energy, and enzymes. In addition, the overlay generation module 170 determines the keywords, key phrases, and one or more concepts based on the context of the snippet.
In some implementations, at 618, overlay generation module 170 optionally computes an indicator of a relative strength of association between the concepts and the snippet. For example, for a first concept that is very strongly associated with the snippet, overlay generation module 170 may assign, say, a score of 0.99, while for a second concept that is only mildly associated with the particular snippet, overlay generation module 170 may assign a score of 0.4.
At 620, overlay generation module 170 identifies one or more candidate content records 300 that correspond to the parsed snippet. The candidate content records may be identified based on machine learning, strength of association with one or more of the identified concepts, or any of a number of other techniques.
At 626, overlay generation module 170 searches content repository 120 for content that matches the one or more concepts from block 615. As discussed with reference to
In some embodiments, the candidate content records are selected at least in part based on a preference or selection made by the person or entity providing the video lecture 105. For example, a professor may select a particular practice activity content record 300 for all users watching the video lecture. As another example, the professor may select different practice activity content records for the users watching the video lecture 105, thus making sure the users do not cheat and/or to personalize the practice activity, such as based on different users' ability, skill level, learning goals, grade level, etc. In some embodiments, the candidate content records are selected at least in part based on a match with a user profile associated with the user 101.
At 630, overlay generation module 170 selects a second subset of content records from the first subset of content records based on one or more criteria. Examples of criteria include the degree of match of a content record with the user's profile, the strength of association of a content record with the identified concepts, and so on.
As an example, say the result at 615 is a concept A, and the result at 630 is hundreds of content items. At 640, overlay generation module 170 selects four content items from the hundreds of content items. The four content items correspond to the concept A. In some implementations, the four content items are further selected based, e.g., on the course syllabus (and/or other course materials), similar courses, and/or the user learning profile indicating that the user's understanding of the concept is weak. In some embodiments, the four content items are further selected based on attributes of the client device. For example, if the user 101 is using a mobile device versus a laptop, the four content items selected are those that are optimized for being rendered on a mobile device. In some embodiments, the four content items include items of different types (e.g., video, quizzes, textbook content, etc.).
In some embodiments, the four content items include items that have previously been accessed by user 101 and new items that have not previously been accessed by user 101, as indicated by user's historical access information 430. The ratio of previously accessed content and new content may be based on the user's learning profile (computed at 615).
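One way the selection criteria discussed above could be combined into a ranking is sketched below; the scoring weights, field names, and the choice of four items are illustrative assumptions rather than requirements of the disclosure.

```python
def select_overlay_items(candidates, concept, device, k=4):
    """Rank candidate content records for one concept and keep the top k for the overlay.

    candidates: list of dicts with keys such as "concepts" (concept -> strength),
    "profile_match" (0..1 fit with the user's learning profile), and "mobile_ready".
    """
    def score(record):
        s = record["concepts"].get(concept, 0.0)          # strength of association (618)
        s += 0.5 * record.get("profile_match", 0.0)        # fit with learning profile 440
        if device == "mobile" and not record.get("mobile_ready", True):
            s -= 0.25                                      # penalize content not optimized
        return s

    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[:k]

candidates = [
    {"id": "video-1", "concepts": {"A": 0.9}, "profile_match": 0.8, "mobile_ready": True},
    {"id": "quiz-7", "concepts": {"A": 0.7}, "profile_match": 0.6, "mobile_ready": False},
]
print(select_overlay_items(candidates, concept="A", device="mobile"))
```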
At 640, overlay generation module 170 presents the selected second subset of candidate content records for display on the client device as an augmented reality overlay in relation to the snippet. For example, the candidate content records may automatically be displayed in a portion of a screen of the client device below, next to, over, or elsewhere in relation to the snippet.
As the user continues to render the video 105, the user may continue to select snippets for retrieving corresponding content. When the user moves on from one snippet to the next, the process 600 starts over at block 610. When the user selects the second snippet, the corresponding candidate records are provided for display on the client device as an augmented reality overlay in relation to the snippet.
The approach of the present disclosure can improve the efficiency of data processing by retrieving data associated with content and concepts in a snippet of a video lecture and presenting the data for display on the client device as an augmented reality overlay in relation to the snippet. By providing data in this manner, the user may be quickly presented with additional information that facilitates understanding the content and concepts in the video lecture, without having to enter various search requests, look for particular websites, and sort through various irrelevant information. Accordingly, the user's experience is enhanced and more enjoyable.
As illustrated in a
User interface 700 also provides various user interface actions or buttons. A “Save” button 750 may be used to save the snippets 730 and associated supplemental content 740 to one or more of: memory local to the client device 720, an online drive, a folder associated with the course (e.g., a Geometry course), etc.
A “Share” button 770 may enable the user to share one or more snippets 730 and corresponding supplemental content 740 with other users, such as with a user 102 in the user's (user 101) Geometry class. The shared content (a snippet and its corresponding supplemental content, such as notes generated by user 101) can be rendered by user 102 such that the supplemental content is presented as an AR overlay over the shared snippet.
An “Export” button 780 may enable the user to export the one or more snippets 730 and corresponding supplemental content 740 to other file formats. In some embodiments, the “Export” button 780 may be used to package the one or more snippets 730 and corresponding supplemental content 740 for distribution and playback.
An “Edit” button 760 enables the user to edit a snippet 730 and/or corresponding supplemental content 740. Accordingly, the user may be able to edit the presented candidate records once they have been provided for display on the client device as an augmented reality overlay in relation to the snippet. The user may wish to remove supplemental content that the user does not find helpful, or that does not otherwise appeal to the user. The “Edit” button 760 also enables the user to create additional snippets 730 with corresponding supplemental content 740. The “Edit” button 760 also enables the user to delete existing snippets 730 and corresponding supplemental content 740. The user may also be able to request additional candidate records if the quantity or quality of presented candidate records is insufficient. In some embodiments, the request for additional supplemental content records is fulfilled by overlay generation module 170. In other embodiments, the user 101 may also be able to add user-generated content as a candidate record. For example, the user 101 may wish to add their notes corresponding to a particular snippet. The user-generated content may include voice notes, written notes, etc. The ability to add user-generated content is an important aspect of the present disclosure.
As illustrated in
As illustrated in
As illustrated in
Referring now to
Accordingly, as a user moves their left hand, right hand, or both hands within the hands gesture command field 915, the image capturing device can detect this movement of the user's hand(s) to recognize if the fingers (and palms) are positioned in a particular gesture that is intended to convey information, namely a hands gesture command. For example, the user may place their left hand within the hands gesture command field 915, and motion their hand by extending the index finger outward, extending the thumb upward (contracting the other fingers inward to touch the inside of the palm), and facing the palm inward (e.g., towards the AR connected device display 901). The user's hand gesture can be captured, for example, by a front-facing digital camera of the smartphone. By capturing and analyzing the imaging of the user's hand motion within the hand gestures command field 915, the hand gesture made by the user can be recognized by the system as representing a corresponding hands gesture command that has been previously defined by the user and maintained by the system (shown as library of user customizable hands gesture commands 920). According to the embodiments, the system can implement a plurality of user customizable hands gesture commands, where each user customizable hands gesture command correlates the system's recognition of a hand gesture (captured within the hands gesture command field 915) to the automatic execution of a particular action that allows the user to interact with the streaming video 910 and any supplemental content. Consequently, a user can interact with the streaming video 910 of the live lecture, for instance initiating a recording of a snippet of the streamed lecture, simply by moving their hand(s) within the hands gesture command field 915, which automatically executes the user customizable hands gesture command.
The user customizable hands gesture commands feature enables the quick launching of various actions for interacting with the streaming video 910, including, but not limited to: recording a snippet of the video lecture content; capturing a note; and optically scanning text or pictures recognized within the video lecture content. Enabling a user to interact with the capabilities of the system in a fast and efficient manner is crucial within the time-critical context of education and training. Generally, if the user is distracted for any significant amount of time, attention is taken away from the content of the lesson. For instance, when a student has to direct their eyes and attention away from the lecturer to manually write down notes, this increases the likelihood that the student may miss important information while it is being presented during that time in the lecture. In many cases, even if a student uses electronic mechanisms (as opposed to manual ones), such as typing a note using a word processing application on their smartphone, the student still needs to navigate through multiple user interfaces and/or select a series of individual inputs, which may interrupt the user's focus on the lecture itself, and prevent the user from gaining full enrichment from the learning experience. The user customizable hands gesture commands, as disclosed herein, address this issue by significantly reducing the amount of time and effort required by the user to interact with the video lecture content, thereby eliminating distractions and interruptions from the actual lecture. The user customizable hands gesture commands are configured to quickly and automatically launch the actions and applications needed by the user to interact with the video lecture content, requiring minimal input by the user, namely a simple hand gesture.
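A minimal dispatch sketch for the library of user customizable hands gesture commands is shown below; the gesture labels and handler names are hypothetical placeholders for the recognition output and the actions described with respect to commands 907A-907F.

```python
# Hypothetical handler names; each would wrap the behaviors described for 907A-907F.
def record_snippet(): ...
def capture_note(): ...
def scan_text_or_picture(): ...
def add_todo(): ...
def capture_question(): ...

# Library of user customizable hands gesture commands (cf. 920): the user binds a
# recognized gesture label to the action it should trigger.
GESTURE_COMMANDS = {
    "left_index_up_thumb_out": record_snippet,     # "Snippet" command
    "left_two_fingers": capture_note,              # "MyNotes" command
    "left_l_shape": scan_text_or_picture,          # "Scan" command
    "left_fist": add_todo,                         # "ToDo" command
    "left_open_palm": capture_question,            # "Question" command
}

def dispatch(gesture_label):
    """Execute the user customizable hands gesture command bound to a recognized gesture."""
    command = GESTURE_COMMANDS.get(gesture_label)
    if command is not None:
        command()       # automatically launch the corresponding action
    # Unrecognized or unbound gestures are ignored to avoid accidental commands.
```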
As seen in
The user customization is a key aspect of the user customizable hands gesture commands, as disclosed. In some implementations, the system can be configured to include one or more default hands gesture commands. However, relying only on default hands gesture commands may not be the most suitable for the unique needs and characteristics of a particular user, and thus may be less optimal for specific users when interacting with the streaming video 910 and supplemental content. Therefore, the configuration of the commands that are included in the user customizable hands gesture commands library 920 can be distinctively customized by a user. That is, the commands in the library of user customizable hands gesture commands 920 can be defined by the user, and thus adapted to the characteristics of that specific user, ultimately improving the effectiveness of the feature. As an example, a user may have a dominant hand, for instance being left-handed, making hand movements with the left hand much easier for the user in comparison to the right hand. It may be desirable in this scenario for the user to define user customizable hands gesture commands that recognize hand gestures made with the left hand (as opposed to the right hand). As another example, a user may have a hand that is best suited for dictating hand gestures. For instance, the user may typically hold a pen in their right hand (e.g., for note taking), which predominately leaves the left hand readily available for entering hand gestures. Consequently, it may be desirable in this scenario for the user to define user customizable hands gesture commands that recognize hand gestures made with the left hand (as opposed to the right hand, or both hands).
Further, in some cases, there may be a subset of actions that are used more frequently by a particular user than other actions. Thus, it can be more important to define user customizable hands gesture commands that execute the frequently performed actions in this subset, for that particular user. For example, a user may frequently record snippets of their viewed lectures, but very rarely raises questions. Accordingly, the library of user customizable hand gestures 920 can be customized by the user to include the “Snippet” command and remove the “Questions” command. Other characteristics of the user may be used to govern the customization and/or configuration of the library of user customizable hands gesture commands 920. In addition, the hand movements used to indicate a hands gesture command may be hand movements that are deemed as uncommon, in order to mitigate the user making an arbitrary hand motion that inadvertently enters a command to the system. For example, a hand waving motion may not be deemed suitable for implementing a user customizable hands gesture command, as it may be common for a user to wave at another person or to unconsciously move their hand in a waving motion. A memory, repository, database, or storage system, such as one or more storage devices remotely located from the AR connected device can be employed to implement the library of user customizable hand gestures 920, such as the education platform (shown in
The content window 904 serves as a display, mirroring the content of the streaming video (or live scene) 910 that is being viewed by the user. Thus, the streaming video 910 can be rendered on the AR connected device display 901 simultaneously as the user watches the streaming video 910 and the supplemental content in an AR environment, or a live scene in a real-life environment.
Additionally, the hand place holders 902, 903 each display a rendering of the user's hand as it is captured by the image capturing device of the AR connected device. The hand place holder 902 that is positioned on the left side of the content window 904 is dedicated to rendering the left hand of the user in the user interface 900, and the hand place holder 903 that is positioned on the right side of the content window 904 is dedicated to rendering the right hand of the user in the user interface 900. Accordingly, the hand place holders 902, 903 allow the user to view a visual representation of the particular movement and/or gesturing of the left hand, right hand, or both hands, from the perspective of the image capturing device. For example, the user can move their left hand within the hand gestures command field 915 in order to execute a specific hands gesture command, but the user may also see that only a portion of their left hand is shown in the hand place holder 902. By viewing their hand in the same manner it is being captured by the image capturing device, the user becomes aware that their left hand may not be in an optimal position within the hand gestures command field 915 (e.g., entirely within the range of the image capturing device lens) to enable the image capturing device to fully recognize their hand gesture and for the system to accurately execute the appropriately corresponding hands gesture command. Referring back to the example, subsequent to viewing the rendering of their hand(s) in the hand place holders 902, 903, the user can then reposition their hand(s) as necessary within the hand gestures command field 915 to ensure that the hand gestures are appropriately recognized by the image capturing device, and the intended hands gesture commands are performed by the system. Consequently, the hand place holders 902, 903 can serve as a performance enhancing feature provided by the user interface 900, preventing some user error (e.g., hand gestures not being properly captured by the image capturing device) and misinterpretation of the user's hand gestures by the system.
In
Each of the events 906A-906F displayed in
In
In an embodiment, the snippet hands gesture command 907A is configured to automatically record a snippet for a pre-defined amount of time (e.g., one minute), and consequently does not require an additional hands gesture command (or hand gesture from the user) to stop the recording. In other words, a single hand gesture from the user can automatically trigger the system to start recording a snippet and to stop recording the snippet. In a scenario where the duration for recording the snippet using the snippet hands gesture command 907A automatically ends, and the user desires more of the video lecture content to be recorded, the user can continue recording by making additional hand gestures that enter one or more subsequent snippet hands gesture commands into the system in order to record consecutive snippet(s). These successively recorded snippets can later be viewed in succession by the user as a continuous recording, which is substantively equivalent to a snippet of a longer duration from the perspective of the user. A default duration for the snippet that is recorded using the snippet hands gesture command 907A may be set in the system. The user may also set a user generated duration for the snippet, so as to modify the amount of time the snippet hands gesture command 907A records the streaming video (or live scene) 910 to be better suited to the needs of the user and/or the learning environment. For instance, it may be desirable for the user to set a user generated duration for a snippet (e.g., three minutes), which is longer than the default duration, if the user prefers more context around the certain concept that is being presented by the lecturer in the snippet.
Alternatively, the system may include an end snippet hands gesture command that can be used in conjunction with the snippet hands gestures command 907A in order to terminate the snippet, which would allow the user more control with respect to determining the duration of the snippet. As an example, a user could enter the end snippet hands gesture command 30 seconds after triggering the recording of the snippet by the snippet hands gesture command 907A, if the lecturer quickly defines a term or quickly explains a simple concept.
Accordingly, a snippet of the streaming video 910 that is being watched by the user through the AR connected device can be created by a simple hand motion of the user. The snippet hands gesture command 907A provides efficiency and ease of use, as it does not require the user to navigate through various interfaces and/or interact with mechanical buttons on the AR connected device that otherwise would be used to select a recording feature. For example, on a smartphone, a user may have to navigate through various applications or settings on the device to find the specific interface for controlling the camera, open the interface, and then touch the screen of the smartphone (or press a button on the smartphone) to select the record function of the camera before recording begins. In a learning environment, where capturing information that is being presented by the lecturer is time critical, supporting a snippet hands gesture command 907A that records using a seamless automated interaction can mitigate loss of critical information and/or key concepts of the lecture that may be experienced when employing slow and cumbersome input mechanisms for recording.
In addition,
The MyNotes hands gesture command 907B is configured to allow a user to move their hand inside of the hands gesture commands field 915 forming the defined hand gesture 908B that the system recognizes. Once the system interprets the captured hand gesture 908B, it can execute the MyNotes hands gesture command 907B, which automatically triggers the AR connected device to perform the corresponding captured action, namely capturing a note and recording a snippet (of the streaming video 910) as the captured action 909B. An example of the defined hand gesture 908B that is recognized by the system for executing the MyNotes hands gesture command 907B is shown in
In some embodiments, the user-generated note can be a voice note, a hand-written note (e.g., a passive/capacitive stylus entry on a touchscreen), or a typed note. Thus, the MyNotes hands gesture command 907B may automatically launch a voice recognition/recording application (e.g., voice memo), a hand writing application (e.g., stylus compatible app), or a word processing and/or text entry application on the AR connected device based on the type of note that is desired to be created by the user. Even further, the type of note (and correspondingly the application used to generate that note) can be based on the particular gesture that is captured. Thus, in some embodiments, there may be several variations of gestures that trigger the MyNotes hands gesture command 907B. For instance, there may be a specific gesture that is configured to trigger the MyNotes hands gesture command 907B for generating a voice note. Thus, in response to capturing and recognizing the gesture that corresponds directly to a voice note, the system can automatically open a voice recognition/recording application which allows the user to quickly generate the voice note. As another example, there may be a different gesture that is particularly configured to trigger the MyNotes hands gesture command 907B for a text note. Consequently, in response to capturing and recognizing the gesture that particularly corresponds to a text note, the system will automatically launch a word (or word processing) application that is used for creating the text note.
An event 906C (“Event C”) is also shown in
An example of the defined hand gesture 908C that is recognized by the system for executing the scan hands gesture command 907C is shown in
The scan hands gesture command 907C leverages hand gesture capabilities to quickly scan text/pic information that is being viewed in a live stream, in a manner that is particularly useful in the learning environment. For example, by making a simple hand gesture, the user can immediately trigger a scan of a region within a current frame of the streaming video 910, which ultimately captures a complex equation, a definition, a graph, a figure, a table, a slide, and the like that may be written on the whiteboard, or other display means employed by the lecturer. In contrast, a student's concentration and time would be taken away from the other content presented in the lecture in order for the student to copy down the displayed text/pic information by hand. Furthermore, the scan hands gesture command 907C may also realize advantages associated with capturing data in an efficient and automated manner, for instance mitigating human error and inaccuracies that may occur when a student is attempting to quickly copy information presented during a lecture, such as an equation written on the whiteboard by the lecturer. The potential for human error may be exacerbated by other conditions in the learning environment, such as the student being partially distracted by continuing to listen to the lecture, which can lead to errors and inaccuracies in manually copying the information. A student capturing inaccurate information can have detrimental effects. As an example, the student may study and attempt to apply an equation presented during a lecture that has been incorrectly written down. By implementing a hand gesture that performs an automated scan of text/pic data, the overall enrichment from the student's learning environment can be improved.
The resulting scanned text/pic data captured using the scan hands gesture command 907C can be presented to the user, for instance within a separate window of the interface 901 or stored by the system as a file. In some embodiments, the scan hands gesture command 907C is configured to automatically scan text/pic content and start recording a snippet of the streaming video 910. The scan hands gesture command 907C can record a snippet of the streaming video 910 in the same manner as the snippet hands gesture command 907A, described in detail above. Also, in some embodiments, the text recognition implemented by the scan hands gesture command 907C, after the image of the frame is captured, can include text analysis techniques, such as Optical Character Recognition (OCR) that performs the conversion of images of typed, handwritten or printed text into machine-encoded text, from various sources, such as a scanned video, image, live scene, or document. The scan hands gesture command 907C can also implement picture recognition techniques, for instance an analysis of an image that can recognize and/or extract geometric forms, graphs, pictures, and the like from the image.
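By way of illustration, the text portion of the scan could be performed with an off-the-shelf OCR library; the sketch below assumes OpenCV and pytesseract (a Tesseract wrapper), and the frame file name and region box in the commented call are hypothetical.

```python
import cv2                     # OpenCV, for cropping the frame region to scan
import pytesseract             # Tesseract OCR wrapper (requires the tesseract binary)

def scan_region(frame_path, box):
    """OCR the text/pic region of a captured video frame.

    frame_path: image file of the current frame of the streaming video.
    box: (x, y, width, height) of the region indicated for scanning.
    """
    frame = cv2.imread(frame_path)
    x, y, w, h = box
    region = frame[y:y + h, x:x + w]                   # crop, e.g., the whiteboard area
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)    # simple pre-processing for OCR
    return pytesseract.image_to_string(gray)           # machine-encoded text of the region

# text = scan_region("frame_0420.png", box=(100, 50, 800, 400))   # hypothetical frame/box
```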
Additionally, yet another event 906D (“Event D”) is shown in the accompanying drawings, and corresponds to a ToDo hands gesture command 907D.
As previously described, the system can include a content processing module (shown in the accompanying drawings).
As an example, at a point during the lecture, the lecturer may mention notable information, such as a date for a test, an assigned reading from specific chapters of a textbook, an assignment with an upcoming due date, and the like. The user can simply make a hand motion to execute the ToDo hands gesture command 907D while watching the streaming video 910, in order to automatically add user action content, also referred to as a “ToDo” action, to a file including other actions for the user. The file can be a “ToDo” list, such as a text file, that includes one or more “to do” actions, user action content, user-generated content, or tasks to be completed by the user at a later time. The “ToDo” list can correspond to a specific user, but may also be accessible and/or viewed by other users that have appropriate permissions to the content corresponding to at least one action on the “to do” list, for example a recorded snippet. In some embodiments, the “ToDo” list can be a voice file or a text file. Thus, executing the ToDo hands gesture command 907D can involve automatically launching a voice recognition/recording application (e.g., voice memo), a handwriting application (e.g., a stylus-compatible app), or a word processing and/or text entry application on the AR connected device in order to appropriately modify the file to add a “ToDo” action.
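As one non-limiting sketch of the text-file variant of the “ToDo” list, adding an action could amount to appending a timestamped entry; the file path, entry format, and parameter names below are assumptions introduced here for illustration.

    from datetime import datetime

    TODO_FILE = "todo_list.txt"  # hypothetical per-user "ToDo" list file

    def add_todo_action(description: str, lecture_id: str, runtime_seconds: int) -> None:
        """Append a "ToDo" action, tagged with the lecture and runtime, to the user's list."""
        timestamp = datetime.now().isoformat(timespec="seconds")
        entry = f"{timestamp}\t{lecture_id}\t{runtime_seconds}s\t{description}\n"
        with open(TODO_FILE, "a", encoding="utf-8") as todo_list:
            todo_list.write(entry)

    # Example usage with hypothetical values:
    # add_todo_action("Read chapters 3-4 before Friday's test", "lecture_042", 1500)

Tagging each entry with the lecture identifier and runtime would allow the “ToDo” action to be linked back to the corresponding point in the recorded lecture or snippet.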
Also shown in the accompanying drawings is a question hands gesture command 907F, which enables the user to capture a question that arises while watching the streaming video 910 of the live lecture.
In some embodiments, the captured question can be a voice file or a text file. Therefore, executing the question hands gesture command 907F can involve automatically launching a voice recognition/recording application (e.g., voice memo), a handwriting application (e.g., a stylus-compatible app), or a word processing and/or text entry application on the AR connected device in order to appropriately capture the question. The user may review the file capturing their question at any point, for example a few hours after the streaming video 910 of the live lecture has ended. Subsequently, the user can perform a related action, such as creating an email, sending the voice file, posting to a message board, or communicating in person (e.g., at office hours) as a means to pose the captured question. The user can employ various communication mechanisms deemed suitable or appropriate in order to convey the captured question to another person, such as a lecturer, a tutor, or others in the class. In some embodiments, the question hands gesture command 907F can automatically launch at least one communication mechanism to provide the captured question to an intended recipient. For example, after the user saves a text document including their question, an email to the lecturer can be automatically generated with the text document containing the question as an attachment. According to some embodiments, the question hands gesture command 907F can record a snippet of the streaming video 910 in the same manner as the snippet hands gesture command 907A, described in detail above.
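Purely as an illustrative sketch of the automated-email example (the addresses, file name, and mail server are hypothetical placeholders, and the disclosure does not mandate any particular communication mechanism), generating an email to the lecturer with the saved question attached could look like the following.

    import smtplib
    from email.message import EmailMessage

    def email_question(question_file: str, lecturer_addr: str, student_addr: str) -> None:
        """Send the saved question document to the lecturer as an email attachment."""
        msg = EmailMessage()
        msg["Subject"] = "Question from today's lecture"
        msg["From"] = student_addr
        msg["To"] = lecturer_addr
        msg.set_content("Please see my attached question from the streamed lecture.")

        # Attach the text document containing the captured question.
        with open(question_file, "rb") as f:
            msg.add_attachment(f.read(), maintype="text", subtype="plain",
                               filename=question_file)

        # "smtp.example.edu" is a placeholder mail server for illustration only.
        with smtplib.SMTP("smtp.example.edu") as server:
            server.send_message(msg)

    # Example usage with hypothetical values:
    # email_question("my_question.txt", "lecturer@example.edu", "student@example.edu")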
At 1005, a hand gesture is captured. For example, a user can make a specific hand gesture to be captured by the system and ultimately execute a hands gesture command. According to the embodiments, 1005 involves capturing a hand gesture that is made within the hands gesture command field, which is a particular area in front of the client device, for instance the AR connected device display, that enables a user's hand(s) to be captured and accurately recognized by an image capturing device. As an example, a user can move their left hand, right hand, or both hands within the hands gesture command field, such that the image capturing device can detect the hand gesture that is made. In some implementations, the hand gesture is captured by an image capturing device of the user's client device, such as an embedded camera within a laptop computer or an embedded digital camera of a smartphone. Alternatively, the hand gesture can be captured by an external camera that is not integrated within the client device, such as a tracking camera that is tethered to a virtual reality (VR) headset. The hand gesture can be captured by the image capturing device as a digital image or video, and communicated to the system as image data that can be subject to further digital image processing techniques, such as image recognition, computerized biometrics, and the like.
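A minimal capture loop for 1005, assuming OpenCV and a default embedded camera, could crop each camera frame to the hands gesture command field and pass the cropped image data on for recognition; the command-field coordinates and frame count below are illustrative assumptions.

    import cv2  # OpenCV, assumed available for camera capture

    # Hypothetical rectangle (x, y, width, height) defining the hands gesture
    # command field within the camera's view.
    COMMAND_FIELD = (200, 100, 400, 300)

    def capture_gesture_frames(max_frames: int = 60):
        """Yield frames cropped to the hands gesture command field from the device camera."""
        camera = cv2.VideoCapture(0)  # embedded camera of the client device
        try:
            for _ in range(max_frames):
                ok, frame = camera.read()
                if not ok:
                    break
                x, y, w, h = COMMAND_FIELD
                yield frame[y:y + h, x:x + w]  # image data passed on to the recognition step
        finally:
            camera.release()

An external or headset-tethered camera would feed the same loop simply by changing the capture source.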
At 1010, the captured hand gesture is analyzed in order to recognize whether the hand gesture that was previously captured at 1005 corresponds to a defined hands gesture command. Recognition can be performed at 1010 using any of various image processing techniques. For instance, the digital image or a video frame can be analyzed in order to extract the captured hand gesture from the image data. The extracted hand gesture can then be compared against a database of defined hand gestures that are known by the system, such as the library of user customizable hands gesture commands (shown in the accompanying drawings).
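One way to sketch the comparison at 1010, under the assumption that each captured gesture has already been reduced to a numeric feature vector (for example, flattened and normalized hand-landmark coordinates), is a nearest-template match against the library of defined gestures; the template values, command identifiers, and threshold below are hypothetical.

    import math
    from typing import List, Optional

    # Hypothetical library mapping each defined hands gesture command to a
    # reference feature vector (e.g., normalized hand-landmark coordinates).
    GESTURE_LIBRARY = {
        "snippet_907A": [0.10, 0.42, 0.88, 0.31],
        "mynotes_907B": [0.55, 0.12, 0.40, 0.77],
        "scan_907C": [0.90, 0.64, 0.22, 0.05],
    }

    MATCH_THRESHOLD = 0.25  # assumed maximum distance for a confident match

    def recognize_gesture(features: List[float]) -> Optional[str]:
        """Return the defined command whose stored template is closest to the captured features."""
        best_command, best_distance = None, float("inf")
        for command, template in GESTURE_LIBRARY.items():
            distance = math.dist(features, template)  # Euclidean distance to the template
            if distance < best_distance:
                best_command, best_distance = command, distance
        return best_command if best_distance <= MATCH_THRESHOLD else None

A trained classifier or other image recognition technique could replace the distance comparison without changing the overall flow of the method.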
At 1015, the system can execute the defined user customizable hands gesture command. Accordingly, after the captured hand gesture entered by the user is successfully recognized by the system, the defined user customizable hands gesture command can be automatically executed such that the user can interact with the video lecture content and supplemental content in a fast and efficient manner. As previously described, a plurality of user customizable hands gesture commands can be defined, where each user customizable hands gesture command correlates the system's recognition of a hand gesture to the automatic execution of one or more actions. Actions that are performed in response to executing the defined user customizable hands gesture command generally involve creating additional supplemental content for the video lecture content, such as creating a video snippet, creating a note, or capturing a question. Accordingly, in some implementations, executing the user customizable hands gesture command also includes automatically triggering the associated client device to open associated applications and execute functions in order to complete the action. Furthermore, executing the user customizable hands gesture command can include automatically storing any content that has been generated from performing the command's actions. Referring back to the example of recognizing that the hand gesture corresponds to the question hands gesture command, executing the command can involve initiating a recording of a snippet of the streamed lecture and automatically launching a voice recognition/recording application (e.g., voice memo), a handwriting application (e.g., a stylus-compatible app), or a word processing and/or text entry application on the client device in order to appropriately capture a question. Thus, the method 1000 enables the user to perform various interactions simply by moving their hand(s), which automatically executes the user customizable hands gesture command. Although additional user input may be received in order to generate the additional supplemental content, for instance the user entering text to capture their question, it should be appreciated that the embodiments allow the method 1000 to initially execute, or launch, the defined user customizable hands gesture command without any further interaction being required by the user beyond the simple hand gesture captured at 1005.
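To make the execution step at 1015 concrete, the hedged sketch below dispatches a recognized command identifier to the action(s) that create the supplemental content; the handler functions and command identifiers are hypothetical stand-ins for the actions described above.

    # Hypothetical handler functions standing in for the actions described above.
    def record_snippet() -> None:
        print("Recording a snippet of the streamed lecture...")

    def open_note_application() -> None:
        print("Launching a note-taking application...")

    def capture_question() -> None:
        record_snippet()  # the question command can also record a snippet
        print("Launching an application to capture the user's question...")

    # Hypothetical mapping from recognized command identifiers to their actions.
    COMMAND_ACTIONS = {
        "snippet_907A": [record_snippet],
        "mynotes_907B": [open_note_application],
        "question_907F": [capture_question],
    }

    def execute_command(command_id: str) -> None:
        """Automatically execute every action associated with the recognized command."""
        for action in COMMAND_ACTIONS.get(command_id, []):
            action()

Because each command maps to a list of actions, a single gesture can trigger several automatic operations, consistent with the question command example above.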
Memory 1106 includes volatile and/or non-volatile memory. Memory 1106 (e.g., the non-volatile memory within memory 1106) includes a non-transitory computer-readable storage medium. Memory 1106 optionally includes one or more storage devices remotely located from the processors 1102 and/or a non-transitory computer-readable storage medium that is removably inserted into the server system 1100. In some embodiments, memory 1106 (e.g., the non-transitory computer-readable storage medium of memory 1106) stores modules and data for performing the functions described herein.
Each of the modules stored in memory 1106 corresponds to a set of instructions for performing one or more functions described herein. Separate modules need not be implemented as separate software programs. The modules and various subsets of the modules may be combined or otherwise re-arranged. In some embodiments, memory 1106 stores a subset or superset of the modules and/or data structures described above.
In general, the words “component,” “engine,” “system,” “database,” “data store,” and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language such as, for example, Java, C, or C++. A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices may be provided on a computer-readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression, or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
The computer system 1100 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 1100 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 1100 in response to processor(s) 1102 executing one or more sequences of one or more instructions contained in main memory 1106. Such instructions may be read into main memory 1106 from another storage medium, such as a storage device. Execution of the sequences of instructions contained in main memory 1106 causes processor(s) 1102 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
The term “non-transitory media,” and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as a storage device. Volatile media includes dynamic memory, such as main memory 1106. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, a hard disk, a solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.
Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infrared data communications.
Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines.
As used herein, a circuit might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines, or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits, or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system 1100.
As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps.
Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.
This application claims the benefit of U.S. Provisional Application No. 63/130,580, filed on Dec. 24, 2020, the contents of which are incorporated herein by reference in their entirety.