The present invention relates to algorithms for automated tutoring, and more specifically, to disaggregating learning curves by student mastery for refining and accurately evaluating automated tutoring models.
Learning curves that depict student performance over time are used to evaluate whether students are learning. Learning curves use data that is aggregated across multiple students in order to average out the effects of irrelevant factors and thereby better detect the underlying trajectory of learning as a function of practice.
Mastery learning, on the other hand, is used with individual students to provide them with just enough practice so that they master the material without practicing more than necessary. For example, on the assumption that knowledge can be decomposed into discrete knowledge components, referred to herein as “skills”, a skill profile can be generated for each student, using algorithms such as Bayesian Knowledge Tracing (BKT). Based on the skill profiles, mastery learning can be applied to tailor the lessons for each student such that all students master the material with the minimum amount of practice suitable for each student.
When a learning curve is generated from the mastery learning of multiple students, the aggregated result may inaccurately reflect the learning that is actually occurring for each student. Since learning curves are frequently used to evaluate the effectiveness of automated tutoring models, an inaccurate learning curve can result in a faulty evaluation of software implementing an automated tutoring model. This faulty evaluation may prevent development resources from being directed to the areas of the automated tutoring model that need the most attention, resulting in less than optimal tutoring for students. Further, if the inaccurate learning curves are combined with other internal or external data, the resulting analyses may likewise be misleading.
Further, the learning curves may be also used within the software itself, for example by matching a student skill profile to a known or projected learning curve for a particular skill. The various parameters of the automated tutoring model can then be refined and adapted to match the learning curve, which can then affect problem difficulty, lesson speed, skill development priorities, and other settings. However, if the learning curve does not accurately reflect true student learning progressions, then the adjustment of the parameters will similarly be inaccurate.
Based on the foregoing, there is a need for a method to accurately refine and evaluate automated tutoring models, particularly those that utilize mastery learning.
The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
Learning curves aggregated across all students frequently underestimate the learning that is occurring, not just at the tail of the curve but throughout most of its length. Furthermore, this is particularly the case for students engaged in mastery learning. For example, a learning curve may show the percentage of correct answers from multiple students as a function of the number of opportunities, or the number of attempted questions. As the learning curve progresses to a larger number of opportunities, the learning curve must be averaged using only those students that are still attempting additional questions, as systems implementing mastery learning stop providing additional questions to those students who have already attained mastery. Accordingly, the learning curve is negatively impacted by the unavailability of data from students who have already attained mastery.
By using learning curves that are disaggregated by student mastery, or number of opportunities needed to reach skill mastery, several problems can be overcome. First, learning can be detected from aggregated learning curves that appear to show little to no learning, correcting the negative effects of mastery learning on aggregate learning curves. Second, the different subsets of students that are learning and not learning, if any, can be identified, and the characteristics of each subset can be analyzed, such as initial knowledge or skill levels and the rates of mastery. Accordingly, the disaggregated learning curves can provide a more accurate evaluation of a particular automated tutoring model, and the disaggregated learning curves can also be used to more effectively refine the parameters of the automated tutoring model to adapt to each student.
Before discussing the specifics of the disaggregated learning curves, it may be helpful to provide a system architecture overview of an example tutoring system for which the disaggregated learning curves can be used.
Client devices 110A-110C may be implemented by any type of computing device that is communicatively connected to network 120. Example implementations of client devices 110A-110C include, without limitation, workstations, personal computers, laptop computers, personal digital assistants (PDAs), tablet computers, cellular telephony devices such as smart phones, and any other type of computing device.
In network arrangement 100, client devices 110A-110C are configured with respective tutoring clients 112A-112C that may access tutoring service 132. Tutoring clients 112A-112C may be implemented in any number of ways, including as a plug-in to a web browser, as an application running in connection with a web page provided by tutoring service 132, as a stand-alone native binary application, or by other means. Client devices 110A-110C may be configured with other mechanisms, processes and functionalities, depending upon a particular implementation.
Further, client devices 110A-110C are each communicatively coupled to a display device (not shown) on which the respective tutoring clients 112A-112C may present output to users.
Network 120 may be implemented with any type of medium and/or mechanism that facilitates the exchange of information between client devices 110A-110C and server device 130. Furthermore, network 120 may facilitate use of any type of communications protocol, and may be secured or unsecured, depending upon the requirements of a particular embodiment.
Server device 130 may be implemented by any type of computing device that is capable of communicating with client devices 110A-110C over network 120. In network arrangement 100, server device 130 is configured with a tutoring service 132, which may be part of a cloud computing service. Functionality attributed to tutoring service 132 may also be performed by tutoring clients 112A-112C, according to embodiments. Server device 130 may be configured with other mechanisms, processes and functionalities, depending upon a particular implementation.
Server device 130 is communicatively coupled to database 140, which stores data used by tutoring service 132, including performance data 142, user profile data 144, lesson data 146, and attention metrics 148.
Any of tutoring clients 112A-112C and tutoring service 132 may receive and respond to Application Programming Interface (API) calls, Simple Object Access Protocol (SOAP) messages, requests via HyperText Transfer Protocol (HTTP), HyperText Transfer Protocol Secure (HTTPS), Simple Mail Transfer Protocol (SMTP), or any other kind of communication, e.g., from one of the other tutoring clients 112A-112C or tutoring service 132. Further, any of tutoring clients 112A-112C and tutoring service 132 may send one or more of the following over network 120 to one of the other entities: information via HTTP, HTTPS, SMTP, etc.; XML data; SOAP messages; API calls; and other communications according to embodiments.
In an embodiment, each of the processes described in connection with one or more of tutoring clients 112A-112C and tutoring service 132 are performed automatically and may be implemented using one or more computer programs, other software elements, and/or digital logic in any of a general-purpose computer or a special-purpose computer, while performing data retrieval, transformation, and storage operations that involve interacting with and transforming the physical state of memory of the computer.
According to an embodiment, tutoring clients 112A-112C and/or tutoring service 132 are implemented as part of an intelligent tutoring system, such as the cognitive tutor described in Koedinger, K. R., Anderson, J. R., Hadley, W. H., & Mark, M. A. (1997), Intelligent tutoring goes to school in the big city, International Journal of Artificial Intelligence in Education, 8, 30-43, and Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995), Cognitive Tutors: Lessons Learned, The Journal of the Learning Sciences, 4(2), 167-207, both of which are incorporated herein by reference.
Creating Learning Curves that are Disaggregated by Student Mastery
As discussed above, when compared to aggregated learning curves, disaggregated learning curves can provide a more accurate evaluation of a particular automated tutoring model to better target limited development resources and to better adapt lessons for individual users. Thus, in addition to disaggregating learning curves by knowledge component or skill, learning curves are also disaggregated by number of opportunities until mastery. Below, the specifics of disaggregating learning curves are described with an implementation for the educational software application, Cognitive Tutor, an intelligent automated tutoring system as discussed above. The fundamental assumption behind Cognitive Tutors is that knowledge can be decomposed into discrete knowledge components—i.e., skills—and that learning is best modeled through these skills. These skills act as components in a cognitive model of the student. If the skills that students are actually learning are correctly identified, improvement in performance (e.g., reduced errors) should result as students gain more experience with those skills. To the extent that the modeled skills are not aligned with what students are learning, learning on those skills may not result.
These skill-based cognitive models are used in two ways. First, within a tutor at runtime, the cognitive model is used as the basis for Bayesian Knowledge Tracing (BKT) to assess whether individual students have mastered the material. Second, learning curves, aggregated across students, are used to test whether the modeled skills correspond to the skills that students are learning. If students are not learning a skill, then resources should be directed towards a corresponding improvement to the software.
In one embodiment, Cognitive Tutors use Corbett and Anderson's Bayesian Knowledge Tracing (BKT) algorithm at runtime to estimate the probability that each skill is known, or p_known. In an embodiment, the Cognitive Tutors may calculate p_known as an estimated probability of whether a particular skill is in a “known” state for a user, with larger numbers indicating a greater probability of being in the “known” state rather than an “unknown” state. The BKT algorithm uses four parameters to estimate p_known for each skill: p_initial, the probability that a student knows the skill prior to using it in the tutor; p_learn, the probability that the skill will transition from unknown to known following usage in the tutor; p_guess, the probability of correct performance when the skill is unknown; and p_slip, the probability of incorrect performance when the skill is known. Cognitive Tutor problems typically require multiple steps to solve, and each of these steps is normally associated with at least one skill, so problems usually include multiple skills. In addition, Cognitive Tutor includes multiple problems with any particular skill so that students can have multiple opportunities to master a skill without repeating problems. Mastery learning is implemented by requiring students to solve problems until p_known for each skill in the section has reached 0.95. See Anderson, J. R., Conrad, F. G., & Corbett, A. T. (1989), Skill acquisition and the LISP tutor in Cognitive Science, 14(4), 467-505 and Corbett, A. T., & Anderson, J. R. (1995), Knowledge tracing: Modeling the acquisition of procedural knowledge in User-Modeling and User-Adapted Interaction, 4, 253-278, both of which are incorporated herein by reference.
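By way of illustration only, the following Python sketch shows one way the BKT update and the 0.95 mastery threshold described above could be implemented. The function names, parameter values, and example data are hypothetical and are not drawn from any particular Cognitive Tutor implementation.

```python
def bkt_update(p_known, correct, p_guess, p_slip, p_learn):
    """Update the probability that a skill is known after one opportunity.

    Applies Bayes' rule to condition p_known on the observed outcome, then
    adds the probability that the skill transitioned from unknown to known.
    """
    if correct:
        numer = p_known * (1.0 - p_slip)
        denom = numer + (1.0 - p_known) * p_guess
    else:
        numer = p_known * p_slip
        denom = numer + (1.0 - p_known) * (1.0 - p_guess)
    p_conditioned = numer / denom
    # Learning transition: an unknown skill may become known after practice.
    return p_conditioned + (1.0 - p_conditioned) * p_learn


def opportunities_to_mastery(outcomes, p_initial=0.2, p_learn=0.3,
                             p_guess=0.2, p_slip=0.1, threshold=0.95):
    """Return the opportunity number at which p_known first reaches the
    mastery threshold, or None if mastery is never reached."""
    p_known = p_initial
    for n, correct in enumerate(outcomes, start=1):
        p_known = bkt_update(p_known, correct, p_guess, p_slip, p_learn)
        if p_known >= threshold:
            return n
    return None


# Example: a student who errs twice and then answers correctly reaches the
# 0.95 threshold on the fourth opportunity with these illustrative parameters.
print(opportunities_to_mastery([False, False, True, True, True, True, True]))
```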
As students use a Cognitive Tutor, data about their performance is logged. Of particular interest here is logging the success of each opportunity to use a skill as either correct or incorrect, where incorrect includes both errors and student requests for help with a step associated with the skill. The usual way to create a learning curve for a skill is to graph the percentage of all students whose performance was correct for each opportunity to use a skill (across problems). Thus, each problem may present the opportunity to exercise multiple different skills, and performance data is logged for each skill.
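As a concrete illustration of how such logged data could be turned into an aggregate learning curve, the short Python sketch below groups logged opportunities by opportunity number and computes the percentage correct at each. The record layout (student, skill, opportunity number, correctness) is hypothetical.

```python
from collections import defaultdict

# Hypothetical log records: (student_id, skill, opportunity_number, correct),
# where "correct" is False for both errors and help requests.
log = [
    ("s1", "write_composed_linear_function", 1, False),
    ("s1", "write_composed_linear_function", 2, True),
    ("s2", "write_composed_linear_function", 1, True),
    ("s2", "write_composed_linear_function", 2, True),
]

def aggregate_learning_curve(records, skill):
    """Percentage correct at each opportunity number, across all students."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for _student, s, opportunity, was_correct in records:
        if s != skill:
            continue
        total[opportunity] += 1
        correct[opportunity] += int(was_correct)
    return {n: 100.0 * correct[n] / total[n] for n in sorted(total)}

print(aggregate_learning_curve(log, "write_composed_linear_function"))
# {1: 50.0, 2: 100.0}
```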
A ubiquitous finding for a wide variety of cognitive tasks, as well as perceptual motor tasks and other phenomena, is that performance appears to follow the power law of practice: performance improves rapidly at first and continues to improve, but at a diminishing rate, as a power function of the amount of practice (e.g., the number of opportunities): E = E0*n^(−α), where E is the error rate, E0 (the intercept) is the initial error rate, n is the opportunity number, and the exponent −α controls the rate of change, being equivalent to the linear slope when the data is plotted on log-log axes. See Newell, A., & Rosenbloom, P. S. (1981), Mechanisms of skill acquisition and the law of practice in J. R. Anderson (Ed.), Cognitive Skills and Their Acquisition (pp. 1-55). Hillsdale, N.J.: Lawrence Erlbaum Associates, which is incorporated herein by reference.
For learning curves in Cognitive Tutor, the error rate is transformed into percentage correct as C = 100 − E = 100 − E0*n^(−α). The fitted power function for a given skill can then be plotted alongside its empirical learning curve to assess how well the power law describes the observed performance.
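For illustration, such a power function can be fit to an empirical curve with ordinary nonlinear least squares. The sketch below assumes NumPy and SciPy are available and fits the error-rate form E = E0*n^b, where b is the fitted exponent (the log-log slope, negative when error decreases with practice); the sample data are invented for demonstration, and this is not presented as Cognitive Tutor's internal fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def error_power(n, e0, b):
    # E = E0 * n**b, with b the fitted exponent (log-log slope).
    return e0 * np.power(n, b)

# Hypothetical aggregate data: percentage correct at opportunities 1..8.
opportunities = np.arange(1, 9, dtype=float)
percent_correct = np.array([55, 68, 74, 79, 83, 85, 87, 88], dtype=float)
error_rate = 100.0 - percent_correct

(e0_hat, b_hat), _ = curve_fit(error_power, opportunities, error_rate,
                               p0=(error_rate[0], -0.5))
print(f"fitted intercept E0 = {e0_hat:.1f}, exponent b = {b_hat:.2f}")
# A clearly negative exponent indicates decreasing error (the curve shows
# learning); a value near zero indicates a flat curve.
```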
However, aggregate learning curves are not always a reliable guide to whether skills accurately model student learning. When averaging over different students who begin with different levels of knowledge and/or learn at different rates, aggregate learning curves may appear to show little student learning even though BKT identifies the students as mastering the skills at runtime.
To resolve the discrepancy between runtime assessments of mastery and the poor learning results shown in some aggregate learning curves, the student performance data was disaggregated for each skill by the number of attempts it took students to reach mastery. For instance, the performance data for the skill “Write composed linear function,” whose aggregate learning curve appeared to show little learning, was disaggregated according to the number of opportunities students needed to reach mastery.
Each of the disaggregated learning curves corresponds to a subpopulation of students who reached mastery after the same number of opportunities, and each such curve can be examined and fit separately.
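To make the disaggregation concrete, the following Python sketch groups students by the number of opportunities they needed to reach mastery and computes a separate learning curve for each group. The data layout and helper names are hypothetical; in practice the mastery assessments would come from the runtime BKT estimates described above.

```python
from collections import defaultdict

def disaggregated_curves(outcomes_by_student, mastery_opportunity):
    """Build one learning curve per subpopulation.

    outcomes_by_student: {student_id: [True/False per opportunity, in order]}
    mastery_opportunity: {student_id: opportunity at which mastery was reached}
    Returns {opportunities_to_mastery: {opportunity_number: percent_correct}}.
    """
    groups = defaultdict(list)
    for student, outcomes in outcomes_by_student.items():
        if student in mastery_opportunity:
            groups[mastery_opportunity[student]].append(outcomes)

    curves = {}
    for n_to_mastery, group in groups.items():
        curve = {}
        for i in range(n_to_mastery):
            attempts = [o[i] for o in group if len(o) > i]
            if attempts:
                curve[i + 1] = 100.0 * sum(attempts) / len(attempts)
        curves[n_to_mastery] = curve
    return curves

# Hypothetical data: two fast-mastery students and two slower-mastery students.
outcomes = {
    "s1": [True, True, True],
    "s2": [False, True, True],
    "s3": [False, False, True, False, True, True],
    "s4": [False, False, False, True, True, True],
}
mastery = {"s1": 3, "s2": 3, "s3": 6, "s4": 6}
for k, curve in sorted(disaggregated_curves(outcomes, mastery).items()):
    print(f"mastery in {k} opportunities: {curve}")
```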
As discussed above, one effect of mastery learning on the aggregate learning curve is a negative one caused by the removal of students who have already mastered the tested skill. Mastery learning depresses performance increases in learning curves aggregated across student subpopulations: the best-performing students are removed from the aggregate population as they start performing well (when they master all skills for the section and move on to a different section or leave the tutor), at least for skills that are critical for graduating from the section, leaving only students who are performing less well.
Aggregate learning curves align all students at their first opportunity. Alternatively, the disaggregated learning curves may be aligned by mastery, so that the final point of each subpopulation's curve corresponds to the opportunity at which that subpopulation reached mastery.
Curves aligned by mastery make it easier to visualize whether different student subpopulations follow a similar path as they approach mastery, as would be the case if the students have similar rates of learning, corresponding to BKT parameter p_learn, but different initial knowledge, corresponding to BKT parameter p_initial. In these curves, student subpopulations' performance profiles may look similar as they approach mastery.
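One hypothetical way to produce such mastery-aligned curves, reusing the curve structure from the sketch above, is to re-index each subpopulation's opportunities relative to the opportunity at which mastery was reached (0 marks mastery, −1 the opportunity before mastery, and so on):

```python
def align_by_mastery(curves):
    """Re-index each disaggregated curve so that offset 0 marks mastery.

    curves: {opportunities_to_mastery: {opportunity_number: percent_correct}}
    Returns {opportunities_to_mastery: {offset_from_mastery: percent_correct}}.
    """
    aligned = {}
    for n_to_mastery, curve in curves.items():
        aligned[n_to_mastery] = {
            opportunity - n_to_mastery: pct for opportunity, pct in curve.items()
        }
    return aligned

# With the curves from the previous sketch, the final point of every
# subpopulation falls at offset 0, so their approaches to mastery can be
# compared directly.
```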
To investigate the frequency with which aggregate learning curves fail to show learning even when students appear to be learning at runtime, data from an example Cognitive Tutor Algebra I curriculum was studied, for which performance data for 15,414 unique students on 881 skills had been recorded.
Skills that are most likely to be better modeled by disaggregated learning curves are those that the tutor (at runtime) determines most students are learning, but that do not show learning in their aggregate learning curves. A criterion was set such that a learning curve does not show learning if the fitted power function's exponent α is greater than −0.1—i.e., if the fitted power function is relatively flat or even decreasing in terms of percentage correct—and, conversely, a learning curve does show learning for α≦−0.1. The results of applying this criterion to the performance data from the example Algebra I curriculum are shown in Table 1 below.
One reason that a skill may not show learning is that students already know it (performance on the learning curve starts out at or above 95%), so there is not much learning left to do—these are referred to as premastered. Another reason may be that knowledge that is modeled as a single skill may actually consist of more than one skill [3], or the skill may be poorly modeled in some other way. Learning curves for composite and poorly modeled skills often show fluctuating performance—i.e., multiple local maxima—as students alternate between practicing two or more distinct skills with different learning trajectories.
Therefore, skills were selected for disaggregation if they satisfied the following criteria: (1) the tutor determines at runtime that students are learning the skill, operationalized as at least 75% of students achieving mastery within 12 opportunities; (2) the skill does not show learning in its aggregate curve, as indicated by a fitted power function exponent of α>−0.1; (3) the skill is not premastered; and (4) the aggregate curve does not have multiple local maxima. In addition, (5) a skill was only selected if data was available for at least 250 students, both for stable statistical properties and to have enough data points to smooth out random fluctuations in the curves. As shown in Table 1, this process identified 166 skills (approximately 23% of the skills that are not premastered) that were potentially misidentified by their aggregate learning curves as not showing learning.
For each of these 166 skills, disaggregated learning curves were created by grouping students into subpopulations according to the number of opportunities it took them to reach mastery, as assessed by the runtime BKT parameters. The power function fit for each of these curves was then computed. A skill was classified as showing learning if at least 75% of its students were represented by a disaggregated learning curve that showed learning. This had the effect of weighting the disaggregated curves so that, for instance, a learning curve representing 20 students would not count as much as a learning curve representing 200 students. Using these criteria, 117 of the 166 skills, or 70%, showed learning when their learning curves were disaggregated. Overall, at least 117 of the 720 skills that students did not already know (those for which enough data was available), or approximately 16%, had been misidentified as showing no learning. Accordingly, the use of disaggregated learning curves has the potential to correct significant non-learning misidentification errors that would result from using standard aggregated learning curves.
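The weighted classification described above can be sketched as follows, again using the hypothetical curve structures from the earlier sketches and assuming NumPy and SciPy are available. A curve is treated as showing learning when the fitted exponent of its error-rate power function is at most −0.1, matching the criterion stated above; the helper names and defaults are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def curve_shows_learning(curve, threshold=-0.1):
    """Fit E = E0 * n**b to one curve; learning means exponent b <= threshold."""
    n = np.array(sorted(curve), dtype=float)
    error = 100.0 - np.array([curve[k] for k in sorted(curve)], dtype=float)
    if len(n) < 3:
        return False  # too few points for a meaningful power-function fit
    (_, b_hat), _ = curve_fit(lambda x, e0, b: e0 * np.power(x, b),
                              n, error, p0=(max(error[0], 1.0), -0.5))
    return b_hat <= threshold

def skill_shows_learning(curves, group_sizes, min_fraction=0.75):
    """A skill shows learning if subpopulations covering at least 75% of its
    students (weighted by subpopulation size) have curves that show learning."""
    total = sum(group_sizes.values())
    learning = sum(group_sizes[k] for k, curve in curves.items()
                   if curve_shows_learning(curve))
    return total > 0 and learning / total >= min_fraction
```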
Disaggregated learning curves can reconcile an apparent mismatch between the tutor's runtime assessment of student knowledge and the post hoc assessment provided by the aggregate learning curve. These representations have the potential to provide information to improve real-time student modeling and to more accurately depict educational effectiveness.
Although the disaggregated learning curves described here are calculated post hoc, they represent different underlying patterns of student learning. When a teacher or an educational software system identifies a particular student's membership in one of the underlying subpopulations, the trajectory of that student's learning is better predicted. A teacher or an educational software system can make a quick estimate of the student's likely path and then adapt accordingly, in a manner similar to that described in Pardos, Z. A., & Heffernan, N. T. (2010), Modeling individualization in a Bayesian networks implementation of knowledge tracing in Proceedings of the 18th International Conference on User Modeling, Adaptation and Personalization (pp. 255-266), which is incorporated herein by reference.
Another important application of disaggregated learning curves is to better distinguish effective vs. ineffective educational content or practices in automated tutoring models. For instance, the Cognitive Tutor includes curricula for thousands of skills; with such a large data set, efforts to improve the curricula must be prioritized. Accordingly, a series of attention metrics can be developed, which are heuristics for automatically examining data to identify elements of the Cognitive Tutors that deserve attention by developers. One of the attention metrics can assess whether students are learning the skills that they are expected to be learning. If aggregate learning curves are used to detect skills that students are not learning, a significant number of false positives are generated. Using disaggregated learning curves should provide more accurate metrics for whether students are learning particular skills. This information can be used to prioritize development efforts for a product that is more educationally effective overall.
To provide a process-level overview of how the disaggregated learning curves can be utilized in a tutoring system, an example process is described below with reference to flowchart 500. At step 502 of flowchart 500, tutoring service 132 of server device 130 receives performance data 142 for a plurality of users, indicating, for each opportunity a user had to exercise a particular skill, whether the user's performance was correct or incorrect.
Note that performance data 142 may include data for any desired time period and for any desired set of users. For example, performance data 142 may concern post-hoc data from a prior class or time period of Algebra I students, allowing historical trends to inform present tutoring models. In other embodiments, performance data 142 may log data from present users, exclusively or combined with past data. For example, performance data 142 may be populated in real-time as the users of client devices 110A-110C work through lessons in the tutoring system. In this case, tutoring service 132 may receive continuous updates of performance data 142, rather than a single static set of data. This approach may be preferred for new topics that do not yet have established historical performance data, or to provide flexibility to adapt to various situations. Note that while performance data 142 may concern data for a large number of users over a long time period, in some embodiments only a representative sample of users and/or data points may be received from performance data 142 in step 502.
At step 504 of flowchart 500, tutoring service 132 groups the users into a plurality of subpopulations according to the number of opportunities each user needed to reach mastery of the particular skill, as assessed at runtime, for example by the BKT algorithm described above.
Additionally, while a separate subpopulation may be created for each represented number of opportunities to mastery, embodiments may also merge groups into coarser subpopulations, for example by combining all users who required ten or more opportunities into a single subpopulation.
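One hypothetical way to form such merged subpopulations is to bin the opportunities-to-mastery values, for example grouping every student who needed ten or more opportunities into a single subpopulation:

```python
def bin_mastery_count(n_to_mastery, cap=10):
    """Map an opportunities-to-mastery count to a subpopulation label,
    merging all students who needed `cap` or more opportunities."""
    return f"{cap}+" if n_to_mastery >= cap else str(n_to_mastery)

# Example: students needing 3, 9, 10, and 14 opportunities.
print([bin_mastery_count(n) for n in (3, 9, 10, 14)])  # ['3', '9', '10+', '10+']
```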
At step 506 of flowchart 500, tutoring service 132 generates a disaggregated learning curve for each of the subpopulations, each curve plotting the percentage of correct performances as a function of opportunity number for the users in that subpopulation.
For example, a power function may be fit to each of the disaggregated learning curves, in the manner described above, to characterize the learning trajectory of the corresponding subpopulation.
At step 508 of flowchart 500, tutoring service 132 evaluates the disaggregated learning curves to determine whether learning of the particular skill is occurring, for example by determining whether each subpopulation's fitted power function satisfies the exponent criterion discussed above.
To provide an evaluation of whether learning is occurring for the user population as a whole, the number of users in the subpopulations considered to be learning may be added together and divided by the total number of users to determine a percentage of users that demonstrated learning of the particular skill. If this percentage meets a particular threshold, for example at least 75% as discussed above, then learning of the particular skill may be considered to be occurring for most of the population.
As discussed above, since developers and teachers may have limited resources available to implement lesson improvements and refinements, especially if lesson data 146 covers a large number of skills, one or more attention metrics 148 may be developed to help prioritize resources to the portions of the tutoring system that need the most attention. For example, the learning percentage may be weighted as part of an attention metric, and those skills that have a sufficiently low learning percentage, for example below 30%, may be flagged as portions that warrant greater attention.
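A minimal sketch of such an attention metric follows, under the assumption that disaggregated curves and subpopulation sizes are available per skill and that a predicate such as the curve_shows_learning helper sketched earlier decides whether a single curve shows learning. The 30% cutoff and the function names are illustrative only.

```python
def learning_percentage(curves, group_sizes, shows_learning):
    """Percentage of users belonging to subpopulations whose disaggregated
    learning curves show learning, per the supplied predicate."""
    total = sum(group_sizes.values())
    learners = sum(group_sizes[k] for k, curve in curves.items()
                   if shows_learning(curve))
    return 100.0 * learners / total if total else 0.0

def skills_needing_attention(skill_curves, skill_group_sizes,
                             shows_learning, cutoff=30.0):
    """Flag skills whose learning percentage falls below the cutoff,
    ordered so the worst-performing skills are examined first."""
    flagged = sorted(
        (learning_percentage(skill_curves[s], skill_group_sizes[s], shows_learning), s)
        for s in skill_curves
    )
    return [skill for pct, skill in flagged if pct < cutoff]
```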
Thus, once an evaluation of the disaggregated learning curves is carried out, tutoring service 132 of server device 130 may cause a suitable adaptation to be carried out for the tutoring system. One suitable adaptation may be to modify lesson data 146. In one embodiment, the modeling of the skill may be further refined, for example by dividing a composite skill into separate skills. In another embodiment, the presentation related to the skill may be further refined, for example by providing clarified instructions. In yet another embodiment, reference materials such as textbooks or course materials may be revised to improve the teaching of the skill, or to align the presentation of the course materials more consistently with the tutoring system. In still another embodiment, questions using the skill may be deferred for another tutoring session, providing time to implement different adaptations for the skill in the tutor.
Another suitable adaptation may be to send a notification concerning a particular skill that does not show learning, the notification identifying that the particular skill may warrant additional attention for refinement or refactoring. The notification may for example be sent over network 120 to one or more relevant persons such as developers, instructors, or other staff, for example by an e-mail message, instant message, text message, or other communication protocols.
Yet another suitable adaptation may be based on related skills or particular groups of skills in lesson data 146. For example, if a particular skill does not show learning according to the disaggregated learning curves, then a notification may be sent to identify that skills related to that particular skill may also need further attention.
While the above adaptations concern improvements and notifications concerning the tutoring system itself, suitable adaptations may also be provided to better tailor the tutoring system for a particular user using tutoring service 132. For example, user profile data 144 may be maintained for each user associated with respective client devices 110A-110C. By default, the parameters of lesson data 146 may be uniform on a per-skill basis for all of the users using tutoring service 132. As each user contributes additional data to performance data 142, that user's membership within a particular subpopulation of the plurality of subpopulations can be estimated with increasing confidence. By determining this membership for a particular user, a projected learning trajectory can be estimated for the particular user. Accordingly, the parameters for lesson data 146 can be adjusted in user profile data 144 based on the determined membership of the particular user to better suit that user's individual learning needs. Moreover, these adjustments can occur while a tutoring session is in progress for the particular user, allowing the tutoring session to be adjusted for the particular user in real-time.
Determining membership of a particular user within a particular subpopulation may also be assisted by using external data, for example demographic data. Additionally, the external data may be utilized to establish possible correlations of particular subpopulations to particular groups or demographics. This correlation data may then be utilized to help establish membership of future users within a particular subpopulation.
To provide an example parameter adjustment, if the particular user is determined to be a member of a subpopulation requiring only 3 or 4 opportunities before mastery of a particular skill, then the lesson speed may be increased and more difficult problems may be presented. Alternatively, development of other skills may be prioritized over the particular skill. On the other hand, if the user is determined to be a member of a subpopulation requiring 10 or more opportunities before mastery, then the lesson speed may be decreased and problems may be slowly introduced with a gradual difficulty progression. In this manner, the disaggregated learning curves can be used to estimate a learning progression for a particular user and to adjust the parameters of the tutoring system accordingly.
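As a purely hypothetical illustration of this kind of per-user adjustment, the sketch below maps an estimated subpopulation membership to lesson parameters such as pacing and difficulty. The parameter names and thresholds are invented for the example and do not correspond to actual Cognitive Tutor settings.

```python
def adjust_lesson_parameters(estimated_opportunities_to_mastery):
    """Return hypothetical per-user lesson parameters based on the
    subpopulation the user is estimated to belong to."""
    if estimated_opportunities_to_mastery <= 4:
        # Fast-mastery subpopulation: speed up and raise difficulty.
        return {"lesson_speed": "fast", "difficulty": "hard",
                "skill_priority": "advance_to_next_skill"}
    if estimated_opportunities_to_mastery >= 10:
        # Slow-mastery subpopulation: slow down with a gradual progression.
        return {"lesson_speed": "slow", "difficulty": "gradual",
                "skill_priority": "reinforce_current_skill"}
    return {"lesson_speed": "normal", "difficulty": "standard",
            "skill_priority": "balanced"}

# Example: a user estimated to need only three opportunities gets a faster,
# more difficult lesson.
print(adjust_lesson_parameters(3))
```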
According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
For example, the techniques may be implemented on a computer system 600 that includes a bus 602 or other communication mechanism for communicating information, and a hardware processor 604 coupled with bus 602 for processing information. Hardware processor 604 may be, for example, a general-purpose microprocessor.
Computer system 600 also includes a main memory 606, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 602 for storing information and instructions to be executed by processor 604. Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 604. Such instructions, when stored in non-transitory storage media accessible to processor 604, render computer system 600 into a special-purpose machine that is customized to perform the operations specified in the instructions.
Computer system 600 further includes a read only memory (ROM) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604. A storage device 610, such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to bus 602 for storing information and instructions.
Computer system 600 may be coupled via bus 602 to a display 612, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 614, including alphanumeric and other keys, is coupled to bus 602 for communicating information and command selections to processor 604. Another type of user input device is cursor control 616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
Computer system 600 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 600 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 600 in response to processor 604 executing one or more sequences of one or more instructions contained in main memory 606. Such instructions may be read into main memory 606 from another storage medium, such as storage device 610. Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical disks, magnetic disks, or solid-state drives, such as storage device 610. Volatile media includes dynamic memory, such as main memory 606. Common forms of storage media include, for example, a floppy disk, a flexible disk, a hard disk, a solid-state drive, magnetic tape or any other magnetic data storage medium, a CD-ROM or any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, an NVRAM, or any other memory chip or cartridge.
Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 604 for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 600 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 602. Bus 602 carries the data to main memory 606, from which processor 604 retrieves and executes the instructions. The instructions received by main memory 606 may optionally be stored on storage device 610 either before or after execution by processor 604.
Computer system 600 also includes a communication interface 618 coupled to bus 602. Communication interface 618 provides a two-way data communication coupling to a network link 620 that is connected to a local network 622. For example, communication interface 618 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 618 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 620 typically provides data communication through one or more networks to other data devices. For example, network link 620 may provide a connection through local network 622 to a host computer 624 or to data equipment operated by an Internet Service Provider (ISP) 626. ISP 626 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 628. Local network 622 and Internet 628 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 620 and through communication interface 618, which carry the digital data to and from computer system 600, are example forms of transmission media.
Computer system 600 can send messages and receive data, including program code, through the network(s), network link 620 and communication interface 618. In the Internet example, a server 630 might transmit a requested code for an application program through Internet 628, ISP 626, local network 622 and communication interface 618.
The received code may be executed by processor 604 as it is received, and/or stored in storage device 610, or other non-volatile storage for later execution.
In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.
This application claims the benefit of U.S. Provisional Application No. 61/843,832, filed Jul. 8, 2013, which is hereby incorporated by reference in its entirety for all purposes as if fully set forth herein.