This application claims priority from European patent application No. 21305204.6, filed on Feb. 19, 2021, the contents of which are hereby incorporated herein in their entirety by this reference.
This specification relates to a computer-implemented method for handwriting-to-text-summarization and to a system for handwriting-to-text-summarization.
Automatic summarization is the process of shortening a text computationally, thereby generating a text summary that represents (the most) important or relevant information within the original content of the text. As an example, extraction-based summarization relies on classifying the importance of sections of the text, usually at the sentence level. Once all sentences have been classified, the important ones are extracted and composed into the shorter text summary. Various techniques exist for successfully capturing the general importance of a sentence to broader understanding of the text, ranging from simpler methods such as calculating the density of uncommon words to semantic analysis of each sentence. In examples, abstraction-based summarization involves natural language processing to build an internal semantic representation of the original content of the text from which a condensed text summary is generated that is likely to be close to how a human might summarize the text.
Retaining dynamic features of handwriting of a user allows for assessing the user based on non-textual (or non-verbal) contents of her or his written input. In fact, as an example, it is known that dynamic features such as measures of stroke distance, applied pressure and stroke duration extracted from the user writing by hand e.g. in a digital pen system can be used to estimate the expertise/level of competence of the user in the domain she or he is writing about. Along the same lines, it is possible to infer other attributes such as a level of confidence, an age of the user, and/or a given emotional state of the user from her or his handwriting. While some of the dynamic features may easily be interpreted by a human reader, pre-trained machine learning algorithms are capable of also assessing subtler dynamic features.
In recent years, means of style transfer have evolved in the realm of natural language processing. Style transfer aims at changing the style of a text while maintaining its linguistic meaning. As an example, a text written in an impolite style (e.g. an online review) can be converted to another text conveying the same message but cast into a neutral or polite style. Such transformations may rely on auto-encoders, i.e. a class of neural networks (style transfer networks) configured to create, in an encoding step, a reduced and/or abstract representation of the text that is mapped, in a decoding step, to an output text. In contrast to standard auto-encoders that are trained by providing the input text also as the output text, auto-encoders for style transfer are trained by providing style-transformed input texts as the output texts.
According to a first aspect, there is provided a computer-implemented method for handwriting-to-text-summarization. The method comprises obtaining, via a user interface of a system, a handwriting input representing a handwriting of a user of the system for handwriting-to-text-summarization. The method further comprises recognizing a text in the handwriting input. The method further comprises extracting at least one dynamic feature of the handwriting from the handwriting input. The method further comprises generating a text summary of the text. Generating the text summary is based on the text and on the at least one dynamic feature of the handwriting.
According to a second aspect, there is provided a system for handwriting-to-text-summarization. The system comprises a user interface comprising a capturing subsystem configured to capture a handwriting of a user of the system. The system is configured to run the method of the first aspect (or an embodiment thereof) for handwriting-to-text-summarization.
Dependent embodiments of the aforementioned aspects are given in the dependent claims and explained in the following description, to which the reader should now refer.
Handwriting-to-text-summarization incorporates both importance assessment of the text or portions (e.g. sentences/words) thereof and qualities of the text or the portions thereof indicated by dynamic handwriting features that reveal information about the user, such as e.g. the level of competence. In so doing, the handwriting-to-text-summarization uses, retains and/or reflects such information about the user, thereby yielding a text summary which is more sensitive to subtle and/or invisible user input. Such may lead to a better and more accurate text summary. In fact, as an example, in case a sentence is rated to have been written with a low level of competence, it may not be included in the text summary.
Furthermore, handwriting-to-text-summarization may help to draw the attention of the user as well as of a third party (e.g. a parent and/or a teacher) to particular education needs, such as identifying gaps in knowledge and/or understanding. In fact, generating the text summary of the text may depend on settings that the user or the third party (even after completion of the user's handwriting) may adjust. As an example, such settings can be used to generate a text summary that deliberately features and/or highlights portions of the text that correspond to low level of competence and/or low level of confidence of the user. Such may allow a teacher to efficiently identify the aforementioned gaps. Hence, the system for handwriting-to-text-summarization can be used as a means in education and/or teaching.
In addition, a set of routines may run on the generated text summary to accomplish goals such as style transfer, font modification, and/or annotation. In fact, the language can be adjusted to match psychological or educational information retrieved from handwriting dynamics, thus making this information more obvious to readers (i.e. the user or the third party). Such routines/information may again be used to identify gaps in knowledge and/or understanding. Along the same lines, the text summary may be automatically annotated according to information from handwriting dynamics, thus sparing teachers or students an in-depth analysis of their knowledge gaps. Annotations may be interactive, e.g. in terms of hyperlinks encoding a request to a search engine on the web.
In general, note taking while following an event such as e.g. a lecture or a business meeting is typically a lossy process in that, given the somewhat limited multitasking capabilities of a human brain, there can be a trade-off between jotting things down and digesting newly incoming content or interacting in the event. As an example, that may lead to a suboptimal timing of writing versus listening/talking, which may hamper future use of the notes, rendering them less telling and thus less beneficial to the user in the long run. However, text summaries from multiple users attending the same event (e.g. the lecture or the business meeting) and each jotting down notes in a system for handwriting-to-text-summarization may be collated based on the information about the users, such as e.g. their level of competence/domain expertise, so that the resulting collated text summary represents an expertise level which can be higher than that of any one of the individual text summaries of the multiple users. In so doing, collated handwriting-to-text-summarization (e.g. via a network of systems for handwriting-to-text-summarization) can be seen as a means for collective/crowd intelligence improving the text summary.
The method 100 of the first aspect (or an embodiment thereof) and the system 200 of the second aspect (or an embodiment thereof) aim at providing functionality for handwriting-to-text-summarization. As an example, as shown in
The computer-implemented method 100 for handwriting-to-text-summarization comprises obtaining 110, via a user interface 210 of a system 200, a handwriting input 10 representing a handwriting of a user of the system 200 for handwriting-to-text-summarization. The method further comprises recognizing 120 a text 20 in the handwriting input 10. The method further comprises extracting 130 at least one dynamic feature 30 of the handwriting from the handwriting input 10. The method further comprises generating 140 a text summary 40 of the text 20. Generating 140 the text summary 40 is based on the text 20 and on the at least one dynamic feature 30 of the handwriting. The computer-implemented method 100 is schematically illustrated in
The one or more dynamic features 30 may comprise or be an average writing pressure, an average stroke length, and/or an average stroke duration, wherein averaging is over the text 20 or portions thereof. As an example, a dynamic feature 30 is an average writing pressure. Alternatively, or in addition, the dynamic feature 30 or another dynamic feature 30 is an average stroke length. Alternatively, or in addition, the dynamic feature 30 or yet another dynamic feature 30 is an average stroke duration. The text 20 can be semantically and/or linguistically interpretable with respect to at least one communication language.
The handwriting input 10 may comprise a first set of data representing the text. As an example, the first set of data may comprise at least one time series (or at least one vector) of stroke data captured by a capturing system 220 of the system 200 for handwriting-to-text-summarization. The first set of data may represent the text 20, if the text 20 written by the user of the system 200 in terms of his or her handwriting can be reproduced from the first set of data, e.g. from the one or more time series (or vectors) of stroke data. Furthermore, the handwriting input 10 may comprise a second set of data representing properties of the handwriting that indicate information about the user as handwriting progresses. As an example, the second set of data representing properties of the handwriting may be at least one time series (or at least one vector) of pen pressure or pressure on a touchpad. The information about the user may relate to an emotional state, an age, a level of confidence, and/or a level of competence (viz. domain expertise) of the user. The first set of data and the second set of data may overlap or be identical.
Recognizing 120 the text 20 in the handwriting input 10 may comprise applying the handwriting input 10 (e.g. the first set of data) to a text pre-processing algorithm configured to recognize the text 20 represented by the handwriting input 10. Furthermore, the text pre-processing algorithm can be further configured to segment 121 the text 20 into one or more portions. As an example, such a segmentation may follow from an analysis of punctuation marks (periods, colons, semi-colons, commas) and/or indentation and/or bullet points. The order of the portions can be maintained. In examples, portions can be enumerated in their original order so that their original order can be reproduced any time. In fact, the one or more portions can be enumerated as they appear in the text. Alternatively, or in addition, the one or more portions can be timestamped. As an example, the one or more portions (of the text 20) may be sentences, clauses, and/or phrases (of the text 20).
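By way of a non-limiting sketch, the segmentation 121 described above could be realized as follows, splitting the recognized text into portions at punctuation marks and enumerating the portions so that their original order can be reproduced any time (the function and field names are merely illustrative):

```python
import re

def segment_text(text):
    """Split a recognized text into portions at periods, colons and
    semi-colons, enumerating each portion in its original order."""
    parts = [p.strip() for p in re.split(r"[.;:]", text) if p.strip()]
    return [{"index": i, "portion": p} for i, p in enumerate(parts)]
```

A timestamp per portion could be stored alongside the index in the same manner.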
The text pre-processing algorithm may comprise or be a machine-learning algorithm pre-trained for handwriting-to-text recognition and/or text segmentation. The text pre-processing algorithm may be configured to segment the first set of data into individual character vectors and apply a pre-determined vector-to-character mapping to output the text.
The text pre-processing algorithm may be further configured to store the text 20 in a database. Furthermore, the text pre-processing algorithm may be further configured to store the one or more portions of text 20 (and information about their order, e.g. their enumeration and/or timestamps) in the database. The database may or may not form part of the system 200. In the latter case, the database can be hosted on a server (e.g. in a cloud). Saving the text 20 and/or the portions thereof (and information about their order) in the database allows for reuse of the text 20. As an example, such can be useful when collating 180 text summaries 40 of various users.
Extracting 130 the at least one dynamic feature 30 of the handwriting from the handwriting input 10 may comprise applying the handwriting input 10 to a handwriting dynamics algorithm configured to extract the at least one dynamic feature 30 from the handwriting input 10. The handwriting dynamics algorithm may be further configured to compute at least one dynamic feature 30 for each portion of the text. The handwriting dynamics algorithm may be further configured to compute the average writing pressure, the average stroke length, and/or the average stroke duration, wherein averaging is over the text 20 or portions (e.g. sentences) thereof, thereby producing the one or more dynamic features 30 for the text 20 or for each portion thereof. Computing the average writing pressure, the average stroke length, and/or the average stroke duration may be based on the second set of data.
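As a minimal sketch of the handwriting dynamics algorithm, the three averages named above could be computed per portion from stroke records as follows (the field names "pressure", "length" and "duration" are assumptions about the captured stroke data, not prescribed above):

```python
def extract_dynamic_features(strokes):
    """Compute average writing pressure, stroke length and stroke
    duration over a list of stroke records for one portion of text."""
    n = len(strokes)
    return {
        "avg_pressure": sum(s["pressure"] for s in strokes) / n,
        "avg_stroke_length": sum(s["length"] for s in strokes) / n,
        "avg_stroke_duration": sum(s["duration"] for s in strokes) / n,
    }
```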
The handwriting dynamics algorithm can be further configured to store the one or more dynamic features 30 for the text 20 in the database. The handwriting dynamics algorithm may be further configured to store the one or more dynamic features 30 for each portion of the text 20 (and information about the order of the portions, e.g. their enumeration or timestamps) in the database.
As an example, the text 20 and the one or more dynamic features 30 can be stored in terms of a data structure of text (and/or portions of text) and corresponding one or more dynamic features.
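As a hedged illustration, one such data structure pairing an enumerated (and optionally timestamped) portion with its dynamic features may look as follows; all field names and values are assumptions for illustration only:

```python
# Illustrative database record for one portion of the text and its
# corresponding dynamic features (field names are assumptions).
record = {
    "index": 0,
    "timestamp": "2021-02-19T10:15:00",
    "portion": "Dolphins are mammals",
    "features": {
        "avg_pressure": 0.5,
        "avg_stroke_length": 12.0,
        "avg_stroke_duration": 0.2,
    },
}
```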
Generating 140 the text summary 40 of the text 20 based on the text 20 and on the at least one dynamic feature 30 of the handwriting may comprise applying the text 20 and the one or more dynamic features 30 to a text summarization algorithm configured to generate the text summary 40 of the text 20. The text summarization algorithm may comprise or be a natural language processing machine-learning algorithm pre-trained for automatic summarization such as e.g. extraction-based or abstraction-based text summarization. Extraction-based or abstraction-based text summarization can be extended in that, in the training of the natural language processing machine-learning algorithm, the one or more dynamic features 30 can be provided as additional input data (in addition to the text 20 or its portions).
As an example, as in
As in
In examples, qualities can be properties of the written language which indicate certain meanings, styles or tones to a user. For example, a quality may include how expert a writer seems, how authoritative they are, how child-like or old they are, their emotional state, or other. Qualities may be indicated in any aspect of writing or handwriting, including the graphical form of the handwriting, the properties of the dynamic motions used to create the handwriting, the word choice, language construction, or other. Some qualities may be easily identified by humans, and some qualities may only be easily recognized algorithmically. This is largely dependent on which aspects of the writing indicate the quality. As an example, simplistic word use can be easily recognized by a human as being "child-like", but subtle changes in applied pressure may only indicate domain expertise level to an algorithm.
The at least one quality classifier may correspond to one or more of the emotional state, the age, a level of confidence, and the level of competence of the user. As an example, the at least one quality classifier may correspond to the emotional state of the user. Alternatively, or in addition, the at least one quality classifier or another quality classifier may correspond to the age of the user. Alternatively, or in addition, the at least one quality classifier or another quality classifier may correspond to the level of confidence of the user. Alternatively, or in addition, the at least one quality classifier or another quality classifier may correspond to the level of competence of the user. Alternatively, or in addition, the at least one quality classifier may output a vector (e.g. a two-dimensional vector, a three-dimensional vector, or a four-dimensional vector) for any two or three combinations of the emotional state, the age, the level of confidence, and the level of competence of the user, or for the combination of the emotional state, the age, the level of confidence, and the level of competence of the user.
In an embodiment, the at least one quality classifier may correspond to the level of competence of the user. As an example, classifying the level of competence of the user may be in terms of two or three classes (e.g. "expert" and "novice" or e.g. "expert", "normal", and "novice"). As an example, the level of competence can be decisive when it comes to deciding which portions of the text 20 are relevant for the text summary 40. In examples, the level of competence may be decisive when collating 180 text summaries of several users and deciding that a portion of a user shall contribute to the (collated) text summary 40. The one or more quality classifiers may be a machine-learning algorithm in a pre-trained state (i.e. after training on training data). As an example, the one or more quality classifiers may be trained on results of specific user-groups relevant to the use case, e.g. children, students or learners. For example, the quality classifier corresponding to the level of competence may have been trained on examples of "expert", "normal" and "novice". In this context, the class "expert" may refer to handwriting input 10 (and the one or more dynamic features extracted therefrom) produced by children or students known to be very competent in that area. The class "normal" may refer to handwriting input 10 (and the one or more dynamic features extracted therefrom) produced by children or students with only limited exposure to that area. The class "novice" may refer to handwriting input 10 (and the one or more dynamic features extracted therefrom) produced by children or students working on a totally new area.
In an embodiment, as illustrated in
SR = a·IC + (b1·QC1 + . . . + bN·QCN)/N
of a numeric result IC of the importance classifier and a normalized linear combination of numeric results QC1, QC2, . . . , QCN of the N quality classifiers.
The numeric results of the importance classifier and of the N quality classifiers can be pre-determined values corresponding to the respective classes of the importance classifier or the N quality classifiers, respectively. As an example, the pre-determined values can be user-defined, i.e. they can be adjusted in a settings menu of the user interface 210 of the system 200. Some of them can be chosen to be zero (e.g. to discard normal level of competence). As an example, the numeric result of the class “expert” can be one and the numeric results of the classes “normal” or “novice” can be zero. In examples, the numeric results of the classes “expert” or “novice” can be one and the numeric result of the class “normal” can be zero. In examples, in case of replacing classifiers by regressors the numeric results can be the output values of the regressors.
The coefficient a of the linear combination and/or the coefficients b1, . . . , bN of the normalized linear combination can be pre-determined weights. Again, the pre-determined weights can also be user-defined, i.e. they can be adjustable in the settings menu of the user interface 210. They can also be set to zero (e.g. to eliminate or suppress vague statements from non-experts).
The (one or more) portions of the text 20 with corresponding ranking values above a predetermined threshold value can be concatenated (e.g. in the order the one or more portions appear in the text 20), thereby generating 140 the text summary 40. The predetermined threshold value may also be user-defined, i.e. it can be adjusted in the settings menu of the user interface 210. A lower threshold may result in a less compressed summarization. The right order of the one or more portions (e.g. when queried from the database) can be restored from the enumeration or the timestamps.
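A minimal Python sketch of the ranking and selection steps above, computing SR = a·IC + (b1·QC1 + . . . + bN·QCN)/N per portion and concatenating the portions above the threshold in their original order; the numeric class values, weights and threshold are illustrative user settings:

```python
def ranking_score(ic, qcs, a=1.0, bs=None):
    """SR = a*IC + (b1*QC1 + ... + bN*QCN)/N for one portion.
    ic: numeric result of the importance classifier;
    qcs: numeric results of the N quality classifiers."""
    bs = bs if bs is not None else [1.0] * len(qcs)
    return a * ic + sum(b * qc for b, qc in zip(bs, qcs)) / len(qcs)

def summarize(portions, threshold=1.0):
    """portions: list of (text, IC, [QC1..QCN]) in original order.
    Keep portions whose ranking score lies above the threshold."""
    kept = [text for text, ic, qcs in portions
            if ranking_score(ic, qcs) > threshold]
    return " ".join(kept)
```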
In an embodiment, the method 100 may further comprise applying 150 the text summary 40 to a style transfer algorithm configured to modify the text summary 40 so as to reflect the information about the user. The style transfer algorithm may apply at least one style transfer network. In one embodiment, different areas of the text may have different fonts, e.g., italics, bold, underlined, etc., to notify the users on the importance of each specific area of their notes.
Modifying the text summary 40 may comprise selecting for each portion of the text summary 40 one or more style transfer networks (also selecting their order) based on the one or more classification results 42 of the one or more quality classifiers (e.g. queried from the database), and modifying each portion of the text summary 40 by applying the corresponding one or more style transfer networks, thereby modifying the text summary 40. The selection of the one or more style transfer networks may be influenced by user settings. As an example, a style transfer network associated with the level of competence may apply the linguistic style of an expert to the input text, such that the output appears to have been written by an expert (e.g. confident word choice, no hedge words). The neural network itself may have been trained on examples of expert writings in different domains (levels of competence), such that the "expert" language is not necessarily domain specific. An example of a text summary 40 after style transformation is shown in
In examples, the at least one style transfer network may be an auto-encoder neural network in a pre-trained state (i.e. after training on training data).
In an embodiment, the method 100 may further comprise applying 160 the text summary 40 to a font modification algorithm configured to change the font of at least one portion of the text summary 40 based on the information about the user. In one example embodiment, the font modification algorithm may change the font of at least one portion of the text based, at least in part, on the emotional state, the age, the level of confidence, the level of competence of the user, or a combination thereof. Changing the font of the at least one portion of the text summary 40 based on the information about the user may comprise querying for each portion of the text summary 40 a font based on the one or more classification results 42 of the one or more quality classifiers matching the font labels of the font, and formatting each portion in the corresponding font, thereby modifying the text summary 40. At least one font may be queried from a font database or from the database. Changing the font depending e.g. on the level of competence of the user of the system 200 can be seen as a feedback means to inform the user of portions of the text 20 that are deemed to be less certain. It may also be conducive to enhancing the perceptibility of the text summary 40 to the user (also when reviewing the text summary 40 at a later time). In so doing, such feedback may contribute to a better understanding of the text 20. An example of a text summary 40 after font modification is shown in
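As a hedged sketch of the font modification step, a classification result per portion could be mapped to a font as follows; the class-to-font mapping is an assumed user setting, not prescribed above:

```python
# Illustrative mapping from a competence classification to a font label.
FONT_BY_COMPETENCE = {"expert": "bold", "normal": "regular", "novice": "italic"}

def format_portions(portions):
    """portions: list of (text, competence_class); returns each portion
    paired with the font queried for its classification result."""
    return [(text, FONT_BY_COMPETENCE.get(cls, "regular"))
            for text, cls in portions]
```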
In an embodiment, the method 100 may further comprise applying 170 the text summary 40 to an annotation algorithm configured to add at least one annotation to the text summary 40 based on the information about the user. Adding the at least one annotation to the text summary 40 based on the information about the user may comprise querying, for each portion of the text summary 40, an annotation database so as to find a matching annotation to the portion (or parts thereof such as e.g. a selection of one or more words of the portion) and to the one or more classification results 42 of the one or more quality classifiers, and evaluating a predetermined trigger condition, and adding the matching annotation to the portion, if a matching annotation has been found and the predetermined trigger condition is satisfied, or adding a universal annotation to the portion, if a matching annotation has not been found and the predetermined trigger condition is satisfied, thereby modifying the text summary 40.
Each annotation may have a specific level of competence trigger, i.e. “expert”, “normal”, “novice” which can be used in the assignment of one or more annotations to a specific portion of the text 20. A matching annotation, such as e.g. “Dolphins are mammals and therefore need to breathe air”, may be pre-defined by a teacher, parent, or other. These may have specific pre-defined trigger words or combinations of words, such as “dolphin”, “mammal”, “breathing”.
The matching annotation or the universal annotation may result from a queried annotation template after parametrization based on the portion and/or the one or more classification results 42 of the one or more quality classifiers. As an example, the universal annotation can be an interactive hyperlink (e.g. via the user interface 210 of the system 200) to a search engine. In fact, a parametrization can consist in adding the interactive hyperlink to a phrase such as e.g. “more help here: (hyperlink)”. An example of a text summary 40 with annotation(s) is shown in
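The annotation lookup described above could be sketched as follows, with each annotation carrying pre-defined trigger words and a level-of-competence trigger, and a universal fallback annotation added when no match is found; all entries and names are illustrative assumptions:

```python
# Illustrative annotation database: trigger words plus a competence-class
# trigger condition per annotation.
ANNOTATIONS = [
    {"triggers": {"dolphin", "mammal", "breathing"},
     "on_class": "novice",
     "text": "Dolphins are mammals and therefore need to breathe air."},
]
UNIVERSAL = "more help here: (hyperlink)"

def annotate(portion, competence_class):
    """Add a matching annotation to the portion if its trigger words and
    competence-class trigger are satisfied; else add the universal one."""
    words = {w.strip(".,;:").lower() for w in portion.split()}
    for ann in ANNOTATIONS:
        if ann["on_class"] == competence_class and words & ann["triggers"]:
            return portion + " [" + ann["text"] + "]"
    if competence_class == "novice":  # assumed trigger condition
        return portion + " [" + UNIVERSAL + "]"
    return portion
```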
In an embodiment, the method 100 may further comprise applying 180 the text summary 40 to a collation algorithm configured to compare the text summary 40 to one or more further text summaries corresponding to one or more further users of one or more further systems 200 for handwriting-to-text-summarization (e.g. a network of systems 200 for handwriting-to-text-summarization), and to sample a collated text summary 40 based on the information about the user and on information about the one or more further users, thereby modifying the text summary 40. In one embodiment, the data from a plurality of users taking notes at the same time may be used to train the collation algorithm. The collation algorithm may then enable the system to evaluate the notes of the plurality of users so as to provide feedback. For example, initially the system could provide feedback in a general way, but as more and more people from the same classroom are taking notes (e.g. a plurality of students taking notes for a particular subject), the system may process all these notes to generate an average value and provide feedback to the students of similar classes. Collating text summaries 40 (the text summary and the further text summaries) can be used when note taking is for the same event (e.g. a business meeting, a lecture). Comparing the text summary 40 to one or more further text summaries may comprise identifying for each portion (or for each of a subset of portions) of the text summary 40 a list of corresponding further portions in the one or more further text summaries. Alternatively, or in addition, text summaries 40 can be compared at the word and/or phrase level.
Sampling a collated text summary 40 based on the information about the user and on the information about the one or more further users may comprise selecting a portion or a further portion for each list based on the information about the user and on the information about the one or more further users. The information about the user and the information about the one or more further users may correspond to the level of competence of the user or the further users, respectively. The level of competence can be decisive when deciding which portion shall be included in the collated text summary. As an example, in case multiple valid or equivalent portions have an "expert" classification, one of them may be chosen at random. In examples, in case a portion has no equivalents in the further text summaries, the portion may be immediately selected by default. In some cases, for example if a teacher wants to understand where students are commonly missing knowledge, one or more portions with a "novice" classification may be selected instead. An example of a text summary 40 after collation is shown in
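A minimal sketch of this sampling step: for each list of equivalent portions gathered from the multiple users, pick a portion written at an "expert" level of competence where one exists (here deterministically the first such portion, rather than at random, for simplicity of illustration):

```python
def collate(portion_lists):
    """portion_lists: for each position in the summary, a list of
    candidate (text, competence_class) pairs from different users.
    Selects an "expert" portion where available, else the first one."""
    collated = []
    for candidates in portion_lists:
        experts = [text for text, cls in candidates if cls == "expert"]
        collated.append(experts[0] if experts else candidates[0][0])
    return collated
```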
The method 100 may further comprise displaying 190 the text summary 40 via a graphical output 230 of the user interface 210 of the system 200 for handwriting-to-text-summarization. User settings can be set via the user interface 210 of the system 200.
The system 200 for handwriting-to-text-summarization may comprise a user interface 210 comprising a capturing subsystem 220 configured to capture a handwriting of a user of the system 200, wherein the system 200 is configured to run the method 100 of the first aspect (or an embodiment thereof). Such a system is schematically illustrated in
The capturing subsystem 220 may comprise a touchpad, a touch screen, a graphics tablet, or a digital tablet. Capturing the handwriting of the user of the system 200 may comprise capturing handwriting input 10 representing text 20 written by hand or with a writing utensil 221 by the user on the touchpad, the touch screen, the graphics tablet, or the digital tablet. As an example, a touch screen or digital tablet may be capable of capturing stroke vectors, which may contain information such as stroke pressure, stroke duration and/or stroke distance.
The capturing subsystem 220 may comprise or be integrated in a writing utensil 221. The writing utensil 221 may be a ballpoint pen, a fountain pen, a felt-tip pen, a brush, or a pencil. Alternatively, or in addition, the writing utensil 221 can be a digital pen or a smart pen. As in
The user interface 210 may comprise a graphical output 230. The graphical output 230 (e.g. a touch screen) may allow for user interaction.
The system 200 may comprise at least one database. Alternatively, or in addition, the system 200 may access a database in a cloud (via a communication interface). The system 200 may comprise a (at least one) communication interface 240 to couple to one or more systems 200 for handwriting-to-text-summarization. The communication interface 240 may comprise one or more of a network, the internet, a local area network, a wireless local area network, a broadband cellular network, and/or a wired network. In examples, the system may couple to one or more systems 200 via a server hosted in a cloud. As an example, such network connectivity can be used when collating the text summary 40 with the further text summaries.
One or more implementations disclosed herein include and/or may be implemented using a machine learning model. For example, one or more of the text pre-processing algorithm, machine-learning algorithm, handwriting dynamics algorithm, text summarization algorithm, regression algorithm, portion ranking algorithm, style transfer algorithm, font modification algorithm, annotation algorithm, and/or collation algorithm may be implemented using a machine learning model and/or may be used to train a machine learning model. A given machine learning model may be trained using the data flow 610 of
The training data 612 and a training algorithm 620 (e.g. one or more of the text pre-processing algorithm, machine-learning algorithm, handwriting dynamics algorithm, text summarization algorithm, regression algorithm, portion ranking algorithm, style transfer algorithm, font modification algorithm, annotation algorithm, and/or collation algorithm implemented using a machine learning model) may be provided to a training component 630 that may apply the training data 612 to the training algorithm 620 to generate a machine learning model. According to an implementation, the training component 630 may be provided comparison results 616 that compare a previous output of the corresponding machine learning model to apply the previous result to re-train the machine learning model. The comparison results 616 may be used by the training component 630 to update the corresponding machine learning model. The training algorithm 620 may utilize machine learning networks and/or models including, but not limited to, a deep learning network such as Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), Fully Convolutional Networks (FCN) and Recurrent Neural Networks (RNN), probabilistic models such as Bayesian Networks and Graphical Models, and/or discriminative models such as Decision Forests and maximum margin methods, or the like.
A machine learning model used herein may be trained and/or used by adjusting one or more weights and/or one or more layers of the machine learning model. For example, during training, a given weight may be adjusted (e.g., increased, decreased, removed) based on training data or input data. Similarly, a layer may be updated, added, or removed based on training data and/or input data. The resulting outputs may be adjusted based on the adjusted weights and/or layers.
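The weight and layer adjustments described above can be sketched as follows. This is a hypothetical sketch: the list-of-lists network representation, the function names, and the pruning threshold are illustrative assumptions, not the disclosed implementation.

```python
# Minimal sketch of weight/layer adjustment: a "network" is a list of layers,
# each a list of weights.

def adjust_weights(layer, gradients, learning_rate=0.1):
    """Increase or decrease each weight by stepping against its gradient."""
    return [w - learning_rate * g for w, g in zip(layer, gradients)]

def prune_weights(layer, threshold=1e-3):
    """'Remove' a weight by zeroing it when its magnitude is negligible."""
    return [w if abs(w) >= threshold else 0.0 for w in layer]

def prune_layers(network, threshold=1e-3):
    """Remove a layer entirely once all of its weights have been pruned."""
    return [layer for layer in network if any(abs(w) >= threshold for w in layer)]

network = [[0.5, -0.0005], [0.0002, 0.0001]]
network = [prune_weights(layer) for layer in network]   # per-weight removal
network = prune_layers(network)                         # layer removal
# network is now [[0.5, 0.0]]: the all-negligible layer was removed.
```

In practice a framework applies the same three operations (weight updates, weight pruning, layer addition/removal) at far larger scale during training and architecture search.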
In general, any process or operation discussed in this disclosure that is understood to be computer-implementable, such as the process illustrated in
A computer system, such as a system or device implementing a process or operation in the examples above, may include one or more computing devices. One or more processors of a computer system may be included in a single computing device or distributed among a plurality of computing devices. One or more processors of a computer system may be connected to a data storage device. A memory of the computer system may include the respective memory of each computing device of the plurality of computing devices.
In various embodiments, one or more portions of method 100 and system 200 may be implemented in, for instance, a chip set including a processor and a memory as shown in
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification, discussions utilizing terms such as “processing,” “computing,” “determining,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities.
In a similar manner, the term “processor” may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory. A “computer,” a “computing machine,” a “computing platform,” a “computing device,” or a “server” may include one or more processors.
In a networked deployment, the computer system 700 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computer system 700 can also be implemented as or incorporated into various devices, such as a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless telephone, a land-line telephone, a control system, a camera, a scanner, a facsimile machine, a personal trusted device, a web appliance, a network router, switch or bridge, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. In a particular implementation, the computer system 700 can be implemented using electronic devices that provide voice, video, or data communication. Further, while a computer system 700 is illustrated as a single system, the term “system” shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
As illustrated in
The computer system 700 may include a memory 704 that can communicate via a bus 708. The memory 704 may be a main memory, a static memory, or a dynamic memory. The memory 704 may include, but is not limited to, computer-readable storage media such as various types of volatile and non-volatile storage media, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like. In one implementation, the memory 704 includes a cache or random-access memory for the processor 702. In alternative implementations, the memory 704 is separate from the processor 702, such as a cache memory of a processor, the system memory, or other memory. The memory 704 may be an external storage device or database for storing data. Examples include a hard drive, compact disc (“CD”), digital video disc (“DVD”), memory card, memory stick, floppy disc, universal serial bus (“USB”) memory device, or any other device operative to store data. The memory 704 is operable to store instructions executable by the processor 702. The functions, acts or tasks illustrated in the figures or described herein may be performed by the processor 702 executing the instructions stored in the memory 704. The functions, acts or tasks are independent of the particular type of instruction set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro-code and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like.
As shown, the computer system 700 may further include a display 710, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid-state display, a cathode ray tube (CRT), a projector, a printer or other now known or later developed display device for outputting determined information. The display 710 may act as an interface for the user to see the functioning of the processor 702, or specifically as an interface with the software stored in the memory 704 or in the drive unit 706.
Additionally or alternatively, the computer system 700 may include an input/output device 712 configured to allow a user to interact with any of the components of computer system 700. The input/output device 712 may be a number pad, a keyboard, or a cursor control device, such as a mouse, or a joystick, touch screen display, remote control, or any other device operative to interact with the computer system 700.
The computer system 700 may also or alternatively include drive unit 706 implemented as a disk or optical drive. The drive unit 706 may include a computer-readable medium 722 in which one or more sets of instructions 724, e.g. software, can be embedded. Further, instructions 724 may embody one or more of the methods or logic as described herein. The instructions 724 may reside completely or partially within the memory 704 and/or within the processor 702 during execution by the computer system 700. The memory 704 and the processor 702 also may include computer-readable media as discussed above.
In some systems, a computer-readable medium 722 includes instructions 724 or receives and executes instructions 724 responsive to a propagated signal so that a device connected to a network 770 can communicate voice, video, audio, images, or any other data over the network 770. Further, the instructions 724 may be transmitted or received over the network 770 via a communication port or interface 720, and/or using a bus 708. The communication port or interface 720 may be a part of the processor 702 or may be a separate component. The communication port or interface 720 may be created in software or may be a physical connection in hardware. The communication port or interface 720 may be configured to connect with a network 770, external media, the display 710, or any other components in computer system 700, or combinations thereof. The connection with the network 770 may be a physical connection, such as a wired Ethernet connection or may be established wirelessly as discussed below. Likewise, the additional connections with other components of the computer system 700 may be physical connections or may be established wirelessly. The network 770 may alternatively be directly connected to a bus 708.
While the computer-readable medium 722 is shown to be a single medium, the term “computer-readable medium” may include a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term “computer-readable medium” may also include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by a processor or that cause a computer system to perform any one or more of the methods or operations disclosed herein. The computer-readable medium 722 may be non-transitory, and may be tangible.
The computer-readable medium 722 can include a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories. The computer-readable medium 722 can be a random-access memory or other volatile re-writable memory. Additionally or alternatively, the computer-readable medium 722 can include a magneto-optical or optical medium, such as a disk or tape or other storage device to capture carrier wave signals such as a signal communicated over a transmission medium. A digital file attachment to an e-mail or other self-contained information archive or set of archives may be considered a distribution medium that is a tangible storage medium. Accordingly, the disclosure is considered to include any one or more of a computer-readable medium or a distribution medium and other equivalents and successor media, in which data or instructions may be stored.
In an alternative implementation, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement one or more of the methods described herein. Applications that may include the apparatus and systems of various implementations can broadly include a variety of electronic and computer systems. One or more implementations described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.
The computer system 700 may be connected to a network 770. The network 770 may include one or more wired or wireless networks. The wireless network may be a cellular telephone network, an 802.11, 802.16, 802.20, or WiMAX network. Further, such networks may include a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to, TCP/IP based networking protocols. The network 770 may include wide area networks (WAN), such as the Internet, local area networks (LAN), campus area networks, metropolitan area networks, a direct connection such as through a Universal Serial Bus (USB) port, or any other networks that may allow for data communication. The network 770 may be configured to couple one computing device to another computing device to enable communication of data between the devices. The network 770 may generally be enabled to employ any form of machine-readable media for communicating information from one device to another. The network 770 may include communication methods by which information may travel between computing devices. The network 770 may be divided into sub-networks. The sub-networks may allow access to all of the other components connected thereto, or the sub-networks may restrict access between the components. The network 770 may be regarded as a public or private network connection and may include, for example, a virtual private network or an encryption or other security mechanism employed over the public Internet, or the like.
In accordance with various implementations of the present disclosure, the methods described herein may be implemented by software programs executable by a computer system. Further, in an exemplary, non-limited implementation, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing can be constructed to implement one or more of the methods or functionality as described herein.
Although the present invention has been described above and is defined in the attached claims, it should be understood that the invention may alternatively be defined in accordance with the following embodiments:
Number | Date | Country | Kind
---|---|---|---
21305204 | Feb 2021 | EP | regional
Number | Name | Date | Kind
---|---|---|---
10152472 | Park | Dec 2018 | B2
20040034835 | Kuruoglu | Feb 2004 | A1
20040122657 | Brants et al. | Jun 2004 | A1
20060121424 | Ford | Jun 2006 | A1
20170052696 | Oviatt | Feb 2017 | A1
20170068436 | Auer | Mar 2017 | A1
20170068445 | Lee | Mar 2017 | A1
20180005082 | Bluche | Jan 2018 | A1
20200394364 | Venkateshwaran | Dec 2020 | A1
20220253605 | Tan | Aug 2022 | A1
Entry
---
Asai et al., “EA snippets: Generating summarized view of handwritten documents based on emphasis annotations.” Human Interface and the Management of Information. Information and Knowledge in Applications and Services: 16th International Conference, HCI International 2014, Proceedings. (Year: 2014)
Asai et al., “Legible thumbnail: Summarizing on-line handwritten documents based on emphasized expressions.” Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, 2011. (Year: 2011)
Extended European Search Report issued on Aug. 31, 2021 in counterpart European Patent Application No. 21305204.6.
Oviatt et al., “Dynamic handwriting signal features predict domain expertise.” ACM Transactions on Interactive Intelligent Systems (TiiS) 8.3 (2018): 1-21.
Tur et al., “The CALO meeting assistant system.” IEEE Transactions on Audio, Speech, and Language Processing 18.6 (2010): 1601-1611.
Han et al., “Sentiment pen: Recognizing emotional context based on handwriting features.” Proceedings of the 10th Augmented Human International Conference, 2019.
Number | Date | Country
---|---|---
20220269869 A1 | Aug 2022 | US