1. Field
The present disclosure relates to computer software, and, more particularly, to computer software for monitoring a user's use of a computer.
2. Description of the Related Art
Corporations and other organizations have a need to monitor the use of their computer facilities to guard against abuse such as the use of the computer facilities for private (non-corporate) purposes, unlawful purposes, harassment, malicious purposes, and other nefarious activity. An informal survey of system administrators at a number of U.S. corporations indicated that the system administrators primarily rely upon control of access to the Internet to police the computers and networks under their purview. For example, most of the system administrators stated that they block ports using a firewall of one kind or another.
A limited level of monitoring is sometimes employed in the policing of computers and networks. For example, some of the system administrators stated that they monitor network or computer content, but primarily by trapping browser addresses and text traffic through a port. Traffic volume between nodes may be monitored, but the content of the traffic is not monitored.
However, control of access to the Internet and the limited monitoring described above are generally ineffective in detecting computer abuse. Notwithstanding access control and monitoring, the system administrators surveyed detected many instances of computer abuse only by accidental discovery. For example, an employee was discovered to be running a personal eBay store on company time, a programmer was found to be writing computer games while ostensibly on corporate time, and an employee was found to be harassing fellow employees using corporate computers over corporate networks.
Accordingly, for at least the above reasons, there is a need for methods, systems, and compositions of matter for monitoring the use of computers in order to detect abuse.
These and other needs and disadvantages are overcome by the methods, systems, and compositions of matter disclosed herein. Additional improvements and advantages may be recognized by those of ordinary skill in the art upon study of the present disclosure.
In various aspects, methods disclosed herein may include the step of associating an identified user with a computer, and the step of capturing an image of a monitored region of a computer screen of the computer at a specified time. The methods may include the step of extracting image text from the image, the step of determining an emotional state of the identified user using image text content of the image text, and the step of capturing a subsequent image of the monitored region of the computer screen of the computer at a subsequent time subsequent to the specified time, wherein a time difference between the specified time and the subsequent time is dependent upon the emotional state of the user, in various aspects. The associating step, the capturing step, the extracting step, the determining step, and the capturing a subsequent image step are not controlled by the identified user, in various aspects.
This summary is presented to provide a basic understanding of some aspects of the methods disclosed herein as a prelude to the detailed description that follows below. Accordingly, this summary is not intended to identify key elements of the methods, systems, and compositions of matter disclosed herein or to delineate the scope thereof.
The Figures are exemplary only, and the implementations illustrated therein are selected to facilitate explanation. The number, position, relationship and dimensions of the elements shown in the Figures to form the various implementations described herein, as well as dimensions and dimensional proportions to conform to specific force, weight, strength, flow and similar requirements are explained herein or are understandable to a person of ordinary skill in the art upon study of this disclosure. Where used in the various Figures, the same numerals designate the same or similar elements. Furthermore, when the terms “top,” “bottom,” “right,” “left,” “forward,” “rear,” “first,” “second,” “inside,” “outside,” and similar terms are used, the terms should be understood in reference to the orientation of the implementations shown in the drawings and are utilized to facilitate description thereof.
Computer implemented methods for monitoring use of a computer, as well as related systems and compositions of matter are disclosed herein. The methods, systems, and compositions of matter disclosed herein may allow for monitoring the use of a computer by a user, including visual content that is displayed upon a computer screen of the computer to the user as the visual content appears to the user. The visual content may include textual content in image form, which is referred to herein as image text. The methods, systems, and compositions of matter in various aspects may allow for the monitoring of the image text. In various aspects, the methods include the step of capturing an image of a monitored region of the computer screen of the computer, and the step of extracting image text from the image.
The step of capturing an image of the monitored region of the computer screen may be under independent control, meaning the control of someone other than the user, and the image that is captured may be associated with an identified user of the computer.
The image may include the entire computer screen or portions of the computer screen. In some aspects, the image may include only what is actually visible to the user at the moment the image is captured. In other aspects, the image may include portions of all of the windows and other objects that generally lie within the computer screen even though one window or object may obscure all or part of another window or object. In still other aspects, the image may include all the windows and other objects including those portions that lie outside the view of the computer screen but that may be viewed by a user by scrolling or otherwise moving portions thereof into the view of the computer screen. The computer screen may be a physical computer screen in some aspects, while the computer screen may be a virtual computer screen in other aspects.
In some aspects, the step of capturing an image of a monitored region of a computer screen may be performed generally proximate to one or more specified times in order to monitor the images displayed upon the computer screen to the user proximate to the one or more specified times. In other aspects, the methods may include detecting one or more events generated by the user to provoke a computer operation of the computer, and then, upon detecting the one or more events, performing the step of capturing an image of a monitored region of a computer screen of a computer.
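By way of illustration only, time-based capture of a monitored region may be sketched in Python as follows. The sketch assumes the Pillow library; the region coordinates, capture interval, and file naming are illustrative placeholders and are not values prescribed by this disclosure.

```python
# Minimal sketch: periodic capture of a monitored region of the computer screen.
# Assumes the Pillow library (PIL) is installed; the region coordinates and the
# interval below are illustrative placeholders, not values from this disclosure.
import time
from PIL import ImageGrab  # ImageGrab is available on Windows and macOS

MONITORED_REGION = (50, 50, 1200, 700)   # left, upper, right, lower (hypothetical)
CAPTURE_INTERVAL_SECONDS = 60            # spacing between "specified times" (hypothetical)

def capture_monitored_region(region=MONITORED_REGION):
    """Capture an image of the monitored region of the computer screen."""
    return ImageGrab.grab(bbox=region)

if __name__ == "__main__":
    for _ in range(3):                   # capture a few images for demonstration
        image = capture_monitored_region()
        image.save(f"capture_{int(time.time())}.png")
        time.sleep(CAPTURE_INTERVAL_SECONDS)
```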
The step of extracting image text from the image may employ optical character recognition [OCR] technologies, and the image may be manipulated in various ways so that the image text may be extracted from the image using OCR. While it may be possible to query each application that places text on the screen to obtain the content of that text, in actual practice each application uses different internal methods and internal data structures. Such a query method, if allowed by the application or operating system, would therefore be unique to each application, and to each different device type for which an application version was written. This situation may be further complicated by the fact that applications write to virtual, rather than physical, display surfaces, and may report text as displayed that the user cannot actually see (if, for example, the user has scrolled the screen up or down, or if another application's display has overlapped the original application's display area). As each application is updated to a new version, the means by which it displays text, the window handles and classes it uses to address its virtual screens, and even the function calls, modules, or libraries used to expose displayed text to external callers are subject to change. When new applications become available, the external caller must be updated with additional code to know how to get displayed text from the new application. Extracting image text from the image using OCR technologies may avoid the complications of managing potentially thousands of independently maintained functions targeting different versions of tens of thousands of applications. It also ensures that the extracted image text is exactly what the user is actually able to view.
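A minimal sketch of OCR-based extraction of image text follows, assuming the open-source Tesseract engine accessed through the pytesseract wrapper; any of the OCR engines discussed elsewhere in this disclosure could be substituted.

```python
# Minimal sketch: extracting image text from a captured screen image via OCR.
# Assumes pytesseract (a wrapper around the Tesseract engine) and Pillow are
# installed; a commercial OCR engine could be substituted.
from PIL import Image
import pytesseract

def extract_image_text(image_path: str) -> str:
    """Return the text embedded in the image as it is rendered on screen."""
    image = Image.open(image_path)
    return pytesseract.image_to_string(image)

if __name__ == "__main__":
    print(extract_image_text("capture_example.png"))  # hypothetical file name
```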
The image text may be processed in order to determine image text content of at least a portion of the image text. In various aspects, at least portions of the image, the image text, and/or the image text content may be reported to an administrator, and/or at least portions of the image, image text, and/or image text content may be archived. The nature and/or frequency of the reporting as well as the frequency at which the image is captured may be related to the image text content. For example, the presence of certain image text content may cause an increase in the frequency at which images are captured and the frequency at which image text and/or image text content is reported.
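By way of illustration, the relationship between image text content and capture frequency might be sketched as follows; the watch list and interval values are hypothetical examples and are not part of this disclosure.

```python
# Minimal sketch: relating image text content to capture and reporting frequency.
# The watch list and interval values are hypothetical examples only.
WATCH_LIST = {"ebay", "resume", "harass"}        # illustrative terms of interest
DEFAULT_INTERVAL = 300                           # seconds between captures
ESCALATED_INTERVAL = 30                          # seconds when content matches

def next_capture_interval(image_text: str) -> int:
    """Shorten the capture interval when monitored content appears on screen."""
    words = {w.strip(".,!?").lower() for w in image_text.split()}
    return ESCALATED_INTERVAL if words & WATCH_LIST else DEFAULT_INTERVAL
```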
The methods disclosed herein are generally implemented in software having the form of computer readable instructions adapted to execute upon one or more computers to cause the one or more computers to implement the steps of the methods. Software may be, for example, in the form of high-level code such as C or Java, or may be in the form of machine code. In some aspects, the software may execute on one computer. In other aspects, two or more computers may communicate with one another via network, and the software may be organized in various ways such that portions of the software may be distributed over the two or more computers to be executed by the two or more computers.
The software may be configured into modules, and the modules may be organized in various ways in various aspects. Modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Although generally described as implemented by software, the methods disclosed herein may be implemented in combination with other program modules and/or as a combination of hardware and software in various aspects.
As used herein, the terms “component” and “system” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers.
Computer includes a terminal that may have a computer screen, keyboard, and mouse, and is linked by network to a server. In such an aspect, various software, including that disclosed herein, may execute on the one or more processors in the server, and the computer provides an input/output interface from the server to the user. Computer further includes a computer with one or more processors, memory, computer screen(s), mouse, keyboard, storage device(s), and so forth. Computer screen includes one or more computer screens in communication with the computer that may be generally viewed by the user. Computer further includes, for example, single-processor or multiprocessor computers, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, cellular telephones that include a microprocessor, and microprocessor-based or programmable consumer electronics.
The compositions of matter disclosed herein include computer readable media. Computer readable media may be any available media that may be accessed by the computer and includes both volatile and non-volatile media, removable and non-removable media. For example, computer-readable media may include computer storage media and communication media. Computer readable media may include volatile media, non-volatile media, removable media, and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer readable media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital video disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store the information and that may be accessed by the computer.
Network, as used herein, includes local area networks, cell phone networks (i.e. 3G or 4G), text messaging networks (such as MMS or SMS networks), wide area networks, the Internet, and combinations thereof. Communication may be conducted over the network by various wired and wireless technologies and combinations thereof. Computers may be networked with one another, and storage, various input/output devices, servers, routers and suchlike may be provided about the network, as would be recognized by those of ordinary skill in the art upon study of this disclosure.
As would be recognized by those of ordinary skill in the art upon study of this disclosure, the methods, systems, and compositions of matter disclosed herein may be practiced in distributed computing environments where certain tasks are performed by processors that are linked by network. In a distributed computing environment, modules can be located in computer readable media distributed about the network, and various processors located about the network may execute the modules. The modules and/or processors may communicate with one another via the network.
The user may be the particular person who uses the computer. The administrator may be another person of separate identity from the user. In various aspects, the user may be an employee of a corporation or other organization, and the administrator may be, for example, a systems administrator, a supervisor, a member of a corporate legal department, an administrator in a governmental or an academic setting, a law enforcement officer, a parent, or other individual having responsibility or concern for the usage of the computer, for the user, or both.
With reference to the Figures, an implementation of computer screen 1020 is illustrated.
The computer screen(s) 1020 may be divided into a monitored region 1025 and an ignored region 1023, as illustrated in the Figures.
As illustrated, the ignored region 1023 may be a region generally proximate the boundary of the computer screen 1020. The ignored region 1023 may include the system clock, scrollbars, window captions, tray icons, and other such features that the administrator may decide not to monitor. The monitored region 1025 may be generally interior portions of the computer screen 1020, as illustrated in the Figures.
Text data 40 may be captured in some implementations. Text data 40 includes textual content in character form generally associated with the screen image 1028 displayed upon computer screen 1020, where the character form may be ASCII character(s), ANSI standard character(s), rich text format [RTF], and similar format(s) and combinations of format(s). Text data 40 may include, for example, window captions and window contents when the window contents are textual in nature. For example, for a window within which word processing is taking place, such as a window that contains a Microsoft Word® document, the text data 40 may include the window caption and the textual content of the Microsoft Word® document. The text data 40 may include other textual information displayed upon computer screen 1020 in various implementations. The text data 40 may be collected into a text data file 41, and text data file 41 may include the identity of the user with whom the text data 40 is associated, date and time information, and other information that may be useful in later analysis of the text data 40.
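A sketch of capturing a minimal form of text data, namely the caption of the active window, is shown below. It assumes a Windows environment and the ctypes bindings to the Win32 API; capturing full window contents would require application-specific interfaces and is not shown.

```python
# Minimal sketch: capturing text data such as the active window caption on
# Windows via the Win32 API (ctypes). Assumes a Windows environment; capturing
# full document text would require application-specific interfaces.
import ctypes
import json
import time

def active_window_caption() -> str:
    user32 = ctypes.windll.user32
    hwnd = user32.GetForegroundWindow()
    length = user32.GetWindowTextLengthW(hwnd)
    buffer = ctypes.create_unicode_buffer(length + 1)
    user32.GetWindowTextW(hwnd, buffer, length + 1)
    return buffer.value

def text_data_record(user_id: str) -> dict:
    """Collect text data together with the identity of the user and a timestamp."""
    return {"user": user_id, "timestamp": time.time(),
            "window_caption": active_window_caption()}

if __name__ == "__main__":
    print(json.dumps(text_data_record("user123")))  # "user123" is a placeholder
```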
The image 30 may include image text 35. The image text 35 is extracted from the image 30 at step 12.
Step 115 initiates the capture of the image 130 of the monitored region 1025 of the computer screen 1020. The time and the specified time may be the clock time in some implementations, so that the image 130 of the monitored region 1025 of the computer screen 1020 is captured proximate one or more clock times. The time and the specified time may be based on the occurrence of an event such as the login of the user onto the computer 1008, keystrokes on keyboard 1030, or mouse clicks of mouse 1040, in other implementations. In such implementations, the image 130 of the monitored region 1025 of the computer screen 1020 is captured proximate one or more specified times subsequent to the event(s) 121. There may be a plurality of specified times, and the time interval between the specified times may vary, in various implementations. In still other implementations, the event(s) 121 may trigger capture of the image 130 of the monitored region 1025 of the computer screen 1020, and the image 130 may be captured generally concurrent with the event(s) 121.
The method 100 checks to see if the screen saver is on at step 117. If the screen saver is on, method 100 passes from step 117 to step 140 and terminates, rather than capturing the screen saver into the image 130. Following termination at step 140, method 100 may be reinitiated at step 105, for example, at some subsequent time or upon occurrence of some subsequent event, and/or control may be passed to other modules and so forth, as would be recognized by those of ordinary skill in the art upon study of this disclosure.
If the screen saver is not on, method 100 passes from step 117 to step 118 which checks for the occurrence of events 121. Events 121 include keystrokes on keyboard 1030, mouse clicks of mouse 1040, login of the user onto the computer 1008, and other inputs into computer 1008 by the user. If no event(s) 121 have occurred, method 100 passes from step 118 to step 140 and terminates. In other implementations, step 118 may be omitted so that the image 130 of the monitored region 1025 of computer screen 1020 is captured whether or not any events 121 have occurred.
If events 121 have occurred, method 100 proceeds from step 118 to step 120 where events 121 are collected into an event file 122. The event file 122 may be subsequently analyzed, archived, and/or utilized in various ways.
Method 100 proceeds from step 120 to step 125. At step 125, method 100 checks the number of pixels that have changed since the capture of a prior image 129. The prior image 129 may be an image of the screen image 1028 displayed upon computer screen 1020 at a prior time, and may be generated by a prior execution of method 100. If the number of pixels that have changed since the capture of the prior image 129 is less than a specified minimum number of pixels—i.e. an insufficient number of pixels have changed since the last image capture—then method 100 proceeds to step 140 and terminates. If no prior image 129 exists, method 100 proceeds from step 125 to step 150. If the number of pixels that have changed since the capture of the prior image 129 exceeds the specified minimum number of pixels, method 100 proceeds from step 125 to step 150. At step 150, the image 130 of the screen image 1028 displayed upon computer screen 1020 is captured, and image text 135 is extracted from the image 130.
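The pixel-change test of step 125 might be sketched as follows using Pillow; the per-pixel tolerance and the specified minimum number of changed pixels are illustrative assumptions.

```python
# Minimal sketch: counting changed pixels between the current capture and a
# prior image to decide whether further processing is warranted. The tolerance
# and minimum-change threshold below are illustrative assumptions.
from PIL import Image, ImageChops

MIN_CHANGED_PIXELS = 500   # hypothetical "specified minimum number of pixels"
PIXEL_TOLERANCE = 10       # grayscale difference treated as a change

def enough_pixels_changed(current: Image.Image, prior: Image.Image) -> bool:
    """Return True when the screen has changed enough since the prior capture."""
    diff = ImageChops.difference(current.convert("L"), prior.convert("L"))
    changed = sum(1 for value in diff.getdata() if value > PIXEL_TOLERANCE)
    return changed >= MIN_CHANGED_PIXELS
```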
Method 100 then proceeds from step 150 to step 208. At step 208, the image text 135 and/or the text data 137 may be processed in various ways.
In some implementations, the reporting step 212 may be in the form of a notification that is communicated by, for example, email, and the administrator may be provided with access to the image 130, image text 135, text data 137, and information derived from or generally associated with the user, the image 130, image text 135, text data 137, and events 122. For example, these items may be stored on server 1050, and the administrator may access them on the server 1050 by FTP, through web-browser based display, through a software application specifically configured for that purpose, or in other ways. Note that the administrator does not have a real-time relationship with the user—i.e. the administrator may choose to view (or not view) the image 130, image text 135, text data 137, and information derived from or generally associated with the user at any time and not just at the moment these exist upon the computer 1008. The image 130, image text 135, text data 137, and events 122 may be archived to be available for use in subsequent administrative and/or legal proceedings.
The image 130 is converted from a color image to a grayscale image 128 at step 158. The grayscale image 128 may be 256-color grayscale in various implementations. The method 100 checks for the existence of a prior image 129 at step 184 and branches from step 184 depending upon whether or not a prior image 129 exists.
If there is no prior image 129, step 150 of method 100 proceeds from step 184 to step 188. At step 188, text data 137 is captured. The text data 137 may be collected into a text data file 138, and text data file 138 may include the identity of the user with whom the text data 137 is associated, date and time information, and other information that may be useful in later analysis of the text data 137.
At step 192, grayscale image 128 is converted into an OCR image 131 of sufficient quality that OCR software can generally recognize alphanumeric characters embedded within the OCR image 131. The conversion of grayscale image 128 into an OCR image 131 is further elucidated in the Figures.
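One plausible conversion of the grayscale image 128 into an OCR image 131 is sketched below, in which the image is upscaled and binarized using Pillow; the scale factor and threshold are assumptions for illustration, not values prescribed by this disclosure.

```python
# Minimal sketch: preparing a grayscale screen image for OCR by upscaling and
# binarizing it. The scale factor and threshold are illustrative assumptions.
from PIL import Image

def to_ocr_image(grayscale: Image.Image, scale: int = 2,
                 threshold: int = 160) -> Image.Image:
    """Produce an image of sufficient quality for OCR character recognition."""
    enlarged = grayscale.resize(
        (grayscale.width * scale, grayscale.height * scale),
        resample=Image.LANCZOS)
    # Map pixels above the threshold to white and the rest to black.
    return enlarged.point(lambda value: 255 if value > threshold else 0)
```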
The OCR image 131 created at step 192 is then processed by an OCR engine to extract the image text 135 from the OCR image at step 196. In various implementations the OCR engine may be a commercially available OCR product such as OmniPage®, SimpleOCR®, Tesseract®, or ABBYY FineReader®. The image text 135 extracted from the OCR image 131 may be organized into an image text file 136. The image text file 136 may include the identity of the user with whom the image text 135 is associated, date and time information, and other information that may be useful in later analysis of the image text 135. Step 150 of method 100 then passes from step 196 to step 204 and terminates.
If there is a prior image 129, step 150 of method 100 proceeds from step 184 to step 162. At step 162, the portions of the grayscale image 128 that correspond to the ignored region 1023 of computer screen 1020 are removed from the grayscale image 128. Grayscale image 128 is compared with prior image 129 at step 166. If not enough pixels have changed per the test at step 170, method 100 passes from step 170 to step 204 and terminates, as there is not enough change in the grayscale image 128 from the prior image 129 to warrant further processing of grayscale image 128. If enough pixels have changed per the test at step 170, control passes from step 170 to step 188 and proceeds from step 188 as described above.
The emotional state of the user may be analyzed based upon the mood of the words in the text sample, in some implementations. In other implementations, changes in the emotional state of the user may be analyzed based upon changes in the mood of the words in text samples generated by the user, as described by method 350, which is illustrated in the Figures.
At step 365, image text 336, the text data 337, and/or events 322 generated by the user during time period tB may be captured and combined in various ways to create text sample 358 containing one or more words. Time period tB may differ from time period tA, and time period tB occurs later than time period tA. The time difference between time period tA and time period tB is denoted as Δt. Accordingly, Δt is the time period between samplings of image text, text data, and/or events, such as image text 136, 336, the text data 137, 337, and/or events 122, 322.
Time period tA and time period tB may be equal, in some implementations, while time period tA and time period tB may differ from one another, in other implementations. The time period(s), such as time period tA and time period tB, over which text samples, such as text samples 348, 358, are collected may range, for example, from a few minutes to a day, a week, a month, or longer periods of time, in various implementations.
At step 375, the change in emotional state Δ(emotional state) of the user over time period Δt is calculated. The Δ(emotional state) may be expressed as a change in one or more numerical values that capture the emotional state of the user during time period tA, as determined from text sample 348 at step 360, and that capture the emotional state of the user during time period tB, as determined from text sample 358 at step 370.
Image text 136, the text data 137, and/or events 122 generated by the user during time period tA are combined into text sample 348 at step 355. Note that events 122 may include keystrokes, so that events 122 may include text input by the user via those keystrokes. The image text 136 may include text input by the user, which may be determined from keystrokes recorded in events 122, and the image text 136 may include text from some source other than user keyboard input. The source may be, for example, text in an email from some person other than the user, text on a web browser screen, text in a PDF or Word file, etc. Accordingly, image text 136, text data 137, and events 122 may be combined to create text sample 348, with text sample 348 consisting essentially of text entered by the user, with text sample 348 consisting essentially of text from sources other than the user, or with text sample 348 including both text entered by the user and text from sources other than the user. Similarly, image text 336, text data 337, and events 322 may be combined to create text sample 358, with text sample 358 consisting essentially of text entered by the user, with text sample 358 consisting essentially of text from sources other than the user, or with text sample 358 including both text entered by the user and text from sources other than the user.
Accordingly, the emotional state of the user may be determined at steps 360, 370 using only text entered by the user, using only text from sources other than the user, or using both text entered by the user and text from sources other than the user. The Δ(emotional state) calculated at step 375 may, thus, be based upon only text entered by the user, upon only text from sources other than the user, or upon both text entered by the user and text from sources other than the user.
At step 380, the period Δt between samplings as well as the time periods tA, tB over which text samples 348, 358 are collected may be adjusted based upon the Δ(emotional state). For example, a large change in Δ(emotional state) may result in a shorter period Δt between samplings as well as shorter time periods tA, tB over which text samples 348, 358 are collected.
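A simplified sketch of steps 360 through 380 follows: each text sample is reduced to a numerical emotional-state value, the change Δ(emotional state) is taken, and the sampling period is shortened when the change is large. The scoring table, threshold, and interval adjustment are generic stand-ins for illustration and are not the WDAL weighting scheme.

```python
# Minimal sketch of steps 360-380: score two text samples, take the change in
# emotional state, and adjust the sampling interval when the change is large.
# The scoring table, threshold, and interval halving are illustrative assumptions.
ILLUSTRATIVE_SCORES = {"happy": 2.5, "great": 2.0, "sad": -2.0, "angry": -2.5}

def emotional_state(text_sample: str) -> float:
    """Average score of recognized words in the sample (0.0 if none found)."""
    scores = [ILLUSTRATIVE_SCORES[w.lower()] for w in text_sample.split()
              if w.lower() in ILLUSTRATIVE_SCORES]
    return sum(scores) / len(scores) if scores else 0.0

def adjust_sampling_interval(sample_a: str, sample_b: str,
                             current_interval: float) -> float:
    """Shorten the period between samplings when the emotional change is large."""
    delta = abs(emotional_state(sample_b) - emotional_state(sample_a))
    return current_interval / 2 if delta > 1.0 else current_interval
```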
At step 385, the Δ(emotional state) of the user may be reported to the administrator. The Δ(emotional state) of a number of users may be aggregated in various ways and the aggregation of the Δ(emotional state) of the number of users may be reported to the administrator, in various implementations.
An exemplary implementation of steps 360, 370 of method 350 may employ, for example, Whissell's Dictionary of Affect in Language® (WDAL), at least in part. In other implementations, steps 360, 370 may be implemented using Linguistic Inquiry and Word Count (LIWC), which is a text analysis software program designed by James W. Pennebaker, Roger J. Booth, and Martha E. Francis and available from Pennebaker Conglomerates, Inc. In yet other implementations, steps 360, 370 may be implemented using the Belfast Naturalistic Database annotated using Feeltrace, which is available from SSPNet.eu (Social Signal Processing). SSPNet provides a variety of applications for the processing of visual, linguistic, and other phenomena that may be related to social intelligence, including emotional state. Other resources related to determining emotional state from text may also be employed.
The WDAL, in various implementations, is operatively implemented in software and includes approximately 8,300 words in the US English language. The WDAL assigns weightings in sixteen categories to each word. The categories may include emotion, activation, child pleasantness, child active, imagery, frequency, pleasant, nice, passive, sad, unpleasant, nasty, active, fun, high imagery, and low imagery. All weightings are positive numbers (no negative numbers are used). When no weighting is provided in a particular category for a word, 0.0 is used to weight that word in that category. From those weightings, the relative mood of a word may be determined by calculating that word's weighting against the average weightings and frequencies of all the other words in the dictionary.
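A simplified sketch of applying per-word category weightings to a text sample follows; the two-category dictionary stands in for the sixteen WDAL categories, and its words and numbers are invented for illustration rather than being actual WDAL values.

```python
# Minimal sketch: applying per-word category weightings to a text sample.
# The dictionary below is a stand-in for the WDAL; its words, categories, and
# numbers are invented for illustration and are not actual WDAL values.
ILLUSTRATIVE_DICTIONARY = {
    "pleasant": {"emotion": 2.6, "activation": 1.2},
    "angry":    {"emotion": 1.1, "activation": 2.8},
}
CATEGORIES = ("emotion", "activation")

def category_totals(text_sample: str) -> dict:
    """Accumulate weightings per category; unlisted words contribute 0.0."""
    totals = {category: 0.0 for category in CATEGORIES}
    for word in text_sample.lower().split():
        for category, weight in ILLUSTRATIVE_DICTIONARY.get(word, {}).items():
            totals[category] += weight
    return totals
```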
Because the WDAL contains 8,300 words, text samples, such as text samples 348, 358, should be of sufficient size to ensure that the text samples contain a number of words found in the WDAL in order to be meaningful. Small text samples may contain few if any words found in the WDAL and so may not be meaningful. Accordingly, the time periods tA, tB over which text samples 348, 358 are collected should be sufficient to allow collection of text samples having sufficient size to allow meaningful evaluation of the emotional state of the user.
To solve this problem, a sample is formed by aggregating text snippets collected over a period of time (for example, over a period of two weeks), enough that there is at least one non-zero value in each of the sixteen categories. This establishes an aggregated base sample. The value in the aggregated base sample for each of the sixteen categories is the accumulation of the weightings of all words in the aggregated samples that supplied a value for that category.
Note that the WDAL considers only individual words, not context. That is, for example, the WDAL does not assign the same mood values to “not happy” and to “unhappy” because the WDAL evaluates the words “not” and “happy” separately. Voice (first person, alternate speaker) is not considered, nor is hysteresis (previous emotional state of the speaker—that is, a pleasant word from a person who was previously angry should carry a different emotional weighting than a pleasant word from a speaker who was already pleasant).
When presented with a text sample, such as text sample 348, 358, the WDAL may report statistics across the entire text sample, including averages of the weightings and ratings found for all the individual words in the text sample, counts of punctuation in the text sample, the number of words in the text sample, the number of WDAL words found in the text sample, and the percentages of words in the text sample falling into the various weightings/ratings/attributes categories.
Methods, such as method 350, may include tools to edit the WDAL weightings or enter weightings for other words. This may be helpful to those industries that have unique terminologies, or that have diverse workforces where colloquial expressions contain words of foreign derivation. It will also enable incorporation of changing patterns of speech and writing, including texting (“LOL”, “LMAO”, “WTF”, etc.) and influences from media and music.
Various implementations of steps 360, 370 of method 350 may utilize a general dictionary that includes at least 100,000 words, including all of the words in the WDAL. The count of recognizable words found in the text sample when compared to the general dictionary may be reported as a simple word comparison without weightings. The ratio of WDAL-listed words to non-WDAL-listed words in the text sample may provide a metric for the emotional density of the text sample. For example, a text sample having one emotionally charged word (a word included in the WDAL) among 1,000 words may be less emotionally dense than a text sample having a hundred emotionally charged words among 1,000 words.
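The emotional-density metric might be computed as sketched below; both word sets are small hypothetical stand-ins for the WDAL and for a general dictionary of at least 100,000 words.

```python
# Minimal sketch: emotional density of a text sample, computed as the ratio of
# WDAL-listed words to other recognized words. Both word sets are hypothetical
# stand-ins for the real WDAL and a 100,000-word general dictionary.
AFFECT_WORDS = {"happy", "sad", "angry", "pleasant", "nasty"}
GENERAL_WORDS = AFFECT_WORDS | {"report", "meeting", "schedule", "project"}

def emotional_density(text_sample: str) -> float:
    """Ratio of WDAL-listed words to non-WDAL recognized words in the sample."""
    words = [w.strip(".,!?").lower() for w in text_sample.split()]
    recognized = [w for w in words if w in GENERAL_WORDS]
    affect = sum(1 for w in recognized if w in AFFECT_WORDS)
    other = len(recognized) - affect
    return affect / other if other else float(affect)
```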
In various implementations, text sample(s) collected from the specific user may be analyzed at steps 360, 370 to determine the emotional state of the specific user by using, at least in part, the WDAL. Based upon the relative mood of the words in the specific user's text sample(s), the “is/is not” ratings, the attributes, the emotional density of the user's text sample(s), or various other statistics related to the text sample(s), an emotional state may be attributed to the specific user.
As an example, a baseline may be calculated for each user using text sample(s) sufficient to have non-zero values in all categories. As subsequent text samples are obtained, the weightings obtained from the first text sample used in calculating the baseline may be subtracted out and replaced with the weightings from the subsequent text sample. That is, the oldest text sample in the baseline is replaced with the most recent text sample. Because all baseline weightings then contain non-zero values, problems with comparison to other small samples calculated in the same manner, with averaging several small samples, and with making comparisons to the aggregated base are avoided. Various implementations may employ moving averages, weighted moving averages, auto-regressive moving averages, various other time series techniques, and other statistical techniques in calculating the baseline, the emotional state of the user, and/or a Δ(emotional state) of the user.
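The rolling baseline described above might be sketched with a fixed-length window in which the oldest sample's weightings are dropped as each new sample arrives; the window length is an illustrative assumption.

```python
# Minimal sketch: a rolling per-user baseline in which the oldest text sample's
# weightings are replaced by the newest sample's. The window length and the
# per-sample category values are illustrative assumptions.
from collections import deque

class RollingBaseline:
    def __init__(self, window_length: int = 10):
        self.samples = deque(maxlen=window_length)  # oldest sample drops automatically

    def add_sample(self, category_weightings: dict) -> None:
        self.samples.append(category_weightings)

    def baseline(self) -> dict:
        """Per-category average over the samples currently in the window."""
        totals: dict = {}
        for sample in self.samples:
            for category, weight in sample.items():
                totals[category] = totals.get(category, 0.0) + weight
        count = len(self.samples) or 1
        return {category: total / count for category, total in totals.items()}
```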
Method 350 tracks relative relationships in the change in emotional state Δ(emotional state). The values of emotional state may not be concrete valuations (i.e., “12.9=happy, 4.3=sad”) but rather are used as comparators (“User A with a value of 12.9 is relatively happier than User B with a value of 4.3, and the range of happiness across all text samples sampled was recorded as being from 1.16 to 18.93”) in method 350. Other implementations may track the emotional state, changes in emotional state Δ(emotional state), or changes in changes in emotional state (second derivatives). Methods, such as method 350, may include changing thresholds and parameters used in measuring emotional state depending upon changes in emotional state Δ(emotional state). For example, weightings of words in the WDAL may be altered in response to Δ(emotional state).
Changes in the emotional state Δ(emotional state) of the user over time may be tracked. For example, the emotional state of the user as determined based upon text samples from week 1 may be compared with the emotional state of the user as determined based upon text samples from a subsequent week 2.
Changes in the emotional state Δ(emotional state) of User A may be compared with changes in the emotional state Δ(emotional state) of User B during a given time period, say the same week, in various implementations. Changes in the emotional state Δ(emotional state) of User Group 1 (an aggregation, for example, of the emotional state of User A, User B, User C, User D) may be compared with changes in the emotional state Δ(emotional state) of User Group 2 (an aggregation, for example, of the emotional state of User F, User G, User H, User I, User J) during a given time period, say the same week, in various implementations. Changes in the emotional state Δ(emotional state) of User Group 1 may be tracked over time, in various implementations. Changes in emotional state Δ(emotional state) of various users, user groups, combinations thereof, and so forth, may be tracked over time, compared with one another, or tracked in conjunction with various events in an organization such as layoffs, reorganizations, expansions, changes in management, or changes in operation, in various implementations. Various statistical measures of emotional state, Δ(emotional state), and so forth may be used in various implementations. For example, the mean Δ(emotional state), standard deviation of Δ(emotional state), or skew of Δ(emotional state) for a user over time, for groups of users at a particular time, or for groups of users over time may be calculated, in various implementations.
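Aggregate statistics of Δ(emotional state) across a user group might be computed as sketched below using Python's statistics module; the example delta values are placeholders, not measured data.

```python
# Minimal sketch: aggregate statistics of emotional-state changes across a
# user group. The example delta values are placeholders, not measured data.
from statistics import mean, stdev

def group_delta_statistics(deltas_by_user: dict) -> dict:
    """Mean and standard deviation of Δ(emotional state) across a user group."""
    deltas = list(deltas_by_user.values())
    return {"mean": mean(deltas),
            "stdev": stdev(deltas) if len(deltas) > 1 else 0.0}

if __name__ == "__main__":
    group_1 = {"User A": 1.4, "User B": -0.3, "User C": 0.8}   # placeholder data
    print(group_delta_statistics(group_1))
```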
Method 100 passes from step 209 to step 212. At step 212, the image 130 may be reported per step 225. At least portions of the image text content 146 may be reported per step 213. At least portions of the text data content 148 may be reported per step 217, and at least portions of the event content 152 may be reported per step 221. In other implementations, at least portions of the image text 136, the text data 137, and/or events 122 may be reported.
The nature of the reporting may be dependent upon image text content 146, text data content 148, and/or event content 152. For example, certain image text content 146, text data content 148, and event content 152 may trigger more frequent reporting, may alter the type and quantity of information within the reports, or may alter the administrator(s) to whom the reports are directed. Certain image text content 146, text data content 148, and event content 152 may alter the frequency and extent of the monitoring in various implementations. For example, certain image text content 146, text data content 148, and/or event content 152 may trigger more frequent collection of image 130 and extraction of image text 136 from image 130. The image text content 146, text data content 148, and/or event content 152 are determined generally without human intervention, and any subsequent actions taken based upon the content are automatic, generally without human intervention.
Method 100 passes from step 212 to step 216. At step 216, the image 130 (step 229), image text 136 (step 233), text data 137 (step 237), image text content 146 (step 241), text data content 148 (step 245), events 122 (step 249), and/or event content 152 (step 253) may be archived. By archived, it is meant that the image 130, image text 136, text data 137, image text content 146, text data content 148, events 122, and/or event content 152 are stored to generally permanent non-volatile media so that the image 130, image text 136, text data 137, image text content 146, text data content 148, events 122, and/or event content 152 may be retrieved at some later time. The non-volatile media, for example, may be magnetic, optical, or semiconductor based, may be generally fixed or may be removable, and may be located anywhere about network 1080. The image 130, image text 136, text data 137, image text content 146, text data content 148, events 122, and/or event content 152 may be archived in various compressed formats in various implementations. Method 100 then passes from step 216 to step 220 and terminates. The archiving of image 130 in compressed format is described in U.S. patent application Ser. No. 12/571,308 entitled “METHODS FOR DIGITAL IMAGE COMPRESSION” by F. Scott Deaver, which is hereby incorporated by reference in its entirety herein.
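Archiving in compressed format might be sketched generically as follows, writing the image as a losslessly compressed PNG and the associated text as a gzip-compressed file; this generic approach is illustrative only and is not the compression method of the incorporated application.

```python
# Minimal sketch: archiving a captured image and its extracted text in
# compressed form. This generic approach (PNG plus gzip) is illustrative only
# and is not the compression method of the incorporated patent application.
import gzip
from PIL import Image

def archive_capture(image: Image.Image, image_text: str, basename: str) -> None:
    """Write the image losslessly compressed and the text gzip-compressed."""
    image.save(f"{basename}.png")                      # PNG is losslessly compressed
    with gzip.open(f"{basename}.txt.gz", "wt", encoding="utf-8") as handle:
        handle.write(image_text)
```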
In some implementations, at least a portion of the steps of method 100 is performed on monitored computer 1008. Substantially all of the steps of method 100 prior to and including step 208 may be performed on the monitored computer 1008 in some implementations. In such implementations, the archiving steps 229, 233, 237, 241, 245, 249, 253 and/or reporting steps 213, 217, 221, 225 are performed only upon detection of certain image text content 146, text data content 148, and/or event content 152. This may minimize network traffic on the network 1080, reduce requirements for storage space and network bandwidth, and otherwise decrease the overhead imposed by method 100.
The foregoing discussion along with the Figures discloses and describes various exemplary implementations. These implementations are not meant to limit the scope of coverage, but, instead, to assist in understanding the context of the language used in this specification and in the claims. Accordingly, variations of the methods as well as systems and compositions of matter that differ from these exemplary implementations may be encompassed by the appended claims. Upon study of this disclosure and the exemplary implementations herein, one of ordinary skill in the art may readily recognize that various changes, modifications and variations can be made thereto without departing from the spirit and scope of the inventions as defined in the following claims.
The present application is a continuation-in-part of co-pending U.S. patent application Ser. No. 12/638,915 filed 15 Dec. 2009 that, in turn, is a continuation-in-part of U.S. patent application Ser. No. 12/571,291 filed 30 Sep. 2009 (now U.S. Pat. No. 8,457,347). Both U.S. patent application Ser. No. 12/638,915 and U.S. patent application Ser. No. 12/571,291 are incorporated by reference in their entirety herein.
Relation | Number | Date | Country
---|---|---|---
Parent | 12638915 | Dec 2009 | US
Child | 14273861 | | US
Parent | 12571291 | Sep 2009 | US
Child | 12638915 | | US