The present disclosure generally relates to monitoring communications for activity that violates ethical, legal, or other standards of behavior and poses risk or harm to institutions or individuals. The need for detecting violations in the behavior of representatives of an institution has become increasingly important in the context of proactive compliance, for instance. For example, in the modern world of financial services, there are many dangers to large institutions from a compliance perspective, and the penalties for non-compliance can be substantial, both from a monetary standpoint and in terms of reputation. Institutions are coming under increasing pressure to quickly identify unauthorized trading, market manipulation and unethical conduct within their organization, for example, but often lack the tools to do so effectively.
Among other needs, there exists a need for effective identification of activity that violates ethical, legal, or other standards of behavior and poses risk or harm to institutions or individuals from communications. It is with respect to these and other considerations that the various embodiments described below are presented.
Embodiments of the present disclosure are directed generally towards methods, systems, and computer-readable storage medium relating to, in some embodiments, synchronization and analysis of audio communications data and text data.
Some aspects and embodiments disclosed herein may be utilized for providing advantages and benefits in the area of communication surveillance for regulatory compliance. Some implementations can process all communications, including electronic forms of communications such as instant messaging (or “chat”), email, voice, and/or social network messaging to connect and monitor an organization's employee communications for regulatory and corporate compliance purposes. Some embodiments of the present disclosure unify detection, user interfaces, behavioral models, and policies across all communication data sources, and can provide tools for compliance analysts in furtherance of these functions and objectives. Some implementations can proactively analyze users' actions to identify breaches such as unauthorized activities that are against applicable policies, laws, or are unethical, through the use of natural language processing (NLP) models.
Among other benefits and advantages, the present disclosure can provide a streamlined, holistic communications review which can allow user(s) to uncover risks in voice communications data, in accordance with some aspects and example embodiments of the present disclosure. According to some aspects, and in some example embodiments, alignment of transcription and audio can provide for users to understand communications data more effectively. Conversational patterns and interactions within communication data that may have been previously hidden can now be made more visible and actionable.
In one aspect, the present disclosure relates to a computer-implemented method. In one or more embodiments, the method includes the steps of receiving audio data; transcribing the audio data into text, based on one or more languages being spoken within the audio data; identifying a potential violation of a predetermined standard within at least the transcribed text, wherein the identified potential violation corresponds to a match of a potential violation of a predetermined policy, wherein the policy corresponds to at least one scenario and target population, and wherein the predetermined standard is based on the at least one scenario; generating an alert in response to the match of the potential violation of the predetermined policy; generating an audio waveform representing the audio data and outputting a visual representation corresponding to the audio waveform, wherein the visual representation of the audio waveform is displayed along a vertical axis; and generating a visual representation of the transcribed text, along a horizontal axis perpendicular to the vertical axis, such that lines of the transcribed text, as output for display, are synchronized with and align with corresponding portions of the audio waveform.
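At a high level, the transcribe-match-alert portion of this pipeline can be sketched in a few lines. This is an illustrative sketch only: the `Alert` dataclass, function name, and the simple substring matching below are assumptions for exposition, not the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    policy: str
    excerpt: str

def detect_violations(transcript_lines, lexicon, policy_name):
    """Match each transcribed line against a policy's lexicon terms
    and generate an alert for every match."""
    alerts = []
    for line in transcript_lines:
        lowered = line.lower()
        if any(term in lowered for term in lexicon):
            alerts.append(Alert(policy=policy_name, excerpt=line))
    return alerts

# One line matches the lexicon, so a single alert is generated.
alerts = detect_violations(
    ["Let's keep this off the record", "Lunch at noon?"],
    lexicon={"off the record"},
    policy_name="confidentiality",
)
```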
In one or more embodiments, the method further includes receiving text data from an electronic communication between persons, and wherein identifying the potential violation of the predetermined standard further comprises identifying the potential violation based on both the audio data and the received text data from the electronic communication.
In one or more embodiments, transcribing the audio data into text is based on multiple languages being spoken within the audio data.
In one or more embodiments, identifying the potential violation comprises implementing a machine learning model.
In one or more embodiments, the at least one scenario comprises a lexicon representing one or more terms or regular expressions.
In one or more embodiments, the at least one scenario comprises at least one of features corresponding to a machine learning model, features corresponding to a lexicon, and natural language features.
In one or more embodiments, the at least one scenario is formed by joining the machine learning features and the lexicon features, using Boolean operators.
In one or more embodiments, the method further comprises outputting, for display, a play head configured to move through the audio waveform and/or corresponding, aligned text during playback.
In one or more embodiments, the play head moves vertically along the audio waveform as the transcribed text moves forwards or backwards in playback.
In one or more embodiments, the movement of the play head is controllable by a user based on received user input.
In one or more embodiments, as the audio data corresponding to the audio waveform moves forwards or backwards in playback, the location in time within the audio waveform is visually emphasized within the visual representation of the audio waveform.
In one or more embodiments, as the transcribed text corresponding to the audio data moves forwards or backwards in playback, the location in time corresponding to a corresponding segment of the transcribed text is visually emphasized.
In one or more embodiments, the emphasized segment of the transcribed text is underlined and/or highlighted.
In one or more embodiments, as displayed during playback, the audio waveform and transcribed text scroll in synchronization with the playback.
In one or more embodiments, the audio waveform and transcribed text are displayed within a webpage and the audio waveform and transcribed text scroll content of the webpage in synchronization with the playback.
In one or more embodiments, the visual representation corresponding to the audio waveform and the visual representation of the transcribed text are output for display to a user in an interactive graphical user interface.
In another aspect, the present disclosure relates to a system. In one or more embodiments, the system includes a voice surveillance system configured to: receive audio data, and transcribe the audio data into text, based on one or more languages being spoken within the audio data; and an electronic communications surveillance system configured to: identify a potential violation of a predetermined standard within at least the transcribed text, wherein the identified potential violation corresponds to a match of a potential violation of a predetermined policy, wherein the policy corresponds to at least one scenario and target population, and wherein the predetermined standard is based on the at least one scenario, generate an alert in response to the match of the potential violation of the predetermined policy, and wherein the voice surveillance system and/or the electronic communications system are configured to: generate an audio waveform representing the audio data and output a visual representation corresponding to the audio waveform, wherein the visual representation of the audio waveform is displayed along a vertical axis, and generate a visual representation of the transcribed text, along a horizontal axis perpendicular to the vertical axis, such that lines of the transcribed text, as output for display, are synchronized with and align with corresponding portions of the audio waveform.
In one or more embodiments, the electronic communications surveillance system is configured to: receive text data from an electronic communication between persons, and wherein identifying the potential violation of the predetermined standard further comprises identifying the potential violation based on both the audio data and the received text data from the electronic communication.
In one or more embodiments, transcribing the audio data into text is based on multiple languages being spoken within the audio data.
In one or more embodiments, identifying the potential violation comprises implementing a machine learning model.
In one or more embodiments, the at least one scenario comprises a lexicon representing one or more terms or regular expressions.
In one or more embodiments, the at least one scenario comprises at least one of features corresponding to a machine learning model, features corresponding to a lexicon, and natural language features.
In one or more embodiments, the at least one scenario is formed by joining the machine learning features and the lexicon features, using Boolean operators.
In one or more embodiments, the voice surveillance system and/or the electronic communications system are configured to: output, for display, a play head configured to move through the audio waveform and/or corresponding, aligned text during playback.
In one or more embodiments, the play head moves vertically along the audio waveform as the transcribed text moves forwards or backwards in playback.
In one or more embodiments, the movement of the play head is controllable by a user based on received user input through a graphical user interface.
In one or more embodiments, as the audio data corresponding to the audio waveform moves forwards or backwards in playback, the location in time within the audio waveform is visually emphasized within the visual representation of the audio waveform.
In one or more embodiments, as the transcribed text corresponding to the audio data moves forwards or backwards in playback, the location in time corresponding to a corresponding segment of the transcribed text is visually emphasized.
In one or more embodiments, the emphasized segment of the transcribed text is underlined and/or highlighted.
In one or more embodiments, as displayed during playback, the audio waveform and transcribed text scroll in synchronization with the playback.
In one or more embodiments, the audio waveform and transcribed text are displayed within a webpage and the audio waveform and transcribed text scroll content of the webpage in synchronization with the playback.
In one or more embodiments, the visual representation corresponding to the audio waveform and the visual representation of the transcribed text are output for display to a user in an interactive graphical user interface.
In yet another aspect, the present disclosure relates to a computer-readable medium. In one or more embodiments, the computer-readable medium is a non-transitory computer-readable medium storing instructions which, when executed by one or more processors of a computing system, cause the computing system to perform functions that include receiving audio data; transcribing the audio data into text, based on one or more languages being spoken within the audio data; identifying a potential violation of a predetermined standard within at least the transcribed text, wherein the identified potential violation corresponds to a match of a potential violation of a predetermined policy, wherein the policy corresponds to at least one scenario and target population, and wherein the predetermined standard is based on the at least one scenario; generating an alert in response to the match of the potential violation of the predetermined policy; generating an audio waveform representing the audio data and outputting a visual representation corresponding to the audio waveform, wherein the visual representation of the audio waveform is displayed along a vertical axis; and generating a visual representation of the transcribed text, along a horizontal axis perpendicular to the vertical axis, such that lines of the transcribed text, as output for display, are synchronized with and align with corresponding portions of the audio waveform.
In one or more embodiments, the executable instructions further comprise, when executed, receiving text data from an electronic communication between persons, and wherein identifying the potential violation of the predetermined standard further comprises identifying the potential violation based on both the audio data and the received text data from the electronic communication.
In one or more embodiments, transcribing the audio data into text is based on multiple languages being spoken within the audio data.
In one or more embodiments, identifying the potential violation comprises implementing a machine learning model.
In one or more embodiments, the at least one scenario comprises a lexicon representing one or more terms or regular expressions.
In one or more embodiments, the at least one scenario comprises at least one of features corresponding to a machine learning model, features corresponding to a lexicon, and natural language features.
In one or more embodiments, the at least one scenario is formed by joining the machine learning features and the lexicon features, using Boolean operators.
In one or more embodiments, the executable instructions further comprise, when executed, outputting, for display, a play head configured to move through the audio waveform and/or corresponding, aligned text during playback.
In one or more embodiments, the play head moves vertically along the audio waveform as the transcribed text moves forwards or backwards in playback.
In one or more embodiments, the movement of the play head is controllable by a user based on received user input.
In one or more embodiments, as the audio data corresponding to the audio waveform moves forwards or backwards in playback, the location in time within the audio waveform is visually emphasized within the visual representation of the audio waveform.
In one or more embodiments, as the transcribed text corresponding to the audio data moves forwards or backwards in playback, the location in time corresponding to a corresponding segment of the transcribed text is visually emphasized.
In one or more embodiments, the emphasized segment of the transcribed text is underlined and/or highlighted.
In one or more embodiments, as displayed during playback, the audio waveform and transcribed text scroll in synchronization with the playback.
In one or more embodiments, the audio waveform and transcribed text are displayed within a webpage and the audio waveform and transcribed text scroll content of the webpage in synchronization with the playback.
In one or more embodiments, the visual representation corresponding to the audio waveform and the visual representation of the transcribed text are output for display to a user in an interactive graphical user interface.
Other aspects and features according to the example embodiments of the present disclosure will become apparent to those of ordinary skill in the art, upon reviewing the following detailed description in conjunction with the accompanying figures.
Reference will now be made to the accompanying drawings, which are not necessarily drawn to scale.
Although example embodiments of the present disclosure are explained in detail herein, it is to be understood that other embodiments are contemplated. Accordingly, it is not intended that the present disclosure be limited in its scope to the details of construction and arrangement of components set forth in the following description or illustrated in the drawings. The present disclosure is capable of other embodiments and of being practiced or carried out in various ways.
It must also be noted that, as used in the specification and the appended claims, the singular forms “a,” “an” and “the” include plural referents unless the context clearly dictates otherwise.
By “comprising” or “containing” or “including” is meant that at least the named compound, element, particle, or method step is present in the composition or article or method, but does not exclude the presence of other compounds, materials, particles, method steps, even if the other such compounds, material, particles, method steps have the same function as what is named.
In describing example embodiments, terminology will be resorted to for the sake of clarity. It is intended that each term contemplates its broadest meaning as understood by those skilled in the art and includes all technical equivalents that operate in a similar manner to accomplish a similar purpose. It is also to be understood that the mention of one or more steps of a method does not preclude the presence of additional method steps or intervening method steps between those steps expressly identified. Steps of a method may be performed in a different order than those described herein without departing from the scope of the present disclosure. Similarly, it is also to be understood that the mention of one or more components in a device or system does not preclude the presence of additional components or intervening components between those components expressly identified.
According to some embodiments of the present disclosure, audio can be plotted vertically and synchronized with text from transcript(s) of an audio session. Line heights corresponding to lines of the text can be utilized. For example, the audio can be broken down into pieces of a certain duration, such as four-second pieces. The received input data can be a standard-format audio file. Other features of the present disclosure in some embodiments include functionalities for focus view (wherein there can be reduced visualization of certain text but not other text) and redacted transcriptions.
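The fixed-duration chunking mentioned above (e.g., four-second pieces) can be sketched as follows. The function name and the (start, end) tuple return shape are illustrative assumptions.

```python
def chunk_audio(duration_s, chunk_s=4.0):
    """Split a session of duration_s seconds into fixed-length pieces.

    Returns (start, end) pairs in seconds; the final piece may be
    shorter than chunk_s when the session length is not a multiple.
    """
    chunks = []
    start = 0.0
    while start < duration_s:
        end = min(start + chunk_s, duration_s)
        chunks.append((start, end))
        start = end
    return chunks

# A 10-second call yields three pieces: 0-4 s, 4-8 s, and 8-10 s.
pieces = chunk_audio(10.0)
```

Each transcript line can then be tied to the piece whose time interval contains the line's timestamp, which is one simple way to keep line heights aligned with the plotted audio.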
The following discussion provides some descriptions and non-limiting definitions, and related contexts, for terminology and concepts used in relation to various aspects and embodiments of the present disclosure.
An “event” can be considered any object with a fixed time, and an event can be observable data that happens at a point in time, for example an email, a badge swipe, a trade (e.g., trade of a financial asset), or a phone call (see also the illustration of
A “property” relates to an item within an event that can be uniquely identified, for example metadata (see also illustration of
A “communication” can be any event with language content, for example email, chat, a document, social media, or a phone call (see also illustration of
A “metric” can be a weighted combination of factors to identify patterns and trends (e.g., a number-based value to represent behavior or intent from a communication). Examples of metrics include sentiment, flight risk, risk indicator, and responsiveness score. A metric may additionally or alternatively be referred to herein as, or with respect to, a score, measurement, or rank.
A “post” can be an identifier's contribution within a communication, for example a single email within a thread, a single chat post, a continuous burst of communication from an individual, or a single social media post (see also illustration of
A “conversation” can be a group of semantically related posts, for example the entirety of an email with replies, a thread, or alternatively a started and stopped topic, a time-bound topic, and/or a post together with its replies. Several posts can make up a conversation within a communication.
A “signal” can be an observation tied to a specific event that is identifiable, for example rumor language, wall crossing, or language of interest.
A “lexicon” is a collection of terms (also referred to herein as “entries”) that can be matched against text to find language of interest (e.g., language that may trigger an alert, described below). Terms can be strings of characters and/or operators that can implement a search pattern for matching text. A lexicon can include a grammar that defines syntax for the terms in the lexicon.
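A lexicon of terms and regular-expression entries, as defined above, can be approximated with ordinary regular expressions. The `Lexicon` class below is a minimal sketch under that assumption, not the disclosed implementation.

```python
import re

class Lexicon:
    """A collection of entries (plain strings or regular expressions)
    matched against text to find language of interest."""

    def __init__(self, entries):
        # Each entry is compiled once; matching ignores letter case.
        self.patterns = [re.compile(e, re.IGNORECASE) for e in entries]

    def hits(self, text):
        """Return every substring of text matched by any entry."""
        return [m.group(0) for p in self.patterns for m in p.finditer(text)]

lex = Lexicon([r"wall[- ]crossing", r"rumou?r"])
found = lex.hits("Heard a rumor about the wall-crossing list")
```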
A “scenario” can be a combination of signals and metrics that can be applied to text. In some embodiments of the present disclosure, scenarios can be created by combining lexicons, machine learning models (e.g., pre-trained models described below), and natural language features. When a section of text matches the parameters set by the scenario, the scenario can trigger the generation of an alert (described below) which represents that a scenario match has occurred.
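As one illustration of combining signals, a scenario that joins a lexicon hit with a machine learning score using a Boolean AND might look like the sketch below. The function name, the stand-in `model_score` argument, and the 0.8 threshold are all assumptions for exposition.

```python
def scenario_match(text, lexicon_terms, model_score, threshold=0.8):
    """Scenario = (lexicon hit AND model score above threshold).

    model_score stands in for the output of a pre-trained classifier.
    """
    lexicon_hit = any(term in text.lower() for term in lexicon_terms)
    return lexicon_hit and model_score >= threshold

# Both conditions hold, so this section of text would trigger an alert.
triggered = scenario_match(
    "I'm taking clients with me", {"taking clients"}, model_score=0.9
)
```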
A “policy” can be a scenario applied to a population with a defined workflow. A policy may be, for instance, how a business chooses to handle specific situations, for example as it may relate to ongoing deal monitoring, disclaimer adherence, and/or anti money laundering (AML) monitoring. As illustrated in
An “alert” can indicate to a user that a policy match or scenario match has occurred which requires action (sometimes referred to herein with respect to “actioning” an alert). A signal that requires review can be considered an alert. As an example, an indication of intellectual property theft may be found in a chat post with language that matches the scenario, on a population that needs to be reviewed.
A “manual alert” can be an alert added to a communication from a user that was not generated from the system. A manual alert may be used, for example, when a user needs to add an alert to language or other factors for further review.
A “hit” can be an exact signal that applies to a policy on events, for example an occurrence of the language “I'm taking clients with me when I leave”, a behavior pattern change, and/or a metric change.
A “review” can be the act of a user assigning actions on hits, alerts, or communications.
A “tag” can be a label attached to a communication for the purpose of identification or to give other information, for example a new feature set that will enable many workflow practices.
A “personal identifier” can be any structured field that can be used to define a reference or entity, for example “jeb@jebbush.com”, “@CMcK”, “EnronUser1234”, or “(555) 336-2700” (i.e., a personal identifier can include email, a chat handle, or a phone number). As used herein, a personal identifier may additionally or alternatively be referred to herein as, or with respect to, an “entity ID”.
An “entity” can be an individual, object, and/or property in real life, and can have multiple identifiers or references, for example John Smith, IBM, or Enron. Other related terms may include profile, participant, actor, and/or resolved entity.
A “relationship” can be a connection between two or more identifiers or entities, for example “works in” department, person-to-person, person-to-department, and/or company-to-company.
The following discussion includes some descriptions and non-limiting definitions, and related contexts, for terminology and concepts that can relate to a graphical user interface (and associated example views as output to a user) that can be used by a user to interact with, visualize, and perform various functionalities in accordance to one or more embodiments of the present disclosure.
A “sidebar” can be a global placeholder for navigation and branding (see, e.g., illustrations in
“Content” as shown and labeled in, for example,
An “aside” as shown and labeled in, for example,
A “visual view” can include a chart, graph, or data representation that is beyond simple text, for example communications (“comms”) over time, alerts daily, queue progress, and/or relationship metric(s). As used herein, visual views may additionally or alternatively be referred to herein as, or with respect to, charts or graphs.
The following discussion includes some descriptions and non-limiting definitions, and related contexts, for terminology and concepts that may particularly relate to machine learning models and the training of machine learning models, in accordance with one or more embodiments of the present disclosure.
A “pre-trained model” can be a model that performs a task but requires tuning (e.g., supervision and/or other interaction by an analyst or developer) before production. An “out of the box model” can be a model that benefits from, but does not require, tuning before use in production. Pre-trained models and out of the box models can be part of the building blocks for a policy.
In some embodiments, the present disclosure can provide for implementing analytics using “supervised” machine learning techniques (herein also referred to as “supervised learning”). Supervised mathematical models can encode a variety of different data aspects which can be used to reconstruct a model at run-time. The aspects utilized by these models may be determined by analysts and/or developers, for example, and may be fixed at model training time. Models can be retrained at any time, but retraining may be done more infrequently once models reach certain levels of accuracy.
Embodiments of the present disclosure can include systems and methods for applying scenarios to combinations of text communications and audio communications to generate alerts for displaying the alerts, text communications, and audio communications to a user.
As described herein, a machine learning model can be a machine learning classifier that is configured to classify text. Additionally, in some embodiments, model training can include training models for analysis of text data from one or more electronic communications between at least two persons.
The present disclosure contemplates that machine learning training techniques known in the art can be applied to the data disclosed in the present disclosure for model training. For example, in some embodiments, the model training can include evaluating the model against established datasets. As another example, the model training can be based on a user input, for example a user input that labels the data.
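A toy illustration of label-driven training and held-out evaluation follows. The keyword "model" here is a deliberately simplified stand-in for the text classifiers contemplated above, and every name in it is an assumption.

```python
from collections import Counter

def train_keyword_model(labeled_texts):
    """Learn which tokens appear more often in violation-labeled text.

    labeled_texts: (text, is_violation) pairs, e.g. from user labeling.
    """
    pos, neg = Counter(), Counter()
    for text, is_violation in labeled_texts:
        (pos if is_violation else neg).update(text.lower().split())
    return {tok for tok in pos if pos[tok] > neg.get(tok, 0)}

def evaluate(model_tokens, held_out):
    """Fraction of an established, labeled dataset the model gets right."""
    correct = sum(
        any(tok in model_tokens for tok in text.lower().split()) == label
        for text, label in held_out
    )
    return correct / len(held_out)

model = train_keyword_model([
    ("shred the documents", True),
    ("lunch at noon", False),
])
accuracy = evaluate(model, [
    ("shred everything", True),
    ("see you at noon", False),
])
```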
When a policy match occurs, an alert can be triggered/generated which indicates, to a user (e.g., a user of a graphical user interface), that a policy match has occurred which requires action. The policy can correspond to actions, for example, that violate at least one of a combination of signals and metrics, a population, and a workflow (also referred to herein as a “violation” or “violation condition” in some uses). Additionally, the present disclosure contemplates that the alerts can be reviewed by the user or by a machine learning model. This review can include determining whether the alerts correspond to an actual violation.
In some embodiments of the present disclosure, a user can review the data and perform an interaction using a user interface (e.g., a graphical user interface that is part of or operably connected to the computer system illustrated in
Additionally, some embodiments of the present disclosure provide for increasing the accuracy of a conduct surveillance system. Systems and methods can be configured to receive at least one alert from a conduct surveillance system. As used in the present disclosure, a “conduct surveillance system” can refer to a tool for reviewing and investigating communications. Again, the alerts can represent a potential violation of a predetermined standard. The conduct surveillance system can generate the alerts in response to an electronic communication between persons matching a violation of a predetermined policy. The conduct surveillance system can include a voice surveillance system and/or an electronic communications surveillance system. As described in greater detail elsewhere in the present disclosure, the predetermined policy can include a scenario, a target population, and a workflow.
In some embodiments of the present disclosure, the scenario can include a machine learning classifier. Additionally, in some embodiments of the present disclosure, the scenario can include a lexicon. Again, as described herein, the lexicon can represent one or more terms and/or regular expressions. A non-limiting example of a term that can be included in the lexicon is a string of one or more text characters (e.g., a word).
A system in accordance with some embodiments of the present disclosure can determine whether each of the at least one alert represents an actual violation of the predetermined policy. As a non-limiting example, the predetermined policy can be configured to detect the dissemination of confidential information, which could represent a violation of a law, regulation, or internal policy. But a communication identified by the predetermined policy as a potential violation may not represent an actual violation of the underlying law, regulation, or policy (i.e., a false positive).
In some embodiments of the present disclosure, determining whether each alert represents an actual violation of the policy is referred to as “actioning” the alert. This can include determining whether each of the at least one alert represents an actual violation of the policy, law, or ethical standard that the policy/scenario that generated the alert is configured to detect. Actioning the alert can include displaying the alert to a user and receiving a user input from a user interface representing whether the alert represents an actual violation of the policy.
Still with reference to
The voice surveillance system 1510 can also include a voice activity detection module 1516. The voice activity detection module 1516 can determine when the audio input 1512 is silent (or the audio input is below a certain noise threshold) to indicate to the user that there is no usable input during that section of the audio input 1512.
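The noise-threshold behavior described for the voice activity detection module can be illustrated with a simple frame-energy sketch. This is a minimal stand-in, assuming normalized floating-point samples; the function name, frame length, and threshold value are assumptions.

```python
import math

def detect_silence(samples, frame_len=160, threshold=0.01):
    """Flag fixed-length frames whose RMS energy falls below a noise
    threshold; a True flag means no usable speech in that frame."""
    flags = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        flags.append(rms < threshold)
    return flags

# One silent frame followed by one frame of alternating +/-0.5 samples.
flags = detect_silence([0.0] * 160 + [0.5, -0.5] * 80)
```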
In some embodiments of the present disclosure, the system can include a language identification module 1518. The language identification module 1518 can determine what language is being spoken during the audio input 1512, or if multiple languages are being spoken during different segments of the audio input.
In some embodiments of the present disclosure, the system can further include a diarization module 1520. As used herein, “diarization” refers to methods for determining the identity of a speaker in a section of audio input 1512. For example, the diarization module can assign each section of the audio input 1512 a speaker. Therefore, the voice surveillance system 1510 can identify who is speaking at each section of the audio input 1512.
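The idea of assigning a speaker to each section of audio can be illustrated with a deliberately simplified one-dimensional clustering sketch. Production diarization works on multi-dimensional speaker embeddings with far more robust clustering; every name and the tolerance value below are assumptions.

```python
def diarize(segment_features, tolerance=0.5):
    """Greedy speaker assignment: each segment joins an existing speaker
    whose running mean feature is within tolerance, else it starts a
    new speaker.  Returns one integer speaker label per segment."""
    speakers = []  # running (mean, count) per speaker
    labels = []
    for f in segment_features:
        best, best_dist = None, tolerance
        for idx, (mean, count) in enumerate(speakers):
            d = abs(f - mean)
            if d < best_dist:
                best, best_dist = idx, d
        if best is None:
            speakers.append((f, 1))
            labels.append(len(speakers) - 1)
        else:
            mean, count = speakers[best]
            speakers[best] = ((mean * count + f) / (count + 1), count + 1)
            labels.append(best)
    return labels

# Two clusters of features -> two speakers, alternating in the middle.
labels = diarize([0.1, 0.12, 0.9, 0.11, 0.88])
```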
Still with reference to
As shown in
Still with reference to
The output text 1526 can be an input into a text analytics module 1532, which can include one or more scenarios 1534 configured to analyze the output text 1526. The scenarios 1534 can generate alerts based on the output text 1526 and the scenarios 1534 (e.g., the alert can be based on determining that a scenario match occurred), and the alerts 1536 can be displayed to the user using one or more user interfaces (e.g., the user interface 600 illustrated in
With reference to
With reference to
Still with reference to
Embodiments of the present disclosure also include user interfaces that can be configured to display information about alerts derived from combinations of text and/or audio data.
Still with reference to
Still with reference to
Additionally, embodiments of the present disclosure can be configured to operate in a web browser. The play head 630 and waveform 622 can be configured so that, during audio playback, the browser page scrolls with the play head 630. As the browser page scrolls, the transcribed text 621 can scroll with the play head so that the transcribed text 621 and play head 630 are visible as the text progresses.
Additional example user interfaces are shown in the accompanying figures.
The present disclosure contemplates that any portion of the text can be replaced with placeholder boxes or lines.
As shown, the computer 1800 includes a processing unit 1802, a system memory 1804, and a system bus 1806 that couples the memory 1804 to the processing unit 1802. The computer 1800 further includes a mass storage device 1812 for storing program modules. The program modules 1814 may include modules executable to perform one or more functions associated with embodiments illustrated in one or more of the accompanying figures.
The mass storage device 1812 is connected to the processing unit 1802 through a mass storage controller (not shown) connected to the bus 1806. The mass storage device 1812 and its associated computer storage media provide non-volatile storage for the computer 1800. By way of example, and not limitation, computer-readable storage media (also referred to herein as “computer-readable storage medium” or “computer-storage media” or “computer-storage medium”) may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, digital versatile disks (“DVD”), HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 1800. Computer-readable storage media as described herein does not include transitory signals.
According to various embodiments, the computer 1800 may operate in a networked environment using connections to other local or remote computers through a network 1818 via a network interface unit 1810 connected to the bus 1806. The network interface unit 1810 may facilitate connection of the computing device inputs and outputs to one or more suitable networks and/or connections such as a local area network (LAN), a wide area network (WAN), the Internet, a cellular network, a radio frequency network, a Bluetooth-enabled network, a Wi-Fi enabled network, a satellite-based network, or other wired and/or wireless networks for communication with external devices and/or systems.
The computer 1800 may also include an input/output controller 1808 for receiving and processing input from a number of input devices. Input devices may include, but are not limited to, keyboards, mice, styluses, touchscreens, microphones, audio capturing devices, or image/video capturing devices. An end user may utilize such input devices to interact with a user interface, for example a graphical user interface on one or more display devices (e.g., computer screens), for managing various functions performed by the computer 1800, and the input/output controller 1808 may be configured to manage output to one or more display devices for visually representing data.
The bus 1806 may enable the processing unit 1802 to read code and/or data to/from the mass storage device 1812 or other computer-storage media. The computer-storage media may represent apparatus in the form of storage elements that are implemented using any suitable technology, including but not limited to semiconductors, magnetic materials, optics, or the like. The program modules 1814 may include software instructions that, when loaded into the processing unit 1802 and executed, cause the computer 1800 to provide functions associated with embodiments illustrated in the accompanying figures.
The processing unit 1802 may be constructed from any number of transistors or other discrete circuit elements, which may individually or collectively assume any number of states. More specifically, the processing unit 1802 may operate as a finite-state machine, in response to executable instructions contained within the program modules 1814. These computer-executable instructions may transform the processing unit 1802 by specifying how the processing unit 1802 transitions between states, thereby transforming the transistors or other discrete hardware elements constituting the processing unit 1802. Encoding the program modules 1814 may also transform the physical structure of the computer-readable storage media. The specific transformation of physical structure may depend on various factors, in different implementations of this description. Examples of such factors may include, but are not limited to: the technology used to implement the computer-readable storage media, whether the computer-readable storage media are characterized as primary or secondary storage, and the like. For example, if the computer-readable storage media are implemented as semiconductor-based memory, the program modules 1814 may transform the physical state of the semiconductor memory, when the software is encoded therein. For example, the program modules 1814 may transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory.
As another example, the computer-storage media may be implemented using magnetic or optical technology. In such implementations, the program modules 1814 may transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations may include altering the magnetic characteristics of particular locations within given magnetic media. These transformations may also include altering the physical features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. Other transformations of physical media are possible without departing from the scope of the present disclosure.
The various example embodiments described above are provided by way of illustration only and should not be construed to limit the scope of the present disclosure. Certain patentable aspects of the present disclosure are presented in the appended claims. Those skilled in the art will readily recognize various modifications and changes that may be made to the present disclosure without following the example embodiments and applications illustrated and described herein, and without departing from the true spirit and scope of the present disclosure.
This application claims priority to, and the benefit of, U.S. Provisional Patent Application No. 63/239,222, filed Aug. 31, 2021, the entire contents of which is hereby incorporated by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2022/042142 | 8/31/2022 | WO |
Number | Date | Country
---|---|---
63239222 | Aug 2021 | US