The present inventive subject matter relates generally to computer science, data science, data analytics, computer software, data structures, and data and algorithmic analysis using machine learning techniques. More specifically, techniques for omnichannel data analysis are described.
As online purchasing of goods and services continues to increase, numerous problems arise with conventional solutions such as platforms that manage activities such as searching, selling, purchasing, returning, and getting customer support. Data transactions are occurring at ever-increasing rates and volumes, and analyzing these tremendous amounts of data with conventional solutions is becoming likewise more difficult. For example, conventional customer support solutions are often challenged technically due to the numerous and varied communication channels that may be used by a consumer to communicate with a retailer, good or service provider, business, enterprise, or organization (e.g., for-profit and not-for-profit). Various social media channels and networks, websites, text messaging, telephone, smartphone, cellular phone, and other communication media and channels typically used by conventional customer support solutions providers provide multiple and useful ways to connect with consumers, but the data generated from transactional events (e.g., phone calls, electronic mail (hereafter “email”), text messages, chat, SMS, IRC, video calls, augmented reality (AR) and virtual reality (VR)-based communication applications, customer relationship management (CRM) applications, surveys, product reviews, online forums, among others, without restriction or limitation) are neither consistent nor unified in how the data resulting from these transactions are handled. Further, conventional solutions do not provide or apply useful analytical technologies to this data that can help businesses, enterprises, organizations, and the key decisionmakers and stakeholders within them make efficient, accurate, timely, tactical, or strategic decisions to improve transactions and financial performance, which can be dramatically affected by how well such transactions with consumers are handled.
Given the sheer volume and often incongruous data types, schemas, formats, and languages involved in these transactions, conventional solutions are unable to improve customer experience-related software (e.g., applications and platforms) despite the dramatic rise in online transactional events and the increasing spread of computing resources and power (e.g., desktop, laptop, server-based, mobile, and the like). Conventional solutions for providing customer-oriented support, regardless of whether for consumer, commercial, or enterprise-related goods and services, can significantly impact not only the experience of a business's customers, but also fundamental business decisions, which are often inefficient or inaccurate due to either a lack of accurately analyzed data or inaccurate presentations of business data, either of which can lead to inaccurate or erroneous business decisions.
Using conventional solutions to manage a customer environment or experience is not only fraught with inaccuracies and inefficiencies but, due to the increasingly massive amounts of data being analyzed, also makes it challenging to reach accurate and efficient decisions that can substantially affect a business, its products and/or services, and its customers. For example, many key decisionmakers within an organization, such as a Chief Executive Officer, Chief Financial Officer, vice presidents, directors, product managers, and many other key decisionmakers at many levels throughout an organization, often see data pertinent to their immediate level, but not to an entire organization. Additionally, key decisionmakers may only see data relevant at a micro or macro level, which lessens the efficacy and applicability of decisions made. Conversely, conventional solutions do not provide granular, detailed data and information that can be analyzed down to individual customer, account, or transaction levels. Thus, while decisionmakers are making decisions that can significantly impact an organization, the data and information underlying these decisions are limited in transparency and applicability to all levels of a business, enterprise, or organization.
Thus, what is needed is a solution for analyzing data from data transactions over various communication channels without the limitations of conventional techniques.
Various embodiments or examples (“examples”) are disclosed in the following detailed description and the accompanying drawings.
Various embodiments or examples may be implemented in numerous ways, including as a system, a process, an apparatus, a user interface, or a series of program code or instructions on a computer readable medium such as a storage medium or a computer network including program instructions that are sent over optical, electronic, electrical, chemical, wired, or wireless communication links. In general, individual operations or sub-operations of disclosed processes may be performed in an arbitrary order, unless otherwise provided in the claims.
A detailed description of one or more examples is provided below along with accompanying figures. This detailed description is provided in connection with various examples, but is not limited to any particular example. The scope is limited only by the claims, and numerous alternatives, modifications, and equivalents are encompassed. Numerous specific details are set forth in the following description in order to provide a thorough understanding. These details are provided for the purpose of illustrating various examples, and the described techniques may be practiced according to the claims without some or all of these specific details. For clarity, technical material that is known in the technical fields related to the examples has not been described in detail to avoid unnecessarily obscuring the description or providing unnecessary details that may already be known to those of ordinary skill in the art.
As used herein, “system” may refer to or include the description of a computer, network, or distributed computing system, topology, or architecture using various computing resources that are configured to provide computing features, functions, processes, elements, components, or parts, without any particular limitation as to the type, make, manufacturer, developer, provider, configuration, programming or formatting language (e.g., JAVA®, JAVASCRIPT®, and others, without limitation or restriction), service, class, resource, specification, protocol, or other computing or network attributes. As used herein, “software” or “application” may also be used interchangeably or synonymously with, or refer to, a computer program, software, program, firmware, or any other term that may be used to describe, reference, or refer to a logical set of instructions that, when executed, performs a function or set of functions within a computing system or machine, regardless of whether physical, logical, or virtual and without restriction or limitation to any particular implementation, design, configuration, instance, or state. Further, “platform” may refer to any type of computer hardware (hereafter “hardware”) and/or software using, hosted on, served from, or otherwise implemented on one or more local, remote, and/or distributed data networks such as the Internet, one or more computing clouds (hereafter “cloud”), or others.
Data networks (including computing clouds) may be implemented using various types of standalone, aggregated, or logically-grouped computing resources (e.g., computers, clients, servers, tablets, notebooks, smart phones, cell phones, mobile computing platforms or tablets, and the like) to provide a hosted environment for an application, software platform, operating system, software-as-a-service (i.e., “SaaS”), platform-as-a-service, hosted, or other computing/programming/formatting environments, such as those described herein, without restriction or limitation to any particular implementation, design, configuration, instance, version, build, or state. Distributed resources such as cloud computing networks (also referred to interchangeably as “computing clouds,” “storage clouds,” “cloud networks,” or, simply, “clouds,” without restriction or limitation to any particular implementation, design, configuration, instance, version, build, or state) may be used for processing and/or storage of varying quantities, types, structures, and formats of data, without restriction or limitation to any particular implementation, design, or configuration. In the drawings provided herewith, the relative sizes and shapes do not convey any limitations, restrictions, requirements, or dimensional constraints unless otherwise specified in the description and are provided for purposes of illustration only to display processes, data, data flow charts, application or program architectures, or other symbols, as described in this specification.
As described herein, structured and unstructured data may be stored in various types of data structures including, but not limited to databases, repositories, warehouses, datalakes, lakehouses, data stores, and other data structures and facilities configured to manage, store, retrieve, process calls for/to, copy, modify, or delete data or sets of data (i.e., “datasets”) in various computer programming languages and formats (e.g., structured, unstructured, binary, and others) in accordance with various types of structured and unstructured database schemas and languages such as SQL®, MySQL®, NoSQL™, DynamoDB™, R, or others, such as those developed by proprietary and open source providers like Amazon® Web Services, Inc. of Seattle, Wash., Microsoft®, Oracle®, Google®, Salesforce.com, Inc., and others, without limitation or restriction to any particular schema, instance, or implementation. Further, references to databases, data structures, or any type of data storage facility may include any embodiment as a local, remote, distributed, networked, cloud-based, or combined implementation thereof, without limitation or restriction. In some examples, data may be formatted and transmitted (i.e., transferred over one or more data communication protocols) between computing resources using various types of wired and wireless data communication and transfer protocols such as Hypertext Transfer Protocol (HTTP), Transmission Control Protocol (TCP)/Internet Protocol (IP), Internet Relay Chat (IRC), SMS, text messaging, instant messaging (IM), WiFi, WiMax, or others, without limitation. Further, as described herein, disclosed processes implemented as software may be programmed using JAVA®, JAVASCRIPT®, Scala, Perl, Python™, XML, HTML, and other data formats and programming languages, without limitation. 
As used herein, references to layers of an application architecture (e.g., application layer or data layer) may refer to a stacked layer application architecture designed and configured using models such as the Open Systems Interconnect (OSI) model or others.
The described techniques may be implemented as a software-based application, platform, or schema. In some examples, machine learning, deep learning, neural networks, and other types of computing, processing, and analytical algorithms such as those used in various computer science-related fields may be used to implement techniques related to “artificial intelligence” (i.e., “AI”). While there is no particular dependency to a given type of algorithm (e.g., machine learning, deep learning, neural networks, intelligent agents, or any other type of algorithm that, through the use of computing machines, attempts to simulate or mimic certain attributes of natural intelligence such as cognitive problem solving, without limitation or restriction), there is likewise no requirement that only a single instance or type of a given algorithm be used in the descriptions that follow. Algorithms may be untrained or trained using model data, external data, internal data, or other sources of data that may be used to improve the accuracy of calculations performed to generate output data for use in applications, systems, or platforms in data communication with software module or engine-based implementations. The described techniques within this Detailed Description are not limited in implementation, design, function, operation, structure, configuration, specification, or other aspects and may be varied without limitation. The size, shape, quantity, configuration, function, or structure of the elements shown in the various drawings may be varied and are not limited to any specific implementations shown, which are provided for exemplary purposes of illustration and are not intended to be limiting.
In some examples, analyzed and transformed data from analytics/AI pipeline module 120 may then be transferred to user interface module 122. As shown, user interface module 122 may be configured to use analyzed data (not shown) from analytics/AI pipeline module 120 to generate visualizations (i.e., render displays, screens, and other graphical, visual, or multimedia information for consumption by client devices in data communication with omnichannel data analysis engine 102). In other examples, user interface module 122 may also be configured to perform other or extended analyses on transformed and analyzed data from data pipeline module 110. For example, user interface module 122 may perform operations on transformed, analyzed data from data pipeline module 110 to identify “insights” (as used herein, “insights” may refer to any type of resultant data generated by evaluating, algorithmically or otherwise, transformed and analyzed data input to data pipeline module 110), which may be subsequently transferred to insight distribution module 124. In some examples, insight distribution module 124 may be configured to receive resultant data from user interface module 122 to perform various types of formatting, programming, handling, or other operations in order to provide output from omnichannel data analysis engine 102 to, for example, client computing devices (e.g., desktop, laptop, cellular, server, or mobile computing devices, others, without limitation or restriction) on which insights may be reviewed, other data input to omnichannel data analysis engine 102, or other operations, without limitation or restriction. In other examples, omnichannel data analysis system 100 and the elements shown and described may be varied in structure, function, design, layout, order, quantity, size, shape, configuration, and implementation and are not limited to those presented, which are provided for purposes of exemplary description.
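By way of non-limiting illustration, the data flow described above (analytics/AI pipeline to user interface to insight distribution) may be sketched in Python. The function names, record fields, and transformations below are hypothetical stand-ins chosen for exemplary purposes only and do not represent an actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AnalyzedData:
    records: list

@dataclass
class Insight:
    summary: str
    score: float

def analytics_ai_pipeline(raw_records: list) -> AnalyzedData:
    """Transform raw interaction records (cf. analytics/AI pipeline module 120)."""
    return AnalyzedData(records=[r.strip().lower() for r in raw_records])

def user_interface_module(data: AnalyzedData) -> list:
    """Derive simple 'insights' from analyzed data (cf. user interface module 122)."""
    return [Insight(summary=r, score=float(len(r))) for r in data.records]

def insight_distribution_module(insights: list) -> list:
    """Format insights for delivery to client devices (cf. insight distribution module 124)."""
    return [{"summary": i.summary, "score": i.score} for i in insights]

# Data flows through the three stages in order, as described above.
analyzed = analytics_ai_pipeline(["  Billing Issue ", "Login Error"])
payload = insight_distribution_module(user_interface_module(analyzed))
```

In this sketch, each stage consumes the prior stage's output, mirroring the pipeline ordering described above without implying any particular transformation logic.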
In some examples, source collector modules 132-136 may be configured with various components implemented in software, as computer programs or applications, as local, remote, or distributed elements of a platform configured to perform omnichannel data analysis as described herein. For further explanation, “omnichannel” may refer to one or more communication channels of different data types, media, forms, formats, or other characteristics, but which may be analyzed together in a unified analytical environment such as those described in this Detailed Description. As used herein, source collectors 132-136 may be configured to receive media data containers (of any type, form, format, or structure), which then results in using data container extractors (not shown) to parse and identify media interactions, which may be in parts. In some examples, “media interactions” may refer to interactions that occur between a client computing or data device (e.g., analog, digital, binary) and omnichannel data analysis engine 102. Once media interactions have been parsed and identified from received media containers, business rules may be applied to transform the media interactions into sub-atomic interactions that may be output from source collectors 132-136 to interaction unification module 138, where they may be converged. In some examples, sub-atomic interactions may be combined into atomic interactions (e.g., an entire conversation or data-generating event between a client computing device and a customer contact center or customer experience environment in data communication with omnichannel data analysis engine 102 (
In some examples, interaction unification module 138 may be configured to unify or converge sub-atomic interaction data sets into a “full interaction” between a client computing device and a customer contact center or customer experience environment in data communication with omnichannel data analysis engine 102 (
Referring back to
In some examples, data transferred from source collectors 132-136 may be input to interaction unification module 138, which may also be implemented with various components and elements that are configured to perform functions and data operations such as source and core field definition (i.e., user-defined definition and configuration of data field attributes, as opposed to “attributes” of received interaction data sets that are system-determined by initial parsing performed by source collectors 132-136 (
As described herein, once source field and core field definitions are identified and applied to ingested data, regardless of the type of channel from which ingested data is received, unification mapping may be performed by mapping data based on identified attributes (i.e., of the ingested data) and data attributes (i.e., further attributes yielded by processing of source collectors 132-136), into data cohorts or sets based on common characteristics. In some examples, UI displays for users of omnichannel data analysis engine 102 may also be automatically configured by analyzing the results of unification mapping of data. Further, once data is mapped into a known topology, structure, or schema, unified data may be configured for queries using one or more types of query languages to run against stored data in one or more of transcript data store 114, application datastore 116, and/or reporting datastore 118, as described below in greater detail.
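As a non-limiting sketch of the unification mapping described above, channel-specific source fields may be mapped onto a common set of core fields and the unified records then grouped into data cohorts by a common characteristic. The channel names, field mappings, and cohort key shown here are illustrative assumptions, not an actual schema:

```python
from collections import defaultdict

# Hypothetical per-channel source-field -> core-field definitions
# (user-configurable in the description above; names are assumptions).
FIELD_MAPS = {
    "email": {"from_addr": "customer_id", "subject": "topic", "body": "text"},
    "chat":  {"user": "customer_id", "thread": "topic", "message": "text"},
}

def unify(record: dict, channel: str) -> dict:
    """Map a channel-specific record onto the core schema."""
    mapping = FIELD_MAPS[channel]
    unified = {core: record[src] for src, core in mapping.items()}
    unified["channel"] = channel
    return unified

def build_cohorts(records: list) -> dict:
    """Group unified records into data cohorts by a common characteristic."""
    cohorts = defaultdict(list)
    for r in records:
        cohorts[r["topic"]].append(r)
    return cohorts

unified = [
    unify({"from_addr": "c1", "subject": "billing", "body": "refund?"}, "email"),
    unify({"user": "c2", "thread": "billing", "message": "overcharged"}, "chat"),
]
cohorts = build_cohorts(unified)
```

Once records share the core schema in this manner, a single query can be run against data that originated from heterogeneous channels, consistent with the query capability described above.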
In some examples, after data is processed by interaction unification module 138, interaction data enrichment module 140 may be configured to perform various functions such as, but not limited to, automated speech recognition (hereafter “ASR”), AI language classification (i.e., classifying data by identifying languages using analytical algorithms such as AI-related techniques like machine learning algorithms, deep learning algorithms, neural networks, and the like), rule-based topic classification (i.e., using templates and unique configurations to identify and tag phrases in an interaction to a topic of interaction classification), AI language clustering (i.e., using the techniques described above, clustering data together based on like terms in close proximity to each other), sentiment classification (i.e., using deep learning algorithms or rules-based classifications, positive, negative, or neutral opinions can be parsed, analyzed, and identified from interaction data sets), AI transcript summarization (e.g., identify important sentences in a transcript of an interaction using machine learning, deep learning, neural network, or AI-related algorithms to cluster and score sentences), payment card industry (PCI) data redaction (i.e., removing PCI-categorized data to prevent inadvertent exposure of sensitive financial information), and others. In other examples, ingestion system 130 and the elements shown and described may be varied in structure, function, design, layout, order, quantity, size, shape, configuration, and implementation and are not limited to those presented, which are provided for purposes of exemplary description.
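As one non-limiting illustration of the PCI data-redaction function described above, candidate payment-card numbers may be masked in a transcript before storage. The pattern below is an assumption for exemplary purposes; a production redactor might additionally validate candidates (e.g., with a Luhn check) and handle further formats:

```python
import re

# Match 13-16 digit sequences optionally separated by spaces or hyphens
# (a simplistic, illustrative approximation of card-number formats).
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact_pci(transcript: str) -> str:
    """Replace candidate payment-card numbers with a redaction token."""
    return CARD_PATTERN.sub("[REDACTED]", transcript)

safe = redact_pci("My card is 4111 1111 1111 1111, please refund.")
```

Redacting in this way before transcripts reach downstream datastores prevents inadvertent exposure of sensitive financial information, as noted above.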
As shown, analytics/AI creation module 154 may be implemented, configured, and designed to perform functions and data operations such as extracting data cohorts from analytics/AI datastore 158, applying data models that may be retrieved and applied from analytics/AI datastore 158, and generating output datasets from analytics/AI pipeline module 120. Output datasets from analytics/AI pipeline module 120 may be, in some examples, output for storage in analytics/AI datastore 158 or, in other examples, output to insight distribution module 124. Here, analytics/AI creation module 154 may be configured to retrieve initial data to be used to generate insights by analytics/AI pipeline module 120. For example, data cohorts stored in analytics/AI datastore 158 may be retrieved by analytics/AI creation module 154. Further, a data model extracted from analytics/AI datastore 158 may be used to process extracted data cohorts prior to being output to export API 159 for subsequent rendering or display on a user interface, as described in further detail below.
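The extract-cohort, apply-model, generate-output flow described above may be sketched, in a non-limiting manner, as follows; the datastore contents and "model" here are hypothetical stand-ins for exemplary purposes:

```python
# A toy in-memory stand-in for analytics/AI datastore 158: stored data
# cohorts plus retrievable "models" (here, simple callables).
analytics_datastore = {
    "cohorts": {
        "billing": [
            {"text": "refund please", "duration": 120},
            {"text": "overcharged twice", "duration": 300},
        ],
    },
    "models": {
        "avg_duration": lambda rows: sum(r["duration"] for r in rows) / len(rows),
    },
}

def create_output_dataset(cohort_name: str, model_name: str) -> dict:
    """Extract a stored cohort, apply a retrieved model, emit an output dataset."""
    rows = analytics_datastore["cohorts"][cohort_name]   # extract data cohort
    model = analytics_datastore["models"][model_name]    # retrieve data model
    return {"cohort": cohort_name, "metric": model_name, "value": model(rows)}

dataset = create_output_dataset("billing", "avg_duration")
```

The resulting dataset could then be stored back or handed to an export/rendering step, mirroring the two output paths described above.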
In other examples, insight analysis module 156 may be configured, designed, and implemented to retrieve (e.g., query) data from analytics/AI datastore 158 to perform functions and data operations to generate insights such as trend detection, metric calculation, anomaly detection (e.g., statistical outlier analysis), insight valuation, pattern alerting, and alerting. As shown, trend detection may be performed by insight analysis module 156 to identify statistical, heuristic, semantic, or any other type of trend in data analyzed. For example, a given product or service problem may be identified as a pattern under a given circumstance or set of circumstances as determined by analyzing a data cohort containing sub-atomic interactions (e.g., an individual customer interaction over social media) associated with a particular atomic interaction (e.g., a group of customer interactions occurring over the same or similar social media channels). Metric calculation, in some examples, may be performed by insight analysis module 156 to measure and identify individual metrics found in particular data cohorts or sets. As used throughout this detailed description, a data cohort may be a data set grouped, logically or otherwise, together based on a common characteristic, attribute, or data attribute. Metric calculation may be performed on any type of metric, without limitation or restriction and may be manually, automatically, or semi-automatically determined by omnichannel data analysis engine 102 (
Referring back to
Similarly, discovery module 164 may be configured to receive (i.e., receive input or responsive data to a query) data from application datastore 116 such as data analyzed by other modules of data pipeline module 110 (
Mobile insights module 174, in some examples, may be configured to receive data from application datastore 116 (or other elements of omnichannel data analysis engine 102 (
In some examples, once generated, an atomic interaction may have various data attributes aligned to ensure consistency and minimize errors when processing and analyzing is later performed by other elements of omnichannel data analysis engine 102 (
In other examples, omnichannel data analysis system 100 and the elements shown and described may be varied in structure, function, design, layout, order, quantity, size, shape, configuration, and implementation and are not limited to those presented, which are provided for purposes of exemplary description.
According to some examples, computing system 500 performs specific operations by processor 504 executing one or more sequences of one or more instructions stored in system memory 506. Such instructions may be read into system memory 506 from another computer readable medium, such as static storage device 508 or disk drive 510. In some examples, hard-wired circuitry may be used in place of or in combination with software instructions for implementation.
The term “computer readable medium” refers to any tangible medium that participates in providing instructions to processor 504 for execution. Such a medium may take many forms, including, but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as disk drive 510. Volatile media includes dynamic memory, such as system memory 506.
Common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
Instructions may further be transmitted or received using a transmission medium. The term “transmission medium” may include any tangible or intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Transmission media include coaxial cables, copper wire, and fiber optics, including wires that comprise bus 502 for transmitting a computer data signal.
In some examples, execution of the sequences of instructions may be performed by a single computer system 500. According to some examples, two or more computing systems 500 coupled by communication link 520 (e.g., LAN, PSTN, or wireless network) may perform the sequence of instructions in coordination with one another. Computing system 500 may transmit and receive messages, data, and instructions, including program (i.e., application) code, through communication link 520 and communication interface 512. Received program code may be executed by processor 504 as it is received, and/or stored in disk drive 510, or other non-volatile storage for later execution. In other examples, the above-described techniques may be implemented differently in design, function, and/or structure and are not intended to be limited to the examples described and/or shown in the drawings.
Although the foregoing examples have been described in some detail for purposes of clarity of understanding, the above-described techniques are not limited to the details provided. There are many alternative ways of implementing the above-described techniques. The disclosed examples are illustrative and not restrictive.