1. Technical Field
The present disclosure relates to unified messaging, and more specifically to prioritizing transcriptions of messages, such as voicemail, in a unified messaging system.
2. Introduction
Unified messaging (UM) is an approach to integrating messages which are created, transmitted, and stored in different communication media into a single interface accessible from a wide array of devices. For example, a unified messaging interface can be accessible via a desktop or laptop computer, a web interface, a smart phone, a cellular phone, a landline phone, and so forth. A UM server can pass transcribable messages, such as voicemail, video mail, images, and other audiovisual messages, to a transcription server for transcription.
Message transcription is a resource-intensive process, requiring a significant amount of processing power and memory. Some UM subscribers access their messages almost immediately, while others wait an extended period of time before accessing them. One way to meet this demand is to devote additional resources to transcription, such as purchasing more or faster memory, processors, bandwidth, or even additional transcription servers. However, this approach may not be feasible due to high cost, and it wastes computing resources during low-demand periods.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.
A unified messaging server communicates with a transcription server or collection of transcription servers via a finite number of channels. One aspect of this approach involves the management of these channels. Disclosed are systems, methods, and non-transitory computer-readable storage media for managing transcription resources. A system configured to practice the method first retrieves a class of service for a subscriber recipient of a message deposited at a first server for transcription by a second server, determines a probability of near-term access of the message by the subscriber recipient, assigns a weight to the message based on the class of service and the probability of the near-term access, and transcribes, at the second server, the message based on the weight. In one example, the first server is a unified messaging server and the second server is a transcription server. Either of these servers can be a single computing device or a group of computing devices.
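By way of a non-limiting illustration only, and using hypothetical field names, class labels, and weighting formula not specified by the disclosure, the weighting step described above could be sketched in Python as follows, with higher-weight messages offered to the transcription server first:

```python
from dataclasses import dataclass

# Hypothetical base weights for the three classes of service described herein;
# the disclosure does not prescribe numeric values.
COS_BASE_WEIGHT = {"real_time": 100, "short_delay": 50, "no_preference": 10}

@dataclass
class Message:
    message_id: str
    subscriber_id: str
    class_of_service: str         # e.g. "real_time", "short_delay", "no_preference"
    near_term_probability: float  # 0.0 .. 1.0, estimated by the UM server

def assign_weight(msg: Message) -> float:
    """Combine class of service and probability of near-term access into a
    single scheduling weight (higher means transcribe sooner)."""
    base = COS_BASE_WEIGHT.get(msg.class_of_service, 10)
    # Scale the base weight by how likely the subscriber is to read the
    # transcription soon; the exact combination is an illustrative assumption.
    return base * (0.5 + msg.near_term_probability)

def prioritize(messages: list[Message]) -> list[Message]:
    """Order pending messages so the highest-weight messages are transcribed first."""
    return sorted(messages, key=assign_weight, reverse=True)
```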
A channel manager can manage the limited number of channels between the first server and the second server. The channel manager can reserve a first portion of the finite number of channels for real-time transcription and a second portion of the finite number of channels for non-real-time transcription. The channel manager and/or the UM server can overflow excess transcriptions between the first portion and the second portion as needed.
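As a non-limiting sketch of such a channel manager, the reservation of two portions and the overflow policy could look as follows; the even split and the overflow rules are illustrative assumptions, not requirements of the disclosure:

```python
class ChannelManager:
    """Manages a finite pool of transcription channels split into a
    real-time portion and a non-real-time portion (illustrative sketch)."""

    def __init__(self, total_channels: int, real_time_fraction: float = 0.5):
        self.real_time_capacity = int(total_channels * real_time_fraction)
        self.non_real_time_capacity = total_channels - self.real_time_capacity
        self.real_time_in_use = 0
        self.non_real_time_in_use = 0

    def acquire(self, real_time: bool):
        """Return which portion serviced the request, overflowing into the
        other portion when the preferred portion is exhausted."""
        if real_time:
            if self.real_time_in_use < self.real_time_capacity:
                self.real_time_in_use += 1
                return "real_time"
            if self.non_real_time_in_use < self.non_real_time_capacity:
                self.non_real_time_in_use += 1  # overflow into the non-real-time portion
                return "non_real_time"
        else:
            if self.non_real_time_in_use < self.non_real_time_capacity:
                self.non_real_time_in_use += 1
                return "non_real_time"
            if self.real_time_in_use < self.real_time_capacity:
                self.real_time_in_use += 1      # overflow into an idle real-time portion
                return "real_time"
        return None  # all channels busy; the caller should queue the message

    def release(self, portion: str) -> None:
        if portion == "real_time":
            self.real_time_in_use -= 1
        else:
            self.non_real_time_in_use -= 1
```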
A queue manager can place non-real-time class messages in one or more transcription queues based on a class of service, probability of near-term message access, and feedback from a transcription event handler. The queues can be located in the UM server, in the queue manager, in one or more transcription servers, and/or a location external to all of these. The queue manager can assign messages to an initial queue, reassign messages to another queue, and remove messages from a queue.
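One hypothetical way the queue manager's assignment, re-assignment, and removal operations could look, assuming three named queues, illustrative probability thresholds, and message objects carrying the class-of-service field sketched earlier:

```python
import collections

class QueueManager:
    """Places non-real-time messages into priority queues based on class of
    service, near-term access probability, and transcription-event feedback.
    Queue names and thresholds are assumptions for illustration."""

    def __init__(self):
        self.queues = {
            "high": collections.deque(),
            "medium": collections.deque(),
            "low": collections.deque(),
        }

    def assign(self, msg, near_term_probability: float) -> str:
        """Initial queue assignment for a newly deposited non-real-time message."""
        if msg.class_of_service == "short_delay" or near_term_probability > 0.7:
            name = "high"
        elif near_term_probability > 0.3:
            name = "medium"
        else:
            name = "low"
        self.queues[name].append(msg)
        return name

    def reassign(self, msg, new_queue: str) -> None:
        """Move a message, e.g. when the event handler reports that the
        subscriber just received a new-message notification."""
        for q in self.queues.values():
            if msg in q:
                q.remove(msg)
        self.queues[new_queue].append(msg)

    def remove(self, msg) -> None:
        """Drop a message from all queues, e.g. once a transcription is no longer needed."""
        for q in self.queues.values():
            if msg in q:
                q.remove(msg)
```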
After the transcription server transcribes the messages, the UM server and/or the transcription server can then relay the transcribed messages to an appropriate UM client or UM clients or store the transcribed messages for later retrieval. All or part of the transcription server and the UM server can operate within the same physical device or be separate, and/or located in different geographic locations.
In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.
The present disclosure addresses the need in the art for prioritizing transcriptions in a unified messaging system. A system, method, and non-transitory computer-readable media are disclosed which prioritize and/or manage transcription resources by transcribing messages in different channels based on a message and/or subscriber class of service and a probability of near-term access of the message by the subscriber. In one aspect, the principles disclosed herein function as part of a unified messaging system. A brief introductory description of a basic general-purpose system or computing device that can be employed to practice the concepts is provided below, followed by a more detailed description of transcription prioritization.
With reference to the exemplary computing device 100, the device includes a processing unit (CPU or processor) 120 and a system bus 110 that couples various system components, including system memory such as read only memory (ROM) 140 and random access memory (RAM) 150, to the processor 120.
The system bus 110 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. A basic input/output system (BIOS) stored in ROM 140 or the like may provide the basic routines that help to transfer information between elements within the computing device 100, such as during start-up. The computing device 100 further includes storage devices 160 such as a hard disk drive, a magnetic disk drive, an optical disk drive, tape drive, or the like. The storage device 160 can include software modules 162, 164, 166 for controlling the processor 120. Other hardware or software modules are contemplated. The storage device 160 is connected to the system bus 110 by a drive interface. The drives and the associated computer-readable storage media provide nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for the computing device 100. In one aspect, a hardware module that performs a particular function includes the software component stored in a non-transitory computer-readable medium in connection with the necessary hardware components, such as the processor 120, bus 110, display 170, and so forth, to carry out the function. The basic components are known to those of skill in the art, and appropriate variations are contemplated depending on the type of device, such as whether the device 100 is a small, handheld computing device, a desktop computer, or a computer server.
Although the exemplary embodiment described herein employs the hard disk 160, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, digital versatile disks, cartridges, random access memories (RAMs) 150, read only memory (ROM) 140, a cable or wireless signal containing a bit stream and the like, may also be used in the exemplary operating environment. Non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
To enable user interaction with the computing device 100, an input device 190 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, a keyboard, a mouse, motion input, and so forth. An output device 170 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the computing device 100. The communications interface 180 generally governs and manages the user input and system output. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
For clarity of explanation, the illustrative system embodiment is presented as including individual functional blocks, including functional blocks labeled as a “processor” or processor 120. The functions these blocks represent may be provided through the use of either shared or dedicated hardware, including, but not limited to, hardware capable of executing software and hardware, such as a processor 120, that is purpose-built to operate as an equivalent to software executing on a general purpose processor. For example, the functions of one or more processors presented herein may be provided by a single shared processor or by multiple processors.
The logical operations of the various embodiments are implemented as: (1) a sequence of computer-implemented steps, operations, or procedures running on a programmable circuit within a general-use computer; (2) a sequence of computer-implemented steps, operations, or procedures running on a specific-use programmable circuit; and/or (3) interconnected machine modules or program engines within the programmable circuits. The system 100 described above can practice all or part of the recited methods, can be a part of the recited systems, and/or can operate according to instructions in the recited non-transitory computer-readable storage media.
The disclosure now turns to an exemplary unified messaging architecture in which a UM server 202 receives transcribable messages from various message sources and communicates with a UM directory 214, one or more transcription servers 208, and UM clients 210a,b,c,d.
When the UM server 202 receives a message, the UM server 202 can identify a recipient of the message and retrieve a subscriber profile from a UM directory 214. The subscriber profile can provide information about a class of service for the subscriber. For example, one subscriber can pay a premium fee for real-time transcription service, another subscriber can pay a lower fee for a first non-real-time transcription service that indicates a preference for a short transcription time, although the short time is not guaranteed, and a third subscriber can use a second non-real-time transcription service for free that has no preference for a transcription delay. Messages in the real-time class can be streamed to the transcription server as they are received. While waiting to be transcribed, non-real-time messages can be deposited in a queue internal to the UM server 202, a queue internal to the transcription server(s) 208, and/or a queue external to both the UM server 202 and the transcription server(s) 208. In one case, multiple non-real-time queues can distinguish between different classes of non-real-time transcriptions.
The UM directory 214 can store additional classes of service beyond the exemplary classes of service discussed herein. In one aspect, a hybrid class of service provides a different class of service based on time, location, subscription, date, and other user parameters. For example, a hybrid class of service for an accountant may indicate a real-time class of service on weekdays which are not federal holidays between 8:00 a.m. and 6:30 p.m. and a no-preference class of service all other times. In another example, a salesman can indicate that all incoming messages from phone numbers or emails originating from a group of client companies are associated with a real-time transcription class of service and all other messages are associated with a class of service which prefers but does not require a short transcription time. Other variations and classes of service can be applied.
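The accountant example above can be expressed as a simple rule evaluated when a message is deposited. In this illustrative sketch the profile fields are assumptions, and the federal-holiday check is omitted for brevity:

```python
from datetime import datetime, time

# Illustrative hybrid profile for the accountant example: real-time service on
# weekdays between 8:00 a.m. and 6:30 p.m., no preference at all other times.
ACCOUNTANT_PROFILE = {
    "business_days": {0, 1, 2, 3, 4},   # Monday..Friday
    "business_start": time(8, 0),
    "business_end": time(18, 30),
    "business_class": "real_time",
    "default_class": "no_preference",
}

def resolve_class_of_service(profile: dict, when: datetime) -> str:
    """Pick the effective class of service for a message deposited at `when`."""
    in_window = (when.weekday() in profile["business_days"]
                 and profile["business_start"] <= when.time() <= profile["business_end"])
    return profile["business_class"] if in_window else profile["default_class"]

# Example: a voicemail deposited on a Tuesday at 9:15 a.m. resolves to real-time.
print(resolve_class_of_service(ACCOUNTANT_PROFILE, datetime(2024, 1, 9, 9, 15)))
```

A similar rule keyed on the message's originating phone number or email domain could express the salesman example.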
In one aspect, the UM directory 214 also provides information to the UM server 202 related to the probability of messages being accessed in the near term. If the user receives and accesses a new message notification while the message transcription is pending, the UM server 202 can increase the probability that the message will be accessed in the near term. If the user receives the new message notification but does not access the message, the UM server 202 can lower the probability or leave it unchanged. The probability of near-term access can be based on historical statistics for subscriber message/transcription access times, such as the average time between new message notification and transcription access. The average time can be per-user for a very granular average or can be averaged for similar customers. For example, the average time between new message notification and transcription access can be calculated for males from ages 18-25 in Florida, for Asian females in the Rocky Mountains, or for college students nationwide.
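One possible way, offered only as an illustration, to turn such historical statistics into a probability of near-term access is to measure what fraction of past notification-to-access delays fell within the near-term window:

```python
def near_term_access_probability(past_delays_seconds: list[float],
                                 near_term_window_seconds: float = 60.0) -> float:
    """Estimate the probability that this subscriber (or a cohort of similar
    subscribers) accesses a transcription within the near-term window, based
    on historical delays between new-message notification and access."""
    if not past_delays_seconds:
        return 0.5  # no history: fall back to an uninformative prior
    within = sum(1 for d in past_delays_seconds if d <= near_term_window_seconds)
    return within / len(past_delays_seconds)

# Example: a subscriber who usually opens transcriptions within a minute.
print(near_term_access_probability([12.0, 45.0, 300.0, 20.0, 50.0]))  # 0.8
```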
The probability of near-term access can further be based on subscriber presence information. Presence information can convey a user's available capacities to communicate. For example, presence can indicate whether a user is available or not, whether a user can accept a video feed or not, the user's physical location, which specific communication devices the user has available, and so forth. Presence can also indicate a user's willingness to accept communications. For example, a user presence can indicate “do not disturb”, “in a meeting”, or “available”. Presence information can be automatically generated or manually set by the user. In one configuration, the UM directory 214 receives subscriber presence information from UM clients 210a,b,c,d and bases the probability of messages being accessed in the near term on that presence information. Presence information can be gleaned from one source or from multiple sources, such as web browser logins, smartphone applications, GPS signals, calendar events, and so forth.
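Presence information could then adjust a historically derived probability, for example with per-state scaling factors; the factors below are purely illustrative assumptions:

```python
# Illustrative presence adjustment: scale a historically derived probability
# up or down based on the subscriber's current presence state, then clamp.
PRESENCE_FACTOR = {
    "available": 1.3,
    "in_a_meeting": 0.6,
    "do_not_disturb": 0.3,
    "offline": 0.1,
}

def adjust_for_presence(base_probability: float, presence: str) -> float:
    factor = PRESENCE_FACTOR.get(presence, 1.0)  # unknown state: leave unchanged
    return max(0.0, min(1.0, base_probability * factor))

print(adjust_for_presence(0.8, "do_not_disturb"))  # 0.24
```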
Other factors which can be relevant to the probability of near-term access can include message parameters, such as indicators of message urgency, and message meta-data, such as a message source or message title. The UM server 202 can also dedicate more resources to subscribers that have historically received higher confidence transcriptions.
The UM server 202 communicates, via a finite number of communication channels 212, with a transcription server 208 or servers which transcribe all or part of each message received from the message sources. The finite number of communication channels can be divided into multiple groups. For example, a first group of communication channels associated with a first group of transcription servers can handle real-time transcriptions and a second group of communication channels associated with a second group of transcription servers can handle non-real-time transcriptions. The transcription server 208 can transcribe messages using speech to text, OCR, pattern recognition, and/or any other suitable mechanism(s) to extract text from non-textually formatted messages. The transcription server 208 can also perform translation services to translate extracted text from one language to another, if needed. The UM server 202 can then offer an original-language transcription and a translated transcription to the UM client. The UM server 202 identifies a particular UM client 210a,b,c,d for each message and transmits information to the respective UM client regarding the message, including a transcription status. In the case of a voicemail, the UM server 202 can transmit information indicating a sender of the voicemail, a duration of the voicemail, a callback number, a time of the voicemail, and/or a “headline” of the voicemail transcription.
In one variation, the channel manager 304 is responsible for managing the finite number of channels 310, 314 between the UM server 202 and the transcription servers 312, 316. The number of channels can be limited by available CPU power, network bandwidth, memory, storage, and/or other individual or shared computing resources.
For messages which do not have a real-time class of service, the UM server 202 places the messages to be transcribed in a queue via a queue manager 302. In one embodiment, multiple queues 306, 308 receive and hold messages. In this example, a first non-real-time priority queue 306 and a second non-real-time priority queue 308 receive messages from the queue manager 302. The queues can be incorporated in the UM server 202, a transcription server 316, and/or can be separate from either. In one case, multiple associated queues serve as a single queue. For example, a transcription server 316 can include a short queue of 10 messages, while the UM server 202 includes an associated overflow queue for storing messages beyond the 10 in the short queue. In one aspect, the queues are located in between the UM server 202 and the transcription servers 312, 316.
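The short-queue-plus-overflow-queue example above can be modeled as a single logical queue. In this sketch the 10-message limit comes from the example, while the refill behavior and placement of the two underlying queues are assumptions:

```python
import collections

class OverflowQueue:
    """A short queue on the transcription server backed by an overflow queue
    on the UM server, presented to callers as one logical queue."""

    def __init__(self, short_capacity: int = 10):
        self.short_capacity = short_capacity
        self.short = collections.deque()     # e.g. held on the transcription server
        self.overflow = collections.deque()  # e.g. held on the UM server

    def enqueue(self, msg) -> None:
        if len(self.short) < self.short_capacity:
            self.short.append(msg)
        else:
            self.overflow.append(msg)

    def dequeue(self):
        if not self.short:
            return None
        msg = self.short.popleft()
        if self.overflow:
            # Refill the short queue so the transcription server stays busy.
            self.short.append(self.overflow.popleft())
        return msg

    def __len__(self) -> int:
        return len(self.short) + len(self.overflow)
```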
Non-real-time transcription demand may exceed the capacity of the non-real-time transcription servers 316 while the real-time transcription servers 312 are idle or otherwise under-utilized. In this situation, the UM server 202 can overflow transcriptions from the non-real-time transcription servers 316 to the real-time transcription servers 312. Conversely, when the real-time transcription server 312 is at capacity and cannot transcribe incoming real-time class messages in real time, the UM server 202 can overflow transcriptions from the real-time transcription server 312 to the non-real-time transcription server 316 by preempting non-real-time transcriptions.
The queue manager 302 can perform initial queue assignment, queue re-assignment, message removal, and so forth. The queue manager 302 can implement a queuing scheme in response to congestion of real-time and/or non-real-time channels 310, 314. The queuing scheme can utilize a variety of different queuing algorithms, such as priority queuing and/or class-based weighted fair queuing. As an example, the non-real-time channel 314 can service a high priority queue, a medium priority queue, and a low priority queue. The queue manager 302 can use the subscriber transcription class of service, historical statistics, message parameters, and/or input from a transcription event handler to determine queue assignment.
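By way of illustration of class-based weighted fair queuing over the high, medium, and low priority queues, the dequeue step can draw from the queues in proportion to per-class weights; the weights below are assumptions, not values specified by the disclosure:

```python
import collections
import itertools

# Illustrative per-class weights: out of every eight dequeues, roughly five
# come from the high-priority queue, two from medium, and one from low
# (when all queues are non-empty).
CLASS_WEIGHTS = {"high": 5, "medium": 2, "low": 1}

class WeightedFairScheduler:
    def __init__(self, queues: dict[str, collections.deque]):
        self.queues = queues
        # Repeating service pattern, e.g. H H H H H M M L.
        pattern = [name for name, weight in CLASS_WEIGHTS.items() for _ in range(weight)]
        self._cycle = itertools.cycle(pattern)

    def next_message(self):
        """Return the next message to transcribe, skipping empty queues so an
        idle higher-priority class does not starve the lower classes."""
        for _ in range(sum(CLASS_WEIGHTS.values())):  # one full pass of the pattern
            name = next(self._cycle)
            if self.queues[name]:
                return self.queues[name].popleft()
        return None  # every queue is empty
```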
Typically the UM server 202 performs transcription prioritization as described herein, but all or a portion of the steps can be performed by other components as well. For example, a separate queuing module, not shown, can queue non-real-time messages and pass real-time messages from the UM server 202 immediately to a real-time transcription server 312.
Having disclosed some basic system components, the disclosure now turns to the exemplary method embodiment for managing transcription resources. For the sake of clarity, the method is discussed in terms of an exemplary system 100, as described above, configured to practice the method. The system 100 first retrieves a class of service for a subscriber recipient of a message deposited at a first server for transcription by a second server (402).
The system 100 determines a probability of near-term access of the message by the subscriber recipient (404). The system 100 can determine the probability of near-term access of the message by evaluating multiple data points, such as subscriber presence, receipt of a new message notification by the subscriber, historical statistics of subscriber message access times, message urgency, message parameters, and other message metadata. The system 100 can determine multiple probabilities for different values of “near-term”. For example, near-term can mean within 5 seconds, 30 seconds, 1 minute, or 10 minutes. “Near-term” can be a preset global duration for all users, or different based on time of day, usage history, and/or user preferences.
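The notion of multiple values of "near-term" can be captured by estimating one probability per horizon, reusing the historical-delay approach sketched earlier; the horizon values come from the example above, and the fallback prior is an assumption:

```python
NEAR_TERM_HORIZONS_SECONDS = (5, 30, 60, 600)  # 5 s, 30 s, 1 min, 10 min

def multi_horizon_probabilities(past_delays_seconds: list[float]) -> dict[int, float]:
    """Estimate, for each near-term horizon, the probability that the
    subscriber accesses the transcription within that horizon."""
    if not past_delays_seconds:
        return {h: 0.5 for h in NEAR_TERM_HORIZONS_SECONDS}
    n = len(past_delays_seconds)
    return {h: sum(1 for d in past_delays_seconds if d <= h) / n
            for h in NEAR_TERM_HORIZONS_SECONDS}

# Example: probabilities rise as the horizon lengthens.
print(multi_horizon_probabilities([3.0, 25.0, 45.0, 120.0, 900.0]))
# {5: 0.2, 30: 0.4, 60: 0.6, 600: 0.8}
```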
The system 100 assigns a weight to the message based on the class of service and the probability of the near-term access (406) and transcribes, at the second server, the message based on the weight (408). In one variation, the assigned weight can be used to govern where to insert a non-real-time message into a transcription queue. The queue can be a priority queue where higher priority messages are transcribed before lower priority messages, a first-in, first-out queue, or other type of queue.
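A minimal sketch of using the assigned weight to position a non-real-time message in a priority queue, here built on Python's heapq (a min-heap, so the weight is negated; breaking ties by arrival order is an assumption):

```python
import heapq
import itertools

class TranscriptionPriorityQueue:
    """Priority queue in which higher-weight messages are transcribed first."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # arrival order breaks ties between equal weights

    def insert(self, msg, weight: float) -> None:
        # heapq is a min-heap, so the weight is negated to pop the largest first.
        heapq.heappush(self._heap, (-weight, next(self._counter), msg))

    def pop_highest(self):
        """Remove and return the pending message with the highest weight."""
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]
```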
In almost every instance, the number of channels between the UM server and the transcription server is finite and typically depends on the computing capacity of the transcription server. The UM server can access the resources of the transcription server by issuing an application programming interface (API) call to the transcription server. In this way, the UM server treats the transcription server as a “black box” and may have no direct access to the resources of the transcription server. A channel manager can manage the finite number of channels. The channel manager can reserve different portions of the channels for various purposes. For example, one reserved portion of the channels can service messages in a particular class of service. The channel manager can redirect messages which overflow from one channel to another channel, or reallocate resources from an underutilized channel to a channel that needs more resources.
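Because the UM server may treat the transcription server as a black box reachable only through an API over a finite number of channels, one illustrative model is a client that bounds concurrent API calls with a semaphore sized to the channel count; the API call itself is a hypothetical placeholder, since the disclosure does not specify the interface:

```python
import threading

class TranscriptionClient:
    """Bounds concurrent API calls to the transcription server to the finite
    number of channels, while treating the server itself as a black box."""

    def __init__(self, channel_count: int):
        self._channels = threading.BoundedSemaphore(channel_count)

    def transcribe(self, message_audio: bytes) -> str:
        with self._channels:  # blocks until one of the finite channels is free
            return self._call_transcription_api(message_audio)

    def _call_transcription_api(self, message_audio: bytes) -> str:
        # Hypothetical placeholder: the disclosure does not specify the API
        # exposed by the transcription server.
        raise NotImplementedError("replace with the transcription server's API call")
```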
Embodiments within the scope of the present disclosure may also include tangible and/or non-transitory computer-readable storage media for carrying or having computer-executable instructions or data structures stored thereon. Such non-transitory computer-readable storage media can be any available media that can be accessed by a general purpose or special purpose computer, including the functional design of any special purpose processor as discussed above. By way of example, and not limitation, such non-transitory computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions, data structures, or processor chip design. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or combination thereof) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of the computer-readable media.
Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.
Those of skill in the art will appreciate that other embodiments of the disclosure may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
The various embodiments described above are provided by way of illustration only and should not be construed to limit the scope of the disclosure. For example, the principles herein can enhance existing unified messaging platforms by reducing the cost of hardware required for message transcription, improving customer experience, providing enhanced service for premium customers, and prioritizing messages that are more likely to be accessed in the near-term. Further, this approach more gracefully addresses periods of overload and/or transcription service disruption. Those skilled in the art will readily recognize various modifications and changes that may be made to the principles described herein without following the example embodiments and applications illustrated and described herein, and without departing from the spirit and scope of the disclosure.