Methods and systems to connect consumers to information

Information

  • Patent Grant
  • Patent Number
    7,453,998
  • Date Filed
    Monday, March 26, 2007
  • Date Issued
    Tuesday, November 18, 2008
Abstract
The present invention provides a method and apparatus for specifying and obtaining services through audio commands, resulting in a live conversation between a user and a selected service provider using an audio-transmission medium (the telephone). A service seeker locates a service provider by entering a keypad code corresponding to a field of service or by speaking the name of a profession, which is recognized by the system. The seeker can then specify, via voice or keypad entry, a price range, quality rating, language, and keyword descriptors of the service provider, such as a service provider code number. In response, the system offers currently available service providers. Once an available service provider is selected, the system connects the service seeker with the service provider for a live conversation. The system bills the seeker for the time spent conversing with the service provider and compensates the service provider accordingly.
Description
FIELD OF THE INVENTION

The invention relates generally to providing users with service providers in a field of service desired by the user. In particular, the invention relates to a method and apparatus for specifying and obtaining services, via an audio portal, resulting in a live conversation between a user and a selected service provider.


BACKGROUND OF THE INVENTION

Consumers interested in acquiring services must first identify a service provider who is capable of providing the required services. At present, this usually means perusing a telephone directory, which can become frustrating and time-consuming if the service providers telephoned are not immediately available. In addition, a simple telephone call does not enable the service provider to charge a fee according to the time spent with his/her customers.


Systems now exist that enable providers of services to charge fees for the time spent delivering the service. 1-900 phone numbers will charge the seeker of services according to the time spent receiving the service and will transfer this payment, or a portion of it, to the provider.


Each 1-900 number, however, has a very narrow scope (“Hear your Horoscope,” for instance). If a seeker would like to hear an entirely different service (“Your Local Weather,” for instance), he/she would have to dial a completely different 1-900 number. Similarly, each 1-900 number is quite rigid in the price, quality, and specificity of its service.


However, systems now exist that enable seekers to locate service providers according to a wide range of price, quality, and specificity of service (U.S. application Ser. No. 09/414,710). Such systems also make it possible for the service provider and buyer to be connected and to communicate in real time.


Such systems, however, require the service seeker to have a connection to the internet. The service seeker must also have the necessary computer hardware to browse the internet. Presently, there is no system available by which a service seeker can be matched to a wide array of service providers with specific skills using only a simple audio-transmission medium such as the telephone.


Therefore, there remains a need to overcome the limitations in the above-described existing art, a need which is satisfied by the inventive structure and method described hereinafter.


SUMMARY OF THE INVENTION

The present invention overcomes the problems in the existing art described above by providing a method and apparatus for specifying and obtaining services, via an audio portal, resulting in a live conversation between a user (service seeker) and a selected service provider. The present invention is a system through which seekers of a wide array of services can select, contact, converse with, and pay a service provider using a simple audio-transmission medium such as the telephone. The invention enables the seeker to locate a service provider by speaking the name of a profession, such as “psychiatrist,” which is recognized by the system's voice-recognition software. Alternatively, the user can select a service provider category by pressing corresponding keypad(s) of a user telephone.


In a similar fashion, the seeker can then specify the price range, quality rating, language, and keyword descriptors of the service provider using either voice commands or keypad entry. Within the desired parameters, the system offers service providers who have made themselves available to render services at the present time. Once the appropriate available service provider is selected, the system connects the service seeker with the service provider for a live conversation. The system automatically bills the seeker for the time spent conversing with the service provider and compensates the service provider accordingly.


Advantages of the invention include providing users with the capability to engage in a live conversation with a selected service provider via a telephone. In contrast to prior systems, which require an internet connection and browser to receive such services, the equivalent is now provided by a simple audio transmission medium such as the telephone. As a result, virtually anyone can benefit from the capabilities provided by the present invention. The system also allows providers of a field of service to be compensated for supplying their expertise to a user.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:



FIG. 1 depicts a block diagram illustrating a system in which an audio portal service provider system in accordance with the present invention may be implemented;



FIGS. 2A and 2B depict block diagrams further illustrating the audio portal service provider system as shown in FIG. 1;



FIG. 3 depicts a web page, including a list of fields of service from which service providers can be selected for live conversations in accordance with a further embodiment of the present invention;



FIG. 4 depicts a web page presented to a service provider desiring inclusion in a service provider database of the present invention;



FIG. 5 is a flow chart illustrating a method used to allow a user to select a service provider for a live conversation using the audio portal system in accordance with a further embodiment of the present invention;



FIG. 6 is a flow chart illustrating an additional method used by a service provider desiring inclusion in the service provider database in accordance with a further embodiment of the present invention;



FIG. 7 is a flow chart illustrating an additional method for billing a user and compensating a service provider following a live conversation in accordance with a further embodiment of the present invention;



FIG. 8 is a flow chart illustrating an additional method for connecting a user desiring a service provider to the audio portal service provider system in accordance with an exemplary embodiment of the present invention; and



FIG. 9 depicts a flowchart illustrating an additional method for receiving a quality rating from a user regarding services provided by a service provider in accordance with an exemplary embodiment of the present invention.





DETAILED DESCRIPTION

The present invention overcomes the problems in the existing art described above by providing a method and apparatus for specifying and obtaining services through voice commands resulting in a live conversation between a user (service seeker) and a selected service provider. The invention enables the seeker to locate a service provider by speaking the name of a profession, such as “psychiatrist,” which is recognized by the system's voice-recognition software. Alternatively, the seeker can select a service provider category by pressing corresponding telephone keypad(s). Once the appropriate available service provider is selected, the system connects the service seeker with the service provider for a live conversation. The system automatically bills the seeker for the time spent conversing with the service provider and compensates the service provider accordingly.


In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details. In addition, the following description provides examples, and the accompanying drawings show various examples for the purposes of illustration. However, these examples should not be construed in a limiting sense as they are merely intended to provide examples of the present invention rather than to provide an exhaustive list of all possible implementations of the present invention. In other instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the details of the present invention.


Portions of the following detailed description may be presented in terms of algorithms and symbolic representations of operations on data bits. These algorithmic descriptions and representations are used by those skilled in the data processing arts to convey the substance of their work to others skilled in the art. An algorithm, as described herein, refers to a self-consistent sequence of acts leading to a desired result. The acts are those requiring physical manipulations of physical quantities. These quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Moreover, principally for reasons of common usage, these signals are referred to as bits, values, elements, symbols, characters, terms, numbers, or the like.


However, these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, it is appreciated that discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's devices into other data similarly represented as physical quantities within the computer system devices such as memories, registers or other such information storage, transmission, display devices, or the like.


The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method. For example, any of the methods according to the present invention can be implemented in hard-wired circuitry, by programming a general-purpose processor, or by any combination of hardware and software.


One of skill in the art will immediately appreciate that the invention can be practiced with computer system configurations other than those described below, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, digital signal processing (DSP) devices, network PCs, minicomputers, mainframe computers, and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. The required structure for a variety of these systems will appear from the description below.


It is to be understood that various terms and techniques are used by those knowledgeable in the art to describe communications, protocols, applications, implementations, mechanisms, etc. One such technique is the description of an implementation of a technique in terms of an algorithm or mathematical expression. That is, while the technique may be, for example, implemented as executing code on a computer, the expression of that technique may be more aptly and succinctly conveyed and communicated as a formula, algorithm, or mathematical expression.


Thus, one skilled in the art would recognize a block denoting A+B=C as an additive function whose implementation in hardware and/or software would take two inputs (A and B) and produce a summation output (C). Thus, the use of a formula, algorithm, or mathematical expression as a description is to be understood as having a physical embodiment in at least hardware and/or software (such as a computer system in which the techniques of the present invention may be practiced as well as implemented as an embodiment).


In an embodiment, the methods of the present invention are embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor that is programmed with the instructions to perform the steps of the present invention. Alternatively, the steps of the present invention might be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.


In one embodiment, the present invention may be provided as a computer program product which may include a machine or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform a process according to the present invention. The computer-readable medium may include, but is not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), magneto-optical disks, Read-Only Memories (ROMs), Random Access Memories (RAMs), Erasable Programmable Read-Only Memories (EPROMs), Electrically Erasable Programmable Read-Only Memories (EEPROMs), magnetic or optical cards, flash memory, or the like.


Accordingly, the computer-readable medium includes any type of media/machine-readable medium suitable for storing electronic instructions. Moreover, the present invention may also be downloaded as a computer program product. As such, the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client). The transfer of the program may be by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem, network connection or the like).


System Architecture



FIG. 1 depicts one embodiment of an audio portal service provider system architecture 100 in which the systems and methods of the present invention may be incorporated. One or more service provider computers 200 (200-1, 200-2, . . . , 200-N) are connected through a network 400 (such as an Intranet, a LAN or a WAN such as the Internet) to a host computer or web server (“audio portal server computer”) 300. Persons skilled in the art will recognize that the audio portal server computer 300 may include one or more computers working together to provide the controller computer functions described herein. The audio portal system 100 includes one or more service providers 200 (200-1, . . . , 200-N), each having an audio transmission medium 250 (250-1, . . . , 250-N) that is connected to a communications network 110.


Accordingly, one or more users (service seekers) 102 (102-1, . . ., 102-N) access the audio portal system 100 via audio transmission mediums 104 (104-1, . . . 104-N) that are connected to the communications network 110. In accordance with the teachings of the present invention, a service seeker 102 (102-1, . . ., 102-N) can send a request 106 (106-1, . . ., 106-N) via the audio transmission medium 104, which is received by the audio portal server computer 300 via an audio interface 308. The request may be in the form of either a voice command or keypad entry via an audio transmission medium 104. As described in further detail below, the audio portal server computer 300 can then connect the service seeker 102 to a selected service provider 200 for a live conversation via the audio interface 308.


The communications network 110 generally refers to any type of wire or wireless link enabling the transmission of voice or keypad data such as, but not limited to, a public switched telephone network, a wireless communications network, a local area network, a wide area network or a combination of networks. The audio transmission mediums 104 and 250 generally refer to any type of device capable of receiving speech or keypad entry from a user and providing the speech/keypad entry to a destination via a communications network, such as the communications network 110. In an embodiment of the present invention, the communications network 110 is a public switched telephone network and the audio transmission medium is a telephone.



FIG. 2A further illustrates the audio portal service provider system 100, including the audio portal server computer 300, as well as the service provider computer 202. The audio portal server computer 300 includes a central processing unit (CPU) 302, a user interface 304, a communications interface 306, an audio interface 308, a service provider database 310 and a memory 312. The audio portal server computer 300 can be any type of computing device, such as, but not limited to, desktop computers, workstations, laptops and/or mainframe computers.


The audio interface 308 is used to communicate with users 102 and service providers 200, as well as other system resources not shown. The audio interface 308 receives an audio request 106 provided by user 102 through an audio transmission medium 104, which is provided over the communications network 110. The audio interface 308 provides digitized voice requests to the audio portal server computer 300 for interactive voice recognition, including voice processing, speech recognition and text-to-speech processing. The memory 312 of the audio portal server computer 300 may be implemented as RAM (random access memory), SRAM (static RAM), SDRAM (synchronous dynamic RAM) or a combination of RAM and non-volatile memory, such as one or more memory modules, storage volumes, or magnetic disk storage units. The memory can contain any of the following:

    • an operating system 314;
    • internet access procedures 316;
    • web server procedures 318;
    • web creation procedures 320;
    • audio interface procedures 322 for receiving an audio request (voice/keypad entry) 106 from the user 102 via the audio interface 308 and utilizing either integrated voice recognition (IVR) for voice requests or dual tone multi-frequency (DTMF) decoding for keypad entry requests to provide the user with a selected service provider and connect the service seeker 102 with the selected service provider 200 for a live conversation;
    • service provider selection procedures 324 for providing the service seeker 102 with an auditory list of fields of service providers provided by the audio portal system 100, as well as auditory lists of service providers matching a field of service selected by the user 102;
    • database (DB) access procedures 326 for querying the database 310 in order to return records of service providers matching a field of service selected by the user 102;
    • billing procedures 328 for billing the service seeker 102 following a live conversation with the service provider 200, as well as compensating the service provider 200 for the live conversation and collecting a premium fee for the audio portal system 100;
    • provider inclusion procedures 330 for providing an on-line interface, as well as an audio interface (e.g., via telephone), to service providers 200 requesting inclusion in the service provider database 310 in order to provide live services via the audio portal system 100 to prospective users 102;
    • provider interface procedures 332 for providing both an online interface, as well as an audio interface, allowing service providers 200 to update information in the service provider database 310, including times of availability;
    • quality rating procedures 336 for receiving a quality rating for a service provider 200 following a live conversation with a user 102 based on the user's evaluation of the services provided by the service provider 200;
    • user interface procedures 338 for providing the service seeker 102 with an audio listing of fields of service available from the audio portal system 100, a keypad value corresponding to each field of service for non-integrated voice recognition embodiments, as well as receiving various descriptors for narrowing the search of service providers, including acceptable price ranges, acceptable quality ratings, specific languages, as well as a service provider ID of a specific service provider when known by the service seeker 102;
    • conversation monitoring procedures 340 for measuring the duration of the live conversation between the service seeker 102 and the service provider 200;
    • keypad decoding procedures 342 for decoding service seeker 102 responses entered via keypads of an audio transmission medium 104 (DTMF signals) and converting the requests into a query for selecting either service provider categories or specific service providers from service provider database 310 and providing the selected categories and service providers to the user via user interface procedures 338;
    • recorded speech database 344 which contains voice listings of the various fields of service available from the service provider system, as well as names of each service provider corresponding to each field of service available from the audio portal system 100, which are provided to the user in order to enable the user to select a service provider to engage in a live, real-time conversation therewith; and
    • other procedures and files.
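
The audio interface procedures 322 and keypad decoding procedures 342 above route a caller's entry through either voice recognition or DTMF decoding. The following Python sketch illustrates that routing decision in a very simplified form; the to_query function and the query-string format are hypothetical stand-ins, and a real implementation would operate on audio frames and DTMF tone frequencies rather than plain strings.

    # Minimal sketch (not the patent's implementation): decide whether an
    # already-captured entry came from the keypad (DTMF) or from speech (IVR)
    # and turn it into a text query for the selection procedures.
    DTMF_DIGITS = set("0123456789*#")

    def to_query(entry: str) -> str:
        """Return a query string from keypad digits or a transcribed phrase."""
        if entry and all(ch in DTMF_DIGITS for ch in entry):
            return "field_of_service_code:" + entry               # keypad path
        return "field_of_service_name:" + entry.strip().lower()   # voice path

    print(to_query("42"))            # field_of_service_code:42
    print(to_query("Psychiatrist"))  # field_of_service_name:psychiatrist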


Referring now to FIG. 2B, FIG. 2B illustrates the service provider computer 202, which includes a CPU 204, a user interface 206, a memory 210 and a communications interface 208. The communications interface 208 is used to communicate with the audio portal server computer 300, as well as other system resources not shown. The memory 210 of the service provider computer 202 may be implemented as RAM (random access memory) or a combination of RAM and non-volatile memory, such as one or more magnetic disk storage units. The memory 210 can contain the following:

    • an operating system 212;
    • internet access procedures 214;
    • audio portal access procedures 216 for accessing the audio portal server computer 300; and
    • other procedures and files.


The embodiments depicted in FIGS. 2A and 2B include a service provider database 310 containing information about a wide array of service providers 200. In order to present themselves to their potential clients (service seekers), service providers 200 list themselves in this database 310. In one embodiment, this is done through the use of an Internet web site, via web pages 510 and 550, as depicted in FIGS. 3 and 4. The service provider 200 registers his/her name and phone number using the web page 550, along with a description of the service that he/she offers. Possible examples of the wide array of fields of service available from the audio portal system 100 include, but are not limited to, the fields of service depicted in FIG. 3. The description includes key words describing the field of service. The description also includes a price for rendering the service, most commonly, but not restricted to, a per-minute price.
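
As a rough sketch of the kind of record the service provider database 310 might hold for each registered provider, the following Python dataclass collects the fields named in the paragraph above (name, phone number, description keywords, price, availability, and quality rating). The field names and the example values are illustrative assumptions, not taken from the patent.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProviderRecord:
        """Illustrative shape of one entry in service provider database 310."""
        provider_id: str
        name: str
        phone_number: str
        field_of_service: str               # e.g. "Computer Help"
        keywords: List[str] = field(default_factory=list)
        price_per_minute: float = 0.0       # a per-minute price is the common case
        languages: List[str] = field(default_factory=list)
        quality_rating: float = 0.0         # averaged one-to-five star rating
        on_call: bool = False               # real-time availability status

    # Hypothetical example entry:
    example = ProviderRecord(
        provider_id="1234", name="Danielle", phone_number="555-0100",
        field_of_service="Computer Help", keywords=["Excel", "regression"],
        price_per_minute=1.00, languages=["English"], on_call=True,
    )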


The service provider 200 then informs the audio portal system 100 of the times when he/she is available to receive calls. This can be done by creating a schedule of suitable times at the web site 500 or by simply clicking on an “on call”/“off call” switch at the web site 500. Switching service provider 200 availability status can also be done through an audio transmission medium such as a telephone. The service provider 200 calls the central phone number, identifies himself/herself with a password, then presses the telephone keypad “1” or “2”, for example, to indicate “on call” or “off call” status, respectively. Once the database 310 contains the phone numbers of service providers, descriptions of their services, their prices, and their real-time availability statuses, the audio portal system 100 can provide services to users 102 desiring corresponding services.
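
The keypad-driven availability switch just described can be pictured with the short sketch below. It assumes the provider records are held in a simple dictionary keyed by provider ID with "password" and "on_call" entries; those names, and the set_availability function itself, are illustrative assumptions rather than the patent's implementation.

    def set_availability(records: dict, provider_id: str, password: str,
                         keypress: str) -> bool:
        """Toggle "on call"/"off call" status after password identification.

        Per the description above, pressing "1" indicates on-call status and
        "2" indicates off-call status.  Returns True if the update succeeded.
        """
        rec = records.get(provider_id)
        if rec is None or rec["password"] != password:
            return False                      # identification failed
        if keypress == "1":
            rec["on_call"] = True
        elif keypress == "2":
            rec["on_call"] = False
        else:
            return False                      # unrecognized keypress
        return True

    providers = {"1234": {"password": "secret", "on_call": False}}
    set_availability(providers, "1234", "secret", "1")   # provider is now on call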


In this embodiment a telephone is used as part of the delivery mechanism or audio transmission medium 250 of the audio portal system 100. A user 102 seeking services (a service seeker) dials a central telephone number and then listens to a series of options. In one embodiment, the service seeker is initially prompted for verification information including, for example, a personal identification number (PIN) code. Once verified, the service seeker is presented the option to browse available fields of service or enter the extension or identification (ID) code of a desired service provider for automatic connection when the provider is available.


Alternatively, the seeker indicates which type of service he would like to receive by speaking the name of a profession, such as “psychiatrist,” which is processed by the system's audio interface 308 using audio interface procedures 322. Otherwise, the user 102 can listen to a series of professions and press the numerical keypad to select one. This process continues until the desired field of service is selected. In addition, when known, the seeker 102 can provide a service provider ID of a desired service provider for immediate connection with the selected service provider (as described above).


Once the user 102 has indicated a field of service using the service provider selection procedures 324, the audio portal system 100 searches its database 310 for service providers in that field using the DB access procedures 326. The user 102 can then further narrow down the selection of service providers by speaking keywords, such as “psychiatry, depression.” The user 102 can also indicate a known specific service provider by speaking the service provider's name or punching in the service provider's code number or service provider ID into a telephone keypad for immediate connection.


The service provider selection procedures 324, in conjunction with the user interface procedures 338, allow the user 102 to further narrow the search for a service provider 200 by speaking, or pressing into the telephone keypad, a price, such as “50 cents per minute.” The audio portal server computer 300 will then narrow the search in the database 310 for service providers 200 that match the price range. The service seeker 102 can further narrow the search for a service provider 200 by speaking, or pressing into the telephone keypad, a quality rating, such as “three stars or higher.” The server computer 300 will then narrow the search in the database 310 for service providers 200 which match the quality rating range. Finally, the user 102 can further narrow the search for a service provider by speaking, or pressing into the telephone keypad, the name of a language, such as “Spanish.” The server computer 300 will then narrow the search in the database 310 for service providers 200 who can speak this language.
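
The successive narrowing by price, quality rating, and language described above amounts to filtering the candidate set returned from database 310. A minimal sketch follows, assuming each provider is represented as a dictionary with illustrative keys ("price_per_minute", "quality_rating", "languages", "keywords", "on_call"); neither the function name nor the keys are taken from the patent.

    def narrow_search(providers, max_price=None, min_rating=None,
                      language=None, keywords=()):
        """Filter providers the way selection procedures 324 narrow a search."""
        results = []
        for p in providers:
            if not p["on_call"]:
                continue                                    # must be available now
            if max_price is not None and p["price_per_minute"] > max_price:
                continue
            if min_rating is not None and p["quality_rating"] < min_rating:
                continue
            if language is not None and language not in p["languages"]:
                continue
            if keywords and not set(keywords) <= set(p["keywords"]):
                continue
            results.append(p)
        return results

    # "50 cents per minute", "three stars or higher", "Spanish" would become:
    # narrow_search(all_providers, max_price=0.50, min_rating=3, language="Spanish")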


In addition, the service provider selection procedures 324, in conjunction with the user interface procedures 338, allow the service seeker 102 to select a service provider 200, via keypad entry on the user's audio transmission medium 104, in response to listings of both the service provider categories, or fields of service, available from the system, as well as a specific service provider once a field of service is selected. Accordingly, the service provider selection procedures 324 and user interface procedures 338 will receive, in one embodiment, dual tone multi-frequency (DTMF) signals generated via the audio transmission medium keypad entry. As such, the user interface procedures 338 will decode the received user selection and convert the decoded DTMF signals into a database query format.


Once converted, the service provider selection procedures 324 will query the service provider database 310 using the user selection in order to provide either service providers within a field of service selected by the user or a corresponding service provider selected by the user. Accordingly, in certain embodiments, prior users may enter a service provider code number for immediate connection to the service provider. In this way, the seeker can avoid the interface prompts that are required for new users in order to familiarize them with the fields of service available from the audio portal system 100. In one embodiment, if the service provider is not available, the seeker is given the option to connect with the highest rated service provider within the corresponding category. As such, the service provider selection procedures 324 and user interface procedures 338 include both IVR software, as well as DTMF decoding software, depending on whether the user's responses are provided as voice responses or keypad entry.
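
The direct-connection and fallback behavior just described (connect to a known provider code if that provider is on call, otherwise offer the highest-rated available provider in the same category) might look roughly like the following sketch. The dictionary keys and the select_provider name are illustrative assumptions.

    def select_provider(providers, code_entry):
        """Resolve a keypad-entered service provider code.

        If the named provider is not on call, fall back to the highest-rated
        available provider in the same field of service (or None if no one
        in that field is currently available).
        """
        chosen = next((p for p in providers if p["provider_id"] == code_entry), None)
        if chosen is None:
            return None                               # unknown provider code
        if chosen["on_call"]:
            return chosen
        same_field = [p for p in providers
                      if p["field_of_service"] == chosen["field_of_service"]
                      and p["on_call"]]
        return max(same_field, key=lambda p: p["quality_rating"], default=None)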


Once a service provider 200 with the desired characteristics has been chosen, the audio portal system 100 will automatically connect the service seeker 102 with the selected service provider 200. Since the service provider 200 has informed the audio portal system 100 that he/she is “on call” and ready to receive calls, the audio portal system 100 can reach him/her with a simple phone call via the audio interface 308. Once both the service seeker 102 and provider 200 are on the phone line 110, the audio portal system 100 conferences the two phone calls together, enabling services to be rendered in a live conversation. The system keeps track of the time spent on the phone call using the conversation monitoring procedures 340. The service seeker 102 is then billed accordingly, and the funds are transferred to the provider 200 using the billing procedures 328.
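
The connection and monitoring step can be pictured as calling both parties, bridging the two legs, and timing the bridged conversation. In the sketch below, dial and conference are placeholders for whatever telephony interface the audio interface 308 actually uses; they are assumptions, and only the timing role of the conversation monitoring procedures 340 is shown concretely.

    import time

    def run_conference(dial, conference, seeker_number, provider_number):
        """Call both parties, bridge them, and return the call length in minutes.

        `dial(number)` is assumed to place a call and return a call handle;
        `conference(leg_a, leg_b)` is assumed to bridge the two calls and
        block until either party hangs up.  Both are hypothetical stand-ins.
        """
        seeker_leg = dial(seeker_number)
        provider_leg = dial(provider_number)
        started = time.monotonic()
        conference(seeker_leg, provider_leg)
        return (time.monotonic() - started) / 60.0      # duration in minutes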


At the end of the phone call, the system prompts the service seeker 102 to rate the quality of the received service using the quality rating procedures 336. A quality rating of one to five stars, for instance, can be spoken into the telephone 104 or pressed into the telephone keypad. The audio portal system 100 records this rating and, in turn, can store the quality rating in the database 310 and use it as a quality-selection criterion the next time a user 102 calls. Procedural method steps for implementing the teachings of the present invention are now described.


Operation


Referring now to FIG. 5, a method 600 is depicted for allowing a user 102 to provide an audio request 106 to an audio portal service provider system 100 resulting in a live conversation between a user 102 and a selected service provider 200, for example, in the audio portal system 100 as depicted in FIGS. 1 and 2. At step 610, an audio request 106 is received by the audio portal service provider system 100 from a user 102 (service seeker) that is seeking service providers 200 from a wide array of fields of service available from the audio portal system 100. The audio request 106 is provided via an audio transmission medium 104 and received via an audio interface 308 of an audio portal server computer 300.


Once the request 106 is received, at step 612, it is determined whether the audio request 106 includes a field of service desired by the user 102. At step 614, when the audio request includes a field of service desired by the user 102, the user 102 is provided with a list of one or more service providers 200 stored in a service provider database 310, which match the field of service desired by the user 102. The audio portal server computer 300 selects the list of service providers for the user 102 using the service provider selection procedures 324, as well as the database access procedures 326. The list of service providers is then presented to the user 102 using the user interface procedures 338.


Next, at step 618, the audio portal server computer 300 determines a selection from the service seeker 102 for a selected service provider 200 stored within the service provider database 310. Finally, at step 620, the audio portal server computer 300 uses the audio interface 308 to connect the user 102 with the selected service provider 200 for a live conversation via the audio transmission mediums 104 and 250. The audio interface procedures 322 handle receipt of the audio request 106 and connection of the user 102 with the selected service provider 200. However, the audio interface procedures 322 may be performed by a human operator.
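
To tie steps 610 through 620 together, the following sketch walks the same sequence for the simple case where the audio request names a field of service: find available providers in that field, announce them, take the caller's selection, and connect the call. The prompt and connect callables, and the dictionary keys, are hypothetical placeholders for the audio interface rather than anything specified in the patent.

    def handle_call(field_of_service, providers, prompt, connect):
        """Rough end-to-end sketch of method 600 (steps 610-620).

        `prompt(text)` is assumed to speak `text` to the caller and return the
        caller's (already-recognized) reply; `connect(number)` is assumed to
        bridge the caller to that telephone number for a live conversation.
        """
        matches = [p for p in providers
                   if p["field_of_service"].lower() == field_of_service.lower()
                   and p["on_call"]]                          # step 614
        if not matches:
            prompt("No providers are currently available in that field.")
            return None
        choice = prompt("Available providers: " +
                        ", ".join(p["name"] for p in matches))
        selected = next((p for p in matches if p["name"] == choice), None)
        if selected is not None:                              # step 618
            connect(selected["phone_number"])                 # step 620
        return selected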


As described above, embodiments of the invention include user response via an audio request, which may include the service provider name, a field of service, or a service provider code for direct connection with the selected service provider. In addition, the user response may be via keypad entry through the user audio transmission medium 104, which generates a DTMF signal, which may also indicate a field of service desired by the user, a corresponding service provider desired by the user, as well as a service provider code for direct connection with the service provider.


As such, depending on the means of user response, the service provider selection procedures 324, in conjunction with the user interface procedures 338, will utilize either IVR software or DTMF decoding software in order to convert the user's response into a query which is recognized by the service provider database. Once the query is generated, the service provider selection procedures will query the service provider database 310 in order to return either the selected field of service or a selected service provider, or, when a service provider code (e.g., an extension) is provided, to directly connect the service provider with the user when the service provider is available.



FIG. 6 depicts additional method steps 630 for adding service providers 200 to the audio portal system 100. At step 632, the audio portal server computer 300 receives a request from a service provider 200 of a field of service requesting inclusion in the service provider database 310. At step 633, the audio portal service provider system 100 determines whether to approve the service provider 200. Approval of a service provider 200 includes, for example, adding an additional field of service to the audio portal system 100 for a new service provider 200.


At step 634, when the service provider 200 is approved, the server computer 300 generates a record in the service provider database 310, including provider information contained in the audio request 106. Acceptance of the provider 200 and generation of provider records in the service provider database 310 are performed by the server computer 300 using the provider inclusion procedures 330. The provider information stored in the database 310 can include a service price, real-time service provider availability, specific expertise of the service provider, languages spoken by the provider and a quality rating for the service provider.



FIG. 7 depicts additional method step 640 for billing a user 102 and compensating a service provider 200 for a live conversation between the provider 200 and the user 102. At step 642, the server computer 300 measures a duration of the live conversation between the user 102 and the provider 200 using the conversation monitoring procedures 340. Once the live conversation is complete, the server computer 300 calculates a billing amount for the user 102 based on the duration of the live conversation and a time-based price charged by the service provider 200.


In one embodiment, the billing amount is generated by the server computer 300 using the billing procedures 328. However, the billing amount may be a flat fee. Otherwise, while the live conversation is not yet complete, the server computer 300 continues measuring its duration at step 642. The time-based price charged by the service provider 200 includes, for example, a per-minute price, an hourly price or a flat fee.


At step 648, the server computer 300 bills the service seeker 102 the billing amount for the live conversation with the provider 200. Generally, service seekers 102 of the audio portal service provider system 100 will have a billing account set up with the system 100. The audio portal system 100 can then either deduct from the user's account or charge the billing amount, for example, to a credit card submitted by the service seeker 102. At step 650, the audio portal service provider system 100 compensates the provider 200 for the live conversation with the service seeker 102. Finally, at step 652, the server computer 300 collects a premium fee for the audio portal system 100 as a predetermined percentage of the billing amount, for example, ten percent.
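
Steps 642 through 652 reduce to simple arithmetic: multiply the measured duration by the provider's time-based price (or use a flat fee), charge the seeker, keep a premium fee for the audio portal, and credit the remainder to the provider. A minimal sketch follows, using the ten-percent premium mentioned above as the default; the function name and the rounding behavior are assumptions.

    def settle_call(duration_minutes, price_per_minute, premium_rate=0.10):
        """Return (billing_amount, premium_fee, provider_payout) for one call.

        A flat-fee arrangement would simply replace the first line with the
        agreed flat amount.
        """
        billing_amount = round(duration_minutes * price_per_minute, 2)
        premium_fee = round(billing_amount * premium_rate, 2)
        provider_payout = round(billing_amount - premium_fee, 2)
        return billing_amount, premium_fee, provider_payout

    # Eight minutes at $1.00 per minute (as in the application example below):
    # settle_call(8, 1.00) -> (8.0, 0.8, 7.2)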



FIG. 8 depicts additional method step 602 for connecting a service seeker 102 to the audio portal service provider system 100. At step 604, the server computer 300 receives a request from a user 102 for connection to the audio portal service provider system 100 via the audio transmission medium 104. The audio transmission medium 104 is, for example, a telephone. At step 606, the server computer 300 establishes a connection between the service seeker 102 and the audio portal system 100 via the audio interface 308. At step 608, the server computer 300 provides the user 102 with an audio list of the wide array of fields of service available from the audio portal service provider system 100 using the user interface procedures 338.


Finally, at step 609, the system 100 will provide the service seeker, via the user interface procedures 338, a unique field of service code for each field of service within the audio list of fields of service provided to the service seeker 102. Accordingly, the service seeker can select a desired field of service and enter the field of service code corresponding to the desired field of service via the keypad of the service seeker's audio transmission medium 104. Once entered, the audio transmission medium 104 will generate a DTMF response, which is interpreted by the user interface procedures 338 in order to select service providers within the field of service desired by the user.



FIG. 9 depicts additional method step 660 for receiving a quality rating from a user 102 regarding the live conversation with the service provider 200. At step 662, it is determined whether the live conversation is complete. At step 664, the server computer 300 prompts the user 102 for a quality of service rating for services rendered by the service provider 200. At step 666, it is determined whether a quality rating is provided by the user 102. At step 668, the server computer 300 records the service rating provided by the user 102 in the service provider database 310.
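
Steps 662 through 668 amount to prompting for a one-to-five star rating and storing it with the provider's record. The sketch below shows how such a rating might be folded into a running average, as in the application example later in this description where a new five-star rating is averaged into the provider's overall rating; the rating_count field and the function name are illustrative assumptions.

    def record_rating(record: dict, stars: int) -> None:
        """Store a one-to-five star rating and update the running average.

        `record` stands in for one row of database 310; "quality_rating" holds
        the current average and "rating_count" (an assumed bookkeeping field)
        holds how many ratings have been received so far.
        """
        if not 1 <= stars <= 5:
            raise ValueError("rating must be one to five stars")
        count = record.get("rating_count", 0)
        total = record.get("quality_rating", 0.0) * count + stars
        record["rating_count"] = count + 1
        record["quality_rating"] = total / record["rating_count"]

    danielle = {"quality_rating": 0.0, "rating_count": 0}
    record_rating(danielle, 5)        # danielle["quality_rating"] is now 5.0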


As described above, the audio request 106 provided by the user 102 can include the category of service providers, a maximum price range for service providers, desired times of availability for service providers, specific expertise of the service provider, a language spoken by the service provider and a minimum quality rating for the service provider. These criteria are used by the server computer 300 and provided to service provider selection procedures 324 in order to narrow the list of service providers 200 for the user 102 to choose from.


APPLICATION OF THE INVENTION

Danielle, a graduate student in economics, happens to be an expert user of Microsoft Excel. To earn extra money while writing her thesis, Danielle decides to post her Excel-help services on the subject web site, an Internet-based implementation of the invention. She registers at the site and lists herself under “Computer Help” and “Excel” at the rate of $1.00 per minute. During the registration process, Danielle provides her telephone number and a description of her abilities, which include regression models and statistical analysis. Whenever Danielle is at home alone studying for long stretches in the evening, she signs on to the subject web site and changes her state of availability to “On Call,” or immediately available to receive clients.


Michael is a management consultant building a regression model on Excel for a large clothing retailer. At midnight in the office, he is having trouble analyzing his spreadsheet. Looking to receive help, he dials the 1-800 number of the subject system. He is prompted by the system to indicate the area of service he desires. He speaks the words, “Computer Help,” which are recognized by the system's voice-recognition software. The system has several thousand computer-help service providers to choose from, so Michael specifies his needs by speaking the words, “regression models and analysis.”


The system has about 50 service providers who are “On Call” to receive customers regarding regression models and analysis. Michael then indicates the price and quality he desires by speaking the words, “one dollar per minute or less” and “with a three-star quality rating or above.” The system uses these parameters to find only those service providers who fit within this price and quality range and can presently receive customers regarding regression models and analysis; there are four. The system relays the descriptions of the four service providers to Michael. He selects Danielle by speaking the words, “Connect Me.”


Since Michael has not used the subject phone system before, he first must enter his credit card number to pay for the call. Once the credit card number has been confirmed, the system dials Danielle's phone number, which it has on file from her registration at the web site. When Danielle picks up the phone, the automated voice of the system informs her that there is a client on the line looking for “Computer Help” and willing to pay her $1.00 price per minute. The system asks her whether she would like to accept the call. She speaks the word “yes” (or presses “1” on her telephone keypad), and the system conferences the separate phone calls to Danielle and Michael together so that they can communicate.


Michael and Danielle talk until his problem is solved, which takes eight minutes. Michael's credit card is billed for eight dollars. He receives a confirming message via electronic mail notifying him of this, along with a request to evaluate Danielle's service, which he does, pressing “5” on his telephone keypad to award her five stars, which the system then averages into her overall quality rating. Danielle's web site account is credited for eight dollars minus a fee collected by the web site. Once Danielle's web site account has accumulated a surplus of $25, she receives a check from the web site in the mail. After receiving many positive reviews from online clients such as Michael, Danielle is inundated with Excel-help requests whenever she goes “On Call,” enabling her to raise her rates to $1.50 per minute.


The scenario described above illustrates a situation where Michael is allowed to enter a voice request for selection of a service provider and to further narrow the categories. However, certain implementations of the present invention will provide service seekers, such as Michael, with a listing of fields of service provided by the system and indicate a field of service code corresponding to each field of service available from the system. As such, a user such as Michael would select the desired field of service and enter the field of service code corresponding to the desired field of service via the keypad of Michael's telephone in order to obtain a list of service providers in the area of computer help. Therefore, service seekers such as Michael are provided the option to enter voice responses or keypad entry responses in order to make a final selection of a service provider and enter into a live conversation with the desired service provider in order to solve the service seeker's problem.


Alternate Embodiments

Several aspects of one implementation of the audio portal system for providing a real-time communications connection between a service seeker and a service provider have been described. However, various implementations of the audio portal system provide numerous features, including features complementing, supplementing, and/or replacing the features described above. Features can be implemented as part of the audio portal system or as part of an on-line implementation in different implementations. In addition, the foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the invention.


In addition, although an embodiment described herein is directed to an audio portal, it will be appreciated by those skilled in the art that the teaching of the present invention can be applied to other systems. In fact, systems for connection of service seekers and service providers for real-time communication are within the teachings of the present invention, without departing from the scope and spirit of the present invention. The embodiments described above were chosen and described in order to best explain the principles of the invention and its practical applications. These embodiments were chosen to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.


It is to be understood that even though numerous characteristics and advantages of various embodiments of the present invention have been set forth in the foregoing description, together with details of the structure and function of various embodiments of the invention, this disclosure is illustrative only. In some cases, certain subassemblies are only described in detail with one such embodiment. Nevertheless, it is recognized and intended that such subassemblies may be used in other embodiments of the invention. Changes may be made in detail, especially in matters of structure and management of parts, within the principles of the present invention, to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed.


Advantages of the invention include providing users with the capability to engage in a live conversation with a selected service provider via a telephone. In contrast to prior systems, which require an internet connection and browser to receive such services, the equivalent is now provided by a simple audio transmission medium such as the telephone. As a result, virtually anyone can benefit from the capabilities provided by the present invention. The system also allows providers of a field of service to be compensated for supplying their expertise to a user.


Having disclosed exemplary embodiments, modifications and variations may be made to the disclosed embodiments while remaining within the scope of the invention as defined by the following claims.

Claims
  • 1. A method, comprising: a first party providing an advertisement to a customer on behalf of an advertiser; and after a telephonic connection being established between the advertiser and the customer, via the advertisement, the first party charging the advertiser a fee based on a price specified by the advertiser, the fee being independent of a duration of the telephonic connection; wherein the price is for a service rendered by the advertiser over the telephonic connection between the advertiser and the customer.
  • 2. The method of claim 1, wherein the advertiser is a service provider.
  • 3. The method of claim 2, wherein the first party providing the advertisement includes providing a reference to be selected by a customer to initiate a telephone call to the service provider.
  • 4. The method of claim 1, wherein the first party providing the advertisement includes the first party providing the advertisement in response to a keyword search submitted by a customer.
  • 5. The method of claim 4, wherein the keyword search is submitted in one of: audio and voice.
  • 6. The method of claim 5, wherein the keyword search includes a voice input; and the method further comprises recognizing the voice input via software.
  • 7. The method of claim 5, wherein the keyword search includes a keypad input transmitted as a dual tone multi-frequency (DTMF) signal.
  • 8. The method of claim 4, further comprising: determining whether to use integrated voice recognition or dual tone multi-frequency (DTMF) decoding to process the keyword search submitted by the customer.
  • 9. The method of claim 4, wherein the keyword search is submitted through a telephone.
  • 10. The method of claim 9, wherein the advertisement is provided through the telephone.
  • 11. The method of claim 1, wherein the advertisement comprises a description provided by the advertiser.
  • 12. The method of claim 11, further comprising: the first party receiving the description from the advertiser via Internet.
  • 13. The method of claim 1, further comprising: deducting from an account of the customer set up with the first party according to the price specified by the advertiser.
  • 14. The method of claim 13, further comprising: crediting an account of the advertiser set up with the first party according to the price specified by the advertiser.
  • 15. The method of claim 1, wherein the advertisement is placed on a media channel.
  • 16. The method of claim 15, wherein the media channel includes a telephonic connection between the first party and the customer which supports voice communication.
  • 17. The method of claim 15, further including the first party providing advertisements on the media channel on behalf of multiple service providers, the advertisements to include at least a reference to a telephonic connection with a respective one of the service providers; the first party charging a fee after a telephonic connection being established between the respective one of the service providers and a customer.
  • 18. The method of claim 17, further comprising: receiving a selection by the customer over the media channel after the first party providing the advertisements on the media channel on behalf of the multiple service providers.
  • 19. A system comprising: means for a first party providing an advertisement on behalf of an advertiser; and means for the first party charging the advertiser a fee based on a price specified by the advertiser, after a telephonic connection being established between the advertiser and a customer, via the advertisement, the fee being independent of a duration of the telephonic connection, wherein the price is for a service rendered by the advertiser over the telephonic connection between the advertiser and the customer.
  • 20. A machine-readable medium having stored thereon a set of instructions, which when executed, cause a data processing system to perform a method comprising: a first party providing an advertisement to a customer on behalf of an advertiser; and after a telephonic connection being established between the advertiser and the customer, via the advertisement, the first party charging the advertiser a fee based on a price specified by the advertiser, the fee being independent of a duration of the telephonic connection; wherein the price is for a service rendered by the advertiser over the telephonic connection between the advertiser and the customer.
  • 21. A method, comprising: a first party providing an advertisement, on behalf of an advertiser, to a customer via a telephonic apparatus of the customer; and after a telephonic connection being established between the advertiser and the telephonic apparatus of the customer, via the advertisement, the first party charging the advertiser a fee based on a price specified by the advertiser; wherein the price is associated with a service rendered by the advertiser over the telephonic connection between the advertiser and the customer.
  • 22. The method of claim 21, wherein the advertisement comprises a description provided by the advertiser to the first party.
  • 23. The method of claim 21, further comprising: establishing a telephonic connection between the first party and the customer to receive a search request from the customer; wherein the advertisement is in response to the search request.
  • 24. The method of claim 23, further comprising: the first party establishing a telephonic connection between the first party and the advertiser in response to a selection from the customer; and the first party conferencing the telephonic connection between the first party and the customer and the telephonic connection between the first party and the advertiser to connect the advertiser and the customer.
  • 25. The method of claim 23, wherein the advertisement is provided over the telephonic connection between the first party and the customer.
  • 26. A method, comprising: a first party providing an advertisement, on behalf of an advertiser, to a customer in response to a voice-based search request received from a telephone of the customer; and after a telephonic connection being established between the advertiser and the telephone of the customer, via the advertisement, the first party charging the advertiser a fee based on a price specified by the advertiser, the fee being independent of a duration of the telephonic connection, wherein the price is for a service rendered by the advertiser over the telephonic connection between the advertiser and the customer.
  • 27. The method of claim 26, further comprising: processing the voice-based search request via speech recognition.
  • 28. The method of claim 27, wherein the advertisement is provided in voice generated via text-to-speech.
  • 29. The method of claim 26, wherein the voice-based search request includes a category of service providers.
  • 30. A machine-readable medium having stored thereon a set of instructions, which when executed, cause a data processing system to perform a method comprising: providing an advertisement, on behalf of an advertiser, from a first party to a customer via a telephonic apparatus of the customer; and after a telephonic connection being established between the advertiser and the telephonic apparatus of the customer, via the advertisement, charging the advertiser a fee by the first party based on a price specified by the advertiser; wherein the price is associated with a service rendered by the advertiser over the telephonic connection between the advertiser and the customer.
  • 31. The machine-readable medium of claim 30, wherein the advertisement comprises a description provided by the advertiser to the first party.
  • 32. The machine-readable medium of claim 30, wherein the method further comprises: establishing a telephonic connection between the first party and the customer to receive a search request from the customer; the first party establishing a telephonic connection between the first party and the advertiser in response to a selection from the customer; and the first party conferencing the telephonic connection between the first party and the customer and the telephonic connection between the first party and the advertiser to connect the advertiser and the customer; wherein the advertisement is in response to the search request.
  • 33. A data processing system, comprising: means for providing an advertisement, on behalf of an advertiser, from a first party to a customer via a telephonic apparatus of the customer; and means for charging the advertiser a fee by the first party based on a price specified by the advertiser, after a telephonic connection being established between the advertiser and the telephonic apparatus of the customer, via the advertisement, wherein the price is associated with a service rendered by the advertiser over the telephonic connection between the advertiser and the customer.
  • 34. The data processing system of claim 33, further comprising: means for establishing a telephonic connection between the first party and the customer to receive a search request from the customer, wherein the advertisement is provided over the telephonic connection between the first party and the customer in response to the search request.
  • 35. A machine-readable medium having stored thereon a set of instructions, which when executed, cause a data processing system to perform a method comprising: providing an advertisement, on behalf of an advertiser, from a first party to a customer in response to a voice-based search request received from a telephone of the customer; and after a telephonic connection being established between the advertiser and the telephone of the customer, via the advertisement, charging the advertiser a fee by the first party based on a price specified by the advertiser, the fee being independent of a duration of the telephonic connection, wherein the price is for a service rendered by the advertiser over the telephonic connection between the advertiser and the customer.
  • 36. The machine-readable medium of claim 35, wherein the method further comprises: processing the voice-based search request via speech recognition; wherein the advertisement is provided in voice generated via text-to-speech.
  • 37. A data processing system, comprising: means for providing an advertisement, on behalf of an advertiser, from a first party to a customer in response to a voice-based search request received from a telephone of the customer; and means for charging the advertiser a fee by the first party based on a price specified by the advertiser, after a telephonic connection is established between the advertiser and the telephone of the customer via the advertisement, wherein the fee is independent of a duration of the telephonic connection and the price is for a service rendered by the advertiser over the telephonic connection between the advertiser and the customer.
  • 38. The data processing system of claim 37, further comprising: means for processing the voice-based search request via speech recognition, wherein the voice-based search request includes a category of service providers.
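Claims 26-38 above recite a pay-per-call arrangement in which a platform (the "first party") answers a voice-based search, plays advertisements rendered via text-to-speech, bridges the caller to the selected advertiser, and charges that advertiser a flat fee, set at the advertiser's own stated price and independent of call duration. The Python sketch below is purely illustrative of that flow under assumed, hypothetical names (AdPlatform, Advertiser, connect_and_charge, and so on); it is not the claimed implementation, and the speech-recognition, text-to-speech, and telephony-conferencing steps are reduced to placeholder methods.

# Illustrative sketch only: a minimal, hypothetical model of the pay-per-call
# flow described in claims 26-38. All names are invented for illustration; real
# speech recognition, text-to-speech, and call bridging are stubbed out.

from dataclasses import dataclass
from typing import List


@dataclass
class Advertiser:
    advertiser_id: str
    category: str                 # e.g. "plumber" (claims 29, 38)
    description: str              # ad copy supplied by the advertiser (claim 31)
    phone_number: str
    price_per_call: float         # price specified by the advertiser (claims 26, 30)


@dataclass
class CallRecord:
    customer_phone: str
    advertiser_id: str
    fee_charged: float            # flat fee, independent of call duration (claim 26)


class AdPlatform:
    """The "first party" that matches callers to advertisers and bills per call."""

    def __init__(self) -> None:
        self.advertisers: List[Advertiser] = []
        self.call_records: List[CallRecord] = []

    def register(self, advertiser: Advertiser) -> None:
        self.advertisers.append(advertiser)

    # --- voice front end (claims 27-29, 36, 38) ------------------------------

    def recognize_speech(self, audio_utterance: str) -> str:
        # Placeholder: a real system would pass the caller's audio through a
        # speech recognizer; here the "audio" is already text.
        return audio_utterance.strip().lower()

    def speak(self, text: str) -> str:
        # Placeholder for text-to-speech rendering of the advertisement.
        return f"<tts>{text}</tts>"

    def handle_voice_search(self, customer_phone: str, audio_utterance: str) -> List[Advertiser]:
        category = self.recognize_speech(audio_utterance)
        matches = [a for a in self.advertisers if a.category == category]
        for ad in matches:
            # The advertisement is played back to the caller via TTS (claim 28).
            print(self.speak(f"{ad.description} Say {ad.advertiser_id} to connect."))
        return matches

    # --- connection and billing (claims 26, 30, 32, 33) ----------------------

    def connect_and_charge(self, customer_phone: str, advertiser: Advertiser) -> CallRecord:
        # A real platform would conference its leg to the customer with its leg
        # to the advertiser (claim 32); here the bridging is only logged.
        print(f"Conferencing {customer_phone} with {advertiser.phone_number}")
        record = CallRecord(
            customer_phone=customer_phone,
            advertiser_id=advertiser.advertiser_id,
            fee_charged=advertiser.price_per_call,   # not metered by duration
        )
        self.call_records.append(record)
        return record


if __name__ == "__main__":
    platform = AdPlatform()
    platform.register(Advertiser("adv-1", "plumber", "Ace Plumbing, 24-hour service.",
                                 "+1-555-0100", price_per_call=8.00))
    results = platform.handle_voice_search("+1-555-0199", "Plumber")
    if results:
        record = platform.connect_and_charge("+1-555-0199", results[0])
        print(f"Charged advertiser {record.advertiser_id} a flat fee of ${record.fee_charged:.2f}")

The single flat fee in connect_and_charge reflects the duration-independent pricing the claims distinguish from per-minute (1-900-style) billing; a per-minute variant would instead compute the fee from the measured length of the bridged call.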
CROSS-REFERENCE TO RELATED APPLICATIONS

The present patent application is a continuation application of U.S. patent application Ser. No. 10/956,771 filed Oct. 1, 2004, which is a continuation application of U.S. patent application Ser. No. 10/015,968 filed Dec. 6, 2001, which is a continuation-in-part application of U.S. patent application Ser. No. 09/702,217 filed Oct. 30, 2000, which is now U.S. Pat. No. 6,636,590, issued Oct. 21, 2003. All of the above-referenced applications are incorporated herein by reference.

US Referenced Citations (365)
Number Name Date Kind
4313035 Jordan et al. Jan 1982 A
4577065 Frey et al. Mar 1986 A
4631428 Grimes Dec 1986 A
4645873 Chomet Feb 1987 A
4677434 Fascenda Jun 1987 A
4723283 Nagasawa et al. Feb 1988 A
4751669 Sturgis et al. Jun 1988 A
4752675 Zetmeir Jun 1988 A
4847890 Solomon et al. Jul 1989 A
4850007 Marino et al. Jul 1989 A
4963995 Lang Oct 1990 A
5057932 Lang Oct 1991 A
5058152 Solomon et al. Oct 1991 A
5148474 Haralambopoulos et al. Sep 1992 A
5155743 Jacobs Oct 1992 A
5164839 Lang Nov 1992 A
5262875 Mincer et al. Nov 1993 A
5319542 King et al. Jun 1994 A
5325424 Grube Jun 1994 A
5347632 Filepp et al. Sep 1994 A
5359508 Rossides Oct 1994 A
5361295 Solomon et al. Nov 1994 A
5369694 Bales et al. Nov 1994 A
5440334 Walters et al. Aug 1995 A
5448625 Lederman Sep 1995 A
5453352 Tachibana Sep 1995 A
5497502 Castille Mar 1996 A
5524146 Morrisey et al. Jun 1996 A
5537314 Kanter Jul 1996 A
5539735 Moskowitz Jul 1996 A
5555298 Jonsson Sep 1996 A
5557677 Prytz Sep 1996 A
5574780 Andruska et al. Nov 1996 A
5574781 Blaze Nov 1996 A
5589892 Knee et al. Dec 1996 A
5590197 Chen et al. Dec 1996 A
5596634 Fernandez et al. Jan 1997 A
5602905 Mettke Feb 1997 A
5608786 Gordon Mar 1997 A
5615213 Griefer Mar 1997 A
5619148 Guo Apr 1997 A
5619570 Tsutsui Apr 1997 A
5619725 Gordon Apr 1997 A
5619991 Sloane Apr 1997 A
5634012 Stefik et al. May 1997 A
5638432 Wille et al. Jun 1997 A
5675734 Hair Oct 1997 A
5694549 Carlin et al. Dec 1997 A
5696965 Dedrick Dec 1997 A
5701419 McConnell Dec 1997 A
5710887 Chelliah et al. Jan 1998 A
5710970 Walters et al. Jan 1998 A
5712979 Graber et al. Jan 1998 A
5715314 Payne et al. Feb 1998 A
5717860 Graber et al. Feb 1998 A
5718247 Frankel Feb 1998 A
5721763 Joseph et al. Feb 1998 A
5722418 Bro Mar 1998 A
5724424 Gifford Mar 1998 A
5724521 Dedrick Mar 1998 A
5734961 Castille Mar 1998 A
5740231 Cohn et al. Apr 1998 A
5745681 Levine et al. Apr 1998 A
5751956 Kirsch May 1998 A
5768348 Solomon et al. Jun 1998 A
5768521 Dedrick Jun 1998 A
5774534 Mayer Jun 1998 A
5778367 Wesinger, Jr. et al. Jul 1998 A
5781894 Patrecca et al. Jul 1998 A
5794221 Egendorf Aug 1998 A
5802502 Gell et al. Sep 1998 A
5809119 Tonomura et al. Sep 1998 A
5809145 Slik et al. Sep 1998 A
5812769 Graber et al. Sep 1998 A
5818836 DuVal Oct 1998 A
5819092 Ferguson et al. Oct 1998 A
5819267 Uyama Oct 1998 A
5819271 Mahoney et al. Oct 1998 A
5819285 Damico et al. Oct 1998 A
5825869 Brooks et al. Oct 1998 A
5832523 Kanai et al. Nov 1998 A
5835896 Fisher et al. Nov 1998 A
5842212 Ballurio et al. Nov 1998 A
5850433 Rondeau Dec 1998 A
5860068 Cook Jan 1999 A
5862223 Walker et al. Jan 1999 A
5864871 Kitain et al. Jan 1999 A
RE36111 Neville Feb 1999 E
5870546 Kirsch Feb 1999 A
5870744 Sprague Feb 1999 A
5878130 Andrews et al. Mar 1999 A
5884032 Bateman et al. Mar 1999 A
5884272 Walker et al. Mar 1999 A
5884282 Robinson Mar 1999 A
5889774 Mirashrafi et al. Mar 1999 A
5890138 Godin et al. Mar 1999 A
5893077 Griffin Apr 1999 A
5901214 Shaffer et al. May 1999 A
5903635 Kaplan May 1999 A
5907677 Glenn et al. May 1999 A
5911132 Sloane Jun 1999 A
5914951 Bentley et al. Jun 1999 A
5924082 Silverman et al. Jul 1999 A
5937390 Hyodo Aug 1999 A
5940471 Homayoun Aug 1999 A
5940484 DeFazio et al. Aug 1999 A
5946646 Schena et al. Aug 1999 A
5960416 Block Sep 1999 A
5963202 Polish Oct 1999 A
5963861 Hanson Oct 1999 A
5974141 Saito Oct 1999 A
5974398 Hanson et al. Oct 1999 A
5978567 Rebane et al. Nov 1999 A
5982863 Smiley et al. Nov 1999 A
5987102 Elliott et al. Nov 1999 A
5987118 Dickerman et al. Nov 1999 A
5987430 Van Horne et al. Nov 1999 A
5991394 Dezonno et al. Nov 1999 A
5995705 Lang Nov 1999 A
5999609 Nishimura Dec 1999 A
5999611 Tatchell et al. Dec 1999 A
5999965 Kelly Dec 1999 A
6006197 D'Eon et al. Dec 1999 A
6011794 Mordowitz et al. Jan 2000 A
6014644 Erickson Jan 2000 A
6026087 Mirashrafi et al. Feb 2000 A
6026148 Dworkin et al. Feb 2000 A
6026375 Hall et al. Feb 2000 A
6026400 Suzuki Feb 2000 A
6028601 Machiraju et al. Feb 2000 A
6029141 Bezos et al. Feb 2000 A
6035021 Katz Mar 2000 A
6046762 Sonesh et al. Apr 2000 A
6055513 Katz et al. Apr 2000 A
6058379 Odom et al. May 2000 A
6064978 Gardner et al. May 2000 A
6108704 Hutton et al. Aug 2000 A
6130933 Miloslavsky Oct 2000 A
6144670 Sponaugle et al. Nov 2000 A
6167449 Arnold et al. Dec 2000 A
6173279 Levin et al. Jan 2001 B1
6175619 DeSimone Jan 2001 B1
6185194 Musk et al. Feb 2001 B1
6188673 Bauer et al. Feb 2001 B1
6188761 Dickerman et al. Feb 2001 B1
6189030 Kirsch et al. Feb 2001 B1
6192050 Stovall Feb 2001 B1
6199096 Mirashrafi et al. Mar 2001 B1
6208713 Rahrer et al. Mar 2001 B1
6212192 Mirashrafi et al. Apr 2001 B1
6216111 Walker et al. Apr 2001 B1
6223165 Lauffer Apr 2001 B1
6230287 Pinard et al. May 2001 B1
6243684 Stuart et al. Jun 2001 B1
6259774 Miloslavsky Jul 2001 B1
6269336 Ladd et al. Jul 2001 B1
6269361 Davis et al. Jul 2001 B1
6275490 Mattaway et al. Aug 2001 B1
6292799 Peek et al. Sep 2001 B1
6298056 Pendse Oct 2001 B1
6301342 Ander et al. Oct 2001 B1
6304637 Mirashrafi et al. Oct 2001 B1
6310941 Crutcher et al. Oct 2001 B1
6314402 Monaco et al. Nov 2001 B1
6323894 Katz Nov 2001 B1
6327572 Morton et al. Dec 2001 B1
6385583 Ladd et al. May 2002 B1
6393117 Trell May 2002 B1
6400806 Uppaluru Jun 2002 B1
6404864 Evslin et al. Jun 2002 B1
6404877 Bolduc et al. Jun 2002 B1
6404884 Marwell et al. Jun 2002 B1
6408278 Carney et al. Jun 2002 B1
6430276 Bouvier et al. Aug 2002 B1
6434527 Horvitz Aug 2002 B1
6463136 Malik Oct 2002 B1
6466966 Kirsch et al. Oct 2002 B1
6470079 Benson Oct 2002 B1
6470181 Maxwell Oct 2002 B1
6470317 Ladd et al. Oct 2002 B1
6484148 Boyd Nov 2002 B1
6493437 Olshansky Dec 2002 B1
6493671 Ladd et al. Dec 2002 B1
6493673 Ladd et al. Dec 2002 B1
6510417 Woods et al. Jan 2003 B1
6510434 Anderson et al. Jan 2003 B1
6523010 Lauffer Feb 2003 B2
6529878 De Rafael et al. Mar 2003 B2
6539359 Ladd et al. Mar 2003 B1
6546372 Lauffer Apr 2003 B2
6549889 Lauffer Apr 2003 B2
6560576 Cohen et al. May 2003 B1
6606376 Trell Aug 2003 B1
6625595 Anderson et al. Sep 2003 B1
6636590 Jacob et al. Oct 2003 B1
6658389 Alpdemir Dec 2003 B1
6757364 Newkirk Jun 2004 B2
6769020 Miyazaki et al. Jul 2004 B2
6807532 Kolls Oct 2004 B1
5825876 Easwar et al. Nov 2004 A1
6813346 Gruchala et al. Nov 2004 B2
6836225 Lee et al. Dec 2004 B2
6847992 Haitsuka et al. Jan 2005 B1
6850965 Allen Feb 2005 B2
6859833 Kirsch et al. Feb 2005 B2
6937699 Schuster et al. Aug 2005 B1
6968174 Trandal et al. Nov 2005 B1
7028012 St. Vrain Apr 2006 B2
7035381 D'Ascenzo et al. Apr 2006 B2
7076037 Gonen et al. Jul 2006 B1
7092901 Davis et al. Aug 2006 B2
7103010 Melideo Sep 2006 B2
7200413 Montemer Apr 2007 B2
7212615 Wolmuth May 2007 B2
7224781 Jacob et al. May 2007 B2
7231405 Xia Jun 2007 B2
20010010043 Lauffer Jul 2001 A1
20010012913 Iliff Aug 2001 A1
20010016826 Lauffer Aug 2001 A1
20010018662 Lauffer Aug 2001 A1
20010027481 Whyel Oct 2001 A1
20010029322 Iliff Oct 2001 A1
20010032247 Kanaya Oct 2001 A1
20010036822 Mead et al. Nov 2001 A1
20010037283 Mullaney Nov 2001 A1
20010048737 Goldberg et al. Dec 2001 A1
20020003867 Rothschild et al. Jan 2002 A1
20020010608 Faber et al. Jan 2002 A1
20020010616 Itzhaki Jan 2002 A1
20020026457 Jensen Feb 2002 A1
20020029241 Yokono et al. Mar 2002 A1
20020038233 Shubov et al. Mar 2002 A1
20020044640 Meek et al. Apr 2002 A1
20020057776 Dyer May 2002 A1
20020065959 Kim et al. May 2002 A1
20020087565 Hoekman et al. Jul 2002 A1
20020090203 Mankovitz Jul 2002 A1
20020095331 Osman et al. Jul 2002 A1
20020107697 Jensen Aug 2002 A1
20020107805 Kamimura et al. Aug 2002 A1
20020120554 Vega Aug 2002 A1
20020122547 Hinchey et al. Sep 2002 A1
20020133388 Lauffer Sep 2002 A1
20020133402 Faber et al. Sep 2002 A1
20020133571 Jacob et al. Sep 2002 A1
20020136377 Stewart et al. Sep 2002 A1
20020164977 Link, II et al. Nov 2002 A1
20020173319 Fostick Nov 2002 A1
20020191762 Benson Dec 2002 A1
20020193094 Lawless et al. Dec 2002 A1
20030026397 McCroskey Feb 2003 A1
20030036686 Iliff Feb 2003 A1
20030043981 Lurie et al. Mar 2003 A1
20030046161 Kamanger et al. Mar 2003 A1
20030046361 Kirsch et al. Mar 2003 A1
20030050837 Kim Mar 2003 A1
20030083042 Abuhamdeh May 2003 A1
20030105824 Brechner et al. Jun 2003 A1
20030135095 Iliff Jul 2003 A1
20030138091 Meek et al. Jul 2003 A1
20030153819 Iliff Aug 2003 A1
20030163299 Iliff Aug 2003 A1
20030195787 Brunk et al. Oct 2003 A1
20030212600 Hood et al. Nov 2003 A1
20030220866 Pisaris-Henderson Nov 2003 A1
20030223563 Wolmuth Dec 2003 A1
20030223565 Montemer Dec 2003 A1
20030225682 Montemer Dec 2003 A1
20030231754 Stein et al. Dec 2003 A1
20040003041 Moore et al. Jan 2004 A1
20040006511 Montemer Jan 2004 A1
20040008834 Bookstaff Jan 2004 A1
20040010518 Montemer Jan 2004 A1
20040023644 Montemer Feb 2004 A1
20040076403 Mankovitz Apr 2004 A1
20040083133 Nicholas et al. Apr 2004 A1
20040091093 Bookstaff May 2004 A1
20040096110 Yogeshwar et al. May 2004 A1
20040162757 Pisaris-Henderson Aug 2004 A1
20040174965 Brahm et al. Sep 2004 A1
20040174974 Meek et al. Sep 2004 A1
20040193488 Khoo et al. Sep 2004 A1
20040204997 Blaser et al. Oct 2004 A1
20040208185 Goodman et al. Oct 2004 A1
20040234049 Melideo Nov 2004 A1
20040234064 Melideo Nov 2004 A1
20040235524 Abuhamdeh Nov 2004 A1
20040236441 Melideo Nov 2004 A1
20040247092 Timmins et al. Dec 2004 A1
20040249649 Stratton et al. Dec 2004 A1
20040249778 Iliff Dec 2004 A1
20040254859 Aslanian Dec 2004 A1
20040260413 Melideo Dec 2004 A1
20050010795 Tagawa et al. Jan 2005 A1
20050018829 Baker Jan 2005 A1
20050021744 Haitsuka et al. Jan 2005 A1
20050038686 Lauffer Feb 2005 A1
20050041647 Stinnie Feb 2005 A1
20050044238 Jacob et al. Feb 2005 A1
20050048961 Ribaudo et al. Mar 2005 A1
20050063811 Chu et al. Mar 2005 A1
20050074100 Lederman Apr 2005 A1
20050074102 Altberg et al. Apr 2005 A1
20050076100 Armstrong Apr 2005 A1
20050080878 Cunningham et al. Apr 2005 A1
20050086104 McFadden Apr 2005 A1
20050100153 Pines et al. May 2005 A1
20050105881 Mankovitz May 2005 A1
20050114210 Faber et al. May 2005 A1
20050125416 Kirsch et al. Jun 2005 A1
20050135387 Rychener et al. Jun 2005 A1
20050154616 Iliff Jul 2005 A1
20050165285 Iliff Jul 2005 A1
20050165666 Wong et al. Jul 2005 A1
20050203799 Faber et al. Sep 2005 A1
20050207432 Velez-Rivera et al. Sep 2005 A1
20050209874 Rossini Sep 2005 A1
20050216341 Agarwal et al. Sep 2005 A1
20050216345 Altberg et al. Sep 2005 A1
20050220289 Reding Oct 2005 A1
20050222908 Altberg et al. Oct 2005 A1
20050240432 Jensen Oct 2005 A1
20050245241 Durand et al. Nov 2005 A1
20050251445 Wong et al. Nov 2005 A1
20050261964 Fang Nov 2005 A1
20050286688 Scherer Dec 2005 A1
20050289015 Hunter et al. Dec 2005 A1
20060003735 Trandal et al. Jan 2006 A1
20060004627 Baluja Jan 2006 A1
20060095343 Clarke et al. May 2006 A1
20060099936 Link et al. May 2006 A1
20060106711 Melideo May 2006 A1
20060159063 Kumer Jul 2006 A1
20060166655 Montemer Jul 2006 A1
20060171520 Kliger Aug 2006 A1
20060173827 Kliger Aug 2006 A1
20060173915 Kliger Aug 2006 A1
20060182250 Melideo Aug 2006 A1
20060184417 Van der Linden et al. Aug 2006 A1
20060200380 Ho et al. Sep 2006 A1
20060259365 Agarwal et al. Nov 2006 A1
20060277108 Altberg et al. Dec 2006 A1
20060277181 Temple et al. Dec 2006 A1
20070011240 Altberg et al. Jan 2007 A1
20070022011 Altberg et al. Jan 2007 A1
20070038507 Kumer Feb 2007 A1
20070067219 Altberg et al. Mar 2007 A1
20070078717 Ho et al. Apr 2007 A1
20070081662 Altberg et al. Apr 2007 A1
20070083408 Altberg et al. Apr 2007 A1
20070100956 Kumer May 2007 A1
20070116217 Altberg et al. May 2007 A1
20070121844 Altberg et al. May 2007 A1
20070121845 Altberg et al. May 2007 A1
20070121846 Altberg et al. May 2007 A1
20070121847 Faber et al. May 2007 A1
20070121848 Faber et al. May 2007 A1
20070124206 Faber et al. May 2007 A1
20070124207 Faber et al. May 2007 A1
20070127650 Altberg et al. Jun 2007 A1
20070129054 Andronikov et al. Jun 2007 A1
20070130014 Altberg et al. Jun 2007 A1
20070140451 Altberg et al. Jun 2007 A1
20070143182 Faber et al. Jun 2007 A1
20070255622 Swix et al. Nov 2007 A1
Foreign Referenced Citations (23)
Number Date Country
699785 May 1995 AU
2329046 Mar 1999 GB
409233441 Sep 1997 JP
409319812 Dec 1997 JP
9705733 Feb 1997 WO
9802835 Jan 1998 WO
9804061 Jan 1998 WO
9813765 Apr 1998 WO
9838558 Sep 1998 WO
9847295 Oct 1998 WO
9955066 Oct 1999 WO
0244870 Jun 2002 WO
2005040962 May 2005 WO
2005088980 Sep 2005 WO
2005101269 Oct 2005 WO
2005109287 Nov 2005 WO
2005109288 Nov 2005 WO
2005111887 Nov 2005 WO
2005111893 Nov 2005 WO
2006091966 Aug 2006 WO
2006091970 Aug 2006 WO
2007028173 Mar 2007 WO
2007038618 Apr 2007 WO
Related Publications (1)
Number Date Country
20070280443 A1 Dec 2007 US
Continuations (2)
Number Date Country
Parent 10956771 Oct 2004 US
Child 11691372 US
Parent 10015968 Dec 2001 US
Child 10956771 US
Continuation in Parts (1)
Number Date Country
Parent 09702217 Oct 2000 US
Child 10015968 US