Personal voice-based information retrieval system

Information

  • Patent Grant
  • Patent Number
    9,769,314
  • Date Filed
    Monday, June 27, 2016
  • Date Issued
    Tuesday, September 19, 2017
Abstract
The present invention relates to a system for retrieving information from a network such as the Internet. A user creates a user-defined record in a database that identifies an information source, such as a web site, containing information of interest to the user. This record identifies the location of the information source and also contains a recognition grammar based upon a speech command assigned by the user. Upon receiving the speech command from the user that is described within the recognition grammar, a network interface system accesses the information source and retrieves the information requested by the user.
Description
FIELD OF THE INVENTION

The present invention relates generally to the field of providing information access. In particular, the invention relates to a personalized system for accessing information from the Internet or other information sources using speech commands.


BACKGROUND OF THE INVENTION

Popular methods of information access and retrieval using the Internet or other computer networks can be time-consuming and complicated. A user must frequently wade through vast amounts of information provided by an information source or web site in order to obtain a small amount of relevant information. This can be time-consuming, frustrating, and, depending on the access method, costly. A user is required to continuously identify reliable sources of information and, if these information sources are used frequently, repeatedly access these sources.


Current methods of accessing information stored on computer networks, such as Wide Area Networks (WANs), Local Area Networks (LANs) or the Internet, require a user to have access to a computer. While computers are becoming increasingly smaller and easier to transport, using a computer to access information is still more difficult than simply using a telephone. Since speech recognition systems allow a user to convert his voice into a computer-usable message, telephone access to digital information is becoming more and more feasible. Voice recognition technology is growing in its ability to allow users to use a wide vocabulary.


Therefore, a need exists for an information access and retrieval system and method that allows users to access frequently needed information from information sources on networks by using a telephone and simple speech commands.


SUMMARY OF THE INVENTION

One object of the preferred embodiment of the present invention is to allow users to customize a voice browsing system.


A further object of the preferred embodiment is to allow users to customize the information retrieved from the Internet or other computer networks and accessed by speech commands over telephones.


Another object of the preferred embodiment is to provide a secure and reliable retrieval of information over the Internet or other computer networks using predefined verbal commands assigned by a user.


The present invention provides a solution to these and other problems by providing a new system for retrieving information from a network such as the Internet. A user creates a user-defined record in a database that identifies an information source, such as a web site, containing information of interest to the user. This record identifies the location of the information source and also contains a recognition grammar assigned by the user. Upon receiving a speech command from the user that is described in the assigned recognition grammar, a network interface system accesses the information source and retrieves the information requested by the user.


In accordance with the preferred embodiment of the present invention, a customized, voice-activated information access system is provided. A user creates a descriptor file defining specific information found on a web site the user would like to access in the future. The user then assigns a pronounceable name or identifier to the selected content and this pronounceable name is saved in a user-defined database record as a recognition grammar along with the URL of the selected web site.


In the preferred embodiment, when a user wishes to retrieve the previously defined web-based information, a telephone call is placed to a media server. The user provides speech commands to the media server that are described in the recognition grammar assigned to the desired search. Based upon the recognition grammar, the media server retrieves the user-defined record from a database and passes the information to a web browsing server which retrieves the information from the associated web site. The retrieved information is then transmitted to the user using a speech synthesis software engine.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 displays a personal information selection system used with the preferred embodiment of the present invention;



FIG. 2 displays a web page displayed by the clipping client of the preferred embodiment;



FIG. 3 is a block diagram of a voice browsing system used with the preferred embodiment of the present invention;



FIG. 4 is a block diagram of a user-defined database record created by the preferred embodiment of the present invention;



FIG. 5 is a block diagram of a media server used by the preferred embodiment; and



FIG. 6 is a block diagram of a web browsing server used by the preferred embodiment.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

The present invention uses various forms of signal and data transmission to allow a user to retrieve customized information from a network using speech communication. In the preferred embodiment of the present invention, a user associates information of interest found on a specific information source, such as a web site, with a pronounceable name or identification word. This pronounceable name/identification word forms a recognition grammar in the preferred embodiment. When the user wishes to retrieve the selected information, he may use a telephone or other voice enabled device to access a voice browser system. The user then speaks a command described in the recognition grammar associated with the desired information. The voice browsing system then accesses the associated information source and returns to the user, using a voice synthesizer, the requested information.


Referring to FIG. 1, a user 100 uses a computer 102 to access a network, such as a WAN, LAN, or the Internet, containing various information sources. In the preferred embodiment, the user 100 accesses the Internet 104 and begins searching for web sites 106, which are information sources that contain information of interest to the user. When the user 100 identifies a web site 106 containing information the user would like to access using only a voice enabled device, such as a telephone, and the voice browsing system 108, the user initiates a “clipping client” engine 110 on his computer 102.


The clipping client 110 allows a user 100 to create a set of instructions for use by the voice browsing system 108 in order to report personalized information back to the user upon request. The instruction set is created by “clipping” information from the identified web site. A user 100 may be interested in weather for a specific city, such as Chicago. The user 100 identifies a web site from which he would like to obtain the latest Chicago weather information. The clipping client 110 is then activated by the user 100.


The clipping client 110 displays the selected web site in the same manner as a conventional web browser such as Microsoft's® Internet Explorer. FIG. 2 depicts a sample of a web page 200 displayed by the clipping client 110. The user 100 begins creation of the instruction set for retrieving information from the identified web site by selecting the uniform resource locator (URL) address 202 for the web site (i.e., the web site address). In the preferred embodiment, this selection is done by highlighting and copying the URL address 202. Next, the user selects the information from the displayed web page that he would like to have retrieved when a request is made. Referring to FIG. 2, the user would select the information regarding the weather conditions in Chicago 204. The web page 200 may also contain additional information such as advertisements 206 or links to other web sites 208 which are not of interest to the user. The clipping client 110 allows the user to select only that portion of the web page containing information of interest to the user. Therefore, unless the advertisements 206 and links 208 displayed on the web page are of interest to the user, he would not select this information. Based on the web page information 204 selected by the user, the clipping client 110 creates a content descriptor file containing a description of the content of the selected web page. This content descriptor file indicates where the information selected by the user is located on the web page. In the preferred embodiment, the content descriptor file is stored within the web browsing server 302 shown in FIG. 3. The web browsing server 302 will be discussed below.


Table 1 below is an example of a content descriptor file created by the clipping client of the preferred embodiment. This content descriptor file relates to obtaining weather information from the web site www.cnn.com.

TABLE 1

table name: portalServices
column:
  service
content:
  weather
column:
  config
content:
  [cnn]
  Input=_zip
  URL=http://cgi.cnn.com/cgi-bin/weather/redirect?zip=_zip
  Pre-filter="\n" " "
  Pre-filter="<[^<>]+>" " "
  Pre-filter=/\s+/ /
  Pre-filter="[\(\)\|]" "!"
  Output=_location
  Output=first_day_name
  Output=first_day_weather
  Output=first_day_high_F
  Output=first_day_high_C
  Output=first_day_low_F
  Output=first_day_low_C
  Output=second_day_name
  Output=second_day_weather
  Output=second_day_high_F
  Output=second_day_high_C
  Output=second_day_low_F
  Output=second_day_low_C
  Output=third_day_name
  Output=third_day_weather
  Output=third_day_high_F
  Output=third_day_high_C
  Output=third_day_low_F
  Output=third_day_low_C
  Output=fourth_day_name
  Output=fourth_day_weather
  Output=fourth_day_high_F
  Output=fourth_day_high_C
  Output=fourth_day_low_F
  Output=fourth_day_low_C
  Output=undef
  Output=_current_time
  Output=_current_month
  Output=_current_day
  Output=_current_weather
  Output=_current_temperature_F
  Output=_current_temperature_C
  Output=_humidity
  Output=_wind
  Output=_pressure
  Output=_sunrise
  Output=_sunset
  Regular_expression=WEB SERVICES: (.+) Forecast FOUR-DAY FORECAST (\S+) (\S+) HIGH (\S+) F (\S+) C LOW (\S+) F (\S+) C (\S+) (\S+) HIGH (\S+) F (\S+) C LOW (\S+) F (\S+) C (\S+) (\S+) HIGH (\S+) F (\S+) C LOW (\S+) F (\S+) C (\S+) (\S+) HIGH (\S+) F (\S+) C LOW (\S+) F (\S+) C WEATHER MAPS RADAR (.+) Forecast CURRENT CONDITIONS (.+) !local!, (\S+) (\S+) (.+) Temp: (\S+) F, (\S+) C Rel. Humidity: (\S+) Wind: (.+) Pressure: (.+) Sunrise: (.+) Sunset: (.+)

Finally, the clipping client 110 prompts the user to enter an identification word or phrase that will be associated with the identified web site and information. For example, the user could associate the phrase “Chicago weather” with the selected URL 202 and related weather information 204. The identification word or phrase is stored as a personal recognition grammar that can now be recognized by a speech recognition engine of the voice browsing system 108, which will be discussed below. The personal recognition grammar, URL address 202, and a command for executing a content extraction agent are stored within a database used by the voice browsing system 108.


The voice browsing system 108 used with the preferred embodiment will now be described in relation to FIG. 3. A database 300 designed by Webley Systems Incorporated is connected to one or more web browsing servers 302 as well as to one or more media servers 304. The database may store information on magnetic media, such as a hard disk drive, or it may store information via other widely accepted methods for storing data, such as optical disks. The media servers 304 function as user interface systems that provide access to the voice browsing system 108 from a user's voice enabled device 306 (i.e., any type of wireline or wireless telephone, Internet Protocol (IP) phones, or other special wireless units). The database 300 contains a section that stores the personal recognition grammars and related web site information generated by the clipping client 110. A separate record exists for each web site defined by the user. An example of a user-defined web site record is shown in FIG. 4. Each user-defined web site record 400 contains the recognition grammar 402 assigned by the user, the associated Uniform Resource Locator (URL) 404, and a command that enables the “content extraction agent” 406 and retrieves the appropriate content descriptor file required to generate proper requests to the web site and to properly format received data. The web-site record 400 also contains the timestamp 408 indicating the last time the web site was accessed. The content extraction agent is described in more detail below.
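For illustration only, the fields of one such record might be pictured as a simple data structure; the names and values below merely mirror the elements of FIG. 4 and are not an actual schema used by the system.

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch of one user-defined web site record (cf. FIG. 4).
# Field names and values are illustrative only, not the system's schema.
my %record = (
    grammar   => 'chicago weather',           # recognition grammar 402 assigned by the user
    url       => 'http://cgi.cnn.com/cgi-bin/weather/redirect?zip=_zip',   # URL 404
    command   => 'webget.pl weather_cnn',     # content extraction agent command 406
    timestamp => '2000-02-04 09:15:00',       # last access time 408
);

print "Say \"$record{grammar}\" to run: $record{command}\n";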


The database 300 may also contain a listing of pre-recorded audio files used to create concatenated phrases and sentences. Further, database 300 may contain customer profile information, system activity reports, and any other data or software servers necessary for the testing or administration of the voice browsing system 108.


The operation of the media servers 304 will now be discussed in relation to FIG. 5. The media servers 304 function as user interface systems since they allow a user to access the voice browsing system 108 via a voice enabled device 306. In the preferred embodiment, the media servers 304 contain a speech recognition engine 500, a speech synthesis engine 502, an Interactive Voice Response (IVR) application 504, a call processing system 506, and telephony and voice hardware 508 that is required to enable the voice browsing system 108 to communicate with the Public Switched Telephone Network (PSTN) 308. In the preferred embodiment, each media server is based upon Intel's Dual Pentium III 730 MHz microprocessor system.


The speech recognition function is performed by a speech recognition engine 500 that converts voice commands received from the user's voice enabled device 306 (i.e., any type of wireline or wireless telephone, Internet Protocol (IP) phones, or other special wireless units) into data messages. In the preferred embodiment, voice commands and audio messages are transmitted using the PSTN 308 and data is transmitted using the TCP/IP communications protocol. However, one skilled in the art would recognize that other transmission protocols may be used. Other possible transmission protocols would include SIP/VoIP (Session Initiation Protocol/Voice over IP), Asynchronous Transfer Mode (ATM) and Frame Relay. A preferred speech recognition engine is developed by Nuance Communications of 1380 Willow Road, Menlo Park, Calif. 94025 (www.nuance.com). The Nuance engine capacity is measured in recognition units based on CPU type as defined in the vendor specification. The natural speech recognition grammars (i.e., what a user can say that will be recognized by the speech recognition engine) were developed by Webley Systems.


In the preferred embodiment, when a user accesses the voice browsing system 108, he will be asked whether he would like to use his “user-defined searches.” If the user answers affirmatively, the media servers 304 will retrieve from the database 300 the personal recognition grammars 402 defined by the user while using the clipping client 110.


The media servers 304 also contain a speech synthesis engine 502 that converts the data retrieved by the web browsing servers 302 into audio messages that are transmitted to the user's voice enabled device 306. A preferred speech synthesis engine is developed by Lernout and Hauspie Speech Products, 52 Third Avenue, Burlington, Mass. 01803 (www.lhsl.com).


A further description of the web browsing server 302 will be provided in relation to FIG. 6. The web browsing servers 302 provide access to data stored on any computer network including the Internet 104, WANs or LANs. The web browsing servers 302 receive responses from web sites 106 and extract the data requested by the user. This task is known as “content extraction.” The web browsing server 302 is comprised of a content extraction agent 600, a content fetcher 602, and the content descriptor file 604. Each of these is a software application and will be discussed below.
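As a rough sketch of the fetching step, patterned loosely after the get_url_content routine in Table 3 below but greatly simplified and not the system's actual code, a fetcher might retrieve a page as follows (the URL is a placeholder):

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Cookies;

# Minimal sketch of a content fetcher: request a page and return its body.
# Simplified illustration only; not the code listed in Table 3.
sub get_url_content {
    my ($url) = @_;
    my $ua = LWP::UserAgent->new;
    $ua->agent('Mozilla/4.0 (compatible; voice-browser-sketch)');
    $ua->cookie_jar(HTTP::Cookies->new);     # some sites require cookies
    my $start = time;
    my $res   = $ua->get($url);
    die "fetch failed: " . $res->status_line unless $res->is_success;
    return (time - $start, $res->decoded_content);   # elapsed seconds, page body
}

my ($elapsed, $page) = get_url_content('http://www.example.com/');
print "fetched " . length($page) . " bytes in ${elapsed}s\n";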


Upon receiving a user-defined web site record 400 from the database 300 in response to a user request, the web browsing server 302 invokes the “content extraction agent” command 406 contained in the record 400. The content extraction agent 600 retrieves the content descriptor file 604 associated with the user-defined record 400. As mentioned, the content descriptor file 604 directs the extraction agent where to extract data from the accessed web page and how to format a response to the user utilizing that data. For example, the content descriptor file 604 for a web page providing weather information would indicate where to insert the “city” name or ZIP code in order to retrieve Chicago weather information. Additionally, the content descriptor file 604 for each supported URL indicates the location on the web page where the response information is provided. The extraction agent 600 uses this information to properly extract from the web page the information requested by the user.
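A minimal sketch of how a descriptor file in the style of Table 1 could drive this step is shown below; the field names follow Table 1, while the sample page text and captured values are invented for illustration.

#!/usr/bin/perl
use strict;
use warnings;

# Illustrative descriptor entries in the spirit of Table 1 (not actual config).
my %descriptor = (
    Input              => '_zip',
    URL                => 'http://cgi.cnn.com/cgi-bin/weather/redirect?zip=_zip',
    Regular_expression => qr/CURRENT CONDITIONS\s+(\S+)\s+F,\s+(\S+)\s+C/,
    Output             => [qw(_current_temperature_F _current_temperature_C)],
);

# Substitute the user's parameter into the URL placeholder.
my $zip = '60053';
(my $url = $descriptor{URL}) =~ s/\Q$descriptor{Input}\E/$zip/g;

# Pretend this text came back from the fetcher for that URL.
my $page = 'CURRENT CONDITIONS 41 F, 5 C Rel. Humidity: 60%';

# Apply the descriptor's regular expression and name the captured values.
my @values = $page =~ $descriptor{Regular_expression}
    or die "page layout did not match descriptor\n";
my %content;
@content{ @{ $descriptor{Output} } } = @values;

print "Would fetch: $url\n";
print "$_ = $content{$_}\n" for @{ $descriptor{Output} };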


The content extraction agent 600 can also parse the content of a web page in which the user-desired information has changed location or format. This is accomplished based on the characteristic that most hypertext documents include named objects like tables, buttons, and forms that contain textual content of interest to a user. When changes to a web page occur, a named object may be moved within a document, but it still exists. Therefore, the content extraction agent 600 simply searches for the relevant name of the desired object. In this way, the information requested by the user may still be found and reported regardless of changes that have occurred.
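The idea can be sketched as follows, using hypothetical markup and a hypothetical object name “forecast”; the point is that the object is located by name rather than by its position on the page.

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical page fragments: the same named table appears in either layout.
my $page_v1 = '<table name="forecast"><tr><td>HIGH 41 F LOW 30 F</td></tr></table><div>ads...</div>';
my $page_v2 = '<div>ads...</div><table name="forecast"><tr><td>HIGH 45 F LOW 33 F</td></tr></table>';

# Locate the object by its name attribute, wherever it has moved on the page.
sub extract_named_object {
    my ($page, $name) = @_;
    return ($page =~ /<table[^>]*name="\Q$name\E"[^>]*>(.*?)<\/table>/s) ? $1 : undef;
}

for my $page ($page_v1, $page_v2) {
    my $body = extract_named_object($page, 'forecast');
    print defined $body ? "found: $body\n" : "object not found\n";
}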


Table 2 below contains source code for a content extraction agent 600 used by the preferred embodiment.











TABLE 2









# ! /usr/ local/www/bin/sybperl5



#$Header:



/usr/local/cvsroot/webley/agents/service/web_dispatch.pl,v



1.6



# Dispatches all web requests



#http://wcorp.itn.net/cgi/flstat?carrier=ua&flight_no=155&mcn



_abbr=jul&date=



6&stamp=ChLN~PdbuuE*itn/ord,itn/cb/sprint_hd



#http://cig.cnnfn.cm/flightview/rlm?airline=amt&number=300



require “config_tmp.pl”;



# check parameters



die “Usage: $0 service [params]\n” if $#ARGV < 1;



#print STDERR @ARGV;



# get parameters



my ( $service, @param ) = @ARGV;



# check service



My ($services = (



        weather_cnn => ‘webget.pl weather_cnn’,



        weather_lycos => ‘webget.pl



‘weather_lycos’,



        weather_weather => ‘webget.pl



weather_weather’,



        weather_snap => ‘webget.pl



weather_snap’,



        weather_infospace => ‘webget.pl



weather_infospace’,



        stockQuote_yahoo => ‘webget.pl stock’,



        flightStatus_itn => ‘webget.pl



flight_delay’,



        yellowPages_yahoo => ‘yp_data.pl’,



        yellowPages_yahoo => ‘yp_data.pl’,



        newsHeaders_newsreal => ‘news.pl’,



        newsArticle_newsreal => ‘news.pl’,



        ) ;



# test param



my $date= ‘date’;



chop ( $date );



my ( $short_date ) = $date = ~ / \s+({w3}\s+\d{1, 2}) \s+/;



my %Test = (



        weather_cnn => ‘60053’,



        weather_lycos => ‘60053’,



        weather_weather => ‘60053’,



        weather_snap => ‘60053’,



        weather_infospace => ‘60053’,



        stockQuote_yahoo => ‘msft’,



        flightStatus_itn => ‘ua 155 ’ .



$short_date,



        yellowPages_yahoo => ‘tires 60015’,



        newsHeaders_newsreal => ‘ 1 ’,



        newsArticle_newsreal => ‘1 1’ ,



        ) ;



die “$date: $0: error: no such service: $service (check



this script) \n”



unless $Services{ $service };



# prepare absolute path to run other scripts



my ( $path, $script ) = $0 =~ ml{circumflex over ( )} (.*/)([ {circumflex over ( )}/ ] * ) | ;



# store the service to compare against datatable



my $service_stored = $service;



# run service



while( ! ( $response = ‘$path$Services { $service } @param’ )



) (



    # response failed



    # check with test parameters



    $response = ‘$path$Services { $service } $Test{



$service }”,



    If ( $response ) {



    $service = &switch_service( $service ) ;



#   print “wrong parameter values were supplied;



$service -



@param\n”;



#   die “$date: $0: error: wrong parameters: $service



-



@param\n”;



  }



  else {



    # change priority and notify



    $service = &increase_attempt( $service ) ;



  }



}



# output the response



print $response;



sub increase_attempt {



    my ( $service ) = @_;



    my ( $service_name ) split( /_/, $service ) ;



    print STDERR “$date: $0: attn: changing priority for



service:



$service\n”;



    # update priority



    &db_query( “update mcServiceRoute ”



        .“set priority = ( select max ( priority



) from



mcServiceRoute ”



        . “where service = ‘$service name’ ) + 1,



        . “date = getdate( ), ”



        . “attempt = attempt + 1 ”



        . “where route = ‘$script $service’ ” ) ;



#   print “---$route===\n”;



    # find new route



    my $route @{ &db_query( “select route from



mcServiceRoute ”



                .“where service =



‘$service_name’ ”



                .“and attempt < 5







                . “order by



priority ”)



        } -> [ 0 ]{ route };



    &db_query( “update mcServiceRoute ”



      . “set attempt = 0 ”



      . “where route = ‘$script $service’ “ ) ;



      if ( $route eq “$script $service_stored” ) ;



    ( $service_name, $service ) =split( /\s+/, $route ) ;



    die “$date: $0: error: no route for the service:



$service (add



More) \n””



     unless $service;



    return $service;



}



sub switch service {



    my ( $service ) = @_;



    my ( $service_name ) = split( /_/, $service );



    print STDERR “$date: $0: attn: changing priority for



service:



$service\n”;



    # update priority



    &db_query( “update mcServiceRoute ”



        . “set priority = ( select max( priority for



) from



mcServiceRoute ”



        . “where service = ‘$service_name’ ) + 1,



        . “date ~ getdate ( ) ”



        . “where route = ‘$script $service’ ” );



#   print “---$route===\n”;



  -  # find new route



    my $route = @( &db_query( “select route from



mcServiceRoute ”



              . “where service =



‘$service_name’ ”



              . “and attempt < 5







              . “order by



priority ”)



        } -> [ 0 ] { route };



    die “ $ date : $ 0 : error : there is the only service:



$route (add



more) \n”



    if ( $route eq “$script $service”



      or $route eq “$script $service_stored” ) ;



    (service_name, $service ) = split( / \s+/, $route ) ;



    die “$date: $0: error: no route for the service:



$service (add



more)\n”



      unless $service;



    return $service;



}











Table 3 below contains source code of the content fetcher 602 used with the content extraction agent 600 to retrieve information from a web site.











TABLE 3









#!/usr/local/www/bin/sybper15



#-T



# -w



# $Header:



/usr/local/cvsroot/webley/agents/service/webget.pl, v 1.4



# Agent to get info from the web.



# Parameters: service_name [service_parameters], i.e. stock



msft or weather



60645



# Configuration stored in files service_name.ini



# if this file is absent the configuration is received from



mcServices table



# This script provides autoupdate to datatable if the .ini



file is newer.



$debug = 1;



use URI : : URL;



use LWP : : UserAgent;



use HTTP : :Request: : Common;



use Vail : :VarList;



use Sybase : : CT lib;



use HTTP: :Cookies;



#print “Sybase: :CT lib $DB_USR, $DB_PWD, $DB SRV;”;



Open ( STDERR, “>>$0.log” ) if $debug;



#open ( STDERR, “>&STDOUT” );



$log = ‘date’;



#$response = ‘./url.pl



http://cgi.cnn.com/cgi-bin/weather/redirect?zip=60605”;



#$response= ‘pwd’;



#print STDERR “pwd = $response\n”;



#$response = ‘ls’ ;



#print STDERR “ls = $response\n”;



chop ( $log ) ;



$log .= “pwd=” . ‘pwd’ ;



chop ( $log ) ;



#$debug2 = 1;



my $service = shift;



$log .= “ $service: ”. join( ‘ : ’, @ARGV ) . “\n”;



print STDERR $log if $debug;



#$response = ·. /url .pl



“http://cgi.cnn.com/cgi-bin/weather/redirect?zip=60605” ;



my @ini = &read_ini ( $service ) ;



chop ( @ ini ) ;



my $section= “ ”;



do ($section = &process_section( $section ) } while



$section;



#$response = ‘ ./url.pl



http://cgi.cnn.com/cgi-bin.weather/redirect?zip=60605” ’ ;



exit;



#######################################################



sub read_ini {



  my ( $service ) = @_;



  my @ini = ( );



  # first, try to read file



  $0 =~ ml{circumflex over ( )} ( .*/) [{circumflex over ( )}/];



  $service = $1 . $service;



  if ( open( INI, “$service.ini” ) ) {



    @ini = ( < INI > ) ;



    return @ini unless ( $DB_SRV ) ;



    # update datatable



    my $file_time = time − int ( ( -M “$service.ini” )



* 24 *



3600 ) ;



#    print “time $file_time\n”;



    my $dbh = new Sybase: :CTlib $DB_USR, $DB_PWD,



$DB_SRV;



    unless ( $dbh) {



      print STDERR “webget.pl: Cannot connect to



dataserver $DB_SRV:$DB_USR:$DB_PWD\n”;



      return @ini;



    }



    my @row_refs = $dbh->ct_sql ( “select lastUpdate



from



mcServices where service = ‘$service’ ”, undef, 1 );



    if ( $dbh -> { RC } == CS_FAIL ) {



      print STDERR “webget.pl: DB select from



mcServices



failed\n”;



      return @ini;



    }



    unless ( defined @row_refs ) {



    # have to insert



    my ( @ini_escaped ) = map {



      ( my $x = $_) =~ s/ \ ‘ / \ ‘ / g;



      $x;



    }@ini;



    $dbh -> ct_sql ( “insert mcServices values (



‘$service’,



‘@ini_escaped’, $file time; ) ”);



    if ( $dbh -> { RC } = = CS_FAIL )



      print STDERR “webget.pl: DB insert to



mcServic:es failed\n”;



    }



    return @ ini;



#    print “time $file_time:”$row_refs [ 0 ] -> {



‘lastUpdate’



}.”\n”;



    If ( $file_time -> ref_refs [0 ] -> { ‘last update’



} ) {



    # have to update



  my ( @ini_escaped = map {



    ( my $x = $_ ) =~ s/ \ ‘ / \ ‘ \ ‘ / g;



    $x;



  } @ini;



  $dbh -> ct_sql ( “update mcServices set config



=



‘@ini_escaped’, lastUpdate = $file_time where service =



‘$service’ ” );



    if ( $dbh -> { RC } − CS_FAIL ) {



      print STDERR “webget.pl: DB update to



mcServices failed\n”;



      }



    }



    return @ini;



  }



  else {



  print STDERR “$0: WARNING: $service.ini n/a in ”



. - ‘pwd’



    . “Try to read DB\n”;



  }



  # then try to read datatable



  die “webget.pl: Unable to find service $service\n”



unless ( $DB_SRV



) ;



  my $dbh = new Sybase: : CTlib $DB_USR, $DB_PWD,



  $DB_SRV;



  die “webget.pl: Cannot connect to dataserver



$DB SRV: $08 USR: $08 PWD\n” unless ( $dbh ) ;



my @row_refs = $dbh->ct sql ( “”;;elect con.fiJ from



mcServices where



service = ‘$service’ “ , undef, 1 );



  die “webget.pl: DB select from mcServices failed\n” if



$dbh -> { RC }



= = CS FAIL;



  die “webget.pl: Unable to find service $service\n”



unless ( defined



@row_refs ) ;



  $row_refs [ 0 ] -> { ‘config’ } =~ s/\n /\n\r/g;



  @ini = split ( /\r/, $row_refs [ 0 ] ->{ ‘ config’ } ) ;



  return @ini;



#######################################################



sub process_section {



  my ($prev_section ) = @_;



  my ( $section, $output, $content );



  my %PAram;



  my %Content;



#  print“ ################################\n”;



  foreach (@ini ) {



    print;



    chop;



    s/\s+$//;



    s/{circumflex over ( )}\[(.*) \ ] ) {



    # get section name



    if ( /{circumflex over ( )}\ [(.*) \ ] ) {



#    print “$_: $section:$prev_section\n”;



      last if $section;



      next if $1 eq “print”;



#    next if $prev_section ne “ ” and



$prev_section ne $1;



    if ($prev_section eq $1 )



     $prev_section = “ “;



     next;



    }



    $section = $1;



  }



  # get parameters



  Push ( @{ $Param{ $1 } }, $2 ) if $section and



/ ( [ {circumflex over ( )} = ] +) = (.*) /;



-   }



#  print“++++++++++++++++++++++++++++++++++\n”;



  return 0 unless $section;



#  print “section $section\n”;



  # substitute parameters with values



  map { $Param{ URL }->[ 0 ] =~ s/$Param{ Input }->[ $



] /$ARGV [ $



] /g



  }0 . . S# { $Param{ Input } };



  # get page content



  ( $Content{={ ‘TIME’ }, $content ) = get_url_content (



$ { $ Param { URL



} } [ 0 ] ) ;



  # filter it



  map {



    if (/\“([“\”]+)\“([“\”]*)\”/or



/\/([“\/]+)\/([“\/]*)\//)



(



      my $out = $2; $content =~ s/$1/$out/g;



    }



  } @ ($Param{ “Pre-filter”}};



#print STDERR $content;



  # do main regular expression



  unless ( @values = $content =~ / $! Param {



Regular expression } } [ 0



} / ) (



  &die_hard ( $ { $Param(Reqular_expression } } [ 0



], $content



) ;



    return $section;



  }



  %Content = map { ( $Param{ Output }->[ $_] , $values [



$_ ] )



  } 0 . . $ # ( $Param { Output } ) ;



  # filter it



  map {



  if ( / ( [{circumflex over ( )}\”]+)\“] +) \” ( [“\”]+) \“ ( [“\”]*) \”/



    or / ( [{circumflex over ( )}\/]+) \/ ( [{circumflex over ( )}\/] +) \/ ([{circumflex over ( )}\/]*) \/ / ) (



    my $out = $3;



    $Content{ $1 } =~ s/$2/$out/g;



  }



}  @{ $Param { “Post-filter” } };



#calculate it



map



# calculate it



map {



    if ( /([“‘=]+)=(.*)/



    my $eval = $2;



    map { $eval =~ s/$_/$Content( $_ }/g



    } keys %Content;



  $Content{ $1 } = eval( $eval ) ;



  }



} @{ ( $Param{ Calculate } } ;



# read section [print]



foreach $i ( 0 .. $#ini ) {



  next unless $ini [ $i] /{circumflex over ( )}\ [print\]/;



  foreach ( $i + 1 . . $#ini ) {



    last if $ini [ $_ ] =~ /{circumflex over ( )}\ [.+\]/;



    $output .= $ini [$_1] . “\n”;



  }



  last;



}



# prepare output



map { $output =~ s/$_/$Content{ $_ }/g



} keys %Content;



print $output;



return 0;



}



########################################################



sub get_url_content [



  my ( $url ) = @_;



  print STDERR $url if $debug;



  $response = ‘ ./url.pl ‘$url’ ;



  $response = ‘ ./url.pl ‘$url’ ;



  Return( $time − time, $response );



  my $ua = LWP: :UserAgent -> new;



  $ua -> agent ( ‘Mozilla/4.0 [en] (Xll; I; FreeBSD 2.2.8-



STABLE i386)’



) ;



#  $ua -> proxy( [‘http’, ‘https’],



‘http://proxy.webley:3128/’ );



#  $ua -> no_proxy (‘webley’, ‘vail’ ) ;



  my $cookie = HTTP: :Cookies -> new;



  $ua -> cookie_jar ( $cookie ) ;



  $url = url $url;



  print “$url\n” if $debug2;



  my $time = time;



  my $res= $ua -> request ( GET $url );



  print “Response: ” . ( time − $time ) . “sec\n” if



$debug2;



  Return ( $time − time, $res -> content ) ;



}



########################################################



sub die hard {



my ( $re, $content ) = @_;



-   my ( $re_end, $pattern );



while( $content ! ~ /$re/ ) {



    if ($re =~ s/ (\({{circumflex over ( )}\(\) ]+\) [{circumflex over ( )}\(\)]*$) / / ) {



      $re_end = $1 . $re_end;



    }



    else }



      $re_end = $re;



      last;



    }



  }



  $content=~ /$re/;



$re/n



Possible misuse:



$re_end: \n



Matched:



$&\n



Mismatched:



$’\n



“ if $debug;



  if ( $debug ) {



   print STDERR “Content:\n $content\n” unless



$’;



  }



}



########################################################










Once the web browsing server 302 accesses the web site specified in the URL 404 and retrieves the requested information, it is forwarded to the media server 304. The media server uses the speech synthesis engine 502 to create an audio message that is then transmitted to the user's voice enabled device 306. In the preferred embodiment, each web browsing server is based upon Intel's Dual Pentium III 730 MHz microprocessor system.


Referring to FIG. 3, the operation of the personal voice-based information retrieval system will be described. A user establishes a connection between his voice enabled device 306 and a media server 304 of the voice browsing system 108. This may be done using the Public Switched Telephone Network (PSTN) 308 by calling a telephone number associated with the voice browsing system 108. Once the connection is established, the media server 304 initiates an interactive voice response (IVR) application. The IVR application plays an audio message to the user presenting a list of options, which includes “perform a user-defined search.” The user selects the option to perform a user-defined search by speaking the name of the option into the voice enabled device 306.


The media server 304 then accesses the database 300 and retrieves the personal recognition grammars 402. Using the speech synthesis engine 502, the media server 304 then asks the user, “Which of the following user-defined searches would you like to perform?” and reads to the user the identification name, provided by the recognition grammar 402, of each user-defined search. The user selects the desired search by speaking the appropriate speech command or pronounceable name described within the recognition grammar 402. These speech recognition grammars 402 define the speech commands or pronounceable names spoken by a user in order to perform a user-defined search. If the user has a multitude of user-defined searches, he may speak the command or pronounceable name described in the recognition grammar 402 associated with the desired search at any time without waiting for the media server 304 to list all available user-defined searches. This feature is commonly referred to as a “barge-in” feature. The media server 304 uses the speech recognition engine 500 to interpret the speech commands received from the user. Based upon these commands, the media server 304 retrieves the appropriate user-defined web site record 400 from the database 300. This record is then transmitted to a web browsing server 302. A firewall 310 may be provided that separates the web browsing server 302 from the database 300 and media server 304. The firewall provides protection to the media server and database by preventing unauthorized access in the event the firewall 312 for the web browsing server fails or is compromised. Any type of firewall protection technique commonly known to one skilled in the art could be used, including packet filter, proxy server, application gateway, or circuit-level gateway techniques.
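The dispatch step can be pictured with a simplified sketch (hypothetical data and phrases, not the media server's actual logic): the phrase returned by the speech recognition engine 500 is matched against the stored recognition grammars, and the matching record is selected for hand-off to the web browsing server 302.

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical user-defined records keyed by their recognition grammar phrases.
my %records = (
    'chicago weather' => { url     => 'http://cgi.cnn.com/cgi-bin/weather/redirect?zip=60053',
                           command => 'webget.pl weather_cnn' },
    'my stock quote'  => { url     => 'http://quote.yahoo.com/q?s=msft',
                           command => 'webget.pl stock' },
);

# Phrase returned by the speech recognition engine for this utterance.
my $recognized = 'chicago weather';

my $record = $records{$recognized}
    or die "no user-defined search matches \"$recognized\"\n";

# The media server would now pass this record to a web browsing server.
print "dispatching: $record->{command} ($record->{url})\n";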


The web browsing server 302 accesses the web site 106 specified by the URL 404 in the user-defined web site record 400 and retrieves the user-defined information from that site using the content extraction agent and the content descriptor file specified in the content extraction agent command 406. Since the web browsing server 302 uses the URL and retrieves new information from the Internet each time a request is made, the requested information is always up to date.


The content information received from the responding web site 106 is then processed by the web browsing server 302 according to the associated content descriptor file. This processed response is then transmitted to the media server 304 for conversion into audio messages using either the speech synthesis engine 502 or by selecting from a database of prerecorded voice responses contained within the database 300.
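As an illustration of this formatting step (with made-up field values and prompt file names), the extracted content could either be rendered as a sentence for the speech synthesis engine 502 or mapped to a sequence of prerecorded audio files:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical extracted content for the Chicago weather example.
my %content = (
    _location              => 'Chicago',
    _current_weather       => 'partly cloudy',
    _current_temperature_F => 41,
);

# Text handed to the speech synthesis engine...
my $sentence = "In $content{_location} it is currently $content{_current_weather} "
             . "and $content{_current_temperature_F} degrees Fahrenheit.";
print "TTS: $sentence\n";

# ...or a sequence of prerecorded prompt files concatenated by the media server.
my @prompts = ('in.wav', lc($content{_location}) . '.wav', 'currently.wav',
               'partly_cloudy.wav', "$content{_current_temperature_F}.wav", 'degrees_f.wav');
print "prompts: @prompts\n";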


It should be noted that the web sites accessible by the personal information retrieval system and voice browser of the preferred embodiment may use any type of mark-up language, including Extensible Markup Language (XML), Wireless Markup Language (WML), Handheld Device Markup Language (HDML), Hyper Text Markup Language (HTML), or any variation of these languages.


The descriptions of the preferred embodiments described above are set forth for illustrative purposes and are not intended to limit the present invention in any manner. Equivalent approaches are intended to be included within the scope of the present invention. While the present invention has been described with reference to the particular embodiments illustrated, those skilled in the art will recognize that many changes and variations may be made thereto without departing from the spirit and scope of the present invention. These embodiments and obvious variations thereof are contemplated as falling within the scope and spirit of the claimed invention.

Claims
  • 1. A method for retrieving information from an information source, the information source being periodically updated with current information, over a network, by speech commands received from a particular user of a plurality of users provided by the particular user via an electronic-communication device, and wherein each of the plurality of users has a respective electronic-communication device, said method comprising: (a) receiving a speech command from each of the plurality of users provided via the respective electronic-communication device, by a speech-recognition engine coupled to a media server, the media server configured to identify and access the information source via the network, the speech-recognition engine adapted to select speech-recognition grammar established to correspond to the speech commands received from the plurality of users and assigned to a desired search;(b) selecting, by the media server, at least one information-source-retrieval instruction corresponding to the speech-recognition grammar established for a particular speech command, the at least one information-source-retrieval instruction stored in a database associated with the media server and adapted to retrieve information;(c) accessing, by a web-browsing server, a portion of the information source to retrieve information of interest requested by the particular user, by using a processor of the web-browsing server, which processor (i) performs an instruction that requests information from an identified webpage, and (ii) utilizes a content extractor within the web-browsing server to separate a portion of the information from other information, the information derived from only a portion of the webpage containing information of interest to the particular user, wherein the content extractor uses a content-descriptor file containing a description of the portion of information and wherein the content-descriptor file indicates a location of the portion of the information within the information source;(d) selecting by the web-browsing server the information of interest from the information source and retrieving only the portion of the information of interest requested by the particular user according to the at least one information-source-retrieval instruction;(e) converting the information retrieved from the information source into an audio message by a speech-synthesis engine, the speech-synthesis engine coupled to the media server; and(f) transmitting the audio message to the electronic-communication device of the particular user requesting information of interest to the particular user.
  • 2. The method of claim 1, further comprising: searching, by the media server, an associated website to locate requested information.
  • 3. The method of claim 1, wherein the respective electronic-communication device is at least one of a landline telephone, a wireless telephone, and an internet protocol telephone and the media server is operatively connected to at least one of a local-area network, a wide-area network, and the internet.
  • 4. The method of claim 1, wherein the media server functions as a user-interface system adapted to provide access to a voice-browsing system.
  • 5. The method of claim 1, further comprising: clipping engine adapted to initially generate the content-descriptor file that indicates the location of the portion of the information within the information source.
  • 6. A system for retrieving information from an information source, the information source being periodically updated with current information, over a network, by speech commands received from a particular user of a plurality of users provided by the particular user via an electronic-communication device, and wherein each of the plurality of users has a respective electronic-communication device, said system comprising: (a) a speech-recognition engine including a processor and coupled to a media server, the speech-recognition engine adapted to receive a speech command from each of the plurality of users provided via the respective electronic-communication device, the media server configured to identify and access the information source via the network, the speech-recognition engine adapted to select speech-recognition grammar established to correspond to the speech commands received from the plurality of users and assigned to a desired search;(b) the media server further configured to select at least one information-source-retrieval instruction corresponding to the speech-recognition grammar established for a particular speech command, the at least one appropriate information-source-retrieval instruction stored in a database associated with the media server and adapted to retrieve information;(c) a web-browsing server coupled to the media server and adapted to access a portion of the information source to retrieve information of interest requested by the particular user, by using a processor of the web-browsing server, which processor (i) performs an instruction that requests information from an identified webpage, and (ii) utilizes a content extractor within the web-browsing server to separate a portion of the information from other information, the information derived from only a portion of a webpage containing information of interest to a particular user, wherein the content extractor uses a content-descriptor file containing a description of the portion of information and wherein the content-descriptor file indicates a location of the portion of the information within the information source, and selecting, by the web-browsing server, the information of interest from the information source and retrieving only the portion of the information of interest requested by the particular user according to the at least one information-source-retrieval instruction; and(d) a speech-synthesis engine including a processor and coupled to the media server, the speech-synthesis engine adapted to convert the information retrieved from the information source into an audio message and transmit the audio message by the electronic-communication device of the particular user requesting information of interest to the particular user.
  • 7. The system of claim 6, further comprising: an interface to an associated website by the network to locate requested information.
  • 8. The system of claim 6, wherein the respective electronic-communication device is at least one of a landline telephone, a wireless telephone, and an internet protocol telephone and wherein the media server is operatively connected to the network, which is at least one of a local-area network, a wide-area network, and the internet.
  • 9. The system of claim 6, wherein the media server functions as a user-interface system adapted to provide access to a voice-browsing system.
  • 10. The method of claim 6, further comprising: clipping engine adapted to generate the content-descriptor file, by which, an instruction is used by the web-browsing server to request information from the identified website and the information is displayed on the respective electronic-communication device, wherein the information is only the portion of the webpage containing information of interest to the particular user.
  • 11. A method for retrieving desired information from an information source of a plurality of information sources, the information source being periodically updated with current information, over a network, by speech commands received from a particular user of a plurality of users, wherein each of the plurality of users has a respective electronic-communication device, said method comprising: (a) receiving a speech command, from each of the plurality of users via the respective electronic-communication device, the speech-recognition engine coupled to a media server, the media server configured to identify and access an information source from the plurality of information sources via the network, the speech-recognition engine adapted to select speech-recognition grammar established to correspond to the speech commands received, from certain of the plurality of users and assigned to a desired search;(b) selecting, by the media server, at least one information-source-retrieval instruction corresponding to the speech-recognition grammar established for a particular speech command, the at least one information-source-retrieval instruction stored in a database associated with the media server and adapted to retrieve information;(c) providing access, by the speech command, via a web-browsing server, to a portion of the information source to retrieve the desired information for the particular user, by using a processor of the web-browsing server, which processor (i) performs an instruction that requests information from an identified webpage, and (ii) utilizes a content extractor within the web-browsing server to separate a portion of the information from other information, the information is derived from only a portion of the webpage containing information of interest to a particular user, wherein the content extractor uses a content-descriptor file containing a description of the portion of information and wherein the content-descriptor file indicates a location of the portion of the information within the information source,(d) selecting, by the web-browsing server, the desired information from the appropriate information source and retrieving only the portion of the information of interest requested by the particular user according to the at least one information-source-retrieval instruction;(e) converting the information retrieved from the information source into an audio message, by a speech-synthesis engine, the speech-synthesis engine coupled to the media server;(f) conveying the audio message through the electronic-communication device to the respective electronic-communication device of the particular user requesting the desired information; and(g) providing a graphical display and adapted to display the desired information retrieved from the information source to the particular user on the respective electronic-communication device of the particular user.
  • 12. The method of claim 11, further comprising: an interface to a plurality of associated websites accessed by the network to locate the desired information.
  • 13. The method of claim 11, wherein the respective electronic-communication device is at least one of a landline telephone, a wireless telephone, and an internet protocol telephone and wherein the media server is operatively connected to the network, which is at least one of a local-area network, a wide-area network, and the internet.
  • 14. The method of claim 11, wherein the web-browsing server further comprises the content-descriptor file, which is stored within the web-browsing server, wherein the content-descriptor file relates to obtaining the desired information from a website.
  • 15. The method of claim 11, wherein the speech command includes a phrase provided by the certain users, the phrase associated with an identified website and information.
  • 16. The method of claim 11, wherein a command for executing a content-extraction agent is stored in a database associated with the media server and used for voice browsing.
  • 17. The method of claim 11, wherein the media server functions as a user-interface system adapted to provide access to a voice-browsing system.
  • 18. The method of claim 11, further comprising: clipping engine coupled to the content-descriptor file, by which, the instruction requests information from the identified website and the information is displayed on the respective electronic-communication device, wherein the information is only the portion of the webpage containing information of interest to the particular user.
  • 19. An information-retrieval system for retrieving information from an information source, the information source being periodically updated with current information, comprising: (a) a speech-recognition engine coupled to a processor and a media server and adapted to receive a speech command from a particular user of a plurality of users via an electronic-communication device to access desired information, wherein each of the plurality of users has a respective electronic-communication device, the media server configured to identify and access an information source from a plurality of information sources via the network, the speech-recognition engine adapted to select speech-recognition grammar established to correspond to the speech commands received, the speech-recognition grammar associated with the desired information;(b) the media server, adapted to select at least one information-source-retrieval instruction corresponding to the speech-recognition grammar established for a particular speech command, the at least one information-source-retrieval instruction stored in a database associated with the media server and adapted to retrieve information from a particular one of the information sources that has the desired information;(c) a web-browsing server, adapted to provide access, by the speech command, to a portion of the information source to retrieve the desired information, by using a processor of the web-browsing server, which process (i) performs an instruction that requests information from an identified webpage, and (ii) utilizes a content extractor within the web-browsing server to separate a portion of the information from other information, the information derived from only a portion of the webpage containing information of interest to the particular user, wherein the content extractor uses a content-descriptor file containing a description of the portion of information and wherein the content-descriptor file indicates a location of the portion of the information within the information source and selecting, by the web-browsing server, the desired information from the information source and retrieving only the portion of the information desired by the particular user according to the at least one information-source-retrieval instruction;(d) a speech-synthesis engine coupled to the media server, and adapted to convert the portion of the information from the information source into an audio message for the particular user of the plurality of users and conveying the audio message through the electronic-communication device to the particular user of the plurality of users; and(e) a graphical display interface coupled to the media server and adapted to provide for display the desired information retrieved from the information source to certain others of the plurality of users.
  • 20. The system of claim 19, further comprising: an interface to a plurality of associated websites of the information source accessed by the network to locate the desired information.
  • 21. The system of claim 19, wherein the respective electronic-communication device is at least one of a landline telephone, a wireless telephone, and an internet protocol telephone.
  • 22. The system of claim 19, wherein the media server is operatively connected to the network, which is at least one of a local-area network, a wide area network, and the internet.
  • 23. The system of claim 19, wherein the content-descriptor file relates to obtaining the desired information from a website.
  • 24. The system of claim 19, wherein the speech command includes a phrase provided by the particular user, the phrase associated with an identified website and information available at the website.
  • 25. The system of claim 16, wherein a command for executing the content-extraction agent is stored in a database associated with the media server and used for voice browsing.
  • 26. The system of claim 19, further comprising: a database wherein a personal-recognition grammar is stored in the database and relates to web information.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation of U.S. Utility application Ser. No. 12/787,801, filed May 26, 2010, which is a continuation of U.S. Utility application Ser. No. 11/711,773, filed Jun. 29, 2007, which is a continuation of U.S. Utility application Ser. No. 09/777,406, dated Feb. 6, 2001, which claims priority to U.S. Provisional Patent Application No. 60/180,343, filed Feb. 4, 2000, which are incorporated by reference herein in their entirety.

US Referenced Citations (400)
Number Name Date Kind
D174465 Paxton Apr 1955 S
3728486 Kraus Apr 1973 A
4058838 Crager et al. Nov 1977 A
4100377 Flanagan Jul 1978 A
4158750 Sakoe et al. Jun 1979 A
4313035 Jordan et al. Jan 1982 A
4327251 Fomenko et al. Apr 1982 A
4340783 Sugiyama et al. Jul 1982 A
4340797 Takano et al. Jul 1982 A
4340800 Ueda et al. Jul 1982 A
4371752 Matthews et al. Feb 1983 A
4481574 DeFino et al. Nov 1984 A
4489438 Hughes Dec 1984 A
4500751 Darland et al. Feb 1985 A
4513390 Walter et al. Apr 1985 A
4523055 Hohl et al. Jun 1985 A
4549047 Brian et al. Oct 1985 A
4584434 Hashimoto Apr 1986 A
4585906 Matthews et al. Apr 1986 A
4596900 Jackson Jun 1986 A
4602129 Matthews et al. Jul 1986 A
4635253 Urui et al. Jan 1987 A
4652700 Matthews et al. Mar 1987 A
4696028 Morganstein et al. Sep 1987 A
4713837 Gordon Dec 1987 A
4747127 Hansen et al. May 1988 A
4748656 Gibbs et al. May 1988 A
4755932 Diedrich Jul 1988 A
4757525 Matthews et al. Jul 1988 A
4761807 Matthews et al. Aug 1988 A
4763317 Lehman et al. Aug 1988 A
4769719 Endo Sep 1988 A
4771425 Baran et al. Sep 1988 A
4776016 Hansen Oct 1988 A
4782517 Bernardis et al. Nov 1988 A
4792968 Katz Dec 1988 A
4799144 Parruck et al. Jan 1989 A
4809321 Morganstein et al. Feb 1989 A
4811381 Woo et al. Mar 1989 A
4837798 Cohen et al. Jun 1989 A
4847891 Kotani Jul 1989 A
4850012 Mehta et al. Jul 1989 A
4852149 Zwick et al. Jul 1989 A
4852170 Bordeaux Jul 1989 A
4866758 Heinzelmann Sep 1989 A
4873719 Reese Oct 1989 A
4879743 Burke et al. Nov 1989 A
4893333 Baran et al. Jan 1990 A
4893335 Fuller et al. Jan 1990 A
4903289 Hashimoto Feb 1990 A
4903291 Tsurufuji et al. Feb 1990 A
4905273 Gordon et al. Feb 1990 A
4907079 Turner et al. Mar 1990 A
4918722 Duehren et al. Apr 1990 A
4922518 Gordon et al. May 1990 A
4922520 Bernard et al. May 1990 A
4922526 Morganstein et al. May 1990 A
4926462 Ladd et al. May 1990 A
4930150 Katz May 1990 A
4933966 Hird et al. Jun 1990 A
4935955 Neudorfer Jun 1990 A
4935958 Morganstein et al. Jun 1990 A
4941170 Herbst Jul 1990 A
4942598 Davis Jul 1990 A
4953204 Cuschleg, Jr. et al. Aug 1990 A
4955047 Morganstein et al. Sep 1990 A
4956835 Grover Sep 1990 A
4959854 Cave et al. Sep 1990 A
4967288 Mizutori et al. Oct 1990 A
4969184 Gordon et al. Nov 1990 A
4972462 Shibata Nov 1990 A
4974254 Perine et al. Nov 1990 A
4975941 Morganstein et al. Dec 1990 A
4985913 Shalom et al. Jan 1991 A
4994926 Gordon et al. Feb 1991 A
4996704 Brunson Feb 1991 A
5003575 Chamberlin et al. Mar 1991 A
5003577 Ertz et al. Mar 1991 A
5008926 Misholi Apr 1991 A
5020095 Morganstein et al. May 1991 A
5027384 Morganstein Jun 1991 A
5029196 Morganstein Jul 1991 A
5036533 Carter et al. Jul 1991 A
5054054 Pessia et al. Oct 1991 A
5065254 Hishida Nov 1991 A
5086385 Launey et al. Feb 1992 A
5095445 Sekiguchi Mar 1992 A
5099509 Morganstein et al. Mar 1992 A
5109405 Morganstein Apr 1992 A
5128984 Katz Jul 1992 A
5131024 Pugh et al. Jul 1992 A
5133004 Heileman, Jr. et al. Jul 1992 A
5145452 Chevalier Sep 1992 A
5146452 Pekarske Sep 1992 A
5166974 Morganstein et al. Nov 1992 A
5179585 MacMillan, Jr. et al. Jan 1993 A
5193110 Jones et al. Mar 1993 A
5195086 Baumgartner et al. Mar 1993 A
5233600 Pekarske Aug 1993 A
5243643 Sattar et al. Sep 1993 A
5243645 Bissell et al. Sep 1993 A
5249219 Morganstein et al. Sep 1993 A
5255305 Sattar Oct 1993 A
5263084 Chaput et al. Nov 1993 A
5276729 Higuchi et al. Jan 1994 A
5287199 Zoccolillo Feb 1994 A
5291302 Gordon et al. Mar 1994 A
5291479 Vaziri et al. Mar 1994 A
5303298 Morganstein et al. Apr 1994 A
5307399 Dai et al. Apr 1994 A
5309504 Morganstein May 1994 A
5325421 Hou et al. Jun 1994 A
5327486 Wolff et al. Jul 1994 A
5327529 Fults et al. Jul 1994 A
5329578 Brennan et al. Jul 1994 A
5333266 Boaz et al. Jul 1994 A
5347574 Morganstein Sep 1994 A
5355403 Richardson, Jr. et al. Oct 1994 A
5359598 Steagall et al. Oct 1994 A
5365524 Hiller et al. Nov 1994 A
5365574 Hunt et al. Nov 1994 A
5375161 Fuller et al. Dec 1994 A
5384771 Isidoro et al. Jan 1995 A
5404231 Bloomfield Apr 1995 A
5408526 McFarland et al. Apr 1995 A
5414754 Pugh et al. May 1995 A
5416834 Bales et al. May 1995 A
5426421 Gray Jun 1995 A
5432845 Burd et al. Jul 1995 A
5436963 Fitzpatrick et al. Jul 1995 A
5459584 Gordon et al. Oct 1995 A
5463684 Morduch et al. Oct 1995 A
5475791 Schalk et al. Dec 1995 A
5479487 Hammond Dec 1995 A
5495484 Self et al. Feb 1996 A
5497373 Hulen et al. Mar 1996 A
5499288 Hunt et al. Mar 1996 A
5515427 Carlsen et al. May 1996 A
5517558 Schalk May 1996 A
5526353 Henley et al. Jun 1996 A
5533115 Hollenbach et al. Jul 1996 A
5537461 Bridges et al. Jul 1996 A
5555100 Bloomfield et al. Sep 1996 A
5559611 Bloomfield et al. Sep 1996 A
5559859 Dai et al. Sep 1996 A
5566236 MeLampy et al. Oct 1996 A
5603031 White et al. Feb 1997 A
5608786 Gordon Mar 1997 A
5610910 Focsaneanu et al. Mar 1997 A
5610970 Fuller et al. Mar 1997 A
5611031 Hertzfeld et al. Mar 1997 A
5630079 McLaughlin May 1997 A
5652789 Miner et al. Jul 1997 A
5657376 Espeut et al. Aug 1997 A
5659597 Bareis et al. Aug 1997 A
5666401 Morganstein et al. Sep 1997 A
5675507 Bobo, II Oct 1997 A
5675811 Broedner et al. Oct 1997 A
5689669 Lynch et al. Nov 1997 A
5692187 Goldman et al. Nov 1997 A
5699486 Tullis et al. Dec 1997 A
5712903 Bartholomew et al. Jan 1998 A
5719921 Vysotsky et al. Feb 1998 A
5721908 Lagarde et al. Feb 1998 A
5724408 Morganstein Mar 1998 A
5737395 Irribarren Apr 1998 A
5742596 Baratz et al. Apr 1998 A
5742905 Pepe et al. Apr 1998 A
5752191 Fuller et al. May 1998 A
5758322 Rongley May 1998 A
5761294 Shaffer et al. Jun 1998 A
5764639 Staples et al. Jun 1998 A
5764736 Shachar et al. Jun 1998 A
5764910 Shachar Jun 1998 A
5774860 Bayya et al. Jun 1998 A
5787298 Broedner et al. Jul 1998 A
5793993 Broedner et al. Aug 1998 A
5794205 Walters et al. Aug 1998 A
5796791 Polcyn Aug 1998 A
5799063 Krane Aug 1998 A
5799065 Junqua et al. Aug 1998 A
5809282 Cooper et al. Sep 1998 A
5809481 Baron et al. Sep 1998 A
5812796 Broedner et al. Sep 1998 A
5819220 Sarukkai et al. Oct 1998 A
5819306 Goldman et al. Oct 1998 A
5822727 Garberg et al. Oct 1998 A
5823879 Goldberg et al. Oct 1998 A
5832063 Vysotsky et al. Nov 1998 A
5832440 Woodbridge Nov 1998 A
5835570 Wattenbarger Nov 1998 A
5838682 Dekelbaum et al. Nov 1998 A
5867494 Krishnaswamy et al. Feb 1999 A
5867495 Elliott et al. Feb 1999 A
5870550 Wesinger, Jr. et al. Feb 1999 A
5873080 Coden et al. Feb 1999 A
5881134 Foster et al. Mar 1999 A
5881135 Watts et al. Mar 1999 A
5884032 Bateman et al. Mar 1999 A
5884262 Wise et al. Mar 1999 A
5884266 Dvorak Mar 1999 A
5890123 Brown et al. Mar 1999 A
5905476 McLaughlin et al. May 1999 A
5914951 Bentley et al. Jun 1999 A
5915001 Uppaluru Jun 1999 A
5917817 Dunn et al. Jun 1999 A
5926789 Barbara et al. Jul 1999 A
5940598 Strauss et al. Aug 1999 A
5943399 Bannister et al. Aug 1999 A
5946389 Dold Aug 1999 A
5953392 Rhie Sep 1999 A
5974124 Schlueter et al. Oct 1999 A
5974413 Beauregard et al. Oct 1999 A
5991292 Focsaneanu et al. Nov 1999 A
5995615 Miloslavsky Nov 1999 A
5999525 Krishnaswamy et al. Dec 1999 A
5999611 Tatchell et al. Dec 1999 A
5999965 Kelly Dec 1999 A
6012088 Li et al. Jan 2000 A
6014437 Acker et al. Jan 2000 A
6014626 Cohen Jan 2000 A
6018710 Wynblatt et al. Jan 2000 A
6021181 Miner et al. Feb 2000 A
6021190 Fuller et al. Feb 2000 A
6031904 An et al. Feb 2000 A
6038305 McAllister et al. Mar 2000 A
6044107 Gatherer et al. Mar 2000 A
6047053 Miner et al. Apr 2000 A
6052372 Gittins et al. Apr 2000 A
6067516 Levay et al. May 2000 A
6078580 Mandalia et al. Jun 2000 A
6081518 Bowman-Amuah Jun 2000 A
6081782 Rabin Jun 2000 A
6091808 Wood et al. Jul 2000 A
6101472 Giangarra et al. Aug 2000 A
6104803 Weser et al. Aug 2000 A
6115737 Ely et al. Sep 2000 A
6115742 Franklin et al. Sep 2000 A
6130933 Miloslavsky Oct 2000 A
6131095 Low et al. Oct 2000 A
6137863 Brown et al. Oct 2000 A
6144991 England Nov 2000 A
6157705 Perrone Dec 2000 A
6161128 Smyk Dec 2000 A
6178399 Takebayashi et al. Jan 2001 B1
6185535 Hedin et al. Feb 2001 B1
6188683 Lang et al. Feb 2001 B1
6195357 Polcyn Feb 2001 B1
6199076 Logan et al. Mar 2001 B1
6201814 Greenspan Mar 2001 B1
6201863 Miloslavsky Mar 2001 B1
6208638 Rieley et al. Mar 2001 B1
6215858 Bartholomew et al. Apr 2001 B1
6230132 Class et al. May 2001 B1
6233318 Picard et al. May 2001 B1
6243373 Turock Jun 2001 B1
6252944 Hansen, II et al. Jun 2001 B1
6269336 Ladd et al. Jul 2001 B1
6285745 Bartholomew et al. Sep 2001 B1
6327572 Morton et al. Dec 2001 B1
6330538 Breen Dec 2001 B1
6343529 Pool Feb 2002 B1
6349132 Wesemann et al. Feb 2002 B1
6353661 Bailey, III Mar 2002 B1
6366575 Barkan et al. Apr 2002 B1
6366578 Johnson Apr 2002 B1
6424945 Sorsa Jul 2002 B1
6430282 Bannister et al. Aug 2002 B1
6434529 Walker et al. Aug 2002 B1
6445694 Swartz Sep 2002 B1
6446076 Burkey et al. Sep 2002 B1
6456699 Burg Sep 2002 B1
6459910 Houston Oct 2002 B1
6477240 Lim et al. Nov 2002 B1
6477420 Struble et al. Nov 2002 B1
6490627 Kalra et al. Dec 2002 B1
6501966 Bareis et al. Dec 2002 B1
6505163 Zhang et al. Jan 2003 B1
6529948 Bowman-Amuah Mar 2003 B1
6532444 Weber Mar 2003 B1
6539359 Ladd et al. Mar 2003 B1
6546393 Khan Apr 2003 B1
6560604 Fascenda May 2003 B1
6584439 Geilhufe et al. Jun 2003 B1
6587822 Brown et al. Jul 2003 B2
6593944 Nicolas et al. Jul 2003 B1
6594348 Bjurstrom et al. Jul 2003 B1
6594692 Reisman Jul 2003 B1
6606611 Khan Aug 2003 B1
6618039 Grant et al. Sep 2003 B1
6618726 Colbath et al. Sep 2003 B1
6618763 Steinberg Sep 2003 B1
6636831 Profit, Jr. et al. Oct 2003 B1
6654814 Britton et al. Nov 2003 B1
6658662 Nielsen Dec 2003 B1
6665640 Bennett et al. Dec 2003 B1
6687341 Koch et al. Feb 2004 B1
6704024 Robotham et al. Mar 2004 B2
6718015 Berstis Apr 2004 B1
6721705 Kurganov et al. Apr 2004 B2
6724868 Pradhan et al. Apr 2004 B2
6732142 Bates et al. May 2004 B1
6763388 Tsimelzon Jul 2004 B1
6771732 Xiao et al. Aug 2004 B2
6771743 Butler et al. Aug 2004 B1
6775264 Kurganov Aug 2004 B1
6785266 Swartz Aug 2004 B2
6807257 Kurganov Oct 2004 B1
6812939 Flores et al. Nov 2004 B1
6823370 Kredo et al. Nov 2004 B1
6888929 Saylor et al. May 2005 B1
6922733 Kuiken et al. Jul 2005 B1
6941273 Loghmani et al. Sep 2005 B1
6964012 Zirngibl et al. Nov 2005 B1
6964023 Maes et al. Nov 2005 B2
6965864 Thrift et al. Nov 2005 B1
6996609 Hickman et al. Feb 2006 B2
6999804 Engstrom et al. Feb 2006 B2
7003463 Maes et al. Feb 2006 B1
7024464 Lusher et al. Apr 2006 B1
7050977 Bennett May 2006 B1
7075555 Flores Jul 2006 B1
7076431 Kurganov et al. Jul 2006 B2
7089307 Zintel et al. Aug 2006 B2
7145898 Elliott Dec 2006 B1
7146323 Guenther et al. Dec 2006 B2
7327723 Kurganov Feb 2008 B2
7386455 Kurganov Jun 2008 B2
7506022 Wang et al. Mar 2009 B2
7516190 Kurganov Apr 2009 B2
7881941 Kurganov et al. Feb 2011 B2
7974875 Quilici et al. Jul 2011 B1
8098600 Kurganov Jan 2012 B2
8131267 Lichorowic et al. Mar 2012 B2
8131555 Carriere et al. Mar 2012 B1
8185402 Kurganov et al. May 2012 B2
8380505 Konig et al. Feb 2013 B2
8775176 Gilbert et al. Jul 2014 B2
8838074 Kurganov Sep 2014 B2
8843120 Kurganov Sep 2014 B2
8843141 Kurganov Sep 2014 B2
8874446 Carriere et al. Oct 2014 B2
9451084 Kurganov et al. Sep 2016 B2
20010011302 Son Aug 2001 A1
20010032234 Summers et al. Oct 2001 A1
20010040885 Jonas et al. Nov 2001 A1
20010048676 Jimenez et al. Dec 2001 A1
20020006126 Johnson et al. Jan 2002 A1
20020059402 Belanger May 2002 A1
20020064149 Elliott et al. May 2002 A1
20020087327 Lee et al. Jul 2002 A1
20020090114 Rhoads et al. Jul 2002 A1
20020104025 Wrench Aug 2002 A1
20030002635 Koch et al. Jan 2003 A1
20040160913 Kubler et al. Aug 2004 A1
20040247094 Crockett Dec 2004 A1
20050025133 Swartz Feb 2005 A1
20050030179 Script et al. Feb 2005 A1
20050074104 Swartz Apr 2005 A1
20050102147 Ullrich et al. May 2005 A1
20050278179 Overend Dec 2005 A1
20060069926 Ginter et al. Mar 2006 A1
20070136072 Sampath Jun 2007 A1
20070206737 Hickman Sep 2007 A1
20070249406 Andreasson Oct 2007 A1
20070263601 Kurganov Nov 2007 A1
20070286360 Chu et al. Dec 2007 A1
20080228494 Cross Sep 2008 A1
20090286514 Lichorowic Nov 2009 A1
20100042413 Simpson Feb 2010 A1
20100094635 Bermudez Perez Apr 2010 A1
20110035220 Opaluch Feb 2011 A1
20110054898 Phillips et al. Mar 2011 A1
20110082696 Johnston Apr 2011 A1
20110091023 Kurganov et al. Apr 2011 A1
20110153324 Ballinger Jun 2011 A1
20120179464 Newman Jul 2012 A1
20120253800 Goller Oct 2012 A1
20130006638 Lindahl Jan 2013 A1
20130041666 Bak Feb 2013 A1
20130191122 Mason Jul 2013 A1
20130317823 Mengibar Nov 2013 A1
20140039898 Reich Feb 2014 A1
20140046660 Kamdar Feb 2014 A1
20140111415 Gargi et al. Apr 2014 A1
20140123010 Goldstein May 2014 A1
20150134340 Blaisch May 2015 A1
20150185985 Kang et al. Jul 2015 A1
20150234636 Barnes, Jr. Aug 2015 A1
20150334080 Tamayo Nov 2015 A1
20150339745 Peter et al. Nov 2015 A1
20160057383 Pattan Feb 2016 A1
20160080811 Fukushima Mar 2016 A1
20160125881 Vogel May 2016 A1
20160179752 Clark Jun 2016 A1
20160225369 Agrawal Aug 2016 A1
20160239497 O'Donnell Aug 2016 A1
20160321266 Philippov Nov 2016 A1
20160328206 Nakaoka Nov 2016 A1
20170116986 Weng Apr 2017 A1
Foreign Referenced Citations (13)
Number Date Country
1329852 May 1994 CA
0572544 Sep 1996 EP
0794650 Sep 1997 EP
2211698 Jul 1989 GB
2240693 Aug 1991 GB
2317782 Apr 1998 GB
1-258526 Oct 1989 JP
9107838 May 1991 WO
9118466 Nov 1991 WO
9609710 Mar 1996 WO
9734401 Sep 1997 WO
9737481 Sep 1997 WO
9823058 May 1998 WO
Non-Patent Literature Citations (68)
Entry
“McGraw-Hill Dictionary of Scientific & Technical Terms 1101, 6th ed. 2003.”
Newton, Harry, Newton's Telecom Dictionary—The Official Glossary of Telecommunications and Voice Processing Terms, Dec. 1992, 6 pages.
Paper No. 10, Denying Institution of Covered Business Method Patent Review CBM2015-00109 and CBM2015-00149, Nov. 9, 2015, 19 pages.
Paper No. 10, Denying Institution of Covered Business Method Patent Review CBM2015-00110 and CBM2015-00150, Nov. 9, 2015, 20 pages.
Paper No. 10, Denying Institution of Covered Business Method Patent Review CBM2015-00111 and CBM2015-00151, Nov. 9, 2015, 19 pages.
Paper No. 10, Denying Institution of Covered Business Method Patent Review CBM2015-00112 and CBM2015-00152, Nov. 9, 2015, 18 pages.
Putz, Steve, Interactive Information Services Using World-Wide Web Hypertext, First Int'l Conference on World-Wide Web (May 25-27, 1994), 10 pages.
Memorandum Opinion and Order, Oct. 8, 2015, 27 pages.
Update Subject Matter Eligibility, Jul. 2015, 33 pages.
Wikipedia Definition of “Internet”, available at http://en.wikipedia.org/wiki/Internet, pp. 24-26.
Opening Brief of Appellant Parus Holdings, Inc., submitted on Mar. 8, 2016, to the United States Court of Appeals for the Federal Circuit, 236 pages.
Brief of Appellees, submitted on Jun. 20, 2016, to the United States Court of Appeals for the Federal Circuit, 53 pages.
“A PABX that Listens and Talks”, Speech Technology, Jan./Feb. 1984, pp. 74-79.
Amended Complaint, Parus Holdings, Inc. v. Web Telephony LLC & Robert Swartz, Case No. 06-cv-01146 (N.D. Ill.), Jul. 10, 2006, 14 pages.
AT&T, Press Release, “AT&T Customers Can Teach Systems to Listen and Respond to Voice”, Jan. 17, 1995, pp. 1-2, Basking Ridge, N.J., available at www.lucent.com/press/0195/950117.gbb.html (accessed Mar. 15, 2005).
Bellcore Technology Licensing, “The Electronic Receptionist—A Knowledge-Based Approach to Personal Communications”, 1994, pp. 1-8.
Brachman et al., “Fragmentation in Store-and-Forward Message Transfer”, IEEE Communications Magazine, vol. 26 (7), Jul. 1988, pp. 18-27.
“Business Phone Systems for Advanced Offices”, NTT Review, vol. 2 (6), Nov. 1990, pp. 52-54.
Cole et al., “An Architecture for a Mobile OSI Mail Access System”, IEEE Journal on Selected Areas in Communications, vol. 7 (2), Feb. 1989, pp. 249-256.
“Data Communications Networks: Message Handling Systems”, Fascicle VIII.7, Recommendations X.400-X.430, 38 pages, date unknown.
DAX Systems, Inc., Press Release, “Speech Recognition Success in DAX's Grasp”, Nov. 22, 1995, pp. 1-2, Pine Brook, NJ.
Defendants' Answer to the Amended Complaint and Demand for Jury Trial, Parus Holdings, Inc. v. Web Telephony LLC & Robert Swartz, Case No. 06-cv-01146 (N.D. Ill.), Aug. 10, 2006, 14 pages.
Faxpak Store and Forward Facsimile Transmission Service, Electrical Communication, vol. 54 (3), 1979, pp. 251-55.
Garcia et al., “Issues in Multimedia Computer-Based Message Systems Design and Standardization”, NATO ASI Series, vol. 1-6, 1984, 18 pgs.
“Globecom '85 IEEE Global Telecommunications Conference,” New Orleans, LA., Dec. 2-5, 1985, pp. 1295-1300.
Hemphill et al., “Speech-Aware Multimedia,” IEEE MultiMedia, Spring 1996, vol. 3, No. 1, pp. 74-78, IEEE. As indicated on the cover page of the journal, which is attached hereto as Attachment 4, the reference was received by Cornell University on Mar. 25, 1996.
Hunt et al., “Long-Distance Remote Control to the Rescue”, Chicago Tribune, Jun. 15, 2002, Section 4, p. 15.
“Introducing PIC SuperFax, First PC/Fax System to Run Under Windows”, Pacific Image Communications, Pasadena, CA, Date Unknown, (received at COMDEX show, Nov. 3, 1987). 4 pgs.
Kubala et al., “BYBLOS Speech Recognition Benchmark Results”, Workshop on Speech & Natural Language, Feb. 19-22, 1991. According to the web site http://portal.acm.org/citation.cfm?id˜II2405.I 12415&coll . . . , the reference was published in 1991, Morgan Kaufmann Publishers, San Francisco, CA. The distribution date is not presently known.
Ly, “Chatter: A Conversational Telephone Agent”, submitted to Program in Media Arts & Sciences, MIT, 1993, pp. 1-130.
Maeda, et al., “An Intelligent Customer-Controlled Switching System”, IEEE Global Telecommunications Conference, Hollywood, Florida, Nov. 28-Dec. 1, 1988, pp. 1499-1503.
Markowitz, J., “The Ultimate Computer Input Device May Be Right Under Your Nose”, Byte, Dec. 1995, pp. 1-13, available at www.byte.com/art/9512/sec8/artl.htm (accessed Mar. 15, 2005).
Marx et al., “Mail Call: Message Presentation and Navigation in a Nonvisual Environment,” SIGCHI Conference on Human Factors in Computing Systems, Vancouver, B.C., Canada, Apr. 13-18, 1996. The web site http://www.usabilityviews.com/uv001673.html shows a date of Apr. 16, 1996. The distribution date is not presently known.
Marx, M., “Toward Effective Conversational Messaging” (Thesis). As indicated on the cover page, the thesis was presented to the Departmental Committee on Graduate Students, Program in Media Arts and Sciences, School of Architecture and Planning, Massachusetts Institute of Technology on May 12, 1995. According to the web site http://www.thesis.mit.edu/Dienst/Repository/2.0/Body/0018.mit.theses/1995-314/rfc1807bib., the thesis was indexed on Mar. 21, 2000.
Oye, Phil, “Juggler”, p. 1, available at http://www.philoye.com/work/juggler/index.shtml (accessed on Dec. 8, 2006).
Oye, Phil, “Juggler”, p. 1, available at http://www.philoye.com/work/juggler—2.shtml (accessed on Dec. 8, 2006).
Perdue et al., “Conversant® 1 Voice System: Architecture and Applications”, Jul. 17, 1986, AT&T Technical Journal, pp. 1-14.
Plaintiff Parus Holdings, Inc.'s Supplemental Responses to Defendant Web Telephony LLC's First Set of Interrogatories (Nos. 1-12), Parus Holdings, Inc. v. Web Telephony LLC & Robert Swartz, Case No. 06-cv-01146 (N.D. Ill.), Oct. 31, 2006, 32 pages.
Plaintiff Parus Holdings, Inc.'s Supplemental Responses to Defendant Web Telephony LLC's Second Set of Interrogatories (Nos. 13-17), Parus Holdings, Inc. v. Web Telephony LLC & Robert Swartz, Case No. 06-cv-01146 (N.D. Ill.), Oct. 31, 2006, 31 pages.
Printouts of Internet web site, “Wildfire Communications, Inc.”, Nov. 5, 1997, including printouts of the following web pages: http://www.wildfire.com; http://www.wildfire.com/consumerhome.html; http://www.wildfire.com/106.html; http://www.wildfire.com/carrierhome.html; http://www.wildfire.com/sfandb.html; http://www.wildfire.com/about.html; http://www.wildfire.com/abtmgmt.html; http://www.wildfire.com/scoop.html; http://www.wildfire.com/intel.html; and http://www.wildfire.com/msft.html.
“Proceedings of the IFIP 10th World Computer Congress”, Dublin, Ireland, Sep. 1-5, 1986.
“PureSpeech Announces Juggler PC System for First Quarter of 1997”, HighBeam Research, Sep. 19, 1996, pp. 1-3, available at http://www.highbeam.com/doc/1G1-186909545.html (accessed on Dec. 8, 2006).
PureSpeech, “Meet the Voice of Juggler!”, pp. 1-3, the date of Nov. 18, 1996 is shown at the top of p. 1.
“PureSpeech's Juggler”, Teleconnect, Dec. 1996 issue, p. 36.
Ross, Randy, “Retrieve E-mail from a Telephone”, Oct. 7, 1996, pp. 1-2, available at http://resna.org/ProfessOrg?Sigs?SIGSites/sig11archive/juggler.htm (accessed on Dec. 8, 2006). Printout indicates that the article was originally printed in PC World.
Sartori, M., “Speech Recognition”, Apr. 1995, pp. 1-9, Mercury Communications, available at www.gar.co.uk/technology—watch/speech.htm (accessed Mar. 15, 2005).
Schmandt et al., “A Conversational Telephone Messaging System”, IEEE Transactions on Consumer Electronics, 1984, vol. CE-30, No. 3, pp. xxi-xxiv.
Schmandt et al., “Phone Slave: A Graphical Telecommunications Interface”, Proceedings of the SID, 1985, vol. 26/1, pp. 79-82.
“Secretarial Branch Exchange”, IBM Technical Disclosure Bulletin, vol. 26 (5), Oct. 1983, pp. 2645-2647.
Shimamura, et al., “Review of the Electrical Communication Laboratories”, vol. 418 (33), No. 1, Tokyo, Japan, 1985, pp. 31-39.
“The VMX Systems Product Reference Manual: Product Description Volume”, May 1994, vol. 1, release 7.1, VMX, Inc. (Octel Communications Corp.) San Jose, CA USA.
“VMXworks Product Reference Manual: vol. 3 Programmer's Guide”, Jul. 1994, vols. 3 & 4, Release 3.1, Octel Communications Corp., Milpitas, CA, USA.
“Wildfire Communication, Inc.”, Harvard Business School, Mar. 21, 1996, Publ. No. 9-396-305, pp. 1-22.
“WordPerfect: New Telephony Features Boost Office”, WordPerfect Office TechBrief, 1994, Info-World Publishing. Co., vol. 10, Issue 2, pp. 2-3.
Yang, C., “INET Phone—Telephone Services and Servers on the Internet”, Apr. 1995, University of North Texas, pp. 1-6.
Bellcore Technology Licensing, “The Electronic Receptionist—A Knowledge-Based Approach to Personal Communications,” 1994, pp. 1-8.
Examples: Abstract Ideas, 20 pages.
IBM AIX DirectTalk/6000 Version 1 Release 6 Improves Your Voice Processing Services to Callers and Customers, Announcement No. 295-489, Nov. 28, 1995, 27 pages.
IBM Announcement Letter No. A95-893, retrieved on Mar. 9, 2015, 10 pages.
IBM, AIX DirectTalk/6000: General Information and Planning, Release 6, GC33-1720-00, Dec. 1995, 162 pages.
IBM, DirectTalkMail: Administration, Release 6, SC33-1733-00, Feb. 1996, 274 pages.
“McGraw-Hill Dictionary of Scientific & Technical Terms 1101, 6th ed. 2003,” No copy is provided but please inform if a copy of this dictionary is required.
Hemphill et al., “Surfing the Web by Voice,” ACM Multimedia 95—Electronic Proceedings, Nov. 5-9, 1995, 8 pages, San Francisco, CA.
IBM, AIX DirectTalk/6000 Release 6: Speech Recognition with the BBN Hark Recognizer, SC33-1734-00, Feb. 1996, 250 pages.
Joint Appendix, submitted on Sep. 16, 2016, to the United States Court of Appeals for the Federal Circuit, 406 pages.
Juggler by PureSpeech, p. 1, available at http://members.aol.com/compqanda1/juggler.html (accessed on Dec. 8, 2006).
Reply Brief of Appellant Parus Holdings, Inc., submitted on Sep. 6, 2016, to the United States Court of Appeals for the Federal Circuit, 40 pages.
Judgment without Opinion for Parus Holdings, Inc. v. Sallie Mae Bank, Navient Solutions Inc., PNC Bank, N.A., Suntrust Bank, Suntrust Mortgage Inc., 2016-1179, 2016-1180, 2016-1181, entered Feb. 27, 2017 (2 pages).
Related Publications (1)
Number Date Country
20160307583 A1 Oct 2016 US
Provisional Applications (1)
Number Date Country
60180343 Feb 2000 US
Continuations (3)
Number Date Country
Parent 12787801 May 2010 US
Child 15193517 US
Parent 11771773 Jun 2007 US
Child 12787801 US
Parent 09777406 Feb 2001 US
Child 11771773 US