Internet appliance system and method

Information

  • Patent Grant
  • 9535563
  • Patent Number
    9,535,563
  • Date Filed
    Tuesday, November 12, 2013
  • Date Issued
    Tuesday, January 3, 2017
Abstract
An Internet appliance, comprising, within a single housing, packet data network interfaces, adapted for communicating with the Internet and a local area network, at least one data interface selected from the group consisting of a universal serial bus, an IEEE-1394 interface, a voice telephony interface, an audio program interface, a video program interface, an audiovisual program interface, a camera interface, a physical security system interface, a wireless networking interface, a device control interface, a smart home interface, an environmental sensing interface, and an environmental control interface, and a processor, for controlling a data transfer between the local area network and the Internet, and defining a markup language interface communicated through a packet data network interface, to control a data transfer or control a remote device.
Description
COPYRIGHT NOTICE

A portion of the disclosure of this patent document and appendices contain material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of this patent document or the patent disclosure, as it appears in the U.S. Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.


FIELD OF THE INVENTION

The present invention relates to the field of adaptive systems, and more particularly systems and methods which are adaptive to a human user input and/or a data environment, as well as applications for such systems and methods. More particularly, embodiments of the invention involve, for example, consumer electronics, personal computers, control systems, and professional assistance systems.


BACKGROUND OF THE INVENTION

The prior art is rich in various systems and methods for data analysis, as well as various systems and methods relating to useful endeavors. In general, most existing systems and methods provide concrete functions, which have a defined response to a defined stimulus. Such systems, while embodying the “wisdom” of the designer, have a particular shortcoming in that their capabilities are static.


Intelligent or learning systems are also known. These systems are limited by the particular paradigm employed, and rarely are the learning algorithms general. In fact, while generic theories and systems which learn are well known, the application of such systems to particular problems requires both a detailed description of the problem and knowledge of the input and output spaces. Even once these factors are known, a substantial tuning effort may be necessary to enable acceptable operation.


Therefore, the present invention builds upon the prior art, which defines various problems to be addressed, intelligent systems and methods, tuning paradigms and user interfaces. Accordingly, as set forth below, and in the attached appendix of references (including abstracts), incorporated herein by reference, a significant number of references detail fundamental technologies which may be improved according to the present invention, or incorporated together to form a part of the present invention. To some extent, these technologies are disclosed and are expressly incorporated herein by reference to avoid duplication of prior art teachings. However, the disclosure herein is not meant to be limiting as to the knowledge of a person of ordinary skill in the art. Recitation herein below of these teachings or reference to these teachings is not meant to imply that the inventors hereof were necessarily in any way involved in these references, nor that the particular improvements and claimed inventions recited herein were made or conceived after the publication of these references. Thus, prior art cited herein is intended to (1) disclose information related to the application published before the filing hereof; (2) define the problem in the art to which the present invention is directed; (3) define prior art methods of solving various problems also addressed by the present invention; (4) define the state of the art with respect to methods disclosed or referenced herein; and/or (5) detail technologies used to implement methods or apparatus in accordance with the present invention.


Human Interface


Aspects of the present invention provide an advanced user interface. The subject of man-machine interfaces has been studied for many years, and indeed the entire field of ergonomics and human factors engineering revolves around optimization of human-machine interfaces. Typically, the optimization scheme optimizes the mechanical elements of a design, or seeks to provide a universally optimized interface. Thus, a single user interface is typically provided for a system. In fact, some systems provide a variety of interfaces, for example, novice, intermediate and advanced, to provide differing balances between available control and presented complexity. Further, adaptive and/or responsive human-machine computer interfaces are now well known. However, a typical problem presented is defining a self-consistent and useful (i.e., an improvement over a well-designed static interface) theory for altering the interface. Therefore, even where, in a given application, a theory exists, the theory is typically not generalizable to other applications. Therefore, one aspect of the present invention is to provide such a theory by which adaptive and/or responsive user interfaces may be constructed and deployed.


In a particular application, the user interface according to the present invention is applied to general-purpose-type computer systems, for example, personal computers. One aspect of the present invention thus relates to a programmable device that comprises a menu-driven interface in which the user enters information using a direct manipulation input device. Such a type of interface scheme is disclosed in Verplank, William L., “Graphics in Human-Computer Communication: Principles of Graphical User-Interface Design”, Xerox Office Systems. See the references cited therein: Foley, J. D., Wallace, V. L., Chan, P., “The Human Factor of Computer Graphics Interaction Techniques”, IEEE CG&A, November 1984, pp. 13-48; Koch, H., “Ergonomische Betrachtung von Schreibtastaturen”, Humane Production, I, pp. 12-15 (1985); Norman, D. A., Fisher, D., “Why Alphabetic Keyboards Are Not Easy To Use: Keyboard Layout Doesn't Much Matter”, Human Factors 24(5), pp. 509-519 (1982); Perspectives: High Technology 2, 1985; Knowlton, K., “Virtual Pushbuttons as a Means of Person-Machine Interaction”, Proc. of Conf. Computer Graphics, Pattern Recognition and Data Structure, Beverly Hills, Calif., May 1975, pp. 350-352; “Machine Now Reads, enters Information 25 Times Faster Than Human Keyboard Operators”, Information Display 9, p. 18 (1981); “Scanner Converts Materials to Electronic Files for PCs”, IEEE CG&A, December 1984, p. 76; “New Beetle Cursor Director Escapes All Surface Constraints”, Information Display 10, p. 12, 1984; Lu, C., “Computer Pointing Devices: Living With Mice”, High Technology, January 1984, pp. 61-65; “Finger Painting”, Information Display 12, p. 18, 1981; Kraiss, K. F., “Neuere Methoden der Interaktion an der Schnittstelle Mensch-Maschine”, Z. F. Arbeitswissenschaft, 2, pp. 65-70, 1978; Hirzinger, G., Landzettel, K., “Sensory Feedback Structures for Robots with Supervised Learning”, IEEE Conf. on Robotics and Automation, St. Louis, March 1985; Horgan, H., “Medical Electronics”, IEEE Spectrum, January 1984, pp. 90-93.


A menu-based remote control with a self-contained display device is disclosed in Platte, Oberjatzas, and Voessing, “A New Intelligent Remote Control Unit for Consumer Electronic Device”, IEEE Transactions on Consumer Electronics, Vol. CE-31, No. 1, February 1985, 59-68.


A directional or direct manipulation-type sensor based infrared remote control is disclosed in Zeisel, Tomas, Tomaszewski, “An Interactive Menu-Driven Remote Control Unit for TV-Receivers and VC-Recorders”, IEEE Transactions on Consumer Electronics, Vol. 34, No. 3, 814-818 (1988), which relates to a control for programming with the West German Videotext system. This implementation differs from the Videotext programming system described in Bensch, U., “VPV—VIDEOTEXT PROGRAMS VIDEORECORDER”, IEEE Transactions on Consumer Electronics, Vol. 34, No. 3, 788-792 (1988), which describes the system of Video Program System Signal Transmitters, in which the VCR is programmed by entering a code for the Video Program System signal, which is emitted by television stations in West Germany. Each separate program has a unique identifier code, transmitted at the beginning of the program, so that a user need only enter the code for the program, and the VCR will monitor the channel for the code transmission, and begin recording when the code is received, regardless of schedule changes. The Videotext Programs Videorecorder (VPV) disclosed does not intelligently interpret the transmission; rather, the system reads the transmitted code as a literal label, without any analysis or determination of a classification of the program type.


Known manual input devices include the trackball, mouse, and joystick. In addition, other devices are known, including the so-called “J-cursor” or “mousekey” which embeds a two (x, y) or three (x, y, p) axis pressure sensor in a button conformed to a finger, present in a general purpose keyboard; a keyboard joystick of the type described in Electronic Engineering Times, Oct. 28, 1991, p. 62, “IBM Points a New Way”; a so-called “isobar” which provides a two axis input by optical sensors (θ, x); a two and one half axis (x, y, digital input) input device, such as a mouse or a “felix” device, infrared, acoustic, etc.; position sensors for determining the position of a finger or pointer on a display screen (touch-screen input) or on a touch surface, e.g., “GlidePoint” (ALPS/Cirque); goniometer input (angle position, such as human joint position detector), etc. Many of such suitable devices are summarized in Kraiss, K. F., “Alternative Input Devices For Human Computer Interaction”, Forschungsinstitut für Anthropotechnik, Werthhoven, F. R. Germany. Another device which may also be suitable is the GyroPoint, available from Gyration Inc., which provides 2-D or 3-D input information in up to six axes of motion: height, length, depth, roll, pitch and yaw. Such a device may be useful to assist a user in inputting a complex description of an object, by providing substantially more degrees of freedom sensing than minimally required by a standard graphic user interface. The many degrees of freedom available thus provide suitable input for various types of systems, such as “Virtual Reality” systems or systems which track a moving object, where many degrees of freedom and a high degree of input accuracy are required. The Hallpot, a device which pivots a magnet about a Hall effect sensor to produce angular orientation information, a pair of which may be used to provide information about two axes of displacement, available from Elweco, Inc., Willoughby, Ohio, may also be employed as an input device.


User input devices may be broken down into a number of categories: direct inputs, i.e., touch-screen and light pen; indirect inputs, i.e., trackball, joystick, mouse, touch-tablet, bar code scanner (see, e.g., Atkinson, Terry, “VCR Programming: Making Life Easier Using Bar Codes”), keyboard, and multi-function keys; and interactive inputs, i.e., voice activation/instructions (see, e.g., Rosch, Winn L., “Voice Recognition: Understanding the Master's Voice”, PC Magazine, Oct. 27, 1987, 261-308), eye trackers, and data suit/data glove (see, e.g., Tello, Ernest R., “Between Man And Machine”, Byte, September 1988, 288-293; products of EXOS, Inc.; Data Glove). Each of the aforementioned input devices has advantages and disadvantages, which are known in the art.


Studies suggest that a “direct manipulation” style of interface has advantages for menu selection tasks. This type of interface provides visual objects on a display screen, which can be manipulated by “pointing” and “clicking” on them. For example, the popular Graphical User Interfaces (“GUIs”), such as Macintosh and Microsoft Windows, and others known in the art, use a direct manipulation style interface. A device such as a touch-screen, with a more natural selection technique, is technically preferable for direct manipulation. However, its accuracy limitations and relatively high cost make other inputs more commercially practical. Further, for extended interactive use, touch-screens are not a panacea for office productivity applications, and the user must be within arm's length of the touch-screen display. In a cursor positioning task, Albert (1982) found the trackball to be the most accurate pointing device and the touch-screen to be the least accurate when compared with other input devices such as the light pen, joystick, data tablet, and keyboard. Epps (1986) found both the mouse and trackball to be somewhat faster than both the touch-pad and joystick, but concluded that there were no significant performance differences between the mouse and the trackball.


It is noted that in text-based applications, an input device that is accessible, without the necessity of moving the user's hands from the keyboard, may be preferred. Thus, for example, Electronic Engineering Times (EET), Oct. 28, 1991, p. 62, discloses a miniature joystick incorporated into the functional area of the keyboard. This miniature joystick has been successfully incorporated into a number of laptop computers.


The following references are also relevant to the interface aspects of the present invention:

  • Hoffberg, Linda I., “AN IMPROVED HUMAN FACTORED INTERFACE FOR PROGRAMMABLE DEVICES: A CASE STUDY OF THE VCR”, Master's Thesis, Tufts University (Master of Sciences in Engineering Design, November 1990).
  • “Bar Code Programs VCR”, Design News, Feb. 1, 1988, 26.
  • “How to find the best value in VCRs”, Consumer Reports, March 1988, 135-141.
  • “Low-Cost VCRs: More For Less”, Consumer Reports, March 1990, 168-172.
  • “Nielsen Views VCRs”, Television Digest, Jun. 23, 1988, 15.
  • “The Highs and Lows of Nielsen Homevideo Index”, Marketing & Media Decisions, November 1985, 84-86+.
  • “The Quest for ‘User Friendly’”, U.S. News & World Report, Jun. 13, 1988, 54-56.
  • “The Smart House: Human Factors in Home Automation”, Human Factors in Practice, December 1990, 1-36.
  • “VCR, Camcorder Trends”, Television Digest, Vol. 29:16 (Mar. 20, 1989).
  • “VCR's: A Look At The Top Of The Line”, Consumer Reports, March 1989, 167-170.
  • “VHS Videocassette Recorders”, Consumer Guide, 1990, 17-20.
  • Abedini, Kamran, “An Ergonomically-improved Remote Control Unit Design”, Interface '87 Proceedings, 375-380.
  • Abedini, Kamran, and Hadad, George, “Guidelines For Designing Better VCRs”, Report No. IME 462, Feb. 4, 1987.
  • Bensch, U., “VPV—VIDEOTEXT PROGRAMS VIDEORECORDER”, IEEE Transactions on Consumer Electronics, 34(3):788-792.
  • Berger, Ivan, “Secrets of the Universals”, Video, February 1989, 45-47+.
  • Beringer, D. B., “A Comparative Evaluation of Calculator Watch Data Entry Technologies: Keyboards to Chalkboards”, Applied Ergonomics, December 1985, 275-278.
  • Bier, E. A. et al. “MMM: A User Interface Architecture for Shared Editors on a Single Screen,” Proceedings of the ACM Symposium on User Interface Software and Technology, Nov. 11-13, 1991, p. 79.
  • Bishop, Edward W., and Guinness, G. Victor Jr., “Human Factors Interaction with Industrial Design”, Human Factors, 8(4):279-289 (August 1966).
  • Brown, Edward, “Human Factors Concepts For Management”, Proceedings of the Human Factors Society, 1973, 372-375.
  • Bulkeley, Debra, “The Smartest House in America”, Design News, Oct. 19, 1987, 56-61.
  • Card, Stuart K., “A Method for Calculating Performance times for Users of Interactive Computing Systems”, IEEE, 1979, 653-658.
  • Carlson, Mark A., “Design Goals for an Effective User Interface”, Electro/82 Proceedings, 3/1/1-3/1/4.
  • Carlson, Mark A., “Design Goals for an Effective User Interface”, Human Interfacing with Instruments, Session 3.
  • Carroll, Paul B., “High Tech Gear Draws Cries of ‘Uncle’”, Wall Street Journal, Apr. 27, 1988, 29.
  • Cobb, Nathan, “I don't get it”, Boston Sunday Globe Magazine, Mar. 25, 1990, 23-29.
  • Davis, Fred, “The Great Look-and-Feel Debate”, A+, 5:9-11 (July 1987).
  • Dehning, Waltraud, Essig Heidrun, and Maass, Susanne, The Adaptation of Virtual Man-Computer Interfaces to User Requirements in Dialogs, Germany: Springer-Verlag, 1981.
  • Ehrenreich, S. L., “Computer Abbreviations—Evidence and Synthesis”, Human Factors, 27(2): 143-155 (April 1985).
  • Friedman, M. B., “An Eye Gaze Controlled Keyboard”, Proceedings of the 2nd International Conference on Rehabilitation Engineering, 1984, 446-447.
  • Gilfoil, D., and Mauro, C. L., “Integrating Human Factors and Design: Matching Human Factors Methods up to Product Development”, C. L. Mauro Assoc., Inc., 1-7.
  • Gould, John D., Boies, Stephen J., Meluson, Antonia, Rasammy, Marwan, and Vosburgh, Ann Marie, “Entry and Selection Methods For Specifying Dates”, Human Factors, 32(2):199-214 (April 1989).
  • Green, Lee, “Thermo Tech: Here's a common sense guide to the new thinking thermostats”, Popular Mechanics, October 1985, 155-159.
  • Grudin, Jonathan, “The Case Against User Interface Consistency”, MCC Technical Report Number ACA-HI-002-89, January 1989.
  • Harvey, Michael G., and Rothe, James T., “VideoCassette Recorders: Their Impact on Viewers and Advertisers”, Journal of Advertising, 25:19-29 (December/January 1985).
  • Hawkins, William J., “Super Remotes”, Popular Science, February 1989, 76-77.
  • Henke, Lucy L., and Donohue, Thomas R., “Functional Displacement of Traditional TV Viewing by VCR Owners”, Journal of Advertising Research, 29:18-24 (April-May 1989).
  • Hoban, Phoebe, “Stacking the Decks”, New York, Feb. 16, 1987, 20:14.
  • Howard, Bill, “Point and Shoot Devices”, PC Magazine, 6:95-97 (August 1987).
  • Jane Pauley Special, NBC TV News Transcript, Jul. 17, 1990, 10:00 PM.
  • Kolson, Ann, “Computer wimps drown in a raging sea of technology”, The Hartford Courant, May 24, 1989, B1.
  • Kreifeldt, J. G., “A Methodology For Consumer Product Safety Analysis”, The 3rd National Symposium on Human Factors in Industrial Design in Consumer Products, August 1982, 175-184.
  • Kreifeldt, John, “Human Factors Approach to Medical Instrument Design”, Electro/82 Proceedings, 3/3/1-3/3/6.
  • Kuocheng, Andy Poing, and Ellingstad, Vernon S., “Touch Tablet and Touch Input”, Interface '87, 327.
  • Ledgard, Henry, Singer, Andrew, and Whiteside, John, Directions in Human Factors for Interactive Systems, New York, Springer-Verlag, 1981.
  • Lee, Eric, and MacGregor, James, “Minimizing User Search Time in Menu Retrieval Systems”, Human Factors, 27(2):157-162 (April 1986).
  • Leon, Carol Boyd, “Selling Through the VCR”, American Demographics, December 1987, 40-43.
  • Long, John, “The Effect of Display Format on the Direct Entry of Numerical Information by Pointing”, Human Factors, 26(1):3-17 (February 1984).
  • Mantei, Marilyn M., and Teorey, Toby J., “Cost/Benefit Analysis for Incorporating Human Factors in the Software Lifecycle”, Association for Computing Machinery, 1988.
  • Meads, Jon A., “Friendly or Frivolous”, Datamation, Apr. 1, 1988, 98-100.
  • Moore, T. G. and Dartnall, “Human Factors of a Microelectronic Product: The Central Heating Timer/Programmer”, Applied Ergonomics, 1983, 13(1): 15-23.
  • Norman, Donald A., “Infuriating By Design”, Psychology Today, 22(3):52-56 (March 1988).
  • Norman, Donald A., The Psychology of Everyday Things, New York, Basic Book, Inc. 1988.
  • Platte, Hans-Joachim, Oberjatzas, Gunter, and Voessing, Walter, “A New Intelligent Remote Control Unit for Consumer Electronic Device”, IEEE Transactions on Consumer Electronics, Vol. CE-31(1):59-68 (February 1985).
  • Rogus, John G. and Armstrong, Richard, “Use of Human Engineering Standards in Design”, Human Factors, 19(1): 15-23 (February 1977).
  • Rosch, Winn L., “Voice Recognition: Understanding the Master's Voice”, PC Magazine, Oct. 27, 1987, 261-308.
  • Sarver, Carleton, “A Perfect Friendship”, High Fidelity, 39:42-49 (May 1989).
  • Schmitt, Lee, “Let's Discuss Programmable Controllers”, Modern Machine Shop, May 1987, 90-99.
  • Shneiderman, Ben, Designing the User Interface: Strategies for Effective Human-Computer Interaction, Reading, Mass., Addison-Wesley, 1987.
  • Smith, Sidney J., and Mosier, Jane N., Guidelines for Designing User Interface Software, Bedford, Mass., MITRE, 1986.
  • Sperling, Barbara Bied, and Tullis, Thomas S., “Are You a Better ‘Mouser’ or ‘Trackballer’? A Comparison of Cursor-Positioning Performance”, An Interactive/Poster Session at the CHI+GI'87 Graphics Interface and Human Factors in Computing Systems Conference.
  • Streeter, L. A., Ackroff, J. M., and Taylor, G. A. “On Abbreviating Command Names”, The Bell System Technical Journal, 62(6): 1807-1826 (July/August 1983).
  • Swanson, David, and Klopfenstein, Bruce, “How to Forecast VCR Penetration”, American Demographic, December 1987, 44-45.
  • Tello, Ernest R., “Between Man And Machine”, Byte, September 1988, 288-293.
  • Thomas, John, C., and Schneider, Michael L., Human Factors in Computer Systems, New Jersey, Ablex Publ. Co., 1984.
  • Trachtenberg, Jeffrey A., “How do we confuse thee? Let us count the ways”, Forbes, Mar. 21, 1988, 159-160.
  • Tyldesley, D. A., “Employing Usability Engineering in the Development of Office Products”, The Computer Journal, 31(5):431-436 (1988).
  • Verplank, William L., “Graphics in Human-Computer Communication: Principles of Graphical User-Interface Design”, Xerox Office Systems.
  • Voyt, Carlton F., “PLC's Learn New Languages”, Design News, Jan. 2, 1989, 78.
  • Whitefield, A. “Human Factors Aspects of Pointing as an Input Technique in Interactive Computer Systems”, Applied Ergonomics, June 1986, 97-104.
  • Wiedenbeck, Susan, Lambert, Robin, and Scholtz, Jean, “Using Protocol Analysis to Study the User Interface”, Bulletin of the American Society for Information Science, June/July 1989, 25-26.
  • Wilke, William, “Easy Operation of Instruments by Both Man and Machine”, Electro/82 Proceedings, 3/2/1-3/2/4.
  • Yoder, Stephen Kreider, “U.S. Inventors Thrive at Electronics Show”, The Wall Street Journal, Jan. 10, 1990, B1.
  • Zeisel, Gunter, Tomas, Philippe, Tomaszewski, Peter, “An Interactive Menu-Driven Remote Control Unit for TV-Receivers and VC-Recorders”, IEEE Transactions on Consumer Electronics, 34(3):814-818.


AGENT TECHNOLOGIES


Presently well known human computer interfaces include so-called agent technology, in which the computer interface learns a task defined (inherently or explicitly) by the user and subsequently executes the task. Such systems are available from Firefly (www.firefly.com), and are commercially present in some on-line commerce systems, such as Amazon.com (www.amazon.com). See:

  • “ABI WHAP, Web Hypertext Applications Processor,” alphabase.com (1996, Jul. 11).
  • “AdForce Feature Set”, www.imgis.com (1997, Apr. 11).
  • “IPRO,” www.ipro.com, Internet profiles Corporation Home and other Web Pages (1996, Jul. 11).
  • “Media Planning is Redefined in a New Era of Online Advertising,” PR Newswire (1996, Feb. 5).
  • “My Yahoo! news summary for My Yahoo! Quotes”, my.yahoo.com (1997, Jan. 27).
  • “NetGravity Announces Adserver 2.1”, www.netgravity.com (1997, Apr. 11).
  • “Netscape & NetGravity: Any Questions?”, www.netgravity.com (1996, Jul. 11).
  • “Network Site Main”, www.doubleclick.net (1997, Apr. 11).
  • “Real Media,” www.realmedia.com (1996, Jul. 11).
  • “The Front Page”, live.excite.com (1997, Jan. 27) and (1997, Apr. 11).
  • “The Pointcast Network,” www.pointcast.com (1996, Spring).
  • “The Power of PenPoint”, Carr et al., 1991, p. 39, Chapter 13, pp. 258-260.
  • “Welcome to Lycos,” www.lycos.com (1997, Jan. 27).
  • Abatemarco, Fred, “From the Editor”, Popular Science, Sep. 1992, p. 4
  • Berniker, M., “Nielsen plans Internet Service,” Broadcasting & Cable, 125(30):34(1995, Jul. 24).
  • Berry, Deanne, et al. In an Apr. 10, 1990 news release, Symantec announced a new version of MORE™.
  • Betts, M., “Sentry cuts access to naughty bits,” Computers and Security, vol. 14, No. 7, p. 615(1995).

  • Boy, Guy A., Intelligent Assistant Systems, Harcourt Brace Jovanovich, 1991, uses the term “Intelligent Assistant Systems”.
  • Bussey, H.E., et al., “Service Architecture, Prototype Description, and Network Implications of a Personalized Information Grazing Service,” IEEE Multiple Facets of Integration Conference Proceedings, vol. 3, No. Conf. 9, Jun. 3, 1990, pp. 1046-1053.
  • Donnelley, J.E., “WWW media distribution via Hopewise Reliable Multicast,” Computer Networks and ISDN Systems, vol. 27, No. 6, pp. 781-788 (Apr., 1995).
  • Edwards, John R., “Q&A: Integrated Software with Macros and an Intelligent Assistant”, Byte Magazine, Jan. 1986, vol. 11, Issue 1, pp. 120-122, critiques the Intelligent Assistant by Symantec Corporation.
  • Elofson, G. and Konsynski, B., “Delegation Technologies: Environmental Scanning with Intelligent Agents”, Journal of Management Information Systems, Summer 1991, vol. 8, Issue 1, pp. 37-62.
  • Garretson, R., “IBM Adds ‘Drawing Assistant’ Design Tool to Graphics Series”, PC Week, Aug. 13, 1985, vol. 2, Issue 32, p. 8.
  • Gessler, S. and Kotulla A., “PDAs as mobile WWW browsers,” Computer Networks and ISDN Systems, vol. 28, No. 1-2, pp. 53-59 (Dec. 1995).
  • Glinert-Stevens, Susan, “Microsoft Publisher: Desktop Wizardry”, PC Sources, Feb., 1992, vol. 3, Issue 2, p. 357.
  • Goldberg, Cheryl, “IBM Drawing Assistant: Graphics for the EGA”, PC Magazine, Dec. 24, 1985, vol. 4, Issue 26, p. 255.
  • Hendrix, Gary G. and Walter, Brett A., “The Intelligent Assistant: Technical Considerations Involved in Designing Q&A′s Natural-language Interface”, Byte Magazine, Dec. 1987, vol. 12, Issue 14, p. 251.
  • Hoffman, D.L. et al., “A New Marketing Paradigm for Electronic Commerce,” www2000.ogsm.vanderbilt.edu (1996, Feb. 19).
  • Information describing BroadVision One-to-One Application System: “Overview,” p. 1; Further Resources on One-To-One Marketing, p. 1; BroadVision Unleashes the Power of the Internet with Personalized Marketing and Selling, pp. 1-3; Frequently Asked Questions, pp. 1-3; Products, p. 1; BroadVision One-To-One™, pp. 1-2; Dynamic Command Center, p. 1; Architecture that Scales, pp. 1-2; Technology, p. 1; Creating a New Medium for Marketing and Selling BroadVision One-To-One and the World Wide Web a White Paper, pp. 1-15; www.broadvision.com (1996, Jan.-Mar.).
  • Jones, R., “Digital's World-Wide Web server: A case study,” Computer Networks and ISDN Systems, vol. 27, No. 2, pp. 297-306 (Nov. 1994).
  • McFadden, M., “The Web and the Cookie Monster,” Digital Age, (1996, Aug.).
  • Nadoli, Gajanana and Biegel, John, “Intelligent Agents in the Simulation of Manufacturing Systems”, Proceedings of the SCS Multiconference on AI and Simulation, 1989.
  • Nilsson, B.A., “Microsoft Publisher is an Honorable Start for DTP Beginners”, Computer Shopper, Feb. 1992, vol. 12, Issue 2, p. 426, evaluates Microsoft Publisher and Page Wizard.
  • O'Connor, Rory J., “Apple Banking on Newton's Brain”, San Jose Mercury News, Wednesday, Apr. 22, 1992.
  • Ohsawa, I. and Yonezawa, A., “A Computational Model of an Intelligent Agent Who Talks with a Person”, Research Reports on Information Sciences, Series C, Apr. 1989, No. 92, pp. 1-18.
  • Pazzani, M. et al., “Learning from hotlists and coldlists: Towards a WWW Information Filtering and Seeking Agent,” Proceedings International Conference on Tools with Artificial Intelligence, Jan. 1995, pp. 492-495.
  • Poor, Alfred, “Microsoft Publisher”, PC Magazine, Nov. 26, 1991, vol. 10, Issue 20, p. 40, evaluates Microsoft Publisher.
  • PRNewswire, information concerning the PointCast Network (PCN) (1996, Feb. 13) p. 213.
  • Raggett, D., “A review of the HTML+ document format,” Computer Networks and ISDN Systems, vol. 27, No. 2, pp. 135-145 (Nov. 1994).
  • Rampe, Dan, et al. In a Jan. 9, 1989 news release, Claris Corporation announced two products, SmartForm Designer and SmartForm Assistant, which provide “Intelligent Assistance”, such as custom help messages, choice lists, and data-entry validation and formatting.
  • Ratcliffe, Mitch and Gore, Andrew, “Intelligent Agents take U.S. Bows.”, MacWeek, Mar. 2, 1992, vol. 6, No. 9, p. 1.
  • Sharif Heger, A. and Koen, B. V., “KNOWBOT: an Adaptive Data Base Interface”, Nuclear Science and Engineering, Feb. 1991, vol. 107, No. 2, pp. 142-157.
  • Soviero, Marcelle M., “Your World According to Newton”, Popular Science, Sep. 1992, pp. 45-49.
  • Upendra Shardanand, “Social Information Filtering for Music Recommendation” Sep. 1994, pp. 1-93, Massachusetts Institute of Technology, Thesis.
  • Weber, Thomas E., “Software Lets Marketers Target Web Ads,” The Wall Street Journal, Apr. 21, 1997
  • Weiman, Liza and Moran, Tom, “A Step toward the Future”, Macworld, Aug. 1992, pp. 129-131.
  • Yan, T.W. and Garcia-Molina, H., “SIFT - A Tool for Wide-Area Information Dissemination,” Paper presented at the USENIX Technical Conference, New Orleans, La. (1995, Jan.), pp. 177-186.


Industrial Controls


Industrial control systems are well known. Typically, a dedicated reliable hardware module controls a task using a conventional algorithm, with a low level user interface. These devices are programmable, and therefore a high level software program may be provided to translate user instructions into the low level commands, and to analyze any return data. See, U.S. Pat. No. 5,506,768, expressly incorporated herein by reference. See, also:

  • A. B. Corripio, “Tuning of Industrial Control Systems”, Instrument Society of America, Research Triangle Park, N.C. (1990) pp. 65-81.
  • C. J. Harris & S. A. Billings, “Self-Tuning and Adaptive Control: Theory and Applications”, Peter Peregrinus LTD (1981) pp. 20-33.
  • C. Rohrer & Clay Nesler, “Self-Tuning Using a Pattern Recognition Approach”, Johnson Controls, Inc., Research Brief 228 (Jun. 13, 1986).
  • D. E. Seborg, T. F. Edgar, & D. A. Mellichamp, “Process Dynamics and Control”, John Wiley & Sons, NY (1989) pp. 294-307, 538-541.
  • E. H. Bristol & T. W. Kraus, “Life with Pattern Adaptation”, Proceedings 1984 American Control Conference, pp. 888-892, San Diego, Calif. (1984).
  • Francis Scheid, “Schaum's Outline Series-Theory & Problems of Numerical Analysis”, McGraw-Hill Book Co., NY (1968) pp. 236, 237, 243, 244, 261.
  • K. J. Astrom and B. Wittenmark, “Adaptive Control”, Addison-Wesley Publishing Company (1989) pp. 105-215.
  • K. J. Astrom, T. Hagglund, “Automatic Tuning of PID Controllers”, Instrument Society of America, Research Triangle Park, N.C. (1988) pp. 105-132.
  • R. W. Haines, “HVAC Systems Design Handbook”, TAB Professional and Reference Books, Blue Ridge Summit, Pa. (1988) pp. 170-177.
  • S. M. Pandit & S. M. Wu, “Time Series & System Analysis with Applications”, John Wiley & Sons, Inc., NY (1983) pp. 200-205.
  • T. W. Kraus & T. J. Myron, “Self-Tuning PID Controller Uses Pattern Recognition Approach”, Control Engineering, pp. 106-111, June 1984.
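
While the tuning strategies in these references differ, they share the underlying proportional-integral-derivative (PID) loop being tuned. The following minimal sketch, which is illustrative only and not drawn from any one cited work, shows a discrete PID update of the kind such high-level supervisory software would command; the gains and the toy plant model are assumptions for demonstration.

```python
# Minimal discrete PID loop; gains and plant are illustrative assumptions.
def pid_step(setpoint, measurement, state, kp, ki, kd, dt):
    """One update of a textbook PID controller."""
    error = setpoint - measurement
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

# Toy first-order plant: temperature drifts toward ambient, pushed by u.
state = {"integral": 0.0, "prev_error": 0.0}
temp, dt = 15.0, 1.0
for _ in range(50):
    u = pid_step(setpoint=20.0, measurement=temp, state=state,
                 kp=2.0, ki=0.1, kd=0.5, dt=dt)
    temp += 0.1 * (u - (temp - 15.0)) * dt  # crude plant response
print(round(temp, 2))  # settles near the 20.0 setpoint
```

A self-tuning controller of the kind surveyed above would additionally adjust kp, ki and kd on-line, e.g., from the observed response pattern.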


Pattern Recognition


Another aspect of some embodiments of the invention relates to signal analysis and complex pattern recognition. This aspect encompasses analysis of any data set presented to the system: internal, user interface, or the environment in which it operates. While semantic, optical and audio analysis systems are known, the invention is by no means limited to these types of data.


Pattern recognition involves examining a complex data set to determine similarities (in the broadest context) with other data sets, typically data sets which have been previously characterized. These data sets may comprise multivariate inputs, sequences in time or another dimension, or a combination of both, i.e., multivariate data sets with multiple dimensions.
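
By way of a concrete and purely illustrative sketch of this matching step, the following compares an uncharacterized input sequence against previously characterized reference sequences using normalized correlation; the data, class names, and choice of metric are assumptions for demonstration, not a disclosed method.

```python
# Match an unknown data set to the best previously characterized one.
import math

def correlation(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

references = {               # previously characterized data sets
    "rising": [1.0, 2.0, 3.1, 4.2, 5.0],
    "flat":   [3.0, 3.1, 2.9, 3.0, 3.1],
}
unknown = [0.9, 2.1, 2.9, 4.0, 5.2]
best = max(references, key=lambda k: correlation(unknown, references[k]))
print(best)  # -> rising
```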


The following cited patents and publications are relevant to pattern recognition and control aspects of the present invention, and are herein expressly incorporated by reference:


U.S. Pat. No. 5,067,163, incorporated herein by reference, discloses a method for determining a desired image signal range from an image having a single background, in particular a radiation image such as a medical X-ray. This reference teaches basic image enhancement techniques.


U.S. Pat. No. 5,068,664, incorporated herein by reference, discloses a method and device for recognizing a target among a plurality of known targets, by using a probability based recognition system. This patent document cites a number of other references, which are relevant to the problem of image recognition:

  • Appriou, A., “Interet des theories de l'incertain en fusion de donnees”, Colloque International sur le Radar Paris, 24-28 avril 1989.
  • Appriou, A., “Procedure d'aide a la decision multi-informateurs. Applications a la classification multi-capteurs de cibles”, Symposium de l'Avionics Panel (AGARD) Turquie, 25-29 avril 1988.
  • Arrow, K. J., “Social choice and individual values”, John Wiley and Sons Inc. (1963).
  • Bellman, R. E., L. A. Zadeh, “Decision making in a fuzzy environment”, Management Science, 17(4) (December 1970).
  • Bhatnagar, R. K., L. N. Kamal, “Handling uncertain information: a review of numeric and non-numeric methods”, Uncertainty in Artificial Intelligence, L. N. Kamal and J. F. Lemmer, Eds. (1986).
  • Blair, D., R. Pollack, “La logique du choix collectif” Pour la Science (1983).
  • Chao, J. J., E. Drakopoulos, C. C. Lee, “An evidential reasoning approach to distributed multiple hypothesis detection”, Proceedings of the 20th Conference on decision and control, Los Angeles, Calif., December 1987.
  • Dempster, A. P., “A generalization of Bayesian inference”, Journal of the Royal Statistical Society, Vol. 30, Series B (1968).
  • Dempster, A. P., “Upper and lower probabilities induced by a multivalued mapping”, Annals of mathematical Statistics, no. 38 (1967).
  • Dubois, D., “Modeles mathematiques de l'imprecis et de l'incertain en vue d'applications aux techniques d'aide a la decision”, Doctoral Thesis, University of Grenoble (1983).
  • Dubois, D., N. Prade, “Combination of uncertainty with belief functions: a reexamination”, Proceedings 9th International Joint Conference on Artificial Intelligence, Los Angeles (1985).
  • Dubois, D., N. Prade, “Fuzzy sets and systems-Theory and applications”, Academic Press, New York (1980).
  • Dubois, D., N. Prade, “Theorie des possibilites: application a la representation des connaissances en informatique”, Masson, Paris (1985).
  • Duda, R. O., P. E. Hart, M. J. Nilsson, “Subjective Bayesian methods for rule-based inference systems”, Technical Note 124-Artificial Intelligence Center-SRI International.
  • Fua, P. V., “Using probability density functions in the framework of evidential reasoning Uncertainty in knowledge based systems”, B. Bouchon, R. R. Yager, Eds. Springer Verlag (1987).
  • Ishizuka, M., “Inference methods based on extended Dempster and Shafer's theory for problems with uncertainty/fuzziness”, New Generation Computing, 1:159-168 (1983), Ohmsha, Ltd, and Springer Verlag.
  • Jeffrey, R. J., “The logic of decision”, The University of Chicago Press, Ltd., London (1983)(2nd Ed.).
  • Kaufmann, A., “Introduction a la theorie des sous-ensembles flous”, Vol. 1, 2 et 3-Masson-Paris (1975).
  • Keeney, R. L., B. Raiffa, “Decisions with multiple objectives: Preferences and value tradeoffs”, John Wiley and Sons, New York (1976).
  • Ksienski et al., “Low Frequency Approach to Target Identification”, Proc. of the IEEE, 63(12): 1651-1660 December, (1975).
  • Kyburg, H. E., “Bayesian and non Bayesian evidential updating”, Artificial Intelligence 31:271-293 (1987).
  • Roy, B., “Classements et choix en presence de points de vue multiples”, R.I.R.O.-2eme annee-no. 8, pp. 57-75 (1968).
  • Roy, B., “Electre III: un algorithme de classements fonde sur une representation floue des preferences en presence de criteres multiples”, Cahiers du CERO, 20(1):3-24 (1978).
  • Scharlic, A., “Decider sur plusieurs criteres. Panorama de l'aide a la decision multicritere” Presses Polytechniques Romandes (1985).
  • Shafer, G., “A mathematical theory of evidence”, Princeton University Press, Princeton, N.J. (1976).
  • Sugeno, M., “Theory of fuzzy integrals and its applications”, Tokyo Institute of Technology (1974).
  • Vannicola et al, “Applications of Knowledge based Systems to Surveillance”, Proceedings of the 1988 IEEE National Radar Conference, 20-21 Apr. 1988, pp. 157-164.
  • Yager, R. R., “Entropy and specificity in a mathematical theory of Evidence”, Int. J. General Systems, 9:249-260 (1983).
  • Zadeh, L. A., “Fuzzy sets as a basis for a theory of possibility”, Fuzzy sets and Systems 1:3-28 (1978).
  • Zadeh, L. A., “Fuzzy sets”, Information and Control, 8:338-353 (1965).
  • Zadeh, L. A., “Probability measures of fuzzy events”, Journal of Mathematical Analysis and Applications, 23:421-427 (1968).


U.S. Pat. No. 5,067,161, incorporated herein by reference, relates to a video image pattern recognition system, which recognizes objects in near real time.


U.S. Pat. Nos. 4,817,176 and 4,802,230, both incorporated herein by reference, relate to harmonic transform methods of pattern matching of an undetermined pattern to known patterns, and are useful in the pattern recognition method of the present invention. U.S. Pat. No. 4,998,286, incorporated herein by reference, relates to a harmonic transform method for comparing multidimensional images, such as color images, and is useful in the present pattern recognition methods.


U.S. Pat. No. 5,067,166, incorporated herein by reference, relates to a pattern recognition system, in which a local optimum match is determined between subsets of candidate reference label sequences and candidate templates. It is clear that this method is useful in the pattern recognition aspects of the present invention. It is also clear that the interface and control system of the present invention are useful adjuncts to the method disclosed in U.S. Pat. No. 5,067,166.


U.S. Pat. No. 5,048,095, incorporated herein by reference, relates to the use of a genetic learning algorithm to adaptively segment images, which is an initial stage in image recognition. This patent has a software listing for this method. It is clear that this method is useful in the pattern recognition aspects of the present invention. It is also clear that the interface and control system of the present invention are useful adjuncts to the method disclosed in U.S. Pat. No. 5,048,095.


Fractal-Based Image Processing


Fractals are a relatively new field of science and technology that relates to the study of order and chaos. While the field of fractals is now very dense, a number of relevant principles are applicable. First, when the coordinate axes of a space are not independent, and are related by a recursive algorithm, then the space is considered to have a fractional dimensionality. One characteristic of such systems is that a mapping of such spaces tends to have self-similarity on a number of scales. Interestingly, natural systems have also been observed to have self-similarity over several orders of magnitude, although, as presently believed, not over an unlimited range of scales. Therefore, one theory holds that images of natural objects may be efficiently described by iterated function systems (IFS), which provide a series of parameters for a generic formula or algorithm which, when the process is reversed, yields an image visually similar to the starting image. Since the “noise” of the expanded data is masked by the “natural” appearance of the result, visually acceptable image compression may be provided at relatively high compression ratios. This theory remains the subject of significant debate, and, for example, wavelet algorithm advocates claim superior results for a more general set of starting images. It is noted that, on a mathematical level, wavelet and fractal theories have some common threads.
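
The IFS idea can be made concrete with the well-known “chaos game”: a handful of affine maps, applied in random order, regenerates a self-similar image from a few stored coefficients. The sketch below uses the classic Barnsley fern parameters purely as an illustration of how roughly two dozen numbers can stand in for a visually complex natural image.

```python
# Iterated function system (IFS) via the chaos game.
import random

MAPS = [  # (a, b, c, d, e, f, p): x' = a*x + b*y + e, y' = c*x + d*y + f
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def chaos_game(n=10000):
    x = y = 0.0
    points = []
    for _ in range(n):
        r, acc = random.random(), 0.0
        for a, b, c, d, e, f, p in MAPS:
            acc += p
            if r <= acc:  # pick a map with its assigned probability
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        points.append((x, y))
    return points  # plotted, these points trace a self-similar fern

print(len(chaos_game()))  # the whole "image" derives from 28 coefficients
```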


According to a particular embodiment of the invention, the expression of an image as an ordered set of coefficients of an algorithm, wherein the coefficients relate to elements of defined variation in scale, and the resulting set of coefficients is related to the underlying image morphology, is exploited in order to provide a means for pattern analysis and recognition without requiring decompression to an orthogonal coordinate space.
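
A minimal sketch of such coefficient-domain analysis follows, assuming a one-level Haar-style split into scale (average) and detail coefficients; the per-scale weighting is an illustrative assumption rather than a disclosed parameter.

```python
# Compare two signals in a multiscale coefficient space, without
# reconstructing them back into the original coordinate space.
def haar_level(signal):
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def coeff_distance(s1, s2):
    a1, d1 = haar_level(s1)
    a2, d2 = haar_level(s2)
    coarse = sum((x - y) ** 2 for x, y in zip(a1, a2))
    fine = sum((x - y) ** 2 for x, y in zip(d1, d2))
    return coarse + 0.5 * fine  # scales weighted differently (illustrative)

print(coeff_distance([1, 1, 4, 4], [1, 2, 4, 4]))  # -> 0.375
```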


U.S. Pat. Nos. 5,065,447, and 4,941,193, both incorporated herein by reference, relate to the compression of image data by using fractal transforms. These are discussed in detail below. U.S. Pat. No. 5,065,447 cites a number of references, relevant to the use of fractals in image processing:

  • U.S. Pat. No. 4,831,659.
  • “A New Class of Markov Processes for Image Encoding”, School of Mathematics, Georgia Inst. of Technology (1988), pp. 14-32.
  • “Construction of Fractal Objects with Iterated Function Systems”, Siggraph '85 Proceedings, 19(3):271-278 (1985).
  • “Data Compression: Pntng by Numbrs”, The Economist, May 21, 1988.
  • “Fractal Geometry-Understanding Chaos”, Georgia Tech Alumni Magazine, p. 16 (Spring 1986).
  • “Fractal Modelling of Biological Structures”, Perspectives in Biological Dynamics and Theoretical Medicine, Koslow, Mandell, Shlesinger, eds., Annals of New York Academy of Sciences, vol. 504, 179-194 (date unknown).
  • “Fractal Modelling of Real World Images, Lecture Notes for Fractals: Introduction, Basics and Perspectives”, Siggraph (1987).
  • “Fractals—A Geometry of Nature”, Georgia Institute of Technology Research Horizons, p. 9 (Spring 1986).
  • A. Jacquin, “A Fractal Theory of Iterated Markov Operators with Applications to Digital Image Coding”, PhD Thesis, Georgia Tech, 1989.
  • A. Jacquin, “Image Coding Based on a Fractal Theory of Iterated Contractive Image Transformations”, IEEE Transactions on Image Processing, 1(1):18 (Jan. 1992).
  • A. Jacquin, ‘Fractal image coding based on a theory of iterated contractive image transformations’, Proc. SPIE Visual Communications and Image Processing, 1990, pages 227-239.
  • A. E. Jacquin, ‘A novel fractal block-coding technique for digital images’, Proc. ICASSP 1990.
  • Baldwin, William, “Just the Bare Facts, Please”, Forbes Magazine, Dec. 12, 1988.
  • Barnsley et al., “A Better Way to Compress Images”, Byte Magazine, January 1988, pp. 213-225.
  • Barnsley et al., “Chaotic Compression”, Computer Graphics World, November 1987.
  • Barnsley et al., “Harnessing Chaos For Images Synthesis”, Computer Graphics, 22(4): 131-140 (August, 1988).
  • Barnsley et al., “Hidden Variable Fractal Interpolation Functions”, School of Mathematics, Georgia Institute of Technology, Atlanta, Ga. 30332, July, 1986.
  • Barnsley, M. F., “Fractals Everywhere”, Academic Press, Boston, Mass., 1988.
  • Barnsley, M. F., and Demko, S., “Iterated Function Systems and The Global Construction of Fractals”, Proc. R. Soc. Lond., A399:243-275 (1985).
  • Barnsley, M. F., Ervin, V., Hardin, D., Lancaster, J., “Solution of an Inverse Problem for Fractals and Other Sets”, Proc. Natl. Acad. Sci. U.S.A., 83:1975-1977 (April 1986).
  • Beaumont J M, “Image data compression using fractal techniques”, British Telecom Technological Journal 9(4):93-108 (1991).
  • Byte Magazine, January 1988, supra, cites:
  • D. S. Mazel, Fractal Modeling of Time-Series Data, PhD Thesis, Georgia Tech, 1991. (One dimensional, not pictures).
  • Derra, Skip, “Researchers Use Fractal Geometry,”, Research and Development Magazine, March 1988.
  • Elton, J., “An Ergodic Theorem for Iterated Maps”, Journal of Ergodic Theory and Dynamical Systems, 7 (1987).
  • Fisher Y, “Fractal image compression”, Siggraph 92.
  • Barnsley, Michael F. and Hurd, Lyman P., Fractal Image Compression, ISBN 0-86720-457-5, ca. 250 pp.
  • Fractal Image Compression: Theory and Application, Yuval Fisher (ed.), Springer Verlag, New York, 1995. ISBN number 0-387-94211-4.
  • Fractal Modelling of Biological Structures, School of Mathematics, Georgia Institute of Technology (date unknown).
  • G. E. Oien, S. Lepsoy & T. A. Ramstad, ‘An inner product space approach to image coding by contractive transformations’, Proc. ICASSP 1991, pp 2773-2776.
  • Gleick, James, “Making a New Science”, pp. 215, 239, date unknown.
  • Graf S, “Barnsley's Scheme for the Fractal Encoding of Images”, Journal Of Complexity, V8, 72-78 (1992).
  • Jacobs, E. W., Y. Fisher and R. D. Boss, “Image Compression: A Study of the Iterated Transform Method”, Signal Processing 29:251-263 (1992).
  • M. Barnsley, L. Anson, “Graphics Compression Technology”, SunWorld, October 1991, pp. 42-52.
  • M. F. Barnsley, A. Jacquin, F. Malassenet, L. Reuter & A. D. Sloan, ‘Harnessing chaos for image synthesis’, Computer Graphics, vol 22 no 4 pp 131-140, 1988.
  • M. F. Barnsley, A. E. Jacquin, ‘Application of recurrent iterated function systems to images’, Visual Comm. and Image Processing, vol SPIE-1001, 1988.
  • Mandelbrot, B., “The Fractal Geometry of Nature”, W.H. Freeman & Co., San Francisco, Calif., 1982, 1977.
  • Monro D M and Dudbridge F, “Fractal block coding of images”, Electronics Letters 28(11):1053-1054 (1992).
  • Monro D. M. & Dudbridge F. ‘Fractal approximation of image blocks’, Proc ICASSP 92, pp. III: 485-488.
  • Monro D. M. ‘A hybrid fractal transform’, Proc ICASSP 93, pp. V: 169-72.
  • Monro D. M., Wilson D., Nicholls J. A. ‘High speed image coding with the Bath Fractal Transform’, IEEE International Symposium on Multimedia Technologies Southampton, April 1993.
  • Peterson, Ivars, “Packing It In-Fractals.”, Science News, 131(18):283-285 (May 2, 1987).
  • S. A. Hollatz, “Digital image compression with two-dimensional affine fractal interpolation functions”, Department of Mathematics and Statistics, University of Minnesota-Duluth, Technical Report 91-2. (a nuts-and-bolts how-to-do-it paper on the technique).
  • Stark, J., “Iterated function systems as neural networks”, Neural Networks, Vol 4, pp 679-690, Pergamon Press, 1991.
  • Vrscay, Edward R., “Iterated Function Systems: Theory, Applications, and the Inverse Problem”, Fractal Geometry and Analysis, J. Belair and S. Dubuc (eds.), Kluwer Academic, 1991, 405-468.


U.S. Pat. No. 5,347,600, incorporated herein by reference, relates to a method and apparatus for compression and decompression of digital image data, using fractal methods. According to this method, digital image data is automatically processed by dividing stored image data into domain blocks and range blocks. The range blocks are subjected to processes such as a shrinking process to obtain mapped range blocks. The range blocks or domain blocks may also be processed by processes such as affine transforms. Then, for each domain block, the mapped range block which is most similar to the domain block is determined, and the address of that range block and the processes the blocks were subjected to are combined as an identifier which is appended to a list of identifiers for other domain blocks. The list of identifiers for all domain blocks is called a fractal transform and constitutes a compressed representation of the input image. To decompress the fractal transform and recover the input image, an arbitrary input image is formed into range blocks and the range blocks processed in a manner specified by the identifiers to form a representation of the original input image.
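
The flavor of such a scheme can be sketched in one dimension as follows, using the patent's domain/range vocabulary; the block size, the mean-offset identifier, and the omission of a luminance scale factor and affine transforms are simplifying assumptions for illustration only.

```python
# 1-D fractal-transform sketch: store only (range index, offset) per block.
def shrink(block):  # average adjacent pairs: 2N samples -> N samples
    return [(block[i] + block[i + 1]) / 2 for i in range(0, len(block), 2)]

def compress(signal, dsize=4):
    ranges = [shrink(signal[i:i + 2 * dsize])
              for i in range(0, len(signal) - 2 * dsize + 1, dsize)]
    transform = []
    for i in range(0, len(signal), dsize):
        domain = signal[i:i + dsize]
        best = min(range(len(ranges)), key=lambda r: sum(
            (d - v) ** 2 for d, v in zip(domain, ranges[r])))
        offset = sum(domain) / dsize - sum(ranges[best]) / dsize
        transform.append((best, offset))
    return transform  # the "fractal transform": a list of identifiers

def decompress(transform, length, dsize=4, iterations=8):
    signal = [0.0] * length       # start from an arbitrary signal
    for _ in range(iterations):   # the contractive mapping converges
        ranges = [shrink(signal[i:i + 2 * dsize])
                  for i in range(0, length - 2 * dsize + 1, dsize)]
        signal = [ranges[best][k] + off
                  for best, off in transform for k in range(dsize)]
    return signal

data = [10, 12, 14, 16, 20, 22, 24, 26, 30, 32, 34, 36, 40, 42, 44, 46]
print([round(x, 1) for x in decompress(compress(data), len(data))])
```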


“Image Compression Using Fractals and Wavelets”, Final Report for the Phase II Contract Sponsored by the Office of Naval Research, Contract No. N00014-91-C-0117, Netrologic Inc., San Diego, California (Jun. 2, 1993), relates to various methods of compressing image data, including fractals and wavelets. This method may also be applicable in pattern recognition applications. This reference provides theory and comparative analysis of compression schemes.


An image extraction method based on fractal processing is described in Kim, D. H.; Caulfield, H. J.; Jannson, T.; Kostrzewski, A.; Savant, G., “Optical fractal image processor for noise-embedded targets detection”, Proceedings of the SPIE—The International Society for Optical Engineering, Vol. 2026, p. 144-9 (1993) (SPIE Conf: Photonics for Processors, Neural Networks, and Memories, 12-15 Jul. 1993, San Diego, Calif., USA). According to this paper, a fractal dimensionality measurement and analysis-based automatic target recognition (ATR) system is described. The ATR is a multi-step procedure, based on fractal image processing, and can simultaneously perform preprocessing, interest locating, segmenting, feature extracting, and classifying. See also, Cheong, C. K.; Aizawa, K.; Saito, T.; Hatori, M., “Adaptive edge detection with fractal dimension”, Transactions of the Institute of Electronics, Information and Communication Engineers D-II, J76D-II(11):2459-63 (1993); Hayes, H. I.; Solka, J. L.; Priebe, C. E.; “Parallel computation of fractal dimension”, Proceedings of the SPIE—The International Society for Optical Engineering, 1962:219-30 (1993); Priebe, C. E.; Solka, J. L.; Rogers, G. W., “Discriminant analysis in aerial images using fractal based features”, Proceedings of the SPIE—The International Society for Optical Engineering, 1962:196-208 (1993). See also, Anson, L., “Fractal Image Compression”, Byte, October 1993, pp. 195-202; “Fractal Compression Goes On-Line”, Byte, September 1993.


Methods employing other than fractal-based algorithms may also be used. See, e.g., Liu, Y., “Pattern recognition using Hilbert space”, Proceedings of the SPIE—The International Society for Optical Engineering, 1825:63-77 (1992), which describes a learning approach, Hilbert learning. This approach is similar to fractal learning, but the fractal part is replaced by Hilbert space. As in fractal learning, the first stage is to encode an image to a small vector in the internal space of a learning system. The next stage is to quantize the internal parameter space. The internal space of a Hilbert learning system is defined as follows: a pattern can be interpreted as a representation of a vector in a Hilbert space. Any vector in a Hilbert space can be expanded. If a vector happens to be in a subspace of a Hilbert space where the dimension L of the subspace is low (order of 10), the vector can be specified by its norm, an L-vector, and the Hermitian operator which spans the Hilbert space, establishing a mapping from an image space to the internal space P. This mapping converts an input image to a 4-tuple: t in P=(Norm, T, N, L-vector), where T is an operator parameter space and N is a set of integers which specifies the boundary condition. The encoding is implemented by mapping an input pattern into a point in its internal space. The system uses a local search algorithm, i.e., the system adjusts its internal data locally. The search is first conducted for an operator in a parameter space of operators, then an error function delta(t) is computed. The algorithm stops at a local minimum of delta(t). Finally, the input training set divides the internal space by a quantization procedure. See also, Liu, Y., “Extensions of fractal theory”, Proceedings of the SPIE—The International Society for Optical Engineering, 1966:255-68 (1993).


Fractal methods may be used for pattern recognition. See, Sadjadi, F., “Experiments in the use of fractal in computer pattern recognition”, Proceedings of the SPIE—The International Society for Optical Engineering, 1960:214-22 (1993). According to this reference, man-made objects in infrared and millimeter wave (MMW) radar imagery may be recognized using fractal-based methods. The technique is based on estimation of the fractal dimensions of sequential blocks of an image of a scene and slicing of the histogram of the fractal dimensions computed by Fourier regression. The technique is shown to be effective for the detection of tactical military vehicles in IR, and of airport attributes in MMW radar imagery.
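
The dimension estimate at the heart of such techniques can be sketched with box counting (the cited work uses Fourier regression; box counting is substituted here as a compact, well-known alternative): count occupied boxes at several scales and regress log N against log(1/s).

```python
# Box-counting fractal dimension of a point set (illustrative data).
import math

def box_count(points, box):
    return len({(int(x // box), int(y // box)) for x, y in points})

def fractal_dimension(points, boxes=(1, 2, 4, 8)):
    xs = [math.log(1.0 / b) for b in boxes]
    ys = [math.log(box_count(points, b)) for b in boxes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
            sum((x - mx) ** 2 for x in xs))  # least-squares slope

line = [(i, i) for i in range(64)]                   # smooth edge: dim ~ 1
rough = [(i, (37 * i * i) % 64) for i in range(64)]  # irregular: dim ~ 2
print(round(fractal_dimension(line), 2), round(fractal_dimension(rough), 2))
```

Smooth, man-made structure tends toward low, stable blockwise dimension estimates while natural clutter tends higher, which is what makes histogram slicing of the computed dimensions usable as a detector.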


In addition to spatial self-similarity, temporal self-similarity may also be analyzed using fractal methods. See, Reusens, E., “Sequence coding based on the fractal theory of iterated transformations systems”, Proceedings of the SPIE—The International Society for Optical Engineering, 2094(pt. 1):132-40 (1993). This reference describes a scheme based on the iterated functions systems theory which relies on a 3D approach in which the sequence is adaptively partitioned. Each partition block can be coded either by using the spatial self similarities or by exploiting temporal redundancies.


Fractal compression methods may be used for video data for transmission. See, Hurtgen, B.; Buttgen, P., “Fractal approach to low rate video coding”, Proceedings of the SPIE—The International Society for Optical Engineering, 2094(pt. 1):120-31 (1993). This reference relates to a method for fast encoding and decoding of image sequences on the basis of fractal coding theory and the hybrid coding concept. The DPCM-loop accounts for statistical dependencies of natural image sequences in the temporal direction. Those regions of the original image where the prediction, i.e. motion estimation and compensation, fails are encoded using an advanced fractal coding scheme, suitable for still images, and whose introduction instead of the commonly used Discrete Cosine Transform (DCT)-based coding is advantageous especially at very low bit rates (8-64 kbit/s). In order to increase reconstruction quality, encoding speed and compression ratio, some additional features such as hierarchical codebook search and multilevel block segmentation may be employed. This hybrid technique may be used in conjunction with the present adaptive interface or other features of the present invention.


Fractal methods may be used to segment an image into objects having various surface textures. See, Zhi-Yan Xie; Brady, M., “Fractal dimension image for texture segmentation”, ICARCV '92. Second International Conference on Automation, Robotics and Computer Vision, p. CV-4.3/1-5 vol. I, (1992). According to this reference, the fractal dimension and its change over boundaries of different homogeneous textured regions is analyzed and used to segment textures in infrared aerial images. Based on the fractal dimension, different textures map into different fractal dimension image features, such that there is smooth variation within a single homogeneous texture but sharp variation at texture boundaries. Since the fractal dimension remains unchanged under linear transformation, this method is robust for dismissing effects caused by lighting and other extrinsic factors. Morphology is the only tool used in the implementation of the whole process: texture feature extraction, texture segmentation and boundary detection. This makes possible parallel implementations of each stage of the process.


Rahmati, M.; Hassebrook, L. G., “Intensity- and distortion-invariant pattern recognition with complex linear morphology”, Pattern Recognition, 27(4):549-68 (1994), relates to a unified model-based pattern recognition approach which can be formulated into a variety of techniques for use in a variety of applications. In this approach, complex phasor addition and cancellation are incorporated into the design of filter(s) to perform implicit logical operations using linear correlation operators. These implicit logical operations are suitable to implement high level gray scale morphological transformations of input images. In this way non-linear decision boundaries are effectively projected into the input signal space, yet the mathematical simplicity of linear filter designs is maintained. This approach is applied to the automatic distortion- and intensity-invariant object recognition problem. A set of shape operators or complex filters is introduced which are logically structured into a filter bank architecture to accomplish distortion- and intensity-invariant recognition. This synthesized complex filter bank is optimally sensitive to fractal noise representing natural scenery. The sensitivity is optimized for a specific fractal parameter range using the Fisher discriminant. The output responses of the proposed system are shown for target, clutter, and pseudo-target inputs to represent its discrimination and generalization capability in the presence of distortion and intensity variations. Its performance is demonstrated with realistic scenery as well as synthesized inputs.


Sprinzak, J.; Werman, M., “Affine point matching”, Pattern Recognition Letters, 15(4):337-9(1994), relates to a pattern recognition method. A fundamental problem of pattern recognition, in general, is recognizing and locating objects within a given scene. The image of an object may have been distorted by different geometric transformations such as translation, rotation, scaling, general affine transformation or perspective projection. The recognition task involves finding a transformation that superimposes the model on its instance in the image. This reference proposes an improved method of superimposing the model.
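
Once candidate correspondences between model and image points are available, the affine transformation itself can be recovered by least squares. The sketch below, using NumPy, shows only that final fitting step; establishing the correspondences, which is the actual subject of the reference, is assumed already done.

    import numpy as np

    def fit_affine(model_pts, image_pts):
        """Solve image ~ A @ model + t from matched (N, 2) point arrays."""
        m = np.asarray(model_pts, dtype=float)
        im = np.asarray(image_pts, dtype=float)
        M = np.hstack([m, np.ones((len(m), 1))])         # (N, 3) design matrix
        params, *_ = np.linalg.lstsq(M, im, rcond=None)  # (3, 2) solution
        A, t = params[:2].T, params[2]
        return A, t                                      # 2x2 matrix, 2-vector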


Temporal Image Analysis


Temporal image analysis is a well known field. This field holds substantial interest at present for two reasons. First, by temporal analysis of a series of two dimensional images, objects and object planes may be defined, which provide a basis for efficient yet general algorithms for video compression, such as the Motion Picture Experts Group (MPEG) series of standards. Second, temporal analysis has applications in signal analysis, for understanding the signal itself.


U.S. Pat. No. 5,280,530, incorporated herein by reference, relates to a method and apparatus for tracking a moving object in a scene, for example the face of a person in videophone applications. The method comprises forming an initial template of the face, extracting a mask outlining the face, dividing the template into a plurality (for example, sixteen) of sub-templates, searching the next frame to find a match with the template, searching the next frame to find a match with each of the sub-templates, determining the displacements of each of the sub-templates with respect to the template, using the displacements to determine affine transform coefficients, and performing an affine transform to produce an updated template and updated mask.


U.S. Pat. No. 5,214,504 relates to a moving video image estimation system. Based on an original video image at time n and time n+1, the centroid, the principal axis of inertia, the moment about the principal axis of inertia and the moment about the axis perpendicular to the principal axis of inertia are obtained. By using this information, an affine transformation for transforming the original video image at time n to the original video image at time n+1 is obtained. Based on the infinitesimal transformation A obtained by making the affine transformation continuous with respect to time, e^(At) is applied to the original video image at time n and e^(A(t-1)) to the original video image at time n+1, and the results are synthesized to perform an interpolation between the frames. Applying e^(A(t-1)) to the original video image at time n+1 likewise allows the video image after time n+1 to be predicted.
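
The interpolation step can be illustrated digitally: writing the affine transformation as a 3x3 homogeneous matrix, its matrix logarithm plays the role of the infinitesimal transformation A, and fractional motion is obtained by exponentiating a scaled generator. The following is a minimal sketch assuming SciPy is available; the function name is illustrative only.

    import numpy as np
    from scipy.linalg import expm, logm

    def partial_affine(affine_3x3, t):
        """Affine map for fraction t of the frame-n to frame-n+1 motion."""
        G = logm(affine_3x3)         # infinitesimal generator of the motion
        return np.real(expm(t * G))  # e^(Gt); t in [0, 1] interpolates

Values of t between 0 and 1 interpolate between the two frames, while t greater than 1 extrapolates, which is the sense in which the image after time n+1 is predicted.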


U.S. Pat. No. 5,063,603, incorporated herein by reference, relates to a dynamic method for recognizing objects and an image processing system therefor. This reference discloses a method of distinguishing between different members of a class of images, such as human beings. A time series of successive relatively high-resolution frames of image data, any frame of which may or may not include a graphical representation of one or more predetermined specific members (e.g., particular known persons) of a given generic class (e.g., human beings), is examined in order to recognize the identity of a specific member, if that member's image is included in the time series. The frames of image data may be examined in real time at various resolutions, starting with a relatively low resolution, to detect whether some earlier-occurring frame includes any of a group of image features possessed by an image of a member of the given class. The image location of a detected image feature is stored and then used in a later-occurring, higher resolution frame to direct the examination only to the image region of the stored location in order to (1) verify the detection of the aforesaid image feature, and (2) detect one or more other of the group of image features, if any is present in that image region of the frame being examined. By repeating this type of examination for later and later occurring frames, the accumulated detected features can first reliably recognize the detected image region to be an image of a generic object of the given class, and later can reliably recognize the detected image region to be an image of a certain specific member of the given class. Thus, a human identity recognition feature of the present invention may be implemented in this manner. Further, it is clear that this recognition feature may form an integral part of certain embodiments of the present invention. It is also clear that the various features of the present invention would be applicable as an adjunct to the various elements of the system disclosed in U.S. Pat. No. 5,063,603.


U.S. Pat. No. 5,067,160, incorporated herein by reference, relates to a motion-pattern recognition apparatus having adaptive capabilities. The apparatus recognizes the motion of a moving object hidden in an image signal, and discriminates the object from the background within the signal. The apparatus has an image-forming unit comprising non-linear oscillators, which forms an image of the motion of the object in accordance with an adjacent-mutual-interference rule, on the basis of the image signal. A memory unit, comprising non-linear oscillators, stores conceptualized meanings of several motions. A retrieval unit retrieves a conceptualized meaning close to the motion image of the object. An altering unit alters the rule on the basis of the conceptualized meaning. The image forming unit, memory unit, retrieval unit and altering unit form a holonic loop. Successive alterations of the rule by the altering unit within the holonic loop change an ambiguous image formed in the image forming unit into a distinct image. U.S. Pat. No. 5,067,160 cites the following references, which are relevant to the task of discriminating a moving object in a background:

  • U.S. Pat. No. 4,710,964.
  • Shimizu et al, “Principle of Holonic Computer and Holovision”, Journal of the Institute of Electronics, Information and Communication, 70(9):921-930 (1987).
  • Omata et al, “Holonic Model of Motion Perception”, IEICE Technical Reports, Mar. 26, 1988, pp. 339-346.
  • Ohsuga et al, “Entrainment of Two Coupled van der Pol Oscillators by an External Oscillation”, Biological Cybernetics, 51:225-239 (1985).


U.S. Pat. No. 5,065,440, incorporated herein by reference, relates to a pattern recognition apparatus, which compensates for, and is thus insensitive to pattern shifting, thus being useful for decomposing an image or sequence of images, into various structural features and recognizing the features. U.S. Pat. No. 5,065,440 cites the following references, incorporated herein by reference, which are also relevant to the present invention: U.S. Pat. Nos. 4,543,660, 4,630,308, 4,677,680, 4,809,341, 4,864,629, 4,872,024 and 4,905,296.


Recent analyses of fractal image compression techniques have tended to imply that, other than in special circumstances, other image compression methods are "better" than a Barnsley-type image compression system, due to the poor performance of compression processors and lower-than-expected compression ratios. Further, statements attributed to Barnsley have indicated that the Barnsley technique is not truly a "fractal" technique, but rather a vector quantization process which employs a recursive library. Nevertheless, these techniques and analyses have their advantages. As stated hereinbelow, the fact that the codes representing the compressed image are hierarchical is a particular facet exploited by the present invention.


Another factor which makes fractal methods and analysis relevant to the present invention is the theoretical relation to optical image processing and holography. Thus, while such optical systems may presently be cumbersome and economically unfeasible, and their implementation in software models slow, these techniques nevertheless hold promise and present distinct advantages.


Biometric Analysis


Biometric analysis comprises the study of the differences between various organisms, typically of the same species. The intraspecies variations thus become the basis for differentiation and identification. In practice, there are many applications for biometric analysis systems; in security applications, for example, they allow identification of a particular human.


U.S. Pat. No. 5,055,658, incorporated herein by reference, relates to a security system employing digitized personal characteristics, such as voice. The following references are cited:

  • “Voice Recognition and Speech Processing”, Elektor Electronics, September 1985, pp. 56-57.
  • Naik et al., “High Performance Speaker Verification.”, ICASSP 86, Tokyo, CH2243-4/86/0000-0881, IEEE 1986, pp. 881-884.
  • Shinan et al., “The Effects of Voice Disguise.”, ICASSP 86, Tokyo, CH2243-4/86/0000-0885, IEEE 1986, pp. 885-888.


Parts of this system relating to speaker recognition may be used to implement a voice recognition system of the present invention for determining an actor or performer in a broadcast.


Neural Networks


Neural networks are a particular type of data analysis tool. They are characterized by the fact that the network is represented by a set of "weights", which are typically scalar values, derived by a procedure designed to reduce the error between a data pattern representing a known state and the network's prediction of that state. These networks, when provided with sufficient complexity and an appropriate training set, may be quite sensitive and precise. Further, the data pattern may be arbitrarily complex (although the computing power required to evaluate the output will also grow), and therefore these systems may be employed for video and other complex pattern analysis.


U.S. Pat. No. 5,067,164, incorporated herein by reference, relates to a hierarchical constrained automatic learning neural network for character recognition, and thus represents an example of a trainable neural network for pattern recognition, which discloses methods which are useful for the present invention. This patent cites various references of interest:

  • U.S. Pat. Nos. 4,760,604, 4,774,677 and 4,897,811.
  • LeCun, Y., Connectionism in Perspective, R. Pfeifer, Z. Schreter, F. Fogelman, L. Steels, (Eds.), 1989, “Generalization and Network Design Strategies”, pp. 143-55.
  • LeCun, Y., et al., “Handwritten Digit Recognition: Applications of Neural.”, IEEE Comm. Magazine, pp. 41-46 (November 1989).
  • Lippmann, R. P., “An Introduction to Computing with Neural Nets”, IEEE ASSP Magazine, 4(2):4-22 (April 1987).
  • Rumelhart, D. E., et al., Parallel Distr. Proc.: Explorations in Microstructure of Cognition, vol. 1, 1986, “Learning Internal Representations by Error Propagation”, pp. 318-362.


U.S. Pat. Nos. 5,048,100, 5,063,601 and 5,060,278, all incorporated herein by reference, also relate to neural network adaptive pattern recognition methods and apparatuses. It is clear that the methods of U.S. Pat. Nos. 5,048,100, 5,060,278 and 5,063,601 may be used to perform the adaptive pattern recognition functions of the present invention. More general neural networks are disclosed in U.S. Pat. Nos. 5,040,134 and 5,058,184, both incorporated herein by reference, which provide background on the use of neural networks. In particular, U.S. Pat. No. 5,058,184 relates to the use of the apparatus in information processing and feature detection applications.


U.S. Pat. No. 5,058,180, incorporated herein by reference, relates to neural network apparatus and method for pattern recognition, and is thus relevant to the intelligent pattern recognition functions of the present invention. This patent cites the following documents of interest:

  • U.S. Pat. Nos. 4,876,731 and 4,914,708.
  • Carpenter, G. A., S. Grossberg, "The ART of Adaptive Pattern Recognition by a Self-Organizing Neural Network," IEEE Computer, March 1988, pp. 77-88.
  • Computer Vision, Graphics, and Image Processing 1987, 37:54-115.
  • Grossberg, S., G. Carpenter, "A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine," Computer Vision, Graphics, and Image Processing (1987, 37, 54-115), pp. 252-315.
  • Gullichsen, E., E. Chang, "Pattern Classification by Neural Network: An Experiment System for Icon Recognition," ICNN Proceedings on Neural Networks, March 1987, pp. IV-725-32.
  • Jackel, L. D., H. P. Graf, J. S. Denker, D. Henderson and I. Guyon, "An Application of Neural Net Chips: Handwritten Digit Recognition," ICNN Proceedings, 1988, pp. II-107-15.
  • Lippmann, R. P., "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine, April 1987, pp. 4-22.
  • Pawlicki, T. F., D. S. Lee, J. J. Hull and S. N. Srihari, "Neural Network Models and their Application to Handwritten Digit Recognition," ICNN Proceedings, 1988, pp. II-63-70.


Chao, T.-H.; Hegblom, E.; Lau, B.; Stoner, W. W.; Miceli, W. J., "Optoelectronically implemented neural network with a wavelet preprocessor", Proceedings of the SPIE—The International Society for Optical Engineering, 2026:472-82 (1993), relates to an optoelectronic neural network based upon the Neocognitron paradigm, which has been implemented and successfully demonstrated for automatic target recognition with both focal plane array imagery and range-Doppler radar signatures. A particular feature of this neural network architectural design is the use of a shift-invariant multichannel Fourier optical correlation as a building block for iterative multilayer processing. A bipolar neural weights holographic synthesis technique was utilized to implement both the excitatory and inhibitory neural functions and increase its discrimination capability. In order to further increase the optoelectronic Neocognitron's self-organization processing ability, a wavelet preprocessor was employed for feature extraction preprocessing (orientation, size, location, etc.). A multichannel optoelectronic wavelet processor using an e-beam complex-valued wavelet filter is also described.
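
The wavelet preprocessing stage may be illustrated by a one-level two-dimensional Haar decomposition, which separates an image into an approximation band and three oriented detail bands. The sketch below is a conventional digital stand-in for what the cited system computes optically; an even-sized grayscale NumPy array is assumed.

    import numpy as np

    def haar2d(img):
        """One-level 2-D Haar split into approximation and detail subbands."""
        a = img.astype(float)
        lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row lowpass
        hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row highpass
        ll = (lo[0::2] + lo[1::2]) / 2.0       # approximation
        lh = (lo[0::2] - lo[1::2]) / 2.0       # horizontal detail
        hl = (hi[0::2] + hi[1::2]) / 2.0       # vertical detail
        hh = (hi[0::2] - hi[1::2]) / 2.0       # diagonal detail
        return ll, lh, hl, hh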


Neural networks are important tools for extracting patterns from complex input sets. These systems do not require human comprehension of the pattern in order to be useful, although human understanding of the nature of the problem is helpful in designing the neural network system, as is known in the art. Feedback to the neural network is integral to the training process. Thus, a set of inputs is mapped to a desired output range, with the network minimizing an “error” for the training data set. Neural networks may differ based on the computation of the “error”, the optimization process, the method of altering the network to minimize the error, and the internal topology. Such factors are known in the art.
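
The training loop just described can be made concrete with a deliberately small example: a single sigmoid unit whose weights are adjusted by gradient descent to reduce the squared error over a training set. The topology, learning rate and activation below are illustrative assumptions, not a prescription for any system described herein.

    import numpy as np

    def train(inputs, targets, epochs=1000, lr=0.5):
        """Train one sigmoid unit; inputs (N, d), targets (N,) in [0, 1]."""
        rng = np.random.default_rng(0)
        w = rng.normal(scale=0.1, size=inputs.shape[1])    # the "weights"
        b = 0.0
        for _ in range(epochs):
            y = 1.0 / (1.0 + np.exp(-(inputs @ w + b)))    # prediction
            delta = (y - targets) * y * (1.0 - y)          # error gradient
            w -= lr * (inputs.T @ delta) / len(inputs)     # adjust weights to
            b -= lr * delta.mean()                         # reduce the error
        return w, b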


Optical Pattern Recognition


Optical image processing holds a number of advantages. First, images are typically optical by their nature, and therefore processing by this means may (though not always) avoid a data conversion. Second, many optical image processing schemes are inherently or easily performed in parallel, improving throughput. Third, optical circuits typically have response times shorter than electronic circuits, allowing potentially short cycle times. While many optical phenomena may be modeled using electronic computers, appropriate applications for optical computing, such as pattern recognition, hold promise for high speed in systems of acceptable complexity.


U.S. Pat. No. 5,060,282, incorporated herein by reference, relates to an optical pattern recognition architecture implementing the mean-square error correlation algorithm. This method allows an optical computer to perform pattern recognition functions; a minimal digital sketch of the difference-squared error computation appears after the citations below. U.S. Pat. No. 5,060,282 cites the following references, which are relevant to optical pattern recognition:

  • Kellman, P., “Time Integrating Optical Signal Processing”, Ph. D. Dissertation, Stanford University, 1979, pp. 51-55.
  • Molley, P., “Implementing the Difference-Squared Error Algorithm Using An Acousto-Optic Processor”, SPIE, 1098:232-239, (1989).
  • Molley, P., et al., “A High Dynamic Range Acousto-Optic Image Correlator for Real-Time Pattern Recognition”, SPIE, 938:55-65 (1988).
  • Psaltis, D., “Incoherent Electro-Optic Image Correlator”, Optical Engineering, 23(1): 12-15 (January/February 1984).
  • Psaltis, D., “Two-Dimensional Optical Processing Using One-Dimensional Input Devices”, Proceedings of the IEEE, 72(7):962-974 (July 1984).
  • Rhodes, W., “Acousto-Optic Signal Processing: Convolution and Correlation”, Proc. of the IEEE, 69(1):65-79 (January 1981).
  • Vander Lugt, A., “Signal Detection By Complex Spatial Filtering”, IEEE Transactions On Information Theory, IT-10, 2:139-145 (April 1964).
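
As noted above, the mean-square (difference-squared) error computation at the heart of this architecture can be sketched digitally as follows; the cited patent evaluates the quantity optically, whereas this loop evaluates it directly for every offset of a template over an image.

    import numpy as np

    def sse_surface(image, template):
        """Sum-of-squared-differences at every template offset; the minimum
        of the returned surface locates the best match."""
        th, tw = template.shape
        ih, iw = image.shape
        t = template.astype(float)
        out = np.empty((ih - th + 1, iw - tw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = image[i:i + th, j:j + tw].astype(float)
                out[i, j] = ((patch - t) ** 2).sum()
        return out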


U.S. Pat. Nos. 5,159,474 and 5,063,602, expressly incorporated herein by reference, also relate to optical image correlators. Also of interest is Li, H. Y., Y. Qiao and D. Psaltis, Applied Optics (April, 1993). See also, Bains, S., “Trained Neural Network Recognizes Faces”, Laser Focus World, June, 1993, pp. 26-28; Bagley, H. & Sloan, J., “Optical Processing: Ready For Machine Vision?”, Photonics Spectra, August 1993, pp. 101-106.


Optical pattern recognition has been especially applied to two dimensional patterns. In an optical pattern recognition system, an image is correlated with a set of known image patterns represented on a hologram, and the product is a pattern according to a correlation between the input pattern and the provided known patterns. Because this is an optical technique, it is performed nearly instantaneously, and the output information can be reentered into an electronic digital computer through optical transducers known in the art. Such a system is described in Casasent, D., Photonics Spectra, November 1991, pp. 134-140. The references cited therein provide further details of the theory and practice of such a system: Lendaris, G. G., and Stanley, G. L., "Diffraction Pattern Sampling for Automatic Target Recognition", Proc. IEEE 58:198-205 (1979); Ballard, D. H., and Brown, C. M., Computer Vision, Prentice Hall, Englewood Cliffs, N.J. (1982); Optical Engineering 28:5 (May 1988) (Special Issue on product inspection); Richards J., and Casasent, D., "Real Time Hough Transform for Industrial Inspection" Proc. SPIE Technical Symposium, Boston 1989 1192:2-21 (1989); Maragos, P., "Tutorial Advances in Morphological Image Processing" Optical Engineering 26:7:623-632 (1987); Casasent, D., and Tescher, A., Eds., "Hybrid Image and Signal Processing II", Proc. SPIE Technical Symposium, April 1990, Orlando Fla. 1297 (1990); Ravichandran, G. and Casasent, D., "Noise and Discrimination Performance of the MINACE Optical Correlation Filter", Proc. SPIE Technical Symposium, April 1990, Orlando Fla., 1471 (1990); Wechsler, H. Ed., "Neural Nets For Human and Machine Perception", Academic Press, New York (1991).
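
The digital counterpart of such an optical correlator is multiplication in the Fourier domain, with the conjugate spectrum of the stored pattern playing the role of the hologram. A minimal sketch using NumPy's FFT follows; a single stored pattern is assumed, whereas a hologram can hold a whole set of them.

    import numpy as np

    def correlate(image, pattern):
        """Circular cross-correlation of an image with a stored pattern."""
        F = np.fft.fft2(image)
        H = np.conj(np.fft.fft2(pattern, s=image.shape))  # matched filter
        return np.real(np.fft.ifft2(F * H))  # a sharp peak marks the pattern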


By employing volume holographic images, the same types of paradigms may be applied to three dimensional images.


Query by Image Content


Query by image content, a phrase coined by IBM researchers, relates to a system for retrieving image data stored in a database on the basis of the colors, textures, morphology or objects contained within the image. To this end, the system characterizes the stored images to generate a metadata index, which can then be searched. Unindexed searching is also possible.
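
A minimal sketch of one such characterization, reducing each stored image to a coarse color histogram and ranking query results by histogram intersection (in the spirit of the color indexing literature cited below), is given here; the bin count and the dictionary-based index are illustrative assumptions only.

    import numpy as np

    def color_histogram(rgb, bins=4):
        """(H, W, 3) uint8 image -> normalized bins**3 color histogram."""
        q = (rgb // (256 // bins)).reshape(-1, 3).astype(int)
        idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
        h = np.bincount(idx, minlength=bins ** 3).astype(float)
        return h / h.sum()

    def rank(query_hist, index):
        """index: {image_id: histogram}; higher intersection ranks first."""
        score = {k: np.minimum(query_hist, h).sum() for k, h in index.items()}
        return sorted(score, key=score.get, reverse=True)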


A number of query by image content systems are known, including both still and moving image systems, for example from IBM (QBIC), the MIT Media Lab (Photobook), Belmont Research Inc. (Steve Gallant), BrainTech Inc., the Center for Intelligent Information Retrieval (UMass Amherst), Virage, Inc., Informix Software, Inc. (Illustra), Islip Media, Inc., Magnifi, Numinous Technologies, Columbia University (VisualSEEk/WebSEEk; Chang et al., John R. Smith), Monet (CWI and UvA), and the Visual Computing Laboratory, UC San Diego (ImageGREP, White and Jain). See also, ISO/IEC MPEG-7 literature.


See, Jacobs, et al., "Fast Multiresolution Image Querying", Department of Computer Science, University of Washington, Seattle, Wash.


U.S. Pat. No. 5,655,117, expressly incorporated herein by reference, relates to a method and apparatus for indexing multimedia information streams for content-based retrieval. See also:

  • Gong et al, “An Image Database System with Content Capturing and Fast Image Indexing Abilities”, PROC of the International Conference on Multimedia Computing and Systems, pp. 121-130 May 19, 1994.
  • Hongjiang, et al., Digital Libraries, “A Video Database System for Digital Libraries”, pp. 253-264, May 1994.
  • S. Abe and Y. Tonomura, Systems and Computers in Japan, vol. 24, No. 7, “Scene Retrieval Method Using Temporal Condition Changes”, pp. 92-101, 1993.
  • Salomon et al, "Using Guides to Explore Multimedia Databases", PROC of the Twenty-Second Annual Hawaii International Conference on System Sciences, vol. IV, Jan. 3-6, 1989, pp. 3-12.
  • Stevens, “Next Generation Network and Operating System Requirements for Continuous Time Media”, in Herrtwich (Ed.), Network and Operating System Support for Digital Audio and Video, pp. 197-208, November 1991.


U.S. Pat. No. 5,606,655, expressly incorporated herein by reference, relates to a method for representing the contents of a single video shot using frames. The method provides a representative frame (Rframe) for a group of frames in a video sequence, selecting a reference frame from the group of frames and storing the reference frame in a computer memory. The system defines a peripheral motion tracking region along an edge of the reference frame and successively tracks movement of boundary pixels in the tracking region, to symbolize the length of the shot and the presence of any caption. A loose illustrative sketch of representative-frame selection appears after the citations below. See, also:

  • “A Magnifier Tool for Video Data”, Mills et al., Proceedings of ACM Computer Human Interface (CHI), May 3-7, 1992, pp. 93-98.
  • “A New Family of Algorithms for Manipulating Compressed Images”, Smith et al., IEEE Computer Graphics and Applications, 1993.
  • “Anatomy of a Color Histogram”, Novak et al., Proceeding of Computer Vision and Pattern Recognition, Champaign, Ill., June 1992, pp. 599-605.
  • “Automatic Structure Visualization for Video Editing”, Ueda et al., InterCHI'93 Conference Proceedings, Amsterdam, The Netherlands, 24-29 Apr. 1993, pp. 137-141.
  • “Automatic Video Indexing and Full-Video Search for Object Appearances”, Nagasaka et al., Proceedings of the IFIP TC2/WG2.6 Second Working Conference on Visual Database Systems, North Holland, Sep. 30-Oct. 3, 1991, pp. 113-127.
  • “Color Indexing”, Swain et al., International Journal of Computer Vision, vol. 7, No. 1, 1991, pp. 11-32.
  • “Content Oriented Visual Interface Using Video Icons for Visual Database Systems”, Tonomura et al., Journal of Visual Languages and Computing (1990) I, pp. 183-198.
  • “Developing Power Tools for Video Indexing and Retrieval”, Zhang et al., Proceedings of SPIE Conference on Storage and Retrieval for Image and Video Databases, San Jose, Calif., 1994.
  • “Image Information Systems: Where Do We Go From Here?”, Chang et al., IEEE Transactions on Knowledge and Data Engineering, vol. 4, No. 5, October 1992, pp. 431-442.
  • “Image Processing on Compressed Data for Large Video Databases”, Arman et al., Proceedings of First ACM International Conference on Multimedia, Anaheim, Calif., 1-6 Aug. 1993, pp. 267-272.
  • “Image Processing on Encoded Video Sequences”, Arman et al., ACM Multimedia Systems Journal, to appear 1994.


  • "Impact: An Interactive Natural-Motion-Picture Dedicated Multimedia Authoring System", Ueda et al., Proceedings of Human Factors in Computing Systems (CHI 91), New Orleans, La., Apr. 27-May 2, 1991, pp. 343-350.

  • “MPEG: A Video Compression Standard for Multimedia Applications”, Le Gall, Communications of the ACM, vol. 34, No. 4, April 1991, pp. 47-58.
  • “News On-Demand for Multimedia Networks”, Miller et al., ACM International Conference on Multimedia, Anaheim, Calif., 1-6, August 1993, pp. 383-392.
  • “Overview of the px64 kbits Video Coding Standard”, Liou, Communications of the ACM, vol. 34, No. 4, April 1991, pp. 60-63.
  • “Pattern Recognition by Moment Invariants”, Hu et al., Proc. IRE, vol. 49, 1961, p. 1428.
  • "Pattern Recognition Experiments in the Mandala/Cosine Domain", Hsu et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 5, September 1983, pp. 512-520.
  • “The JPEG Still Picture Compression Standard”, Wallace, Communications of the ACM, vol. 34, No. 4, April 1991, pp. 31-44.
  • “The Revised Fundamental Theorem of Moment Invariants”, Reiss, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, No. 8, August 1991, pp. 830-834.
  • “VideoMAP and VideoSpaceIcon: Tools for Anatomizing Video Content”, Tonomura et al., Inter CHI'93 Conference Proceedings, Amsterdam, The Netherlands, 24-29 April, 1993, pp. 131-136.
  • “Visual Pattern Recognition by Moment Invariants”, IRE Trans. Inform. Theory, vol. 8, February 1962, pp. 179-187.
  • “Watch-Grab-Arrange-See: Thinking with Motion Images via Streams and Collages”, Elliott, Ph.D. Thesis, MIT, February 1993.
  • Book entitled Digital Image Processing, by Gonzalez et al., Addison-Wesley, Reading, Mass., 1977.
  • Book entitled Digital Picture Processing by Rosenfeld et al., Academic Press, Orlando, Fla., 1982.
  • Book entitled Discrete Cosine Transform—Algorithms, Advantages, Applications, by Rao et al., Academic Press, Inc., 1990.
  • Book entitled Sequential Methods in Pattern Recognition and Machine Learning, Fu, Academic, NY, N.Y. 1968.
  • C.-C. J. Kuo (ed), “Multimedia Storage and Archiving Systems”, SPIE Proc. Vol. 2916 (Nov. 18-22, 1996).
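
As noted above, a loose illustrative sketch of representative-frame selection follows: here the representative frame is simply the one whose coarse intensity histogram is closest, on average, to those of the other frames in the group. This is an illustrative stand-in under assumed NumPy-array frames, not the tracking-region method of the patent.

    import numpy as np

    def representative_frame(frames, bins=16):
        """Index of the frame most similar, on average, to the group."""
        hists = [np.histogram(f, bins=bins, range=(0, 256))[0] / f.size
                 for f in frames]
        def mean_l1(i):
            return np.mean([np.abs(hists[i] - h).sum() for h in hists])
        return min(range(len(frames)), key=mean_l1)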


U.S. Pat. No. 5,600,775, expressly incorporated herein by reference, relates to a method and apparatus for annotating full motion video and other indexed data structures. U.S. Pat. No. 5,428,774, expressly incorporated herein by reference, relates to a system for updating an index file of frame sequences so that it indexes non-overlapping motion image frame sequences. U.S. Pat. No. 5,550,965, expressly incorporated herein by reference, relates to a method and system for operating a data processor to index primary data in real time with an iconic table of contents. U.S. Pat. No. 5,083,860, expressly incorporated herein by reference, relates to a method for detecting change points in motion picture images. U.S. Pat. No. 5,179,449, expressly incorporated herein by reference, relates to a scene boundary detecting apparatus. See also:

  • “A show and tell of the QBIC technology—Query By Image Content (QBIC)”, IBM QBIC Almaden web site, pp. 1-4.
  • “Chaos & Non-Linear Models in Economics”.
  • “Chaos Theory in the Financial Markets. Applying Fractals, Fuzzy Logic, Genetic Algorithms”.
  • “Evolutionary Economics & Chaos Theory”.
  • “Four Eyes”, MIT Media Lab web site; pp. 1-2.
  • “Frequently asked questions about visual information retrieval”, Virage Incorporated web site; pp. 1-3.
  • "IBM Ultimedia Manager 1.1 and Client Search", IBM software web site, pp. 1-4.
  • “Image Detection and Registration”, Digital Image Processing, Pratt, Wiley, New York, 1991.
  • “Jacob Methodology” @ WWCSAI.diepaunipa.it.
  • "Market Analysis. Applying Chaos Theory to Investment & Economics".
  • “Photobook”, MIT Media Lab web site; Aug. 7, 1996; pp. 1-2.
  • “Profiting from Chaos. Using Chaos Theory for Market Timing, Stock Selection & Option”.
  • “Shape Analysis”, Digital Image Processing, Pratt, Wiley, New York, 1991.
  • “The QBIC Project”, IBM QBIC Almaden web site, home page (pp. 1-2).
  • “Virage—Visual Information Retrieval”, Virage Incorporated, home page.
  • “Virage Products”, Virage Incorporated web site; pp. 1-2.
  • “Visual Information Retrieval: A Virage Perspective Revision 3”, Virage Incorporated web site; 1995; pp. 1-13.
  • “Workshop Report: NSF—ARPA Workshop on Visual Information Management Systems”, Virage Incorporated web. site; pp. 1-15.
  • A. D. Bimbo, et al, “3-D Visual Query Language for Image Databases”, Journal Of Visual Languages & Computing, 1992, pp. 257-271.
  • A. E. Cawkell, "Current Activities in Image Processing Part III: Indexing Image Collections", CRITique, vol. 4, No. 8, May 1992, pp. 1-11, ASLIB, London.
  • A. Pizano et al, “Communicating with Pictorial Databases”, Human-Machine Interactive Systems, pp. 61-87, Computer Science Dept, UCLA, 1991.
  • A. Yamamoto et al, “Extraction of Object Features from Image and its Application to Image Retrieval”, IEEE 9th International Conference On Pattern Recognition, 1988, 988-991.
  • A. Yamamoto et al, “Image Retrieval System Based on Object Features”, IEEE Publication No. CH2518-9/87/0000-0132, 1987, pp. 132-134.
  • A. Yamamoto et al., “Extraction of Object Features and Its Application to Image Retrieval”, Trans. of IEICE, vol. E72, No. 6, 771-781 (June 1989).
  • A. Yamamoto et al., “Extraction of Object Features from Image and Its Application to Image Retrieval”, Proc. 9th Annual Conference on Pattern Recognition, vol. II, pp. 988-991 (November 1988).
  • A. Soffer and H. Samet. Retrieval by content in symbolic-image databases. In Symposium on Electronic Imaging: Science and Technology—Storage & Retrieval for Image and Video Databases IV, pages 144-155. IS&T/SPIE, 1996.
  • Abadi, M., et al, “Authentication and Delegation with Smart-cards”, Oct. 22, 1990, revised Jul. 30, 1992 Report 67, Systems Research Center, Digital Equipment Corp., Palo Alto, Calif.
  • Advertisement for “TV Decision,” CableVision, Aug. 4, 1986.
  • American National Standard, “Financial Institution Retail Message Authentication”, ANSI X9.19 1986.
  • American National Standard, “Interchange Message Specification for Debit and Credit Card Message Exchange Among Financial Institutions”, ANSI X9.2-1988.
  • Antonofs, M., “Stay Tuned for Smart TV,” Popular Science, November 1990, pp. 62-65.
  • Arman et al., “Feature Management for Large Video Databases”, 1993. (Abstract Only).
  • ASIAN TECHNOLOGY INFORMATION PROGRAM (ATIP) REPORT: ATIP95.65: Human Computer Interface International, 7/95 Yokohama.
  • Barber et al. "Ultimedia Manager: Query by Image Content and its Applications" IEE, 1994, pp. 424-429, January 1994.
  • Barros, et al. "Indexing Multispectral Images for Content-Based Retrieval", Proc. 23rd AIPR Workshop on Image and Information Retrieval, Washington, D.C., October 1994, pp. 25-36.
  • Belkin, N. J., Croft, W. B., “Information Filtering And Information Retrieval: Two Sides of the Same Coin?”, Communications of the ACM, December 1992, vol. 35, No. 12, pp. 29-38.
  • Benoit Mandelbrot: “Fractal Geometry of Nature”, W H Freeman and Co., New York, 1983 (orig ed 1977).
  • Benoit Mandelbrot: “Fractals—Form, Chance and Dimensions”, W H Freeman and Co., San Francisco, 1977.
  • Bimbo et al., “Sequence Retrieval by Contents through Spatio Temporal Indexing”, IEEE on CD-ROM, pp. 88-92, Aug. 24, 1993.
  • Bolot, J.; Turletti, T. & Wakeman, I.; “Scalable Feedback Control for Multicast Video Distribution In the Internet”, Computer Communication Review, vol. 24, No. 4 Oct. 1994, Proceedings of SIGCOMM 94, pp. 58-67.
  • Bos et al., “SmartCash: a Practical Electronic Payment System”, pp. 1-8; August 1990.
  • Branko Pecar: “Business Forecasting for Management”, McGraw-Hill Book Co., London, 1994.
  • Brian H Kaye: “A Random Walk Through Fractal Dimensions”, VCH Verlagsgesellschaft, Weinheim, 1989.
  • Brugliera, Vito, “Digital On-Screen Display—A New Technology for the Consumer Interface”, Symposium Record Cable Sessions. Jun. 11, 1993, pp. 571-586.
  • Burk et al, “Value Exchange Systems Enabling Security and Unobservability”, Computers & Security, 9 1990, pp. 715-721.
  • C. Chang et al, “Retrieval of Similar Pictures on Pictorial Databases”, Pattern Recognition, vol. 24, No. 7, 1991, pp. 675-680.
  • C. Chang, “Retrieving the Most Similar Symbolic Pictures from Pictorial Databases”, Information Processing & Management, vol. 28, No. 5, 1992.
  • C. Faloutsos et al, "Efficient and Effective Querying by Image Content", Journal of Intelligent Information Systems: Integrating Artificial Intelligence and Database Technologies, vol. 3-4, No. 3, July 1994, pp. 231-262.
  • C. Goble et al, “The Manchester Multimedia Information System”, Proceedings of IEEE Conference, Eurographics Workshop, April, 1991, pp. 244-268.
  • C. C. Chang and S. Y. Lee. Retrieval of similar pictures on pictorial databases. Pattern Recog., 24(7), 1991.
  • Case Study: The CIRRUS Banking Network, Comm. ACM, vol. 28, No. 8, pp. 797-807, August 1985.
  • Chalmers, M., Chitson, P., “Bead: Explorations In Information Visualization”, 15th Ann. Int'l SIGIR 92/Denmark-June 1992, pp. 330-337.
  • Chang et al., “Intelligent Database Retrieval by Visual Reasoning”, PROC Fourteenth Annual International Computer Software and Application Conference, 31 Oct.-1 November 1990, pp. 459-464.
  • Chang, Yuh-Lin, Zeng, Wenjun, Kamel, Ibrahim, Alonso, Rafael, “Integrated Image and Speech Analysis for Content-Based Video Indexing”.
  • Chaum et al, “Untraceable Electronic Cash”, Advances in Cryptology, 1988, pp. 319-327.
  • Chaum, D. “Security without Identification: Card Computers to Make Big Brother Obsolete”, Communications of the ACM, 28(10), October 1985, pp. 1030-1044.
  • Chaum, D. “Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms”, Communications of the ACM, vol. 24, No. 2, February, 1981.
  • Chaum, D., “Achieving Electronic Privacy”, Scientific American, August 1992, pp. 96-101.
  • Chaum, D. L. et al.; “Implementing Capability-Based Protection Using Encryption”; Electronics Research Laboratory, College of Engineering, University of California, Berkeley, Calif.; Jul. 17, 1978.
  • Cliff Pickover, Spiral Symmetry (World Scientific).
  • Cliff Pickover, Chaos in Wonderland: Visual Adventures in a Fractal World (St. Martin's Press).
  • Cliff Pickover, Computers and the Imagination (St. Martin's Press).
  • Cliff Pickover, Mazes for the Mind: Computers and the Unexpected (St. Martin's Press).
  • Cliff Pickover, Computers, Pattern, Chaos, and Beauty (St. Martin's Press).
  • Cliff Pickover, Frontiers of Scientific Visualization (Wiley).
  • Cliff Pickover, Visions of the Future: Art, Technology, and Computing in the 21st Century (St. Martin's Press).
  • Cohen, Danny; “Computerized Commerce”; ISI Reprint Series ISI/RS-89/243; October, 1989; Reprinted from Information Processing 89, Proceedings of the IFIP World Computer Congress, held Aug. 28-Sep. 1, 1989.
  • Cohen, Danny; “Electronic Commerce”; University of Southern California, Information Sciences Institute, Research Report ISI/RR-89-244; October, 1989.


  • Common European Newsletter, Multimedia Content Manipulation and Management, ww.esat.kuleuven.ac.be.

  • CompuServe Information Service Users Guide, CompuServe International, 1986, pp. 109-114.
  • Computer Shopper, November 1994, “Internet for Profit”, pp. 180-182, 187, 190-192, 522-528, 532, 534.
  • Computer, Vol. 28(9), September 1995.
  • Compuvid Sales Manual (date unknown).
  • Corporate Overview, Virage Incorporated web site; pp. 1-4.
  • Cox, Ingemar J., et al., “PicHunter: Bayesian Relevance Feedback for Image Retrieval,” Proc. of the ICPR '96, IEEE, pp. 361-369.
  • Cutting, D. R.; Karger, D. R.; Pedersen, J. O. & Tukey, J. W. “Scatter/Gather: A Cluster-based Approach to Browsing Large Document Collections”, 15 Ann. Int'l SIGIR '92, ACM, 1992, pp. 318-329.
  • D K Arrowsmith & C M Place: “An Introduction to Dynamical Systems”, Cambridge University Press, Cambridge, 1990.
  • Damashek, M., Gauging Similarity via N-Grams: Language-Independent Sorting, Categorization, and Retrieval of Text, pp. 1-11, Jan. 24, 1995.
  • Data Partner 1.0 Simplifies DB Query Routines, PC Week, Sep. 14, 1992, pp. 55 & 58.
  • David E Rumelhart & James L McClelland: “Parallel Distributed Processing”, Vol I., The MIT Press, Cambridge, Mass., 1986.
  • Deering, S.; Estrin, D.; Farinacci, D.; Jacobson, V.; Liu, C.; Wei, L; “An Architecture for Wide-Area Multicast Routing”, Computer Communication Review, vol. 24, No. 4, October 1994, Proceedings of SIGCOMM 94, pp. 126-135.
  • Donal Daly: “Expert Systems Introduced”, Chartwell-Bratt, Lund, 1988.
  • Dukach, Seymon; Prototype Implementation of the SNPP Protocol; allspicks.mit.edu; 1992.
  • E. Binaghi et al, “Indexing and Fuzzy Logic Based Retrieval of Color Images”, Visual Database Systems, II, 1992, pp. 79-92.
  • E. Binaghi et al., "A Knowledge-Based Environment for Assessment of Color Similarity", Proc. 2nd Annual Conference on Topics for AI, pp. 268-285 (1990).
  • E. Lee, “Similarity Retrieval Techniques”, Pictorial Information Systems, Springer Verlag, 1980 pp. 128-176.
  • E. G. M. Petrakis and C. Faloutsos. Similarity searching in large image databases. Technical Report 3388, Department of Computer Science, University of Maryland, 1995.
  • Edward Reitman: “Exploring the Geometry of Nature”, Windcrest Books, Blue Ridge Summit, 1989.
  • Even et al; “Electronic Wallet”; pp. 383-386; 1983.
  • F. J. Varela and P. Bourgine (eds.): Proceedings of the first European Conference on Artificial Life. Cambridge, Mass.: MIT Press. (1991).
  • Fassihi, Theresa & Bishop, Nancy, “Cable Guide Courting National Advertisers,” Adweek, Aug. 8, 1988.
  • Flickner, et al. “Query by Image and Video Content, the QBIC System”, IEEE Computer 28(9); 23-32, 1995.
  • Foltz, P. W., Dumais, S. T., “Personalized Information Delivery: An Analysis Of Information Filtering Methods”, Communications of the ACM, December 1992, vol. 35, No. 12, pp. 51-60.
  • Frank Pettit: “Fourier Transforms in Action”, Chartwell-Bratt, Lund, 1985.
  • G F Page, J B Gomm & D Williams: “Application of Neural Networks to Modelling and Control”, Chapman & Hall, London, 1993.
  • G. Mannes, "Smart Screens", Video Magazine, December 1993 (2 pages).
  • G. Tortora et al, "Pyramidal Algorithms", Computer Vision, Graphics and Image Processing, 1990, pp. 26-56.
  • Gautama, S., D'Haeyer, J. P. F., “Context Driven Matching in Structural Pattern Recognition”.
  • Gautama, S., Haeyer, J. D., “Learning Relational Models of Shape: A Study of the Hypergraph Formalism”.
  • Gene F Franklin, J David Powell & Abbas Emami-Naeini: “Feedback Control of Dynamic Systems”, Addison-Wesley Publishing Co. Reading, 1994.
  • George E P Box & Gwilym M Jenkins: “Time Series Analysis: Forecasting and Control”, Holden Day, San Francisco, 1976.
  • Gifford, D., “Notes on Community Information Systems”, MIT LCS TM-419, December 1989.
  • Gifford, David K.; “Cryptographic Sealing for Information Secrecy and Authentication”; Stanford University and Xerox Palo Alto Research Center; Communication of the ACM; vol. 25, No. 4; April, 1982.
  • Gifford, David K.; “Digital Active Advertising”; U.S. patent application Ser. No. 08/168,519; filed Dec. 16, 1993.
  • Gligor, Virgil D. et al.; “Object Migration and Authentication”; IEEE Transactions on Software Engineering; vol. SE-5, No. 6; November, 1979.
  • Gong et al. “An Image Database System with Content Capturing and Fast Image Indexing Abilities” IEEE, 1994, pp. 121-130, May 1994.
  • Gregory L Baker & Jerry P Gollub: “Chaotic Dynamics: An Introduction”, Cambridge University Press, Cambridge, 1990.
  • Gupta, Amarnath; Weymouth, Terry & Jain, Ramesh, "Semantic Queries With Pictures: The VIMSYS Model", Proceedings of the 17th International Conference on Very Large Data Bases, pp. 69-79, Barcelona, September, 1991.
  • H. Tamura et al, “Image Database Systems: A Survey”, Pattern Recognition, vol. 17, No. 1, 1984, pp. 29-34.
  • H. Tamura, et al., “Textural Features Corresponding to Visual Perception,” IEEE Transactions on System, Man, and Cyb., vol. SMC-8, No. 6, pp. 460-473 (1978).
  • H. Samet. The quadtree and related hierarchical data structures. ACM Computing Surveys, 16(2):187-260, 1984.
  • Hans Lauwerier: “Fractals—Images of Chaos”, Penguin Books, London, 1991.
  • Harty et al., “Case Study: The VISA Transaction Processing System,” 1988.
  • Heinz-Otto Peitgen & Dietmar Saupe: "The Science of Fractal Images", Springer-Verlag, New York, 1988.
  • Heinz-Otto Peitgen, Hartmut Jurgens & Dietmar Saupe: "Fractals for the Classroom", Springer-Verlag, 1992.
  • Hirata, et al. "Query by Visual Example, Content Based Image Retrieval", Advances in Database Technology-EDBT '92, Springer-Verlag, Berlin, 1992, pp. 56-71.
  • Hirzalla et al., “A Multimedia Query User Interface”, IEEE on CD-ROM, pp. 590-593, Sep. 5, 1995.
  • Hooge, Charles, “Fuzzy logic Extends Pattern Recognition Beyond Neural Networks”, Vision Systems Design, January 1998, pp. 32-37.
  • Hou et al., “Medical Image Retrieval by Spatial Features”, IEEE on CD-ROM, pp. 1364-1369, Oct. 18, 1992.
  • Iino et al., “An Object-Oriented Model for Spatio-Temporal Synchronization of Multimedia Information”, May, 1994.
  • Information Network Institute, Carnegie Mellon University, Internet Billing Server, Prototype Scope Document, Oct. 14, 1993.
  • Ingemar J. Cox et al., "Target Testing and the PicHunter Bayesian Multimedia Retrieval System," Proc. of the 3d Forum on Research and Technology Advances in Digital Libraries, ADL '96, IEEE, pp. 66-75.
  • Intel Corporation, iPower Technology, Marketing Brochure, date unknown.
  • Intuit Corp. Quicken User's Guide, “Paying Bills Electronically”, pp. 171-192; undated.
  • ISO/IEC JTC1/SC29/WG11 N1735, MPEG97, July 1997—Stockholm, "MPEG-7 Applications Document".
  • ISO/IEC JTC1/SC29/WG11 N2460, MPEG98, October 1998, "MPEG-7 Context and Objectives (v. 10—Atlantic City)"; ISO/IEC JTC1/SC29/WG11 N1920, MPEG97, October 1997, "MPEG-7 Context and Objectives (v. 5—Fribourg)"; ISO/IEC JTC1/SC29/WG11 N1733, MPEG97, July 1997, "MPEG-7 Context and Objectives (v. 4—Stockholm)".
  • ISO/IEC JTC1/SC29/WG11 N2461, MPEG98, October 1998—Atlantic City, "MPEG-7 Requirements".
  • ISO/IEC JTC1/SC29/WG11 N2462, MPEG98, October 1998—Atlantic City, "MPEG-7 Applications".
  • ISO/IEC JTC1/SC29/WG11 N2467, MPEG98, October 1998—Atlantic City, "MPEG-7 Content Set".
  • Itzhak Wilf, “Computer, Retrieve For Me the Video Clip of the Winning Goal”, Advanced Imaging, August 1998, pp. 53-55.
  • Ivan Ekeland: "Mathematics and the Unexpected", The University of Chicago Press, Chicago, 1988.
  • Kenneth Falconer: "Fractal Geometry", John Wiley & Sons, Chichester, 1990.
  • Ivars Peterson: “The Mathematical Tourist”, W H Freeman, New York, 1988.
  • Iyengar et al., “Codes Designs for Image Browsing”, 1994.
  • J W Bruce & P J Giblin: “Curves and Singularities”, Cambridge University Press, Cambridge, 1992.
  • J. Hasegawa et al, “Intelligent Retrieval of Chest X-Ray Image Database Using Sketches”, System And Computers In Japan, 1989, pp. 29-42.
  • J. M. Chassery, et al., "An Interactive Segmentation Method Based on Contextual Color and Shape Criterion", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, No. 6, (November 1984).
  • J. Wachman, “A Video Browser that Learns by Example”, Masters Thesis, Massachusetts Institute of Technology; 1996; also appears as MIT Media Laboratory Technical Report No. 383.
  • J. Hafner, H. S. Sawhney, W. Equitz, M. Flickner, and W. Niblack. Efficient color histogram indexing for quadratic form distance functions. IEEE Trans. Pattern Anal. Machine Intell., July 1995.
  • J. R. Bach, C. Fuller, A. Gupta, A. Hampapur, B. Horowitz, R. Humphrey, R. C. Jain, and C. Shu. Virage image search engine: an open framework for image management. In Symposium on Electronic Imaging: Science and Technology—Storage & Retrieval for Image and Video Databases IV, pages 76-87. IS&T/SPIE, 1996.
  • J. R. Smith and S.-F. Chang. Querying by color regions using the VisualSEEk content-based visual query system. In M. T. Maybury, editor, Intelligent Multimedia Information Retrieval. IJCAI, 1996.
  • J. R. Smith and S.-F. Chang. Tools and techniques for color image retrieval. In Symposium on Electronic Imaging: Science and Technology—Storage & Retrieval for Image and Video Databases IV, volume 2670, San Jose, Calif., February 1996. IS&T/SPIE.
  • Jacobs, Charles E., Finkelstein, Adam, Salesin, David H., “Fast Multiresolution Image Querying”.
  • James Gleick: “Chaos—Making a New Science”, Heinemann, London, 1988.


  • Jane Hunter, "The Application of Metadata Standards to Video Indexing", www.dtsc.edu.au (<Dec. 24 1998).

  • Jim Binkley & Leslie Young, "Rama: An Architecture for Internet Information Filtering", Journal of Intelligent Information Systems: Integrating Artificial Intelligence and Database Technologies, vol. 5, No. 2, September 1995, pp. 81-99.
  • Jonathan Berry, “A Potent New Tool for Selling Database Marketing”, Business Week, Sep. 5, 1994, pp. 34-40.
  • Joseph L McCauley: "Chaos, Dynamics, and Fractals", Cambridge University Press, Cambridge, 1993.
  • JPL New Technology Report NPO-20213, NASA Tech Brief, Vol. 22, No. 4, Item #156 (April 1998).
  • Judith H. Irven et al., “Multi-Media Information Services: A Laboratory Study”, IEEE Communications Magazine, vol. 26, No. 6, June, 1988, pp. 24-44.
  • K V Mardia, J T Kent & J M Bibby: “Multivariate Analysis”, Academic Press, London, 1979.
  • K. Hirata et al, “Query by Visual Example Content Based Image Retrieval”, Advances In Database Technology, March, 1992, pp. 57-71.
  • K. Wakimoto et al, “An Intelligent User Interface to an Image Database using a Figure interpretation Method”, IEEE Publication No. CH2898-5/90/0000/0516, 1990, pp. 516-520.
  • K. Woolsey, “Multimedia Scouting”, IEEE Computer Graphics And Applications, July 1991 pp. 26-38.
  • Kelly et al. “Efficiency Issues Related to Probability Density Function Comparison”, SPIE vol. 2670, pp. 42-49 January 1996.
  • Kelly, P. M., et al. “Candid Comparison Algorithm for Navigating Digital Image Databases”, Proceedings 7th International Working Conference on Scientific and Statistical Database Management, pp. 252-258, 1994.
  • Krajewski, M. et al, “Applicability of Smart Cards to Network User Authentication”, Computing Systems, vol. 7, No. 1, 1994.
  • Krajewski, M., “Concept for a Smart Card Kerberos”, 15th National Computer Security Conference, October 1992.
  • Krajewski, M., “Smart Card Augmentation of Kerberos, Privacy and Security Research Group Workshop on Network and Distributed System Security”, February 1993.
  • Lampson, Butler; Abadi, Martin; Burrows, Michael; and Wobber, Edward; "Authentication in Distributed Systems: Theory and Practice"; ACM Transactions on Computer Systems; vol. 10, No. 4; November 1992; pp. 265-310.


  • Landis, Sean, "Content-Based Image Retrieval Systems for Interior Design", www.tc.cornell.edu.

  • Langton, C. G. (ed): Artificial Life; Proceedings of the first international conference on Artificial Life, Redwood City: Addison-Wesley (1989).
  • Lee et al., “Video Indexing—An Approach based on Moving Object and Track”, Proceedings of Storage and Retrieval for Image and Video Databases, pp. 25-36. February 1993.
  • Lee, Denis, et al., “Query by Image Content Using Multiple Objects and Multiple Features: User Interface Issues,” 1994 Int'l Conf. on Image Processing, IEEE, pp. 76-80.
  • Lennart Ljung & Torsten Soderstrom: “Theory and Practice of Recursive Identification”, The MIT Press, Cambridge, Mass., 1983.
  • Lennart Ljung: “System Identification; Theory for the User”, Prentice-Hall Englewood Cliffs, N.J., 1987.
  • Loeb, S., “Architecting Personalized Delivery of Multimedia Information”, Communications of the ACM, December 1992, vol. 35, No. 12, pp. 39-50.
  • M V Berry, I C Percival & N O Weiss: "Dynamical Chaos", The Royal Society, London, 1987, Proceedings of a Royal Society Discussion Meeting held on 4 & 5 Feb. 1987.
  • M. Bender, "EFTS: Electronic Funds Transfer Systems", Kennikat Press, Port Washington, New York, pp. 43-46, 1975.
  • M. H. O'Docherty et al, “Multimedia Information System—The Management and Semantic Retrieval of all Electronic Data Types”, The Computer Journal, vol. 34, No. 3, 1991.
  • M. Ioka, “A Method of Defining the Similarity of Images on the Basis of Color Information”, Bulletin Of The National Museum Of Ethnology Special Issue, pp. 229-244, No. 17, November 1992.
  • M. Kurokawa, “An Approach to Retrieving Images by Using their Pictorial Features”, IBM Research, Japan, September 1989.
  • M. Swain et al, “Color Indexing”, International Journal Of Computer Vision, 1991, pp. 12-32.
  • M. Stricker and A. Dimai. Color indexing with weak spatial constraints. In Symposium on Electronic Imaging: Science and Technology—Storage & Retrieval for Image and Video Databases IV, pages 29-41. IS&T/SPIE, 1996.
  • M. Stricker and M. Orengo. Similarity of color images. In Storage and Retrieval for Image and Video Databases III, volume SPIE Vol. 2420, February 1995.
  • Mackay et al., “Virtual Video Editing in Interactive Multimedia Applications”, 1989.
  • Manners, George, “Smart Screens; Development of Personal Navigation Systems for TV Viewers,” Video Magazine, December 1993.
  • Martin Casdagli & Stephen Eubank: “Nonlinear Modelling and Forecasting”, Addison-Wesley Publishing Co., Redwood City, 1992.
  • Martinez et al. "Imagenet: A Global Distribution Database for Color Image Storage and Retrieval in Medical Imaging Systems" IEEE, 1992, pp. 710-719, May 1992.
  • Marvin A. Sirbu; Internet Billing Service Design And Prototype Implementation; pp. 1-19; An Internet Billing Server.
  • Masahiro Morita & Yoichi Shinoda, "Information Filtering Based on User Behavior Analysis and Best Match Text Retrieval", Proceedings of the Seventeenth Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, Dublin, Jul. 3-6, 1994, pp. 272-281.
  • Medvinsky et al, "NetCash: A Design for Practical Electronic Currency on the Internet", Proc. 1st ACM Conf. on Comp. and Comm. Security, November 1993.
  • Medvinsky et al., "Electronic Currency for the Internet", Electronic Markets, pp. 30-31, September 1993.
  • Meyer, J. A., Roitblat, H. L., Wilson, W. (eds.): From Animals to Animats. Proceedings of the Second International Conference on Simulation of Adaptive Behaviour. Cambridge, Mass.: MIT Press. (1991).
  • Middleton, G. V. ed., 1991, Nonlinear Dynamics, Chaos and Fractals, with Applications to Geological Systems. Geol. Assoc. Canada Short Course Notes Vol. 9 (available from the GAC at Memorial University of Newfoundland, St. John's, NF A1B 3X5).
  • Mills, “Media Composition for Casual Users”, 1992.
  • Minneman et al., “Where Were We: making and using near-synchronous, pre-narrative video”, Multimedia '93, pp. 1-11. December 1993.
  • N. Hutheesing, "Interactivity for the passive", Forbes magazine, Dec. 6, 1993 (© Forbes Inc. 1993) (2 pages).
  • N. S. Chang et al., “Query-by-Pictorial Example”, IEEE Transactions on Software Engineering, vol. SE-6, No. 6, pp. 519-524 (November 1980).
  • N. S. Chang, et al., “Picture Query Languages for Pictorial Data-Base Systems”, Computer vol. 14, No. 11, pp. 23-33 (November 1981).
  • Nagasaka et al., “Automatic Video Indexing and Full-Video Search for Object Appearances”, Visual Database Systems, (Knuth et al., eds.), pp. 113-126. January 1992.
  • National Westminster Bank Group Brochure; pp. 1-29; undated.
  • Needham, Roger M. and Schroeder, Michael D.; “Using Encryption for Authentication in Large Networks of Computers”; Communications of the ACM; vol. 21, No. 12; December, 1978; pp. 993-999.
  • Needham, Roger M.; “Adding Capability Access to Conventional File Servers”; Xerox Palo Alto Research Center; Palo Alto, Calif.
  • Newman, B. C., “Proxy-Based Authorization and Accounting for Distributed Systems”, Proc. 13th Int. Conf. on Dist. Comp. Sys., May 1993.
  • Niblack, W. et al., “The QBIC Project: Querying Images by Content Using Color, Texture, and Shape”, IBM Computer Science Research Report, pp. 1-20 (Feb. 1, 1993).
  • Nussbaumer et al., “Multimedia Delivery on Demand: Capacity Analysis and Implications”, Proc 19th Conference on Local Computer Networks, 2-5 Oct. 1994, pp. 380-386.
  • O. Guenther and A. Buchmann. Research issues in spatial databases. In ACM SIGMOD Record, volume 19, December 1990.
  • Okamoto et al; “Universal Electronic Cash”, pp. 324-337; 1991.
  • Ono, Atsushi, et al., “A Flexible Content-Based Image Retrieval System with Combined Scene Description Keyword,” Proc. of Multimedia '96, IEEE, pp. 201-208.
  • Otis Port, “Wonder Chips-How They'll Make Computing Power Ultrafast and Ultracheap”, Business Week, Jul. 4, 1994, pp. 86-92.
  • P G Drazin: "Nonlinear Systems", Cambridge University Press, Cambridge, 1992.
  • P. Stanchev et al, “An Approach to Image Indexing of Documents”, Visual Database Systems, II, 1992, pp. 63-77.
  • Peter J Diggle: “Time Series: A Biostatistical Introduction”, Clarendon Press, Oxford, 1990.
  • Peters: "Chaos and Order in the Capital Markets", Wiley, 1991.
  • Gershenfeld & Weigend: "The Future of Time Series", Addison-Wesley, 1993.
  • Pfitzmann et al; “How to Break and Repair a Provably Secure Untraceable Payment System”; pp. 338-350; 1991.
  • Phillips, “MediaView: a general multimedia digital publication system”, Comm. of the ACM, v. 34, n. 7, pp. 75-83. July 1991.
  • Predrag Cvitanovic: “Universality in Chaos”, Adam Hilger, Bristol, 1989.
  • R. Mehrotra et al, “Shape Matching Utilizing Indexed Hypotheses Generation and Testing”, IEEE Transactions On Robotics, vol. 5, No. 1, February 1989, pp. 70-77.
  • R. Price, et al., “Applying Relevance Feedback to a Photo Archival System”, Journal of Information Science 18, pp. 203-215 (1992).
  • R. W. Picard et al, "Finding Similar Patterns in Large Image Databases", IEEE ICASSP, Minneapolis, Minn., vol. V, pp. 161-164, April 1993; also appears as MIT Media Laboratory Technical Report No. 205.
  • Rangan et al., “A Window-based Editor for Digital Video and Audio”, January 1992.
  • Richards et al., "The Interactive Island", IEE Review, July/August 1991, pp. 259-263.
  • Rivest, R.; “The MD5 Message-Digest Algorithm”; MIT Laboratory for Computer Science and RSA Data Security, Inc.; April, 1992.
  • Rivest, R. L. et al., “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems,” Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, Mass.
  • Rivest, R. L.; Shamir, A. & Adleman, L.; “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems”, Communications of the ACM, February 1978, vol. 21, No. 2, pp. 120-126.
  • Robert Brown: “Statistical Forecasting for Inventory Control”, McGraw-Hill Book Co., New York, 1958.
  • Robinson, G., and Loveless, W., "'Touch-Tone' Teletext—A Combined Teletext-Viewdata System," IEEE Transactions on Consumer Electronics, vol. CE-25, No. 3, July 1979, pp. 298-303.
  • Roizen, Joseph, “Teletext in the USA,” SMPTE Journal, July 1981, pp. 602-610.
  • Rose, D. E.; Mander, R.; Oren, T., Ponceleon, D. B.; Salomon, G. & Wong, Y. Y. “Content Awareness in a File System Interface Implementing the ‘Pile’ Metaphor for Organizing Information”, 16 Ann. Int'l SIGIR '93, ACM, pp. 260-269.
  • Ross Anderson, “Why Cryptosystems Fail”, Proc. 1st Conf. Computer and Comm. Security, pp. 215-227, November 1993.
  • Ross J. Anderson, "UEPS—A Second Generation Electronic Wallet", Proc. of the Second European Symposium on Research in Computer Security (ESORICS), Toulouse, France, pp. 411-418.
  • Rui, Yong, Huang, Thomas S., Chang, Shih-Fu, “Image Retrieval: Past Present and Future”.
  • Rui, Yong, Huang, Thomas S., Mehotra, Sharad, “Browsing and retrieving Video Content in a Unified Framework”.
  • Rui, Yong, Huang, Thomas S., Ortega, Michael, Mehotra, Sharad, “Relevance Feedback: A Power Tool for Interactive Content-Based Image Retrieval”.
  • S. Chang et al, “An Intelligent Image Database System”, IEEE Transactions On Software Engineering, vol. 14, No. 5, May 1988, pp. 681-688.
  • S. Chang et al, "Iconic Indexing by 2-D Strings", IEEE Transactions On Pattern Analysis And Machine Intelligence, vol. 9, No. 3, May 1987, pp. 413-427.
  • S. Charles et al, “Using Depictive Queries to Search Pictorial Databases”, Human Computer Interaction, 1990, pp. 493-498.
  • S. Lee et al, “2D C-string: A New Spatial Knowledge Representation for Image Database Systems”, Pattern Recognition, vol. 23, 1990, pp. 1077-1087.
  • S. Lee et al, “Similarity Retrieval of Iconic Image Database”, Pattern Recognition, vol. 22, No. 6 1989, pp. 675-682.
  • S. Lee et al, “Spatial Reasoning and Similarity Retrieval of Images Using 2D C-string Knowledge Representation”, Pattern Recognition, 1992, pp. 305-318.
  • S. Negahdaripour et al, “Challenges in Computer Vision: Future Research Direction”, IEEE Transactions On Systems, Man And Cybernetics, pp. 189-199, 1992, at Conference on Computer Vision and Pattern Recognition.
  • S. Tanaka et al, “Retrieval Method for an Image Database based on Topological Structure”, SPIE, vol. 1153, 1989, pp. 318-327.
  • S.-F. Chang. Compressed-domain techniques for image/video indexing and manipulation. In Proceedings, I.E.E.E. International Conference on Image Processing, Washington, D.C., October 1995. Invited paper to the special session on Digital Library and Video on Demand.
  • S.-K. Chang, Q. Y. Shi, and C. Y. Yan. Iconic indexing by 2-D strings. IEEE Trans. Pattern Anal. Machine Intell., 9(3):413-428, May 1987.
  • S.-K. Chang. Principles of Pictorial Information Systems Design. Prentice Hall, 1989.
  • Salton, G., “Developments in Automatic Text Retrieval”, Science, vol. 253, pp. 974-980, Aug. 30, 1991.
  • Schamuller-Bichl, I., “IC-Cards in High-Security Applications”, in Selected Papers from the Smart Card 2000 Conference, Springer Verlag, 1991, pp. 177-199.
  • Semyon Dukach, “SNPP: A Simple Network Payment Protocol”, MIT Laboratory for Computer Science, Cambridge, Mass., 1993.
  • Shann et al. “Detection of Circular Arcs for Content-Based Retrieval from an Image Database” IEE Proc.-Vis. Image Signal Process, vol. 141, No. 1, February 1994, pp. 49-55.
  • Sheldon G Lloyd & Gerald D Anderson: “Industrial Process Control”, Fisher Controls Co., Marshalltown, 1971.
  • Sheth et al., “Evolving Agents for Personalized Information Filtering”, 1-5 Mar. 1993, pp. 345-352.
  • Sheth, B. & Maes, P. “Evolving Agents For Personalized Information Filtering”, Proc. 9th IEEE Conference, 1993, pp. 345-352.
  • Sincoskie, W. D. & Cotton C. J. “Extended Bridge Algorithms for Large Networks”, IEEE Network, January 1988, vol. 2, No. 1, pp. 16-24.
  • Smith, J. et al., “Quad-Tree Segmentation for Texture-Based Image Query” Proceeding ACM Multimedia 94, pp. 1-15, San Francisco, 1994.
  • Smoliar, S. et al., “Content-Based Video Indexing and Retrieval”, IEEE Multimedia, pp. 62-72 (Summer 1994).
  • Society for Worldwide Interbank Financial Telecommunications S. C., “A.S.W.I.F.T. Overview”, undated.
  • Spyros Makridakis & Steven Wheelwright: “The Handbook of Forecasting”, John Wiley, New York, 1982.
  • Steven C Chapra & Raymond P Canale: “Numerical Methods for Engineers”, McGraw-Hill Book Co., New York, 1988.
  • T. Arndt, “A Survey of Recent Research in Image Database Management”, IEEE Publication No. TH0330-1/90/0000/0092, pp. 92-97, 1990.
  • T. Gevers et al, “Enigma: An Image Retrieval System”, IEEE 11th IAPR International Conference On Pattern Recognition, 1992, pp. 697-700.
  • T. Gevers et al, “Indexing of Images by Pictorial Information”, Visual Database Systems, II, 1992 IFIP, pp. 93-101.
  • T. Kato et al, “A Cognitive Approach to Visual Interaction”, International Conference Of Multimedia Information Systems, January, 1991, pp. 109-119.
  • T. Kato et al, “Trademark: Multimedia Database with Abstracted Representation on Knowledge Base”, Proceedings Of The Second International Symposium On Interoperable Information Systems, pp. 245-252, November 1988.
  • T. Kato et al, “Trademark: Multimedia Image Database System with Intelligent Human Interface”, System And Computers In Japan, 1990, pp. 33-46.
  • T. Kato, “A Sketch Retrieval Method for Full Color Image Database-Query by Visual Example”, IEEE, Publication No. 0-8186-2910-X/92, 1992, pp. 530-533.
  • T. Kato, “Intelligent Visual Interaction with Image Database Systems Toward the Multimedia Personal Interface”, Journal Of Information Processing, vol. 14, No. 2, 1991, pp. 134-143.
  • T. Minka, “An Image Database Browser that Learns from User Interaction”, Masters Thesis, Massachusetts Institute of Technology; 1996; also appears as MIT Media Laboratory Technical Report 365.
  • T.-S. Chua, S.-K. Lim, and H.-K. Pung. Content-based retrieval of segmented images. In Proc. ACM Intern. Conf. Multimedia, October 1994.
  • Tak W. Yan & Hector Garcia-Molina, SIFT—A Tool for Wide-Area Information Dissemination, 1995 USENIX Technical Conference, New Orleans, La., January 16-20, pp. 177-186.
  • Tanton, N. E., “UK Teletext—Evolution and Potential,” IEEE Transactions on Consumer Electronics, vol. CE-25, No. 3, July 1979, pp. 246-250.
  • Tenenbaum, Jay M. and Schiffman, Allan M.; “Development of Network Infrastructure and Services for Rapid Acquisition”; adapted from a white paper submitted to DARPA by MCC in collaboration with EIT and ISI.
  • Training Computers To Note Images, New York Times, Apr. 15, 1992.
  • Turcotte, Donald L., 1992, Fractals and Chaos in Geology and Geophysics. Cambridge U.P.
  • TV Communications Advertisement for MSI Datacasting Systems, January 1973.
  • V. Gudivada et al, “A Spatial Similarity Measure for Image Database Applications”, Technical Report 91-1, Department of Computer Science, Jackson, Miss., 39217, 1990-1991.
  • V. N. Gudivada and V. V. Raghavan. Design and evaluation of algorithms for image retrieval by spatial similarity. ACM Trans. on Information Systems, 13(2), April 1995.
  • Vittal, J., “Active Message Processing: Messages as Messengers”, pp. 175-195; 1981.
  • Voydock, Victor et al.; “Security Mechanisms in High-Level Network Protocols”; Computing Surveys; vol. 15, No. 2; June 1983.
  • W Gellert, H Kustner, M Hellwich & H Kastner: “The VNR Concise Encyclopedia of Mathematics”, Van Nostrand Reinhold Co., New York, 1975.
  • W. Grosky et al, “A Pictorial Index Mechanism for Model-based Matching”, Data & Knowledge Engineering 8, 1992, pp. 309-327.
  • W. Grosky et al, “Index-based Object Recognition in Pictorial Data Management”, Computer Vision, 1990, pp. 416-436.
  • W. Niblack et al, “Find me the Pictures that Look Like This: IBM's Image Query Project”, Advanced Imaging, April 1993, pp. 32-35.
  • W. Niblack, R. Barber, W. Equitz, M. Flickner, E. Glasman, D. Petkovic, P. Yanker, and C. Faloutsos. The QBIC project: Querying images by content using color, texture, and shape. In Storage and Retrieval for Image and Video Databases, volume SPIE Vol. 1908, February 1993.
  • W.T. Freeman et al, “The Design and Use of Steerable Filters”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, No. 9, September 1991, pp. 891-906.
  • Weber et al., “Marquee: A Tool for Real-Time Video Logging”, CHI '94. April 1994.
  • Willett, P., “Recent Trends in Hierarchic Document Clustering: A Critical Review”, Information Processing & Management, vol. 24, No. 5, pp. 557-597, 1988.
  • William L. Thomas, “Electronic Program Guide Applications—The Basics of System Design”, 1994 NCTA Technical Papers, pp. 15-20.
  • X. Zhang, et al, “Design of a Relational Image Database Management System: IMDAT”, IEEE Publication No. TH0166-9/87/0000-0310, 1987, pp. 310-314.
  • Y. Okada, et al., “An Image Storage and Retrieval System for Textile Pattern Adaptable to Color Sensation of the Individual”, Trans. Inst. Elec. Inf. Comm., vol. J70D, No. 12, pp. 2563-2574, December 1987 (Japanese w/English Abstract).
  • Y. Yoshida et al, “Description of Weather Maps and Its Application to Implementation of Weather Map Database”, IEEE 7th International Conference On Pattern Recognition, 1984, pp. 730-733.
  • Yan et al., “Index Structures for Information Filtering Under the Vector Space Model”, Proc. of the 10th International Conference on Data Engineering.
  • DRD203RW User's Manual relating to the DSS Digital System, pp. 14-18.
  • Z. Chen et al, “Computer Vision for Robust 3D Aircraft Recognition with Fast Library Search”, Pattern Recognition, vol. 24, No. 5, pp. 375-390, 1991, printed in Great Britain.
  • Zhuang, Yueting, Rui, Yong, Huang, Thomas S., Mehrotra, Sharad, “Applying Semantic Association to Support Content-Based Video Retrieval”.


Video on Demand


Video on demand has long been sought as a means for delivering personalized media content. Practical systems raise numerous issues, including data storage formats, retrieval software, server hardware architecture, multitasking and buffering arrangements, physical communications channel, logical communications channel, receiver and decoder system, user interface, etc. In addition, a pay-per-view concept may typically be employed, with concomitant subscription, royalty collection and accounting issues. See, e.g.:

  • A. D. Gelman, et al.: A Store-And-Forward Architecture For Video-On-Demand Service; ICC 91 Conf.; June 1991; pp. 842-846.
  • Caitlin Bestler: Flexible Data Structures and Interface Rituals For Rapid Development of OSD Applications; 93 NCTA Tech. Papers; Jun. 6, 1993; pp. 223-236.
  • Consumer Digest advertisement: Xpand Your TV's Capability: Fall/Winter 1992; p. 215.
  • Daniel M. Moloney: Digital Compression in Today's Addressable Environment; 1993 NCTA Technical Papers; Jun. 6, 1993; pp. 308-316.
  • Great Presentations advertisement: Remote, Remote; 1987; p. 32H.
  • Henrie van den Boom: An Interactive Videotex System for Two-Way CATV Networks; AEU, Band 40; 1986; pp. 397-401.
  • Hong Kong Enterprise advertisement: Two Innovative New Consumer Products From SVI; November 1988; p. 379.
  • Chang et al, “An Open Systems Approach to Video on Demand”, IEEE Communications Magazine, vol. 32, No. 5, May 1994, New York, N.Y., US, pp. 68-80, XP 000451097.
  • Miller, “A Scenario for the Deployment of Interactive Multimedia Cable Television Systems in the United States in the 1990's”, Proceedings of the IEEE, vol. 82, No. 4, April 1994, New York, N.Y., US, pp. 585-589, XP 000451419.
  • Reimer, “Memories in my Pocket”, Byte, pp. 251-258, February 1991.
  • Sharpless, “Subscription teletext for value added services”, August 1985.


Demographically Targeted Advertising Through Electronic Media


Since the advent of commercially subsidized print media, attempts have been made to optimize the placement and compensation aspects relating to commercial messages or advertisements in media. In general, advertisers subsidize a large percentage of the cost of mass publications and communications, in return for the inclusion, and possibly strategic placement, of advertisements in the publication. Therefore, the cost of advertising in such media includes the cost of preparation of the advertisement, a share of the cost of publication, and a profit for the content provider and other services. Since the advertiser must bear some of the cost of production and distribution of the content, in addition to the cost of advertisement placement itself, the cost may be substantial. The advertiser justifies this cost because of the wide public reception of the advertisement and the typically low cost per consumer “impression”, with a related stimulation of sales due to commercial awareness of the advertiser's products and services. Therefore, the advertisement is deemed particularly effective if either the audience is very large, with ad response proportionate to the size of the audience, or if it targets a particularly receptive audience, with a response rate higher than that of the general population.


On the other hand, the recipient of the commercial publication is generally receptive to the advertisement, even though it entails a potential inefficiency in terms of increased data content and in receiving the desired content segment, for two reasons. First, the advertisements subsidize the publication, lowering the monetary cost to the recipient. Second, it is considered economically efficient for a recipient to review commercial information relating to prospective purchases or expenditures, rather than directly soliciting such information from the commercial source, i.e., “push” is better than “pull”. For this reason, specialty publications are produced, including commercial messages appropriate for the particular content of the media or the intended recipients. In fact, in some forms of publications, most, if not all, of the information content consists of paid advertisements, with few editorial or independently produced pieces.


Mass media, on the other hand, tends not to include specialty commercial messages, because the interested population is too dispersed and the resulting response rate from an advertisement too low, and further because the majority of the audience will be disinterested in, or even respond negatively to, certain messages. Thus, mass media generally includes a majority of retail advertisements, with specialty advertisements relegated, if at all, to a classified section which is not interspersed with other content.


This is the basis for a “least common denominator” theory of marketing: mass media must merchandise to the masses, while specialty media merchandises to selected subpopulations. As a corollary, using such types of media, it may be difficult to reach certain specialized populations who do not consistently receive a common set of publications, or who receive primarily publications which are unspecialized or directed to a different specialty.


Where a recipient has limited time for reviewing media, he or she must divide his or her available time between mass media and specialty media. Alternatively, publication-on-demand services have arisen which select content based on a user's expressed interests. Presumably, these same content selection algorithms may be applied to commercial messages. However, these services primarily have limited distribution, and their content is as variable as the commercial messages themselves. Likewise, mass media often has regionally variable content, such as local commercials on television or cable systems, or differing editions of print media for different regions. Methods are known for demographic targeting of commercial information to consumers; however, both the delivery methods and the demographic targeting methods tend to be suboptimal.


Sometimes, however, the system breaks down, resulting in inefficiencies. These result where the audience, or a substantial proportion thereof, is inappropriate for the material presented, so that the advertiser realizes a low response rate, or the media even suffers a negative response due to the presence of particular commercial advertisers. The recipients are bombarded with inappropriate information, while the advertiser fails to realize an optimal return on its advertising expenditures. In order to minimize the occurrence of these situations, services are available, including those of A. C. Nielsen Co. and Arbitron, Inc., which seek to determine the demographics of the audience of broadcast media.


U.S. Pat. No. 5,436,653, incorporated herein by reference, relates to a broadcast segment recognition system in which a signature representing a monitored broadcast segment is compared with broadcast segment signatures in a data base representing known broadcast segments to determine whether a match exists. Therefore, the broadcast viewing habits of a user may be efficiently and automatically monitored, without pre-encoding broadcasts or the like.


U.S. Pat. No. 5,459,306, incorporated herein by reference, relates to a method for delivering targeted information to a prospective individual user. Personal user information is gathered, as well as information on the user's use of a product, then correlated and stored. Classes of information potentially relevant to future purchases are then identified, and promotions and recommendations are delivered based on that information and the user information.


U.S. Pat. No. 5,483,278, incorporated herein by reference, relates to a system having a user interface which can access downloaded electronic programs and associated information records, and which can automatically correlate the program information with the preferences of the user, to create and display a personalized information database based upon the results of the correlation. Likewise, U.S. Pat. No. 5,223,914, expressly incorporated herein by reference, relates to a system and method for automatically correlating user preferences with a T.V. program information database.


U.S. Pat. No. 5,231,494, expressly incorporated herein by reference, relates to a system which selectively extracts one of a plurality of compressed television signals from a single channel based on viewer characteristics.


U.S. Pat. No. 5,410,344 relates to a system for selecting video programs based on viewers' preferences, using content codes of the programs.


U.S. Pat. No. 5,485,518, incorporated herein by reference, relates to a system for electronic media program recognition and choice, allowing, for example, parental control of the individual programs presented, without requiring a transmitted editorial code.


Videoconferencing Technologies


Videoconferencing systems are well known in the art. A number of international standards have been defined, providing various telecommunication bandwidth and communication link options. For example, H.320, H.323 and H.324 are known transport protocols over ISDN, packet switched networks and public switched telephone networks, respectively. H.324 provides a multimedia information communication and videoconferencing standard for communication over the standard “plain old telephone system” network (“POTS”), in which the video signal is compressed using DCT transforms and motion compensation for transmission over a v.80 synchronous v.34-type modem link. The video image is provided as a video window with relatively slow frame rate. This image, in turn, may be presented on a computer monitor or television system, with appropriate signal conversion. See, Andrew W. Davis, “Hi Grandma!: Is It Time for TV Set POTS Videoconferencing?”, Advanced Imaging, pp. 45-49 (March 1997); Jeff Child, “H.324 Paves Road For Mainstream Video Telephony”, Computer Design, January 1997, pp. 107-110. A newly proposed set of extensions to H.324, called H.324/M, provides compatibility with mobile or impaired telecommunications systems, and accommodates errors and distortions in transmissions, reduced or variable transmission rates and other anomalies of known available mobile telecommunications systems, such as Cellular, GSM, and PCS.


Four common standards are employed, which allow videoconferencing stations to communicate with each other. The first is called H.320, and encompasses relatively high bandwidth systems, in increments of 64 kbits/sec digital communication with a synchronous communication protocol. Generally, these systems communicate at 128 kbits/sec, 256 kbits/sec or 384 kbits/sec, over a number of “bonded” ISDN B-channels. The second standard, H.324, employs a standard POTS communication link with a v.80/v.34bis modem, communicating at 33.6 kbits/sec synchronous. The third standard is the newly established H.323, which provides for videoconferencing over a packet switched network, such as Ethernet, using IPX or TCP/IP. Finally, there are so-called Internet videophone systems, such as Intel ProShare. See, Andrew W. Davis, “The Video Answering Machine: Intel ProShare's Next Step”, Advanced Imaging, pp. 28-30 (March 1997).


In known standards-based videoconferencing systems, the image is generally compressed using a discrete cosine transform, which operates in the spatial frequency domain. In this domain, visually unimportant information, such as low-amplitude detail and high frequency noise, is eliminated, leaving visually important information. Further, because much of the information in a videoconference image is repeated in sequential frames, with possible movement, this redundant information is transmitted infrequently and filtered from the transmitted image stream, and is instead described with motion vector information. This motion vector information encodes objects which are fixed or move somewhat between frames. Such known techniques include H.261, with integer pixel motion estimation, and H.263, which provides ½ pixel motion estimation. Other techniques for video compression are known or have been proposed, such as H.263+ and MPEG-4 encoding. Many standard videoconferencing protocols require the initial transmission of a full frame image, in order to set both transmitting and receiving stations to the same encoding state. The digital data describing this image is typically Huffman encoded for transmission. Multiple frames may be combined and coded as a unit, for example as so-called PB frames. Other techniques are also known for reducing image data transmission bandwidth for various applications, including video conferencing.
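

By way of illustration only, the following Python sketch demonstrates the two techniques named above: an 8x8 block discrete cosine transform with coarse uniform quantization, and an exhaustive block-matching motion estimation search. The block size, quantization step, search range and frame data are assumptions chosen for the example, and do not reproduce the H.261 or H.263 specifications.

    import numpy as np

    N = 8  # block size used by the DCT-based coding standards

    def dct_matrix(n=N):
        # Orthonormal DCT-II basis matrix.
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0, :] /= np.sqrt(2)
        return C * np.sqrt(2.0 / n)

    C = dct_matrix()

    def encode_block(block, q=16):
        # Forward 2-D DCT; uniform quantization zeroes the low-amplitude
        # coefficients that contribute little to the perceived image.
        return np.round(C @ block @ C.T / q).astype(int)

    def motion_vector(prev, cur, y, x, search=4):
        # Exhaustive search for the offset in the previous frame that best
        # predicts the current block, by sum of absolute differences (SAD).
        target = cur[y:y + N, x:x + N]
        best = (0, 0, float("inf"))
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy <= prev.shape[0] - N and 0 <= xx <= prev.shape[1] - N:
                    sad = float(np.abs(prev[yy:yy + N, xx:xx + N] - target).sum())
                    if sad < best[2]:
                        best = (dy, dx, sad)
        return best

    rng = np.random.default_rng(0)
    prev = rng.integers(0, 256, (32, 32)).astype(float)
    cur = np.roll(prev, (1, 2), axis=(0, 1))  # content shifted down 1, right 2
    print(encode_block(cur[:N, :N]))
    print(motion_vector(prev, cur, 8, 8))     # expect (-1, -2, 0.0)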


Each remote videoconference terminal has an interface system, which receives the digital data, and separates the video information (H.261, H.263), audio information (G.711, G.723, G.723.1), data protocol information (HDLC, V.14, LAPM, etc.) and control information (H.245, H.221/H.223) into discrete streams, which are processed separately. Likewise, each terminal interface system also assembles the audio information, video information, data protocols and control data for transmission. The control information consists of various types of information: the standard control protocol, which addresses the data format, error correction, exception handling, and other types of control; and the multipoint control information, such as which remote videoconference terminal(s) to receive audio information from, selective audio muting, and the like. Generally, the standard, low level control information is processed locally, at the codec interface system, and filtered from the remainder of the multipoint control system, with only the extracted content information made available to the other stations.
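

The actual H.221/H.223 multiplex formats are considerably more involved, but the separation of a combined bit stream into discrete media, data and control streams can be sketched in Python using a hypothetical tagged frame format (a one-byte channel identifier, a two-byte length, then the payload). The channel numbering below is an assumption made only for this example.

    import struct
    from collections import defaultdict

    # Hypothetical channel assignments standing in for the real multiplex tables.
    CHANNELS = {0: "control",   # e.g. H.245 messages
                1: "audio",     # e.g. G.723.1 frames
                2: "video",     # e.g. H.263 bitstream
                3: "data"}      # e.g. T.120 conferencing data

    def demultiplex(packet: bytes):
        # Walk the packet, routing each tagged payload to its logical stream.
        streams = defaultdict(bytearray)
        offset = 0
        while offset < len(packet):
            chan, length = struct.unpack_from(">BH", packet, offset)
            offset += 3
            streams[CHANNELS.get(chan, "unknown")] += packet[offset:offset + length]
            offset += length
        return streams

    frame = struct.pack(">BH4s", 1, 4, b"\x01\x02\x03\x04") + \
            struct.pack(">BH2s", 0, 2, b"\xaa\xbb")
    print({k: bytes(v) for k, v in demultiplex(frame).items()})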


The ITU has developed a set of multipoint videoconferencing standards or recommendations, T.120-T.133, T.RES series, H.231, H.243, etc. These define control schemes for multiple party video conferences. Typically, these protocols are implemented in systems which either identically replicate the source image data stream from one source to a plurality of destinations, or completely decode and reencode the image in a different format in a “transcoder” arrangement, to accommodate incompatible conference stations. The ITU standards also allow optional data fields which may be used to communicate digital information essentially outside the videoconference scheme, and provide data conferencing capabilities, which allow videoconferencing and data conferencing to proceed simultaneously. See, ITU T.120-T.127, T.130-T.133, T.RES, T.Share and T.TUD recommendations, expressly incorporated herein by reference.


There are a number of known techniques for transmitting and displaying alphanumeric data on a television, the most common of which are teletext, used primarily in Europe, and closed caption, which is mandated in television sets larger than 13 inches by the Television Decoder Circuitry Act of 1990, Section 305 of the Telecommunications Act of 1996, and Federal Communications Commission (FCC) regulations. The American closed caption standard is EIA 608. The latter is of particular interest because many current generation televisions, especially larger sizes, include a closed caption decoder, and thus require no external hardware or connections separate from the hardware and cabling for supplying the video signal. See, TCC Tech Facts, Vols. 1-4, (www.wgbh.org, rev. 9/95), expressly incorporated herein by reference. The closed caption signal is distributed on Line 21 of the vertical blanking interval. The existing standard supports 480 bits/sec, with a potential increase to 9600 bits/sec in the forthcoming ATSC standard.
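

The Line 21 format lends itself to a brief illustration: EIA 608 carries two bytes per video frame, each consisting of seven data bits plus an odd parity bit, which at approximately 30 frames per second yields the 480 bits/sec figure quoted above. The following Python sketch checks the parity and renders printable basic-set characters; control codes and the extended character sets are omitted for brevity.

    def odd_parity_ok(byte: int) -> bool:
        # The eight bits of each Line 21 byte must contain an odd number of 1s.
        return bin(byte & 0xFF).count("1") % 2 == 1

    def decode_pair(b1: int, b2: int) -> str:
        # Strip the parity bit and keep printable basic-set characters.
        chars = []
        for b in (b1, b2):
            if not odd_parity_ok(b):
                continue              # a real decoder would flag the error
            c = b & 0x7F
            if 0x20 <= c <= 0x7E:
                chars.append(chr(c))
        return "".join(chars)

    bits_per_sec = 2 * 8 * 30         # 2 bytes/frame, 8 bits/byte, ~30 frames/s
    print(bits_per_sec)               # -> 480
    print(decode_pair(0xC8, 0xE9))    # 0xC8 is 'H' plus parity; 0xE9 is 'i'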


Known systems provide a videoconferencing system which resides in a “set top box”, i.e., a stand-alone hardware device suitable for placement on top of a television set, providing all of the necessary functionality of a videoconferencing system while employing the television for the display and possibly the audio speaker functions. These systems, however, do not integrate the television functions, nor provide interaction between the video and videoconferencing systems. C-Phone Inc., Wilmington, N.C., provides a C-Phone Home product line which provides extensions to H.324 and/or H.320 communications in a set-top box.


Other known videophone and videoconferencing devices are disclosed, e.g., in U.S. Pat. Nos. 5,600,646; 5,565,910; 5,564,001; 5,555,443; 5,553,609; 5,548,322; 5,542,102; 5,537,472; 5,526,405; 5,509,009; 5,500,671; 5,490,208; 5,438,357; 5,404,579; 5,374,952; 5,224,151; 4,543,665; 4,491,694; 4,465,902; 4,456,925; 4,427,847; 4,414,432; 4,377,729; 4,356,509; 4,349,701; 4,338,492; 4,008,376 and 3,984,638 each of which is expressly incorporated herein by reference.


Known Web/TV devices (from Sony/Magnavox/Philips) allow use of a television to display alphanumeric data, as well as audiovisual data, but format this data for display outside the television. In addition, embedded Web servers are also known. See, Richard A. Quinell, “Web Servers in embedded systems enhance user interaction”, EDN, Apr. 10, 1997, pp. 61-68, incorporated herein by reference. Likewise, combined analog and digital data transmission schemes are also known. See, U.S. Pat. No. 5,404,579.


A class of computing devices has emerged, representing a convergence of personal computers and entertainment devices, which provides network access to the Internet (a publicly available network operating over TCP/IP). ITU standards for communications systems allow the selective addition of data, according to the T.120-T.133 and T.RES series of protocols, as well as HDLC, V.14 and LAPM, to the videoconference stream, especially where excess bandwidth is available for upload or download.


A system may be provided with features enabling it to control a so-called smart house and/or to be a part of a security and/or monitoring system, with imaging capability. These functions are provided as follows. As discussed above, various data streams may be integrated with a videoconference data stream over the same physical link. Therefore, external inputs and outputs may be provided to the videophone or videoconference terminal, which may be processed locally and/or transmitted over the telecommunications link. The local device, in this case, is provided with a continuous connection or an autodial function, to create a communications link as necessary. Therefore, heating, ventilation and air conditioning (HVAC) control, lighting, appliances, machinery, valves, security sensors, locks, gates, access points, etc., may all be controlled locally or remotely through interfaces of the local system, which may include logic level signals, relays, serial ports, computer networks, fiber optic interfaces, infrared beams, radio frequency signals, transmissions through power lines, standard-type computer network communications (twisted pair, coaxial cable, fiber optic cable), acoustic transmissions and other known techniques. Likewise, inputs from various devices and sensors, such as light or optical, temperature, humidity, moisture, pressure, fluid level, security, radio frequency and acoustic devices, may be received and processed locally or remotely. A video and audio signal transmission may also be combined with the data signals, allowing enhanced remote monitoring and control possibilities. This information, when transmitted through the telecommunication link, may be directed to another remote terminal, for example a monitoring service or a person seeking to monitor his own home, or intercepted and processed at a central control unit or another device. Remote events may be monitored, for example, on a closed caption display mode of a television attached to a videophone.
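

The kind of markup-language control interface contemplated herein may be sketched as an embedded web server. In the following Python example the device names, port number and command format are invented for illustration only: a remote terminal issues a simple command as a query parameter (e.g., GET /?porch_light=on) and receives the current status of all points as a small HTML page.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    # Hypothetical control points; a real system would map these to relays,
    # serial ports, power line transmissions, etc., as described above.
    DEVICES = {"hvac_setpoint": "68", "porch_light": "off", "front_gate": "locked"}

    class ControlHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            query = parse_qs(urlparse(self.path).query)
            for name, values in query.items():
                if name in DEVICES:       # apply the remote command
                    DEVICES[name] = values[0]
            body = "<html><body><h1>Appliance Status</h1><ul>"
            body += "".join(f"<li>{k}: {v}</li>" for k, v in DEVICES.items())
            body += "</ul></body></html>"
            data = body.encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    if __name__ == "__main__":
        HTTPServer(("", 8080), ControlHandler).serve_forever()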


While the preferred embodiments of the invention adhere to established standards, the present invention also encompasses communications which deviate from or extend beyond such standards, and thus may engage in proprietary communications protocols, between compatible units.


OTHER REFERENCES

In addition, the following patents are considered relevant to the data compression and pattern recognition functions of the apparatus and interface of the present invention and are incorporated herein by reference: U.S. Pat. Nos. 3,609,684; 3,849,760; 3,950,733; 3,967,241; 4,025,851; 4,044,243; 4,100,370; 4,118,730; 4,148,061; 4,213,183; 4,225,850; 4,228,421; 4,230,990; 4,245,245; 4,254,474; 4,264,924; 4,264,925; 4,305,131; 4,326,259; 4,331,974; 4,338,626; 4,390,904; 4,395,780; 4,420,769; 4,442,544; 4,449,240; 4,450,531; 4,468,704; 4,491,962; 4,499,601; 4,501,016; 4,511,918; 4,543,660; 4,546,382; 4,547,811; 4,547,899; 4,581,762; 4,593,367; 4,602,279; 4,630,308; 4,646,250; 4,656,665; 4,658,429; 4,658,370; 4,660,166; 4,677,466; 4,697,209; 4,672,683; 4,677,680; 4,682,365; 4,685,145; 4,695,975; 4,710,822; 4,710,964; 4,716,404; 4,719,591; 4,731,863; 4,734,786; 4,736,439; 4,739,398; 4,742,557; 4,747,148; 4,752,890; 4,653,109; 4,760,604; 4,764,971; 4,764,973; 4,771,467; 4,773,024; 4,773,099; 4,774,677; 4,775,935; 4,783,752; 4,783,754; 4,783,829; 4,789,933; 4,790,025; 4,799,270; 4,802,103; 4,803,103; 4,803,736; 4,805,224; 4,805,225; 4,805,255; 4,809,331; 4,809,341; 4,817,171; 4,817,176; 4,821,333; 4,823,194; 4,829,453; 4,831,659; 4,833,637; 4,837,842; 4,843,562; 4,843,631; 4,845,610; 4,864,629; 4,872,024; 4,876,731; 4,881,270; 4,884,217; 4,887,304; 4,888,814; 4,891,762; 4,893,346; 4,897,811; 4,905,162; 4,905,286; 4,905,296; 4,906,099; 4,906,940; 4,908,758; 4,914,708; 4,920,499; 4,926,491; 4,930,160; 4,931,926; 4,932,065; 4,933,872; 4,941,193; 4,944,023; 4,949,187; 4,956,870; 4,958,375; 4,958,375; 4,964,077; 4,965,725; 4,967,273; 4,972,499; 4,979,222; 4,987,604; 4,989,256; 4,989,258; 4,992,940; 4,995,078; 5,012,334; 5,014,219; 5,014,327; 5,018,218; 5,018,219; 5,019,899; 5,020,112; 5,020,113; 5,022,062; 5,027,400; 5,031,224; 5,033,101; 5,034,991; 5,038,379; 5,038,390; 5,040,134; 5,046,121; 5,046,122; 5,046,179; 5,047,867; 5,048,112; 5,050,223; 5,051,840; 5,052,043; 5,052,045; 5,052,046; 5,053,974; 5,054,093; 5,054,095; 5,054,101; 5,054,103; 5,055,658; 5,055,926; 5,056,147; 5,058,179; 5,058,180; 5,058,183; 5,058,186; 5,059,126; 5,060,276; 5,060,277; 5,060,279; 5,060,282; 5,060,285; 5,061,063; 5,063,524; 5,063,525; 5,063,603; 5,063,605; 5,063,608; 5,065,439; 5,065,440; 5,065,447; 5,067,160; 5,067,161; 5,067,162; 5,067,163; 5,067,164; 5,068,664; 5,068,723; 5,068,724; 5,068,744; 5,068,909; 5,068,911; 5,076,662; 5,099,422; 5,103,498; 5,109,431; 5,111,516; 5,119,507; 5,122,886; 5,130,792; 5,132,992; 5,133,021; 5,133,079; 5,134,719; 5,148,497; 5,148,522; 5,155,591; 5,159,474; 5,161,204; 5,168,529; 5,173,949; 5,177,796; 5,179,652; 5,202,828; 5,220,420; 5,220,648; 5,223,924; 5,231,494; 5,239,617; 5,247,347; 5,247,651; 5,259,038; 5,274,714; 5,283,641; 5,303,313; 5,305,197; 5,307,421; 5,315,670; 5,317,647; 5,317,677; 5,343,251; 5,351,078; 5,357,276; 5,381,158; 5,384,867; 5,388,198; 5,390,125; 5,390,281; 5,410,343; 5,410,643; 5,416,856; 5,418,951; 5,420,975; 5,421,008; 5,428,559; 5,428,727; 5,428,730; 5,428,774; 5,430,812; 5,434,933; 5,434,966; 5,436,653; 5,436,834; 5,440,400; 5,446,891; 5,446,919; 5,455,892; 5,459,517; 5,461,699; 5,465,308; 5,469,206; 5,477,447; 5,479,264; 5,481,294; 5,481,712; 5,483,278; 5,485,219; 5,485,518; 5,487,132; 5,488,425; 5,488,484; 5,495,292; 5,496,177; 5,497,314; 5,502,774; 5,504,518; 5,506,768; 5,510,838; 5,511,134; 5,511,153; 5,515,098; 5,515,099; 5,515,173; 5,515,453; 5,515,471; 5,517,598; 5,519,452; 5,521,841; 5,521,984; 5,522,155; 5,523,796; 5,524,065; 5,526,427; 5,535,302; 
5,541,638; 5,541,662; 5,541,738; 5,543,929; 5,544,254; 5,546,475; 5,548,667; 5,550,575; 5,550,928; 5,550,965; 5,552,833; 5,553,221; 5,553,277; 5,554,983; 5,555,495; 5,557,728; 5,559,548; 5,560,011; 5,561,649; 5,561,718; 5,561,796; 5,566,274; 5,572,604; 5,574,845; 5,576,950; 5,579,471; 5,581,658; 5,586,218; 5,588,074; 5,592,560; 5,574,845; 5,579,471; 5,581,665; 5,581,800; 5,583,560; 5,586,025; 5,594,661; 5,594,911; 5,596,705; 5,600,733; 5,600,775; 5,604,542; 5,604,820; 5,604,823; 5,606,655; 5,611,020; 5,613,032; 5,614,940; 5,617,483; 5,617,565; 5,621,454; 5,621,484; 5,621,579; 5,621,903; 5,625,715; 5,625,783; 5,627,915; 5,634,849; 5,635,986; 5,642,434; 5,644,686; 5,644,735; 5,654,771; 5,655,117; 5,657,397; 5,659,653; 5,659,368; 5,659,732; 5,664,046; 5,668,897; 5,671,343; 5,671,411; 5,682,437; 5,696,964; 5,701,369; 5,710,601; 5,710,833; 5,710,834; 5,715,400; 5,717,814; 5,724,424; 5,724,472; 5,729,741; 5,734,893; 5,737,444; 5,740,274; 5,745,126; 5,745,640; 5,745,710; 5,751,286; 5,751,831; 5,754,938; 5,758,257; 5,761,655; 5,764,809; 5,767,893; 5,767,922; 5,768,421; 5,768,426; 5,768,437; 5,778,181; 5,797,001; 5,798,785; 5,799,109; 5,801,750; 5,801,753; 5,805,763; 5,809,471; 5,819,288; 5,828,809; 5,835,087; 5,850,352; 5,852,823; 5,857,181; 5,862,260; H 331; and Re. 33,316. The aforementioned patents, some of which are mentioned elsewhere in this disclosure, and which form a part of this disclosure, may be applied in known manner by those skilled in the art in order to practice various embodiments of the present invention.


The following scientific articles, some of which are discussed elsewhere herein, are understood by those skilled in the art and relate to the pattern recognition and image compression functions of the apparatus and interface of the present invention:

  • “Fractal Geometry-Understanding Chaos”, Georgia Tech Alumni Magazine, p. 16 (Spring 1986).
  • “Fractal Modelling of Biological Structures”, School of Mathematics, Georgia Institute of Technology (date unknown).
  • “Fractal Modelling of Real World Images”, Lecture Notes for Fractals: Introduction, Basics and Perspectives, Siggraph (1987).
  • “Fractals Yield High Compression”, Electronic Engineering Times, Sep. 30, 1991, p. 39.
  • “Fractals—A Geometry of Nature”, Georgia Institute of Technology Res. Horizons, p. 9 (Spring 1986).
  • “Voice Recognition and Speech Processing”, Elektor Electronics, September 1985, pp. 56-57.
  • Aleksander, I., “Guide to Pattern Recognition Using Random-Access Memories”, Computers and Digital Techniques, 2(1):29-40 (February 1979).
  • Anderson, F., W. Christiansen, B. Kortegaard, “Real Time, Video Image Centroid Tracker”, Apr. 16-20, 1990.
  • Anson, L., M. Barnsley, “Graphics Compression Technology”, SunWorld, pp. 43-52 (October 1991).
  • Appriou, A., “Interet des theories de l'incertain en fusion de donnees”, Colloque International sur le Radar Paris, 24-28 avril 1989.
  • Appriou, A., “Procedure d'aide a la decision multi-informateurs. Applications a la classification multi-capteurs de cibles”, Symposium de l'Avionics Panel (AGARD) Turquie, 25-29 avril 1988.
  • Arrow, K. J., “Social choice and individual values”, John Wiley and Sons Inc. (1963).
  • Barnsley et al., “A Better Way to Compress Images”, Byte Magazine, January 1988.
  • Barnsley et al., “Harnessing Chaos For Image Synthesis”, Computer Graphics, 22(4) (August 1988).
  • Barnsley et al., “Hidden Variable Fractal Interpolation Functions”, School of Mathematics, Georgia Institute of Technology, Atlanta, Ga. 30332, July, 1986.
  • Batchelor, B. G., “Pattern Recognition, Ideas in Practice”, Plenum Press, London and New York, (1978).
  • Batchelor, B. G., “Practical Approach to Pattern Classification”, Plenum Press, London and New York, (1974).
  • Bellman, R. E., L. A. Zadeh, “Decision making in a fuzzy environment”, Management Science, 17(4) (December 1970).
  • Bhatnagar, R. K., L. N. Kamal, “Handling uncertain information: a review of numeric and non-numeric methods”, Uncertainty in Artificial Intelligence, L. N. Kamal and J. F. Lemmer, Eds. (1986).
  • Blair, D., R. Pollack, “La logique du choix collectif”, Pour la Science (1983).
  • Burr, D. J., “A Neural Network Digit Recognizer”, Proceedings of the 1986 IEEE International Conference of Systems, Man and Cybernetics, Atlanta, Ga., pp. 1621-1625.
  • Caffery, B., “Fractal Compression Breakthrough for Multimedia Applications”, Inside, Oct. 9, 1991.
  • Carpenter, G. A., S. Grossberg, “The Art of Adaptive Pattern Recognition by a Self-Organizing Neural Network”, IEEE Computer, March 1988, pp. 77-88.
  • Casasent, D., et al., “General I and Q Data Processing on a Multichannel AO System”, Applied Optics, 25(18):3217-24 (Sep. 15, 1986).
  • Caudill, M., “Neural Networks Primer-Part III”, AI Expert, June 1988, pp. 53-59.
  • Chao, J. J., E. Drakopoulos, C. C. Lee, “An evidential reasoning approach to distributed multiple hypothesis detection”, Proceedings of the 20th Conference on decision and control, Los Angeles, Calif., December 1987.
  • Chao, T.-H.; Hegblom, E.; Lau, B.; Stoner, W. W.; Miceli, W. J., “Optoelectronically implemented neural network with a wavelet preprocessor”, Proceedings of the SPIE—The International Society for Optical Engineering, 2026:472-82(1993).
  • Chen et al., “Adaptive Coding of Monochrome and Color Images”, November 1977, pp. 1285-1292.
  • Cheong, C. K.; Aizawa, K.; Saito, T.; Hatori, M., “Adaptive edge detection with fractal dimension”, Transactions of the Institute of Electronics, Information and Communication Engineers D-II, J76D-II(11):2459-63 (1993).
  • Computer Vision, Graphics, and Image Processing, 1987, 37:54-115.
  • Computers and Biomedical Research 5, 388-410 (1972).
  • Cooper, L. N., “A Possible Organization of Animal Memory and Learning”, Nobel 24, (1973), Collective Properties of Physical Systems, pp. 252-264.
  • Crawford et al., “Adaptive Pattern Recognition Applied To An Expert System For Fault Diagnosis In Telecommunications Equipment”, pp. 10/1-8 (Inspec. Abstract No. 86C010699, Inspec IEE (London) & IEE Coll. on “Adaptive Filters”, Digest No. 76, Oct. 10, 1985).
  • Danielsson, Erik, et al., “Computer Architectures for Pictorial Inf. Systems”, IEEE Computer, November, 1981, pp. 53-67.
  • Dempster, A. P., “A generalization of Bayesian inference”, Journal of the Royal Statistical Society, Vol. 30, Series B (1968).
  • Dempster, A. P., “Upper and lower probabilities induced by a multivalued mapping”, Annals of mathematical Statistics, no. 38 (1967).
  • Denker, 1984 International Test Conf., October 1984, Philadelphia, Pa., pp. 558-563.
  • Dubois, D., “Modeles mathematiques de l'imprecis et de l'incertain en vue d'applications aux techniques d′aide a la decision”, Doctoral Thesis, University of Grenoble (1983).
  • Dubois, D., N. Prade, “Combination of uncertainty with belief functions: a reexamination”, Proceedings 9th International Joint Conference on Artificial Intelligence, Los Angeles (1985).
  • Dubois, D., N. Prade, “Fuzzy sets and systems-Theory and applications”, Academic Press, New York (1980).
  • Dubois, D., N. Prade, “Theorie des possibilites: application a la representation des connaissances en informatique”, Masson, Paris (1985).
  • Duda, R. O., P. E. Hart, M. J. Nilsson, “Subjective Bayesian methods for rule-based inference systems”, Technical Note 124, Artificial Intelligence Center, SRI International.
  • Dunning, B. B., “Self-Learning Data-Base For Automated Fault Localization”, IEEE, 1979, pp. 155-157.
  • Farrelle, Paul M. and Jain, Anil K., “Recursive Block Coding—A New Approach to Transform Coding”, IEEE Transactions on Communications, Com. 34(2) (February 1986).
  • Fitzpatrick, J. M., J. J. Grefenstette, D. Van Gucht, “Image Registration by Genetic Search”, Conf. Proc., IEEE Southeastcon 1984, pp. 460-464.
  • Fua, P. V., “Using probability density functions in the framework of evidential reasoning Uncertainty in knowledge based systems”, B. Bouchon, R. R. Yager, Eds. Springer Verlag (1987).
  • Gogoussis et al., Proc. SPIE Intl. Soc. Opt. Eng., November 1984, Cambridge, Mass., pp. 121-127.
  • Grossberg, S., G. Carpenter, “A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine”, Computer Vision, Graphics, and Image Processing, 1987, 37, 54-115, 252-315.
  • Gullichsen, E., E. Chang, “Pattern Classification by Neural Network: An Experiment System for Icon Recognition”, ICNN Proceeding on Neural Networks, March 1987, pp. IV-725-32.
  • Haruki, K. et al., “Pattern Recognition of Handwritten Phonetic Japanese Alphabet Characters”, International Joint Conference on Neural Networks, Washington, D.C., January 1990, pp. II-515 to II-518.
  • Hayashi, Y., et al., “Alphanumeric Character Recognition Using a Connectionist Model with the Pocket Algorithm”, Proceedings of the International Joint Conference on Neural Networks, Washington, D.C. Jun. 18-22, 1989, vol. 2, pp. 606-613.
  • Hayes, H. I.; Solka, J. L.; Priebe, C. E.; “Parallel computation of fractal dimension”, Proceedings of the SPIE—The International Society for Optical Engineering, 1962:219-30 (1993).
  • Hinton et al., “Boltzmann Machines: Constraint Satisfaction Networks that Learn”, Tech. Report CMU-CS-85-119, Carnegie-Mellon Univ, 5/84.
  • Hoare, F.; de Jager, G., “Neural networks for extracting features of objects in images as a pre-processing stage to pattern classification”, Proceedings of the 1992 South African Symposium on Communications and Signal Processing. COMSIG '92 (Cat. No. 92TH0482-0). Inggs, M. (Ed.), p. 239-42 (1992).
  • Hopfield et al., “Computing with Neural Circuits: A Model”, Science, 233:625-633 (8 Aug. 1986).
  • Hopfield, “Neural Networks and Physical Systems with Emergent Collective Computational Abilities”, Proc. Natl. Acad. Sci. USA, 79:2554-2558 (April 1982).
  • Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons”, Proc. Natl. Acad. Sci. USA, 81:3088-3092 (May 1984).
  • Hurtgen, B.; Buttgen, P., “Fractal approach to low rate video coding”, Proceedings of the SPIE—The International Society for Optical Engineering, 2094(pt. 1): 120-31(1993).
  • Information Processing 71, North-Holland Publishing Company (1972) pp. 1530-1533.
  • Ishizuka, M., “Inference methods based on extended Dempster and Shafer's theory for problems with uncertainty/fuzziness”, New Generation Computing, Ohmsha, Ltd, and Springer Verlag, 1:159-168 (1983).
  • Jackel, L. D., H. P. Graf, J. S. Denker, D. Henderson and I. Guyon, “An Application of Neural Net Chips: Handwritten Digit Recognition”, ICNN Proceeding, 1988, pp. II-107-15.
  • Jean, J. S. N., et al., “Input Representation and Output Voting Considerations for Handwritten Numeral Recognition with Backpropagation”, International Joint Conference on Neural Networks, Washington, D.C., January 1990, pp. I-408 to I-411.
  • Jeffrey, R. J., “The logic of decision”, The University of Chicago Press, Ltd., London (1983)(2nd Ed.).
  • Kaufmann, A., “Introduction a la theorie des sous-ensembles flous”, Vol. 1, 2 et 3, Masson, Paris (1975).
  • Keeney, R. L., B. Raiffa, “Decisions with multiple objectives: Preferences and value tradeoffs”, John Wiley and Sons, New York (1976).
  • Kellman, P., “Time Integrating Optical Signal Processing”, Ph. D. Dissertation, Stanford University, 1979, pp. 51-55.
  • Kim, D. H.; Caulfield, H. J.; Jannson, T.; Kostrzewski, A.; Savant, G, “Optical fractal image processor for noise-embedded targets detection”, Proceedings of the SPIE—The International Society for Optical Engineering, Vol: 2026 p. 144-9 (1993) (SPIE Conf: Photonics for Processors, Neural Networks, and Memories 12-15 Jul. 1993, San Diego, Calif., USA).
  • Kohonen, “Self-Organization & Memory”, Second Ed., 1988, Springer-Verlag, pp. 199-209.
  • Kortegaard, B. L., “PAC-MAN, a Precision Alignment Control System for Multiple Laser Beams Self-Adaptive Through the Use of Noise”, Los Alamos National Laboratory, date unknown.
  • Kortegaard, B. L., “Superfine Laser Position Control Using Statistically Enhanced Resolution in Real Time”, Los Alamos National Laboratory, SPIE-Los Angeles Technical Symposium, Jan. 23-25, 1985.
  • Ksienski et al., “Low Frequency Approach to Target Identification”, Proc. of the IEEE, 63(12): 1651-1660 (December 1975).
  • Kyburg, H. E., “Bayesian and non Bayesian evidential updating”, Artificial Intelligence 31:271-293 (1987).
  • LeCun, Y. et al., “Handwritten Digit Recognition: Applications of Neural.”, IEEE Comm. Magazine, November 1989, pp. 41-46.
  • LeCun, Y., “Connectionism in Perspective”, in R. Pfeifer, Z. Schreter, F. Fogelman, L. Steels (Eds.), 1989, “Generalization and Network Design Strategies”, pp. 143-155.
  • Liepins, G. E., M. R. Hilliard, “Genetic Algorithms: Foundations & Applications”, Annals of Operations Research, 21:31-58 (1989).
  • Lin, H. K., et al., “Real-Time Screen-Aided Multiple-Image Optical Holographic Matched-Filter Correlator”, Applied Optics, 21(18):3278-3286 (Sep. 15, 1982).
  • Lippmann, R. P., “An Introduction to Computing with Neural Nets”, IEEE ASSP Magazine, 4(2):4-22 (April 1987).
  • Liu, Y., “Extensions of fractal theory”, Proceedings of the SPIE—The International Society for Optical Engineering, 1966:255-68 (1993).
  • Liu, Y., “Pattern recognition using Hilbert space”, Proceedings of the SPIE—The International Society for Optical Engineering, 1825:63-77 (1992).
  • Mahalanobis, A., et al., “Minimum Average Correlation Energy Filters”, Applied Optics, 26(17):3633-40 (Sep. 1, 1987).
  • Martin, G. L. et al., “Recognizing Hand-Printed Letters and Digits Using Backpropagation Learning”, Technical Report of the MCC, Human Interface Laboratory, Austin, Tex., January 1990, pp. 1-9.
  • McAulay, A. D., J. C. Oh, “Image Learning Classifier System Using Genetic Algorithms”, IEEE Proc. of the National Aerospace & Electronics Conference, 2:705-710 (1989).
  • Miller, R. K., Neural Networks ((c) 1989: Fairmont Press, Lilburn, Ga.), pp. 2-12 and Chapter 4, “Implementation of Neural Networks”, pp. 4-1 to 4-26.
  • Molley, P., “Implementing the Difference-Squared Error Algorithm Using An Acousto-Optic Processor”, SPIE, 1098:232-239 (1989).
  • Molley, P., et al., “A High Dynamic Range Acousto-Optic Image Correlator for Real-Time Pattern Recognition”, SPIE, 938:55-65 (1988).
  • Mori, “Towards the construction of a large-scale neural network”, Electronics Information Communications Association Bulletin PRU 88-59, pp. 87-94.
  • Naik et al., “High Performance Speaker Verification.”, ICASSP 86, Tokyo, CH2243-4/86/0000-0881, IEEE 1986, pp. 881-884.
  • Ney, H., et al., “A Data Driven Organization of the Dynamic Programming Beam Search for Continuous Speech Recognition”, Proc. ICASSP 87, pp. 833-836, 1987.
  • Nilsson, N. J., The Mathematical Foundations of Learning Machines ((c) 1990: Morgan Kaufmann Publishers, San Mateo, Calif.) and particularly section 2.6 “The Threshold Logic Unit (TLU)”, pp. 21-23 and Chapter 6, “Layered Machines” pp. 95-114.
  • Ohsuga et al., “Entrainment of Two Coupled van der Pol Oscillators by an External Oscillation”, Biological Cybernetics, 51:225-239 (1985).
  • Omata et al., “Holonic Model of Motion Perception”, IEICE Technical Reports, Mar. 26, 1988, pp. 339-346.
  • O'Neal et al., “Coding Isotropic Images”, November 1977, pp. 697-707.
  • Pawlicki, T. F., D. S. Lee, J. J. Hull and S. N. Srihari, “Neural Network Models and their Application to Handwritten Digit Recognition”, ICNN Proceeding, 1988, pp. II-63-70.
  • Perry et al., “Auto-Indexing Storage Device”, IBM Tech. Disc. Bulletin, 12(8):1219 (January 1970).
  • Peterson, Ivars, “Packing It In”, Science News, 131 (18):283-285 (May 2, 1987).
  • Priebe, C. E.; Solka, J. L.; Rogers, G. W., “Discriminant analysis in aerial images using fractal based features”, Proceedings of the SPIE—The International Society for Optical Engineering, 1962:196-208 (1993).
  • Proceedings, 6th International Conference on Pattern Recognition 1982, pp. 152-136.
  • Psaltis, D., “Incoherent Electro-Optic Image Correlator”, Optical Engineering, 23(1): 12-15 (January/February 1984).
  • Psaltis, D., “Two-Dimensional Optical Processing Using One-Dimensional Input Devices”, Proceedings of the IEEE, 72(7):962-974 (July 1984).
  • Rahmati, M.; Hassebrook, L. G., “Intensity- and distortion-invariant pattern recognition with complex linear morphology”, Pattern Recognition, 27 (4):549-68 (1994).
  • Reusens, E., “Sequence coding based on the fractal theory of iterated transformations systems”, Proceedings of the SPIE—The International Society for Optical Engineering, 2094(pt. 1): 132-40 (1993).
  • Rhodes, W., “Acousto-Optic Signal Processing: Convolution and Correlation”, Proc. of the IEEE, 69(1):65-79 (January 1981).
  • Rosenfeld, Azriel and Avinash C. Kak, Digital Picture Processing, Second Edition, Volume 2, Academic Press, 1982.
  • Roy, B., “Classements et choix en presence de points de vue multiples”, R.I.R.O.-2eme annee-no. 8, pp. 57-75 (1968).
  • Roy, B., “Electre III: un algorithme de classements fonde sur une representation floue des preferences en presence de criteres multiples”, Cahiers du CERO, 20(1):3-24 (1978).
  • Rumelhart, D. E., et al., “Learning Internal Representations by Error Propagation”, Parallel Distr. Proc.: Explorations in Microstructure of Cognition, 1:318-362 (1986).
  • Rumelhart, D. E., et al., Parallel Distributed Processing, ((c) 1986: MIT Press, Cambridge, Mass.), and specifically Chapter 8 thereof, “Learning Internal Representations by Error Propagation”, pp. 318-362.
  • Rutherford, H. G., F. Taub and B. Williams, “Object Identification and Measurement from Images with Access to the Database to Select Specific Subpopulations of Special Interest”, May 1986.
  • Rutter et al., “The Timed Lattice—A New Approach To Fast Converging Equalizer Design”, pp. VIII/1-5 (Inspec. Abstract No. 84C044315, Inspec IEE (London) & IEE Saraga Colloquium on Electronic Filters, May 21, 1984).
  • Sadjadi, F., “Experiments in the use of fractal in computer pattern recognition”, Proceedings of the SPIE—The International Society for Optical Engineering, 1960:214-22 (1993).
  • Sakoe, H., “A Generalization of Dynamic Programming Based Pattern Matching Algorithm Stack DP-Matching”, Transactions of the Committee on Speech Research, The Acoustic Society of Japan, p. S83-23, 1983.
  • Sakoe, H., “A Generalized Two-Level DP-Matching Algorithm for Continuous Speech Recognition”, Transactions of the IECE of Japan, E65(11):649-656 (November 1982).
  • Scharlic, A., “Decider sur plusieurs criteres. Panorama de l'aide a la decision multicritere”, Presses Polytechniques Romandes (1985).
  • Schurmann, J., “Zur Zeichen- und Worterkennung beim Automatischen Anschriftenlesen”, Wissenschaftliche Berichte, 52(1/2) (1979).
  • Scientific American, “Not Just a Pretty Face”, March 1990, pp. 77-78.
  • Shafer, G., “A mathematical theory of evidence”, Princeton University Press, Princeton, N.J. (1976).
  • Shimizu et al., “Principle of Holonic Computer and Holovision”, Journal of the Institute of Electronics, Information and Communication, 70(9):921-930 (1987).
  • Shinan et al., “The Effects of Voice Disguise.”, ICASSP 86, Tokyo, CH2243-4/86/0000-0885, IEEE 1986, pp. 885-888.
  • Silverston et al., “Spectral Feature Classification and Spatial Pattern Rec.”, SPIE 201:17-26, Optical Pattern Recognition (1979).
  • Simpson, W. R., C. S. Dowling, “WRAPLE: The Weighted Repair Assistance Program Learning Extension”, IEEE Design & Test, 2:66-73 (April 1986).
  • Specht, IEEE Internatl. Conf. Neural Networks, 1:1525-1532 (July 1988), San Diego, Calif.
  • Sprague, R. A., “A Review of Acousto-Optic Signal Correlators”, Optical Engineering, 16(5):467-74 (September/October 1977).
  • Sprinzak, J.; Werman, M., “Affine point matching”, Pattern Recognition Letters, 15(4):337-9 (1994).
  • Stanley R. Sternberg, “Biomedical Image Processing”, IEEE Computer, 1983, pp. 22-34.
  • Stewart, R. M., “Expert Systems For Mechanical Fault Diagnosis”, IEEE, 1985, pp. 295-300.
  • Sugeno, M., “Theory of fuzzy integrals and its applications”, Tokyo Institute of Technology (1974).
  • Svetkoff et al., Hybrid Circuits (GB), No. 13, May 1987, pp. 5-8.
  • Udagawa, K., et al, “A Parallel Two-Stage Decision Method for Statistical Character Recognition.”, Electronics and Communications in Japan (1965).
  • Vander Lugt, A., “Practical Considerations for the Use of Spatial Carrier-Frequency Filters”, Applied Optics, 5(11): 1760-1765 (November 1966).
  • Vander Lugt, A., “Signal Detection By Complex Spatial Filtering”, IEEE Transactions On Information Theory, IT-10, 2:139-145 (April 1964).
  • Vander Lugt, A., et al., “The Use of Film Nonlinearites in Optical Spatial Filtering”, Applied Optics, 9(1):215-222 (January 1970).
  • Vannicola et al., “Applications of Knowledge Based Systems to Surveillance”, Proceedings of the 1988 IEEE National Radar Conference, 20-21 Apr. 1988, pp. 157-164.
  • Vitols, “Hologram Memory for Storing Digital Data”, IBM Tech. Disc. Bulletin 8(11): 1581-1583 (April 1966).
  • Wald, Sequential Analysis, Dover Publications Inc., 1947, pp. 34-43.
  • Wasserman, Philip D., “Neural Computing-Theory & Practice”, 1989, pp. 128-129.
  • Willshaw et al., “Non-Holographic Associative Memory”, Nature, 222:960-962 (Jun. 7, 1969).
  • Yager, R. R., “Entropy and specificity in a mathematical theory of Evidence”, Int. J. General Systems, 9:249-260 (1983).
  • Yamada et al., “Character recognition system using a neural network”, Electronics Information Communications Association Bulletin PRU 88-58, pp. 79-86.
  • Yamane et al., “An Image Data Compression Method Using Two-Dimensional Extrapolative Prediction-Discrete Sine Transform”, Oct. 29-31, 1986, pp. 311-316.
  • Zadeh, L. A., “Fuzzy sets as a basis for a theory of possibility”, Fuzzy sets and Systems, 1:3-28 (1978).
  • Zadeh, L. A., “Fuzzy sets”, Information and Control, 8:338-353 (1965).
  • Zadeh, L. A., “Probability measures of fuzzy events”, Journal of Mathematical Analysis and Applications, 23:421-427 (1968).
  • Zhi-Yan Xie; Brady, M., “Fractal dimension image for texture segmentation”, ICARCV '92. Second International Conference on Automation, Robotics and Computer Vision, p. CV-4.3/1-5 vol. 1, (1992).
  • Zhu, X., et al., “Feature Detector and Application to Handwritten Character Recognition”, International Joint Conference on Neural Networks, Washington, D.C., January 1990, pp. II-457 to II-460.


The above-mentioned references are exemplary, and are not meant to be limiting with respect to the resources and/or technologies available to those skilled in the art. Of course, it should be realized that the hardware for implementing a system may be integrally related to the choice of the specific method or software algorithm for implementing the system, and that these together form a system. It is noted that, in view of the present disclosure, it is within the skill of the artisan to combine in various fashions the available methods and apparatus to achieve the advanced interface and control system of the present invention.


SUMMARY AND OBJECTS OF THE INVENTION

The present invention provides, according to one embodiment, an adaptive user interface which changes in response to the context, past history and status of the system. The strategy employed preferably seeks to minimize, for an individual user at any given time, the search and acquisition time for the entry of data through the interface.


The interface may therefore provide a model of the user, which is employed in a predictive algorithm. The model parameters may be static (once created) or dynamic, and may be adaptive to the user or alterations in the use pattern.


The present invention also provides a model-based pattern recognition system, for determining the presence of an object within an image. By providing models of the objects within an image, the recognition process is relatively unaffected by perspective, and the recognition may take place in a higher dimensionality space than the transmitted media. Thus, for example, a motion image may include four degrees of freedom: x, y, chroma/luma, and time. A model of an object may include further dimensions, including z, and axes of movement. Therefore, the model allows recognition of the object in its various configurations and perspectives.
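

A minimal Python sketch of such model-based recognition follows, under simplifying assumptions: a toy three-dimensional point model, orthographic projection, known point correspondences, and an exhaustive search over a single rotational degree of freedom. The point of the example is that the stored model carries more dimensions than the two-dimensional image, so that matching succeeds across changes of orientation.

    import numpy as np

    MODEL = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.]])  # toy object

    def project(points, theta):
        # Rotate the model about the y axis, then drop z (orthographic view).
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        return (points @ R.T)[:, :2]

    def best_pose(observed, n_poses=72):
        # Exhaustive search over candidate orientations; a practical system
        # would refine the estimate and search the remaining pose dimensions.
        best = (None, float("inf"))
        for theta in np.linspace(0, 2 * np.pi, n_poses, endpoint=False):
            err = float(np.abs(project(MODEL, theta) - observed).sum())
            if err < best[1]:
                best = (theta, err)
        return best

    observed = project(MODEL, np.pi / 4)             # simulate a view at 45 degrees
    theta, err = best_pose(observed)
    print(round(np.degrees(theta)), round(err, 6))   # -> 45 0.0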


A major theme of the present invention is the use of intelligent, adaptive pattern recognition in order to provide the operator with a small number of high probability choices, which may be complex, without the need for explicit definition of each atomic instruction comprising the desired action. The interface system predicts a desired action based on the user input, a past history of use, a context of use, and a set of predetermined or adaptive rules.
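

A much-simplified Python sketch of such prediction follows, with the frequency and recency of past use standing in for the adaptive rules and neural processing discussed below; the context labels, decay constant and candidate actions are assumptions made only for the example.

    import time
    from collections import defaultdict

    class ActionPredictor:
        def __init__(self, half_life=3600.0):
            self.half_life = half_life         # seconds for recency decay
            self.history = defaultdict(list)   # (context, action) -> timestamps

        def record(self, context, action):
            self.history[(context, action)].append(time.time())

        def rank(self, context, candidates, top=3):
            # Score each candidate by its past use in this context, with each
            # occurrence discounted by how long ago it happened.
            now = time.time()
            def score(action):
                stamps = self.history[(context, action)]
                return sum(0.5 ** ((now - t) / self.half_life) for t in stamps)
            return sorted(candidates, key=score, reverse=True)[:top]

    p = ActionPredictor()
    for _ in range(5):
        p.record("evening", "watch_news")
    p.record("evening", "record_movie")
    print(p.rank("evening", ["watch_news", "record_movie", "videoconference"]))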


Because the present invention emphasizes adaptive pattern recognition of both the input of the user and data which may be available, the interface system proposes the extensive use of advanced signal processing and neural networks. These processing systems may be shared between the interface system and the functional system, and therefore a controller for a complex system may make use of the intrinsic processing power available rather than requiring additional computing power, although this unification is not required. In the case where the user interface employs common hardware elements, it is further preferred that the interface subsystem employ common models of the underlying data structures on which the device functionally operates.


In fact, while hardware efficiency dictates common hardware for the interface system and the operational routine, other designs may separate the interface system from the operational system, allowing portability and efficient application of a single interface system for a number of operational systems. Thus, the present invention also proposes a portable human interface system which may be used to control a number of different devices. In this case, a web browser metaphor is preferred, as it has become a standard for electronic communications.


A portable interface may, for example, take the form of a personal digital assistant or downloaded JAVA applets, with the data originating in a web server. The data from a web server or embedded web server may include a binary file, a generic HTML/XML file, or other data type. The interface receives the data and formats it based, at least in part, on parameters specific to the client or user. Thus, the presentation of data is responsive to the user, based on user preferences, as opposed to hardware limitations or compatibility issues. In a preferred embodiment, the data is transmitted separately from the presentation definition. The presentation definition, on the other hand, provides a set of parameters that propose or constrain the data presentation. The user system also provides a set of parameters that set preferences on presentation. Further, the data itself is analyzed for appropriate presentation parameters. These three sets of considerations are all inputs into a “negotiation” for an ultimate presentation scheme. Thus, the presentation is adaptive to server parameters, user parameters, and the data itself. For example, in a typical web-context, the color, size, typestyle, and layout of text may be modified based on these considerations. Other factors that may be altered include frame size and layout, size of hotspots, requirement for single or double clicks for action, and the like.
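

By way of example only, such a “negotiation” may be sketched as follows (in Python; the parameter names, defaults and bounds are hypothetical illustrations, not part of any standard):

    DEFAULTS = {"font_size": 12, "frame_layout": "single", "hotspot_px": 16}

    def negotiate(server_params, user_params, data_params):
        # Data-derived hints fill gaps, user preferences override them, and
        # server-imposed (min, max) bounds clamp the final numeric values.
        scheme = dict(DEFAULTS)
        scheme.update(data_params)
        scheme.update(user_params)
        for key, (lo, hi) in server_params.get("bounds", {}).items():
            if key in scheme and isinstance(scheme[key], (int, float)):
                scheme[key] = min(max(scheme[key], lo), hi)
        return scheme

    print(negotiate({"bounds": {"font_size": (8, 14)}},   # server constraint
                    {"font_size": 18},                    # user preference
                    {"frame_layout": "dual"}))            # data-derived hint
    # -> {'font_size': 14, 'frame_layout': 'dual', 'hotspot_px': 16}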


The adaptive nature of the present invention derives from an understanding that people learn most efficiently through the interactive experiences of doing, thinking, and knowing. For ease of use, efficiency, and lack of user frustration, the interface of the device should be intuitive and self-explanatory, providing perceptual feedback to assist the operator in communicating with the interface, which in turn allows the operational system to receive a description of a desired operation. Another important aspect of man-machine interaction is that there is a learning curve, which dictates that devices which are especially easy to master become frustratingly elemental after continued use, while devices which have complex functionality with many options are difficult to master and may be initially rejected, or the user may stop exploring. One such system which addresses this problem is U.S. Pat. No. 5,005,084, expressly incorporated herein by reference. The present invention addresses these issues by determining the most likely instructions of the operator, and presenting these as easily available choices, by analyzing the past history data and by detecting the “sophistication” of the user in performing a function, based on all information available to it. The context of use may also be a significant factor. The interface seeks to optimize the relevant portion of the interface adaptively and immediately in order to balance and optimize the interface for both quantitative and qualitative factors. This functionality may greatly enhance the quality of interaction between man and machine, allowing a higher degree of overall system sophistication to be tolerated and a greater value added than other interface designs. See, Commaford, C., “User-Responsive Software Must Anticipate Our Needs”, PC Week, May 24, 1993.


The present interface system analyzes data from the user, which may be both the selections made by the user in context, as well as the efficiency by which the user achieves the selection. Thus, information concerning both the endpoints and time-dependent path of the process are considered and analyzed by the interface system.


The interface of the present invention may be advantageously applied to an operational system that has a plurality of functions, certain of which are unnecessary or are rarely used in various contexts, while others are used with greater frequency. In such systems, the pattern of function use is usually predictable. Therefore, the present invention provides an optimized interface system which, upon recognizing a context, dynamically reconfigures the availability or ease of availability of functions and allows various subsets to be used through “shortcuts”. The interface presentation will therefore vary over time, use and the particular user.


The advantages to be gained by using an intelligent data analysis interface for facilitating user control and operation of the system are more than merely reducing the average number of selections or time to access a given function. Rather, advantages also arise from providing a means for access and availability of functions not necessarily previously existing or known to the user, therefore improving the perceived quality and usefulness of the product. Further advantages over prior interfaces accrue due to the availability of pattern recognition functionality as a part of the interface system.


In those cases where the pattern recognition functions are applied to large amounts of data or complex data sets, in order to provide a sufficient advantage and acceptable response time, powerful computational resources, such as advanced DSPs or neural network processors, are made available to the interface system. On the other hand, where the data is simple or of limited scope, aspects of the technology may be easily implemented as added software functionality, as improvements of existing products having limited computational resources.


The application of these technologies to multimedia systems provides a new model for performing image pattern recognition on multimedia data and for the programming of applications including such data. The ability of the interface of the present invention to perform abstractions and make decisions regarding a closeness of presented data to selection criteria makes the interface suitable for use in a programmable control, i.e., determining the existence of certain conditions and taking certain actions on the occurrence of detected events. Such advanced technologies might be especially valuable for disabled users.


In a multimedia environment, a user often wishes to perform an operation on a multimedia data event. Past systems have required explicit indexing of images and events. The present technologies, however, allow an image, diagrammatic, abstract or linguistic description of the desired event to be acquired by the interface system from the user and applied to identify or predict the multimedia event(s) desired without requiring a separate manual indexing or classification effort. These technologies may also be applied to single media data.


The interface system according to the present invention is not limited to a single data source, and may analyze data from many different sources for its operation. This data may be stored data or present in a data stream. Thus, in a multimedia system, there may be a real-time data stream, a stored event database, as well as an exemplar or model database. Further, since the device is adaptive, information relating to past experience of the interface, both with respect to exposure to data streams and user interaction, is also stored. This data analysis aspect of the operation of the present interface system may be substantially processor intensive, especially where the data includes abstract or linguistic concepts or images to be analyzed. Interfaces which do not relate to the processing of such data may be implemented on simpler hardware. On the other hand, systems which handle complex data types may necessarily include sophisticated processors, adaptable for use with the interface system, thus minimizing the additional computing power necessary in order to implement the interface according to the present invention. A portion of the data analysis may also overlap the functional analysis of the data for operation.


A fractal-based image processing system exemplifies one application of the technologies. A fractal-based system includes a database of image objects, which may be preprocessed in a manner which makes them suitable for comparison to a fractal-transformed image representation of an image to be analyzed. Thus, corresponding “fractal” transforms are performed on the unidentified image or a portion thereof and on an exemplar of a database. A degree of relatedness is determined in this “fractal transform domain”, and the results used to identify objects within the image. The system then makes decisions based on the information content of the image, i.e. the objects contained therein.
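

By way of illustration only, a grossly simplified comparison in such a “fractal transform domain” may be sketched as follows (in Python with NumPy; the block sizes, the restriction to non-overlapping domain blocks, and the relatedness measure are simplifications for exposition, and do not represent an optimized or complete fractal coder):

    import numpy as np

    def fractal_signature(img, range_size=8, domain_size=16):
        # For each non-overlapping range block, find the domain block
        # (downsampled 2:1) that best matches it in a least-squares sense,
        # recording the domain index and contrast/brightness coefficients.
        h, w = img.shape
        domains = []
        for y in range(0, h - domain_size + 1, domain_size):
            for x in range(0, w - domain_size + 1, domain_size):
                d = img[y:y + domain_size, x:x + domain_size]
                d = d.reshape(range_size, 2, range_size, 2).mean(axis=(1, 3))
                domains.append(d.ravel())
        domains = np.array(domains)

        signature = []
        for y in range(0, h - range_size + 1, range_size):
            for x in range(0, w - range_size + 1, range_size):
                r = img[y:y + range_size, x:x + range_size].ravel()
                best = None
                for i, d in enumerate(domains):
                    dm, rm = d.mean(), r.mean()
                    denom = ((d - dm) ** 2).sum()
                    s = ((d - dm) * (r - rm)).sum() / denom if denom else 0.0
                    o = rm - s * dm                    # fit r ~ s * d + o
                    err = ((s * d + o - r) ** 2).sum()
                    if best is None or err < best[0]:
                        best = (err, i, s, o)
                signature.append(best[1:])
        return np.array(signature)

    def relatedness(sig_a, sig_b):
        # Crude degree of relatedness in the transform domain: fraction of
        # range blocks mapped to the same domain block, softened by the
        # difference in contrast coefficients.
        same = sig_a[:, 0] == sig_b[:, 0]
        return same.mean() - 0.1 * np.abs(sig_a[:, 1] - sig_b[:, 1]).mean()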


The fractal-based image processing system presents many advantages. First, fractal-processed images may have dramatically reduced storage size requirements as compared to traditional methods while substantially retaining information important for image recognition. The process may be parallelized, and the exemplars may be multidimensional, further facilitating the process of identifying a two-dimensional projection of an object. The efficient storage of information allows the use of inexpensive storage media, e.g., CD-ROM, or the use of an on-line database through a serial data link, while allowing acceptable throughput. See, Zenith Starsight Telecast brochure, (1994); U.S. Pat. No. 5,353,121, expressly incorporated herein by reference.


As applied to a multimedia database storage and retrieval system, the user programs, through an adaptive user interface according to the present invention, the processing of data, by defining criteria and the actions to be taken based on the determination of the criteria. The criteria, it is noted, need not be of a predefined type, and in fact this is a particular feature of the present invention. A pattern recognition subsystem is employed to determine the existence of selected criteria. To facilitate this process, a database of image objects may be stored as two counterparts: first, the data is stored in a compressed format optimized for normal use, such as human viewing on a video monitor, using, e.g., MPEG-2 or Joint Photographic Experts Group (JPEG) compression; second, it is stored in a preprocessed and highly compressed format adapted to be used with the pattern recognition system. Because the preprocessed data is highly compressed and used directly by the pattern recognition system, great efficiencies in storage and data transmission are achieved. The image preprocessing may include Fourier, DCT, wavelet, Gabor, fractal, or model-based approaches, or a combination thereof.
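

For illustration, such a two-counterpart store may be sketched as follows (in Python; the viewing codec and recognition preprocessor are assumed to be supplied externally):

    from dataclasses import dataclass

    @dataclass
    class StoredImage:
        viewing_data: bytes      # e.g., JPEG or MPEG-2 stream for human viewing
        recognition_data: bytes  # preprocessed, highly compressed feature form

    def store(pixels, view_codec, recog_preprocess):
        # Compress once for display quality and once for machine matching;
        # the pattern recognition subsystem consumes only the second form.
        return StoredImage(viewing_data=view_codec(pixels),
                           recognition_data=recog_preprocess(pixels))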


The potential significant hardware requirement for image processing and pattern recognition is counterbalanced by the enhanced functionality available by virtue of the technologies. When applied to multimedia devices, the interface system allows the operator to define complex criteria with respect to image, abstract or linguistic concepts, which would otherwise be difficult or impossible to formulate. Thus, the interface system becomes part of a computational system that would otherwise be too cumbersome for use. It is noted that, in many types of media streams, a number of “clues” are available defining the content, including closed caption text, electronic program guides, simulcast data, related Internet web sites, audio tracks, image information, and the like. The latter two data types require difficult processing in order to extract a semantic content, while the former types are inherently semantic data.


A pattern recognition subsystem allows a “description” of an “event” without explicit definition of the data representing the “event”. Thus, instead of requiring explicit programming, an operator may merely define parameters of the desired “event”. This type of system is useful, for example, where a user seeks a generic type of data representing a variety of events. This eliminates the need for preindexing or standardized characterization of the data. The interface system therefore facilitates the formulation of a request, and then searches the database for data which corresponds to the request. Such preindexing or standardized characterization is extremely limiting with image and multimedia data, because “a picture is worth a thousand words”, and without a priori knowing the ultimate search criteria, all possible criteria must be accounted for. Pattern recognition systems do not require initial translation of visual aspects into linguistic concepts, thus allowing broader searching capability. Of course, a pattern recognition system may be used in conjunction with other searching schemes, to mutual advantage.


The pattern recognition functionality of the interface system is not limited to multimedia data, and may be applied to data of almost any type, e.g., real-time sensor data, distributed control, linguistic data, etc.


It is noted that, in consumer electronics and particularly entertainment applications, the reliability of the system need not be perfect, and errors may be tolerable. On the other hand, in industrial control applications, reliability must be much higher, with fail-safe backup systems in place, as well as advanced error checking. One way to address this issue is to allow the advanced user interface to propose an action to the user, without actually implementing the action. However, in this case, the action and its proposed basis are preferably presented to the user in a sophisticated manner, to allow the basis for the action to be independently assessed by the user. Therefore, in a complex, multistep process, the user interface may be simplified by permitting a three-step process: the user triggers a proposed response, analyzes the proposal and rationale, and confirms the proposal. Single-step processes, by contrast, are inferior candidates for intelligent assistance.


Another notable aspect of the technologies is the contextual analysis. Multimedia data often includes a data component that closely corresponds to the format of a search criterion. Thus, while a search may seek a particular image, other portions of the datastream correlate well with the aspect of the image being searched, and may be analyzed by proxy, avoiding the need for full image analysis. The resulting preselected, reduced number of images may then be fully analyzed, if necessary. Thus, especially with respect to consumer electronics applications, where absolute accuracy may not be required, the processing power available for pattern recognition need not be sufficient for complete real-time signal analysis of all data. The present invention therefore proposes use of a variety of available data in order to achieve the desired level of functionality at minimum cost.
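

By way of example only, this analysis by proxy may be sketched as follows (in Python; the segment fields, the lowercase query terms, and the expensive image_matcher stage are hypothetical placeholders):

    def candidate_segments(stream, query_terms, image_matcher, threshold=0.8):
        # Stage 1: cheap semantic clues (here, closed caption text) act as a
        # proxy for the image content and preselect a reduced set of segments.
        preselected = [seg for seg in stream
                       if any(t in seg["caption_text"].lower()
                              for t in query_terms)]
        # Stage 2: the expensive full image analysis runs only on survivors.
        return [seg for seg in preselected
                if image_matcher(seg["frame"]) >= threshold]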


One aspect of the present invention therefore relates to a mechanism for facilitating a user interaction with a programmable device. The interface and method of use of the present invention serves to minimize the learning and searching times, better reflect users' expectations, provide better matching to human memory limits, be usable by both novices and experienced users, reduce intimidation of novice users by the device, reduce errors and simplify the entering of programming data. The present invention optimizes the input format scheme for programming an event-driven device, and can also be applied to many types of programmable devices. Thus, certain human factors design concepts, heretofore unexploited in the design of consumer electronics devices and industrial controls, have been incorporated, and new precepts developed. Background and theory of various aspects of the present invention are disclosed in “AN IMPROVED HUMAN FACTORED INTERFACE FOR PROGRAMMABLE DEVICES: A CASE STUDY OF THE VCR”, Master's Thesis, Tufts University (Master of Sciences in Engineering Design, November, 1990, publicly available January, 1991), by Linda I. Hoffberg. This thesis, and cited references, are incorporated herein by reference, and attached hereto as an appendix. Also referenced are: Hoffberg, Linda I., “Designing User Interface Guidelines For Time-Shift Programming of a Video Cassette Recorder (VCR)”, Proc. of the Human Factors Soc. 35th Ann. Mtg. pp. 501-504 (1991); and Hoffberg, Linda I., “Designing a Programmable Interface for a Video Cassette Recorder (VCR) to Meet a User's Needs”, Interface 91 pp. 346-351 (1991). See also, U.S. patent application Ser. No. 07/812,805, filed Dec. 23, 1991, incorporated herein by reference in its entirety, including appendices and incorporated references.


The present invention extends beyond simple predictive schemes which present exclusively a most recently executed command or most recently opened files. Thus, the possible choices are weighted in a multifactorial method, e.g., history of use, context and system status, rather than a single simple criterion alone. Known simple predictive criteria often exclude choices not previously selected, rather than weighing these choices in context with those which have been previously selected. While the system according to the present invention may include initial weightings, logical preferences or default settings, through use, the derived weightings are obtained adaptively based on an analysis of the status, history of use and context. It is noted that not all of the possible choices need be weighted, but rather merely a subset thereof.
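

By way of example only, such a multifactorial weighting may be sketched as follows (in Python; the factor scores are assumed to be normalized to the interval from zero to one, and the weights and choice names are illustrative):

    def rank_choices(choices, history, context_fit, status_fit,
                     w=(0.5, 0.3, 0.2)):
        # Each factor maps a choice to a score in [0, 1]; a choice never
        # selected before still competes through its context and status terms.
        wh, wc, ws = w
        score = lambda c: (wh * history.get(c, 0.0)
                           + wc * context_fit.get(c, 0.0)
                           + ws * status_fit.get(c, 0.0))
        return sorted(choices, key=score, reverse=True)

    # Present the top-ranked actions as the high-probability choices:
    print(rank_choices(["record", "play", "set_clock"],
                       history={"play": 0.9},
                       context_fit={"record": 0.8},
                       status_fit={"set_clock": 1.0}))
    # -> ['play', 'record', 'set_clock']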


For a given system, status, history of use and context may be interrelated factors. For example, the status of the machine is determined by the prior use, while the status also intersects context. The intended meaning of status is information relating to a path-independent state of the machine at a given point in time. History of use is intended to implicate more than the mere minimum instructions or actions necessary to achieve a given state, and therefore includes information unnecessary to achieve a given state, i.e., path-dependent information. Context is also related to status, but is differentiated in that context refers to information relating to the environment of use, e.g., the variable inputs or data upon which the apparatus acts or responds. Status, on the other hand, is a narrower concept relating more to the internal and constant functionality of the apparatus, rather than the particularities of its use during specific circumstances.


U.S. Pat. No. 5,187,797 relates to a machine interface system having hierarchical menus, with a simple (three button) input scheme. The choice(s) presented relate only to the system status, and not the particular history of use employed to obtain the system status nor the context of the choice. This system has a predetermined hierarchical menu structure, which is invariant with usage. The goal of this interface system is not to provide a learning interface, but rather to teach the user about or conform the user to the dictates of the predetermined and invariant interface of the device. While many types of programmable devices are known to exist, normally, as provided in U.S. Pat. No. 5,187,797, instructions are entered and executed in a predetermined sequence, with set branch points based on input conditions or the environment. See also U.S. Pat. Nos. 4,878,179, 5,124,908, and 5,247,433.


An aspect of the present invention provides a device having a predetermined or a generic style interface upon initial presentation to the user, with an adaptive progression in which specialized features become more easily available to a user who will likely be able to make use of them, while unused features are or remain “buried” within the interface. The interface also extracts behavioral information from the user and alters the interface elements to optimize the efficiency of the user.


A videocassette recorder is a ubiquitous example of a programmable device, and therefore forms the basis of much of the discussion herein. It should, of course, be realized that many of the aspects of the present invention could be applied by one of ordinary skill in the art to a variety of controls having human interfaces, and that these other applications are included within the scope of the present invention.


The VCR apparatus typically involves a remote control entry device, and the interface of the present invention provides a graphical interface for programming programmable devices. This aspect of the present invention seeks more accurate programming through the use of program verification to ensure that the input program is both valid and executable. Thus, it has a mechanism to store programs and to check that no stored programs conflict. An apparatus according to the present invention can be connected, for example, to any infrared programmable device in order to simplify the programming process. By way of example only, an improved VCR interface forms the basis of a disclosed example. It is, of course, realized that the present method and apparatus may be applied to any programmable controller, i.e., any device which monitors an event or sensor and causes an event when certain conditions or parameters are met, and may also be used in other programming environments, which are not event driven. While the present interface is preferably learning and adaptive, it may also detect events and make decisions based on known or predetermined characteristics. Where a number of criteria are evaluated for making a decision, conflicts among the various criteria are resolved based on a strength of an evaluated criterion, a weighting of the criteria, an interactivity function relating the various criteria, a user preference, either explicitly or implicitly determined, and a contextual analysis. Thus, a user override or preference input may be provided to assist in resolving conflicts.
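

For illustration, the conflict check underlying such program verification may be sketched as follows (in Python; the event structure and field names are hypothetical):

    from datetime import datetime

    def conflicts(new_event, stored_events):
        # Two timer programs conflict if their recording intervals overlap.
        return [e for e in stored_events
                if e["start"] < new_event["end"]
                and new_event["start"] < e["end"]]

    new = {"start": datetime(1999, 5, 1, 20, 0),
           "end": datetime(1999, 5, 1, 21, 0)}
    old = {"start": datetime(1999, 5, 1, 20, 30),
           "end": datetime(1999, 5, 1, 22, 0)}
    assert conflicts(new, [old])  # the overlap is flagged before acceptance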


The present invention may incorporate an intelligent program recognition and characterization system, making use of any of the available cues, which allows an intelligent determination of the true nature of the broadcast and therefore is able to determine whether parameters should be deemed met even with an inexact match to the specified parameters. Therefore, in contradistinction to VPV, the present invention provides intelligence in, for example, characterizing the broadcast. The VPV is much more like the “VCR Plus” device, known to those skilled in the art, which requires that a broadcast be associated with a predetermined code, the predetermined code being used as a criterion for initiating recording. Some problems with VCR Plus include identification of the codes which identify channel and time, post-scheduling changes, incorrect VCR clock setting, and irregular schedules. VCR Plus is also limited with respect to new technologies and cable boxes.


The videotext signal of the prior art includes a digitally encoded text message that may be displayed in conjunction with the displayed image, similar to the closed caption system. The aforementioned West German system demonstrates one way in which the transmitted signal may be received by a device and interpreted to provide useful information other than the transmitted program itself. However, the prior art does not disclose how this signal may be used to index and catalog the contents of a tape, nor does it disclose how this signal may be used to classify or interpret the character of the broadcast. In other words, in one embodiment of the present invention, the videotext or closed caption signal is not only interpreted as a literal label, as in the prior art, but is also further processed and analyzed to yield data about the content of the broadcast, other than merely an explicit identification of the simultaneously broadcast information.


Beyond or outside the visible region of a U.S. National Television Standards Committee (NTSC) broadcast video frame are a number of scan lines which are dedicated to presenting digital information, rather than analog picture information. Various known coding schemes are available for transmitting and receiving information in this non-viewing portion of the video transmission, and indeed standards exist defining the content of these information fields. Of course, various other transmission schemes provide a format for transmitting data. For example, standard frequency modulation (FM) transmissions may be associated with digital data transmissions in a subcarrier. Likewise, satellite transmissions may include digital data along with an audio data stream or within a video frame, which may be in analog format or digitally encoded.


Cable systems may transmit information either in the broadcast band or in a separate band. HDTV schemes also generally provide for the transmission of digital data of various sorts. Thus, known audio and video transmission systems may be used, with little or no modification, to provide enhanced functionality according to the present invention. It is therefore possible to use known and available facilities for transmitting additional information relating to the broadcast, in particular the characteristics of the video broadcast, and doing so could provide significant advantages when used in conjunction with the interface and intelligent pattern recognition controller of the present invention. If this information were directly available, there would be a significantly reduced need for advanced image recognition functions, which require costly hardware devices, while the advantages of the present invention would be maintained.


It is noted, however, that the implementation of a system in which characterization data of the broadcast is transmitted along therewith might require a new set of standards and the cooperation of broadcasters, as well as possibly the government regulatory and approval agencies. The present invention does not require, in all of its aspects, such standardization, and therefore may advantageously implement substantial data processing locally to the receiver. It is nevertheless within the scope of the invention to implement such a broadcast system with broadcast of characterization data in accordance with the present invention. Such broadcast characterization data may include characterizations as well as preprocessed data useful for characterizing according to flexible criteria in the local receiving device.


According to the present invention, if such characterizations are broadcast, they may, as stated above, be in band or out of band, e.g., making use of unused available spectrum bandwidth within the NTSC channel space, or other broadcast system channel space, or may be “simulcast” on a separate channel, such as an FM sideband or separate transmission channel. Use of a separate channel would allow a separate organization, other than the network broadcasters, to provide the characterization data for distribution to users of devices that make use of the present intelligent system for controlling a VCR or other broadcast information processing device. Thus, the characterization generating means need not be directly linked to the local user machine in order to fall within the scope of the present invention. The present invention also provides a mechanism for copyright holders or other proprietary interests to be protected, by limiting access to information by encryption or selective encryption, and providing an accounting system for determining and tracking license or broadcast fees.


Research has been performed relating to VCR usability, technology, implementation, programming steps, current technology, input devices, and human mental capacity. This research has resulted in a new paradigm for the entry of programming data into a sequential program execution device, such as a VCR, by casual users.


Four major problems in the interfaces of VCRs were found to exist. The first is that users spend far too much time searching for the information necessary to complete the programming process. Second, many people do not program the VCR to record at a later time (time-shift) frequently, and thus forget the programming steps in the interim, i.e., the inter-session decay of the learning curve is significant. Third, the number of buttons on many remote control devices has become overwhelming. Fourth, people have become reluctant to operate or program VCRs because of their difficult operation. It was found that, by minimizing the learning and searching times, the user's programming time and frustration level can be greatly reduced. If VCRs are easier to program, users might program them more frequently. This would allow more efficiency and flexibility in broadcast scheduling, especially late at night for time-shift viewing. The present invention therefore provides an enhanced VCR programming interface having a simplified information structure, an intuitive operational structure, simplified control layout and enhanced automated functionality.


A new class of consumer device has been proposed, which replaces the videotape of a traditional videotape recorder with a random-access storage device, such as a magnetic hard disk drive. Multimedia data is converted through a codec (if necessary), and stored in digital form. Such systems are proposed by Tivo, Inc., Philips Electronics (Personal TV), Replay Networks, Inc. and Metabyte, Inc. Some of these systems employ a user preference based programming/recording method similar to that of the present invention.


In these systems, typically a content-descriptive data stream, formulated by human editors, accompanies the broadcast or is available for processing and analysis. Based on a relation between this descriptive data and the user preferences, which may be implied by actual viewing habits or input through simple accept/veto user feedback, selected media events may be recorded. However, such systems rely on a correspondence between the factors of interest to users and those encoded in the data stream, e.g., a “program guide”. This is not always the case. Where the available data describing the program maps reasonably well into the user preference space, however, such a system may achieve acceptable levels of performance; stated otherwise, the program material selected by the system will be considered acceptable.


One particular aspect of these time-shifting consumer media recording devices is how they deal with advertising materials which accompany program material. In many instances, the user seeks to avoid “commercials”, and the device may be programmed to oblige. However, as such devices gain wider acceptance, advertisers will be reluctant to subsidize broadcasts. Therefore, an advertising system may be integrated into the playback device which seeks to optimize the commercial messages presented to a viewer. By optimizing the messages or advertisements, the viewer is more receptive to the message, and economic implications ensue. For example, a viewer may be compensated, directly or indirectly, for viewing the commercials, which may be closely monitored and audited, such as by taking pictures of the audience in front of a “set-top box”. The acquired data, including viewer preferences, may be transmitted back to commercial sponsors, allowing detailed demographic analysis.


In order to ensure privacy, the preference information and/or images may be analyzed by a proxy, with the raw data separated from the commercial users of such data. Thus, for example, the particular users of a system may register their biometric characteristics, e.g., face. Thereafter, the imager captures facial images and correlates these with its internal database. The image itself therefore need not be stored or transmitted. Viewer preferences and habits, on the other hand, likely must be transmitted to a central processing system for analysis.


Because the system is intelligent, copy protection and royalty accounting schemes may readily be implemented. Thus, broadcasters and content providers may encode broadcasts in such a way as to control the operation of the consumer device. For example, an IEEE-1394-type encryption key support/copy protection or DIVX scheme may be implemented. Further, certain commercial sponsors may be able to avoid deletion of their advertisement, while others may allow truncation. The acceptability of this to the consumer may depend on subsidies. In other words, a company is willing to pay for advertising. Instead of paying for placements directly to the media, a portion is paid to a service provider, based on consumer viewing. The media, on the other hand, may seek to adopt a pay-per-view policy, at least with respect to the service provider, in lieu of direct advertising revenues. The service provider will account to both advertisers and content providers for use. With sufficient viewing of commercials, the entire service charge for a system might be covered for a user. On the other hand, a viewer might prefer to avoid all commercials, and not get the benefit of a subsidy. The service provider performs the economically efficient function of delivering optimized, substituted commercials for the almost random commercials which flood the commercial broadcast networks, and thus can accrue greater profits, even after paying content providers a reasonable fee. An advertiser, by selecting a particular audience, may pay less than it would otherwise pay to a broadcaster. The content providers may also charge more for the privilege of use of their works.


As stated above, the content may be copy protected by the use of encryption and/or lockout mechanisms. Thus, by providing an alternative to an analog VCR, a full end-to-end encrypted signal may be provided, such as that proposed for the IEEE-1394 copy protection scheme. Because enhanced recording capabilities are provided to the consumer, the acceptance will be high. Because of the encryption, lack of portability and continued royalty accounting, content provider acceptance will also likely be high.


The user interface concepts according to the present invention are easily applied to other special purpose programmable devices, and also to general purpose programmable devices wherein the programming paradigm is event-driven, as well as other programming systems. It should also be noted that it is within the scope of the present invention to provide an improved interface and programming environment for all types of programmable devices, and in this regard, the present invention incorporates adaptive features which optimize the programming environment for both the level of the user and the task to be programmed.


In optimizing the interface, four elements are particularly important: the input device, the display format, the sequence of the programming operation, and the ability of the device to properly interpret the input as the desired program sequence.


The present invention proceeds from an understanding that an absence of user frustration with respect to a programmable consumer or industrial device or interface may be particularly important with respect to achieving the maximum potential functionality thereof. The interface must be designed to minimize the user's frustration level. This can be accomplished by clearly furnishing the possible choices, presenting the data in a logical sequence, and leading the user through the steps necessary to program the device.


When applied to other than audiovisual and/or multimedia applications, the pattern recognition function may be used to control the execution of a program or selectively control execution of portions of the software. For example, in a programmable temperature controller application, a sensor or sensor array could be arranged to detect a “door opening”. On the occurrence of the door opening, the system would recognize this pattern, i.e., a mass of air at a different temperature entering the environment from a single location, or a loss of climate-controlled air through a single location. In either event, the system would take appropriate action, including: halting normal climate control and imposing a delay until the door is closed; after closure, setting a time constant for maintenance of a steady state of the replaced air with the climate-controlled air; based on the actual climatic condition after assimilation, or a predicted climatic condition after assimilation, beginning a climate compensation control; and, optionally, during the door opening, controlling a pressure or flow of air to counterbalance the normal flow through the door, by using a fan or other device. The climate may differ in temperature, humidity, pollutants, or the like, and appropriate sensors may be employed.
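

By way of example only, the door-opening response may be sketched as follows (in Python; the hvac controller object, its methods, and the settling constant are all assumed for illustration):

    def on_door_state_change(hvac, door_open, outside_temp, inside_temp):
        # `hvac` is an assumed controller object; the settling constant of
        # 30 s per degree of difference is an arbitrary illustration.
        if door_open:
            hvac.pause()                     # halt normal climate control
            hvac.counterflow(outside_temp)   # optionally counterbalance airflow
        else:
            settle_s = 30 * abs(outside_temp - inside_temp)
            hvac.resume(delay=settle_s)      # compensate after assimilation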


The present invention also allows a dynamic user preference profile determination based on explicit or implicit desires, e.g., moods, which assist in processing data to make decisions which conform to the user preference at a given point in time. For example, voice patterns, skin temperature, heart pulse rate, external context, skin resistance (galvanic skin response), blood pressure, stress, as determined by EMG, EEG or other known methods, spontaneous motor activity or twitching, may be detected in order to determine or infer a user mood, which may be used as a dynamic influence on the user preference. These dynamic influences are preferably stored separately from static influences of the preferences, so that a resultant determined preference includes a dynamic influence based on a determined mood or other temporally varying factor and a static influence associated with the user.
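

For illustration, the combination of the separately stored influences may be sketched as follows (in Python; the genre names and the toy mood model are invented for the sketch):

    def effective_preference(static_pref, mood_features, mood_model):
        # The static profile and the dynamic (mood-driven) influence are kept
        # separate and combined only at decision time.
        dynamic = mood_model(mood_features)
        return {k: static_pref.get(k, 0.0) + dynamic.get(k, 0.0)
                for k in set(static_pref) | set(dynamic)}

    # Toy mood model: elevated stress boosts calm material, suppresses action.
    model = lambda f: {"calm": 0.3 * f["stress"], "action": -0.2 * f["stress"]}
    print(effective_preference({"action": 0.6, "calm": 0.1},
                               {"stress": 0.8}, model))
    # action: 0.6 - 0.16 = 0.44; calm: 0.1 + 0.24 = 0.34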


When a group of people are using the system simultaneously, the system must make a determination of a composite preference of the group. In this case, the preferences of the individuals of the group, if known, may be correlated to produce an acceptable compromise. Where individual preferences are not a priori known, individual or group “interviews” may be initially conducted to assist in determining the best composite group preference.


It is therefore an object according to the present invention to provide a radio receiver or video receiver device, having a plurality of different available program sources, determining a program preference for one or more individuals subject to a presented program, comparing the determined program preference and the plurality of different program sources, and selecting at least one program based on the comparison.


In formulating a group preference, individual dislikes may be weighted more heavily than likes, so that the resulting selection is tolerable by all and preferable to most group members. Thus, instead of a best match to a single preference profile for a single user, a group system provides a most acceptable match for the group. It is noted that this method is preferably used in groups of limited size, where individual preference profiles may be obtained, in circumstances where the group will interact with the device a number of times, and where the subject source program material is the subject of preferences. Where large groups are present, demographic profiles may be employed, rather than individual preferences. Where the device is used a small number of times by the group or members thereof, the training time may be very significant and weigh against automation of selection. Where the source material has little variety, or is not the subject of strong preferences, the predictive power of the device as to a desired selection is limited.
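

By way of example only, this asymmetric weighting of dislikes may be sketched as follows (in Python; the ratings and dislike weight are illustrative):

    def group_score(program, member_prefs, dislike_weight=2.0):
        # Ratings lie in [-1, 1]; dislikes are weighted more heavily so the
        # winner is tolerable to all rather than ideal for one member.
        total = 0.0
        for prefs in member_prefs:
            r = prefs.get(program, 0.0)
            total += r * (dislike_weight if r < 0 else 1.0)
        return total / len(member_prefs)

    members = [{"news": 0.9, "horror": 0.8}, {"news": 0.2, "horror": -0.7}]
    best = max(["news", "horror"], key=lambda p: group_score(p, members))
    assert best == "news"  # the strong dislike outweighs the strong like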


The present invention provides a system and method for making use of the available broadcast media forms for improving an efficiency of matching commercial information to the desires and interests of a recipient, improving a cost effectiveness for advertisers, improving a perceived quality of commercial information received by recipients and increasing profits and reducing required information transmittal by publishers and media distribution entities.


This improved advertising efficiency is accomplished by providing a system for collating a constant or underlying published content work with a varying, demographically or otherwise optimized commercial information content. This commercial information content therefore need not be predetermined or even known to the publisher of the underlying works, and in fact may be determined on an individual receiver basis. It is also possible to integrate the demographically optimized information within the content. For example, overlays in traditional media, and electronic substitutions or edits in new media, may allow seamless integration. The content alteration need not be only based on commercial information, and therefore the content may vary based on the user or recipient.


U.S. Pat. No. 5,469,206, expressly incorporated herein by reference, relates to a system that automatically correlates user preferences with electronic shopping information to create a customized database for the user.


Therefore, the granularity of demographic marketing may be very fine, on a receiver-by-receiver basis. Further, the accounting for advertisers will be more accurate, with a large sample and high quality information. In fact, in a further embodiment, an interactive medium may be used allowing immediate or real time communication between recipient and advertiser. This communication may involve the Internet, private networks or dial-up connections. Because the commercial messages are particularly directed to recipients, communication with each selected recipient is more valuable to an advertiser and that advertiser is willing to pay more for communication with each selected recipient. Recipients may therefore be selected to receive the highest valued appropriate commercial message(s). Thus, advertisers will tend to pay less and media producers will gain more revenues. Recipients will gain the benefit of selected and appropriate media, and further, may provide feedback for determining their preferences, which will likely correspond with their purchasing habits. Thus, the recipient will benefit by receiving optimized information.


Likewise, a recipient may place a value on receiving certain information, which forms the basis for “pay-per-view” systems. In this case, the recipient's values may also be considered in defining the programming.


This optimization is achieved by providing a device local to the recipient which selectively presents commercial information to the recipient based on characteristics individual to the recipient, which may be input by the recipient, the publisher, the advertiser, and/or learned by the system based on explicit or implicit feedback. The local device either has a local memory for advertising materials, or a telereception link for receiving commercial information for presentation, either on a real time basis or stored for later presentation. In a further embodiment, a user may control the content and/or commercial information received. In this case, the accounting system involves the user's account, and, for example, the recipient may be denied the subsidy from the commercial advertiser, and pay for the privilege of commercial free content.


It is also possible to employ the methods and systems according to the present invention to create a customized publication, which may be delivered physically to the recipient, for example as print media, facsimile transmission, e-mail, R-CD-ROM, floppy disk, or the like, without having a device local to the consumer.


It is noted that this system and method is usable for both real time media, such as television, radio and on-line telecommunication, as well as manually distributed periodicals, such as newspapers, magazines, CD-ROMs, diskettes, etc. Therefore, the system and method according to the present invention includes a set of related systems with varying details of implementation, with the underlying characteristic of optimization of variable material presentation at the recipient level rather than the publisher level.


The system and method according to the present invention preferably includes an accounting system which communicates information relating to receipt of commercial advertising information by a recipient to a central system for determination of actual receipt of information. This feedback system allows verification of receipt and reduces the possibility of fraud or demographic inaccuracies.


The accounting system, for example, may place value on the timeslot, associated content, the demographics of the user, user's associated valuation, competition for placement, past history (number of impressions made to same recipient) and exclusivity.


A preferred embodiment includes a subscription television system having a plurality of received channels. At least one of these channels is associated with codes to allow determination of content from variable segments. It is also possible to identify these variable segments without these codes, although the preferred system includes use of such codes. These codes also allow simple identification of the content for accounting purposes. Upon detection of a variable segment, a commercial advertisement is selected for presentation to the recipient. This variable segment is selected based on the characteristics of the recipient(s), the history of use of the device by the recipient(s), the context of use, the arrangements made by the commercial information provider(s) for presentation of information, and the availability of information for presentation. Other factors may include the above-mentioned accounting system factors. Typically, the local device will include a store of commercial information, downloaded or otherwise transmitted to the recipient (e.g., a CD-ROM or DVD with MPEG-2 compressed images). A telecommunication link may also be provided to control the process, provide parameters for the presentation, or provide the information itself. This telecommunication link may be provided through the public telephone network, Internet, private network (real or virtual), cable network, or a wireless network, for example. Generally, the underlying work will have a gap of fixed length, so that the commercial information must be selected to fit in this gap. Where the gap is of variable length, such as might occur in live coverage, the commercial information is interrupted or the underlying work buffered and delayed to prevent loss. Thus, the presentation to the user is constructed from pieces, typically at the time of presentation, and may include invariable content, variable content, invariable messages, variable messages, targeted content and/or messages, and hypervariable content. Hypervariable content includes, for example, transition material selected based on the stream of information present, and other presentations which may optionally include useful information individualized for the particular recipient or situation.
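

By way of example only, the selection of commercial information to fill a detected gap may be sketched as follows (in Python; the inventory fields, impression caps and valuation function are hypothetical stand-ins for the accounting factors discussed above):

    def select_commercial(inventory, gap_seconds, recipient, history):
        # Keep only commercials that fit the gap and have not exhausted
        # their impression budget for this recipient, then take the one the
        # accounting system values most for this recipient.
        fits = [ad for ad in inventory
                if ad["length"] <= gap_seconds
                and history.get(ad["id"], 0) < ad["max_impressions"]]
        return max(fits, key=lambda ad: ad["value"](recipient)) if fits else None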


According to another embodiment, a recording, such as on a videotape, is retained by a recipient which includes proprietary content. This may include a commercial broadcast, a private broadcast, or distributed media. In the case of a commercial broadcast, some or all of the commercial advertising or other time-sensitive information is old and/or stale. Therefore, in operation, this old or time sensitive information is eliminated and substituted with new and/or different information. Thus, the presentation system freshens the presentation, editing and substituting where necessary.


By such a method, content distributed even through private channels may include advertisements, and thus be subsidized by advertisers. The advertisements and other added content are generally more acceptable to the audience because they are appropriately targeted.


For example, where the broadcaster has a high degree of control over the initial broadcast, e.g., pay per view under license, or where the broadcaster may claim substantial continuing rights in the work after recording, the enforcement of a proprietary replay system may be accepted. For example, a work is broadcast as an encrypted digital data stream, with selective decryption at the recipient's receiver, under license from the broadcaster. In this case, a recording system is provided which retains the encryption characteristics, ensuring the integrity of the accounting process. During presentation of the recorded work, commercial information is appropriately presented to the recipient during existing or created gaps, or in an associated output separate from the content presentation. The recipient, as a result, receives the benefit of the original subsidy, or may receive a new subsidy.


Therefore, similar to the known DIVX system, an encrypted media may be mass distributed, which requires authorization for display. Instead, however, of requiring the recipient to pay for the initial and subsequent displays of the content, the player integrates advertising content into the output, which may vary based on the audience, time and past history, as well as other factors discussed herein. Given the interactive and variable nature of the presentation, the user or audience may even veto (“fast forward through”) a particular commercial. In this case, the user may have to account for a fee, or other advertisers may take up the slack. The veto provides information regarding the desires of the viewer, and may be used to help select future messages to be displayed or presented.


According to another embodiment, a radio transmission/reception system is provided which broadcasts content, an overlay track and variable commercial information. The invariant works are preferably prerecorded music. The overlay track is preferably a “DJ”, who provides information regarding the invariant works, commercial information or news. The commercial information in this instance therefore refers to prerecorded segments. In this instance, the goal is to allow the invariant works to be received by the recipient and presented with improved optimization of the commercial information content and other messages presented at the time of output. Further, this system allows optimization of the presentation of the invariant portions as well, i.e., the commercial information and the program content may be independently selected at the receiver, with appropriate accounting for commercial subsidy. In a mobile receiver, it is preferable to include as a factor in the selection of commercial information a location of the receiver, as might be obtained from a GPS system, cellular location system, intelligent highway system or the like. This would allow geographically appropriate selection of commercial information, and possibly overlay information as well, e.g., traffic reports.


Another embodiment according to the present invention provides a hypertext linked media or multimedia environment, such as HTML/World Wide Web, wherein information transmitted and/or displayed is adaptively selected based on the particular user or the user's receiving system. Thus, various elements may be dynamically substituted during use.


Therefore, it is an object according to the present invention to provide adaptive man-machine interfaces, especially computer graphic user interfaces, which are ergonomically improved to provide an optimized environment. Productivity of computer operators is limited by the time necessary to communicate a desired action through the user interface to the device. To reduce this limitation, most likely user actions are predicted and presented as easily available options. The technologies also extend beyond this core theme in many differing ways, depending on the particular application.


The system also provides an intelligent, adaptive pattern recognition function in order to provide the operator with a small number of high probability choices, which may be complex, without the need for explicit definition of each atomic instruction comprising the desired action. The interface system predicts a desired action based on the user input, a past history of use, and a context of use.


In yet another embodiment, a present mood of a user is determined, either explicitly or implicitly, and the device selects program material that assists in a desired mood transition. The operation of the device may additionally acquire data relating to an individual and the respective moods, desires and characteristics, altering the path provided to alter the mood based on the data relating to the individual. As stated above, in a group setting, a most acceptable path is presented rather than a most desirable path as presented for an individual.


In determining mood, a number of physiologic parameters may be detected. In a training circumstance, this set of parameters is correlated with a temporally associated preference. Thus, when a user inputs a preference into the system as feedback, mood data is also obtained. Invariant preferences may be separated, and analyzed globally, without regard for temporal variations, while varying preferences are linked with information regarding the surrounding circumstances and stored. For example, the preference data may be used to train a neural network, e.g., using backpropagation of errors or other known methods. The inputs to the neural network include available data about surrounding context, such as time, environmental brightness, and persons present; source program choices, which may be raw data, preprocessed data, and abstracted data; explicit user input; and, in this embodiment, mood parameters, which may be physiological or biometric data, voice pattern, or implicit inputs. An example of an implicit input is an observation of a man-machine interaction, such as a video game. The manner in which a person plays a video game or otherwise interacts with a machine may provide valuable data for determining a mood or preference.
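

By way of illustration only, the following sketch trains a single logistic unit by gradient descent (the degenerate one-layer case of backpropagation) as a stand-in for the neural network described; the feature layout and the synthetic acceptance target are invented for the sketch (Python with NumPy):

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative feature vector per observation: [hour-of-day/24,
    # brightness, persons-present/4, genre code, stress level];
    # target: 1.0 if the user accepted the presented program.
    X = rng.random((200, 5))
    y = (0.7 * X[:, 3] - 0.5 * X[:, 4] > 0.2).astype(float)  # synthetic

    w, b, lr = np.zeros(5), 0.0, 0.5
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted acceptance
        grad = p - y                            # cross-entropy gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()

    # Predicted acceptance probability for a new (context, mood) observation:
    print(1.0 / (1.0 + np.exp(-(X[0] @ w + b))))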


According to one embodiment of the invention, the image is preprocessed to decompose the image into object-elements, with various object-elements undergoing separate further processing. For example, certain backgrounds may be aesthetically modeled using simple fractal equations. While, in such circumstances, the results may be inaccurate in an absolute sense, they may be adequate in a performance sense. Faces, on the other hand, have common and variable elements. Therefore, a facial model may be based on parameters having distinguishing power, such as width between eyes, mouth, shape of ears, and other proportions and dimensions. Thus, along with color and other data, a facial image may be stored as a reference to a facial model with the distinguishing parameters for reconstruction. Such a data processing scheme may produce a superior reconstructed image and allow for later recognition of the face, based on the stored parameters in reference to the model. Likewise, many different elements of an image may be extracted and processed in accordance with specific models to produce differentiating parameters, wherein the data is stored as a reference to the particular model along with the particular data set derived from the image. Such a processing scheme allows efficient image storage along with ease of object recognition, i.e., distinction between objects of the same class. This preprocessing provides a highly asymmetric scheme, with a far greater processing complexity to initially process the image than to subsequently reconstruct or otherwise later employ the data.
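

For illustration, such reference-to-model storage may be sketched as follows (in Python; the model identifier and parameter fields are illustrative):

    from dataclasses import dataclass

    @dataclass
    class FaceRecord:
        model_id: str        # which parametric facial model applies
        eye_spacing: float   # normalized width between the eyes
        mouth_width: float   # normalized mouth width
        ear_shape: int       # index into a catalog of ear shapes
        color: tuple = (0, 0, 0)

    def same_face(a, b, tol=0.05):
        # Recognition reduces to comparing stored parameters, not pixels.
        return (a.model_id == b.model_id
                and abs(a.eye_spacing - b.eye_spacing) < tol
                and abs(a.mouth_width - b.mouth_width) < tol
                and a.ear_shape == b.ear_shape)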


By employing a model-based object extraction system, the available bandwidth may be efficiently used, so that objects which fall within the scope of an available model may be identified with a model identification and a series of parameters, and objects not within the scope of a model may be allocated a comparatively greater bandwidth for general image description, e.g., JPEG, MPEG-1/MPEG-2, wavelet, standard fractal image compression (FIC), or other image processing schemes. In a worst case, therefore, the bandwidth required will be only slightly greater than that required for a corresponding standard method, due only to the additional overhead to define data types, as necessary. However, by employing a model-based object decomposition processing system, recognized elements may be described using only a small amount of data and a greater proportion of data used to describe unrecognized elements. Further, the models available may be dynamically updated, so that, as between a communicating transmitter and receiver, retransmission of unrecognized elements will be eliminated as a model is constructed.
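

By way of example only, the allocation between model tokens and conventionally coded residue may be sketched as follows (in Python; the model-matching step and fallback codec are assumed placeholders):

    def encode_frame(objects, models, fallback_codec, bandwidth_budget):
        # Recognized objects emit a compact (model id, parameters) token;
        # everything else shares the remaining budget via a standard codec.
        stream, residual = [], []
        for obj in objects:
            m = models.match(obj)          # assumed model-matching step
            if m is not None:
                stream.append(("MODEL", m.model_id, m.params(obj)))
            else:
                residual.append(obj)
        stream.append(("RAW", fallback_codec(residual, bandwidth_budget)))
        return stream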


Where image processing systems may produce artifacts and errors, an error minimization function may also be provided which compares an original image with a decomposed-recomposed image and produces an error function which allows correction for these errors. This error function may be transmitted with the processed data to allow more faithful reproduction. In a pattern recognition context, the error function may provide useful data relating to the reliability of a pattern correlation, or may provide useful data outside of the model and associated parameters for pattern recognition.
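
As a sketch, assuming images are held as numeric arrays, the error function may be computed as a simple residual that is transmitted alongside the model data and that also yields a crude reliability measure for pattern correlation:

```python
# Minimal sketch: the residual between an original image and its
# decomposed-then-recomposed approximation is kept as a correction term
# and doubles as a per-image reliability measure for pattern matching.
import numpy as np

def residual(original: np.ndarray, recomposed: np.ndarray):
    err = original.astype(np.float64) - recomposed.astype(np.float64)
    reliability = 1.0 / (1.0 + np.mean(err ** 2))   # high when the model fits
    return err, reliability

# Transmit (model_data, err); the receiver adds err back for faithful output.
```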


Thus, in the case of an object-extraction model-based processing system, the resulting data stream may be appropriate for both viewing and recognition. Of course, acoustic data may be likewise processed using acoustic models with variable parameters. However, in such a system, information for pattern recognition may be filtered, such as eliminating the error function or noise data. Further, certain types of objects may be ignored, for example, under normal circumstances, clouds in the sky provide little information for pattern recognition and may be removed. In such a system, data intended for viewing or listening will likely contain all objects in the original data stream, with as much original detail as possible given data storage and bandwidth constraints.


An object-extraction model-based processing system also allows for increased noise rejection, such as over terrestrial broadcast channels. By transmitting a model, the receiving system may interpolate or extrapolate data to fill in for missing data. By extrapolate, it is meant that past data is processed to predict a subsequent condition. By interpolate, it is meant that data presentation is delayed, and missing data may therefore be predicted from both past and subsequent data transmission. Missing portions of images may also be reconstructed from existing portions. This reconstruction process is similar to that described in U.S. Pat. No. 5,247,363 for reconstructing MPEG images, except that where model data is corrupted, the corruption must be identified and the corrupt data eliminated and replaced with predicted data.
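
A minimal sketch of the two repair modes, assuming frames are held as numeric arrays, might be:

```python
# Minimal sketch of the two repair modes for a corrupted frame sequence:
# extrapolation predicts from past frames only; interpolation delays
# presentation so both past and future frames are available.
import numpy as np

def extrapolate(prev2: np.ndarray, prev1: np.ndarray) -> np.ndarray:
    """Linear prediction from the two most recent good frames."""
    return prev1 + (prev1 - prev2)

def interpolate(past: np.ndarray, future: np.ndarray) -> np.ndarray:
    """Average of the surrounding good frames (presentation is delayed)."""
    return (past + future) / 2.0
```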


It is therefore an object according to the present invention to provide a programmable control, having a status, responsive to a user input and a signal received from a signal source, comprising a controller, for receiving the user input and the signal and producing a control output; a memory for storing data relating to an activity of the user; a data processing system for adaptively predicting a most probable intended action of the user based on the stored data relating to the activity of the user and derived weighting of at least a subset of possible choices, the derivation being based on a history of use, a context of a respective choice and the status of the control; and a user feedback data presenting system comprising an output device for presentation of a variable sequence of programming options to the user, including the most probable intended action of the user, in a plurality of output messages, the output messages differing in available programming options.


The programmable control may be employed for performing an action based on user input and an information content of a signal received from a signal source, wherein the output device includes a display device, further comprising a user controlled direct manipulation-type input device, associated with the display device, having a device output, the device output being the user input; a plant capable of performing the action, being responsive to an actuator signal; and the controller, being for receiving data from the device output of the input device and the signal, and displaying user feedback data on the display device, the logical sequence of the user feedback data including at least one sequence of options sufficient to define an operable control program, and a presentation of additional programming options if the control program is not operable.


The programmable control may further comprise a user input processing system for adaptively determining a viewer preference based on the user input received by the controller; a program material processing system for characterizing the program material based on its content; a correlator for correlating the characterized content of the program material with the determined viewer preference to produce a correlation index; and a processor, selectively processing the program material based on the correlation index, the data processing system receiving an input from the processor.


The programmable control may also comprise a plurality of stored profiles, a processor for characterizing the user input to produce a characterized user input; and means for comparing the characterized user input with at least one of the plurality of stored profiles to produce a comparison index, wherein the variable sequence of programming options is determined on the basis of the comparison index. The processor for characterizing may perform an algorithm on the signal comprising a transform selected from the group consisting of an Affine transformation, a Fourier transformation, a discrete cosine transformation and a wavelet transformation.
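
For example, a characterization based on the discrete cosine transform member of this group might be sketched as follows; the coefficient count is an illustrative assumption:

```python
# Minimal sketch: characterize a signal by a transform drawn from the
# recited group (here a discrete cosine transform), keeping only the
# leading coefficients as a compact signature for profile comparison.
import numpy as np
from scipy.fft import dct

def characterize(signal: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
    coeffs = dct(signal, norm='ortho')
    return coeffs[:n_coeffs]            # low-order coefficients as signature

def comparison_index(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Normalized correlation between two characterizations."""
    a, b = sig_a / np.linalg.norm(sig_a), sig_b / np.linalg.norm(sig_b)
    return float(np.dot(a, b))
```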


It is a further object according to the present invention to provide a programmable controller for controlling a recording device for recording an analog signal sequentially on a recording medium having a plurality of uniquely identifiable storage locations, further comprising a sequential recording device for recording the analog signal, and a memory for storing, in a directory location on the recording medium which is separate from the storage location of the analog signal, information relating to the signal, processed to selectively retain characterizing information, and an identifier of a storage location on the recording medium in which the analog signal is recorded.


It is another object according to the present invention to provide a control, wherein program material is encrypted, further comprising a decryption system for decrypting the program material if it is selected to produce unencrypted program material and optionally an associated decryption event; a memory for storing data relating to the occurrence of the decryption event; and a central database for storing data relating to the occurrence of the decryption event in association with data relating to the viewer.


It is still another object according to the present invention to provide a control wherein the user input processing system monitors a pattern of user activity and predicts a viewer preference; the program material processing system comprising a processor for preprocessing the program material to produce a reduced data flow information signal substantially retaining information relating to the abstract information content of the program material and selectively eliminating data not relating to the abstract information content of the program material and for characterizing the information signal based on the abstract information content; and a comparing system for determining if the correlation index is indicative of a probable high correlation between the characterization of the information signal and the viewer preference and causing the stored program material to be processed by the processing means based on the determination. The system according to this aspect of the present invention preferably comprises an image program material storage and retrieval system.
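
A minimal sketch of such a correlation index follows, assuming both the characterized information signal and the viewer preference are reduced to numeric vectors; the threshold value is an illustrative assumption:

```python
# Minimal sketch: correlate a reduced "abstract content" vector of program
# material with a learned viewer-preference vector and act only when the
# index indicates a probable high correlation.
import numpy as np

def correlation_index(content: np.ndarray, preference: np.ndarray) -> float:
    c = content - content.mean()
    p = preference - preference.mean()
    return float(np.dot(c, p) / (np.linalg.norm(c) * np.linalg.norm(p) + 1e-12))

def should_process(content, preference, threshold=0.7) -> bool:
    """Process (e.g., store or present) the material on probable high correlation."""
    return correlation_index(content, preference) >= threshold
```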


The present invention further provides a control further comprising a memory for storing a characterization of the program material; an input for receiving a feedback signal from the viewer indicating a degree of agreement with the correlation index determination, wherein the feedback signal and the stored characterization are used by the viewer preference predicting means to predict a new viewer preference.


According to another aspect of the invention, it is an object to provide an image information retrieval apparatus, comprising a memory for storing compressed data representing a plurality of images; a data storage system for retrieving compressed data representing at least one of the plurality of images and having an output; a memory for storing characterization data representing a plurality of image types, having an output; and an image processor, receiving as inputs the outputs from the data storage system and the characterization data memory, and producing a signal corresponding to a relation between at least one of the plurality of images of the compressed data and at least one of the image types of the characterization data.


It is a still further aspect of the present invention to provide a video interface device for a user comprising a data transmission system for simultaneously transmitting data representing a plurality of programs; a selector for selecting at least one of the plurality of programs, being responsive to an input; a program database containing information relating to the plurality of programs, having an output; a graphical user interface for defining commands, comprising (a) an image display device having at least two dimensions of display, being for providing visual image feedback; and (b) a multidimensional input device having at least two dimensions of operability, adapted to correspond to the two dimensions of the display device, and having an output, so that the user may cause the input device to produce a corresponding change in an image of the display device by translating an indicator segment of the display in the at least two dimensions of display, based on the visual feedback received from the display device, the indicator segment being moved to a translated location of the display device corresponding to a user command; and a controller for controlling the graphical user interface and for producing the input of the selector, receiving as a control the output of the multidimensional input device, the controller receiving the output of the program database and presenting information relating to at least one of the plurality of programs on the display device associated with a command, the command being interpreted by the control means as the user command to produce the input of the selector to select the at least one of the plurality of programs associated with the command.


Another object of the present invention is to provide an apparatus, receiving an input from a human user having a user characteristic, comprising an input device, producing an input signal from the human user input; a display for displaying information relating to the input from the user and feedback on a current state of the apparatus, having an alterable image type; an input processor for extracting an input instruction relating to a desired change in a state of the apparatus from the input signal; a detector for detecting one or more temporal-spatial user characteristics of the input signal, independent of the input instruction, selected from the group consisting of a velocity component, an efficiency of input, an accuracy of input, an interruption of input and a high frequency component of input; a memory for storing data related to the user characteristics; and a controller for altering the image type based on the user characteristics. The controller may alter the image type based on an output of the detector and the stored data so that the display displays an image type which corresponds to the detected user characteristics. The controller may further be for controlling the causation of an action on the occurrence of an event, further comprising a control for receiving the input instruction and storing a program instruction associated with the input instruction, the control having a memory sufficient for storing program instructions to perform an action on the occurrence of an event; and a monitor for monitoring an environment of the apparatus to determine the occurrence of the event, and causing the performance of the action on the occurrence of the event.


It is another object of the present invention to provide an adaptive programmable apparatus having a plurality of states, being programmable by a programmer and operating in an environment in which a plurality of possible events occur, each of the events being associated with different data, comprising a data input for receiving data; a programmer input, producing an input signal from the programmer; a memory for storing data relating to the data input or the input signal; a feedback device for adaptively providing information relating to the input signal and a current status of the apparatus to the programmer, based on the data input or the programmer input, the stored data, and derived weighting of at least a subset of possible choices, the derived weighting being based on a history of use, a context of a respective choice and the current status of the apparatus; a memory for storing programming data associated with the input signal; and a processor, having a control output, for controlling the response of the apparatus relating to the detection of the input signal or the data in accordance with the stored programming data, the processor: (a) processing at least one of the input signal or the data to reduce an amount of information while substantially retaining an abstract portion of the information; (b) storing a quantity of the abstracted information; (c) processing the abstract portion of the information in conjunction with the stored quantity of abstracted information; and (d) providing the control output based on the processed abstract portion of the information and the stored programming data. The apparatus may further comprise an input for receiving a programming preference from the programmer indicating a plurality of possible desired events; the processor further including a correlator for correlating the programming preference with the data based on an adaptive algorithm and for determining a likelihood of occurrence of at least one of the desired events, producing the control output. The apparatus may further comprise an input for receiving feedback from the programmer indicating a concurrence with the control output of the processor, and modifying the response control based on the received feedback to increase a likelihood of concurrence. The apparatus may still further verify the programming data to ensure that the programming data comprise a complete and consistent set of instructions; and include a feedback system for interactively modifying the programming data. The apparatus may also comprise a chronological database and an accessing system for accessing the chronological database on the basis of the programming data stored in the memory.
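
Steps (a) through (d) might be sketched, purely for illustration, as follows; the averaging-based reduction, the novelty measure, and the programming-data fields are assumptions, not the prescribed abstraction method:

```python
# Minimal sketch of steps (a)-(d): reduce the input to an abstract
# signature, store it, compare it against accumulated history, and emit a
# control output. Data reduction by chunked averaging is illustrative.
import numpy as np

history: list[np.ndarray] = []          # (b) stored abstracted information

def abstract(data: np.ndarray, k: int = 8) -> np.ndarray:
    """(a) Reduce information while retaining an abstract portion."""
    chunks = np.array_split(data, k)
    return np.array([c.mean() for c in chunks])

def control_output(data: np.ndarray, program: dict) -> str:
    sig = abstract(data)
    # (c) process the abstraction in conjunction with the stored history
    novelty = min((np.linalg.norm(sig - h) for h in history), default=np.inf)
    history.append(sig)
    # (d) output based on the abstraction and the stored programming data
    return program["on_novel"] if novelty > program["threshold"] else program["on_known"]
```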


It is also an object according to the present invention to provide an apparatus comprising an input for receiving a programming preference from the programmer indicating a plurality of possible desired events; and a correlator for correlating the programming preference with the data based on an adaptive algorithm and for determining a likelihood of occurrence of at least one of the desired events, producing the output, the output being associated with the initiation of the response.


The present invention also provides as an object an apparatus comprising an input for receiving feedback from the programmer indicating a concurrence with the output of the correlator, and modifying the algorithm based on the received feedback, the feedback device comprising a display, wherein the input device is remote from the display and provides a direct manipulation of display information on the display.


According to an aspect of the present invention, a processor of the programmable apparatus verifies the program instructions to ensure that the program instructions are valid and executable by the processor; an output for providing an option, selectable by the programmer input for changing an instruction stored by the processor, such that the apparatus enters a state wherein a new instruction may be input to substitute for the instruction, wherein the processor verifies the instructions such that the instructions are valid; and wherein the feedback device further presents information requesting confirmation from the programmer of the instructions associated with the input signal. The apparatus may further comprise a chronological database and an accessing system for accessing the chronological database on the basis of the program instructions stored in the memory.


The processor of the programmable apparatus may receive information from the input signal and/or from the data input; and may further comprise an input signal memory for storing at least a portion of the input signal or the data, a profile generator for selectively generating a profile of the input signal or the data, and an input signal profile memory for storing the profile of the input signal or the data separately from the input signal or the data in the input signal memory. The programmable apparatus may further comprise a processor for comparing the input signal or the data with the stored profile of the input signal or the data to determine the occurrence of an event, and the data optionally comprises image data and the processor for comparing performs image analysis. The image data may comprise data having three associated dimensions obtained by a method selected from the group consisting of synthesizing a three dimensional representation based on a machine based model derived from two dimensional image data, synthesizing a three dimensional representation derived from a time series of pixel images, and synthesizing a three dimensional representation based on image data representing a plurality of parallax views each having at least two dimensions.


A user feedback data presenting device according to the present invention may comprise a display having a plurality of display images, the display images differing in available programming options.


According to another aspect of the present invention, a program material processing system is provided comprising means for storing template data; means for storing the image data; means for generating a plurality of domains from the stored image data, each of the domains representing different portions of the image information; means for creating, from the stored image data, a plurality of addressable mapped ranges corresponding to different subsets of the stored image data, the creating means including means for executing, for each of the mapped ranges, a procedure upon the one of the subsets of the stored image data which corresponds to the mapped range; means for assigning identifiers to corresponding ones of the mapped ranges, each of the identifiers specifying for the corresponding mapped range an address of the corresponding subset of stored image data; means for selecting, for each of the domains, the one of the mapped ranges which most closely corresponds according to predetermined criteria; means for representing at least a portion of the image information as a set of the identifiers of the selected mapped ranges; and means for selecting, from the stored templates, a template which most closely corresponds to the set of identifiers representing the image information. The means for selecting may comprise means for selecting, for each domain, the mapped range which is the most similar, by a method selected from at least one of the group consisting of selecting a minimum Hausdorff distance from the domain, selecting the highest cross-correlation with the domain and selecting the lowest mean square error of the difference between the mapped range and the domain. The means for selecting may also comprise means for selecting, for each domain, the mapped range with the minimum modified Hausdorff distance calculated as D[db, mrb]+D[1-db, 1-mrb], where D is a distance calculated between a pair of sets of data each representative of an image, db is a domain, mrb is a mapped range, 1-db is the inverse of a domain, and 1-mrb is an inverse of a mapped range. The means for representing may further comprise means for determining a feature of interest of the image data, selecting a mapped range corresponding to the feature of interest, storing the identifiers of the selected mapped range, selecting a further mapped range corresponding to a portion of image data having a predetermined relationship to the feature of interest and storing the identifiers of the further mapped range.
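
By way of illustration, the mean-square-error member of the recited selection criteria might be sketched as below, assuming domains and mapped ranges are numeric blocks of equal shape:

```python
# Minimal sketch of the domain-to-range matching step: each domain block is
# assigned the mapped range minimizing the mean square error, one of the
# recited selection criteria. Block extraction details are illustrative.
import numpy as np

def best_range(domain: np.ndarray, mapped_ranges: list[np.ndarray]) -> int:
    """Return the identifier (index) of the closest mapped range."""
    errors = [np.mean((domain - mr) ** 2) for mr in mapped_ranges]
    return int(np.argmin(errors))

def represent(domains: list[np.ndarray], mapped_ranges: list[np.ndarray]) -> list[int]:
    """The image is represented as the set of selected range identifiers."""
    return [best_range(d, mapped_ranges) for d in domains]
```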


According to an embodiment of the present invention, the image data comprises data having three associated dimensions obtained by a method selected from the group consisting of synthesizing a three dimensional representation based on a machine based prediction derived from two dimensional image data, synthesizing a three dimensional representation derived from a time series of pixel images, and synthesizing a three dimensional representation based on image data representing a plurality of parallax views having at least two dimensions.


It is therefore an object of the present invention to provide a programmable apparatus for receiving instructions from a programmer and causing an action to occur on the happening of an event, comprising an input device, producing an input instruction signal; a control means for receiving the input instruction signal, and storing a program instruction associated with the input instruction signal, the control means storing sufficient program instructions to perform an action on the occurrence of an event, the control means monitoring a status of the apparatus to determine the occurrence of various events, comparing the determined events with the program instructions, and performing the action on the occurrence of the event; a display means for interactively displaying information related to the instructions to be received, and responsive thereto, controlled by the control means, so that the programmer is presented with feedback on a current state of the apparatus and the program instruction; wherein the control means further comprises means for detecting one or more characteristics of the input instruction signal independent of the program instruction selected from the group consisting of a velocity component, an efficiency of input, an accuracy of input, an interruption of input, a high frequency component of input and a past history of input by the programmer, whereby when the control means detects a characteristic indicating that the display means is displaying information in a suboptimal fashion, the control means controls the display means to display information in a more optimal fashion.


It is also an object of the present invention to provide a programmable apparatus for receiving instructions from a programmer and causing an action to occur on the happening of an event, comprising an input device, producing an input instruction signal; a control means for receiving the input instruction signal, and storing a program instruction associated with the input instruction signal, the control means storing sufficient program instructions to perform an action on the occurrence of an event, the control means monitoring a status of the apparatus to determine the occurrence of various events, comparing the determined events with the program instructions, and performing the action on the occurrence of the event; a display means for interactively displaying information related to the instructions to be received, and responsive thereto, controlled by the control means, so that the programmer is presented with feedback on a current state of the apparatus and the program instruction; wherein the control means further comprises means for detecting a need by the programmer for more detailed information displayed on the display means, by detecting one or more characteristics of the input instruction signal independent of the program instruction selected from the group consisting of a velocity component, an efficiency of input, an accuracy of input, an interruption of input, a high frequency component of input and a past history of input by the programmer, whereby when the control means detects a characteristic indicating that the display means is displaying insufficiently detailed information, the control means controls the display means to display more detailed information.


It is a further object of the present invention to provide a programmable apparatus having a data input, the apparatus receiving instructions from a programmer and causing an action to occur on the receipt of data indicating an event, comprising an input device, producing an input instruction signal; a control means for receiving the input instruction signal, and storing a program instruction associated with the input instruction signal, the control means storing sufficient program instructions to perform an action on the receipt of data indicating an event, the control means monitoring the data input; a display means for interactively displaying information related to the instructions to be received, and responsive thereto, controlled by the control means, so that the programmer is presented with feedback on a current state of the apparatus and the program instruction; wherein the control means receives a programming preference indicating a desired event from the input device which does not unambiguously define the event, and the control means monitors the data and causes the occurrence of the action when a correlation between the programming preference and the monitored data is above a predetermined threshold, indicating a likely occurrence of the desired event. It is also an object of the present invention to provide the aforementioned programmable apparatus, wherein the input device is remote from the display means, and provides a direct manipulation of display information of the display means, further comprising means for verifying the program instructions so that the program instructions are executable by the control means. The control means may further comprise a calendar or other chronological database.


Another object of the present invention provides a programmable information storage apparatus having a data input, for receiving data to be stored, the apparatus receiving instructions from a programmer and causing an action to occur on the receipt of data indicating an event, comprising means for storing data from the data input; an input device, producing an input instruction signal; a control means for receiving the input instruction signal, and storing a program instruction associated with the input instruction signal, the control means storing sufficient program instructions to perform an action on the receipt of data from the data input indicating an event, the control means monitoring the data input to determine the occurrence of various events, comparing the determined events with the program instructions, and performing the action of storing the data on the occurrence of the event; wherein the control means receives identifying data from at least one of the input device and the data input, the identifying data being stored separately from the input data on a storage medium. The programmable information storage apparatus may also include means for reading the identifying data stored separately on the storage medium, and may also receive as an input the identifying data.


It is also an object of the present invention to provide a programmable apparatus, wherein the control means provides an option, selectable by the input means in conjunction with the display means, for changing an input program instruction prior to execution by the control means, so that the apparatus enters a state wherein a new program instruction may be input to substitute for the changed input step, wherein the control means verifies the program instructions so that the program instructions are executable by the control means.


It is still another object of the present invention to provide a programmable apparatus, wherein the control means further causes the display means to display a confirmation screen after the program instructions are input, so that the programmer may confirm the program instructions.


Another object of the present invention is to provide a programmable information storage apparatus, wherein the control means further comprises means for recognizing character data present in a data stream of the input data, the identifying data comprising the recognized character data.


It is a still further object of the present invention to provide a video tape recording apparatus, comprising a video signal receiving device, a recording device for recording the video signal, and a control, wherein the control analyzes the video signal for the presence of a symbol, and recognizes the symbol as one of a group of recognized symbols, and the control stores the recognized symbol separately from the video signal.


Another object of the present invention is to provide a recording device for recording an analog signal sequentially on a recording medium, comprising means for characterizing the analog signal, wherein data representing the characterization and a location of the analog signal on the recording medium are stored in a directory location on the recording medium separately from the analog signal.


It is a further object of the present invention to provide an interface for a programmable control for input of a program for a controller to execute, which performs an action based on an external signal, comprising an input device, a controller for receiving data from the input device and from an external stimulus, a plant being controlled by the controller based on an input from the input device and the external stimulus, and a display device being controlled by the controller, for providing visual feedback to a user operating the input device, wherein a predetermined logical sequence of programming options is presented to the user on the display device, in a plurality of display screens, each of the display screens differing in available programming choices; the logical sequence including a correct sequence of choices to set an operable control program, so that no necessary steps are omitted; the external stimulus comprises a timing device, and the display comprises a display option for programming the plant to perform an action at a time which is input through the input device as a relative position on the display device, the display including means for displaying an absolute time entry and means for displaying a relative time entry, the display also comprising a display option means for performing an action at a time; the control comprises means for presenting the user, on the display device, with a most probable action, which may be selected by the user through activation of the input device without entering data into the controller through the input device relating to both the action and the event; the display also comprising means for indicating completion of entry of a programming step, which indicates to the user that the programming step is not completed if information necessary for execution of the step is not available to the controller; and the controller being capable of controlling the display device to present information to the user relating to the use of the apparatus if necessary for use of the device by the user.


Another object of the present invention provides a system for presenting a program to a viewer, comprising a source of program material; means for determining a viewer preference, the viewer preference optionally being context sensitive; means for receiving the program material from the source; means for characterizing the program material based on its content; means for correlating the characterized content of the program material with the determined viewer preference to produce a correlation index; and means for presenting the program material to the viewer, if the correlation index indicates a probable high correlation between the characterization of the program material and the viewer preference.


Another object of the present invention is to provide a system for presenting a program to a viewer, comprising a source of program material; means for determining a viewer preference; means for receiving the program material from the source; means for storing the program material; means for preprocessing the program material to produce a reduced data flow information signal retaining information relating to a character of the program material and eliminating data not necessary to characterize the program material; means for characterizing the information signal based on its content; means for correlating the characterized content of the information signal with the determined viewer preference to produce a correlation index; and means for presenting the stored program material to the viewer, if the correlation index indicates a probable high correlation between the characterization of the information signal and the viewer preference. The system may also include a means for storing the information signal, wherein the characterizing means characterizes the stored information signal, and also a memory for storing the program material while the characterizing means produces characterized content and the correlating means produces the correlation index.


Still another object of the present invention is to provide a system, wherein the program material is encrypted, further comprising means for decrypting the program material to produce a decryption event; and means for charging an account of the viewer based on the occurrence of a decryption event. Thus, a decryption processor and an accounting database are provided for these purposes.


Another object of the present invention is to allow the means for characterizing the program material to operate without causing a decryption event. Thus, the data stream may include characterization data specifically suitable for processing by a characterizing system, or the decryption processor may be provided with multiple levels of functionality, or both. Further, the system may comprise a memory for storing the program material while the characterizing means produces characterized content and the correlating means produces the correlation index. The characterizing means may also characterize the program material stored in memory, and the program material stored in memory may be compressed.


Another object of the present invention is to provide a controller for controlling a plant, having a sensor for sensing an external event and producing a sensor signal, an actuator, responsive to an actuator signal, for influencing the external event, and a control means for receiving the sensor signal and producing an actuator signal, comprising means for inputting a program; means for storing the program; means for characterizing the sensor signal to produce a characterized signal; and means for comparing the characterized signal with a pattern stored in a memory to produce a comparison index, wherein the actuator signal is produced on the basis of the comparison index and the program, wherein the characterization comprises an Affine transformation of the sensor signal. The characterization may comprise one or more transformations selected from the group consisting of an Affine transformation, a Fourier transformation, a Gabor transformation, and a wavelet transformation.


It is another object of the present invention to provide a method for automatically recognizing digital image data consisting of image information, the method comprising the steps performed by a data processor of storing a plurality of templates; storing the image data in the data processor; generating a plurality of addressable domains from the stored image data, each of the domains representing a portion of the image information; creating, from the stored image data, a plurality of addressable mapped ranges corresponding to different subsets of the stored image data, the creating step including the substep of (a) executing, for each of the mapped ranges, a corresponding procedure upon the one of the subsets of the stored image data which corresponds to the mapped range; (b) assigning identifiers to corresponding ones of the mapped ranges, each of the identifiers specifying for the corresponding mapped range a procedure and an address of the corresponding subset of the stored image data; (c) optionally subjecting a domain to a transform selected from the group consisting of a predetermined rotation, an inversion, a predetermined scaling, and a predetermined preprocessing in the time, frequency, and/or wavelet domain; (d) selecting, for each of the domains or transformed domains, the one of the mapped ranges which most closely corresponds according to predetermined criteria; (e) representing the image information as a set of the identifiers of the selected mapped ranges; and (f) selecting, from the stored templates, a template which most closely corresponds to the set of identifiers representing the image information. The step of selecting the mapped ranges may also include the substep of selecting, for each domain, a most closely corresponding one of the mapped ranges.


It is another object of the present invention to provide a method wherein the step of selecting the most closely corresponding one of the mapped ranges includes the step of selecting, for each domain, the mapped range which is the most similar, by a method selected from one or more of the group consisting of selecting minimum Hausdorff distance from the domain, selecting the highest cross-correlation with the domain, selecting the highest fuzzy correlation with the domain and selecting the minimum mean square error with the domain.


Another object of the present invention provides a method wherein the step of selecting the most closely corresponding one of mapped ranges includes the step of selecting, for each domain, the mapped range with the minimum modified Hausdorff distance calculated as D[db, mrb]+D[1-db, 1-mrb], where D is a distance calculated between a pair of sets of data each representative of an image, db is a domain, mrb is a mapped range, 1-db is the inverse of a domain, and 1-mrb is an inverse of a mapped range.
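
A minimal sketch of this modified Hausdorff criterion, assuming db and mrb are binary image arrays with non-empty figure and background sets, follows:

```python
# Minimal sketch of the modified Hausdorff criterion D[db, mrb] +
# D[1-db, 1-mrb]: the distance is computed both between the figures and
# between their inverses (backgrounds), so a match must agree on both.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the set pixels of two binary images."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

def modified_hausdorff(db: np.ndarray, mrb: np.ndarray) -> float:
    return hausdorff(db, mrb) + hausdorff(1 - db, 1 - mrb)
```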


Another object of the present invention provides a method wherein the digital image data consists of a plurality of pixels each having one of a plurality of associated color map values, further comprising the steps of optionally transforming the color map values of the pixels of each domain by a function including at least one scaling function for each axis of the color map, each of which may be the same or different, and selected to maximize the correspondence between the domains and ranges to which they are to be matched; selecting, for each of the domains, the one of the mapped ranges having color map pixel values which most closely correspond to the color map pixel values of the domain according to predetermined criteria, wherein the step of representing the image color map information includes the substep of representing the image color map information as a set of values each including an identifier of the selected mapped range and the scaling functions; and selecting a most closely corresponding stored template, based on the identifier of the color map mapped range, the scaling functions and the set of identifiers representing the image information. The first criterion may comprise minimizing the Hausdorff distance between each domain and the selected range.


Another object of the present invention is to provide a method further comprising the steps of storing delayed image data, which represents an image of a moving object differing in time from the image data in the data processor; generating a plurality of addressable further domains from the stored delayed image data, each of the further domains representing a portion of the delayed image information, and corresponding to a domain; creating, from the stored delayed image data, a plurality of addressable mapped ranges corresponding to different subsets of the stored delayed image data; matching the further domain and the domain by subjecting a further domain to one or both of a corresponding transform selected from the group consisting of a null transform, a rotation, an inversion, a scaling, a translation and a frequency domain preprocessing, which corresponds to a transform applied to a corresponding domain, and a noncorresponding transform selected from the group consisting of a rotation, an inversion, a scaling, a translation and a frequency domain preprocessing, which does not correspond to a transform applied to a corresponding domain; computing a motion vector between one of the domain and the further domain, or the set of identifiers representing the image information and the set of identifiers representing the delayed image information, and storing the motion vector; compensating the further domain with the motion vector and computing a difference between the compensated further domain and the domain; selecting, for each of the delayed domains, the one of the mapped ranges which most closely corresponds according to predetermined criteria; representing the difference between the compensated further domain and the domain as a set of difference identifiers of a set of selected mapping ranges and an associated motion vector and representing the further domain as a set of identifiers of the selected mapping ranges; determining a complexity of the difference based on a density of representation; and when the difference has a complexity below a predetermined threshold, selecting, from the stored templates, a template which most closely corresponds to the set of identifiers of the image data and the set of identifiers of the delayed image data.
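
The motion-vector and compensated-difference steps might be sketched as follows; exhaustive search over a small window and wrap-around shifting are illustrative simplifications, not the prescribed estimation method:

```python
# Minimal sketch of the motion-compensation step: estimate an integer
# motion vector between a domain and its delayed counterpart, compensate,
# and keep only the (sparser) difference plus the vector.
import numpy as np

def motion_vector(domain: np.ndarray, delayed: np.ndarray, search: int = 4):
    """Exhaustive search for the shift minimizing the mean square error."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(delayed, dy, axis=0), dx, axis=1)
            err = np.mean((domain - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def compensated_difference(domain: np.ndarray, delayed: np.ndarray):
    dy, dx = motion_vector(domain, delayed)
    shifted = np.roll(np.roll(delayed, dy, axis=0), dx, axis=1)
    return (dy, dx), domain - shifted   # transmit the vector plus residual
```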


Another object of the present invention provides an apparatus for automatically recognizing digital image data consisting of image information, comprising means for storing template data; means for storing the image data; means for generating a plurality of addressable domains from the stored image data, each of the domains representing a different portion of the image information; means for creating, from the stored image data, a plurality of addressable mapped ranges corresponding to different subsets of the stored image data, the creating means including means for executing, for each of the mapped ranges, a procedure upon the one of the subsets of the stored image data which corresponds to the mapped range; means for assigning identifiers to corresponding ones of the mapped ranges, each of the identifiers specifying for the corresponding mapped range an address of the corresponding subset of stored image data; means for selecting, for each of the domains, the one of the mapped ranges which most closely corresponds according to predetermined criteria; means for representing the image information as a set of the identifiers of the selected mapped ranges; and means for selecting, from the stored templates, a template which most closely corresponds to the set of identifiers representing the image information.


It is also an object of the present invention to provide a method and system for processing broadcast material having a first portion and a second portion, wherein the first portion comprises a content segment and the second portion comprises a commercial segment, in order to allow alteration in the presentation of commercial segments, based on the recipient, commercial sponsor, and content provider, while providing means for accounting for the entire broadcast.


Another object of an embodiment of the present invention provides an apparatus comprising a user interface, receiving a control input and a user attribute from the user; a memory system, storing the control input and user attribute; an input for receiving content data; means for storing data describing elements of the content data; means for presenting information to the user relating to the content data, the information being for assisting the user in defining a control input, the information being based on the stored user attribute and the data describing elements of the content data; and means for processing elements of the content data in dependence on the control input, having an output. The apparatus according to this embodiment may be further defined as a terminal used by users of a television program delivery system for suggesting programs to users, wherein the user interface comprises means for gathering the user specific data to be used in selecting programs; the memory system comprises means, connected to the gathering means, for storing the user specific data; the input for receiving data describing elements of the content data comprises means for receiving the program control information containing the program description data; and the processing means comprises program selection means, operably connected to the storing means and the receiving means, for selecting one or more programs using a user's programming preferences and the program control information. In this case, the program selection means may comprise a processor, wherein the user programming preferences are generated from the user specific data; and means, operably connected to the program selection means, for suggesting the selected programs to the user. The processing means may selectively record the content data based on its output. Further, the presenting means presents information to the user in a menu format. The presenting means may comprise means for matching the user attribute to content data.


The data describing elements of an associated data stream may, for example, comprise a program guide generated remotely from the apparatus and transmitted in electronically accessible form; data defined by a human input, and/or data defined by an automated analysis of the content data.


According to another embodiment, the present invention comprises a method, comprising the steps of receiving data describing a user attribute; receiving a content data stream, and extracting from the content data stream information describing a plurality of program options; and processing the data describing a user attribute and the information describing a plurality of program options to determine a likely user preference; selectively processing a program option based on the likely user preference. The method may be embodied in a terminal for a television program delivery system for suggesting programs to users for display on a television using program control information and user specific data. In that case, the step of receiving data describing a user attribute may comprise gathering user specific data to be used in selecting programs, and storing the gathered user specific data; the step of receiving a content data stream may comprise receiving both programs and program control information for selecting programs as the information describing a plurality of program options; the selectively processing step may comprise selecting one or more programs using a user's programming preferences and the received program control information, wherein the user programming preferences are generated from the user specific data; and the method further including the step of presenting the program or information describing a program option for the selected programs to the user.


The user attribute may comprise a semantic description of a preference, or some other type of description, for example a personal profile, a mood, a genre, an image representing or relating to a scene, a demographic profile, a past history of use by the user, a preference against certain types of media, or the like. In the case of a semantic preference, the data processing step may comprise determining a semantic relationship of the user preference to the information describing a plurality of program options. The program options may, for example, be transmitted as an electronic program guide, the information being in-band with the content (being transmitted on the same channel), on a separate channel or otherwise out of band, through a separate communications network, e.g., the Internet, dial-up network, or other streaming or packet based communications system, or by physical transfer of a computer-readable storage medium, such as a CD-ROM or floppy disk. The electronic program guide may include not only semantic or human-readable information, but also other types of metadata relating to or describing the program content.


In a further embodiment of the present invention, it is an object to provide a device for identifying a program in response to user preference data and program control information concerning available programs, comprising means for gathering the user preference data; means, connected to the gathering means, for storing the gathered user preference data; means for accessing the program control information; and means, connected to the storing means and accessing means, for identifying one or more programs based on a correspondence between a user's programming preferences and the program control information. For example, the identifying means identifies a plurality of programs, a sequence of identifications transmitted to the user being based on a degree of correspondence between a user's programming preferences and the respective program control information of the identified program. The device may selectively record or display the program, or identify the program for the user, who may then define the appropriate action by the device. Therefore, a user may, instead of defining "like" preferences, define "dislike" preferences, which are then used to avoid or filter certain content. Thus, this feature may be used for censoring or parental screening, or merely to avoid unwanted content. Accordingly, the device comprises a user interface adapted to allow interaction between the user and the device for response to one or more of the identified programs. The means for gathering the user specific data also preferably comprises means for monitoring a response of the user to identified programs.


It is a further object of the invention to provide a device which serves as a set top terminal used by users of a television program delivery system for suggesting programs to users using program control information containing scheduled program description data, wherein the means for gathering the user preference data comprising means for gathering program watched data; the means, connected to the gathering means, for storing the gathered user preference data comprising means, connected to the gathering means, for storing the program watched data; the means for accessing the program control information comprising means for receiving the program control information comprising the scheduled program description data; the means, connected to the storing means and accessing means, for identifying one or more programs based on a correspondence between a user's programming preferences and the program control information, being for selecting at least one program for suggestion to the viewer, comprising: means for transforming the program watched data into preferred program indicators, wherein a program indicator comprises a program category with each program category having a weighted value; means for comparing the preferred program indicators with the scheduled program description data, wherein each scheduled program is assigned a weighted value based on at least one associated program category; means for prioritizing the scheduled programs from highest weighted value programs to lowest weighted value programs; means for indicating one or more programs meeting a predetermined weight threshold, wherein all other programs are excluded from program suggestion; and means, operably connected to the program selection means, for displaying for suggestion the selected programs to the user.
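
Purely as a sketch, the transformation of program-watched data into weighted indicators, the scoring against scheduled program descriptions, the prioritization, and the weight threshold might look as follows; the categories, titles, and threshold value are illustrative assumptions:

```python
# Minimal sketch of the suggestion pipeline: watched-program data becomes
# weighted category indicators, scheduled programs are scored against
# them, prioritized from highest to lowest weight, and cut at a threshold.
from collections import Counter

def preference_indicators(watched: list[str]) -> Counter:
    """Transform program-watched data into weighted category indicators."""
    return Counter(watched)             # weight = viewing frequency

def suggest(schedule: list[dict], watched: list[str], threshold: float = 2.0):
    prefs = preference_indicators(watched)
    scored = [(sum(prefs[c] for c in prog["categories"]), prog["title"])
              for prog in schedule]
    scored.sort(reverse=True)           # prioritize highest weighted value
    return [title for weight, title in scored if weight >= threshold]

schedule = [{"title": "News at 9", "categories": ["news"]},
            {"title": "Cup Final", "categories": ["sports", "live"]}]
print(suggest(schedule, watched=["sports", "sports", "news"]))
# -> ['Cup Final']; "News at 9" falls below the weight threshold.
```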


It is a further aspect of the invention to provide a device comprising: a data selector, for selecting a program from a data stream; an encoder, for encoding programs in a digitally compressed format; a mass storage system, for storing and retrieving encoded programs; a decoder, for decompressing the retrieved encoded programs; and an output, for outputting the decompressed programs.


Therefore, the present invention provides a system and method for making use of the available broadcast media forms for improving an efficiency of matching commercial information to the desires and interests of a recipient, improving a cost effectiveness for advertisers, improving a perceived quality of commercial information received by recipients and increasing profits and reducing required information transmittal by publishers and media distribution entities.


This improved advertising efficiency is accomplished by providing a system for collating a constant or underlying published content work with a varying, demographically or otherwise optimized commercial information content. This commercial information content therefore need not be predetermined or even known to the publisher of the underlying works, and in fact may be determined on an individual receiver basis. It is also possible to integrate the demographically optimized information within the content. For example, overlays in traditional media, and electronic substitutions or edits in new media, may allow seamless integration. The content alteration need not be only based on commercial information, and therefore the content may vary based on the user or recipient.


The technologies emphasize adaptive pattern recognition of both the user input and data, with possible use of advanced signal processing and neural networks. These systems may be shared between the interface and operational systems, and therefore a controller for a complex system may make use of the intrinsic processing power available, rather than requiring additional computing resources, although this unification is not required. In fact, while hardware efficiency dictates that near term commercial embodiments employ common hardware for the interface system and the operational system, future designs may successfully separate the interface system from the operational system, allowing portability and efficient application of a single interface system for a number of operational systems.


The adaptive nature of the technologies derives from an understanding that people learn most efficiently through the interactive experiences of doing, thinking, and knowing. Users change in both efficiency and strategy over time. To promote ease-of-use, efficiency, and lack of frustration of the user, the interface of the device is intuitive and self explanatory, providing perceptual feedback to assist the operator in communicating with the interface, which in turn allows the operational system to identify a desired operation. Another important aspect of man-machine interaction is that there is a learning curve, which dictates that devices which are especially easy to master become frustratingly elemental after continued use, while devices which have complex functionality with many options are difficult to master and may be initially rejected, or used only at the simplest levels. The present technologies address these issues by determining the most likely instructions of the operator, and presenting these as easily available choices, by analyzing past history data and by detecting the "sophistication" of the user in performing a function, based on all information available to it. The context of use is also a factor in many systems. The system seeks to optimize the interface adaptively and immediately in order to balance and optimize both quantitative and qualitative factors. This functionality may greatly enhance the quality of interaction between man and machine, allowing a higher degree of overall system sophistication to be tolerated.


The interface system analyzes data from the user, which may be both the selections made by the user in context, as well as the efficiency by which the user achieves the selection. Thus, information concerning both the endpoints and path are considered and analyzed by the human user interface system.


The interface may be advantageously applied to an operational system which has a plurality of functions, certain of which are unnecessary or are rarely used in various contexts, while others are used with greater frequency. In such systems, the application of functionality may be predictable. Therefore, the present technologies provide an optimized interface system which, upon recognizing a context, dynamically reconfigures the availability or ease of availability of functions and allows various functional subsets to be used through “shortcuts”. The interface presentation will therefore vary over time, use and the particular user.


The advantages to be gained by using an intelligent data analysis interface for facilitating user control and operation of the system are more than merely reducing the average number of selections or time to access a given function. Rather, advantages also accrue from providing a means for access and availability of functions not necessarily previously existing or known to the user, improving the capabilities and perceived quality of the product.


Further improvements over prior interfaces are also possible due to the availability of pattern recognition functionality as a part of the interface system. In those cases where the pattern recognition functions are applied to large amounts of data or complex data sets, in order to provide a sufficient advantage and acceptable response time, powerful computational resources, such as powerful RISC processors, advanced DSPs or neural network processors are made available to the interface system. On the other hand, where the data is simple or of limited scope, aspects of the technology may be easily implemented as added software-based functionality in existing products having limited computational resources.


The application of these technologies to multimedia data processing systems provides a new model for performing image pattern recognition and for the programming of applications including such data. The ability of the interface to perform abstractions and make decisions regarding a closeness of presented data to selection criteria makes the interface suitable for use in a programmable control, i.e., determining the existence of certain conditions and taking certain actions on the occurrence of detected events. Such advanced technologies might be especially valuable for disabled users.


In a multimedia environment, it may be desirable for a user to perform an operation on a multimedia data event. Past systems have required explicit indexing or identification of images and events. The present technologies, however, allow an image, diagrammatic, abstract or linguistic description of the desired event to be acquired by the interface system from the user and applied to identify or predict the multimedia event(s) desired, without requiring a separate manual indexing or classification effort. These technologies may also be applied to single media data.


The interface system analyzes data from many different sources for its operation. Data may be stored or present in a dynamic data stream. Thus, in a multimedia system, there may be a real-time video feed, a stored event database, as well as an exemplar or model database. Further, since the device is adaptive, information relating to past experience of the interface, both with respect to exposure to data streams and user interaction, is also stored.


This data analysis aspect of the interface system may be substantially processor intensive, especially where the data includes abstract or linguistic concepts or images to be analyzed. Interfaces which do not relate to the processing of such data may be implemented with simpler hardware. On the other hand, systems which handle complex data types may necessarily include sophisticated processors, adaptable for use by the interface system. A portion of the data analysis may also overlap the functional analysis of the data for the operational system.


Other objects and features of the present invention will become apparent from the following detailed description considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for the purposes of illustration and not as a definition of the limits of the invention, for which reference should be made to the appended claims.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present invention are shown in the figures in the drawings, in which:



FIG. 1 is a flow chart of the steps required to set a VCR;



FIG. 2 shows a graphical comparison of required and extra keypresses for the prior art and the interface of the present invention;



FIG. 3 graphically shows the differences in seconds between total time for the prior art for each user;



FIG. 4 graphically shows the differences in seconds between total time for the interface of the present invention for each user;



FIG. 5 graphically shows the programming steps for the comparison of the prior art and the interface of the present invention;



FIG. 6 graphically shows comparative statistics by user comparing the prior art and the interface of the present invention;



FIGS. 7 and 8 graphically show the critical steps in programming the prior art and the interface of the present invention;



FIG. 9 graphically shows the number of keypresses made by test participants comparing the prior art and the interface of the present invention;



FIG. 10 graphically shows the comparison of the actual and theoretical number of keypresses necessary for programming the prior art and the interface of the present invention;



FIG. 11 graphically compares the actual and theoretical time necessary for programming the prior art and the interface of the present invention;



FIGS. 12a and 12b graphically compare the actual and theoretical time necessary for setting the programs in the prior art and the interface of the present invention;



FIGS. 13 and 14 graphically show the percentage time for the critical steps in programming the prior art and the interface of the present invention;



FIG. 15 is a flow diagram of a predictive user interface of the present invention;



FIG. 16 is a flow diagram of the program input verification system of the present invention;



FIG. 17 is a flow diagram of a predictive user preference aware interface of the present invention;



FIG. 18 is a block diagram of a non-program information feature extraction circuit of the present invention;



FIG. 19 is a diagram of a block of information for a catalog entry of the present invention;



FIG. 20 is a block diagram of a digital information and analog signal reading/recording apparatus;



FIG. 21 is a block diagram of a user level determining system of the present invention;



FIG. 22 is a block diagram of a template-based pattern recognition system of the present invention;



FIG. 23 is a block diagram of a control system of the present invention incorporating a pattern recognition element and an interface;



FIG. 24 is a block diagram of a control system for characterizing and correlating a signal pattern with a stored user preference of the present invention;



FIG. 25 is a block diagram of a multiple video signal input apparatus, with pattern recognition, data compression, data encryption, and a user interface of the present invention;



FIG. 26 is a block diagram of a control system for matching a template with a sensor input, of the present invention;



FIGS. 27, 28 and 29 are flow diagrams of an iterated function system method for recognizing a pattern according to the present invention;



FIG. 30 is a semi-cartoon flow diagram of the object decomposition and recognition method of the present invention;



FIG. 31 is a block diagram of an adaptive interface system according to the present invention;



FIG. 32 shows a block diagram of a system in accordance with the present invention; and



FIG. 33 shows a flow chart in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The preferred embodiments of the present invention will now be described with reference to the Figures. Identical elements in the various figures are designated with the same reference numerals.


EXAMPLE 1
VCR Interface

A preferred embodiment of the interface of the present invention, described in the present example, provides automatic sequencing of steps, leading the user through the correct sequence of actions to set a program on the screen, so that no necessary steps are omitted, and no optional steps are accidentally or unintentionally omitted. These steps are shown diagrammatically in FIG. 15 of the present invention. In addition, such a system does not burden the user with the necessity of inputting superfluous information, nor overwhelm the user with the display of unnecessary data. See, Hoffberg, Linda I., “AN IMPROVED HUMAN FACTORED INTERFACE FOR PROGRAMMABLE DEVICES: A CASE STUDY OF THE VCR”, Master's Thesis, Tufts University; Hoffberg, Linda I., “Designing User Interface Guidelines For Time-Shift Programming of a Video Cassette Recorder (VCR)”, Proc. of the Human Factors Soc. 35th Ann. Mtg. pp. 501-504 (1991); and Hoffberg, Linda I., “Designing a Programmable Interface for a Video Cassette Recorder (VCR) to Meet a User's Needs”, Interface 91 pp. 346-351 (1991). See also, U.S. patent application Ser. No. 07/812,805, incorporated herein by reference in its entirety, including appendices and incorporated references.


Many design considerations were found to be important in the improved interface of the present invention:


The interface should preferably employ only a minimal number of abbreviations, and the use of complete words is especially preferred, except where a standard abbreviation is available or where an "iconic" or symbolic figure or textual cue is appropriate. Thus, standard abbreviations and symbols are acceptable, and displayed character strings may be shortened or truncated in order to reduce the amount of information that is to be displayed, where necessary or desirable. An option may be provided to the user to allow full words, which may decrease the information which may be conveyed on each screen and increase the number of screens that must be displayed, or abbreviations and symbols, which may minimize the number of displayed screens of information, thus allowing the user to make the compromise. This aspect of the system may also be linked to the adaptive user level function of the present invention, wherein abstract symbols and abbreviations are presented to advanced users, while novices are presented with full words, based on an implicit indication of user level. These abstract symbols and abbreviations may be standard elements of the system, or user designated icons. Of course, the user could explicitly indicate his preference for the display type, thus deactivating the automatic adaptive user level function.


If multiple users use the device, then the device identifies the relevant users. This may be by explicit identification by keyboard, bar code, magnetic code, smart card (which may advantageously include a user profile for use with a number of devices), an RF-ID or IR-ID transponder, voice recognition, image recognition, or fingerprint identification. It is noted that smart cards or other intelligent or data-containing identification systems may be used with different types of devices, for example video, audio, home appliances, HVAC and automobile systems.


Where a new user is identified to the system, an initial query may be made to determine an optimum initial user level. This allows further identification of the user and preference determination to occur more efficiently.


In applications in which a user must program an event on a certain date, at a certain time, a built-in calendar menu screen is preferably employed so that the user cannot set the device with a program step that relies on a non-existent date. Technology that will help eliminate the human problem of setting the wrong (yet existing) date may also be employed. Such technology might include accessing an on-line or other type of database containing media programming information, and prompting the user regarding the selected choice. In situations where it is applicable, the interface should indicate to the user the number of characters the interface is expecting, such as when entering the year.


The interface system provides an easily accessible CHANGE, CANCEL or UNDO (single or multiple level) feature, which facilitates backtracking or reprogramming the immediately previously entered information rather than forcing the user to repeat all or a substantial portion of the programming steps. A method of the type described is shown in FIG. 16 of the present invention. User input is also facilitated by the provision of frequently used settings as explicit choices, such as, referring to the VCR example, "Record today," "Record tomorrow," "Noon," and "Midnight," so that the user does not have to specify a date in these cases. This will eliminate extra keypresses, and reduce the programming time. In addition, this could eliminate user errors. Frequently used choices for program selections are also provided to the user to reduce the number of programming steps necessary and provide the user with all the frequently used selections. The especially preferred choices are "Once on . . .", "Once a week on . . .", "Monday-Friday at . . .", and "Everyday at . . .". These redundant, complex instructions reduce the number of keystrokes required for data entry, and reduce the amount of programming time required.


The presently described interface system also provides, in the event that a color screen is available, conservatively used color coding, which allows the user to effectively and quickly acknowledge the function of each aspect of the screen. When programming, the preferred colors are royal blue for “help,” red for mistakes, light blue for information previously entered, and yellow for current information being entered. Of course, other colors could be used, according to the user's or designer's preference, cultural differences, and display parameters.


When viewing, it is preferable that screen colors change to indicate status changes, such as viewed/unviewed, or to categorize the shows.


The interface includes a confirmation screen which displays to the user all of the categories and selections previously explicitly entered or otherwise inferred, and should be easily understandable. This is shown in FIG. 15 of the present invention. All of the necessary information is displayed on this screen, in addition to the change and cancel options, if possible.


The entering of information on each screen is preferably consistent throughout the various interface options and levels. All of the screens preferably have similar layouts. “Buttons” or screen locations which are keyed to a particular function, which appear on multiple screens, should appear in approximately the same location on all screens. However, in certain cases, relatively more important information on a given screen may be displayed more prominently, and possibly in a different screen location, in order to reduce the search time. Further, when other factors dictate, each screen may be independently optimized for the prescribed function. For example, a representation of an analog clock dial may be used to set time information. However, even if the format does change, a standard scheme should be maintained, such as the use of a particular color to indicate that a particular program aspect has been changed.


The interface should display data consistent with standards and conventions familiar to users. For example, when entering dates, users are most familiar with calendars. However, this type of presentation of choices does not eliminate the human problem of entering incorrect information, e.g., setting a wrong, but existing, date. The problem of ensuring the accuracy of user input may be addressed by an intelligent interface which stores data concerning programming and user preferences, and by means of some logical method, such as Boolean logic, fuzzy logic, neural network theory, or any other system which may be used to generate a prediction, determines whether an entry is likely in error, by comparing the prediction with the entry. Of course, these predictive systems would also provide an initial default entry, so that an a priori most probable action or actions may be initially presented to the user.
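

As a hedged sketch of such a predictive check, the following Python fragment derives a default entry from history and flags entries that depart sharply from the prediction; the history format (days of lead time) and the tolerance are illustrative assumptions.

    import datetime

    # Hypothetical sketch: derive a default entry from history and
    # flag entries far from the prediction. The history format and
    # the fourteen-day tolerance are illustrative assumptions.
    def default_entry(past_days_ahead):
        """A priori most probable choice: the user's modal lead time."""
        if not past_days_ahead:
            return 0                                 # "today"
        return max(set(past_days_ahead), key=past_days_ahead.count)

    def entry_suspicious(entered, today, past_days_ahead, slack=14):
        """True if the entry departs sharply from the prediction."""
        if entered < today:
            return True                              # cannot record the past
        days_ahead = (entered - today).days
        return abs(days_ahead - default_entry(past_days_ahead)) > slack

    today = datetime.date(1991, 9, 6)
    history = [0, 1, 1, 7]                           # mostly same/next day
    print(entry_suspicious(datetime.date(1992, 9, 6), today, history))  # True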


In addition to following conventions of information presentation to the user, the interface of the present invention may also provide emulations of other user interfaces of which a particular user may be familiar, even if these are not optimized according to the presently preferred embodiments of the present invention, or not otherwise well known. These emulations need not even be of the same type of device, so that a broad based standard for entry of information into programmable controls, regardless of their type, may be implemented. By allowing emulation, the interface could provide compatibility with a standard or proprietary interface, with enhanced functionality provided by the features of the present interface.


These enhanced functional intelligent aspects of the controller may be implemented by means of software programming of a simple microcomputer, or by use of more specialized processors, such as a Fuzzy Set Processor (FSP) or Neural Network Processor to provide real-time responsiveness, eliminating delays associated with the implementation of complex calculations on general purpose computing devices.


In the various embodiments according to the present invention, various control strategies are employed. Depending on the application, fuzzy set processors (FSPs) may be preferred, because they have the advantage of being easier to program, through the use of presumptions or rules for making the fuzzy inferences, which may be derived by trial and error or from the knowledge of experts. Neural networks, in contrast, are less easily explicitly programmed, and their network weighting values are not easily understood in the abstract; however, these systems may be applied to learn appropriate responses from test data. Thus, neural networks tend to require extensive "training", while fuzzy set processors may be explicitly programmed without the need to duplicate or simulate actual operating conditions, but may require "fine tuning".
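

By way of a hedged illustration of the rule-based character of fuzzy inference, the following Python sketch writes the membership functions and rule consequents down directly, in contrast to a neural network's trained weights; the shapes, constants, and the complexity-setting rule are all assumptions for illustration.

    # Hypothetical sketch of a fuzzy set processor's style: rules are
    # stated explicitly by the designer rather than learned as weights.
    def mu_short(t):    # membership of "search time is short" (seconds)
        return max(0.0, min(1.0, (10 - t) / 10))

    def mu_long(t):     # membership of "search time is long"
        return max(0.0, min(1.0, (t - 5) / 15))

    def interface_complexity(search_time):
        """Rules (illustrative): short search time -> raise complexity
        (0.9); long search time -> lower it (0.2); defuzzify by a
        weighted mean of the fired rules."""
        w_short, w_long = mu_short(search_time), mu_long(search_time)
        if w_short + w_long == 0:
            return 0.5                               # no rule fires
        return (w_short * 0.9 + w_long * 0.2) / (w_short + w_long)

    print(round(interface_complexity(3.0), 2))       # 0.9: a fast user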


The most frequently used choices preferably should be displayed as the default setting. The screen cursor preferably appears at the “accept” screen button, when the screen is displayed. This default can either be set in advance, or acquired by the system. In the case of acquired defaults, these may be explicitly set by the user or adaptively acquired by the system through use. The interface of the present invention may be taught, in a “teach” mode, the preferences of the user, or may also acquire this information by analyzing the actual choices made by the user during operation of the interface and associated controller. This type of operation is shown schematically in FIG. 15 of the present invention. The options of “Midnight” (12:00 AM) and “Noon” (12:00 PM) should preferably be present, as some people often become confused when distinguishing between them. Icons, such as those indicative of the “sun” and the “moon”, may also be used to facilitate data entry for AM and PM. The interface should preferably utilize an internal clock and calendar so that the user cannot set the time or program to record on a nonexistent date. Such a system could also compensate for daylight-savings time seasonal adjustments.


The cursor is preferably distinctive and readily distinguished from other parts of the screen. This may be by color, attribute (e.g., blinking), size, font change of underlying text, or by other means.


The user can preferably exit the programming sequence at any time by selecting a "Main Menu" button which may appear in the lower left-hand corner of every screen. The user is preferably provided with an adequate amount of feedback, and error messages should be directive in nature. Some form of acknowledgement is preferably displayed after each entry. The user should preferably not be able to go to the next programming step until the current step has been completed. A message conveying why the user cannot continue should appear when an attempt to prematurely continue is recognized.


The “help” function is available for when the user does not know what to do. The “help” screen(s) preferably explains the functions of each of the available buttons or functions, but may also be limited to those that are ambiguous. The “help” screen may also be used to indicate a current status of the interface and the controller. Further, the “help” function may also provide access to various other functions, such as advanced options and configurations, and thus need not be limited to merely providing information on the display. The help system may incorporate a hypertext-type system, wherein text or information relating to concepts that are conceptually linked may be easily accessed by indicating to the interface system that the related information is desired. To eliminate the possibility of the user trying to make selections on merely informative help screens, the cursor, in these cases, should be locked to a choice which returns the user to where they left off in the programming sequence, and this choice should be highlighted.


The “help” function may also comprise “balloon help” similar to the system adopted by Apple Computer, Inc. in Macintosh Operating System, e.g., 7.0, 7.1, 7.5, etc.


The interface preferably initiates the programming sequence where the user wants to be, so that the interface has so-called “smart screens”. For example, when a VCR is first powered up or after an extended power failure, and the time and date are not stored in the machine, the “set date” and “set time” screens should appear. The sequence of screens may also vary depending on the system predicted requirements of the user and various aspects of the improved interface of the present invention. This is shown schematically in FIG. 17 of the present invention.


The preferable input device for the interface of the present invention provides as few buttons as possible to achieve the required functionality, thus reducing potential user intimidation and focusing the user's attention on the interactive display screen, where the available choices are minimized to the number necessary to efficiently allow the user to program the discrete task presented. Such a minimization of discrete inputs facilitates a voice recognition input, which may be used as an alternative to mechanical input devices. The preferred embodiment includes a direct-manipulation type interface, in which a physical act of the user causes a proportionate change in the associated interface characteristic, such as cursor position. A computer mouse, e.g., a two-dimensional input device with 1 to 3 buttons, is the preferred input device for use with a general purpose computer as a controller, while a trackball on a remote control device is especially preferred for limited purpose controllers, because a trackball does not require a flat surface for operation. Other stationary or movement sensitive input devices may, of course, be used, such as joysticks, gyroscopes, sonic echo-location, magnetic or electrostatic location devices, RF phase location devices, Hallpots (joystick-like devices with magnets that move with respect to Hall effect transducers), etc. The present interface minimizes the number of necessary keys present on an input device, while maintaining the functionality of the interface. It is noted that a strict minimization without consideration of functionality might lead to inefficiency. For example, in a VCR device, if the user wants to record a program which airs Monday through Friday, he would have to set five separate programs, rather than one program if a "weeknights" choice is made available. The interface preferably should be easy to learn and should not require that a user have prior knowledge of the interface in order to use it. An attempt has been made to minimize the learning curve, i.e., to minimize the time it takes to learn how to use the device.


Menu options are preferably displayed in logical order or in their expected frequencies. Research has shown that a menu-driven interface is best for applications involving new users and does not substantially hinder experienced users. Menu selection is preferably used for tasks which involve limited choices, and is most helpful for users with little or no training. Each menu should preferably allow only one selection at a time. Most of the information is preferably entered using a numeric keypad (entry method), rather than using up and down arrow keys (selection method). In addition, no leading zeros are required for entry. If more than one keystroke is required, the user must then select an "OK" button to continue in the programming sequence. However, if the selection method is used, all of the choices are displayed on the screen at once. The number of steps required to complete the task through a sequence of menus should be minimized. The choice of words used to convey information should not be device specific, i.e., computer terms, but rather normal, everyday terms which are easy to understand. In addition, very few abbreviations should be used. All necessary information which the user needs should preferably be displayed at once. A user preferably should not have to rely on his memory or his previous experience in order to find the correct choice, at least at the lower user levels. If all selections cannot be displayed at once, a hierarchical sequence is preferably used. A main menu should preferably provide a top level to which the user can always return and start over.


Searching and learning times should be kept to a minimum in order to obtain a subjectively better interface. The system's logic should reflect the users' expectations, offer visual clues and feedback, and stay within human memory limits. For example, the VCR should turn on not only with the “Power” button, but also when inserting a tape into the device. In addition, the sequence of steps for setting the machine to record, if the user does not indicate implicitly or explicitly that he knows how to use the device, should assume that the user is a novice, and fully prompt the user for elemental items of information. Nothing should be taken for granted. By developing an improved interface, an attempt is made to: reduce the searching time; reduce the learning time; simplify the entering of data; and, reduce the intimidation experienced by certain persons when using electronic devices.


Tests by an inventor hereof show that people do not program their VCRs often, and they often forget the sequence of steps between recording sessions. Thus, the present invention preferably incorporates an adaptive user level interface, wherein a novice user is presented with a simpler interface with fewer advanced features initially available, so that there is reduced searching for the basic functions. A more advanced user is presented with more advanced choices and functions available initially, as compared to a novice user.


Thus, as shown in FIG. 17, the user identifies himself to the controller in block 1701. The controller 1806 of FIG. 18 thereafter uses a stored profile of the identified user in controlling the interaction with the user, as shown in block 1702 of FIG. 17, from information stored in the database 1807 of FIG. 18 of the present invention. It has been found that in the case of novice users, a greater number of simple instructions may be more quickly and easily input rather than a potentially fewer number of a larger set of more complex instructions. It has further been found that, even if presented with a set of instructions which will allow a program to be entered with a fewer number of inputs, a novice user may choose to input the program using the simple instructions exclusively, thus employing an increased number of instructions and being delayed by an increased search time for those instructions that are used, from the larger set.


Other characteristics of this interface include color coding to help prompt the user as to which data must be entered. Red text signifies instructions or errors, yellow text represents data which must be entered or has not been changed, and blue text shows newly entered program data or status information. Blue buttons represent buttons which should normally be pressed during the programming sequence. Red buttons signify an erratic pattern in the data entry, such as the "cancel" and "return to main menu" buttons. Of course, these colors can be replaced by other display attributes, such as intensity, underline, reverse video, blinking and pixel dithering pattern, in addition to the use of various fonts; such substitutions would be appropriate, for example, with a monochrome monitor or display.
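

This scheme amounts to a small mapping from semantic categories to display attributes, with a monochrome fallback; a minimal Python sketch follows, with the fallback attribute assignments assumed for illustration.

    # Hypothetical sketch: the semantic color scheme above as a mapping,
    # with a monochrome fallback using display attributes instead.
    TEXT_STYLE = {
        "instruction_or_error": {"color": "red",    "mono": "blinking"},
        "data_to_enter":        {"color": "yellow", "mono": "underline"},
        "newly_entered":        {"color": "blue",   "mono": "reverse video"},
    }

    def style_for(category, monochrome=False):
        entry = TEXT_STYLE[category]
        return entry["mono"] if monochrome else entry["color"]

    print(style_for("data_to_enter", monochrome=True))   # underline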


The date may be entered in the form of a calendar rather than as numbers (e.g., "9/6/91"). This calendar method is advantageous because users may wish to input date data in one of three ways: day of the week, day relative to the present, and day of the month. The present method allows the current date to be highlighted, so that the calendar may be used to easily enter the absolute day, absolute date, and relative day. Further, the choices "today" and "tomorrow", the most frequently used relative recording times, are included in addition to a month-by-month calendar. This information is provided to avoid an unnecessary waste of time and user frustration. Thus, another aspect of the present invention is to provide a partially redundant interactive display input system which allows, according to the highest probability, the choices to be prominently displayed and easily available, in addition to allowing random access to all choices.
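

The three ways of specifying a date reduce to a single absolute date; the following Python sketch shows one plausible resolution, assuming the calendar interface offers only day numbers valid for the month (the function and names are illustrative, not the patent's method).

    import datetime

    # Hypothetical sketch: resolve the three user date idioms (relative
    # day, weekday, day of month) to an absolute date.
    WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
                "friday", "saturday", "sunday"]

    def resolve_date(choice, today):
        if choice == "today":
            return today
        if choice == "tomorrow":
            return today + datetime.timedelta(days=1)
        if choice in WEEKDAYS:                    # next occurrence of weekday
            ahead = (WEEKDAYS.index(choice) - today.weekday()) % 7 or 7
            return today + datetime.timedelta(days=ahead)
        day = int(choice)                         # this month, else next month
        if day >= today.day:
            return today.replace(day=day)
        month = today.month % 12 + 1
        return today.replace(year=today.year + (month == 1), month=month, day=day)

    print(resolve_date("tomorrow", datetime.date(1991, 9, 6)))   # 1991-09-07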


The present device allows common user mistakes to be recognized and possibly addressed, such as confusion between 12:00 PM and 12:00 AM when intending noon or midnight. Therefore, the options of "noon" and "midnight" are provided in addition to a direct numeric clock input. When entering time information, leading zeros need not be entered, and such information may be entered in either fashion.


The criteria for system acceptance of input depend on how many keystrokes are required on the screen. If only one keystroke is required to complete input of the information, the programming sequence continues upon depression of that key. If more than one keypress is required, the user must depress the "OK" button to continue programming. This context sensitive information entry serves to avoid unnecessary input.
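

A minimal Python sketch of this acceptance rule follows; the field descriptor format is assumed for illustration.

    # Hypothetical sketch of the acceptance rule: a one-keystroke field
    # advances immediately; a multi-keystroke field waits for "OK".
    def handle_keypress(field, key, buffer):
        """Return (new_buffer, advance) for one keypress on a field."""
        if key == "OK":
            return buffer, len(buffer) > 0        # confirm multi-key entry
        buffer += key
        if field["keystrokes"] == 1:
            return buffer, True                   # single key: auto-advance
        return buffer, False                      # await more keys or "OK"

    buf, advance = handle_keypress({"name": "am_pm", "keystrokes": 1}, "P", "")
    print(advance)   # True: one keypress completes this field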


An on-line “help” system and on-line feedback is preferably provided to the user throughout various aspects of the interface. Other features include minimizing the number of keypresses required to program the device. These features, together with other aspects of the present invention allow the user to achieve a greater efficiency with the input device than with prior art devices.


The interface of the present invention applied to a VCR control preferably comprises a virtual keypad entry device (i.e. a representation of an array of choices), a directional input control for a cursor on a display screen, and selection buttons. The input device has an input corresponding to a direction of movement relative to the cursor position. Thus, since the present input device seeks to minimize the physical control elements of the human interface device, the display elements for a preferred embodiment of the present interface include:

    • 1. Number keys 0-9.
    • 2. Enter key.
    • 3. Cancel key.
    • 4. Status indicator.
    • 5. Return to menu option button.
    • 6. Program type indicator: program once, program once a week, program Monday-Friday, program everyday.
    • 7. Day indicators: 7 week days, today, tomorrow.
    • 8. Noon and midnight choices.
    • 9. Help button.
    • 10. Main menu options: Review, Enter new recording time, Set time, Set date.
    • 11. Timer button.
    • 12. Power button.
    • 13. AM/PM choices.
    • 14. 31-day calendar.
    • 15. 12-month choices.
    • 16. 3 tape speed choices.


User dissatisfaction is generally proportionate to the length of "search time," the time necessary to locate and execute the next desired function or instruction. Search time may be minimized by limiting each screen to a maximum of 4-8 choices and by use of consistent wording and placement of items on the display.


The present invention proceeds from the understanding that there are a number of aspects of a programmable interface that are desirable:


First, users should be able to operate the system successfully, without wide disparities in time. It should take, e.g., a normal person interacting with a VCR interface, less than seven minutes to set the time and two programs. Searching time spent in setting the clock, programming, getting into the correct mode, and checking whether or not the VCR is set correctly should be kept to a minimum through the appropriate choices of menu layout and the presentation of available choices.


Second, programming should be a stand-alone process, and not require an instruction manual. A help system should be incorporated in the interface. Word choices should be understandable, with a reduction in the use of confusing word terminology. Error messages should be understandable. The system should provide the ability to cancel, change or exit from any step.


Third, the system should provide on-screen understandable information, with adequate visual feedback. The displays should be consistent. Color coding should be employed, where applicable, using, e.g. blue—new input; red—error condition; yellow—static, unchanged value. Layouts should be logical, and follow a predictable pattern. There should be a maximum of 4-8 choices per screen to minimize searching time. Keys should be labeled with text rather than with ambiguous graphics. However, a combination of both may be preferable in some cases.


Fourth, steps required to complete tasks should be simple, require a short amount of time and not create user frustration. The system should guide the user along a decision path, providing automatic sequencing of steps. The most frequently used choices should be provided as defaults, and smart screens may be employed. The learning curve should be minimized through the use of easily understandable choices. As a user becomes more sophisticated, the interface may present more advanced choices.


Fifth, there should be a reminder to set the timer and to insert the tape once the programming information is entered. This reminder may also be automated, to eliminate the commonly forgotten step of setting the timer, so that the VCR automatically sets the timer as soon as the necessary information is entered and a tape is inserted. Once the program is set in memory, a message should appear if a tape is not inserted. If the VCR is part of a “jukebox” (automatic changer), the tape may be automatically loaded. The VCR should preferably turn on when a tape is inserted. In addition, users should also be able to control the VCR with a Power button.


Sixth, the VCR should be programmable from both the remote device and the control panel.


Seventh, each operation should require only one keypress, if possible, or otherwise reduce the number of keypresses required. There should be a 12 hour clock, not a 24 hour clock. There should be an on-screen keypad with entry keys, not "up" and "down" selector keys, allowing for the choice of specific day or time entry. There should be a "start" and a "stop" recording time, rather than a "start" time and a "length of program" or duration exclusively. The number of buttons on the remote control should be minimized, so that only those buttons which are required are provided. The input device should provide for the direct manipulation of screen elements. A menu driven interface should be provided.


The interface of the present invention provides an automatic sequencing of steps which does not normally let the user mistakenly believe that a step is complete when it is not. This is shown schematically in FIG. 16. In this manner, important steps will not be inadvertently omitted. Upon entering the programming sequence, if the current date or time is not set, the interface will prompt the user to enter this information. Thereafter, the interface will normally default to the main menu, the most frequently used first screen. Thus, the interface of the present invention is adaptive, in that its actions depend on the current state of the device, including prior programming or use of the device by the user. It can be appreciated that this adaptive behavior can be extended to include extended "intelligence". For example, if the device is similarly programmed on a number of occasions, then the default setup may be adapted to a new "normal" program mode. Further, the apparatus could provide multiple levels of user interface, e.g. beginner, intermediate, and advanced, which may differ for various functions, based on the behavior of the user. This user interface level determining feature extraction system is shown diagrammatically in FIG. 18. In contrast, prior art interfaces that have different user interface levels allow the user to explicitly choose the interface level, which will then be used throughout the system until reset.


The present system allows discrete tasks to be conducted more quickly, more efficiently, with reduced search time and with fewer errors than prior art systems.


EXAMPLE 2
Serial Recording Medium Index

In a preferred embodiment of the present invention, in a VCR, in order to track the content of the tape, a directory or a catalog is recorded, preferably digitally, containing the programming information, as well as additional information about the recorded programs, in a header, i.e., at the beginning of the tape, or at other locations on the tape. The device may also catalog the tape contents separately, and based on an identification of the tape, use a separately stored catalog. A preferred format for storing information is shown in FIG. 19.


Thus, if there are a number of selections on the tape, the entire contents of the tape could be accessible quickly, without the need for searching the entire tape. In a sequential access medium, the tape transport apparatus must still shuttle to the location of the desired material, but it may do so at increased speeds, because there is no need to read the tape once the location is determined; after the tape transport nears the desired spot, the tape may be slowed or precisely controlled to reach the exact location.


The tape read and drive system is shown schematically in FIG. 20. The algorithm used in the final stage of approach to the desired portion of the tape or other recording medium may incorporate a control employing Fuzzy logic, Neural Networks, mathematical formulae modeling the system (differential equations) in a Model-based system, a Proportional-Integral-Derivative (PID) system, or a controller employing an algorithm of higher order, or other known control methods.


If a selection is to be recorded over, the start and stop locations would be automatically determined from the locations already indicated on the tape. Further, this information could be stored in a memory device (which reads a catalog or index of the tape when a new tape is loaded) or a non-volatile memory device (which stores information relating to known tapes within the device), or both types of memory, in the VCR, so that an index function may be implemented in the VCR itself, without the need to read an entire tape. Optionally, a printer, such as a thermal label printer (available from, e.g., Seiko Instruments, Inc.), attached to the device, could be available to produce labels for the tapes, showing the index, so that the contents of a tape may be easily indicated. A label on the tape may also include a bar code or two-dimensional coding system to store content or characterization information. The stored identification and index information is thus stored in a human or machine readable form.


These contents, or a list of contents, need not necessarily be manually entered by the user or created by the apparatus, rather, these may be derived from published data or a database, data transmitted to the control, and/or data determined or synthesized by the control itself. For example, broadcast schedules are available in electronic or machine readable form, and this information may be used by the apparatus.


EXAMPLE 3
Serial Data Medium Index

Another aspect of the present invention relates to the cataloging and indexing of the contents of a storage medium. While random access media normally incorporate a directory of entries on a disk, and devices such as optical juke boxes normally are used in conjunction with software that indexes the contents of the available disks, serial access mass storage devices, such as magnetic tape, do not usually employ an index; therefore, the entire tape must be searched in order to locate a specific selection.


In the present invention, an area of the tape, preferably at the beginning of the tape or at multiple locations therein, is encoded to hold information relating to the contents of the tape. This encoding is shown in FIG. 19, which shows a data format for the information. This format has an identifying header 1901, a unique tape identifier 1902, an entry identifier 1903, a start time 1904, an end time 1905 and/or a duration 1906, a date code 1907, a channel code 1908, descriptive information 1909 of the described entry, which may include recording parameters and actual recorded locations on the tape, as well as a title or episode identifying information, which may be a fixed or variable length entry, and optionally representative scenes 1910, which may be in analog, digital, or compressed form, or in a form related to the abstract characterizations of the scenes formed in the operation of the device. Finally, there are error correcting codes 1911 for the catalog entry, which may also include advanced block encoding schemes to reduce the effect of non-Gaussian correlated errors which may occur on video tape, transmission media and the like. This information is preferably a modulated digital signal, recorded, in the case of Hi-Fi VHS, on one or more of the preexisting tracks on the tape, including the video, overscan area, audio, Hi-Fi stereo audio, SAP or control tracks. It should be noted that an additional track could be added, in similar fashion to the overlay of Hi-Fi audio on the video tracks of Hi-Fi VHS. It is also noted that similar techniques could be used with Beta format, 8 mm, or other recording systems, to provide the necessary indexing functions.
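

The FIG. 19 format may be pictured as a record whose fields follow the reference numerals above; in the Python sketch below, the types are illustrative assumptions, since the text leaves the field encodings open.

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Sketch of the FIG. 19 catalog entry as a record; field names
    # follow the reference numerals in the text, Python types assumed.
    @dataclass
    class CatalogEntry:
        header: bytes                   # identifying header (1901)
        tape_id: int                    # unique tape identifier (1902)
        entry_id: int                   # entry identifier (1903)
        start_time: int                 # start time (1904)
        end_time: Optional[int]         # end time (1905) ...
        duration: Optional[int]         # ... and/or duration (1906)
        date_code: str                  # date code (1907)
        channel_code: int               # channel code (1908)
        description: str                # descriptive information (1909)
        scenes: List[bytes] = field(default_factory=list)   # scenes (1910)
        ecc: bytes = b""                # error correcting codes (1911)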


Digital data may also be superimposed as pseudonoise in the image information, or as other information intermixed or merged with the video information.


The recording method is preferably a block encoding method with error correction within each block, block redundancy, and interleaving. Methods are known for reducing the error rate for digital signals recorded on unverified media, such as videotape, which are subject to burst errors and long term non-random errors. Such techniques reduce the effective error rate to acceptable levels. These are known to those skilled in the art and need not be discussed herein in detail. A standard reference related to this topic is Digital Communications by John G. Proakis, McGraw-Hill (1983). The digital data recording scheme is best determined according to the characteristics of the recording apparatus. Therefore, if, e.g., a Sony Corporation helical scan recording/reproducing apparatus were employed, one of ordinary skill in the art would initially reference the methods of the Sony Corporation for an optimal error correcting recording scheme, which are available in the patent literature, in the U.S., Japan, and internationally, and the skilled artisan would also review the known methods used by other manufacturers of digital data recording equipment. Therefore, these methods need not be explained herein in detail.
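

Interleaving, one of the techniques named above, disperses a burst of contiguous tape errors across many blocks, so that each block's error correction sees only a few errors. A minimal Python sketch follows, with the interleave depth chosen arbitrarily for illustration.

    # Hypothetical sketch of byte interleaving: data is written out
    # column-wise so a contiguous dropout on tape is spread across
    # many separately correctable blocks.
    def interleave(data, depth):
        return b"".join(data[i::depth] for i in range(depth))

    def deinterleave(data, depth):
        base, extra = divmod(len(data), depth)
        rows, pos = [], 0
        for i in range(depth):
            size = base + (1 if i < extra else 0)
            rows.append(data[pos:pos + size])
            pos += size
        out = bytearray()
        for j in range(base + 1):
            for row in rows:
                if j < len(row):
                    out.append(row[j])
        return bytes(out)

    message = b"INDEX-ENTRY-0001"
    assert deinterleave(interleave(message, 4), 4) == message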


The catalog of entries is also preferably stored in non-volatile memory, such as a hard disk, associated with the VCR controller. This allows the random selection of a tape from a library, without need for manually scanning the contents of each tape. This also facilitates the random storage of recordings on tape, without the requirement of storing related entries in physical proximity with one another so that they may be easily located. This, in turn, allows more efficient use of tape, because of reduced empty space at the end of a tape. The apparatus is shown schematically in FIG. 20, in which a tape drive motor 2001, controlled by a transport control 2002, which in turn is controlled by the control 2003, moves a tape 2005 past a reading head 2004. The output of the reading head 2004 is processed by the amplifier/demodulator 2006, which produces a split output signal. One part of the output signal comprises the analog signal path 2007, which is described elsewhere. A digital reading circuit 2008 transmits the digital information to a digital information detecting circuit 2009, which in turn decodes the information and provides it to the control 2003.


In order to retrieve an entry, the user interacts with the same interface that is used for programming the recorder functions; however, the user selects different menu selections, which guide him to the available selections. This function, instead of focusing mainly on the particular user's history in order to predict a selection, would analyze the entire library, regardless of which user instituted the recording. Further, there would likely be a bias against performing identically the most recently executed function, and rather the predicted function would be an analogous function, based on a programmed or inferred user preference. This is because it is unlikely that a user will perform an identical action repeatedly, but a pattern may still be derived.


It is noted that the present library functions differ from the prior art VHS tape index function, because the present index is intelligent, and does not require the user to mark an index location and explicitly program the VCR to shuttle to that location. Rather, the index is content based. Another advantage of the present library function is that it can automatically switch media and recording format, providing an adaptive and/or multimode recording system. Such a system might be used, for example, if a user wishes to record, e.g., “The Tonight Show With Johnny Carson” in highly compressed form, e.g. MPEG-2 at 200:1 compression, except during the performance of a musical guest, at which time the recording should have a much lower loss, e.g., MPEG-2 at 20:1, or in analog format uncompressed. A normal VCR could hardly be used to implement such a function even manually, because the tape speed (the analogy of quality level) cannot generally be changed in mid recording. The present system could recognize the desired special segment, record it as desired, and indicate the specific parameters on the information directory. The recorded information may then be retrieved sequentially, as in a normal VCR, or the desired selection may be preferentially retrieved. If the interface of the present invention is set to automatically record such special requests, the catalog section would then be available for the user to indicate which selections were recorded based upon the implicit request of the user. Because the interface has the ability to characterize the input and record these characterizations in the index, the user may make an explicit request different from the recording criteria, after a selection has been recorded. The controller would then search the index for matching entries, which could then be retrieved based on the index, and without a manual search of the entire tape. Other advantages of the present system are obvious to those of ordinary skill in the art.
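

A content-based retrieval of this kind might be sketched as follows in Python; the scoring rule and the entry format are illustrative assumptions, not the patent's characterization method.

    # Hypothetical sketch: retrieve recordings by matching stored
    # characterizations, not physical tape marks.
    def search_catalog(entries, query_terms):
        """Rank entries by how many query terms their description matches."""
        def score(entry):
            text = entry["description"].lower()
            return sum(term.lower() in text for term in query_terms)
        scored = [(score(e), e) for e in entries]
        return [e for s, e in sorted(scored, key=lambda se: -se[0]) if s > 0]

    catalog = [{"description": "Tonight Show, musical guest segment"},
               {"description": "Evening news"}]
    print(search_catalog(catalog, ["musical", "guest"])[0]["description"])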


A library system is available from Open Eyes Video, called “Scene Locator”, which implements a non-intelligent system for indexing the contents of a videotape. See NewMedia, November/December 1991, p. 69.


It is noted that, if the standard audio tracks are used to record the indexing information, then standard audio frequency modems and recording/receiving methods are available, adapted to record or receive data in half-duplex mode. These standard modems range in speed from 300 baud to about 64 kilobits per second, e.g., v.29, v.17, v.32, v.32bis, v.34, v.90, v.91, etc. While these systems are designed for dial-up telecommunications, and are therefore limited by the data rates available from POTS to a slower speed than necessary, and incorporate features unnecessary for closed systems, they require a minimum of design effort, and the same circuitry may be multiplexed and also used for telecommunication with an on-line database, such as a database of broadcast listings, discussed above. It should be noted that a full-duplex modem should be operated in half-duplex mode when reading or recording on a medium, thus avoiding the generation of unnecessary handshaking signals. Alternatively, a full-duplex receiver may be provided, with the resulting audio recorded; a specially programmed receiver may then extract the data from the recording. DTMF codes may also be employed to store information.


The Videotext standard may also be used to record the catalog or indexing information on the tape. This method, however, if used while desired material is on the screen, makes it difficult (but not impossible) to change the information after it has been recorded, without re-recording entire frames, because the videotext uses the video channel, during non-visible scan periods thereof. The video recording system according to the present invention preferably faithfully records all transmitted information, including SAP, VAR, close caption and videotext information, which may be used to implement the various functions.


On-line database listings may be used by the present interface to provide information to be downloaded and incorporated in the index entry of the library function, and may also be used as part of the intelligent determination of the content of a broadcast. This information may further be used for explicit programming of the interface by the user, in that the user may be explicitly presented with the choices available from the database.


EXAMPLE 4
Controlled Encryption and Accounting System

The present invention also allows for scrambling, encryption and locking of source material, and the receiving device selectively implements an inverse process or a partial inverse process for descrambling, decryption or unlocking of the material, much as the Videocipher series systems from General Instruments, and the fractal enciphering methods of Entertainment Made Convenient2 Inc. (EMC2, and related companies, e.g., EMC3) and Iterated Systems, Inc. The present invention, however, is not limited to broadcasts, and instead could implement a system for both broadcasts and prerecorded materials. In the case of copying from one tape to another, such a system could not only provide the herein mentioned library functions of the present invention according to Example 2, it could also be used to aid in copy protection, serial copy management, and a pay-per-view royalty collection system.


Such a system could be implemented by way of a telecommunication function incorporated in the device, shown as block 1808 of FIG. 18, or an electronic tag which records user activity relating to a tape or the like. Such tags might take the form of a smart card, PCMCIA device, or other type of storage device. A royalty fee, etc., could automatically be registered to the machine either by telecommunication or registry with the electronic tag, allowing new viewer options to be provided as compared with present VCR's.


Numerous digital data encryption and decryption systems are known. These include DES, “Clipper”, elliptic key algorithms, public key/private key (RSA, etc.), PGP, and others. Digital encryption allows a sender to scramble a message so that, with an arbitrary degree of difficulty, the message cannot be determined without use of a decryption key.


An encrypted tape or other source material may be decrypted with a decryption key available by telecommunication with a communication center, remote from the user, in a decryption unit, shown schematically as the decrypt unit 1806a of FIG. 18. Such an encryption/decryption scheme requires special playback equipment, or at least equipment with decryption functionality, and thus any usage of decrypted data may be registered as a result of the requirement to receive a decryption key. The decryption unit may be part of an addressable remote unit for control of the unit remotely.


During acquisition of the electronic decryption key, a VCR device of an embodiment of the present invention would indicate its identity or electronic address, and an account would be charged a fee for such use. The negotiation for the electronic key is also preferably encrypted. In addition, the decryption key may be specific for a particular decoder. Such a system could also be used for controlled access software, for example for a computer, wherein a remote account is charged for use of the software. Information communication may be through the Internet or through an on-line service such as America Online or Compuserve.
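

The exchange might be sketched in Python as follows; the message fields, the key derivation, and the in-memory account ledger are all illustrative assumptions rather than the patent's protocol, and a real system would encrypt the negotiation itself, as noted above.

    import hashlib

    # Hypothetical sketch of the accounting exchange: the player
    # identifies itself, the remote center charges the account, and
    # returns a key bound to that decoder and title.
    ACCOUNTS = {"decoder-0042": 10.00}
    MASTER_SECRET = b"center-master-secret"

    def request_key(decoder_id, title_id, fee):
        if ACCOUNTS.get(decoder_id, 0.0) < fee:
            raise PermissionError("insufficient account balance")
        ACCOUNTS[decoder_id] -= fee                 # charge for this use
        material = MASTER_SECRET + decoder_id.encode() + title_id.encode()
        return hashlib.sha256(material).hexdigest() # decoder-specific key

    key = request_key("decoder-0042", "title-0007", fee=2.50)
    print(key[:16], ACCOUNTS["decoder-0042"])       # partial key, 7.5 left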


Such a system differs from the normal hardware “key” or “dongle” (device which attaches to standard hardware port for authentication and usage limitation) because it requires on-line or electronic access for an encryption key, which may offer different levels of use. It also differs from a call-in registration, because of the automatic nature of the telecommunication. This presently described system differs from normal pay-per-view techniques because it allows, in certain instances, the user to schedule the viewing. Finally, with an encryption function implemented in the VCR, the device allows a user to create and distribute custom “software” or program material. In addition, the present controller could then act as the “telecommunication center” and authorize decryption of the material.


If the source signal is in digital form, a serial copy management scheme system is preferably implemented.


The present invention is advantageous in this application because it provides an advanced user interface for creating a program (i.e. a sequence of instructions), and it assists the user in selecting from the available programs, without having presented the user with a detailed description of the programs, i.e., the user may select the choice based on characteristics rather than literal description.


In the case of encrypted program source material, it is particularly advantageous if the characterization of the program occurs without charging the account of the user for such characterization, and only charging the account if the program is viewed by the user. The user may make a viewing decision based on the recommendation of the interface system, or may review the decision based on the title or description of the program, or after a limited duration of viewing. Security of the system could then be ensured by a two level encryption system, wherein the initial decryption allows for significant processing, but not comfortable viewing, while the second level of decryption allows viewing, and is linked to the accounting system. Alternatively, the decryption may be performed so that certain information, less than the entirety, is available in a first decryption mode, while other information comprising the broadcast information is available in a second decryption mode.


The transmission encryption system may be of any type, but for sensitive material, i.e. where mere distortion of the material (e.g., loss of synchronization information and phase distortion) would be insufficient, an analog multiple subband transform, with spread spectrum band hopping and digital encryption of various control signals, would provide a system which would be particularly difficult for the user to view without authorization, and could be effectively implemented with conventionally available technology. The fractal compression and encryption of the EMC2 and Iterated Systems, Inc. system is also possible, in instances where the broadcast may be precompressed prior to broadcast and the transmission system supports digital data. Of course, if a digital storage format is employed, a strict digital encryption system of known type may be used, such as those available from RSA. The implementation of these encryption systems is known to those skilled in the art. These may include the National Bureau of Standards (NBS), Verifiable Secret Sharing (VSS) and National Security Agency (NSA) encryption standards, as well as various proprietary standards.


EXAMPLE 5
User Interface

In one embodiment of the present invention, the apparatus comprises a program entry device for a VCR or other type of media recording system. The human interface element has an infrared device to allow wireless communication between the human interface device and the VCR apparatus proper. The human interface device also includes a direct-manipulation type input device, such as a trackball or joystick. Of course, it is understood that various known or to-be-developed alternatives can be employed, as described above.


It is noted that many present devices, intended for use in computers having graphic interfaces, would advantageously make use of an input device which is accessible, without the necessity of moving the user's hands from the keyboard. Thus, for example, Electronic Engineering Times (EET), Oct. 28, 1991, p. 62, discloses a miniature joystick incorporated into the functional area of the keyboard. This technique is directed at a different aspect of user interaction with a programmable device than certain preferred embodiments of the present invention, in that the input device does not have a minimal number of keys. While the device disclosed in EET is intended for use in a full function keyboard, the preferred embodiment of the present invention is directed towards the minimization of the number of keys and avoidance of superfluous keys by provision of a pointing device. Of course, the present invention could be used with a full function input device, where appropriate, and the joystick of EET (Oct. 28, 1991, p. 62) would be suitable in this case.


The interface of the present invention studies the behavior and moods of the user, in context, during interactions, to determine the expected user level of that user, as well as the preferences of the user. These user characteristics may change over time and circumstances. This means that the system studies the interaction of the user to determine the skill of the user, or his or her familiarity with the operation and functionality of the system. By determining the skill of the user, the system may provide an interface representing a best compromise between simplicity for the novice and full access for the expert. The purpose of this feature is to provide a tailored interface adapted to the characteristics of the user, thus adaptively providing access to various features in a hierarchical manner, such that a most likely feature to be used is more easily accessible than an unlikely feature, but such that features can generally be accessed from all or most user levels. The user level analysis also allows the system to teach the user of the various functions available, particularly when it becomes apparent that the user is being inefficient in the use of the system to perform a given task. Therefore, the menu structure may also be adaptive to the particular task being performed by the user. When combined with the user level analysis feature, the user efficiency feature will provide a preferable interface, with reduced learning time and increased usability for a variety of users.
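
A minimal sketch of such user level estimation and adaptive menu presentation follows; the observed signals, thresholds, and all names are illustrative assumptions, not the patented analysis:

```python
from collections import defaultdict

class UserLevelModel:
    """Minimal sketch of adaptive feature presentation, assuming usage
    counts, help requests, and errors are the only observed signals."""
    def __init__(self):
        self.uses = defaultdict(int)
        self.help_requests = 0
        self.errors = 0

    def observe(self, feature=None, help_request=False, error=False):
        if feature is not None:
            self.uses[feature] += 1
        self.help_requests += int(help_request)
        self.errors += int(error)

    def user_level(self) -> float:
        """0.0 = novice, 1.0 = expert (crude illustrative heuristic)."""
        activity = sum(self.uses.values())
        penalty = 2 * self.help_requests + self.errors
        return max(0.0, min(1.0, (activity - penalty) / 50.0))

    def menu(self, features):
        """Most likely features are surfaced first; the rest remain
        reachable through a secondary menu, so no function is lost."""
        ranked = sorted(features, key=lambda f: self.uses[f], reverse=True)
        visible = max(3, int(len(ranked) * self.user_level()))
        return ranked[:visible], ranked[visible:]
```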


Thus, an important concept is that the system has at least one object having a plurality of functions, certain of which are unnecessary or are rarely used for various applications or in various contexts, while these are used with greater frequency in other contexts. Further, based upon predetermined protocols and learned patterns, it is possible to predict which functions will be used and which will not be used.


Therefore, the system, upon recognizing a context, will reconfigure the availability or ease of availability of functions, and allow various subsets to be used through "shortcuts". Thus, to some extent, the interface structure may vary from time to time based upon the use of the system. The prior art apparently teaches away from this concept, because it is believed to prevent standardization, to limit the "recordability" of macros and/or instruction sheets for casual users, and to limit the availability of technical support. Each of these can be addressed, to some extent, by the availability of a default mode (so that users can access all information), and because the interface is self-simplifying in case of difficulty. However, forcing all users to always work in a default mode limits the improvements in productivity that may be gained by a data-sensitive processing system, and hence this standardization for its own sake is rejected by the present invention.


The improvements to be gained by using an intelligent data analysis interface for facilitating user control and operation of the system are more than merely reducing the average number of keystrokes or time to access a given function. Initial presentation of all available information to a new user might be too large an information load, leading to inefficiency, increased search time and errors. Rather, the improvements arise from providing a means of access to, and availability of, functions not necessarily known to the user, thereby improving the perceived quality of the product.


The system to determine the sophistication of the user includes a number of storage registers, for storing an analysis of each act for each user. A given act is represented in a plurality of the registers, and a weighting system ensures that, even though an act is represented in a number of registers, it is not given undue emphasis in the analysis. Thus, each act of the user may be characterized in a number of ways, and each characteristic stored in an appropriate register, along with a weighting representing an importance of the particular characteristic, in relation to other identified characteristics and in relation to the importance of the act as a whole. The act is considered in context, and therefore the stored information relates to the act, the sequence of acts prior to the act, acts of the user which occur after the act, the results of the sequence of acts which include the act, and characteristics of the user which are not "acts", but rather include timing, mouse path efficiency, and interactions with other users.


An apparatus for performing a path information or efficiency determining function is shown schematically in FIG. 18, and in more detail in FIG. 21. Thus, for example, if a characteristic of the user is an unsteady hand while using the cursor control device, e.g. mouse, producing a high frequency or oscillating component, the existence of this characteristic is detected and quantified by the high frequency signal component detector 2112, and, depending on the amplitude, frequency and duration (e.g. path length), may also be detected by the path optimization detector 2105. Once this characteristic is detected and quantified, an adaptive filter may be applied by the main control 1806 to selectively remove the detected component from the signal, in order to improve the reliability of the detection of other characteristics and to determine the intended act of the user.
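
The tremor detection and adaptive filtering just described might be sketched as follows, assuming a stream of raw (x, y) cursor samples; the element numbers in the comments refer to FIG. 21, while the constants are purely illustrative:

```python
import math

class CursorFilter:
    """Sketch of the high-frequency component detector (cf. element
    2112) and adaptive smoothing; thresholds are illustrative."""
    def __init__(self, alpha=0.3, tremor_threshold=4.0):
        self.alpha = alpha                  # smoothing strength (0..1)
        self.smoothed = None
        self.hf_energy = 0.0                # running high-frequency energy
        self.tremor_threshold = tremor_threshold

    def update(self, x: float, y: float):
        if self.smoothed is None:
            self.smoothed = (x, y)
            return self.smoothed
        sx, sy = self.smoothed
        # Residual between raw sample and smoothed path ~ HF component.
        residual = math.hypot(x - sx, y - sy)
        self.hf_energy = 0.9 * self.hf_energy + 0.1 * residual
        # Apply strong filtering only when tremor is detected, so an
        # advanced user's fast, deliberate motion is not slowed down.
        a = self.alpha if self.hf_energy > self.tremor_threshold else 0.9
        self.smoothed = (sx + a * (x - sx), sy + a * (y - sy))
        return self.smoothed
```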


It should be noted that the various characteristic filters preferably act in “parallel” at each stage of the characteristic recognition, meaning that one characteristic is defined simultaneously with the detection of other characteristics, which assists in resolving ambiguities, allows for parallel processing by a plurality of processing elements which improves real-time recognition speed, and allows a probability-based analysis to proceed efficiently. Such a “parallel” computation system is included in a neural net computer, and a hardware-implementation of a neural net/fuzzy logic hybrid computer is a preferred embodiment, which allows fuzzy rules to be programmed to provide explicit control over the functioning of the system. It is preferred that a human programmer determine the basic rules of operation of the system, prior to allowing a back-propagation of errors learning algorithm to improve and adapt the operation of the system.


The adaptive system implemented according to the present invention, by detecting a user level, allows a novice user to productively interact with the system while not unnecessarily limiting the use of the adaptive interface by an advanced user, who, for example, wishes to move the cursor quickly without the limiting effects of a filter which slows cursor response.


Another example of the use of an adaptive user interface level is a user who repeatedly requests "help" or user instructions, through the explicit help request detector 2115, which causes an output from the current help level output 2102; such a user may benefit from an automatic context-sensitive help system; however, such a system may interfere with an advanced user, for whom it is unnecessary and should be avoided. This adaptive user interface level concept is not limited to a particular embodiment of the present invention, such as a VCR, and in fact may be broadly used wherever a system includes an interface which is intended for use by both experienced and inexperienced users. This differs from normal help systems, which must be specifically requested, or "balloon help" (Apple Computer, Macintosh System 7.0, 7.1, 7.5), which is either engaged or disengaged, but not adaptive to the particular situation based on an implicit request or predicted need. In the case of a single user or group of users, the interface could maintain a history of feature usage for each user, as in the past user history block 2107, and provide a lower user interface level for those features which are rarely used, and therefore less familiar to the user, through the current user level output 2101.


It should be noted that the present system preferably detects an identity of a user, and therefore differentiates between different users by an explicit or implicit identification system. Therefore, the system may accumulate information regarding users without confusion or intermingling.


EXAMPLE 6
VCR Programming Preference Prediction

The device according to the present invention is preferably intelligent. In the case of a VCR, the user could also input characteristics of the program material that are desired, and characteristics of that program material which is not desired. The device would then, over time, monitor various broadcast choices, and determine which most closely match the criteria, so that appropriate programs may be identified. For example, if the user prefers "talk-shows", and indicates a dislike for "situation comedies" ("sitcoms"), then the device could scan the various available choices for characteristics indicative of one or the other type of programming, and perform a correlation to determine the most appropriate choice(s). A sitcom, for example, usually has a "laugh track" during a pause in normal dialogue. The background of a sitcom is often a confined space (a "set"), shown from different perspectives, which has a large number of "props", which may be common or unique. This set and the props, however, may be enduring over the life of a show.


A talk-show, on the other hand, more often relies on actual audience reaction (possibly in response to an "applause" sign), and not prerecorded or synthesized sounds. The set is simple, and the broadcast often shows a head and neck, or full body shot with a bland background, likely with fewer enduring props. A signal processing computer, programmed for audio and/or video recognition, is provided to differentiate between at least these two types with some degree of efficiency and, with a possibly extended sampling time, to achieve a recognition accuracy such that, when this information is integrated with other available information, a reliable decision may be made. The required level of reliability, of course, will depend on the particular application and a cost-benefit analysis for the required system to implement the decision-making system.


Since the system according to the present invention need not display perfect accuracy, the preferred embodiment according to the present example applies general principles to new situations and receives user or other feedback as to the appropriateness of a given decision. Based on this feedback, subsequent encounters with the same or similar data sets will produce a result which is “closer” to an optimal decision. Therefore, with the aid of feedback, the search criterion would be improved. Thus, a user could teach the interface through trial and error to record the desired broadcast programs. Thus, the presently described recognition algorithms may be adaptive and learning, and need not apply a finite set of predetermined rules in operation. For such a learning task, a neural network processor may be implemented, as known in the art.
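
As an illustration of such feedback-driven improvement, the following sketch nudges feature weights toward an optimal recording decision after each user confirmation or rejection; it uses a simple linear learner in place of the neural network processor mentioned above, and all feature names are hypothetical:

```python
class PreferenceLearner:
    """Minimal sketch of learning program preferences from feedback.
    Feature names and weights are illustrative, not the claimed method."""
    def __init__(self, features, lr=0.1):
        self.weights = {f: 0.0 for f in features}
        self.lr = lr

    def score(self, program_features: dict) -> float:
        """Higher score = closer match to the learned preferences."""
        return sum(self.weights[f] * v
                   for f, v in program_features.items() if f in self.weights)

    def feedback(self, program_features: dict, liked: bool):
        """User confirms or rejects a decision; nudge the weights so
        similar programs score closer to the optimal decision."""
        target = 1.0 if liked else -1.0
        error = target - self.score(program_features)
        for f, v in program_features.items():
            if f in self.weights:
                self.weights[f] += self.lr * error * v

# Example: a laugh track and confined set suggest a disliked sitcom.
learner = PreferenceLearner(["laugh_track", "audience", "simple_set"])
learner.feedback({"laugh_track": 1.0, "simple_set": 1.0}, liked=False)
learner.feedback({"audience": 1.0, "simple_set": 1.0}, liked=True)
```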


The feature extraction and correlation system according to the present invention is shown in FIG. 22. In this figure, the multimedia input, including the audio signal and all other available data, are input in the video input 2201. The video portion is transferred to a frame buffer 2202, which temporarily stores all of the information. All other information in the signal, including audio, VIR, videotext, closed caption, SAP (second audio program), and overscan, is preferably stored in a memory, and analyzed as appropriate. The frame buffer 2202 may have an integral or separate prefiltering component 2203. The filtered signal(s) are then passed to a feature extractor 2204, which divides the video frame into a number of features, including movement, objects, foreground, background, etc. Further, sequences of video frames are analyzed in conjunction with the audio and other information, and features relating to the correlation of the video and other information, e.g., correlation of video and audio, are extracted. Other information is also analyzed and features extracted, e.g., audio and closed caption. All extracted features relating to the multimedia input are then passed to a transform engine or multiple engines in parallel, 2205. These transform engines 2205 serve to match the extracted features with exemplars or standard form templates in the template database 2206. It should be noted that even errors or lack of correlation between certain data may provide useful information; therefore, a mismatch between audio and closed caption, or audio and SAP, may be indicative of useful information. For non-video information, exemplars or templates are patterns which allow identification of an aspect of the signal by comparing the pattern of an unidentified signal with the stored pattern. Thus, the voice patterns of particular persons and audio patterns of particular songs or artists may be stored in a database and employed to identify a source signal.
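
A drastically simplified sketch of this pipeline follows, assuming numpy; crude global statistics stand in for the real feature extractor 2204, and a toy similarity measure stands in for the transform engines 2205 and correlators 2207:

```python
import numpy as np

def extract_features(frame: np.ndarray) -> dict:
    """Stand-in for feature extractor 2204: global statistics instead
    of real movement/object/foreground/background segmentation."""
    return {
        "brightness": float(frame.mean()),
        "edge_energy": float(np.abs(np.diff(frame, axis=0)).mean()),
    }

def correlate(features: dict, template: dict) -> float:
    """Stand-in for engines 2205 / correlators 2207: a normalized
    similarity between extracted features and a stored template."""
    score = 0.0
    for k, v in template.items():
        score += 1.0 / (1.0 + abs(features.get(k, 0.0) - v))
    return score / len(template)

template_db = {                      # cf. template database 2206
    "talk_show": {"brightness": 120.0, "edge_energy": 4.0},
    "sitcom":    {"brightness": 90.0,  "edge_energy": 9.0},
}

# Synthetic frame for demonstration only.
frame = np.random.default_rng(0).integers(0, 256, (480, 640)).astype(float)
feats = extract_features(frame)
scores = {name: correlate(feats, t) for name, t in template_db.items()}
best = max(scores, key=scores.get)   # cf. identifier 2209
```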


The transformed extracted features and the templates are then correlated by a correlator or correlators 2207. The parallelization of implementation of the transforms and correlators serves to increase the recognition speed of the device. It should be understood that appropriate systems for parallelization are known in the art. For example, the TMS320C80, also known as the TI MVP (Texas Instruments multimedia video processor), contains four DSP engines and a RISC processor with a floating point unit on a single die. A board including a TMS320C80 is available from General Imaging Corp., Billerica, Mass., the S/IP80, which may be programmed with ProtoPIPE. In addition, a board including a TMS320C80 is also available from Wintriss Engineering Corp., San Diego, Calif. Multiple MVP processors may also be parallelized for additional computing power. The MVP may be used to analyze, in parallel, the multimedia input signal and correlate it with stored patterns in a database. In this context, correlation does not necessarily denote a strict mathematical correlation, but rather indicates a comparison to determine the "closeness" of an identified portion of information with an unidentified portion, preferably including a reliability indicator as well. For neural network-based processing, specific hardware accelerators are also available, such as from Nestor, Inc. and Intel. Therefore, since there may be multiple recognizable aspects of the unidentified data, and various degrees of genericness of the characteristic recognized, it is preferred that, at this initial stage of the recognition process, the output of the correlators 2207 be a data set, e.g. a matrix, series of pointers, or other arrangement, so that sufficient information is available for higher level processing to allow application of an appropriate decision process. Of course, if the characteristic to be detected is simple and well defined, and the decision-making process may be implemented with a simple correlation result, then a complex data set output is not required. In fact, the output of the correlator may have a number of different forms, based on the context of the recognition process.


If, for example, an exact match to an entire frame is sought, partial match information is not particularly useful, and is ignored in this process. (Of course, since the system is “self-learning”, the processing results may be maintained and analyzed for other purposes). If the system, on the other hand, is analyzing novel data, a full analysis would likely be necessary including partial results and low correlation results.


The outputs of the correlators are input into an adaptive weighting network 2208, to produce a probability of a match between a given feature and a given template. The recognition is completed in an identifier 2209, which produces a signal identifying one or more objects in the video frame input. The identifier 2209 also has an output to the template database 2206, which reinforces the recognition by providing feedback; therefore, if the same object appears again, it will be more easily recognized. The template database 2206 therefore also has an input from the feature extractor 2204, which provides it with information regarding the features recognized. It is also noted that, in addition to allowing recognition, the parallel transform engines 2205, correlators 2207, and adaptive weighting network 2208 also allow the system to ignore features that, though complex, do not aid in recognition.
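
The adaptive weighting stage might be sketched as follows; the update rule is an illustrative assumption rather than the claimed network:

```python
class AdaptiveWeightingNetwork:
    """Sketch of element 2208: combines correlator scores into a match
    probability and reinforces the weights on confirmed matches (cf.
    the feedback path from identifier 2209 to template database 2206)."""
    def __init__(self, n_correlators: int):
        self.w = [1.0 / n_correlators] * n_correlators

    def probability(self, scores):
        """Scores are assumed normalized to [0, 1]."""
        p = sum(wi * si for wi, si in zip(self.w, scores))
        return max(0.0, min(1.0, p))

    def reinforce(self, scores, confirmed: bool, lr: float = 0.05):
        # Correlators that agreed with a confirmed match gain weight;
        # renormalize so the weights remain a convex combination.
        sign = 1.0 if confirmed else -1.0
        self.w = [max(1e-6, wi + sign * lr * si)
                  for wi, si in zip(self.w, scores)]
        total = sum(self.w)
        self.w = [wi / total for wi in self.w]
```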


For example, during dialogue, the soundtrack voice may correlate with the mouth movements. Thus, the mouth movements aid little in recognition, and may be virtually ignored, except in the case where a particular person's mouth movements are distinctive, e.g., Jim Nabors (“Gomer Pyle”), and Tim Curry (“Rocky Horror Picture Show”). Thus, the complexity and parallelism in the intermediate recognition stages may actually simplify the later stages by allowing more abstract features to be emphasized in the analysis. Animation poses a special example where audio and image data may be separated, due to the generally non-physiologic relation between the image and soundtrack.


The pattern recognition function of the present invention could be used, in a VCR embodiment according to the present invention, e.g., to edit commercials out of a broadcast, either by recognition of characteristics present in commercials in general, or by pattern recognition of specific commercials in particular, which are often repeated numerous times at various times of the day, and on various broadcast channels. Therefore, the system may acquire an unidentified source signal, which may be, for example, a 30 second segment, and compare this with a database of characteristics of known signals. If the signal does not match any previously known or identified signals, it is then subject to a characterization which may be the same or different than the characterization of the identified signals. The characterizations of the unidentified signal are then compared to characteristics to be recognized. If the unidentified signal meets appropriate criteria, a presumptive generic characterization is made. This characterization is preferably confirmed by a user later, so that a positively identified signal is added to the database of identified signals; however, under certain circumstances no confirmation is required.
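
The following sketch illustrates this flow, with an exact signature match standing in for the robust perceptual matching a real system would require; the duration criterion and all names are illustrative:

```python
import hashlib

class SegmentMatcher:
    """Sketch of comparing an unidentified ~30 second segment with a
    database of known signals. An exact hash of coarsely quantized
    per-second features stands in for a perceptual fingerprint."""
    def __init__(self):
        self.known = {}                        # signature -> label

    @staticmethod
    def signature(features_per_second) -> str:
        coarse = bytes(int(f) % 256 for f in features_per_second)
        return hashlib.sha1(coarse).hexdigest()

    def classify(self, features_per_second):
        sig = self.signature(features_per_second)
        if sig in self.known:
            return self.known[sig]             # previously identified
        # Unknown: make a presumptive generic characterization, to be
        # confirmed by the user later (one feature per second assumed).
        if len(features_per_second) <= 30:
            return "presumptive commercial"
        return None

    def confirm(self, features_per_second, label: str):
        """User confirmation adds a positively identified signal."""
        self.known[self.signature(features_per_second)] = label
```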


Certain media present a recognizable audio or video cue when a commercial break has ended. (E.g. often sports events, such as the Olympic Games, will have theme music or distinctive images). The present device need not respond immediately to such cues, and may incorporate a delay, which would store the information while a decision is being made. In the case of a video tape, the delay may be up to the time between the time of recording and the time of playback. Further, the temporary storage medium may be independent of the pattern recognition system. Thus, a system provided according to the present invention may actually include two independent or semi-independent data streams: the first serving as the desired signal to be stored, retaining visually important information, and the second providing information for storage relating to the pattern recognition system, which retains information important for the recognition process, and may discard this information after the pattern recognition procedure is complete.


A system which provides a plurality of parallel data streams representing the same source signal may be advantageous because it allows a broadcast quality temporary storage, which may be analog in nature, to be separate from the signal processing and pattern recognition stage, which may be of any type, including digital, optical, analog or other known types, which need only retain significant information for the pattern recognition, and therefore may be highly compressed (e.g. lossy compression), and devoid of various types of information which are irrelevant or of little importance to the pattern recognition functions. Further, the temporary storage may employ a different image compression algorithm, e.g. MPEG-4, MPEG-2 or MPEG-1, which is optimized for retention of visually important information, while the recognition system may use a compression system optimized for pattern recognition, which may retain information relevant to the recognition function which is lost in other compression systems, while discarding other information which would be visually important. Advantageously, however, the analysis and content transmission streams are closely related or consolidated, as with MPEG-7 and MPEG-4.


In a particularly advantageous arrangement, the compression algorithm is integral to the recognition function, preparing the data for the pattern matching and characterization, and therefore is optimized for high throughput. According to this embodiment, the initial compression may include redundant or uncompressed information, if necessary in order to achieve real-time or near real-time recognition, and, thus may actually result in a larger intermediate data storage requirement than the instantaneous data presented to the recognition system; however, the term “compression”, in this case, applies to the long term or steady state status of the device, and in a real-time recognition function, the amount of data stored for use in recognition is preferably less than the cumulative amount of data presented, except during the very initial stages of data acquisition and possibly rare peaks.


In the case where a high quality (low loss, e.g. broadcast quality) intermediate storage is employed, after a decision is made as to whether the data should be stored permanently or otherwise further processed or distributed, the data may be transferred to the appropriate system or subsystem of the apparatus. Alternatively, the high quality intermediate storage is retained, and no further processing is performed. In either case, the purpose of this storage is to buffer the source data until the computational latency resolves any decisions which must be made.


According to one aspect of the present invention, the source image may be compressed using the so-called "fractal transform", using the method of Barnsley and Sloan, which is implemented and available as a hardware accelerator in product form from Iterated Systems, Inc., Norcross, Ga., as the Fractal Transform Card (FTC) II, which incorporates eight fractal transform integrated circuit chips, 1 MByte of Random Access Memory (RAM), and an Intel i80960CA-25 microprocessor, and operates in conjunction with P.OEM™ (Iterated Systems, Inc., Norcross, Ga.) software, which operates under the Microsoft Disk Operating System (MS-DOS). FTC-II hardware compression requires approximately 1 second per frame, while software decompression on an Intel 80486-25 based MS-DOS computer, using "Fractal Formatter" software, can be performed at about 30 frames per second, which allows approximately real time viewing. Fractal Video Pro 1.5 is a video codec for Microsoft Windows, allowing software-only playback at 15-30 fps, 70-150 Kbytes/sec. This is a non-symmetrical algorithm, requiring more processing to compress than to decompress the image. The FTC-IV Compression Accelerator Board is presently available.


This fractal compression method potentially allows data compression of upwards of 2000:1, while still maintaining an aesthetically acceptable decompressed image result. Further, since the method emphasizes structural aspects of the image, as opposed to the frequency decomposition used in DCT methods (JPEG, MPEG), elements of the fractal method could be used as a part of the image recognition system. Of course, it should be appreciated that other fractal processing methods are available and may be likewise employed.


Audio data is also compressible by means of fractal transforms. It is noted that the audio compression and image recognition functions cannot be performed on the FTC-II board, and therefore an alternate system must be employed in order to apply the pattern recognition aspects of the present invention. It should also be noted that an even more efficient compression-pattern recognition system could be constructed by using the fractal compression method in conjunction with other compression methods, which may be more efficient under certain circumstances, such as discrete cosine transform (DCT), e.g. JPEG or modified JPEG or wavelet techniques. Fractal compression systems are also available from other sources, e.g. the method of Greenwood et al., Netrologic Inc., San Diego, Calif. See also, Shepard, J. D., “Tapping the Potential of Data Compression”, Military and Aerospace Electronics, May 17, 1993, pp. 25-27.


A preferred method for compressing audio information includes a model-based compression system. This system may retain stored samples, or derive these from the data stream. The system preferably also includes high-level models of the human vocal tract and vocalizations, as well as common musical instruments. This system therefore stores information in a manner which allows faithful reproduction of the audio content and also provides emphasis on the information-conveying structure of the audio signal. Thus, a preferred compression for audio signals retains, in readily available form, information important in a pattern recognition system to determine an abstract information content, as well as to allow pattern matching. Of course, a dual data stream approach may also be applied, and other known compression methods may be employed.


Because of the high complexity of describing a particular signal pattern or group of audio or image patterns, in general, the system will learn by example, with a simple identification of a desired or undesired pattern allowing analysis of the entire pattern, and extraction of characteristics thereof for use in preference determination.


Barnsley and Sloan's method for automatically processing digital image data consisting of image information, disclosed in U.S. Pat. Nos. 5,065,447 and 4,941,193, both expressly incorporated herein by reference, consists of the steps of storing the image data in the data processor, then generating a plurality of uniquely addressable domain blocks from the stored image data, each of the domain blocks representing a different portion of the image information such that all of the image information is contained in at least one of the domain blocks. A plurality of uniquely addressable mapped range blocks corresponding to different subsets of the stored image data are created, from the stored image data, with each of the subsets having a unique address. This step includes the substep of executing, for each of the mapped range blocks, a corresponding procedure upon the one of the subsets of the stored image data which corresponds to the mapped range block. Unique identifiers are then assigned to corresponding ones of the mapped range blocks, each of the identifiers specifying for the corresponding mapped range block a procedure and an address of the corresponding subset of the stored image data. For each of the domain blocks, the one of the mapped range blocks which most closely corresponds according to predetermined criteria is selected. Finally, the image information is represented as a set of the identifiers of the selected mapped range blocks. This method allows a fractal compression of image data. In particular, Drs. Barnsley and Sloan have optimized the match of the domain blocks with the mapping region by minimizing the Hausdorff distance. A decompression of the data proceeds analogously in reverse order, starting with the identifiers and the mapping regions, to produce a facsimile of the original image. This system is highly asymmetric, and requires significantly more processing to compress than to decompress. Barnsley and Sloan do not suggest a method for using the fractal compression to facilitate image recognition, which is a part of the present invention.


Basically, the fractal method proceeds from an understanding that real images are made up of a plurality of like subcomponents, varying in size, orientation, etc. Thus, a complex block of data may be described by reference to the subcomponent, and the size, orientation, etc. of the block. The entire image may thus be described as the composite of the sub-images. This is what is meant by iterated function systems, where first a largest block is identified, and the pattern mapping is repetitively performed to describe the entire image.
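
A toy grayscale encoder along these lines follows, assuming numpy and image dimensions divisible by twice the range-block size; it uses a least-squares brightness/contrast fit per block rather than the Hausdorff criterion mentioned above, and is a sketch, not the patented method:

```python
import numpy as np

def encode_pifs(img: np.ndarray, rb: int = 8):
    """Toy fractal (PIFS) encoder: for each rb x rb range block, find
    the 2rb x 2rb domain block whose downsampled, contrast- and
    brightness-adjusted copy best matches it, and store identifiers."""
    h, w = img.shape
    # Downsample each candidate domain block once (2x2 pixel averaging).
    domains = []
    for dy in range(0, h - 2 * rb + 1, rb):
        for dx in range(0, w - 2 * rb + 1, rb):
            d = img[dy:dy + 2 * rb, dx:dx + 2 * rb]
            domains.append(((dy, dx),
                            d.reshape(rb, 2, rb, 2).mean(axis=(1, 3))))
    codes = []
    for ry in range(0, h, rb):
        for rx in range(0, w, rb):
            r = img[ry:ry + rb, rx:rx + rb]
            best = None
            for addr, d in domains:
                # Least-squares contrast s and brightness o: r ~ s*d + o
                dm, rm = d.mean(), r.mean()
                var = ((d - dm) ** 2).sum()
                s = ((d - dm) * (r - rm)).sum() / var if var else 0.0
                o = rm - s * dm
                err = ((s * d + o - r) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, addr, s, o)
            # Identifier: range address, best domain address, s, o.
            codes.append(((ry, rx), best[1], best[2], best[3]))
    return codes
```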


The Iterated Systems, Inc. FTC-II or FTC-IV board, if applied as a part of a system according to the present invention, is preferably used in conjunction with a frame-grabber board, such as the Matrox (Quebec, Canada) Image-LC board, or a Data Translation DT1451, DT2651, DT2862, DT2867, DT2861 or DT2871, which may perform additional functions, such as preprocessing of the image signal, and may be further used in conjunction with an image processing system, such as the Data Translation DT2878. Of course, it should be understood that any suitable hardware for capturing, processing and storing the input signals, up to and including the state of the art, may be incorporated in a system according to the present invention without exceeding the scope hereof, as the present invention is not dependent on any particular subsystem, and may make use of the latest advances. For example, many modern systems provide appropriate functionality for digital video capture, either uncompressed, mildly compressed, or with a high degree of compression, e.g., MPEG-2.


The Texas Instruments TMS320C80 provides a substantial amount of computing power and is a preferred processor for certain computationally intensive operations involving digital signal processing algorithms. A system employing parallel TMS320C40 processors may also be used. The Intel Pentium series (or related processors from AMD, National Semiconductor, or other companies), DEC/Compaq Alpha, SPARC, or other processors intended for desktop computing may, either individually or in multiprocessor configurations, be used to process signals.


A pattern recognition database system is available from Excalibur Technologies, San Diego, Calif. Further, IBM has had pattern recognition functionality available for its DB/2 database system, and has licensed Excalibur's XRS image retriever recognition software for DB/2. See, Lu, C., “Publish It Electronically”, Byte, September 1993, pp. 94-109. Apple Computer has included search by sketch and search by example functions in PhotoFlash 2.0. See also, Cohen, R., “FullPixelSearch Helps Users Locate Graphics”, MacWeek, Aug. 23, 1993, p. 77.


Image processing hardware and systems are also available from Alacron, Nashua, N.H.; Coreco, St. Laurent, Quebec; Analogic; and others.


A fractal-based system for real-time video compression, satellite broadcasting and decompression is also known from Iterated Systems, Inc. and Entertainment Made Convenient2, Inc. (EMC2). In such a system, since the compressed signal is transmitted, the remote receiving system need not necessarily complete decompression prior to the intelligent pattern recognition function of the present invention. This system also incorporates anti-copy encryption and royalty and accounting documentation systems. It is noted that the EMC2 system does not incorporate the intelligent features of the present invention.


A preferred fractal-based system according to the present invention provides the source data preprocessed to allow easy and efficient extraction of information. While much precharacterization information may be provided explicitly, the preferred system allows other, unindexed information to also be extracted from the signal. Further, the preferred system provides for an accounting system which facilitates pay-per-view functions. Thus, the interface of the present invention could interact with the standard accounting system to allow royalty-based recording or viewing, and possibly implement a serial-copy recording prevention system. Prior art systems require a user to explicitly select a program, rather than allow an intelligent system to assist in selection and programming of the device. The EMC2 system is described in "EMC2 Pushes Video Rental By Satellite", Electronic Engineering Times, Dec. 2, 1991, p. 1, p. 98. See also, Yoshida, J., "The Video-on-demand Demand", Electronic Engineering Times, Mar. 15, 1993, pp. 1, 72.


Fractal techniques may be used to store images on a writable mass storage medium, e.g. CD-ROM compatible. The present system may thus be used to selectively access data on the CD-ROM by analyzing the images, without requiring full decompression of the image data.


Wavelets hold promise for efficiently describing images (i.e., compressing the data) while describing morphological features of the image. However, in contrast to standard wavelet transforms, which are not intended to specifically retain morphological information, a morphology-preserving implementation will likely differ in the selection of the particular wavelet and in the organization of the algorithm. In this case, the transform will likely be more computationally complex and therefore slower, while the actual compression ratios achieved may be greater.


Thus, one embodiment of the device according to the present invention may incorporate a memory for storing a program, before being transferred to a permanent storage facility, such as tape. Such a memory may include a hard disk drive, magnetic tape loop, a rewritable optical disk drive, or semiconductor memories, including such devices as wafer scale memory devices. This is shown diagrammatically as the intermediate storage 2210 of FIG. 22. The capacity of such a device may be effectively increased through the use of image data compression, which may be proprietary or a standard format, i.e. MPEG-1 and MPEG-2 (Motion Picture Experts Group standards employing DCT encoding of frames and interframe coding), MPEG-4 (Motion Picture Experts Group standard employing DCT encoding of frames and interframe coding, as well as model-based encoding methods), JPEG (Joint Photographic Experts Group standard employing DCT encoding of frames), Px64 (Comité Consultatif International Télégraphique et Téléphonique (International Telegraph and Telephone Consultative Committee, CCITT) standard H.261, a videoconferencing transmission standard), DVI (Digital Video Interactive), CDI (Compact Disk Interactive), etc.


Standard devices are available for processing such signals, available from 8×8, Inc., C-Cube, Royal Philips Electronics (TriMedia), and other companies. Image processing algorithms may also be executed on general purpose microprocessor devices.


Older designs include the Integrated Information Technology, Inc. (IIT, now 8×8, Inc.) Vision Processor (VP) chip, Integrated Information Technology Inc., Santa Clara, Calif., the C-Cube CL550B (JPEG) and CL950 (MPEG decoding), SGS-Thomson STI3220, STV3200, STV3208 (JPEG, MPEG, Px64), LSI Logic L64735, L64745 and L64765 (JPEG) and Px64 chip sets, and the Intel Corp. i750B DVI processor sets (82750PB, 82750 DB). Various alternative image processing chips have been available as single chips and chip sets; in board level products, such as the Super Motion Compression and Super Still-Frame Compression by New Media Graphics of Billerica, Mass., for the Personal Computer-Advanced Technology (PC-AT, an IBM created computer standard) bus; Optibase, Canoga Park, Calif. (Motorola Digital Signal Processor (DSP) with dedicated processor for MPEG); NuVista+ from Truevision (Macintosh video capture and output); New Video Corp. (Venice, Calif.) EyeQ Delivery board for Macintosh NuBus systems (DVI); Intel Corp. ActionMedia II boards for Microsoft Windows and IBM OS/2 in Industry Standard Adapter (ISA, the IBM-PC bus standard for 8 (PC) or 16 bit (PC-AT) slots); Micro Channel Architecture (MCA) (e.g., Digital Video Interactive (DVI), Presentation Level Video (PLV) 2.0, Real Time Video (RTV) 2.0) based machines; and as complete products, such as MediaStation by VideoLogic.


Programmable devices, including the Texas Instruments TMS320C80 MVP (multimedia video processor) may be used to process information according to standard methods, and further provide the advantage of customizability of the methods employed. Various available DSP chips, exemplary board level signal processing products and available software are described in more detail in “32-bit Floating-Point DSP Processors”, EDN, Nov. 7, 1991, pp. 127-146. The TMS320C80 includes four DSP elements and a RISC processor with a floating point unit.


It is noted that the present interface does not depend on a particular compression format or storage medium, so that any suitable format may be used. The following references describe various video compression hardware: Kim, Y., “Chips Deliver Multimedia”, Byte, December 1991, pp. 163-173; and Donovan, J., “Intel/IBM's Audio-Video Kernel”, Byte, December, 1991, pp. 177-202.


It should also be noted that the data compression algorithm applied for storage of the received data may be lossless or lossy, depending on the application. Various different methods and paradigms may be used. For example, DCT (discrete cosine transform) based methods, wavelets, fractals, and other known methods may be used. These may be implemented by various known means. A compressed image may also be advantageously used in conjunction with the image recognition system of the present invention, as described above. In such a case, the compression system would retain the information most important in the recognition function, and truncate the unimportant information.
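
For example, a DCT-based scheme might simply truncate the high-order coefficients, retaining the low-order coefficients most useful to a coarse recognition stage; the sketch below (orthonormal DCT-II, illustrative block size and cutoff) is one such possibility, not a claimed implementation:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def compress_for_recognition(block: np.ndarray, keep: int = 4) -> np.ndarray:
    """Keep only the low-order DCT coefficients of a square block: the
    coarse structure useful to recognition survives, while fine detail
    (visually relevant, but less discriminative here) is truncated."""
    n = block.shape[0]
    c = dct_matrix(n)
    coeffs = c @ block @ c.T          # 2D DCT of the block
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0          # top-left keep x keep coefficients
    return coeffs * mask
```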


A further method of performing pattern recognition, especially of two dimensional patterns, is optical pattern recognition, where an image is correlated with a set of known image patterns represented on a hologram, and the product is a pattern according to a correlation between the input pattern and the provided known patterns. Because this is an optical technique, it is performed nearly instantaneously, and the output information can be reentered into an electronic digital computer through optical transducers known in the art. Such a system is described in Casasent, D., Photonics Spectra, November 1991, pp. 134-140. See also references cited therein.


These optical recognition systems are best suited to applications where an uncharacterized input signal frame is to be compared to a finite number of visually different comparison frames (i.e., at least one, with an upper limit generally defined by the physical limitations of the optical storage media and the system for interfacing to the storage media), and where an optical correlation will provide useful information. Thus, if a user wished to detect one of, e.g., "David Letterman", "Jay Leno", or "Ted Koppel", a number of different planar views, or holograms in differing poses, of these persons would be formed as a holographic correlation matrix, which could be superimposed as a multiple exposure, stacked in the width dimension, or placed in a planar matrix, side by side. The detection system produces, from the uncharacterized input image and the holographic matrix, a wavefront pattern that is detectable by photonic sensors.


It is preferred that, if multiple holographic images of a particular characterization are employed, they each produce a more similar resulting wavefront pattern than the holographic images of other characterizations, in order to enhance detection efficiency. The optical pattern recognition method is limited in that a holographic image must be prepared of the desired pattern to be detected, and that optically similar images might actually be of a different image, if the differences are subtle. However, this method may be used in conjunction with electronic digital pattern recognition methods, to obtain the advantages of both. Methods are also known to electronically write an image to a holographic storage medium, thereby facilitating its use in a general-purpose image recognition system. Of course, the system may also be used to identify talk show guests, such as "Richard Gere" or "Cindy Crawford", or these same individuals in other contexts.


If image compression is used, once an image is compressed, it need not be decompressed and returned to pixel, NTSC or another standard transmission format for storage on tape, and thus the compressed image information may be stored in the same format as is present in the temporary storage medium. Thus, the block labeled intermediate processing 2211 of FIG. 22 shows that the intermediate storage need not retain the information as received from the frame buffer 2202, and in fact, may prepare it for the feature extractor 2204. In addition, the storage medium itself need not be normal videotape (S-VHS, VHS, Beta, 8 mm, Hi-8), and may be an adapted analog storage technique or a digital storage technique. Various magneto-optical recording techniques are known, which can store between 128 MB (3½″) and around 5 GB (11″), uncompressed, which might be suitable for storing compressed digital or analog information. Multilayer CD-ROM and short wavelength (e.g., blue) laser systems allow storage densities of about 3.5 to 10 Gbytes per disk, allowing storage of over two hours of MPEG-2 encoded video.


It is also noted that the present technology could also be applied to any sort of mass storage, such as for a personal computer. In such a case, a characteristic of the computer file, which is analogous to the broadcast program in temporary storage of a VCR, is classified according to some criteria, which may be explicit, such as an explicit header or identifying information, or implicit, such as a document in letter format, or a memorandum, as well as by words and word proximity. In particular, such a recognition system could differentiate various clients or authors based on the content of the document, and these could be stored in different manners. The text analysis system of a text-based computer storage system is analogous to the program classification system of the VCR embodiment of the present invention. However, there is a further analogy, in that the VCR could incorporate optical character recognition of text displayed in the program material, employ voice recognition, or directly receive text information as a part of a closed caption or videotext system. Thus, the VCR device according to the present invention could recognize and classify programs based on textual cues, and make decisions based on these cues. This might also provide a simple method of discriminating program material; for example, if a commercial does not include closed caption or Second Audio Program (SAP) information, while the desired program does, or vice versa, then a commercial could be discriminated from a program with very little computational expenditure.
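
The caption/SAP discrimination mentioned above reduces to a nearly free comparison, as in the following sketch (field names hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    has_captions: bool       # closed caption data present
    has_sap: bool            # Second Audio Program present

def likely_commercial(seg: Segment, program: Segment) -> bool:
    """A mismatch in closed caption or SAP presence between a segment
    and the desired program flags a probable commercial at very little
    computational expenditure."""
    return (seg.has_captions != program.has_captions
            or seg.has_sap != program.has_sap)
```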


EXAMPLE 7
VCR Interface

A particular VCR interface system according to one aspect of the present invention includes an internal clock, memory for four programs, and the capability to display a graphical color interface. By providing the user with the aforementioned features, this design is a unique implementation for an instrument to be used for programming an event driven controller via an interactive display. All information that the user needs is displayed on the screen, to avoid or minimize unnecessary searching for information. This information includes the current date and current time.


A simulation of the AKAI Inc. VS303U VCR (on-screen programming) and the interface of the present invention were tested to evaluate users' performance. The AKAI interface of the prior art, hereinafter referred to as the prior art interface, was chosen because users made the fewest errors while using this machine, and no user quit while programming, as compared to three other VCRs tested: a Panasonic (made by Matsushita, Inc.) PV4962 (Bar Coder), an RCA brand (formerly Radio Corporation of America, Inc.) VKP950 (on-screen programming), and a Panasonic brand (made by Matsushita, Inc.) PV4700 (Display Panel).


The present embodiment was constructed and tested using HyperPAD™, a rapid prototyping package for an IBM-PC compatible computer. It is, of course, obvious that the present embodiment could be incorporated in a commercial VCR machine by those skilled in the art, or be implemented on many types of general purpose computers with output screens which allow on-screen feedback for the programming operation. Further, the system of the present embodiment can include a remote-control device which communicates with a VCR through an infrared beam or beams, and can thus exert control over an infrared remote controlled VCR, or translate the programming information and communicate through an infrared remote control, using the standard type infrared transmitter.


An IBM PC-AT compatible (MS-DOS, Intel 80286-10 MHz) computer was used to test the two simulations. In order to simulate the use of a remote control device in programming the VCR, an infrared device made by NView™ was attached to the computer. This device came with a keyboard that was used to “teach” a Memorex™ Universal Remote so that the desired actions could be obtained. By using a universal remote, the computer could be controlled by using a remote control.


The present embodiment incorporates a mouse input device. It is understood that a small trackball with a button for selection, mounted on a remote control may also be employed, and may be preferable in certain circumstances. However, a computer mouse is easily available, and the mouse and trackball data are essentially similar for the type of task implemented by the user, with trackball performance being slightly faster. For daily use on a VCR however, a trackball would be a more preferable input device because it does not require a hard, flat surface, which is not always available to a user when programming a VCR, such as in the situation where a person is watching television while sitting in a chair or sofa.


A Genius™ Mouse was used as the input device in the prototype of the interface of the present invention. With the mouse, the user could view all of the choices at once on the display screen, and then make a selection from the items on the screen by moving the cursor and then pressing the left mouse button.


The interface of the present example focuses on attending to the user's needs, and the interface must be modified for each application. By reducing searching, learning times, and entry times, the mental load is also minimized. Some tradeoffs were necessary, balancing the subjective and objective data gathered. Because of the difficulty in optimizing a single interface design for all levels of users, a menu system was used in an attempt to satisfy all of these user types.


The interface of the present example reduced the number of incorrect recordings by 50%. The severity of the errors is unimportant here, because one wrong entry will cause an irretrievable mistake, and the user will not record the intended program. One study reported that almost every present-day owner of a VCR can report faulty inputs which led to missing the intended program.


EXAMPLE 8
Programmable Device Interface

It is also noted that the interface of the present invention need not be limited to audio-visual and multimedia applications, as similar issues arise in various programmable controller environments. Such issues are disclosed in Carlson, Mark A., “Design Goals for an Effective User Interface”, Electro/82 Proceedings, 3/1/1-3/1/4; Kreifeldt, John, “Human Factors Approach to Medical Instrument Design”, Electro/82 Proceedings, 3/3/1-3/3/6; Wilke, William, “Easy Operation of Instruments by Both Man and Machine”, Electro/82 Proceedings, 3/2/1-3/2/4; Green, Lee, “Thermo Tech: Here's a common sense guide to the new thinking thermostats”, Popular Mechanics, October 1985, 155-159; Moore, T. G. and Dartnall, “Human Factors of a Microelectronic Product: The Central Heating Timer/Programmer”, Applied Ergonomics, 1983, Vol. 13, No. 1, 15-23; and “The Smart House: Human Factors in Home Automation”, Human Factors in Practice, December 1990, 1-36.


This generalized system is shown in FIG. 23, in which the sensor array 2301 interfaces with a microprocessor 2302 with a serial data port 2302a, which transmits sensor data to a control 2303. The control 2303 further interfaces or includes a data pattern recognition system 2304 and an interface and programming console 2305 according to the present invention, using the aforementioned intelligent features and adaptive pattern recognition techniques. The control 2303 controls the plant 2306, which includes all the controlled actuators, etc.
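
In code, the control loop of FIG. 23 might be sketched as follows; all component interfaces are hypothetical:

```python
class ProgrammableController:
    """Sketch of FIG. 23: sensor array -> control with pattern
    recognition -> plant. Component interfaces are hypothetical."""
    def __init__(self, sensors, recognizer, plant, console):
        self.sensors = sensors          # cf. sensor array 2301
        self.recognizer = recognizer    # cf. recognition system 2304
        self.plant = plant              # cf. plant 2306 (actuators)
        self.console = console          # cf. interface/console 2305

    def step(self):
        readings = [s.read() for s in self.sensors]   # via port 2302a
        situation = self.recognizer.classify(readings)
        action = self.console.policy(situation)       # possibly user-tuned
        self.plant.actuate(action)
```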


EXAMPLE 9
Adaptive Graphic Interface

A "smart screen" aspect according to the present invention is further explored in the present example. This aspect of the present invention allows the interface to anticipate or predict the intent of the user, and to provide, as a default user choice, the most likely action to be taken by the user of the programmable device, which may be either accepted or rejected by the user, without inordinate delay to the user. The intelligent selection feature may also automatically choose an option and execute the selected option, without further intervention, in cases where little or no harm will result. Examples of such harm include a loss of data, a substantial waste of the user's time and an inappropriate unauthorized allocation of computational resources.


When a user regularly applies the VCR device, for example, to record a particular television show which appears weekly at a given time, on a given channel, such an action could be immediately presented to the user as a first option, without forcing him to explicitly program the entire sequence. Likewise, if the user has already entered such a command, the presented choices could include a second most likely selection, as well as the possibility of canceling the previously entered command.


Further, if an entire television programming guide for a week or month is available as a database, the interface could actively determine whether the desired show is preempted, a repeat (e.g., one which has been previously recorded by the system), changed in time or programming slot, etc. Thus, the interface could present information to the user, of which he might not be aware, and/or predict an action based on that information. Such a device could, if set in a mode of operation that allows such, automatically execute a sequence of instructions based on a predicted course of action. Thus, if a user is to be absent for a period, he could set the machine to automatically record a show, even if the recording parameters are not known with precision at the time of setting by the user. Of course, this particular embodiment depends on the availability of a database of current broadcast schedules, however, such a database may generally be available, e.g., in an on-line database.


Such an on-line database system of known type may be used and need not be described in detail herein. Alternately, a printed schedule of broadcasts may be scanned into a computer and the printed information deciphered (e.g., OCR) to gain access to a database. Other methods may also be used to access scheduling information, e.g. access channels on cable systems, as well as other broadcast information identifying future and imminent programming. Together, these methods allow semiautonomous operation, guided by programming preferences rather than explicit programs, where such explicit instruction is absent.
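
A sketch of such a schedule check follows; the guide and history interfaces, and all field names, are illustrative assumptions rather than a defined database schema:

```python
from datetime import datetime

def resolve_recording(request, guide, history):
    """Sketch of the schedule check described above. 'request' is the
    user's weekly recording preference; 'guide' is an on-line program
    database; 'history' holds previously recorded episode identifiers."""
    listing = guide.lookup(title=request.title, week_of=datetime.now())
    if listing is None:
        return ("preempted", None)          # inform the user
    if listing.episode_id in history:
        return ("repeat", None)             # previously recorded; skip
    if (listing.start != request.usual_start
            or listing.channel != request.usual_channel):
        return ("rescheduled", listing)     # record at the new slot
    return ("as_scheduled", listing)
```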


The smart screens according to the present invention may be implemented as follows. The controller may be, for example, an Apple Power Macintosh 8100/110 AV computer, operating under the Macintosh System 7.5 operating system. The HyperCard™ 2.3 software may be used to implement the screen interface, which incorporates the above-described features, and which is generally compatible with the HyperPAD software described above. HyperCard™ is mentioned due to its capabilities to reference external programs, thus allowing interfacing to various software and hardware devices. A more global scripting language, such as Frontier by UserLand Software Inc., may also be used, especially where low level hardware control of interfaced devices, such as a VCR, multimedia adapter, or the like is desired. Apple AppleScript may also be used. The QuickTime format may be used to store and recall data; however, many acceptable formats exist. The input device is an Apple Desktop Bus (ADB) mouse (Apple Computer Inc., Cupertino, Calif.), and the output display is an 8 bit or 24 bit graphics color adapter connected to, e.g., a 14″ color monitor. In addition, various parameters concerning the use of the interface are stored in the computer's memory, and a non-volatile mass storage device, such as a hard disk drive, or Electrically Erasable Programmable Read Only Memory (EEPROM) or Erasable Programmable Read Only Memory (EPROM), as well as battery backed Random Access Memory (RAM), could also be used.


A more modern implementation might employ, for example, a single or dual Pentium II 450 MHz workstation, running Microsoft Windows NT 4.0 or Windows 2000 (when available). The hardware is a matter of choice, including memory, monitor, pointing device, graphic display card, video capture card, mass storage options, and the like. Preferably, a hardware codec is provided, for example a Media 100, Inc. Broadway device. The software may be, for example, Microsoft Visual Basic 5.0 or another suitable development language.


Intel Pentium-based platforms may also be used, preferably in IBM-PC compatible implementations. Intel 80860 and/or Intel 80960 processor platforms may also be used.


Alternatively, other Apple Power PC, Macintosh (MC680X0 series) or IBM Power PC implementations may be used, providing the advantage of increased processing power over Motorola 680x0 derivatives. The specific Power PC employed may be any version, including desktop system versions available from Apple and IBM and embedded versions from IBM and Motorola. These Power PC processors may also be provided in a parallel processing implementation. Further, custom implementations of Power PC hardware optimized for the relevant computational tasks may be employed.


Of course, other systems, including DEC Alpha and HP 9000 systems, may also be employed, as well as SPARC, MIPS, and other available RISC systems. While RISC systems, possibly supplemented with DSP hardware, are presently preferred because of their efficiency in executing the pattern recognition tasks, Complex Instruction Set Computer (CISC), hybrid and other known processing systems may be employed. The Texas Instruments TMS320C80 combines a Reduced Instruction Set Computer (RISC) processor, Arithmetic Logic Unit (ALU) and four DSP processors on a single chip, and is therefore a preferred processor for implementing various aspects of the system, especially mathematical processing including DCT and correlations.


According to the present invention, the interface may perform comparatively simple tasks, such as standard graphic user interface implementation with optimized presentation of screen options, or include more complex functionality, such as pattern recognition, pattern matching and complex user preference correlations. Therefore, hardware requirements will range from basic 68040, 80486, Pentium, Power PC, MIPS, SPARC, Digital Equipment Corp. (DEC, now Compaq Computer Corp.) Alpha, or other microprocessors which are used to perform visual or audio interface functions, to more specialized processors for implementation of complex algorithms, including mathematical, neural network, fuzzy logic, and iterated function systems (fractals).


It should be noted that, while many aspects of the intelligent interface according to the present invention do not require extremely high levels of processing power, and therefore may be provided with inexpensive and commonly available computing hardware, other aspects involve complex pattern recognition and advantageously employ powerful processors to achieve a short processing latency. Both simple and complex interface systems, however, are included within the scope of the present invention. Processing may be distributed in different fashions, so that complex functionality may be implemented with relatively simple local hardware, with the substantial processing required for a high level of functionality performed centrally, on behalf of a large number of users.


From the stored information regarding the prior use of the interface by the user, including prior sessions and the immediate session, and a current state of the machine (including a received data stream and information relating to the data stream previously stored), a predicted course of action or operation may be realized. This predicted operation is, in the context of the current user interface state, the most probable next action to be taken by the user.


The predicted operation is based on: the identity of the user, if more than one user operates the interface and machine; the information already entered into the interface during the present programming session; the presently available choices for data entry; settings for the use of the machine, which may be present as a result of a “setup” operation; settings saved during a prior session; and a database of programming choices. In the case of a HyperCard script, the interface software calls another program which has access to the necessary data in the memory, as well as access to any remote database which may be necessary for implementation of the function. Using a predictive technology, such as Boolean logic, fuzzy logic, neural network logic, or other type of artificial intelligence, a most probable choice may be presented to the user for his approval, or another alternative choice may be selected. Further, a number of most probable choices may be presented simultaneously or in sequence, in order to improve the probability that the user will be immediately or quickly presented with an acceptable choice. If multiple choices are presented, and there is limited room on the display, two or more similar choices may be merged into a single menu selection, which may be resolved in a secondary menu screen, e.g., a submenu or dialog box.
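
By way of a non-limiting illustration only, the following Python sketch shows one way such a prediction-and-merge step might be realized. The data layout (a history of (context, action) pairs) and the names predict_choices and merge_similar are assumptions for exposition and are not taken from the disclosure.

    from collections import Counter

    def predict_choices(history, context, k=3):
        """Rank candidate next actions by how often they followed the
        same interface context in past sessions."""
        counts = Counter(action for ctx, action in history if ctx == context)
        return [action for action, _ in counts.most_common(k)]

    def merge_similar(choices, max_slots):
        """If the display cannot fit every predicted choice, group the
        overflow under one entry, to be resolved on a secondary menu
        screen or dialog box."""
        if len(choices) <= max_slots:
            return choices
        return choices[:max_slots - 1] + [("More...", choices[max_slots - 1:])]

    history = [("evening", "record NBC news"), ("evening", "record NBC news"),
               ("evening", "record Fox 9pm"), ("evening", "record ABC news")]
    print(merge_similar(predict_choices(history, "evening"), 2))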



FIG. 24 shows a system for correlating a user's preferences with a prospective or real-time occurrence of an event. The input device 2401, which is a remote control with a pointing device, such as a trackball, provides the user's input to the control 2402. The program is stored in a program memory 2403, after it is entered. The control 2402 controls a plant 2404, which is a VCR. The control also controls an on-screen programming interface 2405, through which the user interactively enters the program information. Each program entry of the user is submitted to the user history database and preferences module 2406, which may also receive explicit preference information, input by the user through the input device 2401. The prospective and real time event characterization unit 2407 uses any and/or all relevant information available in order to determine the character of a signal input, which is a video signal, from the signal receiver 2408. A signal analyzer 2409 provides a preliminary analysis and characterization of the signal, which is input to the prospective and real time event characterization unit 2407. The prospective and real time event characterization unit 2407 also interacts and receives an input from a telecommunication module 2410, which in turn interacts and receives information from an on-line database 2411. A user preference and event correlator 2412 produces an output relating to a relatedness of an event or prospective event and a user preference. In the event of a high correlation or relatedness, the control 2402 determines that the event or prospective event is a likely or most likely predicted action. The prospective event discussed above refers to a scheduled event, which is likely to occur in the future. The characterization unit also has a local database 2413 for storing schedule information and the like.


In the particular context of a videotape, one consideration of the user is the amount of time remaining on the tape. Generally, users wish to optimally fill a tape without splitting a program, although the optimization and non-splitting parameters may vary between users. Therefore, the length of the tape and the amount and character of other items on the tape are also factors to be employed in determining a most desired result. With respect to this issue, the interface may maintain a library function which allows the identification of a partially filled tape for recording under given circumstances. The interface may also optimize a playback by selecting a tape containing a desired sequence of materials.
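A minimal sketch of such a tape-filling selection follows, assuming the pending recordings are known as (name, minutes) pairs; the exhaustive search shown is illustrative for a handful of candidates and is not a required implementation.

    from itertools import combinations

    def best_fill(remaining, programs):
        """Choose a subset of candidate programs that fills the
        remaining tape as fully as possible without splitting any
        single program across tapes."""
        best = []
        for r in range(1, len(programs) + 1):
            for combo in combinations(programs, r):
                total = sum(m for _, m in combo)
                if total <= remaining and total > sum(m for _, m in best):
                    best = list(combo)
        return best

    # A 120-minute remainder is best filled by the 110-minute movie.
    print(best_fill(120, [("news", 30), ("movie", 110), ("sitcom", 25)]))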


The intelligent interface may also be used as a part of an educational system, due to its ability to adapt to the level of the user and dynamically alter an information presentation based on the “user level”, i.e. the training status of the user, and its ability to determine areas of high and low performance. Likewise, the intelligent interface according to the present invention may also be used in a business environment for use by trained individuals who require relatively static software interface design for consistence and “touch typing” with memorized keystroke or mouse click sequences. In this case, the intelligent functionality is segregated into a separate user interface structure, such as an additional “pull down menu” or other available screen location. While the interface always monitors user performance, the impact of the analysis of the user is selectively applied. User analysis may also be used for performance evaluation according to an objective criteria, based on continuous monitoring. In a network environment, user profile and evaluation may be made portable, stored so as to be accessible from any networked device the user may interact with, from office computers to thermostats to photocopying machines to coffee machines.


EXAMPLE 10
Intelligent Adaptive VCR Interface

A user interacting with the device intends to record a particular program, “Married With Children” (Fox, Sunday, 9:00 p.m., etc.) on its every occurrence. This intent, however, is to provide a full library of episodes, and not to duplicate episodes. The particular program is subject to the occurrence of reruns, syndicated distribution, time shifting of performance, preview scenes and advertisements. Further, various actors appearing in the particular program also appear in other capacities and roles on television. Therefore, after this intent is elucidated, the interface scans available directories of programming to determine when “Married With Children” will be broadcast. In addition, to the extent possible, all channels may be monitored, in the event that the directories are erroneous or incomplete.


It is noted that the interface may be quite effective if it is used for a number of applications, such as television, radio, desktop computer, and even kitchen and HVAC systems. For example, preferences for processing MTV or other music video information may be directly relevant to the processing of radio or other music reproduction devices, and vice versa.


At some point in the process, preferably prior to substantive programming input, the interface performs a self-diagnostic check to determine whether the machine is set up and operating correctly. This would include a determination of whether the clock has been set and has thereafter operated continuously. Of course, the clock could have, in practice, a battery to minimize the occurrence of problems relating to clock function. The interface would then, if the clock is not properly set, and if there is no telecommunication or other external means for automatically determining the exact time, present the user with a menu selection to set the proper time. Of course, if the correct time is available to the apparatus in some form, this could be automatically obtained, and the internal clock updated, without intervention. These same sources may be used to verify the accuracy of an internal clock. Further, if a reliable external clock system is available, an internal clock may be dispensed with or ignored. Time may also be inferred based on the regular schedules of broadcasts, e.g., the 11:00 p.m. news begins at 11:00 p.m. If the user does not have access to a source of the exact time, the step of correcting the time may be deferred, although at some point the user should be reminded to verify the clock information. The user may thus be able to override a machine-generated request or attempt to correct the time data.


If the machine has access to an external source of the exact time, it would then preferably access this source first. Such sources of exact time include a telephone connection to a voice line which repeats the time. The computer would then perform a speech recognition algorithm which would be used to determine the time. Such a speech recognition algorithm could also be used as a part of the user interface for other purposes, i.e. a speech recognition system is not supplied solely for obtaining time information. Alternatively, a modem or communication device could be used to obtain the time in digitally coded form over a network, which would alleviate the need for speech recognition capabilities for this function. An on-line connection could also be used in order to obtain information concerning television scheduling.


A further method for obtaining accurate time information is to access a video signal which contains the desired time information. For example, many cable broadcasting systems have a channel which continuously broadcasts the time in image form. The interface tunes this channel, and acquires a representation of the screen image, thereafter performing a character recognition algorithm to capture the time information. This character recognition algorithm could also be used to obtain or capture information regarding programming schedules, stock prices, and other text information which may appear on certain cable broadcast channels.


Thus, the interface, in obtaining necessary information, employs such available data source access methods as speech recognition, character recognition, digital telecommunication means, radio wave reception and interpretation, and links to other devices.
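One possible arrangement of this tiered fallback is sketched below in Python; the source functions are hypothetical stubs standing in for the network, character recognition and speech recognition paths described above, and the names are illustrative only.

    import time

    def network_time():              # hypothetical: digitally coded time over a network
        return time.time()

    def ocr_time_channel():          # hypothetical: character recognition of a time channel
        return None                  # e.g., channel unavailable

    def acquire_time(sources, prompt):
        """Try each time source in order of reliability; each returns a
        timestamp or None. Fall back to prompting the user, who may
        defer the correction until an exact time is available."""
        for name, source in sources:
            t = source()
            if t is not None:
                return name, t
        return "user", prompt()

    print(acquire_time([("network", network_time), ("ocr", ocr_time_channel)],
                       prompt=lambda: None))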


In interacting with the apparatus, the user first identifies himself/herself to the machine, which can occur in a number of ways. This step may be dispensed with, or at least trivialized, if only one user regularly interacts with the apparatus. Otherwise, such identification may be important in order to maintain the integrity of the user profiles and predictive aspects of the interface. A radio frequency transponder (RF-ID) or infrared transponder (IR-ID) system may automatically determine the user based on a device, which may be concealed in a piece of jewelry or wristwatch. The user may also be identified by voice pattern recognition, speaker independent voice recognition, video pattern recognition, fingerprint, retinal scan, or other biometric evaluation. An explicit entry of the user identity may also be employed, wherein the user types his/her name on a keyboard or selects the name or unique identifier from a “pick-list”. The interface, upon identifying the user, retrieves information regarding the user, which may include past history of use, user preferences, user sophistication, and patterns of variation of the user, which may be based on, e.g., time, mood, weather, lighting, biometric factors or other factors.


Thus, after completing system diagnostics, including the time-check function referred to above, the system next determines or predicts the desired function of the user. In this regard, if more than one user has access to the system, the user identifies himself to the interface, in a user identification step 1701 or an analogous action, which may be a coded entry, or a selection from the menu. If the interface has voice recognition capability, then the user may be recognized by his voice pattern, or merely by stating his name. The interface then accesses the memory for a profile of the past use of the machine by the user, which may include the entire prior history, relevant abstracts of the history, or derived user preferences, as shown in the personalized startup based on user profile step 1702, which information is also stored and used in the past user history determining element 2107. These choices differ in the amount of storage necessary in order to retain the desired information.


Thus, if the user has only used the VCR to record, e.g., the National Broadcasting Company (NBC) 11 o'clock news, i.e., record all days from 11:00 p.m. to 11:30 p.m. on NBC, in the past, the most likely current predicted choice would be the NBC 11 o'clock news. If the interface were to present a number of choices, having lower probability, then it interprets the recording history to be “news” based on a database of broadcast information. Therefore, a prediction of lower probability would be the American Broadcasting Company (ABC) or Columbia Broadcasting System (CBS) news at, e.g., 11:00 p.m., and the NBC news at, e.g., 5:00 p.m. In a cable television system, there may be a number of NBC affiliated news alternatives, so that these alternatives may be investigated first before other networks or the like are presented as likely choices. In addition, where a video feed is unavailable, a text feed from the internet or an on-line service may be acquired as a probable alternative.
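The ranking just described might be sketched as follows, assuming schedule and history entries carry a channel, time, title and an inferred category; the scoring (exact repeats first, then category matches) is an illustrative assumption, not a prescribed algorithm.

    from collections import Counter

    def ranked_predictions(history, schedule):
        """Rank schedule entries: exact repeats of past recordings
        first, then entries sharing a category the user records often."""
        exact = Counter((e["channel"], e["time"], e["title"]) for e in history)
        category = Counter(e["category"] for e in history)
        def score(entry):
            key = (entry["channel"], entry["time"], entry["title"])
            return (exact[key], category[entry["category"]])
        return sorted(schedule, key=score, reverse=True)

    history = [{"channel": "NBC", "time": "23:00", "title": "News", "category": "news"}]
    schedule = [
        {"channel": "Fox", "time": "21:00", "title": "Sitcom", "category": "comedy"},
        {"channel": "ABC", "time": "23:00", "title": "News", "category": "news"},
        {"channel": "NBC", "time": "23:00", "title": "News", "category": "news"},
    ]
    print([e["channel"] for e in ranked_predictions(history, schedule)])
    # ['NBC', 'ABC', 'Fox']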


Thus, a number of likely choices, based on intelligently determined alternatives, as well as adaptation based on determined user preferences, are initially presented to the user, along with a menu selection to allow rejection of these predicted choices. In this case, the user selects the “reject” selection, and the system presents the user with a next predicted desired menu choice. Since the user history, in this case, does not provide for another choice of particularly high probability, the user is prompted to explicitly choose the program sequence by day, time, channel, and duration. The user then enters the starting time for recording according to the methods described above. The interface then searches its databases regarding the user and broadcast listings to present a most likely choice given that parameter, as well as all available alternatives. In this case, the user history is of little help, and is not useful for making a prediction. In other cases, the system uses its intelligence to “fill in the blanks”, which could, of course, be rejected by the user if these are inaccurate or inappropriate. The most likely choices are then those programs that begin at the selected time. If the user had input the channel or network, instead of starting time, then the presented choices would be the broadcast schedule of the channel, e.g. channel 5 or Fox, for the selected day.


The user then selects one of the available choices, which completes the programming sequence. If no database of broadcasts is available, then the user explicitly defines all parameters of the broadcast. When the programming is completed, the interface then updates its user database and prompts the user to set the VCR to record by, e.g., inserting a blank or recordable tape.


If the predicted desire of the user is of no help, or the user seeks to explicitly program the system, a manual program entry system is available. Where there is no useful prediction of the user, the interface may request a training session, which may be a general inquiry, or specifically directed to immediately forthcoming broadcasts, or both.


In this case, after a failure to predict a desired program, the user then proceeds to explicitly program the VCR interface to record “Married with Children” on Fox at 9:00 p.m. on Sunday evening. If a database is available, it might also show that “Married with Children” is also syndicated in re-runs, and therefore various episodes may be available on other channels at other times. Thus, during the subsequent session, both the premier showing and re-run of “Married With Children” would be available predicted choices, along with the 11 o'clock News on NBC.


The user having demonstrated a preference for “Married with Children”, the interface then characterizes the program. This includes, for example, a characterization of the soundtrack, the background, foreground, actors and actresses present, credits, etc. The interface then attempts to correlate the features present in the reference selection with other available selections. This comparison may be with a preformed database, providing immediate results, or prospectively, after entry of the reference selection. Of course, a number of correlation functions may proceed simultaneously, and various choices may be merged to form a compound reference selection, with any ambiguity therein to be resolved later. Further, as various “episodes” of the reference selection occur, the system appends and integrates the most recent occurrence with the stored reference information, thus updating the reference database.


When an occurrence is identified, it is immediately buffered, until such time as the particular episode may be compared against previously stored episodes. If two identical broadcasts occur simultaneously, one may be selected, i.e., the one with the best reception. When the episode is identified, if it is new, the buffered broadcast information is permanently stored; if it is previously stored, the buffer is flushed and the occurrence is further ignored as a “hit”. Since the apparatus is now not responding to a direct request, it may then perform various housekeeping functions, including updating databases of broadcasts and the like. This is because, although the apparatus is preferably highly trained upon manufacture, a large number of new broadcasts are always being created and presented, so that the apparatus must constantly maintain its “awareness” of data types and trends, as well as update its predicted preferences of the user(s).
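A simplified sketch of this buffer-and-deduplicate logic follows; a byte hash stands in here for the perceptual episode signature a real system would require (identical broadcasts rarely match byte-for-byte), and the function names are assumptions for exposition.

    import hashlib

    def handle_occurrence(buffered_frames, episode_library):
        """Fingerprint a buffered occurrence and keep it only if the
        fingerprint is new; a duplicate flushes the buffer."""
        fingerprint = hashlib.sha1(b"".join(buffered_frames)).hexdigest()
        if fingerprint in episode_library:
            return None                  # previously stored: flush, ignore
        episode_library.add(fingerprint)
        return buffered_frames           # new episode: commit to storage

    library = set()
    print(handle_occurrence([b"frame1", b"frame2"], library) is not None)  # True
    print(handle_occurrence([b"frame1", b"frame2"], library) is not None)  # False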


Based on input from the user, other programming including the same actors and/or actresses may be processed, e.g., recorded. For example, Katey Sagal periodically appears on “Jay Leno” as a musical guest, and therefore may be recorded in these appearances.


EXAMPLE 11
Intelligent Adaptive VCR Interface

Another example of the use of the present programming system allows a hybrid request which does not correspond to any single broadcast schedule entry. In this case, if the user instead wishes to record weather reports on all channels, the interface may be of further help. The interface controls a plurality of tuner elements 2502 of a video signal reception device 2501, so that a plurality of broadcasts may be simultaneously received. Using the mass storage and possibly image data compression described above, a plurality of broadcasts may also be recorded simultaneously in the intermediate storage 2503. The mass storage may be multiple VCRs, optical storage, magnetooptical storage, or magnetic storage including disk (e.g. single disks, multimedia compatible disks, RAID, etc.) and tape (QIC, 8 mm, 4 mm, etc.). Preferably, the archival recording medium is recordable DVD or possibly recordable CD-ROM.


The optical recording tape produced by ICI, Inc., or other card or tape optical storage medium, might also be a useful storage medium for large volumes of data, as might be generated by recording multiple video signals. The known implementations of the ICI product system are best suited for commercial or industrial use, and not for individual consumer use.


In any case, the interface 2506 accesses its associated database 2413 to determine, at a given time, which channels are broadcasting “news”. The interface system might also randomly or systematically monitor or scan all or a portion of the available broadcasts for “special reports”. The interface system then monitors these channels for indicia of a “weather” information content broadcast. For example, the newscaster who appears to report the weather on a given show is usually the same, so that a pattern recognition system 2505 of the video frame could indicate the presence of that newscaster. In addition, the satellite photographs, weather radar, computer generated weather forecast screens, etc. are often similar for each broadcast. Finally, news segments, such as “weather” often appear at the same relative time in the broadcast. Using this information, the interface system selects certain broadcast segments for retention.
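The combination of these weak indicia might be sketched as a single weighted retention score, as below; the weights, feature names and threshold are illustrative assumptions for exposition, not values taken from the disclosure.

    def weather_score(frame_features, minutes_into_show):
        """Combine weak indicia into one retention score: presence of
        the known weathercaster's face, radar/map-like imagery, and
        the usual position of the weather segment in the broadcast."""
        return (0.5 * frame_features.get("anchor_face", 0.0)
                + 0.3 * frame_features.get("radar_map", 0.0)
                + 0.2 * (1.0 if 12 <= minutes_into_show <= 18 else 0.0))

    # Strong face and map evidence, at the usual point in the show.
    print(weather_score({"anchor_face": 0.9, "radar_map": 0.8}, 14) > 0.5)  # True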


This retention begins at the beginning of a news segment, such as “weather”, stops recording during commercials, and continues after the return from break, on all selected channels. In order to assist in making accurate decisions, the monitored broadcasts may be stored in a temporary storage medium until a decision is made, and the recording thereafter transferred to a more permanent storage medium if that is appropriate. It is noted that the system of the present invention is intelligent, and may therefore “learn” either explicitly, or through training by example. Therefore, if the system made an error during the process, the user may define the error of the system, e.g., a substitute newscaster or rearrangement of news segments, so that the interface system has a reduced likelihood of making the same error again. Thus, while such a system is inherently complex, it poses significant user advantages. Further, while the interface system itself is sophisticated, it provides simplicity, with inductive reasoning and deductive reasoning for the user.


Thus, a minimum of user interaction is required even for complex tasks, and nearly full automation is possible, as long as the user and apparatus are able to communicate to convey a preference. As a further embodiment according to the present invention, the interface system will store transmitted data, and subsequently review that data, extracting pertinent information. The stored data may then be deleted from the storage medium. In this regard, the system may be self-learning.


It is noted that various algorithms and formulae for pattern recognition, correlation, data compression, transforms, etc., are known to those skilled in the art, and are available in compendiums, such as Netravali, Arun N., and Haskell, Barry G., “Digital Pictures: Representation and Compression”, Plenum Press, New York (1988); Baxes, Gregory A., “Digital Signal Processing, A Practical Primer”, Prentice-Hall, Englewood Cliffs, N.J. (1984); Gonzalez, Rafael C., “Digital Image Processing”, Addison-Wesley, Reading, Mass. (1987); and, of a more general nature, Press, William H., et al., “Numerical Recipes in C: The Art of Scientific Computing”, Cambridge University Press, 1988.


EXAMPLE 12
Intelligent Adaptive VCR Interface

A further example of the use of the advanced intelligent features of the present invention is the use of the system to record, e.g., “live” musical performances. These occur on many “talk” shows, such as “Tonight Show” (NBC, 11:30 p.m. to 12:30 a.m., weeknights), “Saturday Night Live” (NBC, 11:30 p.m. to 1:00 a.m., Saturday-Sunday), and other shows or “specials” such as the “Grammy Awards”. The interface, if requested by the user to record such performances, then seeks to determine their occurrence by, e.g., analyzing a broadcast schedule; interacting with the on-line database 2411; and by reference to the local database 2413. When the interface determines with high probability that a broadcast will occur, it then monitors the channel(s) at the indicated time(s), through the plurality of tuners 2502. The system may also autonomously scan broadcasts for unexpected occurrences.


In the case of pay-per-view systems and the like, which incorporate encrypted signals, an encryption/decryption unit 2509 is provided for decrypting the transmitted signal for analysis and viewing. This unit also preferably allows encryption of material in other modes of operation, although known decryption systems without this feature may also be employed with the present system. During the monitoring, the interface system acquires the audio and video information being broadcast, through the signal receiver 2408, and correlates this information with a known profile of a “live musical performance”, in the preference and event correlator 2412. This must be distinguished from music as a part of, e.g., a soundtrack, as well as “musicals” which are part of movies and recorded operas, if these are not desired by the user. Further, music videos may also be undesirable. When the correlation is high between the broadcast and a reference profile of a “live musical performance”, the system selects the broadcast for retention. In this case, the information in the intermediate storage 2503 is transferred to the plant 2507, which includes a permanent storage device 2508. The intermediate storage 2503 medium is used to record a “buffer” segment, so that none of the broadcast is lost while the system determines the nature of the broadcast. This, of course, allows an extended period for the determination of the type of broadcast, so that, while real-time recognition is preferred, it is not absolutely necessary in order to gain the advantages of the present invention. The buffer storage data, if not deleted, also allows a user to select a portion for retention that the interface system has rejected.


Thus, while it is preferable to make a determination in real time, or at least maintain real time throughput with a processing latency, it is possible to make an ex post facto determination of the nature of the broadcast program. By using an available delay, e.g., about 5 to about 300 seconds, or longer, the reliability of the determination can be greatly increased as compared to an analysis of a few frames of video data, e.g., about 15 to about 300 ms. An intermediate reliability will be obtained with a delay of between about 300 and about 5000 ms. As stated above, the storage system for this determination need be neither uncompressed nor lossless, so long as features necessary to determine the character of the broadcast are present. However, it is preferred that for broadcast recording intended for later viewing, the storage be as accurate as possible, so that if a compression algorithm is implemented, it be as lossless as reasonable given the various constraints. The MPEG-2 standard would be applicable for this purpose, though other video compression systems are available.


In a preferred situation, approximately 5 minutes of broadcast material is analyzed in order to make a determination of the content. This broadcast material is stored in two media. First, it is stored in a format acceptable for viewing, such as video tape in a videotape recorder, or in digital video format, e.g., uncompressed, MPEG-2. Second, it is received in parallel by the computer control, where the data is subject to a number of recognition and characterization processes. These are performed in parallel and in series, to produce a stored extracted feature matrix. This matrix may contain any type of information related to the broadcast material, including an uncompressed signal, a compressed signal, a highly processed signal relating to information contained in particular frames and abstract features, spatially and temporally dissociated from the broadcast signal, yet including features included in the broadcast which relate to the content of the broadcast.
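A schematic sketch of such a feature matrix follows, with plain callables standing in for the DSP and recognition engines; the analyzer functions shown are trivial placeholders, assumed for exposition only.

    def extracted_feature_matrix(frames, analyzers):
        """Run each characterization process over the buffered frames
        and collect the results as a matrix: one row per frame, one
        column per analyzer. Real analyzers would be DSP/recognition
        engines running in parallel and in series."""
        return [[analyzer(f) for name, analyzer in analyzers] for f in frames]

    analyzers = [("mean_luma", lambda f: sum(f) / len(f)),
                 ("peak",      lambda f: max(f))]
    frames = [[10, 20, 30], [5, 50, 5]]   # toy stand-ins for video frames
    print(extracted_feature_matrix(frames, analyzers))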


One possible method incorporates one or more digital signal processor based coprocessor elements, which may be present on, e.g., Nubus cards in the Macintosh Quadra 950, Apple Power PC, PCI card in Pentium-based MS-DOS/Windows 3.1, 3.11, 95, 98, NT computers (or Macintosh PCI-based computers), other Power PC based computers. These elements may be based on C-Cube CL550 (JPEG compression), Analog Devices ADSP-21020, Analog Devices ADSP-21060, AT&T (formerly American Telephone and Telegraph Co.) DSP32C, AT&T DSP3210, AMD 29000 series, Motorola DSP 96000ADS, Texas Instruments TMS 320C40, TMS 320C80, IBM Mwave, or other known devices. Other devices are also available from Analog Devices, AT&T, DSP Group, Motorola, NEC, SGS-Thomson, Sharp, Texas Instruments, Zilog, Zoran, and other vendors. See, EDN, May 11, 1995, pp. 40-106; Bursky, D., “Improved DSP ICs Eye New Horizons”, Electronic Design, Nov. 11, 1993, pp. 69-82. DSP systems, which generally have an architecture optimized for the efficient and rapid execution of repetitive numeric calculations, are desirable for certain pattern recognition tasks, and may be provided as a tightly coupled parallel processing array to increase throughput.


A known board containing a DSP is the MacDSP3210 by Spectral Innovations Inc., containing an AT&T digital signal processor and an MC68020 CISC processor, and which uses the Apple Real-time Operating System Executive (A/ROSE) and Visible Cache Operating System (VCOS). It is preferred that processors optimized for image processing be employed to process the video signals, because of their higher throughput in the present image processing applications, and that one or more other signal processors be employed to analyze the audio signals. Of course, general purpose processors may be used to perform all calculations. An array processor which may be interfaced with a Macintosh is the Superserver-C available from Pacific Parallel Research Inc., incorporating parallel Inmos Transputers. Such an array processor may be suitable for parallel analysis of the image segment and classification of its attributes.


Pattern recognition processing, especially after preprocessing of the data signal by digital signal processors and image compression engines, may also be assisted by logical inference engines, such as FUTURE (Fuzzy Information Processing Turbo Engine) by The Laboratory for International Fuzzy Engineering (LIFE), which incorporates multiple Fuzzy Set Processors (FSP), which are single-instruction, multiple data path (SIMD) processors. Using a fuzzy logic paradigm, the processing system may provide a best fit output to a set of inputs more efficiently than standard computational techniques, and since the presently desired result requires a “best guess”, rather than a very accurate determination, the present interface is an appropriate application of this technology.


As noted above, these processors may also serve other functions such as voice recognition for the interface, or extracting text from video transmissions and interpreting it. It is also noted that, while some of these coprocessing engines are now costly, these costs are decreasing and the present invention therefore includes the use of sophisticated present designs as well as future devices which may be used to perform the stated functions. The continued development of optical computers may also dramatically reduce the cost of implementing this aspect of the present invention; however, the present state of the art allows the basic functions to be performed. See attached appendix of references, incorporated herein by reference, detailing various optical computing designs.


A real time operating system may be employed, of which there are a number of available examples. Some older examples include the SPOX DSP operating system, IBM's Mwave operating system and AT&T's VCOS operating system. These operating systems, and possibly others, are to be supported by the Resource Manager function of Microsoft's Windows 95 operating system.


It is noted that various methods are available for determining a relatedness of two sets of data, such as an image or a representation of an image. These include the determination of Hausdorff distance, fuzzy correlation, arithmetic correlation, mean square error, neural network “energy” minimization, covariance, cross correlation, and other known methods, which may be applied to the raw data or after a transformation process, such as an Affine transformation, a Fourier transformation, a Gabor transformation, a warping transformation, a color map transformation, and the like. Further, it is emphasized that, in image or pattern recognition systems, there is no need that the entire image be correlated or even analyzed, nor that any correlation be based on the entirety of that image analyzed. Further, it is advantageous to allow redundancy, so that it is not necessary to have unique designations for the various aspects of the data to be recognized, nor the patterns to be identified as matching the uncharacterized input data.
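Two of the listed measures, mean square error and a zero-mean normalized cross correlation, may be sketched as follows for one-dimensional data; extension to image arrays is straightforward, and, per the text above, the comparison need not span the entire image.

    def mean_square_error(a, b):
        """Average squared difference; 0.0 for identical signals."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    def cross_correlation(a, b):
        """Zero-mean normalized cross correlation: 1.0 for linearly
        related patterns, near 0.0 for unrelated ones."""
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a)
               * sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / den if den else 0.0

    a, b = [1, 2, 3, 4], [2, 4, 6, 8]
    print(mean_square_error(a, b), cross_correlation(a, b))  # 7.5, 1.0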


The MSHELL from Applied Coherent Technology is a software system that runs on a Mercury MC3200 array processor, in conjunction with a Data Translation DT2861 or DT2862. The NDS1000 Development System from Nestor, Inc., provides image recognition software which runs on a PC compatible computer and a Data Translation DT2878.


The C-Cube CL550 is disclosed in “C-Cube CL550 JPEG Image Compression Processor”, Preliminary Data Book, August 1991, and addendum dated Nov. 20, 1991, and products incorporating the CL550 include the JPEG Video Development Kit (ISA bus card with Chips and Technologies PC video 82C9001A Video Window Controller), and the C-Cube CL550 Development Board/PC for ISA Bus (CL550, for use with Truevision TARGA-16 or ATVista cards) or for NuBus (Macintosh). The so-called C-Cube “CL950” is an MPEG decoder device. Such a device as the CL950 may be particularly useful for use in the present VCR for reproducing compressed program material, which may be compressed by the present apparatus, or may be used for decompressing pre-compressed program material. Other MPEG-1 and MPEG-2 encoding and decoding devices are known.


It is noted that all functions of a VCR would also be facilitated by the use of such powerful processors, and thus it is not only these advanced functions which are enabled by these advanced processors and coprocessors. It is also noted that these image recognition functions need not necessarily all be executed local to the user, and may in fact be centralized with resultant processed data transmitted to the remote user. This would be advantageous for two reasons: first, the user need not have an entire system of hardware localized in the VCR, and second, many of the operations which must be performed are common to a number of users, so that there is a net efficiency to be gained.


EXAMPLE 13
Intelligent Adaptive VCR Interface

The interface of the present invention incorporates an intelligent user interface level determination. This function analyzes the quality of the user input, rather than its content. Thus, this differs from the normal interface user level determination, which requires an explicit entry of the desired user level that is maintained throughout the interface until explicitly changed. The present interface may incorporate the “smart screen” feature discussed above, which may, through its analysis of the past user interaction with the interface, predict the most likely user input function. Thus, the predictive aspects of the present invention may be considered a related concept to the intelligent user level interface of the present invention. However, the following better serves to define this aspect of the invention.


The input device, in addition to defining a desired command, also provides certain information about the user which has heretofore been generally ignored or intentionally removed. With respect to a two-dimensional input device, such as a mouse, trackball, joystick, etc., this information includes a velocity component, an efficiency of input, an accuracy of input, an interruption of input, and a high frequency component of input. This system is shown schematically in FIG. 21, which has a speed detector 2104, a path optimization detector 2105, a selection quality detector 2106, a current programming status 2108, an error counter 2109, a cancel counter 2110, a high frequency signal component detector 2112, an accuracy detector 2113 and a physio-dynamic optimization detector 2114. In addition, FIG. 21 also shows that the interface also uses a past user history 2107, an explicit user level choice 2111 and an explicit help request 2115.


This list is not exclusive, and is somewhat dependent on the characteristics of the specific input device. For a mouse, trackball, or other like device, the velocity or speed component refers to the speed of movement of the sensing element, i.e. the rotating ball. This may also be direction sensitive, i.e., velocity vector. It is inferred that, all other things being equal, the higher the velocity, the more likely that the user “knows” what he is doing.


The efficiency of input refers to two aspects of the user interface. First, it refers to the selection of that choice which most simply leads to the selection of the desired selection. For example, if “noon” is an available choice along with direct entry of numbers, then the selection of “noon” instead of “12:00 p.m.” would be more efficient. The second aspect of efficiency has to do with the path taken by the user in moving a graphic user interface cursor or input device from a current position to a desired position. For example, a random curve or squiggle between locations is less efficient than a straight line. This effect is limited, and must be analyzed in conjunction with the amount of time it takes to move from one location of a cursor on the screen to another; if the speed of movement is very rapid, i.e. less than about 400 ms for a full screen length movement, or less than about 300 ms for small movements, then an inefficiency in path is likely due to the momentum of the mouse and hand, momentum of the rolling ball, or a physiological arc of a joint. This aspect is detected by the physio-dynamic optimization detector 2114. Thus, only if the movement is slow, deliberate, and inefficient, should this factor weigh heavily. It is noted that arcs of movement, as well as non-critical damping of movement around the terminal position, may be more efficient, and a straight path actually inefficient, so that the interface may therefore calculate efficiency based on a complex determination, and act accordingly where indicated.
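A sketch of such a path-efficiency measure, with the speed gate described above, follows; the ~400 ms figure is taken from the text, while the function names and the simple straight-line/path-length ratio are illustrative assumptions.

    import math

    def path_efficiency(points, duration_ms):
        """Ratio of straight-line displacement to actual path length
        (1.0 = perfectly direct). Fast movements are exempted, since
        their arcs reflect momentum rather than user uncertainty."""
        path = sum(math.dist(points[i], points[i + 1])
                   for i in range(len(points) - 1))
        direct = math.dist(points[0], points[-1])
        efficiency = direct / path if path else 1.0
        significant = duration_ms >= 400    # weigh only slow movements
        return efficiency, significant

    # A slow, arcing movement: low efficiency, and slow enough to count.
    print(path_efficiency([(0, 0), (5, 8), (10, 0)], 900))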


Thus, an “efficient” movement would indicate a user who may work at a high level, and conversely, an inefficient movement would indicate a user who should be presented with simpler choices. The efficiency of movement is distinguished from gestures and path dependent inputs, such as drawing and painting. These may be distinguished based on machine status or context. Further, the interface may recognize gestures in many contexts. Therefore, gesticulations must be distinguished from command inputs before further processing. Gesticulations, like path efficiency, may also be analyzed separately from the basic command input, and therefore may be provided as a separate input stream on an interface level rather than an application level, thus allowing cross application operation.


Likewise, if a movement is abrupt or interrupted, yet follows an efficient path, this would indicate a probable need for a lower user interface level. This would be detected in a number of elements shown in FIG. 21, the speed detector 2104, a high frequency signal component detector 2112, an accuracy detector 2113 and a physio-dynamic optimization detector 2114. In addition, FIG. 21 also shows the use of a past user history 2107, an explicit user level choice 2111 and an explicit help request 2115.


While the interface may incorporate screen buttons which are smart, i.e. those which intelligently resolve ambiguous end locations, the accuracy of the endpoint is another factor in determining the probable level of the user. Thus, for example, if a 14″ color monitor screen is used, having a resolution of 640 by 480 pixels, an accurate endpoint location would be within a central area of a screen button of size about 0.3″ by about 1.0″, the central area being about 0.25″ by about 0.75″. A cursor location outside this central area, but inside the screen button confines, would indicate an average user, while a cursor location outside the screen button may nevertheless be inferred to indicate the button, with an indication that the user is less experienced in using the pointing device.


Finally, in addition to the efficiency of the path of the cursor pointing device, a high frequency component may be extracted from the pointer signal by the high frequency signal component detector 2112, which would indicate a physical infirmity of the user (tremor), a distraction in using the interface, indecision in use, or an environmental disturbance such as vibration. In this case, the presence of a large amount of high frequency signal indicates that, at least, the cursor movement is likely to be inaccurate, and possibly that the user desires a lower user level. While this is ambiguous based on the high frequency signal content alone, in conjunction with the other indicia, it may be interpreted. If, for example, the jitter is due to environmental vibrations, and the user is actually a high level user, then the response of the user level adjust system would be to provide a screen display with a lowered required accuracy of cursor placement, without necessarily qualitatively reducing the implied user level of the presented choices; thus, it would have an impact on the display simplification 2103, with only the necessary changes in the current user level 2101.
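One simple way to estimate this high frequency component is to subtract a low-pass filtered version of the pointer trace and measure the residual energy, as sketched below; the smoothing constant and function name are arbitrary assumptions for exposition.

    def jitter_energy(samples, smoothing=0.8):
        """Estimate high-frequency content of a pointer trace as the
        mean squared residual after exponential (low-pass) smoothing;
        large values suggest tremor, distraction, or vibration."""
        low, energy = samples[0], 0.0
        for s in samples[1:]:
            low = smoothing * low + (1 - smoothing) * s
            energy += (s - low) ** 2
        return energy / (len(samples) - 1)

    steady = [10, 11, 12, 13, 14, 15]
    shaky = [10, 14, 9, 15, 8, 16]
    print(jitter_energy(steady) < jitter_energy(shaky))  # True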


Alternatively, the user may input a gesture, i.e., a stylized input having no other command input meaning, which may be detected by analyzing the input. The input may be a manual input, voice, image or the like. A number of different gestures may be recognized. These gestures are generally explicit inputs, which allow a voluntary action to be interpreted as input information to the interface.


EXAMPLE 14
Intelligent Telephone Device Interface

Likewise, the present interface could be used to control complex telecommunications functions of advanced telephone and telecommunications equipment. In such a case, the user display interface would be a video display, or a flat panel display, such as an LCD display. The interface would hierarchically present the available choices to the user, based on a probability of selection by the user. The input device would be, for example, a small track ball near the keypad. Thus, simple telephone dialing would not be substantially impeded, while complex functions, such as call diversion, automated teledictation control, complex conferencing, caller identification-database interaction, and videotel systems, could easily be performed.


EXAMPLE 15
Character Recognition of Video

The present invention may incorporate character recognition from the video broadcast for automatic entry of this information. This is shown schematically in FIG. 24, with the inclusion of the videotext and character recognition module 2414. This information is shown to be transmitted to the event characterization unit 2407, where the detected information is correlated with the other available information. This information may also be returned to the control 2402. Examples of the types of information which would be recognized are titles of shows, cast and crew from programming material, broadcast special alerts, time (from digital display on special access channels), stock prices from “ticker tape” on special access channels, etc. Thus, this technology adds functionality to the interface. In addition, subtitled presentations could be recognized and presented through a voice synthesizer, to avoid the necessity of reading the subtitle. Further, foreign language subtitles could be translated into, e.g., English, and presented. In a particular embodiment, certain game shows, such as “Wheel of Fortune” have alphanumeric data presented as a part of the programming. This alphanumeric text may be extracted from the image.


In a preferred embodiment, the character recognition is performed in known manner on a buffer memory containing a frame of video, from a device such as a Data Translation DT2851, DT2853, DT2855, DT2867, DT2861, DT2862 and DT2871. A contrast algorithm, run on, for example, a Data Translation DT2858, DT2868, or DT2878, first removes the background, leaving the characters. This works especially well where the characters are of a single color, e.g. white, so that all other colors are masked. After the “layer” containing the information to be recognized is masked, an algorithm similar to that used for optical character recognition (OCR) is employed. See, U.S. Pat. No. 5,262,860, incorporated herein by reference. These methods are well known in the art. This may be specially tuned to the resolution of the video device, e.g. NTSC, Super Video Home System (S-VHS), High Definition Television and/or Advanced Television Systems Committee (HDTV/ATSC, various included formats), Improved Definition Television (IDTV), Enhanced Definition Television (EDTV), Multiple Sideband Encoding (MUSE), Phase Alternate Line (PAL), Séquentiel Couleur à Mémoire (SECAM), MPEG-2 digital video, or other analog or digital transmission and/or storage formats, etc. In addition, since the text normally lasts for a period in excess of one frame, a spatial-temporal image enhancement algorithm may be employed to improve the quality of the information to be recognized, if it is indistinct in a single frame.
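A toy sketch of this masking and temporal enhancement front end follows; the predicate, pixel layout and averaging scheme are assumptions for exposition, and the OCR pass itself is omitted.

    def extract_text_layer(frame, is_text_color):
        """Mask everything but the (assumed single-color) character
        layer; frame is a 2-D array of pixel values and is_text_color
        a predicate such as 'is near-white'."""
        return [[px if is_text_color(px) else 0 for px in row] for row in frame]

    def temporal_enhance(frames):
        """Average the masked layer over several frames, exploiting
        the fact that on-screen text persists beyond one frame."""
        n = len(frames)
        return [[sum(f[r][c] for f in frames) / n
                 for c in range(len(frames[0][0]))]
                for r in range(len(frames[0]))]

    frame = [[250, 30], [245, 10]]
    masked = extract_text_layer(frame, lambda px: px > 200)
    print(temporal_enhance([masked, masked]))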


EXAMPLE 16
Smart House Interface

The present invention may also be incorporated into other types of programmable controls, for example those necessary or otherwise used in the control of a smart house. See, “The Smart House: Human Factors in Home Automation”, Human Factors in Practice, December 1990, 1-36. The user interface in such a system is very important, because it must present the relevant data to the user for programming the control to perform the desired function. A smart house would likely have many rarely used functions, so that both the data and the available program options must be presented in the simplest manner consistent with the goal of allowing the user to make the desired program choice. For example, a smart house system with appropriate sensors might be used to execute the program: “start dishwasher, if more than half full, at 9:00 p.m.” This program might also include a program to load soap into the dishwasher or to check if soap is already loaded. A user who wishes to delay starting until 11:00 p.m. would be initially presented with the defaults, including start time as an option, which would be simply modified by correcting the starting time. The next time the same user wishes to program the device, an algorithm might change the predicted starting time to, e.g. 10:00 p.m., which is a compromise between the historical choices. Alternatively, the new predicted start time might be 11:00 p.m., the last actually programmed sequence. Finally, the next predicted start time might remain at 9:00 p.m. The resolution of these choices would depend on a number of factors: a preprogrammed expert system; any other prior history of the user, even with respect to other appliances or in other situations; the context, meaning any other contemporaneously programmed sequences; and an explicit input from the user as to how the inputs should be evaluated for predictive purposes.
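The three candidate policies described above for the next predicted start time may be sketched as follows; the function name and the decimal encoding of clock times are illustrative only.

    def predicted_start(history_hours, mode="compromise"):
        """Candidate policies from the text: average of historical
        choices, most recent choice, or the original default.
        Hours are in 24-hour decimal form."""
        if mode == "compromise":
            return sum(history_hours) / len(history_hours)
        if mode == "most_recent":
            return history_hours[-1]
        return history_hours[0]            # original default

    history = [21.0, 23.0]                 # 9:00 p.m., then 11:00 p.m.
    print(predicted_start(history))        # 22.0, i.e. 10:00 p.m.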


The expert system would balance factors, including disturbing noise from the dishwasher, which might be objectionable while persons are near the dishwasher, people are sleeping, or during formal entertainment. On the other hand, if the dishwasher is full, or its cleaned contents are needed, the dishwasher should run. Some persons prefer to reshelve dishes in the evening, before sleep, so in those cases, the dishwasher should complete its cycle before bedtime. The dishwasher, on a hot water cycle, should not run during showers or baths, and preferably should not compete with a clothes washer for hot water. The dishwasher preferably does not run during peak electrical demand times, especially if electrical rates are higher. Water conserving cycles should be selected, especially during droughts or water emergencies. If dishes remain in the dishwasher for an extended period, e.g., overnight, a moistening cycle may be employed to help loosen dirt and to help prevent drying. Thus, the expert system is preprogrammed for a number of high level considerations that might be common to a large number of users of the system, thus shortening the required training time of the system to learn the preferences of the user. Such a sophisticated system may eliminate the need entirely for adaptive responses, based on weighing of considerations provided by the user. Of course, other considerations may also be included for the operation or delay of operation of the dishwasher. Further, these considerations are exemplary of the types of considerations which might be employed in an expert system in a smart house.


The prior history of the user provides an excellent source of information regarding the preferences of the user, although this is sometimes not the most efficient means, and may often include contradictory data. This historical use data is therefore analyzed in a broad context in order to extract trends, which over a number of uses may be further extracted as “rules”. Often, the user history data will be applied at a high level, and will interact with preexisting rules of the expert system, rather than to create new rules. In this case, the expert system preferably includes a large number of “extra rules”, i.e., those with an a priori low probability or low weighing, providing a template for future pattern matching. The past history may be evaluated in a number of ways. First, an expert system may be used to analyze the past usage pattern. Second, a neural network may be trained using the historical data along with any corrective feedback. Third, the historical data may be used to alter fuzzy logic rules or classifications, either by expert system, neural network, or by other known means.


The context of use may also be used to determine a desired or predicted action. Therefore, if on a single occasion, a number of changes are made, for example during a large house party, the standard predictions would not be altered, and thus a normal program would remain in effect. Of course, a new “house party” sequence would then be recognized and included as a new type of sequence for future evaluation. For example, a house party sequence might encompass a number of house systems. Thus, the delay of dishwasher until 11:00 p.m. allows all dishes from the party to be placed in the dishwasher before starting. An alarm system would be generally deactivated, although various zones may be provided with different protection; e.g., a master suite may be off-limits, with an alarm transmitting a signal to a user's beeper, rather than a call to police or alarm service company. During the summer, the air conditioner might run even if doors and windows are open, even if the normal program prompts for door closings before the air conditioner is turned on. Likewise, exterior lighting would be turned on at dusk, with bug lights turned on during the entire party. The user might individually make such decisions, which would be recognized as a group due to their proximity in time, or delineate the actions as a group. Thereafter, where some of these choices are made, and the profile of choices matches a “party” style, the remainder of the choices may be presented as a most likely or predicted choice. The group of choices together might also be selected from a menu of choices.


Context also relates to sensor data, which might include sensors in particular appliances or unrelated sensors. For example, infrared motion detectors may be used to estimate the number of persons present in a house. Likewise, heavy use of a bathroom, as detected by flushes, frequent light transitions or door openings, might also be useful as data to estimate a crowd size. Temperature sensors, video imaging sensors, perimeter sensors, electrical sensors relating to the status of appliances and machinery, and other types of sensors may provide data for context determination.


Of course, explicit inputs must also be accommodated, which may be atomic instructions or complex combinations of instructions which may control a single house system or a number of house systems simultaneously. The explicit input preferably comes by way of the adaptive interface described throughout the present application, or an interface incorporating particular aspects thereof.


The smart house system also controls the climate control system. Thus, it could coordinate temperatures, air flow and other factors, based on learned complex behaviors, such as individual movement within the dwelling. Since the goal of the programming of the smart house is not based on the storage of discrete information, but rather the execution of control sequences at various times and under certain circumstances, the control would differ in various ways from that of a VCR. However, the user interface system, adaptive user level, help system, and the like would be common to both types of system. This differs from the Fuzzy Logic controlled air conditioners available (in Japan) from Mitsubishi in that these prior art devices do not have the intelligent interface of the present invention. It should also be noted that the control for the VCR could be the same control as that for the smart house, so that the common elements are not redundant. Therefore, by applying a single control to many tasks, a common user interface is used, and the cost is reduced.


EXAMPLE 17
Programmable Environmental Controller

The present Example relates to a programmable environmental controller application. In this case, a sensor or sensor array is arranged to detect a change in the environment which is related to a climatic condition, such as an open door. On the occurrence of the door opening, the system would apply a pattern recognition analysis to recognize this particular sensor pattern, i.e. a mass of air at a different temperature entering the environment from a single location, or a loss of climate controlled air to a single location. These sensor patterns must be distinguished from other events, such as the action of appliances, movement of individuals in the vicinity of the sensor, a shower and other such events. It is noted that in this instance, a neural network based adaptive controller may be more efficient than a standard fuzzy logic system, because the installation and design of such a system is custom, and therefore it would be difficult to program fuzzy set associations a priori. In this case, a learning system, such as a neural network, may be more efficient in operation and produce a better result than other adaptive methods. The training procedure may be fully automated (with manual feedback provided where necessary to adjust the control parameters), so long as sufficient sensors are provided for controlling the system, and so long as an initial presumption of the control strategy is workable during the training period. In the case of an HVAC system, the initial strategy incorporated is the prior art “bang-bang” controller, which operates as a simple thermostat, or multi-zone thermostat. As a better starting point, a fuzzy logic temperature controller may be modeled and employed. Other known strategies which are not often used in environmental control include the proportional-integral-differential (PID) controller.
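Both starting strategies may be sketched compactly, as below; the gains, hysteresis band and class names are illustrative assumptions, not tuned values from the disclosure.

    def bang_bang(setpoint, temp, hysteresis=0.5):
        """Simple thermostat: heat full-on below the band, off above
        it, hold the previous state inside the band."""
        if temp < setpoint - hysteresis:
            return 1.0
        if temp > setpoint + hysteresis:
            return 0.0
        return None                          # hold previous state

    class PID:
        """Proportional-integral-differential controller, the
        alternative starting strategy mentioned above."""
        def __init__(self, kp=1.0, ki=0.1, kd=0.05):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral, self.prev = 0.0, 0.0
        def update(self, error, dt=1.0):
            self.integral += error * dt
            derivative = (error - self.prev) / dt
            self.prev = error
            return (self.kp * error + self.ki * self.integral
                    + self.kd * derivative)

    pid = PID()
    print(bang_bang(20.0, 18.0), round(pid.update(20.0 - 18.0), 3))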


It is noted that the HVAC system may also be of a type which is inoperable with standard type controllers; for example, the system may be such as to produce temperature oscillations, or significant temperature or pressure gradients. In this case, the default control system must be provided to compensate the system, allowing more subtle corrections and adjustments to be made based on preferences. Thus, an expert system is provided, which is updated based on user input, and which receives context information, including sensor data and other inputs. Explicit user preferences and programming are also input, preferably with an interface in accordance with the present invention or incorporating aspects thereof.


In this example, which may be described with reference to FIG. 23, sufficient sensors in a sensor array 2301 are provided, being light, temperature, humidity, pressure, air flow and possibly a sensor for determining an event proximate to the sensor, such as door opening. While a single sensor array 2301 provides input to the present control, a plurality of sensor arrays are preferably employed in complex installations, such as that described here. The sensors, with the possible exceptions of the flow sensor and event sensor, are housed in a single sensor head. Further, the temperature and pressure sensors may be combined in a single integrated circuit by known means. The light and temperature sensors are known to those skilled in the art, and need not be described herein. The pressure sensor may be a Sensym strain gage pressure transducer, a Motorola pressure transducer device, or the like, which are known in the art. Alternatively, other types of sensors may be used, for example a micromachined silicon force balance pressure transducer, similar in electrical design to the Analog Devices monolithic accelerometers, ADXL-50 or ADXL-05.


The humidity sensor is preferably an electronic type, producing an electrical signal output. It need not be internally compensated for the other measured environmental factors, as the constellation of sensors may compensate each other. The air flow sensor may be based on pressure differentials, using the electronic pressure sensor described above, or may be a mechanical vane type. In most applications, a single flow axis will be sufficient; however, in some circumstances, a sensor with two or more axes will be required. Further, in the case of large volume areas, complex turbulent flow patterns may be relevant, for which known sensors exist. Laser based air flow sensors may be employed, if desired, and LIDAR sensors may be used to determine flow rate, direction, and turbulence.


The event sensor may be of any type, and depends particularly on the event being measured. In the present case, where a door opening is to be detected, it is preferred that the environmental control be interfaced with a perimeter intrusion alarm system, which, for example, provides a magnet embedded in the door and a magnetic reed switch in the door frame. Individual sensors are normally wired to the alarm control panel, thus providing central access to many or all of the desired event detection sensors while minimizing the added cost. The event detector may also be an ultrasonic, infrared, microwave-Doppler, mechanical, or other type of sensor. Wireless sensors may also be used, communicating via infrared beams, acoustic, radio frequency, e.g., 46-49 MHz, 900 MHz, or other bands, using analog, digital or multilevel quantized digital AM, FM, PSK, QAM, or other modulation scheme, or a combination thereof. Spread spectrum devices may be employed, as well as time, code or frequency multiplexing or a combination thereof. Various failsafe mechanisms are preferably included, including those identifying transmitter or receiver failure, communication interference or message collision, and other conditions. A reverse communication channel may also be included, either symmetric in band, or asymmetric in band or out of band, for communication with the sensor or apparatus associated with the sensor, and as part of the failsafe system. A forward error correction protocol is preferably effected, which may detect errors and include error correcting codes for digital transmissions. Digital data may be encrypted, and the transmission modulation scheme may also include an encrypted sequence of frequency, phase, convolution, noise, or other modulation parameter.


While wireless data transmission as described above may be used, the preferred method of receiving sensor information is through a serial digital or analog (i.e., 4-20 mA transmitter) data transmission, which may be multiplexed and/or part of a local area network scheme, with minimal local processing of the sensor data by the microprocessor 2302, having a serial link 2302a, in the sensor head. Such serial digital protocols and physical transport layers include Echelon LonWorks, BSR X-10, CEBus, RS-232, RS-423, Apple ADB, AppleTalk, Ethernet (10Base-T, 10Base2, 10Base5, 100Base-T, 100BaseVG), ATM, USB, IEEE-1394, HomeRun (Intel/Tut), etc. This scheme allows the central control 2303 to incorporate the desired processing, e.g., by the pattern recognition system 2304, etc., while minimizing the installation expense. A simple microprocessor device 2302 in the sensor head interfaces the sensing elements, and may provide analog-to-digital conversion, or other conversion which may be necessary, of the sensor signal. In the case of a serial digital data transmission, the local microprocessor formats the sensor data, including a code indicating the sensor serial number and type, the sensor status (i.e., operative, defective, in need of maintenance or calibration, etc.), the sensor data, and an error correcting code. In the case that the data is transmitted on a local area network, the microprocessor also arbitrates for bus usage and implements the messaging protocol.
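

The message format described above (serial number, type, status, data, and error correcting code) might, purely as a sketch, be framed as follows; the field widths are assumptions, and a simple check byte stands in for a true error correcting code:

```python
import struct

# Hypothetical sensor frame: serial number, sensor type, status,
# reading, and a trailing check byte standing in for the error
# correcting code described above. Field sizes are illustrative.
STATUS = {"operative": 0, "defective": 1, "needs_calibration": 2}

def pack_frame(serial: int, sensor_type: int, status: int, reading: float) -> bytes:
    body = struct.pack(">IBBf", serial, sensor_type, status, reading)
    check = sum(body) & 0xFF          # placeholder for a real ECC/CRC
    return body + bytes([check])

def unpack_frame(frame: bytes):
    body, check = frame[:-1], frame[-1]
    if sum(body) & 0xFF != check:
        raise ValueError("corrupt frame")   # would trigger retransmission
    return struct.unpack(">IBBf", body)

frame = pack_frame(serial=1042, sensor_type=3, status=STATUS["operative"], reading=21.5)
```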


The control, it must be understood, has a number of available operative systems at its disposal, comprising the plant 2306. In this case, the system is a forced air heating and cooling system. This system has a heating unit, a humidifier, blowers, a cooling unit (which also dehumidifies), ducts, dampers, and possible control over various elements, such as automated door openers.


As described above, the system is installed with a complete array of sensors, some of which may be shared with, or a part of, other control systems in the environment, and begins operation with a basic acceptable initial control protocol. The system then receives data from the sensors, and correlates data from the various sensors, including the event sensors, with the operation of the systems being controlled. In such a case, a "door open" event may be correlated with a change in other measured variables. The system then correlates the control status with the effect on the interrelation of the measured variables. Thus, the system would detect that, if the blower is operating while the door is open, air is highly likely to flow out of the door, unless a blower operates to recirculate air from a return near the door. Thus, the system will learn to operate the proximate return device while the door is open and the blower is on. Once this correlation is defined, the system may further interrelate the variables, such as the wind speed and direction outside the door, the effects of other events such as other open doors, the absolute and relative speeds of the blowers and the return device, the effect of various damper devices, etc. It is further noted that, under some circumstances, an exchange of air through an open door is desired, and in such an instance, the system may operate to facilitate the flow through the open door. Finally, the system must be able to "learn" that conditions may exist which produce similar sensor patterns but which should be handled differently. An example is a broken, defective or inoperative sensor. In such a case, the system must be able to distinguish the type of condition, and not execute an aggressive control algorithm in an attempt to compensate for an erroneous reading or an otherwise normal event. This requires the intelligent control of the present invention. In order to distinguish various events, sensors which provide overlapping or redundant information, as well as a full contextual overview, should be provided as a part of the system.
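

As a minimal sketch of the correlation step described above, the controller might accumulate, for each discrete event and actuator state, the average change produced in each measured variable; the event and variable names are hypothetical:

```python
from collections import defaultdict

# Accumulates, per (event, actuator state) pair, the running mean change
# observed in each sensor variable, so that rules such as "door open +
# blower on -> air loss at the door" can be read off the statistics.
stats = defaultdict(lambda: {"n": 0, "mean_delta": defaultdict(float)})

def observe(event, actuator_state, sensor_deltas):
    cell = stats[(event, actuator_state)]
    cell["n"] += 1
    for name, delta in sensor_deltas.items():
        m = cell["mean_delta"][name]
        cell["mean_delta"][name] = m + (delta - m) / cell["n"]  # running mean

observe("door_open", "blower_on", {"flow_at_door": -0.8, "temp_zone_1": -0.3})
```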


It is further noted that energy efficiency is a critical issue in climate control systems, and an absolute and continuous control over the internal environment may be very inefficient. For example, the starting of large electrical motors causes a large power draw, and the simultaneous starting of such equipment may increase the peak power draw of a facility, possibly increasing the utility rates. Further, some facilities may operate on emergency or private power generation (co-generation), which may have different characteristics and efficiency criteria. These factors may all be considered in the intelligent control. It is also noted that a higher efficiency may be achieved, in certain circumstances, by employing auxiliary elements of the climate control system which have a lower capacity and lower operating costs than the main elements. Thus, for example, if one side of a building is heated by the sun, it may be more efficient to employ an auxiliary device which suitably affects, i.e., compensates, only that part of the building. If such equipment is installed, the aggregate efficiency of the system may be improved, even if the individual efficiency of an element is lower. Likewise, it may be preferable to run a 2½ ton air conditioning unit continuously, rather than a 5 ton air conditioning unit intermittently. The present intelligent control allows a fine degree of control, making use of all available control elements, in an adaptive and intelligent manner.


Returning to the situation of a door opening event, the system would take appropriate action, including: interrupting normal climate control until the disturbance has subsided and normal conditions are achieved; beginning, based on the actual or predicted climatic conditions, a climate compensation control designed to maximize efficiency while maintaining climatic conditions during the disturbance, and returning to normal operation after the disturbance; and, optionally, during the door opening disturbance, controlling a pressure or flow of air to counterbalance a flow through the door, by using a fan, blower or other device, or halting such a device, if necessary. It is also noted that the climate control system could be outfitted with actuators for opening and closing doors and windows, or an interface with such other system, so that it could take direct action to correct the disturbance, e.g., by closing the door. The internal and external ambients may differ in temperature, humidity, pollutants, or the like, and appropriate sensors may be employed.


It is thus realized that the concepts of using all available resources to control an event, as well as using a predictive algorithm in order to determine a best course of action and a desired correction are a part of the present invention.


EXAMPLE 18
Remote Control Hardware

A remote control of the present invention may be constructed from, for example, a Micromint (Vernon, Conn.) RTC-LCD, RTC-V25, RTC-HC11, RTC180 or RTC31/52, and RTC-SIR, in conjunction with an infrared transmitter and receiver, input keys and a compatible trackball, which may provide raw encoder signals, or may employ a serial encoder and have a serial interface to the processor module. A power supply, such as a battery, is used. The use, interfacing and programming of such devices is known to those skilled in the art, and such information is generally available from the manufacturer of the boards and of the individual circuit elements of the boards. The function of such a remote control is to receive inputs from the trackball and keys, and to transmit an infrared signal to the controller.


The processor and display, if present, may provide added functionality by providing a local screen, which would be useful for programming feedback and remote control status, as well as compressing the data stream from the trackball into a more efficient form. In this case, certain of the extracted information may be relevant to the determination of the user level, so that information related to the user level would be analyzed and transmitted separately to the controller by the infrared transmitter. If the local LCD screen is used in the programming process, then the main controller would transmit relevant information to the remote display, by a reverse-channel infrared link. These components are known in the art, and many other types may also be used in known manner.


In known manner, personal digital assistants ("PDAs"), such as those available from 3Com (Palm Pilot III), Microsoft Windows CE-based devices, Apple ("Newton" model 100, 110, 120), Tandy, Poquet, Sharp, Casio, AT&T (Eo 440), Hewlett-Packard, etc., may also be employed as a human interface device.


EXAMPLE 19
Medical Device Interface

The interface and intelligent control of the present invention are applicable to control applications in medicine or surgery. This system may also be described with reference to the generic system drawings of FIGS. 23 and 24. In this case, an operator identifies himself and enters information regarding the patient, through the interface 2305. The interface 2305 automatically loads the profile 2406 of both the operator and the patient, where the device is used for more than one of each, and is connected to a database containing such information, such as a hospital central records bureau. The interface may be connected to various sensors of the input device 2401, for ambient conditions (temperature, humidity, etc.), as well as data from the patient, such as electrocardiogram (EKG or ECG), electromyogram (EMG), electroencephalogram (EEG), evoked potentials, respirator, anesthesia, temperature, catheter status, arterial blood gas monitor, transcutaneous blood gas monitor, urinary output, intravenous (IV), intraperitoneal (IP), intramuscular (IM), subcutaneous (SC), intragastric or other types of solutions, pharmaceutical and chemotherapy administration data, mental status, movement, pacemaker, etc., as well as sensors and data sources separate from the patient, such as lab results, radiology and medical scanner data, radiotherapy data, renal status, etc. Based on the available information, the interface 2405, using the simple input device and the display screen described above, presents the most important information to the operator, along with a most probable course of action. The user may then either review more parameters, investigate further treatment options, input new data, or accept the presented option(s). The system described has a large memory in the signal analysis module 2409 for recording available patient data from the signal receiver 2408, and thus assists in medical record keeping and data analysis, as well as diagnosis. While various systems are available for assisting in both controlling medical devices and applying artificial intelligence to assist in diagnosis, the present system allows for individualization based on both the service provider and the patient. Further, the present invention provides an improved interface for interaction with the system.


It is further noted that, analogously to the library function discussed above, medical events may be characterized in the characterization unit 2407 and recorded by the plant 2404, so that a recording of the data need not be reviewed in its entirety in order to locate a particular significant event, and the nature of this event need not be determined in advance. It is also noted that the compression feature of the recorder of the present invention could be advantageously employed with the large volume of medical data that is often generated. Medical image data may be compressed as known in the art, by standard image compression techniques, and/or image compression techniques optimized for radiology, nuclear medicine and ultrasonography data. Other types of data may be compressed using lossless algorithms, or by various vector quantization, linear excited models, or fractal compression methods. It is finally noted that, because of its ability to store and correlate various types of medical data in the characterization unit 2407, the system could be used by the operator to create notes and discharge summaries for patients, using the database stored in the local database 2413, as well as the user history and preferences 2406. Thus, in addition to saving time and effort during the use of the device, it would also perform an additional function, that of synthesizing the data, based on medical significance.


In addition to providing the aforementioned intelligence and ease of use, the present example also comprises a control 2402, and may interface with any of the sensors and devices, performing standard control and alarm functions. However, because the present control 2402 is intelligent and has pattern recognition capability, in addition to full data integration from all available data sources, it may execute advanced control functions. For example, if the present control 2402 is interfaced to a controlled infusion pump for, e.g., morphine solution, in e.g., a terminally ill patient, then certain parameters must be maintained, while others may be flexible. For example, a maximum flow rate is established as a matter of practice as a safety measure; too high a flow rate could result in patient death. However, a patient may not need a continuous infusion of a constant dose of narcotic. Further, as the patient's status changes, the level of infusion may be advantageously altered. In particular, if the renal status of the patient were to change, the excretion of the drug may be impaired. Therefore, by providing the controller with a urinary output monitor, it could immediately suppress the morphine infusion as soon as the renal output is recognized as being decreased, and further indicate an alarm condition. Further, it may be advantageous to provide a diurnal variation in the infusion rate, to provide a “sleep” period and a period of heightened consciousness with correspondingly lower levels of narcosis. Where various tests, procedures or interviews are scheduled, an appropriate level of narcosis and/or analgesia may also be anticipatorily provided at an appropriate time.
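

A minimal sketch of this supervisory logic, under the stated behaviors, might read as follows; all thresholds, rates and schedule values are hypothetical placeholders, not clinical recommendations:

```python
# Sketch of the supervisory infusion logic described above. All
# thresholds, rates and schedule values are hypothetical placeholders.
MAX_RATE_ML_H = 5.0          # hard safety ceiling on flow rate
MIN_URINE_ML_H = 30.0        # below this, excretion presumed impaired

def diurnal_factor(hour: int) -> float:
    # Scheduled variation between a "sleep" period and waking hours;
    # the factor values are arbitrary.
    return 1.0 if 22 <= hour or hour < 6 else 0.7

def commanded_rate(base_rate_ml_h: float, urine_ml_h: float, hour: int):
    if urine_ml_h < MIN_URINE_ML_H:
        # Renal output decreased: suppress infusion and raise an alarm.
        return 0.0, "ALARM: reduced urinary output"
    rate = min(base_rate_ml_h * diurnal_factor(hour), MAX_RATE_ML_H)
    return rate, "ok"

rate, status = commanded_rate(base_rate_ml_h=4.0, urine_ml_h=45.0, hour=23)
```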


As another example of the use of the present device as a medical controller, the control 2402 could be interfaced with a cardiac catheter monitor, as a part of the signal receiver 2408. In such a case, alarms are normally set based on the outer ranges of each sensor measurement, and possibly a simple formula relating two sensor measurements, to provide a useful clinical index. However, by incorporating the advanced interface and pattern recognition function of the present invention, as well as its ability to interface with a variety of unrelated sensors, the present device, including the present control, may be more easily programmed to execute control and alarm functions, may provide a centralized source of patient information, including storage and retrieval, if diverse sources of such information are linked, and may execute advanced, adaptive control functions. The present control 2402 is equipped to recognize trends in the sensor data from the signal receiver 2408, which would allow earlier recognition and correction of various abnormal conditions, as well as recognition of improvements in conditions, which could allow a reduction in the treatment necessary. Further, by allowing a fine degree of control, parameters may be maintained within optimal limits for a greater percentage of the time. In addition, by monitoring various sensors, various false alarms may be avoided or reduced. In particular, false alarms may occur in prior art devices when a particular parameter is out of a specified range, triggered merely as a safety precaution, even though the sensors do not indicate a truly dangerous condition. In such a case, if a cause of the abnormal reading may be identified, such as patient movement or the normal activities of the patient's caretakers, then the condition may be safely ignored, without indicating an alarm. Further, even if a sensor parameter does in and of itself indicate a dangerous condition, if a cause other than a health risk may be identified, then the alarm may be ignored, or at least signaled with a different level of priority. By providing an intelligent and active filter for false alarm events, the system may be designed to have a higher level of sensitivity and specificity to real health risks, and further to provide a finer level of control based on the sensor readings, with fewer false positive readings.
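

The false alarm filtering described above might, as a sketch, downgrade an out-of-range reading when a recognized benign cause coincides with it; the cause names and limits are hypothetical:

```python
# Sketch of the intelligent false-alarm filter described above;
# cause names, parameters and limits are illustrative placeholders.
BENIGN_CAUSES = {"patient_movement", "caretaker_activity", "sensor_repositioned"}

def alarm_priority(param: str, value: float, limits: tuple, context: set) -> str:
    lo, hi = limits
    if lo <= value <= hi:
        return "none"
    # Out of range: downgrade if a recognized benign cause explains it.
    if context & BENIGN_CAUSES:
        return "advisory"        # log and display, do not sound alarm
    return "critical"

alarm_priority("heart_rate", 142.0, (50.0, 120.0), {"patient_movement"})
```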


EXAMPLE 20
Securities Trading Terminal Interface

The present invention is also of use in automated securities, debt, variable yield and currency trading systems, where many complex functions are available, yet often a particular user under particular circumstances will use a small subset of the functionality available at a given time. Such a situation would benefit from the present interface, which provides adaptive user levels, prioritized screen information presentation, and pattern recognition and intelligent control. A securities trading system is disclosed in U.S. Pat. No. 5,034,916, for a mouse driven Fast Contact Conversational Video System, incorporated herein by reference. The present system relates primarily to the user terminal, wherein the user must rapidly respond to external events, in order to be successful. In such a case, the advantages of the application of an interface according to the present invention are obvious, and need not be detailed herein. However, the pattern recognition functions of the present invention may be applied to correspond to the desired actions of the trader, unlike in prior intelligent trading systems, where the terminal is not individually and adaptively responsive to the particular user. Thus, the system exploits the particular strengths of the user, facilitating his actions, including: providing the desired background information and trading histories, in the sequence most preferred by the user; following the various securities to determine when a user would execute a particular transaction, and notifying the user that such a condition exists; monitoring the success of the user's strategy, and providing suggestions for optimization to achieve greater gains, lower risk, or other parameters which may be defined by the user. Such a system, rather than attempting to provide a “level playing field” to all users of like terminals, allows a user to use his own strategy, providing intelligent assistance. By enhancing the interface, a user becomes more productive with fewer errors and faster training.


EXAMPLE 21
Fractal Theory Pattern Recognition

Affine transforms are mathematical manipulations of data in two dimensions, wherein the manipulation comprises a rotation, a scaling and a displacement for each of the two coordinates. Schroeder, M., Fractals, Chaos, Power Laws, W.H. Freeman & Co., New York (1991). Of course, Affine transforms of higher dimensionality may also be employed. In describing an image using Affine transforms, the degree of matching between an image and the mathematical description of that image may be related to the number of iterations, and the fewer the iterations, the less data used to describe the image. Of particular importance in the field of graphics is the speed of "convergence", i.e., that relatively few iterations are necessary in order to describe an image with sufficient precision to be visually useful. Therefore, the Affine transform mathematical specifications may be far more compact than the raw image data, and these specifications compare favorably to other types of image compression, such as discrete cosine transformation (DCT) compression schemes, including JPEG, depending on a number of factors.
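

As a concrete rendering of this definition, a two-dimensional Affine map applies a rotation, a per-axis scaling and a displacement to each coordinate; the numeric values below are arbitrary:

```python
import math

def affine_2d(points, theta, sx, sy, dx, dy):
    """Rotate by theta, scale each axis, then displace; values arbitrary."""
    out = []
    for x, y in points:
        xr = x * math.cos(theta) - y * math.sin(theta)
        yr = x * math.sin(theta) + y * math.cos(theta)
        out.append((sx * xr + dx, sy * yr + dy))
    return out

triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# A contractive map: scale factors below 1, so iterates shrink detail.
smaller = affine_2d(triangle, theta=math.pi / 6, sx=0.5, sy=0.5, dx=0.2, dy=0.1)
```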


Because the Affine transform may be used to produce a compact visual description of an image, among other reasons, the present invention applies this transform to a pattern matching system for analyzing image contents.


Pattern recognition, in this case, may proceed on an image basis, to match similar images, or on an object basis, in which portions of images are matched. It is preferred that the pattern matching system be robust, i.e., tolerant of various alterations of an image, artifacts, interference and configurational changes, while specific enough to allow object differentiation.


In the case of video images, therefore, it is preferred that various two-dimensional projections of three-dimensional objects, in various “poses”, be classified the same. This therefore requires that, in analyzing a two-dimensional image, the object be extracted from a background image and separated from other objects. Further, degrees of freedom may be determined, such as through analysis of a sequence of frames to reveal relative motion or change of portions of the object with respect to other portions. Finally, the object in the image must be compared to three dimensional models, through various projections.


In the case of two dimensional image analysis, the image should be analyzed according to a robust starting criterion, so that the similarity of images may be determined by comparison of normalized Affine transformation coefficients.


Fractal analysis, the study of self-similarity and a superset of Affine transformation, allows a compact representation of an image or an object in an image, and, because it encompasses the various spatial relationships of object parts, allows normalized transforms to be compared. In other words, assuming that the object is extracted from a background scene, and various degrees of freedom are identified, an Affine transformation may be applied which will yield a similar result for an image of the same object in a different "pose", i.e., with a different exercise of its degrees of freedom. While, in general, Affine transformations are described with respect to two-dimensional images, they may also be applied to three dimensional images. Thus, where a triangular polygon is rotated, scaled and displaced in a two dimensional image, a tetrahedron is rotated, scaled and displaced in a three dimensional system. Further, analogies may also be drawn to the time dimension (although geometric forms which are rotated, scaled and displaced over time are not given trivial names). Because, in a contractive Affine transformation (one in which the scaling factor of successive iterations is less than 1), continued iterations are less significant, objects described with varying levels of detail may be compared. Even images which are not normalized may still be compared, because at every level of the transform, slight changes in rotation, scale and displacement are accounted for.


According to the present invention, nonlinear self-similarity may also be used. Further, in objects having more than two dimensions, linear transformations other than rotation, scaling and displacement may be described.


It is noted that many types of optical computers, especially those including holographic elements, employ transformations similar to Affine transformations. Therefore, techniques of the present invention may be implemented using optical computers or hybrid optical-electronic computers.


Thus, according to the present invention, the fractal method employing Affine transforms may be used to recognize images. This method proceeds as follows. A plurality of templates are stored in a memory device, which represent the images to be recognized. These templates may be preprocessed, or processed in parallel with the remainder of the procedure, in a corresponding manner. Image data, which may be a high contrast line image, greyscale, or full color mapped (greyscale being a unidimensional color map), is stored in the data processor provided for performing the recognition function.


The image is preprocessed to extract various objects from the background, and to separate objects.


This preprocessing may be performed in standard manner. The method of U.S. Pat. No. 5,136,659, incorporated herein by reference, may also be used. As a part of this preprocessing, a temporal analysis of the object through a series of image frames is performed to provide four dimensional data about the object, i.e., the two dimensions from the image, a third dimension imputed from differing perspective views of the object, and time. Certain objects may be immediately recognized or classified, without further processing. Further, certain objects, without full classification or identification, may be "ignored" or subjected to a lesser level of final processing. During the classification processing, various objects may be selected for different types of processing, for example, people, automobiles, buildings, plants, etc.


After classification, and temporal analysis, an object for further processing is analyzed for degrees of freedom, i.e., joints of a person, moving parts of an object, etc. These degrees of freedom may then be corrected, e.g., the object itself altered, to change the image into a standard format, or the degree of freedom information processed with the object to allow mathematical normalization without actual change of the image.


The information describing the object image is stored. A plurality of addressable domains are generated from the stored image data, each of the domains representing a portion of the image information. As noted above, the entire image need not be represented, and therefore various objects may be separately analyzed. Further, only those parts of the image or object necessary for the recognition need be analyzed. While it may not be known in advance which image components are unnecessary, this may sometimes be determined during processing.


From the stored image data, a plurality of addressable mapped ranges are created, corresponding to different subsets of the stored image data. Creating these addressable mapped ranges, which should be uniquely addressable, also entails the step of executing, for each of the mapped ranges, a corresponding procedure upon the one of the subsets of the stored image data which corresponds to the mapped range. Identifiers are then assigned to corresponding ones of the mapped ranges, each of the identifiers specifying, for the corresponding mapped range, a procedure and an address of the corresponding subset of the stored image data.


To ensure comparability, the processing treatment of the template and of the image data is analogous. Of course, template data may be stored in preprocessed form, so that the image data need only be processed according to the same rules. The domains are optionally each subjected to a transform, which may be a predetermined rotation, an inversion, a predetermined scaling, and a displacement. Because of the nature of these linear superposable transforms, the earliest iterations will include data about gross morphology, later iterations will include data about configuration, and the latest iterations will include data about texture.


In addition, nonlinear alterations, and frequency, Gabor or wavelet transform preprocessing may be applied. A warping or other kind of transform may also be applied. These types of transforms are generally not included in Affine transform analysis, yet judiciously applied, may produce more rapid convergence, greater data storage efficiency, computational advantages or pattern matching advantages.


This transform is used to optimize the procedure, and also to conform the presentation of the image data with the template, or vice versa. Each of the domains need not be transformed the same way, and in fact it is the transform coefficients which are stored to describe the transformed object, so that differences in coefficients relate to differences in objects.


For each of the domains or transformed domains, as the case may be, the one of the mapped ranges which most closely corresponds according to predetermined criteria is selected. The image is then represented as a set of the identifiers of the selected mapped ranges.


Finally, from the stored templates, a template is selected which most closely corresponds to the set of identifiers representing the image information. This matching process is optimized for the data type, which is a string of iterative transform coefficients, of a contractive transform.


It is preferred that, for each domain, a most closely corresponding one of the mapped ranges be selected. By performing analogous operations on a template and an unrecognized object in an image, a correspondence between the two may be determined. Thus, libraries of template image portions may be provided, with associated transform information, which may increase the computational efficiency of the system.


In selecting the most closely corresponding one of the mapped ranges, for each domain, the mapped range is selected which is the most similar, by a method which is appropriate, for example: selecting the minimum Hausdorff distance from the domain, selecting the highest cross-correlation with the domain, selecting the minimum mean square error with the domain, or selecting the highest fuzzy correlation with the domain, based on rules which may be predetermined. Neural network energy minimization may also yield the best fit, and other techniques may also be appropriate.


In particular, the step of selecting the most closely corresponding one of mapped ranges according to the minimum modified Hausdorff distance includes the step of selecting, for each domain, the mapped range with the minimum modified Hausdorff distance calculated as D[db, mrb]+D[1-db, 1-mrb], where D is a distance calculated between a pair of sets of data each representative of an image, db is a domain, mrb is a mapped range, 1-db is the inverse of a domain, and 1-mrb is an inverse of a mapped range.
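

A minimal sketch of this criterion, assuming binary images (so that 1-db is the complement image) and a direct, unoptimized Hausdorff computation, might read:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (N,2), b (M,2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def modified_hausdorff(db, mrb):
    """D[db, mrb] + D[1-db, 1-mrb] for binary (0/1) images, where 1-x is
    the complement image; assumes each image contains both foreground
    and background pixels."""
    pts = lambda img: np.argwhere(img > 0).astype(float)
    return (hausdorff(pts(db), pts(mrb)) +
            hausdorff(pts(1 - db), pts(1 - mrb)))
```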


It is important that the selection criteria be tolerant to variations of the type seen in image data, e.g., video, so that like objects have similar transforms. Thus, the selection criteria are not particularly directed to optimal data compression, although the two criteria may coincide for some types of data.


In the case where the digital image data consists of a plurality of pixels, each having one of a plurality of associated color map values, the method includes a matching of the color map, which, as stated above, encompasses a simple grey scale, natural color representation, and other color types. In such a case, the method is modified to optionally transform the color map values of the pixels of each domain by a function including at least one scaling function for each axis of the color map. Each scaling function may be the same or different, and is selected to maximize the correspondence between the domains and the ranges to which they are to be matched. For each of the domains, the one of the mapped ranges is selected whose color map pixel values most closely correspond to the color map pixel values of the domain according to a predetermined criterion; the step of representing the image color map information then includes the substep of representing it as a set of values, each including an identifier of the selected mapped range and the scaling functions. The correspondence method may be of any sort, and, because of the added degree of complexity, may be a different method than that chosen for non-color images. The method of optimizing the correspondence may be minimizing the Hausdorff distance or another "relatedness" measurement between each domain and the selected range. The recognition method concludes by selecting the most closely corresponding stored template, based on the identifiers of the color map mapped ranges and the scaling functions, which is the recognized image.
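

In the simplest case, the per-axis scaling function described above might be a least-squares scale factor fitted independently for each color axis; a sketch under that assumption:

```python
import numpy as np

def best_scale(domain_vals, range_vals):
    # Least-squares s minimizing ||s * domain - range||^2 for one axis.
    denom = float(np.dot(domain_vals, domain_vals))
    return float(np.dot(domain_vals, range_vals)) / denom if denom else 1.0

def match_color_domain(domain_rgb, range_rgb):
    # One scaling function per color axis; returns the scales and the
    # residual error used to judge the correspondence.
    scales = [best_scale(domain_rgb[..., c].ravel(), range_rgb[..., c].ravel())
              for c in range(domain_rgb.shape[-1])]
    err = sum(float(np.sum((s * domain_rgb[..., c] - range_rgb[..., c]) ** 2))
              for c, s in enumerate(scales))
    return scales, err
```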


Color information may have less relevance to pattern recognition than, for example, edge information, and therefore may be subjected to a lesser degree of analysis. The color information may also be analyzed separately, using a different technique.


EXAMPLE 22
Image Analysis

As an alternative to object extraction, the image as a whole may be analyzed. In the case of moving images, the aforementioned method is further modified to accommodate time varying images. These images usually vary by small amounts between frames, and this allows a statistical improvement of the recognition function by compensating for a movement vector, as well as for any other transformation of the image. This also allows a minimization of the processing necessary, because redundant information between successive frames is not subjected to the full degree of processing. Of course, if the image is substantially changed, then the statistical processing ceases, and a new recognition function may be begun, "flushing" the system of the old values. The basic method is thus modified by storing delayed image data information, i.e., a subsequent frame of a moving image. This represents an image of a moving object differing in time from the image data in the data processor.


A plurality of addressable further domains are generated from the stored delayed image data, each of the further domains representing a portion of the delayed image information and corresponding to a domain. Thus, an analogous transform is conducted so that each further domain corresponds to a domain. A plurality of addressable mapped ranges corresponding to different subsets of the stored delayed image data are created from the stored delayed image data. The further domain and the domain are optionally matched by subjecting a further domain to a corresponding transform selected from the group consisting of a rotation, an inversion, a scaling, and a displacement, which corresponds to a transform applied to a corresponding domain, or to a noncorresponding transform selected from the group consisting of a rotation, an inversion, a scaling, and a translation, which does not correspond to a transform applied to a corresponding domain. For each of the further domains or transformed further domains, the one of the mapped ranges is selected which most closely corresponds according to predetermined criteria. As stated above, these domains may also be subjected to corresponding and noncorresponding frequency domain processing transforms, Gabor transforms, and wavelet transforms.


A motion vector is then computed between one of the domain and the further domain, or the set of identifiers representing the image information and the set of identifiers representing the delayed image information, and the motion vector is stored. The further domain is compensated with the motion vector and a difference between the compensated further domain and the domain is computed. For each of the delayed domains, the one of the mapped ranges is selected which most closely corresponds according to predetermined criteria. The difference between the compensated further domain and the domain is represented as a set of difference identifiers of the selected mapping ranges and an associated motion vector.
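

A sketch of this motion compensation and residual computation, assuming whole-block translation over numpy arrays and an arbitrary search range, might read:

```python
import numpy as np

def best_motion_vector(domain, delayed, search=4):
    # Exhaustive small-range search for the translation minimizing the
    # sum of squared differences; np.roll wraps at the edges, which a
    # real implementation would handle more carefully.
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(delayed, dy, axis=0), dx, axis=1)
            err = float(np.sum((shifted - domain) ** 2))
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def compensated_difference(domain, delayed):
    dy, dx = best_motion_vector(domain, delayed)
    compensated = np.roll(np.roll(delayed, dy, axis=0), dx, axis=1)
    return (dy, dx), domain - compensated   # residual to be further coded
```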


This method is described with respect to FIGS. 27, 28 and 29. FIG. 27 is a basic flow diagram of the recognition system of the present invention. FIG. 28 provides a more detailed description, including substeps, which are included in the major steps shown in FIG. 27. Basically, the image, or a part thereof, is decomposed into a compressed coded version of the scene, by a modified fractal-based compression method. In particular, this differs from the prior compression algorithms in that only a part, preferably that part containing objects of interest, need be fully processed. Thus, if a background is known (identified) or uninteresting, it may be ignored. Further, the emphasis is on matching the available templates to produce an image recognition, not achieving a high degree of compression. Therefore, the image, or domains thereof, may be transformed as required in order to facilitate the matching of the templates. As with respect to single images, the templates are represented in analogous form, having been processed similarly, so that a comparison of the relatedness of an object in an image and the templates may be performed. In particular, if an oblique view of an object is presented, then either the object may be transformed to achieve a predicted front view, or the template transformed or specially selected to correspond to the oblique view. Further, once a recognition process has taken place with a high degree of certainty, the system need only ensure that the scene has not changed, and need not continually fully process the data. This has implications where multiple recognition processes are occurring simultaneously, either in a single scene or in different images, wherein the throughput of the recognition apparatus need not meet that required for de novo real time recognition of all aspects of all the objects or images.


In order to limit processing of portions of images, exclusionary criteria may be applied which allow truncation of processing when it is determined that an option is precluded or there exists a significantly higher probability alternative. The processing system may use primarily exclusionary criteria to select the best predictions, or after preselection, employ a highest probability selection system on the remaining choices.


FIG. 30 shows a flow diagram of a cartoon-like representation of an image recognition method of the present invention. It shows, initially, an input image 3001, having a degree of complexity. A windowing function 3002 isolates the object from the background. A first order approximation of the image is generated 3003, here called a mapping region. The first order approximation is then subtracted from the initial image to produce a difference 3004. The first order error is then subjected, iteratively, to successive transform and subtract operations 3005 and 3006, until the error is acceptably small, at which point the input image is characterized by a series of codes, representing the first order approximation and the successive transforms, which are stored 3008. These codes are then compared with stored templates 3009. The comparisons are then analyzed to determine which template produces the highest correlation 3010, and the match probability is maximized 3011. The recognized image is then indicated as an output 3012.
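

The loop of FIG. 30 may be sketched as follows, with deliberately trivial stand-ins for the mapping-region generator and the transform search; the stand-ins are assumptions for illustration, not the disclosed transforms:

```python
import numpy as np

def encode_iteratively(image, tolerance=1e-3, max_iters=16):
    # Transform-and-subtract loop of FIG. 30, with deliberately trivial
    # stand-ins: the first order approximation is the image mean, and
    # each "transform" simply re-encodes half of the remaining error.
    # A real implementation would search contractive Affine/fractal
    # transforms at steps 3005/3006.
    codes = [("mean", float(image.mean()))]
    error = image - image.mean()
    for _ in range(max_iters):
        if float(np.sum(error ** 2)) < tolerance:   # error acceptably small
            break
        correction = 0.5 * error                    # placeholder transform
        codes.append(("half_residual",))
        error = error - correction
    return codes    # the code string is then compared with stored templates
```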


This system is shown in FIG. 26, wherein a sensor 2602 provides data, which may be image data, to a control 2601. The control 2601 serves to control the plant 2603, which has an actuator. The plant 2603 may be a VCR or the like. The control 2601 has associated with it an intermediate sensor data storage unit 2611, which may be, for example a frame buffer or the like. The control 2601 also has associated with it a transform engine 2612, which may perform a reversible or irreversible transform on the data or stored data.


The system also has a template input 2610, which may receive data from the sensor 2602, if accompanied by identifying information. Thus, the pattern storage memory 2609 stores a pattern, such as an image pattern, along with an identifier.


The control 2601 also has an input device 2604, an on-screen display interface 2605, and a program memory 2606, for inputting instructions from a user, providing feedback to the user, and recording the result of the user interaction, respectively. Finally, a characterization network 2607 characterizes the sensor 2602 data, which may be provided directly from the sensor 2602 or preprocessing circuitry, or through the control 2601. A correlator 2608 correlates the output of the characterization network with the stored patterns, representing the templates from the template input 2610. The system therefore operates to recognize sensor patterns, based on the correlator 2608 output to the control 2601.


When analyzing objects in a sequence of images, a determination is made of the complexity of the difference based on a density of representation. In other words, the error between the movement and transform compensated delayed image and the image is quantified, to determine if the compensation is valid, or whether the scene is significantly changed. When the difference has a complexity below a predetermined or adaptive threshold, a template is selected, from the stored templates, which most closely corresponds or correlates with both the set of identifiers of the image data and the set of identifiers of the delayed image data, thus improving recognition accuracy, by allowing a statistical correlation or other technique. The threshold may be set based on an error analysis of the system to determine statistical significance or using other criteria. The threshold may also be adaptively determined based on the history of use of the machine and feedback. For example, if the two images both have a high correlation with one template, while a first of the images has a slightly higher correlation with another template, while the second image has a much lower correlation with that other template, then the system would score the first template as a better match to the first image, based on this differentiation. Thus, templates may be particularly selected to best differentiate similar images of objects.
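

The cross-frame template selection described above might be sketched as follows; the additive scoring of the two correlations is merely one illustrative choice:

```python
def select_template(templates, corr_now, corr_delayed, diff_complexity,
                    threshold):
    # corr_now / corr_delayed map each template id to its correlation
    # with the current and the delayed identifier sets; summing the two
    # is one illustrative way of exploiting both frames. If the
    # compensated difference is too complex, the scene has changed and
    # recognition restarts instead.
    if diff_complexity >= threshold:
        return None
    return max(templates, key=lambda t: corr_now[t] + corr_delayed[t])
```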


EXAMPLE 23
Pattern Recognition System

The present system allows for the use of a pattern recognition subsystem for a controller which acts in accordance with a detected pattern. In image, audio and multimedia applications, different types of processing may take place. First, various processing algorithms may be applied in parallel, with an optimum result selected from the results of the various algorithms. Further, various processing schemes may be applied in sequence, with differing sequences applied to different data streams. These processing schemes may be commutative, i.e., yield approximately the same result regardless of the processing order, or may be highly order dependent, in which case a processed data stream must include information relating to the sequence of processing for interpretation.


Various exemplars may reside in a fragment library, for comparison with unidentified data. In the case of processing path dependent systems, an exemplar may be found in multiple forms based on the processing procedure, or in a small subset of corresponding libraries. In general, lossless compression methods, and lossy compression methods employed with high fidelity parameters to minimize loss, will produce a nearly unique result for each unknown data set, while lossy compression or processing methods will be particularly procedure sensitive, especially if differing strategies are employed. These differing strategies may be used to emphasize different features of the unknown data set in order to facilitate comparison. This technique is especially useful when the processing procedures are run in parallel, so that the latency penalty for redundant processing is minimized. Techniques available for this processing include vectorization, fractal processing, iterated function systems, spatial frequency processing (DCT-JPEG, MPEG, etc.), wavelet processing, Gabor transforms, neural nets (static or sequence of images), and other known techniques.


In a preferred embodiment, a spatial frequency or wavelet processing step is performed first, on static image data or a sequence of images, with a fractal domain processing step performed thereafter. This allows high frequency noise to be initially filtered, with subsequent fractal-based correlated noise detection and subtraction, therefore allowing cleanup without loss of high frequency detail. Preferably, before the fractal-based processing, which may be performed by a digital computer or optical processing apparatus, standard edge detection/object separation, e.g., high frequency filtering, contour mapping, artificial intelligence, etc., may be performed. A fractal transform is then performed on the image or a portion thereof, starting in a standardized manner, e.g., at a point of lowest complexity, or at the epicenter of the largest feature, for beginning a contractive transform. The processed image may then be matched against one or more databases to identify all or a portion of the image. Optionally, after a match has been found and/or confirmed by an operator, using the human interface system, the method is then optimized to minimize the errors and increase the efficiency of later matches. This may be performed by modifying the database record, or related records, as well as by modifying the preprocessing algorithm. In a preferred embodiment, the image is processed piecemeal, on an object-by-object basis. Therefore, after an object has been processed, it is extracted from the image so that the remaining information may be processed. Of course, multiple objects may be processed in parallel. The exemplar database is preferably adaptive, so that new objects may be added as they are identified.
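

This preferred ordering, frequency-domain filtering before fractal-domain processing, can be sketched as a pipeline in which every stage is a crude stand-in; none of the stage implementations below are taken from the specification:

```python
import numpy as np

def wavelet_denoise(img):
    # Stand-in for the spatial frequency/wavelet step: a 3x3 box filter
    # that strips high frequency noise (a real system would use a true
    # wavelet or DCT stage).
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

def separate_object(img):
    # Stand-in edge detection/object separation: keep pixels with a
    # significant gradient magnitude (threshold is arbitrary).
    gy, gx = np.gradient(img)
    return img * (np.hypot(gx, gy) > 0.1)

def fractal_encode(obj):
    # Stand-in for the fractal transform: a real system would emit
    # contractive transform codes; a single scalar stands in here.
    return float(obj.mean())

def recognize(img, database):
    # database: list of (code, label) exemplars processed analogously.
    code = fractal_encode(separate_object(wavelet_denoise(img)))
    return min(database, key=lambda entry: abs(entry[0] - code))[1]
```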


The present technology may also be used with a model-based exemplar database, wherein an image object is matched, based on a two dimensional projection, or analysis of a sequence of images, with a multidimensional model of an object. For example, the model may include volume, as well as multiple degrees of freedom of movement. Further, objects may also include “morphing” characteristics, which identify expected changes in an appearance of an object. Other types of characteristics may be included in conjunction with the exemplar in the database.


In a preferred embodiment, a model contained in a database includes a three or more dimensional representation of an object. These models include information processed by a fractal-based method to encode repetitive, transformed patterns in a plane, space, time, etc., as well as additional degrees of freedom, to compensate for changes in the morphology of the object and to allow continuous object identification and tracking. Thus, once an object is identified, an expected change in that object will not necessitate a reidentification of the object. According to one embodiment, fractal-like processing is executed by the optical elements of an optical or hybrid optical-electronic computer. Further, in order to temporarily store an optical image, optically active biological molecules, such as bacteriorhodopsins, etc., may be used. Liquid crystals or other electrophotorefractive active materials may also be used. These stores may hold simple two dimensional images, holograms, or other optical storage formats. A preferred holographic storage method is a volume phase hologram, which will transform an impressed image, based on hologram-to-image correlation. Thus, these models would be somewhat independent of linear transforms, and would likely show some (planar) transform relationship. An optical computer may thus be advantageous for image analysis because of its high computational speed as compared to digital computers, due to its inherent parallelism and high inherent speed.


Because of the present limitations in speed of writing an image to optical recording media, especially holographic images, the preferred system includes a plurality of image storage elements, which are operated in parallel. It is noted that absolute accuracy of object identification is not required for “consumer” applications, and therefore partial match results may be considered useful. A plurality of partial results, when taken together, may also increase identification reliability. Critical applications generally differ in quantitative aspects rather than qualitatively, and therefore many aspects of the present invention may be applied to mission critical and other high reliability applications.


A preferred object identification method proceeds by first classifying an object in an image, e.g., “car”, “person”, “house”, etc. Then, based on the classification and object separation, an optimized preprocessing scheme is implemented, based on the classification. This classification preprocessing operates on the raw image data relating only to the object, separated from the background. Then, after the optimized preprocessing, a parallel recognition system would operate to extract unique features and to identify common features to be excluded from the comparison. This step could also identify variable features upon which identification should not be made because the distinctions are useless for the purpose. Thus, the object image at this point loses its relationship to the entire image, and the data reduction might be substantial, providing a compact data representation. The preferred algorithm has a tree structure, wherein the identification need only differentiate a few possibilities, and pass the result to another branch of the tree for further analysis, if necessary. Since the intermediate calculations may help in later computations, these should preferably be retained, in order to avoid duplicative analysis. Further, the order of analysis should be predetermined, even if arbitrary, so that once a useful intermediate calculation is identified, it may be passed in a regular, predictable manner to the next stage processing. Of course, one should not ignore that objects in the entire image may be correlated with one another, i.e. if one object is present, it would increase or decrease the likelihood of another object also being present. Further, temporal correlations should also be noted. Thus, the object identification need not proceed upon each object independently.
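

The tree-structured identification described above might be sketched as follows; the node structure and caching scheme are illustrative assumptions:

```python
# Each node of the identification tree differentiates among a few
# possibilities and hands off to a sub-tree; intermediate feature
# computations are cached and reused to avoid duplicative analysis.
class Node:
    def __init__(self, feature_fn=None, branches=None, label=None):
        self.feature_fn = feature_fn   # discrete-valued feature extractor
        self.branches = branches or {} # feature value -> child Node
        self.label = label             # set only on leaf nodes

def classify(obj, node, cache):
    if node.label is not None:
        return node.label
    key = node.feature_fn.__name__
    if key not in cache:               # retain intermediate calculations
        cache[key] = node.feature_fn(obj)
    return classify(obj, node.branches[cache[key]], cache)
```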


Based on time sequences of two-dimensional images, a three dimensional image representation may be constructed. Alternatively, based on various presumptions about extractable "objects" in a single or small group of two dimensional images, a hypothetical three dimensional object may be modeled, which may later be modified to reflect the actual image when an actual view of hidden surfaces is obtained. Therefore, by one means or another, a three dimensional model is created, having both volume and surface characteristics. Of course, since inner structure may never be seen, the model normally emphasizes the surface structure, and is thus a so-called two-and-a-half dimensional surface model. Other non-integral dimension representations may also be useful, and fractal models may efficiently represent the information content of an image model.


When the source signal is an MPEG encoded datastream, it is advantageous to provide an exemplar database which does not require complete expansion of the encoded signal. Thus, the motion vector analysis performed by the MPEG encoder may form a part of the pattern recognition system. Of course, image sequence description formats other than MPEG may be better suited to pattern analysis and recognition tasks. For example, a system may transmit an intraframe, by any suitable description method, as well as an object decomposed image in, e.g., fractal transform codes. The transmitted source material, other than intraframes, is then transmitted as changes only, e.g., new objects, transforms of existing objects, translations of existing objects, etc.


Color coding may make even more extensive use of fractal compression technology with high compression ratios, because absolute accuracy is not necessary; rather, photorealism and texture are paramount, and need not be authentic. Therefore, backgrounds with significant detail, which would require substantial data in a DCT type system, could be simply coded and decoded without loss of significant useful information. Important to the use of this method is the ability to discriminate between background textures and foreground objects, and to encode each separately, optimizing the processing based on the type of object being processed.


EXAMPLE 24
Data Context Sensitive Computer Interface

The present example relates to a context sensitive computer interface, in which a characteristic of the interface is modified based on the linguistic or informational content of the data object upon which the interface is operating. For example, a number of alternate feature sets may be made available based on the type of data which is being operated on by the user: differing feature sets would be optimal for each scientific discipline, each type of financial or economic field, marketing, retail, distribution, manufacturing, administration, human resources, etc. Such an interface makes it possible to provide an extended and extensible suite of application modules customized for the user in general, and further adaptive to the particular use to which the user may be putting the apparatus. Thus, complex options particularly suited to the data at hand may be made available without inefficient interface searching, while inappropriate options are not presented. It is noted that this interface is responsive to the data, rather than the programming. Further, the data is analyzed for its meaning, rather than its type.


In a word processing environment, a document or section of a document is analyzed for the presence of particular words or phrases, or for the presence of concepts, interpretable by linguistic concepts. This context-sensitive functionality does not require an explicit definition by the user, but rather will be present even during an incidental occurrence of a recognized context. In accordance with other aspects of the present invention, each context related function may have various user levels, which are selected based on an imputed user level of the user. Thus, the interface program must actually interpret the text or context of the user document in order to select the most likely options for use.


Thus, if a user were to embed a table in a document, the available options would change to table-type options when the “active” portion of the document is at the table, i.e. within the viewable area, etc. Further, and more specifically, if the text and context of the table indicate that this is a financial table, financial options would be initially provided, and standard financial calculation functions immediately made available or performed, in contemplation of their prospective use. Similarly, if the data appears to be scientific, a different set of options would be initially available, and the standard scientific-type calculation functions be made available or performed. If the table relates to chemical or mechanical-type data, chemical or mechanical options might be made available, respectively. Embedded graphics, likewise, would be associated with graphics functions appropriate to the type of graphic. It is noted that, due to the analysis of the content of the document, software having generic functionality may present as special purpose software, based on its actual use.


Thus, in a like manner, the system could determine the “style” of the document and automatically format the data in a predetermined manner to conform with general standards of presentation relating to the desired style. This is similar to the style sheets of many programs, but here the styles are self-applying and will, within the same document, adapt as the data changes context. Further, since the “styles” would be applied automatically, it would be relatively easy to alter them, requiring only a small amount of manual effort. This is so because the “keys” by which the system determines style could be stored, thus allowing redeterminations to be easily made. This context sensitivity could also assist in spelling and grammar checking, where different rules may apply depending on the context.


The data object includes information, which might be text, arrays of numbers, arrays of formulas, graphics, or other data types. The system relates parts of the object to each other by “proximity”, which could be linear, in the case of a text document, or otherwise, such as in the case of a hypertext document or spreadsheet. Those parts or elements of the object closest to each other, by whatever criteria, are presumed to be topically related, regardless of data type. Thus, if a paragraph of text is proximate to a table of numbers, then the type of numbers presumed to occupy the table would relate to the content of the proximate text. If the text relates to finance, i.e., uses finance-related terms or series of words that often occur in financial contexts, the table would be presumed to be a financial table.
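

The proximity-weighted imputation may be sketched as follows, assuming a simple linear offset as the distance measure; the positions and context labels are illustrative.

```python
def impute_table_context(table_pos, elements):
    """elements: list of (position, inferred_context) pairs for nearby
    text blocks; position is a linear offset (or any layout coordinate).
    Nearer elements get more say, regardless of their data type."""
    votes = {}
    for pos, ctx in elements:
        weight = 1.0 / (1.0 + abs(pos - table_pos))  # proximity weighting
        votes[ctx] = votes.get(ctx, 0.0) + weight
    return max(votes, key=votes.get)

# A paragraph of financial prose adjacent to the table outweighs a
# distant scientific passage:
print(impute_table_context(10, [(8, "financial"), (40, "scientific")]))
# -> financial
```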


Once the context of the part of the object is determined, the system then acts based upon this context. The major act is the presentation of tailored menus. This means that if the context is financial, the menus available for use with the numeric table relate to financial tables or spreadsheets. Further, the proximate text would be subject to finance-oriented spellcheck and finance-oriented grammar or style check. If a graphics option is selected proximate to the text and table, the menu options would presume a financial graph and present appropriate choices. Of course, the options need not be limited to a few types, and may be hybrid and/or adaptive to the style of the user. However, it is noted that the adaptive menus could be linked to a “corporate style”. Thus, communication styles could be dictated by a set of global rules for an organization. Of course, these a priori choices could be overridden.


An advantage of this system is that it allows a software system to include a wide range of functionality which remains “buried”, or relatively inaccessible, based on the context of usage. Thus, feature-rich software would be considered more usable, and software could be provided in modular fashion. Since the system might allow a user to have potential access to many software modules, the system could also be linked to a license manager and per-use billing system for rarely used modules, while allowing these to remain available on, e.g., a CD-ROM. Thus, for example, a fully integrated package could employ a single, “standard” interface which would not require task-switching programs, while avoiding presentation of the full range of features to the user at each juncture.


This system provides advantages over traditional systems by providing a non-standardized interface with a variable feature set which attains usability by adapting a subset of the available functionality based on the context of the data.


EXAMPLE 25
Group Aware Adaptive Computer Interface

The adaptive interface according to the present invention may be used in group computing applications. In such a case, the predictive functionality is applied to allow the interface to apply rules from one group member to a project, even when that group member has not contributed personally to a particular aspect. This is thus a type of intelligent agent technology, which, according to the present invention, includes the characteristics of abstraction and extrapolation, rather than rule-based analysis, which would fail under divergent circumstances. This differs from a standard rule-based expert system because the intelligence applied is not necessarily “expert”, and may be applied in a relative fashion. Further, extracted user characteristics need not completely define a solution to a problem, and indeed, the use of such a technology in group situations presupposes that a contribution of a number of users is desirable, and therefore that the expertise of any given user is limited.


In order to ensure data integrity after the application or contingent application of user characteristics to a datastream, it is desirable to trace the evolution of data structures. This also allows for assistance in the organization and distribution of workgroup responsibilities. Thus, in a workgroup situation, the goal is not optimization of individual productivity, but rather optimization of the group result, including all levels of review after an initial phase is complete.


Thus, while an individual user may seek various shortcuts to achieve various results, the group would benefit by having available all information relating to the path taken to achieve that result. Further, the desired result may be modified according to the presumed actions of the group, so that the final product is pre-optimized for the group, rather than the individual. Thus, a group member may have his “rules” extracted from his actions, e.g., by neural network training through backpropagation of errors, or by fuzzy rule definition, to be presented for consideration by another group member. This strategy will allow “better” drafts by considering the predicted input of a member prior to review by that member. A user may further tailor the rules for a given project, and “distilled wisdom” from non-group members may also be employed, as in normal expert (AI) systems.
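

A minimal sketch of such rule extraction, using plain logistic regression trained by gradient descent as a stand-in for the neural network or fuzzy-rule machinery; the feature encoding of a member's past accept/reject decisions is an illustrative assumption.

```python
import numpy as np

def extract_rules(decisions, features, epochs=500, lr=0.1):
    """Fit a soft 'rule' (logistic model) to one member's past
    accept/reject decisions (1/0) so the group tool can predict that
    member's reaction to a new draft element before actual review."""
    X = np.asarray(features, dtype=float)   # one row per past decision
    y = np.asarray(decisions, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted approval
        grad = p - y                              # backpropagated error
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_approval(w, b, x):
    """Probability the member would approve the new element x."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
```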


This rule-extraction technology as applied to workgroups is enhanced by the context sensitivity of the software, where the input of each group member may be weighted by considering the context. Again, this technique may be used to increase the efficiency of the primary author of a section of a project, as well as better defining the scope of responsibility of each member, while still respecting the input of other group members.


According to this workgroup rule extraction technology, points of conflict between group members are highlighted for resolution. As an adjunct to this resolution phase of a project, videoconferencing may be employed. Further, where a conflict of a similar type has occurred in the past, data relating to the resolution of that conflict, including any recorded videoconference, may be retrieved and presented to one or more members of the workgroup. In this way, such conflicts may be resolved before they become adversarial. Thus, each group member may efficiently proceed independently, with only major issues requiring meetings and the like to resolve.
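

A sketch of the retrieval of past conflict resolutions, assuming each archived conflict is tagged with a set of descriptors and Jaccard overlap serves as the similarity measure; both assumptions are illustrative.

```python
def similar_conflicts(new_conflict, archive, threshold=0.4):
    """archive: list of (tags, record) pairs for past conflicts, where
    record might point at a stored videoconference of the resolution.
    new_conflict and tags are sets of descriptors."""
    hits = []
    for tags, record in archive:
        overlap = len(new_conflict & tags) / len(new_conflict | tags)
        if overlap >= threshold:
            hits.append((overlap, record))
    hits.sort(key=lambda h: h[0], reverse=True)   # most similar first
    return [record for _, record in hits]

archive = [({"budget", "scope"}, "videoconf-1997-03"),
           ({"typography"}, "memo-12")]
print(similar_conflicts({"budget", "schedule", "scope"}, archive))
# -> ['videoconf-1997-03']
```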


If a workgroup member disagrees with an imputed rule, either explicitly, by review of the rules, or implicitly, by a review of the results, the system will allow a review of all decisions influenced by that faulty rule, as well as a proposed correction. This may be addressed by any member of the group, although the author of the section or the source of the rule will usually be the relevant reviewing individual. Rules may also be created by the group, rather than by a single individual. Such rules are more often explicitly defined, rather than derived from observation. Such group rules may also be subjected to adaptive forces, especially when overridden frequently.


EXAMPLE 26
Adaptive Interface Vehicular Control System

It is noted that the adaptive user-level interface is of use in uncontrolled environments, such as in a moving vehicle, especially for use by a driver. An intelligent system of the present invention would allow the driver of such a vehicle to execute control sequences, which may compensate for the limited ability to interact with an interface while driving. Thus, the driver need not explicitly control all individual elements, because the driver is assisted by an intelligent interface. For example, if it begins raining, the interface would predict that the windshield wipers should be actuated, the windows and any roof opening closed, and the headlights activated. The driver could then immediately assent to these actions, without individually actuating each control. In such a case, the screen interface, which may be a heads-up display, would provide a small number of choices, which may be simply selected. Further, under such conditions, there would likely be a large amount of mechanical jitter from the input device, which would be filtered to ease menu selection. Further, this jitter would indicate an unstable environmental condition, which would cause the interface to present an appropriate display. A voice input may also be used.
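

A minimal sketch of the condition-to-action prediction and the jitter filtering; the condition names, action bundles, and filter constant are all illustrative assumptions.

```python
PREDICTED_ACTIONS = {
    # Hypothetical mapping from sensed condition to a control bundle
    # offered for one-touch assent.
    "rain": ["wipers on", "close windows", "close roof", "headlights on"],
    "dusk": ["headlights on", "dim instruments"],
}

def propose(condition):
    """Return the bundle the driver may assent to as a group, rather
    than actuating each control individually."""
    return PREDICTED_ACTIONS.get(condition, [])

def smooth(samples, alpha=0.2):
    """Exponential low-pass filter for pointer input, so mechanical
    jitter in a moving vehicle does not defeat menu selection."""
    level = samples[0]
    out = [level]
    for s in samples[1:]:
        level = alpha * s + (1 - alpha) * level
        out.append(level)
    return out

print(propose("rain"))
# -> ['wipers on', 'close windows', 'close roof', 'headlights on']
```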


EXAMPLE 27
Adaptive Interface Vehicular Control System

An integrated electronics system for an automobile is provided having control over engine, transmission, traction control, braking, suspension, collision avoidance, climate control, and audio systems. Steering and throttle may also be controlled. Based on driver preference and action patterns, the system may optimize the vehicle systems. For example, the vehicle may anticipate voluntary driver actions or road conditions based on implicit inputs from the user, thus readying vehicular systems prior to the actual encounter with those conditions. Further, a user interface may be simplified, based on probable required functionality, thus limiting the attention required from the driver in order to activate a particular control. By providing such an interface, controls normally inaccessible may be made accessible without increasing mechanical complexity, e.g., functions normally controlled by computer may be accessed through a common user interface, rather than through dedicated manual controls.


The automobile control system may also include collision avoidance systems, which may include imaging sensors and radar or LIDAR ranging and velocity measurement. According to the present invention, a heads-up display or simplified graphic user interface in the dashboard or near the steering wheel presents predicted options to the driver. An auxiliary interface may also make certain options available for passengers.


According to another aspect of the present invention, an automobile positioning system is provided, which may be extraterrestrial, e.g., GPS, or terrestrial, e.g., cellular base station, LORAN, etc. Such a system is described in U.S. Pat. No. 5,390,125, incorporated herein by reference; see references cited therein. A controller in the automobile is provided with an itinerary for the vehicle travel. Based on position and itinerary, the vehicle may communicate with various services, such as food, fuel and lodging providers, to “negotiate” for business. The driver may be provided with customized “billboards”, directed to his demographics. Reservations and discounts may all be arranged while en route. Communication between the automobile and the services is preferably provided by cellular digital packet data (CDPD) services, a cellular-based digital data transmission system operating in the 800 MHz band. Therefore, an existing cell phone system or CDPD modem system may be employed for telecommunication. Preferably, a simple display is provided for presentation of commercial messages to the driver or passenger and for interacting with the service.
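

A sketch of the solicitation message the vehicle might send en route; the field names and JSON encoding are illustrative assumptions, not an actual CDPD payload format.

```python
import json

def solicitation(position, itinerary, wants):
    """Compose the request a roadside service provider would receive
    over the cellular data link so it can 'negotiate' for business."""
    next_stop = itinerary[0]          # e.g. {"place": ..., "eta": ...}
    return json.dumps({
        "position": position,         # e.g. GPS latitude/longitude
        "next_stop": next_stop["place"],
        "eta_minutes": next_stop["eta"],
        "wants": wants,               # e.g. ["fuel", "lodging"]
    })

print(solicitation((42.1, -75.9),
                   [{"place": "Binghamton", "eta": 45}],
                   ["fuel", "lodging"]))
```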


As a matter of practice, the service may be subsidized by the service providers, thus reducing the cost to the consumer. The extent of the subsidy may be determined by the amount of data transmitted or by the eventual consummation of the transaction negotiated.


Because of the positioning system, any variance from the itinerary may be transmitted to the service providers, so that reservations may be cancelled, or substitute services provided in a different location or at a different time.


The telecommunication system may also be used as an emergency system, to contact emergency services and/or police in the event of accident or distress. The transponder system may also be part of an antitheft system. The transponder may also be part of a vehicular maintenance and diagnostic system to ensure proper servicing and to help determine the nature of problems. Raw or processed data may be transmitted to a centralized station for full analysis and diagnosis. Because the vehicle need not be at the repair shop for diagnosis, problems may be analyzed earlier and based on extensive, objective sensor data.


EXAMPLE 28
Intelligent Internet Appliance

A further application of the present technologies is in a so-called “Internet appliance”. These devices are typically electronic devices which have a concrete function (i.e., do more than merely act as a generic server) and typically employ, at least as a secondary interface, a web browser 3205. In addition, these devices provide a TCP/IP network connection and act as a web server, usually for a limited type of data. Therefore, in addition to any real human interface on the device, a web browser 3205 may be used as a virtual interface 3304.


According to the present invention, such an Internet Appliance is provided according to the present invention with advanced features, for example adaptivity to the user, to the environment, or intelligent algorithms which learn. In fact, a preferred embodiment provides 3301 a rather generic device which serves as a bridge between the Internet, a public packet switched network 3202 which employs TCP/IP, and a local area network 3213, for example in a residential, industrial or office environment. The device may further abstract the interface functions for a variety of other devices 3212 as nodes on either the Internet or local area network 3213, to provide a common control system and interface.


A preferred embodiment also encompasses certain other features which may be used as resources for the networked devices or as usable features of the device.


The Internet, or other wide area network, may be connected in any known manner, for example, X.25/ISDN D-channel, dial-up over POTS (e.g., V.34, V.90, V.91), ISDN, xDSL, ADSL, cable modem, frame relay, T1 line, ATM, or other communications system. Typically, a system is provided with either a commonly used access method, such as V.90 or ISDN, or a replaceable communications module with a generic interface. Such systems are well known.


The local area network 3213 is also well known, and may include, for example, as a physical layer, 10 Base T, 100 Base T, HomeRun (Cat. 3 twisted pair/telephone twisted pair/power line transmission, from Intel Corp., e.g., Intel 21145 device/Tut systems), Universal Serial Bus (USB), Firewire (IEEE-1394), optical fiber, or other known computer network. The protocol may be, for example, TCP/IP, IPX, ATM, USB, IEEE-1394, or other known or proprietary appropriate communications protocol.


While not required, a particular aspect of a preferred embodiment according to the present invention is the ability to interface “dumb” devices as nodes on the LAN 3213 with an intelligent device 3201, while allowing the user to interact primarily with the intelligent device 3201. This scheme therefore reduces redundancy and increases functionality.


Therefore, in an exemplary embodiment, an intelligent home is established, with most or all electrical appliances 3223 and electronic devices interfaced with the system, for example through the aforementioned HomeRun system, using any of the supported physical layers. Each device is provided with a relatively simple control, for example, remotely controllable (or, where applicable, dimmable) lights 3224, control over normal use and peak electrical demand of heavy appliances 3223, as well as inter-device communications for consumer electronics 3221. Therefore, the intelligent device acts as an external communications and control node for the entire network, and may, for example, also control telephony 3214 functions.
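

A minimal sketch of the common control abstraction, in which each “dumb” appliance registers a small verb set with the intelligent node; the device names and verbs are illustrative assumptions standing in for the real power-line or LAN transceiver commands.

```python
class Device:
    """Uniform wrapper the intelligent node exposes for each 'dumb'
    appliance on the LAN."""
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions          # verb -> callable

    def do(self, action, *args):
        return self.actions[action](*args)

registry = {}

def attach(name, actions):
    """Register an appliance so the node can present a common
    control system and interface for it."""
    registry[name] = Device(name, actions)

# Example: a dimmable lamp registered with simple callables standing
# in for the real transceiver commands.
attach("hall lamp", {
    "on":  lambda: print("hall lamp on"),
    "off": lambda: print("hall lamp off"),
    "dim": lambda pct: print(f"hall lamp dimmed to {pct}%"),
})
registry["hall lamp"].do("dim", 30)
```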


Exemplary devices to be controlled in a home include household appliances 3223, HVAC 3215, alarm systems 3217, consumer electronics 3221, and the like, as well as devices provided for communications purposes. An alarm system 3217 embodiment, for example, may employ a video camera input 3219 for capture and analysis of images, as well as motion or irregularity detection. The intelligent device 3201 may, for example, employ neural networks or other intelligent analysis technology for analyzing data patterns indicative of particular states. An alarm output may be produced, for example, through standard alarms, as well as through a telephone 3214 interface of the system.


The system may therefore set/control/monitor the status of any home-based device—oven, stove, alarm, washing machine, dryer, iron, lights, computer, oil/gas burner, thermostat 3222, location of automobiles 3218, camera, pump 3226 (pool, sump), sprinkler 3225, stereo/video systems, home surveillance system 3216. This may be especially important if the user is away from home for an extended period of time, if he or she wants to change a schedule, or if travel plans change. For a home surveillance system 3216, pattern recognition may be employed to monitor all sensors, including cameras, to detect abnormal patterns or changes in condition.
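

A minimal statistical stand-in for the pattern-recognition monitoring described above, flagging any sensor reading that departs sharply from its recent history; the three-sigma threshold is an illustrative assumption, and a deployed system might use the neural-network analysis instead.

```python
from statistics import mean, stdev

def abnormal(history, reading, sigmas=3.0):
    """Flag a sensor reading that departs from its recent pattern."""
    if len(history) < 2:
        return False                      # not enough data to judge
    mu, sd = mean(history), stdev(history)
    return sd > 0 and abs(reading - mu) > sigmas * sd

# e.g., a sump pump current draw that jumps well outside its norm:
print(abnormal([1.1, 1.0, 1.2, 1.1, 0.9], 4.7))   # -> True
```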


Thus, since the intelligent device incorporates a web server, the physical proximity of the user is not critical for interaction with the device, and all devices on the LAN 3213 may be controlled remotely, automatically, and in synchrony.


In one embodiment, the intelligent device includes a videoconferencing 3220/video capture system, including any or all known features for such systems, for example as described in the background of the invention. Therefore, in addition to a base level of functionality, such an embodiment would also likely include (a) telephony 3214 interface, (b) video capture, (c) video codec, (d) audio capture, (e) audio codec, (f) full duplex speakerphone, (g) video output, and (h) audio output.


In another embodiment, a speech interface is provided for interpreting human speech as an input and/or producing synthesized speech as an output. Therefore, such a device would include speech recognition and/or synthesis technologies, as well as a semantic data processor.


Preferably, the device allows use of a simplified web browser interface 3205, such as may be supported by personal digital assistants (PDAs) and enhanced digital data cellular telephones, e.g., handheld device markup language (HDML). This, for example, allows a remote user to communicate through wireless networks 3211 or the like, and therefore avoids the need for a full personal computer as a human interface.
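

For illustration, a minimal sketch of the simplified markup interface: an ordinary HTTP server returning a deliberately terse HTML page that a PDA-class browser can render. The status table and port are assumptions, and a deployed system might emit HDML rather than HTML.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

STATUS = {"thermostat": "68F", "alarm": "armed", "sump pump": "idle"}

class Appliance(BaseHTTPRequestHandler):
    """Serves a terse page so a low-bandwidth wireless client can poll
    the home without a full personal computer."""
    def do_GET(self):
        body = "<html><body>" + "".join(
            f"<p>{name}: {state}</p>" for name, state in STATUS.items()
        ) + "</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("", 8080), Appliance).serve_forever()
```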


Advantageously, the device may be interfaced with a telephone 3214 communication system, allowing use as a voice and/or video message recorder, and allowing remote access to the stored information, either through a dialup connection and/or through the network. In this case, the intelligent device 3201 may act as a computer telephony interface, and all communications devices logically under this device act as “net phones”, i.e., voice communications devices which communicate over data networks. Therefore, all telephony control and computer telephony functions may be integrated into the device, for example, voice mail, auto-attendant, call center, and the like. Further, the Internet interface allows remote messaging and control over the telephony system, as well as virtual networking, Internet telephony, paging functions, and voice and data integration.


The intelligent device 3201 may also interface with various media electronics devices, and, for example, may act as a “rights server” 3208 or other aspect of a copyright protection and royalty collection/enforcement system 3307. Typically, these functions entail e-commerce transactions, and may require EDI (e.g., ANSI X.12) and/or XML communications and translations. In addition, such functions also typically involve encryption/decryption 3207, as well as key management, which are also preferably supported by the device. Such support may be in hardware or software.


Another aspect of the invention provides an index and/or catalog database 3204 for media information 3209 or media metadata 3210 information. Thus, data relating to a VCR tape or other recorded media may be subjected to search criteria without requiring access or contemporaneous analysis of the media content itself. Therefore, a preferred embodiment of the intelligent device includes mass storage and retrieval capability 3204, for example, magnetic disk, rewritable CD (CD-RW), or rewritable DVD (DVD-RW). This mass storage and retrieval capability 3204 may be used, not only for databases, but also for computer software, media and content storage and retrieval 3303. Thus, the device may also serve as a video data recorder, capturing video data and storing it digitally, for example, employing the aforementioned video and audio codecs. In this case, it is preferable that the intelligent device 3201 also include a direct media access port 3203, for example a broadcast TV tuner, ATSC/HDTV tuner, cable tuner, DVD reader, CD reader, satellite video decoder, NTSC composite/S-VHS, and/or other type of media content information input 3302. With such storage, the intelligent device 3201 may also assume the standard functions of computer network servers, for example, file serving, print serving, fax serving, application serving, client/server application support, as well as traditional networking functions, such as bridging, routing, switching, virtual private network, voice-over-IP, firewall functions, remote access serving, and the like. It should also be apparent that the intelligent device 3201 may also serve as a personal computer 3206 itself, and thus does not require additional systems for basic functionality.
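

A minimal sketch of the catalog database, using an in-memory SQLite table with an illustrative schema, so recorded media can be searched without contemporaneous access to the media itself.

```python
import sqlite3

# Illustrative schema; a deployed catalog would carry richer metadata.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE media
              (title TEXT, medium TEXT, location TEXT, keywords TEXT)""")
db.execute("INSERT INTO media VALUES (?,?,?,?)",
           ("Nature documentary", "VCR tape", "shelf 3", "wildlife ocean"))

def search(term):
    """Search titles and keywords; the tape itself is never read."""
    return db.execute(
        "SELECT title, medium, location FROM media "
        "WHERE title LIKE ? OR keywords LIKE ?",
        (f"%{term}%", f"%{term}%")).fetchall()

print(search("ocean"))   # -> [('Nature documentary', 'VCR tape', 'shelf 3')]
```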


In a media recording system embodiment, the system preferably notifies the user if the “program”, i.e., the instructions, is incomplete, ambiguous, or impossible to complete. For example, if a single channel selector is provided, no more than one channel may be monitored at a time. Further, where irreversible actions are necessary, the user is preferably informed and allowed to make a choice, for example, if lack of storage space forces a choice to be made between new and archival material. A conflict management system is provided which arbitrates between conflicting demands, for example if a second user is programming the same device (for example, the VCR) to record a show at the same time.
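

A minimal sketch of conflict detection for a single-tuner recorder; the request format is an illustrative assumption, and arbitration between the clashing users is left to the policy layer.

```python
def conflicts(requests):
    """requests: list of (user, channel, start, end). With a single
    channel selector, any time overlap on different channels must be
    arbitrated before recording proceeds."""
    clashes = []
    for i, (u1, c1, s1, e1) in enumerate(requests):
        for u2, c2, s2, e2 in requests[i + 1:]:
            if s1 < e2 and s2 < e1 and c1 != c2:   # overlapping intervals
                clashes.append(((u1, c1), (u2, c2)))
    return clashes

print(conflicts([("alice", 4, 20, 21), ("bob", 7, 20, 21)]))
# -> [(('alice', 4), ('bob', 7))]
```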


Thus, it is apparent that the intelligent device 3201 according to this embodiment of the present invention may incorporate many different functions, some of which are defined purely by software and processing availability, and others by particular hardware devices for performing specific functions.


Another aspect of the invention defines a special training mode of the intelligent device, which allows the user to improve the functionality of the system by ensuring that any intelligence algorithms will correctly operate in an anticipated and/or desired manner. In this mode, responses of the user are provoked which indicate user preferences, preferably in a manner which resolves ambiguities encountered with prior data sets. Thus, where the system identifies a situation where a decision is difficult, e.g., where the data analysis does not output any selected actions which will likely correspond to the user desires or preferences, or where ex post facto the user indicates that an inappropriate choice was made, the particular data structures may be stored and abstracted for later presentation to the user. In this case, such structures are presented by the system to the user, during a training session, to train the system relating to the desired response to particular data environments. In this way, the user is not necessarily burdened with training tasks during normal use of the device, and opportunities for such training are not lost. Where the system is untrained, and an “intelligent” response or mode of operation cannot be resolved, a default mode of operation may be defined. Further, such a default mode is preferably always available, at the request of the user, thus allowing use where an adaptive system is undesired or difficult to employ.
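

A minimal sketch of the training-mode bookkeeping: ambiguous decisions are archived rather than interrupting the user, a default mode is returned, and the archive is replayed in a later training session. The decision margin and storage cap are illustrative assumptions.

```python
from collections import deque

ambiguous = deque(maxlen=200)   # capped store of hard cases

def decide(scores, margin=0.1):
    """Pick the best-scoring action; when the top choices are too close
    to call, archive the situation for the next training session rather
    than burdening the user now, and fall back to the default mode."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < margin:
        ambiguous.append(scores)       # revisit with the user later
        return "default"
    return ranked[0][0]

def training_session(ask_user):
    """Replay archived ambiguities to provoke preference responses."""
    while ambiguous:
        case = ambiguous.popleft()
        ask_user(case)   # the answer is fed back to the learner
```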


In a television application, the Internet appliance preferably has access to an electronic program guide (EPG). Such EPG systems are known, and typically provide an efficient starting point for user programming. These EPGs may be provided as an embedded signal in a broadcast stream, through a dial-up network, through the Internet, or on distribution media, such as CD-ROM, OCR scanning of TV Guide (or the like), or other known means. EPGs contain a concise semantic description of program content, which typically is both sufficient for user evaluation and brief enough for rapid evaluation. The system may therefore analyze user preferences in this semantic space and provide adaptive presentation of elements of the EPG to the user. Of course, a media data stream analysis embodiment of the invention, as disclosed above, may be used in conjunction with or in lieu of the EPG system.
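

A minimal sketch of preference scoring over EPG descriptions, assuming a learned term-weight profile stands in for the semantic analysis; the profile weights and guide entries are illustrative.

```python
def score(description, profile):
    """profile: term -> weight learned from past viewing; descriptions
    are the concise semantic text carried by the EPG."""
    return sum(profile.get(w, 0.0) for w in description.lower().split())

def shortlist(epg, profile, top=5):
    """Adaptively rank guide entries for presentation to the user."""
    return sorted(epg, key=lambda e: score(e["desc"], profile),
                  reverse=True)[:top]

profile = {"documentary": 2.0, "ocean": 1.5, "sitcom": -1.0}
epg = [{"title": "Reef Life", "desc": "Ocean documentary"},
       {"title": "Laugh Hour", "desc": "Studio sitcom"}]
print([e["title"] for e in shortlist(epg, profile)])
# -> ['Reef Life', 'Laugh Hour']
```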


The system preferably maintains an updated index of available data. Thus, newly acquired data is added to the index, and deleted data is purged from the index. The system preferably compares new data to previously encountered data, to avoid redundant processing. For example, the system preferably recognizes events/programs that have previously been recorded, and checks to determine whether they are still in the index. In this context, the user is preferably provided with low-level file maintenance tools, for example to manually control the addition or deletion of data, which is then correctly represented in the index.


Because the Internet appliance is connected to the Internet, so-called multicasts may be monitored for correspondence with user preferences. Therefore, it is understood that the operation of the present invention is not limited to traditional television broadcasts, and that streaming video and audio, as well as stored images, sound files (e.g., MIDI, MP3, A2B, RealAudio), text, and multimedia streams may be analyzed based on the adaptive principles presented herein 3305.


The system may also integrate Internet data with other types of data, for example providing access to stored or static data corresponding to a data stream. The retrieval and storage of such data may also be adaptively controlled in accordance with the present invention. Thus, it is expressly understood that the intelligent device may act as a “VCR” (albeit not necessarily employing a known type of videocassette tape), to record media 3306.


The Internet appliance may also operate autonomously, capturing data which corresponds to user preferences and profiles, thus reducing latency for the user, and potentially shifting data transfers to off-peak periods. Such a system operates in this mode as a so-called “agent” system. Likewise, the device may also be linked to other intelligent devices, to provide an intelligent interaction therebetween.
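

A minimal sketch of the off-peak prefetch planning such an agent might perform, assuming a scored candidate list and a simple greedy fill of a transfer budget; the tuple format and budget are illustrative.

```python
import heapq

def plan_prefetch(candidates, budget_mb):
    """candidates: (preference_score, size_mb, url). Capture the most
    wanted material that fits the off-peak transfer budget, reducing
    latency for the user later."""
    queue = [(-score, size, url) for score, size, url in candidates]
    heapq.heapify(queue)                 # highest preference first
    plan, used = [], 0
    while queue:
        _neg_score, size, url = heapq.heappop(queue)
        if used + size <= budget_mb:
            plan.append(url)
            used += size
    return plan

print(plan_prefetch([(0.9, 300, "stream-a"), (0.4, 500, "stream-b"),
                     (0.7, 200, "stream-c")], budget_mb=600))
# -> ['stream-a', 'stream-c']
```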


The preferred user interface maintains user levels constant over long periods, i.e., it is not rapidly adaptive, to allow for quick access over a low-bandwidth connection, such as a telephone, or using succinct displays, such as might be found on a personal digital assistant. Thus, the user can rely on memory of the interface functionality and layout to reduce data transmissions and reduce search time. In one embodiment, the interface may be “forced” to a particular type, either as a permanent interface or as a starting point for adaptivity. Thus, the user may be provided with an interface design mode of operation.


The user interaction with each “device”, which may be real or virtual (implemented as a software construct in a relatively general purpose computer), is preferably carefully designed. A common user interface paradigm is preferably provided for corresponding functions, while the user interface is preferably optimized for dealing with the specific functions of each particular device. Thus, a similar user interface and screen layout is employed for functions that are the same across a variety of devices. In this regard, it is an aspect of an embodiment of the invention to translate user interface systems, even in a high level state, to other forms. Thus, in a multi-brand environment, related components may have native interfaces that are both well developed and distinctly different. Therefore, the present invention allows for a translation or remapping of the functionality into a common paradigm. Where aspects cannot be adequately translated, the native interface may be presented to the user.


It should be understood that the preferred embodiments and examples described herein are for illustrative purposes only and are not to be construed as limiting the scope of the present invention, which is properly delineated only in the appended claims.


REFERENCES



  • “32-bit Floating-Point DSP Processors”, EDN, Nov. 7, 1991, pp. 127-146.

  • “A New Class of Markov Processes for Image Encoding”, School of Mathematics, Georgia Inst. of Technology (1988), pp. 14-32.

  • “A show and tell of the QBIC technology—Query By Image Content (QBIC)”, IBM QBIC Almaden web site, pp. 1-4.

  • “ABI WHAP, Web Hypertext Applications Processor,” alphabase.com (1996, Jul. 11).

  • “AdForce Feature Set”, www.imgis.com/index.html/core/p2-2html (1997, Apr. 11).

  • “Bar Code Programs VCR”, Design News, Feb. 1, 1988, 26.

  • “C-Cube CL550 JPEG Image Compression Processor”, Preliminary Data Book, August 1991, and addendum dated Nov. 20, 1991.

  • “Chaos & Non-Linear Models in Economics”.

  • “Chaos Theory in the Financial Markets. Applying Fractals, Fuzzy Logic, Genetic Algorithms”.

  • “Construction of Fractal Objects with Iterated Function Systems”, Siggraph '85 Proceedings, 19(3):271-278 (1985).

  • “Data Compression: Pntng by Numbrs”, The Economist, May 21, 1988.

  • “EMC2 Pushes Video Rental By Satellite”, Electronic Engineering Times, Dec. 2, 1991, p. 1, p. 98.

  • “Evolutionary Economics & Chaos Theory”.

  • “Finger Painting”, Information Display 12, p. 18, 1981.

  • “Four Eyes”, MIT Media Lab web site; pp. 1-2.

  • “Fractal Geometry-Understanding Chaos”, Georgia Tech Alumni Magazine, p. 16 (Spring 1986).

  • “Fractal Modelling of Biological Structures”, Perspectives in Biological Dynamics and Theoretical Medicine, Koslow, Mandell, Shlesinger, eds., Annals of New York Academy of Sciences, vol. 504, 179-194 (date unknown).

  • “Fractal Modelling of Real World Images, Lecture Notes for Fractals: Introduction, Basics and Perspectives”, Siggraph (1987).

  • “Fractals Yield High Compression”; Electronic Engineering Times; Sep. 30, 1991; p. 39.

  • “Fractals—A Geometry of Nature”, Georgia Institute of Technology Research Horizons; p. 9 (Spring 1986).

  • “Frequently asked questions about visual information retrieval”, Virage Incorporated web site; pp. 1-3.

  • “How to find the best value in VCRs”, Consumer Reports, March 1988, 135-141.

  • “IBM Ultimedia Manager 1.1 and Client Search”, IBM software web site, pp. 1-4.

  • “Image Compression Using Fractals and Wavelets”, Final Report for the Phase II Contract Sponsored by the Office of Naval Research, Contract No. N00014-91-C-0117, Netrologic Inc., San Diego, Calif. (Jun. 2, 1993).

  • “Image Detection and Registration”, Digital Image Processing, Pratt, Wiley, New York, 1991.

  • “IPRO,”www.ipro.com, Internet profiles Corporation Home and other Web Pages (1996, Jul. 11).

  • “Jacob Methodology” @ WWCSAI.diepa.unipa.it.

  • “Low-Cost VCRs: More For Less”, Consumer Reports, March 1990, 168-172.

  • “Machine Now Reads, enters Information 25 Times Faster Than Human Keyboard Operators”, Information Display 9, p. 18 (1981).

  • “Market Analysis. Applying Chaos Theory to Investment & Economics”.

  • “Media Planning is Redefined in a New Era of Online Advertising,” PR Newswire, (1996, Feb. 5).

  • “MPEG: A Video Compression Standard for Multimedia Applications”, Le Gall, Communications of the ACM, vol. 34, No. 4, April 1991, pp. 47-58.

  • “My Yahoo! news summary for My Yahoo! Quotes”, my.yahoo.com (1997, Jan. 27).

  • “NetGravity Announces Adserver 2.1”, www.netgravity.com (1997, Apr. 11).

  • “Netscape & NetGravity: Any Questions?”, www.netgravity.com (1996, Jul. 11).

  • “Network Site Main”, www.doubleclick.net (1997, Apr. 11).

  • “New Beetle Cursor Director Escapes All Surface Constraints”, Information Display 10, p. 12, 1984.

  • “Nielsen Views VCRs”, Television Digest, Jun. 23, 1988, 15.

  • “Photobook”, MIT Media Lab web site; Aug. 7, 1996; pp. 1-2.

  • “Profiting from Chaos. Using Chaos Theory for Market Timing, Stock Selection & Option”.

  • “Real Media,” www.realmedia.com (1996, Jul. 11).

  • “Scanner Converts Materials to Electronic Files for PCs”, IEEE CG&A, December 1984, p. 76.

  • “The Front Page”, live.excite.com (1997, Jan. 27) and (1997, Apr. 11).

  • “The Front Page”, live.excite.com/?aBb (1997, Jan. 27) and (1997, Apr. 11).

  • “The Highs and Lows of Nielsen Homevideo Index”, Marketing & Media Decisions, November 1985, 84-86+.

  • “Pointcast Network,” www.pointcast.com (1996, Spring).

  • “The Power of PenPoint”, Carr et al., 1991, p. 39, Chapter 13, pp. 258-260.

  • “The QBIC Project”, IBM QBIC Almaden web site, home page.

  • “The Quest for ‘User Friendly’”, U.S. News & World Report, Jun. 13, 1988, 54-56.

  • “The Smart House: Human Factors in Home Automation”, Human Factors in Practice, December 1990, 1-36.

  • “VCR, Camcorder Trends”, Television Digest, Vol. 29, Mar. 20, 1989, 16.

  • “VCR's: A Look At The Top Of The Line”, Consumer Reports, March 1989, 167-170.

  • “VHS Videocassette Recorders”, Consumer Guide, 1990, 17-20.

  • “Virage—Visual Information Retrieval”, Virage Incorporated, home page.

  • “Virage Products”, Virage Incorporated web site; pp. 1-2.

  • “Visual Information Retrieval: A Virage Perspective Revision 3”, Virage Incorporated web site; 1995; pp. 1-13.

  • “Visual Pattern Recognition by Moment Invariants”, IRE Trans. Inform. Theory, vol. 8, February 1962, pp. 179-187.

  • “Voice Recognition and Speech Processing”, Elektor Electronics, September 1985, pp. 56-57.

  • “Welcome to Lycos,” www.lycos.com (1997, Jan. 27).

  • “Workshop Report: NSF—ARPA Workshop on Visual Information Management Systems”, Virage Incorporated web. site; pp. 1-15.

  • “WWW.amazon.com”.

  • “WWW.firefly.com”.

  • Abadi, M., et al, “Authentication and Delegation with Smart-cards”, Oct. 22, 1990, revised Jul. 30, 1992 Report 67, Systems Research Center, Digital Equipment Corp., Palo Alto, Calif.

  • Abatemarco, Fred, “From the Editor”, Popular Science, September 1992, p. 4

  • Abe, S., Y. Tonomura, Systems and Computers in Japan, vol. 24, No. 7, “Scene Retrieval Method Using Temporal Condition Changes”, pp. 92-101, 1993.

  • Abedini, Kamran, “An Ergonomically-improved Remote Control Unit Design”, Interface '87 Proceedings, 375-380.

  • Abedini, Kamran, and Hadad, George, “Guidelines For Designing Better VCRs”, Report No. IME 462, Feb. 4, 1987.

  • Advertisement for “TV Decision,” CableVision, Aug. 4, 1986.

  • Aleksander, I., “Guide to Pattern Recognition Using Random-Access Memories”, Computers and Digital Techniques, 2(1):29-40 (February 1979).

  • American National Standard, “Financial Institution Retail Message Authentication”, ANSI X9.19 1986.

  • American National Standard, “Interchange Message Specification for Debit and Credit Card Message Exchange Among Financial Institutions”, ANSI X9.2-1988.

  • Anderson, F., W. Christiansen, B. Kortegaard, “Real Time, Video Image Centroid Tracker”, Apr. 16-20, 1990.

  • Anderson, Ross J., “UEPS—A Second Generation Electronic Wallet”, Proc. of the Second European Symposium on Research in Computer Security (ESORICS), Toulouse, France, pp. 411-418.

  • Anderson, Ross, “Why Cryptosystems Fail”, Proc. 1st Conf. Computer and Comm. Security, pp. 215-227, November 1993.

  • Anson, L., “Fractal Image Compression”, Byte, October 1993, pp. 195-202; “Fractal Compression Goes On-Line”, Byte, September 1993.

  • Anson, L., M. Barnsley; “Graphics Compression Technology”; SunWorld; pp. 43-52 (October 1991).

  • Antonofs, M., “Stay Tuned for Smart TV,” Popular Science, November 1990, pp. 62-65.

  • Appriou, A., “Interet des theories de l'incertain en fusion de donnees”, Colloque International sur le Radar Paris, 24-28 avril 1989.

  • Appriou, A., “Procedure d'aide a la decision multi-informateurs. Applications a la classification multi-capteurs de cibles”, Symposium de l'Avionics Panel (AGARD) Turquie, 25-29 avril 1988.

  • Arman et al., “Feature Management for Large Video Databases”, 1993. (Abstract Only).

  • Arman et al., “Image Processing on Compressed Data for Large Video Databases”, Proc. of First ACM Int. Conf. on Multimedia, Anaheim, Calif., 1-6 Aug. 1993, pp. 267-272.

  • Arman et al., “Image Processing on Encoded Video Sequences”, ACM Multimedia Systems Journal, to appear 1994.

  • Arndt, T., “A Survey of Recent Research in Image Database Management”, IEEE Publication No. TH0330-1/90/0000/0092, pp. 92-97, 1990.

  • Arrow, K. J., “Social choice and individual values”, John Wiley and Sons Inc. (1963).

  • Arrowsmith, DK & C M Place: “An Introduction to Dynamical Systems”, Cambridge University Press, Cambridge, 1990.

  • Asian Technology Information Program (ATIP) Report: ATIP95.65: Human Computer Interface International, 7/95 Yokohama.

  • Astrom, K. J., and B. Wittenmark, “Adaptive Control”, Addison-Wesley Publishing Company (1989) pp. 105-215.

  • Astrom, K. J., T. Hagglund, “Automatic Tuning of PID Controllers”, Instrument Society of America, Research Triangle Park, N.C. (1988) pp. 105-132.

  • Atkinson, Terry, “VCR Programming: Making Life Easier Using Bar Codes”.

  • Bach, J. R., C. Fuller, A. Gupta, A. Hampapur, B. Horowitz, R. Humphrey, R. C. Jain, and C. Shu. Virage image search engine: an open framework for image management. In Symposium on Electronic Imaging: Science and Technology—Storage & Retrieval for Image and Video Databases IV, pages 76-87. IS&T/SPIE, 1996.

  • Bagley, H. & Sloan, J., “Optical Processing: Ready For Machine Vision?”, Photonics Spectra, August 1993, pp. 101-106.

  • Bains, S., “Trained Neural Network Recognizes Faces”, Laser Focus World, June, 1993, pp. 26-28.

  • Baker, Gregory L., & Jerry P Gollub: “Chaotic Dynamics: An Introduction”, Cambridge University Press, Cambridge, 1990.

  • Baldwin, William, “Just the Bare Facts, Please”, Forbes Magazine, Dec. 12, 1988.

  • Ballard, D. H., and Brown, C. M., Computer Vision, Prentice Hall, Englewood Cliffs, N.J. (1982); Optical Engineering 28:5 (May 1988)(Special Issue on product inspection).

  • Barber et al., “Ultimedia Manager: Query by Image Content and its Applications”, IEEE, January 1994, pp. 424-429.

  • Barnsley et al., “A Better Way to Compress Images”, Byte, January 1988, pp. 213-225.

  • Barnsley et al., “Chaotic Compression”, Computer Graphics World, November 1987.

  • Barnsley et al., “Harnessing Chaos For Image Synthesis”, Computer Graphics, 22(4):131-140 (August, 1988).

  • Barnsley et al., “Hidden Variable Fractal Interpolation Functions”, School of Mathematics, Georgia Institute of Technology, Atlanta, Ga. 30332, July, 1986.

  • Barnsley, M., L. Anson, “Graphics Compression Technology”, SunWorld, October 1991, pp. 42-52.

  • Barnsley, M. F., A. Jacquin, F. Malassenet, L. Reuter & A. D. Sloan, ‘Harnessing chaos for image synthesis’, Computer Graphics, vol 22 no 4 pp 131-140, (August, 1988).

  • Barnsley, M. F., A. E. Jacquin, ‘Application of recurrent iterated function systems to images’, Visual Comm. and Image Processing, vol SPIE-1001, 1988.

  • Barnsley, M. F., “Fractals Everywhere”, Academic Press, Boston, Mass., 1988.

  • Barnsley, M. F., and Demko, S., “Iterated Function Systems and The Global Construction of Fractals”, Proc. R. Soc. Lond., A399:243-275 (1985).

  • Barnsley, M. F., Ervin, V., Hardin, D., Lancaster, J., “Solution of an Inverse Problem for Fractals and Other Sets”, Proc. Natl. Acad. Sci. U.S.A., 83:1975-1977 (April 1986).

  • Barros, et al. “Indexing Multispectral Images for Content-Based Retrieval”, Proc. 23rd AIPR Workshop on Image and Information Retrieval, Proc. 23rd Workshop, Washington, D.C., October 1994, pp. 25-36.

  • Batchelor, B. G., “Pattern Recognition, Ideas in Practice”, Plenum Press, London and New York, (1978).

  • Batchelor, B. G., “Practical Approach to Pattern Classification”, Plenum Press, London and New York, (1974).

  • Baxes, Gregory A., “Digital Signal Processing, A Practical Primer”, Prentice-Hall, Englewood Cliffs, N.J. (1984).

  • Beaumont J M, “Image data compression using fractal techniques”, British Telecom Technological Journal 9(4):93-108 (1991).

  • Belkin, N. J., Croft, W. B., “Information Filtering And Information Retrieval: Two Sides of the Same Coin?”, Communications of the ACM, December 1992, vol. 35, No. 12, pp. 29-38.

  • Bellman, R. E., L. A. Zadeh, “Decision making in a fuzzy environment”, Management Science, 17(4) (December 1970).

  • Bender, M., “EFTS: Electronic Funds Transfer Systems”, Kennikat Press, Port Washington, N.Y., pp. 43-46 1975.

  • Bensch, U., “VPV—VIDEOTEXT PROGRAMS VIDEORECORDER”, IEEE Transactions on Consumer Electronics, Vol. 34, No. 3, 788-792 (1988).

  • Berger, Ivan, “Secrets of the Universals”, Video, February 1989, 45-47+.

  • Beringer, D. B., “A Comparative Evaluation of Calculator Watch Data Entry Technologies: Keyboards to Chalkboards”, Applied Ergonomics, December 1985, 275-278.

  • Berniker, M., “Nielsen plans Internet Service,” Broadcasting & Cable, 125(30):34 (1995, Jul. 24).

  • Berry, Deanne, et al. In an Apr. 10, 1990 news release, Symantec announced a new version of MORE™.

  • Berry, Jonathan, “A Potent New Tool for Selling Database Marketing”, Business Week, Sep. 5, 1994, pp. 34-40.

  • Berry, M V, I C Percival & N O Weiss: “Dynamical Chaos”, The Royal Society, London, 1987, Proceedings of a Royal Society Discussion Meeting held on 4 & 5 Feb. 1987.

  • Bestler, Caitlin: Flexible Data Structures and Interface Rituals For Rapid Development of OSD Applications; 93 NCTA Tech. Papers; Jun. 6, 1993; pp. 223-236.

  • Betts, M., “Sentry cuts access to naughty bits,” Computers and Security, vol. 14, No. 7, p. 615 (1995).

  • Bhatnagar, R. K., L. N. Kamal, “Handling uncertain information: a review of numeric and non-numeric methods”, Uncertainty in Artificial Intelligence, L. N. Kamal and J. F. Lemmer, Eds. (1986).

  • Bier, E. A. et al. “MMM: A User Interface Architecture for Shared Editors on a Single Screen,” Proceedings of the ACM Symposium on User Interface Software and Technology, Nov. 11-13, 1991, p. 79.

  • Bimbo et al., “Sequence Retrieval by Contents through Spatio Temporal Indexing”, IEEE on CD-ROM, pp. 88-92, Aug. 24, 1993.

  • Bimbo, A. D., et al, “3-D Visual Query Language for Image Databases”, Journal Of Visual Languages & Computing, 1992, pp. 257-271.

  • Binaghi, E., et al, “Indexing and Fuzzy Logic Based Retrieval of Color Images”, Visual Database Systems, II, 1992, pp. 79-92.

  • Binaghi, E., et al., “A Knowledge-Based Environment for Assessment of Color Similarity”, Proc. 2nd Annual Conference on Topics for AI, pp. 268-285 (1990).

  • Bishop, Edward W., and Guinness, G. Victor Jr., “Human Factors Interaction with Industrial Design”, Human Factors, 8(4):279-289 (August 1966).

  • Blair, D., R. Pollack, “La logique du choix collectif” Pour la Science (1983).

  • Bolot, J.; Turletti, T. & Wakeman, I.; “Scalable Feedback Control for Multicast Video Distribution In the Internet”, Computer Communication Review, vol. 24, No. 4 Oct. 1994, Proceedings of SIGCOMM 94, pp. 58-67.

  • Bos et al., “SmartCash: a Practical Electronic Payment System”, pp. 1-8; August 1990.

  • Boy, Guy A., Intelligent Assistant Systems, Harcourt Brace Jovanovich, 1991, uses the term “Intelligent Assistant Systems”.

  • Bristol, E. H., & T. W. Kraus, “Life with Pattern Adaptation”, Proceedings 1984 American Control Conference, pp. 888-892, San Diego, Calif. (1984).

  • Brown, Edward, “Human Factors Concepts For Management”, Proceedings of the Human Factors Society, 1973, 372-375.

  • Brown, Robert: “Statistical Forecasting for Inventory Control”, McGraw-Hill Book Co., New York, 1958.

  • Bruce, J W, & P J Giblin: “Curves and Singularities”, Cambridge University Press, Cambridge, 1992.

  • Brugliera, Vito, “Digital On-Screen Display—A New Technology for the Consumer Interface”, Symposium Record Cable Sessions. Jun. 11, 1993, pp. 571-586.

  • Bulkeley, Debra, “The Smartest House in America”, Design News, Oct. 19, 1987, 56-61.

  • Burk et al, “Value Exchange Systems Enabling Security and Unobservability”, Computers & Security, 9 1990, pp. 715-721.

  • Burr, D. J., “A Neural Network Digit Recognizer”, Proceedings of the 1986 IEEE International Conference of Systems, Man and Cybernetics, Atlanta, Ga., pp. 1621-1625.

  • Bursky, D., “Improved DSP ICs Eye New Horizons”, Electronic Design, Nov. 11, 1993, pp. 69-82.

  • Bussey, H. E., et al., “Service Architecture, Prototype Description, and Network Implications of a Personalized Information Grazing Service,” IEEE Multiple Facets of Integration Conference Proceedings, vol. 3, No. Conf. 9, Jun. 3, 1990, pp. 1046-1053.

  • Byte Magazine, January 1988.

  • Caffery, B., “Fractal Compression Breakthrough for Multimedia Applications”, Inside, Oct. 9, 1991.

  • Card, Stuart K., “A Method for Calculating Performance times for Users of Interactive Computing Systems”, IEEE, 1979, 653-658.

  • Carlson, Mark A., “Design Goals for an Effective User Interface”, Human Interfacing with Instruments, Electro/82 Proceedings, 3/1/1-3/1/4.

  • Carpenter, G. A., S. Grossberg, “The Art of Adaptive Pattern Recognition by a Self-Organizing Neural Network”, IEEE Computer, March 1988, pp. 77-88.

  • Carroll, Paul B., “High Tech Gear Draws Cries of ‘Uncle’”, Wall Street Journal, Apr. 27, 1988, 29.

  • Casasent, D., and Tescher, A., Eds., “Hybrid Image and Signal Processing II”, Proc. SPIE Technical Symposium, April 1990, Orlando Fla. 1297 (1990).

  • Casasent, D., et al., “General I and Q Data Processing on a Multichannel AO System”, Applied Optics, 25(18):3217-24 (Sep. 15, 1986).

  • Casasent, D., Photonics Spectra, November 1991, pp. 134-140.

  • Casdagli, Martin, & Stephen Eubank: “Nonlinear Modelling and Forecasting”, Addison-Wesley Publishing Co., Redwood City, 1992.

  • Case Study: The CIRRUS Banking Network, Comm. ACM, vol. 28, No. 8, pp. 797-808, August 1985.

  • Caudill, M., “Neural Networks Primer-Part III”, AI Expert, June 1988, pp. 53-59.

  • Cawkell, A. E., “Current Activities in Image Processing Part III: Indexing Image Collections”, CRITique, vol. 4, No. 8, May 1992, pp. 1-11, ALSIB, London.

  • Chalmers, M., Chitson, P., “Bead: Explorations In Information Visualization”, 15th Ann. Int'l SIGIR 92/Denmark-June 1992, pp. 330-337.

  • Chang et al., “Image Information Systems: Where Do We Go From Here?”, IEEE Transactions on Knowledge and Data Engineering, vol. 4, No. 5, October 1992, pp. 431-442.

  • Chang et al., “Intelligent Database Retrieval by Visual Reasoning”, PROC Fourteenth Annual International

  • Computer Software and Application Conference, 31 Oct.-1 November 1990, pp. 459-464.

  • Chang, C., “Retrieving the Most Similar Symbolic Pictures from Pictorial Databases”, Information Processing & Management, vol. 28, No. 5, 1992.

  • Chang, C., et al, “Retrieval of Similar Pictures on Pictorial Databases”, Pattern Recognition, vol. 24, No. 7, 1991, pp. 675-680.

  • Chang, N. S., et al., “Picture Query Languages for Pictorial Data-Base Systems”, Computer vol. 14, No. 11, pp. 23-33 (November 1981).

  • Chang, N. S., et al., “Query-by-Pictorial Example”, IEEE Transactions on Software Engineering, vol. SE-6, No. 6, pp. 519-524 (November 1980).

  • Chang, S., et al, “An Intelligent Image Database System”, IEEE Transactions On Software Engineering, vol. 14, No. 5, May 1988, pp. 681-688.

  • Chang, S.-F, Compressed-domain techniques for image/video indexing and manipulation. In Proceedings, I.E.E.E. International Conference on Image Processing, Washington, D.C., October 1995. invited paper to the special session on Digital Library and Video on Demand.

  • Chang, S.-K., Principles of Pictorial Information Systems Design. Prentice Hall, 1989.

  • Chang, S.-K., Q. Y. Shi, and C. Y. Yan. “Iconic indexing by 2-D strings”. IEEE Trans. On Pattern Analysis And Machine Intelligence, vol. 9, No. 3, May 1987, pp. 413-428.

  • Chang, Yuh-Lin, Zeng, Wenjun, Kamel, Ibrahim, Alonso, Rafael, “Integrated Image and Speech Analysis for Content-Based Video Indexing”.

  • Chao, J. J., E. Drakopoulos, C. C. Lee, “An evidential reasoning approach to distributed multiple hypothesis detection”, Proceedings of the 20th Conference on decision and control, Los Angeles, Calif., December 1987.

  • Chao, T.-H.; Hegblom, E.; Lau, B.; Stoner, W. W.; Miceli, W. J., “Optoelectronically implemented neural network with a wavelet preprocessor”, Proceedings of the SPIE—The International Society for Optical Engineering, 2026:472-82(1993).

  • Chapra, Steven C, & Raymond P Canale: “Numerical Methods for Engineers”, McGraw-Hill Book Co., New York, 1988.

  • Charles, S., et al, “Using Depictive Queries to Search Pictorial Databases”, Human Computer Interaction, 1990, pp. 493-498.

  • Chassery, J. M., et al., “An Interactive Segmentation Method Based on Contextual Color and Shape Criterion”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, No. 6 (November 1984).

  • Chaum et al, “Untraceable Electronic Cash”, Advances in Cryptology, 1988, pp. 319-327.

  • Chaum, D. “Security without Identification: Card Computers to Make Big Brother Obsolete”, Communications of the ACM, 28(10), October 1985, pp. 1030-1044.

  • Chaum, D. “Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms”, Communications of the ACM, vol. 24, No. 2, February, 1981.

  • Chaum, D., “Achieving Electronic Privacy”, Scientific American, August 1992, pp. 96-101.

  • Chaum, D. L. et al.; “Implementing Capability-Based Protection Using Encryption”; Electronics Research Laboratory, College of Engineering, University of California, Berkeley, Calif.; Jul. 17, 1978.

  • Chen et al., “Adaptive Coding of Monochrome and Color Images”, November 1977, pp. 1285-1292.

  • Chen, Z., et al, “Computer Vision for Robust 3D Aircraft Recognition with Fast Library Search”, Pattern Recognition, vol. 24, No. 5, pp. 375-390, 1991, printed in Great Britain.

  • Cheong, C. K.; Aizawa, K.; Saito, T.; Hatori, M., “Adaptive edge detection with fractal dimension”, Transactions of the Institute of Electronics, Information and Communication Engineers D-II, J76D-II(11):2459-63 (1993).

  • Child, Jeff, “H.324 Paves Road For Mainstream Video Telephony”, Computer Design, January 1997, pp. 107-110.

  • Chua, T.-S., S.-K. Lim, and H.-K. Pung. Content-based retrieval of segmented images. In Proc. ACM Intern. Conf. Multimedia, October 1994.

  • Cobb, Nathan, “I don't get it”, Boston Sunday Globe Magazine, Mar. 25, 1990, 23-29.

  • Cohen, Danny; “Computerized Commerce”; ISI Reprint Series ISI/RS-89/243; October, 1989; Reprinted from Information Processing 89, Proceedings of the IFIP World Computer Congress, held Aug. 28-Sep. 1, 1989.

  • Cohen, Danny; “Electronic Commerce”; University of Southern California, Information Sciences Institute, Research Report ISI/RR-89-244; October, 1989.

  • Cohen, R., “FullPixelSearch Helps Users Locate Graphics”, MacWeek, Aug. 23, 1993, p. 77.

  • Commaford, C., “User-Responsive Software Must Anticipate Our Needs”, PC Week, May 24, 1993.

  • Common European Newsletter, Multimedia Content Manipulation and Management, www.esat.kuleuven.ac.be.

  • CompuServe Information Service Users Guide, CompuServe International, 1986, pp. 109-114.

  • Computer Shopper, November 1994, “Internet for Profit”, pp. 180-182, 187, 190-192, 522-528, 532, 534.

  • Computer Visions, Graphics, and Image Processing 1987, 37:54-115.

  • Computer, Vol. 28(9), September 1995.

  • Computers and Biomedical Research 5, 388-410 (1972).

  • Compuvid Sales Manual (date unknown).

  • Consumer Digest advertisement: Xpand Your TV's Capability: Fall/Winter 1992; p. 215.

  • Cooper, L. N., “A Possible Organization of Animal Memory and Learning”, Nobel 24, (1973), Collective Properties of Physical Systems, pp. 252-264.

  • Corporate Overview, Virage Incorporated web site; pp. 1-4.

  • Corripio, A. B., “Tuning of Industrial Control Systems”, Instrument Society of America, Research Triangle Park, N.C. (1990) pp. 65-81.

  • Cox, Ingemar J., et al., “PicHunter: Bayesian Relevance Feedback for Image Retrieval,” Proc. of the ICPR '96, IEEE, pp. 361-369.

  • Crawford et al., “Adaptive Pattern Recognition Applied To An Expert System For Fault Diagnosis In Telecommunications Equipment”, pp. 10/1-8 (Inspec. Abstract No. 86C010699, Inspec IEE (London) & IEE Coll. on “Adaptive Filters”, Digest No. 76, Oct. 10, 1985).

  • Cutting, D. R.; Karger, D. R.; Pedersen, J. O. & Tukey, J. W. “Scatter/Gather: A Cluster-based Approach to Browsing Large Document Collections”, 15 Ann. Int'l SIGIR '92, ACM, 1992, pp. 318-329.

  • Cvitanovic, Predrag: “Universality in Chaos”, Adam Hilger, Bristol, 1989.

  • Daly, Donal: “Expert Systems Introduced”, Chartwell-Bratt, Lund, 1988.

  • Damashek, M., Gauging Similarity via N-Grams: Language-Independent Sorting, Categorization, and Retrieval of Text, pp. 1-11, Jan. 24, 1995.

  • Danielsson, Erik, et al.; “Computer Architectures for Pictorial Inf. Systems”; IEEE Computer, November, 1981; pp. 53-67.

  • Data Partner 1.0 Simplifies DB Query Routines, PC Week, Sep. 14, 1992, pp. 55 & 58.

  • Davis, Andrew W., “Hi Grandma!: Is It Time for TV Set POTS Videoconferencing?”, Advanced Imaging, pp. 45-49 (March 1997).

  • Davis, Andrew W., “The Video Answering Machine: Intel ProShare's Next Step”, Advanced Imaging, pp. 28-30 (March 1997).

  • Davis, Fred, “The Great Look-and-Feel Debate”, A+, 5:941 (July 1987).

  • Deering, S.; Estrin, D.; Farinacci, D.; Jacobson, V.; Liu, C.; Wei, L; “An Architecture for Wide-Area Multicast Routing”, Computer Communication Review, vol. 24, No. 4, October 1994, Proceedings of SIGCOMM 94, pp. 126-135.

  • Dehning, Waltraud, Essig Heidrun, and Maass, Susanne, The Adaptation of Virtual Man-Computer Interfaces to User Requirements in Dialogs, Germany: Springer-Verlag, 1981.

  • Dempster, A. P., “A generalization of Bayesian inference”, Journal of the Royal Statistical Society, Vol. 30, Series B (1968).

  • Dempster, A. P., “Upper and lower probabilities induced by a multivalued mapping”, Annals of mathematical Statistics, no. 38 (1967).

  • Denker; 1984 International Test Conf., October 1984, Philadelphia, Pa.; pp. 558-563.

  • Derra, Skip, “Researchers Use Fractal Geometry”, Research and Development Magazine, March 1988.

  • Diggle, Peter J: “Time Series: A Biostatistical Introduction”, Clarendon Press, Oxford, 1990.

  • DivX standard.

  • Donnelley, J. E., “WWW media distribution via Hopwise Reliable Multicast,” Computer Networks and ISDN Systems, vol. 27, No. 6, pp. 781-788 (April, 1995).

  • Donovan, J., “Intel/IBM's Audio-Video Kernel”, Byte, December, 1991, pp. 177-202.

  • Drazin, P G: “Nonlinear Systems”, Cambridge University Press, Cambridge, 1992.

  • Dubois, D., “Modeles mathematiques de l'imprecis et de l'incertain en vue d'applications aux techniques d'aide a la decision”, Doctoral Thesis, University of Grenoble (1983).

  • Dubois, D., N. Prade, “Combination of uncertainty with belief functions: a reexamination”, Proceedings 9th International Joint Conference on Artificial Intelligence, Los Angeles (1985).

  • Dubois, D., N. Prade, “Fuzzy sets and systems-Theory and applications”, Academic Press, New York (1980).

  • Dubois, D., N. Prade, “Theorie des possibilites: application a la representation des connaissances en informatique”, Masson, Paris (1985).

  • Duda, R. O., P. E. Hart, N. J. Nilsson, “Subjective Bayesian methods for rule-based inference systems”, Technical Note 124, Artificial Intelligence Center, SRI International.

  • Dukach, Semyon, “SNPP: A Simple Network Payment Protocol”, MIT Laboratory for Computer Science, Cambridge, Mass., 1993.

  • Dukach, Semyon; Prototype Implementation of the SNPP Protocol; allspicks.mit.edu; 1992.

  • Dunning, B. B., “Self-Learning Data-Base For Automated Fault Localization”, IEEE, 1979, pp. 155-157.

  • EDN, May 11, 1995, pp. 40-106.

  • Edwards, John R., “Q&A: Integrated Software with Macros and an Intelligent Assistant”, Byte Magazine, January 1986, vol. 11, Issue 1, pp. 120-122, critiques the Intelligent Assistant by Symantec Corporation.

  • Ehrenreich, S. L., “Computer Abbreviations—Evidence and Synthesis”, Human Factors, 27(2):143-155 (April 1985).

  • Ekeland, Ivan: “Mathematics and the Unexpected”, The University of Chicago Press, Chicago, 1988.

  • Falconer, Kenneth: “Fractal Geometry”, John Wiley & Sons, Chichester, 1990.

  • Electronic Engineering Times, Oct. 28, 1991, p. 62, “IBM Points a New Way”.

  • Elliott, “Watch—Grab—Arrange—See: Thinking with Motion Images via Streams and Collages”, Ph.D. Thesis, MIT, February 1993.

  • Elofson, G. and Konsynski, B., “Delegation Technologies: Environmental Scanning with Intelligent Agents”, Journal of Management Information Systems, Summer 1991, vol. 8, Issue 1, pp. 37-62.

  • Elton, J., “An Ergodic Theorem for Iterated Maps”, Journal of Ergodic Theory and Dynamical Systems, 7 (1987).

  • Even et al; “Electronic Wallet”, pp. 383-386; 1983.

  • Faloutsos, C., et al, “Efficient and Effective Querying by Image Content”, Journal of Intelligent Information Systems: Integrating Artificial Intelligence and Database Technologies, vol. 3-4, No. 3, July 1994, pp. 231-262.

  • Farrelle, Paul M. and Jain, Anil K., “Recursive Block Coding—A New Approach to Transform Coding”, IEEE Transactions on Communications, Com. 34(2) (February 1986).

  • Fassihi, Theresa & Bishop, Nancy, “Cable Guide Courting National Advertisers,” Adweek, Aug. 8, 1988.

  • Fisher Y, “Fractal image compression”, Siggraph 92.

  • Fitzpatrick, J. M., J. J. Grefenstette, D. Van Gucht, “Image Registration by Genetic Search”, Conf. Proc., IEEE Southeastcon 1984, pp. 460-464.

  • Flickner, et al. “Query by Image and Video Content, the QBIC System”, IEEE Computer 28(9):23-32, 1995.

  • Foley, J. D., Wallace, V. L., Chan, P., “The Human Factors of Computer Graphics Interaction Techniques”, IEEE CG&A, November 1984, pp. 13-48.

  • Foltz, P. W., Dumais, S. T., “Personalized Information Delivery: An Analysis Of Information Filtering Methods”, Communications of the ACM, December 1992, vol. 35, No. 12, pp. 51-60.

  • Fractal Image Compression, Michael F. Barnsley and Lyman P. Hurd, ISBN 0-86720-457-5, ca. 250 pp.

  • Fractal Image Compression: Theory and Application, Yuval Fisher (ed.), Springer Verlag, New York, 1995, ISBN 0-387-94211-4.

  • Fractal Modelling of Biological Structures, School of Mathematics, Georgia Institute of Technology (date unknown).

  • Franklin, Gene F, J David Powell & Abbas Emami-Naeini: “Feedback Control of Dynamic Systems”, Addison-Wesley Publishing Co. Reading, 1994.

  • Freeman, W. T., et al, “The Design and Use of Steerable Filters”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, No. 9, September 1991, pp. 891-906.

  • Friedman, M. B., “An Eye Gaze Controlled Keyboard”, Proceedings of the 2nd International Conference on Rehabilitation Engineering, 1984, 446-447.

  • Fu, Sequential Methods in Pattern Recognition and Machine Learning, Academic, NY, N.Y. 1968.

  • Fua, P. V., “Using probability density functions in the framework of evidential reasoning”, in Uncertainty in Knowledge Based Systems, B. Bouchon, R. R. Yager, Eds., Springer Verlag (1987).

  • Garretson, R., “IBM Adds ‘Drawing Assistant’ Design Tool to Graphics Series”, PC Week, Aug. 13, 1985, vol. 2, Issue 32, p. 8.

  • Gautama, S., D'Haeyer, J., “Learning Relational Models of Shape: A Study of the Hypergraph Formalism”.

  • Gautama, S., D'Haeyer, J. P. F., “Context Driven Matching in Structural Pattern Recognition”.

  • Gellert, W, H Kustner, M Hellwich & H Kastner: “The VNR Concise Encyclopedia of Mathematics”, Van Nostrand Reinhold Co., New York, 1975.

  • Gelman, A. D., et al.: A Store-And-Forward Architecture For Video-On-Demand Service; ICC 91 Conf.; June 1991; pp. 842-846.

  • George E P Box & Gwilym M Jenkins: “Time Series Analysis: Forecasting and Control”, Holden Day, San Francisco, 1976.

  • Gessler, S. and Kotulla A., “PDAs as mobile WWW browsers,” Computer Networks and ISDN Systems, vol. 28, No. 1-2, pp. 53-59 (December 1995).

  • Gevers, T., et al, “Enigma: An Image Retrieval System”, IEEE 11th IAPR International Conference On Pattern Recognition, 1992, pp. 697-700.

  • Gevers, T., et al, “Indexing of Images by Pictorial Information”, Visual Database Systems, II, 1992 IFIP, pp. 93-101.

  • Gifford, D., “Notes on Community Information Systems”, MIT LCS TM-419, December 1989.

  • Gifford, David K.; “Cryptographic Sealing for Information Secrecy and Authentication”; Stanford University and Xerox Palo Alto Research Center; Communications of the ACM; vol. 25, No. 4; April, 1982.

  • Gifford, David K.; “Digital Active Advertising”; U.S. patent application Ser. No. 08/168,519; filed Dec. 16, 1993.

  • Gilfoil, D., and Mauro, C. L., “Integrating Human Factors and Design: Matching Human Factors Methods up to Product Development”, C. L. Mauro Assoc., Inc., 1-7.

  • Gleick, James, “Chaos—Making a New Science”, Heinemann, London, 1988.

  • Gligor, Virgil D. et al.; “Object Migration and Authentication”; IEEE Transactions on Software Engineering; vol. SE-5, No. 6; November, 1979.

  • Glinert-Stevens, Susan, “Microsoft Publisher: Desktop Wizardry”, PC Sources, February, 1992, vol. 3, Issue 2, p. 357.

  • Goble, C., et al, “The Manchester Multimedia Information System”, Proceedings of IEEE Conference, Eurographics Workshop, April, 1991, pp. 244-268.

  • Gogoussis et al., Proc. SPIE Intl. Soc. Opt. Eng., November 1984, Cambridge, Mass., pp. 121-127.

  • Goldberg, Cheryl, “IBM Drawing Assistant: Graphics for the EGA”, PC Magazine, Dec. 24, 1985, vol. 4, Issue 26, p. 255.

  • Gong et al, “An Image Database System with Content Capturing and Fast Image Indexing Abilities”, Proc. of the International Conference on Multimedia Computing and Systems, pp. 121-130, May 19, 1994.

  • Gonzalez et al., Digital Image Processing, Addison-Wesley, Reading, Mass., 1977.

  • Gonzalez, Rafael C., “Digital Image Processing”, Addison-Wesley, Reading, Mass. (1987).

  • Gould, John D., Boies, Stephen J., Meluson, Antonia, Rasammy, Marwan, and Vosburgh, Ann Marie, “Entry and Selection Methods For Specifying Dates”, Human Factors, 32(2):199-214 (April 1989).

  • Graf S, “Barnsley's Scheme for the Fractal Encoding of Images”, Journal Of Complexity, V8, 72-78 (1992).

  • Great Presentations advertisement: Remote, Remote; 1987; p. 32H.

  • Green, Lee, “Thermo Tech: Here's a common sense guide to the new thinking thermostats”, Popular Mechanics, October 1985, 155-159.

  • Grosky, W., et al, “A Pictorial Index Mechanism for Model-based Matching”, Data & Knowledge Engineering 8, 1992, pp. 309-327.

  • Grosky, W., et al, “Index-based Object Recognition in Pictorial Data Management”, Computer Vision, 1990, pp. 416-436.

  • Grossberg, S., G. Carpenter, “A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine,” Computer Vision, Graphics, and Image Processing, 37:54-115 (1987).

  • Grudin, Jonathan, “The Case Against User Interface Consistency”, MCC Technical Report Number ACA-HI-002-89, January 1989.

  • Gudivada, V. N., and V. V. Raghavan. Design and evaluation of algorithms for image retrieval by spatial similarity. ACM Trans. on Information Systems, 13(2), April 1995.

  • Gudivada, V., et al, “A Spatial Similarity Measure for Image Database Applications”, Technical Report 91-1, Department of Computer Science, Jackson, Miss., 39217, 1990-1991.

  • Guenther, O., and A. Buchmann. Research issues in spatial databases. In ACM SIGMOD Record, volume 19, December 1990.

  • Gullichsen E., E. Chang, “Pattern Classification by Neural Network: An Experiment System for Icon Recognition,” ICNN Proceeding on Neural Networks, March 1987, pp. IV-725-32.

  • Gupta, Amarnath; Weymouth, Terry & Jain, Ramesh, “Semantic Queries With Pictures: The VIMSYS Model”, Proceedings of the 17th International Conference on Very Large Data Bases, pp. 69-79, Barcelona, September, 1991.

  • Hafner, J., H. S. Sawhney, W. Equitz, M. Flickner, and W. Niblack. Efficient color histogram indexing for quadratic form distance functions. IEEE Trans. Pattern Anal. Machine Intell., July 1995.

  • Haines, R. W., “HVAC Systems Design Handbook”, TAB Professional and Reference Books, Blue Ridge Summit, Pa. (1988) pp. 170-177.

  • Harris, C. J., & S. A. Billings, “Self-Tuning and Adaptive Control: Theory and Applications”, Peter Peregrinus LTD (1981) pp. 20-33.

  • Harty et al., “Case Study: The VISA Transaction Processing System,” 1988.

  • Haruki, K. et al., “Pattern Recognition of Handwritten Phonetic Japanese Alphabet Characters”, International Joint Conference on Neural Networks, Washington, D.C., January 1990, pp. II-515 to II-518.

  • Harvey, Michael G., and Rothe, James T., “VideoCassette Recorders: Their Impact on Viewers and Advertisers”, Journal of Advertising, 25:19-29 (December/January 1985).

  • Hasegawa, J., et al, “Intelligent Retrieval of Chest X-Ray Image Database Using Sketches”, Systems and Computers in Japan, 1989, pp. 29-42.

  • Hawkins, William J., “Super Remotes”, Popular Science, February 1989, 76-77.

  • Hayashi, Y., et al., “Alphanumeric Character Recognition Using a Connectionist Model with the Pocket Algorithm”, Proceedings of the International Joint Conference on Neural Networks, Washington, D.C. Jun. 18-22, 1989, vol. 2, pp. 606-613.

  • Hayes, H. I.; Solka, J. L.; Priebe, C. E.; “Parallel computation of fractal dimension”, Proceedings of the SPIE—The International Society for Optical Engineering, 1962:219-30 (1993).

  • Hendrix, Gary G. and Walter, Brett A., “The Intelligent Assistant: Technical Considerations Involved in Designing Q&A's Natural-language Interface”, Byte Magazine, December 1987, vol. 12, Issue 14, p. 251.

  • Henke, Lucy L., and Donohue, Thomas R., “Functional Displacement of Traditional TV Viewing by VCR Owners”, Journal of Advertising Research, 29:18-24 (April-May 1989).

  • Hinton et al., “Boltzmann Machines: Constraint Satisfaction Networks that Learn”, Tech. Report CMU-CS-84-119, Carnegie-Mellon Univ., May 1984.

  • Hirata, K., et al., “Query by Visual Example, Content Based Image Retrieval”, Advances in Database Technology - EDBT '92, Springer-Verlag, Berlin, 1992, pp. 56-71.

  • Hirzalla et al., “A Multimedia Query User Interface”, IEEE on CD-ROM, pp. 590-593, Sep. 5, 1995.

  • Hirzinger, G., Landzettel, K., “Sensory Feedback Structures for Robots with Supervised Learning”, IEEE Conf. on Robotics and Automation, St. Louis, March 1985.

  • Hoare, F.; de Jager, G., “Neural networks for extracting features of objects in images as a pre-processing stage to pattern classification”, Proceedings of the 1992 South African Symposium on Communications and Signal Processing. COMSIG '92 (Cat. No. 92TH0482-0). Inggs, M. (Ed.), p. 239-42 (1992).

  • Hoban, Phoebe, “Stacking the Decks”, New York, Feb. 16, 1987, 20:14.

  • Hoffberg, Linda I., “An Improved Human Factored Interface for Programmable Devices: A Case Study of the VCR”, Master's Thesis, Tufts University (Master of Sciences in Engineering Design, November, 1990).

  • Hoffberg, Linda I., “Designing a Programmable Interface for a Video Cassette Recorder (VCR) to Meet a User's Needs”, Interface 91 pp. 346-351 (1991).

  • Hoffberg, Linda I., “Designing User Interface Guidelines For Time-Shift Programming of a Video Cassette Recorder (VCR)”, Proc. of the Human Factors Soc. 35th Ann. Mtg. pp. 501-504 (1991).

  • Hoffman, D. L. et al., “A New Marketing Paradigm for Electronic Commerce,” (1996, Feb. 19), www2000.ogsm.vanderbilt.edu novak.

  • Hollatz, S. A., “Digital image compression with two-dimensional affine fractal interpolation functions”, Department of Mathematics and Statistics, University of Minnesota-Duluth, Technical Report 91-2.

  • Hong Kong Enterprise advertisement: Two Innovative New Consumer Products From SVI; November 1988; p. 379.

  • Hongjiang, et al., “A Video Database System for Digital Libraries”, Digital Libraries, pp. 253-264, May 1994.

  • Hooge, Charles, “Fuzzy logic Extends Pattern Recognition Beyond Neural Networks”, Vision Systems Design, January 1998, pp. 32-37.

  • Hopfield et al., “Computing with Neural Circuits: A Model”, Science, 233:625-633 (8 Aug. 1986).

  • Hopfield, “Neural Networks and Physical Systems with Emergent Collective Computational Abilities”, Proc. Natl. Acad. Sci. USA, 79:2554-2558 (April 1982).

  • Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons”, Proc. Natl. Acad. Sci. USA, 81:3088-3092 (May 1984).

  • Horgan, H., “Medical Electronics”, IEEE Spectrum, January 1984, pp. 90-93.

  • Hou et al., “Medical Image Retrieval by Spatial Features”, IEEE on CD-ROM, pp. 1364-1369, Oct. 18, 1992.

  • Howard, Bill, “Point and Shoot Devices”, PC Magazine, 6:95-97 (August 1987).

  • Hsu et al., “Pattern Recognition Experiments in the Mandala/Cosine Domain”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 5, September 1983, pp. 512-520.

  • Hu et al., “Pattern Recognition by Moment Invariants”, Proc. IRE, vol. 49, 1961, p. 1428.

  • Hunter, Jane, “The Application of Metadata Standards to Video Indexing” www.dtsc.edu.au (<12/24/98).

  • Hurtgen, B.; Buttgen, P., “Fractal approach to low rate video coding”, Proceedings of the SPIE—The International Society for Optical Engineering, 2094(pt. 1): 120-31(1993).

  • Hutheesing, H., “Interactivity for the passive”, Forbes Magazine, Dec. 6, 1993 (© Forbes Inc. 1993) (2 pages).

  • IEEE Communications Magazine, vol. 32, No. 5, May 1994, New York, N.Y., US, pp. 68-80, XP 000451097, Chang et al., “An Open Systems Approach to Video on Demand”.

  • IEEE-1394.

  • Iino et al., “An Object-Oriented Model for Spatio-Temporal Synchronization of Multimedia Information”, May, 1994.

  • Information describing BroadVision One-to-One Application System: “Overview,” p. 1; Further Resources on One-To-One Marketing, p. 1; BroadVision Unleashes the Power of the Internet with Personalized Marketing and Selling, pp. 1-3; Frequently Asked Questions, pp. 1-3; Products, p. 1; BroadVision One-To-One™, pp. 1-2; Dynamic Command Center, p. 1; Architecture that Scales, pp. 1-2; Technology, p. 1; Creating a New Medium for Marketing and Selling BroadVision One-To-One and the World Wide Web: A White Paper, pp. 1-15; www.broadvision.com (1996, Jan.-Mar.).

  • Information Network Institute, Carnegie Mellon University, Internet Billing Server, Prototype Scope Document, Oct. 14, 1993.

  • Information Processing 71, North-Holland Publishing Company (1972) pp. 1530-1533.

  • Ingemar J. Cox et al., “Target Testing and the PicHunter Bayesian Multimedia Retrieval System,” Proc. of the 3rd Forum on Research and Technology Advances in Digital Libraries, ADL '96, IEEE, pp. 66-75.

  • Intel Corporation, iPower Technology, Marketing Brochure, date unknown.

  • Intuit Corp. Quicken User's Guide, “Paying Bills Electronically”, pp. 171-192; undated.

  • Ioka, M., “A Method of Defining the Similarity of Images on the Basis of Color Information”, Bulletin Of The National Museum Of Ethnology Special Issue, pp. 229-244, No. 17, November 1992.

  • Irven, Judith H., et al., “Multi-Media Information Services: A Laboratory Study”, IEEE Communications Magazine, vol. 26, No. 6, June, 1988, pp. 24-44.

  • Ishizuka, M., “Inference methods based on extended Dempster and Shafer's theory for problems with uncertainty/fuzziness”, New Generation Computing, 1:159-168 (1983), Ohmsha, Ltd, and Springer Verlag.

  • ISO/IEC JTC1/SC29/WG11 N1733, MPEG97, July 1997, “MPEG-7 Context and Objectives (v.4—Stockholm)”.

  • ISO/IEC JTC1/SC29/WG11 N1735, MPEG97, July 1997—Stockholm, “MPEG-7 Applications Document”.

  • ISO/IEC JTC1/SC29/WG11 N1920, MPEG97, October 1997, “MPEG-7 Context and Objectives (v.5-Fribourg)”.

  • ISO/IEC JTC1/SC29/WG11 N2460, MPEG98, October 1998, “MPEG-7 Context and Objectives (v.10—Atlantic City)”.

  • ISO/IEC JTC1/SC29/WG11 N2461, MPEG98, October 1998—Atlantic City, “MPEG-7 Requirements”.

  • ISO/IEC JTC1/SC29/WG11 N2462, MPEG98, October 1998—Atlantic City, “MPEG-7 Applications”.

  • ISO/IEC JTC1/SC29/WG11 N2467, MPEG98, October 1998—Atlantic City, “MPEG-7 Content Set”.

  • Iyengar et al., “Code Designs for Image Browsing”, 1994.

  • Jackel, L. D., H. P. Graf, J. S. Denker, D. Henderson and I. Guyon, “An Application of Neural Net Chips: Handwritten Digit Recognition,” ICNN Proceeding, 1988, pp. II-107-15.

  • Jacobs, Charles E., Finkelstein, Adam, Salesin, David H., “Fast Multiresolution Image Querying”, Department of Computer Science, University of Washington, Seattle, Wash.

  • Jacobs, E. W., Y. Fisher and R. D. Boss. “Image Compression: A study of the Iterated Transform Method.” Signal Processing 29 (1992) 251-263.

  • Jacquin, A., “Image Coding Based on a Fractal Theory of Iterated Contractive Image Transformations”, IEEE Transactions on Image Processing, vol. 1, No. 1, p. 18, January 1992.

  • Jacquin, A., “A Fractal Theory of Iterated Markov Operators with Applications to Digital Image Coding”, PhD Thesis, Georgia Tech, 1989.

  • Jacquin, A., ‘Fractal image coding based on a theory of iterated contractive image transformations’, Proc. SPIE Visual Communications and Image Processing, 1990, pages 227-239.

  • Jacquin, A. E., ‘A novel fractal block-coding technique for digital images’, Proc. ICASSP 1990.

  • Jane Pauley Special, NBC TV News Transcript, Jul. 17, 1990, 10:00 PM.

  • Jean, J. S. N., et al., “Input Representation and Output Voting Considerations for Handwritten Numeral Recognition with Backpropagation”, International Joint Conference on Neural Networks, Washington, D.C., January 1990, pp. 1-408 to 1-411.

  • Jeffrey, R. J., “The logic of decision”, The University of Chicago Press, Ltd., London (1983)(2nd Ed.).

  • Jim Binkley & Leslie Young, Rama: An Architecture for Internet Information Filtering, Journal of Intelligent Information Systems: Integrating Artificial Intelligence and Database Technologies, vol. 5, No. 2, September 1995, pp. 81-99.

  • Jones, R., “Digital's World-Wide Web server: A case study,” Computer Networks and ISDN Systems, vol. 27, No. 2, pp. 297-306 (November 1994).

  • JPL New Technology Report NPO-20213, NASA Tech Brief Vol. 22, No. 4, Item #156 (April 1998).

  • Kato, T., “A Sketch Retrieval Method for Full Color Image Database-Query by Visual Example”, IEEE, Publication No. 0-8186-2910-X/92, 1992, pp. 530-533.

  • Kato, T., “Intelligent Visual Interaction with Image Database Systems Toward the Multimedia Personal Interface”, Journal Of Information Processing, vol. 14, No. 2, 1991, pp. 134-143.

  • Kato, T., et al, “A Cognitive Approach to Visual Interaction”, International Conference on Multimedia Information Systems, January, 1991, pp. 109-119.

  • Kato, T., et al, “Trademark: Multimedia Database with Abstracted Representation on Knowledge Base”, Proceedings Of The Second International Symposium On Interoperable Information Systems, pp. 245-252, November 1988.

  • Kato, T., et al, “Trademark: Multimedia Image Database System with Intelligent Human Interface”, Systems and Computers in Japan, 1990, pp. 33-46.

  • Kaufmann, A., “Introduction a la theorie des sous-ensembles flous”, Vols. 1-3, Masson, Paris (1975).

  • Kaye, Brian H: “A Random Walk Through Fractal Dimensions”, VCH Verlagsgesellschaft, Weinheim, 1989.

  • Keeney, R. L., B. Raiffa, “Decisions with multiple objectives: Preferences and value tradeoffs”, John Wiley and Sons, New York (1976).

  • Kellman, P., “Time Integrating Optical Signal Processing”, Ph.D. Dissertation, Stanford University, 1979, pp. 51-55.

  • Kelly et al. “Efficiency Issues Related to Probability Density Function Comparison”, SPIE vol. 2670, pp. 42-49 January 1996.

  • Kelly, P. M., et al. “Candid Comparison Algorithm for Navigating Digital Image Databases”, Proceedings 7th International Working Conference on Scientific and Statistical Database Management, pp. 252-258, 1994.

  • Kim, D. H.; Caulfield, H. J.; Jannson, T.; Kostrzewski, A.; Savant, G, “Optical fractal image processor for noise-embedded targets detection”, Proceedings of the SPIE—The International Society for Optical Engineering, vol. 2026, pp. 144-9 (1993) (SPIE Conf: Photonics for Processors, Neural Networks, and Memories, 12-15 Jul. 1993, San Diego, Calif., USA).

  • Kim, Y., “Chips Deliver Multimedia”, Byte, December 1991, pp. 163-173.

  • Knowlton, K., “Virtual Pushbuttons as a Means of Person-Machine Interaction”, Proc of Conf. Computer Graphics, Pattern Recognition and Data Structure, Beverly Hills, Calif., May 1975, pp. 350-352.

  • Koch, H., “Ergonomische Betrachtung von Schreibtastaturen”, Humane Production, 1, pp. 12-15 (1985).

  • Kohonen, “Self-Organization and Associative Memory”, Second Ed., 1988, Springer-Verlag, pp. 199-209.

  • Kolson, Ann, “Computer wimps drown in a raging sea of technology”, The Hartford Courant, May 24, 1989, B1.

  • Kortegaard, B. L., “PAC-MAN, a Precision Alignment Control System for Multiple Laser Beams Self-Adaptive Through the Use of Noise”, Los Alamos National Laboratory, date unknown.

  • Kortegaard, B. L., “Superfine Laser Position Control Using Statistically Enhanced Resolution in Real Time”, Los Alamos National Laboratory, SPIE-Los Angeles Technical Symposium, Jan. 23-25, 1985.

  • Kraiss, K. F., “Alternative Input Devices For Human Computer Interaction”, Forschungsinstitut für Anthropotechnik, Werthhoven, F. R. Germany.

  • Kraiss, K. F., “Neuere Methoden der Interaktion an der Schnittstelle Mensch-Maschine”, Z. F. Arbeitswissenschaft, 2, pp. 65-70, 1978.

  • Krajewski, M. et al, “Applicability of Smart Cards to Network User Authentication”, Computing Systems, vol. 7, No. 1, 1994.

  • Krajewski, M., “Concept for a Smart Card Kerberos”, 15th National Computer Security Conference, October 1992.

  • Krajewski, M., “Smart Card Augmentation of Kerberos”, Privacy and Security Research Group Workshop on Network and Distributed System Security, February 1993.

  • Kraus, T. W., T. J. Myron, “Self-Tuning PID Controller Uses Pattern Recognition Approach”, Control Engineering, pp. 106-111, June 1984.

  • Kreifeldt, J. G., “A Methodology For Consumer Product Safety Analysis”, The 3rd National Symposium on Human Factors in Industrial Design in Consumer Products, August 1982, 175-184.

  • Kreifeldt, John, “Human Factors Approach to Medical Instrument Design”, Electro/82 Proceedings, 3/3/1-3/3/6.

  • Ksienski et al., “Low Frequency Approach to Target Identification”, Proc. of the IEEE, 63(12):1651-1660 (December 1975).

  • Kuo, C.-C. J. (ed), “Multimedia Storage and Archiving Systems”, SPIE Proc. Vol. 2916 (Nov. 18-Nov. 22, 1996).

  • Kuocheng, Andy Poing, and Ellingstad, Vernon S., “Touch Tablet and Touch Input”, Interface '87, 327.

  • Kurokawa, M., “An Approach to Retrieving Images by Using their Pictorial Features”, IBM Research, Japan, September 1989.

  • Kyburg, H. E., “Bayesian and non Bayesian evidential updating”, Artificial Intelligence 31:271-293 (1987).

  • Lampson, Butler; Abadi, Martin; Burrows, Michael; and Wobber, Edward; “Authentication in Distributed Systems: Theory and Practice”; ACM Transactions on Computer Systems; vol. 10, No. 4; November, 1992; pp. 265-310.

  • Landis, Sean, “Content-Based Image Retrieval Systems for Interior Design”, www.tc.cornell.edu.

  • Langton, C. G. (ed): Artificial Life; Proceedings of the First International Conference on Artificial Life, Redwood City: Addison-Wesley (1989).

  • Lauwerier, Hans: “Fractals—Images of Chaos”, Penguin Books, London, 1991.

  • LeCun, Y. et al., “Handwritten Digit Recognition: Applications of Neural Network Chips and Automatic Learning”, IEEE Comm. Magazine, November 1989, pp. 41-46.

  • LeCun, Y., “Generalization and Network Design Strategies”, in R. Pfeifer, Z. Schreter, F. Fogelman, L. Steels (Eds.), Connectionism in Perspective, 1989, pp. 143-155.

  • Ledgard, Henry, Singer, Andrew, and Whiteside, John, Directions in Human Factors for Interactive Systems, New York, Springer-Verlag, 1981.

  • Lee et al., “Video Indexing—An Approach based on Moving Object and Track”, Proceedings of Storage and Retrieval for Image and Video Databases, pp. 25-36. February 1993.

  • Lee, Denis, et al., “Query by Image Content Using Multiple Objects and Multiple Features: User Interface Issues,” 1994 Int'l Conf. on Image Processing, IEEE, pp. 76-80.

  • Lee, E., “Similarity Retrieval Techniques”, Pictorial Information Systems, Springer Verlag, 1980 pp. 128-176.

  • Lee, Eric, and MacGregor, James, “Minimizing User Search Time in Menu Retrieval Systems”, Human Factors, 27(2):157-162 (April 1985).

  • Lee, S., et al, “2D C-string: A New Spatial Knowledge Representation for Image Database Systems”, Pattern Recognition, vol. 23, 1990, pp. 1077-1087.

  • Lee, S., et al, “Similarity Retrieval of Iconic Image Database”, Pattern Recognition, vol. 22, No. 6 1989, pp. 675-682.

  • Lee, S., et al, “Spatial Reasoning and Similarity Retrieval of Images Using 2D C-string Knowledge Representation”, Pattern Recognition, 1992, pp. 305-318.

  • Lendaris, G. G., and Stanley, G. L., “Diffraction Pattern Sampling for Automatic Target Recognition”, Proc. IEEE 58:198-205 (1970).

  • Leon, Carol Boyd, “Selling Through the VCR”, American Demographics, December 1987, 40-43.

  • Li, H. Y., Y. Qiao and D. Psaltis, Applied Optics (April, 1993).

  • Liepins, G. E., M. R. Hilliard, “Genetic Algorithms: Foundations & Applications”, Annals of Operations Research, 21:31-58 (1989).

  • Lin, H. K., et al., “Real-Time Screen-Aided Multiple-Image Optical Holographic Matched-Filter Correlator”, Applied Optics, 21(18):3278-3286 (Sep. 15, 1982).

  • Liou, “Overview of the p×64 kbit/s Video Coding Standard”, Communications of the ACM, vol. 34, No. 4, April 1991, pp. 60-63.

  • Lippmann, R. P., “An Introduction to Computing with Neural Nets”, IEEE ASSP Magazine, 4(2):4-22 (April 1987).

  • Liu, Y., “Extensions of fractal theory”, Proceedings of the SPIE—The International Society for Optical Engineering, 1966:255-68(1993).

  • Liu, Y., “Pattern recognition using Hilbert space”, Proceedings of the SPIE—The International Society for Optical Engineering, 1825:63-77 (1992).

  • Ljung, Lennart, & Torsten Soderstrom: “Theory and Practice of Recursive Identification”, The MIT Press, Cambridge, Mass., 1983.

  • Ljung, Lennart: “System Identification; Theory for the User”, Prentice-Hall Englewood Cliffs, N.J., 1987.

  • Lloyd, Sheldon G., & Gerald D Anderson: “Industrial Process Control”, Fisher Controls Co., Marshalltown, 1971.

  • Loeb, S., “Architecting Personalized Delivery of Multimedia Information”, Communications of the ACM, December 1992, vol. 35, No. 12, pp. 39-50.

  • Long, John, “The Effect of Display Format on the Direct Entry of Numerical Information by Pointing”, Human Factors, 26(1):3-17 (February 1984).

  • Lu, C., “Computer Pointing Devices: Living With Mice”, High Technology, January 1984, pp. 61-65.

  • Lu, C., “Publish It Electronically”, Byte, September 1993, pp. 94-109.

  • Mackay et al., “Virtual Video Editing in Interactive Multimedia Applications”, 1989.

  • Mahalanobis, A., et al., “Minimum Average Correlation Energy Filters”, Applied Optics, 26(17):3633-40 (Sep. 1, 1987).

  • Makridakis, Spyros, & Steven Wheelwright: “The Handbook of Forecasting”, John Wiley, New York, 1982.

  • Mandelbrot, Benoit: “Fractal Geometry of Nature”, W H Freeman and Co., New York, 1983 (orig ed 1977).

  • Mandelbrot, Benoit: “Fractals—Form, Chance and Dimensions”, W H Freeman and Co., San Francisco, 1977.

  • Mannes, George, “Smart Screens; Development of Personal Navigation Systems for TV Viewers,” Video Magazine, December 1993.

  • Mantei, Marilyn M., and Teorey, Toby J., “Cost/Benefit Analysis for Incorporating Human Factors in the Software Lifecycle”, Association for Computing Machinery, 1988.

  • Maragos, P., “Tutorial Advances in Morphological Image Processing”, Optical Engineering, 26(7):623-632 (1987).

  • Mardia, K V, J T Kent & J M Bibby: “Multivariate Analysis”, Academic Press, London, 1979.

  • Martin, G. L. et al., “Recognizing Hand-Printed Letters and Digits Using Backpropagation Learning”, Technical Report of the MCC, Human Interface Laboratory, Austin, Tex., January 1990, pp. 1-9.

  • Martinez et al. “Imagenet: A Global Distribution Database for Color Image Storage and Retrieval in Medical Imaging Systems” IEEE, 1992, 710-719, May 1992.

  • Masahiro Morita & Yoichi Shinoda, Information Filtering Based on User Behavior Analysis and Best Match Text Retrieval, Proceedings of the Seventeenth Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, Dublin, Jul. 3-6, 1994, pp. 272-281.

  • Mazel, D. S., “Fractal Modeling of Time-Series Data”, PhD Thesis, Georgia Tech, 1991. (One dimensional, not pictures).

  • McAulay, A. D., J. C. Oh, “Image Learning Classifier System Using Genetic Algorithms”, IEEE Proc. of the National Aerospace & Electronics Conference, 2:705-710 (1989).

  • McCauley, Joseph L.: “Chaos, Dynamics, and Fractals”, Cambridge University Press, Cambridge, 1993.

  • McFadden, M., “The Web and the Cookie Monster,” Digital Age, (1996, August).

  • Meads, Jon A., “Friendly or Frivolous”, Datamation, Apr. 1, 1988, 98-100.

  • Medvinsky et al, “NetCash: A Design for Practical Electronic Currency on the Internet”, Proc. 1st ACM Conf. on Comp. and Comm. Security, November 1993.

  • Medvinsky et al., “Electronic Currency for the Internet”, Electronic Markets, pp. 30-31, September 1993.

  • Mehrotra, R., et al, “Shape Matching Utilizing Indexed Hypotheses Generation and Testing”, IEEE Transactions on Robotics and Automation, vol. 5, No. 1, February 1989, pp. 70-77.

  • Meyer, J. A., Roitblat, H. L., Wilson, W. (eds.): From Animals to Animats. Proceedings of the Second International Conference on Simulation of Adaptive Behaviour. Cambridge, Mass.: MIT Press. (1991).

  • Middleton, G. V. ed., 1991, Nonlinear Dynamics, Chaos and Fractals, with Applications to Geological Systems. Geol. Assoc. Canada Short Course Notes Vol. 9 (available from the GAC at Memorial University of Newfoundland, St. John's NF A1B 3X5).

  • Miller et al., “News On-Demand for Multimedia Networks”, ACM International Conference on Multimedia, Anaheim, Calif., 1-6, August 1993, pp. 383-392.

  • Miller, R. K., Neural Networks (© 1989: Fairmont Press, Lilburn, Ga.), pp. 2-12 and Chapter 4, “Implementation of Neural Networks”, pp. 4-1 to 4-26.

  • Mills et al., “A Magnifier Tool for Video Data”, Proceedings of ACM Computer Human Interface (CHI), May 3-7, 1992, pp. 93-98.

  • Mills, “Media Composition for Casual Users”, 1992.

  • Minka, T., “An Image Database Browser that Learns from User Interaction”, Masters Thesis, Massachusetts Institute of Technology; 1996; also appears as MIT Media Laboratory Technical Report 365.

  • Minneman et al., “Where Were We: making and using near-synchronous, pre-narrative video”, Multimedia '93, pp. 1-11. December 1993.

  • Molley, P., “Implementing the Difference-Squared Error Algorithm Using An Acousto-Optic Processor”, SPIE, 1098:232-239, (1989).

  • Molley, P., et al., “A High Dynamic Range Acousto-Optic Image Correlator for Real-Time Pattern Recognition”, SPIE, 938:55-65 (1988).

  • Moloney, Daniel M.: “Digital Compression in Today's Addressable Environment”; 1993 NCTA Technical Papers; Jun. 6, 1993; pp. 308-316.

  • Monro D M and Dudbridge F, “Fractal block coding of images”, Electronics Letters 28(11):1053-1054 (1992).

  • Monro D. M. & Dudbridge F. ‘Fractal approximation of image blocks’, Proc ICASSP 92, pp. III: 485-488.

  • Monro D. M. ‘A hybrid fractal transform’, Proc ICASSP 93, pp. V: 169-72.

  • Monro D. M., Wilson D., Nicholls J. A. ‘High speed image coding with the Bath Fractal Transform’, IEEE International Symposium on Multimedia Technologies Southampton, April 1993.

  • Moore, T. G. and Dartnall, “Human Factors of a Microelectronic Product: The Central Heating Timer/Programmer”, Applied Ergonomics, 1983, 13(1): 15-23.

  • Mori, “Towards the construction of a large-scale neural network”, Electronics Information Communications Association Bulletin PRU 88-59, pp. 87-94.

  • Nadoli, Gajanana and Biegel, John, “Intelligent Agents in the Simulation of Manufacturing Systems”, Proceedings of the SCS Multiconference on AI and Simulation, 1989.

  • Nagasaka et al., “Automatic Video Indexing and Full-Video Search for Object Appearances”, Proceedings of the IFIP TC2/WG2.6 Second Working Conference on Visual Database Systems, North Holland, (Knuth et al., eds.), Sep. 30-Oct. 3, 1991, pp. 113-127, January 1992.

  • Naik et al., “High Performance Speaker Verification.”, ICASSP 86, Tokyo, CH2243-4/86/0000-0881, IEEE 1986, pp. 881-884.

  • National Westminster Bank Group Brochure; pp. 1-29; undated.

  • Needham, Roger M. and Schroeder, Michael D.; “Using Encryption for Authentication in Large Networks of Computers”; Communications of the ACM; vol. 21, No. 12; December, 1978; pp. 993-999.

  • Needham, Roger M.; “Adding Capability Access to Conventional File Servers”; Xerox Palo Alto Research Center; Palo Alto, Calif.

  • Negahdaripour, S., et al, “Challenges in Computer Vision: Future Research Directions”, IEEE Transactions On Systems, Man And Cybernetics, pp. 189-199, 1992, at Conference on Computer Vision and Pattern Recognition.

  • Netravali, Arun N., and Haskell, Barry G., “Digital Pictures Representation and Compression”, Plenum Press, New York (1988).

  • Neuman, B. C., “Proxy-Based Authorization and Accounting for Distributed Systems”, Proc. 13th Int. Conf. on Dist. Comp. Sys., May 1993.

  • NewMedia, November/December 1991, p. 69.

  • Ney, H., et al., “A Data Driven Organization of the Dynamic Programming Beam Search for Continuous Speech Recognition”, Proc. ICASSP 87, pp. 833-836, 1987.

  • Niblack, W. et al., “The QBIC Project: Querying Images by Content Using Color, Texture, and Shape”, IBM Computer Science Research Report, pp. 1-20 (Feb. 1, 1993).

  • Niblack, W., et al, “Find me the Pictures that Look Like This: IBM'S Image Query Project”, Advanced Imaging, April 1993, pp. 32-35.

  • Niblack, W., R. Barber, W. Equitz, M. Flickner, E. Glasman, D. Petkovic, P. Yanker, and C. Faloutsos. The QBIC project: Querying images by content using color, texture, and shape. In Storage and Retrieval for Image and Video Databases, volume SPIE Vol. 1908, February 1993.

  • Nilsson, B. A., “Microsoft Publisher is an Honorable Start for DTP Beginners”, Computer Shopper, February 1992, vol. 12, Issue 2, p. 426, evaluates Microsoft Publisher and Page Wizard.

  • Nilsson, N. J., The Mathematical Foundations of Learning Machines (© 1990: Morgan Kaufmann Publishers, San Mateo, Calif.) and particularly section 2.6 “The Threshold Logic Unit (TLU)”, pp. 21-23 and Chapter 6, “Layered Machines”, pp. 95-114.

  • Norman, D. A., Fisher, D., “Why Alphabetic Keyboards Are Not Easy To Use: Keyboard Layout Doesn't Much Matter”, Human Factors 24(5), pp. 509-519 (1982).

  • Norman, Donald A., “Infuriating By Design”, Psychology Today, 22(3):52-56 (March 1988).

  • Norman, Donald A., The Psychology of Everyday Things, New York, Basic Books, Inc. 1988.

  • Novak et al., “Anatomy of a Color Histogram”, Proceeding of Computer Vision and Pattern Recognition, Champaign, Ill., June 1992, pp. 599-605.

  • Nussbaumer et al., “Multimedia Delivery on Demand: Capacity Analysis and Implications”, Proc 19th Conference on Local Computer Networks, 2-5 Oct. 1994, pp. 380-386.

  • O'Connor, Rory J., “Apple Banking on Newton's Brain”, San Jose Mercury News, Wednesday, Apr. 22, 1992.

  • O'Docherty, M. H., et al, “Multimedia Information System—The Management and Semantic Retrieval of all Electronic Data Types”, The Computer Journal, vol. 34, No. 3, 1991.

  • Ohsawa, I. and Yonezawa, A., “A Computational Model of an Intelligent Agent Who Talks with a Person”, Research Reports on Information Sciences, Series C, April 1989, No. 92, pp. 1-18.

  • Ohsuga et al, “Entrainment of Two Coupled van der Pol Oscillators by an External Oscillation”, Biological Cybernetics, 51:225-239 (1985).

  • Oien, G. E., S. Lepsoy & T. A. Ramstad, ‘An inner product space approach to image coding by contractive transformations’, Proc. ICASSP 1991, pp 2773-2776.

  • Okada, Y., et al., “An Image Storage and Retrieval System for Textile Pattern Adaptable to Color Sensation of the Individual”, Trans. Inst. Elec. Inf. Comm., vol. J70D, No. 12, pp. 2563-2574, December 1987 (Japanese w/English Abstract).

  • Okamoto et al; “Universal Electronic Cash”, pp. 324-337; 1991.

  • Omata et al, “Holonic Model of Motion Perception”, IEICE Technical Reports, Mar. 26, 1988, pp. 339-346.

  • O'Neal et al., “Coding Isotropic Images”, November 1977, pp. 697-707.

  • Ono, Atsushi, et al., “A Flexible Content-Based Image Retrieval System with Combined Scene Description Keyword,” Proc. of Multimedia '96, IEEE, pp. 201-208.

  • Optical Engineering 28:5 (May 1988)(Special Issue on product inspection).

  • Page, G F, J B Gomm & D Williams: “Application of Neural Networks to Modelling and Control”, Chapman & Hall, London, 1993.

  • Pandit, S. M., & S. M. Wu, “Time Series & System Analysis with Applications”, John Wiley & Sons, Inc., NY (1983) pp. 200-205.

  • Pawlicki, T. F., D. S. Lee, J. J. Hull and S. N. Srihari, “Neural Network Models and their Application to Handwritten Digit Recognition,” ICNN Proceeding, 1988, pp. II-63-70.

  • Pazzani, M. et al., “Learning from hotlists and coldlists: Towards a WWW Information Filtering and Seeking Agent,” Proceedings International Conference on Tools with Artificial Intelligence, January 1995, pp. 492-495.

  • Pecar, Branko: “Business Forecasting for Management”, McGraw-Hill Book Co., London, 1994.

  • Peitgen, Heinz-Otto, & Deitmar Saupe: “The Science of Fractal Images”, Springer-Verlag, New York, 1988.

  • Peitgen, Heinz-Otto, Hartmut Jurgens & Deitmar Saupe: “Fractals for the Classroom”, Springer-Verlag, 1992.

  • Perry et al., “Auto-Indexing Storage Device”, IBM Tech. Disc. Bulletin, 12(8):1219 (January 1970).

  • Perspectives: High Technology 2, 1985.

  • Peters: “Chaos and Order in the Capital Markets”, Wiley, 1991.

  • Gershenfeld & Weigend: “The Future of Time Series”, Addison-Wesley, 1993.

  • Peterson, Ivars, “Packing It In-Fractals.”, Science News, 131(18):283-285 (May 2, 1987).

  • Peterson, Ivars: “The Mathematical Tourist”, W H Freeman, New York, 1988.

  • Petrakis, E. G. M., and C. Faloutsos. Similarity searching in large image databases. Technical Report 3388, Department of Computer Science, University of Maryland, 1995.

  • Pettit, Frank: “Fourier Transforms in Action”, Chartwell-Bratt, Lund, 1985.

  • Pfitzmann et al; “How to Break and Repair a Provably Secure Untraceable Payment System”; pp. 338-350; 1991.

  • Phillips, “MediaView: a general multimedia digital publication system”, Comm. of the ACM, v. 34, n. 7, pp. 75-83. July 1991.

  • Picard, R. W., et al, “Finding Similar Patterns in Large Image Databases”, IEEE ICASSP, Minneapolis, Minn., vol. V, pp. 161-164, April 1993; also appears in MIT Media Laboratory Technical Report No. 205.

  • Pickover, Cliff, Visions of the Future: Art, Technology, and Computing in the 21st Century (St. Martin's Press).

  • Pickover, Cliff, Chaos in Wonderland: Visual Adventures in a Fractal World (St. Martin's Press).

  • Pickover, Cliff, Computers and the Imagination (St. Martin's Press).

  • Pickover, Cliff, Computers, Pattern, Chaos, and Beauty (St. Martin's Press).

  • Pickover, Cliff, Frontiers of Scientific Visualization (Wiley).

  • Pickover, Cliff, Mazes for the Mind: Computers and the Unexpected (St. Martin's Press).

  • Pickover, Cliff, Spiral Symmetry (World Scientific).

  • Pizano, A., et al, “Communicating with Pictorial Databases”, Human-Machine Interactive Systems, pp. 61-87, Computer Science Dept, UCLA, 1991.

  • Platte, Hans-Joachim, Oberjatzas, Gunter, and Voessing, Walter, “A New Intelligent Remote Control Unit for Consumer Electronic Devices”, IEEE Transactions on Consumer Electronics, Vol. CE-31(1):59-68 (February 1985).

  • Poor, Alfred, “Microsoft Publisher”, PC Magazine, Nov. 26, 1991, vol. 10, Issue 20, p. 40, evaluates Microsoft Publisher.

  • Port, Otis, “Wonder Chips-How They'll Make Computing Power Ultrafast and Ultracheap”, Business Week, Jul. 4, 1994, pp. 86-92.

  • Press, William H. et al, “Numerical Recipes in C: The Art of Scientific Computing”, Cambridge University Press, 1988.

  • Price, R., et al., “Applying Relevance Feedback to a Photo Archival System”, Journal of Information Science 18, pp. 203-215 (1992).

  • Priebe, C. E.; Solka, J. L.; Rogers, G. W., “Discriminant analysis in aerial images using fractal based features”, Proceedings of the SPIE—The International Society for Optical Engineering, 1962:196-208 (1993).

  • PRNewswire, information concerning the PointCast Network (PCN) (1996, Feb. 13) p. 213.

  • Proakis, John G., Digital Communications, McGraw-Hill (1983).

  • Proceedings of the IEEE, vol. 82, No. 4, April 1994, New York, N.Y., US, pp. 585-589, XP 000451419, Miller, “A Scenario for the Deployment of Interactive Multimedia Cable Television Systems in the United States in the 1990's”.

  • Proceedings, 6th International Conference on Pattern Recognition 1982, pp. 152-136.

  • Psaltis, D., “Incoherent Electro-Optic Image Correlator”, Optical Engineering, 23(1): 12-15 (January/February 1984).

  • Psaltis, D., “Two-Dimensional Optical Processing Using One-Dimensional Input Devices”, Proceedings of the IEEE, 72(7):962-974 (July 1984).

  • Quinell, Richard A., “Web Servers in embedded systems enhance user interaction”, EDN, Apr. 10, 1997, pp. 61-68.

  • Raggett, D., “A review of the HTML+ document format,” Computer Networks and ISDN Systems, vol. 27, No. 2, pp. 135-145 (November 1994).

  • Rahmati, M.; Hassebrook, L. G., “Intensity- and distortion-invariant pattern recognition with complex linear morphology”, Pattern Recognition, 27 (4):549-68 (1994).

  • Rampe, Dan, et al. In a Jan. 9, 1989 news release, Claris Corporation announced two products, SmartForm Designer and SmartForm Assistant, which provide “Intelligent Assistance”, such as custom help messages, choice lists, and data-entry validation and formatting.

  • Rangan et al., “A Window-based Editor for Digital Video and Audio”, January 1992.

  • Rao et al., Discrete Cosine Transform—Algorithms, Advantages, Applications, Academic Press, Inc., 1990.

  • Ratcliffe, Mitch and Gore, Andrew, “Intelligent Agents take U.S. Bows.”, MacWeek, Mar. 2, 1992, vol. 6, No. 9, p. 1.

  • Ravichandran, G. and Casasent, D., “Noise and Discrimination Performance of the MINACE Optical Correlation Filter”, Proc. SPIE Technical Symposium, April 1990, Orlando Fla., 1471 (1990).

  • Reimer, “Memories in my Pocket”, Byte, pp. 251-258, February 1991.

  • Reiss, “The Revised Fundamental Theorem of Moment Invariants”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, No. 8, August 1991, pp. 830-834.

  • Reitman, Edward: “Exploring the Geometry of Nature”, Windcrest Books, Blue Ridge Summit, 1989.

  • Reusens, E., “Sequence coding based on the fractal theory of iterated transformations systems”, Proceedings of the SPIE—The International Society for Optical Engineering, 2094(pt. 1): 132-40(1993).

  • Rhodes, W., “Acousto-Optic Signal Processing: Convolution and Correlation”, Proc. of the IEEE, 69(1):65-79 (January 1981).

  • Richards et al., “The Interactive Island”, IEE Review, July/August 1991, pp. 259-263.

  • Richards, J., and Casasent, D., “Real Time Hough Transform for Industrial Inspection”, Proc. SPIE Technical Symposium, Boston, 1192:2-21 (1989).

  • Rivest, R.; “The MD5 Message-Digest Algorithm”; MIT Laboratory for Computer Science and RSA Data Security, Inc.; April, 1992.

  • Rivest, R. L. et al., “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems,” Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, Mass.

  • Rivest, R. L.; Shamir, A. & Adleman, L.; “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems”, Communications of the ACM, February 1978, vol. 21, No. 2, pp. 120-126.

  • Robinson, G., and Loveless, W., “‘Touch-Tone’ Teletext—A Combined Teletext-Viewdata System,” IEEE Transactions on Consumer Electronics, vol. CE-25, No. 3, July 1979, pp. 298-303.

  • Rogus, John G. and Armstrong, Richard, “Use of Human Engineering Standards in Design”, Human Factors, 19(1): 15-23 (February 1977).

  • Rohrer, C., & Clay Nesler, “Self-Tuning Using a Pattern Recognition Approach”, Johnson Controls, Inc., Research Brief 228 (Jun. 13, 1986).

  • Roizen, Joseph, “Teletext in the USA,” SMPTE Journal, July 1981, pp. 602-610.

  • Rosch, Winn L., “Voice Recognition: Understanding the Master's Voice”, PC Magazine, Oct. 27, 1987, 261-308.

  • Rose, D. E.; Mander, R.; Oren, T., Ponceleon, D. B.; Salomon, G. & Wong, Y. Y. “Content Awareness in a File System Interface Implementing the ‘Pile’ Metaphor for Organizing Information”, 16 Ann. Int'l SIGIR '93, ACM, pp. 260-269.

  • Rosenfeld, Azriel and Avinash C. Kak, Digital Picture Processing, Second Edition, Volume 2, Academic Press, 1982.

  • Roy, B., “Classements et choix en presence de points de vue multiples”, R.I.R.O., 2eme annee, no. 8, pp. 57-75 (1968).

  • Roy, B., “Electre III: un algorithme de classements fonde sur une representation floue des preferences en presence de criteres multiples”, Cahiers du CERO, 20(1):3-24 (1978).

  • Rui, Yong, Huang, Thomas S., Chang, Shih-Fu, “Image Retrieval: Past Present and Future”.

  • Rui, Yong, Huang, Thomas S., Mehrotra, Sharad, “Browsing and Retrieving Video Content in a Unified Framework”.

  • Rui, Yong, Huang, Thomas S., Ortega, Michael, Mehrotra, Sharad, “Relevance Feedback: A Power Tool for Interactive Content-Based Image Retrieval”.

  • Rumelhart, D. E., & James L McClelland, Parallel Distributed Processing, Explorations in the Microstructure of Cognition, vol. I, (1986: MIT Press, Cambridge, Mass.), and specifically Chapter 8 thereof, “Learning Internal Representations by Error Propagation”, pp. 318-362.

  • Rutherford, H. G., F. Taub and B. Williams, “Object Identification and Measurement from Images with Access to the Database to Select Specific Subpopulations of Special Interest”, May 1986.

  • Rutter et al., “The Timed Lattice—A New Approach To Fast Converging Equalizer Design”, pp. VIII/1-5 (Inspec. Abstract No. 84C044315, Inspec IEE (London) & IEE Saraga Colloquium on Electronic Filters, May 21, 1984).

  • Sadjadi, F., “Experiments in the use of fractal in computer pattern recognition”, Proceedings of the SPIE—The International Society for Optical Engineering, 1960:214-22 (1993).

  • Sakoe, H., “A Generalization of Dynamic Programming Based Pattern Matching Algorithm Stack DP-Matching”, Transactions of the Committee on Speech Research, The Acoustic Society of Japan, p. S83-23, 1983.

  • Sakoe, H., “A Generalized Two-Level DP-Matching Algorithm for Continuous Speech Recognition”, Transactions of the IECE of Japan, E65(11):649-656 (November 1982).

  • Salomon et al, “Using Guides to Explore Multimedia Databases”, Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences, vol. IV, Jan. 3-6, 1989, pp. 3-12.

  • Salton, G., “Developments in Automatic Text Retrieval”, Science, vol. 253, pp. 974-980, Aug. 30, 1991.

  • Samet, H., The quadtree and related hierarchical data structures. ACM Computing Surveys, 16(2):187-260, 1984.

  • Sarver, Carleton, “A Perfect Friendship”, High Fidelity, 39:42-49 (May 1989).

  • Schamuller-Bichl, I., “IC-Cards in High-Security Applications”, in Selected Papers from the Smart Card 2000 Conference, Springer Verlag, 1991, pp. 177-199.

  • Scharlic, A., “Decider sur plusieurs criteres. Panorama de l'aide a la decision multicritere” Presses Polytechniques Romandes (1985).

  • Scheid, Francis, “Schaum's Outline Series-Theory & Problems of Numerical Analysis”, McGraw-Hill Book Co., NY (1968) pp. 236, 237, 243, 244, 261.

  • Schmitt, Lee, “Let's Discuss Programmable Controllers”, Modern Machine Shop, May 1987, 90-99.

  • Shneiderman, Ben, Designing the User Interface: Strategies for Effective Human-Computer Interaction, Reading, Mass., Addison-Wesley, 1987.

  • Schroeder, M., Fractals, Chaos, Power Laws, W.H. Freeman & Co., New York (1991).

  • Schurmann, J., “Zur Zeichen- und Worterkennung beim Automatischen Anschriftenlesen”, Wissenschaftliche Berichte, 52(1/2) (1979).

  • Scientific American; “Not Just a Pretty Face”; March 1990, pp. 77-78.

  • Seborg, D. E., T. F. Edgar, & D. A. Mellichamp, “Process Dynamics and Control”, John Wiley & Sons, NY (1989) pp. 294-307, 538-541.

  • Shafer, G., “A mathematical theory of evidence”, Princeton University Press, Princeton, N.J. (1976).

  • Shann et al. “Detection of Circular Arcs for Content-Based Retrieval from an Image Database” IEE Proc.-Vis. Image Signal Process, vol. 141, No. 1, February 1994, pp. 49-55.

  • Shardanand, Upendra, “Social Information Filtering for Music Recommendation” September 1994, pp. 1-93, Massachusetts Institute of Technology, Thesis.

  • Sharif Heger, A. and Koen, B. V., “KNOWBOT: an Adaptive Data Base Interface”, Nuclear Science and Engineering, February 1991, vol. 107, No. 2, pp. 142-157.

  • Sharpless, “Subscription teletext for value added services”, August 1985.

  • Shepard, J. D., “Tapping the Potential of Data Compression”, Military and Aerospace Electronics, May 17, 1993, pp. 25-27.

  • Sheth, B. & Maes, P. “Evolving Agents For Personalized Information Filtering”, Proc. 9th IEEE Conference, 1993 pp. 345-352.

  • Shimizu et al, “Principle of Holonic Computer and Holovision”, Journal of the Institute of Electronics, Information and Communication, 70(9):921-930 (1987).

  • Shinan et al., “The Effects of Voice Disguise.”, ICASSP 86, Tokyo, CH2243-4/86/0000-0885, IEEE 1986, pp. 885-888.

  • Silverston et al., “Spectral Feature Classification and Spatial Pattern Rec.”, SPIE 201:17-26, Optical Pattern Recognition (1979).

  • Simpson, W. R., C. S. Dowling, “WRAPLE: The Weighted Repair Assistance Program Learning Extension”, IEEE Design & Test, 2:66-73 (April 1986).

  • Sincoskie, W. D. & Cotton C. J. “Extended Bridge Algorithms for Large Networks”, IEEE Network, January 1988-vol. 2, No. 1, pp. 16-24.

  • Sirbu, Marvin A.; Internet Billing Service Design And Prototype Implementation; pp. 1-19; An Internet Billing Server.

  • Smith et al., “A New Family of Algorithms for Manipulating Compressed Images”, IEEE Computer Graphics and Applications, 1993.

  • Smith, J. et al., “Quad-Tree Segmentation for Texture-Based Image Query” Proceeding ACM Multimedia 94, pp. 1-15, San Francisco, 1994.

  • Smith, J. R., and S.-F. Chang. Querying by color regions using the VisualSEEk content-based visual query system. In M. T. Maybury, editor, Intelligent Multimedia Information Retrieval. IJCAI, 1996.

  • Smith, J. R., and S.-F. Chang. Tools and techniques for color image retrieval. In Symposium on Electronic Imaging: Science and Technology—Storage & Retrieval for Image and Video Databases IV, volume 2670, San Jose, Calif., February 1996. IS&T/SPIE.

  • Smith, Sidney J., and Mosier, Jane N., Guidelines for Designing User Interface Software, Bedford, Mass., MITRE, 1986.

  • Smoliar, S. et al., “Content-Based Video Indexing and Retrieval”, IEEE Multimedia, pp. 62-72 (Summer 1994).

  • Society for Worldwide Interbank Financial Telecommunications S.C., “A.S.W.I.F.T. Overview”, undated.

  • Soffer, A., and H. Samet. Retrieval by content in symbolic-image databases. In Symposium on Electronic Imaging: Science and Technology—Storage & Retrieval for Image and Video Databases IV, pages 144-155. IS&T/SPIE, 1996.

  • Soviero, Marcelle M., “Your World According to Newton”, Popular Science, September 1992, pp. 45-49.

  • Specht, IEEE Internatl. Conf. Neural Networks, 1:1525-1532 (July 1988), San Diego, Calif.

  • Sperling, Barbara Bied, and Tullis, Thomas S., “Are You a Better ‘Mouser’ or ‘Trackballer’? A Comparison of Cursor-Positioning Performance”, An Interactive/Poster Session at the CHI+GI'87 Graphics Interface and Human Factors in Computing Systems Conference.

  • Sprague, R. A., “A Review of Acousto-Optic Signal Correlators”, Optical Engineering, 16(5):467-74 (September/October 1977).

  • Sprinzak, J.; Werman, M., “Affine point matching”, Pattern Recognition Letters, 15(4):337-9 (1994).

  • Stanchev, P., et al, “An Approach to Image Indexing of Documents”, Visual Database Systems, II, 1992, pp. 63-77.

  • Stanley R. Sternberg, “Biomedical Image Processing”, IEEE Computer, 1983, pp. 22-34.

  • Stark, J., “Iterated function systems as neural networks”, Neural Networks, Vol 4, pp 679-690, Pergamon Press, 1991.

  • Stevens, “Next Generation Network and Operating System Requirements for Continuous Time Media”, in Herrtwich (Ed.), Network and Operating System Support for Digital Audio and Video, pp. 197-208, November 1991.

  • Stewart, R. M., “Expert Systems For Mechanical Fault Diagnosis”, IEEE, 1985, pp. 295-300.

  • Streeter, L. A., Ackroff, J. M., and Taylor, G. A. “On Abbreviating Command Names”, The Bell System Technical Journal, 62(6): 1807-1826 (July/August 1983).

  • Stricker, M., and A. Dimai. Color indexing with weak spatial constraints. In Symposium on Electronic Imaging: Science and Technology—Storage & Retrieval for Image and Video Databases IV, pages 29-41. IS&T/SPIE, 1996.

  • Stricker, M., and M. Orengo. Similarity of color images. In Storage and Retrieval for Image and Video Databases III, volume SPIE Vol. 2420, February 1995.

  • Sugeno, M., “Theory of fuzzy integrals and its applications”, Tokyo Institute of Technology (1974).

  • Svetkoff et al.; Hybrid Circuits (GB), No. 13, May 1987; pp. 5-8.

  • Swain et al., “Color Indexing”, International Journal of Computer Vision, vol. 7, No. 1, 1991, pp. 11-32.

  • Swanson, David, and Klopfenstein, Bruce, “How to Forecast VCR Penetration”, American Demographic, December 1987, 44-45.

  • Tak W. Yan & Hector Garcia-Molina, SIFT—A Tool for Wide-Area Information Dissemination, 1995 USENIX Technical Conference, New Orleans, La., January 16-20, pp. 177-186.

  • Tamura, H., et al, “Image Database Systems: A Survey”, Pattern Recognition, vol. 17, No. 1, 1984, pp. 29-34.

  • Tamura, H., et al., “Textural Features Corresponding to Visual Perception,” IEEE Transactions on System, Man, and Cyb., vol. SMC-8, No. 6, pp. 460-473 (1978).

  • Tanaka, S., et al, “Retrieval Method for an Image Database based on Topological Structure”, SPIE, vol. 1153, 1989, pp. 318-327.

  • Tanton, N. E., “UK Teletext—Evolution and Potential,” IEEE Transactions on Consumer Electronics, vol. CE-25, No. 3, July 1979, pp. 246-250.

  • TCC Tech Facts, Vols. 1-4, (www.wgbh.org, rev. 9/95).

  • Television Decoder Circuitry Act of 1990, and Section 305 of the Telecommunications Act of 1996, and FCC regulations.

  • Tello, Ernest R., “Between Man And Machine”, Byte, September 1988, 288-293.

  • Tenenbaum, Jay M. and Schiffman, Allan M.; “Development of Network Infrastructure and Services for Rapid Acquisition”; adapted from a white paper submitted to DARPA by MCC in collaboration with EIT and ISI.

  • Thomas, John C., and Schneider, Michael L., Human Factors in Computer Systems, New Jersey, Ablex Publ. Co., 1984.

  • Thomas, William L., “Electronic Program Guide Applications—The Basics of System Design”, 1994 NCTA Technical Papers, pp. 15-20.

  • Tonomura et al., “Content Oriented Visual Interface Using Video Icons for Visual Database Systems”, Journal of Visual Languages and Computing (1990) I, pp. 183-198.

  • Tonomura et al., “VideoMAP and VideoSpaceIcon: Tools for Anatomizing Video Content”, InterCHI'93 Conference Proceedings, Amsterdam, The Netherlands, 24-29 Apr. 1993, pp. 131-136.

  • Tortora, G., et al., “Pyramidal Algorithms”, Computer Vision, Graphics, and Image Processing, 1990, pp. 26-56.

  • Trachtenberg, Jeffrey A., “How do we confuse thee? Let us count the ways”, Forbes, Mar. 21, 1988, 159-160.

  • Training Computers To Note Images, New York Times, Apr. 15, 1992.

  • Turcotte, Donald L., 1992, Fractals and Chaos in Geology and Geophysics. Cambridge U.P.

  • TV Communications Advertisement for MSI Datacasting Systems, January 1973.

  • Tyldesley, D. A., “Employing Usability Engineering in the Development of Office Products”, The Computer Journal, 31(5):431-436 (1988).

  • Udagawa, K., et al, “A Parallel Two-Stage Decision Method for Statistical Character Recognition.”, Electronics and Communications in Japan (1965).

  • Ueda et al., “Automatic Structure Visualization for Video Editing”, InterCHI'93 Conference Proceedings, Amsterdam, The Netherlands, 24-29 Apr. 1993, pp. 137-141.

  • Ueda et al., “Impact: An Interactive Natural-Motion-Picture Dedicated Multimedia Authoring System”, Proceedings of Human Factors in Computing Systems (CHI 91), New Orleans, La., Apr. 27-May 2, 1991, pp. 343-350.

  • van den Boom, Henrie: An Interactive Videotex System for Two-Way CATV Networks; AEU, Vol. 40; 1986; pp. 397-401.

  • Vander Lugt, A., “Practical Considerations for the Use of Spatial Carrier-Frequency Filters”, Applied Optics, 5(11): 1760-1765 (November 1966).

  • Vander Lugt, A., “Signal Detection By Complex Spatial Filtering”, IEEE Transactions On Information Theory, IT-10, 2:139-145 (April 1964).

  • Vander Lugt, A., et al.; “The Use of Film Nonlinearities in Optical Spatial Filtering”; Applied Optics; 9(1):215-222 (January 1970).

  • Vannicola et al, “Applications of Knowledge based Systems to Surveillance”, Proceedings of the 1988 IEEE National Radar Conference, 20-21 Apr. 1988, pp. 157-164.

  • Varela, F. J., and P. Bourgine (eds.): Proceedings of the first European Conference on Artificial Life. Cambridge, Mass.: MIT Press. (1991).

  • Verplank, William L., “Graphics in Human-Computer Communication: Principles of Graphical User-Interface Design”, Xerox Office Systems.

  • Vitols, “Hologram Memory for Storing Digital Data”, IBM Tech. Disc. Bulletin 8(11): 1582-1583 (April 1966).

  • Vittal, J., “Active Message Processing: Messages as Messengers”, pp. 175-195; 1981.

  • Voydock, Victor et al.; “Security Mechanisms in High-Level Network Protocols”; Computing Surveys; vol. 15, No. 2; June 1983.

  • Voyt, Carlton F., “PLC's Learn New Languages”, Design News, Jan. 2, 1989, 78.

  • Vrscay, Edward R., "Iterated Function Systems: Theory, Applications, and the Inverse Problem", Fractal Geometry and Analysis, J. Belair and S. Dubuc (eds.), Kluwer Academic, 1991, pp. 405-468.

  • Wachman, J., “A Video Browser that Learns by Example”, Masters Thesis, Massachusetts Institute of Technology; 1996; also appears as MIT Media Laboratory Technical Report No. 383.

  • Wakimoto, K., et al., “An Intelligent User Interface to an Image Database using a Figure Interpretation Method”, IEEE Publication No. CH2898-5/90/0000/0516, 1990, pp. 516-520.

  • Wald; Sequential Analysis; Dover Publications Inc., 1947; pp. 34-43.

  • Wallace, “The JPEG Still Picture Compression Standard”, Communications of the ACM, vol. 34, No. 4, April 1991, pp. 31-44.

  • Wasserman, Philip D., “Neural Computing: Theory and Practice”, 1989, pp. 128-129.

  • Weber et al., “Marquee: A Tool for Real-Time Video Logging”, CHI '94. April 1994.

  • Weber, Thomas E., “Software Lets Marketers Target Web Ads,” The Wall Street Journal, Apr. 21, 1997.

  • Weiman, Liza and Moran, Tom, “A Step toward the Future”, Macworld, August 1992, pp. 129-131.

  • Wechsler, H., Ed., “Neural Nets For Human and Machine Perception”, Academic Press, New York (1991).

  • Whitefield, A. “Human Factors Aspects of Pointing as an Input Technique in Interactive Computer Systems”, Applied Ergonomics, June 1986, 97-104.

  • Wiedenbeck, Susan, Lambert, Robin, and Scholtz, Jean, “Using Protocol Analysis to Study the User Interface”, Bulletin of the American Society for Information Science, June/July 1989, 25-26.

  • Wilf, Itzhak, “Computer, Retrieve For Me the Video Clip of the Winning Goal”, Advanced Imaging, August 1998, pp. 53-55.

  • Wilke, William, “Easy Operation of Instruments by Both Man and Machine”. Electro/82 Proceedings, 3/2/1-3/2/4.

  • Willett, P., “Recent Trends in Hierarchic Document Clustering: A Critical Review”, Information Processing & Management, vol. 24, No. 5, pp. 557-597, 1988.

  • Willshaw et al., “Non-Holographic Associative Memory”, Nature, 222:960-962 (Jun. 7, 1969).

  • Woolsey, K., “Multimedia Scouting”, IEEE Computer Graphics And Applications, July 1991, pp. 26-38.

  • Yager, R. R., “Entropy and specificity in a mathematical theory of Evidence”, Int. J. General Systems, 9:249-260 (1983).

  • Yamada et al., “Character recognition system using a neural network”, Electronics Information Communications Association Bulletin PRU 88-58, pp. 79-86.

  • Yamamoto, A., et al., “Extraction of Object Features from Image and its Application to Image Retrieval”, IEEE 9th International Conference On Pattern Recognition, vol. 2, 1988, pp. 988-991.

  • Yamamoto, A., et al., “Image Retrieval System Based on Object Features”, IEEE Publication No. CH2518-9/87/0000-0132, 1987, pp. 132-134.

  • Yamamoto, A., et al., “Extraction of Object Features and Its Application to Image Retrieval”, Trans. of IEICE, vol. E72, No. 6, 771-781 (June 1989).

  • Yamane et al., “An Image Data Compression Method Using Two-Dimensional Extrapolative Prediction-Discrete Sine Transform”, Oct. 29-31, 1986, pp. 311-316.

  • Yan et al., “Index Structures for Information Filtering Under the Vector Space Model”, Proc. of the 10th International Conference on Data Engineering.

  • DRD203RW User's Manual relating to the DSS Digital System, pp. 14-18.

  • Yan, T. W. and Garcia-Molina, H., “SIFT—A Tool for Wide-Area Information Dissemination,” Paper presented at the USENIX Technical Conference, New Orleans, La. (1995, January), pp. 177-186.

  • Yoder, Stephen Kreider, “U.S. Inventors Thrive at Electronics Show”, The Wall Street Journal, Jan. 10, 1990, B1.

  • Yoshida, J., “The Video-on-demand Demand”, Electronic Engineering Times, Mar. 15, 1993, pp. 1, 72.

  • Yoshida, Y., et al., “Description of Weather Maps and Its Application to Implementation of Weather Map Database”, IEEE 7th International Conference On Pattern Recognition, 1984, pp. 730-733.

  • Zadeh, L. A., “Fuzzy sets as a basis for a theory of possibility”, Fuzzy Sets and Systems, 1:3-28 (1978).

  • Zadeh, L. A., “Fuzzy sets”, Information and Control, 8:338-353 (1965).

  • Zadeh, L. A., “Probability measures of fuzzy events”, Journal of Mathematical Analysis and Applications, 23:421-427 (1968).

  • Zeisel, Gunter, Tomas, Philippe, Tomaszewski, Peter, “An Interactive Menu-Driven Remote Control Unit for TV-Receivers and VC-Recorders”, IEEE Transactions on Consumer Electronics, 34(3):814-818.

  • Zenith Starsight Telecast brochure, (1994).

  • Zhang et al., “Developing Power Tools for Video Indexing and Retrieval”, Proceedings of SPIE Conference on Storage and Retrieval for Image and Video Databases, San Jose, Calif., 1994.

  • Zhang, X., et al, “Design of a Relational Image Database Management System: IMDAT”, IEEE Publication No. TH0166-9/87/0000-0310, 1987, pp. 310-314.

  • Zhi-Yan Xie; Brady, M., “Fractal dimension image for texture segmentation”, ICARCV '92. 2nd International Conf. on Automation, Robotics and Computer Vision, p. CV-4.3/1-5 vol. I, (1992).

  • Zhu, X., et al., “Feature Detector and Application to Handwritten Character Recognition”, International Joint Conference on Neural Networks, Washington, D.C., January 1990, pp. II-457 to II-460.

  • Zhuang, Yueting, Rui, Yong, Huang, Thomas S., Mehrotra, Sharad, “Applying Semantic Association to Support Content-Based Video Retrieval”.


Claims
  • 1. An Internet appliance comprising: at least two packet data network interfaces configured to communicate data packets according to an Internet Protocol with a public network and a private network; a media stream interface; and a processor having an associated memory configured to store executable code, wherein said code defines at least a remote virtual interface function, a data packet routing function, and a media stream processing function for controlling the media stream interface.
  • 2. The Internet appliance of claim 1, further comprising an audio codec coupled to the media stream interface.
  • 3. The Internet appliance of claim 1, further comprising a speech interface, wherein the processor is configured to recognize human speech based, at least in part, on signals received at the speech interface.
  • 4. The Internet appliance of claim 1, further comprising a speech interface, wherein the processor is configured to synthesize human speech through the speech interface.
  • 5. The Internet appliance of claim 1, wherein the processor is configured to communicate with a remote device through at least one of the public network or the private network using a markup language interface.
  • 6. The Internet appliance of claim 5, wherein the markup language interface comprises XML and the processor is further configured to automatically communicate with a remote automated device.
  • 7. The Internet appliance of claim 5, wherein the markup language interface comprises HTML and the processor is configured to communicate with a remote HTML browser human user interface.
  • 8. The Internet appliance of claim 1, further comprising a wireless remote control configured to control at least the media stream interface.
  • 9. An Internet appliance within a single housing, the Internet appliance comprising: at least two packet data network interfaces configured to communicate data packets according to an Internet Protocol with a public network and a private network; a media stream interface; and a processor having an associated memory configured to store executable code, wherein said code defines at least a remote virtual interface function, a data packet routing function, and a media stream processing function for controlling the media stream interface.
  • 10. The Internet appliance of claim 9, further comprising an audio codec coupled to the media stream interface.
  • 11. The Internet appliance of claim 9, further comprising a speech interface, wherein the processor is configured to recognize human speech based, at least in part, on signals received at the speech interface.
  • 12. The Internet appliance of claim 9, further comprising a speech interface, wherein the processor is configured to synthesize human speech through the speech interface.
  • 13. The Internet appliance of claim 9, wherein the processor is configured to communicate with a remote device through at least one of the public network or the private network using a markup language interface.
  • 14. The Internet appliance of claim 13, wherein the markup language interface comprises XML and the processor is further configured to automatically communicate with a remote automated device.
  • 15. The Internet appliance of claim 13, wherein the markup language interface comprises HTML and the processor is configured to communicate with a remote HTML browser human user interface.
  • 16. The Internet appliance of claim 9, further comprising a wireless remote control configured to control at least the media stream interface.
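
As an illustration of the markup language interface recited in claims 5-7 and 13-15, the following minimal Python sketch shows one way an appliance-side handler might parse an XML control message received over a packet data network interface and extract a device command for dispatch. It is an illustrative sketch only, not the patented implementation: the sample message, the element and attribute names (appliance-control, device, command), and the handle_control_message helper are all hypothetical.

import xml.etree.ElementTree as ET

# Hypothetical XML control message from a remote automated device
# (element and attribute names are illustrative, not from the patent).
SAMPLE_MESSAGE = """\
<appliance-control version="1.0">
  <device id="thermostat-1">
    <command name="set-temperature" value="21.5"/>
  </device>
</appliance-control>
"""

def handle_control_message(xml_text):
    """Parse an XML control message and return a dispatch record.

    A real appliance would route the record to its device control
    interface; here we only extract the target device and command.
    """
    root = ET.fromstring(xml_text)
    device = root.find("device")
    command = device.find("command")
    return {
        "device_id": device.get("id"),
        "command": command.get("name"),
        "value": command.get("value"),
    }

if __name__ == "__main__":
    print(handle_control_message(SAMPLE_MESSAGE))
    # -> {'device_id': 'thermostat-1', 'command': 'set-temperature', 'value': '21.5'}

Under this reading, XML messages of the kind sketched above would support the automatic machine-to-machine communication of claims 6 and 14, while the HTML variant of claims 7 and 15 would instead serve pages to a remote browser acting as the human user interface.
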
CROSS REFERENCE TO RELATED APPLICATIONS

The present application is a continuation of U.S. patent application Ser. No. 13/043,411, filed Mar. 8, 2011, now U.S. Pat. No. 8,583,263, issued Nov. 12, 2013, which is a continuation of U.S. patent application Ser. No. 11/363,411 filed Feb. 27, 2006, now U.S. Pat. No. 7,904,187, issued Mar. 8, 2011, and U.S. patent application Ser. No. 11/363,393 filed Feb. 27, 2006, now U.S. Pat. No. 7,451,005, issued Nov. 11, 2008, which are continuations of U.S. patent application Ser. No. 10/693,759 filed Oct. 24, 2003, now U.S. Pat. No. 7,006,881, which is a continuation of U.S. patent application Ser. No. 10/162,079 filed Jun. 3, 2002, now U.S. Pat. No. 6,640,145, issued Oct. 28, 2003, which is a continuation of U.S. patent application Ser. No. 09/241,135 filed Feb. 1, 1999, now U.S. Pat. No. 6,400,996, issued Jun. 4, 2002, each of which is expressly incorporated herein by reference in its entirety.

US Referenced Citations (7705)
Number Name Date Kind
312516 Schilling Feb 1885 A
2819020 Baer et al. Jan 1958 A
2918846 Porter Dec 1959 A
2956114 Ginsburg et al. Oct 1960 A
3310111 Pavlich et al. Mar 1967 A
3325810 Frank et al. Jun 1967 A
3419156 Mork Dec 1968 A
3609684 Lipp Sep 1971 A
3621263 Gilson et al. Nov 1971 A
3745462 Trimble Jul 1973 A
3769710 Reister Nov 1973 A
3771483 Bond Nov 1973 A
3772688 Smit et al. Nov 1973 A
3774215 Reed Nov 1973 A
3793635 Potter Feb 1974 A
3796433 Fraley et al. Mar 1974 A
3828306 Angeloni Aug 1974 A
3848193 Martin et al. Nov 1974 A
3848254 Drebinger et al. Nov 1974 A
3849760 Endou et al. Nov 1974 A
3899687 Jones Aug 1975 A
3914692 Seaborn, Jr. Oct 1975 A
3917317 Ryan Nov 1975 A
3922673 Bishop Nov 1975 A
3928719 Sasabe et al. Dec 1975 A
3950733 Cooper et al. Apr 1976 A
3953669 Saccomani et al. Apr 1976 A
3967241 Kawa Jun 1976 A
3980948 Olive Sep 1976 A
3984638 Carrouge Oct 1976 A
3986119 Hemmer, Jr. et al. Oct 1976 A
3987398 Fung Oct 1976 A
3993955 Belcher et al. Nov 1976 A
3993976 Ginsburg Nov 1976 A
4002983 Kavalir et al. Jan 1977 A
4008376 Flanagan et al. Feb 1977 A
4010619 Hightower et al. Mar 1977 A
4013994 Ragano et al. Mar 1977 A
4024382 Fowler May 1977 A
4024401 Bernstein et al. May 1977 A
4025851 Haselwood et al. May 1977 A
4025920 Reitboeck et al. May 1977 A
4028662 Young Jun 1977 A
4035979 Koreska Jul 1977 A
4044243 Cooper et al. Aug 1977 A
4052058 Hintz Oct 1977 A
4065778 Harvey Dec 1977 A
4067411 Conley et al. Jan 1978 A
4077005 Bishop Feb 1978 A
4081753 Miller Mar 1978 A
4084323 McMurtry Apr 1978 A
4100370 Suzuki et al. Jul 1978 A
4114155 Raab Sep 1978 A
4114453 Sandler Sep 1978 A
4117511 Baer et al. Sep 1978 A
4118730 Lemelson Oct 1978 A
4123097 Allemann Oct 1978 A
4135791 Govignon Jan 1979 A
4138726 Girault et al. Feb 1979 A
4139889 Ingels Feb 1979 A
4146892 Overman et al. Mar 1979 A
4148061 Lemelson Apr 1979 A
4152693 Ashworth, Jr. May 1979 A
4155042 Permut et al. May 1979 A
4162377 Mearns Jul 1979 A
4168499 Matsumura et al. Sep 1979 A
4168576 McMurtry Sep 1979 A
4170782 Miller Oct 1979 A
4185265 Griffin et al. Jan 1980 A
4186413 Mortimer Jan 1980 A
4187492 Delignieres Feb 1980 A
4200770 Hellman et al. Apr 1980 A
4203076 Yamashita May 1980 A
4208652 Marshall Jun 1980 A
4213183 Barron et al. Jul 1980 A
4218582 Hellman et al. Aug 1980 A
4221975 Ledniczki et al. Sep 1980 A
4224644 Lewis et al. Sep 1980 A
4225850 Chang et al. Sep 1980 A
4228421 Asada Oct 1980 A
4229620 Schaible Oct 1980 A
4229737 Heldwein et al. Oct 1980 A
4230990 Lert, Jr. et al. Oct 1980 A
4235441 Ciccarello Nov 1980 A
4237987 Sherman Dec 1980 A
4239415 Blikken Dec 1980 A
4240079 Zhilin Dec 1980 A
4244043 Fujita et al. Jan 1981 A
4244123 Lazure et al. Jan 1981 A
4245245 Matsumoto et al. Jan 1981 A
4254474 Cooper et al. Mar 1981 A
4264924 Freeman Apr 1981 A
4264925 Freeman et al. Apr 1981 A
4271532 Wine Jun 1981 A
4280148 Saxena Jul 1981 A
4283709 Lucero et al. Aug 1981 A
4287592 Paulish et al. Sep 1981 A
4288809 Yabe Sep 1981 A
4298889 Burianek et al. Nov 1981 A
4300040 Gould et al. Nov 1981 A
4301506 Turco Nov 1981 A
4303978 Shaw et al. Dec 1981 A
4305101 Yarbrough et al. Dec 1981 A
4305131 Best Dec 1981 A
4307446 Barton et al. Dec 1981 A
4311876 Endo et al. Jan 1982 A
4323921 Guillou Apr 1982 A
4326259 Cooper et al. Apr 1982 A
4331974 Cogswell et al. May 1982 A
4337529 Morokawa Jun 1982 A
4338492 Snopko Jul 1982 A
4338626 Lemelson Jul 1982 A
4338644 Staar Jul 1982 A
4339798 Hedges et al. Jul 1982 A
4345315 Cadotte et al. Aug 1982 A
4346407 Baer et al. Aug 1982 A
4347498 Lee et al. Aug 1982 A
4349701 Snopko Sep 1982 A
4349823 Tagami et al. Sep 1982 A
4350970 von Tomkewitsch Sep 1982 A
4355415 George et al. Oct 1982 A
4355806 Buck et al. Oct 1982 A
4356509 Skerlos et al. Oct 1982 A
4358824 Glickman et al. Nov 1982 A
4359733 O'Neill Nov 1982 A
4363108 Lange et al. Dec 1982 A
4367453 Kuno et al. Jan 1983 A
4367559 Tults Jan 1983 A
4369426 Merkel Jan 1983 A
4375651 Templin et al. Mar 1983 A
4377729 Stacy Mar 1983 A
4381522 Lambert Apr 1983 A
4384293 Deem et al. May 1983 A
4390901 Keiser Jun 1983 A
4390904 Johnston et al. Jun 1983 A
4393270 van den Berg Jul 1983 A
4393819 Tanaka et al. Jul 1983 A
4395780 Gohm et al. Jul 1983 A
4399330 Kuenzel Aug 1983 A
4402049 Gray Aug 1983 A
4403291 Von Tomkewitsch Sep 1983 A
4405829 Rivest et al. Sep 1983 A
4405946 Knight Sep 1983 A
4406016 Abrams et al. Sep 1983 A
4414005 De Bievre et al. Nov 1983 A
4414432 Skerlos et al. Nov 1983 A
4417246 Agnor et al. Nov 1983 A
4420769 Novak Dec 1983 A
4422105 Rodesch et al. Dec 1983 A
4422202 Malvasio Dec 1983 A
4422802 Choate Dec 1983 A
4424414 Hellman et al. Jan 1984 A
4424415 Lin Jan 1984 A
4425579 Merrell Jan 1984 A
4426937 Sietmann et al. Jan 1984 A
4427847 Hofmann et al. Jan 1984 A
4428057 Setliff et al. Jan 1984 A
4429385 Cichelli et al. Jan 1984 A
4431389 Johnson Feb 1984 A
4437151 Hurt et al. Mar 1984 A
4438511 Baran Mar 1984 A
4439788 Frame Mar 1984 A
4441256 Cummings et al. Apr 1984 A
4441526 Taft et al. Apr 1984 A
4442544 Moreland et al. Apr 1984 A
4445118 Taylor et al. Apr 1984 A
4449240 Yoshida May 1984 A
4450477 Lovett May 1984 A
4450531 Kenyon et al. May 1984 A
4451825 Hall et al. May 1984 A
4454529 Philofsky et al. Jun 1984 A
4454556 DePuy Jun 1984 A
4455025 Itkis Jun 1984 A
4456925 Skerlos et al. Jun 1984 A
4458920 Ozaki Jul 1984 A
4459657 Murao Jul 1984 A
4459667 Takeuchi Jul 1984 A
4463357 MacDoran Jul 1984 A
4464625 Lienhard et al. Aug 1984 A
4465220 Ledlow et al. Aug 1984 A
4465902 Zato Aug 1984 A
4466125 Kanayama Aug 1984 A
4467424 Hedges et al. Aug 1984 A
4468704 Stoffel et al. Aug 1984 A
4468930 Johnson Sep 1984 A
4471273 Melocik et al. Sep 1984 A
4471319 Metz Sep 1984 A
4471518 Gold Sep 1984 A
4471520 Houck et al. Sep 1984 A
4472663 Melocik Sep 1984 A
4476336 Sherwin Oct 1984 A
4476488 Merrell Oct 1984 A
4476584 Dages Oct 1984 A
4479373 Montorfano et al. Oct 1984 A
4481437 Parker Nov 1984 A
4481584 Holland Nov 1984 A
4484044 Yoshigae Nov 1984 A
4485383 Maher Nov 1984 A
4486832 Haubner et al. Dec 1984 A
4488179 Krüger et al. Dec 1984 A
4491694 Harmeyer Jan 1985 A
4491962 Sakou et al. Jan 1985 A
4492036 Beckwith, Jr. Jan 1985 A
4492170 Solomon Jan 1985 A
4492952 Miller Jan 1985 A
4494114 Kaish Jan 1985 A
4494121 Walter et al. Jan 1985 A
4494197 Troy et al. Jan 1985 A
4495112 Itou et al. Jan 1985 A
4495283 Araki et al. Jan 1985 A
4495654 Deiss Jan 1985 A
4499006 Valone et al. Feb 1985 A
4499009 Yamanaka et al. Feb 1985 A
4499022 Battais et al. Feb 1985 A
4499057 Burgard et al. Feb 1985 A
4499601 Matthews Feb 1985 A
4501016 Persoon et al. Feb 1985 A
4502313 Phalin et al. Mar 1985 A
4504545 Kurita et al. Mar 1985 A
4504546 Sallay Mar 1985 A
4506301 Kingsley et al. Mar 1985 A
4508271 Gress Apr 1985 A
4508845 Dromard et al. Apr 1985 A
4508999 Melocik et al. Apr 1985 A
4509986 Hooykaas Apr 1985 A
4511918 Lemelson Apr 1985 A
4511947 Melocik et al. Apr 1985 A
4514665 Melocik et al. Apr 1985 A
4518350 Mueller et al. May 1985 A
4518902 Melocik et al. May 1985 A
4519086 Hull et al. May 1985 A
4519462 Kelley May 1985 A
4520674 Canada et al. Jun 1985 A
4521021 Dixon Jun 1985 A
4521644 Bernard, Jr. Jun 1985 A
4521885 Melocik et al. Jun 1985 A
4526078 Chadabe Jul 1985 A
4527194 Sirazi Jul 1985 A
4527508 Juve Jul 1985 A
4527608 Bak et al. Jul 1985 A
4528335 Selby et al. Jul 1985 A
4528563 Takeuchi Jul 1985 A
4528643 Freeny, Jr. Jul 1985 A
4529410 Khaladji et al. Jul 1985 A
4529435 Lavanish Jul 1985 A
4529436 Pasarela Jul 1985 A
4529437 Colle et al. Jul 1985 A
4529919 Melocik et al. Jul 1985 A
4531187 Uhland Jul 1985 A
4532589 Shintani et al. Jul 1985 A
4535453 Rhodes et al. Aug 1985 A
4535866 Shiga Aug 1985 A
4536791 Campbell et al. Aug 1985 A
4538072 Immler et al. Aug 1985 A
4539642 Mizuno et al. Sep 1985 A
4542897 Melton et al. Sep 1985 A
4543577 Tachibana et al. Sep 1985 A
4543660 Maeda Sep 1985 A
4543665 Sotelo et al. Sep 1985 A
4544295 Hashimoto et al. Oct 1985 A
4546382 McKenna et al. Oct 1985 A
4546387 Glaab Oct 1985 A
4546439 Gene Esparza Oct 1985 A
4547438 McArthur et al. Oct 1985 A
4547439 Genies Oct 1985 A
4547778 Hinkle et al. Oct 1985 A
4547811 Ochi et al. Oct 1985 A
4547899 Nally et al. Oct 1985 A
4548815 Ponsford et al. Oct 1985 A
4549004 von Au et al. Oct 1985 A
4549014 Georgiev et al. Oct 1985 A
4550317 Moriyama et al. Oct 1985 A
4550663 DeViaris Nov 1985 A
4552456 Endo Nov 1985 A
4553261 Froessl Nov 1985 A
4555192 Ochiai Nov 1985 A
4555651 Melocik et al. Nov 1985 A
4558464 O'Brien, Jr. Dec 1985 A
4561907 Raicu Dec 1985 A
4567359 Lockwood Jan 1986 A
4567756 Colborn Feb 1986 A
4567757 Melocik et al. Feb 1986 A
4570227 Tachi et al. Feb 1986 A
4571131 Date Feb 1986 A
4572079 Theurer Feb 1986 A
4573072 Freeman Feb 1986 A
4575223 Shimono et al. Mar 1986 A
4575579 Simon et al. Mar 1986 A
4575628 Bankart et al. Mar 1986 A
4575679 Chung et al. Mar 1986 A
4575755 Schoeneberger et al. Mar 1986 A
4575763 Elabd Mar 1986 A
4575769 Arnoldi Mar 1986 A
4578678 Hurd Mar 1986 A
4579482 Gastaldi et al. Apr 1986 A
4579882 Kanbe et al. Apr 1986 A
4579906 Zabrocki et al. Apr 1986 A
4580742 Moosberg et al. Apr 1986 A
4581762 Lapidus et al. Apr 1986 A
4581769 Grimsley et al. Apr 1986 A
4582942 Comninellis et al. Apr 1986 A
4584412 Aicher et al. Apr 1986 A
4584709 Kneisel et al. Apr 1986 A
4588458 Previsani May 1986 A
4589423 Turner May 1986 A
4591730 Pennoni May 1986 A
4591752 Thouret et al. May 1986 A
4591823 Horvat May 1986 A
4591976 Webber et al. May 1986 A
4592004 Bocker et al. May 1986 A
4592280 Shores Jun 1986 A
4592282 Niemi et al. Jun 1986 A
4593367 Slack et al. Jun 1986 A
4593814 Hagiwara et al. Jun 1986 A
4593819 Will Jun 1986 A
4595560 Buchner et al. Jun 1986 A
4595625 Crass et al. Jun 1986 A
4595662 Mochida et al. Jun 1986 A
4596005 Frasier Jun 1986 A
4596010 Beckner et al. Jun 1986 A
4596362 Pralle et al. Jun 1986 A
4596988 Wanka Jun 1986 A
4597653 Seely et al. Jul 1986 A
4597772 Coffman Jul 1986 A
4599620 Evans Jul 1986 A
4600921 Thomas Jul 1986 A
4602279 Freeman Jul 1986 A
4603349 Robbins Jul 1986 A
4603677 Gile et al. Aug 1986 A
4603689 Horner Aug 1986 A
4604007 Hall et al. Aug 1986 A
4605964 Chard Aug 1986 A
4606815 Gibson Aug 1986 A
4607842 Daoust Aug 1986 A
4607867 Jansen Aug 1986 A
4607872 Herner Aug 1986 A
4609089 Kobayashi et al. Sep 1986 A
4609092 Takai Sep 1986 A
4609095 Lenherr et al. Sep 1986 A
4609098 Morgan et al. Sep 1986 A
4609104 Kasper et al. Sep 1986 A
4610025 Blum et al. Sep 1986 A
4612850 Kanazashi et al. Sep 1986 A
4613867 Golinsky Sep 1986 A
4614342 Takashima Sep 1986 A
4614452 Wang Sep 1986 A
4614474 Sudo Sep 1986 A
4614533 Schallner et al. Sep 1986 A
4614545 Hess Sep 1986 A
4614546 Schroer et al. Sep 1986 A
4616214 Naito Oct 1986 A
4617406 Willging Oct 1986 A
4617407 Young et al. Oct 1986 A
4619943 Rao Oct 1986 A
4619946 Sapienza et al. Oct 1986 A
4619976 Morris et al. Oct 1986 A
4620036 Ono et al. Oct 1986 A
4620225 Wendland et al. Oct 1986 A
4620235 Watt Oct 1986 A
4620247 Papciak et al. Oct 1986 A
4620253 Garwin et al. Oct 1986 A
4620259 Oshizawa Oct 1986 A
4620265 Lerude et al. Oct 1986 A
4620266 Baumann et al. Oct 1986 A
4620268 Ferenc Oct 1986 A
4621285 Schilling et al. Nov 1986 A
4622557 Westerfield Nov 1986 A
4624108 Leiber Nov 1986 A
4625080 Scott Nov 1986 A
4625222 Bassetti et al. Nov 1986 A
4626634 Brahm et al. Dec 1986 A
4626658 Gray et al. Dec 1986 A
4626670 Miller Dec 1986 A
4626676 Gerardin Dec 1986 A
4626677 Browne Dec 1986 A
4626678 Morita et al. Dec 1986 A
4626788 Ishigaki Dec 1986 A
4626801 Field Dec 1986 A
4626850 Chey Dec 1986 A
4626891 Achiha Dec 1986 A
4626929 Ichinoi et al. Dec 1986 A
4626933 Bucska et al. Dec 1986 A
4626939 Takai et al. Dec 1986 A
4627620 Yang Dec 1986 A
4628608 Kuhlmann et al. Dec 1986 A
4630108 Gomersall Dec 1986 A
4630308 Hongo Dec 1986 A
4630685 Huck, Jr. et al. Dec 1986 A
4630910 Ross et al. Dec 1986 A
4631542 Grimsley Dec 1986 A
4631735 Qureshi Dec 1986 A
4632058 Dixon et al. Dec 1986 A
4632109 Paterson Dec 1986 A
4632197 Karpa Dec 1986 A
4632198 Uchimura Dec 1986 A
4632199 Ober et al. Dec 1986 A
4632200 Doyen et al. Dec 1986 A
4633507 Cannistra et al. Dec 1986 A
4633966 Fotheringham Jan 1987 A
4634402 Hazebrook Jan 1987 A
4636848 Yamamoto Jan 1987 A
4636951 Harlick Jan 1987 A
4637182 Ellsworth et al. Jan 1987 A
4637540 Fujita et al. Jan 1987 A
4638188 Cray Jan 1987 A
4638445 Mattaboni Jan 1987 A
4639978 Boden Feb 1987 A
4640339 Klaren Feb 1987 A
4641205 Beyers, Jr. Feb 1987 A
4642639 Nelson Feb 1987 A
4642775 Cline et al. Feb 1987 A
4644141 Hagen et al. Feb 1987 A
4644351 Zabarsky et al. Feb 1987 A
4644368 Mutz Feb 1987 A
4644903 Shaver Feb 1987 A
4644907 Hunter Feb 1987 A
4645049 Matsuda et al. Feb 1987 A
4645458 Williams Feb 1987 A
4645873 Chomet Feb 1987 A
4646089 Takanabe et al. Feb 1987 A
4646096 Brown Feb 1987 A
4646250 Childress Feb 1987 A
4647784 Stephens Mar 1987 A
4648042 Staiger Mar 1987 A
4649524 Vance Mar 1987 A
4651157 Gray et al. Mar 1987 A
4652884 Starker Mar 1987 A
4653109 Lemelson et al. Mar 1987 A
4654377 Mohring et al. Mar 1987 A
4654867 Labedz et al. Mar 1987 A
4654879 Goldman et al. Mar 1987 A
4656179 Bernath et al. Apr 1987 A
4656463 Anders et al. Apr 1987 A
4656476 Tavtigian Apr 1987 A
4656665 Pennebaker Apr 1987 A
4656976 Rhoads Apr 1987 A
4657256 Okada Apr 1987 A
4657258 Melov et al. Apr 1987 A
4657264 Wehber Apr 1987 A
4657799 Nann et al. Apr 1987 A
4658094 Clark Apr 1987 A
4658298 Takeda et al. Apr 1987 A
4658370 Erman et al. Apr 1987 A
4658429 Orita et al. Apr 1987 A
4659970 Melocik Apr 1987 A
4660166 Hopfield Apr 1987 A
4663630 Numaho et al. May 1987 A
4666379 Smith May 1987 A
4666384 Kaga et al. May 1987 A
4666461 Dorer, Jr. May 1987 A
4666480 Mann May 1987 A
4666490 Drake May 1987 A
4666580 Beaver et al. May 1987 A
4666757 Helinski May 1987 A
4667203 Counselman, III May 1987 A
4668515 Bankit et al. May 1987 A
4668952 Imazeki et al. May 1987 A
4669185 Westover et al. Jun 1987 A
4669186 Liu Jun 1987 A
4670688 Sigai et al. Jun 1987 A
4671654 Miyahara et al. Jun 1987 A
4671772 Slade et al. Jun 1987 A
4672683 Matsueda Jun 1987 A
4672860 Parker Jun 1987 A
4673936 Kotoh Jun 1987 A
4674041 Lemon et al. Jun 1987 A
4674048 Okumura Jun 1987 A
4675755 Baumeister et al. Jun 1987 A
4677466 Lert, Jr. et al. Jun 1987 A
4677555 Goyet Jun 1987 A
4677563 Itoh et al. Jun 1987 A
4677680 Harima et al. Jun 1987 A
4677686 Hustig et al. Jun 1987 A
4677845 Izumi et al. Jul 1987 A
4678329 Lukowski, Jr. et al. Jul 1987 A
4678792 Nickl et al. Jul 1987 A
4678793 Klaus et al. Jul 1987 A
4678814 Rembaum Jul 1987 A
4679137 Lane et al. Jul 1987 A
4679147 Tsujii et al. Jul 1987 A
4680715 Pawelek Jul 1987 A
4680787 Marry Jul 1987 A
4680835 Horng Jul 1987 A
4681576 Colon et al. Jul 1987 A
4682365 Orita et al. Jul 1987 A
4682953 Doerfel et al. Jul 1987 A
4683860 Shimamura et al. Aug 1987 A
4684247 Hammill, III Aug 1987 A
4684331 LaGrange et al. Aug 1987 A
4685145 Schiller Aug 1987 A
4685821 Marsh Aug 1987 A
4686006 Cheshire et al. Aug 1987 A
4686009 McCabe Aug 1987 A
4686356 Ueda et al. Aug 1987 A
4686357 Douno et al. Aug 1987 A
4686995 Fournial et al. Aug 1987 A
4687732 Ward et al. Aug 1987 A
4688244 Hannon et al. Aug 1987 A
4689022 Peers et al. Aug 1987 A
4689867 Tolliver Sep 1987 A
4690610 Fotheringham Sep 1987 A
4690859 Porter et al. Sep 1987 A
4691097 Theiss et al. Sep 1987 A
4691104 Murata et al. Sep 1987 A
4691106 Hyun et al. Sep 1987 A
4691149 Baumgartner et al. Sep 1987 A
4691351 Hayashi et al. Sep 1987 A
4691354 Palminteri Sep 1987 A
4691385 Tupman Sep 1987 A
4694458 Midavaine et al. Sep 1987 A
4694463 Hirth et al. Sep 1987 A
4694490 Harvey et al. Sep 1987 A
4695175 Tsukada et al. Sep 1987 A
4695429 Lupoli et al. Sep 1987 A
4695587 Terahara et al. Sep 1987 A
4695953 Blair et al. Sep 1987 A
4695975 Bedrij Sep 1987 A
4696290 Steffee Sep 1987 A
4696291 Tyo Sep 1987 A
4697209 Kiewit et al. Sep 1987 A
4697248 Shirota Sep 1987 A
4697251 Birrittella et al. Sep 1987 A
4697256 Shinkai Sep 1987 A
4697281 O'Sullivan Sep 1987 A
4697282 Winter et al. Sep 1987 A
4697503 Okabe et al. Oct 1987 A
4698632 Baba et al. Oct 1987 A
4699458 Ohtsuki et al. Oct 1987 A
4699527 Hutzel Oct 1987 A
4699540 Gibbon et al. Oct 1987 A
4700191 Manor Oct 1987 A
4700301 Dyke Oct 1987 A
4701135 Volk et al. Oct 1987 A
4701197 Thornton et al. Oct 1987 A
4701760 Raoux Oct 1987 A
4701794 Froling et al. Oct 1987 A
4701934 Jasper Oct 1987 A
4702077 Lilley et al. Oct 1987 A
4702475 Elstein et al. Oct 1987 A
4703444 Storms, Jr. et al. Oct 1987 A
4704763 Sacks et al. Nov 1987 A
4706056 McCullough Nov 1987 A
4706074 Muhich et al. Nov 1987 A
4706081 Hart et al. Nov 1987 A
4706121 Young Nov 1987 A
4706675 Ekins Nov 1987 A
4706688 Don Michael et al. Nov 1987 A
4706772 Dawson et al. Nov 1987 A
4707126 Ohshima et al. Nov 1987 A
4707926 Decker, Jr. Nov 1987 A
4709195 Hellekson et al. Nov 1987 A
4709407 Baba Nov 1987 A
4710822 Matsunawa Dec 1987 A
4710955 Kauffman Dec 1987 A
4710964 Yamaguchi et al. Dec 1987 A
4711543 Blair et al. Dec 1987 A
4713008 Stocker et al. Dec 1987 A
4713767 Sato et al. Dec 1987 A
4713775 Scott et al. Dec 1987 A
4716404 Tabata et al. Dec 1987 A
4716804 Chadabe Jan 1988 A
4718080 Serrano et al. Jan 1988 A
4718107 Hayes Jan 1988 A
4719591 Hopfield et al. Jan 1988 A
4722054 Yorozu et al. Jan 1988 A
4722410 Melocik et al. Feb 1988 A
4725840 Orazietti Feb 1988 A
4727492 Reeve et al. Feb 1988 A
4727962 Nelson Mar 1988 A
4728922 Christen et al. Mar 1988 A
4730690 McNutt et al. Mar 1988 A
4731613 Endo et al. Mar 1988 A
4731863 Sezan et al. Mar 1988 A
4733356 Haeussermann et al. Mar 1988 A
4734690 Waller Mar 1988 A
4734786 Minakawa et al. Mar 1988 A
4734928 Weiner et al. Mar 1988 A
4736439 May Apr 1988 A
4737927 Hanabusa et al. Apr 1988 A
4737978 Burke et al. Apr 1988 A
4739398 Thomas et al. Apr 1988 A
4740778 Harding et al. Apr 1988 A
4741245 Malone May 1988 A
4741412 Sable May 1988 A
4742557 Ma May 1988 A
4743913 Takai May 1988 A
4744761 Doerfel et al. May 1988 A
4745468 Von Kohorn May 1988 A
4745549 Hashimoto May 1988 A
4747148 Watanabe et al. May 1988 A
4748678 Takeda et al. May 1988 A
4750197 Denekamp et al. Jun 1988 A
4750215 Biggs Jun 1988 A
4751512 Longaker Jun 1988 A
4751578 Reiter et al. Jun 1988 A
4751642 Silva et al. Jun 1988 A
4751669 Sturgis et al. Jun 1988 A
4751983 Leskovec et al. Jun 1988 A
4752677 Nakano et al. Jun 1988 A
4752890 Natarajan et al. Jun 1988 A
4754280 Brown et al. Jun 1988 A
4754283 Fowler Jun 1988 A
4754326 Kram et al. Jun 1988 A
4754465 Trimble Jun 1988 A
4755872 Bestler et al. Jul 1988 A
4755905 Telecky, Jr. Jul 1988 A
4757267 Riskin Jul 1988 A
4757450 Etoh Jul 1988 A
4757455 Tsunoda et al. Jul 1988 A
4758959 Thoone et al. Jul 1988 A
4760527 Sidley Jul 1988 A
4760604 Cooper et al. Jul 1988 A
4761684 Clark et al. Aug 1988 A
4761742 Hanabusa et al. Aug 1988 A
4763270 Itoh et al. Aug 1988 A
4763418 Decker, Jr. Aug 1988 A
4764971 Sullivan Aug 1988 A
4764973 O'Hair Aug 1988 A
4768110 Dunlap et al. Aug 1988 A
4769697 Gilley et al. Sep 1988 A
4771467 Catros et al. Sep 1988 A
4773024 Faggin et al. Sep 1988 A
4773099 Bokser Sep 1988 A
4774672 Tsunoda et al. Sep 1988 A
4774677 Buckley Sep 1988 A
4775935 Yourick Oct 1988 A
4776464 Miller et al. Oct 1988 A
4776750 Griswold, Jr. et al. Oct 1988 A
4780717 Takanabe et al. Oct 1988 A
4780759 Matsushima et al. Oct 1988 A
4781514 Schneider Nov 1988 A
4782447 Ueno et al. Nov 1988 A
4783741 Mitterauer Nov 1988 A
4783752 Kaplan et al. Nov 1988 A
4783754 Bauck et al. Nov 1988 A
4783829 Miyakawa et al. Nov 1988 A
4785463 Janc et al. Nov 1988 A
4786164 Kawata Nov 1988 A
4787063 Muguet Nov 1988 A
4789933 Chen et al. Dec 1988 A
4790025 Inoue et al. Dec 1988 A
4790402 Field et al. Dec 1988 A
4791420 Baba Dec 1988 A
4791572 Green, III et al. Dec 1988 A
4792995 Harding Dec 1988 A
4796189 Nakayama et al. Jan 1989 A
4796191 Honey et al. Jan 1989 A
4796997 Svetkoff et al. Jan 1989 A
4797920 Stein Jan 1989 A
4799062 Sanderford, Jr. et al. Jan 1989 A
4799270 Kim et al. Jan 1989 A
4801938 Holmes Jan 1989 A
4802022 Harada Jan 1989 A
4802103 Faggin et al. Jan 1989 A
4802230 Horowitz Jan 1989 A
4803103 Pithouse et al. Feb 1989 A
4803348 Lohrey et al. Feb 1989 A
4803736 Grossberg et al. Feb 1989 A
4804893 Melocik Feb 1989 A
4804937 Barbiaux et al. Feb 1989 A
4804949 Faulkerson Feb 1989 A
4805099 Huber Feb 1989 A
4805224 Koezuka et al. Feb 1989 A
4805225 Clark Feb 1989 A
4805231 Whidden Feb 1989 A
4805255 Hed Feb 1989 A
4807131 Clegg Feb 1989 A
4807158 Blanton et al. Feb 1989 A
4807714 Blau et al. Feb 1989 A
4809005 Counselman, III Feb 1989 A
4809065 Harris et al. Feb 1989 A
4809178 Ninomiya et al. Feb 1989 A
4809331 Holmes Feb 1989 A
4809341 Matsui et al. Feb 1989 A
4812820 Chatwin Mar 1989 A
4812843 Champion, III et al. Mar 1989 A
4812845 Yamada et al. Mar 1989 A
4812991 Hatch Mar 1989 A
4814711 Olsen et al. Mar 1989 A
4814989 Dobereiner et al. Mar 1989 A
4815020 Cormier Mar 1989 A
4815030 Cross et al. Mar 1989 A
4817171 Stentiford Mar 1989 A
4817176 Marshall et al. Mar 1989 A
4817950 Goo Apr 1989 A
4818171 Burkholder Apr 1989 A
4818997 Holmes Apr 1989 A
4819053 Halavais Apr 1989 A
4819174 Furuno et al. Apr 1989 A
4819195 Bell et al. Apr 1989 A
4819860 Hargrove et al. Apr 1989 A
4821102 Ichikawa et al. Apr 1989 A
4821294 Thomas, Jr. Apr 1989 A
4821309 Namekawa Apr 1989 A
4821333 Gillies Apr 1989 A
4823122 Mann et al. Apr 1989 A
4823194 Mishima et al. Apr 1989 A
4823901 Harding Apr 1989 A
4825457 Lebowitz Apr 1989 A
4829372 McCalley et al. May 1989 A
4829434 Karmel et al. May 1989 A
4829442 Kadonoff et al. May 1989 A
4829453 Katsuta et al. May 1989 A
4829569 Seth-Smith et al. May 1989 A
4829872 Topic et al. May 1989 A
4831539 Hagenbuch May 1989 A
4831659 Miyaoka et al. May 1989 A
4833469 David May 1989 A
4833477 Tendler May 1989 A
4833637 Casasent et al. May 1989 A
4837700 Ando et al. Jun 1989 A
4837842 Holt Jun 1989 A
4839835 Hagenbuch Jun 1989 A
4841302 Henry Jun 1989 A
4841562 Lem Jun 1989 A
4841575 Welsh et al. Jun 1989 A
4842275 Tsatskin Jun 1989 A
4843562 Kenyon et al. Jun 1989 A
4843568 Krueger et al. Jun 1989 A
4843631 Steinpichler et al. Jun 1989 A
4845610 Parvin Jul 1989 A
4845739 Katz Jul 1989 A
4846297 Field et al. Jul 1989 A
4847698 Freeman Jul 1989 A
4847699 Freeman Jul 1989 A
4847700 Freeman Jul 1989 A
4847862 Braisted et al. Jul 1989 A
4849731 Melocik Jul 1989 A
4852146 Hathcock et al. Jul 1989 A
4853859 Morita et al. Aug 1989 A
4855713 Brunius Aug 1989 A
4855915 Dallaire Aug 1989 A
4856787 Itkis Aug 1989 A
4857999 Welsh Aug 1989 A
4860352 Laurance et al. Aug 1989 A
4861220 Smith Aug 1989 A
4862015 Grandfield Aug 1989 A
4862175 Biggs et al. Aug 1989 A
4862422 Brac Aug 1989 A
4864284 Crayton et al. Sep 1989 A
4864592 Lee Sep 1989 A
4864629 Deering Sep 1989 A
4866434 Keenan Sep 1989 A
4866450 Chisholm Sep 1989 A
4866700 Berry et al. Sep 1989 A
4866776 Kasai et al. Sep 1989 A
4868859 Sheffer Sep 1989 A
4868866 Williams, Jr. Sep 1989 A
4869635 Krahn Sep 1989 A
4870422 Counselman, III Sep 1989 A
4870579 Hey Sep 1989 A
4872024 Nagai et al. Oct 1989 A
4873662 Sargent Oct 1989 A
4875164 Monfort Oct 1989 A
4876527 Oka et al. Oct 1989 A
4876592 Von Kohorn Oct 1989 A
4876659 Devereux et al. Oct 1989 A
4876731 Loris et al. Oct 1989 A
4878170 Zeevi Oct 1989 A
4878179 Larsen et al. Oct 1989 A
4879658 Takashima et al. Nov 1989 A
4881270 Knecht et al. Nov 1989 A
4882689 Aoki Nov 1989 A
4882696 Nimura et al. Nov 1989 A
4882732 Kaminaga Nov 1989 A
4884217 Skeirik et al. Nov 1989 A
4884348 Zeller et al. Dec 1989 A
4885632 Mabey et al. Dec 1989 A
4887068 Umehara Dec 1989 A
4887304 Terzian Dec 1989 A
4888699 Knoll et al. Dec 1989 A
4888814 Yamaguchi et al. Dec 1989 A
4888890 Studebaker et al. Dec 1989 A
4890230 Tanoshima et al. Dec 1989 A
4890233 Ando et al. Dec 1989 A
4890321 Seth-Smith et al. Dec 1989 A
4891650 Sheffer Jan 1990 A
4891761 Gray et al. Jan 1990 A
4891762 Chotiros Jan 1990 A
4893183 Nayar Jan 1990 A
4893346 Bishop Jan 1990 A
4894655 Joguet et al. Jan 1990 A
4894662 Counselman Jan 1990 A
4894734 Fischler et al. Jan 1990 A
4896370 Kasparian et al. Jan 1990 A
4897642 DiLullo et al. Jan 1990 A
4897811 Scofield Jan 1990 A
D306162 Faulkerson et al. Feb 1990 S
4899285 Nakayama et al. Feb 1990 A
4899370 Kameo et al. Feb 1990 A
4901340 Parker et al. Feb 1990 A
4901362 Terzian Feb 1990 A
4901364 Faulkerson et al. Feb 1990 A
4902020 Auxier Feb 1990 A
4902986 Lesmeister Feb 1990 A
4903211 Ando Feb 1990 A
4903212 Yokouchi et al. Feb 1990 A
4903229 Schmidt et al. Feb 1990 A
4904983 Mitchell Feb 1990 A
4905162 Hartzband et al. Feb 1990 A
4905163 Garber et al. Feb 1990 A
4905168 McCarthy et al. Feb 1990 A
4905286 Sedgwick et al. Feb 1990 A
4905296 Nishihara Feb 1990 A
4906099 Casasent Mar 1990 A
4906940 Greene et al. Mar 1990 A
4907159 Mauge et al. Mar 1990 A
4908629 Apsell et al. Mar 1990 A
4908707 Kinghorn Mar 1990 A
4908713 Levine Mar 1990 A
4908758 Sanders Mar 1990 A
4910493 Chambers et al. Mar 1990 A
4910677 Remedio et al. Mar 1990 A
4912433 Motegi et al. Mar 1990 A
4912475 Counselman, III Mar 1990 A
4912643 Beirne Mar 1990 A
4912645 Kakihara et al. Mar 1990 A
4912648 Tyler Mar 1990 A
4912756 Hop Mar 1990 A
4914609 Shimizu et al. Apr 1990 A
4914708 Carpenter et al. Apr 1990 A
4914709 Rudak Apr 1990 A
4918425 Greenberg et al. Apr 1990 A
4918516 Freeman Apr 1990 A
4918609 Yamawaki Apr 1990 A
4920432 Eggers et al. Apr 1990 A
4920499 Skeirik Apr 1990 A
4924402 Ando et al. May 1990 A
4924417 Yuasa May 1990 A
4924699 Kuroda et al. May 1990 A
4925189 Braeunig May 1990 A
4926255 Von Kohorn May 1990 A
4926327 Sidley May 1990 A
4926336 Yamada May 1990 A
4926491 Maeda et al. May 1990 A
4928105 Langner May 1990 A
4928106 Ashjaee et al. May 1990 A
4928107 Kuroda et al. May 1990 A
4928246 Crawley et al. May 1990 A
4928247 Doyle et al. May 1990 A
4930158 Vogel May 1990 A
4930160 Vogel May 1990 A
4931926 Tanaka et al. Jun 1990 A
4931985 Glaise et al. Jun 1990 A
4932065 Feldgajer Jun 1990 A
4932910 Hayday Jun 1990 A
4933872 Vandenberg et al. Jun 1990 A
4937751 Nimura et al. Jun 1990 A
4937752 Nanba et al. Jun 1990 A
4939521 Burin Jul 1990 A
4941125 Boyne Jul 1990 A
4941193 Barnsley et al. Jul 1990 A
4943925 Moroto et al. Jul 1990 A
4944023 Imao et al. Jul 1990 A
4945501 Bell et al. Jul 1990 A
4945563 Horton et al. Jul 1990 A
RE33316 Katsuta et al. Aug 1990 E
4947151 Rosenberger Aug 1990 A
4947244 Fenwick et al. Aug 1990 A
4947261 Ishikawa et al. Aug 1990 A
4949088 Ryan et al. Aug 1990 A
4949187 Cohen Aug 1990 A
4949268 Nishikawa et al. Aug 1990 A
4949391 Faulkerson et al. Aug 1990 A
4951029 Severson Aug 1990 A
4951211 De Villeroche Aug 1990 A
4951212 Kurihara et al. Aug 1990 A
4952936 Martinson Aug 1990 A
4952937 Allen Aug 1990 A
4954824 Yamada et al. Sep 1990 A
4954828 Orr Sep 1990 A
4954837 Baird et al. Sep 1990 A
4954951 Hyatt Sep 1990 A
4954958 Savage et al. Sep 1990 A
4954959 Moroto et al. Sep 1990 A
4955693 Bobba Sep 1990 A
4956870 Hara Sep 1990 A
4958220 Alessi et al. Sep 1990 A
4958375 Reilly et al. Sep 1990 A
4958379 Yamaguchi et al. Sep 1990 A
4959719 Strubbe et al. Sep 1990 A
4959720 Duffield et al. Sep 1990 A
4961074 Martinson Oct 1990 A
4962473 Crain Oct 1990 A
4963865 Ichikawa et al. Oct 1990 A
4963889 Hatch Oct 1990 A
4963994 Levine Oct 1990 A
4964077 Eisen et al. Oct 1990 A
4965285 Bair Oct 1990 A
4965725 Rutenberg Oct 1990 A
4965821 Bishop et al. Oct 1990 A
4965825 Harvey et al. Oct 1990 A
4967273 Greenberg Oct 1990 A
4968877 McAvinney et al. Nov 1990 A
4968981 Sekine et al. Nov 1990 A
4969036 Bhanu et al. Nov 1990 A
4969093 Barker et al. Nov 1990 A
4970652 Nagashima Nov 1990 A
4972431 Keegan Nov 1990 A
4972484 Theile et al. Nov 1990 A
4972499 Kurosawa Nov 1990 A
4974149 Valenti Nov 1990 A
4974170 Bouve et al. Nov 1990 A
4975707 Smith Dec 1990 A
4975904 Mann et al. Dec 1990 A
4975905 Mann et al. Dec 1990 A
4976619 Carlson Dec 1990 A
4977455 Young Dec 1990 A
4977679 Saito et al. Dec 1990 A
4979222 Weber Dec 1990 A
4982344 Jordan Jan 1991 A
4982346 Girouard et al. Jan 1991 A
4983980 Ando Jan 1991 A
4984255 Davis et al. Jan 1991 A
4985863 Fujisawa et al. Jan 1991 A
4986384 Okamoto et al. Jan 1991 A
4986385 Masaki Jan 1991 A
4987486 Johnson et al. Jan 1991 A
4987492 Stults et al. Jan 1991 A
4987604 Rouch Jan 1991 A
4988981 Zimmerman et al. Jan 1991 A
4989090 Campbell et al. Jan 1991 A
4989151 Nuimura Jan 1991 A
4989256 Buckley Jan 1991 A
4989258 Takahashi et al. Jan 1991 A
4991011 Johnson et al. Feb 1991 A
4991304 McMurtry Feb 1991 A
4992940 Dworkin Feb 1991 A
4992947 Nimura et al. Feb 1991 A
4992972 Brooks et al. Feb 1991 A
4994908 Kuban et al. Feb 1991 A
4995078 Monslow et al. Feb 1991 A
4996642 Hey Feb 1991 A
4996645 Schneyderberg Van Der Zon Feb 1991 A
4996703 Gray Feb 1991 A
4996707 O'Malley et al. Feb 1991 A
4998286 Tsujiuchi et al. Mar 1991 A
5001554 Johnson et al. Mar 1991 A
5001777 Liautaud Mar 1991 A
5003317 Gray et al. Mar 1991 A
5003584 Benyacar et al. Mar 1991 A
5005084 Skinner Apr 1991 A
5006855 Braff Apr 1991 A
5008678 Herman Apr 1991 A
5008853 Bly et al. Apr 1991 A
5009429 Auxier Apr 1991 A
5010491 Biasillo et al. Apr 1991 A
5010500 Makkuni et al. Apr 1991 A
5012334 Etra Apr 1991 A
5012349 de Fay Apr 1991 A
5014098 Schlais et al. May 1991 A
5014206 Scribner et al. May 1991 A
5014219 White May 1991 A
5014234 Edwards, Jr. May 1991 A
5014327 Potter et al. May 1991 A
5016272 Stubbs et al. May 1991 A
5016273 Hoff May 1991 A
5017926 Ames et al. May 1991 A
5018169 Wong et al. May 1991 A
5018218 Peregrim et al. May 1991 A
5018219 Matsuzaki et al. May 1991 A
5019899 Boles et al. May 1991 A
5020112 Chou May 1991 A
5020113 Lo et al. May 1991 A
5021792 Hwang Jun 1991 A
5021794 Lawrence Jun 1991 A
5021976 Wexelblat et al. Jun 1991 A
5022062 Annis Jun 1991 A
5025261 Ohta et al. Jun 1991 A
5025310 Sekiya et al. Jun 1991 A
5025324 Hashimoto Jun 1991 A
5027400 Baji et al. Jun 1991 A
5028888 Ray Jul 1991 A
5030957 Evans Jul 1991 A
5031104 Ikeda et al. Jul 1991 A
5031224 Mengel et al. Jul 1991 A
5031228 Lu Jul 1991 A
5031330 Stuart Jul 1991 A
5033101 Sood Jul 1991 A
5034807 Von Kohorn Jul 1991 A
5034916 Ordish Jul 1991 A
5034991 Hagimae et al. Jul 1991 A
5036314 Barillari et al. Jul 1991 A
5036329 Ando Jul 1991 A
5036537 Jeffers et al. Jul 1991 A
5038022 Lucero Aug 1991 A
5038102 Glasheen Aug 1991 A
5038211 Hallenbeck Aug 1991 A
5038379 Sano Aug 1991 A
5038390 Ravi Chandran Aug 1991 A
5039979 McClive Aug 1991 A
5040134 Park Aug 1991 A
5041833 Weinberg Aug 1991 A
5041967 Ephrath et al. Aug 1991 A
5043736 Darnell et al. Aug 1991 A
5043881 Hamazaki Aug 1991 A
5043902 Yokoyama et al. Aug 1991 A
5045861 Duffett-Smith Sep 1991 A
5045937 Myrick Sep 1991 A
5046011 Kakihara et al. Sep 1991 A
5046113 Hoki Sep 1991 A
5046121 Yonekawa et al. Sep 1991 A
5046122 Nakaya et al. Sep 1991 A
5046130 Hall et al. Sep 1991 A
5046179 Uomori et al. Sep 1991 A
5047867 Strubbe et al. Sep 1991 A
5048095 Bhanu et al. Sep 1991 A
5048100 Kuperstein Sep 1991 A
5048112 Alves et al. Sep 1991 A
5049884 Jaeger et al. Sep 1991 A
5049885 Orr Sep 1991 A
5050223 Sumi Sep 1991 A
5051817 Takano Sep 1991 A
5051840 Watanabe et al. Sep 1991 A
5051998 Murai et al. Sep 1991 A
5052043 Gaborski Sep 1991 A
5052045 Peregrim et al. Sep 1991 A
5052046 Fukuda et al. Sep 1991 A
5052799 Sasser et al. Oct 1991 A
5053889 Nakano et al. Oct 1991 A
5053974 Penz Oct 1991 A
5054093 Cooper et al. Oct 1991 A
5054095 Bernsen et al. Oct 1991 A
5054101 Prakash Oct 1991 A
5054103 Yasuda et al. Oct 1991 A
5054110 Comroe et al. Oct 1991 A
5055658 Cockburn Oct 1991 A
5055851 Sheffer Oct 1991 A
5055926 Christensen et al. Oct 1991 A
5056056 Gustin Oct 1991 A
5056106 Wang et al. Oct 1991 A
5056147 Turner et al. Oct 1991 A
5057915 Von Kohorn Oct 1991 A
5058108 Mann et al. Oct 1991 A
5058179 Denker et al. Oct 1991 A
5058180 Khan Oct 1991 A
5058183 Schmidt et al. Oct 1991 A
5058184 Fukushima Oct 1991 A
5058186 Miyaoka et al. Oct 1991 A
5058698 Yoshida et al. Oct 1991 A
5059126 Kimball Oct 1991 A
5059969 Sakaguchi et al. Oct 1991 A
5060262 Bevins, Jr et al. Oct 1991 A
5060276 Morris et al. Oct 1991 A
5060277 Bokser Oct 1991 A
5060278 Fukumizu Oct 1991 A
5060279 Crawford et al. Oct 1991 A
5060282 Molley Oct 1991 A
5060285 Dixit et al. Oct 1991 A
5061063 Casasent Oct 1991 A
5061936 Suzuki Oct 1991 A
5062143 Schmitt Oct 1991 A
5063385 Caschera Nov 1991 A
5063524 Ferre et al. Nov 1991 A
5063525 Kurakake et al. Nov 1991 A
5063601 Hayduk Nov 1991 A
5063602 Peppers et al. Nov 1991 A
5063603 Burt Nov 1991 A
5063605 Samad Nov 1991 A
5063608 Siegel Nov 1991 A
5065326 Sahm Nov 1991 A
5065439 Takasaki et al. Nov 1991 A
5065440 Yoshida et al. Nov 1991 A
5065447 Barnsley et al. Nov 1991 A
5067082 Nimura et al. Nov 1991 A
5067160 Omata et al. Nov 1991 A
5067161 Mikami et al. Nov 1991 A
5067162 Driscoll, Jr. et al. Nov 1991 A
5067163 Adachi Nov 1991 A
5067164 Denker et al. Nov 1991 A
5067166 Ito Nov 1991 A
5068656 Sutherland Nov 1991 A
5068663 Valentine et al. Nov 1991 A
5068664 Appriou et al. Nov 1991 A
5068723 Dixit et al. Nov 1991 A
5068724 Krause et al. Nov 1991 A
5068733 Bennett Nov 1991 A
5068744 Ito Nov 1991 A
5068909 Rutherford et al. Nov 1991 A
5068911 Resnikoff et al. Nov 1991 A
5070404 Bullock et al. Dec 1991 A
5072227 Hatch Dec 1991 A
5072395 Bliss et al. Dec 1991 A
5073931 Audebert et al. Dec 1991 A
5075693 McMillan et al. Dec 1991 A
5075771 Hashimoto Dec 1991 A
5075863 Nagamune et al. Dec 1991 A
5076662 Shih et al. Dec 1991 A
5077557 Ingensand Dec 1991 A
5077607 Johnson et al. Dec 1991 A
5079553 Orr Jan 1992 A
5081667 Drori et al. Jan 1992 A
5081703 Lee Jan 1992 A
5083129 Valentine et al. Jan 1992 A
5083218 Takasu et al. Jan 1992 A
5083256 Trovato et al. Jan 1992 A
5083271 Thacher et al. Jan 1992 A
5083860 Miyatake et al. Jan 1992 A
5084822 Hayami Jan 1992 A
5086385 Launey et al. Feb 1992 A
5086390 Matthews Feb 1992 A
5086394 Shapira Feb 1992 A
5087919 Odagawa et al. Feb 1992 A
5089826 Yano et al. Feb 1992 A
5089885 Clark Feb 1992 A
5089978 Lipner et al. Feb 1992 A
5090049 Chen Feb 1992 A
5093718 Hoarty et al. Mar 1992 A
5093873 Takahashi Mar 1992 A
5093918 Heyen et al. Mar 1992 A
5095480 Fenner Mar 1992 A
5095531 Ito Mar 1992 A
5097269 Takayama et al. Mar 1992 A
5099319 Esch et al. Mar 1992 A
5099422 Foresman et al. Mar 1992 A
5101356 Timothy et al. Mar 1992 A
5101416 Fenton et al. Mar 1992 A
5101444 Wilson et al. Mar 1992 A
5103400 Yamada et al. Apr 1992 A
5103459 Gilhousen et al. Apr 1992 A
5103498 Lanier et al. Apr 1992 A
5105184 Pirani et al. Apr 1992 A
5107256 Ueno et al. Apr 1992 A
5108334 Eschenbach et al. Apr 1992 A
5109279 Ando Apr 1992 A
5109399 Thompson Apr 1992 A
5109431 Nishiya et al. Apr 1992 A
5109439 Froessl Apr 1992 A
5109482 Bohrman Apr 1992 A
5111400 Yoder May 1992 A
5111401 Everett, Jr. et al. May 1992 A
5111516 Nakano et al. May 1992 A
5113259 Romesburg et al. May 1992 A
5113496 McCalley et al. May 1992 A
5115223 Moody May 1992 A
5115233 Zdunek et al. May 1992 A
5115245 Wen et al. May 1992 A
5115398 De Jong May 1992 A
5115501 Kerr May 1992 A
5117232 Cantwell May 1992 A
5117360 Hotz et al. May 1992 A
5119081 Ikehira Jun 1992 A
5119102 Barnard Jun 1992 A
5119301 Shimizu et al. Jun 1992 A
5119475 Smith et al. Jun 1992 A
5119479 Arai et al. Jun 1992 A
5119504 Durboraw, III Jun 1992 A
5119507 Mankovitz Jun 1992 A
5121326 Moroto et al. Jun 1992 A
5122802 Marin Jun 1992 A
5122803 Stann et al. Jun 1992 A
5122886 Tanaka Jun 1992 A
5122957 Hattori Jun 1992 A
5123046 Levine Jun 1992 A
5123052 Brisson Jun 1992 A
5123057 Verly et al. Jun 1992 A
5123087 Newell et al. Jun 1992 A
5124908 Broadbent Jun 1992 A
5124915 Krenzel Jun 1992 A
5126748 Ames et al. Jun 1992 A
5126851 Yoshimura et al. Jun 1992 A
5127487 Yamamoto et al. Jul 1992 A
5128525 Stearns et al. Jul 1992 A
5128669 Dadds et al. Jul 1992 A
5128874 Bhanu et al. Jul 1992 A
5128979 Reich et al. Jul 1992 A
5130792 Tindell et al. Jul 1992 A
5131020 Liebesny et al. Jul 1992 A
5132992 Yurt et al. Jul 1992 A
5133021 Carpenter et al. Jul 1992 A
5133024 Froessl Jul 1992 A
5133045 Gaither et al. Jul 1992 A
5133052 Bier et al. Jul 1992 A
5133075 Risch Jul 1992 A
5133079 Ballantyne et al. Jul 1992 A
5134406 Orr Jul 1992 A
5134649 Gutzmer Jul 1992 A
5134719 Mankovitz Jul 1992 A
5136659 Kaneko et al. Aug 1992 A
5136687 Edelman et al. Aug 1992 A
5136696 Beckwith et al. Aug 1992 A
5141234 Boylan et al. Aug 1992 A
5142161 Brackmann Aug 1992 A
5142574 West, Jr. et al. Aug 1992 A
5144317 Duddek et al. Sep 1992 A
5144318 Kishi Sep 1992 A
5146226 Valentine et al. Sep 1992 A
5146227 Papadopoulos Sep 1992 A
5146231 Ghaem et al. Sep 1992 A
5146404 Calloway et al. Sep 1992 A
5146552 Cassorla et al. Sep 1992 A
5148154 MacKay et al. Sep 1992 A
5148179 Allison Sep 1992 A
5148452 Kennedy et al. Sep 1992 A
5148497 Pentland et al. Sep 1992 A
5148522 Okazaki Sep 1992 A
5151701 Valentine et al. Sep 1992 A
5151789 Young Sep 1992 A
5153512 Glasheen Oct 1992 A
5153598 Alves, Jr. Oct 1992 A
5153836 Fraughton et al. Oct 1992 A
5155490 Spradley, Jr. et al. Oct 1992 A
5155491 Ando Oct 1992 A
5155591 Wachob Oct 1992 A
5155688 Tanaka et al. Oct 1992 A
5155689 Wortham Oct 1992 A
5157384 Greanias et al. Oct 1992 A
5157691 Ohkubo et al. Oct 1992 A
5159315 Schultz et al. Oct 1992 A
5159474 Franke et al. Oct 1992 A
5159549 Hallman, Jr. et al. Oct 1992 A
5159556 Schorter Oct 1992 A
5159668 Kaasila Oct 1992 A
5161027 Liu Nov 1992 A
5161107 Mayeaux et al. Nov 1992 A
5161204 Hutcheson et al. Nov 1992 A
5161886 De Jong et al. Nov 1992 A
5162997 Takahashi Nov 1992 A
5163131 Row Nov 1992 A
5164729 Decker et al. Nov 1992 A
5164904 Sumner Nov 1992 A
5168147 Bloomberg Dec 1992 A
5168353 Walker et al. Dec 1992 A
5168452 Yamada et al. Dec 1992 A
5168529 Peregrim et al. Dec 1992 A
5168565 Morita Dec 1992 A
5170171 Brown Dec 1992 A
5170388 Endoh Dec 1992 A
5170427 Guichard et al. Dec 1992 A
5170466 Rogan et al. Dec 1992 A
5170499 Grothause Dec 1992 A
5172321 Ghaem et al. Dec 1992 A
5172413 Bradley et al. Dec 1992 A
5173710 Kelley et al. Dec 1992 A
5173777 Dangschat Dec 1992 A
5173949 Peregrim et al. Dec 1992 A
5175557 King et al. Dec 1992 A
5177680 Tsukino et al. Jan 1993 A
5177685 Davis et al. Jan 1993 A
5177796 Feig et al. Jan 1993 A
5179439 Hashimoto Jan 1993 A
5179449 Doi Jan 1993 A
5179652 Rozmanith et al. Jan 1993 A
5182555 Sumner Jan 1993 A
5182640 Takano Jan 1993 A
5184123 Bremer et al. Feb 1993 A
5184295 Mann Feb 1993 A
5184303 Link Feb 1993 A
5184311 Kraus et al. Feb 1993 A
5185610 Ward et al. Feb 1993 A
5185761 Kawasaki Feb 1993 A
5185857 Rozmanith et al. Feb 1993 A
5187589 Kono et al. Feb 1993 A
5187787 Skeen et al. Feb 1993 A
5187788 Marmelstein Feb 1993 A
5187797 Nielsen et al. Feb 1993 A
5187805 Bertiger et al. Feb 1993 A
5189612 Lemercier et al. Feb 1993 A
5189619 Adachi et al. Feb 1993 A
5189630 Barstow et al. Feb 1993 A
5191410 McCalley Mar 1993 A
5191423 Yoshida Mar 1993 A
5191532 Moroto et al. Mar 1993 A
5192957 Kennedy Mar 1993 A
5192999 Graczyk et al. Mar 1993 A
5193215 Olmer Mar 1993 A
5194871 Counselman, III Mar 1993 A
5195134 Inoue Mar 1993 A
5196846 Brockelsby et al. Mar 1993 A
5200822 Bronfin et al. Apr 1993 A
5200823 Yoneda et al. Apr 1993 A
5201010 Deaton et al. Apr 1993 A
5202828 Vertelney et al. Apr 1993 A
5202829 Geier Apr 1993 A
5202915 Nishii Apr 1993 A
5202985 Goyal Apr 1993 A
5203199 Henderson et al. Apr 1993 A
5203704 McCloud Apr 1993 A
5206500 Decker et al. Apr 1993 A
5206651 Valentine et al. Apr 1993 A
5206806 Gerardi et al. Apr 1993 A
5208665 McCalley et al. May 1993 A
5208756 Song May 1993 A
5210540 Masumoto May 1993 A
5210611 Yee et al. May 1993 A
5210787 Hayes et al. May 1993 A
5212739 Johnson May 1993 A
5214504 Toriu et al. May 1993 A
5214793 Conway et al. May 1993 A
5216228 Hashimoto Jun 1993 A
5218367 Sheffer et al. Jun 1993 A
5218620 Mori et al. Jun 1993 A
5220420 Hoarty et al. Jun 1993 A
5220501 Lawlor et al. Jun 1993 A
5220507 Kirson Jun 1993 A
5220509 Takemura et al. Jun 1993 A
5220640 Frank Jun 1993 A
5220648 Sato Jun 1993 A
5220657 Bly et al. Jun 1993 A
5220664 Lee Jun 1993 A
5220674 Morgan et al. Jun 1993 A
5222155 Delanoy et al. Jun 1993 A
5223844 Mansell et al. Jun 1993 A
5223914 Auda et al. Jun 1993 A
5223924 Strubbe Jun 1993 A
5224151 Bowen et al. Jun 1993 A
5224706 Bridgeman et al. Jul 1993 A
5225842 Brown et al. Jul 1993 A
5225902 McMullan, Jr. Jul 1993 A
5227874 Von Kohorn Jul 1993 A
5228077 Darbee Jul 1993 A
5228695 Meyer Jul 1993 A
5228854 Eldridge Jul 1993 A
5229590 Harden et al. Jul 1993 A
5229754 Aoki et al. Jul 1993 A
5229756 Kosugi et al. Jul 1993 A
5230048 Moy Jul 1993 A
5231493 Apitz Jul 1993 A
5231494 Wachob Jul 1993 A
5231568 Cohen et al. Jul 1993 A
5231584 Nimura et al. Jul 1993 A
5231698 Forcier Jul 1993 A
RE34340 Freeman Aug 1993 E
D338841 Davis et al. Aug 1993 S
5233423 Jernigan et al. Aug 1993 A
5233533 Edstrom et al. Aug 1993 A
5235509 Mueller et al. Aug 1993 A
5235633 Dennison et al. Aug 1993 A
5239296 Jenkins Aug 1993 A
5239463 Blair et al. Aug 1993 A
5239464 Blair et al. Aug 1993 A
5239617 Gardner et al. Aug 1993 A
5241428 Goldwasser et al. Aug 1993 A
5241465 Oba et al. Aug 1993 A
5241542 Natarajan et al. Aug 1993 A
5241620 Ruggiero Aug 1993 A
5241625 Epard et al. Aug 1993 A
5241645 Cimral et al. Aug 1993 A
5243149 Comerford et al. Sep 1993 A
5243528 Lefebvre Sep 1993 A
5245537 Barber Sep 1993 A
5245909 Corrigan et al. Sep 1993 A
5247285 Yokota et al. Sep 1993 A
5247306 Hardange et al. Sep 1993 A
5247347 Litteral et al. Sep 1993 A
5247363 Sun et al. Sep 1993 A
5247433 Kitaura et al. Sep 1993 A
5247440 Capurka et al. Sep 1993 A
5247564 Zicker Sep 1993 A
5247651 Clarisse Sep 1993 A
5249043 Grandmougin Sep 1993 A
5250951 Valentine et al. Oct 1993 A
5251106 Hui Oct 1993 A
5251205 Callon et al. Oct 1993 A
5251294 Abelow Oct 1993 A
5251316 Anick et al. Oct 1993 A
5251324 McMullan, Jr. Oct 1993 A
5252951 Tannenbaum et al. Oct 1993 A
5253061 Takahama et al. Oct 1993 A
5253066 Vogel Oct 1993 A
5253275 Yurt et al. Oct 1993 A
5255386 Prager Oct 1993 A
5257195 Hirata Oct 1993 A
5257789 LeVasseur Nov 1993 A
5257810 Schorr et al. Nov 1993 A
5259038 Sakou et al. Nov 1993 A
5260778 Kauffman et al. Nov 1993 A
5261042 Brandt Nov 1993 A
5261081 White et al. Nov 1993 A
5262775 Tamai et al. Nov 1993 A
5262860 Fitzpatrick et al. Nov 1993 A
5263167 Conner, Jr. et al. Nov 1993 A
5263174 Layman Nov 1993 A
5265025 Hirata Nov 1993 A
5265033 Vajk et al. Nov 1993 A
5266958 Durboraw, III Nov 1993 A
5267171 Suzuki et al. Nov 1993 A
RE34476 Norwood Dec 1993 E
5268689 Ono et al. Dec 1993 A
5268927 Dimos et al. Dec 1993 A
5269067 Waeldele et al. Dec 1993 A
5270706 Smith Dec 1993 A
5270936 Fukushima et al. Dec 1993 A
5272324 Blevins Dec 1993 A
5272483 Kato Dec 1993 A
5272638 Martin et al. Dec 1993 A
5274387 Kakihara et al. Dec 1993 A
5274560 LaRue Dec 1993 A
5274667 Olmstead Dec 1993 A
5274714 Hutcheson et al. Dec 1993 A
5276451 Odagawa Jan 1994 A
5276737 Micali Jan 1994 A
5278424 Kagawa Jan 1994 A
5278568 Enge et al. Jan 1994 A
5278759 Berra et al. Jan 1994 A
5280530 Trew et al. Jan 1994 A
5283560 Bartlett Feb 1994 A
5283570 DeLuca et al. Feb 1994 A
5283575 Kao et al. Feb 1994 A
5283639 Esch et al. Feb 1994 A
5283641 Lemelson Feb 1994 A
5283731 Lalonde et al. Feb 1994 A
5283734 Von Kohorn Feb 1994 A
5283819 Glick et al. Feb 1994 A
5283829 Anderson Feb 1994 A
5283856 Gross et al. Feb 1994 A
5285272 Bradley et al. Feb 1994 A
5285284 Takashima et al. Feb 1994 A
5285523 Takahashi Feb 1994 A
5287199 Zoccolillo Feb 1994 A
5288078 Capper et al. Feb 1994 A
5288938 Wheaton Feb 1994 A
5291068 Rammel et al. Mar 1994 A
5291202 McClintock Mar 1994 A
5291412 Tamai et al. Mar 1994 A
5291413 Tamai et al. Mar 1994 A
5292254 Miller et al. Mar 1994 A
5293163 Kakihara et al. Mar 1994 A
5293318 Fukushima Mar 1994 A
5293484 Dabbs, III et al. Mar 1994 A
5293513 Umezu et al. Mar 1994 A
5295154 Meier et al. Mar 1994 A
5295491 Gevins Mar 1994 A
5296861 Knight Mar 1994 A
5296931 Na Mar 1994 A
5297204 Levine Mar 1994 A
5297249 Bernstein et al. Mar 1994 A
5298674 Yun Mar 1994 A
5299132 Wortham Mar 1994 A
5300932 Valentine et al. Apr 1994 A
5301028 Banker et al. Apr 1994 A
5301243 Olschafskie et al. Apr 1994 A
5301354 Schwendeman et al. Apr 1994 A
5301368 Hirata Apr 1994 A
5303297 Hillis Apr 1994 A
5303313 Mark et al. Apr 1994 A
5303393 Noreen et al. Apr 1994 A
5305007 Orr et al. Apr 1994 A
5305195 Murphy Apr 1994 A
5305197 Axler et al. Apr 1994 A
5305386 Yamato Apr 1994 A
5305389 Palmer Apr 1994 A
5307173 Yuen et al. Apr 1994 A
5307421 Darboux et al. Apr 1994 A
5309437 Perlman et al. May 1994 A
5309474 Gilhousen et al. May 1994 A
5311173 Komura et al. May 1994 A
5311516 Kuznicki et al. May 1994 A
5314037 Shaw et al. May 1994 A
5315302 Katsukura et al. May 1994 A
5315670 Shapiro May 1994 A
5317320 Grover et al. May 1994 A
5317321 Sass May 1994 A
5317403 Keenan May 1994 A
5317647 Pagallo May 1994 A
5317677 Dolan et al. May 1994 A
5319363 Welch et al. Jun 1994 A
5319445 Fitts Jun 1994 A
5319454 Schutte Jun 1994 A
5319455 Hoarty et al. Jun 1994 A
5319548 Germain Jun 1994 A
5319707 Wasilewski et al. Jun 1994 A
5320356 Cauda Jun 1994 A
5320538 Baum Jun 1994 A
5321241 Craine Jun 1994 A
5323234 Kawasaki Jun 1994 A
5323240 Amano et al. Jun 1994 A
5323321 Smith, Jr. Jun 1994 A
5323322 Mueller et al. Jun 1994 A
5324028 Luna Jun 1994 A
5325183 Rhee Jun 1994 A
5325423 Lewis Jun 1994 A
5326104 Pease et al. Jul 1994 A
5327144 Stilp et al. Jul 1994 A
5327529 Fults et al. Jul 1994 A
5329611 Pechanek et al. Jul 1994 A
5331327 Brocia et al. Jul 1994 A
5334974 Simms et al. Aug 1994 A
5334986 Fernhout Aug 1994 A
5335079 Yuen et al. Aug 1994 A
5335246 Yokev et al. Aug 1994 A
5335277 Harvey et al. Aug 1994 A
5337155 Cornelis Aug 1994 A
5337244 Nobe et al. Aug 1994 A
5339086 DeLuca et al. Aug 1994 A
5339239 Manabe et al. Aug 1994 A
5339392 Risberg et al. Aug 1994 A
5341138 Ono et al. Aug 1994 A
5341140 Perry Aug 1994 A
5341301 Shirai et al. Aug 1994 A
5343239 Lappington et al. Aug 1994 A
5343251 Nafeh Aug 1994 A
5343300 Hennig Aug 1994 A
5343399 Yokoyama et al. Aug 1994 A
5343493 Karimullah Aug 1994 A
5345388 Kashiwazaki Sep 1994 A
5345594 Tsuda Sep 1994 A
5347120 Decker et al. Sep 1994 A
5347285 MacDoran et al. Sep 1994 A
5347286 Babitch Sep 1994 A
5347295 Agulnick et al. Sep 1994 A
5347306 Nitta Sep 1994 A
5347456 Zhang et al. Sep 1994 A
5347477 Lee Sep 1994 A
5347600 Barnsley et al. Sep 1994 A
5347632 Filepp et al. Sep 1994 A
5349531 Sato et al. Sep 1994 A
5349670 Agrawal et al. Sep 1994 A
5351075 Herz et al. Sep 1994 A
5351078 Lemelson Sep 1994 A
5351194 Ross et al. Sep 1994 A
5351235 Lahtinen Sep 1994 A
5351970 Fioretti Oct 1994 A
5353023 Mitsugi Oct 1994 A
5353034 Sato et al. Oct 1994 A
5353121 Young et al. Oct 1994 A
5353218 De Lapa et al. Oct 1994 A
5355146 Chiu et al. Oct 1994 A
5355302 Martin et al. Oct 1994 A
5355480 Smith et al. Oct 1994 A
5357276 Banker et al. Oct 1994 A
5359332 Allison et al. Oct 1994 A
5359367 Stockill Oct 1994 A
5359527 Takanabe et al. Oct 1994 A
5359529 Snider Oct 1994 A
5360971 Kaufman et al. Nov 1994 A
5361393 Rossillo Nov 1994 A
5363105 Ono et al. Nov 1994 A
5364093 Huston et al. Nov 1994 A
5365055 Decker et al. Nov 1994 A
5365282 Levine Nov 1994 A
5365447 Dennis Nov 1994 A
5365450 Schuchman et al. Nov 1994 A
5365451 Wang et al. Nov 1994 A
5365516 Jandrell Nov 1994 A
5367453 Capps et al. Nov 1994 A
5369584 Kajiwara Nov 1994 A
5369588 Hayami et al. Nov 1994 A
5371348 Kumar et al. Dec 1994 A
5371551 Logan et al. Dec 1994 A
5373330 Levine Dec 1994 A
5373440 Cohen et al. Dec 1994 A
5374952 Flohr Dec 1994 A
5375059 Kyrtsos et al. Dec 1994 A
5375235 Berry et al. Dec 1994 A
5377317 Bates et al. Dec 1994 A
5377354 Scannell et al. Dec 1994 A
5377706 Huang Jan 1995 A
5377997 Wilden et al. Jan 1995 A
5379224 Brown et al. Jan 1995 A
5381158 Takahara et al. Jan 1995 A
5381338 Wysocki et al. Jan 1995 A
5382957 Blume Jan 1995 A
5382958 FitzGerald Jan 1995 A
5382983 Kwoh et al. Jan 1995 A
5383127 Shibata Jan 1995 A
5384867 Barnsley et al. Jan 1995 A
5385519 Hsu et al. Jan 1995 A
5388147 Grimes Feb 1995 A
5388198 Layman et al. Feb 1995 A
5389824 Moroto et al. Feb 1995 A
5389930 Ono Feb 1995 A
5389934 Kass Feb 1995 A
5390125 Sennott et al. Feb 1995 A
5390238 Kirk et al. Feb 1995 A
5390281 Luciw et al. Feb 1995 A
5392052 Eberwine Feb 1995 A
5393067 Paulsen et al. Feb 1995 A
5394333 Kao Feb 1995 A
5396227 Carroll et al. Mar 1995 A
5396429 Hanchett Mar 1995 A
5396546 Remillard Mar 1995 A
5398074 Duffield et al. Mar 1995 A
5398138 Tomita Mar 1995 A
5398189 Inoue et al. Mar 1995 A
5398190 Wortham Mar 1995 A
5398310 Tchao et al. Mar 1995 A
5398932 Eberhardt et al. Mar 1995 A
5400018 Scholl et al. Mar 1995 A
5400034 Smith Mar 1995 A
5400254 Fujita Mar 1995 A
5401946 Weinblatt Mar 1995 A
5402347 McBurney et al. Mar 1995 A
5402441 Washizu et al. Mar 1995 A
5403015 Forte et al. Apr 1995 A
5404442 Foster et al. Apr 1995 A
5404458 Zetts Apr 1995 A
5404505 Levinson Apr 1995 A
5404579 Obayashi et al. Apr 1995 A
5404661 Sahm et al. Apr 1995 A
5405152 Katanics et al. Apr 1995 A
5406491 Lima Apr 1995 A
5406492 Suzuki Apr 1995 A
5408415 Inoue et al. Apr 1995 A
5410326 Goldstein Apr 1995 A
5410343 Coddington et al. Apr 1995 A
5410344 Graves et al. Apr 1995 A
5410367 Zahavi et al. Apr 1995 A
5410480 Koseki et al. Apr 1995 A
5410643 Yomdin et al. Apr 1995 A
5410750 Cantwell et al. Apr 1995 A
5412573 Barnea et al. May 1995 A
5412660 Chen et al. May 1995 A
5412720 Hoarty May 1995 A
5412773 Carlucci et al. May 1995 A
5414432 Penny, Jr. et al. May 1995 A
5414756 Levine May 1995 A
5414773 Handelman May 1995 A
5416508 Sakuma et al. May 1995 A
5416695 Stutman et al. May 1995 A
5416712 Geier et al. May 1995 A
5416856 Jacobs et al. May 1995 A
5417210 Funda et al. May 1995 A
5418526 Crawford May 1995 A
5418537 Bird May 1995 A
5418538 Lau May 1995 A
5418622 Takeuchi May 1995 A
5418684 Koenck et al. May 1995 A
5418717 Su et al. May 1995 A
5418951 Damashek May 1995 A
5420592 Johnson May 1995 A
5420593 Niles May 1995 A
5420594 FitzGerald et al. May 1995 A
5420647 Levine May 1995 A
5420794 James May 1995 A
5420825 Fischer et al. May 1995 A
5420975 Blades et al. May 1995 A
5421008 Banning et al. May 1995 A
5422624 Smith Jun 1995 A
5422813 Schuchman et al. Jun 1995 A
5422816 Sprague et al. Jun 1995 A
5423554 Davis Jun 1995 A
5424951 Nobe et al. Jun 1995 A
5425058 Mui Jun 1995 A
5425100 Thomas et al. Jun 1995 A
5425890 Yudin et al. Jun 1995 A
5426594 Wright et al. Jun 1995 A
5426732 Boies et al. Jun 1995 A
5428559 Kano Jun 1995 A
5428606 Moskowitz Jun 1995 A
5428636 Meier Jun 1995 A
5428727 Kurosu et al. Jun 1995 A
5428730 Baker et al. Jun 1995 A
5428774 Takahashi et al. Jun 1995 A
5429361 Raven et al. Jul 1995 A
5430552 O'Callaghan Jul 1995 A
5430558 Sohaei et al. Jul 1995 A
5430653 Inoue Jul 1995 A
5430812 Barnsley et al. Jul 1995 A
5430948 Vander Wal, III Jul 1995 A
5431407 Hofberg et al. Jul 1995 A
5432520 Schneider et al. Jul 1995 A
5432542 Thibadeau et al. Jul 1995 A
5432561 Strubbe Jul 1995 A
5432841 Rimer Jul 1995 A
5432902 Matsumoto Jul 1995 A
5432932 Chen et al. Jul 1995 A
5433446 Lindstedt, Jr. Jul 1995 A
5434574 Hayashi et al. Jul 1995 A
5434777 Luciw Jul 1995 A
5434787 Okamoto et al. Jul 1995 A
5434788 Seymour et al. Jul 1995 A
5434789 Fraker et al. Jul 1995 A
5434933 Karnin et al. Jul 1995 A
5434966 Nakazawa et al. Jul 1995 A
5434978 Dockter et al. Jul 1995 A
5436653 Ellis et al. Jul 1995 A
5436834 Graf et al. Jul 1995 A
5437462 Breeding Aug 1995 A
5438357 McNelley Aug 1995 A
5438361 Coleman Aug 1995 A
5438630 Chen et al. Aug 1995 A
5438687 Suchowerskyj et al. Aug 1995 A
5440262 Lum et al. Aug 1995 A
5440400 Micheron et al. Aug 1995 A
5440678 Eisen et al. Aug 1995 A
5441047 David et al. Aug 1995 A
5442363 Remondi Aug 1995 A
5442389 Blahut et al. Aug 1995 A
5442553 Parrillo Aug 1995 A
5442557 Kaneko Aug 1995 A
5442771 Filepp et al. Aug 1995 A
5444450 Olds et al. Aug 1995 A
5444499 Saitoh Aug 1995 A
5444779 Daniele Aug 1995 A
5446659 Yamawaki Aug 1995 A
5446736 Gleeson et al. Aug 1995 A
5446891 Kaplan et al. Aug 1995 A
5446919 Wilkins Aug 1995 A
5446923 Martinson et al. Aug 1995 A
5448638 Johnson et al. Sep 1995 A
5450329 Tanner Sep 1995 A
5450490 Jensen et al. Sep 1995 A
5451964 Babu Sep 1995 A
5452217 Kishi et al. Sep 1995 A
5452442 Kephart Sep 1995 A
5454043 Freeman Sep 1995 A
5455570 Cook et al. Oct 1995 A
5455892 Minot et al. Oct 1995 A
5458123 Unger Oct 1995 A
5459304 Eisenmann Oct 1995 A
5459306 Stein et al. Oct 1995 A
5459517 Kunitake et al. Oct 1995 A
5459522 Pint Oct 1995 A
5459660 Berra Oct 1995 A
5459667 Odagaki et al. Oct 1995 A
5461365 Schlager et al. Oct 1995 A
5461383 Ono et al. Oct 1995 A
5461415 Wolf et al. Oct 1995 A
5461699 Arbabi et al. Oct 1995 A
5462275 Lowe et al. Oct 1995 A
5462473 Sheller Oct 1995 A
5464946 Lewis Nov 1995 A
5465079 Bouchard et al. Nov 1995 A
5465089 Nakatani et al. Nov 1995 A
5465113 Gilboy Nov 1995 A
5465204 Sekine et al. Nov 1995 A
5465308 Hutcheson et al. Nov 1995 A
5465325 Capps et al. Nov 1995 A
5465353 Hull et al. Nov 1995 A
5465358 Blades et al. Nov 1995 A
5465385 Ohga et al. Nov 1995 A
5465413 Enge et al. Nov 1995 A
5467264 Rauch et al. Nov 1995 A
5467425 Lau et al. Nov 1995 A
5469206 Strubbe et al. Nov 1995 A
5469371 Bass Nov 1995 A
5469740 French et al. Nov 1995 A
5471214 Faibish et al. Nov 1995 A
5471218 Talbot et al. Nov 1995 A
5471629 Risch Nov 1995 A
5473466 Tanielian et al. Dec 1995 A
5473538 Fujita et al. Dec 1995 A
5473602 McKenna et al. Dec 1995 A
5475597 Buck Dec 1995 A
5475651 Bishop et al. Dec 1995 A
5475687 Markkula, Jr. et al. Dec 1995 A
5477228 Tiwari et al. Dec 1995 A
5477262 Banker et al. Dec 1995 A
5477447 Luciw et al. Dec 1995 A
5479264 Ueda et al. Dec 1995 A
5479266 Young et al. Dec 1995 A
5479268 Young et al. Dec 1995 A
5479408 Will Dec 1995 A
5479479 Braitberg et al. Dec 1995 A
5479482 Grimes Dec 1995 A
5479497 Kovarik Dec 1995 A
5479932 Higgins et al. Jan 1996 A
5481278 Shigematsu et al. Jan 1996 A
5481294 Thomas et al. Jan 1996 A
5481296 Cragun et al. Jan 1996 A
5481542 Logston et al. Jan 1996 A
5481712 Silver et al. Jan 1996 A
5483234 Carreel et al. Jan 1996 A
5483278 Strubbe et al. Jan 1996 A
5483466 Kawahara et al. Jan 1996 A
5483827 Kulka et al. Jan 1996 A
5485161 Vaughn Jan 1996 A
5485163 Singer et al. Jan 1996 A
5485197 Hoarty Jan 1996 A
5485219 Woo Jan 1996 A
5485221 Banker et al. Jan 1996 A
5485518 Hunter et al. Jan 1996 A
5485565 Saund et al. Jan 1996 A
5486822 Tenmoku et al. Jan 1996 A
5487132 Cheng Jan 1996 A
5488196 Zimmerman et al. Jan 1996 A
5488409 Yuen et al. Jan 1996 A
5488425 Grimes Jan 1996 A
5488484 Miyano Jan 1996 A
5490208 Remillard Feb 1996 A
5491517 Kreitman et al. Feb 1996 A
5493692 Theimer et al. Feb 1996 A
5495292 Zhang et al. Feb 1996 A
5495537 Bedrosian et al. Feb 1996 A
5495576 Ritchey Feb 1996 A
5495609 Scott Feb 1996 A
5496177 Collia et al. Mar 1996 A
5497314 Novak Mar 1996 A
5497479 Hornbuckle Mar 1996 A
5498003 Gechter Mar 1996 A
5498711 Highsmith et al. Mar 1996 A
5499103 Mankovitz Mar 1996 A
5499108 Cotte et al. Mar 1996 A
5500671 Andersson et al. Mar 1996 A
5500741 Baik et al. Mar 1996 A
5500920 Kupiec Mar 1996 A
5500937 Thompson-Rohrlich Mar 1996 A
5502504 Marshall et al. Mar 1996 A
5502774 Bellegarda et al. Mar 1996 A
5502803 Yoshida et al. Mar 1996 A
5504482 Schreder Apr 1996 A
5504491 Chapman Apr 1996 A
5504518 Ellis et al. Apr 1996 A
5504675 Cragun et al. Apr 1996 A
5505449 Eberhardt et al. Apr 1996 A
5506584 Boles Apr 1996 A
5506768 Seem et al. Apr 1996 A
5506886 Maine et al. Apr 1996 A
5506897 Moore et al. Apr 1996 A
5506963 Ducateau et al. Apr 1996 A
5507491 Gatto et al. Apr 1996 A
5508731 Kohorn Apr 1996 A
5508815 Levine Apr 1996 A
5509009 Laycock et al. Apr 1996 A
5510793 Gregg, III et al. Apr 1996 A
5510798 Bauer Apr 1996 A
5510838 Yomdin et al. Apr 1996 A
5511134 Kuratomi et al. Apr 1996 A
5511153 Azarbayejani et al. Apr 1996 A
5511160 Robson Apr 1996 A
5512707 Ohshima Apr 1996 A
5512908 Herrick Apr 1996 A
5512935 Majeti et al. Apr 1996 A
5512963 Mankovitz Apr 1996 A
5513110 Fujita et al. Apr 1996 A
5513254 Markowitz Apr 1996 A
5515042 Nelson May 1996 A
5515043 Berard et al. May 1996 A
5515098 Carles May 1996 A
5515099 Cortjens et al. May 1996 A
5515106 Chaney et al. May 1996 A
5515173 Mankovitz et al. May 1996 A
5515284 Abe May 1996 A
5515285 Garrett, Sr. et al. May 1996 A
5515419 Sheffer May 1996 A
5515453 Hennessey et al. May 1996 A
5515471 Yamamoto et al. May 1996 A
5515511 Nguyen et al. May 1996 A
5515972 Shames May 1996 A
5516105 Eisenbrey et al. May 1996 A
5517199 DiMattei May 1996 A
5517254 Monta et al. May 1996 A
5517256 Hashimoto May 1996 A
5517257 Dunn et al. May 1996 A
5517331 Murai et al. May 1996 A
5517578 Altman et al. May 1996 A
5517598 Sirat May 1996 A
5519403 Bickley et al. May 1996 A
5519452 Parulski May 1996 A
5519620 Talbot et al. May 1996 A
5519718 Yokev et al. May 1996 A
5519760 Borkowski et al. May 1996 A
5521696 Dunne May 1996 A
5521841 Arman et al. May 1996 A
5521984 Denenberg et al. May 1996 A
5522155 Jones Jun 1996 A
5522798 Johnson et al. Jun 1996 A
5523796 Marshall et al. Jun 1996 A
5523950 Peterson Jun 1996 A
5524065 Yagasaki Jun 1996 A
5524195 Clanton, III et al. Jun 1996 A
5524637 Erickson Jun 1996 A
5525989 Holt Jun 1996 A
5525996 Aker et al. Jun 1996 A
5526034 Hoarty et al. Jun 1996 A
5526035 Lappington et al. Jun 1996 A
5526041 Glatt Jun 1996 A
5526127 Yonetani et al. Jun 1996 A
5526405 Toda Jun 1996 A
5526427 Thomas et al. Jun 1996 A
5526479 Barstow et al. Jun 1996 A
5528234 Mani et al. Jun 1996 A
5528245 Aker et al. Jun 1996 A
5528246 Henderson et al. Jun 1996 A
5528248 Steiner et al. Jun 1996 A
5528304 Cherrick et al. Jun 1996 A
5528391 Elrod Jun 1996 A
5528490 Hill Jun 1996 A
5529138 Shaw et al. Jun 1996 A
5529139 Kurahashi et al. Jun 1996 A
5529660 Kogan et al. Jun 1996 A
5530440 Danzer et al. Jun 1996 A
5530447 Henderson et al. Jun 1996 A
5530655 Lokhoff et al. Jun 1996 A
5530852 Meske, Jr. et al. Jun 1996 A
5530914 McPheters Jun 1996 A
5532469 Shepard et al. Jul 1996 A
5532706 Reinhardt et al. Jul 1996 A
5532732 Yuen et al. Jul 1996 A
5532754 Young et al. Jul 1996 A
5532923 Sone Jul 1996 A
5533141 Futatsugi et al. Jul 1996 A
5534697 Creekmore et al. Jul 1996 A
5534911 Levitan Jul 1996 A
5534917 MacDougall Jul 1996 A
5535302 Tsao Jul 1996 A
5535321 Massaro et al. Jul 1996 A
5535323 Miller et al. Jul 1996 A
5535380 Bergkvist, Jr. et al. Jul 1996 A
5537141 Harper et al. Jul 1996 A
5537472 Estevez-Alcolado et al. Jul 1996 A
5537528 Takahashi et al. Jul 1996 A
5537586 Amram et al. Jul 1996 A
5539395 Buss et al. Jul 1996 A
5539398 Hall et al. Jul 1996 A
5539427 Bricklin et al. Jul 1996 A
5539449 Blahut et al. Jul 1996 A
5539450 Handelman Jul 1996 A
5539645 Mandhyan et al. Jul 1996 A
5539822 Lett Jul 1996 A
5539829 Lokhoff et al. Jul 1996 A
5541419 Arackellian Jul 1996 A
5541590 Nishio Jul 1996 A
5541606 Lennen Jul 1996 A
5541638 Story Jul 1996 A
5541662 Adams et al. Jul 1996 A
5541738 Mankovitz Jul 1996 A
5542102 Smith et al. Jul 1996 A
5543591 Gillespie et al. Aug 1996 A
5543789 Behr et al. Aug 1996 A
5543856 Rosser et al. Aug 1996 A
5543929 Mankovitz et al. Aug 1996 A
5544225 Kennedy, III et al. Aug 1996 A
5544254 Hartley et al. Aug 1996 A
5544358 Capps et al. Aug 1996 A
5544661 Davis et al. Aug 1996 A
5544892 Breeding Aug 1996 A
5546445 Dennison et al. Aug 1996 A
5546475 Bolle et al. Aug 1996 A
5546518 Blossom et al. Aug 1996 A
5548322 Zhou Aug 1996 A
5548345 Brian et al. Aug 1996 A
5548515 Pilley et al. Aug 1996 A
5548645 Ananda Aug 1996 A
5548667 Tu Aug 1996 A
5549300 Sardarian Aug 1996 A
5550055 Reinherz et al. Aug 1996 A
5550551 Alesio Aug 1996 A
5550575 West et al. Aug 1996 A
5550576 Klosterman Aug 1996 A
5550578 Hoarty et al. Aug 1996 A
5550863 Yurt et al. Aug 1996 A
5550928 Lu et al. Aug 1996 A
5550930 Berman et al. Aug 1996 A
5550965 Gabbe et al. Aug 1996 A
5552773 Kuhnert Sep 1996 A
5552833 Henmi et al. Sep 1996 A
5553076 Behtash et al. Sep 1996 A
5553123 Chan et al. Sep 1996 A
5553221 Reimer et al. Sep 1996 A
5553277 Hirano et al. Sep 1996 A
5553609 Chen et al. Sep 1996 A
5554983 Kitamura et al. Sep 1996 A
5555286 Tendler Sep 1996 A
5555363 Tou et al. Sep 1996 A
5555416 Owens et al. Sep 1996 A
5555443 Ikehama Sep 1996 A
5555495 Bell et al. Sep 1996 A
5556749 Mitsuhashi et al. Sep 1996 A
5557254 Johnson et al. Sep 1996 A
5557338 Maze et al. Sep 1996 A
5557658 Gregorek et al. Sep 1996 A
5557721 Fite et al. Sep 1996 A
5557724 Sampat et al. Sep 1996 A
5557728 Garrett et al. Sep 1996 A
5557765 Lipner et al. Sep 1996 A
5559312 Lucero Sep 1996 A
5559508 Orr et al. Sep 1996 A
5559548 Davis et al. Sep 1996 A
5559549 Hendricks et al. Sep 1996 A
5559550 Mankovitz Sep 1996 A
5559707 DeLorme et al. Sep 1996 A
5559945 Beaudet et al. Sep 1996 A
5560011 Uyama Sep 1996 A
5561649 Lee et al. Oct 1996 A
5561704 Salimando Oct 1996 A
5561707 Katz Oct 1996 A
5561709 Remillard Oct 1996 A
5561718 Trew et al. Oct 1996 A
5561796 Sakamoto et al. Oct 1996 A
5563607 Loomis et al. Oct 1996 A
5563665 Chang Oct 1996 A
5563786 Torii Oct 1996 A
5563928 Rostoker et al. Oct 1996 A
5563948 Diehl et al. Oct 1996 A
5563988 Maes et al. Oct 1996 A
5563996 Tchao Oct 1996 A
5564001 Lewis Oct 1996 A
5564038 Grantz et al. Oct 1996 A
5565874 Rode Oct 1996 A
5565909 Thibadeau et al. Oct 1996 A
5565910 Rowse et al. Oct 1996 A
5566274 Ishida et al. Oct 1996 A
5567988 Rostoker et al. Oct 1996 A
5568153 Beliveau Oct 1996 A
5568272 Levine Oct 1996 A
5568390 Hirota et al. Oct 1996 A
5568450 Grande et al. Oct 1996 A
5568452 Kronenberg Oct 1996 A
5569082 Kaye Oct 1996 A
5570113 Zetts Oct 1996 A
5570295 Isenberg et al. Oct 1996 A
5570415 Stretton et al. Oct 1996 A
5572201 Graham et al. Nov 1996 A
5572204 Timm et al. Nov 1996 A
5572246 Ellis et al. Nov 1996 A
5572401 Carroll Nov 1996 A
5572428 Ishida et al. Nov 1996 A
5572442 Schulhof et al. Nov 1996 A
5572528 Shuen Nov 1996 A
5572604 Simard Nov 1996 A
5572643 Judson Nov 1996 A
5574804 Olschafskie et al. Nov 1996 A
5574845 Benson et al. Nov 1996 A
5574963 Weinblatt et al. Nov 1996 A
5576642 Nguyen et al. Nov 1996 A
5576716 Sadler Nov 1996 A
5576755 Davis et al. Nov 1996 A
5576950 Tonomura et al. Nov 1996 A
5576951 Lockwood Nov 1996 A
5576952 Stutman et al. Nov 1996 A
5577266 Takahisa et al. Nov 1996 A
5577981 Jarvik Nov 1996 A
5579013 Hershey et al. Nov 1996 A
5579239 Freeman et al. Nov 1996 A
5579285 Hubert Nov 1996 A
5579471 Barber et al. Nov 1996 A
5579535 Orlen et al. Nov 1996 A
5579537 Takahisa Nov 1996 A
5580249 Jacobsen et al. Dec 1996 A
5581276 Cipolla et al. Dec 1996 A
5581462 Rogers Dec 1996 A
5581479 McLaughlin et al. Dec 1996 A
5581658 O'Hagan et al. Dec 1996 A
5581665 Sugiura et al. Dec 1996 A
5581670 Bier et al. Dec 1996 A
5581681 Tchao et al. Dec 1996 A
5581764 Fitzgerald et al. Dec 1996 A
5581800 Fardeau et al. Dec 1996 A
5583542 Capps et al. Dec 1996 A
5583543 Takahashi et al. Dec 1996 A
5583560 Florin et al. Dec 1996 A
5583561 Baker et al. Dec 1996 A
5583563 Wanderscheid et al. Dec 1996 A
5583653 Timmermans Dec 1996 A
5583763 Atcheson et al. Dec 1996 A
5583774 Diesel Dec 1996 A
5583776 Levi et al. Dec 1996 A
5583966 Nakajima Dec 1996 A
5583980 Anderson Dec 1996 A
5584050 Lyons Dec 1996 A
5585798 Yoshioka et al. Dec 1996 A
5585838 Lawler et al. Dec 1996 A
5585858 Harper et al. Dec 1996 A
5585865 Amano et al. Dec 1996 A
5585866 Miller et al. Dec 1996 A
5585958 Giraud Dec 1996 A
5586024 Shaibani Dec 1996 A
5586025 Tsuji et al. Dec 1996 A
5586218 Allen Dec 1996 A
5586257 Perlman Dec 1996 A
5586317 Smith Dec 1996 A
5586766 Forte et al. Dec 1996 A
5586936 Bennett et al. Dec 1996 A
5586937 Menashe Dec 1996 A
5588074 Sugiyama Dec 1996 A
5588148 Landis et al. Dec 1996 A
5588650 Eman et al. Dec 1996 A
5589892 Knee et al. Dec 1996 A
5590219 Gourdol Dec 1996 A
5590256 Tchao et al. Dec 1996 A
5592212 Handelman Jan 1997 A
5592482 Abraham Jan 1997 A
5592549 Nagel et al. Jan 1997 A
5592551 Lett et al. Jan 1997 A
5592560 Deaton et al. Jan 1997 A
5592566 Pagallo et al. Jan 1997 A
5593349 Miguel et al. Jan 1997 A
5594469 Freeman et al. Jan 1997 A
5594490 Dawson et al. Jan 1997 A
5594509 Florin et al. Jan 1997 A
5594640 Capps et al. Jan 1997 A
5594661 Bruner et al. Jan 1997 A
5594780 Wiedeman et al. Jan 1997 A
5594810 Gourdol Jan 1997 A
5594911 Cruz et al. Jan 1997 A
5595445 Bobry Jan 1997 A
5596373 White et al. Jan 1997 A
5596697 Foster et al. Jan 1997 A
5596705 Reimer et al. Jan 1997 A
5597162 Franklin Jan 1997 A
5597307 Redford et al. Jan 1997 A
5597309 Riess Jan 1997 A
5598456 Feinberg Jan 1997 A
5598460 Tendler Jan 1997 A
5600364 Hendricks et al. Feb 1997 A
5600366 Schulman Feb 1997 A
5600561 Okamura Feb 1997 A
5600573 Hendricks et al. Feb 1997 A
5600646 Polomski Feb 1997 A
5600711 Yuen Feb 1997 A
5600733 MacDonald et al. Feb 1997 A
5600765 Ando et al. Feb 1997 A
5600775 King et al. Feb 1997 A
5600781 Root et al. Feb 1997 A
5602376 Coleman et al. Feb 1997 A
5602570 Capps et al. Feb 1997 A
5602582 Wanderscheid et al. Feb 1997 A
5602739 Haagenstad et al. Feb 1997 A
5603502 Nakagawa Feb 1997 A
5604542 Dedrick Feb 1997 A
5604820 Ono Feb 1997 A
5604823 Ono Feb 1997 A
5605334 McCrea, Jr. Feb 1997 A
5606374 Bertram Feb 1997 A
5606506 Kyrtsos Feb 1997 A
5606618 Lokhoff et al. Feb 1997 A
5606655 Arman et al. Feb 1997 A
5606726 Yoshinobu Feb 1997 A
5608624 Luciw Mar 1997 A
5608778 Partridge, III Mar 1997 A
5610653 Abecassis Mar 1997 A
5610815 Gudat et al. Mar 1997 A
5610821 Gazis et al. Mar 1997 A
5610984 Lennen Mar 1997 A
5611020 Bigus Mar 1997 A
5611730 Weiss Mar 1997 A
5612719 Beernink et al. Mar 1997 A
5613032 Cruz et al. Mar 1997 A
5613190 Hylton Mar 1997 A
5613191 Hylton et al. Mar 1997 A
5613912 Slater Mar 1997 A
5614940 Cobbley et al. Mar 1997 A
5615109 Eder Mar 1997 A
5615116 Gudat et al. Mar 1997 A
5615175 Carter et al. Mar 1997 A
5616078 Oh Apr 1997 A
5616876 Cluts Apr 1997 A
5617085 Tsutsumi et al. Apr 1997 A
5617312 Iura et al. Apr 1997 A
5617371 Williams Apr 1997 A
5617483 Osawa Apr 1997 A
5617526 Oran et al. Apr 1997 A
5617565 Augenbraun et al. Apr 1997 A
5619247 Russo Apr 1997 A
5619249 Billock et al. Apr 1997 A
5619274 Roop et al. Apr 1997 A
5619710 Travis, Jr. et al. Apr 1997 A
5621416 Lennen Apr 1997 A
5621454 Ellis et al. Apr 1997 A
5621456 Florin et al. Apr 1997 A
5621484 Cotty Apr 1997 A
5621579 Yuen Apr 1997 A
5621662 Humphries et al. Apr 1997 A
5621793 Bednarek et al. Apr 1997 A
5621903 Luciw et al. Apr 1997 A
5623494 Rostoker et al. Apr 1997 A
5623601 Vu Apr 1997 A
5623613 Rowe et al. Apr 1997 A
5624265 Redford et al. Apr 1997 A
5624316 Roskowski et al. Apr 1997 A
5625406 Newberry et al. Apr 1997 A
5625464 Compoint et al. Apr 1997 A
5625668 Loomis et al. Apr 1997 A
5625693 Rohatgi et al. Apr 1997 A
5625711 Nicholson et al. Apr 1997 A
5625715 Trew et al. Apr 1997 A
5625783 Ezekiel et al. Apr 1997 A
5625814 Luciw Apr 1997 A
5625833 Levine et al. Apr 1997 A
5627547 Ramaswamy et al. May 1997 A
5627564 Yang May 1997 A
5627915 Rosser et al. May 1997 A
5627960 Clifford et al. May 1997 A
5629626 Russell et al. May 1997 A
5629693 Janky May 1997 A
5629733 Youman et al. May 1997 A
5629981 Nerlikar May 1997 A
5630119 Aristides et al. May 1997 A
5630159 Zancho May 1997 A
5630204 Hylton et al. May 1997 A
5630206 Urban et al. May 1997 A
5630757 Gagin et al. May 1997 A
5631995 Weissensteiner et al. May 1997 A
5632007 Freeman May 1997 A
5632041 Peterson et al. May 1997 A
5633484 Zancho et al. May 1997 A
5633630 Park May 1997 A
5633872 Dinkins May 1997 A
5634051 Thomson May 1997 A
5634849 Abecassis Jun 1997 A
5635925 Kishi et al. Jun 1997 A
5635978 Alten et al. Jun 1997 A
5635979 Kostreski et al. Jun 1997 A
5635982 Zhang et al. Jun 1997 A
5635986 Kim Jun 1997 A
5635989 Rothmuller Jun 1997 A
5636276 Brugger Jun 1997 A
5636346 Saxe Jun 1997 A
5637826 Bessacini et al. Jun 1997 A
5638078 Wichtel Jun 1997 A
5638092 Eng et al. Jun 1997 A
5638279 Kishi et al. Jun 1997 A
5638300 Johnson Jun 1997 A
5638426 Lewis Jun 1997 A
5640193 Wellner Jun 1997 A
5640323 Kleimenhagen et al. Jun 1997 A
5640484 Mankovitz Jun 1997 A
5641288 Zaenglein, Jr. Jun 1997 A
5642434 Nakao et al. Jun 1997 A
5643088 Vaughn et al. Jul 1997 A
5644354 Thompson et al. Jul 1997 A
5644686 Hekmatpour Jul 1997 A
5644735 Luciw et al. Jul 1997 A
5646603 Nagata et al. Jul 1997 A
5646612 Byon Jul 1997 A
5646843 Gudat et al. Jul 1997 A
5648768 Bouve Jul 1997 A
5648824 Dunn et al. Jul 1997 A
5649060 Ellozy et al. Jul 1997 A
5649061 Smyth Jul 1997 A
5649284 Yoshinobu Jul 1997 A
5650826 Eitz Jul 1997 A
5650831 Farwell Jul 1997 A
5651068 Klemba et al. Jul 1997 A
5652570 Lepkofker Jul 1997 A
5652613 Lazarus et al. Jul 1997 A
5652615 Bryant et al. Jul 1997 A
5652849 Conway et al. Jul 1997 A
5652909 Kodosky Jul 1997 A
5654747 Ottesen et al. Aug 1997 A
5654748 Matthews, III Aug 1997 A
5654771 Tekalp et al. Aug 1997 A
5654886 Zereski, Jr. et al. Aug 1997 A
5655117 Goldberg et al. Aug 1997 A
5655214 Mullett Aug 1997 A
5655966 Werdin, Jr. et al. Aug 1997 A
5656804 Barkan et al. Aug 1997 A
5657072 Aristides et al. Aug 1997 A
5657397 Bokser Aug 1997 A
5657414 Lett et al. Aug 1997 A
5659195 Kaiser et al. Aug 1997 A
5659350 Hendricks et al. Aug 1997 A
5659367 Yuen Aug 1997 A
5659368 Landis Aug 1997 A
5659638 Bengtson Aug 1997 A
5659653 Diehl et al. Aug 1997 A
5659732 Kirsch Aug 1997 A
5659742 Beattie et al. Aug 1997 A
5659793 Escobar et al. Aug 1997 A
5660391 Klasee Aug 1997 A
5661516 Carles Aug 1997 A
5661652 Sprague et al. Aug 1997 A
5661755 Van De Kerkhof et al. Aug 1997 A
5663514 Usa Sep 1997 A
5663733 Lennen Sep 1997 A
5663734 Krasner Sep 1997 A
5663757 Morales Sep 1997 A
5663808 Park Sep 1997 A
5664046 Abecassis Sep 1997 A
5664948 Dimitriadis et al. Sep 1997 A
5666293 Metz et al. Sep 1997 A
5666498 Amro Sep 1997 A
5666645 Thomas et al. Sep 1997 A
5668554 Orr et al. Sep 1997 A
5668573 Favot et al. Sep 1997 A
5668880 Alajajian Sep 1997 A
5668897 Stolfo Sep 1997 A
5669061 Schipper Sep 1997 A
5669817 Tarantino Sep 1997 A
5671343 Kondo et al. Sep 1997 A
5671411 Watts et al. Sep 1997 A
5671607 Clemens et al. Sep 1997 A
5673305 Ross Sep 1997 A
5673322 Pepe et al. Sep 1997 A
5675390 Schindler et al. Oct 1997 A
5675494 Sakurai et al. Oct 1997 A
5675507 Bobo, II Oct 1997 A
5675752 Scott et al. Oct 1997 A
5677684 McArthur Oct 1997 A
5677708 Matthews, III et al. Oct 1997 A
5677710 Thompson-Rohrlich Oct 1997 A
5677837 Reynolds Oct 1997 A
5677981 Kato et al. Oct 1997 A
5678057 Rostoker et al. Oct 1997 A
5678175 Stuart et al. Oct 1997 A
5678182 Miller et al. Oct 1997 A
5679077 Pocock et al. Oct 1997 A
5680607 Brueckheimer Oct 1997 A
5682142 Loosmore et al. Oct 1997 A
5682196 Freeman Oct 1997 A
5682206 Wehmeyer et al. Oct 1997 A
5682229 Wangler Oct 1997 A
5682437 Okino et al. Oct 1997 A
5682439 Beernink et al. Oct 1997 A
5682525 Bouve et al. Oct 1997 A
5683082 Takemoto et al. Nov 1997 A
5684488 Liautaud et al. Nov 1997 A
5684525 Klosterman Nov 1997 A
5684526 Yoshinobu Nov 1997 A
5684860 Milani et al. Nov 1997 A
5684863 Katz Nov 1997 A
5684873 Tiilikainen Nov 1997 A
5684891 Tanaka et al. Nov 1997 A
5684918 Abecassis Nov 1997 A
5686910 Timm et al. Nov 1997 A
5686954 Yoshinobu et al. Nov 1997 A
5687215 Timm et al. Nov 1997 A
5687254 Poon et al. Nov 1997 A
5687331 Volk et al. Nov 1997 A
5687971 Khaladkar Nov 1997 A
5688174 Kennedy Nov 1997 A
5689245 Noreen et al. Nov 1997 A
5689269 Norris Nov 1997 A
5689431 Rudow et al. Nov 1997 A
5689442 Swanson et al. Nov 1997 A
5689648 Diaz et al. Nov 1997 A
5689663 Williams Nov 1997 A
5689666 Berquist et al. Nov 1997 A
5690582 Ulrich et al. Nov 1997 A
5691476 Madaras Nov 1997 A
5691724 Aker et al. Nov 1997 A
5691903 Racette, III Nov 1997 A
5692073 Cass Nov 1997 A
5692214 Levine Nov 1997 A
5694163 Harrison Dec 1997 A
5694176 Bruette et al. Dec 1997 A
5694381 Sako Dec 1997 A
5696403 Rostoker et al. Dec 1997 A
5696503 Nasburg Dec 1997 A
5696695 Ehlers et al. Dec 1997 A
5696824 Walsh Dec 1997 A
5696905 Reimer et al. Dec 1997 A
5696964 Cox et al. Dec 1997 A
5696965 Dedrick Dec 1997 A
5697844 Von Kohorn Dec 1997 A
5699052 Miyahara Dec 1997 A
5699053 Jonsson Dec 1997 A
5699056 Yoshida Dec 1997 A
5699107 Lawler et al. Dec 1997 A
5699255 Ellis et al. Dec 1997 A
5699441 Sagawa et al. Dec 1997 A
5699497 Erdahl et al. Dec 1997 A
5701120 Perelman et al. Dec 1997 A
5701328 Schuchman et al. Dec 1997 A
5701369 Moon et al. Dec 1997 A
5701383 Russo et al. Dec 1997 A
5701419 McConnell Dec 1997 A
5701424 Atkinson Dec 1997 A
5701497 Yamauchi et al. Dec 1997 A
5702104 Malek et al. Dec 1997 A
5702305 Norman et al. Dec 1997 A
5703367 Hashimoto et al. Dec 1997 A
5704029 Wright, Jr. Dec 1997 A
5704837 Iwasaki et al. Jan 1998 A
5706145 Hindman et al. Jan 1998 A
5706191 Bassett et al. Jan 1998 A
5706498 Fujimiya et al. Jan 1998 A
5707287 McCrea, Jr. Jan 1998 A
5707289 Watanabe et al. Jan 1998 A
5708767 Yeo et al. Jan 1998 A
5708780 Levergood et al. Jan 1998 A
5708825 Sotomayor Jan 1998 A
5708845 Wistendahl et al. Jan 1998 A
5709603 Kaye Jan 1998 A
5710565 Shirai et al. Jan 1998 A
5710601 Marshall et al. Jan 1998 A
5710605 Nelson Jan 1998 A
5710831 Beernink et al. Jan 1998 A
5710833 Moghaddam et al. Jan 1998 A
5710834 Rhoads Jan 1998 A
5710884 Dedrick Jan 1998 A
5710918 Lagarde et al. Jan 1998 A
5711715 Ringo et al. Jan 1998 A
5712899 Pace, II Jan 1998 A
5712979 Graber et al. Jan 1998 A
5713045 Berdahl Jan 1998 A
5713574 Hughes Feb 1998 A
5713795 Kohorn Feb 1998 A
5714698 Tokioka et al. Feb 1998 A
5715020 Kuroiwa et al. Feb 1998 A
5715400 Reimer et al. Feb 1998 A
5715834 Bergamasco et al. Feb 1998 A
5717391 Rodriguez Feb 1998 A
5717452 Janin et al. Feb 1998 A
5717814 Abecassis Feb 1998 A
5717846 Iida et al. Feb 1998 A
5717860 Graber et al. Feb 1998 A
5717923 Dedrick Feb 1998 A
5718431 Ornstein Feb 1998 A
5719579 Torre et al. Feb 1998 A
5719918 Serbetciouglu et al. Feb 1998 A
5721781 Deo et al. Feb 1998 A
5721827 Logan et al. Feb 1998 A
5722041 Freadman Feb 1998 A
5722418 Bro Mar 1998 A
5724070 Denninghoff et al. Mar 1998 A
5724091 Freeman et al. Mar 1998 A
5724103 Batchelor Mar 1998 A
5724106 Autry et al. Mar 1998 A
5724424 Gifford Mar 1998 A
5724425 Chang et al. Mar 1998 A
5724472 Abecassis Mar 1998 A
5724521 Dedrick Mar 1998 A
5724567 Rose et al. Mar 1998 A
5724985 Snell et al. Mar 1998 A
5726688 Siefert et al. Mar 1998 A
5726702 Hamaguchi et al. Mar 1998 A
5726893 Schuchman et al. Mar 1998 A
5726898 Jacobs Mar 1998 A
5726911 Canada et al. Mar 1998 A
5727057 Emery et al. Mar 1998 A
5727060 Young Mar 1998 A
5729212 Martin Mar 1998 A
5729217 Ito et al. Mar 1998 A
5729279 Fuller Mar 1998 A
5729741 Liaguno et al. Mar 1998 A
5731785 Lemelson et al. Mar 1998 A
5731788 Reeds Mar 1998 A
5731844 Rauch et al. Mar 1998 A
5732074 Spaur et al. Mar 1998 A
5732125 Oyama Mar 1998 A
5732214 Subrahmanyam Mar 1998 A
5732227 Kuzunuki et al. Mar 1998 A
5732338 Schwob Mar 1998 A
5732949 Josephs Mar 1998 A
5734337 Kupersmit Mar 1998 A
5734348 Aoki et al. Mar 1998 A
5734589 Kostreski et al. Mar 1998 A
5734699 Lu et al. Mar 1998 A
5734720 Salganicoff Mar 1998 A
5734786 Mankovitz Mar 1998 A
5734831 Sanders Mar 1998 A
5734853 Hendricks et al. Mar 1998 A
5734893 Li et al. Mar 1998 A
5734923 Sagawa et al. Mar 1998 A
5735525 McCrea, Jr. Apr 1998 A
5735742 French Apr 1998 A
5737444 Colla et al. Apr 1998 A
5737507 Smith Apr 1998 A
5737529 Dolin, Jr. et al. Apr 1998 A
5737533 de Hond Apr 1998 A
5737619 Judson Apr 1998 A
5737700 Cox et al. Apr 1998 A
5740252 Minor et al. Apr 1998 A
5740274 Ono et al. Apr 1998 A
5740369 Yokozawa et al. Apr 1998 A
5740532 Fernandez et al. Apr 1998 A
5740549 Reilly et al. Apr 1998 A
5742086 Rostoker et al. Apr 1998 A
5742289 Naylor et al. Apr 1998 A
5742762 Scholl et al. Apr 1998 A
5742797 Celi, Jr. et al. Apr 1998 A
5742816 Barr et al. Apr 1998 A
5742829 Davis et al. Apr 1998 A
5742845 Wagner Apr 1998 A
5742905 Pepe et al. Apr 1998 A
5745116 Pisutha-Arnond Apr 1998 A
5745126 Jain et al. Apr 1998 A
5745573 Lipner et al. Apr 1998 A
5745640 Ishii et al. Apr 1998 A
5745681 Levine et al. Apr 1998 A
5745710 Clanton, III et al. Apr 1998 A
5745758 Shaw et al. Apr 1998 A
5745759 Hayden et al. Apr 1998 A
5746656 Bezick et al. May 1998 A
5748191 Rozak et al. May 1998 A
5748716 Levine May 1998 A
5748732 Le Berre et al. May 1998 A
5748742 Tisdale et al. May 1998 A
5748780 Stolfo May 1998 A
5748805 Withgott et al. May 1998 A
5748867 Cosman et al. May 1998 A
5748926 Fukuda et al. May 1998 A
5749060 Graf et al. May 1998 A
5749075 Toader et al. May 1998 A
5749081 Whiteis May 1998 A
5749735 Redford et al. May 1998 A
5749785 Rossides May 1998 A
5751211 Shirai et al. May 1998 A
5751282 Girard et al. May 1998 A
5751286 Barber et al. May 1998 A
5751338 Ludwig, Jr. May 1998 A
5751831 Ono May 1998 A
5751956 Kirsch May 1998 A
5752051 Cohen May 1998 A
5752159 Faust et al. May 1998 A
5752160 Dunn May 1998 A
5752217 Ishizaki et al. May 1998 A
5753970 Rostoker May 1998 A
5754060 Nguyen et al. May 1998 A
5754308 Lopresti et al. May 1998 A
5754657 Schipper et al. May 1998 A
5754771 Epperson et al. May 1998 A
5754938 Herz et al. May 1998 A
5754939 Herz et al. May 1998 A
5755621 Marks et al. May 1998 A
5756981 Roustaei et al. May 1998 A
5757916 MacDoran et al. May 1998 A
5758068 Brandt et al. May 1998 A
5758110 Boss et al. May 1998 A
5758257 Herz et al. May 1998 A
5758259 Lawler May 1998 A
5759101 Von Kohorn Jun 1998 A
5760530 Kolesar Jun 1998 A
5760713 Yokoyama et al. Jun 1998 A
5760739 Pauli Jun 1998 A
5760742 Branch et al. Jun 1998 A
5760821 Ellis et al. Jun 1998 A
5761320 Farinelli et al. Jun 1998 A
5761372 Yoshinobu et al. Jun 1998 A
5761477 Wahbe et al. Jun 1998 A
5761516 Rostoker et al. Jun 1998 A
5761606 Wolzien Jun 1998 A
5761655 Hoffman Jun 1998 A
5761662 Dasan Jun 1998 A
5764139 Nojima et al. Jun 1998 A
5764770 Schipper et al. Jun 1998 A
5764794 Perlin Jun 1998 A
5764809 Nomami et al. Jun 1998 A
5764906 Edelstein et al. Jun 1998 A
5764923 Tallman et al. Jun 1998 A
5765152 Erickson Jun 1998 A
5767457 Gerpheide et al. Jun 1998 A
5767804 Murphy Jun 1998 A
5767893 Chen et al. Jun 1998 A
5767894 Fuller et al. Jun 1998 A
5767913 Kassatly Jun 1998 A
5767922 Zabih et al. Jun 1998 A
5768382 Schneier et al. Jun 1998 A
5768418 Berman et al. Jun 1998 A
5768421 Gaffin et al. Jun 1998 A
5768426 Rhoads Jun 1998 A
5768437 Monro et al. Jun 1998 A
5768528 Stumm Jun 1998 A
5768607 Drews et al. Jun 1998 A
5768680 Thomas Jun 1998 A
5770533 Franchi Jun 1998 A
5771275 Brunner et al. Jun 1998 A
5771347 Grantz et al. Jun 1998 A
5771353 Eggleston et al. Jun 1998 A
5774170 Hite et al. Jun 1998 A
5774357 Hoffberg et al. Jun 1998 A
5774539 Maass et al. Jun 1998 A
5774591 Black et al. Jun 1998 A
5774650 Chapman et al. Jun 1998 A
5774664 Hidary et al. Jun 1998 A
5774666 Portuesi Jun 1998 A
5774670 Montulli Jun 1998 A
5774825 Reynolds Jun 1998 A
5774827 Smith, Jr. et al. Jun 1998 A
5774828 Brunts et al. Jun 1998 A
5774859 Houser et al. Jun 1998 A
5774869 Toader Jun 1998 A
5777360 Rostoker et al. Jul 1998 A
5777374 Rostoker et al. Jul 1998 A
5777451 Kobayashi et al. Jul 1998 A
5777580 Janky et al. Jul 1998 A
5777614 Ando et al. Jul 1998 A
5778181 Hidary et al. Jul 1998 A
5778182 Cathey et al. Jul 1998 A
5778333 Koizumi et al. Jul 1998 A
5779242 Kaufmann Jul 1998 A
5779549 Walker et al. Jul 1998 A
5781101 Stephen et al. Jul 1998 A
5781226 Sheehan Jul 1998 A
5781228 Sposato Jul 1998 A
5781245 Van Der Weij et al. Jul 1998 A
5781246 Alten et al. Jul 1998 A
5781662 Mori et al. Jul 1998 A
5781723 Yee et al. Jul 1998 A
5781734 Ohno et al. Jul 1998 A
5781879 Arnold et al. Jul 1998 A
5784007 Pepper Jul 1998 A
5784061 Moran et al. Jul 1998 A
5784365 Ikeda Jul 1998 A
5784504 Anderson et al. Jul 1998 A
5784616 Horvitz Jul 1998 A
5786998 Neeson et al. Jul 1998 A
5787156 Katz Jul 1998 A
5787201 Nelson et al. Jul 1998 A
5787259 Haroun et al. Jul 1998 A
5788507 Redford et al. Aug 1998 A
5788574 Ornstein et al. Aug 1998 A
5789892 Takei Aug 1998 A
5790198 Roop et al. Aug 1998 A
5790201 Antos Aug 1998 A
5790202 Kummer et al. Aug 1998 A
5790753 Krishnamoorthy et al. Aug 1998 A
5790935 Payton Aug 1998 A
5790974 Tognazzini Aug 1998 A
5791294 Manning Aug 1998 A
5791991 Small Aug 1998 A
5793413 Hylton et al. Aug 1998 A
5793631 Ito et al. Aug 1998 A
5793753 Hershey et al. Aug 1998 A
5793813 Cleave Aug 1998 A
5793888 Delanoy Aug 1998 A
5793964 Rogers et al. Aug 1998 A
5793972 Shane Aug 1998 A
5794164 Beckert et al. Aug 1998 A
5794174 Janky et al. Aug 1998 A
5794210 Goldhaber et al. Aug 1998 A
5794249 Orsolini et al. Aug 1998 A
5795156 Redford et al. Aug 1998 A
5795228 Trumbull et al. Aug 1998 A
5796634 Craport et al. Aug 1998 A
5796866 Sakurai et al. Aug 1998 A
5796945 Tarabella Aug 1998 A
5796952 Davis et al. Aug 1998 A
5797001 Augenbraun et al. Aug 1998 A
5797395 Martin Aug 1998 A
5798519 Vock et al. Aug 1998 A
5798693 Engellenner Aug 1998 A
5798758 Harada et al. Aug 1998 A
5798785 Hendricks et al. Aug 1998 A
5799082 Murphy et al. Aug 1998 A
5799109 Chung et al. Aug 1998 A
5799219 Moghadam et al. Aug 1998 A
5799267 Siegel Aug 1998 A
5799292 Hekmatpour Aug 1998 A
5800268 Molnick Sep 1998 A
5801422 Rostoker et al. Sep 1998 A
5801432 Rostoker et al. Sep 1998 A
5801747 Bedard Sep 1998 A
5801750 Kurihara Sep 1998 A
5801753 Eyer et al. Sep 1998 A
5801787 Schein et al. Sep 1998 A
5802220 Black et al. Sep 1998 A
5802243 Yao et al. Sep 1998 A
5802284 Karlton et al. Sep 1998 A
5802361 Wang et al. Sep 1998 A
5802492 DeLorme et al. Sep 1998 A
5802510 Jones Sep 1998 A
5805082 Hassett Sep 1998 A
5805154 Brown Sep 1998 A
5805155 Allibhoy et al. Sep 1998 A
5805167 van Cruyningen Sep 1998 A
5805204 Thompson et al. Sep 1998 A
5805763 Lawler et al. Sep 1998 A
5805804 Laursen et al. Sep 1998 A
5805806 McArthur Sep 1998 A
5805815 Hill Sep 1998 A
5806005 Hull et al. Sep 1998 A
5806018 Smith et al. Sep 1998 A
5808197 Dao Sep 1998 A
5808330 Rostoker et al. Sep 1998 A
5808564 Simms et al. Sep 1998 A
5808566 Behr et al. Sep 1998 A
5808608 Young et al. Sep 1998 A
5808694 Usui et al. Sep 1998 A
5808907 Shetty et al. Sep 1998 A
5809172 Melen Sep 1998 A
5809204 Young et al. Sep 1998 A
5809214 Nureki et al. Sep 1998 A
5809267 Moran et al. Sep 1998 A
5809415 Rossmann Sep 1998 A
5809437 Breed Sep 1998 A
5809471 Brodsky Sep 1998 A
5809476 Ryan Sep 1998 A
5809481 Baron et al. Sep 1998 A
5809482 Strisower Sep 1998 A
5810680 Lobb et al. Sep 1998 A
5811863 Rostoker et al. Sep 1998 A
5812086 Bertiger et al. Sep 1998 A
5812087 Krasner Sep 1998 A
5812123 Rowe et al. Sep 1998 A
5812124 Eick et al. Sep 1998 A
5812205 Milnes et al. Sep 1998 A
5812591 Shumaker et al. Sep 1998 A
5812749 Fernandez et al. Sep 1998 A
5812769 Graber et al. Sep 1998 A
5812776 Gifford Sep 1998 A
5812930 Zavrel Sep 1998 A
5812931 Yuen Sep 1998 A
5812937 Takahisa et al. Sep 1998 A
5814798 Zancho Sep 1998 A
5815092 Gregg, III et al. Sep 1998 A
5815135 Yui et al. Sep 1998 A
5815145 Matthews, III Sep 1998 A
5815551 Katz Sep 1998 A
5815577 Clark Sep 1998 A
5816918 Kelly et al. Oct 1998 A
5818438 Howe et al. Oct 1998 A
5818441 Throckmorton et al. Oct 1998 A
5818510 Cobbley et al. Oct 1998 A
5818511 Farry et al. Oct 1998 A
5818935 Maa Oct 1998 A
5818965 Davies Oct 1998 A
5819019 Nelson Oct 1998 A
5819020 Beeler, Jr. Oct 1998 A
5819156 Belmont Oct 1998 A
5819227 Obuchi Oct 1998 A
5819284 Farber et al. Oct 1998 A
5819288 De Bonet Oct 1998 A
5821880 Morimoto et al. Oct 1998 A
5821925 Carey et al. Oct 1998 A
5822123 Davis et al. Oct 1998 A
5822523 Rothschild et al. Oct 1998 A
5822539 van Hoff Oct 1998 A
5822606 Morton Oct 1998 A
5823879 Goldberg et al. Oct 1998 A
5825283 Camhi Oct 1998 A
5825943 DeVito et al. Oct 1998 A
5826195 Westerlage et al. Oct 1998 A
5828402 Collings Oct 1998 A
5828419 Bruette et al. Oct 1998 A
5828420 Marshall et al. Oct 1998 A
5828734 Katz Oct 1998 A
5828809 Chang et al. Oct 1998 A
5828839 Moncreiff Oct 1998 A
5828945 Klosterman Oct 1998 A
RE35954 Levine Nov 1998 E
5829782 Breed et al. Nov 1998 A
5830067 Graves et al. Nov 1998 A
5830068 Brenner et al. Nov 1998 A
5831527 Jones, II et al. Nov 1998 A
5832119 Rhoads Nov 1998 A
5832212 Cragun et al. Nov 1998 A
5832223 Hara et al. Nov 1998 A
5832279 Rostoker et al. Nov 1998 A
5832474 Lopresti et al. Nov 1998 A
5832528 Kwatinetz et al. Nov 1998 A
5833468 Guy et al. Nov 1998 A
5834821 Rostoker et al. Nov 1998 A
5835087 Herz et al. Nov 1998 A
5835126 Lewis Nov 1998 A
5835717 Karlton et al. Nov 1998 A
5836817 Acres et al. Nov 1998 A
5837987 Koenck et al. Nov 1998 A
5838237 Revell et al. Nov 1998 A
5838314 Neel et al. Nov 1998 A
5838326 Card et al. Nov 1998 A
5838383 Chimoto et al. Nov 1998 A
5838889 Booker Nov 1998 A
5838906 Doyle et al. Nov 1998 A
5839088 Hancock et al. Nov 1998 A
5839438 Graettinger et al. Nov 1998 A
5839725 Conway Nov 1998 A
5839905 Redford et al. Nov 1998 A
5840020 Heinonen et al. Nov 1998 A
5841367 Giovanni Nov 1998 A
5841396 Krasner Nov 1998 A
5844552 Gaughan et al. Dec 1998 A
5844553 Hao et al. Dec 1998 A
5844620 Coleman et al. Dec 1998 A
5845227 Peterson Dec 1998 A
5845240 Fielder Dec 1998 A
5845288 Syeda-Mahmood Dec 1998 A
5845301 Rivette et al. Dec 1998 A
5847661 Ricci Dec 1998 A
5847688 Ohi et al. Dec 1998 A
5848158 Saito et al. Dec 1998 A
5848187 Bricklin et al. Dec 1998 A
5848373 DeLorme et al. Dec 1998 A
5848396 Gerace Dec 1998 A
5848397 Marsh et al. Dec 1998 A
5848410 Walls et al. Dec 1998 A
5850218 LaJoie et al. Dec 1998 A
5850352 Moezzi et al. Dec 1998 A
5850446 Berger et al. Dec 1998 A
5850470 Kung et al. Dec 1998 A
5851149 Xidos et al. Dec 1998 A
5852232 Samsavar et al. Dec 1998 A
5852351 Canada et al. Dec 1998 A
5852437 Wugofski et al. Dec 1998 A
5852676 Lazar Dec 1998 A
5852823 De Bonet Dec 1998 A
5854856 Moura et al. Dec 1998 A
5854923 Dockter et al. Dec 1998 A
5854994 Canada et al. Dec 1998 A
5855007 Jovicic et al. Dec 1998 A
5855008 Goldhaber et al. Dec 1998 A
5857036 Barnsley et al. Jan 1999 A
5857149 Suzuki Jan 1999 A
5857155 Hill et al. Jan 1999 A
5857181 Augenbraun et al. Jan 1999 A
5857201 Wright, Jr. et al. Jan 1999 A
5857911 Fioretti Jan 1999 A
5860073 Ferrel et al. Jan 1999 A
5861881 Freeman et al. Jan 1999 A
5861886 Moran et al. Jan 1999 A
5861906 Dunn et al. Jan 1999 A
5862256 Zetts et al. Jan 1999 A
5862260 Rhoads Jan 1999 A
5862262 Jacobs et al. Jan 1999 A
5862264 Ishikawa et al. Jan 1999 A
5862292 Kubota et al. Jan 1999 A
5862325 Reed et al. Jan 1999 A
5862391 Salas et al. Jan 1999 A
5862509 Desai et al. Jan 1999 A
5864125 Szabo Jan 1999 A
5864165 Rostoker et al. Jan 1999 A
5864323 Berthon Jan 1999 A
5864481 Gross et al. Jan 1999 A
5864635 Zetts et al. Jan 1999 A
5864667 Barkan Jan 1999 A
5864704 Battle et al. Jan 1999 A
5864823 Levitan Jan 1999 A
5864848 Horvitz et al. Jan 1999 A
5867118 McCoy et al. Feb 1999 A
5867150 Bricklin et al. Feb 1999 A
5867205 Harrison Feb 1999 A
5867208 McLaren Feb 1999 A
5867221 Pullen et al. Feb 1999 A
5867223 Schindler et al. Feb 1999 A
5867226 Wehmeyer et al. Feb 1999 A
5867233 Tanaka Feb 1999 A
5867386 Hoffberg et al. Feb 1999 A
5867404 Bryan Feb 1999 A
5867579 Saito Feb 1999 A
5867597 Peairs et al. Feb 1999 A
5867603 Barnsley et al. Feb 1999 A
5867795 Novis et al. Feb 1999 A
5867799 Lang et al. Feb 1999 A
5867821 Ballantyne et al. Feb 1999 A
5870030 DeLuca et al. Feb 1999 A
5870150 Yuen Feb 1999 A
5870151 Korber Feb 1999 A
5870493 Vogl et al. Feb 1999 A
5870502 Bonneau et al. Feb 1999 A
5870549 Bobo, II Feb 1999 A
5870710 Ozawa et al. Feb 1999 A
5870724 Lawlor et al. Feb 1999 A
5870754 Dimitrova et al. Feb 1999 A
5871398 Schneier et al. Feb 1999 A
5872380 Rostoker et al. Feb 1999 A
5872508 Taoka Feb 1999 A
5873080 Coden et al. Feb 1999 A
5873660 Walsh et al. Feb 1999 A
5874914 Krasner Feb 1999 A
5875108 Hoffberg et al. Feb 1999 A
5875183 Nitadori Feb 1999 A
5875265 Kasao Feb 1999 A
5875446 Brown et al. Feb 1999 A
5876286 Lee Mar 1999 A
5877759 Bauer Mar 1999 A
5877803 Wee et al. Mar 1999 A
5877906 Nagasawa et al. Mar 1999 A
5878135 Blatter et al. Mar 1999 A
5878222 Harrison Mar 1999 A
5878356 Garrot, Jr. et al. Mar 1999 A
5878417 Baldwin et al. Mar 1999 A
5879233 Stupero Mar 1999 A
5879235 Kaneko et al. Mar 1999 A
5880411 Gillespie et al. Mar 1999 A
5880720 Iwafune et al. Mar 1999 A
5880731 Liles et al. Mar 1999 A
5880743 Moran et al. Mar 1999 A
5880768 Lemmons et al. Mar 1999 A
5880769 Nemirofsky et al. Mar 1999 A
5881231 Takagi et al. Mar 1999 A
5883621 Iwamura Mar 1999 A
5884046 Antonov Mar 1999 A
5884267 Goldenthal et al. Mar 1999 A
5884282 Robinson Mar 1999 A
5884298 Smith, II et al. Mar 1999 A
5885158 Torango et al. Mar 1999 A
5886707 Berg Mar 1999 A
5886732 Humpleman Mar 1999 A
5886743 Oh et al. Mar 1999 A
5886746 Yuen et al. Mar 1999 A
5887133 Brown et al. Mar 1999 A
5887243 Harvey et al. Mar 1999 A
5887269 Brunts et al. Mar 1999 A
5889236 Gillespie et al. Mar 1999 A
5889477 Fastenrath Mar 1999 A
5889506 Lopresti et al. Mar 1999 A
5889523 Wilcox et al. Mar 1999 A
5889852 Rosecrans et al. Mar 1999 A
5889868 Moskowitz et al. Mar 1999 A
5889896 Meshinsky et al. Mar 1999 A
5889919 Inoue et al. Mar 1999 A
5889950 Kuzma Mar 1999 A
5890061 Timm et al. Mar 1999 A
5890068 Fattouche et al. Mar 1999 A
5890079 Levine Mar 1999 A
5890147 Peltonen et al. Mar 1999 A
5890152 Rapaport et al. Mar 1999 A
5892346 Moroto et al. Apr 1999 A
5892536 Logan et al. Apr 1999 A
5892767 Bell et al. Apr 1999 A
5893095 Jain et al. Apr 1999 A
5893110 Weber et al. Apr 1999 A
5893111 Sharon, Jr. et al. Apr 1999 A
5893113 McGrath et al. Apr 1999 A
5893126 Drews et al. Apr 1999 A
5893130 Inoue et al. Apr 1999 A
5894323 Kain et al. Apr 1999 A
5895371 Levitas et al. Apr 1999 A
5895470 Pirolli et al. Apr 1999 A
5896176 Das et al. Apr 1999 A
5896369 Warsta et al. Apr 1999 A
5898391 Jefferies et al. Apr 1999 A
5898392 Bambini et al. Apr 1999 A
5898434 Small et al. Apr 1999 A
5898762 Katz Apr 1999 A
5898835 Truong Apr 1999 A
5899700 Williams et al. May 1999 A
5899975 Nielsen May 1999 A
5899999 De Bonet May 1999 A
5900825 Pressel et al. May 1999 A
5901214 Shaffer et al. May 1999 A
5901244 Souma et al. May 1999 A
5901246 Hoffberg et al. May 1999 A
5901255 Yagasaki May 1999 A
5901287 Bull et al. May 1999 A
5901342 Heiskari et al. May 1999 A
5901366 Nakano et al. May 1999 A
5901978 Breed et al. May 1999 A
5903261 Walsh et al. May 1999 A
5903317 Sharir et al. May 1999 A
5903454 Hoffberg et al. May 1999 A
5903545 Sabourin et al. May 1999 A
5903654 Milton et al. May 1999 A
5903678 Ibenthal May 1999 A
5903732 Reed et al. May 1999 A
5903816 Broadwin et al. May 1999 A
5903848 Takahashi May 1999 A
5903892 Hoffert et al. May 1999 A
5905251 Knowles May 1999 A
5905433 Wortham May 1999 A
5905493 Belzer et al. May 1999 A
5905800 Moskowitz et al. May 1999 A
5907293 Tognazzini May 1999 A
5907322 Kelly et al. May 1999 A
5907323 Lawler et al. May 1999 A
5907328 Brush, II et al. May 1999 A
5907446 Ishii et al. May 1999 A
5907491 Canada et al. May 1999 A
5907793 Reams May 1999 A
5907836 Sumita et al. May 1999 A
5908454 Zyburt et al. Jun 1999 A
5909183 Borgstahl et al. Jun 1999 A
5909559 So Jun 1999 A
5910987 Ginter et al. Jun 1999 A
5910999 Mukohzaka Jun 1999 A
5911035 Tsao Jun 1999 A
5911582 Redford et al. Jun 1999 A
5912696 Buehl Jun 1999 A
5912989 Watanabe Jun 1999 A
5913040 Rakavy et al. Jun 1999 A
5913185 Martino et al. Jun 1999 A
5913727 Ahdoot Jun 1999 A
5913917 Murphy Jun 1999 A
5914654 Smith Jun 1999 A
5914712 Sartain et al. Jun 1999 A
5914746 Matthews, III et al. Jun 1999 A
5915026 Mankovitz Jun 1999 A
5915034 Nakajima et al. Jun 1999 A
5915038 Abdel-Mottaleb et al. Jun 1999 A
5915068 Levine Jun 1999 A
5915214 Reece et al. Jun 1999 A
5915250 Jain et al. Jun 1999 A
5916024 Von Kohorn Jun 1999 A
5916300 Kirk et al. Jun 1999 A
5917405 Joao Jun 1999 A
5917491 Bauersfeld Jun 1999 A
5917725 Thacher et al. Jun 1999 A
5917893 Katz Jun 1999 A
5917912 Ginter et al. Jun 1999 A
5917958 Nunally et al. Jun 1999 A
5918014 Robinson Jun 1999 A
5918213 Bernard et al. Jun 1999 A
5918223 Blum et al. Jun 1999 A
5920477 Hoffberg et al. Jul 1999 A
5920694 Carleton et al. Jul 1999 A
5920854 Kirsch et al. Jul 1999 A
5920856 Syeda-Mahmood Jul 1999 A
5920861 Hall et al. Jul 1999 A
5922040 Prabhakaran Jul 1999 A
5922074 Richard et al. Jul 1999 A
5923362 Klosterman Jul 1999 A
5923376 Pullen et al. Jul 1999 A
5923780 Morfill et al. Jul 1999 A
5923848 Goodhand et al. Jul 1999 A
5924053 Horowitz et al. Jul 1999 A
5924486 Ehlers et al. Jul 1999 A
5926117 Gunji et al. Jul 1999 A
5926624 Katz et al. Jul 1999 A
5928306 France et al. Jul 1999 A
5928325 Shaughnessy et al. Jul 1999 A
5929753 Montague Jul 1999 A
5929849 Kikinis Jul 1999 A
5929850 Broadwin et al. Jul 1999 A
5929932 Otsuki et al. Jul 1999 A
5930250 Klok et al. Jul 1999 A
5931901 Wolfe et al. Aug 1999 A
5931905 Hashimoto et al. Aug 1999 A
5932863 Rathus et al. Aug 1999 A
5933080 Nojima Aug 1999 A
5933100 Golding Aug 1999 A
5933125 Fernie et al. Aug 1999 A
5933811 Angles et al. Aug 1999 A
5933823 Cullen et al. Aug 1999 A
5933827 Cole et al. Aug 1999 A
5933829 Durst et al. Aug 1999 A
5935004 Tarr et al. Aug 1999 A
5935190 Davis et al. Aug 1999 A
5937037 Kamel et al. Aug 1999 A
5937160 Davis et al. Aug 1999 A
5937163 Lee et al. Aug 1999 A
5937164 Mages Aug 1999 A
5937392 Alberts Aug 1999 A
5937421 Petrov et al. Aug 1999 A
5937422 Nelson et al. Aug 1999 A
5938704 Torii Aug 1999 A
5938717 Dunne et al. Aug 1999 A
5938721 Dussell et al. Aug 1999 A
5938757 Bertsch Aug 1999 A
5940004 Fulton Aug 1999 A
5940073 Klosterman et al. Aug 1999 A
5940387 Humpleman Aug 1999 A
5940572 Balaban et al. Aug 1999 A
5940821 Wical Aug 1999 A
5943427 Massie et al. Aug 1999 A
5943428 Seri et al. Aug 1999 A
5945919 Trask Aug 1999 A
5945944 Krasner Aug 1999 A
5945988 Williams et al. Aug 1999 A
5946083 Melendez et al. Aug 1999 A
5946386 Rogers et al. Aug 1999 A
5946406 Frink et al. Aug 1999 A
5946488 Tanguay et al. Aug 1999 A
5946629 Sawyer et al. Aug 1999 A
5946646 Schena et al. Aug 1999 A
5946664 Ebisawa Aug 1999 A
5946687 Gehani et al. Aug 1999 A
5948026 Beemer, II et al. Sep 1999 A
5948038 Daly et al. Sep 1999 A
5948040 DeLorme et al. Sep 1999 A
5948061 Merriman et al. Sep 1999 A
5949921 Kojima et al. Sep 1999 A
5949954 Young et al. Sep 1999 A
5950137 Kim Sep 1999 A
5950176 Keiser et al. Sep 1999 A
5951620 Ahrens et al. Sep 1999 A
5952599 Dolby et al. Sep 1999 A
5952941 Mardirossian Sep 1999 A
5953005 Liu Sep 1999 A
5953541 King et al. Sep 1999 A
5953650 Villevieille Sep 1999 A
5953677 Sato Sep 1999 A
5954773 Luper Sep 1999 A
5955973 Anderson Sep 1999 A
5955988 Blonstein et al. Sep 1999 A
5956025 Goulden et al. Sep 1999 A
5956038 Rekimoto Sep 1999 A
5956369 Davidovici Sep 1999 A
5956423 Frink et al. Sep 1999 A
5956487 Venkatraman et al. Sep 1999 A
5956660 Neumann Sep 1999 A
5956664 Bryan Sep 1999 A
5956716 Kenner et al. Sep 1999 A
5957695 Redford et al. Sep 1999 A
5959529 Kail, IV Sep 1999 A
5959536 Chambers et al. Sep 1999 A
5959580 Maloney et al. Sep 1999 A
5959592 Petruzzelli Sep 1999 A
5959623 van Hoff et al. Sep 1999 A
5959688 Schein et al. Sep 1999 A
5960362 Grob et al. Sep 1999 A
5960383 Fleischer Sep 1999 A
5960409 Wexler Sep 1999 A
5961569 Craport et al. Oct 1999 A
5961572 Craport et al. Oct 1999 A
5961603 Kunkel et al. Oct 1999 A
5963092 Van Zalinge Oct 1999 A
5963167 Lichten et al. Oct 1999 A
5963264 Jackson Oct 1999 A
5963582 Stansell, Jr. Oct 1999 A
5963645 Kigawa et al. Oct 1999 A
5963670 Lipson et al. Oct 1999 A
5963746 Barker et al. Oct 1999 A
5963966 Mitchell et al. Oct 1999 A
5964463 Moore, Jr. Oct 1999 A
5964660 James et al. Oct 1999 A
5964821 Brunts et al. Oct 1999 A
5964822 Alland et al. Oct 1999 A
5966126 Szabo Oct 1999 A
5966533 Moody Oct 1999 A
5966658 Kennedy, III et al. Oct 1999 A
5966696 Giraud Oct 1999 A
5968109 Israni et al. Oct 1999 A
5969598 Kimura Oct 1999 A
5969748 Casement et al. Oct 1999 A
5969765 Boon Oct 1999 A
5970143 Schneier et al. Oct 1999 A
5970173 Lee et al. Oct 1999 A
5970206 Yuen et al. Oct 1999 A
5970455 Wilcox et al. Oct 1999 A
5970473 Gerszberg et al. Oct 1999 A
5970486 Yoshida et al. Oct 1999 A
5971397 Miguel et al. Oct 1999 A
5973309 Livingston Oct 1999 A
5973376 Rostoker et al. Oct 1999 A
5973643 Hawkes et al. Oct 1999 A
5973683 Cragun et al. Oct 1999 A
5973731 Schwab Oct 1999 A
5974222 Yuen et al. Oct 1999 A
5974368 Schepps et al. Oct 1999 A
5974398 Hanson et al. Oct 1999 A
5974412 Hazlehurst et al. Oct 1999 A
5974547 Klimenko Oct 1999 A
5977884 Ross Nov 1999 A
5977906 Ameen et al. Nov 1999 A
5977964 Williams et al. Nov 1999 A
5978484 Apperson et al. Nov 1999 A
5978578 Azarya et al. Nov 1999 A
5978747 Craport et al. Nov 1999 A
5978766 Luciw Nov 1999 A
5978804 Dietzman Nov 1999 A
5980256 Carmein Nov 1999 A
5982281 Layson, Jr. Nov 1999 A
5982324 Watters et al. Nov 1999 A
5982411 Eyer et al. Nov 1999 A
5982853 Liebermann Nov 1999 A
5982891 Ginter et al. Nov 1999 A
5982928 Shimada et al. Nov 1999 A
5982929 Ilan et al. Nov 1999 A
5983092 Whinnett et al. Nov 1999 A
5983099 Yao et al. Nov 1999 A
5983158 Suzuki et al. Nov 1999 A
5983161 Lemelson et al. Nov 1999 A
5983171 Yokoyama et al. Nov 1999 A
5983176 Hoffert et al. Nov 1999 A
5983190 Trower, II et al. Nov 1999 A
5983236 Yager et al. Nov 1999 A
5983295 Cotugno Nov 1999 A
5986200 Curtin Nov 1999 A
5986644 Herder et al. Nov 1999 A
5986655 Chiu et al. Nov 1999 A
5987136 Schipper et al. Nov 1999 A
5987213 Mankovitz et al. Nov 1999 A
5987306 Nilsen et al. Nov 1999 A
5987381 Oshizawa Nov 1999 A
5987454 Hobbs Nov 1999 A
5987498 Athing et al. Nov 1999 A
5987509 Portuesi Nov 1999 A
5987511 Elixmann et al. Nov 1999 A
5987519 Peifer et al. Nov 1999 A
5987552 Chittor et al. Nov 1999 A
5987979 Bryan Nov 1999 A
5988078 Levine Nov 1999 A
5989157 Walton Nov 1999 A
5990687 Williams Nov 1999 A
5990801 Kyouno et al. Nov 1999 A
5990878 Ikeda et al. Nov 1999 A
5990885 Gopinath Nov 1999 A
5990893 Numazaki Nov 1999 A
5990927 Hendricks et al. Nov 1999 A
5991406 Lipner et al. Nov 1999 A
5991441 Jourjine Nov 1999 A
5991498 Young Nov 1999 A
5991690 Murphy Nov 1999 A
5991735 Gerace Nov 1999 A
5991751 Rivette et al. Nov 1999 A
5991799 Yen et al. Nov 1999 A
5991806 McHann, Jr. Nov 1999 A
5991832 Sato et al. Nov 1999 A
5995094 Eggen et al. Nov 1999 A
5995643 Saito Nov 1999 A
5995649 Marugame Nov 1999 A
5995673 Ibenthal et al. Nov 1999 A
5995882 Patterson et al. Nov 1999 A
5995978 Cullen et al. Nov 1999 A
5995997 Horvitz Nov 1999 A
5996006 Speicher Nov 1999 A
5999091 Wortham Dec 1999 A
5999124 Sheynblat Dec 1999 A
5999126 Ito Dec 1999 A
5999179 Kekic et al. Dec 1999 A
5999216 Kaars Dec 1999 A
5999664 Mahoney et al. Dec 1999 A
5999808 LaDue Dec 1999 A
5999878 Hanson et al. Dec 1999 A
5999940 Ranger Dec 1999 A
5999997 Pipes Dec 1999 A
6000000 Hawkins et al. Dec 1999 A
6000044 Chrysos et al. Dec 1999 A
6002393 Hite et al. Dec 1999 A
6002394 Schein et al. Dec 1999 A
6002443 Iggulden Dec 1999 A
6002444 Marshall et al. Dec 1999 A
6002450 Darbee et al. Dec 1999 A
6002491 Li et al. Dec 1999 A
6002798 Palmer et al. Dec 1999 A
6002808 Freeman Dec 1999 A
6003030 Kenner et al. Dec 1999 A
6003775 Ackley Dec 1999 A
6005513 Hardesty Dec 1999 A
6005548 Latypov et al. Dec 1999 A
6005561 Hawkins et al. Dec 1999 A
6005563 White et al. Dec 1999 A
6005565 Legall et al. Dec 1999 A
6005597 Barrett et al. Dec 1999 A
6005602 Matthews, III Dec 1999 A
6005631 Anderson et al. Dec 1999 A
6006218 Breese et al. Dec 1999 A
6006257 Slezak Dec 1999 A
6006635 Stojkovic et al. Dec 1999 A
6008802 Iki et al. Dec 1999 A
6008803 Rowe et al. Dec 1999 A
6009153 Houghton et al. Dec 1999 A
6009210 Kang Dec 1999 A
6009323 Heffield et al. Dec 1999 A
6009330 Kennedy, III et al. Dec 1999 A
6009356 Monroe Dec 1999 A
6009363 Beckert et al. Dec 1999 A
6009386 Cruickshank et al. Dec 1999 A
6009403 Sato Dec 1999 A
6009420 Fagg, III et al. Dec 1999 A
6009452 Horvitz Dec 1999 A
6009465 Decker et al. Dec 1999 A
6011537 Slotznick Jan 2000 A
6011787 Nakano et al. Jan 2000 A
6011895 Abecassis Jan 2000 A
6011905 Huttenlocher et al. Jan 2000 A
6012046 Lupien et al. Jan 2000 A
6012051 Sammon, Jr. et al. Jan 2000 A
6012052 Altschuler et al. Jan 2000 A
6012071 Krishna et al. Jan 2000 A
6012083 Savitzky et al. Jan 2000 A
6012086 Lowell Jan 2000 A
6012984 Roseman Jan 2000 A
6013007 Root et al. Jan 2000 A
6014090 Rosen et al. Jan 2000 A
6014184 Knee et al. Jan 2000 A
6014406 Shida et al. Jan 2000 A
6014634 Scroggie et al. Jan 2000 A
6014638 Burge et al. Jan 2000 A
6015348 Lambright et al. Jan 2000 A
6016141 Knudson et al. Jan 2000 A
6016485 Amakawa et al. Jan 2000 A
6016509 Dedrick Jan 2000 A
6018292 Penny, Jr. Jan 2000 A
6018342 Bristor Jan 2000 A
6018346 Moran et al. Jan 2000 A
6018372 Etheredge Jan 2000 A
6018659 Ayyagari et al. Jan 2000 A
6018695 Ahrens et al. Jan 2000 A
6018699 Baron, Sr. et al. Jan 2000 A
6018710 Wynblatt et al. Jan 2000 A
6018738 Breese et al. Jan 2000 A
6020845 Weinberg et al. Feb 2000 A
6020880 Naimpally Feb 2000 A
6020883 Herz et al. Feb 2000 A
6021218 Capps et al. Feb 2000 A
6021231 Miyatake et al. Feb 2000 A
6021403 Horvitz et al. Feb 2000 A
6023223 Baxter, Jr. Feb 2000 A
6023232 Eitzenberger Feb 2000 A
6023241 Clapper Feb 2000 A
6023242 Dixon Feb 2000 A
6023267 Chapuis et al. Feb 2000 A
6023507 Wookey Feb 2000 A
6023694 Kouchi et al. Feb 2000 A
6023724 Bhatia et al. Feb 2000 A
6023729 Samuel et al. Feb 2000 A
6024643 Begis Feb 2000 A
6025788 Diduck Feb 2000 A
6025837 Matthews, III et al. Feb 2000 A
6025844 Parsons Feb 2000 A
6025868 Russo Feb 2000 A
6025869 Stas et al. Feb 2000 A
6026368 Brown et al. Feb 2000 A
6026375 Hall et al. Feb 2000 A
6026388 Liddy et al. Feb 2000 A
6028271 Gillespie et al. Feb 2000 A
6028537 Suman et al. Feb 2000 A
6028548 Farmer Feb 2000 A
6028599 Yuen et al. Feb 2000 A
6028604 Matthews, III et al. Feb 2000 A
6028857 Poor Feb 2000 A
6028937 Tatebayashi et al. Feb 2000 A
6029045 Picco et al. Feb 2000 A
6029046 Khan et al. Feb 2000 A
6029092 Stein Feb 2000 A
6029141 Bezos et al. Feb 2000 A
6029195 Herz Feb 2000 A
6031525 Perlin Feb 2000 A
6031531 Kimble Feb 2000 A
6031580 Sim Feb 2000 A
6032051 Hall et al. Feb 2000 A
6032054 Schwinke Feb 2000 A
6032084 Anderson et al. Feb 2000 A
6032089 Buckley Feb 2000 A
6032097 Iihoshi et al. Feb 2000 A
6032141 O'Connor et al. Feb 2000 A
6032156 Marcus Feb 2000 A
6033086 Bohn Mar 2000 A
6034677 Noguchi et al. Mar 2000 A
6035021 Katz Mar 2000 A
6035038 Campinos et al. Mar 2000 A
6035339 Agraharam et al. Mar 2000 A
6035714 Yazdi et al. Mar 2000 A
6036086 Sizer, II et al. Mar 2000 A
6036601 Heckel Mar 2000 A
6037933 Blonstein et al. Mar 2000 A
6037998 Usui et al. Mar 2000 A
6038337 Lawrence et al. Mar 2000 A
6038342 Bernzott et al. Mar 2000 A
6038367 Abecassis Mar 2000 A
6038436 Priest Mar 2000 A
6038554 Vig Mar 2000 A
6038561 Snyder et al. Mar 2000 A
6038563 Bapat et al. Mar 2000 A
6038568 McGrath et al. Mar 2000 A
6040829 Croy et al. Mar 2000 A
6040840 Koshiba et al. Mar 2000 A
6041323 Kubota Mar 2000 A
6042012 Olmstead et al. Mar 2000 A
6042383 Herron Mar 2000 A
6044170 Migdal et al. Mar 2000 A
6044376 Kurtzman, II Mar 2000 A
6044378 Gladney Mar 2000 A
6044403 Gerszberg Mar 2000 A
6044464 Shamir Mar 2000 A
6044698 Bryan Apr 2000 A
6047234 Cherveny et al. Apr 2000 A
6047236 Hancock et al. Apr 2000 A
6047258 Allison et al. Apr 2000 A
6047289 Thorne et al. Apr 2000 A
6047311 Ueno et al. Apr 2000 A
6047327 Tso et al. Apr 2000 A
6048276 Vandergrift Apr 2000 A
6049034 Cook Apr 2000 A
6049327 Walker et al. Apr 2000 A
6049652 Yuen et al. Apr 2000 A
6049758 Bunks et al. Apr 2000 A
6049823 Hwang Apr 2000 A
6052081 Krasner Apr 2000 A
6052082 Hassan et al. Apr 2000 A
6052120 Nahi et al. Apr 2000 A
6052145 Macrae et al. Apr 2000 A
6052481 Grajski et al. Apr 2000 A
6052556 Sampsell Apr 2000 A
6052591 Bhatia Apr 2000 A
6052598 Rudrapatna et al. Apr 2000 A
6052676 Hekmatpour Apr 2000 A
6053413 Swift et al. Apr 2000 A
6054950 Fontana Apr 2000 A
6054991 Crane et al. Apr 2000 A
6055333 Guzik et al. Apr 2000 A
6055335 Ida et al. Apr 2000 A
6055337 Kim Apr 2000 A
6055478 Heron Apr 2000 A
6055513 Katz et al. Apr 2000 A
6055542 Nielsen et al. Apr 2000 A
6055560 Mills et al. Apr 2000 A
6055569 O'Brien et al. Apr 2000 A
D424061 Backs et al. May 2000 S
D424577 Backs et al. May 2000 S
6057808 Tajima May 2000 A
6057844 Strauss May 2000 A
6057845 Dupouy May 2000 A
6057872 Candelore May 2000 A
6057890 Virden et al. May 2000 A
6057966 Carroll et al. May 2000 A
6058179 Shaffer et al. May 2000 A
6058238 Ng May 2000 A
6058307 Garner May 2000 A
6058338 Agashe et al. May 2000 A
6060989 Gehlot May 2000 A
6060995 Wicks et al. May 2000 A
6060996 Kaiser et al. May 2000 A
6061018 Sheynblat May 2000 A
6061021 Zibell May 2000 A
6061050 Allport et al. May 2000 A
6061097 Satterfield May 2000 A
6061347 Hollatz May 2000 A
6061468 Kang May 2000 A
6061503 Chamberlain May 2000 A
6061561 Alanara et al. May 2000 A
6061632 Dreier May 2000 A
6061658 Chou et al. May 2000 A
6061680 Scherf et al. May 2000 A
6061709 Bronte May 2000 A
6061779 Garde May 2000 A
6064336 Krasner May 2000 A
6064376 Berezowski et al. May 2000 A
6064378 Chaney et al. May 2000 A
6064398 Ellenby et al. May 2000 A
6064438 Miller May 2000 A
6064653 Farris May 2000 A
6064854 Peters et al. May 2000 A
6064967 Speicher May 2000 A
6064970 McMillan et al. May 2000 A
6064976 Tolopka May 2000 A
6064980 Jacobi et al. May 2000 A
6065042 Reimer et al. May 2000 A
6065047 Carpenter et al. May 2000 A
6066075 Poulton May 2000 A
6066794 Longo May 2000 A
6067045 Castelloe et al. May 2000 A
6067121 Shigihara May 2000 A
6067500 Morimoto et al. May 2000 A
6067561 Dillon May 2000 A
6067564 Urakoshi et al. May 2000 A
6067570 Kreynin et al. May 2000 A
6069622 Kurlander May 2000 A
6070167 Qian et al. May 2000 A
6070228 Belknap et al. May 2000 A
6070240 Xydis May 2000 A
6070798 Nethery Jun 2000 A
6072421 Fukae et al. Jun 2000 A
6072460 Marshall et al. Jun 2000 A
6072494 Nguyen Jun 2000 A
6072502 Gupta Jun 2000 A
6072934 Abecassis Jun 2000 A
6072983 Klosterman Jun 2000 A
6073489 French et al. Jun 2000 A
6075466 Cohen et al. Jun 2000 A
6075467 Ninagawa Jun 2000 A
6075526 Rothmuller Jun 2000 A
6075551 Berezowski et al. Jun 2000 A
6075568 Matsuura Jun 2000 A
6075570 Usui et al. Jun 2000 A
6075575 Schein et al. Jun 2000 A
6075895 Qiao et al. Jun 2000 A
6075987 Camp, Jr. et al. Jun 2000 A
6077201 Cheng Jun 2000 A
6078269 Markwell et al. Jun 2000 A
6078284 Levanon Jun 2000 A
6078308 Rosenberg et al. Jun 2000 A
6078348 Klosterman et al. Jun 2000 A
6078502 Rostoker et al. Jun 2000 A
6081206 Kielland Jun 2000 A
6081229 Soliman et al. Jun 2000 A
6081621 Ackner Jun 2000 A
6081629 Browning Jun 2000 A
6081691 Renard et al. Jun 2000 A
6081750 Hoffberg et al. Jun 2000 A
6081780 Lumelsky Jun 2000 A
6083248 Thompson Jul 2000 A
6083353 Alexander, Jr. Jul 2000 A
6084510 Lemelson et al. Jul 2000 A
6084512 Elberty et al. Jul 2000 A
6084870 Wooten et al. Jul 2000 A
6085162 Cherny Jul 2000 A
6085244 Wookey Jul 2000 A
6085256 Kitano et al. Jul 2000 A
6085320 Kaliski, Jr. Jul 2000 A
6087952 Prabhakaran Jul 2000 A
6087960 Kyouno et al. Jul 2000 A
6088484 Mead Jul 2000 A
6088635 Cox et al. Jul 2000 A
6088651 Nageswaran Jul 2000 A
6088654 Lepere et al. Jul 2000 A
6088722 Herz et al. Jul 2000 A
6088731 Kiraly et al. Jul 2000 A
6091882 Yuen et al. Jul 2000 A
6091883 Artigalas et al. Jul 2000 A
6091884 Yuen et al. Jul 2000 A
6091956 Hollenberg Jul 2000 A
6092038 Kanevsky et al. Jul 2000 A
6092068 Dinkelacker Jul 2000 A
6094164 Murphy Jul 2000 A
6094169 Smith et al. Jul 2000 A
6094618 Harada Jul 2000 A
6094689 Embry et al. Jul 2000 A
6095418 Swartz et al. Aug 2000 A
6097073 Rostoker et al. Aug 2000 A
6097285 Curtin Aug 2000 A
6097313 Takahashi et al. Aug 2000 A
6097392 Leyerle Aug 2000 A
6097441 Allport Aug 2000 A
6097974 Camp, Jr. et al. Aug 2000 A
6098048 Dashefsky et al. Aug 2000 A
6098065 Skillen et al. Aug 2000 A
6098106 Philyaw et al. Aug 2000 A
6098458 French et al. Aug 2000 A
6100896 Strohecker et al. Aug 2000 A
6101289 Kellner Aug 2000 A
6101510 Stone et al. Aug 2000 A
6101916 Panot et al. Aug 2000 A
6104316 Behr et al. Aug 2000 A
6104334 Allport Aug 2000 A
6104338 Krasner Aug 2000 A
6104401 Parsons Aug 2000 A
6104705 Ismail et al. Aug 2000 A
6104712 Robert et al. Aug 2000 A
6104815 Alcorn et al. Aug 2000 A
6104845 Lipman et al. Aug 2000 A
6107944 Behr et al. Aug 2000 A
6107959 Levanon Aug 2000 A
6107961 Takagi Aug 2000 A
6107994 Harada et al. Aug 2000 A
6108555 Maloney et al. Aug 2000 A
6108637 Blumenau Aug 2000 A
6108656 Durst et al. Aug 2000 A
6108715 Leach et al. Aug 2000 A
6111523 Mee Aug 2000 A
6111541 Karmel Aug 2000 A
6111580 Kazama et al. Aug 2000 A
6111588 Newell Aug 2000 A
6111883 Terada et al. Aug 2000 A
6112181 Shear et al. Aug 2000 A
6112186 Bergh et al. Aug 2000 A
6113494 Lennert Sep 2000 A
6114970 Kirson et al. Sep 2000 A
6115053 Perlin Sep 2000 A
6115471 Oki et al. Sep 2000 A
6115482 Sears et al. Sep 2000 A
6115611 Kimoto et al. Sep 2000 A
6115724 Booker Sep 2000 A
6118403 Lang Sep 2000 A
6118492 Milnes et al. Sep 2000 A
6118521 Jung et al. Sep 2000 A
6118888 Chino et al. Sep 2000 A
6118899 Bloomfield et al. Sep 2000 A
6119013 Maloney et al. Sep 2000 A
6119095 Morita Sep 2000 A
6119098 Guyot et al. Sep 2000 A
6121915 Cooper et al. Sep 2000 A
6121923 King Sep 2000 A
6121924 Meek et al. Sep 2000 A
6122403 Rhoads Sep 2000 A
6122514 Spaur et al. Sep 2000 A
6122520 Want et al. Sep 2000 A
6122593 Friederich et al. Sep 2000 A
6124810 Segal et al. Sep 2000 A
6125230 Yaginuma Sep 2000 A
6125387 Simonoff et al. Sep 2000 A
D432539 Philyaw Oct 2000 S
6127945 Mura-Smith Oct 2000 A
6127970 Lin Oct 2000 A
6127975 Maloney Oct 2000 A
6128003 Smith et al. Oct 2000 A
6128469 Zenick, Jr. et al. Oct 2000 A
6128482 Nixon et al. Oct 2000 A
6128501 Ffoulkes-Jones Oct 2000 A
6128608 Barnhill Oct 2000 A
6130677 Kunz Oct 2000 A
6130726 Darbee et al. Oct 2000 A
6131066 Ahrens et al. Oct 2000 A
6131067 Girerd et al. Oct 2000 A
6133847 Yang Oct 2000 A
6133853 Obradovich et al. Oct 2000 A
6133874 Krasner Oct 2000 A
6133909 Schein et al. Oct 2000 A
6133910 Stinebruner Oct 2000 A
6133912 Montero Oct 2000 A
6134483 Vayanos et al. Oct 2000 A
6134532 Lazarus et al. Oct 2000 A
6137433 Zavorotny et al. Oct 2000 A
6137950 Yuen Oct 2000 A
6138072 Nagai Oct 2000 A
6138073 Uchigaki Oct 2000 A
6138155 Davis et al. Oct 2000 A
6138173 Hisano Oct 2000 A
6138915 Danielson et al. Oct 2000 A
6139177 Venkatraman et al. Oct 2000 A
6140140 Hopper Oct 2000 A
6140943 Levine Oct 2000 A
6140957 Wilson et al. Oct 2000 A
6141003 Chor et al. Oct 2000 A
6141010 Hoyle Oct 2000 A
6141488 Knudson et al. Oct 2000 A
6141611 Mackey et al. Oct 2000 A
6141699 Luzzi et al. Oct 2000 A
6144318 Hayashi et al. Nov 2000 A
6144338 Davies Nov 2000 A
6144366 Numazaki et al. Nov 2000 A
6144401 Casement et al. Nov 2000 A
6144702 Yurt et al. Nov 2000 A
6144905 Gannon Nov 2000 A
6144917 Walters et al. Nov 2000 A
6145003 Sanu et al. Nov 2000 A
6145082 Gannon et al. Nov 2000 A
6147598 Murphy et al. Nov 2000 A
6147678 Kumar et al. Nov 2000 A
6148179 Wright et al. Nov 2000 A
6148261 Obradovich et al. Nov 2000 A
6149519 Osaki et al. Nov 2000 A
6150927 Nesbitt Nov 2000 A
6150937 Rackman Nov 2000 A
6150961 Alewine et al. Nov 2000 A
6150980 Krasner Nov 2000 A
6151059 Schein et al. Nov 2000 A
6151208 Bartlett Nov 2000 A
6151551 Geier et al. Nov 2000 A
6151600 Dedrick Nov 2000 A
6151624 Teare et al. Nov 2000 A
6151631 Ansell et al. Nov 2000 A
6151643 Cheng et al. Nov 2000 A
6152856 Studor et al. Nov 2000 A
6154123 Kleinberg Nov 2000 A
6154172 Piccionelli et al. Nov 2000 A
6154207 Farris et al. Nov 2000 A
6154222 Haratsch et al. Nov 2000 A
6154658 Caci Nov 2000 A
6154723 Cox et al. Nov 2000 A
6154737 Inaba et al. Nov 2000 A
6154745 Kari et al. Nov 2000 A
6154758 Chiang Nov 2000 A
6157317 Walker Dec 2000 A
6157411 Williams et al. Dec 2000 A
6157413 Hanafee et al. Dec 2000 A
6157465 Suda et al. Dec 2000 A
6157621 Brown et al. Dec 2000 A
6157890 Nakai et al. Dec 2000 A
6157924 Austin Dec 2000 A
6157935 Tran et al. Dec 2000 A
6159100 Smith Dec 2000 A
6160477 Sandelman et al. Dec 2000 A
6160841 Stansell, Jr. et al. Dec 2000 A
6160988 Shroyer Dec 2000 A
6160998 Wright et al. Dec 2000 A
6161062 Sicre et al. Dec 2000 A
6161071 Shuman et al. Dec 2000 A
6161097 Glass et al. Dec 2000 A
6161125 Traversat et al. Dec 2000 A
6163316 Killian Dec 2000 A
6163338 Johnson et al. Dec 2000 A
6163345 Noguchi et al. Dec 2000 A
6163681 Wright et al. Dec 2000 A
6163711 Juntunen et al. Dec 2000 A
6163748 Guenther Dec 2000 A
6163749 McDonough et al. Dec 2000 A
6164534 Rathus et al. Dec 2000 A
6165070 Nolte et al. Dec 2000 A
6166627 Reeley Dec 2000 A
6167120 Kikinis Dec 2000 A
6167188 Young et al. Dec 2000 A
6167238 Wright Dec 2000 A
6167239 Wright et al. Dec 2000 A
6167253 Farris et al. Dec 2000 A
6167255 Kennedy, III et al. Dec 2000 A
6167369 Schulze Dec 2000 A
6169543 Wehmeyer Jan 2001 B1
6169894 McCormick et al. Jan 2001 B1
6169901 Boucher Jan 2001 B1
6169902 Kawamoto Jan 2001 B1
6169969 Cohen Jan 2001 B1
6169976 Colosso Jan 2001 B1
6169992 Beall et al. Jan 2001 B1
6170075 Schuster et al. Jan 2001 B1
6172674 Etheredge Jan 2001 B1
6172677 Stautner et al. Jan 2001 B1
6173066 Peurach et al. Jan 2001 B1
6173316 De Boor Jan 2001 B1
6175728 Mitama Jan 2001 B1
6175772 Kamiya et al. Jan 2001 B1
6175782 Obradovich et al. Jan 2001 B1
6175789 Beckert et al. Jan 2001 B1
6175868 Lavian et al. Jan 2001 B1
6175922 Wang Jan 2001 B1
6177873 Cragun Jan 2001 B1
6177931 Alexander et al. Jan 2001 B1
6178261 Williams et al. Jan 2001 B1
6178263 Fan et al. Jan 2001 B1
6178506 Quick, Jr. Jan 2001 B1
6179713 James et al. Jan 2001 B1
6181335 Hendricks et al. Jan 2001 B1
6181343 Lyons Jan 2001 B1
6181778 Ohki et al. Jan 2001 B1
6181922 Iwai et al. Jan 2001 B1
6181988 Schneider et al. Jan 2001 B1
6181994 Colson et al. Jan 2001 B1
6182006 Meek Jan 2001 B1
6182069 Niblack et al. Jan 2001 B1
6182089 Ganapathy et al. Jan 2001 B1
6182094 Humpleman et al. Jan 2001 B1
6182509 Leung Feb 2001 B1
6183365 Tonomura et al. Feb 2001 B1
6183366 Goldberg et al. Feb 2001 B1
6184798 Egri Feb 2001 B1
6184847 Fateh et al. Feb 2001 B1
6184877 Dodson et al. Feb 2001 B1
6185427 Krasner et al. Feb 2001 B1
6185484 Rhinehart Feb 2001 B1
6185491 Gray et al. Feb 2001 B1
6185586 Judson Feb 2001 B1
6185625 Tso et al. Feb 2001 B1
6188354 Soliman et al. Feb 2001 B1
6188381 van der Wal et al. Feb 2001 B1
6188397 Humpleman Feb 2001 B1
6188777 Darrell et al. Feb 2001 B1
6188909 Alanara et al. Feb 2001 B1
6189098 Kaliski, Jr. Feb 2001 B1
6192165 Irons Feb 2001 B1
6192282 Smith et al. Feb 2001 B1
6192314 Khavakh et al. Feb 2001 B1
6192340 Abecassis Feb 2001 B1
6192478 Elledge Feb 2001 B1
6195104 Lyons Feb 2001 B1
6195475 Beausoleil, Jr. et al. Feb 2001 B1
6195542 Griffith Feb 2001 B1
6195557 Havinis et al. Feb 2001 B1
6195654 Wachtel Feb 2001 B1
6196920 Spaur et al. Mar 2001 B1
6198920 Doviak et al. Mar 2001 B1
6199015 Curtwright et al. Mar 2001 B1
6199045 Giniger et al. Mar 2001 B1
6199048 Hudetz et al. Mar 2001 B1
6199076 Logan et al. Mar 2001 B1
6199082 Ferrel et al. Mar 2001 B1
6199099 Gershman et al. Mar 2001 B1
6201493 Silverman Mar 2001 B1
6201903 Wolff et al. Mar 2001 B1
6201996 Crater et al. Mar 2001 B1
6202008 Beckert et al. Mar 2001 B1
6202023 Hancock et al. Mar 2001 B1
6202027 Alland et al. Mar 2001 B1
6202096 Williams et al. Mar 2001 B1
6202211 Williams, Jr. Mar 2001 B1
6203366 Muller et al. Mar 2001 B1
6204798 Fleming, III Mar 2001 B1
6204804 Andersson Mar 2001 B1
6204852 Kumar et al. Mar 2001 B1
6205330 Winbladh Mar 2001 B1
6208247 Agre et al. Mar 2001 B1
6208290 Krasner Mar 2001 B1
6208335 Gordon et al. Mar 2001 B1
6208355 Schuster Mar 2001 B1
6208384 Schultheiss Mar 2001 B1
6208435 Zwolinski Mar 2001 B1
6208799 Marsh et al. Mar 2001 B1
6208805 Abecassis Mar 2001 B1
6208844 Abdelgany Mar 2001 B1
6208862 Lee Mar 2001 B1
6211777 Greenwood et al. Apr 2001 B1
6211907 Scaman et al. Apr 2001 B1
6212299 Yuge Apr 2001 B1
6212327 Berstis et al. Apr 2001 B1
6212552 Biliris et al. Apr 2001 B1
6212553 Lee et al. Apr 2001 B1
6215441 Moeglein et al. Apr 2001 B1
6215890 Matsuo et al. Apr 2001 B1
6215898 Woodfill et al. Apr 2001 B1
6216129 Eldering Apr 2001 B1
6216264 Maze et al. Apr 2001 B1
6216265 Roop et al. Apr 2001 B1
6218964 Ellis Apr 2001 B1
6219057 Carey et al. Apr 2001 B1
6219669 Haff et al. Apr 2001 B1
6219696 Wynblatt et al. Apr 2001 B1
6219839 Sampsell Apr 2001 B1
6222465 Kumar et al. Apr 2001 B1
6222838 Sparks Apr 2001 B1
6223124 Matsuno et al. Apr 2001 B1
6225890 Murphy May 2001 B1
6225901 Kail, IV May 2001 B1
6226389 Lemelson et al. May 2001 B1
6226396 Marugame May 2001 B1
6226631 Evans May 2001 B1
6229137 Bohn May 2001 B1
6229542 Miller May 2001 B1
6229913 Nayar et al. May 2001 B1
6230501 Bailey, Sr. et al. May 2001 B1
6233389 Barton et al. May 2001 B1
6233468 Chen May 2001 B1
6233591 Sherman et al. May 2001 B1
6233610 Hayball et al. May 2001 B1
6233734 Macrae et al. May 2001 B1
6236360 Rudow et al. May 2001 B1
6236365 LeBlanc et al. May 2001 B1
6236652 Preston et al. May 2001 B1
6236975 Boe et al. May 2001 B1
6237049 Ludtke May 2001 B1
6238290 Tarr et al. May 2001 B1
6239081 Korzilius et al. May 2001 B1
6239742 Krasner May 2001 B1
6239794 Yuen et al. May 2001 B1
6240207 Shinozuka et al. May 2001 B1
6240365 Bunn May 2001 B1
6240555 Shoff et al. May 2001 B1
6243450 Jansen et al. Jun 2001 B1
6243683 Peters Jun 2001 B1
6244873 Hill et al. Jun 2001 B1
6246471 Jung et al. Jun 2001 B1
6246479 Jung et al. Jun 2001 B1
6246672 Lumelsky Jun 2001 B1
6246688 Angwin et al. Jun 2001 B1
6246935 Buckley Jun 2001 B1
6247019 Davies Jun 2001 B1
6247135 Feague Jun 2001 B1
6247176 Schein et al. Jun 2001 B1
6249218 Blair Jun 2001 B1
6249252 Dupray Jun 2001 B1
6249292 Christian et al. Jun 2001 B1
6249294 Lefebvre et al. Jun 2001 B1
6249348 Jung et al. Jun 2001 B1
6249606 Kiraly et al. Jun 2001 B1
6249817 Nakabayashi et al. Jun 2001 B1
6249873 Richard et al. Jun 2001 B1
6250148 Lynam Jun 2001 B1
6251017 Leason et al. Jun 2001 B1
6252539 Phillips et al. Jun 2001 B1
6252544 Hoffberg Jun 2001 B1
6252598 Segen Jun 2001 B1
6253187 Fox Jun 2001 B1
6253203 O'Flaherty et al. Jun 2001 B1
6253258 Cohen Jun 2001 B1
6253326 Lincke et al. Jun 2001 B1
6255942 Knudsen Jul 2001 B1
6255953 Barber Jul 2001 B1
6256033 Nguyen Jul 2001 B1
6256400 Takata et al. Jul 2001 B1
6260088 Gove et al. Jul 2001 B1
6260147 Quick, Jr. Jul 2001 B1
6262722 Allison et al. Jul 2001 B1
6262772 Shen et al. Jul 2001 B1
6263268 Nathanson Jul 2001 B1
6263360 Arnold et al. Jul 2001 B1
6263363 Rosenblatt et al. Jul 2001 B1
6263384 Yanase Jul 2001 B1
6263390 Alasti Jul 2001 B1
6263501 Schein et al. Jul 2001 B1
6263503 Margulis Jul 2001 B1
6263507 Ahmad et al. Jul 2001 B1
6264555 Glazman et al. Jul 2001 B1
6264560 Goldberg et al. Jul 2001 B1
6265844 Wakefield Jul 2001 B1
6266057 Kuzunuki et al. Jul 2001 B1
6266667 Olsson Jul 2001 B1
6266814 Lemmons et al. Jul 2001 B1
6267672 Vance Jul 2001 B1
6267675 Lee Jul 2001 B1
6268849 Boyer et al. Jul 2001 B1
6268853 Hoskins et al. Jul 2001 B1
6269187 Frink et al. Jul 2001 B1
6269188 Jamali Jul 2001 B1
6270013 Lipman et al. Aug 2001 B1
6271858 Dalal et al. Aug 2001 B1
6272405 Kubota Aug 2001 B1
6272537 Kekic et al. Aug 2001 B1
6272632 Carman et al. Aug 2001 B1
6273771 Buckley et al. Aug 2001 B1
6275231 Obradovich Aug 2001 B1
6275648 Knudson et al. Aug 2001 B1
6275692 Skog Aug 2001 B1
6275774 Baron, Sr. et al. Aug 2001 B1
6275849 Ludwig Aug 2001 B1
6275854 Himmel et al. Aug 2001 B1
6275989 Broadwin et al. Aug 2001 B1
6279029 Sampat et al. Aug 2001 B1
6281792 Lerg et al. Aug 2001 B1
6281808 Glier et al. Aug 2001 B1
6282464 Obradovich Aug 2001 B1
D448366 Youngers et al. Sep 2001 S
6283860 Lyons et al. Sep 2001 B1
6285794 Georgiev et al. Sep 2001 B1
6285899 Ghaem et al. Sep 2001 B1
6285931 Hattori et al. Sep 2001 B1
6286142 Ehreth Sep 2001 B1
6287201 Hightower Sep 2001 B1
6288643 Lerg et al. Sep 2001 B1
6288716 Humpleman et al. Sep 2001 B1
6289112 Jain et al. Sep 2001 B1
6289304 Grefenstette Sep 2001 B1
6289319 Lockwood Sep 2001 B1
6292109 Murano et al. Sep 2001 B1
6292274 Bohn Sep 2001 B1
6292624 Saib et al. Sep 2001 B1
6292747 Amro et al. Sep 2001 B1
6292889 Fitzgerald et al. Sep 2001 B1
6294987 Matsuda et al. Sep 2001 B1
6295001 Barber Sep 2001 B1
6295346 Markowitz et al. Sep 2001 B1
6295449 Westerlage et al. Sep 2001 B1
6295492 Lang et al. Sep 2001 B1
6295513 Thackston Sep 2001 B1
6295530 Ritchie et al. Sep 2001 B1
6297732 Hsu et al. Oct 2001 B2
6297768 Allen, Jr. Oct 2001 B1
6298302 Walgers et al. Oct 2001 B2
6298348 Eldering Oct 2001 B1
6298445 Shostack et al. Oct 2001 B1
6298482 Seidman et al. Oct 2001 B1
6299308 Voronka et al. Oct 2001 B1
6301245 Luzeski et al. Oct 2001 B1
6304674 Cass et al. Oct 2001 B1
6304816 Berstis Oct 2001 B1
6305018 Usui et al. Oct 2001 B1
6307504 Sheynblat Oct 2001 B1
6307751 Bodony et al. Oct 2001 B1
6307952 Dietz Oct 2001 B1
6307955 Zank et al. Oct 2001 B1
6308175 Lang et al. Oct 2001 B1
6308269 Proidl Oct 2001 B2
6308328 Bowcutt et al. Oct 2001 B1
6308565 French et al. Oct 2001 B1
6310886 Barton Oct 2001 B1
6310971 Shiiyama Oct 2001 B1
6310988 Flores et al. Oct 2001 B1
6311011 Kuroda Oct 2001 B1
6311060 Evans et al. Oct 2001 B1
6311152 Bai et al. Oct 2001 B1
6312175 Lum Nov 2001 B1
6312337 Edwards et al. Nov 2001 B1
6313786 Sheynblat et al. Nov 2001 B1
6313853 Lamontagne et al. Nov 2001 B1
6314184 Fernandez-Martinez Nov 2001 B1
6314326 Fuchu Nov 2001 B1
6314364 Nakamura Nov 2001 B1
6314365 Smith Nov 2001 B1
6314366 Farmakis et al. Nov 2001 B1
6314399 Deligne et al. Nov 2001 B1
6314406 O'Hagan et al. Nov 2001 B1
6314415 Mukherjee Nov 2001 B1
6314420 Lang et al. Nov 2001 B1
6314422 Barker et al. Nov 2001 B1
6314452 Dekel et al. Nov 2001 B1
6314457 Schena et al. Nov 2001 B1
6316710 Lindemann Nov 2001 B1
6316934 Amorai-Moriya et al. Nov 2001 B1
6317090 Nagy et al. Nov 2001 B1
6317132 Perlin Nov 2001 B1
6317718 Fano Nov 2001 B1
6317761 Landsman et al. Nov 2001 B1
6317777 Skarbo et al. Nov 2001 B1
6317781 De Boor et al. Nov 2001 B1
6317881 Shah-Nazaroff et al. Nov 2001 B1
6317884 Eames et al. Nov 2001 B1
6317885 Fries Nov 2001 B1
6318087 Baumann et al. Nov 2001 B1
6320495 Sporgis Nov 2001 B1
6321257 Kotola et al. Nov 2001 B1
6321318 Baltz et al. Nov 2001 B1
6321991 Knowles Nov 2001 B1
6323803 Jolley et al. Nov 2001 B1
6323846 Westerman et al. Nov 2001 B1
6323894 Katz Nov 2001 B1
6323911 Schein et al. Nov 2001 B1
6324338 Wood et al. Nov 2001 B1
6324393 Doshay Nov 2001 B1
6324450 Iwama Nov 2001 B1
6324519 Eldering Nov 2001 B1
6324542 Wright, Jr. et al. Nov 2001 B1
6324573 Rhoads Nov 2001 B1
6324650 Ogilvie Nov 2001 B1
6326903 Gross et al. Dec 2001 B1
6326962 Szabo Dec 2001 B1
6326982 Wu et al. Dec 2001 B1
6327049 Ohtsuka Dec 2001 B1
6327073 Yahav et al. Dec 2001 B1
6327418 Barton Dec 2001 B1
6327473 Soliman et al. Dec 2001 B1
6327536 Tsuji et al. Dec 2001 B1
6327590 Chidlovskii et al. Dec 2001 B1
6327607 Fant Dec 2001 B1
6329984 Boss et al. Dec 2001 B1
6330021 Devaux Dec 2001 B1
6330022 Seligmann Dec 2001 B1
6330499 Chou et al. Dec 2001 B1
6330976 Dymetman et al. Dec 2001 B1
6331877 Bennington et al. Dec 2001 B1
6332086 Avis Dec 2001 B2
6332127 Bandera et al. Dec 2001 B1
6333703 Alewine et al. Dec 2001 B1
6333919 Gaffney Dec 2001 B2
6335569 Joshi Jan 2002 B1
6335725 Koh et al. Jan 2002 B1
6335963 Bosco Jan 2002 B1
6335965 Katz Jan 2002 B1
6336099 Barnett et al. Jan 2002 B1
6338059 Fields et al. Jan 2002 B1
6339370 Ruhl et al. Jan 2002 B1
6339842 Fernandez et al. Jan 2002 B1
6340959 Inamori Jan 2002 B1
6340977 Lui et al. Jan 2002 B1
6341195 Mankovitz et al. Jan 2002 B1
6341280 Glass et al. Jan 2002 B1
6341288 Yach et al. Jan 2002 B1
6341290 Lombardo et al. Jan 2002 B1
6341374 Schein et al. Jan 2002 B2
6341523 Lynam Jan 2002 B2
6343218 Kaneda et al. Jan 2002 B1
6343318 Hawkins et al. Jan 2002 B1
6343377 Gessner et al. Jan 2002 B1
6343810 Breed Feb 2002 B1
6343990 Rasmussen et al. Feb 2002 B1
6344906 Gatto et al. Feb 2002 B1
6345104 Rhoads Feb 2002 B1
6345239 Bowman-Amuah Feb 2002 B1
6345288 Reed et al. Feb 2002 B1
6346045 Rider et al. Feb 2002 B2
6346933 Lin Feb 2002 B1
6346951 Mastronardi Feb 2002 B1
6347290 Bartlett Feb 2002 B1
6347313 Ma et al. Feb 2002 B1
6349134 Katz Feb 2002 B1
6349308 Whang et al. Feb 2002 B1
6349339 Williams Feb 2002 B1
6351222 Swan et al. Feb 2002 B1
6351776 O'Brien et al. Feb 2002 B1
6353398 Amin et al. Mar 2002 B1
6353839 King et al. Mar 2002 B1
6353850 Wies et al. Mar 2002 B1
6356281 Isenman Mar 2002 B1
6356899 Chakrabarti et al. Mar 2002 B1
6356933 Mitchell et al. Mar 2002 B2
6357043 Ellis et al. Mar 2002 B1
6359636 Schindler et al. Mar 2002 B1
6360093 Ross et al. Mar 2002 B1
6360102 Havinis et al. Mar 2002 B1
6360949 Shepard et al. Mar 2002 B1
6360951 Swinehart Mar 2002 B1
6362730 Razavi et al. Mar 2002 B2
6362748 Huang Mar 2002 B1
6362888 Jung et al. Mar 2002 B1
6363160 Bradski et al. Mar 2002 B1
6363254 Jones et al. Mar 2002 B1
6363421 Barker et al. Mar 2002 B2
RE37654 Longo Apr 2002 E
6366288 Naruki et al. Apr 2002 B1
6366701 Chalom et al. Apr 2002 B1
6366893 Hannula et al. Apr 2002 B2
6367019 Ansell et al. Apr 2002 B1
6367080 Enomoto et al. Apr 2002 B1
6369811 Graham et al. Apr 2002 B1
6370448 Eryurek Apr 2002 B1
6370475 Breed et al. Apr 2002 B1
6371850 Sonoda Apr 2002 B1
6373528 Bennington et al. Apr 2002 B1
6373573 Jung et al. Apr 2002 B1
6373841 Goh et al. Apr 2002 B1
6373851 Dadario Apr 2002 B1
6374237 Reese Apr 2002 B1
6374286 Gee et al. Apr 2002 B1
6374290 Scharber et al. Apr 2002 B1
6374406 Hirata Apr 2002 B2
6377209 Krasner Apr 2002 B1
6377296 Zlatsin et al. Apr 2002 B1
6377712 Georgiev et al. Apr 2002 B1
6377825 Kennedy et al. Apr 2002 B1
6377860 Gray et al. Apr 2002 B1
6377986 Philyaw et al. Apr 2002 B1
6378075 Goldstein et al. Apr 2002 B1
6379251 Auxier et al. Apr 2002 B1
6380931 Gillespie et al. Apr 2002 B1
6381341 Rhoads Apr 2002 B1
6381362 Deshpande et al. Apr 2002 B1
6381535 Durocher et al. Apr 2002 B1
6381575 Martin et al. Apr 2002 B1
6381602 Shoroff et al. Apr 2002 B1
6381603 Chan et al. Apr 2002 B1
6381677 Beardsley et al. Apr 2002 B1
6381747 Wonfor et al. Apr 2002 B1
6382897 Mattio et al. May 2002 B2
6384744 Philyaw et al. May 2002 B1
6384776 Martin May 2002 B1
6384819 Hunter May 2002 B1
6384829 Prevost et al. May 2002 B1
6385592 Angles et al. May 2002 B1
6385653 Sitaraman et al. May 2002 B1
6388579 Adcox et al. May 2002 B1
6388714 Schein et al. May 2002 B1
6389029 McAlear May 2002 B1
6389340 Rayner May 2002 B1
6389464 Krishnamurthy et al. May 2002 B1
6389483 Larsson May 2002 B1
6390922 Vange et al. May 2002 B1
6392591 Hsu et al. May 2002 B1
6392692 Monroe May 2002 B1
6393443 Rubin et al. May 2002 B1
6393574 Kashiwagi et al. May 2002 B1
6394899 Walker May 2002 B1
6396523 Segal et al. May 2002 B1
6396544 Schindler et al. May 2002 B1
6396546 Alten et al. May 2002 B1
6396951 Grefenstette May 2002 B1
6397080 Viktorsson et al. May 2002 B1
6397259 Lincke et al. May 2002 B1
6400304 Chubbs, III Jun 2002 B1
6400314 Krasner Jun 2002 B1
6400690 Liu et al. Jun 2002 B1
6400845 Volino Jun 2002 B1
6400953 Furukawa Jun 2002 B1
6400958 Isomursu et al. Jun 2002 B1
6400990 Silvian Jun 2002 B1
6400996 Hoffberg et al. Jun 2002 B1
6401027 Xu et al. Jun 2002 B1
6401029 Kubota et al. Jun 2002 B1
6401085 Gershman et al. Jun 2002 B1
6404352 Ichikawa et al. Jun 2002 B1
6404438 Hatlelid et al. Jun 2002 B1
6405033 Kennedy, III et al. Jun 2002 B1
6405132 Breed et al. Jun 2002 B1
6405252 Gupta et al. Jun 2002 B1
6408174 Steijer Jun 2002 B1
6408257 Harrington et al. Jun 2002 B1
6408437 Hendricks et al. Jun 2002 B1
6409401 Petteruti et al. Jun 2002 B1
6411254 Moeglein et al. Jun 2002 B1
6411696 Iverson et al. Jun 2002 B1
6411744 Edwards Jun 2002 B1
6411936 Sanders Jun 2002 B1
6412110 Schein et al. Jun 2002 B1
6414671 Gillespie et al. Jul 2002 B1
6414750 Jung et al. Jul 2002 B2
6414955 Clare et al. Jul 2002 B1
6415188 Fernandez et al. Jul 2002 B1
6415210 Hozuka et al. Jul 2002 B2
6417782 Darnall Jul 2002 B1
6417797 Cousins et al. Jul 2002 B1
6418324 Doviak et al. Jul 2002 B1
6418380 Pica Jul 2002 B1
6418424 Hoffberg et al. Jul 2002 B1
6418433 Chakrabarti et al. Jul 2002 B1
6421002 Krasner Jul 2002 B2
6421429 Merritt et al. Jul 2002 B1
6421453 Kanevsky et al. Jul 2002 B1
6421606 Asai et al. Jul 2002 B1
6421608 Motoyama et al. Jul 2002 B1
6421675 Ryan et al. Jul 2002 B1
6421726 Kenner et al. Jul 2002 B1
6421738 Ratan et al. Jul 2002 B1
6424912 Correia et al. Jul 2002 B1
6424979 Livingston et al. Jul 2002 B1
6425004 Hardjono Jul 2002 B1
6425828 Walker et al. Jul 2002 B2
6427032 Irons et al. Jul 2002 B1
6427132 Bowman-Amuah Jul 2002 B1
6429789 Kiridena et al. Aug 2002 B1
6429812 Hoffberg Aug 2002 B1
6429899 Nio et al. Aug 2002 B1
6430164 Jones et al. Aug 2002 B1
6430358 Yuen et al. Aug 2002 B1
6430359 Yuen et al. Aug 2002 B1
6430488 Goldman et al. Aug 2002 B1
6430504 Gilbert et al. Aug 2002 B1
6430539 Lazarus et al. Aug 2002 B1
6430554 Rothschild Aug 2002 B1
6430567 Burridge Aug 2002 B2
6430997 French et al. Aug 2002 B1
6433734 Krasner Aug 2002 B1
6433784 Merrick et al. Aug 2002 B1
6434400 Villevieille et al. Aug 2002 B1
6434524 Weber Aug 2002 B1
6434561 Durst, Jr. et al. Aug 2002 B1
6434568 Bowman-Amuah Aug 2002 B1
6434581 Forcier Aug 2002 B1
6434614 Blumenau Aug 2002 B1
6434621 Pezzillo et al. Aug 2002 B1
6436049 Kamiyama et al. Aug 2002 B1
6437692 Petite et al. Aug 2002 B1
6437836 Huang et al. Aug 2002 B1
6438523 Oberteuffer et al. Aug 2002 B1
6438579 Hosken Aug 2002 B1
6441832 Tao et al. Aug 2002 B1
6442169 Lewis Aug 2002 B1
6442332 Knudson et al. Aug 2002 B1
6442391 Johansson et al. Aug 2002 B1
6442485 Evans Aug 2002 B2
6442571 Haff et al. Aug 2002 B1
6442748 Bowman-Amuah Aug 2002 B1
6443843 Walker et al. Sep 2002 B1
6445308 Koike Sep 2002 B1
6445398 Gerba et al. Sep 2002 B1
6446065 Nishioka et al. Sep 2002 B1
6446076 Burkey et al. Sep 2002 B1
6446130 Grapes Sep 2002 B1
6446261 Rosser Sep 2002 B1
6448979 Schena et al. Sep 2002 B1
6449041 Jung et al. Sep 2002 B1
6449473 Raivisto Sep 2002 B1
6449476 Hutchison, IV et al. Sep 2002 B1
6449540 Rayner Sep 2002 B1
6449616 Walker et al. Sep 2002 B1
6449639 Blumberg Sep 2002 B1
6449688 Peters et al. Sep 2002 B1
6450407 Freeman et al. Sep 2002 B1
6452484 Drori Sep 2002 B1
6452535 Rao et al. Sep 2002 B1
6452910 Vij et al. Sep 2002 B1
6453471 Klosterman Sep 2002 B1
6454626 An Sep 2002 B1
6456234 Johnson Sep 2002 B1
6456852 Bar et al. Sep 2002 B2
6457010 Eldering et al. Sep 2002 B1
6457025 Judson Sep 2002 B2
6459425 Holub et al. Oct 2002 B1
6459823 Altunbasak et al. Oct 2002 B2
6460036 Herz Oct 2002 B1
6460181 Donnelly Oct 2002 B1
6463272 Wallace et al. Oct 2002 B1
6463462 Smith et al. Oct 2002 B1
6463585 Hendricks et al. Oct 2002 B1
6466198 Feinstein Oct 2002 B1
6466260 Hatae et al. Oct 2002 B1
6466336 Sturgeon et al. Oct 2002 B1
6466548 Fitzgerald Oct 2002 B1
6466654 Cooper et al. Oct 2002 B1
6466734 Yuen et al. Oct 2002 B2
6466796 Jacobson et al. Oct 2002 B1
6468155 Zucker et al. Oct 2002 B1
6469639 Tanenhaus et al. Oct 2002 B2
6469753 Klosterman et al. Oct 2002 B1
6470138 Um et al. Oct 2002 B1
6470263 Ito et al. Oct 2002 B2
6470381 De Boor et al. Oct 2002 B2
6472982 Eida et al. Oct 2002 B2
6473559 Knudson et al. Oct 2002 B1
6473609 Schwartz et al. Oct 2002 B1
6473688 Kohno et al. Oct 2002 B2
6473794 Guheen et al. Oct 2002 B1
6476830 Farmer et al. Nov 2002 B1
6476834 Doval et al. Nov 2002 B1
6477143 Ginossar Nov 2002 B1
6477150 Maggenti et al. Nov 2002 B1
6477239 Ohki et al. Nov 2002 B1
6477579 Kunkel et al. Nov 2002 B1
6477705 Yuen et al. Nov 2002 B1
6480144 Miller et al. Nov 2002 B1
6480699 Lovoi Nov 2002 B1
6480889 Saito et al. Nov 2002 B1
6480900 Habert Nov 2002 B1
6483094 Yahav et al. Nov 2002 B1
6483513 Haratsch et al. Nov 2002 B1
6484080 Breed Nov 2002 B2
6484148 Boyd Nov 2002 B1
6484149 Jammes et al. Nov 2002 B1
6484156 Gupta et al. Nov 2002 B1
6486874 Muthuswamy et al. Nov 2002 B1
6486892 Stern Nov 2002 B1
6487539 Aggarwal et al. Nov 2002 B1
6489955 Newhall, Jr. Dec 2002 B1
6489970 Pazel Dec 2002 B1
6490525 Baron, Sr. et al. Dec 2002 B2
6490553 Van Thong et al. Dec 2002 B2
6490698 Horvitz et al. Dec 2002 B1
6491217 Catan Dec 2002 B2
6493338 Preston et al. Dec 2002 B1
6493633 Baron, Sr. et al. Dec 2002 B2
6493637 Steeg Dec 2002 B1
6493707 Dey et al. Dec 2002 B1
6493875 Eames et al. Dec 2002 B1
6496107 Himmelstein Dec 2002 B1
6496117 Gutta et al. Dec 2002 B2
6496575 Vasell et al. Dec 2002 B1
6496598 Harman Dec 2002 B1
6496689 Keller et al. Dec 2002 B1
6496778 Lin Dec 2002 B1
6496826 Chowdhury et al. Dec 2002 B1
6498895 Young et al. Dec 2002 B2
6498970 Colmenarez et al. Dec 2002 B2
6498972 Rao et al. Dec 2002 B1
6498987 Kelly et al. Dec 2002 B1
6498989 Pisetski et al. Dec 2002 B1
6499027 Weinberger Dec 2002 B1
6502033 Phuyal Dec 2002 B1
6502125 Kenner et al. Dec 2002 B1
6503195 Keller et al. Jan 2003 B1
6504138 Mangerson Jan 2003 B1
6504491 Christians Jan 2003 B1
6504631 Barry et al. Jan 2003 B1
6505046 Baker Jan 2003 B1
6505086 Dodd, Jr. et al. Jan 2003 B1
6505100 Stuempfle et al. Jan 2003 B1
6505101 Brill Jan 2003 B1
6505123 Root et al. Jan 2003 B1
6505348 Knowles et al. Jan 2003 B1
6507349 Balassanian Jan 2003 B1
6507810 Razavi et al. Jan 2003 B2
6508706 Sitrick et al. Jan 2003 B2
6509707 Yamashita et al. Jan 2003 B2
6509908 Croy et al. Jan 2003 B1
6509912 Moran et al. Jan 2003 B1
6510387 Fuchs et al. Jan 2003 B2
6510417 Woods et al. Jan 2003 B1
6510458 Berstis et al. Jan 2003 B1
6512922 Burg et al. Jan 2003 B1
6512930 Sandegren Jan 2003 B2
6513160 Dureau Jan 2003 B2
6515595 Obradovich et al. Feb 2003 B1
6515623 Johnson Feb 2003 B2
6516311 Yacoby et al. Feb 2003 B1
6516338 Landsman et al. Feb 2003 B1
6516467 Schindler et al. Feb 2003 B1
6516664 Lynam Feb 2003 B2
6518950 Dougherty et al. Feb 2003 B1
6519037 Jung et al. Feb 2003 B2
6519466 Pande et al. Feb 2003 B2
6519571 Guheen et al. Feb 2003 B1
6519646 Gupta et al. Feb 2003 B1
6520407 Nieswand et al. Feb 2003 B1
6522333 Hatlelid et al. Feb 2003 B1
6522682 Kohli et al. Feb 2003 B1
6522875 Dowling et al. Feb 2003 B1
6522977 Corrigan et al. Feb 2003 B2
6523172 Martinez-Guerra et al. Feb 2003 B1
6525687 Roy et al. Feb 2003 B2
6525688 Chou et al. Feb 2003 B2
6525749 Moran et al. Feb 2003 B1
6526041 Shaffer et al. Feb 2003 B1
6526268 Marrah et al. Feb 2003 B1
6526335 Treyz et al. Feb 2003 B1
6526349 Bullock et al. Feb 2003 B2
6526352 Breed et al. Feb 2003 B1
6526395 Morris Feb 2003 B1
6526411 Ward Feb 2003 B1
6526423 Zawadzki et al. Feb 2003 B2
6526449 Philyaw et al. Feb 2003 B1
6526577 Knudson et al. Feb 2003 B1
6526581 Edson Feb 2003 B1
6529153 Dijkstra Mar 2003 B1
6529159 Fan et al. Mar 2003 B1
6529829 Turetzky et al. Mar 2003 B2
6529909 Bowman-Amuah Mar 2003 B1
6529940 Humble Mar 2003 B1
6530083 Liebenow Mar 2003 B1
6530840 Cuomo et al. Mar 2003 B1
6531982 White et al. Mar 2003 B1
6532007 Matsuda Mar 2003 B1
6532448 Higginson et al. Mar 2003 B1
6532469 Feldman et al. Mar 2003 B1
6532494 Frank et al. Mar 2003 B1
6535743 Kennedy, III et al. Mar 2003 B1
6536037 Guheen et al. Mar 2003 B1
6537324 Tabata et al. Mar 2003 B1
6538187 Beigi Mar 2003 B2
6538701 Yuen Mar 2003 B1
6538757 Sansone Mar 2003 B1
6539200 Schiff Mar 2003 B1
6539232 Hendrey et al. Mar 2003 B2
6539304 Chansarkar Mar 2003 B1
6539336 Vock et al. Mar 2003 B1
6539375 Kawasaki Mar 2003 B2
6539544 Ebisawa Mar 2003 B2
6539931 Trajkovic et al. Apr 2003 B2
6540141 Dougherty et al. Apr 2003 B1
6542076 Joao Apr 2003 B1
6542077 Joao Apr 2003 B2
6542464 Takeda et al. Apr 2003 B1
6542734 Abrol et al. Apr 2003 B1
6542743 Soliman Apr 2003 B1
6542748 Hendrey et al. Apr 2003 B2
6542749 Tanaka et al. Apr 2003 B2
6542750 Hendrey et al. Apr 2003 B2
6542758 Chennakeshu et al. Apr 2003 B1
6542793 Kojima et al. Apr 2003 B2
6542794 Obradovich Apr 2003 B2
6542925 Brown et al. Apr 2003 B2
6542933 Durst, Jr. et al. Apr 2003 B1
6543052 Ogasawara Apr 2003 B1
6545578 Yoshiyama Apr 2003 B2
6545601 Monroe Apr 2003 B1
6545669 Kinawi et al. Apr 2003 B1
6545722 Schultheiss et al. Apr 2003 B1
6546385 Mao et al. Apr 2003 B1
6546399 Reed et al. Apr 2003 B1
6546405 Gupta et al. Apr 2003 B2
6546419 Humpleman et al. Apr 2003 B1
6549130 Joao Apr 2003 B1
6549145 Hsu et al. Apr 2003 B2
6549612 Gifford et al. Apr 2003 B2
6549719 Mankovitz Apr 2003 B2
6549751 Mandri Apr 2003 B1
6549776 Joong Apr 2003 B1
6549844 Egberts Apr 2003 B1
6549891 Rauber et al. Apr 2003 B1
6550012 Villa et al. Apr 2003 B1
6550057 Bowman-Amuah Apr 2003 B1
6552682 Fan Apr 2003 B1
6553129 Rhoads Apr 2003 B1
6553178 Abecassis Apr 2003 B2
6553436 Ando et al. Apr 2003 B2
6554433 Holler Apr 2003 B1
6556824 Purnadi et al. Apr 2003 B1
6556832 Soliman Apr 2003 B1
6556950 Schwenke et al. Apr 2003 B1
6557031 Mimura et al. Apr 2003 B1
6559773 Berry May 2003 B1
6560281 Black et al. May 2003 B1
6560461 Fomukong et al. May 2003 B1
6560534 Abraham et al. May 2003 B2
6560578 Eldering May 2003 B2
6563418 Moon May 2003 B1
6563505 Mills et al. May 2003 B1
6563523 Suchocki et al. May 2003 B1
6563796 Saito May 2003 B1
6564144 Cherveny May 2003 B1
6564217 Bunney et al. May 2003 B2
6564379 Knudson et al. May 2003 B1
6564383 Combs et al. May 2003 B1
6567035 Elliott May 2003 B1
6567533 Rhoads May 2003 B1
6567536 McNitt et al. May 2003 B2
6567606 Milnes et al. May 2003 B2
6568754 Norton et al. May 2003 B1
6570530 Gaal et al. May 2003 B2
6570555 Prevost et al. May 2003 B1
6571193 Unuma et al. May 2003 B1
6571235 Marpe et al. May 2003 B1
6571279 Herz et al. May 2003 B1
6572662 Manohar et al. Jun 2003 B2
6573831 Ikeda et al. Jun 2003 B2
6573883 Bartlett Jun 2003 B1
6574538 Sasaki Jun 2003 B2
6574548 DeKock et al. Jun 2003 B2
6574558 Kohli Jun 2003 B2
6574617 Immerman et al. Jun 2003 B1
6577329 Flickner et al. Jun 2003 B1
6577716 Minter et al. Jun 2003 B1
6577953 Swope et al. Jun 2003 B1
6580373 Ohashi Jun 2003 B1
6580390 Hay Jun 2003 B1
6580808 Rhoads Jun 2003 B2
6580904 Cox et al. Jun 2003 B2
6580979 Payton et al. Jun 2003 B2
6583866 Jung et al. Jun 2003 B2
6584382 Karem Jun 2003 B2
6584403 Bunn Jun 2003 B2
6584552 Kuno et al. Jun 2003 B1
6586968 Schauer et al. Jul 2003 B1
6587046 Joao Jul 2003 B2
6587127 Leeke et al. Jul 2003 B1
6587835 Treyz et al. Jul 2003 B1
6588013 Lumley et al. Jul 2003 B1
6590507 Burns Jul 2003 B2
6590529 Schwoegler Jul 2003 B2
6590588 Lincke et al. Jul 2003 B2
6590602 Fernandez et al. Jul 2003 B1
6590660 Jung et al. Jul 2003 B2
6591304 Sitaraman et al. Jul 2003 B1
6593723 Johnson Jul 2003 B1
6594500 Bender et al. Jul 2003 B2
6594616 Zhang et al. Jul 2003 B2
6594688 Ludwig et al. Jul 2003 B2
6594705 Philyaw Jul 2003 B1
6595859 Lynn Jul 2003 B2
6597311 Sheynblat et al. Jul 2003 B2
6597443 Boman Jul 2003 B2
6597812 Fallon et al. Jul 2003 B1
6597903 Dahm et al. Jul 2003 B1
6599130 Moehrle Jul 2003 B2
6600417 Lerg et al. Jul 2003 B2
6600475 Gutta et al. Jul 2003 B2
6600734 Gernert Jul 2003 B1
6600914 Uhlik et al. Jul 2003 B2
6601012 Horvitz et al. Jul 2003 B1
6601057 Underwood et al. Jul 2003 B1
6603405 Smith Aug 2003 B2
6603488 Humpleman et al. Aug 2003 B2
6603973 Foladare et al. Aug 2003 B1
6606495 Korpi et al. Aug 2003 B1
6606554 Edge Aug 2003 B2
6606744 Mikurak Aug 2003 B1
6606746 Zdepski et al. Aug 2003 B1
6609004 Morse et al. Aug 2003 B1
6610936 Gillespie et al. Aug 2003 B2
6611201 Bishop et al. Aug 2003 B1
6611598 Hayosh Aug 2003 B1
6611654 Shteyn Aug 2003 B1
6611755 Coffee et al. Aug 2003 B1
6611757 Brodie Aug 2003 B2
6611813 Bratton Aug 2003 B1
6611867 Bowman-Amuah Aug 2003 B1
6611957 Ebisawa Aug 2003 B2
D479228 Sakaguchi et al. Sep 2003 S
6612932 Stern Sep 2003 B2
6614349 Proctor et al. Sep 2003 B1
6614385 Kuhn et al. Sep 2003 B2
6615039 Eldering Sep 2003 B1
6615088 Myer et al. Sep 2003 B1
6615099 Muller et al. Sep 2003 B1
6615134 Ando Sep 2003 B2
6615136 Swope et al. Sep 2003 B1
6615137 Lutter et al. Sep 2003 B2
6615166 Guheen et al. Sep 2003 B1
6615208 Behrens et al. Sep 2003 B1
6615268 Philyaw et al. Sep 2003 B1
6616038 Olschafskie et al. Sep 2003 B1
6616047 Catan Sep 2003 B2
6616071 Kitamura et al. Sep 2003 B2
6616533 Rashkovskiy Sep 2003 B1
6617369 Parfondry et al. Sep 2003 B2
6618504 Yoshino Sep 2003 B1
6618593 Drutman et al. Sep 2003 B1
6618670 Chansarkar Sep 2003 B1
6618727 Wheeler et al. Sep 2003 B1
6618732 White et al. Sep 2003 B1
6621452 Knockeart et al. Sep 2003 B2
6622083 Knockeart et al. Sep 2003 B1
6622165 Philyaw Sep 2003 B1
6622304 Carhart Sep 2003 B1
6624833 Kumar et al. Sep 2003 B1
6624881 Waibel et al. Sep 2003 B2
6625335 Kanai Sep 2003 B1
6625578 Spaur et al. Sep 2003 B2
6625581 Perkowski Sep 2003 B1
6628227 Rao et al. Sep 2003 B1
6628233 Knockeart et al. Sep 2003 B2
6628295 Wilensky Sep 2003 B2
6628304 Mitchell et al. Sep 2003 B2
6628928 Crosby et al. Sep 2003 B1
6629033 Preston et al. Sep 2003 B2
6629133 Philyaw et al. Sep 2003 B1
6630884 Shanmugham Oct 2003 B1
6630924 Peck Oct 2003 B1
6631404 Philyaw Oct 2003 B1
6632138 Serizawa et al. Oct 2003 B1
6633238 Lemelson et al. Oct 2003 B2
6633255 Krasner Oct 2003 B2
6633294 Rosenthal et al. Oct 2003 B1
6636763 Junker et al. Oct 2003 B1
6636892 Philyaw Oct 2003 B1
6636896 Philyaw Oct 2003 B1
6638314 Meyerzon et al. Oct 2003 B1
6638317 Nakao Oct 2003 B2
6640097 Corrigan et al. Oct 2003 B2
6640145 Hoffberg et al. Oct 2003 B2
6640184 Rabe Oct 2003 B1
6640202 Dietz et al. Oct 2003 B1
6640335 Ebisawa Oct 2003 B2
6640336 Ebisawa Oct 2003 B1
6641037 Williams Nov 2003 B2
6641087 Nelson Nov 2003 B1
6643652 Helgeson et al. Nov 2003 B2
6643661 Polizzi et al. Nov 2003 B2
6643692 Philyaw et al. Nov 2003 B1
6643696 Davis et al. Nov 2003 B2
6645068 Kelly et al. Nov 2003 B1
6646559 Smith Nov 2003 B2
6647128 Rhoads Nov 2003 B1
6647130 Rhoads Nov 2003 B2
6647257 Owensby Nov 2003 B2
6647269 Hendrey et al. Nov 2003 B2
6647270 Himmelstein Nov 2003 B1
6647328 Walker Nov 2003 B2
6647371 Shinohara Nov 2003 B2
6650288 Pitt et al. Nov 2003 B1
6650761 Rodriguez et al. Nov 2003 B1
6650983 Rao et al. Nov 2003 B1
6650984 Rao et al. Nov 2003 B1
6651053 Rothschild Nov 2003 B1
6654689 Kelly et al. Nov 2003 B1
6654725 Langheinrich et al. Nov 2003 B1
6656050 Busch et al. Dec 2003 B2
6658151 Lee et al. Dec 2003 B2
6659861 Faris et al. Dec 2003 B1
6661372 Girerd et al. Dec 2003 B1
6661468 Alten et al. Dec 2003 B2
6661773 Pelissier et al. Dec 2003 B1
6661918 Gordon et al. Dec 2003 B1
6661919 Nicholson et al. Dec 2003 B2
6662091 Wilson et al. Dec 2003 B2
6662106 Evans Dec 2003 B2
6662195 Langseth et al. Dec 2003 B1
6662642 Breed et al. Dec 2003 B2
6663105 Sullivan et al. Dec 2003 B1
6664969 Emerson et al. Dec 2003 B1
6664978 Kekic et al. Dec 2003 B1
6664991 Chew et al. Dec 2003 B1
6665539 Sih et al. Dec 2003 B2
6665541 Krasner et al. Dec 2003 B1
6665640 Bennett et al. Dec 2003 B1
6665659 Logan Dec 2003 B1
6665706 Kenner et al. Dec 2003 B2
6668133 Yuen et al. Dec 2003 B2
6669088 Veeneman Dec 2003 B2
6669562 Shiino Dec 2003 B1
6669564 Young et al. Dec 2003 B1
6670905 Orr Dec 2003 B1
6670912 Honda Dec 2003 B2
6670971 Oral Dec 2003 B1
6671620 Garin et al. Dec 2003 B1
6671684 Hull et al. Dec 2003 B1
6671818 Mikurak Dec 2003 B1
6673019 Kamiyama Jan 2004 B2
6674877 Jojic et al. Jan 2004 B1
6675081 Shuman et al. Jan 2004 B2
6675204 De Boor et al. Jan 2004 B2
6675385 Wang Jan 2004 B1
6675386 Hendricks et al. Jan 2004 B1
6677894 Sheynblat et al. Jan 2004 B2
6677969 Hongo Jan 2004 B1
6678004 Schultheiss et al. Jan 2004 B1
6678075 Tsai et al. Jan 2004 B1
6678250 Grabelsky et al. Jan 2004 B1
6678516 Nordman et al. Jan 2004 B2
6678612 Khawam Jan 2004 B1
6678664 Ganesan Jan 2004 B1
6678687 Watanabe et al. Jan 2004 B2
6680674 Park Jan 2004 B1
6680694 Knockeart et al. Jan 2004 B1
6680695 Turetzky et al. Jan 2004 B2
6680746 Kawai et al. Jan 2004 B2
6681029 Rhoads Jan 2004 B1
6681031 Cohen et al. Jan 2004 B2
6681114 Chang et al. Jan 2004 B2
6681121 Preston et al. Jan 2004 B1
6683941 Brown et al. Jan 2004 B2
6684137 Takagi et al. Jan 2004 B2
6684194 Eldering et al. Jan 2004 B1
6684250 Anderson et al. Jan 2004 B2
6686844 Watanabe et al. Feb 2004 B2
6687504 Raith Feb 2004 B1
6687608 Sugimoto et al. Feb 2004 B2
6687612 Cherveny Feb 2004 B2
6687696 Hofmann et al. Feb 2004 B2
6687745 Franco et al. Feb 2004 B1
6687906 Yuen et al. Feb 2004 B1
6688081 Boyd Feb 2004 B2
6688522 Philyaw et al. Feb 2004 B1
6688523 Koenck Feb 2004 B1
6688525 Nelson et al. Feb 2004 B1
6690017 Remillard et al. Feb 2004 B2
6690294 Zierden Feb 2004 B1
6690358 Kaplan Feb 2004 B2
6690380 Hussain et al. Feb 2004 B1
6690681 Preston et al. Feb 2004 B1
6690918 Evans et al. Feb 2004 B2
6691019 Seeley et al. Feb 2004 B2
6691107 Dockter et al. Feb 2004 B1
6691109 Bjornson et al. Feb 2004 B2
6691123 Gulliksen Feb 2004 B1
6691151 Cheyer et al. Feb 2004 B1
6691194 Ofer Feb 2004 B1
6691914 Isherwood et al. Feb 2004 B2
6692259 Kumar et al. Feb 2004 B2
6694258 Johnson et al. Feb 2004 B2
6694316 Langseth et al. Feb 2004 B1
6694356 Philyaw Feb 2004 B1
6697103 Fernandez et al. Feb 2004 B1
6697629 Grilli et al. Feb 2004 B1
6697730 Dickerson Feb 2004 B2
6697792 Bunney et al. Feb 2004 B2
6697824 Bowman-Amuah Feb 2004 B1
6697838 Jakobson Feb 2004 B1
6697924 Swank Feb 2004 B2
6697949 Philyaw et al. Feb 2004 B1
6698020 Zigmond et al. Feb 2004 B1
6699127 Lobb et al. Mar 2004 B1
6700482 Ververs et al. Mar 2004 B2
6700990 Rhoads Mar 2004 B1
6701144 Kirbas et al. Mar 2004 B2
6701311 Biebesheimer et al. Mar 2004 B2
6701315 Austin Mar 2004 B1
6701354 Philyaw et al. Mar 2004 B1
6701363 Chiu et al. Mar 2004 B1
6701369 Philyaw Mar 2004 B1
6701523 Hancock et al. Mar 2004 B1
6703971 Pande et al. Mar 2004 B2
6703972 van Diggelen Mar 2004 B2
6704024 Robotham et al. Mar 2004 B2
6704028 Wugofski Mar 2004 B2
6704651 van Diggelen Mar 2004 B2
6704699 Nir Mar 2004 B2
6704930 Eldering et al. Mar 2004 B1
6707421 Drury et al. Mar 2004 B1
6707581 Browning Mar 2004 B1
6708100 Russell et al. Mar 2004 B2
6708203 Makar et al. Mar 2004 B1
6708208 Philyaw Mar 2004 B1
6709335 Bates et al. Mar 2004 B2
6711474 Treyz et al. Mar 2004 B1
6711475 Murphy Mar 2004 B2
6711660 Milne et al. Mar 2004 B1
6712702 Goldberg et al. Mar 2004 B2
6714139 Saito et al. Mar 2004 B2
6714236 Wada et al. Mar 2004 B1
6714665 Hanna et al. Mar 2004 B1
6714677 Stearns et al. Mar 2004 B1
6714723 Abecassis Mar 2004 B2
6714793 Carey et al. Mar 2004 B1
6714917 Eldering et al. Mar 2004 B1
6714969 Klein et al. Mar 2004 B1
6715077 Vasudevan et al. Mar 2004 B1
6716103 Eck et al. Apr 2004 B1
6718174 Vayanos Apr 2004 B2
6718239 Rayner Apr 2004 B2
6718263 Glass et al. Apr 2004 B1
6718308 Nolting Apr 2004 B1
6718551 Swix et al. Apr 2004 B1
6720915 Sheynblat Apr 2004 B2
6720920 Breed et al. Apr 2004 B2
6720984 Jorgensen et al. Apr 2004 B1
6721286 Williams et al. Apr 2004 B1
6721578 Minear et al. Apr 2004 B2
6721713 Guheen et al. Apr 2004 B1
6721747 Lipkin Apr 2004 B2
6721748 Knight et al. Apr 2004 B1
6721871 Piispanen et al. Apr 2004 B2
6721921 Altman Apr 2004 B1
6721954 Nickum Apr 2004 B1
6724342 Bloebaum et al. Apr 2004 B2
6725031 Watler et al. Apr 2004 B2
6725125 Basson et al. Apr 2004 B2
6725139 Miller et al. Apr 2004 B2
6725159 Krasner Apr 2004 B2
6725203 Seet et al. Apr 2004 B1
6725260 Philyaw Apr 2004 B1
6725421 Boucher et al. Apr 2004 B1
6727914 Gutta Apr 2004 B1
6728000 Lapstun et al. Apr 2004 B1
6728323 Chen et al. Apr 2004 B1
6728514 Bandeira et al. Apr 2004 B2
6728528 Loke Apr 2004 B1
6728617 Rao et al. Apr 2004 B2
6730913 Remillard et al. May 2004 B2
6731238 Johnson May 2004 B2
6731799 Sun et al. May 2004 B1
6731940 Nagendran May 2004 B1
6732369 Schein et al. May 2004 B1
6732372 Tomita et al. May 2004 B2
6734799 Munch May 2004 B2
6734821 van Diggelen May 2004 B2
6735506 Breed et al. May 2004 B2
6735601 Subrahmanyam May 2004 B1
6735630 Gelvin et al. May 2004 B1
6735632 Kiraly et al. May 2004 B1
6738013 Orler et al. May 2004 B2
6738066 Nguyen May 2004 B1
6738078 Duncombe May 2004 B1
6738519 Nishiwaki May 2004 B1
6738697 Breed May 2004 B2
6738800 Aquilon et al. May 2004 B1
6738814 Cox et al. May 2004 B1
6738978 Hendricks et al. May 2004 B1
6741188 Miller et al. May 2004 B1
6741745 Dance et al. May 2004 B2
6741842 Goldberg et al. May 2004 B2
6741871 Silverbrook et al. May 2004 B1
6741933 Glass May 2004 B1
6741980 Langseth et al. May 2004 B1
6742026 Kraenzel et al. May 2004 B1
6742183 Reynolds et al. May 2004 B1
6744938 Rantze et al. Jun 2004 B1
6744967 Kaminski et al. Jun 2004 B2
6745011 Hendrickson et al. Jun 2004 B1
6745021 Stevens Jun 2004 B1
6745038 Callaway, Jr. et al. Jun 2004 B2
6745183 Nishioka et al. Jun 2004 B2
6745234 Philyaw et al. Jun 2004 B1
6745391 Macrae et al. Jun 2004 B1
6745937 Walsh et al. Jun 2004 B2
6747596 Orler et al. Jun 2004 B2
6747632 Howard Jun 2004 B2
6748195 Phillips Jun 2004 B1
6748306 Lipowicz Jun 2004 B2
6748318 Jones Jun 2004 B1
6750852 Gillespie et al. Jun 2004 B2
6751452 Kupczyk et al. Jun 2004 B1
6751464 Burg et al. Jun 2004 B1
6751574 Shinohara Jun 2004 B2
6752317 Dymetman et al. Jun 2004 B2
6752498 Covannon et al. Jun 2004 B2
6753883 Schena et al. Jun 2004 B2
6754485 Obradovich et al. Jun 2004 B1
6754585 Root et al. Jun 2004 B2
6754632 Kalinowski et al. Jun 2004 B1
6754698 Philyaw et al. Jun 2004 B1
6754710 McAlear Jun 2004 B1
6754904 Cooper et al. Jun 2004 B1
6756938 Zhao et al. Jun 2004 B2
6756997 Ward, III et al. Jun 2004 B1
6757362 Cooper et al. Jun 2004 B1
6757544 Rangarajan et al. Jun 2004 B2
6757574 Gardner et al. Jun 2004 B2
6757611 Rao et al. Jun 2004 B1
6757661 Blaser et al. Jun 2004 B1
6757715 Philyaw Jun 2004 B1
6757740 Parekh et al. Jun 2004 B1
6757783 Koh Jun 2004 B2
6758398 Philyaw et al. Jul 2004 B1
6758746 Hunter et al. Jul 2004 B1
6758754 Lavanchy et al. Jul 2004 B1
6758755 Kelly et al. Jul 2004 B2
6759970 Horita et al. Jul 2004 B1
6760463 Rhoads Jul 2004 B2
6760537 Mankovitz Jul 2004 B2
6760661 Klein et al. Jul 2004 B2
6763040 Hite et al. Jul 2004 B1
6763386 Davis et al. Jul 2004 B2
6764395 Guyett Jul 2004 B1
6764403 Gavin Jul 2004 B2
6765726 French et al. Jul 2004 B2
6766494 Price et al. Jul 2004 B1
6766956 Boylan, III et al. Jul 2004 B1
6768944 Breed et al. Jul 2004 B2
6771208 Lutter et al. Aug 2004 B2
6771283 Carro Aug 2004 B2
6771290 Hoyle Aug 2004 B1
6771629 Preston et al. Aug 2004 B1
6772047 Butikofer Aug 2004 B2
6772330 Merkin Aug 2004 B2
6772331 Hind et al. Aug 2004 B1
6772338 Hull Aug 2004 B1
6772340 Peinado et al. Aug 2004 B1
6772433 LaJoie et al. Aug 2004 B1
6773177 Denoue et al. Aug 2004 B2
6773344 Gabai et al. Aug 2004 B1
6774367 Stephan et al. Aug 2004 B2
6774846 Fullerton et al. Aug 2004 B2
6775392 Rhoads Aug 2004 B1
6775422 Altman Aug 2004 B1
6775605 Rao et al. Aug 2004 B2
6775655 Peinado et al. Aug 2004 B1
6775802 Gaal Aug 2004 B2
6778073 Lutter et al. Aug 2004 B2
6778136 Gronemeyer Aug 2004 B2
6778885 Agashe et al. Aug 2004 B2
6778924 Hanse Aug 2004 B2
6778988 Bengtson Aug 2004 B2
6779004 Zintel Aug 2004 B1
6781920 Bates et al. Aug 2004 B2
6781963 Crockett et al. Aug 2004 B2
6782315 Lu et al. Aug 2004 B2
6783071 Levine et al. Aug 2004 B2
6783460 Galyean, III et al. Aug 2004 B2
6785421 Gindele et al. Aug 2004 B1
6785551 Richard Aug 2004 B1
6785670 Chiang et al. Aug 2004 B1
6785688 Abajian et al. Aug 2004 B2
6785721 Immerman et al. Aug 2004 B1
6785902 Zigmond et al. Aug 2004 B1
6786793 Wang Sep 2004 B1
6788249 Farmer et al. Sep 2004 B1
6788315 Kekic et al. Sep 2004 B1
6788809 Grzeszczuk et al. Sep 2004 B1
6788815 Lui et al. Sep 2004 B2
6788882 Geer et al. Sep 2004 B1
6789073 Lunenfeld Sep 2004 B1
6789252 Burke et al. Sep 2004 B1
6791472 Hoffberg Sep 2004 B1
6791536 Keely et al. Sep 2004 B2
6791588 Philyaw Sep 2004 B1
6792112 Campbell et al. Sep 2004 B1
6792263 Kite Sep 2004 B1
6792351 Lutter Sep 2004 B2
6792452 Philyaw Sep 2004 B1
6792607 Burd et al. Sep 2004 B1
6795699 McCraw et al. Sep 2004 B1
6795966 Lim et al. Sep 2004 B1
6798429 Bradski Sep 2004 B2
6799050 Krasner Sep 2004 B1
6799221 Kenner et al. Sep 2004 B1
6799326 Boylan, III et al. Sep 2004 B2
6799327 Reynolds et al. Sep 2004 B1
6801124 Naitou Oct 2004 B2
6801159 Swope et al. Oct 2004 B2
6801637 Voronka et al. Oct 2004 B2
6801658 Morita et al. Oct 2004 B2
6801662 Owechko et al. Oct 2004 B1
6801843 Rao et al. Oct 2004 B2
6801907 Zagami Oct 2004 B1
6804396 Higaki et al. Oct 2004 B2
6804524 Vandermeijden Oct 2004 B1
6804659 Graham et al. Oct 2004 B1
6807534 Erickson Oct 2004 B1
6807558 Hassett et al. Oct 2004 B1
6809653 Mann et al. Oct 2004 B1
6810323 Bullock et al. Oct 2004 B1
6812860 Schwarzwalder, Jr. Nov 2004 B1
6812961 Parulski et al. Nov 2004 B1
6813039 Silverbrook et al. Nov 2004 B1
6813366 Rhoads Nov 2004 B1
6813501 Kinnunen et al. Nov 2004 B2
6813542 Peshkin et al. Nov 2004 B2
6813560 van Diggelen et al. Nov 2004 B2
6813777 Weinberger et al. Nov 2004 B1
6814663 Edwards et al. Nov 2004 B2
6816111 Krasner Nov 2004 B2
6816458 Kroon Nov 2004 B1
6816710 Krasner Nov 2004 B2
6816719 Heinonen et al. Nov 2004 B1
6816727 Cox et al. Nov 2004 B2
6816734 Wong et al. Nov 2004 B2
6816850 Culliss Nov 2004 B2
6816878 Zimmers et al. Nov 2004 B1
6816894 Philyaw et al. Nov 2004 B1
6816904 Ludwig et al. Nov 2004 B1
6819268 Wakamatsu et al. Nov 2004 B2
6819919 Tanaka Nov 2004 B1
6819991 Rao et al. Nov 2004 B2
6820237 Abu-Hakima et al. Nov 2004 B1
6820269 Baucke et al. Nov 2004 B2
6820277 Eldering et al. Nov 2004 B1
6822639 Silverbrook et al. Nov 2004 B1
6822661 Sai et al. Nov 2004 B2
6823075 Perry Nov 2004 B2
6823244 Breed Nov 2004 B2
6823388 Philyaw et al. Nov 2004 B1
6824044 Lapstun et al. Nov 2004 B1
6824057 Rathus et al. Nov 2004 B2
6825956 Silverbrook et al. Nov 2004 B2
6826592 Philyaw et al. Nov 2004 B1
6826607 Gelvin et al. Nov 2004 B1
6826775 Howe et al. Nov 2004 B1
6827259 Rathus et al. Dec 2004 B2
6827267 Rathus et al. Dec 2004 B2
6827645 Morita et al. Dec 2004 B2
6828993 Hendricks et al. Dec 2004 B1
6829333 Frazier Dec 2004 B1
6829437 Kirby Dec 2004 B2
6829475 Lee et al. Dec 2004 B1
6829606 Ripley Dec 2004 B2
6829650 Philyaw et al. Dec 2004 B1
6830187 Rathus et al. Dec 2004 B2
6830188 Rathus et al. Dec 2004 B2
6831637 Mack Dec 2004 B1
6832116 Tillgren et al. Dec 2004 B1
6832178 Fernandez et al. Dec 2004 B1
6832251 Gelvin et al. Dec 2004 B1
6832373 O'Neill Dec 2004 B2
6833785 Brown et al. Dec 2004 B2
6833936 Seymour Dec 2004 B1
6834195 Brandenberg et al. Dec 2004 B2
6834804 Rathus et al. Dec 2004 B2
6836799 Philyaw et al. Dec 2004 B1
6837436 Swartz et al. Jan 2005 B2
6839020 Geier et al. Jan 2005 B2
6839021 Sheynblat et al. Jan 2005 B2
6840861 Jordan et al. Jan 2005 B2
6842715 Gaal Jan 2005 B1
6842761 Diamond et al. Jan 2005 B2
6842774 Piccioni Jan 2005 B1
6845370 Burkey et al. Jan 2005 B2
6845383 Kraenzel et al. Jan 2005 B1
6845913 Madding et al. Jan 2005 B2
6847686 Morad et al. Jan 2005 B2
6847822 Dennison et al. Jan 2005 B1
6847872 Bodin et al. Jan 2005 B2
6850252 Hoffberg Feb 2005 B1
6850693 Young et al. Feb 2005 B2
6850893 Lipkin et al. Feb 2005 B2
6851063 Boyle et al. Feb 2005 B1
6853849 Tognazzini Feb 2005 B1
6853907 Peterson et al. Feb 2005 B2
6853913 Cherveny et al. Feb 2005 B2
6853916 Fuchs et al. Feb 2005 B2
6853982 Smith et al. Feb 2005 B2
6854016 Kraenzel et al. Feb 2005 B1
6854035 Dunham et al. Feb 2005 B2
6854642 Metcalf et al. Feb 2005 B2
6856282 Mauro et al. Feb 2005 B2
6857016 Motoyama et al. Feb 2005 B1
6859212 Kumar et al. Feb 2005 B2
6859525 McElvaney Feb 2005 B1
6859799 Yuen Feb 2005 B1
6859831 Gelvin et al. Feb 2005 B1
6861980 Rowitch et al. Mar 2005 B1
6862046 Ko Mar 2005 B2
6862553 Schwenke et al. Mar 2005 B2
6863612 Willis Mar 2005 B2
6865171 Nilsson Mar 2005 B1
6865284 Mahoney et al. Mar 2005 B2
6865395 Riley Mar 2005 B2
6865746 Herrington et al. Mar 2005 B1
6865825 Bailey, Sr. et al. Mar 2005 B2
6867734 Voor et al. Mar 2005 B2
6868193 Gharbia et al. Mar 2005 B1
6868331 Hanebrink Mar 2005 B2
6868389 Wilkins et al. Mar 2005 B1
6870616 Jung et al. Mar 2005 B2
6871139 Liu et al. Mar 2005 B2
6871146 Kelly et al. Mar 2005 B1
6871186 Tuzhilin et al. Mar 2005 B1
6871346 Kumbalimutt et al. Mar 2005 B1
6873723 Aucsmith et al. Mar 2005 B1
6873854 Crockett et al. Mar 2005 B2
6874683 Keronen et al. Apr 2005 B2
6876496 French et al. Apr 2005 B2
6876926 Kirkland et al. Apr 2005 B2
6877001 Wolf et al. Apr 2005 B2
6879701 Rhoads Apr 2005 B1
6879957 Pechter et al. Apr 2005 B1
6880005 Bell et al. Apr 2005 B1
6880113 Anderson et al. Apr 2005 B2
6880122 Lee et al. Apr 2005 B1
6880123 Landsman et al. Apr 2005 B1
6880124 Moore Apr 2005 B1
6882299 Allport Apr 2005 B1
6882718 Smith Apr 2005 B1
6882837 Fernandez et al. Apr 2005 B2
6882905 Hall et al. Apr 2005 B2
6882977 Miller Apr 2005 B1
6882978 Ebisawa Apr 2005 B2
6883747 Ratkovic et al. Apr 2005 B2
6885920 Yakes et al. Apr 2005 B2
6885940 Brodie et al. Apr 2005 B2
6886104 McClurg et al. Apr 2005 B1
6888497 King et al. May 2005 B2
6888932 Snip et al. May 2005 B2
6889896 Silverbrook et al. May 2005 B2
6890256 Walker et al. May 2005 B2
6891838 Petite et al. May 2005 B1
6891953 DeMello et al. May 2005 B1
6892264 Lamb May 2005 B2
6895170 Lambert et al. May 2005 B1
6895238 Newell et al. May 2005 B2
6895240 Laursen et al. May 2005 B2
6895249 Gaal May 2005 B2
6895324 Straub May 2005 B2
6898592 Peltonen et al. May 2005 B2
6898762 Ellis et al. May 2005 B2
6900758 Mann et al. May 2005 B1
6901057 Rune et al. May 2005 B2
6903684 Simic et al. Jun 2005 B1
6904029 Fors et al. Jun 2005 B2
6904160 Burgess Jun 2005 B2
6904171 van Zee Jun 2005 B2
6906619 Williams et al. Jun 2005 B2
6907118 Henderson et al. Jun 2005 B2
6907224 Younis Jun 2005 B2
6907238 Leung Jun 2005 B2
6911997 Okamoto et al. Jun 2005 B1
6912395 Benes et al. Jun 2005 B2
6912398 Domnitz Jun 2005 B1
6914891 Ha et al. Jul 2005 B2
6915126 Mazzara, Jr. Jul 2005 B2
6915208 Garin et al. Jul 2005 B2
6915955 Jung et al. Jul 2005 B2
6916096 Eberl et al. Jul 2005 B2
6917331 Gronemeyer Jul 2005 B2
6917722 Bloomfield Jul 2005 B1
6917724 Seder et al. Jul 2005 B2
6920129 Preston et al. Jul 2005 B2
6920431 Showghi et al. Jul 2005 B2
6920494 Heitman et al. Jul 2005 B2
6922664 Fernandez et al. Jul 2005 B1
6922725 Lamming et al. Jul 2005 B2
6925182 Epstein Aug 2005 B1
6925368 Funkhouser et al. Aug 2005 B2
6925567 Hirata Aug 2005 B1
6926374 Dudeck et al. Aug 2005 B2
6927806 Chan Aug 2005 B2
6928396 Thackston Aug 2005 B2
6928414 Kim Aug 2005 B1
6929543 Ueshima et al. Aug 2005 B1
6930634 Peng et al. Aug 2005 B2
6931254 Egner et al. Aug 2005 B1
6931454 Deshpande et al. Aug 2005 B2
6931592 Ramaley et al. Aug 2005 B1
6933837 Gunderson et al. Aug 2005 B2
6934578 Ramseth Aug 2005 B2
6934963 Reynolds et al. Aug 2005 B1
6934964 Schaffer et al. Aug 2005 B1
6937187 van Diggelen et al. Aug 2005 B2
6937732 Ohmura et al. Aug 2005 B2
6937742 Roberts et al. Aug 2005 B2
6937872 Krasner Aug 2005 B2
6938024 Horvitz Aug 2005 B1
6938079 Anderson et al. Aug 2005 B1
6938209 Ogawa et al. Aug 2005 B2
6939155 Postrel Sep 2005 B2
6940646 Taniguchi et al. Sep 2005 B2
6941144 Stein Sep 2005 B2
6941574 Broadwin et al. Sep 2005 B1
6942575 Mergler Sep 2005 B2
6943955 Kaschke et al. Sep 2005 B2
6944315 Zipperer et al. Sep 2005 B1
6944540 King et al. Sep 2005 B2
6947571 Rhoads et al. Sep 2005 B1
6947772 Minear et al. Sep 2005 B2
6947922 Glance Sep 2005 B1
6947930 Anick et al. Sep 2005 B2
6948183 Peterka Sep 2005 B1
6950058 Davis et al. Sep 2005 B1
6950502 Jenkins Sep 2005 B1
6950534 Cohen et al. Sep 2005 B2
6950638 Videtich et al. Sep 2005 B2
6950804 Strietzel Sep 2005 B2
6950818 Dennis et al. Sep 2005 B2
6950850 Leff et al. Sep 2005 B1
6950990 Rajarajan et al. Sep 2005 B2
6952155 Himmelstein Oct 2005 B2
6952281 Irons et al. Oct 2005 B1
6952698 Delaire et al. Oct 2005 B2
6954728 Kusumoto et al. Oct 2005 B1
6955605 Young et al. Oct 2005 B2
6956467 Mercado, Jr. Oct 2005 B1
6957073 Bye Oct 2005 B2
6957186 Guheen et al. Oct 2005 B1
6957384 Jeffery et al. Oct 2005 B2
6958984 Kotzin Oct 2005 B2
6961562 Ross Nov 2005 B2
6961660 Underbrink et al. Nov 2005 B2
6961731 Holbrook Nov 2005 B2
6963899 Fernandez et al. Nov 2005 B1
6964608 Koza Nov 2005 B1
6965754 King Nov 2005 B2
6965767 Maggenti et al. Nov 2005 B2
6965816 Walker Nov 2005 B2
6965912 Friedman et al. Nov 2005 B2
6967566 Weston et al. Nov 2005 B2
6968057 Rhoads Nov 2005 B2
6968567 Gordon et al. Nov 2005 B1
6968736 Lynam Nov 2005 B2
6969183 Okubo et al. Nov 2005 B2
6970102 Ciolli Nov 2005 B2
6970462 McRae Nov 2005 B1
6970834 Martin et al. Nov 2005 B2
6970849 DeMello et al. Nov 2005 B1
6970915 Partovi et al. Nov 2005 B1
6970917 Kushwaha et al. Nov 2005 B1
6972669 Saito et al. Dec 2005 B2
6973030 Pecen et al. Dec 2005 B2
6973166 Tsumpes Dec 2005 B1
6973320 Brown et al. Dec 2005 B2
6973378 Yamada Dec 2005 B2
6973664 Fries Dec 2005 B2
6973669 Daniels Dec 2005 B2
6975266 Abraham et al. Dec 2005 B2
6976071 Donzis et al. Dec 2005 B1
6978297 Piersol Dec 2005 B1
6978453 Rao et al. Dec 2005 B2
6980816 Rohles et al. Dec 2005 B2
6981140 Choo Dec 2005 B1
6981262 DeMello et al. Dec 2005 B1
6983200 Bodin et al. Jan 2006 B2
6983483 Maze et al. Jan 2006 B2
6985105 Pitt et al. Jan 2006 B1
6985169 Deng et al. Jan 2006 B1
6985478 Pogossiants Jan 2006 B2
6985839 Motamedi et al. Jan 2006 B1
6985922 Bashen et al. Jan 2006 B1
6985962 Philyaw Jan 2006 B2
6987221 Platt Jan 2006 B2
6987964 Obradovich et al. Jan 2006 B2
6987987 Vacanti et al. Jan 2006 B1
6988026 Breed et al. Jan 2006 B2
6988034 Marlatt et al. Jan 2006 B1
6989766 Mese et al. Jan 2006 B2
6989822 Pettiross et al. Jan 2006 B2
6990080 Bahl et al. Jan 2006 B2
6990407 Mbekeani et al. Jan 2006 B1
6990497 O'Rourke et al. Jan 2006 B2
6990548 Kaylor Jan 2006 B1
6990590 Hanson et al. Jan 2006 B2
6991158 Munte Jan 2006 B2
6992655 Ericson et al. Jan 2006 B2
6993421 Pillar et al. Jan 2006 B2
6993429 Obradovich et al. Jan 2006 B2
6993456 Brooks et al. Jan 2006 B2
6993511 Himmelstein Jan 2006 B2
6993532 Platt et al. Jan 2006 B1
6993570 Irani Jan 2006 B1
6993580 Isherwood et al. Jan 2006 B2
6995788 James Feb 2006 B2
6996670 Delaire et al. Feb 2006 B2
6996720 DeMello et al. Feb 2006 B1
6996778 Rajarajan et al. Feb 2006 B2
6999779 Hashimoto Feb 2006 B1
6999782 Shaughnessy et al. Feb 2006 B2
7000180 Balthaser Feb 2006 B2
7000469 Foxlin et al. Feb 2006 B2
7001681 Wood Feb 2006 B2
7002942 Kotzin Feb 2006 B2
7003134 Covell et al. Feb 2006 B1
7004390 Silverbrook et al. Feb 2006 B2
7006881 Hoffberg et al. Feb 2006 B1
7006950 Greiffenhagen et al. Feb 2006 B1
RE39038 Fleming, III Mar 2006 E
7010492 Bassett et al. Mar 2006 B1
7010512 Gillin et al. Mar 2006 B1
7010546 Kolawa et al. Mar 2006 B1
7010616 Carlson et al. Mar 2006 B2
7010681 Fletcher et al. Mar 2006 B1
7013339 Schwager Mar 2006 B2
7013478 Hendricks et al. Mar 2006 B1
7016055 Dodge et al. Mar 2006 B2
7016084 Tsai Mar 2006 B2
7016532 Boncyk et al. Mar 2006 B2
7017171 Horlander et al. Mar 2006 B1
7017189 DeMello et al. Mar 2006 B1
7020637 Bratton Mar 2006 B2
7020651 Ripley Mar 2006 B2
7020663 Hay et al. Mar 2006 B2
7020701 Gelvin et al. Mar 2006 B1
7020751 Kershaw Mar 2006 B2
7021836 Anderson et al. Apr 2006 B2
7023979 Wu et al. Apr 2006 B1
7024321 Deninger et al. Apr 2006 B1
7024363 Comerford et al. Apr 2006 B1
7024393 Peinado et al. Apr 2006 B1
7024549 Luu et al. Apr 2006 B1
7024552 Caswell et al. Apr 2006 B1
7024660 Andrade et al. Apr 2006 B2
7025209 Hawkins Apr 2006 B2
7027055 Anderson et al. Apr 2006 B2
7027660 Hersch et al. Apr 2006 B2
7027773 McMillin Apr 2006 B1
7027801 Hall et al. Apr 2006 B1
7027975 Pazandak et al. Apr 2006 B1
7028082 Rosenberg et al. Apr 2006 B1
7028269 Cohen-Solal et al. Apr 2006 B1
7035427 Rhoads Apr 2006 B2
7035907 Decasper et al. Apr 2006 B1
7036094 Cohen et al. Apr 2006 B1
7038855 French et al. May 2006 B2
7039393 Kite May 2006 B1
7039676 Day et al. May 2006 B1
7039935 Knudson et al. May 2006 B2
7042345 Ellis May 2006 B2
7042363 Katrak et al. May 2006 B2
7042440 Pryor et al. May 2006 B2
7042454 Seligman May 2006 B1
7043489 Kelley May 2006 B1
7047411 DeMello et al. May 2006 B1
7047491 Schubert et al. May 2006 B2
7047498 Lui et al. May 2006 B2
7049953 Monroe May 2006 B2
7050606 Paul et al. May 2006 B2
7050988 Atcheson et al. May 2006 B2
7051080 Paul et al. May 2006 B1
7051352 Schaffer May 2006 B1
7051353 Yamashita et al. May 2006 B2
7051943 Leone et al. May 2006 B2
7054465 Rhoads May 2006 B2
7054818 Sharma et al. May 2006 B2
7055169 Delpuch et al. May 2006 B2
7057607 Mayoraz et al. Jun 2006 B2
7058204 Hildreth et al. Jun 2006 B2
7058223 Cox Jun 2006 B2
7058508 Combs et al. Jun 2006 B2
7058626 Pan et al. Jun 2006 B1
7058635 Shah-Nazaroff et al. Jun 2006 B1
7058697 Rhoads Jun 2006 B2
7060957 Lange et al. Jun 2006 B2
7062379 Videtich Jun 2006 B2
7062437 Kovales et al. Jun 2006 B2
7062510 Eldering Jun 2006 B1
7062527 Tyrrell, III Jun 2006 B1
7062706 Maxwell et al. Jun 2006 B2
7064656 Belcher et al. Jun 2006 B2
7064681 Horstemeyer Jun 2006 B2
7065345 Carlton et al. Jun 2006 B2
7065351 Carter et al. Jun 2006 B2
7065483 Decary et al. Jun 2006 B2
7065507 Mohammed et al. Jun 2006 B2
7066391 Tsikos et al. Jun 2006 B2
7069186 Jung et al. Jun 2006 B2
7069188 Roberts Jun 2006 B2
7069240 Spero et al. Jun 2006 B2
7069272 Snyder Jun 2006 B2
7069308 Abrams Jun 2006 B2
7069395 Camacho et al. Jun 2006 B2
7069562 Kushnirskiy et al. Jun 2006 B2
7069576 Knudson et al. Jun 2006 B1
7069582 Philyaw et al. Jun 2006 B2
7070098 Lapstun et al. Jul 2006 B1
7072337 Arutyunov Jul 2006 B1
7072846 Robinson Jul 2006 B1
7072849 Filepp et al. Jul 2006 B1
7072934 Helgeson et al. Jul 2006 B2
7072963 Anderson et al. Jul 2006 B2
7073129 Robarts et al. Jul 2006 B1
7075643 Holub Jul 2006 B2
7079639 Smith Jul 2006 B2
7079713 Simmons Jul 2006 B2
7079857 Maggenti et al. Jul 2006 B2
7079993 Stephenson et al. Jul 2006 B2
7080140 Heitman et al. Jul 2006 B2
7082359 Breed Jul 2006 B2
7082407 Bezos et al. Jul 2006 B1
7084780 Nguyen et al. Aug 2006 B2
7084859 Pryor Aug 2006 B1
7085637 Breed et al. Aug 2006 B2
7085683 Anderson et al. Aug 2006 B2
7085755 Bluhm et al. Aug 2006 B2
7085845 Woodward et al. Aug 2006 B2
7086187 Bandak Aug 2006 B2
7089206 Martin Aug 2006 B2
7089330 Mason Aug 2006 B1
7089583 Mehra et al. Aug 2006 B2
7092723 Himmelstein Aug 2006 B2
7093759 Walsh Aug 2006 B2
7095401 Liu et al. Aug 2006 B2
7096218 Schirmer et al. Aug 2006 B2
7096234 Plastina et al. Aug 2006 B2
7096486 Ukai et al. Aug 2006 B1
7098891 Pryor Aug 2006 B1
7100195 Underwood Aug 2006 B1
7103018 Hansen et al. Sep 2006 B1
7103197 Rhoads Sep 2006 B2
7103460 Breed Sep 2006 B1
7103511 Petite Sep 2006 B2
7103574 Peinado et al. Sep 2006 B1
7103848 Barsness et al. Sep 2006 B2
7104890 Tsuda et al. Sep 2006 B2
7104955 Bardy Sep 2006 B2
7106717 Rousseau et al. Sep 2006 B2
7107285 von Kaenel et al. Sep 2006 B2
7107309 Geddes et al. Sep 2006 B1
7107706 Bailey, Sr. et al. Sep 2006 B1
7109859 Peeters Sep 2006 B2
7110525 Heller et al. Sep 2006 B1
7110576 Norris, Jr. et al. Sep 2006 B2
7110776 Sambin Sep 2006 B2
7110880 Breed et al. Sep 2006 B2
7110982 Feldman et al. Sep 2006 B2
7111240 Crow et al. Sep 2006 B2
7111787 Ehrhart Sep 2006 B2
7113110 Horstemeyer Sep 2006 B2
7113596 Rhoads Sep 2006 B2
7113614 Rhoads Sep 2006 B2
7113771 Kotzin Sep 2006 B2
7113779 Fujisaki Sep 2006 B1
7113860 Wang Sep 2006 B2
7113918 Ahmad et al. Sep 2006 B1
7116781 Rhoads Oct 2006 B2
7116894 Chatterton Oct 2006 B1
7117170 Bennett et al. Oct 2006 B1
7117374 Hill et al. Oct 2006 B2
7117504 Smith et al. Oct 2006 B2
7117518 Takahashi et al. Oct 2006 B1
7119716 Horstemeyer Oct 2006 B2
7120129 Ayyagari et al. Oct 2006 B2
7120248 Hopkins et al. Oct 2006 B2
7120508 Peshkin et al. Oct 2006 B2
7120619 Drucker et al. Oct 2006 B2
7120872 Thacker Oct 2006 B2
7121469 Dorai et al. Oct 2006 B2
7121946 Paul et al. Oct 2006 B2
7123926 Himmelstein Oct 2006 B2
7124004 Obradovich Oct 2006 B2
7124093 Graham et al. Oct 2006 B1
7124101 Mikurak Oct 2006 B1
7126583 Breed Oct 2006 B1
7127143 Elkins, II et al. Oct 2006 B2
7127525 Coleman et al. Oct 2006 B2
7130807 Mikurak Oct 2006 B1
7130885 Chandra et al. Oct 2006 B2
7131061 MacLean et al. Oct 2006 B2
7131062 Nguyen et al. Oct 2006 B2
7133862 Hubert et al. Nov 2006 B2
7134131 Hendricks et al. Nov 2006 B1
7136530 Lee et al. Nov 2006 B2
7136710 Hoffberg et al. Nov 2006 B1
7136814 McConnell Nov 2006 B1
7136838 Peinado et al. Nov 2006 B1
7136866 Springer, Jr. et al. Nov 2006 B2
7136871 Ozer et al. Nov 2006 B2
7137077 Iwema et al. Nov 2006 B2
7137124 Lamb et al. Nov 2006 B2
7139371 McElvaney Nov 2006 B2
7139445 Pilu et al. Nov 2006 B2
7139723 Conkwright et al. Nov 2006 B2
7139983 Kelts Nov 2006 B2
7142099 Ross et al. Nov 2006 B2
7142844 Obradovich et al. Nov 2006 B2
7143091 Charnock et al. Nov 2006 B2
7146260 Preston et al. Dec 2006 B2
7147246 Breed et al. Dec 2006 B2
7147558 Giobbi Dec 2006 B2
7149696 Shimizu et al. Dec 2006 B2
7149698 Guheen et al. Dec 2006 B2
7149741 Burkey et al. Dec 2006 B2
7151768 Preston et al. Dec 2006 B2
7151864 Henry et al. Dec 2006 B2
7151946 Maggenti et al. Dec 2006 B2
7152207 Underwood et al. Dec 2006 B1
7152236 Wugofski et al. Dec 2006 B1
7154638 Lapstun et al. Dec 2006 B1
7155335 Rennels Dec 2006 B2
7155676 Land et al. Dec 2006 B2
7158758 Lim et al. Jan 2007 B2
7158953 DeMello et al. Jan 2007 B1
7158956 Himmelstein Jan 2007 B1
7161926 Elson et al. Jan 2007 B2
7162433 Foroutan Jan 2007 B1
7164662 Preston et al. Jan 2007 B2
7165040 Ehrman et al. Jan 2007 B2
7165041 Guheen et al. Jan 2007 B1
7165098 Boyer et al. Jan 2007 B1
7165268 Moore et al. Jan 2007 B1
7167586 Braun et al. Jan 2007 B2
7170492 Bell Jan 2007 B2
7171016 Rhoads Jan 2007 B1
7171189 Bianconi et al. Jan 2007 B2
7171381 Ehrman et al. Jan 2007 B2
7171468 Yeung et al. Jan 2007 B2
7171624 Baldwin et al. Jan 2007 B2
7173538 Pedraza et al. Feb 2007 B2
7174054 Manber et al. Feb 2007 B2
7174126 McElhatten et al. Feb 2007 B2
7174253 Videtich Feb 2007 B2
7174286 Martin et al. Feb 2007 B2
7174332 Baxter et al. Feb 2007 B2
7176791 Sakaki et al. Feb 2007 B2
7177623 Baldwin Feb 2007 B2
7177935 Bradshaw et al. Feb 2007 B2
7178049 Lutter Feb 2007 B2
7178106 Lamkin et al. Feb 2007 B2
7180473 Horie et al. Feb 2007 B2
7181017 Nagel et al. Feb 2007 B1
7181438 Szabo Feb 2007 B1
7181488 Martin et al. Feb 2007 B2
7181744 Shultz et al. Feb 2007 B2
7181761 Davis et al. Feb 2007 B2
7184048 Hunter Feb 2007 B2
7184866 Squires et al. Feb 2007 B2
7185275 Roberts et al. Feb 2007 B2
7185286 Zondervan et al. Feb 2007 B2
7185355 Ellis et al. Feb 2007 B1
7187847 Young et al. Mar 2007 B2
7187947 White et al. Mar 2007 B1
7188216 Rajkumar et al. Mar 2007 B1
7188307 Ohsawa Mar 2007 B2
7190480 Sturgeon et al. Mar 2007 B2
7191447 Ellis et al. Mar 2007 B1
7194421 Conkwright et al. Mar 2007 B2
7194512 Creemer et al. Mar 2007 B1
7194755 Nakata et al. Mar 2007 B1
7197029 Osterhout Mar 2007 B1
7197234 Chatterton Mar 2007 B1
7197465 Hu et al. Mar 2007 B1
7197472 Conkwright et al. Mar 2007 B2
7197716 Newell et al. Mar 2007 B2
7199885 Dodge et al. Apr 2007 B2
7202776 Breed Apr 2007 B2
7202898 Braun et al. Apr 2007 B1
7203158 Oshima et al. Apr 2007 B2
7203300 Shaffer et al. Apr 2007 B2
7203597 Sato et al. Apr 2007 B2
7203721 Ben-Efraim et al. Apr 2007 B1
7204041 Bailey, Sr. et al. Apr 2007 B1
7206305 Preston et al. Apr 2007 B2
7207041 Elson et al. Apr 2007 B2
7207042 Smith et al. Apr 2007 B2
7209915 Taboada et al. Apr 2007 B1
7209969 Lahti et al. Apr 2007 B2
7210100 Berger et al. Apr 2007 B2
7212296 Dodge et al. May 2007 B2
7212661 Samara et al. May 2007 B2
7212811 Dowling et al. May 2007 B2
7215965 Fournier et al. May 2007 B2
7216121 Bachman et al. May 2007 B2
7216145 Collings, III May 2007 B2
7216224 Lapstun et al. May 2007 B2
7218779 Dodge et al. May 2007 B2
7218940 Niemenmaa et al. May 2007 B2
7219013 Young et al. May 2007 B1
7219123 Fiechter et al. May 2007 B1
7219304 Kraenzel et al. May 2007 B1
7221669 Preston et al. May 2007 B2
7221959 Lindqvist et al. May 2007 B2
7222078 Abelow May 2007 B2
7222780 Lapstun et al. May 2007 B2
7224282 Terauchi et al. May 2007 B2
7224480 Tanaka et al. May 2007 B2
7224820 Inomata et al. May 2007 B2
7224886 Akamatsu et al. May 2007 B2
7225040 Eller et al. May 2007 B2
7225979 Silverbrook et al. Jun 2007 B2
7227526 Hildreth et al. Jun 2007 B2
7227842 Ji et al. Jun 2007 B1
7228340 De Boor et al. Jun 2007 B2
7231357 Shanman et al. Jun 2007 B1
7234645 Silverbrook et al. Jun 2007 B2
7236493 McRae Jun 2007 B1
7236941 Conkwright et al. Jun 2007 B2
7236969 Skillen et al. Jun 2007 B1
7239747 Bresler et al. Jul 2007 B2
7239949 Lu et al. Jul 2007 B2
7240843 Paul et al. Jul 2007 B2
7242492 Currans et al. Jul 2007 B2
7242988 Hoffberg et al. Jul 2007 B1
7246118 Chastain et al. Jul 2007 B2
7249266 Margalit et al. Jul 2007 B2
7254581 Johnson et al. Aug 2007 B2
7256341 Plastina et al. Aug 2007 B2
7257426 Witkowski et al. Aug 2007 B1
7257545 Hung Aug 2007 B1
7257567 Toshima Aug 2007 B2
7257570 Riise et al. Aug 2007 B2
7259747 Bell Aug 2007 B2
7260534 Gandhi et al. Aug 2007 B2
7262798 Stavely et al. Aug 2007 B2
7263367 Sabot Aug 2007 B1
7263521 Carpentier et al. Aug 2007 B2
7268700 Hoffberg Sep 2007 B1
7269188 Smith Sep 2007 B2
7269289 Wu et al. Sep 2007 B2
7271737 Hoffberg Sep 2007 B1
7272497 Koshiji et al. Sep 2007 B2
7272637 Himmelstein Sep 2007 B1
7274800 Nefian et al. Sep 2007 B2
7274988 Mukaiyama Sep 2007 B2
7275049 Clausner et al. Sep 2007 B2
7277693 Chen et al. Oct 2007 B2
7277925 Warnock Oct 2007 B2
7283567 Preston et al. Oct 2007 B2
7283904 Benjamin et al. Oct 2007 B2
7283974 Katz et al. Oct 2007 B2
7283992 Liu et al. Oct 2007 B2
7284192 Kashi et al. Oct 2007 B2
7286522 Preston et al. Oct 2007 B2
7289806 Morris et al. Oct 2007 B2
7295101 Ward et al. Nov 2007 B2
7295925 Breed et al. Nov 2007 B2
7298248 Finley et al. Nov 2007 B2
7298289 Hoffberg Nov 2007 B1
7299186 Kuzunuki et al. Nov 2007 B2
7299969 Paul et al. Nov 2007 B2
7301494 Waters Nov 2007 B2
7302339 Gray Nov 2007 B2
7302419 Conkwright et al. Nov 2007 B2
7302468 Wijeratne Nov 2007 B2
7305442 Lundy Dec 2007 B1
7305445 Singh et al. Dec 2007 B2
7305467 Kaiser et al. Dec 2007 B2
7308112 Fujimura et al. Dec 2007 B2
7308483 Philyaw Dec 2007 B2
7317696 Preston et al. Jan 2008 B2
7317723 Guru Jan 2008 B1
7317836 Fujimura et al. Jan 2008 B2
7318106 Philyaw Jan 2008 B2
7320025 Steinberg et al. Jan 2008 B1
7328450 Macrae et al. Feb 2008 B2
7330693 Goss Feb 2008 B1
7331523 Meier et al. Feb 2008 B2
7339467 Lamb Mar 2008 B2
7340469 Alghathbar et al. Mar 2008 B1
7343160 Morton Mar 2008 B2
7343317 Jokinen et al. Mar 2008 B2
7343364 Bram et al. Mar 2008 B2
7343614 Hendricks et al. Mar 2008 B1
7343616 Takahashi et al. Mar 2008 B1
7348963 Bell Mar 2008 B2
7349552 Levy et al. Mar 2008 B2
7349976 Glaser et al. Mar 2008 B1
7350204 Lambert et al. Mar 2008 B2
7352358 Zalewski et al. Apr 2008 B2
7353199 DiStefano, III Apr 2008 B1
7353533 Wright et al. Apr 2008 B2
7358434 Plastina et al. Apr 2008 B2
7359121 French et al. Apr 2008 B2
7359782 Breed Apr 2008 B2
7362902 Baker et al. Apr 2008 B1
7362999 Petschke et al. Apr 2008 B2
7363314 Picker et al. Apr 2008 B2
7363347 Thomas Apr 2008 B2
7363643 Drake et al. Apr 2008 B2
7363645 Hendricks Apr 2008 B1
7367887 Watabe et al. May 2008 B2
7370002 Heckerman et al. May 2008 B2
7373243 Tengler et al. May 2008 B2
7373345 Carpentier et al. May 2008 B2
7375728 Donath et al. May 2008 B2
7376581 DeRose et al. May 2008 B2
7377421 Rhoads May 2008 B2
7379563 Shamaie May 2008 B2
7379566 Hildreth May 2008 B2
7379707 DiFonzo et al. May 2008 B2
7383263 Goger Jun 2008 B2
7383341 Saito et al. Jun 2008 B1
7385501 Miller et al. Jun 2008 B2
7385736 Tseng et al. Jun 2008 B2
7386127 Bar-On Jun 2008 B2
7386477 Fano Jun 2008 B2
7389591 Jaiswal et al. Jun 2008 B2
7392212 Hancock et al. Jun 2008 B2
7392287 Ratcliff, III Jun 2008 B2
7392475 Leban et al. Jun 2008 B1
7395507 Robarts et al. Jul 2008 B2
7401140 Goulden et al. Jul 2008 B2
7403693 Shteyn Jul 2008 B2
7403769 Kopra et al. Jul 2008 B2
7404084 Fransdonk Jul 2008 B2
7404520 Vesuna Jul 2008 B2
7409434 Lamming et al. Aug 2008 B2
7409707 Swander et al. Aug 2008 B2
7411982 Smith Aug 2008 B2
7412077 Li et al. Aug 2008 B2
7412158 Kakkori Aug 2008 B2
7415181 Greenwood et al. Aug 2008 B2
7415670 Hull et al. Aug 2008 B2
7418346 Breed et al. Aug 2008 B2
7418476 Salesky et al. Aug 2008 B2
7421093 Hildreth et al. Sep 2008 B2
7421155 King et al. Sep 2008 B2
7421454 DeShan et al. Sep 2008 B2
7424543 Rice, III Sep 2008 B2
7424618 Roy et al. Sep 2008 B2
7426438 Robertsson Sep 2008 B1
7426486 Treibach-Heck et al. Sep 2008 B2
7430312 Gu Sep 2008 B2
7433068 Stevens et al. Oct 2008 B2
7433885 Jones Oct 2008 B2
7433893 Lowry Oct 2008 B2
7436496 Kawahito Oct 2008 B2
7437023 King et al. Oct 2008 B2
7437312 Bhatia et al. Oct 2008 B2
7437351 Page Oct 2008 B2
7437368 Kolluri et al. Oct 2008 B1
7437475 Philyaw Oct 2008 B2
7437751 Daniels Oct 2008 B2
7450736 Yang et al. Nov 2008 B2
7450955 Himmelstein Nov 2008 B2
7451005 Hoffberg et al. Nov 2008 B2
7451102 Nowak Nov 2008 B2
7452275 Kuraishi Nov 2008 B2
7457862 Hepworth et al. Nov 2008 B2
7460690 Cohen et al. Dec 2008 B2
7461168 Wan Dec 2008 B1
7463896 Himmelstein Dec 2008 B2
7466823 Vestergaard et al. Dec 2008 B2
7471236 Pitt et al. Dec 2008 B1
7477780 Boncyk et al. Jan 2009 B2
7477783 Nakayama Jan 2009 B2
7477909 Roth Jan 2009 B2
7478323 Dowdy Jan 2009 B2
7480929 Klosterman et al. Jan 2009 B2
7484008 Gelvin et al. Jan 2009 B1
7484237 Joly et al. Jan 2009 B2
7487107 Blanchard et al. Feb 2009 B2
7487112 Barnes, Jr. Feb 2009 B2
7487424 Nam et al. Feb 2009 B2
7487529 Orlick Feb 2009 B1
7489812 Fox et al. Feb 2009 B2
7490045 Flores et al. Feb 2009 B1
7490333 Grimaud et al. Feb 2009 B2
7492367 Mahajan et al. Feb 2009 B2
7493487 Phillips et al. Feb 2009 B2
7493572 Card et al. Feb 2009 B2
7493641 Klosterman et al. Feb 2009 B2
7496548 Ershov Feb 2009 B1
7496638 Philyaw Feb 2009 B2
7496757 Abbott et al. Feb 2009 B2
7496943 Goldberg et al. Feb 2009 B1
RE40653 Fleming, III Mar 2009 E
7499630 Koch et al. Mar 2009 B2
7504983 Chen et al. Mar 2009 B2
7505772 Himmelstein Mar 2009 B2
7505785 Callaghan et al. Mar 2009 B2
7505956 Ibbotson Mar 2009 B2
7505959 Kaiser et al. Mar 2009 B2
7506020 Ellis Mar 2009 B2
7506250 Hull et al. Mar 2009 B2
7506265 Traut et al. Mar 2009 B1
7508810 Moinzadeh et al. Mar 2009 B2
7509134 Fournier et al. Mar 2009 B2
7509673 Swander et al. Mar 2009 B2
7512254 Vollkommer et al. Mar 2009 B2
7518605 Lin et al. Apr 2009 B2
7519397 Fournier et al. Apr 2009 B2
7522995 Nortrup Apr 2009 B2
7523067 Nakajima Apr 2009 B1
7523126 Rivette et al. Apr 2009 B2
7525450 Miller et al. Apr 2009 B2
7529811 Thompson May 2009 B2
7533040 Perkowski May 2009 B2
7536032 Bell May 2009 B2
7536189 Himmelstein May 2009 B2
7536547 Van Den Tillaart May 2009 B2
7542966 Wolf et al. Jun 2009 B2
7546254 Bednarek Jun 2009 B2
7548961 Fernandez et al. Jun 2009 B1
7549159 Shay Jun 2009 B2
7552075 Walsh Jun 2009 B1
7552381 Barrus Jun 2009 B2
7555142 Hildreth et al. Jun 2009 B2
7560701 Oggier et al. Jul 2009 B2
7561312 Proudfoot et al. Jul 2009 B1
7562122 Oliver et al. Jul 2009 B2
7568213 Carhart et al. Jul 2009 B2
7569269 Takada et al. Aug 2009 B2
7570805 Gu Aug 2009 B2
7571121 Bezos et al. Aug 2009 B2
7574020 Shamaie Aug 2009 B2
7574407 Carro et al. Aug 2009 B2
7574422 Guan et al. Aug 2009 B2
7574434 Galuten Aug 2009 B2
7574513 Dunning et al. Aug 2009 B2
7576679 Orr et al. Aug 2009 B1
7576727 Bell Aug 2009 B2
7577665 Ramer et al. Aug 2009 B2
7577722 Khandekar et al. Aug 2009 B1
7577828 Sammer et al. Aug 2009 B2
7577872 DiBartolomeo et al. Aug 2009 B2
7580932 Plastina et al. Aug 2009 B2
7584215 Saari et al. Sep 2009 B2
7587370 Himmelstein Sep 2009 B2
7587412 Weyl et al. Sep 2009 B2
7590262 Fujimura et al. Sep 2009 B2
7591597 Pasqualini et al. Sep 2009 B2
7593552 Higaki et al. Sep 2009 B2
7593605 King et al. Sep 2009 B2
7594000 Himmelstein Sep 2009 B2
7594185 Anderson et al. Sep 2009 B2
7594189 Walker et al. Sep 2009 B1
7596269 King et al. Sep 2009 B2
7596391 Himmelstein Sep 2009 B2
7598942 Underkoffler et al. Oct 2009 B2
7599580 King et al. Oct 2009 B2
7599715 Himmelstein Oct 2009 B2
7599844 King et al. Oct 2009 B2
7599855 Sussman Oct 2009 B2
7606741 King et al. Oct 2009 B2
7607147 Lu et al. Oct 2009 B1
7607509 Schmiz et al. Oct 2009 B2
7613634 Siegel et al. Nov 2009 B2
7614055 Buskens et al. Nov 2009 B2
7616840 Erol et al. Nov 2009 B2
7619623 Hoppe et al. Nov 2009 B2
7620202 Fujimura et al. Nov 2009 B2
7624104 Berkhin et al. Nov 2009 B2
7624146 Brogne et al. Nov 2009 B1
7624424 Morita et al. Nov 2009 B2
7629899 Breed Dec 2009 B2
7630986 Herz et al. Dec 2009 B1
7634407 Chelba et al. Dec 2009 B2
7634465 Sareen et al. Dec 2009 B2
7634468 Stephan Dec 2009 B2
7640268 Gotoh et al. Dec 2009 B2
7647329 Fischman et al. Jan 2010 B1
7647349 Hubert et al. Jan 2010 B2
7650319 Hoffberg et al. Jan 2010 B2
7657907 Fennan et al. Feb 2010 B2
7660813 Milic-Frayling et al. Feb 2010 B2
7664315 Woodfill et al. Feb 2010 B2
7664734 Lawrence et al. Feb 2010 B2
7665109 Matthews, III et al. Feb 2010 B2
7668340 Cohen et al. Feb 2010 B2
7672543 Hull et al. Mar 2010 B2
7672931 Hurst-Hiller et al. Mar 2010 B2
7680067 Prasad et al. Mar 2010 B2
7680298 Roberts et al. Mar 2010 B2
7681147 Richardson-Bunbury et al. Mar 2010 B2
7683954 Ichikawa et al. Mar 2010 B2
7684592 Paul et al. Mar 2010 B2
7689712 Lee et al. Mar 2010 B2
7689832 Talmor et al. Mar 2010 B2
7698344 Sareen et al. Apr 2010 B2
7698545 Campbell et al. Apr 2010 B1
7701439 Hillis et al. Apr 2010 B2
7702130 Im et al. Apr 2010 B2
7702624 King et al. Apr 2010 B2
7704135 Harrison, Jr. Apr 2010 B2
7706611 King et al. Apr 2010 B2
7707039 King et al. Apr 2010 B2
7710391 Bell et al. May 2010 B2
7710598 Harrison, Jr. May 2010 B2
7712113 Yoon et al. May 2010 B2
7721307 Hendricks et al. May 2010 B2
7725492 Sittig et al. May 2010 B2
7729530 Antonov et al. Jun 2010 B2
7729901 Richardson-Bunbury et al. Jun 2010 B2
7730190 Coile et al. Jun 2010 B2
7733853 Moinzadeh et al. Jun 2010 B2
7742953 King et al. Jun 2010 B2
7746345 Hunter Jun 2010 B2
7747281 Preston et al. Jun 2010 B2
7747291 Himmelstein Jun 2010 B2
7757254 Shoff et al. Jul 2010 B2
7760182 Ahmad et al. Jul 2010 B2
7761451 Cunningham Jul 2010 B2
7764219 Pitt et al. Jul 2010 B2
7769633 Jokinen et al. Aug 2010 B2
7769740 Martinez et al. Aug 2010 B2
7769745 Naaman et al. Aug 2010 B2
7779002 Gomes et al. Aug 2010 B1
7783304 Himmelstein Aug 2010 B2
7783617 Lu et al. Aug 2010 B2
7783622 Vandermolen et al. Aug 2010 B1
7788248 Forstall et al. Aug 2010 B2
7792040 Nair et al. Sep 2010 B2
7793136 Lutter Sep 2010 B2
7793326 McCoskey et al. Sep 2010 B2
7796116 Salsman et al. Sep 2010 B2
7802084 Fitzgerald et al. Sep 2010 B2
7802724 Nohr Sep 2010 B1
7804440 Orr Sep 2010 B1
7806322 Brundage et al. Oct 2010 B2
7809167 Bell Oct 2010 B2
7809367 Hellåker Oct 2010 B2
7812860 King et al. Oct 2010 B2
7818178 Overend et al. Oct 2010 B2
7818215 King et al. Oct 2010 B2
7822871 Stolorz et al. Oct 2010 B2
7830388 Lu Nov 2010 B1
7831586 Reitter et al. Nov 2010 B2
7831912 King et al. Nov 2010 B2
7834846 Bell Nov 2010 B1
7844907 Watler et al. Nov 2010 B2
7844996 Chen et al. Nov 2010 B2
7847685 Miller et al. Dec 2010 B2
7848763 Fournier et al. Dec 2010 B2
7852262 Namineni et al. Dec 2010 B2
7859402 Miller et al. Dec 2010 B2
7865308 Athsani et al. Jan 2011 B2
7870199 Galli et al. Jan 2011 B2
7872669 Darrell et al. Jan 2011 B2
7891004 Gelvin et al. Feb 2011 B1
7894670 King et al. Feb 2011 B2
RE42256 Edwards Mar 2011 E
7898522 Hildreth et al. Mar 2011 B2
7904187 Hoffberg et al. Mar 2011 B2
7904569 Gelvin et al. Mar 2011 B1
7907976 Himmelstein Mar 2011 B2
7912645 Breed et al. Mar 2011 B2
7920714 O'Neil Apr 2011 B2
7925708 Davis et al. Apr 2011 B2
7941433 Benson May 2011 B2
7949191 Ramkumar et al. May 2011 B1
7965222 Pitt et al. Jun 2011 B2
7966078 Hoffberg et al. Jun 2011 B2
7971157 Markovic et al. Jun 2011 B2
7983835 Lagassey Jul 2011 B2
7987003 Hoffberg et al. Jul 2011 B2
7990556 King et al. Aug 2011 B2
7996793 Latta et al. Aug 2011 B2
7999721 Orr Aug 2011 B2
8005713 Sanz-Pastor et al. Aug 2011 B1
8005720 King et al. Aug 2011 B2
8006263 Ellis et al. Aug 2011 B2
8018467 Solanki et al. Sep 2011 B2
8019648 King et al. Sep 2011 B2
8020028 Lutter Sep 2011 B1
8024766 Addington Sep 2011 B2
8031060 Hoffberg et al. Oct 2011 B2
8032477 Hoffberg et al. Oct 2011 B1
8035612 Bell et al. Oct 2011 B2
8035614 Bell et al. Oct 2011 B2
8035624 Bell et al. Oct 2011 B2
8046801 Ellis et al. Oct 2011 B2
8072470 Marks Dec 2011 B2
8073821 Zahavi et al. Dec 2011 B2
8073921 Thomas et al. Dec 2011 B2
8074076 Courtois Dec 2011 B2
8081849 King et al. Dec 2011 B2
8082258 Kumar et al. Dec 2011 B2
8117445 Werner et al. Feb 2012 B2
8146156 King et al. Mar 2012 B2
8165916 Hoffberg et al. Apr 2012 B2
8169436 Rivera et al. May 2012 B2
8173634 Liu et al. May 2012 B2
8175921 Kopra May 2012 B1
8191088 Edwards et al. May 2012 B2
8214387 King et al. Jul 2012 B2
8234641 Fitzgerald et al. Jul 2012 B2
8267783 van Datta et al. Sep 2012 B2
8272964 van Datta et al. Sep 2012 B2
8302030 Soroca et al. Oct 2012 B2
8332406 Donaldson Dec 2012 B2
8347357 Chen et al. Jan 2013 B2
8352400 Hoffberg et al. Jan 2013 B2
8361987 Cohen et al. Jan 2013 B2
8364136 Hoffberg et al. Jan 2013 B2
8369967 Hoffberg et al. Feb 2013 B2
8402356 Martinez et al. Mar 2013 B2
8402490 Hoffberg-Borghesani et al. Mar 2013 B2
8405654 Rivera et al. Mar 2013 B2
8458695 Fitzgerald et al. Jun 2013 B2
8473836 Flake et al. Jun 2013 B2
8483546 Borghesani et al. Jul 2013 B2
8505090 King et al. Aug 2013 B2
8516266 Hoffberg et al. Aug 2013 B2
RE44566 Goldberg et al. Oct 2013 E
8554623 Higgins et al. Oct 2013 B2
8560390 Higgins et al. Oct 2013 B2
8574074 van Datta et al. Nov 2013 B2
8578413 Ellis et al. Nov 2013 B2
8578423 Ellis et al. Nov 2013 B2
20010001854 Schena et al. May 2001 A1
20010003099 Von Kohorn Jun 2001 A1
20010003176 Schena et al. Jun 2001 A1
20010003177 Schena et al. Jun 2001 A1
20010003193 Woodring et al. Jun 2001 A1
20010005804 Rayner Jun 2001 A1
20010007086 Rogers et al. Jul 2001 A1
20010009855 I'Anson Jul 2001 A1
20010011226 Greer et al. Aug 2001 A1
20010013009 Greening et al. Aug 2001 A1
20010014915 Blumenau Aug 2001 A1
20010018628 Jenkins et al. Aug 2001 A1
20010025245 Flickinger et al. Sep 2001 A1
20010025254 Park Sep 2001 A1
20010025274 Zehr et al. Sep 2001 A1
20010026533 Schwager Oct 2001 A1
20010027412 Son Oct 2001 A1
20010029610 Corvin et al. Oct 2001 A1
20010032125 Bhan et al. Oct 2001 A1
20010032132 Moran Oct 2001 A1
20010032133 Moran Oct 2001 A1
20010032137 Bennett et al. Oct 2001 A1
20010032252 Durst, Jr. et al. Oct 2001 A1
20010032333 Flickinger Oct 2001 A1
20010034237 Garahi Oct 2001 A1
20010034643 Acres Oct 2001 A1
20010034762 Jacobs et al. Oct 2001 A1
20010035880 Musatov et al. Nov 2001 A1
20010037232 Miller Nov 2001 A1
20010039210 St-Denis Nov 2001 A1
20010042001 Goto et al. Nov 2001 A1
20010042002 Koopersmith Nov 2001 A1
20010044309 Bar et al. Nov 2001 A1
20010044809 Parasnis et al. Nov 2001 A1
20010045104 Bailey, Sr. et al. Nov 2001 A1
20010047297 Wen Nov 2001 A1
20010047298 Moore et al. Nov 2001 A1
20010047384 Croy Nov 2001 A1
20010049620 Blasko Dec 2001 A1
20010049636 Hudda et al. Dec 2001 A1
20010052058 Ohran Dec 2001 A1
20010052123 Kawai Dec 2001 A1
20010052847 Auerbach Dec 2001 A1
20010053252 Creque Dec 2001 A1
20010054181 Corvin Dec 2001 A1
20010055411 Black Dec 2001 A1
20010056434 Kaplan et al. Dec 2001 A1
20010056463 Grady et al. Dec 2001 A1
20010056540 Ober et al. Dec 2001 A1
20010056544 Walker Dec 2001 A1
20020001317 Herring Jan 2002 A1
20020002504 Engel et al. Jan 2002 A1
20020002899 Gjerdingen et al. Jan 2002 A1
20020004743 Kutaragi et al. Jan 2002 A1
20020004744 Muyres et al. Jan 2002 A1
20020007307 Miller et al. Jan 2002 A1
20020007310 Long Jan 2002 A1
20020008619 Lerg et al. Jan 2002 A1
20020008637 Lemelson et al. Jan 2002 A1
20020010626 Agmoni Jan 2002 A1
20020010628 Burns Jan 2002 A1
20020010757 Granik et al. Jan 2002 A1
20020012065 Watanabe Jan 2002 A1
20020012329 Atkinson et al. Jan 2002 A1
20020013174 Murata Jan 2002 A1
20020013781 Petersen Jan 2002 A1
20020013941 Ward, III et al. Jan 2002 A1
20020014742 Conte et al. Feb 2002 A1
20020014976 Yoshida Feb 2002 A1
20020016171 Doganata et al. Feb 2002 A1
20020016750 Attia Feb 2002 A1
20020018076 Gianola Feb 2002 A1
20020018982 Conroy Feb 2002 A1
20020019774 Kanter Feb 2002 A1
20020019849 Tuvey et al. Feb 2002 A1
20020019857 Harjanto Feb 2002 A1
20020022476 Go Feb 2002 A1
20020022516 Forden Feb 2002 A1
20020022924 Begin Feb 2002 A1
20020022927 Lemelson et al. Feb 2002 A1
20020022993 Miller et al. Feb 2002 A1
20020023000 Bollay Feb 2002 A1
20020023091 Silberberg et al. Feb 2002 A1
20020023215 Wang et al. Feb 2002 A1
20020023230 Bolnick et al. Feb 2002 A1
20020023957 Michaelis et al. Feb 2002 A1
20020023959 Miller et al. Feb 2002 A1
20020026345 Juels Feb 2002 A1
20020026355 Mitsuoka et al. Feb 2002 A1
20020026496 Boyer et al. Feb 2002 A1
20020026638 Eldering et al. Feb 2002 A1
20020029350 Cooper et al. Mar 2002 A1
20020032608 Kanter Mar 2002 A1
20020032626 DeWolf et al. Mar 2002 A1
20020032906 Grossman Mar 2002 A1
20020032907 Daniels Mar 2002 A1
20020033844 Levy et al. Mar 2002 A1
20020034757 Cubicciotti Mar 2002 A1
20020035596 Yang et al. Mar 2002 A1
20020035605 McDowell et al. Mar 2002 A1
20020036750 Eberl et al. Mar 2002 A1
20020037735 Maggenti et al. Mar 2002 A1
20020038182 Wong et al. Mar 2002 A1
20020038456 Hansen et al. Mar 2002 A1
20020040475 Yap et al. Apr 2002 A1
20020042914 Walker et al. Apr 2002 A1
20020044687 Federman Apr 2002 A1
20020046084 Steele et al. Apr 2002 A1
20020046087 Hey Apr 2002 A1
20020046095 Wallace Apr 2002 A1
20020046102 Dohring et al. Apr 2002 A1
20020046262 Heilig et al. Apr 2002 A1
20020049389 Abreu Apr 2002 A1
20020049527 Kohno et al. Apr 2002 A1
20020049781 Bengtson Apr 2002 A1
20020049788 Lipkin et al. Apr 2002 A1
20020049968 Wilson et al. Apr 2002 A1
20020051262 Nuttall et al. May 2002 A1
20020051521 Patrick May 2002 A1
20020052214 Maggenti et al. May 2002 A1
20020052747 Sarukkai May 2002 A1
20020052785 Tenenbaum May 2002 A1
20020052786 Kim et al. May 2002 A1
20020052875 Smith et al. May 2002 A1
20020054059 Schneiderman May 2002 A1
20020054089 Nicholas et al. May 2002 A1
20020055833 Sterling May 2002 A1
20020055876 Gabler May 2002 A1
20020055906 Katz et al. May 2002 A1
20020055919 Mikheev May 2002 A1
20020056107 Schlack May 2002 A1
20020057892 Mano et al. May 2002 A1
20020059094 Hosea et al. May 2002 A1
20020059577 Lu et al. May 2002 A1
20020059590 Kitsukawa et al. May 2002 A1
20020059599 Schein et al. May 2002 A1
20020059610 Ellis May 2002 A1
20020061760 Maggenti et al. May 2002 A1
20020061778 Acres May 2002 A1
20020065844 Robinson et al. May 2002 A1
20020067308 Robertson Jun 2002 A1
20020067475 Waibel et al. Jun 2002 A1
20020067730 Hinderks et al. Jun 2002 A1
20020069218 Sull et al. Jun 2002 A1
20020069240 Berk Jun 2002 A1
20020069405 Chapin et al. Jun 2002 A1
20020069529 Wieres Jun 2002 A1
20020070852 Trauner et al. Jun 2002 A1
20020072965 Merriman et al. Jun 2002 A1
20020072966 Eldering et al. Jun 2002 A1
20020073000 Sage Jun 2002 A1
20020073235 Chen et al. Jun 2002 A1
20020073236 Helgeson et al. Jun 2002 A1
20020073424 Ward, III et al. Jun 2002 A1
20020077906 Remler Jun 2002 A1
20020078441 Drake et al. Jun 2002 A1
20020082048 Toyoshima Jun 2002 A1
20020082077 Johnson et al. Jun 2002 A1
20020082901 Dunning et al. Jun 2002 A1
20020082910 Kontogouris Jun 2002 A1
20020082913 Li Jun 2002 A1
20020082941 Bird Jun 2002 A1
20020083435 Blasko et al. Jun 2002 A1
20020083439 Eldering Jun 2002 A1
20020083441 Flickinger et al. Jun 2002 A1
20020083442 Eldering Jun 2002 A1
20020083443 Eldering et al. Jun 2002 A1
20020083444 Blasko et al. Jun 2002 A1
20020083445 Flickinger et al. Jun 2002 A1
20020083451 Gill et al. Jun 2002 A1
20020087401 Leapman et al. Jul 2002 A1
20020087402 Zustak et al. Jul 2002 A1
20020087403 Meyers et al. Jul 2002 A1
20020087887 Busam et al. Jul 2002 A1
20020087973 Hamilton et al. Jul 2002 A1
20020087975 Schlack Jul 2002 A1
20020087980 Eldering et al. Jul 2002 A1
20020090203 Mankovitz Jul 2002 A1
20020091569 Kitaura et al. Jul 2002 A1
20020091928 Bouchard et al. Jul 2002 A1
20020094868 Tuck et al. Jul 2002 A1
20020095411 Caldwell et al. Jul 2002 A1
20020095501 Chiloyan et al. Jul 2002 A1
20020095673 Leung et al. Jul 2002 A1
20020095676 Knee et al. Jul 2002 A1
20020098891 Graham et al. Jul 2002 A1
20020099600 Merriman et al. Jul 2002 A1
20020099611 De Souza et al. Jul 2002 A1
20020099653 De Souza et al. Jul 2002 A1
20020099695 Abajian et al. Jul 2002 A1
20020099738 Grant Jul 2002 A1
20020099812 Davis et al. Jul 2002 A1
20020099952 Lambert et al. Jul 2002 A1
20020100040 Bull Jul 2002 A1
20020100044 Daniels Jul 2002 A1
20020100052 Daniels Jul 2002 A1
20020102966 Lev et al. Aug 2002 A1
20020102999 Maggenti et al. Aug 2002 A1
20020103870 Shouji Aug 2002 A1
20020105423 Rast Aug 2002 A1
20020107073 Binney Aug 2002 A1
20020107075 Stephan Aug 2002 A1
20020107730 Bernstein Aug 2002 A1
20020109680 Orbanes et al. Aug 2002 A1
20020111154 Eldering et al. Aug 2002 A1
20020111172 DeWolf et al. Aug 2002 A1
20020111213 McEntee et al. Aug 2002 A1
20020111825 Martin et al. Aug 2002 A1
20020111865 Middleton, III et al. Aug 2002 A1
20020111956 Yeo et al. Aug 2002 A1
20020111960 Irons et al. Aug 2002 A1
20020112035 Carey et al. Aug 2002 A1
20020112047 Kushwaha et al. Aug 2002 A1
20020112233 Cantu Bonilla et al. Aug 2002 A1
20020112240 Bacso et al. Aug 2002 A1
20020112249 Hendricks et al. Aug 2002 A1
20020112250 Koplar et al. Aug 2002 A1
20020113815 DeGross Aug 2002 A1
20020114004 Ferlitsch Aug 2002 A1
20020114466 Tanaka et al. Aug 2002 A1
20020116284 Steelman et al. Aug 2002 A1
20020116582 Copeland et al. Aug 2002 A1
20020118671 Staples Aug 2002 A1
20020118676 Tonnby et al. Aug 2002 A1
20020118880 Liu et al. Aug 2002 A1
20020120589 Aoki Aug 2002 A1
20020121969 Joao Sep 2002 A1
20020122040 Noyle Sep 2002 A1
20020122052 Reich et al. Sep 2002 A1
20020123928 Eldering et al. Sep 2002 A1
20020124255 Reichardt et al. Sep 2002 A1
20020125411 Christy Sep 2002 A1
20020129368 Schlack et al. Sep 2002 A1
20020133398 Geller et al. Sep 2002 A1
20020133400 Terry et al. Sep 2002 A1
20020135504 Singer Sep 2002 A1
20020135815 Finn Sep 2002 A1
20020136407 Denning et al. Sep 2002 A1
20020138331 Hosea et al. Sep 2002 A1
20020138493 Shapiro et al. Sep 2002 A1
20020138840 Schein et al. Sep 2002 A1
20020143639 Beckett et al. Oct 2002 A1
20020143652 Beckett Oct 2002 A1
20020143901 Lupo et al. Oct 2002 A1
20020144010 Younis et al. Oct 2002 A1
20020144262 Plotnick et al. Oct 2002 A1
20020144263 Eldering et al. Oct 2002 A1
20020147633 Rafizadeh Oct 2002 A1
20020147638 Banerjee et al. Oct 2002 A1
20020147645 Alao et al. Oct 2002 A1
20020147766 Vanska et al. Oct 2002 A1
20020147982 Naidoo et al. Oct 2002 A1
20020151992 Hoffberg et al. Oct 2002 A1
20020152117 Cristofalo et al. Oct 2002 A1
20020152267 Lennon Oct 2002 A1
20020154631 Makansi et al. Oct 2002 A1
20020154817 Katsuyama et al. Oct 2002 A1
20020155878 Lert, Jr. et al. Oct 2002 A1
20020155891 Okada et al. Oct 2002 A1
20020157002 Messerges et al. Oct 2002 A1
20020161625 Brito-Valladares et al. Oct 2002 A1
20020161639 Goldstein Oct 2002 A1
20020163579 Patel et al. Nov 2002 A1
20020164977 Link, II et al. Nov 2002 A1
20020164999 Johnson Nov 2002 A1
20020165026 Perkins et al. Nov 2002 A1
20020165764 Wade et al. Nov 2002 A1
20020169840 Sheldon et al. Nov 2002 A1
20020170685 Weik, III et al. Nov 2002 A1
20020173317 Nykanen et al. Nov 2002 A1
20020173349 Ach, III Nov 2002 A1
20020173359 Gallo et al. Nov 2002 A1
20020173907 Ando Nov 2002 A1
20020173971 Stirpe et al. Nov 2002 A1
20020174073 Nordman et al. Nov 2002 A1
20020174185 Rawat et al. Nov 2002 A1
20020174424 Chang et al. Nov 2002 A1
20020175936 Tenembaum Nov 2002 A1
20020178153 Nishioka et al. Nov 2002 A1
20020178161 Brezin et al. Nov 2002 A1
20020178211 Singhal et al. Nov 2002 A1
20020178223 Bushkin Nov 2002 A1
20020178442 Williams Nov 2002 A1
20020178445 Eldering et al. Nov 2002 A1
20020178447 Plotnick et al. Nov 2002 A1
20020181501 Nova et al. Dec 2002 A1
20020184047 Plotnick et al. Dec 2002 A1
20020184086 Linde Dec 2002 A1
20020184088 Rosenberg Dec 2002 A1
20020184130 Blasko Dec 2002 A1
20020184224 Haff et al. Dec 2002 A1
20020184626 Darbee et al. Dec 2002 A1
20020184642 Lude et al. Dec 2002 A1
20020185590 Yahav et al. Dec 2002 A1
20020191847 Newman et al. Dec 2002 A1
20020193066 Connelly Dec 2002 A1
20020193938 DeKock et al. Dec 2002 A1
20020194058 Eldering Dec 2002 A1
20020194143 Banerjee et al. Dec 2002 A1
20020194215 Cantrell et al. Dec 2002 A1
20020194585 Connelly Dec 2002 A1
20020194590 Pong Dec 2002 A1
20020194596 Srivastava Dec 2002 A1
20020194598 Connelly Dec 2002 A1
20020194607 Connelly Dec 2002 A1
20020196254 Suzuki et al. Dec 2002 A1
20020198632 Breed et al. Dec 2002 A1
20020198633 Weimper Dec 2002 A1
20020198786 Tripp et al. Dec 2002 A1
20020199198 Stonedahl Dec 2002 A1
20030001018 Hussey et al. Jan 2003 A1
20030001816 Badarneh Jan 2003 A1
20030001849 Devins et al. Jan 2003 A1
20030001854 Jade et al. Jan 2003 A1
20030004724 Kahn et al. Jan 2003 A1
20030004810 Eldering Jan 2003 A1
20030004991 Keskar et al. Jan 2003 A1
20030005445 Schein et al. Jan 2003 A1
20030008661 Joyce et al. Jan 2003 A1
20030009367 Morrison Jan 2003 A1
20030009495 Adjaoute Jan 2003 A1
20030009602 Jacobs et al. Jan 2003 A1
20030009752 Gupta Jan 2003 A1
20030009762 Hooper et al. Jan 2003 A1
20030011684 Narayanaswami et al. Jan 2003 A1
20030012555 Yuen et al. Jan 2003 A1
20030014307 Heng Jan 2003 A1
20030014312 Fleisher Jan 2003 A1
20030014414 Newman Jan 2003 A1
20030014659 Zhu Jan 2003 A1
20030014754 Chang Jan 2003 A1
20030016005 Leibowitz et al. Jan 2003 A1
20030018430 Ladetto et al. Jan 2003 A1
20030018527 Filepp et al. Jan 2003 A1
20030018797 Dunning et al. Jan 2003 A1
20030019939 Sellen Jan 2003 A1
20030022953 Zampini et al. Jan 2003 A1
20030027558 Eisinger Feb 2003 A1
20030028433 Merriman et al. Feb 2003 A1
20030031465 Blake Feb 2003 A1
20030032409 Hutcheson et al. Feb 2003 A1
20030032476 Walker et al. Feb 2003 A1
20030033321 Schrempp et al. Feb 2003 A1
20030033331 Sena et al. Feb 2003 A1
20030033394 Stine Feb 2003 A1
20030033405 Perdon et al. Feb 2003 A1
20030034462 Remillard et al. Feb 2003 A1
20030035075 Butler et al. Feb 2003 A1
20030036881 Remillard et al. Feb 2003 A1
20030036944 Lesandrini et al. Feb 2003 A1
20030037163 Kitada et al. Feb 2003 A1
20030037181 Freed Feb 2003 A1
20030037336 Leftwich Feb 2003 A1
20030039411 Nada Feb 2003 A1
20030040957 Rodriguez et al. Feb 2003 A1
20030040962 Lewis Feb 2003 A1
20030041267 Fee et al. Feb 2003 A1
20030041329 Bassett Feb 2003 A1
20030043042 Moores, Jr. et al. Mar 2003 A1
20030045998 Medl Mar 2003 A1
20030046148 Rizzi et al. Mar 2003 A1
20030048293 Werkhoven Mar 2003 A1
20030053536 Ebrami Mar 2003 A1
20030054888 Walker et al. Mar 2003 A1
20030055689 Block et al. Mar 2003 A1
20030055722 Perreault et al. Mar 2003 A1
20030055883 Wiles, Jr. Mar 2003 A1
20030055896 Hu et al. Mar 2003 A1
20030056219 Reichardt et al. Mar 2003 A1
20030059208 Ando et al. Mar 2003 A1
20030060188 Gidron et al. Mar 2003 A1
20030060247 Goldberg et al. Mar 2003 A1
20030060956 Rao et al. Mar 2003 A1
20030060980 Prakah-Asante et al. Mar 2003 A1
20030062997 Naidoo et al. Apr 2003 A1
20030065762 Stolorz et al. Apr 2003 A1
20030065788 Salomaki Apr 2003 A1
20030065982 Grimaud et al. Apr 2003 A1
20030066092 Wagner et al. Apr 2003 A1
20030067542 Monroe Apr 2003 A1
20030069877 Grefenstette et al. Apr 2003 A1
20030069880 Harrison et al. Apr 2003 A1
20030070087 Gryaznov Apr 2003 A1
20030070091 Loveland Apr 2003 A1
20030070167 Holtz et al. Apr 2003 A1
20030073496 D'Amico et al. Apr 2003 A1
20030074252 Chandler-Pepelnjak et al. Apr 2003 A1
20030076347 Barrett et al. Apr 2003 A1
20030076367 Bencze et al. Apr 2003 A1
20030078064 Chan Apr 2003 A1
20030078978 Lardin et al. Apr 2003 A1
20030079224 Komar et al. Apr 2003 A1
20030079226 Barrett Apr 2003 A1
20030079227 Knowles et al. Apr 2003 A1
20030080878 Kirmuss May 2003 A1
20030080992 Haines May 2003 A1
20030081121 Kirmuss May 2003 A1
20030081122 Kirmuss May 2003 A1
20030081127 Kirmuss May 2003 A1
20030081128 Kirmuss May 2003 A1
20030081557 Mettala et al. May 2003 A1
20030081784 Kallahalla et al. May 2003 A1
20030081934 Kirmuss May 2003 A1
20030081935 Kirmuss May 2003 A1
20030084449 Chane et al. May 2003 A1
20030084456 Ryan et al. May 2003 A1
20030084461 Hoang May 2003 A1
20030088786 Moran et al. May 2003 A1
20030093187 Walker May 2003 A1
20030093311 Knowlson May 2003 A1
20030093329 Gutta May 2003 A1
20030093384 Durst, Jr. et al. May 2003 A1
20030093400 Santosuosso May 2003 A1
20030093545 Liu et al. May 2003 A1
20030093792 Labeeb et al. May 2003 A1
20030097227 Bloch et al. May 2003 A1
20030098352 Schnee et al. May 2003 A1
20030098800 Jambhekar et al. May 2003 A1
20030099375 Sefcik May 2003 A1
20030100375 Wakae et al. May 2003 A1
20030100965 Sitrick et al. May 2003 A1
20030100982 Rao et al. May 2003 A1
20030101292 Fisher et al. May 2003 A1
20030101329 Lahti et al. May 2003 A1
20030101341 Kettler, III et al. May 2003 A1
20030101449 Bentolila et al. May 2003 A1
20030101451 Bentolila et al. May 2003 A1
20030101454 Ozer et al. May 2003 A1
20030103484 Oommen et al. Jun 2003 A1
20030103644 Klayh Jun 2003 A1
20030104867 Kobayashi et al. Jun 2003 A1
20030107578 Willis et al. Jun 2003 A1
20030110171 Ozer et al. Jun 2003 A1
20030110495 Bennington et al. Jun 2003 A1
20030110499 Knudson et al. Jun 2003 A1
20030114157 Spitz et al. Jun 2003 A1
20030115074 Freeman et al. Jun 2003 A1
20030115318 Wueste Jun 2003 A1
20030115417 Corrigan Jun 2003 A1
20030115587 Kendall et al. Jun 2003 A1
20030115599 Bennington et al. Jun 2003 A1
20030115602 Knee et al. Jun 2003 A1
20030119528 Pew et al. Jun 2003 A1
20030120630 Tunkelang Jun 2003 A1
20030125853 Takagi et al. Jul 2003 A1
20030126150 Chan Jul 2003 A1
20030126250 Jhanji Jul 2003 A1
20030131356 Proehl et al. Jul 2003 A1
20030135513 Quinn et al. Jul 2003 A1
20030139966 Sirota et al. Jul 2003 A1
20030144044 Pisarsky Jul 2003 A1
20030144048 Silva Jul 2003 A1
20030144865 Lin et al. Jul 2003 A1
20030149528 Lin Aug 2003 A1
20030149574 Rudman Aug 2003 A1
20030149618 Sender et al. Aug 2003 A1
20030149623 Chen Aug 2003 A1
20030149678 Cook Aug 2003 A1
20030149975 Eldering et al. Aug 2003 A1
20030151507 Andre et al. Aug 2003 A1
20030153340 Crockett et al. Aug 2003 A1
20030153341 Crockett et al. Aug 2003 A1
20030153342 Crockett et al. Aug 2003 A1
20030153343 Crockett et al. Aug 2003 A1
20030154010 Rao et al. Aug 2003 A1
20030154011 Rao et al. Aug 2003 A1
20030154293 Zmolek Aug 2003 A1
20030155513 Remillard et al. Aug 2003 A1
20030158872 Adams Aug 2003 A1
20030159157 Chan Aug 2003 A1
20030160975 Skurdal et al. Aug 2003 A1
20030161298 Bergman et al. Aug 2003 A1
20030163369 Arr Aug 2003 A1
20030163482 Bunney et al. Aug 2003 A1
20030163524 Gotoh et al. Aug 2003 A1
20030163813 Klosterman et al. Aug 2003 A1
20030164858 Klosterman et al. Sep 2003 A1
20030165241 Fransdonk Sep 2003 A1
20030169181 Taylor Sep 2003 A1
20030169185 Taylor Sep 2003 A1
20030171910 Abir Sep 2003 A1
20030171988 Sugihara Sep 2003 A1
20030171990 Rao et al. Sep 2003 A1
20030172050 Decime et al. Sep 2003 A1
20030172376 Coffin, III Sep 2003 A1
20030173405 Wilz, Sr. et al. Sep 2003 A1
20030177278 DeNatale Sep 2003 A1
20030177490 Hoshino et al. Sep 2003 A1
20030182053 Swope et al. Sep 2003 A1
20030182254 Plastina et al. Sep 2003 A1
20030182315 Plastina et al. Sep 2003 A1
20030182375 Zhu et al. Sep 2003 A1
20030182399 Silber Sep 2003 A1
20030182567 Barton et al. Sep 2003 A1
20030182663 Gudorf et al. Sep 2003 A1
20030187719 Brocklebank Oct 2003 A1
20030187751 Watson et al. Oct 2003 A1
20030187886 Hull et al. Oct 2003 A1
20030188310 Klosterman et al. Oct 2003 A1
20030188311 Yuen et al. Oct 2003 A1
20030189499 Stricklin et al. Oct 2003 A1
20030190961 Seidman Oct 2003 A1
20030191690 McIntyre et al. Oct 2003 A1
20030191742 Yonezawa et al. Oct 2003 A1
20030191799 Araujo et al. Oct 2003 A1
20030191816 Landress et al. Oct 2003 A1
20030193409 Crank Oct 2003 A1
20030195021 Yamashita et al. Oct 2003 A1
20030195801 Takakura et al. Oct 2003 A1
20030195837 Kostic et al. Oct 2003 A1
20030195851 Ong Oct 2003 A1
20030196201 Schein et al. Oct 2003 A1
20030198462 Bumgardner et al. Oct 2003 A1
20030199292 Greenberg Oct 2003 A1
20030200152 Divekar Oct 2003 A1
20030200452 Tagawa et al. Oct 2003 A1
20030204620 Cheng Oct 2003 A1
20030204632 Willebeek-LeMair et al. Oct 2003 A1
20030204640 Sahinoja et al. Oct 2003 A1
20030204847 Ellis et al. Oct 2003 A1
20030206100 Richman et al. Nov 2003 A1
20030208756 Macrae et al. Nov 2003 A1
20030208758 Schein et al. Nov 2003 A1
20030212527 Moore et al. Nov 2003 A1
20030212608 Cliff Nov 2003 A1
20030212710 Guy Nov 2003 A1
20030212760 Chen et al. Nov 2003 A1
20030212996 Wolzien Nov 2003 A1
20030214405 Lerg et al. Nov 2003 A1
20030214528 Pierce et al. Nov 2003 A1
20030215211 Coffin, III Nov 2003 A1
20030216961 Barry Nov 2003 A1
20030220835 Barnes, Jr. Nov 2003 A1
20030221030 Pontius et al. Nov 2003 A1
20030222819 Karr et al. Dec 2003 A1
20030222981 Kisak et al. Dec 2003 A1
20030223381 Schroderus Dec 2003 A1
20030223637 Simske et al. Dec 2003 A1
20030225516 DeKock et al. Dec 2003 A1
20030225547 Paradies Dec 2003 A1
20030226141 Krasnow et al. Dec 2003 A1
20030226142 Rand Dec 2003 A1
20030229537 Dunning et al. Dec 2003 A1
20030229718 Tock et al. Dec 2003 A1
20030229893 Sgaraglino Dec 2003 A1
20030229900 Reisman Dec 2003 A1
20040001217 Wu Jan 2004 A1
20040002326 Maher Jan 2004 A1
20040002348 Fraccaroli Jan 2004 A1
20040002380 Brosnan et al. Jan 2004 A1
20040003392 Trajkovic et al. Jan 2004 A1
20040003396 Babu Jan 2004 A1
20040003413 Boston et al. Jan 2004 A1
20040006424 Joyce et al. Jan 2004 A1
20040006509 Mannik et al. Jan 2004 A1
20040006740 Krohn et al. Jan 2004 A1
20040008651 Ahmed Jan 2004 A1
20040009813 Wind Jan 2004 A1
20040010492 Zhao et al. Jan 2004 A1
20040010591 Sinn et al. Jan 2004 A1
20040010601 Afergan et al. Jan 2004 A1
20040010806 Yuen et al. Jan 2004 A1
20040014454 Burgess et al. Jan 2004 A1
20040014457 Stevens Jan 2004 A1
20040015397 Barry et al. Jan 2004 A1
20040015437 Choi et al. Jan 2004 A1
20040015478 Pauly Jan 2004 A1
20040015588 Cotte Jan 2004 A1
20040015608 Ellis et al. Jan 2004 A1
20040019420 Rao et al. Jan 2004 A1
20040019521 Birmingham Jan 2004 A1
20040022416 Lemelson et al. Feb 2004 A1
20040023200 Blume Feb 2004 A1
20040025174 Cerrato Feb 2004 A1
20040028295 Allen et al. Feb 2004 A1
20040029895 Takeuchi et al. Feb 2004 A1
20040030595 Park Feb 2004 A1
20040030670 Barton Feb 2004 A1
20040030798 Andersson et al. Feb 2004 A1
20040031050 Klosterman Feb 2004 A1
20040031058 Reisman Feb 2004 A1
20040034536 Hughes Feb 2004 A1
20040034686 Guthrie Feb 2004 A1
20040034697 Fairhurst et al. Feb 2004 A1
20040034752 Ohran Feb 2004 A1
20040036718 Warren et al. Feb 2004 A1
20040039648 Candelore et al. Feb 2004 A1
20040039796 Watkins Feb 2004 A1
20040039827 Thomas et al. Feb 2004 A1
20040043758 Sorvari et al. Mar 2004 A1
20040043817 Willis Mar 2004 A1
20040043819 Willis Mar 2004 A1
20040044567 Willis Mar 2004 A1
20040044569 Roberts et al. Mar 2004 A1
20040044571 Bronnimann et al. Mar 2004 A1
20040044574 Cochran et al. Mar 2004 A1
20040044576 Kurihara et al. Mar 2004 A1
20040044623 Wake et al. Mar 2004 A1
20040044627 Russell et al. Mar 2004 A1
20040044736 Austin-Lane et al. Mar 2004 A1
20040044952 Jiang et al. Mar 2004 A1
20040045025 Ward, III et al. Mar 2004 A1
20040049428 Soehnlen et al. Mar 2004 A1
20040049515 Haff et al. Mar 2004 A1
20040054264 Ogura Mar 2004 A1
20040054589 Nicholas et al. Mar 2004 A1
20040057348 Shteyn et al. Mar 2004 A1
20040059625 Schrader Mar 2004 A1
20040064453 Ruiz et al. Apr 2004 A1
20040064550 Sakata et al. Apr 2004 A1
20040067752 Himmelstein Apr 2004 A1
20040068409 Tanaka et al. Apr 2004 A1
20040068483 Sakurai et al. Apr 2004 A1
20040068552 Kotz et al. Apr 2004 A1
20040068724 Gardner, III et al. Apr 2004 A1
20040070538 Horie et al. Apr 2004 A1
20040070602 Kobuya et al. Apr 2004 A1
20040073361 Tzamaloukas et al. Apr 2004 A1
20040073482 Wiggins et al. Apr 2004 A1
20040073483 Cohen et al. Apr 2004 A1
20040073630 Copeland et al. Apr 2004 A1
20040073642 Iyer Apr 2004 A1
20040073874 Poibeau et al. Apr 2004 A1
20040073899 Luk et al. Apr 2004 A1
20040073924 Pendakur Apr 2004 A1
20040078263 Altieri Apr 2004 A1
20040078266 Kim Apr 2004 A1
20040078292 Blumenau Apr 2004 A1
20040078621 Talaugon et al. Apr 2004 A1
20040078809 Drazin Apr 2004 A1
20040078815 Lemmons et al. Apr 2004 A1
20040078820 Nickum Apr 2004 A1
20040083133 Nicholas et al. Apr 2004 A1
20040084109 Piccinino, Jr. et al. May 2004 A1
20040088558 Candelore May 2004 A1
20040088583 Yoon et al. May 2004 A1
20040090121 Simonds et al. May 2004 A1
20040093141 Rao et al. May 2004 A1
20040098165 Butikofer May 2004 A1
20040098744 Gutta May 2004 A1
20040102248 Young et al. May 2004 A1
20040103024 Patel et al. May 2004 A1
20040103429 Carlucci et al. May 2004 A1
20040103439 Macrae et al. May 2004 A1
20040107033 Rao et al. Jun 2004 A1
20040107136 Nemirofsky et al. Jun 2004 A1
20040107437 Reichardt et al. Jun 2004 A1
20040110565 Levesque Jun 2004 A1
20040111200 Rao et al. Jun 2004 A1
20040111317 Ebisawa Jun 2004 A1
20040111484 Young et al. Jun 2004 A1
20040116183 Prindle Jun 2004 A1
20040117091 Prakah-Asante et al. Jun 2004 A1
20040117272 Shehab Jun 2004 A1
20040121835 Willis et al. Jun 2004 A1
20040121842 Willis et al. Jun 2004 A1
20040122856 Clearwater Jun 2004 A1
20040126747 Fujisawa et al. Jul 2004 A1
20040128286 Yasushi et al. Jul 2004 A1
20040128514 Rhoads Jul 2004 A1
20040128670 Robinson et al. Jul 2004 A1
20040130550 Blanco et al. Jul 2004 A1
20040133480 Domes Jul 2004 A1
20040133518 Dryall Jul 2004 A1
20040137929 Jones et al. Jul 2004 A1
20040137980 Aenlle Jul 2004 A1
20040139025 Coleman Jul 2004 A1
20040139047 Rechsteiner et al. Jul 2004 A1
20040139064 Chevallier et al. Jul 2004 A1
20040139106 Bachman et al. Jul 2004 A1
20040139107 Bachman et al. Jul 2004 A1
20040139400 Allam et al. Jul 2004 A1
20040139465 Matthews, III et al. Jul 2004 A1
20040140352 Walker et al. Jul 2004 A1
20040143478 Ward Jul 2004 A1
20040143495 Koenig Jul 2004 A1
20040145459 Himmelstein Jul 2004 A1
20040145496 Ellis Jul 2004 A1
20040147265 Kelley et al. Jul 2004 A1
20040148221 Chu Jul 2004 A1
20040148341 Cotte Jul 2004 A1
20040148424 Berkson et al. Jul 2004 A1
20040148625 Eldering et al. Jul 2004 A1
20040152477 Wu et al. Aug 2004 A1
20040152517 Hardisty et al. Aug 2004 A1
20040152518 Kogo Aug 2004 A1
20040153360 Schumann Aug 2004 A1
20040153363 Stehling Aug 2004 A1
20040153385 Allibhoy et al. Aug 2004 A1
20040153453 Brodie et al. Aug 2004 A1
20040158492 Lopez et al. Aug 2004 A1
20040158858 Paxton et al. Aug 2004 A1
20040162064 Himmelstein Aug 2004 A1
20040162758 Willis Aug 2004 A1
20040162759 Willis Aug 2004 A1
20040163101 Swix et al. Aug 2004 A1
20040163134 Willis Aug 2004 A1
20040164228 Fogg et al. Aug 2004 A1
20040168063 Revital et al. Aug 2004 A1
20040168188 Bennington et al. Aug 2004 A1
20040168189 Reynolds et al. Aug 2004 A1
20040168202 Ebihara Aug 2004 A1
20040169678 Oliver Sep 2004 A1
20040172324 Merriman et al. Sep 2004 A1
20040172331 Merriman et al. Sep 2004 A1
20040172332 Merriman et al. Sep 2004 A1
20040172343 Allibhoy et al. Sep 2004 A1
20040172550 Sai Sep 2004 A1
20040176170 Eck et al. Sep 2004 A1
20040176995 Fusz Sep 2004 A1
20040177001 Salinas Sep 2004 A1
20040181688 Wittkotter Sep 2004 A1
20040181808 Schaefer et al. Sep 2004 A1
20040183829 Kontny et al. Sep 2004 A1
20040186766 Fellenstein et al. Sep 2004 A1
20040186771 Squires Sep 2004 A1
20040186859 Butcher Sep 2004 A1
20040189512 Takashima et al. Sep 2004 A1
20040189691 Jojic et al. Sep 2004 A1
20040193371 Koshiji et al. Sep 2004 A1
20040193413 Wilson et al. Sep 2004 A1
20040193488 Khoo et al. Sep 2004 A1
20040194123 Fredlund et al. Sep 2004 A1
20040194128 McIntyre et al. Sep 2004 A1
20040194138 Boylan, III et al. Sep 2004 A1
20040199575 Geller Oct 2004 A1
20040201500 Miller et al. Oct 2004 A1
20040201629 Bates et al. Oct 2004 A1
20040201683 Murashita et al. Oct 2004 A1
20040203851 Vetro et al. Oct 2004 A1
20040203909 Koster Oct 2004 A1
20040203931 Karaoguz Oct 2004 A1
20040204238 Aoki Oct 2004 A1
20040204247 Walker et al. Oct 2004 A1
20040204266 Owens et al. Oct 2004 A1
20040204806 Chen et al. Oct 2004 A1
20040204953 Muir et al. Oct 2004 A1
20040205101 Radhakrishnan Oct 2004 A1
20040205151 Sprigg et al. Oct 2004 A1
20040205157 Bibelnieks et al. Oct 2004 A1
20040205508 Wecker et al. Oct 2004 A1
20040205534 Koelle Oct 2004 A1
20040205807 Wilcoxson et al. Oct 2004 A1
20040206809 Wood et al. Oct 2004 A1
20040209602 Joyce et al. Oct 2004 A1
20040210472 Lew et al. Oct 2004 A1
20040210489 Jackson et al. Oct 2004 A1
20040210653 Kanoor et al. Oct 2004 A1
20040210661 Thompson Oct 2004 A1
20040210824 Shoff et al. Oct 2004 A1
20040210935 Schein et al. Oct 2004 A1
20040215931 Ellis Oct 2004 A1
20040219977 Ebisawa Nov 2004 A1
20040220850 Ferrer et al. Nov 2004 A1
20040220858 Maggio Nov 2004 A1
20040221018 Ji Nov 2004 A1
20040221310 Herrington et al. Nov 2004 A1
20040224772 Canessa et al. Nov 2004 A1
20040225562 Turner Nov 2004 A1
20040225715 Gottfried Nov 2004 A1
20040229194 Yang Nov 2004 A1
20040229632 Flynn et al. Nov 2004 A1
20040230593 Rudin et al. Nov 2004 A1
20040230994 Urdang et al. Nov 2004 A1
20040234932 Hughes et al. Nov 2004 A1
20040236585 Kohnke et al. Nov 2004 A1
20040236791 Kinjo Nov 2004 A1
20040243455 Smith Dec 2004 A1
20040243466 Trzybinski et al. Dec 2004 A1
20040243470 Ozer et al. Dec 2004 A1
20040243623 Ozer et al. Dec 2004 A1
20040248569 Kondou et al. Dec 2004 A1
20040248649 Arai et al. Dec 2004 A1
20040249786 Dabney et al. Dec 2004 A1
20040250131 Swander et al. Dec 2004 A1
20040250201 Caspi Dec 2004 A1
20040252051 Johnson Dec 2004 A1
20040252679 Williams et al. Dec 2004 A1
20040254795 Fujii et al. Dec 2004 A1
20040254831 Dean Dec 2004 A1
20040254957 Hyotyniemi et al. Dec 2004 A1
20040255148 Monteiro et al. Dec 2004 A1
20040256454 Kocher Dec 2004 A1
20040258274 Brundage et al. Dec 2004 A1
20040259553 Delaney et al. Dec 2004 A1
20040260470 Rast Dec 2004 A1
20040260609 Loeb et al. Dec 2004 A1
20040260618 Larson Dec 2004 A1
20040260669 Fernandez Dec 2004 A1
20040260804 Grabarnik et al. Dec 2004 A1
20040261098 Macrae et al. Dec 2004 A1
20040261125 Ellis et al. Dec 2004 A1
20040263337 Terauchi et al. Dec 2004 A1
20040266535 Reeves Dec 2004 A1
20040266537 Morris Dec 2004 A1
20040267611 Hoerenz Dec 2004 A1
20040267715 Polson et al. Dec 2004 A1
20040267734 Toshima Dec 2004 A1
20040267880 Patiejunas Dec 2004 A1
20040267952 He et al. Dec 2004 A1
20040268237 Jones et al. Dec 2004 A1
20040268347 Knauerhase et al. Dec 2004 A1
20050000213 Cameron Jan 2005 A1
20050003797 Baldwin Jan 2005 A1
20050005168 Dick Jan 2005 A1
20050005242 Hoyle Jan 2005 A1
20050005246 Card et al. Jan 2005 A1
20050009506 Smolentzov et al. Jan 2005 A1
20050010765 Swander et al. Jan 2005 A1
20050010949 Ward et al. Jan 2005 A1
20050011012 Sun et al. Jan 2005 A1
20050011013 Schrott et al. Jan 2005 A1
20050011014 Schrott et al. Jan 2005 A1
20050011015 Schmidt et al. Jan 2005 A1
20050011016 Pasquier et al. Jan 2005 A1
20050011017 Legrand et al. Jan 2005 A1
20050011026 Hafliger et al. Jan 2005 A1
20050011042 Hupp et al. Jan 2005 A1
20050011043 Comstock Jan 2005 A1
20050011084 Stephenson Jan 2005 A1
20050011085 Swigart et al. Jan 2005 A1
20050011088 Theurer et al. Jan 2005 A1
20050011089 Duke Jan 2005 A1
20050011090 Bedretdinov Jan 2005 A1
20050011533 Ruben Jan 2005 A1
20050011534 Kampel Jan 2005 A1
20050012510 Thibedeau et al. Jan 2005 A1
20050013114 Ha Jan 2005 A1
20050013297 Eriksson Jan 2005 A1
20050013586 Bhatia et al. Jan 2005 A1
20050015267 Barringer et al. Jan 2005 A1
20050015451 Sheldon et al. Jan 2005 A1
20050015599 Wang et al. Jan 2005 A1
20050015804 LaJoie et al. Jan 2005 A1
20050015815 Shoff et al. Jan 2005 A1
20050017333 Bohr Jan 2005 A1
20050021387 Gottfurcht Jan 2005 A1
20050021396 Pearch et al. Jan 2005 A1
20050021397 Cui et al. Jan 2005 A1
20050021403 Ozer et al. Jan 2005 A1
20050021465 Segerstrom Jan 2005 A1
20050021470 Martin et al. Jan 2005 A1
20050021666 Dinnage et al. Jan 2005 A1
20050021853 Parekh et al. Jan 2005 A1
20050021980 Kanai Jan 2005 A1
20050022010 Swander et al. Jan 2005 A1
20050022114 Shanahan et al. Jan 2005 A1
20050025242 Ma et al. Feb 2005 A1
20050025732 Ansmann et al. Feb 2005 A1
20050026569 Lim et al. Feb 2005 A1
20050027587 Latona et al. Feb 2005 A1
20050027595 Ha et al. Feb 2005 A1
20050027699 Awadallah et al. Feb 2005 A1
20050028034 Gantman et al. Feb 2005 A1
20050028188 Latona et al. Feb 2005 A1
20050028195 Feinleib et al. Feb 2005 A1
20050028201 Klosterman et al. Feb 2005 A1
20050028208 Ellis et al. Feb 2005 A1
20050028218 Blake Feb 2005 A1
20050029534 Ochiai et al. Feb 2005 A1
20050029536 Sugitatsu et al. Feb 2005 A1
20050029537 D'Evelyn et al. Feb 2005 A1
20050029539 Toda et al. Feb 2005 A1
20050029680 Jung et al. Feb 2005 A1
20050030007 Sakata Feb 2005 A1
20050032577 Blackburn et al. Feb 2005 A1
20050033700 Vogler et al. Feb 2005 A1
20050033713 Bala et al. Feb 2005 A1
20050033970 Anson et al. Feb 2005 A1
20050034319 Terrazas Feb 2005 A1
20050034734 Tweardy Feb 2005 A1
20050038698 Lukose et al. Feb 2005 A1
20050038702 Merriman et al. Feb 2005 A1
20050039178 Marolia et al. Feb 2005 A1
20050041578 Huotari et al. Feb 2005 A1
20050050027 Yeh et al. Mar 2005 A1
20050050043 Pyhalammi et al. Mar 2005 A1
20050050070 Sheldon Mar 2005 A1
20050050222 Packer Mar 2005 A1
20050055321 Fratkina et al. Mar 2005 A1
20050055345 Ripley Mar 2005 A1
20050055417 Reich et al. Mar 2005 A1
20050055725 Stewart Mar 2005 A1
20050060264 Schrock et al. Mar 2005 A1
20050060350 Baum et al. Mar 2005 A1
20050060365 Robinson et al. Mar 2005 A1
20050060381 Huynh et al. Mar 2005 A1
20050065950 Chaganti et al. Mar 2005 A1
20050065980 Hyatt et al. Mar 2005 A1
20050070221 Upton Mar 2005 A1
20050071665 Zimmer et al. Mar 2005 A1
20050075155 Sitrick Apr 2005 A1
20050075172 Coleman Apr 2005 A1
20050075908 Stevens Apr 2005 A1
20050076051 Carobus et al. Apr 2005 A1
20050076060 Finn et al. Apr 2005 A1
20050076095 Mathew et al. Apr 2005 A1
20050086187 Grosser et al. Apr 2005 A1
20050086467 Asokan et al. Apr 2005 A1
20050091107 Blum Apr 2005 A1
20050091108 Frost Apr 2005 A1
20050091111 Green et al. Apr 2005 A1
20050091118 Fano Apr 2005 A1
20050091146 Levinson Apr 2005 A1
20050091184 Seshadri et al. Apr 2005 A1
20050091302 Soin et al. Apr 2005 A1
20050091578 Madan et al. Apr 2005 A1
20050091626 Okano et al. Apr 2005 A1
20050096750 Kagan et al. May 2005 A1
20050096755 Heizmann et al. May 2005 A1
20050096975 Moshe May 2005 A1
20050096983 Werkhoven May 2005 A1
20050097008 Ehring et al. May 2005 A1
20050097335 Shenoy et al. May 2005 A1
20050097599 Plotnick et al. May 2005 A1
20050097622 Zigmond et al. May 2005 A1
20050098596 Yano et al. May 2005 A1
20050098597 Cottrell et al. May 2005 A1
20050098598 Kuhn May 2005 A1
20050098599 von Foerster May 2005 A1
20050098600 Yeh et al. May 2005 A1
20050098601 Dragov May 2005 A1
20050098602 Mintzer May 2005 A1
20050098603 Mochizuki et al. May 2005 A1
20050098604 Marks May 2005 A1
20050098605 Edelstein et al. May 2005 A1
20050098606 Takeuchi et al. May 2005 A1
20050098607 Bartley et al. May 2005 A1
20050098609 Greenhut et al. May 2005 A1
20050098610 Onobori et al. May 2005 A1
20050101192 Foskey May 2005 A1
20050101193 Godard May 2005 A1
20050101386 Lavanchy et al. May 2005 A1
20050102177 Takayama May 2005 A1
20050102202 Linden et al. May 2005 A1
20050102610 Jie May 2005 A1
20050105552 Osterling May 2005 A1
20050106643 Saatcioglu May 2005 A1
20050107158 Kanisawa et al. May 2005 A1
20050108001 Aarskog May 2005 A1
20050108095 Perlmutter May 2005 A1
20050108195 Yalovsky et al. May 2005 A1
20050108213 Riise et al. May 2005 A1
20050108322 Kline et al. May 2005 A1
20050112030 Gaus May 2005 A1
20050113170 McHugh May 2005 A1
20050114357 Chengalvarayan et al. May 2005 A1
20050114375 Frieder et al. May 2005 A1
20050114380 Eldar et al. May 2005 A1
20050114526 Aoyama May 2005 A1
20050120003 Drury et al. Jun 2005 A1
20050120006 Nye Jun 2005 A1
20050125117 Breed Jun 2005 A1
20050125286 Crippen et al. Jun 2005 A1
20050125513 Sin-Ling Lam et al. Jun 2005 A1
20050125823 McCoy et al. Jun 2005 A1
20050130656 Chen Jun 2005 A1
20050130725 Creamer et al. Jun 2005 A1
20050131727 Sezan et al. Jun 2005 A1
20050132281 Pan et al. Jun 2005 A1
20050135366 Trappeniers et al. Jun 2005 A1
20050136949 Barnes, Jr. Jun 2005 A1
20050137765 Hein et al. Jun 2005 A1
20050138660 Boyer et al. Jun 2005 A1
20050139649 Metcalf et al. Jun 2005 A1
20050141709 Bratton Jun 2005 A1
20050143174 Goldman et al. Jun 2005 A1
20050144063 Spector Jun 2005 A1
20050144073 Morrisroe et al. Jun 2005 A1
20050144074 Fredregill et al. Jun 2005 A1
20050144475 Sakaki et al. Jun 2005 A1
20050148377 Goldberg et al. Jul 2005 A1
20050149396 Horowitz et al. Jul 2005 A1
20050149397 Morgenstern et al. Jul 2005 A1
20050149538 Singh et al. Jul 2005 A1
20050149726 Joshi et al. Jul 2005 A1
20050149964 Thomas et al. Jul 2005 A1
20050151849 Fitzhugh et al. Jul 2005 A1
20050153654 Anderson et al. Jul 2005 A1
20050153760 Varley Jul 2005 A1
20050154608 Paulson et al. Jul 2005 A1
20050154640 Kolluri et al. Jul 2005 A1
20050154699 Lipkin et al. Jul 2005 A1
20050154717 Watson et al. Jul 2005 A1
20050154760 Bhakta et al. Jul 2005 A1
20050155056 Knee et al. Jul 2005 A1
20050155083 Oh et al. Jul 2005 A1
20050159220 Wilson et al. Jul 2005 A1
20050159970 Buyukkokten et al. Jul 2005 A1
20050160080 Dawson Jul 2005 A1
20050160442 Kaplowitz Jul 2005 A1
20050164757 Ebisawa Jul 2005 A1
20050165640 Kotorov Jul 2005 A1
20050165644 Beyda et al. Jul 2005 A1
20050165699 Hahn-Carlson Jul 2005 A1
20050166240 Kim Jul 2005 A1
20050171865 Beardow Aug 2005 A1
20050171955 Hull et al. Aug 2005 A1
20050177385 Hull et al. Aug 2005 A1
20050177413 Blumberg et al. Aug 2005 A1
20050177430 Willis Aug 2005 A1
20050177431 Willis et al. Aug 2005 A1
20050177461 Rosefelt et al. Aug 2005 A1
20050178940 Granick Aug 2005 A1
20050179685 Kake et al. Aug 2005 A1
20050180330 Shapiro Aug 2005 A1
20050182693 Alivandi Aug 2005 A1
20050182737 Brown Aug 2005 A1
20050182824 Cotte Aug 2005 A1
20050183110 Anderson Aug 2005 A1
20050185825 Hoshino et al. Aug 2005 A1
20050187786 Tsai Aug 2005 A1
20050192025 Kaplan Sep 2005 A1
20050192071 Matsuno et al. Sep 2005 A1
20050192864 Ganz Sep 2005 A1
20050193014 Prince Sep 2005 A1
20050193054 Wilson et al. Sep 2005 A1
20050193075 Haff et al. Sep 2005 A1
20050193396 Stafford-Fraser et al. Sep 2005 A1
20050193411 Funston Sep 2005 A1
20050193414 Horvitz et al. Sep 2005 A1
20050193425 Sull et al. Sep 2005 A1
20050195157 Kramer et al. Sep 2005 A1
20050195696 Rekimoto Sep 2005 A1
20050197977 Buck et al. Sep 2005 A1
20050198075 Plastina et al. Sep 2005 A1
20050202385 Coward et al. Sep 2005 A1
20050203801 Morgenstern et al. Sep 2005 A1
20050203804 Suzuki et al. Sep 2005 A1
20050203807 Bezos et al. Sep 2005 A1
20050203811 David Sep 2005 A1
20050204381 Ludvig et al. Sep 2005 A1
20050204388 Knudson et al. Sep 2005 A1
20050205671 Gelsomini et al. Sep 2005 A1
20050209995 Aksu et al. Sep 2005 A1
20050210101 Janik Sep 2005 A1
20050216295 Abrahamsohn Sep 2005 A1
20050216300 Appelman et al. Sep 2005 A1
20050216346 Kusumoto et al. Sep 2005 A1
20050216348 Martin et al. Sep 2005 A1
20050216581 Blumenau et al. Sep 2005 A1
20050216855 Kopra et al. Sep 2005 A1
20050216936 Knudson et al. Sep 2005 A1
20050219375 Hasegawa et al. Oct 2005 A1
20050220359 Sun et al. Oct 2005 A1
20050222801 Wulff et al. Oct 2005 A1
20050222908 Altberg et al. Oct 2005 A1
20050223039 Kim et al. Oct 2005 A1
20050227749 Bender et al. Oct 2005 A1
20050228683 Saylor et al. Oct 2005 A1
20050228797 Koningstein et al. Oct 2005 A1
20050229215 Schein et al. Oct 2005 A1
20050231746 Parry et al. Oct 2005 A1
20050233741 Zamani et al. Oct 2005 A1
20050234781 Morgenstern et al. Oct 2005 A1
20050234851 King et al. Oct 2005 A1
20050234907 Yamagishi et al. Oct 2005 A1
20050235030 Lauckhart et al. Oct 2005 A1
20050235199 Adams Oct 2005 A1
20050235310 Bies Oct 2005 A1
20050235318 Grauch et al. Oct 2005 A1
20050235320 Maze et al. Oct 2005 A1
20050235338 AbiEzzi et al. Oct 2005 A1
20050235811 Dukane Oct 2005 A1
20050240476 Bigott Oct 2005 A1
20050240962 Cooper et al. Oct 2005 A1
20050246314 Eder Nov 2005 A1
20050246436 Day et al. Nov 2005 A1
20050246736 Beyda et al. Nov 2005 A1
20050247769 Potter et al. Nov 2005 A1
20050251448 Gropper Nov 2005 A1
20050251539 Parekh et al. Nov 2005 A1
20050251822 Knowles et al. Nov 2005 A1
20050251824 Thomas et al. Nov 2005 A1
20050251827 Ellis et al. Nov 2005 A1
20050254366 Amar Nov 2005 A1
20050256768 Robinson Nov 2005 A1
20050256867 Walther et al. Nov 2005 A1
20050256923 Adachi Nov 2005 A1
20050259675 Tuohino et al. Nov 2005 A1
20050260984 Karabinis Nov 2005 A1
20050261062 Lewin et al. Nov 2005 A1
20050261962 Chuah Nov 2005 A1
20050262058 Chandrasekar et al. Nov 2005 A1
20050262101 Halpern et al. Nov 2005 A1
20050262132 Morita et al. Nov 2005 A1
20050262569 Shay Nov 2005 A1
20050262570 Shay Nov 2005 A1
20050264417 Miller et al. Dec 2005 A1
20050265169 Yoshimaru et al. Dec 2005 A1
20050266858 Miller et al. Dec 2005 A1
20050266906 Stevens Dec 2005 A1
20050266907 Weston et al. Dec 2005 A1
20050267638 Peshkin et al. Dec 2005 A1
20050267819 Kaplan Dec 2005 A1
20050268342 Shay Dec 2005 A1
20050270358 Kuchen et al. Dec 2005 A1
20050270537 Mian et al. Dec 2005 A1
20050272442 Miller et al. Dec 2005 A1
20050273510 Schuh Dec 2005 A1
20050275505 Himmelstein Dec 2005 A1
20050276570 Reed, Jr. et al. Dec 2005 A1
20050278314 Buchheit Dec 2005 A1
20050278333 Daniels et al. Dec 2005 A1
20050278483 Andruszkiewicz et al. Dec 2005 A1
20050278712 Buskens et al. Dec 2005 A1
20050278741 Robarts et al. Dec 2005 A1
20050283395 Lesandrini et al. Dec 2005 A1
20050283401 Swix et al. Dec 2005 A1
20050283640 Cheston et al. Dec 2005 A1
20050288954 McCarthy et al. Dec 2005 A1
20050288999 Lerner et al. Dec 2005 A1
20050289054 Silverbrook et al. Dec 2005 A1
20050289542 Uhlig et al. Dec 2005 A1
20060003795 Yamanaka et al. Jan 2006 A1
20060004667 Neil Jan 2006 A1
20060007108 Utsumi et al. Jan 2006 A1
20060007312 James Jan 2006 A1
20060010400 Dehlin et al. Jan 2006 A1
20060010440 Anderson et al. Jan 2006 A1
20060011728 Frantz et al. Jan 2006 A1
20060014727 Karsan et al. Jan 2006 A1
20060015571 Fukuda et al. Jan 2006 A1
20060015904 Marcus Jan 2006 A1
20060018198 McDonald et al. Jan 2006 A1
20060018208 Nathan et al. Jan 2006 A1
20060018209 Drakoulis et al. Jan 2006 A1
20060019676 Miller et al. Jan 2006 A1
20060020062 Bloom Jan 2006 A1
20060020631 Cheong Wan et al. Jan 2006 A1
20060020662 Robinson Jan 2006 A1
20060023715 Chen et al. Feb 2006 A1
20060023717 Trachtman et al. Feb 2006 A1
20060023718 Joly Feb 2006 A1
20060023806 Huang et al. Feb 2006 A1
20060023937 Tessadro Feb 2006 A1
20060023945 King et al. Feb 2006 A1
20060025985 Vinberg et al. Feb 2006 A1
20060026013 Kraft Feb 2006 A1
20060026067 Nicholas et al. Feb 2006 A1
20060026188 Najork et al. Feb 2006 A1
20060026219 Orenstein et al. Feb 2006 A1
20060026263 Raghavan et al. Feb 2006 A1
20060029259 Harrington et al. Feb 2006 A1
20060030334 Hashimoto Feb 2006 A1
20060031108 Oran Feb 2006 A1
20060031505 Ashley Feb 2006 A1
20060031551 Agresta et al. Feb 2006 A1
20060031883 Ellis et al. Feb 2006 A1
20060034218 Ozluturk et al. Feb 2006 A1
20060036462 King et al. Feb 2006 A1
20060036570 Schaefer et al. Feb 2006 A1
20060036853 Chen et al. Feb 2006 A1
20060037011 Shi et al. Feb 2006 A1
20060037044 Daniels Feb 2006 A1
20060040719 Plimi Feb 2006 A1
20060041484 King et al. Feb 2006 A1
20060041538 King et al. Feb 2006 A1
20060041590 King et al. Feb 2006 A1
20060041605 King et al. Feb 2006 A1
20060041635 Alexander et al. Feb 2006 A1
20060045374 Kim et al. Mar 2006 A1
20060047563 Wardell Mar 2006 A1
20060047615 Ravin et al. Mar 2006 A1
20060047663 Rail Mar 2006 A1
20060048046 Joshi et al. Mar 2006 A1
20060048330 Rust et al. Mar 2006 A1
20060052837 Kim et al. Mar 2006 A1
20060053058 Hotchkiss et al. Mar 2006 A1
20060053077 Mourad et al. Mar 2006 A1
20060053097 King et al. Mar 2006 A1
20060053225 Poikselka et al. Mar 2006 A1
20060059091 Wang et al. Mar 2006 A1
20060059253 Goodman et al. Mar 2006 A1
20060062094 Nathan et al. Mar 2006 A1
20060067296 Bershad et al. Mar 2006 A1
20060069612 Hurt et al. Mar 2006 A1
20060069616 Bau Mar 2006 A1
20060069749 Herz et al. Mar 2006 A1
20060072724 Cohen et al. Apr 2006 A1
20060074750 Clark et al. Apr 2006 A1
20060074853 Liu et al. Apr 2006 A1
20060075032 Jain et al. Apr 2006 A1
20060075252 Kallahalla et al. Apr 2006 A1
20060075327 Sriver Apr 2006 A1
20060075508 Guo et al. Apr 2006 A1
20060080356 Burges et al. Apr 2006 A1
20060081714 King et al. Apr 2006 A1
20060082591 Emerson et al. Apr 2006 A1
20060083217 Bae Apr 2006 A1
20060085392 Wang et al. Apr 2006 A1
20060085408 Morsa Apr 2006 A1
20060085419 Rosen Apr 2006 A1
20060085477 Phillips et al. Apr 2006 A1
20060085517 Kaurila Apr 2006 A1
20060085816 Funk et al. Apr 2006 A1
20060085825 Istvan et al. Apr 2006 A1
20060089876 Boys Apr 2006 A1
20060090084 Buer Apr 2006 A1
20060090186 Santangelo et al. Apr 2006 A1
20060093971 Liu et al. May 2006 A1
20060095516 Wijeratne May 2006 A1
20060095538 Rehman et al. May 2006 A1
20060098900 King et al. May 2006 A1
20060100978 Heller et al. May 2006 A1
20060101285 Chen et al. May 2006 A1
20060101514 Milener et al. May 2006 A1
20060103665 Opala et al. May 2006 A1
20060103893 Azimi et al. May 2006 A1
20060104515 King et al. May 2006 A1
20060109266 Itkowitz et al. May 2006 A1
20060111970 Hill et al. May 2006 A1
20060112098 Renshaw et al. May 2006 A1
20060112410 Poli et al. May 2006 A1
20060114451 Wang et al. Jun 2006 A1
20060116924 Angles et al. Jun 2006 A1
20060119900 King et al. Jun 2006 A1
20060122983 King et al. Jun 2006 A1
20060123053 Scannell, Jr. Jun 2006 A1
20060124496 Gasque Jun 2006 A1
20060129313 Becker et al. Jun 2006 A1
20060129605 Doshi Jun 2006 A1
20060132349 Stern et al. Jun 2006 A1
20060136629 King et al. Jun 2006 A1
20060136720 Armstrong et al. Jun 2006 A1
20060136910 Brickell et al. Jun 2006 A1
20060136911 Robinson et al. Jun 2006 A1
20060136966 Folk Jun 2006 A1
20060138219 Brzezniak et al. Jun 2006 A1
20060143650 Tanikawa et al. Jun 2006 A1
20060146169 Segman Jul 2006 A1
20060146766 Nakajima et al. Jul 2006 A1
20060150249 Gassen et al. Jul 2006 A1
20060155398 Hoffberg et al. Jul 2006 A1
20060155735 Traut et al. Jul 2006 A1
20060156336 Knudson et al. Jul 2006 A1
20060161671 Ryman et al. Jul 2006 A1
20060161829 Kobayashi Jul 2006 A1
20060161894 Oustiougov et al. Jul 2006 A1
20060165571 Seon Jul 2006 A1
20060167747 Goodman et al. Jul 2006 A1
20060167784 Hoffberg Jul 2006 A1
20060168591 Hunsinger et al. Jul 2006 A1
20060173838 Garg et al. Aug 2006 A1
20060173859 Kim et al. Aug 2006 A1
20060173910 McLaughlin Aug 2006 A1
20060173916 Verbeck Sibley et al. Aug 2006 A1
20060173985 Moore Aug 2006 A1
20060178822 Lee Aug 2006 A1
20060179476 Challener et al. Aug 2006 A1
20060184508 Fuselier et al. Aug 2006 A1
20060184579 Mills et al. Aug 2006 A1
20060184937 Abels et al. Aug 2006 A1
20060193471 Stehle Aug 2006 A1
20060195438 Galuten Aug 2006 A1
20060195462 Rogers Aug 2006 A1
20060195513 Rogers et al. Aug 2006 A1
20060195514 Rogers et al. Aug 2006 A1
20060195515 Beaupre et al. Aug 2006 A1
20060195516 Beaupre Aug 2006 A1
20060195521 New et al. Aug 2006 A1
20060195695 Keys Aug 2006 A1
20060195789 Rogers et al. Aug 2006 A1
20060195790 Beaupre et al. Aug 2006 A1
20060195859 Konig et al. Aug 2006 A1
20060195860 Eldering et al. Aug 2006 A1
20060200253 Hoffberg et al. Sep 2006 A1
20060200258 Hoffberg et al. Sep 2006 A1
20060200259 Hoffberg et al. Sep 2006 A1
20060200260 Hoffberg et al. Sep 2006 A1
20060200780 Iwema et al. Sep 2006 A1
20060206259 Stiller et al. Sep 2006 A1
20060206576 Obradovich et al. Sep 2006 A1
20060206715 Cowan et al. Sep 2006 A1
20060206811 Dowdy Sep 2006 A1
20060206900 Ooyama et al. Sep 2006 A1
20060206931 Dillaway et al. Sep 2006 A1
20060209021 Yoo et al. Sep 2006 A1
20060209957 Riemens et al. Sep 2006 A1
20060212330 Savilampi Sep 2006 A1
20060212347 Fang et al. Sep 2006 A1
20060212350 Ellis et al. Sep 2006 A1
20060212401 Ameerally et al. Sep 2006 A1
20060212558 Sahinoja et al. Sep 2006 A1
20060212562 Kushwaha et al. Sep 2006 A1
20060212894 Knudson et al. Sep 2006 A1
20060218187 Plastina et al. Sep 2006 A1
20060218536 Kirilline et al. Sep 2006 A1
20060218544 Chakraborty et al. Sep 2006 A1
20060224895 Mayer Oct 2006 A1
20060225065 Chandhok et al. Oct 2006 A1
20060227945 Runge et al. Oct 2006 A1
20060229940 Grossman Oct 2006 A1
20060230141 Willis Oct 2006 A1
20060234639 Kushwaha et al. Oct 2006 A1
20060234698 Fok et al. Oct 2006 A1
20060235816 Yang et al. Oct 2006 A1
20060236257 Othmer et al. Oct 2006 A1
20060236408 Yan Oct 2006 A1
20060239579 Ritter Oct 2006 A1
20060242139 Butterfield et al. Oct 2006 A1
20060242178 Butterfield et al. Oct 2006 A1
20060242259 Vallabh et al. Oct 2006 A1
20060242667 Petersen et al. Oct 2006 A1
20060242703 Abeni Oct 2006 A1
20060248209 Chiu et al. Nov 2006 A1
20060253323 Phan et al. Nov 2006 A1
20060253330 Maggio et al. Nov 2006 A1
20060253874 Stark et al. Nov 2006 A1
20060256371 King et al. Nov 2006 A1
20060258368 Granito et al. Nov 2006 A1
20060259240 Hashimoto Nov 2006 A1
20060259592 Angeline Nov 2006 A1
20060259783 Work et al. Nov 2006 A1
20060259889 Crosetto Nov 2006 A1
20060265503 Jones et al. Nov 2006 A1
20060265733 Chen et al. Nov 2006 A1
20060265734 Chen et al. Nov 2006 A1
20060266839 Yavid et al. Nov 2006 A1
20060268667 Jellison, Jr. et al. Nov 2006 A1
20060271395 Harris et al. Nov 2006 A1
20060274060 Ni et al. Dec 2006 A1
20060274828 Siemens et al. Dec 2006 A1
20060277098 Chung et al. Dec 2006 A1
20060277574 Schein et al. Dec 2006 A1
20060282311 Jiang Dec 2006 A1
20060282455 Lee et al. Dec 2006 A1
20060283952 Wang Dec 2006 A1
20060288044 Kashiwagi et al. Dec 2006 A1
20060288366 Boylan et al. Dec 2006 A1
20060288367 Swix et al. Dec 2006 A1
20060294421 Schneider Dec 2006 A1
20060294566 Zlattner Dec 2006 A1
20060294575 Rogers Dec 2006 A1
20070009245 Ito Jan 2007 A1
20070013560 Casey Jan 2007 A1
20070014536 Hellman Jan 2007 A1
20070015519 Casey Jan 2007 A1
20070016476 Hoffberg et al. Jan 2007 A1
20070016507 Tzara Jan 2007 A1
20070016926 Ward et al. Jan 2007 A1
20070026854 Nath et al. Feb 2007 A1
20070027771 Collins et al. Feb 2007 A1
20070030539 Nath et al. Feb 2007 A1
20070033613 Ward et al. Feb 2007 A1
20070038931 Allaire et al. Feb 2007 A1
20070042765 Bailin et al. Feb 2007 A1
20070043766 Nicholas et al. Feb 2007 A1
20070043829 Dua Feb 2007 A1
20070043860 Pabari Feb 2007 A1
20070050254 Driscoll Mar 2007 A1
20070050409 Bugir et al. Mar 2007 A1
20070050673 DiBartolomeo et al. Mar 2007 A1
20070050712 Hull et al. Mar 2007 A1
20070050842 Smith et al. Mar 2007 A1
20070051217 Weber Mar 2007 A1
20070053513 Hoffberg Mar 2007 A1
20070054677 Himmelstein Mar 2007 A1
20070055980 Megeid et al. Mar 2007 A1
20070060056 Whitaker et al. Mar 2007 A1
20070061022 Hoffberg-Borghesani et al. Mar 2007 A1
20070061023 Hoffberg et al. Mar 2007 A1
20070061146 Jaramillo et al. Mar 2007 A1
20070061487 Moore et al. Mar 2007 A1
20070061735 Hoffberg et al. Mar 2007 A1
20070063875 Hoffberg Mar 2007 A1
20070066287 Papulov Mar 2007 A1
20070067104 Mays Mar 2007 A1
20070067267 Ives Mar 2007 A1
20070067775 Shultz et al. Mar 2007 A1
20070068708 Marks Mar 2007 A1
20070070038 Hoffberg et al. Mar 2007 A1
20070072591 McGary et al. Mar 2007 A1
20070073583 Grouf et al. Mar 2007 A1
20070073641 Perry et al. Mar 2007 A1
20070073756 Manhas et al. Mar 2007 A1
20070074214 Ueno et al. Mar 2007 A1
20070075622 Guo et al. Apr 2007 A1
20070078706 Datta et al. Apr 2007 A1
20070078712 Ott, IV et al. Apr 2007 A1
20070078714 Ott, IV et al. Apr 2007 A1
20070078989 van Datta et al. Apr 2007 A1
20070079326 Datta et al. Apr 2007 A1
20070079331 Datta et al. Apr 2007 A1
20070079335 McDonough Apr 2007 A1
20070082678 Himmelstein Apr 2007 A1
20070083611 Farago et al. Apr 2007 A1
20070084807 Holmes et al. Apr 2007 A1
20070086061 Robbins Apr 2007 A1
20070087756 Hoffberg Apr 2007 A1
20070088852 Levkovitz Apr 2007 A1
20070089151 Moore et al. Apr 2007 A1
20070092053 Sato Apr 2007 A1
20070094081 Yruski et al. Apr 2007 A1
20070094082 Yruski et al. Apr 2007 A1
20070094083 Yruski et al. Apr 2007 A1
20070094363 Yruski et al. Apr 2007 A1
20070100690 Hopkins May 2007 A1
20070100956 Kumar May 2007 A1
20070101360 Gutta et al. May 2007 A1
20070106681 Haot et al. May 2007 A1
20070107010 Jolna et al. May 2007 A1
20070112762 Brubaker May 2007 A1
20070115868 Chen et al. May 2007 A1
20070115897 Chen et al. May 2007 A1
20070118425 Yruski et al. May 2007 A1
20070118546 Acharya May 2007 A1
20070121843 Atazky et al. May 2007 A1
20070126874 Kake Jun 2007 A1
20070130012 Yruski et al. Jun 2007 A1
20070130137 Oliver et al. Jun 2007 A1
20070130232 Therrien et al. Jun 2007 A1
20070130594 Hidary et al. Jun 2007 A1
20070134193 Pauly et al. Jun 2007 A1
20070136048 Richardson-Bunbury et al. Jun 2007 A1
20070136235 Hess Jun 2007 A1
20070136256 Kapur et al. Jun 2007 A1
20070136264 Tran Jun 2007 A1
20070136689 Richardson-Bunbury et al. Jun 2007 A1
20070141020 Barritault et al. Jun 2007 A1
20070143345 Jones et al. Jun 2007 A1
20070146812 Lawton Jun 2007 A1
20070150168 Balcom et al. Jun 2007 A1
20070150359 Lim et al. Jun 2007 A1
20070150919 Morishita Jun 2007 A1
20070155411 Morrison Jul 2007 A1
20070156677 Szabo Jul 2007 A1
20070157242 Cordray et al. Jul 2007 A1
20070159455 Lin Jul 2007 A1
20070161382 Melinger et al. Jul 2007 A1
20070162850 Adler et al. Jul 2007 A1
20070162945 Mills Jul 2007 A1
20070167226 Kelly et al. Jul 2007 A1
20070168430 Brun et al. Jul 2007 A1
20070169121 Hunt et al. Jul 2007 A1
20070173266 Barnes, Jr. Jul 2007 A1
20070174471 Van Rossum Jul 2007 A1
20070179792 Kramer Aug 2007 A1
20070179987 Lim Aug 2007 A1
20070180493 Croft et al. Aug 2007 A1
20070185599 Robinson et al. Aug 2007 A1
20070190506 Jeng et al. Aug 2007 A1
20070192299 Zuckerberg et al. Aug 2007 A1
20070192329 Croft et al. Aug 2007 A1
20070192839 Fee et al. Aug 2007 A1
20070194119 Vinogradov et al. Aug 2007 A1
20070198506 Attaran Rezaei et al. Aug 2007 A1
20070198563 Apparao et al. Aug 2007 A1
20070198612 Prahlad et al. Aug 2007 A1
20070198656 Mazzaferri et al. Aug 2007 A1
20070203591 Bowerman Aug 2007 A1
20070203790 Torrens et al. Aug 2007 A1
20070204266 Beaty et al. Aug 2007 A1
20070204308 Nicholas et al. Aug 2007 A1
20070207797 Pitt et al. Sep 2007 A1
20070208561 Choi et al. Sep 2007 A1
20070208685 Blumenau Sep 2007 A1
20070208732 Flowers et al. Sep 2007 A1
20070214133 Liberty et al. Sep 2007 A1
20070214408 Straub et al. Sep 2007 A1
20070219708 Brasche et al. Sep 2007 A1
20070219940 Mueller et al. Sep 2007 A1
20070228306 Gannon et al. Oct 2007 A1
20070233585 Ben Simon et al. Oct 2007 A1
20070233806 Asadi Oct 2007 A1
20070238076 Burstein et al. Oct 2007 A1
20070239348 Cheung Oct 2007 A1
20070239517 Chung et al. Oct 2007 A1
20070240079 Flynt et al. Oct 2007 A1
20070244760 Bodnar et al. Oct 2007 A1
20070244880 Martin et al. Oct 2007 A1
20070249406 Andreasson Oct 2007 A1
20070250761 Bradley et al. Oct 2007 A1
20070259653 Tang et al. Nov 2007 A1
20070259716 Mattice et al. Nov 2007 A1
20070260508 Barry et al. Nov 2007 A1
20070260604 Haeuser et al. Nov 2007 A1
20070263932 Bernardin et al. Nov 2007 A1
20070271286 Purang et al. Nov 2007 A1
20070271297 Jaffe et al. Nov 2007 A1
20070271340 Goodman et al. Nov 2007 A1
20070271582 Ellis et al. Nov 2007 A1
20070271610 Grobman Nov 2007 A1
20070273758 Mendoza et al. Nov 2007 A1
20070276940 Abraham et al. Nov 2007 A1
20070279494 Aman et al. Dec 2007 A1
20070279711 King et al. Dec 2007 A1
20070282621 Altman et al. Dec 2007 A1
20070282675 Varghese Dec 2007 A1
20070288228 Taillefer et al. Dec 2007 A1
20070288278 Alexander et al. Dec 2007 A1
20070288310 Boos et al. Dec 2007 A1
20070288589 Chen et al. Dec 2007 A1
20070294096 Randall et al. Dec 2007 A1
20070294689 Garney Dec 2007 A1
20070294740 Drake et al. Dec 2007 A1
20070299935 Plastina et al. Dec 2007 A1
20070300142 King et al. Dec 2007 A1
20080002074 Lee et al. Jan 2008 A1
20080004802 Horvitz Jan 2008 A1
20080004948 Flake et al. Jan 2008 A1
20080004990 Flake et al. Jan 2008 A1
20080005264 Brunell et al. Jan 2008 A1
20080005313 Flake et al. Jan 2008 A1
20080005651 Grefenstette et al. Jan 2008 A1
20080010206 Coleman Jan 2008 A1
20080010655 Ellis et al. Jan 2008 A1
20080011829 Roth Jan 2008 A1
20080013429 Chen et al. Jan 2008 A1
20080014255 Tagawa et al. Jan 2008 A1
20080015748 Nagy Jan 2008 A1
20080016187 Neil et al. Jan 2008 A1
20080021957 Medved et al. Jan 2008 A1
20080022054 Hertzberg et al. Jan 2008 A1
20080023550 Yu et al. Jan 2008 A1
20080026804 Baray et al. Jan 2008 A1
20080026838 Dunstan et al. Jan 2008 A1
20080027881 Bisse Jan 2008 A1
20080028031 Bailey et al. Jan 2008 A1
20080028674 Jackson et al. Feb 2008 A1
20080031213 Kaiser et al. Feb 2008 A1
20080031625 Okuda et al. Feb 2008 A1
20080033897 Lloyd Feb 2008 A1
20080040283 Morris Feb 2008 A1
20080040749 Hoffberg et al. Feb 2008 A1
20080046298 Ben-Yehuda et al. Feb 2008 A1
20080046317 Christianson et al. Feb 2008 A1
20080046417 Jeffery et al. Feb 2008 A1
20080046592 Gilhuly et al. Feb 2008 A1
20080046948 Verosub Feb 2008 A1
20080065471 Reynolds et al. Mar 2008 A1
20080070588 Morin Mar 2008 A1
20080071136 Oohashi et al. Mar 2008 A1
20080071775 Gross Mar 2008 A1
20080072134 Balakrishnan et al. Mar 2008 A1
20080072874 Baeuerle Mar 2008 A1
20080077264 Irvin et al. Mar 2008 A1
20080082467 Meijer et al. Apr 2008 A1
20080082903 McCurdy et al. Apr 2008 A1
20080085135 Rieck Apr 2008 A1
20080085915 Becker et al. Apr 2008 A1
20080086356 Glassman et al. Apr 2008 A1
20080086431 Robinson et al. Apr 2008 A1
20080086948 Schussler et al. Apr 2008 A1
20080088228 Noguchi et al. Apr 2008 A1
20080090591 Miller et al. Apr 2008 A1
20080091528 Rampell et al. Apr 2008 A1
20080091537 Miller et al. Apr 2008 A1
20080091796 Story et al. Apr 2008 A1
20080091954 Morris et al. Apr 2008 A1
20080092140 Doninger et al. Apr 2008 A1
20080093460 Frantz et al. Apr 2008 A1
20080096664 Baray et al. Apr 2008 A1
20080097872 Peckover Apr 2008 A1
20080102911 Campbell et al. May 2008 A1
20080104061 Rezaei May 2008 A1
20080104106 Rosenberg et al. May 2008 A1
20080104227 Birnie et al. May 2008 A1
20080109761 Stambaugh May 2008 A1
20080109843 Ullah May 2008 A1
20080109844 Baldeschwieler et al. May 2008 A1
20080114642 Goldberg et al. May 2008 A1
20080114751 Cramer et al. May 2008 A1
20080117202 Martinez et al. May 2008 A1
20080119212 Himmelstein May 2008 A1
20080120183 Park May 2008 A1
20080120308 Martinez et al. May 2008 A1
20080120690 Norlander et al. May 2008 A1
20080126415 Chaudhury et al. May 2008 A1
20080126439 Kaminsky May 2008 A1
20080126565 Osano et al. May 2008 A1
20080126960 Naaman et al. May 2008 A1
20080126961 Naaman et al. May 2008 A1
20080127121 Fenton et al. May 2008 A1
20080127244 Zhang May 2008 A1
20080133540 Hubbard et al. Jun 2008 A1
20080133601 Martin Cervera et al. Jun 2008 A1
20080133750 Grabarnik et al. Jun 2008 A1
20080134239 Knowles et al. Jun 2008 A1
20080137971 King et al. Jun 2008 A1
20080140239 Rosenberg et al. Jun 2008 A1
20080140717 Rosenberg et al. Jun 2008 A1
20080141117 King et al. Jun 2008 A1
20080141372 Massey et al. Jun 2008 A1
20080146248 Himmelstein Jun 2008 A1
20080147655 Sinha et al. Jun 2008 A1
20080147743 Taylor et al. Jun 2008 A1
20080148175 Naaman et al. Jun 2008 A1
20080152191 Fujimura et al. Jun 2008 A1
20080153564 Baerlocher et al. Jun 2008 A1
20080154720 Gounares et al. Jun 2008 A1
20080155588 Roberts et al. Jun 2008 A1
20080155602 Collet et al. Jun 2008 A1
20080161018 Miller et al. Jul 2008 A1
20080163284 Martinez et al. Jul 2008 A1
20080170674 Ozden et al. Jul 2008 A1
20080172365 Ozden et al. Jul 2008 A1
20080172632 Stambaugh Jul 2008 A1
20080177706 Yuen Jul 2008 A1
20080177825 Dubinko et al. Jul 2008 A1
20080178221 Schein et al. Jul 2008 A1
20080184225 Fitzgerald et al. Jul 2008 A1
20080184304 Ellis et al. Jul 2008 A1
20080184308 Herrington et al. Jul 2008 A1
20080184312 Schein et al. Jul 2008 A1
20080184313 Knudson et al. Jul 2008 A1
20080184322 Blake Jul 2008 A1
20080189742 Ellis et al. Aug 2008 A1
20080189743 Ellis et al. Aug 2008 A1
20080192005 Elgoyhen et al. Aug 2008 A1
20080195664 Maharajh et al. Aug 2008 A1
20080199042 Smith Aug 2008 A1
20080207137 Maharajh et al. Aug 2008 A1
20080219502 Shamaie Sep 2008 A1
20080220855 Chen et al. Sep 2008 A1
20080221487 Zohar et al. Sep 2008 A1
20080222166 Hultgren et al. Sep 2008 A1
20080225041 El Dokor et al. Sep 2008 A1
20080235093 Uland Sep 2008 A1
20080235725 Hendricks Sep 2008 A1
20080244579 Muller Oct 2008 A1
20080255989 Altberg et al. Oct 2008 A1
20080261529 Rosenblatt Oct 2008 A1
20080263600 Olague et al. Oct 2008 A1
20080270221 Clemens et al. Oct 2008 A1
20080270579 Herz et al. Oct 2008 A1
20080274804 Harrison et al. Nov 2008 A1
20080285807 Lee et al. Nov 2008 A1
20080285886 Allen Nov 2008 A1
20080288980 Schein et al. Nov 2008 A1
20080289023 Wardrop Nov 2008 A1
20080301250 Hardy et al. Dec 2008 A1
20080307066 Amidon et al. Dec 2008 A1
20080313172 King et al. Dec 2008 A1
20080320001 Gaddam Dec 2008 A1
20090003796 Borghesani et al. Jan 2009 A1
20090005987 Vengroff et al. Jan 2009 A1
20090006336 Forstall et al. Jan 2009 A1
20090012806 Ricordi et al. Jan 2009 A1
20090012934 Yerigan Jan 2009 A1
20090012965 Franken Jan 2009 A1
20090015461 Pitt et al. Jan 2009 A1
20090017919 Brennan Jan 2009 A1
20090018990 Moraleda Jan 2009 A1
20090024504 Lerman et al. Jan 2009 A1
20090024510 Chen et al. Jan 2009 A1
20090024592 Lazarski et al. Jan 2009 A1
20090027337 Hildreth Jan 2009 A1
20090030405 Quick et al. Jan 2009 A1
20090030406 Hickingbotham Jan 2009 A1
20090032173 Nakamura Feb 2009 A1
20090034444 Wang et al. Feb 2009 A1
20090034445 Prakash et al. Feb 2009 A1
20090043844 Zimmet et al. Feb 2009 A1
20090044132 Combel et al. Feb 2009 A1
20090044226 Ellis et al. Feb 2009 A1
20090046258 Schnuckle et al. Feb 2009 A1
20090051247 Kakehi et al. Feb 2009 A1
20090051648 Shamaie et al. Feb 2009 A1
20090055503 Crivella et al. Feb 2009 A1
20090060374 Wang Mar 2009 A1
20090060379 Manabe Mar 2009 A1
20090060476 Iwamoto et al. Mar 2009 A1
20090063254 Paul et al. Mar 2009 A1
20090070186 Buiten et al. Mar 2009 A1
20090070817 Ellis et al. Mar 2009 A1
20090073174 Berg et al. Mar 2009 A1
20090073191 Smith et al. Mar 2009 A1
20090076889 Jhanji Mar 2009 A1
20090076939 Berg et al. Mar 2009 A1
20090076974 Berg et al. Mar 2009 A1
20090077504 Bell et al. Mar 2009 A1
20090077658 King et al. Mar 2009 A1
20090079614 Pitt et al. Mar 2009 A1
20090079813 Hildreth Mar 2009 A1
20090082701 Zohar et al. Mar 2009 A1
20090082950 Vorona Mar 2009 A1
20090083307 Martin Cervera et al. Mar 2009 A1
20090083788 Russell et al. Mar 2009 A1
20090089222 Ferreira De Castro et al. Apr 2009 A1
20090100052 Stern et al. Apr 2009 A1
20090102838 Bullard et al. Apr 2009 A1
20090103902 Matsuura et al. Apr 2009 A1
20090106085 Raimbeault Apr 2009 A1
20090106356 Brase et al. Apr 2009 A1
20090122723 Hirano et al. May 2009 A1
20090125517 Krishnaswamy et al. May 2009 A1
20090132941 Pilskalns et al. May 2009 A1
20090141933 Wagg Jun 2009 A1
20090144141 Dominowska et al. Jun 2009 A1
20090149046 Nakamura Jun 2009 A1
20090150501 Davis et al. Jun 2009 A1
20090150507 Davis et al. Jun 2009 A1
20090156125 Himmelstein Jun 2009 A1
20090156203 Himmelstein Jun 2009 A1
20090164924 Flake et al. Jun 2009 A1
20090164992 Flake et al. Jun 2009 A1
20090164993 Flake et al. Jun 2009 A1
20090165051 Armaly Jun 2009 A1
20090165134 Flake et al. Jun 2009 A1
20090170604 Mueller et al. Jul 2009 A1
20090171939 Athsani et al. Jul 2009 A1
20090177603 Honisch Jul 2009 A1
20090183081 Rodriguez et al. Jul 2009 A1
20090184849 Nasiri et al. Jul 2009 A1
20090186704 Goldberg et al. Jul 2009 A1
20090187637 Wu et al. Jul 2009 A1
20090189892 Desai et al. Jul 2009 A1
20090198354 Wilson Aug 2009 A1
20090204481 Navar et al. Aug 2009 A1
20090204484 Johnson Aug 2009 A1
20090204672 Jetha et al. Aug 2009 A1
20090204676 Parkinson et al. Aug 2009 A1
20090210415 Martin et al. Aug 2009 A1
20090216606 Coffman et al. Aug 2009 A1
20090221275 Trip Sep 2009 A1
20090221368 Yen et al. Sep 2009 A1
20090222302 Higgins et al. Sep 2009 A1
20090222303 Higgins et al. Sep 2009 A1
20090222304 Higgins et al. Sep 2009 A1
20090228841 Hildreth Sep 2009 A1
20090231278 St. Hilaire et al. Sep 2009 A1
20090234814 Boerries et al. Sep 2009 A1
20090234909 Strandell et al. Sep 2009 A1
20090241144 LaJoie et al. Sep 2009 A1
20090247219 Lin et al. Oct 2009 A1
20090249482 Sarathy Oct 2009 A1
20090265431 Jania et al. Oct 2009 A1
20090276369 Mabry et al. Nov 2009 A1
20090281997 Jain Nov 2009 A1
20090282458 Hjelm Nov 2009 A1
20090297124 Ng Dec 2009 A1
20090299837 Steelberg et al. Dec 2009 A1
20090304009 Kolhi et al. Dec 2009 A1
20090311994 Himmelstein Dec 2009 A1
20090311995 Himmelstein Dec 2009 A1
20090313461 Klug Dec 2009 A1
20090313546 Katpelly et al. Dec 2009 A1
20090319462 Tirpak et al. Dec 2009 A1
20090320047 Khan et al. Dec 2009 A1
20090323519 Pun Dec 2009 A1
20090324008 Kongqiao et al. Dec 2009 A1
20090328087 Higgins et al. Dec 2009 A1
20100000497 Lee et al. Jan 2010 A1
20100000498 Middlebrook et al. Jan 2010 A1
20100000499 Braun et al. Jan 2010 A1
20100002635 Eklund Jan 2010 A1
20100014444 Ghanadan et al. Jan 2010 A1
20100020797 Casey et al. Jan 2010 A1
20100022310 van Datta et al. Jan 2010 A1
20100026063 Mosler et al. Feb 2010 A1
20100027254 Nakayama Feb 2010 A1
20100027255 Chang et al. Feb 2010 A1
20100027256 Kinoshita Feb 2010 A1
20100028066 Thomson Feb 2010 A1
20100030640 van Datta et al. Feb 2010 A1
20100043022 Kaftan Feb 2010 A1
20100057290 Brillhart et al. Mar 2010 A1
20100063993 Higgins et al. Mar 2010 A1
20100070368 Choi et al. Mar 2010 A1
20100076642 Hoffberg et al. Mar 2010 A1
20100082688 Davis et al. Apr 2010 A1
20100088273 Donaldson Apr 2010 A1
20100092095 King et al. Apr 2010 A1
20100096191 Lassoie et al. Apr 2010 A1
20100096192 Kawanishi Apr 2010 A1
20100096193 Yilmaz et al. Apr 2010 A1
20100105244 Keith et al. Apr 2010 A1
20100105245 Good et al. Apr 2010 A1
20100105246 Burris et al. Apr 2010 A1
20100108159 Williamson May 2010 A1
20100115413 Schein et al. May 2010 A1
20100115541 Schein et al. May 2010 A1
20100118025 Smith et al. May 2010 A1
20100121848 Yaroslavskiy et al. May 2010 A1
20100125563 Nair et al. May 2010 A1
20100125569 Nair et al. May 2010 A1
20100125604 Martinez et al. May 2010 A1
20100125605 Nair et al. May 2010 A1
20100169910 Collins et al. Jul 2010 A1
20100177970 King et al. Jul 2010 A1
20100182631 King et al. Jul 2010 A1
20100183246 King et al. Jul 2010 A1
20100185538 King et al. Jul 2010 A1
20100185620 Schiller Jul 2010 A1
20100185642 Higgins et al. Jul 2010 A1
20100203969 Takahashi et al. Aug 2010 A1
20100211969 Schein et al. Aug 2010 A1
20100214148 Kuhn Aug 2010 A1
20100214149 Kuhn Aug 2010 A1
20100214214 Corson et al. Aug 2010 A1
20100224457 Majeau Sep 2010 A1
20100228619 Goldberg et al. Sep 2010 A1
20100228620 Goldberg et al. Sep 2010 A1
20100235233 Goldberg et al. Sep 2010 A1
20100238065 Pitt et al. Sep 2010 A1
20100266210 Markovic et al. Oct 2010 A1
20100269138 Krikorian et al. Oct 2010 A1
20100278453 King Nov 2010 A1
20100291874 Himmelstein Nov 2010 A1
20100312433 Preston et al. Dec 2010 A1
20100318797 King et al. Dec 2010 A1
20110004669 Navar et al. Jan 2011 A1
20110010545 Kill et al. Jan 2011 A1
20110015975 Yruski et al. Jan 2011 A1
20110016468 Singh Jan 2011 A1
20110019020 King et al. Jan 2011 A1
20110019919 King et al. Jan 2011 A1
20110022940 King et al. Jan 2011 A1
20110025842 King et al. Feb 2011 A1
20110026838 King et al. Feb 2011 A1
20110029383 Engel et al. Feb 2011 A1
20110029443 King et al. Feb 2011 A1
20110029504 King et al. Feb 2011 A1
20110030027 Nishioka et al. Feb 2011 A1
20110033080 King et al. Feb 2011 A1
20110035289 King et al. Feb 2011 A1
20110035656 King et al. Feb 2011 A1
20110035662 King et al. Feb 2011 A1
20110041084 Karam Feb 2011 A1
20110043652 King et al. Feb 2011 A1
20110044547 King et al. Feb 2011 A1
20110072012 Ah-Pine et al. Mar 2011 A1
20110072395 King et al. Mar 2011 A1
20110072490 Chen et al. Mar 2011 A1
20110075228 King et al. Mar 2011 A1
20110078585 King et al. Mar 2011 A1
20110085211 King et al. Apr 2011 A1
20110096174 King et al. Apr 2011 A1
20110102443 Dror et al. May 2011 A1
20110125582 Datta et al. May 2011 A1
20110131005 Ueshima et al. Jun 2011 A1
20110142371 King et al. Jun 2011 A1
20110145068 King et al. Jun 2011 A1
20110145102 King et al. Jun 2011 A1
20110150335 King et al. Jun 2011 A1
20110153653 King et al. Jun 2011 A1
20110154507 King et al. Jun 2011 A1
20110156896 Hoffberg et al. Jun 2011 A1
20110167075 King et al. Jul 2011 A1
20110173660 Schein et al. Jul 2011 A1
20110197069 Rodgers et al. Aug 2011 A9
20110209191 Shah Aug 2011 A1
20110228791 Flinta et al. Sep 2011 A1
20110234490 Markovic et al. Sep 2011 A1
20110242617 King et al. Oct 2011 A1
20110295842 King et al. Dec 2011 A1
20110299125 King et al. Dec 2011 A1
20110307339 Russell et al. Dec 2011 A1
20120017232 Hoffberg et al. Jan 2012 A1
20120036016 Hoffberg et al. Feb 2012 A1
20120036532 Ellis et al. Feb 2012 A1
20120041941 King et al. Feb 2012 A1
20120054367 Ramakrishnan et al. Mar 2012 A1
20120072274 King et al. Mar 2012 A1
20120078729 Goldberg et al. Mar 2012 A1
20120135805 Miller, IV May 2012 A1
20120151577 King et al. Jun 2012 A1
20120196660 El Dokor et al. Aug 2012 A1
20120208510 Engstrom et al. Aug 2012 A1
20120218260 Rivera et al. Aug 2012 A1
20120272270 Boyer et al. Oct 2012 A1
20130232000 van Datta et al. Sep 2013 A1
20130232001 van Datta et al. Sep 2013 A1
Foreign Referenced Citations (127)
Number Date Country
2009171 Aug 1990 CA
1298387 Mar 1992 CA
1298903 Apr 1992 CA
2245963 Feb 2000 CA
3125161 Jan 1983 DE
3310111 Sep 1984 DE
3419156 Nov 1985 DE
4035979 Jun 1991 DE
4123097 Jan 1992 DE
4237987 May 1994 DE
19743137 Apr 1999 DE
19922608 Nov 2000 DE
19931161 Jan 2001 DE
0059120 Sep 1982 EP
0155776 Sep 1985 EP
0158214 Oct 1985 EP
0181012 May 1986 EP
181012 May 1986 EP
0290725 Nov 1988 EP
0295678 Dec 1988 EP
0323230 Jul 1989 EP
0323246 Jul 1989 EP
0348528 Jan 1990 EP
0379198 Jul 1990 EP
0393935 Oct 1990 EP
0441576 Aug 1991 EP
0444738 Sep 1991 EP
0485120 Mar 1992 EP
0501058 Sep 1992 EP
0512789 Nov 1992 EP
0718614 Jun 1996 EP
0748727 Dec 1996 EP
0750406 Dec 1996 EP
0785535 Jul 1997 EP
0814393 Dec 1997 EP
0841648 May 1998 EP
0921411 Jun 1999 EP
1099341 May 2001 EP
1213919 Jun 2002 EP
1247394 Oct 2002 EP
1355128 Oct 2003 EP
1099341 Oct 2004 EP
2018728 Jan 2009 EP
2554612 May 1985 FR
2126040 Mar 1984 GB
2238870 Jun 1991 GB
2256987 Dec 1992 GB
2261977 Jun 1993 GB
2320973 Jul 1998 GB
1980076706 Jun 1980 JP
1986115298 Jul 1986 JP
1988066479 Mar 1988 JP
1988188517 Mar 1988 JP
03-078678 Apr 1991 JP
03-092714 Apr 1991 JP
2000092714 Mar 2000 JP
2000207691 Jul 2000 JP
2000261731 Sep 2000 JP
2000267564 Sep 2000 JP
2001041753 Feb 2001 JP
2001089414 Apr 2001 JP
2001127047 May 2001 JP
2001173815 Jun 2001 JP
2001173817 Jun 2001 JP
2002004285 Jan 2002 JP
2002103584 Apr 2002 JP
2002212713 Jul 2002 JP
2003044015 Feb 2003 JP
2003137679 May 2003 JP
2003150699 May 2003 JP
2003245075 Sep 2003 JP
2003245076 Sep 2003 JP
2003269317 Sep 2003 JP
2005010775 Jan 2005 JP
2008287386 Nov 2008 JP
2010079679 Oct 2010 JP
2010243438 Oct 2010 JP
2011030299 Feb 2011 JP
2011036300 Feb 2011 JP
2011250383 Dec 2011 JP
2012072656 Apr 2012 JP
2011-283158 Jul 2013 JP
WO9214215 Aug 1992 WO
WO9219078 Oct 1992 WO
WO9221001 Nov 1992 WO
WO9500860 Jan 1995 WO
WO9515658 Jun 1995 WO
WO9522131 Aug 1995 WO
WO9615614 May 1996 WO
WO9624229 Aug 1996 WO
WO9708839 May 1997 WO
WO9722066 Jun 1997 WO
WO9723973 Jul 1997 WO
WO9726061 Jul 1997 WO
WO9729373 Aug 1997 WO
WO9908436 Feb 1999 WO
WO9917477 Apr 1999 WO
WO9923809 May 1999 WO
WO9957662 Nov 1999 WO
WO9959097 Nov 1999 WO
WO9965183 Dec 1999 WO
WO0029948 May 2000 WO
WO0040038 Jul 2000 WO
WO0054237 Sep 2000 WO
WO0130061 Apr 2001 WO
WO0150728 Jul 2001 WO
WO0158110 Aug 2001 WO
WO2004054264 Jun 2004 WO
WO2004100010 Nov 2004 WO
WO2005086969 Sep 2005 WO
WO2005091626 Sep 2005 WO
WO2005122013 Dec 2005 WO
WO2006074305 Jul 2006 WO
WO2006116196 Nov 2006 WO
WO2007022137 Feb 2007 WO
WO2007041022 Apr 2007 WO
WO2007041028 Apr 2007 WO
WO2007070358 Jun 2007 WO
WO2007141020 Jun 2007 WO
WO2007079395 Jul 2007 WO
WO2007027453 Aug 2007 WO
WO2007113546 Oct 2007 WO
WO2007130681 Nov 2007 WO
WO2008028674 Mar 2008 WO
WO2007106185 Aug 2008 WO
WO2008031625 Dec 2008 WO
WO2009059065 May 2009 WO
Non-Patent Literature Citations (1)
Entry
US 6,731,928, 5/2004, Tanaka (Withdrawn)
Related Publications (1)
Number Date Country
20140173452 A1 Jun 2014 US
Continuations (6)
Number Date Country
Parent 13043411 Mar 2011 US
Child 14078334 US
Parent 11363393 Feb 2006 US
Child 13043411 US
Parent 10693759 Oct 2003 US
Child 11363393 US
Parent 10162079 Jun 2002 US
Child 10693759 US
Parent 09241135 Feb 1999 US
Child 10162079 US
Parent 11363411 Feb 2006 US
Child 13043411 US