Methods of and systems for content search based on environment sampling

Abstract
The present disclosure provides user interface methods of and systems for displaying at least one available action overlaid on an image, comprising displaying an image; selecting at least one action and assigning a ranking weight thereto based on at least one of (1) image content, (2) current device location, (3) location at which the image was taken, (4) date of capturing the image, (5) time of capturing the image, and (6) a user preference signature representing prior actions chosen by a user and content preferences learned about the user; and ranking the at least one action based on its assigned ranking weight.
Description
FIELD OF THE DISCLOSURE

The present method and system relate to a live capture image recognition interface for camera-equipped mobile devices such as smart phones and hand-held tablets.


BACKGROUND OF THE DISCLOSURE
Description of Related Art

Camera-equipped mobile phones with resolutions rivaling standalone cameras have become the norm, making social interactions richer with the addition of images and videos that capture the moment. Concurrent with this change, applications for using a camera as an input medium for specific use cases are also emerging, similar to text input using a keypad.


For instance, FIG. 10 illustrates that some search engines on mobile devices place a camera icon alongside a text input search box to encourage users to take a picture and use that picture, instead of typed text, as input to the search engine. The search engine may further use the user's location to refine the picture-based search. The application then displays the search results as in any conventional text-based search engine. GOOGLE “goggles” similarly places a camera icon adjacent to the text input box. A user uses the application to take a picture of a scene and performs a search using the captured image or text. The application captures all text in the range of view and uses the text as input to search. The application then displays the results as in a conventional text-based search engine. Commerce-centric applications use a camera as an input means to take an image snapshot of a barcode to provide comparative pricing information. Some mobile devices are experimenting with a hardware camera button, similar to keypad hardware for text input, to launch a camera and perform typical actions such as sharing a picture with a friend or uploading the picture to a photo sharing site.


In an MIT Media Lab project on wearable computers, the “sixth sense” device system uses a camera as an input means to sample the environment. The system aims to bring the digital world to the real world and vice versa. Hand gestures serve as cues to capture an image and to project a phone keypad onto the palm of the user's hand. The gesture interface is used as a means to enter a phone number and make a call. Furthermore, the gesture interface is used to find additional information about an object such as a book, as the user looks at the book, and to project price information and the like onto the physical book. The MIT Media Lab project is not a phone-based interface, though it uses a camera as an input interface to sample the surrounding visual environment and interact with it.


Users have to be aware of applications that are capable of using a camera as the input medium. More importantly, users must pick the right application for a particular use case. As illustrated in FIG. 10, some phones offer a choice of actions after an image is taken, such as uploading the image, saving it to a photo library, or emailing it to friends. This actions list could expand to cover more use cases (commerce enabling, image-input-based search, etc.). One drawback of such an approach, particularly on a mobile device, is inundating the user with a multitude of choices, which makes the process cumbersome. This difficulty decreases the likelihood of mass acceptance of camera input as a powerful and easy alternative to text input. Furthermore, as more actions are added to the list, choosing a specific application up front would appear easier than navigating through a long list of potential actions in a centralized input interface once a picture is taken.


SUMMARY OF THE DISCLOSURE

The present disclosure provides user interface methods of and systems for displaying at least one available action overlaid on an image, including displaying an image; selecting at least one action and assigning a ranking weight thereto based on at least one of (1) image content, (2) current device location, (3) location at which the image was taken, (4) date of capturing the image, (5) time of capturing the image, and (6) a user preference signature representing prior actions chosen by a user and content preferences learned about the user; and ranking the at least one action based on its assigned ranking weight.


Under another aspect of the invention, the method also includes displaying the at least one action in the ranked order.


Under another aspect of the invention, the image is an image of a portion of an environment surrounding the user.


Under another aspect of the invention, the selecting at least one action and assigning a ranking weight thereto includes determining the ranking weight by a machine learning process.


Under another aspect of the invention, the method also includes selecting the highest ranked action in response to activation of a hardware camera button.


Under another aspect of the invention, the method also includes analyzing the image to learn about the image content.


Under a further aspect of the invention, the method also includes using at least one of the location of the device and the location at which the image was taken to augment the analyzing the image to learn about the image content.


Under a still further aspect of the invention, the one or more actions include an action to purchase an item corresponding to the displayed image from an online storefront corresponding to a physical storefront, if the device's location is proximate to the physical storefront.


Under another aspect of the invention, the analyzing the image to learn about the image content includes comparing the image against a collection of at least one sample image to determine the image content.


Under a further aspect of the invention, the analyzing the image to learn about the image content includes using optical character recognition to learn about textual image content.


Under another aspect of the invention, the analyzing the image to learn about the image content includes analyzing at least one partial image selected based on a proximity of the at least one partial image to a visual field of interest for the user.


Under another aspect of the invention, the method also includes storing the image to a memory along with data about at least one of the location of the device, the date at which the image was captured, and the time at which the image was captured; and displaying the at least one action in the ranked order when the user later acts upon the stored image.


Under another aspect of the invention, the method also includes updating the user preference signature to include information about the action chosen by the user from among the one or more ordered actions.


Under still a further aspect of the invention, a system for displaying at least one available action overlaid on an image includes a computer memory store comprising instructions in computer readable form that when executed cause a computer system to perform any of the actions set forth above.


Any of the above aspects may be combined with any of the other aspects above.





BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of various embodiments, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:



FIG. 1 illustrates a high-level flow chart of a process for using a camera as a centralized input interface for performing actions on images captured live and on images stored for deferred action.



FIG. 2 illustrates factors, at least one of which influences the displayed action list to the user.



FIG. 3 illustrates an actions tree for actions exposed to a user for an image.



FIG. 4 illustrates a high-level flow chart of a process for ranking and displaying actions based on incremental sampling of a live capture of an image of an object in real time in the surrounding environment.



FIG. 5 illustrates initial stages of a live image capture user interface in which a user performs an action on an image of an object sampled from the surrounding environment.



FIG. 6 illustrates intermediate stages of the live image capture user interface where the user performs an action on an image of an object sampled from the surrounding environment.



FIG. 7 illustrates final sequences of stages of the live image capture user interface where the user performs an action on an image of an object sampled from the surrounding environment.



FIG. 8 illustrates alternate final sequences of stages of the live image capture user interface where the user performs an action on an image of an object sampled from the surrounding environment.



FIG. 9 illustrates the live image capture user interface in which the user performs an action on an image of a page of a book sampled from the surrounding environment.



FIG. 10 illustrates existing input interfaces for receiving camera-based input.





DETAILED DESCRIPTION OF EMBODIMENTS

The present disclosure relates to a live capture incremental image recognition interface for camera-equipped mobile devices such as smart phones and hand-held tablets. The present system allows a user to perform context-sensitive actions, including content discovery, on the surrounding visual environment, using image capture or video capture from a mobile device camera as input. The context is a combination of one or more of the following factors: image content, location of the mobile device, location at which the image was taken, date and/or time of capturing the image, and a set of user preferences learned from the user's past actions such as the user's previous action list navigation and selection behavior (also referred to as a “user signature” herein).


The term “smart phones” refers to phones with capabilities beyond just making voice calls.



FIG. 10 illustrates existing, prior-art input interfaces for receiving camera-based input. Preferred embodiments and their advantages may be understood by referring to FIGS. 1-9, wherein like reference numerals refer to like elements.


While there is a distinct advantage to having a centralized image- or video-driven input interface for performing actions based on an image or video taken by a camera, the input interface is further improved by incorporating context. In some embodiments, a centralized image/video-driven input includes a hardware camera button, operating-system-level or phone-interaction-shell-level integration, and/or a standalone camera application. Actions based on an image or video include using the image or video as input into a search engine. Context allows the input interface to offer likely actions of interest to the user, and to order the actions according to their likelihood of relevance to the user at that point in time and location. A centralized interface that fails to account for context can make the user experience cumbersome, thereby decreasing the effectiveness of a centralized user interface. Users may gravitate to use-case-specific “camera-input”-capable applications. Use-case-specific applications have the disadvantage of the user not rapidly discovering new “camera-input-aware applications” as more applications add support for image and video input.


As semantic recognition of images improves over time and the wide gap between human image recognition capability and computer-driven image recognition decreases, visual image input is poised to become a centralized efficient input interface to express user interest in the surrounding visual environment, and for the system to understand user interest unambiguously. This stands in stark contrast to a centralized text input interface, where the user intent is harder to interpret unambiguously. In a centralized text input interface, ambiguity arises because it is harder to infer if a user's interest is based on the current environment, or decoupled from it.


For instance, a user in a store may remember to respond to an email or make a phone call, or may have an urge to find a nearby coffee shop to grab coffee. These are thoughts that suddenly occur to the user. To infer that the user is in a store (e.g., based on obtaining a position estimate for the device) and to offer information about things around him in the store as soon as the user opens the mobile device or enters text into a centralized input interface may not yield useful results. Even with text input search that is location-aware and focused on store results, store-specific search results may rank and order results mentioning the nearby coffee shop lower. For example, assume a user has entered a text search for “coffee.” In a book store, an input interface taking location into account may return search results representing books on coffee. In a grocery store, the same input interface may return search results representing brands of coffee. The number of store objects in the immediate vicinity of the user would be so high that location-sensitive text input search would push down results mentioning even a nearby coffee shop, due to the high relevance boost of immediate objects. However, if the user takes a picture of something in a store, then the intent of the user is unambiguous when he wants to perform an action on the image. The user's intent is unambiguous even if the user selects the action later at a different location. Text input and speech input into a centralized input interface to express a user's interest in the surrounding environment are always burdened by the ambiguity inherent in interpreting human thought or intent, regardless of the context of date and/or time and location.


Image input, in contrast, is a direct sampling of the environment around the user. Image input has a distinct advantage over text input or speech input as a better candidate for a centralized interface to understand user interest in a surrounding environment. Image input, like human vision, is perhaps the most direct and consistently relevant way to sample and understand the environment. For example, even in darkness a user or an input interface can use a camera flash to capture the environment. Furthermore, other sensory modalities, such as auditory, olfactory, tactile, and gustatory senses, are also applicable. For example, a centralized input interface uses auditory analysis to sample a bird's cry and identify its species. Thus, while embodiments of the present disclosure are described in terms of image input, other sensory modalities can be used as input and remain within the scope of the disclosure.


As speech recognition improves, over time it will supersede text in many scenarios as an input interface to express user intent decoupled from the surrounding environment. For example, a user uses speech to search for a contact to make a call, remember to respond to an email, make a note to himself or herself, or find directions to a place. Meanwhile, a camera-based input interface will become an input interface augmenting a text/speech interface, to sample a surrounding environment and act upon the sampled image or video. These two broad categories of input interfaces, (1) speech- or text-based and (2) camera-based, will continue to co-exist and evolve as recognition technologies improve. Embodiments of the present disclosure facilitate using a camera-based centralized input interface to augment text/speech interfaces and improve efficacy of a centralized input interface. The present system and method improve the ease of sampling the surrounding environment's visual field, and the ease of performing an action that likely matches the user's intent.


Turning now to the drawings, FIG. 1 illustrates a high-level flow chart of a process 100 for using a camera as a centralized input interface for performing actions on images captured live and/or on images stored for deferred action. The user interface receives a command to start a camera, or receives a command to open a picture library (step 102). In some embodiments, the user interface responds to a user starting the camera using a hardware button to trigger the camera, a camera application on the phone, or any application that has an interface to initiate the camera hardware. In another embodiment, the user interface responds to a command to open a picture library that stores images taken earlier. The user interface receives a command to take a desired picture or video with the camera, or receives a command to select a picture from the picture library (step 104). Once the camera takes the picture or video, the present system acts upon the image. The present system allows the user to decide whether to act upon the image immediately (step 106). If the user decides to act upon the image immediately, the present system assigns a ranking weight to an available action (step 108). The user interface displays a set of context-sensitive actions based on the assigned ranking weights (step 110). The weighting and display of the context-sensitive actions take into account one or more of the following: (1) content of the image, (2) current device location, (3) location at which the image was taken, (4) date and/or time, and (5) user preferences learned from prior user actions. The user selects a context-sensitive action (step 112). As described below, in some embodiments, the user's selection is fed back into the present system to assist in future weightings (step 114).
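

By way of illustration only, the following Python sketch summarizes steps 108-110 as a simple data structure and ranking helper. The names used here (Context, rank_actions, and the weigh callback) are assumptions introduced for clarity and do not appear in the figures or claims.

    # Illustrative sketch of steps 108-110; all names are assumptions, not the disclosed implementation.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Callable, Optional, Tuple

    @dataclass
    class Context:
        image_content: Optional[str] = None                     # e.g., "barcode" or "book cover"; None if unanalyzed
        device_location: Optional[Tuple[float, float]] = None   # current (lat, lon) estimate
        capture_location: Optional[Tuple[float, float]] = None  # (lat, lon) where the image was taken
        captured_at: Optional[datetime] = None                  # date and/or time of capture
        user_signature: dict = field(default_factory=dict)      # learned preferences (updated in step 114)

    def rank_actions(candidates: list, weigh: Callable[[str, Context], float], ctx: Context) -> list:
        # Step 108: assign a ranking weight to each available action via the weigh callback.
        # Step 110: return the actions ordered by weight, highest first, for display.
        return sorted(candidates, key=lambda action: weigh(action, ctx), reverse=True)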


In some embodiments, the present system does not require analysis or recognition of the image content. For example, the present system makes the user experience effective in many scenarios with just one or more of the following factors: current device location, location at which the image was taken, date and/or time, and user preferences learned from the user's past actions, without needing the image to be processed at all. For example, if the user is in an electronics or book store, the act of triggering a camera click on the phone displays a web site or application associated with the electronics or book store. One advantage of displaying the web site or application associated with the store is that the interaction assists the store in selling alternative options or products on sale to an undecided customer, should the customer find the price of his desired product not competitively priced relative to another store.


Accordingly, the centralized camera input interface serves as a context-sensitive method to bring the user to relevant web sites to help him make informed decisions, without processing, analyzing, recognizing, or using the image content. This usage of the camera, even without using the image content for determining context, is still superior to a centralized text input interface. In a centralized text input interface, the location of the user could be used to automatically push a relevant site or information. However, as described earlier, the user's location may not match the user's intent. Instead, the act of using a camera click to sample the environment is a clear signal of a user's interest in that location. Accordingly, showing location-specific information, automatically in this case, has a higher likelihood of matching a user's intent compared to a centralized text input interface with automatic location-specific information push.


In other embodiments, the present system leverages image content to tailor the actions displayed to the user, based on information determined from the image. Recognition algorithms used to determine image content include: coarse granularity recognition of face or object contours, barcode recognition, OCR (Optical Character Recognition), or more sophisticated recognition methods. In some embodiments, the input interface uses coarse granularity recognition of face contours to prompt the user to share the picture or save the picture to the photo library. In other embodiments, the input interface uses optical character recognition to facilitate recognition of objects in a store, and to help the user gain more information about the objects. Brick and mortar stores can suffer from an inability to cater to visiting customers' questions and risk losing customers. Recognition of objects in the store, facilitated by information already in stores' online sites or applications, offers more information on an object of interest from the store's own site. Furthermore, image recognition coupled with navigating a user to a store's web site or application provides opportunities for up-selling with online promotions.


In further embodiments, the input interface uses coarse-granularity edge detection to highlight clickable active regions in an image. The highlighted regions denote availability of further information. In a crosshair exploration mode, a crosshair in the center of the field of vision blinks at a periodic rate when latching onto or recognizing an image of an object in the field of view. The input interface instantly provides information on the object once the input interface recognizes the image of the object. If the input interface does not recognize an image of an object, the user is able to infer implicitly that the object is not recognizable, and reorient the mobile device to try a different zoom or angle. In some embodiments, recognition of barcodes on product labels, combined with a current location of the device, shows price comparisons for a product, including any online price promotions in the same store. Advantageously, the present system facilitates retention of the customer in the brick and mortar store in the face of online price comparisons which allow the customer to do optimal shopping.


In some embodiments, the list of actions overlaid on an image (step 110) is based on inferred details of the image content (if available), the current location of the device, and the date and/or time. For example, once the input interface infers based on details of the image content that the user is scanning a bar code, the input interface assigns lower ranking weights to actions of sharing the image or emailing the image, and assigns higher ranking weights to actions of price comparison and online purchase for the product corresponding to the bar code of interest.
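

One way to realize such weighting is a small scoring function over the candidate actions. The sketch below is a minimal illustration that assumes the Context structure from the earlier sketch and uses hand-picked weight adjustments; in practice the weights could be produced by the machine learning process described elsewhere in this disclosure.

    # Illustrative weighting for step 108 when a bar code is inferred; values are hand-picked for the example.
    BASE_WEIGHTS = {"share": 0.5, "email": 0.5, "save": 0.4,
                    "price_compare": 0.3, "purchase_online": 0.3}

    def weigh(action, ctx):
        w = BASE_WEIGHTS.get(action, 0.1)
        if ctx.image_content == "barcode":
            # A scanned bar code signals shopping intent: demote sharing and emailing,
            # promote price comparison and online purchase for the corresponding product.
            if action in ("share", "email"):
                w -= 0.3
            elif action in ("price_compare", "purchase_online"):
                w += 0.5
        # Prior selections recorded in the user signature nudge the weight upward.
        w += 0.05 * ctx.user_signature.get(action, 0)
        return w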


When the input interface receives a selected action from the action list (step 112), the input interface feeds the user's choice back to the present system (step 114) to update a user signature. As described above, a user signature refers to a set of user preferences learned from the user's past actions and/or behavior. The user signature includes information reflecting the user's preferences and activities, such as temporal and location-based components, including a timestamp of the user's search, and/or the user's location. Techniques for generating a user signature based on user preferences, activities, and behavior include, but are not limited to, those disclosed in U.S. Pat. No. 7,792,815, entitled Methods and Systems for Selecting and Presenting Content based on Context Sensitive User Preferences, filed Mar. 6, 2007, and U.S. Pat. No. 7,949,627, entitled Methods and Systems for Selecting and Presenting Content based on Learned Periodicity of User Content Selection, filed Jul. 26, 2010, the contents of which are incorporated by reference herein. This user signature feedback assists in improving future ranking and display of action choices to match user preferences. The system learns user behavior to improve the user experience, by modifying the action list to match the user's interest.
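

A user signature with temporal and location components can be approximated by counting selections keyed by the action together with coarse time and place buckets. The bucketing scheme below is an assumption chosen for illustration and is not prescribed by the referenced patents.

    # Illustrative signature update for step 114; the bucketing scheme is an assumption.
    from collections import defaultdict

    signature = defaultdict(int)   # (action, hour_band, place) -> selection count

    def record_choice(action, when, place):
        hour_band = when.hour // 6          # 0-3: night, morning, afternoon, evening
        signature[(action, hour_band, place)] += 1

    def preference(action, when, place):
        # Later used to bias ranking toward actions the user chose in similar contexts.
        return signature[(action, when.hour // 6, place)]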


If the user does not choose to act upon the image immediately (step 106), the user interface receives a command to store the captured image or video in the photo library (step 116). In this deferred mode of action on a stored image, the present system stores the date and/or time and optionally the current device location information along with the image in the library (step 118). Associating the date and/or time and current device location information with the stored image facilitates future ranking and displaying of the action list to match the user's intent later, when he/she decides to act upon the stored image (step 104). In one embodiment, the action to store a captured image into the library is the default action. In a further embodiment, the input interface does not display any other action. Instead, the input interface defers the display of the ranked and ordered list of actions to when the input interface receives a user selection of a stored picture from the photo library.



FIG. 2 illustrates factors 201-204, at least one of which influences the displayed actions list to the user. As mentioned earlier, image recognition 201 is not a mandatory factor, just as factors 202-204 are not mandatory. Image recognition 201, if used, can occur on the mobile device, or can occur partially or completely on a remote server. If image recognition 201 occurs on a remote server, the present system uses a communication method to transmit images to the remote server, particularly in a live usage interface as described in further detail below. Optionally, image recognition 201 has different levels of granularity, from coarse feature extraction or recognition to fine-grained recognition, and different types of recognition occurring simultaneously, e.g., barcode detection or OCR. As image recognition algorithms evolve and improve, the improved algorithms increase the likelihood of matching the user's intent by ranking and ordering the list of actions displayed to the user.


In some embodiments, the present system combines OCR image recognition with location information 202 to improve recognition of the surrounding environment. Location estimation technology currently used in mobile devices can supply an estimate of the current device location and/or the location at which the image was taken. Example location estimation technologies include, but are not limited to, GPS (Global Positioning System) satellite-based location systems, Wi-Fi wireless-local-area-network-based location systems, and/or cellular-tower-based location systems.


For example, the present system uses knowledge of the user's location in a BARNES AND NOBLE book store to increase the recognition rate by compensating for OCR errors. The present system uses online book store information to compensate for OCR recognition errors. For example, assume the user is in a brick-and-mortar book store. As described below in connection with FIGS. 5-6, the input interface uses information derived from an online counterpart to the brick-and-mortar book store to correct OCR recognition errors arising from recognizing an image of a book. In further embodiments, the present system uses (1) output text from OCR recognition, which contains errors in text recognition, and (2) the current device location or the location at which the image was taken, to generate input into an incremental search engine. An incremental search engine can receive input that is partially complete and contains errors (e.g., incomplete prefixes and suffixes with errors and loss of characters within words), and use the input to generate possible action candidates as output. Optionally, the present system uses date and/or time 203 to leverage repetitive or episodic patterns of user behavior. Optionally, the present system uses a partial or complete signature of past user actions 204 to influence the ranking and ordering of the actions list 205, as described in further detail below.
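

A rough approximation of this error-tolerant matching can be obtained with fuzzy string comparison against the store's online catalog. The snippet below uses Python's standard difflib as a stand-in for the incremental search engine described above; the catalog contents are placeholders.

    # Illustrative OCR error compensation against an online-catalog context
    # (difflib stands in for the incremental search engine).
    import difflib

    CATALOG = [                  # placeholder titles from the store's online counterpart
        "fermats enigma simon singh",
        "a brief history of time stephen hawking",
        "the code book simon singh",
    ]

    def correct_ocr(ocr_text, catalog=CATALOG, cutoff=0.6):
        # Even "ermats enigma simon sin" (missing edge characters) matches the intended title.
        matches = difflib.get_close_matches(ocr_text.lower(), catalog, n=3, cutoff=cutoff)
        return matches[0] if matches else ocr_text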



FIG. 3 illustrates an actions tree 300 exposed to a user for an image. The present system supports user navigation down to a node in the tree, and learning of user actions that can percolate a node up to the root node over time.


It is instructive to compare the improvement in user experience offered by a camera-driven input interface in comparison to a text input interface, and the potential of a camera-driven input interface as improvements in recognition algorithms allow for recognition of more objects in a surrounding environment. In a text-input-based interface, a user goes through three steps to select an action: (1) the text input system suggests words or phrases to complete words or phrases which a user enters, (2) the text input system displays results to match the user's text input, and (3) the text input system displays a list of available actions for each result. Image recognition, in the best case, eliminates the first two steps (phrase completion and results navigation). A user chooses directly from an actionable list based on an object of interest. Eliminating these two steps of phrase completion and results navigation represents a significant improvement in user experience on a mobile device, on which minimizing interactions dramatically improves a user interface.



FIG. 3 further illustrates how the user experience can be improved by optimizing the tree to reduce the navigation needed for the user to arrive at the desired action. The nodes in the actions tree represent potential actions for the input interface to display to a user. The nodes which the user visits repetitively percolate up the navigation hierarchy over time. Techniques for modifying user navigation content based on previous user navigation and selection include, but are not limited to, those disclosed in U.S. Pat. No. 7,461,061, entitled User interface methods and systems for selecting and presenting content based on user navigation and selection actions associated with the content, filed Apr. 20, 2007, the contents of which are incorporated by reference herein. The forward navigation path 302 to a node, down a hierarchy 301, shortens over time as nodes percolate up the hierarchy 303 and as the system learns and creates a signature of the user's navigation behavior.
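

The percolation behavior can be sketched as a tree of action nodes with selection counts, where a heavily used node is promoted one level toward the root. The promotion rule below (promotion once a node clearly dominates its parent) is an illustrative assumption, not the specific technique of the referenced patent.

    # Illustrative actions tree in which frequently selected nodes percolate toward the root (303).
    class ActionNode:
        def __init__(self, name, children=None):
            self.name = name
            self.children = children or []
            self.selections = 0

    def select(path):
        # 'path' is the list of nodes navigated from the root to the chosen action (302).
        node = path[-1]
        node.selections += 1
        if len(path) >= 3:
            parent, grandparent = path[-2], path[-3]
            # Promote a dominant node one level up, shortening future navigation to it.
            if node.selections > 2 * max(parent.selections, 1):
                parent.children.remove(node)
                grandparent.children.append(node)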


In another embodiment, the present system leverages a hardware camera button to select automatically the highest ranked action, without displaying the actions list or requiring input from the user. Certain mobile devices provide a hardware “camera” button to allow users to take a picture or video of an “impromptu” moment with ease. In contrast, other mobile devices require a user to find an on-screen action button which introduces unwanted delay into the picture-taking process. In mobile devices with a hardware camera button, in response to activation of the hardware camera button, the input interface captures an image or video as desired, and further automatically selects the highest ranked action without displaying the actions list or requiring additional input from the user.


In another embodiment, the present system determines the default ranking and ordering of an action according to a bidding process. The present system determines in advance a ranking and ordering used when displaying an action for a recognized image, based on bids placed by advertisers or companies with an interest in the rank and order of actions associated with the advertiser or company. For example, an advertiser such as an online book store participates in a bidding process to bid on action types, keywords, or smart tags. Action types include objects on which actions are performed, such as books or media. The present system considers parameters including potential or actual revenue from the bids, or how well the bid matches the user's intent, before determining a default action. In some embodiments, the present system determines a default rank and order in advance, on a system remote from the mobile device and independent of the user's current request.
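

Such a default ordering might be computed offline by combining each bid's value with a measure of how well the bid matches the inferred intent. The scoring formula and weights below are illustrative assumptions only.

    # Illustrative offline computation of a default action order from advertiser bids.
    def default_order(bids):
        # bids: list of dicts such as {"action": ..., "bid": cents, "intent_match": 0.0-1.0}
        def score(b):
            return 0.4 * (b["bid"] / 100.0) + 0.6 * b["intent_match"]
        return [b["action"] for b in sorted(bids, key=score, reverse=True)]

    # Example: a book seller's bid outranks a generic share action for "book" objects.
    ranked = default_order([
        {"action": "buy_at_online_bookstore", "bid": 120, "intent_match": 0.9},
        {"action": "share_image", "bid": 0, "intent_match": 0.3},
    ])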


In a further embodiment, the present system uses learning such as machine learning to modify the actions tree 300 to match the user's intent based on a past signature of the user's actions. The learning incorporates the current device location or location at which the image was taken, date and/or time, and/or the navigation path. In this manner, the system accounts for locations which a user often visits and actions which a user often takes, to optimize the navigation path. For example, if the present system receives commands from a user who goes often to a brick-and-mortar book store and buys a book from an online competitor book store after comparing online, the present system uses previous actions which the user often takes, to rank and order the competitor's action higher in comparison to other action options. For example, a user often selects the competitor's action when the user's current location is the brick-and-mortar book store. Even if the brick-and-mortar store's price and corresponding action for the book is slightly cheaper than the price and corresponding action for the online competitor's book, the present system uses the user's previous actions to rank, order, and display the brick-and-mortar store's action below the online store's action.


In another embodiment, the present system uses the current device location or previous stored image location as a context to constrain results to that specific location and to compensate for image recognition errors using knowledge of the specific location. If the mobile device is located in a brick-and-mortar store, the present system identifies an online store equivalent of the brick-and-mortar store, and uses the online store equivalent to compensate for errors in image recognition. For example, the present system constrains search results to those appearing in a book store, electronics store, or museum when the mobile device is in those locations. The present system uses knowledge of the location to further understand if the user is in an environment of high object density, as in a retail store, and uses that information to constrain the search context to within that location. The present system further performs a fine-grained analysis of the location to identify an online store equivalent of the brick-and-mortar store, and then uses information from the online store equivalent to compensate for errors in image recognition. Optionally, in addition to an online store equivalent, the present system uses any other digital repository of information relating to the mobile device's current location or the previous stored image location.



FIG. 4 illustrates a high-level flow chart for a process 400 of ranking and displaying actions based on incremental sampling of a live capture of an image of an object in real time in the surrounding environment. While image recognition technologies have progressed, image recognition is still in its infancy in semantically interpreting a visual scene, a task a child can do easily. Furthermore, even if it is possible to semantically interpret a visual scene, it is an interface challenge to infer which objects in the surrounding environment are of interest to the user. Overlaying actions on all recognized objects overloads the user with choices, even more than the multitude of choices for a single object. In some embodiments, the present system includes a user interface for live capture mode, where the present system determines a user's object of interest by virtue of a user interface element such as a cross-hair overlaid in the visual field. The user interface element is chosen to be similar to traditional cameras having a range-of-view window.


In an illustrative implementation, the camera-based input interface receives a command to initiate live-image capture and recognition (step 401). In some embodiments, the camera-based input interface receives a trigger from a hardware button or from a centralized input interface that is brought up by a touch screen gesture, touch, or click. In one embodiment, a hardware camera button directly triggers the camera-based live capture interface. In another embodiment, in phones having a hardware keypad, pressing any key triggers a text-based search interface. In a further embodiment, in devices where there is no hardware camera button, a touch screen gesture triggers a centralized input interface with a search input text box, and a software camera button alongside the text box triggers the present live-capture camera input interface.


Once the live capture is initiated (step 402), when a user holds the mobile device steady such that an object falls under a cross-hair, the present system interprets the object under the cross-hair to be the object of interest. In one embodiment, the present system tracks the motion of the mobile device and overlays cross-hairs on the live image capture once the motion of the phone falls below a threshold to a steady level. Only then does the present system begin image recognition, which conserves computation (particularly if the present system performs image recognition on the mobile device) and bandwidth (if the present system performs image recognition on a remote server).
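

Steadiness can be estimated, for example, by watching a short window of motion-sensor magnitudes and latching once the average falls below a threshold. The window length and threshold below are arbitrary illustrative values.

    # Illustrative steadiness check for overlaying the cross-hair and starting recognition.
    from collections import deque

    WINDOW = 10             # number of recent motion samples considered
    STEADY_THRESHOLD = 0.2  # arbitrary units; would be tuned per device and sensor

    recent_motion = deque(maxlen=WINDOW)

    def on_motion_sample(magnitude):
        # Returns True when the phone is steady enough to latch onto the object under the cross-hair.
        recent_motion.append(magnitude)
        return (len(recent_motion) == WINDOW and
                sum(recent_motion) / WINDOW < STEADY_THRESHOLD)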


In further embodiments, the present system begins image capture first, and progressively refines incremental image recognition until the camera steadiness reaches a threshold. The present system uses incremental image recognition to determine object contours and active clickable cues with progressive refinement. The input interface overlays object contours and/or active clickable cues progressively on images of objects as the present system incrementally recognizes the objects (step 403).


The present system uses the crosshair to aid incremental recognition of an object in live capture mode. The present system recognizes the object under the crosshair first (step 404). The present system ranks, orders, and displays actions for the object of interest progressively, before recognizing other objects. Advantageously, this incremental image recognition provides quicker feedback and a more responsive input interface compared to current image-input-based applications. In current image-input-based applications, the absence of a cross-hair and lack of incremental image recognition increases response time, and makes user interaction more iterative and cumbersome because of the two stage process. In the two-stage process required by current image-input-based applications, a user takes a snapshot and waits for a response, only to find that the interface did not recognize the object of interest correctly, or in some cases, the interface recognized a different object instead of the object of interest intended by the user. Current image-input-based interfaces then require the user further to zoom in or out and repeat the cumbersome process.


In contrast, the present live capture interface with incremental image recognition makes this process more seamless. From a bandwidth and computation standpoint, the present system lowers bandwidth usage and device computation required for remote image recognition. For example, the present system sends only a region of the image around the cross-hair to a remote server for image recognition. Optionally, the present system uses device-based coarse image analysis to determine this first region intelligently. The present system then dispatches other segments around the first region incrementally to a remote image recognition server, and the remote server combines the image segments for improved recognition. If the present system determines that the segments around the cross-hair are sufficient to recognize the object on the device, the present system aborts dispatching or processing the other image segments. Accordingly, certain embodiments have the advantage of potentially lesser computation and, hence, faster response time to the user by leveraging incremental image recognition based on prioritizing sampling of an image segment indicated by the cross-hair.
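

The prioritized dispatch can be approximated by tiling the frame and ordering the tiles by their distance from the cross-hair, sending the center tile first and skipping the remainder if it suffices. The tiling arithmetic below is a simplified illustration.

    # Illustrative tile ordering: nearest-to-cross-hair segments are dispatched for recognition first.
    def tiles_by_priority(width, height, tile=160):
        cx, cy = width / 2, height / 2            # cross-hair assumed at the frame center
        boxes = []
        for x in range(0, width, tile):
            for y in range(0, height, tile):
                box = (x, y, min(x + tile, width), min(y + tile, height))
                center_x, center_y = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
                dist = (center_x - cx) ** 2 + (center_y - cy) ** 2
                boxes.append((dist, box))
        return [b for _, b in sorted(boxes)]
    # A dispatch loop would send tiles in this order and stop as soon as the recognizer
    # reports a confident match for the object under the cross-hair.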


The present system using live capture and incremental image recognition allows for multiple sampling and stitching of the visual scene. The present system addresses the problem of when an object is too large to be visible in the range of view for the current zoom, or the problem of when text is too long to fit in the current zoom level, for example when image recognition algorithms require images of letters to be sufficiently large for successful recognition. The present system automatically allows for multiple sampling and stitching of the visual scene since the present system captures the entire image once the input interface receives a command to start capture. In the case of a user scanning text, the present system allows the user to move the cross-hair along the baseline of the text line of interest. The present system prioritizes the object under the crosshair in the recognition process, in preference to other objects in the field of view. Optionally, the cross-hair blinks to indicate that the present system has latched on to the image of the object of interest, and image recognition has begun. At this point, the present system allows the user to bring the phone closer, to interact further with the image of the object of interest. The present system can ignore the motion of the phone while recognition is in progress. In some embodiments, the image remains frozen transiently to indicate the recognition is in progress. The image remains frozen until the user chooses an action, a timeout elapses, or the user cancels recognition.


Advantageously, supporting implicit latching onto the best image for recognition, and indicating to the user that latching has happened, eliminates the user's concern that the snapshot may not be the best. Furthermore, since the present system captures multiple images once the user initiates capture, recognition leverages the multiple images to improve the recognition process further. Even when the user explicitly chooses a “snap image” action for the image of the object under the cross-hair, if the snapped image is blurred due to motion, the present system leverages images taken prior to the explicit image. The present system automatically displays the list of actions relevant to the object under the cross-hairs, without any user action, once recognition is complete (step 405).
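

Selecting the best of several buffered frames can be done with a simple sharpness score, for example the variance of a discrete Laplacian over a grayscale frame; a blurred frame yields a lower score. The pure-Python version below is illustrative and assumes each frame is a two-dimensional list of gray levels.

    # Illustrative selection of the least blurred frame from a capture buffer.
    def sharpness(gray):
        # Variance of a 4-neighbour Laplacian; higher values indicate sharper edges.
        vals = []
        for y in range(1, len(gray) - 1):
            for x in range(1, len(gray[0]) - 1):
                lap = (gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1]
                       + gray[y][x + 1] - 4 * gray[y][x])
                vals.append(lap)
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals) / len(vals)

    def best_frame(frames):
        return max(frames, key=sharpness)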


As described above, the present system ranks and orders actions for display in an actions list. In one embodiment, if the user chooses not to select an action, the input interface switches to live capture mode (step 406). The input interface switches to live capture mode either via a timeout, by receiving a brisk movement indicating that the user is interested in another object, and/or other user operation. If the user chooses to select an action, the present system performs the selected action (step 112, shown in FIG. 1).



FIG. 5 illustrates initial stages of a live image capture user interface in which a user performs an action on an image of an object sampled from the surrounding environment. Image 501 illustrates a live image capture of a bookshelf in a store. In image 502, as the phone movement stabilizes, cross-hairs 504 appear at the center of the field of view over object of interest 506. As described above, cross-hairs 504 aid the user in indicating an object of interest 506 to the present system. In one embodiment, cross-hair 504 begins to blink as the latching onto object of interest 506 happens, and the present system begins image recognition. Image 503 illustrates display of recognized text. If object of interest 506 under the focus of cross-hair 504 has text, the present system displays the recognized text 508 instantly, to inform the user that image recognition has happened.



FIG. 6 illustrates intermediate stages of a live image capture user interface in which a user performs an action on an image of an object sampled from the surrounding environment. As described above in connection with FIG. 4, in some embodiments the present system performs incremental image recognition by executing a first phase of image recognition locally on the mobile device, where the image recognition is devoid of context and is based purely on recognition of characters. Optionally, the present system dispatches the recognized text to a second phase, happening either on the mobile device or remotely on a server. Image 601 illustrates that the second phase uses the context to improve image recognition. For example, when the mobile device's location is in a book store, the present system uses information about the store's books as the context. As described above, relevant contextual factors include image content, a location of the mobile device, a location where the image was taken, a date and/or time of capturing the image, and a user signature representing the user's past actions.


As illustrated in image 601, the second phase uses the specific book store as the context to correct errors, and displays a corrected string 604. As illustrated in image 503 (shown in FIG. 5), the first phase of recognition has OCR errors: the recognized string 508 is “ermats enigma simon sin.” The image recognition missed peripheral characters. Other types of errors, such as deletion and substitution, are also possible. In some embodiments, this error correction step is not a separate phase that persists long enough to display to the user. Instead, as illustrated in image 701 (shown in FIG. 7), if the present system determines actions for the recognized image immediately, the input interface displays the actions instantaneously.


As illustrated in image 602, in some embodiments, image recognition of other objects happens subsequent or concurrent to recognition of the object of interest 506, and visual cues 608a, 608b progressively appear on recognized objects as they are recognized. In some embodiments, a pointing finger icon denotes visual cues 608a, 608b. Visual cues 608a, 608b indicate that the user can select the visual cues to perform actions. In some embodiments, when image recognition happens on a remote server, the present system recognizes the other objects in parallel. Optionally, if an object is not recognizable, the present system allows the user to zoom in or zoom out on the object of interest 506, using either hardware or software buttons, to increase the likelihood of recognizing the object of interest 506. The present live capture interface makes this iterative process much simpler than in existing systems. In the present live capture interface, the recognition process is faster using the cross-hair approach, and the user experience is more real time. Existing systems use explicit clicking of a button, followed by recognition, and then repeating the cycle again iteratively to capture a good image snapshot.



FIG. 7 illustrates final sequences of stages of the live image capture user interface where the user performs an action on an image of an object sampled from the surrounding environment. FIG. 7 continues the bookshelf scenario illustrated above where the present system displays actions for a recognized book. Image 701 illustrates that the available actions 704a, 704b, 704c are price comparisons for the book from three booksellers. The input interface allows the user to tap, click, and/or otherwise select a displayed store name and/or a displayed price to initiate an action of navigating to the online site associated with the displayed store name to allow the user to order the book of interest. Image 702 illustrates that the user can select action 704a by touching or clicking to initiate a purchase. In some embodiments, the present system performs the login to the site automatically due to caching of the user's credentials from a previous login. Accordingly, the present system allows the user to perform a one-click purchase of the book.



FIG. 8 illustrates alternate final sequences of stages of the live image capture user interface in which the user performs an action on an image of an object sampled from the surrounding environment. Images 801, 802 illustrate a scenario where there is ambiguity in the object being interpreted. The ambiguity is due to the existence of a hardcover and a paperback version of a book. (If the barcode or ISBN had been visible, the present system could have resolved this ambiguity.) Ambiguity can arise due to other reasons, including an erroneous or incomplete scan resulting in multiple objects qualifying as candidates. In one embodiment, the input interface shows object of interest 506 and associated actions 804a, 804b with visual cues to navigate through. Example visual cues include arrows 806a, 806b. Optionally, if the list of associated actions is above a threshold count, the input interface instead displays a results list similar to a search results listing interface. In another embodiment, the input interface prompts the user to scan the object of interest 506 again. In some implementations, the input interface shows the qualifying list of objects only after resolving the optical recognition input using the context factors, so as to eliminate noisy input. For example, if the mobile device's location is in a bookstore, the present system uses the library of digital information on books available on the store's online site as context factors to resolve the optical recognition input. Using these contextual factors potentially reduces the list of qualifying objects.



FIG. 9 illustrates the live image capture user interface in which the user performs an action on an image of a page of a book sampled from the surrounding environment. Images 901, 902, 903 illustrate a scenario where there is no context other than the image content itself. Image 901 illustrates the user sampling a page of a book. Image 902 illustrates the user focusing cross-hairs 504 on a word in the book, the word “vignette.” In some embodiments, the present system allows the user to eliminate surrounding words from the field of view by zooming in on the word to cull the field of view. Image 903 illustrates that, once image recognition is complete, the present system stops blinking cross-hair 504 and transiently ceases live capture by displaying a frozen image. The frozen image includes the word meaning 904a, along with an option 904b to explore further using a search option. Pursuing search option 904b leads to a traditional text-input-based search interface with results. Freezing the image upon recognition avoids requiring effort from the user to continue to keep the object of interest on the focus of cross-hair 504. Accordingly, stopping the blinking of cross-hair 504 informs the user that he can stop trying to focus on the object and, instead, may act on the actionables displayed.


As described above, the present cross-hair live capture with incremental recognition facilitates a variety of use cases spanning different object sizes. A user can scan a single word in a paragraph of a book, or a large object where the user moves the cross-hair over the object to scan the entire object. While the examples described above of using a cross-hair to pin-point an object have all been in the immediate proximity of the user, the present system also facilitates allowing the user to focus on and identify an object far away on the visual horizon. For example, a user visiting a national park tries to identify a monument or a mountain on the visual horizon. Since the present system leverages the location of the device as a contextual factor, the present system improves image recognition and matching of the image of interest with existing images of the monument or mountain to improve the recognition success rate. In further embodiments, the present system uses a magnetometer present in the device to further assist in identifying a vantage point of the user in addition to the location, so as to discern the visual horizon.
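

One way to discern the intended landmark is to compute the bearing from the device's location to each candidate landmark and keep those whose bearing lies close to the magnetometer heading. The standard initial-bearing formula and the tolerance value below are used purely for illustration; the candidate landmark list is an assumption.

    # Illustrative landmark filtering by compass bearing.
    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        # Approximate initial bearing from point 1 to point 2, in degrees clockwise from north.
        d_lon = math.radians(lon2 - lon1)
        lat1, lat2 = math.radians(lat1), math.radians(lat2)
        x = math.sin(d_lon) * math.cos(lat2)
        y = (math.cos(lat1) * math.sin(lat2)
             - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon))
        return (math.degrees(math.atan2(x, y)) + 360) % 360

    def landmarks_in_view(device, heading_deg, landmarks, tolerance=15):
        # landmarks: list of (name, lat, lon); keep those within +/- tolerance degrees of the heading.
        visible = []
        for name, lat, lon in landmarks:
            diff = abs((bearing_deg(device[0], device[1], lat, lon) - heading_deg + 180) % 360 - 180)
            if diff <= tolerance:
                visible.append(name)
        return visible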


Use of OCR (Optical Character Recognition) and barcode recognition alone, or combined with a context of location and time, makes the present centralized image-based input interface useful for acting on objects in the user's immediate surrounding visual environment, since objects of interest may be text labeled. For example, in a store, text labels are text stuck on or imprinted on an object, or external labels adjacent to the object. Over time, recognition of object shapes regardless of orientation, lighting, surface deformities, or color will improve, enabling recognition of objects that may not be text labeled. Accordingly, the value of the present interface increases as the quality of image recognition improves.


In some embodiments, the present system couples the present camera-based centralized input interface with a complementary text- or speech-based input interface to compensate for image recognition failures, or to rank and order actions or results which are coupled to or decoupled from the surrounding environment. As described above, the camera-based centralized input interface serves as an improved expression of user intent in, or coupled to, the surrounding environment. In contrast, a complementary text or speech interface serves to capture the user's intent decoupled from the surrounding environment. In some embodiments, these two interfaces complement each other as image recognition algorithms improve and the gap between image recognition by humans and machine recognition decreases. In the interim, in specific instances where image recognition is deficient or fails, the present system optionally reduces the text- or speech-based interface's decoupling from the environment to compensate for the failure. In instances where image recognition does work, the text input interface remains decoupled from the surrounding environment, or at most minimally decreases the relevance of results relevant to the immediate environment relative to results decoupled from it.


The techniques and systems disclosed herein may be implemented as a computer program product for use with a computer system or computerized electronic device (e.g., Smartphone, PDA, tablet computing device, etc.). Such implementations may include a series of computer instructions, or logic, fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, flash memory or other memory or fixed disk) or transmittable to a computer system or a device, via a modem or other interface device, such as a communications adapter connected to a network over a medium.


The medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., Wi-Fi, cellular, microwave, infrared or other transmission techniques). The series of computer instructions embodies at least part of the functionality described herein with respect to the system. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems.


Furthermore, such instructions may be stored in any tangible memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies.


It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web). Of course, some embodiments may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments are implemented as entirely hardware, or entirely software (e.g., a computer program product).


Further still, any of the various process steps described herein that occur after the user has obtained a sample of the environment (e.g., an image, a sound recording, or other sensory input) can be processed locally on the device and/or on a server system that is remote from the user device. For example, upon latching onto an image, the digitized image can be transmitted to a remote server system for further processing consistent with the disclosure above. Optionally, or alternatively, the image can be processed locally on the device and compared to a locally resident database of information. Thus, possible candidates for a match to the latched image can come from local and/or remote sources for presentation to the user.

Claims
  • 1. A computer-implemented user interface method of displaying at least one available action overlaid on an image, the method comprising: generating for display, a live image and a visual guide overlaid on the live image; identifying an object of interest in the live image based on a proximity of the object of interest to the visual guide; identifying, by a processor, without receiving user input, a first plurality of actions of different types from a second plurality of actions for subsequent selection by a user, the first plurality of actions being identified automatically based at least in part on the object of interest and at least one of (1) current device location, (2) location at which the live image was taken, (3) date of capturing the live image, (4) time of capturing the live image, and (5) a user preference signature representing prior actions selected by a user and content preferences learned about the user associated with particular times or locations at which the prior actions were selected by the user; assigning a ranking weight to the first plurality of actions based on a non-textual portion of the identified object of interest; ranking the first plurality of actions based on its assigned ranking weight; and presenting the first plurality of actions to a user as selectable options.
  • 2. The method of claim 1, wherein the presenting of the first plurality of actions to the user as selectable options includes displaying the first plurality of actions in an order based on the ranking.
  • 3. The method of claim 2, further comprising updating the user preference signature to include information about the action chosen by the user from among the first plurality of actions.
  • 4. The method of claim 1, wherein the live image is an image of a portion of an environment surrounding the user.
  • 5. The method of claim 1, wherein the identifying a first plurality of actions and assigning a ranking weight thereto includes determining the ranking weight by a machine learning process.
  • 6. The method of claim 1, further comprising selecting the highest ranked action within the first plurality of actions in response to activation of a hardware camera button.
  • 7. The method of claim 1, further comprising analyzing the live image to learn about the object of interest.
  • 8. The method of claim 7, further comprising using at least one of the location of the device and the location at which the live image was taken to augment the analyzing the live image to learn about the object of interest.
  • 9. The method of claim 8, wherein the first plurality of actions includes an action to purchase an item corresponding to the object of interest from an online storefront corresponding to a physical storefront, if the device's location is proximate to the physical storefront.
  • 10. The method of claim 7, wherein the analyzing the live image to learn about the object of interest includes comparing the live image against a collection of at least one sample image to determine the object of interest.
  • 11. The method of claim 10, wherein the analyzing the live image to learn about the object of interest includes using optical character recognition to learn about textual image content.
  • 12. The method of claim 10, wherein the analyzing the live image to learn about the object of interest includes analyzing at least one partial image selected based on a proximity of the at least one partial image to a visual field of interest for the user.
  • 13. The method of claim 1, wherein the first plurality of actions are presented at a first point in time, further comprising: storing the live image to a memory along with data about at least one of the location of the device, the date at which the live image was captured, and the time at which the live image was captured; and presenting the first plurality of actions at a second point in time in an order based on the ranking when the user later acts upon the stored live image after the first point in time.
  • 14. The method of claim 1, the method further comprising: subsequent to identifying the first plurality of actions, identifying a second object in the live image, wherein the second object is farther from the visual guide than the object of interest; identifying an alternate action associated with the second object; and presenting the alternate action to a user as a selectable option.
  • 15. A system for displaying at least one available action overlaid on an image, the system comprising: a memory device that stores instructions; and a processor circuitry that executes the instructions and is configured to: generate, for display, a live image and a visual guide overlaid on the live image; identify an object of interest in the live image based on the proximity of the object of interest to the visual guide; identify, without receiving user input, a first plurality of actions of different types from a second plurality of actions for subsequent selection by the user, the first plurality of actions being identified automatically based at least in part on the object of interest and at least one of (1) current device location, (2) location at which the live image was taken, (3) date of capturing the live image, (4) time of capturing the live image, and (5) a user preference signature representing prior actions selected by a user and content preferences learned about the user associated with particular times or locations at which prior actions were selected by the user; assign a ranking weight to the first plurality of actions based on a non-textual portion of the identified object of interest; rank the first plurality of actions based on its assigned ranking weight; and present the first plurality of actions to a user as selectable options.
  • 16. The system of claim 15, wherein the processor circuitry is further configured to present the first plurality of actions to a user as selectable options by displaying the first plurality of actions in an order based on the ranking.
  • 17. The system of claim 16, the processor circuitry further being configured to cause the computer system to update the user preference signature to include information about the action chosen by the user from among the first plurality of actions.
  • 18. The system of claim 15, wherein the live image is an image of a portion of an environment surrounding the user.
  • 19. The system of claim 15, wherein the processor circuitry is further configured to determine the ranking weight by a machine learning process.
  • 20. The system of claim 15, the processor circuitry being further configured to cause the computer system to select the highest ranked action within the first plurality of actions in response to activation of a hardware camera button.
  • 21. The system of claim 15, the processor circuitry being further configured to cause the computer system to analyze the live image to learn about the object of interest.
  • 22. The system of claim 21, the processor circuitry being further configured to cause the computer system to use at least one of the location of the device and the location at which the live image was taken to augment the analyzing of the live image to learn about the object of interest.
  • 23. The system of claim 21, wherein the processor circuitry is further configured to compare the live image against a collection of at least one sample image to determine the object of interest.
  • 24. The system of claim 23, wherein the processor circuitry is further configured to use optical character recognition to learn about textual image content.
  • 25. The system of claim 23, wherein the processor circuitry is further configured to analyze at least one partial image selected based on a proximity of the at least one partial image to a visual field of interest for the user.
  • 26. The system of claim 15, wherein the processor circuitry is further configured to present the first plurality of actions at a first point in time, and the processor circuitry is further configured to: cause the computer system to store the live image to a memory along with data about at least one of the location of the device, the date at which the live image was captured, and the time at which the live image was captured; and cause the computer system to present the first plurality of actions again at a second point in time in an order based on the ranking when the user later acts upon the stored live image after the first point in time.
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit under 35 U.S.C. §119(e) of Provisional Application Ser. No. 61/430,310, filed Jan. 6, 2011, entitled Methods of and Systems for Content Search Based on Environment Sampling, the contents of which are incorporated by reference herein.

2006-340396 Dec 2006 JP
4062577 Mar 2008 JP
2010-119149 May 2010 JP
5053378 Oct 2012 JP
0247388 Oct 1994 TW
WO-8601359 Feb 1986 WO
WO-8601962 Mar 1986 WO
WO-8703766 Jun 1987 WO
WO-8804057 Jun 1988 WO
WO-8804507 Jun 1988 WO
WO-8902682 Mar 1989 WO
WO-8903085 Apr 1989 WO
WO-8912370 Dec 1989 WO
WO-9000847 Jan 1990 WO
WO-9001243 Feb 1990 WO
WO-9015507 Dec 1990 WO
WO-9100670 Jan 1991 WO
WO-9105436 Apr 1991 WO
WO-9106367 May 1991 WO
WO-9106912 May 1991 WO
WO-9118476 Nov 1991 WO
WO-9204801 Mar 1992 WO
WO-9222983 Dec 1992 WO
WO-9304473 Mar 1993 WO
WO-9305452 Mar 1993 WO
WO-9311638 Jun 1993 WO
WO-9311639 Jun 1993 WO
WO-9311640 Jun 1993 WO
WO-9323957 Nov 1993 WO
WO-9413107 Jun 1994 WO
WO-9414281 Jun 1994 WO
WO-9414282 Jun 1994 WO
WO-9414283 Jun 1994 WO
WO-9414284 Jun 1994 WO
WO-9416441 Jul 1994 WO
WO-9421085 Sep 1994 WO
WO-9423383 Oct 1994 WO
WO-9429811 Dec 1994 WO
WO-9501056 Jan 1995 WO
WO-9501057 Jan 1995 WO
WO-9501058 Jan 1995 WO
WO-9501059 Jan 1995 WO
WO-9502945 Jan 1995 WO
WO-9504431 Feb 1995 WO
WO-9506389 Mar 1995 WO
WO-9507003 Mar 1995 WO
WO-9510910 Apr 1995 WO
WO-9515658 Jun 1995 WO
WO-9516568 Jun 1995 WO
WO-9515649 Jun 1995 WO
WO-9515657 Jun 1995 WO
WO-9519092 Jul 1995 WO
WO-9526095 Sep 1995 WO
WO-9526608 Oct 1995 WO
WO-9528055 Oct 1995 WO
WO-9528799 Oct 1995 WO
WO-9530961 Nov 1995 WO
WO-9532585 Nov 1995 WO
WO-9532587 Nov 1995 WO
WO-9530302 Nov 1995 WO
WO-9531069 Nov 1995 WO
WO-9532583 Nov 1995 WO
WO-9607270 Mar 1996 WO
WO-9608109 Mar 1996 WO
WO-9608923 Mar 1996 WO
WO-9609721 Mar 1996 WO
WO-9608113 Mar 1996 WO
WO-9613932 May 1996 WO
WO-9613935 May 1996 WO
WO-9617467 Jun 1996 WO
WO-9617473 Jun 1996 WO
WO-9621990 Jul 1996 WO
WO-9626605 Aug 1996 WO
WO-9627270 Sep 1996 WO
WO-9627982 Sep 1996 WO
WO-9627989 Sep 1996 WO
WO-9631980 Oct 1996 WO
WO-9634467 Oct 1996 WO
WO-9634486 Oct 1996 WO
WO-9634491 Oct 1996 WO
WO-9636172 Nov 1996 WO
WO-9637075 Nov 1996 WO
WO-9637996 Nov 1996 WO
WO-9638799 Dec 1996 WO
WO-9641477 Dec 1996 WO
WO-9641478 Dec 1996 WO
WO-9638962 Dec 1996 WO
WO-9641470 Dec 1996 WO
WO-9641471 Dec 1996 WO
WO-9702702 Jan 1997 WO
WO-9704595 Feb 1997 WO
WO-9707656 Mar 1997 WO
WO-9712486 Apr 1997 WO
WO-9713368 Apr 1997 WO
WO-9717774 May 1997 WO
WO-9718675 May 1997 WO
WO-9719555 May 1997 WO
WO-9726612 Jul 1997 WO
WO-9729458 Aug 1997 WO
WO-9731480 Aug 1997 WO
WO-9734413 Sep 1997 WO
WO-9734414 Sep 1997 WO
WO-9740623 Oct 1997 WO
WO-9741673 Nov 1997 WO
WO-9742763 Nov 1997 WO
WO-9746943 Dec 1997 WO
WO-9747124 Dec 1997 WO
WO-9748230 Dec 1997 WO
WO-9749237 Dec 1997 WO
WO-9749241 Dec 1997 WO
WO-9749242 Dec 1997 WO
WO-9750251 Dec 1997 WO
WO-9745786 Dec 1997 WO
WO-9747135 Dec 1997 WO
WO-9748228 Dec 1997 WO
WO-9800975 Jan 1998 WO
WO-9800976 Jan 1998 WO
WO-9806219 Feb 1998 WO
WO-9810589 Mar 1998 WO
WO-9814009 Apr 1998 WO
WO-9816062 Apr 1998 WO
WO-9817063 Apr 1998 WO
WO-9817064 Apr 1998 WO
WO-9820675 May 1998 WO
WO-9821664 May 1998 WO
WO-9821877 May 1998 WO
WO-9826584 Jun 1998 WO
WO-9827723 Jun 1998 WO
WO-9826569 Jun 1998 WO
WO-9828906 Jul 1998 WO
WO-9831148 Jul 1998 WO
WO-9837695 Aug 1998 WO
WO-9839893 Sep 1998 WO
WO-9841020 Sep 1998 WO
WO-9843183 Oct 1998 WO
WO-9843406 Oct 1998 WO
WO-9847279 Oct 1998 WO
WO-9847290 Oct 1998 WO
WO-9848566 Oct 1998 WO
WO-9847283 Oct 1998 WO
WO-9856172 Dec 1998 WO
WO-9856173 Dec 1998 WO
WO-9856712 Dec 1998 WO
WO-9901984 Jan 1999 WO
WO-9903267 Jan 1999 WO
WO-9904561 Jan 1999 WO
WO-9907142 Feb 1999 WO
WO-9914947 Mar 1999 WO
WO-9918721 Apr 1999 WO
WO-9918722 Apr 1999 WO
WO-9922502 May 1999 WO
WO-9929109 Jun 1999 WO
WO-9930491 Jun 1999 WO
WO-9931480 Jun 1999 WO
WO-9933265 Jul 1999 WO
WO-9938092 Jul 1999 WO
WO-9935827 Jul 1999 WO
WO-9937045 Jul 1999 WO
WO-9939280 Aug 1999 WO
WO-9945700 Sep 1999 WO
WO-9945701 Sep 1999 WO
WO-9945702 Sep 1999 WO
WO-9952279 Oct 1999 WO
WO-9952285 Oct 1999 WO
WO-9956466 Nov 1999 WO
WO-9956473 Nov 1999 WO
WO-9957837 Nov 1999 WO
WO-9957839 Nov 1999 WO
WO-9960493 Nov 1999 WO
WO-9960783 Nov 1999 WO
WO-9960789 Nov 1999 WO
WO-9960790 Nov 1999 WO
WO-9966725 Dec 1999 WO
WO-9965237 Dec 1999 WO
WO-0004706 Jan 2000 WO
WO-0004708 Jan 2000 WO
WO-0004709 Jan 2000 WO
WO-0002380 Jan 2000 WO
WO-0007368 Feb 2000 WO
WO-0008850 Feb 2000 WO
WO-0008851 Feb 2000 WO
WO-0008852 Feb 2000 WO
WO-0005889 Feb 2000 WO
WO-0011865 Mar 2000 WO
WO-0013415 Mar 2000 WO
WO-0016548 Mar 2000 WO
WO-200014951 Mar 2000 WO
WO-0011869 Mar 2000 WO
WO-0013416 Mar 2000 WO
WO-0016336 Mar 2000 WO
WO-0028734 May 2000 WO
WO-0028739 May 2000 WO
WO-0027122 May 2000 WO
WO-0033560 Jun 2000 WO
WO-0033573 Jun 2000 WO
WO-0033578 Jun 2000 WO
WO-0033160 Jun 2000 WO
WO-0033224 Jun 2000 WO
WO-0033233 Jun 2000 WO
WO-0040014 Jul 2000 WO
WO-0040025 Jul 2000 WO
WO-0049801 Aug 2000 WO
WO-0051310 Aug 2000 WO
WO-0057645 Sep 2000 WO
WO-0058833 Oct 2000 WO
WO-0058967 Oct 2000 WO
WO-0059214 Oct 2000 WO
WO-0059220 Oct 2000 WO
WO-0059223 Oct 2000 WO
WO-0062298 Oct 2000 WO
WO-0062299 Oct 2000 WO
WO-0062533 Oct 2000 WO
WO-0067475 Nov 2000 WO
WO-0079798 Dec 2000 WO
WO-0101677 Jan 2001 WO
WO-0106784 Jan 2001 WO
WO-0110126 Feb 2001 WO
WO-0110128 Feb 2001 WO
WO-0111865 Feb 2001 WO
WO-0115438 Mar 2001 WO
WO-0122729 Mar 2001 WO
WO-0119086 Mar 2001 WO
WO-0135662 May 2001 WO
WO-01-46843 Jun 2001 WO
WO-0147238 Jun 2001 WO
WO-0147249 Jun 2001 WO
WO-0147257 Jun 2001 WO
WO-0147273 Jun 2001 WO
WO-0147279 Jun 2001 WO
WO-0146869 Jun 2001 WO
WO-0150743 Jul 2001 WO
WO-0158158 Aug 2001 WO
WO-0175649 Oct 2001 WO
WO-0176239 Oct 2001 WO
WO-0176248 Oct 2001 WO
WO-0176704 Oct 2001 WO
WO-0189213 Nov 2001 WO
WO-0182600 Nov 2001 WO
WO-0231731 Apr 2002 WO
WO-02078317 Oct 2002 WO
WO-02084992 Oct 2002 WO
WO-03005712 Jan 2003 WO
WO-2004004341 Jan 2004 WO
WO-2004066180 Aug 2004 WO
WO-2006079977 Aug 2006 WO
WO-2008042280 Apr 2008 WO
Non-Patent Literature Citations (269)
Entry
US 5,047,897, 09/1991, Strubbe et al. (withdrawn)
Layar—Wikipedia, the free encyclopedia, http://en.wikipedia.org/wiki/Layar, retrieved on Mar. 31, 2012 (2 pages).
International Search Report issued for PCT/US12/20428 on May 4, 2012 (2 pages).
U.S. Appl. No. 09/330,792, filed Jun. 11, 1999, Knudson et al.
U.S. Appl. No. 09/332,244, filed Jun. 11, 1999, Ellis.
U.S. Appl. No. 09/356,268, filed Jul. 16, 1999, Rudnick et al.
“A New Approach to Addressability,” CableData Brochure, 9 pages, undated.
“Generative Models for Cold-Start Recommendations,” Schein et al, SIGIR 2001, http://www.cis.upenn.edu/˜popescul/Publications/schein01generative.pdf, last accessed Oct. 24, 2006, 9 pgs.
“Methods and Metrics for Cold-Start Recommendations,” Schein et al, SIGIR'02, Aug. 11-15, 2002, Tampere, Finland.
“OpenTV(R) and Interactive Channel Form Strategic Alliance to Deliver Interactive Programming to Satellite Television Subscribers”, from the Internet at http://www.opentv.com/news/interactivechannelfinal.htm, printed on Jun. 8, 1999.
“Probabilistic Models for Unified Collaborative and Content-Based Recommendation in Sparse-Data Environments”, Popescul et al, Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI-2001), to appear, Morgan Kaufmann, San Francisco, 2001.
“Prodigy Launches Interactive TV Listing”, Apr. 22, 1994 public Broadcasting Report.
“Swami: A Framework for Collaborative Filtering Algorithm Development and Evaluation”, Fisher et al., http://www.cs.berkeley.edu/˜richie/swami/sigir00-final/report.pdf, last accessed on Oct. 24, 2006, 3 pgs.
“Social Information Filtering: Algorithms for automating “Word of Mouth””, Shardanand et al., http://www.cs.ubc.ca/˜conati/532b/papers/chi-95-paper.pdf, last accessed Oct. 24, 2006, 8 pgs.
“StarSight Interactive Television Program Guide III” Jim Leftwich and Steve Schein, Functional/Interactional Architecture Specification Document, Orbit Interaction, Palo Alto, California, Published before Apr. 19, 1995.
“StarSight Interactive Television Program Guide IV” Jim Leftwich and Steve Schein, Functional/Interactional Architecture Specification Document, Orbit Interaction, Palo Alto, California, Published before Apr. 19, 1995.
“StarSight Interactive Television Program Guide” Jim Leftwich, Willy Lai & Steve Schein Published before Apr. 19, 1995.
“TV Guide Online Set for Fall”, Entertainment Marketing Letter, Aug. 1994.
“Utilizing Popularity Characteristics for Product Recommendation”, Hyung Jun Ahn, International Journal of Electronic Commerce/Winter Jul. 2006, vol. 11, No. 2, pp. 59-80.
272OR Satellite Receiver User's Guide, General Instrument, 1991, pp. 58-61.
A Financial Times Survey: Viewdata (Advertisement), Financial Times, May 20, 1979, 1 page.
ACM Multimedia 93 Proceedings, A Digital On-Demand Video Service Supporting Content-Based Queries, Little et al. pp. 427-436, Jul. 1993.
Addressable Converters: A New Development at CableData, Via Cable, vol. 1, No. 12, Dec. 1981, 11 pages.
Advanced Analog Systems—Addressable Terminals, General Instrument Corp. of Horsham, Pennsylvania (URL:http—www.gi.com-BUSAREA-ANALOG-TERMINALWATCH-watch.html) Printed from the Internet on Mar. 4, 1999.
Advertisement for “TV Decisions,” Cable Vision, Aug. 4, 1986, 3 pages.
Alexander “Visualizing cleared-off desktops,” Computerworld, May 6, 1991, 1 page.
Anderson et al., UNIX Communications and the Internet (3d ed. 1995).
Antonoff, “Interactive Television,” Popular Science, Nov. 1992, pp. 92-128.
Antonoff, “Stay Tuned for Smart TV,” Popular Science, Nov. 1990, pp. 62-65.
Armstrong, “Channel-Surfing's next wave: Henry Yuen's interactive TV guide takes on TCI and Viacom,” BusinessWeek, Jul. 31, 1995, 3 pages.
Arnold, “Britain to get wired city—via telephone,” Electronics, Mar. 4, 1976, at 76, 3 pages.
Bach, U. et al., “Multimedia TV Set, Part 2 and Conclusion,” Radio-Fernsehen Elektronik (RFE), 10/96, pp. 38-40. (English language translation attached.).
Bach, et al., “Multimedia TV Set, Part 1” Radio-Fernsehen Elektronik (RFE), 9/96, pp. 28, 30, 31, 12 pages. (English language translation attached).
Baer, “Innovative Add-On TV Products,” IEEE Transactions on Consumer Electronics, vol. CE-25, Nov. 1979, pp. 765-771.
Beddow, “The Virtual Channels Subscriber Interface,” Communications Technology, Apr. 30, 1992.
Bell Atlantic Buys Cable TV Company for $22bn, Financial Times (London), Oct. 14, 1993 p. 65.
Bensch, “VPV Videotext Programs Videorecorder,” IEEE Paper, Jun. 1988, pp. 788-792.
Berniker, “TV Guide going online”, Broadcasting & Cable, pp. 49-52, Jun. 13, 1994.
Bertuch, “New Realities for PCs: Multimedia between aspiration and commerce,” (translation), Exhibit NK 12 of TechniSat's nullity action against EP'111, Issue 10, pp. 40-46 (1991).
Bestler, Caitlin “Flexible Data Structures and Interface Rituals for Rapid Development of OSD Applications,” Proceedings from the Eleven Technical Sessions, 42nd Annual Convention and Exposition and Exploration of the NCTA, San Francisco, CA Jun. 6-9, 1993, pp. 223-236. Jun. 6, 1993.
Blahut et al., “Interactive Television,” Proceedings of the IEEE, pp. 1071-1085, Jul. 1995.
Boyd-Merritt, “Television wires two-way video,” Electronic Engineering Times, Apr. 25, 1994, 3 pages.
Brochure, “Weststar and Videotoken Network Present The CableComputer,” Revised Aug. 15, 1985 (Plaintiff's 334).
Brochure, Time Inc., “Now, Through the Advances of the Computer Age, You Can Get the Information You Want, When You Want It. Instantly and Conveniently, On Your Home TV Screen,” Time Teletext, Time Video Information Services, Inc., 9 pages, undated (V 79167-79175).
Brochure, VTN “Videotoken Network, New Dimension Television,” Dec. 1985 (Plaintiff's Exhibit 313).
Brugliera, “Digital On-Screen Display—A New Technology for the Consumer Interface,” Symposium Record Cable TV Sessions of the 18th International Television Symposium & Technical Exhibition—Montreux, Switzerland, Jun. 10-15, 1993, pp. 571-586.
CNN Tech: Sonicblue revives ReplayTV, articles.cnn.com, Sep. 10, 2001, retrieved from the internet: http://articles.cnn.com/2001-09-10/tech/replay.tv.idg_1_replaytv-sonicblue-digital-video?_s=PM:TECH, 2 pages.
Cable Computer User's Guide, Rev. 1, Dec. 1985 (Plaintiff's Exhibit 289).
Cable Television Equipment, Jerrold Communications Publication, dated 1992 and 1993, pp. 8-2.1 to 8-6 and 8-14.1 to 8-14.3.
CableData, Roseville Consumer Presentation, Mar. 1986 12 pages.
Cameron et al., Learning GNU Emacs (2d ed. 1996).
Carne, E.B., “The Wired Household,” IEEE Spectrum, vol. 16 No. 10, Oct. 1979, pp. 61-66.
Cascading Style Sheets, level 1, W3C Recommendation (Dec. 17, 1996), available at http://www.w3.org/TR/REC-CSS1-961217#anchor-pseudo-classes.
Chan, “Learning Considerations In User Interface Design: The Room Model,” Publication of the Software Portability Laboratory, University of Waterloo, Ontario, Canada, Jul. 1984, 52 pages.
Chang et al., “An Open-Systems Approach to Video on Demand,” IEEE Communications Magazine, May 1994, pp. 68-80.
Chen et al., “Real Time video and Audio in the World Wide Web,” Proc. 4th World Wide Web Conference, 1995, 15 pages.
Cherrick et al., “An Individually Addressable TV Receiver With Interactive Channel Guide Display, VCR, and Cable Box Control”, IEEE Transactions on Consumer Electronics, vol. 4:3 (Aug. 1994), pp. 317-328.
Christodoulakis, Steven and Graham, Stephen “Browsing Within Time-Driven Multimedia Documents,” publication of the Institute for Computer Research, University of Waterloo, Waterloo, Ontario, Canada Jul. 1988 pp. 219-227.
Computer Network: Current Status and Outlook on Leading Science and Technology, Bureau of Science & Technology (Japan), vol. 1, Dec. 1986, 326 pages.
Contents of the website of StarSight Telecast, Inc. (http://www.StarSight.com) as of Apr. 21, 2004.
Cox, J. et al, “Extended Services in A Digital Compression System,” Proceedings from Eleven Technical Sessions: 42nd Annual Convention and Exposition of the National Cable Television Association, Jun. 1993, pp. 185-191.
Creation/Modification of the Audio Signal Processor Setup for a PC Audio Editor, IBM Technical Disclosure Bulletin, vol. 30, No. 10, Mar. 1988, pp. 367-376.
D2B—Home Bus Fur Audio and Video, Selektor, Apr. 1990, pp. 10, 12.
Davic Digital Audio-Visual Council, DAVIC 1.5 Specification, Baseline Document 1, Revised 4.0, Applications for Home Storage and Internet Based Systems, Published by Digital Audio-Visual Council 1995-1999.
DIRECTV Digital Satellite Receiver—Operating Instructions, Sony Electronics Inc. (2001).
DIRECTV Plus2 System, Thompson Consumer Electronics, Inc. (1999), 2 pages.
DIRECTV Receiver—Owners Manual, Samsung, DIRECTV, Inc. (2002).
DIRECTV Receiver with TiVo Digital Satellite Receiver/Recorder SAT-T60—Installation Guide, Sony Electronics Inc. (2000).
DIRECTV Receiver with TiVo Installation Guide, Philips, TiVo Inc. (2000).
DIRECTV Receiver with TiVo Viewers Guide, TiVo Inc., Sony Corp. (1999, 2000).
Daily, Mack, “Addressable Decoder with Downloadable Operation,” Proceedings from the Eleven Technical Sessions, 42nd Annual Convention and Exposition of the NCTA, Jun. 6-9, 1993, pp. 82-89.
Damouny, “Teletext Decoders-Keeping Up With the Latest Advances,” IEEE Transactions on Consumer Electronics, vol. CE-30, No. 3, Aug. 1984, pp. 429-435.
Das, D. and ter Horst, H., Recommender Systems for TV, Technical Report WS-98-08—Papers from the Aaai Workshop, Madison, WI (1998), 2 pages.
Davis, TV Guide on Screen, “Violence on Television”, House of Representatives, Committee on Energy and Commerce, Subcommittee on Telecommunications and Finance, pp. 93-163, Jun. 25, 1993.
Day, “The Great PC/TV Debate,” OEM Magazine, Jul. 1, 1996, 6 pages.
December, Presenting JAVA, “Understanding the Potential of Java and the Web”, pp. 1-208, © 1995 by Sams.net Publishing.
Declaration Under 37 C.F.R. § 1.132 of Richard E. Glassberg, signed Oct. 20, 2006, filed Oct. 24, 2006, from U.S. Appl. No. 10/346,266, 5 pages.
DiRosa, S. “BIGSURF Netguide”, Jul. 1995, vol. 3.1 (Sections 18,21, and 28—renumbered as pp. 1-27).
Dial M for Movie, Funkschau 11/94 Perspektiven, Video on Demand, vol. 11/1994, pp. 78-79. (English language translation attached).
Dialing the printed page, ITT in Europe Profile, 11/Spring 1977, 2 pages.
Digital TV—at a price, New Scientist, Sep. 15, 1983, vol. 99, No. 1375, p. 770.
Digital Video Broadcasting (DVB); DVB specification for data broadcasting, European Telecommunication Standards Institute, Draft EN 301 192 V1.2.1 (Jan. 1999).
Dinwiddie et al., “Combined-User Interface for Computers, Television, Video Recorders, and Telephone, etc.” IBM Technical Disclosure Bulletin, vol. 33(3B), pp. 116-118 (1990).
DishPro Satellite System—User's Guide, Dish Network (Sep. 1, 2001).
Does NBC Get It, Aug. 14, 1995, retrieved from the internet at http://www.open4success.org/db/bin19/019687.html, retrieved on Dec. 11, 2013, 1 page.
Dr. Dobb's, “Implementing a Web Shopping Cart,” from the internet at https://www.drdobbs.com/article/print?articleld=184409959&siteSect . . . , Sep. 1, 1996, printed from the internet on Sep. 13, 2012, 15 pages.
Duck Tales, (1987)[TV Series 1987-1990], Internet Movie Database (IMDB) [Retrieved on Apr. 7, 2007], 5 pages.
Eckhoff, “TV Listing Star on the Computer”, Central Penn Business Journal/High Beam Research, pp. 1-4, Mar. 15, 1996.
Edwardson, “CEEFAX: A Proposed New Broadcasting Service,” Journal of the SMPTE, Jan. 1974, vol. 83 No. 1, pp. 14-19.
Ehrmantraut et al., “The Personal Electronic Program Guide—Towards the Pre-Selection of Individual TV Programs,” CIKM 96, Rockville, MD., Dec. 31, 1996, pp. 243-250.
Eitz et al., “Videotext Programmiert Videoheimgerate,” Rundfunktech Mitteilungen, Jahrg. 30, H.5, 1986, S. 223 bis 229 (English translation attached).
Eitz, Gerhard, “Zukünftige Informations-und Datenangebote beim digitalen Femsehen-EPG Und ‘Lesezeichen’,” RTM Rundfunktechnische Mitteilungen, Jun. 1997, vol. 41, pp. 67-72.
Electronic Program Guide via Internet, Research Disclosure, Kenneth Mason Publications, Hampshire, GB vol. 385(2) (May 1996) p. 276, ISSN:0374-4353.
Email from Iain Lea to Kent Landfield, comp.sources.misc, vol. 29, Issue 19 (Mar. 27, 1992, 03:28:12 GMT), available at https://groups.google.com/group/comp.sources.misc/msg/2e79d4c058a8a4fe?dmode=source&output=gplain&noredirect&pli=1.
Enhanced Content Specification, ATVEF, from the internet at http://www.atvetcomilibraryispec.html, printed Aug. 22, 2001, the document bears a Copyright date of 1998, 1999, 2000, 41 pages.
Ernst & Young, “On track: A primer on media asset identification,” May 2011, retrieved from the internet May 29, 2014. URL: http://www.ey.com/Publication/vwLUAssets/Media_asset_identification_primer/$FILE/Media_Entertainment.pdf.
European Telecommunication Standard, “Electronic Programme Guide (EPG); Protocol for a TV Guide using electronic data transmission,” 89 pages, sections 1-11.12.7 and annex A-P, bearing a date of May 1997.
European Telecommunications Standards: Digital Broadcasting Systems For Television Sound and Data Services; Specification for Service Information (SI) in Digital Video Broadcasting (DVB) Systems, European Telecommunications Standards Institute, Dec. 1994, 64 pages.
Fuller, C., Streaming gijutsu no genzai Web video system no gaiyou [Current Streaming Technology, Outline of Web Video System], UNIX Magazine, Japan, ASCII K.K., Mar. 1, 2000, vol. 15, No. 3, p. 65-72.
Facsimile Transmission, NHK Research Monthly Report, Dec. 1987 (Unknown author).
Fall 2001 TiVo Service Update with Dual Tuner!, TiVo Inc. (2001).
Fry et al., “Delivering QoS Controlled Continuous Media on the World Wide Web,” Proceedings of the 4th International IFIP Workshop on QoS, Paris, Mar. 6-8, 1996, 12 pages.
GameSpot's Downloads for Allied General, accessed from the internet at http://web.archive.org/web/19970205060703/http://www.gamespot.com/strategy/allie . . . , copyright 1997, printed on Sep. 19, 2013, 1 page.
Gateway Destination: The PC for the Office and the Family Room, PC Magazine, First Looks section, pp. 39-41, Jun. 11, 1996, 3 pages.
Gavron, Jacquelyn, Moran, Joseph, How to Use Microsoft Windows NT 4 Workstation, 1996, entire document, 5 pages.
Getting Started Installation Guide, Using StarSight 1 Manual, and Remote Control Quick Reference Guide, copyright 1994, 93 pages.
Growing US interest in the impact of viewdata, Computing Weekly, Jul. 20, 1978, 1 page.
Gutta, et al., “TV Content Recommender System”, Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, (Jul. 30, 2000), 2 pages.
Hallenbeck, P., Developing an interactive television system that works, R&D Magazine, vol. 39:7, Jun. 1997, p. 54.
Hartwig et al. “Broadcasting and Processing of Program Guides for Digital TV,” SMPTE Journal, pp. 727-732, Oct. 1997.
Hedger, “Telesoftware: Home Computing Via Broadcast Teletext,” IEEE Transactions on Consumer Electronics, vol. CE-25, No. 3, Jul. 1979, pp. 279-287.
Hendrix, “A Natural Language Interface Facility”, Artificial Intelligence Center, Stanford Research Institute, Sigart Newsletter, No. 61, Feb. 1977, 2 pages.
Hill, et al., “Recommending and Evaluating Choices in a Virtual Community of Use” CHI '95 Mosaic of Creativity, pp. 194-201 (1995).
Hiroshi Ishii et al, “Clearface: Translucent Multiuser Interface for TeamWorkStation,” ECSCW, Sep. 1991, pp. 6-10.
Hiroshi Ishii et al, “Toward an Open Shared Workspace: Computer and Video Fusion Approach of Team Workstation,” Communications of the ACM, Dec. 1991, vol. 34 No. 12, pp. 37-50.
Hirotada Ueda et al, “Impact: An Interactive Natural-Motion-Picture Dedicated Multi-Media Authoring System,” Communications of the ACM, Mar. 1991, pp. 343-350.
Hitachi Consumer Electronics Co., Ltd., Certification of market introduction in 1993 of Hitachi Projection TV Model 55EX7K, Dec. 17, 2012, 1 page.
Hitachi Projection Color TV Operating Guide, for Models 55EX7K, 50EX6K, 50ES1B/K, and 46EX3B/4K, 38 pages, undated.
Hitachi Service Manual, No. 0021, Projection Color Television, Models 55EX7K, 50EX6K, 50ES1B/K, 46EX3B/4K, and 46EX3BS/4KS, Aug. 1993, 1 page.
Hoarty, “Multimedia on Cable Television Systems,” Symposium Record Table TV Sessions, 18th International Television Symposium and Technical Exhibition, Montreux, Switzerland, Jun. 10, 1993, pp. 555-567.
Hobbes' Internet Timeline 10.2, by Robert H'obbes' Zakon, from the internet at http://www.zakon.org/robert/internet/timeline/, printed from the internet on Sep. 13, 2012, 29 pages.
Hofmann, Neumann, Oberlies & Schadwinkel, “Videotext Programmiert Videorecorder,” Rundfunktechnischen Mitteilungen, (Broadcast Engineering Reports), vol. 26 No. 6, pp. 254-257, Nov.-Dec. 1982.
Holland, “NAPLPS standard defines graphics and text communications,” EDN, Jan. 10, 1985, pp. 179-192.
IPG Attitude And Usage Study, prepared by Lieberman Research Worldwide for Gemstar-TV Guide International, Oct. 2002.
Imke, S., Interactive Video Management and Production, Educational Technology Publications, May 1991, http://www.amazon.com/Interactive-Video-Management-Production-Steven/dp/0877782334/ref=sr_1_1?ie=UTF8&qid=1416426739&sr=8-1&keywords=interactive+video+management+and+production&pebp=1416426742553, 2 pages.
Instruction Manual, “Using StarSight 2,” StarSight Telecast, Inc., 1994, 27 pages.
Instructional Manual, “Sonic The Hedgehog,” Sega of America, 1992, 11 pages.
Interactive Computer Conference Server, IBM Technical Bulletin, vol. 34, No. 7A, Dec. 1991, pp. 375-377.
Interface Device for Conventional TVs to Improve Functionality, IBM Technical Disclosure Bulletin, vol. 36, No. 7, Jul. 1993, pp. 53-54.
Internet User Forecast by Country, Computer Industry Almanac—Press Release, from the internet at http://www.c-i-a.com/internetusersexec.html, printed from the internet on Sep. 13, 2012, 3 pages.
Irven, “Multi-Media Information Services: A Laboratory Study,” IEEE Communications Magazine, vol. 26, No. 6, Jun. 1988, pp. 27-33 and 36-44.
JVC Service Manual, 27″ Color Monitor/Receiver, Model AV-2771S (U.S.), Jul. 1991, 89 pages.
James Sorce, David Fay, Brian Raila and Robert Virzi, Designing a Broadband Residential Entertainment Service: A Case Study, GTE Laboratories Incorporated, undated, pp. 141-148.
James, “Oracle—Broadcasting the Written Word,” Wireless World, Jul. 1973, vol. 79 No. 1453, pp. 314-316.
Judice, “Move Over Cable, Here Comes Video Via Voice Lines,” Network World, Sep. 1986, p. 26.
Kai et al., Development of a Simulation System for Integrated Services Television, Report from Information Processing Society of Japan, Japan, Sep. 13, 1996, vol. 96, No. 90 p. 13-20.
Karstad., “Microprocessor Control for Color-TV Receivers,” IEEE Transactions on Consumer Electronics, vol. CE-26, May 1980, pp. 149-155.
Kojima, Akira et al., “Implementation Measures to Expand Metadata Application Services”, http://www.ntt.co.jp/tr/0306/files/ntr200306051.pdf, (Jun. 2003), 6 pages.
Komarinski, Anonymous FTP p. 1, May 1, 1995 Linux Journal, 5 pages.
Kornhaas, “Von der Textprogrammierung uber TOP zum Archivsystem,” Radio Fernsehen Elektronik, vol. 40, No. 8, Aug. 30, 1991, pp. 465-468, XP 000240875 Veb Verlag Technik. Berlin, DE ISSN: 1436-1574.
Large, “Throw away the books—Viewdata's coming,” Guardian, Jan. 10, 1978, 1 page.
Large, “Viewdata, the invention that brings boundless advice and information to the home, also sets a test for the Post Office,” Financial Guardian, Jun. 20, 1978, 3 pages.
Lee, Hee-Kyung et al., “Personalized Contents Guide and Browsing based on User Preference”, http://vega.icu.ac.kr/˜mccb-lab/publications/Paper/PersonalizedTV(2002).pdf, (2002), 10 pages.
Letter from StarSight Telecast, Inc. to a StarSight IPG subscriber (with subscriber name, address and account no. redacted) notifying the subscriber of termination of the StarSight IPG, 2003.
Listing of computer code for Video HTU Program (Plaintiff's Exhibit 299).
Listing of computer code for operating system within the Cable Computer in 1985 (Plaintiff's Exhibit 298).
Lists> What's On Tonite! TV Listings (fwd), Internet article (on line), Jan. 28, 1995, XP 002378869 Retrieved from the Internet: URL: www.scout.wisc.edu/Projects/PastProjects/NH/95/01-31/0018.html> [Retrieved on Apr. 28, 2006]. The whole document, 4 pages.
Lloyd, “Impact of technology,” Financial Times, Jul. 1978, 2 pages.
Lowenstein, R.L. and Aller, H.E., “The Inevitable March of Videotex,” Technology Review, vol. 88, Oct. 1985, p. 22-29.
Lynch, Keith, timeline of net related terms and concepts, Mar. 22, 2007, 8 pages.
M/A-COM, Inc., “Videocipher II Satellite Descrambler Owner's Manual,” dated Prior Feb. 1986, pp. 1-17.
MSI Datacasting Systems, TV Communications Journal, Jan. 1973, 2 pages.
Make Room for POP, Popular Science, Jun. 1993, p. 4.
Mannes, “List-Mania, On-Screen, interactive TV guides that can program your VCR are just around the corner,” Video Review, May 1992, pp. 34-36.
Mannes, “Smart Screens: Development of Personal Navigation Systems for TV Viewers,” Video Magazine, Dec. 1993, 6 pages.
Markowitz, A. “Companies Jump on Interactive Bandwagon,” Discount Store News, Dec. 6, 1993, pp. 4 and 131.
McKenzie, G.A., “Oracle—An Information Broadcasting Service Using Data Transmission in the Vertical Interval,” Journal of the SMPTE, Jan. 1974, vol. 83 No. 1, pp. 6-10.
Merrell, R.G., “Tac Timer,” 1986 NCTA Technical Papers, pp. 203-206.
Miller, Matthew D., “A Scenario for the Deployment of Interactive Multimedia Cable Television Systems in the United States in the 1990's”, Proceedings of the IEEE, vol. 82, pp. 585-589, Apr. 1994.
Money, “Teletext and Viewdata,” Butterworth & Co. Ltd., London, 1979, 159 pages.
Motohashi, lizuka, Kuwana, Building Internet TV Guide Service 1 and 2, the 53rd National Conference Proceedings, Japan, Information Processing Society of Japan, Sep. 6, 1996, 5 pages [english translation].
Neumann, Andreas, “WDR Online Aufbau and Perspektiven Automatisierter Online Dienste im WDR,” RTM Rundfunktechnische Mitteilungen, vol. 41, Jun. 1997, pp. 56-66.
Nikkei Click, You can do it now with your existing computer, Nikkei Business Publications, Inc., Aug. 8, 2000, vol. 7, No. 11, pp. 185-188 (No US Translation).
Oberlies, et al.; “VPS-Anzeige Und Uberwachungsgerat”, Rundfunktechnische Mitteilungen, vol. 30, No. 1 Jan. 1986-Feb. 1986, Norderstedt (DE).
Open TV Launches OpenStreamer TM Technology for Broadcasters to Deliver First Ever Real-Time Digital Interactive Television, from the internet at http://www.opentv.com/news/openstreamer_press_final.htm, printed on Jun. 28, 1999, the document bears a copyright date of 1999, 2 pages.
Open TV fur interaktives Fernsehen, Trend and Technik, 9-95 RFE, retrieved from the internet Sep. 2, 2006, 4 pages (English language translation attached).
Owen, “How dial-a-fact is coming closer to home,” The Times, Sep. 30, 1977, 2 pages.
Owen, “Why the Post Office is so excited by its plans for a TV screen information service,” The Times, Sep. 26, 1976, 2 pages.
PTV Recorder Setup Guide, Philips Electronics, TiVo Inc. (2000).
Panasonic TX-33A1G Operating Instructions (undated).
Patent Abstracts of Japan vol. 017 , No. 494, Sep. 7, 1993 and JP 05 122692 A (Pioneer Electron Corp), May 18, 1993.
Patent Abstracts of Japan vol. 098, No. 001, Jan. 30, 1998 and JP 09 247565 A (Sony Corp), Sep. 19, 1997.
Pazzani et al., “Learning and Revising User Profiles: The Identification of Interesting Web Sites,” 27 Machine Learning, pp. 313-331 (1997).
Peddicord, “New on TV: You Bet Your Horse,” The Sun, Baltimore Maryland, Dec. 15, 1994, 1 page.
Pfister, Larry T., “Teletext: Its Time Has Come,” Prepared For The IGC Videotext / Teletext Conference, Andover, Massachusetts, Dec. 14, 1982, pp. 1-11.
Philips TV Set, model No. 25 PT 910A, User Manual; 40 pages (undated).
Poole, “Demand for Viewdata grows,” Sunday Times, Feb. 10, 1977, 3 pages.
Postel, J., Reynolds, J., Request for Comments: 959 File Transfer Protocol, Oct. 1985, 70 pages.
Prevue Guide Brochure, Spring 1984, 1 page.
Prevue Guide Brochure, Spring 1994, 22 pages.
Prevue Networks and OpenTV(R) Agree to Work Together on Deploying Interactive Program Guides Worldwide, from the internet at http://www.opentv.com/news/prevuefinal.htm, printed on Jun. 28, 1999, 2 pages.
Prevue Networks, Inc. Promotional Materials (undated).
Probe XL Brochure, Auto Tote Systems Inc., (Newark, Delaware) (undated) 59 pages.
Product Comparison—Group messaging software: Having the last word, InfoWorld, Nov. 6, 1995.
Qayyum, “Using IVDS and VBI for Interactive Television,” IEEE, Jun. 10, 1996, 11 pages.
RCA Satellite Receiver User's Guide, Thomson Multimedia Inc. (2001).
Ramachandran, “Space-Time Memory: a parallel programming abstraction for interactive multimedia applications, SIGPLAN Notices”, vol. 34:8 (Aug. 1999), pp. 183-192.
Raskutti et al., “A Feature-based Approach to Recommending Selections based on Past Preferences” 7 User Modeling and User-Adapted Interaction, pp. 179-218 (1997).
Rath et al., “Set-Top Box Control Software: A Key Component in Digital Video,” Philips Journal Of Research, vol. 50, No. 1/2 1996, pp. 185-189.
Rayers, D.J., “Telesoftware by Teletext,” 1984 IEEE Conference Papers, vol. 240, p. 323.
Revolution on the Screen, 2nd Ed. Verlag, Wilhelm Goldmann. 1979 (English Translation).
Rewind, replay and unwind with new high-tech TV devices, by Lawrence J. Magid, LA Times. This document was printed from the internet on Jun. 6, 1999 and bears a date of May 19, 1999.
Robertson, “Reaching Through Technology,” CHI '91 Conference Proceedings, Apr. 27-May 2, 1991, 15 pages.
Rogers, “Telcos vs. Cable TV: The Global View With Markets Converging And Regulatory Barriers Falling, Service Carriers Are Ready To Rumble,” Data Communications, Sep. 21, 1995, vol. 24, No. 13, pp. 75-76, 78, 80, XP000526196.
Roizen, Joseph “Teletext in the USA,” Society of Motion Picture and Television Engineers Journal, Jul. 1981, pp. 602-610.
Rosch, “New data and information system set for commercial market trial,” Telephony, Mar. 20, 1978, pp. 98-102.
Roseville City Council Presentation, Mar. 13, 1985 (Defendant's Exhibit 226).
Ryan, “Interactive TV Takes a Corporate Twist,” Electronic Engineering Times, Jul. 10, 1995, 3 pages.
Sato, T. et al., WWW jou no eizou browsing kikou no teian to Jitsugen [A Proposal for A Video Browsing Mechanism on World Wide Web and its Implementation], Japan Society for Software Science and Technology, collection of 14th convention articles, Japan, Japan Society for Software Science and Technology, Sep. 30, 1997, p. 193-196.
SONICblue Incorporated: ReplayTV 4000 User Guide 12.17, Chapter Five: Networking, Sep. 10, 2001, retrieved from the internet: http://www.digitalnetworksna.com/support/replayTV/downloads/ReplayTV4000UserGuide.12.17.pdf.
STORit, Report on the IBC'99 Demonstration, Deliverable #8 AC312/phi/prl/ds/p/008b1 Oct. 1999.
Saito, Takeshi, et al., “Homenetwork Architecture Considering Digital Home Appliance,” Technical Committee meeting of the Institute of Electronics, Information and Communication Engineers (IEICE), Japan, Nov. 6, 1997, vol. 97, No. 368, p. 57-64.
Sandringham, “Dress rehearsal for the PRESTEL show,” New Scientist, Jun. 1, 1978, 9 pages.
Savage, “Internet's What's on Tonite!' Tells You Just That and More,” The News, InfoWatch, May 29, 1995, 1 page.
Schauer, Tom, No subject, (tschauer@moscow.com) Thu, Sep. 28, 1995 16:46:48 -0700, XP-002378870 [Retrieved from the Internet Apr. 28, 2006].
Schlender, B.R., “Couch Potatoes! Now It's Smart TV,” Fortune, Nov. 20, 1989, pp. 111-116.
Schmuckler, Eric “A marriage that's made in cyberspace (television networks pursue links with online information services),” May 16, 1994 MEDIAWEEK, vol. 4, No. 20, 5 pages.
Sealfon, Peggy, “High Tech TV,” Photographic, Dec. 1984, 2 pages.
Selected pages from the “BBC Online—Schedules” web page. This web page is located at http://www.bbc.co.uk/schedules/ (as printed from the Internet on Oct. 19, 1999 and being dated as early as May 24, 1997), 6 pages.
Sharpless et al., “An advanced home terminal for interactive data communication,” Conf. Rec. Int. Conf. Commun. ICC '77, IEEE, Jun. 12-15, 1977, 6 pages.
Soin et al., “Analogue-Digital ASICs”, Peter Peregrinus Limited, 1991, p. 239.
Split Personality, Popular Science, Jul. 1993, p. 52.
StarSight CB 1500 Customer Letter, 1994, 27 pages.
StarSight Operating Guide and Quick Reference, 19 sheets (undated).
StarSight Telecast, StarSight introduces TVGuide-like programmer for homes, 1994, 1 page.
Start Here, Sony, TiVo and DIRECTV (undated).
Stickland, D.C., “It's a common noun,” The Economist, Jun. 5, 1978, 1 page.
Stokes, “The viewdata age: Power to the People,” Computing Weekly, Jan. 1979, 2 pages.
Sunada, et al, “Teletext Color Television Receiver Model C-29M950, C26M940,” NEC Home Electronics, NEC Giho, 1987, 16 pages.
Super-TVs, Popular Science, Jul. 1985, p. 64.
SuperGuide on Screen Satellite Program Guide, User's Guide, Owner's Manual, and sales literature, 74 sheets (undated).
Sussman, “GTE Tunes In to Home TV Shopping,” PC Week, vol. 5(26), Jun. 28, 1988, 2 pages.
Symposium Record Cable Sessions, “Digital On-Screen Display of a New Technology for the Consumer Interface,” Publication Date May 1993.
TV Guide movie database Internet web pages printed on Aug. 12, 1999. 9 pages.
TV Guide on Screen prior Use Transcript of Proceedings—“Violence on Television,” House of Representatives, Committee on Energy and Commerce, Subcommittee on Telecommunications and Finance, Jun. 25, 1993, 36 pages.
TV Listings Functional Spec., Time Video Information Services, Inc., 11 pages, undated.
Tech Notes: Product Updates from M/A-COM Cable Home Group, “Videocipher Owners Manual Update,” Issue No. 6, Feb. 1986, 19 pages.
Technical White Paper, “Open TV™ Operating Environment,” (© 1998 OpenTV Inc.), pp. 1-12.
Technological Examination & Basic Investigative Research Report on Image Databases, Japan Mechanical Engineering Organization Int'l Society for the Advancement of Image Software, Japan, Mar. 1988, 127 pages.
Technology Overview for TV Guide On Screen Information Sheets, 8 Sheets (undated).
Technology: Turn on, tune in and print out—An experimental interactive television service is set to alter our viewing habits, Financial Times (London), Oct. 14, 1993, p. 11.
Teletext presents the alternative view, Financial Times, Oct. 24, 1977, 2 pages.
The Columbia House Video Club: Download Software, accessed from the internet at http://web.archive.org/web/19961223163101/http://www.columbiahouse.com/repl/vc . . . , copyright 1996, printed on Sep. 19, 2013, p. 1.
The New Media and Broadcast Policy: An Investigation & Research Conference Report on Broadcasting Diversification, Radio Regulatory Bureau, Japan Ministry of Posts & Telecommunications, Mar. 1982, 114 pages.
The television program guide website of Gist Communications, Inc. of New York, New York. This website is located at www.gist.com (as printed from the Internet on Aug. 14, 1997), 272 pages.
The television program guide website of TV Guide Entertainment Network. This website is located at www.tvguide.com (as printed from the Internet on Aug. 13-18, 1997) pp. 139.
Thomas, “Electronic Program Guide Applications—The Basics of System Design,” NCTA Technical Papers, 1994, pp. 15-20.
Three men on a Viewdata bike, The Economist, Mar. 25, 1978, pp. 1-2.
Today's Stop: What's On Tonite, Oct. 3, 1995, retrieved from the internet at http://internettourbus.com/arch/1995/TB100395.TXT, 3 pages.
Tol, et al., “Requirements and Scenarios for the Bi-directional Transport of Metadata”, TV Anytime Meeting, Version 1.0, Document TV150 (Aug. 2, 2002), 7 pages.
UVSG Offers System-Specific Web Site Development for OPS, press release of United Video Satellite Group, Apr. 12, 1996, 2 pages.
UVSG Teams With Microsoft on Internet Information Server, press release of United Video Satellite Group, Feb. 22, 1996, 2 pages.
Ueda, Hirotada et al, “Impact: An Interactive Natural-Motion-Picture Dedicated Multi-Media Authoring System,” Communications of the ACM, Mar. 1991, pp. 343-350.
Uniden, UST-4800 Super Integrated Receiver/Descrambler, Preliminary Reference Manual, 80 pages, Nov. 12, 1991.
Uniden, UST-4800, Integrated Receiver/Descrambler, Installation Guide, 60 pages, © 1990, Uniden America Corporation.
Uniden, UST-4800, Integrated Receiver/Descrambler, Operating Guide, 24 pages, © 1990, Uniden America Corporation.
User's Guide RCA Color TV with TV Plus + Guide, Thomson Consumer Electronics (1997).
Various publications of Insight Telecast, 1992 and 1993, 10 pages.
Veith, R.H., “Television's Teletext,” Elsevier Science Publishing Co., Inc, 1983, pp. 13-20, 41-51.
Video Plus, Billboard, vol. 98, No. 4, Jan. 25, 1986, p. 25.
VideoGuide User's Manual, 14 sheets (undated).
VideoGuide, “VideoGuide User's Manual,” pp. 1-28 (p. 11 is the most relevant).
Videocipher Stipulation, May 1996, 5 pages.
Viewdata and its potential impact in the USA: Final Report/Volume One, The UK Experience, Link and Butler Cox & Partners Limited, Oct. 1978, 129 pages.
Viewdata moves in US but GEC may lose out, Computing Weekly, Jan. 25, 1978, 1 page.
Vision/1 from Tecmar, IBM transforms PS/1 into a TV, Info World, vol. 14(9), Mar. 2, 1992, p. 34.
Web TV and Its Consumer Electronics Licensees debut First Internet Television Network and Set Top Box XP 002113265 Retrieved from the Internet: <URL http://www.webtv.net/company/news/archive/License.html> Jul. 10, 1996, 6 pages [retrieved on Dec. 1, 2005].
Welcome to Columbia House Online, accessed from the internet at http://web.archive.org/web/19961221085121/http://www.columbiahouse.com/, copyright 1996, printed on Sep. 19, 2013, 1 page.
Whitehorn, “Viewdata and you,” Observer, Jul. 30, 1978, 1 page.
Windows 98 Feature Combines TV, Terminal and the Internet, New York Times, Aug. 18, 1998.
Winkler, M., “Computer Cinema: Computer and video: from TV converter to TV studio,” Computerkino, (translation) Exhibit NK 13 of TechniSat's nullity action against EP'111, Issue 10, pp. 100-107 (1992).
Wittig et al, “Intelligent Media Agents in Interactive Television Systems,” Proceedings of the International Conference on Multimedia Computing and Systems, Los Alamitos, CA, US, May 15-18, 1995, p. 182-189, XP 000603484.
Yoshida, “Interactive TV a Blur,” Electronic Engineering Times, Jan. 30, 1995, 2 pages.
“Mystrands with Playlist Builder”, http://www.mystrands.com/corp/files/MyStrands-Discovery-Desktop.pdf.
Hickman A., “Enhancing TV-Anytime With Metadata from a Bi-Directional Channel”, http://www.broadcastpapers.com/whitepapers/paper_loader.cfm?pid=438.
Tewary biographical information (downloaded May 20, 2011 from http://iridescentlearning.org/wp-content/uploads/2011/02/ATewaryBio.pdf).
Toyama, Kentaro et al., “Geographic Location Tags on Digital Images,” MM'03, Nov. 2-8, 2003, Berkeley, California, © 2003 ACM. (downloaded from: http://delivery.acm.org/10.1145/960000/957046/p156-toyama.pdf?key1=957046&key2=8247661921&coll=DL&dl=ACM&CFID=546995&CFTOKEN=16766440 on Dec. 6, 2010).
Titanic—Making of James Cameron's Masterpiece, downloaded from http://www.titanicandco.com/filmtitanic.html on Apr. 22, 2015.
Victorian era, from Wikipedia, downloaded from http://en.wikipedia.org/wiki/Victorian_era on Apr. 22, 2014.
Related Publications (1)
Number Date Country
20120176509 A1 Jul 2012 US
Provisional Applications (1)
Number Date Country
61430310 Jan 2011 US