This application includes a transmittal under 37 C.F.R. §1.52(e) of a Computer Program Listing Appendix comprising duplicate compact discs (2), respectively labeled “Copy 1” and “Copy 2”. The discs are IBM-PC machine formatted and Microsoft® Windows Operating System compatible, and include identical copies of the following list of files:
All of the material disclosed in the Computer Program Listing Appendix is hereby incorporated by reference into the present application.
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
1. Field of the Invention
The present invention relates generally to digital image processing and, more particularly, to improved techniques for rendering digital images on different devices.
2. Description of the Background Art
Today, digital imaging, particularly in the form of digital cameras, is a prevalent reality that affords a new way to capture photos using a solid-state image sensor instead of traditional film. A digital camera functions by recording incoming light on some sort of sensing mechanism and then processes that information (basically, through analog-to-digital conversion) to create a memory image of the target picture. A digital camera's biggest advantage is that it creates images digitally thus making it easy to transfer images between all kinds of devices and applications. For instance, one can easily insert digital images into word processing documents, send them by e-mail to friends, or post them on a Web site where anyone in the world can see them. Additionally, one can use photo-editing software to manipulate digital images to improve or alter them. For example, one can crop them, remove red-eye, change colors or contrast, and even add and delete elements. Digital cameras also provide immediate access to one's images, thus avoiding the hassle and delay of film processing. All told, digital imaging is becoming increasingly popular because of the flexibility it gives the user when he or she wants to use or distribute an image.
Regardless of where they originate, digital images are often manipulated by users. Using Adobe Photoshop on a desktop computer, for example, a user can manually create an image by layering different objects on top of one another. For instance, one layer of an image may contain artwork, another layer may contain text, another layer may contain a bitmap border, and so forth. The image, with its separate layers, may then be saved in Photoshop (native) file format or in one of a variety of other file formats.
Using Photoshop, one could conceivably pre-generate different versions of a given image (i.e., pre-render the image's different layers) so that the image is correctly rendered for each possible (display-enabled) device in the world. However, that approach is not practical. The various devices impose file size constraints (e.g., less than 5K bytes), bit depth constraints (e.g., no more than 8 bits per pixel), and image size constraints (e.g., the image cannot be more than 100 by 100 pixels). Thus, the task of creating an acceptable version of the image for thousands of devices is impractical.
Consider, for example, the task of layering a character (e.g., Disney character) on top of artwork (e.g., bitmap background), for display on a target device capable of displaying JPEG. In this case, the artwork would need to be resized to the screen size of the target device. The character would then have to be overlaid (layered) on top of the resized artwork, and finally the image would need to be saved to the correct JPEG quality. If the generated image file were too big for the target device, the process would have to be repeated, including resizing the background artwork and relayering the character on top of the artwork. Using currently available tools, the task is at best tedious and labor-intensive. Further, the foregoing manual (i.e., pre-rendering) approach is only possible when one is dealing with static images. If a user wants to layer an object on top of an existing image instantaneously, the manual approach does not offer a possible solution.
Existing approaches to layering objects rely on browser-based, online techniques. However, those approaches are basically online versions of the above-described desktop approach (i.e., Adobe Photoshop approach). In particular, those approaches do not take into account the various constraints that may be imposed by a given target device, such as a handheld device. Instead, those approaches rely on an environment with a fixed set of device constraints (i.e., a fixed viewport). If the image is transferred to a target device, the image may have to be resized. Since the image is not being dynamically re-created, one cannot take advantage of vector graphics; thus, certain features of the image will be lost. For example, text that looks good when displayed on a desktop browser at 640 by 480 resolution will look awful when resized for display on a mobile device having a screen resolution of 100 by 100. Instead, it would be desirable to render the text (as well as any other graphics) based on the target device's final screen resolution as well as any other applicable target device constraints. Given these and other limitations of current approaches, a better solution is sought.
What is needed is a system providing methods that allow dynamic reshaping of a logical viewport and allow dynamic adjusting of encoding parameters, including file size constraints, so that rendering of digital images is dynamically optimized or customized for different target devices. The present invention fulfills this and other needs.
The following definitions are offered for purposes of illustration, not limitation, in order to assist with understanding the discussion that follows.
A system for on-demand creation of images that are customized for a particular device type is described. In one embodiment, the system comprises a module serving as a repository for images, each image comprising image components arranged into distinct layers; a module for processing a request from a device for retrieving a particular image from the repository, the module determining a particular device type for the device based in part on information contained in the request; and a module for creating a copy of the particular image that is customized for the device, the module individually rendering image components in the distinct layers of the particular image based on the determined device type, such that at least some of the image components in the distinct layers of the particular image are customized for the device.
A method for dynamically optimizing display of an image transmitted to a client device is also described. In one embodiment, the method includes steps of receiving an online request from a particular client device for retrieving a target image for display, the request including information assisting with determination of a device type for the client device, and the target image comprising image components arranged into individual layers; based on the request, determining a device type for the particular client device; based on the determined device type, retrieving information specifying viewport and layering information for the particular client device; based on the viewport and layering information, creating a version of the target image optimized for display at the particular client device; and transmitting the created version of the target image to the client device for display.
The following description will focus on the currently preferred embodiment of the present invention, which is implemented in a digital imaging environment. The present invention is not, however, limited to any one particular application or any particular environment. Instead, those skilled in the art will find that the system and methods of the present invention may be advantageously employed on a variety of different devices. Therefore, the description of the exemplary embodiment that follows is for purpose of illustration and not limitation.
I. Digital Camera-Based Implementation
A. Basic Components of Digital Camera
The present invention may be implemented on a media capturing and recording system, such as a digital camera.
As shown, the imaging device 120 is optically coupled to the object 150 in the sense that the device may capture an optical image of the object. Optical coupling may include use of optics, for example, such as a lens assembly (not shown) to focus an image of the object 150 on the imaging device 120. The imaging device 120 in turn communicates with the computer 140, for example, via the system bus 130. The computer 140 provides overall control for the imaging device 120. In operation, the computer 140 controls the imaging device 120 by, in effect, telling it what to do and when. For instance, the computer 140 provides general input/output (I/O) control that allows one to coordinate control of the imaging device 120 with other electromechanical peripherals of the digital camera 100 (e.g., flash attachment).
Once a photographer or camera user has aimed the imaging device 120 at the object 150 (with or without user-operated focusing) and, using a capture button or some other means, instructed the camera 100 to capture an image of the object 150, the computer 140 commands the imaging device 120 via the system bus 130 to capture an image representing the object 150. The imaging device 120 operates, in essence, by capturing light reflected from the object 150 and transforming that light into image data. The captured image data is transferred over the system bus 130 to the computer 140 which performs various image processing functions on the image data before storing it in its internal memory. The system bus 130 also passes various status and control signals between the imaging device 120 and the computer 140. The components and operations of the imaging device 120 and the computer 140 will now be described in greater detail.
B. Image Capture on Imaging Device
In operation, the imaging device 120 captures an image of the object 150 via reflected light impacting the image sensor 230 along optical path 220. The lens 210 includes optics to focus light from the object 150 along optical path 220 onto the image sensor 230. The focus mechanism 241 may be used to adjust the lens 210. The filter(s) 215 preferably include one or more color filters placed over the image sensor 230 to separate out the different color components of the light reflected by the object 150. For instance, the image sensor 230 may be covered by red, green, and blue filters, with such color filters intermingled across the image sensor in patterns (“mosaics”) designed to yield sharper images and truer colors.
While a conventional camera exposes film to capture an image, a digital camera collects light on an image sensor (e.g., image sensor 230), a solid-state electronic device. The image sensor 230 may be implemented as either a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) sensor. Both CMOS and CCD image sensors operate by capturing light on a grid of small cells known as photosites (or photodiodes) on their surfaces. The surface of an image sensor typically consists of hundreds of thousands of photosites that convert light shining on them to electrical charges. Depending upon a given image, varying amounts of light hit each photosite, resulting in varying amounts of electrical charge at the photosites. These charges can then be measured and converted into digital information. A CCD sensor appropriate for inclusion in a digital camera is available from a number of vendors, including Eastman Kodak of Rochester, N.Y., Philips of The Netherlands, and Sony of Japan. A suitable CMOS sensor is also available from a variety of vendors. Representative vendors include STMicroelectronics (formerly VLSI Vision Ltd.) of The Netherlands, Motorola of Schaumburg, Ill., and Intel of Santa Clara, Calif.
When instructed to capture an image of the object 150, the image sensor 230 responsively generates a set of raw image data (e.g., in CCD format for a CCD implementation) representing the captured object 150. In an embodiment using a CCD sensor, for example, the raw image data that is captured on the image sensor 230 is routed through the signal processor 251, the analog-to-digital (A/D) converter 253, and the interface 255. The interface 255 has outputs for controlling the signal processor 251, the focus mechanism 241, and the timing circuit 242. From the interface 255, the image data passes over the system bus 130 to the computer 140 as previously illustrated at
C. Image Processing
A conventional onboard processor or computer 140 is provided for directing the operation of the digital camera 100 and processing image data captured on the imaging device 120.
The processor (CPU) 264 typically includes a conventional processor device (e.g., microprocessor) for controlling the operation of camera 100. Implementation of the processor 264 may be accomplished in a variety of different ways. For instance, the processor 264 may be implemented as a microprocessor (e.g., MPC823 microprocessor, available from Motorola of Schaumburg, Ill.) with DSP (digital signal processing) logic blocks, memory control logic blocks, video control logic blocks, and interface logic. Alternatively, the processor 264 may be implemented as a “camera on a chip (set)” using, for instance, a Raptor II chipset (available from Conexant Systems, Inc. of Newport Beach, Calif.), a Sound Vision Clarity 2, 3, or 4 chipset (available from Sound Vision, Inc. of Wayland, Mass.), or similar chipset that integrates a processing core with image processing periphery. Processor 264 is typically capable of concurrently running multiple software routines to control the various processes of camera 100 within a multithreaded environment.
The digital camera 100 includes several memory components. The memory (RAM) 266 is a contiguous block of dynamic memory which may be selectively allocated to various storage functions. Dynamic random-access memory is available from a variety of vendors, including, for instance, Toshiba of Japan, Micron Technology of Boise, Id., Hitachi of Japan, and Samsung Electronics of South Korea. The non-volatile memory 282, which may typically comprise a conventional read-only memory or flash memory, stores a set of computer-readable program instructions to control the operation of the camera 100. The removable memory 284 serves as an additional image data storage area and may include a non-volatile device, readily removable and replaceable by a camera 100 user via the removable memory interface 283. Thus, a user who possesses several removable memories 284 may replace a full removable memory 284 with an empty removable memory 284 to effectively expand the picture-taking capacity of the camera 100. The removable memory 284 is typically implemented using a flash disk. Available vendors for flash memory include, for example, SanDisk Corporation of Sunnyvale, Calif. and Sony of Japan. Those skilled in the art will appreciate that the digital camera 100 may incorporate other memory configurations and designs that readily accommodate the image capture and processing methodology of the present invention.
The digital camera 100 also typically includes several interfaces for communication with a camera user or with other systems and devices. For example, the I/O controller 280 is an interface device allowing communications to and from the computer 140. The I/O controller 280 permits an external host computer (not shown) to connect to and communicate with the computer 140. As shown, the I/O controller 280 also interfaces with a plurality of buttons and/or dials 298, and an optional status LCD 299, which in addition to the LCD screen 296 are the hardware elements of the user interface 295 of the device. The digital camera 100 may include the user interface 295 for providing feedback to, and receiving input from, a camera user, for example. Alternatively, these elements may be provided through a host device (e.g., personal digital assistant) for a media capture device implemented as a client to a host device. For an embodiment that does not need to interact with users, such as a surveillance camera, the foregoing user interface components may not be required. The LCD controller 290 accesses the memory (RAM) 266 and transfers processed image data to the LCD screen 296 for display. Although the user interface 295 includes an LCD screen 296, an optical viewfinder or direct view display may be used in addition to or in lieu of the LCD screen to provide feedback to a camera user. Components of the user interface 295 are available from a variety of vendors. Examples include Sharp, Toshiba, and Citizen Electronics of Japan, Samsung Electronics of South Korea, and Hewlett-Packard of Palo Alto, Calif.
The power management 262 communicates with the power supply 272 and coordinates power management operations for the camera 100. The power supply 272 supplies operating power to the various components of the camera 100. In a typical configuration, power supply 272 provides operating power to a main power bus 278 and also to a secondary power bus 279. The main power bus 278 provides power to the imaging device 120, the I/O controller 280, the non-volatile memory 282, and the removable memory 284. The secondary power bus 279 provides power to the power management 262, the processor 264, and the memory (RAM) 266. The power supply 272 is connected to batteries 275 and also to auxiliary batteries 276. A camera user may also connect the power supply 272 to an external power source, as desired. During normal operation of the power supply 272, the main batteries 275 provide operating power to the power supply 272 which then provides the operating power to the camera 100 via both the main power bus 278 and the secondary power bus 279. During a power failure mode in which the main batteries 275 have failed (e.g., when their output voltage has fallen below a minimum operational voltage level), the auxiliary batteries 276 provide operating power to the power supply 272. In a typical configuration, the power supply 272 provides power from the auxiliary batteries 276 only to the secondary power bus 279 of the camera 100.
The above-described system 100 is presented for purposes of illustrating the basic hardware underlying a media capturing and recording system (e.g., digital camera) that may be employed for implementing the present invention. The present invention, however, is not limited to just digital camera devices but, instead, may be advantageously applied to a variety of devices capable of supporting and/or benefiting from the methodologies of the present invention presented in detail below.
D. System Environment
II. Dynamic Viewport Layering
A. Introduction
Content creators want to create interesting content to add to user pictures. For example, content creators may want to layer user pictures with interesting text or interesting animation. This entails creating content on the fly. However, when a content creator creates content on the fly, the creator faces the additional problem of correctly displaying or rendering the content on devices with different display characteristics. The approach of the present invention is to create a solution that allows one to describe what has to happen in the final presentation. For example, an exemplary description would indicate that an image should be displayed with a frame, with animation overlaid on the image, and with the text “Happy Birthday” displayed on top. In this manner, the solution allows the image to be correctly displayed on devices with different display characteristics.
More particularly, the present invention applies a two-pronged approach. First, the approach of the present invention is to provide a description language that allows one to specify how the layering is to be performed. In the currently preferred embodiment, the description language conforms to XML format and provides a hierarchical description of the layers that form a given image. The different layers include images (e.g., bitmaps), animations, text, vector graphics, and the like. The description language includes a syntax that allows one to describe how to compose the different layers together and how to display those layers in a viewport. The description language does not specify an exact layout but, instead, accommodates the constraints of the various target devices. A given description for a particular image is resident on the server; it is not sent to the target device. Instead, the target device receives the final encoded format (image). Thus, the description language accommodates the encoding constraints imposed by a particular target device.
The second prong of the approach of the present invention is to dynamically reshape or reconfigure the viewport, so that the image is correctly rendered at the target device. Consider a set of device constraints for a given target device. The constraints will specify certain limits, such as maximum bits allowed per pixel (e.g., 8 bits per pixel), maximum screen size (e.g., 100 pixels by 100 pixels), and the like. In accordance with the present invention, the viewport is dynamically reconfigured to fit the constraints of the then-current target device. Moreover, multiple constraints must usually be satisfied. For example, a target device may specify a maximum image size (e.g., 5K). In order to accommodate that constraint, it may be necessary to decrease the bit depth (i.e., bits per pixel). The approach of the present invention entails satisfying a device's constraints mutually, so that, for example, an image's bit depth may be varied to 4 bits per pixel to accommodate the 5K file size constraint. However, the bit depth would not be allowed to exceed 8 bits per pixel (i.e., the maximum bit depth supported by the target device). All told, there are a variety of constraints or parameters that could potentially be adjusted to dynamically match the logical viewports (and therefore the image) to the target device.
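For purposes of illustration only, the following sketch shows one way such a mutual trade-off between bit depth and file size could be expressed; the helper names and the simple size estimate are assumptions for this example, not the implementation of the present invention.

```cpp
// Illustrative sketch only: constraints are satisfied mutually. Bit depth may
// be reduced below the device's maximum to meet the file-size limit, but is
// never allowed to exceed the maximum the device supports.
#include <cstddef>

// Stand-in for the encoder's output size at a given bit depth.
std::size_t estimateBytes(int width, int height, int bitsPerPixel)
{
    return static_cast<std::size_t>(width) * height * bitsPerPixel / 8;
}

// Picks the highest bit depth, no greater than maxBitsPerPixel, whose output
// fits within maxFileBytes (stepping 8 -> 4 -> 2 -> 1).
int chooseBitDepth(int width, int height, int maxBitsPerPixel,
                   std::size_t maxFileBytes)
{
    for (int bpp = maxBitsPerPixel; bpp > 1; bpp /= 2) {
        if (estimateBytes(width, height, bpp) <= maxFileBytes)
            return bpp;
    }
    return 1;   // minimum depth; the viewport size may then need to shrink too
}

// Example: a 100 x 100 image, an 8-bit device, and a 5K (5,120-byte) limit
// yield 4 bits per pixel (10,000 bytes at 8 bpp vs. 5,000 bytes at 4 bpp).
```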
B. Basic Methodology
The present invention provides an iterative optimization (customization) method that is used to meet the constraints of target devices while maintaining good image quality. As shown at 401 in
At the end of the foregoing, the layers (e.g., Layer 0 and Layer 1) are ready to be mapped to the Viewport, as shown at 403. A File Size Control block 405, which communicates with a Viewport Specification component 417, specifies the Viewport Size 407 for this mapping. The Viewport size may be larger than the target display (e.g., due to scrolling capability). The layers are merged after mapping, as indicated at 409. The next step in the process is clipping the Viewport to a clip-path, at 411. The clip-path corresponds to the Viewport unit rectangle (0.0,0.0,1.0,1.0), but it can also be specified to be one of the rendered layers. The clipped rectangle is then encoded per the device constraints, such as color-depth, encoding method, system palette, and the like. Mapping 413 represents this operation. If the resultant file size meets the file size constraints (tested at 415), then the image is returned to the target (e.g., mobile) display. Otherwise, the file size control block re-sizes the viewport and reinitiates viewport mapping, merging, and the like, as indicated by the loop back to the File Size Control block 405.
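A compact, hypothetical sketch of this control loop is shown below; the DeviceConstraints type and the mapMergeClipAndEncode helper are stand-ins for the blocks 403 through 417 described above, not the actual code.

```cpp
// Hypothetical sketch of the file-size control loop (blocks 403-415 above);
// all type and function names are illustrative, not the actual implementation.
#include <cstddef>
#include <vector>

struct DeviceConstraints {
    int         maxWidth;        // screen width limit, in pixels
    int         maxHeight;       // screen height limit, in pixels
    int         maxBitsPerPixel; // color-depth limit
    std::size_t maxFileBytes;    // file-size limit (e.g., 5K)
};

// Stand-in for blocks 403-413: map the layers to a Viewport of the given size,
// merge them, clip to the clip-path, and encode per the device constraints.
std::vector<unsigned char> mapMergeClipAndEncode(int viewportWidth,
                                                 int viewportHeight,
                                                 const DeviceConstraints& dc);

std::vector<unsigned char> renderForDevice(const DeviceConstraints& dc)
{
    // Viewport Specification (417): start from the device's screen size (the
    // Viewport may in practice be larger, e.g., when the device can scroll).
    int width  = dc.maxWidth;
    int height = dc.maxHeight;
    for (;;) {
        std::vector<unsigned char> out = mapMergeClipAndEncode(width, height, dc);
        // Test (415): if the file size fits (or the Viewport cannot shrink
        // further), return the image to the target display.
        if (out.size() <= dc.maxFileBytes || width <= 1 || height <= 1)
            return out;
        // Otherwise, File Size Control (405) re-sizes the Viewport (preserving
        // aspect ratio) and reinitiates the mapping, merging, and encoding.
        width  = width  * 9 / 10;
        height = height * 9 / 10;
    }
}
```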
C. Image Transform API
The following describes the interface for specifying image transformations. To make effective use of the interface, it is useful to understand the imaging model used by the current invention, which is based on a layering paradigm. The layers may include, for example, image, text, and vector graphics layers. Layers have spatial and temporal attributes.
1. Spatial Layering
The image transformation API is a layering API that describes how to combine various layers (image, text, animation, etc.) to create special effects.
The origin is in the top left corner of the Viewport.
The X axis advances to the right.
The Y axis advances down.
The X coordinates are normalized to Viewport width.
The Y coordinates are normalized to Viewport height.
A “Viewport Unit Rectangle” 551 is defined to be a rectangle that spans the coordinates (0.0, 0.0), (1.0,1.0). Each layer is mapped to the sub-region of the Viewport, per its Viewport_map. An example Viewport map sub-region or window is shown at 553 in
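To make the normalized coordinate system concrete, the following sketch converts a window expressed in Viewport units into device pixels; the structure and function names are illustrative assumptions rather than the API's actual definitions.

```cpp
// Illustrative only: converting a normalized viewport_map window to pixels.
struct Viewport { int widthPx; int heightPx; };

// A window in Viewport units: (0.0, 0.0) is the top-left corner and
// (1.0, 1.0) is the bottom-right corner of the Viewport Unit Rectangle.
struct ViewportMap { double x, y, w, h; };

struct PixelRect { int x, y, w, h; };

PixelRect toPixels(const ViewportMap& m, const Viewport& vp)
{
    // X coordinates are normalized to Viewport width, Y to Viewport height.
    return PixelRect{
        static_cast<int>(m.x * vp.widthPx),
        static_cast<int>(m.y * vp.heightPx),
        static_cast<int>(m.w * vp.widthPx),
        static_cast<int>(m.h * vp.heightPx)
    };
}

// Example: the window (0.0, 0.8, 1.0, 0.2) on a 100 x 120 Viewport maps to the
// bottom 20% of the screen: the pixel rectangle (0, 96, 100, 24).
```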
2. Temporal Layering
In addition to the spatial “order” attribute, layers also have temporal attributes, all expressed in milliseconds: start_time, duration, and repeat_period.
3. XML Approach
Layering is achieved using an XML API. In this method, the (arg,val) pair “enh=<XML_URL>” specifies an XML URL to use.
http://eswitch.foo.com/es?src=http://source.foo.com/images/imgl.jpg&enh=http://source.foo.com/templates/enhance.xml.
4. XML Hierarchy
The hierarchy of objects that is used in the XML API is shown in
5. Image Transform
The image transform consists of an element tag to wrap the details of the image layering operation.
6. Common Properties of Layers
The layers have common properties that describe spatial and temporal behavior.
a) Spatial Properties
A Layer's spatial properties are determined by the “order” attribute and the “viewport_map” child-element.
The following (advanced) elements are useful to re-position the image after the mapping.
b) Temporal Properties
The temporal attributes start_time, duration, and repeat_period are supported by all layers.
7. Image Layer
The image layer's attributes and child-elements determine how it is:
Created
Mapped to a window within the Viewport.
a) Source Image Layer
The image specified by the “src=<IMAGE_URL>” (arg,val) pair becomes the “source” layer. This layer is inserted between any background (layer order 0) and the remaining layers. This layer has default attribute and child-element values for the Viewport_map.
8. Text Layer
This layer supports text rendition.
9. Bezier Layer
The Bezier Layer is used to overlay vector graphics. The intent of this layer is to support vector graphics with dynamic text insertion capabilities.
10. Viewport
Once the layers are mapped onto the Viewport and merged, the resultant image is mapped to the client's preferred image format per constraints specified in the Viewport element.
a) Aspect/Anchor Layer
The current invention sets the Viewport's width to the target device's width, but the Viewport height is determined by the aspect ratio as defined by the aspect_layer.
Example: The target mobile device is 100×120. The current invention will then create a Viewport that is 100×120.
Example: The image is 640×480. The mobile device is 100×100. The current invention will then create a Viewport that is 100×75. Since the coordinate system is normalized to the Viewport, all layering will then be relative to this image layer.
Though initially the Viewport dimensions are determined per the method described above, the dimensions may be adjusted to satisfy file size constraints. The aspect ratio is preserved when the Viewport is resized.
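A minimal sketch of this sizing rule, consistent with the two examples above, is shown below (the type and function names are assumptions, not the actual implementation):

```cpp
// Illustrative sketch: deriving the initial Viewport dimensions from the
// device screen and an optional aspect layer.
struct Size { int width; int height; };

Size computeViewport(Size device, const Size* aspectLayer)
{
    // The Viewport width is always set to the target device's width.
    Size vp{device.width, device.height};
    if (aspectLayer) {
        // The Viewport height follows the aspect layer's aspect ratio.
        vp.height = device.width * aspectLayer->height / aspectLayer->width;
    }
    return vp;
}

// Example 1: device 100 x 120, no aspect layer        -> Viewport 100 x 120
// Example 2: device 100 x 100, aspect layer 640 x 480 -> Viewport 100 x 75
```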
b) Force_Colors
The set of colors to be forced is specified in one of the following formats:
Mobile devices typically have one of the following color modes:
11. Class Definitions
The C++ class definitions of the ImageTransform class, the ImageLayer class and Viewport class are shown here.
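The full listings are provided in the Computer Program Listing Appendix rather than reproduced in this text. The following condensed sketch, whose member names are assumptions drawn from the surrounding description, indicates how the classes outlined in subsections a) through d) below relate to one another:

```cpp
// Condensed sketch only; member and method names are assumptions based on the
// surrounding description, not the listings in the appendix.
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// (d) Viewport: target window and encoding constraints derived from the
// requesting device. In the full implementation it also holds the in-memory
// (bitmap) canvas that the layers render into.
class Viewport {
public:
    int         width  = 0;            // pixels
    int         height = 0;            // pixels
    int         colorDepth = 8;        // maximum bits per pixel
    std::size_t maxFileBytes = 0;      // file-size constraint (0 = none)
    std::string encoding;              // client's preferred format, e.g., "jpeg"
};

// Window within the Viewport, in normalized Viewport units (0.0 - 1.0).
struct ViewportMap { double x = 0.0, y = 0.0, w = 1.0, h = 1.0; };

// (b) Layer: base class from which all layers (image, text, Bezier, etc.) are
// derived; carries the common spatial and temporal properties.
class Layer {
public:
    virtual ~Layer() = default;
    // Render this layer into its viewport_map window of the Viewport.
    virtual void Render(Viewport& viewport) = 0;

    int         order = 0;             // spatial stacking order
    ViewportMap viewportMap;           // where the layer lands in the Viewport
    double      startTimeMs = 0.0;     // temporal attributes, in milliseconds
    double      durationMs = 0.0;
    double      repeatPeriodMs = 0.0;
};

// (c) ImageLayer: an image (bitmap) layer, derived from Layer.
class ImageLayer : public Layer {
public:
    explicit ImageLayer(std::string srcUrl) : src(std::move(srcUrl)) {}
    void Render(Viewport& viewport) override;   // body omitted in this sketch
    std::string src;                            // source image URL
};

// (a) ImageTransform: wraps the whole layering operation; owns the layer stack
// and the Viewport, and renders/encodes them per the device constraints.
class ImageTransform {
public:
    void AddLayer(std::unique_ptr<Layer> layer) { layers_.push_back(std::move(layer)); }
    void SetViewport(const Viewport& vp) { viewport_ = vp; }
    // Renders each embedded layer, encodes the candidate image to the client's
    // format, and iterates until the device constraints are satisfied.
    std::vector<unsigned char> Render();        // body omitted in this sketch
private:
    std::vector<std::unique_ptr<Layer>> layers_;
    Viewport viewport_;
};
```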
a) ImageTransform
b) Layer Class
The layer class is the base class from which all layers (image, text, etc.) are derived.
c) Image Layer Class
The ImageLayer is derived from the Layer class.
d) The Viewport Class
12. Layering Examples
The following subsections show examples of using the XML-based layering API.
a) Graphics Overlay
This example shows how to overlay a graphic on a source image under the following constraints:
The requesting URL would be:
http://eswitch.foo.com/es?src=http://source.foo.com/boyjpg&enh=http://source.foo.com/enhance.xml
The enhancement XML would be:
b) Framing
This section is an example of overlaying a frame on an image.
The requesting URL would be:
The enhancement XML is shown below:
The aspect_layer attribute of Viewport is set to 2. This forces the Viewport to have the same aspect ratio as image layer 2 (i.e., Image_2).
Image_2 is mapped to the complete Viewport.
Image layer 1 is mapped to a sub-window that aligns with the transparency in the “flower”.
c) Text Overlay
This example overlays text on the bottom 20% of the Viewport.
D. Summary of Internal Operation
1. Overall Operation
After identification of the device, the handler proceeds to fetch an XML (configuration) file, at step 604. The URL submitted by the client (at step 601) specified, as one of the name/value pairs, a particular XML file which stores, in a hierarchical fashion, the values for the image transform tree (which describes both the viewport and layers). The XML file that is fetched may now be parsed, using a stock XML parser (e.g., libXML2), at step 605. The parsed values/attributes are then used to create an in-memory copy of the image transform tree.
The next step is to merge viewport information derived from the client database with all of the attributes and their values (e.g., layering information) in the image transform tree, as shown at step 606. At step 607, upon invoking an image transform module, the method proceeds to actually render the image (i.e., dynamically create a version that is optimized or customized for the client). In particular, the image of interest is rendered to the viewport of the identified client device pursuant to the layering and viewport information in the image transform tree; any image format considerations of the client (e.g., JPEG format requirement) may be applied by transforming the image into the required format. The foregoing process may occur in an iterative fashion. For example, if the dynamically created version is deemed to be too large for the client device or has a bit depth that exceeds the client's capabilities, the step is repeated to create a version that is compliant. During a given iteration, encoding/rendering parameters (e.g., image dimensions) may be dynamically adjusted to achieve on-demand generation of an image that is optimized for the client device. Finally, as indicated by step 608, the method emits a fully rendered image (per constraints) that is then transmitted back to the client device (e.g., via wireless connectivity, via Internet connectivity, via wireless Internet connectivity, or the like) in an appropriate format. The image may be cached for future retrieval (e.g., by the same device type), as desired.
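In outline, and reusing the hypothetical ImageTransform sketch from the class-definitions section above, the request handling of steps 601 through 608 might be organized as follows; all helper names here are assumptions rather than the actual server code.

```cpp
// Schematic sketch of steps 601-608; helper names are hypothetical, and
// ImageTransform refers to the class sketch given earlier in this document.
#include <cstddef>
#include <string>
#include <vector>

struct Request {
    std::string srcUrl;     // "src=<IMAGE_URL>" name/value pair
    std::string enhXmlUrl;  // "enh=<XML_URL>" name/value pair
    std::string userAgent;  // used to identify the requesting device
};

struct DeviceProfile {                    // record from the client database
    int width, height, bitsPerPixel;
    std::size_t maxFileBytes;
    std::string preferredFormat;          // e.g., "jpeg"
};

DeviceProfile  lookupDevice(const std::string& userAgent);        // device identification
std::string    fetchUrl(const std::string& url);                  // step 604: fetch the XML file
ImageTransform parseTransformXml(const std::string& xml);         // step 605: parse (e.g., libXML2)
void mergeDeviceInfo(ImageTransform& t, const DeviceProfile& d);  // step 606: merge viewport info

std::vector<unsigned char> handleRequest(const Request& req)
{
    DeviceProfile device = lookupDevice(req.userAgent);
    std::string xml = fetchUrl(req.enhXmlUrl);             // step 604
    ImageTransform transform = parseTransformXml(xml);     // step 605: in-memory transform tree
    mergeDeviceInfo(transform, device);                    // step 606
    // Step 607: render the image for this device; the transform iterates
    // internally (adjusting dimensions, bit depth, etc.) until the version
    // complies with the device's constraints and format requirements.
    std::vector<unsigned char> image = transform.Render();
    return image;                                          // step 608: emit to the client
}
```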
2. Image Transform Object
The Image Transform Object class definition (class ImageTransform), which closely mirrors the XML description, includes data members responsible for creating/supporting the various image layers. Each layer itself is an object in its own right. When the Image Transform Object is instantiated, all of the embedded objects are likewise instantiated.
The Image Transform Object includes a “Render” method, Render ( ). In basic operation, the “Render” method invokes a corresponding rendering method on each embedded object so that each layer is correctly rendered. Rendering occurs against an in-memory version (e.g., canonical format, such as a bitmap) of the Viewport, that is, a Viewport object. Ultimately, each embedded object is rendered against the Viewport object for generating a “candidate” rendered image. Next, the candidate image is encoded (e.g., JPEG encoded) to a format that is appropriate for the client, for generating a candidate transformed image. Once the candidate image is transformed, the resulting image is checked for compliance with applicable constraints (e.g., file size), as previously illustrated in
While the invention is described in some detail with specific reference to a single preferred embodiment and certain alternatives, there is no intent to limit the invention to that particular embodiment or those specific alternatives. For instance, examples have been presented which focus on “displaying” images at client devices. Those skilled in the art will appreciate that other client-side outputting or rendering, such as printing, may benefit from application of the present invention. Therefore, those skilled in the art will appreciate that modifications may be made to the preferred embodiment without departing from the teachings of the present invention.
The present application is a continuation of patent application Ser. No. 10/273,670, filed Oct. 18, 2002, now U.S. Pat. No. 7,051,040, entitled “Imaging System Providing Dynamic Viewport Layering”, which is related to and claims the benefit of priority of the following commonly-owned provisional application(s): application Ser. No. 60/398,211, filed Jul. 23, 2002, entitled “Imaging System Providing Dynamic Viewport Layering”, of which the present application is a non-provisional application. The present application is related to the following commonly-owned application(s): application Ser. No. 10/010,616, filed Nov. 8, 2001, entitled “System and Methodology for Delivering Media to Multiple Disparate Client Devices Based on Their Capabilities”; application Ser. No. 09/588,875, filed Jun. 6, 2000, entitled “System and Methodology Providing Access to Photographic Images and Attributes for Multiple Disparate Client Devices”. The disclosures of each of the foregoing applications are hereby incorporated by reference in their entirety, including any appendices or attachments thereof, for all purposes.
Number | Name | Date | Kind |
---|---|---|---
4443786 | Hammerling et al. | Apr 1984 | A |
4992887 | Aragaki | Feb 1991 | A |
5067029 | Takahashi | Nov 1991 | A |
5172227 | Tsai et al. | Dec 1992 | A |
5249053 | Jain | Sep 1993 | A |
5309257 | Bonino et al. | May 1994 | A |
5347600 | Barnsley et al. | Sep 1994 | A |
5412427 | Rabbani et al. | May 1995 | A |
5526047 | Sawanobori | Jun 1996 | A |
5548789 | Nakanura | Aug 1996 | A |
5552824 | DeAngelis et al. | Sep 1996 | A |
5613017 | Rao et al. | Mar 1997 | A |
5652621 | Adams, Jr. et al. | Jul 1997 | A |
5657077 | DeAngelis et al. | Aug 1997 | A |
5682152 | Wang et al. | Oct 1997 | A |
5734831 | Sanders | Mar 1998 | A |
5737491 | Allen et al. | Apr 1998 | A |
5742043 | Knowles et al. | Apr 1998 | A |
5754227 | Fukuoka | May 1998 | A |
5761655 | Hoffman | Jun 1998 | A |
5781901 | Kuzma | Jul 1998 | A |
5790878 | Anderson et al. | Aug 1998 | A |
5798794 | Takahashi | Aug 1998 | A |
5818525 | Elabd | Oct 1998 | A |
5826023 | Hall et al. | Oct 1998 | A |
5835580 | Fraser | Nov 1998 | A |
5848193 | Garcia | Dec 1998 | A |
5860074 | Rowe et al. | Jan 1999 | A |
5870383 | Eslambolchi et al. | Feb 1999 | A |
5880856 | Ferriere | Mar 1999 | A |
5883640 | Hsieh et al. | Mar 1999 | A |
5896502 | Shieh et al. | Apr 1999 | A |
5903723 | Beck et al. | May 1999 | A |
5913088 | Moghadam et al. | Jun 1999 | A |
5915112 | Boutcher | Jun 1999 | A |
5917542 | Moghadam et al. | Jun 1999 | A |
5917543 | Uehara | Jun 1999 | A |
5917965 | Cahill et al. | Jun 1999 | A |
5928325 | Shaughnessy et al. | Jul 1999 | A |
5956044 | Giorgianni et al. | Sep 1999 | A |
6008847 | Bauchspies | Dec 1999 | A |
6009201 | Acharya | Dec 1999 | A |
6014763 | Dhong et al. | Jan 2000 | A |
6016520 | Facq et al. | Jan 2000 | A |
6020920 | Anderson | Feb 2000 | A |
6023585 | Perlman et al. | Feb 2000 | A |
6023714 | Hill et al. | Feb 2000 | A |
6028807 | Awsienko | Feb 2000 | A |
6031934 | Ahmad et al. | Feb 2000 | A |
6031964 | Anderson | Feb 2000 | A |
6043837 | Driscoll, Jr. et al. | Mar 2000 | A |
6064437 | Phan et al. | May 2000 | A |
6067383 | Taniguchi et al. | May 2000 | A |
6072598 | Tso | Jun 2000 | A |
6072902 | Myers | Jun 2000 | A |
6081883 | Popelka et al. | Jun 2000 | A |
6085249 | Wang et al. | Jul 2000 | A |
6091777 | Guetz et al. | Jul 2000 | A |
6094689 | Embry et al. | Jul 2000 | A |
6101320 | Schuetze et al. | Aug 2000 | A |
6104430 | Fukuoka | Aug 2000 | A |
6125201 | Zador | Sep 2000 | A |
6128413 | Benamara | Oct 2000 | A |
6141686 | Jackowski et al. | Oct 2000 | A |
6154493 | Acharya et al. | Nov 2000 | A |
6157746 | Sodagar et al. | Dec 2000 | A |
6161140 | Moriya | Dec 2000 | A |
6163604 | Baulier et al. | Dec 2000 | A |
6163626 | Andrew | Dec 2000 | A |
6167441 | Himmel | Dec 2000 | A |
6185625 | Tso et al. | Feb 2001 | B1 |
6195026 | Acharya | Feb 2001 | B1 |
6195696 | Baber et al. | Feb 2001 | B1 |
6198941 | Aho et al. | Mar 2001 | B1 |
6202060 | Tran | Mar 2001 | B1 |
6202097 | Foster et al. | Mar 2001 | B1 |
6226642 | Beranek et al. | May 2001 | B1 |
6243420 | Mitchell et al. | Jun 2001 | B1 |
6256666 | Singhal | Jul 2001 | B1 |
6269481 | Perlman et al. | Jul 2001 | B1 |
6275869 | Sieffert et al. | Aug 2001 | B1 |
6278449 | Sugiarto et al. | Aug 2001 | B1 |
6278491 | Wang et al. | Aug 2001 | B1 |
6285471 | Pornbacher | Sep 2001 | B1 |
6285775 | Wu et al. | Sep 2001 | B1 |
6289375 | Knight et al. | Sep 2001 | B1 |
6300947 | Kanevsky | Oct 2001 | B1 |
6311215 | Bakshi et al. | Oct 2001 | B1 |
6330068 | Matsuyama | Dec 2001 | B1 |
6330073 | Sciatto | Dec 2001 | B1 |
6334126 | Nagatomo et al. | Dec 2001 | B1 |
6335783 | Kruit | Jan 2002 | B1 |
6336142 | Kato et al. | Jan 2002 | B1 |
6341316 | Kloba et al. | Jan 2002 | B1 |
6348929 | Acharya et al. | Feb 2002 | B1 |
6351547 | Johnson et al. | Feb 2002 | B1 |
6351568 | Andrew | Feb 2002 | B1 |
6360252 | Rudy et al. | Mar 2002 | B1 |
6385772 | Courtney | May 2002 | B1 |
6389460 | Stewart et al. | May 2002 | B1 |
6392697 | Tanaka et al. | May 2002 | B1 |
6392699 | Acharya | May 2002 | B1 |
6393470 | Kanevsky et al. | May 2002 | B1 |
6397230 | Carmel et al. | May 2002 | B1 |
6400903 | Conoval | Jun 2002 | B1 |
6411685 | O'Neal | Jun 2002 | B1 |
6414679 | Miodonski et al. | Jul 2002 | B1 |
6417882 | Mahant-Shetti | Jul 2002 | B1 |
6417913 | Tanaka | Jul 2002 | B2 |
6421733 | Tso et al. | Jul 2002 | B1 |
6423892 | Ramaswamy | Jul 2002 | B1 |
6424739 | Ukita et al. | Jul 2002 | B1 |
6438576 | Huang et al. | Aug 2002 | B1 |
6441913 | Anabuki et al. | Aug 2002 | B1 |
6445412 | Shiohara | Sep 2002 | B1 |
6449658 | Lafe et al. | Sep 2002 | B1 |
6457044 | Iwazaki | Sep 2002 | B1 |
6459816 | Matsuura et al. | Oct 2002 | B2 |
6463177 | Li et al. | Oct 2002 | B1 |
6473794 | Guheen et al. | Oct 2002 | B1 |
6480853 | Jain | Nov 2002 | B1 |
6487717 | Brunemann et al. | Nov 2002 | B1 |
6490675 | Sugiura | Dec 2002 | B1 |
6493758 | McLain | Dec 2002 | B1 |
6505236 | Pollack | Jan 2003 | B1 |
6507864 | Klein et al. | Jan 2003 | B1 |
6509910 | Agarwal et al. | Jan 2003 | B1 |
6512919 | Ogasawara | Jan 2003 | B2 |
6519617 | Wanderski et al. | Feb 2003 | B1 |
6539169 | Tsubaki et al. | Mar 2003 | B1 |
6546143 | Taubman et al. | Apr 2003 | B1 |
6549958 | Kuba | Apr 2003 | B1 |
6577338 | Tanaka et al. | Jun 2003 | B1 |
6583813 | Enright et al. | Jun 2003 | B1 |
6598076 | Chang et al. | Jul 2003 | B1 |
6606669 | Nakagiri | Aug 2003 | B1 |
6615224 | Davis | Sep 2003 | B1 |
6628325 | Steinberg et al. | Sep 2003 | B1 |
6704712 | Bleiweiss | Mar 2004 | B1 |
6721769 | Rappaport et al. | Apr 2004 | B1 |
6724721 | Cheriton | Apr 2004 | B1 |
6725300 | Hisamatsu et al. | Apr 2004 | B1 |
6734994 | Omori | May 2004 | B2 |
6742043 | Moussa et al. | May 2004 | B1 |
6745235 | Baca et al. | Jun 2004 | B2 |
6760762 | Pezzutti | Jul 2004 | B2 |
6779042 | Kloba et al. | Aug 2004 | B1 |
6785730 | Taylor | Aug 2004 | B1 |
6850946 | Rappaport et al. | Feb 2005 | B1 |
6910068 | Zintel et al. | Jun 2005 | B2 |
6914622 | Smith et al. | Jul 2005 | B1 |
6925595 | Whitledge et al. | Aug 2005 | B1 |
7020881 | Takahashi et al. | Mar 2006 | B2 |
7034871 | Parulski et al. | Apr 2006 | B2 |
7051040 | Easwar | May 2006 | B2 |
7054905 | Hanna et al. | May 2006 | B1 |
7103357 | Kirani et al. | Sep 2006 | B2 |
7149370 | Willner et al. | Dec 2006 | B2 |
20020062396 | Kakei et al. | May 2002 | A1 |
20030110234 | Egli et al. | Jun 2003 | A1 |
20030174286 | Trumbull | Sep 2003 | A1 |
20030231785 | Rhoads et al. | Dec 2003 | A1 |
20040022444 | Rhoads | Feb 2004 | A1 |
20040177085 | Rappaport et al. | Sep 2004 | A1 |
20060256130 | Gonzalez | Nov 2006 | A1 |
20070011023 | Silverbrook | Jan 2007 | A1 |
20070198687 | Kasriel et al. | Aug 2007 | A1 |
Number | Date | Country |
---|---|---
19934787 | Feb 2001 | DE |
10050172 | Apr 2001 | DE |
0763943 | Mar 1997 | EP |
0811939 | Dec 1997 | EP |
0835013 | Apr 1998 | EP |
0949805 | Oct 1999 | EP |
0950969 | Oct 1999 | EP |
0992922 | Apr 2000 | EP |
1109371 | Jun 2001 | EP |
1109372 | Jun 2001 | EP |
2289555 | Nov 1995 | GB |
2365177 | Feb 2002 | GB |
2002-202935 | Jul 2002 | JP |
WO 9749252 | Dec 1997 | WO |
WO 9843177 | Oct 1998 | WO |
WO 9906910 | Feb 1999 | WO |
WO 9913429 | Mar 1999 | WO |
WO 9960793 | Nov 1999 | WO |
WO 0013429 | Mar 2000 | WO |
PCT GB0001962 | Nov 2000 | WO |
WO 0072534 | Nov 2000 | WO |
WO 0075859 | Dec 2000 | WO |
PCT SE0000807 | Jan 2001 | WO |
WO 0101663 | Jan 2001 | WO |
WO 0101663 | Jan 2001 | WO |
WO 0157718 | Aug 2001 | WO |
PCT KR0101323 | Feb 2002 | WO |
WO 0213031 | Feb 2002 | WO |
WO 0215128 | Feb 2002 | WO |
WO 0227543 | Apr 2002 | WO |
Number | Date | Country
---|---|---
20070009179 A1 | Jan 2007 | US |
Number | Date | Country
---|---|---
60398211 | Jul 2002 | US |
 | Number | Date | Country
---|---|---|---
Parent | 10273670 | Oct 2002 | US
Child | 11439928 | | US