Disclosed herein are computer-implemented systems and methods for allowing an end-user to create and share annotated comments, modify published images, and/or otherwise interact with images published on digital content platforms (e.g., images published on a webpage, mobile application, etc.). The systems and methods may include: (1) providing an annotation interface to allow a first end-user to create an annotation on a published image; (2) providing a comment entry interface to receive a comment from the first end-user; (3) linking the annotation and the comment; (4) identifying when a second end-user accesses the image or comment on the digital content platform; and (5) displaying the comment and/or annotation to the second end-user.
The accompanying drawings, which are incorporated herein, form part of the specification. Together with this written description, the drawings further serve to explain the principles of, and to enable a person skilled in the relevant art(s) to make and use, the claimed systems and methods.
Prior to describing the present invention in detail, it is useful to provide definitions for key terms and concepts used herein. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
“Advertisement” or “ad”: One or more images, with or without associated text, to promote or display a product or service. The terms “advertisement” and “ad,” in the singular or plural, are used interchangeably.
“Ad Creative” or “Creative”: A computer file with an advertisement, image, or any other content or material related to a product or service. As used herein, the phrase “providing an advertisement” may include “providing an ad creative,” where logically appropriate. Further, as used herein, the phrase “providing a contextually relevant advertisement” may include “providing an ad creative,” where logically appropriate.
“Ad server”: One or more computers, or equivalent systems, that maintain a catalog of creatives, deliver creative(s), and/or track advertisement(s), campaigns, and/or campaign metrics independent of the platform where the advertisement is being displayed.
“Contextual information” or “contextual tag”: Data related to the contents and/or context of digital content (e.g., an image, or content within the image); for example, but not limited to, a description, identification, index, or name of an image, or object, or scene, or person, or abstraction within the digital content (e.g., image).
“Contextually relevant advertisement”: A targeted advertisement that is considered relevant to the contents and/or context of digital content on a digital content platform.
“Crowdsource network”: One or more individuals, whether human or computer, used for a crowdsourcing application.
“Crowdsourcing”: The process of delegating a task to one or more individuals, with or without compensation.
“Digital content”: Broadly interpreted to include, without exclusion, any content available on a digital content platform, such as images, videos, text, audio, and any combinations and equivalents thereof.
“Digital content platform”: Broadly interpreted to include, without exclusion, any webpage, website, browser-based web application, software application, mobile device application (e.g., phone or tablet application), TV widget, and equivalents thereof.
“Image”: A visual representation of an object, or scene, or person, or abstraction, in the form of a machine-readable and/or machine-storable work product (e.g., one or more computer files storing a digital image, a browser-readable or displayable image file, etc.). As used herein, the term “image” is merely one example of “digital content.” Further, as used herein, the term “image” may refer to the actual visual representation, the machine-readable and/or machine-storable work product, location identifier(s) of the machine-readable and/or machine-storable work product (e.g., a uniform resource locator (URL)), or any equivalent means to direct a computer-implemented system and/or user to the visual representation. As such, process steps performed on “an image” may call for different interpretations where logically appropriate. For example, the process step of “analyzing the context of an image” would logically include “analyzing the context of a visual representation.” Similarly, the process step of “storing an image on a server” would logically include “storing a machine-readable and/or machine-storable work product, or location identifier(s) of the machine-readable and/or machine-storable work product (e.g., a uniform resource locator (URL)), on a server.” Further, process steps performed on an image may include process steps performed on a copy, thumbnail, or data file of the image.
“Merchant”: Seller or provider of a product or service; agent representing a seller or provider; or any third party charged with preparing and/or providing digital content associated with a product or service. For example, the term “merchant” should be construed broadly enough to include advertisers, ad agencies, and other intermediaries charged with developing digital content to advertise a product or service.
“Proximate”: Intended broadly to mean “relatively adjacent, close, or near,” as would be understood by one of skill in the art. The term “proximate” should not be narrowly construed to require an absolute position or abutment. For example, “content displayed proximate to an image” means “content displayed relatively near an image, but not necessarily abutting or within the image.” (To clarify: “content displayed proximate to an image” also includes “content displayed abutting or within the image.”) In another example, “content displayed proximate to an image” means “content displayed on the same screen page or webpage as the image.”
“Publisher”: Party that owns, provides, and/or controls digital content or a digital content platform; or a third party who provides, maintains, and/or controls digital content and/or ad space on a digital content platform.
Except for any term definitions that conflict with the term definitions provided herein, the following related, co-owned, and co-pending applications are incorporated by reference in their entirety: U.S. patent application Ser. Nos. 12/902,066; 13/005,217; 13/005,226; 13/045,426; 13/151,110; 13/219,460; 13/252,053; 13/299,280; 13/308,401; 13/427,341, which has issued as U.S. Pat. No. 8,255,495; and Ser. No. 13/450,807, which has issued as U.S. Pat. No. 8,234,168.
Modern trends in Internet-based content delivery have shown a heightened emphasis on digital images. Images are typically the most information-rich or information-dense content a publisher can provide. Social network sites and blogs are known to have comment streams proximate to images, wherein end-users may provide captions or comments for a published image. Beyond comment streams, however, publishers and end-users seldom have mechanisms to make images interactive, so as to provide additional or supplemental content if/when an end-user is interested in the image. Publishers also seldom have mechanisms for allowing their readers (i.e., end-users) to modify the publishers' published images and share such modifications with other readers.
The present invention generally relates to computer-implemented systems and methods for allowing end-users to create and share annotated comments, modify published images, and/or otherwise interact with images published on digital content platforms. For example, the systems and methods presented generally include: (1) providing an end-user with an interface to create and/or modify content, or otherwise interact with a published image; (2) providing a mechanism to identify when a second end-user has accessed the image; and (3) rendering, to the second end-user, the first end-user's creation, modification, and/or interaction with the image. In one example embodiment, there is provided systems and sub-systems to: (1) provide an annotation interface to allow a first end-user to create an annotation on an image published on a digital content platform; (2) provide a comment entry interface to receive a comment from the first end-user; (3) link the annotation and the comment; (4) identify when a second end-user accesses the image or comment on the digital content platform; and (5) display, highlight, or otherwise draw attention to the comment and/or annotation to the second end-user.
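By way of non-limiting illustration, the linkage of an annotation to a comment (steps (1)-(5) above) may be sketched in a few lines of Python. All names in this sketch (Annotation, Comment, AnnotationStore) are illustrative assumptions and not elements recited in the disclosure:

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class Annotation:
    image_id: str              # identifier (e.g., URL) of the published image
    region: tuple              # area of the image the first end-user marked
    payload: str               # the modification/addition (text, link, etc.)
    annotation_id: str = field(default_factory=lambda: uuid.uuid4().hex)

@dataclass
class Comment:
    image_id: str
    text: str                  # alpha-numeric character string
    comment_id: str = field(default_factory=lambda: uuid.uuid4().hex)

class AnnotationStore:
    """Stands in for the linkage, storage, and replay path (steps (3)-(5))."""
    def __init__(self):
        self._links = {}       # image_id -> list of (annotation, comment)

    def link(self, annotation, comment):
        # Step (3): link the first end-user's annotation and comment.
        self._links.setdefault(annotation.image_id, []).append((annotation, comment))

    def on_image_access(self, image_id):
        # Steps (4)-(5): when a second end-user accesses the image,
        # return everything to be displayed or highlighted.
        return self._links.get(image_id, [])
```

In this sketch, the store stands in for the database and rendering path described below; any equivalent keyed storage would serve.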
The following detailed description of the figures refers to the accompanying drawings that illustrate exemplary embodiments. Other embodiments are possible. Modifications may be made to the embodiments described herein without departing from the spirit and scope of the present invention. Therefore, the following detailed description is not meant to be limiting.
As is common with social network and blog platforms, a comment stream 120 is provided proximate to (e.g., below) the image 110 to allow the end-user 105a to leave a comment or caption for the image 110. For example, the end-user 105a can enter an individual comment 121 in the form of an alpha-numeric character string. The system 100 of
The service provider 150 includes sub-systems of an image analysis engine 152 and an image-content matching engine 154. The image analysis engine 152 is configured to identify the image 110, and analyze the image to identify the context/content within the image. The image analysis engine 152 may include sub-protocols and sub-processing units, such as image recognition algorithms, a crowdsource network to analyze the image, and/or proximate text recognition to obtain contextual clues of the context/content within the image based on text published proximate to the image. The image-content matching engine 154 is configured to identify contextually relevant, third-party content (e.g., ads provided by merchants/advertisers) that can be linked to the image 110. Examples of service provider systems (including image analysis and image-content matching) are discussed in more detail in the above-referenced applications.
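By way of non-limiting illustration only, the cooperation of the two engines may be sketched as follows. The sketch assumes a simple keyword-overlap matching rule and uses only proximate-text recognition for analysis; the disclosure contemplates image recognition algorithms and crowdsource networks feeding the same tag set:

```python
def analyze_image(proximate_text):
    """Image analysis engine stand-in: derive contextual tags from text
    published proximate to the image (one of the disclosed sub-protocols)."""
    stopwords = {"the", "a", "an", "of", "and", "in", "on"}
    words = (w.lower().strip(".,!?") for w in proximate_text.split())
    return {w for w in words if w and w not in stopwords}

def match_content(tags, ad_catalog):
    """Image-content matching engine stand-in: return the creative whose
    keywords best overlap the contextual tags, or None if nothing matches."""
    best_score, best_ad = 0, None
    for ad in ad_catalog:
        score = len(tags & set(ad["keywords"]))
        if score > best_score:
            best_score, best_ad = score, ad
    return best_ad

# Example: tags derived from a caption select a contextually relevant ad.
tags = analyze_image("Model wearing designer sunglasses on the beach")
ad = match_content(tags, [{"name": "sunglasses-ad", "keywords": ["sunglasses"]}])
```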
The data collection engine 130 is configured to receive application (e.g., annotation) data from the digital content platform and/or image data from one or both of the image analysis engine 152 and image-content matching engine 154. For example, the data collection engine 130 may include one or more processing units configured to receive (in push/pull fashion) data from the various sub-systems. The comment collection engine 135, and associated processing units, is configured to receive (in push/pull fashion) one or more of the individual comment strings 121 (or comment string identifiers) from the digital content platform. The image and/or application data from the data collection engine 130 and the comment data from the comment collection engine 135 are then linked and stored in a database (or equivalent storage units) 140. From the database 140, linked data may be sent to the image-content matching engine 154 for additional content matching processing (e.g., based on the linkage and/or comment(s)), or sent directly to a rendering engine 160. When a second end-user 105b accesses the image 110, the rendering engine 160 (and associated processing units) is configured to display, highlight, or otherwise provide the first end-user's annotation 117 and/or any matched content from the image-content matching engine 154 (e.g., merchant advertisements matched to the image, comment, and/or annotation). The second end-user 105b can access the image 110, the annotation 117, and/or one or more individual comments 121 by means such as clicking, mousing-over, uploading, downloading, viewing, or otherwise enabling the image, the annotation, or the comment(s).
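By way of non-limiting illustration, the collection-and-linkage path may be sketched with an in-memory database. The schema and function names below are assumptions for illustration, not the disclosed design:

```python
import sqlite3

# Annotation data and comment data arrive separately (push/pull) and are
# joined in a database keyed by image, which the rendering path later reads.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE annotations (image_id TEXT, annotation TEXT)")
db.execute("CREATE TABLE comments (image_id TEXT, comment TEXT)")

def collect_annotation(image_id, annotation):
    """Data collection engine stand-in (receives application data)."""
    db.execute("INSERT INTO annotations VALUES (?, ?)", (image_id, annotation))

def collect_comment(image_id, comment):
    """Comment collection engine stand-in (receives comment strings)."""
    db.execute("INSERT INTO comments VALUES (?, ?)", (image_id, comment))

def render_for_second_user(image_id):
    """Rendering engine stand-in: pull every linked annotation/comment
    pair for the image a second end-user has just accessed."""
    return db.execute(
        "SELECT a.annotation, c.comment FROM annotations AS a "
        "JOIN comments AS c ON a.image_id = c.image_id "
        "WHERE a.image_id = ?", (image_id,)).fetchall()
```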
Lines A-G illustrate the above-described process flow of the system 100. Line A, for example, is indicative of the first end-user 105a accessing the image 110 to insert a comment 121 and an annotation 117. Lines B, B′, and B″ are indicative of data flows (in parallel or series) to the service provider 150, the data collection engine 130, and the comment collection engine 135, respectively. Lines C and C′ are indicative of data flows (in parallel or series) to the image-content matching engine 154 and the data collection engine 130, respectively. Lines D and D′ are indicative of data flows (in parallel or series) to the data collection engine 130 and the database 140, respectively. Lines E and E′ are indicative of data flows (in parallel or series) to the image-content matching engine 154 and the rendering engine 160, respectively. When a second end-user 105b accesses the image 110 and/or the comment 121 (or one or more hotspots indicative of the annotation 117), as indicated by Line F, the rendering engine 160 displays content to the second end-user 105b, as indicated by Line G, in the form of the annotation 117, a highlighting of the comment 121, presentation/display of contextually relevant third-party content or advertisements, etc. One of skill in the art would recognize that one or more of the above-described process flows can be supplemented, omitted, or redirected without departing from the spirit of the present invention. For example, in an alternative embodiment, a process flow may be provided from the data collection engine 130 and/or comment collection engine 135 to the image analysis engine 152 and/or image-content matching engine 154. As such, the functions of the image analysis engine 152 and/or image-content matching engine 154 may be supplemented by data received from the data collection engine 130 and/or comment collection engine 135.
The third end-user 205c can activate a “market” or “shopping” in-image application 215c by clicking the in-image application 215c icon with her cursor 206c. The third end-user 205c can then select (or draw a circle around) content within the image 210 that she wishes to highlight as an interesting object for purchase (e.g., the sunglasses). The third end-user 205c can then link the object to one or more third-party sites for purchase of the object (or a related object) in an annotation pop-up window (or frame) 217c. In alternative embodiments, the service provider 150 may provide advertisements and/or ad creatives within the annotation frame 217c based on an output from the image-content matching engine 154. Similar ad creative display frames are discussed in more detail in the above-referenced applications; e.g., U.S. patent application Ser. No. 13/252,053. The third end-user 205c may also provide an individual comment 221c within the comment stream 220. The third individual comment 221c is then linked to the annotation 217c.
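By way of non-limiting illustration, the content of the annotation pop-up frame 217c may be assembled as sketched below, with the end-user's own purchase links preferred and a creative from the image-content matching engine 154 as the alternative-embodiment fallback. All function and field names here are hypothetical:

```python
def shopping_annotation(image_id, region, label, purchase_urls):
    """Annotation produced when a user circles an object (e.g., sunglasses)
    and links it to one or more third-party purchase pages."""
    return {"image_id": image_id, "region": region,
            "label": label, "purchase_urls": list(purchase_urls)}

def annotation_frame(annotation, matched_creative=None):
    """Content for the pop-up window/frame 217c: the user's own purchase
    links, or, per the alternative embodiment, a creative chosen by the
    image-content matching engine."""
    if annotation["purchase_urls"]:
        return {"links": annotation["purchase_urls"]}
    return {"creative": matched_creative} if matched_creative else {}
```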
In one embodiment, there are provided computer-implemented systems and methods for allowing users to create and share annotated comments on images published on digital content platforms. The systems and methods include: (1) providing an annotation interface to allow a first end-user to create an annotation on an image published on a digital content platform; (2) providing a comment entry interface to receive a comment from the first end-user; (3) receiving and linking the annotation and the comment; (4) identifying when a second end-user accesses the image, comment, and/or annotation on the digital content platform; and (5) displaying the annotation and/or comment to the second end-user. The comment can be in the form of a character string or any equivalent thereof. The digital content platform can be a webpage (e.g., a social network), a mobile application, or an equivalent thereof. The systems and methods may further include: (6) matching the image, annotation, or comment to an advertisement; and (7) displaying the advertisement to the second end-user proximate to the annotation. The advertisement may be contextually relevant to the image, annotation, and/or comment. The systems and methods may further include (8) submitting the image, annotation, and/or comment to an image analysis engine to match the image, annotation, or comment to an advertisement. The image analysis engine may include an image recognition engine, a crowdsource network, or a proximate text recognition engine. The annotation interface may be a third-party in-image application, or an interface provided by the digital content platform. The annotation may be any form of a modification or addition to the image, including a link to a third-party website and/or a second digital content platform.
In another embodiment, there is provided a computer-implemented method for allowing an end-user to create and share annotated comments on an image published on a digital content platform. The method is performed by a computer processor. The method comprises: (1) providing an annotation interface to allow a first end-user to create an annotation on an image published on a digital content platform; (2) providing a comment entry interface to receive a comment from the first end-user; (3) receiving and linking the annotation and the comment; (4) identifying when a second end-user accesses the image on the digital content platform; and (5) displaying the annotation and comment to the second end-user.
In still another embodiment, there is provided a system comprising: (1) a digital content platform; (2) an interface configured to allow users to enter comments and/or annotations to images published on the digital content platform; (3) an image analysis engine configured to analyze the context/content within the image; (4) an image-content matching engine configured to match the image with contextually relevant content (e.g., advertisements); (5) a data collection engine configured to collect application (e.g., annotation) data from the digital content platform, as well as data from the image analysis engine and/or image-content matching engine; (6) a comment collection engine configured to collect the comments provided on the digital content platform; (7) a database for linking and/or storing the application data, analysis data, and the comment data; and (8) a rendering engine configured to identify when a second end-user has accessed the image, comment, and/or annotation, and display the annotation and/or comment to the second end-user. The rendering engine can also be configured to perform any modification to the comment stream in order to highlight to the second end-user that the comment(s) are related to the displayed annotation.
In another embodiment, there are provided computer-implemented systems and methods for allowing an end-user to create and share content (e.g., annotated images/comments) on an image published on a digital content platform (e.g., webpage, mobile application, etc.). The systems and methods include: (1) means for providing an annotation interface to allow a first end-user to create an annotation on an image published on a digital content platform; (2) means for providing a comment entry interface; (3) means for receiving a comment from the first end-user; (4) means for receiving, linking, and/or storing the annotation and the comment in a database; (5) means for identifying when a second end-user accesses the image, comment, or annotation on the digital content platform; and (6) means for displaying the annotation and/or comment to the second end-user. The comment may be in the form of a character (e.g., alpha-numeric) string. The digital content platform may be a webpage (e.g., a social network) or a mobile application (e.g., a downloadable application for a mobile phone or tablet). The systems and methods may further include: (7) means for matching the image, annotation, or comment to an advertisement; (8) means for displaying the advertisement to the second end-user proximate to the annotation; and/or (9) means for submitting the image, annotation, or comment to an image analysis engine to match the image, annotation, or comment to an advertisement. The advertisement can be contextually relevant to the image, annotation, or comment. The image analysis engine may include means for image recognition, means for processing the image through a crowdsource network, and/or means for proximate text recognition. The annotation interface may be a third-party in-image application. The annotation may include any modification or addition to the image, and/or any link to a third-party website or second digital content platform.
Communication Between Parties Practicing the Present Invention.
In one embodiment, communication between the various parties and components of the present invention is accomplished over a network consisting of electronic devices connected either physically or wirelessly, wherein digital information is transmitted from one device to another. Such devices (e.g., end-user devices and/or servers) may include, but are not limited to: a desktop computer, a laptop computer, a handheld device or PDA, a cellular telephone, a set top box, an Internet appliance, an Internet TV system, a mobile device or tablet, or systems equivalent thereto. Exemplary networks include a Local Area Network, a Wide Area Network, an organizational intranet, the Internet, or networks equivalent thereto. The functionality and system components of an exemplary computer and network are further explained in conjunction with
Computer Implementation.
In one embodiment, the invention is directed toward one or more computer systems capable of carrying out the functionality described herein. For example,
Computer system 600 also includes a main memory 608, such as random access memory (RAM), and may also include a secondary memory 610. The secondary memory 610 may include, for example, a hard disk drive 612 and/or a removable storage drive 614, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, flash memory device, etc. The removable storage drive 614 reads from and/or writes to a removable storage unit 618. Removable storage unit 618 represents a floppy disk, magnetic tape, optical disk, flash memory device, etc., which is read by and written to by removable storage drive 614. As will be appreciated, the removable storage unit 618 includes a computer usable storage medium having stored therein computer software, instructions, and/or data.
In alternative embodiments, secondary memory 610 may include other similar devices for allowing computer programs or other instructions to be loaded into computer system 600. Such devices may include, for example, a removable storage unit 622 and an interface 620. Examples of such may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an erasable programmable read only memory (EPROM), or programmable read only memory (PROM)) and associated socket, and other removable storage units 622 and interfaces 620, which allow computer software, instructions, and/or data to be transferred from the removable storage unit 622 to computer system 600.
Computer system 600 may also include a communications interface 624. Communications interface 624 allows computer software, instructions, and/or data to be transferred between computer system 600 and external devices. Examples of communications interface 624 may include a modem, a network interface (such as an Ethernet card), a communications port, a Personal Computer Memory Card International Association (PCMCIA) slot and card, etc. Software and data transferred via communications interface 624 are in the form of signals 628 which may be electronic, electromagnetic, optical or other signals capable of being received by communications interface 624. These signals 628 are provided to communications interface 624 via a communications path (e.g., channel) 626. This channel 626 carries signals 628 and may be implemented using wire or cable, fiber optics, a telephone line, a cellular link, a radio frequency (RF) link, a wireless communication link, and other communications channels.
In this document, the terms “computer-readable storage medium,” “computer program medium,” and “computer usable medium” are used to generally refer to media such as removable storage drive 614, removable storage units 618, 622, data transmitted via communications interface 624, and/or a hard disk installed in hard disk drive 612. These computer program products provide computer software, instructions, and/or data to computer system 600. These computer program products also serve to transform a general purpose computer into a special purpose computer programmed to perform particular functions, pursuant to instructions from the computer program products/software. Embodiments of the present invention are directed to such computer program products.
Computer programs (also referred to as computer control logic) are stored in main memory 608 and/or secondary memory 610. Computer programs may also be received via communications interface 624. Such computer programs, when executed, enable the computer system 600 to perform the features of the present invention, as discussed herein. In particular, the computer programs, when executed, enable the processor 604 to perform the features of the presented methods. Accordingly, such computer programs represent controllers of the computer system 600. Where appropriate, the processor 604, associated components, and equivalent systems and sub-systems thus serve as “means for” performing selected operations and functions. Such “means for” performing selected operations and functions also serve to transform a general purpose computer into a special purpose computer programmed to perform said selected operations and functions.
In an embodiment where the invention is implemented using software, the software may be stored in a computer program product and loaded into computer system 600 using removable storage drive 614, interface 620, hard drive 612, communications interface 624, or equivalents thereof. The control logic (software), when executed by the processor 604, causes the processor 604 to perform the functions and methods described herein.
In another embodiment, the methods are implemented primarily in hardware using, for example, hardware components such as application specific integrated circuits (ASICs). Implementation of the hardware state machine so as to perform the functions and methods described herein will be apparent to persons skilled in the relevant art(s). In yet another embodiment, the methods are implemented using a combination of both hardware and software.
Embodiments of the invention, including any systems and methods described herein, may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others. Further, firmware, software, routines, instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing firmware, software, routines, instructions, etc.
For example, in one embodiment, there is provided a computer-readable storage medium for allowing an end-user to create and share annotated comments on an image published on a digital content platform. The computer-readable storage medium includes instructions, executable by at least one processing device, that when executed cause the processing device to: (1) provide an annotation interface to allow a first end-user to create an annotation on an image published on a digital content platform; (2) provide a comment entry interface to receive a comment from the first end-user; (3) receive, link, and store the annotation and the comment in a database; (4) identify when a second end-user accesses the image, comment, or annotation on the digital content platform; and (5) display the annotation and/or comment to the second end-user. The comment may be in the form of a character (e.g., alpha-numeric) string. The digital content platform may be a webpage (e.g., a social network), a mobile application (e.g., a downloadable application for a mobile phone or tablet), or any equivalent platform. The computer-readable storage medium may further include instructions, executable by at least one processing device, that when executed cause the processing device to: (6) match the image, annotation, or comment to an advertisement; (7) display the advertisement to the second end-user proximate to the annotation; and/or (8) submit the image, annotation, or comment to an image analysis engine to match the image, annotation, or comment to an advertisement. The advertisement can be contextually relevant to the image, annotation, or comment. The image analysis engine may include an image recognition engine, a crowdsource network, or a proximate text recognition engine. The annotation interface may be a third-party in-image application. The annotation may include any modification or addition to the image, and/or any link to a third-party website or second digital content platform.
The foregoing description of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Other modifications and variations may be possible in light of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, and to thereby enable others skilled in the art to best utilize the invention in various embodiments and various modifications as are suited to the particular use contemplated. It is intended that the appended claims be construed to include other alternative embodiments of the invention; including equivalent structures, components, methods, and means.
As will be apparent to those of skill in the art upon reading this disclosure, each of the individual embodiments described and illustrated herein has discrete components and features which may be readily separated from or combined with the features of any of the other several embodiments without departing from the scope or spirit of the present invention. Any recited method can be carried out in the order of events recited or in any other order which is logically possible. Further, each system component and/or method step presented should be considered a “means for” or “step for” performing the function described for said system component and/or method step. As such, any claim language directed to a “means for” or “step for” performing a recited function refers to the system component and/or method step in the specification that performs the recited function, as well as equivalents thereof.
It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections may set forth one or more, but not all exemplary embodiments of the present invention as contemplated by the inventor(s), and thus, are not intended to limit the present invention and the appended claims in any way.
Number | Name | Date | Kind |
---|---|---|---|
D297243 | Wells-Papanek et al. | Aug 1988 | S |
4789962 | Berry et al. | Dec 1988 | A |
5008853 | Bly et al. | Apr 1991 | A |
5199104 | Hirayama | Mar 1993 | A |
5287448 | Nicol et al. | Feb 1994 | A |
5349518 | Zifferer et al. | Sep 1994 | A |
5367623 | Iwai et al. | Nov 1994 | A |
5428733 | Carr | Jun 1995 | A |
5583655 | Tsukamoto et al. | Dec 1996 | A |
5589892 | Knee et al. | Dec 1996 | A |
5615367 | Bennett et al. | Mar 1997 | A |
5627958 | Potts et al. | May 1997 | A |
D384050 | Kodosky | Sep 1997 | S |
D384052 | Kodosky | Sep 1997 | S |
5682469 | Linnett et al. | Oct 1997 | A |
5684716 | Freeman | Nov 1997 | A |
5689669 | Lynch et al. | Nov 1997 | A |
5706507 | Schloss | Jan 1998 | A |
5721906 | Siefert | Feb 1998 | A |
5724484 | Kagami | Mar 1998 | A |
5754176 | Crawford | May 1998 | A |
5796932 | Fox et al. | Aug 1998 | A |
D406828 | Newton et al. | Mar 1999 | S |
5933138 | Driskell | Aug 1999 | A |
5956029 | Okada et al. | Sep 1999 | A |
6026377 | Burke | Feb 2000 | A |
6034687 | Taylor et al. | Mar 2000 | A |
D427576 | Coleman | Jul 2000 | S |
6285381 | Sawano et al. | Sep 2001 | B1 |
D450059 | Itou | Nov 2001 | S |
6414679 | Miodonski et al. | Jul 2002 | B1 |
D469104 | Istvan et al. | Jan 2003 | S |
6728752 | Chen et al. | Apr 2004 | B1 |
7069308 | Abrams | Jun 2006 | B2 |
D528552 | Nevill-Manning | Sep 2006 | S |
D531185 | Cummins | Oct 2006 | S |
7117254 | Lunt et al. | Oct 2006 | B2 |
7124372 | Brin | Oct 2006 | B2 |
7159185 | Vedula et al. | Jan 2007 | B1 |
7231395 | Fain et al. | Jun 2007 | B2 |
7233316 | Smith et al. | Jun 2007 | B2 |
7251637 | Caid et al. | Jul 2007 | B1 |
D553632 | Harvey et al. | Oct 2007 | S |
D555661 | Kim | Nov 2007 | S |
D557275 | De Mar et al. | Dec 2007 | S |
D562840 | Cameron | Feb 2008 | S |
D566716 | Rasmussen et al. | Apr 2008 | S |
D567252 | Choe et al. | Apr 2008 | S |
D577365 | Flynt et al. | Sep 2008 | S |
7437358 | Arrouye et al. | Oct 2008 | B2 |
7502785 | Chen et al. | Mar 2009 | B2 |
D590412 | Saft et al. | Apr 2009 | S |
7519200 | Gokturk et al. | Apr 2009 | B2 |
7519595 | Solaro et al. | Apr 2009 | B2 |
7542610 | Gokturk et al. | Jun 2009 | B2 |
7558781 | Probst et al. | Jul 2009 | B2 |
D600704 | LaManna et al. | Sep 2009 | S |
D600706 | LaManna et al. | Sep 2009 | S |
7599938 | Harrison, Jr. | Oct 2009 | B1 |
7657100 | Gokturk et al. | Feb 2010 | B2 |
7657126 | Gokturk et al. | Feb 2010 | B2 |
7660468 | Gokturk et al. | Feb 2010 | B2 |
D613299 | Owen et al. | Apr 2010 | S |
D613750 | Truelove et al. | Apr 2010 | S |
D614638 | Viegers et al. | Apr 2010 | S |
7760917 | Vanhoucke et al. | Jul 2010 | B2 |
7783135 | Gokturk et al. | Aug 2010 | B2 |
7792818 | Fain et al. | Sep 2010 | B2 |
D626133 | Murphy et al. | Oct 2010 | S |
7809722 | Gokturk et al. | Oct 2010 | B2 |
D629411 | Weir et al. | Dec 2010 | S |
D638025 | Saft et al. | May 2011 | S |
7945653 | Zuckerberg et al. | May 2011 | B2 |
D643044 | Ording | Aug 2011 | S |
8027940 | Li et al. | Sep 2011 | B2 |
8036990 | Mir et al. | Oct 2011 | B1 |
8055688 | Giblin | Nov 2011 | B2 |
8060161 | Kwak | Nov 2011 | B2 |
D652424 | Cahill et al. | Jan 2012 | S |
8166383 | Everingham et al. | Apr 2012 | B1 |
8234168 | Lagle Ruiz et al. | Jul 2012 | B1 |
D664976 | Everingham | Aug 2012 | S |
D664977 | Everingham | Aug 2012 | S |
8250145 | Zuckerberg et al. | Aug 2012 | B2 |
8255495 | Lee | Aug 2012 | B1 |
8280959 | Zuckerberg et al. | Oct 2012 | B1 |
8311889 | Lagle Ruiz et al. | Nov 2012 | B1 |
8392538 | Lee | Mar 2013 | B1 |
20020065844 | Robinson et al. | May 2002 | A1 |
20030050863 | Radwin | Mar 2003 | A1 |
20030131357 | Kim | Jul 2003 | A1 |
20030220912 | Fain et al. | Nov 2003 | A1 |
20040070616 | Hildebrandt et al. | Apr 2004 | A1 |
20050235062 | Lunt et al. | Oct 2005 | A1 |
20050251760 | Sato et al. | Nov 2005 | A1 |
20060155684 | Liu et al. | Jul 2006 | A1 |
20060179453 | Kadie et al. | Aug 2006 | A1 |
20060265400 | Fain et al. | Nov 2006 | A1 |
20070118520 | Bliss et al. | May 2007 | A1 |
20070157119 | Bishop | Jul 2007 | A1 |
20070203903 | Attaran Rezaei et al. | Aug 2007 | A1 |
20070219968 | Frank | Sep 2007 | A1 |
20070255785 | Hayashi et al. | Nov 2007 | A1 |
20070258646 | Sung et al. | Nov 2007 | A1 |
20080079696 | Shim et al. | Apr 2008 | A1 |
20080082426 | Gokturk et al. | Apr 2008 | A1 |
20080091723 | Zuckerberg et al. | Apr 2008 | A1 |
20080134088 | Tse et al. | Jun 2008 | A1 |
20080141110 | Gura | Jun 2008 | A1 |
20080163379 | Robinson et al. | Jul 2008 | A1 |
20080177640 | Gokturk et al. | Jul 2008 | A1 |
20080199075 | Gokturk et al. | Aug 2008 | A1 |
20080208849 | Conwell | Aug 2008 | A1 |
20080268876 | Gelfand et al. | Oct 2008 | A1 |
20090006375 | Lax et al. | Jan 2009 | A1 |
20090007012 | Mandic et al. | Jan 2009 | A1 |
20090064003 | Harris et al. | Mar 2009 | A1 |
20090070435 | Abhyanker | Mar 2009 | A1 |
20090125544 | Brindley | May 2009 | A1 |
20090144392 | Wang et al. | Jun 2009 | A1 |
20090148045 | Lee et al. | Jun 2009 | A1 |
20090158146 | Curtis et al. | Jun 2009 | A1 |
20090159342 | Markiewicz et al. | Jun 2009 | A1 |
20090165140 | Robinson et al. | Jun 2009 | A1 |
20090193032 | Pyper | Jul 2009 | A1 |
20090228838 | Ryan et al. | Sep 2009 | A1 |
20090287669 | Bennett | Nov 2009 | A1 |
20100005001 | Aizen et al. | Jan 2010 | A1 |
20100005087 | Basco | Jan 2010 | A1 |
20100046842 | Conwell | Feb 2010 | A1 |
20100054600 | Anbalagan et al. | Mar 2010 | A1 |
20100077290 | Pueyo | Mar 2010 | A1 |
20100161631 | Yu et al. | Jun 2010 | A1 |
20100260426 | Huang et al. | Oct 2010 | A1 |
20100287236 | Amento et al. | Nov 2010 | A1 |
20100290699 | Adam et al. | Nov 2010 | A1 |
20100312596 | Saffari et al. | Dec 2010 | A1 |
20100313143 | Jung et al. | Dec 2010 | A1 |
20110010676 | Khosravy | Jan 2011 | A1 |
20110022958 | Kang et al. | Jan 2011 | A1 |
20110072047 | Wang et al. | Mar 2011 | A1 |
20110082825 | Sathish | Apr 2011 | A1 |
20110087990 | Ng et al. | Apr 2011 | A1 |
20110131537 | Cho et al. | Jun 2011 | A1 |
20110138300 | Kim et al. | Jun 2011 | A1 |
20110164058 | Lemay | Jul 2011 | A1 |
20110173190 | van Zwol et al. | Jul 2011 | A1 |
20110184814 | Konkol et al. | Jul 2011 | A1 |
20110196863 | Marcucci et al. | Aug 2011 | A1 |
20110243459 | Deng | Oct 2011 | A1 |
20110264736 | Zuckerberg et al. | Oct 2011 | A1 |
20110276396 | Rathod | Nov 2011 | A1 |
20110280447 | Conwell | Nov 2011 | A1 |
20110288935 | Elvekrog et al. | Nov 2011 | A1 |
20110296339 | Kang | Dec 2011 | A1 |
20120005209 | Rinearson et al. | Jan 2012 | A1 |
20120036132 | Doyle | Feb 2012 | A1 |
20120054355 | Arrasvuori et al. | Mar 2012 | A1 |
20120059884 | Rothschild | Mar 2012 | A1 |
20120075433 | Tatzgern et al. | Mar 2012 | A1 |
20120110464 | Chen et al. | May 2012 | A1 |
20120158668 | Tu et al. | Jun 2012 | A1 |
20120203651 | Leggatt | Aug 2012 | A1 |
20120233000 | Fisher et al. | Sep 2012 | A1 |
20120258776 | Lord et al. | Oct 2012 | A1 |
Entry |
---|
Cascia et al., “Combining Textual and Visual Cues for Content-based Image Retrieval on the World Wide Web,” IEEE Workshop on Content-based Access of Image and Video Libraries (Jun. 1998). |
Everingham et al., “‘Hello! My name is . . . Buffy’—Automatic Naming of Characters in TV Video,” Proceedings of the 17th British Machine Vision Conference (BMVC2006), pp. 889-908 (Sep. 2006). |
FAQ from Pixazza's website as published on Feb. 22, 2010, retrieved at http://web.archive.org/web/20100222001945/http://www.pixazza.com/faq/. |
Galleguillos et al., “Object Categorization using Co-Occurrence, Location and Appearance,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Anchorage, USA (2008). |
Heitz & Koller, “Learning Spatial Context: Using Stuff to Find Things,” European Conference on Computer Vision (ECCV) (2008). |
Hoiem et al., “Putting Objects in Perspective,” International Journal of Computer Vision, vol. 80, no. 1 (Oct. 2008). |
Jain et al., “Fast Image Search for Learned Metrics,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Jun. 2008). |
Lober et al., “IML: An Image Markup Language,” Proceedings, American Medical Informatics Association Fall Symposium, pp. 403-407 (2001). |
Rao, Leena, “Google Ventures-Backed Pixazza Raises $12 Million for Crowdsourced ‘AdSense for Images’,” published Jul. 18, 2010, retrieved from http://techcrunch.com/2010/07/18/google-funded-pixazza-raises-12-million-for-crowdsourced-adsense-for-images/. |
Russell & Torralba, “LabelMe: a database and web-based tool for image annotation,” International Journal of Computer Vision, vol. 77, Issue 1-3, pp. 157-173 (May 2008). |
Torralba, “Contextual Priming for Object Detection,” International Journal of Computer Vision, vol. 53, Issue 2, pp. 169-191 (2003). |
Venkatesan et al., “Robust Image Hashing,” Proceedings of the 2000 International Conference on Image Processing, vol. 3, pp. 664-666 (2000). |
U.S. Appl. No. 12/902,066, filed Oct. 11, 2010, Response to Non-Final Office Action Entered and Forwarded to Examiner, May 17, 2013. |
U.S. Appl. No. 13/005,217, filed Jan. 12, 2011, Non Final Action Mailed, May 16, 2013. |
U.S. Appl. No. 13/045,426, filed Mar. 10, 2011, Non Final Action Mailed, Apr. 5, 2013. |
U.S. Appl. No. 13/151,110, filed Jun. 1, 2011, Non Final Action Mailed, Jan. 23, 2013. |
U.S. Appl. No. 13/219,460, filed Aug. 26, 2011, Response to Non-Final Office Action Entered and Forwarded to Examiner, Mar. 21, 2013. |
U.S. Appl. No. 13/252,053, filed Oct. 3, 2011, Non Final Action Mailed, Mar. 29, 2013. |
U.S. Appl. No. 13/299,280, filed Nov. 17, 2011, Request for Continued Examination Filed, Feb. 5, 2013. |
U.S. Appl. No. 13/308,401, filed Nov. 30, 2011, Non Final Action Mailed, Feb. 27, 2013. |
U.S. Appl. No. 13/352,188, filed Jan. 17, 2012, Request for Continued Examination Filed, Nov. 14, 2012. |
U.S. Appl. No. 13/398,700, filed Feb. 16, 2012, Final Rejection Mailed, Jan. 3, 2013. |
U.S. Appl. No. 13/486,628, filed Jun. 1, 2012, Final Rejection Mailed, Mar. 27, 2013. |
U.S. Appl. No. 13/599,991, filed Aug. 30, 2012, Response to Non-Final Office Action Entered and Forwarded to Examiner, Mar. 6, 2013. |
U.S. Appl. No. 13/777,917, filed Feb. 26, 2013, Information Disclosure Statement, May 7, 2013. |
U.S. Appl. No. 29/403,731, filed Oct. 10, 2011, Information Disclosure Statement, May 7, 2013. |
U.S. Appl. No. 29/403,732, filed Oct. 10, 2011, Information Disclosure Statement, May 7, 2013. |
U.S. Appl. No. 29/403,733, filed Oct. 10, 2011, Information Disclosure Statement, May 7, 2013. |
U.S. Appl. No. 29/403,734, filed Oct. 10, 2011, Information Disclosure Statement, May 7, 2013. |
U.S. Appl. No. 29/403,826, filed Oct. 11, 2011, Information Disclosure Statement, May 7, 2013. |