Methods and arrangements for device to device communication

Information

  • Patent Grant
  • 11049094
  • Patent Number
    11,049,094
  • Date Filed
    Friday, February 15, 2019
  • Date Issued
    Tuesday, June 29, 2021
Abstract
The disclosure relates, e.g., to image processing technology including device to device communication. One claim recites an apparatus for device to device communication using displayed imagery, said apparatus comprising: a camera for capturing a plurality of image frames, the plurality of image frames representing a plurality of graphics displayed on a display screen of a mobile device, in which each of the graphics comprises an output from an erasure code generator, in which the erasure code generator produces a plurality of outputs corresponding to a payload; means for decoding outputs from the plurality of graphics; and means for constructing the payload from decoded outputs; and means for carrying out an action based on a constructed payload. A great variety of other features, arrangements and claims are also detailed.
Description
TECHNICAL FIELD

The present technology concerns, e.g., portable devices such as smartphones, and their use in making secure payments or facilitating transactions. The present technology also concerns payload transmission with portable devices.


BACKGROUND AND INTRODUCTION TO THE TECHNOLOGY

Desirably, shoppers should be able to select from among plural different credit cards when making purchases, and not be tied to a single payment service. Having a choice of credit card payment options provides a variety of advantages.


For example, some credit card providers offer promotions that make spending on one card more attractive than another (e.g., double-miles on your Alaska Airlines Visa card for gas and grocery purchases made during February). Other promotions sometimes include a lump-sum award of miles for new account holders after a threshold charge total has been reached (e.g., get 50,000 miles on your new CapitalOne Visa card after you've made $5,000 of purchases within the first five months). At still other times, a shopper may be working to accumulate purchases on one particular card in order to reach a desired reward level (e.g., reaching 50,000 miles to qualify for a Delta ticket to Europe).


The ability to easily select a desired card from among an assortment of cards is a feature lacking in many existing mobile payment systems. The legacy physical cards that embody the service provider brands and their capabilities are expensive to produce and have security weaknesses that can be mitigated in mobile payment systems. The look, feel, and user interfaces of physical cards are familiar and well understood; existing mobile payment solutions, in contrast, involve numerous changes and new learning to operate.


In accordance with one aspect of the present technology, a smartphone programmed with a virtual wallet provides a user interface to present a wallet of virtual credit cards from which a user can pick when making a purchase. Data is conveyed optically from the phone to a cooperating system, such as a point of sale terminal or another smartphone. Preferably, the phone containing the virtual cards presents a graphical illustration of the selected card on the screen. Hidden in this graphical illustration (i.e., steganographically encoded) is transaction data. This transaction data may provide information about the selected card, and may also provide context data used to create a session key for security. Of course, a virtual wallet may receive payments, credits and rewards, as well as initiate payments.


Through use of the present technology, merchants can obtain the digital security advantages associated with “chip card”-based payment systems, without investing in interface hardware that has no other use, using virtual cards that have no costs of manufacture and distribution. The technology is secure, easy, economical, and reliable.


The foregoing and other features and advantages of the present technology will be more readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIGS. 1 and 2 show a fliptych user interface used in certain embodiments to allow a user to select a desired card from a virtual wallet.



FIGS. 3A and 3B show alternative card selection user interfaces.



FIG. 4A shows artwork for a selected card, steganographically encoded with card and authentication information, displayed on a smartphone screen for optical sensing by a cooperating system.



FIG. 4B is similar to FIG. 4A, but uses overt machine readable encoding (i.e., a barcode) instead of steganographic encoding, to optically convey information to the cooperating system.



FIG. 5 illustrates a common type of credit card transaction processing.



FIG. 6 shows a block diagram of a system in which a user's mobile device optically communicates with a cooperating system.



FIG. 7 is a flow chart detailing acts of an illustrative method.



FIGS. 8 and 9 show screenshots of a user interface for selecting and presenting two cards to a vendor.



FIGS. 10A and 10B show screenshots of an alternative user interface for selecting and presenting multiple cards to a vendor.



FIG. 10C illustrates how a payment can be split between two payment cards, in accordance with one aspect of the present technology.



FIG. 11 shows a payment user interface that presents a tally of items for purchase together with payment card artwork, and also provides for user signature.



FIGS. 12A-12D show how checkout tallies can be customized per user preference.



FIGS. 13A-13C show how authentication can employ steganographically-conveyed context data, an anti-phishing mutual validation system, and signature collection—all for increased security.



FIGS. 14 and 15 show an authentication arrangement using photographs earlier captured by the user and stored on the smartphone.



FIG. 16 is a diagram showing a payload coding and transmission scheme.





DETAILED DESCRIPTION

The present technology has broad applicability, but necessarily is described by reference to a limited number of embodiments and applications. The reader should understand that this technology can be employed in various other forms—many quite different than the arrangements detailed in the following discussion.


One aspect of the present technology concerns payment technologies, including auctions to determine which financial vendor will facilitate a transaction. A few particular embodiments are described below, from which various features and advantages will become apparent.


One particular method employs a user's portable device, such as a smartphone. As is familiar, such devices include a variety of components, e.g., a touch screen display, a processor, a memory, various sensor modules, etc.


Stored in the memory is an electronic payment module comprising software instructions that cause the device to present a user interface (UI) on the display. This electronic payment module (and/or a UI provided by such) is sometimes referred to herein as a “virtual wallet”. One such user interface is shown in FIG. 1. The depicted user interface shows graphical representations of plural different cards of the sort typically carried in a user's wallet, e.g., credit cards, shopping loyalty cards, frequent flier membership cards, etc. (“wallet cards”). The software enables the user to scroll through the collection of cards and select one or more for use in a payment transaction, using a fliptych arrangement. (Fliptych is the generic name for the style of interface popularized by Apple under the name “Cover Flow.”) As earlier noted, it is advantageous for a shopper to be able to choose among the displayed payment cards at different times, and not be virtually tied to a single payment service.


In the illustrated embodiment, after the user has scrolled to a desired card (a Visa card in FIG. 1), it is selected for use in the transaction by a user signal, such as a single-tap on the touch screen. (A double-tap causes the depicted card to virtually flip-over and reveal, on its back side, information about recent account usage and available credit.)


A great variety of other user interface styles can be used for selecting from a virtual wallet of cards. FIG. 3A shows another form of UI—a scrollable display of thumbnails. This UI illustrates that representations of cards other than faithful card depictions can be employed. (Note the logo, rather than the card image, to represent the MasterCard payment service).


Still another alternative UI for card selection is that employed by Apple's Passbook software, shown in FIG. 3B. (The Passbook app is an organizer for passes such as movie tickets, plane and train boarding passes, gift cards, coupons, etc.)


After the user has selected a payment card, the device may perform a user security check—if required by the card issuer or by stored profile data configured by the user. One security check is entry of a PIN or password, although there are many others.


The illustrative transaction method further involves generating context-based authentication data using data from one or more smartphone sensors, as discussed more fully below. This authentication data serves to assure the cooperating system that the smartphone is legitimate and is not, e.g., a fraudulent “replay attack” of the system.


After the security check (if any), and generation of the context-based authentication data, the smartphone displays corresponding artwork on its display, as shown in FIG. 4A. This artwork visually indicates the selected payment service, thereby permitting the user to quickly check that the correct payment card has been selected. The card number, a logo distinctive of the selected payment service (e.g., an American Express, Visa or MasterCard logo) and/or card issuer (e.g., US Bank, Bank of America) can be included in the artwork, for viewing by the user.


While the smartphone display shown in FIG. 4A indicates the selected payment service, it also includes the payment service account data (e.g., account number, owner name, country code, and card expiration date), as well as the context-based authentication data. This information is not evident in the FIG. 4A artwork because it is hidden, using steganographic encoding (digital watermarking). However, such information can be decoded from the artwork by a corresponding (digital watermark) detector. Alternatively, such information can be conveyed otherwise, such as by other forms of machine-readable encoding (e.g., the barcode shown in FIG. 4B).


The user shows the artwork on the phone display to a sensor (e.g., a camera) of a cooperating system, such as a point of sale (POS) terminal, or a clerk's portable device, which captures one or more frames of imagery depicting the display. In one particular case the user holds the smartphone in front of a fixed camera, such as at a self-checkout terminal. In another, a POS terminal camera, or a smartphone camera, is positioned (e.g., by a checkout clerk) so as to capture an image of the smartphone screen. In still another, the user puts the smartphone, display facing up, on a conveyor of a grocery checkout, where it is imaged by the same camera(s) that is used to identify products for checkout. In all such arrangements, information is conveyed optically from the user device to the cooperating system. (Related technology is detailed in applicant's pending application Ser. No. 13/750,752, filed Jan. 25, 2013, and issued as U.S. Pat. No. 9,367,770, which is hereby incorporated herein by reference in its entirety).


The cooperating system decodes the account data and authentication data from the captured imagery. The transaction is next security-checked by use of the authentication data. Corresponding transaction information is then forwarded to the merchant's bank for processing. From this point on, the payment transaction may proceed in the conventional manner. (FIG. 5 illustrates a credit card approval process for a typical transaction.)



FIG. 6 shows some of the hardware elements involved in this embodiment, namely a user's smartphone, and a cooperating system. These elements are depicted as having identical components (which may be the case, e.g., if the cooperating system is another smartphone). The dashed lines illustrate that the camera of the cooperating system captures imagery from the display of the user smartphone.



FIG. 7 summarizes a few aspects of the above-described embodiment in flow chart form.


The authentication data used in the detailed embodiment can be of various types, and can serve various roles, as detailed in the following discussion.


A security vulnerability of many systems is the so-called “replay attack.” In this scenario, a perpetrator collects data from a valid transaction, and later re-uses it to fraudulently make a second transaction. In the present case, if a perpetrator obtained imagery captured by a POS terminal, e.g., depicting the FIG. 4A virtual payment card of a user, then this same imagery might later be employed to mimic presentation of a valid payment card for any number of further transactions. (A simple case would be the perpetrator printing a captured image of the FIG. 4A screen display, and presenting the printed picture to a camera at a self-service checkout terminal to “pay” for merchandise.)


The authentication data of the present system defeats this type of attack. The authentication data is of a character that naturally changes from transaction to transaction. A simple example is time or date. If this information is encoded in the image, the cooperating system can check that the decoded information matches its own assessment of the time/date.


As sensors have proliferated in smartphones, a great variety of other authentication data can be employed. For example, some smartphones now include barometric pressure sensors. The barometric pressure currently sensed by the smartphone sensor can be among the data provided from the smartphone display to the cooperating system. The cooperating system can check a barometric sensor of its own, and confirm that the received information matches within some margin of error, e.g., 1 millibar. Temperature is another atmospheric parameter that can be used in this fashion.


Other authentication data concern the pose and/or motion of the smartphone. Smartphones are now conventionally equipped with a tri-axis magnetometer (compass), a tri-axis accelerometer and/or a tri-axis gyroscope. Data from these sensors allow the smartphone to characterize its position and motion, which information can be encoded in the displayed artwork. The cooperating system can analyze its captured imagery of the smartphone to make its own assessment of these data.


For example, in a supermarket context, a POS terminal may analyze camera data to determine that the shopper's phone is moving 1 foot per second (i.e., on a moving conveyor), and is in a pose with its screen facing straight up, with its top oriented towards a compass direction of 322 degrees. If the authentication data decoded from the artwork displayed on the phone screen does not match this pose/motion data observed by the POS terminal, then something is awry and the transaction is refused.


Another form of authentication data is information derived from the audio environment. A sample of ambient audio can be sensed by the smartphone microphone and processed, e.g., to classify it by type, or to decode an ambient digital watermark, or to generate an audio fingerprint. An exemplary audio fingerprint may be generated by sensing the audio over a one second interval and determining the audio power in nine linear or logarithmic bands spanning 300-3000 Hz (e.g., 300-387 Hz, 387-500 Hz, 500-646 Hz, 646-835 Hz, 835-1078 Hz, 1078-1392 Hz, 1392-1798 Hz, 1798-2323 Hz, and 2323-3000 Hz). An eight-bit fingerprint is derived from this series of data. The first bit is a “1” if the first band (300-387 Hz) has more energy than the band next-above (387-500 Hz); else the first bit is a “0.” And so forth up through the eighth bit (which is a “1” if the eighth band (1798-2323 Hz) has more energy than the band next-above (2323-3000 Hz)).
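The following is a minimal sketch of how such a fingerprint might be computed, assuming a one-second mono sample and the band edges listed above; the function and constant names are illustrative only.

```python
# Minimal sketch of the 8-bit audio fingerprint described above.
# Assumes a one-second mono sample at a known sample rate; band edges follow the text.
import numpy as np

BAND_EDGES_HZ = [300, 387, 500, 646, 835, 1078, 1392, 1798, 2323, 3000]

def audio_fingerprint(samples, sample_rate):
    """Return an 8-bit fingerprint: bit i is 1 if band i has more energy than band i+1."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2                 # power spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    energies = [spectrum[(freqs >= lo) & (freqs < hi)].sum()
                for lo, hi in zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:])]
    bits = [1 if energies[i] > energies[i + 1] else 0 for i in range(8)]
    return sum(b << (7 - i) for i, b in enumerate(bits))          # pack MSB-first into one byte
```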


The POS terminal can similarly sample the audio environment, and compute its own fingerprint information. This information is then compared with that communicated from the user's smartphone, and checked for correspondence. (The POS terminal can repeatedly compute an audio fingerprint for successive one second sample intervals, and check the received data against the last several computed fingerprints for a match within an error threshold, such as a Euclidean distance.)


In some implementations, the POS terminal may emit a short burst of tones—simultaneously or sequentially. The smartphone microphone senses these tones, and communicates corresponding information back to the POS terminal, where a match assessment is made. (In the case of a sequence of tones, a sequence of audio fingerprints may be communicated back.) By such an arrangement, the POS terminal can influence or dictate, e.g., a fingerprint value that should be reported back from the smartphone.


This is a form of challenge-response authentication. The POS terminal issues a challenge (e.g., a particular combination or sequence of tones), and the smartphone must respond with a response that varies in accordance with the challenge. The response from the smartphone is checked against that expected by the POS terminal.


Relatedly, information from the visual environment can be used as the basis for authentication data. For example, the smartphone may be held to face towards the camera of a POS terminal. A collection of colored LEDs may be positioned next to the camera of the POS terminal, and may be controlled by the POS processor to shine colored light towards the smartphone. In one transaction the POS system may illuminate a blue LED. In a next transaction it may illuminate an orange LED. The smartphone senses the color illumination from its camera (i.e., the smartphone camera on the front of the device, adjacent the display screen), and encodes this information in the artwork displayed on the phone screen. The POS terminal checks the color information reported from the smartphone (via the encoded artwork) with information about the color of LED illuminated for the transaction, to check for correspondence.


Naturally, more complex arrangements can be used, including some in which different LEDs are activated in a sequence to emit a series of colors that varies over time. This time-varying information can be reported back via the displayed artwork—either over time (e.g., the artwork displayed by the smartphone changes (steganographically) in response to each change in LED color), or the smartphone can process the sequence of different colors into a single datum. For example, the POS terminal may be capable of emitting ten different colors of light, and it issues a sequence of three of these colors—each for 100 milliseconds, in a repeating pattern. The smartphone senses the sequence, and then reports back a three digit decimal number—each digit representing one of the colors. The POS checks the received number to confirm that the three digits correspond to the three colors of illumination being presented, and that they were sensed in the correct order.
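A hedged sketch of this color challenge-response check follows; the ten-color palette, its index assignments, and the function names are assumptions chosen only for illustration.

```python
# Illustrative sketch of the color challenge-response: the phone reports a
# three-digit decimal number; the POS confirms colors and ordering match.

PALETTE = ["red", "orange", "yellow", "green", "cyan", "blue",
           "violet", "magenta", "white", "amber"]          # assumed shared indices 0-9

def encode_color_sequence(sensed_colors):
    """Phone side: map three sensed colors to a three-digit decimal string."""
    return "".join(str(PALETTE.index(c)) for c in sensed_colors)

def verify_color_response(reported_digits, emitted_colors):
    """POS side: confirm the reported digits match the colors emitted, in order."""
    expected = "".join(str(PALETTE.index(c)) for c in emitted_colors)
    return reported_digits == expected

# e.g., POS emits blue, amber, green; phone reports "593"; verification returns True.
```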


In like fashion, other time-varying authentication data can be similarly sensed by the smartphone and reported back to the cooperating system as authentication data.


All of the above types of authentication data are regarded as context data—providing information reporting context as sensed by the smartphone.


Combinations of the above-described types of authentication data—as well as others—can be used.


It will be understood that use of authentication data as described above allows the risk of a replay attack to be engineered down to virtually zero.


Not only does the authentication data serve to defeat replay attacks, it can also be used to secure the payment card information against eavesdropping (e.g., a form of “man-in-the-middle” attack). Consider a perpetrator in a grocery checkout who uses a smartphone to capture an image of a smartphone of a person ahead in line, when the latter smartphone is presenting the FIG. 4B display that includes a barcode with payment card information. The perpetrator may later hack the barcode to extract the payment card information, and use that payment card data to make fraudulent charges.


To defeat such threat, the information encoded in the displayed artwork desirably is encrypted using a key. This key can be based on the authentication data. The smartphone presenting the information can derive the key from its sensed context data (e.g., audio, imagery, pose, motion, environment, etc.), yielding a context-dependent session key. The cooperating POS system makes a parallel assessment based on its sensed context data, from which it derives a matching session key. The authentication data thus is used to create a (context-dependent) secure private channel through which information is conveyed between the smartphone and the POS system.
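One way such a context-dependent session key might be derived is sketched below; the particular context features and the quantization steps (chosen so both devices land on the same key despite sensor noise) are assumptions, not prescribed by the foregoing.

```python
# Hedged sketch: both sides quantize the context they sense and hash it into a
# session key. The feature set and quantization granularity are assumptions.
import hashlib

def context_session_key(pressure_mbar, compass_deg, audio_fp, date_str):
    # Coarse quantization so phone and POS compute identical values from noisy sensors.
    features = "|".join([
        str(round(pressure_mbar)),          # nearest millibar
        str(round(compass_deg / 5) * 5),    # nearest 5 degrees
        format(audio_fp, "08b"),            # 8-bit audio fingerprint
        date_str,                           # e.g. "2013-01-25"
    ])
    return hashlib.sha256(features.encode()).digest()   # 32-byte session key
```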


There are many forms of encryption that can be employed. A simple one is an exclusive-OR operation, by which bits of the message are XOR-d with bits of the key. The resulting encrypted data string is encoded in the artwork presented on the smartphone screen. The POS system recovers this encrypted data from captured imagery of the phone, and applies the same key, in the same XOR operation, to recover the bits of the original message.
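A minimal sketch of this XOR step, assuming a byte-oriented message and session key, follows; the same function serves both ends of the channel.

```python
# Minimal sketch of the XOR operation described above: the same call encrypts
# on the phone and decrypts at the POS, given the same (context-derived) key.
from itertools import cycle

def xor_with_key(message: bytes, key: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(message, cycle(key)))

# phone:  cipher = xor_with_key(account_data, session_key)
# POS:    account_data = xor_with_key(cipher, session_key)
```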


More sophisticated implementations employ standard cryptographic algorithms, such as DES, SHA1, MD5, etc.


Additional security can be provided by use of digital signature technology, which may be used by the POS system to provide for authentication (and non-repudiation) of the information received from the smartphone (and vice-versa, if desired).


In one such embodiment, information identifying the phone or user is conveyed from the phone to the POS system (e.g., via the encoded artwork displayed on the phone screen). This identifier can take various forms. One is the phone's IMEI (International Mobile Station Equipment Identity) data—an identifier that uniquely identifies a phone. (The IMEI can be displayed on most phones by entering *#06# on the keypad.) Another is a phone's IMSI (International Mobile Subscriber Identity) data, which identifies the phone's SIM card. Still other identifiers can be derived using known device fingerprinting techniques—based on parameter data collected from the phone, which in the aggregate distinguishes that phone from others. (All such arrangements may be regarded as a hardware ID.)


This identifier can be conveyed from the phone to the POS system in encrypted form, e.g., using context-based authentication data as described above.


Upon receipt of the identifier, the POS system consults a registry (e.g., a certificate authority) to obtain a public key (of a public-private cryptographic key pair) associated with that identifier. This enables the phone to encrypt information it wishes to securely communicate to the POS system using the phone's (or user's) private key. (This key may be stored in the phone's memory.) Information that may be encrypted in this fashion includes the payment card data. The POS system uses the public key that it obtained from the certificate authority to decrypt this information. Because the communicated information is signed with a key that allows for its decryption using the public key obtained from the certificate authority, the information is known by the POS system to have originated from the identified phone/user. (The public/private key pairs may be issued by a bank or other party involved in the transaction processing. The same party, or another, may operate the certificate authority.) Once the POS system has determined the provenance of the information provided by the mobile phone, a secondary check can be made to determine if the card information provided is associated with the phone, creating a second layer of security for a would-be attacker to surmount (beyond registering a fraudulent phone within the system, they would also have to associate the copied card information for a replay attack with the fraudulent phone).


The context based authentication data can also be encrypted with the private key, and decoded with the corresponding public key obtained from the certificate authority. In this case, since context-based authentication data is encrypted with a key that is tied to the device (e.g., via an IMEI identifier through a certificate authority), then this authentication data is logically bound to both the context and the user device.
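The sign/verify flow described above might be sketched as follows, here using RSA-PSS from the third-party Python "cryptography" package; the certificate-authority query is abstracted to a hypothetical lookup function, and the key-generation and transport details are omitted.

```python
# Sketch of the sign/verify flow. The registry_lookup() call is a hypothetical
# stand-in for querying a certificate authority by hardware ID.
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives import hashes

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

def sign_payload(private_key, payload: bytes) -> bytes:
    """Phone side: sign payment or context data with the private key stored on the phone."""
    return private_key.sign(payload, PSS, hashes.SHA256())

def verify_payload(registry_lookup, hardware_id, payload: bytes, signature: bytes) -> bool:
    """POS side: fetch the public key registered for this hardware ID, then verify."""
    public_key = registry_lookup(hardware_id)     # hypothetical certificate-authority query
    try:
        public_key.verify(signature, payload, PSS, hashes.SHA256())
        return True
    except Exception:
        return False
```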


Physically unclonable functions (PUFs) can also be utilized to provide confidence that the optical event observed by the imager of the cooperating device has not been spoofed. These may include but are not limited to shot-noise and temporal noise of the camera, properties of the image processing pipeline (compression artifacts, tonal curves influenced by Auto White Balance or other operations), etc. In addition, properties of the display of the mobile device can be used for this same purpose, such as dead pixels or fluctuations of display brightness as a function of time or power.


(U.S. Pat. No. 7,370,190, which is hereby incorporated herein by reference in its entirety, provides additional information about physically unclonable functions, and their uses—technology with which the artisan is presumed to be familiar.)


It will be recognized that prior art transactions with conventional credit cards, based on magnetic stripe data, offer none of the security and authentication benefits noted above. The technologies described herein reduce costs and space requirements at checkout by eliminating need for mag stripe readers or RFID terminals. While “chip card” arrangements (sometimes termed “smart cards”) offer a variety of digital security techniques, they require specialized interface technology to exchange data with the chip—interface technology that has no other use. The just-described implementations, in contrast, make use of camera sensors that are commonplace in smartphones and tablets, and that are being increasingly deployed by retailers to read barcodes during checkout. This means that the marginal cost of reading is software only, in that hardware reader requirements are consistent with industry trends towards image capture at retail checkout, thereby exploiting a resource available at no marginal cost to implementers of the present technology. Notably, the reader function could be implemented in hardware as well, if doing so would provide superior cost effectiveness. The same imager-based readers could read other indicia, such as QR codes, authenticate digitally-watermarked driver licenses, and OCR relevant text.


Similarly, the system is more economical than all magnetic stripe and RFID systems because no physical cards or chips are required. (This is a particular savings when contrasted with chip card systems, due to the microprocessors and gold-plated interfaces typically used in such cards.) Nor is there any cost associated with distributing cards, confirming their safe receipt, and attending to their activation. Instead, credentials are distributed by electronically sending a file of data corresponding to a wallet card—encrypted and digitally signed by the issuing bank—to the phone, and using that file data to add the card to the smartphone wallet. The installation and activation of the card can be tied to various unique aspects of the device and/or user characteristics, such as, for example, a hardware ID or a hash of user history or personal characteristics data.


A still further advantage is that the present technology is helpful in alleviating piriformis syndrome. This syndrome involves inflammation of the sciatic nerve due to pressure in the gluteal/pelvic region. A common cause of such pressure is presence of a large wallet in a person's rear pocket, which displaces customary pelvic alignment when sitting. By removing physical cards from a user's wallet, the wallet's volume is reduced, reducing attendant compression of the sciatic nerve. Elimination of the wallet requirement also improves security and convenience of payment processing for users.


Presentation of Multiple Cards


The arrangements just-described involved presentation of a single card—a payment card. Sometimes plural cards are useful. One example is where a merchant offers discounts on certain items to users who are enrolled in the merchant's loyalty program. Another is where an airline offers a discount on checked luggage fees to fliers who are members of its frequent flier program.


In accordance with a further aspect of the technology, the UI of the payment module on the user's smartphone permits selection of two or more cards from the virtual wallet. One is a payment card, and the other may be a loyalty (“merchant”) card. Data corresponding to both cards may be optically conveyed to the cooperating system via the artwork presented on the display of the user's smartphone.



FIG. 8 shows one such user interface. As before, the user flips through the deck of virtual wallet cards to find a first desired card. Instead of the user tapping the card for selection, a sweeping gesture is used to move the virtual card above the deck (as shown by the Visa card in FIG. 8), while the rest of the virtual deck slides down to make room. The user then continues flipping through the deck to locate a second card, which is selected by tapping. As a consequence of these actions, the phone screen presents artwork representing both the selected payment card, and the other (merchant) card, as shown in FIG. 9.


As before, information encoded in the displayed artwork is sensed by a camera of a cooperating system, and is used in connection with a transaction. The payment card information may be encoded in the portion of the artwork corresponding to the payment card, and likewise with the merchant card information. Or information for both cards can be encoded throughout the displayed imagery (as can the authentication information).



FIG. 10A shows another style of user interface permitting selection of multiple wallet cards. Here, thumbnails of different cards are organized by type along the right edge: payment cards, loyalty cards, gift and coupon cards, and cents-back cards. (Cents-back cards serve to round-up a transaction amount to a next increment (e.g., the next dollar), with the excess funds contributed to a charity.) This right area of the depicted UI is scrollable, to reveal any thumbnails that can't be presented in the available screen space.


Desirably, the thumbnails presented on the right side of the UI are ordered so that the card(s) that are most likely to be used in a given context are the most conspicuous (e.g., not partially occluded by other cards). For example, in a Safeway store (as determined by GPS data, cross-referenced against map data identifying what businesses are at what locations; or as indicated by a sensed audio signal—such as detailed in Shopkick's patent application 20110029370, which is hereby incorporated herein by reference in its entirety), the Safeway loyalty card would be most readily available. Similarly, if a shopper historically tends to use a Visa card at the Safeway store (perhaps because the issuing bank issues triple miles for dollars spent at grocery stores), then the Visa card thumbnail would be positioned at a preferred location relative to the other payment card options. Forward chaining of inference can be used to predict which cards are most likely to be used in different situations.


To use this form of interface, the user slides thumbnails of selected cards towards the center of the screen where they expand and stack, as shown in FIG. 10B. The user may assemble a recipe of cards including a credit card, a pair of coupon cards, a gift card, a loyalty card, and a cents-back card, while the grocery clerk is scanning items. Once the desired deck of cards is assembled, the deck is single-tapped (or in another embodiment double-tapped) to indicate that the user's selection is completed. The displayed artwork is again encoded with information, as described earlier, for optical reading by a cooperating system. As shown in FIGS. 10A and 10B, the artwork can include a background pattern 102, and this background pattern can also be encoded (thereby expanding the payload size and/or increasing the encoding robustness).


A visual indicia can be presented on the screen indicating that the artwork has been steganographically-encoded, and is ready to present for payment. For example, after the user has tapped the stack, and the artwork has been encoded, dark or other distinctive borders can appear around the card depictions.


A user interface can also be employed to split charges between two payment cards. Both cards may be in the name of the same person, or cards from two persons may be used to split a charge. (One such example is a family in which a weekly allowance is issued to teens by deposits to a prepaid debit card. A parent may have such a debit card for a teen in their smartphone wallet, and may occasionally agree to split the costs of a purchase with the teen.)


As shown in FIG. 10C, the artwork presented in one such UI case includes a hybrid card—a graphic composed partly of artwork associated with one card, and partly of artwork associated with another card. At the junction of the two parts is a dark border, and a user interface feature 103 that can be touched by the user on the touch screen and slid right or left to apportion a charge between the two cards in a desired manner. The illustrated UI shows the split detailed in percentage (30%/70%), but a split detailed in dollars could alternatively, or additionally, be displayed.


Auctioning Transaction Privileges:


Consider a shopper who populates a shopping cart—either physical or virtual. The cart's total is determined and presented via a device user interface (UI). Stored in device memory is an electronic payment module (or “virtual wallet”) comprising software instructions and/or libraries that cause the device to present the user interface (UI) on the display.


This particular user has many different payment options associated with her virtual wallet, e.g., various credit accounts, credit cards, BitCoin credit, store cards or rewards, PayPal account(s), checking and/or savings account(s), etc. The virtual wallet may also include, e.g., frequent flyer account information, reward program information, membership information, loyalty membership information, coupons, discount codes, rebates, etc.


The user may indicate through the UI that she is ready to check out and purchase the cart items. If the UI cooperates with a touchscreen interface, the user may do so by touching the screen, flipping through various screens, scrolling, checking boxes, selecting icons, etc. In response, an auction is launched to determine which financial vendor associated with her virtual wallet will facilitate the financial transaction. In other cases, a solicitation of offers is launched to gather offers from the financial vendors associated with her virtual wallet. The virtual wallet can launch the solicitation or auction in a number of ways.


For example, the virtual wallet can communicate with the various financial vendors associated with the user's different payment options. Cart total and contents, store and user location(s), user credit history, etc. can be forwarded to the different financial institutions to consider as they bid to facilitate the user's transaction. If the cart's total is $97.23, American Express may, for example, decide to offer a discount to the user if she uses her American Express account. With the discount the transaction total may now only cost the user, e.g., $92.37. American Express may decide to offer the discount in exchange for promotional or marketing opportunities, pushing targeted advertisements or providing other opportunities to the user during or after the transaction. Or American Express may have a discount arrangement with the store from which the user is shopping, e.g., Target or Amazon.com, and/or a discount arrangement for certain of the cart items. A portion of the discount can be passed along to the user. American Express may base a decision to bid—and the amount of any discount associated with such bid—on a number of factors, e.g., the user's credit history with their American Express account, their overall credit history, a length of time since the user used the account, the user's past response to targeted advertising, agreements with retailers or distributors, the user's demographics, promotion or marketing opportunities to the user, etc.


During the auction another creditor, e.g., PayPal's BillMeLater, may decide based on the user's credit history that she is a solid risk. So BillMeLater low-balls the bid, offering a bargain-basement cost of $82.19 for the purchase, but BillMeLater couples its bid with a requirement that the user agree to establish or increase a line of credit.


Another creditor may promise a discount plus a certain number of reward or mileage points if the user selects them for the transaction. Still another may bid/offer an extended warranty if the purchase is made with them.


The auction can be time-limited so bids must be submitted within a certain response time. In other cases, the user can be preapproved for certain deals or promotions based on her location, which will help reduce auction time. For example, the virtual wallet may determine that the phone is currently located in Wal-Mart or Target. Location information can be determined from user input, e.g., entering into the virtual wallet—or selecting from a screen pull-down or flip through—that the user is currently shopping in Wal-Mart; from GPS information (e.g., coupled with a search of GPS coordinates); or from environmental information sensed by the user device upon entering the store (e.g., image recognition from recent camera pictures, analyzing digitally watermarked audio playing in a store, calculating audio fingerprints of ambient audio, audio beacons like Apple's iBeacons, Wi-Fi network information, etc.). The virtual wallet can start to solicit bids from financial vendors associated with the virtual wallet or user as soon as the virtual wallet determines that the user is in a retail establishment, even though the user has not finished populating her cart and is not located at checkout. Incoming bids may then be based on all or some of the above factors, e.g., credit history, promotion opportunities, available discounts, etc., and less on the actual cart contents.


The virtual wallet can also start an auction or solicit offers when the first (or other) item is added to the cart.


The virtual wallet can also receive pre-authorization or firm bids from financial vendors. For example, Bank of America may decide that they are offering to the user a 3% discount for all in-store purchases at Wal-Mart made during the upcoming weekend. The virtual wallet stores this information and can present the offer if and when the user finds herself in Wal-Mart. The pre-authorization may include or link to promotional opportunities to be displayed during or after purchase.


The user can select from the various bids to determine which financial vendor will facilitate her transaction. For example, a double tap on a graphic with the desired bid can initiate the transaction. The user can be prompted to confirm the transaction if desired.


The virtual wallet can be user-configured to present only those bids meeting certain criteria. For example, through a settings screen or user interface, the user may decide that she only wants to see and consider the top 2 or 3 bids with cash-only discounts; such a setting will result in the user interface only presenting such top bids. Or the user may be interested in mileage rewards, or credit opportunities; and these will be presented in the top bids. Or the user can decide NOT to be bothered with the decision and may select a “best-deal” mode where the virtual wallet selects a bid based on a plurality of factors including, e.g., deepest discount, best long term financing, and/or proximity to reward levels (e.g., the user only needs 5,000 more mileage points to qualify for a trip to Hawaii). Such factors may be weighted according to user preference and a top bid can be determined as one with the highest overall weighting. (E.g., 10 points if the bid includes the deepest discount, 1 if it's the least discount; 8 points if the bid includes free long-term financing, 1 if it doesn't; 5 points if the bid includes reward points, 0 if it doesn't; 10 points if the user has selected this payment option recently, 1 if they haven't; 9 points if the user has a low balance on the credit account, 0 if they are near their credit limit; etc., and/or other weighting schemes.)
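An illustrative weighting of the kind just described is sketched below; the bid fields and point values loosely mirror the example above but are assumptions, not a prescribed scheme (flags such as "deepest_discount" are assumed to be precomputed relative to the other bids).

```python
# Illustrative "best-deal" scoring, loosely following the point values above.
# Bid fields and weights are assumptions for the sketch.

def score_bid(bid, user_prefs):
    score = 0
    score += 10 if bid.get("deepest_discount") else 1
    score += 8 if bid.get("free_long_term_financing") else 1
    score += 5 if bid.get("reward_points", 0) > 0 else 0
    score += 10 if bid.get("recently_used") else 1
    score += 9 if bid.get("low_balance") else 0
    # Optional per-user emphasis, e.g., a user chasing mileage rewards.
    score += user_prefs.get("mileage_weight", 0) * bid.get("reward_points", 0)
    return score

def pick_best_bid(bids, user_prefs):
    return max(bids, key=lambda b: score_bid(b, user_prefs))
```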


A virtual wallet may also be configured to track reward status. E.g., if a newly purchased TV is defective, and a user takes it back for a refund, a merchant may communicate with a virtual wallet (or a financial vendor represented in the virtual wallet) to issue a credit. The refund may result in reward points being pulled from a rewards account. This information may be reflected in the virtual wallet.


The virtual wallet may also communicate with a broker or intermediary service. The broker or intermediary service can aggregate information, vendor bids, pre-authorizations, promotions, advertising, etc. and associate such with a user or user device. In operation, the virtual wallet communicates with the broker, which relays (and may itself generate) various bids and promotion opportunities back to the virtual wallet.


Auctions associated with the virtual wallet are not limited to retail checkout locations. The virtual wallet can help find better deals on many other items and services.


For example, a user can prompt the virtual wallet that they need gas. This may cause the virtual wallet to launch a search, auction and/or solicitation for the best possible deals. The auction can consider the various cards and memberships that the user has in her wallet. For example, a user's wallet may include a Chevron rewards card and an American Express account. This information can be communicated to various financial vendors including Chevron and American Express (or their intermediaries). An incoming bid may be presented to the mobile device including additional gas points on the Chevron rewards card and/or a discount if the American Express card is used. If a local Chevron Station is running a promotion, such information can be communicated to the virtual wallet for presentation to the user as well.


In some cases, the virtual wallet can be configured to communicate some or all details about a bid to a competing financial vendor—making the auction even more transparent to participating vendors. A competing vendor may decide to alter their initial bid to sweeten the deal. For example, Shell may decide that they don't want to be outbid by Chevron, and they may send the virtual wallet a bid that is lower, includes more rewards, or otherwise entices the user. Shell's response can be sent back to Chevron, or Chevron's intermediary, who may decide to sweeten their bid in response.


In some cases, the auction can be geographically constrained, e.g., only gas stations within a pre-determined number of miles from a user are considered for an auction. The virtual wallet can determine which stations meet this location criterion by cooperation with one of the many available software apps that determine such stations based on a user's location (e.g., Google Maps, GasBuddy, etc.). Once a station is chosen, the virtual wallet may launch mapping software on the mobile device, and pass into the mapping software the winning station's address or GPS coordinates, so that the user can have step-by-step driving directions to the station. Alternatively, the destination address, or the turn-by-turn instructions, can simply be passed to the control system of a self-driving vehicle, which can drive itself to the gas station and complete the transaction.


Instead of a user prompting the virtual wallet that she needs gas, the virtual wallet may initiate an auction or solicitation based on other factors. For example, GPS coordinates may indicate that the user is located at or approaching a gas station. An auction may be launched based on such proximity information.


Cars, meanwhile, are becoming smarter and smarter. Cars are already available with low fuel warnings, low tire pressure warnings, service engine warnings, etc. Such warnings may be communicated to the user's device (e.g., via a Bluetooth pairing between the car and mobile phone) and used by the virtual wallet to initiate an auction to provide the best deals to address the warning.


Of course, the virtual wallet need not completely reside on a user's smartphone. For example, components of such may be distributed to the cloud, or to other available devices for processing. In the above example, a virtual wallet may hand off direction-giving to a car's onboard computer and let it handle some or all of the navigation. In other cases, a wallet shell resides on the cell phone. In this embodiment, the shell includes, e.g., graphic drivers and user interfaces to allow device display, user input and communication with a remote location. Credit card information and other wallet contents are stored remotely, e.g., in the cloud.


A virtual wallet may cause a digital watermark detector (or fingerprint generator) to analyze background audio in a background collection mode. For example, once operating in this background mode a detector or generator may analyze audio accompanying radio, internet, TV, movies, all to decode watermarks (or calculate fingerprints) without requiring human intervention. The audio may include watermarks (or result in fingerprints) that link to information associated with advertising, store promotions, coupons, etc. This information can be stored in the virtual wallet, e.g., according to store identifier, location, event, etc. Later, when the virtual wallet enters a store (or comes in proximity of a remote checkout terminal, e.g., a computer), the virtual wallet can receive location or retail information, e.g., included in a signal emanating from an iBeacon or audio source. The virtual wallet may use retail information to search through its stored, previously encountered audio. The virtual wallet can prompt the user if discounts, coupons, or promotions are found, or may apply any such discounts/coupons at checkout.


Message Payloads


Some embodiments benefit from using a relatively large payload (e.g., 500-2,500 bits) during a virtual wallet transaction. The payload can be carried in a digital watermark that is embedded in displayed imagery or video, encoded in hearing range audio, or transmitted using a high frequency audio channel. The payload may correspond with credit card or financial information (e.g., ISO/IEC 7813 information like track 1 and track 2 information), account information, loyalty information, etc. Payload information may be stored or generated locally on a smartphone, or the smartphone may query a remotely-located repository to obtain such. In some cases the remotely located repository provides a 1-time token which can be used for a single (sometimes application specific) transaction. In such cases, the receiving party can transmit the 1-time token to a 3rd party clearing house (which may or may not be the remotely located repository) to facilitate payment using the 1-time token. The 1-time token can be cryptographically associated with a user account or user payment.


Now consider encoded, displayed imagery. A user presents their portable device to a point of sale station which includes an optical reader or digital camera. In some cases the point of sale station is a portable device, e.g., like a smartphone, pad or tablet. The user's portable device displays digital watermarked imagery on the device's display for capture by the station's reader or camera. The displayed imagery can be a still image, e.g., an image or graphic representing a credit card, a picture of the family dog, an animation, etc. A virtual wallet can be configured to control the display of the image or graphic so that multiple frames (or versions) of the same still image or graphic are cycled on the display. Preferably, the displayed images appear as if they are collectively a static image, and not a video-like rendering. Each instance of the displayed image or graphic (or groups of images) carries a payload component. For example, a first displayed image carries a first payload component, a second displayed image carries a second payload component . . . and the nth-displayed image carries an nth-payload component (where n is an integer). Since the only change to each displayed image is a different payload component, which is generally hidden from human observation with digital watermarking, the displayed images appear static—as if they are collectively a single image—to a human observer of the smartphone display. A decoder, however, can be configured to analyze each separate image to decode the payload component located therein.


The payload components can take various forms. In a first embodiment, a relatively large payload is segmented or divided into various portions. The portions themselves can be used as the various components, or they can be processed for greater robustness, e.g., error correction encoded, and then used as the various payload components. For example, once the whole payload is segmented, a first portion is provided as the first payload component, which is embedded with digital watermarking in the first image for display, a second portion is provided as the second payload component, which is embedded with digital watermarking in a second image for display, and so on. Preferably, each of the various payload portions includes, is appended to include, or is otherwise associated or supplemented with a relative payload position or portion identifier. This will help identify the particular payload portion when reassembling the whole payload upon detection.
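A sketch of this segmentation step follows, assuming the payload is handled as a bit string; the "portion i of n" header format and the function name are illustrative.

```python
# Sketch of the first embodiment's segmentation: split the payload into n
# portions and attach a "portion i of n" identifier to each before embedding.

def segment_payload(payload_bits: str, n: int):
    """payload_bits: string of '0'/'1' characters; returns n (header, portion) components."""
    size = -(-len(payload_bits) // n)                 # ceiling division
    components = []
    for i in range(n):
        portion = payload_bits[i * size:(i + 1) * size]
        header = (i, n)                               # portion identifier, e.g., "3 of 12"
        components.append((header, portion))
    return components
```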


A watermark detector receives image data depicting a display (e.g., smartphone display) captured over time. Capture of imagery can be synchronized with cycled, displayed images. The watermark detector analyzes captured images or video frames to detect digital watermarks hidden therein. A hidden digital watermark includes a payload component. In the above first embodiment, the payload component corresponds to a payload portion and carries or is accompanied by a portion identifier (e.g., 1 of 12, or 3 of 12, etc.). The watermark detector, or a processor associated with such detector, combines decoded payload components and attempts to reconstruct the whole payload. For example, the payload portions may need to simply be concatenated to yield the entire payload. Or, once concatenated, the payload may need to be decrypted or decoded. The detector or processor tracks the portion identifiers, and may prompt ongoing detection until all payload portions are successfully recovered. If the detector misses a payload component (e.g., 3 of 12), it preferably waits until that component is cycled back through the display and successfully captured and decoded, or may communicate to the display that it needs, e.g., payload component 3 of 12.
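The detector-side bookkeeping might look like the following sketch, which tracks portion identifiers and reassembles the payload once every portion has been seen; the class and method names are illustrative.

```python
# Detector-side sketch: collect decoded (header, portion) components across
# captured frames and reconstruct the payload once every identifier is seen.

class PayloadAssembler:
    def __init__(self):
        self.portions = {}        # portion index -> bit string
        self.total = None

    def add(self, header, portion_bits):
        index, total = header     # e.g., (2, 12) means "portion 3 of 12", zero-based
        self.total = total
        self.portions[index] = portion_bits

    def missing(self):
        if self.total is None:
            return []
        return [i for i in range(self.total) if i not in self.portions]

    def payload(self):
        if self.total is None or self.missing():
            return None           # keep watching the display until all portions arrive
        return "".join(self.portions[i] for i in range(self.total))
```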


From a display side, if the whole payload is carried by 12 payload components, corresponding to 12 embedded image versions (each individual image version carrying one of the 12 payload components), then the 12 image versions can be repeatedly cycled through the display, e.g., for a predetermined time (e.g., 3-30 seconds) or until stopped by the user or point of sale station communicating a successful read back to the virtual wallet. If the display has a frame rate of 24 frames per second, then the 12 embedded image versions can be collectively cycled twice per second (or more or less depending on display frame rates).


In another embodiment of carrying a relatively large payload in displayed imagery, we present embodiments using signal coding techniques known as erasure codes and/or rateless codes. One example of these codes is the so-called “fountain codes.” For example, see, e.g., MacKay, “Fountain codes,” IEE Proc Commun 152(6):1062-1068, December 2005, which is hereby incorporated herein by reference in its entirety. See also U.S. Pat. No. 7,721,184, which is hereby incorporated herein by reference in its entirety.


To quote MacKay, from the above referenced paper, “Abstract: Fountain codes are record-breaking sparse-graph codes for channels with erasures, such as the internet, where files are transmitted in multiple small packets, each of which is either received without error or not received. Standard file transfer protocols simply chop a file up into K packet sized pieces, then repeatedly transmit each packet until it is successfully received. A back channel is required for the transmitter to find out which packets need retransmitting. In contrast, fountain codes make packets that are random functions of the whole file. The transmitter sprays packets at the receiver without any knowledge of which packets are received. Once the receiver has received any N packets, where N is just slightly greater than the original file size K, the whole file can be recovered. In the paper random linear fountain codes, LT codes, and raptor codes are reviewed . . . . 2. Fountain Codes. The computational costs of the best fountain codes are astonishingly small, scaling linearly with the file size. The encoder of a fountain code is a metaphorical fountain that produces an endless supply of water drops (encoded packets); let us say the original source file has a size of Kl bits, and each drop contains l encoded bits. Now, anyone who wishes to receive the encoded file holds a bucket under the fountain and collects drops until the number of drops in the bucket is a little larger than K. They can then recover the original file. Fountain codes are rateless in the sense that the number of encoded packets that can be generated from the source message is potentially limitless; and the number of encoded packets generated can be determined on the fly. Fountain codes are universal because they are simultaneously near-optimal for every erasure channel. Regardless of the statistics of the erasure events on the channel, we can send as many encoded packets as are needed in order for the decoder to recover the source data. The source data can be decoded from any set of K′ encoded packets, for K′ slightly larger than K. Fountain codes can also have fantastically small encoding and decoding complexities.”


One advantage of a fountain code is that a detector need not communicate anything back to a transmitter about which payload portions, if any, are missing. For example, fountain codes can transform a payload into an effectively large number of encoded data blobs (or components), such that the original payload can be reassembled from any subset of those data blobs, as long as an amount of data equal to (or a little more than) the size of the original payload is recovered. This provides a “fountain” of encoded data; a receiver can reassemble the payload by catching enough “drops,” regardless of which ones it gets and which ones it misses.


We can use erasure codes (e.g., fountain codes) to convey a relatively large payload for use with displayed imagery. For example, the relatively large payload can be presented to a fountain code encoder, which creates a plurality of encoded data blobs (e.g., encoded components). In some cases, each encoded data blob is accompanied with an index or seed. The index or seed allows the decoder to use a complementary decoding procedure to reconstruct the payload. For example, the encoder and decoder may agree on a pseudo-random number generator (or an indexed-based matrix generator). In one example, the generator includes an n×n random bit non-singular matrix where n is the payload's bit length. The matrix can be processed with a dot product of the payload which yields yn outputs. An index can be associated with each yn output, to allow reconstruction by the decoder. In another example, we can seed a generator with a randomly chosen index, and use that to pick a degree and set of source blocks. An encoded data blob is sent with the seed or index for that encoded block, and the decoder can use the same procedure to reconstruct the payload from received blobs/indexes.
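By way of illustration, a toy fountain-style code over fixed-size byte blocks is sketched below: each "drop" XORs a seed-chosen subset of source blocks, and a peeling decoder recovers the payload from any sufficient set of (seed, drop) pairs. The uniform degree distribution and other details are simplifications for the sketch, not the generator contemplated above.

```python
# Toy fountain-style erasure code: each drop XORs a seed-chosen subset of
# equal-length source blocks; the decoder peels degree-1 drops. Illustrative
# only; production LT/Raptor codes use carefully designed degree distributions.
import random

def make_drop(blocks, seed):
    """Encoder: produce one (seed, drop) pair from the list of source blocks."""
    rng = random.Random(seed)
    degree = rng.randint(1, len(blocks))
    indices = rng.sample(range(len(blocks)), degree)
    drop = bytes(len(blocks[0]))
    for i in indices:
        drop = bytes(a ^ b for a, b in zip(drop, blocks[i]))
    return seed, drop               # the seed lets the decoder re-derive the index set

def decode(drops, num_blocks):
    """Peeling decoder: return the recovered blocks, or None if not yet solvable."""
    pending = []
    for seed, data in drops:
        rng = random.Random(seed)
        degree = rng.randint(1, num_blocks)
        idx = set(rng.sample(range(num_blocks), degree))
        pending.append((idx, bytearray(data)))
    known = {}
    progress = True
    while progress:
        progress = False
        for idx, data in pending:
            for i in [i for i in idx if i in known]:        # subtract known blocks
                data[:] = bytes(a ^ b for a, b in zip(data, known[i]))
                idx.discard(i)
            if len(idx) == 1:                               # degree-1 drop reveals a block
                i = idx.pop()
                if i not in known:
                    known[i] = bytes(data)
                    progress = True
    return [known[i] for i in range(num_blocks)] if len(known) == num_blocks else None
```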


Another example is considered with reference to FIG. 16. Payload 170 is presented to a Fountain Code Generator 171. Of course, other types of erasure code generators may be used instead, e.g., Raptor Codes or LT codes (Luby Transform codes). The payload 170 can be a relatively large payload (e.g., in comparison to other, smaller digital watermarking payloads). Payload 170 preferably includes, e.g., 500-8 k bits. (Raptor and LT codes may be helpful when using even larger payloads, e.g., greater than 8 k bits.) One specific example is a payload including 880 bits. Payload 170 may include or may be appended to include additional error correction bits, e.g., CRC bits. Additional CRC bits can be added to the 880 bit payload example, e.g., 32 additional bits.


Fountain Code Generator 171 produces a plurality of coded outputs (or data blobs), Y1 . . . YN, where N is an integer value. Data blob outputs are provided to a Digital Watermark Embedder 172. Digital Watermark Embedder 172 uses the data blob outputs as payloads to be respectively hidden in image versions (I1-IN). The term “image version” may correspond to a copy or buffered version of a static (or still) Image (I) 174 that the user (or virtual wallet) has selected to represent a financial account or credit card or the like. Instead of being a copy of a still image, an image version may correspond to a video frame or video segment. Digital Watermark Embedder 172 embeds a data blob (e.g., Y1) in an image version I1 and outputs such (resulting in watermarked image version Iw1) for display by Display 173. Digital Watermark Embedder 172 continues to embed data blobs in image versions, e.g., Y2 in I2, outputting Iw2 for display, Y3 in I3, outputting Iw3 for display, and so on. Parallel processing may advantageously be used to embed multiple image versions simultaneously. In alternative arrangements, Digital Watermark Embedder 172 delegates embedding functions to other units. For example, Display 173 may include or cooperate with a GPU (graphics processing unit). Digital Watermark Embedder 172 may determine watermark tweaks (or changes) corresponding to embedding an output data blob in an image version and pass that information on to the GPU, which introduces the changes in an image version. In other cases, Digital Watermark Embedder 172 may calculate a watermark tile (e.g., a watermark signal representing an output data blob) and convey such to another unit like the GPU. The GPU may then consider other factors like a perceptual embedding map or human attention model and introduce the watermark tile in an image version with consideration of the map or model. (In FIG. 16, it should be understood that the Fountain Code Generator 171, Digital Watermark Embedder 172 and Image (I) 174 may be housed and operated in a portable device like the smartphone which includes Display 173. In other configurations, a portable device hosting the Display 173 communicates with a remotely-located device that hosts the Fountain Code Generator 171, Digital Watermark Embedder 172 and/or Image 174.)
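A minimal sketch of this embedding loop is below; the embed_watermark routine is a stand-in for whatever digital watermark embedder or GPU-assisted path is actually used, and is not an actual embedder API:

```python
def build_watermarked_frames(image, blobs, embed_watermark):
    """Embed each data blob Y_k in its own copy of the selected card image, yielding Iw_1..Iw_N."""
    frames = []
    for seed, blob in blobs:                  # blobs: (index/seed, encoded output) pairs
        version = image.copy()                # image version I_k of the still Image (I)
        frames.append(embed_watermark(version, payload_bits=list(blob), index=seed))
    return frames                             # queued for cycling on the display
```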


Embedded image versions Iw1 . . . Iwn may be stored or buffered for cycled display on Display 173. For example, if 24 image versions are embedded with data blobs, and if Display 173 has a frame rate of 24 frames per second, then the 24 embedded image versions can be collectively cycled once per second (each image version is shown for 1/24th of a second). Embedded image versions can be repeatedly cycled through the display one after another, e.g., for a predetermined time (e.g., 5-10 seconds) or until stopped by the user or point of sale terminal. For example, the user or terminal may communicate a successful read to the virtual wallet, which terminates the display. To a human observer of the cycled images, it appears that a static image is being displayed, since the changes among the different image versions are digital watermark changes, which are generally imperceptible to the human eye. This can be referred to as a “static image display effect”.


Returning to Fountain Code Generator 171, one configuration includes a non-singular random binary n×n matrix, where n is the payload's bit length. So, for the above 880 bit payload (912 including CRC bits) example, a 912×912 matrix is provided. The matrix can be processed with a dot product of the payload (912 bits) to yield y1-yN outputs. Continuing this example, fountain code outputs each include, e.g., 120 bits. A matrix index can be combined with the outputs, adding, e.g., 5 additional bits per output. The index can be specifically associated with individual outputs yN, can be associated with a group of y outputs, and/or can be associated with the matrix itself. The 125 bits can be error protected, e.g., by appending CRC bits (e.g., 24 bits for a total output data blob YN bit count of 149 bits per data blob). Error protection can be provided by the Fountain Code Generator 171 or the Digital Watermark Embedder 172, or both. For a typical application, about 6-180 data blobs can be used to reconstruct a message. In the 880 bit payload example, if 32 output blobs are used, then 32 corresponding image versions (each individual image version having one of the 32 data blobs digitally watermarked therein) can be prepared for display on the smartphone as discussed above. Instead of operating in a bit-by-bit manner, the Fountain Code Generator 171 can be configured to operate on longer codes, such as with Galois Fields (e.g., GF(256)) discussed in U.S. Pat. Nos. 7,412,641, 7,971,129 and 8,006,160, each of which is hereby incorporated herein by reference in its entirety.
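For concreteness, a sketch of how one output data blob YN could be framed under the bit counts of this example follows; only the 120/5/24-bit split comes from the text, while the CRC polynomial and bit ordering are assumptions made for illustration:

```python
def crc24(bits, poly=0x864CFB):
    """Bitwise CRC-24 over a list of 0/1 values (polynomial chosen for illustration only)."""
    reg = 0
    for b in bits:
        reg ^= (b & 1) << 23                  # feed the next message bit into the register MSB
        msb_set = reg & 0x800000
        reg = (reg << 1) & 0xFFFFFF
        if msb_set:
            reg ^= poly
    return [(reg >> i) & 1 for i in reversed(range(24))]

def frame_blob(index, out_bits):
    """Assemble one 149-bit data blob: 5 index bits + 120 fountain-code output bits + 24 CRC bits."""
    index_bits = [(index >> i) & 1 for i in reversed(range(5))]
    body = index_bits + [int(b) for b in out_bits]        # 5 + 120 = 125 bits
    return body + crc24(body)                             # 125 + 24 = 149 bits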


From a detector side, e.g., analyzing image data representing some or all of the embedded image versions Iw1-IwN displayed on the Display 173, constructing the payload can begin as soon as a data blob has been decoded from a digital watermark. That is, not all data blobs need to be recovered first before payload reconstruction is initiated with a corresponding erasure code decoder (e.g., in one above example, a corresponding non-singular matrix).
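A complementary decoding sketch, paired with the encoder sketch above (again an assumption-laden illustration rather than the prescribed implementation): regenerate the pseudo-random rows for each received index, stack the resulting linear equations, and solve over GF(2) once enough independent equations accumulate.

```python
import numpy as np

def regenerate_rows(seed, n, out_bits=120):
    """Recreate the generator rows the encoder used for this seed (same PRNG convention)."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=(out_bits, n))

def solve_gf2(A, y):
    """Gauss-Jordan elimination over GF(2); returns the payload bits, or None if rank is still short."""
    A = np.array(A, dtype=np.int64) % 2
    y = np.array(y, dtype=np.int64) % 2
    n, row = A.shape[1], 0
    for col in range(n):
        pivot = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            return None                        # not enough independent equations yet; need more blobs
        A[[row, pivot]] = A[[pivot, row]]
        y[[row, pivot]] = y[[pivot, row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                y[r] ^= y[row]
        row += 1
    return y[:n]                               # at full rank, bit i of the payload equals y[i]

# Any subset of received (seed, blob) pairs can be used; order does not matter:
# A = np.vstack([regenerate_rows(s, 880) for s, _ in received])
# y = np.concatenate([b for _, b in received])
# payload = solve_gf2(A, y)
```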


Of course, different payload sizes, error correction bit size and techniques, image version numbers, data blob outputs, intermediate outputs and erasure code generator configurations can be used. Thus, the above examples and embodiments are not intended to be limiting. Additionally, a payload may be segmented prior to fountain code encoding, with each segment having a corresponding number of output blobs. And, other related coding schemes can be used with cycling imagery (including video frames) such as Raptor codes and LT codes.


Of course, different watermark embedding strengths can be used. A relatively higher strength may increase visibility. To help offset this, we can use a human perceptibility map, where an image is analyzed to find areas that will effectively hide a digital watermark and/or to identify those areas which may result in visual artifacts if a digital watermark is hidden therein. A map can be created to avoid such poor hiding areas, or to embed in those areas at a relatively lower embedding strength. Calculating a perceptual map takes processing resources. To avoid calculating a map for each embedding instance of the same image, a map can be reused. For example, in the above FIG. 16 example, the Digital Watermark Embedder 172 may consult a perceptual map to help guide embedding. When using a still image, and since multiple versions of Image (I) 174 are being used, each of which preferably includes the same image content, a perceptual map can be calculated once, and then reused for each embedding of the image versions. In some cases, the map can be generated as soon as a user identifies an image to be used as a transaction graphic, e.g., during registration or virtual wallet set up, both of which occur prior to transactions.
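A sketch of the map-reuse idea, with compute_perceptual_map and embed_with_map as stand-ins for whatever perceptual analysis and embedder are actually used:

```python
def embed_all_versions(image, blobs, compute_perceptual_map, embed_with_map):
    """Run the expensive perceptual analysis once, then reuse the map for every image version."""
    hiding_map = compute_perceptual_map(image)   # computed a single time, e.g., at wallet setup
    return [embed_with_map(image.copy(), list(blob), hiding_map) for _, blob in blobs]
```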


Another way to avoid visual perceptibility of embedded watermarks is to vary embedding strengths based on timing or device sensor feedback. For example, a user may instruct their virtual wallet to display an image for optical sensing. The displayed, cycled images may be embedded with a relatively lower embedding strength for a predetermined time, e.g., the first 0-3 seconds, which may correspond to the average time it takes a user to present the smartphone display to an optical reader. Then, for a second time period, e.g., for the next 3-7 seconds, the watermark strength of the displayed, cycled images is increased to a relatively stronger level, since the display will be pointed at the optical reader, away from human observation.


Instead of using predetermined time periods, the embedding strength may depend on device sensor feedback. For example, after initiating display of imagery, the smartphone may use gyroscope information to make embedding strength decisions. For example, after a first movement (corresponding to positioning the display toward an optical reader), the embedding strength may be increased, and after one or more subsequent movement detections, the embedding strength may be decreased (e.g., corresponding to movement away from the camera). Of course, such gyroscope movements can be analyzed to identify user tendencies, and the embedder can be trained to recognize such movements to optimize watermark embedding strength.
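One possible heuristic is sketched below; the specific thresholds and strength values are assumptions for illustration, not values taken from this disclosure:

```python
def select_embedding_strength(elapsed_s, movement_count, low=0.4, high=0.9):
    """Pick a relative watermark embedding strength from timing and gyroscope-style feedback.

    Lower strength while the display is presumed to be viewed by the user; higher strength
    once the phone appears to have been turned toward an optical reader.
    """
    if movement_count == 0 and elapsed_s < 3.0:   # not yet presented to the reader
        return low
    if movement_count == 1:                       # first movement: assumed aimed at the reader
        return high
    return low                                    # later movements: assumed pulled back toward the user
```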


Some of the above embodiments discuss a virtual wallet operating on a smartphone to cause display of a relatively large payload. Our inventive techniques can be applied in a reverse manner, e.g., to a point of sale display which displays cycling imagery to a user's smartphone. A payload can be communicated from the point of sale to a smartphone's virtual wallet. This may be used as a confirmation of a transaction, or it may serve as a transaction identifier, which can be communicated by the smartphone to a 3rd party (e.g., a credit card vendor, a PayPal-like service, etc.). The transaction identifier can be supplemented with account information by the virtual wallet to identify an account associated with the virtual wallet. The 3rd party uses the transaction identifier and the account information to facilitate payment to the vendor. A confirmation of payment can be transmitted to the vendor (e.g., from information included or associated with the transaction identifier) and/or virtual wallet. Some users may prefer this system since financial information is not transmitted from the user to the retailer, but rather from the retailer to the user, and then to the 3rd party.


In another embodiment, we use high frequency audio to convey a relatively large payload for use in a virtual wallet transaction. For example, a smartphone includes a transmitter (e.g., a speaker). The transmitter emits high frequency audio to a receiver. The high frequency audio includes a relatively large payload. At a point of sale check out, the smartphone is positioned in proximity to a receiver at the point of sale location. High frequency audio is emitted from the smartphone, which is received by the point of sale receiver. The payload is decoded from the received audio, and the transaction proceeds. The high frequency audio encoding and transmission techniques disclosed in Digimarc's application Ser. No. 14/054,492, filed Oct. 15, 2013, and issued as U.S. Pat. No. 9,305,559, which is hereby incorporated herein by reference in its entirety, can be used in these virtual wallet applications.


A high frequency audio (or audible audio) channel can be used to establish bi-directional communication between a virtual wallet and a point of sale location. A financial transaction can proceed once communication is established. For example, a virtual wallet can cause its host smartphone to transmit a known high frequency audio message, e.g., a message known to both the virtual wallet and to a receiver. The receiver determines signal errors or a measure of signal error and communicates such back to the smartphone. The return communication can use Bluetooth, high frequency audio, radio frequency or audible range audio, or the like. The virtual wallet uses this return error signal to adjust (e.g., increase or decrease), if needed, the level of error correction and/or signal strength for its next transmitted audio signal.
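A sketch of that feedback loop follows; the dictionary keys, thresholds, and step sizes are all assumptions, and the point is only that the return error signal drives the adjustment:

```python
def adjust_audio_parameters(reported_error_rate, params):
    """Adapt error-correction overhead and signal level from the receiver's error report."""
    if reported_error_rate > 0.10:                        # noisy channel: add redundancy and level
        params["fec_rate"] = max(0.25, params["fec_rate"] - 0.10)   # lower code rate = more FEC
        params["gain"] = min(1.00, params["gain"] + 0.10)
    elif reported_error_rate < 0.01:                      # clean channel: trim the overhead
        params["fec_rate"] = min(0.90, params["fec_rate"] + 0.10)
        params["gain"] = max(0.30, params["gain"] - 0.05)
    return params

# e.g., params = {"fec_rate": 0.5, "gain": 0.6} before the next high frequency audio transmission
```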


In another case, a point of sale receiver expects both captured audio and captured imagery in order to process or complete a financial transaction. A virtual wallet can cause imagery to be cycled on its display, as discussed above. A high frequency audio signal is generated to cooperate with the presented imagery. For example, the presented imagery may include financial credit card or account information, and the transmitted high frequency audio signal may include an associated PIN for the financial information, an encryption key to decrypt the imagery payload, or an expected hash of the imagery payload. The receiver may request—through a high frequency audio channel—that the virtual wallet transmit the corresponding audio message once the imagery is successfully received. Of course, a transmitted audio signal (including, e.g., the PIN, hash or key) may prompt a receiver to enable its camera to read a to-be-presented display screen.


Visual Interfaces for Wearable Computers


The visual constructs provided above can also be utilized both in a wristwatch form-factor, and for users wearing glasses.


The paradigm of card selection can leverage the inherent properties of a watch form factor to facilitate selection. One implementation may consist of the user running a finger around the bezel (the device is presumed to be circular for this example) to effect scrolling through the stack of cards. Simple motion of the watch may facilitate the same navigation by tilting the watch (e.g., rotation at the wrist). Payment would be facilitated the same way, by showing the wearer's wrist watch to the cooperating device.


For users of headworn devices, such as the Google Glass product, the selection and validation process may occur through gaze tracking, blinking or any other known UI construct. Associated with the glasses would be a secondary digital device containing a display (a smartphone, a digitally connected watch such as the Pebble, or possibly a media player). The selected card would be rendered on the secondary device to complete the transaction as before. Alternatively, a portable user device can project a display, for sensing by the POS system.


Visual Tallies



FIG. 11 shows an arrangement in which a checkout tally is presented on the user's smartphone as items are identified and priced by a point of sale terminal. In this embodiment, a user “signs” the touchscreen with a finger to signify approval.


A signature is technically not required for most payment card transactions, but there are advantages to obtaining a user's signature approving a charge. For example, some transaction networks charge lower fees if the user's express affirmance is collected. A finger-on-touchscreen signature lacks the fidelity of a pen-on-paper signature, but can still be distinctive. As part of a process of registering cards in a virtual wallet, a user's touchscreen signature can be collected. This signature, or its characterizing features, can be sent to one or more of the parties in the transaction authorization process shown in FIG. 5, who can use this initial signature data as reference information against which to judge signatures collected in subsequent transactions.


Alternatives to signatures can include finger or facial biometrics, such as a thumbprint on the user's screen or capture of the face using camera functions, or a voiceprint, etc.


In the prior art, POS receipts detail items purchased in the order they are presented at checkout—which is perhaps the least useful order. An excerpt from such a receipt is shown in FIG. 12A. In accordance with a further aspect of the present technology, user preference information is stored in the phone and identifies the order in which items should be listed for that user.



FIG. 12B shows an alphabetical listing—permitting the user to quickly identify an item in the list. FIG. 12C shows items listed by price—with the most expensive items topping the list, so that the user can quickly see where most of the money is being spent.



FIG. 12D breaks down the purchased items by reference to stored list data. This list can be a listing of target foods that the user wants to include in a diet (e.g., foods in the Mediterranean diet), or it can be a shopping list that identifies items the user intended to purchase. The first part of the FIG. 12D tally identifies items that are purchased from the list. The second part of the tally identifies items on the list that were not purchased. (Some stores may provide “runners” who go out to the shelves to fetch an item forgotten by the shopper, so that it can be added to the purchased items before leaving the store.) The third part of the FIG. 12D tally identifies items that were purchased but not on the list (e.g., impulse purchases). Breakdown of purchased items in this fashion may help the user reduce impulse purchases.


Image-Based Authentication


An additional layer of security in mobile payment systems can make use of imagery, e.g., captured by the smartphone.



FIGS. 13A-13C illustrate one such arrangement, used to further secure an American Express card transaction. The detailed arrangement is akin to the SiteKey system, marketed by RSA Data Security.


In particular, after the user selects the American Express virtual card from the smartphone wallet, the phone sends related data to a cooperating system (which may be in data communication with American Express or RSA). Once the user/device/card is identified by such sent data, the cooperating system provides a challenge corresponding to that user/device/card for presentation on the phone screen. This challenge includes an image and a SiteKey phrase. In FIG. 13A the image is an excerpt of a quilt image, and the SiteKey is the name MaryAnn. Unlike the SiteKey system, however, the image is drawn from the user's own photo collection, stored on the smartphone that is now engaged in the authentication process. (In the present case, the user may have snapped a picture of the quilt while visiting a gift shop on vacation.) User-selection of one of the user's own images enables the user to select a SiteKey phrase that has some semantic relationship to the image (e.g., the user may have been with a friend MaryAnn when visiting the shop where the quilt was photographed).


The user verifies that the quilt image and the SiteKey word are as expected (to protect against phishing), and then is prompted to enter a Descriptor corresponding to the image. In the present case the Descriptor is the word Napa. (Again, this word may be semantically related to the displayed image and/or the SiteKey. For example, it may have been during a vacation trip to Napa, Calif., that the user and MaryAnn visited the shop where the quilt was photographed.)


A cryptographic hash of the user-entered Descriptor is computed by the smartphone, and transmitted to the cooperating system for matching against reference Descriptor data earlier stored for that user's American Express account. If they match, a message is sent to the smartphone, causing it next to solicit the user's signature, as shown in FIG. 13C. (As in FIG. 11, the signature screen may also include a tally of the items being purchased, or other transaction summary.) After entry of the user's signature or other biometric indicia (and, optionally, checking of signature features against stored data), the transaction proceeds. In addition, or alternatively, the user's image or a user selected image may appear on the merchant's terminal screen permitting a challenge response verification of identity by the store clerk. A facial image can be manually checked and/or compared using facial biometrics algorithms.
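A minimal sketch of the Descriptor check follows, assuming SHA-256 and a per-account salt; both are our assumptions, since the disclosure only specifies that a cryptographic hash is computed on the phone and matched remotely:

```python
import hashlib
import hmac

def descriptor_digest(descriptor: str, salt: bytes) -> bytes:
    """Hash the user-entered Descriptor on the phone so the clear text never leaves the device."""
    return hashlib.sha256(salt + descriptor.strip().lower().encode("utf-8")).digest()

def descriptor_matches(candidate: str, salt: bytes, reference_digest: bytes) -> bool:
    """Cooperating-system comparison against the reference digest stored at enrollment."""
    return hmac.compare_digest(descriptor_digest(candidate, salt), reference_digest)
```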


Another challenge-response security system employs information harvested from one or more social network accounts of the user, rather than from the phone's image collection. For example, a user can be quizzed to name social network friends—information that may be protected from public inspection, but which was used in an enrollment phase. At both the enrollment phase and in later use, the actual friends' names are not sent from the phone. Instead, hashed data is used to permit the remote system to determine whether a user response (which may be selected from among several dummy data, as above) is a correct one.


Still other information that can be used in challenge-response checks is detailed in published application 20120123959, which is hereby incorporated herein by reference in its entirety.



FIGS. 14 and 15 show a different authentication procedure. In this arrangement a challenge image 141 is presented, and the user is instructed to tap one of plural candidate images to identify one that is related to the challenge image. The correct, corresponding, image (142a in this case) is selected from the user's own collection of smartphone pictures (e.g., in the phone's Camera Roll data structure), as is the challenge image 141. If the user does not pick the correct candidate image from the presented array of images, the transaction is refused.



FIG. 15 details a preceding, enrollment, phase of operation, in which images are initially selected. The user is instructed to pick one image from among those stored on the phone. This user-picked image 141 is used as the reference image, and a copy of this image is sent to a cooperating system (e.g., at a bank or RSA Security). The user is next instructed to pick several other images that are related to the reference image in some fashion. (For example, all of the picked images may have been captured during a particular vacation trip.) These latter images are not sent from the phone, but instead derivative data is sent, from which these pictures cannot be viewed.


In the illustrated example, the user selects images taken during the vacation to Napa. An image of the quilt, photographed in the gift shop, is selected by the user as the reference image 141. This picture is a good choice because it does not reveal private information of the user (e.g., it does not depict any family members, and it does not reveal any location information that might be sensitive), so the user is comfortable sharing the image with an authentication service. The user then picks several other images taken during the same trip for use as related, matching images. In FIG. 15, the user-picked related images are indicated by a bold border. One shows two figures walking along a railroad track. Another shows a palm tree in front of a house. Another shows plates of food on a restaurant table. Another shows red tomatoes arrayed along a counter. All are related by common geography and time interval (i.e., a vacation to Napa).


For the user-picked related images, no copies are sent from the phone. Instead, software in the phone derives image feature information. This image feature information may comprise, e.g., an image hash, or fingerprint, or color or texture or feature histograms, or information about dominant shapes and edges (e.g., content-based image descriptors of the sort commonly used by content-based image retrieval (CBIR) systems), etc. This derived information is sent from the phone for storage at the authentication service, together with identifying information by which each such related image can be located on the user's smartphone. (E.g., file name, image date/time, check-sum, and/or image file size.)
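As one hedged example of such derived (non-reversible) information, a coarse color histogram plus simple identifying metadata could be computed on the phone; the particular descriptor below is illustrative only, not the method prescribed by this disclosure:

```python
import numpy as np

def derived_image_info(pixels: np.ndarray, file_name: str) -> dict:
    """Derive feature data for a related image; the picture cannot be reconstructed from this.

    pixels: HxWx3 uint8 array of the matching image.
    """
    hist, _ = np.histogramdd(
        pixels.reshape(-1, 3).astype(float),
        bins=(8, 8, 8),
        range=((0, 256), (0, 256), (0, 256)),
    )
    hist = (hist / hist.sum()).flatten()          # normalized 512-bin color histogram
    return {
        "file_name": file_name,                   # identifying info to locate the image later
        "byte_size": int(pixels.nbytes),
        "color_hist": hist.tolist(),
    }
```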


Returning to FIG. 14, when authentication is required (e.g., after a user/device/card has been identified for a transaction), the remote system sends the reference image 141 for display on the smartphone. The remote system also sends identifying information for one of the several related images identified by the user (e.g., for the picture of the tomatoes on the counter). The remote system also sends several dummy images.


The smartphone uses the identifying information (e.g., the image name) to search for the corresponding related image in the smartphone memory. The phone next presents this image (142a), together with the dummy images received from the authentication service (142b, 142c, 142d), on the phone display. The user is then invited to pick one of the plural candidate images 142 that is related to the reference picture 141.


The user's choice is compared against the correct answer. For example, the remote system may have instructed the smartphone to present the matching image (recalled from the phone's memory, based on the identification data) in the upper left position of the array of pictures. The phone then reports to the remote system the location, in the array of candidate pictures, touched by the user. If that touch is not in the upper left position, then the remote system judges the authentication test as failed.


In other arrangements, the location of the user's tap is not reported to the remote system. Instead, the smartphone computes derived information from the image tapped by the user, and this information is sent to the remote system. The remote system compares this information with the derived information earlier received for the matching (tomatoes) image. If they do not correspond, the test is failed.


In still other arrangements, the pass/fail decision is made by the smartphone, based on its knowledge of placement of the matching image.


Although not evident from the black and white reproduction of FIG. 14, each of the candidate images 142a-142d is similar in color and structure. In particular, each of these images has a large area of red that passes through the center of the frame, angling up from the lower left. (That is, the roadster car is red, the notebook is red, and the ribbon bow is red.) This is possible because, in the illustrated embodiment, the derived information sent from the phone during the enrollment phase included color and shape parameters that characterized the matching images selected by the user. In selecting dummy images, the remote system searched for other images with similar color/shape characteristics.


This feature is important when the reference image and the matching images are thematically related. For example, if the user-selected reference and matching photos are from a camping trip and all show wilderness scenes, then a matching photo of a mountain taken by the user might be paired with dummy photos of mountains located by CBIR techniques. By such arrangement, the thematic relationship between a matching image and the reference image does not give a clue as to which of the candidate images 142 is the correct selection.


In the FIG. 14 example, the tomatoes photo was used as the matching image. The next time authentication is required, another one of the matching images earlier identified by the user can be used (e.g., the photo of a palm tree in front of a house).


It will be recognized that only the true user will be able to discern a relationship between the reference image 141, and one of the displayed candidate images 142, because only the true user knows the context that they share. Moreover, this authentication technique relies on images captured by the user, rather than “canned” imagery, as employed in the prior art.


Card Standards, Etc.


Conventional magstripe credit cards conform to ISO standards 7810, 7811 and 7813, which define the physical and data standards for such cards. Typically, the data on the magstripe includes an account number, an owner name, a country code, and a card expiration date.


“Chip cards” include a chip—typically including a processor and a memory. The memory stores the just-listed information, but in encrypted form. The card employs a variety of common digital security techniques to deter attack, including encryption, challenge-response protocols, digital signatures, etc. Entry of a user's PIN is required for most transactions. Again, an ISO standard (7816) particularly defines the card requirements, and a widely used implementation follows the EMV (EuroPay/MasterCard/Visa) standard. (An updated version of EMV, termed EMV Lite, is being promoted by Morpho Cards, GmbH.)


Artisans commonly speak of “static” and “dynamic” authentication methods.


“Static” authentication methods build on those known from magnetic stripe cards. In static authentication, information is conveyed uni-directionally, i.e., from the card, possibly through an intermediary (e.g., a POS system) to a testing system (e.g., a card issuer). Static techniques can employ digital signatures, public-private keys, etc. For example, the user's name may be hashed, digitally signed with a private key associated with the system (or issuer), and the results stored in a chip card for transmission to the POS system. The POS system receives this encrypted data from the card, together with the user name (in the clear). It applies the corresponding public key to decrypt the former, and compares this with a hash of the latter.
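A sketch of that POS-side check using the pyca/cryptography package appears below; RSA with PKCS#1 v1.5 padding and SHA-256 are our assumptions, as the disclosure only calls for a hash, a digital signature, and verification with the corresponding public key:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify_static_credential(issuer_public_key, user_name: bytes, signed_digest: bytes) -> bool:
    """Verify that the signed hash conveyed from the card/phone matches the clear-text user name."""
    try:
        issuer_public_key.verify(signed_digest, user_name,
                                 padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```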


The present technology can be employed in systems using such known static authentication, without any system alterations. Moreover, the present technology affords protection against replay attacks (e.g., through context-based techniques)—a liability to which conventional static authentication techniques are susceptible.


The more sophisticated authentication technique is so-called “dynamic authentication.” This involves a back-and-forth between the payment credential and the testing system, and may comprise challenge-response methods.


With chip cards, the card-side of the transaction is conducted by the chip, for which the POS terminal commonly has a two-way dedicated interface. But the smartphone screen used in embodiments of the present technology—which optically provides information to the cooperating system—cannot reciprocate and receive information from that system.


Nonetheless, the present technology is also suitable for use with dynamic authentication methods. The communication back from the system to the smartphone can be via signaling channels such as radio (NFC communication, WiFi, Zigbee, cellular) or audio. Optical signaling can also be employed, e.g., a POS terminal can be equipped with an LED of a known spectral characteristic, which it controllably operates to convey data to the phone, which may be positioned (e.g., lying on a checkout conveyor) so that the phone camera receives optical signaling from this LED.


Many chip-card dynamic authentication methods rely on key data stored securely in the chip. The same secure methods can be implemented in the smartphone. (Many Android phones already include this, to support the Google Wallet and similar technologies.) For example, the RSA secure architecture for SIM (microSD) cards or NFC chips, employing a tamper resistant Secure Element (SE) and a single wire protocol (SWP), can be used. The keys and other data stored in such arrangement can be accessed only via encrypted protocols.


In one particular implementation, the keys are accessed from the SE in the smartphone, and employed in a static authentication transaction (e.g., with information optically conveyed from the smartphone screen). The remote system may respond to the phone (e.g., by radio) with a request to engage in a dynamic authentication, in which case the smartphone processor (or the SE) can respond in the required back-and-forth manner.


In other arrangements, the key data and other secure information is stored in conventional smartphone memory—encrypted by the user's private key. A cloud resource (e.g., the card issuer) has the user's public key, permitting it to access this secure information. The POS system can delegate the parts of the transaction requiring this information to the issuing bank, based on bank-identifying information stored in the clear in the smartphone and provided to the POS system.


As noted, while chip cards are appealing in some aspects, they are disadvantageous because they often require merchants to purchase specialized reader terminals that have the physical capability to probe the small electrical contacts on the face of such cards. Moreover, from a user standpoint, the card is typically stored in an insecure container—a wallet. In the event a card is stolen, the only remaining security is a PIN number.


As is evident from the foregoing, embodiments of the present technology can employ the standards established for chip card systems and gain those associated benefits, while providing additional advantages such as cost savings (no specialized reader infrastructure required) and added security (the smartphone can provide many layers of security in addition to a PIN to address theft or loss of the phone).


The artisan implementing the present technology is presumed to be familiar with magstripe and chip card systems; the foregoing is just a brief review. Additional information is found, e.g., in the text by Rankl et al, Smart Card Handbook, 4th Ed., Wiley, 2010, and in the white paper, “Card Payments Roadmap in the United States: How Will EMV Impact the Future Payments Infrastructure?,” Smart Card Alliance, Publication PC-12001, January, 2013.


Notifications and Transaction Receipts, etc.:


A virtual wallet can facilitate receipt transmission and management. As part of a transaction checkout, the virtual wallet may request a receipt to be added to or accessible by the wallet—perhaps stored locally on the user device and/or in the cloud associated with a user or device account. For example, the virtual wallet communicates an account identifier, device ID or address to a participating terminal or vendor. In response, the terminal or vendor forwards the transaction receipt to the account, device or address. The user may be prompted through a UI provided by the virtual wallet to add searchable metadata about the transaction or receipt (e.g., warranty information). In other cases, searchable metadata is collected by the virtual wallet itself in addition to or without user intervention. Searchable metadata may be collected, e.g., by accessing and using transaction time, retailer name and location, items purchased, retention information, OCR-produced data if the receipt is in image form or .pdf format, etc. In some cases the receipt can be provided by the retailer with searchable text (e.g., in an XML file), e.g., including items purchased, return information, warranty information, store location and hours, price, etc. Searchable text can be indexed to facilitate rapid future searching. The receipt is accessible through the virtual wallet, e.g., by a user selecting a UI-provided icon next to a corresponding transaction.


The virtual wallet preferably provides a UI through which receipts and other transaction information may be searched. The user inputs information, e.g., types information or selects categories, products, retailers from scrollable lists, via the search UI. After a search is launched, corresponding receipt search results are represented on the display for review by the user.


We mentioned above that receipts can be marked for retention. This is helpful, e.g., for items under warranty. Retention information can be used by the wallet to help expire receipts and other transaction information. For example, a user purchases a TV at Wal-Mart and a receipt is delivered for access by the virtual wallet. (In some cases the virtual wallet may receive a notification that a receipt is available for retrieval, and access a remote location to obtain receipt information.) Metadata is entered or accessed for the receipt and retention data is indexed or stored in an expiration table or calendar. The virtual wallet uses the expiration table or calendar to expire receipts no longer deemed important or needed. The term “expire” in this context may include deleting the receipt, deleting metadata associated with the receipt, and/or updating any remote storage of such.
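A sketch of the retention bookkeeping, with a receipt modeled as a simple dict holding a 'retain_until' date; the field names are assumptions:

```python
import datetime as dt

def expire_receipts(receipts, today=None):
    """Split receipts into those still under retention (e.g., warranty) and those to expire."""
    today = today or dt.date.today()
    kept = [r for r in receipts if r["retain_until"] >= today]
    expired = [r for r in receipts if r["retain_until"] < today]
    return kept, expired

# e.g., a hypothetical entry: {"retailer": "Wal-Mart", "item": "TV", "retain_until": dt.date(2026, 2, 15)}
```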


Retention data can be augmented with any auction related information. For example, we mentioned above that a certain financial bidder may offer an extended warranty if a transaction is made using their account or service. Such a warranty extension may be added to the retention information so a receipt is not prematurely expired.


Receipts and the metadata associated with such can be updated to reflect returns or refunds.


The searchable metadata may also include notification information. For example, a user may be on the fence about whether to keep the latest electronic gizmo purchased on a whim last week. In this case the user has 15 days (or another period, according to the store's return policy) to return the item. Notification information can be stored and calendared for use by the virtual wallet (or a cooperating module) to send the user a reminder, e.g., via email, SMS or a display notification pop-up via a UI, so that the 15 days doesn't come and go without notice.


Notifications need not be limited to receipts and warranty information. The virtual wallet may manage and provide many different types of notifications. For example, bill-payment due dates, account balances, credit limits, offers, promotions and advertising are just a few examples of such. Push-messages may be generated for urgent items, in addition to some type of visual cue or icon within the virtual wallet that indicates that the user's attention is needed. For example, a particular card or account in FIG. 3A may have a notification associated with it. (E.g., the user may have forgotten to authorize a monthly payment by its due date.) The depicted card may jiggle, glow, shimmer, flash, strobe and/or break into an animated dance when the virtual wallet is accessed. This type of notification will visually alert the user to investigate the card further, and upon accessing such (e.g., by double tapping the animated card) the notification can be further displayed.


Medical and insurance information may also be stored and managed in a virtual wallet. In addition to a health insurance card, users have car insurance card(s), Medicare card(s), an Intraocular Lens card, and a Vaccess Port card, etc. Unlike bank cards, some of this info is preferably accessible without unlocking a mobile device that is hosting the virtual wallet, e.g., because if a user needs emergency medical care, they may not be conscious and able to unlock the device. Access to such emergency medical information may be accomplished by adding an Emergency Medical button to a device's unlock screen, similar to the Emergency Call button. A user can determine which information they want to provide access to via the Emergency Medical button through an operating system's settings screen or an access user interface associated with the virtual wallet. In another embodiment, emergency responders have an RFID card, NFC device or a digitally watermarked card that can be sensed by the mobile device to trigger unlocking the screen of a mobile device. In other cases, desired medical or insurance information is available on an initial splash screen, even if the phone is locked, and without needing to access an Emergency Medical button.


Of course, some or all the information hosted by the virtual wallet can be stored in the cloud or at a remote location so that it is accessible from various user devices programmed with the virtual wallet (e.g., a virtual wallet app) or to cooperate with the virtual wallet and through which a user's identity is authenticated.


Game Consoles and Physical Sales of Virtual Items:


Another device on which a virtual wallet can operate is a game console. Examples of gaming platforms include Microsoft's Xbox 360, Sony's PlayStation, Nintendo's DS and Wii Kyko PlayCube, OnLive's MicroConsole (a cloud-based gaming console), etc.


One advantage of coupling a virtual wallet to a game console is the ability to monetize and transfer virtual items. Consider the following: after a long night of gaming a user finally wins a rare virtual prize, e.g., a unique power, token, provisions, code, level access, spell or weapon. The virtual prize can be stored or accessed within the user's virtual wallet. For example, the prize may be represented by an XML file, an access code, a cryptographic code, software code, or a pointer to such.


The virtual wallet can facilitate the on-line sale or transfer (e.g., via eBay) of the virtual prize for real money or credit. The wallet may include a virtual prize directory, folder or screen. An eBay (or sell) icon may be displayed next to the virtual prize to allow a user to initiate a transfer, auction or sale of the virtual prize. Selecting the icon initiates an offer to sell, and prompts the virtual wallet to manage the interaction with eBay, e.g., by populating required For Sale fields gathered from the virtual prize's metadata, or prompting the user to insert additional information. (The virtual wallet can access an eBay API or mobile interface to seamlessly transfer such data.)


Upon a successful sale, the virtual wallet can be used to transfer the virtual prize to the winning purchaser using the techniques (e.g., purchase) discussed in this document.


Anonymous trust; Pick-Pocketing; and Security:


A virtual wallet may also provide an indication of trust. A user may accumulate different trust indicators as they forage online, participate in transactions and interact in society. For example, a user may receive feedback or peer reviews after they participate in an online transaction, auction or in a retail store. Another trust indicator may be a verification of age, residency and/or address. Still another trust indicator may be a criminal background check performed by a trusted third party. The virtual wallet may aggregate such indicators from a plurality of different sources to determine a composite trust score for the user. This trust score can be provided to potential bidders in a financial auction as a factor in deciding whether to offer a bid, and what the content of such a bid should be. The trust score can also be provided as the user interacts through social media sites.


In some cases, the trust score is anonymous. That is, it provides information about a user without disclosing the user's identity. A user can then interact online in an anonymous manner but still convey an indication of their trustworthiness, e.g., the virtual wallet can verify to others that a user is not a 53 year old pedophile, while still protecting their anonymity.


To help prevent digital pickpocketing, a virtual wallet may be tethered (e.g., via a cryptographic relationship) to device hardware. For example, a mobile device may include a SIM card identifier, or may include other hardware information, which can be used as a device identifier. A virtual wallet may anchor cards within the wallet to the device identifier(s) and, prior to use of a card—or the wallet itself—check the device identifier(s) reported by the device against the device identifier(s) stored in the virtual wallet. The identifiers should correspond in a predetermined manner (e.g., a cryptographic relationship) before the virtual wallet allows a transaction. This will help prevent a wallet from being copied to a device that is not associated with the user. (Of course, a user may authorize a plurality of different devices to cooperate with their virtual wallet, and store device identifiers for each.)
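One way to realize such a tether is sketched below with an HMAC binding; the HMAC construction and key handling are our assumptions, as the disclosure only requires a predetermined cryptographic relationship between wallet data and device identifier:

```python
import hashlib
import hmac

def card_binding(card_record: bytes, device_id: bytes, wallet_key: bytes) -> bytes:
    """Bind a stored card record to this device's identifier."""
    return hmac.new(wallet_key, device_id + card_record, hashlib.sha256).digest()

def device_authorized(stored_binding: bytes, card_record: bytes, device_id: bytes, wallet_key: bytes) -> bool:
    """Checked before any transaction; fails if the wallet was copied to a different device."""
    return hmac.compare_digest(stored_binding, card_binding(card_record, device_id, wallet_key))
```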


In some cases, a virtual wallet may send out a notification (e.g., to the user, credit reporting agency, or law enforcement) if the virtual wallet detects unauthorized use like use of the wallet on an unauthorized device.


In other cases, the virtual wallet gathers information associated with a user's patterns and purchases. After building a baseline, it can notify a user, financial vendor or others when it detects activity that looks out of character (e.g., suspected as fraud) relative to the baseline. For example, the baseline may reflect a geographic component (e.g., North America) and if spending is detected outside of this component (e.g., in Europe) then a notification can be generated and sent. The baseline may also access or incorporate other information to help guide its decision making. For example, the virtual wallet may access a user's online or locally stored calendar and determine that the user is traveling in Europe on vacation. So the geographical component is expanded during the vacation time period and a notification is not sent when European spending is detected.


Combinations


Some combinations supported by this disclosure include the following. Of course, the following is nowhere near an exhaustive listing, since there are many, many other combinations that will be readily apparent from the above written description.


A1. A method employing a user's portable device, the device including a display, one or more processors and a sensor, the method including acts of:


receiving information from the sensor, the information corresponding to a positioning or relative movement of the portable device;


using the one or more processors, and based at least in part on the information, changing a digital watermark embedding process;


using the one or more processors, embedding a digital watermark in imagery using the changed digital watermark embedding process;


providing the embedded imagery for display.


A2. The method of A1 in which the sensor comprises a gyroscope.


A3. The method of A1 in which said changing a digital watermark embedding process comprises changing a relative embedding strength.


B1. A portable device comprising:


a touch screen display;


a sensor to obtain information corresponding to a positioning or relative movement of the portable device;


memory storing an image; and


one or more processors configured for:

    • changing a digital watermark embedding process based on information obtained by said sensor;
    • embedding a digital watermark in the image using the changed digital watermark embedding process;
    • controlling display of the embedded image on the touch screen display.


B2. The portable device of B1 in which the sensor comprises a gyroscope.


B3. The portable device of B1 in which the changing a digital watermark embedding process comprises changing a relative embedding strength.


C1. A portable device comprising:


a touch screen display;


a microphone for capturing ambient audio;


memory for storing audio identifiers or information obtained from audio identifiers; and


one or more processors configured for:

    • causing the portable device to operate in a background audio collection mode, in which during the mode audio is captured by the microphone without user involvement;
    • processing audio captured in the background audio collection mode to yield one or more audio identifiers;
    • storing the one or more audio identifiers or information obtained from the one or more identifiers in said memory;
    • upon encountering a transmission from a signaling source, determining if the one or more audio identifiers or if the information obtained from the one or more identifiers stored in memory corresponds to the transmission;
    • taking an action if there is a correspondence.


C2. The portable device of C1 in which the signaling source comprises an iBeacon or Bluetooth transmitter.


C3. The portable device of C2 in which the information obtained from the one or more audio identifiers comprises a discount code or coupon, and in which the action comprises applying the discount code or coupon to a financial transaction involving the portable device.


C4. The portable device of C1 in which the processing audio comprises extracting fingerprints from the audio.


C5. The portable device of C1 in which the processing audio comprises decoding digital watermarking hidden in the audio.


C6. The portable device of C1 in which the action comprises prompting the user via a message displayed on the touch screen display.


D1. A system comprising:


a portable device comprising: one or more processors, a high frequency audio transmitter and receiver, and a virtual wallet stored in memory, the virtual wallet comprising financial information;


a retail station comprising: one or more processors, a high frequency audio transmitter and receiver;


in which the virtual wallet configures the one or more processors of the portable device to transmit a known high frequency audio message, the message being known to both the virtual wallet and to the retail station;


in which the one or more processors of the retail station are configured to determine errors associated with the known high frequency audio message and cause an error message to be communicated to the virtual wallet;


and in which the virtual wallet, upon receipt of the error message, configures said one or more processors to transmit the financial information with a high frequency audio signal adapted according to the error message.


Concluding Remarks


From the above description, it will be seen that embodiments of the present technology preserve the familiar ergonomics of credit card usage, while streamlining user checkout. No longer must a user interact with an unfamiliar keypad at the grocery checkout to pay with a credit card (What button on this terminal do I press? Enter? Done? The unlabeled green one?). No longer must the user key-in a phone number on such a terminal to gain loyalty shopper benefits. Additional advantages accrue to the merchant: no investment is required for specialized hardware that has utility only for payment processing. (Now a camera, which can be used for product identification and other tasks, can be re-purposed for this additional use.) And both parties benefit by the reduction in fraud afforded by the various additional security improvements of the detailed embodiments.


Having described and illustrated the principles of our inventive work with reference to illustrative examples, it will be recognized that the technology is not so limited.


For example, while the specification focused on a smartphone exchanging data with a cooperating system using optical techniques, other communication arrangements can be used. For example, radio signals (e.g., Bluetooth, Zigbee, etc.) may be exchanged between the phone and a POS system. Relatedly, NFC and RFID techniques can also be used.


In some embodiments, audio can also be used. For example, card and authentication data can be modulated on an ultrasonic carrier, and transmitted from the phone's speaker to a microphone connected to the POS terminal. The POS terminal can amplify and rectify the sensed ultrasonic signal to provide the corresponding digital data stream. Alternatively, an audible burst of tones within the human hearing range can be employed similarly.


In another audio embodiment, the data is conveyed as a watermark payload, steganographically conveyed in cover audio. Different items of cover audio can be used to convey different information. For example, if the user selects a VISA card credential, a clip of Beatles music, or a recording of a train whistle, can serve as the host audio that conveys the associated authentication/card information as a watermark payload. If the user selects a MasterCard credential, a BeeGees clip, or a recording of bird calls, can serve as the host audio. The user can select, or record, the different desired items of cover audio (e.g., identifying songs in the user's iTunes music library, or recording a spoken sentence or two), and can associate different payment credentials with different of these audio items. The user can thereby conduct an auditory check that the correct payment credential has been selected. (If the user routinely uses a Visa card at Safeway—signaled by the Beatles song clip, and one day he is surprised to hear the BeeGees song clip playing during his Safeway checkout, then he is alerted that something is amiss.)


While watermarking and barcodes have been expressly referenced, other optical communications techniques can also be used. One approach simply uses pattern recognition (e.g., image fingerprinting or OCR) to recognize a payment card by its presented artwork and, in some implementations, to read the user name, account number, expiration date, etc., from the artwork.


While the detailed payment arrangements provide card data (e.g., account name and number), from the smartphone to the cooperating system (typically in encrypted form), in other embodiments, such information is not conveyed from the phone. Instead, the phone provides a data token, such as a digital identifier, which serves to identify corresponding wallet card data stored in the cloud. (A related approach is used, e.g., by Braintree's Venmo payment system, which “vaults” the credit card details in a central repository.) Known data security techniques are used to protect the exchange of information from the cloud to the retailer's POS system (or to whatever of the parties in the FIG. 5 transaction system first receives the true card details). The token is useless if intercepted from the phone, because its use cannot be authorized except by using techniques such as disclosed above (e.g., context-based authentication data, digital signatures, etc.).


Token-based systems make it easy for a user to handle loss or theft of the smartphone. With a single authenticated communication to the credentials vault, the user can disable all further use of the payment cards from the missing phone. (The authenticated user can similarly revoke the public/private key pair associated with the user through the phone's hardware ID, if same is used.) After the user has obtained a replacement phone, its hardware ID is communicated to the vault, and is associated with the user's collection of payment cards. (A new public/private key pair can be issued based on the new phone's hardware ID, and registered to the user with the certificate authority.) The vault can download artwork for all of the virtual cards in the user's collection to the new phone. Thereafter, the new phone can continue use of all of the cards as before.


Desirable, in such embodiments, is for the artwork representing the wallet cards to be generic, without any personalized identification (e.g., no name or account number). By such arrangement, no personal information is conveyed in the replacement artwork downloaded to the new phone (nor is any personal information evident to a person who might gain possession of the lost/stolen original phone).


In an alternate implementation the virtual card data stored on the phone is logically-bound to the phone via the device ID, so that such data is not usable except on that phone. If the phone is lost or stolen, the issuer can be notified to revoke that card data and issue replacement data for installation on a replacement phone.


In still another embodiment, card data can be revoked remotely in a lost or stolen phone, using the iCloud Find My iPhone technology popularized by the Apple iPhone for remotely locking or wiping a phone.


While any combination of layered security techniques can be employed, one involves public-private key pairs issued to banks that issue payment cards. Among the information conveyed from the smartphone can be credit card account details (name, number, expiration date, etc.) provided to the phone by the issuing bank at the time of virtual card issuance, already encrypted by the bank's private key. The POS system can have, stored in memory, the public keys for all credit card-issuing banks. The POS system can apply the different public keys until it finds one that decrypts the information conveyed from the smartphone, thereby assuring that the card credentials were issued by the corresponding bank.
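A sketch of that key-trial loop follows; try_decrypt stands in for whatever decryption/verification primitive the deployed scheme uses, and is passed in rather than invented here:

```python
def identify_issuing_bank(encrypted_credentials, issuer_public_keys, try_decrypt):
    """Try each issuing bank's public key until one successfully recovers the card details."""
    for bank, public_key in issuer_public_keys.items():
        card_details = try_decrypt(public_key, encrypted_credentials)
        if card_details is not None:
            return bank, card_details            # credentials were issued by this bank
    return None, None                            # no key matched: reject the transaction
```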


In the detailed arrangements, a POS system makes a context-based assessment using information conveyed from the smartphone (e.g., optically conveyed from its display). In other embodiments, the roles can be reversed. For example, the POS terminal can convey context information to the smartphone, which makes an assessment using context information it determines itself. Some systems use both approaches, with the smartphone testing the POS terminal, and the POS terminal testing the smartphone. Only if both tests conclude satisfactorily does a transaction proceed.


Technology for steganographically encoding (and decoding) watermark data in artwork (and sound) is detailed, e.g., in Digimarc's U.S. Pat. Nos. 6,614,914, 6,590,996, 6,122,403, 20100150434 and 20110274310, as well as in pending application Ser. No. 13/750,752, filed Jan. 1, 2013 (issued as U.S. Pat. No. 9,367,770). Typically, forward error correction is employed to assure robust and accurate optical conveyance of data. Each of the above patent documents is hereby incorporated herein by reference in its entirety.


The steganographic data-carrying payload capacity of low resolution artwork is on the order of 50-100 bits per square inch. With high resolution displays of the sort now proliferating on smartphones (e.g., the Apple Retina display), much higher data densities can reliably be achieved. Still greater data capacity can be provided by encoding static artwork with a steganographic movie of hidden data, e.g., with new information encoded every tenth of a second. Using such techniques, payloads in the thousands of bits can be steganographically conveyed.
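

The erasure-coding (e.g., fountain-coding) approach recited in the claims can be illustrated with a toy example. The following Python sketch is illustrative only; the block size, degree choices and frame counts are arbitrary, not those of any detailed embodiment. It splits a payload into blocks, emits one fountain-coded output per displayed frame (one output per graphic, per the claimed arrangement), and reconstructs the payload at the receiver from whatever subset of frames the camera happens to capture.

import random

BLOCK = 8  # bytes per coded block (an arbitrary, illustrative size)

def split_blocks(payload: bytes):
    """Zero-pad the payload and cut it into fixed-size blocks."""
    padded = payload + b"\x00" * (-len(payload) % BLOCK)
    return [padded[i:i + BLOCK] for i in range(0, len(padded), BLOCK)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_frame(blocks, seed: int):
    """One fountain-coded output for one frame: (block indices, XOR of those blocks)."""
    rng = random.Random(seed)
    degree = rng.randint(1, min(3, len(blocks)))
    idxs = frozenset(rng.sample(range(len(blocks)), degree))
    data = bytes(BLOCK)
    for i in idxs:
        data = xor(data, blocks[i])
    return idxs, data

def decode(outputs, block_count: int):
    """Peeling decoder: repeatedly resolve outputs referencing a single unknown block."""
    known = {}
    pending = [(set(idxs), data) for idxs, data in outputs]
    progress = True
    while progress and len(known) < block_count:
        progress = False
        for idxs, data in pending:
            unknown = [i for i in idxs if i not in known]
            if len(unknown) == 1:
                resolved = data
                for i in idxs:
                    if i in known:
                        resolved = xor(resolved, known[i])
                known[unknown[0]] = resolved
                progress = True
    if len(known) < block_count:
        return None
    return b"".join(known[i] for i in range(block_count))

if __name__ == "__main__":
    payload = b"virtual-card payload; thousands of bits in a practical system"
    blocks = split_blocks(payload)
    frames = [encode_frame(blocks, seed) for seed in range(60)]   # one output per frame
    captured = random.sample(frames, 40)                          # camera misses some frames
    recovered = decode(captured, len(blocks))
    print(recovered is not None and recovered.rstrip(b"\x00") == payload)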


Image fingerprinting techniques are detailed in U.S. Pat. No. 7,020,304 (Digimarc), U.S. Pat. No. 7,486,827 (Seiko-Epson), 20070253594 (Vobile), 20080317278 (Thomson), and 20020044659 (NEC). SIFT-based approaches for image recognition can also be employed (e.g., as detailed in U.S. Pat. No. 6,711,293). SURF and ORB are more recent enhancements to SIFT. Each of the above patent documents is hereby incorporated herein by reference in its entirety.


Applicant's other work that is relevant to the present technology includes that detailed in patent publications 20110212717, 20110161076, 20120284012, 20120046071, 20120214515, and in pending application Ser. Nos. 13/651,182, filed Oct. 12, 2012 (issued as U.S. Pat. No. 8,868,039) and 61/745,501, filed Dec. 21, 2012. Each of the above patent documents is hereby incorporated herein by reference in its entirety.


Related patent publications concerning mobile payment and imaging technologies include 20120303425, 20120024945, 20100082444, 20110119156, 20100125495, 20130085941, 20090276344, U.S. Pat. Nos. 8,423,457, 8,429,407, 8,250,660, 8,224,731, 7,508,954, and 7,191,156. Each of the above patent documents is hereby incorporated herein by reference in its entirety.


Although the detailed description focuses on use of the technology in bricks-and-mortar stores, the technology is equally useful in making purchases online.


For example, a user may employ a smartphone to browse the web site of an online merchant, and add items to a shopping cart. The merchant may have a dedicated app to facilitate such shopping (e.g., as eBay and Amazon do). At the time for payment, the user (or the web site, or the app) invokes the payment module software, causing one of the depicted interfaces (e.g., FIG. 1 or FIG. 10A) to be presented for user selection of the desired payment card. For example, an app may include a graphical control that the user selects to activate the payment module. The user then flips through the available cards and taps one to complete the purchase. The payment module determines the device context from which it was invoked (e.g., the Amazon app, or a Safari browser with a Land's End shopping cart), and establishes a secure session to finalize the payment to the corresponding vendor, with the user-selected card. As in the earlier examples, various digital data protocols can be employed to secure the transaction. (In this case, optical communication with the cooperating system is not used. Instead, data is exchanged with the remote system by digital communications, e.g., using a 4G network to the internet, etc.)


While the present technology's robustness to various potential attacks was noted above, the technology also addresses one of the largest fraud channels in the existing credit card system: so-called “card not present” transactions. Many charge transactions are made without presenting a physical card to a merchant. (Consider all online purchases.) If a person knows a credit card number, together with owner name, expiration date, and code on back, they can make a charge. Much fraud results. By the present technology, in contrast, the smartphone serves as the payment credential—the same credential for both online and bricks-and-mortar merchants. For the former its data is presented digitally, and for the latter its data is presented optically—both with reliable security safeguards. As smartphones become ubiquitous, merchants may simply insist on cash if a smartphone is not used, with negligibly few bona fide sales lost as a consequence.


It will be recognized that the detailed user interfaces are illustrative only. In commercial implementation, it is expected that different forms of interface will be used, based on the demands and constraints of the particular application. (In one alternative form of interface, a virtual representation of a wallet card is dragged and dropped onto an on-screen item that is to be purchased, or is dragged/dropped onto a displayed form that then auto-completes with textual particulars (cardholder name, billing address, card number, etc.) corresponding to the selected card. Such forms of interaction may be particularly favored when using desktop and laptop computers.)


While the focus of the disclosure has been on payment transactions, another use of wallet cards is in identification transactions. There is no reason why driver licenses, passports and other identification documents cannot have virtual counterparts (or replacements) that employ the technology detailed herein. Again, greatly increased security can thereby be achieved.


Such virtual cards are also useful in self-service kiosks and other transactions. An example is checking into a hotel. While hotels routinely employ human staff to check in guests, they do so not solely to be hospitable. Such human interaction also serves a security purpose—providing an exchange by which guests can be informally vetted, e.g., to confirm that their stated identity is bona fide. The present technology allows such vetting to be conducted in a far more rigorous manner. Many weary travelers would be pleased to check in via a kiosk (presenting payment card and loyalty card credentials, and receiving a mag stripe-encoded, or RFID-based, room key in return), especially if it spared them a final delay in the day's travel, waiting for a human receptionist.


Similarly, air travel can be made more secure by authenticating travelers using the technologies detailed herein, rather than relying on document inspection by a bleary-eyed human worker at shift's end. Boarding passes can similarly be made more secure by including such documents in the virtual wallet, and authenticating their validity using the presently-detailed techniques.


In the embodiment detailed in FIGS. 14 and 15, the relationship between the images was due to common geography and a common interval of time (a vacation trip to Napa). However, the relationship can be of other sorts, such as person-centric or thing-centric. For example, the reference image may be a close-up of a pair of boots worn by a friend of the user, and the related candidate images can be face shots of that friend. (Dummy images can be face shots of strangers.)


Embodiments that presented information for user review or challenge on the smartphone screen, and/or solicited user response via the smartphone keypad or touch screen, can instead be practiced otherwise. For example, information can be presented to the user on a different display, such as on a point of sale terminal display. Or it can be posed to the user verbally, as by a checkout clerk. Similarly, the user's response can be entered on a device different than the smartphone (e.g., a keypad at a checkout terminal), or the user may simply voice a responsive answer, for capture by a POS system microphone.


The artisan will recognize that spectrum-based analysis of signals (e.g., audio signals, as used above in one authentication embodiment) can be performed by filter banks, or by transforming the signal into the Fourier domain, where it is characterized by its spectral components.
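

For concreteness, the following Python sketch (illustrative only) computes such a spectral characterization with an FFT, summing magnitude into a small number of bands; a filter-bank implementation would produce comparable per-band energies. The sample rate and band count are arbitrary choices.

import numpy as np

def spectral_signature(samples: np.ndarray, sample_rate: int = 44_100, bands: int = 16):
    """Summarize a signal by the energy of its spectral components in a few bands."""
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    edges = np.linspace(0.0, sample_rate / 2.0, bands + 1)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

if __name__ == "__main__":
    t = np.arange(0, 0.1, 1.0 / 44_100)
    tone = np.sin(2 * np.pi * 1000 * t)       # a 1 kHz test tone
    signature = spectral_signature(tone)
    print(int(np.argmax(signature)))          # energy falls in the lowest band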


As noted, security checks can be posed to the user at various times in the process, e.g., when the phone is awakened, when the payment app starts, when a card is selected, when payment is finalized, etc. The check may seek to authenticate the user, the user device, a computer with which the device is communicating, etc. The check may be required and/or performed by software in the device, or by software in a cooperating system. In addition to PIN and password approaches, these can include checks based on user biometrics, such as voice recognition and fingerprint recognition. In one particular embodiment, whenever the payment module is launched, a screen-side camera on the user's smartphone captures an image of the user's face, and checks its features against stored reference features for the authorized user to confirm the phone is not being used by someone else. Another form of check is the user's custody of a required physical token (e.g., a particular car key), etc.
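

The face-check step can be illustrated as follows. The Python sketch below assumes a feature extractor (not shown) has already reduced the captured camera frame and the enrolled reference image to fixed-length feature vectors; the comparison is a simple cosine-similarity threshold, and the threshold value is purely illustrative.

import numpy as np

MATCH_THRESHOLD = 0.85   # hypothetical; tuned per feature extractor in practice

def is_authorized_user(captured_features: np.ndarray,
                       reference_features: np.ndarray,
                       threshold: float = MATCH_THRESHOLD) -> bool:
    """Pass the check only if the captured face matches the enrolled reference."""
    a = captured_features / np.linalg.norm(captured_features)
    b = reference_features / np.linalg.norm(reference_features)
    return float(np.dot(a, b)) >= threshold

if __name__ == "__main__":
    enrolled = np.random.default_rng(0).normal(size=128)                 # enrolled features
    same_user = enrolled + np.random.default_rng(1).normal(scale=0.05, size=128)
    stranger = np.random.default_rng(2).normal(size=128)
    print(is_authorized_user(same_user, enrolled))    # expected True
    print(is_authorized_user(stranger, enrolled))     # expected False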


Location information (e.g., GPS, cell tower triangulation, etc.) can also be utilized to confirm placement of the associated mobile device within proximity of the cooperating device. High confidence in location can be achieved by relying on network-provided location mechanisms from companies such as Locaid, which are not susceptible to application hacking on the mobile device (enabled by unlocking the device or otherwise).
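

A minimal sketch of such a proximity check follows; it is illustrative only, with the coordinates and 100-meter radius being arbitrary. The phone's network-provided fix is compared against the known coordinates of the cooperating terminal using a great-circle distance.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in meters."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def phone_is_near_terminal(phone_fix, terminal_location, max_distance_m=100.0):
    """Require the phone's network-provided fix to fall within the allowed radius."""
    return haversine_m(*phone_fix, *terminal_location) <= max_distance_m

if __name__ == "__main__":
    pos_terminal = (45.5231, -122.6765)                                  # hypothetical store
    print(phone_is_near_terminal((45.5232, -122.6764), pos_terminal))    # nearby: True
    print(phone_is_near_terminal((45.4900, -122.6000), pos_terminal))    # kilometers away: False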


If a smartphone transaction fails, e.g., because the context information provided from the smartphone to the cooperating system does not match what is expected, or because the user fails multiple consecutive attempts to provide a proper PIN code or pass another security check, a report of the failed transaction can be sent to the authorized user or other recipient. Such a report, e.g., by email or telephone, can include the location of the phone when the transaction failed, as determined by a location-sensing module in the phone (e.g., a GPS system).


Although the focus of this disclosure has been on arrangements that make no use of plastic wallet cards, some of the technology is applicable to such cards.


For example, a plastic chip card can be equipped with one or more MEMS sensors, and these can be used to generate context-dependent session keys, which can then be used in payment transactions in the manners described above in connection with smartphones.
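

One way such context-dependent keying might be sketched (illustrative only, not the detailed protocol) is shown below: sensor readings are coarsely quantized, combined with a time bucket, and keyed with a provisioning secret shared between the card and the verifying system, so both sides derive the same session key only when they agree on the context. The quantization step, secret and context values are hypothetical.

import hashlib
import hmac

def quantize(readings, step=0.5):
    """Coarsely quantize sensor readings so both parties derive identical context."""
    return tuple(round(v / step) * step for v in readings)

def session_key(shared_secret: bytes, accel_xyz, time_bucket: int) -> bytes:
    """Derive a transaction key from quantized MEMS context and a time bucket."""
    context = repr((quantize(accel_xyz), time_bucket)).encode()
    return hmac.new(shared_secret, context, hashlib.sha256).digest()

if __name__ == "__main__":
    secret = b"provisioned-at-issuance"                   # hypothetical shared secret
    context = ((0.02, -0.01, 9.81), 27_531_441)           # sensed acceleration and time bucket
    key_on_card = session_key(secret, *context)
    key_at_verifier = session_key(secret, *context)       # same context conveyed with the transaction
    print(key_on_card == key_at_verifier)                 # True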


Moreover, plastic cards can also be useful in enrolling virtual cards in a smartphone wallet. One particular such technology employs interaction between printable conductive inks (e.g., of metal oxides) and the capacitive touch screens commonly used on smartphones and tablets. As detailed in publications by Printechnologics GmbH and others, when a card printed with a pattern of conductive ink is placed on a touch screen, the touch screen senses the pattern defined by the ink and can respond accordingly. (See, e.g., patent publications WO2012136817, WO2012117046, US20120306813, US20120125993 and US20110253789. Such technology is being commercialized under the Touchcode brand name. Each of the above patent documents is hereby incorporated herein by reference in its entirety.)


Loading the card into the digital wallet can involve placing the mobile wallet software in an appropriate mode (e.g., "ingest"), after optional authentication has been completed. The user then places the physical card on the smartphone display. The use of conductive inks on the card serves to identify the card to the mobile device. The user can then lift the card off the display, leaving a virtualized representation of the card on the display to be subsequently stored in the wallet, with the opportunity to add additional metadata to facilitate transactions or preferences (PINs, priority, etc.).


Such physical item-based interaction with touch screens can also be used, e.g., during a challenge-response stage of a transaction. For example, a cooperating device may issue a challenge through the touch-screen on the mobile device as an alternative to (or in addition to) audio, image, wireless, or other challenge mechanisms. In one particular arrangement, a user places a smartphone screen-down on a reading device (similar to reading a digital boarding-pass at TSA check-points). The cooperating device would have a static or dynamic electrical interconnect that could be used to simulate multi-touch events on the mobile device. By so doing, the mobile device can use the challenge (presented as a touch event) to inform the transaction and respond appropriately to the cooperating device.


While reference has been made to smartphones and POS terminals, it will be recognized that this technology finds utility with all manner of devices—both portable and fixed. Tablets, portable music players, desktop computers, laptop computers, set-top boxes, televisions, wrist- and head-mounted systems and other wearable devices, servers, etc., can all make use of the principles detailed herein. (The term “smartphone” should be construed herein to encompass all such devices, even those that are not telephones.)


Particularly contemplated smartphones include the Apple iPhone 5; smartphones following Google's Android specification (e.g., the Galaxy S III phone, manufactured by Samsung, and the Motorola Droid Razr HD Maxx phone); and Windows 8 mobile phones (e.g., the Nokia Lumia 920). Details of the Apple iPhone, including its touch interface, are provided in Apple's published patent application 20080174570.


Details of the Cover Flow fliptych interface used by Apple are provided in published patent application 20080062141.


The design of smartphones and other computers referenced in this disclosure is familiar to the artisan. In general terms, each includes one or more processors, one or more memories (e.g. RAM), storage (e.g., a disk or flash memory), a user interface (which may include, e.g., a keypad, a TFT LCD or OLED display screen, touch or other gesture sensors, a camera or other optical sensor, a compass sensor, a 3D magnetometer, a 3-axis accelerometer, a 3-axis gyroscope, one or more microphones, etc., together with software instructions for providing a graphical user interface), interconnections between these elements (e.g., buses), and an interface for communicating with other devices (which may be wireless, such as GSM, 3G, 4G, CDMA, WiFi, WiMax, Zigbee or Bluetooth, and/or wired, such as through an Ethernet local area network, a T-1 internet connection, etc.).


The processes and system components detailed in this specification may be implemented as instructions for computing devices, including general purpose processor instructions for a variety of programmable processors, including microprocessors (e.g., the Intel Atom, the ARM A5, the nVidia Tegra 4, and the Qualcomm Snapdragon), graphics processing units (GPUs, such as the nVidia Tegra APX 2600, and the Adreno 330—part of the Qualcomm Snapdragon processor), and digital signal processors (e.g., the Texas Instruments TMS320 series devices and OMAP series devices), etc. These instructions may be implemented as software, firmware, etc. These instructions can also be implemented in various forms of processor circuitry, including programmable logic devices, field programmable gate arrays (e.g., the Xilinx Virtex series devices), field programmable object arrays, and application specific circuits—including digital, analog and mixed analog/digital circuitry. Execution of the instructions can be distributed among processors and/or made parallel across processors within a device or across a network of devices. Processing of content signal data may also be distributed among different processor and memory devices. “Cloud” computing resources can be used as well. References to “processors,” “modules” or “components” should be understood to refer to functionality, rather than requiring a particular form of implementation.


Software instructions for implementing the detailed functionality can be authored by artisans without undue experimentation from the descriptions provided herein, e.g., written in C, C++, Visual Basic, Java, Python, Tcl, Perl, Scheme, Ruby, etc. In addition, libraries that allow mathematical operations to be performed on encrypted data can be utilized to minimize when and how sensitive information is stored in clear-text. Smartphones and other devices according to certain implementations of the present technology can include software modules for performing the different functions and acts.


Known browser software, communications software, and media processing software can be adapted for use in implementing the present technology.


Software and hardware configuration data/instructions are commonly stored as instructions in one or more data structures conveyed by tangible media, such as magnetic or optical discs, memory cards, ROM, etc., which may be accessed across a network. Some embodiments may be implemented as embedded systems—special purpose computer systems in which operating system software and application software are indistinguishable to the user (e.g., as is commonly the case in basic cell phones). The functionality detailed in this specification can be implemented in operating system software, application software and/or as embedded system software.


Different portions of the functionality can be implemented on different devices. For example, in a system in which a smartphone communicates with a computer at a remote location, different tasks can be performed exclusively by one device or the other, or execution can be distributed between the devices. Extraction of fingerprint and watermark data from content is one example of a process that can be distributed in such fashion. Thus, it should be understood that description of an operation as being performed by a particular device (e.g., a smartphone) is not limiting but exemplary; performance of the operation by another device (e.g., a remote server), or shared between devices, is also expressly contemplated.


(In like fashion, description of data being stored on a particular device is also exemplary; data can be stored anywhere: local device, remote device, in the cloud, distributed, etc. Thus, while an earlier embodiment employed user photographs stored in the phone, the detailed methods can similarly make use of user photographs stored in an online/cloud repository.)


Many of the sensors in smartphones are of the MEMS variety (i.e., Microelectromechanical Systems). Most of these involve tiny moving parts. Such components with moving parts may be termed motive-mechanical systems.


This specification details a variety of embodiments. It should be understood that the methods, elements and concepts detailed in connection with one embodiment can be combined with the methods, elements and concepts detailed in connection with other embodiments. While some such arrangements have been particularly described, many have not—due to the large number of permutations and combinations. However, implementation of all such combinations is straightforward to the artisan from the provided teachings.


Elements and teachings within the different embodiments disclosed in the present specification are also meant to be exchanged and combined.


While this disclosure has detailed particular ordering of acts and particular combinations of elements, it will be recognized that other contemplated methods may re-order acts (possibly omitting some and adding others), and other contemplated combinations may omit some elements and add others, etc.


Although disclosed as complete systems, sub-combinations of the detailed arrangements are also separately contemplated (e.g., omitting various of the features of a complete system).


The present specification should be read in the context of the cited references. (The reader is presumed to be familiar with such prior work.) Those references disclose technologies and teachings that the inventors intend be incorporated into embodiments of the present technology, and into which the technologies and teachings detailed herein be incorporated.


While certain aspects of the technology have been described by reference to illustrative methods, it will be recognized that apparatuses configured to perform the acts of such methods are also contemplated as part of applicant's inventive work. Likewise, other aspects have been described by reference to illustrative apparatus, and the methodology performed by such apparatus is likewise within the scope of the present technology. Still further, tangible computer readable media containing instructions for configuring a processor or other programmable system to perform such methods are also expressly contemplated.


To provide a comprehensive disclosure, while complying with the statutory requirement of conciseness, applicant incorporates-by-reference each of the documents referenced herein. (Such materials are incorporated in their entireties, even if cited above in connection with specific of their teachings.)


In view of the wide variety of embodiments to which the principles and features discussed above can be applied, it should be apparent that the detailed embodiments are illustrative only, and should not be taken as limiting the scope of the invention. Rather, we claim as our invention all such modifications as may come within the scope and spirit of the following claims and equivalents thereof.

Claims
  • 1. An apparatus for device to device communication using displayed imagery, said apparatus comprising: a camera for capturing a plurality of image frames, the plurality of image frames representing a plurality of graphics displayed on a display screen of a mobile device, in which each of the graphics comprises an output from an erasure code generator, in which the erasure code generator produces a plurality of outputs corresponding to a payload;means for decoding outputs from the plurality of graphics;means for constructing the payload from decoded outputs; andmeans for carrying out an action based on a constructed payload.
  • 2. The apparatus of claim 1 in which the erasure code generator comprises a fountain code generator.
  • 3. The apparatus of claim 1 in which each of the plurality of graphics comprises an output encoded therein with digital watermarking, and wherein said means for decoding utilizes digital watermark detection to decode the outputs from the plurality of graphics.
  • 4. The apparatus of claim 1 in which only one output of the plurality of outputs is embedded in any one graphic of the plurality of graphics.
  • 5. The apparatus of claim 1 in which the plurality of outputs comprises a subset of a total number of outputs provided by the erasure code generator.
  • 6. The apparatus of claim 1 in which the plurality of image frames representing a plurality of graphics are displayed on the display screen of the mobile device as animation.
  • 7. The apparatus of claim 1 in which the mobile device comprises a wristwatch, a tablet or a smartphone.
  • 8. A system to facilitate device to device communication, comprising: i. A portable device comprising: a touch screen display;an input;memory comprising a payload;means for generating a plurality of outputs corresponding to the payload;means for obtaining a plurality of graphics;means for encoding one of the plurality of outputs in one of the plurality of graphics and proceeding with encoding until each of the plurality of outputs is so encoded, respectively, in a graphic of the plurality of graphics; andmeans for controlling the touch screen display to display encoded graphics; andii. A capture device comprising: a camera for capturing imagery representing displayed encoded graphics; means for decoding outputs from the imagery representing displayed encoded graphics;means for constructing the payload from decoded outputs; andmeans for carrying out an action based on the payload.
  • 9. The system of claim 8 in which said means for generating a plurality of outputs comprises an erasure code generator.
  • 10. The system of claim 8 in which only one output of the plurality of outputs is encoded in any one graphic of the plurality of graphics.
  • 11. The system of claim 8 in which the plurality of outputs comprises a subset of a total number of outputs provided by the erasure code generator.
  • 12. The system of claim 8 in which said means for generating a plurality of outputs comprises a fountain code generator, in which the fountain code generator produces the plurality of outputs, from which the payload can be constructed by obtaining a subset of the plurality of outputs, the subset being less than the plurality of outputs.
  • 13. The system of claim 8 in which the portable device comprises a wrist watch, a tablet or a smart phone.
  • 14. The system of claim 8 in which display of the embedded graphics comprises animation.
  • 15. The system of claim 8 in which said means for encoding utilizes digital watermarking.
  • 16. The system of claim 8 in which the encoded graphics are displayed so as to create a static image effect.
  • 17. The system of claim 9 in which the erasure code generator comprises a fountain code generator.
RELATED APPLICATION DATA

This application is a continuation of U.S. patent application Ser. No. 15/096,112, filed Apr. 11, 2016 (U.S. Pat. No. 10,210,502), which is a continuation of U.S. patent application Ser. No. 14/180,277, filed Feb. 13, 2014 (U.S. Pat. No. 9,311,640), which claims the benefit of U.S. Patent Application No. 61/938,673, filed Feb. 11, 2014, which are hereby incorporated herein by reference in their entirety. This application is also related to U.S. Provisional Application Nos. 61/825,059, filed May 19, 2013, and 61/769,701, filed Feb. 26, 2013; and U.S. patent application Ser. No. 14/074,072, filed Nov. 7, 2013 (published as U.S. 2014-0258110 A1), Ser. No. 13/873,117, filed Apr. 29, 2013 (issued as U.S. Pat. No. 9,830,588), and Ser. No. 13/792,764, filed Mar. 11, 2013 (issued as U.S. Pat. No. 9,965,756). Each of the above patent documents is hereby incorporated herein by reference in its entirety.

US Referenced Citations (1090)
Number Name Date Kind
4692806 Anderson Sep 1987 A
4800543 Lyndon-James Jan 1989 A
4807031 Broughton Feb 1989 A
4926480 Chaum May 1990 A
5113445 Wang May 1992 A
5119507 Mankovitz Jun 1992 A
5276311 Hennige Jan 1994 A
5432349 Wood Jul 1995 A
5450490 Jensen Sep 1995 A
5613004 Cooperman Mar 1997 A
5646997 Barton Jul 1997 A
5710834 Rhoads Jan 1998 A
5764763 Jensen Jun 1998 A
5793027 Baik Aug 1998 A
5799068 Kikinis Aug 1998 A
5802179 Yamamoto Sep 1998 A
5822360 Lee Oct 1998 A
5822423 Jehnert Oct 1998 A
5825892 Braudaway Oct 1998 A
5854629 Redpath Dec 1998 A
5859920 Daly Jan 1999 A
5862260 Rhoads Jan 1999 A
5889868 Moskowitz Mar 1999 A
5892900 Ginter Apr 1999 A
5909667 Leontiades Jun 1999 A
5918223 Blum Jun 1999 A
5920841 Schottmuller Jul 1999 A
5933798 Linnartz Aug 1999 A
5983383 Wolf Nov 1999 A
5991737 Chen Nov 1999 A
6014648 Brennan Jan 2000 A
6070140 Tran May 2000 A
6108640 Slotznick Aug 2000 A
6122403 Rhoads Sep 2000 A
6144938 Surace Nov 2000 A
6169890 Vatanen Jan 2001 B1
6182218 Saito Jan 2001 B1
6199044 Ackley Mar 2001 B1
6205231 Isadore-Barreca Mar 2001 B1
6230267 Richards May 2001 B1
6243713 Nelson Jun 2001 B1
6249531 Jacobi Jun 2001 B1
6259476 Greene Jul 2001 B1
6289140 Oliver Sep 2001 B1
6307487 Luby Oct 2001 B1
6311171 Dent Oct 2001 B1
6311214 Rhoads Oct 2001 B1
6320829 Matsumoto Nov 2001 B1
6369907 Aoki Apr 2002 B1
6386450 Ogasawara May 2002 B1
6389402 Ginter May 2002 B1
6400996 Hoffberg Jun 2002 B1
6408082 Rhoads Jun 2002 B1
6442284 Gustafson Aug 2002 B1
6445460 Pavley Sep 2002 B1
6448979 Schena Sep 2002 B1
6449377 Rhoads Sep 2002 B1
6466232 Newell Oct 2002 B1
6466654 Cooper Oct 2002 B1
6483570 Slater Nov 2002 B1
6483927 Brunk Nov 2002 B2
6491217 Catan Dec 2002 B2
6505160 Levy Jan 2003 B1
6507838 Syeda-Mahmood Jan 2003 B1
6516079 Rhoads Feb 2003 B1
6519607 Mahoney Feb 2003 B1
6535617 Hannigan Mar 2003 B1
6546262 Freadman Apr 2003 B1
6556971 Rigsby Apr 2003 B1
6590996 Reed Jul 2003 B1
6601027 Wright Jul 2003 B1
6609113 OLeary Aug 2003 B1
6611607 Davis Aug 2003 B1
6614914 Rhoads Sep 2003 B1
6628928 Noreen Sep 2003 B1
6629104 Parulski Sep 2003 B1
6636249 Rekimoto Oct 2003 B1
6650761 Rodriguez Nov 2003 B1
6654887 Rhoads Nov 2003 B2
6704714 OLeary Mar 2004 B1
6711293 Lowe Mar 2004 B1
6714683 Tian Mar 2004 B1
6722569 Ehrhart Apr 2004 B2
6724914 Brundage Apr 2004 B2
6728312 Whitford Apr 2004 B1
6735324 McKinley May 2004 B1
6738494 Savakis May 2004 B1
6751337 Tewfik Jun 2004 B2
6763124 Alattar Jul 2004 B2
6829368 Meyer Dec 2004 B2
6834308 Ikezoye Dec 2004 B1
6845360 Jensen Jan 2005 B2
6862355 Kolessar Mar 2005 B2
6865589 Haitsma Mar 2005 B2
6921625 Kaczun Jul 2005 B2
6931451 Logan Aug 2005 B1
6941275 Swierczek Sep 2005 B1
6947571 Rhoads Sep 2005 B1
6957393 Fano Oct 2005 B2
6957776 Ng Oct 2005 B1
6964023 Maes Nov 2005 B2
6968564 Srinivasan Nov 2005 B1
6978297 Piersol Dec 2005 B1
6988070 Kawasaki Jan 2006 B2
6988202 Rhoads Jan 2006 B1
6990453 Wang Jan 2006 B2
6993154 Brunk Jan 2006 B2
7003495 Burger Feb 2006 B1
7003731 Rhoads Feb 2006 B1
7006555 Srinivasan Feb 2006 B1
7012183 Herre Mar 2006 B2
7013021 Sharma Mar 2006 B2
7016532 Boncyk Mar 2006 B2
7020304 Alattar Mar 2006 B2
7020337 Viola Mar 2006 B2
7022075 Grunwald Apr 2006 B2
7024016 Rhoads Apr 2006 B2
7027975 Pazandak Apr 2006 B1
7038709 Verghese May 2006 B1
7043048 Ellingson May 2006 B1
7043474 Mojsilovic May 2006 B2
7046819 Sharma May 2006 B2
7058356 Slotznick Jun 2006 B2
7068729 Shokrollahi Jun 2006 B2
7076082 Sharma Jul 2006 B2
7076737 Abbott Jul 2006 B2
7084903 Narayanaswami Aug 2006 B2
7107516 Anderson Sep 2006 B1
7121469 Dorai Oct 2006 B2
7123740 McKinley Oct 2006 B2
7143949 Hannigan Dec 2006 B1
7152786 Brundage Dec 2006 B2
7159116 Moskowitz Jan 2007 B2
7177424 Furuya Feb 2007 B1
7185201 Rhoads Feb 2007 B2
7191156 Seder Mar 2007 B1
7197164 Levy Mar 2007 B2
7223956 Yoshida May 2007 B2
7228327 Shuster Jun 2007 B2
7254406 Beros Aug 2007 B2
7302574 Conwell Nov 2007 B2
7313759 Sinisi Dec 2007 B2
7346512 Li-Chun Mar 2008 B2
7349552 Levy Mar 2008 B2
7359526 Nister Apr 2008 B2
7359889 Wang Apr 2008 B2
7366908 Tewfik Apr 2008 B2
7370190 Calhoon May 2008 B2
7383169 Vanderwende Jun 2008 B1
7391881 Sharma Jun 2008 B2
7397607 Travers Jul 2008 B2
7412072 Sharma Aug 2008 B2
7412641 Shokrollahi Aug 2008 B2
7415129 Rhoads Aug 2008 B2
7418392 Mozer Aug 2008 B1
7425977 Sakai Sep 2008 B2
7450163 Rothschild Nov 2008 B2
7454033 Stach Nov 2008 B2
7461136 Rhoads Dec 2008 B2
7466334 Baba Dec 2008 B1
7486827 Kim Feb 2009 B2
7489801 Sharma Feb 2009 B2
7496328 Slotznick Feb 2009 B2
7503488 Davis Mar 2009 B2
7508954 Lev Mar 2009 B2
7512889 Newell Mar 2009 B2
7516074 Bilobrov Apr 2009 B2
7519200 Gokturk Apr 2009 B2
7519618 Nagahashi Apr 2009 B2
7519819 Bradley Apr 2009 B2
7529563 Pitroda May 2009 B1
7542610 Gokturk Jun 2009 B2
7562392 Rhoads Jul 2009 B1
7564992 Rhoads Jul 2009 B2
7565139 Neven Jul 2009 B2
7565157 Ortega Jul 2009 B1
7565294 Rhoads Jul 2009 B2
7575171 Lev Aug 2009 B2
7587601 Levy Sep 2009 B2
7587602 Rhoads Sep 2009 B2
7590259 Levy Sep 2009 B2
7606790 Levy Oct 2009 B2
7616807 Zhang Nov 2009 B2
7616840 Erol Nov 2009 B2
7627477 Wang Dec 2009 B2
7657100 Gokturk Feb 2010 B2
7668369 Yen Feb 2010 B2
7668821 Donsbach Feb 2010 B1
7676060 Brundage Mar 2010 B2
7676372 Oba Mar 2010 B1
7680324 Boncyk Mar 2010 B2
7693965 Rhoads Apr 2010 B2
7702681 Brewer Apr 2010 B2
7706570 Sharma Apr 2010 B2
7707035 McCune Apr 2010 B2
7711837 Bentsen May 2010 B2
7720436 Hamynen May 2010 B2
7721184 Luby May 2010 B2
7734729 Du Jun 2010 B2
7739221 Lawler Jun 2010 B2
7739224 Weissman Jun 2010 B1
7743980 de Sylva Jun 2010 B2
7751596 Rhoads Jul 2010 B2
7751805 Neven Jul 2010 B2
7760902 Rhoads Jul 2010 B2
7773808 Lim Aug 2010 B2
7774504 Chene Aug 2010 B2
7787697 Ritzau Aug 2010 B2
7792678 Hung Sep 2010 B2
7797338 Feng Sep 2010 B2
7801328 Au Sep 2010 B2
7805500 Rhoads Sep 2010 B2
7822225 Alattar Oct 2010 B2
7831531 Baluja Nov 2010 B1
7836093 Gobeyn Nov 2010 B2
7853582 Gopalakrishnan Dec 2010 B2
7853664 Wang Dec 2010 B1
7856411 Darr Dec 2010 B2
7860382 Grip Dec 2010 B2
7873521 Kurozumi Jan 2011 B2
7881931 Wells Feb 2011 B2
7890386 Reber Feb 2011 B1
7899243 Boncyk Mar 2011 B2
7899252 Boncyk Mar 2011 B2
7924761 Stevens Apr 2011 B1
7930546 Rhoads Apr 2011 B2
7940285 Would May 2011 B2
7941338 Silverbrook May 2011 B2
7958143 Amacker Jun 2011 B1
7970167 Rhoads Jun 2011 B2
7970213 Ruzon Jun 2011 B1
7971129 Watson Jun 2011 B2
7978875 Sharma Jul 2011 B2
7996678 Kalker Aug 2011 B2
8006160 Chen Aug 2011 B2
8009928 Manmatha Aug 2011 B1
8020770 Kamijo Sep 2011 B2
8036418 Meyer Oct 2011 B2
8041734 Mohajer Oct 2011 B2
8069414 Hartwig Nov 2011 B2
8077905 Rhoads Dec 2011 B2
8090579 Debusk Jan 2012 B2
8091025 Ramos Jan 2012 B2
8095888 Jang Jan 2012 B2
8103877 Hannigan Jan 2012 B2
8104091 Qin Jan 2012 B2
8116685 Bregman-Amitai Feb 2012 B2
8121618 Rhoads Feb 2012 B2
8122020 Donsbach Feb 2012 B1
8126858 Ruzon Feb 2012 B1
8140848 Brundage Mar 2012 B2
8150255 Tsai Apr 2012 B2
8151113 Rhoads Apr 2012 B2
8155582 Rhoads Apr 2012 B2
8156115 Erol Apr 2012 B1
8165409 Ritzau Apr 2012 B2
8175617 Rodriguez May 2012 B2
8176067 Ahmad May 2012 B1
8180396 Athsani May 2012 B2
8194986 Conwell Jun 2012 B2
8223799 Karaoguz Jul 2012 B2
8224022 Levy Jul 2012 B2
8224731 Maw Jul 2012 B2
8229160 Rosenblatt Jul 2012 B2
8250660 Levy Aug 2012 B2
8255693 Rhoads Aug 2012 B2
8256665 Rhoads Sep 2012 B2
8279138 Margulis Oct 2012 B1
8294569 Thorn Oct 2012 B2
8301512 Hamilton Oct 2012 B2
8313037 Humphrey Nov 2012 B1
8315554 Levy Nov 2012 B2
8332478 Levy Dec 2012 B2
8334898 Ryan Dec 2012 B1
8341412 Conwell Dec 2012 B2
8355961 Ng Jan 2013 B1
8364720 Levy Jan 2013 B2
8376239 Humphrey Feb 2013 B1
8380177 Laracey Feb 2013 B2
8385039 Rothkopf Feb 2013 B2
8385591 Anguelov Feb 2013 B1
8385971 Rhoads Feb 2013 B2
8388427 Yariv Mar 2013 B2
8396810 Cook Mar 2013 B1
8400548 Bilbrey Mar 2013 B2
8401342 Ruzon Mar 2013 B2
8406507 Ruzon Mar 2013 B2
8412577 Rodriguez Apr 2013 B2
8413903 Dhua Apr 2013 B1
8418055 King Apr 2013 B2
8422777 Aller Apr 2013 B2
8422782 Dhua Apr 2013 B1
8422994 Rhoads Apr 2013 B2
8423457 Schattauer Apr 2013 B1
8427508 Mattila Apr 2013 B2
8429407 Os Apr 2013 B2
8433306 Rodriguez Apr 2013 B2
8439683 Puri May 2013 B2
8447107 Dhua May 2013 B1
8452586 Master May 2013 B2
8463036 Ramesh Jun 2013 B1
8464176 Van Jun 2013 B2
8468377 Scott Jun 2013 B2
8478592 Patel Jul 2013 B2
8483715 Chen Jul 2013 B2
8487867 Wu Jul 2013 B2
8489115 Rodriguez Jul 2013 B2
8498627 Rodriguez Jul 2013 B2
8503791 Conwell Aug 2013 B2
8508471 Suh Aug 2013 B2
8520979 Conwell Aug 2013 B2
8526024 Fukushima Sep 2013 B2
8533761 Sahami Sep 2013 B1
8548810 Rodriguez Oct 2013 B2
8560605 Gyongyi Oct 2013 B1
8577880 Donsbach Nov 2013 B1
8582821 Feldman Nov 2013 B1
8600053 Brundage Dec 2013 B2
8606011 Ivanchenko Dec 2013 B1
8606021 Conwell Dec 2013 B2
8620208 Slotznick Dec 2013 B2
8620790 Priebatsch Dec 2013 B2
8630851 Hertschuh Jan 2014 B1
8631029 Amacker Jan 2014 B1
8631230 Manges Jan 2014 B2
8639036 Singer Jan 2014 B1
8639619 Priebatsch Jan 2014 B1
8660355 Rodriguez Feb 2014 B2
8666446 Kim Mar 2014 B2
8687021 Bathiche Apr 2014 B2
8687104 Penov Apr 2014 B2
8694049 Sharma Apr 2014 B2
8694438 Jernigan Apr 2014 B1
8694522 Pance Apr 2014 B1
8694534 Mohajer Apr 2014 B2
8700392 Hart Apr 2014 B1
8700407 Wang Apr 2014 B2
8706572 Varadarajan Apr 2014 B1
8711176 Douris Apr 2014 B2
8718369 Tompkins May 2014 B1
8725829 Wang May 2014 B2
8737737 Feldman May 2014 B1
8737986 Rhoads May 2014 B2
8738647 Menon May 2014 B2
8739208 Davis May 2014 B2
8743145 Price Jun 2014 B1
8744214 Snavely Jun 2014 B2
8750556 McKinley Jun 2014 B2
8755837 Rhoads Jun 2014 B2
8756216 Ramesh Jun 2014 B1
8762852 Davis Jun 2014 B2
8763908 Feldman Jul 2014 B1
8787672 Raichman Jul 2014 B2
8787707 Sieves Jul 2014 B1
8788977 Bezos Jul 2014 B2
8799401 Bryar Aug 2014 B1
8803912 Fouts Aug 2014 B1
8816179 Wang Aug 2014 B2
8819172 Davis Aug 2014 B2
8849259 Rhoads Sep 2014 B2
8868039 Rodriguez Oct 2014 B2
8872854 Levitt Oct 2014 B1
8886222 Rodriguez Nov 2014 B1
8970733 Faenger Mar 2015 B2
8971567 Reed Mar 2015 B2
8972824 Northcott Mar 2015 B1
8977293 Rodriguez Mar 2015 B2
8990347 Schneider Mar 2015 B2
9022291 Van May 2015 B1
9022292 Van May 2015 B1
9054742 Kwok Jun 2015 B2
9071730 Livesey Jun 2015 B2
9116920 Boncyk Aug 2015 B2
9118771 Rodriguez Aug 2015 B2
9143603 Davis Sep 2015 B2
9196028 Rodriguez Nov 2015 B2
9197736 Davis Nov 2015 B2
9234744 Rhoads Jan 2016 B2
9256806 Aller Feb 2016 B2
9311639 Filler Apr 2016 B2
9311640 Filler Apr 2016 B2
9354778 Cornaby May 2016 B2
9412121 Tatzel Aug 2016 B2
9444924 Rodriguez Sep 2016 B2
9462107 Rhoads Oct 2016 B2
9484046 Knudson Nov 2016 B2
9595258 Rodriguez Mar 2017 B2
9609107 Rodriguez Mar 2017 B2
9609117 Davis Mar 2017 B2
20010024568 Mori Sep 2001 A1
20010031066 Meyer Oct 2001 A1
20010037312 Gray Nov 2001 A1
20010037455 Lawandy Nov 2001 A1
20010056225 Devito Dec 2001 A1
20020004783 Paltenghe Jan 2002 A1
20020010679 Felsher Jan 2002 A1
20020010684 Moskowitz Jan 2002 A1
20020038287 Villaret Mar 2002 A1
20020042266 Heyward Apr 2002 A1
20020044659 Ohta Apr 2002 A1
20020054067 Ludtke May 2002 A1
20020055924 Liming May 2002 A1
20020065063 Uhlik May 2002 A1
20020072982 Barton Jun 2002 A1
20020077534 Durousseau Jun 2002 A1
20020077978 OLeary Jun 2002 A1
20020077993 Immonen Jun 2002 A1
20020083292 Isomura Jun 2002 A1
20020090109 Wendt Jul 2002 A1
20020095389 Gaines Jul 2002 A1
20020102966 Lev Aug 2002 A1
20020113757 Hoisko Aug 2002 A1
20020116195 Pitman Aug 2002 A1
20020124116 Yaung Sep 2002 A1
20020126879 Mihara Sep 2002 A1
20020128857 Lee Sep 2002 A1
20020133499 Ward Sep 2002 A1
20020191862 Neumann Dec 2002 A1
20030012410 Navab Jan 2003 A1
20030018709 Schrempp Jan 2003 A1
20030026453 Sharma Feb 2003 A1
20030031369 Le Feb 2003 A1
20030033321 Schrempp Feb 2003 A1
20030033325 Boogaard Feb 2003 A1
20030035567 Chang Feb 2003 A1
20030037010 Schmelzer Feb 2003 A1
20030044048 Zhang Mar 2003 A1
20030061039 Levin Mar 2003 A1
20030062419 Ehrhart Apr 2003 A1
20030095681 Burg May 2003 A1
20030097444 Dutta May 2003 A1
20030101084 Otero May 2003 A1
20030103647 Rui Jun 2003 A1
20030112267 Belrose Jun 2003 A1
20030117365 Shteyn Jun 2003 A1
20030135623 Schrempp Jul 2003 A1
20030140004 OLeary Jul 2003 A1
20030187659 Cho Oct 2003 A1
20030200089 Nakagawa Oct 2003 A1
20030208499 Bigwood Nov 2003 A1
20030231785 Rhoads Dec 2003 A1
20040003409 Berstis Jan 2004 A1
20040019785 Hawkes Jan 2004 A1
20040028258 Naimark Feb 2004 A1
20040041028 Smith Mar 2004 A1
20040080530 Lee Apr 2004 A1
20040083015 Patwari Apr 2004 A1
20040091111 Levy May 2004 A1
20040099741 Dorai May 2004 A1
20040138877 Ariu Jul 2004 A1
20040163106 Schrempp Aug 2004 A1
20040169674 Linjama Sep 2004 A1
20040201676 Needham Oct 2004 A1
20040212630 Hobgood Oct 2004 A1
20040220978 Bentz Nov 2004 A1
20040250078 Stach Dec 2004 A1
20040250079 Kalker Dec 2004 A1
20040263663 Lee Dec 2004 A1
20050011957 Attia Jan 2005 A1
20050018883 Scott Jan 2005 A1
20050033582 Gadd Feb 2005 A1
20050036628 Devito Feb 2005 A1
20050036656 Takahashi Feb 2005 A1
20050038814 Iyengar Feb 2005 A1
20050044189 Ikezoye Feb 2005 A1
20050049964 Winterer Mar 2005 A1
20050071179 Peters Mar 2005 A1
20050091604 Davis Apr 2005 A1
20050096902 Kondo May 2005 A1
20050104750 Tuason May 2005 A1
20050116026 Burger Jun 2005 A1
20050125224 Myers Jun 2005 A1
20050132021 Mehr Jun 2005 A1
20050132194 Ward Jun 2005 A1
20050132420 Howard Jun 2005 A1
20050159955 Oerder Jul 2005 A1
20050165609 Zuberec Jul 2005 A1
20050165784 Gomez Jul 2005 A1
20050177846 Maruyama Aug 2005 A1
20050178832 Higuchi Aug 2005 A1
20050185060 Neven Aug 2005 A1
20050190972 Thomas Sep 2005 A1
20050195128 Sefton Sep 2005 A1
20050195309 Kim Sep 2005 A1
20050198095 Du Sep 2005 A1
20050216277 Lee Sep 2005 A1
20050232411 Srinivasan Oct 2005 A1
20050240253 Tyler Oct 2005 A1
20050253713 Yokota Nov 2005 A1
20050261990 Gocht Nov 2005 A1
20050273608 Kamperman Dec 2005 A1
20050277872 Colby Dec 2005 A1
20050281410 Grosvenor Dec 2005 A1
20050283379 Reber Dec 2005 A1
20060002607 Boncyk Jan 2006 A1
20060007005 Yui Jan 2006 A1
20060008112 Reed Jan 2006 A1
20060009702 Iwaki Jan 2006 A1
20060012677 Neven Jan 2006 A1
20060013435 Rhoads Jan 2006 A1
20060013446 Stephens Jan 2006 A1
20060031684 Sharma Feb 2006 A1
20060032726 Vook Feb 2006 A1
20060038833 Mallinson Feb 2006 A1
20060041661 Erikson Feb 2006 A1
20060047584 Vaschillo Mar 2006 A1
20060047704 Gopalakrishnan Mar 2006 A1
20060049940 Matsuhira Mar 2006 A1
20060062428 Alattar Mar 2006 A1
20060067575 Yamada Mar 2006 A1
20060085477 Phillips Apr 2006 A1
20060092291 Bodie May 2006 A1
20060097983 Haggman May 2006 A1
20060107219 Ahya May 2006 A1
20060109515 Zhao May 2006 A1
20060114338 Rothschild Jun 2006 A1
20060115108 Rodriguez Jun 2006 A1
20060131393 Cok Jun 2006 A1
20060133647 Werner Jun 2006 A1
20060143018 Densham Jun 2006 A1
20060157559 Levy Jul 2006 A1
20060170956 Jung Aug 2006 A1
20060173859 Kim Aug 2006 A1
20060198549 Van Sep 2006 A1
20060214953 Crew Sep 2006 A1
20060217199 Adcox Sep 2006 A1
20060218192 Gopalakrishnan Sep 2006 A1
20060230073 Gopalakrishnan Oct 2006 A1
20060240862 Neven Oct 2006 A1
20060247915 Bradford Nov 2006 A1
20060251292 Gokturk Nov 2006 A1
20060253335 Keena Nov 2006 A1
20060256200 Matei Nov 2006 A1
20060268007 Gopalakrishnan Nov 2006 A1
20070002077 Gopalakrishnan Jan 2007 A1
20070031064 Zhao Feb 2007 A1
20070036469 Kim Feb 2007 A1
20070064263 Silverbrook Mar 2007 A1
20070064562 Han Mar 2007 A1
20070064957 Pages Mar 2007 A1
20070070069 Samarasekera Mar 2007 A1
20070079161 Gupta Apr 2007 A1
20070091196 Miyanohara Apr 2007 A1
20070100480 Sinclair May 2007 A1
20070100796 Wang May 2007 A1
20070106721 Schloter May 2007 A1
20070109018 Perl May 2007 A1
20070109266 Davis May 2007 A1
20070112567 Lau May 2007 A1
20070116456 Kuriakose May 2007 A1
20070124756 Covell May 2007 A1
20070124775 Dacosta May 2007 A1
20070132856 Saito Jun 2007 A1
20070150411 Addepalli Jun 2007 A1
20070156726 Levy Jul 2007 A1
20070156762 Ben-Yaacov Jul 2007 A1
20070159522 Neven Jul 2007 A1
20070162348 Lewis Jul 2007 A1
20070162761 Davis Jul 2007 A1
20070162942 Hamynen Jul 2007 A1
20070162971 Blom Jul 2007 A1
20070168332 Bussard Jul 2007 A1
20070172155 Guckenberger Jul 2007 A1
20070174043 Makela Jul 2007 A1
20070174059 Rhoads Jul 2007 A1
20070174613 Paddon Jul 2007 A1
20070185697 Tan Aug 2007 A1
20070185840 Rhoads Aug 2007 A1
20070187505 Rhoads Aug 2007 A1
20070192272 Elfayoumy Aug 2007 A1
20070192352 Levy Aug 2007 A1
20070192480 Han Aug 2007 A1
20070200912 Hung Aug 2007 A1
20070208711 Rhoads Sep 2007 A1
20070237106 Rajan Oct 2007 A1
20070250194 Rhoads Oct 2007 A1
20070250716 Brunk Oct 2007 A1
20070253594 Lu Nov 2007 A1
20070266252 Davis Nov 2007 A1
20070274537 Srinivasan Nov 2007 A1
20070282739 Thomsen Dec 2007 A1
20070286463 Ritzau Dec 2007 A1
20070294431 Adelman Dec 2007 A1
20070300070 Shen-Orr Dec 2007 A1
20070300127 Watson Dec 2007 A1
20070300267 Griffin Dec 2007 A1
20080002914 Vincent Jan 2008 A1
20080004978 Rothschild Jan 2008 A1
20080005091 Lawler Jan 2008 A1
20080007620 Wang Jan 2008 A1
20080014917 Rhoads Jan 2008 A1
20080036869 Gustafsson Feb 2008 A1
20080041936 Vawter Feb 2008 A1
20080041937 Vawter Feb 2008 A1
20080048022 Vawter Feb 2008 A1
20080057911 Lauper Mar 2008 A1
20080059211 Brock Mar 2008 A1
20080059896 Anderson Mar 2008 A1
20080062141 Chandhri Mar 2008 A1
20080066052 Wolfram Mar 2008 A1
20080066080 Campbell Mar 2008 A1
20080071749 Schloter Mar 2008 A1
20080071750 Schloter Mar 2008 A1
20080071770 Schloter Mar 2008 A1
20080071988 Schloter Mar 2008 A1
20080082426 Gokturk Apr 2008 A1
20080091602 Gray Apr 2008 A1
20080092168 Logan Apr 2008 A1
20080109369 Su May 2008 A1
20080114737 Neely May 2008 A1
20080122796 Jobs May 2008 A1
20080134088 Tse Jun 2008 A1
20080136587 Orr Jun 2008 A1
20080140306 Snodgrass Jun 2008 A1
20080143518 Aaron Jun 2008 A1
20080155426 Robertson Jun 2008 A1
20080162228 Mechbach Jul 2008 A1
20080165022 Herz Jul 2008 A1
20080165960 Woo Jul 2008 A1
20080174570 Jobs Jul 2008 A1
20080178302 Brock Jul 2008 A1
20080184322 Blake Jul 2008 A1
20080193011 Hayashi Aug 2008 A1
20080201314 Smith Aug 2008 A1
20080208849 Conwell Aug 2008 A1
20080209502 Seidel Aug 2008 A1
20080215274 Yamaguchi Sep 2008 A1
20080218472 Breen Sep 2008 A1
20080226119 Candelore Sep 2008 A1
20080228733 Davis Sep 2008 A1
20080235031 Yamamoto Sep 2008 A1
20080235570 Sawada Sep 2008 A1
20080239350 Ohira Oct 2008 A1
20080242317 Abhyanker Oct 2008 A1
20080243806 Dalal Oct 2008 A1
20080248797 Freeman Oct 2008 A1
20080249961 Harkness Oct 2008 A1
20080250147 Knibbeler Oct 2008 A1
20080250347 Gray Oct 2008 A1
20080253357 Liu Oct 2008 A1
20080255933 Leventhal Oct 2008 A1
20080259918 Walker Oct 2008 A1
20080262928 Michaelis Oct 2008 A1
20080263046 Kristensson Oct 2008 A1
20080267504 Schloter Oct 2008 A1
20080267521 Gao Oct 2008 A1
20080268876 Gelfand Oct 2008 A1
20080270378 Setlur Oct 2008 A1
20080276265 Topchy Nov 2008 A1
20080278481 Aguera Nov 2008 A1
20080281515 Ann Nov 2008 A1
20080292137 Rhoads Nov 2008 A1
20080296392 Connell Dec 2008 A1
20080300011 Rhoads Dec 2008 A1
20080301320 Morris Dec 2008 A1
20080306924 Paolini Dec 2008 A1
20080313146 Wong Dec 2008 A1
20080317278 Lefebvre Dec 2008 A1
20090002491 Haler Jan 2009 A1
20090002497 Davis Jan 2009 A1
20090012944 Rodriguez Jan 2009 A1
20090015685 Shulman Jan 2009 A1
20090016645 Sako Jan 2009 A1
20090018828 Nakadai Jan 2009 A1
20090024619 Dallmeier Jan 2009 A1
20090031381 Cohen Jan 2009 A1
20090031814 Takiguchi Feb 2009 A1
20090037326 Chitti Feb 2009 A1
20090043580 Mozer Feb 2009 A1
20090043658 Webb Feb 2009 A1
20090043726 Watzke Feb 2009 A1
20090049100 Wissner-Gross Feb 2009 A1
20090060259 Goncalves Mar 2009 A1
20090063279 Ives Mar 2009 A1
20090083237 Gelfand Mar 2009 A1
20090083275 Jacob Mar 2009 A1
20090083642 Kim Mar 2009 A1
20090085873 Betts Apr 2009 A1
20090089078 Bursey Apr 2009 A1
20090094289 Xiong Apr 2009 A1
20090102859 Athsani Apr 2009 A1
20090104888 Cox Apr 2009 A1
20090106087 Konar Apr 2009 A1
20090109940 Vedurmudi Apr 2009 A1
20090110245 Thorn Apr 2009 A1
20090116683 Rhoads May 2009 A1
20090119172 Soloff May 2009 A1
20090122157 Kuboyama May 2009 A1
20090129782 Pederson May 2009 A1
20090135918 Mak-Fan May 2009 A1
20090142038 Nishikawa Jun 2009 A1
20090144161 Fisher Jun 2009 A1
20090148068 Woodbeck Jun 2009 A1
20090157795 Black Jun 2009 A1
20090158318 Levy Jun 2009 A1
20090161662 Wu Jun 2009 A1
20090164896 Thorn Jun 2009 A1
20090167787 Bathiche Jul 2009 A1
20090174798 Nilsson Jul 2009 A1
20090175499 Rosenblatt Jul 2009 A1
20090177742 Rhoads Jul 2009 A1
20090182622 Agarwal Jul 2009 A1
20090189830 Deering Jul 2009 A1
20090196460 Jakobs Aug 2009 A1
20090198615 Emerson Aug 2009 A1
20090199235 Surendran Aug 2009 A1
20090203355 Clark Aug 2009 A1
20090204410 Mozer Aug 2009 A1
20090204640 Christensen Aug 2009 A1
20090214060 Chuang Aug 2009 A1
20090216910 Duchesneau Aug 2009 A1
20090220070 Picard Sep 2009 A1
20090231441 Walker Sep 2009 A1
20090232352 Carr Sep 2009 A1
20090234773 Hasson Sep 2009 A1
20090237546 Bloebaum Sep 2009 A1
20090240564 Boerries Sep 2009 A1
20090240692 Barton Sep 2009 A1
20090242620 Sahuguet Oct 2009 A1
20090245600 Hoffman Oct 2009 A1
20090247184 Sennett Oct 2009 A1
20090255395 Humphrey Oct 2009 A1
20090259962 Beale Oct 2009 A1
20090263024 Yamaguchi Oct 2009 A1
20090276344 Maw Nov 2009 A1
20090279794 Brucher Nov 2009 A1
20090284366 Haartsen Nov 2009 A1
20090285492 Ramanujapuram Nov 2009 A1
20090295942 Barnett Dec 2009 A1
20090297045 Poetker Dec 2009 A1
20090299990 Setlur Dec 2009 A1
20090303231 Robinet Dec 2009 A1
20090307132 Phillips Dec 2009 A1
20090310814 Gallagher Dec 2009 A1
20090313269 Bachmann Dec 2009 A1
20090315886 Drive Dec 2009 A1
20090319388 Yuan Dec 2009 A1
20090327272 Koivunen Dec 2009 A1
20090327894 Rakib Dec 2009 A1
20100004926 Neoran Jan 2010 A1
20100009700 Camp Jan 2010 A1
20100020970 Liu Jan 2010 A1
20100023328 Griffin Jan 2010 A1
20100031198 Zimmerman Feb 2010 A1
20100036717 Trest Feb 2010 A1
20100045701 Scott Feb 2010 A1
20100045816 Rhoads Feb 2010 A1
20100046842 Conwell Feb 2010 A1
20100048242 Rhoads Feb 2010 A1
20100057552 OLeary Mar 2010 A1
20100069115 Liu Mar 2010 A1
20100070272 Lee Mar 2010 A1
20100070284 Oh Mar 2010 A1
20100070365 Siotia Mar 2010 A1
20100070501 Walsh Mar 2010 A1
20100076833 Nelsen Mar 2010 A1
20100077017 Martinez Mar 2010 A1
20100082431 Ramer Apr 2010 A1
20100082444 Lin Apr 2010 A1
20100086107 Tzruya Apr 2010 A1
20100088100 Lindahl Apr 2010 A1
20100088188 Kumar Apr 2010 A1
20100088237 Wankmueller Apr 2010 A1
20100092093 Akatsuka Apr 2010 A1
20100098341 Ju Apr 2010 A1
20100100546 Kohler Apr 2010 A1
20100114731 Kingston May 2010 A1
20100119208 Davis May 2010 A1
20100125495 Smith May 2010 A1
20100125816 Bezos May 2010 A1
20100131443 Agarwal May 2010 A1
20100134278 Srinivasan Jun 2010 A1
20100135417 Hargil Jun 2010 A1
20100135527 Wu Jun 2010 A1
20100138344 Wong Jun 2010 A1
20100146445 Kraut Jun 2010 A1
20100150434 Reed Jun 2010 A1
20100158310 McQueen Jun 2010 A1
20100162105 Beebe Jun 2010 A1
20100171826 Hamilton Jul 2010 A1
20100171875 Yamamoto Jul 2010 A1
20100173269 Puri Jul 2010 A1
20100174544 Heifets Jul 2010 A1
20100179859 Davis Jul 2010 A1
20100185448 Meisel Jul 2010 A1
20100198870 Petersen Aug 2010 A1
20100199232 Mistry Aug 2010 A1
20100205628 Davis Aug 2010 A1
20100208997 Xie Aug 2010 A1
20100211693 Master Aug 2010 A1
20100222102 Rodriguez Sep 2010 A1
20100223152 Emerson Sep 2010 A1
20100225773 Lee Sep 2010 A1
20100226526 Modro Sep 2010 A1
20100228612 Khosravy Sep 2010 A1
20100228632 Rodriguez Sep 2010 A1
20100231509 Boillot Sep 2010 A1
20100232727 Engedal Sep 2010 A1
20100241946 Ofek Sep 2010 A1
20100250436 Loevenguth Sep 2010 A1
20100257252 Dougherty Oct 2010 A1
20100260426 Huang Oct 2010 A1
20100261465 Rhoads Oct 2010 A1
20100271365 Smith Oct 2010 A1
20100273452 Rajann Oct 2010 A1
20100277611 Holt Nov 2010 A1
20100282836 Kempf Nov 2010 A1
20100284617 Ritzau Nov 2010 A1
20100306113 Gray Dec 2010 A1
20100309226 Quack Dec 2010 A1
20100312547 Van Dec 2010 A1
20100317420 Hoffberg Dec 2010 A1
20100318470 Meinel Dec 2010 A1
20100318558 Boothroyd Dec 2010 A1
20100325154 Schloter Dec 2010 A1
20110029370 Roeding Feb 2011 A1
20110034176 Lord Feb 2011 A1
20110035406 Petrou Feb 2011 A1
20110035662 King Feb 2011 A1
20110038512 Petrou Feb 2011 A1
20110043652 King Feb 2011 A1
20110058707 Rhoads Mar 2011 A1
20110063317 Gharaat Mar 2011 A1
20110064312 Janky Mar 2011 A1
20110065451 Danado Mar 2011 A1
20110066613 Berkman Mar 2011 A1
20110069958 Haas Mar 2011 A1
20110072047 Wang Mar 2011 A1
20110076942 Taveau Mar 2011 A1
20110078204 Lotikar Mar 2011 A1
20110078205 Salkeld Mar 2011 A1
20110085739 Zhang Apr 2011 A1
20110087685 Lin Apr 2011 A1
20110098029 Rhoads Apr 2011 A1
20110098056 Rhoads Apr 2011 A1
20110105022 Vawter May 2011 A1
20110116690 Ross May 2011 A1
20110119156 Hwang May 2011 A1
20110125696 Wu May 2011 A1
20110125735 Petrou May 2011 A1
20110128288 Petrou Jun 2011 A1
20110129153 Petrou Jun 2011 A1
20110131040 Huang Jun 2011 A1
20110131235 Petrou Jun 2011 A1
20110131241 Petrou Jun 2011 A1
20110135207 Flynn Jun 2011 A1
20110137895 Petrou Jun 2011 A1
20110138286 Kaptelinin Jun 2011 A1
20110141276 Borghei Jun 2011 A1
20110143811 Rodriguez Jun 2011 A1
20110150292 Boncyk Jun 2011 A1
20110153050 Bauer Jun 2011 A1
20110153201 Park Jun 2011 A1
20110153653 King Jun 2011 A1
20110157184 Niehsen Jun 2011 A1
20110158558 Zhao Jun 2011 A1
20110159921 Davis Jun 2011 A1
20110161076 Davis Jun 2011 A1
20110161086 Rodriguez Jun 2011 A1
20110161285 Boldyrev Jun 2011 A1
20110164064 Tanaka Jul 2011 A1
20110164163 Bilbrey Jul 2011 A1
20110165896 Stromberg Jul 2011 A1
20110167053 Lawler Jul 2011 A1
20110173185 Vogel Jul 2011 A1
20110176706 Levy Jul 2011 A1
20110180598 Morgan Jul 2011 A1
20110187652 Huibers Aug 2011 A1
20110187716 Chen Aug 2011 A1
20110188713 Chin Aug 2011 A1
20110191438 Huibers Aug 2011 A1
20110191823 Huibers Aug 2011 A1
20110196855 Wable Aug 2011 A1
20110196859 Mei Aug 2011 A1
20110197226 Hatalkar Aug 2011 A1
20110199479 Waldman Aug 2011 A1
20110202151 Covaro Aug 2011 A1
20110202466 Carter Aug 2011 A1
20110208652 OLeary Aug 2011 A1
20110208656 Alba Aug 2011 A1
20110212717 Rhoads Sep 2011 A1
20110215736 Horbst Sep 2011 A1
20110225167 Bhattacharjee Sep 2011 A1
20110233278 Patel Sep 2011 A1
20110238571 OLeary Sep 2011 A1
20110244919 Aller Oct 2011 A1
20110246362 OLeary Oct 2011 A1
20110246574 Lento Oct 2011 A1
20110247027 Davis Oct 2011 A1
20110251892 Laracey Oct 2011 A1
20110258121 Kauniskangas Oct 2011 A1
20110273455 Powar Nov 2011 A1
20110274310 Rhoads Nov 2011 A1
20110276474 Portillo Nov 2011 A1
20110277023 Meylemans Nov 2011 A1
20110279458 Gnanasambandam Nov 2011 A1
20110283208 Gallo Nov 2011 A1
20110289098 Oztaskent Nov 2011 A1
20110289183 Rollins Nov 2011 A1
20110289224 Trott Nov 2011 A1
20110293094 Os Dec 2011 A1
20110295502 Faenger Dec 2011 A1
20110314549 Song Dec 2011 A1
20110320314 Brown Dec 2011 A1
20120005616 Walsh Jan 2012 A1
20120011063 Killian Jan 2012 A1
20120013766 Rothschild Jan 2012 A1
20120016678 Gruber Jan 2012 A1
20120022958 de Sylva Jan 2012 A1
20120023060 Rothkopf Jan 2012 A1
20120024945 Jones Feb 2012 A1
20120028577 Rodriguez Feb 2012 A1
20120033876 Momeyer Feb 2012 A1
20120034904 Lebeau Feb 2012 A1
20120038668 Kim Feb 2012 A1
20120044350 Verfuerth Feb 2012 A1
20120045093 Salminen Feb 2012 A1
20120046071 Brandis Feb 2012 A1
20120059780 Koenoenen Mar 2012 A1
20120062465 Spetalnick Mar 2012 A1
20120069019 Richards Mar 2012 A1
20120069051 Hagbi Mar 2012 A1
20120075168 Osterhout Mar 2012 A1
20120078397 Lee Mar 2012 A1
20120095865 Doherty Apr 2012 A1
20120096176 Kiss Apr 2012 A1
20120102066 Eronen Apr 2012 A1
20120105473 Bar-Zeev May 2012 A1
20120105475 Tseng May 2012 A1
20120105486 Lankford May 2012 A1
20120123959 Davis May 2012 A1
20120131107 Yost May 2012 A1
20120141660 Fiedler Jun 2012 A1
20120143655 Sunaoshi Jun 2012 A1
20120143752 Wong Jun 2012 A1
20120149342 Cohen Jun 2012 A1
20120150601 Fisher Jun 2012 A1
20120154633 Rodriguez Jun 2012 A1
20120158715 Maghoul Jun 2012 A1
20120166333 Von Jun 2012 A1
20120166810 Tao Jun 2012 A1
20120179914 Brundage Jul 2012 A1
20120197794 Grigg Aug 2012 A1
20120208592 Davis Aug 2012 A1
20120209688 Lamothe Aug 2012 A1
20120209749 Hammad Aug 2012 A1
20120209907 Andrews Aug 2012 A1
20120214515 Davis Aug 2012 A1
20120218444 Stach Aug 2012 A1
20120221645 Anthru Aug 2012 A1
20120224743 Rodriguez Sep 2012 A1
20120232968 Calman Sep 2012 A1
20120233004 Bercaw Sep 2012 A1
20120239506 Saunders Sep 2012 A1
20120240044 Johnson Sep 2012 A1
20120246079 Wilson Sep 2012 A1
20120266221 Castelluccia Oct 2012 A1
20120271712 Katzin Oct 2012 A1
20120278155 Faith Nov 2012 A1
20120278241 Brown Nov 2012 A1
20120280908 Rhoads Nov 2012 A1
20120281987 Schenk Nov 2012 A1
20120284012 Rodriguez Nov 2012 A1
20120286928 Mullen Nov 2012 A1
20120290376 Dryer Nov 2012 A1
20120290449 Mullen Nov 2012 A1
20120296741 Dykes Nov 2012 A1
20120299961 Ramkumar Nov 2012 A1
20120300974 Rodriguez Nov 2012 A1
20120303425 Katzin Nov 2012 A1
20120303668 Srinivasan Nov 2012 A1
20120310760 Phillips Dec 2012 A1
20120310826 Chatterjee Dec 2012 A1
20120310836 Eden Dec 2012 A1
20120311623 Davis Dec 2012 A1
20120323717 Kirsch Dec 2012 A1
20130007201 Jeffrey Jan 2013 A1
20130008947 Aidasani Jan 2013 A1
20130013091 Cavalcanti Jan 2013 A1
20130014145 Bhatia Jan 2013 A1
20130016978 Son Jan 2013 A1
20130024371 Hariramani Jan 2013 A1
20130027576 Ryan Jan 2013 A1
20130028612 Ryan Jan 2013 A1
20130033522 Calman Feb 2013 A1
20130036048 Campos Feb 2013 A1
20130041830 Singh Feb 2013 A1
20130044233 Bai Feb 2013 A1
20130054454 Purves Feb 2013 A1
20130054470 Campos Feb 2013 A1
20130058390 Haas Mar 2013 A1
20130060665 Davis Mar 2013 A1
20130060686 Mersky Mar 2013 A1
20130073373 Fisher Mar 2013 A1
20130080230 Fisher Mar 2013 A1
20130080231 Fisher Mar 2013 A1
20130080232 Fisher Mar 2013 A1
20130080233 Fisher Mar 2013 A1
20130080240 Fisher Mar 2013 A1
20130085877 Ruhrig Apr 2013 A1
20130085941 Rosenblatt Apr 2013 A1
20130089133 Woo Apr 2013 A1
20130090926 Grokop Apr 2013 A1
20130091042 Shah Apr 2013 A1
20130091462 Gray Apr 2013 A1
20130097078 Wong Apr 2013 A1
20130097630 Rodriguez Apr 2013 A1
20130103482 Song Apr 2013 A1
20130126607 Behjat May 2013 A1
20130128060 Rhoads May 2013 A1
20130146661 Melbrod Jun 2013 A1
20130150117 Rodriguez Jun 2013 A1
20130159154 Purves Jun 2013 A1
20130159178 Colon Jun 2013 A1
20130166332 Hammad Jun 2013 A1
20130171930 Anand Jul 2013 A1
20130173477 Cairns Jul 2013 A1
20130179340 Alba Jul 2013 A1
20130179341 Boudreau Jul 2013 A1
20130183952 Davis Jul 2013 A1
20130198242 Levy Aug 2013 A1
20130200999 Spodak Aug 2013 A1
20130201218 Cates Aug 2013 A1
20130208124 Boghossian Aug 2013 A1
20130208184 Castor Aug 2013 A1
20130212012 Doherty Aug 2013 A1
20130215116 Siddique Aug 2013 A1
20130218765 Hammad Aug 2013 A1
20130223673 Davis Aug 2013 A1
20130228616 Bhosle Sep 2013 A1
20130238455 Laracey Sep 2013 A1
20130254028 Salci Sep 2013 A1
20130254422 Master Sep 2013 A2
20130256421 Johnson Oct 2013 A1
20130267275 Onishi Oct 2013 A1
20130272548 Visser Oct 2013 A1
20130282438 Hunter Oct 2013 A1
20130290106 Bradley Oct 2013 A1
20130290379 Rhoads Oct 2013 A1
20130295878 Davis Nov 2013 A1
20130311329 Knudson Nov 2013 A1
20130325567 Bradley Dec 2013 A1
20130328926 Kim Dec 2013 A1
20130329023 Suplee Dec 2013 A1
20130334308 Priebatsch Dec 2013 A1
20130346305 Mendes Dec 2013 A1
20140019758 Phadke Jan 2014 A1
20140023341 Wang Jan 2014 A1
20140058936 Ren Feb 2014 A1
20140074704 White Mar 2014 A1
20140082696 Danev Mar 2014 A1
20140086590 Ganick Mar 2014 A1
20140089672 Luna Mar 2014 A1
20140100973 Brown Apr 2014 A1
20140100997 Mayerle Apr 2014 A1
20140101691 Sinha Apr 2014 A1
20140108020 Sharma Apr 2014 A1
20140111615 McGuire Apr 2014 A1
20140123253 Davis May 2014 A1
20140136993 Luu May 2014 A1
20140142958 Sharma May 2014 A1
20140156463 Hui Jun 2014 A1
20140164124 Rhoads Jun 2014 A1
20140189056 St Jul 2014 A1
20140189524 Murarka Jul 2014 A1
20140189539 St Jul 2014 A1
20140212142 Doniec Jul 2014 A1
20140232750 Price Aug 2014 A1
20140244494 Davis Aug 2014 A1
20140244495 Davis Aug 2014 A1
20140244514 Rodriguez Aug 2014 A1
20140258110 Davis Sep 2014 A1
20140280316 Ganick Sep 2014 A1
20140282051 Cruz-Hernandez Sep 2014 A1
20140293016 Benhimane Oct 2014 A1
20140323142 Rodriguez Oct 2014 A1
20140333794 Rhoads Nov 2014 A1
20140337733 Rodriguez Nov 2014 A1
20140351765 Rodriguez Nov 2014 A1
20140369169 Iida Dec 2014 A1
20140372198 Goldfinger Dec 2014 A1
20150006390 Aissi Jan 2015 A1
20150058200 Inotay Feb 2015 A1
20150058870 Khanna Feb 2015 A1
20150112838 Li Apr 2015 A1
20150153181 Gildfind Jun 2015 A1
20150227922 Filler Aug 2015 A1
20150227925 Filler Aug 2015 A1
20160063611 Davis Mar 2016 A1
20160379082 Rodriguez Dec 2016 A1
Foreign Referenced Citations (69)
Number Date Country
1397920 Feb 2003 CN
1453728 Nov 2003 CN
1631030 Jun 2005 CN
101038177 Sep 2007 CN
101533506 Sep 2009 CN
102118886 Jul 2011 CN
102354389 Feb 2012 CN
1320043 Jun 2003 EP
1691344 Aug 2006 EP
2494496 Sep 2012 EP
2136306 Mar 2013 EP
H1175294 Jul 1999 JP
2000322078 Nov 2000 JP
2001503165 Mar 2001 JP
2003101488 Apr 2003 JP
2003134038 May 2003 JP
2004212641 Jul 2004 JP
2004274653 Sep 2004 JP
2005080110 Mar 2005 JP
2005518594 Jun 2005 JP
2005315802 Nov 2005 JP
2006163227 Jun 2006 JP
2006229894 Aug 2006 JP
2007074495 Mar 2007 JP
2007257502 Oct 2007 JP
2007334897 Dec 2007 JP
2008009120 Jan 2008 JP
2008071298 Mar 2008 JP
2008509607 Mar 2008 JP
2008158583 Jul 2008 JP
2008531430 Aug 2008 JP
2008537612 Sep 2008 JP
2008236141 Oct 2008 JP
2008252690 Oct 2008 JP
2008252907 Oct 2008 JP
2008252930 Oct 2008 JP
2008262541 Oct 2008 JP
2009042167 Feb 2009 JP
2009505288 Feb 2009 JP
2009086549 Apr 2009 JP
2009088903 Apr 2009 JP
2011527004 Oct 2011 JP
0776663 Nov 2007 KR
2010027847 Mar 2010 TL
0070523 Nov 2000 WO
0200001 Jan 2002 WO
02099786 Dec 2002 WO
2006009663 Jan 2006 WO
2006025797 Mar 2006 WO
2007021996 Feb 2007 WO
2007029582 Mar 2007 WO
2007084078 Jul 2007 WO
2008008563 Jan 2008 WO
2009061839 May 2009 WO
2009154927 Dec 2009 WO
2010022185 Feb 2010 WO
2010054222 May 2010 WO
2011059761 May 2011 WO
2011082332 Jul 2011 WO
2011116309 Sep 2011 WO
2011116309 Sep 2011 WO
2011139980 Nov 2011 WO
2012061760 May 2012 WO
2012127439 Sep 2012 WO
2013043393 Mar 2013 WO
2013043393 Mar 2013 WO
2014041381 Mar 2014 WO
2014134180 Sep 2014 WO
2014199188 Dec 2014 WO
Non-Patent Literature Citations (478)
Entry
‘Philips Debuts with Live Media Registration Service in Mediahedge,’ Apr. 2008. (2 pages).
“Linked Data,” article from archived Wikipedia, dated Oct. 25, 2011. (8 pages).
“Message Queue” article from Wikipedia, archive version dated Oct. 31, 2010.
“Tuple,” article from archived Wikipedia, dated Sep. 29, 2011. (6 pages).
Absar et al, Usability of Non-Speech Sounds in User Interfaces, Proc. of 14th Int'l Conf. on Auditory Display, Jun. 2008. (8 pages).
Accelerated Examination Support Document, U.S. Appl. No. 13/197,555 (now U.S. Pat. No. 8,194,986), Aug. 2011. (11 pages).
Ahmed, et al, MACE-Adaptive Component Management Middleware for Ubiquitous Systems, Proc. 4th Int'l Workshop on Middleware for Pervasive and Ad-Hoc Computing, 2006. (6 pages).
Ahn, et al, MetroTrack—Predictive Tracking of Mobile Events using Mobile Phones, Distributed Computing in Sensor Systems, Dartmouth/Computer Science Dept, 2008. (14 pages).
Albus, RCS—A Reference Model Architecture for Intelligent Control, Computer Magazine, 25.5, pp. 56-59, 1992.
Alin, Object Tracing with iPhone 3G's, dated Feb. 16, 2010. (42 pages).
Amigoni, “What Planner for Ambient Intelligence Applications?” IEEE Systems, Man and Cybernetics, 35(1):7-21, 2005.
Amlacher et al, Mobile Object Recognition Using Multi-Sensor Information Fusion in Urban Environments, ICIP 2008. (4 pages).
Amlacher, et al, Geo-contextual priors for attentive urban object recognition, IEEE Int'l Conf. on Robotics and Automation, 2009. (6 pages).
Anciaux, Data Degradation—Making Private Data Less Sensitive Over Time, Proceeding of the 17th ACM conference on Information and knowledge management, 2008, 3pp.
Anciaux, InstantDB—Enforcing Timely Degradation of Sensitive Data, 2008 IEEE 24th International Conference on Data Engineering, 3pp.
Announcing Google Maps API V3, http://googlegeodevelopers.blogspot.com/2009/05/announcing-google-maps-api-v3.html, May 2009. (5 pages).
Arrington, Ex-MySpace Execs Launch Gravity Into Private Beta, TechCrunch, Dec. 2009. (13 pages).
Arth et al, Real-time self-localization from panoramic images on mobile devices, IEEE Int'l Symp. on Mixed and Augmented Reality (ISMAR), Oct. 2011, pp. 37-46.
Arth, et al, Wide Area localization on mobile phones, IEEE Int'l Symp. on Mixed and Augmented Reality (ISMAR), 2009, pp. 73-82.
Assignee's U.S. Appl. No. 14/337,607, filed Jul. 22, 2014 (published US 2014-0337733 A1), including Dec. 4, 2014 non-final Office Action and Jul. 29, 2014 Preliminary Amendment. (20 pages).
Assignee's U.S. Appl. No. 14/337,607, filed Jul. 22, 2014 (specification and drawings), including filing receipt and Jul. 29, 2014 Preliminary Amendment. (9 pages).
Assignee's U.S. Appl. No. 14/337,607, filed Jul. 22, 2014 (published as US 2014-0337733 A1), specification and drawings, including filing receipt, Jul. 29, 2014 Preliminary Amendment, Oct. 19, 2015 Advisory Action, Sep. 28, 2015 Response after Final Action, Jul. 27, 2015 Final Rejection, Apr. 6, 2015 Amendment, Dec. 4, 2014 Non-final Rejection.
Assignee's U.S. Appl. No. 14/452,239, filed Aug. 5, 2014 (specification and drawings), including filing receipt. (196 pages).
Assignee's U.S. Appl. No. 14/452,282, filed Aug. 5, 2014 (specification and drawings), including filing receipt, Aug. 11, 2014 Preliminary Amendment and Sep. 18, 2014 Notice of Allowance (now issued as U.S. Pat. No. 8,886,222). (149 pages).
Azizyan, et al, SurroundSense—mobile phone localization via ambience fingerprinting, Proc. 15th Int'l Conf on Mobile Computing and Networking, 2009. (12 pages).
Bach et al., ‘Bubbles: Navigating Multimedia Content in Mobile Ad-hoc Networks,’ Proc. 2nd International Conference on Mobile and Ubiquitous Multimedia, Dec. 31, 2003. (8 pages).
Balan, Tactics-Based Remote Execution for Mobile Computing, MobiSys 2003. (14 pages).
Baldauf, et al, A Survey on Context-Aware Systems, International Journal of Ad Hoc and Ubiquitous Computing 2.4 (2007), 263-277.
Baldzer et al, Location-Aware Mobile Multimedia Applications on Niccimon Platform, 2nd Symp. on Informationssysteme fur Mobile Anwendungen IMA, 2004, pp. 318-334.
Bay et al, SURF: Speeded Up Robust Features, Eur. Conf. on Computer Vision (1), pp. 404-417, 2006.
Becker et al, Dbpedia Mobile—A Location-Enabled Linked Data Browser, 1st Int. Workshop on Linked Data on the Web, 2008. (2 pages).
Belimpasakis, et al, Experience explorer: a life-logging platform based on mobile context collection, Third IEEE Int'l Conf. on Next Generation Mobile Applications, Services and Technologies, 2009.
Bell, et al, A Digital Life, Scientific American, Mar. 2007. (9 pages).
Benbasat, et al, A framework for the automated generation of power-efficient classifiers for embedded sensor nodes, Proc. of the 5th Int'l Conf. on Embedded Networked Sensor Systems. ACM, 2007. (14 pages).
Benesova, et al, A Mobile System for Vision Based Road Sign Inventory, 5th Int'l Symp on Mobile Mapping Technology, 2007. (5 pages).
Benitez, et al, Perceptual knowledge construction from annotated image collections, Proc. IEEE Conf on Multimedia and Expo, 2002, pp. 189-192.
Bergman, Advantages and Myths of RDF at 1-2, Apr. 2009. (15 pages).
Berners-Lee et al, Tabulator Redux—Browsing and Writing Linked Data, 2008. (8 pages).
Berners-Lee, et al, On Integration Issues of Site-Specific APIs into the Web of Data, DERI Technical Report Aug. 14, 2009, Aug. 2009. (23 pages).
Berners-Lee, Linked Data, www<dot>w3<dot>org/DesignIssues/LinkedData.html, 2006, revised Jun. 2009. (8 pages).
Bichler et al, Key generation based on acceleration data of shaking processes, UbiComp 2007, LNCS vol. 4717, pp. 304-317.
Bisht et al, Context-Coded Memories—What, What, Where, When, Why?, HCI Conference, Supporting Human Memory Workshop, 2007. (4 pages).
Bizer et al, Linked Data—The Story So Far, pre-print of paper from Int'l J. on Semantic Web and Information Sys, Special Issue on Linked Data, Jul. 2009. (26 pages).
Blanchette, Data retention and the panoptic society—the social benefits of forgetfulness, The information Society, vol. 18, No. 1, 2002. (18 pages).
Bloisi, et al., “Image Based Steganography and Cryptography,” VISAPP, Mar. 8, 2007, pp. 127-134.
Bors et al., ‘Image Watermarking Using DCT Domain Constraints,’ Sep. 1996, Proc. IEEE Int. Conf. on Image Processing, vol. 3, pp. 231-234.
BPAI Decision in U.S. Appl. No. 10/139,147, dated Jan. 31, 2008. (18 pages).
Breslin et al. Integrating Social Networks and Sensor Networks, W3W Workshop on the Future of Social Networking, Jan. 15, 2009. (6 pages).
Brunette, et al., Some sensor network elements for ubiquitous computing, 4th Int'l. Symposium on Information Processing in Sensor Networks, pp. 388-392, 2005.
Byers et al. ‘Accessing Multiple Mirror Sites in Parallel: Using Tornado Codes to Speed Up Downloads,’ Proceeding IEEE Infocom. The Conference on Computer Communications, US New York, (Mar. 21, 1999) pp. 275-283.
Byers et al., ‘A Digital Fountain Approach to Reliable Distribution of Bulk Data,’ International Computer Science Technical Report TR-98-013 (May 1998). (pp. 1-22).
Campbell, et al, Low-complexity small-vocabulary speech recognition for portable devices, Proc. of 1999 IEEE Conf. on Signal Processing and Its Applications. pp. 619-622.
Cao, Follow me, follow you-spatiotemporal community context modeling and adaptation for mobile information systems, 2008, Mobile Data Management, 2008. MDM'08. 9th International Conference, pp. 108-115.
Carver, et al, Evolution of Blackboard Control Architectures, Expert Systems with Applications vol. 7.1, pp. 1-30, 1994.
Chen et al, ‘Efficient Extraction of Robust Image Features on Mobile Devices,’ Proc. of the 6th IEEE and ACM Int. Symp. on Mixed and Augmented Reality, 2007. (2 pages).
Chen et al, City-scale landmark identification on mobile devices, 2011 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Jun. 2011, pp. 737-744.
Chen, et al, ‘Listen-to-nose: a low-cost system to record nasal symptoms in daily life,’ 14th International Conference on Ubiquitous Computing, Sep. 5-8, 2012, pp. 590-591.
Ching-Huei, et al, Object Recognition in Visually Complex Environments—An Architecture for Locating Address Blocks on Mail Pieces, 9th IEEE Int'l Conf on Pattern Recognition, 1988. pp. 365-367.
Chinnici, et al., Web Services Description Language (WSDL) Version 2.0 Part I: Core Language, W3C, Jun. 2007. (103 pages).
Choudhury et al, ‘Towards Activity Databases: Using Sensors and Statistical Models to Summarize People's Lives,’ IEEE Data Eng. Bull, 29(1): 49-58, Mar. 2006.
Chu, et al, Where am I?—Scene Recognition for Mobile Robots Using Audio Features, 2006 IEEE Int'l Conf on Multimedia and Expo, pp. 885-888.
Chun, Augmented Smartphone Applications Through Clone Cloud Execution, Proc. of the 8th Workshop on Hot Topics in Operating Systems, May 18, 2009. pp. 1-5.
Claims 1-15 from European patent application No. 14709848.7, which is the European regional phase of PCT application No. PCT/US2014/018715 (corresponding to WO2014134180). (3 pages).
Claims 1-15, as amended on Feb. 9, 2017, from European patent application No. 14709848.7, which is the European regional phase of PCT application No. PCT/US2014/018715 (corresponding to WO2014134180). (4 pages).
Clarkson et al, Auditory Context Awareness via Wearable Computing, MIT Media Lab, 1998. (6 pages).
Clarkson et al, Extracting Context from Environmental Audio, Second Intl Symp. on Wearable Computers, 1998. (2 pages).
Coelho, et al, OLBS: offline location based services, 5th IEEE International Conference on Next Generation Mobile Applications, Services and Technologies (NGMAST), Sep. 14, 2011. (6 pages).
comScore, “comScore study highlights digital wallet market potential and current adoption barriers,” Feb. 4, 2013, PR Newswire, p. 5-7.
Corkill, Collaborating Software—Blackboard and Multi-Agent Systems & the Future, Proceedings of the International LISP Conference, 2003. (12 pages).
Costa-Montenegro, et al., QR-maps: an efficient tool for indoor user location based on QR-codes and Google maps, IEEE Consumer Communications and Networking Conference, Jan. 9, 2011. (pp. 928-932).
Cox, Scanning the Technology—On the applications of multimedia processing to communications, Proceedings of the IEEE, vol. 86, No. 5, May 1998, pp. 755-824.
Crowley, Context Driven Observation of Human Activity, Ambient Intelligence, 2003, pp. 101-118.
Crowley, Dynamic Composition of Process Federations for Context Aware Perception of Human Activity, IEEE Int'l Conf on Integration of Knowledge Intensive Multi-Agent Systems, pp. 300-305, 2003.
Crowley, Perceptual Components for Context Aware Computing: Ubicomp 2002, pp. 117-134.
Csirik, et al, Sequential Classifier Combination for Pattern Recognition in Wireless Sensor Networks, 10th Int'l Workshop on Multiple Classifier Systems, Jun. 2011. (pp. 187-196).
D'Ambrosio et al, Some Experiments with Real-Time Decision Algorithms, Proc. of the 12th Conference Annual Conference on Uncertainty in Artificial Intelligence (UAI-96), 1996, pp. 194-202.
D'Ambrosio, A Hybrid Approach to Reasoning Under Uncertainty, International Journal of Approximate Reasoning, vol. 2, Issue 1, Jan. 1988. (pp. 29-45).
D'Ambrosio, et al, Constrained Rational Agency, IEEE Int'l Conf on Systems, Man and Cybernetics, 1990, pp. 575-580.
D'Ambrosio, Real-Time Value-Driven Diagnosis, Proc. SPIE vol. 2244, Knowledge-Based Artificial Intelligence Systems in Aerospace and Industry, 1992, pp. 93-104.
Daude et al, Design Process for Auditory Interfaces, Proc. of 2003 Int'l Conf on Auditory Display.
David Ingram, Trust-Based Filtering for Augmented Reality in Trust Management 108-122 (P. Nixon and S. Terzis Eds. 2003).
David Lowe, ‘Object Recognition from Local Scale-Invariant Features,’ International Conference on Computer Vision, Corfu, Greece (Sep. 1999), pp. 1150-1157.
Davidyuk, et al, Context-Aware Middleware for Mobile Multimedia Applications, Proceedings of the 3rd International Conference on Mobile and ubiquitous multimedia. ACM, 2004. (8 pages).
Davis et al, MMM2-Mobile Media Metadata for Media Sharing, Conference on Human Factors in Computing Systems, pp. 1335-1338, 2005.
Davis, Towards Context-Aware Face Recognition, Proc. 13th ACM Int'l Conf on Multimedia, pp. 483-486, 2005.
de Ipina, TRIP—A Distributed Vision-Based Sensor System, Laboratory for Communication Engineering, Cambridge, Technical Report, 1999. (48 pages).
de-las-Heras-Quiros, et al, Mobile augmented reality browsers should allow labeling objects, a position paper for the augmented reality on the web, W3C Workshop: Augmented Reality on the Web, May 30, 2010. (pp. 1-5).
Denis Kalkofen, Erick Mendez, Dieter Schmalstieg, Comprehensible Visualization for Augmented Reality, 15 IEEE Transactions of Visualization and Computer Graphics No. 2 (Mar./Apr. 2009). pp. 193-204.
Dibowski et al., Ontology-based device descriptions and triple store based device repository for automation devices, 2010, Emerging Technologies and Factory Automation (ETFA), 2010 IEEE Conference, IEEE, pp. 1-9.
Divvala, et al, An Empirical Study of Context in Object Detection, Robotics Institute, Paper 270, Jan. 2009. (9 pages).
Doermann, et al. Progress in Camera-Based Document Image Analysis. Proc. Seventh International Conference on Document Analysis and Recognition, vol. 1, 2003, pp. 606-616.
Doherty et al., Automatically Segmenting LifeLog Data Into Events, 2008, IEEE, p. 20-23.
Draper, et al, Learning Blackboard-Based Scheduling Algorithms for Computer Vision, IJPRAI vol. 7.2, pp. 309-328, 1993.
Dunker, ‘Content-based Mood Classification for Photos and Music,’ MIR'08, Oct. 2008. pp. 98-204.
Dur, Optical Flow-Based Obstacle Detection and Avoidance Behaviors for Mobile Robots Used in Unmanned Planetary Exploration, 2009, IEEE, p. 638-647.
Duric et al, Integrating Perceptual and Cognitive Modeling for Adaptive and Intelligent Human-Computer Interaction, Proc. of the IEEE, vol. 90, No. 7, pp. 1272-1289, 2002.
Durrant-Whyte, et al, Simultaneous Localization and Mapping, Part I, IEEE Robotics & Automation Magazine, vol. 13, No. 2, 2006, pp. 99-110.
Durrant-Whyte, et al, Simultaneous Localization and Mapping, Part II, IEEE Robotics & Automation Magazine, vol. 13, No. 3, 2006, pp. 108-117.
Eagle, “Machine Perception and Learning of Complex Social Systems”, dated Jun. 27, 2005. (136 pages).
Eagle, et al, Reality Mining—Sensing Complex Social Systems, Personal and Ubiquitous Computing, vol. 10, 2006, pp. 255-268.
Eaton, IBM's SAPIR: the smartphone image recognition app that recognizes everything, Fast Company web site, Sep. 11, 2009. (2 pages).
Efrati, A., & Troianovski, A., ‘War over the digital wallet—google, verizon wireless spar in race to build mobile-payment services,’ Dec. 7, 2011, Wall Street Journal, p. 5-6.
Eisenman et al, BikeNet—A Mobile Sensing System for Cyclist Experience Mapping, ACM Transactions on Sensor Networks, vol. 6, No. 1, Article 6, Dec. 2009. (pp. 6:1-6:39).
Emiliano Miluzzo, ‘Sensing Meets Mobile Social Networks: The Design, Implementation and Evaluation of the CenceMe Application’, (Nov. 7, 2008), URL: http://dl.acm.org/citation.cfm?id=1460445, (May 8, 2012), XP055026265 (14 pages).
Erol et al, ‘Mobile Media Search,’ ICASSP, IEEE 2009, p. 4897-4900.
Esmaeilsabzali et al, “Online Pricing for Web Service Providers,” ACM Proc. of the 2006 Int'l Workshop on Economics Driven Software Engineering Research. (6 pages).
Ettabaa, et al, Distributed blackboard architecture for multi-spectral image interpretation based on multi-agent system, IEEE Int'l Conf, on Information and Communication Technologies, 2006. (6 pages).
Examiner's Report for Application No. CA2,775,097, dated Oct. 8, 2019, 5 pages.
Excerpts from prosecution of corresponding Chinese application 201280054460.1, including amended claims 1-46 on which prosecution is based, and translated Text of the First Action dated Mar. 27, 2015. (13 pages).
Excerpts from prosecution of corresponding Chinese application 201280054460.1, including amended claims 1-46 on which prosecution is based, translated Text of the First Action dated Mar. 27, 2015, claims submitted in response, translated Text of the Second Action dated Dec. 10, 2015, claims filed in response, and Notice of Allowance dated Apr. 13, 2016. (32 pages).
Excerpts from prosecution of corresponding European application 12833294.7, including amended claims 1-11 on which prosecution is based, Supplemental European Search Report dated Apr. 1, 2015, Written Opinion dated Apr. 10, 2015, and claims filed in response in Oct. 2015. (225 pages).
Excerpts from prosecution of corresponding European application 12833294.7, including amended claims 1-11 on which prosecution is based, Supplemental European Search Report dated Apr. 1, 2015, and Written Opinion dated Apr. 10, 2015. (9 pages).
Excerpts from prosecution of Japanese patent application P2014-531853, namely originally-presented claims, English translation of Notice of Reasons for Rejection dated Oct. 25, 2016, amended claims of Jan. 2017, and English translation of Final Notice of Rejection dated Jun. 13, 2017. (34 pages).
Excerpts from prosecution of U.S. Appl. No. 14/189,236, including original claims, Action dated Aug. 20, 2014, Response dated Nov. 19, 2014, Final Action dated Feb. 5, 2015, Response after Final dated Apr. 6, 2015, Advisory Action dated Apr. 22, 2015, Pre-Brief Conference Request dated May 11, 2015, Pre-Appeal Conference Decision dated May 21, 2015, Applicant responses dated Jun. 2 and 19, 2015, Rejection dated Jun. 19, 2015, Interview summary dated Aug. 13, 2015, Applicant Response dated Aug. 19, 2015, and Notice of Allowance dated Nov. 20, 2015. (137 pages).
Feb. 21, 2018 Examination Report from European patent application No. 14709848.7, which is the European regional phase of PCT application No. PCT/US2014/018715 (corresponding to WO2014134180). (7 pages).
First Action in Chinese Application 201080065015.6 (corresponding to PCT WO2011082332), dated Nov. 2013. (8 pages).
Flinn et al, Balancing Performance, Energy, and Quality in Pervasive Computing, Proc. of the 22nd International Conference on Distributed Computing Systems (ICDCS), Jul. 2002. (10 pages).
Flinn, et al, Self-Tuned Remote Execution for Pervasive Computing, Proc. of the 8th Workshop on Hot Topics in Operating Systems (HotOS), May 2001. (6 pages).
Foote, ‘An Overview of Audio Information Retrieval,’ Multimedia Systems, v.7 n. 1, p. 2-10, Jan. 1999.
Franklin, et al, All Gadget and No Representation Makes Jack a Dull Environment, Proc. AAAI 1998 Spring Symp. on Intelligent Environments, pp. 144-150, 1998.
Fransen, Using Vision, Acoustics, and Natural Language for Disambiguation, Proc. 2006 ACM—IEEE Int'l Conf on Human-Robot Interaction, pp. 73-80.
Funk, ‘Image Watermarking in the Fourier Domain Based on Global Features of Concentric Ring Areas’, Proceedings of SPIE 4675, Security and Watermarking of Multimedia Contents IV, 596, Apr. 29, 2002, pp. 596-599.
Further prosecution excerpt from Japanese application 2013-537885 (based on PCT publication WO/2012/061760), namely amended claims presented Oct. 8, 2015, in response to First Office Action dated May 12, 2015. (4 pages).
Further prosecution excerpts from U.S. Appl. No. 14/328,558, filed Jul. 10, 2014, namely applicant amendment filed Feb. 19, 2015; PTO Action dated Apr. 27, 2015; and applicant amendment filed Sep. 9, 2015. (48 pages).
Galleguillos, Object Categorization using Co-Occurrence, Location and Appearance, 2008 IEEE Conf on Computer Vision and Pattern Recognition (CVPR), pp. 1-8.
Gartner Says the Use of Mobile Fraud Detection in Mobile Commerce Environments is Imperative, Press Release, Sep. 20, 2010. (4 pages).
Garvey, Design-To-Time Real-Time Scheduling, PhD Dissertation, U. Mass., 1996. (167 pages).
Geihs, et al, Modeling of Context-Aware Self-Adaptive Applications in Ubiquitous and Service-Oriented Environments, Software Engineering for Self-Adaptive Systems, Springer Berlin Heidelberg, Jun. 19, 2009, pp. 146-163.
Genc, ‘Marker-less Tracking for AR: A Learning-Based Approach,’ Proc. 1st IEEE/ACM Int. Symp. on Mixed and Augmented Reality, Aug. 2002, pp. 295-304.
Gerkey et al, The Player-Stage Project, Tools for Multi-Robot and Distributed Sensor Systems, Proc. of the 11th International Conference on Advanced Robotics, 2003, pp. 317-323.
Ghias, et al., ‘Query by Humming: Musical Information Retrieval in an Audio Database,’ ACM Multimedia, pp. 231-236, Nov. 1995.
Giunchiglia et al, Ontology Driven Community Access Control, University of Trento Technical Report DISI-08-080, Dec. 2008. (19 pages).
Gleaning, Wikipedia, Feb. 9, 2010. (3 pages).
Godsmark, et al, A Blackboard Architecture for Computational Auditory Scene Analysis, Speech Communication, vol. 27, No. 3, pp. 351-366, 1999.
Google Goggles (Labs): Overview, downloaded from Google website, Dec. 14, 2009. (8 pages).
Gu, Adaptive offloading inference for delivering applications in pervasive computing environments, Proc IEEE Int'l Conf on Pervasive Computing and Communication, 2003. (8 pages).
Hakansson, Capturing the Invisible: Designing Context-Aware Photography, ACM, 2003. (4 pages).
Halevi, Tzipora et al., “Secure Proximity Detection for NFC Devices Based on Ambient Sensor Data”, 2012, ESORICS 2012, LNCS 7459, pp. 379-396.
Hansen, et al, Mixed Interaction Space—Expanding the Interaction Space with Mobile Devices, People and Computers XIX—The Bigger Picture. Springer London, pp. 365-380, 2006.
Hartig, et al, Using Web Data Provenance for Quality Assessment, Proc. of Workshop on Semantic Web and Provenance Management, Oct. 25, 2009. (6 pages).
Hausenblas, Exploiting Linked Data to Build Web Applications, IEEE Internet Computing, Jul. 2009, pp. 68-73.
Hayes-Roth, Opportunistic Control of Action in Intelligent Agents, IEEE Trans. on Systems, Man and Cybernetics, vol. 23.6, 1575-1587, 1993.
Hays, et al, Im2gps: estimating geographic information from a single Image, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2008, pp. 1-8.
Heath, Gmail's “Priority Inbox” Now Available on the iPhone, iDownload Blog, Feb. 8, 2011. (3 pages).
Henrysson, Bringing Augmented Reality to Mobile Phones, Linkopings University, 2007. (80 pages).
Henze, et al, What is that? Object recognition from natural features on a mobile phone, Proceedings of the Workshop on Mobile Interaction with the Real World, Sep. 15, 2009. (4 pages).
Hile, et al, Positioning and orientation in indoor environments using camera phones, IEEE Computer Graphics and Applications 28.4 (2008). (pp. 32-39).
Hinckley, et al, Sensing techniques for mobile interaction, Proceedings of the 13th annual ACM symposium on User interface software and technology, ACM, 2000. (10 pages).
Hollenbach, Using RDF Metadata to Enable Access Control on the Social Semantic Web, Workshop on Collaborative Construction, Management and Linking of Structured Knowledge, Oct. 2009. (10 pages).
Hollerer, User Interfaces for Mobile Augmented Reality Systems, Columbia University Thesis, 2004. (238 pages).
Hong, An Architecture for Privacy-Sensitive Ubiquitous Computing, UC Berkeley PhD Thesis, 2005. (333 pages).
Howe, ‘A Critical Assessment of Benchmark Comparison in Planning,’ Journal of Artificial Intelligence Research, 17:1-33, 2002.
Hoyt, et al, Detection of human speech in structured noise, 1994 IEEE Int'l Conf on Acoustics, Speech, and Signal Processing. (pp. II-237-II-240).
Hsu et al, Knowledge discovery over community-sharing media—from signal to intelligence, IEEE Int'l Conf. on Multimedia and Expo, Jun. 2009, pp. 1448-1451.
Huang, et al, Kimono—Kiosk-Mobile Phone Knowledge Sharing System, Proc. 2d Int'l Workshop in Ubiquitous Computing, 2005. (8 pages).
Huynh et al., Haystack: A Platform for creating, organizing and visualizing information using RDF, 2002, Semantic Web Wrokshop, V.52, pp. 1-13.
Ikezoe, et al, Development of RT-Middleware for Image Recognition Module, International Joint Conference, SICE-ICASE, IEEE, 2006. (6 pages).
International Preliminary Report on Patentability in application PCT/US2010/054544 (published as WO/2011/059761), dated May 10, 2012. (7 pages).
International Preliminary Report on Patentability, PCT/US2011/029038 (published as WO2011/116309), Sep. 25, 2012. (12 pages).
International Preliminary Report on Patentability, PCT/US2011/034829 (published as WO2011139980), Nov. 15, 2012. (8 pages).
International Search Report and Written Opinion dated Oct. 28, 2014 from PCT/US14/18715 (corresponding to WO2014134180). (13 pages).
International Search Report and Written Opinion dated Sep. 9, 2011, in PCT/US11/34829. (8 pages).
International Search Report, PCT/US2010/054544 (published as WO 2011/059761), Feb. 28, 2011. (10 pages).
International Search Report, PCT/US2011/029038 (published as WO 2011/116309), Jul. 19, 2011. (15 pages).
iPhone User Guide (for iOS 4.2 and 4.3 Software), 274 pgs., Sep. 9, 2011.
iPhone User Guide (for iPhone iOS 3.1 Software), 217 pgs., Sep. 9, 2009.
Irschara, et al, From structure-from-motion point clouds to fast location recognition, 2009 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 2599-2606.
Ismail, et al, A framework for sharing and storing serendipity moments in human life memory, First IEEE Int'l Conf. on Ubi-Media Computing, 2008. pp. 132-137.
Isokoski, Poika. “Text input methods for eye trackers using off-screen targets.” Proceedings of the 2000 symposium on Eye tracking research & applications. ACM, 2000. pp. 15-21.
James et al, Towards Semantic Image Annotation with Keyword Disambiguation Using Semantic and Visual Knowledge, 21st Int'l Joint Conference on Artificial Intelligence, Jul. 13, 2009, pp. 35-40.
Jensen, et al, Bayesian Methods for Interpretation and Control in Multi Agent Vision Systems, SPIE vol. 1708, pp. 536-548, 1992.
Jie Yang et al, ‘Smart Sight: a tourist assistant system’, Wearable Computers, 1999. Digest of Papers. The Third International Symposium on, San Francisco, CA, USA, Oct. 18-19, 1999, Los Alamitos, CA, USA, IEEE Comput. Soc, US, (Oct. 18, 1999), doi:10.1109/ISWC.1999.806662, ISBN 978-0-7695-0428-5, pp. 73-78, XP032391410.
Johnson, A. R., “Carriers to debut mobile-payments service isis next week,” Oct. 17, 2012, Wall Street Journal (Online), p. 5-6.
Jones et al, Automated Image Captioning—the TRIPOD Project, Workshop on Geographic Information on the Internet, Apr. 2009, pp. 85-86.
Jonsson, et al, Building Extendable Sensor Infrastructures for Pervasive Computing Environments, Dept. of Computer and System Sciences Technical Report, KTH Royal Institute of Technology, Sweden (2002). (11 pages).
Jul. 27, 2015 Final Office Action; Apr. 6, 2015 Amendment after Non-Final Rejection; Dec. 4, 2014 Non-final Office Action; and Jul. 22, 2014 Application Data Sheet; all from assignee's U.S. Appl. No. 14/337,607 (published as US 2014-0337733 A1). (76 pages).
Jul. 29, 2014 Preliminary Amendment; Dec. 4, 2014 Non-final Rejection; Apr. 6, 2015 Amendment; Jul. 27, 2015 Final Rejection; Sep. 28, 2015 Response after Final Action; Jan. 13, 2016 Interview Summary; and Feb. 10, 2016 Notice of Abandonment; all from assignee's U.S. Appl. No. 14/337,607, which published as US 2014-0337733 A1. (99 pages).
Jul. 29, 2015 non-final Office Action, and Sep. 14, 2015 Amendment; both from assignee's U.S. Appl. No. 14/180,218 (published as US 2015-0227922 A1). (26 pages).
Jul. 29, 2015 non-final Office Action, and Sep. 14, 2015 Amendment; both from assignee's U.S. Appl. No. 14/180,277 (published as US 2015-0227925 A1). (26 pages).
Jun. 25, 2020 Decision to Refuse from the European Patent Office, including various prosecution history, from European patent application No. 14709848.7, which is the regional phase of PCT application No. PCT/US2014/018715, published as WO2014/134180. (68 pages).
Kacimi, ‘Deliverable D2.3 Design report on the final SAPIR architecture,’ Jan. 2009. (30 pages).
Kan, Tai-Wei et al., “Applying QR Code in Augmented Reality Applications,” VRCAI 2009, Dec. 15, 2009, 6 pages.
Kang et al, Orchestrator—An Active Resource Orchestration Framework for Mobile Context Monitoring in Sensor-Rich Mobile Environments, IEEE Conf. on Pervasive Computing and Communications, pp. 135-144, 2010.
Kang, et al, SeeMon—scalable and energy-efficient context monitoring framework for sensor-rich mobile environments, Proc. 6th Int'l ACM Conf. on Mobile Systems, Applications, and Services, 2008. pp. 267-280.
Karpischek, et al, Mobile augmented reality to identify mountains, Adjunct Proc. of Aml, 2009. (4 pages).
Kelm et al., Feature-Based Video Key Frame Extraction for Low Quality Video Sequences, 2009, IEEE, p. 25-28.
Kemp, et al, eyeDentify: Multimedia cyber foraging from a smartphone, 11th IEEE International Symposium on Multimedia, Dec. 14, 2009, pp. 392-399.
Kennedy, How flickr helps us make sense of the world—context and content in community-contributed media collections, Proc. 15th Int'l Conf on Multimedia, 2007. (10 pages).
Kesorn, Multi-modal Multi-Semantic Image Retrieval, 2010, University of London, pp. 1-174.
Kieffer et al, Oral Messages Improve Visual Search, Proc. of the ACM Conf. on Advanced Visual Interfaces, 2006. (4 pages).
Kilgore et al, Listening to Unfamiliar Voices in Spatial Audio: Does Visualization of Spatial Position Enhance Voice Identification, Human Factors in Telecommunication, 2006. (8 pages).
Kim-Mai Cutler, Startups looking to make money by enhancing reality (Jul. 2009), available at http://venturebeat.com/2009/07/03/startups-push-augmented-reality-apps-to- -market/. (7 pages).
Kincaid, TC50 Star Tonchidot Releases Its Augmented Reality Sekai Camera Worldwide, Techcrunch, Dec. 21, 2009. (3 pages).
Kirsch-Pinheiro et al, Context-Aware Service Selection Using Graph Matching, 2nd Non Functional Properties and Service Level Agreements in Service Oriented Computing Workshop (NFPSLA-SOC'08), CEUR Workshop Proceedings, vol. 411. 2008. (14 pages).
Klein et al, Parallel tracking and mapping for small AR workspaces, IEEE Int'l Symp. on Mixed and Augmented Reality (ISMAR), 2007, pp. 225-234.
Klein, et al., ‘Parallel Tracking and Mapping on a camera phone,’ Mixed and Augmented Reality, ISMAR 2009, 8th IEEE International Symposium on Oct. 19-22, 2009. pp. 83-86.
Knox, K. C. , “Digitize your wallet,” Oct. 2012, Information Today, 29(9), 21, p. 5-7.
Koelma, et al, A Blackboard Infrastructure for Object-Based Image Interpretation, Proceedings of Computing Science in the Netherlands, 1994. (12 pages).
Kubota et al, 3D Auditory Scene Visualizer with Face Tracking—Design and Implementation for Auditory Awareness Compensation, 2d Int'l Symp on Universal Communication, 2008, pp. 42-49.
Kubota, et al, Design and Implementation of 3D Auditory Scene Visualizer Towards Auditory Awareness With Face Tracking, 10th IEEE Multimedia Symp., pp. 468-476, 2008.
Kumar, et al, Visible light communication systems conception and VIDAS, IETE Technical Review 25.6 (2008): 359-367.
Kunze, et al, Symbolic object localization through active sampling of acceleration and sound signatures, Proc. of 9th Int'l Conf on Ubiquitous Computing, 2007, pp. 163-180.
Lane, et al., A Survey of Mobile Phone Sensing, IEEE Communications Magazine, 48.9, pp. 140-150, 2010.
Langer, et al, Advances and prospects in high-speed information broadcast using phosphorescent white-light LEDs, 2009 11th International Conference on Transparent Optical Networks. IEEE, 2009. (6 pages).
Larson, et al., ‘SpiroSmart: using a microphone to measure lung function on a mobile phone,’ 14th International Conference on Ubiquitous Computing, Sep. 5-8, 2012, pp. 280-289.
Laskey et al, Limited Rationality in Action—Decision Support for Military Situation Assessment, Minds and Machines, vol. 10, No. 1, 2000, pp. 53-77.
Le-Phuoc, Linked Open Data in Sensor Data Mashups, Proc of the 2nd Int'l Workshop on Semantic Sensor Networks, Oct. 2009. (16 pages).
Le-Phuoc, RDF on the go: An RDF storage and query processor for mobile devices, 2010, 9th International Semantic Web Conference (ISWC2010), pp. 1-4.
Le-Phuoc, Unifying Stream Data and Linked Open Data, Deri Technical Report Aug. 15, 2010, Feb. 2009, rev'd Aug. 15, 2010. (34 pages).
Lee et al. “Fast Algorithms for Foveated Video Processing.” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, No. 2, Feb. 2003, pp. 149-162.
Lerouge et al., Generic mapping mechanism between content description metadata and user environments, 2002, International Society for Optics and Photonics, ITCom 2002: The Convergence of Information Technologies and Communications, pp. 12-21.
Levy, Secret of Googlenomics: Data-Fueled Recipe Brews Profitability, Wired Magazine, May 22, 2009. (9 pages).
Li et al, Location Recognition Using Prioritized Feature Matching, 2010 European Conference on Computer Vision, pp. 791-804.
Lieberman, et al, Out of context: Computer systems that adapt to, and learn from, context, IBM systems journal 39.3.4 (2000): 617-632.
Lim, et al, The development of an ubiquitous learning system based on audio augmented reality, IEEE Int'l Conf. on Control, Automation and Systems, 2007. pp. 1072-1077.
Lim, Scene Identification Using Discriminative Patterns, 18th Int'l IEEE Conference on Pattern Recognition, 4 pp., 2006. (4 pages).
Lin, et al, A robot indoor position and orientation method based on 2d barcode landmark, Journal of Computers, vol. 6, No. 6, Jun. 2011, pp. 1191-1197.
Lindstaedt et al, Recommending tags for pictures based on text, visual content and user context, Third Int'l Conf on Internet and Web Applications and Services, 2008, pp. 506-511.
Liu et al., ‘Robust and Transparent Watermarking Scheme for Colour Images’, Image Processing, IET, vol. 3, No. 4, Aug. 2009, pp. 228-242.
Liu, ‘Mobile Image Recognition in a Museum,’ Thesis Paper at the University of Bath, Apr. 2008. (112 pages).
Liu, et al, VCode-Pervasive Data Transfer Using Video Barcode, IEEE Trans on Multimedia, vol. 10, No. 3, 2008, pp. 361-371.
Liu, Positioning Beacon System Using Digital Camera and LEDs, IEEE Trans. on Vehicular Technology, Vol. 52, No. 2, 2003. (15 pages).
Lopez de Ipina, Interacting with our Environment through Sentient Mobile Phones, Proc. of 2d Int'l Workshop in Ubiquitous Computing, pp. 19-28, 2005.
Lopez de Ipina, TRIP: a Low-Cost Vision-Based Location System for Ubiquitous Computing, Personal and Ubiquitous Computing, vol. 6, No. 3, May 2002, pp. 206-219.
Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60, 2 (2004), pp. 91-110.
Lu, et al, SoundSense: scalable sound sensing for people-centric applications on mobile phones, Proc. 7th Int'l Conf. on Mobile Systems, Applications and Services, ACM, 2009. (14 pages).
Lu, et al, SpeakerSense: Energy Efficient Unobtrusive Speaker Identification on Mobile Phones, Pervasive Computing Conference, Jun. 2011, pp. 188-205.
Lu, et al., ‘StressSense: Detecting stress in unconstrained acoustic environments using smartphones,’ 14th International Conference on Ubiquitous Computing, Sep. 5-8, 2012, pp. 351-360.
Luley, et al, Geo-services and computer vision for object awareness in mobile system applications, Location Based Services and TeleCartography, 2007, pp. 291-300.
Luo et al, Pictures Are Not Taken in a Vacuum, An Overview of Exploiting Context for Semantic Scene Content Understanding, IEEE Signal Processing Magazine, pp. 101-114, Mar. 2006.
Machine translation of JP2007295490 (JP2007295490 was published Nov. 8, 2007). (15 pages).
MacKay D (2005) Fountain codes. IEE Proc Commun 152(6): 1062-1068.
Maik Schott et al, ‘AnnoWaNO: An annotation watermarking framework’, Image and Signal Processing and Analysis, 2009. ISPA 2009. Proceedings of 6th International Symposium on, IEEE, Piscataway, NJ, USA, (Sep. 16, 2009), ISBN 978-953-184-135-1, pp. 483-488, XP031552023.
Making the Visible Invisible (2008), available at http://www.openthefuture.com/2008/08/making_the_visible_invisible.html. (5 pages).
Malek, et al, A Framework for Context-Aware Authentication, 2008 IET 4th Int'l Conf on Intelligent Environments, 2008, pp. 1-8.
Marefat et al, Image Interpretation and Object Recognition in Manufacturing, IEEE Control Systems, Aug. 1991, pp. 8-17.
Marleen Morbee, Vladan Velisavljević, Marta Mrak and Wilfried Philips, ‘Scalable feature-based video retrieval for mobile devices’, Proceedings of the First International Conference on Internet Multimedia Computing and Service, Nov. 2009, pp. 3-9.
Marquardt, ‘Evaluating AI Planning for Service Composition in Smart Environments,’ AC Conf. on Mobile and Ubiquitous Media 2008, pp. 48-55.
Marszalek et al, Semantic Hierarchies for Visual Object Recognition, IEEE Conf. on Computer Vision and Pattern Rec., 2007, 7 pp.
Martin, Sound Source Recognition: A Theory and Computational Model, PhD Thesis, MIT, Jun. 1999. (172 pages).
Martinez, et al, MOPED: A scalable and low latency object recognition and pose estimation system, IEEE Int'l Conf. on Robotics and Automation, May 3, 2010. (7 pages).
MasterCard Introduces MasterPass—The Future of Digital Payments, Press Release, Feb. 25, 2013. (3 pages).
Mathur, et al, ProxiMate—Proximity-based Secure Pairing Using Ambient Wireless Signals, Proc 9th Int'l Conf on Mobile Systems, Applications, and Services, Jun. 2011, pp. 211-224.
Matthias Kalle Dalheimer, Programming with QT (2002). (4 pages).
Mayer-Schonberger, Useful Void—The Art of Forgetting in the Age of Ubiquitous Computing, Harvard Faculty Research Working Paper Series RWP07-022, 2007, 26pp.
Mayrhofer, A Context Authentication Proxy for IPSec Using Spatial Reference, Int'l Workshop on Trustworthy Computing, 2006, pp. 449-462.
Mayrhofer, et al, Shake well before use—Intuitive and secure pairing of mobile devices, IEEE Trans. on Mobile Computing, vol. 8, No. 6, Jun. 2009, pp. 792-806.
Mayrhofer, et al, Using a Spatial Context Authentication Proxy for Establishing Secure Wireless Communications, J. of Mobile Multimedia, 2006, pp. 198-217.
Mayrhofer, Spontaneous mobile device authentication based on sensor data, Information Security Technical Report 13, 2008, pp. 136-150.
Mayrhofer, The candidate key protocol for generating secret shared keys from similar sensor data streams, Security and Privacy in Ad-hoc and Sensor Networks, 2007, pp. 1-15.
McGookin, et al, Eyes-free overviews for mobile map applications, Proc. 11th Int'l Conf. on Human-Computer Interaction with Mobile Devices and Services, 2009. (2 pages).
Mike Schramm, Shazam update to 1.7, adds location awareness (Jun. 19, 2009), available at https://web.archive.org/web/*/http://www.tuaw.com/2009/06/19/Shazam-update-to-1-7-adds-location-awareness/. (20 pages).
Miluzzo, “Smartphone Sensing”, dated Jun. 2011. (142 pages).
Misra, et al, Optimizing Sensor Data Acquisition for Energy-Efficient Smartphone-based Continuous Event Processing, 12th IEEE International Conf. on Mobile Data Management, Jun. 2011. (10 pages).
Modro et al, Digital Watermarking Opportunities Enabled by Mobile Media Proliferation, Media Forensics and Security, SPIE Proceedings vol. 7254, Jan. 19, 2009. (10 pages).
Mosawi, et al, Aura Digital Toolkit, C0600 Group Project Reports Apr. 2003, Department of Computer Science Kent University, Sun Microsystems Centre for Learning Innovation, 2003. (8 pages).
Motorola Aims RFID Handheld At Business Operations. Anonymous. Informationweek—Online (Nov. 3, 2010). (2 pages).
Mulloni, et al., Indoor positioning and navigation with camera phones, IEEE Pervasive Computing 8.2, 2009. (10 pages).
Mynatt, et al, Designing audio aura, Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 566-573, 1998.
Naaman, Eyes on the World, Computer 39.10, pp. 108-111, 2006.
Nakagawa, Visible light communications, Proc. Conference on Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference and Photonic Applications Systems Technologies, Baltimore. 2007. (51 pages).
Nakamura et al, Multimodal object categorization by a robot, IEEE Conf on Intelligent Robots and Systems (IROS) 2007, pp. 2415-2420.
Natalia Marmasse et al, “WatchMe: communication and awareness between members of a closely-knit group”, UbiComp 2004: Ubiquitous Computing: 6th International Conference, Nottingham, UK, Sep. 7-10, 2004, Lecture Notes in Computer Science, vol. 3205, (Nov. 2, 2004), URL: http://luci.ics.uci.edu/websiteContent/weAreLuci/biographies/faculty/djp3/LocalCopy/WM_ubi04.pdf, (May 18, 2012), XP055027532.
Neira, et al, A Testbed for Research on Multisensor Object Recognition in Robotics, IFAC Symposia Series, Pergamon Press, 1993. pp. 1-6.
Nii, Blackboard Systems, Stanford University Department of Computer Science, Report No. STAN-CS-86-1123, 1986. (92 pages).
Nii, The Blackboard Model of Problem Solving and the Evolution of Blackboard Architectures, AI Magazine, vol. 7, No. 2, 1986. pp. 38-53.
Nishihara et al, Power Savings in Mobile Devices Using Context-Aware Resource Control, IEEE Conf. on Networking and Computing, 2010, pp. 220-226.
Noble, et al, Agile Application-Aware Adaptation for Mobility, Proc. of the ACM Symposium on Operating System Principles (SOSP), 1997. (12 pages).
Noll et al, The metadata triumvirate: Social annotations, anchor texts and search queries, IEEE Int'l Conf on Web Intelligence and Intelligent Agent Technology, 2008, pp. 640-647.
Nov. 14, 2014 non-final Office action from assignee's U.S. Appl. No. 14/452,239. (19 pages).
NXP brochure, ‘NXP GreenChip iCFL and GreenChip iSSL Smart Lighting Solutions,’ May 2011. (4 pages).
O'Brien, et al, Indoor Visible light communications—Challenges and possibilities, 2008 IEEE 19th International Symposium on Personal, Indoor and Mobile Radio Communications. (9 pages).
O'Hare, Context-Aware Person Identification in Personal Photo Collections, IEEE Trans. on Multimedia, vol. 11, No. 2, Feb. 2009. (9 pages).
Oct. 19, 2015 Advisory Action, Sep. 28, 2015 Response after Final Action, Jul. 27, 2015 Final Rejection, Apr. 6, 2015 Amendment, and Dec. 4, 2014 Non-final Rejection, all from Assignee's U.S. Appl. No. 14/337,607, filed Jul. 22, 2014 (published as US 2014-0337733 A1). (198 pages).
Oct. 3, 2014 Issue fee payment, Sep. 18, 2014 Notice of Allowance, and Aug. 11, 2014 Preliminary Amendment; all from parent U.S. Appl. No. 14/452,282, filed Aug. 5, 2014. (20 pages).
Olivares et al, Boosting Image Retrieval through Aggregating Search Results based on Visual Annotations, Proc. 16th ACM Int'l Conf on Multimedia, 2008, pp. 189-198.
Oshiro, ‘IBM's New Image Recognition-Based Search,’ ReadWrite, Sep. 10, 2009. (3 pages).
Oshiro, ‘SoundHound: A Music App That Could Change Mobile Search,’ Dec. 16, 2009. (3 pages).
Osman, et al, The Design and Implementation of Zap: A System for Migrating Computing Environments, Proc of the Fifth Symposium on Operating Systems Design and Implementation (OSDI), 2002. (16 pages).
Pammer et al, On the Feasibility of a Tag-based Approach for Deciding Which Objects a Picture Shows—An Empirical Study, 4th Int'l Conf on Semantic and Digital Media Tech, Dec. 2, 2009, pp. 40-51.
Pang et al, LED Location Beacon System Based on Processing of Digital Images, IEEE Trans, on Intelligent Transportation Systems, vol. 2, No. 3, 2001. (17 pages).
Papakonstantinou, et al, Framework for Context-Aware Smartphone Applications, Visual Computing, vol. 25, Aug. 2009, pp. 1121-1132.
Papazoglou, ‘Service-Oriented Computing Research Roadmap,’ Dagstuhl Seminar Proceedings 05462, 2006; and Bichler, ‘Service Oriented Computing,’ IEEE Computer, 39:3, Mar. 2006, pp. 88-90.
Papazoglou, “Service-Oriented Computing Research Roadmap,” Dagstuhl Seminar Proceedings 05462, 2006. (29 pages).
Parviz, Augmented Reality in a Contact Lens, IEEE Spectrum, Sep. 2009. (9 pages).
Pawel Korus et al, ‘A new approach to high-capacity annotation watermarking based on digital fountain codes’, Multimedia Tools and Applications., US, (Feb. 12, 2012), vol. 68, No. 1, doi:10.1007/s11042-011-0986-8, ISSN 1380-7501, pp. 59-77, XP055288220.
Pazzani, Representation of Electronic Mail Filtering Profiles: A User Study, Feb. 2002. (5 pages).
PCT International Preliminary Report on Patentability, PCT/US2011/029038 (published as WO2011116309), Oct. 2012. (12 pages).
PCT International Preliminary Report on Patentability, PCT/US2011/059412 (published as WO 2012/061760), May 7, 2013. (12 pages).
PCT International Search Report and PCT Written Opinion of the International Searching Authority, PCT/US12/54232, dated Nov. 14, 2012. (11 pages).
PCT International Search Report and Written Opinion of the International Searching Authority, PCT/US2011/059412 (published as WO 2012/061760), dated Apr. 13, 2012. (17 pages).
PCT Search report in PCT/US11/34829 (WO2011139980), dated Sep. 9, 2011. (12 pages).
PCT Written Opinion of the Int'l Searching Authority in PCT/US11/34829 (WO2011139980), dated Sep. 9, 2011. (6 pages).
PCT Written Opinion of the Int'l Searching Authority, PCT/US11/59412 (WO12061760), dated Apr. 13, 2012. (10 pages).
PCT Written Opinion of the International Searching Authority, PCT/US2010/054544 (published as WO2011059761), dated Feb. 2011. (5 pages).
PCT Written Opinion of the International Searching Authority, PCT/US2011/029038 (published as WO2011116309), dated Jul. 2011. (10 pages).
Pigeau, et al, Incremental Statistical Geo-Temporal Structuring of a Personal Camera Phone Image Collection, Proc. 17th Int'l IEEE Conference on Pattern Recognition, vol. 3, 2004, pp. 878-881.
Pirchheim et al., ‘Homography-Based Planar Mapping and Tracking for Mobile Phones’, IEEE International Symposium on Mixed and Augmented Reality 2011, Oct. 26-29, 2011, pp. 27-36.
Pollefeys, “Self-Calibration and Metric 3D Reconstruction from Uncalibrated Image Sequences,” Catholic University of Leuven, 1999. (240 pages).
Popa, et al, Using code collection to support large applications on mobile devices, 10th Int'l Conf on Mobile Computing and Networking, 2004, pp. 16-29.
Preliminary Amendment dated Jul. 14, 2014, First Action dated Nov. 19, 2014, and first Amendment dated Feb. 19, 2015, in U.S. Appl. No. 14/328,558, filed Jul. 10, 2014. (35 pages).
Press Release, Shazam Launches Enhanced Music Discovery Application on Apple App Store, Jun. 24, 2009. (3 pages).
Priyantha, et al, Eers—Energy Efficient Responsive Sleeping on Mobile Phones, Workshop on Sensing for App Phones, 2010. (5 pages).
Prosecution excerpt from Japanese application P2012-537074 (corresponding to PCT publication WO11059761), namely amended claims filed on Aug. 31, 2015, in response to Apr. 5, 2015 Action. (3 pages).
Prosecution excerpt from U.S. Appl. No. 14/312,421, namely applicant submission dated Nov. 20, 2015. (6 pages).
Prosecution excerpts from Chinese application 201080059621.7 (corresponding to PCT publication WO11059761), including claims originally-filed, first action dated Jan. 16, 2014, claims filed in response, second action dated Sep. 5, 2014, and a final action dated May 21, 2015 (without preceding claim amendment). (34 pages).
Prosecution excerpts from Chinese patent application 201080065015.6 (corresponding to WO2011082332), namely applicant submissions dated Feb. 2013, Apr. 14, 2014, Oct. 8, 2014, and Apr. 24, 2015, and translated Chinese Patent Office communications dated Nov. 28, 2013, Jul. 18, 2014, Feb. 10, 2015 and Aug. 25, 2015. (39 pages).
Prosecution excerpts from commonly-owned U.S. Appl. No. 12/712,176 (now U.S. Pat. No. 8,121,618), including applicant submissions dated Feb. 24, 2010, Nov. 14, 2011, and Dec. 12, 2011, and Office documents dated Oct. 13, 2011, Nov. 23, 2011 and Dec. 29, 2011. (52 pages).
Prosecution excerpts from commonly-owned U.S. Appl. No. 12/797,503 (now U.S. Pat. No. 9,197,736), including applicant submissions dated Jun. 9, 2010, Jul. 29, 2010, Sep. 26, 2012, Apr. 17, 2013, Oct. 11, 2013, Nov. 21, 2013, Dec. 12, 2013, Dec. 11, 2013, Jul. 8, 2014, Dec. 29, 2014, Feb. 26, 2015, and Apr. 24, 2015, and Office documents dated Sep. 10, 2012, Dec. 19, 2012, Aug. 13, 2013, Oct. 30, 2013, Dec. 13, 2013, Mar. 5, 2014, Apr. 8, 2014, Oct. 28, 2014, Jan. 27, 2015, Mar. 17, 2015, and Jul. 16, 2015. (319 pages).
Prosecution excerpts from commonly-owned U.S. Appl. No. 12/982,470 (now U.S. Pat. No. 9,143,603), including applicant submissions dated Jan. 4, 2011, Dec. 22, 2011, Feb. 4, 2014, Mar. 6, 2014, Jul. 8, 2014, Jul. 9, 2014, Dec. 23, 2014, and Jun. 6, 2015, and Office documents dated Nov. 5, 2013, Mar. 11, 2014, Apr. 8, 2014, Sep. 23, 2014, Mar. 27, 2015, and Jun. 23, 2015. (101 pages).
Prosecution excerpts from commonly-owned U.S. Appl. No. 13/465,620 (now U.S. Pat. No. 8,737,986), including applicant submissions dated May 11, 2012, Nov. 15, 2013, and Feb. 28, 2014, and Office documents dated Oct. 23, 2013, Dec. 30, 2013, Mar. 7, 2014, and Apr. 18, 2014. (67 pages).
Prosecution excerpts from commonly-owned U.S. Appl. No. 13/466,803 (now U.S. Pat. No. 8,489,115), including applicant submissions dated May 11, 2012, Dec. 5, 2012, and Jan. 7, 2013, and Office documents dated Nov. 7, 2012, Jan. 7, 2013, Jan. 18, 2013 and Mar. 4, 2013.
Prosecution excerpts from commonly-owned U.S. Appl. No. 14/242,417 (published as 20140323142), including applicant submissions dated Jul. 25, 2014, Nov. 14, 2016, and Apr. 7, 2017, and Office documents dated Aug. 12, 2016, Jan. 17, 2017, Apr. 14, 2017, and May 31, 2017. (34 pages).
Prosecution excerpts from commonly-owned U.S. Appl. No. 14/287,933 (now U.S. Pat. No. 9,234,744), including applicant submissions dated Jul. 30, 2014, Aug. 6, 2015, Sep. 14, 2015, and Sep. 15, 2015, and Office documents dated Jul. 30, 2015, Sep. 3, 2015, Sep. 14, 2015, and Oct. 23, 2015. (56 pages).
Prosecution excerpts from commonly-owned U.S. Appl. No. 14/861,758 (now U.S. Pat. No. 9,609,117), including applicant submissions dated Oct. 8, 2015, Aug. 4, 2016, Sep. 30, 2016, and Oct. 26, 2016, and Office documents dated Aug. 3, 2016, Aug. 31, 2016, and Nov. 11, 2016. (78 pages).
Prosecution excerpts from commonly-owned U.S. Appl. No. 15/259,882 (published as 20160379082), including applicant submission dated Sep. 8, 2016, and Office document dated May 2, 2017. (18 pages).
Prosecution excerpts from corresponding Chinese patent application 201180064175.3, namely amended claims presented for examination, and First Office Action dated Nov. 2015 (with translation). (19 pages).
Prosecution excerpts from European patent application 10841737.9 (corresponding to WO2011082332 ), namely applicant submissions dated Feb. 18, 2013, May 26, 2016, and Dec. 23, 2016, and EPO communications dated Nov. 13, 2015, Sep. 14, and Apr. 17, 2017. (47 pages).
Prosecution excerpts from European patent Application 11757077.0, including amended claims filed May 3, 2013, Extended European Search Report dated Jul. 25, 2013, and Response filed with EPO dated Feb. 20, 2014. (23 pages).
Prosecution excerpts from European patent application 2559030 (corresponding to WO2011116309), namely applicant submissions dated May 3, 2013, Feb. 24, 2014, Oct. 5, 2015, and Jun. 1, 2016, and EPO communications dated Jul. 25, 2013, Sep. 15, 2015, Jan. 26, 2016, Jan. 24, 2017 and May 26, 2017. (64 pages).
Prosecution excerpts from Japanese application P2012-537074 (corresponding to PCT publication WO11059761), including claims originally-filed, and claims filed in response to the first action, and Office Actions dated Sep. 2, 2014 and Apr. 7, 2015. (16 pages).
Prosecution excerpts from Japanese application P2012-537074 (corresponding to PCT publication WO11059761), including claims originally-filed, first action dated Sep. 2, 2014, claims filed in response, second action dated Apr. 7, 2015, claims filed in response, final action dated Oct. 6, 2015, Trial decision on appeal, dated Aug. 23, 2016, and cover sheet of issued patent. (39 pages).
Prosecution excerpts from Japanese application P2013-500235 (corresponding to PCT publication WO2011116309), including amended claims presented for examination, first action dated Mar. 10, 2015, and amended claims filed in response (in response to which the application was allowed). (10 pages).
Prosecution excerpts from Japanese application P2013-537885 (based on PCT publication WO/2012/061760), including claims presented for examination, and First Office Action dated May 12, 2015. (10 pages).
Prosecution excerpts from Japanese patent application P2014-531853, namely pending claims, and English translation of Notice of Reasons for Rejection dated Oct. 25, 2016. (25 pages).
Prosecution excerpts from U.S. Appl. No. 12/797,503, filed Jun. 9, 2010, including PTO Actions dated Dec. 19, 2012, Aug. 13, 2013, Apr. 8, 2014, and Oct. 28, 2014, and Applicant responses filed Apr. 17, 2013, Nov. 21, 2013, Dec. 10, 2013, Jul. 8, 2014, Dec. 29, 2014 and Apr. 24, 2015. (214 pages).
Prosecution excerpts from U.S. Appl. No. 12/821,974, including PTO communications dated Mar. 8, 2013, Sep. 11, 2013 and Oct. 7, 2013, and applicant submissions dated Dec. 13, 2012, Jun. 5, 2013, Oct. 16, 2013. (61 pages).
Prosecution excerpts from U.S. Appl. No. 12/982,470, filed Dec. 30, 2010, including PTO Actions dated Nov. 5, 2013 and Apr. 8, 2014, and Applicant submissions filed Jan. 24, 2011, Dec. 22, 2011, Feb. 4, 2014, and Jul. 8, 2014, and Notice of Allowance dated Sep. 23, 2014. (68 pages).
Prosecution excerpts from U.S. Appl. No. 13/207,841, including applicant submissions dated Dec. 2, 2011, Jul. 25, 2014, Sep. 5, 2014, Mar. 16, 2015, Jun. 3, 2015, and Jul. 23, 2015, and Office communications dated Aug. 13, 2014, Dec. 19, 2014, Apr. 10, 2015, Jul. 8, 2015, and Aug. 26, 2015. (123 pages).
Prosecution excerpts from U.S. Appl. No. 13/278,949, including PTO communications dated Apr. 22, 2014, Dec. 9, 2014, and Mar. 27, 2015, and applicant submissions dated Dec. 23, 2013, Aug. 15, 2014, Feb. 11, 2015, and Feb. 20, 2015. (125 pages).
Prosecution excerpts from U.S. Appl. No. 13/299,140, filed Nov. 17, 2011. (46 pages).
Prosecution excerpts from U.S. Appl. No. 13/465,620 (now U.S. Pat. No. 8,737,986), including applicant submissions dated May 11, 2012, Nov. 15, 2013 and Feb. 28, 2014, and Office correspondence dated Oct. 23, 2013, Dec. 30, 2013 and Mar. 7, 2014. (62 pages).
Prosecution excerpts from U.S. Appl. No. 13/552,319, including applicant submissions dated Jul. 24, 2012, Sep. 5, 2014, and Apr. 8, 2015, and Office communications dated Jul. 1, 2014, Jan. 15, 2015, and Jul. 10, 2015. (44 pages).
Prosecution excerpts from U.S. Appl. No. 13/552,337, including PTO communications dated Mar. 18, 2015, May 20, 2015, and Jun. 17, 2015, and applicant submissions dated May 12, 2015, May 26, 2015, and Jun. 4, 2015. (71 pages).
Prosecution excerpts from U.S. Appl. No. 13/607,095, including PTO communications dated Jan. 2, 2015 and May 28, 2015, and applicant submissions dated Feb. 20, 2013, Feb. 6, 2015, Feb. 19, 2015 and Jun. 11, 2015. (55 pages).
Prosecution excerpts from U.S. Appl. No. 14/189,236 (now U.S. Pat. No. 9,256,806), including applicant submissions dated Feb. 25, 2014, Nov. 19, 2014, Apr. 6, 2015, May 11, 2015 and Aug. 19, 2015, and Office documents dated Aug. 20, 2014, Feb. 10, 2015, Apr. 22, 2015, May 21, 2015 and Jun. 19, 2015. (120 pages).
Prosecution excerpts from U.S. Appl. No. 14/189,236, including PTO communications dated Aug. 20, 2014, Feb. 10, 2015, May 21, 2015 and Jun. 19, 2015, and applicant submissions dated Feb. 25, 2014, Nov. 19, 2014, May 11, 2015 and Jun. 2, 2015. (99 pages).
Prosecution excerpts from U.S. Appl. No. 14/337,607, filed Jul. 22, 2014, including Preliminary Amendment dated Jul. 29, 2014, and USPTO Action dated Dec. 4, 2014. (29 pages).
Prosecution excerpts from U.S. Appl. No. 14/337,607, including applicant submissions dated Jul. 29, 2014, Apr. 6, 2015, and Sep. 28, 2015, and Office papers dated Dec. 4, 2014, Jul. 27, 2015, and Oct. 19, 2015. (89 pages).
Prosecution excerpts from U.S. Appl. No. 14/244,287, filed Apr. 3, 2014, namely applicant submissions dated Feb. 24, 2016 and Jun. 20, 2016, and Office Actions dated Mar. 16, 2016 and Sep. 22, 2016. (53 pages).
Prosecution excerpts from U.S. Appl. No. 14/452,239, filed Aug. 5, 2014, namely applicant submissions dated Oct. 9, 2014, May 14, 2015, Aug. 27, 2015, and Dec. 15, 2015, and Office Actions dated Nov. 14, 2014, Sep. 15, 2015, and Feb. 3, 2016. (82 pages).
Prosecution excerpts from U.S. Appl. No. 14/460,719, filed Aug. 15, 2014, namely applicant submissions dated Nov. 23, 2015, Mar. 30, 2016, and Jul. 21, 2016, and Office Actions dated Jan. 5, 2016, Jun. 15, 2016, and Aug. 1, 2016. (60 pages).
Prosecution excerpts from WO2011059761, namely International Search Report (May 19, 2011), Written Opinion of the International Search Authority (Apr. 28, 2012), and International Preliminary Report on Patentability (May 1, 2012).
Prosecution excerpts from WO2011116309, namely International Search Report (Sep. 22, 2011), Written Opinion of the International Search Authority (Sep. 19, 2012) and International Preliminary Report on Patentability (Sep. 25, 2012).
Prosecution excerpts, U.S. Appl. No. 13/207,860, filed Aug. 11, 2011. (26 pages).
Prosecution excerpts, U.S. Appl. No. 13/278,949, filed Oct. 21, 2011. (62 pages).
Provisional U.S. Appl. No. 61/159,793, filed Mar. 12, 2009 (to which 20100231509 claims priority). (11 pages).
Provisional U.S. Appl. No. 61/273,673, filed Aug. 7, 2009 (to which US20110138286 claims priority). (11 pages).
Provisional U.S. Appl. No. 61/277,179, filed Sep. 22, 2009 (to which US20110138286 claims priority). (14 pages).
Provisional U.S. Appl. No. 61/295,774, filed Jan. 18, 2010 (to which 20120016678 claims priority). (219 pages).
Provisional U.S. Appl. No. 61/511,589, filed Jul. 26, 2011 (priority application for US20130026224). (37 pages).
Puangpakisiri, et al, High level activity annotation of daily experiences by a combination of a wearable device and Wi-Fi based positioning system, IEEE Int'l Conf on Multimedia and Expo, 2008. pp. 1421-1424.
Publish-Subscribe Wikipedia article, Dec. 10, 2010. (4 pages).
Quack, et al, Object Recognition for the Internet of Things, The Internet of Things, 2008, pp. 230-246.
Rao, my6sense Brings Personalized Content Ranking App to Android Phones, TechCrunch Blog, Sep. 7, 2010. (2 pages).
Rattenbury et al, Methods for Extracting Place Semantics from Flickr Tags, ACM Transactions on the Web, vol. 3, No. 1, Article 1, Jan. 2009, 30 pp.
Rattenbury et al, Towards automatic extraction of event and place semantics from Flickr tags, Proc. 30th Intl ACM Conf. on R and D in Information Retrieval, 2007, pp. 103-110.
Rattenbury et al, Towards Extracting Flickr Tag Semantics, Proc. of the 16th Int'l Conf on World Wide Web, 2007, pp. 1287-1288.
Reichle, et al, A Comprehensive Context Modeling Framework for Pervasive Computing Systems, Distributed Applications and Interoperable Systems, Springer Berlin Heidelberg, 2008. pp. 281-295.
Reitmayr, ‘Going Out: Robust Model-based Tracking for Outdoor Augmented Reality,’ Proc. 5th IEEE/ACM Int. Symp. on Mixed and Augmented Reality, 2006, pp. 109-118.
Rekimoto, CyberCode: Designing Augmented Reality Environments with Visual Tags, Proc. of Designing Augmented Reality Environments 2000, pp. 1-10.
Response to First Action in Chinese Application 201080065015.6 (corresponding to PCT WO2011082332), Apr. 2014. (10 pages).
Response to Second Action in Chinese Application 201080065015.6 (corresponding to PCT WO2011082332), Oct. 2014. (12 pages).
Rodriguez et al., ‘Evolution of Middleware to Support Mobile Discovery,’ Presented at the MobiSense Workshop/Pervasive on Jun. 12, 2011. (6 pages).
Rohs, Real-World Interaction with Camera Phones, 2004 International Symposium on Ubiquitious Computing Systems, pp. 74-89.
Roli, et al, Knowledge-Based Control in Multisensor Image Processing and Recognition, Engineering, vol. 32.06, pp. 1153-1166, 1993.
Roy, Wearable Audio Computer—A Survey of Interaction Techniques, MIT Media Lab, 1997. (11 pages).
RulerPhone, WebArchive of http://benkamens.com/rulerphone from Apr. 15, 2009. (4 pages).
Rusu, et al, Detecting and Segmenting Objects for Mobile Manipulation, IEEE 12th International Conference on Computer Vision, pp. 47-54, Sep. 27, 2009.
Rusu, et al, Robots in the Kitchen—Exploiting Ubiquitous Sensing and Actuation, Robotics and Autonomous Systems 56, pp. 844-856, 2008.
S. Brindha, "Hiding Fingerprint in Face using Scattered LSB Embedding Steganographic Technique for Smart card based Authentication system", Jul. 2011, International Journal of Computer Applications, vol. 26, No. 10, p. 1-5.
S. Roy, "Online Payment System using Steganography and Visual Cryptography", 2014, IEEE Students' Conference on Electrical, Electronics, and Computer Science, ISBN 978-1-4799-2526-1/14, p. 1-5.
Sage, Shazam Plays Nice with Twitter and, for Some Reason, Google Maps, Intomobile web site, Jun. 24, 2009. (1 page).
Samsung Galaxy i7500 user manual, 87 pgs., Rev. 1.3 (English) Oct. 2009 (phone announced Apr. 2009).
Samsung Galaxy S I9000 user manual, 132 pgs., Rev. 1.6 (English) Jul. 2010 (phone announced Mar. 2010).
Santangelo, A Chat-Bot based Multimodal Virtual Guide for Cultural Heritage Tours, Proc. of the 2006 Int'l Conf on Pervasive Systems and Computing, pp. 114-120.
Sarvas, Metadata Creation System for Mobile Images, ACM MobiSYS '04, 13 pp.
Satyanarayanan, The Case for VM-based Cloudlets in Mobile Computing, IEEE Pervasive Computing, vol. 8, No. 4, pp. 14-23, Nov. 2009.
Schandl et al., Adaptive RDF graph replication for mobile semantic web applications, 2009, Ubiquitous Computing and Communication Journal (Special Issue on Managing Data with Mobile Devices), Aug. 2009, pp. 738-745.
Scheutz, et al, First Steps Toward Natural Human-Like HRI, Autonomous Robots, vol. 22, No. 4, pp. 411-423, 2007.
Schilit et al, Context-Aware Computing Applications, IEEE Proc. of Workshop on Mobile Computing Systems and Applications, Dec. 1994, pp. 85-90.
Schmidt, Interactive Context-Aware Systems Interacting with Ambient Intelligence, Chapter 9 in Ambient Intelligence, pp. 159-178, IOS Press, 2005.
Schoning, et al, PhotoMap—using spontaneously taken images of public maps for pedestrian navigation tasks on mobile devices, Proc. 11th Int'l Conf. on Human-Computer interaction with Mobile Devices and Services, ACM, 2009. (10 pages).
Schurmann, Secure communication based on ambient audio, IEEE Trans on Mobile Computing, 2011. pp. 1-13.
Schwartz, et al, Discovering Shared Interests Using Graph Analysis, 36:8 Comm'n. ACM 78, Aug. 1993. pp. 78-89.
Schwuttke et al, Improving Real-Time Performance of Intelligent Systems with Dynamic Trade-off Evaluation, Jet Propulsion Laboratory Technical Report, 1993. (22 pages).
Schwuttke, et al, Enhancing Performance of Cooperating Agents in Real-Time Diagnostic Systems, Proc. of the 13th Int'l Joint Conference on Artificial Intelligence—vol. 1, 1993, pp. 332-337.
Scott, et al, Audio location—Accurate low-cost location sensing, Pervasive 2005. LNCS, vol. 3468, 2005, pp. 307-311.
Se, et al, Global localization using distinctive visual features, IEEE Int'l Conf on Intelligent Robots and Systems, 2002, pp. 226-231.
Se, et al, Local and global localization for mobile robots using visual landmarks, IEEE Int'l Conf. on Intelligent Robots and Systems, 2001, pp. 414-420.
Se, et al, Vision-based global localization and mapping for mobile robots, IEEE Trans. on Robotics, 2005, pp. 364-375.
Se, et al, Vision-based mobile robot localization and mapping using scale-invariant features, Proc. of the IEEE Int'l Conf. on Robotics and Automation, 2001, pp. 2051-2058.
Second Action in Chinese Application 201080065015.6 (corresponding to PCT WO2011082332), Jul. 2014. (12 pages).
Senn et al, Parallel Join Processing on Graphics Processors for the Resource Description Framework, 2010, Architecture of Computing Systems (ARCS), 2010 23rd International Conference, VDE, pp. 1-8.
Seo et al., ‘Color Images Watermarking of Multi-Level Structure for Multimedia Services’, International Conference on Convergence Information Technology, Nov. 21, 2007, pp. 854-860.
Sep. 18, 2014 Notice of Allowance, and Aug. 11, 2014 Preliminary Amendment, all from assignee's U.S. Appl. No. 14/452,282 (now issued as U.S. Pat. No. 8,886,222). (149 pages).
Sep. 7, 2016 Final Rejection and Interview Summary, Jun. 6, 2016 Supplemental Response, Apr. 18, 2016 Amendment, Dec. 17, 2015 non-final Rejection, all from assignee's U.S. Appl. No. 14/074,072 (published as US 2014-0258110 A1). (103 pages).
Serra, et al., Inertial navigation systems for user-centric indoor applications, Networked and Electronic Media Summit, Barcelona 2, Oct. 13, 2010. (5 pages).
Shan He et al, ‘High-Fidelity Data Embedding for Image Annotation’, IEEE Transactions on Image Processing, IEEE Service Center, Piscataway, NJ, US, (Feb. 1, 2009), vol. 15, No. 2, doi:10.1109/TIP.2008.2008733, ISSN 1057-7149, pp. 429-435, XP011240926.
Shazam Launches Enhanced Music Discovery Application on Apple App Store (Jun. 24, 2009), available at https://web.archive.org/web/20141115202009/http://news.shazam.com/pressreleases/shazam-launches-enhanced-music-discovery-application-on-apple-app-store-890476. (3 pages).
Sheth, Citizen Sensing, Social Signals, and Enriching Human Experience, IEEE Internet Computing, Jul. 2009. pp. 87-92.
Sigg, Context-based security—State of the art, open research topics and a case study, 5th ACM Int'l Workshop on Context-Awareness for Self-Managing Systems, Sep. 17, 2011. pp. 17-23.
Sigg, Entropy of Audio Fingerprints for Unobtrusive Device Authentication, Proc. of the 7th Int'l and Interdisciplinary Conference on Modeling and Using Context, Sep. 26, 2011, pp. 296-299.
Sim, et al, Comparing image-based localization methods, International Joint Conference on Artificial Intelligence, Aug. 9, 2003, pp. 1560-1562.
Sim, et al, Learning and evaluating visual features for pose estimation, Proc. 7th Int'l Conf. on Computer Vision (ICCV), 1999, pp. 1217-1222.
Sim, et al, Learning generative models of scene features, 2001 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 406-412.
Simon Julier, Marco Lanzagorta, Yohan Baillot, Lawrence Rosenblum, Steven Feiner, and Tobias Hollerer, Information Filtering for Mobile Augmented Reality, In: Proc. ISAR '00 (Int. Symposium on Augmented Reality), Munich, Germany, Oct. 5-6, 2000, pp. 3-11.
Skrypnyk, Iryna et al. “Scene Modelling, Recognition and Tracking with Invariant Image Features,” 2004, Proceedings of the Third IEEE and ACM International Symposium on Mixed and Augmented Reality, 10 pages.
Smeaton, Content vs context for multimedia semantics—the case of sensecam image structuring, LNCS 4306, pp. 1-10, 2006.
Solachidis et al., ‘Circularly Symmetric Watermark Embedding in 2-D DFT Domain’, IEEE Transactions on Image Processing, vol. 10, No. 11, Nov. 2001, pp. 1741-1753.
Soldatos, et al, Agent Based Middleware Infrastructure for Autonomous Context-Aware Ubiquitous Computing Services, Computer Communications 30.3 (2007) 577-591.
Sony Ericsson, Xperia X10 Extended User Guide, 113 pgs., phone announced on Nov. 3, 2009.
Soria-Morillo et al, Mobile Architecture for Communication and Development of Applications Based on Context, 12th IEEE Int'l Conf. on Mobile Data Management, Jun. 2011. pp. 48-57.
Sperling et al, Image Processing in Perception and Cognition, in Proc. of Rank Prize Funds Int'l Symp at the Royal Society of London, 1982, Springer Series in Information Sciences, vol. 11, Phys and Biological Processing of Images, pp. 359-378.
Sprint, Basics Guide, HTC Hero, 135 pgs., phone announced Jun. 24, 2009. (145 pages).
Srihari, Use of Multimedia Input in Automated Image Annotation and Content-Based Retrieval, Proc. of SPIE, vol. 2420, 1995, pp. 249-260.
Supplementary Search Report and Written Opinion dated Jul. 13, 2016 from European patent application No. 14709848.7, which is the European regional phase of PCT application No. PCT/US2014/018715 (corresponding to WO2014134180). (10 pages).
Surachat et al. ‘Pixel-wise based Digital Watermarking Using Weiner Filter in Chrominance Channel.’ 9th International Symposium on Communications and Information Technology, Sep. 28, 2009, pp. 887-892.
Sysok Discovery, ‘Media Recognition for Content Creators—A New Dimension in Media Asset Management,’ Dec. 2009. (4 pages).
Takacs, et al, Outdoors augmented reality on mobile phone using loxel-based visual feature organization, Proc. 1st ACM Int'l Conf on Multimedia Information Retrieval, 2008. (8 pages).
Thomas, Ben et al., “ARQuake: An Outdoor/Indoor Augmented Reality First Person Application,” The Fourth International Symposium on Wearable Computers, Oct. 17, 2000, 9 pages.
Thonnat, Knowledge-Based Techniques for Image Processing and for Image Understanding, Journal de Physique 4, 2002. pp. 1-58.
Tian et al, Implementing a Scalable XML Publish/Subscribe System Using Relational Database Systems, Proceedings of the 2004 ACM SIGMOD International Conference on Management of Data. (12 pages).
Tivo article, Wikipedia, Feb. 16, 2011. (11 pages).
Tobias H. Hollerer, Steven K. Feiner, Chapter 9: Mobile Augmented Reality in Location-Based Computing and Services (H Karimi and A. Hammad eds. 2004). (pp. 1-39).
Tobias Hans Hollerer, User Interfaces for Mobile Augmented Reality Systems (2004), available at https://web.archive.org/web/20050228181838/http://www.cs.ucsb.edu/~holl/pubs/hollerer-2004-diss.pdf. (238 pages).
Tobias Hollerer, Steven Feiner, Tachio Terauchi, Gus Rashid, Drexel Hallaway, Exploring Mars: Developing Indoor and Outdoor User Interfaces to a Mobile Augmented Reality System (Aug. 1999), available at www.cs.columbia.edu/~drexel/research/Hollerer-1999-CandG.pdf. pp. 779-785.
Tsai et al, ParaWorld—a GPS-Enabled Augmented Reality Gaming System, database, 2005. (2 pages).
Tsui, ‘Color Image Watermarking Using Multidimensional Fourier Transforms’, IEEE Transaction on Information Forensics and Security, vol. 3, No. 1, Mar. 2008, pp. 16-28.
Tulusan, et al., ‘Lullaby: a capture & access system for understanding the sleep environment,’ 14th International Conference on Ubiquitous Computing, Sep. 5-8, 2012, pp. 226-234.
U.S. Appl. No. 14/074,072, filed Nov. 7, 2013. (64 pages).
Unnikrishnan Audio Scene Segmentation Using a Microphone Array and Auditory Features, University of Kentucky Thesis, 2009. (82 pages).
Vallino, Augmenting Reality Using Affine Object Representations, Fundamentals of Wearable Computers and Augmented Reality, 2001. (38 pages).
van Heerde, A framework to balance privacy and data usability using data degradation, 2009 Int'l Conf on Computational Science and Eng'g, 8 pp.
van Heerde, Privacy-aware data management by means of data degradation, PhD thesis, May 2010. (191 pages).
van Renesse, Hidden and Scrambled Images—a Review, Conference on Optical Security and Counterfeit Deterrence Techniques IV, SPIE vol. 4677, pp. 333-348, 2002.
Verkasalo, Contextual patterns in mobile service usage, Personal and Ubiquitous Computing, vol. 13, No. 5, Jun. 2009 (published online Mar. 2008), pp. 331-342.
Vernon, The Space of Cognitive Vision, Chapter 2 in Cognitive Vision Systems, LNCS 3948, pp. 7-24, 2006.
Viana, et al, PhotoMap—Automatic Spatiotemporal Annotation for Mobile Photos, Web and Wireless Geographical Information Systems, pp. 187-201, 2007.
Viola et al, Detecting Pedestrians Using Patterns of Motion and Appearance, Mitsubishi Electric Research Laboratories Technical Report TR2003-90, 2003. (10 pages).
Viola, et al, Fast and Robust Classification Using Asymmetric Adaboost and a Detector Cascade, Advances in Neural Information Processing Systems 2, pp. 1311-1318, 2002.
Viola, et al, Rapid Object Detection Using a Boosted Cascade of Simple Features, Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. (9 pages).
Viola, et al, Robust Real-Time Face Detection, International Journal of Computer Vision 57.2, pp. 137-154, 2004.
Viola, et al, Robust real-time object detection, Cambridge Research Laboratory Technical Report CRL 2001-01, Feb. 2001. (30 pages).
Wagner, et al, Pose tracking from natural features on mobile phones, IEEE Int'l Symp. on Mixed and Augmented Reality (ISMAR), 2008, pp. 125-134.
Wallach, “Smartphone Security: Trends and Predictions”, dated Feb. 17, 2011. (11 pages).
Wang, et al, A Framework of Energy Efficient Mobile Sensing for Automatic User State Recognition, Proc. 7th Int'l ACM Conf. on Mobile Systems, Applications, and Services, 2009. (14 pages).
Wayback Machine, https://web.archive.org/web/20090324052359/http://audiotag.info/faq_en.html, ‘AudioTag’. Mar. 2009. (1 page).
WebPro News, The Era of the Interest Graph, Feb. 2011. (6 pages).
Weems, et al, Image Understanding Architecture Final Report, University of Mass., TEC-0029, Sep. 1991. (63 pages).
Wei, et al, Semantic annotation and reasoning for sensor data, Smart Sensing and Context, Springer Berlin Heidelberg, pp. 66-76, Sep. 2009.
Weiser, Some Computer Science Issues in Ubiquitous Computing, Comm. of the ACM, vol. 36, No. 7, Jul. 1993. pp. 75-84.
Weiser, The Computer for the 21st Century, Scientific American, 1991. (8 pages).
Wenyin, Ubiquitous Media Agents—A Framework for Managing Personally Accumulated Multimedia Files, Multimedia Systems, vol. 9, No. 2, Aug. 2003. (34 pages).
Werner, et al, Indoor positioning using smartphone camera, IEEE Int'l Conf. on Indoor Positioning and Indoor Navigation, Sep. 21, 2011. (4 pages).
White, et al, Designing a Mobile User Interface for Automated Species Identification, CHI '07 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2007. pp. 291-294.
Whiteside, J., & Decourcy, C., ‘The wallet of the future,’ Oct. 2012, Michigan Banker, 24(9), 21-23, p. 5-7.
Wilhelm, et al, Photo Annotation on a Camera Phone, CHI EA '04 Extended Abstracts on Human Factors in Computing Systems, 2004, 4 pp.
Winiwarter, Pea—a Personal Email Assistant with Evolutionary Adaptation, 5:1 Int'l J. Info. Tech. 12, Jan. 1999. pp. 1-30.
Wired article, If You're Not Seeing Data, You're Not Seeing (Aug. 2009), available at http://www.wired.com/2009/08/augmented-reality/. (15 pages).
Wolfe, D., “Visa's digital wallet, V.me, launches with PNC,” Oct. 16, 2012, American Banker, p. 5-6.
Wu, Tagsense—Marrying Folksonomy and Ontology, University of Georgia Thesis, 2007. (72 pages).
Xie, ‘Mobile Search With Multimodal Queries,’ Proceedings of the IEEE, 96(4), 589-601 , 2008.
Yahoo Transforms Android Phones Into Full-Featured Music Players, Yahoo Mobile Blog, Jun. 16, 2011, 2 pp.
Yamabe, et al, Citron—A Context Information Acquisition Framework for Personal Devices, Proceedings of 11th International Conference on Embedded and Real-Time Computing Systems and Applications, IEEE, 2005. (7 pages).
Yamabe, et al, Possibilities and Limitations of Context Extraction in Mobile Devices; Experiments with a Multi-Sensory Personal Device, Int. J. Multimedia, Ubiquitous Eng 4 (2009), 37-52.
Yamazoe et al., A Body-mounted Camera System for Capturing User-view Images without Head-mounted Camera, 2005, IEEE, p. 1-8.
Yatani et al, ‘BodyScope: a wearable acoustic sensor for activity recognition,’ 14th International Conference on Ubiquitous Computing, Sep. 5-8, 2012, pp. 341-350.
Yeung et al, Web Search Disambiguation by Collaborative Tagging, 2008. (14 pages).
Yoshida, Mobile Magic Hand: Camera Phone Based Interaction Using Visual Code and Optical Flow, Lecture Notes in Computer Science, vol. 4551, 2007, pp. 513-521.
Z. Hrytskiv, "Cryptography and Steganography of Video Information in Modern Communications", 1998, Electronics and Energetics, Vol. 11, No. 1, p. 1-11.
Zachariadis, et al, Adaptable Mobile Applications: Exploiting Logical Mobility in Mobile Computing, in Mobile Agents for Telecommunication Applications, Springer Berlin Heidelberg, 2003, pp. 170-179.
Zachariadis, et al, Building Adaptable Mobile Middleware Services Using Logical Mobility Techniques, in Contributions to Ubiquitous Computing, Springer Berlin Heidelberg, 2007, pp. 3-26.
Zander et al., A Framework for Context-driven RDF Data Replication on Mobile Devices, 2010, ACM, pp. 1-22.
Zhang et al, Image based localization in urban environments, 3d Int'l IEEE Symp. on 3D Data Processing, Visualization, and Transmission, 2006, pp. 33-40.
Zhang, et al, Local Image Representations Using Pruned Salient Points with Applications to CBIR, Proceedings of the 14th annual ACM international conference on Multimedia, 2006. (10 pages).
Zhang, et al, Multiple-instance pruning for learning efficient cascade detectors, Advances in Neural Information Processing Systems, pp. 1681-1688, Dec. 12, 2008.
Zheng et al, Adaptive Context Recognition Based on Audio Signal, IEEE 19th Int'l Conf on Pattern Recognition, 2008, pp. 1-4.
Related Publications (1)
Number Date Country
20200051059 A1 Feb 2020 US
Provisional Applications (1)
Number Date Country
61938673 Feb 2014 US
Continuations (2)
Number Date Country
Parent 15096112 Apr 2016 US
Child 16277754 US
Parent 14180277 Feb 2014 US
Child 15096112 US