The present application is generally directed to product demonstration and, more particularly, to demonstrating a product via an altered video image.
As consumer products become more sophisticated, consumers may often become confused regarding many of the features that are available for various products. As an example, a product may be available on a store shelf, packaged such that a consumer may not have the ability to handle the product, apart from its packaging. While the packaging may include pictures and/or descriptions regarding its contents, use, etc., consumers often are still left with uncertainty regarding whether the product is designed to address the issue that the consumer wishes to address. Additionally, while oftentimes a store may provide “demo products” that are removed from the packaging to provide the consumer with more information on the product, the consumer may still be left with questions regarding use, results, related products, and/or other issues.
Included are embodiments for product demonstration. One embodiment of a system includes a first image capture device that captures a real-time video image of a first product and a memory component that stores a computer application, the computer application causing the system to identify the first product and render an altered version of the real-time video image. The altered version of the real-time video image may include a first virtual menu option that is selectable by a user making a first physical gesture and a second virtual menu option that is selectable by the user making a second physical gesture. Some embodiments include a display device for displaying the altered version of the real-time video image.
Similarly, one embodiment of a product demonstrator device includes an image capture device that captures a real-time video image of a first product and a memory component that stores a computer application, the computer application causing the product demonstrator device to identify the first product and render an altered version of the real-time video image. The altered version of the real-time video image may include a first virtual menu option that is selectable by a user making a first physical gesture and a second virtual menu option that is selectable by the user making a second physical gesture. Some embodiments include a display device for displaying the altered version of the real-time video image.
Also included are embodiments of a non-transitory computer-readable medium for product demonstration. At least one embodiment of a non-transitory computer-readable medium stores a first computer application that, when executed by a computer, causes the computer to identify the product and render an altered version of a real-time video image. The altered version of the real-time video image may include a first virtual menu option that is selectable by a user positioning the product in a predetermined first orientation. The altered version of the real-time video image may also include a second virtual menu option that is selectable by the user positioning the product in a predetermined second orientation.
The following detailed description of specific embodiments of the present disclosure can be best understood when read in conjunction with the drawings enclosed herewith.
The embodiments set forth in the drawings are illustrative in nature and not intended to be limiting of the disclosure defined by the claims. Moreover, individual features of the drawings and disclosure will be more fully apparent and understood in view of the detailed description.
The following text sets forth a broad description of numerous different embodiments of the present disclosure. The description is to be construed as exemplary only and does not describe every possible embodiment since describing every possible embodiment would be impractical, if not impossible. It will be understood that any feature, characteristic, component, composition, ingredient, product, step or methodology described herein can be deleted, combined with or substituted for, in whole or part, any other feature, characteristic, component, composition, ingredient, product, step or methodology described herein. Numerous alternative embodiments could be implemented, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the claims. All publications and patents cited herein are incorporated herein by reference.
Embodiments disclosed herein may be configured as a system, device, method, and/or non-transitory computer-readable medium for demonstrating a product and related data via a real-time video image, as well as providing an altered version of the real-time video image to create an interactive interface. In some embodiments, the user may stand within range of an image capture device, such as a camera, with a product also within range of the image capture device. The image capture device may be configured to capture a real-time video image of the user and product. The image capture device may also be physically and/or communicatively coupled to a computing device and a display device. The computing device may include a memory component that stores a computer application that causes the computing device to utilize product identification data to identify the product. Additionally, the computer application may cause the computing device to alter the real-time video image to provide an interactive menu for that product. Menu options may be selected by the user positioning the product in a predetermined orientation, within view of the image capture device.
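By way of non-limiting illustration only, the processing loop described above may be sketched as follows (the sketch is written in Python and assumes the OpenCV library for frame capture and display; the helper functions identify_product, build_menu, detect_orientation, and render_overlay, as well as the menu object's option_for method, are hypothetical placeholders rather than required implementations):

    # Illustrative sketch of a product demonstration loop (all helpers are hypothetical).
    import cv2  # assumes the OpenCV library is available for capture and display

    def demonstration_loop(identify_product, build_menu, detect_orientation, render_overlay):
        """Capture frames, identify the product, and display an altered video image."""
        capture = cv2.VideoCapture(0)                # image capture device (e.g., a camera)
        product, menu = None, None
        while True:
            ok, frame = capture.read()               # one frame of the real-time video image
            if not ok:
                break
            if product is None:
                product = identify_product(frame)    # product identification data
                if product is not None:
                    menu = build_menu(product)       # interactive menu specific to the product
            if product is not None:
                orientation = detect_orientation(frame, product)
                selected = menu.option_for(orientation)   # option tied to a predetermined orientation
                frame = render_overlay(frame, product, menu, selected)  # altered video image
            cv2.imshow("product demonstrator", frame)     # display device
            if cv2.waitKey(1) == 27:                 # ESC key ends the demonstration
                break
        capture.release()
        cv2.destroyAllWindows()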
As an example, if a user enters a store and locates a product of interest, the user may hold the product within view of an image capture device that is coupled to the product demonstrator. The product demonstrator may identify the product from the image and/or via communication with the product. The product demonstrator may additionally provide an interactive interface to the user that is specific to the identified product. More specifically, upon identifying the product, the product demonstrator may alter the real-time video image to provide a plurality of virtual menu options. The user may select one of the menu options by performing a triggering action. A triggering action may include any action to select an option, including holding the product in a predetermined orientation for a predetermined amount of time. Other triggering actions may include user input via a mouse, keyboard, touch screen, etc., a predetermined motion by the user, and/or other triggering actions. Additionally, the user can view the other menu options by changing the present orientation of the product. The other menu options may be selected by the product demonstrator detecting the product in the new orientation and receiving a triggering action. From the selected menu option, the user may be provided with a plurality of virtual menu sub-options and/or data related to the product.
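Purely for illustration, a triggering action based on holding the product in a predetermined orientation for a predetermined amount of time may be detected with dwell-time logic along the following lines (Python; the two-second threshold and the orientation labels passed to the update method are hypothetical examples, not limitations):

    import time

    class DwellTrigger:
        """Selects an option when the same orientation is held for a predetermined time."""

        def __init__(self, hold_seconds=2.0):
            self.hold_seconds = hold_seconds         # predetermined amount of time
            self._current = None
            self._since = None

        def update(self, orientation):
            """Call once per frame with the detected orientation; returns that
            orientation once it has been held for hold_seconds, else None."""
            now = time.monotonic()
            if orientation != self._current:
                self._current, self._since = orientation, now
                return None
            if self._since is not None and now - self._since >= self.hold_seconds:
                self._since = None                   # fire only once per hold
                return self._current
            return None

    # Illustrative usage: the demonstrator calls update() on every captured frame.
    trigger = DwellTrigger(hold_seconds=2.0)
    selection = trigger.update("label_facing_camera")   # non-None once the orientation is held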
Similarly, in some embodiments, the product demonstrator may not utilize the image capture device to identify the product. More specifically, the product demonstrator may receive a product identifier and simply determine the product from the identifier. The product identifier may include an image sent from a remote device (such as a mobile device that includes a second image capture device), a product name, a product number, and/or other identifiers. Additionally, the identifier may be sent from a user and/or a remote device via a wired or wireless protocol, such as via an audio signal (e.g., the user speaking to the product demonstrator), via a Bluetooth™ protocol, via a Wi-Fi protocol, via a Wi-Max protocol, via a mobile communications protocol, and the like.
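As a further non-limiting sketch, receiving a product identifier from a remote device over a network connection, rather than deriving it from the image capture device, might resemble the following (Python standard-library sockets; the port number, the plain-text identifier format, and the catalog lookup are assumptions made solely for illustration):

    import socket

    def receive_product_identifier(host="0.0.0.0", port=5050):
        """Listen for a single product identifier (e.g., a product name or number)
        sent by a remote device over a wired or wireless network connection."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
            server.bind((host, port))
            server.listen(1)
            connection, _address = server.accept()
            with connection:
                data = connection.recv(1024)         # identifier sent by the remote device
        return data.decode("utf-8").strip()          # e.g., "SKU-12345" or a product name

    def determine_product(identifier, catalog):
        """Determine the product from the received identifier using a catalog mapping."""
        return catalog.get(identifier)               # None when the identifier is unknown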
Referring now to the drawings,
More specifically, the product demonstrator 102 may be configured as a computing device, mobile telephone, personal digital assistant, laptop computer, tablet, electronic kiosk, and/or other device. Additionally, the product demonstrator 102 may include and/or be coupled to a display device 102a, an image capture device 102b, and an audio device 102c. The display device 102a may be any device for providing a graphical user interface. The display device 102a may be integral to the product demonstrator, and/or may be a separate component in a system for product demonstration. Similarly, the image capture device 102b may be positioned on and/or be communicatively coupled (via a wired and/or wireless connection) to the product demonstrator 102. The image capture device 102b may be configured to capture real-time video images, still images, 3-dimensional images, and/or other images. Also included is the audio device 102c that may also be physically integral to the product demonstrator 102 and/or physically separate from the product demonstrator 102. The audio device 102c may be configured as a speaker and/or microphone for receiving and/or providing audio data to the user.
Also included in the product demonstrator 102 is a product demonstration application 144, which includes product identification and tracking logic 144a, product menu logic 144b, and real-time video image rendering and altering logic 144c. As described in more detail below, the product identification and tracking logic 144a may be configured to cause the product demonstrator 102 to receive image data (such as real-time video images) and determine, from the received image data, at least one product. Additionally, the product identification and tracking logic 144a may be configured to track the location of the identified product within the image, regardless of movement of the product. Similarly, the product menu logic 144b may be configured to cause the product demonstrator 102 to determine a virtual menu for the identified product. Finally, the real-time video image rendering and altering logic 144c may be configured to render a real-time video image for display, as well as alter the imagery, as described in more detail below.
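Purely for illustration, the division of the product demonstration application 144 into identification and tracking logic, menu logic, and rendering and altering logic could be organized as follows (Python; the class and method names are hypothetical and do not correspond to any required implementation):

    class ProductIdentificationAndTrackingLogic:
        """Identifies at least one product in received image data and tracks its location."""

        def identify(self, frame):
            raise NotImplementedError                # e.g., marker decoding or feature matching

        def track(self, frame, product):
            raise NotImplementedError                # e.g., return the product's current bounding box

    class ProductMenuLogic:
        """Determines a virtual menu for an identified product."""

        def menu_for(self, product):
            raise NotImplementedError                # e.g., look up product-specific menu options

    class ImageRenderingAndAlteringLogic:
        """Renders the real-time video image and applies product-specific alterations."""

        def render(self, frame, product, menu, selection):
            raise NotImplementedError                # e.g., draw virtual menu options and virtual products

    class ProductDemonstrationApplication:
        """Coordinates the three pieces of logic for each captured frame."""

        def __init__(self, tracking, menus, rendering):
            self.tracking, self.menus, self.rendering = tracking, menus, rendering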
Also illustrated in
Similarly, the remote computing device 106 may also be coupled to the network 100 and may be configured to communicate with the product demonstrator 102 (and/or with the user computing device 104) to receive usage data of the product demonstrator for tracking statistics, purchases, etc. Such information may be utilized to further enhance the accuracy of the product demonstrator 102.
It should be understood that while the product demonstrator 102, the user computing device 104, and the remote computing device 106 are depicted as kiosks, personal computers and/or servers, these are merely examples. More specifically, in some embodiments any type of computing device (e.g. kiosk, mobile computing device, personal computer, server, etc.) may be utilized for any of these components. As an example, while the product demonstrator 102 may be configured as an integrated product demonstrator device, in some embodiments, the product demonstrator 102 may be configured as a system, where the components are not physically integrated within a single housing. Along those lines, while each of these computing devices is illustrated in
Additionally, the memory component 240 may be configured to store operating logic 242 and a product demonstration application 144. The product demonstration application 144 may include a plurality of different pieces of logic, some of which include the product identification and tracking logic 144a, the product menu logic 144b, and the real-time video image rendering and altering logic 144c, each of which may be embodied as a computer program, firmware, and/or hardware. A local interface 246 is also included in
The processor 232 may include any processing component operable to receive and execute instructions (such as from the data storage component 236 and/or memory component 240). The input/output hardware 230 may include and/or be configured to interface with a monitor, keyboard, mouse, printer, image capture device, microphone, speaker, gyroscope, compass, and/or other device for receiving, sending, and/or presenting data. The network interface hardware 234 may include and/or be configured for communicating with any wired or wireless networking hardware, including an antenna, a modem, LAN port, wireless fidelity (Wi-Fi) card, WiMax card, mobile communications hardware, and/or other hardware for communicating with other networks and/or devices. From this connection, communication may be facilitated between the product demonstrator 102 and other computing devices. Similarly, it should be understood that the data storage component 236 may reside local to and/or remote from the product demonstrator 102 and may be configured to store one or more pieces of data for access by the product demonstrator 102 and/or other components.
Included in the memory component 240 are the operating logic 242 and the product demonstration application 144. The operating logic 242 may include an operating system and/or other software for managing components of the product demonstrator 102. Similarly, as discussed above, the product demonstration application 144 may reside in the memory component 240 and may be configured to cause the product demonstrator 102 to identify a product from a received real-time video image, determine an interactive menu specific to the identified product, and alter the real-time video image based on whether the product is present in the real-time video image. Other functionality is also included and described in more detail below.
It should be understood that the components illustrated in
Additionally, while the product demonstrator 102 is illustrated with the product identification and tracking logic 144a, the product menu logic 144b, and the real-time video image rendering and altering logic 144c, as part of the product demonstration application 144, this is also an example. More specifically, in some embodiments, a single piece of logic may perform the described functionality. Similarly, in some embodiments, this functionality may be distributed to a plurality of different pieces of logic, which may reside in the product demonstrator 102 and/or elsewhere. Additionally, while only one application is illustrated as being stored by the memory component 240, other applications may also be stored in the memory component 240 and utilized by the product demonstrator 102.
The product demonstrator 102 may identify the product 302a from the real-time video image and/or via a communication with the product 302a. As an example, in some embodiments, the image capture device 102b can capture an image of the product 302a. From this image of the product 302a, the product demonstrator 102 can identify natural features (such as color, shape of packaging, shape of product, etc.) to identify the product 302a. Similarly, the product demonstrator 102 may be configured to identify (from the image) markers, such as bar codes, radio frequency identifier (RFID) tags, price stickers, and/or other markers. In some embodiments, the product 302a may include communication capabilities to facilitate 1-way and/or 2-way communication with the product demonstrator 102. From this communication, the product demonstrator 102 may identify the product 302a.
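For illustration only, marker-based identification from a captured frame might decode a bar code and fall back to a simple natural-feature cue such as dominant packaging color (Python; the pyzbar and OpenCV libraries, the catalog structure, and the color-based fallback are assumptions rather than requirements of the embodiments described above):

    import cv2                       # assumed: OpenCV for color-space handling
    from pyzbar import pyzbar        # assumed: pyzbar for bar code decoding

    def identify_product(frame, catalog):
        """Attempt to identify a product from one frame of the real-time video image."""
        # Marker-based identification: decode any bar code visible in the frame.
        for symbol in pyzbar.decode(frame):
            code = symbol.data.decode("utf-8")
            if code in catalog:
                return catalog[code]

        # Natural-feature fallback: match the dominant packaging hue (illustrative only).
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mean_hue = float(hsv[:, :, 0].mean())
        for product in catalog.values():
            low, high = product.get("hue_range", (None, None))
            if low is not None and low <= mean_hue <= high:
                return product
        return None                  # product not identified in this frame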
Once the product 302a has been identified, the product demonstrator 102 can determine product-specific information, as well as a product-specific alteration to make to the real-time video image. As an example, in addition to rendering a virtual product 302b and a virtual user 304b, the product demonstrator 102 may additionally alter the real-time video image to provide a virtual product alteration 302c. More specifically, in the embodiment of
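Purely as a sketch, compositing a product-specific alteration onto the real-time video image, anchored to the tracked location of the product, might look like the following (Python with OpenCV; the bounding-box representation, the alteration graphic, and the blending opacity are hypothetical):

    import cv2

    def apply_product_alteration(frame, product_box, alteration, opacity=0.6):
        """Blend an alteration graphic over the tracked product region of a frame.

        frame       -- BGR image from the image capture device
        product_box -- (x, y, w, h) location of the product within the frame
        alteration  -- BGR graphic to overlay (e.g., a virtual label or highlight)
        """
        x, y, w, h = product_box
        region = frame[y:y + h, x:x + w]
        resized = cv2.resize(alteration, (w, h))             # fit the graphic to the product
        blended = cv2.addWeighted(region, 1.0 - opacity, resized, opacity, 0.0)
        altered = frame.copy()
        altered[y:y + h, x:x + w] = blended                  # altered version of the video image
        return altered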
It should be understood that the user 304a may select the more detail option 308 via any of a number of different ways. As an example, the display device 102a may be configured as a touch screen, where the user may simply touch that portion of the screen. In some embodiments, the user may simply direct the product 302a, such that a predetermined portion of the virtual product 302b touches the more detail option 308. In some embodiments, the user may access a keyboard, mouse, controller, and/or other device for selecting the more detail option 308. Similarly, in still other embodiments, the user 304a may move such that the virtual user 304b touches the desired option.
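As one non-limiting sketch, selection by directing the product so that a predetermined portion of the virtual product 302b touches an on-screen option can be reduced to a rectangle hit test (Python; the coordinates and option layout are hypothetical examples):

    def rectangles_overlap(a, b):
        """Return True when two (x, y, w, h) rectangles intersect."""
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def selected_option(product_box, options):
        """Return the first menu option whose on-screen region the virtual product touches.

        product_box -- (x, y, w, h) of the tracked virtual product
        options     -- mapping of option name to its (x, y, w, h) screen region
        """
        for name, region in options.items():
            if rectangles_overlap(product_box, region):
                return name
        return None

    # Illustrative usage: a "more detail" option occupying part of the display.
    options = {"more detail": (500, 20, 120, 60)}
    choice = selected_option((480, 40, 80, 80), options)     # -> "more detail"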
Also included in the example of
It should be understood that while the embodiment of
Similarly, by selecting the historical price data option 506, information regarding past purchases and/or future price predictions may be provided. As an example, if the price of the product 302a has declined by 10% each month for the last 6 months, the product demonstrator 102 can provide this information to the user 304a. With this information, the user 304a (and/or the product demonstrator 102) can predict that the price will continue to decline at a similar rate over the next month. Thus, the user 304a can determine whether to purchase the product 302a now or wait for future price reductions.
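For illustration only, a simple extrapolation of historical price data, such as the 10% monthly decline described above, could be computed as follows (Python; the constant-rate model is an assumption used solely to show the arithmetic):

    def predicted_price(current_price, monthly_change_rate, months_ahead):
        """Extrapolate a price assuming the recent monthly rate of change continues.

        monthly_change_rate -- e.g., -0.10 for a 10% decline each month
        """
        return current_price * (1.0 + monthly_change_rate) ** months_ahead

    # Illustrative usage: a $20.00 product that has declined 10% per month.
    next_month = predicted_price(20.00, -0.10, 1)            # 18.0
    in_three_months = predicted_price(20.00, -0.10, 3)       # approximately 14.58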
Additionally, a plurality of virtual sub-options 704 may also be provided to the user. The virtual sub-options 704 may be associated with other products that are related to the product 302a (
Additionally included in the embodiment of
It should be understood that while, in some embodiments, the product demonstrator 102 may direct usage of the virtual product 602, this is just an example. In some embodiments, the user 304a may move their hand (and/or body) to show how the user would operate the product 302a. The product demonstrator 102 may provide feedback regarding the user's technique.
As discussed above, in some embodiments, the user 304a can control operation of the virtual product 602. Accordingly, in some embodiments, the interface from
It should also be understood that while the examples in
Additionally, products within the scope of this disclosure include a number of absorbent article products, such as diapers, training pants, adult incontinence products, feminine hygiene garments, facial tissues, bathroom tissues, paper towels and paper napkins. In some embodiments, the product may include, for example, laundry or other types of detergents, fabric softeners, bleaches, fabric pretreaters and/or dryer sheets. In still other embodiments, the product may include, for example, dishwashing detergents, glass cleaners, hard surface cleaners, fabric deodorizers, air fresheners, and/or hard surface sanitizers. In some embodiments, the product may include, for example, cosmetics, gift packs, electric or manual appliances, razors, hair products, skin products, pet food products, a consumable product such as food, etc. Other types of products are also included within the scope of this disclosure.
It should be understood that while the embodiments above describe the identification of a single product, these are merely examples. More specifically, in some embodiments, the product demonstrator 102 may be configured to identify a plurality of products and render an altered version of the real-time video image to provide corresponding virtual products. Additionally, in such embodiments, comparison data among the plurality of products may be provided, such as via the virtual menu options.
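Purely as a sketch, assembling comparison data for a plurality of identified products might be organized as follows (Python; the attribute names and catalog structure are hypothetical examples):

    def comparison_rows(products, attributes=("price", "size", "rating")):
        """Build rows of comparison data for a plurality of identified products.

        products   -- list of dictionaries describing each identified product
        attributes -- product attributes to compare via the virtual menu options
        """
        rows = [["product"] + list(attributes)]
        for product in products:
            rows.append([product.get("name", "unknown")] +
                        [product.get(attribute, "n/a") for attribute in attributes])
        return rows

    # Illustrative usage with two identified products.
    rows = comparison_rows([{"name": "Product A", "price": 9.99, "rating": 4.2},
                            {"name": "Product B", "price": 12.49, "size": "large"}])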
Additionally, in some embodiments, a system for product demonstration may include a first image capture device that captures a real-time video image of a user and a memory component that stores a computer application, the computer application causing the system to identify a first product and render an altered version of the real-time video image. In some embodiments, the altered version of the real-time video image may include the real-time video image of the user, an image of a first virtual product that is associated with the first product, a first virtual menu option that is selectable by the user making a first physical gesture, and a second virtual menu option that is selectable by the user making a second physical gesture. Some embodiments include a display device for displaying the altered version of the real-time video image.
Similarly, in some embodiments, a demonstrator device for product demonstration includes an image capture device that captures a real-time video image of a user and a memory component that stores a computer application, the computer application causing the product demonstrator device to identify a first product and render an altered version of the real-time video image. The altered version of the real-time video image may include the real-time video image of the user, an image of a first virtual product that is associated with the first product, a first virtual menu option that is selectable by the user making a first physical gesture, and a second virtual menu option that is selectable by the user making a second physical gesture. Some embodiments include a display device for displaying the altered version of the real-time video image.
Further, in some embodiments, systems for product demonstration may include an image capture device for capturing a real-time video image of a user and a memory component that stores a computer application that, when executed by a computer, causes the system to identify a product and render an altered version of the real-time video image. The altered version of the real-time video image may include a first portion that includes a current version of a part of the real-time video image and a second portion that includes a predicted version of the real-time video image, the predicted version of the real-time video image signifying a predicted result of using the product. Some embodiments include a display device for displaying the altered version of the real-time video image.
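For illustration only, composing an altered image in which a first portion shows the current real-time video image and a second portion shows a predicted result of using the product might be sketched as follows (Python with NumPy; the side-by-side layout and the externally supplied predicted frame are assumptions):

    import numpy as np

    def split_view(current_frame, predicted_frame):
        """Compose an altered image whose first portion is the current real-time video
        image and whose second portion is a predicted result of using the product."""
        height = min(current_frame.shape[0], predicted_frame.shape[0])
        left = current_frame[:height]                        # current version of the image
        right = predicted_frame[:height]                     # predicted version of the image
        return np.hstack([left, right])                      # altered version for the display device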
It should also be understood that, unless a term is expressly defined in this specification using the sentence “As used herein, the term ‘______ ’ is hereby defined to mean . . . ” or a similar sentence, there is no intent to limit the meaning of that term, either expressly or by implication, beyond its plain or ordinary meaning, and such term should not be interpreted to be limited in scope based on any statement made in any section of this patent (other than the language of the claims). No term is intended to be essential to the present disclosure unless so stated. To the extent that any term recited in the claims at the end of this patent is referred to in this patent in a manner consistent with a single meaning, that is done for the sake of clarity only, so as not to confuse the reader, and it is not intended that such a claim term be limited, by implication or otherwise, to that single meaning. Finally, unless a claim element is defined by reciting the word “means” and a function without the recital of any structure, it is not intended that the scope of any claim element be interpreted based on the application of 35 U.S.C. §112, sixth paragraph.
While particular embodiments have been illustrated and described, it would be understood by those skilled in the art that various other changes and modifications can be made without departing from the spirit and scope of the disclosure. It is therefore intended to cover in the appended claims all such changes and modifications that are within the scope of this disclosure.