At a high level, digital signage is an electronic display that shows some form of advertisement, brand promotion, or other information that may be useful to people passing by the signage. In the world of advertising and marketing there are certain trends that may shape the future of digital signage. One of these trends is known as experience branding, which allows the user to experience the product or brand because people tend to remember experiences, not brand messages or advertisements. Another trend is pervasive advertising, a term used to describe an advertisement experience with bidirectional communication in that the user chooses the advertisements, which provides brand owners insight into what consumers want to see. To incorporate these two trends, the signage must be interactive.
The following are three issues in interactive digital signage. First, people must notice the display. To a large extent it is the appearance of the display (such as brightness and the content shown) and where it is located that draws people's attention to it. However, for people to notice the display there is more to overcome than simply increasing its brightness. In today's economy of attention, a common phenomenon is display blindness, similar to banner blindness in web browsing, in which people simply ignore the signage.
The next issue is that people must notice the display is, in fact, interactive. There are four ways to communicate interactivity: a call to action (e.g., touching the screen to begin an advertisement); an attract sequence (e.g., an illustrative description of what to do); analog signage (additional signage explaining how to interact with a display); and prior knowledge (including seeing others interacting with a display before interacting with it).
Finally, people should want to interact with the signage. This issue is not as readily addressed as the other two issues because it is related to the reward and enjoyment of the interaction. Tangible rewards such as coupons could be given to users who interact with the signage system in order to encourage interaction. However, this reward can lead to people circumventing the system and often increases costs for the brand owner.
Accordingly, a need exists for digital signage to address the issues described above, for example, in providing for an interactive advertising experience with a person.
A method for interacting with a viewer of a display providing advertising content, consistent with the present invention, includes displaying content on a display and detecting a person in view of the display. The method also includes generating a user representation of the person and showing a manipulation of the user representation on the display while the person is in view of the display.
A system for interacting with a viewer of a display providing advertising content, consistent with the present invention, includes a sensor, a display, and a processor coupled to the sensor and display. The processor is configured to display content on the display and detect a person in view of the display. The processor is also configured to generate a user representation of the person and show a manipulation of the user representation on the display while the person is in view of the display.
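The method and system above can be summarized as a sensing-and-display loop. The following is a minimal illustrative sketch of such a loop, not the actual implementation described in this specification; the `SignageController` class, its sensor and display interfaces, and the dwell threshold are all hypothetical stand-ins and do not correspond to any specific vendor API.

```python
class SignageController:
    """Illustrative controller mirroring the method above: display content,
    detect a person, generate a user representation, and show a manipulation
    once the person has remained in view for a threshold time. The sensor
    and display objects are hypothetical stand-ins for sensor 12 and
    display 14."""

    def __init__(self, sensor, display, dwell_threshold_s=2.0):
        self.sensor = sensor
        self.display = display
        self.dwell_threshold_s = dwell_threshold_s
        self._first_seen = None  # timestamp when the person first appeared

    def tick(self, now):
        """One iteration of the control loop; 'now' is a timestamp in seconds."""
        person = self.sensor.detect_person()
        if person is None:
            # No one in view: reset and show the default advertising content.
            self._first_seen = None
            self.display.show("default_content")
            return "idle"
        if self._first_seen is None:
            self._first_seen = now
        representation = self.generate_representation(person)
        if now - self._first_seen >= self.dwell_threshold_s:
            # Person has lingered: show a manipulation of the representation.
            self.display.show(("manipulated", representation))
            return "manipulated"
        # Otherwise the representation simply mirrors its owner.
        self.display.show(("mirroring", representation))
        return "mirroring"

    def generate_representation(self, person):
        # A real system would build an image, silhouette, or avatar from
        # sensor data; here we just tag the detected person.
        return {"kind": "silhouette", "source": person}
```

The dwell threshold plays the role of the "particular time period" discussed later, distinguishing a person standing at the display from one merely walking past.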
The accompanying drawings are incorporated in and constitute a part of this specification and, together with the description, explain the advantages and principles of the invention. In the drawings,
Embodiments of the present invention include features for interactive content for the purpose of advertisement and other brand promotional activities. An important one of these features is to use the passers-by of digital signage as part of the advertisement as a play on experience branding. Furthermore, it is possible to show targeted advertisements depending on the person walking by. By focusing on the surprise and fun of a person's interaction, the advertisement can be more effective, since passers-by may be surprised to see themselves in the content of the signage. In order to further attract and maintain a person's attention, the representation of the passer-by can extend beyond simply mimicking its owner by being manipulated in various ways.
For example, consider a display showing content to promote a beverage brand. After the user representation of the person at the display has been shown on the display and the user has indicated they are interested (perhaps by interacting for some amount of time), the user's representation on the display will no longer follow its owner but instead is shown holding or consuming the beverage. Further interactive aspects can include keeping the user's representation active on the display for only a relatively short amount of time or incorporating miniature games and quests for the user to accomplish via interacting with the display.
Display 14 can be implemented with an electronic display for displaying information. Examples of display 14 include a liquid crystal display (LCD), a plasma display, an electrochromic display, a light emitting diode (LED) display, and an organic light emitting diode (OLED) display. Processor 16 can be implemented with any processor or computer-based device. Sensor 12 can be implemented with an active depth sensor, examples of which include the KINECT sensor from Microsoft Corporation and the sensor described in U.S. Patent Application Publication No. 2010/0199228, which is incorporated herein by reference as if fully set forth. Sensor 12 can also be implemented with other types of sensors associated with display 14 such as a digital camera or image sensor. Sensor 12 can be located proximate display 14 for detecting the presence of a person within the vicinity or view of display 14.
Sensor 12 can optionally be implemented with multiple sensors, for example a sensor located proximate display 14 and another sensor not located proximate display 14. As another option, one of the multiple sensors can include a microphone for detection of the voice (e.g., language) of a person in view of the display. The system can optionally include an output speaker to further enhance the interaction of the system with the person in view of the display.
In method 20, processor 16 determines via sensor 12 if a person is in view of display 14 (step 22). If a person is in view of display 14, processor 16 generates a user representation of the person based upon information received from sensor 12 (step 24) and displays the user representation on display 14 (step 26). A user representation can be, for example, an image, silhouette, or avatar of the person. An image as the user representation can be obtained from sensor 12 when implemented with a digital camera. A silhouette as the user representation includes a shadow or outline representing the person and having the same general shape as the person's body. The silhouette can be generated by processing the information from sensor 12, such as a digital image or outline of the person, and converting it to a representative silhouette. An avatar as the user representation is a cartoon-like representation of the person. The avatar can be generated by processing information from sensor 12, such as a digital image of the person, and converting it into a cartoon-like figure having similar features as the person.
Table 1 provides sample code for processing information from sensor 12 to generate a user representation of a person in view of display 14. This sample code can be implemented in software for execution by a processor such as processor 16.
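The Table 1 listing itself is not reproduced here. As an illustration only, the silhouette form of a user representation might be derived from a depth frame roughly as follows; the function name, the row-of-millimeters frame format, and the near/far thresholds are assumptions for this sketch and do not reflect any particular sensor SDK.

```python
def silhouette_from_depth(depth_rows, near_mm=500, far_mm=2500):
    """Hedged sketch of converting a depth frame into a silhouette mask.
    'depth_rows' is a list of rows of per-pixel depth values in millimeters
    (a stand-in for a frame from a depth sensor such as sensor 12). Pixels
    whose depth falls between near_mm and far_mm are treated as the person
    and marked 1; everything else is background, marked 0."""
    return [
        [1 if near_mm <= d <= far_mm else 0 for d in row]
        for row in depth_rows
    ]
```

The resulting binary mask has the same general shape as the person's body and could then be rendered as a shadow or outline, or used as input to avatar generation.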
If the person remains in view of display 14 for a particular time period (step 28), processor 16 generates and displays on display 14 a manipulation of the user representation. The particular time period can be used to distinguish between a person at the display rather than a person walking by the display without stopping. Alternatively, the time period can be selected to include a time short enough to encompass a person walking by the display. Displaying a manipulation of the user representation is intended, for example, to help obtain or maintain the person's interest in viewing the display. Examples of a manipulation include displaying an alteration of the user representation or displaying the user representation interacting with a product, as illustrated below. Other manipulations are also possible.
Table 2 provides sample code for generating manipulations of the user's representation for the examples of a “floating head” and holding a beverage, as further illustrated in the user interfaces described below. This sample code can be implemented in software for execution by a processor such as processor 16.
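The Table 2 listing itself is not reproduced here. As a hedged illustration of what such manipulations might look like in code, the sketch below alters a user representation held as a plain dictionary; the dictionary format, field names, and offsets are hypothetical, and a real system would operate on rendered imagery rather than a dictionary.

```python
def manipulate_representation(representation, mode):
    """Illustrative manipulation of a user representation, sketching the
    'floating head' and beverage examples described in the text."""
    altered = dict(representation)  # leave the original untouched
    if mode == "floating_head":
        # Offset the head upward so it appears detached and floating.
        altered["head_offset_px"] = (0, -40)
    elif mode == "hold_beverage":
        # Show the representation holding the promoted beverage; it stops
        # mirroring its owner while doing so.
        altered["held_item"] = "beverage"
        altered["follows_owner"] = False
    else:
        raise ValueError("unknown manipulation: " + mode)
    return altered
```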
Processor 16 via sensor 12 also determines if it detects a particular gesture by the person as determined by information received from sensor 12 (step 32). Such a gesture can include, for example, the person selecting or pointing to a product or area on display 14. If the gesture is detected, processor 16 displays on display 14 product-related information based upon the gesture (step 34). For example, processor 16 can display information about a product the person pointed to or selected on display 14. Product-related information can be retrieved by processor 16 from network 18, such as via accessing a web site for the product, or from other sources.
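The gesture handling of steps 32 and 34 can be sketched as a hit test mapping a pointing gesture to a product region on the display. In the sketch below, the (x, y) gesture coordinate and the rectangle-per-product mapping are illustrative assumptions, not a specific gesture-recognition API.

```python
def product_for_gesture(point, product_regions):
    """Sketch of steps 32 and 34: map a pointing gesture, given as an
    (x, y) screen coordinate, to the product region it lands in.
    'product_regions' maps a product name to an (x0, y0, x1, y1)
    rectangle on the display; the first matching region wins."""
    x, y = point
    for product, (x0, y0, x1, y1) in product_regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return product
    return None  # gesture did not select any product
```

Once a product is identified, the processor could retrieve product-related information, for example from a web site for that product, and show it on the display.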
Other manipulations of a user representation are possible. For example, a user representation can be shown with various types of clothing in order to promote particular brands of clothing. In a fitness or wellness type of brand promotion, a user representation can be altered to show how the user would look after a period of time on an exercise or nutritional program.