1. Field of the Invention
The present invention generally relates to methods and systems for providing a presentation experience to a user. More particularly, the present invention relates to providing a user a non-degraded presentation experience while limiting access to the non-degraded presentation experience.
2. Related Art
Substantial effort and cost have been invested in protecting every type of electronic data (e.g., software programs, movies, music, books, text, graphics, etc.) from unauthorized use. Typically, a protection scheme is developed and implemented in hardware and/or software, which prompts both organized and unorganized attempts to defeat it. Since a single successful attack can completely undermine a protection scheme, the cost of implementing a protection scheme is significantly greater than the cost of defeating it.
Moreover, once the protection scheme is defeated, the data can be easily copied and provided to unauthorized users, denying revenue streams to the creators of the data.
Even if an impenetrable protection scheme is crafted, the data may still be susceptible to unauthorized copying via the “analog hole”. Data that is self-revealing is particularly susceptible via the “analog hole”. Self-revealing data refers to data that delivers its value to the user only by revealing (or presenting) the information of which it is composed. That is, self-revealing data provides a visual and/or audio presentation experience to the user. Examples of self-revealing data include movies, music, books, text, and graphics. The “analog hole” is the presentation experience that reveals sound and/or images that can be easily recorded, copied, and distributed to unauthorized users.
In contrast, a software program is an example of non self-revealing data. For instance, the value of a chess software program lies in the chess algorithm of the chess software program. Even if a great number of chess games are played and recorded, there still are unplayed chess games that have to be played to discover additional elements of the chess algorithm of the chess software program.
Thus, the “analog hole” has to be “plugged” to ensure that any implemented protection scheme is not undermined by the “analog hole”.
A user is provided a non-degraded presentation experience from data while access to the non-degraded presentation experience is limited. In an embodiment, one or more attributes are gathered from one or more sources. The data is accessed. Further, the data is adapted using the one or more attributes so that availability of the non-degraded presentation experience to the user is dependent on the one or more attributes. Examples of attributes include user attributes, environmental attributes, and presentation attributes.
The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the present invention.
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention.
As described above, the “analog hole” is the presentation experience that reveals sound and/or images that can be easily recorded, copied, and distributed to unauthorized users. In accordance with embodiments of the present invention, the “analog hole” is “plugged” by introducing customization into the presentation experience. The customization is achieved by adapting the data using nondeterministic information (e.g., user attribute from the user, environmental attribute, presentation attribute of a presentation device). This nondeterministic information can be static or dynamic. Presentation of the adapted data is intended to provide the user a non-degraded presentation experience and to cause unauthorized recordings of the adapted and presented data to make available solely a degraded presentation experience to unauthorized users.
Since the ideal non-degraded presentation experience can be subjective, because different users have different expectations, it should be understood that “non-degraded presentation experience” refers to a range of presentation experiences. At one end of this range lies a truly non-degraded presentation experience, while at the other end lies a minimally degraded presentation experience that is still sufficiently acceptable to the user.
The data storage unit 1 can store any type of data (e.g., audio, visual, textual, self-revealing data, non self-revealing data, etc.). As described above, examples of self-revealing data include movies, music, books, text, and graphics. In an embodiment, the system 11 implements a protection scheme for the data.
The attribute unit 3 gathers one or more attributes. The attributes can be gathered from one or more sources. Examples of these sources include users, environments where the system 11 is located, and presentation devices. Moreover, the attributes can be static or dynamic. In the case of static attributes, the attribute unit 3 makes a one-time determination of these static attributes before the presentation experience is started. In the case of dynamic attributes, the attribute unit 3 initially determines values for these dynamic attributes and then proceeds to track changes over time in these dynamic attributes.
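The one-time handling of static attributes versus the continuous tracking of dynamic attributes can be sketched as follows. This is a minimal illustration only; the class name and the callable "probe" interface are hypothetical, not part of the described system.

```python
class AttributeUnit:
    """Sketch of an attribute unit: static attributes are determined once
    before the presentation starts; dynamic attributes are tracked over time."""

    def __init__(self, static_probes, dynamic_probes):
        # Each probe is a callable returning the current value of one attribute.
        self.static_probes = static_probes
        self.dynamic_probes = dynamic_probes
        self.values = {}

    def initialize(self):
        # Static attributes: one-time determination before presentation starts.
        for name, probe in self.static_probes.items():
            self.values[name] = probe()
        # Dynamic attributes: initial values, refreshed later during presentation.
        for name, probe in self.dynamic_probes.items():
            self.values[name] = probe()

    def refresh(self):
        # Track changes over time in the dynamic attributes only;
        # static attribute values are left as initially determined.
        for name, probe in self.dynamic_probes.items():
            self.values[name] = probe()
        return self.values
```

For example, a static probe might report the user's visual acuity once, while a dynamic probe reports the current gaze position on every refresh.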
Continuing, the presentation device 4 presents the adapted data from the adaptation processing unit 2 to the user 5, providing the user 5 the presentation experience. Examples of the presentation device 4 include one or more television monitors, computer monitors, and/or speakers. The presentation device 4 can be designed for visual and/or acoustical presentation to the user 5. Moreover, the presentation device 4 can present the adapted data to multiple users instead of a single user.
Referring to
At 22, one or more attributes are gathered by the attribute unit 3. Sources for the attributes include users, environments where the system 11 is located, and presentation devices. At 24, data for the presentation experience is accessed from the data storage unit 1.
Further, at 26, the data is adapted using the one or more attributes so that availability of the non-degraded presentation experience to the user 5 is dependent on the one or more attributes. In an embodiment, an adaptation processing unit 2 performs the adaptation. Moreover, the adapted data is presented using the presentation device 4, providing the non-degraded presentation experience, which is dependent on the attributes.
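The flow at 22, 24, and 26 can be sketched as a simple loop. This is illustrative only; the function names and callable interfaces are assumptions, not the claimed implementation.

```python
def run_presentation(gather_attributes, access_data, adapt, present_frame):
    """Sketch of steps 22/24/26: gather attributes, access data, adapt, present."""
    data = access_data()                         # 24: access the data
    for frame in data:
        attributes = gather_attributes()         # 22: re-gathered each frame so
                                                 # dynamic attributes stay current
        present_frame(adapt(frame, attributes))  # 26: availability of the
                                                 # non-degraded experience
                                                 # depends on the attributes
```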
The data storage unit 10 can store any type of data (e.g., audio, visual, textual, self-revealing data, non self-revealing data, etc.). As described above, examples of self-revealing data include movies, music, books, text, and graphics. In an embodiment, the system 100 implements a protection scheme for the data.
The user attribute unit 30 gathers one or more user attributes from the user 50. The user attributes can be static or dynamic. Examples of static user attributes are user's audio acuity and user's visual acuity. Examples of dynamic user attributes include eye movement, head movement, and virtual movement in a virtual environment. In the case of static attributes, the user attribute unit 30 makes a one-time determination of these static user attributes before the presentation experience is started. In the case of dynamic attributes, the user attribute unit 30 initially determines values for these dynamic user attributes and then proceeds to track changes over time in these dynamic user attributes. As will be explained below, tracked eye movement facilitates adapting data that will be visually presented to the user 50. Continuing, tracked head movement facilitates adapting data that will be acoustically presented to the user 50. Further, tracked virtual movement facilitates adapting data that will be visually presented to the user 50 in a virtual environment. Moreover, the user attribute unit 30 can track one or more attributes of multiple users.
For tracking eye movement, the user attribute unit 30 may utilize one or more eye tracking techniques. Examples of eye tracking techniques include reflected light tracking techniques, electro-oculography tracking techniques, and contact lens tracking techniques. Although these exemplary eye tracking techniques are well-suited for the user attribute unit 30, it should be understood that other eye tracking techniques are also well-suited for the user attribute unit 30. Since the accuracy of each eye tracking technique is less than ideal, use of multiple eye tracking techniques increases accuracy. Similarly, the user attribute unit 30 may utilize one or more position tracking techniques to track head movement of the user 50. Furthermore, the user attribute unit 30 may utilize one or more virtual movement tracking techniques to track virtual movement of the user 50. Examples of virtual movement tracking techniques include suit-based tracking techniques, mouse-based tracking techniques, and movement controller-based tracking techniques.
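Combining multiple eye tracking techniques to increase accuracy can be sketched as a confidence-weighted average of their gaze estimates. The function name and the (position, weight) interface are assumptions for illustration.

```python
def fuse_gaze_estimates(estimates):
    """
    Combine gaze estimates from several eye tracking techniques into one.
    `estimates` is a list of ((x, y), weight) pairs, where the weight
    reflects confidence in that technique. A weighted mean reduces the
    error contributed by any single, imperfect technique.
    """
    total = sum(w for _, w in estimates)
    x = sum(p[0] * w for p, w in estimates) / total
    y = sum(p[1] * w for p, w in estimates) / total
    return (x, y)
```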
The presentation device 40 presents the adapted data from the adaptation processing unit 20 to the user 50, providing the user 50 the presentation experience. Examples of the presentation device 40 include one or more television monitors, computer monitors, and/or speakers. The presentation device 40 can be designed for visual and/or acoustical presentation to the user 50.
Referring to
In the case of data that will be visually presented to the user 50, the adaptation processing unit 20 may utilize static user attributes (e.g., the user's 50 visual acuity) and/or dynamic user attributes (e.g., eye movement). Focusing on tracked eye movement of the user 50: instead of processing the data so that the entire data is visually presented in a high-resolution (or non-degraded) state, the adaptation processing unit 20 adapts the data such that the data that will be visually presented in the foveal field of the user's 50 visual field is maintained in a high-resolution state, for the reasons described below. The tracked eye movement determines the origin location of the foveal field and the destination location of the foveal field. While the eye movement is causing the foveal field to move from an origin location to a destination location, it is possible to visually present the data in a state other than a high-resolution state, since the user's 50 visual system is greatly suppressed (though not entirely shut off) during this type of eye movement. Further, the adaptation processing unit 20 adapts the data that will be visually presented outside the foveal field of the user's 50 visual field to a low-resolution (or degraded) state. Thus, the user 50 is provided a non-degraded presentation experience, while an unauthorized recording of the output of the presentation device 40 captures mostly low-resolution data with a minor high-resolution zone that moves unpredictably. This unauthorized recording simply provides a degraded presentation experience to an unauthorized user. It is unlikely that the user 50 and the unauthorized user would have the same sequence of eye movements, since there are both involuntary and voluntary eye movements. Additionally, the user 50 gains a level of privacy, since another person looking at the output of the presentation device 40 would mostly see low-resolution data with a minor high-resolution zone that moves unpredictably.
Thus, the user 50 is able to use the system 100 in a public place and is still able to retain privacy.
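The foveal-field adaptation described above can be sketched as follows. This is a minimal illustration operating on a 2-D list of grayscale values; the function name, the circular foveal region, and the use of a whole-frame average as a stand-in for "low resolution" are simplifying assumptions, not the claimed implementation.

```python
def adapt_frame(frame, gaze, foveal_radius):
    """
    Keep pixels within `foveal_radius` of the tracked gaze point in a
    high-resolution (non-degraded) state; replace everything outside the
    foveal field with a degraded value. `frame` is a 2-D list of
    grayscale pixel values; `gaze` is the (x, y) gaze point.
    """
    h = len(frame)
    w = len(frame[0])
    # Crude degradation: the frame-wide average stands in for low resolution.
    avg = sum(sum(row) for row in frame) / (h * w)
    gx, gy = gaze
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            if (x - gx) ** 2 + (y - gy) ** 2 <= foveal_radius ** 2:
                row.append(frame[y][x])     # foveal field: non-degraded
            else:
                row.append(avg)             # peripheral field: degraded
        out.append(row)
    return out
```

An unauthorized recording of such output would contain mostly the degraded values, with only a small high-resolution zone wandering with the authorized user's gaze.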
In general, the user's 50 visual field comprises the foveal field and the peripheral field. The retina of the eye has an area known as the fovea that is responsible for the user's sharpest vision. The fovea is densely packed with “cone”-type photoreceptors. The fovea enables reading, watching television, driving, and other activities that require the ability to see detail. Thus, the eye moves to make objects appear directly on the fovea when the user 50 engages in activities such as reading, watching television, and driving. The fovea covers approximately 1 to 2 degrees of the field of view of the user 50. This is the foveal field. Outside the foveal field is the peripheral field. Typically, the peripheral field provides 15 to 50 percent of the sharpness and acuity of the foveal field. This is generally inadequate to see an object clearly. It follows, conveniently for eye tracking purposes, that in order to see an object clearly, the user must move the eyeball to make that object appear directly on the fovea. Hence, the user's 50 eye position as tracked by the user attribute unit 30 gives a positive indication of what the user 50 is viewing clearly at the moment.
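The 1-to-2-degree foveal field translates into an on-screen radius that depends on viewing distance and display density. A sketch of that conversion follows; the function name, parameter units (inches, pixels per inch), and the 2-degree default are illustrative assumptions.

```python
import math

def foveal_radius_px(viewing_distance_in, pixels_per_inch, foveal_deg=2.0):
    """
    Convert the ~1-2 degree foveal field into an on-screen pixel radius.
    The radius subtended by half the foveal angle at the viewing distance:
        r = d * tan(foveal_deg / 2) * pixels_per_inch
    """
    half_angle = math.radians(foveal_deg / 2.0)
    return viewing_distance_in * math.tan(half_angle) * pixels_per_inch

# e.g., viewing a 100-ppi monitor from 24 inches:
# r = 24 * tan(1 degree) * 100, roughly 42 pixels
```

This suggests why the high-resolution zone in an unauthorized recording is minor: only a few tens of pixels in radius on a typical display.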
Contrary to the user's 50 perception, the eye is rarely stationary. It moves frequently as it sees different portions of the visual field. There are many different types of eye movements. Some eye movements are involuntary, such as rolling, nystagmus, drift, and microsaccades. However, saccades can be induced voluntarily. The eye does not generally move smoothly over the visual field. Instead, the eye makes a series of sudden jumps, called saccades, and other specialized movements (e.g., rolling, nystagmus, drift, and microsaccades). The saccade is used to orient the eyeball to cause the desired portion of the visual field to fall upon the fovea. It is a sudden, rapid movement with high acceleration and deceleration rates. Moreover, the saccade is ballistic; that is, once a saccade begins, it is not possible to change its destination or path. The user's 50 visual system is greatly suppressed (though not entirely shut off) during the saccade. Since the saccade is ballistic, its destination must be selected before movement begins. Since the destination typically lies outside the foveal field, the destination is selected by the lower-acuity peripheral field.
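Because vision is suppressed during a saccade, frames presented mid-saccade can be degraded without harming the authorized experience. Detecting saccades from tracked gaze samples can be sketched with a simple velocity threshold; the function name, sample format, and the 300 degrees-per-second threshold are illustrative assumptions.

```python
def detect_saccades(gaze_samples, dt, velocity_threshold=300.0):
    """
    Flag samples taken during saccades using a velocity threshold
    (saccades are sudden, high-velocity movements; vision is suppressed
    during them, so those frames can be presented in a degraded state).
    `gaze_samples` are (x, y) gaze positions in degrees; `dt` is the
    sample interval in seconds; the threshold is in degrees/second.
    """
    flags = [False]  # no velocity estimate for the first sample
    for (x0, y0), (x1, y1) in zip(gaze_samples, gaze_samples[1:]):
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        flags.append(velocity > velocity_threshold)
    return flags
```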
Continuing, in the case of data that will be acoustically presented to the user 50, the adaptation processing unit 20 may utilize static user attributes (e.g., user's 50 audio acuity) and/or dynamic user attributes (e.g., head movement). Focusing on tracked head movement of the user 50, instead of processing the data for acoustically presenting the entire data in a non-degraded state, the adaptation processing unit 20 adapts the data such that the data that will be acoustically presented and heard at the hearing position of the user 50 is in a non-degraded state. The tracked head movement determines the hearing position of the user 50. However, the adaptation processing unit 20 adapts the data that will be acoustically presented and heard outside of the hearing position of the user 50 into a degraded state. Thus, the user 50 is provided a non-degraded presentation experience while an unauthorized recording of the output of the presentation device 40 captures mostly degraded sound. This unauthorized recording simply provides a degraded presentation experience to an unauthorized user. It is unlikely that the user 50 and the unauthorized user would have the same sequence of head movements.
In an embodiment, data that will be acoustically presented to the user 50 is a binaural recording. A binaural recording is a two-channel (e.g., right channel and left channel) recording that attempts to recreate the conditions of human hearing, reproducing the full three-dimensional sound field. Moreover, frequency, amplitude, and phase information contained in each channel enable the auditory system to localize sound sources. In the non-degraded presentation experience, the user 50 (at the hearing position indicated by tracking head movement of the user 50) perceives sound as originating from a stable source in the full three-dimensional sound field. However, in the degraded presentation experience, the unauthorized user perceives sound as originating from a wandering source in the full three-dimensional sound field, which can be quite distracting.
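One way to make the binaural localization depend on the tracked hearing position is to shift the interaural timing of the two channels so that the cues are only consistent at that position. The sketch below is a deliberately simplified sample-delay illustration; a real system would use head-related transfer functions, and the function name and sample-offset interface are assumptions.

```python
def adapt_binaural(left, right, head_offset_samples):
    """
    Shift the channels by an interaural delay derived from the tracked
    head position (`head_offset_samples`, positive delays the left
    channel, negative delays the right). At the listener's actual
    position the timing cues line up and the source localizes stably;
    heard from any other position, the phase relationship between the
    channels is wrong and the apparent source wanders.
    """
    def shift(channel, n):
        if n <= 0:
            return channel
        return [0.0] * n + channel[:-n]

    return (shift(left, max(0, head_offset_samples)),
            shift(right, max(0, -head_offset_samples)))
```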
Further, in the case of data that will be visually presented to the user 50 in a virtual environment, the adaptation processing unit 20 may utilize a dynamic user attribute such as virtual movement of the user 50, wherein the virtual movement is tracked. Instead of processing the data for visually presenting in the virtual environment the entire data in a non-degraded state, the adaptation processing unit 20 adapts the data such that the data that will be visually presented in the virtual environment at the position of the user 50 in the virtual environment is in a non-degraded state. The tracked virtual movement determines the position of the user 50 in the virtual environment. However, the adaptation processing unit 20 adapts the data that will be visually presented in the virtual environment outside the position of the user 50 in the virtual environment into a degraded state. Thus, the user 50 is provided a non-degraded presentation experience while an unauthorized recording of the output of the presentation device 40 does not capture sufficient data to render the virtual environment for a path other than that followed by the user 50. This unauthorized recording simply provides a degraded presentation experience to an unauthorized user since it is unlikely that the user 50 and the unauthorized user would proceed along the same paths in the virtual environment.
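Gating virtual-environment detail on the user's tracked position can be sketched as follows. The function name, the dictionary scene representation, and the circular visibility region are illustrative assumptions only.

```python
def adapt_virtual_scene(scene_objects, user_position, radius):
    """
    Deliver full-detail content only for objects near the user's tracked
    position in the virtual environment; content elsewhere is withheld
    (degraded). A recording of the output therefore lacks the data needed
    to render the environment along any path other than the one the user
    actually followed. `scene_objects` maps object id -> ((x, y), detail).
    """
    ux, uy = user_position
    adapted = {}
    for obj_id, ((x, y), detail) in scene_objects.items():
        if (x - ux) ** 2 + (y - uy) ** 2 <= radius ** 2:
            adapted[obj_id] = detail     # non-degraded near the user's position
        else:
            adapted[obj_id] = None       # degraded / withheld elsewhere
    return adapted
```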
At 210, one or more user attributes from the user 50 are gathered by the user attribute unit 30. Examples of user attributes include user's visual acuity, user's audio acuity, eye movement, head movement, and virtual movement in a virtual environment. At 220, the data for the presentation experience is accessed from the data storage unit 10.
Continuing, at 230, the data is adapted using the one or more user attributes so that the non-degraded presentation experience is available solely to the user 50. In an embodiment, an adaptation processing unit 20 performs the adaptation. Moreover, the adapted data is presented to the user 50 using the presentation device 40, providing the non-degraded presentation experience to the user.
The data storage unit 310 can store any type of data (e.g., audio, visual, textual, self-revealing data, non self-revealing data, etc.). As described above, examples of self-revealing data include movies, music, books, text, and graphics. In an embodiment, the system 300 implements a protection scheme for the data.
The environmental attribute unit 330 gathers one or more environmental attributes of the environment in which the system 300 is located. Examples of environmental attributes include acoustical attributes and optical attributes. The acoustical attributes facilitate adapting data that will be acoustically presented to the user 350. Dimensions of a room; rigidity and mass of the walls, ceiling, and floor of the room; sound reflectivity of the room; and ambient sound are examples of acoustical attributes. Continuing, optical attributes facilitate adapting data that will be visually presented to the user 350. Dimensions of the room, optical reflectivity of the room, color balance of the room, and ambient light are examples of optical attributes.
The acoustical/optical environmental attributes can be static or dynamic. In the case of static environmental attributes, the environmental attribute unit 330 makes a one-time determination of these static environmental attributes before the presentation experience is started. In the case of dynamic environmental attributes, the environmental attribute unit 330 initially determines values for these dynamic environmental attributes and then proceeds to track changes over time in these dynamic environmental attributes.
The presentation device 340 presents the adapted data from the adaptation processing unit 320 to the user 350, providing the user 350 the presentation experience. Examples of the presentation device 340 include one or more television monitors, computer monitors, and/or speakers. The presentation device 340 can be designed for visual and/or acoustical presentation to the user 350.
Continuing with
Thus, the user 350 is provided a non-degraded presentation experience in the environment in which the system 300 is located. An unauthorized recording of the output of the presentation device 340 may capture the non-degraded presentation experience. However, this unauthorized recording simply provides a degraded presentation experience to an unauthorized user outside the environment in which the system 300 is located. This is the case since it is unlikely that the environment in which the system 300 is located and the environment in which the unauthorized user is located would have the same environmental attributes.
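Adapting data to the environment's acoustical attributes can be sketched as pre-equalization against the room's measured frequency response. The function name and the per-frequency-bin list representation are illustrative assumptions.

```python
def pre_equalize(spectrum, room_response):
    """
    Divide each frequency bin of the audio by the room's measured
    response, so that playback and room together multiply back to the
    intended (non-degraded) result in this environment. Played back in a
    different room, the compensation no longer matches that room's
    response and the result is degraded.
    """
    return [s / r for s, r in zip(spectrum, room_response)]
```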
At 410, one or more environmental attributes of the environment in which the system 300 is located are gathered by the environmental attribute unit 330. Examples of the environmental attributes include acoustical attributes and optical attributes. At 420, the data for the presentation experience is accessed from the data storage unit 310.
Continuing, at 430, the data is adapted using the one or more environmental attributes so that the non-degraded presentation experience is available solely in the environment in which the system 300 is located. In an embodiment, an adaptation processing unit 320 performs the adaptation. Moreover, the adapted data is presented to the user 350 using the presentation device 340, providing the non-degraded presentation experience to the user.
The data storage unit 510 can store any type of data (e.g., audio, visual, textual, self-revealing data, non self-revealing data, etc.). As described above, examples of self-revealing data include movies, music, books, text, and graphics. In an embodiment, the system 500 implements a protection scheme for the data.
The presentation attribute unit 530 gathers one or more presentation attributes of the presentation device 540. Each one of the plurality of presentation devices has distinct presentation attributes. Examples of presentation attributes include acoustical presentation attributes and visual presentation attributes. The acoustical presentation attributes facilitate adapting data that will be acoustically presented to the user 550. Fidelity range, sound distortion profile, and sound frequency response are examples of acoustical presentation attributes. Moreover, the hearing attributes of the user 550 can be determined and used in adapting data that will be acoustically presented to the user 550. Continuing, visual presentation attributes facilitate adapting data that will be visually presented to the user 550. Pixel resolution, aspect ratio, pixel shape, and pixel offsets are examples of visual presentation attributes. In an embodiment, data that will be visually presented to the user 550 has sufficient information to support higher pixel resolutions than supported by any one of the plurality of presentation devices including presentation device 540.
The acoustical/visual presentation attributes can be static or dynamic. In the case of static presentation attributes, the presentation attribute unit 530 makes a one-time determination of these static presentation attributes before the presentation experience is started. In the case of dynamic presentation attributes, the presentation attribute unit 530 initially determines values for these dynamic presentation attributes and then proceeds to track changes over time in these dynamic presentation attributes.
The presentation device 540 presents the adapted data from the adaptation processing unit 520 to the user 550, providing the user 550 the presentation experience. Examples of the presentation device 540 include one or more television monitors, computer monitors, and/or speakers. The presentation device 540 can be designed for visual and/or acoustical presentation to the user 550. Instead of manufacturing a plurality of presentation devices with the same presentation attributes, each presentation device is manufactured to have a unique set of presentation attributes. The presentation device 540 is one of the plurality of presentation devices.
Continuing with
Thus, the user 550 is provided a non-degraded presentation experience from the presentation device 540. An unauthorized recording of the output of the presentation device 540 may capture the non-degraded presentation experience. However, this unauthorized recording simply provides a degraded presentation experience to an unauthorized user using another presentation device to present the unauthorized recording. This is the case since the presentation device 540 and the presentation device used by the unauthorized user would have different presentation attributes.
For example, data that will be visually presented is resampled from a high pixel resolution to the lower pixel resolution supported by the presentation device 540. Since the unauthorized user will use a different presentation device, the unauthorized recording has to be resampled again from the lower pixel resolution supported by the presentation device 540 to either a higher pixel resolution than that of the presentation device 540 or a lower pixel resolution than that of the presentation device 540. This second resampling results in perceptible degradation in the quality of the visual presentation.
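The double-resampling degradation can be demonstrated with a toy one-dimensional nearest-neighbor resampler. The function name, the resolutions, and the 1-D signal standing in for image data are illustrative assumptions.

```python
def resample(signal, new_len):
    """Nearest-neighbor resampling of a 1-D signal to `new_len` samples."""
    old_len = len(signal)
    return [signal[min(old_len - 1, int(i * old_len / new_len))]
            for i in range(new_len)]

# First resampling: master -> the resolution of presentation device 540.
master = list(range(16))            # stand-in for high-resolution data
on_device = resample(master, 9)     # device-native resolution

# Second resampling: the unauthorized copy -> a different device's resolution.
pirated = resample(on_device, 12)

# Reference: what a copy made directly from the master at that
# resolution would look like.
direct = resample(master, 12)
```

The twice-resampled copy differs from the directly resampled reference: information discarded in the first resampling cannot be recovered by the second, which is the perceptible degradation described above.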
At 610, one or more presentation attributes of the presentation device 540 are gathered by the presentation attribute unit 530. Examples of the presentation attributes include acoustical presentation attributes and visual presentation attributes. At 620, the data for the presentation experience is accessed from the data storage unit 510.
Continuing, at 630, the data is adapted using the presentation attributes so that the non-degraded presentation experience is available solely from the presentation device 540. In an embodiment, an adaptation processing unit 520 performs the adaptation. Moreover, the adapted data is presented to the user 550 using the presentation device 540, providing the non-degraded presentation experience to the user.
The embodiments of
The data storage unit 710 can store any type of data (e.g., audio, visual, textual, self-revealing data, non self-revealing data, etc.). The user and environmental attribute unit 730 provides one or more user attributes associated with the user 750 to the adaptation processing unit 720. The user and environmental attribute unit 730 also provides one or more environmental attributes associated with the environment of user 750 to the adaptation processing unit 720. The user and environmental attributes can be static or dynamic. Examples of user attributes and environmental attributes have been discussed above.
The presentation device 740 presents the adapted data from the adaptation processing unit 720 to the user 750, providing the user 750 the presentation experience. Examples of the presentation device 740 include one or more television monitors, computer monitors, and/or speakers.
At 810, one or more user attributes associated with the user 750 are accessed. Also, one or more environmental attributes associated with the environment of the user 750 are accessed. At 820, the data for the presentation experience is accessed from the data storage unit 710.
Continuing, at 830, in one embodiment, the adaptation processing unit 720 adapts the data using the one or more user attributes and the one or more environmental attributes so that the non-degraded presentation experience is available solely to the user 750. The adapted data can then be presented to the user 750 using the presentation device 740, providing the non-degraded presentation experience to the user.
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the Claims appended hereto and their equivalents.