In two-dimensional (2D) environments, a system can tell what area a user has selected or is otherwise interacting with by simply determining the X and Y coordinates of the activity. In the three-dimensional (3D) world, however, finding the X/Y coordinate relative to the interactive 2D element on the 3D surface is not always straightforward. For example, 2D objects such as user interfaces can be placed on 3D surfaces, such as a sphere. When such 2D objects are placed on 3D surfaces, it can be difficult to deal with the user's interaction with the 2D object that is now being projected in 3D.
Various technologies and techniques are disclosed that enable interaction with 2D content placed on a 3D surface. The system determines where an input device is located relative to a 3D surface. If the input device is hitting a 3D surface, hidden content in 2D is positioned so that the point hit on the 3D surface lines up with the corresponding point on the hidden content in 2D. In one implementation, when a request for the input device position is received while the input device is detected at a location in the scene that is not over the bounds of the interactive 2D element, the 3D surface is projected into two dimensions. The closest point on the projected 3D surface to the 2D location of the input device is calculated. The closest point is provided in response and is used to line up the hidden content with the corresponding point on the 3D surface.
In one implementation, different processes are followed depending on whether or not a particular 3D surface has capture. For example, if a 3D surface in the 3D scene does not have capture, and if the input device hit a 3D surface, then texture coordinates on the 3D triangle that was hit are used to determine what point was hit on the hidden content in 2D. The hidden content is then moved to a position such that it lines up with the corresponding point on the 3D surface. Similarly, if the 3D surface in the 3D scene has capture, and if the input device is determined to hit the 3D surface with the capture content, then texture coordinates and the process described previously are used to line up the hidden content.
In another implementation, if the 3D surface in the 3D scene has capture, and if the input device is determined to not hit the 3D surface with the capture content, then the system computes the boundary of the capture content, finds a closest point on the boundary to the location of the input device, and places the closest point on the boundary under the location of the input device.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
For the purposes of promoting an understanding of the principles of the invention, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope is thereby intended. Any alterations and further modifications in the described embodiments, and any further applications of the principles as described herein are contemplated as would normally occur to one skilled in the art.
The system may be described in the general context as an application that provides interaction with 2D content placed on 3D surfaces, but the system also serves other purposes in addition to these. In one implementation, one or more of the techniques described herein can be implemented as features within a graphics rendering program, such as those included in operating system environments such as MICROSOFT® WINDOWS®, or within any other type of program or service that deals with graphics rendering. In another implementation, one or more of the techniques described herein are implemented as features within other applications that deal with allowing 2D content to be used with 3D surfaces.
In one implementation, the system provides for interaction with 3D surfaces by using hidden 2D content. The real interactive 2D content stays hidden, but its appearance is made non-hidden and placed on the 3D surface. The hidden content is positioned in such a way as to intercept the user's attempts to interact with the rendered appearance of the content on the 3D surface. The term “hidden content” as used herein is meant to include 2D content that is not noticed by the user because it is invisible, sized such that it is not able to be seen, located behind another object, etc. In another implementation, when any part of the 2D content requests the location of the input device or requests capture, the 3D representation of that 2D content is projected back into 2D. The border of this projected content is then used to determine how to respond to any input requests from the captured 3D surface. The term “capture” as used herein means when 2D content requests to be notified of input device state changes.
As shown in
Additionally, device 100 may also have additional features/functionality. For example, device 100 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in
Computing device 100 includes one or more communication connections 114 that allow computing device 100 to communicate with other computers/applications 115. Device 100 may also have input device(s) 112 such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 111 such as a display, speakers, printer, etc. may also be included. These devices are well known in the art and need not be discussed at length here. In one implementation, computing device 100 includes interactive 3D application 200. Interactive 3D application 200 will be described in further detail in
Turning now to
Interactive 3D application 200 includes program logic 204, which is responsible for carrying out some or all of the techniques described herein. Program logic 204 includes logic for determining that there is a need to update hidden content (e.g. upon receiving a request or determining programmatically) 206; logic for determining where relative to a 3D surface an input device (e.g. mouse, stylus, etc.) is located at 208; logic for determining whether or not the input device hit a 3D surface 210; logic for rendering the hidden content inactive if the system determines that the input device did not hit a 3D surface (e.g. move the hidden content away from the input device or otherwise remove or make it inactive so the user does not accidentally interact with it) 212; logic for positioning the 2D object so that the point on the 3D surface hit with the input device and the 2D object that is hidden line up (e.g. move so the same points line up) if the system determines that the input device did hit a 3D surface 214; logic for waiting for another indication that there is a need to update hidden content (e.g. receiving a request or determining programmatically) and responding accordingly 216; and other logic for operating the application 220. In one implementation, program logic 204 is operable to be called programmatically from another program, such as using a single call to a procedure in program logic 204.
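While the program logic above is described in functional terms, the following Python sketch illustrates one possible way the update flow of logic 206 through 216 could be carried out. It is a minimal sketch rather than the described implementation; the names (update_hidden_content, hit_uv, content_size, OFFSCREEN) and the data shapes are assumptions made only for illustration.

# Hypothetical sketch of the hidden-content update flow (logic 206-216).
# All names and data shapes here are assumptions for illustration only.

OFFSCREEN = (-10000.0, -10000.0)  # an "inactive" position far away from the input device

def update_hidden_content(hit_uv, input_pos, content_size):
    """Return where to place the top-left corner of the hidden 2D content.

    hit_uv       -- (u, v) texture coordinate hit on the 3D surface, or None
                    if the input device did not hit any 3D surface (logic 210).
    input_pos    -- (x, y) input device position in screen coordinates.
    content_size -- (width, height) of the hidden 2D content in pixels.
    """
    if hit_uv is None:
        # Logic 212: move the hidden content away so the user cannot
        # accidentally interact with it.
        return OFFSCREEN
    # Logic 214: the texture coordinate maps to a point on the 2D content;
    # shift the hidden content so that point sits exactly under the input device.
    hit_x = hit_uv[0] * content_size[0]
    hit_y = hit_uv[1] * content_size[1]
    return (input_pos[0] - hit_x, input_pos[1] - hit_y)

# Example: a hit at texture coordinate (0.5, 0.25) on 200x100 content with the
# input device at (300, 300) places the content's corner at (200.0, 275.0).
print(update_hidden_content((0.5, 0.25), (300, 300), (200, 100)))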
Turning now to
If the system determines that a 3D surface does have capture (decision point 314), then a hit test of the 3D scene is performed to determine where the input device is located relative to a 3D surface (stage 318). The system determines if a 3D surface was hit with capture content (e.g. by the input device) (decision point 322). If so, then the texture coordinates on a 3D triangle are used to find what point was hit on the 2D content (stage 326). The 2D content is placed in a hidden layer, and the hidden layer is moved such that the points are lined up (stage 326). If the system determines that the 3D surface was not hit with capture content (decision point 322), then the boundary of the captured content is computed (stage 328). The closest point on the boundary to the input device position is located, and the closest point on the boundary is placed under the input device position (stage 328). The process ends at end point 330.
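The capture branch of stages 318 through 328 can be illustrated with a similar hypothetical sketch. Here the captured content's projected boundary is assumed to be available as a list of (screen point, texture coordinate) pairs; that representation, and the names below, are assumptions used only for illustration and are not taken from the described system.

# Hypothetical sketch of the capture branch (decision point 322, stages 326-328).

def position_for_captured_content(hit_uv, input_pos, content_size, boundary):
    """Decide where the hidden layer goes while a 3D surface has capture.

    hit_uv       -- (u, v) texture coordinate if the captured content was hit,
                    otherwise None (decision point 322).
    input_pos    -- (x, y) input device position in screen coordinates.
    content_size -- (width, height) of the hidden 2D content in pixels.
    boundary     -- list of ((x, y), (u, v)) pairs sampled along the captured
                    content's projected boundary (an assumed representation).
    """
    if hit_uv is None:
        # Stage 328: the captured content was not hit, so snap to the boundary
        # point closest to the input device and use its texture coordinate.
        _, hit_uv = min(
            boundary,
            key=lambda e: (e[0][0] - input_pos[0]) ** 2 + (e[0][1] - input_pos[1]) ** 2)
    # Stage 326 (and the tail of stage 328): move the hidden layer so the
    # chosen point on the 2D content sits directly under the input device.
    return (input_pos[0] - hit_uv[0] * content_size[0],
            input_pos[1] - hit_uv[1] * content_size[1])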
Turning now to
Some non-limiting examples will now be used to describe how the 2D content is mapped to the 3D surface to achieve the results shown in
For the sake of example, assume that all 3D surfaces are composed of triangles, and that all triangles have texture coordinates associated with them. Texture coordinates specify which part of an image (the texture) should be displayed on the triangle. For instance, assume that texture coordinates are in the range of (0,0) to (1,1), where (0,0) is the upper left corner of the image, and (1,1) is the lower right corner of the image. If the texture coordinates are (0,0), (1,0), and (0,1), then the upper left half of the image is displayed on the triangle. Further, assume that the 2D content that is displayed on the 3D surface can be represented as an image, and that this image is the texture for the 3D surface it is applied to. For instance,
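To make the convention concrete, the short snippet below maps a texture coordinate to a pixel location on the 2D content, assuming (purely for illustration) a 400x300 pixel image as the texture.

# Illustration of the texture-coordinate convention described above.

def uv_to_pixel(uv, width, height):
    """Map a texture coordinate in [0,1] x [0,1] to a pixel on the 2D content,
    with (0,0) at the upper left and (1,1) at the lower right."""
    return (uv[0] * width, uv[1] * height)

# The triangle with texture coordinates (0,0), (1,0), (0,1) covers the upper
# left half of a 400x300 image; its corners map to these pixels:
for uv in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]:
    print(uv, "->", uv_to_pixel(uv, 400, 300))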
Now, when the input device is over the 3D surface, a ray is shot into the 3D scene to see what part of the 3D surface it intersects. This can be done with many standard techniques. Once the system knows what was intersected, the point on the triangle that was hit, as well as the texture coordinate for it, can be determined. Once the texture coordinate is determined, since the texture is also known, the system can map from the texture coordinate to a location on the 2D content. This location is the exact point on the 2D content that the input device is over on the 3D surface. To position correctly, the system moves the hidden content such that the location computed in the previous step is directly under the input device location. The point over the 3D surface is directly under that same location on the hidden content, both of which are directly under the input device. Thus, if the user clicks or otherwise inputs from this position, they will be clicking/inputting the exact same point on both the hidden content and on the 2D content that is on the 3D surface. Also, when the input device moves, due to the positioning, both the hidden content and the 2D representation of it on 3D will be told of the input device movement over the exact same points.
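One common way to obtain the texture coordinate at the hit point, once a standard ray/triangle intersection has identified the triangle and the point hit on it, is to interpolate the triangle's texture coordinates with barycentric weights. The Python sketch below is only an illustration of that step under that assumption; the names and data layout are not taken from the described system.

# Hypothetical sketch: interpolate the texture coordinate at a hit point on a
# triangle using barycentric weights (ray/triangle intersection itself is a
# standard technique and is omitted here).

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of 3D point p with respect to triangle abc."""
    v0 = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v1 = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    v2 = (p[0] - a[0], p[1] - a[1], p[2] - a[2])
    d00 = sum(x * y for x, y in zip(v0, v0))
    d01 = sum(x * y for x, y in zip(v0, v1))
    d11 = sum(x * y for x, y in zip(v1, v1))
    d20 = sum(x * y for x, y in zip(v2, v0))
    d21 = sum(x * y for x, y in zip(v2, v1))
    denom = d00 * d11 - d01 * d01
    wb = (d11 * d20 - d01 * d21) / denom
    wc = (d00 * d21 - d01 * d20) / denom
    return (1.0 - wb - wc, wb, wc)

def texture_coordinate_at_hit(hit_point, positions, uvs):
    """Interpolate the triangle's texture coordinates at the hit point."""
    wa, wb, wc = barycentric_weights(hit_point, *positions)
    u = wa * uvs[0][0] + wb * uvs[1][0] + wc * uvs[2][0]
    v = wa * uvs[0][1] + wb * uvs[1][1] + wc * uvs[2][1]
    return (u, v)

# Example: the centroid of a triangle with texture coordinates (0,0), (1,0),
# (0,1) interpolates to (1/3, 1/3); the hidden content would then be moved so
# that point of the 2D content sits directly under the input device.
positions = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
uvs = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(texture_coordinate_at_hit((1 / 3, 1 / 3, 0.0), positions, uvs))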
Turning now to
In one implementation, one possible solution to this problem is to reduce the 3D problem back to 2D. In the normal 2D case, the transformations applied to the content can be used to convert the input device position to the content's local coordinate system. This transformed position then lets the content know where the input device is relative to it. In 3D, due to the many orientations of the geometry and texture coordinate layouts, it can sometimes be difficult to say where a 3D point is in the relative coordinate system of the 2D content on 3D. In one implementation, to approximate this, the outline of the 2D content on 3D, after it has been projected to screen space, is computed and then the input device is positioned based on this projection.
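As an illustration of the projection step, the sketch below maps a camera-space 3D point onto the screen with a simple pinhole model. The focal length and viewport values are assumptions made for illustration; a real renderer would apply its own camera and projection matrices.

# Hypothetical pinhole projection of a camera-space point to screen space.
# Projecting each vertex of the 2D-on-3D geometry this way yields the 2D
# outline used in the steps that follow.

def project_to_screen(point3d, focal_length=500.0, viewport=(800, 600)):
    """Project a camera-space 3D point onto the screen plane."""
    x, y, z = point3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    sx = viewport[0] / 2 + focal_length * x / z
    sy = viewport[1] / 2 - focal_length * y / z  # screen y grows downward
    return (sx, sy)

print(project_to_screen((0.2, 0.1, 2.0)))  # -> (450.0, 275.0)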
The simulated image 700 of
After the outline is available, the closest point on this outline to the input device position is computed; this point on the outline is treated as what was “hit,” and it is placed under the input device position. In the example shown, the highlighting is performed up to the “T” in the middle of the image 750. Since the input device is placed by the closest edge point, the interaction tends to behave as it would in 2D, since the hidden content is positioned based on what the input device is closest to on the 2D content on 3D. By placing the hidden content at the closest edge point, the system is indicating approximately where it expects the input device to be relative to the orientation of the 2D content on the 3D surface.
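A hypothetical helper for this closest-point step is sketched below. The projected outline is represented as a list of 2D screen-space segments; that representation and the names used are assumptions for illustration rather than the described implementation.

# Hypothetical sketch: closest point on the projected outline to the input
# device position, with the outline given as a list of 2D segments.

def closest_point_on_segment(p, a, b):
    """Closest point to p on the segment from a to b (all 2D tuples)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return a
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len_sq
    t = max(0.0, min(1.0, t))  # clamp to the segment
    return (ax + t * dx, ay + t * dy)

def closest_point_on_outline(input_pos, edges):
    """Return the point on the projected outline nearest the input device."""
    best, best_d2 = None, float("inf")
    for a, b in edges:
        q = closest_point_on_segment(input_pos, a, b)
        d2 = (q[0] - input_pos[0]) ** 2 + (q[1] - input_pos[1]) ** 2
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best

# Example: a rectangular outline and an input position off its right edge.
edges = [((0, 0), (100, 0)), ((100, 0), (100, 50)),
         ((100, 50), (0, 50)), ((0, 50), (0, 0))]
print(closest_point_on_outline((140, 20), edges))  # -> (100.0, 20.0)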
To actually perform the process described with reference to
In one implementation, the system also tracks which triangles are facing the viewer and which ones face away. If there are two triangles that share an edge, one facing the user and one facing away, then the system can also add the part of this shared edge that is within the captured 3D surface's boundary to the final list. This can be necessary so that the visible boundary is computed. As a non-limiting example of this situation, consider the sphere in
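The facing test and shared-edge bookkeeping described above might be sketched as follows. The sketch assumes the triangles are given as vertex-index triples together with their projected 2D vertices, and it assumes that counter-clockwise screen-space winding indicates a front-facing triangle; both are illustrative conventions rather than details of the described system.

# Hypothetical sketch: find edges shared by one front-facing and one
# back-facing triangle (part of the visible, silhouette boundary).

def is_front_facing(tri, verts2d):
    """True if the projected triangle winds counter-clockwise on screen."""
    (ax, ay), (bx, by), (cx, cy) = (verts2d[i] for i in tri)
    signed_area = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay)
    return signed_area > 0

def silhouette_edges(triangles, verts2d):
    """Edges shared by a front-facing and a back-facing triangle."""
    facing_by_edge = {}
    for tri in triangles:
        front = is_front_facing(tri, verts2d)
        for i in range(3):
            edge = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            facing_by_edge.setdefault(edge, set()).add(front)
    # An edge whose two triangles disagree about facing lies on the silhouette.
    return [edge for edge, facings in facing_by_edge.items() if facings == {True, False}]

# Example: two triangles sharing the edge (1, 2), one wound counter-clockwise
# and one clockwise on screen, so the shared edge is reported as a silhouette edge.
verts2d = [(0, 0), (10, 0), (0, 10), (10, 10)]
triangles = [(0, 1, 2), (1, 2, 3)]
print(silhouette_edges(triangles, verts2d))  # -> [(1, 2)]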
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. All equivalents, changes, and modifications that come within the spirit of the implementations as described herein and/or by the following claims are desired to be protected.
For example, a person of ordinary skill in the computer software art will recognize that the client and/or server arrangements, user interface screen content, and/or data layouts as described in the examples discussed herein could be organized differently on one or more computers to include fewer or additional options or features than as portrayed in the examples.