1. Field of the Invention
The present invention is directed to a system that integrates graphical content from one application into the graphical scene of another application, and particularly to a system that extracts the 3D objects and materials that make up the images generated by a first application from its graphics data stream and fuses them into a second application.
2. Description of the Related Art
Customers often have many forms of related data ingested and presented by separate applications in separate windows, and even on separate computers in separate locations. For example, in the automotive industry, aerodynamics and crash analysis for a single car might be done using separate data sources and be analyzed in separate applications. If these analyses could be more integrated, it would speed up the decision cycle. In practice there may be many more than two data streams or two applications. This problem becomes even more difficult when the data streams represent 3D information.
What is needed is a system that can integrate the views of these stovepipe applications, particularly when three-dimensional (3D) displays are involved.
It is an aspect of the embodiments discussed herein to provide a system that extracts from one application its 3D objects and materials, which may comprise either pixel data or 3D geometry and other graphics library definition data, such as textures, colors, surface materials, animations, vertex programs, shading algorithms, etc., and fuses them into another application.
It is also an aspect of the embodiments to receive user input device events from the fusion environment, modify them as needed to correspond to user input events expected by the graphics source application and supply them to the graphics source application.
A further aspect of this invention is that an unmodified graphics application may serve as the source of the graphics data or as the target of the graphics data. Furthermore, an unmodified graphics application may serve as the target of user input events or as the source of user input events. That is, a given graphics application may act as the sender or receiver of graphics and input information without any modification to the code of the application, although the application does not need to be unmodified to perform in either capacity.
The above aspects can be attained by a system that captures 3D graphics library commands, including 3D geometry, from a first application, or the color and depth imagery produced by a first application, and supplies them to a second application. In the second application the 3D objects are combined into a scene that may include display elements from other applications. The result is a fused display environment where 3D objects are displayed along with other display elements, such as flat windows, each 3D object or display element potentially coming from a different source application. Input events in the fused environment are analyzed and mapped to the first application, where they affect the processing of the first application. In order to supply graphic information from one application to the other, the system may go through an intermediary stage of placing the graphic stream data in a memory that is shared between the two applications, using the operating system's shared memory or using a network protocol. This step allows more than two applications to access the graphic stream at the same time, thereby allowing collaboration between the users of the various applications.
These together with other aspects and advantages which will be subsequently apparent, reside in the details of construction and operation as more fully hereinafter described and claimed, reference being had to the accompanying drawings forming a part hereof, wherein like numerals refer to like parts throughout.
Integration of graphical application content into the graphical scene of another application, or media fusion, can solve the general problem of “stovepipe applications”. Using various hardware and software sources of visual input, an integration system ingests, combines (“fuses”), and distributes various types of media streams (e.g., streams of pixels, polygons, user input events), which originate from various sources (e.g., 3D applications, remote desktops/PCs, video recordings, even other media fusion sessions). The system then “fuses” and displays the media streams side-by-side, superimposed, or combined in any number of other ways. Such a fusion session can also be recorded for later playback or visually served out for remote interaction and collaboration. Visual serving is the ability to stream, in real time, a view of a graphics application over a network, with control passed back to the source application from the remote client.
Current integration of graphical application content into the graphical scene of another application, using video-input cards and Vizserver™ visual serving technology, brings disparate applications into a common environment. However, the output of these applications (models, drawings, statistics, etc.) is still contained within flat windows.
The embodiments of the present invention allow full integration of the application's 3D data content into an integrated 3D landscape. This can be accomplished by intercepting an application's graphics data at any point in the graphics pipeline, which includes the creation and processing of graphical objects, conversion to a raster (pixel) form, and finally the creation of a video image on a display surface. For example, near the end of the pipeline the system can extract depth values for every pixel of the application's video output and represent each pixel at a corresponding depth in the media fusion scene (instead of as a flat 2D window in the media fusion scene). Alternatively, the system can extract the geometric primitives from the application at some point prior to its image generation (e.g., before they are sent to the graphics hardware) and insert the application's 3D objects directly into the 3D media fusion scene. These methods provide an improved way to comprehend and interact with applications' data. For example, instead of two 3D graphics applications displaying their visual output within two separate flat windows, possibly on separate computer systems and displays, the 3D data of the two applications is extracted and visually combined (“fused”) into a common 3D scene such that the data may mutually intersect or occlude each other. An extension of this is that the displayed data may be some derivative of multiple captured streams, for example the sum or difference of two streams of data.
Normally, a computer graphics program 100 utilizes standard graphics software libraries 101, such as an OpenGL library, to command computer graphics hardware 106 to form an image 110 in the program's window 109 on the computer display 108. The logic of the graphics program executes as a computer process 102. The process 102 invokes a sequence, or stream, of graphics commands a1 that are interpreted by the computer graphics library 101, namely the OpenGL library, and converted into hardware-specific graphics commands b1.
This embodiment captures the computer graphics commands a1 of a 3D graphics program 100 and later integrates these commands with the commands a2 of another computer graphics program 111, so that the visual output of both programs is combined, in reality or in appearance only, into a single 3D scene that looks and behaves as if only one graphics program had generated it. More generally, the graphics pipeline may be “tapped into” at any point between a1 and e inclusive, not just at its end points (at command generation, pre-rasterization, or post-rasterization).
First, intercept software 115, typically in the form of a software library, is loaded into the computer process 102 of the first graphics program 100. The intercept software 115 converts the graphics program's 3D graphics commands a1 into a format f1 that can be readily transmitted to other processes. This process is typically called serialization, or encoding, or packing. It is commonly performed on 3D graphics commands by software packages such as OpenGL Multipipe, as sold by Silicon Graphics, Inc., or Chromium, created by Stanford University. Preferably, this takes place application-transparently; that is, without modification of code in graphics program 100. Graphics program 100 is unaware that a copy of its graphics commands is being made. In their readily transmittable format f1, the graphics commands can be stored on some permanent storage device for later retrieval, transmitted over a network, or, more preferably, placed in shared memory 116 that is shared between processes.
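By way of illustration, the sketch below shows one possible shape of such an intercept shim on a Linux system, where intercept software 115 is preloaded into process 102 and wraps a single OpenGL entry point. The opcode value and the ring_write serializer standing in for the encoder of format f1 are hypothetical, and a complete shim would wrap every command of interest in the same pattern: serialize the arguments, then forward the call to the real library.

```cpp
// Minimal sketch of intercept software 115, assuming a Linux/OpenGL system
// where the shim is preloaded (e.g., via LD_PRELOAD) into process 102.
// OP_VERTEX3F and ring_write() are hypothetical stand-ins for the encoder
// that produces format f1.
#include <GL/gl.h>
#include <dlfcn.h>
#include <cstring>

static const int OP_VERTEX3F = 1;          // hypothetical opcode

// Hypothetical serializer: appends opcode + payload to a local buffer here;
// a real shim would place this in memory shared with the decoder process.
static unsigned char g_buf[1 << 20];
static size_t g_head = 0;
static void ring_write(int opcode, const void* args, size_t nbytes) {
    if (g_head + sizeof(opcode) + nbytes > sizeof(g_buf)) g_head = 0;  // wrap
    std::memcpy(g_buf + g_head, &opcode, sizeof(opcode));
    g_head += sizeof(opcode);
    std::memcpy(g_buf + g_head, args, nbytes);
    g_head += nbytes;
}

// The shim exports the same symbol as the OpenGL library, so graphics
// program 100 calls it without any modification to its code.
extern "C" void glVertex3f(GLfloat x, GLfloat y, GLfloat z) {
    using Fn = void (*)(GLfloat, GLfloat, GLfloat);
    static Fn real = reinterpret_cast<Fn>(dlsym(RTLD_NEXT, "glVertex3f"));

    GLfloat args[3] = {x, y, z};
    ring_write(OP_VERTEX3F, args, sizeof(args));   // copy the command (a1 -> f1)
    real(x, y, z);                                 // forward so 100 renders normally
}
```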
In
Computer process 118 contains a program 117 that reads the commands f1 out of shared memory 116. Program 117 draws these graphics commands into the window containing the scene 113 of graphics program 111. To do this, the two programs communicate about information in the graphics command stream f1 that needs to be modified on the fly to be integrated into the scene 113 of graphics program 111. Such modifications include correlating the 3D coordinate systems and scene lighting of the two streams, and other visual effects that may require changes to stream f1 to visually integrate the result 119 of graphics commands a1′ into the 3D scene 113 that is produced by graphics commands a2. Notice, for example, the difference in orientation and illumination of 3D object 110 when drawn by first graphics program 100, and after it is modified and drawn as an object 119 as part of the 3D scene 113 of the second graphics program 111. Additional details are provided with respect to
Depending on the implementation, programs 111 and 117 may be combined into a single program (that is, a single thread of execution), a single process with separate threads of execution, or, as pictured, two separate processes 112 and 118, each with its own thread of execution 111 and 117. Depending on the implementation, therefore, programs 111 and 117 may produce a single combined graphics command stream, a2+a1, or, as depicted here, they may produce separate graphics streams that are later only visually merged into b2+b1′ by the graphics library 101. Each of these implementation alternatives has its own set of advantages and drawbacks that will be readily apparent to those skilled in the art.
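Whichever alternative is chosen, the on-the-fly modifications to stream f1 described above can be pictured as bracketing the replayed commands with an agreed-upon correction transform. The sketch below is illustrative only; the correction matrix and the draw_foreign callback that replays the decoded commands a1 are assumptions, and real corrections may also touch lighting and other state.

```cpp
// A minimal sketch of how program 117 might bracket the replayed stream f1
// with a correction transform agreed upon with graphics program 111.
#include <GL/gl.h>
#include <functional>

// draw_foreign replays the decoded commands (the caller supplies the decoder);
// correction is a 4x4 column-major matrix mapping the source application's
// coordinate system into the target scene 113.
void draw_corrected(const std::function<void()>& draw_foreign,
                    const GLfloat correction[16]) {
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glMultMatrixf(correction);   // re-express the source coordinates in scene 113
    draw_foreign();              // issue a1' inside the corrected frame
    glPopMatrix();               // leave graphics program 111's state untouched
}
```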
The difference between the combining of graphics command streams b1 and b2 of the two programs in
Not only do graphics commands travel from the originating graphics program 100 to the window of the receiving graphics program 111, but some user input (keyboard and mouse) commands, received in the normal way by graphics program 111, also need to be passed back to the originating graphics program 100. To fully create the appearance of a single scene produced by a single graphics program, the user is allowed to manipulate and control the inserted 3D object or 3D scene 119 just as he would any other object in the 3D scene of graphics program 111: control needs to be as seamless as if he were controlling the object in its original window 109. Input events h are transformed from the 3D scene (“world space”) back into the 2D coordinate system of the original application's window. Depending on the implementation, input event transformation may be handled in whole or in part by either of graphics programs 111 or 117. A transformed event is decoded by input decoder process 120 and passed as a decoded event i from shared memory 116 back to application 100, often via a regular window server (e.g., an X Window Server).
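One simple way to picture the transformation of an input event h back into window coordinates is to project the picked point through the source application's own matrices, as in the sketch below. The assumption is that those matrices and the viewport of window 109 have been tracked or retrieved; the resulting 2D position would then be delivered to application 100 through the window server as the decoded event i.

```cpp
// A sketch of mapping a point picked in the fusion scene (expressed in the
// source application's object/world coordinates) back to pixel coordinates
// in the source window 109.
#include <GL/glu.h>

struct SourceWindowPos { int x; int y; bool valid; };

SourceWindowPos to_source_window(double objX, double objY, double objZ,
                                 const GLdouble model[16],   // tracked modelview
                                 const GLdouble proj[16],    // tracked projection
                                 const GLint viewport[4]) {  // tracked viewport
    GLdouble wx, wy, wz;
    if (gluProject(objX, objY, objZ, model, proj, viewport, &wx, &wy, &wz)
        != GL_TRUE) {
        return {0, 0, false};
    }
    // OpenGL window coordinates have a bottom-left origin; X11 events expect a
    // top-left origin, so flip Y against the viewport height.
    return {static_cast<int>(wx),
            static_cast<int>(viewport[3] - wy),
            true};
}
```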
Unlike the embodiment of
In this embodiment, graphics intercept library 115 listens for certain “trigger” graphics commands, such as “glXSwapBuffers” commands, rather than capturing all graphics commands. This is so the intercept library can determine when it should retrieve a fully drawn image from a video buffer on the graphics hardware. Imagery may be retrieved from the graphics hardware using common graphics readback or video recording techniques such as those used in the Vizserver collaboration software as sold by Silicon Graphics, Inc. In addition to the normal “color image” that can be read back from the graphics hardware, this embodiment also retrieves a “depth image”. The pixels in a depth image indicate the 3D positions of their corresponding pixels in the color image. Intercept library 115 stores the combined color and depth imagery e1′ in shared memory 116, possibly in an efficient, encoded format, f1.
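A minimal sketch of such a trigger is shown below, assuming a GLX-based system: the shim wraps glXSwapBuffers, reads back the color and depth images of the finished frame, and hands them to a hypothetical publish_frame helper standing in for the encoder that places e1′ into shared memory 116.

```cpp
// Sketch of an intercepted glXSwapBuffers "trigger" that reads back color and
// depth imagery just before the frame is displayed. publish_frame() is a
// hypothetical stand-in for the shared-memory encoder.
#include <GL/glx.h>
#include <dlfcn.h>
#include <vector>

static void publish_frame(const std::vector<unsigned char>& color,
                          const std::vector<float>& depth, int w, int h) {
    // Placeholder: a real shim would encode this and place it in shared memory.
    (void)color; (void)depth; (void)w; (void)h;
}

extern "C" void glXSwapBuffers(Display* dpy, GLXDrawable drawable) {
    using Fn = void (*)(Display*, GLXDrawable);
    static Fn real = reinterpret_cast<Fn>(dlsym(RTLD_NEXT, "glXSwapBuffers"));

    GLint vp[4];                       // viewport used here as a proxy for window size
    glGetIntegerv(GL_VIEWPORT, vp);
    const int w = vp[2], h = vp[3];

    std::vector<unsigned char> color(static_cast<size_t>(w) * h * 4);
    std::vector<float> depth(static_cast<size_t>(w) * h);
    glReadBuffer(GL_BACK);             // the fully drawn, not-yet-displayed frame
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, color.data());
    glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
    publish_frame(color, depth, w, h);

    real(dpy, drawable);               // let the application present normally
}
```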
Graphics program 117 in this embodiment reads color and depth imagery f1 out of shared memory 116, then constructs a 3D object from the depth image and applies the color image onto the surface of the object. The program may make automated or semi-automated modifications to f1, for example to allow parts of the imagery to be “cut out”, such as the background of the image. As before, when program 117 communicates with graphics program 111 to map the 3D coordinate systems and other visual effects of the first graphics application to those of the second graphics application, changes to stream f1 may be required. Then, it executes graphics commands a3, which produce a visually composited image 119 in the 3D scene 113 that is produced by graphics commands a2. Notice in this embodiment that the orientation and illumination of 119 are the same as in the original image 110, since 119 was derived from the imagery e1′ of the first application rather than from the 3D commands a1 that were used to generate image 110. While it is possible to alter image 119 so that it looks correct from various viewpoints, this embodiment provides less flexibility between the viewpoints of applications 100 and 111 than the embodiment of
User inputs are handled in the same manner as in the previous embodiment.
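Returning to the 3D object constructed by program 117 from the color and depth imagery, the sketch below shows, for illustration only, one very simple way to place each pixel at its depth. The depth-to-coordinate mapping, the background cut-out threshold, and the use of one point per pixel are simplifying assumptions; a real implementation would invert the source application's projection to obtain true scene coordinates and would build a connected mesh.

```cpp
// Minimal sketch: one textured vertex per pixel, placed at that pixel's depth.
#include <GL/gl.h>
#include <vector>

void draw_depth_image(const std::vector<float>& depth,    // w*h values in [0,1]
                      GLuint colorTexture, int w, int h,
                      float background_cutoff = 0.999f) {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, colorTexture);   // color image applied as texture

    glBegin(GL_POINTS);                           // a quad mesh would look better
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float z = depth[static_cast<size_t>(y) * w + x];
            if (z >= background_cutoff) continue; // "cut out" the background
            glTexCoord2f(x / float(w), y / float(h));
            // Place the pixel at its depth; window-space depth is used directly
            // here only for simplicity.
            glVertex3f(x / float(w), y / float(h), -z);
        }
    }
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
```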
(o1) First the source application C1 waits for and eventually receives an event from a user input device (e.g. a key is pressed on the keyboard). (o2) The application may update some internal information, such as the position of a 3D object or its color. (o3) This causes the application to issue new drawing commands to update the visual appearance of the 3D object. (o4) Now the graphics intercept library C2 captures the application's drawing commands, and (o5) encodes or packs the commands into a transmittable format before (o6) placing them in shared memory. Now any process C3 can decode and draw C1's 3D commands that C2 stored in shared memory, provided that it has (o7) established a connection to the fusion environment program, which draws the information on screen (in some embodiments, that may actually be process C3). After (o8) some further one-time setup procedures, a decoder process may begin to draw the graphics commands it reads from shared memory (o9) as the fusion environment program indicates that it is ready for the decoder to draw.
(o10) 3D objects and drawing commands are then drawn iteratively. After the first frame, whose special treatment
The previous process describes the way that user input events on the source application are propagated implicitly to the fusion application via the changes in the 3D content carried by the 3D stream.
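The decoder side of steps (o1) through (o10) can be summarized by a loop of the following general shape; all of the helper functions are hypothetical stand-ins for the connection, setup, and shared-memory operations described above.

```cpp
// Sketch of the decoder process C3: pull encoded commands from shared memory
// and replay them once the fusion environment signals that it is ready.
#include <cstddef>

// Hypothetical stand-ins for steps (o6)-(o9).
static bool   fusion_connect()                               { return true; }  // (o7)
static void   fusion_one_time_setup()                        {}               // (o8)
static bool   fusion_ready_to_draw()                         { return true; }  // (o9)
static size_t shared_memory_read(unsigned char*, size_t)     { return 0; }     // (o6)
static void   decode_and_draw(const unsigned char*, size_t)  {}               // replay a1

void decoder_main_loop() {
    if (!fusion_connect()) return;        // (o7) connect to the fusion environment
    fusion_one_time_setup();              // (o8) further one-time setup

    unsigned char buf[1 << 16];
    for (;;) {                            // (o10) draw iteratively, frame after frame
        if (!fusion_ready_to_draw())      // a real loop would block rather than spin
            continue;
        size_t n = shared_memory_read(buf, sizeof(buf));
        if (n > 0)
            decode_and_draw(buf, n);      // draw C1's commands into the scene
    }
}
```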
The flowchart in
The fusion environment is capable of presenting the data of a source application in one of three modes: 1.) 2D mode, 2.) Partial 3D mode, and 3.) Full 3D mode. Referring to
The implementation of the 3D fusion depends on the internal architecture of the graphics device. Graphics devices can be classified into two categories: scene-graph-based devices and buffer-based devices. The first category, scene-graph-based, includes ray tracers and radiosity processors, and is based on an internal copy of the scene graph, i.e., a full tree-like database of all the geometry of the scene, where the 3D commands are explicit updates to the scene graph. The second category, buffer-based, includes most of the accelerated graphics hardware sold on the market (Nvidia, ATI, etc.), and is based on processing a flow of geometric primitives that are transformed into pixels and accumulated into buffers.
When the graphics device is based on a scene graph, the fusion is straightforward: the scene tree of the source application is encoded, decoded in the destination program, and then attached as a new branch of the target scene tree, that branch being the sub-scene tree of the source application.
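In scene-graph terms the operation reduces to grafting one tree onto another, as in the sketch below; the Node type is an illustrative assumption rather than the interface of any particular scene-graph device.

```cpp
// Minimal sketch of scene-graph fusion: attach the decoded sub-tree of the
// source application as a branch of the target scene tree.
#include <memory>
#include <string>
#include <vector>

struct Node {
    std::string name;
    std::vector<std::shared_ptr<Node>> children;
};

// decoded_source_subtree: the decoded scene tree of the source application.
// target_attachment_point: a node of the fusion environment's scene tree.
void fuse_scene_graph(const std::shared_ptr<Node>& decoded_source_subtree,
                      const std::shared_ptr<Node>& target_attachment_point) {
    target_attachment_point->children.push_back(decoded_source_subtree);
}
```

For example, the decoded sub-tree of one application's model might be attached under a dedicated group node of the fusion scene so that it can be transformed or removed as a unit.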
The buffer-based graphics engine is less straightforward, as there is no global knowledge of the scene within the device. The following sections detail the process of 3D fusion on this kind of graphics device.
More specifically, the stream of graphics commands a1 from the application, and likewise the set of hardware-specific graphics commands b1 from the graphics library, can be subdivided into four types of actions on the graphics pipeline, depending on which part of the graphics device they act on. The first set of commands, b1-G, contains the geometry and other graphics primitives (vertices, polygons, normals, texture mapping coordinates, etc.). These are pushed to the front of the pipeline. The second set, b1-M, operates on the Geometry Engine's state, typically on the internal transformation matrix stack applied to every geometric primitive. The third, b1-S, operates on the Fragment Processor's state (color, material, textures). The last, b1-F, consists of direct operations on the video frame buffer, including clearing the buffer and drawing an image directly as pixels.
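A classifier over the intercepted commands might look like the sketch below; the handful of OpenGL command names listed is illustrative only, and a complete implementation would cover the full command set of the graphics library.

```cpp
// Sketch of sorting intercepted commands into the four categories b1-G, b1-M,
// b1-S, and b1-F described above.
#include <string>

enum class PipelineStage {
    Geometry,        // b1-G: vertices, polygons, normals, texture coordinates
    GeometryState,   // b1-M: Geometry Engine state (transformation matrix stack)
    FragmentState,   // b1-S: Fragment Processor state (color, material, textures)
    Framebuffer      // b1-F: direct frame buffer operations (clear, draw pixels)
};

PipelineStage classify(const std::string& cmd) {
    if (cmd == "glVertex3f" || cmd == "glNormal3f" || cmd == "glTexCoord2f")
        return PipelineStage::Geometry;
    if (cmd == "glMultMatrixf" || cmd == "glPushMatrix" || cmd == "glLoadIdentity")
        return PipelineStage::GeometryState;
    if (cmd == "glMaterialfv" || cmd == "glBindTexture" || cmd == "glColor3f")
        return PipelineStage::FragmentState;
    return PipelineStage::Framebuffer;   // e.g. glClear, glDrawPixels (others omitted)
}
```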
For more information regarding the process of manipulating OpenGL matrices for the purpose of correcting the orientation or appearance of 3D objects without modifying the code of an application, the reader may refer to U.S. Pat. No. 6,982,682, which utilizes a similar OpenGL matrix manipulation process for correcting images drawn onto curved surfaces.
In addition to manipulating graphics commands related to geometry transformation, other graphics commands may be altered, added, or removed to improve the integration of a foreign graphics stream into the fusion environment. This includes, for example, commands that affect lighting, raster (2D) drawing, textures and other surface material properties, vertex programs, fragment shaders, and the recording and playback of command macros known as display lists.
The diagrams in
An implementation that supports such a 2D to full 3D transition (from mode 1 to mode 3) must include the ability to retrieve on demand, or to track and record, the changes to important graphics state values in the source application, such as transformation matrices, lighting modes, current color information, and many other graphics state parameters. Even if the application is only represented as a 2D picture, these states must be tracked from the moment the source application begins issuing drawing commands, in case the user decides at a later time to transition from a mode 1 window to a mode 3 “3D window”. Otherwise the source application must temporarily suspend graphics operations and incur an expensive retrieval of graphics state information in order for the decoder process or fusion environment to begin drawing the remote application's graphics commands.
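The sketch below illustrates the kind of shadow state such an implementation might keep as commands pass through the intercept layer. The tracked set is deliberately tiny and the handler names are assumptions; the point is only that the values are recorded continuously rather than queried from the source application at transition time.

```cpp
// Sketch of continuous graphics state tracking in the intercept layer, so a
// later mode 1 -> mode 3 transition can start drawing without an expensive
// state retrieval from the source application.
#include <GL/gl.h>
#include <cstring>

struct TrackedState {
    GLfloat modelview[16];
    GLfloat projection[16];
    GLfloat current_color[4];
    bool    lighting_enabled;
};

static TrackedState g_state = {{0}, {0}, {1, 1, 1, 1}, false};

// Called by the intercept layer when a matrix-loading command is seen;
// current_mode is the matrix mode tracked from earlier glMatrixMode calls.
void on_load_matrix(GLenum current_mode, const GLfloat* m) {
    if (current_mode == GL_MODELVIEW)
        std::memcpy(g_state.modelview, m, sizeof(g_state.modelview));
    else if (current_mode == GL_PROJECTION)
        std::memcpy(g_state.projection, m, sizeof(g_state.projection));
}

// Called when a color command such as glColor4f is seen.
void on_color(GLfloat r, GLfloat g, GLfloat b, GLfloat a) {
    g_state.current_color[0] = r; g_state.current_color[1] = g;
    g_state.current_color[2] = b; g_state.current_color[3] = a;
}

// Called when glEnable/glDisable are seen.
void on_enable(GLenum cap)  { if (cap == GL_LIGHTING) g_state.lighting_enabled = true;  }
void on_disable(GLenum cap) { if (cap == GL_LIGHTING) g_state.lighting_enabled = false; }
```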
An implementation that supports a 2D to partial 3D transition (from mode 1 to mode 2) does not need to perform any graphics state tracking or retrieval. It is therefore more convenient to create an implementation that converts from 2D to partial 3D in this way, but such an implementation limits how far the user's viewpoint can move without forcing the source application to redraw the image as the viewpoint changes.
Other variations of the invention include the ability to replicate and distribute the captured streams via a form of broadcast function. Another is the ability to use captured streams as sources for some form of function, e.g., looking for interference by computing a difference, construction by addition, or even some algorithmic process applied to the input streams to create a derivative, as sketched below. Another aspect is the ability to record states of development, for example, where this capture process can create a permanent record of different phases of a project by capturing the streams and putting them to storage. This may be used by capturing versions from a number of users at a particular point in time and keeping this as a snapshot for later review or audit. Another element to consider is the provision of a repository for common parts that can be generated once and then shared with remote users. Another aspect is shared collaboration, where data is captured via a fusion server and then made available to individuals or collaborators to work on jointly.
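As one sketch of such a derivative, assuming two captured depth images of equal size and a direct window-space depth comparison (both simplifications), an interference check might be expressed as follows; the tolerance value is an assumption.

```cpp
// Sketch of a per-pixel comparison of two captured depth streams, flagging
// regions where the two data sets come within a chosen tolerance of each other.
#include <vector>
#include <cmath>

std::vector<unsigned char> interference_mask(const std::vector<float>& depthA,
                                             const std::vector<float>& depthB,
                                             float tolerance = 0.001f) {
    const size_t n = depthA.size() < depthB.size() ? depthA.size() : depthB.size();
    std::vector<unsigned char> mask(n, 0);
    for (size_t i = 0; i < n; ++i) {
        // Mark pixels where both streams have geometry at nearly the same depth.
        if (depthA[i] < 1.0f && depthB[i] < 1.0f &&
            std::fabs(depthA[i] - depthB[i]) < tolerance)
            mask[i] = 255;
    }
    return mask;
}
```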
Fused content from the fusion environment may be fed back into the original source application as mentioned previously. This is so that all applications supplying 3D objects into the fusion environment will see, in their own 3D scenes, the 3D objects supplied by all the other applications. Essentially, every participating application can also become a fusion environment. This facilitates remote collaboration.
With or without user assistance, meaningful “slices” of an OpenGL stream may be selectively extracted and drawn in the fusion environment, where a “slice” could be of time or of space. This capability could be manifested as a cutaway view, “3D screen capture”, or 3D movie recorder. Some of these 3D slicing capabilities may be found in the software HijackGL, created by The University of Wisconsin-Madison.
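A “slice” of time might be captured by simply retaining the encoded frames between a start and a stop request, yielding a short 3D recording that can later be replayed in the fusion environment; the frame representation and class interface below are assumptions.

```cpp
// Sketch of a time-slice recorder over the encoded command stream.
#include <vector>

using EncodedFrame = std::vector<unsigned char>;   // one frame's worth of f1

class TimeSliceRecorder {
public:
    void start() { recording_ = true;  }
    void stop()  { recording_ = false; }

    // Called once per frame by the intercept layer or decoder.
    void on_frame(const EncodedFrame& frame) {
        if (recording_) slices_.push_back(frame);
    }

    const std::vector<EncodedFrame>& recording() const { return slices_; }

private:
    bool recording_ = false;
    std::vector<EncodedFrame> slices_;
};
```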
The many features and advantages of the invention are apparent from the detailed specification and, thus, it is intended by the appended claims to cover all such features and advantages of the invention that fall within the true spirit and scope of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation illustrated and described, and accordingly all suitable modifications and equivalents may be resorted to, falling within the scope of the invention.
Other Publications

Billinghurst, Mark, et al., "Mixing realities in shared space: An augmented reality interface for collaborative computing," Multimedia and Expo, 2000 (ICME 2000), 2000 IEEE International Conference on, vol. 3, IEEE, 2000.
Mohr, Alex, and Michael Gleicher, "HijackGL: reconstructing from streams for stylized rendering," Proceedings of the 2nd International Symposium on Non-Photorealistic Animation and Rendering, ACM, 2002.
OpenGL Programming Guide, www.glprogramming.com/red/chapter03.html, captured Nov. 19, 2005.
Pan, Zhigeng, Xiaochao Wei, and Jian Yang, "Geometric model reconstruction from streams of DirectX 3D game application," Proceedings of the 2005 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology, ACM, 2005.
Reitmayr, Gerhard, Mark Billinghurst, and Dieter Schmalstieg, "WireAR-legacy applications in augmented reality," Mixed and Augmented Reality, 2003, Proceedings of the Second IEEE and ACM International Symposium on, IEEE, 2003.
International Preliminary Report on Patentability, mailed Sep. 18, 2008, issued in corresponding International Patent Application No. PCT/US2007/005715, 5 pages.
International Preliminary Report on Patentability, mailed Sep. 18, 2008, issued in corresponding International Patent Application No. PCT/US2007/005716, 4 pages.
International Preliminary Report on Patentability, mailed Sep. 18, 2008, issued in corresponding International Patent Application No. PCT/US2007/005717, 5 pages.
PCT International Search Report, mailed Mar. 5, 2008, issued in related International Patent Application No. PCT/US2007/05715, 2 pages.
PCT International Search Report, mailed Mar. 6, 2008, issued in related International Patent Application No. PCT/US2007/05717, 2 pages.
PCT International Search Report, mailed May 8, 2008, issued in related International Patent Application No. PCT/US2007/05716, 2 pages.