Ubiquitous remote access to application programs and data has become commonplace as a result of the growth and availability of broadband and wireless network access. In addition, users access application programs and data from an ever-growing variety of client devices (e.g., mobile devices, tablet computing devices, laptop/notebook/desktop computers, etc.). Data may be communicated to a mobile device from a remote server over 3G and 4G mobile data networks or over wireless networks such as WiFi and WiMax. Most mobile devices have access to the Internet and are able to interact with various types of application programs.
However, for certain classes of devices (e.g., mobile devices accessing data over slower networks), remote access to cinematic productions is problematic in high-latency settings, such as mobile data networks, or where high bandwidth is required. A cinematic production is a sequence of images that is pre-assembled into an animation, as opposed to a streamed video. In addition, because a mobile device has no knowledge of the data until the data is received, an end user typically must wait for image data to arrive before a request to view the imagery can be made. In some environments, quick sampling of the image data may lead to missed frames. In other environments, if a server produces frames faster than the client can consume them, the client may not be able to show all of the frames. While dropping frames may be desirable in order to keep up with the server, there are situations where such a mode of operation is not acceptable, such as in radiology, where a clinician who misses a frame showing abnormal pathology may render a misdiagnosis. In still other environments, if the server generates frames on demand, every frame requested by a client must be generated or re-generated, consuming server resources.
Disclosed herein are systems and methods for remotely accessing a cinematic production. In accordance with some implementations, there is provided a method of providing remote access to a cinematic production. The method may include generating a frame from the cinematic production at a server, generating a frame descriptor associated with the frame at the server, storing the frame in a first memory, storing the frame descriptor in a catalogue in a second memory, and synchronizing the catalogue with a remote client. The frame descriptor may then be used by the remote client to request the frame.
In accordance with some implementations, there is provided another method for remotely accessing a cinematic production. The method may include receiving a catalogue of frame descriptors from a server, requesting a frame of the cinematic production from the server using at least one frame identifier from the catalogue, receiving the frame from the server, and caching the frame in a local cache.
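These two methods are complementary halves of a single protocol. The sketch below illustrates how the steps might fit together; the names (`render_next_frame`, `request_frame`) and the dictionary-based stores are illustrative assumptions, not part of the disclosure:

```python
import uuid

def server_generate(production, frame_store, catalogue):
    """Server side: generate a frame and its descriptor, then store both."""
    frame = production.render_next_frame()           # generate a frame (hypothetical call)
    descriptor = {"id": str(uuid.uuid4()),           # identifier used to look the frame up
                  "metadata": dict(frame.metadata)}  # application-supplied metadata
    frame_store[descriptor["id"]] = frame            # first memory: the frame itself
    catalogue[descriptor["id"]] = descriptor         # second memory: the catalogue
    return descriptor                                # catalogue is synchronized separately

def client_fetch(descriptor, server, cache):
    """Client side: request a frame by its identifier and cache the result."""
    frame = server.request_frame(descriptor["id"])   # request using the frame identifier
    cache[descriptor["id"]] = frame                  # cache the received frame
    return frame
```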
Other systems, methods, features and/or advantages will be or may become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features and/or advantages be included within this description and be protected by the accompanying claims.
The components in the drawings are not necessarily to scale relative to each other. Like reference numerals designate corresponding parts throughout the several views.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. Methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present disclosure. While implementations will be described for remotely accessing and viewing cinematic productions, it will become evident to those skilled in the art that the implementations are not limited thereto, but are applicable for remotely accessing any audio, video or still imagery via a mobile device.
Referring to FIG. 1, there is shown an example environment for providing remote access to a cinematic production. The environment may include a server computer 102B executing an application program that generates the cinematic production, and one or more client computers 112A, 112B that access the server computer 102B over a network.
A user interface program (not shown) may be designed to provide user interaction via a handheld wireless device, displaying data and/or imagery in a human-comprehensible fashion and determining user input data, in dependence upon received user instructions, for interacting with the application program. Interaction may occur, for example, through a graphical display with touch-screen 114A, or through a graphical display 114B and a keyboard 116B, of the handheld wireless devices 112A, 112B, respectively. For example, the user interface program is performed by executing executable commands on the processor 118A, 118B of the client computer 112A, 112B, with the commands being stored in the memory 120A, 120B of the client computer 112A, 112B, respectively.
Alternatively, the user interface program may be executed on the server computer 102B and accessed via a URL by a generic client application such as, for example, a web browser executed on the client computer 112A, 112B. The user interface may be implemented using, for example, Hypertext Markup Language 5 (HTML5).
The user interface program may provide a function to enter and exit a cinematic viewing mode. The user interface program may enable the user to forward, reverse, pause, and stop the viewing of the cinematic production. The user interface program may also display the actual frames per second (FPS), throttle the speed of the cinematic production, provide a looping feature, and provide an indicator to signal that caching/buffering of images of the cinematic production is complete.
A remote access program may be executing on the client (see, e.g., FIG. 1) to provide the connection between the client 112A/112B and the application program executing on the server 102B. The remote access program may communicate with the application program through a state model of the application program.
The state model comprises an association of logical elements of the application program with corresponding states of the application program, with the logical elements being in a hierarchical order. The state model may be determined such that each of the logical elements is associated with a corresponding state of the application program. Further, the state model may be determined such that the logical elements are associated with user interactions. For example, the logical elements of the application program are determined such that the logical elements comprise transition elements, with each transition element relating a change of the state model to one of control data and application representation data associated therewith.
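As a rough illustration only, such a hierarchical state model might be represented by structures like the following; all names and fields are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class LogicalElement:
    """One node of the hierarchical state model (hypothetical structure)."""
    name: str
    state: str                                    # corresponding application state
    children: list["LogicalElement"] = field(default_factory=list)

@dataclass
class TransitionElement:
    """Relates a change of the state model to associated data."""
    from_state: str
    to_state: str
    control_data: bytes = b""                     # e.g., an encoded user interaction
    representation_data: bytes = b""              # e.g., application representation data
```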
With reference to FIG. 2, there is shown the generation and cataloguing of frames of a cinematic production within the environment of FIG. 1.
The client 112A/112B may initiate the production of frames by sending a request to the server 102B, or the server 102B may begin generating the frames autonomously. In generating the frames 202, the server 102B also generates the frame descriptors 204. For example, a frame descriptor 204 includes at least an identifier, such as a random number or other association, that is used to look up the frame 202. The frame descriptor 204 may also be a combination of the identifier and associated metadata (e.g., an MRI slice number, a time, image dimensions, file format, author, etc.) provided by the application that created the cinematic production. For example, the MRI slice number may be generated by a scanning device. The catalogue 206, comprised of the frame descriptors 204, may grow or change dynamically as the server 102B generates and/or removes frames.
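A frame descriptor along these lines might be modeled as an identifier plus a free-form metadata mapping. The sketch below assumes a random UUID as the identifier; the metadata keys are illustrative:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class FrameDescriptor:
    """Identifier plus application-supplied metadata for one frame (a sketch)."""
    frame_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    metadata: dict = field(default_factory=dict)

# Example: a descriptor for one MRI slice produced by a scanning device.
descriptor = FrameDescriptor(metadata={"mri_slice": 42, "time": 3.2,
                                       "width": 512, "height": 512,
                                       "format": "jpeg", "author": "scanner"})
```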
The client 112A/112B may observe the catalogue 206 through a synchronization technique. For example, PUREWEB, available from Calgary Scientific, Inc. of Calgary, Alberta, may be used to synchronize the catalogue 206 between the server 102B and the client 112A/112B. Other communication techniques may be used for synchronization. The catalogue 206 may be partially synchronized between the server 102B and the client 112A/112B by transmitting only changes to the catalogue 206 (modifications, additions, and/or deletions of frame descriptors), or fully synchronized by transmitting the entire catalogue 206. A combination of partial and full synchronizations may be used depending on a configuration setting. Thus, the client 112A/112B maintains a duplicate of the catalogue 206 that is stored on the server 102B. As will be described below, using the catalogue 206, a remote access program 216 executing on the processor 118A/118B of the client 112A/112B may select which frames to view and how to view them (e.g., where to start viewing frames, whether to skip frames for faster speed, etc.). Using the state manager (see, e.g., FIG. 2), these selections may be communicated to the server 102B.
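A partial synchronization might transmit only the differences since the last update. The sketch below assumes both sides hold the catalogue as a mapping from frame identifier to descriptor; the payload format is an assumption, not a defined wire protocol:

```python
def catalogue_delta(server_catalogue: dict, client_catalogue: dict) -> dict:
    """Compute a partial-sync payload: only changes since the last sync are sent."""
    changed = {fid: desc for fid, desc in server_catalogue.items()
               if client_catalogue.get(fid) != desc}   # additions and modifications
    removed = [fid for fid in client_catalogue if fid not in server_catalogue]
    return {"upsert": changed, "remove": removed}

def apply_delta(client_catalogue: dict, delta: dict) -> None:
    """Bring the client's duplicate of the catalogue back into synchronism."""
    client_catalogue.update(delta["upsert"])
    for fid in delta["remove"]:
        client_catalogue.pop(fid, None)
```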
In some implementations, the client 112A/112B may be provided with a local cache 214 to store one or more requested frames 210 from the server. If the client already has a frame 202 in its cache, the client 112A/112B need not request the frame from the server 102B, but rather may retrieve it from the local cache 214 for display. This reduces bandwidth requirements and lag time, thus increasing the perceived responsiveness of the client 112A/112B.
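A cache-first lookup along these lines might look like the following sketch, where `server.request_frame` is a hypothetical stand-in for the actual transport:

```python
def get_frame(frame_id: str, cache: dict, server) -> bytes:
    """Return a frame, preferring the local cache over a server round trip."""
    if frame_id in cache:
        return cache[frame_id]              # cache hit: no bandwidth or lag cost
    frame = server.request_frame(frame_id)  # cache miss: fetch from the server
    cache[frame_id] = frame                 # populate the cache for next time
    return frame
```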
With the system illustrated in FIGS. 1 and 2, a sliding window of frames around the frame currently being viewed may be buffered for playback. For example, for a cinematic production having M frames and a current frame index C, a window of 200 frames may be selected as follows:
If M <= 200, frames [0, M] are played.
Otherwise, if C - 100 < 0, frames [0, 200) are played.
Otherwise, if C + 100 >= M, frames [M - 200, M] are played.
Otherwise, frames [C - 100, C + 100) are played.
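Expressed as code, these windowing rules might be implemented as follows; this is a sketch assuming 0-based frame indices and half-open [start, end) ranges throughout:

```python
def playback_window(total_frames: int, current: int, window: int = 200) -> range:
    """Compute the half-open range of frame indices to buffer and play."""
    half = window // 2
    if total_frames <= window:
        return range(0, total_frames)                       # whole production fits
    if current - half < 0:
        return range(0, window)                             # clamp to the start
    if current + half >= total_frames:
        return range(total_frames - window, total_frames)   # clamp to the end
    return range(current - half, current + half)            # centered on current frame
```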
In addition to the above, the size of the local cache 214 may be configurable. JPEG images may be transmitted with a quality setting between 85 and 100 to provide for memory management of the cache size.
A second process is a synchronization process that synchronizes the catalogue with one or more clients (306). In accordance with aspects of the present disclosure, the catalogue 206 may be simultaneously synchronized with plural clients. Each of the clients receives the catalogue 206 and remains in synchronism with the server 102B and with each other. From the catalogue 206, the client 112A/112B is able to select desired frames for viewing before any frames are received at the client 112A/112B from the server 102B. In other words, the client 112A/112B knows in advance what is stored in the cache of the server 102B because of the information contained in the catalogue 206.
A third process provides frame data to the client in response to a client request. The server may receive a request from the client (308). The client 112A/112B may request frames 202 using the identifier component of the frame descriptors in the catalogue 206. For example, the metadata of the frame descriptor 204 may be used on the client 112A/112B as search criteria, whereby an end user may run a search against the metadata to retrieve frames from the server 102B (or the cache 214, as described above) that correspond to the search results. Because the metadata is linked to the identifier in the frame descriptor 204, the client 112A/112B is able to request the appropriate frame (or frames) using the identifier(s). In another example, the metadata may be a timestamp, and the client 112A/112B may request frames 202 using the identifiers in the frame descriptors 204 associated with a range of timestamps of interest. In yet another example, the client 112A/112B may generate a globally unique identifier (GUID) that is passed to the server as an identifier. The client 112A/112B may specify a maximum frame rate to limit the size of the frame cache 214. The frame rate may be between 15 and 30 FPS for images having a size of approximately 512×512 pixels; a higher frame rate of approximately 45 FPS may be achieved where smaller image sizes are communicated to the client 112A/112B. The client 112A/112B may also specify an encoding quality for the frames to be transferred, a destination path, and/or a list of frames using GUIDs.
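A client request bundling these parameters might look like the following sketch; the message fields are assumptions, and only the use of descriptor identifiers to name frames comes from the description above:

```python
def build_frame_request(catalogue: dict, t_start: float, t_end: float,
                        max_fps: int = 30, quality: int = 90) -> dict:
    """Request all frames whose timestamp metadata falls in [t_start, t_end)."""
    ids = [fid for fid, desc in catalogue.items()
           if t_start <= desc["metadata"].get("time", -1.0) < t_end]
    return {"frame_ids": ids,     # identifiers taken from the frame descriptors
            "max_fps": max_fps,   # limits the size of the frame cache
            "quality": quality}   # requested encoding quality (e.g., JPEG 85-100)
```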
At 310, the requested frames are returned by the server to the client. The server 102B may add the frames to the cache 208 as raw RGB images and encode them in accordance with the client request. The GUID and metadata may be added to the frame catalogue 206 as the frames are added to the cache 208. The frames 202 may be transmitted to the client 112A/112B by any communication technique and stored in the cache 214. For example, a MIME type may be specified (e.g., image/x-tile, image/jpeg, image/png, or other). Once the client 112A/112B buffers enough data, the client will begin playback of the images.
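On the server side, answering such a request might amount to encoding the cached raw frames as asked. In this sketch, `encoder` stands in for a real image codec, and the response layout is an assumption:

```python
def serve_frames(request: dict, frame_cache: dict, encoder) -> list[dict]:
    """Encode cached raw RGB frames per the client's request (a sketch)."""
    response = []
    for fid in request["frame_ids"]:
        raw_rgb = frame_cache[fid]                  # raw RGB image held in the cache
        payload = encoder(raw_rgb, quality=request["quality"])
        response.append({"id": fid,
                         "mime": "image/jpeg",      # or image/png, image/x-tile, etc.
                         "data": payload})
    return response
```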
In some implementations, the client request may be a message that contains one or more identifiers. The client may also send the metadata component of the frame descriptor in the message. The server responds by sending the requested frame(s) to the client in a message. The client unpacks the frames from the message and places them in the local cache. The frames may then be processed by the graphics subsystem of the client and displayed to the end user.
In accordance with some implementations, the system 100 may buffer a minimum number of frames for playback on the client 112A/112B within approximately one minute. Playback may begin within ten seconds of a user activating the cinematic viewing mode within the user interface program.
It is noted that the processes described above may be performed continuously and concurrently, with frames being generated, catalogued, and served while the client views the cinematic production.
In some implementations, synchronization of the catalogue on plural clients may be used in a collaboration setting where different clients can independently choose different views. For example, if a client is connected to the server by a slow network connection, the client may skip frames, whereas a client connected by a faster network connection may receive all frames. However, each of the clients in the collaboration receives and interacts with the same cinematic production.
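For instance, a bandwidth-limited collaborator might subsample the shared catalogue while a well-connected one requests every frame; the policy below is purely illustrative:

```python
def frames_for_bandwidth(ordered_ids: list[str], server_fps: int,
                         client_fps: int) -> list[str]:
    """Skip frames so a slow client keeps pace (an illustrative policy).

    A client that can only sustain client_fps requests every n-th frame,
    while a faster client requests them all; both follow the same catalogue.
    """
    stride = max(1, server_fps // max(1, client_fps))
    return ordered_ids[::stride]
```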
Numerous other general purpose or special purpose computing system environments or configurations may be used. Examples of well known computing systems, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, server computers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, network personal computers (PCs), minicomputers, mainframe computers, embedded systems, distributed computing environments that include any of the above systems or devices, and the like.
Computer-executable instructions, such as program modules, being executed by a computer may be used. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Distributed computing environments may be used where tasks are performed by remote processing devices that are linked through a communications network or other data transmission medium. In a distributed computing environment, program modules and other data may be located in both local and remote computer storage media including memory storage devices.
With reference to FIG. 5, an example system for implementing aspects described herein includes a computing device, such as computing device 500. In its most basic configuration, computing device 500 typically includes at least one processing unit 502 and memory 504. Depending on the exact configuration and type of computing device, memory 504 may be volatile (such as random access memory (RAM)), non-volatile (such as read-only memory (ROM), flash memory, etc.), or some combination of the two.
Computing device 500 may have additional features/functionality. For example, computing device 500 may include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 5 by removable storage 508 and non-removable storage 510.
Computing device 500 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by device 500 and includes both volatile and non-volatile media, removable and non-removable media.
Computer storage media include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Memory 504, removable storage 508, and non-removable storage 510 are all examples of computer storage media. Computer storage media include, but are not limited to, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 500. Any such computer storage media may be part of computing device 500.
Computing device 500 may contain communications connection(s) 512 that allow the device to communicate with other devices. Computing device 500 may also have input device(s) 514 such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 516 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here.
It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs may implement or utilize the processes described in connection with the presently disclosed subject matter, e.g., through the use of an application programming interface (API), reusable controls, or the like. Such programs may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language and it may be combined with hardware implementations.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
The present application claims priority to U.S. Provisional Patent Application No. 61/502,803, filed Jun. 29, 2011, and entitled "Method for Cataloguing and Accessing Digital Cinema Frame Content," which is incorporated herein by reference in its entirety.