Method to Use 360-Degree Cameras

Information

  • Patent Application
  • Publication Number
    20230188858
  • Date Filed
    June 30, 2022
  • Date Published
    June 15, 2023
  • Inventors
    • Chaussee; Matthew (Fargo, ND, US)
    • Chaussee; Katie (Fargo, ND, US)
Abstract
The present invention disclosed herein provides for a unique method to use digital cameras with 360-degree capabilities to display an immersive experience to viewers. The method disclosed may comprise identifying a location, shooting a 360-degree photo, recording a 360-degree video, processing the images, compiling finished files to an interactive environment, and displaying the interactive environment to a viewer. Digital cameras with 360-degree capabilities have been used for taking both still photos and videos. Images taken in a 360-degree format may require additional processing to be placed into a format that is easily viewable by a user. Immersive environments are simulations that allow viewers to experience a sensation similar to that of being physically present. Virtual reality, augmented reality, interactive displays, and forms of art are various examples of immersive environments.
Description
FIELD OF TECHNOLOGY

This disclosure generally relates to technology for 360-degree cameras.


BACKGROUND

Digital cameras have been used in various industries for many purposes. Digital cameras with 360-degree capabilities have increased in popularity in recent years. Digital cameras with 360-degree capabilities have been used for taking both still photos and videos.


Videos and photos are useful to convey information and to educate viewers. Videos and photos may be referred to as images. Images taken in a 360-degree format may require additional processing to be placed into a format that is easily viewable by a user. For example, 360-degree images may be processed as monoscopic or stereoscopic images depending on the final display.


Immersive environments are simulations that allow viewers to experience a sensation similar to that of being physically present. Virtual reality, augmented reality, interactive displays, and forms of art are various examples of immersive environments. Immersive environments have been used in educational settings. As digital camera technology and processing capabilities have improved, the demand for more detailed immersive environments has increased. Though 360-degree cameras are a long-existing technology, the industry has not been able to develop methods that allow for the simultaneous use of 360-degree videos and photos in an immersive environment.


SUMMARY

The present invention disclosed herein provides for a unique method to use digital cameras with 360-degree capabilities to display an immersive experience to viewers. Methods to use both reframed videos and stitched photos in an immersive environment, as disclosed herein, have not been previously disclosed.


The method disclosed herein may comprise identifying a location, shooting a 360-degree photo, recording a 360-degree video, processing the images, compiling finished files to an interactive environment, and displaying the interactive environment to a viewer. To accomplish this method, an operator may physically identify a location and subsequently use one or more cameras to shoot a 360-degree photo and a 360-degree video from that location. In addition, a viewer may interact with the interactive environment that is displayed. The method disclosed herein allows an operator to use 360-degree cameras to shoot and record images at a location, thereby communicating an interactive environment to a viewer. As a result, the viewer may experience a sensation similar to that of being physically present at the location.


To accomplish the disclosed method, both the shooting of photos and the recording of videos are performed at the same location identified by the operator. Then, the images are uploaded for processing. Processing may include reframing and stitching the images into a format acceptable for display in the interactive environment. The interactive environment is processed to display a 360-degree photo upon which videos originating from the recorded 360-degree videos may be displayed. Viewers may interactively activate the videos, which provide additional detail about the location. Such videos need to be reframed from the 360-degree videos which have been recorded at the location. No known prior art has utilized a 360-degree video which has been reframed to be displayed in an interactive environment displaying a 360-degree photo. Embedding a reframed video in an interactive environment including a stitched 360-degree photo is a novel and unique concept.





BRIEF DESCRIPTION OF THE DRAWINGS

These and other features and advantages of the present invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:



FIG. 1 is an exemplary block diagram for a method for using one or more 360-degree cameras to display an interactive environment to a viewer;



FIG. 2 is an exemplary block diagram showing the steps for identifying a location for recording a 360-degree video and shooting a 360-degree photo;



FIG. 3 is an exemplary block diagram showing the steps for processing a 360-degree photo shot at a location;



FIG. 4 is an exemplary block diagram showing the steps for processing a 360-degree video recorded at a location; and



FIG. 5 is an exemplary block diagram showing the steps for compiling the images to an interactive environment.





DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS
General

The present invention will now be described with occasional reference to the specific embodiments of the invention. This invention may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.


Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for describing particular embodiments only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.


Unless otherwise indicated, all numbers expressing quantities of dimensions such as length, width, height, and so forth as used in the specification and claims are to be understood as being modified in all instances by the term “about.” Accordingly, unless otherwise indicated, the numerical properties set forth in the specification and claims are approximations that may vary depending on the desired properties sought to be obtained in embodiments of the present invention. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as possible. Any numerical values, however, inherently contain certain errors necessarily resulting from errors found in their respective measurements.


Figures Detail


FIG. 1 is an exemplary block diagram for a method for using one or more 360-degree cameras to display an interactive environment to a viewer. This method may be referenced herein as a method to use a camera. The method disclosed herein may comprise identifying 101 a location, shooting 102 a 360-degree photo, recording 103 a 360-degree video, processing 105 the images, and compiling 106 finished files to an interactive environment. The method may further comprise displaying the interactive environment to a viewer. The location may be at a place of work so that the interactive environment may communicate information about the place of work to a viewer. For example, the location may be at a factory or an office.
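

For orientation only, the disclosed method can be sketched as a simple software pipeline. The following Python sketch is purely illustrative and is not part of the disclosure; all function names and file names are hypothetical placeholders.

```python
# Illustrative sketch only: the disclosed method expressed as a software
# pipeline. All function and file names are hypothetical placeholders.

def refine_photos(photos):
    """Processing 105 of photos (refining 105a): sort, stitch, adjust."""
    return sorted(photos)[0]          # placeholder for the finished photo file

def clarify_videos(videos):
    """Processing 105 of videos (clarifying 105b): sort, stitch, adjust, reframe."""
    return list(videos)               # placeholder for finished video files

def compile_environment(photo_file, video_files):
    """Compiling 106: combine finished files into an interactive environment."""
    return {"background": photo_file, "embedded_videos": video_files}

# Identifying 101, shooting 102, and recording 103 occur on site; the
# resulting image files then flow through processing and compiling:
environment = compile_environment(
    refine_photos(["pano_a.jpg", "pano_b.jpg"]),  # photos shot at the location
    clarify_videos(["clip_a.mp4"]),               # videos recorded at the location
)
print(environment)
```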



FIG. 2 is an exemplary block diagram showing the steps for identifying 101 a location for recording 103 a 360-degree video and shooting 102 a 360-degree photo. Shooting 102 may be the act of taking a 360-degree photo at a particular location. Recording 103 may be the act of taking a 360-degree video at a particular location.


Identifying 101 a location includes the step of determining 201 one or more points of interest and the step of picking 202 one or more frames of action. For example, multiple frames of action may be picked. An operator may identify the location. The operator is someone who may physically operate the one or more cameras. A first camera may be used for shooting the 360-degree photos of the point of interest and a second camera may be used for recording a 360-degree video of the frame of action. Alternatively, the first camera may be used both for shooting the 360-degree photos of the point of interest and for recording a 360-degree video of the frame of action. A point of interest is an area that will ultimately be shot in a 360-degree photo. The point of interest may be where a viewer may interact to access additional information. A point of interest is identified as a hotspot by a user interacting with the interactive environment. Determining 201 a point of interest is the act of choosing where the 360-degree photo to be shot is located. Multiple points of interest may be chosen.


A hotspot or point of interest is a selectable space of content in the interactive environment. The point of interest is at a location that will be accessed by viewers in the interactive environment. The frame of action may provide a pop-out window or content beyond the point of interest. The frame of action may provide text, images, or videos, or may allow the viewer to learn more about a given part of the interactive environment. A frame of action may be used to provide more information about content communicated in the interactive environment. For example, additional information may be provided on the day-to-day items, tools, machinery, or environment a worker inhabits. Picking 202 is the act of choosing a frame of action to be provided in relation to the point of interest. The location may be identified such that the frame(s) of action and the point of interest may be captured by a camera. The point of interest is shot as a 360-degree photo and the frame of action may be recorded as a 360-degree video.
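

For illustration, points of interest and frames of action could be modeled in software as simple records placed at angular positions on the 360-degree photo. The following Python sketch is an assumption about one possible representation; the field names are not drawn from the disclosure.

```python
from dataclasses import dataclass, field

# One possible software model of points of interest and frames of action;
# all field names here are illustrative assumptions (Python 3.9+).

@dataclass
class FrameOfAction:
    yaw_deg: float        # horizontal position on the 360-degree background
    pitch_deg: float      # vertical position on the 360-degree background
    video_file: str       # reframed video shown when the viewer activates it
    caption: str = ""     # optional pop-out text

@dataclass
class PointOfInterest:
    yaw_deg: float
    pitch_deg: float
    label: str                                      # hotspot icon text
    frames: list[FrameOfAction] = field(default_factory=list)

poi = PointOfInterest(yaw_deg=135.0, pitch_deg=-5.0, label="Assembly line")
poi.frames.append(FrameOfAction(90.0, 0.0, "welder_interview.mp4",
                                "A welder describes a typical shift"))
print(poi)
```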


For example, suppose the method to use a camera is configured for the purpose of providing a career experience to a viewer. In that case, the hotspots may help strengthen the viewer's understanding of what the job environment provides and what prerequisites the viewer may need to succeed in a given career field. The hotspots may also help build the environment with facts about the location, the company, and the developments or practices a company may value. The operator may determine the hotspots.


A frame of action is an area at the identified location, reflecting the spot on the 360-degree photo where a reframed video from a 360-degree video or other content may be displayed. A reframed video is a 360-degree video that has been processed to appear as a video with a standard aspect ratio. In the disclosed method to use a camera, a reframed video is one that originated from the 360-degree video recorded at the identified location. The operator may pick a frame of action.


A frame of action in an interactive environment provides a select area of focus for the viewer. For example, suppose the method to use a camera is configured for the purpose of providing a career experience to a viewer. In that case, the frame of action may be a worker, product, or device. The 360-degree capable camera may be positioned at the location where the environment may be accurately captured. This location can be interior or exterior, as well as static or non-static. Movement across the location is acceptable, as the photo and video captured depict the full environment rather than a limited area. The frame of action also determines what the viewer will be witnessing from the depicted scene.


Once determining 201 the point of interest and picking 202 frames of action have been completed, the operator may take the action of positioning 203 cameras for shooting 102 and recording 103. Positioning 203 the cameras includes placing the one or more cameras at a specific spot at the location, which allows for an appropriate distance and height from points of interest and frames of action, ensuring an uninterrupted first-person view.


The type of camera which could be used for the disclosed method to use a camera may be any 360-degree photo or video capture device. Typically, such a device is a single unit with multiple lenses, allowing everything visible from the camera's perspective to be captured. A 360-degree video capture device may be used to also shoot the 360-degree photo. A 360-degree video capture device may be a camera. A video capture device may carry out the shooting 102 step and the recording 103 step. For example, the video capture device may simultaneously perform both the shooting and recording functions. The video capture device in that example may be considered the first camera. A still-frame photo may be taken from the 360-degree video.



FIG. 3 is an exemplary block diagram showing the steps for refining 105a a 360-degree photo shot at a location. Processing 105 a 360-degree photo is considered refining 105a. Processing 105 a 360-degree video is considered clarifying 105b. Refining 105a and clarifying 105b may collectively be referred to as processing 105.


Refining 105a comprises the steps of photo sorting 301 360-degree photos that are unstitched, photo stitching 302 the sorted and selected photos, and photo adjusting 303 the stitched photos. Refining 105a utilizes a photo sorting 301 function wherein 360-degree photos are systematically arranged to subsequently undergo photo stitching 302. Refining 105a may further comprise photo organizing 304 the stitched and adjusted photos into a storyboard sequence. Photo sorting 301 is the step of identifying which 360-degree photos, or portions thereof, will be used for subsequent display in the interactive environment. Multiple photos may have been shot at the location.


Photo stitching 302 is the step of combining 360-degree photos into a 360-degree equirectangular format. The derived result of photo stitching 302 is an equirectangular photo. The 360-degree photos may be shot such that the photos are stored or saved in multiple files or formats. Photo stitching 302 is the process of combining multiple photos into a panorama that may be viewable by a viewer in an interactive environment. There are many different software platforms upon which photos may undergo the photo stitching 302 step.
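

As one illustration of the stitching concept, OpenCV ships a generic panorama stitcher. Vendor software typically performs the stitch for dual-fisheye 360-degree captures, so the sketch below (with hypothetical file names) only demonstrates the general idea of combining overlapping photos into a single panorama:

```python
import cv2  # OpenCV: pip install opencv-python

# Generic illustration of combining overlapping photos into one panorama.
# Real 360-degree workflows usually rely on the camera vendor's stitcher;
# the file names here are hypothetical.
images = [cv2.imread(p) for p in ("cam_front.jpg", "cam_back.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("equirectangular_background.jpg", panorama)
else:
    print("stitching failed with status", status)
```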


Refining 105a utilizes photo adjusting 303 wherein the equirectangular photo is edited for appearance. The equirectangular photo undergoes photo adjusting, which may include editing for visual adjustments, color correction, stitching errors, sharpening, and denoising. Many different software platforms upon which photos may undergo photo adjusting 303 are publicly available. If photo organizing 304 is not required, photos that have been stitched and adjusted may be converted into a finished photo file. A finished photo file is one or more resulting files that will be compiled for display in the interactive environment.
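

A minimal sketch of the adjusting step, using common OpenCV operations; the particular filters and parameter values below are illustrative choices, not the disclosed method:

```python
import cv2
import numpy as np

# Illustrative adjusting step: denoise, then sharpen. Parameter values
# are arbitrary examples; any editing software could perform this step.
photo = cv2.imread("equirectangular_background.jpg")

# Non-local-means denoising for color images
denoised = cv2.fastNlMeansDenoisingColored(photo, None, 5, 5, 7, 21)

# Simple sharpening kernel (unsharp-mask style)
kernel = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]], dtype=np.float32)
finished = cv2.filter2D(denoised, -1, kernel)

cv2.imwrite("finished_photo.jpg", finished)
```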


Photo organizing 304 the stitched and adjusted photos into a storyboard sequence is a step that is relevant when multiple stitched and adjusted photos are to be used in the interactive environment. There are many different software platforms upon which photos may undergo the photo organizing 304 step.



FIG. 4 is an exemplary block diagram showing the steps for processing 105 a 360-degree video recorded at a location. Processing 105 a 360-degree video is considered clarifying 105b. Clarifying 105b may include many of the same steps as refining 105a. Clarifying 105b may comprise video sorting 401, video stitching 402, and video adjusting 404. The clarifying 105b step may further comprise any of the following steps: video organizing 403, framing 405, captioning 406, or exporting 407.


Video sorting 401 is the step of identifying which 360-degree videos, or portions thereof, will be used for subsequent display in the interactive environment. Multiple 360-degree videos may have been recorded at the location. Clarifying 105b utilizes a video sorting 401 function wherein 360-degree videos are systematically arranged to subsequently undergo video stitching 402.


Video stitching 402 is the step of combining 360-degree videos into a 360-degree equirectangular format. The derived result of video stitching 402 is an equirectangular video. The 360-degree videos may be recorded such that the videos are stored or saved in multiple files or formats. Video stitching 402 is the process of combining multiple videos into a panorama that may be viewable by a viewer in an interactive environment. There are many different software platforms upon which videos may undergo the video stitching 402 step.
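

As one concrete illustration, FFmpeg's v360 filter can convert a dual-fisheye recording into an equirectangular video. The sketch below assumes FFmpeg is installed; the dual-fisheye input layout and the 190-degree lens fields of view are camera-specific assumptions, and vendor software is often used for this step instead:

```python
import subprocess

# Illustrative video stitching with FFmpeg's v360 filter (FFmpeg must be
# installed). Input layout and field-of-view values are assumptions that
# depend on the specific camera; file names are hypothetical.
subprocess.run([
    "ffmpeg", "-y", "-i", "dual_fisheye_recording.mp4",
    "-vf", "v360=dfisheye:equirect:ih_fov=190:iv_fov=190",
    "equirectangular_video.mp4",
], check=True)
```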


Video adjusting 404 the stitched videos includes editing for visual adjustments, color correction, stitching errors, sharpening, denoising, and adding a voiceover. Many different software platforms upon which videos may undergo video adjusting 404 are publicly available. If video organizing 403 is not required, videos that have been stitched and adjusted may be converted into a finished video file. A finished video file is one or more resulting files that will be compiled for display in the interactive environment.


Video organizing 403 is the step of arranging multiple stitched and adjusted videos into a storyboard sequence; it is relevant when multiple stitched and adjusted videos are to be used in the interactive environment. There are many different software platforms upon which videos may undergo the video organizing 403 step.


Framing 405 is a step wherein a stitched and adjusted video is reframed to a standard aspect ratio; framing is the process of manipulating the 360-degree video into a standard aspect ratio. The video resulting from the framing 405 step may be referenced as a reframed video. To reframe to a standard aspect ratio, only a portion of the stitched and adjusted video is used. For example, one small portion of the 360-degree video may be reframed for display in the interactive environment. A portion of the 360-degree video may include the portion which includes a person speaking, a device, or another item which is desired to be displayed in an interactive environment. A standard aspect ratio may include, for example, a 16:9 or 4:3 ratio.
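

The reframing step amounts to extracting a perspective (gnomonic) projection from the equirectangular video, frame by frame. The following Python sketch shows one common way to compute such a 16:9 view with NumPy and OpenCV; the yaw and pitch parameters select which portion of the 360-degree frame is displayed. This is an illustrative implementation under those assumptions, not the disclosed one:

```python
import cv2
import numpy as np

def reframe(equi_frame, yaw_deg, pitch_deg, hfov_deg=90.0,
            out_w=1280, out_h=720):
    """Extract a standard-aspect-ratio (here 16:9) perspective view from
    one equirectangular frame via gnomonic projection."""
    h, w = equi_frame.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(hfov_deg) / 2)  # focal length, px

    # Ray direction for every output pixel; camera looks down +z, y down.
    xx, yy = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    d = np.stack([xx, yy, np.full_like(xx, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)

    # Rotate rays by pitch (about x-axis), then yaw (about y-axis).
    p, t = np.radians(pitch_deg), np.radians(yaw_deg)
    rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p), np.cos(p)]])
    ry = np.array([[np.cos(t), 0, np.sin(t)],
                   [0, 1, 0],
                   [-np.sin(t), 0, np.cos(t)]])
    d = d @ (ry @ rx).T

    # Convert rays to equirectangular pixel coordinates and resample.
    lon = np.arctan2(d[..., 0], d[..., 2])             # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))     # [-pi/2, pi/2]
    map_x = ((lon / np.pi + 1) / 2 * w).astype(np.float32)
    map_y = ((lat / np.pi + 0.5) * h).astype(np.float32)
    return cv2.remap(equi_frame, map_x, map_y, cv2.INTER_LINEAR)

# Hypothetical usage on a single extracted frame:
frame = cv2.imread("equirectangular_frame.jpg")
cv2.imwrite("reframed_16x9.jpg", reframe(frame, yaw_deg=90.0, pitch_deg=0.0))
```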


Captioning 406 is a step wherein captions are added to a video that has been reframed from a 360-degree video. For example, text could be added that provides a translation, or text that provides additional information about what is being displayed in the reframed video. Captioning 406 is optional.


Exporting 407 is a step wherein the video is exported for display in an interactive environment. Exporting 407 may reference reframed exporting 407a or non-reframed exporting 407b. Exporting 407 may include exporting of finished files including both a finished video file and a finished photo file.


Reframed exporting 407a is the exporting of 360-degree video that has undergone video stitching 402, adjusting 404, framing 405, and optionally captioning 406. Multiple videos may be the result of reframed exporting 407a. The reframed exporting 407a step occurs once for each frame of action. For example, if there are four reframed videos in a particular interactive environment, the reframed exporting 407a step will occur four times.
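

Exporting is commonly performed with an encoder such as FFmpeg. The sketch below assumes FFmpeg is installed; the file names and encoder settings are illustrative assumptions, not the disclosed ones:

```python
import subprocess

# Illustrative reframed exporting 407a with FFmpeg (must be installed);
# file names and encoder settings are assumptions for illustration.

def export_reframed(src: str, dst: str) -> None:
    """Encode a reframed, standard-aspect-ratio clip for web delivery."""
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-crf", "20",   # H.264 at a high-quality setting
        "-movflags", "+faststart",         # start playback while still loading
        dst,
    ], check=True)

# One export per frame of action; e.g., two frames of action -> two runs:
for clip in ("frame_of_action_1.mov", "frame_of_action_2.mov"):
    export_reframed(clip, clip.replace(".mov", ".mp4"))
```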


Non-reframed exporting 407b is the exporting of 360-degree video that has undergone video stitching 402 and adjusting 404. The video exported in non-reframed exporting 407b is not reframed video and is referenced as non-reframed video. The 360-degree video that has not been reframed may be exported to the interactive environment to provide additional images for the interactive environment. For example, the 360-degree video that has not been reframed may be uploaded in an interactive environment displayed in a virtual reality device such as a headset. The 360-degree video that has not been reframed may provide for a supplementary experience to the interactive environment. A virtual reality device may provide a simulation wherein the viewer has a seemingly real or physical experience with the video. Video that is non-reframed video undergoes non-reframed video uploading 501c.



FIG. 5 is an exemplary block diagram showing the steps for compiling 106 the finished files to an interactive environment. Within the step of compiling 106, the finished video file resulting from clarifying 105b and the finished photo file resulting from refining 105a are combined to be displayed in an interactive environment. A finished video file and a finished photo file may be collectively referenced as finished files. The step of compiling 106 comprises uploading 501, embedding 502, adding 503 point-of-interest icons, overlaying 504, and publishing 505.


Uploading 501 includes reframed video uploading 501a, photo uploading 501b, and non-reframed video uploading 501c. A finished video file may be non-reframed video or reframed video. A finished photo file may undergo photo uploading 501b. Uploading 501 is the step of transferring finished files that resulted from refining 105a and clarifying 105b to a hosting server. A hosting server is a server from which a viewer may access an interactive environment. The finished photo file resulting from refining 105a may be displayed as the background of the interactive environment. Additional features, including reframed videos, viewer controls, and other features, may be overlaid.
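

Uploading depends entirely on the hosting service used. The sketch below is a hypothetical example using the Python requests library; the endpoint URL, form fields, and response shape are all assumptions:

```python
import requests  # pip install requests

# Hypothetical uploading 501 sketch: the endpoint URL, form fields, and
# response shape are assumptions; a real hosting service's API would differ.
HOST = "https://hosting.example.com/api/upload"

def upload(path: str, kind: str) -> str:
    """kind: 'reframed_video' (501a), 'photo' (501b), or
    'non_reframed_video' (501c)."""
    with open(path, "rb") as fh:
        resp = requests.post(HOST, data={"kind": kind}, files={"file": fh})
    resp.raise_for_status()
    return resp.json()["url"]          # assumed response field

background_url = upload("finished_photo.jpg", "photo")
video_url = upload("finished_video.mp4", "reframed_video")
```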


Non-reframed video that undergoes non-reframed video uploading 501c does not need to go through the steps of embedding 502, adding 503, and overlaying 504. Non-reframed video is published to a server. Non-reframed publishing 505b is the publishing of non-reframed video. Non-reframed video may, for example, be used in a virtual reality device. Non-reframed video published for display may be considered a supplementary experience.


Publishing 505 is the final step of compiling 106. Publishing is the act of allowing viewers access to the interactive environment. The compiled interactive environment may comprise the uploaded finished files, the background, and the embedded content. A hosting server may be connected to computers via the internet. The interactive environment may be distributed to any computer connected to the internet. Viewers may view the displayed interactive environment on a device. An alternative method of publishing 505 may be that the compiled interactive environment is loaded to a local storage medium. The local storage medium may be an SD card or a computer. The SD card in such an embodiment may be physically delivered and accessed for display of the interactive environment. Publishing 505 includes both non-reframed publishing 505b and reframed publishing 505a. Reframed publishing 505a may be performed when publishing an interactive environment whose finished video file is a reframed video. Non-reframed publishing 505b may be performed when publishing a non-reframed finished video file.


Embedding 502 is the function wherein a finished video file is attached to a spot on the photo displayed as the interactive environment's background. This spot is the location that an operator has selected as a frame of action. Multiple reframed videos may be attached to various spots on the photo. Multiple frames of action may be in one interactive environment. The reframed video may play when a viewer activates the reframed video. Embedding 502 a reframed video in an interactive environment including a stitched 360-degree photo is a novel and unique concept.
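

One plausible way to realize embedding in software is a scene-description file that the viewer front end reads: the background photo plus the angular spots where reframed videos and point-of-interest icons are attached. The schema below is an illustrative assumption, not a format from the disclosure:

```python
import json

# Illustrative scene description for embedding 502: the stitched photo as
# background, with reframed videos attached at chosen spots (frames of
# action) and icons at points of interest. The schema is an assumption.
environment = {
    "background": "finished_photo.jpg",          # stitched 360-degree photo
    "frames_of_action": [
        {"yaw_deg": 90.0, "pitch_deg": 0.0,      # spot on the background
         "video": "finished_video.mp4",          # reframed video to embed
         "autoplay": False},                     # plays when viewer activates
    ],
    "points_of_interest": [
        {"yaw_deg": 135.0, "pitch_deg": -5.0,
         "label": "Assembly line",
         "info": "Additional detail shown when the icon is clicked."},
    ],
}

with open("environment.json", "w") as fh:
    json.dump(environment, fh, indent=2)
```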


Adding 503 point-of-interest icons is the step of placing descriptive information at each point of interest selected by the operator. The added icons at the points of interest allow additional information to be displayed to a viewer when an icon is clicked.


Overlaying 504 is the step of applying additional information over the top of the interactive environment. Additional information may include navigation, virtual reality mode, instructions, and interactive interview videos. Navigation may include menu buttons, a back button, links, and other similar items. Once the images have been uploaded, videos embedded, points of interest added, and additional information overlaid, the various elements together constitute a compiled interactive environment.


Reframed video that has undergone reframed video uploading 501a and has gone through the steps of embedding 502, adding 503, and overlaying 504 undergoes reframed publishing 505a. Reframed publishing 505a is the act of loading the compiled interactive environment to a server that viewers can access. A hosting server may be connected to computers via the internet. The interactive environment may be distributed to any computer connected to the internet. Viewers may view the displayed interactive environment on a device.


The availability of both published non-reframed video for a supplementary experience using virtual reality and an interactive environment including reframed video in one or more frames of action, each derived from the recording 103 of a 360-degree video, is a new and unique concept.


The interactive environment may be displayed on a variety of devices. Displaying the interactive environment occurs when a viewer accesses the interactive environment on a device. Devices may include: smart boards, desktop computers, laptops, touch-screen tablets, mobile phones, mobile phones fitted with virtual reality glasses, and wearable devices like dedicated virtual reality headsets and dedicated augmented reality headsets. Viewers may use the devices to interact with the interactive environment. For example, a viewer could use a virtual reality device to view the interactive environment. In addition, a virtual reality device could allow the user to physically walk around, navigate the interactive environment, and activate embedded reframed videos and points of interest.


Explanation of Exemplary Language

While various inventive aspects, concepts and features of the general inventive concepts are described and illustrated herein in the context of various exemplary embodiments, these various aspects, concepts and features may be used in many alternative embodiments, either individually or in various combinations and sub-combinations thereof.


Unless expressly excluded herein all such combinations and sub-combinations are intended to be within the scope of the general inventive concepts. Still further, while various alternative embodiments as to the various aspects, concepts and features of the inventions (such as alternative materials, structures, configurations, methods, devices and components, alternatives as to form, fit and function, and so on) may be described herein, such descriptions are not intended to be a complete or exhaustive list of available alternative embodiments, whether presently known or later developed. Those skilled in the art may readily adopt one or more of the inventive aspects, concepts or features into additional embodiments and uses within the scope of the general inventive concepts even if such embodiments are not expressly disclosed herein. Additionally, even though some features, concepts or aspects of the inventions may be described herein as being a preferred arrangement or method, such description is not intended to suggest that such feature is required or necessary unless expressly so stated. Still further, exemplary or representative values and ranges may be included to assist in understanding the present disclosure; however, such values and ranges are not to be construed in a limiting sense and are intended to be critical values or ranges only if so expressly stated. Moreover, while various aspects, features and concepts may be expressly identified herein as being inventive or forming part of an invention, such identification is not intended to be exclusive, but rather there may be inventive aspects, concepts and features that are fully described herein without being expressly identified as such or as part of a specific invention. Descriptions of exemplary methods or processes are not limited to inclusion of all steps as being required in all cases, nor is the order that the steps are presented to be construed as required or necessary unless expressly so stated.

Claims
  • 1. A method to use a camera comprising: identifying a location, identifying comprising determining a point of interest and picking a frame of action, the location is such that the point of interest and the frame of action may be captured with a camera, the point of interest will be accessed by a viewer in an interactive environment; shooting 360-degree photos of the point of interest at the location; recording a 360-degree video of the frame of action at the location, the 360-degree video of the frame of action provides content beyond the 360-degree photos of the point of interest; refining the 360-degree photos into a finished photo file, refining comprising photo sorting, photo stitching, and photo adjusting, refining utilizing photo sorting wherein the 360-degree photos are sorted to subsequently undergo photo stitching wherein an equirectangular photo is derived, the equirectangular photo undergoes adjusting to result in the finished photo file; clarifying the 360-degree video into a finished video file, clarifying comprising video sorting, video stitching, and video adjusting, clarifying utilizing video sorting wherein the 360-degree video is sorted to subsequently undergo video stitching wherein an equirectangular video is derived, the equirectangular video undergoes adjusting, clarifying results in the finished video file; and compiling the finished video file and the finished photo file, the finished video file and the finished photo file to be combined to be displayed in the interactive environment, compiling comprising uploading, embedding, and publishing, the finished video file and the finished photo file undergo uploading wherein the finished video file and the finished photo file are transferred to a hosting server, the finished photo file is displayed as a background to the interactive environment, embedding is where a finished video is attached to a spot on the background, publishing is allowing the viewer access to the interactive environment.
  • 2. The method of claim 1, wherein clarifying further comprises framing, wherein framing is manipulating the 360-degree video into a standard aspect ratio.
  • 3. The method of claim 2, wherein the location is at a place of work.
  • 4. The method of claim 3, wherein an operator chooses the location.
  • 5. The method of claim 2, wherein a first camera is used for shooting both the 360-degree photos of the point of interest and used for recording the 360-degree video of the frame of action.
  • 6. The method of claim 5, wherein the first camera simultaneously performs shooting and recording functions.
  • 7. The method of claim 2, wherein a first camera is used for shooting the 360-degree photos of the point of interest and a second camera is used for recording the 360-degree video of the frame of action.
  • 8. The method of claim 2, wherein clarifying further comprises organizing multiple equirectangular videos into a storyboard sequence.
  • 9. The method of claim 2, further comprising more than one frame of action.
  • 10. The method of claim 2, wherein compiling further comprises overlaying additional information in the interactive environment.
  • 11. The method of claim 2, further comprising displaying the interactive environment on a virtual reality headset.
  • 12. The method of claim 2, further comprising displaying the interactive environment to a viewer.
  • 13. A method to use a camera comprising: identifying a location, identifying comprising determining a point of interest and picking a frame of action, the location is such that the point of interest and the frame of action may be captured with a camera, the point of interest will be accessed by a viewer in an interactive environment, the location is at a place of work, an operator chooses the location; shooting 360-degree photos of the point of interest at the location; recording a 360-degree video of the frame of action at the location, the 360-degree video of the frame of action provides content beyond the 360-degree photos of the point of interest, a first camera is used both for shooting the 360-degree photos of the point of interest and for recording the 360-degree video of the frame of action; refining the 360-degree photos into a finished photo file, refining comprising photo sorting, photo stitching, and photo adjusting, refining utilizing photo sorting wherein the 360-degree photos are sorted to subsequently undergo photo stitching wherein an equirectangular photo is derived, the equirectangular photo undergoes adjusting to result in the finished photo file; clarifying the 360-degree video into a finished video file, clarifying comprising video sorting, video stitching, framing, and video adjusting, clarifying utilizing video sorting wherein the 360-degree video is sorted to subsequently undergo video stitching wherein an equirectangular video is derived, the equirectangular video undergoes adjusting, clarifying results in the finished video file, framing is manipulating the 360-degree video into a standard aspect ratio; and compiling the finished video file and the finished photo file, the finished video file and the finished photo file to be combined to be displayed in the interactive environment, compiling comprising uploading, embedding, and publishing, the finished video file and the finished photo file undergo uploading wherein the finished video file and the finished photo file are transferred to a hosting server, the finished photo file is displayed as a background to the interactive environment, embedding is where a finished video is attached to a spot on the background, wherein publishing is allowing the viewer access to the interactive environment.
CROSS-REFERENCE TO RELATED APPLICATION(S)

This application claims priority to and the benefit of U.S. Provisional Patent Application No. 63/289,090, filed Dec. 13, 2021, the contents of which are hereby incorporated by reference in their entirety.

Provisional Applications (1)
Number      Date           Country
63/289,090  Dec. 13, 2021  US