3D SPHERICAL IMAGE SYSTEM

Abstract
Systems for producing 3D spherical imagery for a virtual walkthrough corresponding with a real environment are presented, the systems including: a rightward facing camera for capturing real-time rightward facing images, the rightward facing camera electronically coupled with a computing system; a leftward facing camera for capturing real-time leftward facing image data; a backward facing camera for capturing real-time backward facing image data; a frontward facing camera for capturing real-time frontward facing image data; an upward facing camera for capturing real-time upward facing image data; a number of laser scanners; and an inertial movement unit (IMU), where data from the number of laser scanners and the IMU captures 3D geometry information of the real environment and is rendered to provide a 3D virtual model of the real environment, and where image data from the cameras is rendered to provide image texture blending of the 3D virtual model.
Description
BACKGROUND

In recent years, three-dimensional modeling of indoor and outdoor environments has attracted much interest due to its wide range of applications such as virtual reality, disaster management, virtual heritage conservation, and mapping of potentially hazardous sites. Manual construction of these models is labor intensive and time consuming. Interior modeling in particular poses significant challenges, the primary one being the presence of an operator in the resulting model, followed by the lack of GPS indoors.


Traditional solutions to capturing spherical images include a number of cameras positioned in a ball configuration. While traditional solutions may provide a convenient structural platform for an operator to use, oftentimes the operator occupies a large portion of the captured and rendered image. In some applications, the presence of an operator in the image is not critical. However, in other applications, the presence of an operator may be undesirable or distracting. As such, 3D spherical image systems are presented herein.


BRIEF SUMMARY

The following presents a simplified summary of some embodiments of the invention in order to provide a basic understanding of the invention. This summary is not an extensive overview of the invention. It is not intended to identify key/critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some embodiments of the invention in a simplified form as a prelude to the more detailed description that is presented below.


Systems for producing 3D spherical imagery for a virtual walkthrough corresponding with a real environment are presented, the systems including: a rightward facing camera for capturing real-time rightward facing images, the rightward facing camera electronically coupled with a computing system; a leftward facing camera for capturing real-time leftward facing image data; a backward facing camera for capturing real-time backward facing image data; a frontward facing camera for capturing real-time frontward facing image data; an upward facing camera for capturing real-time upward facing image data; a number of laser scanners; and an inertial movement unit (IMU), where data from the number of laser scanners and the IMU captures 3D geometry information of the real environment and is rendered to provide a 3D virtual model of the real environment, and where image data from the cameras is rendered to provide image texture blending of the 3D virtual model. In some embodiments, the laser scanners include: a first laser scanner for scanning a horizontal plane; a second laser scanner for scanning a first vertical plane normal to a direction of motion; a third laser scanner for scanning a second vertical plane normal to the direction of motion; and a fourth laser scanner for scanning a third vertical plane tangent to the direction of motion. In some embodiments, the cameras and the number of laser scanners are each positioned with an unobstructed field of view. In some embodiments, data collection sources include: a spectrometer for capturing light source data, a barometer for capturing atmospheric pressure data, a magnetometer for capturing magnetic field data, a thermometer for capturing temperature data, a wireless local area network (WLAN) packet capture device for capturing WLAN data, a CO2 meter for capturing carbon dioxide data, and a lux meter for measuring luminance data. In some embodiments, systems further include an assembly such as: a backpack assembly for carrying the system by an operator, a motorized assembly for carrying the system by an autonomous robotic device, a motorized assembly for carrying the system by a semi-autonomous robotic device, and a motorized assembly for carrying the system by an operator guided robotic device. In some embodiments, the frontward facing camera of the backpack assembly is positioned at approximately a height of an operator's head and extends beyond the operator's head, where the upward facing camera of the backpack assembly is positioned at approximately the height of an operator's head and extends above the operator's head, where the rightward facing camera of the backpack assembly is positioned at approximately a height of an operator's right shoulder and extends beyond the operator's right shoulder, and where the leftward facing camera of the backpack assembly is positioned at approximately a height of the operator's left shoulder and extends beyond the operator's left shoulder, where the cameras are positioned immediately proximate to the operator to minimize an appearance of an operator in the 3D virtual model and to maximize the real environment being captured. In some embodiments, image data from the cameras is stitched together to provide a seamless image texture. In some embodiments, the seamless image texture results in a seamless four pi steradian spherical image texture. In some embodiments, the seamless image texture results in a seamless cubic image texture.


In other embodiments backpack assemblies for carrying a system by an operator for producing 3D spherical imagery for a virtual walkthrough corresponding with a real environment are presented, the backpack assemblies including: a rightward facing camera for capturing real-time rightward facing images, the rightward facing camera electronically coupled with a computing system, where the rightward facing camera of the backpack assembly is positioned at approximately a height of an operator's right shoulder and extends beyond the operator's right shoulder; a leftward facing camera for capturing real-time leftward facing image data, where the leftward facing camera of the backpack assembly is positioned at approximately a height of the operator's left shoulder and extends beyond the operator's left shoulder; a backward facing camera for capturing real-time backward facing image data; a frontward facing camera for capturing real-time frontward facing image data, where the frontward facing camera of the backpack assembly is positioned at approximately a height of an operator's head and extends beyond the operator's head; an upward facing camera for capturing real-time upward facing image data, where the upward facing camera of the backpack assembly is positioned at approximately the height of an operator's head and extends above the operator's head, and where the cameras are positioned immediately proximate to the operator to minimize an appearance of an operator in the 3D virtual model and to maximize the real environment being captured; a number of laser scanners; and an inertial movement unit (IMU), where data from the number of laser scanners and the IMU captures 3D geometry information of the real environment and is rendered to provide a 3D virtual model of the real environment, and where image data from the cameras is rendered to provide image texture blending of the 3D virtual model. In some embodiments, the laser scanners include: a first laser scanner for scanning a horizontal plane; a second laser scanner for scanning a first vertical plane normal to a direction of motion; a third laser scanner for scanning a second vertical plane normal to the direction of motion; and a fourth laser scanner for scanning a third vertical plane tangent to the direction of motion.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:



FIG. 1 is an illustrative representation of a 3D spherical image system in accordance with embodiments of the present invention;



FIG. 2 is an illustrative representation of a 3D spherical image system on a backpack assembly in accordance with embodiments of the present invention; and



FIG. 3 is an illustrative side view representation of a 3D spherical image system on a backpack assembly carried by an operator in accordance with embodiments of the present invention.





DETAILED DESCRIPTION

As will be appreciated by one skilled in the art, the present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.


A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.


Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.





FIG. 1 is an illustrative representation of a 3D spherical image system 100 in accordance with embodiments of the present invention. As illustrated, a number of cameras may be mounted in system embodiments featured herein. For example, embodiments illustrated include: rightward facing camera 102 for capturing real-time rightward facing images; leftward facing camera 104 for capturing real-time leftward facing image data; backward facing camera 106 for capturing real-time backward facing image data; frontward facing camera 108 for capturing real-time frontward facing image data; and upward facing camera 110 for capturing real-time upward facing image data. Image data from the cameras may be rendered to provide image texture blending of a 3D virtual model. In embodiments, cameras may sample at a frame rate in a range of approximately one frame every 4 seconds to 30 frames per second. Still further, in embodiments, the cameras may be positioned to have an overlapping field of view (FOV). In some embodiments, the cameras have a field of view (FOV) of approximately 90° to 360°. When camera embodiments are electronically coupled with a computing system as shown in FIG. 2, image data from the cameras may be stitched together to provide a seamless image texture. In addition, the seamless image texture may provide a seamless four pi steradian spherical image texture or a seamless cubic image texture without limitation.
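
By way of a non-limiting illustration, the stitching operation described above may be sketched in Python as follows. This sketch assumes idealized 90° pinhole cameras with known orientations and omits lens distortion, exposure compensation, and seam blending; the function names, camera orientation matrices, and image layout are assumptions for illustration only, not the actual implementation.

    import numpy as np

    # Illustrative sketch (assumed, not the actual pipeline): resample five
    # ideal 90-degree pinhole views into a single equirectangular panorama.
    # World frame convention: x forward, y leftward, z upward.

    def look(forward, up):
        # Rows of the returned matrix are the camera's right/down/forward
        # axes expressed in world coordinates.
        f = np.asarray(forward, float) / np.linalg.norm(forward)
        u = np.asarray(up, float) / np.linalg.norm(up)
        return np.stack([np.cross(f, u), -u, f])

    CAMS = {  # assumed orientations for the five cameras
        "front": look((1, 0, 0), (0, 0, 1)),
        "left":  look((0, 1, 0), (0, 0, 1)),
        "back":  look((-1, 0, 0), (0, 0, 1)),
        "right": look((0, -1, 0), (0, 0, 1)),
        "up":    look((0, 0, 1), (-1, 0, 0)),
    }

    def stitch_equirect(images, height=512):
        # images: dict name -> (n, n, 3) uint8 array from a 90-degree camera.
        width = 2 * height
        pano = np.zeros((height, width, 3), np.uint8)
        lat = np.linspace(np.pi / 2, -np.pi / 2, height)[:, None]
        lon = np.linspace(-np.pi, np.pi, width)[None, :]
        # Unit view direction of every panorama pixel, in world coordinates.
        d = np.stack([np.cos(lat) * np.cos(lon),
                      np.cos(lat) * np.sin(lon),
                      np.sin(lat) * np.ones_like(lon)], axis=-1)
        for name, rot in CAMS.items():
            img, n = images[name], images[name].shape[0]
            v = d @ rot.T                      # directions in the camera frame
            x, y, z = v[..., 0], v[..., 1], v[..., 2]
            # Keep directions inside this camera's 90-degree frustum.
            hit = (z > 1e-9) & (np.abs(x) <= z) & (np.abs(y) <= z)
            zs = np.where(hit, z, 1.0)         # avoid divide-by-zero off-frustum
            u_px = ((x / zs) * 0.5 + 0.5) * (n - 1)
            v_px = ((y / zs) * 0.5 + 0.5) * (n - 1)
            pano[hit] = img[np.clip(v_px[hit], 0, n - 1).astype(int),
                            np.clip(u_px[hit], 0, n - 1).astype(int)]
        return pano  # the unimaged region below the rig remains black

Because no downward facing camera is included in this sketch, the region below the rig remains unimaged, which is consistent with minimizing the appearance of an operator or carrying platform.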


Further illustrated are a number of laser scanners including: laser scanner 120 for scanning a horizontal plane; laser scanner 122 for scanning a first vertical plane normal to a direction of motion (DOM) 140; laser scanner 124 for scanning a second vertical plane normal to DOM 140; and laser scanner 126 for scanning a third vertical plane tangent to DOM 140. It may be appreciated that the DOM illustrated is provided for clarity in understanding embodiments of the present invention and should not be construed as limiting with respect to the path actually traversed. Rather, the DOM is indicative of the general orientation of the camera embodiments disclosed with respect to the path actually traversed. Laser scanners are utilized to provide range data. For example, the horizontal plane laser scanner may be utilized to determine how the system moves in the X and Y directions, that is, for 2D localization. Further, scans from the laser scanners covering the first and second vertical planes normal to the direction of motion may be stacked successively as the operator moves in order to determine the geometry of the environment surrounding the operator. Still further, the laser scanner covering the third vertical plane tangent to the direction of motion may be utilized to determine how the system moves in the Z direction, that is, for Z-direction localization. Further included is at least one inertial movement unit (IMU) for providing orientation and movement of the system. Although an IMU is not illustrated, one skilled in the art will recognize that the shape and configuration of an IMU is relatively simple, and thus an IMU may be mounted in embodiments in a manner that does not occlude other data collection devices. As such, data from the laser scanners and the IMU captures 3D geometry information of the real environment and may be rendered to provide a 3D virtual model of the real environment. In embodiments, the laser scanners sample at a rate in a range of approximately 5 to 60 Hz. In embodiments, the IMU samples at a rate in a range of approximately 150 to 300 Hz. Furthermore, it may be appreciated that, in embodiments, the cameras and the laser scanners may each be positioned with an unobstructed field of view. In this manner, resulting 3D spherical images may be rendered without or with minimal appearance of an operator, of capture devices, or of associated hardware.
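
As a brief, hypothetical illustration of how successive planar scans become 3D geometry (all names and data layouts here are assumptions, not the actual implementation), each 2D scan from a vertical scanner may be transformed into a common world frame using the pose estimated at its capture time:

    import numpy as np

    # Hypothetical sketch: accumulate 2D scans from a vertical laser scanner
    # into a 3D point cloud, given a rigid pose (R, t) per scan from the
    # laser/IMU localization described below.

    def scan_to_points(ranges, angles):
        # One planar scan as 3D points in the scanner frame; the scan plane
        # is taken here to be the scanner's local x-z plane.
        r, a = np.asarray(ranges, float), np.asarray(angles, float)
        return np.stack([r * np.cos(a), np.zeros_like(r), r * np.sin(a)], axis=1)

    def accumulate(scans, poses):
        # scans: list of (ranges, angles); poses: list of (R, t) with R a
        # 3x3 rotation and t a 3-vector locating the scanner in the world.
        cloud = [scan_to_points(rng, ang) @ R.T + t
                 for (rng, ang), (R, t) in zip(scans, poses)]
        return np.vstack(cloud)

Successive scan planes swept through the environment in this way yield the stacked cross sections from which the surrounding geometry is recovered.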



FIG. 2 is an illustrative representation of a 3D spherical image system 200 on a backpack assembly 202 in accordance with embodiments of the present invention. The backpack assembly may include a data processing component 204, which component may include an electronic computing device coupled with a power supply such as a battery pack. In addition, it may be desirable to include any number of data collection sources such as, for example: a spectrometer for capturing light source data, a barometer for capturing atmospheric pressure data, a magnetometer for capturing magnetic field data, a thermometer for capturing temperature data, a wireless local area network (WLAN) packet capture device for capturing WLAN data, a CO2 meter for capturing carbon dioxide data, and a lux meter for measuring luminance data. Thus, it may be possible to collect data from various data collection sources and to map that data to a 3D virtual model.
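
One straightforward way such readings might be mapped onto the model (a sketch under assumed, time-sorted data layouts; not the system's actual method) is to tag each timestamped reading with the nearest-in-time position estimate:

    import numpy as np

    # Hypothetical sketch: associate each auxiliary sensor reading (CO2,
    # lux, temperature, ...) with the backpack position whose timestamp is
    # closest to the reading's timestamp.

    def tag_readings(reading_times, pose_times, positions):
        # reading_times, pose_times: time-sorted NumPy arrays of timestamps;
        # positions is (M, 3) in model coordinates, one row per pose time.
        idx = np.clip(np.searchsorted(pose_times, reading_times),
                      1, len(pose_times) - 1)
        # Choose whichever neighboring pose timestamp is closer in time.
        left_closer = (reading_times - pose_times[idx - 1]
                       < pose_times[idx] - reading_times)
        return positions[np.where(left_closer, idx - 1, idx)]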


The illustrated representation is merely one manner in which 3D spherical image system embodiments may be deployed. For example, deployment assemblies may include: a motorized assembly for carrying the system by an autonomous robotic device, a motorized assembly for carrying the system by a semi-autonomous robotic device, and a motorized assembly for carrying the system by an operator guided robotic device. Furthermore, motorized assemblies may be land based, air based, or water based without limitation and without departing from embodiments provided herein.


Human Operator



FIG. 3 is an illustrative side view representation of a 3D spherical image system on a backpack assembly 300 carried by operator 302 in accordance with embodiments of the present invention. As noted above, traditional solutions to capturing spherical images include a number of cameras positioned in a ball configuration. While traditional solutions may provide a convenient structural platform for an operator to use, oftentimes the operator occupies a large portion of the captured and rendered image. In some applications, the presence of an operator in the image is not critical. However, in other applications, the presence of an operator may be undesirable or distracting. As illustrated, cameras may be placed to minimize or eliminate the operator from the imagery captured to render the 3D virtual model. As such, frontward facing camera 304 of the backpack assembly may be positioned at approximately a height of operator head 314 and may extend beyond operator head 314. The position of frontward facing camera embodiments may be achieved through various lengths of extension arms or a sliding assembly, or the camera may be temporarily secured on the human operator without limitation. In addition, upward facing camera 306 of the backpack assembly may be positioned at approximately the height of operator head 314 and may extend above operator head 314. Furthermore, rightward facing camera 308 of the backpack assembly may be positioned at approximately the height of operator right shoulder 318 and may extend beyond operator right shoulder 318. In like manner, a leftward facing camera of the backpack assembly (not shown) may be positioned at approximately a height of the operator's left shoulder and may extend beyond the operator's left shoulder. Furthermore, it may be appreciated that, in embodiments, the cameras may each be positioned with an unobstructed field of view. In this manner, resulting 3D spherical images may be rendered without or with minimal appearance of an operator, of capture devices, or of associated hardware. As such, the cameras are positioned immediately proximate to the operator to minimize an appearance of an operator in the 3D virtual model and to maximize the real environment being captured.


It may be appreciated that in motorized configurations as noted above, the position of the various cameras may be adjusted to minimize or eliminate the appearance of a vehicle or device by which 3D spherical image systems are deployed. However, the orientation of the various cameras and lasers may be preserved to ensure a seamless and complete 3D virtual model. With the backpack worn upright, x is forward, y is leftward, and z is upward. In operation, a yaw scanner scans the x-y plane, a pitch scanner scans the x-z plane, and a left vertical geometry scanner scans the y-z plane. Thus, the yaw scanner can resolve yaw rotations about the z axis, and the pitch scanner can resolve pitch rotations about the y axis. A small worked example of this frame convention follows.
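
The following fragment is illustrative only: yaw is a rotation about z, which moves points within the x-y plane observed by the yaw scanner, while pitch is a rotation about y, which moves points within the x-z plane observed by the pitch scanner.

    import numpy as np

    # Frame convention: x forward, y leftward, z upward (illustration only).

    def yaw_R(psi):      # rotation about the z axis
        c, s = np.cos(psi), np.sin(psi)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def pitch_R(theta):  # rotation about the y axis
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

    # A point in the yaw scanner's x-y scan plane stays in that plane under
    # yaw, so the yaw scanner observes yaw as an in-plane rotation; the same
    # holds for the pitch scanner's x-z plane under pitch.
    p = np.array([2.0, 1.0, 0.0])
    assert abs((yaw_R(0.3) @ p)[2]) < 1e-12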


Assuming that the yaw scanner scans the same plane over time, scan matching may be applied to successive laser scans from the yaw scanner, and the translations and rotations obtained from scan matching may be integrated to recover x, y, and yaw of the backpack over time. Likewise, assuming that the pitch scanner scans the same plane over time, scan matching may be applied to successive laser scans from the pitch scanner to recover x, z, and pitch. The assumption of scanning the same plane roughly holds for both the yaw and the pitch scanners. However, empirical data indicates that the coplanarity assumption remains more valid if the effective range of the yaw scanner is limited. In particular, points scanned that are closer to the yaw scanner appear to come from approximately the same plane between two successive scan times. However, points farther away from the yaw scanner can potentially come from two very different planes between two successive scan times, for example if between these times the backpack experiences a large pitch change. These scan points that clearly come from different planes between two scan times cannot be aligned by scan matching. Thus, it may be desirable to discard points farther than a certain threshold away from the yaw scanner. At a scanner range of up to approximately 15 meters, nearly all of the yaw scanner's range data between two successive scan times appears to come from roughly the same plane.
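
A minimal version of this scan-matching step might be sketched as follows (an illustrative point-to-point ICP in Python; the function names and the brute-force matching are assumptions for clarity, and a practical matcher would add outlier rejection and a robust initializer). Note the range filter implementing the approximately 15 meter cutoff discussed above.

    import numpy as np

    MAX_RANGE = 15.0  # discard far points, whose coplanarity is least reliable

    def polar_to_xy(ranges, angles, max_range=MAX_RANGE):
        r, a = np.asarray(ranges, float), np.asarray(angles, float)
        keep = r < max_range
        return np.stack([r[keep] * np.cos(a[keep]),
                         r[keep] * np.sin(a[keep])], axis=1)

    def icp_2d(prev_pts, curr_pts, iters=20):
        # Estimate R (2x2) and t (2,) aligning curr_pts onto prev_pts, then
        # report the recovered incremental yaw along with the translation.
        R, t = np.eye(2), np.zeros(2)
        for _ in range(iters):
            moved = curr_pts @ R.T + t
            # Nearest-neighbor correspondences (brute force for clarity).
            d2 = ((moved[:, None, :] - prev_pts[None, :, :]) ** 2).sum(-1)
            match = prev_pts[d2.argmin(axis=1)]
            # Closed-form 2D rigid alignment (Kabsch/Procrustes).
            mc, qc = moved.mean(0), match.mean(0)
            H = (moved - mc).T @ (match - qc)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            dR = Vt.T @ D @ U.T
            dt = qc - dR @ mc
            R, t = dR @ R, dR @ t + dt
        return R, t, np.arctan2(R[1, 0], R[0, 0])  # yaw increment

Integrating the per-step (x, y, yaw) increments recovered this way, as described above, yields the planar trajectory; the pitch scanner is treated analogously to recover x, z, and pitch.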


Laser/IMU based localization algorithms may be utilized to estimate the transformation between backpack poses at consecutive time steps. These transformations may be composed to reconstruct the entire trajectory the backpack traverses. However, since each transformation is somewhat erroneous, the error in the computed trajectory can grow large over time. Therefore, an automatic loop closure detection method based on images collected by the backpack may be applied. Once loops are detected, loop closure may be enforced using a nonlinear optimization technique in a Tree-based netwORk Optimizer (TORO) or any other optimization or bundle adjustment framework to reduce the overall localization error. Using the pose information provided by localization algorithms, all captured laser scans may be transformed into a single 3D coordinate frame. Since camera images are acquired at nearly the same time as a subset of the laser scans, nearest-neighbor interpolation of the pose parameters allows the pose of every camera image to be estimated. Therefore, to generate a 3D model, a) all laser scans from the floor scanner may be transformed to a single world coordinate frame and known methods may be utilized to create a triangulated surface model from ordered laser data, and b) the model may be texture mapped by projecting laser scans onto temporally close images. However, laser based localization algorithms alone may not be accurate enough for building textured 3D surface models. Thus, an image based approach to refine the laser/IMU localization results may be utilized to address this inaccuracy.
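
The composition of these per-step transformations into a trajectory, and the way per-step error compounds until a loop closure constraint redistributes it, can be sketched in the planar case as follows (illustrative only; all names are assumptions, and a full system operates on 6-DOF poses):

    import numpy as np

    # Hypothetical sketch: chain per-step planar estimates (dx, dy, dyaw)
    # from scan matching into a global trajectory. Each step's small error
    # is compounded by every subsequent composition, which is why detected
    # loops are enforced in a graph optimizer such as TORO to redistribute
    # the accumulated drift.

    def compose(pose, delta):
        # pose and delta are (x, y, yaw); delta is expressed in the frame
        # of the current pose.
        x, y, th = pose
        dx, dy, dth = delta
        c, s = np.cos(th), np.sin(th)
        return (x + c * dx - s * dy, y + s * dx + c * dy, th + dth)

    def trajectory(deltas, start=(0.0, 0.0, 0.0)):
        poses = [start]
        for d in deltas:
            poses.append(compose(poses[-1], d))
        return poses  # a loop closure adds a constraint between two poses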


The terms “certain embodiments”, “an embodiment”, “embodiment”, “embodiments”, “the embodiment”, “the embodiments”, “one or more embodiments”, “some embodiments”, and “one embodiment” mean one or more (but not all) embodiments unless expressly specified otherwise. The terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless expressly specified otherwise. The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a”, “an” and “the” mean “one or more”, unless expressly specified otherwise.


While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents, which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods, computer program products, and apparatuses of the present invention. Furthermore, unless explicitly stated, any method embodiments described herein are not constrained to a particular order or sequence. Further, the Abstract is provided herein for convenience and should not be employed to construe or limit the overall invention, which is expressed in the claims. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.

Claims
  • 1. A system for producing 3D spherical imagery for a virtual walkthrough corresponding with a real environment, the system comprising: a rightward facing camera for capturing real-time rightward facing images, the rightward facing camera electronically coupled with a computing system; a leftward facing camera for capturing real-time leftward facing image data; a backward facing camera for capturing real-time backward facing image data; a frontward facing camera for capturing real-time frontward facing image data; an upward facing camera for capturing real-time upward facing image data; a plurality of laser scanners; and an inertial movement unit (IMU), wherein data from the plurality of laser scanners and the IMU captures 3D geometry information of the real environment and is rendered to provide a 3D virtual model of the real environment, and wherein image data from the cameras is rendered to provide image texture blending of the 3D virtual model.
  • 2. The system of claim 1, wherein the plurality of laser scanners comprises: a first laser scanner for scanning a horizontal plane; a second laser scanner for scanning a first vertical plane normal to a direction of motion; a third laser scanner for scanning a second vertical plane normal to the direction of motion; and a fourth laser scanner for scanning a third vertical plane tangent to the direction of motion.
  • 3. The system of claim 1, wherein the cameras and the plurality of laser scanners are each positioned with an unobstructed field of view.
  • 4. The system of claim 1, further comprising a data collection source selected from the group consisting of: a spectrometer for capturing light source data, a barometer for capturing atmospheric pressure data, a magnetometer for capturing magnetic field data, a thermometer for capturing temperature data, a wireless local area network (WLAN) packet capture device for capturing WLAN data, a CO2 meter for capturing carbon dioxide data, and a lux meter for measuring luminance data.
  • 5. The system of claim 1, further comprising an assembly selected from the group consisting of: a backpack assembly for carrying the system by an operator, a motorized assembly for carrying the system by an autonomous robotic device, a motorized assembly for carrying the system by a semi-autonomous robotic device, and a motorized assembly for carrying the system by an operator guided robotic device.
  • 6. The system of claim 5, wherein the frontward facing camera of the backpack assembly is positioned at approximately a height of an operator's head and extends beyond the operator's head, wherein the upward facing camera of the backpack assembly is positioned at approximately the height of an operator's head and extends above the operator's head, wherein the rightward facing camera of the backpack assembly is positioned at approximately a height of an operator's right shoulder and extends beyond the operator's right shoulder, and wherein the leftward facing camera of the backpack assembly is positioned at approximately a height of the operator's left shoulder and extends beyond the operator's left shoulder, wherein the cameras are positioned immediately proximate to the operator to minimize an appearance of an operator in the 3D virtual model and to maximize the real environment being captured.
  • 7. The system of claim 1, wherein the cameras capture image data at a frame rate in a range of approximately one frame every 4 seconds to 30 frames per second.
  • 8. The system of claim 1, wherein the cameras are positioned to have an overlapping field of view (FOV).
  • 9. The system of claim 8, wherein the cameras have a field of view (FOV) of approximately 90° to 360°.
  • 10. The system of claim 1, wherein image data from the cameras is stitched together to provide a seamless image texture.
  • 11. The system of claim 10, wherein the seamless image texture results in a seamless four pi steradian spherical image texture.
  • 12. The system of claim 10, wherein the seamless image texture results in a seamless cubic image texture.
  • 13. The system of claim 1, wherein the IMU samples at a rate in a range of approximately 150 to 300 Hz.
  • 14. The system of claim 1, wherein the plurality of laser scanners sample at a rate in a range of approximately 5 to 60 Hz.
  • 15. A backpack assembly for carrying a system by an operator for producing 3D spherical imagery for a virtual walkthrough corresponding with a real environment, the backpack assembly comprising: a rightward facing camera for capturing real-time rightward facing images, the rightward facing camera electronically coupled with a computing system, wherein the rightward facing camera of the backpack assembly is positioned at approximately a height of an operator's right shoulder and extends beyond the operator's right shoulder; a leftward facing camera for capturing real-time leftward facing image data, wherein the leftward facing camera of the backpack assembly is positioned at approximately a height of the operator's left shoulder and extends beyond the operator's left shoulder; a backward facing camera for capturing real-time backward facing image data; a frontward facing camera for capturing real-time frontward facing image data, wherein the frontward facing camera of the backpack assembly is positioned at approximately a height of an operator's head and extends beyond the operator's head; an upward facing camera for capturing real-time upward facing image data, wherein the upward facing camera of the backpack assembly is positioned at approximately the height of an operator's head and extends above the operator's head, and wherein the cameras are positioned immediately proximate to the operator to minimize an appearance of an operator in a 3D virtual model and to maximize the real environment being captured; a plurality of laser scanners; and an inertial movement unit (IMU), wherein data from the plurality of laser scanners and the IMU captures 3D geometry information of the real environment and is rendered to provide the 3D virtual model of the real environment, and wherein image data from the cameras is rendered to provide image texture blending of the 3D virtual model.
  • 16. The backpack assembly of claim 15, wherein the plurality of laser scanners comprises: a first laser scanner for scanning a horizontal plane; a second laser scanner for scanning a first vertical plane normal to a direction of motion; a third laser scanner for scanning a second vertical plane normal to the direction of motion; and a fourth laser scanner for scanning a third vertical plane tangent to the direction of motion.
  • 17. The backpack assembly of claim 15, wherein the cameras and the plurality of laser scanners are each positioned with an unobstructed field of view (FOV).
  • 18. The backpack assembly of claim 15, wherein the cameras are positioned to have an overlapping FOV.
  • 19. The backpack assembly of claim 15, wherein image data from the cameras is stitched together to provide a seamless image texture.
  • 20. The backpack assembly of claim 19, wherein the seamless image texture results in a seamless four pi steradian spherical image texture.