The present invention relates generally to an interactive virtual reality (VR) system. More specifically, the present invention relates to a VR system for use in a physical workspace to manipulate real objects by controlling virtual representations of those objects.
There is an increasing demand for systems and methods that enable virtual reality (VR) to control real-world objects, including machines, equipment, devices, tools, vehicles, etc. Conventional industrial control technologies are becoming increasingly complex. Workers and users in various industries can face heavy workloads under poor conditions, operating risks, and other hazards. For example, workers can be in direct contact with high voltage at high altitudes. Some medical procedures involve creating a number of small incisions in a patient with surgical instruments. Moreover, in petroleum refineries, assembly plants, and other complex facilities, training personnel on operation and maintenance tasks can be very expensive and risky.
In recent years, VR has seen great success in overcoming such problems. The VR experience closely resembles the real-world experience in its sense of space and object perception. All objects in the VR world can be computer generated, and a user can be “immersed” in the computer-generated space via VR visual technologies. In addition, augmented reality and mixed reality are variations of VR in which the real and virtual worlds are integrated.
Most of the existing VR systems, however, are designed for training, presentation, or marketing purposes. Currently, these VR systems and technologies do not include a process to transfer control information to the real world to effect any changes or controls initiated in VR. Thus, there is a need to develop a VR system that can control real-world devices and system components so users can receive training and avoid risky, dangerous, and/or adverse situations when working in hazardous operational zones.
The present invention aims to solve the aforementioned problems, issues, and shortcomings by improving conventional VR systems and methods through an innovative system designed to provide a new form of VR in which the real world is controlled by the user in the virtual world.
The present invention offers a method and system that displays objects in an operational zone of reality to a user who is situated inside a virtual reality (VR) environment, in or outside the operational zone, using a central controller. Using various sensors, cameras, and controls in a control system deployed in the operational zone, the central controller can instantly and continuously capture and monitor the objects in reality. Simultaneously, the central controller acquires and processes data for the operational zone, including point clouds, video streaming data, geographical data, superimposed mesh data, etc. Subsequently, the processed data is displayed and/or projected to a VR display system, thus creating a live and vivid three-dimensional (3D) VR environment for the user.
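Conceptually, this capture, process, and display loop can be summarized in the following minimal sketch; every object and method name here is a hypothetical placeholder for illustration, not a definitive implementation of the claimed system:

```python
def vr_pipeline_tick(control_system, central_controller, vr_display):
    """One iteration of the capture, process, and display loop described
    above. All objects and method names are hypothetical placeholders."""
    # 1. Capture: pull the latest point clouds and video frames from the
    #    sensors and cameras deployed in the operational zone.
    point_clouds = control_system.read_point_clouds()
    video_frames = control_system.read_video_frames()
    # 2. Process: derive mesh data from the point clouds and superimpose
    #    it with the synchronized video streams.
    scene = central_controller.build_scene(point_clouds, video_frames)
    # 3. Display: project the live 3D scene to the VR display system.
    vr_display.render(scene)
```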
Using various tools for viewing and controlling the objects in the operational zone through the VR display system in the VR environment, the method of the present invention provides complete control to the user. Thus, objects including, but not limited to, robots, equipment, machinery, tools, vehicles, hardware, etc., in the operational zone of the real world can be controlled and operated by the user in the VR environment, eliminating the need to place people and/or users in hazardous working areas. Additionally, the method is fully accessible and controllable via a network, including, but not limited to, an intranet and the Internet, so a user can be located at or away from the operational zone. Further, the method provides at least one remote server that manages the central controller and a corresponding personal computing (PC) device of the user so that efficient and effective VR access to, and control of, the operational zone in reality can be achieved.
All illustrations of the drawings are for the purpose of describing selected versions of the present invention and are not intended to limit the scope of the present invention.
Additionally, the method provides a control system to an operational zone, wherein the control system is electronically connected to the central controller in Step B. An operational zone is where the actual tasks, motions, and status changes of objects occur in reality. The operational zone includes, but is not limited to, a job site, field work site, indoor/outdoor facilities, buildings, fields, farms, etc. The control system is deployed in the operational zone for controlling all objects in the operational zone. Additionally, the control system may include, but is not limited to, sensors, cameras, actuators, tools, robots, control modules, control units, programmable logic controllers (PLCs), motor controls, stepper motors, electrical controls, electronic controls, hydraulic controls, computer controls, microcontrollers, etc. Further, the method provides a plurality of sensors and a plurality of cameras deployed in the operational zone, wherein both the plurality of sensors and the plurality of cameras are electronically connected to the control system in Step C. The plurality of sensors resides in the operational zone and includes, but is not limited to, a location sensor, geographical sensor, orientation sensor, velocity sensor, distance sensor, proximity sensor, temperature sensor, relative humidity sensor, sound sensor, ultrasound sensor, radar, laser sensor, light sensor, color sensor, pressure sensor, force sensor, light detection and ranging (LiDAR) sensor, three-dimensional (3D) point cloud sensor, meteorological sensor, topographical sensor, etc. Further, the plurality of cameras also resides in the operational zone and includes, but is not limited to, a 3D camera, 360° camera, video camera, webcam, surveillance camera, point cloud camera, video streaming device, image scanning device, 3D scanner, 3D photogrammetry scanning device, etc.
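For illustration only, the sketch below shows one hypothetical way such a control system might register the sensors and cameras of an operational zone; all class and field names are assumptions, not part of the invention:

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    """A sensor or camera deployed in the operational zone (hypothetical model)."""
    device_id: str
    kind: str        # e.g. "lidar", "360_camera", "proximity_sensor"
    location: tuple  # (x, y, z) position within the operational zone

@dataclass
class ControlSystem:
    """Aggregates the devices of one operational zone for the central controller."""
    zone_id: str
    sensors: list[Device] = field(default_factory=list)
    cameras: list[Device] = field(default_factory=list)

    def register(self, device: Device) -> None:
        # Route the device into the appropriate pool for later polling.
        pool = self.cameras if "camera" in device.kind else self.sensors
        pool.append(device)

zone = ControlSystem(zone_id="refinery-unit-7")
zone.register(Device("cam-01", "360_camera", (0.0, 3.5, 2.0)))
zone.register(Device("lidar-01", "lidar", (1.0, 0.0, 2.5)))
```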
Next, the method of the present invention acquires a plurality of data from each of the plurality of sensors and each of the plurality of cameras through the control system, wherein the plurality of data includes a plurality of point clouds and a plurality of video stream data in Step D. Subsequently, the method processes the plurality of data through the central controller, wherein the plurality of point clouds is converted to mapping data superimposed with the plurality of video stream data (Step E), and displays the processed data onto the VR display system through the central controller (Step F). Each of the acquired point clouds may comprise a plurality of data points in space, which represents a 3D shape of one of the plurality of objects in the operational zone. In one embodiment of the present invention, each of the plurality of point clouds may be superimposed with one set of video stream data by the central controller to produce a 3D live representation of one of the plurality of objects in the operational zone using various technologies, including, but not limited to, 3D visualization, animation, 3D rendering, mass customization, digital elevation modeling, triangular mesh modeling, triangulation, polygon meshing, surface reconstruction, etc.
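One plausible realization of the point-cloud processing in Step E, sketched here with the open-source Open3D library, converts a captured point cloud into a triangle mesh via surface reconstruction so it can be superimposed with video data; the file names and parameters are illustrative:

```python
import open3d as o3d

# Load a point cloud captured from the operational zone (illustrative path).
pcd = o3d.io.read_point_cloud("zone_scan.ply")

# Surface normals are required by Poisson surface reconstruction.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

# Reconstruct a triangle mesh from the point cloud (one of the meshing
# techniques named above: surface reconstruction / polygon meshing).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8
)

# The mesh can now be textured with a synchronized video frame and handed
# to the VR display system for rendering.
o3d.io.write_triangle_mesh("zone_mesh.obj", mesh)
```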
This reality input process is shown in the accompanying drawings.
The processing module of the reality input step may be communicatively connected to various input devices, including a controller, a positioning device, sensor devices, geospatial data sources, and a data storage device. The positioning device determines the time, location, and orientation of the tools. The positioning device may include one or more navigation systems, such as a global positioning system (GPS), an inertial navigation system, or other such location sensors. The sensor devices may include devices for recording video, audio, and/or other geo-referenced data and can be provided on handheld devices (e.g., a camera, personal digital assistant, portable computer, or telephone), other equipment, or a vehicle.
Sensor devices may also include video and audio input devices that receive position and altitude information from the positioning device. Video input devices may include an analog or digital camera, a camcorder, a charge-coupled device (CCD) camera, or any other image acquisition device. Audio input devices can include a microphone or other audio transducer that converts sounds into electrical signals. Sensor data sources are not limited to manned systems and may include other sources, such as remote surveillance video and satellite-based sensors. The video equipment can be a three-dimensional (3D) 360° camera, a triangulation camera system, or any video system that can stream 360° video.
Geospatial data can include any source of geospatial data, for example, a geospatial information system (a.k.a. “GIS”), an interactive map system, or an existing database that contains location-based information.
The data storage device can be configured for storing software and data and may be implemented with a variety of components or subsystems, including a magnetic disk drive, an optical disk drive, flash memory, or other devices capable of storing information.
When the attributes (location, etc.) of the real-world object change, the changes can be detected by the camera, and the information related to the change can be transferred to the VR representation process to cause a corresponding change to one of the virtual scenes or the virtual object in the VR. For example, if the tool is moved or tilted in the real world, this information is obtained by the camera during the reality input step, and the camera provides the obtained information to the VR representation process. The determination of which change to apply to at least one of the virtual objects and the virtual scene is made through programming instructions associated with the virtual object and the virtual scene.
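A hedged sketch of this change-propagation logic follows; the handler registry stands in for the "programming instructions associated with the virtual object," and all names are hypothetical:

```python
from typing import Callable

class VirtualObject:
    """Mirror of one real-world object in the VR scene (hypothetical)."""
    def __init__(self, name: str):
        self.name = name
        self.pose = (0.0, 0.0, 0.0)  # simplified x, y, z

# Programming instructions associated with each virtual object decide
# how a detected real-world change is applied in VR.
handlers: dict[str, Callable[[VirtualObject, tuple], None]] = {}

def on_change(object_name: str):
    def register(fn):
        handlers[object_name] = fn
        return fn
    return register

@on_change("drill")
def move_drill(obj: VirtualObject, new_pose: tuple) -> None:
    obj.pose = new_pose  # e.g. the real tool was moved or tilted

def reality_input_event(obj: VirtualObject, new_pose: tuple) -> None:
    """Called when the cameras detect that a real object's pose changed."""
    handler = handlers.get(obj.name)
    if handler:
        handler(obj, new_pose)
```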
The information exchange step may include software located on one or more local and/or global servers. In one embodiment, the software can be configured to process video streams captured from the operational zone and project the video streams on the virtual canvas.
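As a minimal sketch of such projection, assuming OpenCV for stream capture and a hypothetical canvas.set_texture hook into whatever engine renders the VR scene:

```python
import cv2  # OpenCV, used here to pull the live stream (illustrative)

def stream_to_canvas(stream_url: str, canvas) -> None:
    """Continuously project frames from the operational zone onto a
    virtual canvas object. `canvas.set_texture` is a hypothetical hook
    into the VR rendering engine, not a real API."""
    cap = cv2.VideoCapture(stream_url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Hand the frame to the renderer; in a real engine this would be
        # a texture upload onto the canvas geometry.
        canvas.set_texture(frame)
    cap.release()
```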
In some embodiments, for any real tool used in the real environment, a virtual copy (e.g., a 3D object) can be generated in advance. The virtual tool can be superimposed with the video stream, as in the case of augmented reality, and it can be formatted to be displayed via a VR screening system, which will be described later.
When combined, the live stream video from the operational zone and the pre-generated virtual tool can be projected to the VR screening system, and the virtual tool can be aligned with the real tool. The user then has visual information on the exact position of the tool in reality.
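A minimal sketch of this alignment, assuming the real tool's pose is available as a 4x4 homogeneous matrix from the positioning sensors (NumPy-based and illustrative only):

```python
import numpy as np

def align_virtual_tool(real_pose: np.ndarray,
                       virtual_vertices: np.ndarray) -> np.ndarray:
    """Apply the real tool's 4x4 pose matrix to the virtual tool's
    vertices so the pre-generated model lines up with the live video."""
    # Promote (N, 3) vertices to homogeneous coordinates (N, 4).
    homogeneous = np.hstack(
        [virtual_vertices, np.ones((len(virtual_vertices), 1))]
    )
    # Transform and drop the homogeneous coordinate.
    return (real_pose @ homogeneous.T).T[:, :3]

# Illustrative: the pose would come from the positioning device / sensors.
pose = np.eye(4)
pose[:3, 3] = [0.5, 0.0, 1.2]  # real tool translated within the zone
vertices = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
aligned = align_virtual_tool(pose, vertices)
```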
In some embodiments, the VR screening system can provide a three-dimensional or other immersive display of the environment, including the physical layout of that environment and a reproduction of the control system and apparatuses at the operational zone, for example, the controlled equipment, the materials, and/or other things processed by the apparatuses. In other embodiments, the VR screening system provides an immersive display of the environment that permits the user to experience interactions with the virtual environment.
The VR screening system may comprise various devices for communicating information to a user, including video and audio outputs. The video output can be any device for displaying visual information, for example, a cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED) display, plasma display, or electroluminescent display. The audio output may be a loudspeaker or any other transducer for generating audible sounds from electrical signals. The display can be conveyed to users via stereoscopic headgear of the type used for VR displays.
In some embodiments, the VR representation (VRR) step may include various VR controllers (e.g., wired gloves) that the user uses to manipulate the virtual tool. Any change in the virtual tool alignment rewrites the virtual tool constraints, which are streamed in real time to the solver of the information exchange step for optimization. The optimized virtual tool constraints (or virtual equipment operating parameters) are then sent as new setpoints to the controllers of the real tool so that the real tool can be positioned corresponding to the virtual object's position, as shown in the accompanying drawings.
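The constraint-streaming round trip might be sketched as follows, assuming a JSON-over-socket exchange with the solver; the address, message format, and field names are all illustrative assumptions, not part of the invention text:

```python
import json
import socket

def stream_constraints(new_constraints: dict,
                       solver_addr=("solver.local", 9000)) -> dict:
    """Send updated virtual-tool constraints to the information-exchange
    solver and return the optimized operating parameters. The address and
    wire format are illustrative placeholders."""
    with socket.create_connection(solver_addr) as sock:
        sock.sendall(json.dumps(new_constraints).encode() + b"\n")
        optimized = json.loads(sock.makefile().readline())
    return optimized

# A wired-glove manipulation produces new alignment constraints ...
constraints = {"tool": "drill", "tilt_deg": 12.5, "position": [0.5, 0.0, 1.2]}
# ... which the solver optimizes and the real tool's controller receives
# as new setpoints.
setpoints = stream_constraints(constraints)
```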
The VR controller(s) may include any device for communicating the user's commands to virtual reality, including a keyboard, keypad, computer mouse, touch screen, trackball, scroll wheel, joystick, television remote controller, or voice recognition controller.
The steps and the processes of a module described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module can reside in a memory unit that can include volatile memory, non-volatile memory, network devices, or other data storage devices now known or later developed for storing information/data. The volatile memory may be any type of volatile memory including, but not limited to, static or dynamic random access memory (SRAM or DRAM). The non-volatile memory may be any non-volatile memory including, but not limited to, ROM, EPROM, EEPROM, flash memory, and magnetically or optically readable memory or memory devices such as compact discs (CDs) or digital video discs (DVDs), magnetic tape, and hard drives.
The computing device may be a laptop computer, a cellular phone, a personal digital assistant (PDA), a tablet computer, and other mobile devices of the type. Communications between components and/or devices in the systems and methods disclosed herein may be unidirectional or bidirectional electronic communication through a wired or wireless configuration or network. For example, one component or device may be wired or networked wirelessly directly or indirectly, through a third-party intermediary, over the Internet, or otherwise with another component or device to enable communication between the components or devices. Examples of wireless communications include, but are not limited to, radio frequency (RF), infrared, Bluetooth, wireless local area network (WLAN) (such as WiFi), or wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, 4G network, and other communication networks of the type.
Although the invention has been explained in relation to its preferred embodiment, it is to be understood that many other possible modifications and variations can be made without departing from the spirit and scope of the invention as hereinafter claimed.
The current application claims priority to U.S. Provisional Patent Application Ser. No. 62/949,827 filed on Dec. 18, 2019.