Information
Patent Application
Publication Number: 20040077285
Date Filed: April 22, 2003
Date Published: April 22, 2004
Abstract
A method, apparatus, and system are disclosed for simulating visual depth in a concatenated image of a remote field of action. A vision system provides multiple video camera fields of view covering a visual space of a field of action. Video image fields of view are divided into clockwise and counterclockwise sub fields of view. The clockwise sub fields of view and the counterclockwise sub fields of view cover the visual space of a field of action. Sub fields of view are concatenated into clockwise and counterclockwise images of the field of action capable of simulating visual depth.
Description
BACKGROUND OF THE INVENTION
[0001] 1. The Field of the Invention
[0002] The invention relates to concatenating images covering a visual field of action. Specifically, the invention relates to simulating visual depth in a concatenated image of a field of action of a remotely controlled vehicle.
[0003] 2. The Relevant Art
[0004] Remote control enthusiasts regularly maneuver remotely controlled vehicles over challenging courses and in sophisticated racing events. Radio controllers facilitate the control of a vehicle through radio transmissions. By breaking the physical link between the vehicle and controller, R/C enthusiasts are able to participate in organized group events such as racing or in what is known as “backyard bashing.” Additionally, R/C controllers have allowed scaled vehicles to travel over and under water, and through the air, which for obvious reasons was not previously possible with a cabled control mechanism.
[0005] Racing scaled versions of NASCAR™, Formula 1™, and Indy™ series racecars has become very popular because, unlike other sports, the public generally does not have the opportunity to race these cars. Although scaled racecars give the hobbyist the feeling of racing, for example, a stock car, remotely racing a scaled racecar may lack realism. To make a racecar visually interesting from the racer's point of view, the racecar is normally operated at speeds that are unrealistic when scaled. Additionally, R/C is limited by the number of channels or frequencies available for use. Currently, operators of racing tracks or airplane parks must track each user's frequency, and when all of the limited number of available channels are in use, no new users are allowed to participate.
[0006] One solution to this problem has been to assign a binary address to each vehicle in a system. Command data is then attached to the binary address and transmitted to all vehicles in the system. In an analog R/C environment, commands to multiple vehicles must be placed in a queue and transmitted sequentially, which introduces a slight lag between a user's control input and the vehicle's response. Each vehicle constantly monitors transmitted commands and waits for a command bearing its assigned binary address. Limitations of this system include the loss of fine control of vehicles due to the transmission lag, and the number of vehicles is ultimately limited because the cumulative time lag can become too great.
[0007] Users typically must maneuver their vehicles with only the visual input from an observation viewpoint removed from the vehicle and track. Removed observation viewpoints often obscure important visual information needed to maneuver a remotely controlled vehicle with a constantly changing position and orientation.
[0008] Users have attempted to attain the visual perspective of the remotely controlled vehicle with vision systems that mount a video camera on the vehicle itself. However, the field of view of a vehicle-mounted video camera does not cover the visual space of the entire field of action of the remotely controlled vehicle. Additionally, video images lack the depth clues vital to maneuvering a remotely controlled vehicle in difficult, high-performance situations.
[0009] Users have compensated for the visual feedback limitations of a video camera image with vision systems displaying images from multiple cameras, providing a user with a mosaic of images of the visual space of a field of action. However, various images covering the field of action may display mutually inconsistent visual feedback, reducing the effectiveness of visual clues. Multiple images of the field of action also lack visual depth information.
[0010] Users have compensated for the lack of visual depth in images of remote fields of action by mounting stereoscopic vision system cameras on a remote vehicle. However, stereoscopic cameras have a limited field of view. Stereoscopic cameras also have a greater cost for a given viewing angle.
[0011] Accordingly, it is apparent that a need exists for an improved system for controlling vehicles remotely. A need further exists for an improved control system that includes a vision system for concatenating a consistent image of a remotely controlled vehicle's field of action. More specifically, what are needed are a method, apparatus, and system for simulating visual depth in a concatenated image covering the visual space of a field of action.
BRIEF SUMMARY OF THE INVENTION
[0012] The various elements of the present invention have been developed in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available remotely controlled vehicles and their control vision systems. Accordingly, the present invention provides an improved method, apparatus, and system for displaying an integrated three-dimensional image of a remote field of action.
[0013] In accordance with the invention as embodied and broadly described herein in the preferred embodiments, an improved remote control vehicle is provided and configured to move in a direction selectable remotely by a user. The vehicle comprises a chassis configured to move about in response to vehicle control data from a user; a controller residing within the chassis configured to receive network switched packets containing the vehicle control data; and an actuator interface module configured to operate an actuator in response to the vehicle control data received by the controller. The controller is configured to transmit vehicle data feedback to a user. Additionally, the controller may comprise a wireless network interface connection configured to transmit and receive network switched packets containing vehicle control data.
[0014] The present invention comprises a method of controlling a vehicle over a digital data network, including but not limited to LAN, WAN, satellite, and digital cable networks. The method comprises providing a mobile vehicle configured to transmit and receive vehicle control data over the network, providing a central server configured to transmit and receive vehicle control data, transmitting vehicle control data, controlling the mobile vehicle in response to the transmitted vehicle control data, and receiving vehicle feedback data from the vehicle. Transmitting vehicle control data may comprise transmitting network switched packets in a peer-to-peer environment or in an infrastructure environment.
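By way of illustration only, the following minimal sketch shows one way vehicle control data might be packaged and transmitted as network packets. The JSON encoding, field names (throttle, steering), host address, and port number are illustrative assumptions and are not taken from the specification, which contemplates SNMP among other transports.

```python
# Illustrative sketch only: vehicle control data sent as one UDP datagram.
# Field names, encoding, and port are assumptions, not the specification's
# packet format.
import json
import socket

def send_vehicle_control(host: str, throttle: float, steering: float,
                         port: int = 5000) -> None:
    """Encode a control update and transmit it as a network packet."""
    packet = json.dumps({"throttle": throttle, "steering": steering}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, (host, port))

# Example: command 40% throttle and a slight left steering correction.
# send_vehicle_control("192.168.1.50", throttle=0.4, steering=-0.1)
```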
[0015] In one aspect of the present invention, a method for simulating three-dimensional visual depth in an image of a remote field of action is presented. The method comprises concatenating multiple video image fields of view covering a visual space of a field of action. Video images comprising visual spaces covered by multiple fields of view are aligned and concatenated into an image of a field of action.
[0016] The method divides a field of view into two sub fields of view. All portions of a visual field of action are covered by at least two video image sub fields of view. The method concatenates sub fields of multiple views into two distinct images of a visual field of action, with a point of view of a first image offset from a point of view of a second image. Each image of a concatenated field of action may be displayed separately to the right and left eyes of a user, simulating three-dimensional visual depth. In one embodiment, concatenated images are organized in data packets for transmission over a network.
[0017] In another aspect of the present invention, an apparatus is also presented for simulating three-dimensional visual depth in a concatenated image of a remote field of action. The apparatus includes multiple video cameras covering a single visual field of action. Each portion of a visual field of action is captured by a field of view of at least two video cameras. The apparatus divides a field of view into a clockwise and a counterclockwise sub field of view. Clockwise and counterclockwise sub fields of view are concatenated into clockwise and counterclockwise images of a visual field of action. In one embodiment, the clockwise and counterclockwise images are displayed to simulate three-dimensional visual depth.
[0018] In one embodiment, video cameras capture images reflected off mirrors. The mirrors may be positioned to locate the virtual center of each camera's focal plane in the same point to reduce parallax effects.
[0019] Various elements of the present invention are combined into a system for simulating three-dimensional visual depth in a concatenated image of a remote field of action. The system includes multiple video cameras capturing multiple video images covering one or more fields of view. Each portion of a visual space covered by a first video camera field of view is also covered by a second video camera field of view. The system divides each camera's field of view into at least two sub fields of view, a clockwise sub field of view and a counterclockwise sub field of view. The system concatenates multiple clockwise sub fields of view into a single clockwise image of a field of action. Similarly, the system concatenates multiple counterclockwise sub fields of view into a single counterclockwise image of a field of action. The clockwise and counterclockwise images may be used to display an image of the field of action with three-dimensional visual depth.
[0020] The various elements and aspects of the present invention facilitate controlling a vehicle over a digital data network with control feedback that includes the simulation of visual depth in a concatenated image of a field of action. These and other features and advantages of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] In order that the manner in which the advantages and objects of the invention are obtained will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
[0022] FIG. 1 is a perspective view of one embodiment of a network controlled vehicle of the present invention;
[0023] FIG. 2 is a schematic block diagram illustrating one embodiment of a vehicle control module of the present invention;
[0024] FIG. 3 is a schematic top view of one embodiment of a remotely controlled vehicle with video cameras in accordance with the prior art;
[0025] FIG. 4 is a schematic top view of one embodiment of a remotely controlled vehicle with stereoscopic video cameras in accordance with the prior art;
[0026] FIG. 5 is a schematic top view diagram illustrating one embodiment of a remotely controlled vehicle with video cameras in accordance with the present invention;
[0027] FIG. 6 is a flow chart illustrating one embodiment of a field of view concatenation method in accordance with the present invention;
[0028] FIG. 7 is a schematic top view of one embodiment of a video camera field of view in accordance with the present invention;
[0029] FIG. 8 is a schematic top view of one embodiment of multiple, overlapping video camera fields of view in accordance with the present invention;
[0030] FIG. 9 is a flow chart illustrating one embodiment of a visual depth image generation method in accordance with the present invention;
[0031] FIG. 10 is a block diagram of one embodiment of a field of view processing system in accordance with the present invention; and
[0032] FIG. 11 is a simplified side view of one embodiment of a video camera and mirror system in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0033] Many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
[0034] Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
[0035] Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
[0036] FIG. 1 shows a vehicle 100 that is controllable over a network. As depicted, the vehicle 100 comprises a video camera module 102 and a vehicle control module 104. The vehicle 100 is in one embodiment replicated at one-quarter scale, but may be of other scales also, including one-tenth scale, one-fifth scale, and one-third scale. Additionally, the network controlled vehicle 100 may embody scaled versions of airplanes, monster trucks, motorcycles, boats, buggies, and the like. In one embodiment, the vehicle 100 is a standard quarter scale vehicle 100 with centrifugal clutches and gasoline engines, and all of the data for the controls and sensors is communicated across the local area network. Alternatively, the vehicle 100 may be electric or liquid propane or otherwise powered. Quarter scale racecars are available from New Era Models of Nashua, N.H. as well as from other vendors, such as Danny's ¼ Scale Cars of Glendale, Ariz.
[0037] The vehicle 100 is operated by remote control, and in one embodiment an operator need not be able to see the vehicle 100 to operate it. Rather, a video camera module 102 is provided with one or more cameras 106 connected to the vehicle control module 104 for displaying the points of view of the vehicle 100 to an operator. The operator may control the vehicle 100 from a remote location at which the operator receives vehicle control data and, optionally, audio and streaming video. In one embodiment, the operator receives the vehicle control data over a local area network. In a preferred embodiment of the present invention, the video camera module 102 is configured to communicate with the operator using the vehicle control module 104. Alternatively, the video camera module 102 may be configured to transmit streaming visual data directly to an operator station.
[0038] FIG. 2 shows one embodiment of the vehicle control module 104 of FIG. 1. The vehicle control module 104 preferably comprises a network interface module 202, a central processing unit (CPU) 204, a servo interface module 206, a sensor interface module 208, and the video camera module 102. In one embodiment, the network interface module 202 is provided with a wireless transmitter and receiver 205. The transmitter and receiver 205 may be custom designed or may be a standard, off-the-shelf component such as those found on laptops or electronic handheld devices. Indeed, a simplified computer similar to a Palm™ or Pocket PC™ may be provided with wireless networking capability, as is well known in the art, and placed in the vehicle 100 for use as the vehicle control module 104.
[0039] In one embodiment of the present invention, the CPU 204 is configured to communicate with the servo interface module 206, the sensor interface module 208, and the video camera module 102 through a data channel 210. The various controls and sensors may be made to interface through any type of data channel 210 or communication ports, including PCMCIA ports. The CPU 204 may also be configured to select from a plurality of performance levels upon input from an administrator received over the network. Thus, an operator may use the same vehicle 100 and may progress from lower to higher performance levels. The affected vehicle performance may include steering sensitivity, acceleration, and top speed. This feature is especially efficacious in driver education and training applications. The CPU 204 may also provide a software failsafe with limitations to what an operator is allowed to do in controlling the vehicle 100.
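As an illustration of the performance-level limiting described above, the sketch below clamps operator input against a per-level limit table. The level names and scaling factors are hypothetical values chosen for illustration, not values from the specification.

```python
# Illustrative sketch of performance-level limiting; level names and
# scaling factors are hypothetical.
PERFORMANCE_LEVELS = {
    "beginner":     {"max_throttle": 0.3, "steering_gain": 0.5},
    "intermediate": {"max_throttle": 0.6, "steering_gain": 0.8},
    "expert":       {"max_throttle": 1.0, "steering_gain": 1.0},
}

def limit_controls(level: str, throttle: float, steering: float):
    """Clamp operator input to the selected performance level, acting as a
    software failsafe on throttle and steering sensitivity."""
    limits = PERFORMANCE_LEVELS[level]
    throttle = max(0.0, min(throttle, limits["max_throttle"]))
    steering = max(-1.0, min(steering * limits["steering_gain"], 1.0))
    return throttle, steering
```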
[0040] In one embodiment, the CPU 204 comprises a Simple Network Management Protocol (SNMP) server module 212. SNMP provides an extensible solution with low computing overhead for managing multiple devices over a network. SNMP is well known to those skilled in the art. In an alternate embodiment not depicted, the CPU 204 may comprise a web-based protocol server module configured to implement a web-based protocol, such as Java™, for network data communications.
[0041] The SNMP server module 212 is configured to communicate vehicle control data to the servo interface module 206. The servo interface module 206 communicates the vehicle control data to the corresponding servo. For example, the network interface module 202 receives vehicle control data that indicates a new position for a throttle servo 214. The network interface module 202 communicates the vehicle control data to the CPU 204, which passes the data to the SNMP server 212. The SNMP server 212 receives the vehicle control data and routes the setting that is to be changed to the servo interface module 206. The servo interface module 206 then communicates a command to the throttle servo 214 to accelerate or decelerate.
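The routing path just described might be sketched as follows: a registry maps each controllable setting to the callable that actuates the corresponding servo. The class name, registry keys, and signatures are hypothetical stand-ins, not the SNMP implementation itself.

```python
# Hypothetical sketch of the routing path: a registry maps each setting
# name to the callable that commands the corresponding servo.
from typing import Callable, Dict

class ServoInterfaceModule:
    """Stand-in for the servo interface module 206 (names are assumed)."""

    def __init__(self) -> None:
        self._servos: Dict[str, Callable[[float], None]] = {}

    def register(self, name: str, command: Callable[[float], None]) -> None:
        self._servos[name] = command

    def apply(self, name: str, value: float) -> None:
        """Route a changed setting to the matching servo command."""
        self._servos[name](value)

# Example: route a new throttle position to the throttle servo 214.
# servos = ServoInterfaceModule()
# servos.register("throttle", lambda v: print(f"throttle -> {v:.2f}"))
# servos.apply("throttle", 0.55)
```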
[0042] The SNMP server 212 is also configured to control a plurality of servos through the servo interface module 206. Examples of servos that may be utilized depending upon the type of vehicle are the throttle servo 214, a steering servo 216, a camera servo 218, and a brake servo 220. Additionally, the SNMP server 212 may be configured to retrieve data by communicating with the sensor interface module 208. Examples of some desired sensors for a gas vehicle 100 are a head temperature sensor 222, a tachometer 224, an oil pressure sensor 226, a speedometer 228, and one or more accelerometers 230. In addition, other appropriate sensors and actuators can be controlled in a similar manner. Actuators specific to an airplane, boat, submarine, or robot may be controlled in this manner. For instance, the arms of a robot may be controlled remotely over the network.
[0043] FIG. 3 is a schematic top view of a remotely controlled vehicle 310 with video cameras 320 illustrated in accordance with the prior art. The remotely controlled vehicle 310 includes one or more video cameras 320 and a transmitter 330. The video cameras 320 in one embodiment are mounted on the vehicle 310. Each of the video cameras 320 captures a video field of view according to video processing commonly known in the art. The transmitter 330 broadcasts a video signal from the cameras 320 to a user maneuvering the remotely controlled vehicle 310. In one embodiment, the transmitter 330 also broadcasts control signals used for control of the vehicle 310. In a further embodiment, the transmitter 330 may also transmit feedback data corresponding to performance parameters of the vehicle 310 during operation.
[0044] FIG. 4 is a schematic top view of a remotely controlled vehicle 310 with stereoscopic video cameras 420 in accordance with the prior art. The remotely controlled vehicle 310 includes two video cameras 420 and a transmitter 330.
[0045] The cameras 420 are substantially similar to the cameras 320 of FIG. 3 and are mounted on the remotely controlled vehicle 310. The cameras 420 are mounted in an orientation that allows the cameras 420 to capture video images of approximately the same field of view from two slightly offset points of view. The transmitter 330 broadcasts a video signal from each camera 420 to a user maneuvering a remotely controlled vehicle. In one embodiment, each video image is displayed on a single display unit. In an alternative embodiment, each video image may be displayed on an individual display unit and processed so as to simulate three-dimensional visual depth in the displayed image.
[0046] FIG. 5 is a schematic top view illustrating one embodiment of a remotely controlled vehicle 510 with video cameras 520 of the present invention. The remotely controlled vehicle 510 includes two or more video cameras 520 and a transmitter 330. Although the vehicle 510 is depicted with eight video cameras 520, other quantities, orientations, or combinations of video cameras 520 may be employed.
[0047] The video cameras 520 are mounted on the remotely controlled vehicle 510 and configured to provide a remote user with multiple video images of the fields of action for the vehicle 510. The transmitter 330 in one embodiment broadcasts one or more of the video images to the user. The images from two or more video cameras 520 may be concatenated together to form a larger image of a single field of action.
[0048] FIG. 6 is a flow chart illustrating one embodiment of a field of view concatenation method 600 of the present invention. The concatenation method 600 combines fields of view from two video images. Although for clarity purposes the steps of the concatenation method 600 are depicted in a certain sequential order, execution of the individual steps within an actual process may be conducted in parallel or in an order distinct from the depicted order.
[0049] The depicted concatenation method 600 includes an input fields of view step 610, a fields of view difference step 620, a match complete test 630, a shift and scale step 640, a calculate algorithm step 650, an apply algorithm step 660, a combine fields of view step 670, a continue test 680, a terminate test 690, and an end step 695.
[0050] The input fields of view step 610 samples two distinct fields of view. In one embodiment, the two fields of view are obtained from two distinct video cameras 520 mounted to a vehicle 510. Portions of the first and second fields of view have an overlapping visual space and the corresponding video images have overlapping pixels that are captured simultaneously. Portions of each field of view that cover the overlapping visual space may be culled for comparison.
[0051] The fields of view difference step 620 compares pixels from the first and the second fields of view. The fields of view difference step 620 compares a pixel pair, with one pixel culled from the first field of view and one pixel culled from the second field of view. In one embodiment, each pixel in the pixel pair represents a target pixel in a field of action. The fields of view difference step 620 may calculate a mathematical sum of the differences of all pixel pairs of the field of view. The sum diminishes as the first and second fields of view are more precisely aligned.
[0052] The match complete test 630 in one embodiment uses the calculated sum of differences to determine if an alignment of the first and second fields of view is satisfactory. If the alignment is satisfactory, the method 600 proceeds to the calculate algorithm step 650. If the alignment is unsatisfactory, the method 600 loops to the shift and scale step 640.
[0053] The shift and scale step 640 shifts and scales the alignment of the first field of view relative to the second field of view. The shift and scale step 640 in one embodiment shifts pixels in the first field of view horizontally and vertically to improve the alignment of the first and second fields of view. The shift and scale step 640 may also scale the first and second fields of view to improve the alignment of the fields of view.
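A minimal sketch of steps 620 through 640, assuming grayscale frames held in NumPy arrays: an exhaustive search over small horizontal and vertical shifts minimizes the sum of absolute pixel differences over the overlap, the sum diminishing as alignment improves. The search range is an assumption, and the scaling portion of step 640 is omitted for brevity.

```python
# Sketch of steps 620-640 (assumptions noted above): exhaustive shift
# search minimizing the sum of absolute differences over the overlap.
import numpy as np

def best_shift(overlap_a: np.ndarray, overlap_b: np.ndarray,
               max_shift: int = 8):
    """Return the (dy, dx) shift of overlap_a that best aligns it with
    overlap_b, plus the residual difference sum."""
    best, best_sum = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(overlap_a, dy, axis=0), dx, axis=1)
            # int32 arithmetic avoids uint8 wrap-around in the subtraction
            diff = int(np.abs(shifted.astype(np.int32)
                              - overlap_b.astype(np.int32)).sum())
            if diff < best_sum:  # a smaller sum means better alignment
                best_sum, best = diff, (dy, dx)
    return best, best_sum
```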
[0054] The calculate algorithm step 650 uses a best alignment between the first and second fields of view as calculated by the fields of view difference step 620 to determine a concatenation algorithm for concatenating the first and second fields of view. In one embodiment, the calculate algorithm step 650 creates a video mask storing the concatenation algorithm.
[0055] The apply algorithm step 660 applies the concatenation algorithm of the calculate algorithm step 650 to the first and second fields of view. The apply algorithm step 660 may modify a pixel value in preparation for concatenation. The step 660 may also delete a pixel value. The combine fields of view step 670 concatenates the first and second fields of view. In one embodiment, a pixel value from the first field of view is added to a pixel value of the second field of view.
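One plausible reading of steps 650 through 670, in which the video mask weights each pixel and weighted pixel values from the two fields of view are added: the linear cross-fade below is an assumed mask shape, not one specified by the application, and grayscale frames are assumed.

```python
# Sketch of steps 650-670: a mask weights each pixel, and masked pixel
# values from the two fields of view are added. Grayscale frames and a
# linear cross-fade mask are assumptions.
import numpy as np

def blend_overlap(region_a: np.ndarray, region_b: np.ndarray) -> np.ndarray:
    """Combine two aligned overlap regions with a linear cross-fade."""
    width = region_a.shape[1]
    mask = np.linspace(1.0, 0.0, width)[None, :]  # weight for region_a
    blended = region_a * mask + region_b * (1.0 - mask)
    return blended.astype(region_a.dtype)
```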
[0056] The continue test 680 determines if the field of view concatenation method 600 will continue to use the current concatenation algorithm in concatenating the first and second fields of view. If the continue test 680 determines to continue using the concatenation algorithm, the field of view concatenation method 600 loops to the apply algorithm step 660. If the continue test 680 determines to recalculate the concatenation algorithm, the method 600 proceeds to the terminate test 690.
[0057] The terminate test 690 determines if the field of view concatenation method 600 should terminate. If the terminate test 690 determines the method 600 should not terminate, the method 600 loops to the input fields of view step 610. If the terminate test 690 determines the method 600 should terminate, the field of view concatenation method 600 proceeds to the end step 695.
[0058] FIG. 7 is a schematic top view of one embodiment of a video camera field of view 700 of the present invention. The video camera field of view 700 includes a video camera 710, a field of view 740, a clockwise sub field of view 720, and a counterclockwise sub field of view 730.
[0059] The field of view 740 comprises the visual space captured by the video camera 710. The field of view 740 may be divided into the clockwise sub field of view 720 and the counterclockwise sub field of view 730. The clockwise sub field of view 720 and the counterclockwise sub field of view 730 may cover completely distinct visual spaces. Alternately, the clockwise sub field of view 720 and the counterclockwise sub field of view 730 may include overlapping portions of the same visual space.
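A minimal sketch of this division, assuming a frame held as a NumPy array and modeling the clockwise and counterclockwise sub fields of view as the two horizontal halves of the frame (which half is which is an arbitrary assumption); a nonzero overlap keeps shared visual space in both halves, as the alternate embodiment describes.

```python
# Sketch of dividing a field of view 740 into clockwise (720) and
# counterclockwise (730) sub fields of view, modeled as horizontal halves.
import numpy as np

def split_field_of_view(frame: np.ndarray, overlap: int = 0):
    """Return (clockwise, counterclockwise) halves of a frame; a nonzero
    overlap keeps shared visual space in both sub fields of view."""
    mid = frame.shape[1] // 2
    clockwise = frame[:, :mid + overlap]                 # assumed left half
    counterclockwise = frame[:, max(mid - overlap, 0):]  # assumed right half
    return clockwise, counterclockwise
```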
[0060] FIG. 8 is a schematic top view of one embodiment of multiple, overlapping fields of view 800 of a plurality of video cameras 520 of the present invention. The depicted schematic shows coverage of a field of action by the fields of view 740 of a plurality of video cameras 520. Although the fields of view 740 are depicted using eight video cameras 520, other quantities, orientations, or combinations of video cameras 520 may be employed. The depicted embodiment 800 includes one or more video cameras 520, one or more fields of view 740, one or more clockwise sub fields of view 720, and one or more counterclockwise sub fields of view 730.
[0061] The field of view 740 of the video camera 520 is divided into the clockwise sub field of view 720 and the counterclockwise sub field of view 730. Two or more clockwise sub fields of view 720 may be concatenated together to form a clockwise field of action image. Two or more counterclockwise sub fields of view 730 may similarly be concatenated to form a counterclockwise field of action image. The clockwise and the counterclockwise field of action images may be used to simulate three-dimensional visual depth. In one embodiment, clockwise and counterclockwise fields of action images are alternately displayed to a user's right and left eyes to provide three-dimensional visual depth in the field of action image. In an alternate embodiment, a clockwise field of action is displayed to a user's right eye and a counterclockwise field of action is displayed to a user's left eye. The clockwise and counterclockwise field of action images may also be combined for display in a polarized three-dimensional display.
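Reusing split_field_of_view from the sketch above, the assembly of sub fields into the two field of action images might look like the following, with simple side-by-side concatenation standing in for the alignment of method 600. Equal camera resolutions are assumed.

```python
# Sketch of assembling the two field of action images from all cameras;
# side-by-side concatenation stands in for the alignment of method 600.
import numpy as np

def build_stereo_panoramas(frames):
    """frames: camera frames of equal resolution, ordered around the
    vehicle. Returns (clockwise_image, counterclockwise_image)."""
    halves = [split_field_of_view(f) for f in frames]
    clockwise_image = np.concatenate([cw for cw, _ in halves], axis=1)
    counterclockwise_image = np.concatenate([ccw for _, ccw in halves],
                                            axis=1)
    # Displaying one image to each eye simulates visual depth.
    return clockwise_image, counterclockwise_image
```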
[0062] FIG. 9 is a flow chart illustrating one embodiment of a visual depth image generation method 900 of the present invention. The method 900 generates alternating clockwise and counterclockwise images of a field of action. The visual depth image generation method 900 includes a process clockwise sub fields of view step 910, a process counterclockwise sub fields of view step 920, a terminate test 930, and an end step 940.
[0063] The process clockwise sub fields of view step 910 prepares a clockwise sub field of view 720 for display. The step 910 may employ the field of view concatenation method 600 to concatenate clockwise sub fields of view 720 into a clockwise field of action image. The process counterclockwise sub fields of view step 920 prepares counterclockwise sub fields of view 730 for display. The step 920 may also employ the field of view concatenation method 600 to concatenate counterclockwise sub fields of view 730 into a counterclockwise field of action image. In one alternate embodiment, the process clockwise sub fields of view step 910 displays a counterclockwise field of action image and the process counterclockwise sub fields of view step 920 displays a clockwise field of action image.
[0064] FIG. 10 is a block diagram of one embodiment of a field of view processing system 1000 of the present invention. The depicted system 1000 prepares video images for transmission to a display device. Although the field of view processing system 1000 is illustrated using a network to transmit images, other transmission mechanisms may be employed. The field of view processing system 1000 includes one or more video cameras 1010, a video splitting module 1020, a video processing module 1030, a video memory module 1040, a packet transmission module 1050, a packet receipt module 1070, and an image display module 1080.
[0065] The video camera 1010 captures a video image of a field of view 740. The video splitting module 1020 splits the video image into a clockwise sub field of view 720 and a counterclockwise sub field of view 730. In one embodiment, the clockwise and counterclockwise sub fields of view cover distinct, separate visual spaces. In an alternate embodiment, the clockwise and counterclockwise sub fields of view share portions of visual space.
[0066] The video processing module 1030 prepares the video camera field of view for display. The video processing module 1030 concatenates two or more clockwise sub fields of view 720 into a clockwise field of action image. The video processing module 1030 also concatenates two or more counterclockwise sub fields of view 730 into a counterclockwise field of action image.
[0067] The video memory module 1040 stores a video image and a video algorithm. The video memory module 1040 may store the video image and the video algorithm for concatenating the field of action image. The packet transmission module 1050 prepares the field of action image for transmission as an image data packet over a network. In one embodiment, the clockwise and counterclockwise field of action image data are compressed and transmitted in separate data packets. In an alternate embodiment, clockwise and counterclockwise image data are compressed and transmitted using shared data packets.
[0068] The packet receipt module 1070 receives the image data packet. The packet receipt module 1070 decompresses the image data packet into a displayable format of the field of action image. The image display module 1080 displays the field of action image. The clockwise and the counterclockwise field of action images may be displayed to a right and a left eye of a user, simulating three-dimensional visual depth. In an alternate embodiment, the clockwise and the counterclockwise field of action images are combined in a polarized display with simulated three-dimensional visual depth.
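A hedged sketch of this transmission and receipt path: the image bytes are compressed, chunked into datagrams small enough for a typical Ethernet MTU, then reassembled and decompressed on receipt. The header layout and chunk size are assumptions, not the application's packet format.

```python
# Sketch of the packet transmission/receipt path; header layout (4-byte
# sequence number, 4-byte total) and chunk size are assumptions.
import socket
import struct
import zlib

CHUNK = 1400  # payload bytes per datagram, under a typical Ethernet MTU

def send_image(sock: socket.socket, addr, image_bytes: bytes) -> None:
    """Compress a field of action image and send it as numbered datagrams."""
    payload = zlib.compress(image_bytes)
    total = (len(payload) + CHUNK - 1) // CHUNK
    for seq in range(total):
        chunk = payload[seq * CHUNK:(seq + 1) * CHUNK]
        sock.sendto(struct.pack("!II", seq, total) + chunk, addr)

def reassemble(packets) -> bytes:
    """Reassemble received datagrams for one image and decompress them."""
    parts = {}
    for pkt in packets:
        seq, _total = struct.unpack("!II", pkt[:8])
        parts[seq] = pkt[8:]
    return zlib.decompress(b"".join(parts[i] for i in sorted(parts)))
```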
[0069] FIG. 11 is a simplified side view drawing of one embodiment of a video camera/mirror system 1100 of the present invention. The system 1100 includes a first video camera 1110, a second video camera 1120, one or more mirrors 1130, and a common point 1140. Although for purposes of clarity only two video cameras and two mirrors are illustrated, any number of cameras and mirrors may be employed.
[0070] The first video camera 1110 is positioned to capture an image of a portion of a visual space of a field of action as reflected by the mirror 1130. The mirror 1130 is positioned to locate the center of the virtual focal plane of the first camera 1110 at approximately the common point 1140 in space shared by the center of the virtual focal plane of the second video camera 1120. Positioning the virtual focal planes of the first camera 1110 and the second camera 1120 at the common point 1140 may eliminate parallax effects when images from the cameras 1110, 1120 are concatenated.
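The geometry behind this placement can be illustrated by reflecting a camera's focal-plane center across a planar mirror to obtain its virtual location; mirrors positioned so that all virtual centers coincide at the common point 1140 yield concatenated images free of parallax shift. The vector representation below is illustrative, not taken from the application.

```python
# Sketch of the mirror geometry: reflecting a camera's focal-plane center
# across a planar mirror yields its virtual location.
import numpy as np

def reflect_point(point, mirror_origin, mirror_normal):
    """Reflect a 3D point across the plane through mirror_origin with the
    given normal: p' = p - 2((p - o) . n) n, with n normalized."""
    p = np.asarray(point, dtype=float)
    o = np.asarray(mirror_origin, dtype=float)
    n = np.asarray(mirror_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return p - 2.0 * np.dot(p - o, n) * n

# If both cameras' virtual focal-plane centers coincide at the common
# point 1140, the concatenated images exhibit no parallax shift.
```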
[0071] The present invention allows a user to maneuver a vehicle over a digital data network using visual feedback from an image covering the visual space of the vehicle's field of action. Two or more field of view images are concatenated into a field of action image with consistent visual feedback clues. Multiple field of action images are generated to provide visual feedback with simulated three-dimensional visual depth, improving the visual clues provided to the user.
[0072] The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims
- 1. A method for simulating visual depth using a concatenated image of a remote field of action, the method comprising:
receiving a first video image and a second video image of a remote field of action;
dividing the first video image field of view into a first clockwise sub field of view and a first counterclockwise sub field of view;
dividing the second video image field of view into a second clockwise sub field of view and a second counterclockwise sub field of view;
concatenating the first and second clockwise sub fields of view into a clockwise field of action image; and
concatenating the first and second counterclockwise sub fields of view into a counterclockwise field of action image.
- 2. The method of claim 1, further comprising aligning a first field of action image with a second field of action image.
- 3. The method of claim 1, further comprising displaying two or more field of action images to simulate three-dimensional visual depth.
- 4. The method of claim 1, further comprising providing an unbroken 360° field of action image.
- 5. The method of claim 1, further comprising controlling a remote vehicle, the remote vehicle providing the locus of the field of action image.
- 6. The method of claim 5, further comprising generating network switched packets containing vehicle control data.
- 7. A method for simulating visual depth using a concatenated image of a remote field of action, the method comprising:
receiving a first video image and a second video image of a remote field of action;
dividing the first video image field of view into a first clockwise sub field of view and a first counterclockwise sub field of view;
dividing the second video image field of view into a second clockwise sub field of view and a second counterclockwise sub field of view;
concatenating the first and second clockwise sub fields of view into a clockwise field of action image;
concatenating the first and second counterclockwise sub fields of view into a counterclockwise field of action image;
aligning a first field of action image with a second field of action image;
displaying two or more field of action images to simulate visual depth; and
providing an unbroken 360° field of action image.
- 8. The method of claim 7, further comprising controlling a remote vehicle, the remote vehicle providing the locus of the field of action image.
- 9. The method of claim 8, further comprising controlling network switched packets containing vehicle control data.
- 10. An apparatus for simulating visual depth using a concatenated image of a remote field of action, the apparatus comprising:
a first video camera configured to capture a first video image field of view and a second video camera configured to capture a second video image field of view;
a video splitting module configured to divide the first video image field of view into a clockwise sub field of view and a counterclockwise sub field of view;
the video splitting module further configured to divide the second video image field of view into a clockwise sub field of view and a counterclockwise sub field of view;
a video processing module configured to concatenate the first and second clockwise sub fields of view into a clockwise field of action image; and
the video processing module further configured to concatenate the first and second counterclockwise sub fields of view into a counterclockwise field of action image.
- 11. The apparatus of claim 10, further configured with a display module to selectively display two or more field of action images to simulate three-dimensional visual depth.
- 12. The apparatus of claim 10, further configured with a mirror positioned to locate a center of a virtual focal plane of the video camera in a common point.
- 13. The apparatus of claim 10, further configured to vertically orient the axis of the video camera with the greatest pixel density.
- 14. The apparatus of claim 10, further comprising a remotely controlled vehicle, at least one of the first and second video cameras disposed on the remotely controlled vehicle.
- 15. The apparatus of claim 14, wherein the remotely controlled vehicle is the locus of the field of action image.
- 16. An apparatus for simulating visual depth in a concatenated image of a remote field of action, the apparatus comprising:
means for receiving a first video image and a second video image of a remote field of action;
means for dividing the first video image into a first clockwise sub field of view and a first counterclockwise sub field of view;
means for dividing the second video image into a second clockwise sub field of view and a second counterclockwise sub field of view;
means for concatenating the first and second clockwise sub fields of view into a clockwise field of action image; and
means for concatenating the first and second counterclockwise sub fields of view into a counterclockwise field of action image.
- 17. The apparatus of claim 16, the apparatus further comprising means for aligning a first field of action image and a second field of action image.
- 18. The apparatus of claim 16, the apparatus further comprising means for displaying the two or more field of action images to simulate three-dimensional visual depth.
- 19. The apparatus of claim 16, the apparatus further comprising means for locating the center of the focal plane of a video camera in a common point.
- 20. The apparatus of claim 16, the apparatus further comprising means for displaying an unbroken 360° field of action image.
- 21. A system for simulating visual depth using a concatenated image of a remote field of action, the system comprising:
a remotely controlled vehicle;
a first video camera and a second video camera each mounted on the remotely controlled vehicle and configured to scan a field of action;
a video splitting module configured to divide the video camera field of view into a clockwise sub field of view and a counterclockwise sub field of view;
a video processing module configured to combine two or more clockwise sub fields of view into a clockwise field of action image and two or more counterclockwise sub fields of view into a counterclockwise field of action image;
a data network configured to transmit the video images; and
a data storage server configured to store the video images.
- 22. The system of claim 21, further comprising an image display module to display two or more field of action images to simulate visual depth.
- 23. The system of claim 21, further comprising a mirror configured to locate the center of the virtual focal plane of the video camera at a common point.
- 24. The system of claim 21, further comprising the video cameras oriented around a vertical axis.
- 25. The system of claim 21, further comprising the video cameras oriented around a horizontal axis.
- 26. The system of claim 21, further comprising a video image transmission module configured to transmit the video images.
- 27. The system of claim 26, further comprising the video transmission module configured to transmit the video images over the data network.
- 28. The system of claim 21, further comprising a video image transmission module configured to transmit the video images over the data network using data packets.
- 29. The system of claim 21, further comprising a remotely controlled vehicle, the remotely controlled vehicle providing the locus of the field of action image.
- 30. A computer readable storage medium comprising computer readable program code configured to carry out a method for simulating visual depth using a concatenated image of a remote field of action, the method comprising:
receiving a first video image and a second video image;
dividing the first video image into a first clockwise sub field of view and a first counterclockwise sub field of view;
dividing the second video image into a second clockwise sub field of view and a second counterclockwise sub field of view;
concatenating the first and second clockwise sub fields of view into a clockwise field of action image; and
concatenating the first and second counterclockwise sub fields of view into a counterclockwise field of action image.
- 31. The computer readable storage medium of claim 30, wherein the method further comprises aligning a first field of action image and a second field of action image.
- 32. The computer readable storage medium of claim 30, wherein the method further comprises providing an unbroken 360° field of action image.
- 33. The computer readable storage medium of claim 30, wherein the method further comprises displaying two or more field of action images to simulate three-dimensional visual depth.
Provisional Applications (1)
Number: 60374440
Date: Apr 2002
Country: US