In U.S. Pat. No. 5,130,794, claim 5, the present inventor disclosed a portable system incorporating a plurality of cameras for recording a spherical FOV scene. In that same patent, claim 4, the present inventor disclosed an optical assembly that can be constructed and placed on a conventional camcorder, enabling the camcorder to record spherical field-of-view (FOV) panoramic images. Similarly, in U.S. Pat. No. ______, IPIX later claimed a portable plural camera system for recording panoramic imagery. However, in many instances using a single camcorder is advantageous, because most people cannot afford to buy several camcorders: one for recording panoramic spherical FOV imagery and another for recording conventional directional FOV imagery. Given this, it is an object of the present invention to overcome various limitations of conventional camcorders for recording panoramic imagery.
An advantage of using a single conventional camcorder to record panoramic images is that it is readily available, adaptable, and affordable to the average consumer. However, the disadvantage is that conventional camcorders are not tailored to recording panoramic images. For instance, a limitation of the claim 4 lens is that transmitting image segments representing all portions of a spherical FOV scene to a single frame results in a low-resolution image when a portion of that scene is enlarged. This limitation is compounded further when overlapping images are recorded adjacent to one another on a single frame in order to facilitate stereographic recording. For example, the Canon XL1 camcorder with interchangeable lens capability produces an EIA standard television signal of 525 lines, 60 fields, NTSC color signal. The JVC JY-HD10U HDTV camcorder produces a 1280×720P image in a 16:9 format at 60 fields, color signal. And finally, the professional Sony HDW-F900 produces a 1920×1080 image in a 16:9 format at various frame rates, including 25, 29.97, and 59.94 fields per second, color signal. The images can be recorded in either a progressive or interlaced mode. Assuming two fisheye lenses are used to record a complete scene of spherical coverage, it is preferable that each hemispherical image be recorded at a resolution of 1000×1000 pixels. While HDTV camcorders represent an improvement, they still fall short of this desired resolution. The optical systems put forth in the present invention facilitate recording images nearer to or greater than the desired 1000×1000 pixel resolution, depending on which of the above cameras is incorporated. It is therefore an objective of the present invention to provide several related methods for adapting and enhancing a single conventional camcorder to record higher-resolution spherical FOV images.
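By way of illustration only, the resolution arithmetic above can be worked through in a short sketch. The raster assigned to the NTSC format (720×480) and the side-by-side placement of the two image circles are assumptions made for the example, not part of the disclosure.

```python
# Illustrative arithmetic only: the largest circular fisheye image that fits
# when two hemispherical images sit side by side on a single frame.
FORMATS = {
    "525-line NTSC (Canon XL1)": (720, 480),   # digitized raster (assumed)
    "720p (JVC JY-HD10U)": (1280, 720),
    "1080-line (Sony HDW-F900)": (1920, 1080),
}
TARGET = 1000  # desired pixels across each hemispherical image

for name, (w, h) in FORMATS.items():
    # Each image circle is limited by half the frame width or the full
    # frame height, whichever is smaller.
    d = min(w // 2, h)
    verdict = "meets" if d >= TARGET else "falls short of"
    print(f"{name}: {d}x{d} px per hemisphere ({verdict} {TARGET}x{TARGET})")
```

Run as written, the sketch reports 360×360 for NTSC, 640×640 for 720p, and 960×960 for the 1080-line format, consistent with the observation that even HDTV camcorders fall short of the 1000×1000 target.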
A limitation of current panoramic optical assemblies that incorporate wide angle and fisheye lenses is that the recorded image is barrel distorted. Typically, the distortion is removed through image processing. The problem with this is that it takes time and computer resources. It also requires purchasing tightly controlled proprietary software, which restricts the use of imagery captured by panoramic optical assemblies currently on the market. It is therefore an object of the present invention to reduce or remove the barrel distortion caused by the fisheye lenses by optical means, specifically by specially constructed fiber optic image conduits.
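For contrast, the following is a minimal sketch of the software dewarping step that the fiber optic approach aims to eliminate. The equidistant fisheye projection model, the 180-degree FOV, and all parameter names are assumptions for illustration, not part of the disclosure.

```python
import numpy as np

def fisheye_to_rectilinear(src, out_size=512, out_fov_deg=90.0,
                           fisheye_fov_deg=180.0):
    """Remap the center of a circular fisheye image to a rectilinear view.

    src is a square HxWx3 array holding the image circle. Assumes an
    equidistant model: radial distance in the fisheye image grows linearly
    with the angle from the optical axis (the source of barrel distortion).
    """
    h, w = src.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    radius = min(cx, cy)                              # image-circle radius, px
    f_rect = (out_size / 2.0) / np.tan(np.radians(out_fov_deg) / 2.0)

    ys, xs = np.mgrid[0:out_size, 0:out_size]
    x = xs - out_size / 2.0
    y = ys - out_size / 2.0
    r_rect = np.hypot(x, y)
    theta = np.arctan2(r_rect, f_rect)                # angle off the axis
    r_fish = theta / np.radians(fisheye_fov_deg / 2.0) * radius
    scale = np.divide(r_fish, r_rect,
                      out=np.zeros_like(r_fish), where=r_rect > 0)
    u = np.clip(cx + x * scale, 0, w - 1).astype(int)
    v = np.clip(cy + y * scale, 0, h - 1).astype(int)
    return src[v, u]                                  # nearest-neighbor sample
```

The per-frame remapping cost of such a routine is the time and computing burden the optical correction is intended to avoid.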
A limitation of current panoramic camcorders is that they rely on magnetic tape media. It is therefore an objective of the present invention to provide a method of adapting a conventional camcorder system into a panoramic camcorder system incorporating a disc (i.e., CD-ROM or DVD) recording system for easy storage and playback of the panoramic imagery.
Another limitation is that the microphone(s) on the above conventional camcorders are not designed for panoramic recording. The microphone(s) of a conventional camcorder are typically oriented to record sound in front of the conventional zoom lens. Also, conventional camcorders typically incorporate a boom microphone(s) that extends outward over the top of the camcorder. This does not work well with a panoramic camera system using the optical assembly of claim 4 to record a spherical FOV, because the microphone gets in the way of visually recording the surrounding panoramic scene. It is therefore an object of the present invention to incorporate the microphones into the optical assembly in an outward orientation consistent with recording a panoramic scene.
Another limitation is that the tripod socket mount on the above conventional camcorders is not designed to facilitate panoramic recording. Conventional camcorder mounting sockets are typically on the bottom of the camera and do not facilitate orienting the camera lens upward toward the ceiling or sky. However, orienting the camera upward toward the ceiling or sky is the optimal orientation when the panoramic lens of claim 4 is to be mounted. A limitation of current cameras is that no tripod socket is provided on the rear of the camera, opposite the lens end of the camera. It is therefore an objective of the present invention to provide a tripod mount on the back end of the camera to facilitate the preferred orientation of the panoramic lens assembly of claim 4, as improved upon in the present invention.
Another limitation of current camcorders is that they have not been designed to facilitate recording panoramic imagery and conventional directional zoom lens imagery without changing lenses. It is therefore an objective of the present invention to put forth several panoramic sensor assembly embodiments that can be mounted to a conventional camcorder which facilitate recording and playback of panoramic and/or zoom lens directional imagery.
Another limitation is that the control mechanism on the above conventional camcorders is not designed for panoramic recording. Typically, conventional camcorders are designed to be held and manually operated by the camera operator, who is located out of sight behind the camera. This does not work well with a panoramic camera system using the optical assembly of claim 4 to record a spherical FOV, because the camera operator cannot hide from view when manually operating the controls of the camera. It is therefore an object of the present invention to provide a wireless remote control device for remotely controlling the camcorder operation with a spherical FOV optical assembly, like that in claim 4 or as improved upon in the present invention.
Another limitation is that the viewfinder on the above conventional camcorders is not designed for panoramic recording. Typically, conventional camcorders are designed to be held and viewed by the camera operator, who is located out of sight behind the camera. This does not work well with a panoramic camera system using the optical assembly of claim 4 to record a spherical FOV, because the camera operator cannot hide from view when manually operating the controls of the camera. It is therefore an object of the present invention to provide a wireless remote viewfinder for remotely controlling the camcorder with a spherical FOV optical assembly, like that in claim 4 or as improved upon in the present invention.
Another limitation is that the remote control receiver(s) on the above conventional camcorders are not designed for camcorders adapted for panoramic recording. Typically, conventional camcorders incorporate remote control receiver(s) that face forward and backward of the camera. The problem with this is that a camcorder incorporating a lens like that in claim 4, or improved upon in the present invention, works most effectively when the recording lens end of the camcorder is placed upward with the optical assembly mounted onto the recording lens end of the camera. When the camcorder is pointed upward, the remote control signal does not readily reach the camcorder, because the receivers face upward toward the sky and downward toward the ground. It is therefore an object of the present invention to incorporate remote control receiver(s) onto a conventional camera that has been adapted for taking panoramic imagery, such that the modified camcorder is able to receive control signals from an operator using a wireless remote control device located horizontally (or to any side) of the panoramic camcorder.
A previous limitation of panoramic camcorder systems is that image segments comprising the panoramic scene required post production prior to viewing. With the improvement of compact high-speed computer processing systems, panoramic imagery can now be viewed in real time. It is therefore an objective of the present invention to incorporate real-time playback into the panoramic camcorder system (i.e., in camera, in the remote control unit, and/or in a linked computer) by using modern processors with a software program to manipulate and view the recorded panoramic imagery live or in playback.
A previous limitation of the camcorder system has been that there is no way to designate which subjects in a recorded panoramic scene to focus on. There has also been no method to extract from the panoramic scene a sequence of conventional imagery of a limited FOV containing just the designated subjects. It is therefore an objective of the present invention to provide associated hardware and a target tracking/feature tracking software program that allow the user to designate which subjects in the recorded panoramic scene to follow, and to make a video sequence of those subjects during production or later in post production.
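A hedged sketch of the extraction step such a program might perform follows: given per-frame pan/tilt angles from a tracking routine (assumed here, along with an equirectangular intermediate format), a limited-FOV rectilinear view is cropped from the panorama for each frame.

```python
import numpy as np

def extract_view(equirect, pan_deg, tilt_deg, fov_deg=60.0,
                 out_w=320, out_h=240):
    """Crop a rectilinear sub-view from an equirectangular panorama frame.
    pan_deg/tilt_deg would be supplied per frame by the tracking routine."""
    H, W = equirect.shape[:2]
    f = (out_w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    # Camera-frame ray for each output pixel (x right, y down, z forward).
    rays = np.stack([xs - out_w / 2.0, ys - out_h / 2.0,
                     np.full(xs.shape, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    t, p = np.radians(tilt_deg), np.radians(pan_deg)
    rot_x = np.array([[1, 0, 0],
                      [0, np.cos(t), -np.sin(t)],
                      [0, np.sin(t), np.cos(t)]])
    rot_y = np.array([[np.cos(p), 0, np.sin(p)],
                      [0, 1, 0],
                      [-np.sin(p), 0, np.cos(p)]])
    d = rays @ (rot_y @ rot_x).T                 # rotate by tilt, then pan
    lon = np.arctan2(d[..., 0], d[..., 2])       # longitude, -pi..pi
    lat = np.arcsin(np.clip(-d[..., 1], -1, 1))  # latitude, up positive
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((0.5 - lat / np.pi) * (H - 1)).astype(int)
    return equirect[v, u]

# Per frame: view = extract_view(pano, *tracker_position(frame_index)),
# where tracker_position is the (hypothetical) tracking routine's output.
```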
A previous limitation of panoramic camcorder systems is that panoramic manipulation and viewing software was not incorporated/embedded into the panoramic camcorder system and/or panoramic remote control unit. It is therefore an object of the present invention to provide associated hardware and software for manipulating and viewing the panoramic imagery recorded by the panoramic camera in order to facilitate panoramic recording and ease of use by the operator and/or viewer.
The preceding and other objects and features of this invention will become further apparent from the detailed description that follows. Such description is accompanied by a set of drawing figures. Numerals of the drawing figures, corresponding to those of the written description, point to the various features of the invention. Like numerals refer to like features throughout both the written description and the drawing figures.
Since the early years of film, several large formats have evolved. Large format film and very large high definition digital video systems are the enabling technology for creating large spherical panoramic movie theaters, as disclosed by the present inventor in his publications in The International Society for Optical Engineering proceedings, Volume 1668 (1992), pp. 2-14, and in SPIE Volume 1656, High-Resolution Sensors and Hybrid Systems (1992), pp. 87-97. The present invention takes advantage of the above mentioned enabling technologies to build large panoramic theaters which completely surround the audience in a continuous audio-visual environment.
It is therefore an object of the present invention to provide apparatus for transforming a received field-of-view into a visual stream suitable for application to a three-dimensional viewing system.
The invention provides an apparatus for transforming a stereographic received field-of-view into a panoramic image sequence for application to a camera comprising a single movie film image pickup. Such an apparatus includes means for imparting a predetermined angular differential between a first perspective and a second perspective of the field-of-view. Means are provided for imparting orthogonal polarizations to the perspective views. Means are also provided for receiving and sequentially providing the first and second perspective views to the camera.
The “Basis of Design” of all things invented can be said to be to overcome man's limitations. A current limitation of humans is that while we live in a three-dimensional environment, our senses have limitations in perceiving our three-dimensional environment. One of these constraints is that our sense of vision only perceives things in one general direction at any one time. Similarly, typical camera systems are designed to only facilitate recording, processing, and display of a multi-media event within a limited field-of-view. An improvement over previous systems would be to provide a system that allows man to record, process, and display the total surroundings in a more ergonomic and natural manner, independent of his physical constraints. A further constraint is man's ability to communicate with his fellow man in a natural manner over long distances. This invention relates, in general, to an improved panoramic interactive recording and communications system and method that allows for recording, processing, and display of the total surroundings. As with other shortcomings, man has evolved inventions by creating machines to overcome his limitations. For example, man has devised communication systems and methods to do this over the centuries . . . from smoke signals used in ancient days to advanced satellite communication systems of today. In the same vein, this invention has as its objective and aim to converge new but as yet uncombined technologies into a novel, more natural and user-friendly system for communication, popularly referred to today as “telepresence”, “visuality”, “videoality”, or “Image Based Virtual Reality” (IBVR).
Strub et al., U.S. Pat. No. 6,563,532, May 13, 2003, entitled Low Attention Recording Unit For Use By Vigorously Active Recorder, discloses a system for recording video images by an individual wearing an input, processing, and display device. Strub et al. discusses the use of cellular telephone connectivity, display of the processed scene on the wearer's eyeglasses, and recording and general processing of panoramic images. However, Strub et al. does not disclose the idea of incorporating a spherical field-of-view camera system. Such a spherical field-of-view camera system would be an improvement over the systems Strub et al. mentioned because it would allow recording of a more complete portion of the surrounding environment. Specifically, the system disclosed by Strub et al. provides at most for hemispherical recording using a single fisheye lens oriented in a forward direction, while the system proposed by the present inventor records images that facilitate spherical field-of-view recording, processing, and display.
Additionally, an embodiment of the present invention includes a mast mounted system that allows the wearer's face to be recorded, as well as the remaining surrounding scene about the mast mounted camera recording head. This is an improvement over Strub et al. in that it allows viewers at a remote location who receive the transmitted signal to see who they are talking to and the surrounding environment where the wearer is located. Strub et al. does not disclose a system that looks back at the wearer's face. Looking at the wearer's face allows more natural and personable communication between people communicating from remote locations. Finally, a related embodiment of the present invention that is also an improvement over Strub et al. is the inclusion of software to remove distortion and correct the perspective of facial images recorded when using wide field-of-view lenses with the present invention's panoramic sensor assembly 10.
Additionally, the present invention puts forth a recording, processing, and display system that is completely housed in a head mounted unit 120 or 122. Several improvements in technologies have made this possible. The following paragraphs discuss the enabling technologies that allow for this convergence into a single head-mounted unit 120 or 122.
In contrast to the present invention, Strub et al. incorporates a body harness for housing some portion of the recording, processing, and display system. Housing these systems in a single unit is beneficial over Strub et al. in certain situations because of its improved compactness, unobtrusiveness, portability, and reduction of parts.
Additionally, the present invention discloses a panoramic camera head unit 10 incorporating micro-optics and imaging sensors that have reduced volume over that disclosed in the present inventor's U.S. Pat. No. 5,130,794, dated 14 Jul. 1992, and over a panoramic camera system marketed by Internet Pictures Corporation (IPIX), Knoxville, Tenn. Prototypes by Ritchey incorporate the Nikon FC-E8 and FC-E9 fisheye lenses. The FC-E8 has a diameter of 75 mm with a 183 degree circular field-of-view (FOV), and the FC-E9 has a diameter of 100 mm with a 190 degree circular FOV. Spherical FOV panoramic cameras by IPIX incorporate fisheye lenses that have been manufactured by Coastal Optical Systems Inc., of West Palm Beach, Fla. One Coastal fisheye lens used for IPIX film photography, mounted on the Aaton cinematic camera and others, is the Super 35 mm Cinematic Lens with 185 degree FOV coverage, a diameter of 6.75 inches, and a depth of 6.583 inches; for spherical FOV video photography mounted on a Sony HDTV F900 camcorder, IPIX uses the ⅔ inch Fisheye Video Lens, at $2500, with 185 degree FOV, a diameter of 58 millimeters, and a depth of 61.56 millimeters. Coastal and IPIX use of these lenses infringes on the present inventor's claim 4 of the '794 patent. The above Ritchey prototype and IPIX/Coastal systems have not incorporated recent micro-optics, which have reduced the required size of the optic. This is an important improvement in that it allows the optic to be reduced in size from several inches to several millimeters, which makes the optical head lightweight and very portable, and thus feasible for use in the present invention.
An important aspect of the present invention is the miniaturization of the spherical FOV sensor assembly 10, which includes imaging and may include audio recording capabilities. Small cameras which facilitate this and are of a type used in the present invention include the ultra small Panasonic GP-CX261V ¼ inch 512H pixel color CCD camera module with digital signal processing board. The sensor is especially attractive for incorporation in the present invention because the cabling from the processing board to the sensor can reach ˜130 millimeters. This allows the cabling to be placed in an eyeglass frame or the mast of the panoramic sensor assembly of the present invention, which is described below. Alternatively, the company Super Circuits of Liberty Hill, Tex. sells several miniature cameras, audio devices, and associated transmission systems whose entire systems and components can be incorporated into the present invention, as will be apparent to those skilled in the art. The Super Circuits products for incorporation into unit 120 or 122 include the world's smallest video camera, which is smaller than a dime, and pinhole micro-video camera systems in the form of necktie cam, product number WCV2 (mono) and WCV3 (color), ball cap cam, pen cam, glasses cam, jean jacket button cam, and eyeglasses cam embodiments. A small remote wireless video transmitter may be attached to any of these cameras. The above cameras, transmitters, and lenses may be incorporated into the above panoramic sensor assembly or another portion of the panoramic capable wireless communication terminals/units 120, 122 to form the present invention. Still alternatively, a very small wireless video camera and lens, transceiver, data processor, and power system and components that may be integrated and adapted to form the panoramic capable wireless communication terminals/units 120, 122 is disclosed by Dr. David Cumming of Glasgow University and by Dr. Blair Lewis of Mt Sinai Hospital in New York. It is known as the “Given Diagnostic Imaging System” and is administered orally as a pill/capsule that can pass through the body and is used for diagnostic purposes.
Objective micro-lenses suitable as taking lenses in the present invention, especially for the panoramic taking assembly 10, are manufactured by, and of a type available from, AEI North America, of Skaneateles, N.Y., which provides alternative visual inspection systems. AEI sells micro-lenses for use in borescopes, fiberscopes, and endoscopes. They manufacture objective lens systems (including the objective lens and relay lens group) from 4-14 millimeters in diameter and 4-14 millimeters in length, with circular FOV coverage from approximately 20 to 180 degrees. Of specific note is that AEI can provide an objective lens with the 180 degree or slightly larger FOV coverage required for some embodiments of the panoramic sensor assembly, like that shown in
Additionally, technologies enabling and incorporated into the present invention include camcorders and camcorder electronics whose size has been reduced such that those electronics can be incorporated into an HMD or body worn device for spherical or circular FOV coverage about a point in the environment according to the present invention. Camcorder manufacturers and systems that are of a type whose components may be incorporated into the present invention include the Panasonic D-Snap SV AS-A10 Camcorder, JVC-30 DV Camcorder, Canon XL1 Camcorder, JVC JY-HD10U Digital High Definition Television Camcorder, Sony DSR-PDX10, and JVC GR-D75E and GR-DVP7E Mini Digital Video Camcorder. The optical and/or digital software/firmware picture stabilization systems incorporated into these systems are incorporated by reference into the present invention.
Additionally, technologies enabling and incorporated into the present invention include video cellular phones and personal digital assistants, and their associated integrated circuit technology. Video cellular phone manufacturers and systems that are of a type that is compatible and may be incorporated into the present invention include the RVS Remote Video Surveillance System. The system 120 or 122 includes a Cellular Video Transmitter (CVT) unit comprising a Transmitter (Tx) and Receiver (Rx) software. The Tx transmits live video or high-quality still images over limited bandwidth. The Tx sends high quality images through a cellular/PSTN/satellite phone or a leased/direct line to Rx software on a personal computer capable system. The Tx is extremely portable, with low weight and a small footprint. Components may be integrated into any of the panoramic capable wireless communication terminals/units 120, 122 of the present invention. The Tx, along with a panoramic camera means, processing means (portable PC plus panoramic software), panoramic display means, and telecommunication means (video capable cellular phone) with special panoramic software, may constitute unit 120 or 122. For instance, it could be configured into the belt worn and head unit embodiment of the system shown in
Correspondingly, makers of video cellular phones of a type which in total, or whose components, may be integrated into the present invention 120 or 122 include the AnyCall IMT-2000; the Motorola SIM Free V600; the Samsung Inc. Video Cell Phone Model SCH-V300, with a 2.4 megabit/second transfer rate capable of two-way video phone calls; and other conventional wireless satellite and wireless cellular phones using the H.324 and other related standards that allow the transfer of video information between wireless terminals. These systems include MPEG3/MPEG4 and H.263 video capabilities, call management features, messaging features, and data features including Bluetooth™ wireless technology/CE Bus (USB/Serial) that allow them to be used as the basis for the panoramic capable wireless communication terminals/units 120 or 122. Cellular video phones of a type that can be adapted for terminal/unit 120 or 122 include those by King et al. in U.S. Patent Application Publication 2003/0224832 A1, by Ijas et al. in U.S. Pat. App. Pub. 2002/0016191 A1, and by Williams in U.S. Pat. App. Pub. 2002/0063855 A1. Still alternatively, the Cy-visor personal LCD for cell phones by Daeyang E&C, a head mounted display that projects a virtual 52 inch display and that can be used with video cell phones, may be integrated and adapted into a panoramic capable wireless communication terminal/unit 120 or 122 according to the present invention. The Cy-visor is preferably adapted by adding a mast and panoramic sensor assembly, a head position and eye tracking system, and a see-through display. An advantage of the present system is that embodiments of it may be retrofitted and integrated with new wireless video cell phones and networks. This makes the benefits of the invention affordable and available to many people. Additionally, the present inventor's confidential disclosures dating back to 1994, as witnessed by Penny Mellies, also provide cellular phone embodiments that are directly related to a type that can be used in the present invention to form panoramic capable wireless communication terminals/units 120 or 122.
The telecommunication network that forms system 100 of the present invention, over which wireless panoramic units 120 or 122 communicate, is of a type that can be incorporated into the present invention and includes that by Dertz et al. in U.S. Patent Application Publication 2002/0093948 A1 and U.S. Pat. App. Pub. 2002/0184630 A1, used to provide examples in this specification in
Additionally, technologies enabling and incorporated into the present invention include wide band telecommunication networks and technology 100. Specifically, video streaming is incorporated into the present invention. Telecommunication systems 100 that may incorporate video streaming of a type compatible with the present invention include Dertz et al. in U.S. Patent Application Publication 2002/0093948 A1 and iMove, Inc., Portland, Oreg., in U.S. Pat. No. 6,654,019 B2. Another video streaming manufacturer and system that may be incorporated into the present invention is the Play Incorporated Trinity Webcaster and associated system, which can accept panoramic input feeds, perform the digital video effects required for spherical FOV content processing/manipulation and display, and broadcast over the internet.
Additionally, technologies enabling and incorporated into the present invention 120 or 122 include wireless technology that has done away with the requirement for physical connections from the viewer's camera, head-mounted display, and camera remote control to the host computers required for image processing and control processing. Wireless connectivity can be realized in the panoramic capable wireless communication terminals/units 120, 122 by the use of conventional RF and infrared transceivers. Correspondingly, recent hardware and software/firmware such as Intel™ Centrino™ mobile technology, Bluetooth technology, and Intel's™ Bulverde™ chip processor allow easy and cost-effective incorporation of video camera capabilities into wireless laptops, PDAs, smart cellular phones, HMDs, and so forth, enabling wireless devices to conduct panoramic video-teleconferencing and gaming using the panoramic capable wireless communication terminals/units 120, 122 according to the present invention. These technologies may be part of the components and systems incorporated into the present invention. For example, these wireless technologies are enabling and incorporated into the present invention in order to realize the wrist mounted, image based wireless remote control unit claimed in the present invention to control spherical FOV cameras and head mounted displays. Chips and circuitry which include transceivers allow video and data signals to be sent wirelessly between the input, processing, and display means units 120 when distributed over the user's body or off the user's body. Specifically, for example, the Intel PRO/Wireless 2100 LAN MiniPCI Adapters Types 3A and 3B provide IEEE 802.11b standard technology. The 2100 PCB facilitates the wireless transmission of up to eleven megabits per second and can be incorporated into embodiments of the panoramic capable wireless communication terminals/units 120 or 122 of the present invention.
a is a perspective view of a conventional camcorder.
b is a perspective view of a conventional camcorder of an alternative design.
c is a perspective view of a conventional remote control unit for a conventional camcorder like that shown in
d is a diagram of an operator using a conventional remote control unit with a conventional camera.
e is a diagram of a sequence of conventional frames recorded by a camera in
f is a drawing of a sequence of conventional frames recorded by a single monoscopic panoramic camera according to the prior art of U.S. Pat. No. 5,130,794, claim 4.
g is a drawing of a sequence of conventional frames recorded by a single stereoscopic panoramic camera according to the prior art of U.S. Pat. No. 5,130,794, claim 4.
h is a diagram of a sequence of conventional frames recorded by two cameras which each record a respective hemisphere comprising a panoramic spherical FOV scene, frame multiplexed electronically by a video multiplexer device as described in U.S. Pat. No. 5,130,794.
a is an exterior perspective view of a stereographic camcorder of prior art whose electro-optical system has been modified in the present invention to form a panoramic camcorder system. The stereographic camcorder in
b is an exterior perspective view of a conventional camcorder incorporating improvements disclosed herein to the arrangement in
c is a cutaway perspective view of the optical system in
d is a drawing of a sequence of conventional frames recorded by a monoscopic panoramic camcorder arrangement shown in
e is a schematic diagram showing the construction of an imaging device used in the camera in
f is a timing chart showing the operating timing and liquid crystal shutter switching timing of the present invention depicted in
g is a block diagram showing the construction of the optical shutter depicted in
h is a schematic diagram of the monoscopic panoramic camera arrangement of the present invention illustrated in
a is a perspective view of a conventional camcorder incorporating improvements disclosed herein to facilitate improved recording of a stereoscopic panoramic scene.
b is a cutaway perspective view of the stereographic optical recording system shown in FIG. 3a.
c is a drawing of a sequence of conventional frames recorded by a stereographic panoramic camcorder arrangement shown in
d is a timing chart showing the operating timing and liquid crystal shutter switching timing of the present invention depicted in
e is a schematic diagram of the stereographic panoramic camera arrangement shown in
a is an exterior perspective view of a remote control unit that includes a display unit for use with a panoramic camcorder like that shown in
b is a perspective view of an operator using the remote control unit in
a illustrates a method of optically distorting an image using fiber optic image conduits.
b illustrates applying fiber optic image conduits as illustrated in
c is a cutaway perspective view of an alternative specially designed fiber optic image conduit arrangement according to
a′ is an exterior perspective drawing of a combined panoramic spherical FOV and zoom lens camcorder system.
b′ is a schematic drawing of the electro-optical system and related components and systems associated with the camcorder system shown in
c′ is a schematic drawing of an alternative embodiment of the electro-optical system and related components and systems associated with the camcorder system shown in
d′ is a schematic drawing of another electro-optical system and related components and systems associated with the camcorder system shown in
a is a diagram showing the dimensions of a typical IMAX 70 mm movie screen.
b is a photograph of a 5 perforation, 70 mm movie frame.
c is a photograph of a 35 mm movie frame.
d is a photograph of a 15 perforation, 70 mm IMAX movie frame.
e is a photograph of a table comparing standard 16 mm, standard 35 mm, standard 70 mm, IMAX 70 mm, and IMAX Dome 70 mm film formats.
a is a perspective drawing of a cameraman operating a conventional portable filmstrip movie camera with an adapter for recording stereo coded images.
b is a top sectional view of an adapter for a filmstrip movie camera for recording stereo coded images.
a is an exterior perspective view of a conventional portable filmstrip movie camera like that in
b is a plan view of a new 11 perforation, 70 mm filmstrip format for recording hemispherical and square images in a panoramic spherical FOV filmstrip movie camera.
c is a schematic drawing of the electro-optical system according to the conventional portable filmstrip movie camera like that in
d(1) through 11d(5) comprise a set of timing diagrams that illustrate the operation of the electronic shuttering system of the movie camera and stereo adapter that have been modified to receive and record panoramic spherical FOV images.
e is a circuit schematic diagram of an electronic shuttering system of the movie camera and stereo adapter that have been modified to receive and record panoramic spherical FOV images.
a through 13e are plan views of a set of new film formats for recording hemispherical and square images in a panoramic spherical FOV filmstrip movie camera.
a through 16e illustrate large venue format panoramic film or video projection theaters designed to distribute, project, and display imagery recorded by the panoramic spherical FOV monoscopic and stereoscopic filmstrip movie cameras disclosed in the present invention.
a is a side sectional drawing of a panoramic theater like those shown in
b is a top view drawing of a panoramic theater like those shown in
a is a perspective drawing showing the one-way communication between user #1 and user #2 communicating with a head-mounted wireless panoramic communication device according to the present invention.
b is a drawing of the distorted image recorded by the panoramic sensor assembly.
c is a drawing of the image after the computer program corrects the distorted image for viewing.
d is a drawing of the signal processing of the distortion correction program.
e is a drawing representing the telecommunication network which the corrected image travels over to get to remote user #2.
f is a drawing representing the intended recipient, user #2, of the undistorted facial image.
a is a confidential disclosure dated 1994, witnessed by the present inventor and Penny L. Mellies, showing the conception of major aspects of the present invention.
b is the other side of the confidential disclosure page dated 1994.
a is a schematic diagram illustrating the input means/option of using a dynamic selective raw image capture using a spatial light modulator illustrated in
b is a schematic diagram illustrating the input means/option of using a dynamic selective raw image capture from a plurality of cameras according to the present invention.
c is a schematic diagram further illustrating input/option means in which content is input from remote sources on the telecommunications network (i.e. network servers sending 2-D or 3-D content or other remote users sending 3-D content to the local user), or from prerecorded sources (i.e. 3-D movies) and applications (i.e. 3-D games) programmed or stored on the system worn by the user.
d is a schematic diagram illustrating a portion of the hardware and software or firmware processing means that comprise the panoramic communications system that can be worn by a user.
e is a schematic diagram illustrating an additional portion of the hardware and software or firmware processing means that comprise the panoramic communications system that can be worn by a user.
f is a schematic diagram illustrating an additional portion of the hardware and software or firmware processing means that comprise the panoramic communications system that can be worn by a user.
g is a schematic diagram illustrating examples of wearable panoramic projection communication display means according to the present invention.
h is a schematic diagram illustrating wearable head-mounted and portable panoramic communication display means according to the present invention.
i is a schematic diagram illustrating prior art display means that are compatible with the present invention.
a-c are drawings of a telescoping panoramic sensor assembly 10 according to the present invention.
a is a side sectional view showing the unit 10 in the stowage position.
b is a side sectional view of the unit 10 in the operational position.
c is a perspective drawing of the unit 10 in the operational position.
a-f are drawings of the present invention 120 or 122 integrated into various common hats.
a-c are exterior perspectives illustrating the integration of the present invention into a cowboy hat 120 or 122.
d-f are exterior perspectives illustrating the integration of the present invention into a baseball cap 120 or 122.
As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention which may be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present invention in virtually any appropriately detailed structure.
In all aspects of the present invention, references to “camera” mean any device or collection of devices capable of simultaneously determining a quantity of light arriving from a plurality of directions and/or at a plurality of locations, or determining some other attribute of light arriving from a plurality of directions and/or at a plurality of locations. Similarly, references to “display”, “television” or the like, shall not be limited to just television monitors or traditional televisions used for the display of video from a camera near or distant, but shall also include computer data display means, computer data monitors, other video display devices, still picture display devices, ASCII text display devices, terminals, systems that directly scan light onto the retina of the eye to form the perception of an image, direct electrical stimulation through a device implanted into the back of the brain (as might create the sensation of vision in a blind person), and the like.
With respect to both the cameras and displays, as broadly defined above, the term “zoom” shall be used in a broad sense to mean any lens of variable focal length, any apparatus of adjustable magnification, or any digital, computational, or electronic means of achieving a change in apparent magnification. Thus, for example, a zoom viewfinder, zoom television, zoom display, or the like, shall be taken to include the ability to display a picture upon a computer monitor in various sizes through a process of image interpolation as may be implemented on a body-worn computer system.
References to “processor” or “computer” shall include sequential instruction, parallel instruction, and special purpose architectures such as digital signal processing hardware, Field Programmable Gate Arrays (FPGAs), and programmable logic devices, as well as analog signal processing devices.
References to “transceiver” shall include various combinations of radio transmitters and receivers, connected to a computer by way of a Terminal Node Controller (TNC), comprising, for example, a modem and a High Level Datalink Controller (HDLC), to establish a connection to the Internet, but shall not be limited to this form of communication. Accordingly, “transceiver” may also include analog transmission and reception of video signals on different frequencies, or hybrid systems that are partly analog and partly digital. The term “transceiver” shall not be limited to electromagnetic radiation in the frequency bands normally associated with radio, and may therefore include infrared or other optical frequencies. Moreover, the signal need not be electromagnetic, and “transceiver” may include gravity waves, or other means of establishing a communications channel.
While the architecture illustrated shows a connection from the headgear, through a computer, to the transceiver, it will be understood that the connection may be direct, bypassing the computer, if desired, and that a remote computer may be used by way of a video communications channel (for example a full-duplex analog video communications link) so that there may be no need for the computer to be worn on the body of the user.
The term “headgear” shall include helmets, baseball caps, eyeglasses, and any other means of affixing an object to the head, and shall also include implants, whether these implants be apparatus imbedded inside the skull, inserted into the back of the brain, or simply attached to the outside of the head by way of registration pins implanted into the skull. Thus “headgear” refers to any object on, around, upon, or in the head, in whole or in part.
For clarity of description, a preliminary summary of the major features of the recording, processing, and display portions of a preferred embodiment of the system is now provided, after which individual portions of the system will be described in detail.
Referring to the drawings in more detail:
As shown in
c shows a prior art conventional remote control unit 11 that comes with a conventional camera that includes standard control buttons 12. Controls include play, rewind, stop, pause, stop/start. The remote control unit includes a transmitter for communicating with the camera unit. The transmitter 13 sends a radio frequency signal 14 or infrared signal 15 over the air to receiver 9 sensors located on the camera body that face forward and backward from the camera. The remote control unit is powered by batteries located in a battery storage chamber 16 within the housing/body of the remote control unit.
d shows a camera operator using a conventional remote control unit to control a conventional video camcorder mounted on a tripod. This is shown to illustrate the limitations of using a conventional camcorder, remote control unit, and camera mount for recording panoramic camcorder images.
e through
h is a prior art diagram of a sequence of conventional frames recorded by two separate cameras, which each record a respective hemisphere comprising a panoramic spherical FOV scene, frame multiplexed electronically by a video multiplexer device as described in U.S. Pat. No. 5,130,794. Interlacing and alternating image frame information from two cameras increases the resolution. However, it requires two cameras, which can be costly; most consumers can only afford a single camcorder. The present invention uses an electro-optical adapter to multiplex two alternating images onto a single conventional camcorder.
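For reference, the prior-art frame multiplexing described above reduces to interleaving two synchronized frame streams; a minimal sketch follows (the stream names and synchronization are assumptions for illustration).

```python
def multiplex(frames_a, frames_b):
    """Interleave frames from two cameras (one per hemisphere) into a single
    stream, alternating A, B, A, B..., as a video multiplexer would."""
    for a, b in zip(frames_a, frames_b):
        yield a
        yield b

# Example: list(multiplex("A0 A1 A2".split(), "B0 B1 B2".split()))
# -> ['A0', 'B0', 'A1', 'B1', 'A2', 'B2']
```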
In view of the above difficulties with the prior art, it is an object of the present invention to provide a three-dimensional image pickup apparatus which is inexpensive in construction and easy in adjustment. To achieve the above object, the three-dimensional image pickup apparatus of the present invention employs a television camera equipped with an imaging device which has at least photoelectric converting elements and vertical transfer stages and which is so designed as to read out signal charges stored in the photoelectric converting elements one or more times for every field by transferring them almost simultaneously to the corresponding vertical transfer stages, and alternately selects object images projected through two different optical paths for every field for picking up an image, the selection timing being approximately in synchronization with the transfer timing of the signal charges from the photoelectric converting elements to the vertical transfer stages ('994 patent, p. 7).

In the above construction, object images projected through the two optical paths are alternately selected in synchronization with the field scanning of the imaging device, thereby permitting the use of a single television camera for picking up an image in three dimensions. The imaging device employed in the television camera has at least photoelectric converting elements and vertical transfer stages. In the case where the photoelectric converting elements are combined with the vertical transfer stages, the imaging device has a storage site for the signal charges on the extension of each vertical transfer stage in the transferring direction, and since the signal charges stored in the photoelectric converting elements are transferred almost simultaneously to the corresponding vertical transfer stages for simultaneous pickup of the whole screen of the image, an imaging device capable of surface scanning is used. The storage time of the signal charges in each photoelectric converting element of the imaging device is equal to, or shorter than, the time needed for scanning one field. The object images projected through the two optical paths onto the imaging device are alternately selected using optical shutters, approximately in synchronization with the timing of transferring the signal charges from the photoelectric converting elements to the vertical transfer stages of the imaging device. By using the above-mentioned imaging device, by setting the signal charge storage time in each photoelectric converting element to be less than the time needed for scanning one field, and by approximately synchronizing the selection timing of the optical paths with the timing of transferring the signal charges from the photoelectric converting elements to the vertical transfer stages, it is possible to pick up an image of good quality in three dimensions using a single television camera ('994 patent, p. 7).
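By way of illustration only, the field-sequential selection just described can be summarized in a short sketch. This is a toy model, not the disclosed circuit; the field ordering follows the later description in which the second optical path is selected for the first field, and readout lags exposure by one field.

```python
def field_sequence(n_fields):
    """Toy model of field-sequential stereo pickup: each output field carries
    light from exactly one optical path, because the shutters alternate in
    step with the transfer of charge from the photosites to the vertical
    transfer stages."""
    schedule = []
    for field in range(n_fields):
        path = 2 if field % 2 == 0 else 1   # path 2 (L) in the first field
        schedule.append({"exposed_field": field,
                         "optical_path": path,
                         "read_out_in_field": field + 1})
    return schedule

for entry in field_sequence(4):
    print(entry)
```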
a is an exterior perspective view of a stereographic camcorder of prior art. The stereographic camcorder in
Correspondingly, the stereographic camcorder's electro-optical system in
d is a drawing of a sequence of conventional frames recorded by the monoscopic panoramic camcorder arrangement shown in
h is a schematic diagram of the monoscopic panoramic camera arrangement of the present invention illustrated in
The operation of the panoramic apparatus is now described. To the television camera 40, pulse signals necessary for driving the television camera are supplied from the synchronizing signal generator 4. The television camera driving pulses, field pulses, and synchronizing pulses supplied from the synchronizing signal generator 4 are all in synchronizing relationship with one another. The light from an object introduced through the mirrors 22 and 23 and the liquid crystal shutter 24 is passed through the semitransparent mirror 25, and then focused onto the photoelectric converting area of an imaging device provided in the television camera 40. The light from the object introduced through the mirror 26 and the liquid crystal shutter 27 is deflected by 90 degrees by the semitransparent mirror 25, and then focused onto the photoelectric converting area of the imaging device provided in the television camera 40. The optical paths 1 and 2 are disposed with their respective optical axes forming a given angle .theta. (not shown) with respect to the same object. (The optical paths 1 and 2 correspond to the human right and left eyes, respectively.)
g is a block diagram showing the construction of the optical shutter depicted in
The optical shutters useful in the present invention are liquid crystal shutters which are capable of transmitting and obstructing light by controlling the voltage, which respond sufficiently fast with respect to the field scanning frequency of the television camera, and which have a long life. The optical shutters using liquid crystals may be of approximately the same construction as those previously described with reference to
Each of the liquid crystal shutters 24 and 27 comprises the deflector plates 10 and 11, the liquid crystal 12, and the transparent electrodes 13 and 14 shown in FIG. x. The liquid crystal shutters 24 and 27 are controlled by the driving pulses supplied from the liquid crystal shutter driving circuit. As previously described with reference to FIGS. x and x, description is given here supposing that the liquid crystal shutters become light permeable when the field pulse supplied to the AND circuits 16 and 17 that form part of the liquid crystal shutter driving circuit is at a low level. It is also supposed that the field pulse is at a high level for the first field and at a low level for the second field. Therefore, the liquid crystal shutter 27 shown in FIG. x transmits light in the first field, while the liquid crystal shutter 24 transmits light in the second field. This means that in the first field the light signals of the object image introduced through the second optical path are projected onto the imaging device, while in the second field the light signals of the object image introduced through the first optical path are projected onto the imaging device.
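The complementary gating described above amounts to driving the two shutters from the field pulse and its inverse; a minimal sketch of that logic follows (shutter numbering per the description; the gate model is an assumption).

```python
def shutter_states(field_pulse_high):
    """Sketch of the AND-gate drive described above: a shutter transmits when
    its gate input is low. Shutter 24 is taken to see the field pulse
    directly; shutter 27 its inverse, so 27 transmits in the first field
    (pulse high) and 24 in the second (pulse low)."""
    return {
        "shutter_24_transmits": not field_pulse_high,  # direct input is low
        "shutter_27_transmits": field_pulse_high,      # inverted input is low
    }

for pulse, name in ((True, "first field"), (False, "second field")):
    print(name, shutter_states(pulse))
```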
The imaging device receives the light signals of the object image on its photoelectric converting area, basically, over the period of one field or one frame, and integrates (stores) the photoelectrically converted signal charges over the period of one field or one frame, after which the thus stored signal charges are read out. Therefore, the output signal is provided with a delay time equivalent to the period of one field against the light signals projected on the imaging screen.
If a line-sequential scanning imaging device such as an image pickup tube or an X-Y matrix imaging device (MOS imaging device) is used for the television camera 40, three-dimensional image signals cannot be obtained. The reason will be explained with reference to FIG. x. FIG. x shows diagrammatically the conditions of the television camera scanning field and the liquid crystal shutters and the potential at a point A on the imaging screen (photoelectric converting area) of the above line-sequential scanning imaging device, while FIG. x shows the imaging screen of the line-sequential scanning imaging device.
The light signals of the optical image to be projected onto the imaging device are introduced through the second optical path (liquid crystal shutter 27) in the first field, and through the first optical path (liquid crystal shutter 24) in the second field. For convenience of explanation, the light signals introduced through the first optical path are hereinafter denoted by R, and the light signals introduced through the second optical path by L. Description will be given by taking the above-mentioned image pickup tube, which is a line-sequential scanning imaging device, as an example of the imaging device. The potential at the point A on the imaging screen of the image pickup tube gradually changes with time as the stored signal charge increases. The signal charges at the point A are then read out when a given scanning timing comes. At this point of time, however, as is apparent from FIG. x, the signal charge component SR generated by the light introduced through the first optical path and the signal charge component SL generated by the light introduced through the second optical path are mixed in the signal charge generated at the point A. This virtually means that the light from the two optical paths is mixed for projection onto the imaging device, and therefore the television camera 40 is only able to produce blurred image signals, thus being unable to produce three-dimensional image signals. Therefore, for the television camera 40, this embodiment of the invention uses an imaging device which has at least photoelectric converting elements and vertical transfer stages, or in the case where photoelectric converting elements and vertical transfer stages are combined, an imaging device which has a storage site provided on the extension of each vertical transfer stage in its transferring direction. Also, the storage time of the signal charge in the photoelectric converting elements of the imaging device is set at less than the time needed for scanning one field. The optical images introduced through the two optical paths into the imaging device are alternately selected for every field using optical shutters, approximately in synchronization with the timing of transferring the signal charges from the photoelectric converting elements to the vertical transfer stages of the imaging device of the above construction.
e is a schematic diagram showing the construction of an imaging device used in the camera in
Imaging devices useful in the present invention include an interline transfer charge-coupled device (hereinafter abbreviated as IL-CCD), a frame transfer charge-coupled device (hereinafter abbreviated as FT-CCD), and a frame/interline transfer charge-coupled device (hereinafter abbreviated as FIT-CCD). In the description of this embodiment, we will deal with the case where an IL-CCD is used as the imaging device. FIG. x is a schematic diagram showing the construction of an interline transfer charge-coupled device (IL-CCD) used in the three-dimensional image pickup apparatus according to this embodiment of the invention. Since the IL-CCD is well known, its construction and operation are only briefly described herein. As shown in FIG. x, the IL-CCD is composed of a light receiving section A and a horizontal transfer section B. The numeral 41 indicates a semiconductor substrate. The light receiving section A comprises two-dimensionally arranged photoelectric converting elements (light receiving elements) 42, gates 44 for reading out signal charges accumulated in the photoelectric converting elements, and vertical transfer stages 43 formed by CCDs to vertically transfer the signal charges read out by the gates. All the areas except the photoelectric converting elements 42 are shielded from light by an aluminum mask (not shown).
The photoelectric converting elements are separated from one another in both vertical and horizontal directions by means of a channel stopper 45. Adjacent to each photoelectric converting element are disposed an overflow drain (not shown) and an overflow control gate (not shown). The vertical transfer stages 43 comprise polysilicon electrodes .phi.V1, .phi.V2, .phi.V3, and .phi.V4, which are disposed continuously in the horizontal direction and linked in the vertical direction at the intervals of four horizontal lines. The horizontal transfer section B comprises horizontal transfer stages 46 formed by CCDs, and a signal charge detection site 47. The horizontal transfer stages 46 comprise transfer electrodes .phi.H1, .phi.H2, and .phi.H3, which are linked in the horizontal direction at the intervals of three electrodes. The signal charges transferred by the vertical transfer stages are transferred toward the electric charge detection site 47, by means of the horizontal transfer stages 46. The electric charge detection site 47, which is formed by a well known floating diffusion amplifier, converts a signal charge to a signal voltage. The operation will now be described briefly.
The signal charges photoelectrically converted and accumulated in the photoelectric converting elements 42 and 42 are transferred from the photoelectric converting sections 42 and 42 to the vertical transfer stages 43 during the vertical blanking period, using the signal readout pulse .phi.CH superposed on .phi.V1 and .phi.V3 of the vertical transfer pulses .phi.V1-.phi.V4 applied to the vertical transfer stages. When the signal readout pulse .phi.CH is applied to .phi.V1, only the signal charges accumulated in the photoelectric converting elements 42 are transferred to the potential well under the electrode .phi.V1, and when the signal readout pulse .phi.CH is applied to .phi.V3, only the signal charges accumulated in the photoelectric converting section 42 are transferred to the potential well under the electrode .phi.V3.
Thus, the signal charges accumulated in the two-dimensionally arranged numerous photoelectric converting elements 42 and 42 are transferred to the vertical transfer stages 43, simultaneously when the signal readout pulse .phi.CH is applied. Therefore, by superposing the signal readout pulse .phi.CH alternately on .phi.V1 and .phi.V3 in alternate fields, signals are read out from each photoelectric converting section once for every frame, and thus the IL-CCD operates to accumulate frame information.
The signal charges transferred from the photoelectric converting elements 42 to the electrodes .phi.V1 or .phi.V3 of the vertical transfer stages 43 are transferred to the corresponding horizontal transfer electrode of the horizontal transfer stages 46 line by line in every horizontal scanning cycle, using the vertical transfer pulses .phi.V1, .phi.V2, .phi.V3, and .phi.V4. Also, if the signal readout pulse .phi.CH is applied almost simultaneously to both .phi.V1 and .phi.V3 in one field period, the signal charges accumulated in the photoelectric converting element 42 are transferred to the potential well under the electrode .phi.V1, and the signal charges accumulated in the photoelectric converting element 42 to the potential well under the electrode .phi.V3. Signals are read out from each photoelectric converting element once for every field, and thus the IL-CCD operates to accumulate field information. In this case, the signal charges from the vertically adjacent photoelectric converting elements, i.e. L for the first field and M for the second field, are mixed in the vertical transfer stages. Thereafter, the signal charges which had been transferred from the photoelectric converting elements 42 to the electrodes .phi.V1 and .phi.V3 of the vertical transfer stages 43 are transferred to the corresponding horizontal transfer electrodes of the horizontal transfer stages 46 line by line in every horizontal scanning cycle, using the vertical transfer pulses .phi.V1, .phi.V2, .phi.V3, and .phi.V4. The signal charges transferred to the horizontal transfer electrodes are transferred to the horizontally disposed signal charge detection site 47, using high-speed horizontal transfer pulses .phi.H1, .phi.H2, and .phi.H3, where the signal charges are converted to a voltage signal to form the video signal to be outputted from the imaging device.
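As a rough illustration of the interline readout just described, the following toy model dumps the selected rows of photosites into the vertical transfer stages at once and then shifts them out line by line. It is a simplification: array row phases stand in for the .phi.V1/.phi.V3 electrode groups, and charge values are arbitrary.

```python
import numpy as np

def ilccd_read_field(photosites, rows):
    """Toy IL-CCD readout: the selected rows dump their accumulated charge
    into the vertical transfer stages simultaneously (the .phi.CH readout
    pulse), the photosites reset, and the lines then shift out one per
    horizontal scanning cycle through the horizontal transfer stage."""
    vertical_stages = photosites[rows].copy()
    photosites[rows] = 0.0                    # photosites resume integrating
    return np.array([line for line in vertical_stages])

charge = np.arange(16, dtype=float).reshape(4, 4)        # 4x4 toy sensor
first = ilccd_read_field(charge, slice(0, None, 2))      # .phi.V1 rows (0, 2)
second = ilccd_read_field(charge, slice(1, None, 2))     # .phi.V3 rows (1, 3)
# Alternating the readout pulse between the two phases in alternate fields
# reads each photosite once per frame: frame accumulation, as described above.
```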
FIG. x is a timing chart showing the operating timing and the liquid crystal shutter switching timing of the present invention. It shows the signal readout timing of the above IL-CCD in the three-dimensional image pickup apparatus of the present invention, the driving timing of the liquid crystal shutter, and the potential change in the photoelectric converting element at point Z. The signal charge at point Z is transferred to the vertical transfer stage at the specified timing (application of the pulse for reading out the signal from the photoelectric converting element to the vertical transfer stage). As is apparent from FIG. x, the optical paths are switched in step with this readout timing.
A second embodiment of the present invention will be described with reference to FIG. x. In an IL-CCD, it is possible to obtain video information of field accumulation without mixing the signal charges from two adjacent photoelectric converting elements as is done in the case of the foregoing embodiment. The principle is described referring to FIGS. x and x. FIG. x shows the pulse (VBLK) representing the vertical blanking period, the field pulse emitted from the synchronizing signal generator 4 shown in FIG. x, the signal readout timing of the IL-CCD, the driving timing of the liquid crystal shutters, the potential change in the photoelectric converting element at point Z, and the output signal from the imaging device.
The following describes the operation. During the first field, the signal readout pulse φCH is applied to φV3 to transfer the signal charges generated at the corresponding photoelectric converting elements 42 to the vertical transfer stage. The signal charges are then transferred at high speed, using a high-speed transfer pulse φVF attached to the vertical transfer pulses φV1-φV4, and are discharged from the horizontal transfer stage. Thereafter, the signal readout pulse φCH is applied to φV1 to transfer the signal charges generated at the corresponding photoelectric converting elements 42 to the vertical transfer stage 43. The signal charges are then transferred, line by line in every horizontal scanning cycle, to the corresponding horizontal transfer electrodes of the horizontal transfer stage 46, using the vertical transfer pulses φV1-φV4, thereby conducting the horizontal transfer. During the second field, the signal readout pulse φCH is applied to φV1 to transfer the signal charges to the vertical transfer stage 43; the signal charges are then transferred at high speed, using a high-speed transfer pulse φVH attached to the vertical transfer pulses φV1-φV4, and are discharged from the horizontal transfer stage. After that, the signal readout pulse φCH is applied to φV3, and the signal charges are transferred, line by line in every horizontal scanning cycle, to the corresponding horizontal transfer electrodes of the horizontal transfer stage 46, using the vertical transfer pulses φV1-φV4, thereby conducting the horizontal transfer. With the above operation, it is possible to obtain the video signal of field accumulation. As is apparent from FIG. x, the above-mentioned discharge of unnecessary signal charge and the transfer of the signal charges from the photoelectric converting section to the vertical transfer stage are performed during the vertical blanking period, thus preventing the light from the two optical paths from being mixed for projection onto the photoelectric converting elements in the imaging device. Therefore, the television camera 40 shown in FIG. x alternately outputs the video signal of the object image transmitted through optical path 1 for the first field and the video signal of the object image transmitted through optical path 2 for the second field, thus producing a three-dimensional image video signal.
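The readout sequence of this second embodiment can be summarized as an event schedule per field. The sketch below merely restates the ordering described above; the step labels are illustrative.

```python
# Per-field event schedule for the second embodiment: the rows of one
# phase are read and dumped at high speed during vertical blanking,
# then the rows of the other phase are read and transferred line by
# line. The phase roles swap between fields. Purely illustrative.

def field_schedule(field):
    dump, keep = ("V3", "V1") if field == 1 else ("V1", "V3")
    return [
        f"apply readout pulse CH to phi-{dump} (vertical blanking)",
        f"high-speed dump of phi-{dump} charges via the fast transfer pulse",
        f"apply readout pulse CH to phi-{keep}",
        f"line-by-line transfer of phi-{keep} charges each horizontal cycle",
    ]

for field in (1, 2):
    print(f"Field {field}:")
    for step in field_schedule(field):
        print("  -", step)
```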
In the IL-CCD, it is also possible to set the storage time of the signal charges in the photoelectric converting elements so as to be shorter than the field period. The purpose of a shorter storage time of the signal charges is to improve the dynamic resolution of the video signal. The imaging device produces the video signal by integrating (accumulating) the signal charges generated by the light signals projected onto the photoelectric converting element.
Therefore, if the object moves during the integrating time of the signal charges, the resolution (referred to as the dynamic resolution) of the video signal will deteriorate. To improve the dynamic resolution, it is necessary to provide a shorter integrating (accumulating) time of the signal charges. The present invention is also applicable to the case where a shorter integrating (accumulating) time of the signal charges is used.
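The effect of integration time on dynamic resolution can be made concrete with a simple blur estimate: the smear, in pixels, is roughly the image-plane velocity multiplied by the accumulation time. The velocity and timing values below are illustrative assumptions only.

```python
# Rough motion-smear estimate: an object moving across the frame at
# `pixels_per_second` smears over (velocity x integration time) pixels,
# so halving the accumulation time halves the smear. Example values.

def smear_pixels(pixels_per_second, integration_s):
    return pixels_per_second * integration_s

v = 600.0                                  # image motion, pixels/s
print(smear_pixels(v, 1 / 60))             # full field period: 10.0 px
print(smear_pixels(v, 1 / 250))            # shortened storage: 2.4 px
```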
The following describes the principle with reference to FIGS. x and x. FIG. x shows the pulse (VBLK) representing the vertical blanking period, the field pulse emitted from the synchronizing signal generator 4 shown in FIG. x, and the related signal timings.
An overflow drain (abbreviated as OFD) is provided, as is well known, to prevent the blooming phenomenon that is inherent in a solid-state imaging device such as the IL-CCD. The amount of charge that can be accumulated in a photoelectric converting element is set by the potential of an overflow control gate (abbreviated as OFCG). When signal charge is generated exceeding the set value, the excess charge spills over the OFCG into the OFD and is thus drained from the imaging device.
Therefore, when the potential barrier of the OFCG is lowered (i.e., the voltage applied to the OFCG is increased) while the light signals from the object are projected onto the photoelectric converting elements, the signal charges accumulated in the photoelectric converting elements are spilled into the OFD. As a result, the potential of the photoelectric converting element at point Z is as shown in FIG. x.
In this embodiment, the description has dealt with the case of a horizontal OFD, with an OFCG and an OFD disposed adjacent to each photoelectric converting element, but the present invention is also applicable to the case in which a vertical OFD disposed in the depth direction of the imaging device is used. The operating principle described with reference to FIG. x can be directly applied to the case in which the storage time is controlled by using a frame/interline transfer solid-state imaging device. Since the frame/interline transfer solid-state imaging device is described in detail in Japanese Unexamined Patent Publication (Kokai) No. 55(1980)-52675, a description of this device will not be given. This imaging device is essentially the same as the above-mentioned interline transfer solid-state imaging device, except that a vertical transfer storage gate is disposed on the extension of each of the vertical transfer stages. The purpose of this construction is to reduce the level of vertically generated smear by sequentially reading out the signal charges in the light receiving section after transferring them at high speed to the vertical storage transfer stage, as well as to enable the exposure time of the photoelectric converting elements to be set to any value. Setting the exposure time of the photoelectric converting element to any value has the same effect as the example of exposure (storage) time control using the interline solid-state imaging device described with reference to FIG. x. Referring again to FIG. x, the optical paths are alternately selected to project light into the television camera, approximately in synchronization with the timing of reading out the signal charges from the photoelectric converting elements to the vertical transfer stages. Alternatively, as is apparent from FIG. x, the optical paths may be alternately selected using the liquid crystal shutters approximately in synchronization, for example, with the timing at which the pulse voltage is applied to the OFCG. Also, the period during which an object image through each optical path is projected onto the photoelectric converting elements may be approximately equal to the period from the application of the pulse voltage to the OFCG to the application of the readout pulse.
It is also apparent that, in the case where the storage period of the signal charges in the photoelectric converting elements is shorter than the field period, the projection periods from the two optical paths into the television camera need not be equal. In other words, the period during which the object image through each optical path is projected onto the photoelectric converting elements of the solid-state imaging device should be approximately equal to, or cover, the signal storage time.
As described above, according to the present invention, object images introduced through two different optical paths are alternately selected in synchronization with the field scanning of the imaging device, thus permitting the use of a single television camera for picking up an image in three dimensions. In this embodiment, the timings shown in FIGS. x and x are used, but the signal charge readout timing and the switching timing of the liquid crystal shutters need only be set inside the vertical retrace period. Also, a relative deviation of the signal charge readout timing with respect to the switching timing of the liquid crystal shutters is allowable for practical use if the deviation is inside the vertical retrace period. In this embodiment, description of the three-dimensional image pickup apparatus has been omitted, as it is exactly the same as the one described with reference to FIG. x.
Industrial Applicability
As described above, the present invention can provide an inexpensive panoramic image pickup apparatus using a single television camera. The panoramic image pickup apparatus of the present invention not only allows anyone without special skill in shooting an object to produce an image in three dimensions, but, in the preferred embodiment as a camcorder, also provides improved mobility.
Finally, the above specification teaches several new ways of building a panoramic camcorder. The present invention teaches that, in general, any stereographic camera can be modified into a panoramic camera by removing the stereographic lenses that are oriented in parallax and replacing them with two fisheye lenses facing in opposite directions with adjacent FOV coverage. The present invention further teaches replacing one of the image paths with an electro-optical assembly comprising two fisheye lenses facing in opposite directions with adjacent FOV coverage, while retaining a conventional zoom lens on the second image path to record conventional imagery, such that either type of imagery may be recorded, or imagery from path one and path two may be recorded in an alternating manner.
FIG. 3a is an exterior perspective view of a conventional camcorder incorporating improvements disclosed herein to facilitate improved recording of a stereoscopic panoramic scene. To accomplish this, the stereographic panoramic audio-visual recording assembly is attached to a conventional camcorder.
FIG. 3b is a cutaway perspective view of the panoramic stereographic optical recording system shown in FIG. 3a that illustrates the general operation of the system. In operation, images are recorded in alternating fashion: images from fisheye lenses S1 and S2 are recorded simultaneously, and then images from fisheye lenses S3 and S4 are recorded. The optical shutters useful in the present invention are liquid crystal shutters capable of transmitting and obstructing light by controlling the applied voltage, which respond sufficiently fast with respect to the field scanning frequency of the television camera, and which have a long life. The optical shutters using liquid crystals may be of approximately the same construction as those described with reference to FIG. x.
FIG. 3c is a drawing of a sequence of conventional frames recorded by the stereographic panoramic camcorder arrangement shown in FIGS. 3a and 3b.
FIG. xa is an exterior perspective view of a generalized design for a remote control unit that includes a display unit for use with a panoramic camcorder like that shown in FIG. x.
The remote control unit, in its simplest form, is generally described in FIG. xa.
The wireless video transmitter and receiver unit may be like those described in Radio Electronics magazine articles by William Sheets and Rudolf F. Graf, entitled “Wireless Video Camera Link,” dated February 1986, and “Amateur TV Transmitter,” dated June 1989. Similarly, U.S. Pat. No. 5,264,935, dated November 1993, by Nakajima presents a wireless unit that may be incorporated in the present invention to facilitate wireless video transmission to the remote control unit and reception by the panoramic camera control unit. In this arrangement the wireless video transmitter transmits a radio frequency signal from the camera to the receiver located on the remote control unit.
In this arrangement the remote control unit uses a transmitter arrangement like that found in typical camcorder units: the remote control unit transmits an infrared signal to the panoramic camera system. However, it is preferable that the typical camcorder infrared transmitters be reoriented so that they face to the sides when the camera is pointed in the vertical direction, to facilitate panoramic recording. For example, the infrared sensor arrangement shown in FIG. x may be used.
FIG. xb is a perspective view of an operator using the remote control unit of FIG. xa.
Alternatively, a modem with transceiver may transmit video signals from the camcorder to a transceiver and modem that form part of the remote control unit, and the same modem and transceiver may transmit control signals back to the camera. A modem and transceiver to accomplish this are presented in U.S. Pat. No. 6,573,938 B1, dated June 2003, by Schulz et al. Similarly, U.S. Pat. No. 6,307,589 B1, dated October 2001, by Maguire, and U.S. Pat. Nos. 6,307,526, dated 23 Oct. 2001, and 6,614,408 B1, dated September 2003, by Mann, disclose wireless modems and signal relay systems that may be incorporated into the present invention for sending video signals to the panoramic remote control unit and from the panoramic camera to remote devices. In those systems they are not used with a panoramic recording and control system; the present invention takes advantage of those systems to advance the art of panoramic videography.
FIG. xa illustrates a method of optically distorting an image using fiber optic image conduits according to U.S. Pat. No. 4,202,599, filed 1978 and dated 1980, by Tosswill, consistent with an undated technical memorandum by Galileo Electro-Optics Corporation, “Technical Memorandum 100: Fiber Optics: Theory and Applications,” pp. 1-12. Specifically, page 12 of that document describes a fiber optic assembly called “Fibreye,” a trademark of Galileo, that can magnify or compress an image with controlled non-linearity.
FIG. xb illustrates applying fiber optic image conduits, as illustrated in FIG. xa, to the optical assemblies of the present invention.
FIG. xc is a cutaway perspective view of an alternative specially designed fiber optic image conduit arrangement according to the present invention.
FIG. xa′ is an exterior perspective drawing of a combined panoramic spherical FOV and zoom lens camcorder system. Fisheye lens #1 and fisheye lens #2 cooperate to record two hemispherical images on a frame when the camera is set to record in the panoramic mode. Alternatively, the camera may be held and operated like a normal camera to record a directional image using the camera's zoom lens.
FIG. xb′ is a schematic drawing of the electro-optical system and related components and systems associated with the camcorder system shown in FIG. xa′.
FIG. xc′ is a schematic drawing of an alternative embodiment of the electro-optical system and related components and systems associated with the video camcorder system shown in FIG. xa′.
A mirror 31 completes the gross optical system of the adapter 10. The mirror 31 is so positioned within the adapter housing 16, and with respect to the optical axis 32 of the lensing system of the attached video camera 18, that the image received through the window 22 upon the mirror 31 will vary from that transmitted to the left shutter by a predetermined angle, providing a “right eye perspective” that differs from a “left eye perspective” in a way that mimics human vision. It has been found that a 1.5 degree angle of parallax is appropriate to obtain convergence between the right and left eye perspectives at a distance of about three meters, the distance at which the primary subject is commonly located within a camera's field-of-view. To obtain such a setting, the mirror 31 is oriented so that the angle θ shown in FIG. x yields this parallax.
The electronics of the adapter 10 serves to regulate the passage of a visual stream through the adapter 10 and to the camera 12. Such electronics is arranged upon a circuit board 35 that is fixed to a side panel of the adapter housing 16. A battery 36 stored within a battery compartment 37 of the housing 16 energizes the circuitry mounted upon the circuit board 35 to control the operation of the light shutter 28 as described below.
The circuitry of the adapter 10 comprises, in part, a standard video stripper circuit for extracting the SYNC pulses from a video-format signal. The adapter 10 receives such video signal by tapping the “VIDEO OUT” terminal 38 of the camera 12 through a plug connector 40.
In operation, a ray “L” represents the path of a ray of the “left eye image” received by the adapter 10 while “R” represents a ray of light of the “right eye image” received. As mentioned above, the specific right eye perspective (with respect to the left eye perspective) or parallax desired is determined by the angle .theta. of the surface of the mirror 31 with respect to the face plate 22.
The light rays L and R are unpolarized upon passing through the glass face plate 22. Thereafter the L image passes through the first polarizer, attaining a first linear polarization prior to entering the cube 24. (The direction of polarization of light passing along a ray or path is indicated in FIG. x.)
Although shown with separation distances therebetween in FIG. x for purposes of illustration, the optical elements of the adapter 10 may in practice be mounted close together or in contact.
Returning to the processing of the R and L images within the optical system of the adapter 10, the internal beamsplitter coating 26 of the glass cube 24 acts to pass the L image through while reflecting the R image. Hence, after passage through the glass cube 24, L and R images of orthogonal polarizations are received at the front window 46 of the switchable polarization rotator 28.
It is a property of the layer of twisted nematic mode liquid crystal material 42 that, when quiescent, the polarization of light passing therethrough is rotated by ninety degrees while, when activated (generally, by the imposition of an a.c. signal), no change in polarization occurs. The polarization filter 54 passes light of preselected polarization. Either of the two orthogonal polarization modes of the L and R images is suitable. Accordingly, the image having a polarization, upon exiting the glass cube 24, that is the same as the polarization selectivity of the filter 54 will pass through the liquid crystal polarization rotator 28 when the layer 42 of liquid crystal material is actuated by the imposition of an a.c. electrical signal. Conversely, when no signal is applied and, thus, the polarization of that particular image is rotated by ninety degrees upon passage through the quiescent layer 42, the filter 54 will block the transmission of that image to the camera lensing system. The orthogonally-polarized image (containing the other perspective view) can only pass through the filter 54 after a rotation in polarization of ninety degrees. Therefore, that image is blocked by the filter 54 when an a.c. signal is applied across the electrodes 48 and 50 and it will pass through the filter 54 only after its polarization is rotated (i.e. no signal applied). Thus, it can be seen that the arrangement of the adapter 10 requires only a single liquid crystal polarization rotator to generate a sequence of images that alternates between right and left eye perspective views.
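The gating behavior of the single-rotator arrangement reduces to a small truth table: the quiescent layer rotates polarization by ninety degrees, the actuated layer does not, and the exit filter passes only one polarization. The sketch below assumes s-polarized L light, p-polarized R light, and a p-oriented filter 54, matching the worked example given later in this text.

```python
# Truth table for the single liquid crystal polarization rotator:
# quiescent (no a.c. signal) rotates polarization 90 degrees; actuated
# passes it unchanged; the exit filter 54 passes only p-polarized
# light. Assumes L enters s-polarized and R enters p-polarized.

def rotate(pol):
    return "p" if pol == "s" else "s"

def transmitted(ac_signal_applied):
    images = {"L": "s", "R": "p"}          # polarization after the cube
    passed = []
    for name, pol in images.items():
        out = pol if ac_signal_applied else rotate(pol)
        if out == "p":                      # p-oriented filter 54
            passed.append(name)
    return passed

print(transmitted(True))    # ['R']: signal applied, R image passes
print(transmitted(False))   # ['L']: quiescent, L image passes
```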
In contrast, in a film camera, a shutter pulse stream permits one to “slave” together various optical effect generators. Such a pulse stream is typically of 24 Hz frequency corresponding to the standard film format of 24 frames per second.
The active electrode 50 receives the output of the exclusive-OR gate 60. FIGS. 5(a) through 5(e) comprise a set of timing diagrams for illustrating the operation of the above-described electronic shuttering system for a video camera adapter. Refer first to FIG. 5(a), which shows the output of the video stripper circuit 58.
As can be seen, during the first 1/60 second (Field 1) the output from the video stripper circuit 58 is high. Adopting, as a convention, that an exclusive-OR gate outputs a high signal only when its inputs differ, then, for the duration of a pulse from the stripper circuit 58, pulses are output from the gate 60 in a stream of the same frequency as, but out of phase with, the high frequency pulse stream from the oscillator 62.
The active electrode 50 receives the output of the exclusive-OR gate 60 at the same time that the counterelectrode 48 receives the output of the oscillator 62. As a result of the out-of-phase relationship between the signals applied to the opposed electrodes of the rotator 28, an a.c. voltage V28 (illustrated in FIG. 5) is developed across the layer of liquid crystal material 42 during Field 1.
The above-described process is reversed during the second 1/60 second period (Field 2), when the output from the video stripper circuit 58 goes low. The gate 60 then passes the high frequency stream of pulses from the oscillator 62 with frequency and phase unchanged. The pulses of the voltage waveforms applied to the active electrode 50 and to the counterelectrode 48 accordingly arrive in phase during Field 2, and no voltage difference is applied across the layer of liquid crystal material 42. During Field 2 the molecules of the layer of liquid crystal material 42 remain quiescent and aligned, and the polarization of light is twisted by ninety degrees upon passage therethrough. Again, assuming inputs of s-polarized L light and p-polarized R light and a p-oriented polarization filter 54, the L light image (rotated to p-polarization) now passes through the adapter 10 and to the lens system of the camera 12, while the R image light, rotated to s-polarization, is blocked by the filter 54.
The above-described sequence is repeated over every 1/30 second video frame. Referring to the previous figures, it is thus seen that right and left eye perspective views are transmitted to the image pickup within the video camera every 1/60 second in accordance with standard video protocols. Thus, without modification, the internal image sensor of the camera (e.g. a charge-coupled device (CCD)) sequentially receives right and left eye perspectives of the field-of-view. Without modification to the readout and detection mechanisms of a standard camera, the images picked up by that camera will then provide a video signal which, when applied to a display (perhaps after recording onto videotape), produces interlaced left eye perspective and right eye perspective fields suitable for viewing by a commercially available viewing system, such as a pair of shuttered eyeglasses or a three-dimensional headset, to produce the desired three-dimensional viewing sensation.
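The exclusive-OR drive scheme can be checked numerically: when the field signal is high, the gate output is the inverted oscillator stream, so the two electrodes see out-of-phase pulses and a nonzero alternating voltage appears across the layer; when low, the electrodes see identical streams and the differential voltage is zero. A minimal sketch, with arbitrary logic levels chosen for illustration:

```python
# Differential drive across the liquid crystal layer: the counter-
# electrode 48 receives the raw oscillator stream, the active
# electrode 50 receives (oscillator XOR field pulse). With field = 1
# the streams are out of phase and an alternating voltage appears
# across the layer; with field = 0 they match and the layer sees zero.

osc = [0, 1, 0, 1, 0, 1, 0, 1]            # high-frequency oscillator 62

def v_across_layer(field_level):
    active = [bit ^ field_level for bit in osc]   # exclusive-OR gate 60
    return [a - c for a, c in zip(active, osc)]   # differential voltage

print(v_across_layer(1))   # [1, -1, 1, -1, ...]: a.c. drive, Field 1
print(v_across_layer(0))   # [0, 0, 0, 0, ...]:   quiescent, Field 2
```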
FIG. xd′ is a schematic drawing of another electro-optical system and related components and systems associated with the camcorder system shown in FIG. xa′.
As a basic method for picking up an object image in three dimensions, it has been known to shoot an object using two television cameras, each disposed at a given angle to the object, the output signals from these two television cameras being alternately selected for every field.
Since the three-dimensional image pickup apparatus and three-dimensional display apparatus having the above configuration are well known in the art, only a brief description is given herein, beginning with the three-dimensional image pickup apparatus. The television cameras 2 and 3 are disposed forming a given angle θ between them with respect to the object 1. The scanning timings of the television cameras 2 and 3 are in synchronizing relationship with each other. For this purpose, the synchronizing signal generator 4 supplies the pulse signals necessary for driving the television cameras simultaneously to the television camera 2 and the television camera 3 (the television camera 2 corresponds to the human right eye, and the television camera 3 to the human left eye). The video signals from the television cameras 2 and 3 are respectively supplied to terminals a and b of the switch 5. The switch 5 is controlled by field pulses supplied from the synchronizing signal generator 4, alternately switching the output signal at terminal c of the switch 5 from field to field in such a way that the video signal fed from the television camera 2 is output in the first field and the video signal fed from the television camera 3 is output in the second field. Both the video signal thus obtained by switching and the synchronizing signals supplied from the synchronizing signal generator 4 are supplied to the adder 6, which combines these signals to produce a three-dimensional image video signal. Needless to say, the television camera driving pulses, field pulses, and synchronizing signals supplied from the synchronizing signal generator 4 are all in synchronizing relationship with one another.
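The field-by-field selection performed by switch 5 amounts to a two-input multiplexer clocked by the field pulse. A minimal sketch under the naming above (camera 2 on terminal a, camera 3 on terminal b); the frame data are placeholder strings:

```python
# Field-rate multiplexer modeling switch 5: odd fields output the
# right-eye camera (television camera 2), even fields the left-eye
# camera (television camera 3).

def select_field(field_number, cam2_field, cam3_field):
    # Field pulse high -> first field -> camera 2; low -> camera 3.
    return cam2_field if field_number % 2 == 1 else cam3_field

for n in range(1, 5):
    out = select_field(n, f"cam2/field{n}", f"cam3/field{n}")
    print(f"field {n}: {out}")
# field 1: cam2/field1
# field 2: cam3/field2
# field 3: cam2/field3
# field 4: cam3/field4
```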
Next, the three-dimensional display apparatus will be described. The three-dimensional image video signal produced by the three-dimensional image pickup apparatus having the above-mentioned structure is transmitted via an appropriate means to the three-dimensional display apparatus. The transmitted three-dimensional image video signal is fed into the monitor television 8 for displaying the image. Since the three-dimensional image video signal is produced by alternately selecting the video signals from the television cameras 2 and 3, the image displayed on the monitor television 8 when directly viewed appears double and unnatural, and does not give a three-dimensional effect to the human eye.
In order to view the image displayed on the monitor television 8 in three dimensions, it is necessary for the observer to view the image shot by the television camera 2 only with his right eye, and the image shot by the television camera 3 only with his left eye. That is, the image displayed on the monitor television 8 must be selected so that the image pattern of the first field enters the right eye and the image pattern of the second field enters the left eye. To achieve this object, the light signals from the monitor television 8 are selected by means of the glasses 9 having optical shutters, so that the image pattern of the first field is viewed with the right eye and the image pattern of the second field with the left eye. The sync separator 7 outputs field pulses synchronous with the synchronizing signals. Here it is supposed that the field pulse signals output from the sync separator 7 are at a high level for the first field and at a low level for the second field. The field pulses are supplied to the glasses 9 to alternately operate the optical shutters provided therein, thus selecting the light signals from the monitor television 8 between the right and left eyes. Specifically, during the first field, the optical shutter for the right eye of the glasses 9 transmits the light while the optical shutter for the left eye blocks the light. Conversely, during the second field, the optical shutter for the left eye transmits the light while the optical shutter for the right eye blocks the light. The light signals from the monitor television 8 are thus selected, making it possible to view the image in three dimensions.
The outline of the optical shutters will be described. A mechanical shutter may be used as the optical shutter, but here we will describe an optical shutter using a liquid crystal. A liquid crystal shutter is capable of transmitting and blocking light by controlling the voltage applied to the liquid crystal, and has a sufficiently fast response to the field scanning frequency of the television camera. It also has other advantages of longer life, easier handling, etc., as compared with the mechanical shutter.
Referring to FIG. x, the construction and driving of the liquid crystal shutters will now be described.
The deflector plates, the liquid crystal, and the transparent electrodes constitute the optical section of each of the optical shutters 100 and 200. The deflector plate 10 transmits only the horizontal polarization component of the light transmitted from the object, while the deflector plate 11 transmits only the vertical polarization component. The transparent electrode 14 is grounded. The transparent electrode 13 is used to apply an electric field to the liquid crystal 12. In the above construction, when a voltage is not applied to the transparent electrode 13, the horizontal polarization wave transmitted through the deflector plate 10 is phase-shifted to a vertical polarization wave as it passes through the layer of the liquid crystal 12, and the vertical polarization wave passed through the layer of the liquid crystal 12 is transmitted through the deflector plate 11. This means that the liquid crystal shutter is in a transmissive state, allowing the light from the monitor television to reach the human eye. On the other hand, when a voltage is applied to the transparent electrode 13, the horizontal polarization wave transmitted through the deflector plate 10 is not phase-shifted, but passes through the layer of the liquid crystal 12 retaining its horizontal polarization. Therefore, the horizontal polarization wave passed through the layer of the liquid crystal 12 cannot pass the deflector plate 11. This means that the liquid crystal shutter is in a non-transmissive state, preventing the light from the monitor television from reaching the human eye. The transparent electrode 14 is grounded, as previously noted, and a driving signal is supplied to the transparent electrode 13 via the capacitors 20 and 21. The driving voltage applied to the transparent electrode 13 is approximately 10 V, and the driving frequency is approximately 200 Hz. The driving signal is produced using the rectangular wave generator 15, the AND circuits 16 and 17, the inverter 18, and the field pulse input terminal 19. To describe in detail, the rectangular wave generator 15 is caused to generate a rectangular wave of approximately 200 Hz, and the output signal from the rectangular wave generator 15 is supplied to the AND circuits 16 and 17 simultaneously. To the AND circuit 16, the field pulse, which is at a high level for the first field and at a low level for the second field, is supplied via the field pulse input terminal 19. Therefore, the driving signal for the liquid crystal layer is derived from the AND circuit 16 only while the first field is being reproduced. On the other hand, the field pulse supplied via the field pulse input terminal 19 is inverted by the inverter 18 and then supplied to the AND circuit 17. Thus, the driving signal for the liquid crystal layer is derived from the AND circuit 17 only while the second field is being reproduced. The liquid crystal shutters are constructed and driven in this way: the light is allowed to pass through the right side shutter 100 of the glasses 9 shown in FIG. x during the first field and through the left side shutter 200 during the second field.
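The driving logic above reduces to gating a common ~200 Hz rectangular wave by the field pulse and by its inverse. Since a driven shutter goes opaque, the gate that is active during a given field must feed the shutter of the other eye; this routing is an inference from the stated behavior (the right shutter transmits during the first field), and the sample values are illustrative.

```python
# Drive signals for the liquid crystal shutter glasses 9: a ~200 Hz
# rectangular wave (generator 15) is gated by the field pulse (AND 16)
# and by its inverse (AND 17). A driven shutter is opaque, so the
# field-1 gate feeds the left shutter 200 (inference from the text:
# the right shutter 100 transmits in field 1).

square = [1, 0, 1, 0, 1, 0]               # ~200 Hz rectangular wave

def shutter_drive(field_pulse):
    and16 = [s & field_pulse for s in square]         # active field 1
    and17 = [s & (1 - field_pulse) for s in square]   # active field 2
    return {"left_200": and16, "right_100": and17}

print(shutter_drive(1))  # field 1: left driven (opaque), right open
print(shutter_drive(0))  # field 2: right driven (opaque), left open
```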
However, the three-dimensional image pickup apparatus having the above-described structure requires two television cameras, making the system expensive to construct. Also, since two television cameras are used to shoot the same object, precision is required in adjusting the shooting angles, the focus, the angle between the two television cameras and the object, and other settings. Therefore, the above construction requires a great deal of adjustment time compared with the time needed to shoot the object, and thus lacks mobility.
The use of target/feature tracking software is well known. Its use was generally anticipated in image based virtual reality applications in U.S. Pat. '794 by the present inventor. Target tracking and feature tracking software has been commonly used in industrial inspection, the military, and security since the 1970's (e.g., Data Translation, Inc. has provided such software). Image editing software that incorporates feature tracking, and that may be incorporated with the panoramic cameras described in this invention, includes that in U.S. Pat. No. 6,289,165 B1 by Abecassis, dated September 2001, and U.S. Pat. No. 5,469,536 by Blank, dated November 1995. This software is applicable for use in the present invention. More recently, target/feature tracking software has been used to track subjects in U.S. Pat. No. 5,850,352, granted 15 Dec. 1998, by Moezzi et al. Moezzi teaches that subject characteristics and post-production criteria can be defined by software when used with multiple video cameras placed in different locations. In this manner a user of the panoramic camcorder can, during live recording or in post-production, define the scene he or she wishes to view. Moezzi teaches that a conventional computer with “Viewer Selector” software is used to define a variety of criteria and metrics to determine a “best” view. (Ref. 3.3 Best View Selection, Pat. '352.)
It is important to note that when a feature is found in separate image segments, the computer is normally programmed to choose the best fit of the signatures that the user preferences have designated. Additionally, software that anticipates movement, to overcome latency in tracking a subject from frame to frame, is available and may be incorporated into the target tracking/feature tracking software (i.e. U.S. Pat. Appl. 2002/0089587 A1).
Once the target tracking/feature tracking software defines the subject, the scene is seamed together and distortion is removed by panoramic software. This processing may be done in near real time so that it appears to the viewer to be accomplished live (i.e. 10-15 frames per second or faster), or it may be done in post-production. Software to seam image segments together to form panoramic imagery, and for viewing that panoramic imagery, is available from many vendors, including: Helmut Dersch, entitled Panorama Tools; MindsEye Inc., entitled Pictosphere (Ref. U.S. Pats. 2004/0004621 A1, U.S. Pat. No. 6,271,853 B1, U.S. Pat. No. 6,252,603 B1, U.S. Pat. No. 6,243,099 B1, U.S. Pat. Nos. 6,157,385, 5,936,630, 5,903,782, and 5,684,937); Internet Pictures Corporation (Ref. U.S. Pats. TBP); iMove Incorporated (Ref. U.S. Pat. 2002/0089587 A1, U.S. Pat. No. 6,323,858, 2002/0196330, U.S. Pat. No. 6,337,683 B1, U.S. Pat. No. 6,654,019 B2); Microsoft (Ref. U.S. Pat. No. 6,018,349); and others referenced by the present inventor in his U.S. Pat. Nos. 5,130,794 and 5,495,576.
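The core geometric step such stitching software performs, mapping two back-to-back fisheye images into a single panorama, can be sketched directly. The sketch below is a minimal illustration assuming two ideal, distortion-free 180-degree equidistant fisheye lenses centered in square images; production packages add lens calibration, seam blending, and distortion correction.

```python
import math

# Map an equirectangular panorama pixel (u, v) to a source pixel in
# one of two back-to-back ideal fisheye images (front = +z, back = -z),
# each a square of size `side`. Geometry sketch only; real software
# adds calibration and blending.

def equirect_to_fisheye(u, v, pano_w, pano_h, side):
    lon = (u / pano_w - 0.5) * 2 * math.pi      # -pi .. pi
    lat = (0.5 - v / pano_h) * math.pi          # -pi/2 .. pi/2
    # Unit view vector for this panorama pixel.
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    hemi = "front" if z >= 0 else "back"
    if hemi == "back":
        x, z = -x, -z                           # into the back lens frame
    theta = math.acos(max(-1.0, min(1.0, z)))   # angle off the lens axis
    r = theta / (math.pi / 2) * (side / 2)      # equidistant fisheye model
    phi = math.atan2(y, x)
    fx = side / 2 + r * math.cos(phi)
    fy = side / 2 - r * math.sin(phi)
    return hemi, fx, fy

# A panorama pixel dead ahead maps to the center of the front fisheye:
print(equirect_to_fisheye(1000, 500, 2000, 1000, 1000))
# ('front', 500.0, 500.0)
```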
Conventional computer processing systems may be used to perform said processing, such that the target tracking/feature tracking software, camera control software, display control software, and panoramic image manipulation software may be incorporated on a personal computer, or into a set-top box, digital video recorder, panoramic camera system, cellular phone, personal digital assistant, panoramic camcorder system, or the remote control unit of a panoramic camcorder system. Specific applications for the target tracking/feature tracking embodiment of the present invention include video teleconferencing and surveillance. General applications include making home and commercial panoramic video clips for education and entertainment.
FIG. xa is a perspective drawing of a cameraman operating a conventional portable filmstrip movie camera with an adapter for recording stereo coded images. The conventional stereoscopic camera has a limited field-of-view; for immersive applications it is advantageous to record a panoramic scene of substantially spherical FOV coverage. Such a camera is described in detail in U.S. Pat. No. 6,259,865, by Burke et al., dated 10 Jul. 2001, and sold under the name Nu-View Stereographic 3-D adapter for video and film cameras by 3-D Video, Inc.
FIG. xb is a top sectional view of a stereographic adapter for a filmstrip movie camera for recording stereo coded images.
FIG. 11a is an exterior perspective view of a conventional portable filmstrip movie camera like that in FIG. xa.
FIG. 11b is a plan view of a new 11-perforation, 70 mm filmstrip format for recording hemispherical and square images in a panoramic spherical FOV filmstrip movie camera.
FIG. 11c is a schematic drawing of the electro-optical system of the conventional portable filmstrip movie camera like that in FIG. xa.
A mirror 31 completes the gross optical system of the adapter 10. The mirror 31 is so positioned within the adapter housing 16, and with respect to the optical axis 32 of the lensing system of the attached film camera 18, that the image received through the window 22 upon the mirror 31 will vary from that transmitted to the left shutter by a predetermined angle, providing an “S1 perspective” that differs from an “S2 perspective.”
The electronics of the adapter 10 serves to regulate the passage of a visual stream through the adapter 10 and to the camera 12. Such electronics is arranged upon a circuit board 35 that is fixed to a side panel of the adapter housing 16. A battery 36 stored within a battery compartment 37 of the housing 16 energizes the circuitry mounted upon the circuit board 35 to control the operation of the light shutter 28 as described below.
FIG. 11e is a circuit schematic diagram of an electronic shuttering system for the movie camera and stereo adapter that have been modified to receive and record panoramic spherical FOV images.
The primary distinction between the role of an adapter for use with a film, as opposed to a video, camera derives from the different processes employed in film and video photography. As discussed earlier, while a video camera is arranged to convert an input image into a video signal to then re-create the image on a raster through the scanning of interlaced 1/60 second video fields, a film camera captures a moving image by exposing a series of still images onto a strip of film. Conventionally, twenty-four (24) still images are photographed per second. This requires that the strip of unexposed film be advanced by means of a film transport mechanism, then held still and exposed by means of a shutter, the process recurring twenty-four times per second. Numerous operations, including the synchronization of picture with sound, require the use of a common signal, known as a “shutter pulse” for synchronization. The shutter pulse waveform comprises a series of pulses separated by 1/24 second that directs the film transport mechanism, in coordination with the shutter, to create a sequence of twenty-four still images per second.
Referring to the schematic diagram of FIG. 11e, the operation of the electronic shuttering system is as follows.
FIGS. 11d(1) through 11d(5) comprise a set of timing diagrams that illustrate the operation of the electronic shuttering system of the movie camera and stereo adapter that have been modified to receive and record panoramic spherical FOV images.
FIG. 11d(1) is a diagram of the output of the trigger circuit 66. As mentioned earlier, the trigger circuit 66 is tied to the stage of the least significant bit stored in the counter 64 and is arranged to trigger a 1/24 second duration pulse upon detection of a predetermined transition in the state of that stage, as shown in FIG. 11d(1).
FIGS. 11d(2) and 11d(3) correspond exactly to FIGS. 5(b) and 5(c), except that the output is grouped into two film frames corresponding to the exposure of adjacent images onto an advancing strip of film. Assuming that the output of the oscillator 60 is a 10 kHz pulse stream, each 1/24 second film frame spans 416.67 oscillator pulses.
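The relationship between the oscillator, the counter, and the 24 Hz trigger can be checked with a line of arithmetic: at 10 kHz, a divider must roll over roughly every 417 pulses to produce twenty-four trigger pulses per second. The integer divisor below is an illustrative choice, not the patented circuit.

```python
# 24 Hz shutter-pulse derivation from a 10 kHz oscillator: each 1/24 s
# film frame spans 10000 / 24 = 416.67 oscillator pulses, so a counter
# rolling over near that count yields the trigger rate.

osc_hz = 10_000
frames_per_s = 24
print(osc_hz / frames_per_s)        # 416.666... pulses per film frame

divisor = 417                        # nearest integer rollover
print(osc_hz / divisor)              # ~23.98 Hz trigger, near 24 Hz
```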
As in the case of the adapter for a video camera, the result of the application of pulses to the liquid crystal polarization rotator 28 is the production of the waveform V28, as shown in FIG. 11d.
FIGS. 13a through 13e are plan views of a set of new film formats for recording hemispherical and square images in a panoramic spherical FOV filmstrip movie camera.
FIGS. 16a through 16e illustrate large venue format panoramic film or video projection theaters designed to distribute, project, and display imagery recorded by the panoramic spherical FOV monoscopic and stereoscopic filmstrip movie cameras disclosed in the present invention.
FIG. xa is a side sectional drawing of a panoramic theater like those shown in FIGS. 16a through 16e.
FIG. xb is a top view drawing of a panoramic theater like those shown in FIGS. 16a through 16e.
The term “System” generally refers to the hardware and software that comprises the Panoramic Image Based Virtual Reality/Telepresence Personal Communication System.
Additionally, “system” may refer to the specific computer system or device referenced in the specific discussion as the System is made up of many sub-systems and devices.
System Overview: As illustrated in FIG. x, the System comprises wireless panoramic/3-D multimedia input means, processing means, display means, and communication means, each of which is described in overview below.
Wireless Panoramic/3-D Multimedia Input Means Overview: Still referring to FIG. x, the input means comprises the panoramic sensor assembly 10 and associated audio input devices that capture the surrounding scene.
Wireless Panoramic/3-D Multimedia Processing Means Overview: The processing means consists of computers, networks, and associated software and firmware that operate on the signals from the input means. The processing means is typically part of unit 120 or 122 and part of system 100; the distribution of processing between unit 120 or 122 and system 100 can vary. In the preferred embodiment the input means consists of a panoramic camera system which provides panoramic imagery.
Preferably, the raw image content received from the panoramic camera system is processed for viewing. The image processing software or firmware applied to the images is selected by the user using graphic user interfaces common to computer systems and unique to the present invention. Applications that may be selected include, but are not limited to, image selection, image stabilization, recording and storage, image segment mosaicking, image segment stitching, image distortion reduction or removal, target/feature tracking, overlay/augmented reality operations, 3-D gaming, 3-D browsing, 3-D video-teleconferencing, 3-D video playback, system controls, graphic user interface controls, and interactive 3-D input controls.
Besides processing means to operate on incoming raw content or prerecorded content, the processing means of the present invention also includes 3-D user interface processing means. Interface processing means includes both hardware and software or firmware that facilitates the user interacting with a panoramic camera system, 3-D game, or other 3-D content.
Wireless Panoramic/3-D Multimedia Display Means Overview:
The display means receives imagery for display from the processing means. The display means may comprise, but is not limited to, any multimedia device associated with head or helmet mounted, body worn, desktop, laptop, set-top, television, handheld, or room-like systems, or any other suitable immersive or non-immersive system typically used to display imagery and present audio information to a viewer.
Wireless Panoramic/3-D Multimedia Communication Means Overview:
In the preferred embodiment, a packet-based multimedia telecommunication system 100 is disclosed that extends IP host functionality to panoramic wireless terminals 120 or 122, also referred to as communications units 120 or 122, serviced by wireless links. The wireless communication units may provide or receive wireless communication resources in the form of panoramic or three-dimensional content. Typical panoramic and three-dimensional content will include imagery stitched together to form panoramic prerecorded movies or a live feed which the user/wearer can pan and zoom in on, or three-dimensional video games which the user can interact with. Multimedia content is preferably sensed as panoramic video by the panoramic sensor assembly of the present invention. The content is translated into packet-based information for wireless transmission to another wireless terminal 122. A service controller of the communication system manages communications services such as voice calls, video calls, web browsing, video-conferencing, and/or internet communications over a wireless packet network between source and destination host devices. The ability to manipulate panoramic video is well known in the computer and communications industry (i.e. IPIX movies), and the ability to manipulate and interact with three-dimensional games and imagery is also well known (i.e. Quicktime VR). Similar storage and transfer of panoramic content generated by the present panoramic sensor assembly 10 may be carried out by the novel wireless panoramic personal communications terminals 120 or 122 that comprise the present invention. Correspondingly, three-dimensional input can also be transmitted to and from the terminals, just as is done in manipulating IPIX movies and Quicktime VR.
Terminals 120 and 122 may interact with one another over the internet, or with servers on the internet, in order to share and manipulate panoramic video and interact with three-dimensional content according to the System disclosed in the present invention. A multimedia content server of the communication system provides access to one or more requested panoramic multimedia communication services. A bandwidth manager of the communication system determines the availability of bandwidth for the service requests and, if bandwidth is available, reserves bandwidth sufficient to support them. Wireless link manager(s) of the communication system manage the wireless panoramic communication resources required to support the service requests. Methods are disclosed herein including the service controller managing a call request for a panoramic video/audio call; the panoramic multimedia content server accommodating a request for panoramic multimedia information (e.g., a web browsing or video playback request); the bandwidth manager accommodating a request for a reservation of bandwidth to support a panoramic video/audio call; and the execution of two-way panoramic video calls, panoramic video playback calls, and panoramic web browsing requests.
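The bandwidth manager's admit-or-reject behavior described above follows a simple reserve-and-release pattern. The sketch below is a schematic illustration; the class name, method names, and capacity figures are invented for this example and do not describe the API of any real system.

```python
# Schematic bandwidth manager for panoramic call admission: a request
# is admitted only if remaining capacity covers it, in which case the
# bandwidth is reserved until released. Names and numbers are invented.

class BandwidthManager:
    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.reserved = 0

    def request(self, kbps):
        if self.reserved + kbps > self.capacity:
            return False                 # reject: insufficient bandwidth
        self.reserved += kbps            # reserve for the service
        return True

    def release(self, kbps):
        self.reserved = max(0, self.reserved - kbps)

mgr = BandwidthManager(capacity_kbps=6000)
print(mgr.request(4000))   # True:  panoramic video call admitted
print(mgr.request(3000))   # False: exceeds remaining capacity
mgr.release(4000)
print(mgr.request(3000))   # True
```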
The present invention extends the usefulness of communication systems that provide packet transport service over both wireline and wireless links. Adapting existing and new communication systems to handle panoramic and three-dimensional content according to the present invention supports high-speed throughput of packet data, including but not limited to streaming voice and video between IP host devices, including wireless communication units. In this manner the wireless panoramic personal communications systems/terminals described in the present invention may be integrated into, or overlaid onto, any conventional video-capable telecommunications system.
For example, and now referring to the drawings in more detail, embodiment A of FIG. x will be described.
Manufacturers of mechanical shutters of a type that may be incorporated into the present invention are known by those skilled in the art, and so are not referenced in detail in this specification.
Fiber optic image conduits of a type suitable for incorporation into the present invention are manufactured by Schott Fiber Optics Inc., Southbridge, Mass. The exit ends of the fiber optic image conduits are situated so that the images focused on them are optically transmitted through corresponding respective micro-lenses. The micro-lenses are preferably part of a micro-lens array. The exit ends of the fiber optic image conduits may be dispersed and held in place by a housing such that each fiber's associated image is directed through a corresponding micro-lens, through the spatial light modulator liquid crystal display shutter, and focused on the imaging surface.
Micro-lens arrays suitable for inclusion in the present invention are made by MEMS Optical, Huntsville, Ala., which manufactures spherical, aspherical, positive (convex), and negative (concave) micro-lens arrays. Images transmitted through the micro-lens array pass through the spatial light modulator shutter to an imaging surface. Alternatively, the micro-lens array may be placed on the other side of the spatial light modulator shutter. The imaging surface may be film, a charge coupled device, a CMOS sensor, or another type of light sensitive surface. The image sensor may comprise a single image sensor or a plural number of image sensors.
The optical system may be arranged such that all images, say R, S, T, U, V, and W, from the objective lenses are transmitted through corresponding fiber optic image conduits to fill up the image sensor frame simultaneously. Preferably, however, the optical system is arranged such that a single image, say R, S, T, U, V, or W, from an objective lens is transmitted through its corresponding fiber optic image conduits to fill up the image sensor frame. Alternatively, the system may be arranged such that a subset of any image R, S, T, U, V, or W is transmitted from an objective lens through corresponding fiber optic image conduits to fill up the image sensor frame. The latter is especially advantageous when using two fisheye lenses and the user only wants the system to capture a small portion of the field-of-view imaged by a fisheye lens.
Images focused onto the image sensor may be slightly off axis, but the images will still be within a focal distance such that the image quality is high enough to accomplish the objectives of the present invention. Alternatively, to improve image quality by achieving perpendicular focus of the image across the optical path to the image sensor plane, a beam splitter arrangement may be used to transmit the image to the image sensor.
The spatial light modulator liquid crystal display shutter contains pixels that are addressable. The pixels may be addressed by a computer controlled control unit such that they either block the transmitted image or let it pass through. A manufacturer of liquid crystal display shutters suitable for use in the present invention is Meadowlark Optics Corporation, Boulder, Colo. Operation of the spatial light modulator liquid crystal display system will be described in additional detail below, in the processing section of this disclosure.
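The addressable shutter operation just described amounts to applying a binary mask: pixels set opaque block their conduit's image, pixels set clear pass it. A toy sketch follows; the region layout and labels are invented for illustration.

```python
# Toy model of the addressable spatial light modulator shutter: a
# binary mask decides, pixel by pixel, whether light from the fiber
# optic conduits reaches the image sensor (1 = transmit, 0 = block).

def apply_shutter(image, mask):
    return [[px if m else 0 for px, m in zip(row_i, row_m)]
            for row_i, row_m in zip(image, mask)]

image = [[5, 5, 9, 9],                   # left half: conduit R image
         [5, 5, 9, 9]]                   # right half: conduit S image
mask  = [[1, 1, 0, 0],                   # pass R, block S this field
         [1, 1, 0, 0]]
print(apply_shutter(image, mask))
# [[5, 5, 0, 0], [5, 5, 0, 0]]
```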
The housing for holding the components that comprise the assemblies is shown in FIG. x.
Alternatively, referring to embodiment B of FIG. x, image sensors are incorporated directly into the panoramic sensor assembly.
Other cameras suitable for use include those previously mentioned. It is known in the camera industry that camera processing operations may be placed directly onto, or adjacent to, the image sensing surface of the CCD or CMOS device. It is conceived by the present inventor that in some instances of the present invention, placing image processing operations such as compression functions and region-of-interest operations on the CCD or CMOS chip may be beneficial to save space and promote design efficiency. For instance, the Dalsa 2M30-SA, manufactured by Dalsa, Inc., Waterloo, Ontario, Canada, has a 2048×2048 pixel resolution, has color capability, and incorporates region-of-interest processing on the image sensing chip. In the present invention this allows users to read out the image area of interest instead of the entire 2K picture.
Audio system components and systems suitable for use in the present example are the small compact systems typically used in conventional cellular phones, referenced elsewhere in this text. Microphones are preferably incorporated into the panoramic sensor assembly 10, or into the HMD housing that becomes an expanded part of assembly 10 as shown in FIG. x.
Input means embodiments A and B are shown in FIG. x.
The panoramic sensor assembly 10 is like that shown in the enlarged details of FIG. x.
The eyeglass safety strap typically extends to a long cloth-wrapped cable harness and, when worn inside a shirt, has the appearance of an ordinary eyeglass safety strap, which ordinarily would hang down into the back of the wearer's shirt.
A satisfactory embodiment of the invention may be constructed by having the television screen be driven by a coaxial cable carrying a video signal similar to an NTSC RS-170 signal. In this case the coaxial cable, and the additional wires needed to power it, are concealed inside the eyeglass safety-strap and run down to a belt pack or other body-worn equipment by connection.
In some embodiments, the television contains a television tuner so that a single coaxial cable may provide both signal and power. In other embodiments the majority of the electronic components needed to construct the video signal are worn on the body, and the eyeglasses and panoramic sensor assembly contain only a minimal amount of circuitry, perhaps only a spatial light modulator, LCD flat panel, or the like, with termination resistors and backlight. In this case, there are a greater number of wires (CCD readout wires for input embodiment B) or fiber optic image conduits (input embodiment A). In some embodiments of the invention the television screen is a
VGA computer display, or another form of computer monitor display, connected to a computer system worn on the body of the wearer of the eyeglasses.
Wearable display devices have been described, such as in U.S. Pat. No. 5,546,099, Head Mounted Display System with Light Blocking Structure, by Jessica L. Quint and Joel W. Robinson, Aug. 13, 1996, as well as in U.S. Pat. No. 5,708,449, Binocular Head Mounted Display System, by Gregory Lee Heacock and Gordon B. Kuenster, Jan. 13, 1998. (Both of these patents are assigned to Virtual Vision, a well-known manufacturer of head-mounted displays.) A “personal liquid crystal image display” has been described in U.S. Pat. No. 4,636,866, by Noboru Hattori, Jan. 13, 1987. Any of these head-mounted displays of the prior art may be modified into a form such that they will function in place of the active television display according to the present invention. A transceiver of a type that may be used to wirelessly transmit video imagery from the camera system to the processing unit, and then wirelessly back to the head mounted display, in an embodiment of the present invention is the same as that incorporated in U.S. Pat. No. 6,614,408 B1 by Mann.
While display devices will typically be held by conventional frames that fit over the ears or head, other more contemporary mounting methods are envisioned in the present invention. Because display devices, including their associated electronics, are becoming increasingly lightweight, the display device or devices may be supported and held in place by body piercings in the eyebrow or nose, or hung from the hair on the person's head. Display devices may even be supported by magnets. The magnets can either be stuck onto the viewer's skin, say at the user's temple, using an adhesive backing, or embedded under the user's skin in the same location. Magnets at the edge of the display that coincide with the magnets mounted on or under the skin, along with a nose support, hold the displays in front of the user's eyes. Because of the electrical power required to drive the display, a conduit to supply power to the display is required. The conduit may also contain wires to provide a video signal and to serve eye tracking cameras.
The typical operation of the System is shown in FIG. x.
Typically, rather than displaying raw video on the active display, the video is processed for display, as illustrated in FIG. x.
Typically the objects displayed are panoramic imagery from a like camera located in a remote location, or a panoramic scene from a video game or recording. Synthetic (virtual) objects overlaid in the same position as some of the real objects in the scene may also be displayed. Typically the virtual objects displayed on the television correspond to real objects within the field of view of the panoramic sensor assembly. Preferably, more detail is recorded by the panoramic sensor assembly in the direction the user is gazing. This imagery provides the vision analysis processor with extra details about the scene, making the analysis more accurate in this foveal region, while the audio and video from the other microphones and image sensors serve an anticipatory role and a head-tracking role. In the anticipatory role, the vision analysis processor is already making crude estimates of the identity or parameters of objects outside the field of view of the viewfinder screen, with the expectation that the wearer may at any time turn his or her head to include some of these objects, or that some of these objects may move into the field of view of the active display area reflected to the viewer's eyes. With this operation, synthetic objects overlaid on real objects in the viewfinder provide the wearer with enhanced information about the real objects as compared with the view the wearer has of these objects outside the central field of view.
Thus even though the television active display screen may only have 240 lines of resolution, a virtual television screen of extremely high resolution, wrapping around the wearer, may be implemented by virtue of the head-tracker, so that the wearer may view a very high resolution picture through what appears to be a small window that pans back and forth across the picture with the head movements of the wearer. Optionally, in addition to overlaying synthetic objects on real objects to enhance real objects, graphics synthesis processor (
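By way of illustration, the head-tracked virtual screen described above can be sketched in a few lines of Python. The equirectangular panorama layout, the window size, and all function names below are illustrative assumptions rather than the specification of the present invention:

```python
import numpy as np

def view_window(panorama: np.ndarray, yaw_deg: float, pitch_deg: float,
                win_w: int = 320, win_h: int = 240) -> np.ndarray:
    """Crop a small display window from an equirectangular panorama.

    The panorama is assumed to span 360 degrees of yaw (columns) and
    180 degrees of pitch (rows); the window is centered on the head
    orientation reported by the head-tracker.
    """
    h, w = panorama.shape[:2]
    # Map head orientation to the pixel coordinates of the window center.
    cx = int((yaw_deg % 360.0) / 360.0 * w)
    cy = int((90.0 - pitch_deg) / 180.0 * h)
    x0 = cx - win_w // 2
    y0 = int(np.clip(cy - win_h // 2, 0, h - win_h))
    cols = np.arange(x0, x0 + win_w) % w   # wrap around in yaw
    return panorama[y0:y0 + win_h][:, cols]

# Example: a 240-line window panning across a 4000x2000 panorama.
pano = np.zeros((2000, 4000, 3), dtype=np.uint8)
window = view_window(pano, yaw_deg=45.0, pitch_deg=10.0)
print(window.shape)   # (240, 320, 3)
```

Head movements simply change yaw and pitch, so the low-resolution display appears to be a window sweeping over a much larger picture.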
User control of the Panoramic Image Based Virtual Reality/Telepresence Personal Communication System may be accomplished by a variety of input techniques. For example, as shown in
Besides voice recognition input, another preferable interactive user input means in the present invention is user body gesture input via camera. While separate non-panoramic cameras can be mounted on the user to record body movement, an advantage of using the panoramic sensor assembly for input is that it simultaneously provides a panoramic view of the surrounding environment and of the user, thus obviating the need for a single camera or dispersed cameras worn by the viewer. A computer gesture-input software or firmware program of a type suitable for use in the present invention is Facelab by the company Seeingmachines, Canberra, Australia. Facelab and variants of the software use at least one, but preferably two, points of view to track head position, eye location, and blink rate. Simultaneously available real-time and smoothed tracking data provide instantaneous output to the host processing unit. In this manner the panoramic sensor assembly can track the user's head and eye position and define the view presented to the user when viewing panoramic imagery or 3-D graphics on the unit 120 or 122.
Making a window active in the X-windows system is normally done by a user using his hand to operate a mouse, placing the mouse cursor on the window, and possibly clicking on it. However, having a mouse on a wearable panoramic camera/computer system is difficult owing to the fact that it requires a great deal of dexterity to position a cursor while walking around. Mann in U.S. Pat. No. 6,307,526 describes an active display viewfinder in which the wearer's head is the mouse and the center of the viewfinder is the cursor. The Mann system may be incorporated in the present invention. However, the present invention expands upon Mann by using the panoramic sensor assembly to record more than just head and eye position, capturing other body gestures such as hand and finger gestures. A software package for recording head, body, hand, finger, and other body gestures is Facelab, mentioned above, or the system used by Mann. The gestures are recorded by the image sensors of the panoramic sensor assembly. The input from the sensors is translated by an image processing system into machine-language commands that control the Panoramic Image Based Virtual Reality/Telepresence Personal Communication System. The menus in the active display viewfinder are visible and may be overlaid on the real-world scene seen through the glasses, or overlaid on panoramic video transmitted for display. Portions of the menu within the viewfinder are shown with solid lines so that they stand out to the wearer. An xterm operating environment suitable and of a type for use in the present invention is that put forth in the Mann patent previously mentioned. A windows-type operating system of a type suitable for incorporation in the present invention is Microsoft Windows XP or Red Hat Linux. Application software or firmware may be written in any suitable computer language such as C, C++, or Java. Preferably, all software is compiled in the same language to avoid translation layers slowing application processing during operation.
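As an illustration of how recognized gestures or voice tokens might be translated into machine-language commands, the following minimal Python sketch dispatches recognized tokens to handlers. The token vocabulary and handler names are hypothetical, not taken from Mann or Facelab:

```python
# Hypothetical dispatch table mapping recognized voice or gesture
# tokens to system actions; token names are illustrative only.
def select_window(name: str) -> None:
    print(f"window '{name}' is now active")

def system_power(state: str) -> None:
    print(f"system power: {state}")

COMMANDS = {
    "select": select_window,
    "power": system_power,
}

def dispatch(token: str) -> None:
    """Translate a recognized token such as 'select mail' into an action."""
    verb, _, arg = token.partition(" ")
    handler = COMMANDS.get(verb)
    if handler is None:
        print(f"unrecognized command: {token!r}")
    else:
        handler(arg)

dispatch("select mail")   # gesture or voice selects a window
dispatch("power off")     # follow-on command chooses another action
```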
Once the wearer selects a window by a voice command or body gesture, the wearer uses a follow-on voice command or gesture to choose another command. In this manner the viewer may control the system applications using menus that may or may not be designed to pop up in the viewer's field-of-view. A conventional button or switch to turn the system on and off may be mounted in any suitable place on the portion of the invention that is worn by the viewer. Still referring to
Note that here the drawings depict objects moved translationally (e.g. the group of translations specified by two scalar parameters) while in actual practice, virtual objects undergo a projective coordinate transformation in two dimensions, governed by eight scalar parameters, or objects undergo three dimensional coordinate transformations. When the virtual objects are flat, such as text windows, such a user-interface is called a “Reality Window Manager” (RWM).
In using the invention, typically various windows appear to hover above various real objects, and regardless of the orientation of the wearer's head (position of the viewfinder), the system sustains the illusion that the virtual objects (in this example, xterms) are attached to real objects. The act of panning the head back and forth in order to navigate around the space of virtual objects may also cause an extremely high-resolution picture to be acquired through appropriate processing of a plurality of pictures captured by a plurality of objective lenses and stitching the images together. This action mimics the function of the human eye, where saccades are replaced with head movements to sweep out the scene using the camera's light-measurement ability, as is typical of pencigraphic imaging. Thus the panoramic sensor assembly is used to direct the camera to scan out a scene in the same way that eyeball movements normally orient the eye to scan out a scene.
The processor, and not the vision processor, is typically responsible for ensuring that the view rendered in the graphics processor matches the view chosen by the user and corresponds to a coherent spherical scene of stitched-together image sub-segments. Thus, if the point of view of the user is to be replicated, there is a change of viewing angle in the rendering so as to compensate for the difference in position (parallax) between the panoramic sensor assembly and the view afforded by the display.
Some homographic and quantigraphic image analysis embodiments do not require a 3-D scene analysis, and instead use 2-D projective coordinate transformations of a flat object or flat surface of an object, in order to effect the parallax correction between virtual objects and the view of the scene as it would appear with the glasses removed from the wearer.
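For the flat-object case, the eight scalar parameters mentioned above are the free entries of a 3x3 homography matrix normalized so that its last entry is 1. A minimal numpy sketch follows; the matrix values are illustrative only, not a calibration of the present invention:

```python
import numpy as np

def apply_homography(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a 2-D projective coordinate transformation to Nx2 points.

    H is 3x3 with H[2, 2] == 1, leaving the eight free scalar
    parameters that govern the planar projective transformation.
    """
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ H.T
    return homog[:, :2] / homog[:, 2:3]   # perspective divide

# Illustrative homography correcting parallax for a flat surface seen
# from the sensor assembly versus from the wearer's eye position.
H = np.array([[1.02, 0.01, -3.0],
              [0.00, 1.01,  2.5],
              [1e-5, 2e-5,  1.0]])
corners = np.array([[0.0, 0.0], [640.0, 0.0], [640.0, 480.0], [0.0, 480.0]])
print(apply_homography(H, corners))
```

Because only a flat surface is assumed, no 3-D scene reconstruction is needed; warping the image of the surface by H aligns the virtual object with the view as it would appear with the glasses removed.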
A drawback of the apparatus depicted is that some optical elements may interfere with the eye contact of the wearer. This, however, can be minimized by careful choice of the optical elements. One technique that can be used to minimize interference is for the wearer to look at video captured by the camera such that an illusion of transparency is created, in the same way that a hand-held camcorder creates an illusion of transparency. The only problem with that is that the panoramic sensor is not able to observe where the viewer's eyes are located, unless sensors are placed behind the display as Mann does. Therefore this invention proposes to put cameras or objective lenses and relays behind the eyeglasses to observe the viewer, and also to incorporate a panoramic sensor assembly as indicated to capture images outside the eyeglasses and about the wearer as depicted in
The embodiments of the wearable camera system depicted in
In the present invention image processing can be done to help compensate for the difference between where the viewer's eyes are and where the panoramic sensor assembly is located. To attempt to create such an illusion of transparency requires parsing all objects through the analysis processor, followed by the synthesis processor, and this may present the processor with a formidable task. Moreover, the fact that the eye of the wearer is blocked means that others cannot make eye contact with the wearer. In social situations this creates an unnatural form of interaction. For this reason head-mounted displays with transparency are desirable in social situations and in situations where it is important for the panoramic sensor assembly or mini-optics to track the eyes of the user. Design is a set of tradeoffs, and the embodiments disclosed in the present invention can be combined in various manners to optimize for the application the user must perform.
Although the lenses of the glasses may be made sufficiently dark that the viewfinder optics are concealed, it is preferable that the active display viewfinder optics be concealed in eyeglasses in a way that allows others to see both of the wearer's eyes as they would if the user were wearing regular eyeglasses. A beamsplitter may be used for this purpose, but it is preferable that there be a strong lens directly in front of the eye of the wearer to provide a wide field of view. While a special contact lens might be worn for this purpose, there are limitations on how short the focal length of a contact lens can be, and such a solution is inconvenient for other reasons.
Accordingly, a viewfinder system is depicted in
As discussed later in the specification in
Multiple users, at the same location, may also collaborate in such a way that multiple panoramic sensor assembly viewpoints may be shared among the users so that they can advise each other on matters such as composition, or so that one or more viewers at remote locations can advise one or more of the users on matters such as composition. Multiple users, at different locations, may also collaborate on an effort that may not pertain to photography or videography directly, but an effort nevertheless that is enhanced by the ability for each person to experience the viewpoint of another.
It is also possible for one or more remote participants using a like system or other conventional remote device like that shown at the top of
It is noted that it is possible to distribute the image capture, processing, and display means of the present invention into standalone units worn by the viewer. In such an instance the means may be linked in communicating relationship to one another by transceivers. Wireless transceivers of a type that may be integrated into the present invention have been discussed in the text above and are included here by reference. These wireless transceivers adopt the standards discussed above and have been integrated into numerous products so that electronic devices may communicate wirelessly with one another. The advantage of doing this is to eliminate wires, cables, or optical relays running between these means when they are distributed over the user's body. Additionally, the distribution of these means allows the weight of the components to be distributed. For instance, a significant amount of weight can be removed from the integrated system in
As illustrated in
A manufacturer of a device of a type that may be used to hold the unit 120 or 122 on the wrist of the user is the Wrist Cell Sleeve cellular phone holder of Vista, Calif., referred to as the "CSleeve," which secures the unit and is made of a material that goes around the wrist and is secured by Velcro. The mast and panoramic sensor may be placed in the operational position by pushing a button to release a coil or wire spring that pushes the mast and assembly upright. Similar mechanisms are used in switch-blade knives and Mercedes key holders. The mast and assembly lock in place when pushed down into the closed position.
Additionally, and novel relative to standard techniques, the user may also use the panoramic sensor assembly as an input device. As previously described, the panoramic sensor assembly records the viewer and the surrounding audio-visual environment. The audio-visual signals representing all or some portion of the surrounding scene are then transmitted over the cellular communication network.
a-f are drawings of the present invention integrated into various common hats.
a-c are exterior perspectives illustrating the integration of the present invention into a cowboy hat that forms unit 120 or 122. Micro-lens objectives are integrated into the hat in an outward-facing manner such that they record adjacent or overlapping portions of the surrounding panoramic environment. Microphones may also be integrated in a similar fashion. Fiber-optic image conduits relay the images from the micro-lenses to an image sensor if input means embodiment A is used as depicted in
d-f are exterior perspectives illustrating the integration of the present invention into unit 120 or 122 in the form of a baseball cap. The components and operation of the baseball cap can be similar to those of the cowboy hat. However, in order to show various embodiments of the invention, the illustration shows that the processing means is communicated with through a cable and worn elsewhere on the viewer's body. In the baseball cap illustration the display is stowed on the bottom surface of the bill of the cap and flipped down into position in front of the wearer's eyes.
Still alternatively, as depicted in
The processing means consists of computers, networks, and associated software and firmware that operate on the signals from the input means. In the preferred embodiment the input means consists of a panoramic camera system which provides panoramic imagery. The processing means is a subset of unit 120 or 122, and to varying degrees of network 100. Some of the processing operations have been described above in order to facilitate the cohesion of the disclosure of the primary embodiments A and B of the system, and so will not be repeated. But it will be clear to those skilled in the art that those processing portions are transferable and applicable to the more detailed and additional discussion below concerning the processing means.
Referring to
Referring generally to the processing hardware shown in
In
Preferably, at least head and eye tracking hardware and software of a type for incorporation in the present invention is that described in U.S. Pat. No. 6,307,526 by Mann, or of a type manufactured by Seeingmachines, Australia, which has already been described. Position, orientation, and heading data, and optionally eye-tracking data, received from the input control system are transmitted to the computer processing system to achieve a dynamic selective image capture system. Position, orientation, and heading data define the position, orientation, and heading of the viewer's head and eyes. According to studies, head position information alone can be used to predict the direction of view about 86 percent of the time.
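A minimal sketch of such dynamic selective capture follows. The six-lens geometry and the sampling policy are assumed only for illustration, and the head-yaw-only prediction reflects the roughly 86 percent figure cited above:

```python
# Hypothetical sketch: choose which objective lens of the panoramic
# sensor assembly to sample at full resolution, given head heading.
# One lens per 60 degrees of azimuth is an assumed example geometry,
# not the required layout of the present invention.
NUM_LENSES = 6

def foveal_lens(head_yaw_deg: float) -> int:
    """Predict gaze direction from head yaw alone (reported above to
    be a good predictor about 86 percent of the time) and return the
    index of the lens whose field of view covers it."""
    sector = 360.0 / NUM_LENSES
    return int((head_yaw_deg % 360.0) // sector)

def capture_plan(head_yaw_deg: float) -> dict:
    """Full-resolution readout for the foveal lens, reduced elsewhere."""
    fovea = foveal_lens(head_yaw_deg)
    return {i: ("full" if i == fovea else "reduced")
            for i in range(NUM_LENSES)}

print(capture_plan(100.0))   # lens 1 is sampled at full resolution
```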
Operating system and application software are an integral part of the Panoramic Image Based Virtual Reality/Telepresence Personal Communication System and Method. Alternatively, the software can be stored as firmware. For instance, software may be embedded in the memory of a reconfigurable central processing unit chip of a body-worn computer. Operating system and application software are stored in firmware, in RAM, on hard drives, or on other suitable storage media worn by the viewer. Different processors worn by the viewer are programmed to complete the tasks that enable the invention. The user operates the system to run those application programs he or she chooses in order to accomplish the desired tasks. In operation the software applications are integrated, threaded, and compiled together in a seamless manner to achieve concerted and specific operations. Those tasks and applications are described below:
As graphically illustrated in
Once turned on, the system may use body gestures tracked by cameras worn by the wearer or a remote user, voice recognition, or another conventional method to interact with the xterm menu in a window on the user's display. A typical sequence of frames that would be displayed for the user of the menus is illustrated in
Still referring to
Audio processing software or firmware of a type that is incorporated into the present invention is the hardware and software described in U.S. Pat. No. 6,654,019 by Gilbert et al., or Sound Forge Incorporated's multi-channel software. Speech recognition systems by Dragon Systems and Kurzweil may also be incorporated, as discussed in more detail elsewhere. Additionally, speech recognition control of unit 120 or 122 may be provided by software described in U.S. Pat. No. 6,535,854 B2 by Buchner et al., and features may be tracked using software described in U.S. Pat. Pub. App. 2003/0227476.
Again referring to
Again referring to
As depicted in
Referring to
Again referring to
Again referring to
Referring to
Augmented Reality (AR) software of a type incorporated for use in the present invention is Studierstube, from Austria, which is a windows-based software package that operates on a personal computer architecture like that of a conventional laptop. It provides a user interface management system for AR based on, but not limited to, stereoscopic 3-D graphics. It provides a multi-user, multi-application environment together with 3-D window equivalents and 3-D widgets, and supports different display devices such as HMDs, projection walls, and workbenches. It also provides the means of interaction, either with the objects or with user interface elements registered with the pad. The Studierstube software also supports the sharing and migration of applications between different host units/terminals 120 or 122 or servers shared by different users.
The inputs of the different tracking devices are preferably processed by trackers associated with the panoramic sensor assembly 10 of the present invention, but other arrangements are also feasible. The devices are linked to the Augmented Reality software. The software receives data about the user's head orientation from the sensor 10 to provide a coordinate system that is positionally body-stabilized and orientationally world-stabilized. Within this coordinate system the pen and pad are tracked using the panoramic sensor assembly 10 mounted on the HMD or cell phone, with ARToolKit used to process the video information.
The information may be used to drive onboard or remote system software and firmware applications. Augmented Reality software may be used for interactive gaming, educational, medical assistance and telepresence, just to name a few applications. Gaming applications (
Again referring to
Again referring to
The user's face in the frame exhibits barrel distortion because the optical sensor uses a fisheye lens.
Again referring to
Referring to
Some of the display means have been described above in order to facilitate the cohesion of the disclosure of the primary embodiments A and B of the system so that discussion will not be repeated for the sake of efficiency. But it will be clear to those skilled in the art that discussions on display systems are transferable and applicable to the more detailed and additional discussion below concerning display means.
g, sections a and b, is a schematic diagram illustrating examples of wearable panoramic projection communication display means according to the present invention. A first embodiment at the top of the page of
A second embodiment at the bottom of the page of
h is a schematic diagram illustrating wearable portable head-mounted and portable panoramic communication display means according to the present invention.
At the top of the page, in
At the bottom of the page, in
i is a schematic diagram illustrating prior art display means that are compatible with the present invention. As
As
Communication systems 100, such as land mobile radio and cellular communications systems 100, are well known. Such systems typically include a plurality of radio communication units 120 or 122 (e.g., vehicle-mounted mobiles or portable radios in a land mobile system and radio/telephones in a cellular system); one or more repeaters; and dispatch consoles that allow an operator or computer to control, monitor or communicate on multiple communication resources.
Typically, the repeaters are located at various repeater sites and the consoles at a console site. The repeater and console sites are typically connected to other fixed portions of the system (i.e., the infrastructure) via wire connections, whereas the repeaters communicate with communication units and/or other repeaters within the coverage area of their respective sites via a wireless link. That is, the repeaters transceive information via radio frequency (RF) communication resources, typically comprising voice and/or data resources such as, for example, narrowband frequency modulated channels, time division modulated slots, carrier frequencies, frequency pairs, etc., that support wireless communications within their respective sites.
Communication systems 100 may be organized as trunked systems, where a plurality of communication resources is allocated amongst multiple users by assigning the repeaters within an RF coverage area on a communication-by-communication basis, or as conventional (non-trunked) radio systems where communication resources are dedicated to one or more users or groups. In trunked systems, there is usually provided a central controller (sometimes called a “zone controller”) for allocating communication resources among multiple sites. The central controller may reside within a fixed equipment site or may be distributed among the repeater or console sites.
Communication systems 100 may also be classified as circuit-switched or packet-switched, referring to the way data is communicated between endpoints.
Historically, radio communication systems have used circuit-switched architectures, where each endpoint (e.g., repeater and console sites) is linked, through dedicated or on-demand circuits, to a central radio system switching point, or “central switch.” The circuits providing connectivity to the central switch require a dedicated wire for each endpoint whether or not the endpoint is participating in a particular call. More recently, communication systems are beginning to use packet-switched networks using the Internet Protocol (IP). In packet-switched networks, the data that is to be transported between endpoints (or “hosts” in IP terminology) is divided into IP packets called datagrams. The datagrams include addressing information (e.g., source and destination addresses) that enables various routers forming an IP network to route the packets to the specified destination. The destination addresses may identify a particular host or may comprise an IP multicast address shared by a group of hosts. In either case, the Internet Protocol provides for reassembly of datagrams once they reach the destination address. Packet-switched networks are considered to be more efficient than circuit-switched networks because they permit communications between multiple endpoints to proceed concurrently over shared paths or connections.
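The datagram behavior described above can be demonstrated with the standard sockets API. The following Python sketch sends one payload as a unicast datagram to a single host and as a multicast datagram to a group address; the addresses and port are placeholders:

```python
import socket

# The same sendto() call reaches one host (unicast) or every host
# that has joined a multicast group address. Addresses are examples.
MCAST_GROUP, PORT = "239.1.2.3", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Limit how far multicast datagrams propagate (router hop count).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)

payload = b"one IP datagram of encoded audio/video"
sock.sendto(payload, (MCAST_GROUP, PORT))    # multicast: many receivers
sock.sendto(payload, ("192.0.2.10", PORT))   # unicast: a single receiver
sock.close()
```

The routers of the network replicate multicast datagrams as needed, so the sender transmits each packet only once regardless of the number of group members.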
Because packet-based communication systems 100 offer several advantages relative to traditional circuit-switched networks, there is a continuing need to develop and/or refine packet-based communication architectures. Historically, however, particularly for packet-based radio and cellular communications systems, the endpoints or "hosts" of the IP network comprise repeaters or consoles. Thus, the IP network does not extend across the wireless link(s) to the various communication units. Existing protocols used in IP transport networks such as, for example, H.323, SIP, RTP, UDP and TCP neither address the issue nor provide the functionality needed for sending multimedia data (particularly time-critical, high-frame-rate streaming voice and video) over the wireless link(s). Thus, any packets that are to be routed to the communication units must be tunneled across the wireless link(s) using dedicated bandwidth and existing wireless protocols such as the APCO-25 standard (developed by the U.S. Association of Public Safety Communications Officers (APCO)) or the TETRA standard (developed by the European Telecommunications Standards Institute (ETSI)). Until recently, none of these protocols were able to accommodate the high-speed throughput of packet data that is needed to fully support multimedia communications. However, recent wideband internet cellular services allow panoramic and three-dimensional content to be transmitted over the internet when it is broken down into manageable video/image sub-segments using the systems, devices, and methods described in the present invention.
Accordingly, there is a need for a panoramic communication system that extends packet transport service across the wireless link(s), or stated differently, that extends IP “host” functionality to wireless communication units so as not to require dedicated bandwidth between endpoints. Advantageously, the communication system and protocol will support high-speed throughput of packet data, including but not limited to panoramic streaming voice and video over the wireless link. The present invention is directed to addressing these needs. The following describes a panoramic communication system that extends packet transport service over both wireline and wireless link(s). The communication system supports high-speed throughput of packet data, including but not limited to streaming voice and video between IP host devices including but not limited to wireless communication units 120 or 122. A packet-based, multimedia communication system that is of the type required and is integrated to support Panoramic Image Based Virtual Reality/Telepresence Personal Communication is described by Dertz et al. in U.S. Patent Application Publication 2002/0093948 A1 dated Jul. 18, 2002.
Systems and methods are disclosed herein including the service controller managing a call request for a panoramic video/audio call; the panoramic multimedia content server accommodating a request for panoramic multimedia information (e.g., a web browsing or video playback request); the bandwidth manager accommodating a request for a reservation of bandwidth to support a panoramic video/audio call; and execution of panoramic two-way video calls, panoramic video playback calls, and panoramic web browsing requests. The present invention extends the usefulness of packet transport service over both wireline and wireless link(s). Adapting existing and new communication systems to handle panoramic and three-dimensional content according to the present invention supports high-speed throughput of packet data, including but not limited to streaming voice and video between IP host devices, including but not limited to wireless communication units. In this manner the wireless panoramic personal communications systems/terminals described in the present invention may be integrated into, or overlaid onto, any conventional video-capable telecommunications system.
In one embodiment of the integrated telecommunication system the panoramic communication units 120, 122 comprise wireless radio terminals that are equipped for one-way or two-way communication of IP datagrams associated with multimedia calls (e.g., voice, data and/or video, including but not limited to high-speed streaming voice and video) singly or simultaneously with other hosts in the communication system 100. In such case, the communication units 120, 122 include the necessary call control, voice and video coding, and user interface needed to make and receive multimedia calls.
In another embodiment of the integrated telecommunication system the repeater 112, panoramic communication units 120, 122, routers 108, dispatch console 124, gatekeeper 126, web server 128, video server 130 and IP Gateway 132 all comprise IP host devices that are able to send and receive IP datagrams between other host devices of the network. For convenience, the communication units 120, 122 will be referred to as “wireless terminals.” As will be appreciated and has been described above in
In another embodiment of the integrated telecommunication system the fixed equipment host devices at the respective sites are connected to their associated routers 108 via wireline connections (e.g., Ethernet links 134) and the routers themselves are also connected by wireline connections (e.g., T1 links). These wireline connections thus comprise a wireline packet switched infrastructure ("packet network") 136 for routing IP datagrams between the fixed equipment host devices. One of the unique aspects of the example telecommunications system is the extension of IP host functionality to the panoramic wireless host devices (e.g., the panoramic communication units 120, 122) over a wireless link (i.e., the wireless communication resource 116). For convenience, the term "wireless packet network" will hereinafter define a packet network that extends over at least one wireless link to a wireless host device as described herein.
The wireless communication resource 116 may comprise multiple RF (radio frequency) channels such as pairs of frequency carriers, code division multiple access (CDMA) channels, or any other RF transmission media. The repeater 112 is used to generate and/or control the wireless communication resource 116. In one embodiment, the wireless communication resource 116 comprises time division multiple access (TDMA) slots that are shared by devices receiving and/or transmitting over the wireless link. IP datagrams transmitted across the wireless link can be split among multiple slots by the transmitting device and reassembled by the receiving device. In the preferred input embodiment A or B transmitted datagrams will transmit panoramic video and three-dimensional content and three-dimensional position, orientation, and heading data.
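Splitting an IP datagram among TDMA slots and reassembling it at the receiver reduces, in essence, to fixed-size segmentation. A minimal sketch follows, assuming a 48-byte slot payload chosen only for illustration:

```python
# Sketch of splitting an IP datagram across fixed-size TDMA slots at
# the transmitting device and reassembling it at the receiver. The
# 48-byte slot payload is an assumed figure, not taken from the patent.
SLOT_BYTES = 48

def segment(datagram: bytes) -> list[bytes]:
    """Cut a datagram into slot-sized fragments, last one possibly short."""
    return [datagram[i:i + SLOT_BYTES]
            for i in range(0, len(datagram), SLOT_BYTES)]

def reassemble(slots: list[bytes]) -> bytes:
    """Concatenate fragments back into the original datagram."""
    return b"".join(slots)

dgram = bytes(range(200))
slots = segment(dgram)
assert reassemble(slots) == dgram
print(f"{len(dgram)}-byte datagram sent in {len(slots)} slots")
```

A real link layer would also tag each fragment with sequence information so that lost or reordered slots can be detected before reassembly.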
In another embodiment, the repeater 112 performs a wireless link manager function and a base station function. The wireless link manager sends and receives datagrams over the wireline network 136, segments and formats datagrams for transmission over the wireless link 116, prioritizes data for transmission over the wireless link 116 and controls access of the wireless terminals 120, 122 to the wireless link 116. In one embodiment, the latter function is accomplished by the wireless link manager allocating “assignments” granting permission for the wireless terminals to send messages over the wireless link.
The assignments may comprise either "Non-Reserved Assignment(s)" or "Reserved Assignment(s)," each of which is described in greater detail in the related application referenced as [docket no. CM04761H] to the Dertz et al. 2002/0093948 A1 application. The base station sends and receives radio signals over the wireless link 116. Multiple base stations can be attached to a single wireless link manager.
A related application to the Dertz et al. 2002/0093948 application, referenced as [docket no. CM04762H], discloses a slot structure that supports the transmission of multiple types of data over the wireless link 116 and allows the packets of data to be segmented to fit within TDMA slots. It also provides for different acknowledgement requirements to accommodate different types of service having different tolerances for delays and errors. For example, a voice call between two wireless terminals A, 120 and B, 122, can tolerate only small delays but may be able to tolerate a certain number of errors without noticeably affecting voice quality. However, a data transfer between two computers may require error-free transmission, while delay may be tolerated.
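These differing tolerances can be modeled as a priority queue in which delay-intolerant traffic is sent first and unacknowledged, while error-intolerant traffic waits but carries an acknowledgement requirement, as the next paragraph describes. The traffic classes and priorities in this sketch are illustrative assumptions:

```python
import heapq

# Voice packets go out first and without acknowledgement; data packets
# wait but would be retransmitted until acknowledged. Assumed values.
PRIORITY = {"voice": 0, "video": 1, "data": 2}

queue: list = []
seq = 0

def enqueue(kind: str, payload: bytes) -> None:
    global seq
    needs_ack = (kind == "data")   # error-intolerant traffic
    heapq.heappush(queue, (PRIORITY[kind], seq, needs_ack, payload))
    seq += 1

def transmit_next() -> None:
    prio, _, needs_ack, payload = heapq.heappop(queue)
    print(f"sending prio={prio} ack={'required' if needs_ack else 'none'}")
    # A real link manager would retain 'payload' for retransmission
    # until an acknowledgement arrives when needs_ack is True.

enqueue("data", b"file chunk")
enqueue("voice", b"20 ms of speech")
transmit_next()   # voice is sent first despite being queued second
```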
Advantageously, the slot format and acknowledgement method may be implemented in the present invention to transmit delay-intolerant packets on a priority basis without acknowledgements, while transmitting error-intolerant packets at a lower priority but requiring acknowledgements and retransmission of the packets when necessary to reduce or eliminate errors. The acknowledgement technique may be asymmetric on the uplink (i.e., wireless terminal to repeater) and downlink (i.e., repeater to wireless terminal) of the wireless link 116. The routers 108 of the wireline portion of the network are specialized or general purpose computing devices configured to receive IP packets or datagrams from a particular host in the communication system 100 and relay the packets to another router or another host in the communication system 100. The routers 108 respond to addressing information in the IP packets received to properly route the packets to their intended destination. In accordance with internet protocol, the IP packets may be designated for unicast or multicast communication. Unicast is communication between a single sender and a single receiver over the network.
Multicast is communication between a single sender and multiple receivers on a network. Each type of data communication is controlled and indicated by the addressing information included in the packets of data transmitted in the communication system 100. For a unicast message, the address of the packet indicates a single receiver. For a multicast communication, the address of the packet indicates a multicast group address to which multiple hosts may join to receive the multicast communication. In such case, the routers of the network replicate the packets, as necessary, and route the packets to the designated hosts via the multicast group address. In this way one user of the panoramic communication units 120, 122 described in
The wireless packet network is adapted to transport IP packets or datagrams between two or more hosts in the communication system 100, via wireless and/or wireline links. In a preferred embodiment, the wireless packet network will support multimedia communication, including but not limited to high-speed streaming voice and video so as to provide the hosts of the communication system 100 with access to voice, video, web browsing, video-conferencing and internet applications. As will be appreciated, depending on which host devices are participating in a call, IP packets may be transported in the wireless packet network over wireline portions, wireless portions or both wireline and wireless portions of the network. For example, IP packets that are to be communicated between fixed equipment host devices (e.g., between console 124 and gatekeeper 126) will be routed across only wireline links, and IP packets that are communicated between fixed equipment host devices and wireless communication devices are transported across both wireline and wireless links.
Those packets that are to be communicated between wireless terminals (e.g., between panoramic communication units 120, 122) may be transported across only wireless links, or wireless and wireline links, depending on the mode of operation of the communication system 100. For example, in site trunking mode, packets might be sent from communication unit 120 to repeater site 102 via wireless link 116, to router 108 via Ethernet 134, back to the repeater site 102 and then to communication unit 122 via wireless link 118. In a direct mode, sometimes referred to as “talk around” mode, packets may be sent between the panoramic communication units 120, 122 directly via a wireless link. In either case, the wireless packet network of the present invention is adapted to support multimedia communication, including but not limited to high-speed streaming of panoramic voice and video so as to provide the host devices with access to panoramic audio, video, web browsing, video-conferencing and internet applications.
Microphones on the panoramic sensor assembly, just like the objective lenses, are associated with certain set regions on the housing to enable panning across a scene or reconstructing the scene. The software/firmware described in
Practitioners skilled in the art will appreciate that the communication system 100 may include various other communication devices not shown in
For example, the communication system 100 may include comparator(s), telephone interconnect device(s), internet protocol telephony device(s), call logger(s), scanner(s) and gateway(s). Generally, any of such communication devices may comprise wireless or fixed equipment host devices that are capable of sending or receiving IP datagrams routed through the communication system 100. Now referring to the core equipment site 106, the gatekeeper 126, web server 128, video server 130 and IP Gateway 132 will be described in greater detail. Generally, the gatekeeper 126, web server 128, video server 130 and IP Gateway 132 operate either singly or in combination to control audio and/or video calls, streaming media, web traffic and other IP datagrams that are to be transported over a wireless portion of the communication system 100. In one embodiment, the gatekeeper 126, web server 128 and video server 130 are functional elements contained within a single device, designated in
According to one embodiment of the present invention, the gatekeeper 126 authorizes all video and/or audio calls between host devices within the communication system 100. The audio and/or video may be of a spatial, three-dimensional, or panoramic nature as described above. For convenience, the term "video/audio calls" includes spatial audio and video data and will be used herein to denote video and/or audio calls, whether or not they are panoramic, as either can be accommodated. The video/audio calls that must be registered with the gatekeeper are of three types: video only, audio only, or combination audio and video. Calls of any type can be two-way, one-way (push), one-way (pull), or a combination of one-way and two-way. Two-way calls define calls between two host devices wherein the host devices send audio and/or video to each other in full duplex fashion, thus providing simultaneous communication capability. One-way push calls define calls in which audio and/or video is routed from a source device to a destination device, typically in response to a request by the source device (or generally, by any requesting device other than the destination device). The audio and/or video is "pushed" in the sense that communication of the audio and/or video to the destination device is initiated by a device other than the destination device. Conversely, one-way pull calls define calls in which audio and/or video is routed from a source device to a destination device in response to a request initiated by the destination device.
In one embodiment, any communication between host devices other than video/audio calls including, for example, control signaling or data traffic (e.g., web browsing, file transfers) may proceed without registering with the gatekeeper 126. As has been noted, the host devices may comprise wireless devices (e.g., panoramic communication units 120, 122) or fixed equipment devices (e.g., repeater 112, routers 108, console 124, gatekeeper 126, web server 128, video server 130 and IP Gateway 132).
For video/audio calls, the gatekeeper 126 determines, cooperatively with the host device(s), the type of transport service and bandwidth needed to support the panoramic or three-dimensional content call. In one embodiment, for example, this is accomplished by the gatekeeper exchanging control signaling messages with both the source and destination devices. If the call is to be routed over a wireless link, the gatekeeper determines the RF resources 116 needed to support the call and reserves those resources with the wireless link manager (a functional element of repeater 112). The gatekeeper 126 further monitors the status of active calls and terminates a call, for example when it determines that the source and/or recipient devices are no longer participating in the call or when error conditions in the system necessitate terminating the call. The wireless link manager receives service reservation commands or requests from the gatekeeper and determines the proper combination of error correction techniques, reserved RF bandwidth and wireless media access controls to support the requested service. The base station is able to service several simultaneous service reservations while sending and receiving other IP traffic between the panoramic communication units 120, 122 and host device(s) over the wireless link 116.
The web server 128 provides access to the management functions of the gatekeeper 126. In one embodiment, the web server 128 also hosts the selection of video clips, via selected web pages, by a host device and provides the selected streaming video to the video server 130. The video server 130 interfaces with the web server 128 and gatekeeper 126 to provide stored streaming video information to requesting host devices. For convenience, the combination of web server 128 and video server 130 will be referred to as a multimedia content server 128, 130. The multimedia content server 128, 130 may be embodied within a single device 140 or distributed among separate devices.
The IP gateway 132 provides typical firewall security services for the communication system 100. As previously discussed, the web server and video server may be equipped with software for storing, manipulating, and transmitting spatial 3-D or panoramic content. The spatial content may be in the form of video, game, or 3-D web browsing content. The server may be programmed to continuously receive and respond instantaneously to commands from the user of a handheld or worn panoramic communication unit 120, 122.
The present invention contemplates that the source and/or panoramic destination devices 120, 122 may be authorized for certain services and not authorized for others.
If the service controller determines that the source and destination devices are authorized for service at steps 204, 206 and that the destination device is in service at step 208, the service controller requests a reservation of bandwidth to support the call at step 210. In one embodiment, this comprises the service controller sending a request for a reservation of bandwidth to the bandwidth manager. In one embodiment, the service controller may also request a modification or update to an already-granted reservation of bandwidth. For example, the service controller might dynamically scale video bitrates of active calls depending on system load.
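The grant/deny/rescale behavior of the bandwidth manager might be sketched as follows. The capacity figure and bitrates are assumed values for illustration, not parameters of the referenced system:

```python
# Sketch of the reservation step at 210: grant or deny a bandwidth
# request against a fixed pool, and rescale active video calls under
# load. Capacity and bitrates are assumed example values.
class BandwidthManager:
    def __init__(self, capacity_kbps: int = 2000):
        self.capacity = capacity_kbps
        self.reservations: dict[str, int] = {}

    def reserve(self, call_id: str, kbps: int) -> bool:
        used = sum(self.reservations.values())
        if used + kbps > self.capacity:
            return False                   # deny: insufficient bandwidth
        self.reservations[call_id] = kbps
        return True                        # grant

    def rescale(self, call_id: str, kbps: int) -> None:
        """Update an already-granted reservation, e.g. lowering the
        video bitrate of an active call when the system is loaded."""
        self.reservations[call_id] = kbps

bm = BandwidthManager()
print(bm.reserve("call-A", 1500))   # True: granted
print(bm.reserve("call-B", 800))    # False: denied, pool exhausted
bm.rescale("call-A", 1000)          # scale call A down under load
print(bm.reserve("call-B", 800))    # True: now fits
```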
Examples of various types of communication supportable by the communication system 100 that is adapted and integrated with the panoramic capable communication terminals/units 120, 122 are described in
It should be noted that the message sequence charts of
As has been noted, the communication system 100 of the present invention is adapted to support several different types of communication between host devices, including panoramic audio and/or video calls requiring registration with the gatekeeper (i.e., the service controller function of the gatekeeper) and communication other than audio and/or video calls (e.g., control signaling and data traffic, including web browsing, file transfers, and position, orientation, and heading data) that may proceed without registering with the gatekeeper 126. Moreover, as has been noted, the sources and recipients of the different types of communication may comprise wireless panoramic devices and/or fixed panoramic equipment devices.
Referring initially to
In the current example the wireless terminals comprise a wireless head-mounted panoramic communication system worn by user A and a wrist-mounted panoramic communication system worn by user B according to the present invention. The panoramic communication terminals A and B include a panoramic sensor assembly for recording multimedia signals representing the surrounding environment. The resulting signals are processed and selectively transmitted to a remote user for viewing. Data derived from the panoramic sensor assembly may also be used to define the content viewed. For example, head position, orientation, and heading data and eye gaze data of User A might be derived and transmitted to User B. User B would then use the data to send corresponding imagery to A based on that point-of-view and gaze data. In this way User A would be slaved to wherever User B looked at his or her remote location.
If all permissions are left open by the Terminal A and Terminal B users, the users may immerse themselves in what each other sees in their respective environments. Alternatively, viewer A may restrict viewer B to what he or she sees, or vice versa. Still alternatively, viewers A and B may agree on push and pull rules for their respective scenes. Viewer A may elect to see through his visor at the real world, if it is a see-through type, and transmit the sensed world to User B. Alternatively, user A may use his active viewfinder display to view the image he is sending to user B. Still alternatively, User A could view the processed image he is transmitting to User B in one eye, and view the real world in the other eye. In these instances, users may define what is acquired, tracked, and reported on using the menus discussed previously in the processing section of this invention. For instance, user A could select from the menu to send only image segments of his face to user B. And User A, having just gotten out of the shower and being modest, could allow user B to see everything in the environment except her body by using the menu selections. Still alternatively, just to be safe, she might allow only audio to be sent and no imagery using the menu selections. The wireless terminals may include other interaction devices besides the optical sensor assembly, for example magnetic position sensors, datagloves, gesture recognition systems, voice recognition systems, keypads, touchscreens, menu options, etc., that permit the user to select the type and rules of a call and the second party for the call. Positional data and commands can be sent in packets just like the video information. Full duplex communication allows for the transmission and receipt of information in a simultaneous manner between user terminals A and B. The second party may be identified by user identification number, telephone number or any suitable means of identification.
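The permission rules described above amount to a per-user policy that filters what the terminal transmits. A minimal sketch follows, with the rule names and the filtering granularity assumed for illustration:

```python
from dataclasses import dataclass

# Hypothetical per-user sharing policy; the rule names and the
# coarse stream-level filtering are illustrative assumptions.
@dataclass
class SharePolicy:
    send_audio: bool = True
    send_video: bool = True
    face_only: bool = False   # send only the face image segments

def outgoing_streams(policy: SharePolicy) -> list:
    """Return the streams the terminal is permitted to transmit."""
    streams = []
    if policy.send_audio:
        streams.append("audio")
    if policy.send_video:
        streams.append("face segment" if policy.face_only
                       else "full panoramic video")
    return streams

modest = SharePolicy(send_video=False)        # audio only, no imagery
print(outgoing_streams(modest))               # ['audio']
print(outgoing_streams(SharePolicy(face_only=True)))
```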
Finally, it should be understood that one-way push or pull calls between a User A and User B are possible if two-way calls are possible, as they are even less demanding.
If authorization is received for the call, the multimedia content server sets up a one-way video call. Permissions, software, and hardware to allow panoramic or three-dimensional content distribution and interaction are set up between the multimedia content server and the destination user's wireless panoramic/3-D communication unit 120. The one-way video call may comprise a video-only call or a combination video and audio call. In one embodiment, setting up the video call comprises the multimedia content server negotiating terms of the video call with the destination device. For example, the multimedia content server and destination device might negotiate the type of audio, vocoder type, video coder type and/or bit rate to be used for the call. After setting up the call, the multimedia content server retrieves video information (i.e., from memory or from a web site link) associated with the call and sends the video information to the requesting device (or destination device, if different than the requesting device) until the call ends.
Alternatively or additionally, the multimedia content server may source audio-only, video-only, or lip-synced audio and video streams. The message sequence of
Next, Wireless Terminal A obtains a Non-Reserved Assignment 804 from Wireless Link Manager A, thereby allowing it to send a Video Playback Request 806 across an associated wireless link to the 3-D Multimedia Content Server. The Multimedia Content Server, which is the source of video information for the call, sends a Video Call Setup Request 808 to the Service Controller. The Service Controller determines the availability of bandwidth to support the call by sending a Reserve Bandwidth Request 810 to the Bandwidth Manager. The Bandwidth Manager responds to the request by determining the amount of bandwidth required for the call and granting or denying the Reserve Bandwidth Request based on the availability of bandwidth for the call. In one embodiment the Bandwidth Manager determines the amount of bandwidth required on the wireless link(s) for the call and grants or denies the Reserve Bandwidth Request based on the availability of bandwidth on those link(s). In the example of
Proceed message 814 to the Multimedia Content Server, thereby authorizing the video playback call to proceed.
Thereafter, the Panoramic Multimedia Content Server and Wireless Terminal A exchange Setup Video Call message(s) 816, 820 to negotiate terms of the video call such as, for example, the type of audio, vocoder type, video coder type and/or bit rate to be used for the 3-D video playback call. In one embodiment, the Setup Video Call message(s) 820 from Panoramic Wireless Terminal A cannot be sent until Non-Reserved Assignment(s) 818 are received from Wireless Link Manager A. After the terms of the panoramic video playback call have been negotiated, the Multimedia Content Server retrieves video/audio packets 822 from memory or from an associated web server and sends them to Panoramic-capable Wireless Terminal A. Upon receiving the video/audio packets, Panoramic-capable Wireless Terminal A converts the IP packets into video/audio information 824 that is displayed/communicated to Wireless User A.
When the Panoramic-capable Multimedia Content Server has finished sending the video/audio packets 822, it ends the video playback call by sending End Call message(s) 826 to Panoramic-capable Wireless Terminal A and Video Call Ended message(s) 828 to the Service Controller. Upon receiving the Video Call Ended message 828, the Service Controller initiates a release of the bandwidth supporting the call by sending a Release Bandwidth Request 830 to the Bandwidth Manager.
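For reference, the playback sequence just described can be condensed into the following outline. The message names mirror the description above, while the formatting and function name are purely illustrative:

```python
# Condensed outline of the playback sequence (messages 806-830); real
# message formats follow the referenced Dertz et al. application, so
# everything here is an illustrative stand-in.
def video_playback_call() -> None:
    steps = [
        "Terminal A      -> Content Server  : Video Playback Request",
        "Content Server  -> Service Ctlr    : Video Call Setup Request",
        "Service Ctlr    -> Bandwidth Mgr   : Reserve Bandwidth Request",
        "Bandwidth Mgr   -> Service Ctlr    : Grant",
        "Service Ctlr    -> Content Server  : Proceed",
        "Content Server <-> Terminal A      : Setup Video Call (coders, bit rate)",
        "Content Server  -> Terminal A      : video/audio packets ...",
        "Content Server  -> Terminal A      : End Call",
        "Content Server  -> Service Ctlr    : Video Call Ended",
        "Service Ctlr    -> Bandwidth Mgr   : Release Bandwidth Request",
    ]
    for step in steps:
        print(step)

video_playback_call()
```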
The message sequence of
Wireless User A initiates the request by sending Video Playback signal(s) 902 to Panoramic-capable Wireless Terminal A. The Panoramic Video Playback signal(s) 902 identify the video information (e.g., video clips) that is desired for playback in a manner that is recognizable by the panoramic-capable multimedia content server, such that the server may ultimately retrieve and source the requested video information. For example, the Video Playback Signal(s) may identify a URL for a particular web site video link with panoramic/3-D content. The Video Playback Signal(s) 902 also identify the destination for the call, which in the present example is Panoramic-capable Wireless Terminal B of Wireless User B, located at a different RF site than Wireless User A. The mechanism for Wireless User A entering the Panoramic Video Playback signal(s) 902 may comprise other interaction devices besides the optical sensor assembly, for example magnetic position sensors, datagloves, gesture recognition systems, voice recognition systems, keypads, touchscreens, menu options, and the like, depending on the features and functionality of Wireless Terminal A. Positional data and commands can be sent in packets just like the video information. Full duplex communication allows for the transmission and receipt of information in a simultaneous manner between user terminal A, the server, and terminal B. The second party that the playback request is for may be identified by user identification number, telephone number or any suitable means of identification.
Wireless Terminal A obtains a Non-Reserved Assignment 904 from Wireless Link Manager A and sends a Video Playback Request 906 across an associated wireless link to the Multimedia Content Server. The Multimedia Content Server, which is the source of video information for the call, sends a Video Call Setup Request 908 to the Service Controller. The Service Controller determines the availability of bandwidth to support the call by sending a Reserve Bandwidth Request 910 to the Bandwidth Manager. The Bandwidth Manager responds to the request by determining the amount of bandwidth required for the call and granting or denying the Reserve Bandwidth Request based on the availability of bandwidth for the call. In the example of
When the Multimedia Content Server has finished sending the panoramic content video/audio packets 922, it ends the video playback call by sending End Call message(s) 926 to Wireless Terminal B and Video Call Ended message(s) 928 to the Service Controller. Upon receiving the Video Call Ended message 928, the Service Controller initiates a release of the bandwidth supporting the call by sending a Release Bandwidth Request 930 to the Bandwidth Manager.
Panoramic Wireless Terminal A 120 obtains a Non-Reserved Assignment 1004 from Wireless Link Manager A and sends a Browsing Request 1006 across an associated wireless link to the Panoramic-capable Multimedia Content Server. The Multimedia Content Server sends a Browsing Response signal 1008 to Wireless Terminal A that includes browsing information associated with the browsing request. Upon receiving the browsing information, Wireless Terminal A displays the panoramic or 3-D browsing content 1010 to Wireless User A.
The present disclosure therefore has identified a panoramic/3-D capable communication system 100 that extends packet transport service over both wireline and wireless link(s).
The wireless panoramic communication system supports high-speed throughput of packet data, including but not limited to streaming voice and video to wireless terminals 120 or 122 participating in two-way video calls, video playback calls, and web browsing requests.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
The present application claims the benefit of U.S. Provisional Patent Application Ser. No. 60/572,408, filed May 19, 2004, which is incorporated herein by reference in its entirety.