The present invention relates generally to experimental reality integrated systems, which may include elements for the creation, editing, monitoring and display of virtual, augmented and mixed reality manifestations for any application, purpose or industry; telecommunications; and optical and personal display systems applied to virtual, augmented and mixed reality systems for all applications, entertainment manifestations and head-mounted displays.
Miniature displays are also well known and may involve a miniaturized version of planar or stereoscopic 3D technologies which display a distinct image to each eye. With increased miniaturization and incorporation into eyeglass designs, head-mounted displays (HMDs) have enjoyed increasing popularity for applications ranging from fighter pilot helmet displays and endoscopic surgery to virtual reality games and augmented reality glasses. The 3D HMD display technology has numerous extensions including Near-to-Eye Displays (NTD)—periscopes and tank sights; Heads-Up Displays (HUD)—windshield and augmented reality; and Immersive Displays (IMD)—including CAVE, dome and theater-size environments. The principle employed varies little from that of the 1930s Polaroid™ glasses, or the barrier stereoscopic displays of the 1890s, despite the extensive invention related to the active technology producing each display that has occurred over the past twenty years. As applied to small displays, these techniques evolved to include miniature liquid crystal, field emission, OLED, quantum dot and other two-dimensional matrix displays, as well as variations of virtual screen and retinal scanning methodologies. These inventions have provided practical solutions to the problem of providing lightweight, high resolution displays but are limited to providing a stereoscopic view by means of image disparity.
It is also well known in the field that wavefront-based technologies, such as digital phase and diffractive holography, may, at high resolutions, convey a limited amount of accommodation data. However, their limitations, including coherence effects, impart significant speckle and other aberrations, degrading performance and inducing observer fatigue.
Augmented reality had its origins at MIT Lincoln Laboratory in the 1960s and involved a translucent HMD with head-orientation tracking in a wall-projection immersive environment. The ‘virtual image’ in the HMD did not have accommodation, and the immersive environment did not include spatially-tracked, portable audience elements with multiplicative effects.
Despite the improvements during the past decades, the significant problem of providing a low cost, highly accurate visual display with full accommodation remains.
One of the principal limitations has been the inability of sequentially resonant or programmed variable focal length optics, combined with scanning configurations, to properly display solid three-dimensional pixels orthogonal to the scanning plane. Another limitation is the inability of the observer's eye to properly and comfortably focus on rapidly flashing elements. Numerous inventions have been proposed which have generally been too complicated to be reliable, too expensive to manufacture, or without sufficient resolution, accuracy or stability to gain wide acceptance.
A further problem solved by the innovation of the present invention is the method and apparatus to ergonomically, comfortably and usefully carry and use an audio-visual display on one's person.
A further problem solved by the innovation of the present invention is the method and apparatus to provide lightweight optical components with high resolution and negligible chromatic aberrations, which may be transformed into a compact package.
A further problem solved by the innovation of the present invention is to provide a method and apparatus which is lightweight and ergonomic, with high resolution and negligible chromatic aberrations, and which may be transformed into a compact package and integrated into an event manifestation.
The present invention solves these and additional problems, particularly related to the portable multiphasic design, augmented reality, environmental dynamics and the accurate display of 3D pixels.
The present invention discloses an improved method and device for the display of a visual image in two or three dimensions including stereoscopic and/or visual accommodation, light field, beam holographic or diffractive. Another object of the present invention is an improved method and device for an immersive, augmented reality environment.
Another object of the present invention is an improved method and device for monitoring the physiological, psychological, fixation, processing, awareness and response of an individual.
Another object of the present invention is an improved method and device for constructing an accurate, augmented reality, visual display with automatic bi-ocular alignment.
Another object of the present invention is an improved method and device for constructing an accurate, augmented reality, visual display without an intermediate image plane.
Another object of the present invention is an improved method and device for manufacturing a visual display independent of coherence and wavefront curvature constraints.
Another object of the present invention is an improved method and device for thin, wave-guided display.
Another object of the present invention is an improved method of presenting visual information.
Another object of the present invention is an improved method and device for an immersive, augmented-virtual reality, audience performance environment.
Another object of the present invention is an improved method and device to present visual information in compact form unaffected by an external environment.
Another object of the present invention is an improved method and device to compactly wear upon one's person and transform into an immersive, augmented environment.
Another object of the present invention is an improved method and device to compactly wear upon one's person and transform into an immersive, augmented or virtual environment including a coordinated event manifestation and audience effects.
Another object of the present invention relates generally to robotic, moving-light devices, including those which illuminate and project data and images in visible and invisible wavelengths, particularly those used for theatre, stage, events, security and defense.
One object of the present invention is an improved luminaire, compact in size, lightweight, and with a low moment of inertia.
Another object is a 4π, continuous scan of the venue.
Another object is a high efficiency, low cost, low maintenance design without electrical slip rings, split transformers or other devices to transfer base electrical power to a rotating optical element.
Another object is a low moment of inertia of the rotating optical projection element.
Another object is a lightweight and compact design.
The above and still further objects, features and advantages of the present invention will become apparent upon consideration of the following detailed disclosure of specific embodiments of the invention, especially when taken in conjunction with the accompanying drawings, wherein:
In the following descriptions, the integrated headset device and system 10 may refer to a multiplicity of discrete elements (displays, cameras, touchscreens, computers, motion sensors, RF and optical communications, microphones, speakers, physiological sensors and other elements) integrated into a functional structure. Unless specifically described, the descriptions, functions and references may also refer to a well-known “Smart Phone”, manufactured by or marketed as an iPhone®, Samsung®, or others. The interactive embodiments of the present invention may be employed in manifestations of any sort to enable any effects or communication including but not limited to visual, audio streaming and interactivity.
(Base Station)
(Nanotech Embedded Display)
(Flexible Wraparound Display Tech)
In a preferred embodiment, the display device 100 folds about the nose bridge, and the arms 120 adjust to enable the earpieces 130 to be removably affixed, securing the device 100 to the user's wrist or any other object.
In a preferred embodiment, the device 100 has a positive curl imparted in the frame which causes the device to roll up in its natural state. This configuration enables the frame 110 to naturally wrap around a user's wrist or be expanded to present sufficient curl force to stably affix to a user's head, supported in part by the nose bridge.
In a preferred embodiment, the VR lenses 54 and support 56 may slide into a pocket behind the display 40 for storage or AR operation. In operation, the support 56 may be removably affixed to the eye visor 20.
In another preferred embodiment, the VR lenses 54 and support 56 may be movably and/or removably attached to the eye visor 20 and/or the device support 50. In operation for VR, the support 56 may be removably but rigidly affixed to the eye visor 20 and the device support 50. When stored, the VR lenses 54 and support 56 may fold onto the eye visor 20, and both may be folded adjacent to the device support 50. In this configuration, the user's line-of-sight is direct.
These preferred embodiments may incorporate any or all of the features disclosed in the parent applications including but not limited to U.S. patent application '044.
(Collapsible Head Strap Feature)
The attachment arm 75 may be collapsible, hinged, elastic or of other construction to enable a rigid and stiff connection between the head strap 71 and the headset apparatus 10.
(Game/Trade Show Variant of Ergo)
These preferred embodiments incorporate by reference any or all of the features disclosed in the parent applications including but not limited to U.S. patent application Ser. No. 16/190,044 and Provisional Patent Application No. 63/222,599.
(Name Tag Variant)
(Diffusive Overlay for Optical Data Signal)
The popular smart phone cameras may be employed in a dual role: as a normal scene camera and as a data receiver. Normally, in order to receive a narrow data beam, which may be incident at any angle, the full frame must be analyzed. The process may be greatly simplified by dedicating part of the camera aperture, preferably in a plane of focus, to a diffusive or holographic filter which redirects part of the data beam to a dedicated region of the camera sensor. Thus, the diffusive, translucent target in the field of view may be monitored for any beam characteristics (color, intensity and timing) of an external illuminating beam.
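As a concrete illustration of this monitoring scheme, the following is a minimal sketch in Python. It assumes the diffuser patch maps to a fixed region of interest on the sensor and that the beam carries simple on/off keying, one bit per frame; the ROI coordinates, threshold and frame format are hypothetical, not taken from the specification.

```python
import numpy as np

# Hypothetical region of the sensor covered by the diffusive/holographic
# filter; the coordinates are illustrative only.
ROI = (slice(0, 32), slice(0, 32))   # top-left 32x32 pixel patch
THRESHOLD = 128                       # mid-scale intensity cut for on/off keying

def sample_beam(frame: np.ndarray) -> dict:
    """Return mean color and intensity of the diffuser patch for one frame."""
    patch = frame[ROI]                            # H x W x 3 (RGB) sub-array
    mean_rgb = patch.reshape(-1, 3).mean(axis=0)  # average over the patch
    return {"rgb": mean_rgb, "intensity": mean_rgb.mean()}

def decode_ook(frames) -> list[int]:
    """Decode one on/off-keyed bit per frame from the diffuser patch."""
    return [1 if sample_beam(f)["intensity"] > THRESHOLD else 0 for f in frames]

# Synthetic demo: alternating dark frames and frames with the patch lit.
dark = np.zeros((480, 640, 3), dtype=np.uint8)
lit = dark.copy(); lit[ROI] = 200
print(decode_ook([dark, lit, lit, dark, lit, dark, dark, lit]))
# -> [0, 1, 1, 0, 1, 0, 0, 1]
```

The same loop could monitor the patch's mean color rather than intensity to separate several simultaneous channels by wavelength.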
(Focal Distance by Divergence of the Emitted Beam of the Display Element)
The perception of the distance of an object is determined by a number of factors including but not limited to the focal length of the lens of the eye; binocular convergence; image disparity; occlusion of or by other objects in the scene; relative size; relative motion or direction of motion; color; and shading. The instantaneous focal length of the eye is in part determined by the divergence of the beam emitted from a resolvable, observable point source. The emitted beam may be of any form or combination, including but not limited to conical or divaricated in one or multiple directions. For example, binocular emitter arrays, each pixel having a variable, horizontally divaricated form, would enable the simultaneous projection of perceived focal distance (divergence), binocular convergence and image disparity.
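Under standard geometrical optics (an assumption stated here for clarity, not a limitation of the invention), the divergence the display must reproduce for a pixel perceived at distance d corresponds to a wavefront vergence of 1/d diopters at the pupil:

```latex
% Vergence (diopters, D = m^{-1}) of the diverging beam from a point source
% at distance d, which the eye's accommodation must neutralize:
V = \frac{1}{d}, \qquad
V(2\,\mathrm{m}) = 0.5\,\mathrm{D}, \qquad
V(0.4\,\mathrm{m}) = 2.5\,\mathrm{D}
```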
(Construction Flip Up Eye-Optics)
(Ergo Name Tag Variant from App '044)
(Eye Sensors)
(Opera Designs)
Alternative configurations may be employed including but not limited to a snap-out, sliding and fold from a front pivot 78′. Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure and are intended to be within the spirit and scope of the invention.
(Collapsible Head Strap Feature)
The attachment element 74 may be adjustable, elastic and flexible so as to maintain a secure and rigid attachment between the user and the headset apparatus 10 at the forehead (72, 73, 74) while stabilizing the headset arm 75 attachment elements 74 proximal to the user's ears. The headstrap 71 may slide horizontally within the attachment device 74 or may be statically affixed to the headset 10 at one or more attachment elements 74, which may also integrate the adjustment device 72. In a preferred simplified embodiment the headstrap 71 is comprised of two sections, each of which is statically attached at adjacent forehead points, dynamically attached to the headset arm 75 attachment elements 74, and adjusted in length by the independent adjustment device 72.
The attachment arm 75 may be collapsible, hinged, elastic or of other construction to enable a rigid and stiff connection between the head strap 71 and the headset apparatus 10.
This view of the present invention shows a pliable, inelastic or elastic headstrap 71, with or without a length adjustment 72, which may be employed with zero, one or more fixed or sliding attachments 74 to the headset apparatus 10/70. In a preferred embodiment the headstrap 71 is immovably affixed to the headset 10/70 proximal to the forehead panel 74-1 and may have pass-through attachments 74 on the headset temple arms 75/750. The headstrap 71 may be adjustable at the attachment points 74 on the forehead element 755, the headset temple arms attachment 74, or an independent strap adjustment device 72. The headset temple arm 750 may be articulated at one or more points 76, enabling the headset arms 750 to fold within the outline of the forehead (device support frame) headset element 755.
In a preferred embodiment one or more regions 640, 48 of the Display Module/Smart Phone Screen 40 is visible and/or accessible for touch screen activation. The visual and touch characteristics may be dynamically altered in response to any event, location, command, etc. In combination with the sensor input, including spatial positioning and status, the region 640 may function as a show and/or user controller “pixel” as part of a large collection of the Receivers.
The Smartphone may be removed through any openings, folds or other constructions in any direction including but not limited to refastenable retaining flaps on the sides or the top comprising an integral or separate element.
A pliable, inelastic or elastic headstrap 71, with or without a length adjustment 72, may be employed with zero, one or more fixed or sliding attachments 74 to the headset apparatus 10/70. In a preferred embodiment the headstrap 71 is immovably affixed to the headset 10/70 proximal to the forehead panel 74-1 and may have pass-through attachments 74 on the headset temple arms 75/750. The headstrap 71 may be adjustable at the attachment points 74 on the forehead element 755, the headset temple arms attachment 74, or an independent strap adjustment device 72. The headset temple arm 750 may be articulated at one or more points 76, enabling the headset arms 750 to fold within the outline of the forehead (device support frame) headset element 755.
This side view of a preferred ergonomic (ERGO) embodiment of the present invention shows a pliable, inelastic or elastic headstrap 71, with or without a length adjustment 72, which may be employed with zero, one or more fixed or sliding attachment regions 74 on the headset apparatus 10/70. In a preferred embodiment the headstrap 71 is immovably affixed to the headset 10/70 proximal to the forehead panel 74-1 and may have pass-through attachments 74 on the headset temple arms 75/750. The headstrap 71 may be adjustable at the attachment points on the forehead 74/740, the headset temple arms attachment 74, or an independent strap adjustment device 72/174. The headset temple arm 750 may be articulated at one or more points 76, enabling the headset arms 750 to fold within the outline of the forehead (device support frame) headset element 755.
A foam, cloth, or other material comfort element 73/730 may be provided at the forehead element 755. The comfort element 73/730 may be formed and attached in any manner, including but not limited to over the headstrap 71 attachment element 74, or as a horizontal array of spaced vertical cylinders affixed to the surface of the forehead element 755, under or over the headstrap 71.
The ERGO embodiment is shown having the Display Module/SmartPhone 40/740 positioned proximal to the forehead 755, the first mirror 30/730 aligned to reflect the display image downwards to the lens element 50/52 and the eye reflector 20/22.
A side view of a preferred LEGACY embodiment of the present invention is presented, having the Display Module/SmartPhone 40/740 positioned proximal to the forehead 755, with the first mirror 30/730 aligned to reflect the display image downwards to the lens element 50/52 and the eye reflector 20/22.
The First Mirror (Hypotenuse Mirror) may be comprised of a flexible material and unravel to increase the hypotenuse length; may be of accordion design; and/or may include top and bottom slider elements 730 into which the mirror 30 slides. In the closed state, the sliders converge to reduce the overall hypotenuse length. In the open state, the sliders move outwards, thus increasing the hypotenuse length. Side and supporting rails and inserts may be provided.
Many additional preferred embodiments of the present invention exist, wherein the headstrap 71 may have fixed or sliding alignment/stabilizing slots at an intermediate strap 71/72.
The user's ear may support and stabilize the headset through the temple arm region 752, which may rest upon the user's ear. The headstrap 71 may be constructed from two or three segments, and be adjustable at the attachment points on the forehead 73, 76; the headset temple arms attachment 74; or an independent strap adjustment device 174. Elements may include: a separate arm-forehead strap; a single adjustment (on the headset arms, on the forehead unit, independent, or other); dual adjustments (on the arm, on the forehead, independent, or other); or flow-through adjustment (on arm slots, on the forehead).
Various configurations may include: an articulated temple arm with fold lines 76, which lies flatter when folded, with articulation fold creases dimensioned such that the main part of the temple arm folds to cover 50% or less of the width of the forehead section; foam, absorbent/dissipating materials, patterns, and spacers for the forehead pad, which may be patterned with slots and/or attached from above or below the strap attachment region; strap slot orientations (horizontal, canted forward, canted backward, spaced above the lens); a transmissive-reflective lens combination; ERGO (vertical display forward); LEGACY (vertical display rear); transformable module orientation; a headset which may transform; and headset construction as a platform for an independent Display Module.
Materials for construction of the headset may include: Paper, Synthetic, Plastic, Carbon fiber, Wood, Cloth or other.
(Telescoping Optic Axis Redirection)
Modern cell phones are increasingly employed as the display device for head-mounted displays including, but not limited to, those designed by AR for Everyone and Google. The cell phone commonly has one or more integrated cameras, most often in a fixed position with a principal optical axis orthogonal to and offset from the principal center of the phone. This arrangement presents a problem for the accurate registration of the camera image with the display image. Further, the orthogonal principal axis presents a problem when the external object of interest is not in a comfortable or convenient location relative to the orientation of the user's head and HMD. Further, the distance between the principal optical axis and the phone center varies from phone to phone. Thus, an object of this invention is to vary the principal view axis of the camera, to provide a system to align with the principal or designated axis of the phone, and to provide an optical system which folds compactly. This innovative and economical solution comprises a foldable, telescoping, rotatable optical system which may be better understood from the drawings and specification herein. While the camera 80 is shown and described as integrated into a contemporary “Smartphone” 40, the present invention may be applied to any camera or optical device or combination.
In normal operation the first mirror is fixed to a base 100 which may rotate about the axis of the camera 80. The second mirror rotates about axis 124 causing the incident view angle 140 to sweep a plane orthogonal to the main axis of the phone 40.
A second adjustment may be made about a transverse axis 138, introducing a change in the plane parallel to the main axis of the phone 40.
The 2nd Mirror Assembly may telescope by sliding in telescoping slot 121. Further support may be provided by folding flaps 138.
A retaining ring 100 affixed to the phone 40 which enables the base 100 to rotate about the camera axis 80 is shown.
(Convergence & Accommodation)
Human eyes employ two complementary mechanisms for normal vision: Convergence—changing the angle of intersection of the principal optical axes of each eye, and Accommodation—changing the shape of the eye's lens and thereby its focal length. In our normal 3D environment, these mechanisms operate autonomically. (See Autonomic Control of the Eye by David H. McDougal and Paul D. Gamlin, Compr Physiol. 2015 January; 5(1): 439-473, incorporated herein by reference; available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4919817/.)
Current display technology simplifies our normal visual environment by presenting 3D environments as flat or monoscopic ‘representations’ (drawings, pictures, television, video, cinema) which eliminate both normal object convergence and accommodation—the viewer's eyes converge and focus at the same distance regardless of the “real world” relationships between the objects in a “real” environment. One display technology which moves toward the “real world” is stereoscopy, which presents slightly different images to each eye, thereby enabling an object-based convergence.
When the objects are within 5 meters of the observer, a significant conflict arises between the object-specific convergence and the fixed global accommodation. This conflict may produce eye strain and headaches, and it reduces scene acquisition, interpretation and apprehension. This effect manifests in the physiological processes of the vision system, measurable at the eye, the optic nerve, the brain, the muscles, and the entire human body.
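A worked example of the magnitude of this conflict, in the standard diopter measure (D = 1/meters); the distances are illustrative:

```latex
% Stereoscopic object rendered at 0.5 m on a display whose fixed focal
% distance is 2 m:
\underbrace{\frac{1}{0.5\,\mathrm{m}}}_{\text{convergence demand}}
- \underbrace{\frac{1}{2\,\mathrm{m}}}_{\text{accommodation demand}}
= 2.0\,\mathrm{D} - 0.5\,\mathrm{D} = 1.5\,\mathrm{D}\ \text{conflict}
```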
Object (or pixel)-specific accommodation requires that the divergent cone of light from each pixel of the display precisely converge at the proper ‘real world’ distance.
Various solutions have been proposed, including phase and digital holography, and dynamic ‘lightfield’. The calculations required and the apparatus to display or project them are complex, artifactual, and incomplete.
The present invention discloses a novel method and improved apparatus for the display of visual accommodation which directly translates the depth coordinate, commonly designated the Z value in cartesian XYZ space, to a spatially-aligned display source emitter accurately displaced from the “zero” plane in the display optical system, resulting in the proper true focal distance for each pixel at the observer's eye.
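One simplified reading of this Z-to-displacement mapping, offered as an assumed thin-lens model rather than the sole embodiment, is that an emitter placed a distance u inside the focal length f of the viewing optic forms a virtual image at the perceived distance Z:

```latex
% Thin-lens sketch: emitter at distance u from a viewing optic of focal
% length f forms a virtual image at perceived distance Z:
\frac{1}{f} = \frac{1}{u} - \frac{1}{Z}
\quad\Longrightarrow\quad
u = \frac{f\,Z}{f + Z}
% Example, f = 25 mm: Z = 1 m gives u = 24.4 mm; Z = 0.5 m gives u = 23.8 mm,
% i.e. about 0.6 mm of emitter displacement moves a pixel's perceived depth
% from 1 m to 0.5 m.
```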
The apparatus is described herein.
The method includes the discrete element analysis of the image to be displayed relative to the principal axis and plane of the observer's eye lens, wherein each element's distance, angular displacement, chromaticity and transparency is described. Any global or local coordinate system may be employed and encoded.
For the purpose of this generalized example, the method is presented using a cartesian coordinate model, wherein the image projected onto the observer's retina is described as an X-Y (Z, C, T) array of discrete elements (pixels) where, by convention, X is lateral or horizontal and Y is vertical, and the intersection of the principal optical axis of the eye is represented by the array element (0,0). The focal distance of the image element (X,Y) is the value (Z); its color is the value(s) (C); and its transparency is (T). Numerous conventions exist for the description of color and transparency (RGBA, CMYK, CIE 1931, etc.) and any one may be employed. For the purposes of this generalized example, the RGBA convention is employed.
Thus, the initial step of this method is to describe the image to be displayed as comprised of discrete elements in an array (X, Y, Z, R, G, B, A).
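A minimal sketch of this discrete-element description in Python follows; the resolution, the packing of Z and RGBA into one array, and the sample values are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

W, H = 640, 480
# Per-pixel channels: Z (focal distance, meters) and R, G, B, A in [0, 1].
image = np.zeros((H, W, 5), dtype=np.float32)

# Example element: the pixel on the principal optical axis; by the convention
# above, coordinate (0, 0) maps to the center of the array.
cy, cx = H // 2, W // 2
image[cy, cx] = (2.0, 1.0, 0.0, 0.0, 1.0)   # Z = 2 m, opaque red

z, r, g, b, a = image[cy, cx]
print(f"focal distance {z} m, color ({r}, {g}, {b}), alpha {a}")
```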
In three-dimensional computer graphics, the rendering process maps a complex 3D model to a 2D image (an X-Y array of pixels), where the sharpness may be degraded for accommodative realism, and the displayed color is an integration of the color and transparency of all the 3D model points lying on the optical ray defined by the eye lens and the spatial coordinates.
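In conventional computer graphics terms, that integration along the ray is front-to-back alpha compositing; the standard formula is given here for clarity rather than as the patent's own method:

```latex
% Front-to-back compositing of the model points p_1 (nearest) ... p_n
% (farthest) on one ray, with colors c_i and opacities a_i:
C \;=\; \sum_{i=1}^{n} c_i\, a_i \prod_{j=1}^{i-1} \bigl(1 - a_j\bigr)
```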
Non-Linear Matrix L(z) for dynamic methods accounts for the non-linear focal response of the eye, time to focus, and relative brightness where the maximum brightness curve corresponds to the maximum pixel focus curve envelope.
Contemporary displays are viewed at a fixed distance from the observer. When a three-dimensional object is presented, the observer naturally analyzes the scene employing cues such as the distortion of the object, relative size, shading and shadows, occlusion, and, with classic stereoscopy, image disparity. Visual accommodation—the variable focal length of the lens of the observer's eye—is not employed.
In many real world situations, the inability of displays to enable visual accommodation limits visual cognition, situational awareness, response time and accuracy. In the case where two objects are co-axially located at different distances along a principal ray from an observer and entirely overlap, the objects will appear fused when rendered for existing monoscopic or stereoscopic displays.
The present invention presents a novel method and apparatus which enables the accurate display of high resolution visual accommodation of a scene of co-axially positioned virtual objects with a minimum of computer graphics processing.
Scene—the real world, virtual CGI or other data set containing the images, objects to be displayed.
Trixel—3D pixel or element in an n-dimensional matrix mapping of 3D spatial coordinates, generally described in cartesian systems as X, Y, Z: horizontal, vertical and depth, respectively.
Trixel Attribute—Each 3D spatial coordinate may be associated with any number of attributes in the data set including but not limited to RGB color, Alpha opacity, reflectivity, emissivity, etc.
Rendered Matrix—refers to a 2D spatial coordinate data set with a z-depth attribute for each 2D spatial coordinate and additional optional attributes, which represents a monoscopic view of the scene viewed by each eye of the observer.
Critical Attribute—Target of Interest Occluded but exhibiting initiating behavior triggers the Critical Modalities—reduction of the opacity of the proximal, alternating frames (proximal, distal), symbolic overlays, etc.
Display Queue—refers to the sequence and timing of Display Projection elements to activate based on the Display Projection Device (3D Volume, 2D Pixel Matrix, 1D Pixel Matrix, etc.)
Display Projection Device—refers to any device which creates a visual image viewable by the observer, including but not limited to screen television technology and light emitting pixels (LED, OLED, uLED, LCD, FLCD, DMD, DLP, Laser). A preferred embodiment is a 1D or 2D matrix of LEPs arranged parallel or obliquely to the principal optic axis of the System such that the LEPs at the proximal side are optically (measured along the principal optic axis) closer to the observer's eye than the LEPs at the distal side.
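To make the preceding definitions concrete, here is a minimal, hypothetical Python data model; the field names and defaults are illustrative, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Trixel:
    """One 3D element per the definitions above."""
    x: float
    y: float
    z: float                        # depth along the principal optic axis
    rgb: tuple = (1.0, 1.0, 1.0)    # color attribute
    alpha: float = 1.0              # opacity attribute

@dataclass
class RenderedPixel:
    """One 2D coordinate of the Rendered Matrix: every trixel on its ray,
    nearest first, including occluded distal trixels."""
    u: int
    v: int
    trixels: list = field(default_factory=list)   # sorted by ascending z

ray = RenderedPixel(u=0, v=0, trixels=[
    Trixel(0, 0, 0.5, (1, 0, 0), 0.6),   # proximal, semi-transparent
    Trixel(0, 0, 2.0, (0, 0, 1)),        # distal, occluded on-axis
])
print(len(ray.trixels))   # 2: the occluded distal trixel is retained
```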
The method comprises the following steps (a condensed sketch in code follows the list):
Create a data set which encodes the Scene with 3D spatial coordinates applied to Scene elements—objects, surfaces, etc.—computable as Trixels and optional Attributes. Contemporary 3D CGI programs generally encode object-based data, which reduces the size of the data. Object-based data sets generally contain object descriptions such as: Object 1—Cube, Edge Length, Orientation, Center Position and Attributes, rather than a 3D-coordinate-ordered list of the attributes of each element.
Analyze the Scene and compute the Rendered Matrix. The novel Render Computation includes all distal Trixels occluded by more proximal Trixels along the principal axis of the observer's eye (monoscopic view) but visible in the peripheral rays (marginal, chief, meridional, etc.).
Compute the display queue based on the Image Projection Element employed. Derive the monoscopic renderings with pixel depth data and high-acuity visibility/surround acuity (points visible in only one view pair).
Test for Critical Attribute behaviors.
Transfer control data to the Display Device and run.
Queue for projection (mode based on the device—cubic, linear, oblique).
Project synchronously with at least one axis having focusable arc.
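A condensed sketch of these steps in Python is given below for a 1D LEP matrix; the scene data, emitter depth planes and queue ordering are illustrative assumptions, not the specification's values:

```python
from collections import defaultdict

# Step 1: scene as trixels -- (x, y, z, (r, g, b), alpha).
scene = [
    (0, 0, 0.5, (1.0, 0.0, 0.0), 0.6),   # proximal, semi-transparent
    (0, 0, 2.0, (0.0, 0.0, 1.0), 1.0),   # distal, occluded on-axis
    (1, 0, 1.0, (0.0, 1.0, 0.0), 1.0),
]

# Step 2: rendered matrix -- keep ALL trixels per (x, y) ray, nearest first,
# so occluded distal trixels remain available for accommodation display.
rendered = defaultdict(list)
for x, y, z, rgb, a in scene:
    rendered[(x, y)].append((z, rgb, a))
for ray in rendered.values():
    ray.sort(key=lambda t: t[0])

# Step 3: display queue -- quantize z to the emitter planes of the 1D LEP
# matrix and order activation from proximal to distal.
planes = [0.5, 1.0, 2.0]   # hypothetical emitter depths (meters)
queue = sorted(
    ((min(planes, key=lambda p: abs(p - z)), xy, rgb, a)
     for xy, ray in rendered.items() for z, rgb, a in ray),
    key=lambda item: item[0],
)

# Steps 4-5: test critical attributes, then run synchronously per plane.
for plane, xy, rgb, a in queue:
    print(f"plane {plane} m: pixel {xy} color {rgb} alpha {a}")
```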
(Sensors and Inducers Affixed to the Present Invention)
(Smartphone Camera Auxiliary Input System Using Telescoping Optics)
The present invention discloses a compact, inexpensive, auxiliary, optical input technology for a camera-based system 400 which does not require RF comm (WiFi, Bluetooth, 5G, etc.) bandwidth, pairing or other registration. The optical input system 400 comprises a camera-based device, such as but not limited to a SmartPhone 402 having a camera or light sensor input 404; and an independent, removably-attachable Auxiliary Input Unit 410 comprising a black box controller 412; one or more data signal receiver elements 414; one or more output light elements 416, which may be modulators or sources (such as lasers, LEDs, LCDs, ODs, etc.) driven by the black box controller 412; and an optical combiner 430 such that the output from the output light element(s) 420 combines with the normal field of view 444 of the camera system 442.
In a preferred embodiment of operation, the black box 412 comprises an IR optical receiver (414) connected to a processing unit 416 which transforms the data received in an encoded, infrared data signal (418), broadcast from a sender (450), into one or more changes in the static or temporal brightness or chromaticity of the output light elements 416. These changes are recognized by the camera and interpreted according to the commands of the software.
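The following Python sketch illustrates this data path under stated assumptions: the IR receiver is modeled as a byte queue, the LED drive as a print stub, and the 8-bit command encoding is a hypothetical stand-in, not a specific hardware API:

```python
import queue

ir_bytes = queue.Queue()    # stand-in for the IR receiver 414 output

def drive_output_leds(rgb):
    """Stand-in for the output light elements 416; a real controller
    would set LED drive currents here."""
    print(f"LED output set to RGB {rgb}")

def controller_step():
    """One black-box controller 412 cycle: translate a received command
    byte into a brightness/chromaticity change the camera can recognize."""
    cmd = ir_bytes.get()
    # Illustrative 8-bit encoding: 3 bits red, 3 bits green, 2 bits blue.
    r = (cmd >> 5) & 0b111
    g = (cmd >> 2) & 0b111
    b = cmd & 0b11
    drive_output_leds((r / 7, g / 7, b / 3))

ir_bytes.put(0b111_000_11)   # full red + full blue command
controller_step()            # -> LED output set to RGB (1.0, 0.0, 1.0)
```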
The data signal receiver elements 414 may be fixed or moving, pointed or rotated in any direction relative to the host Smartphone 402.
(Audience Effects and Manifestations)
The Experimental Reality System taught in the present invention incorporates by reference this Applicant's earlier applications and patents, including pending application Ser. No. 13/294,011. The System may be applied to audience effects, manifestations, or any distributed control application where simultaneous communication to a multiplicity of specific locations or radial directions from a signal emitter is advantageous.
In a simple embodiment, the System is comprised of a data source, such as a stage projection/lighting control board; one or more image projectors, such as the DLP series based on the Texas Instruments DLP® x-y matrix moving-mirror shutter chip; and an audience receiver unit having a photonic receiver, such as a Smartphone camera. The communication transmission may be encoded in the modulation of one or more photonic wavelengths using known or custom electromagnetic (optical, RF, etc.) communication protocols.
Data Source may be any device or storage media including but not limited to an entertainment media or light board, media server, DVD or any player, computer, smartphone, or integrated input device such as but not limited to a handheld baton, wand, or wrist or body sensor.
Data set may be any digital or analog data describing any effects including but not limited to any visual data (2D or 3D picture, drawing, video, cartoon, art or abstract presentation); any audio data; any positional data; any motion data; or any other effects data (scent, vibration, stiffness, humidity, etc.).
The communications system may be any device which transmits or projects the data set between the data source and the receiver module, including but not limited to an electromagnetic projector at any single or multiple wavelength(s), such as a UV, visible or IR projector, an RF or ultrasound transmitter, or audio speakers.
The receiver and effects module may be any device which receives the data set and produces the related effect.
Having a receiver module comprising a smartphone or augmented reality headset.
In accordance with a first aspect of the present invention, a projection system is provided. The projection system is for providing a distributed effect within a location. The projection system comprises a data source, a projector and a plurality of receiving units distributed within the location. The data source is for generating a plurality of data sets of associated effects data and spatial coordinate data. The projector is in communication with the data source for receiving the data sets therefrom. It comprises a signal generating module for generating a plurality of electromagnetic signals, each one of the electromagnetic signals being representative of the effects data from one of the data sets. The projector also includes a projecting module for projecting each of the electromagnetic signals towards a target location within the location. Each target location corresponds to the spatial coordinate data, and the effect expressed by each one of the receiving units depends at least in part on the target location at which that receiving unit resides when the electromagnetic signal is received.
The plurality of receiving units is distributed within the location. Each receiving unit is provided with a receiver for receiving one of the electromagnetic signals when the receiving unit is positioned in the corresponding target location. Each of the receiving units is also adapted to perform a change of state in response to the effects data.

In accordance with another aspect of the invention, there is provided a projector for providing a distributed effect within a location through a plurality of receiving units. The receiving units are adapted to perform a change of state and are positioned at target locations within the location. The distributed effect is based on a plurality of data sets of associated effects data and spatial coordinate data. The projector first includes a signal generating module for generating a plurality of electromagnetic signals and encoding each one of these electromagnetic signals with the effects data from one of the data sets. Encoded electromagnetic signals are thereby obtained. The projector further includes a projecting module for projecting each of the encoded electromagnetic signals towards one of the target locations within the location corresponding to the spatial coordinate data associated to the effects data encoded within the electromagnetic signal. Preferably, the projector is provided with an encoder and the receiving units are each provided with a decoder. Preferably, the encoder is a modulator and the decoders are demodulators. Still preferably, the effects data is representative of a video stream and the receiving elements are provided with LEDs.

In accordance with yet another aspect of the present invention, a method is provided. The method comprises the steps of: a) generating a plurality of data sets of associated effects data and spatial coordinate data; b) generating a plurality of electromagnetic signals, each one of the electromagnetic signals being representative of the effects data from one of the data sets; c) projecting each of the electromagnetic signals towards a target location within the location corresponding to the spatial coordinate data associated with the effects data transmitted by the electromagnetic signal; d) distributing a plurality of receiving units within the location; and e) at each of the target locations where one of the receiving units is positioned: i) receiving the corresponding electromagnetic signal; and ii) changing a state of said receiving unit in response to the effects data.

Advantageously, the present invention allows updating individually a plurality of receiving units with a wireless technology in order to create an effect, for example a visual animation. Embodiments of the invention may advantageously provide systems for displaying or animating elements by controlling or animating them from at least one centralized source. Control of these elements as a function of their locations within a given space may also be provided, while not limiting their displacement within this space. Embodiments may also provide the capability of wirelessly updating the modular elements dispersed within the given space.
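A toy, end-to-end illustration of method steps (a) through (e) in Python follows; the coordinates, payload format and receiver states are invented for the example, not drawn from the specification:

```python
# (a) effects data with associated spatial coordinates
data_sets = [
    {"target": (0, 0), "effect": "red"},
    {"target": (0, 1), "effect": "blue"},
]

# (b) + (c) encode one signal per data set, addressed to its target location
signals = [(d["target"], f"SET:{d['effect']}") for d in data_sets]

# (d) receiving units distributed within the location, all initially off
receivers = {(0, 0): "off", (0, 1): "off", (1, 1): "off"}

# (e) each unit inside a targeted location (i) receives its signal and
# (ii) changes state in response to the effects data
for target, payload in signals:
    if target in receivers:
        receivers[target] = payload.split(":")[1]

print(receivers)   # {(0, 0): 'red', (0, 1): 'blue', (1, 1): 'off'}
```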
In accordance with a first aspect thereof, the present invention generally concerns a projecting system for creating an effect using a projector and several receiving units distributed within a given location. Electromagnetic signals are sent by the projector and may vary as a function of the specific locations targeted by the projector. In other words, receiving units located within a target location will receive specific electromagnetic signals. These signals will include effects data, instructing the receiving elements on a change of state they need to perform. The change of state can be for example a change of color. The combined effect of the receiving units will provide an effect, each unit displaying a given state according to its location.

The expression “effect” is used herein to refer to any physical phenomena which could take place within the location. In the illustrated embodiments, the effect is a visual animation, such as a change in color, video, or simply the presence or absence of light or an image. The present invention is however not limited to visual animations and could be used to provide other types of effects such as sound, shape or odor. The location could be embodied by any physical space in which the effect takes place. Examples of such locations are infinite: the architectural surface of a public space, a theatre, a hall, a museum, a field, a forest, a city street or even the ocean or the sky. The location need not be bound by physical structures and may only be limited by the range of propagation of the electromagnetic signals generated by the system, as will be explained in detail further below.

The receiving units can be dispersed in any appropriate manner within the location. At any given time, the receiving units may define a 2D or a 3D effect. The effect within the location may be fixed for any given period of time, or dynamic, changing in real-time or being perceived to do so. The distribution of receiving elements within the location may also be either fixed or dynamic, as will be apparent from the examples given further below.
Components of projection systems according to embodiments of the invention will be described in the following section.
Data Source
“State”
The term “state” refers to a mode or a condition which can be displayed or expressed by a receiving unit. For example, a state can take the form of a visual effect, such as a color, a level of intensity and/or opacity. The state can also relate to a sound, an odor or a shape. It can be a sequence of state changes in time. For example, the effects data can be representative of a video stream, the distributed effect displayed by the receiving units 200 being a video, each receiving unit 200 thus becoming a pixel within a giant screen formed by the plurality of units 210. In order for the comm projector 100 to address specific receiving units 200 within the plurality of units, the effects data is associated with spatial coordinate data. The term spatial coordinate refers to a coordinate which may take various forms such as, for example, a position in an array of data, a location within a table, the position of a switch in a matrix addresser, a physical location, etc.

Projector

Now with reference to FIGS. X to X, different embodiments of the comm projector 100 are shown. A comm projector 100 can be any device able to project directional electromagnetic signals. It can be fixed or mobile, and a comm projection system according to the present invention can include one or several projectors. The comm projector 100 is in communication with the data source 18 and receives the data sets therefrom.
In a preferred embodiment, the electromagnetic signals have a wavelength within the infrared spectrum. Other wavelengths may be considered without departing from the scope of the present invention. The signal generating module preferably includes one or more light emitters. Each light emitter generates corresponding electromagnetic signals. The wavelength of the electromagnetic signals may be in the infrared, the visible or the ultraviolet spectrum, and the signal generating module can include light emitters generating electromagnetic signals at different wavelengths. The electromagnetic signals may be monochromatic or quasi-monochromatic or have a more complex spectrum. For example, the light emitters may be embodied by lamps, lasers, LEDs or any other device apt to generate light having the desired wavelength or spectrum.

Referring more specifically to particular embodiments of the invention, the signal generating module 24 may include an encoder for encoding each electromagnetic signal in order to obtain an encoded electromagnetic signal. While not shown, this embodiment of the invention also preferably includes an encoder. The encoder may for example be embodied by a modulator which applies a modulation on each of the electromagnetic signals 26, the modulation corresponding to the effects data transmitted by the data source 18 and thereby being encoded within the electromagnetic signals.
For example, a comm projector 100 can modulate the signal of an infrared emitter at three different frequencies in order to transmit effects data on three independent channels. Receiving units 200 equipped with amplifiers and/or demodulators tuned to these three frequencies may then change state according to the signal they receive on the three independent channels. For example, using red, green and blue LEDs coupled to each of these three channels and their associated state colors allows the units 200 to display full-color video in real-time.
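The three-frequency scheme can be sketched in Python as three amplitude-keyed subcarriers recovered from their DFT bins; the sample rate and subcarrier frequencies are illustrative choices, not values from the specification:

```python
import numpy as np

FS, N = 10_000, 1_000                  # sample rate (Hz), samples per frame
F_R, F_G, F_B = 1_000, 2_000, 3_000    # subcarrier frequencies (Hz)
t = np.arange(N) / FS

def transmit(r, g, b):
    """Compose one frame: each channel's level modulates its subcarrier."""
    return (r * np.sin(2 * np.pi * F_R * t)
            + g * np.sin(2 * np.pi * F_G * t)
            + b * np.sin(2 * np.pi * F_B * t))

def demodulate(sig):
    """Recover per-channel amplitudes from the matching DFT bins."""
    spectrum = np.abs(np.fft.rfft(sig)) * 2 / N
    bins = np.fft.rfftfreq(N, d=1 / FS)
    return tuple(spectrum[np.argmin(np.abs(bins - f))] for f in (F_R, F_G, F_B))

print(demodulate(transmit(0.9, 0.2, 0.5)))   # approximately (0.9, 0.2, 0.5)
```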
In summary, the present invention employs multiple communications protocols, including but not limited to WiFi, Bluetooth, other RF, audio, optical and other EM, and projection to direct spatially distinct projection of optical data and positional signals, to coordinate a complex visual and audio presentation to any audience, gathering, assembly or manifestation of receivers which may simultaneously receive, or have previously received, data, instructions, settings and commands to present video, audio and other effects.
(Improved Receiver module)
The receiver may have a display module and other effects including LEDs, speakers, and moving, mechanical elements.
The Augmented Reality Experience may overlay the Environmental Experience, using Camera Registration of real objects, signals or indicators to align the See-Through with the Augmented Experience.
(Application of Collapsible Design to Transformable Wristwatch-EyeGlass)
List of Enumerated Elements
General Specification Concluding Statements
Further, though advantages of the present invention are indicated, it should be appreciated that not every embodiment of the invention will include every described advantage; some embodiments may not implement any features described as advantageous herein. Accordingly, the foregoing description and drawings are by way of example only. Various aspects of the present invention may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing; the invention is therefore not limited in its application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings.
For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments. The invention may be embodied as a method, of which an example has been described. The acts performed as part of the method may be ordered in any suitable way.
Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include different acts than those which are described, and/or which may involve performing some acts simultaneously, even though the acts are shown as being performed sequentially in the embodiments specifically described above. Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term). Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.
This continuation-in-part and divisional application claims the benefit of the earlier filing date of, and/or incorporates by reference in their entirety, my related and earlier-filed applications and disclosures including Ser. No. 13/294,011 (pending); Ser. No. 14/189,232 (pending); Ser. No. 16/190,044 (pending) and Ser. No. 16/819,091 (pending); provisional App. No. 63/230,081 filed Aug. 8, 2021 and provisional App. No. 63/222,599 filed Jul. 16, 2021. The aforementioned benefits of filings are recorded in the submitted ADS.
Relationship | Number | Date | Country
Parent | 16/819,091 | Mar. 2020 | US
Child | 17/454,325 | | US
Parent | 16/190,044 | Nov. 2018 | US
Child | 16/819,091 | | US