This invention relates to the field of personal, head-worn 3D (3 Dimensional) and 2D (2 Dimensional) viewing devices.
Because human eyes are spaced a few inches apart, each eye experiences a perspective slightly shifted from the other's, a principle widely known as 'parallax'. The brain combines the two 2-dimensional images received by the eyes into a single virtual 3D 'image', and this is how humans perceive depth in the natural world.
Virtual parallax, on the other hand, mimics optical parallax by presenting the user with synthesized image pairs, one image for the right eye and one for the left eye. Such 3D image pairs can be created in various ways, but the result is always the same: two images 'seen' from slightly different perspectives.
For instance, two optical cameras placed a few inches apart, matching the spacing of human eyes, can 'act' as the eyes of the user; any image pair captured with such a camera pair using identical camera settings will exhibit virtual parallax. When those images are then placed in front of the user's eyes, one before each eye, the user's brain combines the two separate images, creating within the user's mind a virtual 3D 'scene' represented by the image pair. This is the foundation of the 3D imaging paradigm.
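Although the specification gives no formulas, the geometric relationship underlying such camera pairs is standard: the horizontal on-image shift ('disparity') of an object scales with the spacing between the two cameras and inversely with the object's distance. The short sketch below is illustrative only; the focal length and distances used are assumed values, not figures from this text.

```python
# A minimal sketch (not part of the specification) of the standard stereo
# relationship between camera baseline, focal length, object depth, and the
# resulting shift ("disparity") between the left and right images.

def disparity_px(baseline_in: float, focal_px: float, depth_in: float) -> float:
    """Horizontal shift, in pixels, of an object between the two images.

    baseline_in : spacing between the two cameras (or eyes), in inches
    focal_px    : camera focal length expressed in pixels (assumed value)
    depth_in    : distance from the cameras to the object, in inches
    """
    return baseline_in * focal_px / depth_in

# Example: a ~2.3 inch baseline (roughly human inter-pupillary spacing),
# an assumed 1500-pixel focal length, and an object 60 inches away.
print(round(disparity_px(2.3, 1500.0, 60.0), 1))  # ~57.5 pixels of parallax shift
```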
In the case of 3D video (including gameplay) the general methodology is the same, but the image pairs are shown in rapid sequence, creating in the mind of the user a synthetic 3D 'scene' that manifests as fluid motion.
As a side note, 2D viewing of images and/or video is accomplished easily with 3D viewers, simply by duplicating a single 2D image or video so that each of the user's eyes sees the same image and/or video.
The most common 3D viewing method that manufacturers have historically used is known as the 'side-by-side' method. This method places two display screens side by side (or a single wide screen split down the middle), putting a virtually-parallaxed image pair directly in front of the user's eyes, one image in front of each eye. The side-by-side method is a flawed paradigm because it imposes three severe limitations: first, image resolution, due to the 'pixel density' problem; second, the size of usable screens, owing to the small distance between human eyes; and third, widescreen viewing, which follows directly from the previous two problems. Devices using the side-by-side method do not offer native widescreen viewing because doing so would severely limit the total size of their display screens: it is the width of side-by-side screens that is constrained, and manufacturers of such devices prefer to exploit the fact that screen height is not limited within that paradigm.
With an average distance of around 2.3 inches between most humans' eyes, only screens no wider than around 2.3 inches can function with the side-by-side method; any wider and the screens would collide with and block each other. This also means that resolution within the side-by-side paradigm is capped by the small screen sizes: true UHD (Ultra High Definition, or 2160p) quality is well out of reach for such side-by-side devices, at least for a number of years, until display screen technology catches up to the needs of such VR device manufacturers by offering pixel densities far in excess of the current maximum, which, as of this writing, is around 800 PPI (Pixels Per Inch). Because of these problems, devices based on the side-by-side method are poor choices for viewing modern TV shows, cinematic movies, high-end video games, high-resolution computer desktops, eSports streams, or any digital content that requires both high resolution and native widescreen formatting.
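To make the arithmetic behind this limitation concrete, the short sketch below (illustrative only, using the figures quoted above) computes the pixel density a 2.3-inch-wide side-by-side screen would need in order to present full UHD width to one eye, and compares it with the roughly 800 PPI ceiling of current panels.

```python
# Illustrative arithmetic behind the pixel-density limitation described above.
# Figures taken from this text: UHD is 3840 x 2160 pixels, side-by-side screens
# are limited to roughly 2.3 inches of width, and current panels top out at
# roughly 800 PPI.

UHD_WIDTH_PX = 3840
MAX_SCREEN_WIDTH_IN = 2.3   # approximate human inter-pupillary spacing
CURRENT_MAX_PPI = 800

required_ppi = UHD_WIDTH_PX / MAX_SCREEN_WIDTH_IN
print(f"PPI needed for UHD on a 2.3-inch-wide screen: {required_ppi:.0f}")   # ~1670
print(f"Shortfall versus ~{CURRENT_MAX_PPI} PPI panels: "
      f"{required_ppi / CURRENT_MAX_PPI:.1f}x")                              # ~2.1x
```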
The current invention is a 3D/2D viewing device that is worn on the face and head of the user and, because of its very high resolution and native 16:9 widescreen formatting, is ideally suited for viewing high-definition TV shows and cinematic movies, as well as any other digital content that would benefit from these unprecedented device features. The current invention solves all three of the aforementioned problems with the side-by-side display method, and does so at once through the use of optics. The proposed system mounts on the user's face and head in such a way that the user's eyes gaze comfortably and directly into the device. A system of adjustable and flexible head-straps secures the apparatus to the user's head in a comfortable and convenient fashion, and multiple versions and configurations of the complete apparatus are outlined within this text.
Said headset enables both 3D and 2D viewing of digital content, which includes still images, video, gameplay, GUI (Graphical User Interface) components, internet content, etc., via the use of two separate internal display screens.
The three disclosed problems associated with the side-by-side methodology are here solved by placing an optical system in front of each eye, which allows the display screens to be physically displaced, hence spatially separated, thereby bypassing the problem of the limited spacing between the user's two eyes.
This new spatial separation technique allows larger display screen sizes to be used, and also wider display screens—i.e. display screens that are widescreen formatted. The use of larger display screens puts to an immediate end the aforementioned problem of resolution, since 5.5-inch diagonally measured display screens are now being manufactured in the industry with UHD resolution. The new spatial separation technique of the current invention also puts an end to the aforementioned widescreen limitations of the side-by-side paradigm, allowing for the first time true native widescreen viewing in a head-worn 3D/2D device. It is this combination of high resolution and native widescreen, via this optical separation technique, which makes the detailed invention so innovative and unprecedented.
The two optical systems within the described apparatus are bisymmetrically arranged, as mirror-opposites of each other. Each optical system is composed of a converging lens and a front-surface mirror. It is the front-surface mirrors in the device that allow the flexible placement of the display screens within the apparatus; in this case the front-surface mirrors deflect the user's gaze both upward and outward, thus allowing the display screens to be placed well apart from one another. This wide spacing is what allows the display screens to be much larger than those used in the side-by-side methodology. Thus the first problem of the side-by-side method, the limitation on display screen size, is solved here.
The allowance of larger display screens itself solves the second problem of the side-by-side method, that of pixel density. Because modern smartphones are already manufactured with UHD display screens of 5.5 inches diagonal, corresponding to a pixel density of roughly 806 PPI, the device specified in the current invention can use such display screens and thus offer UHD resolution per eye, an industry first. The third problem of the side-by-side paradigm, the 'widescreen problem', is solved at the same time, since such modern smartphone display screens use the global standard widescreen format of 16:9 ratio, landscape oriented.
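For reference, the pixel-density figure quoted above can be recomputed from the panel geometry; the short sketch below is illustrative only. At exactly 5.5 inches the result is roughly 801 PPI; the widely quoted figure of about 806 PPI appears to correspond to a nominal '5.5-inch' UHD panel whose true diagonal is slightly smaller.

```python
import math

# Pixel density of a UHD (3840 x 2160) panel as a function of its diagonal.
def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return math.hypot(width_px, height_px) / diagonal_in

print(f"{ppi(3840, 2160, 5.5):.0f}")   # ~801 PPI at exactly 5.5 inches
print(f"{ppi(3840, 2160, 5.46):.0f}")  # ~807 PPI; marketed "5.5-inch" UHD
                                       # panels are commonly quoted at ~806 PPI
```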
Because the front-surface mirrors are arranged at a sharp compound angle, each front-surface mirror must be cut to a special shape which, from the viewing angle of the user's eyes, appears to the user as a rectangle of 16:9 landscape-oriented ratio, conforming with the 16:9 widescreen format of the display screens.
It does not matter that front-surface mirrors arranged in such a fashion substantially alter the rotational orientation of whatever they reflect. In the layout of this device, the display screens are simply rotated to compensate, so that the user sees a correctly aligned image for each eye, allowing the user's brain to combine the two aligned images into a single virtual 3D 'scene'.
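In the present design this compensation is achieved physically, by rotating the display screens themselves. Purely as an illustration of the same principle in software, a frame could instead be counter-rotated before display; the sketch below is hypothetical, uses the Pillow imaging library, and the roll angle shown is a placeholder rather than a value taken from this specification.

```python
from PIL import Image

def counter_rotate(frame: Image.Image, mirror_roll_deg: float) -> Image.Image:
    """Pre-rotate a frame opposite to the roll its mirror would introduce,
    so the user sees a level, landscape-oriented image after reflection."""
    # A positive mirror roll is undone by rotating the frame the other way.
    return frame.rotate(-mirror_roll_deg, expand=False)

# Hypothetical usage with a placeholder roll angle:
# right_frame = counter_rotate(right_frame, mirror_roll_deg=5.0)
```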
The converging lenses in the system re-focus the user's eyes in such a way that the display screens can be seen clearly at such a close distance.
It is this very special combination of elements that together allows such a vast improvement in digital content viewing for a head-mounted device: the front-surface mirrors, which facilitate the physical separation of the two display screens; the shape and extreme compound angle of those front-surface mirrors, which create the intended optical illusion; the rotational displacement that their extreme angles introduce; the rotational compensation realized in the final placement of the display screens; and the widescreen 'shaping' of all optical elements, converging lenses and front-surface mirrors alike, as well as of the display screens.
The accompanying drawings are for clarification purposes only, and should therefore not limit the interpretation of the textual specifications but only augment the specifications.
The following text describes in detail the functionality of the invention, along with descriptions of the various assemblies and parts, with reference to the associated drawings which are included for clarification of said text.
This text assumes an XYZ Cartesian coordinate system based in the normal three dimensions of space, within which the Y axis is along the forward line of sight of the user who is wearing the apparatus in question, in other words front and back of the user; the X axis runs left and right of the user; and the Z axis runs up and down from the user's perspective.
This invention is a VR (Virtual Reality) system which features a head-mounted apparatus that the user dons on his/her face using the included strap-based system, so that the user's eyes are optically interfaced with the apparatus. The strap-based system is user-adjustable, to accommodate different head sizes, and made of flexible material to provide constant tension for the sake of keeping the head-mounted apparatus—i.e. the headset—comfortably mounted to the user's face and head.
Right eye assembly 1 and left eye assembly 2 are mounted together to form a singular stereoscopic system, allowing the user to be exposed to 3D or 2D digital content supplied by the two 16:9 display screens, one display screen contained within screen assembly 3 and the other display screen contained within screen assembly 4, in such a way that the user believes that he/she is seeing a singular virtual 16:9, landscape-oriented image.
Before going into great detail about the apparatus, let it be known first that there are multiple versions of said complete apparatus disclosed here, some with different screen configurations. That being said, the functional principles are identical for all said versions of the device, regardless of screen configuration, because in every configuration the screens, regardless of type or size, occupy the same position. For most of the following text it will be assumed that the version being presented is the one containing one fixed display screen on the right side of the apparatus, screen assembly 3, and one moveable screen on the left side of the apparatus, screen assembly 4, the latter provided by the user in the form of his or her smartphone, which the user installs temporarily into the device.
The device as a whole presents the user's right eye with the digital content that is emanating via right eye assembly 1 from the display screen in screen assembly 3, and presents the user's left eye with the digital content that is emanating via left eye assembly 2 from the display screen in screen assembly 4.
It should be borne in mind throughout the reading of this text that right side assembly 1 is identical in every way to left side assembly 2, with the exception that the two are bisymmetrically arranged, so that any discussion of either side applies equally to the opposite side, in the bisymmetrical sense.
Right side assembly 1 rests gently against the user's face via the soft padding provided by foam interface 1H. This arrangement places the user's eye within very short range of converging lens 1C, a converging lens of +7.5 diopter strength, manufactured at a 16:9 ratio for widescreen viewing, with a width of 2 inches and a height of 1.2 inches, sitting approximately 0.5 inches away from the cornea of the user's right eye along the Y axis. This converging lens acts as a re-focusing element, allowing the user's eye to comfortably focus on display screen 3A2, which is at a compound distance of 4.42 inches. This compound distance is the sum of the distance between converging lens 1C and front-surface mirror 1D and the distance between front-surface mirror 1D and display screen 3A2.
Be it known that the distance between converging lens 1C and front-surface mirror 1D is not critical, nor is the distance between front-surface mirror 1D and display screen 3A2, as a change in distance can be compensated for by a change in the diopter power of converging lens 1C. Far more critical, however, are the angular alignments among these three components, since the functionality of the apparatus as a whole fully depends on the eyes of the user seeing two 16:9 landscape-oriented images, one image for each eye, which then converge, in the user's mind, into one single virtual image. This can only be accomplished when the angular alignment of all components is accurate.
Be it also known that decreasing the distance between said three components enlarges the virtual image, and increasing said distance diminishes it, whichever is the goal of the manufacturer.
A fine balance has been struck with the current design, as stated herein and as shown in the illustrations, with a distance of 1.13 inches from the center of the convex surface of converging lens 1C to the 'center point' of front-surface mirror 1D, said 'center point' being the point through which the user's gaze would pass if the user were gazing straight forward along the Y axis.
In the case that the user is gazing straight forward along the Y axis, that gaze strikes a point on front-surface mirror 1D, henceforth called the 'center point' of said front-surface mirror within this text, is deflected off that same point, and lands on the very center of display screen 3A2, at a distance of 3.29 inches from the 'center point' on front-surface mirror 1D to said center point on display screen 3A2. This yields a compound distance of 4.42 inches between converging lens 1C and display screen 3A2, as measured using the aforementioned 'center points' of the three components: converging lens 1C, front-surface mirror 1D and display screen 3A2.
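The optical effect of these figures can be illustrated with a simple thin-lens approximation, which is not stated in the text but follows from the +7.5 diopter lens strength and the 4.42-inch lens-to-screen path given above; the resulting image distance and magnification below are therefore illustrative estimates rather than claims of the specification.

```python
# A minimal thin-lens sketch (an approximation, not from the specification)
# using the values given above: a +7.5 diopter converging lens and a
# 4.42-inch compound optical path from converging lens 1C to display
# screen 3A2.

MM_PER_IN = 25.4

diopters = 7.5
object_dist_mm = 4.42 * MM_PER_IN     # folded lens-to-screen path, ~112.3 mm
focal_mm = 1000.0 / diopters          # +7.5 D corresponds to ~133.3 mm focal length

# Thin-lens equation, 1/f = 1/d_o + 1/d_i (real distances taken as positive).
# Because the screen sits inside the focal length, d_i comes out negative,
# i.e. the user sees an enlarged virtual image.
image_dist_mm = 1.0 / (1.0 / focal_mm - 1.0 / object_dist_mm)

print(f"virtual image distance: {abs(image_dist_mm) / MM_PER_IN:.1f} in")        # ~28 in
print(f"transverse magnification: {abs(image_dist_mm) / object_dist_mm:.1f}x")   # ~6.3x
```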
Said distances also depend on the precise angular positioning of said three components, with respect to each other and with respect to the user.
As shown in the accompanying drawings, the exact compound angle of front-surface mirror 1D is 28.7 degrees of clockwise rotation, measured as if said front-surface mirror started out sitting exactly perpendicular to the user's forward gaze along the Y axis (as if the user were looking at himself or herself in said front-surface mirror), with said gaze striking center point 26 on said front-surface mirror.
The last step in finding the optimum fixed position of front-surface mirror 1D—as seen from a head-on view of the front-surface mirror, as if looking directly into it—is to rotate said front-surface mirror 5.6 degrees clockwise, using center point 26 as the pivot point, and without deviating from the plane made by said front-surface mirror's surface.
Now that the deflected angle of the user's gaze via front-surface mirror 1D is properly ascertained, let it be known that display screen 3A2 should be placed exactly perpendicular to the fully deflected gaze of the user, said gaze being understood as a virtual line extending forward along the Y axis from the user's cornea and deflected at the 'center point' of front-surface mirror 1D.
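The reflection geometry described above can be checked numerically. The sketch below is illustrative only: the decomposition of the 28.7-degree compound tilt into rotations about particular axes is an assumption made for demonstration (the actual axes are defined with reference to the drawings), the angle values are placeholders, and the standard plane-mirror reflection formula r = d - 2(d.n)n is used to find the deflected gaze along which display screen 3A2 is placed, 3.29 inches from the mirror's center point.

```python
import numpy as np

# Illustrative geometry only. The compound tilt of front-surface mirror 1D is
# represented here by assumed rotations about the X and Z axes with
# placeholder angles, not the specification's figures. Note that the further
# 5.6-degree rotation within the mirror's own plane re-orients the mirror's
# outline but leaves the mirror plane, and hence the reflection, unchanged.

def rot_x(deg: float) -> np.ndarray:
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(deg: float) -> np.ndarray:
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

gaze = np.array([0.0, 1.0, 0.0])     # forward gaze, along +Y
normal = np.array([0.0, -1.0, 0.0])  # mirror initially square-on to the user

# Placeholder tilts chosen so the deflection goes upward and outward.
normal = rot_z(20.0) @ rot_x(-20.0) @ normal

# Plane-mirror reflection of a ray direction: r = d - 2 (d . n) n.
deflected = gaze - 2.0 * np.dot(gaze, normal) * normal

# Display screen 3A2 sits 3.29 inches along the deflected gaze from the
# mirror's center point (taken here as the origin), perpendicular to it.
screen_center = 3.29 * deflected
print(deflected, screen_center)
```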
The two front-surface mirrors in this device, front-surface mirror 1D and front-surface mirror 2D, are of a distinct shape and size, as shown in the accompanying drawings, such that front-surface mirror 1D, viewed from the position of the user's right eye, appears as a 'virtual' mirror of 16:9, landscape-oriented proportions.
The exact same thing can be said of front-surface mirror 2D, except that it is bisymmetrically designed and arranged, the effect being a 'virtual' mirror with the same 16:9, landscape-oriented appearance from the user's perspective.
Phone mount assembly 4 comprises flange 4E, which fits snugly around the lip of enclosure 2B; on top of flange 4E sits screen rest 4D, upon which the display screen of the phone rests and by whose shape it is masked. Enclosure 4C surrounds the phone and also provides holes for accessing the phone's buttons and electronic jacks. 4A1 is the smartphone itself, which is supplied by the user; since most people already own a smartphone, this option makes this version of the headset far more economical for the user. Lid 4B covers the smartphone mount so the phone does not fall out of the mount. Phone mount assembly 4 can also be used on the right-eye side of the device without any modification, if two smartphones are desired.
An alternative arrangement would be to manufacture a single version of assembly 3 with a fixed display screen that is 6 inches measured diagonally (3A1), along with software that allows the image on the display screen to be adjusted by the user in size, position, brightness/contrast, etc., so that the user can insert a smartphone of any size having a display screen 6 inches diagonal or smaller and 'tailor' the properties of the image on the right display screen to match those of the image on the left.
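A minimal sketch of such adjustment software is given below. It is illustrative only: the library, function and parameter names are not taken from the specification, and it simply shows one way a frame could be scaled, repositioned and tone-matched before being written to the fixed right-hand panel.

```python
from PIL import Image, ImageEnhance

# Hypothetical per-eye "tailoring" sketch: scale, reposition, and adjust the
# brightness/contrast of a frame so the fixed right-hand display can be
# matched against the user-supplied phone on the left.

def tailor_frame(frame: Image.Image, panel_size: tuple[int, int],
                 scale: float = 1.0, offset: tuple[int, int] = (0, 0),
                 brightness: float = 1.0, contrast: float = 1.0) -> Image.Image:
    """Return a panel-sized frame with user-chosen size, position and tone."""
    w, h = frame.size
    frame = frame.resize((int(w * scale), int(h * scale)))
    frame = ImageEnhance.Brightness(frame).enhance(brightness)
    frame = ImageEnhance.Contrast(frame).enhance(contrast)

    canvas = Image.new("RGB", panel_size, "black")
    # Centre the scaled frame on the panel, then apply the user's offset.
    x = (panel_size[0] - frame.width) // 2 + offset[0]
    y = (panel_size[1] - frame.height) // 2 + offset[1]
    canvas.paste(frame, (x, y))
    return canvas
```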
Let it here be known that the device described in this document can be manufactured with multiple display screen configurations:
The first headset version would feature two display screens that are fixed and unmovable, as shown in the accompanying drawings.
The second headset version would feature a fixed, unmovable display screen on the right side, along with a CPU, memory, graphics processing hardware, networking hardware, electronic plugs and a rechargeable battery for that side. The left eye side would feature the phone mount assembly shown in the accompanying drawings.
The third headset version would feature two phone-mount assemblies, similar to assembly 4A1 as shown in the accompanying drawings.
The different display screen configurations will, because of the different types of display screens used in each version, have different locations for the hardware associated with computer processing, batteries, etc.
For instance, in the version featuring two user-supplied smartphones, all processing hardware, memory, networking components, batteries, etc. would be built into those smartphones, so no such additional hardware would be needed. The phones would simply be connected together via a Micro-USB cable, as allowed via the access holes provided in enclosure 4C.
The versions of the apparatus that feature one or more fixed display screens could also feature the same hardware arrangement, with processor, memory, networking components and battery attached directly to the associated display screen, just as a smartphone is manufactured.
A specially designed WIFI antenna module could be placed in the front of one of the enclosures, as shown in the accompanying drawings.
One alternative to this WIFI setup is shown in the accompanying drawings.
Any head-worn device should ideally be designed in such a way as to reduce the physical strain on the neck and face of the user as much as possible. One way of accomplishing this is to move the center of gravity as far ‘back’ as possible—from the user's perspective—by moving said extra hardware away from the display screens and more toward the face/head of the user.
And since it is also beneficial to have the weight of the apparatus balanced left and right along the X axis, with the center of gravity close to the middle of the user's face, it would be best to place some hardware on the left side of the device and some on the right side, both locations as close to the user's face as possible.
Flexible nylon straps 6D and 7D interconnect the shoulder-array modules and secure the shoulder-array to the user in a comfortable and convenient manner, with cabling 6B and 7B providing electronic signal transfer from the modules to the headset.
This shoulder-array configuration allows for the greatest comfort for the user's head, face and neck, offloading much of the weight associated with conventional VR hardware.
Said shoulder array could feature the alternative WIFI antenna arrangement shown in the accompanying drawings.
This same arrangement could also be featured in the opposite shoulder-pack, to increase WIFI coverage, as indicated by WIFI antenna 10C, lead plate 10D, and WIFI radiation zone 10F.
Custom software is also needed for any version of this apparatus, because the front-surface mirrors in the optics mirror-reverse, or horizontally 'flip', the imagery shown on each display screen. For this reason, the system software would require a function that compensates by horizontally 'flipping' all imagery before it is placed on each display screen, so that each front-surface mirror returns the imagery to its original orientation; a mirror image of a mirror image is no longer reversed.
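A minimal sketch of such a compensation function is given below. It is illustrative only, using the Pillow imaging library; the function name is not taken from the specification.

```python
from PIL import Image, ImageOps

def pre_flip(frame: Image.Image) -> Image.Image:
    """Horizontally mirror a frame before it is written to a display screen,
    so that reflection off the front-surface mirror restores the original
    left-to-right orientation."""
    return ImageOps.mirror(frame)
```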
Another use of said video cameras might be to send local user environment info to remote sites or remote users, for various purposes.
The shapes of the enclosures of assemblies 1 and 2 are critical only inasmuch as they must not obstruct the user's view of the imagery produced by the device as a whole, and they should minimize the size of the device as much as is possible and/or practical, so that the device is not overly bulky and awkward.
To accomplish these two objectives, enclosures 1B and 2B are designed in such a way that they allow the imagery from display screens 3A and 4A to pass unobstructed to front-surface mirrors 1D and 2D. Then enclosures 1A and 2A are designed in such a way that they pass said imagery from front-surface mirrors 1D and 2D to converging lenses 1C and 2C, and on to the user's eyes.
Since the cornea of the human eye is less than 0.25 inches in diameter, the light rays traveling from display screen 3A to the user's right cornea can be seen as forming a virtual truncated pyramid, the base of which is display screen 3A, a 16:9 rectangle, and the top of which is a very small virtual 16:9 rectangle that can be imagined to sit within the cornea of the user's right eye.
The exact same thing can be said of the left-eye side of the device, although in a bisymmetrical fashion.
The slope of the sides of said virtual truncated pyramid can be seen in the design of enclosures 1B and 2B, in that said enclosures slope inward from display screens 3A and 4A to front-surface mirrors 1D and 2D. The shapes of front-surface mirrors 1D and 2D are what they are because each represents a tilted cross-section of its virtual truncated pyramid.
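The proportions of this virtual truncated pyramid can be illustrated numerically. The sketch below is illustrative only: it derives the 5.5-inch screen's width and height from its diagonal, assumes a small 16:9 rectangle roughly 0.2 inches wide at the cornea (an assumed value), ignores the refraction of the converging lens just as the description above does, and interpolates the frustum's cross-section at an intermediate point along the path.

```python
import math

# Illustrative numbers for the virtual truncated pyramid described above.
# The 0.2-inch cornea-side rectangle is an assumption, and lens refraction
# is ignored, as in the description of the frustum itself.

def rect_from_diagonal(diagonal_in: float, aspect=(16, 9)):
    unit = math.hypot(*aspect)
    return diagonal_in * aspect[0] / unit, diagonal_in * aspect[1] / unit

screen_w, screen_h = rect_from_diagonal(5.5)   # ~4.79 x 2.70 inches
cornea_w, cornea_h = 0.2, 0.2 * 9 / 16         # assumed small 16:9 top

def cross_section(frac_from_cornea: float):
    """Frustum cross-section (w, h) at a fractional position along the path,
    0.0 at the cornea end and 1.0 at the display screen."""
    w = cornea_w + (screen_w - cornea_w) * frac_from_cornea
    h = cornea_h + (screen_h - cornea_h) * frac_from_cornea
    return w, h

# Example: roughly a third of the way from the cornea toward the screen,
# near where front-surface mirror 1D intersects the frustum.
w, h = cross_section(1 / 3)
print(round(w, 2), round(h, 2))  # ~1.73 x 0.97 inches, before accounting for tilt
```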
Enclosures 1A and 2A do not exactly conform to the shape of said virtual truncated pyramids, since said enclosures have the additional functions of supporting and enclosing the optical components and other hardware, as well as providing the interface structure for the user's face, interfaces 1H and 2H, and supporting the strapping apparatus via slots 11 and 21.
The device featured in this documentation can be produced for different display screen sizes, in which case the sizes of front-surface mirrors 1D and 2D and converging lenses 1C and 2C would need to be adjusted to compensate, as well as the size of the enclosures. With the component sizes that are featured in this text, the size of the display screens should be 5.5 inches, diagonally measured, 16:9 ratio, which is the most common size of high-end smartphone display screens, as of the time of this writing.
The featured apparatus can also be manufactured with front-surface mirrors 1D and 2D at slightly different compound angles than those presented in this text, which would require slightly different positioning of display screens 3A and 4A. In that case, said display screens would also need to be rotated to accommodate the change of angle, so that the user sees properly positioned, landscape-oriented images with no horizontal deviation; any such deviation would collapse and destroy the optical illusion upon which proper usage of the device depends.
Although the main use of the described apparatus would be as a device for viewing 3D content, 2D content can be viewed on said device with no problem at all. This feature would be accomplished via the device software, which would include a 2D function that sends the exact same image/video content to both display screens, instead of sending separate parallax imagery to each display screen as is done for 3D viewing.
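A minimal sketch of such a routing function is given below; it is illustrative only, and the function name is not taken from the specification.

```python
def frames_for_screens(left_frame, right_frame=None):
    """Return the (right, left) frames to send to the two display screens.

    For 3D viewing, pass a separately rendered parallax frame for each eye.
    For 2D viewing, pass a single frame; it is duplicated to both screens.
    """
    if right_frame is None:           # 2D mode: both eyes see the same image
        return left_frame, left_frame
    return right_frame, left_frame    # 3D mode: parallax pair
```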
All versions of said apparatus would include some type of wireless remote control to facilitate user interaction with the device hardware and software. A good choice of remote control would combine the features of an air-mouse, which functions as the pointing device; a standard gaming keypad with 'left'/'right'/'up'/'down' keys and an 'enter' key in the middle; and, on the back of the remote, a mini 'qwerty' keypad for text and numerical input, the entire remote using Bluetooth for wireless connection with the headset hardware and software. Since Bluetooth is ubiquitous in modern smartphones, any Bluetooth remote would offer at least minimal functionality, and more exotic remotes for VR interaction would also fit well with the functionality offered by the viewer outlined in this text.
In the headset versions without integrated headphones, the user would be able to use their own supplied headphones or ear-buds, simply by plugging them into the audio output jack of the headset or shoulder-pack electronics, depending upon which headset alternative the user has opted for.
In the headset version with two fixed display screens, the headset should include a built-in electret condenser microphone located at the bottom of the headset, near the mouth of the user, to capture the sound of the user's voice for various use cases: telephone calls, voice-activated software features, interacting with remote users, personal notes, speech-to-text auto-dictation, etc.
This Non-provisional Patent Application claims the benefit under 35 U.S.C. § 119(e) of Provisional Patent Application No. 62/489,428, filed in the U.S. on Apr. 24, 2017 by the same sole inventor, Stuart Brooke Richardson.