1. Field
The following invention disclosure is generally concerned with electronic vision systems and specifically concerned with highly dynamic and adaptive augmented reality vision systems.
2. Related Systems
Vision systems today include video cameras having LED displays, electronic documents, infrared viewers, among others. Various types of these electronic vision systems have evolved to include computer-based enhancements. Indeed, it is now becoming possible to use a computer to reliably augment optically captured images with computer-generated graphics to form compound images. Systems known as Augmented Reality capture images of scenes being addressed with traditional lenses and sensors to form an image to which computer-generated graphics may be added.
In some versions, simple real-time image processing yields a device with means for superimposing computer-generated graphics onto optically captured images. For example, edge detection processing may be used to determine the precise parts of an image scene which might be manipulated by the addition of computer-generated graphics aligned therewith.
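By way of a non-limiting illustration only, the following Python sketch suggests one conventional way such edge detection might be carried out; the Sobel operator, the threshold value, and the function names are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np

def edge_map(gray, threshold=0.25):
    """Return a boolean map of strong edges in a grayscale image.

    `gray` is assumed to be a 2-D float array scaled to [0, 1]; the
    threshold is an illustrative assumption, not a prescribed value.
    """
    # Sobel kernels for horizontal and vertical intensity gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    # Pad so the output map matches the input size.
    padded = np.pad(gray, 1, mode="edge")
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            window = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * kx)
            gy[i, j] = np.sum(window * ky)

    # Normalize the gradient magnitude and keep only strong edges,
    # which mark scene parts against which graphics may be aligned.
    magnitude = np.hypot(gx, gy)
    magnitude /= max(float(magnitude.max()), 1e-12)
    return magnitude > threshold
```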
In one simple example of basic augmented reality now commonly observed, enhancements relating to improvements in sports broadcasts are found on the family television on winter Sunday afternoons. In an image of a sports scene including a football gridiron, an imaginary line relating to the rules of play, i.e. the first-down line, is sometimes of particular significance. Since it is very difficult to envision this imaginary line, an augmented reality image makes understanding the game much easier. A computer determines the precise location and perspective of this imaginary line and generates a high-contrast enhancement to visually show same. A “first down” line which can easily be seen in the optically captured video makes it easy for the viewer to readily discern the outcome of a first-down attempt, thus improving the football television experience.
While augmented reality electronic vision systems are just beginning to be found in common use, one should expect more each day as these technologies are presently in rapid advance. Computers may now be arranged to enhance optically captured images in real time by adding computer-generated graphics thereto.
Some important versions of such imaging systems include those in which the computer-generated portion of the compound image includes a level of detail which depends upon the size of a particular point of interest. Either by way of a manual user selection step or by way of inference, the system declares a point-of-interest or object of high importance. The size of the object with respect to the size of the image field dictates to the computer generation scheme the level of detail. When a point of interest is quite small in the image scene, the level of computer augmentation is preferably much less. Thus, dynamically augmented reality systems are those in which the level of augmentation responds to attributes of the scene, among other important factors. It would be most useful if the level of augmentation were responsive to other image scene attributes, for example, instant weather conditions. Further, it would be quite useful if the level of augmentation were responsive to preferential user selections with respect to certain objects of interest. Still further, it would be most useful if augmented reality systems were responsive to a manual input in which a user specifies a level of detail. These and other dynamic augmented reality features and systems are taught and first presented in the following paragraphs.
While systems and inventions of the art are designed to achieve particular goals and objectives, some of those being no less than remarkable, these inventions of the art nevertheless include limitations which prevent uses in new ways now possible. Inventions of the art are not used and cannot be used to realize the advantages and objectives of the teachings presented hereinafter.
Comes now, Peter and Thomas Ellenby with inventions of dynamic vision systems including devices and methods of adjusting computer generated imagery in response to detected states of an optical signal, the imaged scene, the environments about the scene, and manual user inputs. It is a primary function of this invention to provide highly dynamic vision systems for presenting augmented reality type images.
Imaging systems ‘aware’ of the nature of imaging scenarios in which they are used, and further aware of some user preferences, adjust themselves to provide augmented reality images most suitable for the particular imaging circumstance and to yield the most highly relevant compound images. An augmented reality generator or computer graphics generation facility is responsive to conditions relating to scenes being addressed as well as certain user-specified parameters. Specifically, an augmented reality imager provides computer-generated graphics (usually a level of detail) appropriate for environmental conditions such as fog or inclement weather, nightfall, et cetera. Further, some important versions of these systems are responsive to user selections of particular objects of interest, or ‘points of interest’ (POI). In other versions, augmented reality is provided whereby a level of detail is adjusted for the relative size of a particular object of interest.
While augmented reality remains a marvelous technology being slowly integrated with various types of conventional electronic imagers, heretofore publicly known augmented reality systems having a computer graphics generator are largely or wholly static. The present disclosure describes highly sophisticated computer graphics generators which are part of augmented reality imagers whereby the computer graphics facility is dynamic and responsive to particulars of scenes being imaged.
By way of measurements and sensors, among other means, imaging systems presented herein determine atmospheric, environmental and spatial particulars and conditions. Where these conditions warrant an increase in the level of detail, same is provided by the computer graphics generation facility. Thus an augmented reality imaging system may provide a low level of augmentation on a clear day. However, when a fog bank tends to obscure a view, the imager can respond to that detected condition and provide increased computer-generated imagery to improve the portions of the optically captured image which are obscured by fog. Thus, an augmented reality system may be responsive to environmental conditions and states in that it is operable to adjust the level of augmentation to account for specific detected conditions.
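As a minimal sketch only, and assuming the imaging system reduces its atmospheric measurements to a single normalized visibility score, the following Python fragment shows how a detected condition such as fog might drive the level of computer-generated detail; the five-step detail scale and the function name are assumptions made for illustration.

```python
def augmentation_detail(visibility, min_detail=1, max_detail=5):
    """Map a sensed visibility score to a computer-graphics detail level.

    `visibility` is assumed to be normalized to [0, 1], where 1.0 is a
    clear day and 0.0 is a view fully obscured by fog or darkness.  The
    five-step detail scale is an illustrative assumption.
    """
    visibility = min(max(visibility, 0.0), 1.0)
    # Lower visibility calls for more computer-generated detail.
    span = max_detail - min_detail
    return min_detail + round(span * (1.0 - visibility))
```

Under these assumptions, a clear day (visibility 0.9) yields a detail level of 1, while a heavy fog bank (visibility 0.2) yields a detail level of 4.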
These augmented reality imaging systems are not only responsive to environmental conditions but are also responsive to user choices with respect to declared objects of interest or points of interest. Where a user indicates a preferential interest by selecting a specific object, the augmentation provided by a computer graphics generation facility may favor the selected object to the detriment of other, less preferred objects. In this way, an augmented reality imaging system of this teaching can permit a user to “see through” solid objects which otherwise tend to interrupt a view of some highly important objects of great interest.
In a third most important regard, these augmented reality systems provide computer-generated graphics which have a level of detail which depends on the relative size of a specified object with respect to the imager field-of-view size.
Accordingly, these highly dynamic augmented reality imaging systems are not static like their predecessors, but rather are responsive to detected conditions and selections which influence the manner in which the computer-generated graphics are developed and presented.
It is a primary object of the invention to provide vision and imaging systems.
It is an object of the invention to provide highly responsive imaging systems which adapt to scenes being imaged and the environments thereabout.
It is a further object to provide imaging systems with automated means by which a computer generated image portion is applied in response to states of the imaging system and its surrounds.
A better understanding can be had with reference to detailed description of preferred embodiments and with reference to appended drawings. Embodiments presented are particular ways to realize the invention and are not inclusive of all ways possible. Therefore, there may exist embodiments that do not deviate from the spirit and scope of this disclosure as set forth by appended claims, but do not appear here as specific examples. It will be appreciated that a great plurality of alternative versions are possible.
These and other features, aspects, and advantages of the present inventions will become better understood with regard to the following description, appended claims and drawings where:
In advanced electronic vision systems, optical images are formed by a lens when light falls incident upon an electronic sensor to form a digital representation of a scene being addressed. Presently, sophisticated cameras use image processing techniques to draw conclusions about the states of a physical scene being imaged, and states of the camera. These states include the physical nature of objects being imaged as well as those which relate to environments in which the objects are found. While it is generally impossible to manipulate the scene being imaged in response to analysis outputs, it is relatively easy to adjust camera subsystems accordingly.
In one illustrative example, a modern digital camera need only analyze an image signal superficially to determine an improper white balance setting due to artificial lighting. In response to detection of this condition, the camera can adjust the sensor white balance response to improve resulting images. Of course, an ‘auto white balance’ feature is found in most digital cameras today. One will appreciate that in most cases it is somewhat more difficult to apply new lighting to illuminate a scene being addressed to achieve an improved white balance.
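For illustration, the gray-world heuristic sketched below in Python is one common way such an automatic white balance correction can be estimated; the disclosure does not specify a particular method, so the heuristic and the function names are assumptions.

```python
import numpy as np

def gray_world_gains(rgb):
    """Estimate per-channel white balance gains via the gray-world heuristic.

    `rgb` is assumed to be an H x W x 3 float array of linear sensor
    values.  The heuristic assumes the average scene color should be a
    neutral gray, so a cast from artificial lighting shows up as a
    channel imbalance that the gains undo.
    """
    means = rgb.reshape(-1, 3).mean(axis=0)
    gray = means.mean()
    return gray / np.maximum(means, 1e-8)

def apply_white_balance(rgb, gains):
    # Rebalance each channel and keep values in the displayable range.
    return np.clip(rgb * gains, 0.0, 1.0)
```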
While modern digital cameras are advanced indeed, they nevertheless do not presently use all of the information available to invoke the highest system response possible. In particular, advanced electronic cameras and vision systems have not heretofore included functionality whereby compound augmented reality type images, which comprise image information from a plurality of sources, are multiplexed together in a dynamic fashion. A compound augmented reality type image is one which is comprised of optically captured image information combined with computer-generated image information. In systems of the art, the contribution from these two image sources is often quite static in nature. For example, a computer-generated wireframe model may be overlaid upon a real scene of a cityscape to form an augmented reality image of particular interest. However, the wireframe attributes are prescribed and preset by the system designer rather than being dynamic or responsive to conditions of the image scene, the image environment, or the points of interest or image scene subject matter. The computer-generated portion of the image may be the same (particularly with regard to detail level) regardless of the optical signal captured.
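One simple way to picture such multiplexing is alpha compositing of the two image sources, sketched below in Python under the assumption that both sources are available as floating-point arrays; in a static system of the art the matte and the overlay detail would be fixed, whereas the systems described here vary them with scene conditions.

```python
import numpy as np

def compose_augmented_frame(optical, overlay, alpha):
    """Combine an optically captured frame with a computer-generated overlay.

    `optical` and `overlay` are assumed to be H x W x 3 float images in
    [0, 1]; `alpha` is a scalar or an H x W matte giving, per pixel, how
    strongly the computer-generated portion contributes to the compound
    image.
    """
    alpha = np.asarray(alpha, dtype=float)
    if alpha.ndim == 2:
        alpha = alpha[..., np.newaxis]   # broadcast a per-pixel matte
    return np.clip(alpha * overlay + (1.0 - alpha) * optical, 0.0, 1.0)
```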
In an illustrative example, a system user 1 addresses a scene of interest—a cityscape view of San Francisco. In this example, the user views the San Francisco cityscape via an electronic vision system 2 characterized as an augmented reality imaging apparatus. Computer-generated graphics are combined with and superimposed onto optically captured images to form compound images which may be directly viewed. An image 3 of the cityscape includes the Golden Gate Bridge 4 and various buildings 5 in the city skyline. San Francisco is famous for its fog which comes frequently to upset the clear view of scenes such as the one illustrated as
Because the presence of fog is detectable, indeed it is detectable via many alternative means, these systems may be provided wherein dynamic elements thereof are adjustable or responsive to values which characterize the presence of fog.
As a result of fog being present as sensed by the imaging system, the computer responds by adding enhancements appropriate for the particular situation. That is, the computer generated portion of the image is dynamic and responsive to environmental states of the scene being addressed. In best versions, the processes may be automated. The user does not have to adjust the device to encourage it to perform in this fashion, but rather the computer generated portion of the image is provided by a system which detects conditions of the scene and provides computer-generated imagery in accordance with those sensed or detected states.
To continue the example, as nightfall arrives the optical imager loses nearly all ability to provide for contrast. As such, the computer-generated portion of the image becomes more important than the optical portion. A further increase in detail of the computer-generated portion is called for. Without user intervention, the device automatically detects the low contrast and responds by turning up the detail of the computer generated portions of the image.
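A minimal sketch of such automatic behavior, assuming the system measures root-mean-square contrast of the optical frame, follows in Python; the contrast thresholds and the detail scale are illustrative assumptions.

```python
import numpy as np

def rms_contrast(gray):
    """Root-mean-square contrast of a grayscale frame scaled to [0, 1]."""
    return float(gray.std())

def detail_for_contrast(gray, low=0.05, high=0.25, max_detail=5):
    """Step up computer-generated detail as optical contrast collapses.

    The low/high thresholds and the five-step scale are assumptions made
    for illustration only.
    """
    c = rms_contrast(gray)
    if c >= high:
        return 1                   # ample optical contrast, light augmentation
    if c <= low:
        return max_detail          # optical image nearly unusable at nightfall
    # Interpolate between the two extremes.
    frac = (high - c) / (high - low)
    return 1 + round(frac * (max_detail - 1))
```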
While environmental factors are a primary basis upon which these augmented reality systems might be made responsive, there are additional important factors related to scenes being addressed to which the manner and performance of computer-generated graphics may be made responsive. Namely, the computer graphics generation facility may be made responsive to specified objects such that greater detail is provided for one object, sometimes at the expense of detail with respect to a less preferred object.
In a most important version of these electronic vision systems, a user selects a particular point-of-interest (POI) or object of high importance. Once so specified, the graphics generation can respond to the user selection by generating graphics which favor that object at the ‘expense’ of the others. In this way, the user selection influences augmented images, and most particularly the computer-generated portion of the compound image, such that the detail provided is dependent upon selected objects within the scenes being addressed. Thus, depending upon the importance of an object as specified by a user, the computer-generated graphics are responsive.
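A brief sketch, in Python, of how such a preference might be expressed follows; the object representation, the identifier names, and the detail values are assumptions for the purpose of illustration.

```python
def detail_by_preference(objects, selected_id, preferred=5, default=1):
    """Assign per-object detail levels that favor a user-selected POI.

    `objects` is assumed to be a list of dicts each carrying an 'id'
    key.  Graphics for the selected object are rendered at full detail,
    even where it is occluded, while less preferred objects receive only
    baseline annotation.
    """
    return {obj["id"]: (preferred if obj["id"] == selected_id else default)
            for obj in objects}
```

For example, with hypothetical identifiers, selecting 'bridge' from among 'bridge', 'building_a', and 'building_b' would yield detail level 5 for the bridge and level 1 for each building.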
With reference to the drawing
In review, systems have been described which provide a computer graphics generator responsive to environmental features (fog, rain, et cetera), optical sensor states (low contrast), and user preferences with respect to points-of-interest. In each of these cases, an augmented reality image is comprised of an optically captured image portion and a computer-generated image portion, where the computer-generated image portion is provided by a computer responsive to various stimuli such that the detail level of the computer-generated images varies in accordance therewith. The computer-generated image portion is thus dependent upon dynamic features of the scene, the scene environment, or the user's desires.
In another important aspect, the computer generated portion of the augmented image is made responsive to the size of a selected object with respect to the image field size.
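Continuing the assumed five-step detail scale of the earlier sketches, the following Python fragment illustrates one way the detail level might track the fraction of the image field occupied by the selected object; the area thresholds are illustrative assumptions.

```python
def detail_for_relative_size(object_area_px, frame_area_px,
                             small=0.01, large=0.25, max_detail=5):
    """Scale the detail level with the object's share of the image field.

    An object occupying less than `small` of the frame receives minimal
    augmentation; one filling more than `large` receives full detail.
    Thresholds and scale are illustrative assumptions.
    """
    fraction = object_area_px / float(frame_area_px)
    if fraction <= small:
        return 1
    if fraction >= large:
        return max_detail
    frac = (fraction - small) / (large - small)
    return 1 + round(frac * (max_detail - 1))
```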
The images of
Finally,
While
Once an augmented image is presented to a user, the level of augmentation being automatically decided by the computer in view of the environment, image conditions, and object importance, among others, the presented image may be adjusted with respect to augmentation levels simply by way of tactile controls which may be operated by the user. In this way, a default level of augmentation may be adjusted ‘up’ or ‘down’ with inputs from its human operator.
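A minimal sketch of such a manual adjustment, again assuming the illustrative five-step detail scale, follows in Python; the control interface itself is not specified here, only the clamped combination of the automatic level and the user's signed input.

```python
def adjust_detail(auto_detail, user_offset, min_detail=1, max_detail=5):
    """Apply a manual 'up'/'down' adjustment to the automatic detail level.

    `auto_detail` is the level the system selected from scene conditions;
    `user_offset` is the signed number of steps entered through the
    tactile control (e.g. +1 per 'up' press).  The result is clamped to
    the assumed detail range.
    """
    return max(min_detail, min(max_detail, auto_detail + user_offset))
```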
One will now fully appreciate how augmented reality systems responsive to the states of scenes being addressed may be realized and implemented. Although the present invention has been described in considerable detail with clear and concise language and with reference to certain preferred versions thereof, including best modes anticipated by the inventors, other versions are possible. Therefore, the spirit and scope of the invention should not be limited by the description of the preferred versions contained herein, but rather by the claims appended hereto.