The disclosure relates to displaying images via an augmented reality heads-up display.
A heads-up display (HUD) may show vehicle occupants data or information without the vehicle occupants having to take their eyes away from a direction of vehicle travel. This allows a heads-up display to provide pertinent information to vehicle occupants in a way that may be less distracting than displaying the same information on a control console that may be out of their field of vision. The heads-up display may utilize augmented reality systems to insert virtual (e.g., computer-generated) objects into a field of view of a user in order to make the virtual objects appear to the user to be integrated into a real-world environment. The objects may include but are not limited to signs, people, vehicles, and traffic signals.
A see-through display may include virtual objects that are placed in a user's field of view according to objects that are detected via a camera. There is a finite amount of time between when an image is captured via the camera and when a virtual object that is related to an object that is detected from the image is exhibited or viewable via a vehicle HUD. This time may be referred to as a glass-to-glass latency of the HUD system. The glass-to-glass latency may cause virtual objects that move with objects detected from the image to appear to move in a step-wise manner. Consequently, the motion of the virtual object in the HUD image when presented via the augmented reality system may not flow smoothly, such that the image may be less realistic and/or less pleasing to the viewer. By updating an exhibited or displayed image with expected or projected virtual objects between times when images are captured via a camera, it may be possible to provide smoother motion for exhibited or displayed virtual objects that follow motion of objects in images captured by the camera. The present disclosure provides for a heads-up display device comprising: a display; a camera; and a display controller comprising a processor and memory storing non-transitory instructions executable by the processor to exhibit one or more virtual objects via the display, the one or more virtual objects generated via the processor, the one or more virtual objects exhibited in displayed images via the display at a rate faster than new images are captured via the camera, and where positions of the one or more virtual objects are adjusted in each of the displayed images each time the displayed images are updated. In this way, it may be possible to smoothly move a virtual object in a HUD image to track with an object that is captured via a camera.
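For illustration only, the following minimal sketch shows a display loop of the kind described above: the display refreshes several times per camera capture and nudges each virtual object along its estimated trajectory between captures. The capture and refresh periods, the camera/hud interfaces, and the helper functions are hypothetical placeholders, not elements of the disclosure.

```python
# Minimal sketch of a display loop that updates virtual objects faster than the
# camera frame rate. All names (camera, hud, detect_objects, estimate_trajectory)
# are illustrative placeholders.

CAMERA_PERIOD_MS = 72   # assumed image capture period
DISPLAY_PERIOD_MS = 18  # assumed display refresh period (4 updates per capture)

def run_hud(camera, hud, detect_objects, estimate_trajectory):
    last_positions = None
    trajectories = {}
    while True:
        frame = camera.capture()            # new camera image, e.g., every 72 ms
        positions = detect_objects(frame)   # object id -> (x, y, size)
        if last_positions is not None:
            trajectories = estimate_trajectory(last_positions, positions)
        updates = CAMERA_PERIOD_MS // DISPLAY_PERIOD_MS
        for step in range(updates):         # several display updates per capture
            for obj_id, (x, y, size) in positions.items():
                dx, dy, dsize = trajectories.get(obj_id, (0, 0, 0))
                frac = step / updates       # fraction of the capture cycle elapsed
                hud.draw_marker(obj_id,
                                x + dx * frac,      # expected position between captures
                                y + dy * frac,
                                size + dsize * frac)
            hud.present()                   # refresh the projected image
        last_positions = positions
```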
In some examples, the new images are captured at a fixed rate, and the fixed rate may be related to system hardware and software throughput limits. The heads-up display device may also include wherein a position of the one or more virtual objects is based on a position of an identified object in an image captured via the camera, and wherein the one or more virtual objects are configured to make an object in the real-world more noticeable. The heads-up display device may include where the one or more virtual objects appear to surround the object in the real-world to allow quicker identification of the real-world object by a user. The heads-up display device may also include where the one or more virtual objects are placed to appear proximate to the object in the real-world via the heads-up display so as to allow a user to recognize the real-world object sooner. The heads-up display device may also include additional instructions to position the one or more virtual objects generated via the processor via the heads-up display based on a trajectory of a first identified object captured in a first image via the camera and the first identified object captured in a second image via the camera so that the virtual object may track or follow the first identified object as the first identified object moves in the real-world. The heads-up display device may also include where the trajectory of the first identified object is based on a position of the first identified object in the first image and a position of the first identified object in the second image. A change in the position of the first identified object may be the basis for predicting a future position of the first identified object so that the heads-up display system may track or follow the first identified object even when additional images of the first identified object are not captured by the camera. The heads-up display further comprises additional instructions to resize the one or more virtual objects in response to the trajectory of the first identified object.
A heads-up display system may include a camera to detect positions of objects in fields of view of vehicle occupants. The camera may generate images in which objects may be identified, and the identified objects may be highlighted in a user's field of view via the heads-up display system generating a virtual object via light, such as a target box or other visual aid. The user may observe an identified object through a windshield, and a see-through virtual object may be generated in the user's field of view and in the heads-up display field of projection such that the virtual object appears to surround and/or follow the identified object from the user's point of view. However, the amount of time it takes to capture an image via the camera, identify objects in the captured image, render virtual objects, and display the virtual objects in the user's field of view may be longer than the amount of time it takes for a user to notice motion of the virtual object that is generated to call the user's attention to the identified object. Consequently, the user may notice stepwise changes in the position of the virtual object. The present disclosure may provide ways of overcoming such limitations.
As one example, the present disclosure provides for a method for operating a heads-up display device, the method comprising: capturing a first image via a camera; identifying an object in the first image; generating a display image via a heads-up display, the display image including a virtual object that is placed in the display image based on a motion of the object. By placing the virtual object in the display image (e.g., an image that is projected via the heads-up display) based on the motion of the object, it may be possible to smoothly track the object with a virtual object in a user's field of view. In other words, the virtual object may appear to move continuously with the object so as to improve tracking of the object from the user's point of view.
In some examples, the method includes where the motion of the object is based on a trajectory of the object, and further comprises capturing a second image via the camera and identifying the object in the second image. The method further comprises estimating the motion of the object based on the first and second images so that the virtual object may be placed in a desired location in a user's field of view without having to capture a new image each time the virtual object's position is adjusted. The method also further comprises adjusting a position of the virtual object in the display image based on the motion of the object. The method further comprises adjusting a size of the virtual object based on a speed of a vehicle. This allows the virtual object to grow or shrink as an identified object grows or shrinks so that the size of the virtual object is adjusted with the size of the identified object according to the user's field of view. The method includes where the virtual object is configured to enhance visual identification of the object so that the user may be made aware of the identified object. The method includes where placing the virtual object in the display image based on the motion of the object includes placing the virtual object in the display image based on a location the object is expected to be relative to a user field of view, the location the object is expected to be being based on a position of the object in the first image. In this way, the virtual object may be moved without having to capture and process another image.
A heads-up display may track a position of an object via a camera and provide a virtual object in a field of view. However, by the time that the camera image is processed and an object in the image is identified, a virtual object that is placed in a field of view of a user based on the object in the image may be placed in a position that is different from the current position of the object in the user's field of view. Therefore, it may be desirable to adjust the position of the virtual object in the user's field of view such that the virtual object may track or follow the object in the user's field of view more closely. The present disclosure may provide a way of making a virtual object appear to more closely follow an object in a user's field of view.
In one or more examples, the present method provides for a method for operating a heads-up display device, the method comprising: capturing a first image via a camera; identifying a position of an object in the first image; generating a second image via a heads-up display, the second image including a virtual object that is placed in the second image at a first position based on the position of the object in the first image; capturing a third image via the camera; identifying a position of the object in the third image; generating a fourth image via a heads-up display, the fourth image including the virtual object that is placed in the fourth image at a second position based on the position of the object in the third image; and generating a plurality of images via a heads-up display, the plurality of images generated at times between generation of the second image and generation of the fourth image, the plurality of images including the virtual object, the virtual object placed in the plurality of images based on expected positions of the object, the expected positions of the object located between the first position and the second position.
In some examples, the method includes where the expected positions of the object are based on the position of the object in the first image. In particular, the expected position of the object may be interpolated such that the virtual object may be placed near the object in the user's field of view. The method also includes where the expected positions of the object are based on the position of the object in an image captured via the camera before the first image. Thus, the interpolated position of the object may be based on known first and second positions of the object that were determined ahead of the interpolated position of the object. The method also further comprises adjusting a size of the virtual object based on a change in a size of the object. This may allow the virtual object to scale with the object in the user's field of view so that a size relationship between the object and the virtual object in the user's field of view may be maintained. In addition, the method further comprises estimating a position change of the object so that a position of the object at a future time may be estimated.
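A minimal sketch of this kind of interpolation follows, assuming the object's pixel position is known in two consecutive captured images; the function name and the number of display updates are illustrative, not taken from the disclosure.

```python
def interpolate_expected_positions(p1, p2, num_updates):
    """Return expected (x, y) positions located between a first known position
    p1 and a second known position p2, one per intermediate display update.

    p1, p2: (x, y) pixel positions of the identified object in two captured images.
    num_updates: number of display updates per image capture cycle.
    """
    x1, y1 = p1
    x2, y2 = p2
    return [(x1 + (x2 - x1) * k / num_updates,
             y1 + (y2 - y1) * k / num_updates)
            for k in range(1, num_updates)]

# Example: object moved from (3000, 200) to (3500, 300), 4 display updates per cycle.
print(interpolate_expected_positions((3000, 200), (3500, 300), 4))
# -> [(3125.0, 225.0), (3250.0, 250.0), (3375.0, 275.0)]
```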
The disclosure may be better understood from reading the following description of non-limiting embodiments, with reference to the attached drawings, wherein below:
The disclosure provides for systems and methods that address the above-described issues that may arise in producing a heads-up display in an environment such as a vehicle. For example, the disclosure describes a display configuration that identifies objects in a user's field of view and generates virtual objects that may track or follow the identified objects so as to make the identified objects more noticeable to users. The virtual objects may be aligned with and/or positioned over and/or around identified objects in a user's optical path such that the virtual objects flow with the identified objects in the user's optical path in a way that may be visually pleasing to a user. The virtual objects may be placed in the user's optical path via a heads-up display according to estimated trajectories of the identified objects without having to deploy high speed image processing. In one or more examples, positions of virtual objects may be adjusted without capturing an image and identifying objects in the image each time the virtual objects are moved. As such, the amount of image processing to operate the heads-up display may be reduced. In addition, lowering the image processing requirements may reduce system cost.
The display 108 may be controlled via a display controller 114. The display controller 114 may be a dedicated display controller (e.g., a dedicated electronic control unit that only controls the display) or a combined controller (e.g., a shared electronic control unit that controls the display and one or more other vehicle systems). In some examples, the display controller 114 may be a head unit of the vehicle 104 (e.g., an infotainment unit and/or other in-vehicle computing system). The display controller may include non-transitory memory (e.g., read-only memory) 130, random access memory 131, and a processor 132 for executing instructions stored in the memory to control an output of the display 108. The display controller 114 may control the display 108 to project particular images based on the instructions stored in memory and/or based on other inputs. The display controller 114 may also control the display 108 to selectively turn off (e.g., dim) portions of backlighting from the backlight array 109 to conserve power and/or reduce heat based on a state of the display and/or ambient conditions of the display. For example, when regions of an image are black, portions of the backlight array 109 corresponding to the locations of the black regions of the images may be turned off. As another example, if a thermal load is predicted or detected to be above a threshold, selected light sources of the backlight array 109 may be turned off to reduce heat generation (e.g., alternating light sources in the array may be switched off so that the whole image may be displayed while only half of the light sources are generating heat, or the image may be reduced in size and light sources in the array corresponding to locations of newly black regions of the image may be turned off). Examples of inputs to the display controller 114 for controlling display 108 include an eye tracking module 116, a sunload monitoring module 118, a user input interface 120, camera 128, and a vehicle input interface 122.
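As a rough, hypothetical illustration of the two dimming policies described above (zones behind black image regions switched off, and alternating zones switched off under high thermal load), the sketch below assumes a per-zone brightness summary is available; none of the names come from the disclosure.

```python
# Illustrative sketch of selecting backlight zones to disable.

def zones_to_disable(zone_brightness, thermal_load, thermal_threshold, black_level=0):
    """zone_brightness: list of per-zone peak image brightness values (0-255),
    one entry per backlight zone. Returns indices of zones to switch off."""
    disabled = {i for i, level in enumerate(zone_brightness) if level <= black_level}
    if thermal_load > thermal_threshold:
        # Alternate zones off so the whole image can still be shown while only
        # half of the light sources generate heat.
        disabled |= {i for i in range(len(zone_brightness)) if i % 2 == 1}
    return sorted(disabled)

# Example: zones 1 and 3 are black; thermal load below threshold.
print(zones_to_disable([200, 0, 180, 0, 90], thermal_load=40, thermal_threshold=70))
# -> [1, 3]
```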
Eye tracking module 116 may include and/or be in communication with one or more image sensors to track movement of eyes of a user (e.g., a driver). For example, one or more image sensors may be mounted in a cabin of the vehicle 104 and positioned to capture (e.g., continuously) images of pupils or other eye features of a user in the vehicle. In some examples, the image data from the image sensors tracking the eyes may be sent to the display controller for processing to determine a gaze direction and/or other eye tracking details. In other examples, the eye tracking module 116 may include and/or may be in communication with one or more processing modules (e.g., local modules within the vehicle, such as a processing module of a head unit of the vehicle, and/or remote modules outside of the vehicle, such as a cloud-based server) configured to analyze the captured images of the eyes and determine the gaze direction and/or other eye tracking details. In such examples, a processed result indicating a gaze direction and/or other eye tracking information may be sent to the display controller. The eye tracking information, whether received by or determined at the display controller, may be used by the display controller to control one or more display characteristics. The display characteristics may include a location of display data (e.g., to be within a gaze direction), localized dimming control (e.g., dimming backlighting in regions that are outside of a gaze direction), a content of display data (e.g., using the gaze direction to determine selections on a graphical user interface and rendering content corresponding to the selections on the graphical user interface), and/or other display characteristics. In some examples, other eye tracking data may be used to adjust the display characteristics, such as adjusting a visibility (e.g., opacity, size, etc.) of displayed data responsive to detection of a user squinting.
Sunload monitoring module 118 may include and/or be in communication with one or more sun detection sensors, such as infrared sensors, to determine an amount of sunlight (or other light) impinging on the windshield and/or display elements. As described above with respect to the image sensors and eye tracking data, sunload monitoring data may be processed locally (e.g., by the sunload monitoring module 118, by the display controller 114, and/or by another in-vehicle processing module, such as a processing module of a head unit of the vehicle 104), remotely (e.g., by a remote service outside of the vehicle, such as a cloud-based server), and/or some combination thereof. The sunload monitoring data may include an amount of sunlight (or other light) in a given region of the vehicle that is associated with the display configuration (e.g., display units, regions of the windshield onto which display light from the display 108 may be projected, other locations in a path of light emitted from the display 108, etc.). The amount of light may be compared to one or more thresholds and used by the display controller 114 to adjust one or more display characteristics. For example, if sunload in a particular region of the windshield is above a threshold, the display controller and/or another suitable controller may adjust a physical position of an optical element (e.g., the 3D element 112) to increase a focal length of the display configuration and reduce a magnification of the optics of the display configuration, thereby decreasing a sunload on the display elements (e.g., a thin-film-transistor, TFT, of an LCD display).
The user input interface 120 and vehicle input interface 122 may be used to provide instructions to the display controller 114 to control the display based on user input and vehicle data/status, respectively. For example, user input to change a type of information displayed (e.g., to select between instrument data such as speed/RPM/etc. and navigation data such as turn directions), to select options when a graphical user interface is displayed, and/or to otherwise indicate user preferences may be provided to the display controller 114 and processed to alter a content and/or format of the data displayed via the display unit 107. The user input interface may receive user input from any suitable user input device, including but not limited to a touch screen, vehicle-mounted actuators (e.g., buttons, switches, knobs, dials, etc.), a microphone (e.g., for voice commands), an external device (e.g., a mobile device of a vehicle occupant), and/or other user input devices. The vehicle input interface 122 may receive data from vehicle sensors and/or systems indicating a vehicle status and/or other vehicle data, which may be sent to the display controller 114 to adjust content and/or format of the data displayed via the display system 102. For example, a current speed may be supplied (e.g., via a controller-area network, CAN, bus of the vehicle) to the vehicle input interface and sent to the display controller to update a display of a current speed of the vehicle. The vehicle input interface may also receive input from a navigation module of a head unit of the vehicle and/or other information sources within the vehicle.
Camera 128 may provide digital images to display controller 114 for tracking objects that may be in the path of vehicle 104. In some examples, two cameras may provide digital images to display controller 114. Camera 128 may capture digital images and transfer the digital images to display controller 114 at a fixed rate (e.g., every 16 milliseconds (ms)).
Referring now to
The system of
Turning now to
In the illustrated example, the display units 207 are utilized to display the various images via the windshield 106, however, it is to be understood that the images may be presented using other display arrangements such as mirror based systems (not shown). In addition, the display system may generate a plurality of virtual images or objects to track a plurality of objects in the real-world.
Referring now to
At time t0, virtual image or object 232 is displayed and laid over object 234 according to a position of object 234 as determined from a most recently captured camera image. Virtual image or object 233 is a next most recent virtual image or object that may be displayed and laid over object 234 according to a next most recent position of object 234 as determined via a newest or most recently captured camera image following the display of virtual image or object 232. In other words, virtual images or objects 232 and 233 may be the first objects or images displayed and placed over object 234 after a camera has captured a new image. Virtual images or objects 232, 232′, 232″, and 233 are the same virtual image or object displayed at different times in the illustrated sequence.
Due to hardware constraints and processing limitations, it may take a greater amount of time to display virtual images or objects that are based on newest captured images than may be desired. In particular, user 205, as shown in
The display positions of virtual objects 232′ and 232″ may be determined according to an estimated trajectory of object 234 as described in the methods of
The example of
A third image capture and display cycle begins with camera input being captured at 318, which begins at time t0. However, in this image capture and display cycle, the trajectory image information or data determined at 340, from the prior image capture and display cycle beginning at 310, is applied at 342 to render and update the virtual object or image that tracks with the identified object. The updated virtual object may include size and positioning updates for the virtual object. The updated virtual object is displayed at 344 beginning at time t1, and the updated virtual object is the second image that is displayed based on the image that was captured at 310. Notice that the virtual object rendering event at 342 occurs during the perception time 320 for the image that is presently being processed (e.g., the image captured at 318). By displaying an updated version of the virtual image or object sooner than at 324, a smoother progression of the virtual image or object may be presented via the heads-up display to the user. The trajectory image information or data determined at 340 is applied a second time at 346 to render a second update of the virtual object or image that tracks with the identified object. The second updated virtual object may also include size and positioning updates for the virtual object. The second updated virtual object is displayed at 348 beginning at time t2, and the updated virtual object is the third image that is displayed based on the image that was captured at 310. The trajectory image information or data determined at 340 is applied a third time at 350 to render a third update of the virtual object or image that tracks with the identified object. The third updated virtual object may also include size and positioning updates for the virtual object. The third updated virtual object is displayed at 352 beginning at time t3, and the updated virtual object is the fourth image that is displayed based on the image that was captured at 310. The third image captured by the camera is processed at 320 and the trajectory of the identified object, if still present, is determined a second time at 358. The first virtual image or object that is based on the identified object in the third image is displayed at 324. The sequence repeats after step 324, the end of the third image capture and display cycle, at steps 326-332 and 360-370. Thus, a heads-up display device may update sizes and positions of virtual objects multiple times for each image that is captured via a camera. In some examples, the trajectory steps (e.g., 340 and 358) and the renderings that are based on the trajectories (e.g., 342, 346, 350, 360, 364, and 368) may be processed via a second core in the controller's processor so that the operations may be performed in parallel with receiving camera input (e.g., 318), perceiving objects (e.g., 320), rendering virtual objects (e.g., 322), and displaying the virtual objects (e.g., 324).
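One way such parallel processing might be arranged is sketched below: perception of the newest frame is handed to a worker while the trajectory-based renders for times t1 through t3 are produced. The function names and the use of a thread pool are assumptions for illustration only, not the disclosure's implementation.

```python
# Hedged sketch of running perception and trajectory-based rendering concurrently.

from concurrent.futures import ThreadPoolExecutor

def process_cycle(frame, perceive, render_from_trajectory, trajectory, hud, updates=3):
    with ThreadPoolExecutor(max_workers=2) as pool:
        # Worker 1: perception of the newly captured frame (e.g., step 320).
        perception_future = pool.submit(perceive, frame)
        # Worker 2 (main thread here): intermediate renders based on the prior
        # cycle's trajectory (e.g., steps 342, 346, 350), displayed at t1..t3.
        for step in range(1, updates + 1):
            virtual_object = render_from_trajectory(trajectory, step / (updates + 1))
            hud.display(virtual_object)
        detections = perception_future.result()  # ready in time for the next t0 display
    return detections
```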
Referring now to
At 402, method 400 judges whether or not the heads-up display (HUD) is activated. The HUD may be activated via input to a user interface or automatically when a vehicle is powered up. If method 400 judges that the HUD is activated, the answer is yes and method 400 proceeds to 404 and 430. Otherwise, the answer is no and method 400 proceeds to exit.
At 404, method 400 captures an image via a camera. In one or more examples, the camera senses light via an analog light sensor, and voltage levels representing photons sensed via a plurality of sensor pixels are converted into digital numeric values that are stored in controller memory, thereby representing a pixelated digital image. The digital image may be passed from the camera to the controller. Method 400 proceeds to 406.
At 406, method 400 processes the captured image and attempts to identify one or more objects that may be included in the image. In one or more examples, the captured image may be processed to segment portions of the image that correlate to objects to be identified. Segmentation borders may be provided, along with identifiers specifying an object identified within the provided borders. In an example, objects may be identified by filtering the image, detecting edges of potential objects, and comparing known objects to potential objects. In an example, a trained machine learning algorithm may be used to identify particular objects, such as pedestrians, road signs, animals, vehicles, bicycles, etc. Method 400 may identify the type of object (e.g., animal, person, vehicle, etc.), the size of the object in pixels and/or provide a bounding box, and the location of the object in the image. Method 400 proceeds to 408.
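As a stand-in for the identification at 406, the sketch below uses OpenCV's stock HOG pedestrian detector to produce the kind of output described (object type, bounding box, and location). The disclosure leaves the detection technique open, so this is only one possible realization, not the method of the disclosure.

```python
# One off-the-shelf way to obtain object type, bounding box, and location.

import cv2

def identify_pedestrians(image_bgr):
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _ = hog.detectMultiScale(image_bgr, winStride=(8, 8))
    detections = []
    for (x, y, w, h) in rects:
        detections.append({
            "type": "person",                   # object type
            "bbox": (x, y, w, h),               # bounding box in pixels
            "center": (x + w // 2, y + h // 2)  # location of the object in the image
        })
    return detections
```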
At 408, method 400 judges if objects (if any) identified in the most recently captured camera image were present in the second most recently captured camera image. If so, the answer is yes and method 400 proceeds to 410. A “yes” answer indicates that a trajectory of the identified object may be determined. If the answer is no, method 400 proceeds to 412.
At 410, method 400 determines a trajectory of each twice identified object in the most recent and second most recent images captured by the camera. The trajectories may be estimated as described in
At 412, method 400 renders virtual objects that are to be displayed based on the objects identified in the most recent captured camera image. The virtual objects to be displayed may be selected from a library of virtual objects, and the selected virtual object may be based on the identified object. The virtual objects or images may include but are not limited to target shapes (e.g., boxes, circles, etc.), poses, and icons. In one or more examples, method 400 scales the virtual objects according to the sizes of the identified objects. For example, if an identified object is 100 pixels wide and 200 pixels tall, the size of the virtual object may be Vx=Sf1*(Px), Vy=Sf2*(Py), where Vx is the width of the virtual object to be displayed, Vy is the height of the virtual object to be displayed, Px is the pixel width of the identified object, Py is the pixel height of the identified object, Sf1 is the width scaling factor between the identified object and the virtual object or image (e.g., a target box), and Sf2 is the height scaling factor between the identified object and the virtual object or image. The virtual objects may be scaled to cover the identified object, surround the identified object, or be a fraction of the size of the identified object when the virtual object is exhibited and presented to the user via the HUD.
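A direct transcription of this scaling relation, with illustrative scale factors (the disclosure does not fix their values), might look as follows:

```python
# Vx = Sf1 * Px, Vy = Sf2 * Py, as described above.

def scale_virtual_object(px, py, sf1=1.25, sf2=1.25):
    """px, py: pixel width/height of the identified object.
    Returns (Vx, Vy), the width and height of the virtual object (e.g., a target
    box sized to surround the identified object when sf1, sf2 > 1)."""
    return sf1 * px, sf2 * py

# Example from the text: an identified object 100 pixels wide and 200 pixels tall.
print(scale_virtual_object(100, 200))  # -> (125.0, 250.0)
```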
In one or more examples, the location in the field of projection of the heads-up display at which a particular virtual object or image is placed may be a function of a location of the associated identified object in the most recent image captured by the camera. For example, if a center of the associated identified object is at a location 1000x, 2000y, where 1000x is 1000 pixels from a camera light sensor horizontal reference position (e.g., a particular corner of the sensor), and where 2000y is 2000 pixels from a camera light sensor vertical reference position, the position in the heads-up display field of projection at which the particular virtual object or image is placed may be determined via a function (Hx,Hy)=MAP(x,y), where Hx is a horizontal location in the HUD field of projection, Hy is a vertical location in the HUD field of projection, x is the camera sensor horizontal pixel location for the center of the identified object, y is the camera sensor vertical pixel location for the center of the identified object, and MAP is a function that maps camera sensor pixel locations to HUD field of projection locations. Method 400 proceeds to 414.
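A minimal sketch of such a MAP function follows. The simple affine mapping and the sensor and field-of-projection dimensions are assumptions for illustration; a calibrated system might instead use a homography or a lookup table, and the disclosure does not specify the mapping's internals.

```python
# Hypothetical MAP from camera sensor pixel coordinates to HUD projection coordinates.

def make_map(cam_width, cam_height, hud_width, hud_height, hud_x0=0.0, hud_y0=0.0):
    def MAP(x, y):
        hx = hud_x0 + x * hud_width / cam_width    # horizontal HUD location Hx
        hy = hud_y0 + y * hud_height / cam_height  # vertical HUD location Hy
        return hx, hy
    return MAP

# Example from the text: object center at camera pixel (1000, 2000).
MAP = make_map(cam_width=4000, cam_height=3000, hud_width=800, hud_height=600)
print(MAP(1000, 2000))  # -> (200.0, 400.0)
```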
At 414, method 400 judges if virtual objects or images based on the most recently captured camera image are ready to display via the HUD. For example, as shown in
At 416, method 400 exhibits or displays virtual objects or images based on the most recently captured camera image to users via the HUD. The objects or images may be exhibited at a location in the heads-up field of projection where the user sees the virtual object or image and the identified object. For example, from the user's perspective, a virtual object or image may be displayed such that it appears to surround the identified object as shown at 232 of
At 418, method 400 adjusts a value of the timer to be equal to zero and time begins to accumulate in the timer. Method 400 returns to 402.
At 430, method 400 renders virtual objects or images that are to be displayed based on the trajectories of objects identified in the most recent captured camera image. As previously described, the virtual objects to be displayed may be selected from a library of virtual objects, and the selected virtual object may be based on the identified object.
Method 400 may display virtual objects or images that are based on interpolated trajectories of identified objects in images that are captured via the camera, and it may do so a predetermined number of times between times when images are captured via the camera. For example, as shown in
The process for revising attributes (e.g., size and positioning) of a virtual object or image in a displayed image is described herein, and the process may be extended to revising attributes for displaying a plurality of virtual objects or images in a displayed image. In one or more examples, method 400 may receive a trajectory for a particular object that is a basis for displaying a virtual object or image during an image capture and display cycle. The trajectory may be the basis for determining a change in position of the virtual object or image in a displayed image. The position of the virtual object or image in the displayed images during an image capture and display cycle may be revised for each displayed image. For example, if a person is identified in a camera captured image and the person's position in the captured image is 20000x, 10000y, 5000z with a trajectory that is determined to be 5000x, 0y, and 2000z, for an image cycle of 72 milliseconds, the person's position at the camera sensor may be extrapolated to be:

pt1 = (20000x + (1/Du)*5000x, 10000y + (1/Du)*0y, 5000z + (1/Du)*2000z)

pt2 = (20000x + (2/Du)*5000x, 10000y + (2/Du)*0y, 5000z + (2/Du)*2000z)

pt3 = (20000x + (3/Du)*5000x, 10000y + (3/Du)*0y, 5000z + (3/Du)*2000z)
where pt1 is the position of the estimated or expected position of the identified object at the camera sensor at time t1 of the image capture and display cycle, Du is the total number of heads-up display updates during an image capture and display cycle, x is the horizontal component of the estimated or expected position of the identified object at the camera sensor, y is the vertical component of the estimated or expected position of the identified object at the camera sensor, and z is the depth (e.g., distance away) component of the estimated or expected position of the identified object at the camera sensor, pt2 is the position of the estimated or expected position of the identified object at the camera sensor at time t2 of the image capture and display cycle, and pt3 is the position of the estimated or expected position of the identified object at the camera sensor at time t3 of the image capture and display cycle. Thus, linear extrapolation from the identified object's position at the camera sensor may be a basis for estimating the position of the identified object at times in the future.
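Using the example numbers above, a short sketch of this linear extrapolation (with Du taken as 4 display updates per cycle purely for illustration) could be:

```python
# Linear extrapolation of the identified object's position at the camera sensor.

def extrapolate_positions(position, trajectory, du):
    """Return expected positions pt1..pt(du-1) for the intermediate display
    updates of one image capture and display cycle.

    position: (x, y, z) of the identified object from the most recent image.
    trajectory: (dx, dy, dz) change over one full capture cycle.
    du: total number of heads-up display updates during the cycle (Du)."""
    x, y, z = position
    dx, dy, dz = trajectory
    return [(x + dx * n / du, y + dy * n / du, z + dz * n / du)
            for n in range(1, du)]

print(extrapolate_positions((20000, 10000, 5000), (5000, 0, 2000), du=4))
# -> [(21250.0, 10000.0, 5500.0), (22500.0, 10000.0, 6000.0), (23750.0, 10000.0, 6500.0)]
```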
It should be noted that the z or depth component in this example does not provide a z position relative to the camera's sensor. Rather, the z component may be a basis for resizing the virtual object or image based on a change in size of the identified object between two images. Alternatively or in addition, the virtual object may be resized as a function of vehicle speed. For example, if a base size of a virtual image or object is 200 horizontal pixels by 300 vertical pixels, the virtual image or object may be scaled to a size via the following function: St1(x, y)=Rsize(Vx, Vy), where St1 is the new size of the virtual image or object, x is the camera sensor horizontal pixel distance, y is the camera vertical pixel distance, Rsize is a function that returns a resized virtual object or image, Vx is a base horizontal size of the virtual object or image, and Vy is a base vertical size of the virtual object or image. Thus, method 400 does not rely on the determination of absolute positions and sizes of identified objects. Rather, it may rescale virtual objects and images according to changes in sizes of the identified objects so as to simplify calculations. However, in some examples, method 400 may estimate the absolute size of an identified object and absolute distances to the identified objects, relative to the camera, to estimate positions and sizes of identified objects at times in the future. Method 400 renders virtual objects and images based on the expected positions of identified objects in captured camera images and the new size values. Method 400 proceeds to 414.
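One plausible realization of such resizing, assumed here to scale the virtual object by the square root of the change in the identified object's pixel area (the disclosure does not specify the internals of Rsize), is sketched below.

```python
# Hypothetical resizing of a virtual object from a change in apparent object size.

def resize_virtual_object(base_vx, base_vy, pixels_prev, pixels_curr):
    """base_vx, base_vy: base size of the virtual object in pixels (Vx, Vy).
    pixels_prev, pixels_curr: number of pixels the identified object covers in
    the previous and current captured images (the depth cue from its size change).
    Returns the new (width, height) of the virtual object."""
    scale = (pixels_curr / pixels_prev) ** 0.5  # linear scale from an area ratio
    return base_vx * scale, base_vy * scale

# Example: object grows from 2000 to 3000 pixels; base virtual object 200 x 300.
print(resize_virtual_object(200, 300, 2000, 3000))  # -> (~244.9, ~367.4)
```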
At 420, method 400 judges if trajectories of identified objects from the most recent camera image capture have been determined. If so, the answer is yes and method 400 proceeds to 422. Otherwise, the answer is no and method 400 returns to 402. A “no” answer may be provided when the HUD system starts as shown in
At 422, method 400 judges if the timer is at a value that is equal to time t1, t2, t3-tn. The times t1, t2, t3-tn may be fixed times that are based on system hardware, data processing capabilities, and desired refresh rates. If method 400 judges at 422 that the value of the timer is equal to one of t1-tn, the answer is yes and method 400 proceeds to 424. Otherwise, the answer is no and method 400 returns to 402. It should be noted that the value of the timer does not have to equal exactly the time t1, t2, or tn to proceed to 424. For example, if the value of the timer is equal to or greater than time t1 and the virtual object or image position for time t1 has not been displayed, method 400 proceeds to 424. However, if the value of the timer is greater than time t1, but less than time t2 and the virtual object or image for time t1 has been displayed, method 400 may return to 402 until the value of the timer is greater than or equal to time t2.
At 424, method 400 displays the virtual objects or images based on the time t1-tn that the value of the timer is presently closest to. For example, if time t2 is 26 milliseconds (ms) and the amount of time in the timer is 26.01 milliseconds, method 400 displays the virtual objects or images based on the identified objects' positions at time t2 as determined at 430. Therefore, if the value of the timer is equal to time t2 (e.g., 26 ms), then the positions of virtual objects or images in the heads-up display field of projection may be determined via the mapping (Hx,Hy)=MAP(x,y) as previously described, where the values of x and y may be determined from positions pt1, pt2, pt3, or ptn previously mentioned. The sizes of the virtual objects or images may be determined as previously described at 430. Method 400 returns to 402.
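A small sketch of this timer comparison follows. The update schedule is illustrative, and the selection rule (the latest due, not-yet-displayed time) is an assumption consistent with the description above rather than the disclosure's exact logic.

```python
# Hypothetical selection of which precomputed update time t1..tn to display.

def select_update(timer_ms, update_times_ms, already_displayed):
    """update_times_ms: e.g., [13, 26, 39] for t1..t3 within one capture cycle.
    already_displayed: set of times whose virtual-object positions were shown.
    Returns the update time to display now, or None to keep waiting."""
    due = [t for t in update_times_ms if timer_ms >= t and t not in already_displayed]
    return max(due) if due else None

print(select_update(26.01, [13, 26, 39], {13}))     # -> 26 (display the t2 positions)
print(select_update(30.0, [13, 26, 39], {13, 26}))  # -> None (wait for t3)
```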
In this way, virtual objects or images may be positioned in a heads-up display field of view based on positions of identified objects in an image that is captured via a camera. The positions of the virtual objects or images in the heads-up display field may be based on estimates of where an identified object will be at a future time.
Referring now to
At 502, method 500 retrieves locations of identified objects in a first image and locations of the identified objects in a second image from step 406, the second image being the next image captured after the first image. The locations of the identified objects may be based on pixel locations of the identified objects in the captured image. The pixel locations in the captured image that includes the identified objects may be the same pixel locations as those of the camera sensor. For example, pixel (1000, 2000) of the camera sensor may correspond to pixel (1000, 2000) of the captured image. The locations of the identified objects in the first and second images may be the centers of the pixel areas that make up the identified objects. Method 500 proceeds to 504.
At 504, method 500 determines the x (e.g., horizontal axis) components of the identified objects from the locations of the identified objects in the first and second images. For example, if the horizontal pixel location of an identified object in a first image is 3000 and if the horizontal pixel location of the identified object in a second image is 3500, the x trajectory of the identified object may be x2−x1 or 3500−3000=500x. Method 500 proceeds to 506.
At 506, method 500 determines the y (e.g., vertical axis) components of the identified objects from the locations of the identified objects in the first and second images. For example, if the vertical pixel location of the identified object in the first image is 200 and the vertical pixel location of the identified object in the second image is 300, the y trajectory of the identified object may be y2−y1 or 300−200=100y. Method 500 proceeds to 508.
At 508, method 500 determines the z (e.g., depth axis) components of the identified objects from the locations of the identified objects in the first and second images. The z component of an identified object may be determined from a size of an identified object, and size of an identified object may be determined from an actual total number of pixels that represent or make up the image of the identified object on the camera sensor. For example, if the identified object occupies or covers 2000 pixels in the first image and if the identified object occupies or covers 3000 pixels in the second image, the z trajectory of the identified object may be z2−z1 or 3000−2000=1000z. Method 500 proceeds to exit.
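The trajectory estimate of steps 504 through 508 can be summarized in a few lines, using the example values given above; the function name is illustrative.

```python
# Compact sketch of method 500's trajectory estimate from two captured images.

def estimate_trajectory(center1, center2, pixel_count1, pixel_count2):
    """center1, center2: (x, y) pixel locations of the identified object in the
    first and second images. pixel_count1, pixel_count2: number of pixels the
    object covers in each image. Returns the (x, y, z) trajectory components."""
    x1, y1 = center1
    x2, y2 = center2
    dx = x2 - x1                      # horizontal component (step 504)
    dy = y2 - y1                      # vertical component (step 506)
    dz = pixel_count2 - pixel_count1  # depth cue from change in object size (step 508)
    return dx, dy, dz

print(estimate_trajectory((3000, 200), (3500, 300), 2000, 3000))  # -> (500, 100, 1000)
```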
In this way, method 500 may estimate trajectories of identified objects in images captured by the camera so that the size and position of virtual objects and images that are to be exhibited via a heads-up display in a heads-up display field of projection may be determined. Method 500 does not require absolute size or distance information of identified objects to be determined. As such, method 500 may reduce computational time. However, if desired, the sizes of identified objects and the position of the identified objects relative to camera position may be determined and used as a basis for estimating positions of the identified objects.
The methods of
The methods of
Referring now to
Referring now to
A first image 700 includes image 610 of object 234. Likewise, second image 750 includes image 610 of object 234. A center of object 234 is at a horizontal distance 702 away from the lower left corner (e.g., the reference location) of camera sensor 602 in first image 700. The center of object 234 is a horizontal distance 754 away from the lower left corner of camera sensor 602 in second image 750. The center of object 234 is a vertical distance 704 away from the lower left corner of camera sensor 602 in first image 700. The center of object 234 is a vertical distance 752 away from the lower left corner of camera sensor 602 in second image 750. Thus, the x (e.g., horizontal) position and y (e.g., vertical) position of object 234 have changed from first image 700 to second image 750. In addition, the size of object 234 has increased from the first image 700 to the second image 750, which may indicate that object 234 is moving toward camera 128. In this way, a position and size of object 234 in first and second images may be indicative of a trajectory and motion of object 234.
The description of embodiments has been presented for purposes of illustration and description. Suitable modifications and variations to the embodiments may be performed in light of the above description or may be acquired from practicing the methods. For example, unless otherwise noted, one or more of the described methods may be performed by a suitable device and/or combination of devices, such as the display controller 214 described with reference to
As used in this application, an element or step recited in the singular and preceded with the word “a” or “an” should be understood as not excluding plural of said elements or steps, unless such exclusion is stated. Furthermore, references to “one embodiment” or “one example” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. The terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements or a particular positional order on their objects. The following claims particularly point out subject matter from the above disclosure that is regarded as novel and non-obvious.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2021/073091 | 12/22/2021 | WO |