This relates generally to imaging systems, and more particularly, to imaging systems with multiple image sensors for generating and monitoring a wide angle view around the user of an imaging device.
Image sensors are commonly used in mobile electronic devices such as cellular telephones, cameras, and computers to capture images. The use of mobile electronic devices having image sensors has become common in public settings. For some mobile electronic device users, mobile electronic devices can pose a distraction that reduces the user's awareness of the surrounding environment. Distractions in public settings can cause accidents for the user such as causing the user to trip on or walk into obstacles, can generate a risk of theft of the mobile electronic device or other items belonging to the user, and can result in the user losing or leaving behind items due to a lack of awareness of their surroundings.
It would be desirable to provide mobile electronic devices with the ability to monitor a user's surroundings.
Electronic devices such as digital cameras, computers, cellular telephones, and other electronic devices may include image sensors that gather incoming light to capture an image. The image sensors may include arrays of image pixels. The pixels in the image sensors may include photosensitive elements such as photodiodes that convert the incoming light into image signals. Image sensors may have any number of pixels (e.g., hundreds, thousands, millions or more). A typical image sensor may, for example, have hundreds of thousands or millions of pixels (e.g., megapixels). Image sensors may include control circuitry such as circuitry for operating the image pixels, readout circuitry for reading out image signals corresponding to the electric charge generated by the photosensitive elements, and, if desired, other processing circuitry such as analog processing circuitry and digital processing circuitry. An image sensor may be coupled to additional processing circuitry such as circuitry on a companion chip to the image sensor, circuitry in the device that is coupled to the image sensor by one or more cables or other conductive lines, or external processing circuitry.
Processing circuitry 18 may include one or more integrated circuits (e.g., image processing circuits, microprocessors, storage devices such as random-access memory and non-volatile memory, etc.) and may be implemented using components that are separate from camera module 12 and/or that form part of camera module 12 (e.g., circuits that form part of an integrated circuit that includes image sensors 16 or an integrated circuit within module 12 that is associated with image sensors 16). Image sensor 16 may receive control signals from storage and processing circuitry 18 and may supply pixel data (e.g., image data that includes multiple pixel values) to storage and processing circuitry 18. Image data that has been captured by camera module 12 may be processed using storage and processing circuitry 18. Processed image data may, if desired, be provided to external equipment (e.g., a computer, external processing circuitry, a display, or other device) using wired and/or wireless communications paths coupled to processing circuitry 18.
If desired, device 10 may include communications circuitry such as communications circuitry 20 coupled to storage and processing circuitry 18. Communications circuitry 20 may support communicating between electronic device 10 and other electronic devices. For example, communications circuitry 20 may include radio-frequency circuitry (e.g., transceiver circuitry, baseband circuitry, front end circuitry, antenna circuitry, or any other desired radio-frequency circuitry) for conveying (e.g., transmitting and/or receiving) radio-frequency signals between storage and processing circuitry 18 and other electronic devices such as device 22 via communications link 30. Link 30 may be a wireless link or may be a wired link (e.g., a cable or other wired path) for conveying data between device 10 and device 22. Devices such as device 22 that are in communication with device 10 may sometimes be referred to herein as peripheral devices, peripheral imaging devices, peripheral camera devices, or peripheral imagers. Peripheral imaging device 22 may be a portable electronic device such as a camera, a cellular telephone, a video camera, a tablet computer, a laptop computer, a webcam, a security camera, or other imaging device or imaging system that captures digital image data.
As shown in
If desired, peripheral imaging device 22 may include storage and processing circuitry (not shown) for performing image processing operations on image data captured using camera module 24. If desired, peripheral imaging device 22 may include communications circuitry such as radio-frequency communications circuitry for supporting communications with electronic device 10. Peripheral imaging device 22 may transmit image data captured using camera module 24 to electronic device 10 over communications link 30. Communications circuitry 20 on electronic device 10 may receive image data from peripheral imaging device 22 and may pass the received image data to storage and processing circuitry 18.
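For illustration only, the sketch below shows one simplified way that frames captured by camera module 24 could be packaged and conveyed from peripheral imaging device 22 to device 10 over a stream-oriented link such as link 30. The length-prefixed framing format and the function names are assumptions made for this example and are not a defined protocol of the devices described herein.

```python
import socket
import struct

def send_frame(sock: socket.socket, frame_bytes: bytes, frame_index: int) -> None:
    """Send one captured frame from the peripheral device to the main device.

    A four-byte frame index and a four-byte payload length precede the pixel
    data so the receiver can reassemble frames from the byte stream.
    """
    header = struct.pack("!II", frame_index, len(frame_bytes))
    sock.sendall(header + frame_bytes)

def receive_frame(sock: socket.socket):
    """Receive one frame on the main device; returns (frame_index, frame_bytes)."""
    header = _read_exact(sock, 8)
    frame_index, length = struct.unpack("!II", header)
    return frame_index, _read_exact(sock, length)

def _read_exact(sock: socket.socket, count: int) -> bytes:
    # Keep reading until the requested number of bytes has arrived.
    data = b""
    while len(data) < count:
        chunk = sock.recv(count - len(data))
        if not chunk:
            raise ConnectionError("link closed before the full frame arrived")
        data += chunk
    return data
```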
Storage and processing circuitry 18 may perform image processing operations on image data captured by peripheral imaging electronic device 22. Storage and processing circuitry 18 may combine image data captured by camera module 12 with image data captured by camera module 24 when performing image processing operations.
If desired, camera module 12 may capture images from a first portion of a scene and camera module 24 may capture images from a second portion of the scene. Storage and processing circuitry 18 may combine image data received from peripheral device 22 with image data captured by image sensor 16 to perform image processing operations on both the first and second portions of the scene. For example, processing circuitry 18 may generate a wide-angle image of portions of a scene that are in front of and behind camera module 48 by combining image data captured by camera modules 24 and 48.
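As a simplified illustration of such a combining operation, the Python sketch below places a front frame and a rear frame side by side to form a single panoramic array. It assumes frames of equal height and channel count and is only a placeholder for the image-stitching that processing circuitry 18 may actually perform.

```python
import numpy as np

def combine_front_and_rear(front_frame: np.ndarray, rear_frame: np.ndarray) -> np.ndarray:
    """Form a simple wide-angle (panoramic) image from two camera frames.

    front_frame: image data from the device's camera module.
    rear_frame:  image data from the peripheral camera module.
    """
    if front_frame.shape[0] != rear_frame.shape[0]:
        raise ValueError("frames must share the same height for this simple stitch")
    # Mirror the rear view so that left/right match the user's perspective,
    # then concatenate the two halves into one panoramic array.
    rear_aligned = np.flip(rear_frame, axis=1)
    return np.concatenate([front_frame, rear_aligned], axis=1)
```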
The example of
Device 10 may be operated by a user. When operating electronic devices such as device 10 in certain environments such as in a public setting, device 10 may pose a distraction to the user and may reduce the user's awareness of their surroundings. Such distractions may pose a risk to the user of device 10 and may lead to accidents while operating device 10. For example, information displayed on device 10 may divert a user's attention from their surroundings and may cause accidents such as causing the user to trip or run into approaching objects or may lead to other hazards such as theft (e.g., theft of device 10 or other items belonging to the user of device 10). If desired, device 10 and/or peripheral device 22 may perform image processing operations on image data captured using camera module 12 and camera module 24 to monitor the user's surroundings even when the user is distracted with other tasks.
Device 10 may include a front-facing camera module 50 formed on the same side of device 10 as display 46 and a rear-facing camera module 48 formed on an opposing side of device 10 (e.g., rear-facing and front-facing camera modules such as camera modules 12 of
If desired, user 42 may use a peripheral imaging device that communicates with device 10 such as peripheral imaging device 22 of
If desired, peripheral device 22 may be placed on a wearable article such as wearable article 44 that is worn by user 42. For example, wearable article 44 may be an article of clothing worn by user 42 such as a necklace, hat, bracelet, tie, belt, or any other desired article of clothing. As another example, wearable article 44 may be a pin or clip that can be affixed to a portion of user 42 (e.g., affixed to the user's body or clothing) and/or that can be affixed to objects in the vicinity of user 42 such as a chair, bench, wheelchair, scooter, car, vehicle, bicycle, etc. In another example, wearable article 44 may be an accessory such as a purse, backpack, satchel, briefcase, handbag, or any other accessory carried by user 42. If desired, wearable article 44 may include multiple peripheral imaging devices 22 and/or user 42 may wear multiple articles 44 so that any desired number of peripheral camera modules 24 are used to capture images of the user's surroundings.
In order to capture image data from as much of environment 40 as possible, rear-facing camera module 48 and/or peripheral camera module 24 may be provided with a wide-angle lens. The wide-angle lens formed on camera module 48 may focus light from substantially all of the portion of environment 40 located in front of user 42 and the wide-angle lens formed on peripheral camera module 24 may focus light from substantially all of the portion of environment 40 located behind user 42. In this way, device 10 and peripheral device 22 may capture image data from all sides of user 42 and may monitor all sides of user 42 for potential hazards. As an example, camera module 48 and peripheral camera module 24 may generate image data in response to light received from all 360 degrees around user 42 or from any desired range of angles around user 42 (e.g., camera modules 48 and 24 may cover 360 degrees around user 42, may cover 320 degrees around user 42, may cover 340 degrees around user 42, or may cover any other desired portion of environment 40 around user 42). In general, the coverage of environment 40 provided by camera modules 48 and 24 may depend on the field of view of the wide-angle lenses formed on modules 48 and 24, on the positioning of modules 48 and 24 relative to user 42, and on the orientation of modules 48 and 24.
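The sketch below illustrates one way the cumulative angular coverage of a set of camera modules could be estimated from each module's pointing direction and lens field of view. The sampling approach, the function name, and the example headings are assumptions made for illustration rather than part of the systems described herein.

```python
def angular_coverage(modules, resolution=1.0):
    """Estimate how many degrees around the user are covered by a set of camera modules.

    modules: list of (heading_deg, field_of_view_deg) pairs, where heading is the
             direction the module points relative to the user (0 = straight ahead).
    Returns the covered angle in degrees (0 to 360), approximated by sampling
    the circle at the given resolution.
    """
    covered = 0
    angle = 0.0
    while angle < 360.0:
        for heading, fov in modules:
            # Smallest signed difference between the sample angle and the module heading.
            diff = (angle - heading + 180.0) % 360.0 - 180.0
            if abs(diff) <= fov / 2.0:
                covered += 1
                break
        angle += resolution
    return covered * resolution

# Example: a forward-facing module and a rear-facing module, each with a 170-degree lens.
print(angular_coverage([(0.0, 170.0), (180.0, 170.0)]))  # roughly 340 degrees of coverage
```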
Peripheral camera module 24 worn by user 42 may capture image data from portions of environment 40 that are not within field of view 60 of camera module 48. Peripheral camera module 24 may include a wide angle lens having an angular field of view θ in the X-Y plane. In the example of
In the example of
The example of
Camera module 48 on device 10 may capture image data within field of view 60 from environment 40. Processing circuitry 18 on device 10 (as shown in
As another example, device 10 may monitor the location of objects of interest such as stationary object 70. Object 70 may, for example, be a valuable object such as a purse, keys, wallet, cell phone, tablet computer, laptop computer, a pet, a child, or other person or object that can be easily lost or stolen. Device 10 may continuously monitor the location of object 70 based on the captured image data to ensure that object 70 is always nearby user 42. If processing circuitry 18 determines that object 70 is no longer in sight, device 10 may issue an alert to user 42 to notify user 42 of the missing object.
Peripheral imaging device 22 may capture image data within field of view 62 from environment 40. Peripheral imaging device 22 may transmit the captured image data to device 10 for processing. In another suitable arrangement, peripheral imaging device 22 may perform processing operations on the image data using processing circuitry located on peripheral imaging device 22. Processing circuitry 18 on device 10 may process the captured image data to track potentially hazardous objects located behind user 42 that are moving relative to user 42, such as object 74, and/or to track objects of interest that are located behind user 42 such as object 76. Processing circuitry 18 may alert user 42 of potential hazards or when an object of interest is lost.
At step 100, processing circuitry 18 may determine a direction of motion of user 42 and/or camera module 48. For example, processing circuitry 18 may analyze frames of image data captured by camera module 48 and/or peripheral camera module 24 to determine a direction of motion of user 42 relative to environment 40. If desired, processing circuitry 18 may determine a direction of motion based on inertial data generated by inertial sensors in device 10 and/or device 22. As an example, processing circuitry 18 may determine that user 42 is walking in direction 52 as shown in
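For illustration, the sketch below estimates a direction of motion from features tracked between two successive frames; the negated average image-plane flow of background features approximates the user's motion. The data format and function name are assumptions and serve only as a stand-in for the motion analysis that processing circuitry 18 may perform.

```python
import numpy as np

def estimate_motion_direction(feature_tracks):
    """Estimate the user's direction of motion from tracked background features.

    feature_tracks: list of (previous_xy, current_xy) pixel coordinate pairs for
    features matched between two successive frames of the front-facing camera.
    The background appears to move opposite to the user, so the negated mean
    displacement gives a rough direction of motion in the image plane.
    """
    if not feature_tracks:
        return None
    displacements = np.array([[cx - px, cy - py]
                              for (px, py), (cx, cy) in feature_tracks], dtype=float)
    mean_flow = displacements.mean(axis=0)
    direction = -mean_flow
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else None
```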
At step 102, processing circuitry 18 may identify objects in the image data that are approaching user 42. For example, processing circuitry 18 may process successive frames of image data to identify objects that move closer to user 42 between frames. Objects identified by processing circuitry 18 as approaching user 42 may include stationary objects that user 42 is approaching in environment 40 or may include moving objects that are actively approaching user 42. As an example, processing circuitry 18 may identify object 66 in image data captured by camera module 48 that is approaching user 42 from the front and may identify object 74 in image data captured by peripheral camera module 24 that is approaching user 42 from behind, as shown in
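The following simplified sketch flags objects whose apparent size grows between successive frames as approaching objects; the bounding-box representation and the growth threshold are assumptions made for the example.

```python
def approaching_objects(prev_boxes, curr_boxes, growth_threshold=1.05):
    """Flag tracked objects whose apparent size grows between successive frames.

    prev_boxes, curr_boxes: dictionaries mapping an object identifier to a
    bounding box (x, y, width, height) in the previous and current frame.
    An object whose area grows by more than growth_threshold is treated as
    approaching (either it is moving toward the user or the user toward it).
    """
    approaching = []
    for obj_id, (_, _, w_now, h_now) in curr_boxes.items():
        if obj_id not in prev_boxes:
            continue
        _, _, w_prev, h_prev = prev_boxes[obj_id]
        if w_prev * h_prev == 0:
            continue
        if (w_now * h_now) / (w_prev * h_prev) > growth_threshold:
            approaching.append(obj_id)
    return approaching
```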
At step 104, processing circuitry 18 may determine whether the identified objects that are approaching user 42 are a potential hazard. For example, processing circuitry 18 may determine that the identified object is a potential hazard if user 42 is likely to collide with the identified object (e.g., based on the identified direction of motion of user 42), is likely to trip over the identified object, etc. As another example, processing circuitry 18 may identify a speed of the moving object relative to user 42 and may determine that the object is a potential hazard if the speed of the object relative to user 42 is greater than a threshold speed. In another example, processing circuitry 18 may identify the size of the moving object and may determine that the object is a potential hazard if the size of the object is greater than a threshold size. In yet another example, processing circuitry 18 may identify a shape or appearance of the moving object and may determine that the moving object is a potential hazard if the shape or appearance of the moving object is sufficiently similar to the shape or appearance of known hazards. In general, processing circuitry 18 may perform any desired combination of these methods in determining whether the approaching object is a potential hazard. Determining the speed and size of the approaching object may allow processing circuitry 18 to determine whether the approaching object is a moving car (which could be a potential hazard) or some other less dangerous moving object, for example.
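As a simplified illustration of combining such tests, the sketch below treats an approaching object as a potential hazard when its relative speed or apparent size exceeds a threshold. The function name and threshold values are placeholders; in practice the thresholds would be tuned per device and could be combined with shape or appearance matching against known hazards.

```python
def is_potential_hazard(relative_speed, apparent_size,
                        speed_threshold=2.0, size_threshold=0.05):
    """Decide whether an approaching object is a potential hazard.

    relative_speed: estimated speed of the object relative to the user
                    (e.g., in meters per second).
    apparent_size:  fraction of the frame occupied by the object (0 to 1).
    """
    too_fast = relative_speed > speed_threshold   # e.g., a moving car
    too_large = apparent_size > size_threshold    # e.g., a nearby obstacle
    return too_fast or too_large
```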
If processing circuitry 18 determines that the object is not a potential hazard, processing may loop back to step 100 as shown by path 106 to continue monitoring environment 40 for potential hazards. If processing circuitry 18 determines that the object is a potential hazard, processing may proceed to step 110 as shown by path 108.
At step 110, device 10 may issue a warning (e.g., an alert or alarm) to user 42 to inform user 42 of the hazard. For example, device 10 may issue an audio warning (e.g., a ringing alarm tone or other noise), a visual warning (e.g., device 10 may display a visual warning to user 42 using display 46), a haptic warning (e.g., by using vibrator circuitry to vibrate device 10), and/or any desired combination of visual, audio, and haptic warnings. If desired, device 10 may display image data using display 46 that shows the hazardous object to user 42. The image data displayed using display 46 may include, for example, the image data that caused processing circuitry 18 to determine that the moving object was hazardous, and/or live image data of the object. If desired, display 46 may highlight or otherwise point out the object while the image data is being displayed to user 42. Device 10 may identify a location of the hazardous object and may identify evasive procedures that the user may take to avoid or mitigate the hazard. In this way, user 42 may become aware of the presence of the potentially hazardous object and the user may react or take other preventative measures to eliminate the potential hazard posed by the object.
As an example, processing circuitry 18 may determine that object 74 (as shown in
At step 120, camera module 48 and peripheral camera module 24 may begin capturing frames of image data from environment 40. Peripheral device 22 may transmit captured image data to device 10 for processing. Device 10 may temporarily store (e.g., sample) image data captured by camera module 48 and peripheral camera module 24 at storage and processing circuitry 18 at a selected sampling rate. For example, processing circuitry 18 may sample recent frames of image data captured by camera modules 48 and 24 and may overwrite the stored frames on a continuous basis (e.g., so that only recent frames of image data are stored on processing circuitry 18 at any given time).
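One simplified way to hold only recent frames at a selected sampling rate is a fixed-capacity buffer that overwrites its oldest entries, as in the sketch below. The class name, rates, and capacity are illustrative assumptions; the snapshot and rate-increase operations anticipate the frame retrieval and faster sampling described in the steps that follow.

```python
from collections import deque
import time

class FrameSampler:
    """Keep only the most recent frames, overwriting older ones, at a chosen rate."""

    def __init__(self, sampling_rate_hz=2.0, capacity=32):
        self.interval = 1.0 / sampling_rate_hz
        self.frames = deque(maxlen=capacity)   # oldest frames fall off automatically
        self._last_sample = 0.0

    def offer(self, frame, timestamp=None):
        """Store the frame only if enough time has passed since the last stored frame."""
        now = timestamp if timestamp is not None else time.monotonic()
        if now - self._last_sample >= self.interval:
            self.frames.append((now, frame))
            self._last_sample = now

    def snapshot(self):
        """Return the recent frames so they are not lost when new frames overwrite them."""
        return list(self.frames)

    def increase_rate(self, factor=2.0):
        """Sample more densely, e.g., after an object of interest is lost from view."""
        self.interval /= factor
```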
At step 122, processing circuitry 18 may identify an object within the field of view of camera module 48 and/or peripheral camera module 24 as an object of interest. As an example, user 42 of device 10 may select an object in the image data to identify as an object of interest (e.g., by providing a user input to device 10). If desired, user 42 may point camera module 48 and/or peripheral camera module 24 at the object of interest to ensure that the object of interest is within the field of view of camera module 48 and/or peripheral camera module 24, and may identify the object of interest as such by providing a user input to device 10. In another suitable arrangement, processing circuitry 18 may autonomously identify an object of interest in the image data.
At step 124, processing circuitry 18 may track the identified object of interest through the fields of view of camera module 48 and peripheral camera module 24 (e.g., using an interest point tracking algorithm or any other desired tracking algorithm). If desired, processing circuitry 18 may track the position of the object of interest, may track its position relative to the background of environment 40, may determine whether the object of interest is moving, and/or may monitor the angular subtense of the object of interest (e.g., the apparent size of the object of interest). If the object of interest becomes obscured from view of camera module 48 and camera module 24, leaves the field of view of camera module 48 and camera module 24, exhibits excessive motion relative to the background of environment 40, and/or excessively diminishes in size (e.g., indicating that the object is becoming far away from user 42), processing may proceed to step 126.
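For illustration, the sketch below combines simple loss criteria of this kind; the observation format, field names, and thresholds are assumptions made for the example rather than an actual tracker interface.

```python
def object_lost(track_history, max_background_motion=50.0, min_relative_size=0.4):
    """Decide whether a tracked object of interest should trigger the lost-object path.

    track_history: list of per-frame observations, newest last; each observation is
    either None (object not found in that frame) or a dictionary with keys
    'position' (pixels), 'size' (fraction of frame), and 'background_motion'
    (pixels of motion relative to the scene background).
    """
    if not track_history:
        return False
    latest = track_history[-1]
    if latest is None:
        # Object is obscured or has left the combined field of view.
        return True
    first_size = next((obs['size'] for obs in track_history if obs is not None), None)
    if first_size and latest['size'] < min_relative_size * first_size:
        # The object's angular subtense has shrunk, suggesting it is far from the user.
        return True
    if latest['background_motion'] > max_background_motion:
        # The object is moving rapidly relative to the background of the scene.
        return True
    return False
```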
At step 126, processing circuitry 18 may retrieve recent frames of stored image data so that the recent frames are not overwritten by subsequently captured image data. The recent frames of image data may show the object of interest prior to when the object was lost from the field of view of camera modules 48 and 24. The recent frames of image data may, for example, be useful in determining how the object of interest was lost or stolen and may be useful in determining where the object of interest has gone.
At step 128, processing circuitry 18 may begin storing subsequent frames of image data captured by camera modules 48 and 24 at a higher sampling rate (e.g., a greater sampling rate than the selected sampling rate with which the frames of image data are normally sampled). In this way, more image data may be available for inspection immediately after the object of interest was lost from the field of view of camera modules 48 and 24 (e.g., the subsequently captured frames of image data may be useful in determining how the object of interest was lost or stolen or in determining where the object of interest has gone).
At step 130, device 10 may issue a warning (e.g., an alert or alarm) to user 42 to inform user 42 that the object of interest has been lost from the field of view. For example, device 10 may issue an audio warning (e.g., a ringing alarm tone or other noise), a visual warning (e.g., may display a visual warning to user 42 using display 46), a haptic (tactile) warning (e.g., by using vibrator circuitry to vibrate device 10), and/or any desired combination of visual, audio, and haptic warnings to alert the user that the object of interest has been lost from view. After receiving the warning, the user may assess environment 40 to confirm whether the object of interest has been lost or stolen.
If desired, user 42 may reset the warning (e.g., using a user input to device 10) and processing may loop back to step 122 as shown by path 132 to resume object tracking operations. In one example, the object of interest may have moved into a blind spot 64 (as shown in
At step 136, device 10 may replay the image data for user 42 (e.g., using display 46 as shown in
As an example, processing circuitry 18 may identify object 70 as shown in
The example of
If desired, user 42 may monitor environment 40 using any desired number of peripheral imaging devices 22. For example, user 42 may operate two peripheral imaging devices 22 in conjunction with electronic device 10 and each peripheral imaging device 22 may monitor objects of interest and potential hazards within a portion of environment 40. In the example where two peripheral imaging devices 22 are used, each peripheral imaging device 22 may have a wide-angle lens with a 120 degree field of view, for example (e.g., so that the three camera modules may have a total angular field of view of 360 degrees to cover all of the user's surroundings). In general, the wide-angle lenses of camera module 48 and each peripheral camera module 24 may cumulatively cover at least 320 degrees around user 42 in order to provide suitable monitoring capabilities of the surroundings of user 42 (e.g., camera module 48 and peripheral camera module 24 may provide a near surround-view of 320 degrees or more). In this way, user 42 may be provided with situational awareness of environment 40 even when occupied with distractions such as content displayed on device 10, so as to mitigate possible hazards to user 42 posed by objects within environment 40 and to prevent theft or loss of valuables or other objects of interest.
If desired, camera modules 48 and 24 may collect depth information from environment 40 (e.g., using phase detection pixels, time-of-flight pixels, or separate sensors such as ultrasonic detectors). Depth information collected by modules 48 and 24 may, for example, include information about the distance between objects in environment 40 and camera modules 48 and 24. In this scenario, the combined field of view (near-surround view) of camera modules 48 and 24 may be processed by processing circuitry 18 and/or displayed on device 10 (e.g., using display 46) or on remote display devices as a “birds-eye” view that shows user 42 from above (e.g., a top down view of user 42 and the user's immediate surroundings). Such a birds-eye view may be useful for monitoring the user's surroundings for hazards and objects of interest and may, when displayed on device 10, provide the user with a broad view of portions of environment 40 that would otherwise be unseen by the user.
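As a simplified illustration of such a birds-eye view, the sketch below projects detected objects, given a bearing around the user and a depth-derived distance, into a top-down occupancy grid centered on the user. The grid representation, function name, and parameter values are assumptions made for the example.

```python
import numpy as np

def birds_eye_map(detections, grid_size=64, max_range_m=10.0):
    """Project detected objects into a top-down occupancy grid around the user.

    detections: list of (bearing_deg, distance_m) pairs, where bearing is measured
    clockwise from straight ahead (like a compass heading) and distance comes from
    depth information (e.g., phase-detection or time-of-flight pixels). The user
    sits at the center of the returned grid.
    """
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    center = grid_size // 2
    scale = center / max_range_m
    for bearing_deg, distance_m in detections:
        if distance_m > max_range_m:
            continue
        theta = np.deg2rad(bearing_deg)
        # Straight ahead maps to "up" in the top-down view; to the right maps rightward.
        col = int(round(center + np.sin(theta) * distance_m * scale))
        row = int(round(center - np.cos(theta) * distance_m * scale))
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row, col] = 1
    return grid
```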
The processor system 300 generally includes a lens 396 for focusing an image on pixel array 201 of device 200 when a shutter release button 397 is pressed, and a central processing unit (CPU) 395, such as a microprocessor, which controls camera functions and one or more image flow functions and which communicates with one or more input/output (I/O) devices 391 over a bus 393. Imaging device 200 also communicates with the CPU 395 over bus 393. The system 300 also includes random access memory (RAM) 392 and can include removable memory 394, such as flash memory, which also communicates with CPU 395 over the bus 393. Imaging device 200 may be combined with the CPU, with or without memory storage, on a single integrated circuit or on a different chip. Although bus 393 is illustrated as a single bus, it may be one or more busses or bridges or other communication paths used to interconnect the system components.
Various embodiments have been described illustrating imaging systems and methods of operating imaging systems having multiple camera modules with wide-angle lenses for capturing image data from substantially all of a user's surroundings to monitor the surroundings for potential hazards and objects of interest.
The imaging system may include a mobile electronic imaging device such as a cellular telephone, laptop computer, tablet computer, or other portable electronic device. The imaging system may include one or more peripheral imaging devices that are remote to (e.g., formed separately from) the mobile electronic imaging device. The mobile electronic imaging device may include a first camera module having a first wide-angle lens and the peripheral imaging device may include a second camera module having a second wide-angle lens. The first and second wide-angle lenses may have angular fields of view of between 120 and 180 degrees, for example (e.g., the first and second wide-angle lenses may have a total angular field of view or near surround-view of between 320 and 360 degrees).
The first camera module may capture a first set of image data in response to light received from a first portion of a scene and the second camera module may capture a second set of image data in response to light received from a second portion of the scene (e.g., the first and second portions of the scene may overlap or may be different depending on the orientation of the first and second camera modules). As an example, the first portion of the scene may be located in front of the first camera module whereas the second portion of the scene may be located behind the first camera module (e.g., behind a user of the imaging system). The second camera module may wirelessly transmit the captured second set of image data to processing circuitry on the mobile electronic device. The processing circuitry may combine the first and second sets of captured image data to generate wide-angle image data (e.g., near-surround view image data of the user's surroundings).
The processing circuitry may track objects in the scene using the captured first and second sets of image data (e.g., using the wide-angle image data). If desired, the processing circuitry may identify a direction of motion of the first camera module and may identify and track objects in the scene that are approaching the first and/or second camera modules using the first and second sets of captured image data. The processing circuitry may determine whether the approaching object is hazardous based on the direction of motion of the first camera module and the first and second sets of image data and may issue a warning (e.g., a visual, audio, and/or haptic warning) to a user in response to determining that the approaching object is hazardous.
If desired, the processing circuitry may determine whether the tracked object leaves the total field of view of the first and second wide-angle lenses (e.g., whether the tracked object is no longer within the field of view of the first and second wide-angle lenses) and may issue an alert to a user in response to determining that the tracked object has left the fields of view of the first and second wide-angle lenses. If desired, the processing circuitry may store the first and second sets of image data at a first sampling rate and may increase the sampling rate in response to determining that the tracked object has left the fields of view of the first and second wide-angle lenses.
The foregoing is merely illustrative of the principles of this invention and various modifications can be made by those skilled in the art without departing from the scope and spirit of the invention. The foregoing embodiments may be implemented individually or in any combination.