The present invention relates to an image capture apparatus that is wearable on the body of a user.
There are known wearable cameras capable of performing shooting while being worn on the body of a user, so as to enable activities of the user to be recorded. In addition, there are known ambisonic microphones capable of recording a 360-degree spherical sound field using a plurality of microphones disposed at the front, rear, left, and right, so that sound with a realistic sensation can be recorded (Japanese Patent Laid-Open No. 2019-54440).
However, when a wearable camera is worn on the body of a camera operator to perform shooting, and sound is obtained using an ambisonic microphone constituted by a plurality of microphones, there is the issue that the microphones disposed on the front, rear, left, and right of the body of the camera operator become conspicuous. In addition, there is the issue that, when a plurality of microphones are mounted to a camera, the size of the camera increases.
The present invention has been made in consideration of the aforementioned problems, and realizes techniques that can make an ambisonic microphone inconspicuous without increasing the size of an apparatus. In order to solve the aforementioned problems, the present invention provides an image capture apparatus that includes an annular case, and is wearable on a neck of a user, comprising: a main body that includes an image capture circuit; a mount portion that is connected to the main body; and a first microphone, a second microphone, a third microphone, and a fourth microphone, wherein the first microphone is provided at a first position corresponding to the same direction as a shooting direction of the image capture circuit, the second microphone is provided at a second position corresponding to the opposite direction to the shooting direction, the third microphone is provided at a third position on a second line that intersects a first line that connects the first position and the second position, and the fourth microphone is provided at a fourth position that is on the second line, and is opposite to the third position.
According to the present invention, it is possible to make an ambisonic microphone inconspicuous without increasing the size of an apparatus.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note that the following embodiments are not intended to limit the scope of the claimed invention. Multiple features are described in the embodiments, but limitation is not made to an invention that requires all such features, and multiple such features may be combined as appropriate. Furthermore, in the attached drawings, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
First, an external configuration of the wearable camera 1 according to the present embodiment will be described.
The wearable camera (hereinafter, a “camera”) 1 includes a camera body 10, a mount portion 80, and a battery portion 90. The mount portion 80 is a necklace-type annular member for connecting the camera body 10 and the battery portion 90 to each other and for wearing the camera body 10 on the neck of the user. The camera body 10 includes a face detection unit 13, a start button 14, a stop button 15, a shooting lens 16, and an indicator 17. Moreover, the camera body 10, the mount portion 80, and the battery portion 90 constitute an annular case. When the camera 1 is worn on the neck of the user, the annular case of the camera 1 is disposed so as to surround the neck of the user.
The face detection unit 13 detects position information regarding the jaw and the base of the neck of the user wearing the camera 1, by irradiating a lower portion of the face of the user (particularly, the jaw and the base of the neck) with infrared light, and capturing an image of the reflected light using an infrared light image sensor built into the camera body 10.
The start button 14 is an operation member for instructing the camera 1 to start shooting of a moving image. The stop button 15 is an operation member for instructing the camera 1 to stop shooting of a moving image. The shooting lens 16 is an optical member for forming a subject image on an image sensor 42 built into the camera body 10. The indicator 17 is a light emitting diode (LED) that displays an operation state of the camera 1 by emitting light.
The mount portion 80 is provided with a plurality of microphones (hereinafter, mikes) 19L, 19R, 19F, and 19B for collecting ambient sound on the front, rear, left, and right sides of the user when the camera 1 is worn. The mikes 19L, 19R, 19F, and 19B are omnidirectional, and together constitute an ambisonic mike. The mike 19L is disposed on the left side of the neck of the user who is wearing the camera 1, obtains sound on the left side of the user, and converts the obtained sound into an analog sound signal. The mike 19R is disposed on the right side of the neck of the user who is wearing the camera 1, obtains sound on the right side of the user, and converts the obtained sound into an analog sound signal. The mike 19F is disposed on the front side of the neck of the user who is wearing the camera 1, obtains sound on the front side of the user, and converts the obtained sound into an analog sound signal. The mike 19B is disposed on the rear side of the neck of the user who is wearing the camera 1, obtains sound on the rear side of the user, and converts the obtained sound into an analog sound signal.
The mike 19L and the mike 19R are disposed on the line of a first axis AX1 that is defined relative to the camera 1. The first axis AX1 corresponds to a line that connects the mike 19L and the mike 19R. The mike 19F and the mike 19B are disposed on the line of a second axis AX2 that intersects the first axis AX1. The second axis AX2 corresponds to a line that connects the mike 19F and the mike 19B. The first axis AX1 and the second axis AX2 are orthogonal to each other as viewed from above.
Analog sound signals output from the mikes 19L, 19R, 19F, and 19B are processed by a sound processing unit 104, which will be described later.
The mikes 19L, 19R, 19F, and 19B are disposed so as to surround the neck of the user in a state where the camera 1 is worn on the neck of the user. In a state where the camera 1 is worn on the neck of the user, the mike 19F is disposed on the front side of the neck of the user, the mike 19B is disposed on the rear side of the neck of the user, the mike 19L is disposed on the left side of the neck of the user, and the mike 19R is disposed on the right side of the neck of the user.
The mike 19L is disposed on a left side surface portion of the mount portion 80 in a state where the camera 1 is being worn, so as to be able to collect sound on the left side of the user who is wearing the camera 1. The mike 19R is disposed on a right side surface portion of the mount portion 80 in a state where the camera 1 is being worn, so as to be able to collect sound on the right side of the user. The mike 19F is disposed on a front surface portion of the camera body 10 in a state where the camera 1 is being worn, so as to be able to collect sound in the surroundings on the front side of the neck of the user. The mike 19B is disposed on a rear surface portion of the battery portion 90 in a state where the camera 1 is being worn, so as to be able to collect sound in the surroundings on the rear side of the neck of the user. Hereinafter, the mike 19L may also be referred to as a “left mike”, the mike 19R may also be referred to as a “right mike”, the mike 19F may also be referred to as a “front mike”, and the mike 19B may also be referred to as a “rear mike”.
Openings (sound holes) for guiding ambient sound to the respective mikes 19L, 19R, 19F, and 19B are provided in the mount portion 80. The mikes 19L, 19R, 19F, and 19B are exposed to the outside through the corresponding openings.
The mount portion 80 and the camera body 10 of the camera 1 are configured to allow the user to easily wear and remove the camera 1, due to a connection/disconnection mechanism (not illustrated) provided at the two end portions of the camera body 10. In view of this, the camera 1 is worn on the neck of the user by the user hanging the mount portion 80 on the neck in a state where the mount portion 80 is detached from the camera body 10, and connecting the two end portions of the mount portion 80 to the two end portions of the camera body 10. The camera 1 is worn such that the battery portion 90 is positioned on the back side of the neck of the user, and the camera body 10 is positioned on the front side of the neck of the user. The camera body 10 is biased toward the chest immediately under the neck of the user by the mount portion 80. Accordingly, the camera body 10 is positioned near front portions of the clavicles of the user, and the face detection unit 13 is positioned below the jaw of the user. An infrared light collecting lens 26 will be described later.
By disposing the camera body 10 on the front side of the neck of the user and disposing the battery portion 90 on the back side, the weight of the camera 1 can be distributed, thus relieving fatigue of the user who is wearing the camera 1, and reducing displacement of the camera 1 when the user moves.
Note that, in the present embodiment, an example is illustrated in which the camera 1 is worn such that the camera body 10 is positioned near the front portions of the clavicles of the user, but there is no limitation thereto. The camera 1 may be worn at any position on the body of the user other than the neck, as long as the face direction detection unit 20 of the camera 1 can detect the direction in which the user is performing observation (hereinafter referred to as the “observation direction”), and the direction in which a subject is present can be detected.
The battery portion 90 includes a charge cable connection portion 91, extension buttons 92L and 92R, and a notch portion 93.
The charge cable connection portion 91 is an adapter for connecting the camera 1 to a charge cable for connection to an external power supply (not illustrated) when charging the battery portion 90. The battery portion 90 is charged using power supplied from the external power supply via the charge cable, and supplies the power to the camera body 10.
The extension buttons 92L and 92R are operation members for extending/contracting band portions 82L and 82R of the mount portion 80. The extension button 92L is capable of adjusting the length of the left band portion 82L. The extension button 92R is capable of adjusting the length of the right band portion 82R. Note that, in the present embodiment, a configuration is illustrated in which the lengths of the band portions 82L and 82R are individually adjusted using the extension buttons 92L and 92R, but a configuration may also be adopted in which the lengths of the band portions 82L and 82R can be adjusted at the same time using a single button. Hereinafter, the band portions 82L and 82R are collectively referred to as “band portions 82”.
The notch portion 93 is a shaped portion for avoiding abutment with the spine portion of the neck of the user such that the battery portion 90 does not interfere with the spine portion. This makes it possible to reduce discomfort when the camera 1 is worn, and to prevent the camera 1 from moving in the left-right direction during shooting or movement.
Next, an external configuration of the display device 2 according to the present embodiment will be described.
The A button 202 is an operation member for switching on or off a power supply for the display device 2, and accepts an instruction to switch on or off the power supply when a long press is performed, and accepts an instruction to start or end other processing when a short press is performed.
The display unit 203 is a display device constituted by an LCD or organic EL display for displaying an image transmitted from the camera 1 and displaying a menu screen for performing setting of the camera 1. In the present embodiment, a touch sensor is provided integrally with the display unit 203, making it possible to accept a touch operation performed on a screen (for example, the menu screen) that is being displayed. The B button 204 is an operation member for instructing execution of calibration processing using a calibrator 3, which will be described later.
The face sensor 206 detects the shape and features of the face of the user who is observing the display device 2, and the direction in which the user is performing observation. The face sensor 206 can be realized by a structured light sensor, a ToF (time-of-flight) sensor, or a millimeter-wave radar, for example.
The angular velocity sensor 207 is a gyro sensor for detecting movement (rotation and direction) of the display device 2 as angular velocities in three axial directions orthogonal to one another. The acceleration rate sensor 208 detects the posture of the display device 2 by detecting the gravity direction of the display device 2. In the display device 2, calibration processing, which will be described later, is executed.
The display device 2 according to the present embodiment can realize the system according to the present embodiment using a smartphone or the like whose firmware supports or complies with the firmware of the camera 1. Note that it is also possible to realize the system according to the present embodiment by the firmware of the camera 1 supporting the applications and OS of the smartphone that serves as the display device 2.
The mount portion 80 includes a right mount portion 80R and a left mount portion 80L. The right mount portion 80R is positioned on the right side of the body of the user, and is connected to the right end portion of the camera body 10. The left mount portion 80L is positioned on the left side of the body of the user, and is connected to the left end portion of the camera body 10. The right mount portion 80R includes an angle holding portion 81R for holding the angle between the right end portion of the camera body 10 and the right mount portion 80R, the angle holding portion 81R being made of a hard material, and a band portion 82R made of a flexible material. The left mount portion 80L includes an angle holding portion 81L for holding the angle between the left end portion of the camera body 10 and the left mount portion 80L, the angle holding portion 81L being made of a hard material, and a band portion 82L made of a flexible material.
The right band portion 82R includes a right connection portion 83R and a right electric cable 84R. The left band portion 82L includes a left connection portion 83L and a left electric cable 84L. The right connection portion 83R is a connection surface between the angle holding portion 81R and the band portion 82R, and has a cross-sectional shape that is not an exact circle; here, the cross-section has an oblong shape. The left connection portion 83L is a connection surface between the angle holding portion 81L and the band portion 82L, and likewise has an oblong cross-sectional shape. The right connection portion 83R and the left connection portion 83L have a positional relation in which the distance between them decreases toward the upper side.
The right electric cable 84R is disposed in the band portion 82R, and electrically connects the battery portion 90, the right mike 19R, and the camera body 10 to one another. The left electric cable 84L is disposed in the band portion 82L, and electrically connects the battery portion 90, the left mike 19L, and the camera body 10 to one another. The electric cables 84R and 84L are electrical paths for supplying power from the battery portion 90 to the camera body 10, and for transmitting/receiving signals to/from an external device.
The power button 11 is an operation member for switching power-on or power-off of the camera 1. In the present embodiment, the power button 11 is a slide lever, but there is no limitation thereto. The power button 11 may also be a button to be pressed, or may also be configured integrally with a slide cover (not illustrated) of the shooting lens 16, for example.
The shooting mode button 12 is an operation member for changing the shooting mode of the camera 1. In the present embodiment, the shooting mode can be switched to one of a still image mode, a moving image mode, and a preset mode. In the present embodiment, the shooting mode button 12 is a slide lever that enables one of “Photo”, “Normal”, and “Pre” to be selected. The shooting mode changes to the still image mode when “Photo” is set, the shooting mode changes to the moving image mode when “Normal” is set, and the shooting mode changes to the preset mode when “Pre” is set. Note that, as long as the shooting mode button 12 is a button for enabling the shooting mode to be changed, there is no limitation to the present embodiment. Three buttons “Photo”, “Normal”, and “Pre” may be provided as the shooting mode button 12, for example.
When the camera body 10 is biased to the chest of the user immediately below the neck, the chest connection pads 18R and 18L abut on the neck or chest of the user. The camera body 10 has an outer shape such that the entire length in the horizontal direction (left-right direction) thereof is longer than the entire length in the vertical direction (up-down direction) thereof when the camera is worn on the neck of the user, and the chest connection pads 18 are disposed near the two end portions of the camera body 10. Accordingly, it is possible to reduce blurring due to the camera body 10 rotating in the left-right direction during shooting. In addition, the chest connection pads 18 make it possible to prevent the power button 11 and the shooting mode button 12 from coming into contact with the body of the user. Furthermore, the chest connection pads 18 have a role of preventing heat from being transmitted to the body of the user when the temperature of the camera body 10 increases due to long-time shooting, and also have a role of adjusting the angle of the camera body 10.
The face detection unit 13 is provided at a central portion of the upper surface portion of the camera body 10, and the chest connection pads 18 protrude from the two end portions of the camera body 10.
An infrared light detector 27 is disposed below the face detection unit 13. The infrared light detector 27 includes an infrared light emission unit 22 and an infrared light collecting lens 26. The infrared light emission unit 22 is an infrared light LED that projects infrared light 23 toward the lower portion of the face of the user.
A left angle adjustment button 85L is an operation member for adjusting the angle of the left angle holding portion 81L relative to the left end portion of the camera body 10. Note that, similarly, a right angle adjustment button 85R (not illustrated) for adjusting the angle of the right angle holding portion 81R relative to the right end portion of the camera body 10 is provided on a right side surface portion of the camera body 10, at a position symmetrical to the left angle adjustment button 85L. Hereinafter, the right angle holding portion 81R and the left angle holding portion 81L are collectively referred to as “angle holding portions 81”. In addition, the right angle adjustment button 85R and the left angle adjustment button 85L are collectively referred to as “angle adjustment buttons 85”.
The user can change the angle between the camera body 10 and each angle holding portion 81 by swinging the angle holding portion 81 in the up-down direction while operating the corresponding angle adjustment button 85.
Next, a functional configuration of the camera 1 according to the present embodiment will be described.
The camera 1 includes the face direction detection unit 20, a shooting area determination unit 30, an image capture unit 40, an image processing unit 50, a primary recording unit 60, a communication unit 70, and another control unit 111. These are controlled by a camera control unit 101, which will be described later.
The face direction detection unit 20 includes the infrared light emission unit 22 and the infrared light detector 27, detects the face direction of the user in order to estimate the observation direction of the user, and outputs the estimation result to the shooting area determination unit 30 and the sound processing unit 104.
The shooting area determination unit 30 performs computation based on the observation direction of the user estimated by the face direction detection unit 20, and determines processing information indicating the position and area of an image that is to be extracted, as an image to be recorded, from an image captured by the image capture unit 40. The determined processing information is output to the image processing unit 50.
The image capture unit 40 performs image capturing, generates image data, and outputs the image data to the image processing unit 50.
The image processing unit 50 extracts and develops a portion of an image captured by the image capture unit 40 based on the processing information obtained from the shooting area determination unit 30. The extracted image is output to the primary recording unit 60 as an image of the observation direction of the user.
The primary recording unit 60 includes a primary memory 103, which will be described later, and stores the image data output from the image processing unit 50 in the primary memory 103.
The communication unit 70 performs wireless communication with the display device 2, which will be described later.
The display device 2 can communicate with the communication unit 70 using a wireless LAN that enables high-speed communication (hereinafter, high-speed wireless communication). In the present embodiment, a communication method that complies with the IEEE 802.11ax (WiFi (registered trademark) 6) standard is adopted for high-speed wireless communication, but a communication method that complies with another standard, such as the WiFi 4 or WiFi 5 standard, may also be applied. In addition, the display device 2 may be a device developed specifically for the camera 1.
As a method of communication between the communication unit 70 and the display device 2, low-power wireless communication may be used, both high-speed wireless communication and low-power wireless communication may be used, or the two may be switched between. In the present embodiment, a large amount of data, such as a moving image to be described later, is transmitted through high-speed wireless communication, and a small amount of data, such as a still image, and data for which a longer transmission time is acceptable are transmitted through low-power wireless communication. In the present embodiment, Bluetooth (registered trademark) is adopted for low-power wireless communication, but short-distance wireless communication such as near field communication (NFC) may also be applied.
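The selection policy described above can be pictured with a short sketch. The following C fragment is a minimal illustration under assumptions of our own (the 1 MiB threshold, the enum, and the function name are all hypothetical), not the actual selection logic of the camera 1:

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative sketch: bulky data such as a moving image goes over
 * high-speed wireless communication, while small payloads go over
 * low-power wireless communication. Threshold and names are assumed. */
enum transport { LOW_POWER_WIRELESS, HIGH_SPEED_WIRELESS };

static enum transport choose_transport(size_t payload_bytes, bool is_moving_image)
{
    const size_t threshold = (size_t)1 << 20;   /* assumed 1 MiB cut-off */
    if (is_moving_image || payload_bytes > threshold)
        return HIGH_SPEED_WIRELESS;             /* e.g., WiFi 6 */
    return LOW_POWER_WIRELESS;                  /* e.g., Bluetooth */
}
```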
The calibrator 3 performs initial setting and individual setting of the camera 1, and can communicate with the communication unit 70 through high-speed wireless communication, similarly to the display device 2. Note that the display device 2 may also have the function of the calibrator 3.
The simplified display device 4 can communicate with the communication unit 70 only through low-power wireless communication. The simplified display device 4 cannot transmit/receive an image to/from the communication unit 70 due to restrictions on communication capacity, but can perform transmission of a timing signal for starting/stopping shooting, transmission of an image merely for layout checking, and the like. In addition, the simplified display device 4 may be a device dedicated to the camera 1, similarly to the display device 2, or may be a smart watch or the like.
The camera 1 includes the camera control unit 101, the power button 11, the shooting mode button 12, the face detection unit 13, the start button 14, the stop button 15, the shooting lens 16, and the indicator 17. In addition, the camera 1 includes an infrared lighting circuit 21, the infrared light emission unit 22, the infrared light collecting lens 26, and the infrared light detector 27, which constitute the face direction detection unit 20.
The camera 1 according to the present embodiment is provided with only one image capture unit 40, but may also be provided with two or more image capture units 40. Including a plurality of image capture units makes it possible to perform shooting of a 3D image, shooting of an image that has a wider field of view than can be obtained by a single image capture unit 40, shooting of images in a plurality of directions, and the like.
In addition, the camera 1 includes a large capacity non-volatile memory 51, a built-in non-volatile memory 102, and the primary memory 103. Furthermore, the camera 1 includes the sound processing unit 104, a sound output unit 105, a vibration unit 106, an angular velocity sensor 107, an acceleration sensor 108, and various operating units 110.
The camera control unit 101 includes a processor such as a CPU that performs overall control of the camera 1. The functions of the shooting area determination unit 30, the image processing unit 50, and the other control unit 111 that have been described with reference to
The infrared lighting circuit 21 controls on and off of the infrared light emission unit 22. The infrared light emission unit 22 irradiates the user with the infrared light 23. The face detection unit 13 includes a visible light cut filter that hardly allows visible light to pass, while allowing the infrared light 23 and a reflected light beam 25 thereof to pass. The infrared light collecting lens 26 collects the reflected light beam 25.
The infrared light detector 27 includes an infrared light image sensor for detecting the reflected light beam 25 collected by the infrared light collecting lens 26. The infrared light image sensor generates image data by photoelectrically converting the reflected light beam 25, which is collected by the infrared light collecting lens 26 and formed into an image on the infrared light image sensor, and outputs the generated image data to the camera control unit 101.
The various operating units 110 are operation members for executing functions other than the aforementioned functions of the camera 1.
The image sensor drive circuit 41 includes a timing generation circuit and the like, and generates a timing signal for controlling an image capture operation that is performed by the image sensor 42. The image sensor 42 generates captured image signals by photoelectrically converting a subject image formed by the shooting lens 16.
The built-in non-volatile memory 102 is a flash memory or the like, and stores a program to be executed by the camera control unit 101, constants and variables for executing the program, and the like. The camera 1 according to the present embodiment can change the shooting field of view (shooting area) and set the intensity of anti-vibration control, and thus setting values thereof are also stored in the built-in non-volatile memory 102.
The primary memory 103 is a RAM or the like, and temporarily stores image data that is being processed, or temporarily stores a result of computation performed by the camera control unit 101. Processed image data is written as a moving image file or a still image file to the large capacity non-volatile memory 51, and such a file is read out from the large capacity non-volatile memory 51. The large capacity non-volatile memory 51 may be a recording medium built in the camera body 10, or may be a removable recording medium such as a memory card, or may be used along with the built-in non-volatile memory 102.
The low-power wireless communication unit 61 performs data communication with the display device 2, the calibrator 3, and the simplified display device 4 through low-power wireless communication. The high-speed wireless communication unit 62 performs data communication with the display device 2, the calibrator 3, and the simplified display device 4 through high-speed wireless communication.
The sound input unit 19, which includes the mikes 19L, 19R, 19F, and 19B, is connected to the sound processing unit 104. The sound processing unit 104 generates digital sound signals by sampling, at predetermined intervals, the analog sound signals output from the mikes.
The indicator 17, the sound output unit 105, and the vibration unit 106 notify the user of the state of the camera 1 or issue an alert by emitting light, generating sound, and vibrating.
The angular velocity sensor 107 detects movement (rotation and direction) of the camera body 10. The acceleration sensor 108 detects posture information of the camera body 10. The posture information of the camera body 10 is, for example, the inclination of the shooting optical axis of the camera body 10 (the angle relative to the horizontal direction). Note that the angular velocity sensor 107 and the acceleration sensor 108 are built into the camera body 10, whereas the angular velocity sensor 207 and the acceleration rate sensor 208 are provided in the display device 2.
The display device 2 includes a display device control unit 201, the A button 202, the display unit 203, the B button 204, the in-camera 205, the face sensor 206, the angular velocity sensor 207, the acceleration rate sensor 208, a captured image signal processing circuit 209, and various operation units 211.
In addition, the display device 2 includes a built-in non-volatile memory 212, a primary memory 213, a large capacity non-volatile memory 214, a sound output unit 215, a vibration unit 216, an indicator 217, a sound processing unit 220, a low-power wireless communication unit 231, and a high-speed wireless communication unit 232.
The display device control unit 201 includes a processor such as a CPU that performs overall control of the display device 2.
The captured image signal processing circuit 209 has functions similar to those of the image sensor drive circuit 41, the image sensor 42, and the captured image signal processing circuit 43 of the camera 1, and these functions are shown collectively as the captured image signal processing circuit 209.
The various operation units 211 are operation members for executing functions other than the aforementioned functions of the display device 2.
The angular velocity sensor 207 detects movement of the display device 2. The acceleration rate sensor 208 detects the posture of the display device 2.
Note that the angular velocity sensor 207 and the acceleration rate sensor 208 are built in the display device 2, and have functions similar to those of the angular velocity sensor 107 and the acceleration sensor 108 built in the camera 1.
The built-in non-volatile memory 212 is a flash memory or the like, and stores a program to be executed by the display device control unit 201, constants and variables for executing the program, and the like.
The primary memory 213 is a RAM or the like, and temporarily stores image data that is being processed and results of computation performed by the display device control unit 201. In the present embodiment, during moving image shooting, the data detected by the angular velocity sensor 107 at the shooting time of each frame is associated with that frame, and is held in the primary memory 213.
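One way to picture this per-frame association is the following C sketch; the struct layout, field names, and units are illustrative assumptions, not the actual memory format of the primary memory 213:

```c
#include <stdint.h>

/* Illustrative sketch: the angular velocities sampled at the shooting
 * time of a frame are held alongside that frame, so that anti-vibration
 * processing (step S900, described below) can look them up per frame. */
struct frame_record {
    uint32_t frame_index;    /* frame number within the moving image           */
    uint64_t timestamp_us;   /* shooting time of the frame (assumed unit)      */
    float    gyro_xyz[3];    /* angular velocities from the sensor 107         */
    uint32_t image_offset;   /* where the frame's image data is held (assumed) */
};
```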
Processed image data and image data transmitted from the camera 1 are written as a moving image file or a still image file to the large capacity non-volatile memory 214, and such a file is read out from the large capacity non-volatile memory 214. The large capacity non-volatile memory 214 may be a recording medium built in the display device 2, or may be a removable recording medium such as a memory card, or may be used along with the built-in non-volatile memory 212.
The sound output unit 215, the vibration unit 216, and the indicator 217 notify the user of the state of the display device 2 or issue an alert by generating sound, vibrating, and emitting light.
The sound processing unit 220 is connected to a sound input unit 219 that includes a plurality of mikes 219L and 219R for collecting sound in the surroundings of the display device 2, and generates digital sound signals by sampling analog sound signals output from the mikes.
The low-power wireless communication unit 231 performs data communication with the camera 1 through low-power wireless communication. The high-speed wireless communication unit 232 performs data communication with the camera 1 through high-speed wireless communication.
The face sensor 206 includes an infrared lighting circuit 221, an infrared light emission unit 222, an infrared light collecting lens 226, and an infrared light detector 227.
The infrared lighting circuit 221 controls on and off of the infrared light emission unit 222. The infrared light emission unit 222 irradiates the user with infrared light 223. The infrared light collecting lens 226 collects a reflected light beam 225. The infrared light detector 227 includes an infrared light image sensor for detecting the reflected light beam 225 collected by the infrared light collecting lens 226. The infrared light image sensor generates image data by photoelectrically converting the reflected light beam 225, which is collected by the infrared light collecting lens 226 and formed into an image on the infrared light image sensor, and outputs the generated image data to the display device control unit 201.
When the face sensor 206 is directed toward the user, an infrared light emission surface 224, which is the entire face of the user, is irradiated with the infrared light 223 projected from the infrared light emission unit 222.
Another function unit 230 executes a telephone function and other functions, aside from the aforementioned functions of the display device 2.
Next, a configuration and functions of the sound processing unit 104 of the camera 1 according to the present embodiment will be described.
The sound processing unit 104 includes an LRch A/D converter 121LR, an FBch A/D converter 121FB, a signal processing unit 122, and an ALC unit 123. The sound input unit 19 includes the mikes 19L, 19R, 19F, and 19B.
The LRch A/D converter 121LR converts the sound signal on the left side (the L channel) of the user obtained by the left mike 19L and the sound signal on the right side (the R channel) of the user obtained by the right mike 19R, from analog sound signals into digital sound signals. The FBch A/D converter 121FB converts the sound signal on the front side (the F channel) obtained by the front mike 19F and the sound signal on the rear side (the B channel) obtained by the rear mike 19B, from analog sound signals into digital sound signals. In the A/D conversion processing performed by the LRch A/D converter 121LR and the FBch A/D converter 121FB, processing for multiplying each sound signal by a predetermined gain is performed so as to obtain a desired signal level in accordance with the sensitivity of the mike. A programmable gain amplifier (PGA) is used as a method for multiplying a signal by a gain, for example. In addition, various methods can be used for A/D conversion, but delta-sigma A/D conversion is mainly used for sound signals.
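As a rough illustration of the gain stage, the following C sketch applies a PGA-style gain to an analog sample value and quantizes it to a digital code. It is a minimal sketch under assumptions of our own (the 16-bit output width and the function name), and does not model delta-sigma conversion itself:

```c
#include <math.h>
#include <stdint.h>

/* Illustrative sketch of the gain + quantization step of A/D conversion.
 * gain_db models the programmable gain amplifier (PGA) setting chosen to
 * match the sensitivity of each mike; the 16-bit width is an assumption. */
static int16_t adc_sample(double analog, double gain_db)
{
    double amplified = analog * pow(10.0, gain_db / 20.0); /* apply PGA gain */
    double scaled = amplified * 32767.0;     /* map full scale to 16 bits    */
    if (scaled > 32767.0)  scaled = 32767.0;   /* clamp at positive full scale */
    if (scaled < -32768.0) scaled = -32768.0;  /* clamp at negative full scale */
    return (int16_t)scaled;
}
```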
The signal processing unit 122 executes gain adjustment and filter processing, such as removal of low-frequency components or removal of high-frequency components, on the sound signals of the channels Lch, Rch, Fch, and Bch subjected to A/D conversion processing. In addition, the signal processing unit 122 generates sound signals of two channels, namely a left channel and a right channel, from the sound signals of the four channels Lch, Rch, Fch, and Bch, based on face direction information θh of the user detected by the face direction detection unit 20, which will be described later.
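The removal of low-frequency components mentioned above can be sketched, for example, as a one-pole high-pass filter applied per channel; the coefficient value and the names below are illustrative assumptions, not the filter actually used by the signal processing unit 122:

```c
/* Illustrative one-pole high-pass filter: suppresses low-frequency
 * components such as rumble while passing the voice band. One state
 * struct is kept per channel; 0.995 is an assumed cut-off choice. */
typedef struct { double prev_in, prev_out; } hp_state;

static double highpass(hp_state *s, double in)
{
    const double a = 0.995;                          /* assumed coefficient */
    double out = a * (s->prev_out + in - s->prev_in);
    s->prev_in  = in;
    s->prev_out = out;
    return out;
}
```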
The auto level control (ALC) unit 123 adjusts the levels of the sound signals such that the sound signals of the two channels subjected to signal processing have appropriate sound volumes, and outputs the sound signals to the primary memory 103. Sound data obtained by performing predetermined sound processing on the sound signals obtained by the mikes during moving image shooting is associated with the moving image data and stored in the primary memory 103, and the sound data and the moving image data are then stored in the large capacity non-volatile memory 51 as a sound file and a moving image file, respectively.
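A minimal sketch of such auto level control is shown below: it tracks the peak of each block of samples and eases the gain toward a target level, with a fast attack and a slow release. The target level, the attack/release rates, and the names are assumptions for illustration:

```c
#include <math.h>
#include <stddef.h>

/* Illustrative auto level control (ALC): track the block peak and ease
 * the gain toward the value that would bring that peak to the target.
 * Initialize with alc_state s = { 1.0 }; before the first block. */
typedef struct { double gain; } alc_state;

static void alc_process(alc_state *s, double *samples, size_t n)
{
    const double target = 0.5;      /* assumed desired peak (full scale = 1.0) */
    double peak = 1e-9;
    for (size_t i = 0; i < n; i++) {
        double a = fabs(samples[i]);
        if (a > peak) peak = a;     /* measure the block peak */
    }
    double desired = target / peak;                 /* gain that hits the target */
    double rate = (desired < s->gain) ? 0.5 : 0.05; /* fast attack, slow release */
    s->gain += (desired - s->gain) * rate;
    for (size_t i = 0; i < n; i++)
        samples[i] *= s->gain;      /* apply the adjusted gain */
}
```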
Next, processing for converting the sound signals of the four channels into sound signals of two channels, based on the face direction information θh of the user detected by the face direction detection unit 20, will be described.
First, a face direction detection method that is performed by the face direction detection unit 20 according to the present embodiment will be described.
The face direction detection unit 20 detects the direction of the face of the user as an angle θh in the left-right (horizontal) direction, based on the position information regarding the jaw and the neck of the user obtained by the face detection unit 13.
The sound processing unit 104 converts the sound signals of the four channels Lch, Rch, Fch, and Bch into sound signals of two channels, based on the face direction information θh of the user.
In the conversion of the sound signals, the sound signals of the four channels are denoted by X, Y, W, and Z, which correspond to the sound signals output from the mikes 19L, 19R, 19F, and 19B, respectively; θh indicates the angle of the face; and L and R denote the sound signals of the two channels after conversion.
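The exact conversion formula is not reproduced here; as one plausible realization, the following C sketch forms omnidirectional and figure-of-eight components from the four mike signals and then aims two virtual cardioid pickups at θh ± 45°, in the manner of a standard first-order ambisonic stereo decode. The decomposition, the ±45° spread, and the function name are assumptions of our own:

```c
#include <math.h>

/* Illustrative 4ch-to-2ch conversion. Per the notation above, X, Y, W, Z
 * are the signals from the mikes 19L, 19R, 19F, 19B, and theta_h is the
 * face angle in radians (0 = straight ahead, positive to the left). */
static void four_to_two_ch(double X, double Y, double W, double Z,
                           double theta_h, double *L, double *R)
{
    const double quarter_pi = 0.78539816339744831;  /* 45 degrees */
    double omni = 0.25 * (X + Y + W + Z);  /* omnidirectional component    */
    double fb   = 0.5  * (W - Z);          /* front-back figure of eight   */
    double lr   = 0.5  * (X - Y);          /* left-right figure of eight   */
    double al   = theta_h + quarter_pi;    /* left virtual pickup azimuth  */
    double ar   = theta_h - quarter_pi;    /* right virtual pickup azimuth */
    *L = omni + fb * cos(al) + lr * sin(al);
    *R = omni + fb * cos(ar) + lr * sin(ar);
}
```

Because both virtual pickups rotate with θh, the resulting stereo image follows the observation direction of the user; the same per-sample conversion can equally be run on the display device 2 side during reproduction, as described below.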
The sound signals of the two channels after conversion are processed by the sound processing unit 220 when an image is reproduced on the display device 2, and are reproduced by the sound output unit 215.
Note that conversion processing of sound signals may also be executed by the display device 2.
In this case, the face direction information θh of the user is stored in the primary memory 103 in synchronization with the sound signals of the four channels processed by the sound processing unit 104, and is associated with the sound data; the face direction information θh and the sound data are then stored in the large capacity non-volatile memory 51 as an image file and a sound file. In addition, the image data and sound data stored in the primary memory 103 are output to the display device 2 by the communication unit 70.
The display device 2 stores the image data and sound data received from the camera 1, in the large capacity non-volatile memory 214 as an image file and a sound file. When reproducing the image file and sound file stored in the large capacity non-volatile memory 214, the display device 2 converts the sound signals of the four channels into sound signals of the two channels based on the face direction information θh added to the sound file.
Control processing of the camera 1 and the display device 2 will be described below.
The processing of the camera 1 will be described below.
Note that steps S100 to S700 are executed by the camera 1, and steps S800 to S1000 are executed by the display device 2.
In step S100, when the power button 11 is switched on and the camera 1 is powered on, the camera control unit 101 is started. The camera control unit 101 reads out a program from the built-in non-volatile memory 102, and executes shooting preparation processing for performing shooting settings of the camera 1. The shooting preparation processing will be described later in detail.
In step S200, the face direction detection unit 20 estimates a direction in which the user is performing observation.
In step S300, the shooting area determination unit 30 determines a shooting direction and a shooting field of view of the camera 1. Note that the face direction detection processing in step S200 and the shooting area determination processing in step S300 are repeatedly executed in a state where the power supply of the camera 1 is on. In addition, a configuration may also be adopted in which, during a period from when the image capture unit 40 starts shooting in accordance with an operation performed on the start button 14 until when shooting is stopped in accordance with an operation performed on the stop button 15, the processing of step S200 and the processing of step S300 are repeatedly performed, and, during a period other than that, the processing of step S200 and the processing of step S300 are not performed.
In step S400, the image capture unit 40 performs shooting of an image in accordance with the user giving an instruction to start shooting, and generates image data.
In step S500, the image processing unit 50 extracts an image from the image data generated in step S400 and develops the extracted area, based on the shooting direction and the shooting field of view determined in step S300.
In step S600, the primary recording unit 60 stores the image data developed in step S500 in the primary memory 103.
In step S700, the communication unit 70 transmits the image data stored in the primary memory 103 in step S600, to the display device 2 at a predetermined timing.
In step S800, the display device control unit 201 performs optical correction on the image data transmitted from the camera 1 in step S700.
In step S900, the display device control unit 201 performs anti-vibration processing on the image data subjected to the optical correction executed in step S800.
Note that the order of the processing of step S800 and the processing of step S900 may be reversed. In other words, optical correction may be performed after anti-vibration processing of image data is performed.
In step S1000, the display device control unit 201 stores the image data subjected to the optical correction in step S800 and the anti-vibration processing in step S900, in the large capacity non-volatile memory 214 (secondary recording).
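The overall flow of steps S100 to S1000 can be summarized with the control-loop sketch below. The function names are illustrative stand-ins that mirror the step numbers, not the actual firmware interface; the stubs merely print the step being executed, and a fixed loop count stands in for “while the power supply is on”:

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative control-flow sketch of steps S100-S700 on the camera side;
 * steps S800-S1000 then run on the display device 2. */
static void s100_prepare(void)        { puts("S100 shooting preparation"); }
static void s200_face_direction(void) { puts("S200 face direction detection"); }
static void s300_shooting_area(void)  { puts("S300 shooting area determination"); }
static void s400_capture(void)        { puts("S400 image capture"); }
static void s500_develop(void)        { puts("S500 extract and develop"); }
static void s600_record(void)         { puts("S600 primary recording"); }
static void s700_transmit(void)       { puts("S700 transmit to display device 2"); }

int main(void)
{
    bool recording = true;                     /* stands in for start/stop button state */
    s100_prepare();
    for (int frame = 0; frame < 3; frame++) {  /* stands in for "while powered on" */
        s200_face_direction();                 /* repeated while the power is on */
        s300_shooting_area();
        if (recording) {
            s400_capture();
            s500_develop();
            s600_record();
            s700_transmit();
        }
    }
    return 0;
}
```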
In step S101, the camera control unit 101 determines whether or not the power button 11 is on. The camera control unit 101 waits when the power supply is off, and advances the procedure to step S102 when it is determined that the power supply has been switched on.
In step S102, the camera control unit 101 determines an operation mode selected using the shooting mode button 12. When it is determined that the operation mode is a moving image mode, the camera control unit 101 advances the procedure to step S103. When it is determined that the operation mode is a still image mode, the camera control unit 101 advances the procedure to step S106. When it is determined that the operation mode is a preset mode, the camera control unit 101 advances the procedure to step S108.
In step S103, the camera control unit 101 reads out settings of the moving image mode from the built-in non-volatile memory 102, and stores the settings in the primary memory 103. The settings of the moving image mode include, for example, a field-of-view setting value ang (which is set to 90° as an initial value in advance in the present embodiment) and an anti-vibration level designated as “high”, “moderate”, “off”, or the like.
In step S104, the camera control unit 101 causes the image sensor drive circuit 41 to start operating for the moving image mode.
In step S106, the camera control unit 101 reads out settings of the still image mode from the built-in non-volatile memory 102, and stores the settings in the primary memory 103. The settings of the still image mode include, for example, a field-of-view setting value ang (which is set to 45° as an initial value in advance in the present embodiment) and an anti-vibration level designated as “high”, “moderate”, “off”, or the like.
In step S107, the camera control unit 101 causes the image sensor drive circuit 41 to start operating for the still image mode.
In step S108, the camera control unit 101 reads out settings of the preset mode from the built-in non-volatile memory 102, and stores the settings in the primary memory 103. The preset mode is a custom shooting mode in which an external device such as the display device 2 changes the settings of the camera 1. The camera 1 is a small-sized wearable device, and is not provided with an operation member for performing detailed settings or a display unit for displaying a menu screen and the like, and thus the external device such as the display device 2 changes the settings of the camera 1.
A case is envisioned in which, for example, it is desirable to perform shooting with a field of view of 90° and a field of view of 110° in succession during moving image shooting. The field of view of 90° is set for the normal moving image mode, and thus, in order to change the field of view and continue shooting, there is a need to first perform shooting in the normal moving image mode, then end shooting, display a setting screen of the camera 1 on the display device 2, and change the field of view to 110°. When the display device 2 is occupied by some other task, such as a phone call, this setting-change operation on the display device 2 becomes complicated.
In contrast, in the preset mode, when the field of view is set to 110° in advance, the field of view is immediately changed to 110° merely by setting the shooting mode button 12 to “Pre” after moving image shooting has ended with the field of view of 90°, and moving image shooting can be continued. In this manner, the user can change the settings of the moving image mode without suspending the current mode and performing a complicated operation.
Note that the settings of the preset mode may include not only the field of view, but also an anti-vibration level that is designated as “high”, “moderate”, “off”, or the like, settings of voice recognition, and the like.
In step S109, the camera control unit 101 causes the image sensor drive circuit 41 to start operating for the preset mode.
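The per-mode settings read in steps S103, S106, and S108 can be pictured with the following C sketch. The field-of-view initial values of 90° and 45° come from the text; the struct layout, the 110° preset value, and the stabilization defaults are illustrative assumptions:

```c
/* Illustrative sketch of reading per-mode settings (steps S103/S106/S108). */
enum shooting_mode { MODE_MOVIE, MODE_STILL, MODE_PRESET };
enum stabilization { STAB_OFF, STAB_MODERATE, STAB_HIGH };

struct mode_settings {
    int ang;                   /* field-of-view setting value, in degrees */
    enum stabilization level;  /* anti-vibration level */
};

static struct mode_settings load_settings(enum shooting_mode m)
{
    switch (m) {
    case MODE_MOVIE:  return (struct mode_settings){ 90, STAB_MODERATE };
    case MODE_STILL:  return (struct mode_settings){ 45, STAB_HIGH };
    case MODE_PRESET: /* values configured in advance from the display device 2 */
    default:          return (struct mode_settings){ 110, STAB_HIGH };
    }
}
```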
The camera control unit 101 causes the image capture unit 40 to start shooting processing in accordance with an operation performed on the start button 14, and to stop shooting processing in accordance with an operation performed on the stop button 15. When the image capture unit 40 starts shooting processing, the camera control unit 101 also executes recording processing of the sound signals subjected to the signal processing performed by the sound processing unit 104.
The camera control unit 101 initializes the sound input unit 19 and the sound processing unit 104 before starting recording processing. The sound processing unit 104 starts energizing the mikes 19L, 19R, 19F, and 19B, supplies a clock signal, and prepares to obtain sound signals from the respective mikes. In addition, in the sound processing unit 104, the gain of the A/D conversion processing and the like are initialized.
As described above, according to the present embodiment, image shooting and binaural sound recording can be performed in a state where the camera is worn on the neck of the user, without the plurality of mikes being conspicuous and without increasing the size of the camera.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2023-021866, filed Feb. 15, 2023 which is hereby incorporated by reference herein in its entirety.