The present invention relates to an image capturing apparatus with a sound pickup function.
Conventionally, in shooting that uses a camera, a photographer needs to keep the camera facing the shooting direction. This causes the photographer to concentrate on manipulating the camera, makes it difficult for the photographer to attend to matters other than the act of shooting, and thereby prevents the photographer from focusing on experiences in the shooting environment.
For example, in a case where a parent takes a shot of their child, the parent, who is the photographer, cannot play with their child, and cannot perform shooting while playing with their child. That is to say, it is difficult to perform shooting and gain an experience at the same time. Similarly, in a case where shooting is performed simultaneously with an activity such as a sport, it is difficult to experience the sport while shooting with a hand-held camera.
Conventionally, a wearable camera that can be worn on a body has been known. A photographer performs shooting while wearing such a wearable camera; as a result, for example, a parent, who is the photographer, can record images while gaining an experience of playing with their child.
Also, binaural recording has been conventionally known as a method of recording sounds. When a listener hears sounds recorded through binaural recording using stereo headphones and the like, the listener can enjoy sounds with a highly realistic sensation that makes them feel as if they were at the location.
In binaural recording, sounds need to be recorded at positions close to human ears. Conventional binaural recording adopts a sound recording method that uses binaural recording microphones, which are microphones embedded in ear parts of a model that simulates the shape of a human head. When binaural recording microphones in a head-shaped model are used, it is necessary to gain an experience and perform shooting while holding the model; this makes it difficult to concentrate on the experience, similarly to the case of conventional camera shooting.
Therefore, there are cases where binaural recording is performed by fitting, in the ears of a photographer, microphones that can be fit in left and right ears like earphones. Japanese Patent Laid-Open No. 2009-49947 discloses a sound recording apparatus that performs binaural recording using earphones with a noise-cancelling function, whereby noise-cancelling microphones in the left and right earphones are used for recording.
However, when the microphones according to Japanese Patent Laid-Open No. 2009-49947 are used, the ears of the photographer are plugged. As a result, it becomes difficult to hear ambient sounds simultaneously with the execution of sound recording, which is a hindrance to gaining an experience and performing shooting at the same time. By creatively arranging microphones in a camera casing of a small camera that can be worn on a body so as to enable binaural recording, a video that is provided with binaural sounds and offers a highly realistic sensation can be shot.
However, even if the microphones have been arranged creatively in the foregoing manner, when shooting is performed using a small camera worn on a body simultaneously with an experience of an activity, sounds attributed to rustling of clothes near the camera, sounds attributed to vibration transmitted from the body to the casing of the camera, and the like may be input to the microphones as sounds. If these sounds, which are noises, are input to the microphones, a large noise different from the sound that the photographer is hearing is superimposed and recorded, thereby impairing the quality of sounds obtained through binaural recording.
The present invention has been made in view of the above-described problems, and aims to improve the recording quality of sounds when binaural recording is performed using a camera that can be worn on a body.
According to an aspect of the present invention, there is provided an image capturing apparatus that can be worn by a user hung around a neck of the user, the image capturing apparatus comprising: an annular member that surrounds the neck of the user when the image capturing apparatus is worn on the user; an image capturing circuit; a first microphone that is arranged in the annular member at a position on a right side of the user and obtains environmental sounds; a second microphone that is arranged in the annular member at a position on a left side of the user and obtains environmental sounds; a third microphone that is arranged in the annular member in the vicinity of the first microphone and obtains a noise due to vibration of the image capturing apparatus; a fourth microphone that is arranged in the annular member in the vicinity of the second microphone and obtains a noise due to vibration of the image capturing apparatus; a CPU; and a memory storing a program that, when executed by the CPU, causes the CPU to function as a sound processing unit configured to generate a right-channel sound signal and a left-channel sound signal by using a sound signal obtained by the first microphone and a sound signal obtained by the second microphone, the sound processing unit executing processing for reducing a noise in the right-channel sound signal and a noise in the left-channel sound signal by using a sound signal output from the third microphone and a sound signal output from the fourth microphone, wherein the first microphone and the second microphone are exposed to an outside of the annular member so as to be capable of obtaining environmental sounds, whereas the third microphone and the fourth microphone are covered in the annular member.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note, the following embodiments are not intended to limit the scope of the claimed invention. Multiple features are described in the embodiments, but limitation is not made to an invention that requires all such features, and multiple such features may be combined as appropriate. Furthermore, in the attached drawings, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
In
The face direction detection window 13 is built in the main body unit 10. In order to detect the position of each part of the user's face, the face direction detection window 13 allows infrared light projected from infrared LEDs 22 (see
The start switch 14 is a switch for starting shooting. The stop switch 15 is a switch for stopping shooting. The photographing lens 16 directs light rays to be shot to a solid-state image sensor 42 (see
Microphones 19R and 19L are microphones for obtaining environmental sounds, which are sounds around the camera 1; the microphone 19L takes in sounds on the left surrounding side of the user (the observer's right in
In the camera 1, the mount unit 80 and the camera's main body unit 10 are configured so that the user can easily put on and take off the camera 1 using a non-illustrated connection/connection-cancellation mechanism provided on both ends of the camera's main body unit 10. Accordingly, the user wears the camera 1 on the neck area by detaching the mount unit 80 from the camera's main body unit 10, hanging the mount unit 80 around the neck area, and then connecting both ends of the mount unit 80 to both ends of the camera's main body unit 10. The camera 1 is worn so that the battery unit 90 is situated on the back side of the user, and the main body unit 10 is situated on the front side of the user's body. The camera 1 is supported by the mount unit 80, both ends of which are connected to the vicinity of the left and right ends of the main body unit 10, and is pushed in the direction toward the user's chest. Consequently, the main body unit 10 is located approximately in front of the collarbones of the user. At this time, the face direction detection window 13 is located below the user's chin. An infrared light collecting lens 26 shown in
Also, arranging the main body unit 10 in front of the body and the battery unit 90 on the back side of the body in the foregoing manner can achieve the advantageous effect of alleviating fatigue of the user through dispersion of weight, and the advantageous effect of suppressing displacement caused by, for example, a centrifugal force when the user is in motion.
Note that although the present embodiment has been described using an example in which the camera 1 is worn so that the main body unit 10 is located approximately in front of the collarbones of the user, no limitation is intended by this. The camera 1 may be worn on any location on the user's body, except for the head, as long as the face direction detection unit 20 can detect the user's observing direction and the shooting unit 40 can perform shooting in this observing direction.
In
The charging cable socket 91 is a socket into which a non-illustrated charging cable is inserted. An external power supply can charge internal batteries 94 and supply power to the main body unit 10 via the charging cable inserted into the charging cable socket 91.
The adjustment buttons 92L and 92R are buttons for adjusting the lengths of band units 82L and 82R of the mount unit 80. The adjustment button 92L is a button for adjusting the band unit 82L on the observer's left, and the adjustment button 92R is a button for adjusting the band unit 82R on the observer's right. Note that although the lengths of the band units 82L and 82R are adjusted respectively by the adjustment buttons 92L and 92R on an individual basis in the present embodiment, the lengths of the band units 82L and 82R may be simultaneously adjustable with one button. Hereinafter, the band units 82L and 82R are collectively referred to as band units 82.
The spine avoidance slit 93 is a slit for avoiding the area of the spine of the user so that the battery unit 90 does not come into contact with the area of the spine. By avoiding projections of the spine of a human body, discomfort of wearing can be alleviated, and furthermore, the camera 1 can be prevented from moving to the left or right during use.
In
The button A 802 is a button that functions as a power button of the display apparatus 800; it accepts a power on/off operation when long-pressed, and accepts instructions at other processing timings when short-pressed.
The display unit 803 displays videos shot by the camera 1, and displays menu screens necessary for settings. In the present embodiment, a transparent touch sensor is provided on a top surface of the display unit 803, and a touch operation performed on a screen that is currently displayed (e.g., a menu screen) can also be accepted.
The button B 804 functions as a calibration button 854 used in later-described calibration processing. The front-facing camera 805 is a camera capable of taking a shot of a person observing the display apparatus 800.
The face sensor 806 can detect the shape and the observing direction of the face of the person observing the display apparatus 800. Although a specific structure of the face sensor 806 is not particularly limited, it can be realized using various types of sensors such as a structured optical sensor, a ToF sensor, and a millimeter wave radar, for example.
The angular velocity sensors 807 are situated inside the display apparatus 800, and are thus indicated by dashed lines as in a see-through view. The display apparatus 800 of the present embodiment also has a later-described calibrator function; therefore, gyroscope sensors corresponding to three directions, namely X, Y, and Z, are mounted thereon. Note that the acceleration sensor 808 detects an orientation of the display apparatus 800.
A general smartphone is used as the display apparatus 800 of the present embodiment. The camera system of the present embodiment can be realized by making firmware on this smartphone compatible with firmware on the camera 1. Note that the camera system of the present embodiment can be realized also by making firmware on the camera 1 compatible with an application or an OS of the smartphone used as the display apparatus 800.
The mount unit 80 is connected to the main body unit 10 via a right connection unit 80R located on the right side of the user's body (the observer's left in
The band units 82 include connection surfaces 83 and an electrical cable 84. The connection surfaces 83 are surfaces where the angle maintaining units 81 and the band units 82 are connected, and have a cross-sectional shape that is not a perfect circle; here, they have an elliptic shape. Hereinafter, among the connection surfaces 83, the connection surface 83 located on the right side of the user's body (the observer's left in
The electrical cable 84 is arranged inside the band units 82, and electrically connects together the battery unit 90, the microphones 19R and 19L, and the main body unit 10. The electrical cable 84 is used to supply power of the battery unit 90 to the main body unit 10, and to exchange electrical signals with the outside.
The power supply switch 11 is a switch for switching between power on and off of the camera 1. Although the power supply switch 11 of the present embodiment is a switch in a form of a sliding lever, it is not limited thereto. For example, the power supply switch 11 may be a push-type switch, or may be a switch that is configured integrally with a non-illustrated sliding cover of the photographing lens 16.
The shooting mode switch 12 is a switch for changing a shooting mode, and can change among modes related to shooting. In the present embodiment, the shooting mode switch 12 can change the shooting mode to a still image mode, a moving image mode, and a later-described preset mode that uses the display apparatus 800. In the present embodiment, the shooting mode switch 12 is a switch in a form of a sliding lever in which the lever is slid to select one of “Photo”, “Normal”, and “Pri” shown in
The chest attachment pads 18 are components that come into contact with the user's body when the main body unit 10 is pushed against the user's body. As shown in
As shown in
The infrared detection processing apparatus 27 includes the infrared LEDs 22 and the infrared light collecting lens 26. The infrared LEDs 22 project infrared light 23 (see
An angle adjustment button 85L is a button provided on the angle maintaining unit 81L, and is used to adjust the angle of the main body unit 10. Note that, although not shown in the present drawing, an angle adjustment button 85R is also arranged on the angle maintaining unit 81R, which is located on the opposite side, at a position that forms symmetry with the angle adjustment button 85L. Hereinafter, the angle adjustment buttons 85R and 85L will be referred to as angle adjustment buttons 85 when they are mentioned collectively.
Although the angle adjustment buttons 85 are located at positions that are visible also in
The user can change the angle between the main body unit 10 and the angle maintaining units 81 by moving the angle maintaining units 81 in the up or down direction in
A microphone bushing 221a is arranged on the outer circumferential side in the cross-section of a mount unit 80L, and the microphone 19L for converting sounds that have been taken in from the left surrounding side of the user into electrical signals is arranged inside the microphone bushing 221a. The opening 80aL for taking in external environmental sounds is formed in the mount unit 80L at a position corresponding to the microphone 19L. The microphone 19L is composed of, for example, an electret condenser microphone (ECM). The microphone bushing 221a is formed of a rubber material, and fixes the microphone 19L so that the microphone 19L adheres tightly to an inner wall of the mount unit 80L.
A microphone 19NL is a microphone for mainly obtaining a noise due to vibration which is generated when the mount unit 80L has come into contact with the left side of the user's neck. The microphone 19NL is a microphone for converting vibration transmitted through the mount unit 80L, as sounds, into electrical signals, and is arranged inside a microphone bushing 221c located on the inner circumferential side in the cross-section of the mount unit 80L, similarly to the microphone 19L. As the microphone 19NL is a microphone for obtaining a noise due to vibration transmitted through the mount unit 80L, an opening for taking in environmental sounds is not formed in the mount unit 80L at a position corresponding to the microphone 19NL. The microphone 19L obtains sounds on the left surrounding side of the user, whereas the microphone 19NL obtains a noise due to vibration propagated through the mount unit 80L on the left side of the user. Note that the distance between a sound hole of the microphone 19L and a sound hole of the microphone 19NL is set to a distance smaller than a wavelength of a main component of environmental sounds (sounds to be obtained) so that noise included in the microphone 19L can be reduced.
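The spacing condition just described can be checked with a simple wavelength calculation. The following is an illustrative sketch only; the 1 kHz main component and the 20 mm sound-hole spacing are assumed example values, not values taken from the embodiment.

```python
# Speed of sound in air at roughly 20 degrees Celsius.
SPEED_OF_SOUND_M_S = 343.0

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength of a sound of the given frequency; the distance between
    the sound holes of the microphones 19L and 19NL must be smaller than
    the wavelength of the main component of the environmental sounds."""
    return SPEED_OF_SOUND_M_S / frequency_hz

# Assumed example values: a 1 kHz main component and 20 mm hole spacing.
spacing_m = 0.020
assert spacing_m < wavelength_m(1000.0)   # 0.02 m < 0.343 m
```

Keeping the two sound holes well inside one wavelength means the vibration noise reaching both microphones remains strongly correlated, which is what makes the later noise-reduction processing effective.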
The microphone 19R is configured similarly to the microphone 19L, and the two are placed left-right symmetrically with respect to the camera 1. A microphone bushing 221b is arranged on the outer circumferential side in the cross-section of a mount unit 80R, and the microphone 19R for converting sounds that have been taken in from the right surrounding side of the user into electrical signals is arranged inside the microphone bushing 221b. The opening 80aR for taking in external environmental sounds is formed in the mount unit 80R at a position corresponding to the microphone 19R. The microphone 19R is composed of, for example, an electret condenser microphone (ECM). The microphone bushing 221b is formed of a rubber material, and fixes the microphone 19R so that the microphone 19R adheres tightly to an inner wall of the mount unit 80R.
A microphone 19NR is a microphone for mainly obtaining a noise due to vibration which is generated when the mount unit 80R has come into contact with the right side of the user's neck. The microphone 19NR is a microphone for converting vibration transmitted through the mount unit 80R, as sounds, into electrical signals, and is arranged inside a microphone bushing 221d located on the inner circumferential side in the cross-section of the mount unit 80R, similarly to the microphone 19R. As the microphone 19NR is a microphone for obtaining a noise due to vibration transmitted through the mount unit 80R, an opening for taking in environmental sounds is not formed in the mount unit 80R at a position corresponding to the microphone 19NR. The microphone 19R obtains sounds on the right surrounding side of the user, whereas the microphone 19NR obtains undesired sounds attributed to vibration propagated through the mount unit 80R on the right side of the user. Note that the distance between a sound hole of the microphone 19R and a sound hole of the microphone 19NR is set to a distance smaller than a wavelength of a main component of environmental sounds (sounds to be obtained) so that noise included in the sound signal obtained by the microphone 19R can be reduced.
Note that although the microphone 19L and the microphone 19NL oppose each other according to the arrangement shown in
In
The face direction detection unit 20 is a functional block composed of the infrared LEDs 22, the infrared detection processing apparatus 27, and so forth; it estimates an observing direction by detecting the direction of the user's face, and transmits the estimated direction to the recording direction and angle-of-view determination unit 30 and the sound processing unit 104.
The recording direction and angle-of-view determination unit 30 performs various types of computation based on the observing direction of the user estimated by the face direction detection unit 20, determines information of a position and a range that are used to perform a cutout from videos from the image capture unit 40, and transmits this information to the image cutout and development processing unit 50.
The image capture unit 40 converts light rays from a subject into image signals, and transmits these image signals to the image cutout and development processing unit 50.
The image cutout and development processing unit 50 cuts out a portion of the image signals from the image capture unit 40 using the information from the recording direction and angle-of-view determination unit 30, develops the cutout result, and transmits only videos in the direction viewed by the user to the primary recording unit 60.
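The cutout step can be sketched roughly as follows. The frame size, the full field of view, the linear angle-to-pixel mapping, and the function name are all illustrative assumptions, not the embodiment's actual processing.

```python
def cutout_region(frame_w, frame_h, yaw_deg, pitch_deg,
                  fov_deg, full_fov_deg=170.0):
    """Return (left, top, width, height) of the region to cut out of a
    wide-angle frame, given the user's observing direction (yaw/pitch in
    degrees, 0 = straight ahead) and the desired angle of view.
    A simple linear mapping from angle to pixels is assumed here; a real
    implementation would account for the lens projection."""
    px_per_deg_x = frame_w / full_fov_deg
    px_per_deg_y = frame_h / full_fov_deg
    w = int(fov_deg * px_per_deg_x)
    h = int(fov_deg * px_per_deg_y)
    cx = frame_w / 2 + yaw_deg * px_per_deg_x    # centre shifts with yaw
    cy = frame_h / 2 - pitch_deg * px_per_deg_y  # and with pitch
    left = max(0, min(frame_w - w, int(cx - w / 2)))  # clamp to the frame
    top = max(0, min(frame_h - h, int(cy - h / 2)))
    return left, top, w, h
```

For example, with a hypothetical 1700 x 1700 frame covering 170 degrees, looking straight ahead with an 85-degree angle of view yields a centred 850 x 850 region.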
The primary recording unit 60 is a functional block composed of a primary memory 103 (see
The transmission unit 70 performs radio communication with the display apparatus 800 (see
The display apparatus 800 is a display apparatus that can communicate with the transmission unit 70 via a wireless LAN that enables high-speed communication (hereinafter referred to as “high-speed radio”). Here, although the present embodiment uses radio communication compatible with the IEEE 802.11ax (Wi-Fi 6) standard as the high-speed radio, radio communication compatible with another standard, such as the Wi-Fi 4 standard and the Wi-Fi 5 standard, may be used thereas. Also, the display apparatus 800 may be a device that has been developed exclusively for the camera 1, or may be a general smartphone, tablet terminal, or the like.
Note that in communication between the transmission unit 70 and the display apparatus 800, low-power radio may be used, both of the high-speed radio and low-power radio may be used, or they may be used in alternation. In the present embodiment, high-volume data such as video files of videos composed of moving images, which will be described later, is transmitted over the high-speed radio, whereas low-volume data and data that can be transmitted over a long period of time are transmitted over the low-power radio. Here, although the present embodiment uses Bluetooth as the low-power radio, another close-range (short-range) radio communication, such as near-field communication (NFC), may be used thereas.
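The division of traffic between the two radios described above can be sketched as a simple selection rule. The byte threshold and the function name are hypothetical illustration values, not part of the embodiment.

```python
def choose_radio(payload_bytes: int) -> str:
    """Pick a radio link in the spirit of the scheme above: bulky data such
    as video files of moving images goes over the high-speed radio (Wi-Fi),
    whereas low-volume data, or data that may be transmitted over a long
    period of time, goes over the low-power radio (Bluetooth).
    The 1 MB threshold is an assumed illustrative value."""
    HIGH_SPEED_THRESHOLD_BYTES = 1_000_000
    if payload_bytes >= HIGH_SPEED_THRESHOLD_BYTES:
        return "high-speed"
    return "low-power"
```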
The calibrator 850 is a device that configures initial settings and personalized settings for the camera 1, and is a device that can communicate with the transmission unit 70 over the high-speed radio, similarly to the display apparatus 800. The details of the calibrator 850 will be described later. Furthermore, the display apparatus 800 may additionally have the functions of this calibrator 850.
The simple display apparatus 900 is, for example, a display apparatus that can communicate with the transmission unit 70 only over the low-power radio. The simple display apparatus 900 is a display apparatus that cannot exchange videos composed of moving images with the transmission unit 70 due to transfer speed constraints, but can exchange timing signals for starting and stopping shooting, exchange images that are simply intended for confirmation of the composition, etc. Furthermore, the simple display apparatus 900 may be a device that has been developed exclusively for the camera 1, similarly to the display apparatus 800, or may be a smartwatch or the like.
In
The camera 1 also includes an infrared LED lighting circuit 21, the infrared LEDs (infrared light-emitting diodes) 22, the infrared light collecting lens 26, and the infrared detection processing apparatus 27 that compose the face direction detection unit 20 (see
Furthermore, the camera 1 includes the image capture unit 40 (see
Note that although the camera 1 includes only one image capture unit 40 in the present embodiment, it may include two or more image capture units 40. Providing a plurality of image capturing units also enables shooting of 3D videos, shooting of videos with the angle of view wider than the angle of view that can be achieved using one image capture unit 40, shooting in a plurality of directions, and so forth.
The camera 1 also includes various types of memories such as a large-capacity nonvolatile memory 51, a built-in nonvolatile memory 102, and the primary memory 103.
Moreover, the camera 1 includes the sound processing unit 104, a speaker 105, a vibrating body 106, an angular velocity sensor 107, an acceleration sensor 108, and various types of switches 110.
The overall control CPU 101 controls the entirety of the camera 1. The recording direction and angle-of-view determination unit 30, the image cutout and development processing unit 50, and the other control unit 111 shown in
The infrared LED lighting circuit 21 controls the infrared LEDs 22 shown in
The infrared detection processing apparatus 27 includes a sensor that detects the reflected light rays 25 collected by the infrared light collecting lens 26. This sensor converts the reflected light rays 25, which have been collected by the infrared light collecting lens 26 to form an image thereof, into sensor data by way of photoelectric conversion, and transmits the sensor data to the overall control CPU 101.
As shown in
The various types of switches 110 are not shown in
The image capturing driver 41 includes a timing generator and the like, and generates various types of timing signals. It also controls shooting operations by outputting the timing signals to respective units related to image capturing. The solid-state image sensor 42 photoelectrically converts a subject image formed by the photographing lens 16 shown in
A flash memory or the like is used as the built-in nonvolatile memory 102; an activation program for the overall control CPU 101 and setting values of various types of program modes are stored therein. In the camera 1 of the present embodiment, alteration of the field of view for observation (the angle of view) and the effective level of anti-vibration control can be set, and thus setting values therefor are also recorded in the built-in nonvolatile memory 102.
The primary memory 103 is composed of a RAM or the like; it temporarily stores video data that is currently processed, and temporarily stores the results of computation performed by the overall control CPU 101. The large-capacity nonvolatile memory 51 is used in recording or readout of primary image data. Although the large-capacity nonvolatile memory 51 is described as a semiconductor memory that does not have a removable/attachable mechanism in the present embodiment to facilitate the understanding of explanation, no limitation is intended by this. For example, the large-capacity nonvolatile memory 51 may be composed of a removable/attachable recording medium, such as an SD card, or may be used in combination with the built-in nonvolatile memory 102.
The low-power radio unit 61 performs data communication with the display apparatus 800, the calibrator 850, and the simple display apparatus 900 over the low-power radio. The high-speed radio unit 62 performs data communication with the display apparatus 800, the calibrator 850, and the simple display apparatus 900 over the high-speed radio.
The sound processing unit 104 generates sound signals by processing analog signals that have been picked up by the microphones 19L and 19R for picking up external environmental sounds, which are on the observer's right and the observer's left, respectively, in
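One conventional way to realize the noise reduction performed here, in which the signals of the vibration microphones 19NL and 19NR serve as noise references for the environmental microphones 19L and 19R, is a normalized LMS (NLMS) adaptive canceller. The following is only an illustrative sketch; the filter length, step size, and function name are assumptions, and the embodiment is not limited to this particular algorithm.

```python
import numpy as np

def nlms_noise_cancel(primary, reference, taps=32, mu=0.1, eps=1e-8):
    """Reduce the component of `primary` (e.g. the signal of microphone 19L)
    that is correlated with `reference` (e.g. the signal of vibration
    microphone 19NL) using a normalized LMS adaptive filter.
    Returns the cleaned signal sample by sample."""
    w = np.zeros(taps)          # adaptive filter weights
    buf = np.zeros(taps)        # delay line of recent reference samples
    out = np.zeros(len(primary))
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        y = w @ buf                               # estimated noise component
        e = primary[n] - y                        # error = cleaned output
        w += (mu / (eps + buf @ buf)) * e * buf   # NLMS weight update
        out[n] = e
    return out
```

Under this sketch, the left channel would be obtained as `nlms_noise_cancel(signal_19L, signal_19NL)` and the right channel analogously from the microphones 19R and 19NR; the small sound-hole spacing described earlier keeps the noise in the reference correlated with the noise in the primary signal.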
The LED 17, the speaker 105, and the vibrating body 106 notify the user of a status of the camera 1 and issue a warning by emitting light, producing a sound, and producing vibration.
The angular velocity sensor 107 is a sensor that uses a gyroscope or the like, and detects a movement of the camera 1 itself. The acceleration sensor 108 detects an orientation of the main body unit 10. Note that the angular velocity sensor 107 and the acceleration sensor 108 are built in the main body unit 10; the angular velocity sensors 807 and the acceleration sensor 808 that are separate therefrom are also provided inside the later-described display apparatus 800.
In
Also, the display apparatus 800 includes a built-in nonvolatile memory 812, a primary memory 813, a large-capacity nonvolatile memory 814, a speaker 815, a vibrating body 816, an LED 817, a sound processing unit 820, a low-power radio unit 861, and a high-speed radio unit 862.
The display apparatus control unit 801 is composed of a CPU, and controls the entirety of the display apparatus 800.
The captured signal processing circuit 809 bears functions equivalent to those of the image capturing driver 41, the solid-state image sensor 42, and the captured signal processing circuit 43 inside the camera 1; however, as these are not directly related to the contents of the present embodiment, they are collectively illustrated as one. Data output from the captured signal processing circuit 809 is processed inside the display apparatus control unit 801.
The various types of switches 811 are not shown in
The angular velocity sensor 807 is a sensor that uses a gyroscope or the like, and detects a movement of the display apparatus 800. The acceleration sensor 808 detects an orientation of the display apparatus 800.
Note that as stated earlier, the angular velocity sensor 807 and the acceleration sensor 808 are built in the display apparatus 800, and although they have functions similar to those of the angular velocity sensor 107 and the acceleration sensor 108 built in the above-described camera 1, they are separate therefrom.
A flash memory or the like is used as the built-in nonvolatile memory 812; an activation program for the display apparatus control unit 801 and setting values of various types of program modes are stored therein.
The primary memory 813 is composed of a RAM or the like; it temporarily stores video data that is currently processed, and temporarily stores the results of computation performed by the captured signal processing circuit 809. In the present embodiment, during recording of videos composed of moving images, gyroscope data that is detected by the angular velocity sensor 107 at the shooting time of each frame is held in the primary memory 813 in association with each frame.
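The per-frame association of gyroscope data described above can be sketched as a simple data structure; the class and field names are illustrative assumptions, not taken from the embodiment.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FrameRecord:
    """One moving-image frame held in primary memory together with the
    angular velocity (gyroscope) sample detected at its shooting time."""
    frame_index: int
    gyro_xyz: Tuple[float, float, float]  # angular velocity about X, Y, Z

@dataclass
class PrimaryMemory:
    """Minimal stand-in for the primary memory 813 holding such records."""
    frames: List[FrameRecord] = field(default_factory=list)

    def record_frame(self, index: int,
                     gyro_xyz: Tuple[float, float, float]) -> None:
        self.frames.append(FrameRecord(index, gyro_xyz))
```

Holding the gyroscope sample alongside each frame is what later allows the anti-vibration processing to know how the camera moved while that frame was captured.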
The large-capacity nonvolatile memory 814 is used in recording or readout of image data in the display apparatus 800. In the present embodiment, the large-capacity nonvolatile memory 814 is composed of a removable/attachable memory such as an SD card. Note that it may be composed of a memory that is not removable/attachable, such as the large-capacity nonvolatile memory 51 in the camera 1.
The speaker 815, the vibrating body 816, and the LED 817 notify the user of a status of the display apparatus 800 and issue a warning by producing a sound, producing vibration, and emitting light.
The sound processing unit 820 includes a left microphone 819L and a right microphone 819R for picking up external sounds (analog signals), and generates sound signals by processing the analog signals that have been picked up.
The low-power radio unit 861 performs data communication with the camera 1 over the low-power radio. The high-speed radio unit 862 performs data communication with the camera 1 over the high-speed radio.
The face sensor 806 includes an infrared LED lighting circuit 821, an infrared LED 822, an infrared light collecting lens 826, and an infrared detection processing apparatus 827. The infrared LED lighting circuit 821 is a circuit that has functions similar to those of the infrared LED lighting circuit 21 of
When the face sensor 806 shown in
An other function unit 830 executes functions which are not directly related to the present embodiment and which are unique to a smartphone, such as a telephone function and other sensor functions.
The following describes how to use the camera 1 and the display apparatus 800.
As a supplement to the description,
In step S100, when the power of the camera 1 is turned on by turning the power supply switch 11 on, the overall control CPU 101 is activated, and the overall control CPU 101 reads out an activation program from the built-in nonvolatile memory 102. Thereafter, the overall control CPU 101 executes preparation operation processing for configuring settings before shooting by the camera 1. The details of the preparation operation processing will be described later using
In step S200, the face direction detection unit 20 executes face direction detection processing, which detects the direction of the user's face and infers the observing direction of the user from the detected face direction.
In step S300, the recording direction and angle-of-view determination unit 30 executes recording direction and range determination processing.
In step S400, the image capture unit 40 performs shooting and generates shooting data.
In step S500, the image cutout and development processing unit 50 executes recording range development processing: an image is cut out from the image signal generated in step S400 with use of the information of the recording direction and the angle of view determined in step S300, and that range is developed.
In step S600, primary recording processing is executed in which the primary recording unit 60 stores the image signal developed in step S500 into the primary memory 103.
In step S700, processing of transfer to a display apparatus is executed in which the transmission unit 70 performs radio transmission of the image signal that has been primarily recorded in step S600 to the display apparatus 800 at a designated timing.
Step S800 and subsequent steps are executed on the display apparatus 800.
In step S800, the display apparatus control unit 801 executes optical correction processing for performing optical correction with respect to the image signal that has been transferred from the camera 1 in step S700.
In step S900, the display apparatus control unit 801 executes anti-vibration processing with respect to the image signal for which the optical correction has been performed in step S800.
Note that the order of step S800 and step S900 may be reversed. That is to say, the anti-vibration processing for the video may be executed first, and the optical correction may be performed later.
In step S1000, the display apparatus control unit 801 performs secondary recording that records the image signal for which the optical correction processing and the anti-vibration processing have been executed in steps S800 and S900 into the large-capacity nonvolatile memory 814, and the present processing is ended.
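The flow from step S100 through step S1000 can be sketched as follows. This is a minimal illustration in which each step is represented by a placeholder log entry; the function names are hypothetical and do not correspond to the actual firmware of the camera 1 or the display apparatus 800.

```python
# Hypothetical sketch of the shooting pipeline (steps S100-S1000).
# Step numbers mirror the flow in the text; the bodies are placeholders.

def run_camera_side(log):
    log.append("S100 prepare")             # preparation operation processing
    log.append("S200 face direction")      # infer the user's observing direction
    log.append("S300 recording range")     # recording direction and angle of view
    log.append("S400 shoot")               # image capture and shooting data
    log.append("S500 cutout/develop")      # cut out and develop the recording range
    log.append("S600 primary record")      # store into the primary memory 103
    log.append("S700 transfer")            # radio transfer to the display apparatus 800
    return log

def run_display_side(log):
    # The order of S800 and S900 may be reversed, as noted in the text.
    log.append("S800 optical correction")
    log.append("S900 anti-vibration")
    log.append("S1000 secondary record")   # store into the large-capacity nonvolatile memory 814
    return log

steps = run_display_side(run_camera_side([]))
```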
In step S101, the overall control CPU 101 determines whether the power supply switch 11 is on. The overall control CPU 101 stands by while the power remains off, and proceeds to step S102 when the power is turned on.
In step S102, the overall control CPU 101 determines a mode that is selected by the shooting mode switch 12. In a case where the mode selected by the shooting mode switch 12 is the moving image mode as a result of the determination, processing proceeds to step S103.
In step S103, the overall control CPU 101 reads out various types of settings for the moving image mode from the built-in nonvolatile memory 102, stores them into the primary memory 103, and then proceeds to step S104. Here, the various types of settings for the moving image mode include a setting value ang for the angle of view (which is preset to 90° in the present embodiment), and an anti-vibration level designated by “high”, “medium”, “off”, etc.
In step S104, the overall control CPU 101 starts operations of the image capturing driver 41 for the moving image mode, and then exits from the present subroutine.
In a case where the mode selected by the shooting mode switch 12 is the still image mode as a result of the determination in step S102, processing proceeds to step S106.
In step S106, the overall control CPU 101 reads out various types of settings for the still image mode from the built-in nonvolatile memory 102, stores them into the primary memory 103, and then proceeds to step S107. Here, the various types of settings for the still image mode include a setting value ang for the angle of view (which is preset to 45° in the present embodiment), and an anti-vibration level designated by “high”, “medium”, “off”, etc.
In step S107, the overall control CPU 101 starts operations of the image capturing driver 41 for the still image mode, and then exits from the present subroutine.
In a case where the mode selected by the shooting mode switch 12 is the preset mode as a result of the determination in step S102, processing proceeds to step S108. Here, the preset mode is a mode in which an external device such as the display apparatus 800 sets a shooting mode with respect to the camera 1, and is one of the three shooting modes among which the shooting mode switch 12 can switch. Specifically, the preset mode is a mode for custom shooting. Here, as the camera 1 is a small wearable device, the camera 1 is not provided with operation switches, a setting screen, and the like for changing the detailed settings therefor, and the detailed settings for the camera 1 are changed using an external device such as the display apparatus 800.
For example, assume a case where an angle of view of 90° and an angle of view of 110° are desired to be shot continuously within the same moving image shooting session. As the normal moving image mode is set to an angle of view of 90°, performing the aforementioned shooting would require the following manipulation: first, shoot in the normal moving image mode; then stop the shooting, switch the display apparatus 800 to a setting screen for the camera 1, and change the angle of view to 110°. However, manipulating the display apparatus 800 in this way is troublesome in the middle of an event.
On the other hand, if the preset mode is set in advance as a mode that shoots moving images with an angle of view of 110°, simply sliding the shooting mode switch 12 to “Pri” after the shooting of moving images with an angle of view of 90° is ended can promptly switch to the shooting of moving images with an angle of view of 110°. That is to say, the user no longer needs to suspend the current action and perform the troublesome manipulation mentioned above.
Note that the contents set in the preset mode may include not only the angle of view, but also an anti-vibration level designated by “high”, “medium”, “off”, etc., settings for voice recognition, and so forth.
In step S108, the overall control CPU 101 reads out various types of settings for the preset mode from the built-in nonvolatile memory 102, stores them into the primary memory 103, and then proceeds to step S109. Here, the various types of settings for the preset mode include a setting value ang for the angle of view, and an anti-vibration level designated by “high”, “medium”, “off”, etc.
In step S109, the overall control CPU 101 starts operations of the image capturing driver 41 for the preset mode, and then exits from the present subroutine.
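The mode branch of the preparation subroutine (steps S102 through S109) can be sketched as follows. The angle-of-view defaults follow the text (90° for the moving image mode, 45° for the still image mode); the preset-mode values and the anti-vibration levels shown are illustrative assumptions, since the preset mode holds whatever an external device configured.

```python
# Hypothetical sketch of the preparation subroutine's mode branch
# (steps S102-S109).  The preset values are example custom settings.

DEFAULTS = {
    "movie":  {"ang": 90,  "anti_vibration": "medium"},   # moving image mode (S103)
    "still":  {"ang": 45,  "anti_vibration": "medium"},   # still image mode (S106)
    "preset": {"ang": 110, "anti_vibration": "high"},     # preset mode (S108), example values
}

def preparation(mode):
    if mode not in DEFAULTS:
        raise ValueError("unknown shooting mode: %s" % mode)
    # Read out the settings for the selected mode and store them (S103/S106/S108).
    settings = dict(DEFAULTS[mode])
    # Start the image capturing driver for the selected mode (S104/S107/S109).
    settings["driver_started_for"] = mode
    return settings
```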
Next, binaural recording processing according to the present embodiment will be described. In the present embodiment, binaural recording is realized by obtaining sounds using the microphones 19L, 19R, 19NL, and 19NR. The following describes sound processing using the block diagram of
Shooting processing is executed as a result of the user inputting an instruction to the camera 1 by manipulating a non-illustrated button or by using a voice command.
Once the shooting processing has been started, the overall control CPU 101 of
The sound processing unit 104 prepares to obtain signals from each microphone by turning on the power of the microphones. Once the initialization processing for the sound processing unit 104 has ended, recording processing is started.
In the recording processing, the sound processing unit 104 executes gain adjustment and filter processing that uses, for example, a low-cut or high-cut filter, with respect to the obtained sound signals, and outputs the sound signals. The overall control CPU 101 stores and records the output sound data into one file together with moving images that have been shot.
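One possible form of the gain adjustment and low-cut filtering described above can be sketched as follows. The gain value and the filter coefficient are illustrative assumptions, not parameters of the actual sound processing unit 104.

```python
# Minimal sketch of gain adjustment followed by a first-order low-cut
# (high-pass) filter.  Gain and coefficient values are illustrative.

def low_cut(samples, alpha=0.95):
    """First-order high-pass: y[n] = alpha * (y[n-1] + x[n] - x[n-1])."""
    out, y_prev, x_prev = [], 0.0, 0.0
    for x in samples:
        y = alpha * (y_prev + x - x_prev)
        out.append(y)
        y_prev, x_prev = y, x
    return out

def record_channel(samples, gain=2.0):
    # Apply a fixed gain, then remove low-frequency (DC-like) components.
    return low_cut([gain * s for s in samples])
```

Fed a constant (DC) input, the output decays toward zero, which is the defining behavior of a low-cut filter.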
Reproduction of the recorded moving images is performed on, for example, a smartphone or a personal computer which is the display apparatus shown in
Sound signals obtained by the microphone 19L and sound signals obtained by the microphone 19R, which are respectively regarded as left-channel (Lch) sound signals and right-channel (Rch) sound signals, are converted from analog signals to digital signals in a recorded sound A/D conversion unit 202a. A certain amount of gain is applied before the A/D conversion so as to achieve a desired level in accordance with microphone sensitivity and the level of recorded sound signals, and then the A/D conversion is performed. For example, a programmable-gain amplifier (PGA) can be used as means for applying a gain. Note that although there are a variety of A/D conversion methods, it is assumed that delta-sigma A/D conversion is used in the present embodiment.
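The gain-then-convert step can be illustrated as follows: a gain is chosen so that the signal peak reaches a target fraction of full scale, and the result is quantized to digital samples. The target level and the 16-bit depth are assumed values for illustration, not those of the actual A/D conversion units.

```python
# Illustrative sketch of a programmable gain applied before A/D conversion,
# followed by uniform quantization to signed 16-bit samples.  Target level
# and bit depth are assumptions for the example.

def pga_gain(samples, target_peak=0.8):
    """Choose a gain so the peak reaches target_peak of full scale (1.0)."""
    peak = max(abs(s) for s in samples)
    gain = target_peak / peak if peak > 0 else 1.0
    return [gain * s for s in samples]

def quantize_16bit(samples):
    """Uniformly quantize values in [-1.0, 1.0] to signed 16-bit integers."""
    return [max(-32768, min(32767, int(round(s * 32767)))) for s in samples]

digital = quantize_16bit(pga_gain([0.05, -0.1, 0.02]))
```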
Sound signals obtained by the microphone 19NL and sound signals obtained by the microphone 19NR, which are respectively regarded as Lch reference sound source signals and Rch reference sound source signals, are similarly converted from analog signals to digital signals in a reference sound A/D conversion unit 202b.
The sound signals obtained by the microphones 19L and 19R are used as sound sources for recording of sounds, whereas the sound signals obtained by the microphones 19NL and 19NR are used as reference sound signals for noise reduction processing.
A sound signal processing unit 203 applies noise reduction processing that uses the reference sound signals to the sound signals that have been picked up. For example, Lch and Rch sound signals with reduced noise can be obtained by predicting the noise, or extracting noise components, from the reference sound signals with use of the LMS method, and subtracting the predicted noise or the noise components from the sound signals for recording.
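A single-channel LMS noise canceller of this kind can be sketched as follows: an adaptive FIR filter predicts the noise component of the recorded signal from the reference signal and subtracts it, and the subtraction error drives the coefficient update. The filter length and step size are illustrative, not the actual parameters of the sound signal processing unit 203.

```python
# Minimal single-channel LMS noise-cancellation sketch.  The adaptive FIR
# filter learns to predict the noise in `recorded` from `reference`; the
# residual error is the noise-reduced output.  Tap count and step size
# are assumed example values.

def lms_cancel(recorded, reference, taps=8, mu=0.05):
    w = [0.0] * taps               # adaptive filter coefficients
    buf = [0.0] * taps             # most recent reference samples
    out = []
    for d, x in zip(recorded, reference):
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))   # predicted noise
        e = d - y                  # error = recorded signal minus predicted noise
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, buf)]
        out.append(e)
    return out
```

When the recorded signal consists only of a scaled copy of the reference noise, the residual shrinks toward zero as the filter converges.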
The sound signals with reduced noise are adjusted by an auto level control (ALC) unit 214 so that they have an appropriate sound volume, and stored into the primary memory 103. Sound signals that have been obtained at a predetermined sampling period and stored into the primary memory 103 are incorporated in a moving image file as sounds of recorded moving images during the continuation of recording of a moving image signal by the camera 1. The moving image file is stored into the large-capacity nonvolatile memory 51.
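Auto level control of this kind can be sketched as a per-block gain that is nudged toward a target output level. The target level and adaptation rate below are assumed values, not those of the actual ALC unit 214.

```python
# Hypothetical sketch of auto level control (ALC): each block is scaled by
# the current gain, and the gain is moved part-way toward the value that
# would put the block's peak at the target level.  Constants are assumed.

def alc(blocks, target=0.5, rate=0.2):
    gain, out = 1.0, []
    for block in blocks:
        scaled = [gain * s for s in block]
        peak = max(abs(s) for s in scaled) or 1e-9   # avoid division by zero
        gain *= (target / peak) ** rate              # partial correction toward target
        out.append(scaled)
    return out
```

Applying partial rather than full correction each block keeps the volume change gradual, so the gain does not jump audibly between blocks.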
Execution of the above-described sound processing makes it possible to record sounds in which noise attributed to vibration propagated through the camera 1 has been reduced, and to append sound data that can be binaurally reproduced to a recording file.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2023-021867, filed Feb. 15, 2023, which is hereby incorporated by reference herein in its entirety.