This application is a U.S. National Phase of International Patent Application No. PCT/JP2019/045557 filed on Nov. 21, 2019, which claims priority benefit of Japanese Patent Application No. JP 2018-228311 filed in the Japan Patent Office on Dec. 5, 2018. Each of the above-referenced applications is hereby incorporated herein by reference in its entirety.
The present disclosure relates to an image capturing element, an image capturing device and a method, and more particularly to an image capturing element, an image capturing device and a method capable of preventing a reduction in communication safety.
Conventionally, as systems for monitoring services in hospital rooms, homes, and the like, there have been systems for confirming the safety of a remote target. In such a system, for example, a safety determination is made by capturing an image of the target with an image sensor of a terminal device, transmitting the captured image to a cloud via a network, and extracting a feature amount from the captured image on the cloud side to analyze the feature amount.
However, in a case where the captured image is transmitted from the terminal device to a cloud in this way, there is a risk that personal information may be leaked due to leakage of the captured image to a third party.
Meanwhile, a method of extracting a feature amount from an image and transmitting the extracted feature amount to a cloud has been considered (see, for example, NPL 1).
[NPL 1] "Information technology—Multimedia content description interface—Part 13: Compact descriptors for visual search," ISO/IEC DIS 15938-13, ISO/IEC JTC1/SC29/WG11, Sep. 2015
However, NPL 1 does not clarify how the image from which the feature amount is extracted is generated. For example, in a case where the feature amount of a captured image obtained by an image capturing device is extracted by another device, there is still a risk that personal information may be leaked in the communication between the image capturing device and the other device.
The present disclosure has been made in view of such a situation, and is intended to prevent a reduction in communication safety.
An image capturing element of one aspect of the present technology is an image capturing element including an image capturing section that captures an image of a subject to generate a captured image, and a feature amount extracting section that extracts, from the captured image generated by the image capturing section, a predetermined feature amount to be output to the outside.

An image capturing device of one aspect of the present technology is an image capturing device including an image capturing section that captures an image of a subject to generate a captured image, and a feature amount extracting section that extracts, from the captured image generated by the image capturing section, a predetermined feature amount to be output to the outside.

A method for capturing an image of one aspect of the present technology is a method for capturing an image, including generating a captured image by capturing an image of a subject, and extracting, from the generated captured image, a predetermined feature amount to be output to the outside.
In the image capturing element, image capturing device, and method of one aspect of the present technology, an image of a subject is captured, a captured image is generated, and a predetermined feature amount to be output to the outside is extracted from the generated captured image.
Hereinafter, modes for carrying out the present disclosure (hereinafter referred to as embodiments) will be described. Note that the description will be given in the following order.
<Literature That Supports Technical Content and Technical Terms>
The scope disclosed in the present technology includes not only contents described in the embodiments but also contents described in the following literature known at the time of filing the application. In other words, the contents described in this literature are also grounds for determining support requirements.
NPL 1: Described above
<Service with Image Recognition>
Conventionally, as a service accompanied by image recognition, there has been, for example, a system for monitoring services in a hospital room, a home, or the like. In such a system, for example, an image of the target is captured with an image sensor of a terminal device, the captured image is transmitted to a cloud via a network, a feature amount is extracted from the captured image on the cloud side, and the feature amount is analyzed to perform the image recognition; a safety determination is then made on the basis of the image recognition result.
Advanced image recognition can thus be achieved by using a cloud database and the like, and a more accurate safety determination can be carried out under more diverse situations. By applying such a system, more advanced services can therefore be offered.
However, in the case of this method, it is necessary to transmit the captured image from the terminal device to a cloud. This captured image may include personal information (information regarding privacy) such as the user's face and room situations, for example. Therefore, if the captured image is leaked to a third party during communication, personal information may also be leaked. In other words, there is a risk that the safety of communication may be reduced by transmitting the captured image by communication.
In contrast, a method can also be considered in which, by giving priority to personal information protection, the target is monitored by use of a sensor other than the image sensor, such as a pyroelectric sensor, and the safety is confirmed with the sensing result. However, a pyroelectric sensor, for example, cannot discriminate positional relations in detail, and it is difficult for it to detect a change in posture or to identify a person. As described above, in a case where a sensor other than the image sensor is used, there has been a risk that the recognition performance is lower than in the case of using the image sensor and that the quality of the service provided on the basis of the recognition result is accordingly reduced.
Accordingly, the image sensor extracts the feature amount from the captured image, encodes the feature amount, and outputs the encoded data of the feature amount (Method 1).
In such a manner described above, the image capturing element or the image capturing device can output the feature amount of the captured image without outputting the captured image itself. Therefore, it is possible to prevent the reduction in communication safety and to protect privacy. In addition, since safety can be confirmed by image recognition, a more accurate safety determination can be achieved under more diverse situations. Hence, it becomes possible to provide more enhanced services.
In addition, the captured image may be subjected to signal processing for extracting the feature amount. For example, an image capturing element/image capturing device may be further provided with a signal processing section that performs signal processing for extracting a feature amount on a captured image generated by the image capturing section, and the feature amount extracting section may extract the feature amount from the captured image subjected to signal processing by the signal processing section. By such signal processing, the extraction of the feature amount from the captured image can be made easier or more accurate. In addition, since the captured image is not output, the signal processing can be limited to processing regarding the feature amount extraction. For example, the image resolution, frame frequency, content of demosaicing and color conversion processing, and the like can be set to a level required for feature amount extraction. Therefore, an increase in load of signal processing can be prevented. That is, an increase in processing time of the signal processing and power consumption can be prevented.
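For illustration, the following is a minimal sketch, not taken from the disclosure, of signal processing limited to what feature amount extraction needs: a cheap color conversion and a box-filter downscale. The function name and parameters are assumptions.

```python
# Minimal sketch (assumptions only): signal processing reduced to what the
# feature extractor needs, i.e., grayscale conversion and block downscaling.
import numpy as np

def preprocess_for_features(raw_rgb: np.ndarray, scale: int = 4) -> np.ndarray:
    """Reduce a captured RGB frame to a small grayscale frame.

    Display-quality demosaicing and color conversion are skipped because the
    result feeds the feature extractor, not a viewer.
    """
    gray = raw_rgb.astype(np.float32).mean(axis=2)  # cheap color conversion
    h = gray.shape[0] - gray.shape[0] % scale       # crop to multiples of scale
    w = gray.shape[1] - gray.shape[1] % scale
    # Box-filter downscale: average each scale x scale block of pixels.
    return gray[:h, :w].reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
```

Reducing resolution (and, analogously, frame frequency) this early is what keeps the processing time and power consumption of the later stages low.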
In addition, the extracted feature amount may be encoded. For example, in the image capturing element/image capturing device, the feature amount extracting section may encode the extracted feature amount. In such a manner described above, an increase in amount of data of the information to be output can be prevented. In addition, by transmitting the encoded data, even in a case where the data is leaked to a third party, it is difficult for the third party to obtain the original feature amount, so that the reduction in communication safety can be prevented.
In addition, the image capturing element/image capturing device may be provided with a transmitting section that transmits the extracted feature amount (or encoded data thereof). That is, the transmitting section can be formed inside the image capturing element/image capturing device, or can be formed outside these (in another device).
In addition, a header may be generated and added. For example, an image capturing element/image capturing device may be provided with a header generating section that generates header information regarding a feature amount, and an adding section that adds the header information generated by the header generating section to the feature amount extracted by the feature amount extracting section. In such a manner described above, various pieces of information regarding the information to be transmitted (for example, the feature amount) can be stored in the header and transmitted in association with the feature amount. Hence, the cloud can provide, for example, more diverse services on the basis of the information included in the header.
In addition, in the image capturing element, the above-mentioned image capturing section and feature amount extracting section may be packaged and formed on different substrates, respectively. That is, the sections may be packaged, and the feature amount extraction and encoding may be performed in a logic layer (Method 1-1).
Note that, by separating the substrates of the image capturing section and the feature amount extracting section, these substrates can be stacked. That is, in the image capturing element, the substrate in which the image capturing section is formed and the substrate in which the feature amount extracting section is formed may be superimposed one on top of the other. In such a manner described above, the image capturing element can be further miniaturized. In addition, this can prevent an increase in the cost of the image capturing element.
In addition, a function of selecting an operation mode may be included (Method 1-1-1). For example, it may be made possible to select whether to output the captured image or to output the feature amount.
It should be noted that the selection may be made on the basis of a request from the outside of the image capturing element. In such a manner described above, for example, the cloud can request which of the image and the feature amount is to be output, and a wider variety of services can thereby be offered.
In addition, an authentication function for the request source may be included (Method 1-1-1-1). In such a manner described above, a request from an illegitimate request source can be rejected, so that the operation mode can be switched more safely.
Further, the image capturing element/image capturing device may be provided with an image signal processing section that performs predetermined signal processing on the captured image generated by the image capturing section, and an encoding section that encodes a captured image subjected to signal processing by the image signal processing section. In other words, signal processing or encoding may be performed on the captured image to be output. By performing signal processing in this way, a decrease in image quality can be prevented. Furthermore, by encoding, an increase in the amount of data of the information to be output can be prevented.
Note that, in the case of the image output mode, the image to be output may be generated by use of a part (partial region) of the pixel region (photoelectric conversion region). In other words, a captured image having a lower resolution than that of the captured image from which the feature amount is extracted may be output. For example, in the image capturing element/image capturing device, in a case where the output of the captured image is chosen by the selecting section, the image capturing section may generate the captured image by using a part of the photoelectric conversion region. In such a manner described above, an increase in the amount of data of the information to be output can be prevented. In addition, by lowering the resolution, the amount of leaked personal information in a case where the image is leaked to a third party can be reduced.
In addition, a function of limiting the region of the image to be output on the basis of the feature amount may be included (Method 1-1-2). For example, the image capturing section may generate the captured image by using a part of the photoelectric conversion region within a range based on the extracted feature amount.
<Communication System>
The communication system 100 includes an image capturing device 101 and a cloud 102, and provides, for example, a monitoring service in a hospital room or a home. For example, the image capturing device 101 is installed in a room or the like of an observation target (monitoring target), and captures images of the observation target at that location periodically or triggered by a predetermined event. Then, the image capturing device 101 extracts a feature amount of the captured image and transmits the feature amount to the cloud 102. The cloud 102 receives the feature amount, and then analyzes the feature amount, thereby confirming the safety of the observation target, or the like. In addition, the cloud 102 provides a predetermined service (notification, for example) based on the confirmation result.
That is, the cloud 102 can confirm the safety on the basis of the captured image. Accordingly, the cloud 102 can carry out a more accurate safety determination under more diverse situations. Thus, it becomes possible to provide more enhanced services.
Further, since the image capturing device 101 transmits the feature amount to the cloud 102 instead of the captured image, even if a third party illegally acquires the transmitted data, the third party obtains only the feature amount, and the captured image cannot be restored from it. Accordingly, by adopting such a mechanism, the communication system 100 can prevent the leakage of information regarding the observation target.
In addition, in a case where the captured image is transmitted from the image capturing device 101 to the cloud 102, a complicated and expensive mechanism such as strong encryption is required in order to ensure sufficient communication safety and prevent leakage of personal information, so that there is a risk that the cost may increase. In contrast, this method only transmits the feature amount as described above and can be easily realized, thereby preventing the increase in cost.
In addition, the feature amount generally has a smaller amount of information than that of the captured image. Thus, by transmitting the feature amount as in the present method, the amount of information to be transmitted can be reduced as compared with a case of transmitting the captured image. As a result, an increase in communication load can be prevented. Therefore, an increase in cost of the communication system 100 (the image capturing device 101, the cloud 102, etc.) can be prevented.
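As a back-of-the-envelope illustration (the resolution and feature dimensionality here are assumptions, not values from the disclosure), the difference in data volume can be around three orders of magnitude:

```python
image_bytes = 1920 * 1080 * 3        # one uncompressed 8-bit RGB frame: 6,220,800 bytes
feature_bytes = 512 * 4              # one 512-dimensional float32 feature vector: 2,048 bytes
print(image_bytes // feature_bytes)  # -> 3037: roughly 3,000x less data per transmission
```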
The image capturing device 101 includes an image capturing element 112.
The image capturing element 112 performs processing related to image capturing. The image capturing element 112 includes a light receiving section 121, an ADC (Analog-to-Digital Converter) 122, a feature amount extraction signal processing section 123, a feature amount extraction encoding section 124, and a transmitting section 125.
The light receiving section 121 receives light from a subject (for example, an observation target) and performs photoelectric conversion to generate a captured image as an electric signal. The light receiving section 121 supplies the electric signal of the captured image to the ADC 122. The ADC 122 acquires the electric signal of the captured image and performs A/D conversion of the signal to generate digital data of the captured image (also referred to as captured image data). The ADC 122 supplies the captured image data to the feature amount extraction signal processing section 123.
When acquiring the captured image data supplied from the ADC 122, the feature amount extraction signal processing section 123 performs signal processing related to the feature amount extraction on the image data. The feature amount extraction signal processing section 123 supplies the captured image data subjected to the signal processing to the feature amount extraction encoding section 124.
Note that, since the captured image is used only for extracting the feature amount (and not for viewing or the like), the signal processing performed on the captured image by the feature amount extraction signal processing section 123 only needs to relate to the feature amount extraction. For example, the processing may be one with a relatively light load, such as resolution conversion, frame frequency conversion, demosaicing, or color conversion processing. In this way, by using the captured image only for feature amount extraction and not outputting it, an increase in the load of signal processing can be prevented. As a result, an increase in processing time and power consumption can be prevented. As a matter of course, the content of this signal processing is optional as long as it relates to the feature amount extraction.
In addition, it is sufficient if the captured image has enough information for extracting the feature amount. Therefore, for example, the feature amount extraction signal processing section 123 can use this signal processing to reduce the resolution and frame frequency of the captured image within the range in which the feature amount can still be extracted with sufficient accuracy (that is, the amount of information in the captured image can be reduced without reducing the feature amount extraction accuracy). As a result, an increase in the processing time and power consumption for feature amount extraction can be prevented.
The feature amount extraction encoding section 124 acquires the captured image data supplied from the feature amount extraction signal processing section 123, and extracts a predetermined feature amount of the captured image. In addition, the feature amount extraction encoding section 124 encodes the extracted feature amount and generates encoded data. The feature amount to be extracted and the encoding method are optional. By encoding the feature amount in this way, the amount of information to be transmitted can be further reduced. Further, the feature amount extraction encoding section 124 supplies the generated encoded data to the transmitting section 125.
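Since the text leaves both the feature amount and the encoding method open, the following sketch picks, as assumptions, a coarse grid of mean intensities as the feature and zlib as the codec; the function name is hypothetical.

```python
import zlib
import numpy as np

def extract_and_encode(gray_frame: np.ndarray, grid: int = 8) -> bytes:
    """Extract a grid x grid map of mean intensities and compress it."""
    h = gray_frame.shape[0] - gray_frame.shape[0] % grid
    w = gray_frame.shape[1] - gray_frame.shape[1] % grid
    # Average the pixels inside each of the grid x grid cells.
    feat = gray_frame[:h, :w].reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return zlib.compress(feat.astype(np.float32).tobytes())
```

Note that the original frame cannot be reconstructed from the few cell means, which is exactly the privacy property the transmission relies on.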
The transmitting section 125 acquires the encoded data supplied from the feature amount extraction encoding section 124 and transmits the data to the cloud 102 (receiving section 131) via an optional network (communication medium) such as the Internet.
The cloud 102 performs processing related to feature amount analysis. The cloud 102 includes a receiving section 131 and a data analysis processing section 132.
The data analysis processing section 132 acquires the encoded data supplied from the receiving section 131, and decodes the data, thereby restoring the feature amount data of the captured image. In addition, the data analysis processing section 132 analyzes the captured image by analyzing the restored feature amount, and confirms the safety of the observation target on the basis of the analysis result. Further, the data analysis processing section 132 may perform a predetermined service (for example, notification or the like) according to the safety confirmation result.
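A corresponding cloud-side sketch, assuming the toy feature and codec above; the change-detection rule stands in for the unspecified analysis, and notify() is a hypothetical placeholder for a real notification service.

```python
import zlib
import numpy as np

def notify(message: str) -> None:
    print("NOTIFY:", message)  # stand-in for an actual notification service

def analyze(encoded: bytes, previous=None, grid: int = 8,
            threshold: float = 5.0) -> np.ndarray:
    """Decode the received feature amount and run a toy safety check."""
    feat = np.frombuffer(zlib.decompress(encoded), dtype=np.float32).reshape(grid, grid)
    if previous is not None and float(np.abs(feat - previous).mean()) > threshold:
        notify("large scene change detected for the observation target")
    return feat  # kept as the reference for the next frame
```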
<Feature Amount Extraction Encoding Section>
As described above, the feature amount extraction encoding section 124 extracts the feature amount from the supplied captured image data and encodes the extracted feature amount.
<Flow of Image Capturing Process>
An example of the flow of the image capturing process executed by the image capturing device 101 will be described with reference to the flowchart.
When the image capturing process is started, the light receiving section 121 captures an image of a subject in step S101 and generates an electric signal of the captured image. In step S102, the ADC 122 performs A/D conversion of the electrical signal of the captured image obtained in step S101 to generate captured image data.
In step S103, the feature amount extraction signal processing section 123 performs signal processing for feature amount extraction on the captured image data generated in step S102.
In step S104, the feature amount extraction encoding section 124 extracts the feature amount of the captured image from the captured image data appropriately subjected to the signal processing in step S103, and generates the encoded data of the feature amount by encoding the feature amount.
In step S105, the transmitting section 125 transmits the encoded data generated in step S104 (to the cloud 102).
When the process of step S105 is completed, the image capturing process ends.
By performing the image capturing process as described above, the image capturing device 101 can more safely provide information to the cloud 102. That is, the reduction in communication safety can be prevented.
<Image Capturing Element>
The image capturing element 112 described above may be formed as a package in which its sections are formed on two substrates, a semiconductor substrate 171 and a semiconductor substrate 172.
The semiconductor substrate 171 and the semiconductor substrate 172 may be arranged side by side on a plane, or the substrates may be superposed on each other.
<Image Capturing Device>
Note that the transmitting section may be provided outside the packaged image capturing element 112 (provided on another chip). For example, an image capturing device 180 may include, in addition to the packaged image capturing element 112, a header generating section 181, an adding section 182, and a transmitting section 183.
The packaged image capturing element 112 in this case basically has a configuration similar to that of the example described above.
The header generating section 181 generates header information including information regarding the encoded data output from the packaged image capturing element 112 (namely, information regarding the feature amount of the captured image), and supplies the header information to the adding section 182. The adding section 182 adds the header information supplied from the header generating section 181 to the encoded data output from the packaged image capturing element 112 (feature amount extraction encoding section 124). The adding section 182 supplies the encoded data to which the header information is added to the transmitting section 183.
The transmitting section 183 is a processing section similar to the transmitting section 125; it acquires the encoded data, to which the header information is added, supplied from the adding section 182, and transmits the data (to the cloud 102).
In other words, the image capturing device 180 generates the header information outside the packaged image capturing element 112, combines the header information with the encoded data of the feature amount, and transmits the encoded data to which the header information is added. In such a manner described above, an increase in the hardware scale and power consumption as a whole can be prevented.
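A sketch of the header generating section 181 and the adding section 182 could look as follows. The disclosure only says the header carries information regarding the feature amount, so the specific fields (magic marker, device id, timestamp, payload length) are assumptions.

```python
import struct
import time

MAGIC = b"FEAT"       # assumed 4-byte format marker
HEADER_FMT = "!IdI"   # device id (uint32), timestamp (double), payload length (uint32)

def add_header(encoded_feature: bytes, device_id: int) -> bytes:
    """Prepend device id, capture time, and payload length to the encoded data."""
    header = MAGIC + struct.pack(HEADER_FMT, device_id, time.time(), len(encoded_feature))
    return header + encoded_feature

def split_header(packet: bytes):
    """Receiver side: recover the header fields and the encoded feature."""
    assert packet[:4] == MAGIC
    device_id, timestamp, length = struct.unpack_from(HEADER_FMT, packet, 4)
    body_offset = 4 + struct.calcsize(HEADER_FMT)
    return device_id, timestamp, packet[body_offset:body_offset + length]
```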
<Flow of Image Capturing Process>
An example of the flow of the image capturing process in this case will be described with reference to the flowchart. Each process of steps S121 to S124 is executed in a similar manner to each process of steps S101 to S104 described above.
In step S125, the header generating section 181 generates the header information, and the adding section 182 adds the header information to the encoded data of the feature amount.
In step S126, the transmitting section 183 transmits the encoded data of the feature amount to which the header is added in step S125. When the process of step S126 is completed, the image capturing process ends.
By performing the image capturing process as described above, the image capturing device 180 can more safely provide information to the cloud 102. That is, the reduction in communication safety can be prevented.
<Communication System>
The image capturing element or the image capturing device may have a function of selecting an operation mode. For example, the image capturing element or the image capturing device may be able to select whether to output the captured image or to output the feature amount.
The control section 211 receives a request regarding the operation mode from the outside of the image capturing device 101, and controls the operation mode on the basis of the request. To be more specific, the control section 211 controls the selecting section 212 and the selecting section 215 to select (switch) the operation mode.
The image capturing element 112 in this case is provided with a mode of outputting the feature amount (encoded data thereof) and a mode of outputting the captured image (encoded data thereof) as operation modes. The control section 211 can select either mode by controlling the selecting section 212 and the selecting section 215.
Incidentally, the source of the request regarding the operation mode is optional. For example, the request source may be a user of the image capturing device 101, or may be the cloud 102 (for example, the data analysis processing section 132).
The selecting section 212 controls the connection destination of the ADC 122 according to the control of the control section 211. For example, when the selecting section 212 connects the ADC 122 to the signal processing section 213, the ADC 122 supplies the captured image data to the signal processing section 213 via the selecting section 212. That is, the image capturing element 112 operates in a mode in which the captured image (encoded data thereof) is output. In addition, when the selecting section 212 connects the ADC 122 to the feature amount extraction signal processing section 123, the ADC 122 supplies the captured image data to the feature amount extraction signal processing section 123 via the selecting section 212. That is, the image capturing element 112 operates in a mode in which the feature amount (encoded data thereof) is output.
The signal processing section 213 acquires the captured image data supplied via the selecting section 212, and performs predetermined signal processing on the captured image data. The content of this signal processing is optional. The signal processing section 213 can perform signal processing independently of the feature amount extraction signal processing section 123. For example, the signal processing section 213 performs signal processing for higher image quality than the signal processing of the feature amount extraction signal processing section 123, so that the image capturing device 101 can output a captured image (encoded data thereof) having a higher quality. The captured image data appropriately subjected to signal processing is supplied to the encoding section 214. The encoding section 214 encodes the captured image data supplied from the signal processing section 213 and appropriately subjected to signal processing, and generates the encoded data of the captured image. Incidentally, the encoding method is optional. The encoding section 214 supplies the generated encoded data to the selecting section 215.
Note that the encoding of the captured image may be omitted. In other words, the encoding section 214 may be omitted. That is, unencoded captured image data (captured image data appropriately subjected to predetermined signal processing by the signal processing section 213) may be supplied from the signal processing section 213 to the transmitting section 125 via the selecting section 215 to be transmitted.
The selecting section 215 controls the connection source of the transmitting section 125 according to the control of the control section 211. For example, when the selecting section 215 connects the encoding section 214 to the transmitting section 125, the encoding section 214 supplies the encoded data of the captured image to the transmitting section 125 via the selecting section 215. That is, the image capturing element 112 operates in a mode in which the captured image (encoded data thereof) is output. In addition, when the selecting section 215 connects the feature amount extraction encoding section 124 to the transmitting section 125, the feature amount extraction encoding section 124 transmits the encoded data of the feature amount to the transmitting section 125 via the selecting section 215. In other words, the image capturing element 112 operates in a mode in which the feature amount (encoded data thereof) is output.
In such a manner described above, the image capturing device 101 can output not only the feature amount but also the captured image. Accordingly, convenience can be improved.
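The mode switching can be pictured with the following sketch; the class, the Mode enum, and the two image-path helper functions are my assumptions (the helpers are hypothetical), while the feature path reuses the sketches given earlier.

```python
from enum import Enum, auto
import numpy as np

class Mode(Enum):
    FEATURE = auto()  # output the encoded feature amount
    IMAGE = auto()    # output the encoded captured image

class ImageCapturingElementModel:
    """Toy model of the control section 211 and the selecting sections 212/215."""

    def __init__(self) -> None:
        self.mode = Mode.FEATURE  # default: the captured image never leaves the chip

    def set_mode(self, requested: Mode) -> None:
        # The control section 211 switches both selecting sections together.
        self.mode = requested

    def process(self, frame: np.ndarray) -> bytes:
        if self.mode is Mode.FEATURE:
            # Feature path: selecting section 212 -> sections 123/124 -> section 215.
            return extract_and_encode(preprocess_for_features(frame))
        # Image path: selecting section 212 -> sections 213/214 -> section 215.
        return encode_image(signal_process_for_quality(frame))  # hypothetical helpers
```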
<Flow of Image Capturing Process>
An example of the flow of the image capturing process in this case will be described with reference to the flowchart. When the image capturing process is started, the control section 211 receives, in step S201, a request regarding the operation mode from the outside of the image capturing device 101.
In step S202, the control section 211 sets the operation mode of the image capturing element 112 to the mode corresponding to the request received in step S201 (namely, the requested mode).
In step S203, the selecting section 212 and the selecting section 215 determine whether or not to output the captured image according to the control of the control section 211 in step S202. In a case where it is determined that the mode of outputting the feature amount is chosen, the selecting section 212 connects the ADC 122 to the feature amount extraction signal processing section 123, and the selecting section 215 connects the feature amount extraction encoding section 124 to the transmitting section 125, and then the process proceeds to step S204.
In step S204, the light receiving section 121 captures an image of the subject and generates an electric signal of the captured image. In step S205, the ADC 122 performs A/D conversion of the electrical signal of the captured image obtained in step S204 to generate captured image data. In step S206, the feature amount extraction signal processing section 123 performs signal processing for feature amount extraction on the captured image data generated in step S205. In step S207, the feature amount extraction encoding section 124 extracts the feature amount of the captured image from the captured image data appropriately subjected to the signal processing in step S206 and encodes the feature amount to generate the encoded data of the feature amount. In step S208, the transmitting section 125 transmits the encoded data generated in step S207 (to the cloud 102). When the process of step S208 is completed, the image capturing process ends.
In addition, in a case where it is determined in step S203 that the mode of outputting the captured image is chosen, the selecting section 212 connects the ADC 122 to the signal processing section 213, and the selecting section 215 connects the encoding section 214 to the transmitting section 125, and then, the process proceeds to step S209.
In step S209, the light receiving section 121 captures an image of the subject and generates an electric signal of the captured image. In step S210, the ADC 122 performs A/D conversion of the electrical signal of the captured image obtained in step S209 to generate captured image data. In step S211, the signal processing section 213 performs predetermined signal processing on the captured image data generated in step S210. In step S212, the encoding section 214 encodes the captured image data appropriately subjected to signal processing in step S211 to generate the encoded data of the captured image. In step S213, the transmitting section 125 transmits the encoded data of the captured image generated in step S212 (to the cloud 102). When the process of step S213 is completed, the image capturing process ends.
By performing the image capturing process as described above, the image capturing device 101 can provide not only the feature amount but also the captured image to the cloud 102. Thus, convenience can be improved.
<Communication System>
In addition, the operation mode may be switched after the above-mentioned request (the request source) is authenticated and confirmed to be a legitimate request.
The authenticating section 231 authenticates the source of a request from the outside of the image capturing device 101. The authentication method is optional. When it is confirmed that the request source is legitimate (in other words, that the request is legitimate), the authenticating section 231 supplies the request to the control section 211. The control section 211 then performs processing in a similar manner to the case described above.
In such a manner described above, a request from a dishonest requester can be rejected. That is, the image capturing device 101 can more safely accept the request from the outside. Thus, the image capturing device 101 can output the captured image more safely.
Incidentally, the timing of performing the above authentication process is optional. For example, the process may be performed every time the operation mode is switched (that is, every time there is a request to change the operation mode), or only when the operation mode is switched for the first time (for example, only at login time). In addition, the authentication process for the request source may be performed only in the case of switching the operation mode to the mode of outputting the captured image (when there is such a request).
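One way to realize the authenticating section 231, sketched under assumptions (the disclosure explicitly leaves the authentication method open), is an HMAC token check that, per the preceding paragraph, is applied only when image output is requested. The secret, field names, and functions are all illustrative.

```python
import hashlib
import hmac

SECRET = b"device-shared-secret"  # placeholder credential, assumption only

def authenticate(request: dict) -> bool:
    """Verify an HMAC token for the request source."""
    expected = hmac.new(SECRET, request["requester"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, request.get("token", ""))

def handle_mode_request(element: "ImageCapturingElementModel", request: dict) -> bool:
    wants_image = request.get("mode") == "image"
    # Authenticate only requests that switch to the image output mode.
    if wants_image and not authenticate(request):
        return False  # reject an illegitimate request source
    element.set_mode(Mode.IMAGE if wants_image else Mode.FEATURE)
    return True
```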
<Flow of Image Capturing Process>
An example of the flow of the image capturing process in this case will be described with reference to the flowchart. When the image capturing process is started, the authenticating section 231 authenticates, in step S231, the source of a request from the outside of the image capturing device 101.
Each process of steps S232 to S244 is executed in a similar manner to each process of steps S201 to S213 described above.
By performing the image capturing process as described above, the image capturing device 101 can more safely accept the request from the outside and output the captured image more safely.
<Communication System>
In addition, in a case where the image capturing device 101 outputs the captured image, the image may be generated by using a part (partial region) of the pixel region (photoelectric conversion region). That is, the angle of view may be limited, and a captured image having a lower resolution than the captured image from which the feature amount is extracted may be output. In such a manner described above, the data amount of captured image data can be reduced, so that an increase in the processing load in the image capturing element 112 can be prevented. Further, an increase in the output data rate when the data is output from the image capturing element 112 can be prevented. Therefore, an increase in the cost of the image capturing element 112 can be prevented. Similarly, an increase in the load of the processing of the cloud 102 and the processing on the image display side can also be prevented.
It should be noted that such limitation of the pixel region (limitation of the angle of view) may be set on the basis of the feature amount extracted from the captured image. That is, the image capturing section may generate the captured image by using a part of the photoelectric conversion region within a range based on the feature amount extracted by the feature amount extracting section. For example, on the basis of the image analysis performed by analyzing the feature amount, the place where an abnormality occurs (a partial region) or the place where the observation target exists (a partial region) in the captured image can be specified as a region to be closely observed. By outputting the captured image of such a region to be closely observed (a captured image from which the other regions are deleted), the image capturing device 101 can output a captured image that is more worth viewing. In other words, by not including, in the captured image, the other partial regions that are less worth observing, an unnecessary increase in the amount of data in the captured image can be prevented (typically, the amount of data in the captured image can be reduced).
In addition, such a method of limiting the pixel region is optional. For example, a captured image may be generated once without limiting the angle of view, and a region to be closely observed may be cut out from the captured image by image processing. In addition, for example, the light receiving section 121 and the ADC 122 may be controlled to drive only the pixel region and the A/D converters corresponding to the region to be closely observed to generate a captured image of the region to be closely observed. By driving only the necessary parts as in the latter case, an increase in the image capturing load can be prevented.
In addition, the image capturing element 112 in this case has an image drive control section 261 in addition to the configuration described above. The image drive control section 261 controls the driving of the light receiving section 121 and the ADC 122 on the basis of the extracted feature amount.
The light receiving section 121 drives only a part of the pixel region designated by the image drive control section 261 to generate a captured image. In addition, the ADC 122 drives only the necessary A/D converters according to the control of the image drive control section 261 to generate the captured image data of the region to be closely observed. In such a manner described above, it is possible to prevent the driving of unnecessary parts and to prevent the increase in load.
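The region limiting could be sketched as follows, assuming the toy grid feature from earlier: feature cells above a threshold are mapped to a pixel bounding box, and the partial drive of the light receiving section 121 and the ADC 122 is emulated with a crop. The names and the threshold rule are assumptions.

```python
import numpy as np

def close_observation_region(feat: np.ndarray, frame_shape: tuple,
                             threshold: float = 0.5):
    """Map feature cells above a threshold to a pixel box (y0, y1, x0, x1)."""
    ys, xs = np.nonzero(feat > threshold)
    if ys.size == 0:
        return None  # nothing noteworthy; the full region could be kept instead
    ch = frame_shape[0] // feat.shape[0]  # pixel height of one feature cell
    cw = frame_shape[1] // feat.shape[1]  # pixel width of one feature cell
    return ys.min() * ch, (ys.max() + 1) * ch, xs.min() * cw, (xs.max() + 1) * cw

def read_close_observation_region(frame: np.ndarray, roi: tuple) -> np.ndarray:
    # In hardware, only the pixels and A/D converters inside this rectangle
    # would be driven; a crop emulates that behavior here.
    y0, y1, x0, x1 = roi
    return frame[y0:y1, x0:x1]
```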
<Flow of Image Capturing Process>
An example of the flow of the image capturing process in this case will be described with reference to the flowchart.
In addition, in a case where it is determined in step S264 that the captured image is to be output, the process proceeds to step S270.
In step S270, the image drive control section 261 sets a region to be closely observed (also referred to as a close observation region) on the basis of the feature amount extracted in the past.
In step S271, the light receiving section 121 captures an image of a subject according to the control of the image drive control section 261 in step S270, and generates a captured image of the close observation region. In step S272, the ADC 122 performs A/D conversion of the captured image in the close observation region according to the control of the image drive control section 261 in step S270.
Each process of steps S273 to S275 is executed in a similar manner to each process of steps S242 to S244 described above.
When the process of step S269 or step S275 is completed, the image capturing process ends.
By performing the image capturing process as described above, the image capturing device 101 can prevent an increase in the load.
<Computer>
The series of processes described above can be executed by hardware or by software. In a case where the series of processes is executed by software, the programs constituting the software are installed in a computer. Here, the computer includes, for example, a computer incorporated in dedicated hardware and a general-purpose personal computer capable of executing various functions when various programs are installed.
In a computer 900, a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, and a RAM (Random Access Memory) 903 are mutually connected via a bus 904.
An input/output interface 910 is also connected to the bus 904. An input unit 911, an output unit 912, a storage unit 913, a communication unit 914, and a drive 915 are connected to the input/output interface 910.
The input unit 911 includes a keyboard, a mouse, a microphone, a touch panel, an input terminal, and the like, for example. The output unit 912 includes a display, a speaker, an output terminal, and the like, for example. The storage unit 913 includes a hard disk, a RAM disk, a non-volatile memory, and the like, for example. The communication unit 914 includes a network interface, for example. The drive 915 drives a removable medium 921 such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory.
In the computer configured as described above, the CPU 901 loads the program stored in the storage unit 913, for example, into the RAM 903 via the input/output interface 910 and the bus 904 and executes the program, thereby performing the above-described series of processes. The RAM 903 also appropriately stores data and the like necessary for the CPU 901 to execute various kinds of processes.
The program executed by the computer (CPU 901) can be recorded in the removable medium 921 to be applied, which is a package medium or the like, for example. In that case, the program can be installed in the storage unit 913 via the input/output interface 910 by attaching the removable medium 921 to the drive 915.
In addition, this program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, and digital satellite broadcasting. In that case, the program can be received by the communication unit 914 and installed in the storage unit 913.
Moreover, this program can also be installed in advance in the ROM 902 or the storage unit 913.
<Applicable Target of the Present Technology>
The present technology can be applied to any image encoding/decoding method. That is, as long as it does not contradict the present technology described above, the specifications of various kinds of processes related to image encoding/decoding are optional and are not limited to the above-mentioned examples.
In addition, although the case where the present technology is applied to the image capturing device has been described above, the present technology can be applied not only to the image capturing device but also to any devices (electronic equipment). For example, the present technology can be applied also to an image processing device or the like that performs image processing on a captured image generated by another image capturing device.
In addition, the present technology can also be executed as any configuration to be mounted on an optional device or a device constituting a system, for example, as a processor as a system LSI (Large Scale Integration) or the like (for example, a video processor), a module using a plurality of processors (for example, a video module) or the like, a unit using a plurality of modules (for example, a video unit) or the like, a set in which other functions are further added to the unit (for example, a video set) or the like (namely, configuration of a part of a device).
In addition, the present technology can also be applied to a network system including a plurality of devices. For example, the present technology can also be applied to a cloud service that provides services related to images (moving images) to any terminals such as computers, AV (Audio Visual) equipment, portable information processing terminals, and IoT (Internet of Things) devices.
<Others>
In addition, various pieces of information (metadata etc.) regarding the encoded data (bit stream) may be transmitted or recorded in any form as long as the information is associated with the encoded data. Here, the term “associate” means to make the other data available (linkable) when processing one data, for example. That is, pieces of data associated with each other may be combined as one piece of data or may be individual pieces of data. For example, the information associated with the encoded data (image) may be transmitted through a transmission path different from that for the encoded data (image). In addition, for example, the information associated with the encoded data (image) may be recorded in a recording medium different from that for the encoded data (image) (or another recording area of the same recording medium). Note that this “association” may be applied to a part of the data instead of the entire data. For example, an image and information corresponding to the image may be associated with each other in any unit such as a plurality of frames, one frame, or a part within the frame.
In addition, in the present specification, terms such as “combine,” “multiplex,” “add,” “integrate,” “include,” “store,” “pack into,” “put in,” and “insert” mean bringing a plurality of objects together, for example, bringing encoded data and metadata into one piece of data, and each refers to one method of “associating” described above.
In addition, the embodiments of the present technology are not limited to the above-described embodiments, and various changes can be made without departing from the gist of the present technology.
In addition, for example, the present technology can also be executed as any configuration that constitutes a device or system, for example, as a processor as a system LSI (Large Scale Integration) or the like, a module that uses a plurality of processors or the like, a unit that uses a plurality of modules or the like, and a set in which other functions are added to the unit, or the like (namely, a part of the configuration of a device).
Incidentally, in the present specification, the system means an assembly of a plurality of constituent elements (devices, modules (components), etc.), and it does not matter whether or not all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected to one another via a network, and a device in which a plurality of modules is housed in one housing are both systems.
In addition, for example, the configuration described as one device (or processing section) may be divided to be configured as a plurality of devices (or processing sections). On the contrary, the configurations described above as a plurality of devices (or processing sections) may be collectively configured as one device (or processing section). Further, as a matter of course, a configuration other than the above may be added to the configuration of each device (or each processing section). Furthermore, as long as the configuration and operation of the entire system are substantially the same, a part of the configuration of one device (or processing section) may be included in the configuration of another device (or another processing section).
Moreover, for example, the present technology can have a cloud computing configuration in which one function is processed by a plurality of devices via a network in a shared and cooperative manner.
Further, for example, the above-mentioned program can be executed in any device. In that case, it is sufficient if the device has necessary functions (functional blocks etc.) and can obtain necessary information.
Furthermore, for example, each step described in the above-mentioned flowchart can be executed by one device and can also be executed by a plurality of devices in a shared manner. Moreover, in a case where one step includes a plurality of processes, the plurality of processes included in the one step can be executed by one device or can also be executed by a plurality of devices in a shared manner. In other words, a plurality of processes included in one step can be executed as processes of a plurality of steps. On the contrary, the processes described as a plurality of steps can be collectively executed as one step.
It should be noted that, in the program executed by the computer, the processing of the steps for describing the program may be executed in chronological order according to the order described in the present specification, or may be executed in parallel or individually at a necessary timing such as the time of being called. That is, as long as there is no contradiction, the processing of each step may be executed in an order different from the above-mentioned order. Further, the processing of the steps for describing this program may be executed in parallel with the processing of another program, or may be executed in combination with the processing of another program.
Incidentally, a plurality of the present techniques described in the present specification can be independently implemented alone as long as there is no contradiction. As a matter of course, any of a plurality of the present technologies can be used in combination. For example, some or all of the present techniques described in any of the embodiments may be combined with some or all of the present techniques described in other embodiments for execution. Further, some or all of any above-mentioned present techniques can also be carried out in combination with other techniques not described above.
It should be noted that the present technology can also have the following configurations.
(1) An image capturing element including: an image capturing section that captures an image of a subject to generate a captured image; and a feature amount extracting section that extracts, from the captured image generated by the image capturing section, a predetermined feature amount to be output to the outside.
(2) The image capturing element described in item (1), in which
(3) The image capturing element described in item (2), in which
(4) The image capturing element described in any one of items (1) to (3), further including:
(5) The image capturing element described in any one of items (1) to (4), in which
(6) The image capturing element described in any one of items (1) to (5), further including:
(7) The image capturing element described in any one of items (1) to (6), further including:
(8) The image capturing element described in item (7), in which
(9) The image capturing element described in item (8), further including:
(10) The image capturing element described in any one of items (7) to (9), further including:
(11) The image capturing element described in any one of items (7) to (10), in which
(12) The image capturing element described in item (11), in which
(13) An image capturing device including: an image capturing section that captures an image of a subject to generate a captured image; and a feature amount extracting section that extracts, from the captured image generated by the image capturing section, a predetermined feature amount to be output to the outside.
(14) The image capturing device described in item (13), further including:
(15) The image capturing device described in item (14), further including:
(16) A method for capturing an image, including: generating a captured image by capturing an image of a subject; and extracting, from the generated captured image, a predetermined feature amount to be output to the outside.
Number | Date | Country | Kind |
---|---|---|---|
2018-228311 | Dec 2018 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2019/045557 | 11/21/2019 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2020/116177 | 6/11/2020 | WO | A
Number | Name | Date | Kind |
---|---|---|---|
20050062853 | Yagi et al. | Mar 2005 | A1 |
20080180534 | Murayama | Jul 2008 | A1 |
20160210536 | Cho | Jul 2016 | A1 |
20180039882 | Ikeda et al. | Feb 2018 | A1 |
20190034748 | Matsumoto et al. | Jan 2019 | A1 |
20200226457 | Ikeda et al. | Jul 2020 | A1 |
20210105426 | Matsumoto et al. | Apr 2021 | A1 |
Number | Date | Country |
---|---|---|
2005276018 | Mar 2006 | AU |
PI0514555 | Jun 2008 | BR |
2459220 | Mar 2003 | CA |
2578005 | Mar 2006 | CA |
1552040 | Dec 2004 | CN |
101006717 | Jul 2007 | CN |
105227914 | Jan 2016 | CN |
107113406 | Aug 2017 | CN |
108256031 | Jul 2018 | CN |
108781265 | Nov 2018 | CN |
109478557 | Mar 2019 | CN |
111526267 | Aug 2020 | CN |
112017003898 | Apr 2019 | DE |
1435588 | Jul 2004 | EP |
1788802 | May 2007 | EP |
3439288 | Feb 2019 | EP |
2003-078829 | Mar 2003 | JP |
2003244347 | Aug 2003 | JP |
2005277989 | Oct 2005 | JP |
2018-026812 | Feb 2018 | JP |
2018191230 | Nov 2018 | JP |
6540886 | Jul 2019 | JP |
6788757 | Nov 2020 | JP |
2021-108489 | Jul 2021 | JP |
10-2005-0025115 | Mar 2005 | KR |
10-2007-0053230 | May 2007 | KR |
10-2019-0032387 | Mar 2019 | KR |
10-2021-0134066 | Nov 2021 | KR |
2007002073 | Apr 2007 | MX |
2007106899 | Aug 2008 | RU |
583879 | Apr 2004 | TW |
201810134 | Mar 2018 | TW |
2003023712 | Mar 2003 | WO |
2006022077 | Mar 2006 | WO |
2017168665 | Oct 2017 | WO |
2018025116 | Feb 2018 | WO |
Entry |
---|
Extended European Search Report of EP Application No. 19892376.5, dated Feb. 4, 2022, 09 pages. |
International Search Report and Written Opinion of PCT Application No. PCT/JP2019/045557, dated Feb. 4, 2020, 09 pages of ISRWO. |
“Information technology—Multimedia content description interface—Part 13: Compact descriptors for visual search”, ISO/IEC DIS 15938-13, ISO/IEC JTC1/SC29/WG11, Sep. 2015, 7 pages. |
Number | Date | Country | |
---|---|---|---|
20220030159 A1 | Jan 2022 | US |