This application claims priority to Japanese Application No. 2019-119644, filed in Japan on Jun. 27, 2019, the entirety of which is incorporated by reference.
An omnidirectional imaging system that uses a plurality of wide-angle lenses, such as fish-eye lenses and super-wide-angle lenses, to create an omnidirectional image from a plurality of images is known. Hereinafter, such an omnidirectional image is referred to as a spherical image. However, since the plurality of images are taken by a plurality of lenses, the omnidirectional image may look unnatural depending on the subject captured by each lens and the surrounding conditions.
Embodiments of the present application described herein provide an imaging device, an imaging system and an imaging method.
An imaging device in accordance with the present application includes a first imaging sensor, a second imaging sensor and processing circuitry. The processing circuitry is configured to obtain a first photometric value of the first imaging sensor and a second photometric value of the second imaging sensor; determine whether a difference between the first photometric value and the second photometric value exceeds a predetermined threshold; in a case that the difference exceeds the predetermined threshold, set a first imaging condition of the first imaging sensor independently from a second imaging condition of the second imaging sensor; in a case that the difference does not exceed the predetermined threshold, set the first imaging condition and the second imaging condition to be equal to one another; control the first imaging sensor, operating according to the first imaging condition, to capture a first image; control the second imaging sensor, operating according to the second imaging condition, to capture a second image; perform a first zenith correction, around a first optical axis of the first imaging sensor, for correcting the first image based on imaging conditions of the imaging device to generate a first corrected image, the imaging conditions including the difference between the first and second photometric values; perform a second zenith correction, around a second optical axis of the second imaging sensor, for correcting the second image based on the imaging conditions of the imaging device to generate a second corrected image; and generate a connected image by combining the first and second corrected images.
Another imaging device in accordance with the present application includes processing circuitry configured to obtain a first brightness value of a first image captured by a first imaging sensor; obtain a second brightness value of a second image captured by a second imaging sensor; determine whether a difference between the first brightness value and the second brightness value exceeds a predetermined threshold; in a case that the difference exceeds the predetermined threshold, set a first imaging condition of the first imaging sensor independently from a second imaging condition of the second imaging sensor; in a case that the difference does not exceed the predetermined threshold, set the first imaging condition and the second imaging condition to be equal to one another; control the first imaging sensor, operating according to the first imaging condition, to capture a third image; control the second imaging sensor, operating according to the second imaging condition, to capture a fourth image; perform a first zenith correction, around a first optical axis of the first imaging sensor, for correcting the third image based on imaging conditions of the imaging device to generate a first corrected image, the imaging conditions including the difference between the first and second brightness values; perform a second zenith correction, around a second optical axis of the second imaging sensor, for correcting the fourth image based on the imaging conditions of the imaging device to generate a second corrected image; and generate a connected image by combining the first and second corrected images.
An imaging method in accordance with the present application includes obtaining a first photometric value of a first imaging sensor; obtaining a second photometric value of a second imaging sensor; determining, by processing circuitry, whether a difference between the first photometric value and the second photometric value exceeds a predetermined threshold; in a case that the difference exceeds the predetermined threshold, setting a first imaging condition of the first imaging sensor independently from a second imaging condition of the second imaging sensor; in a case that the difference does not exceed the predetermined threshold, setting the first imaging condition and the second imaging condition to be equal to one another; controlling, by the processing circuitry, the first imaging sensor that operates according to the first imaging condition to capture a first image; controlling, by the processing circuitry, the second imaging sensor that operates according to the second imaging condition to capture a second image; performing a first zenith correction, around a first optical axis of the first imaging sensor, for correcting the first image based on imaging conditions of the imaging device to generate a first corrected image, the imaging conditions including the difference between the first and second photometric values; performing a second zenith correction, around a second optical axis of the second imaging sensor, for correcting the second image based on the imaging conditions of the imaging device to generate a second corrected image; and generating a connected image by combining the first and second corrected images.
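As a purely illustrative aid, the control flow of the imaging method described above can be sketched as follows in Python. All helper names (measure_photometric_value, compute_exposure, zenith_correct, stitch) and the threshold value are hypothetical placeholders and are not part of the disclosed device or claims.

```python
# Illustrative sketch only; helper names and the threshold are assumptions.

THRESHOLD = 1.0  # assumed photometric-difference threshold (e.g., in EV)

def capture_connected_image(sensor_a, sensor_b, motion_sensor):
    # Obtain a photometric value from each imaging sensor.
    pv_a = sensor_a.measure_photometric_value()
    pv_b = sensor_b.measure_photometric_value()
    diff = abs(pv_a - pv_b)

    if diff > THRESHOLD:
        # Large brightness difference: set each imaging condition independently.
        cond_a = compute_exposure(pv_a)
        cond_b = compute_exposure(pv_b)
    else:
        # Small difference: set the two imaging conditions equal to one another.
        cond_a = cond_b = compute_exposure((pv_a + pv_b) / 2)

    # Capture one image with each sensor under its imaging condition.
    image_a = sensor_a.capture(cond_a)
    image_b = sensor_b.capture(cond_b)

    # Zenith correction around each optical axis, taking the imaging
    # conditions (including the photometric difference) into account.
    corrected_a = zenith_correct(image_a, motion_sensor.orientation(), diff)
    corrected_b = zenith_correct(image_b, motion_sensor.orientation(), diff)

    # Generate the connected image by combining the two corrected images.
    return stitch(corrected_a, corrected_b)
```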
A more complete appreciation of the present application and the many attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings.
Exemplary implementations of the present application are described below, but no limitation is indicated therein, and various applications and modifications may be made without departing from the scope of the application. In the implementations described below, as illustrated in
Hereinafter, the schematic configuration of the omnidirectional imaging system 1 according to the present implementation is described with reference to
The information processing device 50 is a terminal that communicates with the omnidirectional imaging device 10 and performs operations on the omnidirectional imaging device 10, reception of captured images, and the like. In
The imaging body 12 illustrated in
The relative positions of the optical elements (lenses, prisms, filters, and aperture stops) of the lens systems 20A and 20B are determined with reference to the corresponding solid-state image sensors 22A and 22B. More specifically, the elements are positioned such that the optical axis of the optical elements of each of the lens systems 20A and 20B is orthogonal to the central part of the light receiving area of the corresponding one of the solid-state image sensors 22A and 22B, and such that the light receiving area serves as the imaging plane of the corresponding fish-eye lens. In order to reduce parallax, folded optics may be adopted. Folded optics is a configuration in which the light converged by the two lens systems 20A and 20B is redirected to the two image sensors by two rectangular prisms. However, the present application is not limited to this configuration, and a three-fold refraction structure may be used to further reduce parallax, or a straight optical system may be used to reduce costs.
In the implementation illustrated in
The circuitry or processing circuitry may include general purpose processors, special purpose processors, integrated circuits, ASICs ("Application Specific Integrated Circuits"), conventional circuitry, CPUs, controllers, and/or combinations thereof which are configured or programmed to perform the disclosed functionality. Processors and controllers are considered processing circuitry or circuitry as they include transistors and other circuitry therein. In this disclosure, any circuitry, units, controllers, or means are hardware that carries out or is programmed to perform the recited functionality. The hardware may be any hardware disclosed herein or otherwise known which is programmed or configured to carry out the recited functionality. When the hardware is a processor or controller, which may be considered a type of circuitry, the circuitry, means, or units are hardware, and/or the hardware and processor may be configured by executable instructions as described in this application.
The processor 100 includes Image Signal Processors (ISP) 108, Direct Memory Access Controllers (DMAC) 110, and an arbiter (ARBMEMC) 112 for arbitrating memory access. In addition, the processor 100 includes a Memory Controller (MEMC) 114 for controlling the memory access, a distortion correction and image synthesis block 118, and a face detection block 119. The ISPs 108A and 108B respectively perform Automatic Exposure (AE) control, Automatic White Balance (AWB) setting, and gamma setting on the images input through signal processing by the solid-state image sensors 22A and 22B. In
The MEMC 114 is connected to an SDRAM 116 which temporarily stores data used in the processing of the ISP 108A, 108B and distortion correction and image synthesis block 118. The distortion correction and image synthesis block 118 performs distortion correction and vertical correction on the two partial images from the two pairs of the lens systems 20 and the solid-state image sensor 22 on the basis of information from a motion sensor 120 and synthesizes them. The motion sensor 120 may include a triaxial acceleration sensor, a triaxial angular velocity sensor, a geomagnetic sensor, and the like. A face detection block 119 performs face detection from the image and specifies the position of the person's face. In addition to the face detection block 119 or instead of the face detection block 119, an object recognition block for recognizing other subjects such as a full body image of a person, a face of an animal such as a cat or dog, a car or a flower may be provided.
The processor 100 further comprises a DMAC 122, an image processing block 124, a CPU 130, an image data transferrer 126, an SDRAMC 128, a memory card control block 140, a USB block 146, a peripheral block 150, an audio unit 152, a serial block 158, an LCD (Liquid Crystal Display) driver 162, and a bridge 168.
The CPU 130 controls the operations of the elements of the omnidirectional imaging device 10. The image processing block 124 performs various kinds of image processing on image data. The processor 100 comprises a resize block 132. The resize block 132 enlarges or shrinks the size of image data by interpolation. The processor 100 comprises a still-image compression block 134. The still-image compression block 134 is a codec block for compressing and expanding still images such as those in JPEG/TIFF format. The still-image compression block 134 is used to store the image data of the generated spherical image, or to reproduce and output the stored image data. The processor 100 comprises a moving-image compression block 136. The moving-image compression block 136 is a codec block for compressing and expanding moving images such as those in MPEG-4 AVC/H.264 format. The moving-image compression block 136 is used to store the video data of the generated spherical image, or to reproduce and output the stored video data. The processor 100 also includes a power controller 137.
The image data transferrer 126 transfers the images processed by the image processing block 124. The SDRAMC 128 controls the SDRAM 138, which is connected to the processor 100 and temporarily stores image data during image processing by the processor 100. The memory card control block 140 controls reading and writing of data to a memory card detachably inserted into a memory card slot 142 and to the flash ROM 144. The USB block 146 controls USB communication with an external device such as a personal computer connected via a USB connector 148. The peripheral block 150 is connected to a power switch 166.
The audio unit 152 is connected to a microphone 156 for receiving an audio signal from a user and a speaker 154 for outputting the audio signal, to control audio input and output. The serial block 158 controls serial communication with the external device and is connected to a wireless NIC (network interface card) 160. The LCD driver 162 is a drive circuit for the LCD 164 and converts the image data to signals for displaying various kinds of information on the LCD 164. In addition to what is shown in
The flash ROM 144 stores a control program written in code that can be decoded by the CPU 130, as well as various parameters. When the power supply is turned on by operating the power switch 166, the control program is loaded into a main memory, and the CPU 130 controls the operations of the respective units of the device according to the program read into the main memory. Concurrently, the SDRAM 138 and a local SRAM (Static Random Access Memory) temporarily store data required for the control. By using the rewritable flash ROM 144, the control program and the control parameters can be changed, and the functions can be easily updated to a new version.
The information processing device 50 may include an input device 58, an external storage 60, a display 62, a wireless NIC 64, and a USB connector 66. The input device 58 includes devices such as a mouse, a keyboard, a touchpad, and a touchscreen, and provides a user interface. The external storage 60 is a removable recording medium mounted, for example, in a memory card slot, and records various types of data, such as image data in a video format and still image data.
The display 62 displays an operation screen, a monitor image of the image captured by the omnidirectional imaging device 10 that is ready to capture or is capturing an image, and the stored video or still image for reproduction or viewing. The display 62 and the input device 58 enable, through the operation screen, giving instructions for image capturing or changing various kinds of settings in the omnidirectional imaging device 10.
The wireless NIC 64 establishes a connection for wireless LAN communication with an external device such as the omnidirectional imaging device 10. The USB connector 66 provides a USB connection to an external device such as the omnidirectional imaging device 10. By way of example, the wireless NIC 64 and the USB connector 66 are described. However, limitation to any specific standard is not intended, and connection to an external device may be established through another wireless connection such as Bluetooth (registered trademark) and wireless USB or through a wired connection such as wired local area network (LAN).
When power is supplied to the information processing device 50 and the power thereof is turned on, the program is read from a ROM or the HDD 56 and loaded into the RAM 54. The CPU 52 follows the program read into the RAM 54 to control the operations of the parts of the device, and temporarily stores the data required for the control in the memory. This operation implements functional units and processes of the information processing device 50, as will be described later. Examples of the program include an application for giving various instructions to the connected omnidirectional imaging device 10 and requesting an image through a bus 68.
<Entire Image Processing>
The area division average processing is processing for dividing an image area included in the captured image into a plurality of areas and calculating an integration value (or integration average value) of luminance for each divided area. The results of this processing are used in the AE control processing.
After the first image signal processing (ISP1), the ISPs 108A and 108B further perform second image signal processing on the images, and the processed images are stored in the memory 300. As the second image signal processing, the ISPs 108A and 108B perform any of white balance 176, Bayer interpolation, color correction, gamma (γ) correction, YUV conversion, and edge enhancement (YCFLT).
A color filter of one of red (R), green (G), and blue (B) is attached to the photodiode on each of the solid-state image sensors 22A and 22B, and each photodiode accumulates charge according to the amount of light received from an object through its color filter. Since the amount of light transmitted varies according to the color of the filter, the amount of charge accumulated in the photodiode varies. The color having the highest sensitivity is G, and the sensitivity of R and B is lower, about half of the sensitivity of G. In the white balance (WB) processing 176, gains are applied to R and B to compensate for the differences in sensitivity and to enhance the whiteness of white in the captured image. Furthermore, since the color of an object changes according to the light source color (for example, sunlight or fluorescent light), a function is provided for changing and controlling the gains of R and B so as to enhance the whiteness of white even when the light source changes. The parameters of the white balance processing are calculated based on the integration value (or integration average value) data of RGB for each divided area calculated by the area division average processing.
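As a rough illustration of how such gains could be derived from the area division average data, the following sketch computes gains that bring the R and B averages up to the G average. The array layout and the simple averaging are assumptions made for illustration only and do not reproduce the actual white balance algorithm of the device.

```python
import numpy as np

def white_balance_gains(area_avg_rgb):
    """Illustrative white balance gain calculation.

    area_avg_rgb: array of shape (rows, cols, 3) holding the integration
    average value of R, G, and B for each divided area, as produced by the
    area division average processing (layout assumed for illustration).
    Returns (gain_r, gain_b) that scale R and B toward the G level.
    """
    r_mean = area_avg_rgb[..., 0].mean()
    g_mean = area_avg_rgb[..., 1].mean()
    b_mean = area_avg_rgb[..., 2].mean()
    # R and B are less sensitive than G, so they are boosted toward G.
    return g_mean / r_mean, g_mean / b_mean

def apply_white_balance(rgb_image, gain_r, gain_b):
    # Apply the gains to the R and B channels of an 8-bit RGB image.
    out = rgb_image.astype(np.float32)
    out[..., 0] *= gain_r
    out[..., 2] *= gain_b
    return np.clip(out, 0, 255).astype(np.uint8)
```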
In the ISP 108A, the first image signal processing is performed on the Bayer RAW image output from the solid-state image sensor 22A, and the resulting image is stored in the memory 300. Similarly, in the ISP 108B, the first image signal processing is performed on the Bayer RAW image output from the solid-state image sensor 22B, and the resulting image is stored in the memory 300.
The automatic exposure control unit 170 performs processing to set the exposure of each of the solid-state image sensors 22A and 22B to a proper exposure by using the area integration values obtained by the area division average processing, so that the brightness levels at the boundary portions of the two images are similar to each other (compound-eye AE). Each of the solid-state image sensors 22A and 22B may have an independent simple AE processing function, and each of the solid-state image sensors 22A and 22B can then independently set a proper exposure. When the change in the exposure condition of each of the solid-state image sensors 22A and 22B becomes small and the exposure conditions are stable, the process shifts to compound-eye AE control for the two images from both solid-state image sensors. The automatic exposure control unit 170 may be executed on one ISP 108, or may be distributed across both ISPs 108A and 108B, in which case each instance exchanges information with the other and determines the exposure condition parameters of its own solid-state image sensor 22 while taking the information from the other ISP into account.
As the exposure condition parameters, shutter speed, ISO sensitivity, aperture value, and the like can be used, although the aperture value may be a fixed value. In compound-eye AE, by setting the shutter speeds of the solid-state image sensors 22A and 22B to be the same, a moving object that spans the solid-state image sensors 22A and 22B can be connected satisfactorily. The exposure condition parameters for the solid-state image sensors 22A and 22B are set by the automatic exposure control unit 170 in AE registers 172A and 172B of the solid-state image sensors 22A and 22B.
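The sketch below illustrates one possible compound-eye AE step under the simplifying assumptions that the two sensors share a common exposure time (shutter speed) and that only the ISO sensitivities are trimmed so that the brightness measured near the image boundary becomes similar on both sides. The proportional adjustment and the target value are illustrative assumptions, not the disclosed control algorithm.

```python
def compound_eye_ae_step(boundary_luma_a, boundary_luma_b,
                         exposure_time, iso_a, iso_b, target_luma=118.0):
    """One illustrative compound-eye AE iteration.

    boundary_luma_a / boundary_luma_b: average luminance of the boundary
    areas of the two partial images, taken from the area division average
    data. Returns updated (exposure_time, iso_a, iso_b); the exposure time
    is kept common to both sensors so that a moving object crossing the
    boundary connects smoothly.
    """
    mean_luma = (boundary_luma_a + boundary_luma_b) / 2.0
    # Shorten the common exposure when the scene is brighter than the target.
    exposure_time *= target_luma / mean_luma
    # Trim each sensor's ISO so its boundary brightness approaches the mean.
    iso_a *= mean_luma / boundary_luma_a
    iso_b *= mean_luma / boundary_luma_b
    return exposure_time, iso_a, iso_b

# Example: sensor A sees a bright scene, sensor B a dark one.
print(compound_eye_ae_step(200.0, 60.0, exposure_time=1 / 250, iso_a=100, iso_b=400))
```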
With respect to the image from the solid-state image sensor 22A on which the first image signal processing has been performed, the second image signal processing, including white balance processing 176A, is performed, and the processed data is stored in the memory 300. Similarly, with respect to the image from the solid-state image sensor 22B on which the first image signal processing has been performed, the second image signal processing, including white balance processing 176B, is performed, and the processed data is stored in the memory 300. Based on the integration value data of RGB for each divided area calculated by the area division average processing, the white balance calculation unit 174 calculates the parameters of the white balance processing for each of the solid-state image sensors 22A and 22B.
The image data after the second image signal processing is sent to the distortion correction and image synthesis block 118, which performs the distortion correction and synthesis processing to generate an omnidirectional image. In the distortion correction and synthesis processing, vertical correction (inclination correction) is also performed based on the information received from the motion sensor 120. When the image is a still image, for example, the image is appropriately JPEG compressed in the still-image compression block 134 shown in
Hereinafter, a description relating to generation of an omnidirectional image and the generated omnidirectional image is provided with reference to
First, the images directly captured by the solid-state image sensors 22A and 22B each roughly cover a hemisphere of the whole sphere as a field of view. Light that passes through each of the lens systems 20A and 20B is focused on the light receiving area of the corresponding solid-state image sensor 22A or 22B to form an image according to a predetermined projection system. Each of the solid-state image sensors 22A and 22B is a two-dimensional image sensor whose light receiving area defines a planar area. Accordingly, the image formed by each of the solid-state image sensors 22A and 22B is image data represented by a plane coordinate system. Such a formed image is a typical fish-eye image that contains, as a whole, an image circle in which the captured area is projected, as illustrated in a partial image A and a partial image B in
The plurality of partial images captured by the plurality of solid-state image sensors 22A and 22B is then subjected to distortion correction and synthesis processing to form an omnidirectional image (spherical image). In the synthesis processing, an image constituting a complementary hemispherical portion is generated from each planar partial image. Then, the images including the respective hemispherical portions are joined together by stitching processing that matches the overlapping areas of the hemispherical portions, and a full omnidirectional image including the whole sphere is synthesized. The images of the respective hemispherical portions include overlapping areas, but in the synthesis processing the overlapping areas are blended so that the joint between the two images looks natural.
As illustrated
As illustrated in
In the description of
The following describes the zenith correction and the rotation correction using information from the motion sensor 120 with reference to
As shown in
In the implementation to be described, the front and rear of the omnidirectional imaging device 10 are defined for convenience as follows. That is, the lens system 20A on the side opposite to the shutter button 18 is a front lens, and the side photographed with the front lens is the front (F) side. In addition, the lens system 20B on the side where the shutter button 18 is provided is a rear lens, and the side photographed with the rear lens is a rear (R) side.
As described above, the image data of an omnidirectional image format is expressed as an array of pixel values where the vertical angle φ corresponding to the angle with reference to a certain axis z0 and the horizontal angle θ corresponding to the angle of rotation around the axis z0 are the coordinates. If no correction is made, the certain axis z0 is defined with reference to the omnidirectional imaging device 10. For example, the axis z0 is defined as the central axis z0, which defines the horizontal angle θ and the vertical angle φ, passing through the center of the housing 14 from the bottom to the top where the top is the imaging body 12 side and the bottom is the opposite side of the omnidirectional imaging device 10 in
The zenith correction (correction in the roll direction and the pitch direction) is correction processing that corrects an omnidirectional image captured with the central axis z0 tilted with respect to the vertical direction (the direction of gravity) into an image whose coordinates are defined as if the central axis z0 were aligned with the vertical direction.
In an exemplary flow of processing, each partial image is converted into an image including the corresponding hemispherical portion, the obtained images are combined to generate an omnidirectional image, and the zenith and rotation correction is then performed on the generated omnidirectional image. However, the order of the conversion process, the synthesis process, and the zenith and rotation process is not particularly limited.
Alternatively, the synthesis process may be performed after the zenith and rotation correction is applied to each of the partial image A and the partial image B (that is, to the two omnidirectional images including complementary hemispherical portions converted from the respective partial images). As a further example, instead of applying a rotational coordinate transformation to images in the omnidirectional image format, the conversion table for converting the partial images into the omnidirectional image may be corrected in advance to reflect the zenith and rotation correction, and the corrected omnidirectional image can then be obtained directly from the partial image A and the partial image B on the basis of the corrected conversion table.
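To make the coordinate handling concrete, the sketch below applies a zenith (roll/pitch) and rotation (yaw) correction directly to an equirectangular omnidirectional image by rotating the spherical coordinates of each output pixel and resampling the source image. The use of a SciPy rotation, nearest-neighbor sampling, and the axis convention are illustrative assumptions; as noted above, the correction may instead be folded into the partial-image conversion table.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def correct_equirect(image, roll_deg, pitch_deg, yaw_deg=0.0):
    """Rotate an equirectangular image (H x W x C) so that its central axis
    is aligned with the vertical direction (zenith correction) and,
    optionally, around that axis (rotation correction)."""
    h, w = image.shape[:2]
    # Spherical coordinates of every output pixel: vertical angle phi and
    # horizontal angle theta, matching the omnidirectional image format.
    phi = (np.arange(h) + 0.5) / h * np.pi            # 0..pi from the top
    theta = (np.arange(w) + 0.5) / w * 2 * np.pi      # 0..2*pi around the axis
    theta, phi = np.meshgrid(theta, phi)
    # Unit direction vectors on the sphere.
    xyz = np.stack([np.sin(phi) * np.cos(theta),
                    np.sin(phi) * np.sin(theta),
                    np.cos(phi)], axis=-1)
    # Inverse rotation gives, for each corrected pixel, its source direction.
    rot = Rotation.from_euler("xyz", [roll_deg, pitch_deg, yaw_deg], degrees=True)
    src = rot.inv().apply(xyz.reshape(-1, 3)).reshape(h, w, 3)
    src_phi = np.arccos(np.clip(src[..., 2], -1.0, 1.0))
    src_theta = np.mod(np.arctan2(src[..., 1], src[..., 0]), 2 * np.pi)
    # Nearest-neighbor sampling of the source image.
    rows = np.clip((src_phi / np.pi * h).astype(int), 0, h - 1)
    cols = np.clip((src_theta / (2 * np.pi) * w).astype(int), 0, w - 1)
    return image[rows, cols]
```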
An example of an omnidirectional image displayed on a plane will be described.
Here, as a method for improving the image quality when there is a large contrast in brightness within one image, there is a method of image capture ("shooting") with HDR (high dynamic range) processing. HDR processing can improve the image quality by combining a plurality of images shot with different exposures when there is a large difference in brightness within one image.
On the other hand, when shooting is performed by combining the compound-eye AE control and the HDR processing as shown in
The shooting mode switching unit 201 switches between the normal shooting mode and the brightness difference scene mode. The shooting mode switching unit 201 switches the shooting mode based on the brightness difference between the partial images obtained by the solid-state image sensors 22A and 22B.
When the brightness difference scene mode is selected, the automatic exposure control unit 170 sets imaging conditions for an appropriate exposure for each of the solid-state image sensors 22A and 22B based on the photometric values obtained by the image sensors 22A and 22B. Note that the acquisition of the photometric value is not necessarily performed by the solid-state image sensor 22, but may be performed by a photometric sensor or the like. In this case, the omnidirectional imaging device 10 comprises a photometric sensor corresponding to each of the solid-state image sensors 22A and 22B. That is, the omnidirectional imaging device 10 comprises a photometric sensor that measures a photometric value for setting the imaging condition of the solid-state image sensor 22A and a photometric sensor that measures a photometric value for setting the imaging condition of the solid-state image sensor 22B. The imaging conditions to be set can include various parameters such as shutter speed, ISO sensitivity, and aperture value.
Each shooting mode can be selected manually by a user through an application of the omnidirectional imaging device 10 or the information processing device 50, but the implementation is not particularly limited to this. For example, the shooting mode switching unit 201 may automatically switch to the brightness difference scene mode when the omnidirectional imaging device 10 or the information processing device 50 compares the photometric values acquired by the solid-state image sensors 22 or the photometric sensors and the difference is larger than a predetermined threshold value.
Similarly, in the brightness difference scene mode, the white balance calculation unit 174 calculates the white balance value of each of the solid-state image sensors 22A and 22B based on the information acquired by each of the solid-state image sensors 22A and 22B at the time of shooting, and white balance processing is performed with the calculated values.
In addition, when switching from the normal shooting mode to the brightness difference scene mode, the initial values of the imaging conditions of each solid-state image sensor in the brightness difference scene mode may be set to the imaging conditions that were set in the normal shooting mode at the time of switching. This accelerates the convergence of the imaging conditions under feedback control.
Next, processing for selecting a shooting mode will be described with respect to the process performed by processing circuitry of the omnidirectional imaging device 10 and illustrated in
In step S103, the shooting mode switching unit 201 calculates a brightness difference between the two images based on the information acquired in step S102. When the brightness difference is larger than the threshold (YES in S103), the shooting mode switching unit 201 determines to proceed to step S104, and the shooting mode switching unit 201 then selects the brightness difference scene mode for performing image capture by the omnidirectional imaging device 10. When the brightness difference is smaller than the threshold (NO in S103), the shooting mode switching unit 201 determines to proceed to step S105, and the shooting mode switching unit 201 selects the normal shooting mode for performing image capture by the omnidirectional imaging device 10. Thereafter, the process ends in step S106.
Next, an exemplary omnidirectional image captured in the brightness difference scene mode will be described.
With such an arrangement, the entire image can be fit together well. Since the number of boundary portions with a large difference in brightness can be reduced, the sense of discomfort felt by a user viewing the image is reduced. Further, for a horizontally long omnidirectional image, by arranging the partial images side by side, the length of the boundary between the partial images can be limited to the length of the short side of the omnidirectional image. That is, since the length of the boundary can be shortened, the sense of discomfort felt by the user viewing the image is reduced.
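A minimal sketch of this side-by-side arrangement, assuming the two corrected partial images have already been converted into equirectangular-format halves of equal height: they are simply placed next to each other so that the single remaining boundary runs along the short side of the resulting image. The function name, the array shapes, and the optional boundary line (corresponding to the boundary-line insertion described further below) are illustrative assumptions.

```python
import numpy as np

def horizontal_parallel_arrangement(half_a, half_b, draw_boundary=False):
    """Place two partial images of equal height side by side so that only
    one boundary, running along the short side, remains."""
    assert half_a.shape[0] == half_b.shape[0], "heights must match"
    out = np.concatenate([half_a, half_b], axis=1)
    if draw_boundary:
        # Optionally draw a visible boundary line at the joint, which can be
        # useful when the contrast difference is particularly large.
        out[:, half_a.shape[1] - 1:half_a.shape[1] + 1] = 0
    return out
```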
In addition, in the case of an omnidirectional image captured in the brightness difference scene mode, the processor 100 may perform control to disable the connection position detection processing function of the distortion correction and image synthesis block 118, so that the connection position detection processing is not performed for the overlapping areas of the partial images. Since the two partial images shot in the brightness difference scene mode have a large difference in brightness, joining them after performing the connection position detection processing makes the connected portions look unnatural and causes a deterioration in image quality. Therefore, an omnidirectional image that looks more natural is generated by not performing the connection position detection processing on the partial images.
Further, when the contrast difference is particularly remarkable in the brightness difference scene mode, the processing circuitry of the omnidirectional imaging device 10 may perform a process to insert a boundary line at the boundary portion between the partial images at the time of outputting the horizontal parallel arrangement image as shown in
Further, when an omnidirectional image is output in the brightness difference scene mode, performing the zenith correction may impair visibility depending on the rotation direction of the device. In particular, the orientation of the omnidirectional imaging device 10 may be taken into account when performing the zenith correction so that the output images have an orientation that provides good visibility for the user. For example, there may be a situation in which it is preferable not to perform the zenith correction for rotation in the pitch direction, but to perform the zenith correction for rotation in the roll direction.
The left figure of
In the left diagram of
When imaged in the upright posture, the generated horizontal parallel arrangement image is an omnidirectional image as shown in the right figure of
Next, a case where the omnidirectional imaging device 10 captures an image while rotated in the roll direction is described. As shown in the left figure of
Next, a case where the omnidirectional imaging device 10 captures an image while rotated in the pitch direction is described. As shown in the left figure of
When the zenith correction is performed in this case, the omnidirectional image includes a car on the left side and a person on the right side as shown in the right figure of
Therefore, it is preferable that the omnidirectional imaging system 1 of the present implementation performs a zenith correction for rotation in the roll direction and does not perform a zenith correction for rotation in the pitch direction. Note that rotation correction may or may not be performed for rotation in the yaw direction.
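Building on the earlier equirectangular sketch, the following hedged illustration shows this selective correction: the measured inclination is decomposed into roll, pitch, and yaw angles (an assumption made for illustration), the pitch component is discarded, and only the roll component, plus optionally the yaw component, is compensated.

```python
def selective_zenith_correction(image, roll_deg, pitch_deg, yaw_deg,
                                correct_yaw=False):
    """Apply the zenith correction only for rotation in the roll direction,
    leaving rotation in the pitch direction uncorrected and making the yaw
    (rotation) correction optional; reuses correct_equirect() from the
    earlier sketch."""
    return correct_equirect(
        image,
        roll_deg=roll_deg,
        pitch_deg=0.0,  # pitch is intentionally left uncorrected
        yaw_deg=yaw_deg if correct_yaw else 0.0,
    )
```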
In the implementations described above, the description has mainly used still images as examples, but the implementations are not particularly limited to still images. Therefore, the described implementations may be applied not only to still images but also to moving images.
As described above, according to the implementations, it is possible to provide an imaging device, an imaging system, a method, and a non-transitory computer readable medium storing executable instructions or a program for reducing the sense of discomfort caused by the omnidirectional image and improving the image quality regardless of the subject or the scene.
In the implementations described above, the omnidirectional imaging system 1 is described using, as examples, the omnidirectional imaging device 10 alone and the omnidirectional imaging device 10 together with the information processing device 50 communicating with the omnidirectional imaging device 10. However, the configuration of the omnidirectional imaging system 1 is not limited to the configuration described above. Therefore, not all the functional means described in the implementations are necessarily included in the omnidirectional imaging device 10. For example, in other implementations, the system of the above implementation may be realized by cooperation between the omnidirectional imaging device 10 and the information processing device 50. In addition, the system of the above implementation may be an image processing system that processes an image captured by an external imaging unit.
The functions of the omnidirectional imaging system can be realized by a computer-executable program written in a legacy programming language such as assembler, C, C++, C#, or JAVA (registered trademark), or in an object-oriented programming language. Such a program can be stored in a storage medium such as a ROM, EEPROM, EPROM, flash memory, flexible disc, CD-ROM, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, Blu-ray disc, SD card, or MO, and distributed through an electric communication line. Further, a part or all of the above functions can be implemented on, for example, a programmable device (PD) such as a field-programmable gate array (FPGA), or implemented as an application specific integrated circuit (ASIC). To realize the functions on the PD, circuit configuration data (bit stream data), or data written in HDL (hardware description language), VHDL (very high speed integrated circuits hardware description language), or Verilog-HDL, can be stored in a storage medium and distributed.
Although the present application has been described in terms of exemplary implementations, it is not limited thereto. It should be appreciated that variations or modifications may be made in the implementations described by persons skilled in the art without departing from the scope of the present application as defined by the following claims.