Embodiments of the present disclosure relate to an image processing apparatus, a reading device, an image forming apparatus, an image processing method, and a recording medium.
As known in the art, a latent image that cannot be recognized under visible light is formed on some public certificates, for example. A reading device that emits invisible light such as infrared light reads a document on which a latent image is formed, whereby the latent image is perceptible to the naked human eye in an image that is output from the reading device. Thus, authenticity is checked.
PTL 1 discloses a technology for solving difficulty in viewing an infrared image in a night imaging mode of a monitoring camera. The disclosed technology enhances visual recognizability of an image by combining an infrared light image and a visible color image.
Japanese Patent No. 6243087
However, since an image generation mode using visible light is typically selected, a setting for generating an image has to be changed each time an image is output using invisible light. When an image is output using invisible light without changing the setting, image processing is performed with the color setting intended for visible light. In this case, there is a drawback that the output image is unnaturally colored differently from the original appearance of the object, and a person checking the output image finds it strange.
In light of the above, an object of the present disclosure is to provide an image processing apparatus, a reading device, an image forming apparatus, and an image processing method that enable selection of an image generation mode suited to the coloring of the original object when a setting to use invisible light is configured.
An embodiment of the present disclosure includes an image processing apparatus. The image processing apparatus includes a light source to irradiate an object with at least invisible light; an image sensor having sensitivity to a visible light wavelength range and an invisible light wavelength range; an image processor to generate an image according to image information that is output by the image sensor; a switching unit to switch the image processor to a single-color image generation mode in which a single-color image is generated, the single-color image being an image of a single color; and a controller to control the switching unit to switch the image processor to the single-color image generation mode in response to selection of an operating mode in which the invisible light is emitted.
An embodiment of the present disclosure includes a reading device. The reading device includes a scanner including the light source and the image sensor of the above-described image processing apparatus.
An embodiment of the present disclosure includes an image forming apparatus. The image forming apparatus includes a scanner including the light source and the image sensor of the above-described image processing apparatus; and an image forming section to form an image according to an output image output from the image processor.
An embodiment of the present disclosure includes an image processing method. The image processing method includes irradiating an object with at least invisible light; outputting image information of the object from an image sensor having sensitivity to a visible light wavelength range and an invisible light wavelength range; generating an image according to the image information that is output from the image sensor; and switching to a single-color image generation mode in which a single-color image is generated, in response to selection of an operating mode in which the invisible light is emitted, the single-color image being an image of a single color.
An embodiment of the present disclosure includes a recording medium storing instructions which, when executed by one or more processors of a computer, cause the one or more processors to perform an image processing method. The method includes irradiating an object with at least invisible light; outputting image information of the object from an image sensor having sensitivity to a visible light wavelength range and an invisible light wavelength range; generating an image according to the image information that is output from the image sensor; and switching to a single-color image generation mode in which a single-color image is generated, in response to selection of an operating mode in which the invisible light is emitted, the single-color image being an image of a single color.
According to one or more embodiments of the present disclosure, when a setting to use invisible light is configured, an image generation mode suitable for the coloring of the original object is selected.
A more complete appreciation of embodiments of the present disclosure and many of the attendant advantages and features thereof can be readily obtained and understood from the following detailed description with reference to the accompanying drawings.
The accompanying drawings are intended to depict embodiments of the present disclosure and should not be interpreted to limit the scope thereof. The accompanying drawings are not to be considered as drawn to scale unless explicitly noted. Also, identical or similar reference numerals designate identical or similar components throughout the several views.
In describing embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this specification is not intended to be limited to the specific terminology so selected and it is to be understood that each specific element includes all technical equivalents that have a similar function, operate in a similar manner, and achieve a similar result.
Referring now to the drawings, embodiments of the present disclosure are described below. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Hereinafter, embodiments of an image processing apparatus, a reading device, an image forming apparatus, and an image processing method are described in detail with reference to the accompanying drawings.
The imaging device 2 includes a light source 21 and an image sensor 22. The light source 21 includes a light source that can irradiate an object P with at least invisible light. The light source 21 may include a light source that irradiates the object P with visible light. The image sensor 22 is an image sensor such as a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) that is sensitive to a visible light wavelength range and an invisible light wavelength range. Although the description is given of an example in which the image sensor includes pixels of three colors, namely a red (R) pixel, a green (G) pixel, and a blue (B) pixel, for explanatory convenience, this is just an example. Any suitable image sensor is applicable provided that it includes pixels of at least two colors.
In response to receiving a request signal to start imaging from the controller 11, the imaging device 2 starts imaging in a designated operating mode and transmits image information to the image processor 13. The image processor 13 performs image processing in an image generation mode corresponding to the operating mode of the imaging device 2 on the basis of the image information that is output from the imaging device 2.
The operating mode of the imaging device 2 includes a first operating mode for outputting visible image information and a second operating mode for outputting invisible image information. The visible image information is image information that is output in response to receiving, by the image sensor 22, reflected light from the object P under visible light. The invisible image information is image information that is output in response to receiving, by the image sensor 22, reflected light from the object P under invisible light.
As an example, in the first operating mode, the imaging device 2 images the object P with the invisible light of the light source 21 off, and outputs the visible image information of the object P from the image sensor 22. When the light source 21 includes a visible light source, the visible light is turned on. Further, in the second operating mode, the imaging device 2 images the object P with the invisible light of the light source 21 on, and outputs the invisible image information of the object P from the image sensor 22. When the light source 21 includes a visible light source, the visible light is turned off.
The image generation mode of the image processor 13 includes multiple image generation modes. As an example, the image processor 13 includes a multicolor image generation mode and a single-color image generation mode. The multicolor image generation mode is an image generation mode in which visible image information of multiple colors from the imaging device 2 is multiplied by a coefficient for correcting a variation in sensitivity between the colors, so that an output image of the multiple colors is generated. Examples of the multicolor image generation mode include a color image generation mode in which an RGB color image is generated. The single-color image generation mode is an image generation mode in which an output image of a single color of the object P is generated.
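By way of illustration and not limitation, the following sketch shows one way the two image generation modes could be realized in software: the multicolor image generation mode multiplies each color plane by a correction coefficient for the sensitivity variation between colors, and the single-color image generation mode uses a single plane. The function names and coefficient values are assumptions introduced only for this sketch.

```python
import numpy as np

# Illustrative coefficients compensating for the sensitivity variation between
# the R, G, and B pixels of the image sensor. The values are placeholders; in
# practice they are determined for the specific sensor.
SENSITIVITY_COEFFICIENTS = {"r": 1.08, "g": 1.00, "b": 1.15}

def generate_multicolor_image(r, g, b, coeffs=SENSITIVITY_COEFFICIENTS):
    """Multiply each color plane by its correction coefficient and combine the
    planes into an RGB output image (multicolor image generation mode)."""
    planes = []
    for plane, key in ((r, "r"), (g, "g"), (b, "b")):
        corrected = np.clip(plane.astype(np.float32) * coeffs[key], 0, 255)
        planes.append(corrected.astype(np.uint8))
    return np.stack(planes, axis=-1)  # H x W x 3 color image

def generate_single_color_image(r, g, b):
    """Single-color image generation mode: output one plane (here G) as a
    grayscale image instead of applying the visible-light coefficients."""
    return g.copy()
```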
The controller 11 receives an operation instruction from an operation unit used by, for example, a user to configure settings. The controller 11 receives an imaging start instruction and an operating mode from the operation unit, and transmits signals respectively corresponding to the imaging start instruction and the operating mode to the imaging device 2. The switching unit 12 detects whether the signal transmitted from the controller 11 to the imaging device 2 is an instruction for the second operating mode. In response to detecting the instruction of the second operating mode, the switching unit 12 transmits a signal for switching the image generation mode to an image generation mode corresponding to the second operating mode to the image processor 13. For example, when a default setting of the image generation mode of the image processor 13 is the multicolor image generation mode, the multicolor image generation mode is switched to the single-color image generation mode.
The image processor 13 applies the setting of the multicolor image generation mode or the single-color image generation mode to the image information output from the image sensor 22, and outputs a generated image to which the setting is applied as an output image.
In the first operating mode, a setting of the multicolor image generation mode is applied to the image processor 13. Accordingly, visible image information of each of the colors is multiplied by a coefficient for correcting variation in sensitivity between the colors to generate output images of the colors.
As a result, the variation in sensitivity between the colors is corrected, and an image obtained by combining the output images of the colors is reproduced with colors close to those of the original.
When an image captured under visible light is output, a coefficient for correcting the variation in sensitivity between the colors of the image sensor 22 is determined for each of the R image information, the G image information, and the B image information output from the image sensor 22, and the determined coefficient is applied. On the other hand, when an image captured under invisible light is output and the output image is generated in the multicolor image generation mode, the coefficient that is effective in the visible light wavelength range is applied. Since the coefficient is not effective for an invisible image, G is emphasized compared with R and B, for example. In such a case, the output image is unnaturally colored.
By contrast, when an image captured under invisible light is output in the single-color image generation mode, the output image is not unnaturally colored.
Further, in a case that the output image is stored in a storage device, the stored image is a single-color image that does not look unnaturally colored.
When no request to start imaging is received (step S2: No), the controller 11 keeps the standby state of step S1. In response to receiving the request to start imaging (step S2: Yes), the controller 11 transmits an imaging start signal to the imaging device 2 via the switching unit 12. The switching unit 12 detects whether the signal transmitted from the controller 11 is a second imaging start signal indicating that imaging is to be started in the second operating mode, in other words, a mode for outputting invisible image information (step S3).
In response to detecting that the imaging start signal transmitted by the controller 11 is the second imaging start signal (step S3: Yes), the switching unit 12 transmits a signal for switching to the single-color image generation mode to the image processor 13, to switch a mode of the image processor 13 from the multicolor image generation mode, which is set by default, to the single-color image generation mode (step S4).
When the imaging start signal transmitted by the controller 11 is not the second imaging start signal (step S3: No), the switching unit 12 keeps the default setting without switching the setting of the image processor 13 (step S5).
The imaging device 2 starts imaging in the operating mode corresponding to the request signal transmitted from the switching unit 12 after the setting of the image processor 13 (step S6). The image processor 13 applies the default image generation mode or the switched image generation mode to image information that is output from the imaging device 2 to generate an image (step S7), and outputs the generated image as an output image (step S8).
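By way of illustration and not limitation, the flow of steps S1 to S8 may be pictured with the following sketch; the class names and the callable used for capturing are assumptions introduced only for this sketch.

```python
from enum import Enum
import numpy as np

class OperatingMode(Enum):
    FIRST = 1   # visible image information is output
    SECOND = 2  # invisible image information is output

class ImageProcessor:
    """Minimal stand-in for the image processor 13; the default image
    generation mode is the multicolor image generation mode."""
    def __init__(self):
        self.single_color_mode = False
    def generate(self, rgb_planes):
        r, g, b = rgb_planes
        if self.single_color_mode:
            return g                       # single-color (grayscale) output
        return np.stack((r, g, b), axis=-1)  # multicolor (RGB) output

def handle_imaging_request(operating_mode, capture, processor):
    """Sketch of steps S3 to S8: switch the processor to the single-color
    image generation mode only for the second operating mode, then image,
    generate, and output."""
    processor.single_color_mode = operating_mode is OperatingMode.SECOND  # S3-S5
    rgb_planes = capture(operating_mode)    # step S6: imaging starts
    return processor.generate(rgb_planes)   # steps S7 and S8
```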
An output destination to which the output image is to be output may be any desired location determined according to a configuration. Examples of the output destination include a display or a storage device. In another example, in a case that a printing mechanism for printing on a paper medium is provided, the output destination may be a printing device.
As described above, when the operating mode of the imaging device 2 is the second operating mode for an invisible image, the image processing apparatus 1 automatically switches the setting of the image processor 13 to a setting for obtaining an output image corresponding to the second operating mode. Thus, even when the imaging device 2 operates in the second operating mode, an image that does not look unnatural compared with the original is obtained.
The image processing apparatus described in the present embodiment is applicable to a reading device. Further, the image processing apparatus can be applied not only to a reading device but also to an in-vehicle camera, for example. An example in which the image processing apparatus is applied to a reading device is described below. In the following description, differences from the embodiment are described, and redundant descriptions that are described in the embodiment are omitted as appropriate.
An object to be read by the reading unit 31 is a document P1. The document P1 is, for example, a public certificate such as a certificate of residence. Some public certificates include latent image information for determining authenticity. A description is given of an example in which the document P1 is such a document. A document such as a public certificate is just one example. The document P1 can be any other suitable document, provided that the document includes visible information that is perceptible to the naked human eye under visible light and latent image information that can be checked on an image obtained by reading the document while the document is irradiated with invisible light.
Most latent image information, such as that in public certificates, can be read with infrared light. Infrared light is just one example of invisible light. As the invisible light, light in a shorter wavelength range, such as ultraviolet light or X-rays, may be used.
A typical image sensor also has sensitivity in a near-infrared region (approximately from 750 nm to 1100 nm).
The R pixel m1 of the image sensor 312 reads light in an R wavelength range, under visible light, to output visible image information as R image information 51. The G pixel m2 reads light in a G wavelength range under visible light, to output visible image information as G image information 52. The B pixel m3 reads light in a B wavelength range under visible light, to output visible image information as B image information 53. Under invisible light, the R pixel m1, the G pixel m2, and the B pixel m3 read light in an invisible light wavelength range, to output invisible image information as the R image information 51, the G image information 52, and the B image information 53, respectively. The image information items are transmitted to the image processor 34. The image processor 34 generates an output image with the setting of the multicolor image generation mode or the single-color image generation mode. The output image that is output by the image processor 34 may be output to, for example, a display or a storage device of the reading device 3. Alternatively, the output image may be output from an external output terminal to an external device.
In the reading device 3 of the example, the image sensor 312 having the RGB pixels acquires image information of three colors under visible light and combines the acquired image information to obtain a color output image. In other words, the reading device 3 of the example can be used as a color scanner and as a monochrome scanner. Further, since the reading device 3 has a light source of invisible light and can output invisible image information, the reading device 3 can also be used as an invisible light scanner for special purposes. The single reading device 3 can be switched among a color scanner, a monochrome scanner, and an invisible light scanner, and thus a dramatic enhancement in convenience is expected.
As Modification 1 of the example, a configuration of a reading device having a background correction unit that corrects a background level of a document is described. Currently, a terminal apparatus provided in a public space such as a convenience store can output a public certificate by use of the Individual Number Card, called "My Number Card," under Japan's Social Security and Tax Number System. A certificate output by a terminal apparatus provided in such a public space is printed on general paper, whereas a certificate issued by a government office is printed on thick paper such as cardboard or on paper with a colored background pattern. For this reason, some terminal apparatuses provided in such public spaces have a unique fraud prevention mechanism that embeds information for authenticity determination into a certificate printed on general paper. A latent image is one example of the information for authenticity determination embedded in a certificate. The latent image can be read by a reading device using an infrared light source. However, unlike a certificate issued by a government office, there is no strict regulation on the paper on which the terminal apparatus provided in the public space prints. For this reason, even when the paper used by the terminal apparatus is white to the naked human eye, when certificates printed by different terminal apparatuses are read by the reading device using infrared light, the backgrounds of the output images vary in density, such as white or gray, depending on the paper used in the terminal apparatuses.
For this reason, in an image of a certificate read by the reading device using infrared light, the background may have a density different from that of the appearance of the original, such as gray. An output image whose background density differs from that of the original certificate gives a strange impression when stored as evidence.
Since the image generation mode of the image processor 34 switches from the multicolor image generation mode to the single-color image generation mode in the second operating mode, the background correction processing unit 341 operates in the single-color image generation mode.
The background correction processing unit 341 performs background correction processing for correcting the background level of a document on image information output in the second operating mode. For example, the background correction processing unit 341 uniformly corrects areas of the invisible image that correspond to "white" areas in the visible image to an image level of "white." Thus, a gray background area in the invisible image is corrected to the background level of the document.
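By way of illustration and not limitation, the following sketch shows one possible form of such background correction, assuming that the background level of the single-color invisible image is estimated from a bright percentile of its pixel values; the estimation method and parameter values are assumptions.

```python
import numpy as np

def correct_background(invisible_image, target_level=255, percentile=90):
    """Estimate the background level of a single-color invisible image as a
    bright percentile of its pixel values, then scale the image so that the
    background is output at target_level (for example, white)."""
    img = invisible_image.astype(np.float32)
    background = np.percentile(img, percentile)  # gray level of the paper
    if background <= 0:
        return invisible_image
    corrected = img * (target_level / background)
    return np.clip(corrected, 0, 255).astype(np.uint8)
```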
Steps S11 to S18, which form the overall flow of the present modification, are similar to steps S1 to S8 described above.
Since the background correction processing is not performed in this case, the difference in reflectance in the infrared wavelength range between the three types of paper appears in the output images.
By contrast, when the background correction processing is performed, such a difference in background density does not appear in the output images.
Thus, by performing the background correction processing such as processing of correcting a background area to white, even when different white papers having different reflectances in an invisible light wavelength range are used, the background is unified to white.
Although, in the present modification, the description is given of a case in which the background of the paper is white and the image level of the background is unified to white, this is just one example. Alternatively, the density may be changed according to the density of the paper or of black character information printed on the paper. For example, assuming that black is level 0 and white is level 255, the image level may be slightly lowered to a level of about 200 to 230. Alternatively, the image level may be further lowered so that the background is corrected to a constant level without degrading the density of black character information or the like, in other words, without degrading visual recognizability.
With the configuration according to Modification 1 as described above, the background correction is performed on an invisible image that is read in the second operating mode. Even when a defined type of paper is not used, an output image that is the same or substantially the same to the naked human eye as the original certificate can be obtained. Thus, such an output image including the invisible image information can serve as evidence in the same manner as a stored image including visible information.
As Modification 2 of the example, a configuration of a reading device having a print correction processing unit that performs image correction suitable for printing is described. In a case where latent image information in a document is read by a reading device using invisible light and the output image obtained by the reading is printed on paper or the like, a low density of the latent image information suffices as long as the printed information only needs to be visually recognizable. However, when the latent image information is code information, the low density causes an error in reading the code. This requires correction of the density. Accordingly, in Modification 2, a configuration is described in which the print correction processing unit is added so that density correction of latent image information can be performed when the latent image information is to be printed.
Since the image generation mode of the image processor 34 switches from the multicolor image generation mode to the single-color image generation mode, the print correction processing unit 342 operates in the single-color image generation mode.
The print correction processing unit 342 of the image processor 34 performs image correction suitable for printing.
In step S24, when the switching unit 33 switches the image generation mode of the image processor 34 from the multicolor image generation mode to the single-color image generation mode, the print correction processing is further turned on.
However, when multiple persons use code information printed on paper by reading it with their own imaging devices, such as mobile cameras, the white of the background and the black of the dots have to be corrected to appropriate levels so that the code information can be recognized by any of the imaging devices.
By contrast, when the image processor 34 performs the print correction processing, the white of the background and the black of the dots are corrected to appropriate levels.
As described above, when latent image information is, for example, code information that is to be read by another device, the print correction processing can enhance the recognition rate.
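By way of illustration and not limitation, the following sketch shows one possible form of the print correction processing, assuming a simple tone-curve stretch that pushes the background toward white and the dots of the code information toward black; the breakpoint values are assumptions and are not taken from the disclosure.

```python
import numpy as np

def print_correction(single_color_image, dark_in=120, bright_in=200):
    """Illustrative print correction: stretch the tone curve so that pixels at
    or below dark_in become black and pixels at or above bright_in become
    white, increasing the contrast of low-density code information before the
    image is printed."""
    img = single_color_image.astype(np.float32)
    stretched = (img - dark_in) * 255.0 / (bright_in - dark_in)
    return np.clip(stretched, 0, 255).astype(np.uint8)
```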
Invisible latent image information that is not perceptible to the naked human eye can be embedded by printing on a document using a material exhibiting different absorption and transmission characteristics in a visible light wavelength range and an infrared wavelength range. Since a general-purpose image sensor can read infrared light, the embedded latent image information can be visualized by outputting a single-color image in a reading mode using infrared light. As Modification 3 of the example, a configuration of a reading device in which a near-infrared (NIR) light source is used as the light source 311 is described.
In the first operating mode, the image sensor 312 reads the document P1 with the RGB pixels under visible light and outputs multiple pieces of image information for the colors of RGB, respectively. In the second operating mode, the document P1 is irradiated with near-infrared light, and the image sensor 312 reads the document P1 in a single color with the RGB pixels. Also in this case, multiple pieces of monochromatic image information are output for the three colors, respectively. Although the multiple pieces of image information respectively corresponding to the three colors are output in the second operating mode, the image processor 34 is switched to the single-color image generation mode in the second operating mode. Accordingly, for example, image information corresponding to one of the three colors can be output as a monochrome image, or the multiple pieces of image information respectively corresponding to the three colors can be adjusted to be monochrome and output. Thus, an image is generated in a manner different from that in the multicolor image generation mode.
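By way of illustration and not limitation, the following sketch shows the two options mentioned above for forming a monochrome image from the three pieces of image information output under near-infrared light, namely using one channel as-is or combining the three channels; the combination method is an assumption.

```python
import numpy as np

def single_color_from_nir_rgb(r, g, b, strategy="average"):
    """Under near-infrared light the R, G, and B pixels all read the same
    invisible wavelength range, so a monochrome image can be formed either by
    taking one plane as-is or by combining the three planes. Both options are
    illustrative; the disclosure does not prescribe particular weights."""
    if strategy == "single_channel":
        return r.copy()                                   # use one plane as-is
    stacked = np.stack((r, g, b)).astype(np.float32)
    return np.clip(stacked.mean(axis=0), 0, 255).astype(np.uint8)
```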
As described above, by using a general-purpose image sensor, the same effect can be obtained in a simpler and lower-cost reading device.
As Modification 4 of the example, a configuration of the reading device 3 including an infrared (IR) pixel having peak sensitivity in a near-infrared wavelength range in addition to the R pixel, the G pixel, and the B pixel is described.
In the first operating mode, the image sensor 312a reads the document P1 with the pixels of RGB (the R pixel m1, the G pixel m2, and the B pixel m3), to output multiple pieces of image information (the R image information 51, the G image information 52, and the B image information 53) respectively corresponding to the RGB colors. In the second operating mode, the image sensor 312a reads the document P1 in monochrome with the IR pixels m4. In the second operating mode, since the image processor 34 is switched to a setting of the single-color image generation mode, image information 54 of the IR pixel m4 can be output as a monochrome image. The image information 54 of the IR pixel m4 corresponds to infrared image information.
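By way of illustration and not limitation, the following sketch shows how the planes of such an image sensor might be separated and how only the image information 54 of the IR pixels is used in the second operating mode; the 2-by-2 pixel layout assumed here is illustrative, and actual RGB-IR pixel arrangements vary by sensor.

```python
import numpy as np

def split_rgbir_mosaic(raw):
    """Split a raw frame from an image sensor whose 2x2 pixel unit is assumed
    to be [R, G; B, IR] into four quarter-resolution planes."""
    r  = raw[0::2, 0::2]
    g  = raw[0::2, 1::2]
    b  = raw[1::2, 0::2]
    ir = raw[1::2, 1::2]
    return r, g, b, ir

def read_document(raw, second_operating_mode):
    """In the second operating mode only the IR plane is used and output as a
    monochrome image; otherwise the RGB planes form a color image."""
    r, g, b, ir = split_rgbir_mosaic(raw)
    if second_operating_mode:
        return ir                       # image information 54 of the IR pixels
    return np.stack((r, g, b), axis=-1)
```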
With the reading device 3 having the configuration described above, a visible image and an invisible image can be acquired at the same time in a single reading operation.
By acquiring a visible image at the same time when acquiring an evidence image for verifying authenticity of the certificate, an original certificate image can also be kept. Currently, a copy of an original document is required for an application at a government office or the like, and an image for checking authenticity is required for response in case of emergency. The configuration of acquiring the original document and the image for checking authenticity at the same time in a single scan enhances convenience.
As Modification 5 of the example, a configuration of a reading device that can change a correction level of background correction processing is described.
The background correction level setting unit 41 can be set by the controller 32, and a correction value set in the background correction level setting unit 41 is set in the background correction processing unit 341.
In the second operating mode, a correction value of the background correction level setting unit 41 is set in the background correction processing unit 341 in step S21. After the process of step S21, the reading processing is performed in the second operating mode (step S16), the single-color image generation is performed by the image processor 34, and the background correction processing using the correction value is performed (step S17).
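By way of illustration and not limitation, and building on the background correction sketch given for Modification 1, the following shows how a configurable correction level, such as the level of about 200 to 230 mentioned above, might be supplied by the background correction level setting unit 41; the class structure and default value are assumptions.

```python
import numpy as np

class BackgroundCorrectionLevelSetting:
    """Stand-in for the background correction level setting unit 41: holds a
    target background level that the controller can change (for example, a
    value of about 200 to 230 instead of full white at 255)."""
    def __init__(self, target_level=230):
        self.target_level = target_level

def correct_background_with_level(invisible_image, setting, percentile=90):
    """Background correction using the configured target level; the
    percentile-based background estimate is an illustrative assumption."""
    img = invisible_image.astype(np.float32)
    background = np.percentile(img, percentile)
    if background <= 0:
        return invisible_image
    corrected = img * (setting.target_level / background)
    return np.clip(corrected, 0, 255).astype(np.uint8)
```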
Although the description given above is of a case in which the background correction is not performed in reading modes other than the second operating mode, the background correction may be performed in the other reading modes as well.
It is assumed that there is a document on which characters (A, B, and C) are printed with an insufficient application amount of invisible light ink.
By contrast, when the correction level of the background correction processing is adjusted, the characters printed with the insufficient application amount of invisible light ink can be reproduced in the output image without being lost by the background correction.
As Modification 6, an operation is described that is performed when the single-color image generation mode is set to the single-color binary image generation mode. The single-color image generation mode includes a single-color multilevel image generation mode and a single-color binary image generation mode. The single-color multilevel image generation mode is a mode for generating a single-color multilevel (also referred to as "grayscale") image. The single-color binary image generation mode is a mode for generating a black-and-white binary image.
As Modification 7, an operation is described that is performed when the single-color image generation mode is set to the single-color multilevel image generation mode.
With regard to invisible image information in which the density itself forms a design or in which a difference in density has meaning, such as a logo mark, the single-color multilevel image generation mode is suitable because the difference in density is reproduced in the output image.
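By way of illustration and not limitation, the following sketch contrasts the two single-color modes: the single-color multilevel image generation mode keeps the gray-scale values so that density differences remain visible, while the single-color binary image generation mode applies a threshold; the threshold value is an assumed example.

```python
import numpy as np

def single_color_output(gray_image, multilevel=True, threshold=128):
    """Multilevel mode (Modification 7) keeps the gray-scale values, which
    preserves density differences such as those in a logo mark; binary mode
    (Modification 6) converts the image to black-and-white by thresholding."""
    if multilevel:
        return gray_image.copy()
    return np.where(gray_image >= threshold, 255, 0).astype(np.uint8)
```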
Control is described that is performed in a case where, after an operation in the single-color image generation mode is completed, the image generation mode is switched to a mode other than the single-color image generation mode, such as a default mode. With regard to a reading device, in general offices or public spaces, a color scan operation or a monochrome scan operation that scans with visible light, or a copy operation, is selected more often than scanning with invisible light. For this reason, switching back to, for example, a full-color mode or an individually set default mode, instead of keeping the invisible light scan mode used for relatively special purposes, enhances convenience for the user.
A scanner is often set by default, for convenience, to an automatic color mode in which a document is read and a determination is made as to whether the read document is color or monochrome, or to a full-color mode. When a document is unintentionally scanned using invisible light with the default setting, an image may be output with unnatural coloring or density processing, which differs from the appearance perceptible to the naked human eye. This causes the drawback that a user finds the output image strange. In this case, the document has to be scanned again using invisible light after the mode is reset to a suitable one, which takes time and effort. Further, in a case where one temporarily keeps a certificate handed over by its owner and has to return the certificate immediately after a scan or copy operation is checked, rescanning of the certificate cannot be performed. In the present embodiment, when the second operating mode for generating an image using invisible light is selected, the image generation mode is automatically switched to the single-color image generation mode. Accordingly, an image generation mode optimum for the operating mode using invisible light is set automatically without requiring the user to switch the setting, and unnatural coloring of the output image can be prevented. In addition, the image processing apparatus according to the embodiment is switched to the single-color image generation mode, which is most suitable, when the operating mode using invisible light is selected, regardless of whether the object is a moving object or a stationary object, and thus image quality is enhanced. Further, scanning and copying are performed with appropriate settings at the time of the first reading.
Each of the image processing apparatus 1 and the reading device 3 includes, for example, a central processing unit (CPU) and a memory. The CPU executes a program loaded into the memory to implement one or more of the controller 11, the switching unit 12, the image processor 13, the controller 32, the switching unit 33, the image processor 34, the background correction processing unit 341, the print correction processing unit 342, and the background correction level setting unit 41. Alternatively, one or more of the controller 11, the switching unit 12, the image processor 13, the controller 32, the switching unit 33, the image processor 34, the background correction processing unit 341, the print correction processing unit 342, and the background correction level setting unit 41 may be implemented by hardware such as an application specific integrated circuit (ASIC).
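By way of illustration and not limitation, the following sketch shows how the automatic switching and the return to the default image generation mode might be implemented in software executed by such a CPU, using a construct that restores the default mode after an operation using invisible light completes; the mode names and class structure are assumptions introduced only for this sketch.

```python
from contextlib import contextmanager

class ImageProcessorSettings:
    """Minimal stand-in for the image generation mode setting of the image
    processor; 'auto_color' as the default mode is an assumed example."""
    def __init__(self, default_mode="auto_color"):
        self.default_mode = default_mode
        self.mode = default_mode

@contextmanager
def single_color_operation(settings):
    """Switch to the single-color image generation mode for an invisible-light
    scan, then restore the default mode after the operation completes, so the
    next visible-light scan or copy uses the usual setting."""
    settings.mode = "single_color"
    try:
        yield settings
    finally:
        settings.mode = settings.default_mode

# Usage sketch: a scan in the second operating mode
settings = ImageProcessorSettings()
with single_color_operation(settings):
    pass  # read the document with invisible light and generate the image here
assert settings.mode == "auto_color"  # the default mode is restored afterwards
```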
The sheet feeding section 5 includes sheet trays 521 and 522 and a sheet roller unit 523. Different sizes of recording media are placed on the sheet trays 521 and 522. The sheet roller unit 523 includes a plurality of roller pairs that convey recording media from the sheet trays 521 and 522 to an image forming position at which the image forming section 4 forms images on the recording media.
The image forming section 4 includes an exposure device 431, photoconductor drums 432 each having a drum shape, developing devices 433, a transfer belt 434, and a fixing device 435. The image forming section 4 exposes the photoconductor drums 432 with the exposure device 431 according to image data of a document read by an image reader inside the ADF 3A to form latent images on the photoconductor drums 432 and supplies toner of different colors to the photoconductor drums 432 by the developing devices 433 to develop the latent images on the photoconductor drums 432. The image forming section 4 transfers toner images developed on the photoconductor drums 432 by the transfer belt 434 to a recording sheet supplied from the sheet feeding section 5 and fuses the toners of the toner images transferred to the recording sheet by the fixing device 435 to fix a composite color image to the recording sheet.
Thus, the reading device of the example or the modifications is applicable to the image forming apparatus.
The above-described embodiments are illustrative and do not limit the present invention. Thus, numerous additional modifications and variations are possible in light of the above teachings. For example, elements and/or features of different illustrative embodiments, examples, modifications may be combined with each other and/or substituted for each other within the scope of the present invention. Any one of the above-described operations may be performed in various other ways, for example, in an order different from the one described above.
The present invention can be implemented in any convenient form, for example using dedicated hardware, or a mixture of dedicated hardware and software. The present invention may be implemented as computer software implemented by one or more networked processing apparatuses. The processing apparatuses include any suitably programmed apparatuses such as a general purpose computer, a personal digital assistant, a Wireless Application Protocol (WAP) or third-generation (3G)-compliant mobile telephone, and so on. Since the present invention can be implemented as software, each and every aspect of the present invention thus encompasses computer software implementable on a programmable device. The computer software can be provided to the programmable device using any conventional carrier medium (carrier means). The carrier medium includes a transient carrier medium such as an electrical, optical, microwave, acoustic or radio frequency signal carrying the computer code. An example of such a transient medium is a Transmission Control Protocol/Internet Protocol (TCP/IP) signal carrying computer code over an IP network, such as the Internet. The carrier medium may also include a storage medium for storing processor readable code such as a floppy disk, a hard disk, a compact disc read-only memory (CD-ROM), a magnetic tape device, or a solid state memory device.
The functionality of the elements disclosed herein may be implemented using circuitry or processing circuitry which includes general purpose processors, special purpose processors, integrated circuits, application specific integrated circuits (ASICs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), conventional circuitry and/or combinations thereof which are configured or programmed to perform the disclosed functionality. Processors are considered processing circuitry or circuitry as they include transistors and other circuitry therein. In the disclosure, the circuitry, units, or means are hardware that carry out or are programmed to perform the recited functionality. The hardware may be any hardware disclosed herein or otherwise known which is programmed or configured to carry out the recited functionality. When the hardware is a processor which may be considered a type of circuitry, the circuitry, means, or units are a combination of hardware and software, the software being used to configure the hardware and/or processor.
This patent application is based on and claims priority to Japanese Patent Application No. 2022-073501, filed on Apr. 27, 2022, in the Japan Patent Office, the entire disclosure of which is hereby incorporated by reference herein.