This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2022-131307 filed Aug. 19, 2022.
The present disclosure relates to an information processing apparatus, an information processing method, and a non-transitory computer readable medium.
The following technologies relate to specifying an area to be processed in a document.
Japanese Patent No. 6019872 discloses a data conversion kit designed to alleviate the burden of preparations for automatic processing after acquiring a large number of sources to be converted into data and utilized.
The data conversion kit includes an information processing terminal that captures an image of a source to be acquired, and a support which is a card, a sheet, or a thin plate of paper, resin, or fabric that supports the source during image capture.
In the data conversion kit, the support is provided with: a placement area where a source to be acquired is placed, the placement area being rectangular in a plan view and conforming to the general shape and dimensions of the source to be acquired; and an outer area adjacent to the outside of the placement area, the outer area being where information stipulating the content of processing to be executed on a captured image that the information processing terminal acquires by capturing an image of the source to be acquired is written down manually by the user or is attached or printed as a sticker.
In addition, in the data conversion kit, the information processing terminal is provided with: an image capture unit that captures an image of both the outer area and the source to be acquired that is placed in the placement area on the support; and an information processing unit that obtains the captured image by extracting, from the image captured by the image capture unit, the portion of the placement area or the source to be acquired that is placed in the placement area, and also reads the information expressed in the outer area to execute multiple processes corresponding to the information on the captured image.
Japanese Unexamined Patent Application Publication No. 2013-161425 discloses a trimming program for a mobile phone, the trimming program being for the purpose of automatically trimming photographic data captured by a camera of the mobile phone.
The trimming program for a mobile phone is installable in a mobile phone including a camera, a processor that processes data obtained by image capture, and a memory, and the program processes photographic data obtained by using the camera to capture an image of an area on a document enclosed by a rectangular virtual frame formed by placing four markers indicating the desire to save the area as an image, together with the four markers.
Additionally, the trimming program for a mobile phone includes a step of automatically detecting the four markers from the photographic data and a step of generating image data by automatically trimming, from the photographic data, the rectangular area demarcated by the detected four markers.
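The trimming described here (detect four markers, then crop the rectangle they demarcate) can be expressed as the following sketch. This is an illustration only, not the disclosed program itself; the image is modeled as a list of pixel rows, and `marker_points` is assumed to already hold the detected (x, y) centers of the four markers.

```python
# Illustrative sketch: crop the rectangular area demarcated by four
# detected markers from photographic data. The image is modeled as a
# list of pixel rows; marker_points holds four (x, y) marker centers.
def trim_by_markers(image, marker_points):
    xs = [x for x, _ in marker_points]
    ys = [y for _, y in marker_points]
    left, right = min(xs), max(xs)
    top, bottom = min(ys), max(ys)
    # Keep only the rows and columns inside the demarcated rectangle.
    return [row[left:right + 1] for row in image[top:bottom + 1]]
```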
In the case of using markers to specify a partial area to be processed in a document, if there are multiple documents, there is the problem that the markers must be provided for each of the multiple documents.
Aspects of non-limiting embodiments of the present disclosure relate to specifying an area to be processed for each of multiple documents without having to provide markers indicating the area to be processed in each document.
Aspects of certain non-limiting embodiments of the present disclosure address the above advantages and/or other advantages not described above. However, aspects of the non-limiting embodiments are not required to address the advantages described above, and aspects of the non-limiting embodiments of the present disclosure may not address advantages described above.
According to an aspect of the present disclosure, there is provided an information processing apparatus comprising a processor configured to, in a case of performing predetermined processing with respect to an area to be processed in a plurality of documents, perform the processing on a first document by using an area to be processed that is specified using a predetermined marker as a reference area, and perform the processing on a second or subsequent document with the reference area specified in the first document as the area to be processed.
An exemplary embodiment of the present disclosure will be described in detail based on the following figures, wherein:
Hereinafter, an exemplary embodiment of the present disclosure will be described in detail and with reference to the drawings. Note that the present exemplary embodiment describes an example in which an information processing apparatus according to the technology of the present disclosure is applied to an image processing apparatus which is set up in an office and which includes a document camera. However, the technology of the present disclosure is not limited to being applied to an office setting and may also be applied to any place where an image processing apparatus could be installed, such as at a school or inside the home. Moreover, the technology of the present disclosure is not limited to being applied to an image processing apparatus and may also be applied to any apparatus that executes some kind of processing with respect to a multi-page document, such as an image reading apparatus that reads images or an image transmission apparatus that transmits read images to another apparatus.
First,
As illustrated in
Also, the image processing apparatus 10 according to the present exemplary embodiment is provided with a document camera 70 configured to capture an image of the upper side of the document bed 30. The document camera 70 according to the present exemplary embodiment is provided on the other end of an arm 72 of which one end is secured to the back side of the document bed 30, and is positioned so that the image capture angle of view substantially coincides with a document placement area 32 on the document bed 30.
Note that in the present exemplary embodiment, a camera that captures a color image is applied as the document camera 70, but the document camera 70 is not limited thereto. For example, a camera that captures a monochrome image or a grayscale image may also be applied as the document camera 70.
On the other hand, the UI unit 40 according to the present exemplary embodiment is provided with an input unit 14 with various types of switches and a display unit 15 including a liquid crystal display panel or the like. The display unit 15 according to the present exemplary embodiment is configured as what is called a touch panel display provided with a transmissive touch panel on the front side of the display.
Note that in the present exemplary embodiment, a digital multi-function device including functions such as an image printing function, an image reading function, and an image transmitting function is applied as the image processing apparatus 10. However, the configuration is not limited to the above, and another image processing apparatus, such as an image processing apparatus including only an image printing function or an image processing apparatus including only an image printing function and an image reading function, may also be applied as the image processing apparatus 10.
Next,
As illustrated in
The storage unit 13 according to the present exemplary embodiment is realized by a hard disk drive (HDD), a solid-state drive (SSD), flash memory, or the like. An information processing program 13A is stored in the storage unit 13, which acts as a storage medium. The information processing program 13A is stored (installed) in the storage unit 13 by connecting the recording medium 17 with the information processing program 13A written thereto to the media reading/writing device 16 and causing the media reading/writing device 16 to read out the information processing program 13A from the recording medium 17. The CPU 11 reads out and loads the information processing program 13A from the storage unit 13 into the memory 12, and sequentially executes processes included in the information processing program 13A.
In addition, a marker-related information database 13B and a process-related information database 13C are stored in the storage unit 13. Note that the marker-related information database 13B and the process-related information database 13C will be described in detail later.
Next,
As illustrated in
When performing predetermined processing on an area to be processed in multiple documents, the processing unit 11A according to the present exemplary embodiment performs the processing on the first document by using an area to be processed that is specified using predetermined markers as a reference area. In contrast, for the second and subsequent documents, the processing unit 11A according to the present exemplary embodiment performs the processing with the reference area specified in the first document as the area to be processed.
In the present exemplary embodiment, markers expressed by a machine-readable code, such as a barcode or a two-dimensional code, are applied as the markers. Also, in the present exemplary embodiment, a code that includes information indicating the content of the processing is applied as the machine-readable code. However, the markers are not limited to the above, and markers that do not include a machine-readable code or other code may also be applied. In this case, the content of the processing may be stipulated according to differences in appearance, such as the shape, size, and color of the markers. Furthermore, even if a machine-readable code is used as the markers, the content of the processing may be stipulated according to the type of color of the markers.
Also, the processing unit 11A according to the present exemplary embodiment applies the reference area to the second and subsequent documents if the second or subsequent document conforms to predetermined characteristics of the first document.
In the present exemplary embodiment, the two characteristics of the dimensions and color of the document area are applied as the characteristics, but the characteristics are not limited thereto. For example, one characteristic or a combination of multiple characteristics from among the layout, color scheme, and title of the document may also be applied in addition to the above two characteristics. As a further example, some kind of characteristic related to the document, such as the presence or absence of ruled lines, the size of characters, or the presence or absence of a predetermined logo, may also be applied as the characteristics.
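As a rough sketch of this check, the reference area would be reused for a later document only when its selected characteristics match those recorded for the first document. The dictionary representation and key names here are illustrative assumptions, not part of the disclosure:

```python
# Sketch of the characteristic check: reuse the reference area for a later
# document only when its selected characteristics (here, dimensions and
# color, as in the exemplary embodiment) match those of the first document.
def conforms_to_first_document(first, candidate, keys=("dimensions", "color")):
    return all(first.get(k) == candidate.get(k) for k in keys)
```

Other characteristics, such as layout or title, could be included simply by extending the `keys` tuple.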
Also, in the present exemplary embodiment, processing related to an image reading process using the document camera 70 is applied as the processing, but the configuration is not limited thereto. For example, processing unrelated to the document camera 70, such as a scan process by the image processing apparatus 10 for reading a document as an image, or the processing of an image captured by a camera-equipped smartphone acting as an image processing apparatus, may also be applied as the processing.
Also, in the case in which the area to be processed is not specified successfully, the processing unit 11A according to the present exemplary embodiment presents an indication that specification is unsuccessful. Note that in the present exemplary embodiment, a presentation given via a display by the display unit 15 of the UI unit 40 is applied as the presentation indicating that specification is unsuccessful, but the configuration is not limited thereto. For example, a presentation given by image formation by the image processing apparatus 10 or a presentation given by sound produced by a sound playback device may also be applied as the presentation indicating that specification is unsuccessful.
Also, in the present exemplary embodiment, the case in which the area to be processed contains no characters is applied as the case in which the area to be processed is not specified successfully, but the configuration is not limited thereto. For example, cases such as when the area to be processed is itself difficult to recognize because of a folded or soiled document may also be applied as the case in which the area to be processed is not specified successfully.
Next,
As illustrated in
As described above, a machine-readable code is used for markers 90A, 90B, and so on according to the present exemplary embodiment. Note that hereinafter, the markers 90A, 90B, and so on will be collectively referred to as the “markers 90” when not being distinguished individually.
Various types of markers 90 are prepared as the markers 90 according to the present exemplary embodiment and are classified into multiple types, such as area specification markers for specifying the area to be processed according to the positions where the markers are placed and execution process specification markers indicating the content of the processing to be executed.
Additionally, as illustrated in
In the present exemplary embodiment, markers that also indicate the content of the processing to be executed with respect to the specified area to be processed are applied as the area specification markers 90, and examples of the content of the processing in this case include a trimming process, a masking process, an optical character recognition (OCR) process, and the like. Note that examples of the content of the processing for the execution process specification markers 90 include a forwarding process via email attachment or email notification, a printing (image forming) process, and the like.
In the example illustrated in
Also, in the example illustrated in
In the image processing apparatus 10 according to the present exemplary embodiment, as described above, when performing predetermined processing on an area to be processed in multiple documents 80, the processing is performed on the first document 80 by using an area to be processed that is specified using the markers 90 as a reference area. In contrast, for the second and subsequent documents 80, the processing is performed with the reference area specified in the first document 80 as the area to be processed.
Next,
The marker-related information database 13B according to the present exemplary embodiment is a database in which information related to the markers 90 described above is registered. In one example, as illustrated in
The marker type is information indicating the type of the markers 90 described above, and the placement method is information indicating the method of placing the markers 90 described above. Also, the marker ID is preassigned information which is different for each type of marker and each type of processing content to identify the corresponding marker, and the processing content is information indicating the content of the processing represented by the corresponding marker.
In the example illustrated in
That is, in the present exemplary embodiment, information indicating the content of the processing is not itself included in the marker 90; rather, information indicating the marker ID is included in the marker 90. Furthermore, in the present exemplary embodiment, by acquiring information indicating the content of the processing corresponding to the marker ID from the marker-related information database 13B, the content of the corresponding processing is specified. However, the configuration is not limited to the above, and information indicating the content of the processing itself may also be included in the marker 90. In this case, the marker-related information database 13B would be unnecessary.
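The look-up described here can be sketched as below. The dictionary stands in for the marker-related information database 13B, and the marker IDs and entries are invented for illustration:

```python
# Sketch of resolving processing content from a marker ID. The dictionary
# stands in for the marker-related information database 13B; the IDs and
# entries are invented for illustration.
MARKER_RELATED_INFO = {
    "A-01": {"marker_type": "area specification", "processing_content": "trimming"},
    "E-01": {"marker_type": "execution process", "processing_content": "printing"},
}

def resolve_processing_content(marker_id):
    info = MARKER_RELATED_INFO.get(marker_id)
    if info is None:
        raise KeyError(f"unregistered marker ID: {marker_id}")
    return info["processing_content"]
```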
Next,
The process-related information database 13C according to the present exemplary embodiment is a database in which information related to processes specified by the user using the markers 90 is registered. In one example, as illustrated in
The characteristics are information indicating the characteristics described above, the processing content is information indicating the content of processing to be performed on a document with the corresponding characteristics, and the set value is information to be set with respect to the corresponding processing.
Note that in the case in which the markers 90 are area specification markers, an area to be processed in a single location is specified by a set of four markers 90, and a single piece of coordinate information indicating the area to be processed that is specified by the placement positions of the set of markers 90 is applied as the set value in this case. In the present exemplary embodiment, information indicating coordinates in a two-dimensional coordinate system with the origin set to the upper-left corner of the document 80 is applied as the coordinate information, but the coordinate information is not limited thereto. For example, information indicating coordinates in a two-dimensional coordinate system with the origin set to any of the lower-left corner, the upper-right corner, and the lower-right corner among the four corners of the document 80 may also be applied as the coordinate information. Also, in the present exemplary embodiment, as illustrated in
In this way, in the present exemplary embodiment, since one of the corners of the document 80 is applied as the origin of the coordinate system indicating the position of the area to be processed, it is possible to place the document 80 at any position in the document placement area 32.
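A minimal sketch of this document-relative coordinate handling, assuming the upper-left corner of the document 80 has already been detected in camera coordinates:

```python
# Sketch: re-express marker positions captured in camera coordinates
# relative to the detected upper-left corner of the document, so that the
# document may be placed anywhere in the placement area.
def to_document_coordinates(points, document_upper_left):
    ox, oy = document_upper_left
    return [(x - ox, y - oy) for x, y in points]
```

For example, a marker captured at (150, 100) on a document whose upper-left corner lies at (120, 80) is recorded as (30, 20), and the same value results wherever the document sits on the document bed.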
Also, in the case in which the markers 90 are markers for a purpose other than area specification, a single set value may be set with respect to a single marker 90. For example, in the case in which the content of the processing is a forwarding process via email attachment, information indicating the forwarding destination (in the present exemplary embodiment, a device name) is the set value, whereas in the case in which the content of the processing is printing, a set value is unnecessary.
Note that, to avoid confusion, the present exemplary embodiment describes a case in which a relevant set value, such as the above information indicating a forwarding destination, is registered in advance, but the configuration is not limited thereto. For example, the user or someone else may also be prompted to input a set value, as appropriate.
In the example illustrated in
Next,
The information processing illustrated in
In step 100 of
In step 104, the CPU 11 detects images of the markers 90 from the image expressed by the acquired image information. In step 106, the CPU 11 specifies the marker ID indicated by the detected markers 90, and reads out all information corresponding to the specified marker ID (hereinafter referred to as “marker-related information”) from the marker-related information database 13B. Note that in the present exemplary embodiment, the images of the markers 90 are detected using known pattern matching technology of the related art, but the detection is not limited thereto. For example, the images of the markers 90 may be detected by artificial intelligence (AI) using a pre-trained segmentation model such as a semantic segmentation model or an instance segmentation model.
In step 108, the CPU 11 specifies predetermined types of characteristics (in the present exemplary embodiment, the dimensions and color of the document 80) from the image (hereinafter referred to as the "document image") 15A of the document 80 included in the image expressed by the acquired image information. Note that in the present exemplary embodiment, standard sizes such as A4 size and B5 size are applied as the dimensions of the document 80, but the dimensions are not limited thereto, and the actual size in height and width may also be applied as the dimensions of the document 80.
In step 110, the CPU 11 derives the set value corresponding to the processing according to each of the detected markers 90 as described above.
In step 112, the CPU 11 uses the various information obtained by the above processing to control the display unit 15 to display a preview screen with a predefined layout.
As illustrated in
If the preview screen illustrated in
Accordingly, in step 114, the CPU 11 determines whether the execute button 15E is designated by the user, and in the case of a negative determination, regards the register button 15D as being designated and proceeds to step 116.
In step 116, the CPU 11 registers, in the process-related information database 13C, information indicating the characteristics displayed by the characteristics image 15C in association with the processing content and set values displayed by the table image 15B, and thereafter proceeds to step 136.
On the other hand, in the case of a positive determination in step 114, that is, if the execute button 15E is designated by the user, the CPU 11 proceeds to step 118, executes the processing specified by each of the markers 90, and then proceeds to step 136.
If the user designates the execute button 15E on the preview screen, the processing with respect to the first document 80 is regarded as finished, and the document 80 placed on the document bed 30 is replaced by the next document 80. Also, in the case of wanting to designate new processing other than the specification of the area to be processed, the user places markers 90 corresponding to the processing in the document placement area 32 of the document bed 30. In response to the operations by the user, the processing in step 100 returns a negative determination, and the flow proceeds to step 120. Thereafter, the user successively performs similar operations for the documents 80 to be processed.
In step 120, the CPU 11 instructs the document camera 70 to capture an image, and acquires image information obtained by the document camera 70 in response to the instruction.
In step 122, similarly to the processing in step 108, the CPU 11 specifies the predetermined types of characteristics (in the present exemplary embodiment, the dimensions and color of the document 80) from the document image 15A corresponding to the acquired image information, that is, from the document image 15A of the second or subsequent document 80.
In step 124, the CPU 11 determines whether information indicating characteristics matching the specified characteristics is registered in the process-related information database 13C, and proceeds to step 130 in the case of a negative determination or proceeds to step 126 in the case of a positive determination.
In step 126, the CPU 11 reads, from the process-related information database 13C, the processing content and set value corresponding to the information indicating the characteristics determined to be matching in the processing of step 124. In step 128, the CPU 11 executes processing corresponding to the read processing content and set value, and thereafter proceeds to step 130.
Note that when executing the processing in step 128, in the case in which the area to be processed that is indicated by the read set value is not specified successfully, the CPU 11 controls the display unit 15 to display an error display screen including an error message 15F, an example of which is illustrated in
In step 130, the CPU 11 determines whether the markers 90 are included in the image expressed by the acquired image information, and proceeds to step 136 in the case of a negative determination, or proceeds to step 132 in the case of a positive determination.
In step 132, the CPU 11 reads marker-related information corresponding to the detected markers 90 from the marker-related information database 13B, similarly to the processing in step 106. In step 134, the CPU 11 executes the processing corresponding to the read marker-related information, and then proceeds to step 136.
In step 136, the CPU 11 determines whether a predetermined end timing as a timing at which to end the current information processing has been reached, and returns to step 100 in the case of a negative determination or ends the current information processing in the case of a positive determination. In the present exemplary embodiment, the timing at which a marker 90 is used to indicate that there is no next document 80 is applied as the end timing, but the end timing is not limited thereto. For example, the timing at which instruction input for ending the information processing is given through the UI unit 40 may also be applied as the end timing.
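The overall flow of steps 100 through 136 can be condensed into the following sketch. As simplifying assumptions, image capture is omitted, characteristic extraction, area specification, and execution are stood in for by plain callables, and the process-related information database 13C by a dictionary keyed on the specified characteristics:

```python
# Condensed sketch of the information processing flow: the first document's
# marker-specified area becomes the reference area, registered against the
# document's characteristics; second and subsequent documents with matching
# characteristics reuse that reference area without needing markers.
def process_documents(documents, specify_characteristics, specify_area,
                      execute, database):
    for index, document in enumerate(documents):
        characteristics = specify_characteristics(document)
        if index == 0:
            # First document: the area specified by the markers 90 becomes
            # the reference area (cf. steps 104-116).
            reference_area = specify_area(document)
            database[characteristics] = reference_area
        elif characteristics in database:
            # Second and subsequent documents: reuse the registered
            # reference area (cf. steps 122-128).
            reference_area = database[characteristics]
        else:
            continue  # No matching registration; steps 130-134 would apply.
        execute(document, reference_area)
```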
Note that the exemplary embodiment above describes a case of applying the technology of the present disclosure to an image processing apparatus 10 with the UI unit 40 located on the front side of the apparatus, as illustrated in
Also, the exemplary embodiment above describes a case in which the area to be processed that has been specified in the first document 80 is applied without exception to the second and subsequent documents 80, but the configuration is not limited thereto. For example, a marker 90 may also be used to indicate whether the area to be processed that has been specified in the first document 80 is to be applied to the second and subsequent documents 80.
Also, the exemplary embodiment above describes a case in which information related to all of the applied markers 90 is displayed in the table image 15B on the preview screen, but the configuration is not limited thereto. For example, only information corresponding to the markers 90 that the user has specified with respect to the displayed document image 15A may be displayed as the table image 15B.
Also, although the physical type of the markers is not specifically mentioned in the exemplary embodiment above, any of various physical types of markers, such as a magnet type, a flat marble type, a tag type, or a sticker type, may be applied.
Also, although not mentioned in the exemplary embodiment above, characters indicating the processing content or the like may also be written on the markers 90, for example.
Also, the exemplary embodiment above describes a case in which only information indicating the content of the processing is included in the markers 90, but the configuration is not limited thereto. For example, the set value in addition to information indicating the content of the processing may also be included in the markers 90.
Also, the exemplary embodiment above describes a case in which various databases are registered in the image processing apparatus 10, but the configuration is not limited thereto. For example, various databases may be registered in an external apparatus such as a server apparatus configured to communicate with the image processing apparatus 10, and the external apparatus may be accessed, as appropriate.
The foregoing describes an exemplary embodiment, but the technical scope of the present disclosure is not limited to the scope described in the exemplary embodiment above. Various modifications or alterations may be made to the exemplary embodiment above within a scope that does not depart from the gist of the present disclosure, and any embodiments obtained by such modifications or alterations are also included in the technical scope of the present disclosure.
Furthermore, the exemplary embodiment above does not limit the present disclosure as stated in the claims, and not all combinations of features described in the exemplary embodiment are necessarily required as means for addressing the issues of the present disclosure. The exemplary embodiment described above includes various levels of disclosure, and the various disclosures are elicited through the combination of the multiple structural elements disclosed herein. Even if several structural elements are removed from among all of the structural elements illustrated in the exemplary embodiment, the configuration with the several structural elements removed therefrom may still be elicited as a disclosure insofar as an effect is obtained.
Furthermore, the exemplary embodiment above describes a case in which various processing is achieved by a software configuration using a computer by executing a program, but the present disclosure is not limited thereto. For example, various processing may also be achieved by a hardware configuration, or by a combination of a hardware configuration and a software configuration.
Otherwise, the configuration of the image processing apparatus 10 described in the exemplary embodiment above is an example, and obviously, unnecessary portions may be removed or new portions may be added within a scope that does not depart from the gist of the present disclosure.
Also, the flow of the information processing described in the exemplary embodiment above is an example, and obviously, unnecessary steps may be removed, new steps may be added, or the processing sequence may be rearranged within a scope that does not depart from the gist of the present disclosure.
In the embodiments above, the term “processor” refers to hardware in a broad sense. Examples of the processor include general processors (e.g., CPU: Central Processing Unit) and dedicated processors (e.g., GPU: Graphics Processing Unit, ASIC: Application Specific Integrated Circuit, FPGA: Field Programmable Gate Array, and programmable logic device).
In the embodiments above, the term “processor” is broad enough to encompass one processor or plural processors in collaboration which are located physically apart from each other but may work cooperatively. The order of operations of the processor is not limited to one described in the embodiments above, and may be changed.
The foregoing description of the exemplary embodiments of the present disclosure has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical applications, thereby enabling others skilled in the art to understand the disclosure for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the disclosure be defined by the following claims and their equivalents.
An information processing apparatus comprising:
The information processing apparatus according to (((1))), wherein the marker is a marker expressed by a machine-readable code.
The information processing apparatus according to (((2))), wherein the machine-readable code includes information indicating content of the processing.
The information processing apparatus according to (((3))), wherein the machine-readable code stipulates the content of the processing by type of color.
The information processing apparatus according to any one of (((1))) to (((4))), wherein the processor is configured to apply the reference area to the second or subsequent document if the second or subsequent document conforms to a predetermined characteristic of the first document.
The information processing apparatus according to (((5))), wherein the characteristic is at least one of a dimension, color, layout, color scheme, and title of the document.
The information processing apparatus according to any one of (((1))) to (((6))), wherein the processing is processing using a document camera.
The information processing apparatus according to (((7))), wherein the processing using the document camera is processing related to image reading processing.
The information processing apparatus according to any one of (((1))) to (((8))), wherein the processor is configured to, in a case in which the area to be processed is not specified successfully, present an indication that specification is unsuccessful.
The information processing apparatus according to (((9))), wherein the case in which the area to be processed is not specified successfully is a case in which the area to be processed contains no characters.
Number | Date | Country | Kind |
---|---|---|---|
2022-131307 | Aug 2022 | JP | national |