1. Technical Field
The present invention relates to an image-output control device, a method of controlling image output, a program for controlling image output, and a printing device.
2. Related Art
Users print arbitrary photographs (for example, an identification photograph (ID photo) that is used for a resume, a driver's license, a passport, or the like) by using printers. As related technology, there is known a printing device in which a user selects an ID photo mode from a printing-mode selection screen, selection of the type of a printing sheet and the size of an ID photo is received from the user, selection of an image to be printed as the ID photo is received from the user, a face area is then extracted from the selected image, an area (clip area) including the extracted face area to be printed as the ID photo is determined, and an image of the clip area is printed on the selected printing sheet (see JP-A-2007-253488).
In JP-A-2007-253488, the user selects the ID photo mode by operating an operation panel while watching a display, sequentially checks a plurality of images that are read out from a memory card and displayed on the display, and selects an image desired to be printed as an ID photo. However, determining whether an image is appropriate as an ID photo while checking, one after another, the contents of the images photographed and stored in the memory card is a heavy load for the user. In particular, when many images are saved in the memory card, the load of this determination increases further.
In addition, even when a plurality of face images (predetermined images) is included in one image, the user may hesitate over which face image is appropriate as the ID photo. Furthermore, even when a correction process or the like is to be performed for a face image, it is troublesome for the user to determine a correction process that is appropriate to each face image.
An advantage of some aspects of the invention is that it provides an image-output control device, a method of controlling image output, an image-output control program, and a printing device that allow a user to easily recognize and select processes appropriate to each predetermined image such as a face image.
According to a first aspect of the invention, there is provided an image-output control device including: a detection unit that detects a predetermined image from a target image; and an output control unit that outputs, for each predetermined image detected from the target image, a menu display that can receive selection of a process to be performed for the target image and the predetermined image to a predetermined output target. According to the image-output control device, even when a plurality of predetermined images is detected from the target image, a menu display for each predetermined image is output to the output target. Accordingly, a user can easily recognize the selection of a process appropriate to each predetermined image by watching the menu display, and thereby an appropriate selection can be made.
In the above-described image-output control device, the output control unit may be configured to output, for each predetermined image, a menu display that has different items in accordance with the state of the detected predetermined image. In such a case, the menu display can present items that are optimal for the state of the corresponding predetermined image.
As an example, it may be configured that the detection unit detects a face image from the target image as the predetermined image, and the output control unit outputs, for a detected face image that faces approximately front, a menu display that includes an item of an identification photograph printing process. In addition, the output control unit may be configured to analyze color information for each detected predetermined image and, for a predetermined image whose color-information analysis result corresponds to a predetermined correction condition, output a menu display that includes an item of a predetermined color correcting process. In addition, the output control unit may be configured to analyze the shape of each detected predetermined image and, for a predetermined image whose shape analysis result corresponds to a predetermined correction condition, output a menu display that includes an item of a predetermined shape correcting process. In such a case, the user is not burdened with checking whether each predetermined image is appropriate as an ID photo, is to be corrected in color, or is to be corrected in shape. In addition, the user can easily recognize the selection of a process appropriate to each predetermined image by viewing the items included in each menu display.
In the above-described image-output control device, the output control unit may be configured to output the target image and the menu display for each predetermined image in a state in which a common reference sign is assigned to each predetermined image and the menu display corresponding to it. In such a case, the user can easily and visually recognize the correspondence relationship between each predetermined image and its menu display.
In the above-described image-output control device, the output control unit may be configured to print the target image and the menu display for each predetermined image on a printing medium. In such a case, the user can acquire a so-called order sheet in which the target image and the menu display for each predetermined image are printed on one printing medium. However, the predetermined output target according to an embodiment of the invention is not limited to the printing medium, and the output control unit may be configured to output the target image and the menu display for each predetermined image to a predetermined screen.
The technical idea of the invention may be conceived as a method of controlling image output that includes the processing steps performed by the units of the above-described image-output control device, or a program for controlling image output that allows a computer to perform functions corresponding to those units, in addition to the above-described image-output control device. In addition, the invention may be conceived as a printing device including: a detection unit that detects a predetermined image from a target image; and an output control unit that outputs, for each predetermined image detected from the target image, a menu display that can receive selection of a process to be performed for the target image and the predetermined image to a predetermined output target.
The invention will be described with reference to the accompanying drawings, wherein like numbers reference like elements.
An embodiment of the invention will be described in the following order.
1. Schematic Configuration of Printer
2. Image Outputting Process
3. Modified Example
The printer engine 16 is a printing mechanism that performs a printing operation based on print data. The card I/F 17 is an I/F used for exchanging data with the memory card MC that is inserted into a card slot 172. Image data is stored in the memory card MC, and the printer 10 can acquire the image data stored in the memory card MC through the card I/F 17. As a recording medium used for providing the image data, various media other than the memory card MC can be used. It is apparent that the printer 10 can also receive the image data as an input from external devices, other than the recording medium, that are connected thereto through the I/F unit 13. The printer 10 may be a printing device dedicated for consumer use or an office printing device (a so-called mini-laboratory device) dedicated for DPE. The operation unit 14 and the display unit 15 may be an input operation unit (a mouse, a keyboard, or the like) and a display that are configured separately from the main body of the printer 10. The printer 10 may also receive the print data from a PC or a server that is connected thereto through the I/F unit 13.
In the internal memory 12, a face image detecting unit 20, a display control section 30, and a print control section 40 are stored. The face image detecting unit 20 is a computer program that is used for performing a face image detecting process, to be described later, under a predetermined operating system. The face image detecting unit 20 corresponds to an example of a detection unit according to an embodiment of the invention. The display control section 30 is a computer program for acquiring or generating an image such as a user interface (UI) image, which is used for receiving various directions from a user, a message, a thumbnail image, or the like, to be output (displayed) in the display unit 15. In addition, the display control section 30 is also a display driver that controls the display unit 15 to display the UI image, the message, the thumbnail image, or the like on a screen of the display unit 15. The print control section 40 is a computer program that generates the print data based on the image data and controls the printer engine 16 to print an image on a printing medium based on the print data. In addition, the print control section 40 controls the printer engine 16 to print an order sheet to be described later. The display control section 30 and the print control section 40 correspond to an output control unit according to an embodiment of the invention.
The CPU 11 implements the function of each of these units by reading out the corresponding program from the internal memory 12 and executing it. In addition, in the internal memory 12, various types of data and programs such as trimming frame data 14b and neural networks NN1 and NN2 are stored. The printer 10 may be a multi-function device that has various functions such as a copy function and a scanner function (image reading function), in addition to the print function.
2-1. Detection of Face Image
In Step (hereinafter, the notation of “Step” will be omitted) S100, the face image detecting unit 20 acquires image data D representing one image (target image) to be processed from the recording medium, the external device, or the like. The image data D is bitmap data that is formed from a plurality of pixels. Each pixel is represented as a combination of gray scales (for example, 256 gray scales of “0” to “255”) of RGB channels. The image data D may be compressed at the stage of being recorded in a recording medium or the like, and the colors of the pixels may be represented in a different color space. In such cases, the face image detecting unit 20 acquires the image data D as RGB bitmap data by expanding the image data D or performing conversion of the color space.
In S200, the face image detecting unit 20 detects a face image from the image data D. In this embodiment, for the description, the predetermined image is assumed to be an image of a person's face. However, the predetermined image that can be detected by the configuration according to an embodiment of the invention is not limited to the image of a person's face. Thus, various targets such as artifacts, living things, natural objects, or landscapes can be detected as the predetermined image.
In S200, the face image detecting unit 20 can employ any technique, as long as it can detect a face image from the image data D. In this embodiment, as an example, detection is performed by using a neural network.
In S205, the face image detecting unit 20 sets one detection window SW for the image data D. The detection window SW is an area located on the image data D and becomes a target for detecting (determining the existence of) a face image. In addition, the face image detecting unit 20 may be configured to reduce the size of the image data D before performing the process of S205. When detection of a face image is performed on the image data D at its original size, the process load is heavy. Thus, the face image detecting unit 20 reduces the image size of the image data D by decreasing the number of pixels of the image data or the like, and the process of S205 and thereafter is performed on the reduced image data D. The face image detecting unit 20, for example, reduces the image data D to the size of QVGA (Quarter Video Graphics Array, 320 pixels×240 pixels). Moreover, the face image detecting unit 20 may convert the image data D into a gray image before performing the process of S205. The face image detecting unit 20 converts the RGB data of each pixel of the image data D into a brightness value Y (0 to 255) and generates image data D as a monochrome image having one brightness value Y per pixel. Generally, the brightness value Y can be calculated by adding R, G, and B together with predetermined weighting factors applied. The conversion of the image data D into a gray image is performed in advance to alleviate the load at the time of calculating characteristic amounts to be described later. A method of setting the detection window SW is not particularly limited. However, as an example, the face image detecting unit 20 sets the detection window SW as below.
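For illustration only, the reduction to QVGA and the conversion to a gray image described above can be sketched as follows. The luma weights 0.299/0.587/0.114 (ITU-R BT.601) and the nearest-neighbour subsampling are assumptions; the embodiment specifies only "predetermined weighting factors" and a reduction in the number of pixels.

```python
import numpy as np

def to_gray_qvga(image_rgb: np.ndarray) -> np.ndarray:
    """Reduce image data D to QVGA (320x240) and convert it to one
    brightness value Y (0-255) per pixel.

    image_rgb: H x W x 3 uint8 array.  The BT.601 weights below are an
    assumption; the embodiment only says "predetermined weighting factors".
    """
    h, w, _ = image_rgb.shape
    # Nearest-neighbour subsampling; averaging pixels would work equally well.
    ys = np.linspace(0, h - 1, 240).astype(int)
    xs = np.linspace(0, w - 1, 320).astype(int)
    small = image_rgb[ys][:, xs]
    weights = np.array([0.299, 0.587, 0.114])
    y = (small.astype(np.float64) @ weights).clip(0, 255)
    return y.astype(np.uint8)  # 240 x 320 monochrome image data D
```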
When returning the detection window SW to the leading position, the face image detecting unit 20 sets a detection window SW whose rectangle is smaller than the one used until then. Thereafter, the face image detecting unit 20, in the same manner as described above, sets the detection window SW in each position while moving it up to the final position of the image data D with its size maintained. The face image detecting unit 20 repeats such movement and setting of the detection window SW, gradually reducing its size, a number of times determined in advance. As described above, when one detection window SW is set in S205, the process of S210 and thereafter is performed.
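The sweep of the detection window SW described above, restarting from the leading position with a smaller rectangle on each pass, might be sketched as follows; the initial size, the shrink factor, the step width, and the number of passes are all illustrative assumptions, since the embodiment leaves them open.

```python
def iter_detection_windows(img_h: int, img_w: int,
                           start: int = 100, scale: float = 0.8,
                           step: int = 2, n_passes: int = 5):
    """Yield (x, y, size) for each square detection window SW: sweep the
    whole image at one size, then repeat with a gradually smaller size.
    All numeric parameters are illustrative assumptions."""
    size = start
    for _ in range(n_passes):
        for y in range(0, img_h - size + 1, step):    # move to final position
            for x in range(0, img_w - size + 1, step):
                yield x, y, size                       # one setting of SW (S205)
        size = max(8, int(size * scale))               # smaller rectangle next pass
```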
In S210, the face image detecting unit 20 acquires image data (window image data) XD formed of the pixels within the detection window SW that was set for the image data D in the previous S205.
In S215, the face image detecting unit 20 calculates a plurality of characteristic amounts based on the window image data XD acquired in the previous S210. These characteristic amounts can be acquired by applying various filters to the window image data XD and calculating, within each filter, statistics (an average value, a maximum value, a minimum value, and a standard deviation of brightness) that represent image characteristics such as the average brightness, the edge amount, and the contrast.
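As a minimal sketch, one way to compute such characteristic amounts is shown below; modelling each filter FT as a mask that selects a pixel region of the window is an assumption, since the embodiment does not specify the filters.

```python
import numpy as np

def characteristic_amounts(window_xd: np.ndarray,
                           filters: list[np.ndarray]) -> list[float]:
    """Compute characteristic amounts CA1, CA2, ... from window image data XD.
    Each filter is modelled as a boolean mask over the window; for the pixels
    it selects we take the average, maximum, minimum, and standard deviation
    of brightness.  The actual filters FT are unspecified in the embodiment."""
    amounts: list[float] = []
    for mask in filters:
        pixels = window_xd[mask].astype(np.float64)
        amounts += [pixels.mean(), pixels.max(),
                    pixels.min(), pixels.std()]
    return amounts
```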
However, the magnitudes of the weighting factors w and the value of the bias b for the linear combination of each unit U are initially set only to provisional values. Thus, there is an error between an output value K that is acquired by inputting the characteristic amounts CA1, CA2, CA3, . . . of the learning image data and the ideal output value K (1 or 0). The weighting factors w and the bias b of the units U that minimize such an error are calculated by using a numerical optimizing technique such as a gradient technique. The above-described error propagates from the latter-level layer to the former-level layer, and thus the weighting factors w and the bias b of the latter-level units U are sequentially optimized. In this embodiment, a “face image” is a concept including not only an image of a face photographed facing the front side but also an image of a face facing the right or left side (a side face) or facing the upper or lower side (a turning-up face or a turning-down face). Accordingly, the learning image data in which a face image exists, used for learning of the neural network NN1, includes image data in which a face facing the right or left side exists, image data in which a face turning up or turning down exists, and the like, in addition to image data in which a face facing the front side exists. By preparing in advance, inside the internal memory 12, the neural network NN1 optimized by learning with a plurality of learning image data, it can be determined whether a face image exists in the window image data XD based on the characteristic amounts CA1, CA2, CA3, . . . .
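The forward pass of such a network, with each unit U forming a linear combination (weights w, bias b) of its inputs, can be sketched as below. The two-layer shape and the sigmoid nonlinearity are assumptions; the weights would be fitted by a gradient technique so that K approaches 1 for learning images containing a face and 0 otherwise, as described above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def network_output(ca: np.ndarray, w1: np.ndarray, b1: np.ndarray,
                   w2: np.ndarray, b2: float) -> float:
    """Output value K of a small feed-forward network for the characteristic
    amounts CA1, CA2, CA3, ...  Each unit U computes a weighted sum plus a
    bias and applies a nonlinearity; the layer sizes are assumptions."""
    hidden = sigmoid(w1 @ ca + b1)           # former-level layer of units U
    return float(sigmoid(w2 @ hidden + b2))  # latter-level layer -> scalar K
```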
When “Yes” is determined in S225, it can be determined that a face image exists in the detection window SW set in the previous S205. However, according to this embodiment, it is additionally determined whether the face image existing in the detection window SW is a “front face”. The front face excludes the cases, described above, of a face facing the left or right side and a face turning up or turning down. In other words, the front face includes the case of a face image whose direction exactly faces the front side in the target image and the case of a face image whose direction is slightly deflected horizontally or vertically but in which all of the face organs (the left and right eyes, the nose, and the mouth) face almost the front side, so that the face image can be used as an ID photo without any restriction.
In S230 to S240, the face image detecting unit 20 performs the same process as that of S215 to S225 by using a neural network NN2. In other words, the characteristic amounts CA1, CA2, CA3, . . . are acquired based on the window image data XD acquired in the previous S210 (S230; however, the filters FT applied to the window image data XD may be different from the filters FT used in S215), the acquired characteristic amounts CA1, CA2, CA3, . . . are input to the neural network NN2 that is stored in the internal memory 12 in advance (S235), and the process branches based on whether the output value K from the neural network NN2 is equal to or larger than a predetermined value (S240).
Both the neural network NN1 and the neural network NN2 have the basic structure as shown in
In S240, for example, when the output value K of the neural network NN2 is equal to or larger than “0.5”, the face image detecting unit 20 determines that the value represents the existence of a front face in the window image data XD, and the process proceeds to S245. On the other hand, when the output value K of the neural network NN2 is smaller than “0.5”, the face image detecting unit 20 determines that the value represents the existence of a face image other than a front face (a non-front face) in the window image data XD, and the process proceeds to S250.
In S245, after associating information on the position (for example, the center position of the detection window SW in the image data D) and the size of the rectangle of the detection window SW set in the previous S205 with the image data D acquired in S100, the face image detecting unit 20 attaches identification information representing a front face and records the information in a predetermined area of the internal memory 12. As described above, recording the information on a detection window SW in which a front face is determined to exist corresponds to an example of detecting a front face. On the other hand, in S250, after associating the information such as the position and the size of the detection window SW set in the previous S205 with the image data D acquired in S100, the face image detecting unit 20 attaches identification information representing a non-front face and records the information in a predetermined area of the internal memory 12. As described above, recording the information on a detection window SW in which a non-front face is determined to exist corresponds to an example of detecting a non-front face.
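Putting S215 to S250 together, the two-stage decision for one detection window SW might look as follows; returning a label instead of recording to the internal memory 12, and passing the networks as callables, are simplifications for illustration.

```python
def classify_window(ca_nn1, ca_nn2, nn1, nn2, threshold: float = 0.5) -> str:
    """Two-stage decision for one detection window SW: NN1 decides whether a
    face image exists (S220-S225), NN2 then decides whether it is a front
    face (S235-S240).  The 0.5 threshold follows the embodiment."""
    if nn1(ca_nn1) < threshold:
        return "no_face"         # nothing is recorded for this window
    if nn2(ca_nn2) >= threshold:
        return "front_face"      # recorded with front-face identification (S245)
    return "non_front_face"      # recorded as a non-front face (S250)
```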
In S255, the face image detecting unit 20, under the idea of the method of setting the detection window SW described with reference to
2-2. Image Output for Display Unit
In S300 (
In S400, the display control section 30 determines the items of a menu UI for each face image in accordance with the state of each detected face image (the information on the detection window SW which is recorded in the internal memory 12). The menu UI, as described below, is a UI that is output to the display unit 15 and is used for receiving, from a user, the selection of a process to be performed for the face image, item by item. As processes for the face image, for example, there are an ID photo printing process, a color correcting process, a shape correcting process, and the like. The display control section 30 determines whether an item of each process is to be assigned to each detected face image.
For example, the display control section 30 determines whether the item of the ID photo printing process is to be assigned in accordance with the identification information that is attached to the information on the detection window SW recorded in the internal memory 12. In other words, the display control section 30 reads out the information on the detection window SW recorded in the internal memory 12. Then, when identification information representing a front face is attached to the read-out information, the display control section 30 assigns the item of “ID photo printing” to the information on the read-out detection window SW. On the other hand, when identification information representing a non-front face is attached to the information on the detection window SW read out from the internal memory 12, the display control section 30 does not assign the item of “ID photo printing” to the information on the read-out detection window SW.
In addition, the display control section 30 analyzes color information (for example, RGB values) of the area of the image data D (referred to as face image data) that is represented by the information on the detection window SW recorded in the internal memory 12. When the result of the analysis of the color information corresponds to a predetermined color correcting condition, an item of the color correcting process is assigned to the information on the detection window SW. For example, when a red-eye area is detected by performing so-called red-eye detection based on the color information of the face image data, the display control section 30 assigns an item of “red-eye correction” to the information on the detection window SW. For the detection of a red-eye area, various known techniques may be employed; for example, a technique disclosed in JP-A-2007-156694 can be used. The “red-eye correction” is one type of the color correcting process.
In addition, the display control section 30 determines whether the face image is a so-called color-blurred (red-blurred or orange-blurred) image based on the color information of the face image data. When the face image is determined to be a color-blurred image, the display control section 30 assigns an item of “color-blur correction” to the information on the detection window SW. Whether an image is color-blurred can be determined, for example, by generating histograms for R, G, and B and examining the relative deviations of their average values Rave, Gave, and Bave. Among the differences |Rave−Gave|, |Rave−Bave|, and |Bave−Gave| of the average values Rave, Gave, and Bave, when |Rave−Gave| and |Rave−Bave| are each equal to or larger than |Bave−Gave|, and Rave>Gave and Rave>Bave hold, it can be determined that the face image data is in a red-blurred or orange-blurred state. In addition, the display control section 30 determines whether the face image data is a backlight image based on the color information of the face image data. When the face image is determined to be a backlight image, an item of “backlight correction” is assigned to the information on the detection window SW. Whether the face image is a backlight image is determined by generating a histogram of brightness (one type of the color information) of the face image data and analyzing the shape of the histogram. For example, when the brightness distribution forms two peaks, one in a predetermined brightness range on the low-brightness side and one in a predetermined brightness range on the high-brightness side, and the number of pixels constituting the two peaks exceeds a predetermined reference number, the face image is determined to be a backlight image. The “color-blur correction” and the “backlight correction” are types of the color correcting process. As a technique for determining a color-blurred image or a backlight image, a technique other than the above-described ones may be used. In addition, the items of the color correcting process which are included in the menu UI are not limited to the above-described items.
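The two color-information tests described in this paragraph can be sketched as follows. The color-blur test follows the stated average-value conditions directly; for the backlight test, the low/high brightness ranges and the reference ratio are illustrative assumptions, since the embodiment leaves the concrete values open.

```python
import numpy as np

def is_color_blurred(face_rgb: np.ndarray) -> bool:
    """Red/orange color-blur test from the deviations of the channel
    averages Rave, Gave, Bave of the face image data (H x W x 3 array)."""
    r_ave, g_ave, b_ave = face_rgb.reshape(-1, 3).mean(axis=0)
    return (abs(r_ave - g_ave) >= abs(b_ave - g_ave)
            and abs(r_ave - b_ave) >= abs(b_ave - g_ave)
            and r_ave > g_ave and r_ave > b_ave)

def is_backlit(face_y: np.ndarray, low: int = 64, high: int = 192,
               peak_ratio: float = 0.6) -> bool:
    """Backlight test: the brightness histogram concentrates in a low range
    and a high range, forming two peaks.  The range limits (64/192) and the
    reference ratio (0.6 of all pixels) are illustrative assumptions."""
    y = face_y.ravel()
    low_cnt = int(np.count_nonzero(y <= low))
    high_cnt = int(np.count_nonzero(y >= high))
    return (low_cnt > 0 and high_cnt > 0
            and low_cnt + high_cnt >= peak_ratio * y.size)
```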
In addition, the display control section 30 sets the face image data indicated by the information on the detection window SW recorded in the internal memory 12 as a target and analyzes the shape of the face image included in the face image data. When the result of the analysis of the shape corresponds to a predetermined shape correcting condition, the display control section 30 assigns an item of the shape correcting process to the information on the detection window SW. For example, the display control section 30 detects the height of the face (for example, the length from the top of the head to the chin) and the width of the face (for example, the width of the face at the height of the cheeks) based on the face image data. When the ratio (L1/L2) of the height (L1) of the face to the width (L2) of the face is smaller than a predetermined threshold value, it can be assumed that the face image is a round face or an angled-cheek face. Accordingly, in such a case, it is determined that the shape corresponds to the shape correcting condition, and thus an item of “small face correction” is assigned to the information on the detection window SW. The “small face correction” is one type of the shape correcting process. For example, the height (L1) and the width (L2) of the face can be acquired based on the result of predetermined template matching on the face image data and the result of detecting the peripheral edges of the face. Alternatively, a technique disclosed in JP-A-2004-318204 may be used for detecting the height (L1) and the width (L2) of the face.
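The aspect-ratio check described above reduces to a single comparison; the threshold value below is an illustrative assumption, since the embodiment speaks only of "a predetermined threshold value".

```python
def needs_small_face_item(face_height_l1: float, face_width_l2: float,
                          threshold: float = 1.3) -> bool:
    """Assign the "small face correction" item when the ratio L1/L2 of the
    face height to the face width falls below a threshold, suggesting a
    round or angled-cheek face.  The value 1.3 is an assumption."""
    return face_height_l1 / face_width_l2 < threshold
```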
In addition, the display control section 30 may be configured to determine the gender of the face based on the face image data. In such a case, when the gender is female, it may be configured that an item of “small face correction” is assigned to the information on the detection window SW. In addition, when the size of the face image data (the size ratio of the detection window SW to the image data D) is smaller than a predetermined reference value, the display control section 30 may be configured to determine that the effect of the small face correction would hardly be noticeable in most cases and not assign the item of “small face correction” to the information on the detection window SW. The item of the shape correcting process included in the menu UI is not limited to the “small face correction”. For example, an item of “eye size correction” for changing the size of the eyes may be assigned to the information on the detection window SW in accordance with the result of detecting an organ (an eye area) within the face image data and the result of detecting the size of the organ.
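Combining the determinations of this and the preceding paragraphs, the item list of the menu UI for one face image (S400) might be assembled as follows; the dictionary field names are hypothetical stand-ins for the flags that the embodiment records with the information on the detection window SW.

```python
def menu_items(face_info: dict) -> list[str]:
    """Assemble menu UI items for one detected face image from hypothetical
    flags derived from the detection window SW information (S400)."""
    items = []
    if face_info.get("front_face"):
        items.append("ID photo printing")
    if face_info.get("red_eye"):
        items.append("red-eye correction")
    if face_info.get("color_blurred"):
        items.append("color-blur correction")
    if face_info.get("backlit"):
        items.append("backlight correction")
    round_or_female = face_info.get("round_face") or face_info.get("female")
    if round_or_female and not face_info.get("too_small"):
        items.append("small face correction")
    if face_info.get("small_eyes"):
        items.append("eye size correction")
    return items
```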
In S500, the display control section 30 displays the image (target image) represented by the image data D acquired in S100 and the menu UI for each face image, formed from the items determined for each face image in S400, together on a screen of the display unit 15. The menu UI corresponds to an example of a menu display in which the selection of processes to be performed for a specific image can be received.
In addition, as the result of branching in S300, when the flow shown in the flowchart of
2-3. Process after Image Output
As described above, when a menu UI is displayed in the display unit 15 for a face image included in the target image, the user can direct the printer 10 to perform a next operation by arbitrarily selecting an item within the menu UI through the operation unit 14. When detecting a press on any item of the menu UI in the display unit 15, or detecting the selection of any item of the menu UI in accordance with an operation of a predetermined button or the like, the printer 10 performs the process corresponding to the selected item (ID photo printing, red-eye correction, color-blur correction, backlight correction, small face correction, eye size correction, or the like) on an area of the image data D that includes at least the face image (face image data) corresponding to the menu UI including the selected item. The form of each process performed by the printer 10 is not particularly limited. For example, when the “red-eye correction” is selected, the printer 10 performs red-eye correction by using a known technique. When the “color-blur correction” is selected, the printer 10, for example, corrects the gray scales of RGB so as to eliminate the deviations of the average values of the RGB histograms of the face image data. When the “backlight correction” is selected, the printer 10, for example, performs correction for increasing the brightness values of the pixels of the face image data. When the “small face correction” is selected, the printer 10, for example, determines areas on the periphery of the left and right cheeks of the face image to be corrected and deforms the determined areas so that they shrink toward the center of the face. When the “eye size correction” is selected, the printer 10, for example, determines the left and right eye areas of the face image to be corrected and deforms the determined eye areas so that they are enlarged.
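Two of the corrections mentioned above lend themselves to short sketches: removing the deviation of the RGB histogram averages (color-blur correction) and raising the brightness of the pixels (backlight correction). The uniform-shift and constant-gain forms below are assumptions; a tone curve would be an equally plausible implementation.

```python
import numpy as np

def correct_color_blur(face_rgb: np.ndarray) -> np.ndarray:
    """Shift each channel so the R, G, and B averages coincide, eliminating
    the deviation of the RGB histogram averages of the face image data."""
    img = face_rgb.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    img += channel_means.mean() - channel_means   # equalize channel averages
    return img.clip(0, 255).astype(np.uint8)

def correct_backlight(face_rgb: np.ndarray, gain: float = 1.3) -> np.ndarray:
    """Increase the brightness values of the pixels; the linear gain of 1.3
    is an illustrative assumption."""
    return (face_rgb.astype(np.float64) * gain).clip(0, 255).astype(np.uint8)
```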
The display control section 30 may be configured to allow the user to check the result of correction by displaying the target image that includes the face image after the color correction or the shape correction, as described above, in the display unit 15 again. When displaying the target image again, the display control section 30 may be configured to newly determine the items of the menu UI for each face image included in the target image and display the menu UI together.
In addition, when the “ID photo printing” is selected from the menu UI, the printer 10 transitions to an ID photo printing mode. In the ID photo printing mode, the print control section 40 determines, from the image data D (the image data D before being converted into a gray image), a rectangular area whose size ratio with respect to the detection window SW is determined in advance and which includes the detection window SW at its center, based on the information on the detection window SW corresponding to the face image (for example, the face image 2) for which the “ID photo printing” is selected. Then, the print control section 40 cuts out (trims) the determined rectangular area from the image data D. Then, the print control section 40 appropriately performs pixel-number conversion (enlargement or reduction) on the image data of the cut-out rectangular area in accordance with the size of the ID photo which is set in advance (or set by the user). The print control section 40 generates print data by performing needed processes such as a color converting process and a half-tone process on the image data after the pixel-number converting process.
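The trimming and pixel-number conversion of the ID photo printing mode might be sketched as follows; the size ratio of 1.5 and the 300×400 output size are illustrative assumptions, since the embodiment states only that the ratio and the ID photo size are determined in advance (or set by the user).

```python
import numpy as np

def trim_for_id_photo(image: np.ndarray, cx: int, cy: int, win_size: int,
                      ratio: float = 1.5, out_w: int = 300,
                      out_h: int = 400) -> np.ndarray:
    """Cut out a rectangle centered on the detection window SW whose size
    ratio to the window is fixed in advance, then resample it to the ID
    photo size (pixel-number conversion).  Numeric values are assumptions."""
    half_w = int(win_size * ratio / 2)
    half_h = int(win_size * ratio * (out_h / out_w) / 2)  # keep photo aspect
    y0, y1 = max(0, cy - half_h), min(image.shape[0], cy + half_h)
    x0, x1 = max(0, cx - half_w), min(image.shape[1], cx + half_w)
    crop = image[y0:y1, x0:x1]
    # Nearest-neighbour enlargement or reduction to the ID photo size.
    ys = np.linspace(0, crop.shape[0] - 1, out_h).astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, out_w).astype(int)
    return crop[ys][:, xs]
```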
The print control section 40 allows the printer engine 16 to perform printing based on the print data by supplying the generated print data to the printer engine 16. Accordingly, the printing process in the ID photo printing mode, that is, printing an ID photo having a front face is completed.
In addition, when transitioning to the ID photo printing mode, the printer 10 may allow the user to designate a trimming range instead of performing all the processes automatically until the printing of the ID photo is completed. When detecting the transition to the ID photo printing mode, the display control section 30 reads out the trimming frame data 14b from the internal memory 12. Then, the display control section 30 marks the trimming frame on the target image based on the trimming frame data 14b.
The user directs movement, enlargement, or reduction of the trimming frame such that the entire front face (in the example of
As described above, according to this embodiment, the printer 10 detects a face image from a target image, determines the items of the menu UI (items representing processes to be performed for the face image) for each face image in accordance with the state of each detected face image, that is, various states such as whether it is a front face, its color state, its shape, its gender, and the like, and displays the menu UI for each face image, with the items determined as described above, when the target image is displayed in the display unit 15. In addition, when a plurality of face images is detected from the target image, a common reference sign is attached in the display unit 15 to each face image and the menu UI corresponding to it. As a result, when a recording medium is inserted into the printer 10, the user can easily and visually recognize the processes that can be performed for each face image on the target image by watching the target image output to the display unit 15. Accordingly, the user can select a process for each face image in a very easy manner.
In the description above, it is assumed that the output target of the target image and the menu display for each face image included in the target image is the screen of the display unit 15. However, the output target of the target image and the menu display may be a printing medium (printing sheet). In other words, the printer 10 may be configured to print (output) the target image and the menu display on the printing medium by controlling the printer engine 16 through the print control section 40, in addition to (or instead of) displaying the menu UI-attached target image for each face image in the display unit 15, as a result of performing the process shown in
The user can arbitrarily select any check box CB on the order sheet OS and write a predetermined mark in the selected check box CB with a pen or the like. Then, for example, the user has the order sheet OS on which the mark is written read by an image reading unit (scanner), not shown in the figure, of the printer 10. When a predetermined mark is written in a check box CB on the order sheet OS read by the image reading unit, the printer 10 performs, for the face image corresponding to the marked check box CB, the process indicated by that check box CB (a process that also reflects the degree of correction indicated by the marked check box CB).
When printing the order sheet OS, the printer 10 does not need to print both the menu UI and the item selection entry field A for each face image and may be configured to print any one of them. For example, a configuration in which a user writes a predetermined mark in a design of the menu UI that is printed on a printing medium and the image reading unit reads out the written mark from the design may be used.
When displaying a target image and the menu UI for each face image in the display unit 15, the printer 10 may be configured to display the above-described check box CB for each item as a part of the menu UI. Under such a configuration, designation for the degree of each correction process can be received from a user on the screen of the display unit 15.
Next, a technique other than the neural network technique will be described for the face image detecting process performed by the face image detecting unit 20 in S200.
Next, the face image detecting unit 20 performs the front face existence determining process shown in
As above, an example has been shown in which the image-output control device and the method of controlling image output according to embodiments of the invention are implemented as the printer 10, and the program for controlling image output according to an embodiment of the invention is executed in cooperation with the printer 10. However, the invention may also be implemented in an image-output process that uses an image device such as a computer, a digital still camera, a scanner, or a photo viewer. Moreover, the invention may be applied to an ATM (automated teller machine) or the like that performs personal authentication. For the determination process of the face image detecting unit 20, various determination techniques in the characteristic-amount space of the above-described characteristic amounts may be used. For example, a support vector machine may be used.
The present application claims priority based on Japanese Patent Application No. 2008-084249 filed on Mar. 27, 2008, the disclosure of which is hereby incorporated by reference in its entirety.