The present invention relates to an image processing apparatus, an information processing method, and a program.
In the related art, apparatuses for scanning a document and storing it as electronic data include a line scanner, which uses a line sensor for image pickup, and a camera scanner, which uses a 2-dimensional imaging sensor. Particularly, in the case of a camera scanner in which a camera is disposed over a stage (or bookrest) and an original put on the stage in a face-up state is photographed, a sheet original can be scanned quickly merely by putting it on the stage, and even a thick original such as a book can be easily put on the stage and scanned. Further, a camera scanner in which not only a document such as a paper or book but also a solid object is put onto the stage and its solid shape is scanned has been disclosed in PTL 1. In the camera scanner disclosed in PTL 1, a light projecting unit is provided together with a camera for image pickup, a measurement pattern projected from the light projecting unit is photographed by the camera, and the solid shape is measured by the principle of triangulation. The camera scanner calculates the solid shape of the object put on the stage, discriminates whether it is a flat original, a book, or a solid object, and performs the photographing in a proper photographing mode in accordance with the discriminated object. A camera scanner disclosed in PTL 2 has a similar construction; a measurement pattern is always projected by a light projecting unit from the timing when no object is put on the stage, and the solid shape is continuously measured, thereby detecting that an object has been put on the stage.
On the other hand, according to a user interface system disclosed in PTL 3, a computer display screen is projected onto a desk by a projector and the computer display screen is operated with a fingertip. An infrared camera is used to detect a fingertip. According to the above user interface system, by reading a bar code printed on a paper document, a book, or the like on the desk, a link with electronic information can be generated.
PTL 1: Japanese Patent No. 4012710
PTL 2: Japanese Patent No. 3954436
PTL 3: Japanese Patent No. 3834766
However, in the camera scanners of PTL 1 and PTL 2, since the user interface unit is limited, it is difficult to improve operability for the user. On the other hand, in the user interface system of PTL 3, although an intuitive operation by the fingertip can be performed, the objects that can be put onto the desk are limited to documents such as paper documents and books. Therefore, in the user interface system, it is difficult to perform an operation on an object other than a document, such as a solid object, put on the desk, and it is also difficult to perform dynamic control in which the operability is changed in accordance with a feature of the target, such as a paper document or book.
It is an object of the invention to improve the operability for the user in an image processing apparatus such as a camera scanner or the like.
An image processing apparatus of the invention comprises: a pickup image obtaining unit configured to obtain a pickup image on a stage through an imaging unit; a distance image obtaining unit configured to obtain a distance image on the stage through a solid measuring unit; a detecting unit configured to detect a putting of an object on the stage on the basis of the pickup image obtained by the pickup image obtaining unit; a projecting unit configured to project an operation instruction regarding a reading of the object onto the stage through a projector when the putting of the object is detected by the detecting unit; a recognizing unit configured to recognize a gesture of a user on the stage on the basis of the distance image obtained by the distance image obtaining unit; and a reading unit configured to obtain a read image of the object in accordance with the gesture, recognized by the recognizing unit, made in response to the operation instruction projected by the projecting unit.
According to the invention, the operability of the user in the image processing apparatus such as a camera scanner or the like can be improved.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Best modes for embodying the invention will be described hereinbelow by using the drawings.
As illustrated in
(Construction of Camera Scanner)
As illustrated in
A stage 204 over which the camera scanner 101 is disposed is also illustrated in
Although it is assumed that the camera unit 202 picks up images at a single resolution, it is desirable that both high-resolution and low-resolution image pickup can be performed. Although not shown in
As an example of the case of converting the coordinate system, relations among the orthogonal coordinate system, a space expressed by using the camera coordinate system in which the camera unit 202 is set to a center, and the image plane which is photographed by the camera unit 202 are illustrated in
[Xc, Yc, Zc]^T = [Rc | tc][X, Y, Z, 1]^T   (1)
Here, Rc and tc are the external parameters determined by the attitude (rotation) and the position (translation) of the camera with respect to the orthogonal coordinate system; Rc is called a 3×3 rotation matrix and tc a translation vector. Conversely, a 3-dimensional dot defined in the camera coordinate system can be converted into the orthogonal coordinate system by the following equation (2).
[X, Y, Z]^T = [Rc^-1 | -Rc^-1 tc][Xc, Yc, Zc, 1]^T   (2)
Further, the 2-dimensional camera image plane which is photographed by the camera unit 202 is a plane onto which 3-dimensional information in the 3-dimensional space has been converted into 2-dimensional information by the camera unit 202. That is, the 3-dimensional dot Pc[Xc, Yc, Zc] in the camera coordinate system can be converted into the 2-dimensional coordinates pc[xp, yp] on the camera image plane by the perspective projection of the following equation (3).
λ[xp, yp, 1]^T = A[Xc, Yc, Zc]^T   (3)
Here, A is called the internal parameter matrix of the camera and is a 3×3 matrix expressed by the focal length, the image center, and the like.
As mentioned above, by using the equations (1) and (3), a 3-dimensional dot group expressed by the orthogonal coordinate system can be converted into the 3-dimensional dot group coordinates in the camera coordinate system or the camera image plane. It is assumed that the internal parameter of each hardware device and the position attitude (external parameter) to the orthogonal coordinate system have previously been calibrated by a well-known calibration method. Hereinbelow, the 3-dimensional dot group expresses 3-dimensional data (solid data) in the orthogonal coordinate system unless otherwise specified.
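As an illustration, the conversions of equations (1) to (3) can be sketched as follows (a minimal numpy sketch under the assumption that the calibrated parameters Rc, tc, and A are given; the function names are hypothetical):

```python
import numpy as np

def world_to_camera(P, Rc, tc):
    """Equation (1): convert a dot in the orthogonal coordinate system
    to the camera coordinate system."""
    return Rc @ P + tc

def camera_to_world(Pc, Rc, tc):
    """Equation (2): the inverse conversion."""
    return Rc.T @ (Pc - tc)   # Rc is a rotation, so Rc^-1 = Rc^T

def camera_to_image(Pc, A):
    """Equation (3): perspective projection onto the camera image plane."""
    p = A @ Pc
    return p[:2] / p[2]       # divide out lambda (the third component)
```

For example, with identity internal parameters, a camera-coordinate dot [1, 2, 4] projects to the image coordinates [0.25, 0.5].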
(Hardware Construction of Controller of Camera Scanner)
As illustrated in
Upon activation such as power-on, the CPU 302 executes the activating program stored in the ROM 304. The activating program is used by the CPU 302 to read out the controlling program stored in the HDD 305 and to develop it into the RAM 303. When the activating program is executed, the CPU 302 subsequently executes the controlling program developed in the RAM 303 and performs control. The CPU 302 also stores the data used in the operation according to the controlling program into the RAM 303 and reads and writes it there. Various kinds of settings necessary for the operation according to the controlling program and the image data generated by the camera input can further be stored in the HDD 305 and are read out and written by the CPU 302. The CPU 302 communicates with other equipment on the network 104 through the network I/F 306.
The image processor 307 reads out the image data stored in the RAM 303, processes it, and writes back into the RAM 303. Image processes which are executed by the image processor 307 are a rotation, a zoom, a color conversion, and the like.
The camera I/F 308 is connected to the camera unit 202 and the distance image sensor unit 208, obtains the image data from the camera unit 202 and the distance image data from the distance image sensor unit 208, and writes them into the RAM 303 in response to instructions from the CPU 302. The camera I/F 308 also transmits control commands from the CPU 302 to the camera unit 202 and the distance image sensor unit 208 and performs settings of the camera unit 202 and the distance image sensor unit 208.
The controller unit 201 can further include at least one of the display controller 309, serial I/F 310, audio controller 311, and USB controller 312. The display controller 309 controls a display of the image data to a display in response to an instruction from the CPU 302. In this instance, the display controller 309 is connected to the projector 207 and the LCD touch panel 330.
The serial I/F 310 inputs and outputs a serial signal. In this instance, the serial I/F 310 is connected to the turntable 209 and transmits instructions of a start/end of the rotation and an angle of the rotation from the CPU 302 to the turntable 209. The serial I/F 310 is also connected to the LCD touch panel 330. When the LCD touch panel 330 is depressed, the CPU 302 obtains coordinates of the depressed position through the serial I/F 310.
The audio controller 311 is connected to the speaker 340, converts audio data into an analog audio signal in response to an instruction from the CPU 302 and outputs an audio sound through the speaker 340.
The USB controller 312 controls an externally attached USB device in response to an instruction from the CPU 302. In this instance, the USB controller 312 is connected to an external memory 350 such as a USB memory, SD card, or the like and reads/writes data from/into the external memory 350.
(Functional Construction of Camera Scanner)
As mentioned above, the controlling program of the camera scanner 101 is stored in the HDD 305 and is developed into the RAM 303 and executed by the CPU 302 upon activation.
A main control unit 402 is a center of the control and controls each of other modules in the functional construction 401 as illustrated in
An image obtaining unit 416 is a module for executing an image inputting process and is constructed by a camera image obtaining unit 407 and a distance image obtaining unit 408. The camera image obtaining unit 407 obtains the image data output from the camera unit 202 through the camera I/F 308 and stores into the RAM 303 (pickup image obtaining process). The distance image obtaining unit 408 obtains the distance image data output from the distance image sensor unit 208 through the camera I/F 308 and stores into the RAM 303 (distance image obtaining process). Details of the process of the distance image obtaining unit 408 will be described hereinafter by using
A recognition processing unit 417 is a module for detecting and recognizing the motion of an object on the stage 204 from the image data obtained by the camera image obtaining unit 407 and the distance image obtaining unit 408, and is constructed by a gesture recognizing unit 409 and an object detecting unit 410. The gesture recognizing unit 409 continuously obtains the image on the stage 204 from the image obtaining unit 416. When a gesture such as a touch or the like is detected, the gesture recognizing unit 409 notifies the main control unit 402 of it. When a notification of an object putting waiting process or an object removal waiting process is received from the main control unit 402, the object detecting unit 410 obtains the image derived by photographing the stage 204 from the image obtaining unit 416. The object detecting unit 410 executes a process for detecting the timing when the object is put onto the stage 204 and rests, or the timing when the object is removed. Details of the processes of the gesture recognizing unit 409 and the object detecting unit 410 will be described hereinafter by using
A scan processing unit 418 is a module for actually scanning a target and is constructed by a flat original image photographing unit 411, a book image photographing unit 412, and a solid shape measuring unit 413. Each of the flat original image photographing unit 411, the book image photographing unit 412, and the solid shape measuring unit 413 executes a proper process (reading process) to a flat original, a book, or a solid object and outputs data (read image) of a format according to each object. Details of the processes of those modules will be described hereinafter by using
A user interface unit 403 is constructed by a GUI parts generating/display unit 414 and a projection area detecting unit 415. The GUI parts generating/display unit 414 receives a request from the main control unit 402 and generates GUI parts such as message, button, and the like. The GUI parts mentioned here are an example of objects constructing the operation display. The GUI parts generating/display unit 414 requests a display of the generated GUI parts to a display unit 406. A display position of the GUI parts on the stage 204 is detected by the projection area detecting unit 415.
The display unit 406 displays the requested GUI parts on the projector 207 or the LCD touch panel 330 through the display controller 309. Since the projector 207 is disposed so as to face the stage 204, it can project the GUI parts onto the stage 204. The user interface unit 403 receives a gesture operation such as a touch recognized by the gesture recognizing unit 409, or an inputting operation from the LCD touch panel 330 through the serial I/F 310, together with its coordinates. The user interface unit 403 discriminates the operation contents (the depressed button or the like) by making the contents of the operation display screen being drawn correspond to the operation coordinates. The user interface unit 403 notifies the main control unit 402 of the operation contents, thereby receiving the operation of the operator.
A network communicating unit 404 communicates with other equipment on the network 104 by TCP/IP through the network I/F 306.
A data managing unit 405 stores various kinds of data, such as the work data generated when the CPU 302 executes the controlling program, into a predetermined area on the HDD 305 and manages them. The data includes, for example, the scan data generated by the flat original image photographing unit 411, the book image photographing unit 412, and the solid shape measuring unit 413.
(Description of Distance Image Sensor and Distance Image Obtaining Unit)
A construction of the distance image sensor unit 208 is illustrated in
When the process is started, in S501, the distance image obtaining unit 408 projects a 3-dimensional measurement pattern (solid shape measurement pattern) 522 by infrared rays to a target 521 by using the infrared pattern projecting unit 361 as illustrated in
In S503, the distance image obtaining unit 408 matches the coordinate system of the infrared camera image 524 with that of the RGB camera image 523 by using a coordinate system conversion from the coordinate system of the infrared camera 362 to the coordinate system of the RGB camera 363. It is now assumed that relative positions of the infrared camera 362 and the RGB camera 363 and their internal parameters have already been known by a previous calibrating process.
In S504, the distance image obtaining unit 408 extracts correspondence points between the 3-dimensional measurement pattern 522 and the infrared camera image 524 subjected to the coordinate conversion in S503 as illustrated in
In S506, the distance image obtaining unit 408 stores the RGB values of the RGB camera image 523 into each pixel of the distance image, thereby generating a distance image having four values of R, G, B, and distance per pixel. As for the distance image obtained here, the distance image sensor coordinate system defined by the RGB camera 363 of the distance image sensor unit 208 is used as a reference.
Therefore, in S507, the distance image obtaining unit 408 converts the distance information obtained in the distance image sensor coordinate system into the 3-dimensional dot group in the orthogonal coordinate system as mentioned by using
Although the infrared pattern projection system is used as the distance image sensor unit 208 in the embodiment as mentioned above, a distance image sensor of another system can also be used. For example, a stereo system in which stereoscopic viewing is performed by two RGB cameras, or a TOF (Time of Flight) system in which the distance is measured by detecting the flight time of a laser beam, may be used.
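The back-projection of S506 and S507, i.e. turning per-pixel distance values into a 3-dimensional dot group in the orthogonal coordinate system, can be sketched as follows (a numpy sketch assuming a per-pixel depth along the optical axis and pre-calibrated internal (A) and external (Rs, ts) parameters of the distance image sensor; the names are hypothetical):

```python
import numpy as np

def distance_image_to_dot_group(distance, A, Rs, ts):
    """Back-project a distance image (per-pixel depth along the optical
    axis) into a 3-dimensional dot group in the orthogonal coordinate
    system.  A: internal parameter matrix of the sensor;
    Rs, ts: external parameters of the sensor."""
    h, w = distance.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.stack([xs, ys, np.ones_like(xs)],
                      axis=-1).reshape(-1, 3).astype(float)
    depths = distance.reshape(-1, 1)
    # invert equation (3): Pc = depth * A^-1 [x, y, 1]^T
    Pc = (np.linalg.inv(A) @ pixels.T).T * depths
    # invert equation (1): P = Rs^-1 (Pc - ts)
    return (Rs.T @ (Pc - ts).T).T
```

With identity parameters, a distance image whose pixels are all 1 maps every dot to the plane z = 1, which makes the round trip easy to check.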
(Description of Gesture Recognizing Unit)
Details of the process of the gesture recognizing unit 409 will be described by using a flowchart of
In S602, the gesture recognizing unit 409 obtains a 3-dimensional dot group of the object existing on the stage 204 as shown in S621 to S622.
In S621, the gesture recognizing unit 409 obtains one frame of the distance image and one frame of the 3-dimensional dot group from the distance image obtaining unit 408.
In S622, the gesture recognizing unit 409 eliminates the dot group existing on the plane including the stage 204 from the obtained 3-dimensional dot group by using the plane parameters of the stage 204.
In S603, the gesture recognizing unit 409 executes a process for detecting a shape and a fingertip of the hand of the user from the obtained 3-dimensional dot group as shown in S631 to S634. The process of S603 will be described by using diagrams schematically illustrating a method of a fingertip detecting process shown in
In S631, the gesture recognizing unit 409 extracts a skin-colored 3-dimensional dot group existing at a height which is equal to or larger than a predetermined height from the plane including the stage 204 from the 3-dimensional dot group obtained in S602, thereby obtaining a 3-dimensional dot group of the hand. Reference numeral 701 in
In S632, the gesture recognizing unit 409 generates a 2-dimensional image obtained by projecting the extracted 3-dimensional dot group of the hand to the plane of the stage 204, thereby detecting an outer shape of the hand. Reference numeral 702 in
In S633, with respect to each dot on the detected outer shape of the hand, the gesture recognizing unit 409 calculates the curvature of the outer shape at that dot and detects, as a fingertip, a dot at which the calculated curvature is smaller than a predetermined value.
In S634, the gesture recognizing unit 409 calculates the number of detected fingertips and the coordinates of each fingertip. At this time, as mentioned above, since the correspondence relation between each dot of the 2-dimensional image projected to the stage 204 and each dot of the 3-dimensional dot group of the hand has been stored, the gesture recognizing unit 409 can obtain the 3-dimensional coordinates of each fingertip. Although the method of detecting the fingertip from the image obtained by projecting the 3-dimensional dot group to the 2-dimensional image has been described here, the image serving as a target of the fingertip detection is not limited to such an image. For example, a hand area may be extracted from a background difference of the distance image or from a skin-colored area of the RGB image, and a fingertip in the hand area may be detected by a method similar to that mentioned above (calculation of the curvature of the outer shape, or the like). In this case, since the coordinates of the detected fingertip are coordinates on a 2-dimensional image such as the RGB image or the distance image, the gesture recognizing unit 409 needs to convert them into the 3-dimensional coordinates of the orthogonal coordinate system by using the distance information of the distance image at those coordinates. At this time, the center of the curvature circle used when detecting the fingertip may be used as the fingertip point instead of a dot on the outer shape.
In S604, the gesture recognizing unit 409 executes a gesture discriminating process from the detected shape and fingertip of the hand as shown in S641 to S646. In S641, the gesture recognizing unit 409 discriminates whether or not the number of fingertips detected in S603 is equal to 1. If it is not equal to 1, the gesture recognizing unit 409 advances to S646 and decides the absence of the gesture. If the number of detected fingertips is equal to 1 in S641, the gesture recognizing unit 409 advances to S642 and calculates a distance between the detected fingertip and the plane including the stage 204.
In S643, the gesture recognizing unit 409 discriminates whether or not the distance calculated in S642 is equal to or less than a predetermined value. If it is equal to or less than the predetermined value, the gesture recognizing unit 409 advances to S644 and decides that a touch gesture, in which the stage 204 was touched with the fingertip, exists. If the distance calculated in S642 is larger than the predetermined value in S643, the gesture recognizing unit 409 advances to S645 and decides that a fingertip-moving gesture exists (a gesture in which the fingertip is over the stage 204 although it does not touch it).
In S605, the gesture recognizing unit 409 notifies the main control unit 402 of the decided gesture, returns to S602, and repeats the gesture recognizing process.
By the above processes, the gesture recognizing unit 409 can recognize the gesture of the user on the basis of the distance image.
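The fingertip detection of S631 to S634 and the gesture decision of S641 to S646 can be sketched roughly as follows (numpy only; the contour is assumed to be already extracted, the curvature test is approximated by the sharpness of the angle at each contour dot, and all thresholds and names are hypothetical):

```python
import numpy as np

def detect_fingertips(contour, k=5, angle_thresh=np.pi / 3):
    """Mark contour dots where the outline turns sharply (i.e. a small
    curvature circle) as fingertip candidates.  contour: (N, 2) array of
    the hand outline projected onto the stage plane."""
    n = len(contour)
    tips = []
    for i in range(n):
        a = contour[(i - k) % n] - contour[i]
        b = contour[(i + k) % n] - contour[i]
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angle = np.arccos(np.clip(cos, -1.0, 1.0))
        if angle < angle_thresh:      # sharp corner -> fingertip candidate
            tips.append(i)
    return tips

def decide_gesture(fingertips_3d, touch_thresh=10.0):
    """S641-S646: exactly one fingertip near the stage plane is a touch."""
    if len(fingertips_3d) != 1:
        return "none"
    height = fingertips_3d[0][2]      # distance to the stage plane z = 0
    return "touch" if height <= touch_thresh else "move"
```

A rough test: a circular outline with one long spike yields exactly one fingertip candidate at the spike, and its height above the plane decides between a touch and a move gesture.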
(Process of Object Detecting Unit)
The process of the object detecting unit 410 will be described by using flowcharts of
In S802, the object detecting unit 410 detects that an object has been put on the stage 204 (object putting detecting process). Details of the process will be described hereinafter by using
In S803, the object detecting unit 410 detects that the object on the stage 204 in which the putting was detected in S802 has been removed (object removal detecting process). Details of the process will be described hereinafter by using
When the object putting detecting process is started, the object detecting unit 410 obtains one frame of the camera image from the camera image obtaining unit 407 in S821.
In S822, the object detecting unit 410 calculates a difference between the obtained camera image and the previous frame camera image and calculates a difference value in which their absolute values are added.
In S823, the object detecting unit 410 discriminates whether or not the calculated difference value is equal to or larger than a predetermined value (threshold value). If the calculated difference value is smaller than the predetermined value, the object detecting unit 410 decides that no object exists on the stage 204, advances to S828, stores the camera image of the present frame as the previous frame camera image, returns to S821, and continues the process. If the difference value is equal to or larger than the predetermined value in S823, the object detecting unit 410 advances to S824 and calculates a difference value between the camera image obtained in S821 and the previous frame camera image in a manner similar to S822.
In S825, the object detecting unit 410 discriminates whether or not the calculated difference value is equal to or smaller than a predetermined value. If the calculated difference value is larger than the predetermined value in S825, the object detecting unit 410 decides that the object on the stage 204 was moved, advances to S828, stores the camera image of the present frame as the previous frame camera image, returns to S821, and continues the process. If the calculated difference value is equal to or smaller than the predetermined value in S825, the object detecting unit 410 advances to S826. In S826, from the number of times that the discrimination result of S825 has continuously been YES, the object detecting unit 410 discriminates whether or not the state in which the object on the stage 204 rests has continued for a predetermined number of frames. If it is determined in S826 that the resting state has not continued for the predetermined number of frames, the object detecting unit 410 advances to S828, stores the camera image of the present frame as the previous frame camera image, returns to S821, and continues the process. If it is determined in S826 that the resting state has continued for the predetermined number of frames, the object detecting unit 410 advances to S827, notifies the main control unit 402 that the object has been put, and ends the object putting detecting process.
When the object removal detecting process is started, the object detecting unit 410 obtains one frame of the camera image from the camera image obtaining unit 407 in S831.
In S832, the object detecting unit 410 calculates a difference value between the obtained camera image and the stage background camera image.
In S833, the object detecting unit 410 discriminates whether or not the calculated difference value is equal to or smaller than a predetermined value. If the calculated difference value is larger than the predetermined value in S833, since the object still remains on the stage 204, the object detecting unit 410 returns to S831 and continues the process. If the calculated difference value is equal to or smaller than the predetermined value in S833, since no object exists on the stage 204, the object detecting unit 410 notifies the main control unit 402 that the object was removed, and ends the object removal detecting process. By the above processes, the object detecting unit 410 can detect the putting and removal of an object on the stage 204 on the basis of the camera image. Note that, when the object is a flat object such as a sheet of paper, the object detecting unit 410 cannot detect its putting and removal from the distance image alone; as mentioned above, however, they can be detected by using the camera image.
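The putting and removal detection above can be sketched as a simple frame-differencing loop (a numpy sketch; the thresholds, the number of rest frames, and the function names are hypothetical, and the logic is a simplification of S821 to S833):

```python
import numpy as np

REST_FRAMES = 3       # hypothetical number of frames the object must rest
DIFF_THRESH = 1000.0  # hypothetical sum-of-absolute-differences threshold

def frame_difference(img_a, img_b):
    """S822/S824/S832: sum of absolute pixel differences of two frames."""
    return np.abs(img_a.astype(float) - img_b.astype(float)).sum()

def detect_putting(frames, background):
    """S821-S827: return the index of the frame at which an object has
    been put on the stage and has rested for REST_FRAMES frames."""
    prev = background
    rest = 0
    for i, frame in enumerate(frames):
        if frame_difference(frame, prev) >= DIFF_THRESH:
            rest = 0                      # something changed or moved
        elif frame_difference(frame, background) >= DIFF_THRESH:
            rest += 1                     # an object is present and at rest
            if rest >= REST_FRAMES:
                return i                  # putting detected
        prev = frame
    return None

def detect_removal(frames, background):
    """S831-S833: return the frame index at which the stage matches the
    background again, i.e. the object has been removed."""
    for i, frame in enumerate(frames):
        if frame_difference(frame, background) <= DIFF_THRESH:
            return i
    return None
```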
(Description of Flat Original Image Photographing Unit)
The process which is executed by the flat original image photographing unit 411 will be described by using a flowchart of
When the process is started, in S901, the flat original image photographing unit 411 obtains one frame of the image from the camera unit 202 through the camera image obtaining unit 407. Since the coordinate system of the camera unit 202 does not accurately face the stage 204 as illustrated in
In S902, the flat original image photographing unit 411 calculates a difference between the stage background camera image and the camera image obtained in S901, generates a difference image, and subsequently, binarizes in such a manner that the pixel having a difference is displayed in black and the pixel having no difference is displayed in white. Therefore, the difference image generated by the flat original image photographing unit 411 becomes an image (having a difference) in which an area of the target 1001 is displayed in black as shown in an area 1002 in
In S905, the flat original image photographing unit 411 projective-transforms the extracted original area image from the camera coordinate system to the stage 204, thereby converting into an image 1003 viewed from directly above the stage 204 as illustrated in
Therefore, in S906, the flat original image photographing unit 411 rectangular-approximates the image 1003, subsequently rotates so that its rectangle becomes horizontal, and obtains a non-inclined image like an image 1004 illustrated in
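The background-difference extraction of S902 and S903 can be sketched as follows (numpy only; the threshold is hypothetical, and a full implementation would continue with the projective transform of S905 and the rotation of S906, for example with OpenCV's cv2.warpPerspective and cv2.minAreaRect):

```python
import numpy as np

def extract_original_area(camera_image, background_image, thresh=30):
    """S902-S903: binarize the background difference (True where the
    original lies) and return the bounding box of the original area
    as (x_min, y_min, x_max, y_max), or None if the stage is empty."""
    diff = np.abs(camera_image.astype(int) - background_image.astype(int))
    mask = diff > thresh
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                  # no difference: no original on stage
    return xs.min(), ys.min(), xs.max(), ys.max()
```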
(Process of Book Image Photographing Unit)
The process which is executed by the book image photographing unit 412 will be described by using flowcharts of
In
In S1102, the book image photographing unit 412 executes a process for calculating a 3-dimensional dot group of a book object put on the stage 204 from the obtained camera image and the distance image as shown in S1111 to S1116.
In S1111, the book image photographing unit 412 calculates a difference between the camera image and the stage background camera image every pixel, binarizes, and generates a camera difference image 1203 in which a book area 1213 is displayed in black as illustrated in
In S1112, the book image photographing unit 412 converts the camera difference image 1203 from the camera coordinate system to the distance image sensor coordinate system and generates a camera difference image 1204 containing an object area 1214 viewed from the distance image sensor unit 208 as illustrated in
In S1113, the book image photographing unit 412 calculates a difference between the distance image and the stage background distance image every pixel, binarizes, and generates a distance difference image 1205 in which an object area 1215 is displayed in black as illustrated in
Therefore, in S1114, the book image photographing unit 412 obtains a sum of the camera difference image 1203 and the distance difference image 1205, generates an object area image 1206 illustrated in
In S1116, the book image photographing unit 412 converts the distance image extracted in S1115 into the orthogonal coordinate system, thereby generating a 3-dimensional dot group 1217 illustrated in
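The object-area extraction of S1111 to S1116 can be sketched as follows (a simplified numpy sketch: both images are assumed to be already in the same coordinate system, so the S1112 conversion is omitted, and the back-projection uses a hypothetical identity internal parameter matrix):

```python
import numpy as np

def book_dot_group(camera_image, stage_bg_camera, distance_image,
                   stage_bg_distance, cam_thresh=30, dist_thresh=5):
    """S1111-S1116: binarize both difference images, take their union as
    the book area, and back-project the masked distance pixels to
    3-dimensional dots."""
    cam_diff = np.abs(camera_image.astype(float)
                      - stage_bg_camera) > cam_thresh
    dist_diff = np.abs(distance_image.astype(float)
                       - stage_bg_distance) > dist_thresh
    area = cam_diff | dist_diff      # S1114: union of both difference areas
    ys, xs = np.nonzero(area)
    depths = distance_image[ys, xs].astype(float)
    # back-project pixel (x, y, depth) assuming identity internal parameters
    return np.stack([xs * depths, ys * depths, depths], axis=-1)
```

Taking the union matters because, as noted above, a cover in the stage color only shows up in the distance difference, while a flat region at stage height only shows up in the camera difference.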
In S1103, the book image photographing unit 412 executes a book image distortion correcting process from the obtained camera image and the calculated 3-dimensional dot group, thereby generating a 2-dimensional book image. The process of S1103 will be described in detail in
The book image distortion correcting process of S1103 will be described by using a flowchart of
In S1122, the book image photographing unit 412 extracts an object area from the camera image 1201 by using the image obtained by converting the object area 1216 in the object area image 1206 into the camera coordinate system.
In S1123, the book image photographing unit 412 projective-transforms the extracted object area image into the stage plane.
In S1124, the book image photographing unit 412 rectangular-approximates the projective-transformed object area image and rotates so that its rectangle becomes horizontal, thereby generating a book image 1208 in
In S1125, the book image photographing unit 412 sets the leftmost dot of the book image 1208 to P (dot P in
In S1126, the book image photographing unit 412 obtains a height (h1 in
In S1127, the book image photographing unit 412 sets a dot away from the dot P of the book image 1208 by a predetermined distance (x1 in
In S1128, the book image photographing unit 412 obtains a height (h2 in
In S1129, the book image photographing unit 412 calculates a distance (l1 in
l1 = √(x1^2 + (h1 − h2)^2)   (4)
In S1130, the book image photographing unit 412 corrects a distance between P and Q by the calculated distance l1 and copies the pixel to positions of dots P′ and Q′ on an image 1219 in
In S1131, the book image photographing unit 412 sets the processed dot Q as the new dot P, returns to S1128, and executes the same process, so that the correction between the dot Q and a dot R in
In S1132, the book image photographing unit 412 discriminates whether or not the distortion correcting process has been ended with respect to all dots. If it has been finished, the distortion correcting process of the book object is ended. As mentioned above, by executing the processes of S1102 and S1103, the book image photographing unit 412 can generate the book image subjected to the distortion correction.
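The accumulation of the surface distance l1 in S1125 to S1132 can be sketched as follows (a numpy sketch; the surface heights are assumed to be already sampled from the 3-dimensional dot group at steps of x1, and the function name is hypothetical):

```python
import numpy as np

def corrected_positions(heights, x1=1.0):
    """Walk across the book image in steps of x1, take the book surface
    height at each step, and accumulate the surface distance l1 of
    equation (4); each column of pixels is then copied to its stretched
    position on the corrected image."""
    xs = [0.0]
    for h1, h2 in zip(heights, heights[1:]):
        l1 = np.sqrt(x1 ** 2 + (h1 - h2) ** 2)   # equation (4)
        xs.append(xs[-1] + l1)
    return xs
```

For a flat surface (constant heights) the positions stay at multiples of x1, while a sloped surface stretches them, which is exactly the distortion the correction compensates.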
After the distortion-corrected book image is generated, in S1104, the book image photographing unit 412 performs a gradation correction on the generated book image.
In S1105, the book image photographing unit 412 performs a compression and a file format conversion on the generated book image in accordance with a predetermined image format (for example, JPEG, TIFF, PDF, or the like).
In S1106, the book image photographing unit 412 stores the generated image data as a file into a predetermined area in the HDD 305 through the data managing unit 405 and ends the process.
(Description of Solid Shape Measuring Unit)
The process which is executed by the solid shape measuring unit 413 will be described by using flowcharts of
When the process is started, in S1301, the solid shape measuring unit 413 instructs the turntable 209, through the serial I/F 310, to rotate, thereby rotating the turntable 209 by a predetermined angle at a time. The smaller the rotation angle here, the higher the final measuring accuracy; however, the number of measurements increases and the process takes correspondingly longer. It is therefore sufficient to predetermine a proper rotation angle for the apparatus.
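The trade-off just described can be made concrete: halving the step angle doubles both the number of turntable stops and the total measurement time. The per-measurement time below is a hypothetical figure, not a value given by the specification.

```python
def measurement_cost(step_angle_deg, seconds_per_measurement=5.0):
    """Number of turntable stops for one full revolution, and the total
    measurement time, for a given step angle: the accuracy/time trade-off
    of S1301 in numeric form."""
    stops = int(360 // step_angle_deg)
    return stops, stops * seconds_per_measurement

# e.g. measurement_cost(30) -> (12, 60.0): a 30-degree step means 12
# measurements; a 15-degree step doubles that to 24.
```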
In S1302, the solid shape measuring unit 413 executes a 3-dimensional dot group measuring process to the object on the turntable 209 provided in the stage 204 by using the camera unit 202 and the projector 207. A flowchart of
When the 3-dimensional dot group measuring process is started, in S1311, the solid shape measuring unit 413 projects a 3-dimensional shape measurement pattern 1402 from the projector 207 to a target 1401 on the turntable 209 illustrated in
In S1312, the solid shape measuring unit 413 obtains one frame of the camera image from the camera unit 202 through the camera image obtaining unit 407.
In S1313, the solid shape measuring unit 413 extracts a correspondence point between the 3-dimensional shape measurement pattern 1402 and the obtained camera image in a manner similar to S504 in
In S1314, the solid shape measuring unit 413 calculates a distance in each pixel on the camera image from a positional relation between the camera unit 202 and the projector 207 and generates a distance image. A measuring method here is the same as the measuring method described in S505 in
In S1316, the solid shape measuring unit 413 eliminates the 3-dimensional dot group included in the stage plane from the calculated 3-dimensional dot groups by using the plane parameters of the stage 204.
In S1317, from the remaining 3-dimensional dot groups, the solid shape measuring unit 413 eliminates, as noise, the dots whose positions deviate largely, and generates a 3-dimensional dot group 1403 of the target 1401. A largely deviated dot is, for example, a dot which deviates from a predetermined position.
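The two elimination steps of S1316 and S1317 can be sketched as follows. The sketch assumes stage coordinates in which the stage plane is z = 0; the function names, tolerances, and the use of a per-axis median as the reference point are illustrative choices, not the apparatus's actual method (the median, unlike the mean, is not pulled toward the very outliers being removed).

```python
def median(values):
    """Middle value of a list; robust reference for outlier rejection."""
    s = sorted(values)
    return s[len(s) // 2]

def filter_object_points(points, plane_tol=1.0, outlier_dist=200.0):
    """Drop dots lying in the stage plane (S1316: |z| within plane_tol of
    the assumed z = 0 plane), then drop, as noise, dots deviating largely
    from the rest (S1317: farther than outlier_dist from the per-axis
    median of the remaining dots)."""
    above = [p for p in points if abs(p[2]) > plane_tol]
    if not above:
        return []
    center = tuple(median([p[i] for p in above]) for i in range(3))
    def d2(p):
        return sum((a - c) ** 2 for a, c in zip(p, center))
    return [p for p in above if d2(p) <= outlier_dist ** 2]
```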
In S1318, the solid shape measuring unit 413 turns off the 3-dimensional shape measurement pattern 1402 projected from the projector 207.
In S1319, the solid shape measuring unit 413 obtains the camera image from the camera unit 202 through the camera image obtaining unit 407, stores it as a texture image as viewed from that angle, and ends the 3-dimensional dot group measuring process.
When the solid shape measuring unit 413 executes the 3-dimensional dot group measuring process of S1302 at the second and subsequent times, the turntable 209 has already been rotated in S1301 before the measurement is performed. Therefore, as illustrated in
In S1303, the solid shape measuring unit 413 rotates the 3-dimensional dot group 1404 measured in S1302 in the opposite direction by the rotation angle of the turntable from the initial position, thereby calculating a 3-dimensional dot group 1405 whose position is matched with that of the 3-dimensional dot group 1403.
In S1304, the solid shape measuring unit 413 executes a process for combining the 3-dimensional dot group calculated in S1303 with the 3-dimensional dot group which has already been combined. For the combining process of the 3-dimensional dot groups, an ICP (Iterative Closest Point) algorithm using feature points is used. In the ICP algorithm, the solid shape measuring unit 413 extracts 3-dimensional feature points, each serving as a corner, from the two 3-dimensional dot groups 1403 and 1404 to be combined. The solid shape measuring unit 413 makes the feature points of the 3-dimensional dot group 1403 correspond to those of the 3-dimensional dot group 1404, calculates the distances between all correspondence points, and sums them. While moving the position of the 3-dimensional dot group 1404, the solid shape measuring unit 413 repetitively searches for the position where the sum of the distances between the correspondence points is minimum. When the number of repetitions reaches an upper limit, or the position where the sum of the distances is minimum has been found, the solid shape measuring unit 413 moves the 3-dimensional dot group 1404 and thereafter overlays it on the 3-dimensional dot group 1403, thereby combining the two 3-dimensional dot groups 1403 and 1404. In this manner, the solid shape measuring unit 413 generates a combined 3-dimensional dot group 1406 and ends the 3-dimensional dot group combining process.
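The iterative structure of the ICP combination in S1304 can be sketched as follows. For brevity the sketch is restricted to translation only (the full algorithm also estimates rotation), pairs every dot rather than extracted corner feature points, and all names are hypothetical; it is a simplified illustration, not the apparatus's implementation.

```python
def icp_translation(src, dst, max_iters=20):
    """Translation-only ICP sketch: pair each source dot with its nearest
    destination dot, move the source group by the mean residual so the
    summed correspondence distance shrinks, and repeat until the shift
    vanishes or the iteration upper limit is reached (the 'upper limit'
    of repetitions mentioned in S1304)."""
    src = [list(p) for p in src]
    for _ in range(max_iters):
        # nearest-neighbour correspondences between the two dot groups
        pairs = [(p, min(dst, key=lambda q, p=p: sum((a - b) ** 2
                 for a, b in zip(p, q)))) for p in src]
        # mean residual = translation minimizing the summed distance
        shift = [sum(q[i] - p[i] for p, q in pairs) / len(pairs)
                 for i in range(3)]
        if all(abs(s) < 1e-9 for s in shift):
            break  # the summed distance can no longer be reduced
        for p in src:
            for i in range(3):
                p[i] += shift[i]
    return [tuple(p) for p in src]
```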
When the 3-dimensional dot group combining process of S1304 is ended, in S1305, the solid shape measuring unit 413 discriminates whether or not the turntable 209 has been rotated by one revolution. If the turntable 209 has not been rotated by one revolution yet, the solid shape measuring unit 413 returns to S1301, further rotates the turntable 209, subsequently executes the process of S1302, and measures a 3-dimensional dot group at another angle. The solid shape measuring unit 413 then executes, in S1303 to S1304, a process for combining the 3-dimensional dot group 1406 which has already been combined with the newly measured 3-dimensional dot group. By repeating the processes of S1301 to S1305 in this manner until the turntable 209 has been rotated by one revolution, the solid shape measuring unit 413 can generate 3-dimensional dot groups of the whole circumference of the target 1401.
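The whole-circumference loop of S1301 to S1305 can be sketched as follows. The sketch assumes the turntable axis is the z axis through the origin; `measure` stands in for the per-angle measurement of S1302, and a plain union stands in for the ICP-based combination of S1304, so every name here is illustrative.

```python
import math

def rotate_z(points, deg):
    """Rotate a dot group about the turntable axis (assumed to be the z
    axis through the origin) by deg degrees."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

def scan_full_revolution(measure, step_deg):
    """S1301-S1305 in outline: after each turntable step, measure a dot
    group (S1302) and rotate it back to the initial position (S1303)
    before merging it into the accumulated group (union here instead of
    the ICP combination of S1304)."""
    merged = []
    angle = 0
    while angle < 360:
        cloud = measure(angle)             # S1302 at the current angle
        merged += rotate_z(cloud, -angle)  # S1303: undo the rotation
        angle += step_deg                  # S1301: next turntable step
    return merged
```

With a 90-degree step, a dot that merely turns with the table is measured four times and all four copies land back on the same position, which is exactly why the per-angle groups can be merged.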
If it is determined in S1305 that the turntable 209 has been rotated by one revolution, the solid shape measuring unit 413 advances to S1306 and executes a process for generating a 3-dimensional model from the generated 3-dimensional dot groups. When the 3-dimensional model generating process is started, in S1331, the solid shape measuring unit 413 performs noise elimination and smoothing on the 3-dimensional dot group.
In S1332, the solid shape measuring unit 413 generates triangular patches from the 3-dimensional dot group, thereby meshing it.
In S1333, the solid shape measuring unit 413 maps the texture stored in S1319 onto the plane obtained by meshing. The solid shape measuring unit 413 can generate a 3-dimensional model which was texture-mapped as mentioned above.
In S1307, the solid shape measuring unit 413 converts the texture-mapped data into a standard 3-dimensional model data format such as VRML, STL, or the like, stores it into a predetermined area on the HDD 305 through the data managing unit 405, and ends the process.
(Description of Main Control Unit)
A process of a scan application which is executed by the main control unit 402 will be described by using a flowchart of
In
When the object putting waiting process of S1501 is started, in S1511, the main control unit 402 projects a display screen of
In S1512, the main control unit 402 activates the process of the object detecting unit 410. The object detecting unit 410 starts the execution of the processes described in the flowcharts of
When the object putting waiting process of S1501 is ended, the main control unit 402 subsequently executes a scan executing process of S1502. When the scan executing process of S1502 is started, in S1531, the main control unit 402 projects a scan start display screen illustrated in
In S1532, the main control unit 402 waits until the touch to the scan start button 1615 is detected. When the touch to the scan start button 1615 is detected in S1532, the main control unit 402 advances to S1533 and discriminates whether or not the 2D scan button 1612 is in the selection state.
If the 2D scan button 1612 is in the selection state in S1533, the main control unit 402 advances to S1534, executes the process of the flat original image photographing unit 411, and ends the scan executing process.
If the 2D scan button 1612 is not in the selection state in S1533, the main control unit 402 advances to S1535 and discriminates whether or not the book scan button 1613 is in the selection state. If the book scan button 1613 is in the selection state in S1535, the main control unit 402 advances to S1536, executes the process of the book image photographing unit 412, and ends the scan executing process.
If the book scan button 1613 is not in the selection state in S1535, the main control unit 402 advances to S1537 and discriminates whether or not the 3D scan button 1614 is in the selection state. If the 3D scan button 1614 is in the selection state in S1537, the main control unit 402 advances to S1538, executes the process of the solid shape measuring unit 413, and ends the scan executing process. If the 3D scan button 1614 is not in the selection state in S1537, the main control unit 402 decides that none of the 2D scan button 1612, the book scan button 1613, and the 3D scan button 1614 is in the selection state. Therefore, the main control unit 402 returns to S1532 and waits until, after any one of those buttons has entered the selection state, a touch to the scan start button 1615 is detected.
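The button dispatch of S1533 to S1538 amounts to checking the three buttons in order and running the corresponding photographing or measuring process. The following sketch models button states as booleans and the three processes as callables; these names are hypothetical simplifications of the units described above.

```python
def scan_dispatch(selected, handlers):
    """Sketch of S1533-S1538: check the 2D, book, and 3D scan buttons in
    order and run the process of whichever is in the selection state.
    Returns None when no button is selected, in which case the caller
    keeps waiting for the scan start touch (S1532)."""
    for button in ("2d", "book", "3d"):
        if selected.get(button):
            return handlers[button]()
    return None

# Example wiring: each handler stands in for one photographing/measuring
# unit (flat original, book, or solid shape).
handlers = {"2d": lambda: "flat", "book": lambda: "book", "3d": lambda: "solid"}
```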
When the scan executing process of S1502 is ended, the main control unit 402 subsequently executes an object removal waiting process of S1503.
When the object removal waiting process of S1503 is started, in S1521, the main control unit 402 displays a scan end screen illustrated in
In S1522, the main control unit 402 waits for reception of an object removal notification from the object detecting unit 410. The object removal notification is issued by the object detecting unit 410 in S834 in
According to the embodiment 1 mentioned above, the user can select any one of a mode to scan the flat original, a mode to scan the thick book, and a mode to measure the solid shape. A case is also conceivable where not all three scanning modes are necessary, for example, where, due to a setting of the user or the like, it is sufficient to execute only the two kinds of scan of the flat original and scan of the thick book. In such a case, it is sufficient that the main control unit 402 displays, through the user interface unit 403, only the two scans to be executed so that they can be selected. More specifically speaking, the main control unit 402 projects only the 2D scan button 1612, book scan button 1613, and scan start button 1615 in
In the camera scanner 101 of the construction of the embodiment 1, the distance image sensor unit 208 can measure the shape of an object put on the stage 204. Therefore, in the embodiment 2, when an object is put on the stage 204, whether it is a flat object or a solid object is discriminated by using the distance image sensor unit 208, and a proper process is executed for each object, thereby improving the operability. In the embodiment 2, in the process of the functional construction 401 described in the embodiment 1, the processes of the flowcharts of
When the object kind discriminating process of S1701 is started, in S1711, the object detecting unit 410 obtains one frame of the distance image through the distance image obtaining unit 408 and converts it into a 3-dimensional dot group.
In S1712, the object detecting unit 410 obtains, as the height of the object, the height of the dot whose height from the stage plane is maximum among the 3-dimensional dot group included in the object on the stage 204, and discriminates whether or not the obtained height is equal to or less than a predetermined value.
If the height of the object is equal to or less than the predetermined value in S1712, the object detecting unit 410 advances to S1713 and notifies the main control unit 402 that the flat original was put onto the stage 204. If the height of the object is larger than the predetermined value in S1712, the object detecting unit 410 advances to S1714 and notifies the main control unit 402 that the solid object was put onto the stage 204.
After either S1713 or S1714 was executed, the object detecting unit 410 ends the object kind discriminating process.
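The discrimination of S1712 to S1714 reduces to thresholding the maximum height above the stage plane. A minimal sketch, assuming stage coordinates with the stage plane at z = 0 and a hypothetical threshold value:

```python
def classify_object(points, height_threshold=10.0):
    """Sketch of S1712-S1714: take, as the object height, the maximum
    height above the stage plane (assumed z = 0) among the object's
    3-dimensional dots; report a flat original when it does not exceed
    the threshold, a solid object otherwise."""
    height = max(z for _, _, z in points)
    return "flat_original" if height <= height_threshold else "solid_object"
```

The returned label corresponds to the notification sent to the main control unit 402 in S1713 or S1714.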
As mentioned above, in the process which is executed by the main control unit 402 in the embodiment 2, only the details of the scan executing process of S1502 differ from the flowchart of
Therefore, the details of the scan executing process which is executed by the main control unit 402 in the embodiment 2 will be described by using a flowchart of
When the scan executing process of S1502 is started, in S1721, the main control unit 402 discriminates whether or not it has been notified by the object detecting unit 410 that the flat original was put. If the putting of the flat original is not notified in S1721, the main control unit 402 advances to S1723 and discriminates whether or not it has been notified by the object detecting unit 410 that the solid object was put. If the putting of the solid object is not notified in S1723, the main control unit 402 returns to S1721 and waits until either the notification of the putting of the flat original or the notification of the putting of the solid object is received. When the object detecting unit 410 executes S1713 in
In the scan start screen in
In S1725, the main control unit 402 waits until a touch to the scan start button 1733 is detected. When the touch to the scan start button 1733 is detected in S1725, the main control unit 402 advances to S1726 and discriminates whether or not the book scan button 1731 is in the selection state.
If the book scan button 1731 is in the non-selection state in S1726, the main control unit 402 advances to S1728 and discriminates whether or not the 3D scan button 1732 is in the selection state. If the main control unit 402 determines that the 3D scan button 1732 is also in the non-selection state in S1728, this means that no touch to either the book scan button 1731 or the 3D scan button 1732 has been detected. Therefore, the main control unit 402 returns to S1725 and waits until a touch to the scan start button 1733 is detected.
In S1726, if the book scan button 1731 is in the selection state, the main control unit 402 advances to S1727 and executes the process of the book image photographing unit 412.
In S1728, if the 3D scan button 1732 is in the selection state, the main control unit 402 advances to S1729 and executes the process of the solid shape measuring unit 413.
When any one of the three processes of the process of the flat original image photographing unit in S1722, the process of the book image photographing unit in S1727, and the process of the solid shape measuring unit in S1729 is executed, the main control unit 402 ends the scan executing process.
As mentioned above, since the object detecting unit 410 discriminates whether the flat original or the solid object has been put, if the flat original was put, the user can scan it at the time of the scan executing process without selecting a scanning mode, and the operability is improved. When the solid object was put, by presenting the button for selecting the book scan and the button for selecting the 3D scan to the user, the user can select the proper scanning mode.
When the user does not need to execute either the book scan or the 3D scan, it is sufficient that the user interface unit 403 projects only the scan start button 1733 in the display screen of
In the embodiment 2, when the solid object is put onto the stage 204, the user interface unit 403 presents to the user the selection items about whether the book scan or the 3D scan is executed, as illustrated in
When the scan executing process is started, in S1801, the main control unit 402 projects a scan start display screen illustrated in
In S1802, the main control unit 402 waits until a touch to the scan start button 1823 is detected. When the touch to the scan start button 1823 is detected in S1802, the main control unit 402 advances to S1803 and discriminates whether or not the document scan button 1821 is in the selection state.
If the document scan button 1821 is not in the selection state in S1803, the main control unit 402 advances to S1808 and discriminates whether or not the 3D scan button 1822 is in the selection state.
If the main control unit 402 determines that the 3D scan button 1822 is not in the selection state in S1808, this means that no touch to either the document scan button 1821 or the 3D scan button 1822 has been detected and neither button is in the selection state. Therefore, the main control unit 402 returns to S1802, receives the selection of either the document scan button 1821 or the 3D scan button 1822, and waits until the touch to the scan start button 1823 is detected.
In S1803, if the document scan button 1821 is in the selection state, the main control unit 402 advances to S1804 and discriminates whether or not there is the flat original putting notification from the object detecting unit 410.
If there is no flat original putting notification in S1804, the main control unit 402 advances to S1806 and discriminates whether or not there is the solid object putting notification.
If there is no solid object putting notification in S1806, the main control unit 402 returns to S1804 and waits for either the flat original putting notification or the solid object putting notification. If it is determined that there is the flat original putting notification in S1804, the main control unit 402 advances to S1805 and executes the process of the flat original image photographing unit 411.
If it is determined that there is the solid object putting notification in S1806, the main control unit 402 advances to S1807, interprets this as an instruction from the user to scan the solid object as a document, and executes the process of the book image photographing unit 412.
If the 3D scan button 1822 is in the selection state in S1808, the main control unit 402 advances to S1809 and executes the process of the solid shape measuring unit 413.
After the execution of the process of one of S1805, S1807, and S1809 was ended, the main control unit 402 ends the scan executing process.
By presenting the document scan or the 3D scan as a selection item of the scanning mode to the user as mentioned above, the user can execute the operation matched with the purpose of scanning a document or measuring a solid object, and the operability is improved. When the main control unit 402 detects the selection of the document scan, if a solid object was put on the stage 204, by executing the process of the book image photographing unit 412, a document image subjected to the proper distortion correction can be obtained.
In the embodiments 2 and 3, the main control unit 402 discriminates whether the target put on the stage 204 is the flat original or the solid object and, thereafter, executes the process. In the embodiment 4, when it is determined that the target put on the stage 204 is a solid object, the main control unit 402 further discriminates whether or not the target is a book, and executes the process. In the embodiment 4, since details of the object kind discriminating process of S1701 in
In S1902, the object detecting unit 410 obtains, as the height of the object, the height of the dot whose height from the stage plane is maximum among the 3-dimensional dot group included in the object on the stage 204, and discriminates whether or not the obtained height is equal to or less than a predetermined value. If the height of the object is equal to or less than the predetermined value in S1902, the object detecting unit 410 advances to S1903 and notifies the main control unit 402 that the flat original was put on the stage 204. If the height of the object is larger than the predetermined value in S1902, the object detecting unit 410 advances to S1904 and discriminates whether or not the solid object put on the stage 204 is a book. The process of S1904 will be described hereinafter.
In S1905, the object detecting unit 410 discriminates whether or not it was decided that the solid object is the book in S1904. If it is determined in S1905 that the solid object is the book, the object detecting unit 410 advances to S1906 and notifies the main control unit 402 that the book was put on the stage 204. If it is determined in S1905 that the solid object is not the book, the object detecting unit 410 advances to S1907 and notifies the main control unit 402 that the solid object was put on the stage 204.
After any one of S1903, S1906, and S1907 was executed, the object detecting unit 410 ends the object kind discriminating process.
Subsequently, details of the book discriminating process of S1904 will be described.
When the book discriminating process is started in S1904, the object detecting unit 410 advances to S1911, projective-transforms the target onto the stage plane, and discriminates whether or not the target image, viewed from directly above the stage plane, is close to a rectangle. The discrimination here is made as follows: a circumscribed rectangle of the target image after the projective transformation is calculated; if the difference between the area of the circumscribed rectangle and the area of the target image is equal to or less than a predetermined value, it is decided that the target image is close to a rectangle, and if it is larger than the predetermined value, it is decided that the target image is not close to a rectangle.
If it is decided that the target image is not close to the rectangle in S1911, the object detecting unit 410 advances to S1915 and determines that the target is other than the book. If it is decided that the target image is close to the rectangle in S1911, the object detecting unit 410 advances to S1912 and discriminates whether or not a ratio between the area of the target viewed from directly above the stage plane and the height of the target from the stage plane is equal to or less than a predetermined value, that is, whether or not the target image is close to a flat shape.
If it is decided that the target image is not close to the flat shape in S1912, the object detecting unit 410 advances to S1915 and determines that the target is other than the book. If it is decided that the target image is close to the flat shape in S1912, the object detecting unit 410 advances to S1913 and discriminates whether or not characters are included on the surface of the target by using an OCR technique.
If it is decided that no characters are included in S1913, the object detecting unit 410 advances to S1915 and determines that the target is other than the book. If it is decided that the characters are included in S1913, the object detecting unit 410 advances to S1914, determines that the target is the book, and ends the book discriminating process.
In the book discriminating process of S1904, it is not always necessary for the object detecting unit 410 to execute all of the discriminating processes of S1911 to S1913. For example, the object detecting unit 410 may execute any one of, or a combination of an arbitrary plurality of, the discriminating processes of S1911 to S1913. The conditions in the discriminating processes of S1911 to S1913 are an example of book discriminating conditions, and conditions other than those shown here may be used.
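Two of the book discriminating conditions above can be sketched numerically. The sketch covers only S1911 (closeness to a rectangle, via the circumscribed-rectangle area difference) and S1912 (closeness to a flat shape, here interpreted as the height being small relative to the footprint's linear size); the OCR check of S1913 is omitted, and both tolerance values are hypothetical.

```python
def looks_like_book(footprint_area, bbox_area, height,
                    rect_tol=0.1, flat_tol=0.2):
    """Sketch of S1911-S1912. Close to a rectangle: the circumscribed
    rectangle's area exceeds the target's area by at most rect_tol of
    the rectangle's area. Close to flat: the height is at most flat_tol
    of the footprint's linear size (sqrt of its area)."""
    close_to_rect = (bbox_area - footprint_area) <= rect_tol * bbox_area
    close_to_flat = height <= flat_tol * (footprint_area ** 0.5)
    return close_to_rect and close_to_flat
```

For example, a 200 mm by 300 mm, 30 mm thick volume passes both checks, while a ball of 100 mm diameter fails the rectangle check because its circular footprint leaves large corners of the circumscribed square uncovered.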
When the scan executing process is started, in S2001, the main control unit 402 discriminates whether or not there is the flat original putting notification from the object detecting unit 410. If it is determined that there is no flat original putting notification in S2001, the main control unit 402 advances to S2003 and discriminates whether or not there is the book putting notification from the object detecting unit 410. If it is determined that there is no book putting notification in S2003, the main control unit 402 advances to S2005 and discriminates whether or not there is the solid object putting notification. If it is determined that there is no solid object putting notification in S2005, the main control unit 402 returns to S2001 and waits for reception of one of the flat original putting notification, the book putting notification, and the solid object putting notification from the object detecting unit 410.
If it is determined that there is the flat original putting notification in S2001, the main control unit 402 advances to S2002 and executes the process of the flat original image photographing unit 411. If it is determined that there is the book putting notification in S2003, the main control unit 402 advances to S2004 and executes the process of the book image photographing unit 412.
If it is determined that there is the solid object putting notification in S2005, the main control unit 402 executes the process of the solid shape measuring unit 413. However, in the process of the solid shape measuring unit 413, the shape measurement is performed a plurality of times while the turntable is rotated by one revolution, so the process takes a long time. It is therefore desirable, from the viewpoint of operability, that the user can explicitly instruct the start of the process of the solid shape measuring unit 413. Therefore, in S2006, the main control unit 402 projects a display screen illustrated in
In S2007, the main control unit 402 waits until a touch to the 3D scan start button 2021 is detected. When the touch to the 3D scan start button 2021 is detected in S2007, the main control unit 402 advances to S2008 and executes the process of the solid shape measuring unit 413.
After the execution of the process of any one of S2002, S2004, and S2008 was ended, the main control unit 402 ends the scan executing process.
The object detecting unit 410 discriminates whether the object put on the stage 204 is the flat original, the book, or the solid object as mentioned above, so that the proper scanning process can be executed at the time of the execution of the scan. As for the processes of the flat original image photographing unit 411 and the book image photographing unit 412, which do not require a relatively long time, by executing them after the object is put without waiting for a scan start instruction from the user, the scan can be executed rapidly. On the other hand, as for the process of the solid shape measuring unit 413, which requires a relatively long time, by starting the execution of the process only after waiting for the user's start instruction, the operability for the user can be improved.
In the embodiment 1, the solid shape measuring unit 413 executes the measurement of the solid shape of the target on the turntable 209 by the camera unit 202 and the projector 207. In the process of the solid shape measuring unit 413, the positions of the camera unit 202 and the projector 207 are fixed, and by measuring the solid shape a plurality of times while rotating the turntable 209, the measuring accuracy is raised with respect to the side surface of the target. However, when the height of the target is equal to or larger than the height of the camera unit 202, there is a possibility that the measuring accuracy of the upper surface deteriorates. Since the distance image sensor unit 208 is disposed at a higher position than the camera unit 202 as illustrated in
An example of the process of the solid shape measuring unit 413 in the embodiment 5 is illustrated in a flowchart of
In S1301, the solid shape measuring unit 413 rotates the turntable by a predetermined angle.
In S1302, the solid shape measuring unit 413 performs the 3-dimensional dot group measurement by the camera unit 202 and the projector 207.
In S2101, the solid shape measuring unit 413 executes the process of the distance image obtaining unit 408 described in
In S2102, the solid shape measuring unit 413 forms one 3-dimensional dot group by combining the 3-dimensional dot group measured in S1302 and the 3-dimensional dot group measured in S2101 by using the ICP algorithm described in the embodiment 1. By executing the processes as mentioned above, a 3-dimensional dot group in which the measuring accuracies of both the side surface and the upper surface of the target are high can be obtained.
In S1303, the solid shape measuring unit 413 rotates the 3-dimensional dot group combined in S2102 in the opposite direction by the rotation angle from the initial position of the turntable. In S1304, the solid shape measuring unit 413 further executes the combining process with the 3-dimensional dot group combined so far.
In S1305, the solid shape measuring unit 413 discriminates whether or not the turntable has been rotated by one revolution, and repeats the processes of S1301 to S1304 until it has been rotated by one revolution. If it is determined that the turntable has been rotated by one revolution in S1305, the solid shape measuring unit 413 advances to S1306 and executes the 3-dimensional model generating process, then, in S1307, format-converts the calculated 3-dimensional model data, stores it, and ends the process.
As mentioned above, in addition to the solid shape measuring process by the camera unit 202 and the projector 207, the solid shape measuring process by the distance image sensor unit 208 is executed, and by combining their measurement results, a more accurate solid shape measurement can be performed.
According to each of the foregoing embodiments, the user's operability in the image processing apparatus such as a camera scanner or the like can be improved. More specifically speaking, the projection of the user interface by the projector, the gesture recognition by the distance image sensor, and the detection of the object on the stage can be performed and the following three kinds of reading operations can be performed.
(1) Reading of the flat original by the camera
(2) Reading of the thick document by the camera and distortion correction by the distance image sensor
(3) Projection of the 3-dimensional measurement patterns by the projector and solid shape measurement by the camera
Therefore, it is sufficient that the user puts the target onto the stage and operates the user interface which is projected onto the stage. The user can completely perform the work on the stage, and the operability can be largely improved.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2014-095535, filed May 2, 2014, which is hereby incorporated by reference herein in its entirety.