This application is based upon and claims, under 35 U.S.C. §119, the benefit of priority from prior Japanese Patent Application No. 2010-165642, filed on Jul. 23, 2010, the entire contents of which are incorporated herein by reference.
1. Field of the Invention
The present invention relates to an image processor and an image processing method for turning a digital photographic image into a “painting” or an image having a painting effect.
2. Description of the Related Art
In recent years, as digital cameras have come into wide use, it has become general practice to store photographs in the form of digital image data. This has changed the way users enjoy photography: they view captured images on the digital cameras used to capture them, or on personal computers to which the image data of the captured images is transferred for storage. For example, a technique (an image turning technique, or more particularly a painting effect application technique) has been proposed in which image data is subjected to image processing so as to turn an original digital image into an image having a painting effect (such as that of an oil painting), giving the displayed image a unique touch based on the original (refer to JP-A-8-44867, for example).
In addition, JP-A-2002-298136 describes a technique in which an original image produced by a digital camera or through CG (Computer Graphics), which has a "mechanical touch", is turned into an image having a painting effect using a computer.
However, the image having the painting effect that is obtained by the image processing described above is represented as being blurred (less sharp) compared with the original image, because the image is made to look like a painted picture. Because of this, with a photographic image (for example, a group photo) captured by the user of a digital camera in which the face of a subject person appears small, implementing an image processing like the one described above may result in a situation in which the face of the subject person, which is represented sharply in the photographic image, is painted out in the resulting painting-style image. As this occurs, the face of the person cannot be recognized. This causes no problem when the person is one (a pedestrian, for example) whose image was captured irrespective of the intention of the camera user. However, a problem arises when the person is one (a main subject) whose image was captured deliberately by the camera user, and in such a case the above image processing is not suitable.
The invention has been made in view of these situations, and an object thereof is to provide an image processor and an image processing method which can preferably implement an image turning operation of turning a photographic image showing a person as a subject into an image having a painting effect, that is, a painting-style image.
An image processor of the invention comprises: a face detection part for capturing an image and detecting an image of a face shown in the image so captured; a determination part for determining whether or not the image of the face detected by the face detection part meets a predetermined criterion; and an image turning operation implementing part for deciding, based on the result of the determination made by the determination part, on execution or non-execution of an image turning operation of turning the image into a painting-style image, and executing the image turning operation when it is decided that the image turning operation is to be executed.
According to the invention, the image turning operation can preferably be implemented in which a photographic image showing the face of a person or the like is turned into an image having a painting effect or a painting-style image.
An embodiment according to the invention will be described below by reference to the drawings.
Although the invention will be understood sufficiently from the following detailed description and the accompanying drawings, the detailed description and accompanying drawings are mainly intended for description, and the scope of the invention is not limited thereby in any way.
At first, an image processor according to the invention will be described briefly. In this embodiment, the image processor will be described as being a digital camera. The image processor has an image turning mode, or more particularly a painting effect application mode, as an image capture mode. The painting effect application mode is an image capture mode in which an image turning operation is automatically implemented to turn a photographic image, captured by the image processor in accordance with an image capturing operation by its user, into an image having a painting effect (hereinafter referred to as a painting-style image). The painting-style image obtained by this image processor is represented as if it were painted based on the original photographic image, and those who look at the painting-style image can feel an impression (a painting style) which differs from that given by the original photographic image. Note that painting styles include an oil painting touch, a watercolor painting touch, a pastel painting touch and the like. The image turning operation of turning the photographic image into the painting-style image can make use of the processing implemented in Photoshop (a registered trademark).
Following this, the configuration of an image processor 1 will be described by reference to
The control unit 10 controls the individual constituent units of the image processor 1, as well as the image processor as a whole. In addition, the control unit 10 includes a face detection part 10a, a determination part 10b and an image turning operation implementing part 10c, and implements an image processing which will be described later.
The storage unit 20 stores, under the control of the control unit 10, various data for use in implementing an image processing, which will be described later. In addition, the storage unit 20 stores various image data generated by the control unit 10 during processing and recorded image data read out from a recording medium 100 by the read/write unit 60 as required.
The image capture unit 30 captures a photographic image under the control of the control unit 10. The image capture unit 30 generates a captured image signal representing the photographic image so captured and generates photographic image data representing a photographic image of one frame (one photograph) based on the captured image signal generated. The image capture unit 30 supplies the control unit 10 with the photographic image data so generated. The control unit 10 receives the photographic image data so supplied. The control unit 10 may obtain the photographic image data through a configuration in which the image capture unit 30 supplies the control unit 10 with predetermined data which represents the photographic image and the control unit 10 implements a predetermined operation on the predetermined data supplied to generate photographic image data. The photographic image is a still image, and the photographic image data is still image data which represents the still image.
The input unit 40 is an operation unit which is operated by the user and supplies the control unit 10 with operation signals corresponding to the contents of the operations implemented thereon by the user.
The display unit 50 displays a menu screen, a live view display screen and painting-style images (images of an oil painting touch, a patched paper work touch, a black-and-white drawing touch and the like) under the control of the control unit 10 (the image turning operation implementing part 10c).
The read/write unit 60 reads out recorded image data from the recording medium 100 and writes image data on the recording medium 100 for recording under the control of the control unit 10.
Next, a hardware configuration of the image processor 1 will be described by reference to
The control unit 10 shown in
The primary storage unit 12 is realized by a RAM (Random Access Memory). The primary storage unit 12 temporarily stores data used and data generated by the CPU 11 as required.
The storage unit shown in
The image capture unit 30 shown in
The image capture unit 31 captures a photographic image in accordance with an image capturing operation performed by the user of the image processor 1, generates a captured image signal which represents the photographic image captured, implements various types of processing (processing carried out in the AFE, DSP and the like) on the captured image signal so generated, and generates digital photographic image data. The various types of processing include, for example, a correlated double sampling operation, an automatic gain control operation which is implemented on the captured image signal after the correlated double sampling operation, an analog/digital conversion operation of converting the analog captured image signal after the automatic gain control operation into a digital signal, and operations implemented so as to increase the image quality, such as edge enhancement, auto white balance, auto-iris and the like.
The image capture unit 31 supplies the primary storage unit 12 with the photographic image data generated. The primary storage unit 12 stores the photographic image data received from the image capture unit 31 as still image data. The CPU 11 uses the still image data stored in the primary storage unit 12 to implement the image processing, which will be described later. Operations which are implemented by the DSP (Digital Signal Processor) of the image capture unit 31 may be implemented by the CPU 11.
The input unit 40 shown in
The display unit 50 shown in
Note that the input unit 40 and the display unit 50 which are shown in
The read/write unit 60 shown in
The recording medium 100 is realized by the memory card 101 which is of a flash memory type. An SD memory card is used as the memory card 101.
Next, referring to
After electric power is introduced into the image processor 1, the user selects the painting effect application mode from the menu screen displayed on the display unit 50 by using the input unit 40. In addition, the user selects one of the painting effects or styles, including watercolor painting, color pencil sketch, pastel painting, pointillism, air brush painting, oil painting, and Gothic oil painting, by using the input unit 40. The input unit 40 supplies the face detection part 10a with an operation signal corresponding to the operation implemented by the user.
The face detection part 10a makes the image capture unit 30 capture present images at a predetermined frame rate and sequentially obtains photographic image data (live view image data) which represent the present images from the image capture unit 30. The live view image data is image data of a low resolution for live view. The face detection part 10a displays live view images represented by the live view image data sequentially obtained from the image capture unit 30 on the display screen of the display unit 50, whereby the user can capture images while verifying the live view images.
Next, the face detection part 10a determines whether or not the painting effect application mode is finished (step S101). When the user finishes the painting effect application mode by operating the input unit 40, the face detection part 10a receives an operation signal corresponding to the operation implemented by the user from the input unit 40. When receiving this operation signal, the face detection part 10a determines that the painting effect application mode has been finished (step S101: YES) and ends the image processing. On the other hand, the face detection part 10a determines that the painting effect application mode has not yet been finished in any other case (step S101: NO).
If the face detection part 10a determines that the painting effect application mode has not yet been finished (step S101: NO), the face detection part 10a determines whether or not an image capturing operation has been carried out (step S102). The user implements an image capturing operation at a predetermined timing by using the input unit 40 while verifying the live view images. When receiving an operation signal corresponding to an image capturing operation by the user from the input unit 40, the face detection part 10a determines that the image capturing operation has been implemented by the user (step S102: YES) and makes the image capture unit 30 capture a photographic image in response to the determination, thereby obtaining photographic image data representing the photographic image from the image capture unit 30. Since the photographic image data basically constitutes image data for storage, the image data may be made up of image data of a high resolution (the control unit 10 (the face detection part 10a) controls, for example, the image capture unit 30 so as to adjust the resolution or the like).
If the face detection part 10a determines that no image capturing operation has been implemented by the user, based on the fact that an operation signal corresponding to an image capturing operation implemented by the user has not yet been supplied (step S102: NO), the face detection part 10a returns to step S101 described above and determines whether or not the painting effect application mode has been finished. Namely, the image processor 1 waits until an image is captured by the user or the painting effect application mode is finished.
On the other hand, if the face detection part 10a determines that there has been implemented the image capturing operation (step S102: YES), the face detection part 10a determines whether or not a photographic image represented by the photographic image data includes the face of a person (step S103).
Specifically, the face detection part 10a attempts to detect an image of the face in the photographic image by an appropriate processing operation such as template matching, which uses template image data representing a predetermined face stored in advance in the storage unit 20. For example, an operation is implemented in which the image of the predetermined face (the template image) represented by the template image data is shifted pixel by pixel horizontally or vertically over the photographic image represented by the photographic image data obtained by the face detection part 10a, so as to sequentially compare the template image with the image of the area superposed on the template image. As this occurs, the face detection part 10a sequentially obtains the similarity between the template image and the image of the area superposed on it, and compares each similarity so obtained with a preset threshold. Then, if the similarity is equal to or larger than the threshold, the face detection part 10a determines that the area superposed on the template image is the area of the image of the face and specifies that area as the image of the face. This specification means that the face detection part 10a has detected the face in the photographic image and determines that the face of a person is included in the photographic image. SAD (Sum of Absolute Differences), SSD (Sum of Squared Differences) or the like is used in calculating the similarity. In addition, in the comparison of the images, the image of the predetermined face may be increased or decreased in size for comparison.
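The sliding-window matching described above can be sketched as follows. This is a minimal illustration only, assuming grayscale images represented as lists of pixel rows; note that with SAD a *smaller* distance means a better match, so the threshold test is inverted relative to the similarity comparison in the text:

```python
def sad(patch, template):
    """Sum of Absolute Differences between two equally sized
    grayscale patches (lists of rows of pixel values)."""
    return sum(abs(p - t)
               for prow, trow in zip(patch, template)
               for p, t in zip(prow, trow))

def detect_face(image, template, max_sad):
    """Shift the template pixel by pixel over the image and return the
    (x, y) of the best match whose SAD does not exceed max_sad, or
    None when no area qualifies (i.e. no face is detected)."""
    th, tw = len(template), len(template[0])
    ih, iw = len(image), len(image[0])
    best = None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = [row[x:x + tw] for row in image[y:y + th]]
            d = sad(patch, template)
            if d <= max_sad and (best is None or d < best[0]):
                best = (d, x, y)
    return None if best is None else (best[1], best[2])
```

Resizing the template before matching, as the text mentions, would simply mean repeating this search with scaled copies of the template.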
If no image of the face of a person is detected by the face detection part 10a (step S103: NO), the image turning operation implementing part 10c implements an image turning operation of turning the photographic image represented by the photographic image data into a painting-style image and displays the resulting painting-style image on the display unit 50 (step S108).
In step S108, the image turning operation implementing part 10c implements the image turning operation of turning the photographic image represented by the photographic image data into a painting-style image in accordance with the style of painting that the user selected by using the input unit 40 when the painting effect application mode was selected. For example, when the user selects the style of a watercolor painting, the image turning operation implementing part 10c changes various parameters possessed by the photographic image data to parameters which represent the photographic image as being painted with the touch of a watercolor painting, and generates painting-style image data representing the painting-style image. As referred to here, the parameters are numeric values which specify the operation intensity when implementing the image processing for turning the photographic image into the painting-style image. In addition, the parameters include parameters representing brightness, contrast, gray level, tone, sharpness and the like. Additionally, the parameters are specified per style of painting, such as watercolor painting, color pencil sketch, pastel painting, pointillism, air brush painting, oil painting, and Gothic oil painting. As the image turning operation of turning the photographic image into the painting-style image, there is, for example, an operation using the various types of filters which are used in Photoshop (a registered trademark).
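The idea of per-style parameter sets can be sketched as below. The parameter names, values and the trivial brightness-scaling filter are all hypothetical stand-ins; the patent does not disclose the actual parameters or filter internals:

```python
# Hypothetical parameter sets per painting style; the real values and
# parameter names used by the image processor are not disclosed.
STYLE_PARAMS = {
    "watercolor": {"brightness": 1.1, "contrast": 0.9, "sharpness": 0.6},
    "oil":        {"brightness": 1.0, "contrast": 1.2, "sharpness": 0.4},
}

def apply_style(pixel_values, style):
    """Toy stand-in for the image turning operation: scale each pixel
    value by the style's brightness parameter and clamp to 0-255.
    A real implementation would apply painting-effect filters."""
    b = STYLE_PARAMS[style]["brightness"]
    return [min(255, max(0, round(v * b))) for v in pixel_values]
```

The point is only that each selectable style names a distinct parameter set that the operation then applies to the photographic image data.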
The image turning operation implementing part 10c generates the painting-style image data which represents the painting-style image by implementing the above image processing operation. The image turning operation implementing part 10c stores the painting-style image data so generated in a predetermined recording area (the primary storage unit 12 shown in
If it is determined NO in step S103, it is considered that a person (a main subject) whose image is captured deliberately by the user is not shown in the captured photographic image. Because of this, when the photographic image represented by the photographic image data is turned into the painting-style image, even in the event that there is a portion where the painting-style image is represented as being painted out, it is considered that there will be no problem, because the image of the main subject (in particular, the face thereof) does not exist in the captured photographic image and no situation arises in which the image of the main subject is painted out.
In the template matching described above, the face detection part 10a may use one or more template images. However, when the template images used show only persons whose faces are directed to the front (persons whose line of sight is directed to the camera), persons other than a person whose face is directed to the front cannot be detected in the photographic image. For example, as is shown in
On the other hand, if the face detection part 10a determines that the face of a person is shown in the photographic image represented by the photographic image data (step S103: YES), namely, if the image of the face is detected in the photographic image, the determination part 10b determines whether or not the image of the face of the person detected by the face detection part 10a meets a first criterion (step S104).
Here, the first criterion is a criterion based on areas defined by use of the three division method. The three division method is one of the rules of thumb with respect to the composition of the screen. In the three division method, as is shown in
For example, as is shown in
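A check against the first criterion can be sketched as follows. This is a sketch under an assumption: the patent does not fix the exact extent of the area T1, so T1 is taken here as the central rectangle bounded by the two vertical and two horizontal dividing lines of the three division method:

```python
def meets_first_criterion(face_cx, face_cy, width, height):
    """First criterion: the detected face lies inside the area T1
    defined by the three division method.
    Assumption: T1 is the central region bounded by the dividing
    lines at 1/3 and 2/3 of each image dimension; a face whose
    center falls outside this region is in the peripheral edge
    portion of the photographic image."""
    return (width / 3 <= face_cx <= 2 * width / 3 and
            height / 3 <= face_cy <= 2 * height / 3)
```

A face center at the middle of the frame meets the criterion; one near a corner does not, and the processing then proceeds directly to the image turning operation of step S108.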
If the determination part 10b determines that the person shown in the photographic image (the image of the face detected in the photographic image, and this will be true in the following description) does not meet the first criterion (step S104: NO, refer to
In this way, when the image of the face of the person shown in the photographic image does not meet the first criterion (resides outside the area T1), the image turning operation of turning the photographic image into the painting-style image is implemented. An area residing outside the area T1 constitutes a peripheral edge portion of the photographic image, and the area T1 is the area defined by the three division method. Therefore, when the image of the face of the person shown in the photographic image resides outside the area T1, it is highly possible that the image is not the image of the face of a person whose image was captured deliberately, with intention, by the user of the camera, but the image of a pedestrian whose image happened to be captured by the user without any intention. Therefore, it is highly possible that the user is not interested in the person shown in the photographic image and that there will be no problem even in the event that the image turning operation is implemented on the photographic image so as to turn it into a painting-style image, with the result that the face of the person is represented as being painted out and the user cannot recognize the face of the person in the painting-style image.
On the other hand, in step S104, if the determination part 10b determines that the image of the face of the person meets the first criterion (step S104: YES, refer to
Here, the second criterion is a criterion based on whether or not the image of the face detected above is smaller in size than a preset range. For example, as is shown in
Here, the preset reference area constitutes a criterion by which it is determined whether or not the image of the face of the person, which is the object to be judged by the criterion, would be represented as being painted out when the photographic image is turned into a painting-style image. Namely, when the image of the face of the person shown in the photographic image is smaller than the reference area, it is highly possible that when the photographic image is turned into a painting-style image, the image of the face of the person will be represented as being painted out in the resulting painting-style image. On the other hand, when the image of the face of the person shown in the photographic image is larger than the reference area, it is unlikely that the image of the face will be represented as being painted out in the resulting painting-style image, even when the photographic image is turned into a painting-style image.
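The second criterion reduces to a simple area comparison, which might be sketched as below. The pixel-count threshold is a hypothetical value; the patent only states that a reference area is preset:

```python
def meets_second_criterion(face_w, face_h, ref_area=400):
    """Second criterion: the detected face image is smaller than the
    preset reference area, i.e. at risk of being painted out by the
    image turning operation. ref_area (in pixels) is an assumed
    default, not a value from the patent."""
    return face_w * face_h < ref_area
```

When this returns False (the face is large enough to survive the painting effect), the processing can proceed to step S108; when it returns True, the third criterion of step S106 is checked next.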
If the determination part 10b determines in step S105 that the image of the face of the person does not meet the second criterion (step S105: NO, refer to
On the other hand, if the determination part 10b determines in step S105 that the image of the face of the person meets the second criterion (step S105: YES, refer to
Here, the third criterion constitutes a criterion by which whether or not the face and the line of sight of the person are directed to the front is determined. When the face and the line of sight of the person are directed to the front, it is highly possible that this person faces the camera and is looking at the camera, and it is also highly possible that the person whose image is positively captured by the user of the camera (the person constitutes a main subject). On the other hand, on any other occasions, it is highly possible that the person does not face the camera and is not looking at the camera. Therefore, it is highly possible that the person shown in the photographic image is not the person whose image is positively captured by the user of the camera (the person is a pedestrian).
The determination part 10b detects, for example, the orientation of the face of the person and the direction of the line of sight from the image of the face of the person detected by the face detection part 10a. The orientation of the face of the person is detected by implementing, as required, an operation similar to the template matching implemented in step S103. Specifically, the determination part 10b detects, by using image data representing eyes, a nose and a mouth stored in advance in the storage unit 20, the eyes, nose and mouth of the person in the image of the face detected by the face detection part 10a through template matching, and detects the orientation of the face based on a positional relationship between the eyes, nose and mouth so detected. For example, when the positions of the two eyes are aligned substantially laterally symmetrically with respect to a reference line connecting the position of the nose and the position of the mouth, the determination part 10b detects that the face of the person is oriented to the front. On the other hand, when the positions of the two eyes lie close to the left-hand side with respect to the centrally positioned line connecting the position of the nose and the position of the mouth, the determination part 10b detects that the face is oriented towards the left (towards the right in actual space).
To detect the direction of the line of sight, the positions of the pupil and the sclera are further specified from the area of the eye detected in detecting the orientation of the face of the person, whereby the direction of the line of sight is detected based on the position of the pupil within the sclera. For example, when the pupil is positioned substantially in the center of the sclera, it is detected that the line of sight is directed to the front. On the other hand, when the pupil is positioned further rightwards than the center of the sclera, the determination part 10b detects that the line of sight is directed towards the right (towards the left in actual space).
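The pupil-within-sclera classification can be sketched as follows. The tolerance band around the center is an assumed value; the patent does not quantify how close to the center the pupil must be to count as "front":

```python
def gaze_direction(pupil_x, sclera_left, sclera_right, tolerance=0.15):
    """Classify the line of sight from the pupil's horizontal position
    within the sclera, as described for the third criterion.
    tolerance is the assumed fraction of the sclera half-width within
    which the gaze still counts as directed to the front."""
    center = (sclera_left + sclera_right) / 2
    half_width = (sclera_right - sclera_left) / 2
    offset = (pupil_x - center) / half_width  # normalized, -1 .. 1
    if abs(offset) <= tolerance:
        return "front"
    # rightward in the image corresponds to leftward in actual space
    return "right" if offset > 0 else "left"
```

Only a "front" result (for both eyes, together with a frontal face orientation) would satisfy the third criterion in step S106.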
When both the face and the line of sight of the person which are specified by the image F1 of the face of the person are directed towards the front as is shown in
When the template images represented by the template image data which are used in the template matching operation use only the images which show the persons who face the front (the persons whose lines of sight are directed to the camera), the face detection part 10a cannot detect any other persons than those who face the front in the photographic image. As this occurs, the determination part 10b does not detect the orientation of the face of the person detected (the person who faces the front) but detects the direction in which the line of sight is directed, and determines that the third criterion is met when it detects that the line of sight is directed towards the front.
In step S106, if the determination part 10b determines that the image of the face of the person does not meet the third criterion (step S106: NO, refer to
When the image of the face of the person shown in the photographic image does not meet the third criterion (when at least either the face or the line of sight of the person is not oriented towards the front), the image turning operation of turning the photographic image into a painting-style image is implemented. Namely, in this case, even in the event that the photographic image is turned into the painting-style image and the image of the face of the person shown in the resulting painting-style image is represented as being painted out, it is highly possible that the person shown in the painting-style image is a pedestrian or the like in which the user of the camera is not interested, and it is highly possible that there will be no problem although the face of the person cannot be recognized.
On the other hand, in step S106, if the determination part 10b determines that the image of the face of the person meets the third criterion (step S106: YES, refer to
Thus, according to the image processor 1 of the embodiment of the invention, the image of the face of the person shown in the photographic image represented by the obtained photographic image data is detected. Then, whether or not the photographic image is to be turned into a painting-style image is determined based on whether the image of the face of the person so detected meets the first to third criteria, and the photographic image is turned into a painting-style image based on the result of the determination. In this embodiment, when the photographic image in which the person is shown is turned into a painting-style image, whether or not the photographic image is turned into a painting-style image is determined in consideration of the position (the area T1) and the size (the range S1) of the person shown in the photographic image, or the orientation of the face and the line of sight of the person. When the image of the face of the person who is considered to constitute the main subject is small, as described above, the image turning operation of turning the photographic image into a painting-style image is not implemented, and therefore the drawback of the face of the person being painted out can be suppressed. In this way, in this embodiment, whether or not the image turning operation is implemented is determined in consideration of the position (the area T1) and the size (the range S1) or the orientation of the face and the line of sight of the person, that is, by making the predetermined determination on the image of the face, and therefore good image processing can be implemented.
In the embodiment, it is described that if the determination part 10b determines that the image of the face of the person does not meet any of the first to third criteria, the image turning operation implementing part 10c automatically implements the image turning operation of turning the photographic image into a painting-style image. However, as this occurs, the image turning operation implementing part 10c may display a short sentence reading "turning into a painting-style image is OK" on the display screen of the display unit 50 so as to inform the user of the camera that the photographic image can be turned into a painting-style image. In this case, the user of the camera implements a selecting operation, by using the input unit 40, on whether or not the photographic image is to be turned into a painting-style image. The image turning operation implementing part 10c may receive the operation signal corresponding to the selecting operation from the input unit 40 and implement the image turning operation of turning the photographic image into a painting-style image in accordance with the operation signal received. In this way, the image turning operation implementing part 10c may implement the image turning operation of turning the photographic image into a painting-style image in accordance with the result of the determination made by the determination part 10b via the operation of the user of the camera.
In addition, the image turning operation implementing part 10c may implement the image turning operation of turning the photographic image into a painting-style image even when the determination part 10b determines that all of the first to third criteria are met, that is, even in step S107. However, the intensity of the image turning operation in this case is made smaller than the intensity of the operation in step S108. Here, the larger the intensity of the operation, the larger the change in the image before and after the image turning operation. Because of this, the change in the image before and after the image turning operation becomes small by decreasing the intensity of the image turning operation. In addition, since the intensity of the image turning operation is regulated by the parameters as has been described above, increasing or decreasing the intensity of the image turning operation is implemented by regulating the numeric values of the parameters (note that such a changing method is performed according to a preset method). By adopting this configuration, even when the image of the face of the person who is considered to constitute the main subject in the photographic image is small, the drawback of the face of the main subject being painted out can be suppressed by implementing an image turning operation which produces a smaller change in the resulting painting-style image from the original photographic image. Moreover, since the image turning to the painting-style image can be implemented, an image can be generated which induces the interest of the user of the camera. In addition, the image turning operation implementing part 10c may implement a (predetermined) image turning operation in step S107 which changes the image only a little. By doing so, as with the case which has just been described above, the drawback of the face of the main subject being painted out can be suppressed.
Moreover, since the image turning to a painting-style image can be implemented, an image is generated which induces the interest of the user of the camera.
In addition, the image turning operation implementing part 10c may implement the image turning operation by changing the intensity of the image turning operation or the style of the painting in accordance with the results of the determinations made by the face detection part 10a or the determination part 10b in steps S103 to S106. Specifically, the image turning operation implementing part 10c changes the intensity of the image turning operation or the style of the painting in step S108 depending on the step in which the negative determination (NO) is made. For example, stored in advance in the storage unit 20 is a reference table (not shown) which records, in a corresponding fashion, information specifying each of the steps S103 to S106 together with an intensity of the operation and a style of painting. The image turning operation implementing part 10c specifies the step in which the negative determination (NO) is made (that is, the step immediately preceding step S108) and refers to the reference table to obtain the intensity of the image turning operation and the style of painting which correspond to the information specifying that step. Then, the image turning operation implementing part 10c implements the image turning operation on the photographic image based on the intensity of the image turning operation and the style of painting so obtained. Basically, the contents (existence, position and size of the image of the face of a person) of the photographic image on which the image turning operation is to be implemented in step S108 differ depending on which of the steps S103 to S106 the negative determination (NO) is made in. Because of this, a suitable image turning operation can be implemented on the photographic image by setting the intensity of the operation and the style of the painting in accordance with the contents which so differ.
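The reference table lookup described above can be sketched as a simple mapping from the step in which the negative determination is made to an intensity and a style. The concrete intensity values and style names below are assumptions for illustration, not values recorded in the embodiment.

```python
# Hypothetical reference table: each step in which the negative
# determination (NO) can be made is mapped to an operation intensity
# and a painting style.
REFERENCE_TABLE = {
    "S103": {"intensity": 0.9, "style": "oil"},
    "S104": {"intensity": 0.7, "style": "watercolor"},
    "S105": {"intensity": 0.5, "style": "color pencil"},
    "S106": {"intensity": 0.3, "style": "pastel"},
}

def settings_for(no_step):
    """Return the intensity and painting style recorded for the step in
    which the negative determination was made."""
    return REFERENCE_TABLE[no_step]

# Example: the negative determination is made in step S105.
params = settings_for("S105")
```

In this sketch the intensity decreases from step S103 to step S106, matching the setting described in the following paragraph.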
For example, in the case described above, the intensity of the image turning operation is set so as to decrease as the step in which the negative determination (NO) is made proceeds from step S103 to step S106. In addition, the style of the painting is set so that the change in image before and after the image turning operation becomes smaller as the step in which the negative determination (NO) is made proceeds from step S103 to step S106. With these settings, it becomes possible to implement a suitable image turning operation on the photographic image.
Here, changing the style of the painting means changing it among styles such as oil painting, watercolor painting, and color pencil sketch; changes in brightness and tone are also included.
In addition, changing the intensity means that, even with the style of the painting remaining the same, a weak image turning operation makes the resulting image differ only slightly from the original image, whereas a strong image turning operation makes the resulting image differ from the original image to a large degree.
In addition, in the embodiment, as a matter of convenience, a single person is described as being shown in the photographic image. However, the image processor according to the invention can also be applied to a case where a plurality of persons is shown in the photographic image. In this case, for example, when the image processor 1 implements the image turning operation, in the event that there is even a single person who is determined by the determination part 10b as meeting the first to third criteria, the image turning operation implementing part 10c does not turn the photographic image into a painting-style image, or implements the image turning operation by matching the intensity of the image turning operation to the person so determined.
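The multiple-person variant can be sketched as follows. The face records, the size threshold, and the choice of the smallest qualifying face as the one to match the intensity to are all assumptions introduced for illustration; the embodiment only states that the intensity is matched to a person determined as meeting the criteria.

```python
def decide_operation(faces, meets_criteria):
    """faces: list of detected face records; meets_criteria: a predicate
    standing in for the first to third criteria applied to one face.
    Returns the string "convert-normally" when no face meets the criteria;
    otherwise returns the face to match the operation intensity to (here,
    illustratively, the smallest qualifying face, since a small face is
    the one most easily painted out)."""
    qualifying = [f for f in faces if meets_criteria(f)]
    if not qualifying:
        return "convert-normally"
    # Alternative described in the text: skip the conversion entirely.
    return min(qualifying, key=lambda f: f["size"])

faces = [{"id": 1, "size": 30}, {"id": 2, "size": 12}, {"id": 3, "size": 50}]
# Illustrative criterion: a face smaller than 40 units is at risk.
choice = decide_operation(faces, lambda f: f["size"] < 40)
```

Here `choice` is the record of the person with the smallest qualifying face, to whom the intensity of the image turning operation would be matched.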
In addition, in the embodiment, the determination part 10b determines whether or not the image of the face of the person detected by the face detection part 10a meets the criteria by using the first to third criteria. However, the user of the camera may select which of the first to third criteria are to be used.
Additionally, in the embodiment, the face detection part 10a is described as detecting the image of the face of a person. However, the image processor 1 of the invention can also be applied to the detection of the face of a pet such as a dog or a cat.
Further, in the embodiment, the image processor 1 is described as turning the photographic image captured in accordance with the image capturing operation by the user of the image processor 1 into the painting-style image. However, the invention is not limited thereto, and recorded images represented by the recorded image data recorded in the storage unit 20 and the recording medium 100 may instead be turned into painting-style images.
In addition, the image processor according to the invention is described as implementing the image turning operation of turning the photographic image (still image) into a painting-style image. However, the image processor 1 may implement an image turning operation of turning an image used for live view display or a dynamic or moving image (a group of images made up of a plurality of continuous still images) into a painting-style image (a group of painting-style images).
Additionally, in the embodiment, as the image processor according to the invention, the digital camera is described as the example thereof. However, the invention can also be applied to such image processors as a digital photo frame, a personal computer and the like in addition to the digital camera.
Note that the image processing program 21a of the embodiment may be recorded in a portable recording medium. This portable recording medium includes, for example, a CD-ROM (Compact Disk Read Only Memory) or a DVD-ROM (Digital Versatile Disk Read Only Memory). In addition, the image processing program 21a may be installed in the image processor 1 from the portable recording medium via a reading unit of any kind. Further, the image processing program 21a may be downloaded from a network, not shown, such as the Internet via a communication unit and installed in the image processor 1. Additionally, the image processing program 21a may be stored in a storage unit of, for example, a server which can communicate with the image processor 1 so as to send commands to the CPU or the like. Further, a readable recording medium (for example, RAM, ROM (Read Only Memory), CD-R, DVD-R, hard disk, or flash memory) which records the image processing program 21a is a recording medium which records a program which can be read by a computer.
Thus, the invention is not limited to the embodiment that has been described heretofore and can be modified variously in carrying it out without departing from the spirit and scope thereof. In addition, the invention may be carried out by combining as many of the functions executed in the embodiment as possible, as required. The embodiment described above includes various stages, and various inventions can be extracted by combining the plurality of constituent requirements disclosed, as required. For example, even if some of the constituent requirements described in the embodiment are deleted, as long as the advantage can still be obtained, the resulting configuration which lacks the constituent requirements so deleted can be extracted as an invention.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2010-165642 | Jul 2010 | JP | national |