This application claims the benefit of Japanese Priority Patent Application JP 2012-236784 filed Oct. 26, 2012, the entire contents of which are incorporated herein by reference.
The present disclosure relates to an information processing apparatus. More particularly, the present disclosure relates to an information processing apparatus that displays an image, a display apparatus, a control method for such an information processing apparatus, and a program that causes a computer to execute such a method.
Imaging apparatuses such as a digital still camera and a digital video camera (e.g., camera-integrated recorder) have prevailed in recent years. The imaging apparatus includes an imaging system that images a subject such as a person, and records an image generated by the imaging system as an image file.
Further, an imaging apparatus capable of generating an image (e.g., a panoramic image) taken in an imaging area relatively widened in a specific direction by combining a plurality of images has been proposed. For example, the following imaging apparatus has been proposed. The imaging apparatus includes three imaging systems and generates a panoramic image by arranging and combining images output from the respective imaging systems (see, for example, Japanese Patent Application Laid-open No. 2007-166317).
With the above-mentioned apparatuses in the related art, it is possible to easily take a panoramic image.
When a person stands upright on the ground, the person's line of sight is horizontal to the ground. Therefore, the area that the person can view is wide in the left- and right-hand directions (the horizontal direction, i.e., the direction parallel to the ground and perpendicular to the direction of gravitational force) and narrow in the upper and lower directions (the vertical direction, i.e., the direction perpendicular to the ground and parallel to the direction of gravitational force). When a person captures a panoramic image, an image reflecting this vision that is wide in the left- and right-hand directions (an image long in the horizontal direction) is therefore often recorded. However, an information processing apparatus including an imaging unit, such as a smart phone or a cell phone, is often used in a vertical state (with its longitudinal direction parallel to the direction of gravitational force). Therefore, it is assumed that, when the information processing apparatus is used in the vertical state, a panoramic image cannot be appropriately displayed on the display unit and is difficult to view.
In view of the above-mentioned circumstances, it is desirable to appropriately display an image.
According to a first embodiment of the present disclosure, there is provided an information processing apparatus including: a plurality of imaging units that are arranged, on a surface almost parallel to a display surface of a display unit, in a direction almost orthogonal to a long side of the display unit; and a display control unit configured to control, in a state in which the long side of the display unit is almost parallel to a direction of gravitational force, the display unit to display a plurality of areas of a horizontally long image in a plurality of rows in a longitudinal direction of the display unit, the horizontally long image being obtained by connecting a plurality of images generated by the plurality of imaging units, the display control unit being configured to control the display unit to display, as the plurality of areas of the horizontally long image, a first image and a second image being two images including areas of both end portions in a longitudinal direction of the horizontally long image, and a third image being an image including an area present between the both end portions. In addition, there are provided a control method for such an information processing apparatus and a program that causes a computer to execute such a method. This provides an effect of displaying, in the state in which the long side of the display unit is almost parallel to the direction of gravitational force, the plurality of areas of the horizontally long image in the plurality of rows in the longitudinal direction of the display unit, and of displaying, as the plurality of areas of the horizontally long image, the two images (first image and second image) including the areas of the both end portions in the longitudinal direction of the horizontally long image, and the image (third image) including the area present between the both end portions.
Further, according to a second embodiment of the present disclosure, there is provided an information processing apparatus including: a plurality of imaging units that are housed in a second casing being a casing attached to a first casing that houses a display unit and rotatable about an axis of rotation almost parallel to a short side of the display unit, and that are arranged on a surface of the second casing in a direction almost parallel to the axis of rotation; and a display control unit configured to control, in a state in which a long side of the display unit is almost parallel to a direction of gravitational force, the display unit to display a plurality of areas of a horizontally long image in a plurality of rows in a longitudinal direction of the display unit, the horizontally long image being obtained by connecting a plurality of images generated by the plurality of imaging units, the display control unit being configured to control the display unit to display, as the plurality of areas of the horizontally long image, a first image and a second image being two images including areas of both end portions in a longitudinal direction of the horizontally long image, and a third image being an image including an area present between the both end portions. In addition, there are provided a control method for such an information processing apparatus and a program that causes a computer to execute such a method. This provides an effect of displaying, in the state in which the long side of the display unit is almost parallel to the direction of gravitational force, the plurality of areas of the horizontally long image in the plurality of rows in the longitudinal direction of the display unit, and of displaying, as the plurality of areas of the horizontally long image, the two images (first image and second image) including the areas of the both end portions in the longitudinal direction of the horizontally long image, and the image (third image) including the area present between the both end portions.
Further, in the first and second embodiments, the display control unit may be configured to control the display unit to display the first image and the second image in the same row and the third image in another row. This provides an effect of displaying the first image and the second image in the same row and the third image in the other row.
Further, in the first and second embodiments, the display control unit may be configured to control the display unit to display the third image in a row above a row in which the first image and the second image are displayed. This provides an effect of displaying the third image in the row above the row in which the first image and the second image are displayed.
Further, in the first and second embodiments, the display control unit may be configured to control the display unit to display the first image to the third image such that the third image has a horizontal size larger than a horizontal size of each of the first image and the second image. This provides an effect of displaying the first image to the third image such that the third image has the horizontal size larger than the horizontal size of each of the first image and the second image.
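By way of illustration only, the row layout described in the preceding paragraphs can be sketched in Python as follows. This is a minimal, hypothetical example (the function names, the use of NumPy, and the choice of giving the third image exactly twice the width of each end image are assumptions for illustration, not the disclosed implementation):

```python
import numpy as np

def portrait_layout(pano, end_w):
    """Minimal sketch: cut a horizontally long image into a first and a
    second image (both end portions) and a third image (an area between
    them), then stack them in two rows for a vertically held display.
    The third image is given twice the width of each end image, matching
    the description that it has the larger horizontal size (assumption)."""
    h, w = pano.shape[:2]
    first = pano[:, :end_w]                 # left end portion
    second = pano[:, w - end_w:]            # right end portion
    mid = (w - 2 * end_w) // 2              # center the third image
    third = pano[:, mid:mid + 2 * end_w]    # area between the end portions
    top_row = third                         # third image in the upper row
    bottom_row = np.concatenate([first, second], axis=1)
    return np.concatenate([top_row, bottom_row], axis=0)

# Example: a 2400 x 400 panorama becomes an 800 x 800 two-row display image.
display_img = portrait_layout(np.zeros((400, 2400, 3), np.uint8), end_w=400)
```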
Further, in the first and second embodiments, the display control unit may be configured to control the display unit to display an image in an area other than display areas of the first image to the third image, the image being obtained by reducing a whole of the horizontally long image. This provides an effect of displaying the image in the area other than display areas of the first image to the third image, the image being obtained by reducing the whole of the horizontally long image.
Further, in the first and second embodiments, the information processing apparatus may further include an attitude detection unit configured to detect an attitude of the information processing apparatus, in which the display control unit may be configured to control, when the long side of the display unit is almost perpendicular to the direction of gravitational force, the display unit to display the first image to the third image in a smaller number of rows than the number of rows when the long side of the display unit is almost parallel to the direction of gravitational force. This provides an effect of displaying, when the long side of the display unit is almost perpendicular to the direction of gravitational force, the first image to the third image in a smaller number of rows than the number of rows when the long side of the display unit is almost parallel to the direction of gravitational force.
Further, in the first and second embodiments, the display control unit may be configured to control, when controlling the display unit to display an image being a display target that has a ratio of a horizontal size to a vertical size smaller than a ratio of a horizontal size to a vertical size of the horizontally long image, the display unit to display the first image to the third image in a smaller number of rows than the number of rows when the horizontally long image is displayed. This provides an effect of displaying, when displaying the image being the display target that has the ratio of the horizontal size to the vertical size smaller than the ratio of the horizontal size to the vertical size of the horizontally long image, the first image to the third image in the smaller number of rows than the number of rows when the horizontally long image is displayed.
Further, in the first and second embodiments, the information processing apparatus may further include a specific-target detection unit configured to detect a specific target included in the horizontally long image, in which the display control unit may be configured to control the display unit to display an image including an area in which the detected specific target is present, as the third image. This provides an effect of displaying the image including the area in which the detected specific target is present, as the third image.
Further, in the first and second embodiments, the information processing apparatus may further include a record control unit configured to record, when recording the horizontally long image, specific-target information on a recording medium in association with the horizontally long image, the specific-target information relating to the area in which the specific target is present, in which the display control unit may be configured to control, when controlling the display unit to display the horizontally long image recorded on the recording medium, the display unit to display an image relating to the area in which the specific target is present using the specific-target information recorded in association with the horizontally long image. This provides an effect of displaying, when displaying the horizontally long image recorded on the recording medium, the image relating to the area in which the specific target is present using the specific-target information recorded in association with the horizontally long image.
Further, in the first and second embodiments, the display control unit may be configured to control the display unit to display the third image while moving an area displayed as the third image from one end portion to the other end portion of the both end portions. This provides an effect of displaying the third image while moving the area displayed as the third image from the one end portion to the other end portion of the both end portions of the horizontally long image.
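As a rough illustration of this scrolling display method, the following hypothetical sketch yields successive positions of the area displayed as the third image while it moves from one end portion of the horizontally long image toward the other (the step size and all names are assumptions):

```python
def scroll_positions(pano_width, third_width, step=16):
    """Yield the left edge of the area shown as the third image while it
    moves from the left end portion of the horizontally long image to the
    right end portion (a sketch; the step size is an assumption)."""
    for x in range(0, pano_width - third_width + 1, step):
        yield x

# Example: a 2400-pixel-wide panorama with an 800-pixel-wide third image.
positions = list(scroll_positions(2400, 800))  # 0, 16, ..., 1600
```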
Further, in the first and second embodiments, the information processing apparatus may further include an operation receiving unit configured to receive a switching instruction operation from a user that relates to a display method for the third image, in which the display control unit may be configured to select, based on the switching instruction operation, either one of a display method of displaying the third image while moving an area displayed as the third image from one end portion to the other end portion of the both end portions, and a display method of displaying a certain area as the third image without moving the area displayed as the third image, and to control the display unit to display the third image. This provides an effect of selecting, based on the switching instruction operation that relates to the display method for the third image, either one of the display method of displaying the third image while moving the area displayed as the third image from the one end portion to the other end portion of the both end portions, and the display method of displaying the certain area as the third image without moving the area displayed as the third image, and of displaying the third image.
Further, in the first and second embodiments, when the display method for the third image is switched, a display method according to the switching may be stored by the time a display of the third image based on the switching is completed, and when the horizontally long image is newly displayed after the display of the third image based on the switching is completed, the third image in the horizontally long image may be displayed using the stored display method. This provides an effect of storing, when the display method for the third image is switched, the display method according to the switching by the time the display of the third image based on the switching is completed, and of displaying, when the horizontally long image is newly displayed after the display of the third image based on the switching is completed, the third image in the horizontally long image using the stored display method.
Further, in the first and second embodiments, the information processing apparatus may further include an operation receiving unit configured to receive, when a reduced image obtained by reducing a whole of the horizontally long image is displayed in an area other than display areas of the first image to the third image, a designation operation for designating a position in the reduced image, in which the display control unit may be configured to control, if the third image is displayed by a display method of displaying the third image while moving an area displayed as the third image from one end portion to the other end portion of the both end portions of the horizontally long image when the designation operation is performed, the display unit to display an image including the position in the reduced image designated by the designation operation and a peripheral area thereof as a new third image, and perform a display of the third image while performing the movement with the position being a start point, and control, if the third image is displayed by a display method of displaying a certain area as the third image without moving the area displayed as the third image when the designation operation is performed, the display unit to display an image including the position and a peripheral area thereof as a new third image. This provides an effect of displaying, if the third image is displayed by the display method of displaying the third image while moving the area displayed as the third image from the one end portion to the other end portion of the both end portions when the designation operation is performed, the image including the position in the reduced image designated by the designation operation and the peripheral area thereof as the new third image, and performing the display of the third image while performing the movement with the position being the start point, and of displaying, if the third image is displayed by the display method of displaying the certain area as the third image without moving the area displayed as the third image when the designation operation is performed, the image including the position and the peripheral area thereof as the new third image.
Further, in the first and second embodiments, the display control unit may be configured to switch, during live view or during postview in accordance with a user operation, between a first display method of displaying the third image with a position of the third image in the horizontally long image being fixed and a second display method of displaying an image including an area in which a specific target included in the horizontally long image is present as the third image, and switch, during reproduction in accordance with a user operation, between a third display method of displaying the third image while moving an area displayed as the third image from one end portion to the other end portion of the both end portions and the second display method. This provides an effect of switching, during the live view or during the postview in accordance with the user operation, between the first display method and the second display method, and of switching, during the reproduction in accordance with the user operation, between the third display method and the second display method.
Further, in the first and second embodiments, the display control unit may be configured to switch, during live view in accordance with a user operation, between a first display method of displaying the third image with a position of the third image in the horizontally long image being fixed and a second display method of displaying an image including an area in which a specific target included in the horizontally long image is present as the third image, and switch, during postview or reproduction in accordance with a user operation, between a third display method of displaying the third image while moving an area displayed as the third image from one end portion to the other end portion of the both end portions and the second display method. This provides an effect of switching, during the live view in accordance with the user operation, between the first display method and the second display method, and of switching, during the postview or the reproduction in accordance with the user operation, between the third display method and the second display method.
Further, in the first and second embodiments, the display control unit may be configured to control, when the information processing apparatus is shipped from a factory or when a user uses the information processing apparatus, if an image display method by a live view operation of displaying an image generated by the imaging unit on the display unit before the user records the image, an image display method by a postview operation of displaying the recorded image on the display unit immediately after the user records the image, and an image display method by a reproduction operation of displaying an image specified by the user on the display unit are individually set, the display unit to display each image based on the setting. This provides an effect of displaying, when the information processing apparatus is shipped from the factory or when the user uses the information processing apparatus, if the image display method by the live view operation, the image display method by the postview operation, and the image display method by the reproduction operation are individually set, each image based on the setting.
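For illustration, such individually settable display methods could be represented as a simple mapping from operation to display method, as in the hypothetical sketch below (the keys, values, and defaults are assumptions and do not appear in the disclosure):

```python
# Hypothetical per-operation settings for how the third image is displayed.
DISPLAY_METHOD = {
    "live_view": "fixed",           # fixed position in the horizontally long image
    "postview": "specific_target",  # area containing a detected specific target
    "reproduction": "scrolling",    # move between the two end portions
}

def display_method_for(operation):
    """Return the individually set display method for the given operation."""
    return DISPLAY_METHOD[operation]
```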
Further, according to a third embodiment of the present disclosure, there is provided a display apparatus including: a display unit configured to display an image; and a display control unit configured to control, when controlling the display unit to display a horizontally long image in a state in which a long side of the display unit is almost parallel to a direction of gravitational force, the display unit to display a plurality of areas of the horizontally long image in a plurality of rows in a longitudinal direction of the display unit, the display control unit being configured to control the display unit to display, as the plurality of areas of the horizontally long image, a first image and a second image being two images including areas of both end portions in a longitudinal direction of the horizontally long image, and a third image being an image including an area present between the both end portions. In addition, there are provided a control method for such a display apparatus and a program that causes a computer to execute such a method. This provides an effect of displaying, when displaying the horizontally long image in the state in which the long side of the display unit is almost parallel to the direction of gravitational force, the plurality of areas of the horizontally long image in the plurality of rows in the longitudinal direction of the display unit, and of displaying, as the plurality of areas of the horizontally long image, the two images (first image and second image) including the areas of the both end portions in the longitudinal direction of the horizontally long image, and the image (third image) including the area present between the both end portions.
Further, in the third embodiment, the display control unit may be configured to control the display unit to display the first image and the second image in the same row and the third image in another row. This provides an effect of displaying the first image and the second image in the same row and the third image in the other row.
Further, in the third embodiment, the display control unit may be configured to control the display unit to display the third image in a row above a row in which the first image and the second image are displayed. This provides an effect of displaying the third image in the row above the row in which the first image and the second image are displayed.
Further, in the third embodiment, the display control unit may be configured to control the display unit to display the first image to the third image such that the third image has a horizontal size larger than a horizontal size of each of the first image and the second image. This provides an effect of displaying the first image to the third image such that the third image has the horizontal size larger than the horizontal size of each of the first image and the second image.
Further, in the third embodiment, the display control unit may be configured to control the display unit to display an image in an area other than display areas of the first image to the third image, the image being obtained by reducing a whole of the horizontally long image. This provides an effect of displaying the image in the area other than the display areas of the first image to the third image, the image being obtained by reducing a whole of the horizontally long image.
Further, in the third embodiment, the display apparatus may further include an attitude detection unit configured to detect an attitude of the display apparatus, in which the display control unit may be configured to control, when the long side of the display unit is almost perpendicular to the direction of gravitational force, the display unit to display the first image to the third image in a smaller number of rows than the number of rows when the long side of the display unit is almost parallel to the direction of gravitational force. This provides an effect of displaying, when the long side of the display unit is almost perpendicular to the direction of gravitational force, the first image to the third image in a smaller number of rows than the number of rows when the long side of the display unit is almost parallel to the direction of gravitational force.
Further, in the third embodiment, the display control unit may be configured to control, when controlling the display unit to display an image being a display target that has a ratio of a horizontal size to a vertical size smaller than a ratio of a horizontal size to a vertical size of the horizontally long image, the display unit to display the first image to the third image in a smaller number of rows than the number of rows when the horizontally long image is displayed. This provides an effect of displaying, when displaying the image being the display target that has the ratio of the horizontal size to the vertical size smaller than the ratio of the horizontal size to the vertical size of the horizontally long image, the first image to the third image in a smaller number of rows than the number of rows when the horizontally long image is displayed.
According to the embodiments of the present disclosure, it is possible to exert an excellent effect that an image can be appropriately displayed.
These and other objects, features and advantages of the present disclosure will become more apparent in light of the following detailed description of best mode embodiments thereof, as illustrated in the accompanying drawings.
Hereinafter, embodiments for carrying out the present disclosure (hereinafter, referred to as embodiments) will be described. Descriptions will be made in the following order.
1. First Embodiment (display control: example of displaying plurality of areas of panoramic image in plurality of rows in longitudinal direction of display unit)
2. Second Embodiment
The information processing apparatus 100 is realized by an information processing apparatus including a plurality of imaging units (e.g., a smart phone with a multi-eye camera or a cell phone with a multi-eye camera), for example. Note that, for the sake of description, the information processing apparatus 100 is simplified and shown in
In the first embodiment of the present disclosure, the ratio of a horizontal size to a vertical size of an image is defined as the “aspect ratio”. Further, an image having an aspect ratio larger than the aspect ratio (16:9) of a digital Hi-Vision image (high-definition television image) is defined as a “panoramic image”.
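As a concrete check of this definition, the following hypothetical helper classifies an image as panoramic when its aspect ratio exceeds 16:9:

```python
def is_panoramic(width, height):
    """Per the definition above: an image whose aspect ratio (horizontal
    size to vertical size) exceeds 16:9 is treated as a panoramic image."""
    return width / height > 16 / 9

print(is_panoramic(1920, 1080))  # False: exactly 16:9, not panoramic
print(is_panoramic(3840, 1080))  # True: 32:9 is wider than 16:9
```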
The information processing apparatus 100 includes an electronic substrate 101, a battery storage unit 102, a changeover switch 111, a determination key 112, an imaging unit 130, and a display unit 140.
The electronic substrate 101 is an electronic substrate that mainly performs functions other than an image-capturing function. In the first embodiment of the present disclosure, the plurality of imaging systems (imaging unit 130) arranged parallel to a short side of the casing of the information processing apparatus 100 are arranged as close as possible to an outer edge of that short side of the casing. That is, no electronic substrate that performs functions other than the image-capturing function and an image-displaying function is provided between the plurality of imaging systems (imaging unit 130) and the outer edge of the short side of the casing. Alternatively, it is favorable that the area of the electronic substrate provided between the plurality of imaging systems (imaging unit 130) and the outer edge of the short side of the casing is set to ½ or less of the area of the electronic substrate that is arranged in the information processing apparatus 100 and performs the functions other than the image-capturing function and the image-displaying function.
The battery storage unit 102 is an area in which a battery is stored.
The changeover switch 111 is an operation member (a so-called toggle switch) used for switching between two functions. For example, the changeover switch 111 is used for switching between a still-image capturing mode for recording a still image and a moving-image capturing mode for recording a moving image. Further, for example, the changeover switch 111 is used for changing a scroll method (whether or not to perform scroll shown in
The determination key 112 is an operation member that is pressed by a user to set various functions. For example, when pressed in the still-image capturing mode, the determination key 112 functions as a shutter button.
Note that a numeric keypad or arrow keys are displayed on the display unit 140 as appropriate, either automatically or according to a user operation, and the displayed numeric keypad or arrow keys can then be operated by the user.
The imaging unit 130 serves to image a subject and generate image data. Note that, in the first embodiment of the present disclosure, “generation of image data by the imaging unit 130” is used in a sense that includes imaging by the imaging unit 130. Note that circles in the imaging unit 130 shown in Part “b” of
The display unit 140 is a display apparatus that displays various images, and is configured as a touch panel, for example. For example, an image generated by an imaging operation is displayed on the display unit 140. A liquid crystal display (LCD) panel or an organic electroluminescence (EL) panel may be used as the display unit 140, for example. Incidentally, the aspect ratio of the display unit of an information processing apparatus with a camera (e.g., a smart phone with a multi-eye camera) or of a general imaging apparatus is often 4:3 or 16:9. Therefore, in the first embodiment of the present disclosure, an example in which the aspect ratio of the display unit 140 is 4:3 is described.
The multi-eye information processing apparatus 100 is used as described above. Hence, for example, in the case where a panoramic image being a still image is captured, images captured by the plurality of imaging systems at the same time can be coupled to form the panoramic image immediately after the shutter button is pressed. Note that there exists a single-eye imaging apparatus capable of generating a panoramic image by an operation of rotating the imaging apparatus in the horizontal direction with the imaging position (position of the photographer) being the center of rotation (a so-called panning operation). In the case where a panoramic image is generated by such a single-eye imaging apparatus, the panning operation is necessary as described above, and the panoramic image therefore cannot be generated immediately after the shutter button is pressed. In contrast, the information processing apparatus 100 can capture a panoramic image in a shorter time than the single-eye imaging apparatus.
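A rough sketch of this coupling step is given below, assuming three equally sized frames captured at the same time and a known horizontal overlap; the actual combination (performed by the image combination processing unit 224 described later) would also correct geometric distortion, so this is illustrative only:

```python
import numpy as np

def couple_frames(left, center, right, overlap=0):
    """Couple three frames captured at the same time into one horizontally
    long image by trimming an assumed overlap and concatenating (a naive
    sketch of panoramic coupling, not the disclosed combination processing)."""
    if overlap:
        left = left[:, :-overlap]
        right = right[:, overlap:]
    return np.concatenate([left, center, right], axis=1)
```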
[Internal Configuration Example of Information Processing Apparatus]
The information processing apparatus 100 includes an application processor 11, a digital base band processing unit 12, an analog base band processing unit 13, and a radio frequency (RF) processing unit 14. The information processing apparatus 100 further includes a battery 15, a microphone 16, a speaker 17, an antenna 18, the changeover switch 111, the determination key 112, the imaging unit 130, and the display unit 140. The information processing apparatus 100 further includes an attitude detection unit 150, a program memory 160, an image memory 170, a recording medium 180, and a digital signal processor (DSP) 200. Note that the RF processing unit 14 includes the antenna 18, and the analog base band processing unit 13 includes the microphone 16 and the speaker 17.
The application processor 11 controls each unit of the information processing apparatus 100 based on various programs stored in a built-in memory. The application processor 11 includes, for example, a central processing unit (CPU), a read only memory (ROM), and a random access memory (RAM).
For example, in the case where a telephone reception operation is performed, radio waves received by the antenna 18 are demodulated by the digital base band processing unit 12 through the RF processing unit 14 and the analog base band processing unit 13. Then, a demodulation result of the digital base band processing unit 12 is output from the speaker 17 through the analog base band processing unit 13.
On the other hand, in the case where a telephone transmission operation is performed, sound input from the microphone 16 is modulated by the digital base band processing unit 12 through the analog base band processing unit 13. Then, the modulated sound data is transmitted from the antenna 18 through the analog base band processing unit 13 and the RF processing unit 14.
Further, when the user performs an imaging-operation starting instruction operation, the imaging operation is performed in the information processing apparatus 100. For example, when the user performs the imaging-operation starting instruction operation, the application processor 11 instructs respective units (imaging unit 130, DSP 200, and the like) involved in the imaging operation to start the imaging operation, and activates those units. Then, the activated units perform the imaging operation and a generated image is displayed on the display unit 140. When the user performs an image recording instruction operation, the generated image is recorded on the recording medium 180. Further, when the user performs an instruction operation to wirelessly transmit an image, the generated image is wirelessly transmitted. For example, generated image data is modulated by the digital base band processing unit 12 and transmitted from the antenna 18 through the analog base band processing unit 13 and the RF processing unit 14. Note that the battery 15 is a battery that supplies power to the information processing apparatus 100.
Note that the changeover switch 111, the determination key 112, the imaging unit 130, the display unit 140, the attitude detection unit 150, the program memory 160, the image memory 170, the recording medium 180, and the DSP 200 will be described in detail with reference to
[Internal Configuration Example of Imaging Unit]
The imaging unit 130 includes three imaging systems (a first imaging system 191 to a third imaging system 193), a power-supply control unit 207, and power-supply units 208 and 209. Further, the three imaging systems are arranged in a specific direction. That is, the first imaging system 191 is arranged in the middle while the second imaging system 192 and the third imaging system 193 are arranged on both sides of the first imaging system 191.
The first imaging system 191 includes an optical system 131, the imaging device 134, and an interface (I/F) 137 with the DSP. Further, the second imaging system 192 includes an optical system 132, an imaging device 135, and an I/F 138 with the DSP. Further, the third imaging system 193 includes an optical system 133, an imaging device 136, and an I/F 139 with the DSP. Note that the configurations of the first imaging system 191 to the third imaging system 193 are almost the same. Therefore, the configuration of the first imaging system 191 is mainly described and descriptions of the second imaging system 192 and the third imaging system 193 are omitted.
The optical system 131 is constituted by a plurality of lenses (including a zoom lens and a focus lens) that collect light from a subject. Further, the amount of light passing through those lenses (i.e., exposure) is adjusted by a diaphragm (not shown). Then, the collected light from the subject is input into the imaging device 134.
The imaging device 134 is an imaging device that converts a subject image input via the optical system 131 into an image signal. That is, the imaging device 134 receives light from the subject input via the optical system 131 and performs photoelectric conversion, thereby generating an analog image signal that depends on the amount of light received. The analog image signal thus generated by the imaging device 134 is provided to the DSP 200 via the I/F 137 with the DSP. Note that, for example, a solid-state imaging device of a charge coupled device (CCD) type or a complementary metal oxide semiconductor (CMOS) type may be used as the imaging device.
The I/F 137 with the DSP is an interface for connecting the imaging device 134 and the DSP 200 to each other.
The power-supply control unit 207 controls the power-supply units 208 and 209 based on a power-supply controlling instruction from an imaging control unit 201 (shown in
The power-supply unit 208 supplies power to the first imaging system 191 based on a control of the power-supply control unit 207. Further, the power-supply unit 209 supplies power to the second imaging system 192 and the third imaging system 193 based on a control of the power-supply control unit 207. Note that the power-supply units 208 and 209 are realized by, for example, a commercially available power-supply integrated circuit (IC).
Further, each of the first imaging system 191 to the third imaging system 193 is connected to the DSP 200 via a single data line and seven kinds of signal lines. A description is made with the single data line that connects the first imaging system 191 and the DSP 200 to each other being denoted by L1 and the seven kinds of signal lines being referred to as signal lines L2 to L8. Note that the data line and signal lines of the second imaging system 192 and the third imaging system 193 are almost the same as the data line and signal lines of the first imaging system 191. Therefore, the data line and signal lines of the first imaging system 191 are mainly described and descriptions of the second imaging system 192 and the third imaging system 193 are omitted.
The data line L1 is a data line for transmitting image data from the imaging device 134 to the DSP 200. This data line L1 is favorably constituted by a plurality of data lines to increase the transmission rate of the image data, for example. Further, in order to increase the transmission rate of the image data and to increase noise resistance on the transmission path, a high-speed, differential-transmission data line is favorably used as the data line L1. For example, low voltage differential signaling (LVDS) is favorably used for the data line L1.
The signal line L2 is a bi-directional communication line between the imaging device 134 and the DSP 200. For example, a four-line-structure serial communication line may be used as the signal line L2. The signal line L2 is used when various setting values necessary for using the imaging device 134 are set from the DSP 200 side, for example. As an example, setting values for thinning out image data output from the imaging device 134 to the DSP 200 and outputting the thinned-out image data are written in registers 370 and 380 (shown in
The signal line L3 is a clock signal line for supplying a clock from the DSP 200 to the imaging device 134. Using the clock supplied via the signal line L3, the imaging device 134 performs an imaging operation at one pixel per clock cycle. Alternatively, a multiplier may be installed in the imaging device 134 so that the clock supplied from the DSP 200 is multiplied in the imaging device 134 and the imaging operation is performed at one pixel per multiplied clock cycle.
The signal line L4 is a reset signal line for supplying a reset signal from the DSP 200 to the imaging device 134.
The signal line L5 is a signal line for controlling ON and OFF of the imaging operation of the imaging device 134 from the DSP 200. That is, the signal line L5 is a signal line for notifying each imaging device of the stop and start of operation from the DSP 200. For example, if the user instructs execution of an imaging mode in which only one imaging device out of the three imaging systems is used, power consumption can be reduced by stopping the imaging operations of the two imaging devices not used.
The signal line L6 is a vertical synchronization signal line. That is, the signal line L6 is a signal line for notifying a synchronization signal indicating imaging timing for each frame from the DSP 200 to the imaging device 134.
The signal line L7 is a horizontal synchronization signal line. That is, the signal line L7 is a signal line for notifying a synchronization signal indicating imaging timing for each line in one frame from the DSP 200 to the imaging device 134.
The signal line L8 is a shutter signal line. For example, when an operation member for performing captured image recording (e.g., the determination key 112) is pressed by the user of the information processing apparatus 100, a shutter signal corresponding to this press is notified from the DSP 200 to the imaging device 134 via the signal line L8.
Further, it is assumed that, in the case where an imaging mode is set, either one of a single-eye imaging operation and a multi-eye imaging operation can be selected based on a user operation. If the single-eye imaging operation is selected, an imaging operation using image data generated by the first imaging system 191 is performed. On the other hand, if the multi-eye imaging operation is selected, an imaging operation using image data generated by each of the first imaging system 191 to the third imaging system 193 is performed. Further, if the single-eye imaging operation is selected, a normal image is generated. If the multi-eye imaging operation is selected, a panoramic image (e.g., shown in Part “b” of
[Arrangement Configuration Examples of Imaging Devices]
Note that the center imaging device of the three imaging devices constituting the imaging unit may be arranged such that the longitudinal direction of the center imaging device almost coincides with the direction orthogonal to the arraying direction (see, for example, Japanese Patent Application Laid-open No. 2011-44837).
[Configuration Example of DSP]
The DSP 200 includes the imaging control unit 201, a CPU 202, a direct memory access (DMA) controller 203, a data bus 204, a program memory I/F 205, and an image memory I/F 206. The DSP 200 further includes an imaging device I/F 210, image buffers 211 to 219, an image-signal processing unit 220, resolution converting units 231, 241, and 251, and image rotation processing units 232 and 242. The DSP 200 further includes a display unit I/F 233, an external display device I/F 243, an encoding/decoding unit 252, a recording medium I/F 253, oscillating circuits 264 to 266, and a clock generating circuit 270. The DSP 200 further includes an auto focus (AF) control unit 281 and a face detection unit 282. The DSP 200 further includes an automatic exposure (AE) control unit 283 and an auto white balance (AWB) control unit 284. The CPU 202, the DMA controller 203, the image memory I/F 206, the image buffers 211 to 219, the image-signal processing unit 220, and the like are connected to the data bus 204. Signals from the changeover switch 111, the determination key 112, and the attitude detection unit 150 are input to the imaging control unit 201.
The attitude detection unit 150 detects a change of the attitude of the information processing apparatus 100 by detecting acceleration, motion, tilt, and the like of the information processing apparatus 100, and outputs a detection result (attitude information relating to the detected attitude change) to the imaging control unit 201. For example, the attitude detection unit 150 detects, as the attitude change of the information processing apparatus 100, rotation angles about three axes (e.g., X-axis, Y-axis, and Z-axis) and outputs a detection result thereof to the imaging control unit 201. A sensor capable of detecting the rotation angles about the three axes in the information processing apparatus 100 can be used as the attitude detection unit 150. It should be noted that a sensor capable of detecting a rotation angle about at least one axis may be used. For example, a fall sensor, a gravity sensor, a gyro sensor, an acceleration sensor capable of detecting an acceleration direction, or an angular velocity sensor capable of detecting a rotational motion can be used as the attitude detection unit 150.
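As an illustration of how such a detection result might be used, the following hypothetical sketch decides between the vertical state and the horizontal state from the gravity components reported along the display's long axis and short axis (the axis names and the simple comparison are assumptions):

```python
def display_orientation(g_short, g_long):
    """Return "vertical" when the long side of the display unit is almost
    parallel to the direction of gravitational force, based on the gravity
    components along the display's short and long axes (a sketch)."""
    return "vertical" if abs(g_long) > abs(g_short) else "horizontal"

print(display_orientation(g_short=0.1, g_long=9.7))  # "vertical": held upright
print(display_orientation(g_short=9.7, g_long=0.2))  # "horizontal": held sideways
```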
The imaging control unit 201 controls the units related to imaging processing. For example, the imaging control unit 201 determines, on the basis of a detection result from the attitude detection unit 150, an attitude of the information processing apparatus 100 and performs imaging control for the units on the basis of a result of the determination. The imaging control unit 201 performs imaging control for the units on the basis of input signals from the changeover switch 111 and the determination key 112.
In the first embodiment of the present disclosure, the user can set in advance an imaging mode (image size, etc.) used in recording an image generated by the imaging unit 130. For example, a menu screen for setting the imaging mode is displayed on the display unit 140, and the user inputs desired setting content on the menu screen using the determination key 112. The imaging mode includes, for example, the number of imaging devices used during imaging and the vertical and horizontal image sizes of an image during recording. The imaging mode further includes, for example, a vertical back porch and a vertical front porch representing intervals between a valid area of an image and a vertical synchronization signal, and a horizontal back porch and a horizontal front porch representing intervals between the valid area of the image and a horizontal synchronization signal. The imaging control unit 201, the units in the DSP 200, and the imaging devices 134 to 136 include registers that store the imaging mode.
When the imaging mode is set by the user, the imaging control unit 201 notifies the units in the DSP 200 and the imaging devices 134 to 136 of the set imaging mode and causes the registers included in the units to store the imaging mode. In this way, the setting content of the imaging mode set by the user is stored in the registers included in the units. Consequently, the user can easily switch and use a plurality of photographing conditions.
The imaging control unit 201 notifies, for example, on the basis of the setting content of the imaging mode stored in the register incorporated therein, the units in the DSP 200 and the imaging devices 134 to 136 of a vertical synchronization signal, a horizontal synchronization signal, and a clock signal. The imaging control unit 201 notifies, for example, on the basis of the setting content of the imaging mode stored in the register incorporated therein, the units related to display in the DSP 200 and the display unit 140 of the vertical synchronization signal, the horizontal synchronization signal, and the clock signal. The imaging control unit 201 outputs, for example, to the power-supply control unit 207, a signal for controlling ON and OFF of a power supply.
The CPU 202 controls the entire DSP 200 on the basis of various programs stored in the program memory 160. For example, the CPU 202 determines, on the basis of a detection result (attitude information) from the attitude detection unit 150, whether the information processing apparatus 100 (display unit 140) is in the horizontal state or the vertical state, and performs display control on the basis of the determined state. The control content will be described in detail with reference to
The DMA controller 203 controls transfer of data among memories on the basis of the control by the CPU 202.
The program memory I/F 205 is an interface for connecting the program memory 160 and the DSP 200.
The image memory I/F 206 is an interface for connecting the image memory 170 and the DSP 200.
The imaging device I/F 210 is an interface for connecting the imaging devices 134 to 136 and the DSP 200. Specifically, image data generated by the imaging devices 134 to 136 is input to the imaging device I/F 210. For example, when the data line L1 for transmitting the image data from the imaging devices 134 to 136 is of a small-amplitude LVDS type, the image data from the imaging devices 134 to 136 is converted to GND potential or power-supply potential in the DSP I/Fs 137 to 139. The image buffers 211 to 219, in three systems corresponding to the imaging devices 134 to 136, are provided at a post stage of the imaging device I/F 210.
The image buffers 211 to 219 are image buffers that store the image data output from the imaging devices 134 to 136. The stored image data are written in the image memory 170 via the data bus 204. For example, three image buffers are provided for each of the imaging devices. The image buffers are connected to the data bus 204. For example, three image buffers 211 to 213 are provided for the imaging device 134. Three image buffers 214 to 216 are provided for the imaging device 135. Three image buffers 217 to 219 are provided for the imaging device 136. In the first embodiment of the present disclosure, even while the image data are read out from the image buffers 211 to 219 in order to write the image data in the image memory 170, image data input anew from the imaging devices 134 to 136 are sequentially stored in the image buffers 211 to 219. Therefore, it is favorable to provide two or more image buffers for each of the imaging devices 134 to 136 as the image buffers 211 to 219.
It is favorable that the capacity of one of the image buffers 211 to 219 is larger than the bit width of the data bus 204. For example, when the data bus 204 has a 128-bit width, it is favorable that the image buffers have a capacity equal to or larger than 128 bits. It is more favorable that the capacity of one of the image buffers 211 to 219 is equal to or larger than double the bit width of the data bus 204. For example, when the data bus 204 has a 128-bit width, it is favorable that the image buffers have a capacity equal to or larger than 256 bits.
On the other hand, the capacity of one of the image buffers 211 to 219 can be set to be equal to or smaller than an image data amount of one image generated by one imaging device. For example, it is favorable that the capacity of one of the image buffers 211 to 219 is equal to or smaller than a data amount of image data generated by pixels for one line of the imaging device 134.
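A worked example of this sizing guidance is sketched below, with the 128-bit bus width and the 12 bits per pixel taken from the surrounding text and an assumed 1920-pixel line:

```python
def buffer_size_bounds(bus_width_bits, line_pixels, bits_per_pixel):
    """Illustrate the guidance above: a buffer favorably holds at least
    double the data-bus width, and need not exceed one line of image data
    generated by one imaging device (the line length is an assumption)."""
    lower = 2 * bus_width_bits
    upper = line_pixels * bits_per_pixel
    return lower, upper

# 128-bit bus, 1920-pixel line, 12 bits per pixel:
print(buffer_size_bounds(128, 1920, 12))  # (256, 23040) bits
```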
In the first embodiment of the present disclosure, bit width of a data line connecting the imaging devices 134 to 136 and the DSP 200 is set to, for example, 12 bits. For example, bit width of the data bus 204 of the DSP 200 is set to 128-bit width and the capacity of one of the image buffers 211 to 219 is set to 128 bits. The image-signal processing unit 220 applies, on the basis of the control by the imaging control unit 201, various kinds of image signal processing to image data input via the image buffers 211 to 219 and the data bus 204. The internal configuration of the image-signal processing unit 220 will be described in detail with reference to
The resolution converting unit 231 performs, on the basis of the control by the imaging control unit 201 or the CPU 202, resolution conversion for causing the display unit 140 to display images and outputs image data subjected to the resolution conversion to the image rotation processing unit 232.
The resolution converting unit 241 performs, on the basis of the control by the imaging control unit 201 or the CPU 202, resolution conversion for causing an external display device 245 to display images and outputs image data subjected to the resolution conversion to the image rotation processing unit 242.
The image rotation processing unit 232 applies, on the basis of the control by the imaging control unit 201 or the CPU 202, rotation processing to the image data subjected to the resolution conversion and outputs the image data subjected to the rotation processing to the display unit I/F 233.
The image rotation processing unit 242 applies, on the basis of the control by the imaging control unit 201 or the CPU 202, rotation processing to the image data subjected to the resolution conversion and outputs the image data subjected to the rotation processing to the external display device I/F 243.
The display unit I/F 233 is an interface for connecting the display unit 140 and the DSP 200.
The external display device I/F 243 is an interface for connecting the external display device 245 and the DSP 200. The external display device 245 is, for example, a television.
The resolution converting unit 251 performs, on the basis of the control by the imaging control unit 201 or the CPU 202, resolution conversion for recording images and outputs image data subjected to the resolution conversion to the encoding/decoding unit 252. For example, the resolution converting unit 251 performs resolution conversion processing for converting resolution to a recorded image size desired by the user and resolution conversion processing for generating a thumbnail image.
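As a minimal illustration of these two conversions, the nearest-neighbor sketch below produces both a recording-size image and a thumbnail (the sizes, method, and names are assumptions; the actual processing of the resolution converting unit 251 is not specified at this level of detail):

```python
import numpy as np

def downscale(img, out_h, out_w):
    """Naive nearest-neighbor resolution conversion (illustrative only)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]

frame = np.zeros((1080, 1920, 3), np.uint8)
recorded = downscale(frame, 720, 1280)  # assumed user-selected recording size
thumbnail = downscale(frame, 90, 160)   # assumed thumbnail size
```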
The encoding/decoding unit 252 performs, on the basis of the control by the imaging control unit 201 or the CPU 202, encoding for compressing the image data output from the resolution converting unit 251, and outputs the encoded image data to the recording medium I/F 253. Further, for displaying image data recorded on the recording medium 180 on the display unit 140, the encoding/decoding unit 252 reads the image data recorded on the recording medium 180 via the recording medium I/F 253 and decodes the image data. The decoded image data is stored in the image memory 170.
The recording medium I/F 253 is an interface for connecting the recording medium 180 and the DSP 200.
The recording medium 180 is a recording medium that records the image data supplied via the recording medium I/F 253. The recording medium 180 may be incorporated in the information processing apparatus 100 or may be detachably attached to the information processing apparatus 100. As the recording medium 180, for example, a tape (e.g., magnetic tape) or an optical disk (e.g., recordable digital versatile disc (DVD)) can be used. As the recording medium 180, for example, a magnetic disk (e.g., hard disk), a semiconductor memory (e.g., memory card), or a magneto-optical disk (e.g., MiniDisc (MD)) may be used. Note that the image data recorded on the recording medium 180 will be described in detail with reference to
Note that the oscillating circuits 264 to 266 and the clock generating circuit 270 will be described in detail with reference to
The AF control unit 281 performs focus control on image data input via the image buffers 211 to 219 and the data bus 204 so that the subject included in a predetermined area in the image (captured image) is brought into focus. Note that the predetermined area can be, for example, an area in the middle of the captured image, an area specified by the user, or an area including the position of a face detected by the face detection unit 282. Further, when a plurality of predetermined areas are present, focus control is performed for each of the predetermined areas. Then, information on a position (focus position) focused in the captured image is output to the CPU 202 and the imaging control unit 201. Further, the information on the focus position is stored in the AF control unit 281.
The face detection unit 282 detects a face of a person included in an image (captured image) of the image data input via the image buffers 211 to 219 and the data bus 204, and outputs a detection result thereof to the CPU 202 and the imaging control unit 201. Alternatively, the face detection unit 282 may detect a face of a person included in an image of image data read out from the recording medium 180. Note that, as a method of detecting a face included in an image, for example, a face detection method by matching between a template in which luminance distribution information of a face is recorded and a content image can be used (see, for example, Japanese Patent Application Laid-open No. 2004-133637). A face detection method based on a skin-color part or the amount of feature of a human face included in an image can also be used. Those face detection methods make it possible to determine the position and size of a face of a person in an image. Further, the detection result of a face is stored in the face detection unit 282. Note that the face detection unit 282 is an example of a specific-target detection unit described in the claims.
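For illustration of the template-matching approach mentioned above, a naive single-scale search that compares a luminance template against every pixel position might look as follows (a practical detector would also scan over template scales to recover the face size; everything here is a hypothetical sketch, not the method of the cited publication):

```python
import numpy as np

def match_template(image, template):
    """Slide a template holding a face's luminance distribution over a
    grayscale image and return the top-left corner of the position with
    the smallest sum of squared differences (a brute-force sketch)."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw].astype(np.float64)
            d = np.sum((patch - template) ** 2)
            if d < best:
                best, best_pos = d, (x, y)
    return best_pos
```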
The AE control unit 283 is an automatic exposure control unit for automatically adjusting a shutter speed and a stop value with respect to the image data input via the image buffers 211 to 219 and the data bus 204, and outputs a detection result thereof to the CPU 202 and the imaging control unit 201.
The AWB control unit 284 performs automatic white balance adjustment and the like on the image data input via the image buffers 211 to 219 and the data bus 204, and outputs a detection result thereof to the CPU 202 and the imaging control unit 201.
[Internal Configuration Example of Image-Signal Processing Unit 220]
The image-signal processing unit 220 includes a pixel addition processing unit 221, a demosaic processing unit 222, a YC conversion processing unit 223, an image combination processing unit 224, a sharpness processing unit 225, a color adjustment processing unit 226, and an RGB conversion processing unit 227.
The pixel addition processing unit 221 applies pixel addition processing and pixel thinning-out processing to the image data generated by the imaging devices 134 to 136.
The demosaic processing unit 222 performs demosaic processing (interpolation processing) such that the intensities of all the channels of R, G, and B are aligned for each pixel position of the image data (mosaic images) generated by the imaging devices 134 to 136. The demosaic processing unit 222 supplies the RGB images subjected to the demosaic processing to the YC conversion processing unit 223. Specifically, the demosaic processing unit 222 interpolates Bayer data having only pixel data for one color per pixel and calculates the three pixel data of R, G, and B for each pixel.
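As an illustrative sketch only (the RGGB layout, the bilinear 3×3 neighbor interpolation, and NumPy are assumptions for exposition, not the disclosed implementation), demosaic processing of this kind can be outlined as follows.

    import numpy as np

    def demosaic_bilinear(bayer):
        # Interpolate an RGGB Bayer mosaic into a full RGB image by averaging
        # the available same-color neighbors in a 3x3 window.
        h, w = bayer.shape
        rgb = np.zeros((h, w, 3), np.float64)
        masks = np.zeros((h, w, 3), bool)
        masks[0::2, 0::2, 0] = True   # R sites
        masks[0::2, 1::2, 1] = True   # G sites on R rows
        masks[1::2, 0::2, 1] = True   # G sites on B rows
        masks[1::2, 1::2, 2] = True   # B sites
        for c in range(3):
            rgb[masks[:, :, c], c] = bayer[masks[:, :, c]]
            known = masks[:, :, c].astype(np.float64)
            pad_v = np.pad(rgb[:, :, c], 1)   # zero padding at the borders
            pad_m = np.pad(known, 1)
            acc = np.zeros((h, w)); cnt = np.zeros((h, w))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += pad_v[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                    cnt += pad_m[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
            filled = acc / np.maximum(cnt, 1)  # mean of the known neighbors
            rgb[:, :, c] = np.where(masks[:, :, c], rgb[:, :, c], filled)
        return rgb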
The YC conversion processing unit 223 applies YC matrix processing and band limitation for chroma components to the RGB images generated by the demosaic processing unit 222 to thereby generate a luminance signal (Y) and a color difference signal (Cr, Cb). The generated luminance signal (Y image) and color difference signal (C image) are supplied to the image combination processing unit 224.
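The YC matrix processing can be sketched, for example, with the ITU-R BT.601 coefficients; the exact matrix used by the YC conversion processing unit 223 is not specified here, so the coefficients below are an assumption. The inverse transform corresponds to the conversion performed later by the RGB conversion processing unit 227.

    import numpy as np

    def rgb_to_ycbcr(rgb):
        # Full-range BT.601-style YC matrix (illustrative coefficients).
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance signal (Y)
        cb = 0.564 * (b - y)                    # color difference signal (Cb)
        cr = 0.713 * (r - y)                    # color difference signal (Cr)
        return y, cb, cr

    def ycbcr_to_rgb(y, cb, cr):
        # Inverse of the matrix above (YCbCr data back to RGB data).
        r = y + 1.403 * cr
        b = y + 1.773 * cb
        g = (y - 0.299 * r - 0.114 * b) / 0.587
        return np.stack([r, g, b], axis=-1)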
The image combination processing unit 224 applies image combination to the image data generated by the YC conversion processing unit 223 and outputs the combined image data to the sharpness processing unit 225. Note that this image combination processing will be described in detail with reference to
The sharpness processing unit 225 applies sharpness processing (processing for highlighting contour of subject) for extracting a portion with a large signal change and highlighting the portion to the image data generated by the image combination processing unit 224. The sharpness processing unit 225 supplies the image data subjected to the sharpness processing to the color adjustment processing unit 226.
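One common way to realize such processing is unsharp masking; the sketch below is a simplification assumed for illustration (a separable 1-2-1 blur with wrap-around edges via np.roll), not the disclosed circuit.

    import numpy as np

    def sharpen(y, amount=1.0):
        # Unsharp masking: extract the portion with a large signal change
        # (the contour of the subject) and add it back, highlighting it.
        blur = y.astype(np.float64).copy()
        for axis in (0, 1):  # separable 1-2-1 low-pass filter
            blur = (np.roll(blur, 1, axis) + 2 * blur + np.roll(blur, -1, axis)) / 4
        detail = y - blur            # portion with a large signal change
        return y + amount * detail   # highlighted result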
The color adjustment processing unit 226 applies adjustment of a hue and chroma to the image data subjected to the sharpness processing by the sharpness processing unit 225.
The RGB conversion processing unit 227 converts the image data subjected to the adjustment of a hue and chroma by the color adjustment processing unit 226 from YCbCr data to RGB data.
A flow of image data through the image-signal processing unit 220 is described. For example, it is assumed that each of the signal processing units in the image-signal processing unit 220 directly reads image data from the image memory 170 through the data bus 204 and writes the image data after the signal processing into the image memory 170 through the data bus 204. This is advantageous in that each signal processing unit can read image data in a desired position in the image data at desired timing. However, the amount of data that needs to be transmitted through the data bus 204 increases, and hence it is necessary to increase the operating frequency of the data bus 204. Therefore, there is a fear that the design of the data bus 204 becomes difficult and power consumption increases.
For example, it is assumed, on the other hand, that each of the signal processing units in the image-signal processing unit 220 receives image data from the signal processing unit at the preceding stage without passing through the data bus 204 and passes the image data after the signal processing to the signal processing unit at the subsequent stage without passing through the data bus 204. In this case, the data bus 204 is not used. This is advantageous in that the design of an LSI is easy and power consumption can be reduced. However, there is a fear that each of the signal processing units is not able to read image data in a desired position in the image data at desired timing.
Therefore, in the first embodiment of the present disclosure, image data is passed directly among the signal processing units from the demosaic processing unit 222 through the color adjustment processing unit 226, for which the image size is substantially fixed, in order to reduce the operating frequency of the data bus 204 and the power consumption. At a stage preceding a signal processing unit that handles a large amount of image data, as in resolution conversion, the image data is written into the image memory 170, and the desired image data is read out from the image memory 170 when the resolution conversion is performed.
[Configuration Example of Clock Generating Circuit]
The clock generating circuit 270 includes multipliers for high-frequency clock 20 and 24, frequency dividers for high-frequency clock 21 and 25, multipliers for low-frequency clock 22 and 26, and frequency dividers for low-frequency clock 23 and 27. The clock generating circuit 270 further includes multipliers for clock 28 and 30 and frequency dividers for clock 29 and 31. The multipliers multiply a frequency of an input clock. The frequency dividers reduce a frequency of an input clock to 1/n (n is an arbitrary integer). In this example, the clock generating circuit 270 generates at least six kinds of clocks according to connection destinations of the units in the DSP 200 shown in
Oscillators 261 to 263 are oscillation sources for generating clock signals supplied into the DSP 200. For example, quartz oscillators are used.
The oscillating circuits 264 to 266 generate clock signals supplied into the DSP 200 and output the generated clock signals to the clock generating circuit 270.
Two kinds of the six kinds of clocks generated by the clock generating circuit 270 are clocks supplied to the imaging devices 134 to 136. One kind of the clocks supplied to the imaging devices 134 to 136 is a clock having a relatively large frequency for generating an image having a relatively large number of pixels. A clock output from the oscillating circuit 264 is input to the multiplier for high-frequency clock 20 and multiplied, and the multiplied clock is input to the frequency divider for high-frequency clock 21 and divided. In this way, the clock having the relatively large frequency is generated. The other kind is a clock having a relatively small frequency for generating an image having a relatively small number of pixels. A clock output from the oscillating circuit 264 is input to the multiplier for low-frequency clock 22 and multiplied, and the multiplied clock is input to the frequency divider for low-frequency clock 23 and divided. In this way, the clock having the relatively small frequency is generated. The clocks divided by the frequency divider for high-frequency clock 21 and the frequency divider for low-frequency clock 23 are output as clocks generated in the clock generating circuit 270 and supplied to the imaging devices 134 to 136 through the inside of the DSP 200. The clocks supplied to the imaging devices 134 to 136 are not limited to the two kinds described in this example. It is favorable that a larger number of kinds of clocks are generated and used according to the size of an image generated by an imaging operation.
The other two kinds among the six kinds of clocks generated by the clock generating circuit 270 are clocks used in the inside of the DSP 200. One kind of the clocks used in the inside of the DSP 200 is a clock having a relatively large frequency for generating an image having a relatively large number of pixels. A clock output from the oscillating circuit 264 is input to the multiplier for high-frequency clock 24 and multiplied and the multiplied clock is input to the frequency divider for high-frequency clock 25 and divided. In this way, the clock having the relatively large frequency is generated. The other one kind is a clock having a relatively small frequency for generating an image having a relatively small number of pixels. A clock output from the oscillating circuit 264 is input to the multiplier for low-frequency clock 26 and multiplied and the multiplied clock is input to the frequency divider for low-frequency clock 27 and divided. In this way, the clock having the relatively small frequency is generated. The clocks divided by the frequency divider for high-frequency clock 25 and the frequency divider for low-frequency clock 27 are output as clocks generated in the clock generating circuit 270 and supplied to the inside of the DSP 200. The clocks used in the inside of the DSP 200 are not limited to the two kinds described in this example. It is favorable that a larger number of kinds of clocks are generated and used according to the size of an image generated by an imaging operation.
The remaining two kinds among the six kinds of clocks generated by the clock generating circuit 270 are a pixel clock for displaying an image in the display unit 140 and a pixel clock for displaying an image in a display device (e.g., external display device 245) on the outside of the information processing apparatus 100. A clock output from the oscillating circuit 265 is input to the multiplier for clock 28 and multiplied, and the multiplied clock is input to the frequency divider for clock 29 and divided. In this way, the pixel clock for displaying an image in the display unit 140 is generated. A clock output from the oscillating circuit 266 is input to the multiplier for clock 30 and multiplied, and the multiplied clock is input to the frequency divider for clock 31 and divided. In this way, the pixel clock for displaying an image in the display device on the outside of the information processing apparatus 100 is generated. The clock divided by the frequency divider for clock 29 is output as a clock generated in the clock generating circuit 270 and supplied to the display unit 140 through the inside of the DSP 200. The clock divided by the frequency divider for clock 31 is output as a clock generated in the clock generating circuit 270 and supplied to the display device on the outside of the information processing apparatus 100 through the inside of the DSP 200. The clocks for image display are not limited to the two kinds described in this example. It is favorable that a larger number of kinds of clocks are generated and used according to specifications of a display device connected to the information processing apparatus 100.
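The relation between an oscillation source and a generated clock reduces to one multiplication and one division; the sketch below illustrates the arithmetic, where the oscillator frequency and the multiplier/divider settings are made-up example values.

    def generate_clock(f_osc_hz, multiplier, divider):
        # Output frequency of a multiplier followed by a 1/n frequency divider.
        return f_osc_hz * multiplier / divider

    # e.g., deriving a high- and a low-frequency sensor clock from one oscillator
    f_osc = 12_000_000                      # hypothetical quartz oscillator
    f_high = generate_clock(f_osc, 25, 4)   # 75 MHz, for large-pixel-count images
    f_low = generate_clock(f_osc, 25, 12)   # 25 MHz, for small-pixel-count images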
In the example shown in
In the example shown in
[Configuration Example of Imaging Device and Pixel Readout Example]
The imaging device 134 includes pixels 40 to 47, a vertical scanning circuit 340, and a horizontal scanning circuit 345. The imaging device 134 includes ADCs (analog/digital (A/D) converters) 350 to 353, adders 354 to 357 and 366, and column latches 358 to 361. Further, the imaging device 134 includes switches 362 to 365, an output latch 367, an output circuit 368, registers 370 and 380, and multipliers/dividers 391 and 392. Note that, in general, an array of pixels in the longitudinal direction of an imaging device will be referred to as a column and an array of pixels in the lateral direction will be referred to as a row. Therefore, in the following description, the names “column” and “row” are used as appropriate. In this example, in the imaging device 134, a part of the pixels (pixels 40 to 47) and units related to those pixels are representatively described. Illustration and description of other components are omitted.
In the imaging device 134, vertical control lines 341 to 344 are wired in the row direction, and every other pixel present on the same line is connected to the same vertical control line. Data readout lines 346 to 349 are wired in the column direction, and the pixels present on the same line share one readout line.
The vertical scanning circuit 340 turns on and off the switches between the pixels 40 to 47 and the data readout lines 346 to 349 through the vertical control lines 341 to 344 wired in the row direction. Specifically, among the pixels present on the same line in the row direction, every other pixel is turned on and off in common. Image data of the pixels 40 to 47 are output to the data readout lines 346 to 349 through the switches between the pixels and the data readout lines corresponding to the pixels.
The horizontal scanning circuit 345 turns on and off the switches 362 to 365 between the column latches 358 to 361 and an output data line 390. It is possible to read out signals of all the pixels in a time division manner while sequentially selecting the pixels according to ON and OFF of the switches by the vertical scanning circuit 340 and ON and OFF of the switches 362 to 365 by the horizontal scanning circuit 345. The output data line 369 is an output data line for outputting the output results of the columns from the imaging device 134.
In the imaging device 134, the pixels 40 to 47 are arranged in a two-dimensional square lattice shape. The configurations of the pixels 40 to 47 are the same, and hence the pixel 40 is described as an example. The pixel 40 includes a photodiode 51 as a light receiving unit, an amplifier 52, and a switch 53. The photodiode 51 converts light irradiated on the pixel into charges corresponding to an amount of the light. The amplifier 52 is an amplifier that amplifies a signal of the charges converted by the photodiode 51. The switch 53 is a switch that controls charge transfer of the pixel 40 according to ON and OFF of a vertical control line 342.
The columns include the ADCs 350 to 353, the adders 354 to 357, and the column latches 358 to 361. The ADC 350, the adder 354, and the column latch 358 connected to the data readout line 346 are described below as examples.
The ADC 350 is an AD converter that converts image data from the pixels as analog values into digital data (digital values).
The adder 354 adds, every time image data is converted into digital data by the ADC 350, the new digital data after the conversion to the digital data stored in the column latch 358.
The column latch 358 is a column latch that sequentially stores the digital data converted by the ADC 350. The column latch is a name indicating a data storing circuit that stores digital data after AD conversion. As the data storing circuit, besides a latch composed of a linear circuit, any circuit that can store digital data, such as a flip-flop composed of a synchronization circuit, can be used.
For example, image data output from the pixel 40 is output to an output data line 390 through the switch 362 connected to the data readout line 346 after passing through the ADC 350, the adder 354, and the column latch 358. In the first embodiment of the present disclosure, like the data readout lines of the columns, the output data line 390 includes the adder 366 and the output latch 367 and performs addition and storage of image data. Image data stored in the output latch 367 is output to the output data line 369 through the output circuit 368. Note that image data from the output data line 369 is output to the above-mentioned data line L1.
The multipliers/dividers 391 and 392 perform, on the basis of control from the DSP 200, multiplication of a frequency of an input clock and division of the frequency of the input clock. The multipliers/dividers 391 and 392 supply a generated clock to the vertical scanning circuit 340, the horizontal scanning circuit 345, and the output circuit 368.
The signal line 393 is a vertical synchronization signal line for supplying a vertical synchronization signal from the DSP 200. The signal line 394 is a horizontal synchronization signal line for supplying a horizontal synchronization signal from the DSP 200.
The signal line 395 is a clock signal line for supplying a clock signal from the DSP 200. The signal line 396 is a signal line for controlling ON and OFF of an imaging operation from the DSP 200 and a signal line for controlling pixel thinning-out. The signal line 397 is a bi-directional communication line between the imaging device 134 and the DSP 200. The signal line 398 is a power supply line.
Note that the registers 370 and 380 are registers in which setting values concerning an imaging operation are stored. An example of stored contents is shown in
In the registers 370 and 380, setting values concerning an imaging operation are stored. The setting values are supplied to the vertical scanning circuit 340 and the horizontal scanning circuit 345. Note that the setting values may be changeable by user operation.
The abscissa shown in
In the example shown in
Image data of all the pixels of the imaging device 134 connected to a certain row (e.g., the line of the pixels 40 to 43) are output to the data readout lines 346 to 349 of the columns by using the vertical control lines 341 to 344. Subsequently, the pixel data output to the data readout lines 346 to 349 are AD-converted by the ADCs 350 to 353 of the columns. Subsequently, outputs of the ADCs 350 to 353 are stored in the column latches 358 to 361 of the columns. For example, pixel data d1 to d4 are stored in the column latches 358 to 361 shown in
Thereafter, similarly, every time readout of one line in the horizontal direction is completed, the vertical scanning circuit 340 turns on the readout switches from the pixels to the vertical signal lines in order row by row. Consequently, pixel data in the rows are input to the ADCs 350 to 353. After the input pixel data are AD-converted by the ADCs 350 to 353, the pixel data are stored in the column latches 358 to 361 of the columns. For example, pixel data d5 to d8 are stored in the column latches 358 to 361 shown in
In the example shown in
The vertical scanning circuit 340 turns on the readout switches from the pixels to the vertical signal lines 346 to 349 only in a desired column. Consequently, only pixel data in a specific row is input to the ADCs 350 to 353 and AD-converted by the ADCs 350 to 353. Outputs of the ADCs 350 to 353 are stored in the column latches 358 to 361 in the columns. For example, the readout switches connected to the vertical control lines 342 and 344 are turned on. In this way, the pixel data d1 and d3 are stored in the column latches 358 and 360 shown in
The horizontal scanning circuit 345 turns on the readout switches from the column latches 358 to 361 to the output data line 390 only in a desired column. This makes it possible to read out only specific pixel data in one line in order.
For example, in the horizontal direction, when one piece of pixel data is read out from N pieces of pixel data, 1/N thinning-out readout is performed in the horizontal direction. For example, when one piece of pixel data of two pieces of pixel data is read out, ½ thinning-out readout is performed in the horizontal direction. When one piece of pixel data among four pieces of pixel data is read out, ¼ thinning-out readout is performed in the horizontal direction.
Simultaneously with a thinning-out operation in the horizontal direction (i.e., row direction), a thinning-out operation in the vertical direction (i.e., column direction) can also be performed. For example, when pixel data in one line among M lines is read out in the vertical direction, 1/M thinning-out readout is performed in the vertical direction. For example, when pixel data in one row of two rows is read out, ½ thinning-out readout is performed in the vertical direction. When pixel data in one row among four rows is read out, ¼ thinning-out readout is performed in the vertical direction.
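In array terms, the thinning-out readout simply keeps one sample out of every N horizontally and one line out of every M vertically; the NumPy slicing below is an illustrative model of the readout result, not of the switch-level circuit.

    import numpy as np

    def thin_readout(pixels, n=2, m=2):
        # 1/n thinning-out in the horizontal direction and 1/m in the vertical.
        return pixels[::m, ::n]  # keep one row of every m, one column of every n

    frame = np.arange(16).reshape(4, 4)
    half = thin_readout(frame, n=2, m=2)  # 1/2 x 1/2 thinning-out readout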
In the example shown in
As in the case of the all pixel readout, data of all the pixels connected to a certain row are output to the data readout lines included in the columns by using a vertical control line, AD-converted, and stored in the column latches. Unlike the case of the all pixel readout, data of all the pixels connected to another row are output to the data readout lines included in the columns by using another vertical control line and AD-converted. The data are added to the data stored in the column latches by using the adders. Values of pixel data are added up by this method for a desired number of lines in the vertical direction. The data after the addition are stored in the column latches. For example, pixel data d1+d5, d2+d6, d3+d7, and d4+d8 are stored in the column latches 358 to 361. After adding up N pieces of pixel data in the vertical direction, an addition result is output as one piece of pixel data. In this way, 1/N pixel addition readout in the vertical direction is performed.
Subsequently, the horizontal scanning circuit 345 sequentially turns on, in order column by column, the readout switches from the column latches to the output data line 390. In this case, data read out from the column latches to the output data line 390 are added up by the adder 366 of the output data line 390 and stored in the output latch 367. The addition processing is repeated for a desired number of columns in the horizontal direction, and the data after the addition is output from the imaging device 134. For example, data d1+d5+d2+d6 obtained by adding up the pixel data d1+d5 and the pixel data d2+d6 is output from the imaging device 134 via the output data line 369. The pixel data in M columns are added up in the horizontal direction. In this way, 1/M pixel addition readout is performed in the horizontal direction. The processing described above enables addition processing in the horizontal direction (row direction) and addition processing in the vertical direction (column direction).
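The addition readout can likewise be modeled as N×M binning over the pixel array; the reshape-and-sum sketch below is an assumption-level model of the result produced by the column adders and the output-line adder.

    import numpy as np

    def addition_readout(pixels, n=2, m=2):
        # Add up n pixels vertically and m pixels horizontally (binning).
        h, w = pixels.shape
        h, w = h - h % n, w - w % m       # crop so the shape divides evenly
        p = pixels[:h, :w].astype(np.int64)
        return p.reshape(h // n, n, w // m, m).sum(axis=(1, 3))

    # e.g., d1+d5+d2+d6 style 2x2 addition as described above
    frame = np.arange(16).reshape(4, 4)
    binned = addition_readout(frame, 2, 2)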
When the thinning-out readout shown in
As shown in Part “a” of
As shown in Part “b” of
The output signal lines of the scanning circuit 105 and control lines indicating which of the two kinds of pixel thinning-out ratios is selected are connected to the signal lines whose numbers are multiples of 2 but not multiples of 4 among the 1024 output signal lines of the scanning circuit 104. The two kinds of control lines are control lines corresponding to the all pixel readout and the ½ thinning-out readout.
A control line indicating whether the all pixel readout is selected is connected to the signal lines other than the signal lines described above among the 1024 output signal lines of the scanning circuit 104.
According to outputs (scan_out[n], 0≦n≦1023; n is an integer) from the scanning circuits shown in Part “b” of
[Arrangement Configuration Example of Imaging Systems]
In the imaging unit 130, an angle formed by the optical axis 194 of the first imaging system 191 and the optical axis 195 of the second imaging system 192 is represented as θ0. Similarly, an angle formed by the optical axis 194 of the first imaging system 191 and the optical axis 196 of the third imaging system 193 is represented as θ0. Note that the third imaging system 193 is arranged in a position line-symmetrical to the second imaging system 192 across the optical axis 194 of the first imaging system 191.
In the first imaging system 191, an angle formed by a line on the outermost side of an incident light path of light made incident on the imaging device 134 and the optical axis 194 is represented as θ1. In the second imaging system 192, an angle formed by a line on the outermost side of an incident light path of light made incident on the imaging device 135 and the optical axis 195 is represented as θ2. Similarly, in the third imaging system 193, an angle formed by a line on the outermost side of an incident light path of light made incident on the imaging device 136 and the optical axis 196 is represented as θ2. For example, as shown in
An imaging area 301 on the subject surface 300 is specified by the angle of view 2×θ1 of the light made incident on the imaging device 134. Similarly, an imaging area 302 on the subject surface 300 is specified by the angle of view 2×θ2 of the light made incident on the imaging device 135 and an imaging area 303 on the subject surface 300 is specified by the angle of view 2×θ2 of the light made incident on the imaging device 136. In the first embodiment of the present disclosure, images generated by the imaging devices 134 to 136 are combined to generate a panorama image. Therefore, the angles formed by the optical axes are set such that the imaging area 301 on the subject surface 300 and the imaging area 302 on the subject surface 300 partially overlap. Specifically, the angle θ0 formed by the optical axis 194 of the first imaging system 191 and the optical axis 195 of the second imaging system 192 and the angle θ0 formed by the optical axis 194 of the first imaging system 191 and the optical axis 196 of the third imaging system 193 are set such that the imaging areas 301 and 302 partially overlap. Note that the optical axes 194 to 196 are included in the same plane. The first to third imaging systems 191 to 193 are arranged such that the optical axes 194 to 196 cross at one point (intersection P0).
A lens center of the first imaging system 191 is represented as R1, a lens center of the second imaging system 192 is represented as R2, and a lens center of the third imaging system 193 is represented as R3. A distance between the lens center R1 and the intersection P0 is represented as L11, a distance between the lens center R2 and the intersection P0 is represented as L21, and a distance between the lens center R3 and the intersection P0 is represented as L31. In this case, it is favorable that the first to third imaging systems 191 to 193 are arranged such that the distances L11, L21, and L31 are equal.
[Correction Example of Trapezoidal Distortion]
In Part “a” of
In Part “a” of
In Parts “b” of
As shown in Part “a” of
In
A planar subject area that is a plane including the intersection S11 and orthogonal to the optical axis 194 and is made incident on the first imaging system 191 is represented as a subject surface S10.
An intersection of the optical axis 195 of the second imaging system 192 and the subject surface 300 is represented as S21. An intersection of a right visible outline of an angle of view of the second imaging system 192 and the subject surface 300 is represented as S32. An intersection of a left visible outline of the angle of view of the second imaging system 192 and the subject surface 300 is represented as S43.
A planar subject area that is a plane including the intersection S21 and orthogonal to the optical axis 195 and is made incident on the second imaging system 192 is represented as subject surface S20.
A planar subject area that is a plane including the intersection S32 and orthogonal to the optical axis 195 and is made incident on the second imaging system 192 is represented as a subject surface S30.
A planar subject area that is a plane including the intersection S43 and orthogonal to the optical axis 195 and is made incident on the second imaging system 192 is represented as a subject surface S40.
An intersection of the subject surface S30 and the optical axis 195 is represented as S31. An intersection of the subject surface S40 and the optical axis 195 is represented as S41.
An intersection of the right visible outline of the angle of view of the second imaging system 192 and the subject surface S20 is represented as S22. An intersection of the right visible outline of the angle of view of the second imaging system 192 and the subject surface S40 is represented as S42.
An intersection of the left visible outline of the angle of view of the second imaging system 192 and the subject surface S20 is represented as S23. An intersection of the right visible outline of the angle of view of the second imaging system 192 and the subject surface S30 is represented as S33.
An intersection of a segment 197 passing through the lens center R2 of the second imaging system 192 and perpendicular to the subject surface 300 and the subject surface 300 is represented as S51.
For example, when the subject surface S40 including the point S43 at the left end of the angle of view and the subject surface S30 including the point S32 at the right end of the angle of view are compared, the subject surface S40 is present in a position farther from the lens center R2 than the subject surface S30. Therefore, when the subject surface S40 is imaged, an imaged area is wider than an area imaged when the subject surface S30 is imaged. For example, it is assumed that segments having the same length are arranged as subjects on the subject surface S40 and the subject surface S30. In this case, when a captured image generated for the subject surface S30 and a captured image generated for the subject surface S40 are compared, the segment included in the captured image generated for the subject surface S40 is shorter.
Therefore, for example, when the subject 312 shown in Part “a” in
Similarly, when the subject 313 shown in Part “a” in
A distance between the intersection S11 and the lens center R1 is represented as L12. A distance between the intersection S13 and the lens center R1 and a distance between the intersection S12 and the lens center R1 are each represented as L13.
A distance between the intersection S21 and the lens center R2 is represented as L22. A distance between the intersection S31 and the lens center R2 is represented as L30. A distance between the intersection S41 and the lens center R2 is represented as L40.
A distance between the intersection S32 and the lens center R2 is represented as L23. A distance between the intersection S43 and the lens center R2 is represented as L24. A distance between the intersection S51 and the lens center R2 is represented as L51. Note that distances L61 to L66 will be described with reference to
The following formula holds according to the formula of the trigonometric function.
L21+L22=L11+L22=(L11+L12)/cos θ0
The following Formula 1 is obtained according to this formula.
L22={(L11+L12)/cos θ0}−L11 Formula 1
where L11=L21
Concerning the distance L51, the following Formula 2 is obtained according to the formula of the trigonometric function and Formula 1.
L51=L22×cos θ0=(L11+L12)−L11×cos θ0 Formula 2
Concerning the distance L23, the following Formula 3 is obtained according to the formula of the trigonometric function and Formula 2.
L23=L51/cos(θ0−θ2) Formula 3
Concerning the distance L30, the following Formula 4 is obtained according to the formula of the trigonometric function and Formula 3.
L30=L23×cos θ2 Formula 4
When the distances L11 and L12 in the optical axis 194 of the first imaging system 191 are determined, the distance L30 can be calculated by using Formula 4. By calculating the distance L30 in this way, it is possible to calculate a value XR (=L12/L30) of a ratio of the distance L12 to the distance L30. Note that XR is smaller than 1.
Concerning the distance L24, the following Formula 5 is obtained according to the formula of the trigonometric function and Formula 2.
L24=L51/cos(θ0+θ2) Formula 5
Concerning the distance L40, the following Formula 6 is obtained according to the formula of the trigonometric function, Formula 1, and Formula 5.
L40=L24×cos θ2 Formula 6
When the distances L11 and L12 in the optical axis 194 of the first imaging system 191 are determined, the distance L40 can be calculated by using Formula 6. By calculating the distance L40 in this way, a value XL (=L40/L12) of a ratio of the distance L40 to the distance L12 can be calculated. Note that XL is larger than 1.
Next, a correcting method for correcting a trapezoidal distortion will be described by using the values XR and XL of the ratios.
Concerning the captured image 315 shown in Part “a” of
As described above, concerning the captured image 315 shown in Part “a” of
Similarly, concerning the captured image 316 shown in Part “a” of
When such trapezoidal distortion correction is performed, for example, coordinates of pixels in a captured image distorted in a trapezoidal shape are measured and the values XR and XL of the ratios are calculated in advance. It is possible to perform, using the values XR and XL of the ratios calculated in advance, the trapezoidal distortion correction processing in a software manner with an arithmetic apparatus such as a CPU incorporated in the information processing apparatus 100.
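Under the geometry above, the ratios follow directly from Formulas 1 to 6; the short computation below restates them (the formula bodies are the ones reconstructed above, and the numeric inputs are arbitrary example values, not values from the disclosure).

    import math

    def trapezoid_ratios(l11, l12, theta0, theta2):
        # Distances along the second imaging system per Formulas 1 to 6 (radians).
        l22 = (l11 + l12) / math.cos(theta0) - l11      # Formula 1
        l51 = l22 * math.cos(theta0)                    # Formula 2
        l23 = l51 / math.cos(theta0 - theta2)           # Formula 3
        l30 = l23 * math.cos(theta2)                    # Formula 4
        l24 = l51 / math.cos(theta0 + theta2)           # Formula 5
        l40 = l24 * math.cos(theta2)                    # Formula 6
        return l12 / l30, l40 / l12                     # XR (<1), XL (>1)

    xr, xl = trapezoid_ratios(l11=10.0, l12=1000.0,
                              theta0=math.radians(40), theta2=math.radians(15))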
Note that the example of the correcting method for correcting a trapezoidal distortion of a captured image caused by a three-eye imaging operation is described above. However, correction may be performed by other trapezoidal distortion correcting methods (see, for example, Japanese Patent Application Laid-open No. HEI 08-307770).
[Combination Example of Captured Images]
In Part “a” of
As described above, the same subject is included in the area at the right end of the corrected image 317 and the area at the left end of the captured image 314. The same subject is included in the area at the left end of the corrected image 318 and the area at the right end of the captured image 314. A method of calculating the areas including the same subjects is described below.
In
The following Formulas 7 and 8 hold according to the formula of the trigonometric function.
L61=(L11+L12)×tan θ0 Formula 7
L62=L12×tan θ1 Formula 8
The following Formula 9 is obtained according to the formula of the trigonometric function and Formula 2.
The following Formula 10 is obtained by using Formulas 7 and 8.
The following Formula 11 is obtained according to the formula of the trigonometric function and Formula 2.
The following Formula 12 is obtained by using Formulas 10 and 11 obtained above.
When the distances L11 and L12 in the optical axis 194 of the first imaging system 191 are determined, the distance L66 can be calculated by using Formula 12. Distances can be calculated in the same manner concerning common areas of an area at the right end of the captured image generated by the first imaging system 191 and an area at the left end of the captured image generated by the third imaging system 193.
In Part “b” of
In Part “c” of
In Part “c” of
Similarly, concerning the captured image 314, a half of the common area calculated concerning the right end of the captured image 314 is deleted. An area (area at left end) equivalent to the half of the common area is deleted concerning the corrected image 318 corresponding to the captured image generated by the third imaging system 193. In Part “b” of
In Part “c” of
Note that these kinds of image combination processing are performed by the image combination processing unit 224. In the example shown in
When such image combination processing is performed, for example, concerning images after trapezoidal distortion correction, overlapping areas of the images are measured in advance. It is possible to perform, using measured values, deletion processing for the overlapping areas of the images in a software manner with an arithmetic apparatus such as a CPU incorporated in the information processing apparatus 100.
In this example, the three captured images are combined on the basis of a convergence angle. However, for example, the image combination processing may be performed by using other image combining methods. For example, it is possible to use an image combining method for, concerning an overlapping portion of two images generated by two imaging systems, matching patterns of the two images and combining the two images by the pattern matching. It is also possible to use an image combining method for calculating a change in a density level in two images generated by two imaging systems, calculating an overlapping portion on the basis of the change in the density level, and combining the two images.
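As an assumption-level sketch of the pattern-matching variant (an exhaustive search over candidate overlap widths with a mean-absolute-difference score; the function names are made up for illustration), two equally tall images could be combined as follows.

    import numpy as np

    def find_overlap(left_img, right_img, max_overlap):
        # Estimate the overlap width of two images of equal height by comparing
        # the right edge of left_img with the left edge of right_img and taking
        # the width with the smallest mean absolute difference.
        best_w, best_err = 1, np.inf
        for w in range(1, max_overlap + 1):
            a = left_img[:, -w:].astype(np.float64)
            b = right_img[:, :w].astype(np.float64)
            err = np.abs(a - b).mean()
            if err < best_err:
                best_w, best_err = w, err
        return best_w

    def combine(left_img, right_img, overlap):
        # Delete half of the common area from each image and join them.
        half = overlap // 2
        return np.hstack([left_img[:, :-half or None], right_img[:, overlap - half:]])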
[Configuration Example of Image File]
Next, image data (image file) recorded on the recording medium 180 by still image recording processing in the information processing apparatus 100 will be described in detail with reference to the drawings.
The DCF is a file system standard for realizing mutual use of images between apparatuses such as a digital still camera and a printer via the recording medium. The DCF defines file naming rules and a folder configuration for recording on the recording medium, with the exchangeable image file format (Exif) as a base. The Exif is a standard for adding image data and camera information in an image file and defines a format (file format) for recording an image file. In Part “a” of
The image file 181 is a still image file to be recorded by a DCF standard. As shown in Part “a” of
As shown in Part “b” of
The maker note 185 is generally an area in which maker-specific data is recorded, that is, an extended area (TAGID=37500, MakerNote) in which each maker can freely record information. As shown in Part “c” of
The single-eye imaging/multi-eye imaging 186 is information indicating whether the image data is image data generated using only the imaging device 134 (single-eye imaging) or image data generated using the imaging devices 134 to 136 (multi-eye imaging). For example, “0” is stored in the case of the image data generated by single-eye imaging and “1” is stored in the case of the image data generated by multi-eye imaging.
The presence and absence of a panoramic image 187 is information indicating whether the image is an image having an aspect ratio beyond a certain value (panoramic image) or an image other than such an image (normal image). For example, “0” is stored in the case of the normal image and “1” is stored in the case of the panoramic image.
The focus position information 188 is information on a focus position in a captured image. For example, information on a focus position detected by the AF control unit 281 is stored.
The face information 189 is information including a position and a size of a face included in an image generated by the imaging unit 130. For example, a position (coordinates) at an upper left corner in a rectangular area including a face in a captured image is stored as the position of a face, and a length (longitudinal width and lateral width) in the vertical direction and the horizontal direction of the rectangular area of the captured image is stored as the size of a face. The face information is detected by the face detection unit 282.
The moving body information 190 is information including a position and a size of a moving body (moving object (e.g., running car)) included in an image generated by the imaging unit 130. For example, a position at an upper left corner of the rectangular area including the moving body in the captured image is stored as the position of the moving body, and length (longitudinal width and lateral width) in the vertical direction and the horizontal direction of the rectangular area of the captured image is stored as the size of the moving body. The moving body information is detected by the CPU 202.
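For illustration only, the maker-note items above could be modeled as the following container; the field names and Python types are assumptions and do not reflect the actual recorded binary layout.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class MakerNote:
        multi_eye: bool                       # single-eye (False) / multi-eye (True)
        panoramic: bool                       # normal image (False) / panorama (True)
        focus_positions: List[Tuple[int, int]]          # focus position information
        faces: List[Tuple[int, int, int, int]]          # (x, y, width, height)
        moving_bodies: List[Tuple[int, int, int, int]]  # (x, y, width, height)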
Further, using the accompanying information 182 (attribute information 184 and maker note 185) recorded in the image file 181, it is possible to display various images during reproduction.
[Display Timing Example of Still Image]
Now, a timing when an image display is necessary in the information processing apparatus that handles a still image is described. The timing when an image display is necessary in the information processing apparatus that handles a still image mainly includes four timings of a live view display, a postview display, a reproduction display, and a list display (so-called thumbnail display) of recorded images. Those image displays are described.
Note that, as mentioned above, a single-eye imaging apparatus capable of generating a panoramic image by an operation of rotating an imaging apparatus in the horizontal direction with an imaging position (position of photographer) being a center of rotation (so-called panning operation) exists. When the panoramic image is generated by this single-eye imaging apparatus, an image displayed on the display unit at an imaging start of a panoramic image forms one end of the panoramic image and an image displayed on the display unit at an imaging end of the panoramic image forms the other end of the panoramic image. Therefore, the user can relatively easily know where imaging of the panoramic image starts and where the imaging of the panoramic image ends.
Further, when the panoramic image is generated by the single-eye imaging apparatus, a range of the panoramic image as an imaging target can be changed during imaging. For example, an image corresponding to a position at which imaging of the panoramic image is started forms one end of the panoramic image and an image corresponding to a position at which the imaging of the panoramic image is stopped forms the other end of the panoramic image.
In contrast, the information processing apparatus 100 is capable of capturing a panoramic image instantly when a shutter button (e.g., determination key 112) is depressed. That is, a panoramic image can be captured instantly at a timing desired by the user. Since capturing of the panoramic image is instantly completed in the information processing apparatus 100 in this manner, it is necessary to determine an angle of view (e.g., range in horizontal direction desired by user) before capturing. For example, during a live view display before capturing the panoramic image, a check on the display unit 140 is necessary. For example, it is necessary to check whether a main subject is included in the angle of view and whether the subject is in a good state (whether person doesn't close eyes or whether it is moment when butterfly sits on flower). Further, it is necessary to check an imaging area from a right end to a left end of an image (whether desired subject is included).
(1) Live View Display
The live view display is an operation of displaying a captured image generated by the imaging unit 130 on the display unit 140 before a still-image recording instruction operation is performed in a state in which a still image recording mode is set. That is, the live view display is a moving-image display in which an image currently input into the imaging unit is continuously displayed in order for the user to check the subject before the imaging operation (i.e., depression of shutter button). Further, an image (captured image) to be displayed in the live view display will be referred to as a live view image.
Aims of the live view display in the information processing apparatus are mainly the following (a) to (c).
(a) To check, by the user of the information processing apparatus 100, whether a main subject is included in an angle of view (in other words, area recorded as image) while viewing the live view image.
(b) To adjust and determine, while checking an area to be recorded from the left to the right as an angle of view (in other words, area to be recorded as image, particularly, area to be recorded from left to right in case of panoramic image), that area.
(c) To determine an imaging timing by checking the state of a main subject (e.g., whether person doesn't close eyes or whether it is moment when butterfly sits on flower).
Further, each area can be adjusted by sequentially feeding back and displaying adjustment results in the live view display.
(2) Postview Display
The postview display is an operation of automatically displaying, when a still-image recording instruction operation is performed in a state in which a still image recording mode is set and then recording processing based on the recording instruction operation is completed, the recorded captured image for a predetermined period. That is, the postview display is a display for the user to check a recorded image after an imaging operation (i.e., after recording processing of panoramic image by depression of shutter button is completed). An image (captured image) displayed in the postview display will be referred to as a postview image.
It is also assumed that, for example, in order to reduce power consumption of the information processing apparatus 100, a clock frequency is intentionally decreased based on an instruction by the user or information on a remaining battery amount. In this case, there is a possibility that the image signal processing and the image recording processing of a still image are not completed within a predetermined period of time. In such a case, after the user performs a still-image recording instruction operation, a monitoring image is not displayed on the display unit 140 for a predetermined period. Specifically, from a point of time when the recording instruction operation is performed to a point of time when recording processing of the captured image is completed in the information processing apparatus 100, a message indicating that a new captured image cannot be captured because the image signal processing and the image recording processing are being performed is displayed on the display unit 140. For example, a solid color image (e.g., black or dark blue) is displayed on the display unit 140 and, on this solid color image, characters of “Under processing” or “Wait for a while” or a mark indicating that processing is being executed within the apparatus (e.g., hourglass) is displayed. An image indicating in this manner that a new captured image cannot be captured will be referred to as a “black image,” and this display operation will be referred to as a “black image display operation.” That is, when the user performs a still-image recording instruction operation, a black image is displayed on the display unit 140 by the black image display operation, and, after a predetermined period, the postview image is displayed on the display unit 140 by a postview operation.
Aims of the postview display in the information processing apparatus are mainly the following (d) and (e).
(d) To check an image recorded by imaging and determine, based on this, whether or not re-capturing of the image is necessary.
For example, whether or not an angle of view (i.e., area of image recorded by imaging) is satisfactory or whether or not the state of the subject is satisfactory (e.g., whether it is moment when butterfly sits on flower) is checked. Whether or not re-capturing of the image is necessary is also determined with the check.
For checking the angle of view, it is important to check the right end and the left end of the image in order to know which area is imaged and recorded as the panoramic image. Further, for checking the state of the subject, it is important to check how a main subject is imaged and recorded in detail.
(e) To view image recorded by imaging.
Viewing an image has the same aim as that of the reproduction display of an image shown in the following. Further, the check of the captured image and the determination to re-capture the image are only for the postview display, and re-capturing can no longer be performed in many cases if an occasion is missed. Therefore, the main aim of the postview display is (d) to check the image and determine whether or not to re-capture the image rather than (e) to view the image. Note that the postview display is aimed to check an image recorded by still-image recording processing, and hence a real-time moving-image display as in the live view is not performed and a display of a recorded still image is performed.
(3) Reproduction Display
The reproduction display is an operation of reproducing content (e.g., panoramic image) stored in the recording medium 180 by the user operation at an arbitrary point of time.
Further, a main aim of the reproduction display of the image in the information processing apparatus is to view an image recorded by imaging. In viewing the panoramic image, it is favorable that both viewing the whole of the image and viewing the main subject in detail are possible.
Note that the reproduction display shown here is aimed to view an image recorded by the still-image recording processing, and hence the real-time moving-image display as in the live view is not performed and a display of a recorded still image is performed.
(4) List Display (So-Called Thumbnail Display) of Recorded Images
When a plurality of images are captured and recorded, it is convenient that a plurality of images can be viewed in one glance in order to know which images are recorded. It should be noted that a plurality of recorded images are displayed on the display unit 140, and hence the size of a single image inevitably becomes small.
It is favorable that, in this state, in each image (in particular, panoramic image), the user can view what is included in the image.
Note that the thumbnail display is aimed to display recorded images in a list, and hence only a still-image display is performed.
[Display Timing Example of Moving Image]
Next, a timing at which an image display is necessary in the information processing apparatus that handles a moving image will be described. The timing at which an image display is necessary in the information processing apparatus that handles a moving image mainly includes three timings of a real-time display, a reproduction display, and a list display (so-called thumbnail display) of recorded images. Those image displays are described.
(1) Real-Time Display
A main aim of the real-time display in the information processing apparatus is to check an angle of view of an image as a recording target and check a state of a subject before and during imaging in a state in which a moving-image recording mode is set.
Note that the real-time display is aimed to check a moving image in real-time before and during recording, and hence only the moving-image display is performed.
(2) Reproduction Display
A main aim of the reproduction display of an image in the information processing apparatus is to view an image recorded by imaging. In viewing the panoramic image, it is favorable that both viewing the whole of an image and viewing the main subject in detail are possible.
Note that the reproduction display shown here is aimed to view a moving image recorded, and hence only the moving-image display is performed.
(3) List Display (So-Called Thumbnail Display) of Recorded Image
When a plurality of moving images are captured and recorded, it is convenient that a representative image (e.g., image at start of capturing moving image) can be viewed in one glance as a still image in order to know which images are recorded. It should be noted that a plurality of recorded images are displayed on the display unit 140, and hence the size of a single image inevitably becomes small. It is favorable that, in this state, in each image (in particular, panoramic image), the user can view what is included in the image.
Note that the thumbnail display is aimed to display a head image of the recorded moving image in a list, and hence only a still-image display is performed.
Note that, in the first embodiment of the present disclosure, the “image” described herein has the meaning of an image itself and also has the meaning of image data for displaying the image.
[Functional Configuration Example of Information Processing Apparatus]
The information processing apparatus 100 includes the display unit 140, the attitude detection unit 150, an image generation unit 510, an image memory 520, a recording control unit 530, a storage unit 540, a display control unit 550, and an operation receiving unit 560.
The image generation unit 510 generates a captured image by imaging a subject, and stores the generated captured image in the image memory 520. This captured image is, for example, a captured image generated using one imaging device 134 or a captured image (e.g., panoramic image) by combining three captured images generated using three imaging devices 134 to 136. Those captured images are generated depending on a user operation received by the operation receiving unit 560. Further, the image generation unit 510 outputs pieces of information during generation of the captured image to the display control unit 550 and the record control unit 530. The pieces of information during generation of the captured image are, for example, information (face information) relating to a face included in the captured image and information (focus position information) relating to a focus position (e.g., pieces of information of Part “c” of
The image memory 520 stores the captured images generated by the image generation unit 510 or captured images (image files) acquired from the storage unit 540 by the display control unit 550. The image memory 520 supplies the stored captured images to the recording control unit 530 or the display control unit 550. Note that, for example, the image memory 520 corresponds to the image memory 170 shown in
The recording control unit 530 records the captured image generated by the image generation unit 510 and stored in the image memory 520 in the storage unit 540 as an image file in accordance with the user operation received by the operation receiving unit 560. Moreover, the recording control unit 530 records information (respective kinds of information produced when captured image is generated), which is output from the image generation unit 510, in the image file when recording the captured image. For example, when the panoramic image (horizontally long image) is recorded, the recording control unit 530 records specific-target information (e.g., face information 189 shown in Part “c” of
The storage unit 540 stores the captured image generated by the image generation unit 510 as an image file, and provides a stored image file to the display control unit 550. Note that the storage unit 540 corresponds to, for example, the recording medium 180 shown in
According to the user operation received by the operation receiving unit 560, the display control unit 550 causes the display unit 140 to display the captured image generated by the image generation unit 510 and stored in the image memory 520. Further, according to the user operation received by the operation receiving unit 560, the display control unit 550 acquires an image file stored in the storage unit 540, stores it in the image memory 520, and displays a captured image of the image file on the display unit 140. For example, the display control unit 550 is capable of displaying, on the basis of information recorded in an acquired image file (pieces of information during generation of captured image), images of the plurality of areas in the captured image stored in the image memory 520. For example, when displaying the panoramic image stored in the storage unit 540, the display control unit 550 is capable of displaying an image relating to an area in which the specific target is present, using the specific-target information recorded in association with the panoramic image. For example, it is possible to display an image of an area in which a face of a person is present using face information (face information 189 shown in Part “c” of
The display control unit 550 displays an image on the display unit 140, for example, during the live view display, during the postview display, and during the reproduction display. In this case, for example, the display control unit 550 performs control to display the plurality of areas in the panoramic image generated by the image generation unit 510 in a plurality of rows in the longitudinal direction of the display unit 140.
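As a minimal sketch of how such a plurality of areas could be derived from a horizontally long image (the contiguous, non-overlapping split and the function name are assumptions for illustration, not the disclosed method), consider the following.

    def split_for_vertical_display(panorama_width, rows=3):
        # Split a horizontally long image into `rows` areas displayed in rows on
        # a vertically held display; the first and last areas include the two end
        # portions of the image and the remaining area lies between them (rows >= 2).
        area = panorama_width // rows
        step = (panorama_width - area) // (rows - 1)
        return [(i * step, i * step + area) for i in range(rows)]

    # e.g., a 3000-pixel-wide panorama displayed in three rows
    ranges = split_for_vertical_display(3000, rows=3)  # [(0,1000), (1000,2000), (2000,3000)]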
For example, as shown in
Alternatively, for example, as shown in
Alternatively, for example, as shown in
Alternatively, for example, as shown in
Alternatively, for example, as shown in
Note that, for example, as shown in
Furthermore, the display control unit 550 displays various setting screens on the display unit 140 in accordance with the user operation received by the operation receiving unit 560. Note that, for example, the display control unit 550 corresponds to the DSP 200 (imaging control unit 201, CPU 202, encoding/decoding unit 252, resolution conversion unit 231, image rotation processing unit 232, and the like shown in
The display unit 140 displays various images based on the control of the display control unit 550. Note that, for example, the display unit 140 corresponds to the display unit 140 shown in
The operation receiving unit 560 is an operation receiving unit that receives operation inputs from the user, and outputs the content of the received operation inputs to the respective units. For example, when a still-image recording instruction operation is performed by the user, the operation receiving unit 560 outputs the content of the instruction to the image generation unit 510, the recording control unit 530, and the display control unit 550. Moreover, when a display instruction operation to display the image files stored in the storage unit 540 is performed by the user, the operation receiving unit 560 outputs the content of the instruction to the display control unit 550. Moreover, when the image files stored in the storage unit 540 are displayed, and an instruction operation to change the displayed state is performed, the operation receiving unit 560 outputs the content of the instruction to the display control unit 550. Note that, for example, the operation receiving unit 560 corresponds to the respective operation members (changeover switch 111, determination key 112, and the like) shown in
The attitude detection unit 150 detects the attitude of the information processing apparatus 100 (display unit 140), and outputs a detection result thereof to the display control unit 550. For example, the attitude detection unit 150 detects whether the attitude of the information processing apparatus 100 is in the horizontal state or in the vertical state. Note that the attitude detection unit 150 corresponds to the attitude detection unit 150 shown in
[Display Example of Panoramic Image]
Next, an example in which image data is read out from the imaging devices 134 to 136 and an image thereof is displayed on the display unit 140 will be described.
In Part “a” of
Now, a traditional information processing apparatus is described. For example, in a multi-eye imaging apparatus (e.g., multi-eye digital still camera) constituted of a single casing, the display apparatus is generally arranged in the casing such that the longitudinal direction of the rectangular casing coincides with the longitudinal direction of the rectangular display apparatus. A single image as a recording target or a display target is a rectangle having an aspect ratio of 4:3 or 16:9, and hence it is important to allow the user to view such images as easily as possible. In view of this, in order to display the whole of an image as large as possible in the rectangular display apparatus, the plurality of imaging systems each including an optical system (lens and the like) and an imaging device are arranged within the casing such that an optical center arrangement line (line linking optical centers of plurality of imaging systems) and the longitudinal direction of the casing coincide with each other.
Note that, also in a single-eye imaging apparatus including a single imaging system constituted of a single optical system and a single imaging device that share the same optical center, the display apparatus is arranged in the casing such that the longitudinal direction of the rectangular casing and the longitudinal direction of the rectangular display apparatus coincide with each other. Such an arrangement is employed also in an apparatus that handles a panoramic image having a large aspect ratio.
In this manner, when imaging of a panoramic image (panoramic image extending in horizontal direction) is performed using the information processing apparatus in which the plurality of imaging systems are arranged in a direction parallel to the longitudinal direction of the information processing apparatus, it is assumed that the information processing apparatus is often used in the horizontal state. However, in the case of an information processing apparatus such as a smart phone and a cell phone, a rectangular casing and a rectangular display apparatus are often used in the vertical state. Concerning those information processing apparatuses, when the imaging systems are arranged such that the long side of the display apparatus and the optical center arrangement line are parallel to each other, it is necessary to put the information processing apparatus into the horizontal state every time imaging of the panoramic image is performed, which may make the apparatus difficult to use for a user unfamiliar with capturing panoramic images.
In this manner, for example, when functions of the information processing apparatus other than an image-capturing function are used, the user often uses the information processing apparatus in the vertical state. On the other hand, concerning imaging the panoramic image and displaying the panoramic image, it is assumed that the information processing apparatus is used in the horizontal state. In this manner, it is cumbersome for the user to change the orientation of the casing depending on a function to be used.
In such an information processing apparatus, it is assumed that the optical center arrangement line (line linking optical centers of plurality of imaging systems) is parallel to the longitudinal direction of the casing and is arranged near a center of a short side of the casing (end side of casing in longitudinal direction). However, when the imaging systems (optical systems) are arranged in this manner, there is a fear that design of the electronic substrate that serves functions other than the image-capturing function becomes difficult. That is, many imaging system components including a lens and an imaging device have a large height among components of the information processing apparatus. Therefore, when the imaging system components are arranged near the center of the short side of the casing (end side of casing in longitudinal direction) such that the optical center arrangement line is parallel to the longitudinal direction of the casing, there is a fear that arrangement of the electronic substrate is interfered with. For example, a concave portion on an upper side of the electronic substrate 101 shown in Part “e” of
In view of this, in the first embodiment of the present disclosure, an example in which the plurality of imaging units are arranged in the orthogonal direction orthogonal to the longitudinal direction of the display unit is shown. Further, an example in which the plurality of areas in the panoramic image including the plurality of images generated by the plurality of imaging units are displayed in a plurality of rows in the longitudinal direction of the display unit is shown.
Further, in the first embodiment of the present disclosure, an example in which the panoramic image is divided and displayed when the panoramic image generated by using the plurality of imaging systems is displayed in a state in which the longitudinal direction in the display unit 140 of the information processing apparatus 100 and the vertical direction almost coincide with each other is shown. Note that, in the example shown below, the longitudinal direction in the display unit 140 will sometimes be referred to as the Y-coordinate direction.
[Display Example of Panoramic Image]
In Part “a” of
In Part “b” of
In this manner, the image (middle image) 604 arranged in an upper row in the longitudinal direction of the display unit 140 is set to be an image near the center of the panoramic image. Further, the image (left image) 605 arranged on a lower left-hand side in the longitudinal direction of the display unit 140 is set to be an image at a left end in the panoramic image 600 and an image (right image) 606 arranged on a lower right-hand side is set to be an image at a right end in the panoramic image 600.
In this manner, at least three images are displayed at at least two positions in the longitudinal direction of the display unit 140. Note that, in the first embodiment of the present disclosure, as described above, an image arranged in an upper row in the longitudinal direction of the display unit 140 will be referred to as the middle image. Further, an image arranged on a lower left-hand side in the longitudinal direction of the display unit 140 will be referred to as the left image and an image arranged on a lower right-hand side will be referred to as the right image.
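For illustration only, the following Python sketch shows one way the three crop areas could be computed from a horizontally long panoramic image; the function name, the one-sixth share for each lower-row crop, and the coordinate convention are assumptions introduced here, not part of the embodiment.

```python
def split_panorama(pano_w, pano_h):
    """Crop rectangles (x, y, width, height) for the middle, left, and right images.

    The middle image occupies one full row of the display; the left and right
    images share the next row, so each is half as wide. Using the same pixel
    scale for every crop keeps the display magnification and the vertical size
    of the three images identical.
    """
    half = pano_w // 6                # assumed share for each lower-row crop
    mid_w = 2 * half                  # middle crop spans a full display row
    middle = ((pano_w - mid_w) // 2, 0, mid_w, pano_h)   # center of panorama
    left = (0, 0, half, pano_h)                          # left end portion
    right = (pano_w - half, 0, half, pano_h)             # right end portion
    return middle, left, right

# Example: a 3600x600 panorama.
middle, left, right = split_panorama(3600, 600)
```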
Note that, although a display example in which the middle image is separated from each of the left image and the right image in the horizontal direction is shown in
In Part “a” of
In Part “a” of
In this manner, in the first embodiment of the present disclosure, a left area with respect to the middle image, which is not displayed in the middle image, is displayed as a whole or part of the left image. Further, a right area with respect to the middle image, which is not displayed in the middle image, is displayed as a whole or part of the right image. Further, part of a right area with respect to the left image and part of a left area with respect to the right image are displayed as a whole or part of the middle image.
Note that, when the left image, the middle image, and the right image are displayed, it is favorable that those images (left image, middle image, and right image) are displayed at almost the same display magnification. Further, it is favorable that those images are displayed with almost the same vertical width (vertical size).
In this manner, by displaying three images in two rows in the longitudinal direction in the display unit 140, it is possible to display the images with a horizontal width (horizontal size) smaller than in the case of displaying the whole of the panoramic image on the display unit 140. That is, in the display unit 140 with a certain width, it is possible to display an enlarged image in comparison with the case of displaying the whole of the panoramic image. With this, for example, a car (present at left end) and a dog (present at right end) included in the panoramic image 600 are displayed as images having a relatively large size, and hence the user can easily check them.
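As a back-of-the-envelope illustration of this enlargement effect, the following sketch compares the displayed height of the whole panorama with that of the middle crop under assumed values (a 6:1 panorama, a 1080-pixel-wide display, and a middle crop covering one third of the panorama width); the numbers are hypothetical.

```python
pano_aspect = 6.0          # panorama width / height (assumed)
display_w = 1080           # display width in pixels (assumed)

# Whole panorama in one row: the height is limited by the display width.
whole_h = display_w / pano_aspect                       # 180 px

# Two-row display: the middle crop covers only 1/3 of the panorama width,
# so the same display width shows it at 3x the scale.
middle_fraction = 1.0 / 3.0                             # assumed crop share
split_h = display_w / (pano_aspect * middle_fraction)   # 540 px

print(whole_h, split_h)    # the split display is three times as tall
```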
The main aims in the case of the live view display are the above-mentioned aims (a) to (c). That is, the aims are (a) to check whether or not a main subject is included within an angle of view, (b) to adjust and determine the area to be recorded while checking the angle of view from the left end to the right end, and (c) to determine an imaging timing by checking the state of the main subject.
In this manner, for achieving the aims (a) to (c), it is not always necessary to display the whole of the panoramic image on the display unit 140, and it is important to display the main subject and the left and right ends of the angle of view in an enlarged state. Therefore, as shown in
If the middle image showing the main subject and the left and right images respectively showing the left and right ends of the angle of view are at different display magnifications (magnification ratios) when the middle image, the left image, and the right image are displayed, there is a fear that the user is confused when the user checks those images.
Further, if the middle image showing the main subject and the left and right images respectively showing the left and right ends of the angle of view have different vertical widths (vertical sizes), there is a fear that the user momentarily perceives those images as having different magnification ratios. Therefore, it is favorable that the display magnification (magnification ratio) of the middle image and the left and right images is the same. Further, it is favorable that the vertical width (vertical size) of the middle image and the left and right images is the same.
For example, in the case of various information processing apparatuses (e.g., cell phone and smart phone), display content in the display unit is often displayed such that the user can read the display content from the top to the bottom. Therefore, the user is often used to view the display unit from the top to the bottom.
Further, for example, when the live view display is performed, in many cases, the user first checks whether the main subject is appropriately included within an angle of view. Therefore, it is assumed that the user first views the middle image rather than the left image and the right image. In view of this, when the middle image, the left image, and the right image are arranged at different Y-coordinate positions in the display unit 140, it is favorable to arrange the middle image above the left image and the right image. Further, it is favorable to arrange the left image and the right image at the same Y-coordinate position.
[Example in which Images are Displayed Such that Middle Image has Horizontal Width (Horizontal Size) Larger than that of Other Images]
In the examples shown in
In the example shown in
With this, for example, in the case of the live view display, the user can check not only the state of the main subject but also the state of surroundings thereof in the middle image, and hence the user can easily check the vicinity of the center of the panoramic image. Therefore, the user also can easily check the background and the like around the person. In this manner, convenience for the user is improved.
Alternatively, regarding a relation between the middle image and each of the left image and the right image, a size other than the size shown in
In Part “a” of
In Part “b” of
[Example in which Whole of Panoramic Image is Displayed]
In the above-mentioned example, the panoramic image is divided and displayed. Although the details of the panoramic image can be checked using the images (middle image, left image, and right image), it is assumed that some users desire to check also the whole of the panoramic image. In view of this, an example in which the whole of the panoramic image or an image similar to this image (e.g., partial panoramic image) is also displayed together with the images (middle image, left image, and right image) is shown below.
In Part “a” of
In particular, when an image is displayed in a state in which the longitudinal direction in the display unit 140 of the information processing apparatus 100 and the vertical direction almost coincide with each other, it is favorable to display the image as shown in Part “b” of
In this manner, in the example shown in
The entire image of the panoramic image (or image similar to this image) is displayed together with the middle image, the left image, and the right image in this manner, and hence the entire content of the panoramic image can be easily checked. That is, the display of the middle image, the left image, and the right image enables, for example, a check of the state of the main subject necessary during the live view display and a check of an angle of view from the left to the right to be performed. In addition, the display of the entire image of the panoramic image (or image similar to this image) enables the entire content of the panoramic image to be easily checked.
In the example shown in Part “a” of
In the example shown in Part “b” of
In this manner, the display position of the entire image (or similar image) 638 of the panoramic image may be appropriately changed. Further, the display position may be changed depending on user preferences. For example, the display position may be changed by the user operation (e.g., movement by touch operation).
[Example in which Middle Image is Determined Based on Subject]
In the above-mentioned example, the image of the middle part (middle part in horizontal direction) of the panoramic image is set as the middle image. As described above, for example, the main aims in the case of the live view display are the above-mentioned three points (aims (a) to (c)).
For example, a case where a panoramic image in which the main subject is a person and the background is a distant landscape is captured is assumed. In this case, not only an image having a composition in which the person as the main subject is located at the center of the image but also an image having a composition in which the person is located at a position deviated to either the left or the right of the center of the image may be captured. In such a case, if the area of the image displayed as the middle image is fixed at the center of the panoramic image (i.e., center of angle of view), it is assumed that an inconvenience may occur. For example, it is also assumed that the main subject (e.g., person) is not included in the middle image displayed as the live view display before imaging. In this case, there is a fear that the above-mentioned aims (a) and (c) cannot be achieved.
Further, for example, a case where, when a butterfly that is about to sit on a flower is imaged as the main subject, an image having a composition in which the butterfly is present on a side of either the left or right end rather than in the vicinity of the center of the panoramic image is captured is assumed. In this case, there is a fear that the user cannot check whether it is the moment when the butterfly sits on the flower and thus cannot determine the timing for capturing an image. In view of this, an example in which the middle image is determined based on the subject is shown below.
[Example in which Middle Image is Determined Based on Position of Face Included in Panoramic Image]
In the example of
For example, as shown in Part “a” of
In the example shown in
Alternatively, a feature may be detected based on a result of an image analysis other than the above-mentioned processing (face detection, moving-body detection, and focusing) during imaging and an area of the detected feature and an area including surroundings thereof may be set as the middle image.
In the example of
In this manner, in order to appropriately check the main subject, an image including a feature detected based on image analysis and surroundings thereof can be displayed as the middle image without fixing the middle image at a center portion of the panoramic image (center portion of angle of view).
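The following minimal sketch, with assumed names, shows how such a feature-centered middle image could be positioned while keeping the crop inside the panorama.

```python
def middle_crop_for_feature(pano_w, crop_w, feature_x, feature_w):
    """Return the x-offset of a crop_w-wide middle image centered on a
    detected feature (e.g., a face rectangle), clamped to the panorama."""
    center = feature_x + feature_w // 2
    x = center - crop_w // 2
    return max(0, min(x, pano_w - crop_w))   # keep the crop in bounds

# Example: a face at x=2800 (width 200) in a 3600-pixel-wide panorama.
offset = middle_crop_for_feature(3600, 1200, 2800, 200)
```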
[Example in which Coordinates of Feature are Recorded]
The main aims of the postview display are, as described above, for example, to check an image recorded by imaging and to determine whether or not it is necessary to re-capture an image. Note that to check an image recorded by imaging is, for example, to check whether or not the state in which the main subject is recorded is satisfactory (e.g., to check whether moment when butterfly sits on flower is captured when butterfly that is about to sit on flower is imaged). For example, whether or not an angle of view (i.e., area of image recorded by imaging) is satisfactory is checked.
For example, a case where an image in which the main subject is a person and the background is a distant landscape is captured is assumed. In this case, as described above, not an image having a composition in which the main subject (person) is located at the center of the image but an image having a composition in which the person is located at a position deviated to either the left or the right of the center of the image may be captured.
In such a case, if the area of the middle image displayed on the display unit 140 is fixed at the center of the angle of view, there is a fear that a check of the image cannot appropriately be performed, for example, during the postview display after imaging. For example, when checking whether or not the state in which the main subject is recorded is satisfactory and determining whether or not it is necessary to re-capture an image, it is assumed that an inconvenience that it is difficult to make a determination because the main subject is not included in the middle image occurs.
Therefore, in order to appropriately perform those checks, it is favorable that, as described above, an area including a feature detected based on image analysis and surroundings thereof is displayed without fixing the area displayed in the middle image at the center portion of the angle of view.
The detection of the position of the subject by image analysis is often performed on an image input into the imaging device during a live view display period before imaging and on an image input into the imaging device when imaging and recording are performed. Further, the detection result is often deleted after imaging. If the detection result of the position of the subject by image analysis is deleted in this manner, the detection result cannot be used during the postview display and during the reproduction display.
In the first embodiment of the present disclosure, a position of a feature detected by image analysis during imaging (coordinates of feature in panoramic image) is recorded in association with the panoramic image. For example, the position of the feature can be recorded in an area other than pixel data of a generated image (or area other than compressed data thereof). For example, the position of the feature can be recorded in a header of image data in the Exif format of a panoramic image to be recorded or in a header of an image file in the jpeg format in the image data. For example, the position of the feature can be recorded in the maker note 185 shown in Part “c” of
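As one possible illustration of the maker-note recording just described, the sketch below uses the third-party piexif library to embed feature rectangles in a jpeg header; the JSON payload layout and the function name are assumptions, and the embodiment is not limited to this implementation.

```python
import json
import piexif  # third-party Exif editing library, used here for illustration

def record_feature_coords(jpeg_path, features):
    """Write detected-feature rectangles into the Exif MakerNote of a
    recorded panorama so they remain available for the postview display
    and the reproduction display.

    `features` is an assumed format: a list of (x, y, w, h) rectangles
    in panorama pixel coordinates.
    """
    exif_dict = piexif.load(jpeg_path)
    payload = json.dumps({"features": features}).encode("ascii")
    exif_dict["Exif"][piexif.ExifIFD.MakerNote] = payload
    piexif.insert(piexif.dump(exif_dict), jpeg_path)
```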
Alternatively, for example, in addition to the panoramic image, an image (e.g., middle image) of an area including a feature detected during imaging and surroundings thereof may be recorded at a resolution for a display image. For example, file identification information (e.g., file name) of the display image can be recorded in a header or the like of an image file in a jpeg format of the panoramic image. Then, during the postview display and at an arbitrary point of time of the reproduction display, the display image recorded in association with the panoramic image as the display target can be read out and displayed as the middle image.
Note that, although, in the first embodiment of the present disclosure, the face of the person or the butterfly is detected as the specific target, an object other than the face of the person or the butterfly may be detected and used as the specific target. For example, specific targets such as animals including mammals, reptiles, and fishes (e.g., dog, cat, cow, and horse), an automobile, and a plane may be detected and used.
[Example in which Areas of Images in Panoramic Image are Displayed]
In the above-mentioned example, the entire image of the panoramic image (or image similar to this image) is displayed together with the middle image, the left image, and the right image. For example, it is easy for the user to perform a check and the like if the user can know positions (positions in longitudinal direction) of the displayed images (middle image, left image, and right image) in the panoramic image. In view of this, an example in which areas of displayed images (middle image, left image, and right image) in a panoramic image are displayed is shown below.
In the example shown in Part “a” of
Note that, although the areas of the images in the panoramic image 638 are indicated by the frames on the panoramic image 638 in the example shown in
Specifically, an area of a middle image 604 in the panoramic image 685 is indicated by a frame 686 on the panoramic image 685. Further, an area of a left image 605 in the panoramic image 685 is indicated by a frame 687 on the panoramic image 685 and an area of a right image 606 in the panoramic image 685 is indicated by a frame 688 on the panoramic image 685.
Note that, although the areas of the images in the panoramic image 685 are indicated by the frames on the panoramic image 685 in the example shown in
Note that it is favorable that the display indicating the area of the middle image and the display indicating the areas of the left image and the right image are in different display modes. For example, the area of the middle image may be covered with semi-transparent hatched lines and the areas of the left image and the right image may be surrounded by frames. In this case, it is also favorable that the displays indicating the areas of the left image and the right image are in display modes different from each other. For example, different display colors may be adopted for the frames indicating the left image and the right image.
[Example in which Area Display is Changed Depending on Mode]
As mentioned above, it is favorable that the display indicating the areas of the images is performed in order for the user to easily understand the composition and the like of the images during imaging. It should be noted that, during reproduction, it is assumed that it is important to view the panoramic image rather than to check the composition and the like of the images. In this manner, the content that the user desires to check on the display image differs depending on the mode. For example, during the live view display, during the postview display, and during the reproduction display, the content that the user desires to check on the display image differs. In view of this, an example in which the panoramic image is displayed depending on the desire of the user on each mode when the panoramic image is displayed is shown. That is, the example in which the area display is changed depending on the display mode is shown.
For example, in the live view display and the postview display that are mainly aimed to check an image, the display information indicating the areas of the images (e.g., frames 681 to 683 and 686 to 688 shown in
On the other hand, in the reproduction display mainly aimed to view an image, the display information indicating the areas of the images is not displayed in the panoramic image.
Alternatively, those display methods may be set when the information processing apparatus 100 is shipped from a factory. Alternatively, in each of the live view display, the postview display, and the reproduction display, the user may be capable of individually setting whether or not to display the display information indicating the areas of the images in the panoramic image.
For example, an item-setting screen may be displayed on the display unit 140 of the information processing apparatus 100 and the user may make settings in the item-setting screen.
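For illustration, the mode-dependent behavior described in this subsection could look like the following sketch; the mode names, defaults, and settings dictionary are assumptions.

```python
# Per-mode defaults for whether to draw the frames indicating the areas
# of the middle, left, and right images on the panoramic image.
AREA_FRAME_DEFAULTS = {
    "live_view": True,      # checking composition: show area frames
    "postview": True,       # checking the recorded image: show area frames
    "reproduction": False,  # viewing the image: hide area frames
}

def should_draw_area_frames(mode, user_settings=None):
    """Return the per-mode default, overridden by any individual user
    setting made in the item-setting screen."""
    return (user_settings or {}).get(mode, AREA_FRAME_DEFAULTS[mode])
```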
[Example in which Middle Image is Scroll-Displayed]
In the above-mentioned example, with the positions (positions in longitudinal direction) of the images (middle image, left image, and right image) in the panoramic image, which are displayed on the display unit 140, being fixed, the images are displayed. It is also assumed that the middle image is displayed while moving the position (position in longitudinal direction) of the middle image in the panoramic image. In view of this, an example in which the middle image is displayed while moving the position (position in longitudinal direction) of the middle image in the panoramic image (so-called automatic scroll display) is shown below.
Note that, although
Further, the transition of the area of the middle image in the panoramic image 685 corresponds to the transition shown in
In this manner, the middle image is displayed while automatically moving the area of the middle image in the panoramic image. A so-called automatic scroll display of the middle image is performed.
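A minimal sketch of such an automatic scroll, assuming a fixed step size in panorama pixels, is shown below; the drawing routine itself is device-specific and only hinted at.

```python
def scroll_positions(pano_w, crop_w, step=8):
    """Yield successive x-offsets of the middle-image crop window, moving
    the displayed area from the left end to the right end of the panorama."""
    x = 0
    while x <= pano_w - crop_w:
        yield x
        x += step

# Usage sketch (draw_middle_image is a hypothetical display helper):
# for x in scroll_positions(3600, 1200):
#     draw_middle_image(crop=(x, 0, 1200, pano_h))
```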
Whether or not to perform the automatic scroll display of the middle image may be set depending on the mode. For example, the automatic scroll display of the middle image is not performed when the live view display for checking a subject before imaging is performed and when the postview display for checking a recorded image after imaging is performed. That is, the middle image is displayed with the position (position in longitudinal direction) of the middle image in the panoramic image being fixed. On the other hand, the automatic scroll display of the middle image is performed when the reproduction display of an image at an arbitrary point of time is performed.
Alternatively, for example, the display control unit 550 may switch between the first display method and the second display method based on the user operation during live view or during postview. The first display method is, for example, a display method of displaying the middle image with the position of the middle image in the panoramic image being fixed. Further, the second display method is a display method of displaying an image including an area in which a specific target included in the panoramic image is present, as the middle image. On the other hand, the display control unit 550 may switch between the third display method (automatic scroll display) and the second display method based on the user operation during reproduction.
Alternatively, only when the live view display for checking the subject before imaging is performed, a specific area (e.g., area of middle image) may be displayed without performing the automatic scroll display of the middle image. In this case, the automatic scroll display of the middle image may be performed when the postview display for checking a recorded image after imaging is performed and when the reproduction display of an image at an arbitrary point of time is performed. Alternatively, a specific area may be displayed without performing the automatic scroll display of the middle image when the live view display is performed and when the postview display is performed, and the automatic scroll display of the middle image may be performed when the reproduction display of an image at an arbitrary point of time is performed.
Alternatively, for example, the display control unit 550 may switch between the first display method and the second display method based on the user operation during the live view. On the other hand, the display control unit 550 may switch between the third display method and the second display method based on the user operation during postview or during reproduction.
Alternatively, setting of those display methods may be performed when the information processing apparatus 100 is shipped from a factory or when the user uses the information processing apparatus 100. Alternatively, in each of the live view display, the postview display, and the reproduction display, the user may be capable of individually setting whether or not to perform the automatic scroll display of the middle image. Further, for example, it is possible to individually set an image display method by a live view operation, an image display method by a postview operation, and an image display method by a reproduction operation. If such setting is made, the display control unit 550 is capable of displaying images based on the setting. Alternatively, concerning those various settings, an item-setting screen may be displayed on the display unit 140 of the information processing apparatus 100 and the user may manually make those various settings in the item-setting screen. Alternatively, when the function of each of the live view display, the postview display, and the reproduction display is activated, the middle image can be displayed according to the content set in the item-setting screen.
Further, the input unit (e.g., touch panel) that inputs an instruction from the user may be provided to a surface of the casing of the information processing apparatus 100. Then, when the user inputs an instruction through the input unit in a period of each of the live view display, the postview display, and the reproduction display, the display of the middle image may be switched between a still image display and an automatic scroll display for each display period. The automatic scroll display is an example of the display method of displaying the middle image while moving the area displayed as the middle image from one end to the other end of the both end portions in the panoramic image. Further, the still image display is an example of the display method of displaying a certain area as the middle image without moving the area displayed as the middle image.
For example, a case where a switching instruction operation relating to the display method for a middle image (still image display and automatic scroll display) is received from the user is assumed. In this case, the display control unit 550 is capable of selecting, based on the switching instruction operation, either one of the automatic scroll display and the still image display and displaying the middle image.
Alternatively, which of the display methods is used as the display method for a middle image at a point of time when the still image display or the automatic scroll display of the middle image is completed may be stored inside the information processing apparatus 100. Then, when the function of each of the live view display, the postview display, and the reproduction display is activated after the storing, the middle image may be displayed according to the stored content.
For example, if the display method for a middle image is switched, a display method according to the switching is stored by the time the display of the middle image based on the switching is completed. Then, when a new panoramic image is displayed after the display of the middle image based on the switching is completed, the middle image in the panoramic image can be displayed using the stored display method.
[Example in which Middle Image is Displayed Based on User Operation]
In the above-mentioned example, the images (middle image, left image, and right image) are automatically displayed on the display unit 140. It is also important to rapidly display a middle image at a position (position in longitudinal direction) that the user desires to view. In view of this, an example in which the middle image is displayed based on the user operation is shown below. That is, the example in which the display area of the middle image is changed based on the user operation is shown.
A case where the display unit 140 of the information processing apparatus 100 is constituted of a touch panel is assumed. In this case, for example, when the user depresses the display surface of the display unit 140 with a finger 10, the depression operation is detected as a change in electrical properties such as a resistance change and a capacitance change. Then, when the depression operation is detected while a specific area is displayed as the middle image, a middle image with the detected position being a center is generated and displayed. Further, in the case where a touch panel capable of detecting a proximity of the user (e.g., proximity of part of a body such as a hand and a finger) with respect to the display surface of the display unit 140 is used, the middle image can also be displayed when the proximity is detected, in the same manner as when the depression operation is detected.
A case where the depression operation is detected during the above-mentioned automatic scroll display as another display mode of the middle image is assumed. In this case, the middle image with the detected position being a center is displayed. After the display, the middle image may be continuously displayed as a still image or the automatic scroll display may be re-started with that position being a start point.
For example, a case where the designation operation for specifying a position in the panoramic image is performed is assumed. In this case, when the middle image is displayed by the automatic scroll display, the display control unit 550 displays an image including a position in the panoramic image that is designated by the designation operation and a surrounding area thereof, as a new middle image. Then, the display control unit 550 performs a display of the middle image while performing a movement with that position being a start point. On the other hand, if the middle image is displayed by the still image display when the designation operation is performed, the display control unit 550 displays an image including that position and a surrounding area thereof, as a new middle image.
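For illustration, handling of the designation operation could look like the sketch below; mapping the touch position via a reduced whole-panorama display, and the return convention, are assumptions introduced here.

```python
def on_touch(touch_x, reduced_w, pano_w, crop_w, scrolling):
    """Map a touch on a reduced_w-pixel-wide reduced panorama back to
    panorama coordinates and recenter the middle image there.

    If the automatic scroll display is active, the new offset also becomes
    the scroll start point; otherwise the image is shown as a still image.
    """
    pano_x = int(touch_x * pano_w / reduced_w)   # touch -> panorama coords
    new_x = max(0, min(pano_x - crop_w // 2, pano_w - crop_w))
    return new_x, ("scroll_from" if scrolling else "still")
```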
Note that, for example, a touch panel of an electrostatic type (electrostatic capacitance type), a touch panel of a pressure-sensitive type (resistance film type), or an optical touch panel can be used as the touch panel.
Alternatively, an operation member (e.g., arrow key) other than the touch panel may be used to specify a position at which the middle image should be displayed.
[Modified Example of Two-Row Display]
In the above-mentioned example, the middle image, the left image, and the right image are displayed in two rows in the longitudinal direction of the display unit 140. It should be noted that, depending on the size of each image, the middle image, the left image, and the right image may be displayed in one row in the longitudinal direction of the display unit 140. In view of this, a case where the middle image, the left image, and the right image are displayed in one row in the longitudinal direction of the display unit 140 is shown below.
In Part “a” of
As shown in Parts “a” and “b” of
When a phone call operation of an information processing apparatus such as the smart phone and the cell phone or an operation of creating or reading an e-mail document is performed, the casing is often used in the vertical state. Therefore, for example, also when imaging is performed using an information processing apparatus with a camera, the casing and the display unit are often held in the vertical state for imaging, such that a vertically long image is recorded.
In view of this, in the first embodiment of the present disclosure, in the information processing apparatus 100 such as the smart phone and the cell phone, a plurality of imaging systems are arranged in an orthogonal direction orthogonal to the longitudinal direction of the information processing apparatus 100. With this, the panoramic image can be recorded even if the information processing apparatus 100 is in the vertical state for imaging.
Further, during the live view display, during the postview display, or during the reproduction display, when the panoramic image is displayed while the information processing apparatus 100 is in the vertical state, the plurality of images obtained by dividing the panoramic image are displayed in a plurality of rows in the longitudinal direction of the display unit 140. With this, in a portable information processing apparatus 100 with a multi-eye camera, capturing of a multi-eye panoramic image can be easily performed in a state in which the user holds the information processing apparatus 100 in a vertical state. Further, even if the panoramic image is displayed while the information processing apparatus 100 is in the vertical state, the panoramic image can be appropriately displayed.
[Display Example when State (Vertical State and Horizontal State) of Information Processing Apparatus is Changed]
In the above, the display example of the panoramic image when the information processing apparatus 100 is in the vertical state is shown. It should be noted that it is also assumed that some users view the panoramic image while the information processing apparatus 100 is in the horizontal state.
For example, it is also assumed that the same number of display rows is set in the horizontal state and the vertical state. For example, when the panoramic image is displayed in the horizontal state, the panoramic image can be displayed in a large size by displaying the panoramic image with the full horizontal width of the display unit 140. However, the height of each of the margin areas above and below the panoramic image is smaller when the panoramic image is displayed with the full horizontal width of the display unit 140 in the horizontal state than when it is displayed with the full horizontal width of the display unit 140 in the vertical state. In this manner, each of the margin areas above and below the panoramic image is narrower in the horizontal state than in the vertical state. Therefore, it is favorable that the number of display rows is made smaller in the horizontal state than in the vertical state. In view of this, a display example when the state (vertical state and horizontal state) of the information processing apparatus 100 is changed is shown below.
On the left-hand side of each of Parts “a” to “c” of
For example, when the information processing apparatus 100 is in the horizontal state, only a panoramic image 720, or the panoramic image 720 and a middle image 721, is displayed on the display unit 140. For example, when the information processing apparatus 100 is in the vertical state, a middle image 722, a left image 723, and a right image 724 are displayed on the display unit 140, or a reduced image 725 of the panoramic image is displayed together with those images on the display unit 140.
In this manner, the number of rows of the images when the user holds the information processing apparatus 100 such that the display unit 140 is in the vertical state is made larger than the number of rows of the images when the user holds the information processing apparatus 100 such that the display unit 140 is in the horizontal state. With this, even if the state of the information processing apparatus (vertical state and horizontal state) is changed, the panoramic image can be displayed in such a manner that the user can easily view the panoramic image.
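The attitude-dependent choice of the number of rows could be sketched as follows; the attitude strings and layout descriptors are illustrative assumptions.

```python
def choose_layout(attitude):
    """Pick a display layout from the attitude detected by the attitude
    detection unit: more rows in the vertical state, fewer in the horizontal."""
    if attitude == "vertical":
        # middle image on top, left and right images below (optionally with
        # a reduced whole panorama underneath)
        return {"rows": 2, "images": ["middle", "left+right"]}
    # horizontal state: the full-width panorama (optionally with a middle
    # image) fits the wide display in a single row
    return {"rows": 1, "images": ["panorama"]}
```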
[Modified Examples of Information Processing Apparatus]
Next, modified examples of the information processing apparatus including the plurality of imaging systems will be described.
The information processing apparatus 800 is realized by, for example, an information processing apparatus (e.g., flip phone with multi-eye camera) including a plurality of imaging systems.
The information processing apparatus 800 includes a rotation member 802, the first casing 811, and the second casing 812. The first casing 811 and the second casing 812 are coupled to each other such that the first casing 811 and the second casing 812 can be folded with the rotation member 802 being an axis of rotation. That is, the folding can be performed such that an operation unit 801 and the display unit 140 are opposed to each other with the rotation member 802 being the axis of rotation.
The first casing 811 includes an imaging unit 130 and a display unit 140. The second casing 812 includes the operation unit 801, an electronic substrate 815, and a battery storage unit 816.
The operation unit 801 is an operation unit including a numeric keypad, an arrow key, a determination key, and the like.
In this manner, the first embodiment of the present disclosure may be applied to the foldable information processing apparatus 800 including the plurality of imaging systems. Note that, in this modified example, the three imaging systems are arranged according to a predetermined rule. It should be noted that the first embodiment of the present disclosure may be applied to an information processing apparatus (e.g., imaging apparatus such as digital still camera and digital video camera (e.g., camera-integrated recorder)) including two or four or more imaging systems. That is, the first embodiment of the present disclosure can be applied by arranging the two or four or more imaging systems according to a predetermined rule. Examples in which two imaging systems are arranged according to a predetermined rule are shown in
As shown in Part “a” of
The information processing apparatus 820 is realized by an information processing apparatus (e.g., smart phone with multi-eye camera and cell phone with multi-eye camera) including, for example, a plurality of imaging systems.
The information processing apparatus 820 includes an electronic substrate 101, a battery storage unit 102, a changeover switch 111, a determination key 112, a display unit 140, and an imaging unit 830. Note that the information processing apparatus 820 is the same as the information processing apparatus 100 shown in
The information processing apparatus 840 is realized by an information processing apparatus (e.g., flip phone with multi-eye camera) including, for example, a plurality of imaging systems.
The information processing apparatus 840 includes a rotation member 802, the first casing 811, and the second casing 812. Note that the information processing apparatus 840 is the same as the information processing apparatus 800 shown in
[Display Example of Panoramic Image and Other Images]
As mentioned above, the panoramic image generated by the information processing apparatus including the plurality of imaging systems can be displayed. The information processing apparatus including the plurality of imaging systems is also capable of generating normal images (e.g., images having aspect ratio of 3:4), and hence it is also important to appropriately display those images.
In view of this, an example in which images generated by an information processing apparatus including a plurality of imaging systems are appropriately displayed is shown below.
[Display Example of Information Processing Apparatus Including Three Imaging Units]
In Part “a” of
In Part “a” of
In Part “a” of
[Display Example of Information Processing Apparatus Including Two Imaging Units]
In Part “a” of
In Part “a” of
In Part “a” of
[Operation Example of Information Processing Apparatus]
First, a determination is made as to whether or not an imaging mode or a reproduction mode is set as an operation mode (Step S901). Note that, if an operation mode other than the imaging mode and the reproduction mode is set, processing depending on the set operation mode is performed.
If the imaging mode is set (Step S901), an imaging function is activated (Step S902) and imaging processing is performed (Step S910). This imaging processing will be described in detail with reference to
Subsequently, a determination is made as to whether or not the imaging mode is released (Step S903), and, if the imaging mode is not released, the processing returns to Step S910. On the other hand, if the imaging mode is released (Step S903), a determination is made as to whether or not the operation mode other than the imaging mode and the reproduction mode is set (Step S904). Then, if the operation mode other than the imaging mode and the reproduction mode is not set (Step S904), the processing returns to Step S901. On the other hand, if the operation mode other than the imaging mode and the reproduction mode is set (Step S904), the operation of the display control processing is terminated.
Further, if the reproduction mode is set (Step S901), a reproduction function is activated (Step S905) and reproduction processing is performed (Step S940). This reproduction processing will be described in detail with reference to
Subsequently, a determination is made as to whether or not the reproduction mode is released (Step S906), and, if the reproduction mode is not released, the processing returns to Step S940. On the other hand, if the reproduction mode is released (Step S906), a determination is made as to whether or not the operation mode other than the imaging mode and the reproduction mode is set (Step S904).
First, the display control unit 550 determines whether or not the aspect ratio of an image generated by the image generation unit 510 is for a panoramic image (Step S911). Then, if the aspect ratio of the image is for a panoramic image (Step S911), panoramic imaging processing is performed (Step S930). Note that this panoramic imaging processing will be described in detail with reference to
Otherwise, if the aspect ratio of the image is not for a panoramic image (Step S911), the display control unit 550 determines whether or not the aspect ratio of the image is 3:4 (Step S912). Then, if the aspect ratio of the image is 3:4 (Step S912), the display control unit 550 causes the display unit 140 to display an image (3:4 image) generated by the image generation unit 510 (Step S913). That is, the 3:4 image is live-view-displayed.
Subsequently, a determination is made as to whether or not the shutter button (e.g., determination key 112) is depressed (Step S914), and, if the shutter button is not depressed, the operation of the imaging processing is terminated. On the other hand, if the shutter button is depressed (Step S914), the recording control unit 530 records an image (3:4 image) generated at a timing of the depression in the storage unit 540 (Step S915). Subsequently, the display control unit 550 causes the display unit 140 to display the image (3:4 image) as a recording target in the storage unit 540 (Step S916). That is, the 3:4 image is postview-displayed.
Otherwise, if the aspect ratio of the image is not 3:4 (if aspect ratio of image is 4:3) (Step S912), the display control unit 550 causes the display unit 140 to display an image (4:3 image) generated by the image generation unit 510 (Step S917). That is, the 4:3 image is live-view-displayed.
Subsequently, a determination is made as to whether or not the shutter button is depressed (Step S918), and, if the shutter button is not depressed, the operation of the imaging processing is terminated. On the other hand, if the shutter button is depressed (Step S918), the recording control unit 530 records an image (4:3 image) generated at a timing of the depression in the storage unit 540 (Step S919). Subsequently, the display control unit 550 causes the display unit 140 to display the image (4:3 image) as a recording target in the storage unit 540 (Step S920). That is, the 4:3 image is postview-displayed.
First, the display control unit 550 causes the display unit 140 to display an image (panoramic image) generated by the image generation unit 510 (Step S931). In this case, a plurality of images obtained by dividing the panoramic image are live-view-displayed in a plurality of rows (plurality of rows in longitudinal direction of display unit 140) (Step S931). For example, as shown in
Subsequently, a determination is made as to whether or not the shutter button is depressed (Step S932), and, if the shutter button is not depressed, the operation of the panoramic imaging processing is terminated. On the other hand, if the shutter button is depressed (Step S932), the recording control unit 530 records an image (panoramic image) generated at a timing of the depression in the storage unit 540 (Step S933). In this case, if a feature is detected, information on the feature (e.g., coordinates of feature in panoramic image) is recorded in association with the panoramic image. Alternatively, in addition to the panoramic image, a peripheral image including the feature may be recorded in association with the panoramic image. The peripheral image including the feature can be recorded as, for example, a display image (e.g., thumbnail image).
Subsequently, the display control unit 550 causes the display unit 140 to display the image (panoramic image) as a recording target in the storage unit 540 (Step S934). In this case, a plurality of images obtained by dividing the panoramic image are postview-displayed in a plurality of rows (plurality of rows in longitudinal direction of display unit 140) (Step S934). For example, as shown in
First, the display control unit 550 determines whether or not the aspect ratio of an image (image stored in storage unit 540) that the user instructs to reproduce is for a panoramic image (Step S941). Then, if the aspect ratio of the image is for a panoramic image (Step S941), the display control unit 550 determines, based on attitude information from the attitude detection unit 150, whether or not the display unit 140 is in a vertical state (Step S942).
Then, if the display unit 140 is in the vertical state (Step S942), the display control unit 550 causes the display unit 140 to display the panoramic image (panoramic image stored in storage unit 540) that the user instructs to reproduce (Step S943). In this case, a plurality of images obtained by dividing the panoramic image are displayed in a plurality of rows (plurality of rows in longitudinal direction of display unit 140) (Step S943). For example, as shown in
Further, if the display unit 140 is not in the vertical state (if display unit 140 is in horizontal state) (Step S942), the display control unit 550 causes the display unit 140 to display the panoramic image (panoramic image stored in storage unit 540) that the user instructs to reproduce (Step S944). In this case, the panoramic image is displayed in a smaller number of rows (number of rows in longitudinal direction of display unit 140) than the number of rows when displayed in the vertical state (Step S944). For example, as shown on the left-hand side of Parts “a” to “c” of
Otherwise, if the aspect ratio of the image is not for a panoramic image (Step S941), the display control unit 550 determines whether or not the aspect ratio of the image is 3:4 (Step S945). Then, if the aspect ratio of the image is 3:4 (Step S945), the display control unit 550 determines, based on the attitude information from the attitude detection unit 150, whether or not the display unit 140 is in a vertical state (Step S946).
Then, if the display unit 140 is in the vertical state (Step S946), the display control unit 550 causes the display unit 140 to display an image (3:4 image stored in storage unit 540) that the user instructs to reproduce (Step S947). In this case, the 3:4 image instructed to be reproduced is displayed upright (Step S947). For example, as shown in
Otherwise, if the display unit 140 is not in the vertical state (if display unit 140 is in horizontal state) (Step S946), the display control unit 550 causes the display unit 140 to display an image (3:4 image stored in storage unit 540) that the user instructs to reproduce (Step S948). In this case, the 3:4 image instructed to be reproduced is displayed upright (Step S948).
Otherwise, if the aspect ratio of the image is not 3:4 (if aspect ratio is 4:3) (Step S945), the display control unit 550 determines, based on the attitude information from the attitude detection unit 150, whether or not the display unit 140 is in a vertical state (Step S949).
Then, if the display unit 140 is in the vertical state (Step S949), the display control unit 550 causes the display unit 140 to display an image (4:3 image stored in storage unit 540) that the user instructs to reproduce (Step S950). In this case, the 4:3 image instructed to be reproduced is displayed upright (Step S950). For example, as shown in
Otherwise, if the display unit 140 is not in the vertical state (if display unit 140 is in horizontal state) (Step S949), the display control unit 550 causes the display unit 140 to display the image (4:3 image stored in storage unit 540) that the user instructs to reproduce (Step S951). In this case, the 4:3 image instructed to be reproduced is displayed upright (Step S951).
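The reproduction-processing branches walked through above can be condensed into the following sketch; the function names are hypothetical stand-ins for the operations of the display control unit 550.

```python
def display_in_rows(image, rows):
    print(f"display {image['name']} in {rows} row(s)")   # stand-in for display unit 140

def display_upright(image):
    print(f"display {image['name']} upright")            # stand-in for display unit 140

def reproduce(image, attitude):
    """Dispatch mirroring Steps S941 to S951 above."""
    if image["aspect"] == "panorama":
        rows = 2 if attitude == "vertical" else 1        # attitude check (S942)
        display_in_rows(image, rows)                     # S943 / S944
    else:
        # 3:4 and 4:3 images are displayed upright regardless of attitude
        # (S945 to S951); only the panorama layout depends on the attitude.
        display_upright(image)

reproduce({"name": "IMG_0001", "aspect": "panorama"}, "vertical")
```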
[Operation Example when Middle Image is Scroll-Displayed]
In the above, the operation example in which a display is performed with the positions (positions in longitudinal direction) of the images (middle image, left image, and right image), which are displayed on the display unit 140, in the panoramic image being fixed is shown. An operation example in which a display (so-called automatic scroll display) is performed while moving the position (position in longitudinal direction) of the middle image in the panoramic image is shown below.
First, the scroll method is set (Step S907). In the setting of the scroll method, whether or not to perform a scroll display during each display is set. Further, the setting of the scroll method is performed, for example, manually by the user or automatically. The setting content is stored in, for example, the display control unit 550.
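As a rough illustration of this setting storage, the sketch below keeps one scroll flag per display period and lets it be read out and changed later; the dictionary-based persistence is an assumption, the embodiment storing the content in the display control unit 550.

```python
# One scroll-method flag per display period (assumed defaults).
scroll_settings = {
    "live_view": False,     # e.g., no scroll while framing a shot
    "postview": False,
    "reproduction": True,   # scroll when viewing at an arbitrary time
}

def set_scroll(mode, enabled):
    """Record a manual or automatic setting change for later readout."""
    scroll_settings[mode] = enabled

def scroll_enabled(mode):
    """Read out the stored setting when the corresponding display starts."""
    return scroll_settings[mode]
```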
Then, if the imaging mode is set (Step S901), the imaging function is activated (Step S902) and the imaging processing is performed (Step S910). The panoramic imaging processing (Step S930 of
Otherwise, if the reproduction mode is set (Step S901), the reproduction function is activated (Step S905) and the reproduction processing is performed (Step S960). This reproduction processing will be described in detail with reference to
First, the display control unit 550 reads out the setting of the scroll method during the live view display (Step S921). Subsequently, based on the read-out setting of the scroll method, the display control unit 550 causes the display unit 140 to display an image (panoramic image) generated by the image generation unit 510 (Step S922). For example, if the setting to perform the scroll during the live view display is made, as shown in
Subsequently, a determination is made as to whether or not the setting change of the scroll method is performed (Step S923). For example, during the live view display, a setting change of the scroll method (e.g., from performing scroll display to not performing scroll display or from not performing scroll display to performing scroll display) is performed by a manual operation by the user. Then, if a setting change of the scroll method is performed (Step S923), the scroll method during the live view display is changed (Step S924), and a scroll method after the change is stored (Step S925).
Subsequently, the display control unit 550 reads out the setting of the scroll method during the postview display (Step S926). Subsequently, the display control unit 550 causes, based on the read-out setting of the scroll method, the display unit 140 to display an image (panoramic image) as a recording target in the storage unit 540 (Step S927). For example, if the setting to perform the scroll during the postview display is made, as shown in
Subsequently, a determination is made as to whether or not a setting change of the scroll method is performed (Step S928). For example, during the postview display, a setting change of the scroll method (e.g., from performing scroll display to not performing scroll display or from not performing scroll display to performing scroll display) is performed by a manual operation by the user. Then, if a setting change of the scroll method is performed (Step S928), the scroll method during the postview display is changed (Step S929), and a scroll method after the change is stored (Step S935).
If the display unit 140 is in the vertical state (Step S942), vertical-state panoramic reproduction processing is performed (Step S970). The vertical-state panoramic reproduction processing will be described in detail with reference to
Otherwise, if the display unit 140 is not in the vertical state (display unit 140 is in horizontal state) (Step S942), horizontal-state panoramic reproduction processing is performed (Step S980). The horizontal-state panoramic reproduction processing will be described in detail with reference to
First, the display control unit 550 reads out the setting of the scroll method during the reproduction display (Step S971). Subsequently, the display control unit 550 causes, based on the read-out setting of the scroll method, the display unit 140 to display an image that the user instructs to reproduce (panoramic image stored in storage unit 540) (Step S972). For example, if the setting to perform the scroll during the reproduction display is made, as shown in
Subsequently, a determination is made as to whether or not a setting change of the scroll method is performed (Step S973). For example, during the reproduction display, a setting change of the scroll method (e.g., from performing the scroll display to not performing it, or from not performing the scroll display to performing it) is performed by a manual operation by the user. Then, if a setting change of the scroll method is performed (Step S973), the scroll method during the reproduction display is changed (Step S974), and the scroll method after the change is stored (Step S975).
First, the display control unit 550 reads out the setting of the scroll method during the reproduction display (Step S981). Subsequently, the display control unit 550 causes, based on the read-out setting of the scroll method, the display unit 140 to display an image (panoramic image stored in storage unit 540) that the user instructs to reproduce (Step S982). For example, when the setting to perform the scroll during the reproduction display is made, as shown in
Subsequently, a determination is made as to whether or not a setting change of the scroll method is performed (Step S983). For example, during the reproduction display, a setting change of the scroll method (e.g., from performing the scroll display to not performing it, or from not performing the scroll display to performing it) is performed by a manual operation by the user. Then, if a setting change of the scroll method is performed (Step S983), the scroll method during the reproduction display is changed (Step S984), and the scroll method after the change is stored (Step S985).
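Both reproduction flows persist a user-made change in the same way; reusing the earlier sketches, a setting change during the reproduction display would behave as follows.

```python
# A user switches from not performing to performing the scroll display
# during the reproduction display (Steps S973 to S975 / S983 to S985).
settings = ScrollSettings(reproduction=False)
run_display("reproduction", settings,
            show_image=lambda s: print("reproduction, scroll =", s),
            get_setting_change=lambda: True)  # the user turns scrolling on
assert settings.reproduction is True          # the changed method is stored
```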
In the first embodiment of the present disclosure, the example in which the imaging unit is fixedly provided in the information processing apparatus is shown above. It should be noted that the first embodiment of the present disclosure may also be applied to an information processing apparatus in which an imaging unit is movably provided.
In a second embodiment of the present disclosure, an example of an information processing apparatus in which an imaging unit is movably provided is shown.
The information processing apparatus 1000 is realized by, for example, an information processing apparatus (e.g., cell phone with movable multi-eye camera) including a plurality of imaging systems.
The information processing apparatus 1000 includes a first casing 1010, a second casing 1020, and a rotation member 1021. The second casing 1020 is a casing attached to the first casing 1010, which houses the display unit 140. Further, the second casing 1020 is a rotatable casing having an axis of rotation parallel (or almost parallel) to a short side of the display unit 140. Further, three imaging systems (optical systems 131 to 133 and imaging devices 134 to 136) constituting the imaging unit 1030 are arranged, on a surface of the second casing 1020, in a direction parallel (or almost parallel) to the axis of rotation of the second casing 1020. The imaging unit 1030 is movably provided in this manner, and hence the imaging unit 1030 can be rotated in the directions of the arrows 1022 and 1023 of
In this manner, the first embodiment of the present disclosure may be applied to the information processing apparatus 1000 including the plurality of imaging systems, in which the plurality of imaging systems are movably provided. Note that, in this example, the three imaging systems are arranged according to a predetermined rule. It should be noted that the first embodiment of the present disclosure may also be applied to an information processing apparatus (e.g., an imaging apparatus such as a digital still camera or a digital video camera (e.g., camera-integrated recorder)) including two or four or more imaging systems. An example in which two imaging systems are arranged according to a predetermined rule is shown in
The information processing apparatus 1050 is realized by, for example, an information processing apparatus (e.g., cell phone with movable multi-eye camera) including two imaging systems.
The information processing apparatus 1050 includes a first casing 1060, a second casing 1070, and a rotation member 1071. The second casing 1070 is a casing attached to the first casing 1060, which houses the display unit 140. Further, the second casing 1070 is a rotatable casing having an axis of rotation parallel (or almost parallel) to a short side of the display unit 140. Further, the two imaging systems (optical systems 831 and 832 and imaging devices 833 and 834) constituting the imaging unit 1080 are arranged, on a surface of the second casing 1070, in a direction parallel (or almost parallel) to the axis of rotation of the second casing 1070. The imaging unit 1080 is movably provided in this manner, and hence the imaging unit 1080 can be rotated in the directions of the arrows 1072 and 1073 of
In this manner, the first embodiment of the present disclosure may be applied to the information processing apparatus 1050 including the two imaging systems, in which the two imaging systems are movably provided. Note that, in
Note that the first embodiment of the present disclosure may be applied to an information processing apparatus (electronic apparatus, image processing apparatus, display apparatus, or display control apparatus) capable of causing a display apparatus (built-in display apparatus or external display apparatus) to display a panoramic image. For example, the first embodiment of the present disclosure may be applied to apparatuses such as a multi-eye imaging apparatus, a digital photo frame, a tablet terminal, a digital signage terminal (e.g., rotation type), a navigation apparatus, a personal computer, and a portable media player. Further, the first embodiment of the present disclosure may also be applied to an apparatus having no imaging function (e.g., a tablet terminal having no imaging function, such as an apparatus that displays a panoramic image on a web site). Note that the multi-eye imaging apparatus is, for example, a multi-eye digital still camera or a multi-eye digital video camera (e.g., camera-integrated recorder).
The display method in the first embodiment of the present disclosure may also be applied to an imaging apparatus other than the multi-eye imaging apparatus. For example, the display method may be applied to a single-eye imaging apparatus including a wide-angle lens and an imaging device (e.g., a single-eye camera capable of capturing, in the direction of the short side of the casing, a panoramic image that is horizontally longer than 16:9).
The embodiments of the present invention are shown as examples for implementing the present invention. The matters in the embodiments of the present invention have corresponding relations to the invention specifying matters in the claims. Similarly, the invention specifying matters in the claims have corresponding relations to the matters in the embodiments of the present invention having the same names. However, the present invention is not limited to the embodiments, and various modifications can be made without departing from the gist of the present invention.
In addition, the processing procedures described in the embodiments of the present invention may be understood as a method including the series of procedures. Moreover, the series of procedures may be understood as a program for causing a computer to execute the series of procedures, or as a recording medium storing the program. As the recording medium, a compact disc (CD), a MiniDisc (MD), a digital versatile disc (DVD), a memory card, a Blu-ray Disc (registered trademark), or the like may be used.
Note that the present disclosure may also take the following configurations.
(1) An information processing apparatus, including:
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
2012-236784 | Oct. 2012 | JP | national