1. Field of the Invention
The present invention relates to at least one of an information processing method, an information processing device, and a program.
2. Description of the Related Art
A method for displaying a panoramic image has been known conventionally.
A user interface (that will be referred to as a “UI” below) has been known for accepting an instruction from a user with respect to display of a panoramic image in a panoramic image display (see, for example, Japanese Patent Application Publication No. 2011-076249).
However, a conventional UI assigns a function for scrolling an image to so-called “dragging” at a time of image display on a smartphone or the like, and hence, it may be difficult for a user to execute an image operation, for example, editing of an image or the like.
According to one aspect of the present invention, there is provided an information processing method for causing a computer to process an image, wherein the information processing method causes the computer to execute an acquisition step of acquiring the image, and an output step of outputting an output image by separating between an editing area that is a predetermined area of the image where the image is editable and a changing area that is an area of the image other than the predetermined area, wherein, in a case where an operation is done on the changing area, the editing area is moved, and in a case where an operation is done on the editing area, the editing area is not moved.
An embodiment of the present invention will be described below.
An image taking system 10 has an image taking device 1 and a smartphone 2.
The image taking device 1 has a plurality of optical systems, and produces, and outputs to the smartphone 2, for example, a taken image of a wide range such as all directions around the image taking device 1 (that will be referred to as an “all celestial sphere image” below). Details of the image taking device 1 and an all celestial sphere image will be described below. An image that is processed by the image taking system 10 is, for example, an all celestial sphere image. A panoramic image is, for example, an all celestial sphere image. An example of an all celestial sphere image will be described below.
An information processing device is, for example, the smartphone 2. The smartphone 2 will be described as an example below. The smartphone 2 is a device for causing a user to operate an all celestial sphere image acquired from the image taking device 1. The smartphone 2 is also a device for outputting an acquired all celestial sphere image to a user. A detail of the smartphone 2 will be described below.
The image taking device 1 and the smartphone 2 are subjected to wired or wireless connection. For example, the smartphone 2 downloads data such as an all celestial sphere image output from the image taking device 1 and inputs such data to the smartphone 2. Here, connection may be executed through a network.
Here, an entire configuration is not limited to a configuration illustrated in
<An Image Taking Device>
The image taking device 1 has a front side image taking element 1H1, a back side image taking element 1H2, and a switch 1H3. Hardware that is provided in an interior of the image taking device 1 will be described below.
The image taking device 1 produces an all celestial sphere image by using images taken by the front side image taking element 1H1 and the back side image taking element 1H2.
The switch 1H3 is a so-called “shutter button” and is an input device for causing a user to execute an instruction of image taking for the image taking device 1.
The image taking device 1 is held by hand of a user, for example, as illustrated in
As illustrated in
An image taken by the front side image taking element 1H1 in
Here, it is desirable for an angle of view to be within a range greater than or equal to 180° and less than or equal to 200°. In particular, as a hemispherical image in
An image taken by the back side image taking element 1H2 in
An image in
The image taking device 1 executes processes such as a distortion correction process and a synthesis process, and thereby, produces an image illustrated in
Here, an all celestial sphere image is not limited to an image produced by the image taking device 1. An all celestial sphere image may be, for example, an image taken by another camera or the like, or an image produced based on an image taken by another camera. It is desirable for an all celestial sphere image to be an image with a view angle in a wide range taken by a so-called “all direction camera”, a so-called “wide angle lens camera”, or the like.
Furthermore, an all celestial sphere image will be described as an example, and an image is not limited to such an all celestial sphere image. An image may be, for example, an image or the like taken by a compact camera, a single lens reflex camera, a smartphone, or the like. An image may be a panoramic image that extends horizontally or vertically, or the like.
<A Hardware Configuration of an Image Taking Device>
The image taking device 1 has an image taking unit 1H4, an image processing unit 1H7, an image taking control unit 1H8, a Central Processing Unit (CPU) 1H9, and a Read-Only Memory (ROM) 1H10. Furthermore, the image taking device 1 has a Static Random Access Memory (SRAM) 1H11, a Dynamic Random Access Memory (DRAM) 1H12, and an operation interface (I/F) 1H13. Moreover, the image taking device 1 has a network I/F 1H14, a wireless I/F 1H15, and an antenna 1H16. Each component of the image taking device 1 is connected through a bus 1H17 and executes input or output of data or a signal.
The image taking unit 1H4 has the front side image taking element 1H1 and the back side image taking element 1H2. A lens 1H5 that corresponds to the front side image taking element 1H1 and a lens 1H6 that corresponds to the back side image taking element 1H2 are placed. The front side image taking element 1H1 and the back side image taking element 1H2 are so-called “camera units”. The front side image taking element 1H1 and the back side image taking element 1H2 have optical sensors such as a Complementary Metal Oxide Semiconductor (CMOS) or a Charge Coupled Device (CCD). The front side image taking element 1H1 executes a process for converting light incident on the lens 1H5 to produce image data. The back side image taking element 1H2 executes a process for converting light incident on the lens 1H6 to produce image data. The image taking unit 1H4 outputs image data produced by the front side image taking element 1H1 and the back side image taking element 1H2 to the image processing unit 1H7. For example, image data are the front side hemispherical image in
Here, the front side image taking element 1H1 and the back side image taking element 1H2 may have an optical element other than a lens, such as a stop or a low-pass filter, in order to execute image taking with a high image quality. Furthermore, the front side image taking element 1H1 and the back side image taking element 1H2 may execute a process such as a so-called “defective pixel correction” or a so-called “hand movement correction” in order to execute image taking with a high image quality.
The image processing unit 1H7 executes a process for producing an all celestial sphere image in
The image taking control unit 1H8 is a control device that controls each component of the image taking device 1.
The CPU 1H9 executes an operation or a control for each process that is executed by the image taking device 1. For example, the CPU 1H9 executes each kind of program. Here, the CPU 1H9 may be composed of a plurality of CPUs or devices or a plurality of cores in order to attain speeding-up due to parallel processing. Furthermore, a process of the CPU 1H9 may be such that another hardware resource is provided inside or outside the image taking device 1 and caused to execute a part or an entirety of a process for the image taking device 1.
The ROM 1H10, the SRAM 1H11, and the DRAM 1H12 are examples of a storage device. The ROM 1H10 stores, for example, a program that is executed by the CPU 1H9, data, or a parameter. The SRAM 1H11 and the DRAM 1H12 store, for example, a program, data to be used in a program, data to be produced by a program, a parameter, or the like, in a case where the CPU 1H9 executes a program. Here, the image taking device 1 may have an auxiliary storage device such as a hard disk.
The operation I/F 1H13 is an interface that executes a process for inputting an operation of a user to the image taking device 1, such as the switch 1H3. The operation I/F 1H13 is an operation device such as a switch, a connector or cable for connecting an operation device, a circuit for processing a signal input from an operation device, a driver, a control device, or the like. Here, the operation I/F 1H13 may have an output device such as a display. Furthermore, the operation I/F 1H13 may be a so-called “touch panel” wherein an input device and an output device are integrated, or the like. Moreover, the operation I/F 1H13 may have an interface such as a Universal Serial Bus (USB), connect a storage medium such as Flash Memory (“Flash Memory” is a registered trademark), and input data to and output data from the image taking device 1.
Here, the switch 1H3 may have an electric power source switch for executing an operation other than a shutter operation, a parameter input switch, or the like.
The network I/F 1H14, the wireless I/F 1H15, and the antenna 1H16 are devices for connecting the image taking device 1 with another computer through a wireless or wired network and a peripheral circuit or the like. For example, the image taking device 1 is connected to a network through the network I/F 1H14 and transmits data to the smartphone 2. Here, the network I/F 1H14, the wireless I/F 1H15, and the antenna 1H16 may be configured to be connected by using a connector such as a USB, a cable, or the like.
The bus 1H17 is used for an input or an output of data or the like between respective components of the image taking device 1. The bus 1H17 is a so-called “internal bus”. The bus 1H17 is, for example, a Peripheral Component Interconnect Express (PCI Express) bus.
Here, the image taking device 1 is not limited to a case of two image taking elements. For example, it may have three or more image taking elements. Moreover, the image taking device 1 may change an image taking angle of one image taking element to take a plurality of partial images. Furthermore, the image taking device 1 is not limited to an optical system that uses a fisheye lens. For example, a wide angle lens may be used.
Here, a process that is executed by the image taking device 1 is not limited to being executed by the image taking device 1. A part or an entirety of a process that is executed by the image taking device 1 may be executed by the smartphone 2 or by another computer connected through a network, in which case the image taking device 1 transmits data or a parameter thereto.
<A Hardware Configuration of an Information Processing Device>
An information processing device is a computer. An information processing device may be, for example, a notebook Personal Computer (PC), a Personal Digital Assistant (PDA), a tablet, a mobile phone, or the like, other than a smartphone.
The smartphone 2 that is one example of an information processing device has an auxiliary storage device 2H1, a main storage device 2H2, an input/output device 2H3, a state sensor 2H4, a CPU 2H5, and a network I/F 2H6. Each component of the smartphone 2 is connected to a bus 2H7 and executes an input or an output of data or a signal.
The auxiliary storage device 2H1 stores, under a control of the CPU 2H5, a control device, or the like, information such as each kind of data (including an intermediate result of a process executed by the CPU 2H5), a parameter, or a program. The auxiliary storage device 2H1 is, for example, a hard disk, a flash Solid State Drive (SSD), or the like. Here, a part or an entirety of information stored in the auxiliary storage device 2H1 may be stored in a file server connected to the network I/F 2H6 or the like, instead of the auxiliary storage device 2H1.
The main storage device 2H2 is a main storage device such as a storage area to be used by a program that is executed by the CPU 2H5, that is, a so-called “Memory”. The main storage device 2H2 stores information such as data, a program, or a parameter. The main storage device 2H2 is, for example, a Static Random Access Memory (SRAM), a DRAM, or the like. The main storage device 2H2 may have a control device for executing storage in or acquisition from a memory.
The input/output device 2H3 is a device that has functions of an output device for executing display and an input device for inputting an operation of a user.
The input/output device 2H3 is a so-called “touch panel”, a “peripheral circuit”, a “driver”, or the like.
The input/output device 2H3 executes a process for displaying, to a user, for example, a predetermined Graphical User Interface (GUI) or an image input to the smartphone 2.
The input/output device 2H3 executes a process for inputting an operation of a user, for example, in a case where a displayed GUI or an image is operated by such a user.
The state sensor 2H4 is a sensor for detecting a state of the smartphone 2. The state sensor 2H4 is a gyro sensor, an angle sensor, or the like. The state sensor 2H4 determines, for example, whether or not one side of the smartphone 2 is at a predetermined or greater angle with respect to the horizon. That is, the state sensor 2H4 detects whether the smartphone 2 is in a state of a longitudinally directional attitude or a state of a laterally directional attitude.
The CPU 2H5 executes a calculation in each process that is executed by the smartphone 2 and a control of a device that is provided in the smartphone 2. For example, the CPU 2H5 executes each kind of program. Here, the CPU 2H5 may be composed of a plurality of CPUs or devices, or a plurality of cores in order to execute a process in parallel, redundantly, or dispersedly. Furthermore, a process for the CPU 2H5 is such that another hardware resource may be provided inside or outside the smartphone 2 to execute a part or an entirety of a process for the smartphone 2. For example, the smartphone 2 may have a Graphics Processing Unit (GPU) for executing image processing, or the like.
The network I/F 2H6 is a device such as an antenna, a peripheral circuit, a driver, or the like, for inputting or outputting data, or the like, that is connected to another computer through a wireless or wired network. For example, the smartphone 2 executes a process for inputting image data from the image taking device 1 due to the CPU 2H5 and the network I/F 2H6. The smartphone 2 executes a process for outputting a predetermined parameter or the like to the image taking device 1 due to the CPU 2H5 and the network I/F 2H6.
<An Entire Process for an Image Taking System>
At step S0701, the image taking device 1 executes a process for producing an all celestial sphere image.
Similarly to
As illustrated in
Here, a process for producing an all celestial sphere image is not limited to a process in accordance with equidistant cylindrical projection. For example, a so-called “upside-down” case is provided in such a manner that, like
Furthermore, a process for producing an all celestial sphere image may execute a correction process for correcting distortion aberration that is provided in an image in a state of
Here, for example, in a case where an image taking range of a hemispherical image overlaps with an image taking range of another hemispherical image, a synthesis process may execute correction by utilizing an overlapping range to execute such a synthesis process at high precision.
Due to a process for producing an all celestial sphere image, the image taking device 1 produces an all celestial sphere image from a hemispherical image that is taken by the image taking device 1.
At step S0702, the smartphone 2 executes a process for acquiring an all celestial sphere image produced at step S0701. A case where the smartphone 2 acquires an all celestial sphere image in
At step S0703, the smartphone 2 produces an all celestial sphere panoramic image from an all celestial sphere image acquired at step S0702. An all celestial sphere panoramic image is an image provided in such a manner that an all celestial sphere image is applied onto a spherical shape.
At step S0703, the smartphone 2 executes a process for producing an all celestial sphere panoramic image in
A process for producing an all celestial sphere panoramic image is realized by, for example, an Application Programming Interface (API) such as OpenGL (“OpenGL” is a registered trademark) for Embedded Systems (OpenGL ES).
An all celestial sphere panoramic image is produced by dividing an image into triangles, joining vertices P of triangles (that will be referred to as “vertices P” below), and applying a polygon thereof.
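As a reference, the following is a minimal sketch, in Python, of how an image may be divided into triangles and applied onto a sphere CS as described above. It only produces vertex coordinates, texture coordinates, and triangle indices of a kind that could be handed to an API such as OpenGL ES; the tessellation counts and the sphere radius are illustrative assumptions, not values taken from this embodiment.

    import math

    def build_sphere_mesh(stacks=32, slices=64, radius=1.0):
        """Return (vertices, uvs, triangles) for applying a panorama onto a sphere.

        Each vertex P is (Px, Py, Pz); each UV pair samples the equidistant
        cylindrical (equirectangular) all celestial sphere image.
        """
        vertices, uvs = [], []
        for i in range(stacks + 1):
            theta = math.pi * i / stacks            # latitude, 0 .. pi
            for j in range(slices + 1):
                phi = 2.0 * math.pi * j / slices    # longitude, 0 .. 2*pi
                vertices.append((radius * math.sin(theta) * math.cos(phi),
                                 radius * math.cos(theta),
                                 radius * math.sin(theta) * math.sin(phi)))
                uvs.append((j / slices, i / stacks))
        triangles = []
        for i in range(stacks):
            for j in range(slices):
                a = i * (slices + 1) + j            # vertex in the current row
                b = a + slices + 1                  # same column, next row
                triangles += [(a, b, a + 1), (a + 1, b, b + 1)]
        return vertices, uvs, triangles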
At step S0704, the smartphone 2 executes a process for causing a user to input an operation for starting an output of an image. At step S0704, the smartphone 2, for example, reduces and outputs an all celestial sphere panoramic image produced at step S0703, that is, displays a so-called “thumbnail image”. In a case where a plurality of all celestial sphere panoramic images are stored in the smartphone 2, the smartphone 2 outputs a list of thumbnail images, for example, to cause a user to select an image to be output. At step S0704, the smartphone 2 executes, for example, a process for inputting an operation for causing a user to select one image from a list of thumbnail images.
At step S0705, the smartphone 2 executes a process for producing an initial image based on an all celestial sphere panoramic image selected at step S0704.
As illustrated in
A predetermined area T is an area where a view angle of the virtual camera 3 is projected onto a sphere CS. The smartphone 2 produces an image based on a predetermined area T.
A predetermined area T is determined by, for example, predetermined area information (x, y, α).
A view angle α is an angle that indicates an angle of the virtual camera 3 as illustrated in
Here, a distance from the virtual camera 3 to a center point CP is represented by Formula (1) described below:
f=tan(α/2) (Formula 1)
An initial image is an image provided by determining a predetermined area T based on a preliminarily set initial setting and being produced based on such a determined predetermined area T. An initial setting is, for example, (x, y, α)=(0, 0, 34) or the like.
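As a reference, the following is a minimal sketch of Formula (1): the distance f from the virtual camera 3 to the center point CP is derived from the view angle α contained in the predetermined area information (x, y, α). The initial setting (0, 0, 34) follows the example given above; treating α in degrees is an assumption of this sketch.

    import math

    def camera_distance(view_angle_deg):
        """Formula (1): f = tan(alpha / 2), with alpha given here in degrees."""
        return math.tan(math.radians(view_angle_deg) / 2.0)

    initial_area = {"x": 0.0, "y": 0.0, "alpha": 34.0}   # initial setting (0, 0, 34)
    f = camera_distance(initial_area["alpha"])           # roughly 0.306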
At step S0706, the smartphone 2 causes a user to execute an operation for switching to an image editing mode. Here, in a case where a user does not execute an operation for switching to an image editing mode, the smartphone 2 outputs, for example, an initial image.
At step S0707, the smartphone 2 executes a process for outputting an output image for editing an image.
An output image is, for example, an output image 21 at an initial state. An output image has an editing area 31 at an initial state and a changing area 41 at an initial state.
An output image displays a button for a Graphical User Interface (GUI) for accepting an operation of a user. A GUI is, for example, a blur editing button 51, a cancellation editing button 52, or the like. Here, an output image may have another GUI.
An editing area 31 at an initial state and a changing area 41 at an initial state are combined into an output identical to an initial image produced at step S0705. An output image has a separation line 211 for separating and outputting the editing area 31 and the changing area 41. Here, the separation line 211 is not limited to a solid line. The separation line 211 may be, for example, a broken line, a pattern, or the like. Furthermore, the smartphone 2 may change a pattern of the separation line 211 such as a color or a kind of line of the separation line 211, in such a manner that the separation line 211 is readily viewed by a user with respect to a peripheral pixel to be output. For example, in a case of an output image with many white objects such as a snow scene, the smartphone 2 outputs the separation line 211 as a solid line with a blue color. For example, in a case of an output image with many reflective objects such as a cluster of high-rise buildings, the smartphone 2 outputs the separation line 211 as a thick solid line with a red color. For example, in a case of an output image with many objects such as a graph or a design drawing, the smartphone 2 outputs the separation line 211 as a thick broken line with a green color.
A pattern of a separation line is changed depending on an object or a background, and thereby, it is possible for the smartphone 2 to cause a user to readily view a separation between areas.
Here, although a separation line is clearly indicated to cause a user to readily view a separation between areas, a separation line may not be displayed. For example, a separation line disturbs a user that has gotten familiar with an operation, and hence, the smartphone 2 may control display/non-display of such a separation line. Moreover, the smartphone 2 may change a contrast of an editing area or a contrast of a changing area instead of displaying a separation line, so that it is possible to identify a separation between areas. In this case, an embodiment is realized by a configuration provided in such a manner that the smartphone 2 controls a contrast for an area in an output image by using coordinates of an editing area and coordinates of a changing area.
A user edits an image in an image editing mode, and hence, applies an operation to an editing area or a changing area that is displayed in an output image.
At step S0708, the smartphone 2 executes a process for causing a user to input an operation for editing an image.
At step S0709, the smartphone 2 acquires coordinates where a user inputs an operation for the input/output device 2H3. At step S0709, the smartphone 2 executes a process for determining whether an operation is executed for an area in the editing area 31 at an initial state in
Image editing is editing that is executed based on an operation of a user. Editing of an area to be output is editing for changing an area to be output in an image based on a changing area or editing executed for a predetermined area based on an editing area.
Editing for changing an area to be output is executed in a case where an operation is applied to an area of a changing area at step S0709.
Editing to be executed for a predetermined area based on an editing area is executed in a case where an operation is applied to an area in an editing area at step S0709.
In a case where a user operates a changing area (an area in a changing area is determined at step S0709), the smartphone 2 goes to step S0710. In a case where a user operates an editing area (an area in an editing area is determined at step S0709), the smartphone 2 goes to step S0712.
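As a reference, the following is a minimal sketch of the determination at step S0709: a touch is classified as an operation on the editing area or on the changing area by comparing its screen coordinate with the position of the separation line 211. The assumption that the two areas are stacked vertically and split at a single y coordinate is illustrative; the concrete layout is given by the output image.

    def classify_touch(touch_y, separation_line_y, editing_area_on_top=True):
        """Return "editing" or "changing" for a touch at screen row touch_y."""
        in_upper_part = touch_y < separation_line_y
        if editing_area_on_top:
            return "editing" if in_upper_part else "changing"
        return "changing" if in_upper_part else "editing"

    # With the separation line at y = 800 (assumed), a touch at y = 300 is an
    # editing-area operation (go to step S0712) and the editing area is not
    # moved; a touch at y = 1000 is a changing-area operation (go to step S0710).
    assert classify_touch(300, 800) == "editing"
    assert classify_touch(1000, 800) == "changing"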
<Editing for Changing an Area to be Output>
An output image is, for example, an output image 22 after editing an area to be output. The output image 22 after editing an area to be output has an editing area 32 after editing an area to be output and a changing area 42 after editing an area to be output. The output image 22 after editing an area to be output has a separation line 211 for separating and outputting the editing area 32 after editing an area to be output and the changing area 42 after editing an area to be output, similarly to the output image 21 at an initial state.
The output image 22 after editing an area to be output is an image produced by changing a predetermined area T as illustrated in
The output image 22 after editing an area to be output is provided at, for example, a viewpoint of a case where the virtual camera 3 at a state of
Editing of an area to be output is executed in such a manner that a user operates a screen area where a changing image is output.
An operation to be input at step S0708 is, for example, an operation for changing an area to be output with respect to left and right directions of an image or the like.
In a case of
Herein, an input amount on a swipe operation is provided as (dx, dy).
A relation between a polar coordinate system (φ, θ) of an all celestial sphere in
φ=k×dx
θ=k×dy (Formula 2)
In Formula (2) described above, k is a predetermined constant for executing adjustment.
An output image is changed based on an input amount input for a swipe operation, and hence, it is possible for a user to operate an image with a feeling that a sphere such as a terrestrial globe is rotated.
Here, for simplifying a process, what position of a screen a swipe operation is input at may not be taken into consideration. That is, similar values may be input for an input amount (dx, dy) in Formula (2) even though a swipe operation is executed at any position of a screen where the changing area 41 at an initial state is output.
The changing area 42 after editing an area to be output is produced by executing perspective projection transformation of coordinates (Px, Py, Pz) of a vertex P in a three-dimensional space, based on (φ, θ) calculated in accordance with Formula (2).
In a case where a user executes a swipe operation with an input amount (dx2, dy2) in a case of
φ=k×(dx+dx2)
θ=k×(dy+dy2) (Formula 3)
As illustrated in Formula (3) described above, a polar coordinate system (φ, θ) of an all celestial sphere is calculated based on a total value of input amounts for respective swipe operations. Even in a case where a plurality of swipe operations are executed or the like, calculation of a polar coordinate system (φ, θ) of an all celestial sphere is executed based on such a total value, and thereby, it is possible to keep constant operability.
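As a reference, the following is a minimal sketch of Formulas (2) and (3): the total of the input amounts (dx, dy) of all swipe operations is mapped to the polar coordinate system (φ, θ) of the all celestial sphere. The concrete magnitude of the predetermined constant k is an illustrative assumption.

    K = 0.1  # predetermined constant k for adjustment (assumed value)

    class OutputAreaRotation:
        def __init__(self):
            self.total_dx = 0.0
            self.total_dy = 0.0

        def on_swipe(self, dx, dy):
            # Formula (3): accumulate input amounts over all swipe operations,
            # which keeps operability constant across repeated swipes
            self.total_dx += dx
            self.total_dy += dy

        def polar(self):
            # Formula (2): phi = k * dx, theta = k * dy (applied to the totals)
            return K * self.total_dx, K * self.total_dy

    rotation = OutputAreaRotation()
    rotation.on_swipe(120, -40)     # first swipe operation, input amount (dx, dy)
    rotation.on_swipe(30, 10)       # second swipe operation, input amount (dx2, dy2)
    phi, theta = rotation.polar()   # used for perspective projection of vertices P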
Here, editing of an area to be output is not limited to pan-rotation. For example, tilt-rotation of the virtual camera 3 in upper and lower directions of an image may be realized.
An operation that is input at step S0708 is, for example, an operation for enlarging or reducing an area to be output or the like.
In a case where enlargement of an area to be output is executed, an operation that is input by a user is such that two fingers are spread on a screen where the changing area 41 at an initial state in
In a case where reduction of an area to be output is executed, an operation that is input by a user is such that two fingers are moved closer to each other on a screen where the changing area 41 at an initial state in
Here, a pinch-out or pinch-in operation is sufficient as long as a position where a finger of a user first contacts is provided in an area with a changing image displayed thereon, and may be an operation that subsequently uses an area with an editing area displayed thereon. Furthermore, an operation may be executed by a so-called “stylus pen” that is a tool for operating a touch panel or the like.
In a case where an operation illustrated in
A zoom process is a process for producing an image with a predetermined area enlarged or reduced based on an operation that is input by a user.
In a case where an operation illustrated in
A zoom process is a process for executing calculation, based on an amount of change dz, in accordance with Formula (4) described below:
α=α0+m×dz (Formula 4)
α indicated in Formula (4) described above is a view angle α of the virtual camera 3 as illustrated in
In a case where an operation illustrated in
In a case where calculation is executed in accordance with Formula (4) and a user executes an operation for providing an amount of change dz2, the smartphone 2 executes calculation in accordance with Formula (5) described below:
α=α0+m×(dz+dz2) (Formula 5)
As indicated in Formula (5) described above, a view angle α is calculated based on a total value of amounts of change due to operations as illustrated in
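As a reference, the following is a minimal sketch of Formulas (4) and (5): the view angle α is obtained from the initial view angle α0 and the total of the amounts of change dz, dz2, and so on. The coefficient m, the initial value, and the clamping range are illustrative assumptions.

    ALPHA0 = 34.0   # assumed initial view angle alpha0 (degrees)
    M = 0.05        # assumed coefficient m for adjustment

    class ZoomState:
        def __init__(self):
            self.total_dz = 0.0

        def on_pinch(self, dz):
            # the sign of dz distinguishes pinch-out (enlarge) from pinch-in (reduce)
            self.total_dz += dz

        def view_angle(self):
            # Formula (5): alpha = alpha0 + m * (dz + dz2 + ...)
            alpha = ALPHA0 + M * self.total_dz
            return max(5.0, min(120.0, alpha))   # keep alpha in an assumed valid range

    zoom = ZoomState()
    zoom.on_pinch(200.0)    # first operation, amount of change dz
    zoom.on_pinch(-50.0)    # second operation, amount of change dz2
    alpha = zoom.view_angle()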
Here, a zoom process is not limited to a process in accordance with Formula (4) or Formula (5).
A zoom process may be realized by combining a view angle α of the virtual camera 3 and a change in a position of a viewpoint.
An origin in
A range of a predetermined area T in
In a case where the virtual camera 3 is positioned at an origin, namely, a case of d=0, an angle of view ω is identical to a view angle α. In a case where the virtual camera 3 is displaced from an origin, that is, a case where a value of d is increased, an angle of view ω and a view angle α exhibit different ranges.
Another zoom process is a process for changing an angle of view ω.
Illustrative table 4 illustrates an example of a case where an angle of view ω is in a range of 60° to 300°.
As illustrated in illustrative table 4, the smartphone 2 determines which of a view angle α and an amount of movement d of the virtual camera 3 is preferentially changed based on a zoom specification value ZP.
“RANGE” is a range that is determined based on a zoom specification value ZP.
“OUTPUT MAGNIFICATION” is an output magnification of an image calculated based on an image parameter determined by another zoom process.
“ZOOM SPECIFICATION VALUE ZP” is a value that corresponds to an angle of view to be output. Another zoom process changes a process for determining an amount of movement d and a view angle α based on a zoom specification value ZP. For a process to be executed in another zoom process, one of four methods is determined based on a zoom specification value ZP as illustrated in illustrative table 4. A range of a zoom specification value ZP is divided into four ranges that are a range of A-B, a range of B-C, a range of C-D, and a range of D-E.
“ANGLE OF VIEW ω” is an angle of view ω that corresponds to an image parameter determined by another zoom process.
“CHANGING PARAMETER” is a description that illustrates a parameter that is changed by each of four methods based on a zoom specification value ZP. “REMARKS” are remarks for “CHANGING PARAMETER”.
“viewWH” in illustrative table 4 is a value that represents a width or a height of an output area. In a case where an output area is laterally long, “viewWH” is a value of a width. In a case where an output area is longitudinally long, “viewWH” is a value of a height. That is, “viewWH” is a value that represents a size of an output area in its longer direction.
“imgWH” in illustrative table 4 is a value that represents a width or a height of an output image. In a case where an output area is laterally long, “imgWH” is a value of a width of an output image. In a case where an output area is longitudinally long, “imgWH” is a value of a height of an output image. That is, “imgWH” is a value that represents a size of an output image in its longer direction.
“imageDeg” in illustrative table 4 is a value that represents an angle of a display range of an output image. In a case where a width of an output image is represented, “imageDeg” is 360°. In a case where a height of an output image is represented, “imageDeg” is 180°.
A case of a so-called “zoom-out” in
In a case of “A-B” or “B-C”, an angle of view ω is identical to a zoom specification value ZP. In a case of “A-B” or “B-C”, a value of an angle of view ω is increased.
A maximum display distance dmax1 is a distance where a sphere CS is displayed so as to be maximum in an output area of the smartphone 2. An output area is, for example, a size of a screen where the smartphone 2 outputs an image or the like, or the like. A maximum display distance dmax1 is, for example, a case of
“viewW” in Formula (6) described above is a value that represents a width of an output area of the smartphone 2. “viewH” in Formula (6) described above is a value that represents a height of an output area of the smartphone 2. A similar matter will be described below.
A maximum display distance dmax1 is calculated based on values of “viewW” and “viewH” that are output areas of the smartphone 2.
A limit display distance dmax2 is, for example, a case of
A limit display distance dmax2 is calculated based on values of “viewW” and “viewH” that are output areas of the smartphone 2. A limit display distance dmax2 represents a maximum range that is able to be output by the smartphone 2, that is, a limit value of an amount of movement d of the virtual camera 3. An embodiment may be limited in such a manner that a zoom specification value ZP is included in a range illustrated in illustrative table 4 in
Due to a process for “D-E”, it is possible for the smartphone 2 to cause a user to recognize that an output image is an all celestial sphere panorama.
Here, in a case of “C-D” or “D-E”, an angle of view ω is not identical to a zoom specification value ZP. Furthermore, as illustrated in illustrative table 4 in
In a case where a zoom specification value ZP is changed toward a wide-angle direction, an angle of view ω is frequently increased. In a case where an angle of view ω is increased, the smartphone 2 fixes a view angle α of the virtual camera 3 and increases an amount of movement d of the virtual camera 3. The smartphone 2 fixes a view angle α of the virtual camera 3, and thereby, it is possible to reduce an increase in such a view angle α of the virtual camera 3. The smartphone 2 reduces an increase in a view angle α of the virtual camera 3, and thereby, it is possible to output an image with less distortion to a user. In a case where a view angle α of the virtual camera 3 is fixed, the smartphone 2 increases an amount of movement d of the virtual camera 3, that is, moves the virtual camera 3 to be distant, and thereby, it is possible to provide a user with an open feeling of a wide-angle display. Furthermore, movement for moving the virtual camera 3 to be distant is similar to movement at a time when a human being confirms a wide range, and hence, it is possible for the smartphone 2 to realize zoom-out with a less feeling of strangeness due to movement for moving the virtual camera 3 to be distant.
In a case of “D-E”, an angle of view ω is decreased as a zoom specification value ZP is changed toward a wide-angle direction. In a case of “D-E”, the smartphone 2 decreases an angle of view ω, and thereby, it is possible to provide a user with a feeling of being distant from a sphere CS. The smartphone 2 provides a user with a feeling of being distant from a sphere CS, and thereby, it is possible to output an image with a less feeling of strangeness to a user.
Hence, it is possible for the smartphone 2 to output an image with a less feeling of strangeness to a user, due to another zoom process illustrated in illustrative table 4 in
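As a reference, the following is a minimal sketch of the range-based decision of the other zoom process: depending on which range of illustrative table 4 a zoom specification value ZP falls into, either the view angle α or the amount of movement d of the virtual camera 3 is preferentially changed. The boundary values A through E, the distance limits, and the per-range formulas below are illustrative assumptions only; the actual values are given by illustrative table 4 and by the maximum display distance dmax1 and the limit display distance dmax2.

    A, B, C, D, E = 60.0, 90.0, 180.0, 240.0, 300.0   # assumed ZP range boundaries (degrees)
    DMAX1, DMAX2 = 1.0, 2.0                           # assumed display-distance limits

    def other_zoom(zp):
        """Return an assumed (view angle alpha, amount of movement d) for a ZP."""
        if zp < C:            # ranges A-B and B-C: the angle of view w equals ZP,
            return zp, 0.0    # realized here by changing alpha while d stays 0
        if zp < D:            # range C-D: fix alpha and move the virtual camera 3 away
            return C, DMAX1 * (zp - C) / (D - C)
        # range D-E: alpha is reduced while d approaches the limit display distance
        return C - (zp - D) * 0.2, DMAX1 + (DMAX2 - DMAX1) * (zp - D) / (E - D)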
Here, an embodiment is not limited to a case where only an amount of movement d or a view angle α of the virtual camera 3 illustrated in illustrative table 4 in
Furthermore, an embodiment is not limited to zoom-out. An embodiment may realize, for example, zoom-in.
Here, a case where an area to be output is edited is not limited to a case where an operation is executed for a changing area. The smartphone 2 may edit an area to be output, for example, in a case where an operation is executed for an editing area.
<Editing to be Executed for a Predetermined Area Based on an Editing Area>
Editing to be executed for a predetermined area based on an editing area is blur editing that blurs a predetermined pixel. Here, other examples of editing include erasing of a specified range of an image, changing of a color tone or a color depth of an image, a color change of a specified range of an image, or the like.
A case where a user executes blur editing for the output image 22 after editing of an area to be output in
In a case where a user executes an operation that pushes a blur editing button 51, the smartphone 2 causes a user to input a so-called “tap operation” for an area where an editing area 32 for the output image 22 after editing of an area to be output in
The smartphone 2 executes a process for blurring a predetermined range centered at a point tapped by a user.
The output image 23 after blur editing is produced by applying blur editing to an output image after editing of an area to be output in
Editing that is applied to a predetermined area based on an editing area is editing that cancels blur editing for a blur editing area 5 blurred by such blur editing.
In a case where a user executes an operation that pushes the cancellation editing button 52, the smartphone 2 outputs an output image 24 for cancellation editing that displays a filling area 6 on the blur editing area 5 with applied blur editing. As illustrated in
Once a taken image of a face of a person or a photography-prohibited building is released or shared on the internet, trouble may be caused. In particular, in a case where a panoramic image with a broad range is taken, an image of many objects in a broad range may frequently be taken. Therefore, it is possible for a user to reduce trouble due to a process for blurring an object that is possibly problematic at a time of release or sharing. It is possible for the smartphone 2 to facilitate an operation for blurring a face of a person taken in an image due to editing to be applied to a predetermined area based on an editing area. Hence, it is possible for the smartphone 2 to cause a user to readily execute an image operation due to editing to be applied to a predetermined area based on an editing area.
Here, in a case where editing of an area to be output is executed, the smartphone 2 may change a range of editing applied to a predetermined area based on an editing area or the like in accordance with a magnification.
At step S0710, the smartphone 2 calculates amounts of movement of coordinates to be output. That is, at step S0710, the smartphone 2 calculates a position of a predetermined area T in
At step S0711, the smartphone 2 updates a position of a predetermined area T in
At step S0712, the smartphone 2 calculates coordinates of a point that is an editing object. That is, at step S0712, the smartphone 2 calculates coordinates that correspond to a tap operation of a user and executes calculation for projection onto three-dimensional coordinates.
At step S0713, the smartphone 2 calculates a predetermined area that is edited centered at coordinates calculated at step S0712 and based on an editing area. That is, at step S0713, the smartphone 2 calculates a pixel that is a point specified by a tap operation of a user or a periphery of such a point and is an object for blur editing or the like.
At step S0714, the smartphone 2 produces an editing area. In a case where a user executes an operation for a changing area at step S0714, the smartphone 2 produces a changing area based on a predetermined area T updated at step S0711. In a case where a user executes an operation for an editing area at step S0714, the smartphone 2 produces an editing area wherein a blurring process is reflected on a pixel calculated at step S0713.
At step S0715, the smartphone 2 produces a changing area. In a case where a user executes an operation for a changing area at step S0715, the smartphone 2 produces a changing area based on a predetermined area T updated at step S0711. In a case where a user executes an operation for an editing area at step S0715, the smartphone 2 produces a changing area that indicates a location that is a blurring object at step S0713.
The smartphone 2 repeats processes of step S0708 through step S0715.
<A Process on a Smartphone>
At step S1801, the smartphone 2 executes a process for acquiring an image from the image taking device 1 in
At step S1802, the smartphone 2 executes a process for producing a panoramic image. A process at step S1802 is executed based on an image acquired at step S1801. A process at step S1802 corresponds to a process at step S0703 in
At step S1803, the smartphone 2 executes a process for causing a user to select an image to be output. A process at step S1803 corresponds to a process at step S0704 in
At step S1804, the smartphone 2 executes a process for producing an initial image. A process at step S1804 corresponds to a process at step S0705 in
At step S1805, the smartphone 2 executes determination as to whether or not switching to a mode for editing an image is executed. A process at step S1805 executes determination based on whether or not an operation of a user at step S0706 in
A case where determination is provided at step S1805 in such a manner that switching to a mode for editing an image is provided is a case where an input to start editing of an image is provided by a user. A case where determination is provided at step S1805 in such a manner that switching to a mode for editing an image is not provided is a case where a user does not execute an operation. Therefore, in a case where a user does not execute an operation, the smartphone 2 continues to output an initial image and waits for an input of a user to start editing of an image.
At step S1806, the smartphone 2 executes a process for outputting an output image for editing an image. A process at step S1806 corresponds to a process at step S0707 in
At step S1807, the smartphone 2 executes determination as to whether an operation of a user is executed for an editing area or a changing area. A process at step S1807 corresponds to a process at step S0709 in
In a case where determination is provided in such a manner that an operation of a user is executed for a changing area (a changing area at step S1807), the smartphone 2 goes to step S1808. In a case where determination is provided in such a manner that an operation of a user is executed for an editing area (an editing area at step S1807), the smartphone 2 goes to step S1810.
At step S1808, the smartphone 2 executes a process for calculating an amount of movement of a predetermined area due to an operation. A process at step S1808 corresponds to a process at step S0710 in
At step S1809, the smartphone 2 executes a process for updating a predetermined area. A process at step S1809 corresponds to a process at step S0711 in
At step S1810, the smartphone 2 executes a process for calculating, and three-dimensionally projecting, coordinates that are objects for an operation. A process at step S1810 corresponds to a process at step S0712 in
At step S1811, the smartphone 2 executes a process for calculating a pixel that is an object for blurring. For example, the smartphone 2 has an editing state table that causes flag data as to whether or not an object for blurring is provided, to correspond to each pixel. An editing state table represents whether or not each pixel is output in a blur state. The smartphone 2 refers to an editing state table, determines whether or not each pixel in an output image is output in a blur state, and outputs an image. That is, a process at step S1811 is a process for updating an editing state table. In a case where an operation for either blurring as illustrated in
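As a reference, the following is a minimal sketch of an editing state table: one flag per pixel records whether that pixel is output in a blur state, and a tap sets or clears the flags in a predetermined range centered at the tapped point. The image size and blur radius are illustrative assumptions.

    WIDTH, HEIGHT = 1920, 960   # assumed size of the image to be output
    BLUR_RADIUS = 40            # assumed radius of the predetermined blurred range

    # editing_state[y][x] is True while the pixel is an object for blurring
    editing_state = [[False] * WIDTH for _ in range(HEIGHT)]

    def update_editing_state(tap_x, tap_y, blur):
        """Set (blur editing) or clear (cancellation editing) flags around a tap."""
        for y in range(max(0, tap_y - BLUR_RADIUS), min(HEIGHT, tap_y + BLUR_RADIUS + 1)):
            for x in range(max(0, tap_x - BLUR_RADIUS), min(WIDTH, tap_x + BLUR_RADIUS + 1)):
                if (x - tap_x) ** 2 + (y - tap_y) ** 2 <= BLUR_RADIUS ** 2:
                    editing_state[y][x] = blur

    update_editing_state(400, 300, blur=True)    # blur editing button 51, then a tap
    update_editing_state(400, 300, blur=False)   # cancellation editing button 52, then a tap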
At step S1812, the smartphone 2 executes a process for producing an editing image. A process at step S1812 corresponds to a process at step S0714 in
At step S1813, the smartphone 2 executes a process for producing a changing area. A process at step S1813 corresponds to a process at step S0715 in
The smartphone 2 returns to step S1807 and repeats previously illustrated processes.
Due to processes at step S1812 and step S1813, the smartphone 2 produces an output image and executes an output to a user.
In a case where an object for blurring is provided based on an editing state table at processes at step S1812 and step S1813, the smartphone 2 executes, for example, a blurring process as illustrated in
An image that is output to a user by the smartphone 2 is output at 30 or more frames per second in such a manner that such a user feels smooth reproduction of an animation. It is desirable for the smartphone 2 to execute an output at 60 or more frames per second in such a manner that a user feels particularly smooth reproduction. Here, a frame rate of an output may be such that 60 frames per second is changed to, for example, 59.94 frames per second.
Here, processes at step S1812 and step S1813 are not limited to processes for causing the smartphone 2 to execute a blurring process and an output.
For example, the smartphone 2 has an image provided by preliminarily applying a blurring process to all pixels of an image to be output and an image provided by applying no blurring process. The smartphone 2 outputs an image by selecting, for each pixel based on an editing state table, either the image provided by executing a blurring process or the image provided by executing no blurring process. It is possible for the smartphone 2 to reduce an amount of calculation for outputting an image by preliminarily executing a blurring process. That is, it is possible for the smartphone 2 to realize a high-speed image output such as 60 frames per second by executing selection and a simultaneous output of each pixel.
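As a reference, the following is a minimal sketch of the per-pixel selection described above: a blurred copy of the whole image is prepared in advance, and the output image is assembled by choosing, for each pixel, either the blurred copy or the unmodified acquisition image according to the editing state table. Plain nested lists stand in for image buffers; an actual implementation would use a GPU or an image library.

    def compose_output(original, blurred, editing_state):
        """Return an output image that reflects the editing state table."""
        return [
            [blurred[y][x] if editing_state[y][x] else original[y][x]
             for x in range(len(original[0]))]
            for y in range(len(original))
        ]

    # Tiny 2x2 example (numbers stand in for pixel values):
    original = [[10, 20], [30, 40]]
    blurred = [[11, 21], [31, 41]]
    state = [[False, True], [False, False]]
    assert compose_output(original, blurred, state) == [[10, 21], [30, 40]]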
Furthermore, for example, in a case where each pixel is selected and simultaneously output, the smartphone 2 may store an output image. In a case where a user does not execute an editing operation, the smartphone 2 outputs a stored image. Due to storage, a process for selecting and producing each pixel of an image to be output is not required, and hence, it is possible for the smartphone 2 to reduce an amount of calculation. Therefore, the smartphone 2 stores an output image, and thereby, it is possible to realize a high-speed image output such as 60 frames per 1 second.
Here, an output image is not limited to an image illustrated in
For example, the smartphone 2 causes a user to push the separation line 211 for a longer period of time or input a so-called “long-tap operation”. In a case of
In a case where a long-tap operation is input, the smartphone 2 executes a process for changing a position, a size, or a range of the separation line 211.
A pre-changing separation line 211A illustrates a position of a separation line before changing. That is, the pre-changing separation line 211A is provided in a state of
A post-changing separation line 211B illustrates a position of a separation line after changing. As illustrated in
Changing in a separation between areas is executed, and thereby, it is possible for a user to provide a position, a size, or a range of an editing area or a changing area so as to facilitate an operation. Changing in a separation between areas is executed, and thereby, it is possible for the smartphone 2 to output an output image in such a manner that an operation of a screen is facilitated for a user.
<A Functional Configuration>
The image taking system 10 has the image taking device 1 and the smartphone 2. The image taking device 1 has a first image taking part 1F1, a second image taking part 1F2, and an all celestial sphere image production part 1F3. The smartphone 2 has an image acquisition part 2F1, a production part 2F2, an input/output part 2F3, a storage part 2F4, and a control part 2F5.
The first image taking part 1F1 and the second image taking part 1F2 take and produce images that are materials of an all celestial sphere image. The first image taking part 1F1 is realized by, for example, the front side image taking element 1H1 in
The all celestial sphere image production part 1F3 produces an image that is output to the smartphone 2, such as an all celestial sphere image. The all celestial sphere image production part 1F3 is realized by, for example, the image processing unit 1H7 in
The image acquisition part 2F1 acquires image data such as an all celestial sphere image from the image taking device 1. The image acquisition part 2F1 is realized by, for example, the network I/F 2H6 in
The production part 2F2 executes a process for producing each kind of image and a line, and each kind of calculation necessary for production of an image. The production part 2F2 has a changing area production part 2F21, an editing area production part 2F22, and a separation line production part 2F23. The production part 2F2 is realized by the CPU 2H5 in
The changing area production part 2F21 executes a process for executing production of a changing area. The changing area production part 2F21 acquires, for example, image data and an editing state table from the storage part 2F4. The changing area production part 2F21 produces a changing area based on an acquired editing state table and image data.
The editing area production part 2F22 executes a process for executing production of an editing area. The editing area production part 2F22 acquires, for example, image data and an editing state table from the storage part 2F4. The editing area production part 2F22 produces an editing area based on an acquired editing state table and image data.
Furthermore, in a case where an operation as illustrated in
The separation line production part 2F23 produces a separation line such as the separation line 211 in
The production part 2F2 calculates, and stores as an editing state table, coordinates associated with an operation in a case where a user executes a tap or swipe operation. Furthermore, an image produced by the production part 2F2 may be stored in the storage part 2F4 and taken according to a process.
The input/output part 2F3 executes a process for inputting an operation of a user. The input/output part 2F3 executes a process for outputting, to a user, an image produced by the production part 2F2. The input/output part 2F3 is realized by, for example, the input/output device 2H3 in
The storage part 2F4 stores each kind of information acquired or produced by the smartphone 2. The storage part 2F4 has, for example, an editing state table storage part 2F41 and an image storage part 2F42. The storage part 2F4 is realized by, for example, the auxiliary storage device 2H1 or the main storage device 2H2 in
The editing state table storage part 2F41 stores data of a table that represents a pixel where a blurring process is executed.
The image storage part 2F42 stores an all celestial sphere image acquired by the image acquisition part 2F1, an output image produced by the production part 2F2, and the like.
The control part 2F5 controls each kind of component that is provided in the smartphone 2. The control part 2F5 controls each kind of component, and thereby, realizes each kind of process, a process for assisting each kind of process, and the like. The control part 2F5 is realized by, for example, the CPU 2H5 in
Here, an entire process is not limited to a case as illustrated in
Here, an embodiment is not limited to an output with a separation between an editing area and a changing area by a separation line. For example, an embodiment may be provided in such a manner that the smartphone 2 blurs or decorates a changing area, for example, colors it with a predetermined color, and outputs it, thereby causing a user to perceive different areas, and thus separates between, and outputs, an editing area and such a changing area.
The smartphone 2 produces an editing area and a changing area based on an all celestial sphere image acquired from the image taking device 1 or the like. An editing area is an area for outputting a predetermined area that is determined by a predetermined area T and causes a user to execute an editing operation such as blurring or cancellation of blurring. A changing area is an area for outputting an image for causing a user to execute an operation for changing a position, a size, or a range of a predetermined area T, or the like. The smartphone 2 outputs an output image that has at least an editing area and a changing area. An output image has an editing area and a changing area, and thereby, it is possible for the smartphone 2 to cause a user to execute editing such as blurring and simultaneously change an area output in such an editing area by such a changing area. Therefore, in a case where a user executes a blurring operation for an all celestial sphere image or the like, it is possible for the smartphone 2 to output an image for facilitating an operation. Hence, the smartphone 2 outputs an output image that has an editing area and a changing area, and thereby, it is possible for a user to readily execute an operation of an image.
Here, the smartphone 2 may be realized by a computer-executable program described in a legacy programming language such as Assembler, C, C++, C#, or Java (“Java” is a registered trademark), an object-oriented programming language, or the like. It is possible for a program to be stored in and distributed by a recording medium such as a ROM or an Electrically Erasable Programmable ROM (EEPROM). It is possible for a program to be stored in and distributed by a recording medium such as an Erasable Programmable ROM (EPROM). It is possible for a program to be stored in and distributed by a recording medium such as a flash memory, a flexible disk, a CD-ROM, a CD-RW, a DVD-ROM, a DVD-RAM, a DVD-RW, or the like. It is possible for a program to be stored in a device-readable recording medium such as a Blu-Ray disk (“Blu-Ray disk” is a registered trademark), SD (“SD” is a registered trademark) card, or an MO or distributed through a telecommunication line.
Here, an image in an embodiment is not limited to a still image. For example, an image may be an animation.
Furthermore, a part or an entirety of each process in an embodiment may be realized by, for example, a programmable device (PD) such as a field programmable gate array (FPGA). Moreover, a part or an entirety of each process in an embodiment may be realized by an Application Specific Integrated Circuit (ASIC).
Although preferable practical examples of the present invention have been described in detail above, the present invention is not limited to such particular embodiments and a variety of alterations and modifications are possible within a scope of an essence of the present invention as recited in what is claimed.
At least one illustrative embodiment of the present invention may relate to an information processing method, an information processing device, and a program.
At least one illustrative embodiment of the present invention may aim at facilitating execution of an image operation for a user.
According to at least one illustrative embodiment of the present invention, there may be provided an information processing method that causes a computer to process an image, characterized by causing the computer to execute an acquisition step for acquiring the image and an output step for outputting an output image by separating between an editing area for editing a predetermined area of the image and a changing area for changing the predetermined area to be output.
Illustrative embodiment (1) is an information processing method for causing a computer to process an image, wherein the information processing method causes the computer to execute an acquisition step for acquiring the image and an output step for outputting an output image by separating between an editing area for editing a predetermined area of the image and a changing area for changing the predetermined area to be output.
Illustrative embodiment (2) is the information processing method as described in illustrative embodiment (1), wherein an editing position input step for acquiring an editing position that is a target area of the editing by using the editing area and an editing step for editing the editing position are executed.
Illustrative embodiment (3) is the information processing method as described in illustrative embodiment (2), wherein the editing step is a step for blurring the editing position.
Illustrative embodiment (4) is the information processing method as described in illustrative embodiment (3), wherein an acquisition image that has just been acquired in the acquisition step and a blurred image produced by a blurring process are produced and the output image is output by selecting a pixel of the blurred image for the editing area and a pixel of the acquisition image for that other than the editing area.
Illustrative embodiment (5) is the information processing method as described in illustrative embodiment (3) or (4), wherein a specified position input step for acquiring a specifying position for specifying an area of a part or an entirety of an image output with the editing area and a cancellation step for canceling the blurring process executed for the specifying position are executed.
Illustrative embodiment (6) is the information processing method as described in any one of illustrative embodiments (1) to (5), wherein an operation input step for acquiring an operation for changing, enlarging, or reducing the predetermined area that is output with the editing area by using the changing area is executed.
Illustrative embodiment (7) is the information processing method as described in illustrative embodiment (6), wherein a determination step for determining a view point position and a view angle is executed based on the operation and the determination changes one of the view point position and the view angle based on an area indicated by the operation.
Illustrative embodiment (8) is the information processing method as described in any one of illustrative embodiments (1) to (7), wherein a changing step for changing a position, a size, or a range of the editing area or the changing area to be output, based on an operation for changing a separation between the editing area and the changing area, is executed.
Illustrative embodiment (9) is an information processing device that processes an image, wherein the information processing device has an acquisition means for acquiring the image and an output means for outputting an output image by separating between an editing area for editing a predetermined area of the image and a changing area for changing the predetermined area to be output.
Illustrative embodiment (10) is a program for causing a computer to process an image, wherein the program causes the computer to execute an acquisition step for acquiring the image and an output step for outputting an output image by separating between an editing area for editing a predetermined area of the image and a changing area for changing the predetermined area to be output.
According to at least an illustrative embodiment of the present invention, it may be possible to facilitate execution of an image operation for a user.
Although the illustrative embodiment(s) and specific example(s) of the present invention have been described with reference to the accompanying drawings, the present invention is not limited to any of the illustrative embodiment(s) and specific example(s) and the illustrative embodiment(s) and specific example(s) may be altered, modified, or combined without departing from the scope of the present invention.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority or inferiority of the invention. Although an information processing method has been described in detail, it should be understood that various changes, substitutions, and alterations could be made thereto without departing from the spirit and scope of the invention.
This application is a continuation application filed under 35 U.S.C. 111(a) claiming the benefit under 35 U.S.C. 120 and 365(c) of a PCT International Application No. PCT/JP2015/057610 filed on Mar. 10, 2015, which is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2014-054783 filed on Mar. 18, 2014, the entire contents of which are incorporated herein by reference.
Foreign Patent Documents:
JP H06-325144, Nov. 1994
JP H10-340075, Dec. 1998
JP 2003-018561, Jan. 2003
JP 2003-132348, May 2003
JP 2003-132362, May 2003
JP 2005-195867, Jul. 2005
JP 2011-076249, Apr. 2011
JP 2012-029179, Feb. 2012
JP 2014-006880, Jan. 2014
JP 2014-010611, Jan. 2014
JP 2014-131215, Jul. 2014
JP 2014-165764, Sep. 2014
WO 2011/055451, May 2011