The present technology relates to a transmission device, a transmission method, a reception device, a reception method, and a transmission/reception system. More specifically, the present technology relates to, for example, a transmission device that processes data items of images captured by imaging with a plurality of cameras.
Hitherto, there has been a technology as disclosed in Patent Literature 1, which includes transmitting data items of images captured by a plurality of cameras to a reception side via a network, cutting out, on the reception side, data items of images corresponding to a display region from the data items of the plurality of captured images, executing a stitching process thereon so as to generate a composite image, and displaying the image.
Patent Literature 1: Japanese Patent Application Laid-open No. 2008-225600
In the technology disclosed in Patent Literature 1, all the data items of the images captured by the plurality of cameras are transmitted to the reception side. Thus, a usage amount of a network bandwidth increases in proportion to the number of cameras.
It is an object of the present technology to keep a usage amount of a network bandwidth small such that the network bandwidth is effectively utilized.
A concept of the present technology lies in a transmission device including:
a storage unit that stores data items of images captured by imaging with a plurality of cameras in a manner that adjacent ones of the captured images overlap with each other;
an information reception unit that receives, from an external device via a network, cutting-out-target-region information items for a predetermined number of cameras selected from the plurality of cameras; and
an image-data transmission unit that cuts out, on the basis of the cutting-out-target-region information items for the predetermined number of cameras, data items of images of cutting-out-target regions from corresponding ones of the data items of the images captured by the plurality of cameras, the data items of the captured images being stored in the storage unit, and transmits the data items of the images of the cutting-out-target regions to the external device via the network.
According to the present technology, the storage unit stores the data items of the images captured by the imaging with the plurality of cameras in the manner that the adjacent ones of the captured images overlap with each other. The information reception unit receives, from the external device via the network, the cutting-out-target-region information items for the predetermined number of cameras selected from the plurality of cameras. The image-data transmission unit cuts out, on the basis of the cutting-out-target-region information items for the predetermined number of cameras, the data items of the images of the cutting-out-target regions from the corresponding ones of the data items of the images captured by the plurality of cameras, the data items of the captured images being stored in the storage unit. Then, the image-data transmission unit transmits the data items of the images of the cutting-out-target regions to the external device via the network.
In this way, according to the present technology, not all the data items of the images captured by the plurality of cameras, but only the data items of the images of the cutting-out-target regions from the predetermined number of cameras are transmitted to the external device via the network on the basis of the information items from the external device. With this, a usage amount of a network bandwidth can be kept small. As a result, the network bandwidth can be effectively utilized.
Note that, according to the present technology, for example, the image-data transmission unit may transmit, to the external device, the data items of the images of the cutting-out-target regions from the predetermined number of cameras after execution of a compression-coding process on the data items of the images of the cutting-out-target regions. When the compression-coding process is executed in this way, the usage amount of the network bandwidth can be kept much smaller.
Further, another concept of the present technology lies in a transmission device including:
a plurality of cameras that perform imaging in a manner that adjacent ones of captured images overlap with each other; and
a plurality of adapters provided to correspond respectively to the plurality of cameras,
the plurality of adapters respectively including
storage units that store data items of the images captured by the imaging with corresponding ones of the plurality of cameras,
information reception units that receive, from an external device via a network, cutting-out-target-region information items for the corresponding ones of the plurality of cameras, and
image-data transmission units that cut out, on the basis of the cutting-out-target-region information items, data items of images of cutting-out-target regions from corresponding ones of the data items of the captured images, the data items of the captured images being stored in the storage units, and transmit the data items of the images of the cutting-out-target regions to the external device via the network.
According to the present technology, the plurality of cameras and the plurality of adapters provided to correspond respectively to the plurality of cameras are provided. The plurality of cameras perform the imaging in the manner that the adjacent ones of the captured images overlap with each other. The plurality of adapters respectively include the storage units, the information reception units, and the image-data transmission units.
The storage units store the data items of the images captured by the imaging with the corresponding ones of the plurality of cameras. The information reception units receive, from the external device via the network, the cutting-out-target-region information items for the corresponding ones of the plurality of cameras. Then, the image-data transmission units cut out, on the basis of the cutting-out-target-region information items, the data items of the images of the cutting-out-target regions from the corresponding ones of the data items of the captured images, the data items of the captured images being stored in the storage units. Then, the image-data transmission units transmit the data items of the images of the cutting-out-target regions to the external device via the network.
In this way, according to the present technology, not all the images captured by the plurality of cameras, but only the data items of the images of the cutting-out-target regions from the selected predetermined number of cameras are transmitted to the external device via the network on the basis of the information items from the external device. With this, the usage amount of the network bandwidth can be kept small. As a result, the network bandwidth can be effectively utilized.
Still another concept of the present technology lies in a transmission device including
a plurality of cameras that perform imaging in a manner that adjacent ones of captured images overlap with each other,
the plurality of cameras respectively including
information reception units that receive cutting-out-target-region information items from an external device via a network, and
image-data transmission units that cut out, on the basis of the cutting-out-target-region information items, data items of images of cutting-out-target regions from data items of the captured images, and transmit the data items of the images of the cutting-out-target regions to the external device via the network.
According to the present technology, the plurality of cameras are provided. The plurality of cameras perform the imaging in the manner that the adjacent ones of the captured images overlap with each other. The plurality of cameras respectively include the information reception units and the image-data transmission units. The information reception units receive the cutting-out-target-region information items from the external device via the network. The image-data transmission units cut out, on the basis of the cutting-out-target-region information items, the data items of the images of the cutting-out-target regions from the data items of the captured images. Then, the image-data transmission units transmit the data items of the images of the cutting-out-target regions to the external device via the network.
In this way, according to the present technology, not all the images captured by the plurality of cameras, but only the data items of the images of the cutting-out-target regions from the selected predetermined number of cameras are transmitted to the external device via the network on the basis of the information items from the external device. With this, the usage amount of the network bandwidth can be kept small. As a result, the network bandwidth can be effectively utilized.
Yet another concept of the present technology lies in a transmission device including
a plurality of servers provided to correspond respectively to a plurality of cameras that perform imaging in a manner that adjacent ones of captured images overlap with each other,
the plurality of servers respectively including
storage units that store data items of the images captured by the imaging with corresponding ones of the plurality of cameras,
information reception units that receive, from an external device via a network, cutting-out-target-region information items for the corresponding ones of the plurality of cameras, and
image-data transmission units that cut out, on the basis of the cutting-out-target-region information items, data items of images of cutting-out-target regions from the data items of the captured images, the data items of the captured images being stored in the storage units, and transmit the data items of the images of the cutting-out-target regions to the external device via the network.
According to the present technology, the plurality of servers are provided. The plurality of servers are provided to correspond respectively to the plurality of cameras that perform the imaging in the manner that the adjacent ones of the captured images overlap with each other. The plurality of servers respectively include the storage units, the information reception units, and the image-data transmission units.
The storage units store the data items of the images captured by the imaging with the corresponding ones of the plurality of cameras. The information reception units receive, from the external device via the network, the cutting-out-target-region information items for the corresponding ones of the plurality of cameras. Then, the image-data transmission units cut out, on the basis of the cutting-out-target-region information items, the data items of the images of the cutting-out-target regions from the data items of the captured images, the data items of the captured images being stored in the storage units. Then, the image-data transmission units transmit the data items of the images of the cutting-out-target regions to the external device via the network.
In this way, according to the present technology, not all the images captured by the plurality of cameras, but only the data items of the images of the cutting-out-target regions from the selected predetermined number of cameras are transmitted to the external device via the network on the basis of the information items from the external device. With this, the usage amount of the network bandwidth can be kept small. As a result, the network bandwidth can be effectively utilized.
Further, yet another concept of the present technology lies in a reception device including:
a cutting-out-target-region determination unit that sets a display region in a composite image formed of images captured by imaging with a plurality of cameras in a manner that adjacent ones of the captured images overlap with each other, and determines regions in the images captured by a predetermined number of cameras as cutting-out-target regions, the regions in the captured images including at least regions that overlap with the display region;
an information transmission unit that transmits cutting-out-target-region information items for the predetermined number of cameras to an external device via a network;
an image-data reception unit that receives, from the external device via the network, data items of images of the cutting-out-target regions from the predetermined number of cameras; and
an image-data processing unit that executes a stitching process on the received data items of the images of the cutting-out-target regions from the predetermined number of cameras to generate a data item of an image in the composite image, the image in the composite image corresponding to the display region.
According to the present technology, the cutting-out-target-region determination unit sets the display region in the composite image formed of the images captured by the imaging with the plurality of cameras in the manner that the adjacent ones of the captured images overlap with each other. Then, the cutting-out-target-region determination unit determines the regions in the images captured by the predetermined number of cameras as the cutting-out-target regions, the regions in the captured images including at least the regions that overlap with the display region.
The cutting-out-target-region determination unit may set, for example, the display region on the basis of control information for the display region, the control information being supplied from a display device that displays an image generated from the data item of the image in the composite image. In this case, the display device may, for example, be a head mounted display, and the control information for the display region may be orientation information. In addition, in this case, the display device may, for example, be a personal computer, a tablet, or a smartphone, and the control information for the display region may be movement information based on an operation by a user.
The information transmission unit transmits the cutting-out-target-region information items for the predetermined number of cameras to the external device via the network. The image-data reception unit receives, from the external device via the network, the data items of the images of the cutting-out-target regions from the predetermined number of cameras. Then, the image-data processing unit executes the stitching process on the received data items of the images of the cutting-out-target regions from the predetermined number of cameras to generate the data item of the image in the composite image, the image in the composite image corresponding to the display region.
The received data items of the images of the cutting-out-target regions from the predetermined number of cameras may, for example, have been subjected to a compression-coding process, and the image-data processing unit may, for example, execute a compression-decoding process on the data items of the images of the cutting-out-target regions from the predetermined number of cameras, and then execute the stitching process to generate the data item of the image in the composite image, the image in the composite image corresponding to the display region.
In this way, according to the present technology, the cutting-out-target-region information items for the predetermined number of cameras, which correspond to the display region, are transmitted to the external device, and only the data items of the images of the cutting-out-target regions from the predetermined number of cameras are received from the external device via the network. With this, the usage amount of the network bandwidth can be kept small. As a result, the network bandwidth can be effectively utilized. Further, according to the present technology, the stitching process is executed on the received data items of the images of the cutting-out-target regions from the predetermined number of cameras such that the data item of the image in the composite image, which corresponds to the display region, is generated. In this way, the stitching process is executed only on the parts corresponding to the display region, and hence processing load can be reduced.
Further, yet another concept of the present technology lies in a transmission device including:
a storage unit that stores data items of images captured by imaging with a plurality of cameras in a manner that adjacent ones of the captured images overlap with each other;
an information reception unit that receives, from an external device via a network, cutting-out-target-region information items for a predetermined number of cameras selected from the plurality of cameras;
an image-data cutting-out unit that cuts out, on the basis of the cutting-out-target-region information items for the predetermined number of cameras, data items of images of cutting-out-target regions from corresponding ones of the data items of the captured images, the data items of the captured images being stored in the storage unit;
an image-data processing unit that executes a stitching process on the data items of the images of the cutting-out-target regions from the predetermined number of cameras to generate a data item of an image in a composite image; and
an image-data transmission unit that transmits the data item of the image in the composite image to the external device via the network.
According to the present technology, the storage unit stores the data items of the images captured by the imaging with the plurality of cameras in the manner that the adjacent ones of the captured images overlap with each other. The information reception unit receives, from the external device via the network, the cutting-out-target-region information items for the predetermined number of cameras selected from the plurality of cameras.
The image-data cutting-out unit cuts out, on the basis of the cutting-out-target-region information items for the predetermined number of cameras, the data items of the images of the cutting-out-target regions from the corresponding ones of the data items of the images captured by the plurality of cameras, the data items of the captured images being stored in the storage unit. The image-data processing unit executes the stitching process on the data items of the images of the cutting-out-target regions from the predetermined number of cameras to generate the data item of the image in the composite image. Then, the image-data transmission unit transmits the data item of the image in the composite image to the external device via the network.
In this way, according to the present technology, not all the images captured by the plurality of cameras, but the data item of the image in the composite image, which is generated by executing the stitching process on the data items of the images of the cutting-out-target regions from the selected predetermined number of cameras, is transmitted to the external device via the network on the basis of the information items from the external device. With this, the usage amount of the network bandwidth can be kept small. As a result, the network bandwidth can be effectively utilized, and processing load on the external device can be reduced.
Further, yet another concept of the present technology lies in a reception device including:
a cutting-out-target-region determination unit that sets a display region in a composite image formed of images captured by imaging with a plurality of cameras in a manner that adjacent ones of the captured images overlap with each other, and determines regions in the images captured by a predetermined number of cameras as cutting-out-target regions, the regions in the captured images including at least regions that overlap with the display region;
an information transmission unit that transmits cutting-out-target-region information items for the predetermined number of cameras to an external device via a network; and
an image-data reception unit that receives, via the network, a data item of an image in the composite image, the image in the composite image having been generated by execution of a stitching process on the data items of images of the cutting-out-target regions from the predetermined number of cameras.
According to the present technology, the cutting-out-target-region determination unit sets the display region in the composite image formed of the images captured by imaging with the plurality of cameras in the manner that the adjacent ones of the captured images overlap with each other. Then, the cutting-out-target-region determination unit determines the regions in the images captured by the predetermined number of cameras as the cutting-out-target regions, the regions in the captured images including at least the regions that overlap with the display region. The information transmission unit transmits the cutting-out-target-region information items for the predetermined number of cameras to the external device via the network. The image-data reception unit receives, via the network, the data item of the image in the composite image, the image in the composite image having been generated by the execution of the stitching process on the data items of the images of the cutting-out-target regions from the predetermined number of cameras.
In this way, according to the present technology, the cutting-out-target-region information items for the predetermined number of cameras, which correspond to the display region, are transmitted to the external device, and the data item of the image in the composite image, which is generated by executing the stitching process on the data items of the images of the cutting-out-target regions from the predetermined number of cameras, is received from the external device. With this, the usage amount of the network bandwidth can be kept small. As a result, the network bandwidth can be effectively utilized. In addition, the stitching process need not be executed, and hence processing load can be reduced.
According to the present technology, the usage amount of the network bandwidth can be kept small irrespective of the number of cameras. With this, the network bandwidth can be effectively utilized. Note that, the advantages disclosed herein are merely illustrative, and the present technology is not limited thereto. In addition, other advantages may be additionally provided.
Now, an embodiment for carrying out the invention (hereinafter, abbreviated as “embodiment”) is described. Note that, the description is made in the following order.
1. Embodiment
2. Modification
[Configuration Example of Transmission/Reception System]
The transmission side is described. The transmission/reception system 10A includes, on the transmission side, a plurality of, specifically, four cameras (camcorders): a camera (camera A) 101A, a camera (camera B) 101B, a camera (camera C) 101C, and a camera (camera D) 101D. In this case, the cameras are each, for example, an HD camera for generating data items of full HD images.
The cameras 101A, 101B, 101C, and 101D are arranged in, for example, a two-by-two matrix in a horizontal direction and a vertical direction.
Further, the transmission/reception system 10A includes, on the transmission side, adapters 102A to 102D provided correspondingly to the cameras 101A to 101D, respectively. The adapters 102A to 102D are connected respectively to the cameras 101A to 101D via USB (Universal Serial Bus) cables and HDMI (High-Definition Multimedia Interface) cables. In addition, the adapters 102A to 102D are connected to an Ethernet switch 105 via respective LAN cables. Note that, “HDMI” and “Ethernet” are each a trademark.
The adapters receive data items of images captured by imaging with the corresponding cameras, and store these data items into storage units. Further, the adapters receive cutting-out-target-region information items for the corresponding cameras from the reception side via the network. In addition, on the basis of the cutting-out-target-region information items, the adapters cut out data items of images of cutting-out-target regions from the data items of the captured images stored in the storage units, and transmit the data items of these cut-out images to the reception side via the network.
The cameras (and adapters) are synchronized with each other using, for example, PTP (IEEE 1588 Precision Time Protocol) via the network. In this way, the cameras can be subjected to V-synchronization via the network. With this system, the cameras (and adapters) perform imaging and process the data items of the captured images while maintaining the V-synchronization.
The CPU 121 controls operations of the units in the adapter 102. The USB interface 122 is an interface for performing communication between the adapter 102 and the camera. In this USB communication, an instruction command issued on the reception side with respect to the camera is transmitted to the camera. Further, this USB communication may be used instead of HDMI transmission described below for receiving the data items of the captured images from the camera.
The HDMI interface 123 is an interface for performing the HDMI data transmission between the adapter 102 and the camera. In this case, the camera corresponds to a source device, and the adapter 102 corresponds to a sink device. In this HDMI data transmission, the data items of the captured images, which are transmitted from the camera via HDMI, are received.
The memory 124 serves as the storage unit. The memory 124 stores the data items of the captured images, which are transmitted from the camera via the HDMI data transmission or the USB communication. The Ethernet interface 126 is an interface for establishing connection to the network, specifically, to a LAN (Local Area Network). This Ethernet interface 126 receives, via the network, the above-mentioned instruction command issued on the reception side with respect to the camera.
Further, this Ethernet interface 126 receives the cutting-out-target-region information item for the corresponding camera, which is transmitted from the reception side via the network. Specifically, the Ethernet interface 126 receives, from the reception side, an instruction packet containing the cutting-out-target-region information item.
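The format of the instruction packet is not specified in this description. As one hypothetical encoding, purely for illustration, the camera identifier and the cutting-out-target-region information item could be packed into a fixed-size binary payload as follows (the field names and layout are assumptions, not the actual packet format):

```python
import struct

# Hypothetical payload layout for an instruction packet carrying a
# cutting-out-target-region information item: a camera ID followed by
# the region's reference coordinates (x', y'), width w', and height h',
# each as a 32-bit big-endian unsigned integer.
REGION_FMT = "!5I"


def pack_region(cam_id, x, y, w, h):
    """Build the payload of an instruction packet."""
    return struct.pack(REGION_FMT, cam_id, x, y, w, h)


def unpack_region(payload):
    """Parse the payload back into a camera ID and a region tuple."""
    cam_id, x, y, w, h = struct.unpack(REGION_FMT, payload)
    return cam_id, (x, y, w, h)


payload = pack_region(2, 1600, 800, 320, 280)
print(unpack_region(payload))  # (2, (1600, 800, 320, 280))
```

A payload of this kind would be carried over the LAN to the Ethernet interface 126 of the adapter serving the addressed camera.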
Note that, the cutting-out-target region refers to a region that is cut out from the image captured by the corresponding camera and includes at least a region that overlaps with a display region to be set in a composite image formed of the images captured by the cameras 101A to 101D. In this case, when the image captured by the corresponding camera does not include the region that overlaps with the display region, the cutting-out-target-region information item is not transmitted from the reception side. This cutting-out-target-region information item is described in further detail together with description of the reception side below.
Further, this Ethernet interface 126 transmits, to the reception side via the network, the data item of the image of the cutting-out-target region, which is cut out from the data item of the captured image stored in the memory 124 on the basis of the cutting-out-target-region information item.
The encoder 125 cuts out, on the basis of the cutting-out-target-region information item received by the Ethernet interface 126, the data item of the image of the cutting-out-target region from the data item of the captured image stored in the memory 124. With this, the encoder 125 generates the image data item to be transmitted to the reception side. Note that, when necessary, this encoder 125 executes a process of compression-coding the data item of the image of this cutting-out-target region with, for example, JPEG2000 or JPEG so as to reduce a data amount.
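The cutting-out operation performed by the encoder 125 can be sketched as a simple array slice. The sketch below is illustrative only (the names `CutRegion` and `crop_region` and the row-major RGB array layout are assumptions); the optional JPEG 2000/JPEG compression-coding step is omitted:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class CutRegion:
    # Cutting-out-target region in the coordinate system of the
    # captured image: upper-left reference coordinates (x', y'),
    # height h', and width w'.
    x: int
    y: int
    h: int
    w: int


def crop_region(captured: np.ndarray, region: CutRegion) -> np.ndarray:
    """Cut out the data item of the image of the cutting-out-target
    region from the data item of the captured image."""
    return captured[region.y:region.y + region.h,
                    region.x:region.x + region.w]


# A full HD captured image (1080 rows x 1920 columns, RGB).
captured = np.zeros((1080, 1920, 3), dtype=np.uint8)
cut = crop_region(captured, CutRegion(x=1200, y=600, h=480, w=720))
print(cut.shape)  # (480, 720, 3)
```

Only the cut-out array, rather than the full 1920 x 1080 frame, is then handed to the Ethernet interface 126 for transmission.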
Next, referring back to
The subsequent processing device 103 sets the display region in the composite image formed of the images captured by the cameras 101A to 101D, and determines regions in images captured by a predetermined number of cameras as the cutting-out-target regions, the regions including at least the regions that overlap with the display region. For example, (a) of
(b) of
(a) of
The display region set in the composite image is defined by, for example, reference coordinates (X, Y) being coordinates of an upper-left end, a height H, and a width W. In this illustration, the reference coordinates (X, Y) are represented in a coordinate system of the composite image. In this case, the reference coordinates (X, Y) vary in accordance with variation in orientation. Note that, the height H and the width W are fixed values corresponding to a display resolution of the head mounted display 104, such as HD.
In (b) of
(c) of
The cutting-out-target region in each of the captured images is defined by, for example, reference coordinates (x′, y′) being coordinates of an upper-left end, a height h′, and a width w′. In this illustration, the reference coordinates (x′, y′) are represented in the coordinate system of the captured image. Note that, the cutting-out-target region in each of the captured images may be defined by other information items such as the coordinates of the upper-left end and coordinates of a lower-right end.
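The determination of the cutting-out-target regions can be sketched as an intersection computation between the display region and each camera's frame. The sketch below is a simplified illustration: it models the composite image as a two-by-two tiling of full HD frames with no overlap between adjacent captured images (the real system overlaps them and adds pasting margins), and the function name is an assumption:

```python
def regions_for_cameras(X, Y, W, H, cam_w=1920, cam_h=1080, cols=2, rows=2):
    """For each camera, determine the region of its captured image
    that overlaps with the display region set in the composite image.

    The display region is given by upper-left reference coordinates
    (X, Y), width W, and height H in the composite-image coordinate
    system.  Returns a dict mapping a camera index (row, col) to the
    cutting-out-target region (x', y', w', h') in that camera's own
    coordinate system.  Cameras whose captured images do not overlap
    with the display region are omitted; no cutting-out-target-region
    information item would be transmitted to them.
    """
    regions = {}
    for r in range(rows):
        for c in range(cols):
            # Position of this camera's frame in composite coordinates.
            fx, fy = c * cam_w, r * cam_h
            # Intersection of the display region and the camera frame.
            ix0, iy0 = max(X, fx), max(Y, fy)
            ix1 = min(X + W, fx + cam_w)
            iy1 = min(Y + H, fy + cam_h)
            if ix0 < ix1 and iy0 < iy1:
                # Convert back to the captured image's coordinate system.
                regions[(r, c)] = (ix0 - fx, iy0 - fy, ix1 - ix0, iy1 - iy0)
    return regions


# A display region straddling all four camera images.
print(regions_for_cameras(X=1600, Y=800, W=1280, H=720))
```

With the example values above, the display region crosses both tile boundaries, so all four cameras receive a cutting-out-target-region information item; moving the display region entirely inside one tile would leave a single entry in the result.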
Further, the subsequent processing device 103 transmits, to the transmission side via the network, the information items of the cutting-out-target regions in the images captured by the predetermined number of cameras, the cutting-out-target regions overlapping with the display region. In this case, the subsequent processing device 103 transmits the instruction packets containing the cutting-out-target-region information items respectively to the adapters connected to the corresponding cameras.
Still further, the subsequent processing device 103 receives, from the transmission side via the network, the data items of the images of the cutting-out-target regions, which are cut out from the data items of the images captured by the above-mentioned predetermined number of cameras (in this case, all cameras 101A to 101D). Yet further, the subsequent processing device 103 executes not only a stitching process but also a lens-distortion correction process and a projective transformation process when necessary on the received data items of the images of the cutting-out-target regions so as to generate a data item of an image in the composite image, which corresponds to the display region. The subsequent processing device 103 transmits the data item of this image in the composite image to the head mounted display 104.
The CPU 131 controls operations of the units in the subsequent processing device 103. Further, on the basis of the orientation information that is transmitted as the control information for the display region from the head mounted display 104, the CPU 131 sets the display region in the composite image formed of the images captured by the cameras 101A to 101D. Then, the CPU 131 determines, as the cutting-out-target region, the region including at least the region in each of the images captured by the predetermined number of cameras, the region in each of the images overlapping with this display region (refer to
The Ethernet interface 132 is an interface for establishing connection to the network, specifically, to the LAN (Local Area Network). This Ethernet interface 132 transmits, to the transmission side via the network, the information items of the cutting-out-target regions in the images captured by the predetermined number of cameras, the cutting-out-target regions overlapping with the display region. Further, this Ethernet interface 132 receives, via the network, the data items of the images of the cutting-out-target regions, which are cut out from the data items of the images captured by the predetermined number of cameras, and which are transmitted from the transmission side via the network.
The memory 133 stores the data items of the images of the cutting-out-target regions, which are cut out from the data items of the images captured by the predetermined number of cameras, and which are received by the Ethernet interface 132. The signal processor 134 executes not only the stitching process but also the lens-distortion correction process and the projective transformation process when necessary on the data items of the images of the cutting-out-target regions, which are stored in the memory 133, so as to generate the data item of the image in the composite image, which corresponds to the display region. The stitching process to be executed includes extraction of features of the images on the basis of, for example, a general SIFT (Scale-Invariant Feature Transform) algorithm. Note that, in a case where the compression-coding process has been executed on the data items of the images of the cutting-out-target regions, which are stored in the memory 133, this signal processor 134 executes the processes after executing a compression-decoding process.
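The final pasting step of the stitching process can be sketched as a cross-fade across the pasting-margin band shared by two adjacent cut-out images. The sketch below is illustrative only: it omits the SIFT-based feature extraction, alignment, lens-distortion correction, and projective transformation that precede blending in the signal processor 134, and the function name is an assumption:

```python
import numpy as np


def paste_with_margin(left: np.ndarray, right: np.ndarray,
                      margin: int) -> np.ndarray:
    """Join two horizontally adjacent cut-out images whose last/first
    `margin` columns show the same scene content (the pasting margin),
    cross-fading linearly inside that band."""
    assert left.shape[0] == right.shape[0], "rows must match after alignment"
    # Blend weights ramp from 1 (pure left) to 0 (pure right).
    w = np.linspace(1.0, 0.0, margin)[None, :, None]
    band = (left[:, -margin:].astype(np.float64) * w +
            right[:, :margin].astype(np.float64) * (1.0 - w))
    return np.concatenate(
        [left[:, :-margin],
         band.astype(left.dtype),
         right[:, margin:]], axis=1)


left = np.full((480, 720, 3), 200, dtype=np.uint8)
right = np.full((480, 640, 3), 100, dtype=np.uint8)
out = paste_with_margin(left, right, margin=32)
print(out.shape)  # (480, 1328, 3)
```

Because only the cut-out regions corresponding to the display region are blended, the cost of this step stays bounded by the display resolution rather than by the total number of cameras.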
The USB interface 135 is an interface for performing communication via USB between the subsequent processing device 103 and the head mounted display 104. In this USB communication, the orientation information as the control information for the display region is received from the head mounted display 104. Further, this USB communication may be used instead of the HDMI transmission described below for transmitting the data item of the image in the composite image, which is generated by the signal processor 134, to the head mounted display 104.
The HDMI interface 136 is an interface for performing the HDMI data transmission between the subsequent processing device 103 and the head mounted display 104. In this case, the subsequent processing device 103 corresponds to a source device, and the head mounted display 104 corresponds to a sink device. In this HDMI data transmission, the data item of the image in the composite image, which is generated by the signal processor 134, is transmitted to the head mounted display 104.
(1) The subsequent processing device 103 sets, on the basis of the orientation information supplied from the head mounted display 104, the display region in the composite image formed of the images captured by the cameras 101A to 101D (with reference to (a) of
(2) The subsequent processing device 103 determines the cutting-out-target region in each of the camera images, which is contained in the display region (with reference to (c) of
(3) The subsequent processing device 103 transmits, respectively to the corresponding cameras via the network, the information items of the cutting-out-target regions in the camera images. In this case, the subsequent processing device 103 transmits the instruction packets containing these cutting-out-target-region information items (reference coordinates (x′, y′), heights h′, and widths w′) respectively to the adapters connected to the corresponding cameras.
(4) The adapters 102, which have received the cutting-out-target-region information items from the subsequent processing device 103, cut out the data items of the images of the regions defined by the cutting-out-target-region information items from the data items of the images captured by the corresponding cameras. In this case, not only the regions that overlap with the display region but also data items of images of the pasting-margin regions on the outside of the overlapping regions are cut out together.
(5) The adapters 102, which have received the cutting-out-target-region information items from the subsequent processing device 103, transmit the data items of the images, which are cut out from the data items of the images captured by the corresponding cameras, to the subsequent processing device 103 via the network.
(6) The subsequent processing device 103 executes not only the stitching process but also the lens-distortion correction process and the projective transformation process when necessary on the data items of the images, which are received from the cameras (adapters), so as to generate a data item of a display image (data item of the image in the composite image, which corresponds to the display region).
(7) The subsequent processing device 103 transmits the data item of the display image to the display device, specifically, to the head mounted display 104.
As described above, in the transmission/reception system 10A shown in
With this, a usage amount of a network bandwidth can be reduced to a usage amount corresponding to the display region. As a result, the network bandwidth can be effectively utilized.
Further, the subsequent processing device 103 generates the data item of the image in the composite image, which corresponds to the display region, by executing the processes such as the stitching process on the data items of the images of the cutting-out-target regions from the cameras, which are received from the transmission side. In other words, the processes such as the stitching process are executed only on the parts corresponding to the display region. With this, processing load can be reduced.
Note that, in the example of the embodiment described above, not only the cameras 101A to 101D but also the adapters 102A to 102D corresponding respectively to the cameras 101A to 101D are provided on the transmission side. However, in a case where the cameras 101A to 101D each have a function of the adapter, the adapters to be mounted externally to the cameras can be omitted.
When the cameras receive the cutting-out-target-region information items from the subsequent processing device 103, the cameras cut out the data items of the images of the cutting-out-target regions from the data items of the captured images, and transmit the data items of these cut-out images to the subsequent processing device 103 via the network. Other configuration features of this transmission/reception system 10B are the same as those of the transmission/reception system 10A shown in
Further, with regard to the example of the embodiment described above, in which not only the cameras 101A to 101D but also the adapters 102A to 102D corresponding respectively to the cameras 101A to 101D are provided on the transmission side, it is also conceivable to provide functions of the cameras and the functions of the adapters to servers.
When the servers receive the cutting-out-target-region information items from the subsequent processing device 103, the servers cut out data items of images of cutting-out-target regions from the data items of the images captured by the corresponding cameras, which are stored in the storages. Then, the servers transmit the data items of these cut-out images to the subsequent processing device 103 via the network. Other configuration features of this transmission/reception system 10C are the same as those of the transmission/reception system 10A shown in
Further, with regard to the example of the embodiment described above, in which not only the cameras 101A to 101D but also the adapters 102A to 102D corresponding respectively to the cameras 101A to 101D are provided on the transmission side, it is also conceivable to integrate the four adapters 102A to 102D into a single adapter.
When the adapter 102 receives the cutting-out-target-region information items for the cameras from the subsequent processing device 103, the adapter 102 cuts out the data items of the images of the cutting-out-target regions from the data items of the images captured respectively by the cameras, which are stored in the memory. Then, the adapter 102 transmits the data items of these cut-out images to the subsequent processing device 103 via the network. Other configuration features of this transmission/reception system 10D are the same as those of the transmission/reception system 10A shown in
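The integrated adapter 102 can be sketched as a single component whose memory holds the latest frame from each camera, and which answers one request carrying a region per selected camera. The class shape, the dict-based "memory", and the identifiers are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the single integrated adapter 102: one memory
# stores the latest frame per camera; one request from the subsequent
# processing device carries a cutting-out-target region per selected camera.

class Adapter:
    def __init__(self):
        self.memory = {}                    # camera_id -> latest frame

    def store(self, camera_id, frame):
        self.memory[camera_id] = frame      # overwrite with newest capture

    def handle_request(self, regions):
        """regions: {camera_id: (x, y, w, h)} -> {camera_id: cut-out image}."""
        reply = {}
        for cam, (x, y, w, h) in regions.items():
            frame = self.memory[cam]
            reply[cam] = frame[y:y + h, x:x + w]
        return reply

adapter = Adapter()
for cam in ("101A", "101B", "101C", "101D"):
    adapter.store(cam, np.zeros((1080, 1920, 3), dtype=np.uint8))
reply = adapter.handle_request({"101A": (1600, 900, 320, 180),
                                "101B": (0, 900, 208, 180)})
print({cam: img.shape for cam, img in reply.items()})
```

Note that the request names only two of the four stored cameras; the adapter transmits nothing for the cameras outside the display region.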
Note that, features that are the same as those of the transmission/reception system 10D shown in
When the server 106 receives the cutting-out-target-region information items for the cameras from the subsequent processing device 103, the server 106 cuts out data items of images of cutting-out-target regions from the data items of the images captured by the respective cameras, which are stored in the storage. Then, the server 106 transmits the data items of these cut-out images to the subsequent processing device 103 via the network. Other configuration features of this transmission/reception system 10E are the same as those of the transmission/reception system 10C shown in
Further, in the example of the embodiment described above, the transmission side and the reception side are connected to each other via the wired network connection with use of the LAN cable. However, it is also conceivable to establish a wireless network connection therebetween.
Note that, although not described in detail, the configuration example of the transmission/reception system 10F shown in
Further, in the example of the embodiment described above, the head mounted display 104 is connected as the display device to the subsequent processing device 103. However, the display device is not limited to this head mounted display 104. For example, (a) of
In addition, in the example of the embodiment described above, the data items of the images captured by the four cameras 101A to 101D are processed. However, the number of the cameras is not limited to four, and another configuration example in which data items of images captured by a different number of cameras are processed is also conceivable. For example,
In this case, not only the cameras 101A to 101P but also adapters 102A to 102P corresponding respectively to the cameras 101A to 101P are provided on the transmission side.
Note that, although not described in detail, the configuration example of the transmission/reception system 10G shown in
Further, in the example of the embodiment described above, the data items of the images of the cutting-out-target regions from the predetermined number of cameras are transmitted from the transmission side to the subsequent processing device 103. Then, in the subsequent processing device 103, not only the stitching process but also the lens-distortion correction process and the projective transformation process are executed when necessary on these data items, such that the data item of the image in the composite image, which corresponds to the display region, is generated. However, it is also conceivable to execute the processes such as the stitching process on the transmission side, and then to transmit the data item of the image in the composite image after these processes from the transmission side to the subsequent processing device 103. In this case, the processes such as the stitching process need not be executed in the subsequent processing device 103, and hence processing load can be significantly reduced.
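Whichever side executes it, the stitching of two adjacent cut-out images can be sketched as a cross-fade over their shared overlap. This is a minimal illustration only: a real pipeline would, as the text notes, also apply the lens-distortion correction process and the projective transformation process before blending, and the overlap width here is assumed.

```python
import numpy as np

# Hypothetical sketch of the stitching step: join two horizontally adjacent
# cut-out images (same height) with a linear cross-fade over `overlap`
# shared columns, hiding the seam between the two cameras' images.

def stitch_horizontal(left, right, overlap):
    """Blend `left` and `right` over `overlap` columns and concatenate."""
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]   # fade left->right
    seam = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    return np.concatenate(
        [left[:, :-overlap], seam.astype(left.dtype), right[:, overlap:]],
        axis=1)

left = np.full((180, 320, 3), 200, dtype=np.uint8)   # stand-in cut-out images
right = np.full((180, 208, 3), 50, dtype=np.uint8)
out = stitch_horizontal(left, right, overlap=32)
print(out.shape)  # (180, 496, 3): width = 320 + 208 - 32
```

Executing this on the transmission side, as suggested above, moves the blending cost off the subsequent processing device 103 at the price of transmitting the already-composited image.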
Further, although not described above, it is also conceivable to provide the function of the subsequent processing device 103 to the display devices such as the head mounted display 104. In that case, the subsequent processing device 103 need not be provided independently of the display device, and hence the configuration on the reception side can be simplified.
Note that, the present technology may also provide the following configurations.
(1) A transmission device, including:
(2) The transmission device according to Item (1), in which
(3) A transmission method including an information reception step of receiving, from an external device via a network, cutting-out-target-region information items for a predetermined number of cameras selected from a plurality of cameras, the plurality of cameras performing imaging in a manner that adjacent ones of captured images overlap with each other,
(4) A transmission device, including:
(5) A transmission device, including
(6) A transmission device, including
(7) A reception device, including:
(8) The reception device according to Item (7), in which
(9) The reception device according to Item (8), in which
(10) The reception device according to Item (8), in which
(11) The reception device according to any one of Items (7) to (10), in which
(12) A reception method, including:
(13) A transmission/reception system, including:
(14) A transmission device, including:
(15) A reception device, including:
Number | Date | Country | Kind
---|---|---|---
2015-224621 | Nov 2015 | JP | national
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2016/083985 | 11/16/2016 | WO | 00