The present technology relates to a transmitting device, a transmitting method, a receiving device, a receiving method, a transmission system, and a non-transitory computer-readable storage medium storing a program, and more particularly, to a transmitting device, a transmitting method, a receiving device, a receiving method, a transmission system, and a non-transitory computer-readable storage medium storing a program, which are capable of transmitting phase detection image data in addition to visible image data in a communication standard used for an existing DisplayPort interface.
This application claims the benefit of Japanese Priority Patent Application JP 2014-064702 filed on Mar. 26, 2014, the entire contents of which are incorporated herein by reference.
A standard for an interface for transmitting image data to a display, that is, a standard called DisplayPort (trademark), has been popularized (for example, see Non-Patent Literature 1).
Meanwhile, in the DisplayPort (trademark) standard, transmission of audio data in addition to visible image data including effective pixel data is also specified, and it is possible to transmit and receive the audio data and the visible image data.
However, in the DisplayPort (trademark) standard, it is difficult to transmit and receive undefined data in addition to visible image data unless a new definition is given.
The present technology was made in light of the foregoing, and particularly, it is desirable to transmit phase detection image data in addition to visible image data in a communication standard (DisplayPort (trademark)) used in an existing DisplayPort interface.
A transmitting device according to an aspect of the present technology is a transmitting device that transmits visible image data including effective pixel data of an imaging device using a format for transmission to a display, and includes a transmitting unit that transmits phase detection image data in the imaging device in addition to the visible image data.
It is possible to cause the transmitting unit to packetize and transmit the phase detection image data in the imaging device using the format for the transmission to the display.
The format for the transmission to the display is a format specified in a DisplayPort (trademark), and it is possible to cause the transmitting unit to packetize and transmit the phase detection image data in the imaging device using a secondary data packet (SDP) specified in the DisplayPort (trademark) as the format for the transmission to the display.
It is possible to cause the transmitting unit to packetize and transmit the phase detection image data in the imaging device using a phase detection image information packet and a phase detection image data packet of the SDP specified in the DisplayPort (trademark).
It is possible to cause the transmitting unit to arrange the phase detection image information packet in a vertical blanking region, arrange the phase detection image data packet in a horizontal blanking region, and packetize and transmit the phase detection image data.
It is possible to cause the phase detection image information packet to include information of the number of lines per frame, the number of pixels per line, and the number of bits per pixel of a phase detection image configured with the phase detection image data, and the number of pixels per phase detection image data packet.
It is possible to cause the transmitting unit to pack the phase detection image data packet in a certain byte unit and transmit the packed phase detection image data packet.
It is possible to cause the transmitting unit to transmit the phase detection image data in the imaging device in addition to the visible image data using a scheme in which a plurality of streams are transmitted from a plurality of stream sources to a plurality of stream sinks through one transmission path in the format for the transmission to the display.
A format specified in a DisplayPort (trademark) can be used as the format for the transmission to the display, and it is possible to cause the transmitting unit to transmit the phase detection image data in the imaging device in addition to the visible image data by transmitting a stream including the visible image data and a stream including the phase detection image data from the stream sources to the stream sinks through one transmission path using a virtual channel specified in the DisplayPort (trademark).
It is possible to cause a main stream attributes (MSA) that is individual to each stream of the virtual channel and is image characteristic information of the stream to include information of the number of lines per frame, the number of pixels per line, and the number of bits per pixel of a phase detection image configured with the phase detection image data when the stream is a stream including the phase detection image data.
It is possible to cause the MSA to further include information of Mvid (a video stream clock frequency) and Nvid (a link clock frequency), and when the number of pixels in the vertical direction and the number of pixels in the horizontal direction in the phase detection image including the phase detection image data are 1/t and 1/s of the number of pixels in the vertical direction and the number of pixels in the horizontal direction of a visible image including the visible image data, respectively, a ratio of the Mvid and the Nvid of the MSA of the phase detection image data is 1/(t×s) of a ratio of the Mvid and the Nvid of the MSA of the visible image data.
It is possible to cause the MSA to further include information specifying the imaging device.
A transmitting method according to an aspect of the present technology is a transmitting method of a transmitting device that transmits visible image data including effective pixel data of an imaging device using a format for transmission to a display, and can include transmitting phase detection image data in the imaging device in addition to the visible image data.
A first non-transitory computer-readable storage medium storing a program according to an aspect of the present technology causes a computer controlling a transmitting device that transmits visible image data including effective pixel data of an imaging device using a format for transmission to a display to execute a process including transmitting phase detection image data in the imaging device in addition to the visible image data.
A receiving device according to an aspect of the present technology is a receiving device that receives visible image data including effective pixel data of an imaging device using a format for transmission to a display, and includes a receiving unit that receives phase detection image data in the imaging device in addition to the visible image data.
A receiving method according to an aspect of the present technology is a receiving method of a receiving device that receives visible image data including effective pixel data of an imaging device using a format for transmission to a display, and includes receiving phase detection image data in the imaging device in addition to the visible image data.
A second non-transitory computer-readable storage medium storing a program according to an aspect of the present technology causes a computer controlling a receiving device that receives visible image data including effective pixel data of an imaging device using a format for transmission to a display to execute a process including receiving phase detection image data in the imaging device in addition to the visible image data.
A transmission system according to an aspect of the present technology is a transmission system including a transmitting device that transmits visible image data including effective pixel data of an imaging device using a format for transmission to a display and a receiving device, wherein the transmitting device includes a transmitting unit that transmits phase detection image data in the imaging device in addition to the visible image data, and the receiving device includes a receiving unit that receives the phase detection image data in the imaging device in addition to the visible image data from the transmitting device.
According to an aspect of the present technology, when visible image data including effective pixel data of an imaging device is transmitted using a format for transmission to a display, the transmitting device transmits phase detection image data in the imaging device in addition to the visible image data to the receiving device, and the receiving device receives the phase detection image data in the imaging device in addition to the visible image data from the transmitting device.
The transmitting device and the receiving device according to an aspect of the present technology may be independent devices or may be blocks performing a transmission process.
According to an aspect of the present technology, it is possible to transmit phase detection image data in addition to visible image data in a communication standard used in an existing DisplayPort interface.
The description will proceed in the following order.
1. First embodiment (example using secondary data packet)
2. Second embodiment (example using virtual channel)
More specifically, the transmission system of
<ZAF Pixel>
Among pixels set to an imaging region, ZAF pixels are arranged at certain intervals in addition to effective pixels generating visible image data. As ZAF pixels, there are a left light-shielding pixel in which a left half of a pixel is light-shielded and a right light-shielding pixel in which a right half of a pixel is light-shielded, and images imaged by the two types of pixels deviate from side to side according to a focal length. For this reason, for an image at a focal point, an image in a left light-shielding pixel matches an image in a right light-shielding pixel, but for an image deviated from a focal point, a phase difference according to a deviation amount of a focal length occurs between the respective images. In this regard, focusing can be performed at a high speed by obtaining the deviation amount of the focal length based on the phase difference.
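The phase difference described above can be illustrated with a minimal sketch. The following is not part of any standard or of the present technology; it merely assumes hypothetical 1-D rows of left and right light-shielded pixel values and a simple sum-of-absolute-differences search for the shift between them.

```python
def phase_shift(left, right, max_shift=8):
    """Estimate the horizontal phase difference between the image seen by
    left light-shielded pixels and the image seen by right light-shielded
    pixels, using a 1-D sum-of-absolute-differences search.

    Returns 0 at the focal point; the magnitude of the returned shift
    grows with the deviation amount of the focal length."""
    best, best_err = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        err = count = 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                err += abs(left[i] - right[j])
                count += 1
        err /= count
        if err < best_err:
            best, best_err = s, err
    return best
```

A focus controller could then convert the returned shift into a lens drive amount, which is how phase-detection autofocus avoids the iterative search of contrast-based focusing.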
The ZAF pixels are arranged, for example, as illustrated in
Next, configurations of the transmitting unit 21 and the receiving unit 22 of the transmission system of
The transmitting unit 21 includes an MSA generating unit 41, an SDP generating unit 42, and a multiplexing unit 43.
The MSA generating unit 41 generates main stream attributes (MSA) serving as image characteristic information such as the number of lines per frame, the number of pixels per line, and the number of bits per pixel of image data (visible image data) including effective pixel data that is desired to be transmitted, and supplies the generated MSA to the multiplexing unit 43. The details of the MSA will be described later with reference to
The SDP generating unit 42 generates a packet having a format for packetizing ZAF pixel data into a horizontal blanking region and a vertical blanking region other than an effective pixel region and transmitting the packetized data, that is, a packet called an SDP, and supplies the generated packet to the multiplexing unit 43. The details of the SDP will be described later with reference to
The multiplexing unit 43 multiplexes the MSA supplied from the MSA generating unit 41, the SDP supplied from the SDP generating unit 42, and image data (visible image data) including input effective pixel data, and outputs multiplexed data.
The receiving unit 22 includes a demultiplexing unit 61, an MSA reading unit 62, an SDP reading unit 63, and an image generating unit 64. The demultiplexing unit 61 demultiplexes the multiplexed data transmitted from the transmitting unit 21 into the MSA, the SDP, and the visible image data, and supplies the MSA, the SDP, and the visible image data to the MSA reading unit 62, the SDP reading unit 63, and the image generating unit 64, respectively.
The MSA reading unit 62 reads information of the number of lines per frame, the number of pixels per line, and the number of bits per pixel of the visible image data based on the supplied MSA, and supplies the read information to the image generating unit 64.
The SDP reading unit 63 reads the SDP, and extracts and outputs the packetized ZAF image data.
The image generating unit 64 acquires the visible image data, reconstructs the visible image based on the information of the MSA, and outputs the reconstructed visible image.
<SDP>
Next, the SDP will be described.
The SDP is a packet for packetizing and transmitting data other than visible image data (effective pixel data) using a horizontal blanking region and a vertical blanking region for each frame. As the SDP, there are two types of packets, that is, a phase detection image information packet and a phase detection image data packet.
The phase detection image information packet is a packet including information of the number of lines per frame, the number of pixels per line, and the number of bits per pixel of ZAF image data, and the number of pixels per packet of ZAF pixel data.
The phase detection image data packet is a packet in which a plurality of pieces of ZAF pixel data are packed.
The phase detection image information packet and the phase detection image data packet are packetized data arranged in an image of one frame, for example, as illustrated in
Referring to
There is a vertical blanking region (Vblank) 72 above the effective pixel region 71, and a phase detection image information packet 82 is arranged between an MSA 81 and the SDP.
There is a horizontal blanking region (Hblank) 73 at the left side of the effective pixel region 71, and phase detection image data packets 83-1 to 83-15 are arranged below lines in which there is the ZAF pixel in the effective pixel region 71, respectively. Thus, the lines in which the phase detection image data packets 83-1 to 83-15 are arranged alternate at intervals of 3 lines and 5 lines in the vertical direction. Here, when it is unnecessary to particularly distinguish the phase detection image data packets 83-1 to 83-15 from one another, they are referred to simply as a “phase detection image data packet 83,” and the same applies to the other components.
Thus, in
<Configuration of Phase Detection Image Information Packet>
Next, a configuration of the phase detection image information packet 82 will be described with reference to
Information indicating a packet type (an SDP type) is recorded in HB1 serving as a second byte. In HB1, in order to specify a display type in advance, 00h to 07h are set as a certain display type, but 08h to 0Fh are not set (DisplayPort RESERVED). In this regard, information indicating the phase detection image information packet is allocated to any one of the non-set values 08h to 0Fh. For example, 08h may be allocated as the information indicating the phase detection image information packet.
HB2 and HB3 serving as third and fourth bytes are unused bytes (Reserved (all 0)).
In the data packet of the phase detection image information packet, as illustrated in the lower portion of
Information of lower 8 bits of the number of pixels per H of the phase detection image data is recorded in DB2 serving as a third byte. Further, information of upper 8 bits of the number of pixels per H of the phase detection image data is recorded in DB3 serving as a fourth byte. Here, for example, the number of pixels per H refers to the number of phase detection pixels included in the lines L1 to L15 illustrated in
Information of lower 8 bits of the number of pixels per packet of the phase detection image data packet is recorded in DB4 serving as a fifth byte. Further, information of upper 8 bits of the number of pixels per packet of the phase detection image data packet is recorded in DB5 serving as a sixth byte.
Information of the number of bits per pixel of the phase detection image data packet is recorded in DB6 serving as a seventh byte. Further, DB7 to DB15 of 8-th to 16-th bytes are set as unused regions (Reserved (all 0)).
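The field layout above can be sketched as follows. This is an illustrative assembly of the 16-byte data portion only (DB0 to DB15); the header bytes, parity, and framing are omitted, and the assignment of DB0/DB1 to the lower and upper 8 bits of the number of lines is an assumption inferred from the pattern of the surrounding fields.

```python
def build_zaf_info_packet(lines_per_frame, pixels_per_h,
                          pixels_per_packet, bits_per_pixel):
    """Assemble the 16-byte data portion (DB0-DB15) of the phase
    detection image information packet. 16-bit fields are split into
    a lower byte followed by an upper byte, as in the text."""
    db = bytearray(16)                       # DB7-DB15 stay Reserved (all 0)
    db[0] = lines_per_frame & 0xFF           # assumed: lower 8 bits of lines
    db[1] = (lines_per_frame >> 8) & 0xFF    # assumed: upper 8 bits of lines
    db[2] = pixels_per_h & 0xFF              # lower 8 bits of pixels per H
    db[3] = (pixels_per_h >> 8) & 0xFF       # upper 8 bits of pixels per H
    db[4] = pixels_per_packet & 0xFF         # lower 8 bits of pixels/packet
    db[5] = (pixels_per_packet >> 8) & 0xFF  # upper 8 bits of pixels/packet
    db[6] = bits_per_pixel & 0xFF            # bits per pixel
    return bytes(db)
```

For instance, a phase detection image of 15 lines, 320 pixels per H, 16 pixels per packet, and 10 bits per pixel would occupy DB0 to DB6, with DB7 to DB15 left as all-zero reserved bytes.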
<Transmission Format>
Next, a transmission format of the SDP will be described with reference to
In
In
In
Further, in
Further, below the data of the respective lanes, parities PB8 to PB11 are configured, and one byte is arranged for each lane from the lane 0 to the lane 3. In the lowest portion, an SE indicating the end of the SDP is arranged for each lane.
As described above, 16 bytes of data with 4 bytes of parity added thereto are transmitted.
<Exemplary Configuration of Phase Detection Image Data Packet>
Next, an exemplary configuration of the phase detection image data packet will be described with reference to
In the data packet of the phase detection image data packet, ZAF pixel data are sequentially stored in the data DB0 to DB15.
For example, when 10-bit ZAF pixel data AF0[9:0] to AF15[9:0], each of which includes data of 0-th to 9-th bits, are configured from the left of
In other words, for the lane 0, AF0[9:2] of 1-st ZAF pixel data AF0[9:0] is allocated to data DB0 serving as a first byte in a left-to-right order in
8 bits including AF0[1:0] of 1-st ZAF pixel data AF0[9:0] and AF4[9:4] of 5-th ZAF pixel data AF4[9:0] are allocated to data DB1 serving as a second byte of the lane 0.
8 bits including AF4[3:0] of 5-th ZAF pixel data AF4[9:0] and AF8[9:6] of 9-th ZAF pixel data AF8[9:0] are allocated to data DB2 serving as a third byte of the lane 0.
8 bits including AF8[5:0] of 9-th ZAF pixel data AF8[9:0] and 13-th ZAF pixel data AF12[9:8] are allocated to data DB3 serving as a fourth byte of the lane 0.
8 bits of 13-th ZAF pixel data AF12[7:0] are allocated to data DB16 serving as a fifth byte of the lane 0.
Further, in the lane 1, 8 bits of 2-nd ZAF pixel data AF1[9:2] are allocated to data DB4 of a first byte.
8 bits including 2-nd ZAF pixel data AF1[1:0] and 6-th ZAF pixel data AF5[9:4] are allocated to data DB5 serving as a second byte of the lane 1.
8 bits including 6-th ZAF pixel data AF5[3:0] and 10-th ZAF pixel data AF9[9:6] are allocated to data DB6 serving as a third byte of the lane 1.
8 bits including 10-th ZAF pixel data AF9[5:0] and 14-th ZAF pixel data AF13[9:8] are allocated to data DB7 serving as a fourth byte of the lane 1.
8 bits including 14-th ZAF pixel data AF13[7:0] are allocated to data DB20 serving as a fifth byte of the lane 1.
Further, in the lane 2, 8 bits of 3-rd ZAF pixel data AF2[9:2] are allocated to data DB8 of a first byte.
8 bits including 3-rd ZAF pixel data AF2[1:0] and 7-th ZAF pixel data AF6[9:4] are allocated to data DB9 serving as a second byte of the lane 2.
8 bits including 7-th ZAF pixel data AF6[3:0] and 11-th ZAF pixel data AF10[9:6] are allocated to data DB10 serving as a third byte of the lane 2.
8 bits including 11-th ZAF pixel data AF10[5:0] and 15-th ZAF pixel data AF14[9:8] are allocated to data DB11 serving as a fourth byte of the lane 2.
8 bits of 15-th ZAF pixel data AF14[7:0] are allocated to data DB24 serving as a fifth byte of the lane 2.
Further, in the lane 3, 8 bits of 4-th ZAF pixel data AF3[9:2] are allocated to data DB12 serving as a first byte.
8 bits including 4-th ZAF pixel data AF3[1:0] and 8-th ZAF pixel data AF7[9:4] are allocated to data DB13 serving as a second byte of the lane 3.
8 bits including 8-th ZAF pixel data AF7[3:0] and 12-th ZAF pixel data AF11[9:6] are allocated to data DB14 serving as a third byte of the lane 3.
8 bits including 12-th ZAF pixel data AF11[5:0] and 16-th ZAF pixel data AF15[9:8] are allocated to data DB15 serving as a fourth byte of the lane 3.
8 bits of 16-th ZAF pixel data AF15[7:0] are allocated to data DB28 serving as a fifth byte of the lane 3.
Here, a transmission format is the same as that of the phase detection image information packet described above with reference to
In other words, it is possible to packetize, transmit, and receive the ZAF pixel data using the format based on the SDP.
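The byte allocation described above can be sketched as a packing routine. This is an illustrative model only: lane k carries pixels k, k+4, k+8, and k+12, and the four 10-bit values of a lane are packed MSB-first into 40 bits, which reproduces the per-lane byte contents listed in the text (the DB numbering across lanes is omitted here).

```python
def pack_zaf_pixels(pixels):
    """Pack 16 10-bit ZAF pixel samples into 4 lanes of 5 bytes each.

    Lane k carries pixels k, k+4, k+8, k+12. Packing the four values
    MSB-first into 40 bits yields, for lane 0:
      byte0 = AF0[9:2]
      byte1 = AF0[1:0] | AF4[9:4]
      byte2 = AF4[3:0] | AF8[9:6]
      byte3 = AF8[5:0] | AF12[9:8]
      byte4 = AF12[7:0]
    matching the allocation described in the text."""
    assert len(pixels) == 16
    lanes = []
    for k in range(4):
        a, b, c, d = pixels[k], pixels[k + 4], pixels[k + 8], pixels[k + 12]
        bits = (a << 30) | (b << 20) | (c << 10) | d  # 40 bits, MSB-first
        lanes.append(bits.to_bytes(5, "big"))
    return lanes
```

Packing 16 samples into 4 lanes × 5 bytes = 20 bytes uses exactly 160 bits for 16 × 10 bits of pixel data, so no padding is wasted within a packet.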
<MSA>
Next, the MSA will be described with reference to
The MSA has an arrangement illustrated in
For each lane, an SS indicating the start of the MSA is arranged twice consecutively.
Next, Mvid23:16, Mvid15:8, and Mvid7:0 indicating a clock frequency of the same video stream are arranged downward by one byte. Here, Mvid is information of a clock frequency of a video stream, and Mvid23:16 is information of the 16-th to 23-rd bits of the clock frequency of the video stream. Further, Mvid15:8 is information of the 8-th to 15-th bits of the clock frequency of the video stream. Furthermore, Mvid7:0 is information of the 0-th to 7-th bits of the clock frequency of the video stream.
For a lane 0, Htotal15:8 and Htotal7:0 are arranged below Mvid by one byte. Htotal is the number of pixels in the horizontal direction obtained by adding the effective pixel region 71 to the horizontal blanking region 73 as illustrated in the upper portion of
For the lane 0, Vtotal15:8 and Vtotal7:0 are arranged below Htotal by one byte. Vtotal is the number of lines in the vertical direction obtained by adding the number of effective lines of the effective pixel region 71 to the vertical blanking region 72 as illustrated in the upper portion of
For the lane 0, HSP/HSW14:8 and HSW7:0 are arranged by one byte below Vtotal. HSP is 1-bit information indicating a polarity of Hsync (a horizontal synchronous signal), and 0 indicates an active high, and 1 indicates an active low as illustrated in the middle of
For the lane 1, Hstart15:8 and Hstart7:0 are arranged below Mvid by one byte. Hstart is the number of pixels specifying a period of time from a timing at which last data (last data of a previous line) of a previous line ends to a timing at which Hsync rises as illustrated in the lower portion of
For the lane 1, Vstart15:8 and Vstart7:0 are arranged below Hstart by one byte. Vstart is the number of lines specifying a period of time from a timing at which last Hsync (last H of a previous frame) of a previous frame rises to a timing at which Vsync (a vertical synchronous signal) rises as illustrated in the middle of
For the lane 1, VSP/VSW14:8 and VSW7:0 are arranged below Vstart by one byte. VSP is 1-bit information indicating a polarity of Vsync (a vertical synchronous signal), and 0 indicates an active high, and 1 indicates an active low as illustrated in the middle of
Meanwhile, for the lane 2, Hwidth15:8 and Hwidth7:0 are arranged below Mvid by one byte. Hwidth is the number of pixels of the effective pixel region 71 in the horizontal direction as illustrated in the upper portion of
For the lane 2, Vheight15:8 and Vheight7:0 are arranged below Hwidth by one byte. Vheight is the number of lines of the effective pixel region 71 in the vertical direction as illustrated in the upper portion of
For the lane 3, Nvid23:16, Nvid15:8, and Nvid7:0 are arranged below Mvid by one byte. Nvid is a link clock frequency. Nvid23:16, Nvid15:8, and Nvid7:0 are information of the 16-th to 23-rd bits of Nvid, information of the 8-th to 15-th bits of Nvid, and information of the 0-th to 7-th bits of Nvid, respectively.
Here, Video Stream clock [MHz] = Mvid / Nvid × Link clock [MHz].
For the lane 3, MISC0_7:0 and MISC1_7:0 are arranged below Nvid by one byte. MISC0_7:0 and MISC1_7:0 are information of an encoding format.
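The relation between Mvid, Nvid, and the clocks can be sketched as follows. The numeric values in the usage note are illustrative assumptions, not values taken from the standard.

```python
def video_stream_clock_mhz(mvid, nvid, link_clock_mhz):
    """Recover the video stream clock from the Mvid/Nvid ratio carried
    in the MSA: Video Stream clock = Mvid / Nvid x Link clock."""
    return mvid / nvid * link_clock_mhz
```

For example, with a hypothetical link clock of 270 MHz and an Mvid/Nvid pair of 1485/2700, the receiver would recover a video stream clock of 148.5 MHz; only the ratio of the two values matters, not their absolute magnitudes.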
<Encoding Format Indicated by MISC>
MISC0_7:0 and MISC1_7:0 record, for example, information of an encoding format indicated by
In other words, as illustrated in the first line of the upper portion of
As illustrated in the second row of the upper portion of
As illustrated in the third row of the upper portion of
As illustrated in the fourth row of the upper portion of
As illustrated in the fifth row of the upper portion of
As illustrated in the sixth row of the upper portion of
As illustrated in the seventh row of the upper portion of
As illustrated in the eighth row of the upper portion of
As illustrated in the ninth row of the upper portion of
As illustrated in the tenth row of the upper portion of
As illustrated in the first row of the lower portion of
As illustrated in the second row of the lower portion of
As illustrated in the third row of the lower portion of
Here, 4-th to 6-th bits of MISC1 are not set (reserved). Thus, for example, information necessary for specifying a transmission source may be added to the 4-th to 6-th bits of MISC1.
As a result, it is possible to specify a device of a transmission source of visible image data including ZAF image data, and it is possible to cause it to be recognized that the transmission source is an image sensor such as an imaging device by including information indicating that the image transmission source is an image sensor.
<Transceiving Process>
Next, a transceiving process in the transmission system of
In step S11, the MSA generating unit 41 generates the MSA including information of the number of lines per frame, the number of pixels per line, and the number of bits per pixel of the visible image data desired to be transmitted, and supplies the generated MSA to the multiplexing unit 43.
In step S12, the SDP generating unit 42 generates the SDP based on the ZAF image data. In other words, the SDP generating unit 42 generates the phase detection image information packet and the phase detection image data packet in the SDP.
In step S13, the multiplexing unit 43 multiplexes the MSA, the SDP, and the visible image data, and generates multiplexed data.
In step S14, the multiplexing unit 43 transmits the multiplexed data to the receiving unit 22.
In step S15, the transmitting unit 21 determines whether there is no next image signal and an end instruction has been given, and when no end instruction is given, the process returns to step S11, and the subsequent process is repeated. Further, when an end instruction is given in step S15, the process ends.
Meanwhile, in step S31, in the receiving unit 22, the demultiplexing unit 61 receives the multiplexed data.
In step S32, the demultiplexing unit 61 demultiplexes the multiplexed data into the MSA, the SDP, and the visible image data, and supplies the MSA, the SDP, and the visible image data to the MSA reading unit 62, the SDP reading unit 63, and the image generating unit 64, respectively.
In step S33, the MSA reading unit 62 reads the information of the number of lines per frame, the number of pixels per line, and the number of bits per pixel of the visible image data based on the information of the MSA, and supplies the read information to the image generating unit 64.
In step S34, the SDP reading unit 63 reads the phase detection image information packet and the phase detection image data packet of the SDP, extracts the ZAF image data from the phase detection image data based on the information of the phase detection image information packet, and outputs the ZAF image data.
In step S35, the image generating unit 64 reconstructs the visible image from the visible image data based on the MSA, and outputs the visible image.
In step S36, the receiving unit 22 determines whether there is no next image signal and an end instruction has been given, and when no end instruction is given, the process returns to step S31, and the subsequent process is repeated. Further, when an end instruction is given in step S36, the process ends.
Through the above process, as the SDP is used and the ZAF image data is packetized, it is possible to transmit the visible image data and to add the packetized ZAF image data to the horizontal blanking region and the vertical blanking region and transmit the resultant data.
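Steps S11 to S14 on the transmitting side and steps S31 to S32 on the receiving side can be sketched as a toy multiplex/demultiplex round trip. Here simple type tags stand in for the framing actually defined by the standard; this is a conceptual model, not the wire format.

```python
# Toy model of the multiplexing unit 43 and the demultiplexing unit 61:
# each component is tagged so the receiver can route it to the MSA
# reading unit, the SDP reading unit, or the image generating unit.

def multiplex(msa, sdp, visible):
    """Combine the MSA, the SDP, and the visible image data into one
    multiplexed sequence (transmitting side, steps S13-S14)."""
    return [("MSA", msa), ("SDP", sdp), ("VID", visible)]

def demultiplex(muxed):
    """Split the multiplexed sequence back into its three components
    (receiving side, step S32)."""
    parts = dict(muxed)
    return parts["MSA"], parts["SDP"], parts["VID"]
```

The round trip is lossless: whatever the transmitting side multiplexes, the receiving side recovers unchanged, which is the property the blanking-region packetization relies on.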
The above description has been made in connection with the example in which the ZAF image data is transmitted using the SDP together with the visible image data, but, for example, the transmission may be performed using a virtual channel format specified in the DisplayPort (trademark).
Here, a virtual channel refers to a scheme in which a plurality of streams are transmitted from a plurality of stream sources to a plurality of stream sinks through a single transmission path. In the transmission system of
More specifically, the transmission system of
The transmitting unit 121 includes stream transmission processing units 141-1 to 141-n and a stream transmission processing unit 142.
The stream transmission processing units 141-1 to 141-n each include an MSA generating unit 161, an SDP generating unit 162, and a multiplexing unit 163, generate stream data including visible image data, and output the stream data to a multiplexing unit 143. Here, the functions of the MSA generating unit 161, the SDP generating unit 162, and the multiplexing unit 163 are basically the same as the functions of the MSA generating unit 41, the SDP generating unit 42, and the multiplexing unit 43 described above with reference to
Further, the stream transmission processing unit 142 includes an MSA generating unit 181, an SDP generating unit 182, and a multiplexing unit 183, generates stream data including ZAF image data configured with ZAF pixels, and outputs the stream data to the multiplexing unit 143. Here, the functions of the MSA generating unit 181, the SDP generating unit 182, and the multiplexing unit 183 are basically the same as the functions of the MSA generating unit 41, the SDP generating unit 42, and the multiplexing unit 43 described above with reference to
The multiplexing unit 143 transmits multiplexed data obtained by time division multiplexing the stream data including the visible image data supplied from the plurality of stream transmission processing units 141-1 to 141-n and the stream data including the ZAF image data supplied from the stream transmission processing unit 142 to the receiving unit 122.
The receiving unit 122 includes a demultiplexing unit 201, stream reception processing units 202-1 to 202-n, and a stream reception processing unit 203.
The demultiplexing unit 201 demultiplexes the multiplexed data transmitted from the transmitting unit 121 into a plurality of pieces of stream data including a plurality of pieces of visible image data and stream data including ZAF image data, and supplies the demultiplexed stream data including the visible image data and the stream data including the ZAF image data to the stream reception processing units 202-1 to 202-n and the stream reception processing unit 203, respectively.
The stream reception processing unit 202-1 includes a demultiplexing unit 231, an MSA reading unit 232, an SDP reading unit 233, and an image generating unit 234, generates visible image data based on the stream data including the visible image data, and outputs the generated visible image data. Here, the demultiplexing unit 231, the MSA reading unit 232, the SDP reading unit 233, and the image generating unit 234 have basically the same functions as the demultiplexing unit 61, the MSA reading unit 62, the SDP reading unit 63, and the image generating unit 64 described above with reference to
The stream reception processing unit 203 includes a demultiplexing unit 251, an MSA reading unit 252, an SDP reading unit 253, and an image generating unit 254, generates a ZAF image based on the stream data including the ZAF image data, and outputs the generated ZAF image. Here, the demultiplexing unit 251, the MSA reading unit 252, the SDP reading unit 253, and the image generating unit 254 have basically the same functions as the demultiplexing unit 61, the MSA reading unit 62, the SDP reading unit 63, and the image generating unit 64 described above with reference to
<ZAF Image>
Next, a ZAF image generated when a virtual channel format is used will be described.
A ZAF image is an image configured with ZAF pixels. For example, a ZAF image is an image that is substantially the same as an image configured by combining ZAF pixels configuring the phase detection image data packets 83-1 to 83-15 described above with reference to
In this case, a ZAF image is configured with a ZAF pixel region 271 corresponding to the effective pixel region 71, a vertical blanking region 272 corresponding to the vertical blanking region 72, and a horizontal blanking region 273 corresponding to the horizontal blanking region 73.
In other words, the ZAF pixel region 271 of the ZAF image in the left portion of
In the transmission system of
<Time Division Multiplexing>
When a virtual channel is used, according to the DisplayPort (trademark), the transmission band can be divided into 63 time slots. For this reason, for example, when the visible image data and the ZAF image data illustrated in
In other words, when a virtual channel is used, as time division multiplexing and transmission are performed, two streams (a visible image stream and a ZAF image stream) are transmitted from two stream sources (a visible image stream source and a ZAF image stream source) to two stream sinks (a visible image stream sink and a ZAF image stream sink) through one transmission path. As a result, the ZAF image data can be transmitted together with the visible image data.
Here, in
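The time slot division described above can be sketched as follows. This is an illustrative model only: the proportional slot-allocation rule, the stream names, and the data values are assumptions for the sketch, not part of the DisplayPort (trademark) specification.

```python
# Illustrative sketch: time-division multiplexing of two streams
# (visible image and ZAF image) into the 63 payload time slots of a
# virtual channel. Slot-allocation rule and names are hypothetical.

TIME_SLOTS = 63  # payload time slots available per multiplexing period

def allocate_slots(visible_rate, zaf_rate):
    """Split the 63 slots between the two streams in proportion to
    their payload bandwidth requirements."""
    total = visible_rate + zaf_rate
    visible_slots = round(TIME_SLOTS * visible_rate / total)
    return visible_slots, TIME_SLOTS - visible_slots

def multiplex(visible_data, zaf_data, visible_slots, zaf_slots):
    """Interleave symbols of the two streams slot by slot, yielding
    (slot_index, stream_tag, payload) tuples for one period."""
    v, z = iter(visible_data), iter(zaf_data)
    schedule = ["VISIBLE"] * visible_slots + ["ZAF"] * zaf_slots
    out = []
    for slot, tag in enumerate(schedule):
        src = v if tag == "VISIBLE" else z
        out.append((slot, tag, next(src, None)))  # None = stuffing symbol
    return out
```

For example, a visible stream needing twenty times the bandwidth of the ZAF stream receives 60 of the 63 slots, and the remaining 3 carry the ZAF image data over the same transmission path.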
<Comparison of MSA between visible image and ZAF image>
Next, a comparison of an MSA between a visible image and a ZAF image will be described with reference to
As illustrated in the first row of
As illustrated in the second row of
As illustrated in the third row of
As illustrated in the fourth row of
As illustrated in the fifth row of
As illustrated in the sixth row of
As illustrated in the seventh row of
As illustrated in the eighth row of
As illustrated in the ninth row of
As illustrated in the tenth row of
As illustrated in the eleventh to thirteenth row of
As illustrated in the fourteenth row of
As illustrated in the fifteenth row of
In other words, as illustrated in
<Transceiving Process>
Next, a transceiving process when the ZAF image data is transmitted together with the visible image data in the transmission system of
In step S111, the MSA generating unit 161 of the stream transmission processing unit 141 generates the MSA for the visible image data, and outputs the MSA for the visible image data to the multiplexing unit 163.
In step S112, the multiplexing unit 163 multiplexes the supplied visible image data and the MSA for the visible image data to generate the stream data including the visible image data, and supplies the generated stream data to the multiplexing unit 143.
In step S113, the MSA generating unit 181 of the stream transmission processing unit 142 generates the MSA for the ZAF image data, and outputs the MSA for the ZAF image data to the multiplexing unit 183.
In step S114, the multiplexing unit 183 multiplexes the supplied ZAF image data and the MSA for the ZAF image data to generate the stream data including the ZAF image data, and supplies the generated stream data to the multiplexing unit 143.
In step S115, the multiplexing unit 143 performs time division multiplexing on the supplied stream data including the visible image data and the stream data including the ZAF image data according to the virtual channel format.
In step S116, the multiplexing unit 143 transmits the multiplexed data generated by the multiplexing to the receiving unit 122.
In step S117, the transmitting unit 121 determines whether there is no next image signal and an end instruction has been given. When no end instruction is given, the process returns to step S111, and the subsequent process is repeated. Further, when an end instruction is given in step S117, the process ends.
Meanwhile, in step S131, the demultiplexing unit 201 of the receiving unit 122 receives the transmitted multiplexed data.
In step S132, the demultiplexing unit 201 of the receiving unit 122 demultiplexes the received multiplexed data into the stream data including the visible image data and the stream data including the ZAF image data according to the virtual channel format, and supplies the stream data including the visible image data and the stream data including the ZAF image data to the stream reception processing units 202 and 203.
In step S133, the demultiplexing unit 231 of the stream reception processing unit 202 demultiplexes the stream data including the visible image data into the MSA for the visible image and the visible image data, and outputs the MSA for the visible image and the visible image data to the MSA reading unit 232 and the image generating unit 234, respectively.
In step S134, the MSA reading unit 232 reads the MSA, and supplies information of the read MSA to the image generating unit 234.
In step S135, the image generating unit 234 reconstructs the visible image from the visible image data based on the information of the MSA, and outputs the reconstructed visible image.
In step S136, the demultiplexing unit 251 of the stream reception processing unit 203 demultiplexes the stream data including the ZAF image data into the MSA for the ZAF image and the ZAF image data, and outputs the MSA for the ZAF image and the ZAF image data to the MSA reading unit 252 and the image generating unit 254, respectively.
In step S137, the MSA reading unit 252 reads the MSA for the ZAF image, and supplies information of the read MSA to the image generating unit 254.
In step S138, the image generating unit 254 reconstructs the ZAF image from the ZAF image data based on the information of the MSA for the ZAF image, and outputs the reconstructed ZAF image.
In step S139, the receiving unit 122 determines whether there is no next image signal and an end instruction has been given. When no end instruction is given, the process returns to step S131, and the subsequent process is repeated. Further, when an end instruction is given in step S139, the process ends.
Through the above process, the ZAF image data and the visible image data are converted into streaming data, and thus it is possible to transmit the ZAF pixel data while transmitting the effective pixel data serving as the visible image data using the virtual channel.
Meanwhile, a series of processes described above can be implemented by hardware and can be implemented by software as well. When a series of processes are implemented by software, a program configuring the software is installed in a computer incorporated in dedicated hardware, a general-purpose personal computer capable of executing various kinds of functions through various kinds of programs installed therein, or the like from a recording medium.
An input unit 1006 including an input device such as a keyboard or a mouse used when the user inputs an operation command, an output unit 1007 that outputs a processing operation screen or a processing result image to a display device, a storage unit 1008 including a hard disk drive storing a program or various kinds of data, and a communication unit 1009 that includes a local area network (LAN) adaptor or the like and performs communication processing via a network represented by the Internet are connected to the input/output interface 1005. Further, a drive 1010 that reads or writes data from or to a removable medium 1011 such as a magnetic disk (including a flexible disk), an optical disk (including a compact disc-read only memory (CD-ROM) and a digital versatile disc (DVD)), a magneto-optical disk (including a mini disc (MD)), or a semiconductor memory is connected to the input/output interface 1005.
The CPU 1001 executes various kinds of processes according to a program stored in the ROM 1002 or a program that is read from the removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, installed in the storage unit 1008, and loaded onto the RAM 1003 from the storage unit 1008. The RAM 1003 also appropriately stores data necessary when the CPU 1001 executes various kinds of processes.
In the computer having the above configuration, for example, a series of processes described above are performed such that the CPU 1001 loads the program stored in the storage unit 1008 onto the RAM 1003 through the input/output interface 1005 and the bus 1004 and executes the program.
For example, the program executed by the computer (the CPU 1001) may be recorded in the removable medium 1011 serving as a package medium and provided. Further, the program may be provided through a wired or wireless transmission medium such as a LAN, the Internet, or digital satellite broadcasting.
In the computer, as the removable medium 1011 is mounted in the drive 1010, the program can be installed in the storage unit 1008 through the input/output interface 1005. Further, the program may be received through the communication unit 1009 via a wired or wireless transmission medium and installed in the storage unit 1008. Furthermore, the program may be installed in the ROM 1002 or the storage unit 1008 in advance.
Further, the program executed by the computer may be a program in which processes are chronologically performed according to the order described in this specification or a program in which processes are performed in parallel or according to a necessary timing when called.
In addition, in this specification, a system means a set of two or more configuration elements (devices, modules (parts), or the like) regardless of whether or not all configuration elements are arranged in a single housing. Thus, both a plurality of devices that are accommodated in separate housings and connected via a network and a single device in which a plurality of modules are accommodated in a single housing are systems.
Further, an embodiment of the present technology is not limited to the above embodiments, and various changes can be made within the scope not departing from the gist of the present technology.
For example, the present technology may have a configuration of cloud computing in which a plurality of devices share and process a single function together via a network.
Further, the steps described in the above flowcharts may be executed by a single device or may be shared and executed by a plurality of devices.
Furthermore, when a plurality of processes are included in a single step, the plurality of processes included in the single step may be executed by a single device or may be shared and executed by a plurality of devices.
Further, the present technology may have the following configurations.
(1) A transmitting device that transmits visible image data including effective pixel data of an imaging device using a format for transmission to a display, the transmitting device including:
a transmitting unit that transmits phase detection image data in the imaging device in addition to the visible image data.
(2) The transmitting device according to (1), wherein the transmitting unit packetizes and transmits the phase detection image data in the imaging device using the format for the transmission to the display.
(3) The transmitting device according to (2), wherein the format for the transmission to the display is a format specified in a DisplayPort (trademark), and the transmitting unit packetizes and transmits the phase detection image data in the imaging device using a secondary data packet (SDP) specified in the DisplayPort (trademark) as the format for the transmission to the display.
(4) The transmitting device according to (3),
wherein the transmitting unit packetizes and transmits the phase detection image data in the imaging device using a phase detection image information packet and a phase detection image data packet of the SDP specified in the DisplayPort (trademark).
(5) The transmitting device according to (4),
wherein the transmitting unit arranges the phase detection image information packet in a vertical blanking region, arranges the phase detection image data packet in a horizontal blanking region, and packetizes and transmits the phase detection image data.
(6) The transmitting device according to (4) or (5), wherein the phase detection image information packet includes information of the number of lines per frame, the number of pixels per line, and the number of bits per pixel of a phase detection image configured with the phase detection image data, and the number of pixels per phase detection image data packet.
(7) The transmitting device according to any one of (4) to (6), wherein the transmitting unit packs the phase detection image data packet in a certain byte unit and transmits the packed phase detection image data packet.
(8) The transmitting device according to (1),
wherein the transmitting unit transmits the phase detection image data in the imaging device in addition to the visible image data using a scheme in which a plurality of streams are transmitted from a plurality of stream sources to a plurality of stream sinks through one transmission path in the format for the transmission to the display.
(9) The transmitting device according to (8),
wherein the format for the transmission to the display is a format specified in a DisplayPort (trademark), and
the transmitting unit transmits the phase detection image data in the imaging device in addition to the visible image data by transmitting a stream including the visible image data and a stream including the phase detection image data from the stream sources to the stream sinks through one transmission path using a virtual channel specified in the DisplayPort (trademark).
(10) The transmitting device according to (9),
wherein a main stream attributes (MSA) that is individually set for each stream of the virtual channel and is image characteristic information of the stream includes information of the number of lines per frame, the number of pixels per line, and the number of bits per pixel of a phase detection image configured with the phase detection image data when the stream is a stream including the phase detection image data.
(11) The transmitting device according to (10), wherein the MSA further includes information of Mvid (a video stream clock frequency) and Nvid (a link clock frequency), and when the number of pixels in the vertical direction and the number of pixels in the horizontal direction in the phase detection image including the phase detection image data are 1/t and 1/s of the number of pixels in the vertical direction and the number of pixels in the horizontal direction of a visible image including the visible image data, respectively, a ratio of the Mvid to the Nvid of the MSA of the phase detection image data is 1/(t×s) of a ratio of the Mvid to the Nvid of the MSA of the visible image data.
(12) The transmitting device according to (10) or (11), wherein the MSA further includes information specifying the imaging device.
(13) A transmitting method of a transmitting device that transmits visible image data including effective pixel data of an imaging device using a format for transmission to a display, the transmitting method including:
transmitting phase detection image data in the imaging device in addition to the visible image data.
(14) A non-transitory computer-readable storage medium storing program causing a computer controlling a transmitting device that transmits visible image data including effective pixel data of an imaging device using a format for transmission to a display to execute:
a process including transmitting phase detection image data in the imaging device in addition to the visible image data.
(15) A receiving device that receives visible image data including effective pixel data of an imaging device using a format for transmission to a display, the receiving device including:
a receiving unit that receives phase detection image data in the imaging device in addition to the visible image data.
(16) A receiving method of a receiving device that receives visible image data including effective pixel data of an imaging device using a format for transmission to a display, the receiving method including:
receiving phase detection image data in the imaging device in addition to the visible image data.
(17) A non-transitory computer-readable storage medium storing program causing a computer controlling a receiving device that receives visible image data including effective pixel data of an imaging device using a format for transmission to a display to execute:
a process including receiving phase detection image data in the imaging device in addition to the visible image data.
(18) A transmission system, including:
a transmitting device that transmits visible image data including effective pixel data of an imaging device using a format for transmission to a display;
and a receiving device,
wherein the transmitting device includes a transmitting unit that transmits phase detection image data in the imaging device in addition to the visible image data to the receiving device, and the receiving device includes a receiving unit that receives phase detection image data in the imaging device in addition to the visible image data from the transmitting device.
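The proportional relation stated in configuration (11) can be checked with a short numerical sketch; the reduction factors t and s and the visible-stream Mvid/Nvid ratio below are illustrative assumptions.

```python
# Numerical check of configuration (11): when the phase detection image
# has 1/t the vertical and 1/s the horizontal pixel count of the visible
# image, its Mvid/Nvid ratio is 1/(t*s) of the visible image's ratio,
# because the video stream clock scales with the pixel count per frame.
# The values of t, s, and visible_ratio are illustrative assumptions.

t, s = 4, 2                 # vertical and horizontal reduction factors
visible_ratio = 0.5         # example Mvid/Nvid for the visible stream

zaf_ratio = visible_ratio / (t * s)

# Scaling the ZAF stream's ratio back by the area factor t*s recovers
# the visible stream's ratio.
assert zaf_ratio * (t * s) == visible_ratio
```

With these example values, the phase detection image carries 1/8 of the pixels per frame, so its Mvid/Nvid ratio is 1/8 of the visible stream's.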
Number | Date | Country | Kind |
---|---|---|---|
2014-064702 | Mar 2014 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2015/001404 | 3/13/2015 | WO | 00 |