The present disclosure relates to a projection control apparatus and a projection control method for controlling a plurality of projection apparatuses.
A projection method (stack projection) in which the projection positions of a plurality of projectors (projection apparatuses) are overlapped (stacked) is known. Since stack projection requires projection alignment between the projectors, projectors having a function that facilitates this alignment are also known.
A distortion (trapezoidal distortion) occurs in the shape of a projected image, except in a case of performing projection from a directly-facing position where an optical axis of a projector and a projection surface for the projector are orthogonal to each other. As a function for correcting the trapezoidal distortion without changing the position of the projector, a keystone correction function is known. The keystone correction function can be implemented by deforming an image on a liquid crystal panel so as to compensate for the trapezoidal distortion.
A technique for overlapping projection positions of a plurality of projectors by applying the keystone correction function is known.
In the keystone correction, a deformable range of a projected image is limited in some cases due to constraints of a hardware or software configuration. In a technique discussed in Japanese Patent Application Laid-Open No. 2009-200557, an image indicating a deformable range for keystone correction is superimposed on a projected image, to thereby provide a user with information indicating how to correct a trapezoidal distortion.
To implement the stack projection, the keystone correction function is often used for each of a plurality of projectors. The optical axes of the plurality of projectors are not parallel to each other in many cases. Accordingly, different amounts of deformation for keystone correction are set to the plurality of projectors. In addition, it is necessary to set target projection positions (e.g., screen corners) so as to fall within respective deformable ranges of all projectors used for stack projection.
Thus, if a deformable range common to the plurality of projectors is unknown, it is extremely troublesome for a user to perform the alignment.
However, in the technique discussed in Japanese Patent Application Laid-Open No. 2009-200557, the case of using a plurality of projectors is not taken into consideration.
The present disclosure is directed to a projection control apparatus capable of facilitating alignment of images projected by a plurality of projection apparatuses.
According to an aspect of the present disclosure, a projection control apparatus that controls a plurality of projection apparatuses including a first projection apparatus configured to project a first projected image and a second projection apparatus configured to project a second projected image includes an acquisition unit configured to acquire a common area in which the first projected image and the second projected image are deformable, and a control unit configured to cause at least one of the plurality of projection apparatuses to project an image representing the common area.
Further features of the present disclosure will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Exemplary embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. The present disclosure is not limited to the following exemplary embodiments. Not all components described in the exemplary embodiments are essential for the present disclosure. Functional blocks described in the exemplary embodiments can be implemented by hardware components, software components, or a combination thereof. One functional block may be implemented by a plurality of hardware components. A plurality of functional blocks may be implemented by one hardware component. One or more functional blocks may be implemented in such a manner that at least one programmable processor such as a central processing unit (CPU) or a micro processing unit (MPU) executes a computer program loaded into at least one memory. If one or more functional blocks are implemented by hardware, the functional blocks can be implemented by a discrete circuit or an integrated circuit such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC).
The following exemplary embodiments illustrate a configuration in which the present disclosure is applied to a stand-alone projection apparatus (projector). However, for example, the present disclosure can also be applied to a projector incorporated in a general electronic apparatus, such as a personal computer (PC), a smartphone, a tablet terminal, a game console, or a digital (video) camera.
The following exemplary embodiments may be described with reference to figures illustrating a graphical user interface (GUI), but the GUIs are illustrated by way of example only. Omissions, replacements, or modifications can be made to the layout of each GUI, the type of each component, screen transitions, and the like without departing from the gist of the present disclosure.
All projectors included in the projection system 10 are communicably connected to a PC 200 that functions as a projection control apparatus. The present exemplary embodiment illustrates an example in which a PC is used as the projection control apparatus, but instead other information processing apparatuses, such as a smartphone and a tablet, may be used. Communications between the projection control apparatus and a plurality of projectors may be established by a wired communication or a wireless communication, and a communication protocol is not particularly limited. The present exemplary embodiment illustrates an example in which communication is established between apparatuses via a local area network (LAN) using Transmission Control Protocol/Internet Protocol (TCP/IP) as a communication protocol.
The PC 200 transmits a predetermined command to each of the projectors 100a and 100b, thereby making it possible to control operations of the projectors 100a and 100b. Each of the projectors 100a and 100b performs an operation based on the command received from the PC 200, and transmits the operation result to the PC 200.
The projection system 10 further includes an image capturing apparatus 300 as an image capturing unit. The image capturing apparatus 300 is, for example, a digital camera, a web camera, or a network camera. Alternatively, an image capturing apparatus incorporated in the projector 100 or the PC 200 may be used. Assume that the image capturing apparatus 300 is installed so that its image capturing range includes the entire projection surface. When an image capturing apparatus located outside of the PC 200 is used, the image capturing apparatus is communicably connected to the PC 200 directly or via a LAN. The PC 200 transmits a predetermined command to the image capturing apparatus 300, thereby making it possible to control the operation of the image capturing apparatus 300. For example, the image capturing apparatus 300 can capture an image in response to a request from the PC 200, and can transmit data on the captured image to the PC 200.
The CPU 101 is an example of a programmable processor, and implements the operation of the projector 100 by, for example, loading a program stored in the ROM 103 into the RAM 102 and executing the program.
The RAM 102 is used as a work memory for the CPU 101 to execute programs. The RAM 102 stores programs, variables, and the like used for executing the programs. The RAM 102 may also be used for other applications such as a data buffer.
The ROM 103 may be rewritable. The ROM 103 stores programs to be executed by the CPU 101, GUI data used for displaying a menu screen, various setting values, and the like.
The projection unit 104 includes a light source and a projection optical system including a lens, and projects an optical image based on an image for projection supplied from the projection control unit 105. In the present exemplary embodiment, a liquid crystal panel is used as an optical modulation element; an optical image is generated by controlling the reflectance or transmittance of light from the light source based on the image for projection, and the generated optical image is projected onto the projection surface by the projection optical system.
The projection control unit 105 supplies the projection unit 104 with the projection image data supplied from the image processing unit 109.
The VRAM 106 is a video memory that stores the projection image data received from an external apparatus such as a PC or a media player.
The operation unit 107 is an acceptance unit that includes input devices such as a key button, a switch, and a touch panel, and accepts instructions from a user to the projector 100. The CPU 101 monitors the operation of the operation unit 107. Upon detecting an operation of the operation unit 107, the CPU 101 executes processing based on the detected operation. When the projector 100 is provided with a remote controller, the operation unit 107 notifies the CPU 101 of an operation signal received from the remote controller.
The network IF 108 is an interface for connecting the projector 100 to a communication network, and has a configuration that is compliant with the standards of supported communication networks. In the present exemplary embodiment, the projector 100 is connected to a local network common to the PC 200 via the network IF 108. Accordingly, communications between the projector 100 and the PC 200 are executed via the network IF 108.
The image processing unit 109 applies, as needed, various types of image processing to the video signals that are supplied to the video input unit 110 and stored in the VRAM 106, and supplies the processed video signals to the projection control unit 105. The image processing unit 109 may be, for example, a microprocessor for image processing. Alternatively, the function corresponding to the image processing unit 109 may be implemented by the CPU 101 executing a program stored in the ROM 103.
Examples of the image processing that can be applied by the image processing unit 109 include frame thinning processing, frame interpolation processing, resolution conversion processing, processing of superimposing an on-screen display (OSD) such as a menu screen, keystone correction processing, and edge blending processing. However, the image processing is not limited to these examples.
The video input unit 110 is an interface for directly or indirectly receiving video signals output from an external apparatus, which is the PC 200 in the present exemplary embodiment, and has a configuration that corresponds to the supported video signals. The video input unit 110 includes, for example, at least one of a composite terminal, an S-video terminal, a D-terminal, a component terminal, an analog red, green, and blue (RGB) terminal, a Digital Visual Interface-Integrated (DVI-I) terminal, a Digital Visual Interface Digital (DVI-D) terminal, and a High-Definition Multimedia Interface (HDMI®) terminal. Upon receiving an analog video signal, the video input unit 110 converts the analog video signal into a digital video signal, and stores the digital video signal in the VRAM 106.
Next, the functional configuration of the PC 200 will be described. The PC 200 may be a general-purpose computer to which an external display can be connected, and thus has a functional configuration of the general-purpose computer. The PC 200 includes a CPU 201, a RAM 202, a ROM 203, an operation unit 204, a display unit 205, a network IF 206, a video output unit 207, and a communication unit 208. These functional blocks are communicably connected to each other via an internal bus 209.
The CPU 201 is an example of a programmable processor, and implements the operation of the PC 200 by, for example, loading programs such as an operating system (OS) and an application program into the RAM 202, and executing the loaded programs.
The RAM 202 is used as a work memory for the CPU 201 to execute programs. The RAM 202 stores programs, variables, and the like used for executing the programs. The RAM 202 may also be used for other applications such as a data buffer.
The ROM 203 may be rewritable. The ROM 203 stores programs to be executed by the CPU 201, GUI data used for displaying a menu screen, various setting values, and the like. The PC 200 may include a storage device (a hard disk drive (HDD) or a solid state drive (SSD)) having a capacity larger than that of the ROM 203. In this case, programs such as an OS and an application program may be stored in such a storage device.
The operation unit 204 includes input devices such as a keyboard, a pointing device (e.g., a mouse), a touch panel, and a switch, and accepts an instruction from the user to the PC 200. The keyboard may be a software keyboard. The CPU 201 monitors the operation of the operation unit 204. Upon detecting the operation of the operation unit 204, the CPU 201 executes processing based on the detected operation.
The display unit 205 is, for example, a liquid crystal panel or an organic electroluminescent (EL) panel. The display unit 205 displays a screen provided by an OS or an application program. The display unit 205 may be an external apparatus, or may be a touch display.
The network IF 206 is an interface for connecting the PC 200 to a communication network, and has a configuration that is compliant with the standards of communication networks. In the present exemplary embodiment, the PC 200 is connected to a local network common to the projector 100 through the network IF 206. Accordingly, communications between the PC 200 and the projector 100 are executed through the network IF 206.
The video output unit 207 is an interface for transmitting a video signal to an external apparatus, which is the projector 100 or the image capturing apparatus 300 in the present exemplary embodiment, and has a configuration that corresponds to the supported video signals. The video output unit 207 includes, for example, at least one of a composite terminal, an S-video terminal, a D-terminal, a component terminal, an analog RGB terminal, a DVI-I terminal, a DVI-D terminal, and an HDMI® terminal.
In the present exemplary embodiment, assume that a UI screen for a projection control application program including a function for adjusting the projection area of the projector 100 is displayed on the display unit 205, but instead the UI screen may be displayed on an external apparatus connected to the video output unit 207.
The communication unit 208 is a communication interface for performing, for example, serial communications with an external apparatus. A typical example of the communication unit 208 is a universal serial bus (USB) interface. The communication unit 208 may have a configuration that is compliant with other standards such as Recommended Standard (RS)-232C. In the present exemplary embodiment, assume that the image capturing apparatus 300 is connected to the communication unit 208. However, the method for establishing communication between the image capturing apparatus 300 and the PC 200 is not particularly limited and the communication can be established based on any standards supported by both the image capturing apparatus 300 and the PC 200.
Next, the keystone correction will be described.
For example, assuming that a point on the original image is represented by coordinates (xs, ys), the corresponding coordinates (xd, yd) of the deformed image obtained after projective transformation are represented by the following Expression (1).
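(Expression (1) is rendered as a drawing in the source and is not reproduced here; a standard reconstruction consistent with the surrounding description, namely a 3×3 projective transformation applied with vertex offsets, is:)

$$
\begin{pmatrix} x_d - x_{do} \\ y_d - y_{do} \\ 1 \end{pmatrix}
\sim M
\begin{pmatrix} x_s - x_{so} \\ y_s - y_{so} \\ 1 \end{pmatrix}
\tag{1}
$$

where "∼" denotes equality up to the scale factor inherent in homogeneous coordinates.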
In Expression (1), M represents a 3×3 matrix, which is a projective transformation matrix from the original image to the deformed image. In Expression (1), xso and yso represent the coordinates of the upper left vertex of the original image, and xdo and ydo represent the coordinates of the upper left vertex of the deformed image.
The CPU 101 provides the image processing unit 109 with the matrix M in Expression (1) and an inverse matrix M−1 of the matrix M, together with offset values (xso, yso) and (xdo, ydo), as parameters for the keystone correction. The image processing unit 109 can obtain the coordinates (xs, ys) of the original image corresponding to the coordinates (xd, yd) obtained after the keystone correction based on the following Expression (2).
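(Expression (2), likewise not reproduced in the source, presumably takes the corresponding inverse form:)

$$
\begin{pmatrix} x_s - x_{so} \\ y_s - y_{so} \\ 1 \end{pmatrix}
\sim M^{-1}
\begin{pmatrix} x_d - x_{do} \\ y_d - y_{do} \\ 1 \end{pmatrix}
\tag{2}
$$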
If both of the coordinates (xs, ys) of the original image obtained by Expression (2) are integers, the image processing unit 109 can use the pixel value at the coordinates (xs, ys) of the original image as the pixel value at the coordinates (xd, yd) of the image obtained after the keystone correction. On the other hand, if the coordinates of the original image obtained by Expression (2) are not integers, the image processing unit 109 can obtain the pixel value corresponding to the coordinates (xs, ys) of the original image by interpolation calculation using the values of a plurality of peripheral pixels. The interpolation calculation can be performed using, for example, any one of known interpolation methods such as bilinear interpolation and bicubic interpolation. If the coordinates of the original image obtained by Expression (2) fall outside the original image, the image processing unit 109 sets the pixel value at the coordinates (xd, yd) of the image obtained after the keystone correction to black (0) or a background color set by the user. In this manner, the image processing unit 109 can obtain pixel values for all coordinates of the image obtained after the keystone correction, and can create a converted image.
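A minimal Python sketch of this inverse-mapping procedure for a grayscale image, using bilinear interpolation, is shown below. It is illustrative only, not the disclosed implementation of the image processing unit 109, and all function and parameter names are hypothetical.

```python
import numpy as np

def keystone_warp(src, M_inv, src_off, dst_off, dst_shape, bg=0.0):
    """Inverse-mapping keystone deformation (illustrative sketch).

    src       : HxW grayscale source image as a float array
    M_inv     : 3x3 inverse projective transformation matrix (Expression (2))
    src_off   : (xso, yso) offset of the original image's upper left vertex
    dst_off   : (xdo, ydo) offset of the deformed image's upper left vertex
    dst_shape : (height, width) of the deformed image
    bg        : value for pixels that map outside the original image
    """
    h_d, w_d = dst_shape
    out = np.full((h_d, w_d), bg, dtype=src.dtype)
    for yd in range(h_d):
        for xd in range(w_d):
            # Map the destination pixel back into source coordinates.
            v = M_inv @ np.array([xd - dst_off[0], yd - dst_off[1], 1.0])
            xs = v[0] / v[2] + src_off[0]
            ys = v[1] / v[2] + src_off[1]
            x0, y0 = int(np.floor(xs)), int(np.floor(ys))
            if 0 <= x0 < src.shape[1] - 1 and 0 <= y0 < src.shape[0] - 1:
                # Bilinear interpolation over the four neighboring pixels
                # (border pixels are treated as background for simplicity).
                fx, fy = xs - x0, ys - y0
                out[yd, xd] = (src[y0, x0] * (1 - fx) * (1 - fy)
                               + src[y0, x0 + 1] * fx * (1 - fy)
                               + src[y0 + 1, x0] * (1 - fx) * fy
                               + src[y0 + 1, x0 + 1] * fx * fy)
    return out
```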
In this case, both the matrix M and the inverse matrix M−1 of the matrix M are supplied to the image processing unit 109 from the CPU 101 of each projector 100, but only one of the matrix M and the inverse matrix M−1 may be supplied thereto and the other one of the matrix M and the inverse matrix M−1 may be obtained by the image processing unit 109.
The coordinates of each vertex of the image obtained after the keystone correction can be acquired by having the user input, through the operation unit 107, a movement amount for projecting each vertex of the projected image at a desired position, for example. In this case, to support the input of the movement amount, the CPU 201 may cause the projector 100 to project a test pattern by using functions of the projection control application program.
In the keystone correction, a deformable range of a projected image is limited in some cases due to constraints of hardware or software configuration.
The maximum deformable amounts Δx_max and Δy_max may vary depending on the hardware or software configuration of each projector. Like in Japanese Patent Application Laid-Open No. 2009-200557, Δx_max and Δy_max may be set so as to make each deformable area similar to a maximum pixel area of the panel. For example, Δx_max and Δy_max may be set to 500 pixels so that each deformable area has a square shape.
In step S501, the CPU 201 of the PC 200 selects a plurality of projectors to be subjected to automatic alignment processing from among the projectors 100 with which the PC 200 can communicate.
If the CPU 201 detects that a "search" button 601 is pressed by the user, the CPU 201 broadcasts, on the network through the network IF 206, a predetermined command for requesting information such as a projector name and an IP address. The information requested in this case is not limited to the projector name and the IP address. For example, a keystone deformable amount or the like may also be requested in advance.
Upon receiving a command through the network IF 108, the CPU 101 of each projector 100 connected to the network transmits, to the PC 200, data including information indicating the projector name and the IP address of the projector 100. The CPU 201 of the PC 200 receives data transmitted in response to the command, extracts information included in the data, and displays the information on a list view 602. The order of projectors to be displayed on the list view 602 may be the detected order of projectors. Alternatively, the projectors may be sorted based on a specific rule.
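The command format used between the PC 200 and the projectors 100 is not specified in this disclosure. The following Python sketch merely illustrates the search exchange of step S501 under the assumption of a hypothetical JSON-over-UDP broadcast; the port number and message fields are placeholders.

```python
import json
import socket

DISCOVERY_PORT = 9999  # hypothetical port; not specified in the disclosure

def search_projectors(timeout=2.0):
    """Broadcast a discovery request and collect projector name/IP replies."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(b'{"cmd": "get_info"}', ("255.255.255.255", DISCOVERY_PORT))
    found = []
    try:
        while True:
            data, (ip, _port) = sock.recvfrom(4096)
            info = json.loads(data)  # e.g. {"name": "Projector1"}
            found.append({"name": info.get("name"), "ip": ip})
    except socket.timeout:
        pass  # no more replies within the timeout
    return found
```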
The user selects projectors to be subjected to alignment processing by, for example, checking checkboxes 603 to 606. The screen 600 illustrates an example of a case where four projectors 100 are connected to the PC 200.
When the projector 100a (Projector1) and the projector 100b (Projector2) are used as projectors to be subjected to alignment processing, the checkboxes 604 and 606 may be checked.
Information, such as the projector name and the IP address, about each projector whose checkbox is checked is stored in the RAM 202 of the PC 200.
If an operation on a "test pattern ON" button 607 is detected, the CPU 201 transmits a command for projecting a test pattern to each of the selected projectors.
If an operation on a "test pattern OFF" button 608 is detected, the CPU 201 transmits a command for stopping the projection of the test pattern to each of the selected projectors.
If an operation on a "next" button 609 is detected, the processing proceeds to the next step.
In step S503, the CPU 201 displays a screen for supporting the installation adjustment of the image capturing apparatus 300.
An image area 703 displays the image captured by the image capturing apparatus 300.
The user observes the image area 703, thereby making it possible to easily perform the installation and zoom adjustment of the camera so that the entire projected image (test pattern) of each projector falls within the image capturing range.
If an operation on a "back" button 705 is detected, the processing returns to the previous screen.
The screen also includes a checkbox 704.
If an operation on the "next" button 706 is detected, the processing proceeds to step S504.
In step S504, the CPU 201 displays a screen for setting image capturing parameters of the image capturing apparatus 300.
Dropdown lists 801, 802, and 803 allow the user to designate image capturing parameters of the image capturing apparatus 300.
If an operation on a "test image capturing" button 804 is detected, the CPU 201 causes the image capturing apparatus 300 to capture a test image with the designated settings, and displays the captured image.
If an operation on a "back" button 806 is detected, the processing returns to the previous screen.
If an operation on a "next" button 807 is detected, the processing proceeds to step S505.
In step S505, the CPU 201 displays a screen for selecting an alignment mode.
The alignment mode of “4-point designation adjustment” is a mode in which the keystone correction amount is automatically determined in such a manner that the vertices of each projection area are aligned with four predetermined points, respectively. The alignment mode of “4-point designation adjustment” is effective for, for example, a case where a projection target position is clear, such as a case where a screen with a frame is set as a projection surface. The number of points for which coordinates can be adjusted may be less than four, or five or more points including coordinates other than vertices may be set.
The alignment mode of “adjustment based on reference projector” is a mode in which the keystone correction amount is automatically determined in such a manner that one projector is set as a reference projector and the projection area of another projector is aligned with the projection area of the reference projector. The automatic alignment in this mode is executed when the position of the projection area of the reference projector is adjusted to a designated position. The keystone correction amount for aligning the projection area of a projector other than the reference projector with the projection area of the reference projector is automatically determined. This function is effective when the projection target position is not clear (e.g., in the case of projecting an image onto a wall surface), unlike in the alignment mode of “4-point designation adjustment”.
A dropdown list 903 allows the user to select the projector to be used as the reference projector.
If an operation on a "check reference projector" button 904 is detected, the CPU 201 causes the projector selected as the reference projector to project an image for identification, so that the user can confirm which projector is the reference projector.
If an operation on a "back" button 905 is detected, the processing returns to the previous screen.
If an operation on a "next" button 906 is detected, the processing proceeds to the next step.
A method for calculating the projective transformation matrix will be described. Coordinates (x, y) on one coordinate plane are related to the corresponding coordinates (X, Y) on the other coordinate plane by the following Expression 3 and Expression 4.
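(The expressions are rendered as drawings in the source; a standard reconstruction consistent with the constants described below is:)

$$X = \frac{ax + by + c}{gx + hy + 1} \tag{3}$$

$$Y = \frac{dx + ey + f}{gx + hy + 1} \tag{4}$$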
where “a” to “h” each represent a predetermined constant.
The above-described Expression 3 and Expression 4 are transformed to obtain the following Expression 5 which is expressed using a matrix.
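(Again the expression itself is not reproduced in the source; the customary matrix form, consistent with the description of Expression (5) below, is:)

$$
\begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}
\sim
\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
= M \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
\tag{5}
$$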
In Expression (5), M represents a projective transformation matrix. This projective transformation matrix can be calculated by substituting four sets of corresponding points (x1, y1, X1, Y1), (x2, y2, X2, Y2), (x3, y3, X3, Y3), and (x4, y4, X4, Y4) into Expression 3 and Expression 4. In other words, if correspondences between at least four coordinates on both the camera coordinate plane and the projector coordinate plane are known, the projective transformation matrix can be calculated.
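As a concrete illustration of this calculation, the following Python sketch (function names are hypothetical) solves the eight linear equations obtained by substituting four corresponding points into Expression 3 and Expression 4. A library routine such as OpenCV's cv2.getPerspectiveTransform performs the equivalent computation.

```python
import numpy as np

def homography_from_points(src_pts, dst_pts):
    """Solve Expressions (3) and (4) for the constants a..h, given four
    corresponding points, and return the 3x3 matrix M of Expression (5).

    src_pts: four (x, y) points on one coordinate plane.
    dst_pts: the corresponding four (X, Y) points on the other plane.
    """
    A, rhs = [], []
    for (x, y), (X, Y) in zip(src_pts, dst_pts):
        # X*(g*x + h*y + 1) = a*x + b*y + c, rearranged to be linear in a..h
        # (and likewise for Y).
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y])
        rhs.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y])
        rhs.append(Y)
    a, b, c, d, e, f, g, h = np.linalg.solve(np.array(A, float),
                                             np.array(rhs, float))
    return np.array([[a, b, c], [d, e, f], [g, h, 1.0]])
```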
For example, if each projector projects a quadrangular shape, the coordinates of its four vertices on the projector coordinate plane are known, and the coordinates of the corresponding vertices on the camera coordinate plane can be detected from an image of the projection surface captured by the image capturing apparatus 300, which yields four sets of corresponding points.
Although the projective transformation matrix can be calculated by the above-described method, the image to be projected by each projector when the image capturing apparatus 300 captures an image is not limited to a quadrangular shape. Any image can be used as long as the correspondences between at least four coordinates on both the camera coordinate plane and the projector coordinate plane can be obtained.
In step S1005, the CPU 201 switches the method for calculating a deformation parameter depending on the information about the alignment mode designated by the user, which is stored in the RAM 202. In the present exemplary embodiment, the description of the deformation in the mode of "adjustment based on reference projector" in step S1009 is omitted.
In step S1005, if the alignment mode of "4-point designation adjustment" is designated, the processing proceeds to step S1006.
The "projection geometry designation processing" in step S1006 will be described with reference to a detailed flowchart.
Next, in step S1203, the CPU 201 of the PC 200 acquires a keystone deformable area on the projector coordinate plane, and performs projective transformation of the keystone deformable area, thereby acquiring the deformable area on the camera coordinate plane.
Coordinates of the upper left deformable area 1301 are given below by way of example.
Coordinates of the upper left vertex of the upper left deformable area 1301=(0, 0),
Coordinates of the upper right vertex of the upper left deformable area 1301=(Δx_max, 0),
Coordinates of the lower right vertex of the upper left deformable area 1301=(Δx_max, Δy_max), and
Coordinates of the lower left vertex of the upper left deformable area 1301=(0, Δy_max).
The CPU 201 of the PC 200 obtains the coordinates of the respective deformable areas for all the projectors 100 to be subjected to alignment processing. Next, the CPU 201 of the PC 200 reads the projective transformation matrix acquired in step S1004, and transforms the coordinates of each deformable area onto the camera coordinate plane.
In step S1204, the CPU 201 of the PC 200 acquires a deformable area common to the plurality of projectors on the camera coordinate plane. The common deformable area is calculated by applying a known mathematical technique to the coordinates of the vertices of the polygon representing the deformable area of each projector. Hatched areas 1351 to 1354 represent examples of the common deformable areas obtained in this manner.
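By way of illustration, one known mathematical technique that fits this step is convex polygon clipping. The following Python sketch (hypothetical function names; a minimal model, not the disclosed implementation) intersects the convex deformable areas, which would be applied to each corner area in turn:

```python
def clip_polygon(subject, clip):
    """Clip a polygon against a convex polygon (Sutherland-Hodgman).

    subject, clip: lists of (x, y) vertices in counterclockwise order.
    Returns the vertices of the intersection polygon.
    """
    def inside(p, a, b):
        # True if p lies on the left of (or on) the directed edge a->b.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p, q, a, b):
        # Intersection of segment p-q with the infinite line through a-b
        # (parallel edges are not handled in this sketch).
        den = (p[0] - q[0]) * (a[1] - b[1]) - (p[1] - q[1]) * (a[0] - b[0])
        t = ((p[0] - a[0]) * (a[1] - b[1]) - (p[1] - a[1]) * (a[0] - b[0])) / den
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

    output = subject
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        inputs, output = output, []
        for j in range(len(inputs)):
            p, q = inputs[j], inputs[(j + 1) % len(inputs)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
    return output

def common_area(areas):
    """Intersect the deformable areas of all projectors one by one."""
    result = areas[0]
    for area in areas[1:]:
        result = clip_polygon(result, area)
    return result
```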
Next, in step S1205, the CPU 201 of the PC 200 selects one of the projectors to be subjected to alignment processing, and projects the common deformable areas obtained in step S1204 on the coordinate plane of the selected projector.
In step S1206, the CPU 201 of the PC 200 generates a marker representing each of the common deformable areas, and causes the projector selected in step S1205 to project the marker.
In this case, the CPU 201 of the PC 200 transmits a command through the network IF 206 so as to bring the projectors other than the projector that projects the common deformable areas into a non-projection state. As a result, it is possible to prevent the other projectors from disturbing the projection of the common deformable areas onto the projection surface.
In step S1207, the CPU 201 of the PC 200 generates a marker for designating the deformed shape, and causes the projector selected in step S1205 to further project the generated marker. Examples of the marker for designating the deformed shape are illustrated in the drawings.
As the marker representing the common deformable area and the marker for designating the deformed shape, images generated by the CPU 201 of the PC 200 may be transmitted through the network IF 206 or the video output unit 207. Alternatively, the CPU 201 of the PC 200 may transmit a command for causing the projector 100 to render any line or polygon through the network IF 206, and the CPU 101 of the projector 100 may render a marker image based on the result of interpreting the received command.
After the processing in step S1207 is completed, in step S1208, the CPU 201 of the PC 200 displays a GUI screen 1500 for designating a deformed shape.
An image area 1517 displays the image captured by the image capturing apparatus 300, allowing the user to check the positions of the projected markers.
If an operation on the operation buttons 1501 to 1516 is detected, in step S1210, the CPU 201 of the PC 200 updates the position coordinates of the marker for designating the deformed shape on the camera coordinate plane stored in the RAM 202 of the PC 200. Further, in step S1211, the CPU 201 regenerates a marker image at the updated position coordinates, and causes the projector selected in step S1205 to project the regenerated marker image again.
The operation buttons 1501 to 1516 can be operated (NO in step S1209) until the user operates an "execute" button 1519.
In step S1007, the CPU 201 of the PC 200 acquires the keystone correction amount for forming one superimposed image in which the projected images of the plurality of projectors are completely superimposed on the projection surface. The CPU 201 calculates the keystone correction amount for each projector by transforming the coordinates of the deformed shape on the camera coordinate plane, which are stored in the RAM 202 of the PC 200, onto the coordinate plane of each projector using the projective transformation matrices acquired in step S1004.
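This coordinate transformation is the same homogeneous mapping as in Expression (5); a minimal Python sketch (hypothetical names) is:

```python
import numpy as np

def transform_points(M, points):
    """Transform 2D points with a 3x3 projective transformation matrix."""
    out = []
    for x, y in points:
        v = M @ np.array([x, y, 1.0])
        out.append((v[0] / v[2], v[1] / v[2]))  # divide by the scale factor
    return out

# Hypothetical usage: map the four designated corners from the camera
# coordinate plane onto each projector's coordinate plane.
# for M_cam_to_pj in matrices:  # one matrix per projector (step S1004)
#     corners_pj = transform_points(M_cam_to_pj, corners_cam)
```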
In step S1008, the CPU 201 of the PC 200 transmits, to each projector 100, the keystone correction amount calculated in step S1007 through the network IF 206. Upon receiving the keystone correction amount from the PC 200, the projector 100 transmits the received keystone correction amount to the image processing unit 109, and executes correction of the shape of the input image.
According to the present exemplary embodiment, in the case of performing keystone correction on the plurality of projectors, the deformable range common to all projectors can be visualized, which leads to an improvement in user-friendliness.
The deformable range common to the projectors can be presented to the user (e.g., as the projected markers described above).
While the first exemplary embodiment illustrates an example in which the deformable area common to the projectors is acquired using a mathematical method, the acquisition method is not limited to this method. A second exemplary embodiment illustrates a method for acquiring the deformable area common to the projectors by using image processing.
Components in the second exemplary embodiment are similar to those in the first exemplary embodiment, except for the method for acquiring the deformable area common to the projectors, and thus the descriptions of components other than the acquisition method are omitted.
First, the CPU 201 of the PC 200 repeatedly performs steps S1602 and S1603 to generate a marker image based on the deformable amount. More specifically, in step S1602, the CPU 201 acquires the deformable amount from the projector 100 that has not completed processing, and in step S1603, the CPU 201 generates a marker image based on the deformable amount. If the processing on all projectors is not completed (NO in step S1601), the CPU 201 executes steps S1602 and S1603. If the processing on all projectors is completed (YES in step S1601), the processing proceeds to step S1604. In step S1604, each projector is caused to project and display the marker image based on the deformable amount.
By modifying the marker image as described above, when all the projectors simultaneously project the marker images, the projection surface and the captured image of the projection surface have the following feature: the area in which the marker images of all the projectors overlap differs in hue and luminance from the areas in which only some of the marker images overlap.
By using the above-described features, the deformable area common to the projectors on the camera coordinate plane can be acquired. In a case of using color information, the common deformable area can be obtained by applying processing of determining whether the hue in each pixel is close to a predetermined hue to all pixels of the captured image. Further, in a case of using luminance information, the common deformable area can be obtained by applying processing of determining whether the luminance in each pixel is greater than or equal to a predetermined threshold to all pixels of the captured image. The processing using the hue or luminance is merely an example, and other pixel information such as brightness or color saturation may also be used.
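As an illustration of this determination, the following Python sketch uses OpenCV to build a mask of the common deformable area from the captured image. The function name and the hue and luminance thresholds are hypothetical values for this sketch, and hue wrap-around near red is ignored.

```python
import cv2

def common_area_mask(captured_bgr, marker_hue, hue_tol=10, min_luma=128):
    """Mask of pixels whose hue is close to the predetermined hue AND whose
    luminance is at least the threshold (both criteria from the text above).
    OpenCV represents hue in the range 0..179.
    """
    hsv = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2HSV)
    h, _s, _v = cv2.split(hsv)
    hue_mask = cv2.inRange(h, max(marker_hue - hue_tol, 0),
                           min(marker_hue + hue_tol, 179))
    luma = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2GRAY)
    luma_mask = cv2.inRange(luma, min_luma, 255)
    return cv2.bitwise_and(hue_mask, luma_mask)
```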
In step S1605, the CPU 201 of the PC 200 causes the image capturing apparatus 300 to capture an image of the projection surface on which the marker images of all projectors are simultaneously projected and displayed as described above, and in step S1606, the CPU 201 calculates the deformable area common to all projectors on the camera coordinate plane based on the captured image.
Processing from step S1607 to step S1613 is similar to the corresponding processing in the first exemplary embodiment, and thus the description thereof is omitted.
According to the present exemplary embodiment, like in the first exemplary embodiment, when keystone correction is performed on the plurality of projectors, the deformable amount common to all projectors can be visualized, which leads to an improvement in user-friendliness.
In a third exemplary embodiment, a case is described where the rendering of the marker images described in the "projection geometry designation processing" is performed by the projectors 100.
In this processing, first, the CPU 201 of the PC 200 transmits a command for causing the projector 100 to render any line or polygon through the network IF 206. Further, the CPU 101 of the projector 100 interprets the command received via the network IF 108, and implements the processing by storing, in the VRAM 106, the line or polygon based on the content of the command.
In the first exemplary embodiment, the projector selected in step S1205 is caused to project the marker representing the common deformable area in step S1206, and to project the marker for designating the deformed shape in step S1207.
First, differences between the markers 1401 to 1404 representing the common deformable areas and the markers for designating the deformed shape will be described. The markers representing the common deformable areas do not need to be updated sequentially once projected, whereas the markers for designating the deformed shape must be updated with a high response speed in accordance with user operations.
When the markers with these different requirements are rendered by the CPU of a single projector, a long processing time is spent rendering the markers for which sequential updating is not required. Thus, there is a possibility that the markers that are required to have a high response speed cannot be rendered with a high response speed. A method for solving this problem will be described below.
In step S1801, the CPU 201 of the PC 200 transmits a command for acquiring a marker rendering capability to each projector through the network IF 206. The projector 100, which has received the command, returns information indicating the marker rendering capability to the PC 200. This information is, for example, an operating frequency of the CPU 101 of the projector 100. Upon acquiring the information of the marker rendering capability from all projectors, the CPU 201 of the PC 200 determines a projector (hereinafter, referred to as PJ-A) with the highest rendering capability from among the projectors.
In step S1802, the CPU 201 of the PC 200 selects a projector (hereinafter, referred to as PJ-B) other than the PJ-A determined in step S1801, and projects the common deformable area obtained in step S1204 onto the coordinate plane of the projector.
In step S1803, the CPU 201 of the PC 200 generates markers representing the common deformable area, and causes the PJ-B selected in step S1802 to project the generated markers.
In step S1804, the CPU 201 of the PC 200 generates markers for designating the deformed shape, and causes the PJ-A selected in step S1801 to project the generated markers.
Subsequent steps are similar to steps S1208 to S1211 in the first exemplary embodiment, and thus the descriptions thereof are omitted.
According to the present exemplary embodiment, rendering of “markers for designating a projection geometry” that is required to have a high response speed can be achieved by the projector with the highest rendering capability, and rendering of “markers representing a common deformable area” that does not require sequential updating can be achieved by another projector. Consequently, it is possible to present updating of the “markers for designating a projection geometry” to the user who is operating the markers, while maintaining a high response performance.
Embodiment(s) of the present disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Applications No. 2018-081157, filed Apr. 20, 2018, and No. 2018-081158, filed Apr. 20, 2018, which are hereby incorporated by reference herein in their entirety.