The present disclosure relates to a technology for performing production according to movement of a target photographed by a camera.
Recently, a library for estimating the posture of a human has been disclosed as open source. This library detects characteristic points such as joint positions of a human from a two-dimensional image by using a neural network trained by deep learning, and estimates the posture of the human by connecting the characteristic points to each other. Japanese Patent Laid-Open No. 2020-204890 discloses a robot system that estimates the posture of a photographed human by using such a posture estimating model and synchronizes the posture of a robot device with the estimated posture of the human.
In a live venue or a concert hall, production is performed in which lighting is changed and the volume of sound is changed according to movement of a performer. Such production has conventionally been performed manually. A person in charge of production determines the start timing and end timing of production, and manually controls lighting apparatuses and sound apparatuses. During a performance, the person in charge of production observes the movement of the performer and works to synchronize the production with that movement. However, the burden of this work is heavy.
It is desirable to provide a technology that automatically performs production according to movement of a target such as a performer.
According to an aspect of the present technology, there is provided an information processing device including an obtaining section configured to obtain position information of a plurality of parts of a photographing target, a deriving section configured to derive a length between two parts, and a production control section configured to perform production on the basis of the derived length.
According to another aspect of the present technology, there is provided an information processing device including an obtaining section configured to obtain position information of a plurality of parts of a photographing target, a deriving section configured to derive a direction connecting two parts to each other, and a production control section configured to perform production on the basis of the derived direction.
It is to be noted that optional combinations of the above constituent elements and modes obtained by converting expressions of the present technology between a method, a device, a system, a recording medium, a computer program, and the like are also effective as modes of the present technology.
A camera 4 photographs the performer during performance on the stage 5. The camera 4 is a three-dimensional camera capable of obtaining depth information, and may be a stereo camera or a time-of-flight (ToF) camera. The camera 4 photographs the three-dimensional space in which the performer is present in predetermined cycles (for example, 30 frames/sec).
The lighting apparatuses 2 each include a movable unit that can change an irradiation direction of light. The color and light amount of the irradiation light are dynamically changed by an information processing device.
The elements described as functional blocks performing various processes can be implemented in hardware, in software, or in a combination thereof.
The estimating section 20 receives an image of the performer, the photographing target photographed by the camera 4, and estimates the positions of a plurality of parts of the body of the performer. Various methods for recognizing the positions of parts of a human body have been proposed, and the estimating section 20 may estimate the position of each part of the performer by using an existing posture estimating technology.
In the embodiment, the performer may be a dancer dancing to music, and the posture and position of the performer on the stage 5 change with the passage of time. The production control section 36 performs light production and/or sound production by controlling the lighting apparatuses 2 and/or the sound apparatuses 3 according to movement of the performer.
In a human body model adopted in the embodiment, 19 parts are defined, and for each part, parts adjacent to the part are defined. For example, for the neck 50b, the nose 50a, the right shoulder 50c, the left shoulder 50g, and the central waist 50k are defined as adjacent parts, and two adjacent parts are coupled to each other by a bone. For example, for the right elbow 50d, the right shoulder 50c and the right wrist 50e are defined as adjacent parts. Thus, in the human body model, a plurality of parts and coupling relation between two parts are defined. The estimating section 20 estimates the position information of the plurality of parts of the performer on the basis of the human body model.
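As a concrete illustration, such a body model can be encoded as a list of parts and a list of bones, where each bone couples two adjacent parts. The sketch below is a minimal, hypothetical encoding in Python; the part names and the bone list are illustrative assumptions and do not reproduce the embodiment's 19-part model exactly.

```python
# Minimal sketch of a human body model: parts, and bones that couple
# two adjacent parts. Part names are illustrative assumptions.
PARTS = [
    "nose", "neck", "central_waist",
    "right_shoulder", "right_elbow", "right_wrist", "right_hand",
    "left_shoulder", "left_elbow", "left_wrist", "left_hand",
]

# Each bone couples two adjacent parts to each other.
BONES = [
    ("nose", "neck"),
    ("neck", "right_shoulder"), ("neck", "left_shoulder"),
    ("neck", "central_waist"),
    ("right_shoulder", "right_elbow"), ("right_elbow", "right_wrist"),
    ("right_wrist", "right_hand"),
    ("left_shoulder", "left_elbow"), ("left_elbow", "left_wrist"),
    ("left_wrist", "left_hand"),
]

def adjacent_parts(part):
    """Return the parts coupled to `part` by a bone."""
    out = []
    for a, b in BONES:
        if a == part:
            out.append(b)
        elif b == part:
            out.append(a)
    return out

# The neck is coupled to the nose, both shoulders, and the central waist.
print(adjacent_parts("neck"))
```

Given such a structure, an estimating section can attach an estimated position to each part name, and non-adjacency between two parts (no bone between them) can be checked directly against the bone list.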
After the obtaining section 32 obtains the position information of the plurality of parts of the performer, the deriving section 34 derives a length between two parts. The production control section 36 performs production on the basis of the derived length between the two parts.
The production control section 36 may adjust the volume of music output by the sound apparatuses 3 according to the derived length L. For example, the production control section 36 may increase the volume of the music when the length L becomes long and decrease the volume when the length L becomes short. Conversely, the production control section 36 may decrease the volume when the length L becomes long and increase the volume when the length L becomes short. In addition to adjusting the volume, the production control section 36 may apply a sound effect corresponding to the length L to the music and change a parameter of the sound effect according to the length L. For example, the production control section 36 may amplify and emphasize a high frequency range when the length L becomes long, and amplify and emphasize a low frequency range when the length L becomes short.
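The derivation of the length L and its mapping to a volume can be sketched as follows, assuming three-dimensional part positions are available from the estimating section. The position values, the length range, and the linear volume curve are illustrative assumptions; any monotonic mapping (or its inverse, for the conversely-described behavior) would serve.

```python
import math

def length_between(p1, p2):
    """Euclidean length L between two 3D part positions (x, y, z)."""
    return math.dist(p1, p2)

def volume_from_length(length_m, min_len=0.2, max_len=1.8,
                       min_vol=0.0, max_vol=1.0):
    """Map a length [m] linearly onto a volume in [min_vol, max_vol].
    The range endpoints are illustrative assumptions."""
    t = (length_m - min_len) / (max_len - min_len)
    t = max(0.0, min(1.0, t))  # clamp to the valid range
    return min_vol + t * (max_vol - min_vol)

# Hypothetical positions of the right hand and left hand, in metres.
right_hand = (0.9, 1.2, 2.0)
left_hand = (-0.9, 1.2, 2.0)
L = length_between(right_hand, left_hand)   # 1.8 m, arms fully spread
print(volume_from_length(L))                # 1.0
```

The clamp keeps the volume stable when the estimated length momentarily overshoots the expected range, e.g. because of an estimation error in a single frame.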
The production control section 36 may adjust the amounts of light of the lighting apparatuses 2 according to the derived length L. For example, the production control section 36 may increase the light amounts when the length L becomes long and decrease the light amounts when the length L becomes short. At this time, the production control section 36 may adjust the light amounts of the lighting apparatuses 2 according to the derived length L while controlling the movable units of the plurality of lighting apparatuses 2 so as to irradiate the right hand 50f or the left hand 50j with light. Conversely, the production control section 36 may decrease the light amounts when the length L becomes long and increase the light amounts when the length L becomes short. In addition to the light amounts, the production control section 36 may adjust the color of the irradiation light and change that color according to the length L.
In the production system 1 according to the embodiment, it is preferable that the deriving section 34 derive a length between two parts not adjacent to each other in the human body model and that the production control section 36 perform production on the basis of the length between the two non-adjacent parts. The right hand 50f and the left hand 50j described above are an example of two parts not adjacent to each other in the human body model. The length L between two non-adjacent parts can vary much more than the length between two adjacent parts, and is therefore suitable for use as a dynamic production parameter. For example, the deriving section 34 may derive, as a production parameter, a length between one part of the right half of the body and one part of the left half of the body. By recognizing which two parts are set as a production parameter, the performer can give a performance that pays attention to the length between those two parts, for example, a performance of moving greatly in the left-right direction.
In addition, the deriving section 34 may derive, as a production parameter, a length between one part of the upper half of the body and one part of the lower half of the body. By recognizing which two parts are set as a production parameter, the performer can give a performance that pays attention to the length between those two parts, for example, a performance of moving greatly in the upward-downward direction.
Incidentally, the deriving section 34 is not limited to deriving the length between one predetermined set (two specific parts); the deriving section 34 may derive lengths for a predetermined plurality of sets as production parameters, and the production control section 36 may perform production on the basis of the length of each set. For example, the production control section 36 may perform sound production that controls the sound apparatuses 3 on the basis of the length between the parts of a first set, and may perform light production that controls the lighting apparatuses 2 on the basis of the length between the parts of a second set different from the first set.
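The use of a plurality of part sets, each driving a different production channel, can be sketched as below. The set definitions, part names, and position values are illustrative assumptions, not the embodiment's exact configuration.

```python
import math

# Several predetermined part sets, each driving a different production
# channel. The sets chosen here are illustrative assumptions: the first
# set would drive the sound apparatuses, the second the lighting apparatuses.
PRODUCTION_SETS = {
    "sound": ("right_hand", "left_hand"),
    "light": ("nose", "right_hand"),
}

def derive_set_lengths(positions, sets=PRODUCTION_SETS):
    """Derive one length per set from a {part: (x, y, z)} position map."""
    return {name: math.dist(positions[a], positions[b])
            for name, (a, b) in sets.items()}

positions = {                      # hypothetical positions in metres
    "right_hand": (0.8, 1.0, 2.0),
    "left_hand": (-0.8, 1.0, 2.0),
    "nose": (0.0, 1.7, 2.0),
}
lengths = derive_set_lengths(positions)
# lengths["sound"] would then drive the volume of the sound apparatuses,
# and lengths["light"] the light amount of the lighting apparatuses.
```

Keeping the sets in one table makes it straightforward to derive all production parameters in a single pass per captured frame.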
The deriving section 34 may derive a direction connecting two parts to each other as another production parameter. The production control section 36 performs production on the basis of the derived direction.
The production control section 36 may adjust the irradiation directions of light of the lighting apparatuses 2 according to the derived direction vector D. For example, the production control section 36 may control the movable unit of each lighting apparatus 2 such that the plurality of lighting apparatuses 2 apply light to one point on a half straight line obtained by extending the direction vector D.
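The computation of the direction vector D and of an irradiation point on the half straight line extending it can be sketched as follows. The two arm parts, the position values, and the fixed distance along the half line are illustrative assumptions.

```python
import math

def direction_vector(p_from, p_to):
    """Unit direction vector D pointing from one part toward another."""
    d = tuple(b - a for a, b in zip(p_from, p_to))
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

def point_on_half_line(origin, direction, distance):
    """Point at `distance` along the half straight line that starts at
    `origin` and extends the direction vector D; the movable units would
    be controlled so that the lighting apparatuses aim at this point."""
    return tuple(o + distance * c for o, c in zip(origin, direction))

# Hypothetical positions of two arm parts, in metres: the performer
# points the forearm from the elbow toward the wrist.
elbow = (0.3, 1.2, 2.0)
wrist = (0.3, 1.2, 1.0)
D = direction_vector(elbow, wrist)          # (0.0, 0.0, -1.0)
target = point_on_half_line(wrist, D, 3.0)  # (0.3, 1.2, -2.0)
```

Each lighting apparatus would then convert `target` into pan and tilt angles for its movable unit from its own mounting position; that conversion is apparatus-specific and is omitted here.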
In a case where such light production is performed, it is natural for the performer to specify the position to be irradiated by the lighting apparatuses 2 by moving an arm. It is therefore preferable for the deriving section 34 to derive a direction connecting two parts of the left arm or the right arm to each other and to use that direction as a production parameter. However, a direction connecting two parts other than arm parts may be derived instead.
The present technology has been described above on the basis of the embodiment. According to the embodiment, the production control section 36 can perform production automatically on the basis of a production parameter derived by the deriving section 34. The present embodiment is illustrative, and it is to be understood by those skilled in the art that combinations of constituent elements and processing processes of the embodiment are susceptible of various modifications and that such modifications also fall within the scope of the present technology.
In the embodiment, the photographing target of the camera 4 is a live performer. However, the photographing target may be a person other than a live performer; indeed, the photographing target may be anything for which a body model is established and for which the position information of a plurality of parts can be obtained from a photographed image. The photographing target may, for example, be a human-type or pet-type autonomous robot.
In the embodiment, description has been made of the camera 4 as a three-dimensional camera capable of obtaining depth information. However, as illustrated in Japanese Patent Laid-Open No. 2020-204890, the camera 4 may be a camera that obtains a two-dimensional image not having depth information.
In the embodiment, the deriving section 34 derives the length L between two parts of the performer as a production parameter. However, for example, a length between one part of the performer and a production apparatus may be derived and used as a production parameter.
In addition, in the embodiment, description has been made of a case where the production system 1 is used in a venue in which the performer performs in front of an audience. However, in a modification, the production system 1 may be used when the performance of the performer is live distribution. In the modification, the production control section 36 may perform video production on the basis of a production parameter derived by the deriving section 34.
Incidentally, the production control section 36 may perform video production on the basis of the direction vector D. In the embodiment, the production control section 36 controls the movable units of the respective lighting apparatuses 2 such that the plurality of lighting apparatuses 2 apply light to one point on the half straight line obtained by extending the direction vector D. In the modification, however, the production control section 36 may set a virtual sound source that outputs sound at one point on the half straight line obtained by extending the direction vector D, dispose a light bulb at the position of the virtual sound source, and perform video production such that the virtual sound source moves according to movement of an arm of the performer.
This application claims the benefit of U.S. Patent Application No. 63/243,780 filed Sep. 14, 2021, the entire contents of which are incorporated herein by reference.