ELECTRONIC DEVICE THAT PROVIDES IMAGE CONTENT IN CONJUNCTION WITH EXTERNAL ELECTRONIC DEVICE FOR PROJECTING IMAGE, AND CONTROL METHOD THEREFOR

Abstract
A first electronic device provides a first part of an input image in conjunction with a second electronic device projecting a second part of the input image. The first electronic device is configured to: obtain information about a projection surface to which the second electronic device projects the second part of the input image; segment the input image into the first part corresponding to a first area and the second part corresponding to a second area based on the information about the projection surface and information about the first electronic device; display the first part of the input image based on information about the first area; and control the communication interface to transmit information about the second part corresponding to the second area to the second electronic device.
Description
BACKGROUND
1. Field

The present disclosure relates to an electronic device that provides an image content in conjunction with an external electronic device and a control method thereof, and more particularly, to an electronic device that provides the image content in conjunction with the external electronic device for projecting an image, and the control method thereof.


2. Description of Related Art

In recent years, the quality of content has improved, and the need to provide content on a large screen, even at home, is increasing. In response to this need, devices are implementing larger display sizes to provide a user with a larger screen on which to view content.


As the display of the electronic device becomes larger, content may be provided on the large screen. However, the size of the screen provided in the display of the electronic device is limited due to difficulties in the display manufacturing process and the increased cost of purchasing the electronic device. Further, there are limitations in installation and usage environments because the display of the electronic device is already installed in the home. Accordingly, there is a need for providing content on screens of various sizes according to the type of content, the home environment, and the requirements of family members.


SUMMARY

According to an aspect of the disclosure, a first electronic device that provides a first part of an input image in conjunction with a second electronic device projecting a second part of the input image, includes: a communication interface; a display; a memory storing one or more instructions; and at least one processor operatively coupled to the communication interface, the display, and the memory and configured to execute the one or more instructions in the memory, the one or more instructions, when executed by the at least one processor, cause the first electronic device to: obtain information about a projection surface to which the second electronic device projects the second part of the input image, segment the input image into the first part corresponding to a first area and the second part corresponding to a second area based on the information about the projection surface and information about the first electronic device, and control the display to display the first part of the input image based on information about the first area, and control the communication interface to transmit information about the second part corresponding to the second area to the second electronic device.


According to an aspect of the disclosure, the one or more instructions, when executed by the at least one processor, cause the first electronic device to: receive, through the communication interface, an image about the projection surface photographed by a camera included in the second electronic device; and identify an area where the first electronic device is positioned on the projection surface based on the image about the projection surface.


According to an aspect of the disclosure, the one or more instructions, when executed by the at least one processor, cause the first electronic device to: identify the area where the first electronic device is positioned on the projection surface of the image about the projection surface as the first area and identify other areas excluding the first area in the image about the projection surface as the second area.


According to an aspect of the disclosure, the one or more instructions, when executed by the at least one processor, cause the first electronic device to: receive, through the communication interface, information about a distance between the second electronic device and the projection surface via a distance sensor included in the second electronic device, obtain information about a size and a location of the projection surface based on the information about the distance between the second electronic device and the projection surface, and segment the input image into the first part corresponding to the first area and the second part corresponding to the second area based on the information about the size and the location of the projection surface.


According to an aspect of the disclosure, the one or more instructions, when executed by the at least one processor, cause the first electronic device to: convert a channel of an audio corresponding to the input image to a channel corresponding to an output specification of the second electronic device and an output specification of the first electronic device, and map the audio to the converted channel based on information about a location of the second electronic device.


According to an aspect of the disclosure, the one or more instructions, when executed by the at least one processor, cause the first electronic device to: analyze the second part of the input image projected to the projection surface and perform post-processing on the audio per the converted channel.


According to an aspect of the disclosure, the one or more instructions, when executed by the at least one processor, cause the first electronic device to: control the display to display a screen of a preset color in the first electronic device and control the communication interface to transmit, to the second electronic device, a control command to project a screen of the preset color, receive, from the second electronic device through the communication interface, an image capturing the screen of the preset color displayed by the first electronic device and the screen of the preset color projected by the second electronic device, and control brightness of at least one of a screen output by the first electronic device or a screen projected by the second electronic device based on the image capturing the screen of the preset color received from the second electronic device.


According to an aspect of the disclosure, the one or more instructions, when executed by the at least one processor, cause the first electronic device to: control the display to display a screen which changes to a pattern preset in the first electronic device and control the communication interface to transmit, to the second electronic device, a control command to project a screen which changes to a preset pattern, receive, from the second electronic device through the communication interface, an image capturing the screen changing to the preset pattern displayed by the first electronic device and the screen changing to the preset pattern projected by the second electronic device, and control at least one of an image output timing of the first electronic device or an image projecting timing of the second electronic device based on the image received from the second electronic device.


According to an aspect of the disclosure, the input image includes a flag indicating whether the input image is configured to be segmented into the first part and the second part; and the one or more instructions, when executed by the at least one processor, cause the first electronic device to: determine whether to segment the input image into the first part corresponding to the first area and the second part corresponding to the second area based on the flag.


According to an aspect of the disclosure, the one or more instructions, when executed by the at least one processor, cause the first electronic device to: operate in one of a first mode which displays the input image in the first electronic device and a second mode which segments and displays the input image through the first electronic device and the second electronic device.


According to an aspect of the disclosure, a method that controls a first electronic device providing a first part of an input image in conjunction with a second electronic device projecting a second part of the input image, includes: obtaining information about a projection surface to which the second electronic device projects the second part of the input image; segmenting the input image into the first part corresponding to a first area and the second part corresponding to a second area based on information about the projection surface and information about the first electronic device; and displaying the first part of the input image based on information about the first area and transmitting information about the second part corresponding to the second area to the second electronic device.


According to an aspect of the disclosure, the obtaining the information about the projection surface further includes obtaining an image about the projection surface photographed by a camera included in the second electronic device; and the method further includes identifying an area where the first electronic device is positioned on the projection surface based on the obtained image about the projection surface.


According to an aspect of the disclosure, the method further includes identifying the area where the first electronic device is positioned on the projection surface of the obtained image as the first area and identifying other areas excluding the first area in the obtained image as the second area.


According to an aspect of the disclosure, the obtaining the information about the projection surface further includes: obtaining information about a distance between the second electronic device and the projection surface through a distance sensor included in the second electronic device; and obtaining information about a size and a location of the projection surface based on information about the distance between the second electronic device and the projection surface, and the segmenting further includes segmenting the input image into the first part corresponding to the first area and the second part corresponding to the second area based on the information about the size and the location of the projection surface.


According to an aspect of the disclosure, the method further includes converting a channel of an audio corresponding to the input image to a channel corresponding to an output specification of the second electronic device and an output specification of the first electronic device; and mapping the audio to the converted channel based on information about a location of the second electronic device.


According to an aspect of the disclosure, a non-transitory computer readable medium having instructions stored therein, which when executed by a processor in a first electronic device, controls the processor to execute a method that controls the first electronic device to provide a first part of an input image in conjunction with a second electronic device projecting a second part of the input image, the method includes: obtaining information about a projection surface to which the second electronic device projects the second part of the input image; segmenting the input image into the first part corresponding to a first area and the second part corresponding to a second area based on information about the projection surface and information about the first electronic device; and displaying the first part of the input image based on information about the first area and transmitting information about the second part corresponding to the second area to the second electronic device.


According to an aspect of the disclosure, the obtaining the information about the projection surface further includes obtaining an image about the projection surface photographed by a camera included in the second electronic device; and the method further includes identifying an area where the first electronic device is positioned on the projection surface based on the obtained image about the projection surface.


According to an aspect of the disclosure, the method further includes: identifying the area where the first electronic device is positioned on the projection surface of the obtained image as the first area and identifying the other areas excluding the first area in the obtained image as the second area.


According to an aspect of the disclosure, the obtaining the information about the projection surface further includes: obtaining information about a distance between the second electronic device and the projection surface through a distance sensor included in the second electronic device; and obtaining information about a size and a location of the projection surface based on information about the distance between the second electronic device and the projection surface, and the segmenting further includes segmenting the input image into the first part corresponding to the first area and the second part corresponding to the second area based on the information about the size and the location of the projection surface.


According to an aspect of the disclosure, the method further includes: converting a channel of an audio corresponding to the input image to a channel corresponding to an output specification of the second electronic device and an output specification of the first electronic device; and mapping the audio to the converted channel based on information about a location of the second electronic device.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a view illustrating a system including a first electronic device displaying an image and a second electronic device projecting an image according to one or more embodiments;



FIG. 2 is a block diagram illustrating a configuration of a first electronic device according to one or more embodiments;



FIG. 3 is a block diagram illustrating a configuration of the first electronic device for providing a content in conjunction with a second electronic device according to one or more embodiments;



FIG. 4 is a view for describing a method of receiving an image captured by a second electronic device to obtain information about a projection surface according to one or more embodiments;



FIGS. 5 and 6 are views for describing a method of receiving information about a distance between a second electronic device and a projection surface from the second electronic device to obtain information about the projection surface according to one or more embodiments;



FIG. 7 is a view for describing a method of providing an effect of a sound field according to extension of a screen according to one or more embodiments;



FIG. 8 is a flow chart for describing a method of controlling brightness of an image displayed by a first electronic device and an image projected by a second electronic device according to one or more embodiments;



FIG. 9 is a flow chart for describing a method of controlling an image output timing displayed by a first electronic device and an image projecting timing projected by a second electronic device according to one or more embodiments;



FIG. 10 is a flow chart for describing a control method of a first electronic device providing an image content in conjunction with a second electronic device projecting an image according to one or more embodiments;



FIGS. 11 and 12 are views for describing an example of extending a screen in width and height directions according to one or more embodiments;



FIG. 13 is a view illustrating an example in which a first electronic device is positioned on a side area of a projection surface according to one or more embodiments;



FIG. 14 is a view for describing an example of displaying an area of interest in an image of a content according to one or more embodiments;



FIG. 15 is a view for describing a system including a first electronic device and a plurality of second electronic devices according to one or more embodiments;



FIG. 16 is a sequence diagram for describing an example in which a first electronic device displaying an image and a second electronic device projecting an image provide a content according to one or more embodiments;



FIG. 17 is a sequence diagram for describing an example in which a first electronic device displaying an image and a second electronic device projecting an image provide a content through a control device according to one or more embodiments; and



FIG. 18 is a block diagram specifically illustrating a configuration of a second electronic device according to one or more embodiments.





DETAILED DESCRIPTION OF EMBODIMENTS

Various modifications may be made to the embodiments of the disclosure, and there may be various types of embodiments. Accordingly, specific embodiments will be illustrated in drawings, and the embodiments will be described in detail in the detailed description. However, it should be noted that the various embodiments are not for limiting the scope of the disclosure to a specific embodiment, but they should be interpreted to include all modifications, equivalents, and/or alternatives of the embodiments of the disclosure. In one or more examples, with respect to the detailed description of the drawings, similar components may be designated by similar reference numerals.


In describing the disclosure, where it is determined that a detailed explanation of related known functions or features may unnecessarily obscure the gist of the disclosure, the detailed explanation will be omitted.


In addition, the embodiments below may be modified in various different forms, and the scope of the technical spirit of the disclosure is not limited to the embodiments below. Rather, these embodiments are provided to make the disclosure thorough and complete, and to fully convey the technical idea of the disclosure to those skilled in the art.


The terms used in the disclosure are used only to explain specific embodiments, and are not intended to limit the scope of the disclosure. In addition, singular expressions include plural expressions, unless the context clearly indicates otherwise.


In the disclosure, expressions such as “have”, “may have”, “include”, and “may include” denote the existence of such characteristics (e.g.: elements such as numbers, functions, operations, and components), and do not exclude the existence of additional characteristics.


In the disclosure, the expressions “A or B”, “at least one of A and/or B” or “one or more of A and/or B”, and the like may include all possible combinations of the listed items. For example, “A or B”, “at least one of A and B”, or “at least one of A or B” may refer to all of the following cases: (1) including at least one A, (2) including at least one B, or (3) including all of at least one A and at least one B.


Further, the expressions “1st”, “2nd”, “first”, “second”, or the like used in the disclosure may describe various elements regardless of any order and/or degree of importance. Also, such expressions are used only to distinguish one element from another element, and are not intended to limit the elements.


In one or more examples, the description in the disclosure that one element (e.g.: a first element) is “(operatively or communicatively) coupled with/to” or “connected to” another element (e.g.: a second element) should be interpreted to include both the case where the one element is directly coupled to the another element, and the case where the one element is coupled to the another element through the other element (e.g.: a third element).


In contrast, the description that one element (e.g.: a first element) is “directly coupled” or “directly connected” to another element (e.g.: a second element) may be interpreted to mean that the other element (e.g.: a third element) does not exist between the one element and the another element.


Also, the expression “configured to (set to)” used in the disclosure may be interchangeably used with other expressions such as “suitable for,” “having the capacity to,” “designed to,” “adapted to,” “made to,” and “capable of,” depending on cases. In one or more examples, the term “configured to (or set to)” may not necessarily mean “specifically designed to” in terms of hardware.


Instead, under some circumstances, the expression “a device configured to” may mean that the device “is capable of” performing an operation together with another device or component. For example, the phrase “a processor configured to (set to) perform A, B, and C” may mean a generic-purpose processor (e.g.: a CPU or an application processor) that can perform the corresponding operations by executing one or more software programs stored in a memory device.


In the embodiments of the disclosure, ‘a module’ or ‘a part’ may perform at least one function or operation, and may be implemented as hardware or software, or as a combination of hardware and software. In addition, a plurality of ‘modules’ or ‘parts’ may be integrated into at least one module and implemented as at least one processor, excluding ‘a module’ or ‘a part’ that needs to be implemented as specific hardware.


In one or more examples, various elements and areas in the drawings are illustrated schematically. Accordingly, the technical spirit of the disclosure is not limited by the relative sizes or intervals illustrated in the accompanying drawings.


Hereinafter, various embodiments of the disclosure are specifically described based on the attached drawings. FIG. 1 is a view illustrating a system including a first electronic device displaying an image and a second electronic device projecting an image according to one or more embodiments. According to one or more embodiments of the disclosure, a first electronic device 100 may include a display and may be a display device such as a TV providing an image through the display. A second electronic device 200 may be an image projecting device such as a projector projecting an image on a wall or a screen. In one or more examples, the first electronic device 100 may operate as a main device and the second electronic device 200 may operate as a secondary device. However, the disclosure is not limited thereto. Although FIG. 1 illustrates the second electronic device 200 positioned on the floor, the embodiments are not limited to this configuration. For example, the second electronic device 200 may be installed on a ceiling, a sidewall of a room, or a back wall facing the first electronic device 100.


The first electronic device 100 may be installed on a wall at home and display an image. The second electronic device 200 may project an image on a wall on which the first electronic device 100 is installed. In particular, as shown in FIG. 1, the first electronic device 100 may be positioned at an area of the image projected by the second electronic device 200.


The first electronic device 100 obtains information about a projection surface 100A to which the second electronic device 200 projects an image. In one or more examples, the projection surface 100A may correspond to a surface that surrounds the first electronic device 100. As illustrated in FIG. 1, the projection surface extends in horizontal and vertical directions around the first electronic device 100. Although FIG. 1 illustrates the projection surface 100A as a rectangular shape, the embodiments are not limited to this configuration. For example, the projection surface 100A may be any desired shape such as a square, circle, oval, etc. In one or more examples, the first electronic device 100 segments an input image of a content into a first part corresponding to a first area and a second part corresponding to a second area based on information about the projection surface and information about the first electronic device 100. For example, the input image may be segmented into a first image part and a second image part, where the first electronic device 100 displays the first image part, and the second electronic device 200 displays the second image part. In one or more examples, the first electronic device 100 displays part of the image based on information about the first area in the image and transmits information about the second area to the second electronic device 200. The second electronic device 200 may display the other part of the image based on information about the second area.


In one or more examples, the first electronic device 100 may segment not only the image of the content, but also an audio of the content based on information about the projection surface and information about a position of the second electronic device 200. Part of the segmented audio may be output through the first electronic device 100, and the other part of the audio may be output through the second electronic device 200.


As a result, the first electronic device 100 may provide one content in conjunction with the second electronic device 200 projecting an image as shown in FIG. 1.



FIG. 2 is a block diagram illustrating a configuration of the first electronic device 100 according to one or more embodiments. As shown in FIG. 2, the first electronic device 100 includes a display 110, a speaker 120, a communication interface 130, an input interface 140, memory 150, and at least one processor 160. However, the configuration of the first electronic device 100 shown in FIG. 2 is merely an example; other components may be added thereto or some components may be omitted.


The display 110 may display various information. In particular, the display 110 may display part of an image of a content when displaying the content in conjunction with the second electronic device 200 projecting an image. In one or more examples, the display 110 may display a screen of the preset color or a screen changing to the preset pattern to synchronize with the second electronic device 200 operating in conjunction therewith.


In one or more examples, the display 110 may be realized as a liquid crystal display panel (LCD), organic light emitting diodes (OLED), etc. In one or more examples, the display 110 may be realized as a flexible display, a transparent display, or the like in some cases. However, the display 110 according to the disclosure is not limited to a specific type.


The speaker 120 may output various voice messages and audio. In particular, the speaker 120 may output audio of a content obtained based on information about the first electronic device 100 and the second electronic device 200 when providing a content in conjunction with the second electronic device 200.


The communication interface 130 may include at least one circuit and may perform communication with various types of second electronic devices 200. The communication interface 130 may include at least one of a Bluetooth low energy module, a Wi-Fi communication module, a cellular communication module, a 3rd generation (3G) mobile communication module, a 4th generation (4G) mobile communication module, a 4G long term evolution (LTE) communication module, or a 5th generation (5G) mobile communication module.


In particular, the communication interface 130 may receive information about the second electronic device 200 and information about the projection surface from the second electronic device 200 when providing a content in conjunction with the second electronic device 200.


In one or more examples, the aforementioned embodiments describe performing a communication connection with external equipment through a wireless communication module, but the communication connection with the external equipment may also be performed through a wired communication module.


The memory 150 may store an operating system (OS) for controlling overall operations of components of the first electronic device 100 and instructions or data related to the components of the first electronic device 100. In particular, the memory 150 may include a projection surface information-obtaining module 310, a content-obtaining module 320, an AV signal separation module 330, an image analysis module 340, an image segmentation module 350, an audio conversion module 360, an audio post-processing module 370, an AV signal synthesis and distribution module 380, a content output module 390, and a content transmission module 395 to provide a content in conjunction with the second electronic device 200, as shown in FIG. 3. In particular, when a function for providing a content in conjunction with the second electronic device is executed, data for performing the various operations of the modules stored in non-volatile memory may be loaded onto volatile memory. In one or more examples, loading may refer to an operation of calling data stored in the non-volatile memory into the volatile memory and storing the data so that the at least one processor 160 may access the data.


In one or more examples, the memory 150 may be realized as non-volatile memory (e.g. a hard disk, a solid state drive (SSD), flash memory), volatile memory (memory within the at least one processor 160 may also be included), etc.


The at least one processor 160 may control the first electronic device 100 according to at least one instruction stored in the memory 150. In particular, the at least one processor 160 obtains information about the projection surface to which the second electronic device projects an image. Further, the at least one processor 160 segments an input image into a first area and a second area based on information about the projection surface and information about the first electronic device 100. Still further, the at least one processor 160 controls the display 110 to display part of the image based on information about the first area of the image, and controls the communication interface 130 to transmit information about the second area of the image to the second electronic device.


In one or more examples, the at least one processor 160 may receive an image of the projection surface photographed by a camera included in the second electronic device through the communication interface 130. Further, the at least one processor 160 may identify an area where the first electronic device 100 is positioned on the projection surface based on the obtained image of the projection surface. The image of the projection surface may be captured by a camera of the second electronic device 200 and transmitted to the first electronic device 100. Further, the at least one processor 160 is configured to identify an area corresponding to the area where the first electronic device 100 is positioned on the projection surface of the obtained image as the first area and identify the other areas excluding the first area in the obtained image as the second area. In one or more examples, the first electronic device 100 may include one or more sensors that are able to detect a size and position of the projection surface.


In one or more examples, the at least one processor 160 may receive information about a distance between the second electronic device and the projection surface through a distance sensor included in the second electronic device. Further, the at least one processor 160 may obtain information about a size and a location of the projection surface based on the information about the distance between the second electronic device and the projection surface. Still further, the at least one processor 160 may segment an input image into a first area and a second area based on information about the size and the location of the projection surface.


The at least one processor 160 may convert a channel of an audio corresponding to an input image to a channel corresponding to an output specification of the second electronic device and the first electronic device 100. Further, the at least one processor 160 may map the audio to the converted channel based on information about a location of the second electronic device. In one or more examples, the at least one processor 160 may analyze an image projected to the projection surface and may perform post-processing on the audio per the converted channel.


The at least one processor 160 may control the display 110 to display a screen of the preset color in the first electronic device and control the communication interface 130 to transmit a control command for projecting a screen of the preset color to the second electronic device 200. Further, the at least one processor 160 may receive an image capturing the screen of the preset color displayed by the first electronic device 100 and the screen of the preset color projected by the second electronic device 200, from the second electronic device 200 through the communication interface 130. In one or more examples, the at least one processor 160 may control brightness of at least one of a screen output by the first electronic device 100 or a screen projected by the second electronic device 200 based on the image received from the second electronic device 200.


The at least one processor 160 may control the display 110 to display a screen which changes to a pattern preset in the first electronic device 100 and control the communication interface 130 to transmit a control command to project a screen which changes to the preset pattern, to the second electronic device 200. Further, the at least one processor 160 may receive an image capturing a screen changing to the preset pattern displayed by the first electronic device 100, and a screen changing to the preset pattern projected by the second electronic device, from the second electronic device 200 through the communication interface 130. In one or more examples, the at least one processor 160 may control at least one of an image output timing of the first electronic device 100 or an image projecting timing of the second electronic device 200 based on the image received from the second electronic device 200 to adjust the image output timing of the first electronic device 100 to match the image projecting timing of the second electronic device 200.


The at least one processor 160 may determine whether to segment the input image into the first area and the second area based on a flag included in the input image indicating whether the input image may be segmented. A minimal sketch of this flag check is shown below.
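
In one or more examples, the flag check described above may be illustrated by the following Python sketch; the dictionary-based metadata container and the key name "segmentable" are assumptions made for illustration and are not part of the disclosure.

```python
def should_segment(metadata: dict) -> bool:
    """Return True if the input image's metadata carries a flag indicating
    that the image may be segmented into a first part and a second part.
    The key name 'segmentable' is an illustrative assumption."""
    return bool(metadata.get("segmentable", False))

# Example: should_segment({"segmentable": True}) -> segment and display the
# image through both devices; otherwise display the whole image on the
# first electronic device only.
```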


In one or more examples, the at least one processor 160 may operate in one of a first mode which displays the input image in the first electronic device 100 and a second mode which segments and displays the input image through the first and second electronic devices, where the second electronic device 200 displays part of the input image on the projection surface.


In one or more examples, the at least one processor 160 may load, from the non-volatile memory onto the volatile memory, data for performing the various operations of the modules including the projection surface information-obtaining module 310, the content-obtaining module 320, the AV signal separation module 330, the image analysis module 340, the image segmentation module 350, the audio conversion module 360, the audio post-processing module 370, the AV signal synthesis and distribution module 380, the content output module 390, and the content transmission module 395, as illustrated in FIG. 3. A method in which the at least one processor 160 provides a content in conjunction with the second electronic device through the plurality of components is specifically described with reference to FIG. 3. FIG. 3 is a block diagram illustrating a configuration of the first electronic device for providing a content in conjunction with the second electronic device according to one or more embodiments.


The projection surface information-obtaining module 310 may obtain information about the projection surface based on information received from the second electronic device 200. The information received from the second electronic device 200 may be an image of the projection surface, or one or more parameters that identify a size and shape of the projection surface. Further, the projection surface information-obtaining module 310 may identify an area where the first electronic device 100 is positioned on the projection surface, and the other areas excluding that of the first electronic device 100.


In one or more examples, the projection surface information-obtaining module 310 may receive an image capturing the projection surface including the first electronic device 100 from the second electronic device 200. Further, the projection surface information-obtaining module 310 may identify an area occupied by a display of the first electronic device 100 on the projection surface based on the obtained image of the projection surface. For example, as shown in FIG. 4, the second electronic device 200 may obtain an image by capturing the projection surface including the first electronic device 100. Further, the second electronic device 200 may transmit the obtained image to the first electronic device 100. The first electronic device 100 may identify an area 410 where the first electronic device 100 is positioned on the projection surface and the other areas 420 excluding that of the first electronic device 100 based on the received image. A sketch of one conceivable detection approach follows.
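
In one or more examples, one conceivable way to identify the area occupied by the first electronic device 100 in the captured image is a simple bright-region search, assuming the display shows a bright solid screen during detection. The following minimal NumPy sketch is illustrative only; the function name, threshold value, and detection strategy are assumptions, not a method prescribed by the disclosure.

```python
import numpy as np

def locate_display_area(captured: np.ndarray, threshold: int = 200):
    """Return (top, left, bottom, right) of the bright region assumed to be
    the first electronic device's display within the captured image."""
    gray = captured.mean(axis=2)           # H x W x 3 -> rough luminance map
    ys, xs = np.nonzero(gray > threshold)  # pixels bright enough to be the display
    if ys.size == 0:
        return None                        # display not visible in the capture
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1
```

The returned rectangle would correspond to the area 410, and everything outside it to the other areas 420.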


In one or more examples, in the aforementioned embodiment, the first electronic device 100 may receive an image capturing the projection surface including the first electronic device 100 from the second electronic device 200. However, the embodiments are not limited to this configuration. For example, an image capturing the projection surface including the first electronic device 100 may be received from a mobile terminal (e.g. a smart phone) having a camera. For example, if the first electronic device 100 and the second electronic device 200 are in conjunction with an external mobile terminal, information (including the captured image) about the projection surface projected by the second electronic device 200 may be received by using an application installed in the mobile terminal.


In one or more examples, the projection surface information-obtaining module 310 may receive information about a distance between the second electronic device 200 and the projection surface, measured via a distance sensor included in the second electronic device 200, from the second electronic device 200 through the communication interface 130. Further, the projection surface information-obtaining module 310 may obtain information about a size and a location of the projection surface based on the information about the distance between the second electronic device 200 and the projection surface and information about the first electronic device 100. In one or more examples, the information about the size and the location of the projection surface according to a distance between the second electronic device 200 and the projection surface may be previously stored. In one or more examples, the second electronic device 200 may communicate with the first electronic device 100 using short range communication technologies (e.g., near field communication (NFC), Bluetooth, etc.) to transmit one or more data packets containing information that the first electronic device 100 may use to determine the size and shape of the projection surface. The first electronic device 100 may be configured to process the data packets to decode and reconstruct an image of the projection surface, or obtain one or more parameters corresponding to the size and shape of the projection surface. In one or more examples, the area where the first electronic device 100 is positioned on the projection surface may be fixed to the preset location (e.g. a center area, etc.).


For example, as shown in FIG. 5, if a distance between the second electronic device 200 and the projection surface is d1, the projection surface information-obtaining module 310 may identify a size and a location of a first projection surface corresponding to the distance d1 between the second electronic device 200 and the projection surface based on the previously stored information. Further, the projection surface information-obtaining module 310 may identify an area 510 where the first electronic device 100 is positioned and the other areas 520 excluding that of the first electronic device 100 based on the size and the location of the first projection surface. In particular, the projection surface information-obtaining module 310 may identify the area 510 where the first electronic device 100 is positioned and the other areas 520 excluding that of the first electronic device 100 based on the size and the location of the first projection surface and the previously stored size and location of the first electronic device 100. As shown in FIG. 6, if a distance between the second electronic device 200 and the projection surface is d2, which is longer than d1, the projection surface information-obtaining module 310 may identify a size and a location of a second projection surface corresponding to the distance d2 between the second electronic device 200 and the projection surface based on the previously stored information. Further, the projection surface information-obtaining module 310 may identify an area 610 where the first electronic device 100 is positioned and the other areas 620 excluding that of the first electronic device 100 based on the size and the location of the second projection surface.
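
In one or more examples, because the size and the location of the projection surface per distance may be previously stored, the lookup described with reference to FIGS. 5 and 6 can be sketched as a small table keyed by distance. The table contents, units, and function name below are illustrative assumptions only.

```python
# Previously stored geometry of the projection surface per projector
# distance: (width_px, height_px, x_offset_px, y_offset_px). All values
# are illustrative.
PROJECTION_TABLE = {
    1.0: (1920, 1080, 0, 0),        # e.g. distance d1 -> first projection surface
    2.0: (3840, 2160, -960, -540),  # e.g. distance d2 -> second projection surface
}

def projection_geometry(distance_m: float):
    """Return the stored geometry whose distance key is closest to the
    distance reported by the second electronic device's distance sensor."""
    key = min(PROJECTION_TABLE, key=lambda d: abs(d - distance_m))
    return PROJECTION_TABLE[key]
```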


In one or more examples, the projection surface information-obtaining module 310 may store information about the first electronic device 100 (e.g. information about a display size, information about resolution, information about a speaker, etc.) and information about the second electronic device 200 (e.g. information about a speaker, etc.).


The content-obtaining module 320 may obtain various contents. For example, the content-obtaining module 320 may obtain the contents through the communication interface 130. In one or more examples, the content-obtaining module 320 may obtain the contents from the second electronic device through a wired input/output (I/O) interface (e.g. an HDMI terminal, etc.). The content-obtaining module 320 may obtain the contents stored in the first electronic device 100. In one or more examples, the contents may include an image signal and an audio signal. In one or more examples, the content may correspond to content received through a tuner, a set top box, or a Wi-Fi connection. The content may be a live broadcast or content on demand provided through a streaming service.


The AV signal separation module 330 may separate an image signal and an audio signal included in a content. Further, the AV signal separation module 330 may input the image signal to the image analysis module 340 and may input the audio signal to the audio conversion module 360.


The image analysis module 340 may analyze the image signal input through the AV signal separation module 330. In one or more examples, the image analysis module 340 may determine an expansion ratio, a crop area, and information about a location of a main object in the image signal based on information about the first electronic device 100 and information about the projection surface. For example, the image analysis module 340 may determine an expansion ratio of the input image based on an aspect ratio and a size of the projection surface. For example, if the size of the display is 55 inches and the size of the projection surface is 110 inches, the image analysis module 340 may determine the expansion ratio to be two. In one or more examples, if the main object within the image is required to be displayed in the first electronic device 100, the image analysis module 340 may analyze the image signal to determine information about a location of the main object and determine a crop area of the image resulting from moving a display area of the main object to the first electronic device 100. A sketch of these computations follows.
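
In one or more examples, the expansion-ratio and crop computations described above reduce to straightforward arithmetic, sketched below under the assumption that sizes are diagonal measurements in inches and the main object's location is given in pixel coordinates; the function names are illustrative.

```python
def expansion_ratio(display_diag_in: float, surface_diag_in: float) -> float:
    """A 55-inch display and a 110-inch projection surface give a ratio of 2.0."""
    return surface_diag_in / display_diag_in

def crop_around_object(frame_w: int, frame_h: int, obj_cx: int, obj_cy: int,
                       crop_w: int, crop_h: int):
    """Shift a crop window so the main object lands on the area shown by the
    first electronic device, clamped to the frame bounds (the crop window is
    assumed to fit inside the frame)."""
    left = min(max(obj_cx - crop_w // 2, 0), frame_w - crop_w)
    top = min(max(obj_cy - crop_h // 2, 0), frame_h - crop_h)
    return left, top, left + crop_w, top + crop_h
```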


The image segmentation module 350 may segment an image based on a result of analysis obtained by the image analysis module 340 and information about the projection surface. Specifically, the image segmentation module 350 may segment the image processed by the image analysis module 340 into a first part corresponding to a first area and a second part corresponding to a second area. In one or more examples, the first area may correspond to an area where the first electronic device 100 is positioned, and which is identified by the projection surface information-obtaining module 310. The second area may correspond to the other areas (e.g., the other areas excluding the first area) excluding that of the first electronic device 100 identified by the projection surface information-obtaining module 310. For example, the image segmentation module 350 may segment the image processed by the image analysis module 340 into an image of the first area to be displayed by the first electronic device 100 and an image of the second area projected by the second electronic device 200. Accordingly, the image segmentation module 350 may obtain a first image signal to output an image about the first area and a second image signal to output an image about the second area.


In one or more examples, the image segmentation module 350 may determine information (e.g. brightness, color, a contrast ratio, etc.) about an image to be displayed by the first electronic device 100 and determine information (e.g. brightness, color, a contrast ratio, etc.) about an image to be projected by the second electronic device 200. For example, the image segmentation module 350 may determine to project light having the preset brightness (e.g. black) with respect to the first area (i.e. the area where the first electronic device 100 is positioned) of the image projected by the second electronic device 200 to minimize interruption of watching the image displayed by the first electronic device 100.
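
In one or more examples, the segmentation of the upscaled frame into the first and second parts, with the first area of the projected image filled with a preset dark value as described above, might look like the following NumPy sketch; the rectangle convention and the black fill value are assumptions for illustration.

```python
import numpy as np

def segment_frame(frame: np.ndarray, first_area):
    """Split one upscaled frame into the part displayed by the first
    electronic device (first area) and the part projected around it
    (second area), blacking out the first area in the projected part."""
    top, left, bottom, right = first_area
    first_part = frame[top:bottom, left:right].copy()
    second_part = frame.copy()
    second_part[top:bottom, left:right] = 0  # preset brightness (e.g. black)
    return first_part, second_part
```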


In one or more examples, the image segmentation module 350 may perform image processing on at least one boundary area between the divided first area and second area so that the boundary between the first area and the second area is not displayed disparately. For example, the image segmentation module 350 may process at least one of the first area or the second area to display an average pixel value of the first area and the second area on at least one boundary area between the first area and the second area. In one or more examples, the image segmentation module 350 may process the boundary area of the second area such that its luminance gradually increases as it approaches the first area. That is, in general, the luminance of the first electronic device 100 is brighter than that of the second electronic device 200, and thus the image segmentation module 350 may process the boundary area of the second area such that its luminance gradually increases as it approaches the first area (or the first electronic device 100). In one or more examples, the image segmentation module 350 may process the boundary area of the second area to extend it in a direction of the first area so that the second area overlaps the first area. A sketch of such a luminance ramp follows.
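
In one or more examples, a gradual luminance ramp toward the first area might be sketched as a per-column gain applied to a boundary strip of an 8-bit image; only the left boundary is shown here, and the band width and maximum gain are illustrative assumptions.

```python
import numpy as np

def ramp_left_boundary(second_part: np.ndarray, first_area,
                       band: int = 32, max_gain: float = 1.5):
    """Increase luminance across a `band`-pixel strip so the gain grows as
    the strip approaches the first area; the other three sides are
    analogous. `second_part` is assumed to be uint8."""
    top, left, bottom, right = first_area
    start = max(left - band, 0)
    for x in range(start, left):
        gain = 1.0 + (max_gain - 1.0) * (x - start) / max(left - start, 1)
        col = second_part[top:bottom, x].astype(np.float32) * gain
        second_part[top:bottom, x] = np.clip(col, 0, 255).astype(np.uint8)
    return second_part
```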


The audio conversion module 360 may convert a channel of the audio signal of a content to a channel corresponding to an output specification of the second electronic device 200 and the first electronic device 100. For example, if the channel of the audio signal of a content is a 2.1 channel and the output specification of the second electronic device 200 and the first electronic device 100 is a 5.1 channel, the audio conversion module 360 may upscale the audio signal from the 2.1 channel to the 5.1 channel. Accordingly, the second electronic device 200 and the first electronic device 100 may output the audio of the 5.1 channel. Otherwise, if the channel of the audio signal of a content is a 5.1 channel and the output specification of the second electronic device 200 and the first electronic device 100 is a 2.1 channel, the audio conversion module 360 may downscale the audio signal from the 5.1 channel to the 2.1 channel. A sketch of such a conversion follows.
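
In one or more examples, a rough sketch of the channel conversion might derive the centre and surround channels from the stereo pair when upscaling, and fold them back when downscaling; the mixing coefficients below are illustrative assumptions rather than values specified by the disclosure. Channels may be NumPy sample arrays or plain floats.

```python
def upmix_2_1_to_5_1(fl, fr, lfe):
    """Derive 5.1 channels (FL, FR, C, LFE, SL, SR) from a 2.1 signal."""
    c = 0.5 * (fl + fr)            # centre derived from the stereo pair
    sl, sr = 0.7 * fl, 0.7 * fr    # surrounds derived from the stereo pair
    return fl, fr, c, lfe, sl, sr

def downmix_5_1_to_2_1(fl, fr, c, lfe, sl, sr):
    """Fold a 5.1 signal back into 2.1 (L, R, LFE)."""
    left = fl + 0.707 * c + 0.707 * sl
    right = fr + 0.707 * c + 0.707 * sr
    return left, right, lfe
```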


In addition, the audio conversion module 360 may map the audio signal to the converted channel based on information about a location of the second electronic device 200. For example, if the second electronic device 200 outputs an audio signal corresponding to a woofer channel of a plurality of channels, the audio conversion module 360 may process the audio signal of the woofer channel according to a location of the second electronic device 200.


The audio post-processing module 370 may analyze an image projected on the projection surface to perform post-processing of the audio signal per the converted channel. For example, the audio post-processing module 370 may analyze the image projected to the projection surface to perform post-processing of an audio per the converted channel so as to provide an effect of a sound field according to extension of the image. For example, if an image is displayed on the display 110 of the first electronic device 100, an audio (e.g. a sound of gunshots) may be generated at a first point 710 of the left top of the display 110 as shown in FIG. 7. In one or more examples, if the image is extended to the projection surface projected by the second electronic device 200 and the audio is continuously generated at the first point 710, the audio is heard such that it is generated in the center rather than the left top of the screen. Accordingly, there may be disparity between the image and the audio. Therefore, the audio post-processing module 370 may perform post-processing of the audio signal per channel to cause the audio to be generated at a second point 720 which is the left top of the projection surface rather than the first point 710 based on a result (e.g. an extension ratio, etc.) of analyzing an image projected on the projection surface. For example, the audio signal may be modified such that when the audio signal is output from the electronic device 100 or electronic device 200, the output audio signal corresponds to sounds being output at the first point 710 and the second point 720.
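
In one or more examples, the repositioning of a sound source under screen extension reduces to scaling its offset from the screen centre by the extension ratio, so a source rendered at the display's top-left corner (the first point 710) moves to the projection surface's top-left corner (the second point 720). The coordinate convention in the sketch below is an illustrative assumption.

```python
def rescale_source_position(offset_x: float, offset_y: float,
                            extension_ratio: float):
    """Scale a source's offset from the screen centre (in the audio
    renderer's coordinates) by the screen extension ratio."""
    return offset_x * extension_ratio, offset_y * extension_ratio

# Example: with an extension ratio of 2.0, a gunshot panned to the display's
# top-left corner at offset (-1.0, 1.0) is re-rendered at (-2.0, 2.0), i.e.
# the top-left of the projection surface in the same units.
```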


The audio post-processing module 370 may obtain a first audio signal to be output by the first electronic device 100 and a second audio signal to be output by the second electronic device 200 based on a result of post-processing.


The AV signal synthesis and distribution module 380 may synthesize a first image signal and a second image signal obtained by the image segmentation module 350 and a first audio signal and a second audio signal obtained by the audio post-processing module 370. For example, the AV signal synthesis and distribution module 380 may synthesize the first image signal and the first audio signal as a first content to be provided by the first electronic device 100 and may synthesize the second image signal and the second audio signal as a second content to be provided by the second electronic device 200.


In one or more examples, the AV signal synthesis and distribution module 380 may synchronize the first and second image signals and the first and second audio signals and may correct the first and second image signals and the first and second audio signals according to an output difference (e.g. resolution, a gain value, etc.) between the first electronic device 100 and the second electronic device 200.


The content output module 390 may output a first content output from the AV signal synthesis and distribution module 380. For example, the content output module 390 may display part of the image by displaying the first image signal of the first content through the display 110 and may output the first audio signal of the first content through the speaker 120.


The content transmission module 395 may transmit the second content output from the AV signal synthesis and distribution module 380 to the second electronic device 200. In one or more examples, the second electronic device 200 may project the second image signal of the second content on the projection surface to project the other part of the image, and may output the second audio signal of the second content through the speaker.


In particular, the first electronic device 100, according to one or more embodiments of the disclosure, may enter into a synchronization mode for synchronizing an image displayed by the first electronic device 100 and an image projected by the second electronic device 200 before the first electronic device 100 provides a content in conjunction with the second electronic device 200. The synchronization mode is described below with reference to FIGS. 8 and 9.



FIG. 8 is a flow chart for describing a method of controlling brightness of an image displayed by a first electronic device and an image projected by a second electronic device according to one or more embodiments. In one or more examples, the operations illustrated in FIG. 8 may be performed by the at least one processor 160 (FIG. 2) of the first electronic device 100.


First, a first electronic device 100 may display a screen of the preset color in the first electronic device 100, and may transmit a control command to project a screen of the preset color to the second electronic device 200 (S810). In one or more examples, the screen of the preset color displayed by the first electronic device 100 and a screen of the preset color projected by the second electronic device 200 may be a white screen. However, as understood by one of ordinary skill in the art, the embodiments are not limited to this configuration and may include a screen of any suitable preset color (e.g., yellow, etc.).


The first electronic device 100 may receive an image capturing the screen of the preset color displayed by the first electronic device 100 and the screen of the preset color projected by the second electronic device 200 from the second electronic device 200 (S820). For example, the second electronic device 200 may capture the projection surface including the screen of the preset color projected by the second electronic device 200 and the screen of the preset color displayed by the first electronic device 100. Further, the second electronic device 200 may transmit the captured image to the first electronic device 100.


The first electronic device 100 may control brightness of at least one of a screen output by the first electronic device 100 or a screen projected by the second electronic device 200 based on the image received from the second electronic device 200 (S830). For example, the screen of the preset color displayed by the first electronic device 100 and the screen of the preset color projected by the second electronic device 200 may have different brightness according to a specification of the display of the first electronic device 100, a specification of a projection part of the second electronic device 200, an external environment (e.g. an indoor lighting environment), etc. Accordingly, the first electronic device 100 may obtain a brightness ratio (or a value of a difference in brightness) of the screen output by the first electronic device 100 and the screen projected by the second electronic device 200 included in the image. The first electronic device 100 may control the brightness of at least one of the screen output by the first electronic device 100 or the screen projected by the second electronic device 200 based on the obtained brightness ratio (or the value of the difference in brightness). For example, if the brightness of the screen output by the first electronic device 100 is 1.25 times that of the screen projected by the second electronic device 200, the first electronic device 100 may reduce the brightness of its own screen by 20%, or may transmit, to the second electronic device 200, a control command for increasing the brightness of the screen projected by the second electronic device 200 by 25%. A sketch of this comparison follows.
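
In one or more examples, the brightness comparison in step S830 can be sketched as a ratio of mean luminance between the two regions of the captured image; the rectangle convention and function names are illustrative assumptions.

```python
import numpy as np

def brightness_ratio(captured: np.ndarray, display_area, projected_area) -> float:
    """Mean luminance of the display region divided by that of the projected
    region, both measured in the image captured by the second device."""
    def mean_luma(rect):
        top, left, bottom, right = rect
        return float(captured[top:bottom, left:right].mean())
    return mean_luma(display_area) / mean_luma(projected_area)

# Example: a ratio of 1.25 suggests dimming the display to 1/1.25 of its
# brightness (a 20% reduction) or raising the projector's output by 25%.
```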


Based on the above features, the user may perceive the brightness of the screen displayed by the first electronic device 100 and the brightness of the screen projected by the second electronic device 200 as identical.



FIG. 9 is a flow chart for describing a method of controlling an image output timing displayed by a first electronic device and an image projecting timing projected by a second electronic device according to one or more embodiments. In one or more examples, the operations illustrated in FIG. 9 may be performed by the at least one processor 160 (FIG. 2) of the first electronic device 100.


First, the first electronic device 100 may display a screen which changes according to a preset pattern and may transmit, to the second electronic device 200, a control command to project a screen which changes according to the preset pattern (S910). In one or more examples, the screen which changes according to the preset pattern may be a screen to which edges of a triangle are added one by one, but the disclosure is not limited thereto.


The first electronic device 100 may receive, from the second electronic device 200, an image capturing the screen changing according to the preset pattern displayed by the first electronic device 100 and the screen changing according to the preset pattern projected by the second electronic device 200 (S920). For example, the second electronic device 200 may capture a projection surface including the screen changing according to the preset pattern projected by the second electronic device 200 and the screen changing according to the preset pattern displayed by the first electronic device 100. Further, the second electronic device 200 may transmit the captured image to the first electronic device 100.


The first electronic device 100 may control at least one of an image output timing of the first electronic device 100 or an image projection timing of the second electronic device 200 based on the image received from the second electronic device 200 (S930). For example, the patterns in the screen displayed by the first electronic device 100 and the screen projected by the second electronic device 200 may change at different times due to a communication environment, etc. Therefore, the first electronic device 100 may confirm whether the patterns in the two screens included in the image change identically. If the patterns do not change identically, the first electronic device 100 may identify a time difference between the time at which the pattern in the screen output by the first electronic device 100 changes and the time at which the pattern in the screen projected by the second electronic device 200 changes. The first electronic device 100 may control at least one of the image output timing of the first electronic device 100 or the image projection timing of the second electronic device 200 based on the obtained time difference. For example, if the screen output by the first electronic device 100 is 1 second faster than the screen projected by the second electronic device 200, the first electronic device 100 may delay its own image output timing by 1 second. Based on this adjustment, the output of an image by the first electronic device 100 and the second electronic device 200 may be synchronized.
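The following is a minimal sketch of the offset computation in S930, assuming the timestamps at which the pattern changed on each screen have already been extracted from the captured video (the names and the list-of-timestamps representation are assumptions):

```python
def compute_output_delay(display_change_times, projector_change_times):
    """Each argument is a list of timestamps (in seconds) at which a new
    pattern element (e.g. a triangle edge) appeared on the respective
    screen in the captured video.
    """
    offsets = [p - d for d, p in zip(display_change_times,
                                     projector_change_times)]
    mean_offset = sum(offsets) / len(offsets)

    # Positive offset: the projector lags, so delay the display's output;
    # negative offset: the display lags, so delay the projector instead.
    if mean_offset >= 0:
        return {"delay_display_by": mean_offset, "delay_projector_by": 0.0}
    return {"delay_display_by": 0.0, "delay_projector_by": -mean_offset}
```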



FIG. 10 is a flow chart for describing a control method of a first electronic device providing an image content in conjunction with a second electronic device projecting an image according to one or more embodiments. In one or more examples, the operations illustrated in FIG. 10 may be performed by the at least one processor 160 (FIG. 2) of the first electronic device 100.


First, the first electronic device 100 obtains information about a projection surface on which the second electronic device 200 projects an image (S1010). For example, the first electronic device 100 may obtain an image of the projection surface captured by a camera included in the second electronic device 200, and may identify an area where the first electronic device 100 is positioned on the projection surface based on the obtained image of the projection surface. Alternatively, the first electronic device 100 may obtain information about a distance between the second electronic device 200 and the projection surface through a distance sensor included in the second electronic device 200 and may obtain information about a size and a location of the projection surface based on the information about the distance between the second electronic device 200 and the projection surface.


Further, the first electronic device 100 segments an input image into a first part corresponding to a first area and a second part corresponding to a second area based on information about the projection surface and information about the first electronic device 100 (S1020). For example, the first electronic device 100 may identify the area of the obtained image corresponding to where the first electronic device 100 is positioned on the projection surface as the first area, and may identify the remaining area of the obtained image excluding the first area as the second area. Alternatively, the first electronic device 100 may segment the input image into the first part corresponding to the first area and the second part corresponding to the second area based on information about the size and the location of the projection surface.
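A minimal sketch of this segmentation step (S1020), assuming the input frame is a numpy array and the rectangle that the first electronic device's display occupies within the frame has already been derived from the projection-surface image (names are illustrative):

```python
def segment_frame(frame, device_box):
    """Split one input frame into a first part shown on the display and a
    second part projected around it.

    frame      -- full input image as an H x W x C numpy array (assumed)
    device_box -- (x, y, w, h) rectangle the display occupies in the frame
    """
    x, y, w, h = device_box
    first_part = frame[y:y + h, x:x + w].copy()   # shown by the first device

    # The projector handles everything else; black out the display area so
    # the projected light does not overlap the panel.
    second_part = frame.copy()
    second_part[y:y + h, x:x + w] = 0
    return first_part, second_part
```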


The first electronic device 100 displays the first part of the image based on information about the first area and transmits information about the second part corresponding to the second area to the second electronic device 200 (S1030).


In one or more examples, the first electronic device 100 may convert a channel of an audio corresponding to the input image to a channel corresponding to the output specifications of the first electronic device 100 and the second electronic device 200. Further, the first electronic device 100 may map the audio to the converted channel based on information about a location of the second electronic device 200. In one or more examples, the first electronic device 100 may analyze the image projected on the projection surface to perform post-processing of the audio per converted channel.
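A minimal sketch of such channel mapping, assuming a plain stereo input and a known projector position relative to the display; the channel layout and mixing weights are illustrative assumptions, not values from the disclosure:

```python
def map_audio_channels(stereo, projector_position):
    """Remap one stereo audio frame onto the speakers of both devices.

    stereo             -- (left, right) sample pair
    projector_position -- 'left', 'right', or 'above' relative to the display
    """
    left, right = stereo
    if projector_position == "left":
        return {"projector": left,
                "display_left": 0.5 * (left + right),
                "display_right": right}
    if projector_position == "right":
        return {"projector": right,
                "display_left": left,
                "display_right": 0.5 * (left + right)}
    # 'above' or unknown: give the projector a mixed-down center channel.
    center = 0.5 * (left + right)
    return {"projector": center, "display_left": left, "display_right": right}
```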


In one or more examples, the first electronic device 100 may display a screen of the preset color in the first electronic device 100 and may transmit a control command for projecting the screen of the preset color to the second electronic device 200. Further, the first electronic device 100 may receive an image capturing the screen of the preset color displayed by the first electronic device 100 and the screen of the preset color projected by the second electronic device 200, from the second electronic device 200. Still further, the first electronic device 100 may control brightness of at least one of the screen output by the first electronic device 100 or the screen projected by the second electronic device 200 based on the image received from the second electronic device 200.


In one or more examples, the first electronic device 100 may display the screen changing according to the preset pattern on the first electronic device 100, and may transmit a control command for projecting the screen changing according to the preset pattern to the second electronic device 200. Further, the first electronic device 100 may receive, from the second electronic device 200, an image capturing the screen changing according to the preset pattern displayed by the first electronic device 100 and the screen changing according to the preset pattern projected by the second electronic device 200. In one or more examples, the first electronic device 100 may control at least one of an image output timing of the first electronic device 100 or an image projection timing of the second electronic device 200 based on the image received from the second electronic device 200.


In one or more examples, the first electronic device 100 according to one or more embodiments of the disclosure may provide various modes by using the second electronic device projecting an image.


For example, the first electronic device 100 may operate in a first mode (e.g. a concentration mode) for controlling the second electronic device 200 to project an image having a specific color or pattern onto a background area outside the first electronic device 100.


Alternatively, the first electronic device 100 may operate in a second mode (e.g. an extension mode) in which the first electronic device 100 displays one part of the image and the second electronic device 200 projects the other part so that a viewer may watch the content on a large screen, as described with reference to FIGS. 1 to 10.


Alternatively, if the first electronic device 100 is used in connection with an external audio device (e.g. a sound bar), the first electronic device 100 may operate in a third mode (e.g. a sound reinforcement mode) in which the first electronic device 100 and the second electronic device 200 segment and play the image of a content while the audio of the content is played in conjunction with the external audio device.


In one or more examples, the content disclosed in the aforementioned embodiment may include metadata including flag information indicating whether the image is segmented. Accordingly, the first electronic device 100 may determine whether to segment the image into the first part corresponding to the first area and the second part corresponding to the second area based on the flag information. For example, the first electronic device 100 may determine whether to segment the image per scene based on the flag information. For example, if a first scene of the content includes flag information indicating image segmentation, the first electronic device 100 may segment the image and the audio in the first scene of the content and may then provide the content together with the second electronic device 200. If a second scene of the content includes flag information indicating image maintenance, the first electronic device 100 may provide the second scene of the content through the first electronic device 100 alone. In one or more examples, a content provider may determine whether to extend and provide the image per scene included in the content, thereby providing the content through various screen sizes according to the characteristics of the scenes and thus providing enhanced content.
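A minimal sketch of this per-scene decision, assuming the scene metadata is available as a simple mapping with a boolean segmentation flag (the dictionary layout is an assumption):

```python
def handle_scene(scene):
    """Decide per scene whether to segment the image, based on flag
    metadata, e.g. scene = {"flags": {"segment": True}, "frames": [...]}.
    """
    if scene["flags"].get("segment", False):
        # Flag indicates image segmentation: split image (and audio) and
        # provide the scene together with the second electronic device.
        return "segment_and_share"
    # Flag indicates image maintenance: show the scene on the first
    # electronic device alone.
    return "display_only"
```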


In one or more examples, it is described in the aforementioned embodiment that the first electronic device 100 may maintain the aspect ratio of the image to be displayed and extend its size, wherein the first electronic device 100 displays one part of the image and the second electronic device 200 projects the other part. However, as understood by one of ordinary skill in the art, the embodiments are not limited to this configuration. For example, the first electronic device 100 may display the original image as it is, and the second electronic device 200 may project an additional image which extends the image to the left and right or to the top and bottom. This is described with reference to FIGS. 11 and 12.



FIGS. 11 and 12 are views for describing examples of extending a screen in horizontal and vertical directions according to one or more embodiments.


The first electronic device 100 may display an image 1110 of a content in an original ratio as shown in (a) of FIG. 11. If flag information for image extension in a horizontal direction is included in metadata of the content, the first electronic device 100 may obtain an additional image of the content and may transmit information about the obtained additional image to the second electronic device 200. The second electronic device 200 may project additional images 1120 onto the areas to the left and right of the first electronic device 100 as shown in (b) of FIG. 11. In one or more examples, the image and the additional images may form one continuous screen that is extended in the horizontal direction compared to the existing image 1110. In one or more examples, the display area of the image 1110 may correspond to the first electronic device 100, and the display area for the images 1120 may correspond to the projection surface.


The first electronic device 100 may display an image 1210 of a content in an original ratio as shown in (a) of FIG. 12. Thereafter, if flag information for image extension in a vertical direction is included in metadata of the content, the first electronic device 100 may obtain an additional image of the content and may transmit information about the obtained additional image to the second electronic device 200. The second electronic device 200 may project an additional image 1220 onto the area above the first electronic device 100. In one or more examples, the image and the additional image may form one continuous screen that is extended in the vertical direction compared to the existing image 1210. The second electronic device 200 may project the additional image in the horizontal direction of the image or in the vertical direction thereof based on a type of the scene or the flag information.
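A minimal sketch of how the projection areas for such additional images might be laid out, in a shared plane whose origin is the display's top-left corner (the coordinate convention and strip width are illustrative assumptions):

```python
def extension_layout(display_w, display_h, direction, extra):
    """Return (x, y, w, h) rectangles for the additional projected images
    around a display of size display_w x display_h; 'extra' is the width
    (or height) of each additional strip.
    """
    if direction == "horizontal":
        return [(-extra, 0, extra, display_h),      # strip left of the display
                (display_w, 0, extra, display_h)]   # strip right of the display
    if direction == "vertical":
        return [(0, -extra, display_w, extra)]      # strip above the display
    raise ValueError(f"unknown direction: {direction}")
```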


As mentioned above, the first electronic device 100 maintains the aspect ratio of the image to be provided while extending the screen, and thus the viewer may be provided with enhanced content based on the extended images.


In one or more examples, the aforementioned embodiment describes that the second electronic device 200 projects part of a content. However, as understood by one of ordinary skill in the art, the embodiments are not limited to this configuration. For example, the second electronic device 200 may project an image or information related to a content provided by the first electronic device 100. For example, the first electronic device 100 may recognize the content it currently provides. In one or more examples, the first electronic device 100 may recognize the currently provided content by using an artificial intelligence (AI) model for recognizing a scene. The first electronic device 100 may transmit information about the recognized content to an external server and may obtain information or an image related to the recognized content. Further, the first electronic device 100 may transmit a control command so that the second electronic device 200 projects the information or the image related to the recognized content. Accordingly, the second electronic device 200 may project information related to the content currently provided by the first electronic device 100 or a related image (e.g. an advertisement image, etc.).


For example, the first electronic device 100 may not directly transfer additional information such as metadata to the second electronic device 200, but instead may recognize the content and control the second electronic device 200 to provide information or an image related to the content based on the recognition result.


In one or more examples, the aforementioned embodiment describes that the area where the first electronic device 100 is positioned is a center area of the projection surface. However, as understood by one of ordinary skill in the art, the embodiments are not limited to this configuration. For example, the first electronic device 100 may be positioned at various areas of the projection surface. For example, an area 1310 where the first electronic device is positioned may be a side area of the projection surface. In one or more examples, the first electronic device 100 may identify the area occupied by the display of the first electronic device 100 on the projection surface based on the obtained image of the projection surface. For example, the second electronic device 200 may capture the projection surface including the first electronic device 100 positioned at the side area of the projection surface to obtain an image as shown in FIG. 13. Further, the second electronic device 200 may transmit the obtained image to the first electronic device 100. The first electronic device 100 may identify the area 1310 where the first electronic device 100 is positioned on the projection surface and the other area 1320 excluding that of the first electronic device 100 based on the received image. In one or more examples, if the first electronic device 100 determines from an image of the projection surface that the projection surface is obstructed by an object (e.g., a picture hanging on a wall, shelves, etc.), the first electronic device 100 may adjust the projection surface to avoid the obstruction.


In one or more examples, the aforementioned embodiment describes that the first electronic device 100 displays part of the image of the content according to the area where the first electronic device 100 is positioned. However, as understood by one of ordinary skill in the art, the embodiments are not limited to this configuration. For example, the first electronic device 100 may display an area of interest in the image of the content. For example, the first electronic device 100 may determine the area of interest (an area where a main object or a person is positioned) in the input image of the content. Further, the first electronic device 100 may segment the image of the content based on the area of interest: the first electronic device 100 may display the area of interest in the image of the content on the area where the first electronic device 100 is positioned and may transmit information about the other areas excluding the area of interest to the second electronic device 200. For example, as shown in FIG. 14, the first electronic device 100 may display an area of interest (e.g., an area where the sun is positioned) in the content and the second electronic device 200 may project the other area 1420 excluding the area of interest based on information about the other areas excluding the area of interest. In one or more examples, the first electronic device 100 may control the size of the image of the content according to the location of the area of interest or may crop a partial area of the image of the content.
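A minimal sketch of this area-of-interest split, assuming the region of interest has already been detected (e.g. by a saliency or object-detection model) and that OpenCV is available for the resizing mentioned above (all names are illustrative):

```python
import cv2

def split_by_interest(frame, roi_box, display_size):
    """Show the area of interest on the display, project the remainder.

    frame        -- full content frame (H x W x C numpy array)
    roi_box      -- (x, y, w, h) of the detected main object
    display_size -- (width, height) of the first device's panel
    """
    x, y, w, h = roi_box
    # Scale the area of interest to fill the first device's panel
    # (the size control / cropping mentioned above).
    interest = cv2.resize(frame[y:y + h, x:x + w], display_size)

    remainder = frame.copy()
    remainder[y:y + h, x:x + w] = 0   # projected by the second device
    return interest, remainder
```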


In one or more examples, the aforementioned embodiment describes that the first electronic device 100 segments the input image of the content into the first area to be displayed by the first electronic device 100 and the second area to be displayed by the second electronic device 200. However, as understood by one of ordinary skill in the art, the embodiments are not limited to this configuration. For example, the first electronic device 100 may segment the input image of the content into three or more areas. In one or more examples, the first electronic device 100 may segment the image of the content into a first area 1510 to be displayed by the first electronic device 100, a second area 1520-1 to be displayed by a second electronic device 200-1, and a third area 1520-2 to be displayed by a second electronic device 200-2 as shown in FIG. 15. In one or more examples, the foregoing describes that the input image of the content is segmented into three or more areas by using at least one first electronic device 100 and at least one second electronic device. However, it is merely one of various examples, and the image of the content may also be segmented into three or more areas by using a portable terminal device (e.g. a smart phone, a notebook computer, a tablet PC, etc.). For example, the first electronic device 100 may segment the image of the content into a first area to be displayed by the first electronic device 100, a second area to be displayed by the second electronic device 200, and a third area to be displayed by the portable terminal device (e.g. a smart phone, etc.).
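A minimal sketch of such an N-way split, assuming a simple left-to-right column layout in which each participating device is assigned a horizontal strip of the frame (the layout is an illustrative assumption):

```python
def column_split(frame_width, device_widths):
    """Assign each device a consecutive vertical strip of the frame.

    device_widths -- ordered mapping of device id to strip width, e.g.
                     {"projector_1": 640, "display": 640, "projector_2": 640}
                     for a FIG. 15-style three-way layout.
    Returns a mapping of device id to (x_start, width) within the frame.
    """
    areas, x = {}, 0
    for device, width in device_widths.items():
        areas[device] = (x, min(width, frame_width - x))
        x += width
    return areas
```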


In one or more examples, the aforementioned embodiment describes that the first electronic device 100 is the main device, wherein it segments the input image into the first area and the second area based on information about the projection surface onto which the second electronic device 200 projects an image and information about the first electronic device 100, displays part of the image based on information about the first area of the image, and transmits information about the second area of the image to the second electronic device 200. However, the above is merely one of many examples, and the second electronic device 200 may operate as the main device. The subject performing the operations described in FIGS. 1 to 15 may be realized as the second electronic device 200 rather than the first electronic device 100.



FIG. 16 is a sequence diagram for describing an example in which a first electronic device displaying an image and a second electronic device projecting an image provide a content according to one or more embodiments.


First, the first electronic device 100 may transmit information about the first electronic device 100 to the second electronic device 200 (S1610). In one or more examples, information about the first electronic device 100 may include information (e.g. a size, resolution, a scan rate, etc.) about the display of the first electronic device 100 and identification information (e.g. a product name, etc.) of the first electronic device 100.


The second electronic device 200 may capture a projection surface (S1620). In one or more examples, the projection surface may include an area where the first electronic device 100 is positioned and the second electronic device 200 may capture a projection surface including the area where the first electronic device 100 is positioned to obtain an image.


The second electronic device 200 may segment the input image into the first area and the second area (S1630). For example, the second electronic device 200 may segment the input image into the first area and the second area based on information about the projection surface obtained from the image capturing the projection surface and information about the first electronic device 100.


The second electronic device 200 may transmit the information about the first area to the first electronic device 100 (S1640). Further, the first electronic device 100 may display part of the input image based on the information about the first area (S1650).


While the first electronic device 100 displays part of the input image based on the information about the first area, the second electronic device 200 may project the other part of the image based on the information about the second area (S1660).
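For illustration, the sequence of FIG. 16 with the projector as the main device might be driven as follows; the object model and method names are assumptions, not an API from the disclosure:

```python
def fig16_sequence(display, projector, input_image):
    """Illustrative message order for FIG. 16; transports (Wi-Fi,
    Bluetooth, ...) and method names are assumed.
    """
    projector.receive_device_info(display.describe())           # S1610
    surface = projector.capture_projection_surface()            # S1620
    first_area, second_area = projector.segment(input_image,
                                                surface)        # S1630
    display.receive_first_area(first_area)                      # S1640
    display.show_first_part()                                   # S1650
    projector.project_second_part(second_area)                  # S1660
```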


In one or more examples, the subject performing the operations described in FIGS. 1 to 15 may be realized as an external control device (e.g. a set-top box or a smart phone) or the like rather than the first electronic device 100 or the second electronic device 200.



FIG. 17 is a sequence diagram for describing an example in which a first electronic device displaying an image and a second electronic device projecting an image provide a content through a control device according to one or more embodiments.


First, the first electronic device 100 may transmit information about the first electronic device 100 to a control device 1700 (S1710). In one or more examples, the information about the first electronic device 100 may include information (e.g. a size, resolution, a scan rate, etc.) about the display of the first electronic device 100 and identification information (e.g. a product name, etc.) of the first electronic device 100.


The second electronic device 200 may transmit the information about the second electronic device 200 to the control device 1700 (S1720). In one or more examples, information about the second electronic device 200 may include information (e.g. a size, resolution, brightness, etc.) about the projection part of the second electronic device 200 and identification information (e.g. a product name, etc.) of the second electronic device 200.


The control device 1700 may obtain information about the projection surface (S1730). For example, the control device 1700 may obtain information about the projection surface based on an image of the projection surface captured by the control device 1700 or the second electronic device 200, or may obtain information about the projection surface based on information about a distance between the first electronic device 100 and the second electronic device 200 obtained by the second electronic device 200.


The control device 1700 may segment the input image into the first area and the second area (S1740). Specifically, the control device 1700 may segment the input image into the first area and the second area based on information about the projection surface as previously described.


The control device 1700 may transmit the information about the first area to the first electronic device 100 (S1750). Further, the first electronic device 100 may display part of the input image based on the information about the first area (S1760).


The control device 1700 may transmit the information about the second area to the second electronic device 200 (S1770). While the first electronic device 100 displays part of the input image based on the information about the first area, the second electronic device 200 may project the other part of the image based on the information about the second area (S1780).



FIG. 18 is a block diagram specifically illustrating a configuration of a second electronic device 200 according to one or more embodiments. Referring to FIG. 18, the second electronic device 200 may include at least one of at least one processor 211, a projection part 212, memory 213, a communication interface 214, an operation interface 215, an I/O interface 216, a speaker 217, a mike 218, a power supply part 219, a driving part 220, or a sensor part 221. In one or more examples, the configuration shown in FIG. 18 is merely one of various examples; some of the components may be omitted and new components may be added.


The at least one processor 211 may be realized as a digital signal processor (DSP) processing a digital signal, a microprocessor, or a timing controller (TCON). However, it is not limited thereto and may include one or more of a central processing unit (CPU), a micro controller unit (MCU), a micro processing unit (MPU), a controller, an application processor (AP), a graphics processing unit (GPU), a communication processor (CP), or an Advanced RISC Machines (ARM) processor, or may be defined by a corresponding term. In one or more examples, the at least one processor 211 may be realized as a system on chip (SoC) or large scale integration (LSI) in which a processing algorithm is embedded, and may be realized in the form of a field programmable gate array (FPGA). In one or more examples, the at least one processor 211 may perform various functions by executing computer executable instructions stored in the memory 213.


The projection part 212 is configured to externally project an image. According to one of various examples of the disclosure, the projection part 212 may be realized by various projection methods (e.g. a cathode-ray tube (CRT) method, a liquid crystal display (LCD) method, a digital light processing (DLP) method, a laser method, etc.). As an example, the CRT method is basically the same as the principle of a CRT monitor: an image is displayed on the screen by magnifying it with a lens in front of the cathode-ray tube. According to the number of cathode-ray tubes, it is divided into a 1-tube type and a 3-tube type. The 3-tube type may be realized with separate red, green, and blue cathode-ray tubes.


As another example, the LCD method is a method of displaying an image by transmitting light resulting from a light source through a liquid crystal. The LCD method is divided into a single-panel type and a 3-panel type. In the case of the 3-panel type, light resulting from the light source is divided into red, green, and blue light by a dichroic mirror (a mirror which reflects light having a specific color and passes the rest) and then is transmitted through the liquid crystal. Thereafter, the light may gather at one point again.


As still another example, the DLP method is a method of displaying an image by using a digital micromirror device (DMD) chip. The projection part of the DLP method may include a light source, a color wheel, a DMD chip, a projection lens, etc. Light output from the light source takes on color as it passes through the rotating color wheel. Light passing through the color wheel is input to the DMD chip. The DMD chip includes a number of micromirrors and reflects the light input to the DMD chip. The projection lens may perform the role of magnifying the light reflected from the DMD chip to the size of the image.


As another example, the laser method includes a diode pumped solid state (DPSS) laser and a galvanometer. A laser which outputs various colors uses three DPSS lasers installed according to the RGB colors, with their optical axes overlapped by using special mirrors. The galvanometer includes a mirror and a high-output motor and moves the mirror at a high speed. For example, the galvanometer may rotate the mirror at a maximum of 40 KHz/sec. The galvanometer is mounted according to a scan direction; since the projector generally performs flatbed scanning, the galvanometers may also be arranged separately for the x and y axes.


In one or more examples, the projection part 212 may include various types of light sources. For example, the projection part 212 may include at least one light source among a lamp, an LED, or a laser.


The projection part 212 may output an image in a 4:3 aspect ratio, a 5:4 aspect ratio, or a 16:9 wide aspect ratio according to the use of the second electronic device 200, a setting of the user, etc. According to the aspect ratio, it may output an image in various resolutions such as WVGA (854*480), SVGA (800*600), XGA (1024*768), WXGA (1280*720), WXGA (1280*800), SXGA (1280*1024), UXGA (1600*1200), and Full HD (1920*1080).


In one or more examples, the projection part 212 may perform various functions for controlling an output image under the control of the at least one processor 211. For example, the projection part 212 may perform functions such as a zoom, a keystone, a quick corner (4-corner) keystone, and a lens shift.


Specifically, the projection part 212 may enlarge or reduce an image according to a distance from the screen (a projection distance). That is, a zoom function may be performed according to the distance from the screen. In one or more examples, the zoom function may include a hardware method of moving a lens to control the size of the screen and a software method of controlling the size of the screen by cropping an image, etc. In one or more examples, if the zoom function is performed, the focus of the image needs to be adjusted. For example, methods of adjusting the focus include a manual focus method, an electric method, etc. The manual focus method means a method of adjusting the focus manually, and the electric method means a method in which the projector automatically adjusts the focus by using an embedded motor when the zoom function is performed. When performing the zoom function, the projection part 212 may provide a digital zoom function through software and may provide an optical zoom function of performing the zoom function by moving the lens through the driving part 220.
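A minimal sketch of the software (digital) zoom mentioned above, assuming OpenCV is available and the frame is a numpy array (illustrative only):

```python
import cv2

def digital_zoom(frame, factor):
    """Crop the center of the frame by 1/factor and scale the crop back
    to the full output size (the software zoom-by-cropping method).
    """
    h, w = frame.shape[:2]
    crop_w, crop_h = int(w / factor), int(h / factor)
    x0, y0 = (w - crop_w) // 2, (h - crop_h) // 2
    crop = frame[y0:y0 + crop_h, x0:x0 + crop_w]
    return cv2.resize(crop, (w, h))  # enlarge the crop back to full size
```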


In one or more examples, the projection part 212 may perform a keystone correction function. If the height is not suitable for front projection, the screen may be distorted upward or downward. The keystone correction function means a function of correcting the distorted screen. For example, if the distortion occurs in the left and right directions of the screen, it may be corrected by using a horizontal keystone, and if the distortion occurs in the up and down directions, it may be corrected by using a vertical keystone. The quick corner (4-corner) keystone correction function corrects the screen if the center area of the screen is normal but the corner areas are not balanced. The lens shift function moves the picture as it is when it deviates from the screen.
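A minimal sketch of a 4-corner keystone correction implemented as a perspective pre-warp, assuming OpenCV is available and the four corners where the uncorrected image actually lands have already been measured (e.g. from a camera); all names are illustrative:

```python
import cv2
import numpy as np

def four_corner_keystone(frame, observed_corners):
    """Pre-warp the frame so it lands as a rectangle on the surface.

    observed_corners -- four (x, y) points (TL, TR, BR, BL) where the
                        uncorrected image appears, in frame coordinates
    """
    h, w = frame.shape[:2]
    target = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    observed = np.float32(observed_corners)
    # Map the distorted quadrilateral back onto the full rectangle.
    warp = cv2.getPerspectiveTransform(observed, target)
    return cv2.warpPerspective(frame, warp, (w, h))
```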


In one or more examples, the projection part 212 may provide a zoom/keystone/focus function by automatically analyzing a surrounding environment and a projection environment without a user input. Specifically, the projection part 212 may automatically provide the zoom/keystone/focus function based on a distance between the second electronic device 200 and the screen, information about a space where the second electronic device 200 is currently positioned, information about an amount of light of surroundings, and the like sensed by a sensor (a depth camera, a distance sensor, an infrared sensor, an illuminance sensor, etc.).


In one or more examples, the projection part 212 may provide a lighting function by using a light source. In particular, the projection part 212 may provide the lighting function by outputting the light source by using an LED. According to one of various examples, the projection part 212 may include one LED. According to another example, the second electronic device 200 may include a plurality of LEDs. In one or more examples, the projection part 212 may output the light source by using a surface light emitting LED according to one or more embodiments. In one or more examples, the surface light emitting LED may mean an LED having a structure in which an optical sheet is disposed on the upper side of the LED such that the light source is evenly dispersed and output. Specifically, if the light source is output through the LED, the light source may be evenly dispersed through the optical sheet, and the light source dispersed through the optical sheet may enter a display panel.


In one or more examples, the projection part 212 may provide a dimming function for controlling the strength of the light source to the user. Specifically, if a user input for controlling the strength of the light source is received from the user through the operation interface 215 (e.g. a touch display button or a dial), the projection part 212 may control the LED to output the strength of the light source corresponding to the received user input.


In one or more examples, the projection part 212 may provide the dimming function based on a content analyzed by the at least one processor 211 without the user input. Specifically, the projection part 212 may control the LED to output the strength of the light source based on information (e.g. a type of a content, brightness of a content, etc.) about the content currently provided.


In one or more examples, the projection part 212 may control a color temperature under the control of the at least one processor 211. In one or more examples, the at least one processor 211 may control the color temperature based on a content. For example, if the at least one processor 211 identifies that a content is to be output, it may obtain color information for each frame of the content. Further, the at least one processor 211 may control the color temperature based on the obtained per-frame color information. In one or more examples, the at least one processor 211 may obtain at least one main color of a frame based on the per-frame color information. Further, the at least one processor 211 may control the color temperature based on the obtained at least one main color. For example, the color temperature capable of being controlled by the at least one processor 211 may be divided into a warm type and a cold type. In one or more examples, it is assumed that the frame to be output (hereinafter, referred to as an output frame) includes a scene where a fire breaks out. The at least one processor 211 may identify (or obtain) that the main color is red based on color information included in the current output frame. Further, the at least one processor 211 may identify the color temperature corresponding to the identified main color (red). In one or more examples, the color temperature corresponding to red may be the warm type. In one or more examples, the at least one processor 211 may use an AI model to obtain the color information or the main color of a frame. According to one of various examples, the AI model may be stored in the second electronic device 200 (e.g. the memory 213). According to another example, the AI model may be stored in an external server communicable with the second electronic device 200.
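A minimal sketch of the per-frame color decision, assuming RGB frames as numpy arrays; the red-versus-blue comparison is merely an illustrative stand-in for the AI model mentioned above:

```python
import numpy as np

def color_temperature_for_frame(frame):
    """Pick a warm/cold color temperature from a frame's dominant color.

    frame -- H x W x 3 RGB array (assumed layout)
    """
    mean_r, mean_g, mean_b = (
        np.asarray(frame, dtype=np.float64).reshape(-1, 3).mean(axis=0)
    )
    # Red-dominant scenes (e.g. fire) suggest warm; blue-dominant, cold.
    return "warm" if mean_r >= mean_b else "cold"
```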


The memory 213 may be realized as internal memory such as ROM (e.g. electrically erasable programmable read-only memory (EEPROM)), and RAM included in the at least one processor 211 or may be realized as memory separate from the at least one processor 211. In this case, the memory 213 may be realized in a form of memory embedded into the second electronic device 200 according to a data storage use or may be realized in a form of memory attachable/detachable to the second electronic device 200. For example, data for driving the second electronic device 200 is stored in memory embedded in the second electronic device 200 and data for an extension function of the second electronic device 200 may be stored in memory attachable/detachable to the second electronic device 200.


In one or more examples, the memory embedded in the second electronic device 200 may be realized as at least one of volatile memory (e.g. dynamic RAM (DRAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), etc.) or non-volatile memory (e.g. one time programmable ROM (OTPROM), programmable ROM (PROM), erasable and programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), mask ROM, flash ROM, flash memory (e.g. a NAND flash, a NOR flash, etc.), a hard drive, or a solid state drive (SSD)). The memory attachable/detachable to the second electronic device 200 may be realized in the form of a memory card (e.g. a compact flash (CF), secure digital (SD), micro secure digital (Micro-SD), mini secure digital (Mini-SD), extreme digital (xD), a multi-media card (MMC), etc.) or external memory connectable to a USB port (e.g. USB memory).


The memory 213 may store at least one instruction with respect to the second electronic device 200. Further, the memory 213 may store an operating system (O/S) for driving the second electronic device 200. In one or more examples, the memory 213 may store various software programs or applications for operating the second electronic device 200 according to various embodiments of the disclosure. Still further, the memory 213 may include semiconductor memory such as flash memory or a magnetic storage medium such as a hard disk.


Specifically, the memory 213 may store various software modules for operating the second electronic device 200, and the at least one processor 211 may execute the various software modules stored in the memory 213 to control the operations of the second electronic device 200 according to various embodiments of the disclosure. That is, the memory 213 is accessed by the at least one processor 211, and reading, writing, correcting, deleting, updating, or the like of data may be performed by the at least one processor 211.


In one or more examples, in the disclosure, the term memory 213 may be used to include storage, ROM within the at least one processor 211, RAM, or a memory card (e.g. a micro SD card or a memory stick) mounted in the second electronic device 200.


The communication interface 214 is configured to perform communication with various types of external devices according to various types of communication methods. The communication interface 214 may include a wireless communication module or a wired communication module. In one or more examples, each communication module may be realized in the form of at least one hardware chip.


The wireless communication module may be a module communicating wirelessly with an external device. For example, the wireless communication module may include at least one module among a Wi-Fi module, a Bluetooth module, an infrared communication module, or another communication module.


The Wi-Fi module and the Bluetooth module may perform communication in a Wi-Fi method and a Bluetooth method, respectively. When using the Wi-Fi module or the Bluetooth module, various connection information such as a service set identifier (SSID) and a session key may first be transmitted and received to establish a communication connection, after which various information may be transmitted and received.


The infrared communication module performs communication according to an infrared data association (IrDA) technology which transmits data wirelessly over a short range by using infrared rays lying between visible light and millimeter waves.


The other communication module may include at least one communication chip performing communication according to various wireless communication standards such as ZigBee, 3rd generation (3G), 3rd generation partnership project (3GPP), long term evolution (LTE), LTE Advanced (LTE-A), 4th generation (4G), and 5th generation (5G).


The wired communication module may be a module in communication with an external device by wire. For example, the wired communication module may include at least one of a local area network (LAN) module, an Ethernet module, a pair cable, a coaxial cable, an optical fiber cable or an ultra-wide band (UWB) module.


The operation interface 215 may include various types of input devices. For example, the operation interface 215 may include a physical button. In one or more examples, the physical button may include a function key, an arrow key (e.g. four-arrow keys), or a dial button. According to one of various examples, the physical button may be realized as a plurality of keys. According to another example, the physical button may be realized as one key. In one or more examples, if the physical button is realized as one key, the second electronic device 200 may receive a user input in which the one key is pushed for a threshold time or more. If the user input in which the one key is pushed for the threshold time or more is received, the at least one processor 211 may perform a function corresponding to the user input. For example, the at least one processor 211 may provide the lighting function based on the user input.


In one or more examples, the operation interface 215 may receive the user input by using a non-contact method. When a user input is received through a contact method, a physical force must be transferred to the second electronic device 200; therefore, there may be a need for a method of controlling the second electronic device 200 regardless of physical force. Specifically, the operation interface 215 may receive a user gesture and may perform an operation corresponding to the received user gesture. In one or more examples, the operation interface 215 may receive the gesture of the user through a sensor (e.g. an image sensor or an infrared sensor).


In one or more examples, the operation interface 215 may receive the user input by using a touch method. For example, the operation interface 215 may receive the user input through a touch sensor. According to one of various examples, the touch method may be realized as a non-contact method. For example, the touch sensor may determine whether the user's body approaches within a threshold distance. In one or more examples, the touch sensor may identify the user input even if the user does not contact the touch sensor. In one or more examples, the touch sensor may also identify the user input in which the user contacts the touch sensor.


In one or more examples, the second electronic device 200 may receive the user input through various methods besides the operation interface 215. As one of various examples, the second electronic device 200 may receive the user input through an external remote control device. In one or more examples, the external remote control device may be a remote control device corresponding to the second electronic device 200 (e.g. dedicated control equipment of the second electronic device 200) or portable communication equipment of the user (e.g. a smart phone or a wearable device). In one or more examples, the portable communication equipment of the user may store an application for controlling the second electronic device 200. The portable communication equipment may obtain the user input through the stored application and may transmit the obtained user input to the second electronic device 200. The second electronic device 200 may receive the user input from the portable communication equipment and may perform an operation corresponding to the control command of the user.


In one or more examples, the second electronic device 200 may receive the user input by using voice recognition. According to one of various examples, the second electronic device 200 may receive a user voice through a mike included in the second electronic device 200. According to another example, the second electronic device 200 may receive the user voice from an external device. Specifically, the external device may obtain the user voice through a mike of the external device and may transmit the obtained user voice to the second electronic device 200. The user voice transmitted from the external device may be audio data or digital data to which the audio data is converted (e.g. the audio data converted to a frequency domain, etc.). In one or more examples, the second electronic device 200 may perform an operation corresponding to the received user voice. Specifically, the second electronic device 200 may receive the audio data corresponding to the user voice through the mike. Further, the second electronic device 200 may convert the received audio data to digital data. Still further, the second electronic device 200 may convert the digital data to text data by using a speech to text (STT) function. According to one of various examples, the STT function may be directly performed in the second electronic device 200.


According to another example, the STT function may be performed in an external server. The second electronic device 200 may transmit the digital data to the external server. The external server may convert the digital data to text data and may obtain control command data based on the converted text data. The external server may transmit the control command data (which may also include the text data) to the second electronic device 200. The second electronic device 200 may perform an operation corresponding to the user voice based on the obtained control command data.
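A minimal sketch of this voice path, assuming any speech-to-text callable (local or server-side) and a simple phrase-to-action table; all names are illustrative assumptions:

```python
def handle_voice(audio_samples, stt, commands):
    """Digitized audio -> text -> command.

    stt      -- callable returning the transcription of audio_samples
    commands -- mapping of known phrases to zero-argument actions
    """
    text = stt(audio_samples).strip().lower()
    action = commands.get(text)
    if action is not None:
        action()   # perform the operation corresponding to the user voice
    return text
```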


In one or more examples, the second electronic device 200 may provide a voice recognition function by using one assistant (or an AI secretary, e.g. Bixby™, etc.). However, this is merely one of various examples, and the voice recognition function may be provided through a plurality of assistants. In one or more examples, the second electronic device 200 may select one of the plurality of assistants based on a trigger word corresponding to the assistant or a specific key present on a remote controller, to provide the voice recognition function.


In one or more examples, the second electronic device 200 may receive the user input by using a screen interaction. The screen interaction may mean a function in which the second electronic device 200 identifies whether a predetermined event occurs through an image projected onto the screen (or the projection surface) and obtains the user input based on the predetermined event. In one or more examples, the predetermined event may mean an event in which a predetermined object is identified at a specific location (e.g. a location where a UI for receiving the user input is projected). In one or more examples, the predetermined object may include at least one of part of the user's body (e.g. a finger), a pointer, or a laser pointer. If the predetermined object is identified at the location corresponding to the projected UI, the second electronic device 200 may identify that a user input selecting the projected UI is received. For example, the second electronic device 200 may project a guide image to display the UI on the screen. Further, the second electronic device 200 may identify whether the user selects the projected UI. Specifically, if the predetermined event is identified at the location of the projected UI, the second electronic device 200 may identify that the user selects the projected UI. In one or more examples, the projected UI may include one or more items. In one or more examples, the second electronic device 200 may perform space analysis for identifying whether the predetermined event is present at the location of the projected UI. In one or more examples, the second electronic device 200 may perform the space analysis through a sensor (e.g. an image sensor, an infrared sensor, a depth sensor, a distance sensor, etc.). The second electronic device 200 may identify whether the predetermined event occurs at a specific location (a location where the UI is projected) by performing the space analysis. Further, if it is identified that the predetermined event occurs at the specific location (the location where the UI is projected), the second electronic device 200 may identify that the user input for selecting the UI corresponding to the specific location is received.
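A minimal sketch of the final hit test, assuming the space analysis has already produced the (x, y) location of the predetermined object and the projected UI items are known rectangles in the same coordinates (names are illustrative):

```python
def ui_item_selected(pointer_xy, ui_items):
    """Return the id of the projected UI item at the detected object's
    location, or None if the predetermined event missed every item.

    pointer_xy -- (x, y) where the predetermined object was identified
    ui_items   -- mapping of item id to (x, y, w, h) box, same coordinates
    """
    px, py = pointer_xy
    for item_id, (x, y, w, h) in ui_items.items():
        if x <= px < x + w and y <= py < y + h:
            return item_id   # the predetermined event occurred on this item
    return None
```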


The I/O interface 216 is configured to input/output at least one of an audio signal or an image signal. The I/O interface 216 may receive at least one of the audio signal or the image signal from an external device and may output a control command to the external device.


According to one or more embodiments, the I/O interface 216 may be realized as an interface inputting/outputting only the audio signal and an interface inputting/outputting only the image signal, or may be realized as one interface inputting/outputting both the audio signal and the image signal.


In one or more examples, the I/O interface 216 according to one of various examples of the disclosure may be realized as at least one wired I/O interface among a high definition multimedia interface (HDMI), a mobile high-definition link (MHL), a universal serial bus (USB), a C-type USB, a display port (DP), a Thunderbolt, a video graphics array (VGA) port, an RGB port, a D-subminiature (D-SUB) connector, or a digital visual interface (DVI). According to one of various examples, the wired I/O interface may be realized as an interface inputting/outputting only the audio signal and an interface inputting/outputting only the image signal, or may be realized as one interface inputting/outputting both the audio signal and the image signal.


In one or more examples, the second electronic device 200 may receive data through the wired I/O interface. However, it is merely one of various examples and power may be provided through the wired I/O interface. For example, the second electronic device 200 may receive the power from an external battery through the C-type USB or may receive the power from an electric outlet through a power adapter. As another example, the second electronic device 200 may receive power from the external device (e.g. a notebook computer, a monitor, etc.) through the DP.


In one or more examples, the audio signal may be received through the wired I/O interface while the image signal is received through the wireless I/O interface (or a communication interface). Alternatively, the audio signal may be received through the wireless I/O interface (or the communication interface) while the image signal is received through the wired I/O interface.


The speaker 217 is configured to output an audio signal. In particular, the speaker 217 may include an audio output mixer, an audio signal processor, and a sound output module. The audio output mixer may synthesize a plurality of audio signals to be output into at least one audio signal. For example, the audio output mixer may synthesize an analog audio signal and another analog audio signal (e.g. an analog audio signal received from the outside) into at least one analog audio signal. The sound output module may include a speaker or an output terminal. According to one of various examples, the sound output module may include a plurality of speakers. In this case, the sound output module may be disposed in a body. Sound emitted while covering at least part of a diaphragm of the sound output module may pass through a waveguide and be transferred to the outside of the body. The sound output module may include a plurality of sound output units, and the plurality of sound output units may be symmetrically arranged with respect to the exterior of the body. As a result, sound may be emitted in all directions of 360°.


The mike 218 is configured to receive user voices or other sounds and convert them to audio data. The mike 218 may receive the user voice in an active state. For example, the mike 218 may be integrated into the upper side, the front, the side, or the like of the second electronic device 200. The mike 218 may include various configurations such as a mike collecting a user voice in an analog form, an amplifier circuit amplifying the collected user voice, an A/D converting circuit sampling the amplified user voice to convert it to a digital signal, and a filter circuit removing a noise component from the converted digital signal.


The power supply part 219 may receive power from the outside and may supply power to various configurations of the second electronic device 200. According to one of various examples of the disclosure, the power supply part 219 may receive power through various methods. As one of various examples, the power supply part 219 may receive power by using a connector 230. In one or more examples, the power supply part 219 may receive power by using a DC power cord of 220V. However, the disclosure is not limited thereto and the second electronic device 200 may receive power by using a USB power cord or may receive power by using a wireless charging method.


In one or more examples, the power supply part 219 may receive power by using an internal battery or an external battery. According to one of various examples of the disclosure, the power supply part 219 may receive power through the internal battery. As an example, the power supply part 219 may charge the internal battery by using at least one of a DC power cord of 220V, a USB power cord, or a C-type USB power cord and may receive power through the charged internal battery. In one or more examples, the power supply part 219 according to one of various examples of the disclosure may receive power through the external battery. As an example, if a connection between the second electronic device 200 and the external battery is established through a wired method such as a USB power cord, a C-type USB power cord, or a socket groove, the power supply part 219 may receive power through the external battery. That is, the power supply part 219 may directly receive power from the external battery, or may charge the internal battery through the external battery and receive power from the charged internal battery.


According to the disclosure, the power supply part 219 may receive power by using at least one of the plurality of power supply methods as aforementioned.


In one or more examples, in relation to power consumption, the second electronic device 200 may have to keep its power consumption at or below a preset value (e.g. 43 W) due to the form of a socket, other standards, etc. In one or more examples, the second electronic device 200 may vary its power consumption so as to reduce it when using the battery. That is, the second electronic device 200 may vary power consumption based on the power supply method, power usage, etc.


The driving part 220 may drive at least one hardware component included in the second electronic device 200. The driving part 220 may generate a physical driving force and may transfer it to at least one hardware component included in the second electronic device 200.


In one or more examples, the driving part 220 may generate driving power for movement of a hardware configuration included in the second electronic device 200 (e.g. movement of the second electronic device 200) or for a rotating operation (e.g. rotation of the projection lens) of a configuration.


The driving part 220 may control a projection direction (or a projection angle) of the projection part 212. In one or more examples, the driving part 220 may move the location of the second electronic device 200. In one or more examples, the driving part 220 may control a moving member 209 for moving the second electronic device 200. For example, the driving part 220 may control the moving member 209 by using a motor.


The sensor part 221 may include at least one sensor. Specifically, the sensor part 221 may include at least one of a tilt sensor sensing a tilt of the second electronic device 200 or an image sensor capturing an image. In one or more examples, the tilt sensor may be an acceleration sensor or a gyro sensor and the image sensor may mean a camera or a depth camera. In one or more examples, the tilt sensor may be described as a motion sensor. In one or more examples, the sensor part 221 may include various sensors besides the tilt sensor or the image sensor. For example, the sensor part 221 may include an illuminance sensor or a distance sensor. The distance sensor may include a time of flight (ToF) sensor. In one or more examples, the sensor part 221 may include a LIDAR sensor.


In one or more examples, the second electronic device 200 may control a lighting function in conjunction with external equipment. Specifically, the second electronic device 200 may receive lighting information from the external equipment. In one or more examples, the lighting information may include at least one of information about brightness or information about a color temperature set by the external equipment. In one or more examples, the external equipment may mean equipment connected to the same network as the second electronic device 200 (e.g. IoT equipment included in the same home/company network) or equipment that is not connected to the same network as the second electronic device 200 but is communicable with the second electronic device 200 (e.g. a remote control server). For example, it is assumed that external lighting equipment (IoT equipment) included in the same network as the second electronic device 200 outputs red lighting at a brightness of 50. The external lighting equipment (IoT equipment) may directly or indirectly transmit lighting information (e.g. information indicating that red lighting is output at a brightness of 50) to the second electronic device 200. In one or more examples, the second electronic device 200 may control output power of a light source based on the lighting information received from the external lighting equipment. For example, if the lighting information received from the external lighting equipment includes information indicating that red lighting is output at a brightness of 50, the second electronic device 200 may output red lighting at a brightness of 50.
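

As a non-limiting illustration of the red/brightness-50 example above, the following Python sketch mirrors lighting information received from external lighting equipment; the message format and the set_light_output() helper are hypothetical assumptions.

    # Illustrative sketch only: mirroring lighting information from IoT equipment.
    def set_light_output(color: str, brightness: int) -> None:
        # Placeholder for driving the light source; a real device would convert
        # (color, brightness) into output power of the light source here.
        print(f"light source -> color={color}, brightness={brightness}")

    def on_lighting_info(message: dict) -> None:
        """Handle a lighting-information message from external equipment."""
        set_light_output(color=message.get("color", "white"),
                         brightness=message.get("brightness", 100))

    on_lighting_info({"color": "red", "brightness": 50})  # -> red at brightness 50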


In one or more examples, the second electronic device 200 may control a lighting function based on biometric data. Specifically, the at least one processor 211 may obtain biometric data of a user. In one or more examples, the biometric data may include at least one of a body temperature of the user, a heart rate, a blood pressure, breathing, or an electrocardiogram. In one or more examples, the biometric data may include various information besides the aforementioned information. As an example, the second electronic device 200 may include a sensor for measuring biometric data. The at least one processor 211 may obtain biometric data of the user through the sensor and may control the output of the light source based on the obtained biometric data. As another example, the at least one processor 211 may receive biometric data from the external equipment through the I/O interface 216. In one or more examples, the external equipment may mean a portable communication device of the user (e.g. a smart phone or a wearable device). The at least one processor 211 may obtain biometric data of the user from the external equipment and may control the output of the light source based on the obtained biometric data. In one or more examples, according to one or more embodiments, the second electronic device 200 may identify whether the user is sleeping, and if it is identified that the user is sleeping (or preparing to sleep), the at least one processor 211 may control the output of the light source based on the biometric data of the user.
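

The control flow above can be illustrated, without limitation, by the following Python sketch, which dims the light source when biometric data suggests the user is sleeping; the heart-rate heuristic and thresholds are hypothetical assumptions, not a medical determination.

    # Illustrative sketch only: controlling the light source from biometric data.
    def is_sleeping(heart_rate_bpm: float, resting_bpm: float = 60.0) -> bool:
        """Rough assumed heuristic: heart rate well below the resting rate."""
        return heart_rate_bpm < resting_bpm * 0.9

    def brightness_from_biometrics(heart_rate_bpm: float) -> int:
        """Return a light-source brightness (0-100) based on biometric data."""
        if is_sleeping(heart_rate_bpm):
            return 0   # user sleeping (or preparing to sleep): lighting off
        return 70      # assumed comfortable default brightness

    print(brightness_from_biometrics(52.0))  # -> 0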


In one or more examples, the second electronic device 200 according to one of various examples of the disclosure may provide various smart functions.


Specifically, the second electronic device 200 may be connected to a portable terminal device for controlling the second electronic device 200, so that the screen output from the second electronic device 200 may be controlled through user input provided from the portable terminal device. As one or more embodiments, the portable terminal device may be realized as a smart phone including a touch display; the second electronic device 200 may receive, from the portable terminal device, screen data provided by the portable terminal device and output it; and the screen output from the second electronic device 200 may be controlled according to the user input provided from the portable terminal device.


The second electronic device 200 may perform connection with the portable terminal device through various communication methods such as Miracast, Airplay, wireless DEX, and a remote PC method to share a content or music provided by the portable terminal device.


Further, connection between the portable terminal device and the second electronic device 200 may be performed through various connection methods. As one of various examples, wireless connection may be performed by searching for the second electronic device 200 in the portable terminal device, or wireless connection may be performed by searching for the portable terminal device in the second electronic device 200. Further, the second electronic device 200 may output a content provided from the portable terminal device.


As one of various examples, if, while a specific content or music is being output from the portable terminal device, the portable terminal device is positioned around the second electronic device 200 and the preset gesture is then sensed through the display of the portable terminal device (e.g. a motion tap view), the second electronic device 200 may output the content or music being output from the portable terminal device.


As one of various examples, in a state where a specific content or music is being output from the portable terminal device, if the portable terminal device comes within a preset distance of the second electronic device 200 (e.g. a non-contact tap view) or the portable terminal device contacts the second electronic device 200 twice at a short interval (e.g. a contact tap view), the second electronic device 200 may output the content or music being output from the portable terminal device.
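

As a non-limiting illustration of the non-contact and contact tap views described above, the following Python sketch decides whether to start outputting the portable terminal device's content; the distance and time thresholds are hypothetical assumptions.

    # Illustrative sketch only: detecting non-contact and contact tap views.
    PRESET_DISTANCE_M = 0.3    # assumed proximity threshold (non-contact tap view)
    DOUBLE_TAP_WINDOW_S = 0.5  # assumed maximum interval between two contacts

    def should_mirror(distance_m: float, contact_times: list[float]) -> bool:
        """Decide whether to output the portable terminal device's content."""
        if distance_m <= PRESET_DISTANCE_M:
            return True   # non-contact tap view: device brought near
        if len(contact_times) >= 2 and \
                contact_times[-1] - contact_times[-2] <= DOUBLE_TAP_WINDOW_S:
            return True   # contact tap view: two contacts at a short interval
        return False

    print(should_mirror(0.2, []))            # -> True (proximity)
    print(should_mirror(1.0, [10.0, 10.3]))  # -> True (double contact)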


In the aforementioned examples, the same screen as the screen provided by the portable terminal device is provided by the second electronic device 200, but the disclosure is not limited thereto. That is, when connection between the portable terminal device and the second electronic device 200 is established, the portable terminal device may output a first screen provided by the portable terminal device, and the second electronic device 200 may output a second screen, provided by the portable terminal device, that is different from the first screen. As an example, the first screen may be a screen provided by a first application installed in the portable terminal device and the second screen may be a screen provided by a second application installed in the portable terminal device. As an example, the first screen and the second screen may be different screens provided by one application installed in the portable terminal device. In one or more examples, as an example, the first screen may be a screen including a UI in the form of a remote controller to control the second screen.
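

As a non-limiting illustration of a first screen acting as a remote-controller UI for the second screen, the following Python sketch applies commands from the first screen to the second screen's state; the command set and state keys are hypothetical assumptions.

    # Illustrative sketch only: remote-controller UI commands controlling the
    # second screen. The command names and state dictionary are assumed.
    COMMANDS = {"play", "pause", "volume_up", "volume_down"}

    def handle_remote_command(command: str, state: dict) -> dict:
        """Apply a first-screen remote command to the second screen's state."""
        if command not in COMMANDS:
            raise ValueError(f"unsupported command: {command}")
        if command == "play":
            state["playing"] = True
        elif command == "pause":
            state["playing"] = False
        else:  # volume_up / volume_down
            delta = 5 if command == "volume_up" else -5
            state["volume"] = max(0, min(100, state.get("volume", 50) + delta))
        return state

    print(handle_remote_command("play", {"volume": 50}))  # -> {'volume': 50, 'playing': True}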


According to the disclosure, the second electronic device 200 may output an idle screen. As an example, if a connection between the second electronic device 200 and the external equipment is not established, or there is no input received from the external equipment for a preset time, the second electronic device 200 may output the idle screen. A condition under which the second electronic device 200 outputs the idle screen is not limited to the above example, and the idle screen may be output under various conditions.
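

As a non-limiting illustration of the idle-screen condition above, the following Python sketch checks whether no external equipment is connected or no input has arrived for a preset time; the timeout value and function name are hypothetical assumptions.

    # Illustrative sketch only: deciding when to output the idle screen.
    import time

    IDLE_TIMEOUT_S = 300.0  # assumed preset time without input

    def should_show_idle_screen(connected: bool, last_input_ts: float,
                                now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        if not connected:
            return True                                 # no external equipment connected
        return now - last_input_ts >= IDLE_TIMEOUT_S    # no input for the preset time

    print(should_show_idle_screen(False, 0.0, now=1.0))   # -> True
    print(should_show_idle_screen(True, 0.0, now=100.0))  # -> False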


The second electronic device 200 may output the idle screen in the form of a blue screen, but the disclosure is not limited thereto. As an example, the second electronic device 200 may extract only the form of a specific object from data received from the external equipment to obtain an atypical object, and may output the idle screen including the obtained atypical object.


In one or more examples, the second electronic device 200 may further include a display.


The display may be realized as displays having various forms such as a liquid crystal display (LCD), an organic light emitting diode (OLED) display, and a plasma display panel (PDP). The display may include a driving circuit, which may be realized in the form of an amorphous silicon thin film transistor (a-Si TFT), a low temperature poly silicon (LTPS) TFT, an organic TFT (OTFT), or the like, a back light unit, etc. In one or more examples, the display may be realized as a touch screen combined with a touch sensor, a flexible display, a three-dimensional (3D) display, etc. In one or more examples, according to one of various examples of the disclosure, the display may include not only a display panel outputting an image but also a bezel housing the display panel. In particular, the bezel according to one of various examples of the disclosure may include the touch sensor for sensing the user interaction.


In one or more examples, the second electronic device 200 may further include a shutter part.


The shutter part may include at least one of a shutter, a fixing member, a rail, or a body.


In one or more examples, the shutter may block light output from the projection part 212. In one or more examples, the fixing member may fix a location of the shutter. In one or more examples, the rail may be a path for moving the shutter and the fixing member. In one or more examples, the body may be a component including the shutter and the fixing member.


In one or more examples, the operations described in FIGS. 1 to 15 may be performed by the second electronic device 200 described in FIG. 18 rather than by the first electronic device 100.


In one or more examples, a method according to one of the various examples of the disclosure may be provided to be included in a computer program product. The computer program product may be traded between a seller and a buyer as a product. The computer program product may be distributed in the form of a storage medium that is readable by machines (e.g.: a compact disc read only memory (CD-ROM)), or may be distributed (e.g. downloaded or uploaded) on-line through an application store (e.g.: Play Store™) or directly between two user devices (e.g. smart phones). In the case of on-line distribution, at least part of the computer program product (e.g. a downloadable app) may be stored at least temporarily in a storage medium that is readable by machines, such as a server of a manufacturer, a server of an application store, or memory of a relay server, or may be generated temporarily.


A method according to one of various examples of the disclosure may be implemented as software including instructions stored in machine-readable storage media (e.g.: computers). The machines refer to devices that call instructions stored in a storage medium and can operate according to the called instructions, and may include an electronic device (e.g. a TV) according to the disclosed examples.


In one or more examples, a storage medium which is readable by machines may be provided in the form of a non-transitory storage medium. In one or more examples, the ‘non-transitory storage medium’ is a tangible device and merely means that it does not include a signal (e.g. an electromagnetic wave). This term does not distinguish the case where data is semi-permanently stored in a storage medium from the case where it is temporarily stored therein. As an example, the ‘non-transitory storage medium’ may include a buffer where data is temporarily stored.


If the instructions are performed by a processor, the processor may perform a function corresponding to the instructions directly or by using other components under control of the processor. The instructions may include code generated or implemented by a compiler or an interpreter.


Although preferred embodiments of the present disclosure have been shown and described above, the disclosure is not limited to the specific embodiments described above, and various modifications may be made by one of ordinary skill in the art without departing from the gist of the disclosure as claimed in the claims, and such modifications are not to be understood in isolation from the technical spirit or prospect of the disclosure.

Claims
  • 1. A first electronic device that provides a first part of an input image in conjunction with a second electronic device projecting a second part of the input image, the first electronic device comprising: a communication interface; a display; a memory storing one or more instructions; and at least one processor operatively coupled to the communication interface, the display and the memory and configured to execute the one or more instructions in the memory, wherein the one or more instructions, when executed by the at least one processor, cause the first electronic device to: obtain information about a projection surface to which the second electronic device projects the second part of the input image, segment an input image into the first part corresponding to a first area and the second part corresponding to a second area based on the information about the projection surface and information about the first electronic device, and control the display to display the first part of the input image based on information about the first area, and control the communication interface to transmit information about the second part corresponding to the second area to the second electronic device.
  • 2. The first electronic device of claim 1, wherein the one or more instructions, when executed by the at least one processor, cause the first electronic device to: receive, through the communication interface, an image about the projection surface photographed by a camera included in the second electronic device; and identify an area where the first electronic device is positioned on the projection surface based on the image about the projection surface.
  • 3. The first electronic device of claim 2, wherein the one or more instructions, when executed by the at least one processor, cause the first electronic device to: identify the area where the first electronic device is positioned on the projection surface of the image about the projection surface as the first area and identify other areas excluding the first area in the image about the projection surface as the second area.
  • 4. The first electronic device of claim 1, wherein the one or more instructions, when executed by the at least one processor, cause the first electronic device to: receive, through the communication interface, information about a distance between the second electronic device and the projection surface via a distance sensor included in the second electronic device, obtain information about a size and a location of the projection surface based on the information about the distance between the second electronic device and the projection surface, and segment the input image into the first part corresponding to the first area and the second part corresponding to the second area based on the information about the size and the location of the projection surface.
  • 5. The first electronic device of claim 1, wherein the one or more instructions, when executed by the at least one processor, cause the first electronic device to: convert a channel of an audio corresponding to the input image to a channel corresponding to an output specification of the second electronic device and an output specification of the first electronic device, and map the audio to the converted channel based on information about a location of the second electronic device.
  • 6. The first electronic device of claim 5, wherein the one or more instructions, when executed by the at least one processor, cause the first electronic device to: analyze the second part of the input image projected to the projection surface and perform post-processing on the audio per the converted channel.
  • 7. The first electronic device of claim 1, wherein the one or more instructions, when executed by the at least one processor, cause the first electronic device to: control the display to display a screen of a preset color in the first electronic device and control the communication interface to transmit, to the second electronic device, a control command to project a screen of the preset color, receive, from the second electronic device through the communication interface, an image capturing the screen of the preset color displayed by the first electronic device and the screen of the preset color projected by the second electronic device, and control brightness of at least one of a screen output by the first electronic device or a screen projected by the second electronic device based on the image capturing the screen of the preset color received from the second electronic device.
  • 8. The first electronic device of claim 1, wherein the one or more instructions, when executed by the at least one processor, cause the first electronic device to: control the display to display a screen which changes to a pattern preset in the first electronic device and control the communication interface to transmit, to the second electronic device, a control command to project a screen which changes to a preset pattern, receive, from the second electronic device through the communication interface, an image capturing the screen changing to the preset pattern displayed by the first electronic device and the screen changing to the preset pattern projected by the second electronic device, and control at least one of an image output timing of the first electronic device or an image projecting timing of the second electronic device based on the image received from the second electronic device.
  • 9. The first electronic device of claim 1, wherein the input image comprises a flag indicating whether the input image is configured to be segmented into the first part and the second part; and wherein the one or more instructions, when executed by the at least one processor, cause the first electronic device to: determine whether to segment the input image into the first part corresponding to the first area and the second part corresponding to the second area based on the flag.
  • 10. The first electronic device of claim 1, wherein the one or more instructions, when executed by the at least one processor, cause the first electronic device to: operate in one of a first mode which displays the input image in the first electronic device and a second mode which segments and displays the input image through the first electronic device and the second electronic device.
  • 11. A method that controls a first electronic device providing a first part of an input image in conjunction with a second electronic device projecting a second part of the input image, the method comprising: obtaining information about a projection surface to which the second electronic device projects the second part of the input image; segmenting the input image into the first part corresponding to a first area and the second part corresponding to a second area based on information about the projection surface and information about the first electronic device; and displaying the first part of the input image based on information about the first area and transmitting information about the second part corresponding to the second area to the second electronic device.
  • 12. The method of claim 11, wherein the obtaining the information about the projection surface further comprises obtaining an image about the projection surface photographed by a camera included in the second electronic device; and wherein the method further comprises identifying an area where the first electronic device is positioned on the projection surface based on the obtained image about the projection surface.
  • 13. The method of claim 12, further comprising: identifying the area where the first electronic device is positioned on the projection surface of the obtained image as the first area and identifying other areas excluding the first area in the obtained image as the second area.
  • 14. The method of claim 11, wherein the obtaining the information about the projection surface further comprises: obtaining information about a distance between the second electronic device and the projection surface through a distance sensor included in the second electronic device; and obtaining information about a size and a location of the projection surface based on information about the distance between the second electronic device and the projection surface, wherein the segmenting further comprises segmenting the input image into the first part corresponding to the first area and the second part corresponding to the second area based on the information about the size and the location of the projection surface.
  • 15. The method of claim 11, comprising: converting a channel of an audio corresponding to the input image to a channel corresponding to an output specification of the second electronic device and an output specification of the first electronic device; and mapping the audio to the converted channel based on information about a location of the second electronic device.
  • 16. A non-transitory computer readable medium having instructions stored therein, which when executed by a processor in a first electronic device, controls the processor to execute a method that controls the first electronic device to provide a first part of an input image in conjunction with a second electronic device projecting a second part of the input image, the method comprising: obtaining information about a projection surface to which the second electronic device projects the second part of the input image; segmenting the input image into the first part corresponding to a first area and the second part corresponding to a second area based on information about the projection surface and information about the first electronic device; and displaying the first part of the input image based on information about the first area and transmitting information about the second part corresponding to the second area to the second electronic device.
  • 17. The non-transitory computer readable medium of claim 16, wherein the obtaining the information about the projection surface further comprises obtaining an image about the projection surface photographed by a camera included in the second electronic device; and wherein the method further comprises identifying an area where the first electronic device is positioned on the projection surface based on the obtained image about the projection surface.
  • 18. The non-transitory computer readable medium of claim 17, wherein the method further comprises: identifying the area where the first electronic device is positioned on the projection surface of the obtained image as the first area and identifying the other areas excluding the first area in the obtained image as the second area.
  • 19. The non-transitory computer readable medium of claim 16, wherein the obtaining the information about the projection surface further comprises: obtaining information about a distance between the second electronic device and the projection surface through a distance sensor included in the second electronic device; and obtaining information about a size and a location of the projection surface based on information about the distance between the second electronic device and the projection surface, wherein the segmenting further comprises segmenting the input image into the first part corresponding to the first area and the second part corresponding to the second area based on the information about the size and the location of the projection surface.
  • 20. The non-transitory computer readable medium of claim 16, wherein the method further comprises: converting a channel of an audio corresponding to the input image to a channel corresponding to an output specification of the second electronic device and an output specification of the first electronic device; and mapping the audio to the converted channel based on information about a location of the second electronic device.
Priority Claims (2)
Number Date Country Kind
10-2022-0077125 Jun 2022 KR national
10-2022-0114338 Sep 2022 KR national
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of PCT International Application No. PCT/KR2023/007194, which was filed on May 25, 2023, and claims priority to Korean Patent Application No. 10-2022-0077125, filed on Jun. 23, 2022, and to Korean Patent Application No. 10-2022-0114338, filed on Sep. 8, 2022, in the Korean Intellectual Property Office, the disclosures of each of which are incorporated by reference herein in their entirety.

Continuations (1)
Number Date Country
Parent PCT/KR2023/007194 May 2023 WO
Child 18999545 US