One disclosed aspect of the embodiments relates to an electronic apparatus and a method for controlling the electronic apparatus.
Recently, various standards, such as the perceptual quantizer (PQ) and the hybrid log-gamma (HLG), have been used as standards for image signals having wide dynamic ranges. Such a dynamic range is called a “high dynamic range (HDR)”. One demand of users who handle HDR images is to be able to check the position of a feature region (e.g. a high brightness region or a low brightness region) in an HDR image.
Japanese Patent Application Publication No. 2021-173901 discloses a technique that detects, based on a brightness distribution of an image, a pixel having the highest brightness value in a high brightness region that is larger than a predetermined size, and displays the brightness value of the detected pixel around the pixel.
Another demand is to be able to select one appropriate feature region from a plurality of feature regions in order to check that feature region closely. Japanese Patent Application Publication No. 2021-173901, however, discloses only a technique to display the highest brightness value for each of a plurality of high brightness regions, and does not disclose a technique to select one high brightness region (feature region).
An aspect of the disclosure is an electronic apparatus including at least one memory and at least one processor which function as an acquiring unit, a position acquiring unit, and a control unit. The acquiring unit is configured to acquire an image. The position acquiring unit is configured to acquire a position of a display item which is displayed at a currently selected position in the image. The control unit is configured to select one of a plurality of feature regions based on a distance between each of the plurality of feature regions and the display item in a case where the image includes the plurality of feature regions.
An aspect of the disclosure is a method for controlling an electronic apparatus including an acquiring step, a position acquiring step, and a control step. The acquiring step acquires an image. The position acquiring step acquires a position of a display item which is displayed at a currently selected position in the image. The control step, in a case where the image includes a plurality of feature regions, selects one of the plurality of feature regions based on a distance between each of the plurality of feature regions and the display item.
Further features of the disclosure will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Embodiments of the disclosure will be described with reference to the drawings. Same or equivalent composing elements, members or processing steps indicated in each drawing are denoted with a same reference sign, and redundant description may be omitted. In each drawing, some of the composing elements, members and processing steps may be omitted. In the following, the term “unit” may refer to a software context, a hardware context, or a combination of software and hardware contexts. In the software context, the term “unit” refers to a functionality, an application, a software module, a function, a routine, a set of instructions, or a program that can be executed by a programmable processor such as a microprocessor, a central processing unit (CPU), or a specially designed programmable device or controller. A memory contains instructions or a program that, when executed by the CPU, cause the CPU to perform operations corresponding to units or functions. In the hardware context, the term “unit” refers to a hardware element, a circuit, an assembly, a physical structure, a system, a module, or a subsystem. It may include mechanical, optical, or electrical components, or any combination of them. It may include active (e.g., transistors) or passive (e.g., capacitors) components. It may include semiconductor devices having a substrate and other layers of materials having various concentrations of conductivity. It may include a CPU or a programmable processor that can execute a program stored in a memory to perform specified functions. It may include logic elements (e.g., AND, OR) implemented by transistor circuits or any other switching circuits. In the combination of software and hardware contexts, the term “unit” or “circuit” refers to any combination of the software and hardware contexts as described above. In addition, the term “element,” “assembly,” “component,” or “device” may also refer to “circuit” with or without integration with packaging materials.
In the following description, a region of a plurality of pixels, of which brightness values are higher than a first brightness value, is called a “high brightness region”. A region of a plurality of pixels, of which brightness values are lower than a second brightness value (brightness value lower than the first brightness value), is called a “low brightness region”. A region having a distinctive feature, such as a high brightness region or a low brightness region, is called a “feature region”.
In Embodiment 1, a display control device 100 will be described which selects one feature region out of a plurality of feature regions, considering the distance between each of the plurality of feature regions and a currently selected position (display item) (and the distance between each of the plurality of feature regions and the center of the image).
A possible method for selecting one feature region out of a plurality of feature regions (a method other than that of Embodiment 1) is, for example, a method in which the display control device sequentially searches the image in the top to bottom direction and selects the feature region that is detected first. Here, in the case where a plurality of feature regions are disposed at the same height, for example, the image is searched in the left to right direction and the feature region that is detected first is selected. In this method, however, the feature region is selected without considering user operation or the like. Therefore it is quite likely that the feature region desired by the user is not selected.
In the following, an example of selecting one high brightness region as a feature region will be described. However, any region, instead of the high brightness region, may be used as the feature region. For example, instead of the high brightness region, a region that can be detected from an image based on a predetermined standard (e.g. low brightness region, high saturation region) may be used. In a case where an object that can be detected by machine learning or the like (predetermined type of object, such as a person, an animal or a car) is included in an image, the region of this object may be used as the feature region instead of the high brightness region. This is the same for embodiments other than Embodiment 1.
The image acquiring unit 101 acquires image signals, which are moving image contents constituted of a plurality of frames (images), from an external device (e.g. imaging device, reproducing device). The image acquiring unit 101 outputs the acquired image signals to the brightness information acquiring unit 102 as an image for each frame. The image in each frame is constituted of a plurality of pixels (e.g. in the case of an image signal of which resolution is 1920 × 1080, the number of pixels is 1920 × 1080 = 2073600). The image acquiring unit 101 is an input terminal conforming to the serial digital interface (SDI) or the high-definition multimedia interface (HDMI®), for example.
The image 200 indicated in
For example, it is assumed that the brightness value of each pixel constituting the object 201 is 900 cd/m2, the brightness value of each pixel constituting the object 202 is 800 cd/m2, and the brightness value of each pixel constituting the object 203 is 700 cd/m2. It is also assumed that the brightness value of each pixel constituting the object 204 is 210 cd/m2, the brightness value of each pixel constituting the object 205 is 850 cd/m2, and the brightness value of each pixel constituting the object 206 is 700 cd/m2. Further, in the image 200, it is assumed that the size of the object 201 is 30000, the size of the object 202 is 110000, and the size of the object 203 is 80. It is also assumed that the size of the object 204 is 10000, the size of the object 205 is 20000, and the size of the object 206 is 80000. Here a size of an object in an image refers to a total number of pixels corresponding to a region occupied by the object in the image. However, the size is not limited to a number of pixels, but may be indicated by a different index to indicate a size of an object in an image (e.g. area of a region occupied by the object).
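For concreteness, the example brightness values and sizes assumed above can be held in a simple data structure; the following Python snippet is an illustrative representation only, and is not part of the embodiments.

```python
# Example objects of the image 200 (brightness in cd/m2, size in pixels),
# using the values assumed in the text above.
OBJECTS = {
    201: {"brightness": 900, "size": 30000},
    202: {"brightness": 800, "size": 110000},
    203: {"brightness": 700, "size": 80},
    204: {"brightness": 210, "size": 10000},
    205: {"brightness": 850, "size": 20000},
    206: {"brightness": 700, "size": 80000},
}
```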
The brightness information acquiring unit 102 analyzes the image outputted from the image acquiring unit 101, and outputs the analysis result to the display control unit 106. In addition, the brightness information acquiring unit 102 outputs the image outputted from the image acquiring unit 101 to the image processing unit 103. For each acquired image, the brightness information acquiring unit 102 converts the signal values of all the pixels of the image into brightness values, and outputs the information on the converted brightness value of each pixel to the display control unit 106 as the brightness distribution. Here the brightness distribution is used, but a distribution of information other than brightness values (e.g. gradation values) may be used instead. The brightness information acquiring unit 102 also includes a frame memory, and writes the image outputted from the image acquiring unit 101 to the frame memory. Then the brightness information acquiring unit 102 converts the signal values of all the pixels of the image into brightness values with reference to the frame memory in which the image was written.
To convert the signal value of a pixel into a brightness value, an electro-optical transfer function (EOTF) of PQ, HLG, gamma 2.2 or the like is used.
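As one non-limiting illustration of such a conversion, the following Python sketch applies the PQ EOTF defined in SMPTE ST 2084 to signal values normalized to the range 0 to 1; the function name and the normalization assumption are illustrative and not part of the embodiments.

```python
import numpy as np

# Constants of the PQ EOTF (SMPTE ST 2084)
M1 = 2610 / 16384           # 0.1593017578125
M2 = 2523 / 4096 * 128      # 78.84375
C1 = 3424 / 4096            # 0.8359375
C2 = 2413 / 4096 * 32       # 18.8515625
C3 = 2392 / 4096 * 32       # 18.6875

def pq_eotf(signal):
    """Convert normalized PQ signal values (0..1) to luminance in cd/m2."""
    e = np.clip(np.asarray(signal, dtype=float), 0.0, 1.0) ** (1.0 / M2)
    y = np.maximum(e - C1, 0.0) / (C2 - C3 * e)
    return 10000.0 * y ** (1.0 / M1)

# A normalized signal value of 0.75 corresponds to roughly 1000 cd/m2.
print(pq_eotf(0.75))
```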
The image processing unit 103 performs image processing on the image outputted from the brightness information acquiring unit 102. The image processing unit 103 outputs the image, generated after performing the image processing, to the drawing unit 104. The image processing performed by the image processing unit 103 is processing to convert an image outputted from the brightness information acquiring unit 102 into data that can be handled by the display unit 105, based on the various settings set by the display control unit 106. The various settings here are, for example, the EOTF setting, color gamut setting (e.g. ITU-R BT.709 or ITU-R BT.2020), and signal range setting (e.g. limited range or full range).
The drawing unit 104 draws, on the image outputted from the image processing unit 103, the brightness value of a currently selected pixel and a cursor (indicator: display item) that indicates the position of that pixel (currently selected position). The drawing unit 104 outputs the image, in which the brightness value of the pixel and the cursor are drawn, to the display unit 105 as the display image.
The display unit 105 is a display including a backlight and a liquid crystal panel, for example. The display unit 105 displays a display image that is outputted from the drawing unit 104.
The display control unit 106 is a processing circuit (control circuit) that executes programs stored in the storage unit 107 and controls each component of the display control device 100. The display control unit 106 receives an instruction of the user operation performed via buttons (not illustrated) or the like disposed on the display control device 100. The display control unit 106 controls the processing of the image processing unit 103 based on the various settings (e.g. the EOTF setting, color gamut setting, signal range setting), which are set by the user operation.
Further, the display control unit 106 determines a pixel for which the brightness value of the pixel is drawn (drawing pixel), and controls the drawing unit 104. For example, when an operation to instruct automatic determining of a drawing pixel (hereafter called “automatic determination operation”) is performed, the display control unit 106 controls the drawing unit 104 so that the brightness value of the pixel at a determined position based on the brightness distribution is displayed (see
The storage unit 107 includes a non-volatile memory that stores programs for the display control unit 106 to execute. The storage unit 107 also stores information that the user set in advance (setting of pixel value display, brightness threshold of high brightness region, and size threshold of high brightness region). The display control unit 106 refers to this information when each processing to be described below is executed. Here the setting of the pixel value display is the setting of enabled/disabled of the pixel value display function of the display control device 100. The brightness threshold of the high brightness region is a threshold to determine whether or not the brightness value of the pixel is higher than a predetermined brightness value (“first brightness value” mentioned above). The size threshold of the high brightness region is a threshold to determine whether or not the size of the high brightness region is larger than a predetermined size.
In the case where the pixel value display function is enabled, the display control unit 106 calculates the coordinates of a pixel for which the pixel value is displayed and the brightness value of this pixel based on the brightness distribution. Then the display control unit 106 outputs the calculated coordinates of the pixel and brightness value of this pixel to the drawing unit 104.
(Drawing Pixel Determining Processing) The drawing pixel (pixel for which the brightness value is displayed) determining processing executed by the display control unit 106 will be described with reference to the flow chart in
In operation S401, the display control unit 106 refers to the setting of the pixel value display, and determines whether or not the pixel value display function is enabled. Processing advances to operation S402 if the pixel value display function is enabled. The processing of this flow chart ends if the pixel value display function is disabled.
In operation S402, the display control unit 106 detects a high brightness region in the image based on the brightness distribution acquired from the brightness information acquiring unit 102. Specifically, the display control unit 106 searches the brightness distribution, and classifies a pixel indicating a brightness value higher than a brightness threshold of the high brightness region as a high brightness pixel; and classifies a pixel indicating a brightness value lower than the brightness threshold of the high brightness region as a low brightness pixel. Then the display control unit 106 determines whether or not an adjacent high brightness pixel exists, sequentially from the high brightness pixel on the upper left of the image, and if an adjacent high brightness pixel exists, this adjacent high brightness pixel is classified as a pixel belonging to the same high brightness region. Thereby the display control unit 106 detects a plurality of pixels, classified to a same high brightness region, as one high brightness region. Using the same method as this, a low brightness region may also be detected.
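A minimal sketch of this detection is shown below, assuming the brightness distribution is a two-dimensional array of brightness values and using 4-neighborhood adjacency; the function and variable names are illustrative, not part of the embodiments.

```python
from collections import deque
import numpy as np

def detect_high_brightness_regions(brightness, threshold):
    """Group pixels whose brightness exceeds the threshold into regions
    of mutually adjacent pixels (4-connectivity), scanning from the
    upper left of the image. Each region is a list of (y, x) coordinates."""
    h, w = brightness.shape
    visited = np.zeros((h, w), dtype=bool)
    regions = []
    for y in range(h):
        for x in range(w):
            if visited[y, x] or brightness[y, x] <= threshold:
                continue
            # Breadth-first search over adjacent high brightness pixels.
            region = []
            queue = deque([(y, x)])
            visited[y, x] = True
            while queue:
                cy, cx = queue.popleft()
                region.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx] \
                            and brightness[ny, nx] > threshold:
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            regions.append(region)
    return regions
```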
For example, in the case where the brightness threshold of the high brightness regions is set to 203 cd/m2 in the image 200 indicated in
In operation S403, the display control unit 106 refers to the size threshold of the high brightness region, and selects only regions of which size is larger than this size threshold (of which number of pixels is larger than a predetermined number), out of the high brightness regions detected in operation S402. In other words, the display control unit 106 excludes, from the high brightness regions, a region of which size is the size threshold or less (a region having a predetermined area or less). For example, if the size threshold of the high brightness region is set to 100 in the image 200 in
The high brightness region of which size is less than the size threshold is excluded because it is considered less important for the user to check the brightness value of such a region than that of a larger high brightness region.
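Continuing the detection sketch above, the size-based exclusion of operation S403 can be expressed as a single filtering step; the default threshold value of 100 merely follows the example given in the text.

```python
def filter_by_size(regions, size_threshold=100):
    """Keep only high brightness regions whose pixel count is larger
    than the size threshold (operation S403)."""
    return [region for region in regions if len(region) > size_threshold]
```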
In operation S404, the display control unit 106 determines whether a number of detected high brightness regions is at least one. The display control unit 106 advances the processing to operation S405 if the number of detected high brightness regions is at least one. The processing of this flow chart ends if the number of detected high brightness regions is 0.
In the case where the operations of this flow chart are executed on the image 200 indicated in
In operation S405, the display control unit 106 calculates the highest brightness value in each high brightness region in the image and coordinates (position) of the pixel having the highest brightness value (hereafter “highest brightness pixel”). Specifically, the display control unit 106 searches the brightness distribution for each high brightness region, calculates the highest brightness value among the brightness values of all the pixels in this high brightness region, and also calculates the coordinates of the pixel having the highest brightness value (highest brightness pixel).
In the case where a plurality of highest brightness pixels exist in the image, only one appropriate pixel thereof may be selected by the display control unit 106. For example, out of the plurality of pixels indicating the highest brightness value in the high brightness region, the display control unit 106 selects only the highest brightness pixel located at a specific position (e.g. the pixel of which coordinates are uppermost left in the image). As an alternative, the display control unit 106 may select only the highest brightness pixel closest to the center (center of gravity) of the high brightness region, out of the plurality of highest brightness pixels in the high brightness region.
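One possible sketch of operation S405, reusing the region representation from the detection sketch above (a list of (y, x) coordinates per region) and applying the uppermost-left tie-breaking rule mentioned here, is the following; it is an illustration under those assumptions, not a definitive implementation.

```python
def highest_brightness_pixel(brightness, region):
    """Return ((y, x), value) of the brightest pixel in a region.
    Ties are broken by choosing the uppermost, then leftmost pixel."""
    best = min(region, key=lambda p: (-brightness[p], p[0], p[1]))
    return best, float(brightness[best])
```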
In a region of each object indicated in
In operation S406, the display control unit 106 selects one high brightness region out of the high brightness regions in the image as a selected region. The processing to select the selected region (region selecting processing) will be described in detail later with reference to
In operation S407, the display control unit 106 outputs information on the highest brightness value and information on the coordinates of the highest brightness pixel in the selected region, which was selected in operation S406, to the drawing unit 104.
When the drawing unit 104 acquires the information on the coordinates of the highest brightness pixel and the information on the highest brightness value from the display control unit 106, the drawing unit 104 moves the cursor so as to indicate the position of the highest brightness pixel, and draws the highest brightness value on the image. Then the drawing unit 104 outputs the image, generated after the drawing processing, to the display unit 105 as the display image. The display unit 105 displays the display image acquired from the drawing unit 104. Thereby the display unit 105 can display the display image, generated by superimposing the cursor, which indicates the position of the highest brightness pixel of the selected region, and the brightness value of the highest brightness pixel, on the image.
Here when the information on the coordinates of the highest brightness pixel and the information on the highest brightness value are received from the display control unit 106, the drawing unit 104 may highlight the selected region including this highest brightness pixel, instead of moving the cursor to the coordinates of the highest brightness pixel. For example, the drawing unit 104 may indicate the position of the selected region to the user by drawing a frame surrounding the selected region. As an alternative, the drawing unit 104 may also draw a frame surrounding the selected region after moving the cursor first.
(Region Selecting Processing; Operation S406) The processing for selecting one selected region out of one or a plurality of high brightness regions in an image (region selecting processing) in operation S406 will be described with reference to the flow chart in
In operation S501, the display control unit 106 acquires the coordinates (position) of the highest brightness pixel of each high brightness region (the coordinates of each high brightness region) and the coordinates (position) of the cursor (position acquisition). Then the display control unit 106 calculates the distance between the highest brightness pixel of each high brightness region and the cursor (the distance between each high brightness region and the cursor; hereafter “cursor distance”). The distance here is a value expressed in units in which the length of one pixel in the horizontal direction or the vertical direction is one.
In operation S502, the display control unit 106 calculates (determines) a cursor distance of which value is shortest (smallest) among the cursor distance values calculated in operation S501 (hereafter called “shortest cursor distance”). In the case where the cursor 701 is disposed at coordinates (1500, 450), as in the example indicated in
In operation S503, the display control unit 106 determines whether the shortest cursor distance is a threshold or less. Processing advances to operation S504 if the shortest cursor distance is the threshold or less. Processing advances to operation S505 if the shortest cursor distance is longer (larger) than the threshold.
The threshold to determine whether the shortest cursor distance is long/short may be an arbitrary value, but is 300 in Embodiment 1. In the case where the threshold is set to 300 and the cursor 701 is disposed at coordinates (1500, 450), as indicated in
In operation S504, using the cursor distance as an evaluation value, the display control unit 106 selects the high brightness region, including the coordinates having the shortest cursor distance, as the selected region. In the case where the cursor 701 is disposed at coordinates (1500, 450), as indicated in
In operation S504, if there are a plurality of high brightness regions, including the coordinates having the shortest cursor distance, the display control unit 106 may select an arbitrary high brightness region out of these high brightness regions, as the selected region. As an alternative, the display control unit 106 may select a high brightness region of which distance between the center of the image and the highest brightness pixel of the high brightness region is the shortest (shortest distance) out of these high brightness regions, as the selected region.
In operation S505, the display control unit 106 calculates the distance between the center of the image and the highest brightness pixel of each high brightness region (the distance between the center of the image and each high brightness region; hereafter “center distance”).
In the image 200 in
In operation S506, the display control unit 106 uses the center distance as the evaluation value, and selects a high brightness region, which includes the coordinates having the shortest value of the calculated center distances of a plurality of high brightness regions (hereafter “shortest center distance”), as the selected region. In the case where the cursor 702 is disposed at the coordinates (900, 100), as indicated in
In operation S506, if there are a plurality of high brightness regions, which includes the coordinates having the shortest center distance, the display control unit 106 may select an arbitrary high brightness region out of these high brightness regions, as the selected region. As an alternative, the display control unit 106 may select a high brightness region of which distance between the cursor and the highest brightness pixel of the high brightness region is the shortest (shortest distance), out of these high brightness regions, as the selected region.
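Putting operations S501 to S506 together, a hedged sketch of the region selecting processing of Embodiment 1 might look as follows; the representative coordinates of each region are assumed to be those of its highest brightness pixel, coordinates are (x, y) pairs, and the default threshold of 300 follows the example above.

```python
import math

def select_region(region_coords, cursor, image_center, threshold=300):
    """Select one region from a dict mapping a region id to the (x, y)
    coordinates of its highest brightness pixel (Embodiment 1)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # S501/S502: cursor distance of each region and the shortest one.
    cursor_dist = {rid: dist(c, cursor) for rid, c in region_coords.items()}
    nearest = min(cursor_dist, key=cursor_dist.get)

    # S503/S504: if some region is near the cursor, select the nearest one.
    if cursor_dist[nearest] <= threshold:
        return nearest

    # S505/S506: otherwise select the region closest to the image center.
    center_dist = {rid: dist(c, image_center) for rid, c in region_coords.items()}
    return min(center_dist, key=center_dist.get)
```

In this sketch, a cursor within the threshold distance of some region selects the nearest region; otherwise the region nearest the image center (e.g. (960, 540) for a 1920 × 1080 image) is selected, which mirrors the two branches described above.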
When the drawing pixel determining processing is executed in a state where the cursor 701 is disposed at the coordinates (1500, 450), as indicated in
When the drawing pixel determining processing is executed in a state where the cursor 702 is disposed at the coordinates (900, 100), as indicated in
According to Embodiment 1, when one high brightness region is selected out of a plurality of high brightness regions (feature regions), a more preferable high brightness region can be selected by considering the distance between the display item and each high brightness region, as described above.
In Embodiment 1, if the high brightness regions are disposed near the cursor, the display control unit 106 selects a high brightness region, which includes the coordinates having the shortest cursor distance (that is, selects a high brightness region closest to the cursor). Thereby after the user manually moves the cursor close to a high brightness region, this cursor can be accurately moved to the coordinates of the high brightness region. Further, in a case where a cursor is positioned at a high brightness region in advance in a moving image, and the image changes and the high brightness region slightly moves, the cursor can be accurately moved back to the coordinates of the high brightness region.
In a case where high brightness regions are not disposed near the cursor, on the other hand, the display control unit 106 selects the high brightness region which includes the coordinates having the shortest center distance (that is, selects the high brightness region closest to the center of the image). Thereby the cursor can be moved to a high brightness region close to the center of the image, where the probability of a major subject being located is relatively high. If the processing to move the cursor to the high brightness region nearest the cursor were performed in such a case, the cursor might move to a position unintended by the user, because the high brightness region to which the cursor is moved is distant from the position of the cursor before the move. If the cursor is moved to a high brightness region near the center of the image, on the other hand, the cursor moves to a position where the probability of a major subject being located is relatively high, hence the possibility of the above mentioned problem can be reduced. In the case where high brightness regions are not disposed near the cursor, any method may be used, as long as a high brightness region where the probability of a major subject being located is relatively high can be selected. This means that, for example, the largest high brightness region or a high brightness region of a specific object (e.g. a person) may be selected.
In Embodiment 1, when the high brightness regions are detected, the display control unit 106 classifies a pixel indicating a brightness value higher than the brightness threshold of the high brightness region as a high brightness pixel, but the disclosure is not limited thereto. For example, the display control unit 106 may classify only the pixel having the highest brightness value in the image as a high brightness pixel. Further, the display control unit 106 may classify a pixel of which brightness value is at least 90% of the highest brightness value as a high brightness pixel.
In Embodiment 1, in operation S403, the display control unit 106 removes a high brightness region of which size is less than the size threshold from the high brightness regions. The processing in operation S403, however, may be omitted.
In the description of Embodiment 1, the highest brightness pixel, out of the pixels in a high brightness region, was used, but instead of the highest brightness pixel, a pixel of which brightness value is lowest (lowest brightness pixel) in the high brightness region may be used. Further, instead of the highest brightness pixel, a pixel closest to the center of the high brightness region may be used, or a pixel closest to the cursor in the high brightness region may be used. In other words, an arbitrary pixel may be used instead of the highest brightness pixel. This aspect is the same for the embodiments other than Embodiment 1. Therefore the distance between the highest brightness pixel of the high brightness region and the cursor described above may be a distance between the center of the high brightness region and the cursor, or a distance between the pixel closest to the cursor in the high brightness region and the cursor, for example, as long as the distance is the distance between the high brightness region and the cursor.
A display control device 100 according to Embodiment 2 will be described. In Embodiment 2, the display control device 100 selects one high brightness region out of a plurality of high brightness regions, further considering the sizes of the high brightness regions (feature regions). Then the display control device 100 displays a position on a pixel of the selected high brightness region and the brightness value of this pixel. In the following description, a composing element the same as Embodiment 1 is denoted with a same reference sign, and detailed description thereof will be omitted.
In Embodiment 2, the region selecting processing in the flow chart in
In operation S1101, the display control unit 106 calculates a size and a “cursor evaluation value” of each high brightness region. The cursor evaluation value in Embodiment 2 is a value determined by dividing the size of a high brightness region by the distance between this high brightness region and the cursor. The cursor evaluation value, however, is not limited thereto, as long as it is a value determined considering the distance between the high brightness region and the cursor, and indicates the desirability of each high brightness region as the selected region. For example, the cursor evaluation value may be the sum of a value determined by dividing a predetermined number (e.g. 10000000) by the distance between the high brightness region and the cursor, and the size of this high brightness region.
In operation S1102, the display control unit 106 selects a high brightness region, which includes coordinates having a maximum value of the cursor evaluation value, as a selected region. As indicated in
In operation S1103, the display control unit 106 calculates a “center evaluation value” of each high brightness region. The center evaluation value in Embodiment 2 is a value determined by dividing the size of a high brightness region by the distance between this high brightness region and the center of the image. The center evaluation value, however, is not limited thereto, as long as it is a value determined considering the distance between the high brightness region and the center of the image, and indicates the desirability of each high brightness region as the selected region. For example, the center evaluation value may be the sum of a value determined by dividing a predetermined number (e.g. 10000000) by the distance between the high brightness region and the center of the image, and the size of this high brightness region.
In operation S1104, the display control unit 106 selects a high brightness region, which includes coordinates having a maximum value of the center evaluation value, as the selected region. As indicated in
When the drawing pixel determining processing is executed in a state where the cursor 701 is disposed at the coordinates (1500, 450), as indicated in the example in
When the drawing pixel determining processing is executed in a state where the cursor 702 is disposed at the coordinates (900, 100), as indicated in the example in
According to Embodiment 2, when one high brightness region is selected out of a plurality of high brightness regions (feature regions), a more preferable high brightness region can be selected by considering the size of the feature region.
Compared with a high brightness region (feature region) of which size is small, a high brightness region of which size is large has a relatively high probability of playing a major role in expressing the image. Therefore by considering the size as in Embodiment 2, a large high brightness region (feature region), which has a relatively high probability of playing a major role in expressing the image, can be selected with priority (that is, the position of the cursor is changed thereto).
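As one illustration of the evaluation-value based selection of Embodiment 2, the sketch below computes the cursor evaluation value (size divided by cursor distance) or the center evaluation value (size divided by center distance) and selects the region having the maximum value. It assumes, as in Embodiment 1, that the cursor-distance threshold check decides which evaluation value is used, and it guards against division by zero, which the text does not address; both points are assumptions of this sketch.

```python
import math

def select_region_by_evaluation(regions, cursor, image_center, threshold=300):
    """Embodiment 2 style selection. `regions` maps a region id to a
    (coordinates, size) pair, where the coordinates are those of the
    highest brightness pixel and the size is the pixel count."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    near_cursor = min(dist(c, cursor) for c, _ in regions.values()) <= threshold

    def evaluation(item):
        coords, size = item[1]
        reference = cursor if near_cursor else image_center
        # S1101/S1102 or S1103/S1104: size divided by the distance to the
        # cursor or to the image center (epsilon avoids division by zero).
        return size / max(dist(coords, reference), 1e-9)

    return max(regions.items(), key=evaluation)[0]
```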
A display control device 100 according to Embodiment 3 will be described. In Embodiment 3, the display control device 100 sequentially changes a high brightness region selected out of a plurality of high brightness regions (feature regions), as a selected region. Specifically, every time the user instructs to execute the automatic determination operation, the display control device 100 sequentially changes a high brightness region to be selected as the selected region in order of a first high brightness region (first feature region), a second high brightness region, a third high brightness region and the like. Then the display control device 100 displays the position and the highest brightness value of the highest brightness pixel in the selected region. In the following description, a composing element the same as Embodiment 1 or 2 is denoted with a same reference sign, and detailed description thereof will be omitted.
In the drawing pixel determining processing in Embodiment 3, operations of a flow chart in
In operation S1501, the display control unit 106 detects high brightness regions in the same manner as in operation S402. In a case where three coordinates of high brightness regions selected in the past (first selected coordinates to third selected coordinates) are stored in the storage unit 107, the display control unit 106 deletes the three coordinates from the storage unit 107 unless any one of the three coordinates is included in the detected high brightness regions.
In operation S1502, the display control unit 106 determines whether the three selected coordinates (first selected coordinates to third selected coordinates) are stored in the storage unit 107. Processing advances to operation S1503 if the first selected coordinates to the third selected coordinates are stored in the storage unit 107. Processing advances to operation S403 if the first selected coordinates to the third selected coordinates are not stored in the storage unit 107.
In operation S1503, the display control unit 106 selects a high brightness region next to the high brightness region currently indicated by the cursor, as the selected region. In other words, if the cursor currently indicates a high brightness region that exists at the first selected coordinates (first high brightness region), the display control unit 106 selects a high brightness region that exists at the second selected coordinates (second high brightness region) as the selected region. If the cursor currently indicates the second high brightness region, the display control unit 106 selects a high brightness region that exists at the third selected coordinates (third high brightness region) as the selected region. Further, if the cursor currently indicates the third high brightness region, the display control unit 106 selects the first high brightness region as the selected region. In the case where the cursor does not indicate any one of the first to third high brightness regions, the display control unit 106 selects the first high brightness region as the selected region.
In Embodiment 3, only the operations S504 and S506 of the region selecting processing (operation S406) are different (see flow chart in
In operation S504, the display control unit 106 selects the high brightness region corresponding to the shortest cursor distance as the selected region. Further, the display control unit 106 selects three high brightness regions in order from the shorter (smaller) cursor distance, and stores the coordinates of the highest brightness pixels of the three high brightness regions in the storage unit 107. Specifically, in the storage unit 107, the display control unit 106 stores the coordinates having the shortest cursor distance as the first selected coordinates, the coordinates having the second shortest cursor distance as the second selected coordinates, and the coordinates having the third shortest cursor distance as the third selected coordinates.
In operation S506, the display control unit 106 selects the high brightness region corresponding to the shortest center distance as the selected region. Further, the display control unit 106 selects three high brightness regions in order from the shorter center distance, and stores the coordinates of the highest brightness pixels of the three high brightness regions in the storage unit 107. Specifically, in the storage unit 107, the display control unit 106 stores the coordinates having the shortest center distance as the first selected coordinates, the coordinates having the second shortest center distance as the second selected coordinates, and the coordinates having the third shortest center distance as the third selected coordinates.
As described above, when the user instructs to execute the automatic determination operation, the display control unit 106 selects a selected region in the same manner as in Embodiment 1 if the three selected coordinates are not stored in the storage unit 107. If the three selected coordinates are stored in the storage unit 107, on the other hand, every time the automatic determination operation is performed, the display control unit 106 sequentially changes the high brightness region to be selected as the selected region in the order of the first high brightness region, the second high brightness region and the third high brightness region. As an alternative, every time the automatic determination operation is executed, the display control unit 106 may sequentially change the high brightness region to be selected as the selected region in the order of the third high brightness region, the second high brightness region and the first high brightness region.
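A hedged sketch of this cycling behavior (operations S1502 and S1503), with the first to third selected coordinates kept as an ordered list, could be written as follows; the concrete coordinates in the usage example are hypothetical and serve only to show the cycling order.

```python
def next_selected_coordinates(stored_coordinates, cursor_position):
    """Return the coordinates to select next. If the cursor is currently on
    one of the stored coordinates, the next one in order is selected
    (cycling back to the first); otherwise the first is selected."""
    if cursor_position in stored_coordinates:
        index = stored_coordinates.index(cursor_position)
        return stored_coordinates[(index + 1) % len(stored_coordinates)]
    return stored_coordinates[0]

# Hypothetical example: repeated automatic determination operations cycle
# through the first, second and third selected coordinates in order.
stored = [(100, 200), (640, 360), (1500, 900)]
position = (0, 0)
for _ in range(4):
    position = next_selected_coordinates(stored, position)
    print(position)  # (100, 200), (640, 360), (1500, 900), (100, 200)
```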
Instead of the three high brightness regions of the first to third high brightness regions, an arbitrary number N (N≥2) of high brightness regions may be sequentially changed. In other words, the display control unit 106 may store coordinates of the N number of high brightness regions in the storage unit 107 in advance, so as to change to a high brightness region to be selected as the selected region, every time the automatic determination operation is executed.
According to Embodiment 3, a preferable high brightness region can be sequentially selected out of a plurality of high brightness regions (feature regions) by considering the distance between the display item and the feature region, and the distance between the center of the image and the feature region.
A display control device 100 according to Embodiment 4 will be described. In Embodiment 4, the display control device 100 selects high brightness regions (feature regions) from a region which does not include an on-screen display (OSD) region. In the following description, a composing element the same as Embodiment 1, 2 or 3 is denoted with a same reference sign, and detailed description thereof will be omitted.
In the case of selecting a high brightness region according to Embodiment 4, the processing operations in the flow chart in
In operation S1701, the display control unit 106 detects whether or not the OSD region exists in the image 200. For example, in the case where the output terminal of the image transmission apparatus connected to the image acquiring unit 101 is an output terminal for monitoring according to the information on the output terminal, the display control unit 106 detects a predetermined range as the OSD region. In other cases, the display control unit 106 determines that the OSD region does not exist in the image 200. Here it is assumed that the image transmission apparatus (e.g. imaging device) includes an output terminal for recording to be connected to a device for recording images, and an output terminal for monitoring to check the state of the image during operation. It is also assumed that the type information on the output terminal is transmitted from the output terminal for monitoring in the form of additional information (e.g. InfoFrame of HDMI, Ancillary data packet of SDI). In the case where the type information cannot be acquired, the display control unit 106 assumes that the output terminal is not the output terminal for monitoring.
In operation S1702, the display control unit 106 determines whether or not the OSD region is included in the image 200. If the OSD region was detected in operation S1701, the display control unit 106 determines that the OSD region is included in the image 200, and processing advances to operation S1704. If the OSD region was not detected in operation S1701, the display control unit 106 determines that the OSD region is not included in the image 200, and processing advances to operation S1703.
In operation S1703, since the OSD region does not exist in the image 200, the display control unit 106 performs the same processing as in operation S402 or operation S1501.
In operation S1704, the display control unit 106 extracts the OSD region from the image 200. Then the display control unit 106 detects (extracts) one or a plurality of high brightness regions from the range (region) in the image 200 excluding the OSD region. Here the classifying processing of the regions performed on the brightness distribution acquired from the brightness information acquiring unit 102 is different from that in operation S402 and operation S1501. In operation S1704, the display control unit 106 classifies the OSD region (all the coordinates included in the OSD region) as a low brightness region, regardless of the brightness values thereof (or classifies the OSD region as a high brightness region in the case where low brightness regions are selected as the feature regions). The OSD region in Embodiment 4 is in a range of the image 200 in
According to Embodiment 4, a preferable high brightness region (feature region) can be selected by selecting the high brightness region from a region not including the OSD region.
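One way to realize the classification of operation S1704 is to force every pixel inside the OSD region below any brightness threshold before the region detection, so that no high brightness region is detected there. The following sketch assumes the OSD region is a single rectangle given as (left, top, right, bottom), which is an illustrative simplification and not a requirement of the embodiments.

```python
import numpy as np

def mask_osd_region(brightness, osd_rect):
    """Return a copy of the brightness distribution in which all pixels
    inside the OSD region are set to 0 cd/m2, so the detection of
    operation S402 / S1501 classifies them as low brightness pixels."""
    left, top, right, bottom = osd_rect
    masked = brightness.copy()
    masked[top:bottom, left:right] = 0.0
    return masked
```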
In the description in Embodiment 4, the OSD region is detected in operation S1701 based on the information on the output terminal of the image transmission apparatus. The display control unit 106, however, may acquire at least one of the information on whether or not an OSD exists and the information on the position where the OSD is superimposed (superimposed position of the OSD) from the image transmission apparatus via network communication, and detect the OSD region based on the acquired information. Further, the display control unit 106 may acquire at least one of the information on whether or not the OSD exists and the information on the position where the OSD is superimposed (superimposed position of the OSD) from additional information attached to the image 200 (image signal; image information), and detect the OSD region based on the acquired information. In these cases, the display control unit 106 may detect the superimposed position of the OSD as the OSD region. Further, in Embodiment 4, all the regions on which the OSD image can be superimposed are regarded as OSD regions, but in the case where the information on the position where the OSD is superimposed can be explicitly acquired, only the region where the OSD image is currently superimposed may be determined as the OSD region. Furthermore, the display control unit 106 may determine that the OSD region exists in the case where objects (e.g. text, graphics) that can be detected by machine learning or the like are included in the image 200, or may detect the regions of these objects as OSD regions.
Whereas the disclosure has been described in detail based on preferred embodiments thereof, the disclosure is not limited to these specified embodiments, and various other modes within the scope of not departing from the spirit of the disclosure are also included in the disclosure. Parts of the above embodiments may be combined as required.
For example, in the above description, the display control device executes a series of processing operations to display the brightness value on the image, but a part of the processing operations executed by the display control device may be executed by another device. For example, the processing operations of calculating a brightness value, generating the cursor to indicate a position of a pixel, and combining this information with an image, may be performed by a standalone PC, and the display control device may display the display image outputted by the PC.
According to the disclosure, a preferable feature region can be selected in an image having a plurality of feature regions.
In the above description, “processing advances to operation S1 if A is B or more, and advances to operation S2 if A is smaller (lower) than B” may be interpreted as “processing advances to operation S1 if A is larger (higher) than B, and advances to operation S2 if A is B or less”. Conversely, “processing advances to operation S1 if A is larger (higher) than B, and advances to operation S2 if A is B or less” may be interpreted as “processing advances to operation S1 if A is B or more, and advances to operation S2 if A is smaller (lower) than B”. This means that “A or more” may be interpreted as “larger (higher; longer; more) than A”, and “A or less” may be interpreted as “smaller (lower; shorter; less) than A”, as long as no inconsistencies occur. Further, “larger (higher; longer; more) than A” may be interpreted as “A or more”, and “smaller (lower; shorter; less) than A” may be interpreted as “A or less”.
Embodiment(s) of the disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)TM), a flash memory device, a memory card, and the like.
While the disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2022-012007, filed on Jan. 28, 2022 and Japanese Patent Application No. 2022-179042, filed on Nov. 8, 2022, which are hereby incorporated by reference herein in their entirety.