This application claims the priority benefits of Taiwan application serial no. 101149283, filed on Dec. 22, 2012, and Taiwan application serial no. 102117572, filed on May 17, 2013. The entirety of each of the above-mentioned patent applications is hereby incorporated by reference herein and made a part of this specification.
1. Technical Field
The disclosure relates to an image interaction system, a method for detecting a finger position, a stereo display system and a control method of a stereo display.
2. Related Art
In recent years, stereo displays have become one of the most popular products in the consumer electronics market. Compared with conventional flat panel displays, stereo displays offer viewers a different visual experience through stereo images.
For general stereo displays, stereo images viewed by viewers may change along with the relative positions between the stereo displays and the viewers and the angles at which the viewers view the stereo images. Accordingly, if the viewers desire to view better stereo images, they are limited to viewing the stereo images from positions exactly in front of the stereo displays.
On the other hand, for some interactive stereo displays, users operate application programs by touching stereo images. However, similar to the limitation of the above stereo displays, the users are required to operate the application programs exactly in front of the stereo displays, so as to correctly perform the touch operation on the stereo images. If the users are located in other positions or view from different viewing angles, the viewed stereo images may be considerably different, and thus the users cannot correctly perform the touch operation on the stereo images.
The disclosure provides a stereo display system comprising a stereo display, a depth detector, and a computing processor. The stereo display is configured to display a left eye image and a right eye image, such that a left eye and a right eye of a viewer generate a parallax to view a stereo image. The depth detector is configured to capture a depth data of a three-dimensional space. The computing processor is coupled to the stereo display and the depth detector and configured to control image display of the stereo display. The computing processor analyzes an eyes position of the viewer according to the depth data, and when the viewer moves horizontally, vertically, or obliquely in the three-dimensional space relative to the stereo display, the computing processor adjusts the left eye image and the right eye image based on variations of the eyes position.
The disclosure provides a control method of a stereo display comprising the following steps: displaying a left eye image and a right eye image, such that a left eye and a right eye of a viewer generate a parallax to view a stereo image; capturing a depth data of a three-dimensional space; analyzing an eyes position of the viewer according to the depth data; and adjusting the left eye image and the right eye image based on variations of the eyes position when the viewer moves horizontally, vertically, or obliquely in the three-dimensional space relative to the stereo display.
The disclosure provides a stereo display system comprising a stereo display, a depth detector, and a computing processor. The stereo display is configured to display a left eye image and a right eye image, such that a left eye and a right eye of a viewer generate a parallax to view a stereo image. The depth detector is configured to capture a depth data of a three-dimensional space. The computing processor is coupled to the stereo display and the depth detector and configured to control image display of the stereo display. The computing processor analyzes an eyes position of the viewer according to the depth data and computes an appearance position of the stereo image appeared in the three-dimensional space according to the eyes position and a display position of the left eye image and the right eye image displayed in the stereo display. The computing processor performs the following steps: defining coordinates of a first vector, a second vector and a third vector in the three-dimensional space; computing a coordinate of the appearance position on the first vector according to a formula of
Pz=Ez×(Dobj×Wdp/Rx)/(Weye+Dobj×Wdp/Rx)

and computing the coordinate of the appearance position on the second vector and the third vector according to a formula of

Px,y=Ex,y+(Ox,y−Ex,y)×(Ez−Pz)/Ez
where Pz is the coordinate of the appearance position on the first vector, Px,y is the coordinate of the appearance position on the second vector and the third vector, Ez is a coordinate of the left eye position or the right eye position on the first vector, Ex,y is a coordinate of the left eye position or the right eye position on the second vector and the third vector, Wdp is a width of a display region of the stereo display, Ox,y is a coordinate value of the left eye image or the right eye image on the second vector and the third vector, Weye is a distance between the left eye and the right eye, Dobj is a disparity between the left eye image and the right eye image, and Rx is a resolution of the stereo display on the second vector. Ox,y corresponds to the left eye image when Ex,y and Ez correspond to the left eye position, and Ox,y corresponds to the right eye image when Ex,y and Ez correspond to the right eye position. The computing processor adjusts the left eye image and the right eye image based on variations of the eyes position when the viewer moves in the three-dimensional space.
The disclosure provides a method for detecting a finger position, adapted to detect the finger position of a user. The method comprises the following steps: capturing an image data; obtaining a position of a hand region according to an image intensity information of the image data; dividing the hand region into a plurality of identification regions by at least one mask; and determining whether the identification regions satisfy an identification condition to detect the finger position of the user.
The disclosure provides an image interaction system comprising a display, a video camera, and a computing processor. The display is configured to display an interactive image. The video camera is configured to capture an image of a user to generate an image data. The computing processor is coupled to the display and the video camera and configured to control frame display of the display. The computing processor obtains a position of a hand region according to an image intensity information of the image data captured by the video camera, divides the hand region into a plurality of identification regions by at least one mask, and determines whether the identification regions satisfy an identification condition to detect the finger position of the user.
In order to make the disclosure comprehensible, several exemplary embodiments accompanied with figures are described in detail below.
In exemplary embodiments of the disclosure, a stereo display system and a control method of a stereo display are provided. The stereo display system and the control method of the stereo display are adapted to a stereo display designed based on any optical display principle. By this control method, a left eye image and a right eye image displayed by the stereo display are adaptively adjusted according to an eyes position of the viewer, such that a stereo image viewed by the viewer is displayed at a specific position, or a constant distance between the stereo image and the viewer is maintained based on the requirement of the viewer. Reference will now be made in detail to embodiments of the disclosure, examples of which are illustrated in the accompanying drawings.
The depth detector 120 is configured to capture a depth data D_dep of the three-dimensional space. Herein, the depth detector 120, for example, may be an active depth detector which actively emits lights or ultrasonic waves as signals to calculate the depth data D_dep, or a passive depth detector which calculates the depth data D_dep by using characteristic information in environments. The computing processor 130 is coupled to the stereo display 110 and the depth detector 120 and configured to control image display of the stereo display 110 according to the depth data D_dep.
The control method of the stereo display 110 performed by the computing processor 130 is illustrated as follows.
Specifically, the appearance position of the viewed stereo image in the three-dimensional space is affected by the eyes position of the viewer, the specification of the stereo display 110, such as the size and resolution of the display region, and the positions of the left eye image L and the right eye image R displayed on the stereo display 110. For example, under the condition that the left eye image L and the right eye image R are not changed, the stereo image viewed exactly in front of the stereo display 110 is different from the stereo image viewed from a position offset to the left or right of the stereo display 110.
In the present exemplary embodiment, the computing processor 130 adaptively adjusts the left eye image L and the right eye image R according to the eyes position of the viewer, so that the appearance position of the stereo image changes along with the position and the viewing angle of the viewer, or so that the stereo image viewed from any angle is located at a preset position in the three-dimensional space.
Furthermore, the step of analyzing the eyes position of the viewer according to the depth data D_dep (step S404) may be implemented by the computing processor 130 first detecting the position of the head and then analyzing the eyes position according to the depth data D_dep.
For example, in an exemplary embodiment, the viewer defines an initial position in advance for viewing stereo images, such that the computing processor 130 is allowed to analyze the depth data D_dep for a preset region comprising the initial position. The computing processor 130 determines the characteristics of the head according to the depth data D_dep. For example, the computing processor 130 compares the depth data D_dep of the preset region to a hemisphere model. If a shape of an object corresponding to the depth data D_dep of the preset region matches the hemisphere model, the computing processor 130 determines the position of the head of the viewer, and then analyzes the eyes position according to the proportions of the head.
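The hemisphere comparison can be pictured as template matching on the depth map. The following Python sketch is illustrative only; the window coordinates, template radius, search stride, and correlation threshold are assumptions, not values from the disclosure, and the eyes position would then be estimated at a fixed proportion of the matched head region:

```python
import numpy as np

def find_head(depth, region, radius_px=40, stride=4, min_score=0.85):
    """Search a preset window of the depth map for a hemisphere-like
    blob by normalized correlation against an ideal hemisphere depth
    template.  Returns the (row, col) of the best match, or None.
    (Hypothetical helper; parameters are illustrative assumptions.)"""
    r0, r1, c0, c1 = region
    # Ideal hemisphere: depth decreases toward the center of the head.
    yy, xx = np.mgrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    rr2 = xx ** 2 + yy ** 2
    template = np.where(rr2 <= radius_px ** 2,
                        -np.sqrt(np.maximum(radius_px ** 2 - rr2, 0.0)), 0.0)
    template -= template.mean()
    t_norm = np.linalg.norm(template)

    window = depth[r0:r1, c0:c1]
    h, w = template.shape
    best, best_score = None, -np.inf
    for i in range(0, window.shape[0] - h, stride):
        for j in range(0, window.shape[1] - w, stride):
            patch = window[i:i + h, j:j + w]
            patch = patch - patch.mean()
            denom = np.linalg.norm(patch) * t_norm
            if denom == 0:
                continue
            score = float((patch * template).sum() / denom)
            if score > best_score:
                best_score = score
                best = (r0 + i + h // 2, c0 + j + w // 2)
    return best if best_score >= min_score else None
```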
In another exemplary embodiment, the computing processor 130 may also actively detect the position of the head to confirm the eyes position of the viewer. For example, the computing processor 130 detects a dynamic motion, such as a wave, or a static posture, such as a specific gesture, and then analyzes the position of the head of the viewer according to the regions of the detected dynamic motion or static posture, so as to locate and select a locating region comprising the position of the head. Accordingly, the computing processor 130 analyzes the depth data within the locating region to obtain the eyes position based on a method similar to the above method for analyzing the eyes position. Herein, the step of analyzing the eyes position of the viewer according to the depth data D_dep may be implemented in any of the above exemplary embodiments. However, the disclosure is not limited to the foregoing exemplary embodiments.
Furthermore, in the step of displaying the left eye image L and the right eye image R (step S400), to allow the viewer to adapt to the viewed stereo image when the disparity of the left eye image L and the right eye image R is set to a target value, the stereo display 110 is configured to gradually increase the disparity of the left eye image L and the right eye image R to the target value during an initial display period when the stereo display 110 initially displays the stereo image, such that the viewer sees the stereo image gradually emerge out of the stereo display 110.
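A minimal sketch of this initial ramp-in, assuming a linear ease from zero disparity to the target over a fixed number of frames (the curve shape and frame count are assumptions; any monotonic easing would serve):

```python
def disparity_ramp(target_disparity, n_frames):
    """Yield a per-frame disparity that grows from zero to the target
    value over the initial display period, so that the stereo image
    appears to emerge gradually out of the screen."""
    for k in range(1, n_frames + 1):
        yield target_disparity * k / n_frames
```

For example, iterating disparity_ramp(50, 30) steps the disparity from roughly 1.7 up to 50 pixels across 30 frames.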
In order to further describe the stereo display system of the disclosure in detail,
Referring to
Specifically, the computing processor 130 defines coordinates of a first vector z, a second vector x and a third vector y in the three-dimensional space so as to define the value of each pixel in the depth data D_dep as a corresponding coordinate position in the three-dimensional space. In the present embodiment, the computing processor 130 adopts the coordinate of the depth detector 120 as the origin of the coordinates of the first vector z, the second vector x and the third vector y for example, but the disclosure is not limited thereto.
In detail, the computing processor 130 computes the appearance position Px,y,z of the stereo image in the three-dimensional space according to the following formulas:

Pz=Ez×(Dobj×Wdp/Rx)/(Weye+Dobj×Wdp/Rx) (1)

Px,y=Ex,y+(Ox,y−Ex,y)×(Ez−Pz)/Ez (2)
wherein Pz is the coordinate of the appearance position on the first vector z, and Px,y is the coordinate of the appearance position on the second vector x and the third vector y. Ez is a coordinate of the left eye position or the right eye position on the first vector z, and Ex,y is a coordinate of the left eye position or the right eye position on the second vector x and the third vector y. Herein, Ez and Ex,y are combined as Ex,y,z to represent the coordinate of the left eye position or the right eye position in the three-dimensional space. Ox,y is a coordinate of the left eye image L or the right eye image R on the second vector x and the third vector y, i.e. the display position in the stereo display. Wdp is a width of a display region of the stereo display 110. Weye is a distance between the left eye and the right eye. Dobj is a disparity between the left eye image and the right eye image. Rx is a resolution of the stereo display 110 on the second vector x. When Ex,y and Ez correspond to the left eye position, Ox,y corresponds to the left eye image, and when Ex,y and Ez correspond to the right eye position, Ox,y corresponds to the right eye image.
In the present embodiment, since the coordinates of the left eye position and the right eye position can be converted based on the distance Weye between the left eye and the right eye, a person skilled in the art can conclude based on the teaching herein that, regardless of whether Ex,y,z represents the coordinate of the left eye position or of the right eye position, the computing processor 130 can calculate the appearance position Px,y,z of the stereo image according to the above formulas (1) and (2).
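Under these definitions, the computation of formulas (1) and (2) can be sketched in Python as below; the similar-triangles form is reconstructed from the stated variable definitions (the original formula figures are not available), so this is a sketch rather than an authoritative transcription:

```python
def appearance_position(E, O, W_dp, W_eye, D_obj, R_x):
    """Compute the appearance position P of the stereo image from
    formulas (1) and (2).

    E     -- (Ex, Ey, Ez): left or right eye position, meters
    O     -- (Ox, Oy): display position of the matching image, meters
    W_dp  -- width of the display region, meters
    W_eye -- distance between the left eye and the right eye, meters
    D_obj -- disparity between left and right eye images, pixels
    R_x   -- horizontal resolution of the stereo display, pixels
    """
    Ex, Ey, Ez = E
    d = D_obj * W_dp / R_x            # disparity in physical units
    Pz = Ez * d / (W_eye + d)         # formula (1)
    t = (Ez - Pz) / Ez                # fraction of the eye-to-screen ray
    Px = Ex + (O[0] - Ex) * t         # formula (2), x component
    Py = Ey + (O[1] - Ey) * t         # formula (2), y component
    return Px, Py, Pz
```

For instance, with Ez = 0.6 m, Weye = 0.065 m, Wdp = 0.5 m, Rx = 1920 and Dobj = 50 pixels, the physical disparity d is about 0.013 m and Pz is about 0.10 m, i.e. the stereo image appears roughly 0.1 m out from the screen plane toward the viewer (assuming the detector origin lies near the screen plane).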
Referring to
Next, the computing processor 130 calculates formula (2) (step S510). In step S510, the coordinate Ox,y of the left eye image L is obtained based on the left eye image L before adjustment. The coordinate Pz of the appearance position on the first vector z is obtained based on the previous step S508. Accordingly, the computing processor 130 calculates the coordinate Px,y of the appearance position on the second vector x and the third vector y. Based on the steps S508 and S510, the computing processor 130 obtains the coordinate Px,y,z of the appearance position in the three-dimensional space.
Accordingly, the computing processor 130 adjusts the display position of the left eye image L and the right eye image R displayed in the stereo display 110 based on the coordinate Ex,y,z of the left eye position or the right eye position (step S512), such that the stereo image appears in different positions in the three-dimensional space according to the design requirement. In detail, based on formulas (1) and (2), in step S512, the step of adjusting the display position of the left eye image L and the right eye image R displayed in the stereo display 110 is implemented by adjusting the coordinate Ox,y of the left eye image L and the disparity Dobj.
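The adjustment in step S512 amounts to inverting formulas (1) and (2): given a desired appearance position, solve for the disparity Dobj and the display coordinate Ox,y. A hedged sketch under the same reconstructed-formula assumptions as above:

```python
def display_params_for(P_target, E, W_dp, W_eye, R_x):
    """Invert formulas (1) and (2): given a desired appearance position
    P_target = (Px, Py, Pz) and an eye coordinate E = (Ex, Ey, Ez),
    return the disparity D_obj (pixels) and the display coordinate
    (Ox, Oy) that place the stereo image there.  Assumes 0 < Pz < Ez,
    i.e. the image appears between the screen plane and the viewer."""
    Px, Py, Pz = P_target
    Ex, Ey, Ez = E
    # From (1): Pz = Ez*d/(W_eye + d)  =>  d = W_eye*Pz/(Ez - Pz)
    d = W_eye * Pz / (Ez - Pz)
    D_obj = d * R_x / W_dp
    # From (2): P = E + (O - E)*(Ez - Pz)/Ez  =>  O = E + (P - E)*Ez/(Ez - Pz)
    s = Ez / (Ez - Pz)
    Ox = Ex + (Px - Ex) * s
    Oy = Ey + (Py - Ey) * s
    return D_obj, (Ox, Oy)
```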
As shown in
First, referring to
On the other hand, when the viewer moves left or right relative to the stereo display 110, as shown in
Furthermore, referring to
In the present embodiment, when the viewer moves farther away from the stereo display 110, the display region of the left eye image L and the right eye image R displayed in the stereo display 110 is required to become larger. When the viewer moves left and right or up and down relative to the stereo display 110, the display region of the left eye image L and the right eye image R is limited to the width Wdp and the length Ldp of the display region of the stereo display 110, respectively. In other words, the maximum display region of the stereo image viewed by the viewer changes based on the size of the display region of the stereo display 110. In detail, according to the size of the display region of the stereo display 110, the intersection of the maximum regions in which the left eye image L and the right eye image R are respectively displayed in the stereo display 110, such as the whole display region, is the maximum display region of the stereo image viewed by the viewer.
Referring to
On the other hand, as shown in
In addition, referring to
Furthermore, the appearance position Px,y,z of the stereo image of the present embodiment is similar to that of the embodiment of
Referring to
Herein, the disclosed formulas are merely exemplary, teach one implementation of an embodiment, and do not limit the disclosure. A control method of a stereo display and a stereo display system in which the stereo image viewed by the viewer changes along with the viewing angle, and in which the appearance position and the angle are adaptively adjusted, do not depart from the scope or spirit of the disclosure.
Referring to
Specifically, besides adjusting the appearance position of the stereo image according to the eyes position of the viewer, the computing processor 130 may also detect a touch event that the viewer touches the stereo image and control image display of the stereo display 110 according to the detected touch event, so as to implement the interaction function of the stereo image in the three-dimensional space.
Since the appearance position of the stereo image is adaptively adjusted based on the position of the user in the stereo display system 100, the user can touch the appearance position of the stereo image more conveniently when interacting with it. For example, as in the description of the embodiment of
Referring to
Specifically, in step S1108, the computing processor 130 analyzes the position of the touch media in the three-dimensional space based on the depth data, and determines whether the touch event occurs based on the appearance position of the stereo image and the position of the touch media. When the computing processor 130 determines that the touch event occurs, the computing processor 130 controls the stereo display 110 based on the type of the corresponding application program and the type of the detected touch event. When the computing processor 130 determines that the touch event does not occur, the computing processor 130 returns to step S400 to perform the flow of
After the computing processor 130 determines that the touch event occurs, the computing processor 130 determines whether the touch media stays in a movement status (step S1204). If the computing processor 130 determines that the touch media does not move, or that it immediately leaves the stereo image after touching it, i.e. the position of the touch media no longer overlaps with the appearance position, the computing processor 130 determines, for example, that the user touches the stereo image in the manner of clicking, so as to control image display of the stereo display 110 based on the touch position and the application program.
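A minimal sketch of this click-versus-move decision, assuming touch-media positions are sampled per frame in the depth detector's coordinate system; the overlap radius and the frame-count heuristic are illustrative assumptions:

```python
import numpy as np

def classify_touch(touch_positions, appearance_pos, overlap_radius=0.02):
    """Classify a touch event from a short per-frame history of
    touch-media positions (meters).  Returns ('click', position) for a
    brief contact, ('drag', locus) for sustained contact, or
    (None, None) if the stereo image was never touched."""
    target = np.asarray(appearance_pos, dtype=float)
    hits = [np.asarray(p, dtype=float) for p in touch_positions
            if np.linalg.norm(np.asarray(p, dtype=float) - target)
            <= overlap_radius]
    if not hits:
        return None, None               # no overlap: no touch event
    if len(hits) <= 2:                  # brief contact: treat as a click
        return 'click', hits[-1]
    return 'drag', hits                 # sustained contact: movement locus
```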
On the other hand, if the computing processor 130 determines that the touch media stays in the movement status, the computing processor 130 continuously detects a movement locus of the touch media (step S1206), and controls image display of the stereo display 110 according to the movement locus and the corresponding application program.
For example, the user operates the stereo images which are represented by different application program interfaces in different touch methods as shown in
Referring to
Referring to
Referring to
Generally speaking, when the user interacts with the stereo display system 100, the user may touch the stereo image by using different touch media. However, the stereo display system 100 may also be configured to be operated only by a specific touch media, and the corresponding control method is shown in
Referring to
For example,
Besides the finger and the touch stick mentioned above, the computing processor 130 may determine the specific touch media based on a specific shape of an object, such as a palm of a hand, a gesture, a posture of a body, a star-like object or a circular object. Furthermore, the specific touch media is not limited to a static posture; a dynamic motion of the user, such as waving or brandishing an object, may also serve as the specific touch media.
Specifically, the computing processor 130 may identify whether the touch media is a specific touch media based on multiple different methods. For example, the computing processor 130 identifies whether the touch media is a specific touch media by comparing the touch media with preset templates. Taking different gestures serving as the specific touch media as an example, the preset templates may be as shown in
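One plausible realization of the template comparison, assuming binary silhouettes of equal size and a Jaccard-overlap score; the scoring rule and the 0.8 threshold are assumptions, not values from the disclosure:

```python
import numpy as np

def match_touch_media(region, templates, threshold=0.8):
    """Compare a binary silhouette of the touch media against preset
    gesture templates of the same size and return the first template
    name whose Jaccard overlap reaches the threshold, else None."""
    region = region.astype(bool)
    for name, tpl in templates.items():
        tpl = tpl.astype(bool)
        union = np.logical_or(region, tpl).sum()
        if union == 0:
            continue
        overlap = np.logical_and(region, tpl).sum()
        if overlap / union >= threshold:
            return name                 # e.g. 'open_palm', 'pointing'
    return None
```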
Referring to
For example, when the touch media is preset as a finger, the computing processor 130 analyzes a position of the finger according to a hand outline, as shown in
Referring to
Accordingly, the computing processor 130 may determine whether the fingertip coordinate overlaps with the coordinate of the appearance position, so as to judge whether the stereo image is touched by the user, based on a method similar to the description of the above-mentioned embodiment.
Based on the above description, the stereo display system 100 provides a human-computer interaction interface by detecting whether the position of the touch media overlaps with the appearance position of the stereo image. According to this stereo image display method, the user is neither required nor limited to perform the human-computer interaction exactly in front of the stereo display, and thus the user may enjoy a good stereo touch experience.
In another exemplary embodiment of the disclosure, a method for detecting a finger position and an image interaction system are provided, which are adapted to a display designed based on any optical display principle. In the image interaction system, the image displayed in the display is adjusted according to variations of the positions of the finger and the palm of the user, such that the user gives different instructions to the image interaction system through different gestures. The image interaction system and the method for detecting the finger position are further described in the following exemplary embodiments.
In the present embodiment, the display 1310 displays an interactive image IMG for the user to perform an interactive operation in a display region thereof. The video camera 1320 is configured to capture the image of the user to generate an image data D_img. After the image data D_img is processed by the computing processor 1330, the position of the palm and the finger of the user in the image is obtained. Accordingly, the user performs an operation on the interactive image IMG based on the action of the hand. Herein, according to different used apparatuses, the display 1310 may be a flat panel display or a stereo display, and the stereo display may be a stereoscopic display or an auto-stereoscopic display. The video camera 1320, for example, may be a video camera for detecting brightness, such as a visible light camera, a video camera for detecting chroma, such as a chroma detector, or the depth detector of the above embodiment. The disclosure does not limit the types of the display 1310 and the video camera 1320.
Furthermore, the method for analyzing the position of the palm and the finger of the user by using the computing processor 1330 is as shown in
In an exemplary embodiment, after capturing the image data of the user from the video camera 1320, the computing processor 1330 calculates an image intensity distribution, such as a color distribution of the hand, based on the image intensity information of the image data D_img, and, by comparing the image intensity distribution to a hand image intensity information range, defines the region of the image data falling within that range, such as the maximum region in the image data D_img matching the color distribution of the hand, as the hand region of the user. In other words, in the present exemplary embodiment, the computing processor 1330 detects the position of the hand region by calculating the difference of the pixel values between the skin color and the background color. For example, the color distribution of the hand in the present exemplary embodiment may be calculated based on the following formula:
C=Gaussian(m,σ) (3)
In formula (3), C is the color distribution of the hand, Gaussian (m, σ) is the Gaussian function, m is an average color value of the pixels of the position of the hand and the region around the hand, and σ is a variance of the color distribution in the image data D_img.
In another exemplary embodiment, the computing processor 1330 may compare the image intensity information of the image data D_img, e.g. the grayscale information or the chroma information, to a preset color distribution, and define the region of the image data D_img matching the preset color distribution as the hand region. For example, the step of comparing the image intensity information of the image data D_img to the preset color distribution may be implemented by using the following formula:
|color−m|≦ρ×σ (4)
In formula (4), color is the image intensity information of the image data D_img, m is an average color value of the pixels of the position of the hand and the region around the hand, σ is a variance of the color distribution in the image data D_img, and ρ is an adjustable parameter which is larger than or equal to zero. In one exemplary embodiment, considering that the hand region is connected, a breadth-first search (BFS) is performed from the center point of the hand region when the hand region is searched in the image data D_img. Furthermore, the values of m and σ are updated with the color of the newly searched hand region. In one exemplary embodiment, the RGB channels of the color are calculated separately, and ρ may be set to 1.5.
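A sketch of this BFS region growing with the color test of formula (4), assuming an RGB image and a seed at the hand's center point; the running-mean update of m is one possible reading of the disclosure, and the update of σ is elided for brevity:

```python
from collections import deque
import numpy as np

def grow_hand_region(img, seed, m, sigma, rho=1.5):
    """BFS from the center point of the hand, admitting a pixel when
    |color - m| <= rho * sigma holds per RGB channel (formula (4)).
    img: H x W x 3 float array; seed: (row, col) hand center point.
    m is refreshed as a running mean over accepted pixels (the initial
    m counts as one prior sample)."""
    h, w, _ = img.shape
    m = np.asarray(m, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    visited = np.zeros((h, w), dtype=bool)
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    visited[seed] = True
    n = 1
    while queue:
        r, c = queue.popleft()
        color = img[r, c]
        if not np.all(np.abs(color - m) <= rho * sigma):
            continue                    # fails formula (4): skip pixel
        mask[r, c] = True
        n += 1
        m += (color - m) / n            # update m with the new pixel
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not visited[rr, cc]:
                visited[rr, cc] = True
                queue.append((rr, cc))
    return mask
```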
In another exemplary embodiment, the computing processor 1330 may identify the hand region by detecting a dynamic motion of the hand, such as waving. For example, the computing processor 1330 may determine whether a variation of the image intensity information of the image data D_img during a preset period exceeds a preset threshold value. When the variation of the image intensity information of a specific region of the image data exceeds the threshold value, the computing processor 1330 defines the region as the hand region.
In still another exemplary embodiment, the computing processor 1330 may detect the position of the hand region by comparing the image intensity information of the image data to a preset image intensity range. The computing processor 1330 defines a region of the image data located within the image intensity range as the hand region. For example, when the video camera 1320 is a depth detector, the computing processor 1330 defines the region within a certain distance away from the video camera 1320 as the hand region according to a comparison result of the depth data and the preset depth range.
Specifically, in the embodiment in which the depth detector serves as the video camera 1320, in order to prevent the body or the head from affecting the detection of the hand region, the depth range may be set based on the depth data of the hand region. The computing processor 1330 calculates an average depth value within the depth range and a variance of the depth values, and compares the variance to a preset threshold value. For example, when the variance of the depth data of any region of the image data D_img is smaller than the threshold value, the computing processor 1330 determines that the region contains only the hand region. On the contrary, when the variance of the depth data of any region of the image data D_img is larger than the threshold value, the computing processor 1330 determines that the region contains the hand region together with the body region or the head region. The extent of the variance may be determined based on the following formula:
|D−M|≦p×std (5)
In formula (5), D is the depth data of the image data D_img, M is the average depth value, std is the variance of the depth, and p is an adjustable parameter. When the position of the hand region is obtained, considering that the hand is located between the video camera 1320 and the body or the head, the average depth value approaches the value of the hand region, and thus p is set to a positive number, such that the hand region is separated more completely. In one exemplary embodiment, the threshold value of the variance is, for example, 0.6, and p is, for example, 1.
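This depth-based segmentation of formula (5) can be sketched as follows; the (near, far) preset depth range is an assumed input, and the default parameter values follow the examples above:

```python
import numpy as np

def hand_mask_from_depth(depth, depth_range, p=1.0, var_threshold=0.6):
    """Segment the hand from a depth map using formula (5).
    depth: 2-D array of depth values; depth_range: (near, far) preset
    range for the hand.  Returns (mask, mixed): mask keeps pixels with
    |D - M| <= p * std, and mixed is True when the in-range variance
    exceeds the threshold, i.e. body or head is likely also present."""
    near, far = depth_range
    in_range = (depth >= near) & (depth <= far)
    vals = depth[in_range]
    if vals.size == 0:
        return in_range, False          # nothing within the preset range
    M, std = vals.mean(), vals.std()
    mask = in_range & (np.abs(depth - M) <= p * std)
    return mask, bool(std > var_threshold)
```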
After obtaining the position of the hand region, the computing processor 1330 divides the hand region into a plurality of identification regions and determines whether each of the identification regions satisfies an identification condition to analyze the finger position of the user, as shown in
Referring to
To be specific, after the hand region is divided into a plurality of identification regions by a plurality of masks MK each having a size of m × n, the computing processor 1330 determines whether each of the identification regions comprises the finger position based on the following identification conditions:
Tmin≦Area≦Tmax (6)
Periphery≦TPeriphery (7)
wherein Area is the area of the hand region within the identification region. The actual area of the hand region within the identification region is calculated based on the depth data of each hand region and the data point of each hand region. Tmin and Tmax are respectively the minimum threshold value and the maximum threshold value of the area of the hand region. That is, Tmin is a minimum threshold area, and Tmax is a maximum threshold area. Periphery is the overlap length of the closed curve of the mask MK and the hand region. The actual overlap length of the closed curve and the hand region is obtained by calculation with the depth data. TPeriphery is the overlap threshold length of the closed curve and the hand region, i.e. a length threshold value.
Accordingly, the computing processor 1330 may determine the part of the identification regions in which the area of the hand region satisfies identification condition (6). Herein, for the determined identification regions, the corresponding area of the hand region matches the preset finger area. Next, the computing processor 1330 further analyzes the finger position based on the identification regions satisfying identification condition (7). Herein, for the determined identification regions, the corresponding shape of the hand region exhibits the characteristics of a peripheral region, as shown in
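A sketch of the sliding-mask test for identification conditions (6) and (7) on a binary hand mask; the window size, stride, and the three thresholds are illustrative assumptions, and the window-border sum stands in for the closed-curve overlap length:

```python
import numpy as np

def fingertip_candidates(hand_mask, m=16, n=16,
                         T_min=30, T_max=120, T_periphery=20):
    """Slide an m x n mask over a binary hand mask and keep windows
    that satisfy both identification conditions:
      (6) T_min <= Area <= T_max: a fingertip covers a small, bounded
          part of the hand region inside the window;
      (7) Periphery <= T_periphery: the hand crosses the window's
          closed border curve on only a short stretch.
    Returns the center coordinates of the satisfying windows."""
    h, w = hand_mask.shape
    hits = []
    for r in range(0, h - m + 1, max(1, m // 2)):
        for c in range(0, w - n + 1, max(1, n // 2)):
            win = hand_mask[r:r + m, c:c + n]
            area = int(win.sum())
            border = (int(win[0, :].sum()) + int(win[-1, :].sum())
                      + int(win[1:-1, 0].sum()) + int(win[1:-1, -1].sum()))
            if T_min <= area <= T_max and border <= T_periphery:
                hits.append((r + m // 2, c + n // 2))
    return hits
```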
Specifically, in step S1440, the computing processor 1330 defines the adjustable comparison circle C within the detected hand region. Herein, the center position of the comparison circle C is preset on the center point Ct of the hand region. Next, the computing processor 1330 gradually adjusts a diameter and the center position of the comparison circle C, such that the comparison circle C is adjusted to a maximum inscribed circle which is inscribed in a hand outline HS.
For example, after obtaining the position of the hand region, the computing processor 1330 starts from the center point Ct of the hand region and performs the analysis from a smaller circle. In one exemplary embodiment, the diameter of the comparison circle C may be preset to the size of 31 pixels. Herein, the computing processor 1330 sets the overlap position of the circumference of the comparison circle C and the hand region to 1 and sets the non-overlap position to 0, so as to perform the calculation. The computing processor 1330 gradually increases the diameter of the comparison circle C based on the principle that the comparison circle C is not broken, i.e. the comparison circle C does not exceed the hand outline HS.
In detail, once the comparison circle C is broken, i.e. the comparison circle C exceeds the hand outline HS, the computing processor 1330 first adjusts the center position of the comparison circle C, as shown
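The grow-and-shift search for the maximum inscribed circle may be sketched as below; the 64-point circumference sampling and the shift rule (move the center away from the broken arc) are assumptions, and the 31-pixel initial diameter follows the example above:

```python
import numpy as np

def palm_center(hand_mask, start, init_diameter=31, max_iter=500):
    """Find the palm center as the center of the maximum inscribed
    circle: start a comparison circle at the hand's center point,
    enlarge it while its circumference stays inside the hand outline,
    and nudge the center whenever the circle would exceed the outline.
    Returns the final center (row, col) and diameter in pixels."""
    h, w = hand_mask.shape

    def broken_arc(center, radius):
        # Sample 64 circumference points; return those outside the hand.
        ang = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
        ys = np.clip((center[0] + radius * np.sin(ang)).astype(int), 0, h - 1)
        xs = np.clip((center[1] + radius * np.cos(ang)).astype(int), 0, w - 1)
        outside = ~hand_mask[ys, xs]
        return ys[outside], xs[outside]

    center = np.array(start, dtype=float)
    radius = init_diameter / 2.0
    for _ in range(max_iter):
        by, bx = broken_arc(center, radius + 1)
        if by.size == 0:
            radius += 1                 # circle still intact: keep growing
            continue
        # Circle would break: shift the center away from the broken arc.
        push = center - np.array([by.mean(), bx.mean()])
        norm = np.linalg.norm(push)
        if norm < 1e-6:
            break
        new_center = center + push / norm
        if broken_arc(new_center, radius)[0].size > 0:
            break                       # shift cannot recover: maximum found
        center = new_center
    return (int(center[0]), int(center[1])), int(2 * radius)
```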
Furthermore, in the process of the interactive operation, the position of the hand may continuously move, and the shape of the palm may also continuously change. In other words, the area of the palm may differ between frames. Accordingly, in the present embodiment, after the computing processor 1330 analyzes the center position of the palm, for the analysis of the center position of the palm in the next frame, the center position of the palm of the previous frame may be preset as the center position of the comparison circle C, and the diameter of the comparison circle C of the previous frame may be preset as the initial diameter, such that the analysis time of the computing processor 1330 is reduced.
Moreover, when the area of the palm in the next frame is larger than that in the previous frame, the computing processor 1330 increases the diameter of the comparison circle C and moves the center position of the comparison circle C to find the center position of the palm. On the contrary, when the area of the palm in the next frame is smaller than that in the previous frame, the comparison circle C is broken in every section at the initial state of the analysis, and thus the computing processor 1330 decreases the diameter of the comparison circle C and moves the center position of the comparison circle C to find the center position of the palm under this condition.
After analyzing the center position of the palm, the computing processor 1330 may determine, within each identification region, the coordinate point farthest from the center position of the palm as the fingertip coordinate, as shown in
In an exemplary embodiment in which the display 1310 is a flat panel display, the computing processor 1330 controls the display 1310 to display, on the frame, a cursor at a position corresponding to the palm or the finger of the user in the image, so as to allow the user to see the current position of the operation.
Furthermore, in an exemplary embodiment, the computing processor 1330 identifies a gesture action, such as a horizontal movement, a vertical movement, or staying at the same position, according to the center position of the palm and a movement locus of the identification regions corresponding to the finger position. In addition, the computing processor 1330 identifies the gesture action of the user according to the center position of the palm and the number of the detected identification regions corresponding to the finger position. For example, the computing processor 1330 may identify a gesture action such as scissors, rock, or paper according to the center position of the palm and the number of the identification regions.
Moreover, the computing processor 1330 may also identify the grab action of the user based on this method. For example, in one exemplary embodiment, to avoid parts of the finger positions in the image being omitted due to the interference of image noise, the computing processor 1330 may be configured to determine that the user opens his/her hand, i.e. the release action, upon detecting that the user extends more than two fingers, and, on the contrary, to determine that the user clenches his/her hand into a fist, i.e. the grab action.
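A minimal mapping from the number of detected fingertip regions to these gestures, following the more-than-two-fingers rule for open hand versus fist; the labels themselves are illustrative:

```python
def classify_hand_gesture(num_fingertips):
    """Map the number of detected fingertip regions to a gesture."""
    if num_fingertips > 2:
        return 'paper / open hand (release)'
    if num_fingertips == 2:
        return 'scissors'
    if num_fingertips == 0:
        return 'rock / fist (grab)'
    return 'undetermined'
```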
The image interaction system 1300 of the present embodiment may be the foregoing interactive stereo display system. In other words, the method for detecting the finger position of the present embodiment may be applied to the foregoing stereo display system 100, such that the stereo display system 100 automatically detects the finger position of the user. Accordingly, the user performs the interaction operation on the stereo image with the finger.
In summary, in the stereo display system and the control method of the stereo display provided in the disclosure, by detecting the eyes position of the viewer, the left eye image and the right eye image displayed by the stereo display are adaptively adjusted according to the eyes position, such that the stereo image viewed by the viewer is displayed at the specific position, or a constant distance between the stereo image and the viewer is maintained based on the requirement of the viewer. Furthermore, the method for detecting the finger position and the image interaction system are provided in the disclosure. The hand region is divided into a plurality of identification regions, and whether each of the identification regions satisfies an identification condition is determined to detect the finger position of the user, such that the operation action of the user is effectively identified in the image interaction system, and thus the operational sensitivity of the image interaction system is further enhanced.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure covers modifications and variations of this disclosure provided they fall within the scope of the following claims and their equivalents.