A problem arises in the art when golfers manually position an alignment device on the ground in an effort to align themselves relative to the ball and the target, because such manually positioned alignment devices are often, in fact, misaligned relative to the target.
Achieving proper alignment of a golf shot is technically challenging, and previous attempts at helping golfers evaluate the alignment of their shots suffer from shortcomings.
For example, U.S. Pat. No. 9,737,757 (Kiraly) discloses a golf ball launch monitor that can use one or more cameras to generate images of a golf shot and process those images to determine the shot's trajectory. Kiraly describes that this image processing can include detecting the presence of alignment sticks in the images, where the detected alignment stick would establish the frame of reference for determining whether the shot's trajectory was on target or off target. However, Kiraly suffers from an assumption that the alignment stick is properly aligned with the golfer's target. In other words, Kiraly merely informs users how well the trajectories of their shots align with the directional heading of the alignment stick. Kiraly fails to provide any feedback regarding whether the alignment stick is itself aligned with the target. In many cases, the alignment stick placed by the golfer will not be aligned with the target, in which case Kiraly's feedback about alignment would be based on a faulty premise.
U.S. Pat. No. 10,603,567 (Springub) discloses various techniques for aligning a golfer with a target, where these techniques rely on the use of active sensors that are disposed in, at, or near the golfer's body or clothing to determine where the golfer's body is pointing. In an example embodiment, Springub discloses the use of an active sensor that is included as part of a ruler on the ground and aligned with the golfer's feet. The active sensors serve as contact sensors that permit the golfer to position his or her feet in a desired orientation. However, this approach also suffers from an inability to gauge whether the ruler is actually aligned with the golfer's target.
In an effort to address these technical shortcomings in the art, disclosed herein are techniques where computer technology is practically applied to solve the technical problem of aligning a golf shot for a golfer with a target. This technology can operate in coordination with an alignment device (e.g., an alignment stick) used by the golfer as an aid for aligning the golf shot with the target.
To solve this technical problem, the inventor discloses examples that use image processing in combination with computer-based modeling of physical relationships as between an alignment device, ball, and/or target that exist in the real world to compute and adjust alignments for golf shots. This inventive technology can provide real-time feedback to golfers for improved training and shot accuracy.
According to an example embodiment, image data about a scene can be processed. This image data comprises one or more images of the scene, wherein the one or more images comprise a plurality of pixels, the pixels having pixel coordinates in the one or more images, wherein the image data includes depictions of an alignment device and a target in the scene. One or more processors translate a plurality of the pixel coordinates applicable to the alignment device to 3D coordinates in a frame of reference based on a spatial model of the scene. The one or more processors also determine an orientation of the alignment device relative to the frame of reference based on the translation of the pixel coordinates. The one or more processors can generate alignment data based on the determined alignment device orientation, wherein the generated alignment data is indicative of a relative alignment for the alignment device in the scene with respect to a golf shot for striking a golf ball toward the target. Feedback that is indicative of the generated alignment data can then be generated for presentation to a user.
As an example, the generated alignment data can be a target line from the golf ball that has the same orientation as the alignment device. With this example, the feedback can be visual feedback that depicts the target line in the scene. Moreover, the generated alignment data may also include an identification and/or quantification of any discrepancy that exists between the target line and the target. Further still, the feedback can include a presentation of any identified and/or quantified discrepancy between the target line and the target.
As another example, the generated alignment data can be a projection of an alignment line that extends outward into the scene toward the target from the alignment device, where the alignment line has the same orientation as the alignment device. With this example, the feedback can be visual feedback that depicts the alignment line in the scene, which can allow the user to visually evaluate how close the alignment line is to the target. Moreover, the generated alignment data may also include an identification and/or quantification of any discrepancy that exists between the alignment line and the target. Further still, the feedback can include a presentation of any identified and/or quantified discrepancy between the alignment line and the target.
As still another example, the generated alignment data can be a projection of a line that extends from the target toward the golfer, where this line has the same orientation as the alignment device. Such a line projection can help support a decision by the golfer regarding where the ball can be placed in the scene (from which the golfer would strike the ball). With this example, the feedback can be visual feedback that depicts this line in the scene or a depiction of a suggested area for ball placement in the scene (where the suggested ball placement area is derived from the projected line (e.g., a point, line, circle, or other zone/shape around the projected line near the alignment device where the golfer is expected to be standing)).
According to another example embodiment, image data about a scene can be processed, where this image data comprises one or more images of the scene, wherein the one or more images comprise a plurality of pixels, the pixels having pixel coordinates in the one or more images, wherein the image data includes depictions of a golf ball and a target in the scene. One or more processors translate a plurality of the pixel coordinates applicable to the golf ball and the target to 3D coordinates in a frame of reference based on a spatial model of the scene. The one or more processors also determine a line relative to the frame of reference, wherein the determined line connects the 3D coordinates for the golf ball with the 3D coordinates for the target. Feedback that is indicative of the determined line can then be generated for presentation to the user.
These and other example embodiments are described in greater detail below.
In an example embodiment, the image 222 can be captured by a camera. For example, the camera can capture the image 222 when the camera is oriented approximately perpendicular (90 degrees) to the target line (as described below), which can facilitate processing operations with respect to changes in elevation between the ball 224 and the target 228. However, it should be understood that the camera need not be oriented in this manner for other example embodiments. For example, the camera could be positioned obliquely relative to the target line, and the system would still be capable of generating and evaluating shot alignments. Moreover, the image capture can be accomplished manually based on user operation of the camera (e.g., via user interactions with the user interface of a camera app on a smart phone) or automatically and transparently to the user when running the system (e.g., a sensor such as a camera automatically begins sensing the scene when the user starts a mobile app). Images such as the one shown by image 222 can serve as a data basis for evaluating whether the alignment device 226 is positioned in a manner that will align the golfer with the target when swinging and hitting the ball. Furthermore, it should be understood that with example embodiments, this data may be further augmented with additional information such as a range to the target, which may be inputted manually or derived from range finding equipment, GPS or other mapping data, and/or lidar (which may potentially be equipment that is resident on a smart phone).
To support the generation of alignment data about the alignment device, the pixel coordinates of one or more objects in the image data (e.g., the ball 224, alignment device 226, and/or target 228) are translated to 3D coordinates in a frame of reference based on a spatial model of the scene 220. This spatial model can define a geometry for the scene 220 that positionally relates the objects depicted in the scene 220. Augmented reality (AR) processing technology such as Simultaneous Localization and Mapping (SLAM) techniques can be used to establish and track the coordinates in 3D space of the objects depicted in the image data. Moreover, as discussed below, the system can track movement and tilting of the camera that generates the image data so that the 3D coordinate space of the scene can be translated from the pixel coordinates of the image data as images are generated while the camera is moving.
The AR processing can initialize its spatial modeling by capturing image frames from the camera. While image frames are being captured, the AR processing can also obtain data from one or more inertial sensors associated with the camera (e.g., in examples where the camera is part of a mobile device such as a smart phone, the mobile device will have one or more accelerometers and/or gyroscopes that serve as inertial sensors), where the obtained data serves as inertial data that indicates tilting and other movements by the camera. The AR processing can then perform feature point extraction. The feature point extraction can identify feature points (keypoints) in each image frame, where these feature points are points that are likely to correspond to the same physical location when viewed from different angles by the camera. A descriptor can be computed for each feature point, where the descriptor summarizes the local image region around the feature point so that it can be recognized in other image frames.
The AR processing can also perform tracking and mapping functions. For local mapping, the AR system can maintain a local 3D map of the scene, where this map comprises the feature points and their descriptors. The AR system can also provide pose estimation by mapping feature points between image frames, which allows the system to estimate the camera's pose (its position and orientation) in real-time. The AR system can also provide sensor fusion where inertial data from the inertial sensors are fused with the feature points to improve tracking accuracy and reduce drift.
As an example, the AR processing can be provided by software such as Android's ARCore and/or Apple's ARKit libraries.
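By way of illustration only, the feature point extraction and descriptor computation described above can be sketched in Python using OpenCV's ORB detector; this is not the internal implementation of ARCore or ARKit, and the input file name is a hypothetical placeholder.

import cv2

# Load one image frame from the camera (hypothetical file name).
frame = cv2.imread("scene_frame.jpg", cv2.IMREAD_GRAYSCALE)

# Detect feature points (keypoints) and compute a descriptor for each one;
# the descriptor summarizes the local image region around the keypoint so
# that the same physical point can be recognized in other image frames.
orb = cv2.ORB_create(nfeatures=1000)
keypoints, descriptors = orb.detectAndCompute(frame, None)

# Matching descriptors across frames (e.g., with a brute-force Hamming
# matcher) supports the tracking and pose estimation functions described above.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)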
The alignment device 226 can take any of a number of forms, e.g., an alignment stick, a golf club, a range divider, a wood stake, the edge of a hitting mat, or another directional instrument. In some embodiments, the alignment device 226 may even take the form of projected light. In still other embodiments, the alignment device 226 may take the form of a line on the ball 224 (e.g., see
The target 228 can be any target that the golfer wants to use for the shot. For example, the target 228 can be a flagstick, hole, or any other landmark that the golfer may be using as the target for the shot.
The
At step 200, the processor processes the image data to determine the ground plane depicted by the image data. The processor can read the image data from memory that holds image data generated by a camera. The ground plane is the plane on which the alignment device 226 is positioned. This ground plane determination establishes a frame of reference for determining the orientation of the alignment device 226, the position of the ball 224, and the position of the target 228 in 3D space.
AR processing technology such as SLAM techniques can be used to establish this ground plane 402 and track the spatial relationship between the camera that generates the image data and the objects depicted in the image data. For example, the AR processing can work on a point cloud of feature points in a 3D map that are derived from the image data to identify potential planes. The Random Sample Consensus (RANSAC) algorithm or similar techniques can be used to fit planes to subsets of the point cloud. Candidate ground planes are then validated and refined over several image frames to ensure that they are stable and reliable. The ground plane 402 can be represented in the data by a pose, dimensions, and boundary points. The boundary points can be convex, and the pose defines a position and orientation of the plane. The pose can be represented by a 3D coordinate and a quaternion for rotation. This effectively defines the origin of the plane in the 3D spatial model and defines how it is rotated. The pose of the plane can be characterized as where the plane is and how it is oriented in the coordinate system of the 3D spatial model for the scene 220. The defined origin can serve as a central point from which other properties of the ground plane 402 are derived. The dimensions of the ground plane 402 refer to the extent of the ground plane 402, which can usually be described by a width and a length. This can be exposed by the AR system as extents, providing a half-extent in each of the X and Z dimensions (since the ground plane 402 is flat, there would not be a Y extent). Knowing the extents allows the system to understand how big the ground plane 402 is and consequently how much space there is for placing virtual objects in a scene. The boundary points describe the shape of the ground plane 402 along its edges. The ground plane 402 may not be a perfect rectangle, and it may have an irregular shape. For example, the ground plane 402 can be defined to have a convex shape if desired by a practitioner (in which case all interior angles of the ground plane 402 would be less than or equal to 180 degrees, and the line segment connecting any two points inside the convex shape would also be entirely inside the convex shape). Understanding a set of boundary points for the ground plane 402 allows the AR system to render a visual graphic of the ground plane 402 in a displayed image and help detect collisions/intersections with virtual objects in the scene. Accordingly, it should be understood that a practitioner may choose to visually highlight the detected ground plane 402 in a displayed image, which can help with the placement of virtual objects on the ground plane 402.
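By way of illustration only, a RANSAC-style plane fit of the kind referenced above could be sketched in Python as follows; the feature-point cloud, iteration count, and inlier threshold are hypothetical placeholders, and production AR frameworks perform this fitting internally.

import numpy as np

def fit_ground_plane(points, iterations=200, inlier_threshold=0.02):
    """points: (N, 3) array of 3D feature points. Returns (normal, d) for
    the candidate plane n.x + d = 0 that has the most inliers."""
    best_inliers, best_plane = 0, None
    rng = np.random.default_rng(0)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane normal from two edge vectors of the sampled triangle.
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample; try again
        normal /= norm
        d = -normal.dot(sample[0])
        # Perpendicular distance of every point to the candidate plane.
        distances = np.abs(points @ normal + d)
        inliers = int((distances < inlier_threshold).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane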
At step 202, the processor processes the image data to determine the location and orientation of the alignment device 226. This location and orientation can be a vector that defines the directionality of the alignment device 226 with respect to the alignment device's dominant direction (e.g., its length) in 3D space relative to the ground plane. This vector can be referred to as the “alignment line” or “extended alignment line”, which can be deemed to extend outward in space from the foreground of the scene 220 to the background of the scene 220 in the general direction of the target 228.
In an example embodiment, the alignment device 226 can be identified in the image data in response to user input such as input from a user that identifies two points on the alignment device 226 as depicted in the image data. An example of this is shown by
The pixel locations of points 412 and 414 can be translated into locations in the 3D space referenced by the ground plane 402. To find these 3D points, rays can be cast from the position of the camera outwards at point 412 and at point 414. If the rays collide with the detected ground plane 402, the AR system can get these collision points, which are 3D positions that can be represented by x, y, and z float variables. For the ray cast, the ray can start at a specified origin point in the 3D space of the system's spatial model (e.g., the camera). The ray can be cast from this origin point in a direction away from the camera through the pixel location on the display screen that has been selected by the user (e.g., point 412 or point 414). Optionally, a distance for the ray can be specified, although this need not be the case. The intersection of the ray with the ground plane 402 would then define the 3D coordinates for the specified point (412 or 414 as applicable). For example, SLAM technology as discussed above can provide this translation. Accordingly, the line that connects points 412 and 414 in the 3D space defines the orientation of the alignment device 226, and this orientation can define a vector that effectively represents where the alignment device 226 is aimed. As such, the vector defined by the orientation of the alignment device 226 can be referred to as the alignment line for the alignment device 226. The alignment line vector can be deemed to lie in the ground plane 402, and the alignment line vector can be defined by 3D coordinates for two points along the alignment line. Based on the 3D coordinates for these two points, the alignment line will exhibit a known slope (which can be expressed as an azimuth angle and elevation angle between the two points 412 and 414). Vector subtraction can be used to determine the directional heading (orientation) of the alignment device 226, and a practitioner may choose to virtually render the alignment line (or at least the portion of the alignment line connecting points 412 and 414) in the displayed image.
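By way of illustration only, the geometry of casting a ray from the camera through a selected pixel, intersecting it with the detected ground plane, and deriving the alignment line's heading by vector subtraction can be sketched in Python as follows; the camera intrinsics, pose, and plane parameters are hypothetical placeholders that an AR framework would supply.

import numpy as np

def pixel_ray(pixel, camera_pos, intrinsics, cam_to_world):
    """Return (origin, direction) of a world-space ray through `pixel`."""
    u, v = pixel
    fx, fy, cx, cy = intrinsics
    dir_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # camera-space ray
    dir_world = cam_to_world @ dir_cam                        # rotate into world frame
    return camera_pos, dir_world / np.linalg.norm(dir_world)

def intersect_plane(origin, direction, plane_normal, plane_d):
    """Intersect the ray with the plane n.x + d = 0; returns None if parallel
    or if the plane lies behind the camera."""
    denom = plane_normal.dot(direction)
    if abs(denom) < 1e-9:
        return None
    t = -(plane_normal.dot(origin) + plane_d) / denom
    return None if t < 0 else origin + t * direction

# The alignment-line heading is the normalized difference of the two
# ground-plane hit points for the user-selected points 412 and 414, e.g.:
# heading = (p414 - p412) / np.linalg.norm(p414 - p412)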
While the example of
While the example discussed above employs user input to identify the alignment device 226 in the image data, it should also be understood that automated techniques for detecting the alignment device 226 can be used if desired by a practitioner. For example, the processor can use computer vision techniques such as edge detection, corner detection, and/or object recognition techniques to automatically detect the existence and location of an alignment device 226 in the image data. For example, the image data can be processed to detect areas of high contrast with straight lines to facilitate automated detection of an alignment stick. The object recognition can be based on machine learning (ML) techniques where a classifier is trained using known images of alignment devices to detect the presence of an alignment device in an image. Examples of ML techniques that can be used in this regard include YOLOX and convolutional neural networks (CNNs) that are trained to recognize alignment devices. To facilitate such automated detection, the alignment device 226 can include optically-readable indicia such as predefined patterns, labels, or the like that allow it to be easily detected within the image data. However, it should be understood that these optically-readable indicia need not necessarily be used because computer vision techniques can also be designed to recognize and detect alignment devices that have not been marked with such optically-readable indicia. Further still, the system can employ detection techniques other than optical techniques for locating the alignment device 226. For example, the alignment device can include wireless RF beacons utilizing RFID or Bluetooth technology to render the alignment device 226 electromagnetically detectable, and triangulation techniques could be used to precisely detect the location and orientation of the alignment device 226.
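By way of illustration only, one non-learned detection option mentioned above (finding long, high-contrast straight lines) could be sketched with OpenCV as follows; a trained detector such as YOLOX or a CNN would replace this logic in a learned pipeline, and the thresholds and file name are hypothetical placeholders.

import cv2
import numpy as np

# Hypothetical input frame depicting the scene with the alignment stick.
image = cv2.imread("scene_frame.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough transform: keep only long line segments, which are
# good candidates for an alignment stick lying on the ground.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=200, maxLineGap=10)
if lines is not None:
    # Pick the longest detected segment as the alignment-device candidate.
    longest = max(lines[:, 0, :], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    x1, y1, x2, y2 = longest  # endpoint pixels, analogous to points 412 and 414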
At step 204, the processor processes the image data to determine the location of the ball 224. This location can be referenced to the ground plane 402 so that the position of the ball 224 in 3D space relative to the alignment line is known.
In an example embodiment, the ball 224 can be identified in the image data in response to user input such as input from a user that identifies a point where the ball 224 is located in the image. An example of this is shown by
While the example discussed above employs user input to identify the ball 224 in the image data, it should also be understood that automated techniques for detecting the ball 224 can be used if desired by a practitioner. For example, the processor can use edge detection, corner detection, and/or object recognition techniques to automatically detect the existence and location of a golf ball in the image data. The object recognition can be based on machine learning (ML) techniques where a classifier is trained using known images of golf balls to detect the presence of a golf ball in an image. Examples of ML techniques that can be used in this regard include convolutional neural networks (CNNs) that are trained to recognize golf balls.
At step 206, the processor calculates a vector extending from the determined ball location, where this calculated vector has the same orientation as the alignment line. This calculated vector serves as the “target line” for the shot. Accordingly, it should be understood that the target line has the same directional heading as the alignment line.
To calculate the target line, the system can use the 3D coordinate for the location of ball 224 (defined via point 422) as the origin for the target line vector and extend the target line vector outward with the same directional heading as the alignment line. For purposes of a visual display of the target line, the system may also optionally specify a distance for how long the target line is to extend from the ball location 422 along the directional heading with the same orientation as the alignment line.
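By way of illustration only, the construction of the target line at step 206 can be sketched as follows; the ball location, alignment heading, and display length are hypothetical placeholder values.

import numpy as np

ball_pos = np.array([0.0, 0.0, 0.0])             # 3D ball location (point 422)
alignment_heading = np.array([0.05, 0.0, 1.0])   # heading of the alignment line
alignment_heading /= np.linalg.norm(alignment_heading)

# Optional: how far to draw the target line from the ball, in model units.
display_length = 50.0
target_line_end = ball_pos + display_length * alignment_heading
# Any point on the target line is ball_pos + t * alignment_heading, t >= 0.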
At step 208, the processor processes the image data to determine the location of the target 228. This location can be referenced to the ground plane 402 so that the position of the target 228 in 3D space relative to the alignment line and the target line 424 is known.
In an example embodiment, the target 228 can be identified in the image data in response to user input such as input from a user that identifies a point where the target 228 is located in the image. An example of this is shown by
The displayed image 440 may also draw a line 444, where line 444 is a vertical line from point 442 (representing target 228) that is perpendicular with the ground plane 402. Line 444 can help the user with respect to visualizing the placement of point 442 for the target 228. Moreover, because line 444 connects to point 442 and is perpendicular to the ground plane 402, it should be understood that the display of line 444 in the displayed image may tilt as the user tilts the camera, which allows the user to visually gauge his or her perspective through the camera relative to the target 228. However, it should be understood that a practitioner may choose to implement step 208 without displaying the line 444 if desired.
Moreover, the system may optionally also leverage topographical map data, lidar data, or other data that would provide geo-located height (elevation) data for the land covered by the scene 220 in the image data. This height data can be leveraged by the system to take the contours of the land in scene 220 into consideration when the user is dragging a point 442 (e.g., a virtual flag) out toward the desired target 228 on the display image so that the point 442 can move up and down the contours of the scene 220 to thereby inform the user of the contours in the field. Similarly, this height data can be leveraged by the system to take the contours of the land in scene 220 into consideration if displaying the target line 424 (in which case the line 424 depicted in
While the example discussed above employs user input to identify the target 228 in the image data, it should also be understood that automated techniques for detecting the target 228 can be used if desired by a practitioner.
For example, the processor can use edge detection, corner detection, object recognition, and/or other computer vision techniques to automatically detect the existence and location of typical targets for golf shots (such as hole flags). The object recognition can be based on machine learning (ML) techniques where a classifier is trained using known images of target indicators such as hole flags to detect the presence of a hole flag in an image. Examples of ML techniques that can be used in this regard include convolutional neural networks (CNNs) that are trained to recognize hole flags. However, it should be understood that a user may choose to use virtually anything as the target 228, as any desired landing point for a shot downfield from the ball 224 could serve as the user-defined target 228.
As another example, geo-location techniques could be used to determine the location for target 228. For example, on many golf courses, the holes will have known geo-locations, and global positioning system (GPS) data or other geo-location data can be used to identify the target 228 and translate the known GPS location of the target 228 to the coordinate space of the ground plane 402. The system may optionally use visual positioning system (VPS) data that helps localize the camera using known visual imagery of the landscape in scene 220. This ability to leverage VPS data will be dependent on the coverage of the relevant geographic area (e.g. a particular golf course) within available VPS data sets. This can help link the 3D spatial model of the AR processing system with real world geo-location data.
As still another example, crowd-sourced data can be used to define the location for target 228 in some circumstances. For instance, input from other users that indicates a location for a target 228 such as a hole on a golf course can be aggregated to generate reliable indications of where a given hole is located. For example, the average user-defined location for a hole as derived from a pool of users (e.g., a pool of recent users) can be used to automatically define the location for target 228 when the user is aiming a shot at the subject hole.
Once the target 228 and the target line 424 have been located in the 3D space of the system, the processor is able to evaluate the alignment of alignment device 226 based on the determined target location and the target line 424 (step 210). Toward this end, at step 210, the processor can determine whether the location of target 228 determined at step 208 falls along the target line vector 424 determined at step 206. To accomplish this, the processor can find the closest point along the target line 424 to the determined target location. The distance between this closest point along the target line 424 and the determined target location can serve as a measure of the alignment of the alignment device 226, where this measure quantifies the accuracy or inaccuracy (as applicable) of the subject alignment; values close to zero would indicate accurate alignment while larger values would indicate inaccurate alignment (misalignment). If step 210 results in a determination that the location of target 228 falls along the target line vector 424 (in which case the alignment measurement would be zero), then the processor can determine that the alignment device 226 is aligned with the target 228. If step 210 results in a determination that the location of target 228 does not fall along the target line vector 424 (in which case the alignment measurement would be a non-zero value), then there is a misalignment of the alignment device 226. However, it should be understood that, if desired by a practitioner, step 210 can employ a tolerance that defines a permitted amount of divergence between the location of target 228 and the target line 424 while still concluding that the alignment device 226 is properly aligned with the target. As examples, the tolerance value can be represented by physical distances (e.g., 2 feet) or angular values (e.g., 2 degrees) that serve as thresholds for evaluating whether a candidate orientation is “aligned” or “misaligned”; and the tolerance value can be hard-coded into the system or defined in response to user input, depending on the desires of a practitioner. Further still, the exact threshold values can be chosen by practitioners or users based on empirical factors that are deemed by the practitioners or users to be helpful for practicing their shots.
Moreover, step 210 may include the processor quantifying an extent of misalignment between the target line 424 and location of target 228 if applicable. For example, the processor can compute an angular displacement as between the target line 424 and a line connecting the determined locations for the ball 224 and target 228. This angular displacement can represent the extent of misalignment indicated by the current orientation of the alignment device 226. Moreover, the processor can combine this angular displacement with a range to the target 228 to translate the angular displacement to a distance value (e.g., a misalignment of X feet at Y feet of range). In another example, the processor can compare the 3D coordinate of the determined location for target 228 and the nearest 3D coordinate on the target line vector 424 to compute the distance between these 3D coordinates.
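By way of illustration only, the evaluation at step 210, including the closest-point distance, an optional tolerance test, and the translation of angular displacement into a miss distance at range, can be sketched as follows; the tolerance and input values are hypothetical placeholders supplied by the earlier steps.

import numpy as np

def alignment_error(ball_pos, heading, target_pos, tolerance=0.5):
    """heading: unit vector of the target line 424; tolerance in model units."""
    to_target = target_pos - ball_pos
    # Closest point on the target line to the determined target location.
    closest = ball_pos + max(to_target.dot(heading), 0.0) * heading
    miss_distance = np.linalg.norm(target_pos - closest)

    # Angular displacement between the target line and the ball-to-target line.
    cos_angle = np.clip(to_target.dot(heading) / np.linalg.norm(to_target), -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))

    aligned = miss_distance <= tolerance
    return aligned, miss_distance, angle_deg

# Translating the angular error into a distance at a known range, e.g.
# "misaligned by X feet at Y feet of range":
# miss_at_range = range_to_target * np.tan(np.radians(angle_deg))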
Feedback can be provided to the user about the quality of alignment for the alignment device 226 based on the processing at step 210 (see steps 212 and 214). This feedback may be provided to the user via augmented reality (AR) and mixed reality (MR) techniques if desired by a practitioner. However, this need not be the case, as discussed in greater detail below.
If step 210 results in a determination that the alignment device 226 is aligned with the target 228, then the process flow can proceed to step 212. At step 212, the processor provides feedback to the user indicating that the alignment device 226 is aligned with the target 228. This feedback can be simple binary feedback such as the display of an indicator or message on a GUI display which indicates that the alignment device 226 is properly aligned with the target 228. For example, the GUI display of image 440 can show the target line 424 in a particular color such as bright yellow if step 210 results in a determination that the alignment device 226 is aligned with the target 228. However, it should be understood that the GUI display could also provide a written message (e.g., “You are aligned”) to similar effect. Still further, audio or haptic feedback could be provided at step 212 to indicate alignment if desired by a practitioner.
Further still, if desired by a practitioner, the displayed image 440 can provide additional feedback to the user that informs the user about changes in perspective as the user changes the orientation of the camera over time. For example, the color of target line 424 can vary based on how far off “perpendicular” the camera's 2D field of view perspective is relative to the target line 424. As the image plane of image 440 goes from less perpendicular to more perpendicular to the target line 424, the color of target line 424 in the image 440 can change from Color X to Color Y (e.g., bright red when far away from perpendicular to bright green when perpendicular, with bright yellow in the interim). This can help the user keep track of the view perspective provided by image 440. However, it should be understood that a practitioner may choose to omit this feedback if desired. Moreover, if this feedback is used in combination with the color-coded visual feedback discussed above for evaluating alignment/misalignment, the system can employ color coding that would distinguish between the colors used for indicating alignment/misalignment and the colors used for indicating perspective.
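By way of illustration only, such perspective-dependent coloring can be sketched as follows, mapping the angle between the camera's viewing direction and the target line to a red/yellow/green color; the specific color ramp is a hypothetical choice, and the input vectors would come from the AR pose tracking.

import numpy as np

def perspective_color(camera_forward, target_line_heading):
    """Return an (R, G, B) color: green when the camera view is perpendicular
    to the target line, yellow in between, red when looking along the line."""
    f = camera_forward / np.linalg.norm(camera_forward)
    h = target_line_heading / np.linalg.norm(target_line_heading)
    a = abs(float(f.dot(h)))  # 0 = perpendicular view, 1 = looking along the line
    if a <= 0.5:              # interpolate green -> yellow
        return (int(255 * (a / 0.5)), 255, 0)
    return (255, int(255 * (1 - (a - 0.5) / 0.5)), 0)  # interpolate yellow -> red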
If step 210 results in a determination that the alignment device 226 is not aligned with the target 228, then the process flow can proceed to step 214. At step 214, the processor provides feedback to the user indicating that the alignment device 226 is misaligned with the target 228. This feedback can be simple binary feedback in visual form such as text and/or graphics. For example, the binary feedback can be a display of an indicator or message on a GUI display which indicates that the alignment device 226 is not aligned with the target 228 (e.g., “You are misaligned”). As another example, the misalignment feedback can be a display of graphics such as a red warning or X mark, a display of the target line 424 and/or alignment device 226 in a particular color (e.g., red), and/or a written, audio, or haptic feedback indicating the misalignment. In the example of
Further still, if step 210 provides a quantification of the misalignment, the feedback may be quantified feedback (e.g., “adjust the alignment stick by 4 degrees”), visually displayed feedback (e.g., a visual indicator on a display screen that shows a user how the alignment device can be better aligned), and/or it may be generalized feedback (e.g., “tilt the alignment stick to the left” or even more simply “you are misaligned”).
In the example of
The range to target 228 can be derived in any of a number of fashions. For example, user input could supply this range based on the user's knowledge or estimations. As another example, a laser range finder could be used to determine the range. As yet another example, GPS data, geo-location data, or other mapping data (which may include drone-derived mapping data) could be used to determine the range based on knowledge of a geo-location of the user (e.g., derived from the user's mobile device if the mobile device is GPS-equipped and enabled) and knowledge of a GPS position or other geo-location for the defined target 228. It should be appreciated that even relatively small angular misalignments of the alignment device 226 will produce fairly substantial distance misalignments when long ranges are taken into consideration. Accordingly, a feedback message 442 which quantifies an extent of misalignment can help the user gauge how far off the alignment device 226 may be guiding the user.
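By way of illustration only, deriving the range from two geo-locations (e.g., a GPS fix for the user's mobile device and a known geo-location for the target 228) can be sketched with the haversine formula as follows; the coordinates are placeholders supplied by the sources described above.

import math

def range_meters(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))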
Furthermore, image 440 of
Feedback at step 214 may also take the form of an indication to the user of how the alignment device 226 can be re-oriented to improve its alignment relative to the target 228. An example of this is shown by
Accordingly, the
It should be understood that the alignment assessment produced by the process flow of
In the example of
In the examples of
The process flow of
With the example of
In the example of
In an example such as one where the user is interacting with the system via a mobile device such as a smart phone, the visual indication at step 702 can be a graphical overlay of the desired alignment orientation on the displayed image of the scene to show where the alignment device 226 should be positioned. This graphical overlay can be a line depicted in the scene via AR that replicates at least a portion of the desired alignment orientation vector (or a line that is parallel to the desired alignment orientation vector). For example, the graphical overlay line can be a colored line (e.g., bright yellow or some other color) to show where the desired alignment orientation is located in the scene depicted by the image. Moreover, if desired by a practitioner, once a physical alignment device 226 is placed in the field of view, the system can also provide visual feedback on whether the alignment device 226 is aligned to the target 228 (using techniques such as those discussed above, such as the visual feedback explained in connection with
This approach for the visual indication at step 702 is expected to also be effective for example embodiments where the system works in conjunction with virtual reality (VR) equipment (e.g., wearable devices such as VR goggles, glasses, or headsets). With this approach, the VR equipment can display via AR a virtual alignment device with the proper orientation, negating the need for a physical alignment device 226.
In another example where the system includes a device with light projection capabilities positioned near the user, the device can include a light projector that is capable of steering and projecting light into the scene so that a virtual alignment device is illuminated on the ground plane 402 of the scene. This light projection can also provide the user with a reliable virtual alignment device, which also can negate the need for the traditional physical alignment device 226.
The process flows of
The mobile device 300 of
The mobile device 300 may also comprise one or more processors 302 and associated memory 304, where the processor(s) 302 and memory 304 are configured to cooperate to execute software and/or firmware that supports operation of the mobile device 300. Furthermore, the mobile device 300 may include one or more cameras 308. Camera(s) 308 may be used to generate the images used by the example process flows of
The instructions may further include instructions defining a control program 354. The control program can be configured to provide the primary intelligence for the mobile application 350, including orchestrating the data outgoing to and incoming from the I/O programs 356 (e.g., determining which GUI screens 352 are to be presented to the user).
While
For example, as shown by
As another example, as shown by
In an example embodiment, the system 810 can take the form of a launch monitor. Launch monitors are often used by golfers to image their swings and generate data about the trajectory of the balls struck by their shots. By incorporating the alignment evaluation features described herein, a ball launch monitor can be augmented with additional functionality that is useful for golfers. In a launch monitor embodiment, one or more processors resident in the launch monitor itself can perform the image processing operations described herein to support alignment evaluations; or the one or more processors 814 may include one or more processors on a user's mobile device 300 that perform some or all of the alignment evaluation tasks and communicate alignment data to the launch monitor for presentation to the user. In still another example, the launch monitor could be configured to communicate launch data to the mobile device 300 for display of the launch data via the mobile application 350 in coordination with the alignment data. In another example embodiment, the system 810 can take the form of a monitor or display screen that is augmented with processing capabilities to provide alignment assistance as described herein.
Further still, a launch monitor (such as the one disclosed by the above-referenced Kiraly patent) can be augmented to use the spatial model data generated by the system to adjust its internal calculations regarding features such as azimuth feedback (e.g., launch direction, horizontal launch angle, or side angle) and/or elevation changes. For example, the mobile device 300 can be used to also image the launch monitor and detect or determine the launch monitor's orientation with respect to the 3D spatial model maintained by the mobile application 350. By also detecting or determining the launch monitor's orientation in 3D space, the mobile device 300 could communicate data to the launch monitor that allows the launch monitor to better orient itself to the target 228 (which can improve the ability of the launch monitor to calculate accurate azimuth values).
Furthermore, as shown by
While the invention has been described above in relation to example embodiments, various modifications may be made thereto that still fall within the invention's scope.
For example, it should be understood that the process flows of
As another example, while the examples illustrated above in connection with
Moreover, to support putting, many golfers will use balls with lines on them (or will mark their balls with lines), where the golfers will place the ball on the ground so that the line is intended to be aimed in the direction the golfer intends to putt the ball to help the golfer visualize a putting line. The techniques described herein can be adapted to evaluate whether such a line on the ball is aligned with the target 228. An example of this is shown by image 910 of
As another example, a user may choose to use multiple alignment devices 226, and a practitioner may choose to configure the system to support evaluating the alignment of multiple alignment devices 226. For example, if a user is using more than one alignment device 226, the user could select which alignment device 226 he or she would like to utilize as the primary alignment device to determine the target line 424. An example of this is shown by
In the example of
In the example of
As another example, the system can include automated mechanisms for adjusting the alignment of the alignment device 226 if desired by a practitioner. For example, stepper motors, actuators, or other motive capabilities could be employed on or connected to alignment devices (together with data communication capabilities) to adjust alignment devices to better alignments if indicated by the alignment data generated by the system.
These and other modifications to the invention will be recognizable upon review of the teachings herein.
This patent application claims priority to U.S. provisional patent application 63/406,311, filed Sep. 14, 2022, and entitled “Applied Computer Technology for Golf Shot Alignment”, the entire disclosure of which is incorporated herein by reference. This patent application is also related to U.S. patent application Ser. No. ______, filed this same day, and entitled “Image-Based Spatial Modeling of Alignment Devices to Aid Golfers for Golf Shot Alignments” (said patent application being identified by Thompson Coburn Attorney Docket Number 72096-230109), the entire disclosure of which is incorporated herein by reference.