The present invention is related to virtual and real-world positioning and locationing systems and methods, and more particularly, to virtual and real-world positioning and locationing systems and methods for integrating and using, for example, a new point of origin and a scalable earth-centered, earth-fixed (ECEF) coordinate space accessed by computing instructions for providing shape-based locationing, positioning, and/or timing of shapes, which may represent objects or otherwise locations, within the ECEF coordinate space.
A location or place may be defined by a point, line, or area, e.g., between two or more point-based locations that form a segment or area or multiple segments or areas. Descriptive locality defines an area, with or without boundaries, that is used to describe the overall area, e.g., the Los Angeles metro area. Relative location is similar to a military polar plot, which describes a shift or displacement from one site to another using range and direction, e.g., three miles south of the intersection. Absolute location can be designated using a specific pairing of latitude and longitude in a geographic coordinate grid (for example, a spherical coordinate system or an ellipsoid-based system such as the World Geodetic System) or similar methods.
There are three main types of navigation: celestial, Global Positioning System (GPS), and map and compass. With the development of GPS, and the use of known latitude and longitude descriptors, navigation has become commonplace in most devices. For example, the current U.S. GPS constellation uses twenty-four or more satellites orbiting the earth at approximately 20,200 kilometers (about 12,550 miles). Arranged in six orbital planes, each satellite orbits the earth twice a day. All GPS satellites broadcast on at least two carrier frequencies: L1, at 1575.42 MHz, and L2, at 1227.6 MHz (newer satellites also broadcast on L5 at 1176 MHz). The GPS system provides absolute location, but with an error of approximately 1.82 to 4.9 meters, 95% of the time. Real Time Kinematic (RTK) GPS, which uses a base station, can provide accuracy of up to 1-2 centimeters.
Currently, quantitative data and information from conventional sources such as LiDAR, SAR, photogrammetry, and electromagnetic emissions are unable, in a same dataset, to associate a qualitative description with a location. This is so even if RTK may be used to somehow quantify the size of an emission/propagation or value relative to other datasets in the world's current location framework. That is, data received from multiple data sources creates a problem because coordinating or correlating multiple data sources, especially in real-time or near real-time, is difficult. In particular, the assumption of location decouples the context of the data coming from multiple data sources, in the same way a street address decouples all the information about the house, property, occupants, etc.
In view of this, there is a need for systems and methods that measure distance between data sources, instead of treating disparate locations in isolation from the data attributes and/or descriptors, and that overcome the aforementioned limitations in the prior art.
In various aspects, virtual and real-world positioning and locationing system(s) and/or method(s) described herein may comprise one or more controller(s) (e.g., processor(s) as executing on one or more servers or cloud platform(s)) and one or more computer memories storing computing instructions for implementing algorithms or methods for virtual and/or real-world positioning and locationing by use of a scalable earth-centered, earth-fixed (ECEF) coordinate system and an origin-cube-based algorithm to provide location of data and time series information in four-dimensional space. The virtual and real-world positioning and locationing system(s) and/or method(s) may generate, receive, and/or store positioning and locationing data or information in a database or, more generally, computer memory, and may access such data or information therefrom. The systems and methods described herein are configured to create, store, and/or provide high quality or high fidelity location or position information or data in real-time or near real-time.
The present invention may comprise, or take the form of, systems and/or methods for determining the measurement of absolute locations using a scalable origin (e.g., comprising a shape, such as a cube) and an ECEF designed framework.
In accordance with the disclosure herein, the present disclosure includes improvements in computer functionality or in improvements to other technologies at least because the disclosure recites, e.g., a computationally efficient platform for locationing and positioning within a single coordinate space (e.g., a tesseract-based space). That is, the present disclosure describes improvements in the functioning of the computer itself or “any other technology or technical field” because the virtual and real-world positioning and locationing system(s) and/or method(s) described herein are configured to aggregate and ingest data and provide an accurate, single-source, position-and-temporally coordinated database, or otherwise memory store, for determining virtual and real-world positioning or locationing of shapes (e.g., cubes) that may define objects within the coordinate space. This improves over the prior art at least because the single-source database or otherwise memory store reduces the computer memory and storage required by conventional redundant systems that may store the same information in various ways and in different or incompatible formats. In addition, conventional systems typically lack data fidelity where the different and/or redundant systems are unable, or not configured, to coordinate in order to streamline data, such as time series of data, for locationing and/or positioning in a given coordinate space in real-time or near real-time.
In addition, the present disclosure includes effecting a transformation or reduction of a particular article to a different state or thing, e.g., locationing and/or position data in the real world is transformed or reduced into space-based (e.g., tesseract-based) information defining attributes and/or features in four dimensions.
Still further, the present disclosure includes specific features other than what is well-understood, routine, and conventional activity in the field, and/or otherwise adds unconventional steps that confine the disclosure to a particular useful application, e.g., the virtual and real-world positioning and locationing system(s) as described herein.
Advantages will become more apparent to those of ordinary skill in the art from the following description of the preferred aspects that have been shown and described by way of illustration. As will be realized, the present embodiments may be capable of other and different embodiments, and their details are capable of modification in various respects. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
The Figures described below depict various aspects of the system and methods disclosed herein. It should be understood that each Figure depicts an embodiment of a particular aspect of the disclosed system and methods, and that each of the Figures is intended to accord with a possible embodiment thereof. Further, wherever possible, the following description refers to the reference numerals included in the following Figures, in which features depicted in multiple Figures are designated with consistent reference numerals.
There are shown in the drawings arrangements which are presently discussed, it being understood, however, that the present embodiments are not limited to the precise arrangements and instrumentalities shown, wherein:
The Figures depict preferred embodiments for purposes of illustration only. Alternative embodiments of the systems and methods illustrated herein may be employed without departing from the principles of the invention described herein.
In various aspects, virtual and real-world positioning and locationing system(s) and/or method(s) described herein may comprise one or more controller(s) (e.g., processor(s) as executing on one or more servers or cloud platform(s)), and one or more computer memories, storing computing instructions for implementing algorithms or methods for virtual and/or real-world positioning and locationing by use of a scalable earth-centered, earth-fixed (ECEF) coordinate system and an origin-cube-based algorithm providing location of data and time series. The virtual and real-world positioning and locationing system(s) and/or method(s) may store positioning and locationing data or information in a database or, more generally, computer memory, and may access such data or information therefrom. The memory may comprise a tangible, non-transitory computer-readable medium storing instructions for generating location-specific data and/or for performing any other algorithms, methods, or functions as described herein.
In various aspects, the computing instructions, when executed by the controller, may further cause the controller, executing virtual and real-world positioning and locationing system(s) and/or method(s), to translate latitude, longitude, and elevation into tesseract-based XYZ coordinates having temporal or time data. In addition, in some aspects, the computing instructions, when executed by the controller, may further cause the extraction or determination of data, features, or attributes for association with given shapes in a coordinate space (e.g., a coordinate space based on tesseract locationing or positioning). This may comprise, for example, ingestion or coordination of information, data, features, and/or attributes for instantiation of multiple partial or full shapes (e.g., cubes) within the coordinate space (e.g., a tesseract-based space), each having such information, data, features, and/or attributes, in order to achieve data integrity and/or alignment for multiple time series of data. Additionally, or alternatively, this may further include accessing information, from the system, for the determination, locationing, or positioning of partial or full shapes (e.g., tesseracts) in a coordinate space.
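As an illustration of the translation step above, the following is a minimal sketch, assuming standard WGS84 ellipsoid constants and hypothetical helper names (geodetic_to_ecef, EcefPoint) that are not taken from the disclosure, of converting latitude, longitude, and elevation plus a timestamp into ECEF XYZ coordinates carrying temporal data:

```python
import math
from typing import NamedTuple

# WGS84 ellipsoid constants (standard values; not prescribed by the disclosure)
_A = 6378137.0                 # semi-major axis (meters)
_F = 1 / 298.257223563         # flattening
_E2 = _F * (2 - _F)            # first eccentricity squared


class EcefPoint(NamedTuple):
    x: float  # meters
    y: float  # meters
    z: float  # meters
    t: float  # epoch seconds (temporal coordinate)


def geodetic_to_ecef(lat_deg: float, lon_deg: float, alt_m: float,
                     timestamp_s: float) -> EcefPoint:
    """Translate latitude/longitude/elevation plus a timestamp into
    earth-centered, earth-fixed (ECEF) XYZ coordinates with time."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = _A / math.sqrt(1 - _E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - _E2) + alt_m) * math.sin(lat)
    return EcefPoint(x, y, z, timestamp_s)


# Example: a reading near Los Angeles taken at an arbitrary epoch time.
print(geodetic_to_ecef(34.05, -118.25, 89.0, 1690848000.0))
```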
Still further, in additional aspects, the present invention may comprise, or take the form of, systems and/or methods for determining the measurement of absolute locations using a scalable origin (e.g., comprising a shape, such as a cube) and an ECEF-designed framework, comprising a controller, executing computing instructions, configured to receive contextual information containing location data.
The computing instructions, when executed by the controller, may further cause the controller to determine a corresponding tesseract location or position of a tesseract coordinate area. A given tesseract location may be defined by, or may otherwise be associated with, a shape (e.g., a cube). More generally, a tesseract location may define a four-dimensional shape comprising three-dimensional shape coordinates (e.g., X, Y, and Z dimension coordinates) in addition to a time or otherwise temporal coordinate defining one or more features or attributes of the space at the three-dimensional shape coordinates, such as at a given point in time. In various aspects, the given point in time may be measured at one or more frequencies (e.g., one measurement or reading per second, one measurement or reading per millisecond, or the like).
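One way such a four-dimensional tesseract location might be represented is sketched below. The class and function names (TesseractCell, cell_index) and the uniform quantization scheme are illustrative assumptions, not structures prescribed by the disclosure:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Tuple


@dataclass
class TesseractCell:
    """A four-dimensional cell: three spatial indices plus a temporal index,
    with arbitrary features or attributes observed at that place and time."""
    ix: int            # X index within the coordinate space
    iy: int            # Y index
    iz: int            # Z index
    it: int            # time index (e.g., one slot per second or millisecond)
    attributes: Dict[str, Any] = field(default_factory=dict)

    @property
    def key(self) -> Tuple[int, int, int, int]:
        return (self.ix, self.iy, self.iz, self.it)


def cell_index(x_m: float, y_m: float, z_m: float, t_s: float,
               cell_size_m: float = 1.0,
               tick_s: float = 1.0) -> Tuple[int, int, int, int]:
    """Quantize an ECEF point and timestamp into a cell key at a chosen
    spatial resolution (cell_size_m) and sampling frequency (tick_s)."""
    return (int(x_m // cell_size_m), int(y_m // cell_size_m),
            int(z_m // cell_size_m), int(t_s // tick_s))
```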
The memory can store computing instructions that, when executed by the controller, cause the controller to (at block 102) receive input data from a sensor or other data source, where the input data comprises location or positioning information. In various aspects, the location or positioning information may be stored in a database.
The computing instructions, when executed by the controller, may further cause the controller (at block 106) to set or map the location or positioning information to a four-dimensional coordinate space (e.g., a tesseract-based space), for example, as shown by tesseract 200, and as further described herein for
At block 112, the computing instructions, when executed by the controller, may further cause the controller to display one or more locations, positions, or related information on a graphical user interface (GUI). Additionally, or alternatively, the coordinate space 200 (e.g., a tesseract-based space), for example, a tesseract graphic or representation, may be displayed.
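A minimal sketch of the block 102/106/112 flow follows, reusing the hypothetical geodetic_to_ecef and cell_index helpers from the earlier sketches; a plain dictionary stands in for the database and a print statement stands in for the GUI, and all names are illustrative assumptions:

```python
from typing import Any, Dict, Iterable, List

# In-memory stand-in for the database / memory store described above.
COORDINATE_SPACE: Dict[tuple, Dict[str, Any]] = {}


def receive_input(records: Iterable[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Block 102: receive input data containing location/positioning info."""
    return [r for r in records if {"lat", "lon", "alt", "t"} <= r.keys()]


def map_to_space(records: List[Dict[str, Any]], cell_size_m: float = 1.0) -> None:
    """Block 106: set/map each record into the four-dimensional space."""
    for r in records:
        p = geodetic_to_ecef(r["lat"], r["lon"], r["alt"], r["t"])
        key = cell_index(p.x, p.y, p.z, p.t, cell_size_m)
        COORDINATE_SPACE.setdefault(key, {}).update(r.get("attributes", {}))


def display(keys: Iterable[tuple]) -> None:
    """Block 112: render locations/positions on a GUI (stubbed as print)."""
    for key in keys:
        print(key, COORDINATE_SPACE.get(key, {}))
```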
The coordinate space 200 may be generated, mapped, prepared, or accessed in various manners. For example, a controller, e.g., processor 104 as described for
Once mapped or set, a controller, e.g., processor 104 as described for
In this way, in various aspects coordinate space 200 (e.g., shaped as a cube) may comprise one or more degrees of granularity measurements 330 that require corresponding granular locations 320. For example, in various aspects, one given shape (e.g., a cube), for example cube 1 of cubes 310, may itself be comprised of multiple tesseract shapes (e.g., any one or more of cubes 2-10), each having their own precise locations. Said another way, one cube (e.g., cube 1 of cubes 310) may be comprised of several smaller cubes, each having their own respective, more precise tesseract location(s) within a predefined coordinate space. In various aspects, the controller (e.g., processor 104), executing computing instructions, may decompose or otherwise determine a shape (e.g., a cube-based shape, see, for example
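The decomposition of one cube into smaller, more precisely located cubes could be sketched as follows; an octree-style eight-way split is assumed here purely for illustration, and the disclosure's cubes 310 may subdivide differently:

```python
from typing import Iterator, List, Tuple

Cube = Tuple[float, float, float, float]  # (x0, y0, z0, edge_length)


def subdivide(cube: Cube) -> Iterator[Cube]:
    """Split one cube into eight half-size child cubes, each with its own,
    more precise location inside the parent (octree-style; illustrative)."""
    x0, y0, z0, edge = cube
    half = edge / 2.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                yield (x0 + dx * half, y0 + dy * half, z0 + dz * half, half)


def decompose(cube: Cube, levels: int) -> List[Cube]:
    """Recursively decompose a cube to the requested level of granularity."""
    if levels == 0:
        return [cube]
    out: List[Cube] = []
    for child in subdivide(cube):
        out.extend(decompose(child, levels - 1))
    return out


# A 10 m origin cube decomposed two levels deep yields 64 finer cubes.
print(len(decompose((0.0, 0.0, 0.0, 10.0), levels=2)))  # -> 64
```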
In various aspects, the location of the shape or voxel within coordinate space 200 is determined by a confidence or range (e.g., a level of granularity as shown for
In still further aspects, a controller (e.g., processor 104) may store information assigned to a defined shape (e.g., a cube such as cube 1 of
In addition, a controller (e.g., processor 104), executing instructions stored on a computer-readable medium for generating location-specific data, may translate or assign input data having a location, timestamp, and/or other attribute to a shape located within a four-dimensional coordinate space 200 (e.g., a tesseract-based space). By associating one or more of the shapes with relevant information, each of the one or more shapes, either alone or collectively, provides information about one or more related shape(s) and/or object(s) in the coordinate space 200, where the shapes make up, or at least correspond to, the shape(s) and/or object(s) in coordinate space 200.
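A hedged sketch of translating or assigning an input record to a shape, and of reading back the shapes that collectively describe an object, might look like the following; it reuses the hypothetical geodetic_to_ecef and cell_index helpers, and the dictionaries and function names stand in for the database and are not taken from the disclosure:

```python
from collections import defaultdict
from typing import Any, Dict, Set, Tuple

CellKey = Tuple[int, int, int, int]

# cell -> merged attributes, and object id -> the cells that make it up
CELL_ATTRIBUTES: Dict[CellKey, Dict[str, Any]] = defaultdict(dict)
OBJECT_CELLS: Dict[str, Set[CellKey]] = defaultdict(set)


def assign(record: Dict[str, Any], object_id: str,
           cell_size_m: float = 1.0) -> CellKey:
    """Translate one input record (location, timestamp, attributes) into a
    cell of the four-dimensional space and associate it with an object."""
    p = geodetic_to_ecef(record["lat"], record["lon"], record["alt"], record["t"])
    key = cell_index(p.x, p.y, p.z, p.t, cell_size_m)
    CELL_ATTRIBUTES[key].update(record.get("attributes", {}))
    CELL_ATTRIBUTES[key]["object_id"] = object_id
    OBJECT_CELLS[object_id].add(key)
    return key


def describe(object_id: str) -> Dict[CellKey, Dict[str, Any]]:
    """The cells associated with an object collectively describe it."""
    return {k: CELL_ATTRIBUTES[k] for k in OBJECT_CELLS[object_id]}
```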
In various aspects, as the vehicle 500 moves within the real world, the set of voxels that make up the vehicle 500 (and therefore define its position) at a first time period may change. For example, at least a subset of voxels within the tesseract-based coordinate space 200, previously identified as having position data relative to the vehicle's position or location during the first time period, may no longer map to the vehicle 500. Instead, at a second time period, the vehicle (and therefore its position) may have moved to and/or map to a new set of voxels, and the new set of voxels within coordinate space 200 (e.g., a tesseract-based space) may be updated with the vehicle 500's new location and/or position. In this manner, the vehicle 500 can be tracked based on the voxels that make up the coordinate space 200 (e.g., a tesseract-based space), and the vehicle 500, relative to all other objects or data within coordinate space 200 (e.g., a tesseract-based space), can be monitored, positioned, tracked, and/or otherwise identified, including over one or more time periods (e.g., in real-time or near real-time).
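Tracking a moving object, such as vehicle 500, by comparing its voxel sets across time periods could be sketched as below; the voxel keys and the update_track helper are illustrative assumptions rather than the disclosure's prescribed method:

```python
from typing import Dict, Set, Tuple

VoxelKey = Tuple[int, int, int]  # spatial indices; time is tracked separately


def update_track(track: Dict[int, Set[VoxelKey]], time_index: int,
                 observed_voxels: Set[VoxelKey]) -> Dict[str, Set[VoxelKey]]:
    """Record the set of voxels occupied by a vehicle at a time index and
    report which voxels were vacated and which are newly occupied."""
    previous = track.get(time_index - 1, set())
    track[time_index] = set(observed_voxels)
    return {
        "vacated": previous - observed_voxels,   # no longer map to the vehicle
        "entered": observed_voxels - previous,   # newly mapped to the vehicle
    }


# Example: a vehicle occupying three voxels moves one cell in +X between periods.
track: Dict[int, Set[VoxelKey]] = {0: {(10, 5, 0), (11, 5, 0), (12, 5, 0)}}
print(update_track(track, 1, {(11, 5, 0), (12, 5, 0), (13, 5, 0)}))
```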
Illustrates an example aspect of positioning and locationing within the coordinate space 200 (e.g., a tesseract-based space) of
For example, as shown for
In various aspects, such 2D image-based positioning and locationing is especially useful in GPS-denied areas or areas of operation, e.g., requiring navigation, locationing, or the like, because such positioning is not dependent on GPS-provided data.
In one use case, the virtual and real-world positioning and locationing systems and methods, as described by the Figures and elsewhere herein, may generate and access coordinate space 200 (e.g., a tesseract-based space), for example, for geospatial data analysis, visualization, and/or risk mitigation. In some aspects, the virtual and real-world positioning and locationing systems and methods, as described herein, may be configured to analyze and describe an operational environment (OE) in accordance with coordinate space 200 (e.g., a tesseract-based space). The OE complexity may be determined based on the granularity, or otherwise the number of shapes (e.g., cubes), of the coordinate space 200 (e.g., a tesseract-based space). In some aspects, machine learning (ML) and/or artificial intelligence (AI) models may be trained using locationing and/or positioning data as input into an ML or AI algorithm in order to train an AI or ML model to provide predictions corresponding to the identification or attributes of shapes (e.g., cubes) within the coordinate space 200 (e.g., a tesseract-based space). Once trained, the machine learning (ML) and/or artificial intelligence (AI) models may provide output predictions or classifications of objects (based on one or more shapes) and related activity within the coordinate space 200 (e.g., a tesseract-based space). Such predictions and/or classifications may include, for example, one or more observations corresponding to shapes (or related objects) within coordinate space 200 (e.g., a tesseract-based space), which may include the detection of regularities or irregularities in an OE. Additionally, or alternatively, in some aspects of the virtual and real-world positioning and locationing systems, an output prediction may comprise a suggestion or alteration to produce a desired outcome within the OE, e.g., where an output could cause a real-world or virtual device or instrument to affect assets, targets, or objects in the coordinate space 200 (e.g., a tesseract-based space) or otherwise the OE. Additionally, or alternatively, in some aspects the prediction output provides information, such as situational awareness, of a given activity, event, or situation within the coordinate space 200 (e.g., a tesseract-based space) or otherwise the OE, and provides a suggestion or alteration to modify or update the activity, event, or situation within the coordinate space 200 (e.g., a tesseract-based space) or otherwise the OE.
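As one hedged illustration of the ML/AI training described above, the sketch below trains a generic classifier on placeholder per-cube feature vectors using scikit-learn; the feature definitions, labels, and choice of a random-forest model are assumptions for illustration and are not specified by the disclosure:

```python
# A minimal sketch, assuming per-cube feature vectors (e.g., counts, signal
# strength, dwell time) and object-class labels (e.g., "vehicle", "structure",
# "clutter") have already been extracted from the coordinate space. The random
# data below is a placeholder for those extracted features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # placeholder per-cube features
y = rng.integers(0, 3, size=500)       # placeholder object-class labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Once trained, the model can classify newly observed cubes/objects.
print("held-out accuracy:", model.score(X_test, y_test))
```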
In another use case, the virtual and real-world positioning and locationing systems and methods, as described by the Figures and elsewhere herein, may generate and access coordinate space 200 (e.g., a tesseract-based space), for example, for proliferated low-orbit positioning, navigation, and timing. The virtual and real-world positioning and locationing systems and methods, as described herein, may be configured to provide navigation in the Low Earth Orbit (LEO) satellite space without using GPS. In such aspects, data from non-GPS input sources may be added to one or more shapes of the coordinate space 200 (e.g., a tesseract-based space). The coordinate space 200 (e.g., a tesseract-based space) may be accessed, e.g., by processor 104, to provide positioning and precise timing with a 3-D (spherical) position accuracy of less than 50 meters (95%), a velocity error of less than 6 meters/second (RMS per axis), and a time transfer of less than 50 nanoseconds (95%) (threshold). In other aspects, such parameters may comprise less than 10 meters, less than 3 meters/second, and less than 20 nanoseconds time transfer.
In another use case, the virtual and real-world positioning and locationing systems and methods, as described by the Figures and elsewhere herein, may generate and access coordinate space 200 (e.g., a tesseract-based space), for example, for navigation in GPS denied environments. The virtual and real-world positioning and locationing systems and methods, as described herein, may be configured to provide locating, positioning, and/or navigation related data to objects traveling within the coordinate space 200 (e.g., a tesseract-based space) where GPS is unavailable.
In another use case, the virtual and real-world positioning and locationing systems and methods, as described by the Figures and elsewhere herein, may generate and access coordinate space 200 (e.g., a tesseract-based space), for example, for virtual reality (VR) and/or augmented reality (AR) implementations. For example, the virtual and real-world positioning and locationing systems and methods, as described herein, may be configured to access coordinate space 200 in order to provide accurate locationing, positioning, and temporal data for AR/VR systems. For example, a user interacting with an AR/VR device, e.g., a VR headset, may view the coordinate space 200 (e.g., a tesseract-based space) or an environment based on the coordinate space 200 (e.g., a tesseract-based space) via a display of the AR/VR device, and, in some aspects may interact with objects within the coordinate space 200 or environment.
In another use case, the virtual and real-world positioning and locationing systems and methods, as described by the Figures and elsewhere herein, may generate and access coordinate space 200 (e.g., a tesseract-based space), for example, for deduplication of large datasets as ingested from multiple sources in real time. The virtual and real-world positioning and locationing systems and methods, as described herein, may be configured to ingest or otherwise receive data or datasets from multiple input sources (e.g., in real time and/or near real time). In addition, one or more of the shapes (e.g., cubes) of the coordinate space 200 (e.g., a tesseract-based space) may be assigned or otherwise associated with the data, where the shape (e.g., cube) has a location or position coordinate associated with the data at a given time. In this way, the data or datasets are instantiated with metadata, which may come from multiple sources, and which may be reduced into, or otherwise associated with, a given shape or cube. In some cases, a single cube may receive information or data from multiple sources. In some aspects, the sources or data may be redundant, and, in such aspects, the real-world positioning and locationing systems may reduce, coordinate, or normalize the data to be assigned to the given shape (e.g., cube) so that the data may be stored for the shape in the coordinate space 200 (e.g., a tesseract-based space) as non-redundant. The data may then be accessed by, e.g., a processor of the virtual and real-world positioning and locationing systems and methods, where the data received will be high fidelity or high quality because the processor would have previously deduplicated, reduced, normalized, filtered, cleaned, and/or otherwise preprocessed the data before storing or adding it to the coordinate space 200 (e.g., a tesseract-based space).
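A minimal sketch of the deduplication behavior described above follows; the fingerprinting-by-hash approach, the placeholder normalization, and the ingest helper are illustrative assumptions rather than the disclosure's prescribed method:

```python
import hashlib
import json
from typing import Any, Dict, Tuple

CellKey = Tuple[int, int, int, int]

# cell key -> {payload fingerprint -> normalized, non-redundant record}
DEDUPED: Dict[CellKey, Dict[str, Dict[str, Any]]] = {}


def _fingerprint(payload: Dict[str, Any]) -> str:
    """Stable hash of a normalized payload so identical reports from
    different sources collapse to a single entry."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


def ingest(key: CellKey, payload: Dict[str, Any], source: str) -> bool:
    """Assign a record to its cube; return True only if it added new,
    non-redundant information to that cube."""
    normalized = {k: payload[k] for k in sorted(payload)}  # placeholder normalization
    fp = _fingerprint(normalized)
    bucket = DEDUPED.setdefault(key, {})
    if fp in bucket:
        bucket[fp]["sources"].append(source)  # redundant report: record source only
        return False
    bucket[fp] = {"data": normalized, "sources": [source]}
    return True
```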
Although the disclosure herein sets forth a detailed description of numerous different embodiments, it should be understood that the legal scope of the description is defined by the words of the aspects set forth at the end of this patent and equivalents. The detailed description is to be construed as exemplary only and does not describe every possible embodiment since describing every possible embodiment would be impractical. Numerous alternative embodiments may be implemented, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the aspects herein.
The following additional considerations apply to the foregoing discussion. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
Additionally, certain embodiments are described herein as including logic or a number of routines, subroutines, applications, or instructions. These may constitute either software (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware. In hardware, the routines, etc., are tangible units capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.
In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
Accordingly, the term “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.
Hardware modules may provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and may operate on a resource (e.g., a collection of information).
The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.
Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location, while in other embodiments the processors may be distributed across a number of locations.
In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations.
This detailed description is to be construed as exemplary only and does not describe every possible embodiment, as describing every possible embodiment would be impractical, if not impossible. A person of ordinary skill in the art may implement numerous alternate embodiments, using either current technology or technology developed after the filing date of this application.
Those of ordinary skill in the art will recognize that a wide variety of modifications, alterations, and combinations can be made with respect to the above described embodiments without departing from the scope of the invention, and that such modifications, alterations, and combinations are to be viewed as being within the ambit of the inventive concept.
The patent aspects at the end of this patent application are not intended to be construed under 35 U.S.C. § 112(f) unless traditional means-plus-function language is expressly recited, such as “means for” or “step for” language being explicitly recited in the aspects herein. The systems and methods described herein are directed to an improvement to computer functionality, and improve the functioning of conventional computers.
This application claims the benefit of U.S. Provisional Application No. 63/370,016 (filed on Aug. 1, 2022), which is incorporated in its entirety by reference herein.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2023/029161 | 8/1/2023 | WO |
Number | Date | Country
---|---|---
63/370,016 | Aug 2022 | US