Spaceborne data, such as satellite images, has numerous useful applications in agriculture, biodiversity, cartography, conservation, disaster response, education, fishing, forestry, geology, landscape analysis, meteorology, oceanography, and regional planning, among other things. For instance, spaceborne data may be used to monitor disasters such as fires, volcanic activity, landslides, avalanches, and the like. Spaceborne data may also be used for rapid mapping after a disaster. Spaceborne data may be used to locate ships in the ocean, planes on the tarmac, or the like. Spaceborne data may be used in various hydrological applications, such as drought and soil water monitoring, discovering hidden river channels, and the like. These are but a few of the many possible applications for spaceborne data.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Briefly stated, the disclosed technology is generally directed to on-orbit spaceborne-sensor-related prioritization. Image metadata and raw image data obtained from sensors on a constrained-environment device are stored. The raw image data includes a plurality of images. A plurality of image tiles is provided such that the plurality of image tiles includes, for each image of the plurality of images, evenly-spaced portions of the image. Via an embedding-generation model, a plurality of embeddings is generated based on the plurality of image tiles. The plurality of embeddings is used to perform a prioritization of image tiles among the plurality of image tiles. During a downlink session from the constrained-environment device, image tiles from among the plurality of image tiles are downlinked based, at least in part, on the prioritization.
Other aspects of and applications for the disclosed technology will be appreciated upon reading and understanding the attached figures and description.
Non-limiting and non-exhaustive examples of the present disclosure are described with reference to the following drawings. In the drawings, like reference numerals refer to like parts throughout the various figures unless otherwise specified. These drawings are not necessarily drawn to scale.
For a better understanding of the present disclosure, reference will be made to the following Detailed Description, which is to be read in association with the accompanying drawings, in which:
Briefly stated, the disclosed technology is generally directed to on-orbit spaceborne-sensor-related prioritization. Image metadata and raw image data obtained from sensors on a constrained-environment device are stored. The raw image data includes a plurality of images. A plurality of image tiles is provided such that the plurality of image tiles includes, for each image of the plurality of images, evenly-spaced portions of the image. Via an embedding-generation model, a plurality of embeddings is generated based on the plurality of image tiles. The plurality of embeddings is used to perform a prioritization of image tiles among the plurality of image tiles. During a downlink session from the constrained-environment device, image tiles from among the plurality of image tiles are downlinked based, at least in part, on the prioritization.
Each of client device 151, client device 152, service device 161, service device 162, constrained-environment device 171, and constrained-environment device 172 includes an example of computing device 500 of
Service device 161 and service device 162 are each part of a service that is provided on behalf of clients. Each client may communicate with the service via one or more devices, such as client device 151 and client device 152. The service includes aspects that are associated with data collected by constrained-environment devices, including constrained-environment device 171 and constrained-environment device 172. Each of the constrained-environment devices is a device that is in a constrained environment. Each of the constrained-environment devices may be, for example, a satellite in Earth's orbit, a device in deep space such as a device on a probe or a spacecraft that is in deep space, a drone, a device on a stationary platform such as a stationary tower or an oil platform, a stationary sensor, an Internet of Things (IoT) device in a constrained environment, or the like.
For example, the service may include aspects related to the collection of data from the constrained-environment devices, the analysis of data from the constrained devices, searches in the data from the constrained devices, or the like.
Some of the constrained-environment devices (e.g., 171 and 172) operate as follows.
The constrained-environment devices use sensors to generate and store images. The constrained-environment devices use an embedding-generation model to generate embeddings from the images or from portions of the images. Additionally, on the constrained-environment device, the embeddings are automatically and autonomously triaged, so that the spaceborne data to downlink is prioritized.
The service that includes service device 161 and service device 162 operates as follows in some examples.
The service is arranged to receive information from the constrained-environment devices. Each constrained-environment device communicates with the service during a separate downlink session. During a downlink session from a constrained-environment device to the service, some of the image portions on the constrained-environment device are downlinked to the service. The portions of the images that are downlinked during the downlink session are triaged based on the prioritization performed on the constrained-environment devices. The downlinked image portions are then stored by the service.
Network 130 may include one or more computer networks, including wired and/or wireless networks, where each network may be, for example, a wireless network, local area network (LAN), a wide-area network (WAN), and/or a global network such as the Internet. On an interconnected set of LANs, including those based on differing architectures and protocols, a router acts as a link between LANs, enabling messages to be sent from one to another. Also, communication links within LANs typically include twisted wire pair or coaxial cable, while communication links between networks may utilize analog telephone lines, full or fractional dedicated digital lines including T1, T2, T3, and T4, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links, and/or other communications links known to those skilled in the art. Such satellite links may include, for example, satellite interlink communications and various types of satellite downlinks including satellite downlinks in the X, S, Ka, and Ultra-High Frequency (UHF) bands. Furthermore, remote computers and other related electronic devices could be remotely connected to either LANs or WANs via a modem and temporary telephone link. Network 130 may include various other networks such as one or more networks using local network protocols such as 6LoWPAN, ZigBee, or the like. In essence, network 130 may include any suitable network-based communication method by which information may travel among client device 151, client device 152, service device 161, service device 162, constrained-environment device 171, and constrained-environment device 172. Although each device is shown as connected to network 130, that does not necessarily mean that each device communicates with each other device shown. In some examples, some devices shown only communicate with some other devices/services shown via one or more intermediary devices.
Also, although network 130 is illustrated as one network, in some examples, network 130 may instead include multiple networks that may or may not be connected with each other, with some of the devices shown communicating with each other through one network of the multiple networks and others of the devices shown instead communicating with each other through a different network of the multiple networks. Further, some of the links, such as satellite links, may be limited or intermittent.
System 100 may include more or fewer devices than illustrated in
The satellite devices, including satellite devices 271 and 272, are devices onboard satellites that orbit the Earth. The satellite devices store spaceborne data collected by the orbiting satellites. The spaceborne data includes images taken by the satellites. Service system 260 provides services on behalf of clients, where the services include at least one service that is associated with spaceborne data collected by the satellite devices. Whereas the satellites are orbiting the Earth, service system 260 is on the Earth. Each client communicates with service system 260 via one or more devices, such as client device 251 and client device 252.
Resources, such as compute resources, onboard the orbiting satellite are relatively limited compared to the resources of service system 260. Accordingly, an orbiting satellite contains more data than can be fully analyzed onboard the satellite in a reasonable period of time. Additionally, the amount of data that can be downlinked to the Earth is limited, such that the spaceborne data onboard each satellite is significantly greater than the amount of spaceborne data that can be downlinked in a reasonable period of time. This could result in valuable data being lost, and prevent the sensors onboard the satellites and the spaceborne data captured by the sensors from being used to their fullest potential. However, aspects of the disclosure enable the most useful analysis to be performed onboard the satellite and enable the highest value images onboard the satellite to be downlinked first.
In some examples, some of the satellite devices, such as satellite devices 271 and 272, operate as follows.
The satellite devices include sensors and also include devices that store data. The sensors onboard the orbiting satellites collect data, including raw images and image metadata, which is all stored onboard the satellite. Examples of the image metadata include time, inclination, azimuth angle, altitude, scene center latitude, scene center longitude, viewing angle, incidence angle, sun elevation, sun azimuth, or other suitable metadata. The sensors may include a variety of different sensor types including cameras, synthetic aperture radar, thermal imaging sensors, hyperspectral sensors, video sensors, or other suitable sensor types. The images provided by the sensors may include two-dimensional images and three-dimensional representations, and may include images such as synthetic aperture radar images or the like. Each of the types of images discussed above is a type of spaceborne data that may be captured by the satellite devices. The satellite devices divide each of the stored images into smaller tiles. For instance, in one example, a full-size image that is 8,192 pixels by 8,192 pixels may be broken down into 256 evenly-sized tiles of 512 pixels by 512 pixels. The smaller tiles are easier to process, analyze, and downlink.
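The tiling step above can be sketched as follows. This is an illustrative sketch rather than the actual onboard implementation; the function and variable names are hypothetical, and it assumes a simple non-overlapping grid matching the 8,192-by-8,192 example.

```python
import numpy as np

def tile_image(image, tile_size=512):
    """Split an (H, W) image into non-overlapping tile_size x tile_size tiles."""
    h, w = image.shape[:2]
    tiles = []
    for row in range(0, h - tile_size + 1, tile_size):
        for col in range(0, w - tile_size + 1, tile_size):
            # Each tile is an evenly-spaced portion of the full image.
            tiles.append(image[row:row + tile_size, col:col + tile_size])
    return tiles

full_image = np.zeros((8192, 8192), dtype=np.uint8)  # placeholder for a raw sensor image
tiles = tile_image(full_image)
# An 8,192 x 8,192 image yields 16 x 16 = 256 tiles of 512 x 512 pixels.
```

In practice the tile size would be chosen to match the downstream models and downlink budget; 512 pixels is simply the figure used in the example above.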
One or more of the satellite devices (e.g., satellite device 271 or 272) then, onboard the satellite, converts each of the tiles into an embedding. The tiles are retained and continue to be stored in the satellite devices, and the embeddings are also stored in the satellite devices. The embeddings are feature vectors of floating-point numbers, that is, ordered lists of numerical values, such that, for two similar images, the corresponding feature vectors are close to each other in the vector space. Similarly, for two dissimilar images, the feature vectors are not close to each other in the vector space. Embeddings are also sometimes referred to as “representations.”
For instance, in some examples, an unsupervised machine-learning embedding-generation model is used to convert each of the tiles into an embedding. In one example, a simple Siamese (SimSiam) model, trained on a large number of images such as over 100,000 different image tiles, is used to convert each of the tiles into a 512-dimensional feature vector. Various suitable numbers of dimensions may be used in various examples, such as at least 256 dimensions in some examples. An unsupervised machine-learning embedding generation model may also be referred to as an unsupervised representation learning model. Other suitable methods are used to generate the embeddings in other examples. For instance, in some examples, a self-supervised representation learning technique or supervised representation learning technique is used to generate the embeddings. An embedding of a tile may be significantly smaller than the corresponding tile, such as 1/12 the size of the corresponding tile in one example. As soon as an image is captured by the satellite, the image is divided into tiles and the tiles are converted to embeddings.
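The embedding property described above can be illustrated with a minimal sketch: each tile is mapped to a fixed-length feature vector, and similarity between two tiles is measured by closeness in the vector space (here, cosine similarity). The `embed()` function below is a hypothetical stand-in for a trained model such as SimSiam, not an actual learned model.

```python
import numpy as np

EMBEDDING_DIM = 512  # per the example above; at least 256 dimensions in some examples

def embed(tile):
    """Hypothetical stand-in for a trained embedding-generation model:
    deterministically maps a tile's pixel statistics to a unit vector."""
    rng = np.random.default_rng(int(tile.mean() * 1000) + tile.size)
    v = rng.standard_normal(EMBEDDING_DIM).astype(np.float32)
    return v / np.linalg.norm(v)  # unit-normalized feature vector

def cosine_similarity(a, b):
    """Closeness in the vector space; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

tile_a = np.full((512, 512), 100, dtype=np.uint8)
tile_b = np.full((512, 512), 100, dtype=np.uint8)  # identical content
sim = cosine_similarity(embed(tile_a), embed(tile_b))
```

A real model would be trained so that visually similar tiles, not just identical ones, map to nearby vectors; the stand-in only demonstrates the vector-space comparison itself.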
In some examples, the satellite has dozens or hundreds of different models onboard which may be used to run against any of the image tiles. The models may include, for example, artificial intelligence (AI) models that are plane detector models, ship detector models, building detector models, cloud detector models, methane detector models, fire detector models, car detector models, or the like. The models may include, for example, classification models, object detection models, and segmentation models. It may be desirable to run particular models for various reasons. For instance, it may be desirable to use a cloud detection model to de-prioritize or exclude tiles that contain no useful information due to cloud cover present in the tile.
It may be useful to run models that detect particular objects or phenomena for reasons of prioritization or for identifying data that may require immediate attention. For example, if an image tile includes a potentially dangerous hazard, such as a forest fire, it may be desirable to downlink the image tile as soon as possible rather than waiting for the next scheduled downlink. As another example, it might be determined that a ship detector model should not be run on tiles in which the imagery is over land or is obstructed by cloud, fog, or smoke, thereby preserving resources for other jobs onboard the satellite. However, the satellite does not have enough power and resources to run all of the models against all of the image tiles. Accordingly, optionally, in some examples, model orchestration is performed in the satellite. In other examples, model orchestration is not performed in the satellite. One example of system 200 in which model orchestration is performed operates as follows.
In the model orchestration performed onboard the satellite, one or more of the satellite devices (e.g., satellite device 271 or 272) uses the embeddings to determine which of the models should be run on which of the tiles. After it is determined which models should be run on which tiles, satellite devices (e.g., satellite devices 271 and 272) run the determined models on the tiles. The satellite devices then take appropriate actions based on the results of the models being run on the tiles. For instance, if the model indicates that the image tile may indicate the presence of a significant hazard, the satellite device may cause the image tile to be downlinked as soon as possible. Instead of immediately downlinking the image tile, in some examples, a short message or other type of alert may be downlinked as soon as possible, because a short message can be downlinked more easily than an image. The model orchestration onboard the satellite enables more efficient use of resources on the satellite and enables more models to be run with the same resources.
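One way the onboard model orchestration could work is sketched below, assuming that each model has an associated "trigger" embedding and that a model is selected for a tile only when the tile's embedding is sufficiently close to that trigger. The gating rule, threshold, and all names are illustrative assumptions, not the actual orchestration logic.

```python
import numpy as np

def select_models(tile_embedding, model_triggers, threshold=0.8):
    """Return the names of models worth running on a tile, gated by
    cosine similarity between the tile embedding and each model's
    hypothetical trigger embedding."""
    selected = []
    for model_name, trigger in model_triggers.items():
        sim = float(np.dot(tile_embedding, trigger) /
                    (np.linalg.norm(tile_embedding) * np.linalg.norm(trigger)))
        if sim >= threshold:
            selected.append(model_name)
    return selected

tile_emb = np.array([1.0, 0.0])  # toy 2-D embedding for illustration
triggers = {
    "ship_detector": np.array([1.0, 0.0]),   # hypothetical trigger embeddings
    "cloud_detector": np.array([0.0, 1.0]),
}
selected = select_models(tile_emb, triggers)
```

Under this gating, only the ship detector would run on the example tile, which is the resource-saving effect the orchestration is intended to achieve.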
The satellite devices (e.g., satellite devices 271 and 272) use the embeddings to perform downlink prioritization onboard the satellite. In examples in which model orchestration is performed onboard the satellite, the downlink prioritization occurs after the model orchestration is performed onboard the satellite. The downlink prioritization performed onboard the satellite makes use of a set of target embeddings. The set of target embeddings for use in the downlink prioritization is generated by service system 260. The embeddings included in the set of target embeddings are determined based on the priorities of the client. The set of target embeddings are uploaded from service system 260 to the satellite devices and stored on the satellite devices. The satellite devices automatically triage which tiles should be transmitted by comparing the embeddings to the set of target embeddings. The comparison of the embeddings to the set of target embeddings includes determining which of the embeddings are close, in the vector space, to embeddings in the set of target embeddings.
As discussed above, the embeddings included in the set of target embeddings are determined based on the priorities of the client. For example, if a client wants to prioritize the downlink of cloud-free imagery, the downlink prioritization may be used to infer which of the image tiles appear to be most cloud-free. The determination for both the onboard downlink prioritization and the onboard model orchestration is made on a tile-by-tile basis based on each of the embeddings, where each of the embeddings was generated from a corresponding tile. The downlink prioritization performed onboard the satellite saves time and bandwidth, ensures access to pertinent assets, and prioritizes decision-making.
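The onboard downlink prioritization described above can be sketched as ranking each tile by its closest distance, in the vector space, to any of the uploaded target embeddings. The function name, brute-force search, and Euclidean distance metric are illustrative assumptions.

```python
import numpy as np

def prioritize_tiles(tile_embeddings, target_embeddings):
    """Return tile indices ordered so the tiles whose embeddings are
    closest to any target embedding are downlinked first."""
    scores = []
    for idx, emb in enumerate(tile_embeddings):
        # Distance to the nearest target embedding in the vector space.
        dists = [float(np.linalg.norm(emb - t)) for t in target_embeddings]
        scores.append((min(dists), idx))
    scores.sort()  # smallest distance (best match) first
    return [idx for _, idx in scores]

# Toy 2-D example: one target embedding representing, e.g., cloud-free imagery.
targets = [np.array([1.0, 0.0])]
tile_embs = [np.array([0.0, 1.0]),   # poor match
             np.array([0.9, 0.1]),   # best match
             np.array([0.5, 0.5])]   # middling match
order = prioritize_tiles(tile_embs, targets)
```

Onboard, the same ordering would determine which tiles are transmitted first during the limited downlink window.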
In some examples, some information from other tiles in the image, such as mean pixel values, may also be used in the determination that is made for each of the tiles. In some examples, the onboard downlink prioritization uses a rule-based decision tree to perform downlink prioritization. In some examples, the decision tree uses both the embeddings and image information to perform the onboard downlink prioritization. The image information that is used during the downlink prioritization may include image metadata. In some examples, the rules are defined manually by a client. In some examples, the rules are defined based on an analysis of historical usage patterns. In some examples, the historical usage patterns include various suitable information such as existing client requests, existing client searches, historical accesses by clients, sales patterns with clients, or other suitable information. In some examples, the image data used includes geographical information. For instance, some clients may have particular geographical areas of interest, and the prioritization may be based, in part, on whether the tile is in a geographical area of interest to the client.
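A rule-based decision tree of the kind described above, combining an embedding-derived score with image information such as mean pixel values and geographical areas of interest, might look like the following. The specific rules, thresholds, and names are hypothetical examples, not actual client-defined rules.

```python
def tile_priority(embedding_score, mean_pixel_value, in_area_of_interest):
    """Hypothetical rule-based decision tree assigning a downlink
    priority label to one tile."""
    if mean_pixel_value > 250:
        # Illustrative rule: a near-white tile is likely fully cloud-covered.
        return "low"
    if in_area_of_interest and embedding_score > 0.8:
        # Strong match to client targets inside a geographical area of interest.
        return "high"
    if in_area_of_interest or embedding_score > 0.8:
        return "medium"
    return "low"
```

In the examples above, such rules may be defined manually by a client or derived from historical usage patterns; the tree here only illustrates how embeddings and image metadata can be combined in one decision.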
Each of the satellites may perform a downlink at periodic time intervals, in which the satellite communicates with service system 260. As discussed above, service system 260 provides services on behalf of clients, where the services include at least one service associated with spaceborne data collected by the satellite devices (e.g., satellite devices 271 and 272). For example, the services may include storing spaceborne data downlinked from the satellite device, the analysis of the spaceborne data from the constrained devices, searches in the spaceborne data, or the like. Each of the satellite devices communicates information to service system 260 during a separate downlink session that occurs between the satellite device and service system 260. During the downlink session, some of the image tiles onboard the satellite device are downlinked to service system 260. The downlink of the image tiles is triaged based, at least in part, on the onboard downlink prioritization discussed above.
As discussed above, the downlink of image tiles is prioritized based, at least in part, on the onboard downlink prioritization. Optionally, in some examples, prioritization is based on both the onboard downlink prioritization discussed above, and further based on an edge-cloud image orchestration. In some examples, onboard downlink prioritization is performed and edge-cloud image orchestration is not performed. An example in which prioritization is also based on edge-cloud image orchestration operates as follows.
Service system 260 stores an embeddings database. The embeddings database is a database that stores a relatively large number of reference embeddings that may be used to assist in determining which image tiles are of the highest value. The value of the image tiles is based at least in part on the priorities of the clients. When a downlink session occurs between a satellite and service system 260, embeddings are sent from satellite devices (e.g., satellite devices 271 and 272) on the satellite to service system 260. For instance, in some examples, the satellite devices send the embeddings for each of the images that has been taken by the satellite since the last downlink. In other examples, some of the embeddings are excluded based on the downlink prioritization performed onboard the satellite. In either case, service system 260 then compares the received embeddings to the reference embeddings in the embeddings database. The comparison determines which of the received embeddings are closest, in the vector space, to a reference embedding.
Service system 260 then analyzes the received embeddings that are determined to be closest to the reference embeddings in order to determine which of these embeddings correspond to high-value image tiles. As discussed above, the value of the image tiles is based at least in part on the priorities of the clients. The determination as to which of the image tiles are of the highest value may be made based on the determined similarities of the embeddings with the reference embeddings, combined with various suitable information such as existing client requests, existing client searches, historical accesses by clients, sales patterns with clients, or other suitable information. After the highest value image tiles are identified, service system 260 communicates to the satellite device the highest value tiles to be downlinked.
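The edge-cloud step above can be sketched as follows: during the downlink session, the service compares the received tile embeddings against its reference-embedding database and selects the closest matches as the high-value tiles to request. The top-k selection and brute-force search are illustrative simplifications; a production system would likely combine this with client-priority information and an indexed vector search.

```python
import numpy as np

def select_high_value_tiles(received, reference_db, k=2):
    """Return indices of the k received embeddings closest, in the
    vector space, to any reference embedding in the database."""
    ranked = []
    for idx, emb in enumerate(received):
        nearest = min(float(np.linalg.norm(emb - ref)) for ref in reference_db)
        ranked.append((nearest, idx))
    ranked.sort()  # closest matches first
    return [idx for _, idx in ranked[:k]]

# Toy 2-D example: one reference embedding of a high-value tile type.
reference_db = [np.array([1.0, 0.0])]
received = [np.array([0.0, 1.0]),
            np.array([1.0, 0.0]),
            np.array([0.9, 0.1]),
            np.array([0.5, 0.5])]
high_value = select_high_value_tiles(received, reference_db, k=2)
```

The indices returned here stand in for the list of highest value tiles that service system 260 communicates back to the satellite device for downlink.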
Next, the satellite devices downlink the indicated high-value tiles. Service system 260 then receives the downlinked tiles and then stores the received tiles. In some examples, the determination and downlinking of the high-value image tiles occurs in a matter of seconds during a single downlink session and occurs during each of the scheduled downlink sessions. In some examples, the entire downlink session is about ten minutes as the satellite passes over the ground station. The determination of high-value image tiles that occurs during the downlink session between the satellite and service system 260 on the ground allows for a relatively seamless communication between the satellite and service system 260, increases the speed of decision-making, and ensures the capture of high-value data.
The images collected from the various downlink sessions are stored in service system 260 and have associated metadata that is stored according to the SpatioTemporal Asset Catalog (STAC) specification. In some examples, the associated metadata is stored in a separate database. Service system 260 enables reverse image searches to be performed by clients based on the embeddings of the images. In some examples, the reverse image search includes querying the metadata database of STAC items via an Application Programming Interface (API) or user interface. In this way, a user can perform a visual search against existing images in a database to find similar images of interest. The embeddings are also used to recommend similar images to users. For example, if a client is an analyst that is interested in mapping the location of small solar installations within the United States, service system 260 may recommend additional images where solar installations are likely to be visible.
Service system 260 enables clients to run models against any of the satellite images stored in service system 260. Some examples of service system 260 have a large number of models, such as hundreds of models, that may be run against images. The models may include, for example, artificial intelligence (AI) models that are plane detector models, ship detector models, building detector models, cloud detector models, or the like. However, running every model against all of the images would be relatively inefficient. Accordingly, optionally, in some examples, cloud model orchestration is performed by service system 260. One example of system 200 in which the optional cloud model orchestration is performed operates as follows.
The model orchestration uses the embeddings of images taken from the satellite to determine which of the models should be run on which of the images. For example, a model trained to find airplanes on a tarmac may work one day but fail the next if it snows at the given airport of interest. The model orchestration performed in service system 260 alerts the client to this type of change and prevents the model from giving the user erroneous results. Using the embeddings, a determination is made for each model and for each image as to whether that model should be run on that image. As another example, a model that detects poultry barns might be run only on tiles that correspond to rural or semi-rural locations, that are not water, and that have a structure on them. In this way, the model that detects poultry barns can avoid running on tiles on which it is unnecessary to run that model. The model orchestration performed in service system 260 enables more efficient use of the resources of service system 260.
Examples of system 200 may be useful for a variety of different example use cases for a variety of different example clients.
For instance, one example of a client is an oil and gas company that makes use of an example of system 200 to decrease the amount of time it takes for the client to detect and address large methane leaks across their oil wells, pipelines, and refineries spread out over an oil field covering hundreds of square miles. Quick detection of methane leaks may significantly reduce compliance, regulatory, and enforcement risks for the client. In this example, a methane detection model is employed onboard a constellation of satellites in order to enable rapid alerting of methane leaks. Every time a satellite within the constellation passes over the oil field of interest, this model is “turned on” within the satellite to look for methane leaks. This is accomplished onboard the satellite as follows in this example.
Of all of the images taken over an oil field by the satellites, about 99% of the images or image tiles will be cloudy or will not contain a methane leak. As part of the model orchestration onboard the satellites, the satellite devices use the embeddings of the image tiles to perform an initial analysis and identify the approximately 1% of image tiles that could contain a methane leak. An onboard methane detection model is then run onboard the satellite on only those image tiles that could contain a methane leak. The methane detection model calculates the approximate location, size, and probability of a methane leak. The location and size for high-probability leaks are then quickly downlinked to service system 260 and then shared with the oil and gas company via client device 251, allowing the company to more rapidly deploy a crew to investigate and fix the leak.
Typically, oil and gas companies that want to detect methane plumes in satellite imagery need to order and downlink entire satellite images before they can determine if the imagery ordered by the company contains a methane plume at all. Ordering such imagery is typically incredibly costly and typically takes from hours to weeks before the imagery is delivered to the oil and gas company for analysis. However, in this example, system 200 enables a methane detection model to be run persistently onboard a spacecraft alongside other models from other clients, enabling real-time monitoring for the oil and gas company while limiting resource and downlink costs for the satellite operator.
Another example of a client is an energy trading hedge fund that is tasked with updating the fund's dataset of oil storage tanks globally, which the client uses to inform trading decisions. The example analyst client does not have the funds, time, or resources to run an oil tank detection model over all available high-resolution imagery globally. Instead, the example client uses an example of system 200 to identify a subset of existing images that likely contain oil tanks to then pass through an oil tank detection model. This dramatically reduces the time, cost, and complexity required to update the associated dataset. This is accomplished by the example client using an example of system 200 as follows.
The client provides a few reference images of oil tanks, and service system 260 obtains embeddings for those reference images. The client can then initiate a catalog search of existing high-resolution imagery to identify the approximately 0.05% of images/tiles that have very similar embeddings that are indicative of a possible oil tank within the image. The identified images are then passed through an oil tank detection model by service system 260 to calculate the location and size of any oil tanks within the image. The results of the detection model are then provided to the client.
Typically, if an analyst wanted to inventory all oil storage tanks globally, the analyst would either 1) need to run their oil tank detection model over a complete global archive of satellite imagery, which is incredibly costly to obtain and resource/time intensive to run, or 2) rely on an external source of information to narrow their search. However, these external data sources typically are incomplete and would result in a significant number of oil tanks being missed. Instead, system 200 allows the analyst to efficiently build a global dataset of oil tanks using a fraction of the time and resources it would otherwise take to run an oil tank detection model across all images/tiles.
Another example of a client is the Coast Guard looking for ships engaged in illicit activities (e.g., drug running, oil embargo violations, or the like). Quick detection of such activities would increase the odds of arrest and prosecution. An example of system 200 may be used with the Coast Guard as a client as follows.
A ship detection model is employed onboard a constellation of satellites in order to provide interdiction teams of the Coast Guard with real-time insights. Only approximately 0.1% of images/tiles taken by a satellite will be of a manmade object in the ocean. The model orchestration performed onboard the satellite will allow the approximately 0.1% of images/tiles that contain a manmade object in the ocean to be identified. The satellite device onboard the satellite then runs an object detection model on these identified tiles in order to predict the class of the object (e.g., oil platform, ship, wind turbine, etc.) present on the tiles.
For each tile on which the object detection model determines that a ship is present on the tile, the tile is flagged for further analysis. The embeddings for these tiles, along with the locations of the tiles, are then rapidly downlinked from the satellite to service system 260 for further analysis. Downlinking the embeddings and metadata is much faster and much more efficient than downlinking an entire image. Service system 260 then compares the downlinked embeddings to prior images and external data. The downlinked embeddings are rapidly compared to embeddings generated for all prior images to predict the type of ship in the image (e.g., cargo ship, tanker, cruise ship, trawler, etc.).
The detected locations are also compared to external data sources (e.g., ship Automatic Identification System (AIS) transponder data). If these two sources do not align, a discrepancy is flagged. For example: 1) service system 260 appears to have identified a ship in an image, but there is no AIS transponder record for that ship; or 2) an embedding for a tile is similar to prior embeddings of oil tankers, but the AIS transponder data for that ship reports that it is a trawler. Discrepancies such as these are indicative of possible nefarious behavior that may require additional investigation by the Coast Guard.
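The two discrepancy cases may be sketched as a simple cross-check. The matching radius, record format, and `cross_check` logic here are hypothetical; they only illustrate the comparison described above.

```python
# Sketch: flag a detection when (1) no AIS record exists near the detected
# location, or (2) every nearby AIS record reports a different vessel type
# than the embedding-based prediction. Field names are made up.
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def cross_check(detection, ais_records, max_km=5.0):
    nearby = [r for r in ais_records
              if haversine_km(detection["location"], r["location"]) <= max_km]
    if not nearby:
        return "no AIS transponder record"
    if all(r["type"] != detection["predicted_type"] for r in nearby):
        return "AIS type disagrees with embedding prediction"
    return None  # sources align; no discrepancy

detection = {"location": (10.0, 20.0), "predicted_type": "tanker"}
print(cross_check(detection, []))
print(cross_check(detection, [{"location": (10.001, 20.001), "type": "trawler"}]))
```

A `None` result means the imagery-derived detection and the AIS record agree, so no follow-up is triggered.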
Step 391 occurs first. At step 391, image metadata and raw image data obtained from sensors on the constrained-environment device are stored. The raw image data includes a plurality of images. As shown, step 392 occurs next. At step 392, a plurality of image tiles is provided such that the plurality of image tiles includes, for each image of the plurality of images, evenly-spaced portions of the image. As shown, step 393 occurs next. At step 393, via an embedding-generation model, a plurality of embeddings is generated based on the plurality of image tiles.
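Steps 392 and 393 may be sketched as follows. This is an illustrative sketch, not the embedding-generation model itself: the `embed` function here is a placeholder (mean pixel value) standing in for the model, and the toy 4x4 image is made up.

```python
# Sketch: split each image into evenly-spaced tiles (step 392), then map each
# tile to an embedding (step 393, with a placeholder "model").
def tile_image(image, tile_h, tile_w):
    """Yield evenly-spaced (row, col, tile) portions of a 2-D image (list of rows)."""
    h, w = len(image), len(image[0])
    for r in range(0, h, tile_h):
        for c in range(0, w, tile_w):
            yield r, c, [row[c:c + tile_w] for row in image[r:r + tile_h]]

def embed(tile):
    """Placeholder embedding model: mean pixel value of the tile."""
    vals = [v for row in tile for v in row]
    return sum(vals) / len(vals)

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
embeddings = {(r, c): embed(t) for r, c, t in tile_image(image, 2, 2)}
print(embeddings)  # {(0, 0): 3.5, (0, 2): 5.5, (2, 0): 11.5, (2, 2): 13.5}
```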
As shown, step 394 occurs next. At step 394, the plurality of embeddings is used to perform a prioritization of image tiles among the plurality of image tiles. As shown, step 395 occurs next. At step 395, during a downlink session from the constrained-environment device, image tiles from among the plurality of image tiles are downlinked based, at least in part, on the prioritization. The process then advances to a return block, where other processing is resumed.
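Steps 394 and 395 may be sketched as follows. The scoring heuristic here (distance from a reference embedding, so unusual tiles rank first) is one plausible illustration, not the specific prioritization of the disclosed technology, and the embeddings and budget are made up.

```python
# Sketch: rank tiles by a priority score computed from their embeddings
# (step 394), then downlink only as many tiles as the session budget
# allows, highest priority first (step 395).
def prioritize(embeddings, reference):
    def novelty(item):
        tile_id, emb = item
        # Squared distance from a "typical" reference embedding.
        return sum((e - r) ** 2 for e, r in zip(emb, reference))
    return [tile_id for tile_id, _ in
            sorted(embeddings.items(), key=novelty, reverse=True)]

def downlink(ranked_tiles, budget):
    return ranked_tiles[:budget]  # the budget caps the downlink session

embeddings = {"t0": [0.1, 0.1], "t1": [0.9, 0.8], "t2": [0.2, 0.15]}
ranked = prioritize(embeddings, reference=[0.15, 0.12])
print(downlink(ranked, budget=2))  # ['t1', 't2']
```

With a budget of two tiles, the most anomalous tile (`t1`) is transmitted first, and the least anomalous (`t0`) is held back.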
As shown in
In some examples, one or more of the computing devices 410 is a device that is configured to be at least part of a system for spaceborne-sensor-related prioritization.
Computing device 500 includes at least one processing circuit 510 configured to execute instructions, such as instructions for implementing the herein-described workloads, processes, and/or technology. Processing circuit 510 may include a microprocessor, a microcontroller, a graphics processor, a coprocessor, a field-programmable gate array, a programmable logic device, a signal processor, and/or any other circuit suitable for processing data. The aforementioned instructions, along with other data (e.g., datasets, metadata, operating system instructions, etc.), may be stored in operating memory 520 during run-time of computing device 500. Operating memory 520 may also include any of a variety of data storage devices/components, such as volatile memories, semi-volatile memories, random access memories, static memories, caches, buffers, and/or other media used to store run-time information. In one example, operating memory 520 does not retain information when computing device 500 is powered off. Rather, computing device 500 may be configured to transfer instructions from a non-volatile data storage component (e.g., data storage component 550) to operating memory 520 as part of a booting or other loading process. In some examples, other forms of execution may be employed, such as execution directly from data storage component 550, e.g., eXecute In Place (XIP).
Operating memory 520 may include 4th generation double data rate (DDR4) memory, 3rd generation double data rate (DDR3) memory, other dynamic random access memory (DRAM), High Bandwidth Memory (HBM), Hybrid Memory Cube memory, 3D-stacked memory, static random access memory (SRAM), magnetoresistive random access memory (MRAM), pseudorandom random access memory (PSRAM), and/or other memory, and such memory may comprise one or more memory circuits integrated onto a DIMM, SIMM, SODIMM, Known Good Die (KGD), or other packaging. Such operating memory modules or devices may be organized according to channels, ranks, and banks. For example, operating memory devices may be coupled to processing circuit 510 via memory controller 530 in channels. One example of computing device 500 may include one or two DIMMs per channel, with one or two ranks per channel. Operating memory within a rank may operate with a shared clock, and shared address and command bus. Also, an operating memory device may be organized into several banks where a bank can be thought of as an array addressed by row and column. Based on such an organization of operating memory, physical addresses within the operating memory may be referred to by a tuple of channel, rank, bank, row, and column.
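The channel/rank/bank/row/column addressing described above may be sketched as a bit-field decomposition of a flat physical address. This is illustrative only: the field widths below are made up, and real DRAM address maps are memory-controller-specific.

```python
# Sketch: decode a flat physical address into the (channel, rank, bank,
# row, column) tuple. Field widths are hypothetical.
FIELDS = [("column", 10), ("row", 15), ("bank", 3), ("rank", 1), ("channel", 2)]

def decode(addr):
    out = {}
    for name, bits in FIELDS:  # least-significant field first
        out[name] = addr & ((1 << bits) - 1)
        addr >>= bits
    return out

# Round-trip example: build an address from known fields, then decode it.
addr = 37 | (1234 << 10) | (5 << 25) | (1 << 28) | (2 << 29)
print(decode(addr))
```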
Despite the above discussion, operating memory 520 specifically does not include or encompass communications media, any communications medium, or any signals per se.
Memory controller 530 is configured to interface processing circuit 510 to operating memory 520. For example, memory controller 530 may be configured to interface commands, addresses, and data between operating memory 520 and processing circuit 510. Memory controller 530 may also be configured to abstract or otherwise manage certain aspects of memory management from or for processing circuit 510. Although memory controller 530 is illustrated as a single memory controller separate from processing circuit 510, in other examples, multiple memory controllers may be employed, memory controller(s) may be integrated with operating memory 520, and/or the like. Further, memory controller(s) may be integrated into processing circuit 510. These and other variations are possible.
In computing device 500, data storage memory 550, input interface 560, output interface 570, and network adapter 580 are interfaced to processing circuit 510 by bus 540. Although
In computing device 500, data storage memory 550 is employed for long-term non-volatile data storage. Data storage memory 550 may include any of a variety of non-volatile data storage devices/components, such as non-volatile memories, disks, disk drives, hard drives, solid-state drives, and/or any other media that can be used for the non-volatile storage of information. However, data storage memory 550 specifically does not include or encompass communications media, any communications medium, or any signals per se. In contrast to operating memory 520, data storage memory 550 is employed by computing device 500 for non-volatile long-term data storage, instead of for run-time data storage.
Also, computing device 500 may include or be coupled to any type of processor-readable media such as processor-readable storage media (e.g., operating memory 520 and data storage memory 550) and communication media (e.g., communication signals and radio waves). While the term processor-readable storage media includes operating memory 520 and data storage memory 550, the term “processor-readable storage media,” throughout the specification and the claims, whether used in the singular or the plural, is defined herein so that the term “processor-readable storage media” specifically excludes and does not encompass communications media, any communications medium, or any signals per se. However, the term “processor-readable storage media” does encompass processor cache, Random Access Memory (RAM), register memory, and/or the like.
Computing device 500 also includes input interface 560, which may be configured to enable computing device 500 to receive input from users or from other devices. In addition, computing device 500 includes output interface 570, which may be configured to provide output from computing device 500. In one example, output interface 570 includes a frame buffer and a graphics processor or accelerator, and is configured to render displays for presentation on a separate visual display device (such as a monitor, projector, virtual computing client computer, etc.). In another example, output interface 570 includes a visual display device and is configured to render and present displays for viewing. In yet another example, input interface 560 and/or output interface 570 may include a universal asynchronous receiver/transmitter (UART), a Serial Peripheral Interface (SPI), an Inter-Integrated Circuit (I2C) interface, a general-purpose input/output (GPIO), and/or the like. Moreover, input interface 560 and/or output interface 570 may include or be interfaced to any number or type of peripherals.
In the illustrated example, computing device 500 is configured to communicate with other computing devices or entities via network adapter 580. Network adapter 580 may include a wired network adapter, e.g., an Ethernet adapter, a Token Ring adapter, or a Digital Subscriber Line (DSL) adapter. Network adapter 580 may also include a wireless network adapter, for example, a Wi-Fi adapter, a Bluetooth adapter, a ZigBee adapter, a Long-Term Evolution (LTE) adapter, a SigFox adapter, a LoRa adapter, a Powerline adapter, or a 5G adapter.
Although computing device 500 is illustrated with certain components configured in a particular arrangement, these components and arrangements are merely one example of a computing device in which the technology may be employed. In other examples, data storage memory 550, input interface 560, output interface 570, or network adapter 580 may be directly coupled to processing circuit 510 or be coupled to processing circuit 510 via an input/output controller, a bridge, or other interface circuitry. Other variations of the technology are possible.
Some examples of computing device 500 include at least one memory (e.g., operating memory 520) having processor-executable code stored therein, and at least one processor (e.g., processing circuit 510) that is adapted to execute the processor-executable code, wherein the processor-executable code includes processor-executable instructions that, in response to execution, enable computing device 500 to perform actions, where the actions may include, in some examples, actions for one or more processes described herein, such as the process shown in
The above description provides specific details for a thorough understanding of, and enabling description for, various examples of the technology. One skilled in the art will understand that the technology may be practiced without many of these details. In some instances, well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of examples of the technology. It is intended that the terminology used in this disclosure be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain examples of the technology. Although certain terms may be emphasized below, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description section. Throughout the specification and claims, the following terms take at least the meanings explicitly associated herein, unless the context dictates otherwise. The meanings identified below do not necessarily limit the terms, but merely provide illustrative examples for the terms. For example, each of the terms “based on” and “based upon” is not exclusive, and is equivalent to the term “based, at least in part, on,” and includes the option of being based on additional factors, some of which may not be described herein. As another example, the term “via” is not exclusive, and is equivalent to the term “via, at least in part,” and includes the option of being via additional factors, some of which may not be described herein. The meaning of “in” includes “in” and “on.” The phrase “in one embodiment,” or “in one example,” as used herein does not necessarily refer to the same embodiment or example, although it may. Use of particular textual numeric designators does not imply the existence of lesser-valued numerical designators. 
For example, reciting “a widget selected from the group consisting of a third foo and a fourth bar” would not itself imply that there are at least three foo, nor that there are at least four bar, elements. References in the singular are made merely for clarity of reading and include plural references unless plural references are specifically excluded. The term “or” is an inclusive “or” operator unless specifically indicated otherwise. For example, the phrases “A or B” means “A, B, or A and B.” As used herein, the terms “component” and “system” are intended to encompass hardware, software, or various combinations of hardware and software. Thus, for example, a system or component may be a process, a process executing on a computing device, the computing device, or a portion thereof. The term “cloud” or “cloud computing” refers to shared pools of configurable computer system resources and higher-level services over a wide-area network, typically the Internet. “Edge” devices refer to devices that are not themselves part of the cloud but are devices that serve as an entry point into enterprise or service provider core networks.
While the above Detailed Description describes certain examples of the technology, and describes the best mode contemplated, no matter how detailed the above appears in text, the technology can be practiced in many ways. Details may vary in implementation, while still being encompassed by the technology described herein. As noted above, particular terminology used when describing certain features or aspects of the technology should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the technology to the specific examples disclosed herein, unless the Detailed Description explicitly defines such terms. Accordingly, the actual scope of the technology encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the technology.
This application claims priority to U.S. Provisional Pat. App. No. 63/446,755, filed Feb. 17, 2023, entitled “SPACEBORNE-SENSOR-DATA-RELATED PRIORITIZATION AND ORCHESTRATION” (Atty. Dkt. No. 412763-US-PSP). The entirety of this afore-mentioned application is incorporated herein by reference.
Number | Date | Country
---|---|---
63/446,755 | Feb. 17, 2023 | US