Conventional patient modeling methods can only obtain a three-dimensional (3D) surface model of the patient. To enrich the information that may be encompassed in a patient model and make it applicable to more clinical applications, it may be desirable to model the interior anatomical structures of the patient together with the 3D surface model. Information regarding these interior anatomical structures may be obtained through one or more common medical scans such as computed tomography (CT), X-ray, magnetic resonance imaging (MRI), or ultrasound imaging. Compared to the other imaging techniques, ultrasound imaging may be faster, safer, noninvasive, and less expensive than the MRI and CT alternatives. Accordingly, it may be beneficial to obtain the information regarding a patient's interior anatomical structures by using ultrasound imaging techniques.
Described herein are systems, methods and instrumentalities associated with generating a three-dimensional (3D) patient model based on ultrasound images of the patient. A system as described herein may comprise at least one sensing device and one or more processors communicatively coupled to the at least one sensing device. The at least one sensing device may be configured to capture images of the patient in a medical environment, wherein the medical environment may include an ultrasound machine with an ultrasound probe. The at least one sensing device may be installed on the ultrasound machine or on a ceiling of the medical environment, and may be configured to capture images of the medical environment. The one or more processors may be configured to obtain a 3D human model of the patient, wherein the 3D human model may indicate at least a pose and a shape of the patient's body. The one or more processors may be further configured to receive a first ultrasound image of the patient captured using the ultrasound probe, determine, based on the captured images of the medical environment, a position of the ultrasound probe (e.g., relative to the patient's body), and align the first ultrasound image with the 3D human model of the patient based at least on the position of the ultrasound probe. The one or more processors may then generate a visual representation that shows the alignment of the first ultrasound image and the 3D human model.
In one or more embodiments, the visual representation may include a 3D body contour of the patient, and the one or more processors may be configured to fill a first inside portion of the 3D body contour with the first ultrasound image based on the alignment of the first ultrasound image and the 3D human model of the patient. In one or more embodiments, the one or more processors may be further configured to receive a second ultrasound image of the patient captured using the ultrasound probe, align the second ultrasound image with the 3D human model based on at least the position of the ultrasound probe, and add the second ultrasound image to the visual representation by filling a second inside portion of the 3D body contour with the second ultrasound image based on the alignment of the second ultrasound image and the 3D human model of the patient. In one or more embodiments, the first and second ultrasound images of the patient may be associated with an anatomical structure (e.g., an internal organ such as the heart) of the patient, and the one or more processors may be further configured to reconstruct a 3D ultrasound model of the anatomical structure based on at least the first ultrasound image and the second ultrasound image.
In one or more embodiments, the 3D human model of the patient may be obtained from another source or generated by the one or more processors based on the images (e.g., of the medical environment) captured by the at least one sensing device. In one or more embodiments, the one or more processors may be further configured to determine an orientation of the ultrasound probe (e.g., relative to the patient's body), and align the first ultrasound image with the 3D human model further based on the determined orientation of the ultrasound probe. In one or more embodiments, the one or more processors being configured to determine the position of the ultrasound probe may include the one or more processors being configured to detect, in the images of the medical environment, a marker associated with the ultrasound probe and determine the position of the ultrasound probe relative to the patient's body based on the detected marker. Alternatively, or additionally, the one or more processors may be further configured to determine the position of the ultrasound probe by detecting, using a machine learning model, visual features associated with the ultrasound probe in the captured images of the medical environment and determining the position of the ultrasound probe based on the detected visual features.
In one or more embodiments, the one or more processors may be further configured to determine, based on respective visual features extracted by a machine learning model from multiple ultrasound images, that two or more of the ultrasound images are substantially similar to each other, and provide an indication that the two or more ultrasound images are duplicative. In one or more embodiments, the one or more processors may be further configured to detect, based on a machine learning model, a medical abnormality in an ultrasound image, and provide an indication of the detection. For example, the indication may include a bounding box around the detected medical abnormality in the ultrasound image.
A more detailed understanding of the examples disclosed herein may be had from the following description, given by way of example in conjunction with the accompanying drawings.
The present disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
One or more sensing devices 110 may be installed at various locations of the medical environment 100 and may be communicatively coupled to a processing device 112 (e.g., comprising one or more processors) and/or other devices of the medical environment 100 via a communication network 114. Each of the sensing devices 110 may include one or more sensors such as one or more 2D visual sensors (e.g., 2D cameras), one or more 3D visual sensors (e.g., 3D cameras), one or more red, green and blue (RGB) sensors, one or more depth sensors, one or more RGB plus depth (RGB-D) sensors, one or more thermal sensors (e.g., far-infrared (FIR) or near-infrared (NIR) sensors), one or more motion sensors, one or more radar sensors, and/or other types of image capturing circuitry that are configured to capture images of a person, an object or a scene in the medical environment 100. Depending on the type of cameras, sensors, and/or image capturing circuitry included in the sensing devices 110, the images generated by the sensing devices 110 may include, for example, one or more photos, one or more thermal images, one or more radar images, and/or the like. The sensing devices 110 may be configured to generate the images described herein in response to detecting a person (e.g., patient 118), an object (e.g., ultrasound probe 106), or a scene (e.g., a standing medical professional, such as doctor 122, examining the patient 118 lying on the patient bed 102) in the medical environment 100. The sensing devices 110 may also be configured to generate the images described herein based on a preconfigured schedule or time interval, or upon receiving a control signal (e.g., from a remote control device like programming device 116) that triggers the image generation.
Each of the sensing devices 110 may include a functional unit (e.g., a processor) configured to control the image capturing functionalities described herein. The functional unit may also be configured to process the images (e.g., pre-process the images before sending the images to another processing device), communicate with other devices located inside or outside of the medical environment 100, determine a characteristic (e.g., a person or object) of the medical environment 100 based on the captured images, etc. For example, the functional unit (and/or the processing device 112) may be capable of generating (e.g., constructing) a 3D human model such as a 3D human mesh model of the patient 118 (e.g., a 3D patient model) based on the images captured by the sensing devices 110. Such a 3D human model may include a plurality of parameters that may indicate the body shape and/or pose of the patient while the patient is inside the medical environment 100 (e.g., during an MRI, X-ray, ultrasound, or CT procedure). For example, the parameters may include shape parameters β and pose parameters θ that may be used to determine multiple vertices (e.g., 6890 vertices based on 82 shape and pose parameters) associated with the patient's body and construct a visual representation of the patient model (e.g., a 3D mesh), for example, by connecting the vertices with edges to form polygons (e.g., triangles), connecting multiple polygons to form a surface, using multiple surfaces to determine a 3D shape, and applying texture and/or shading to the surfaces and/or shapes.
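For illustration, the mapping from shape parameters to mesh vertices described above may be sketched as follows. This is a minimal example in the spirit of statistical body models; the template, shape basis, vertex count, and parameter values are toy placeholders rather than the actual model, and pose-dependent deformation is omitted.

```python
# Simplified illustration of deriving mesh vertices from shape parameters
# (betas), in the spirit of statistical body models. Real models use
# thousands of vertices and learned blend shapes; everything here is a toy
# placeholder.

def vertices_from_params(template, shape_basis, betas):
    """Offset each template vertex by a linear combination of shape-basis
    displacements weighted by the shape parameters (betas)."""
    verts = []
    for v_idx, (x, y, z) in enumerate(template):
        dx = dy = dz = 0.0
        for b_idx, beta in enumerate(betas):
            bx, by, bz = shape_basis[b_idx][v_idx]
            dx += beta * bx
            dy += beta * by
            dz += beta * bz
        verts.append((x + dx, y + dy, z + dz))
    return verts

# Toy template with 3 vertices and a 2-parameter shape basis.
template = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
shape_basis = [
    [(0.1, 0.0, 0.0), (0.1, 0.0, 0.0), (0.1, 0.0, 0.0)],  # beta_0: widen
    [(0.0, 0.2, 0.0), (0.0, 0.2, 0.0), (0.0, 0.2, 0.0)],  # beta_1: lengthen
]
verts = vertices_from_params(template, shape_basis, betas=[1.0, 0.5])
```

The resulting vertices could then be connected into triangles and surfaces as described above.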
The 3D patient model described above may also be generated by processing device 112. For example, processing device 112 may be communicatively coupled to one or more of sensing devices 110 and may be configured to receive images of the patient 118 from those sensing devices 110 (e.g., in real time or based on a predetermined schedule). Using the received images, processing device 112 may construct the 3D patient model, for example, in a similar manner as described above. It should be noted here that, even though processing device 112 is shown in
As noted above, each of the sensing devices 110 may include a communication circuit and may be configured to exchange information with one or more other sensing devices via the communication circuit and/or the communication network 114. The sensing devices 110 may form a sensor network within which the sensing devices 110 may transmit data to and receive data from each other. The data exchanged between the sensing devices 110 may include, for example, imagery data captured by each sensing device 110 and/or control data for discovering each sensing device's 110 presence and/or calibrating each sensing device's 110 parameters. For instance, when a new sensing device 110 is added to the medical environment 100, the sensing device 110 may transmit messages (e.g., via broadcast, groupcast or unicast) to one or more other sensing devices 110 in the sensor network and/or a controller (e.g., a processing device as described herein) of the sensor network to announce the addition of the new sensing device 110. Responsive to such an announcement or to a transmission of data, the other sensing devices 110 and/or the controller may register the new sensing device 110 and begin exchanging data with the new sensing device 110.
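The announce-and-register flow described above may be sketched as follows; the message fields, class name, and method names are illustrative assumptions rather than part of any particular sensor network implementation.

```python
# Minimal sketch of the discovery/registration flow: a new sensing device
# announces itself, the controller registers it, and data exchange begins
# only after registration. All names and fields are illustrative.

class SensorNetworkController:
    def __init__(self):
        self.registered = {}

    def handle_announcement(self, message):
        """Register a newly announced sensing device if not already known."""
        device_id = message["device_id"]
        if device_id not in self.registered:
            self.registered[device_id] = {"capabilities": message.get("capabilities", [])}
        return device_id in self.registered

    def accept_data(self, device_id, payload):
        """Only exchange data with devices that completed registration."""
        if device_id not in self.registered:
            raise PermissionError(f"unregistered device: {device_id}")
        self.registered[device_id].setdefault("frames", []).append(payload)

controller = SensorNetworkController()
controller.handle_announcement({"device_id": "cam-7", "capabilities": ["rgbd"]})
controller.accept_data("cam-7", b"frame-bytes")
```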
The sensing devices 110 may be configured to be installed at various locations of the medical environment 100 including, e.g., on a ceiling, above a doorway, on a wall, on a medical device, etc. From these locations, each of the sensing devices 110 may capture images of a person, object or scene that is in the field of view (FOV) of the sensing device 110 (e.g., the FOV may be defined by a viewpoint and/or a viewing angle). The FOV of each of the sensing devices 110 may be adjusted manually or automatically (e.g., by transmitting a control signal to the sensing device) so that the sensing device 110 may take images of a person, an object, or a scene in the medical environment 100 from different viewpoints or different viewing angles.
Each of the sensing devices 110 may be configured to exchange information with other devices (e.g., ultrasound machine 104 or monitoring device 108) in the medical environment 100, e.g., via the communication network 114. The configuration and/or operation of the sensing devices 110 may be at least partially controlled by a programming device 116. For example, the programming device 116 may be configured to initialize and modify one or more operating parameters of the sensing devices 110 including, e.g., the resolution of images captured by the sensing devices 110, a periodicity of data exchange between the sensing devices 110 and the processing device 112, a frame or bit rate associated with the data exchange, a duration of data storage on the sensing devices, etc. The programming device 116 may also be configured to control one or more aspects of the operation of the sensing devices 110 such as triggering a calibration of the sensing devices 110, adjusting the respective orientations of the sensing devices 110, zooming in or zooming out on a person or object in the medical environment 100, triggering a reset, etc. The programming device 116 may be a mobile device (e.g., such as a smartphone, a tablet, or a wearable device), a desktop computer, a laptop computer, etc., and may be configured to communicate with the sensing devices 110 and/or the processing device 112 over the communication network 114. The programming device 116 may receive information and/or instructions from a user (e.g., via a user interface implemented on the programming device 116) and forward the received information and/or instructions to the sensing devices 110 via the communication network 114.
The communication network 114 described herein may be a wired or a wireless network, or a combination thereof. For example, the communication network 114 may be established over a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN), etc.), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network, etc.), a cellular network (e.g., a Long Term Evolution (LTE) or 5G network), a frame relay network, a virtual private network (VPN), a satellite network, and/or a telephone network. The communication network 114 may include one or more network access points. For example, the communication network 114 may include wired and/or wireless network access points such as base stations and/or internet exchange points through which one or more devices in the medical environment 100 may be connected to exchange data and/or other information. Such exchange may utilize routers, hubs, switches, server computers, and/or any combination thereof.
The processing device 112 may be configured to receive images from the sensing devices 110 and determine one or more characteristics of the medical environment 100 based on the images. These characteristics may include, for example, people and/or objects that are present in the medical environment 100 and the respective locations of the people and/or objects in the medical environment 100. The people present in the medical environment 100 may include, e.g., a patient 118 and/or medical staff (e.g., the doctor 122, a technician, a nurse, etc.) attending to the patient 118. The objects present in the medical environment 100 may include, e.g., the ultrasound machine 104, the ultrasound probe 106, the monitoring device 108, the patient bed 102, and/or other medical devices or tools not shown in
Furthermore, the processing device 112 may be configured to automatically generate a 3D ultrasound model of an internal organ (e.g., anatomical structure 120) of the patient 118 based on multiple ultrasound images of the internal organ captured by the ultrasound probe 106 of ultrasound machine 104. The organ may be, for example, the spleen, liver, heart, etc. of the patient, and the 3D ultrasound model of the internal organ may show, for example, the shape and/or location of the organ relative to the patient's body as indicated by the 3D patient model. The operation of the ultrasound machine 104 may involve the doctor 122 moving the ultrasound probe 106 over the body of patient 118 around the area of the internal organ of interest (e.g., anatomical structure 120) to capture 2D ultrasound images of the organ. The captured 2D ultrasound images may be displayed on a screen (e.g., a display of ultrasound machine 104 and/or the monitoring device 108). The 2D ultrasound images may show a cross-section of the internal organ, and the doctor 122 may then be able to estimate the health state of the internal organ based on the 2D ultrasound images.
In examples, the sensing devices 110 may be configured to capture images of the medical environment 100 that includes the patient 118, the ultrasound machine 104, and/or the ultrasound probe 106. The processing device 112 may be configured to obtain a 3D human model of the patient 118 (e.g., based on the images captured by the sensing devices 110 or from a different source such as a patient model database), wherein the 3D human model may indicate at least a pose and a shape of the body of the patient 118. A first ultrasound image of the patient 118 captured using the ultrasound probe 106 may be received by the processing device 112, which may determine, based on the captured images of the medical environment 100, a position of the ultrasound probe 106 (e.g., relative to the body of patient 118). The processing device 112 may then align the first ultrasound image with the 3D human model of the patient 118 based on at least the position of the ultrasound probe 106, and generate a visual representation that shows the alignment of the first ultrasound image and the 3D human model.
In examples, the visual representation may include a 3D body contour of the patient 118, and the processing device 112 may be configured to fill a first inside portion of the 3D body contour with the first ultrasound image based on the alignment of the first ultrasound image and the 3D human model of the patient 118. For example, if the first ultrasound image is a left-side view of the patient's stomach, the image may be displayed inside the 3D body contour, in an area that corresponds to the left side of the stomach. Furthermore, the processing device 112 may also be configured to receive a second ultrasound image of the patient 118 captured using the ultrasound probe 106 and align the second ultrasound image with the 3D human model based, at least, on the position of the ultrasound probe 106 (e.g., relative to the body of patient 118). The second ultrasound image may then be added to the visual representation, for example, by filling a second inside portion of the 3D body contour with the second ultrasound image based on the alignment of the second ultrasound image and the 3D human model of the patient 118.
In examples, the first and second ultrasound images of the patient 118 may be associated with the anatomical structure 120 (e.g., an internal organ such as the heart) of the patient 118, and the processing device 112 may be further configured to reconstruct a 3D ultrasound model of the anatomical structure 120 based, at least, on the first ultrasound image, the second ultrasound image, and the position or location of the ultrasound probe 106 determined from the images of the medical environment 100. For instance, with the help of the sensing devices 110, the position/location of the ultrasound probe 106 may be tracked while the ultrasound images of the patient are taken. The position/location information may then be used to determine the respective 3D viewpoints of the 2D ultrasound images and to align and fuse the 2D ultrasound images into a 3D reconstructed view based on the determined 3D viewpoints.
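As an illustration of this fusion step, the sketch below maps each pixel of a tracked 2D slice into world coordinates using the probe pose recorded at capture time and accumulates the results into a sparse voxel grid. The pose representation (an origin plus an in-plane yaw angle) and the image data are simplified placeholders, not the actual reconstruction method.

```python
# Sketch of fusing tracked 2D ultrasound slices into a 3D reconstruction:
# each pixel is mapped into world coordinates using the probe pose recorded
# when the slice was captured, then accumulated into a sparse voxel grid.

import math

def slice_to_world(u, v, pose):
    """Map slice coordinates (u, v) to world (x, y, z) given a probe pose
    consisting of an origin and an in-plane rotation about the z-axis."""
    ox, oy, oz = pose["origin"]
    c, s = math.cos(pose["yaw"]), math.sin(pose["yaw"])
    return (ox + c * u - s * v, oy + s * u + c * v, oz)

def fuse_slices(slices, voxel_size=1.0):
    volume = {}  # sparse voxel grid: (i, j, k) -> accumulated intensity
    for image, pose in slices:
        for v, row in enumerate(image):
            for u, intensity in enumerate(row):
                x, y, z = slice_to_world(float(u), float(v), pose)
                key = (round(x / voxel_size), round(y / voxel_size),
                       round(z / voxel_size))
                # Keep the brightest observation that lands in each voxel.
                volume[key] = max(volume.get(key, 0), intensity)
    return volume

# Two tiny 2x2 "slices" captured at different probe depths (z).
slices = [
    ([[10, 20], [30, 40]], {"origin": (0.0, 0.0, 0.0), "yaw": 0.0}),
    ([[50, 60], [70, 80]], {"origin": (0.0, 0.0, 1.0), "yaw": 0.0}),
]
volume = fuse_slices(slices)
```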
In examples, the processing device 112 may be further configured to determine an orientation of the ultrasound probe 106 (e.g., relative to the body of patient 118), and align the first ultrasound image with the 3D human model further based on the determined orientation of the ultrasound probe. For example, if the orientation of the ultrasound probe 106 with respect to the body of patient 118 is 180° (e.g., the probe is upside down with respect to the head-feet axis of the body of patient 118), then the ultrasound images captured by the ultrasound probe 106 may be rotated accordingly in order to align them with the 3D human model.
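The orientation correction in this example may be sketched as follows; only the 0° and 180° cases are handled, and the image is represented simply as a list of pixel rows.

```python
# Sketch of the orientation correction described above: if the probe is
# rotated with respect to the patient's head-feet axis, the captured 2D
# image is rotated by the same angle before alignment with the 3D model.

def correct_orientation(image, probe_angle_deg):
    """Rotate a 2D image (list of rows) to undo the probe's rotation."""
    if probe_angle_deg % 360 == 0:
        return [row[:] for row in image]
    if probe_angle_deg % 360 == 180:
        # Upside-down probe: flip the image both vertically and horizontally.
        return [row[::-1] for row in image[::-1]]
    raise NotImplementedError("only 0/180 degree probes handled in this sketch")

image = [[1, 2],
         [3, 4]]
aligned = correct_orientation(image, 180)  # -> [[4, 3], [2, 1]]
```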
In examples, the processing device 112 may be configured to determine the position of the ultrasound probe 106 (e.g., relative to the body of patient 118) by detecting, in the images of the medical environment 100, a marker associated with the ultrasound probe 106, and determining the position of the ultrasound probe 106 relative to the body of patient 118 based on the detected marker. Alternatively, or additionally, the processing device 112 may be further configured to determine the position of the ultrasound probe 106 (e.g., relative to the body of patient 118) based on detecting, using a machine learning model, visual features associated with the ultrasound probe 106 in the captured images of the medical environment 100, and determining the position of the ultrasound probe 106 based on the detected visual features.
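A minimal sketch of the marker-based approach is shown below: pixels brighter than a threshold are treated as the marker, and their centroid is taken as the probe's 2D position in the camera image. A real system might instead rely on a fiducial-marker library; the threshold and image here are illustrative assumptions.

```python
# Sketch of marker-based probe localization: find pixels belonging to a
# high-contrast marker in a camera image and take their centroid as the
# probe's 2D position in that image.

def locate_marker(image, threshold=200):
    """Return the (row, col) centroid of pixels brighter than threshold,
    or None if no marker pixels are found."""
    hits = [(r, c) for r, row in enumerate(image)
            for c, px in enumerate(row) if px >= threshold]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

image = [
    [0,   0,   0,   0],
    [0, 255, 255,   0],
    [0, 255, 255,   0],
    [0,   0,   0,   0],
]
position = locate_marker(image)  # centroid of the bright 2x2 block
```

Combining such 2D detections from multiple calibrated sensing devices would then allow the probe's 3D position relative to the patient's body to be derived.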
In examples, the processing device 112 may be further configured to receive a second ultrasound image of the patient 118 captured using the ultrasound probe 106 and determine, based on respective visual features of the first ultrasound image and the second ultrasound image detected by a machine learning model, that the first ultrasound image is substantially similar to the second ultrasound image. An indication (e.g., a visual indication) that the first ultrasound image and the second ultrasound image are duplicative of each other may be provided (e.g., to the doctor 122). In examples, the processing device 112 may be further configured to detect, based on a machine learning model, a medical abnormality in an ultrasound image, and provide an indication of the detection (e.g., on monitoring device 108). For example, the indication may include a bounding shape (e.g., a bounding box or a bounding circle) around the detected medical abnormality in the ultrasound image.
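The duplicate-detection step may be sketched as follows, using cosine similarity between feature vectors that stand in for the features a machine learning model would extract; the similarity threshold is an illustrative assumption.

```python
# Sketch of flagging near-duplicate ultrasound images by comparing feature
# vectors with cosine similarity. The vectors below stand in for features a
# machine learning model would extract from the images.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_duplicates(features, threshold=0.98):
    """Return index pairs of images whose features are substantially similar."""
    pairs = []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if cosine_similarity(features[i], features[j]) >= threshold:
                pairs.append((i, j))
    return pairs

features = [
    [1.0, 0.0, 0.5],    # image 0
    [0.99, 0.01, 0.5],  # image 1: nearly identical to image 0
    [0.0, 1.0, 0.0],    # image 2: a different view
]
duplicates = find_duplicates(features)  # -> [(0, 1)]
```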
In examples, the processing device 112 may be configured to present the 3D human model and the 3D ultrasound model of an internal anatomical structure on a display device (e.g., on monitoring device 108) by presenting a graphical representation of the surface of the patient's body together with a graphical representation of the internal anatomical structure of the patient on the display device. In examples, the processing device 112 may be communicatively coupled to the database 124, for example, via the communication network 114. The database 124 may comprise a patient record repository that stores basic information of the patient 118, diagnostic and/or treatment histories of the patient 118, scan images of the patient 118, etc. As a part of the generation of the 3D human model based on ultrasound images of the patient 118, the processing device 112 may be configured to retrieve all or a subset of the medical records of the patient 118 from the database 124, analyze the retrieved medical records in conjunction with other information of the patient 118 gathered or determined by the processing device 112 (e.g., such as the 3D human model described herein), and generate the 3D human model and the 3D ultrasound model of an internal anatomical structure of the patient 118 based, at least in part, on the retrieved medical records. For example, based on past medical scans of the patient 118, body geometry of the patient 118, and/or other preferences and/or constraints associated with the patient 118, the processing device 112 may automatically determine the parameters and/or configurations of a device (e.g., the position and/or orientation of the ultrasound probe 106) used in the medical procedure and cause the parameters and/or configurations to be implemented for the medical device, e.g., by transmitting the parameters and/or configurations to a display device visible to the doctor 122. 
The processing device 112 may also display, for example, a medical scan associated with the anatomical structure 120 on a display (e.g., as requested by the doctor 122 via an interface of the processing device 112) in order to assist the doctor 122.
In the examples provided herein, one or more tasks are described as being initiated and/or implemented by a processing device, such as the processing device 112, in a centralized manner. It should be noted, however, that the tasks may also be distributed among multiple processing devices (e.g., interconnected via the communication network 114, arranged in a cloud-computing environment, etc.) and performed in a distributed manner. Further, even though the processing device 112 has been described herein as a device separate from the sensing devices (e.g., the sensing devices 110), the functionalities of the processing device 112 may be realized via one or more of the sensing devices (e.g., the one or more sensing devices 110 may comprise respective processors configured to perform the functions of the processing device 112 described herein). Therefore, in some implementations, a separate processing device may not be included and one or more sensing devices (e.g., the sensing devices 110) may assume the responsibilities of the processing device.
The tracking of the ultrasound probe 106 with respect to the patient's body may be accomplished based on images of the medical environment captured by the sensing devices described herein (e.g., the sensing devices 110 of
In some embodiments, the tracking view screen 202 of
As noted above, the tracking view screen 202 of
In some embodiments, multiple 2D ultrasound images (e.g., including the first ultrasound image 210 and the second ultrasound image 302) that have been aligned with the 3D human model 204 may be used to generate a 3D ultrasound model (e.g., as described herein), which may be displayed together with the graphical representation of the 3D human model 204 (e.g., within a third interior portion of the 3D human model 204) in the alignment view screen 208.
One or more of the tasks described herein (e.g., such as automatically recognizing an ultrasound probe and determining the position of the ultrasound probe) may be performed using an artificial neural network (e.g., based on a machine learning model implemented via the artificial neural network). In examples, such an artificial neural network may include a plurality of layers such as one or more convolution layers, one or more pooling layers, and/or one or more fully connected layers. Each of the convolution layers may include a plurality of convolution kernels or filters configured to extract features from an input image. The convolution operations may be followed by batch normalization and/or linear (or non-linear) activation, and the features extracted by the convolution layers may be down-sampled through the pooling layers and/or the fully connected layers to reduce the redundancy and/or dimension of the features, so as to obtain a representation of the down-sampled features (e.g., in the form of a feature vector or feature map). In examples (e.g., if the task involves the generation of a segmentation mask associated with the ultrasound probe), the artificial neural network may further include one or more un-pooling layers and one or more transposed convolution layers that may be configured to up-sample and de-convolve the features extracted through the operations described above. As a result of the up-sampling and de-convolution, a dense feature representation (e.g., a dense feature map) of the input image may be derived, and the artificial neural network may be configured to predict the location of the ultrasound probe based on the feature representation.
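A toy version of the convolution, activation, and pooling operations described above is sketched below on a single-channel image with one fixed 2x2 kernel; real networks use many learned kernels, deeper stacks, batch normalization, etc.

```python
# Toy illustration of the convolution -> activation -> pooling pipeline on a
# 1-channel image with a single fixed 2x2 kernel. Values are hand-picked for
# illustration only.

def conv2d(image, kernel):
    """Valid 2D cross-correlation of image with a 2x2 kernel."""
    out = []
    for r in range(len(image) - 1):
        row = []
        for c in range(len(image[0]) - 1):
            row.append(sum(image[r + dr][c + dc] * kernel[dr][dc]
                           for dr in range(2) for dc in range(2)))
        out.append(row)
    return out

def relu(fmap):
    return [[max(0.0, v) for v in row] for row in fmap]

def max_pool(fmap):
    """2x2 max pooling with stride 2 (assumes even feature-map dimensions)."""
    return [[max(fmap[r][c], fmap[r][c + 1], fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, len(fmap[0]) - 1, 2)]
            for r in range(0, len(fmap) - 1, 2)]

image = [
    [0, 0, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
edge_kernel = [[1, -1], [1, -1]]  # responds to vertical intensity edges
features = max_pool(relu(conv2d(image, edge_kernel)))
```

The down-sampled feature map produced this way is the kind of representation from which a final layer could regress the probe's location.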
For simplicity of explanation, the training operations are depicted in
At 508, the processing device may determine, based on the images of the medical environment, a position of the ultrasound probe (e.g., relative to the patient's body). For example, visual features associated with people (e.g., patient 118, doctor 122, etc.) and/or objects (e.g., ultrasound probe 106 or other tools, devices, etc.) in the images of the medical environment may be analyzed to determine the respective locations, within the medical environment, of the persons and/or objects detected in the images, and to learn a spatial relationship of the persons or objects based on the determined locations. The processing device may assemble information from multiple images that may be captured by different sensing devices in order to determine the respective locations of a person and/or object. The processing device may accomplish this task by utilizing knowledge about the parameters of the sensing devices such as the relative positions of the sensing devices to each other and to the other people and/or objects in the medical environment. For example, the processing device may determine the depth (e.g., a Z coordinate) of a person or object in the medical environment based on two images captured by respective sensing devices, e.g., using a triangulation technique to determine the (X, Y, Z) coordinates of the person or object in the medical environment based on the camera parameters of the sensing devices.
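The triangulation step may be sketched for the simplest case of two rectified, horizontally displaced pinhole cameras, where depth follows from disparity as Z = f * B / (xL - xR); the focal length, baseline, and pixel coordinates below are illustrative values, not actual sensing-device parameters.

```python
# Sketch of recovering a 3D position from two camera views. For two
# horizontally displaced, rectified pinhole cameras with focal length f
# (pixels) and baseline B (meters), depth follows from disparity.

def triangulate(x_left, x_right, y, focal_length, baseline):
    """Return (X, Y, Z) in the left camera's frame from matched pixel
    coordinates in a rectified stereo pair."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    Z = focal_length * baseline / disparity  # depth
    X = x_left * Z / focal_length            # back-project the pixel
    Y = y * Z / focal_length
    return (X, Y, Z)

# A probe tip seen at x=40 px in the left image and x=20 px in the right,
# with f=800 px and a 0.1 m baseline between the two sensing devices.
X, Y, Z = triangulate(40.0, 20.0, 10.0, focal_length=800.0, baseline=0.1)
```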
At 510, the processing device may align the first ultrasound image with the 3D human model based, at least, on the position of the ultrasound probe relative to the patient's body. For example, the processing device may determine that the ultrasound probe is positioned over the patient's chest area and therefore the captured first ultrasound image may be aligned with the 3D human model so that it is located at the chest area of the 3D human model of the patient. At 512, the processing device may generate a visual representation (e.g., on a display device) that shows the alignment of the first ultrasound image and the 3D human model. The processing device may continuously perform the operations of 502-508, for example, as new ultrasound images are captured for the patient.
As described herein, the sensor 702 may include an RGB sensor, a depth sensor, an RGB plus depth (RGB-D) sensor, a thermal sensor such as an FIR or NIR sensor, a radar sensor, a motion sensor, a camera (e.g., a digital camera), and/or other types of image capturing circuitry configured to generate images (e.g., 2D images or photos) of a person, object, and/or scene in the FOV of the sensor. The images generated by the sensor 702 may include, for example, one or more photos, thermal images, and/or radar images of the person, object or scene. Each of the images may comprise a plurality of pixels that collectively represent a graphic view of the person, object or scene and that may be analyzed to extract features that are representative of one or more characteristics of the person, object or scene.
The sensor 702 may be communicatively coupled to the functional unit 704, for example, via a wired or wireless communication link. The sensor 702 may be configured to transmit images generated by the sensor to the functional unit 704 (e.g., via a push mechanism) or the functional unit 704 may be configured to retrieve images from the sensor 702 (e.g., via a pull mechanism). The transmission and/or retrieval may be performed on a periodic basis (e.g., based on a preconfigured schedule) or in response to receiving a control signal triggering the transmission or retrieval. The functional unit 704 may be configured to control the operation of the sensor 702. For example, the functional unit 704 may transmit a command to adjust the FOV of the sensor 702 (e.g., by manipulating a direction or orientation of the sensor 702). As another example, the functional unit 704 may transmit a command to change the resolution at which the sensor 702 takes images of a person, object or scene.
The sensor 702 and/or the functional unit 704 (e.g., one or more components of the functional unit 704) may be powered by the power supply 706, which may comprise an alternating current (AC) power source or a direct current (DC) power source (e.g., a battery power source). When a DC power source such as a battery power source is used, the power supply 706 may be rechargeable, for example, by receiving a charging current from an external source via a wired or wireless connection. For example, the charging current may be received by connecting the sensing device 700 to an AC outlet via a charging cable and/or a charging adaptor (including a USB adaptor). As another example, the charging current may be received wirelessly by placing the sensing device 700 into contact with a charging pad.
The functional unit 704 may comprise one or more of a communication interface circuit 708, a data processing device 710, a computation unit 712, a data rendering unit 714, a memory 716, or a programming and/or calibration application programming interface (API) 718. It should be noted that the components shown in
The functional unit 704 may be configured to receive or retrieve images from the sensor 702 via the communication interface circuit 708, which may include one or more wired and/or wireless network interface cards (NICs) such as Ethernet cards, WiFi adaptors, mobile broadband devices (e.g., 4G/LTE/5G cards or chipsets), etc. In examples, a respective NIC may be designated to communicate with a respective sensor. In examples, a same NIC may be designated to communicate with multiple sensors.
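Both NIC assignment schemes (one NIC per sensor, or one NIC shared by several sensors) can be captured by a simple mapping. The interface names ("eth0", "wlan0") and sensor identifiers below are assumptions made for illustration.

```python
# Sketch of mapping sensors to network interfaces: either a dedicated
# NIC per sensor or one NIC shared among multiple sensors.

def build_nic_map(assignments):
    """assignments: iterable of (sensor_id, nic_name) pairs."""
    nic_map = {}
    for sensor_id, nic in assignments:
        # Group all sensors that communicate over the same NIC.
        nic_map.setdefault(nic, []).append(sensor_id)
    return nic_map

# Dedicated: each sensor has its own NIC.
dedicated = build_nic_map([("cam-1", "eth0"), ("cam-2", "eth1")])
# Shared: one NIC serves multiple sensors.
shared = build_nic_map([("cam-1", "wlan0"), ("cam-2", "wlan0")])
```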
The images received or retrieved from the sensor 702 may be provided to the data processing device 710, which may be configured to analyze the images and carry out one or more of the operations described herein (e.g., including operations of the processing device 112 described herein). The functionality of the data processing device 710 may be facilitated by the computation unit 712, which may be configured to perform various computation-intensive tasks such as feature extraction and/or feature classification based on the images produced by the sensor 702. The computation unit 712 may be configured to implement one or more neural networks. The data rendering unit 714 may be configured to generate the one or more visual representations described herein including, e.g., a representation of a 3D human model for a patient, a 3D ultrasound model of an anatomical structure of the patient, etc.
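The extract-then-classify flow described above can be sketched as below. The feature extractor and classifier here are trivial placeholders standing in for the neural networks the computation unit may implement; all function names are illustrative assumptions.

```python
# Minimal sketch of the analysis pipeline: images are handed to a
# computation stage that extracts features and then classifies them.

def extract_features(image):
    # Placeholder feature extractor: summary statistics over pixel values
    # of a 2D image represented as a list of rows of intensities.
    pixels = [p for row in image for p in row]
    return {"mean": sum(pixels) / len(pixels), "max": max(pixels)}

def classify(features, threshold=128):
    # Placeholder classifier standing in for a trained neural network.
    return "bright" if features["mean"] >= threshold else "dark"

def process(images):
    # Analyze each image produced by the sensor, as the data processing
    # device would, yielding one label per image.
    return [classify(extract_features(img)) for img in images]
```

A real implementation would replace both placeholder stages with learned models, but the pipeline structure (receive images, extract features, classify) is the same.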
Each of the data processing device 710, the computation unit 712, or the data rendering unit 714 may comprise one or more processors such as a central processing unit (CPU), a graphics processing unit (GPU), a microcontroller, a reduced instruction set computer (RISC) processor, an application specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a physics processing unit (PPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or a combination thereof. The data processing device 710, computation unit 712, and/or data rendering unit 714 may also comprise other type(s) of circuits or processors capable of executing the functions described herein. Further, the data processing device 710, the computation unit 712, or the data rendering unit 714 may utilize the memory 716 to facilitate one or more of the operations described herein. For example, the memory 716 may include a machine-readable medium configured to store data and/or instructions that, when executed, cause the processing device 710, the computation unit 712, or the data rendering unit 714 to perform one or more of the functions described herein. Examples of a machine-readable medium may include volatile or non-volatile memory including but not limited to semiconductor memory (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)), flash memory, and/or the like. And even though not shown in
The operation of the sensing device 700 may be configured and/or controlled through the programming/calibration API 718, for example, using a remote programming device such as the programming device 116 in
The sensing device 700 (e.g., the functional unit 704) may also be configured to receive ad hoc commands through the programming/calibration API 718. Such ad hoc commands may include, for example, a command to zoom in or zoom out a sensor, a command to reset the sensing device 700 (e.g., restart the device or reset one or more operating parameters of the device to default values), a command to enable or disable a specific functionality of the sensing device 700, etc. The sensing device 700 (e.g., the functional unit 704) may also be programmed and/or trained (e.g., over a network) via the programming/calibration API 718. For example, the sensing device 700 may receive training data and/or operating logics through the programming/calibration API 718 during and/or after an initial configuration process.
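The ad hoc commands enumerated above (zoom in/out, reset to defaults, enable or disable a functionality) lend themselves to a simple dispatch pattern. The sketch below is an assumption about how such dispatch might look; the command names mirror the examples in the text, while the parameter names and the `handle` method are invented for illustration.

```python
# Hedged sketch of dispatching ad hoc commands received through a
# programming/calibration API to a sensing device.

class SensingDevice:
    DEFAULTS = {"zoom": 1.0, "detection_enabled": True}

    def __init__(self):
        # Operating parameters start at their default values.
        self.params = dict(self.DEFAULTS)

    def handle(self, command, value=None):
        if command == "zoom_in":
            self.params["zoom"] *= 2
        elif command == "zoom_out":
            self.params["zoom"] /= 2
        elif command == "reset":
            # Reset all operating parameters to their default values.
            self.params = dict(self.DEFAULTS)
        elif command == "set_detection":
            # Enable or disable a specific functionality of the device.
            self.params["detection_enabled"] = bool(value)
        else:
            raise ValueError(f"unknown command: {command}")
```

Routing commands through a single entry point like this makes it straightforward for a remote programming device to drive the sensing device over a network link.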
It should be noted that the processing device 800 may operate as a standalone device or may be connected (e.g., networked or clustered) with other computation devices to perform the functions described herein. And even though only one instance of each component is shown in
While this disclosure has been described in terms of certain embodiments and generally associated methods, alterations and permutations of the embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure. In addition, unless specifically stated otherwise, discussions utilizing terms such as “analyzing,” “determining,” “enabling,” “identifying,” “modifying” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data represented as physical quantities within the computer system memories or other such information storage, transmission or display devices.
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.