The present invention relates to a method and apparatus for the collection and management of quantitative data on unusual aerial phenomena via a citizen network of personal devices.
Attempts have been made in the past to obtain sighting information from citizens regarding unusual aerial phenomena. These include organizations that provide contact numbers through which citizens can report their observations. Some organizations provide on-line forms to gather relevant information, as well as to help process the volume of incoming sighting reports. In some cases, personal device apps have been provided to assist in collecting citizen sighting information. Because of the nature and use of this sighting information, these data are considered anecdotal and not of scientific value. Some of these data may be part of deliberate hoaxes.
What is needed is a method and apparatus whereby a network of citizens using personal devices can collect data on unusual aerial phenomena of sufficient quality and integrity to be considered scientifically quantitative and useful.
The present invention relates to a method and apparatus to quantify unusual aerial phenomena via a network of citizen operated personal devices.
A feature of the present invention is an app that runs on popular personal devices. A further feature of the invention is a server system in communication with said app. A further feature of the invention is a means by which a user, having spotted a potential event, can quickly engage the app to begin a data collection mode. A further feature of the invention is a data collection mode that simultaneously records data including but not limited to video, audio, 9-axis IMU data, GPS coordinates, and time. A further feature of the invention is a method of tagging the collected data with a cryptographic signature to ensure integrity. A further feature of the invention is real-time app communication to a centralized server. A further feature of the invention is an alert to other users indicating that something of interest is happening nearby. A further feature of the invention is a means by which the server can distinguish interesting events from non-interesting events. A further feature of the invention is a means by which the server can analyze the data to obtain scientifically useful quantitative information.
These and other features, aspects and advantages of the present invention will become better understood with reference to the following drawings, description and claims.
The present invention provides a method and apparatus for collecting and managing scientifically usable quantitative data on unusual aerial phenomena using a centralized server 110 and a network of citizens in possession of personal devices 120. Personal device 120 contains sensors 280 comprising a precision clock 210, a GPS receiver 220, a video camera 230, a microphone 250, and a 9-axis IMU sensor 240. Sensors 280 of personal device 120 are in communication with host processor 260. Host processor 260 is in communication with internet connection 270 and local non-volatile storage 290. In the preferred embodiment 100, host processor 260 can access information from sensors 280 and/or control sensors 280. Further, host processor 260 can transmit information through internet connection 270 to server 110 and store information in non-volatile storage 290.
In the preferred embodiment, a user downloads app 600 of the present invention onto their personal device 120. App 600 gains permission to access sensors 280. App 600 remains in idle mode 410 except to keep track of bias offsets and pointing direction of 9-axis IMU 240, and/or to perform any other housekeeping analytics or control needed to enable the method of this invention. In the preferred embodiment, in idle mode 410, app 600 periodically sends its GPS position in an information heartbeat to server 110 for purposes described below. If a user observes an event of interest, the user can invoke a recording session 500 of the app. In this mode, host processor 260 records data from sensors 280 to local storage 290. In addition, host processor 260, under the direction of app 600, sends relatively slow, periodic, repeating alert signals 320 to server 110 through internet connection 270.
Alert signal 320 comprises a set of informative metrics regarding recording session 500, including but not limited to the current time, average pointing angle, average GPS coordinates, and device ID. Server 110 receives this information transmitted from personal device 120 under the direction of app 600. The location information is used to check whether other users in the network are nearby. In response to alert signal 320 from a personal device 120, and with general knowledge of the location of all personal devices 120, server 110 then notifies the users of other personal devices 120 nearby to the position indicated in said alert signal 320 that a possible unusual aerial phenomenon may have been spotted. The pointing angle and GPS location of personal device 120, as indicated in alert signal 320 sent to server 110 by direction of app 600, are interpreted by algorithms in server 110 and used to inform other users where they might find said object in the sky. The focus of recording session 500 is known to server 110 through alert signal 320 by virtue of the known GPS position and pointing angle of personal device 120 as provided by GPS receiver 220 and 9-axis IMU 240. These data are interpreted in coordinate system 700: origin 710 corresponds to the GPS location, and the pointing angle corresponds to vector 740 extending from origin 710. This implies a pyramidal volume 720 extending to infinity within which the object of interest is expected to be contained.
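The pointing geometry above can be sketched as follows. This is a minimal illustration assuming alert signal 320 carries the average pointing angle as azimuth/elevation in degrees; the payload field names in the example are hypothetical and not taken from the specification.

```python
import math

def pointing_vector(azimuth_deg, elevation_deg):
    """Convert an average pointing angle (azimuth/elevation, degrees)
    into a unit direction vector in a local East-North-Up frame
    anchored at the device's GPS-derived origin (origin 710)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    east = math.cos(el) * math.sin(az)
    north = math.cos(el) * math.cos(az)
    up = math.sin(el)
    return (east, north, up)

# Hypothetical alert payload (illustrative field names only):
alert_320 = {
    "device_id": "abc123",
    "time_utc": 1680300000.0,
    "avg_lat": 40.0, "avg_lon": -105.0,
    "avg_azimuth_deg": 90.0,     # due east
    "avg_elevation_deg": 45.0,   # halfway up the sky
}
```

The returned vector corresponds to vector 740; the camera's field of view around it defines pyramidal volume 720.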
In the preferred embodiment, if other users in the vicinity send alert signals 320, the additional pointing angles obtained from their alert signals 320 are used by algorithms in server 110 to further refine the object location to better inform additional other users where to look. This occurs by virtue of the volume defined by the intersection of pyramidal volume 720 of each personal device 120 involved in recording the same object.
When the user stops recording session 500, an information packet 310 is sent by the method of this invention to server 110. Information packet 310 comprises information summarizing the data recorded during recording session 500: a start time, stop time, average pointing angle, average GPS coordinates, device ID, recording session 500 ID, and a cryptographic signature 800. Cryptographic signature 800 at minimum contains a hash of all the data recorded from sensors 280. In the preferred embodiment, cryptographic signature 800 is the hash of a recorded data file 810 on storage 290 comprising all the raw data recorded during recording session 500, including all information required by the method of this invention. Thus, if any of the information of recording session 500 as stored in recorded data file 810 on storage 290 were to be altered, the cryptographic signature 800 hash would not match.
In the preferred embodiment, the cryptographic signature 800 is a hash of all the data recorded and stored in storage 290 as a single recorded data file 810 during recording session 500. In an alternative embodiment, the cryptographic signature 800 can be a cryptocurrency transaction or NFT (non-fungible token).
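As one concrete illustration of cryptographic signature 800, a file digest over recorded data file 810 can serve as the hash. SHA-256 is an assumed choice here; the method requires only that the hash cover all recorded sensor data.

```python
import hashlib

def signature_800(path: str) -> str:
    """Compute a digest of recorded data file 810 in streaming chunks,
    so arbitrarily large recordings can be hashed without loading the
    whole file into memory. Any alteration of the recorded bytes
    changes the digest, which is what lets the server-side and
    user-side copies of the digest jointly detect tampering.
    (SHA-256 is an illustrative algorithm choice.)"""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

Because the digest is sent to server 110 in information packet 310 while the raw file stays on storage 290, neither side can alter the recording afterward without a mismatch.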
In the preferred embodiment, a record of information packet 310 is kept in storage 290. This serves as a means for users to confirm the records held by server 110. In one embodiment, an email containing the cryptographic signature 800 and other contents of information packet 310 is sent to the email account of the user associated with the personal device 120 responsible for recording the associated recording session 500. In this way, the data of recording session 500 stored in recorded data file 810 is protected against tampering from either the user side or the server side.
In the preferred embodiment, after recording session 500, the user is presented with an event form 330 regarding recording session 500. Event form 330 can be filled out only partly, but requires a minimum set of information necessary to aid the server in vetting recording session 500. The user can access event form 330 of recording session 500 later to fill in the remainder of the form and better describe what the user witnessed.
In the preferred embodiment, subsequent to a recording session 500, an object detection neural net 900 is used by the local app 600 to highlight detected objects. The user is prompted to assist app 600 in annotating the object. This helps in the case that other objects are also present in the image but are known by the user to be not of interest.
In the event that a recording session 500 is invoked during a period in which personal device 120 is not in communication with the internet, the integrity of recorded data file 810 does not benefit from strict two-sided attack protection. However, the method of this invention significantly increases the difficulty of a hoaxing attack involving synthetic data mimicking what is expected of the ensemble of sensors 280.
In the event that multiple recording sessions 500 are captured for a sighting of an anomalous object, by multiple devices 120 in communication with server 110, it is assumed that the authenticity of the event is virtually guaranteed, as the creation of real-time synthetic data required to hoax the event would be insurmountably difficult by any standard.
In some cases, the object detector 900 may not be able to find the object of interest in the images of recording session 500. In that case, the user is prompted to add in a bounding box to annotate the location of the object in the images. Once the annotations of the video are acceptable to the user, the metadata describing the annotations are sent to the server.
In an alternative embodiment, in the case of multiple recording sessions 500 having captured the same object from different directions, Neural Radiance Fields (NeRF) or other techniques known to those skilled in the art may be employed to faithfully reconstruct animated views, and/or 3D models of the object.
User calibration can be performed before, during, or after a sighting by pointing the camera at the sun, the moon, or a known constellation.
Server 110 may comprise a collection of servers located together or located apart and acting together in the manner described in the method of this invention.
Server 110 must vet the data coming in from potentially millions of personal devices 120 of the network of users. There are likely to be many sightings of usual objects such as airplanes or stars. There will also be reports associated with errors and tests and jokes and attempted hoaxes. A feature of the present invention is the ability to automatically vet recording sessions 500 to separate usual objects from the unusual objects that are the subject of this invention.
Recall that all data from recording sessions 500 is stored locally in storage 290 of the user's personal device 120. Server 110 receives recording session 500 metadata from alert signals 320 and information packets 310. The data of recording session 500 may stay stored in recorded data file 810 only in storage 290 on personal device 120 for a long period of time (days, weeks, months, years) without ever being uploaded to server 110. Only when a recording session 500 is deemed significant is all the data from recording session 500 as stored in recorded data file 810 uploaded from storage 290 of personal device 120 to server 110. In the preferred embodiment, the user is not required to approve, nor be made aware of, the upload of recorded data file 810 of their recording session 500.
In an alternative embodiment, recorded data file 810 may be uploaded to server 110 and/or deleted from personal device 120 storage 290 after a predetermined amount of time in order to better manage storage 290 on personal device 120. In this embodiment it is somewhat more likely that important information could be lost by deletion before a determination of its importance is made.
In the preferred embodiment, information packets 310 for all recording sessions 500 are saved at server 110 in a database. The amount of data per information packet 310 is less than 100 bytes, so even one billion information packets 310 could easily be stored. Further, they are each time-tagged, so that they can be stored by date and archived. In the preferred embodiment, data are kept indefinitely to allow for retrieval at some time in the future. In an alternative embodiment, data may be kept only a maximum time before being deleted in order to better manage storage space on server 110.
At the first level, the required information of event form 330 is used to determine if recording session 500 is of significance. The lack of a submitted event form 330 is the first level of disqualification. Event form 330 collects other information from questions targeted directly at the nature of the event to further disqualify recording session 500. An example question may be, “Was this event a mistake?” Or, “Was this a sighting of something of interest?”
In the preferred embodiment, recording sessions 500 are disqualified unless multiple users record the same event. The average GPS location and average pointing angle metadata available from alert signal 320 and information packet 310 are used by the server to determine whether multiple users could have recorded the same object. If no such multiple sighting exists for a recording session 500, it is disqualified for further study until and unless further information is revealed in the future that warrants further investigation. At that time, the data of recording session 500 as stored in recorded data file 810 can be uploaded from the user's personal device 120 to server 110.
In the preferred embodiment, the raw data of 9-axis IMU 240 are recorded in real time during recording session 500 into recorded data file 810 on storage 290. These raw data are in integer counts of the IMU signals and contain unwanted biases that are normally removed during processing. In the preferred embodiment, these biases can be removed by a two-pass or Wiener algorithm known to those skilled in the art. In the two-pass algorithm, the IMU data are processed by well-known means, such as the method of Madgwick. When the full data stream has been processed, the final biases can be used to initialize the biases for a second pass. Further, GPS position can be used to place an upper bound on accelerations. These approaches amount to estimation by the technique of Wiener rather than Kalman, and can be used to estimate absolute position and relative orientation of the camera with relatively high accuracy. The techniques by which this is accomplished are known to those skilled in the art.
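The two-pass bias-removal idea can be illustrated in simplified form. The sketch below is not the full Madgwick/Wiener pipeline; it assumes a single gyro axis from a device held still, so the constant bias can stand in for the converged bias state of pass one, which then debiases the entire stream on pass two.

```python
import numpy as np

def two_pass_debias(gyro_counts, scale):
    """Simplified two-pass bias removal for one stationary gyro axis.
    Pass 1: process the whole raw stream to obtain a converged bias
    estimate (here simply the mean of the integer counts, standing in
    for the bias state of a full orientation filter).
    Pass 2: reprocess the same raw counts with that bias subtracted,
    so even the earliest samples benefit from the final estimate."""
    raw = np.asarray(gyro_counts, dtype=float)
    bias = raw.mean()                 # pass 1: bias in counts
    rate = (raw - bias) * scale       # pass 2: debiased physical rate
    return bias, rate
```

In the actual method, the second pass would feed the debiased counts back through the orientation filter rather than a simple scale factor.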
By this method, a high-quality estimate of camera absolute position and relative pointing angle can be determined. With this information, the vector 750 that describes the direction pointed to by any pixel in the recording can be determined, subject to knowledge of a focal plane parameter 740f. With the computed rotation matrix R, the vector defined by a given sx, sy pixel coordinate is

v = R · [sx, sy, f]^T

where f is the equivalent focal length to the image plane 730. In the case of a fixed lens, the focal plane parameter 740f is simply a parameter of the model of personal device 120. In the case of a variable zoom lens, the focal plane parameter 740f can be determined from personal device 120 camera model 820 together with the camera metadata that describes the camera zoom factor. A lookup table or curve of camera model 820 is used to relate the camera zoom factor in the metadata stored in recorded data file 810 to the focal plane parameter 740f.
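The per-pixel direction computation can be sketched as follows, assuming sx and sy are measured from the principal point in the same units as the focal plane parameter 740f, and R rotates camera coordinates into the world frame.

```python
import numpy as np

def pixel_ray(R, sx, sy, f):
    """Unit direction vector 750 for pixel (sx, sy).
    R is the 3x3 camera-to-world rotation matrix from the IMU
    solution; f is the equivalent focal length to image plane 730
    (focal plane parameter 740f). The camera-frame ray [sx, sy, f]
    is rotated into the world frame and normalized."""
    v_cam = np.array([sx, sy, f], dtype=float)
    v_world = R @ v_cam
    return v_world / np.linalg.norm(v_world)
```

With R as the identity, the center pixel points straight down the camera boresight, and off-center pixels tilt by atan(s/f), as expected from the pinhole model.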
A vector pointing in the direction of an object is determined based on the xy coordinates of said object in camera 230 image and the pointing angle of camera 230 of personal device 120. If two or more recording files 810 from two or more personal devices 120 exist regarding the same event, and if the two or more said personal devices are separated in location sufficiently to form an adequate baseline, triangulation can be used to determine the position of said object.
In general a plurality of vectors describing the pointing angle to an object will violate epipolar constraints due to inaccuracies. For this reason the preferred embodiment computes the location of the object using least-squares triangulation wherein a position is found such that the sum of the pointing errors of all contributing vectors is minimized so as to best estimate the actual location of the object of interest.
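The least-squares triangulation described above has a simple closed form: each ray with origin o and unit direction d contributes the projector (I - d d^T) onto the plane perpendicular to the ray, and stacking these into normal equations yields the point minimizing the summed squared perpendicular distances. A sketch, assuming GPS-derived ray origins and direction vectors as computed above:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares triangulation of several pointing rays.
    Minimizes sum_i || (I - d_i d_i^T)(p - o_i) ||^2 over p by
    solving the 3x3 normal equations A p = b, where each ray adds
    M_i = I - d_i d_i^T to A and M_i o_i to b."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)     # tolerate unnormalized input
        M = np.eye(3) - np.outer(d, d)
        A += M
        b += M @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)
```

The residual of the solve is exactly the epipolar-violation metric discussed below: rays that truly intersect give zero residual, and larger residuals translate into larger position error bars.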
The degree to which the epipolar constraint is violated serves as a metric on the accuracy of the camera 230 pointing computation. Thus, error bars can be determined by those skilled in the art as is necessary in the pursuit of scientific quality data as per the intent of the present invention.
With 9-axis IMU 240 computations the relative orientation estimation is highly accurate up to a fixed rotation dependent upon magnetic compass biases, disturbances, and inaccuracies. Thus, if the data from two recording sessions 500 from two users at two different locations are used to triangulate position, we can expect errors. In the preferred embodiment, a more accurate absolute pointing angle can be determined by correlating landmarks in the image with landmarks at the site where the recording session 500 was recorded. This requires a visit to the site, and is reserved only for cases that, if confirmed, would justify the effort. In some cases, this refinement of absolute pointing angle could be performed without visiting the site provided distant landmarks can be seen prominently.
A common problem with modern personal device 120 cameras 230 is that they are not designed to focus on tiny objects against a uniform background, such as would often be encountered in the hunt for unusual aerial phenomena. In the preferred embodiment, the focus is set to infinity when recording session 500 is instantiated. This is because the object being observed is expected to be more than 100 meters from the camera, for which infinity focus is effectively ideal.
In an alternative embodiment, the app includes specialized algorithms to detect the object within the frame and measure its response to focus commands. In that way, focus can be adjusted in order to minimize blur and/or clarify the object of interest.
In either focus adjustment embodiment of the current invention, buttons displayed prominently on the screen allow the user to exit the recording session 500 default focus mode and either choose a manual focus mode or allow personal device 120 to take control of focus adjustments.
The embodiments of the invention described herein are exemplary and numerous modifications, variations and rearrangements can be readily envisioned to achieve substantially equivalent results, all of which are intended to be embraced within the spirit and scope of the invention as defined in the appended claims.
The present application is a utility patent. This application claims the benefit of U.S. Provisional Application Ser. No. 63/326,014, filed on Mar. 31, 2022 which is incorporated herein by reference.