The present invention relates to the assisted exploration of computer generated virtual environments, and in particular to managing the difficulties associated with densely populated environments.
Volumetric datasets are found in many fields, such as engineering, material sciences, medical imaging and astrophysics. The exploration of volumetric datasets is not trivial: it often requires extensive knowledge and is usually heavily impacted by the specific needs of users. In most airports, for example, security agents deal with such data exploration in the context of baggage inspections. X-ray and tomography are two commonly used fluoroscopic scanning systems. X-ray systems provide a flattened 2D luggage scan, while tomography systems produce transversal scans, also called slices. Thanks to data processing techniques such as the Radon transform, these systems can produce a full 3D scan, comprising a set of voxels with corresponding density data. Since the resulting X-ray scanned image only contains voxel or pixel densities, it cannot display the original material colours. The standard colour visual mapping uses three different colours (orange, green and blue) to display the data density. Orange corresponds to low density (mainly organic items); conversely, blue is used for high density values (i.e. metal). In the case of X-ray systems, green corresponds to the superposition of different kinds of materials or to average density materials.
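Purely by way of illustration, such a density-to-colour mapping might be sketched as follows; the numeric band boundaries are assumptions chosen for the sketch, since actual scanners calibrate these differently:

```python
# Illustrative sketch of the orange/green/blue density mapping described
# above. The band boundaries (LOW_MAX, HIGH_MIN) are hypothetical values,
# not taken from any real scanner specification.

LOW_MAX = 0.35   # assumed upper bound for "organic" (low) densities
HIGH_MIN = 0.75  # assumed lower bound for "metallic" (high) densities

def density_to_colour(density: float) -> str:
    """Map a normalised voxel density in [0, 1] to a display colour."""
    if density < LOW_MAX:
        return "orange"  # low density: mainly organic items
    if density < HIGH_MIN:
        return "green"   # average density or superposed materials
    return "blue"        # high density: metal
```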
Superposition:
A threat (e.g. a prohibited object such as a knife or cutter) may be sheltered behind dense materials. Sometimes it is possible to see through this blind shield using functionalities such as high penetration (enhanced X-ray power) or image processing (contrast improvement). As shown in
Location:
Depending on its location inside the luggage, a threat can be difficult to detect. Objects located in the corners, along the edges or inside the luggage's frame are very difficult to identify. As shown in
Dissociation:
Another way to dissimulate a threat is to separate its parts and spread them through the luggage (weapons or explosives are composed of many separate items, such as the trigger and the barrel). This dissociation can be combined with other dissimulation techniques. As shown in
Lure:
An ill-intentioned individual may use a lure to hide the real threat. For instance, a minor threat such as a small pair of scissors may be clearly visible and catch the security agent's attention while a more important threat remains hidden. As shown in
Volumetric data exploration with direct volume rendering techniques is of great help in visually extracting relevant structures in many fields of science: medical imaging, astrophysics and, more recently, luggage security. To support this knowledge extraction, many techniques have been developed. A number of existing basic technologies are known in this field, including volume visualization, transfer functions, direct voxel manipulation and focus-plus-context interaction.
In particular, volume visualization can be performed with a geometric rendering system which transforms the data into a set of polygons representing an iso-surface. The contour tree algorithm and alternatives such as branch decomposition are usually used to find these iso-surfaces. Contour tree algorithms may be vulnerable to noise, which can be problematic in luggage inspections since dense materials such as steel cause noise by reflecting the X-rays.
In order to investigate a volumetric dataset, one can use a Transfer Function (TF). In practice, this maps each voxel density to a specific colour, including its transparency. Transfer functions can be one-, two- or n-dimensional and are of great help in isolating structures of interest in volumetric data. Thanks to the colour blending process, a suitable transfer function can also reveal iso-surfaces or hide densities to improve the volumetric data visualization.
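By way of illustration only, a one-dimensional transfer function of this kind might be sketched as a piecewise-linear lookup from density to an RGBA value, where the alpha channel encodes transparency; the control points below are hypothetical:

```python
# Minimal sketch of a 1D transfer function: piecewise-linear interpolation
# from normalised voxel density to an RGBA colour (alpha = transparency).
# The control points are hypothetical, chosen only to illustrate the idea.
from bisect import bisect_right

# (density, (R, G, B, A)) control points, sorted by density.
CONTROL_POINTS = [
    (0.0, (1.0, 0.5, 0.0, 0.0)),  # low density: transparent orange
    (0.4, (0.0, 1.0, 0.0, 0.3)),  # mixed/average density: green
    (1.0, (0.0, 0.0, 1.0, 0.9)),  # high density: nearly opaque blue
]

def transfer_function(density: float) -> tuple:
    """Return the interpolated RGBA value for a normalised density."""
    keys = [d for d, _ in CONTROL_POINTS]
    i = bisect_right(keys, density)
    if i == 0:
        return CONTROL_POINTS[0][1]
    if i == len(CONTROL_POINTS):
        return CONTROL_POINTS[-1][1]
    (d0, c0), (d1, c1) = CONTROL_POINTS[i - 1], CONTROL_POINTS[i]
    t = (density - d0) / (d1 - d0)
    return tuple(a + t * (b - a) for a, b in zip(c0, c1))
```

In a real renderer the resulting RGBA values would then be composited along each viewing ray by the colour blending process mentioned above.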
A specific difficulty that arises in an environment such as that described with respect to
In accordance with a first aspect there is provided a method of displaying objects having a predefined spatial relationship in a three dimensional computer generated environment, said objects each being associated with a respective metadata value defining the respective visibility of said objects in representations thereof, said method comprising the steps of:
defining a virtual projector in said environment having a specified position, orientation and field of view therein,
determining a display threshold for each object within the field of view of the virtual projector on the basis of a display function, wherein the display function has an inverse relation to distance from said virtual projector, and wherein said display function further varies as a function of the angle defined by the orientation of the virtual projector and a line drawn from said virtual projector to each object respectively, and
displaying objects in said field of view excluding those not meeting their respective display threshold.
In accordance with a development of the first aspect, the metadata value represents the density of the respective object.
The selective omission of certain objects enables a user to better and more quickly understand the contents of the environment, which in turn may lead to reduced demands on system capacity.
In accordance with a further development of the first aspect, the objects are voxels.
In accordance with a further development of the first aspect, the objects are polygons.
In accordance with a further development of the first aspect, the objects are defined by intersecting surfaces.
The applicability of the mechanisms described to any three dimensional representation makes them compatible with any three dimensional environment, facilitating adoption with the lowest possible adaptation effort.
In accordance with a further development of the first aspect, the display function reflects an inverse square law with respect to distance from the virtual projector. By mirroring physical processes, the behaviour of the mechanism is more intuitive, further enabling a user to better and more quickly understand the contents of the environment, which in turn may lead to reduced demands on system capacity.
In accordance with a further development of the first aspect, the display function tends to a maximum as the angle defined by the axis and each object falls to zero. By imitating common tools, the behaviour of the mechanism is more intuitive, further enabling a user to better and more quickly understand the contents of the environment, which in turn may lead to reduced demands on system capacity.
In accordance with a further development of the first aspect, there are defined a plurality of candidate display functions, the method comprising the further step of selecting the candidate display function to be applied as the display function. Enabling the user to specify the display function, or automatically selecting an optimal function, makes it possible to apply different functions and to select whichever gives the most useful results, further enabling a user to better and more quickly understand the contents of the environment, which in turn may lead to reduced demands on system capacity.
In accordance with a further development of the first aspect, the display function comprises a scaling term, and the method comprises the further step of receiving a user input determining the value of the scaling term. Enabling the user to specify the scaling term makes it possible to apply different scaling terms and to select whichever gives the most useful results, further enabling a user to better and more quickly understand the contents of the environment, which in turn may lead to reduced demands on system capacity.
In accordance with a further development of the first aspect, there is defined a virtual camera having a specified position, orientation and field of view in the environment, wherein the position is the same as the position of the virtual projector, and the orientation and field of view of the virtual camera are such as to overlap with the field of view of the virtual projector, and wherein at the step of displaying, objects in the field of view of said virtual camera are displayed excluding those not meeting their respective display threshold.
In accordance with a further development of the first aspect, there is provided a further step of receiving a user input determining the orientation, position or field of view of either the virtual camera or the virtual projector. Separate control of the virtual camera and virtual projector opens up new possibilities for exploration of the environment, and inspection of elements of interest from different positions and perspectives, further enabling a user to better and more quickly understand the contents of the environment, which in turn may lead to reduced demands on system capacity.
In accordance with a further development of the first aspect, the position of the virtual camera and the position of said virtual projector, or the orientation of said virtual camera and the orientation of said virtual projector, or the field of view of said virtual camera and the field of view of said virtual projector are in a defined relationship such that a modification with respect to the virtual camera brings a corresponding modification with respect to the virtual projector.
In accordance with a second aspect, there is provided an apparatus adapted to implement the method of any preceding claim.
In accordance with a third aspect, there is provided an apparatus for managing the display of objects having a predefined spatial relationship in a three dimensional computer generated environment with respect to a virtual projector having a specified position, orientation and field of view in the environment, the objects each being associated with a respective metadata value defining the respective visibility of the objects in representations thereof,
wherein the apparatus is adapted to determine a display threshold for each object within the field of view of the virtual projector on the basis of a display function, wherein the display function has an inverse relation to distance from the virtual projector, and wherein the display function further varies as a function of the angle defined by the orientation of the virtual projector and a line drawn from said virtual projector to each said object respectively, and wherein the apparatus is further adapted to cause the display of objects in the field of view excluding those not meeting their respective display threshold.
In accordance with a fourth aspect, there is provided a computer program adapted to perform the steps of the first aspect.
The above and other advantages of the present invention will now be described with reference to the accompanying drawings, in which:
The three dimensional environment may be defined in any suitable terms, such as for example voxels, polygons (for example in polygon mesh structures), intersecting surfaces (for example NURBS surfaces or subdivision surfaces) or equation-based representations. By way of example, certain embodiments below will be described in terms of voxel based environments; however, the skilled person will appreciate that the described embodiments may be adapted to any of these other environments.
In such an environment, the objects are each associated with a respective metadata value, which can be used to define the respective visibility of said objects in representations thereof. This value may directly define the opacity of the object when displayed, or some other visibility value such as brightness or colour, or may reflect a physical characteristic of the real substance represented by the object. For example, where the objects represent components of physical artifacts such as described with respect to
As shown in
The method then proceeds to step 220 of determining a display threshold for each object within the field of view of said virtual projector on the basis of a display function, wherein the display function has an inverse relation to distance from the virtual projector, and wherein the display function further varies as a function of the angle defined by the orientation of the virtual projector and a line drawn from said virtual projector to each said object respectively.
The method then proceeds to step 230 at which objects in said field of view are displayed, excluding those not meeting the respective display threshold before terminating at step 240.
The definition of the display function will clearly have a marked influence on the final objects selected for display.
Thus the display function may tend to a maximum as the angle defined by the axis and each said object falls to zero.
It may be imagined that intermediate curves between the curves 410, 420 might exhibit successive intermediate variants of these curves. As shown, the curves 410, 420 resemble exponential functions, falling from maximum values at shorter distances from the virtual projector and approaching zero as the distance increases, so that in general the display function represented in
For example, the display function may reflect an inverse square law with respect to distance from the virtual projector.
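By way of illustration only, one display function consistent with the behaviours described above might take the following form, where T is the display threshold, d the distance from the virtual projector and θ the angle from its axis; the constant k and the exponent n are assumptions of the sketch, not features of the method:

```latex
% Sketch only: k and n are illustrative parameters.
T(d,\theta) \;=\; \frac{k\,\cos^{n}\theta}{d^{2}},
\qquad |\theta| \le \tfrac{\theta_{\mathrm{FOV}}}{2}.
```

This tends to a maximum as the angle falls to zero and falls away with the square of the distance, in line with the preceding paragraphs.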
It will be appreciated that although the display function has been described with respect to
As shown, the method starts at step 500 before proceeding to step 510, at which a virtual projector is defined in the environment, having a specified position, orientation and field of view therein. The method then proceeds to step 520, at which it is determined whether the field of view of the virtual projector contains any objects. If the field of view of the virtual projector contains no objects, the method terminates at step 580. Otherwise, one of the objects in the field of view is selected at step 520. The object may be selected on any basis; it may be the closest to or furthest from the virtual projector, for example.

The method then proceeds to step 530, at which the distance d of the selected object from the virtual projector and its angle θ with respect to the virtual projector are determined. The method then proceeds to step 540, at which the display threshold corresponding to the angle and distance values evaluated at step 530 is determined. At step 550 the selected object's metadata value is compared to the display threshold. In a case where the object's metadata value exceeds the display threshold, the method proceeds to step 551, at which the object is tagged for display; otherwise, in a case where the object's metadata value does not exceed the display threshold, the method proceeds to step 552, at which the object is tagged as excluded from display. In some cases it may be possible for the metadata value to equal the display threshold, in which case the method will classify the object for display, or not, as appropriate to the specific implementation.

After the method passes via step 551 or 552, it proceeds to step 560, at which it is determined whether any objects in the field of view of the virtual projector have yet to be assessed with respect to a display threshold. At this step, the set of objects in the field of view may be expanded, since objects now tagged as excluded from display can be ignored, which may expose new objects as candidates for display. In a case where one or more objects remain in the field of view of the virtual projector that have not yet been assessed, the method selects a new, presently untagged object at step 561 before reverting to step 530. Otherwise, if all objects in the field of view of the virtual projector have been assessed, the method proceeds to step 570, at which the objects in the field of view that have not been tagged as excluded from display are displayed to a user, before the method terminates at step 580.

It will be appreciated that the term "tagging" as used here does not imply any particular data structure or recording mechanism, merely that the status of particular objects is flagged, recorded or represented in some manner. In this sense, an object may be treated as being tagged in one way or another implicitly, for example by the absence of a tag having the alternative meaning. Where the method calls for presenting objects not tagged as excluded, it may equally call for presenting objects that have been tagged for display.
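A minimal sketch of the loop of steps 520 to 570 follows; the Obj structure and the particular display function used are illustrative assumptions, and for simplicity the sketch processes a fixed set of objects rather than re-expanding the candidate set as described at step 560:

```python
# Illustrative sketch of steps 520-570: tag each object in the projector's
# field of view for display or exclusion by comparing its metadata value
# (e.g. density) against a distance- and angle-dependent display threshold.
import math
from dataclasses import dataclass

@dataclass
class Obj:
    position: tuple        # (x, y, z) in the environment
    metadata: float        # e.g. density, governing visibility

def display_threshold(d: float, theta: float, k: float = 1.0) -> float:
    """Hypothetical display function: inverse-square in distance,
    maximal on the projector axis (theta = 0)."""
    return k * math.cos(theta) / (d * d)

def tag_objects(objs, proj_pos, proj_dir):
    """Return the objects tagged for display (steps 530-561);
    proj_dir is assumed to be a unit vector."""
    displayed = []
    for obj in objs:
        v = tuple(p - q for p, q in zip(obj.position, proj_pos))
        d = math.sqrt(sum(c * c for c in v)) or 1e-9
        # Angle between the projector axis and the line to the object.
        cos_t = sum(a * b for a, b in zip(v, proj_dir)) / d
        theta = math.acos(max(-1.0, min(1.0, cos_t)))
        # Step 550: compare the metadata value against the threshold.
        if obj.metadata > display_threshold(d, theta):
            displayed.append(obj)   # step 551: tagged for display
        # else: step 552, implicitly tagged as excluded from display
    return displayed
```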
On considering
While
It will be appreciated that while
For the sake of the present example, the display threshold at 0 degrees is taken to be 1/d², and at other angles the threshold is modified in accordance with
While
It will be appreciated that the display function may determine different thresholds in different planes. In
Still further, the display function need not be defined as a continuous function, but may instead be defined as a set of discrete thresholds.
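By way of illustration, such a discrete display function might be sketched as a simple lookup over distance bands; the band edges and threshold values are hypothetical:

```python
# Sketch of a display function defined as discrete thresholds rather than
# a continuous curve. Band edges and threshold values are hypothetical.
from bisect import bisect_right

DIST_EDGES = [1.0, 2.0, 4.0, 8.0]          # distance band boundaries
THRESHOLDS = [0.9, 0.5, 0.2, 0.05, 0.0]    # one threshold per band

def discrete_threshold(d: float) -> float:
    """Return the display threshold for the band containing distance d."""
    return THRESHOLDS[bisect_right(DIST_EDGES, d)]
```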
It will be appreciated that there may be defined a plurality of candidate display functions, which may implement any combination of the different variations proposed for the definition of the display function as set out above, or otherwise. For example, respective candidate display functions may implement different angular intensity distributions such as described with respect to
Accordingly, there may be provided a further method step of receiving a user input specifying the candidate display function to be applied as said display function, or any display function characteristic such as the scaling term, or any combination of display function characteristics.
Furthermore, there may be provided a further method step of automatically selecting the candidate display function to be applied as said display function, or any display function characteristic such as the scaling term, or any combination of display function characteristics, on the basis of suitable predefined criteria. For example, the method may attempt to identify coherent objects lying within the field of view of the virtual projector, and select a display function which tends to make any such objects either wholly visible or wholly obscured, and thus tends to minimise the display of partial objects.
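One possible scoring for such automatic selection, sketched under the assumption that the coherent objects have already been identified as groups of display objects, is to count the items a candidate display function leaves only partly visible and pick the candidate minimising that count:

```python
# Sketch of automatic display-function selection: an assumed scoring for
# the "minimise partial objects" criterion described above. A candidate
# is a function mapping a display object to its display threshold.

def count_partial_items(items, is_displayed):
    """items: list of lists of objects forming coherent items;
    is_displayed: predicate returning True if an object is shown."""
    partial = 0
    for item in items:
        shown = sum(1 for obj in item if is_displayed(obj))
        if 0 < shown < len(item):   # neither wholly visible nor hidden
            partial += 1
    return partial

def select_display_function(candidates, items):
    """Return the candidate leaving the fewest partially visible items."""
    return min(candidates, key=lambda f: count_partial_items(
        items, lambda obj: obj.metadata > f(obj)))
```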
Accordingly, there may be provided a further method step of selecting the candidate display function to be applied as said display function.
Generally speaking, the objects selected for display in a three dimensional environment are determined by the position, orientation and field of view of a virtual camera. In the foregoing description, for the sake of simplicity, it is assumed that the virtual camera settings will not affect the implementation of the virtual projector effect, for example where the position of the virtual camera is the same as the position of the virtual projector, the orientation of the virtual camera is aligned with that of the virtual projector, and the field of view of the virtual camera is broader than or equal to that of the virtual projector. As such, the position of the virtual camera and the position of the virtual projector, or the orientation of said virtual camera and the orientation of said virtual projector, or the field of view of said virtual camera and the field of view of said virtual projector may be in a defined relationship such that a modification with respect to the virtual camera brings a corresponding modification with respect to the virtual projector.
Thus the virtual projector and virtual camera may be locked together, so that the centre of the representation displayed to the user is subject to the effect of the virtual projector, which scans across the environment as the user moves the virtual camera.
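By way of illustration, such locking might be represented by having the virtual camera and virtual projector share a single pose, so that any modification to one is necessarily a modification to the other; the structure and field names below are assumptions of the sketch:

```python
# Sketch of locking the virtual camera and virtual projector together:
# both share one Pose, so moving the camera moves the projector.
from dataclasses import dataclass, field

@dataclass
class Pose:
    position: tuple = (0.0, 0.0, 0.0)
    orientation: tuple = (0.0, 0.0, 1.0)   # unit view direction

@dataclass
class CameraProjectorRig:
    pose: Pose = field(default_factory=Pose)
    camera_fov: float = 60.0      # degrees; camera FOV >= projector FOV
    projector_fov: float = 30.0

    def move(self, new_position: tuple, new_orientation: tuple):
        """A modification to the camera brings the corresponding
        modification to the projector, since the pose is shared."""
        self.pose.position = new_position
        self.pose.orientation = new_orientation
```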
Nevertheless, many departures from this arrangement may be envisaged. Any, all, or none of the position, orientation and field of view of the virtual camera and virtual projector may be locked together. In some cases, the two may be entirely independent. Where the degree of independence between the virtual projector and virtual camera is such that the field of view of the projector affects objects that are outside the field of view of the virtual camera, there may be provided an additional step, for example prior to the step 520 of
Accordingly, there may furthermore be provided an additional step of receiving a user input determining the orientation, position or field of view of either said virtual camera or said virtual projector, or any combination of these.
It will also be appreciated that there may be defined a plurality of virtual projectors as described above, with various position, orientation and field of view settings, in the same environment.
Accordingly, to better explore a virtual 3D computer generated environment, objects, which may be voxels, polygons or any other construct, are selectively not displayed so as to better reveal underlying objects. The objects are each associated with a metadata value which contributes to determining their visibility, such as a density or opacity value. The manner of selection is somewhat analogous to the projection of a beam of light towards the objects from a virtual projector: a display threshold is determined for each object within the field of view of said virtual projector on the basis of a display function having an inverse relation to distance from the virtual projector and further varying as a function of the angle defined by the orientation of the virtual projector and a line drawn from said virtual projector to each said object respectively. On this basis, objects having a smaller angular separation from the axis of the virtual projector, and closer to the projector, will be preferentially excluded from display.
The disclosed methods can take the form of an entirely hardware embodiment (e.g. FPGA), an entirely software embodiment (for example to control a system according to the invention) or an embodiment containing both hardware and software elements. As such, embodiments may comprise a number of subsystems, functional elements or means adapted to implement the invention in communication with each other, and/or with standard fixed function or programmable elements, for example as described below.
On this basis, there is provided an apparatus for managing the display of objects having a predefined spatial relationship in a three dimensional computer generated environment with respect to a virtual projector having a specified position, orientation and field of view in said environment, the objects each being associated with a respective metadata value defining the respective visibility of said objects in representations thereof. The apparatus is adapted to determine a display threshold for each object within the field of view of the virtual projector on the basis of a display function, wherein said display function has an inverse relation to distance from said virtual projector, and wherein said display function further varies as a function of the angle defined by the orientation of the virtual projector and a line drawn from said virtual projector to each said object respectively, and wherein said apparatus is further adapted to cause the display of objects in said field of view excluding those not meeting their respective display threshold.
Similarly, there is provided an apparatus adapted to perform the steps of any of the methods described above, for example with respect to
Software embodiments include but are not limited to applications, firmware, resident software, microcode, etc. The invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or an instruction execution system.
A computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
In some embodiments, the methods and processes described herein may be implemented in whole or part by a user device. These methods and processes may be implemented by computer-application programs or services, an application-programming interface (API), a library, and/or other computer-program product, or any combination of such entities.
The user device may be a mobile device such as a smart phone or tablet, a drone, a computer or any other device with processing capability, such as a robot or other connected device.
As shown in
Logic device 1001 includes one or more physical devices configured to execute instructions. For example, the logic device 1001 may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
The logic device 1001 may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic device may include one or more hardware or firmware logic devices configured to execute hardware or firmware instructions. Processors of the logic device may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic device 1001 optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic device 1001 may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
Storage device 1002 includes one or more physical devices configured to hold instructions executable by the logic device to implement the methods and processes described herein. When such methods and processes are implemented, the state of the storage device 1002 may be transformed, for example to hold different data.
Storage device 1002 may include removable and/or built-in devices. Storage may be local or remote (in a cloud, for instance). Storage device 1002 may comprise one or more types of storage device including optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., FLASH, RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Storage device 1002 may include volatile, non-volatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
In certain arrangements, the system may comprise an interface 1003 adapted to support communications between the Logic device 1001 and further system components. For example, additional system components may comprise removable and/or built-in extended storage devices. Extended storage devices may comprise one or more types of storage device including optical memory 1032 (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (not shown) (e.g., RAM, EPROM, EEPROM, FLASH etc.), and/or magnetic memory 1031 (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Such extended storage device may include volatile, non-volatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that storage device includes one or more physical devices, and excludes propagating signals per se. However, aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.), as opposed to being stored on a storage device.
Aspects of logic device 1001 and storage device 1002 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
The term "program" may be used to describe an aspect of the computing system implemented to perform a particular function. In some cases, a program may be instantiated via the logic device 1001 executing machine-readable instructions held by the storage device 1002. It will be understood that different modules may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same program may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The term "program" may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
In particular, the system of
For example a program implementing the steps described with respect to
In some cases, the computing system may comprise or be in communication with a scanner 1080 or other three dimensional imaging system as described above. This communication may be achieved by wired or wireless network, serial bus, Firewire, Thunderbolt, SCSI or any other communications means as desired. In such cases, a program for the control of the scanner 1080 and/or the retrieval of data therefrom may run concurrently on the logic device 1001, or these features may be implemented in the same program as implementing the steps described with respect to
Accordingly the invention may be embodied in the form of a computer program.
Furthermore, when suitably configured and connected, the elements of
It will be appreciated that a “service”, as used herein, is an application program executable across multiple user sessions. A service may be available to one or more system components, programs, and/or other services. In some implementations, a service may run on one or more server-computing devices.
When included, display subsystem 1011 may be used to present a visual representation of data held by a storage device. This visual representation may take the form of a graphical user interface (GUI). As the herein described methods and processes change the data held by the storage device 1002, and thus transform the state of the storage device 1002, the state of the display subsystem 1011 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 1011 may include one or more display devices utilizing virtually any type of technology, for example as discussed above. Such display devices may be combined with the logic device and/or storage device in a shared enclosure, or such display devices may be peripheral display devices.
When included, input subsystem may comprise or interface with one or more user-input devices such as a keyboard 1012, mouse 1013, touch screen 1011, or game controller (not shown). In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, colour, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.
When included, communication subsystem 1020 may be configured to communicatively couple the computing system with one or more other computing devices. For example, the communication subsystem may communicatively couple the computing device to a remote service hosted, for example, on a remote server 1076 via a network of any size, including for example a personal area network, local area network, wide area network, or the Internet. The communication subsystem may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network 1074, or a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow the computing system to send and/or receive messages to and/or from other devices via a network such as the Internet 1075. The communications subsystem may additionally support short range inductive communications with passive or active devices (NFC, RFID, etc.).
The system of
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.