The present invention relates to the assisted exploration of computer generated virtual environments, and in particular the identification of discrete objects in such environments.
Volumetric datasets are found in many fields, such as engineering, material sciences, medical imaging and astrophysics. The exploration of volumetric datasets is not trivial and is heavily influenced by the specific needs of users. In most airports, for example, security agents deal with such data exploration in the context of baggage inspection. X-ray and tomography are two commonly used fluoroscopic scanning systems. X-ray systems provide a flattened 2D luggage scan, while tomography systems produce transversal scans, also called slices. Thanks to data processing techniques such as the inverse Radon transform, these systems can produce a full 3D scan, comprising a set of voxels with corresponding density data. Since the resulting scanned image contains only voxel or pixel densities, it cannot display the original material colours. The standard colour visual mapping uses three different colours (orange, green and blue) to display the data density. Orange corresponds to low density (mainly organic items). Conversely, blue is used for high density values (i.e. metal). In the case of X-ray systems, green corresponds to the superposition of different kinds of materials or to average-density materials.
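By way of illustration only, the following minimal sketch shows how such a three-colour density mapping might be expressed; the threshold values are assumptions made for the sketch, not values taken from any scanning standard:

```python
LOW_DENSITY_MAX = 0.35   # hypothetical normalised bound for organic items
HIGH_DENSITY_MIN = 0.65  # hypothetical bound above which material is metal

def density_to_colour(density: float) -> tuple[int, int, int]:
    """Map a normalised voxel density in [0, 1] to an RGB display colour."""
    if density < LOW_DENSITY_MAX:
        return (255, 128, 0)   # orange: low density (mainly organic items)
    if density > HIGH_DENSITY_MIN:
        return (0, 0, 255)     # blue: high density (i.e. metal)
    return (0, 255, 0)         # green: superposed or average-density materials
```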
A threat (e.g. a prohibited object such as a knife or cutter) may be sheltered behind dense materials. Sometimes it is possible to see through this shielding using functionalities such as high penetration (enhanced X-ray power) or image processing (contrast improvement). As shown in
Depending on its location inside the luggage, a threat can be difficult to detect. Objects located in the corners, along the edges or inside the luggage's frame are very difficult to identify. As shown in
Another way to conceal a threat is to separate its parts and spread them through the luggage (weapons and explosives are composed of many separate components, such as the trigger and the barrel). This dissociation can be combined with other concealment techniques. As shown in
An ill-intentioned individual may use a lure to hide the real threat. For instance, a minor threat such as a small pair of scissors may be clearly visible and catch the security agent's attention, while a more serious threat remains hidden. As shown in
Volumetric data exploration with direct volume rendering techniques is of great help in visually extracting relevant structures in many fields of science: medical imaging, astrophysics and, more recently, luggage security. To support this knowledge extraction, many techniques have been developed. A number of basic technologies are known in this field, including volume visualization, transfer functions, direct voxel manipulation and focus-plus-context interaction.
In particular, volume visualization can be performed with a geometric rendering system, which transforms the data into a set of polygons representing an iso-surface. The contour tree algorithm and alternatives such as branch decomposition are usually used to find these iso-surfaces. Contour tree algorithms may be vulnerable to noise, which can be problematic in luggage inspection since dense materials such as steel cause noise by reflecting the X-rays.
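By way of illustration, one widely used geometric approach of this kind is the marching cubes family of algorithms. The sketch below uses the scikit-image implementation; the library choice, file name and iso-level are assumptions made for the sketch rather than details given above:

```python
import numpy as np
from skimage import measure

volume = np.load("scan.npy")   # hypothetical 3D array of voxel densities
iso_level = 0.5                # hypothetical density defining the iso-surface

# Extract a polygon mesh approximating the iso-surface at `iso_level`.
verts, faces, normals, values = measure.marching_cubes(volume, level=iso_level)
```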
In order to investigate a volumetric dataset, one can use a Transfer Function (TF). In practice, this maps each voxel density to a specific colour (including its transparency). Transfer functions can be one-, two- or n-dimensional and are of great help in isolating structures of interest in volumetric data. Thanks to the colour blending process, a suitable transfer function can also reveal iso-surfaces or hide densities to improve the visualization of the volumetric data.
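Unlike a hard three-colour classification, a transfer function blends colour and transparency continuously. A minimal one dimensional sketch, with purely illustrative control points, might read:

```python
import numpy as np

# Control points of a hypothetical 1D transfer function mapping normalised
# density to RGBA (colour plus transparency); values are illustrative only.
control_densities = np.array([0.0, 0.3, 0.6, 1.0])
control_rgba = np.array([
    [0.0, 0.0, 0.0, 0.0],   # empty space: fully transparent
    [1.0, 0.5, 0.0, 0.3],   # low density (organic): translucent orange
    [0.0, 1.0, 0.0, 0.5],   # average density: green
    [0.0, 0.0, 1.0, 1.0],   # high density (metal): opaque blue
])

def transfer_function(density: np.ndarray) -> np.ndarray:
    """Interpolate RGBA values for densities normalised to [0, 1]."""
    return np.stack([np.interp(density, control_densities, control_rgba[:, c])
                     for c in range(4)], axis=-1)
```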
A specific difficulty that arises in an environment such as that described with respect to
In accordance with a first aspect there is provided a method for managing the display of multiple objects having a predefined spatial relationship in a three dimensional computer generated environment, comprising the steps of displaying a graphical representation of a first subset of the objects in their respective spatial relationships in a first display area, and displaying a graphical representation of all of the objects not belonging to said first subset in their respective spatial relationships in one or more further display areas. A selected object is removed from either the first display area or one of the further display areas, and added to the graphical representation in a destination display area, the destination display area being any display area other than the display area from which said selected object is removed. By presenting all objects in a display area, a user may more easily keep track of numerous objects and their spatial relationships, whilst uncluttering the view of a set of objects of particular significance. In this way, user errors are avoided and user efficiency improves, leading to lower system demands over time.
In a development of the first aspect, the method comprises the further steps of receiving a predefined user input, and responsive to the predefined user input displaying all objects in one said display area. By easily reversing the separation of objects, a user may more easily keep track of numerous objects and their spatial relationships, further reducing user errors and improving efficiency, leading to still lower system demands over time.
In a development of the first aspect, the method comprises the further steps of receiving a predefined user input, and responsive to said predefined user input displaying the objects in respective display areas as a function of a categorization of the objects, where objects having related categorizations are displayed in the same display area. By presenting all objects grouped on the basis of categorization, a user may more easily grasp relationships between objects.
In a development of the first aspect the categorization reflects any one of density, shape, volume, aspect ratio or weight.
In a development of the first aspect, the method comprises the further step of identifying the shape of each object by comparison to a library of object models, and wherein the categorization corresponds to the identified shape of each object. By categorizing objects on the basis of comparison to a library of models, relationships may be established with minimal foreknowledge of the content of the environment, thereby avoiding the system resource cost of generating, storing and managing such data.
In a development of the first aspect, the method comprises the further steps of receiving a predefined user input, and responsive to the predefined user input displaying the objects in respective display areas in such a way as to achieve the best possible view of all objects, given the current virtual camera position and orientation. By distributing and displaying the objects for optimal visibility, the amount of manipulation by the user is reduced, reducing demands on processing and memory resources.
In a development of the first aspect, the method comprises the further step, subsequent to the step of removing the selected object from the first display area and prior to the step of adding it to the destination display area, of representing the selected object in one or more intermediate positions between its starting position in one display area and its final position in said destination display area. The user's intuitive understanding of the location of different objects is thus further improved, further reducing user errors and improving user efficiency, leading to lower system demands over time.
In a development of the first aspect, the steps of removing the selected object from the first display area, representing said selected object in one or more intermediate positions between its starting position in one display area and its final position in the destination display area, and adding it to the destination display area are synchronised with a drag and drop type interface interaction. The user's intuitive understanding of the location of different objects is thus further improved, further reducing user errors and improving user efficiency, leading to lower system demands over time.
In a development of the first aspect, the objects in each said display area are displayed from the point of view of a first virtual camera position with respect to said three dimensional computer generated environment.
In a development of the first aspect, the objects in one or more display areas are displayed from the point of view of a first virtual camera position with respect to the three dimensional computer generated environment, and one or more further display areas are displayed from the point of view of a second virtual camera position with respect to the three dimensional computer generated environment. The possibility of viewing different groups of objects from different orientations can reduce the amount of manipulation by the user, reducing demands on processing and memory resources.
In a development of the first aspect, the method comprises the further step of identifying the objects as objects in said three dimensional computer generated environment prior to said steps of displaying. By identifying objects in the environment, relationships may be established with minimal foreknowledge of the content of the environment, thereby avoiding the system resource cost of generating, storing and managing such data.
In a development of the first aspect, the step of identifying objects comprises the steps of: selecting a first voxel having a scalar metadata value exceeding a predetermined threshold; assessing each voxel adjacent said first voxel, and selecting and tagging each adjacent voxel whose scalar metadata value exceeds said predetermined threshold; repeating said steps of assessing, selecting and tagging for each voxel adjacent a tagged voxel until no further voxels meet the criteria for assessment; then changing to a new first voxel and a new tag, and repeating said steps of selecting, assessing and changing until all qualifying voxels in said computer generated environment have been tagged, wherein each set of voxels with the same tag is taken to constitute an object.
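A minimal sketch of this tagging loop, assuming the environment is held as a NumPy array of scalar (e.g. density) values and adopting face adjacency and a breadth-first crawl (all assumptions made for illustration), might read:

```python
import numpy as np
from collections import deque

def tag_objects(volume: np.ndarray, threshold: float) -> np.ndarray:
    """Return an array of the same shape as `volume`: 0 for voxels at or
    below `threshold`, and a distinct positive tag for each connected set
    of voxels exceeding it (each such set constituting an object)."""
    tags = np.zeros(volume.shape, dtype=np.int32)
    # Face-adjacent (6-connected) neighbour offsets.
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    next_tag = 0
    for seed in zip(*np.nonzero(volume > threshold)):
        if tags[seed]:
            continue                    # already assigned to an earlier object
        next_tag += 1                   # change to a new first voxel and tag
        tags[seed] = next_tag
        queue = deque([seed])
        while queue:                    # assess each voxel adjacent a tagged one
            x, y, z = queue.popleft()
            for dx, dy, dz in offsets:
                n = (x + dx, y + dy, z + dz)
                if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                        and not tags[n] and volume[n] > threshold):
                    tags[n] = next_tag  # select and tag the adjacent voxel
                    queue.append(n)
    return tags
```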
In accordance with a second aspect there is provided an apparatus adapted to implement the first aspect.
In accordance with a third aspect there is provided an apparatus for managing display of multiple objects having a predefined spatial relationship in a three dimensional computer generated environment. This apparatus is adapted:
to cause the display of a graphical representation of a first subset of said objects in their respective spatial relationships in a first display area,
to cause the display of a graphical representation of all of said objects not belonging to said first subset in their respective spatial relationships in one or more further display areas,
to cause the removal of a selected said object from either said first display area or one of said further display areas, and to cause the addition of said selected object to the graphical representation in a destination display area, said destination display area being any said display area other than the display area from which said selected object is removed.
In a fourth aspect there is provided a computer program adapted to implement the steps of the first aspect.
The above and other advantages of the present invention will now be described with reference to the accompanying drawings, in which:
Accordingly, it will often be necessary to adopt a sub-optimal point of view.
As shown in
As shown, the method starts at step 400 before proceeding to step 410, at which a graphical representation of a first subset of said objects is displayed in their respective spatial relationships in a first display area, and then proceeds to step 420, at which a graphical representation of all of the objects not belonging to the first subset is displayed in their respective spatial relationships in a second display area.
The method then proceeds to step 430 at which a selected object is removed from either said first display area or said second display area, before proceeding to step 440 at which the selected object is added to the graphical representation in whichever said display area the selected object was not removed from.
As such, a three dimensional computer generated virtual environment can be broken down into a set of objects X, where the first subset Y ⊂ X and the second subset Z = X \ Y, so that Y ∩ Z = ∅ and Y ∪ Z = X.
It will be appreciated that the order of the steps of displaying the first set of objects 410 and displaying the second set of objects 420 may be reversed, or the two steps may be carried out together.
It will be appreciated that the steps of removing a selected object from either the first display area or the second display area 430 on the one hand, and adding the selected object to the graphical representation in whichever display area the selected object was not removed from 440 on the other, may be carried out together.
By this means, the two display areas taken together will at any time (other than in the interval between steps 430 and 440, if any) show every one of the objects in the environment, to the extent that they are visible with the current camera position and orientation settings. Although the distribution of the objects between the first and second display areas may change over time, they will generally be visible to the user in one display area or the other. By this means, the risk of losing or forgetting an object, or of losing track of the interrelationship of the various objects, is reduced.
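A minimal bookkeeping sketch of steps 410 to 440, with illustrative area names and object identifiers (neither drawn from the above), might read:

```python
class DisplayAreas:
    """Track which display area shows each object (cf. steps 410 to 440)."""

    def __init__(self, first_subset, second_subset):
        # Steps 410/420: the first subset is shown in the first display
        # area and the remaining objects in the second.
        self.areas = {"first": set(first_subset), "second": set(second_subset)}
        self.all_objects = set(first_subset) | set(second_subset)

    def move(self, obj, destination):
        # Step 430: remove the selected object from the area holding it.
        for members in self.areas.values():
            members.discard(obj)
        # Step 440: add it to the destination display area.
        self.areas[destination].add(obj)
        # Invariant: every object remains visible in exactly one area.
        assert set().union(*self.areas.values()) == self.all_objects

areas = DisplayAreas({"knife", "laptop"}, {"bottle"})
areas.move("knife", "second")
```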
As shown in
As shown in
As shown in
As shown in
In accordance with certain embodiments, there may be provided more than two display areas, with the complete set of objects being distributed amongst any or all of them in whatever way best suits the user.
In accordance with certain embodiments, there may be provided short-cut interface features such as keyboard short-cuts, display hot zones, etc., which when activated may cause predetermined operations on the objects with respect to the different display areas. For example, such interface features may cause all objects to be gathered in one particular display area, or to be distributed amongst the different display areas in accordance with a particular algorithm. Examples of such algorithms may include:
It is also possible to use a combination of these different approaches, possibly with different respective weightings.
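Purely by way of illustration, such a weighted combination might be sketched as follows, using the density and volume categorizations mentioned earlier; the attribute names, weights and cutoff are assumptions made for the sketch rather than values taken from the above:

```python
def assign_area(obj: dict, weights: dict, cutoff: float = 0.5) -> str:
    """Assign an object to a display area from a weighted combination of
    categorization scores; attributes are assumed normalised to [0, 1]."""
    score = sum(w * obj[attr] for attr, w in weights.items())
    return "first" if score >= cutoff else "second"

# Example: weight density more heavily than volume (illustrative values).
area = assign_area({"density": 0.9, "volume": 0.2},
                   weights={"density": 0.7, "volume": 0.3})
```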
In accordance with certain embodiments, there may be provided one or more intervening steps between step 430 of removing the selected object from the first display area 541 and step 440 of adding it to the second display area 542, during which the selected object may be represented in one or more intermediate positions between its starting position in one display area and its final position in another display area, so as to provide a semblance of movement of the selected object from one display area to the other. This animated effect may further help the user maintain an intuitive relationship with the objects. Still further, this process may be synchronised with a drag and drop type interface interaction, so that the user has the impression of physically moving the objects from one display area to the other.
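A minimal sketch of such intermediate positions, assuming simple linear interpolation over a fixed number of frames (both assumptions made for illustration), might read:

```python
def intermediate_positions(start, end, frames=10):
    """Yield screen positions carrying the selected object from `start`
    (its place in the source display area) to `end` (its place in the
    destination display area), one position per animation frame."""
    for i in range(1, frames + 1):
        t = i / frames
        yield tuple(s + t * (e - s) for s, e in zip(start, end))
```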
In the embodiment of
In certain embodiments, the different display areas may be uncoupled, so that a different camera position and orientation is defined for each, making it possible for the user to observe the different sets of objects from different points of view. In this way, the user may observe the same object from different points of view by shifting it from one view to another as described above.
In this context, short cuts may be provided for moving certain objects, selected for example in line with the selection algorithms suggested above, from one display area to another so as to present that set of objects from a different point of view.
A short cut may be provided for decoupling the point of view of the displays as described above, and furthermore for re-synchronising the displays with respect to a single preferred camera position and orientation, which may be predefined, or the current camera position and orientation of a selected display area, or otherwise.
The three dimensional computer generated environment may be defined in terms of voxels, polygons or any other such data structure. Often such structures will not inherently distinguish between separate objects insofar as they are in contact with each other, thus providing limited support for a method such as that of
Accordingly, in certain embodiments there may be provided an additional step prior to the step of displaying a graphical representation of a first subset of the objects in their respective spatial relationships in the first display area, of identifying the objects in the three dimensional computer generated environment.
In a case where the three dimensional computer generated environment is defined in terms of voxels, the identification of objects may be implemented by way of example as follows.
As shown in
The process of
The step of displaying the tagged voxels may be omitted altogether, or carried out at some arbitrary later time as desired.
The objects may be divided between any number of display areas. In some embodiments, there are two display areas, such that the one or more further display areas comprise a second display area, and the step of removing a selected said object from either the first display area or one of the further display areas, and adding the selected object to the graphical representation in a different display area, comprises removing a selected object from either the first display area or the second display area, and adding the selected object to whichever display area the selected object was not removed from.
It will be appreciated that the basic underlying approach of
The definition of any objects identified in accordance with these steps may in some embodiments be retained for later reference. For example, the tagging of each new object may be specific to that object, or alternatively a registry of which voxels have been grouped together as an object may be compiled. Accordingly, by multiple applications of the described method, the three dimensional environment may be defined in terms of a collection of objects, rather than a mere matrix of voxels. Furthermore, this approach may accelerate subsequent applications of the process, since voxels that have already been identified as belonging to a first object can be excluded from consideration.
The process of
In some embodiments the opacity threshold may be two-fold, stating that voxels must not only exceed a minimum value, but also fall below a maximum value.
The first selected voxel may be selected in a variety of different ways. For example, it may be selected as the voxel closest to the current position of the virtual camera 1160 defining a user's point of view which meets the predefined opacity threshold. On this basis, in view of the position and orientation of the virtual camera 1160 and the predefined opacity threshold, for the purposes of the present example the first selected voxel is voxel 1131. Accordingly there is provided a further step of determining a virtual camera position, and establishing a straight path between the part of said object and said virtual camera position, whereby the first voxel is selected as being the voxel nearest said virtual camera position situated along said path and exceeding said predetermined opacity threshold.
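A hedged sketch of this selection, assuming the camera position is expressed in voxel coordinates inside the volume and reusing the two-fold opacity band discussed above (names and values are illustrative), might read:

```python
import numpy as np

def first_voxel(volume, camera_pos, direction, lo=0.2, hi=0.9, step=0.5):
    """Return the index of the voxel nearest the camera along its line of
    sight whose opacity exceeds `lo` and falls below `hi`, or None."""
    pos = np.asarray(camera_pos, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)               # unit step along the line of sight
    while all(0 <= pos[i] < volume.shape[i] for i in range(3)):
        idx = tuple(pos.astype(int))
        if lo < volume[idx] < hi:        # within the two-fold threshold band
            return idx
        pos += step * d
    return None
```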
In accordance with the method of
In accordance with the method of
It will be appreciated that the definition of the object will depend heavily on the choice of the opacity threshold.
In accordance with certain embodiments, the process described with respect to
In accordance with certain embodiments, the process described with respect to
In accordance with certain embodiments, the process described with respect to
the process described with respect to
In the foregoing examples, adjacent voxels have been taken to be those having a face in common. In other embodiments the term may be extended to include voxels having a vertex in common, and still further, to voxels within a predetermined radius, or with a predetermined number of intervening voxels.
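The following illustrative sketch generates these alternative adjacency sets; treating the predetermined radius as a Chebyshev distance is one possible reading adopted for the sketch, not a definition given above:

```python
from itertools import product

def neighbours(kind="face"):
    """Offsets to adjacent voxels: 6 sharing a face, or 26 sharing at
    least a vertex (which includes the face and edge neighbours)."""
    offsets = [o for o in product((-1, 0, 1), repeat=3) if any(o)]
    if kind == "face":
        return [o for o in offsets if sum(map(abs, o)) == 1]
    if kind == "vertex":
        return offsets
    raise ValueError(kind)

def neighbours_within(radius):
    """Offsets to all voxels within a predetermined radius (Chebyshev)."""
    r = range(-radius, radius + 1)
    return [o for o in product(r, repeat=3) if any(o)]
```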
The foregoing embodiments have been described on the basis of voxels arranged in a cubic lattice. It will be appreciated that other voxel type structures are known, such as those based on regular octahedrons and tetrahedrons, rhombic dodecahedra and truncated octahedra, etc., and that the process of
In accordance with certain embodiments, objects in a voxel based computer generated three dimensional environment are identified by crawling through adjacent voxels meeting a predetermined criterion with respect to a scalar metadata value associated with each voxel, such as opacity or density. These adjacent voxels may be explored in accordance with a tree-crawling algorithm such as a breadth first or depth first algorithm. Once all adjacent cells meeting the criterion are identified, these are determined to represent a discrete object, and displayed as such. The starting point for the crawling process may be the voxel closest to a virtual camera position along the line of sight of that virtual camera meeting the criterion.
In certain embodiments, the objects present in a particular computer generated 3D environment are represented to a user as distributed amongst a plurality of display areas. The relative positions of the objects are maintained, and whenever an object is removed from one display area it is added to another. The point of view presented to the user may be the same for each display area, with all being controlled together, or separate control may be provided for each area or sub-group of areas.
The disclosed methods can take the form of an entirely hardware embodiment (e.g. FPGA), an entirely software embodiment (for example to control a system according to the invention) or an embodiment containing both hardware and software elements. As such, embodiments may comprise a number of subsystems, functional elements or means adapted to implement the invention in communication with each other, and/or with standard fixed function or programmable elements, for example as described below.
Software embodiments include but are not limited to applications, firmware, resident software, microcode, etc. The invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or an instruction execution system. A computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
In some embodiments, the methods and processes described herein may be implemented in whole or part by a user device. These methods and processes may be implemented by computer-application programs or services, an application-programming interface (API), a library, and/or other computer-program product, or any combination of such entities.
The user device may be a mobile device such as a smart phone or tablet, a drone, a computer or any other device with processing capability, such as a robot or other connected device.
In accordance with certain embodiments, in order to browse between a collection of datasets susceptible of graphical representation, these datasets are associated with points on a sliding scale of one, two or three dimensions. When a point corresponding to a particular dataset is selected by a user via a mouse pointer or the like, that dataset is rendered as a graphical representation and presented to the user. When an intermediate point is selected, an interpolation of the datasets corresponding to the nearby points is generated and the resulting dataset rendered as a graphical representation and presented to the user. The interaction may be implemented with a slider bar type widget having hybrid behaviour, such that clicking on the bar causes the button to jump to the nearest point corresponding to a dataset, while sliding to a chosen intermediate position activates the interpolation of adjacent datasets.
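A minimal sketch of this hybrid slider behaviour, assuming a one dimensional scale and hypothetical dataset files of identical shape (all assumptions made for illustration), might read:

```python
import numpy as np

positions = [0.0, 0.4, 1.0]                            # scale points with datasets
datasets = [np.load(f"set{i}.npy") for i in range(3)]  # hypothetical files

def on_click(x):
    """Snap to the dataset point nearest the clicked position."""
    i = min(range(len(positions)), key=lambda i: abs(positions[i] - x))
    return datasets[i]

def on_slide(x):
    """Interpolate the datasets adjacent to an intermediate position."""
    x = min(max(x, positions[0]), positions[-1])       # clamp to the scale
    hi = next(i for i, p in enumerate(positions) if p >= x)
    lo = max(hi - 1, 0)
    if positions[hi] == positions[lo]:
        return datasets[lo]
    t = (x - positions[lo]) / (positions[hi] - positions[lo])
    return (1 - t) * datasets[lo] + t * datasets[hi]
```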
As shown in
Logic device 1501 includes one or more physical devices configured to execute instructions. For example, the logic device 1501 may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
The logic device 1501 may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic device may include one or more hardware or firmware logic devices configured to execute hardware or firmware instructions. Processors of the logic device may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic device 1501 optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic device 1501 may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
Storage device 1502 includes one or more physical devices configured to hold instructions executable by the logic device to implement the methods and processes described herein. When such methods and processes are implemented, the state of the storage device 1502 may be transformed—e.g., to hold different data.
Storage device 1502 may include removable and/or built-in devices. It may comprise one or more types of storage device including optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., FLASH RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. The storage device may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
In certain arrangements, the system may comprise an interface 1503 adapted to support communications between the Logic device 1501 and further system components. For example, additional system components may comprise removable and/or built-in extended storage devices. Extended storage devices may comprise one or more types of storage device including optical memory 1532 (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (not shown) (e.g., RAM, EPROM, EEPROM, FLASH etc.), and/or magnetic memory 1531 (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Such extended storage device may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that the storage device includes one or more physical devices, and excludes propagating signals per se. However, aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.), as opposed to being stored on a storage device.
Aspects of logic device 1501 and storage device 1502 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
The term “program” may be used to describe an aspect of a computing system implemented to perform a particular function. In some cases, a program may be instantiated via a logic device executing machine-readable instructions held by a storage device. It will be understood that different modules may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same program may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The term “program” may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
In particular, the system of
For example a program implementing the steps described with respect to
In some cases, the computing system may comprise or be in communication with a scanner 1580 or other three dimensional imaging system as described above. This communication may be achieved by wired or wireless network, serial bus, firewire, Thunderbolt, SCSI or any other communications means as desired. In such cases, a program for the control of the scanner 1580 and/or the retrieval of data therefrom may run concurrently on the logic device 1501, or these features may be implemented in the same program as implementing the steps described with respect to
Accordingly the invention may be embodied in the form of a computer program.
Furthermore, when suitably configured and connected, the elements of
It will be appreciated that a “service”, as used herein, is an application program executable across multiple user sessions. A service may be available to one or more system components, programs, and/or other services. In some implementations, a service may run on one or more server-computing devices.
When included, display subsystem 1511 may be used to present a visual representation of data held by a storage device. This visual representation may take the form of a graphical user interface (GUI). As the herein described methods and processes change the data held by the storage device 1502, and thus transform the state of the storage device 1502, the state of display subsystem 1511 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 1511 may include one or more display devices utilizing virtually any type of technology for example as discussed above. Such display devices may be combined with logic device and/or storage device in a shared enclosure, or such display devices may be peripheral display devices.
When included, input subsystem may comprise or interface with one or more user-input devices such as a keyboard 1512, mouse 1513, touch screen 1511, or game controller (not shown). In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, colour, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.

When included, communication subsystem 1520 may be configured to communicatively couple the computing system with one or more other computing devices. For example, the communication subsystem may communicatively couple the computing device to a remote service hosted for example on a remote server 1576 via a network of any size including for example a personal area network, local area network, wide area network, or the Internet. The communication subsystem may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network 1574, or a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow the computing system to send and/or receive messages to and/or from other devices via a network such as the Internet 1575. The communications subsystem may additionally support short range inductive communications with passive devices (NFC, RFID, etc.).
The system of
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
Number | Date | Country | Kind
---|---|---|---
16305339.0 | Mar 2016 | EP | regional