The present invention relates generally to two-dimensional (2D) and three-dimensional (3D) building information models (BIM), and in particular, to a method, apparatus, system, and article of manufacture for enabling deep learning of BIMs. Such deep learning includes the processing of a small set of exemplary BIMs to identify and cluster data and information (of such exemplary BIMs) to provide an expressive representation of the BIM models. The expressive representations enable smart features such as clustering, classifying, recommending, or searching/retrieving BIM models.
(Note: This application references a number of different publications as indicated throughout the specification by reference numbers enclosed in brackets, e.g., [x]. A list of these different publications ordered according to these reference numbers can be found below in the section entitled “References.” Each of these publications is incorporated by reference herein.)
Building information model (BIM) software (such as REVIT BIM software available from the assignee of the present invention) helps architecture, engineering and construction (AEC) teams create high-quality buildings and infrastructure. BIMs are complex and contain information (e.g., in metadata and otherwise) about the structure and appearance of elements in the models. When designing a new building, users manually search through a database of previously designed BIM models to identify relevant content and/or find similar designs that can serve as a template/foundation. Such a prior art search process is slow and inefficient. What is needed is a quick and efficient mechanism to search for and/or identify elements and designs that can be used in the design process. To better understand such problems, a description of prior art BIM applications and limitations may be useful.
BIM software can: model shapes, structures, and systems in three dimensions (3D) with parametric accuracy, precision, and ease; streamline documentation work, with instant revisions to plans, elevations, schedules, and sections as projects change; and empower multidisciplinary teams with specialty toolsets in a unified project environment. BIM models contain rich information about semantic, geometric, and visual characteristics of a building model designed by architects. In other words, BIM models are complex and contain irregular data structures that can be analyzed from different perspectives such as appearance, structure, semantics, etc.
It may be desirable to utilize deep learning to generate and search for BIMs. Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. In deep learning, data is organized into layers and input data (e.g., exemplary images) is transformed into slightly more abstract and composite representations at each layer. The word “deep” in deep learning refers to the number of layers through which data is transformed. By processing data at each layer/level, a deep learning process can learn on its own which features to optimally place in which level. Unfortunately, the irregular data structures and complexity of BIMs make BIMs incompatible with regular deep learning architectures that expect well-defined input structures. In this regard, it is not a trivial task to query information about BIM models or feed such models into deep models for predictive tasks. Moreover, BIM models are treated as independent files and the rich patterns across models are ignored.
To better understand the problems of the prior art, it may be useful to describe deep learning for images. Suppose it is desirable to categorize an image of a cat as a cat (e.g., instead of as a dog). Systems may process the cat image through a deep convolutional network (DCN) (also known as a deep convolutional neural network [DCNN] or convolutional neural network [CNN]). A DCN/DCNN/CNN is a class of artificial neural network that is used to analyze visual imagery and uses convolution in at least one of its layers. To process new images, several exemplary images of cats and dogs are input into the DCN and the DCN learns the representations. In this regard, based on the DCN processing, a set of numbers (e.g., [0.7, −0.1, 0.4, 0.3, −0.4, −0.1, −0.3]) is assigned to represent the image (e.g., different features/attributes of the image are transformed/translated into the numerical representation).
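By way of illustration only, the following exemplary Python sketch shows one possible way an image may be passed through a pretrained convolutional backbone to obtain such a numerical representation; the backbone choice (ResNet-18), the input file name, and the vector size are illustrative assumptions and do not form part of the invention.

# Illustrative sketch: map an image to a compact numerical representation using a
# pretrained convolutional backbone. Model choice, file name, and sizes are assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet18(weights="IMAGENET1K_V1")   # ImageNet-pretrained backbone
backbone.fc = torch.nn.Identity()                     # drop the classifier head, keep features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("cat.jpg").convert("RGB")          # hypothetical example image
with torch.no_grad():
    representation = backbone(preprocess(image).unsqueeze(0))
print(representation.shape)                           # e.g., torch.Size([1, 512])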
Similar processing can be done for objects/elements other than just images (e.g., product reviews). To process such other objects/elements, a network other than DCN may be utilized. For example, deep learning may utilize a long/short term memory (LSTM) network/system to categorize a product review as a good review, very good review, bad review, or very bad review.
While DCNs and LSTMs may be used in deep learning for standard images and text, BIM models are complex. For example, BIM models may have text, floor plans, structural information, electrical information, mechanical information, etc. It may be desirable to categorize/classify a BIM model as residential or commercial, but neither DCN nor LSTM networks may be used due to this complexity. In this regard, standard images or text that may be classified via a DCN or LSTM network include Euclidean/structured data. In contrast, BIM models contain non-Euclidean or irregular structured data. Non-Euclidean/irregular structured data may exist in a variety of different environments such as in 3D solid models, chemical structures, graph structures, 3D architectural models, etc. However, none of these irregular structured data environments can be processed by traditional deep learning models due to these irregularities.
In view of the above, the prior art fails to provide the capability to feed BIM models into deep learning models to learn expressive representations of such models. To provide such a capability, the following questions need to be addressed: (1) how to structure BIM models in order to feed them into deep models; (2) what aspects of BIM models should be considered; and (3) what deep architectures are the best fit for BIM models.
To overcome the problems of the prior art, embodiments of the invention use a combination of preprocessing steps including translating BIM models to graph structures and 2D structures and recent advances in deep learning including self-supervised learning and multimodal learning to map a BIM model into a dense vector representation using a stack of deep models. Learning such expressive representations over BIM models enables smart features such as clustering, classifying, recommending, or retrieving BIM models.
In other words, one of the goals of embodiments of the invention is to learn task-agnostic representations (e.g., sets of numbers) of BIM models from limited input data (i.e., with limited access to exemplary input examples) such that the learned representations can be adapted to new tasks based on a small training sample set (e.g., a few examples on which the training is based).
Referring now to the drawings in which like reference numbers represent corresponding parts throughout:
In the following description, reference is made to the accompanying drawings which form a part hereof, and in which is shown, by way of illustration, several embodiments of the present invention. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
BIM models contain rich information about semantic, geometric, and visual characteristics of a building model designed by architects. However, it is not a trivial task to query information about them or feed them into deep models for predictive tasks. Moreover, BIM models are treated as independent files and the rich patterns across models are ignored. To address these issues, embodiments of the invention represent BIM models as directed and attributed multigraphs (also known as relational graphs). Once this translation is done, it is possible to feed the graphs into graph neural networks (GNNs) to classify them, predict some of their missing attributes, cluster them, or recommend them. Providing such features to BIM models can enhance the user experience.
In general, self-supervised learning is used to generate task-agnostic representations of a BIM model (i.e., self-supervised learning provides for conducting learning without human-based annotations/input). Multimodality is then used to augment multi-view data (i.e., multiple different aspects of a BIM model are used simultaneously/at the same time). Relational inductive bias is used for sample efficiency (i.e., preexisting relations that exist in BIM models may be utilized), and transfer learning is used to provide a quick adaptation (i.e., learning from one trained model/task may be transferred to another model/task—e.g., from representing a model with certain parts to a cost model [e.g., a model for costing the physical construction of the model in the real world]).
In order to process and feed BIM models into deep models, embodiments of the invention consider two modalities to represent the models. The first modality represents the BIM models as relational structures (i.e., directed attributed graphs) with the goal of capturing the underlying structure of a BIM model. The second modality captures the appearance of the BIM models. The first modality is fed into a Graph Neural Network (GNN) to compute a structural representation of a given model and the second modality is fed into a Multiview Convolutional Neural Network (MVCNN) [1] to compute a visual representation of a given model. In order to train the GNN and the MVCNN models, embodiments of the invention use a multi-view contrastive training objective to maximize the agreement between the structural and visual representations of the same BIM model and push apart the structural and visual representations of different BIM models. Once the training is done, embodiments of the invention can treat the learned representations as the representation of the original BIM model and use them for downstream tasks such as classification, regression, or similarity search across the BIM models.
Embodiments of the invention utilized 360 BIM models of Japanese houses. These models are toy Japanese houses of one or two stories. Some sample models are as shown in
The graph is
Once all of the BIM models (e.g., all of the examples in
In order to define the relations/types of relations (i.e., the edges 208/306), a set of heuristics may be utilized.
In such relations, the “Element” can be replaced with any derived element category (e.g., the element categories identified in
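By way of illustration only, the following exemplary Python sketch builds such a directed, attributed multigraph from hypothetical element records; the element categories, attributes, and relation heuristics shown (e.g., "hosted_by" and "same_level") are illustrative assumptions and not an exhaustive statement of the heuristics that may be used.

# Illustrative sketch: translate BIM elements into a directed, attributed multigraph.
# The element records, categories, and relation heuristics below are hypothetical.
import networkx as nx

elements = [
    {"id": 1, "category": "Wall",  "level": "L1", "bbox": (0, 0, 0, 5, 0.2, 3)},
    {"id": 2, "category": "Door",  "level": "L1", "bbox": (2, 0, 0, 3, 0.2, 2), "host": 1},
    {"id": 3, "category": "Floor", "level": "L1", "bbox": (0, 0, 0, 5, 5, 0.3)},
]

graph = nx.MultiDiGraph()
for e in elements:
    # Each node carries attributes such as category, level, and bounding box.
    graph.add_node(e["id"], category=e["category"], level=e["level"], bbox=e["bbox"])

for e in elements:
    # Heuristic 1 (assumed): an element hosted by another element gets a "hosted_by" edge.
    if "host" in e:
        graph.add_edge(e["id"], e["host"], relation="hosted_by")
    # Heuristic 2 (assumed): elements placed on the same level are related by "same_level".
    for other in elements:
        if other["id"] != e["id"] and other["level"] == e["level"]:
            graph.add_edge(e["id"], other["id"], relation="same_level")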
Once the BIM models have been translated to graph structures, a GNN (Graph Neural Network) may be utilized to compute a representation per node in each graph. To compute such a representation, a GNN learns representations based on connectivity and node/edge attributes. A strong connectivity inductive bias provides sample efficiency.
The node representations may then be aggregated into a graph level representation which is basically the structural representation of the input BIM model. Embodiments of the invention may utilize Graph Isomorphism Network (GIN) [2] layers to construct the GNN.
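For example, the GNN may follow the general neighborhood-aggregation scheme below, with the GIN layer of [2] as one possible instantiation (these equations are presented as a standard formulation consistent with [2] and are illustrative rather than limiting):

a_v^(k) = AGGREGATE^(k)({h_u^(k−1): u∈N(v)}),  h_v^(k) = COMBINE^(k)(h_v^(k−1), a_v^(k))

h_v^(k) = MLP^(k)((1 + ε^(k))·h_v^(k−1) + Σ_{u∈N(v)} h_u^(k−1))  (GIN layer, where ε^(k) is a fixed or learnable scalar)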
where G = (V, E) denotes a graph with node feature vectors X_v for v∈V. There are two tasks of interest: (1) Node classification, where each node v∈V has an associated label y_v and the goal is to learn a representation vector h_v of v such that v's label can be predicted as y_v=f(h_v); (2) Graph classification, where, given a set of graphs {G_1, . . . , G_N}⊆G and their labels {y_1, . . . , y_N}⊆Y, embodiments of the invention aim to learn a representation vector h_G that helps predict the label of an entire graph, y_G=g(h_G). The GNN may use the graph structure and node features X_v to learn a representation vector of a node, h_v, or of the entire graph, h_G. Further, the GNN follows neighborhood aggregation, where the representation of a node is iteratively updated by aggregating the representations of its neighbors. After k iterations of aggregation, a node's representation captures the structural information within its k-hop network neighborhood. In the equations above, h_v^(k) is the feature vector of node v at the k-th iteration/layer, and N(v) is the set of nodes adjacent to v.
The node representations 706 are then pooled 708 to generate the graph representation 710. The pooling may be based on h_G = READOUT({h_v^(K) | v∈G}), where the READOUT function aggregates node features from the final iteration K to obtain the entire graph's representation h_G.
In view of the above, embodiments of the invention use the GNN 704 to attempt to fit a graph into a deep model where the node representations 706 are learned. In this regard, the GNN may consist of a message passing technique where graph nodes (e.g., beginning with the initial attributes 702) iteratively update their representations by exchanging information with their neighbors (resulting in the node representations 706). The node representations 706 are aggregated and a neural network is used to group similar elements to generate the graph representation 710.
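By way of illustration only, the following exemplary Python sketch (assuming the PyTorch Geometric library, and with illustrative layer sizes) shows one possible GIN-based encoder that performs such message passing and readout; the architecture and hyperparameters are assumptions rather than requirements.

# Illustrative GIN-based graph encoder (assumes PyTorch Geometric): message passing over
# node attributes, followed by a readout that sum-pools node representations into a
# graph-level (structural) representation. Dimensions and layer counts are assumptions.
import torch
from torch import nn
from torch_geometric.nn import GINConv, global_add_pool

class GraphEncoder(nn.Module):
    def __init__(self, in_dim=16, hidden_dim=64, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        for k in range(num_layers):
            mlp = nn.Sequential(
                nn.Linear(in_dim if k == 0 else hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim),
            )
            self.layers.append(GINConv(mlp))

    def forward(self, x, edge_index, batch):
        # Each GIN layer updates a node by aggregating messages from its neighbors.
        for conv in self.layers:
            x = torch.relu(conv(x, edge_index))
        # READOUT: pool node representations into one vector per graph (structural representation).
        return global_add_pool(x, batch)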
Once the structure of the BIM model has been captured, it is desirable to capture the model's appearance. In this regard, the appearance of a BIM model is also an important aspect of the model.
To capture the visual representation information, embodiments of the invention take a specified number (e.g., 12) of snapshots/views of a three-dimensional (3D) model of a given BIM model with different camera angles 802 and then feed the snapshots into a pre-trained CNN (convolutional neural network) architecture 804. In this regard, embodiments of the invention may utilize a pretrained ResNet backbone 806 trained on ImageNet to compute the visual representation 808 of each snapshot 802. Once all view representations 808 are computed (e.g., one snapshot/view representation 808 per BIM model view 802), embodiments of the invention aggregate/pool 810 them into a single visual representation 812 corresponding to the appearance of the input BIM model.
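By way of illustration only, the following exemplary Python sketch (assuming a torchvision ResNet-18 backbone and twelve views) shows one possible multi-view encoder; the backbone, view count, and pooling choice are illustrative assumptions.

# Illustrative multi-view encoder: each snapshot is passed through an ImageNet-pretrained
# ResNet backbone, and the per-view representations are pooled (here, averaged) into a
# single visual representation of the BIM model. Backbone and view count are assumptions.
import torch
from torch import nn
import torchvision.models as models

class MultiViewEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")  # pretrained on ImageNet
        backbone.fc = nn.Identity()                          # keep the 512-dim view features
        self.backbone = backbone

    def forward(self, views):                                # views: (batch, num_views, 3, H, W)
        b, v, c, h, w = views.shape
        per_view = self.backbone(views.view(b * v, c, h, w)) # one representation per snapshot
        return per_view.view(b, v, -1).mean(dim=1)           # pool views into a single vector

# Example usage: twelve 224x224 snapshots of one BIM model.
visual_representation = MultiViewEncoder()(torch.randn(1, 12, 3, 224, 224))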
To create the final (multi-modal) representation of the BIM model, information from both the structural view (i.e., the graph representation 710) and the appearance view (i.e., the visual representation 812) are utilized/summarized to generate the final representation.
To train the model, embodiments of the invention utilize contrastive training. Contrastive training utilizes the principle of contrasting samples against each other to learn attributes that are common between data classes and attributes that set apart one data class from another. In embodiments of the invention, the model is trained to detect whether a graph and a set of images come from the same BIM model or not. The notion of closeness in vector space is utilized to push representations closer together/further apart. For example, if the graph structure comes from BIM model 1 and twelve (12) images come from BIM model 2, the model is trained to push their representations apart because they are not from the same model. If the graph and images are from the same BIM model, then the system pushes their representations closer together. In other words, contrastive training maximizes agreement between the graph and images belonging to a particular model.
More specifically, it is assumed that the system does not have access to ground-truth labels of any task during the training. The framework is trained end-to-end using Deep InfoMax (e.g., with a Jensen-Shannon MI estimator) and maximizing the mutual information (MI) between the visual representations 812 and structural representations 710 following the objective:

max_{ϕ,θ} Σ_i I_{ϕ,θ}(h_v^i; h_s^i)

where ϕ, θ are the parameters of the GNN and the MVCNN to be learned, and h_v^i and h_s^j denote the visual representation of BIM model i and the structural representation of BIM model j, respectively. I is the mutual information estimator. Embodiments of the invention may utilize the Jensen-Shannon MI estimator:
I_{ϕ,θ}(h_v^i; h_s^i) = E_p[−sp(−D(h_v^i, h_s^i))] − E_{p×p̃}[sp(D(h_v^i, h_s^j))]

where sp(x) = log(1 + e^x) is the softplus function and D(·,·): R^d × R^d → R is a discriminator that takes in a visual representation and a structural representation and scores the agreement between them; the discriminator may be implemented as D(h_v, h_s) = h_v·h_s^T. Embodiments of the invention provide the positive samples from the joint distribution (p) and the negative samples from the product of marginals p×p̃, and optimize the model parameters with respect to the objective using mini-batch stochastic gradient descent.
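By way of illustration only, the following exemplary Python sketch shows one possible implementation of such a contrastive objective, assuming a dot-product discriminator and a softplus-based Jensen-Shannon estimator as described above; the exact loss used in a given embodiment may differ.

# Illustrative multi-view contrastive loss: positive pairs are the structural and visual
# representations of the same BIM model (joint distribution); negative pairs mix
# representations of different BIM models (product of marginals). The dot-product
# discriminator and softplus-based Jensen-Shannon estimator are assumptions for illustration.
import torch
import torch.nn.functional as F

def jsd_contrastive_loss(h_visual, h_structural):
    # h_visual, h_structural: (batch, d) tensors in which row i of each comes from BIM model i
    # (assumed already projected to a common dimension d).
    scores = h_visual @ h_structural.t()                 # D(h_v^i, h_s^j) = h_v^i . h_s^j
    positives = scores.diagonal()                        # same-model pairs
    mask = ~torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    negatives = scores[mask]                             # cross-model pairs
    e_positive = -F.softplus(-positives).mean()          # raise agreement on positives
    e_negative = F.softplus(negatives).mean()            # lower agreement on negatives
    return -(e_positive - e_negative)                    # minimizing this maximizes the MI estimate

# Usage: loss = jsd_contrastive_loss(multiview_encoder_output, graph_encoder_output)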
Once the GNN and MVCNN models have been trained end-to-end in a self-supervised manner, these two models can be treated as encoders that can encode the underlying knowledge across the BIM models. Thus, once the model is primed with such knowledge, the models can be adapted to new tasks by only providing the encoder with a few examples. These tasks may consist of adapting to classify the BIM models, predicting properties of such models, or using the models for clustering and similarity searching.
In other words, using self-supervised learning, the learned weights can be reused and fine-tuned to further train the models and map inputs to outputs. In this regard, rather than starting from a random starting point, the knowledge of the base model is transferred to the learning network, making it possible to adapt prior classifications to new tasks based on just a few examples.
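By way of illustration only, the following exemplary Python sketch shows one possible adaptation scheme in which the pretrained encoders are kept frozen and only a small task head is trained on a few labeled examples; the head size, label set, and training details are illustrative assumptions.

# Illustrative few-example adaptation: the pretrained GNN/MVCNN encoders remain frozen,
# and only a small classification head is trained on representations of a handful of
# labeled BIM models. Label set, head size, and optimizer settings are assumptions.
import torch
from torch import nn

def train_task_head(representations, labels, num_classes=2, epochs=100):
    # representations: (n, d) vectors produced by the frozen, pretrained encoders
    # labels: (n,) integer task labels, e.g., 0=residential, 1=commercial
    head = nn.Linear(representations.size(1), num_classes)
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss_fn(head(representations), labels).backward()
        optimizer.step()
    return head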
In
In
In
In view of the above, various libraries and BIM modeling application add-ons may be created to extract information from BIM models, analyze the BIM models (e.g., using neural network models), and to visualize the results. A cloud pipeline may also be used to (i) preprocess train and inference data, and (ii) train and serve neural network models.
Training and inference pipelines may be implemented in a variety of manners. An exemplary implementation may utilize the AMAZON WEB SERVICES (AWS) CLOUD using AWS CLI (command line interface). Exemplary training pipeline services may include BATCH and SAGEMAKER. Inference pipeline services may include SAGEMAKER, LAMBDA, and an API (Application Programming Interface) Gateway. Further, PYTHON may be utilized to provide API access to a BIM modeling application.
The AWS ECR 1106 is provided to the AWS SAGEMAKER 1108, which is software that builds, trains, and deploys machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
Simultaneously with writing the code 1102 and packaging it as a docker image 1104, training data 1110 is provided to the AWS S3 BUCKET 1112 (an AWS simple storage service (S3) bucket that serves as a container for objects stored in AMAZON S3). The data 1110 in the AWS S3 BUCKET 1112 is also provided to the AWS SAGEMAKER 1108.
The AWS SAGEMAKER 1108 is utilized to generate the trained model 1114 which is stored in another AWS S3 BUCKET 1116.
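By way of illustration only, the following exemplary Python sketch (using the boto3 SDK) shows one possible way to launch such a training job, in which the docker image in the AWS ECR and the training data in the AWS S3 BUCKET are handed to AWS SAGEMAKER, which writes the trained model back to another bucket; all names, ARNs, URIs, and instance settings are placeholder assumptions.

# Illustrative sketch of launching the training job described above with the boto3 SDK.
# All account identifiers, bucket names, image URIs, and role ARNs are placeholders.
import boto3

sagemaker = boto3.client("sagemaker")
sagemaker.create_training_job(
    TrainingJobName="bim-representation-training",
    AlgorithmSpecification={
        "TrainingImage": "<account>.dkr.ecr.<region>.amazonaws.com/bim-train:latest",  # ECR image
        "TrainingInputMode": "File",
    },
    RoleArn="arn:aws:iam::<account>:role/<sagemaker-role>",
    InputDataConfig=[{
        "ChannelName": "training",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://<training-data-bucket>/bim-graphs/",   # training data in S3
            "S3DataDistributionType": "FullyReplicated",
        }},
    }],
    OutputDataConfig={"S3OutputPath": "s3://<model-bucket>/trained-models/"},  # trained model output
    ResourceConfig={"InstanceType": "ml.p3.2xlarge", "InstanceCount": 1, "VolumeSizeInGB": 50},
    StoppingCondition={"MaxRuntimeInSeconds": 86400},
)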
The next part 1212 of the process is to convert the entity information 208 to a graph representation. In this regard, a Batch process 1214 (e.g., an AWS Batch process) is used to generate the graph representations 1216 that are then stored in a data lake 1218 (e.g., an AWS S3 bucket).
The next step 1220 is to train the graph neural network (GNN) to infer model embeddings. In this regard, the graph representations 1216 (stored in data lake 1218) are provided to a model training application 1222 (e.g., AWS SAGEMAKER) that generates a trained graph neural network (GNN) model 1224 that may also be exported/stored in a data lake 1226.
Further to the above,
At step 1402, multiple 3D models (e.g., BIM models) are obtained, with each 3D model consisting of non-Euclidean (e.g., complex) data.
At step 1404, each 3D model is translated into a relational graph. The relational graph consists of multiple nodes and at least one edge. Each of the multiple nodes corresponds to an element within the 3D model and at least one edge of the relational graph corresponds to a type of relation between a pair of the multiple nodes. In one or more embodiments, at least one of the multiple nodes has properties for the element it corresponds to, and the properties may include: an identification; a label; a type; a category; a bounding box; and an orientation.
At step 1406, each relational graph is processed using a graph neural network (GNN) that computes a node representation per node.
At step 1408, the node representations are aggregated into a structural representation of the 3D model.
At step 1410, multiple different views of the 3D model are captured. In one or more embodiments, each of the multiple different views is a snapshot from a different camera angle of the 3D model.
At step 1412, the multiple different views are passed through/processed by a convolutional neural network (CNN) to compute a view representation of each of the multiple different views.
At step 1414, the view representations are aggregated into a single visual representation.
At step 1416, the GNN and CNN are trained using a multiview contrastive training objective to maximize agreement between the structural representation and the single visual representation to form final learned representations. In one or more embodiments, the training is performed end-to-end in a self-supervised manner.
In one or more embodiments, the structural representation and single visual representation both comprise a set of numbers that uniquely corresponds to the structural representation and single visual representation respectively.
At step 1418, the final learned representation is utilized to perform the predictive task. Multiple different types of predictive tasks are within the scope of embodiments of the invention. Some exemplary predictive tasks are described below.
In one exemplary predictive task, search input is received, and the final learned representations are utilized to identify a similar 3D model of one of the multiple 3D models based on the search input. Thereafter, the similar 3D model is provided in response to the search input.
In another exemplary predictive task, a new 3D model is received and clustered with at least one of the multiple 3D models based on the final learned representations. The clustering is then utilized to determine attributes of the new 3D model.
In yet another exemplary predictive task, a new 3D model is received, and a classification of the new 3D model is determined based on the final learned representations. Based on the classification and the final learned representations, a new attribute of the new 3D model is determined (e.g., the cost is determined as an attribute where the classification is that of commercial v. residential).
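By way of illustration only, the following exemplary Python sketch shows how such similarity-search and clustering tasks may operate over the final learned representations; the library choices (scikit-learn), vector sizes, and parameters are illustrative assumptions.

# Illustrative downstream use of the final learned representations: nearest-neighbor
# similarity search and clustering over a library of BIM models. Library choices and
# sizes are assumptions; random vectors stand in for the learned representations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

representations = np.random.rand(360, 128)     # one vector per BIM model (e.g., 360 models)

# Similarity search: return the models whose representations are closest to a query model's.
def most_similar(query_rep, k=5):
    scores = cosine_similarity(query_rep.reshape(1, -1), representations)[0]
    return np.argsort(-scores)[:k]             # indices of the k most similar BIM models

# Clustering: group models with similar representations, then read attributes off a cluster.
cluster_ids = KMeans(n_clusters=8, n_init=10).fit_predict(representations)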
In one embodiment, the computer 1502 operates by the hardware processor 1504A performing instructions defined by the computer program 1510 (e.g., a computer-aided design [CAD] application) under control of an operating system 1508. The computer program 1510 and/or the operating system 1508 may be stored in the memory 1506 and may interface with the user and/or other devices to accept input and commands and, based on such input and commands and the instructions defined by the computer program 1510 and operating system 1508, to provide output and results.
Output/results may be presented on the display 1522 or provided to another device for presentation or further processing or action. In one embodiment, the display 1522 comprises a liquid crystal display (LCD) having a plurality of separately addressable liquid crystals. Alternatively, the display 1522 may comprise a light emitting diode (LED) display having clusters of red, green and blue diodes driven together to form full-color pixels. Each liquid crystal or pixel of the display 1522 changes to an opaque or translucent state to form a part of the image on the display in response to the data or information generated by the processor 1504 from the application of the instructions of the computer program 1510 and/or operating system 1508 to the input and commands. The image may be provided through a graphical user interface (GUI) module 1518. Although the GUI module 1518 is depicted as a separate module, the instructions performing the GUI functions can be resident or distributed in the operating system 1508, the computer program 1510, or implemented with special purpose memory and processors.
In one or more embodiments, the display 1522 is integrated with/into the computer 1502 and comprises a multi-touch device having a touch sensing surface (e.g., track pad or touch screen) with the ability to recognize the presence of two or more points of contact with the surface. Examples of multi-touch devices include mobile devices (e.g., IPHONE, NEXUS S, DROID devices, etc.), tablet computers (e.g., IPAD, HP TOUCHPAD, SURFACE Devices, etc.), portable/handheld game/music/video player/console devices (e.g., IPOD TOUCH, MP3 players, NINTENDO SWITCH, PLAYSTATION PORTABLE, etc.), touch tables, and walls (e.g., where an image is projected through acrylic and/or glass, and the image is then backlit with LEDs).
Some or all of the operations performed by the computer 1502 according to the computer program 1510 instructions may be implemented in a special purpose processor 1504B. In this embodiment, some or all of the computer program 1510 instructions may be implemented via firmware instructions stored in a read only memory (ROM), a programmable read only memory (PROM) or flash memory within the special purpose processor 1504B or in memory 1506. The special purpose processor 1504B may also be hardwired through circuit design to perform some or all of the operations to implement the present invention. Further, the special purpose processor 1504B may be a hybrid processor, which includes dedicated circuitry for performing a subset of functions, and other circuits for performing more general functions such as responding to computer program 1510 instructions. In one embodiment, the special purpose processor 1504B is an application specific integrated circuit (ASIC).
The computer 1502 may also implement a compiler 1512 that allows an application or computer program 1510 written in a programming language such as C, C++, Assembly, SQL, PYTHON, PROLOG, MATLAB, RUBY, RAILS, HASKELL, or other language to be translated into processor 1504 readable code. Alternatively, the compiler 1512 may be an interpreter that executes instructions/source code directly, translates source code into an intermediate representation that is executed, or that executes stored precompiled code. Such source code may be written in a variety of programming languages such as JAVA, JAVASCRIPT, PERL, BASIC, etc. After completion, the application or computer program 1510 accesses and manipulates data accepted from I/O devices and stored in the memory 1506 of the computer 1502 using the relationships and logic that were generated using the compiler 1512.
The computer 1502 also optionally comprises an external communication device such as a modem, satellite link, Ethernet card, or other device for accepting input from, and providing output to, other computers 1502.
In one embodiment, instructions implementing the operating system 1508, the computer program 1510, and the compiler 1512 are tangibly embodied in a non-transitory computer-readable medium, e.g., data storage device 1520, which could include one or more fixed or removable data storage devices, such as a zip drive, floppy disc drive 1524, hard drive, CD-ROM drive, tape drive, etc. Further, the operating system 1508 and the computer program 1510 are comprised of computer program 1510 instructions which, when accessed, read and executed by the computer 1502, cause the computer 1502 to perform the steps necessary to implement and/or use the present invention or to load the program of instructions into a memory 1506, thus creating a special purpose data structure causing the computer 1502 to operate as a specially programmed computer executing the method steps described herein. Computer program 1510 and/or operating instructions may also be tangibly embodied in memory 1506 and/or data communications devices 1530, thereby making a computer program product or article of manufacture according to the invention. As such, the terms “article of manufacture,” “program storage device,” and “computer program product,” as used herein, are intended to encompass a computer program accessible from any computer readable device or media.
Of course, those skilled in the art will recognize that any combination of the above components, or any number of different components, peripherals, and other devices, may be used with the computer 1502.
A network 1604 such as the Internet connects clients 1602 to server computers 1606. Network 1604 may utilize ethernet, coaxial cable, wireless communications, radio frequency (RF), etc. to connect and provide the communication between clients 1602 and servers 1606. Further, in a cloud-based computing system, resources (e.g., storage, processors, applications, memory, infrastructure, etc.) in clients 1602 and server computers 1606 may be shared by clients 1602, server computers 1606, and users across one or more networks. Resources may be shared by multiple users and can be dynamically reallocated per demand. In this regard, cloud computing may be referred to as a model for enabling access to a shared pool of configurable computing resources.
Clients 1602 may execute a client application or web browser and communicate with server computers 1606 executing web servers 1610. Such a web browser is typically a program such as MICROSOFT INTERNET EXPLORER/EDGE, MOZILLA FIREFOX, OPERA, APPLE SAFARI, GOOGLE CHROME, etc. Further, the software executing on clients 1602 may be downloaded from server computer 1606 to client computers 1602 and installed as a plug-in or ACTIVEX control of a web browser. Accordingly, clients 1602 may utilize ACTIVEX components/component object model (COM) or distributed COM (DCOM) components to provide a user interface on a display of client 1602. The web server 1610 is typically a program such as MICROSOFT'S INTERNET INFORMATION SERVER.
Web server 1610 may host an Active Server Page (ASP) or Internet Server Application Programming Interface (ISAPI) application 1612, which may be executing scripts. The scripts invoke objects that execute business logic (referred to as business objects). The business objects then manipulate data in database 1616 through a database management system (DBMS) 1614. Alternatively, database 1616 may be part of, or connected directly to, client 1602 instead of communicating/obtaining the information from database 1616 across network 1604. When a developer encapsulates the business functionality into objects, the system may be referred to as a component object model (COM) system. Accordingly, the scripts executing on web server 1610 (and/or application 1612) invoke COM objects that implement the business logic. Further, server 1606 may utilize MICROSOFT'S TRANSACTION SERVER (MTS) to access required data stored in database 1616 via an interface such as ADO (Active Data Objects), OLE DB (Object Linking and Embedding DataBase), or ODBC (Open DataBase Connectivity).
Generally, these components 1600-1616 all comprise logic and/or data that is embodied in/or retrievable from device, medium, signal, or carrier, e.g., a data storage device, a data communications device, a remote computer or device coupled to the computer via a network or via another data communications device, etc. Moreover, this logic and/or data, when read, executed, and/or interpreted, results in the steps necessary to implement and/or use the present invention being performed.
Although the terms “user computer”, “client computer”, and/or “server computer” are referred to herein, it is understood that such computers 1602 and 1606 may be interchangeable and may further include thin client devices with limited or full processing capabilities, portable devices such as cell phones, notebook computers, pocket computers, multi-touch devices, and/or any other devices with suitable processing, communication, and input/output capability.
Of course, those skilled in the art will recognize that any combination of the above components, or any number of different components, peripherals, and other devices, may be used with computers 1602 and 1606. Embodiments of the invention are implemented as a software/CAD application on a client 1602 or server computer 1606. Further, as described above, the client 1602 or server computer 1606 may comprise a thin client device or a portable device that has a multi-touch-based display.
This concludes the description of the preferred embodiment of the invention. In view of the above, embodiments of the invention provide the ability to learn representations over complex 3D BIM models (e.g., where a BIM model is represented by/mapped to a set of numbers). In addition to use for BIM models, embodiments of the invention may also be utilized with computer aided design (CAD) models or 3D CAD, modeling, manufacturing, industrial design, electronics, and mechanical engineering models/applications. Further embodiments may be utilized to infer properties/attributes for any type of complex models such as movie characters, 3D animation and characters (e.g., in AUTODESK MAYA), etc.
Further, embodiments of the invention may be implemented on any type of computer, such as a mainframe, minicomputer, or personal computer, or on any computer configuration, such as a timesharing mainframe, local area network, or standalone personal computer.
The foregoing description of the preferred embodiment of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.