Emergency responders are often called to the scene of an ongoing incident, such as a barricaded gunman or school shooting, and must respond quickly to the incident within a structure, such as a facility and/or building, with which they may not be familiar. In many cases, the first few moments after the first responder arrives may mean life or death for individuals confronted by the situation. While first responders often train on the specific facility to which they must respond, they typically do not have the familiarity necessary to instinctively locate the specific rooms within which the events are unfolding.
Currently, there are efforts underway to map major portions of building interiors. These efforts include the “interior version” of Google Street View, 360-degree camera systems (e.g., IPIX), and LIDAR mapping of interiors with systems such as Trimble's TIMMS unit. While each system may capture images and/or wall positions, conveyance of the information in a manner that allows a first responder to quickly identify a location is lacking.
The most common manner of displaying information regarding the layout of a room within a structure is a floor plan. Most people, however, are not trained to use floor plans. Even those trained to review floor plans take time to review and interpret a floor plan in order to assess which room is the desired location and how best to get there. Moreover, floor plans generally omit views of the walls, even though walls are how most people see and interpret interior rooms.
Many systems compensate for the missing wall views by employing a three-dimensional viewer that allows an operator to move through the structure and see the walls, floors, ceilings, and the like, of rooms within the building. These views, however, do not allow for a quick assessment of the floor plan of the structure where an incident is occurring, nor quick identification of one or more routes to a particular room within the structure.
To assist those of ordinary skill in the relevant art in making and using the subject matter hereof, reference is made to the appended drawings, which are not intended to be drawn to scale, and in which like reference numerals are intended to refer to similar elements for consistency. For purposes of clarity, not every component may be labeled in every drawing.
Before explaining at least one embodiment of the disclosure in detail, it is to be understood that the disclosure is not limited in its application to the details of construction, experiments, exemplary data, and/or the arrangement of the components set forth in the following description or illustrated in the drawings unless otherwise noted.
The disclosure is capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for purposes of description, and should not be regarded as limiting.
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.
As used in the description herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” or any other variations thereof, are intended to cover a non-exclusive inclusion. For example, unless otherwise noted, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements, but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Further, unless expressly stated to the contrary, “or” refers to an inclusive and not to an exclusive “or”. For example, a condition A or B is satisfied by one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, the terms “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the inventive concept. This description should be read to include one or more, and the singular also includes the plural unless it is obvious that it is meant otherwise. Further, use of the term “plurality” is meant to convey “more than one” unless expressly stated to the contrary.
As used herein, any reference to “one embodiment,” “an embodiment,” “some embodiments,” “one example,” “for example,” or “an example” means that a particular element, feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment. The appearance of the phrase “in some embodiments” or “one example” in various places in the specification is not necessarily all referring to the same embodiment, for example.
Circuitry, as used herein, may be analog and/or digital components, or one or more suitably programmed processors (e.g., microprocessors) and associated hardware and software, or hardwired logic. Also, “components” may perform one or more functions. The term “component,” may include hardware, such as a processor (e.g., microprocessor), an application specific integrated circuit (ASIC), field programmable gate array (FPGA), a combination of hardware and software, and/or the like. The term “processor” as used herein means a single processor or multiple processors working independently or together to collectively perform a task.
Software may include one or more computer readable instructions that, when executed by one or more components, cause the component to perform a specified function. It should be understood that the algorithms described herein may be stored on one or more non-transient memories. Exemplary non-transient memory may include random access memory, read only memory, flash memory, and/or the like. Such non-transient memory may be electrically based, optically based, and/or the like.
It is to be further understood that, as used herein, the term user is not limited to a human being, and may comprise a computer, a server, a website, a processor, a network interface, a human, a user terminal, a virtual computer, combinations thereof, and the like, for example.
Referring now to the Figures, and in particular to
In some embodiments, the apparatus 10 includes a computer system for storing a database of three-dimensional floor plans of structures with corresponding geo-location data identifying the structures within the database. The geo-location data can be an address (street, city and/or zip code) for each structure or one or more geospatial coordinates such as latitude/longitude. The computer system has computer executable logic that, when executed by a processor, causes the computer system to receive a geographic point from a user, search the database to find three-dimensional floor plans that correspond to the geographic point, and make the three-dimensional floor plans that contain the geographic point available to the user.
In another embodiment, a method of providing three-dimensional floor plans of structures to a user with the apparatus 10 includes the following steps. The apparatus 10 includes a database hosted by a computer system that stores data indicative of a plurality of floor plans of structures with corresponding geo-location data identifying the structures within the database. The floor plans have one or more rooms, each comprising a set of walls and a floor between the walls in the set, and a set of image data depicting the walls and the floor of the room(s). Image data may include, but is not limited to, captured images, computer-aided design (CAD) images, hand-drawn images, and/or the like. A selection of a geographic point is received from a user by one or more I/O ports of the computer system hosting the database, and the database is then searched to find floor plans that contain the selected geographic point and the set of image data depicting the walls and the floor of the room(s). The floor plans and the set of image data depicting the walls and the floor of the floor plan that contain the selected geographic point are then made available to the user via the one or more I/O ports of the computer system.
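By way of non-limiting illustration, the sketch below shows one possible form of the geographic-point search described above, in which each stored floor plan is associated with a structure footprint and a plan is made available when its footprint contains the selected point. The use of Python, sqlite3, shapely, GeoJSON footprints, and the table and column names are illustrative assumptions and not part of the present disclosure.

```python
# Hypothetical lookup of stored floor plans by a user-selected geographic point.
import json
import sqlite3

from shapely.geometry import Point, shape


def find_floor_plans(db_path, lon, lat):
    """Return paths of stored floor plans whose structure footprint contains the point."""
    query_point = Point(lon, lat)
    matches = []
    with sqlite3.connect(db_path) as conn:
        # Assumed schema: one row per structure with a GeoJSON footprint polygon.
        rows = conn.execute(
            "SELECT structure_id, footprint_geojson, plan_path FROM floor_plans"
        )
        for structure_id, footprint_json, plan_path in rows:
            footprint = shape(json.loads(footprint_json))
            if footprint.contains(query_point):
                matches.append(plan_path)
    return matches
```

In practice, a geospatial index (e.g., within GIS software) could replace the linear scan shown here.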
In some embodiments, the apparatus 10 may include an image capturing system 12 and one or more computer systems 14. Alternatively, the apparatus 10 may solely include one or more computer systems 14, with the apparatus obtaining image data (e.g., one or more images) from a third party system. To that end, in some embodiments, the image capturing system 12 may obtain image data in addition to image data obtained from a third party system.
In some embodiments, the image capturing system 12 may include one or more capturing devices 16 collecting one or more images of an interior of a structure. For example, the image capturing system 12 may include one or more capturing devices 16 collecting one or more images of a floor and/or walls of an interior room. For simplicity of description, the following disclosure may relate to an interior room including the floor and walls of the interior room; however, one skilled in the art will appreciate that the systems and methods disclosed herein may be applied to any structure and are not limited to interior rooms.
The capturing device 16 may be capable of capturing images photographically and/or electronically. The capturing device 16 may include known or determinable characteristics including, but not limited to, focal length, sensor size, aspect ratio, radial and other distortion terms, principal point offset, pixel pitch, alignment, and/or the like. Generally, the capturing device 16 may provide one or more images from a viewing location within the room that extends in a viewing direction to provide a particular perspective of physical characteristics within the interior of the room.
In some embodiments, the capturing device 16 of the image capturing system 12 may include, but is not limited to, one or more conventional cameras, digital cameras, digital sensors, charge-coupled devices, and/or the like. In one example, the capturing device 16 may be one or more conventional cameras manually operated by a user positioned within an interior of the room to collect one or more images of the floor and/or walls of the room.
In another example, the capturing device 16 of the image capturing system 12 may include one or more 360-degree camera systems collecting one or more images of the floor and/or walls of the interior room. For example, the capturing device 16 may include a 360-degree camera system such as the IPIX system, manufactured by IPIX Corporation. The 360-degree camera system may include one or more ultra-wide fish-eye lenses capable of capturing the room in two or more images. Such images may be stitched together to form one or more contiguous views of at least a portion of the room or of the entire room. In some embodiments, the imagery may then be displayed in a viewer (e.g., IPIX viewer). Each wall and/or the floor of the interior room may be extracted into separate images for processing as described in further detail herein.
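By way of non-limiting illustration, the sketch below shows one way overlapping room images may be stitched into a contiguous view, assuming the fish-eye frames have already been dewarped into ordinary perspective images; OpenCV's high-level stitcher is used here purely as an illustrative stand-in and is not part of the present disclosure.

```python
# Hypothetical stitching of overlapping room images into one contiguous view.
import cv2


def stitch_room_images(image_paths):
    images = [cv2.imread(p) for p in image_paths]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:  # 0 indicates success
        raise RuntimeError("Stitching failed with status %d" % status)
    return panorama


# Example: panorama = stitch_room_images(["north_wall.jpg", "east_wall.jpg"])
```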
In another example, the capturing device 16 of the image capturing system 12 may include one or more scanners. For example, the capturing device 16 may use a LiDAR scanner, such as the Trimble TIMMS unit, distributed by Applanix based in Richmond Hill, Ontario. The room scanner may be used to collect three-dimensional data points of the floor and/or walls and form a three-dimensional model of the interior room. The three-dimensional data points may be loaded into the three-dimensional model viewer.
In some embodiments, the image capturing system 12 may further include an RGB sensor. The RGB sensor may be used in addition to the room scanner to enhance the color of acquired images. For example, the RGB sensor may provide color representation to the three-dimensional data points collected by a LiDAR scanner.
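By way of non-limiting illustration, the sketch below shows one conventional way color may be assigned to three-dimensional data points by projecting them into a co-registered RGB image with a pinhole camera model; the assumption that the points are already expressed in the camera frame, and the intrinsic parameters fx, fy, cx, cy, are illustrative and not tied to any particular scanner.

```python
# Hypothetical colorization of LiDAR points from a co-registered RGB image.
import numpy as np


def colorize_points(points_cam, image, fx, fy, cx, cy):
    """points_cam: (N, 3) points in the camera frame; image: (H, W, 3) RGB array."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    in_front = z > 0                                        # keep points in front of the camera
    u = (fx * x[in_front] / z[in_front] + cx).astype(int)   # pixel column
    v = (fy * y[in_front] / z[in_front] + cy).astype(int)   # pixel row
    h, w = image.shape[:2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((points_cam.shape[0], 3), dtype=np.uint8)
    idx = np.flatnonzero(in_front)[inside]
    colors[idx] = image[v[inside], u[inside]]               # sample the image at each projection
    return colors
```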
The capturing device 16 may acquire image data including, but not limited to, one or more images, and issue one or more image data signals (IDS) 22 corresponding to the particular image data acquired (e.g., one or more particular images and/or photographs). The image data may be stored in the computer system 14. In addition, the image capturing system 12 may further include a positioning and orientation device, such as a GPS and/or an inertial measurement unit, which collects data indicative of a three-dimensional location of the sensor of the capturing device 16, an orientation of the sensor, and a compass direction of the sensor each time the images and/or photographs are acquired.
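By way of non-limiting illustration, the sketch below shows one possible record pairing each acquired image with the position, orientation, and compass data described above; the field names are illustrative assumptions.

```python
# Hypothetical per-image metadata record accompanying the image data signals (IDS) 22.
from dataclasses import dataclass


@dataclass
class CapturedImageRecord:
    image_path: str             # captured image and/or photograph
    latitude: float             # three-dimensional location of the sensor (GPS)
    longitude: float
    altitude_m: float
    roll_deg: float             # orientation of the sensor (inertial measurement unit)
    pitch_deg: float
    yaw_deg: float
    compass_heading_deg: float  # compass direction of the sensor
    timestamp_utc: str          # moment the image was acquired
```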
Referring to
In some embodiments, the computer system 14 may include one or more processors 24 communicating with one or more input devices 26, output devices 28, and/or I/O ports 30 enabling the input and/or output of data between the computer system 14 and the image capturing system 12 and/or a user. As used herein, the term “user” is not limited to a human, and may comprise a human using a computer, a host system, a smart phone, a tablet, a computerized pen or writing device, combinations thereof, and/or the like, for example, but not by way of limitation.
The one or more input devices 26 may be capable of receiving information input from a user and/or one or more processors, and transmitting such information to the processor 24. The one or more input devices 26 may include, but are not limited to, implementation as a keyboard, touchscreen, mouse, trackball, microphone, fingerprint reader, infrared port, slide-out keyboard, flip-out keyboard, cell phone, PDA, video game controller, remote control, fax machine, network interface, speech recognition, gesture recognition, eye tracking, brain-computer interface, combinations thereof, and/or the like.
The one or more output devices 28 may be capable of outputting information in a form perceivable by a user and/or processor(s). For example, the one or more output devices 28 may include, but are not limited to, implementations as a computer monitor, a screen, a touchscreen, a speaker, a website, a television set, a smart phone, a PDA, a cell phone, a fax machine, a printer, a laptop computer, an optical head-mounted display (OHMD), combinations thereof, and/or the like. It is to be understood that in some exemplary embodiments, the one or more input devices 26 and the one or more output devices 28 may be implemented as a single device, such as, for example, a touchscreen or a tablet.
In some embodiments, output of information in a form perceivable by a user and/or processor may comprise displaying or providing for display a webpage (e.g., webpage having one or more images and software to permit formation of a floor plan), electronic communications, e-mail, and/or electronic correspondence to one or more user terminals interfacing with a computer and/or computer network(s) and/or allowing the one or more users to participate, such as by interacting with one or more mechanisms on a webpage, electronic communications, e-mail, and/or electronic correspondence by sending and/or receiving signals (e.g., digital, optical, and/or the like) via a computer network interface (e.g., Ethernet port, TCP/IP port, optical port, cable modem, combinations thereof, and/or the like). A user may be provided with a web page in a web browser, or in a software application, for example.
The image data signals 22 may be provided to the computer system 14. For example, the image data signals 22 may be received by the computer system 14 via the I/O port 30. The I/O port 30 may comprise one or more physical and/or virtual ports.
In some embodiments, the computer system 14 may issue an image capturing signal 32 to the image capturing system 12 to thereby cause the capturing device 16 to acquire and/or capture an image at a predetermined location and/or at a predetermined interval. Additionally, in some embodiments, the image capturing signal 32 may be a point collection signal given to a room scanner (e.g., LiDAR scanner) to thereby cause the room scanner to collect points at a predetermined location and/or at a predetermined interval.
The computer system 14 may include one or more processors 24 working together, or independently to execute processor executable code, and one or more memories 34 capable of storing processor executable code. In some embodiments, each element of the computer system 14 may be partially or completely network-based or cloud-based, and may or may not be located in a single physical location.
The one or more processors 24 may be implemented as a single processor or a plurality of processors working together, or independently, to execute the logic as described herein. Exemplary embodiments of the one or more processors 24 may include, but are not limited to, a digital signal processor (DSP), a central processing unit (CPU), a field programmable gate array (FPGA), a microprocessor, a multi-core processor, and/or combinations thereof, for example. The one or more processors 24 may be capable of communicating via a network (e.g., analog, digital, optical, and/or the like) via one or more ports (e.g., physical or virtual ports) using a network protocol. It is to be understood that, in certain embodiments using more than one processor 24, the processors 24 may be located remotely from one another, located in the same location, or may comprise a unitary multi-core processor. The one or more processors 24 may be capable of reading and/or executing processor executable code and/or capable of creating, manipulating, retrieving, altering, and/or storing data structures into one or more memories 34.
The one or more memories 34 may be capable of storing processor executable code. Additionally, the one or more memories 34 may be implemented as a conventional non-transient memory, such as, for example, random access memory (RAM), a CD-ROM, a hard drive, a solid state drive, a flash drive, a memory card, a DVD-ROM, a floppy disk, an optical drive, combinations thereof, and/or the like, for example.
In some embodiments, the one or more memories 34 may be located in the same physical location as the computer system 14. Alternatively, the one or more memories 34 may be located in a different physical location than the computer system 14, with the computer system 14 communicating with the one or more memories 34 via a network, for example. Additionally, one or more of the memories 34 may be implemented as a “cloud memory” (i.e., one or more memories 34 may be partially or completely based on or accessed using a network, for example).
Referring to
Referring to
In some embodiments, the network 42 may be the Internet and/or other network. For example, if the network 42 is the Internet, a primary user interface of the image capturing software and/or image manipulation software may be delivered through a series of web pages. It should be noted that the primary user interface of the image capturing software and/or image manipulation software may be replaced by another type of interface, such as, for example, a Windows-based application.
The network 42 may be almost any type of network. For example, the network 42 may interface by optical and/or electronic interfaces, and/or may use a plurality of network topologies and/or protocols including, but not limited to, Ethernet, TCP/IP, circuit switched paths, and/or combinations thereof. For example, in some embodiments, the network 42 may be implemented as the World Wide Web (or Internet), a local area network (LAN), a wide area network (WAN), a metropolitan network, a wireless network, a cellular network, a Global System for Mobile Communications (GSM) network, a code division multiple access (CDMA) network, a 3G network, a 4G network, a satellite network, a radio network, an optical network, a cable network, a public switched telephone network, an Ethernet network, combinations thereof, and/or the like. Additionally, the network 42 may use a variety of network protocols to permit bi-directional interface and/or communication of data and/or information. It is conceivable that, in the near future, embodiments of the present disclosure may use more advanced networking topologies.
The computer system 14 and image capturing system 12 may be capable of interfacing and/or communicating with the one or more computer systems including processors 40 via the network 42. Additionally, the one or more processors 40 may be capable of communicating with each other via the network 42. For example, the computer system 14 may be capable of interfacing by exchanging signals (e.g., analog, digital, optical, and/or the like) via one or more ports (e.g., physical ports or virtual ports) using a network protocol, for example.
The processors 40 may include, but are not limited to, implementation as a variety of different types of computer systems, such as a server system having multiple servers in a configuration suitable to provide a commercial computer-based business system (such as a commercial web-site), a personal computer, a smart phone, a network-capable television set, a television set-top box, a tablet, an e-book reader, a laptop computer, a desktop computer, a network-capable handheld device, a video game console, a server, a digital video recorder, a DVD player, a Blu-Ray player, a wearable computer, a ubiquitous computer, combinations thereof, and/or the like. In some embodiments, the computer systems comprising the processors 40 may include one or more input devices 44, one or more output devices 46, processor executable code, and/or a web browser capable of accessing a website and/or communicating information and/or data over a network, such as network 42. The computer systems comprising the one or more processors 40 may include one or more non-transient memories comprising processor executable code and/or software applications, for example. The computer system 14 may be modified to communicate with any of these processors 40 and/or future-developed devices capable of communicating with the computer system 14 via the network 42.
The one or more input devices 44 may be capable of receiving information input from a user, processors, and/or environment, and transmit such information to the processor 40 and/or the network 42. The one or more input devices 44 may include, but are not limited to, implementation as a keyboard, touchscreen, mouse, trackball, microphone, fingerprint reader, infrared port, slide-out keyboard, flip-out keyboard, cell phone, PDA, video game controller, remote control, fax machine, network interface, speech recognition, gesture recognition, eye tracking, brain-computer interface, combinations thereof, and/or the like.
The one or more output devices 46 may be capable of outputting information in a form perceivable by a user and/or processor(s). For example, the one or more output devices 46 may include, but are not limited to, implementations as a computer monitor, a screen, a touchscreen, a speaker, a website, a television set, a smart phone, a PDA, a cell phone, a fax machine, a printer, a laptop computer, an optical head-mounted display (OHMD), combinations thereof, and/or the like. It is to be understood that in some exemplary embodiments, the one or more input devices 44 and the one or more output devices 46 may be implemented as a single device, such as, for example, a touchscreen or a tablet.
Referring to
In some embodiments, the location, orientation and/or compass direction of the one or more capturing devices 16 relative to the floor and/or walls at the precise moment each image is captured may be recorded within the one or more memories 34. Location data may be associated with the corresponding captured image. Such location data may be included within the image data signals 22.
The one or more processors 24 may create and/or store in the one or more memories 34, one or more output image and data files. For example, the processor 24 may convert image data signals 22 into computer-readable output image, data files, and/or LIDAR 3D point cloud files. The output image, data files, and/or LIDAR 3D point cloud files may include a plurality of captured image data corresponding to captured images, positional data, and/or LIDAR 3D point clouds corresponding thereto.
Output image, data files, and/or LIDAR 3D point cloud files may then be further provided, displayed and/or used for generating a multi-3D perspective floor plan of an interior room. The multi-3D perspective floor plan of the interior room includes a three-dimensional fitted representation of real-life physical characteristics of an interior of the room.
Currently, the most common method of displaying information regarding the layout of one or more rooms within a structure is a floor plan.
Images and/or data may further provide users with a quick reference and visual representation of the interior room. Emergency responders are often called to a scene of an incident in progress and must respond quickly within a structure. The emergency responder may have limited knowledge regarding elements (e.g., doors, windows, location of interior rooms) within the structure. Using images and/or image data, the multi-3D perspective floor plan 50b may provide an overhead representation that includes real-life physical characteristics of the walls of the interior room.
The multi-3D perspective floor plan 50b provided in
Generally, the multi-3D perspective floor plan 50b may allow a first responder to quickly visualize a three-dimensional representation of the interior room 52 while viewing the entire floor plan at once. As such, elements of interest within the structure (e.g., real-life physical characteristics) may be quickly identified and distinguished relative to other objects within the structure. For example, within a school, interior rooms such as auditoriums, swimming pools, and boiler rooms may be quickly identified and distinguished relative to regular classrooms by visually identifying specific characteristics (e.g., desks) of the rooms.
Additionally, the multi-3D perspective floor plan 50b may provide a first responder with an instinctive feel for objects and/or characteristics within the interior room 52. For example, by providing real-life physical characteristics on the walls 54 and the floor 56, the first responder may be able to determine the number and size of the windows, whether there is sufficient cover within the room, and/or other information not readily obtained via the basic floor plan 50a illustrated in
Referring to
In a step 74, the background of the floor plan 50a may be filled with a distinct color. For example, in some embodiments, the background of the floor plan 50a may be filled with pure red (RGB 255, 0, 0). Filling the background of the floor plan 50a may aid in providing three-dimensional conceptual features as discussed in further detail herein.
In a step 76, each wall 54 may be extracted within the floor plan 50a to provide a fitted representation of each wall. For example, each view of each wall 54 may be formed within the multi-3D perspective floor plan 50b by extracting the wall 54 a distance d from the outline 62 of the interior room 52. The distance d is determined relative to the scale of the floor plan 50a. The distance d may be a number of pixels or a percentage of an area of the floor 56, and may be determined such that elements of each wall 54 may be visually represented without also obscuring the floor 56. Although in
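By way of non-limiting illustration, the sketch below shows one possible implementation of steps 74 and 76, assuming the outline 62 of the interior room is available as pixel vertices: the background is filled with the distinct color RGB (255, 0, 0) and the floor is inset by the distance d, leaving a band of width d in which each wall image may later be placed. The use of Python, Pillow, and shapely is an illustrative assumption.

```python
# Hypothetical preparation of a room canvas: red background plus inset floor outline.
from PIL import Image, ImageDraw
from shapely.geometry import Polygon


def prepare_room_canvas(outline_px, d_px, canvas_size):
    room = Polygon(outline_px)
    floor = room.buffer(-d_px, join_style=2)              # inset outline, i.e., the resized floor 56
    canvas = Image.new("RGB", canvas_size, (255, 0, 0))   # distinct background color
    draw = ImageDraw.Draw(canvas)
    draw.polygon(list(floor.exterior.coords), fill=(255, 255, 255), outline=(0, 0, 0))
    return canvas, floor


# Example: canvas, resized_floor = prepare_room_canvas(
#     [(50, 50), (450, 50), (450, 350), (50, 350)], 40, (500, 400))
```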
In a step 80, in each interior room 52 of the multi-3D perspective floor plan 50b, image data (e.g., one or more images, CAD images, and/or data renderings) of the floor 56 may be obtained. In one example, image data of the floor 56 may be obtained from the image capturing system 12 via the computer system 14. In another example, image data of the floor 56 may be obtained from a third party system.
In some embodiments, the image data may be obtained via the one or more additional processors 40. For example, in some embodiments, a user may obtain images of the interior room 52 and provide the images to the computer system 14 via the network 42. In another example, image data may be obtained via a third party system and accessed via the network 42. For example, one or more images and/or data renderings may be obtained via an image database from a third party system. The computer system 14 may access the third party system via the network 42, storing and/or manipulating the images within the memory 34.
In a step 82, image data of the floor 56 may be fitted to the resized floor 56 within the multi-3D perspective floor plan 50b. For example, one or more images of the floor 56 may be scaled, rotated, and/or otherwise distorted to fit the sizing of the floor 56 within the multi-3D perspective floor plan 50b.
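By way of non-limiting illustration, the sketch below shows a simple form of step 82 under the assumption that the resized floor occupies an axis-aligned rectangle in the floor-plan raster; an irregular floor would instead be fitted with a polygon warp similar to the wall fitting described below.

```python
# Hypothetical fitting of a captured floor image into the resized floor rectangle.
import cv2


def fit_floor_image(floor_image, target_rect, rotate_90=False):
    """target_rect: (x, y, width, height) of the resized floor 56 in the floor-plan raster."""
    x, y, w, h = target_rect
    if rotate_90:  # align the image with the floor-plan orientation if needed
        floor_image = cv2.rotate(floor_image, cv2.ROTATE_90_CLOCKWISE)
    fitted = cv2.resize(floor_image, (w, h), interpolation=cv2.INTER_AREA)
    return (x, y), fitted
```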
In a step 84, image data for each wall 54 may be obtained. Similar to image data of the floor 56, in some embodiments, image data of each wall 54 may be obtained from the image capturing system 12 via the computer system 14, from a third party system, by users via the one or more additional processors 40, via a third party system accessed via the network 42, and/or the like.
In a step 86, the image data (e.g., one or more images) for each wall 54 may be fitted within the space provided between the resized floor 56 and the outline 62 of the interior room 52. For example, one or more images of each wall 54 may be scaled, rotated, and/or otherwise distorted to fit the spacing between the resized floor 56 and the outline 62 of the interior room 52 to provide a 3D fitted representation of real-life physical characteristics of the walls 54 from a particular perspective (e.g., overhead perspective) having a particular viewing direction extending from a viewing location (e.g. nadir).
In some embodiments, fitting the image data for each wall 54 within the spacing between the resized floor 56 and the outline 62 of the interior room 52 may further include skewing the image. For example,
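By way of non-limiting illustration, the sketch below shows one possible skewing of a rectangular wall image into the trapezoid between the outline 62 and the resized floor 56, so that the outer edge of the wall follows the room outline and the inner edge follows the shorter floor edge; the corner coordinates in the example are illustrative assumptions.

```python
# Hypothetical perspective warp of a wall image into the band between outline and floor.
import cv2
import numpy as np


def skew_wall_image(wall_image, outer_left, outer_right, inner_right, inner_left, canvas_size):
    h, w = wall_image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])      # corners of the wall image
    dst = np.float32([outer_left, outer_right, inner_right, inner_left])
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(wall_image, matrix, canvas_size)


# Example: warped = skew_wall_image(north_wall, (50, 50), (450, 50), (410, 90), (90, 90), (500, 400))
```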
In a step 88, one or more three-dimensional conceptual features may optionally be added to the multi-3D perspective floor plan 50b. For example, in the step 74 the background of the interior room 52 may be filled with the distinct color (e.g., RGB (255, 0, 0)). Positioning the floor 56 and/or walls 54 such that the distinct color is shown may convey depth and present cues to the human brain to visualize a three-dimensional structure.
In a step 90, the steps above may be repeated for each room, hallway, and/or structure having associated images and/or data such that multi-3D perspective fitted images having real-life physical characteristics for each interior room 52 may be created. For example,
In a step 92, once the imagery for the interior room 52 has been scaled, rotated, and/or modified, each image and/or a composite image of the interior room 52 may be overlaid on the floor plan, rendering a multi-3D perspective floor plan representation.
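By way of non-limiting illustration, the sketch below shows one simple way the fitted floor and wall images may be composited over the floor-plan raster in step 92, treating the black pixels produced outside each warped quadrilateral as transparent; this is an illustrative overlay and not the only manner in which the composite may be rendered.

```python
# Hypothetical compositing of fitted imagery onto the floor-plan raster.
import numpy as np


def overlay_fitted_images(base_plan, fitted_images):
    """base_plan: (H, W, 3) array; fitted_images: warped (H, W, 3) arrays on the same canvas."""
    composite = base_plan.copy()
    for layer in fitted_images:
        mask = layer.any(axis=2)          # non-black pixels carry imagery
        composite[mask] = layer[mask]
    return composite
```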
For example, in some embodiments, the multi-3D perspective fitted image 100, as shown in
In some embodiments, the floor plan 112 and the floor plan 100 with imagery showing real-life physical characteristics may each be individual layers such that either one may be visually provided by user selection. For example, Geographical Information System (GIS) software is often used for building information modeling. Both the floor plan 112 and the floor plan 100 with imagery may be used as layers within the GIS software such that the user may select the floor plan 100 with imagery to provide additional detailed information as needed. Further, as GIS software is geospatially indexed, the software may further include instructions such that when a user selects a room within the floor plan 112 or the floor plan 100, a menu may be provided offering options for viewing such room via the floor plan 112, the floor plan 100, or a three-dimensional model of the room. For example, a first responder using one or more processors 40 may select the room 102 within the floor plan 112. A menu may be provided to the first responder to select whether to view the room 102 using the floor plan 100, giving the first responder a visual representation of real-life physical characteristics of the room 102. Even further, in some embodiments, the menu may provide an option to enter a three-dimensional model rendering program to view the room 102. For example, some GIS systems are fully three-dimensionally aware such that the three-dimensional model of the room 102 may be a third layer capable of being selected by a user. By accessing the program logic 38 of the computer system 14 illustrated in
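By way of non-limiting illustration, the sketch below shows a GIS-agnostic form of the layer and menu behavior described above, in which each room identifier maps to its available representations and selecting a room returns the viewing options; the room and layer names are illustrative assumptions and do not correspond to any particular GIS product.

```python
# Hypothetical mapping from a selected room to its available viewing layers.
LAYERS = {
    "room_102": {
        "basic_floor_plan": "plans/floor_plan_112.png",      # floor plan 112
        "multi_3d_perspective": "plans/floor_plan_100.png",  # floor plan 100 with imagery
        "three_d_model": "models/room_102.gltf",             # optional third layer
    },
}


def menu_for_room(room_id):
    """Return the viewing options offered when a user selects a room."""
    views = LAYERS.get(room_id, {})
    return [name for name, path in views.items() if path is not None]


# Example: menu_for_room("room_102")
#   -> ['basic_floor_plan', 'multi_3d_perspective', 'three_d_model']
```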
Alternatively, the floor plan 100 having imagery overlaid therein may be provided within a physical report. For example, the floor plan 100 having imagery overlaid therein may be provided in a hard copy. Hard copies may allow for external markings (e.g., markers, grease pencils) during an emergency situation. Additionally, hard copies may reduce the risk of technology-related mishaps during the emergency situation (e.g., depleted battery life, difficulty viewing a screen in sunlight, and/or the like).
In some embodiments, the hard copy of the floor plan 100 having imagery overlaid therein may be provided within a lock box at the physical location of the structure. The lock box may house the hard copy such that first responders may be able to access the floor plan 100 on scene during an emergency event.
In some embodiments, the floor plan 100 having imagery overlaid therein may be created automatically using a textured three-dimensional model floor plan, such as the floor plan 20 depicted in
In a step 116, a textured three-dimensional model floor plan, similar to the floor plan 20 illustrated in
In a step 118, a basic floor plan, similar to the floor plan 50a of
In a step 120, each wall 54 of the floor plan 50a may be extruded by a pre-defined distance d. The distance d may be determined relative to the scale of the floor plan 50a. The distance may also be determined such that elements on and/or positioned adjacent to each wall 54 may be visually represented without obscuring the floor 56. Although in
In a step 124, a rendering of the floor 134 from the textured three-dimensional floor plan 20 may be extracted. The floor 134 may be fit to the resized floor 56 within the multi-3D perspective floor plan 50b. For example, the extracted rendering of the floor 134 from the textured three-dimensional floor plan 20 may be scaled, rotated, and/or otherwise fit to the sizing of the floor 56 within the multi-3D perspective floor plan 50b.
In a step 126, each wall 132a-d from the textured three-dimensional floor plan 20 may be extracted. After extraction, each wall 132a-d may be fit within the space provided between the resized floor 56 and the outline 62 of the interior room 52 at the respective positions within the floor plan 50a. For example, each wall 132a-d may be scaled, rotated, and/or otherwise fit to the spacing between the resized floor 56 and the outline 62 of the interior room 52. In some embodiments, fitting each extracted wall 132a-d to the spacing between the resized floor 56 and the outline 62 of the interior room 52 may further include skewing the extracted wall 132 similar to the wall illustrated in
In a step 128, the steps of the method may be repeated for each structure within the floor plan 50a such that a multi-3D perspective floor plan of the structure includes a fitted representation of real-life physical characteristics of walls 132a-d and floor 134 of the room 130.
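By way of non-limiting illustration, the sketch below shows one way the floor imagery may be produced automatically from a textured three-dimensional model by rendering the model from directly overhead (nadir) with an orthographic camera; wall renderings could be produced analogously with side-facing cameras. The use of trimesh and pyrender, and the assumption that any ceiling geometry has been removed so the floor is visible from above, are illustrative and not part of the present disclosure.

```python
# Hypothetical nadir orthographic rendering of a textured room model.
import numpy as np
import pyrender
import trimesh


def render_floor_nadir(model_path, image_size=1024):
    mesh = trimesh.load(model_path, force="mesh")         # textured room model (ceiling removed)
    scene = pyrender.Scene(ambient_light=[1.0, 1.0, 1.0])
    scene.add(pyrender.Mesh.from_trimesh(mesh))

    # Orthographic camera centered over the model, looking straight down (nadir).
    center = mesh.bounds.mean(axis=0)
    half_extent = (mesh.bounds[1] - mesh.bounds[0]).max() / 2.0
    camera = pyrender.OrthographicCamera(xmag=half_extent, ymag=half_extent)
    pose = np.eye(4)
    pose[:3, 3] = [center[0], center[1], mesh.bounds[1][2] + 1.0]
    scene.add(camera, pose=pose)

    renderer = pyrender.OffscreenRenderer(image_size, image_size)
    color, _depth = renderer.render(scene, flags=pyrender.RenderFlags.FLAT)
    renderer.delete()
    return color                                          # RGB rendering of the floor 134
```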
Although the preceding description has been described herein with reference to particular means, materials, and embodiments, it is not intended to be limited to the particulars disclosed herein; rather, it extends to all functionally equivalent structures, methods, and uses, such as are within the scope of the appended claims. For example, many of the examples relate to first responders; however, the multi-3D perspective floor plan may be used in other applications as well. Building management software may include the multi-3D perspective floor plan such that building managers are able to use the visual representations provided by the multi-3D perspective floor plan for their planning and/or management. Additionally, the multi-3D perspective floor plan may be used and/or distributed as evacuation plans, such that people within a structure may be able to easily locate emergency exits, and/or the like. In another example, the multi-3D perspective floor plan may be used to identify and visualize proposed changes to a structure. For example, multi-3D perspective floor plans may be created of a proposed space and/or structure, and the space and/or structure may be analyzed in relation to a current structure.
The present patent application claims priority to U.S. patent application Ser. No. 14/617,575, filed on Feb. 9, 2015, which claims priority to the provisional patent application identified by U.S. Ser. No. 61/937,488, filed on Feb. 8, 2014, the entire contents of all of which are hereby expressly incorporated herein.