There are various needs for understanding the shape and size of cavity surfaces, such as those of body cavities. For example, hearing aids, hearing protection, custom headphones, and wearable computing devices may require impressions of a patient's ear canal. To construct an impression of an ear canal, audiologists may inject a silicone material into a patient's ear canal, wait for the material to harden, and then provide the resulting silicone impression to manufacturers, who use it to create a custom-fitting in-ear device. As may be appreciated, the process is slow, expensive, and unpleasant for the patient as well as for the medical professional performing the procedure.
Computer vision and photogrammetry generally relate to acquiring and analyzing images in order to produce data by electronically interpreting an image using various algorithmic methods. For example, computer vision may be employed in event detection, object recognition, motion estimation, and various other tasks.
Three-dimensional reconstruction is the process of obtaining data relating to the shape and appearance of an object and employing that data to generate a three-dimensional representation of the object.
Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
The present disclosure relates to a mobile scanning device configured to generate various displays for conducting a scan of a surface. Advancements in computer vision permit imaging or capture devices, such as conventional cameras, to be employed as sensors useful in determining locations, shapes, and appearances of objects in a three-dimensional space. For example, a position and an orientation of an object in a three-dimensional space may be determined relative to a certain world coordinate system utilizing digital images captured via image capturing devices. As may be appreciated, the position and orientation of the object in the three-dimensional space may be beneficial in generating additional data about the object, or about other objects, in the same three-dimensional space.
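By way of a non-limiting illustration only, the following sketch shows one conventional way such a pose might be estimated from a captured image using the OpenCV library; the point correspondences and calibration values below are hypothetical placeholders, and the present disclosure is not limited to any particular pose-estimation technique.

```python
# A minimal sketch of estimating camera pose from an image, assuming
# known correspondences between world points and image points; all
# numeric values below are hypothetical placeholders.
import numpy as np
import cv2

# Known 3D feature locations in the world coordinate system (meters).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0],
                          [0.0, 0.1, 0.0]], dtype=np.float64)

# Pixel locations where those features were observed in the image.
image_points = np.array([[320.0, 240.0],
                         [420.0, 238.0],
                         [424.0, 342.0],
                         [318.0, 344.0]], dtype=np.float64)

# Intrinsic calibration of the imaging device (focal lengths, center).
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

# Solve for the rotation and translation relating world and camera.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
print("translation:", tvec.ravel())
print("rotation:\n", rotation)
```

The recovered rotation and translation together express the pose of the imaging device relative to the chosen world coordinate system, which may then inform the generation of additional data about objects in that space.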
For example, scanning devices may be used in various industries to scan objects to generate data pertaining to the objects being scanned. A scanning device may employ an imaging device, such as a camera, to determine information about the object being scanned, such as the size, shape, appearance, or structure of the object, the distance of the object from the scanning device, etc.
As a non-limiting example, a scanning device may include an otoscanner configured to visually inspect or scan the ear canal of a human or animal. An otoscanner may comprise one or more cameras that may be beneficial in generating data about the ear canal subject of the scan, such as the size, shape, or structure of the ear canal. This data may be used in generating three-dimensional reconstructions of the ear canal that may be useful in customizing in-ear devices, such as hearing aids or wearable computing devices.
Determining the placement of a display to facilitate a scan of an object remains problematic. Thus, according to various embodiments of the present disclosure, a mobile computing device, such as an otoscanner, may be configured to perform a scan of an object utilizing at least one imaging device that may generate a first video stream. The mobile computing device may generate or otherwise access a second video stream comprising at least a three-dimensional reconstruction of the object subject to the scan and may render the first video stream and the second video stream in one or more displays in data communication with the mobile computing device, as will be described in greater detail below.
In the following discussion, a general description of the system and its components is provided, followed by a discussion of the operation of the same.
With reference to FIG. 1, shown is an example of a scanning device 100 according to various embodiments of the present disclosure. The scanning device 100 may comprise, for example, a hand grip 106, a probe 109, a fan light element 112, one or more imaging devices 115, and a display screen 118.
The hand grip 106 may be configured such that its length is long enough to accommodate large hands and its diameter is small enough to remain comfortable for smaller hands. A trigger 121, located within the hand grip 106, may perform various functions, such as initiating a scan of a surface, controlling a user interface rendered in the display, and/or otherwise modifying the function of the scanning device 100.
The scanning device 100 may further comprise a cord 124 that may be employed to communicate data signals to external computing devices and/or to power the scanning device 100. As may be appreciated, the cord 124 may be detachably attached to facilitate the mobility of the scanning device 100 when held in a hand via the hand grip 106. According to various embodiments of the present disclosure, the scanning device 100 may not comprise a cord 124, thus acting as a wireless and mobile device capable of wireless communication.
The probe 109 mounted onto the scanning device 100 may be configured to guide light received at a proximal end of the probe 109 to a distal end of the probe 109 and may be employed in the scanning of a surface cavity, such as an ear canal, by placing the probe 109 near or within the surface cavity. During a scan, the probe 109 may be configured to project a 360-degree ring of light onto the cavity surface and capture reflections of the projected ring to reconstruct the image, size, and shape of the cavity surface. In addition, the scanning device 100 may be configured to capture video images of the cavity surface by projecting video illuminating light onto the cavity surface and capturing video images of the cavity surface.
The fan light element 112 mounted onto the scanning device 100 may be configured to emit light in a fan line for scanning an outer surface. The fan light element 112 comprises a fan light source projecting light onto a single-element lens to collimate the light and generate a fan line for scanning the outer surface. By triangulating the captured reflections of the fan line as it is projected onto a surface, an imaging sensor within the scanning device 100 may be employed to reconstruct the scanned surface.
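For illustration only, the sketch below reduces such a triangulation to its underlying geometry: each pixel detected on the reflected line is back-projected to a ray, and the ray is intersected with the known plane of the projected fan of light. The intrinsics, light-plane parameters, and detected pixels are hypothetical placeholders rather than calibration values of any actual scanning device 100.

```python
# A minimal sketch of light-plane triangulation, assuming a calibrated
# camera and a known light plane; all numeric values are hypothetical.
import numpy as np

# Camera intrinsics: focal lengths and principal point, in pixels.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0

# Fan-light plane in camera coordinates: points X with n . X = d lie on it.
n = np.array([0.0, np.sin(np.radians(30.0)), np.cos(np.radians(30.0))])
d = 0.05  # plane offset in meters

def triangulate(u, v):
    """Return the 3D point where the ray through pixel (u, v) meets the plane."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # back-projected ray
    t = d / np.dot(n, ray)  # distance along the ray to the light plane
    return t * ray          # 3D point in camera coordinates

# Pixels detected on the reflected fan line (hypothetical detections).
for u, v in [(300, 250), (320, 252), (340, 251)]:
    print(triangulate(u, v))
```

The same ray-surface intersection underlies the 360-degree ring projected by the probe 109, with the calibrated geometry of the projected ring substituted for the plane.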
Referring next to FIG. 2A, shown is a view of an example of the scanning device 100 of FIG. 1 according to various embodiments of the present disclosure.
Turning now to FIG. 2B, shown is another view of an example of the scanning device 100 of FIG. 1 according to various embodiments of the present disclosure.
In the examples of FIGS. 2A and 2B, the scanning device 100 comprises a display screen 118 mounted upon the scanning device 100 such that an operator may view the display screen 118 while conducting a scan.
Further, the display screen 118 is coupled for data communication to the imaging devices 115 (FIG. 1), permitting images captured via the imaging devices 115 to be rendered in the display screen 118.
According to various embodiments of the present disclosure, the imaging devices 115 of FIG. 1 may comprise, for example, one or more cameras capable of capturing still images or video of a surface subject to a scan.
Moving on to FIG. 3A, shown is an example of a user interface comprising a first video stream 303a according to various embodiments of the present disclosure. The first video stream 303a may comprise, for example, video images captured via the imaging devices 115 during a scan of a surface cavity.
Referring next to FIG. 3B, shown is an example of a user interface comprising a second video stream 303b according to various embodiments of the present disclosure.
In the second video stream 303b of FIG. 3B, a three-dimensional reconstruction 306 of the object subject to the scan may be rendered and updated as the scan progresses.
A three-dimensional reconstruction 306 of an ear canal may be generated via one or more processors internal to the scanning device 100, external to the scanning device 100, or a combination thereof. Generating the three-dimensional reconstruction 306 of the object subject to the scan may require information related to the pose of the scanning device 100. The three-dimensional reconstruction 306 of the ear canal may further comprise, for example, a probe model 312 emulating a position of the probe 109 relative to the surface cavity being scanned by the scanning device.
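As a hedged sketch of how pose information might enter the reconstruction, the following illustrates transforming points measured in the probe's local frame into a common world frame using a pose expressed as a rotation R and translation t, however that pose is obtained, and accumulating the results into a point cloud; the names and values are hypothetical.

```python
# A minimal sketch of accumulating triangulated ring points into a
# world-frame point cloud using the scanner pose; the ring and pose
# below are hypothetical stand-ins for tracked values.
import numpy as np

cloud = []  # accumulated world-frame points

def accumulate(points_local, R, t):
    """Transform Nx3 probe-frame points by the pose (R, t) and store them."""
    points_world = points_local @ R.T + t  # rigid-body transform
    cloud.extend(points_world)

# One hypothetical ring of triangulated points and an identity pose.
ring = np.array([[0.01 * np.cos(a), 0.01 * np.sin(a), 0.05]
                 for a in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)])
accumulate(ring, R=np.eye(3), t=np.zeros(3))
print(len(cloud), "points accumulated")
```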
A notification area 315 may provide the operator of the scanning device 100 with notifications, whether assisting the operator with conducting a scan or warning the operator of potential harm to the object being scanned. Measurements 318 may be rendered in the display to assist the operator in conducting scans of surface cavities at certain distances and/or depths. A bar 321 may provide the operator with an indication of which depths have been thoroughly scanned as opposed to which depths or distances remain to be scanned. One or more buttons 324 may be rendered in various locations of the user interface permitting the operator to initiate a scan of an object and/or manipulate the user interface presented on the display screen 118 or other display in data communication with the scanning device 100.
According to one embodiment, the user interfaces of FIGS. 3A and 3B may be rendered in the display screen 118 of the scanning device 100, in one or more displays external to the scanning device 100, or in a combination thereof.
Although the first video stream 303a and the second video stream 303b are shown simultaneously in a side-by-side arrangement, other embodiments may be employed without deviating from the scope of the present disclosure. For example, the first video stream 303a may be rendered in the display screen 118 on the scanning device 100 while the second video stream 303b is rendered in a display external to the scanning device 100, or vice versa, as will be discussed in greater detail below.
Referring next to FIG. 4, shown is an example of the first video stream 303a and the second video stream 303b rendered in the display screen 118 of the scanning device 100 according to various embodiments of the present disclosure.
As shown in FIG. 4, the first video stream 303a and the second video stream 303b may be rendered simultaneously in the display screen 118, permitting an operator to view the captured video images and the three-dimensional reconstruction 306 while conducting a scan.
Turning now to FIG. 5, shown is another example of the first video stream 303a and the second video stream 303b rendered during a scan of a surface cavity according to various embodiments of the present disclosure.
As depicted in FIG. 5, the second video stream 303b may comprise the three-dimensional reconstruction 306 of the surface cavity as well as the probe model 312 emulating the position of the probe 109.
Although FIGS. 4 and 5 depict the video streams 303 rendered in the display screen 118 of the scanning device 100, one or both of the video streams 303 may be rendered in one or more displays external to the scanning device 100, as will be described in greater detail below.
Moving on to FIG. 6, shown is an example of the scanning device 100 in data communication with a mobile computing device 606 according to various embodiments of the present disclosure.
As shown in FIG. 6, the first video stream 303a and/or the second video stream 303b may be rendered in a display of the mobile computing device 606 in addition to, or instead of, the display screen 118 of the scanning device 100.
According to various embodiments, the three-dimensional reconstruction 306 may be generated in a processor internal to the scanning device 100 and communicated to the mobile computing device 606 via a form of wired or wireless communication comprising, for example, wireless telephony, Wi-Fi, Bluetooth™, ZigBee, IR, USB, HDMI, Ethernet, or any other form of data communication. In another embodiment, the three-dimensional reconstruction 306 may be generated in a processor internal to the mobile computing device 606 based at least in part on data transmitted from the scanning device 100 that may be used in generating the three-dimensional reconstruction 306 and/or rendering the first video stream 303a or the second video stream 303b.
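As one non-limiting possibility among the transports enumerated above, reconstruction data could be serialized and transmitted to the mobile computing device 606 over a TCP connection; the host, port, and length-prefixed framing in this sketch are hypothetical choices rather than a protocol prescribed by the present disclosure.

```python
# A minimal sketch of sending serialized reconstruction data over TCP;
# the host, port, and payload framing are hypothetical choices.
import socket
import struct
import numpy as np

def send_points(points, host="192.168.0.10", port=5000):
    """Send an Nx3 float32 point array, length-prefixed, to a remote viewer."""
    payload = points.astype(np.float32).tobytes()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack("!I", len(payload)))  # 4-byte length header
        sock.sendall(payload)

send_points(np.zeros((100, 3)))  # a hypothetical 100-point cloud
```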
Although FIG. 6 depicts a particular mobile computing device 606, it is understood that any computing device capable of rendering one or both of the video streams 303 may be employed without departing from the scope of the present disclosure.
Referring next to FIG. 7, shown is an example of the scanning device 100 in data communication with a monitor 706 according to various embodiments of the present disclosure.
As depicted in FIG. 7, the first video stream 303a and/or the second video stream 303b may be rendered in the monitor 706 in addition to, or instead of, the display screen 118 of the scanning device 100.
According to various embodiments, the three-dimensional reconstruction 306 may be generated in a processor internal to the scanning device 100 and communicated to the monitor 706 via a form of wired or wireless communication comprising, for example, wireless telephony, Wi-Fi, Bluetooth™, ZigBee, IR, USB, HDMI, analog video, Ethernet, or any other form of data communication. In another embodiment, the three-dimensional reconstruction 306 may be generated in a processor internal to the monitor 706 based at least in part on data transmitted from the scanning device 100 that may be used in generating the three-dimensional reconstruction 306 and/or rendering the first video stream 303a or the second video stream 303b.
Although FIG. 7 depicts a particular monitor 706, it is understood that any external display device capable of rendering one or both of the video streams 303 may be employed without departing from the scope of the present disclosure.
Turning now to FIG. 8, shown is an example of the scanning device 100 in data communication with a head-mounted display device 806 according to various embodiments of the present disclosure.
As depicted in FIG. 8, the first video stream 303a and/or the second video stream 303b may be rendered in the head-mounted display device 806 worn by an operator of the scanning device 100.
According to various embodiments, the three-dimensional reconstruction 306 may be generated in a processor internal to the scanning device 100 and communicated to the head-mounted display device 806 via a form of wired or wireless communication comprising, for example, wireless telephony, Wi-Fi, Bluetooth™, ZigBee, IR, USB, HDMI, analog video, Ethernet, or any other form of data communication. In another embodiment, the three-dimensional reconstruction 306 may be generated in a processor internal to the head-mounted display device 806 based at least in part on data transmitted from the scanning device 100 that may be used in generating the three-dimensional reconstruction 306 and/or rendering the first video stream 303a or the second video stream 303b.
Although FIG. 8 depicts a particular head-mounted display device 806, it is understood that any wearable display device capable of rendering one or both of the video streams 303 may be employed without departing from the scope of the present disclosure.
Moving on to FIG. 9, shown is a flowchart that provides one example of the operation of a portion of the display application 900 according to various embodiments. It is understood that the flowchart of FIG. 9 provides merely an example of the many different types of functional arrangements that may be employed to implement the operation of the portion of the display application 900 as described herein. As an alternative, the flowchart of FIG. 9 may be viewed as depicting an example of elements of a method implemented in the scanning device 100 (FIG. 1) according to one or more embodiments.
Beginning with 903, a first video stream generated via at least one imaging device 115 (FIG. 1) may be accessed or otherwise generated. As discussed above, the first video stream 303a may comprise, for example, video images of an object subject to a scan captured via the at least one imaging device 115.
Next, in 906, a second video stream 303b comprising a three-dimensional reconstruction 306 (FIG. 3B) of the object subject to the scan may be accessed or otherwise generated.
In 909, the first video stream 303a and the second video stream 303b may be rendered in one or more display devices. According to various embodiments, both the first video stream 303a and the second video stream 303b may be rendered in the same display screen 118 of a scanning device 100, as depicted in FIG. 4. In other embodiments, the first video stream 303a and/or the second video stream 303b may be rendered in one or more displays external to the scanning device 100, such as the mobile computing device 606 (FIG. 6), the monitor 706 (FIG. 7), or the head-mounted display device 806 (FIG. 8).
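A hedged sketch of one way the two streams might be composed side by side into a single frame for one display follows; the frame dimensions are hypothetical, and any compositing approach may be employed.

```python
# A minimal sketch of compositing the first and second video streams
# side by side for a single display; the frames are hypothetical.
import numpy as np

def compose_side_by_side(frame_a, frame_b):
    """Stack two equal-height frames horizontally into one frame."""
    return np.hstack((frame_a, frame_b))

camera_frame = np.zeros((480, 640, 3), dtype=np.uint8)          # first stream
reconstruction_frame = np.zeros((480, 640, 3), dtype=np.uint8)  # second stream
combined = compose_side_by_side(camera_frame, reconstruction_frame)
print(combined.shape)  # (480, 1280, 3)
```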
In 912, a device used to render the first video stream 303a and/or the second video stream 303b may be monitored for user input to determine whether user input has been received. In the embodiment of a display device comprising a touch-screen display, the touch-screen display may be monitored to determine whether a touch of the surface of the display has been conducted by an operator of the scanning device 100. Similarly, in the embodiment of a display in data communication with one or more other input devices (e.g., mouse, keyboard, voice recognition device, gesture recognition device, or any other input device), the input devices may be monitored to determine whether an interaction with one or both of the video streams 303 has been conducted by an operator of the scanning device 100.
If user input has been detected, then, in 915, the first video stream 303a and/or the second video stream 303b may be manipulated according to the user input identified in 912. For example, in the event a user wants to rotate a view of the three-dimensional reconstruction 306 of the object subject to the scan, the user may initiate a swipe across a touch-screen display in which the three-dimensional reconstruction 306 is rendered. The video stream rendering the three-dimensional reconstruction 306 may modify the generated view of the three-dimensional reconstruction 306 accordingly. Thus, the rendering of the second video stream 303b in the touch-screen display is modified according to the user input identified in 912.
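To make 912 and 915 concrete, the following hedged sketch maps a horizontal swipe distance to a rotation of the view of the three-dimensional reconstruction 306; the sensitivity constant and function names are hypothetical, and any mapping of input to manipulation may be employed.

```python
# A minimal sketch of manipulating the reconstruction view in response
# to a swipe, as in 912 and 915; the sensitivity value is hypothetical.
import numpy as np

RADIANS_PER_PIXEL = 0.01  # hypothetical swipe sensitivity

def rotation_about_y(angle):
    """Rotation matrix for a yaw of `angle` radians about the y-axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def apply_swipe(view_matrix, swipe_dx_pixels):
    """Rotate the current view by an amount proportional to the swipe."""
    return rotation_about_y(swipe_dx_pixels * RADIANS_PER_PIXEL) @ view_matrix

view = np.eye(3)                               # initial view of the model
view = apply_swipe(view, swipe_dx_pixels=120)  # user swipes 120 px right
print(view)
```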
With reference to FIG. 10, shown is a schematic block diagram of the scanning device 100 according to an embodiment of the present disclosure. The scanning device 100 includes at least one processor circuit, for example, having a processor 1003 and a memory 1006, both of which are coupled to a local interface 1009. The local interface 1009 may comprise, for example, a data bus with an accompanying address/control bus or other bus structure as can be appreciated.
Stored in the memory 1006 are both data and several components that are executable by the processor 1003. In particular, a display application 900 is stored in the memory 1006 and executable by the processor 1003, as well as other applications. Also stored in the memory 1006 may be a data store 1012 and other data. In addition, an operating system may be stored in the memory 1006 and executable by the processor 1003.
It is understood that there may be other applications that are stored in the memory 1006 and are executable by the processor 1003 as can be appreciated. Where any component discussed herein is implemented in the form of software, any one of a number of programming languages may be employed such as, for example, C, C++, C#, Objective C, Java®, JavaScript®, Perl, PHP, Visual Basic®, Python®, Ruby, Flash®, or other programming languages.
A number of software components are stored in the memory 1006 and are executable by the processor 1003. In this respect, the term “executable” means a program file that is in a form that can ultimately be run by the processor 1003. Examples of executable programs may be, for example, a compiled program that can be translated into machine code in a format that can be loaded into a random access portion of the memory 1006 and run by the processor 1003, source code that may be expressed in proper format such as object code that is capable of being loaded into a random access portion of the memory 1006 and executed by the processor 1003, or source code that may be interpreted by another executable program to generate instructions in a random access portion of the memory 1006 to be executed by the processor 1003, etc. An executable program may be stored in any portion or component of the memory 1006 including, for example, random access memory (RAM), read-only memory (ROM), hard drive, solid-state drive, USB flash drive, memory card, optical disc such as compact disc (CD) or digital versatile disc (DVD), floppy disk, magnetic tape, or other memory components.
The memory 1006 is defined herein as including both volatile and nonvolatile memory and data storage components. Volatile components are those that do not retain data values upon loss of power. Nonvolatile components are those that retain data upon a loss of power. Thus, the memory 1006 may comprise, for example, random access memory (RAM), read-only memory (ROM), hard disk drives, solid-state drives, USB flash drives, memory cards accessed via a memory card reader, floppy disks accessed via an associated floppy disk drive, optical discs accessed via an optical disc drive, magnetic tapes accessed via an appropriate tape drive, and/or other memory components, or a combination of any two or more of these memory components. In addition, the RAM may comprise, for example, static random access memory (SRAM), dynamic random access memory (DRAM), or magnetic random access memory (MRAM) and other such devices. The ROM may comprise, for example, a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other like memory device.
Also, the processor 1003 may represent multiple processors 1003 and/or multiple processor cores and the memory 1006 may represent multiple memories 1006 that operate in parallel processing circuits, respectively. In such a case, the local interface 1009 may be an appropriate network that facilitates communication between any two of the multiple processors 1003, between any processor 1003 and any of the memories 1006, or between any two of the memories 1006, etc. The local interface 1009 may comprise additional systems designed to coordinate this communication, including, for example, performing load balancing. The processor 1003 may be of electrical or of some other available construction.
Similarly, the computing arrangement described above with respect to the scanning device 100 of FIG. 10 may also describe other computing devices discussed herein, such as the mobile computing device 606 (FIG. 6), the monitor 706 (FIG. 7), and the head-mounted display device 806 (FIG. 8), each of which may comprise a corresponding processor and memory.
Although the display application 900 and other various systems described herein may be embodied in software or code executed by general purpose hardware as discussed above, as an alternative the same may also be embodied in dedicated hardware or a combination of software/general purpose hardware and dedicated hardware. If embodied in dedicated hardware, each can be implemented as a circuit or state machine that employs any one of or a combination of a number of technologies. These technologies may include, but are not limited to, discrete logic circuits having logic gates for implementing various logic functions upon an application of one or more data signals, application specific integrated circuits (ASICs) having appropriate logic gates, field-programmable gate arrays (FPGAs), or other components, etc. Such technologies are generally well known by those skilled in the art and, consequently, are not described in detail herein.
The flowchart of FIG. 9 shows the functionality and operation of an implementation of portions of the display application 900. If embodied in software, each block may represent a module, segment, or portion of code that comprises program instructions to implement the specified logical function(s). If embodied in hardware, each block may represent a circuit or a number of interconnected circuits to implement the specified logical function(s).
Although the flowchart of FIG. 9 shows a specific order of execution, it is understood that the order of execution may differ from that which is depicted. For example, the order of execution of two or more blocks may be varied relative to the order shown. Also, two or more blocks shown in succession in FIG. 9 may be executed concurrently or with partial concurrence. It is understood that all such variations are within the scope of the present disclosure.
Also, any logic or application described herein, including the display application 900, that comprises software or code can be embodied in any non-transitory computer-readable medium for use by or in connection with an instruction execution system such as, for example, a processor 1003 in a computer system or other system. In this sense, the logic may comprise, for example, statements including instructions and declarations that can be fetched from the computer-readable medium and executed by the instruction execution system. In the context of the present disclosure, a “computer-readable medium” can be any medium that can contain, store, or maintain the logic or application described herein for use by or in connection with the instruction execution system.
The computer-readable medium can comprise any one of many physical media such as, for example, magnetic, optical, or semiconductor media. More specific examples of a suitable computer-readable medium would include, but are not limited to, magnetic tapes, magnetic floppy diskettes, magnetic hard drives, memory cards, solid-state drives, USB flash drives, or optical discs. Also, the computer-readable medium may be a random access memory (RAM) including, for example, static random access memory (SRAM) and dynamic random access memory (DRAM), or magnetic random access memory (MRAM). In addition, the computer-readable medium may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other type of memory device.
Further, any logic or application described herein, including the display application 900, may be implemented and structured in a variety of ways. For example, one or more applications described may be implemented as modules or components of a single application. Further, one or more applications described herein may be executed in shared or separate computing devices or a combination thereof. For example, a plurality of the applications described herein may execute in the same scanning device 100, or in multiple computing devices in a common computing environment. Additionally, it is understood that terms such as “application,” “service,” “system,” “engine,” “module,” and so on may be interchangeable and are not intended to be limiting.
Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
This application is related to U.S. patent application Ser. No. ______, filed on Oct. ______, 2013 (Attorney Docket No. 52105-1010) and entitled “Tubular Light Guide,” U.S. patent application Ser. No. ______, filed on Oct. ______, 2013 (Attorney Docket No. 52105-1020) and entitled “Tapered Optical Guide,” U.S. patent application Ser. No. ______, filed on Oct. ______, 2013 (Attorney Docket No. 52105-1040) and entitled “Fan Light Element,” U.S. patent application Ser. No. ______, filed on Oct. ______, 2013 (Attorney Docket No. 52105-1050) and entitled “Integrated Tracking with World Modeling,” U.S. patent application Ser. No. ______, filed on Oct. ______, 2013 (Attorney Docket No. 52105-1060) and entitled “Integrated Tracking with Fiducial-based Modeling,” U.S. patent application Ser. No. ______, filed on Oct. ______, 2013 (Attorney Docket No. 52105-1070) and entitled “Integrated Calibration Cradle,” and U.S. patent application Ser. No. ______, filed on Oct. ______, 2013 (Attorney Docket No. 52105-1080) and entitled “Calibration of 3D Scanning Device,” all of which are hereby incorporated by reference in their entirety.