The invention generally relates to head-mounted cameras, and more particularly to the alignment of the field of view of a head-mounted camera with the user's view, and/or with a head-mounted light.
The use of endoscopes or other cameras insertable into the body is necessary for some types of minimally-invasive (also called endoscopic) surgery. Such minimally-invasive surgery may be performed directly by a surgeon, or may be performed using a surgical robot such as the da Vinci® Surgical System of Intuitive Surgical (Sunnyvale, Calif.). An example of the type of minimally-invasive surgery that requires a camera to be inserted into the body to view the surgical site is totally endoscopic coronary artery bypass graft (TECAB) surgery. Whether a surgeon uses tools directly inserted by hand into the thoracic cavity to place the graft, or uses a surgical robot, a camera must view the site of attachment of the graft to the coronary artery or the aorta. The camera outputs video to one or more monitors, allowing the surgeon to view that video and control tools or the surgical robot to properly place and attach the graft. In addition, other healthcare professionals in the vicinity, such as but not limited to anesthesiologists, nurses, medtechs, and vendor representatives, can view the video and remain engaged with the procedure. Thus, even though minimally-invasive surgery is performed through small ports in the patient's body, multiple people in the operating room or other surgical location can easily view the procedure and its progress.
Paradoxically, open surgery is often difficult for anyone other than the surgeon and at most two attending nurses or other professionals to watch. The incisions in the patient may be large. The chest of the patient may be open. However, the surgeon is positioned adjacent to the patient, as is an attending nurse. Their bodies block the view of others. Further, even without the surgeon or attending nurse standing adjacent to the patient, it can be difficult to see the surgical site from even a few feet away. Further, the anesthesiologist is typically seated, as are other professionals in the operating room, and they are not positioned high enough to see into the surgical site. As a result, it can be difficult for people other than the surgeon and the attending nurse to remain engaged with the procedure, resulting in less-than-optimal care for the patient.
Cameras are known that mount to the surgical light handle commonly used in operating rooms. However, such cameras typically need to be frequently repositioned in use, which is inconvenient. The surgeon's head or hands often obscure the surgical field. The video quality is often insufficient due to the distance between the camera and the surgical field. Typically, such cameras do not provide audio, either. Large mobile camera systems on wheels/casters are also known, in which cameras are placed on long swivelable arms. Such cameras have the same issues as the light-handle-mounted cameras described above. In addition, such mobile camera systems have a large footprint in a confined operating environment, which can make their use challenging. Further, such devices may fall into the category of capital equipment and typically are expensive, restricting their adoption at hospitals.
A surgeon may wear a head-mounted camera and a head-mounted light during open surgery. The camera must be manually aligned with the surgeon's eyes, so that the camera views what the surgeon views. Conventionally, a nurse or other professional in the OR must perform the alignment, because the camera is not sterile and thus the surgeon cannot touch the camera without contaminating the sterile field. Similarly, alignment of the light with the camera must be performed the same way. Because someone has to perform the alignment for the surgeon, there is no guarantee that the alignment will actually align the camera with the surgeon's view, or the light with the camera's field of view.
Further, in non-surgical applications, there is a need for the ability to align the field of view of a user with the field of view of a head-mounted or body-mounted camera. For example, a skier, runner, cyclist or other athlete may wear a head-mounted or helmet-mounted camera such as the GOPRO® camera of GoPro, Inc. of San Mateo, Calif. The user may wish to use the video obtained with that camera for social media, broadcast media or training purposes, or for other purposes, and accordingly the field of view of the camera should be able to be aligned easily with the field of view of the user. Other non-surgical applications may include teaching, whether to a class in person or to a remote class. For example, an archaeologist performing a dig, or a geologist exploring a particular area, may wear a head-mounted camera as described above, and the current manual manner of aligning the camera's field of view with the user's field of view can be cumbersome and distract from the subject matter that the user is attempting to teach.
Thus, there is an unmet need for camera and lighting systems, for surgical as well as non-surgical use, that align the user's field of view with that of a head-mounted camera and/or with a head-mounted light, and that do not require manually touching and manipulating the camera.
According to some aspects of the invention, a method for aligning the field of view of a user with the field of view of a camera mounted on the user's head may include providing at least one head-mounted camera module that includes at least one sensor that in turn includes an image sensor, where the image sensor outputs video of an area; receiving a first input from a user to expect a second input relating to the field of view; receiving the second input relating to the field of view; and aligning the field of view of the user with the field of view of the camera.
According to some aspects of the invention, a method for aligning the field of view of a user with the field of view of a camera mounted on the user's head may include providing at least one head-mounted camera module that includes at least one sensor that in turn includes an image sensor, where the image sensor outputs video of an area; and at least one motorized gimbal attached to the camera module; receiving a first input from a user to expect a second input relating to the field of view; receiving the second input relating to the field of view; and aligning the field of view of the user with the field of view of the camera.
According to some aspects of the invention, a camera system mountable on the head of a user includes at least one head-mounted light module; a camera module associated with the light module; a motorized gimbal attached to the camera module; and a base to which the motorized gimbal is attached.
According to some aspects of the invention, a light module for a camera mounted on a user's head includes a heat sink including a cavity defined in a distal portion thereof, the cavity including an opening at a proximal end thereof, and a slot defined through a bottom wall thereof, the slot having a distal surface that is substantially planar; and an LED assembly including at least one printed circuit board including at least one LED package, wherein the printed circuit board is slidable into the slot such that the at least one LED package is positioned proximal to the opening at the proximal end of the cavity such that light emitted from the at least one LED package is transmitted through the opening.
According to some aspects of the invention, a head-mounted camera may be aligned with the user's ocular line of sight utilizing software. The head-mounted camera may include a high-resolution sensor, such as a 4K sensor, an 8K sensor, or a sensor with even higher resolution. The head-mounted camera, or a separate sensor associated with the head-mounted camera, may be adapted to sense gestural commands by the user. For example, a user wearing the head-mounted camera may simulate drawing a circle or other shape with a fingertip, fingertips or hand over an area at which the user wishes to look, or is currently looking. Software associated with the head-mounted camera or separate sensor senses that the user has made a specific gesture, and zooms the field of view and then pans and tilts to an area enclosed by or adjacent to the user's gesture. Because the sensor is a high-resolution sensor, the adjustment of the field of view need not be mechanical. Instead, software may digitally move the field of view of the camera by zooming to, panning and tilting the image, and focusing on the area enclosed by or adjacent to the user's gesture.
According to some aspects of the invention, a head-mounted camera may be aligned with the user's ocular line of sight utilizing hardware. The field of view of the camera may be adjusted by optically zooming the camera, and panning and/or tilting the camera via a motorized gimbal attached to the camera.
According to some aspects of the invention, a head-mounted light may be utilized in conjunction with the head-mounted camera. The head-mounted camera may be configured to sense the edges of an area illuminated by the head-mounted light. Software may then digitally move the field of view of the camera by zooming to, then panning and tilting, the image, and focusing on the brightly illuminated area. Alternately, the field of view of the camera is adjusted by optically zooming the camera, and panning and/or tilting the camera via a motorized gimbal attached to the camera.
According to some aspects of the invention, a light module for a camera mounted on a user's head, may include a heat sink that includes a cavity defined in a distal portion thereof, the cavity including an opening at a proximal end thereof, and a slot defined through a bottom wall thereof, the slot having a distal surface that is substantially planar; and an LED assembly including at least one printed circuit board including at least one LED package, where the printed circuit board is slidable into the slot such that the at least one LED package is positioned proximal to the opening at the proximal end of the cavity such that light emitted from the at least one LED package is transmitted through the opening.
The use of the same reference symbols in different figures indicates similar or identical items.
Referring to
Referring also to
Referring also to
The use of swappable lenses 130 eliminates that problem with prior art surgical lighting. Different swappable lenses 130 may be utilized with the light module 20, where each swappable lens 130 is associated with a different fixed spot diameter in the surgical field. The lens element 131 of each swappable lens 130 may be glass, or may be fabricated from any other suitable material. The swappable lenses 130 may be threaded with threads 132 that are configured to be received by light module threads 134. According to other embodiments, the swappable lenses 130 may be detachably connected to the light module 20 in any other suitable manner and with any other suitable mechanism, such as by a quick disconnect. The swappable lenses 130 optionally include a grippable ring 134 defined at an end or another location thereon. The grippable ring 134 may be rubberized or treated in a manner to increase friction when grasped by a user, to allow for convenient unscrewing of a swappable lens 130 and screwing in of another swappable lens 130. When a user wishes to decrease the spot diameter illuminated in the surgical field, the user detaches the swappable lens 130 currently attached to the light module 20, and attaches a different swappable lens associated with that smaller spot diameter. No iris is utilized, and as a result, the amount of light passing through the swappable lens 130 is unchanged. Consequently, because the same amount of light passes through the swappable lens 130 to a smaller spot diameter, the illuminance of that spot diameter in the surgical field is increased.
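As a simplified, non-limiting illustration of this relationship (assuming uniform illumination of the spot and neglecting transmission losses; the symbols E, Φ, A and d are introduced here for illustration only), the illuminance of the spot is the luminous flux passing through the swappable lens 130 divided by the spot area:

```latex
E \;=\; \frac{\Phi}{A} \;=\; \frac{\Phi}{\pi\,(d/2)^{2}}
```

Under this approximation, reducing the spot diameter d by half quadruples the illuminance E in the surgical field, because the same flux Φ is concentrated onto one quarter of the area.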
Referring also to
According to other embodiments, the camera module 30 may be attached to the head mount 4. The attachment between the camera module 30 and the head mount 4 may be accomplished in any suitable manner. According to one embodiment, the camera module 30 may be substantially fixed to the head mount 4. According to another embodiment, the camera module 30 may be movable relative to the head mount 4, such as via a swivel joint or other connection allowing for movement of the camera module 30 relative to the head mount 4. Where the camera module 30 is movable relative to the head mount 4, the camera module 30 may be lockable relative to the head mount 4 after the camera module 30 has been moved to a desired position. In such embodiments, the light module 20 may be attached directly to the camera module 30 in a manner such as described above with regard to the connection of the camera module 30 directly to the light module 20.
According to other embodiments, the light module 20 and camera module 30 may be integrated into a single module.
As seen in
Referring also to
An image sensor 34 is positioned in the camera module 30 relative to the liquid lens 32 to collect light from the liquid lens 32. Optionally, one or more intermediate lenses (not shown) may be placed in the optical path between the liquid lens 32 and the image sensor 34 in a multi-element structure. The image sensor 34 may be a high-resolution sensor, configured to output video in 4K, 8K or other high-resolution format.
According to some embodiments, the camera module 30 includes a time-of-flight sensor 36 in proximity to the liquid lens 32. According to some embodiments, the time-of-flight sensor 36 emits intermittent pulses of light, which may be generated by an LED, a laser, or any other suitable source. The time between pulses of light may be regular, or may be irregular and linked to motion of the camera module 30. The light emitted by the time-of-flight sensor 36 may be in the infrared range of wavelengths, according to some embodiments; according to other embodiments, the light emitted by the time-of-flight sensor 36 may be in a different range of wavelengths. The light emitted by the time-of-flight sensor 36 is reflected by objects in the field of view of the camera module 30, and a portion of that reflected light is received by the time-of-flight sensor 36. The time between emission of the light pulse by the time-of-flight sensor 36 and the sensing by the time-of-flight sensor 36 of light reflected from that light pulse by objects illuminated by the time-of-flight sensor 36 allows the distance between the time-of-flight sensor 36 and those objects to be calculated.
According to other embodiments, the time-of-flight sensor 36 emits light continuously. The amplitude of the emitted light is modulated, creating a light source of a sinusoidal form at a known and controlled frequency. The reflected light is phase-shifted, and the time-of-flight sensor 36 determines the phase shift of the reflected light to calculate the distance between the time-of-flight sensor 36 and objects illuminated by the time-of-flight sensor 36.
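The following is a simplified, non-limiting sketch of the two distance calculations described above. The function names, units, and the assumption of ideal, noise-free measurements are illustrative only and are not part of any particular time-of-flight sensor's interface.

```python
import math

SPEED_OF_LIGHT_M_S = 299_792_458.0  # meters per second

def distance_from_pulse(round_trip_time_s: float) -> float:
    """Pulsed mode: the emitted pulse travels to the object and back,
    so the one-way distance is half of the round-trip path length."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

def distance_from_phase(phase_shift_rad: float, modulation_freq_hz: float) -> float:
    """Continuous-wave mode: a phase shift of 2*pi corresponds to one full
    modulation period of round-trip travel, so distance is proportional to
    the measured phase shift (unambiguous only up to c / (2 * f_mod))."""
    return SPEED_OF_LIGHT_M_S * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)

# Example: a round-trip time of about 6.7 nanoseconds corresponds to an
# object roughly one meter from the sensor.
```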
According to other embodiments, the time-of-flight sensor 36 may be a lidar device. Regardless of which embodiment of the time-of-flight sensor 36 is utilized, the time-of-flight sensor 36 provides fast and precise measurements of the distance between the time-of-flight sensor 36 and the objects illuminated thereby; in surgical applications, for example, those objects are structures in a patient's body within the surgical field. The use of a time-of-flight sensor 36 in conjunction with a liquid lens 32 in the camera module 30 allows for very fast and accurate focusing on the area of the surgical field where the user is looking. The focusing provided by the combination of the time-of-flight sensor 36 and the liquid lens 32 may be continuous or near-continuous, maintaining the image of the objects in the field of view of the image sensor 34 in focus or very close to focus. Data from the time-of-flight sensor 36 may be routed through a microcontroller 33 and then transmitted to the liquid lens 32. According to some embodiments, the microcontroller 33 may process the range data received from the time-of-flight sensor 36 and then transmit focusing instructions directly to the liquid lens 32. According to other embodiments, one or more other components of the camera system 2 may perform such processing.
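A minimal, non-limiting sketch of such a focusing loop is shown below. The thin-lens approximation, the fixed base optical power, and the read_range_m and set_lens_power_d callables are illustrative assumptions and do not represent the actual firmware of the microcontroller 33 or the drive interface of the liquid lens 32.

```python
def focus_power_diopters(object_distance_m: float) -> float:
    """Thin-lens approximation: the additional optical power needed to focus
    at a given object distance is roughly 1 / distance (in diopters)."""
    BASE_POWER_D = 10.0  # assumed fixed power of the rest of the optical stack
    return BASE_POWER_D + 1.0 / max(object_distance_m, 0.05)

def focus_loop(read_range_m, set_lens_power_d, deadband_d=0.05):
    """Conceptual continuous-focus loop: read the latest time-of-flight range
    and command the liquid lens, skipping updates smaller than the deadband."""
    last_power = None
    while True:
        power = focus_power_diopters(read_range_m())
        if last_power is None or abs(power - last_power) > deadband_d:
            set_lens_power_d(power)
            last_power = power
```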
According to some embodiments, the camera module 30 may include an inertial sensor 38. The inertial sensor 38 may include one or more accelerometers. Advantageously, the inertial sensor 38 includes accelerometers that measure acceleration along each of three orthogonal axes. The inertial sensor 38 may include one or more gyroscopes, such as but not limited to MEMS gyroscopes. Advantageously, the inertial sensor 38 includes gyroscopes that measure rotation about each of three orthogonal axes.
According to some embodiments, the camera module 30 includes an image signal processor 40. The image signal processor 40 receives image data from the image sensor 34, as well as data from the time-of-flight sensor 36 and the inertial sensor 38. Data from the inertial sensor 38 may be routed through a serializer/deserializer 42 (described in greater detail below) outside of the camera module 30, and then transmitted back to the image signal processor 40. Alternately, data from the inertial sensor 38 is transmitted directly from the inertial sensor 38 to the image signal processor 40, without leaving the camera module 30. Alternately, data from the inertial sensor 38 may be routed in any other suitable manner that causes that data to reach the image signal processor 40.
The image signal processor 40 utilizes the information provided by the time-of-flight sensor 36 and the inertial sensor 38 to modify the data received from the image sensor 34 in order to reduce or eliminate shakiness in the image data received from the image sensor 34. Motion sickness can be experienced by a person who views a moving image on a screen. The more that a moving image is unstable, the greater the potential that a viewer may experience motion sickness upon viewing that moving image. Such motion sickness can result in nausea and vomiting, both of which are undesirable in a surgical setting. By integrating data from the image sensor 34, the time-of-flight sensor 36, and the inertial sensor 38 to reduce or eliminate shakiness in the moving images captured by the image sensor 34, the potential for motion sickness by a viewer is reduced or eliminated, and the image quality is enhanced. In addition, the continuous or near-continuous focusing provided by the combination of the time-of-flight sensor 36 and the liquid lens 32 causes the video experienced by a viewer to be in focus or close to in focus, further reducing the potential for a motion sickness effect that could be experienced by a viewer. The use of the liquid lens 32, the time-of-flight sensor 36, and the inertial sensor 38 in combination synergistically improves video stability and watchability.
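A greatly simplified, non-limiting sketch of this kind of sensor-assisted stabilization is shown below. The crop-based compensation, the small-angle pixel-shift model, and the parameter names are illustrative assumptions; the image signal processor 40 may use substantially more sophisticated techniques.

```python
import numpy as np

def stabilize_frame(frame: np.ndarray,
                    pan_rate_rad_s: float,
                    tilt_rate_rad_s: float,
                    dt_s: float,
                    focal_length_px: float,
                    margin_px: int = 64) -> np.ndarray:
    """Shift a crop window opposite to the measured head rotation so that the
    output video appears steadier than the raw sensor output."""
    # Small-angle approximation: image shift (pixels) ~ focal length * rotation angle.
    dx = int(round(focal_length_px * pan_rate_rad_s * dt_s))
    dy = int(round(focal_length_px * tilt_rate_rad_s * dt_s))
    # Limit the compensating shift to the border reserved around the crop.
    dx = max(-margin_px, min(margin_px, dx))
    dy = max(-margin_px, min(margin_px, dy))
    h, w = frame.shape[:2]
    return frame[margin_px - dy : h - margin_px - dy,
                 margin_px - dx : w - margin_px - dx]
```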
The image signal processor 40 may output data to a serializer/deserializer 42, which may be located in the camera module 30. The serializer/deserializer 42 transmits data to and receives data from a wearable unit 50. According to some embodiments, the serializer/deserializer 42 is connected to the wearable unit 50 via a Gigabit Multimedia Serial Link (GMSL) (Maxim Integrated Products, San Jose, Calif.) cable 43 and associated connectors. One GMSL connector may be provided in association with the camera module 30, and another GMSL connector may be provided in association with the wearable unit 50. The GMSL standard provides multistream support over a single cable, reducing the number of cables in the camera system 120. Further, the GMSL standard allows aggregation of different protocols in a single connection, while meeting hospital requirements or other locations' requirements for electromagnetic interference. According to other embodiments, the serializer/deserializer 42 is connected to the wearable unit 50 via a coax cable 44 or other cable, or wirelessly, and/or using a suitable standard other than GMSL.
The serializer/deserializer 42 may receive from the image signal processor 40 data that includes image data (such as in raw or Bayer format), inertial data from the inertial sensor 38, and/or time-of-flight data from the time-of-flight sensor 36, and then serialize that data for transmission to the wearable unit 50. The serializer/deserializer 42 may receive from the wearable unit 50 control data for the liquid lens 32 to adjust the liquid lens 32 for calibration or manual adjustments (without time-of-flight focus), firmware updates for the processors and sensors associated with the camera module 30, and/or other data.
According to other embodiments, the image signal processor 40 is located elsewhere than in the camera module 30. In such embodiments, data from the image sensor 34, the time-of-flight sensor 36, and the inertial sensor 38 is transmitted to the image signal processor 40 in any suitable wired or wireless manner. The components of the wearable unit 50, including one or more processors, may be distributed across two or more separate housings on the user, for balance or other considerations. Further, components described in this document as being located in the camera module 30 may instead be located in the wearable unit 50, and vice versa.
Referring also to
A battery 60 may be worn by the user. The battery 60 may be worn anywhere on the user's body and may be secured to the user's body in any suitable manner. According to some embodiments, the battery 60 may be most conveniently and comfortably placed about the user's waist or hips using a belt 126. According to other embodiments, the battery 60 may take the form of a backpack or other ergonomically-desirable configuration. Advantageously, the battery 60 is rechargeable, and easily detachable from the associated belt or other support that carries the battery 60. In this way, the battery 60 can be replaced quickly and easily with a fully-charged one if the battery 60 becomes depleted during a surgical procedure. According to other embodiments, the battery 60 is not rechargeable, or is integrated into and not detachable from the associated belt or other support.
The battery 60 is connected to one or more of the light module 20, the camera module 30 and the wearable unit 50, in order to supply power thereto. According to some embodiments, the battery 60 may be connected to one or more of the light module 20, the camera module 30 and the wearable unit 50 with separate, individual cables, in order to power one or more such components independently. According to other embodiments, the battery 60 may be connected directly to only one of the light module 20, the camera module 30 and the wearable unit 50, and the other modules are electrically connected to the module which receives power from the battery 60. In this way, the number of power cables required by the camera system 120 may be reduced. As one example, the wearable unit 50 receives power from the battery 60, and then distributes power to the light module 20, camera module 30, and any other components of the camera system 120.
Referring also to
The base station 70 may include one or more ports for coax, HDMI, Ethernet, or other connections. Those ports may be used to receive data from other cameras or sensors, and transmit data to a network, to one or more monitors, or other locations. Multiple individuals in proximity to the subject of the video may wear a camera assembly 2, and the data output from each camera assembly may be transmitted to the same base station 70, in the same manner as described above.
Optionally, referring also to
According to other embodiments, any camera that is useful for recording the particular subject of the video may be connected directly or indirectly to the base station 70, and may be recorded and utilized like any other input to the base station 70. As one example, a camera 92 may be mounted to the helmet of a bicyclist, motorcyclist or skier. As another example, a camera 92 may be included in, or provided as, glasses or sunglasses wearable by the user, such as the RAY-BAN® STORIES® smart glasses of Luxottica USA LLC S.p.A. of Milan, Italy. As another example, a camera 92 may be positioned in an ambulance to view a patient during transport. As another example, a camera 92 may be positioned in a hospital room, treatment room and/or diagnosis room. As another example, a camera 92 may be a standard body-mounted camera worn by an EMT, paramedic, firefighter or law enforcement officer.
Referring also to
Referring also to
Referring also to
Referring also to
Referring also to
The distal end of the heat sink 230 may be threaded or otherwise configured to receive a swappable lens 130, as described above. According to other embodiments, the distal end of the outer shell 232, or a structure in proximity to the distal end of the outer shell 232, is threaded or otherwise configured to receive a swappable lens 130. According to other embodiments, the lens 130 is fixed to the heat sink 230 or other component of the light module 220, and is not swappable.
A fan 260 may be mounted in the light module 220 above the heat sink 230. The fan 260 may be configured to pull air into the light module 220 or pull air out of the light module 220. In either case, referring also to
Referring also to
For transmission and/or receipt of data and/or for the receipt of commands, the motorized camera module gimbal 400 may be connected to the wearable unit 50 via a cable such as the GMSL cable 43. Alternately, the motorized camera module gimbal 400 may be connected to the serializer/deserializer 42, which in turn is connected to the GMSL cable 43 extending to the wearable unit 50. Data transmitted to the motorized gimbal 400 from the wearable unit 50 may include commands directed to the direction and amount of rotation of the pan motor and/or tilt motor. Data transmitted from the motorized gimbal 400 to the wearable unit 50 may include current rotational position data and/or other state data of the pan motor and/or tilt motor.
In use, one or more users put on one or more components of the camera system 120 as described above, in particular the camera assembly 2 that is worn on the user's head. Where the camera system 120 is used in an operating room, for example, each user may be any healthcare professional who is authorized to be in proximity to a patient, such as but not limited to a physician, nurse, medtech, EMT, paramedic, orderly, or vendor representative. The more users, the greater the flexibility of the camera system 120 and the greater the ability to switch between different views.
Where the camera system 120 is utilized for healthcare, it may be used in locations such as operating rooms, catheterization labs, treatment rooms, diagnosis rooms, emergency rooms, accident sites, and locations outside of a hospital or healthcare building. An example of the use of the camera system 120 for surgery in an operating room is described below, but this example does not limit the use of the camera system 120 or the environment in which the camera system 120 may be used. During surgery, the patient 200 may be positioned on an operating table 102 in the operating room. One or more monitors 104 may be positioned in the operating room 100, whether mounted permanently to a wall or other structure, or placed on stands that may be moved. One or more monitors 104 may be placed in a location outside the operating room 100, which may be adjacent to the operating room 100, may be in the same building and spaced apart from the operating room 100, or may be in a different building from the operating room 100. The base station 70 transmits video from the camera module 30 to one or more monitors 104. A user may utilize the tablet 80 to control video transmission from the base station 70 to the one or more monitors 104. As one example, the same video transmission may be sent to every monitor 104. As another example, at least one monitor 104 receives a different video transmission from the base station 70 than at least one other monitor 104. In this way, different views of the open surgery may be shown on different monitors 104. As one example, a surgeon and an attending nurse each may wear a camera assembly 2, and a camera 92 may be attached to a surgical tool 90 used in the procedure. In this example, three separate video streams are generated, and are received by the base station 70; each of those video streams may be shown at the same time on different monitors 104. Alternately, one or two of the three video streams may be shown on one or more different monitors 104, omitting one or two of the video streams. The tablet 80 and its user may be located in the operating room 100, or in a remote location, as long as the tablet 80 has a data connection to the base station 70. According to some embodiments, the base station 70 may be configured to livestream video and audio 71 via the internet or other communications network to remotely-located viewers. In this way, interested people, such as medical students or physicians, can view the procedure as the physician performs it. The livestream 71 may be one-way, in which viewers can view the livestream 71 but not interact with it, or two-way, in which one or more viewers can transmit audio and/or video themselves back to the base station 70. Two-way livestreaming 71 may be useful where specialist knowledge of a remotely-located physician would be useful, such that the remotely-located physician can provide helpful information to the physician performing the procedure. In accordance with some embodiments, all video and audio is livestreamed 71 from the base station 70, and the monitor or monitors 104 receive and show a livestream 71 received from the base station 70.
The user or users wearing one or more components of the camera system 120 acquire video of a subject with at least one head-mounted camera assembly 2. Where the camera system is used in an operating room 100, for example, that video may be acquired by directly viewing the surgical field during open surgery. Where the procedure includes an endoscopic or percutaneous component, that video may be acquired from viewing the control and/or display elements associated with the endoscopic or percutaneous component of the procedure. In this way, the viewer of the video from the camera system 120 can obtain greater knowledge of the overall procedure, which may be useful from an instructional standpoint and also from the standpoint of retaining a record of the particular procedure performed on that particular patient. The user or users of the camera system 120 look wherever he, she or they would look to perform the procedure in the absence of the camera system 120. It is up to the user of the tablet 80 to select and control the video stream or streams that are output to the monitor or monitors 104 and/or livestreamed 71 outward by the base station.
Video acquired by each user's head-mounted camera assembly 2 may be stabilized by the image signal processor 40 associated with that head-mounted camera assembly 2. Then, that stabilized video is transmitted to and received by the base station 70. Another user, who may or may not be wearing one or more components of the camera system 120, controls the video output from the base station 70, such as via a tablet 80, as described above. The video output may be controlled to appear on one or more monitors 104 inside and/or outside a room, may be controlled to stream to recipients outside the room, and/or may be controlled to be saved locally or remotely.
According to some embodiments, one or more of the video streams received by the base station 70 are saved for later viewing. Such one or more video streams may be saved at the base station 70 itself, and/or on removable media associated with the base station 70. According to some embodiments, all video streams received by the base station 70 are stored. In some embodiments, a user may make that saved video available to others, such as via social media. The storage of video in a manner that can be viewed by others on demand is referred to in this document as “sharing” video. In some embodiments, in which the camera system is used in a hospital operating room 100, a record of the surgical procedure thus may be saved by the hospital, the surgeon, and/or others for legal, regulatory and/or compliance purposes. The saved video streams may be saved in a system that allows for access by other doctors, medical students, or the public, for learning and educational purposes. Such video storage and sharing may be particularly useful for medical students at a time such as during the COVID-19 pandemic, in which in-person learning may be limited or suspended altogether.
While certain examples above describe the use of a camera system 120 in an operating room, a user may utilize the camera system 120 in any other suitable location and for any other suitable purpose. Regardless of the particular location, the camera system 120 functions substantially as described above. For example, the camera system 120 may be used in a hospital room, a treatment room, a field hospital, an emergency room, in the field at an accident site, in a veterinary hospital, or any other suitable location. As another example, the camera system 120 may be used by scientists at a site of exploration in order to transmit detailed video to viewers who may be located a significant distance away. As another example, the camera system 120 may be used by one or more mechanics in the course of diagnosing and repairing damaged vehicles in or out of a garage, whether to create videos useful for training other mechanics, to obtain expert advice from other mechanics at a distant location, or for other reasons.
As another example, the camera system 120 may be utilized by an EMT or paramedic at an accident site. The base station 70 may be located in an ambulance, and may be capable of transmitting video and other data via any suitable communication technology, such as cellular network data service. When used by an EMT or paramedic at an accident site or other site where emergency treatment of a patient is necessary, two-way livestreaming 71 may be useful, because such two-way livestreaming would allow a remotely-located doctor to provide instructions to the EMT or paramedic based on the content of the livestream 71.
According to some embodiments, software may be utilized to align the field of view of the camera module 30 with the user's ocular line of sight. The software may reside in the wearable unit 50, in the camera module 30, and/or elsewhere. Referring to
At box 504, the user provides gestural input to the process 500. The gestural input may be drawing a virtual shape in the air by a fingertip or fingertips of the user around an area of interest. The virtual shape may be a general circle, where the field of view may be generally encompassed by the generally circular shape. The terms “general circle” and “generally circular” refer to the fact that no user can gesture a geometrically perfect circle, and that it is unlikely the gestured circle is a completely closed geometrically perfect circle.
According to other embodiments, instead of voice activation at box 502, the process 500 checks input from the camera module 30 and/or other sensor constantly, or effectively constantly at very short intervals, for the gestural command. Such embodiments may be more power-intensive and/or more computationally intensive, but may provide additional utility for the user in certain applications where voice activation is difficult, such as applications in noisy areas or in areas where quiet is particularly important.
The process moves to box 506, at which the camera module 30 registers and recognizes the movement of the user's hand and records that movement. Put another way, the process recognizes the gestural input of box 504 by utilizing the camera module 30 to detect that input. Gesture recognition may be performed with custom or customized software, or with off-the-shelf software, such as the Motion Gestures software of Motion Gestures, Kitchener, Ontario, Canada; the TouchFree software of Ultraleap, Mountain View, Calif., or other off-the-shelf software.
The process moves to box 508, where the software translates the movement of the user's hand that was recognized in box 506 into a vector-based graph having X and Y coordinates. That is, the shape of the virtual general circle, or other shape, made by the user in box 504 is converted into a mathematical and/or graphical form that can be further analyzed mathematically. Alternately, the software may translate the movement of the user's hand that was recognized in box 506 into any other type of graph that is usable in subsequent boxes. Next, in box 510, the software determines the center of the general circle or other shape that was generated in box 508. The center need not be the precise center of the general circle or other shape, such that an approximation of the center point is sufficient. Analysis of the vector-based graph of the shape, in X and Y coordinates, provides that actual or approximate center point.
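A minimal, non-limiting sketch of the translation and center-finding steps of boxes 508 and 510 is shown below, assuming that the recognized gesture is available as a list of sampled fingertip positions in image coordinates; the point format and the centroid approximation are illustrative assumptions.

```python
def gesture_center(points: list[tuple[float, float]]) -> tuple[float, float]:
    """Approximate the center of the gestured 'general circle' as the mean
    of the sampled fingertip positions in X and Y."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def gesture_bounds(points: list[tuple[float, float]]) -> tuple[float, float, float, float]:
    """Bounding box of the gestured shape, used to set the extent of the zoom."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)
```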
Next, in box 512, the image from the camera module 30 is zoomed into, such that the outer boundaries of the visual area are substantially aligned with the boundaries of the vector graph. As described above, the camera module 30 includes an image sensor 34 that is a high-resolution sensor capable of 4K, 8K or higher image resolution. Because the resolution of the image sensor 34 is so high, the alignment of the field of view of the camera module 30 with the user's field of view is performed in software by zooming into the area generally encompassed by, or otherwise associated with, the gesture sensed in box 504. Where that gesture is generally a circle, as described above, such zooming is performed to generally encompass the area within the gestured circle. The camera module 30 itself performs no zooming; rather, the high-resolution output of the camera module 30 remains the same, and that output is zoomed into in software. After the zooming, the image is panned and tilted in software so that the center of the zoomed-into area is aligned with the center of the vector graph circle or other vector graph shape. The process 500 is then complete for that particular field-of-view alignment. If the user wishes to change the field of view, the process 500 begins again at box 502.
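A simplified, non-limiting sketch of the digital zoom, pan and tilt of box 512 is shown below, implemented as a crop of the high-resolution frame around the gestured region followed by a rescale to the output resolution. The helper names and the use of OpenCV are illustrative assumptions.

```python
import cv2
import numpy as np

def digital_zoom_to_region(frame: np.ndarray,
                           bounds: tuple[float, float, float, float],
                           out_size: tuple[int, int] = (1920, 1080)) -> np.ndarray:
    """Crop the full-resolution frame to the gestured region (digital pan,
    tilt and zoom) and rescale the crop to the desired output resolution."""
    x_min, y_min, x_max, y_max = (int(v) for v in bounds)
    h, w = frame.shape[:2]
    # Clamp the region to the frame so the crop is always valid.
    x_min, y_min = max(0, x_min), max(0, y_min)
    x_max, y_max = min(w, x_max), min(h, y_max)
    crop = frame[y_min:y_max, x_min:x_max]
    if crop.size == 0:
        return frame  # degenerate gesture: fall back to the full frame
    return cv2.resize(crop, out_size, interpolation=cv2.INTER_LINEAR)
```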
According to some embodiments, hardware may be utilized in combination with software to align the field of view of the camera module 30 with the user's ocular line of sight. As described above, the software may reside in the wearable unit 50, in the camera module 30, and/or elsewhere. Referring to
After box 610, the process moves to box 612. At box 612, the software commands the camera module 30 to zoom optically until the outer boundaries of the visual area are substantially aligned with the boundaries of the vector graph. The alignment of the field of view of the camera module 30 with the user's field of view is performed in hardware by zooming optically into the area generally encompassed by, or otherwise associated with, the gesture sensed in box 604. Where that gesture is generally a circle, as described above, such zooming is performed to generally encompass the area within the gestured circle. Optical zooming is performed by physically moving one or more lenses using a motor. Such physical motion of components of the camera module 30 is part of the hardware alignment of the camera field of view. Alternately, the zooming performed in box 612 may be performed in software, as described above, such that the high-resolution output of the camera module 30 remains the same, and that output is zoomed into in software.
After the zooming of box 612, in box 614 the image is panned and tilted so that the center of the zoomed-into area is aligned with the center of the vector graph circle or other vector graph shape. The software transmits one or more commands to one or more motors within the camera gimbal 400 to cause the camera module 30 to pan and/or tilt as required to have the center of the zoomed area align with the center of the vector circle. Such commands cause the motors of the camera gimbal 400 to pan and/or tilt the camera module 30 physically so that the center of the zoomed area aligns with the center of the vector circle. That is, in box 614, the panning and/or tilting are performed mechanically by commanding the camera gimbal 400 to physically move the camera module 30.
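A minimal, non-limiting sketch of how the pan and tilt angles commanded in box 614 might be derived is shown below, converting the pixel offset between the frame center and the gesture center into gimbal angles using a pinhole-camera approximation. The function name and the assumption of a known focal length in pixels are illustrative and do not represent the actual command protocol of the camera gimbal 400.

```python
import math

def gimbal_angles_for_target(target_px: tuple[float, float],
                             frame_size: tuple[int, int],
                             focal_length_px: float) -> tuple[float, float]:
    """Return (pan, tilt) angles in degrees that would move the optical axis
    from the center of the frame toward the target pixel."""
    cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
    pan = math.degrees(math.atan2(target_px[0] - cx, focal_length_px))
    tilt = math.degrees(math.atan2(target_px[1] - cy, focal_length_px))
    return pan, tilt

# Example: a target 300 pixels to the right of center, with an assumed focal
# length of 1500 pixels, calls for a pan of roughly 11 degrees.
```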
The process 600 is then complete for that particular field-of-view alignment. If the user wishes to change the field of view, the process 600 begins again at box 602.
According to some embodiments, the camera module 30 may be utilized in conjunction with a light module 20, 220. Rather than utilizing gestural sensing, the software may sense the edges of the light emitted from the light module 20, 220 onto an area, such as but not limited to a patient in an operating room. The light module 20, 220 emits a strong light, so that the difference in brightness between the illuminated area and the non-illuminated area is strong and well-defined.
In such embodiments, the process 500 or the process 600 described above generally applies, depending on whether the camera field of view is to be aligned in software alone or with hardware, respectively, but with the following variations. Boxes 504, 604 are not utilized. In boxes 506, 606, the edges of the area illuminated by the light module 20, 220 are sensed, instead of a gesture made by the user. In boxes 508, 608, the software translates the shape of the illuminated area to a vector-based graph with X and Y coordinates. In boxes 510, 610, software determines the center of the vector circle corresponding to the illuminated area. Where the alignment of the camera field of view is performed in software, the process 500 continues to box 512. Where the alignment of the camera field of view is performed with hardware, the process 600 continues to box 612.
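A simplified, non-limiting sketch of this light-spot variant of boxes 506/606 through 510/610 is shown below, locating the brightly illuminated area by thresholding the image and taking the centroid and bounding box of the largest bright region. The threshold value and the use of OpenCV are illustrative assumptions.

```python
import cv2
import numpy as np

def illuminated_region(frame_bgr: np.ndarray, threshold: int = 220):
    """Return the (x, y) center and bounding box of the largest brightly
    illuminated region in the frame, or None if no bright region is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    spot = max(contours, key=cv2.contourArea)
    m = cv2.moments(spot)
    if m["m00"] == 0:
        return None
    center = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return center, cv2.boundingRect(spot)
```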
As used in this document, and as customarily used in the art, terms of approximation, including the words “substantially” and “about,” are defined to mean normal variations in the dimensions and other properties of finished goods that result from manufacturing tolerances and other manufacturing imprecisions, the normal variations in the measurement of such dimensions and other properties of finished goods, and normal tolerances and deviations experienced within a process of use.
While the invention has been described in detail, it will be apparent to one skilled in the art that various changes and modifications can be made and equivalents employed, without departing from the present invention. It is to be understood that the invention is not limited to the details of construction, the arrangements of components, and/or the method set forth in the above description or illustrated in the drawings. Statements in the abstract of this document, and any summary statements in this document, are merely exemplary; they are not, and cannot be interpreted as, limiting the scope of the claims. Further, the figures are merely exemplary and not limiting. Topical headings and subheadings are for the convenience of the reader only. They should not and cannot be construed to have any substantive significance, meaning or interpretation, and should not and cannot be deemed to indicate that all of the information relating to any particular topic is to be found under or limited to any particular heading or subheading. Therefore, the invention is not to be restricted or limited except in accordance with the following claims and their legal equivalents.
Number | Date | Country
---|---|---
63292537 | Dec 2021 | US