This invention pertains generally to unmanned aerial vehicles and more particularly to a system and method for transmitting and displaying messages by means of unmanned aerial vehicles.
The market for Unmanned Aerial Vehicles (UAVs), otherwise known as drones, is growing, and their use for multiple purposes is expected to expand significantly within the next few years. UAVs can be used for any number of purposes. For example, UAVs can fly over parts of land to provide aerial views for planning purposes or can be used for recreational purposes.
The use of a system of drones connected to a central computer system, which is used to plan and control the operation of the system of drones, has been disclosed and taught by patents owned by the current inventors—U.S. Pat. No. 9,454,157, the disclosure of which is hereby fully incorporated by reference, and U.S. Pat. No. 9,454,907, the disclosure of which is hereby fully incorporated by reference.
In addition, the use of electronic message boards, aerial banners and billboards is well known as well. Aerial banners, billboards or electronic message boards can be utilized to present information to people in public areas, such as sporting arenas, on buildings, or along highways. However, there is a limitation with the current art for billboards and electronic message boards due to the stationary nature of the billboards and message boards. People must plan ahead of time to determine an optimal location where the billboard will be viewed by people. This is an inefficient system because there is no guarantee that the message on the billboard will be viewed by people. Also, the message on the billboard may be viewed by people, but there is no guarantee that the individuals viewing the message are a part of the message's target audience. For aerial banners and advertisements, such as blimps and helicopter- or plane-based banners, it is difficult to determine who is viewing the advertisements, and an aerial-based advertisement requires pre-planning, which does not allow for changes to location or content based on viewer data.
What is needed is a system and method for utilizing UAVs to correct the deficiencies of standard billboards, aerial advertisements and electronic message boards. What is needed is a system and method for utilizing UAVs to determine locations of individuals for the presentation of messages and to determine the demographics of a particular group of people for the purposes of determining a number of UAVs to send to a specific location and a particular message for the drones to display.
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed innovation. This summary is not an extensive overview, and it is not intended to identify key/critical elements or to delineate the scope thereof. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
The invention is directed toward a computer implemented method for executing a flight mission by one or more unmanned aerial vehicles. The method is performed on a computer system comprising two or more microprocessors and two or more nonvolatile memory units, wherein at least one of the two or more microprocessors and one of the two or more nonvolatile memory units is integral to an unmanned aerial vehicle further comprising a flight means and a display screen. The two or more nonvolatile memory units store instructions which, when executed by the two or more microprocessors, cause the computer system to perform operations comprising: receiving, from a first unmanned aerial vehicle at an audience location, a data stream containing audience location information; analyzing the audience location information to determine the presence of one or more people at the audience location; receiving an instruction setting a predetermined number of people; determining whether a number of people at the location is equal to or greater than the predetermined number of people; retrieving a message data file from a database; transmitting the message data file to the first unmanned aerial vehicle; and displaying, by the first unmanned aerial vehicle, the message data file on a display screen integral to the first unmanned aerial vehicle.
The method may further comprise determining demographic information of one or more people at the location. In another embodiment the method further comprises determining an appropriate second message file based on the demographic information; retrieving a second message data file from the database; transmitting the second message data file to the first unmanned aerial vehicle; and displaying, by the first unmanned aerial vehicle, the second message data file on a display screen integral to the first unmanned aerial vehicle.
The method may further comprise performing a scanning method on one or more people during a period of time. The scanning method is selected from a group consisting of: taking a picture of one or more people with a camera integral to the first unmanned aerial vehicle; detecting motion within a predetermined distance of the first unmanned aerial vehicle by means of a motion sensor integral to the first unmanned aerial vehicle; detecting body heat of one or more people with an infrared sensor integral to the first unmanned aerial vehicle; and creating a virtual geographic boundary a predetermined distance from the first unmanned aerial vehicle and detecting the presence of one or more location-aware devices within the virtual geographic boundary.
The method may further comprise determining one or more metrics for a message displayed by the first unmanned aerial vehicle, wherein the one or more metrics is selected from a group comprising: a number of people viewing the first unmanned aerial vehicle, a gender of one or more people viewing the first unmanned aerial vehicle, an age of one or more people viewing the first unmanned aerial vehicle, an age range of two or more people viewing the first unmanned aerial vehicle, and a time period during which a message is displayed on the first unmanned aerial vehicle.
The method may further comprise receiving a visual input from a person at the audience location; creating a visual input data file; and transmitting the visual input data file from the first unmanned aerial vehicle to a server computer. The method may further comprise broadcasting an audio file through a speaker integral to the first unmanned aerial vehicle.
The method may further comprise scanning, with a camera integral to the first unmanned aerial vehicle, at least a portion of a face of a person; creating a facial image file; comparing the facial image file to a set of previously stored reference files, wherein each of the reference files comprises facial characteristic information of one or more people; and determining a match between the facial image file and the reference file. Additionally, the method may further comprise determining an appropriate second message file based on information contained in a reference file which matches the facial image file; retrieving a second message data file from the database; transmitting the second message data file to the first unmanned aerial vehicle; and displaying, by the first unmanned aerial vehicle, the second message data file on a display screen integral to the first unmanned aerial vehicle.
The method may further comprise determining an appropriate number of unmanned aerial vehicles to display the message data file to a number of people at the audience location; determining a location of one or more second unmanned aerial vehicles; respectively determining one or more geographic flight paths for the one or more second unmanned aerial vehicles and transmitting flight path instructions to the one or more second unmanned aerial vehicles. Each of the one or more geographic flight paths includes a starting point, the starting point being a location of a second unmanned aerial vehicle, and a geographic ending point, the geographic ending point being the audience location.
Alternatively, the invention is directed toward a computer implemented method of: scanning, by an unmanned aerial vehicle, one or more people at an audience location; creating, by the unmanned aerial vehicle, a scan data file; transmitting, by the unmanned aerial vehicle, the scan data file to a server computer; receiving, by the server computer, the scan data file; analyzing information contained in the scan data file; selecting, by the server computer, a message data file from a database; transmitting, by the server computer, the message data file to the unmanned aerial vehicle; and displaying, by the unmanned aerial vehicle, the message data file on a display screen integral to the unmanned aerial vehicle.
The method may further comprise determining demographic information of one or more people at the audience location. The method may further comprise determining an appropriate second message file based on the demographic information; retrieving a second message data file from the database; transmitting the second message data file to the unmanned aerial vehicle; and displaying, by the unmanned aerial vehicle, the second message data file on a display screen integral to the unmanned aerial vehicle.
The method may further comprise broadcasting an audio file through a speaker integral to the unmanned aerial vehicle. Additionally, the step of scanning is selected from a group consisting of: taking a picture with a camera integral to the unmanned aerial vehicle; detecting motion within a predetermined distance of the unmanned aerial vehicle by means of a motion sensor integral to the unmanned aerial vehicle; detecting body heat of one or more people with an infrared sensor integral to the unmanned aerial vehicle; and creating a virtual geographic boundary a predetermined distance from the unmanned aerial vehicle and detecting the presence of one or more location-aware devices within the virtual geographic boundary.
The method may further comprise determining one or more metrics for a message displayed by the unmanned aerial vehicle, wherein the one or more metrics is selected from a group comprising: a number of people viewing the unmanned aerial vehicle, a gender of one or more people viewing the unmanned aerial vehicle, an age of one or more people viewing the unmanned aerial vehicle, an age range of two or more people viewing the unmanned aerial vehicle, and a time period during which a message is displayed on the unmanned aerial vehicle.
Alternatively, the invention is directed toward a method of: receiving, by an unmanned aerial vehicle, visual input information from one or more people; creating, by the unmanned aerial vehicle, an image data file; transmitting, by the unmanned aerial vehicle, the image data file to a server computer; analyzing, by the server computer, the image data file to determine the visual input information; determining, by the server computer, a predetermined response message to the visual input information; selecting, by the server computer, a message data file from a database; transmitting, by the server computer, the message data file to the unmanned aerial vehicle; and displaying, by the unmanned aerial vehicle, the message data file on a display screen integral to the unmanned aerial vehicle.
The method may further comprise determining, by the server computer, that the visual input information comprises a QR code. Alternatively, the method may further comprise determining, by the server computer, that the visual input information comprises at least a portion of a person's face. Additionally, the method may further comprise broadcasting an audio file through a speaker integral to the unmanned aerial vehicle.
Still other embodiments of the present invention will become readily apparent to those skilled in this art from the following description, wherein there is shown and described the embodiments of this invention, simply by way of illustration of the best modes suited to carry out the invention. As will be realized, the invention is capable of other different embodiments, and its several details are capable of modifications in various obvious aspects, all without departing from the scope of the invention. Accordingly, the drawings and descriptions are to be regarded as illustrative in nature and not as restrictive.
Various exemplary embodiments of this invention will be described in detail, wherein like reference numerals refer to identical or similar components, with reference to the following figures, wherein:
The claimed subject matter is now described with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced with or without any combination of these specific details, without departing from the spirit and scope of this invention and the claims.
As used in this application, the terms “component”, “module”, “system”, “interface”, or the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component.
The invention is directed toward a system and method for managing missions being executed by UAVs. Referring to
The server computer 100 is communicatively coupled to a database 108. The database 108 stores all information about every UAV 300a, 300b, 300c connected to the server computer 100. The database 108 may store any relevant information pertaining to the system such as UAV location, missions being performed by each UAV, mission history, battery power levels of each UAV, and time for execution of any mission. In addition the database 108 may store messages for transferring to UAVs to display. The messages stored on the database 108 may be any type of messages. The messages stored on the database 108 may be text messages, video messages, audio messages, audiovisual messages, or any other type of content. The messages may store any type of content, such as advertising message, public service announcement, weather warning, time, temperature, commercial, video, or any other type of media content.
Users may interact with the server computer 100 directly or through a client device 50. The client device 50 may be any type of computerized device utilized by a user to communicate with the server computer 100. The client device 50 may be a desktop computer, a laptop computer, a tablet computer, a wireless cellular phone, or any other type of communicative computerized device.
The server computer 100 stores and executes a series of software modules, including a communication module 102, a mission module 104, a flight path computation module 106, and a message module 110. The communication module 102 determines the location of a UAV 300 and transmits instructions to be executed by a UAV 300. Each UAV 300a, 300b, 300c has a specific communication ID number which permits the communication module 102 to track and send specific instructions to each respective UAV 300a, 300b, 300c. The communication ID number can be any number assigned to each respective UAV 300a, 300b, 300c that permits the system to independently identify each respective UAV 300a, 300b, 300c, such as a unique IP address. The communication module 102 may communicate with a UAV 300 through a charging station 200 or directly through a network connection, such as the internet or a cellular connection.
The mission module 104 computes and tracks each mission executed by each UAV 300. When a user assigns a mission to the system to be executed, the mission module 104 determines the start point and end point of the mission and which respective UAVs 300a, 300b, 300c are needed to execute the mission. The mission module 104 then determines the specific instructions to send to the respective UAVs 300a, 300b, 300c and assigns the mission to the proper UAVs 300a, 300b, 300c.
The flight path computation module 106 determines the proper flight path for each UAV 300a, 300b, 300c to maximize efficiency in time and battery life for each UAV 300a, 300b, 300c. The flight path computation module 106 determines the proper flight path from the starting point to the end point of the mission. The flight path computation module 106 determines the charging stations 200a, 200b, 200c which are along the proper flight path which may be used by the specific UAVs executing the mission.
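The patent does not specify how the flight path computation module 106 selects charging stations along a route. One illustrative sketch, assuming planar coordinates and hypothetical helper names, filters the known charging stations to those lying within a corridor around the straight-line path from start to end:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b (planar coordinates)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to the endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def stations_along_path(start, end, stations, corridor=1.0):
    """Return charging stations within `corridor` units of the straight flight path."""
    return [s for s in stations if point_segment_distance(s, start, end) <= corridor]
```

An actual implementation would also weigh battery levels and station availability, which the sketch omits.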
The message module 110 tracks the messages displayed by the respective UAVs 300a, 300b, 300c, determines the proper message to send to a UAV 300, tracks the analytics of any message displayed by a UAV, and otherwise tracks and manages the usage, storage, and operations of all messages sent through the system.
Referring to
The sensor module 322 of the UAV 300 is a means to carry single or multiple sensors by the UAV 300. The sensor module 322 consists of sensors, a means to carry the sensors, and a means to ready the appropriate sensor for the UAV 300 to place. The sensor module 322 may comprise a mechanism that carries multiple sensors and, based on the commands sent by the MCU 304, selects the appropriate sensors to be made ready for placement by the placement module 320. In other embodiments the sensor module 322 may utilize sensors directly such that the UAV 300 may take measurements directly while in flight without the placement of a sensor. The measurements taken by the sensor module may include motion detection, light level detection, weather or precipitation detection, wind detection, or any other measurement of an attribute in the vicinity of the UAV 300 during flight or after landing of the UAV 300.
The placement module 320 of the UAV 300 is a means to place the sensors that are carried by the sensor module 322. The placement module 320 may comprise a screw or another type of rod that, by commands sent by the MCU 304, extends and retracts, placing the sensors fed by the sensor module 322. The placement module 320 may also comprise a gas cylinder, or another means of projecting sensors, that pushes the sensors fed by the sensor module 322 to their appropriate placement location.
The flight means 310 of the UAV 300 is any type of motorized component or multiple components configured to generate sufficient lift to get the UAV 300 into flight. The flight means 310 may comprise one or more horizontal propellers. In other embodiments, the flight means 310 may comprise one or more sets of wings and a vertical propeller. In other embodiments the flight means 310 may comprise one or more sets of wings and a combustion jet engine.
The UAV 300 further comprises a camera 314. The camera 314 may be a still photograph camera or a video camera. The camera 314 takes visual images from the point of view of the UAV and feeds information back to the server computer 100, communication hub 120, and/or client device 50. The UAV 300 further comprises a display screen 316 and a speaker 318. The display screen 316 is any type of electronic display screen, such as LCD, LED, projector, OLED, or any other type of component configured to create a visual display. The speaker 318 is any type of component configured to play and broadcast an audio file to be heard by individuals in the vicinity of the UAV 300.
The UAV 300 further comprises an attachment means 324. The attachment means 324 is any type of physical or electrical means by which the UAV 300 may mount itself on a structure to conserve energy stored in the power source 306, since the flight means 310 would not need to be operated. The attachment means 324 may be a mechanical adhesion, such as a clamp, screw, rope, bolt, Velcro, suction cups, glue, adhesive, temporary adhesive, or any other mechanical means to attach the UAV 300 to a physical structure. The attachment means 324 may be a chemical adhesion, such as the UAV 300 mixing two separately stored chemicals together to create an adhesive on a portion of the surface of the UAV 300. The adhesive is then used to attach the UAV 300 to a structure at a desired location. The attachment means 324 may also be through magnetic adhesion. In this embodiment the UAV 300 utilizes an electromagnet or magnet to attach the UAV 300 to a metal structure at a desired location. The attachment means 324 may also be electrostatic adhesion, which uses gecko-type adhesion to adhere the UAV 300 to a structure. The electrostatic adhesion uses a combination of embedded electrodes and directional dry adhesives to create van der Waals forces to adhere the UAV 300 to a structure.
The charging station 200 may be realized in any number of embodiments. Referring to
Referring to
The sensor encasement 250 also provides a uniform size and shape for each sensor 260, permitting the UAV 300 to be configured in a simple design and easily interact with each sensor 260 regardless of the type, size, and shape of each individual sensor 260. As illustrated in
As shown in
The system may determine the relative size of an audience 400 in many ways. First, the system may have a library of reference photographs displaying known numbers of individuals. The system may compare an image of an audience 400 to images of known group sizes to determine the closest match. In addition, the system may recognize the form of each individual person in an image and perform a calculation to count each individual. Alternatively, the system may analyze only a portion of an image or scan of an audience 400, determine the specific number of individuals in that portion of the image, and then estimate the number of individuals in the entire audience 400 by extrapolating based on the number of such portions contained in the entire image or scan of the area. The calculations determining the size of the audience 400 may be performed by the server 100 or the UAV 300.
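The extrapolation approach described above can be reduced to a short calculation. This sketch is only an illustration, with a hypothetical function name; the patent does not prescribe a formula:

```python
def estimate_audience_size(people_in_sample, sample_fraction):
    """Extrapolate a full-audience count from a counted portion of the image.

    people_in_sample: individuals counted in the analyzed portion of the image
    sample_fraction: fraction of the full image that the portion covers (0 < f <= 1)
    """
    if not 0 < sample_fraction <= 1:
        raise ValueError("sample_fraction must be in (0, 1]")
    # Counting 12 people in a quarter of the image yields an estimate of 48.
    return round(people_in_sample / sample_fraction)
```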
As shown in
Referring to
Referring to
The UAV then determines the presence of one or more people 704. In another embodiment the UAV may scan and determine the presence of one or more people through “geo-fencing.” In this manner the UAV creates a virtual geographic boundary within a certain distance from the UAV. The UAV 300 or server 100 may then detect the presence of one or more location-aware devices (such as cellular phones) within the virtual geographic boundary.
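The "geo-fencing" embodiment above amounts to testing whether reported device coordinates fall within a radius of the UAV. A minimal sketch, assuming latitude/longitude device reports and hypothetical helper names (the patent does not specify the distance computation), uses the haversine great-circle formula:

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def devices_in_geofence(uav_pos, device_positions, radius_m):
    """Return the location-aware devices inside a circular geofence around the UAV."""
    return [d for d in device_positions if haversine_m(*uav_pos, *d) <= radius_m]
```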
Alternatively, the UAV may send information received from the camera or other sensors to the server and the server determines the presence of one or more people. The system may determine the presence of people through software programmed to recognize human shape or facial recognition software. The UAV then determines the presence of a predetermined number of people 706. The predetermined number of people is the number chosen by an operator for a UAV to present a message to an audience. For instance, if the predetermined number of people is ten, then the UAV will not display a message if the number of people present is nine or fewer. If the predetermined number of people is one, then the UAV will display the message when it recognizes the presence of a person. In other embodiments the server 100 determines the presence of a predetermined number of people.
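The threshold decision in step 706 is a straightforward comparison. As an illustration only, with a hypothetical function name:

```python
def should_display(people_count, predetermined_number):
    """Display the message only when the detected audience meets the operator's threshold."""
    return people_count >= predetermined_number
```

With a threshold of ten, an audience of nine suppresses the message and an audience of ten triggers it, matching the example in the text.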
The UAV then notifies the server of the presence of the predetermined number of people 708. Alternatively, the server determines the presence of the predetermined number of people via the image sent to the server by the UAV. The server then determines the message to be displayed to the audience 710. The server transmits the message to the UAV 712. The UAV then displays the message to the predetermined number of people 714.
Furthermore, the method of the invention is illustrated by the
The system may determine the demographics of the audience in a number of ways. To determine the racial make-up of an audience 400 the system may determine the skin color of the separate individuals, assign each individual a value based on the tone or color of an individual's skin, and group those with similar values together. The system may then calculate a percentage for each group as a part of the entire audience 400. Alternatively, the system may have images stored in a database with known racial demographics. The system may compare an image of an audience 400 to images with known racial make-ups to find the image with the closest match. To determine the age demographics of an audience 400 the system may utilize the height of the individuals to determine a relative age for each individual in an image. Those who are shorter are grouped into a younger age category while those who are taller are grouped into an older age category. The height of any specific individual can be determined by the system by triangulation or measuring the angle of inclination and the distance from the UAV 300 to the individual to execute a sine or tangent function and calculate the height of the individual. The system may also attempt to determine the hair color of individuals as well. If the system detects individuals with gray hair then the system will categorize those individuals in an elderly age group. The system may also compare an image of the audience to a group of images in a database containing known age demographics. The system then matches the image to the picture with the closest match and utilizes the known demographics of the matching image. The system may also analyze a small portion of the overall image of an audience 400 and calculate the total based on the number of portions contained in the entire image. The calculations determining the demographic make-up may be performed by the UAV 300 or the server 100.
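The height estimate described above follows from basic trigonometry: from the horizontal distance to the individual and the inclination angles to the top of the head and to the feet, each height above (or below) the UAV is distance times the tangent of the angle, and the difference is the person's height. An illustrative sketch, with hypothetical parameter names:

```python
import math

def estimate_height_m(distance_m, angle_top_deg, angle_bottom_deg):
    """Estimate a person's height from inclination angles measured at the UAV.

    distance_m: horizontal distance from the UAV to the person
    angle_top_deg / angle_bottom_deg: angles of inclination (degrees from
        horizontal) to the top of the head and to the feet; negative when the
        sight line points below the UAV.
    """
    top = distance_m * math.tan(math.radians(angle_top_deg))
    bottom = distance_m * math.tan(math.radians(angle_bottom_deg))
    return top - bottom
```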
Furthermore, as illustrated in
Referring to
Referring to
Reviewing
Referring to
The message displayed by the UAV 300 may be a prerecorded message stored in a database 108 on a server 100. Alternatively, the message may be a live streaming video feed which is selected and redirected by the server 100. The server 100 may select a certain prerecorded message based on the demographics of the audience and transmit the prerecorded message to the UAV 300 for display. Additionally, based on information scanned by the UAV 300, the server 100 may determine that a certain live streaming video feed may be better suited to the audience. The server 100 may then select a predetermined live video feed to transmit to the UAV 300 to be displayed. Alternatively, the server 100 may decide to “change the channel” and select an alternative live video stream to display to the audience. The live video feeds may be any audiovisual stream of information and come from any source into the server computer 100, such as from a cable feed or from a satellite broadcast. The feed may also be only an audio signal, such as a live radio broadcast received by the server computer 100. The server computer 100 may change the message which is selected and transmitted to the UAVs 300 at any time and for any reason—such as switching between live video feeds and prerecorded messages stored on a database. The server 100 may also switch between audiovisual messages, static visual display messages, and audio messages. In the preferred embodiment the live video feed is segmented into a series of message data files which are continuously transmitted to the UAV 300.
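Segmenting a live feed into message data files, as in the preferred embodiment above, can be sketched as chunking the incoming byte stream into fixed-size files for continuous transmission. This is an illustration only; the patent does not specify a chunking scheme or chunk size:

```python
def segment_stream(stream_bytes, chunk_size):
    """Split a live feed's bytes into fixed-size message data files for transmission."""
    if chunk_size <= 0:
        raise ValueError("chunk_size must be positive")
    return [stream_bytes[i:i + chunk_size] for i in range(0, len(stream_bytes), chunk_size)]
```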
The system may be utilized in many separate ways. For instance, a UAV 300 may fly down a public sidewalk in an urban setting. The UAV 300 may take a picture of a face of a person walking on the sidewalk. The UAV 300 can then transmit the image to the server 100. The server 100 may then run a facial recognition program against a database of users to determine the identity of the person. Once the server 100 determines the identity of the person the server 100 may search the database 108 for a message which is appropriate for the identified person. The server 100 then selects the appropriate message and transmits it to the UAV 300. The UAV 300 then displays the message on the display screen 316 to the person.
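Server-side matching of a captured face against the database of users is not specified in detail; a common approach, sketched here purely as an illustration with hypothetical names, compares a face embedding vector against stored reference embeddings and returns the best match above a similarity threshold:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(face_embedding, reference_db, threshold=0.8):
    """Return (person_id, message_id) for the closest stored reference, or None.

    reference_db maps person_id -> (reference_embedding, message_id).
    """
    best, best_score = None, threshold
    for person_id, (ref_embedding, message_id) in reference_db.items():
        score = cosine_similarity(face_embedding, ref_embedding)
        if score >= best_score:
            best, best_score = (person_id, message_id), score
    return best
```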
In another embodiment of the invention a person may hold out a visual input which is recorded by the UAV 300. The visual input may be any type of visual signal or sign. For instance, the visual signal may be a QR code or a bar code. The UAV 300 then scans the QR code or bar code with a camera and sends the information to the server 100. The server 100 may then determine the proper response which is stored in the database 108 that properly corresponds to the visual input presented by the person. The server 100 then selects the appropriate message and transmits it to the UAV 300. The UAV 300 then displays the message on the display screen 316 to the person.
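On the server side, mapping a decoded QR or bar code payload to its stored response reduces to a table lookup. A minimal sketch with hypothetical names (the decoding itself is assumed to have already occurred on the UAV or server):

```python
def response_for_code(decoded_payload, response_table, default=None):
    """Look up the stored response message for a decoded QR/bar code payload."""
    return response_table.get(decoded_payload, default)
```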
In another utilization the UAV 300 may fly along a highway and detect the presence of a sizable number of cars traveling on the highway which would constitute an audience. The UAV 300 notifies the server 100 of the audience. The server 100 may determine the location of the UAV 300 along the highway and determine that an accident has occurred on the highway five miles ahead of the UAV 300. The server 100 then selects a notification message to transmit to the UAV 300, such as “CAUTION: ACCIDENT AHEAD.” The server 100 then transmits the message to the UAV 300. The UAV 300 then displays the message “CAUTION: ACCIDENT AHEAD” on the display screen 316 to cars traveling along the highway.
What has been described above includes examples of the claimed subject matter. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the claimed subject matter, but one of ordinary skill in the art can recognize that many further combinations and permutations of such matter are possible. Accordingly, the claimed subject matter is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.
The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art the order of steps in the foregoing embodiments may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an” or “the” is not to be construed as limiting the element to the singular.
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some steps or methods may be performed by circuitry that is specific to a given function.
In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a tangible, non-transitory computer-readable storage medium. Tangible, non-transitory computer-readable storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such non-transitory computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of non-transitory computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a tangible, non-transitory machine readable medium and/or computer-readable medium, which may be incorporated into a computer program product.
The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.