Vision systems in commercial vehicles provide enhanced viewing of the environment around the vehicle. In some situations, the available views are limited to a select few cameras on a commercial vehicle that do not provide an operator with complete awareness of the surrounding environment. Consequently, the operator may be hampered during driving or other activity with respect to the commercial vehicle.
The invention can be understood with reference to the following drawings. The components in the drawings are not necessarily to scale. Also, in the drawings, like reference numerals designate corresponding parts throughout the several views.
Referring to
To this end, the vehicle 100 includes a vehicle video system 101 having a plurality of cameras mounted on or in the vehicle 100. Specifically, the cameras include a number of visible light cameras 103 and a number of night vision cameras 106. Alternatively, a single camera that includes both visible light and night vision capability may be employed in place of one of the visible light cameras 103 and one of the night vision cameras 106. In addition, the vehicle video system 101 includes a video processing unit 109. Each of the cameras 103, 106 is electrically coupled to the video processing unit 109, and each of the cameras 103, 106 generates a video image 111 that is applied to the video processing unit 109. In this respect, the video processing unit 109 includes a number of video inputs to facilitate the electrical coupling with each of the cameras 103, 106. The video system within the vehicle 100 also includes a plurality of monitors 113. Each of the monitors 113 is also electrically coupled to the video processing unit 109 through video output ports on the video processing unit 109.
The vehicle video system 101 further includes video image selectors 116 that may be hand-held devices or may be mounted in the commercial vehicle 100 in an appropriate manner. Each of the video image selectors 116 enables an operator to control the video displayed on a respective one of the monitors 113. Specifically, each of the video image selectors 116 is associated with a respective one of the monitors 113 and controls the video displayed thereon as will be described. Each of the video image selectors 116 may be coupled to the video processing unit 109 through an appropriate vehicle data bus or by direct electrical connection as will be described.
In addition, the video system in the vehicle 100 includes audible alarms 119 that are coupled to the video processing unit 109. In this respect, the audible alarms 119 are sounded upon detection of predefined conditions relative to the video system within the vehicle 100 as will be described. Alternatively, the video processing unit 109 may generate visual alarms on the monitors 113 as will be described. Also, both audible alarms 119 and visual alarms may be employed in combination, etc.
The cameras 103, 106 are mounted within the vehicle 100, for example, so that a field of view 123 of each of the cameras 103, 106 is oriented in either a substantially longitudinal direction 126 or a substantially lateral direction 129 with respect to the vehicle 100. In this respect, the longitudinal direction 126 is generally aligned with the direction of travel of the vehicle 100 when it moves in a forward or reverse direction. The lateral direction 129 is substantially orthogonal to the longitudinal direction 126.
Some of the cameras 103, 106 are oriented so as to have a field of view 123 oriented in the substantially longitudinal direction 126 with respect to the vehicle 100, whereas other cameras 103, 106 are oriented so as to have a field of view 123 oriented in the substantially lateral direction 129. In this respect, cameras 103, 106 are provided that can generate video images 111 that show views of the environment all around the entire vehicle 100. In one embodiment, the angle of the fields of view 123 of the cameras 103, 106 may differ depending upon their location and orientation relative to the vehicle 100. For example, the cameras 103, 106 that are oriented so that their field of view 123 is forward facing in the longitudinal direction may have an angle associated with their field of view 123 that is less than the angle of the field of view 123 of the rearward facing cameras 103, 106 in the longitudinal direction. In one specific embodiment, the angle of the field of view 123 of such forward facing cameras 103, 106 is 12 degrees, and the angle of the field of view 123 of the rearward facing cameras 103, 106 is approximately 153 degrees, although the angles of the fields of view 123 of the forward and rearward facing cameras 103, 106 may differ from these values depending upon the desired viewing capabilities of the vehicle video system 101.
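Purely as an illustration of this geometry, the following sketch models each camera's orientation and field of view and tests whether a bearing around the vehicle falls within a given cone. The camera names, the side-camera angles, and the helper function are assumptions introduced for illustration; only the 12 degree and 153 degree figures come from the text above.

```python
# Illustrative camera layout; heading 0 = forward along the longitudinal
# axis 126, positive angles toward the left side. Side-camera angles are
# assumed values, not taken from the disclosure.
CAMERAS = {
    "front":      {"heading_deg": 0,   "fov_deg": 12},
    "rear":       {"heading_deg": 180, "fov_deg": 153},
    "left_side":  {"heading_deg": 90,  "fov_deg": 120},   # assumption
    "right_side": {"heading_deg": -90, "fov_deg": 120},   # assumption
}

def in_field_of_view(cam: dict, bearing_deg: float) -> bool:
    """True if a bearing (degrees from vehicle forward) lies in the cone."""
    delta = (bearing_deg - cam["heading_deg"] + 180) % 360 - 180
    return abs(delta) <= cam["fov_deg"] / 2

print(in_field_of_view(CAMERAS["rear"], 170))   # True: inside the 153-degree cone
print(in_field_of_view(CAMERAS["front"], 20))   # False: outside the 12-degree cone
```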
The video processing unit 109 is configured to select a number of subsets of the cameras 103, 106 from which output video images 133 may be generated. In this respect, the video processing unit 109 generates at least two output video images 133 that are applied to corresponding ones of the monitors 113. In one embodiment, a first output video image 133 incorporates one or more video images 111 generated by a corresponding one or more of the cameras 103, 106 included in a first one of the subsets of the cameras 103, 106. At the same time, a second output video image 133 incorporates one or more video images 111 generated by a corresponding one or more of the cameras 103, 106 included in a second one of the subsets of the cameras 103, 106.
According to an embodiment of the present invention, the video processing unit 109 independently displays the first output video image 133 on a first one of the monitors 113 and the second output video image 133 on a second one of the monitors 113. In this respect, the output video image 133 displayed on either one of the monitors 113 does not affect or dictate the output video image 133 displayed on the other one of the monitors 113. In addition, there may be more than two of the monitors 113 (not shown) and more than two output video images 133 (not shown) generated by the video processing unit 109, etc.
Each of the output video images 133 that are generated by the video processing unit 109 may incorporate one or more video images 111 generated by a corresponding one or more of the cameras 103, 106 in a respective one of the subsets of the cameras 103, 106. In this respect, a user may manipulate one of the video image selectors 116, which is configured to select which of the video images 111 from the cameras 103, 106 within a subset are to be incorporated into a respective output video image 133 applied to a respective one of the monitors 113. The output video images 133 may incorporate a single one of the video images 111 or multiple ones of the video images 111 generated by cameras within a respective one of the subsets.
The cameras 103, 106 selected to be in one of the subsets from which the output video images 133 are generated may be selected according to various characteristics. For example, a given subset of cameras 103, 106 may include only visible light cameras 103 or only night vision cameras 106. In this respect, an operator can thus dictate that the output video images 133 incorporate video images 111 generated entirely by visible light cameras 103 or night vision cameras 106, depending upon the nature of the environment surrounding the vehicle 100.
Alternatively, a given selected subset of cameras 103, 106 may include only cameras 103, 106 that have a field of view that is oriented along the longitudinal direction 126 or oriented along the lateral direction 129. In this respect, an operator can thus dictate that the output video images 133 display views directed solely to the forward and rear of the vehicle 100 or views directed to the environment at the side of the vehicle 100.
The video processing unit 109 is also configured to detect motion within the field of view 123 of each of the cameras 103, 106 that are included within any of the subsets of the cameras 103, 106. When motion is detected within the field of view of a respective one of the cameras 103, 106, the video processing unit 109 may generate an alarm that alerts operators within the vehicle 100 to such motion. In this respect, the alarm may comprise, for example, the incorporation of a border, alarm text, or other imagery within the output video images 133 displayed on the monitors 113. The border, alarm text, or other imagery may be generated within the video images 111 incorporated within the output video image 133, for example, if the motion is detected in such video images 111.
Alternatively, the alarms may comprise the audible alarms 119 or both a video image alarm and an audible alarm 119. In some situations, the output video image 133 viewed on a particular monitor 113 may not incorporate a video image 111 generated by one of the cameras 103, 106 that is included within a particular subset of the cameras 103, 106. The video processing unit 109 may also detect motion in the video image 111 that is excluded from the output video image 133. In such a case, an alarm may be generated that informs an operator that motion was detected in a video image 111 generated by a camera 103, 106 that is not currently viewed on one of the monitors 113. In this respect, operators are advantageously made aware of motion that they cannot see in any of the video images 111 incorporated into the output video images 133 viewed on the respective monitors 113. Such an alarm may differ in appearance or may sound different compared to an alarm due to motion detected in a video image 111 that is incorporated into an output video image 133 that is displayed on a monitor 113.
Thus, according to one embodiment of the present invention, different alarms are sounded for motion detected within a video image 111 that is incorporated within an output video image 133 displayed on a monitor 113 and for motion detected within a video image 111 that is excluded from an output video image 133 displayed on a respective monitor 113. In additional embodiments, differing alarms can be generated depending upon where the motion is detected relative to the vehicle 100. Specifically, differing alarms may be generated depending upon the video image 111 from the cameras 103, 106 in which the motion is detected, thereby providing instantaneous information to an operator as to where motion is detected relative to the vehicle 100 itself.
In still another embodiment, the video processing unit 109 may operate on a respective video image 111 from one of the cameras 103, 106 to generate a mirror image therefrom for purposes of showing images from rear facing cameras 103, 106 in a manner that does not confuse an operator as to the orientation of the fields of view 123 of respective ones of the cameras 103, 106.
With respect to
The video processing unit 109 further comprises a number of video encoders 163. The output of each of the video encoders 163 is applied to a number of multiplexed inputs of one of the video processors 156a/156b. Each of the video encoders 163 converts the video images 111 generated by the cameras 103, 106 from an analog signal into a digital video signal that is recognizable by the video processors 156a/156b. Each of the video encoders 163 is associated with a respective corner of the vehicle 100 (
Each of the left front corner (LFC) video encoders 163 receives inputs from the left front (LF) cameras 103, 106 and the left side front (LSF) cameras 103, 106. Also, the right front corner (RFC) video encoders 163 receive inputs from the right front (RF) cameras 103, 106, and the right side front (RSF) cameras 103, 106. The left rear corner (LRC) video encoders 163 receive inputs from the left rear (LR) cameras 103, 106 and the left side rear (LSR) cameras 103, 106. Finally, the right rear corner (RRC) video encoders 163 receive inputs from the right rear (RR) cameras 103, 106 and the right side rear (RSR) cameras 103, 106.
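To make the corner-by-corner routing above easier to follow, a minimal sketch of the mapping is shown below. The dictionary keys and camera labels mirror the abbreviations in the text; the data structure and the day/night selection helper are purely hypothetical and not part of the disclosed hardware.

```python
# Hypothetical model of the corner-encoder inputs described above. Each
# corner encoder pair receives the visible light (103) and night vision
# (106) cameras for its two adjacent positions.
ENCODER_INPUTS = {
    "LFC": ["LF_visible", "LF_night", "LSF_visible", "LSF_night"],
    "RFC": ["RF_visible", "RF_night", "RSF_visible", "RSF_night"],
    "LRC": ["LR_visible", "LR_night", "LSR_visible", "LSR_night"],
    "RRC": ["RR_visible", "RR_night", "RSR_visible", "RSR_night"],
}

def select_subset(night_vision: bool) -> list:
    """Pick, across all corner encoders, the inputs for the current mode."""
    suffix = "night" if night_vision else "visible"
    return [cam for inputs in ENCODER_INPUTS.values()
            for cam in inputs if cam.endswith(suffix)]

print(select_subset(night_vision=True))  # the eight night vision inputs
```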
The respective video images 111 input into each of the video encoders 163 are multiplexed through a single output that is applied to one of the video processors 156a, 156b. For example, a first one of the left front corner (LFC) video encoders 163 applies its output to the video processor 156a and the remaining left front corner (LFC) video encoder 163 applies its output to the video processor 156b. Similarly, the outputs of the various pairs of video encoders 163 are applied to one of the video processors 156a and 156b. Ultimately, the encoders 163 facilitate the selection of the subset 165 of video images 111 generated by respective ones of the cameras 103, 106 that are applied to the video processors 156a/156b to be incorporated into the output video images 133 as described above. In this respect, the control processor 153 is electrically coupled to each of the encoders 163 and executes a control system that controls the operation of each of the encoders 163 in selecting various ones of the video images 111 that are applied to the inputs of the video processors 156a, 156b, thereby selecting the subset of the cameras 103, 106 that generate video images 111 that are incorporated into a respective one of the output video images 133.
Given that the video encoders 163 are grouped in pairs that receive identical inputs from four cameras as shown, and given that each video encoder 163 within each pair provides its output to a separate one of the video processors 156a and 156b, the multiplexed inputs of the video processors 156a/156b can receive the same video images 111 generated by the various cameras 103, 106. In this respect, video images 111 generated by any one of the cameras 103, 106 may be applied to each one of the video processors 156a, 156b.
The video processors 156a/156b each generate the output video images 133 (
In generating the various output video images 133, each of the video processors 156a/156b can perform various processing operations relative to the video images 111 received from respective ones of the cameras 103, 106. For example, each of the video processors 156a/156b can incorporate any number of the video images 111 received from the selected cameras 103, 106 into a single output video image 133 that is applied to a respective one of the monitors 113. Also, each of the video processors 156a/156b includes motion detection capability with respect to each of the video images 111 received from one of the selected cameras 103, 106. Such motion detection may be performed, for example, by performing screen-to-screen comparisons to detect changes in the video images 111 over time, etc. Once motion is detected in a respective video image 111, the respective video processor 156a/156b may set a register to a predefined value that is then supplied to the control processor 153. The control processor 153 is thus programmed, for example, to perform various tasks in reaction to the value in the register, such as executing an alarm or taking some other action, etc.
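One plausible form of such a screen-to-screen comparison is simple frame differencing, sketched below. The function name, thresholds, and NumPy implementation are illustrative assumptions; the disclosure does not specify the comparison algorithm.

```python
import numpy as np

def motion_detected(prev: np.ndarray, curr: np.ndarray,
                    pixel_thresh: int = 25, area_thresh: float = 0.01) -> bool:
    """Flag motion when enough pixels change between consecutive frames.

    prev, curr: grayscale frames as uint8 arrays of equal shape.
    pixel_thresh: per-pixel intensity change treated as significant.
    area_thresh: fraction of changed pixels that raises the flag.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > pixel_thresh).mean() > area_thresh

# Demonstration with synthetic frames.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 255, (120, 160), dtype=np.uint8)
frame2 = frame1.copy()
frame2[40:80, 60:100] ^= 0x80        # simulate an object moving into view
print(motion_detected(frame1, frame2))  # True
```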
Each of the video processors 156a/156b may perform a mirror image operation with respect to any one of the video images 111 received from one of the cameras 103, 106, thereby generating a mirror video image therefrom. Such a mirror image may be included in one of the output video images 133 where appropriate, for example, for viewing reverse directions on a respective monitor 113. Also, each of the video processors 156a/156b may perform a digital zoom function and a pan function with respect to one of the video images 111. For example, the digital zoom function may involve performing a 2× digital zoom or a digital zoom of greater magnification. The pan function involves scrolling up, down, left, and right to make unseen portions of a zoomed video image 111 appear on a respective monitor 113. The zoom and pan functions are discussed in greater detail in the following text.
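As a rough illustration of these three operations, the sketch below mirrors, zooms, and pans a frame using plain array manipulation. The nearest-neighbour enlargement and the clamping of the pan window are implementation assumptions, not details from the disclosure.

```python
import numpy as np

def mirror(frame: np.ndarray) -> np.ndarray:
    """Horizontal mirror, as used for rear-facing views."""
    return frame[:, ::-1]

def digital_zoom(frame: np.ndarray, factor: int = 2,
                 pan_x: int = 0, pan_y: int = 0) -> np.ndarray:
    """Crop a window 1/factor the frame size and enlarge it to full size.

    pan_x / pan_y shift the crop window so unseen portions of the zoomed
    image can be scrolled into view; offsets are clamped to the frame.
    """
    h, w = frame.shape[:2]
    ch, cw = h // factor, w // factor
    y0 = min(max((h - ch) // 2 + pan_y, 0), h - ch)
    x0 = min(max((w - cw) // 2 + pan_x, 0), w - cw)
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    # Nearest-neighbour enlargement back to the original dimensions.
    return crop.repeat(factor, axis=0).repeat(factor, axis=1)

frame = np.zeros((240, 320), dtype=np.uint8)
zoomed = digital_zoom(frame, factor=2, pan_x=40)   # 2x zoom, panned right
print(mirror(frame).shape, zoomed.shape)           # (240, 320) (240, 320)
```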
In addition, each of the video processors 156a, 156b includes memory in which are stored various templates of images, such as icons, symbols, or other images, or text that may be overlaid onto a respective output video image 133 displayed on a monitor 113 as directed by the control processor 153, etc. Specific examples of such overlays include, for example, text indicating from which camera a particular video image 111 depicted within the output video image 133 was generated.
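A minimal sketch of stamping such a stored template onto an output frame appears below; the template contents, mask convention, and function name are assumptions made for illustration.

```python
import numpy as np

def overlay_template(frame: np.ndarray, template: np.ndarray,
                     mask: np.ndarray, top: int, left: int) -> np.ndarray:
    """Stamp a stored template (icon, border, or pre-rendered label) onto
    a frame; mask marks which template pixels are opaque."""
    out = frame.copy()
    h, w = template.shape[:2]
    region = out[top:top + h, left:left + w]
    region[mask] = template[mask]
    return out

# A hypothetical 8x32 camera label: a white bar on a black box.
label = np.zeros((8, 32), dtype=np.uint8)
label[2:6, 2:30] = 255
opaque = label > 0                      # only the white pixels are stamped

frame = np.zeros((240, 320), dtype=np.uint8)
stamped = overlay_template(frame, label, opaque, top=4, left=4)
print(int(stamped.max()))               # 255 where the label was stamped
```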
In addition, the control processor 153 includes inputs that facilitate an electrical coupling of the video image selectors 116 directly to the control processor 153. Alternatively, the control processor 153 may be coupled to a vehicle data bus 166 through a controller electronic communications unit (ECU) 168. In this respect, each of the video image selectors 116 may also be coupled to the data bus 166 associated with the vehicle 100 and communicate with the control processor 153 therethrough. In this respect, the vehicle data bus 166 may operate according to any one of a number of vehicle data communication specifications such as, for example, SAE J1587, "Electronic Data Interchange Between Microcomputer Systems in Heavy-Duty Vehicle Applications" (February 2002); SAE J1939/71, "Vehicle Application Layer" (December 2003); or SAE J2497, "Power Line Carrier Communications for Commercial Vehicles" (October 2002) as promulgated by the Society of Automotive Engineers, the entire text of each of these standards being incorporated herein by reference.
Given that the control processor 153 may be coupled directly to a vehicle data bus 166, it can receive information describing general operational aspects of the vehicle 100 that is transmitted on the vehicle data bus 166. The control processor 153 may then be programmed to direct the video processors 156a/156b to overlay such information onto one of the output video images 133. Such information may include text or other images that describe operational aspects of the vehicle 100 such as whether the vehicle 100 is moving, gear settings, engine diagnostic information, other vehicle diagnostic information, and other information, etc.
In addition, the control processor 153 includes an alarm output that may be used to drive the audible alarms 119. Specifically, as an alternative, there may be multiple audible alarms 119 coupled to the control processor 153 beyond the two shown that are used to indicate various alarm conditions that may be detected by the video processing unit 109. Also, a single alarm may be driven in different ways to indicate different alarm conditions. For example, the audible alarms 119 may include a speaker that can be driven to generate multiple different alarm sounds, etc.
Turning then to
The video image selector 116 includes a number of directional buttons 173 including, for example, a “left front” button LF, a “right front” button RF, a “left rear” button LR, and a “right rear” button RR. The directional buttons 173 allow a user to select a respective left front, right front, left rear, or right rear video image 111 (
In addition, the video image selector 116 includes a multi-view button 176 that directs the video processing unit 109 to generate an output video image 133 that includes two, three, or four or more video images 111 from multiple ones of the cameras 103, 106 that are included in the subset 165 (
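The multi-view composition described above can be pictured with the following sketch, which tiles up to four equally sized frames into one 2×2 output image. The tiling layout and blank-fill behaviour are assumptions; the disclosure only states that multiple video images are combined into one output video image 133.

```python
import numpy as np

def quad_view(frames: list) -> np.ndarray:
    """Tile up to four equally sized grayscale frames into a 2x2 mosaic.

    Missing quadrants are filled with black, mimicking a multi-view
    output built from fewer than four selected cameras.
    """
    h, w = frames[0].shape[:2]
    blank = np.zeros((h, w), dtype=frames[0].dtype)
    padded = (list(frames) + [blank] * 4)[:4]
    top = np.hstack(padded[0:2])
    bottom = np.hstack(padded[2:4])
    return np.vstack([top, bottom])

cams = [np.full((120, 160), v, dtype=np.uint8) for v in (50, 100, 150)]
out = quad_view(cams)
print(out.shape)   # (240, 320): three camera views plus one blank quadrant
```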
In addition, the video image selector 116 includes a day/night button 179 that is used to control whether the subset 165 of video images 111 is generated by the visible light cameras 103 or the night vision cameras 106. In one embodiment, each one of the output video images 133 generated by the video processing unit 109 incorporates video images 111 from only the visible light cameras 103 or only the night vision cameras 106.
Also, the video image selector 116 includes a “forward-reverse/side-to-side” button 183. The forward-reverse/side-to-side button 183 is employed to select the subset 165 of video images 111 generated by cameras 103, 106 that are facing in the longitudinal direction 126 (
In this respect, operators may advantageously choose between viewing areas in front of and behind the vehicle 100, or on either side of the vehicle 100. When any one of the buttons 173, 176, 179, 183 is depressed, the video image selector 116 provides a signal to the controller ECU 168, which in turn generates a message on the data bus 166 that is transmitted to and received by the control processor 153 (
Alternatively, the video image selector 116 may be directly coupled to the video processing unit 109 and the video processing unit 109 may react to the signals received directly from the video image selector 116 that are generated upon manipulating any one of the buttons 173, 176, 179, 183.
Turning to
Stored in the memory 196 and executable by the processor 193 are an operating system 203 and a control system 206. The control system 206 is executed by the processor 193 in order to orchestrate the operation of the video processing unit 109 in response to various inputs from the video image selectors 116 (
The memory 196 is defined herein as both volatile and nonvolatile memory and data storage components. Volatile components are those that do not retain data values upon loss of power. Nonvolatile components are those that retain data upon a loss of power. Thus, the memory 196 may comprise, for example, random access memory (RAM), read-only memory (ROM), hard disk drives, floppy disks accessed via an associated floppy disk drive, compact discs accessed via a compact disc drive, magnetic tapes accessed via an appropriate tape drive, and/or other memory components, or a combination of any two or more of these memory components. In addition, the RAM may comprise, for example, static random access memory (SRAM), dynamic random access memory (DRAM), or magnetic random access memory (MRAM) and other such devices. The ROM may comprise, for example, a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other like memory device.
In addition, the processor 193 may represent multiple processors and the memory 196 may represent multiple memories that operate in parallel. In such a case, the local interface 199 may be an appropriate network that facilitates communication between any two of the multiple processors, between any processor and any one of the memories, or between any two of the memories etc. The processor 193 may be of electrical, optical, or molecular construction, or of some other construction as can be appreciated by those with ordinary skill in the art.
The operating system 203 is executed to control the allocation and usage of hardware resources such as the memory, processing time and peripheral devices in the control processor 153. In this manner, the operating system 203 serves as the foundation on which applications such as the control system 206 depend as is generally known by those with ordinary skill in the art.
Turning to
Beginning with box 223, the control system 206 initializes all registers and other aspects of the operation of the video processing unit 109. Thereafter, in box 226, the control system 206 determines whether a quad or other multiple video image command message has been received from a respective video image selector 116 (
Assuming that a quad message has been received from a respective one of the video image selectors 116 in box 226, then the control system 206 proceeds to box 229 in which it is determined whether a pan function is active with respect to a current output video image displayed on the respective monitor 113. While in a pan mode, the output video image 133 (
Assuming that a pan feature within a respective one of the video processors 156a/156b is active, then the control system 206 proceeds to box 233. Otherwise, the control system 206 progresses to box 236 in which the “quad” view is displayed on the specified monitor 113 by the video processing unit 109. In this respect, the control system 206 communicates with a respective one of the video processors 156a, 156b and directs the video processor 156a, 156b to display an output video image 133 that incorporates the video images 111 from multiple ones of the cameras 103, 106 included in the subset 165. Thereafter, the control system 206 progresses to box 233 as shown.
In box 233, the control system 206 determines whether a directional button 173 (
In box 243, the control system 206 determines whether a day/night message has been received from a respective one of the video image selectors 116 directing one of the video processors 156a, 156b to switch between the application of the visible light cameras 103 and the night vision cameras 106 to the respective video processor 156a, 156b identified in the day/night message. If such is the case, then the control system 206 proceeds to execute process 246 that controls the selection of the visible light cameras 103 or the night vision cameras 106 as the subset 165 of cameras 103, 106. Otherwise, the control system 206 progresses to box 249. In box 249, the control system 206 determines whether a forward-reverse/side-to-side message has been received from a respective one of the video image selectors 116. If such is the case, then the control system 206 executes the process 253. Otherwise, the control system 206 reverts back to box 226.
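The polling and dispatch behaviour of boxes 226 through 249 can be summarized by the hypothetical event loop below. The message kinds, queue, and handler wiring are all assumptions introduced for illustration.

```python
from collections import deque

def control_loop(messages: deque, handlers: dict) -> None:
    """Service selector messages in arrival order (cf. boxes 226-249).

    messages: queue of (kind, payload) tuples; kinds are hypothetical.
    handlers: maps a message kind to the routine that services it.
    """
    while messages:
        kind, payload = messages.popleft()
        handler = handlers.get(kind)
        if handler is not None:
            handler(payload)     # e.g. display the quad view (box 236)
        # Unrecognized kinds are ignored and polling resumes (box 226).

handlers = {
    "quad":      lambda p: print("quad view on monitor", p),
    "direction": lambda p: print("full view / zoom / pan:", p),
    "day_night": lambda p: print("toggle day/night subset:", p),
    "fwd_side":  lambda p: print("toggle longitudinal/lateral:", p),
}
control_loop(deque([("quad", 1), ("day_night", 1)]), handlers)
```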
Referring next to
In box 269, the process 239 directs the respective video processor 156a, 156b identified in the respective message to generate the output video image 133 incorporating the full view of the respective video image 111 of the selected camera 103, 106 based upon the directional button 173 pressed in the video image selector 116 as identified in the message received by the control processor 153. In this respect, the output video image 133 includes the video image 111 of the selected camera 103, 106 in a full view mode such that the entire monitor 113 displays the video image 111 from a respective one of the cameras 103, 106. Thereafter, the process 239 ends as shown.
Assuming that the process 239 has proceeded to box 266, then the full view of the video image 111 from the respective camera 103, 106 associated with the directional button 173 depressed on the video image selector 116 is already displayed on the respective monitor 113 associated with the respective video image selector 116. In such case, in box 266 the process 239 determines whether the zoom function with respect to the current full view displayed as a rendering of the output video image 133 is active.
The zoom function performs a digital zoom with respect to the output video image 133 currently displayed in the respective monitor 113. If the zoom function is inactive, then the process 239 proceeds to box 273 in which the zoom function is activated with respect to the current output video image 133 displayed on the respective monitor 113. Thereafter, the process 239 ends as shown. On the other hand, assuming that the zoom function is already active as determined in box 266, then in box 276 the process 239 determines whether a pan function with respect to the current output video image 133 is active. In this respect, the pan function allows a user to move around within the video image 111 from the respective one of the cameras 103, 106.
If the pan function is active in box 276, then in box 279 the process 239 causes the current output video image 133 to pan to a selected direction based upon the respective one of the directional buttons 173 (
However, if in box 276 the pan function is inactive with respect to the current output video image 133, then the process 239 proceeds to box 269 in which the full view of the video image 111 from a respective camera 103, 106 is incorporated as the current output video image 133 to be displayed on the respective monitor 113. In this respect, depressing one of the directional buttons 173 may cause the display of a full view of one of the video images 111, the zooming of a current full view of a video image 111, or a pan movement with respect to a displayed video image 111 in a respective one of the output video images 133.
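The three-way behaviour of a directional button in process 239 amounts to a small state machine, restated in the hypothetical sketch below; the class, state fields, and return strings are illustrative assumptions.

```python
class ViewState:
    """Tracks what a directional button press does next (boxes 263-279)."""

    def __init__(self):
        self.full_view_camera = None   # which camera fills the monitor
        self.zoom_active = False
        self.pan_active = False

    def press_direction(self, camera: str) -> str:
        if self.full_view_camera != camera:
            # Box 269: show the full view of the newly selected camera.
            self.full_view_camera = camera
            self.zoom_active = False
            self.pan_active = False
            return f"full view of {camera}"
        if not self.zoom_active:
            self.zoom_active = True        # box 273: arm the digital zoom
            return f"zoom on {camera}"
        if self.pan_active:
            return f"pan toward {camera}"  # box 279: scroll the zoomed view
        self.zoom_active = False           # box 269: back to the full view
        return f"full view of {camera}"

state = ViewState()
for _ in range(3):
    print(state.press_direction("LF"))  # full view, then zoom, then full view
```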
The flow chart of
Beginning with box 303, the process 246 determines whether a pan function is active with respect to a particular full view of a video image 111 incorporated within an output video image 133 applied to a respective one of the monitors 113 by the respective one of the video processors 156a/156b. If so, then the process 246 ends. In this respect, the control system 206 prevents the selection of the video images 111 from visible light or night vision cameras 103, 106 as one of the subsets 165 of video images 111 if a respective video processor 156a/156b currently implements a pan function with respect to the output video image 133 generated thereby.
Assuming that no pan function is active in box 303, then the process 246 proceeds to box 306 in which it is determined whether the video images 111 of the current subset 165 are generated by night vision cameras 106. If so, then the process 246 proceeds to box 309 in which the video images 111 from visible light cameras 103 are selected as the subset from which an output video image 133 is generated. The output video image 133 is generated in the same mode as was previously viewed during use of the night vision cameras 106. Thereafter, the process 246 ends as shown.
On the other hand, if the video images 111 generated by the night vision cameras 106 are not currently selected as the subset of video images 111 applied to the multiplexed inputs of a respective video processor 156a, 156b, then the process 246 proceeds to box 313 in which the video images 111 of the respective night vision cameras 106 are applied to the multiplexed inputs of a respective one of the video processors 156a, 156b and a corresponding output video image 133 is generated. Thereafter, the process 246 ends as shown.
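Condensed to its decision logic, process 246 behaves like the toggle sketched below; the function, argument names, and string values are assumptions for illustration only.

```python
def toggle_day_night(subset: str, pan_active: bool) -> str:
    """Toggle between 'visible' and 'night' subsets (boxes 303-313)."""
    if pan_active:
        return subset            # box 303: ignore the toggle while panning
    return "visible" if subset == "night" else "night"   # boxes 309 / 313

print(toggle_day_night("night", pan_active=False))  # 'visible'
print(toggle_day_night("night", pan_active=True))   # 'night' (unchanged)
```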
In this respect, it is seen that the depressing of the day/night button 179 (
Turning then to
Beginning with box 323, the process 253 determines whether the zoom function is active with respect to a full view of a video image 111 generated by a left front (LF)/left side front (LSF) camera 103, 106. If the zoom function is active, then the process 253 proceeds to box 326. Otherwise, the process 253 progresses to box 329 as shown. In box 326, the process 253 determines whether a pan function is active with respect to the current output video image 133 applied to the respective one of the monitors 113. If such is the case, then the process 253 progresses to box 333. Otherwise, the process 253 progresses to box 336 as shown.
In box 333, the zoom function is activated with respect to the current output video image 133 that includes the video image 111 generated by one of the left front LF or left side front LSF cameras 103, 106. Thereafter, the process 253 ends as shown. Assuming, however, that the pan function is not active in box 326, then in box 336 the process 253 implements the pan function with respect to the current output video image 133 that incorporates the video image 111 generated by a respective left front LF or left side front LSF camera 103, 106. Thereafter, the process 253 ends as shown.
Thus, the process 253 facilitates, for example, the activation and deactivation of the pan function with respect to a particular output video image 133 that incorporates the video image generated by a respective camera 103, 106 as described.
However, assuming that the zoom feature is not active in box 323 with respect to the current output video image 133, then the process 253 progresses to box 329 in which it is determined whether the video images 111 generated by the cameras 103, 106 that face a forward/reverse or longitudinal direction with respect to the vehicle 100 (
If the video images 111 generated by the cameras facing the longitudinal direction 126 are applied to the multiplexed inputs of the respective video processor 156a/156b as determined in box 329, then the process 253 proceeds to box 339. Otherwise, the process 253 progresses to box 343. Assuming that the process 253 has progressed to box 339, then the video images 111 generated by the cameras 103, 106 facing a lateral direction 129 are applied to the inputs of the respective video processor 156a/156b. Thereafter, the process 253 ends.
Assuming that the process 253 has progressed to box 343, then the process 253 manipulates the respective video encoders 163 so as to apply the video images 111 from the cameras 103, 106 facing the longitudinal direction 126 to the multiplexed inputs of the respective video processor 156a/156b. The corresponding output video image 133 thus incorporates the video images 111 from the cameras 103, 106 facing the longitudinal direction 126. In this respect, a full view of a single one of the cameras 103, 106 or a quad view that incorporates the video images 111 from multiple ones of the cameras 103, 106 oriented in a longitudinal direction 126 are applied to the monitor 113. Thereafter, the process 253 ends as shown.
In addition, while
Although the control system 206 (
The block diagram/diagrams and/or flow chart/charts of
Although the flow charts of
Also, where the control system 206 comprises software or code, it can be embodied in any computer-readable medium for use by or in connection with an instruction execution system such as, for example, a processor in a computer system or other system. In this sense, the logic may comprise, for example, statements including instructions and declarations that can be fetched from the computer-readable medium and executed by the instruction execution system. In the context of the present invention, a “computer-readable medium” can be any medium that can contain, store, or maintain the control system 206 for use by or in connection with the instruction execution system. The computer readable medium can comprise any one of many physical media such as, for example, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor media. More specific examples of a suitable computer-readable medium would include, but are not limited to, magnetic tapes, magnetic floppy diskettes, magnetic hard drives, or compact discs. Also, the computer-readable medium may be a random access memory (RAM) including, for example, static random access memory (SRAM) and dynamic random access memory (DRAM), or magnetic random access memory (MRAM). In addition, the computer-readable medium may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other type of memory device.
Although the invention is shown and described with respect to certain embodiments, it is obvious that equivalents and modifications will occur to others skilled in the art upon the reading and understanding of the specification. The present invention includes all such equivalents and modifications, and is limited only by the scope of the claims.
Prior publication: 20050190261 A1, Sep 2005, US

Parent application: 10325083, Dec 2002, US
Child application: 10787786, Feb 2004, US