The present disclosure relates to systems and methods for acoustic analysis and three-dimensional visualization of industrial systems and products.
Industrial products such as engine systems, cooling systems, and exhaust systems generate noise that can vary depending on the characteristics of the products themselves, the surrounding environment and a user's position within the environment. These products may be designed to meet stringent noise standards, regulations, and/or specifications to minimize disruption in the field of use. These standards and/or specifications can vary widely between different technology areas, and may also depend on where/how the product(s) will be used.
To ensure compliance with standards and/or specifications, product(s) must be rigorously tested in an environment that is designed to simulate how the product(s) will be used in the field.
One embodiment of the present disclosure relates to a method. The method includes receiving acoustic data that is associated with a first product, mapping a sound field around the first product based on the acoustic data, and generating a three-dimensional (3D) surface of the sound field for the first product based on the mapping by at least one of interpolating or extrapolating the sound field. The method further includes generating a simulation of the first product by combining the 3D surface with a visual representation of the first product, and providing, via an emitter, an audio output based on a position of an avatar within the simulation with respect to a position of the first product.
Another embodiment of the present disclosure relates to a system for virtual product review and analysis. The system includes a communications interface configured to communicate with an emitter, a memory configured to store acoustic data associated with a first product, and a processor communicably coupled to the communications interface and the memory. The processor is configured to (i) receive the acoustic data; (ii) map a sound field around the first product based on the acoustic data; (iii) generate a first three-dimensional (3D) surface of the sound field for the first product based on the map by at least one of interpolating or extrapolating the sound field; (iv) generate a simulation of the first product by combining the 3D surface with a visual representation of the first product; and (v) provide, to the emitter, an audio output based on a position of an avatar in the simulation with respect to a position of the first product.
Yet another embodiment of the present disclosure relates to a non-transitory computer-readable medium configured to store a program which, when executed by a processor, causes a device to (i) receive acoustic data associated with a first product; (ii) map a sound field around the first product based on the acoustic data; (iii) generate a first three-dimensional (3D) surface of the sound field for the first product based on the map by at least one of interpolating or extrapolating the sound field; (iv) generate a simulation of the first product by combining the 3D surface with a visual representation of the first product; and (v) provide, via an emitter, an audio output based on a position of an avatar in the simulation with respect to a position of the first product.
These and other features, together with the organization and manner of operation thereof, will become apparent from the following detailed description when taken in conjunction with the accompanying drawings.
The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several implementations in accordance with the disclosure and are therefore not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings.
Reference is made to the accompanying drawings throughout the following detailed description. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative implementations described in the detailed description, drawings, and claims are not meant to be limiting. Other implementations may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and made part of this disclosure.
Embodiments described herein relate to methods and systems for virtual product review and analysis. In particular, embodiments described herein relate to a virtual reality system for product visualization and acoustic noise source localization. The virtual reality system allows a user to evaluate the actual acoustic performance of a product in the planned environment of use.
In many technology areas, the noise levels produced by commercial and/or industrial equipment are limited based on local regulations and/or user-specific requirements (e.g., customer specifications, etc.). These requirements can vary significantly depending on the type of product and the application in which the product will be used. To evaluate product compliance, the noise levels produced by the product may be measured in a test facility that is specifically designed to minimize any imported noise from the surrounding environment, and to accurately capture the noise generated by the product. Additionally, because of the variability of regulations and different user requirements, the setup of the test facility (e.g., the position of the product and monitoring devices within the test facility) may also need to be modified for different applications in order to replicate the environment where the product will be used, and/or to determine the sound that will be produced by multiple products that are used together. For example, in one application, the noise levels from a single product (e.g., genset) may be of interest (e.g., the noise at a given distance, including but not limited to a distance of about 10 m from the product, etc.). In another application, the noise levels from a combined system with multiple products (e.g., gensets) in a specific arrangement may be of interest (e.g., at a distance including but not limited to a distance of about 5 m from the products), where each individual product contributes to the overall noise levels produced. While this method of quantifying product noise from the products themselves is very accurate, it is difficult for technicians to account for the various influences that the surrounding environment will have on actual values of exported noise. Moreover, entry to the test facility is limited to prevent interference with the sound measurements and damage to delicate monitoring equipment, among other reasons.
Further complicating matters, in some applications, a user (e.g., a customer) may be unsure of the exact performance that is required of the product(s), or the reasonableness of the noise level regulations that are implemented in their jurisdiction. As a result, the user may be forced to make modifications to the product(s) on site, after installation, to reduce or otherwise modify the noise levels, which can lead to additional design complexity and expense.
The virtual reality system (e.g., audio-video virtual reality (VR) system) of the present disclosure mitigates the aforementioned issues by providing a simulated test environment that a user can navigate to assess how sound changes with the user's position and orientation relative to the product(s). The virtual reality system utilizes sound data from experimental testing to simulate the noise levels generated by the product(s) and how those noise levels vary with position. In particular, the virtual reality system uses sound data from a plurality of sensors positioned around the product(s) to determine a sound field that varies spatially within the simulation. The virtual reality system then extrapolates and/or interpolates the sound field to generate a continuous or semi-continuous three-dimensional (3D) surface of the sound data. The virtual reality system combines a 3D visualization of the product(s) with the 3D surface to produce a simulation that a user can navigate to gain a better understanding of the real-world performance of the product(s).
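As a non-limiting illustration of this mapping-and-interpolation approach (the disclosure does not prescribe a particular algorithm or library), the following sketch estimates the sound level at an arbitrary listener position from a handful of discrete microphone readings using inverse-distance weighting; the function and variable names are hypothetical.

```python
# Minimal sketch (not the actual system implementation): estimate the sound pressure
# level at a listener position from discrete microphone measurements using
# inverse-distance weighting. Positions are in meters, levels in dB.
import numpy as np

def sample_sound_field(mic_positions, mic_levels_db, listener_pos, power=2.0):
    """Interpolate a sound level at listener_pos from discrete sensor readings."""
    mic_positions = np.asarray(mic_positions, dtype=float)   # shape (N, 3)
    mic_levels_db = np.asarray(mic_levels_db, dtype=float)   # shape (N,)
    d = np.linalg.norm(mic_positions - np.asarray(listener_pos, dtype=float), axis=1)
    if np.any(d < 1e-6):                      # listener coincides with a sensor
        return float(mic_levels_db[np.argmin(d)])
    w = 1.0 / d**power                        # nearby sensors dominate
    # Combine on an energy basis because dB values are logarithmic.
    energy = np.sum(w * 10.0**(mic_levels_db / 10.0)) / np.sum(w)
    return float(10.0 * np.log10(energy))

# Example: four microphones around a product, listener about 3 m away on one side.
mics = [(-2, 0, 1), (2, 0, 1), (0, -2, 1), (0, 2, 1)]
levels = [78.0, 80.5, 79.2, 77.8]
print(round(sample_sound_field(mics, levels, (3.0, 0.0, 1.5)), 1))
```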
In at least one embodiment, the virtual reality system includes a user interface (e.g., an I/O device, haptic device, etc.) that allows the user to control their position within the simulation and to interact with the product(s). For example, the user may be able to manipulate access panels, doors, and/or other components of the product in the same way as they would on the actual, physical device. The virtual reality system may be configured to automatically update the simulation based on the change in product structure to demonstrate how the sound levels within the environment change. The virtual reality system may also be configured to modify the 3D sound field to account for the influence of structures in the environment surrounding the product(s) to simulate what the product(s) would sound like in a real-world setting. For example, the virtual reality system may be configured to account for the location of buildings, sound barriers, or other structural interference as well as the change in sound due to materials used in their construction (e.g., sound reflections, sound absorption, etc.).
In at least one embodiment, the virtual reality system is configured to allow a user to select different product(s), add or remove components, and/or change the position of products within the simulation. The virtual reality system may be configured to modify the visual representation and the 3D sound field to account for these changes (e.g., by superimposing the sound field from a first product onto the sound field from the second product, etc.). In at least one embodiment, the 3D sound field for an industrial system (e.g., genset, recreational vehicle (RV), or other noise producing assembly) may be produced by combining the sound data from the system's constituent components and/or computer aided design (CAD) surface geometry of the enclosure for the system.
In at least one embodiment, the virtual reality system is configured to present a visual indication of the localized sound field within the simulation and/or to allow manipulation of the sound field based on user inputs. For example, the virtual reality system may be configured to present a contour plot that the user can use to visually assess levels in different areas within the simulation and how the sound field changes in response to the placement of structures in the environment surrounding the product(s) (e.g., the addition of sound barriers, etc.). In another embodiment, the virtual reality system is configured to present sound quality controls that the user can manipulate to trace different sources of noise and to evaluate the effectiveness of design changes on noise suppression and/or mitigation. These and other advantageous features will become apparent in view of the present disclosure.
The various concepts introduced above and discussed in greater detail below may be implemented in any of numerous ways, as the described concepts are not limited to any particular manner of implementation. Examples of specific implementations and applications are provided for illustrative purposes.
Various numerical values herein are provided for reference purposes. Unless otherwise indicated, all numbers expressing quantities of properties, parameters, conditions, and so forth, used in the specification and claims are to be understood as being modified in all instances by the term “approximately.” Accordingly, unless indicated to the contrary, the numerical parameters set forth in the following specification and attached claims are approximations. Any numerical parameter should at least be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. The term “approximately” when used before a numerical designation, e.g., a quantity and/or an amount including range, indicates approximations which may vary by (+) or (−) 10%, 5%, or 1%.
As will be understood by one of skill in the art, for any and all purposes, particularly in terms of providing a written description, all ranges disclosed herein also encompass any and all possible subranges and combinations of subranges thereof. Any listed range can be easily recognized as sufficiently describing and enabling the same range being broken down into at least equal halves, thirds, quarters, fifths, tenths, etc. As a non-limiting example, each range discussed herein can be readily broken down into a lower third, middle third, and upper third, etc. As will also be understood by one skilled in the art, all language such as “up to,” “at least,” “greater than,” “less than,” and the like include the number recited and refer to ranges which can be subsequently broken down into subranges as discussed above. Finally, as will be understood by one skilled in the art, a range includes each individual member.
Referring now to
As shown in
The virtual reality system 200 is also configured to track the location of an avatar 108 (e.g., construct, simulated person, etc.) within the 3D simulation and the orientation of the avatar 108 (e.g., rotational position) with respect to the surrounding environment 104. In the embodiment of
The virtual reality system 200 is configured to generate a 3D sound field that simulates the actual sound produced by the product 102 and combine the sound field with the visual representation of the product 102 and its surrounding environment 104. In particular, the virtual reality system 200 is configured to calculate the 3D sound field within the simulation based on actual test data from the product 102 and/or one or more noise generation components within the product 102. The virtual reality system 200 is configured to provide, via an audio output device, an audio output that is representative of how the product 102 will sound in a real-world setting. The audio output is based on the position of the avatar 108 and their orientation with respect to the product 102 (e.g., their position within the 3D sound field, directionality of the sound, etc.). As shown in
As described with reference to
The haptic device 204 (e.g., haptic interface, user input device, etc.) is a mechanical device that mediates communication between the user and the virtual reality system 200. The haptic device 204 may be a handheld device that includes buttons, joysticks, and/or other mechanical actuators that a user can manipulate to control the position of the avatar 108 within the 3D simulation (see
The virtual reality system 200 is configured to produce an audio output based on the position of the avatar 108 (see
As shown in
As shown in
In one configuration, the acoustic mapping module 220, the display module 218, and/or the product selection module 216 are configured by computer-readable media that are executable by a processor, such as the processor 212. As described herein and amongst other uses, the modules and/or circuitry facilitate performance of certain operations to enable reception and transmission of data. For example, the modules may provide an instruction (e.g., command, etc.) to, e.g., acquire data from the I/O devices or receive data from the I/O devices. In this regard, the modules may include programmable logic that defines the frequency of acquisition of the data and/or other aspects of the transmission of the data. In particular, the modules may be implemented by computer readable media which may include code written in any programming language including, but not limited to, Java, JavaScript, Python or the like and any conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program code may be executed on one processor or multiple remote processors. In the latter scenario, the remote processors may be connected to each other through any type of network.
In some embodiments, the acoustic mapping module 220, the display module 218, and/or the product selection module 216 may take the form of one or more analog circuits, electronic circuits (e.g., integrated circuits (IC), discrete circuits, system on a chip (SOCs) circuits, microcontrollers, etc.), hybrid circuits, and any other type of “circuit.” In this regard, the acoustic mapping module 220, the display module 218, and/or the product selection module 216 may include any type of component for accomplishing or facilitating achievement of the operations described herein. For example, a circuit as described herein may include one or more transistors, logic gates (e.g., NAND, AND, NOR, OR, XOR, NOT, XNOR, etc.), resistors, multiplexers, registers, capacitors, inductors, diodes, wiring, and so on. Thus, the acoustic mapping module 220, the display module 218, and/or the product selection module 216 may also include programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like. In this regard, the acoustic mapping module 220, the display module 218, and/or the product selection module 216 may include one or more memory devices for storing instructions that are executable by the processor(s) of the acoustic mapping module 220, the display module 218, and/or the product selection module 216. The one or more memory devices and processor(s) may have the same definition as provided below with respect to the memory 214 and the processor 212. Thus, in this hardware unit configuration, the acoustic mapping module 220, the display module 218, and/or the product selection module 216 may be dispersed throughout separate locations (e.g., separate control units, etc.). Alternatively, and as shown, the acoustic mapping module 220, the display module 218, and/or the product selection module 216 may be embodied in or within a single unit/housing, which is shown as the controller 208.
In the example shown, the controller 208 includes the processing circuit 210 having the processor 212 and memory 214. The processing circuit 210 may be structured or configured to execute or implement the instructions, commands, and/or control processes described herein with respect to the acoustic mapping module 220, the display module 218, and/or the product selection module 216. Thus, the depicted configuration represents the aforementioned arrangement where the acoustic mapping module 220, the display module 218, and/or the product selection module 216 are embodied as machine or computer-readable media. However, as mentioned above, this illustration is not meant to be limiting as the present disclosure contemplates other embodiments such as the aforementioned embodiment where the acoustic mapping module 220, the display module 218, and/or the product selection module 216, or at least one circuit of the acoustic mapping module 220, the display module 218, and/or the product selection module 216, are configured as a hardware unit. All such combinations and variations are intended to fall within the scope of the present disclosure.
The processor 212 may be implemented as one or more general-purpose processors, an application specific integrated circuit (ASIC), one or more field programmable gate arrays (FPGAs), a digital signal processor (DSP), a group of processing components, or other suitable electronic processing components. In some embodiments, the one or more processors may be shared by multiple modules and/or circuits (e.g., the acoustic mapping module 220, the display module 218, and/or the product selection module 216 may comprise or otherwise share the same processor which, in some example embodiments, may execute instructions stored, or otherwise accessed, via different areas of memory). Alternatively or additionally, the one or more processors may be structured to perform or otherwise execute certain operations independent of one or more co-processors. In other example embodiments, two or more processors may be coupled to enable independent, parallel, pipelined, or multi-threaded instruction execution. All such variations are intended to fall within the scope of the present disclosure.
The memory 214 (e.g., RAM, ROM, Flash Memory, hard disk storage, etc.) may store data and/or computer code for facilitating the various processes described herein. For example, in some embodiments, the memory 214 is configured to store acoustic data associated with a first product (e.g., genset, etc.), a second product, etc. In some embodiments, the memory 214 is configured to store other data associated with the first product, second product, etc. For example, in some embodiments, the memory 214 is configured to store images, CAD models, etc. of the first product. In some embodiments, the memory 214 is configured to store a boundary condition including a surface geometry and/or a surface location relative to a position of the first product. In some embodiments, the memory 214 is configured to store outputs from the simulation such as contour plots based on user-specified boundary conditions, etc. The memory 214 may be communicably coupled (e.g., connected, linked, etc.) to the processor 212 to provide computer code or instructions to the processor 212 for executing at least some of the processes described herein. Moreover, the memory 214 may be or include tangible, non-transient volatile memory or non-volatile memory. Accordingly, the memory 214 may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described herein. In one embodiment, the non-transitory computer-readable medium is configured to store a program which, when executed by the processor 212, causes a device (e.g., the system 200, the controller 208, etc.) to perform any of the operations described herein.
The communications interface 222 may include wired and/or wireless interfaces (e.g., jacks, antennas, transmitters, receivers, transceivers, wire terminals, etc.) for conducting data communications with various systems, devices, and/or networks. For example, the communications interface 222 may include an Ethernet card and port for sending and receiving data via an Ethernet-based communications network and/or a Wi-Fi transceiver for communicating via a wireless communications network. The communications interface 222 may be structured to communicate via local area networks or wide area networks (e.g., the Internet, etc.) and may use a variety of communications protocols (e.g., IP, local area network (LAN), Bluetooth, ZigBee, near field communication, etc.). The communications interface 222 is configured to communicate with (e.g., is communicably coupled to) an I/O device. In the embodiment of
The product selection module 216 (e.g., product configuration module, etc.) is structured to receive user selections of the product(s) and product specifications to be modeled within the 3D simulation. The product selection module 216 may include a product database stored in memory 214 that includes a listing of products for which sound data is available. For example, the product database may include a searchable lookup table that includes lists of selection criteria for various input parameters, such as the type of product, the model number of the product, the performance characteristics of the product (e.g., horsepower, etc.), and the like. The lookup table may also include lists of selection criteria that allow a user to specify a number of products and/or the subcomponents or subassemblies that make up the product. For example, the product selection module 216 may be configured to present a visually-perceptible selection pane to the user via the display device 202. In the context of a genset, the selection pane may include a first column that includes a listing of engine types for the genset, a second column that includes a listing of mufflers that can be used with the selected engine system, etc. The product selection module 216 may be structured to receive command signals from the I/O device (e.g., haptic device 204, etc.) and interpret user selections from the commands.
The display module 218 is structured to generate a visual representation of the product(s) 102 and environment 104 (see
In yet other embodiments, the display module 218 is configured to work in coordination with the product selection module 216 to allow a user to “build” the product(s) and/or surrounding environment within the 3D simulation. For example, the display module 218 may allow the avatar 108 to pick and place components, products, and/or structures within the 3D simulation. As such, the product selection module 216 may be accessible to the avatar 108 from within the 3D simulation. In some embodiments, the display module 218 may also be structured to present visual representations of the sound field to a user. For example, the display module 218 may be structured to display contour plots of different sound metrics and/or to present text outputs that are indicative of various sound metrics at the location of the avatar 108. The display module 218 may be structured to issue commands to a graphics card and/or other graphics driver to present the selected image data and/or CAD models as images on the display device 202.
The acoustic mapping module 220 is structured to determine a sound field (e.g., a 3D sound field) based on measured sound data from real-world operation of the product 102 (see
In at least one embodiment, the acoustic mapping module 220 includes a sound data library (e.g., acoustic database, etc.) that stores sound data from experimental testing of a plurality of different products 102 (e.g., engine systems, gensets, vehicles, etc.). The acoustic mapping module 220 may be configured to combine (e.g., superimpose, overlay, etc.) sound data from multiple products 102 to determine the sound field resulting from the interaction between the products 102 (e.g., due to their relative position, orientation, etc.).
For example, in at least one embodiment, in a genset application, the acoustic mapping module 220 is configured to receive acoustic data associated with a first portion of a first product (e.g., a genset) and a second portion of the first product. In at least one embodiment, the acoustic mapping module 220 is configured to receive a position of the first portion relative to the second portion. In some embodiments, the relative position is an installation position at which the second portion is installed onto the first portion. In some embodiments, a memory is configured to store the installation position. In at least one embodiment, the acoustic mapping module 220 is configured to combine the acoustic data from the first portion and the second portion based on the relative position. The acoustic mapping module 220 is configured to update the 3D surface based on the combined acoustic data from the first portion and the second portion. For example, in some embodiments, the acoustic mapping module 220 is configured to combine experimentally-measured sound data from a user-selected engine system, a user-selected cooling system, and/or a user-selected exhaust system based on their relative positions in a real-world application to generate an acoustic model to accurately simulate real-world performance.
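By way of a hedged example only (the data layout and names below are assumptions, not the module's actual schema), combining acoustic data from two portions of a product based on an installation position might simply shift the second portion's sensor coordinates into the first portion's coordinate frame before merging the data sets:

```python
# Illustrative sketch only: merge measured acoustic data from two portions of a
# product (e.g., an engine and an exhaust system) by offsetting the second portion's
# sensor coordinates by its installation position relative to the first portion.
import numpy as np

def combine_portion_data(portion_a, portion_b, installation_offset_m):
    """Each portion is a list of (position_xyz, level_db) measurement tuples."""
    offset = np.asarray(installation_offset_m, dtype=float)
    shifted_b = [(tuple(np.asarray(p, dtype=float) + offset), level)
                 for p, level in portion_b]
    # The merged set feeds the mapping step; co-located points can later be
    # summed on an energy basis (see the combination step in the method below).
    return list(portion_a) + shifted_b

engine = [((0.0, 0.0, 0.5), 92.0), ((1.0, 0.0, 0.5), 90.5)]
muffler = [((0.0, 0.0, 0.2), 84.0)]
combined = combine_portion_data(engine, muffler, installation_offset_m=(0.0, 1.5, 1.0))
print(len(combined))  # 3 measurement points referenced to one coordinate frame
```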
The acoustic mapping module 220 may also be configured to apply boundary conditions to the acoustic model based on the geometry of the enclosure (e.g., based on CAD models of the enclosure geometry and relative positioning), and/or the geometry of surrounding structures in the real-world application. The acoustic mapping module 220 may be configured to overlay the 3D surface and/or sound field onto the 3D visualization generated by the display module 218 and/or to interface with the display module 218 to output sound to the audio output device 206 based on the position and/or orientation of the avatar 108 within the 3D simulation (e.g., a position and/or orientation that is determined by the display module 218). For example, the acoustic mapping module 220 may be configured to generate a control signal to each channel of the audio output device 206 that is indicative of the sound frequency, loudness, etc. at the location of at least one ear of the avatar 108 within the simulation.
Referring now to
At 302, a controller (e.g., the controller 208, the acoustic mapping module 220, etc.) receives acoustic data that is associated with a first product. For example, in some embodiments, operation 302 includes receiving a plurality of measurements associated with a plurality of discrete locations around the first product and storing the plurality of measurements in memory (e.g., memory 214). Operation 302 may include measuring acoustic data associated with a plurality of acoustic sensors positioned in discrete locations around the first product. In some embodiments, operation 302 includes receiving acoustic data obtained from the acoustic sensors. The acoustic sensors may include an array of microphones in the form of a microphone ball that covers a 360° acoustic view around the first product. In some embodiments, the coverage may be less than 360°. For example, the microphones may be positioned around all sides of the first product, at different vertical positions (e.g., Z-axis positions) along the sides, as well as above the first product. The measurements may be taken under controlled conditions within an experimental test facility that minimizes imported noise from the environment surrounding the first product (e.g., an anechoic or semi-anechoic chamber, etc.).
Operation 302 may include measuring acoustic data associated with different operating states of the first product (e.g., engine rotational speeds, power levels, etc.). Operation 302 may also include measuring acoustic data of the noise generated by the first product when the product is configured in multiple different operating positions. For example, in a scenario where the first product is an enclosed genset, operation 302 may include collecting acoustic data in a first operating state in which the enclosure is closed off (e.g., doors shut, maintenance panels closed, etc.) and a second operating state in which the enclosure is fully open and/or in an unenclosed condition of the genset.
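As one hypothetical way to organize the measurements collected in operation 302 (the disclosure does not specify a data model), each microphone reading could be stored as a record of sensor position, operating state, and per-band levels:

```python
# A minimal, hypothetical record for one microphone measurement from operation 302;
# the field names and units are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class AcousticMeasurement:
    product_id: str                                # e.g., a genset model identifier
    mic_position_m: Tuple[float, float, float]     # X, Y, Z relative to the product
    operating_state: str                           # e.g., "rated_load", "enclosure_open"
    band_levels_db: Dict[float, float]             # band center frequency (Hz) -> SPL (dB)

m = AcousticMeasurement(
    product_id="genset-A",
    mic_position_m=(3.0, 0.0, 1.2),
    operating_state="rated_load",
    band_levels_db={125.0: 72.4, 250.0: 74.1, 500.0: 76.8, 1000.0: 75.2},
)
print(m.operating_state, m.band_levels_db[1000.0])
```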
At 304, the controller receives acoustic data from a plurality of products that include the first product and a second product (e.g., a sub-assembly of the first product such as an engine system, cooling system, etc.), or a third product that is a variation of the first product (e.g., a genset with a different muffler type/size, etc.). Operations 302 or 304 may further include cataloging the acoustic data with an identifier (e.g., a genset model number, etc.) and storing the acoustic data in controller memory (e.g., memory 214, sound data library of the acoustic mapping module 220, etc.).
At 306, the controller combines the acoustic data from each product. In some embodiments, operation 306 includes receiving a selection (e.g., from a haptic device) of a visually-perceptible icon displayed in the simulation that identifies the second product, and in response, receiving acoustic data (e.g., from memory 214) associated with the second product to combine with the acoustic data from the first product. Operation 306 may include adding the acoustic data at each discrete sensor position from testing of the first product to the acoustic data at the same or similar sensor positions from testing of the second product. In other embodiments, operation 306 includes inputting the acoustic data from the acoustic sensors, based on their relative position with respect to the first and second product, into the acoustic simulation for further calculations.
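Where acoustic data from two products is added at the same (or a nearby) sensor position, decibel values combine on an energy basis rather than arithmetically; the sketch below shows that standard acoustics relationship and is not a statement of the disclosure's specific algorithm:

```python
# Hedged sketch of the addition in operation 306: sound pressure levels in dB are
# combined on an energy basis, not by arithmetic addition.
import math

def combine_levels_db(level_a_db, level_b_db):
    return 10.0 * math.log10(10.0**(level_a_db / 10.0) + 10.0**(level_b_db / 10.0))

# Two 80 dB contributions at the same point yield about 83 dB, not 160 dB.
print(round(combine_levels_db(80.0, 80.0), 1))  # 83.0
```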
At 308, the controller (e.g., the controller 208, the acoustic mapping module 220, the display module 218, etc.) receives a boundary condition including a surface geometry and a surface location of the boundary relative to the position of the first product and/or the second product. Operation 308 may include receiving a CAD model of the boundary feature that includes a 3D representation of the boundary, an X, Y, and Z coordinate location of the boundary condition, and/or material specifications for the boundary condition. For example, in a scenario where the boundary condition is a building that is located near the first product in the real-world application, operation 308 may include accessing a CAD model of the building from memory (e.g., memory 214), X, Y, and Z coordinates to identify the location of the building within the simulation (e.g., relative to the first product), and material properties of the outer walls of the building or average material properties of the building. Operation 308 may also include receiving fluid properties such as the average air temperature in the environment surrounding the first product.
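A hypothetical container for such a boundary condition is sketched below; the fields mirror the description above (CAD reference, placement, material data, and a fluid property), but the names and units are illustrative assumptions:

```python
# Hypothetical structure for a boundary condition received in operation 308.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class BoundaryCondition:
    cad_model_path: str                      # 3D representation of the boundary
    location_m: Tuple[float, float, float]   # X, Y, Z relative to the first product
    orientation_deg: float                   # rotation about the vertical axis
    absorption_by_band: Dict[float, float]   # frequency (Hz) -> absorption coefficient
    ambient_air_temp_c: float = 20.0         # optional fluid property

wall = BoundaryCondition(
    cad_model_path="models/sound_barrier.step",
    location_m=(6.0, 0.0, 0.0),
    orientation_deg=90.0,
    absorption_by_band={500.0: 0.65, 1000.0: 0.75},
)
print(wall.location_m)
```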
At 310, the controller (e.g., the controller 208, the acoustic mapping module 220, etc.) maps a sound field around the first product based on the acoustic data. Operation 310 may include using computational methods to determine the sound levels (e.g., frequency, loudness, directionality, etc.) at discrete points throughout the 3D environment of the simulation. Operation 310 may include iterating through spatial and temporal points using finite element analysis to determine the sound levels at each discrete point. For example, operation 310 may include solving partial differential equations for sound pressure and velocity using an iterative numerical technique to determine the spatial and temporal distribution of sound within the 3D environment (e.g., as a function of distance from the first product, etc.). In at least one embodiment, operation 310 may include using a finite element method (FEM) to predict the noise at different distances or positions from the object. In another embodiment, operation 310 may include using a boundary element method (BEM) to predict the noise at different distances or positions from the object. Among other benefits, using BEM may be less computationally resource intensive than FEM. In some embodiments, operation 310 may also include determining vibration levels via a coupled structural and acoustic analysis.
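The following sketch is deliberately much simpler than the FEM/BEM analyses described above: it captures only the free-field, point-source distance dependence (spherical spreading), which can serve as a rough sanity check on a full numerical solution. It is offered as an assumption-laden illustration, not a substitute for the coupled analysis of operation 310:

```python
# Simplified stand-in for a propagation calculation: free-field spherical spreading
# from a point source, Lp(r) = Lw - 20*log10(r) - 11. Real FEM/BEM solutions also
# account for reflections, diffraction, and boundary materials.
import math

def free_field_spl(sound_power_level_db, distance_m):
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return sound_power_level_db - 20.0 * math.log10(distance_m) - 11.0

for r in (1.0, 5.0, 10.0):
    # SPL falls roughly 6 dB per doubling of distance in the free field.
    print(r, round(free_field_spl(100.0, r), 1))
```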
At 312, the controller (e.g., the controller 208, the acoustic mapping module 220, etc.) generates a 3D surface of the sound data for the first product based on the mapping. Operation 312 may include interpolating between the discrete points determined in operation 310, and/or extrapolating values based on changes in the sound levels between the discrete points in at least one direction. The controller may use linear interpolation or a higher order interpolation or extrapolation technique to generate a continuous or semi-continuous 3D surface of the sound data (e.g., by using interpolation or extrapolation techniques to increase the resolution of the sound field from the mapping operation). For example, the 3D sound field may be generated such that changes within the sound field can be determined within a range between approximately 1 mm and 3 mm, or another suitable range depending on the desired spatial resolution of the simulation and available processing power of the controller. In at least one embodiment, the method further includes modifying the 3D surface based on user-specified boundary conditions, as will be further described.
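As one possible, non-limiting implementation of this interpolation step (SciPy is an assumption; the disclosure does not name a library), scattered sensor levels can be interpolated onto a regular grid to approximate a continuous surface, with a nearest-neighbor pass used as a crude form of extrapolation outside the measured region:

```python
# Minimal sketch of operation 312: interpolate scattered sensor levels onto a
# regular grid to form a (semi-)continuous surface of the sound field.
import numpy as np
from scipy.interpolate import griddata

# Scattered measurement points (x, y) in meters and levels in dB at a fixed height.
points = np.array([[-3, 0], [3, 0], [0, -3], [0, 3], [0, 0]], dtype=float)
levels = np.array([74.0, 76.0, 75.0, 73.5, 88.0])

# Regular grid at 0.1 m spacing (the text contemplates finer resolution in practice).
xi, yi = np.meshgrid(np.linspace(-3, 3, 61), np.linspace(-3, 3, 61))
surface = griddata(points, levels, (xi, yi), method="linear")

# Linear interpolation leaves NaN outside the convex hull of the sensors; a
# nearest-neighbor fill is one simple way to extrapolate those regions.
nearest = griddata(points, levels, (xi, yi), method="nearest")
surface = np.where(np.isnan(surface), nearest, surface)
print(surface.shape)
```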
Referring now to
At 402, the controller (e.g., controller 208) receives a setting selection from an I/O device. Operation 402 may include receiving, from a haptic device (e.g., the haptic device 204), a control signal that indicates a size of the virtual environment, an elevation of the environment, an average temperature of the environment, and/or other environmental parameters. The control signal may be produced in response to manipulation of the haptic device by a user, for example, in response to the user selecting a visually-perceptible icon that is presented to the user by the display device (e.g., the display device 202). At 404, the controller (e.g., the display module 218) loads the setting in response to the setting selection. Operation 404 may include presenting images through the display device that are representative of the desired 3D environment.
At 406, the controller (e.g., the product selection module 216, etc.) receives a product selection from the I/O device. Operation 406 may include presenting to the user, via the display device, a visually-perceptible selection pane that is selectable using the haptic device.
At 408, the controller receives a product position from the I/O device. Operation 408 may include receiving spatial coordinates for each product within the simulation (e.g., an X-axis position, Y-axis position, Z-axis position, a rotational position, etc.). In other embodiments, operation 408 may include determining a desired product position and/or orientation based on user interactions with the haptic device. For example, the virtual reality system may incorporate a drag and drop feature that allows the user to manipulate the position of the product(s) within the simulation by selecting the product(s) and “walking,” pulling and/or dragging the product to a new location. It will be appreciated that a similar approach to operations 406-408 may be used to receive and/or establish boundary conditions for the simulation.
At 410, the controller (e.g., the controller 208, the display module 218, the product selection module 216, etc.) loads product data based on the user selections. Operation 410 may include searching through a lookup table, based on the product identifier (e.g., by comparing the product identifier with entries in the table for each product), to locate the images and/or other file(s) and/or CAD model(s) (e.g., dimensions, sound files, etc.) associated with the product, and to obtain the sound data from experimental testing of the product.
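A minimal, hypothetical illustration of this lookup is shown below; the table entries, field names, and file paths are invented for the example and are not part of the disclosure:

```python
# Illustrative lookup used in operation 410: match a product identifier to its
# stored assets (CAD model, images, sound data, dimensions).
PRODUCT_TABLE = {
    "GEN-500X": {
        "cad_model": "models/gen_500x.step",
        "images": ["img/gen_500x_front.png"],
        "sound_data": "acoustic/gen_500x_rated_load.h5",
        "dimensions_m": (6.1, 2.4, 2.6),
    },
}

def load_product_data(product_id):
    try:
        return PRODUCT_TABLE[product_id]
    except KeyError:
        raise KeyError(f"no test data cataloged for product '{product_id}'")

print(load_product_data("GEN-500X")["sound_data"])
```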
At 412, the controller (e.g., the controller 208, the display module 218, the acoustic mapping module 220, etc.) evaluates the sound field based on the product data and product positioning. Operation 412 may be similar to operations 306 through 312 of method 300 (
In some embodiments, operation 412 includes generating a simulation of the first product by combining the 3D surface with a visual representation of the first product. In at least one embodiment, operation 412 includes presenting a visual representation of the first product on the display device 202. The visual representation can include the image of the first product and/or CAD representation of the first product overlaid within a simulation space. Operation 412 may further include providing, via an audio output device, an audio output based on the position of an avatar within the simulation with respect to a position of the first product (e.g., a position of the avatar within the simulated environment). For example, operation 412 may include determining a position of a left ear and a right ear of the avatar, a directionality of the sound relative to the position and orientation of the left ear and the right ear, and generating an audio output based on the position and the directionality.
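A crude sketch of that binaural idea follows; the simple interaural level difference model here is an assumption made for illustration (it ignores head-related transfer functions) and is not the rendering method of the disclosure:

```python
# Illustrative only: sample the interpolated field at each ear position and
# attenuate the channel for the ear turned away from the product.
import numpy as np

def ear_levels_db(field_sampler, head_pos, yaw_rad, product_pos,
                  ear_offset_m=0.09, max_ild_db=6.0):
    """field_sampler(xyz) -> SPL in dB at that point; yaw is the avatar's heading."""
    head = np.asarray(head_pos, dtype=float)
    right = np.array([np.sin(yaw_rad), -np.cos(yaw_rad), 0.0])  # 90 deg clockwise from facing
    channels = {}
    for name, sign in (("left", -1.0), ("right", 1.0)):
        ear = head + sign * ear_offset_m * right
        to_source = np.asarray(product_pos, dtype=float) - ear
        to_source /= np.linalg.norm(to_source)
        lateral = float(np.dot(sign * right, to_source))  # > 0 when the source is on this ear's side
        channels[name] = field_sampler(ear) - max_ild_db * max(0.0, -lateral)
    return channels  # drives the left/right channels of the audio output device

demo = ear_levels_db(lambda p: 75.0, head_pos=(4.0, 0.0, 1.6),
                     yaw_rad=np.pi / 2, product_pos=(0.0, 0.0, 1.0))
print({k: round(v, 1) for k, v in demo.items()})
```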
At 414, the controller (e.g., the controller 208, the display module 218, the acoustic mapping module 220, etc.) modifies a position of an avatar based on user inputs from the I/O device. Operation 414 may include receiving control signals from the haptic device and/or display device to navigate the avatar through the virtual environment. For example,
At 416, the controller (e.g., the controller 208, the display module 218, the acoustic mapping module 220, etc.) manipulates a position of a portion of the first product within the simulation. Operation 416 may include receiving, from the haptic device, an indication (e.g., control signal, etc.) to manipulate the position of the portion. For example, as shown in
At 418, the controller (e.g., the controller 208, the acoustic mapping module 220, the display module 218, etc.) adjusts sound quality and/or display parameters within the simulation. Operation 418 may include receiving a command from the haptic device to add visual indicators of the sound level at the avatar's location within the simulation (e.g., the sound level at an ear of the avatar 108 as shown in
Operation 418 may further include generating plots to illustrate how the sound level changes with distance from the product(s) and/or between the inside and outside of an enclosure for the product(s). For example,
In one embodiment, operation 418 includes overlaying the contour plot 700 onto a ground surface of the simulation. In some embodiments, the ground surface includes a portion of an office space, parking lot, neighboring houses, and/or any other environmental features or applications surrounding the genset (e.g., any environment that the genset is installed in such as an industrial environment, residential environment, data center, etc.).
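Purely as an illustration of the kind of contour rendering described above (no plotting library is specified by the disclosure), a ground-level contour plot could be produced as follows, using synthetic spreading-loss values in place of the mapped sound field:

```python
# Illustrative only: render a ground-level sound contour plot with Matplotlib.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-10, 10, 121), np.linspace(-10, 10, 121))
r = np.hypot(x, y) + 1.0                       # distance from a source near the origin
spl = 95.0 - 20.0 * np.log10(r)                # simple spreading loss for demonstration

fig, ax = plt.subplots()
cs = ax.contourf(x, y, spl, levels=np.arange(50, 100, 5), cmap="viridis")
fig.colorbar(cs, ax=ax, label="Sound pressure level (dB)")
ax.set_xlabel("X (m)")
ax.set_ylabel("Y (m)")
ax.set_title("Ground-level sound contours around the product")
plt.show()
```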
In at least one embodiment, operation 418 includes displaying (e.g., via the display device 202 of
In at least one embodiment, operation 418 includes providing an indication of how sound from the genset (e.g., from the genset itself and/or how the genset's interaction with the surrounding environment) can affect a human conversation, or otherwise impact discussions between individuals at different positions relative to the genset. For example, as shown in
The resulting sound input is modifiable based on sound from the genset and based on details of the surrounding environment to simulate how a person's voice would actually be affected by the surrounding environment. Operation 418 can include modifying the sound input based on the 3D surface (e.g., the sound from the first product, including the effects of the surrounding environment) to simulate how the sound input would actually be affected by the sound field around the first product and/or the surrounding environment. Operation 418 includes outputting the modified sound input to the first avatar, through speakers of the haptic device for the first avatar. In other words, two users, via their avatars, can speak to one another within the simulation, and their headsets will play back the actual sound that the user would hear (including any genset contamination of the sound).
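A very rough sketch of that mixing idea is shown below; real processing would be applied per frequency band and would include the propagation effects described above, and all names and values here are hypothetical:

```python
# Illustrative only: scale a captured voice signal and a genset noise signal by
# their respective levels at the listening avatar's position, then mix them.
import numpy as np

def mix_voice_with_noise(voice, noise, voice_level_db, noise_level_db):
    """voice, noise: float arrays in [-1, 1]; levels are at the listener position."""
    n = min(len(voice), len(noise))
    ref = max(voice_level_db, noise_level_db)
    # Convert relative dB levels to linear amplitude weights.
    w_voice = 10.0**((voice_level_db - ref) / 20.0)
    w_noise = 10.0**((noise_level_db - ref) / 20.0)
    return np.clip(w_voice * voice[:n] + w_noise * noise[:n], -1.0, 1.0)

t = np.linspace(0, 1, 8000, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 220 * t)                         # stand-in for a voice
noise = 0.5 * np.random.default_rng(0).standard_normal(t.size)    # stand-in for genset noise
out = mix_voice_with_noise(voice, noise, voice_level_db=60.0, noise_level_db=72.0)
print(out.shape)
```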
In at least one embodiment, operation 418 includes modifying the audio output; for example, by presenting sound quality and/or modification controls that the user can manipulate within the simulation. The sound controls can be used to modify the frequency (e.g., frequency suppression) for tracing noise sources and/or to enhance the sound quality. The sound controls can also be used to modify the loudness, sharpness, tonality, roughness, fluctuation strength, or any other calculated sound quality parameter. In at least some embodiments, these controls facilitate virtual reality led design of the product for the final application, by allowing the user to observe the impact of design changes made within the simulation. In some embodiments, operation 418 further includes inserting virtual constructs and/or manufactured artifacts (e.g., sound generators, etc.) into the model, in place of more complex structures, to reduce modeling complexity and allow for lower order approximations.
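As a hedged example of a frequency-suppression control (ordinary FFT filtering, not the disclosure's actual signal chain), one band of a signal's spectrum can be zeroed so the user can judge how much a tonal source contributes to what they hear:

```python
# Illustrative band suppression: zero one frequency band of a signal's spectrum.
import numpy as np

def suppress_band(signal, sample_rate_hz, f_low_hz, f_high_hz):
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate_hz)
    spectrum[(freqs >= f_low_hz) & (freqs <= f_high_hz)] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)
filtered = suppress_band(tone, fs, f_low_hz=900, f_high_hz=1100)   # removes the 1 kHz component
print(round(float(np.abs(np.fft.rfft(filtered))[1000]), 3))        # ~0 at the 1 kHz bin
```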
Among other benefits, the virtual reality system of the present disclosure provides a tool that a product manufacturer, supplier or other party can use to accurately simulate the performance of products in a real-world setting. The system can be used to represent and demonstrate product performance to another party or parties (e.g., a target audience, such as customers) without forcing the party or parties into the field, and allows a manufacturer to iterate through design changes before installation of the product into its end-use environment. The virtual reality system can also be used to tailor products to meet user specifications, without undue experimentation in a test facility.
For the purpose of this disclosure, the term “coupled” or “communicated” may mean the joining or linking of two elements directly or indirectly to one another. Such joining may be wired or wireless in nature. Such joining may be achieved with the two members or the two members and any additional intermediate members. For example, circuit A communicably “coupled” to circuit B may signify that the circuit A communicates directly with circuit B (i.e., no intermediary) or communicates indirectly with circuit B (e.g., through one or more intermediaries).
While various modules and/or circuits with particular functionality are shown in
As mentioned above and in one configuration, the “modules” may be implemented in machine-readable medium for execution by various types of processors, such as processor 212 of
It should be appreciated that the processor 212 is configured to perform any of the operations described herein (e.g., any of the operations described with reference to the method 300 of
It should be noted that although the diagrams herein may show a specific order and composition of method steps, it is understood that the order of these steps may differ from what is depicted. For example, two or more steps may be performed concurrently or with partial concurrence. Also, some method steps that are performed as discrete steps may be combined, steps being performed as a combined step may be separated into discrete steps, the sequence of certain processes may be reversed or otherwise varied, and the nature or number of discrete processes may be altered or varied. The order or sequence of any element or apparatus may be varied or substituted according to alternative embodiments. Accordingly, all such modifications are intended to be included within the scope of the present disclosure as defined in the appended claims. Such variations will depend on the machine-readable media and hardware systems chosen. It is understood that all such variations are within the scope of the disclosure.
The foregoing description of embodiments has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from this disclosure. The embodiments were chosen and described in order to explain the principles of the disclosure and its practical application to enable one skilled in the art to utilize the various embodiments and with various modifications as are suited to the particular use(s) contemplated. Other substitutions, modifications, changes and omissions may be made in the design, operating conditions and arrangement of the embodiments without departing from the scope of the present disclosure as expressed in the appended claims.
This application claims the benefit of and priority to U.S. Provisional Patent Application No. 63/168,586, filed Mar. 31, 2021, the entire disclosure of which is hereby incorporated by reference herein.