We explore an integrated approach to sound generation that supports
a wide variety of physics-based simulation models and computer-animated
phenomena. Targeting high-quality offline sound synthesis, we seek to resolve
animation-driven sound radiation with near-field scattering and
diffraction effects. The core of our approach is a sharp-interface
finite-difference time-domain (FDTD) wavesolver, with a series of supporting
algorithms to handle rapidly deforming and vibrating embedded interfaces
arising in physics-based animation sound. Once the solver rasterizes these
interfaces, it must evaluate acceleration boundary conditions (BCs) that involve
model- and phenomena-specific computations. We introduce acoustic shaders as a
mechanism to abstract away these complexities, and describe a variety of
implementations for computer animation: near-rigid objects with
ringing and acceleration noise, deformable (finite element) models such as
thin shells, bubble-based water, and virtual characters. Since time-domain wave
synthesis is expensive, we only simulate pressure waves in a small region about
each sound source, then estimate a far-field pressure signal. To further
improve scalability beyond multi-threading, we propose a fully time-parallel
sound synthesis method that is demonstrated on commodity cloud computing
resources. In addition to presenting results for multiple animation phenomena
(water, rigid bodies, shells, kinematic deformers, etc.), we also propose 3D automatic
dialogue replacement (3DADR) for virtual characters so that pre-recorded
dialogue can include character movement, and near-field shadowing and
scattering sound effects.
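As background for the FDTD wavesolver at the core of this approach, the basic leapfrog pressure/velocity update can be sketched in one dimension. This is a minimal illustration only: the paper's solver is a 3D sharp-interface method with embedded moving boundaries, and all constants below are arbitrary.

```python
import numpy as np

# Minimal 1D FDTD acoustic solver sketch (illustrative only; the paper's
# solver is 3D with sharp-interface embedded boundary conditions).
c, dx = 343.0, 0.01          # sound speed (m/s), grid spacing (m)
dt = 0.5 * dx / c            # time step satisfying the CFL condition
n, steps = 200, 300

p = np.zeros(n)              # pressure at cell centers
v = np.zeros(n - 1)          # velocity at cell faces (staggered grid)
p[n // 2] = 1.0              # initial pressure pulse at the domain center

for _ in range(steps):
    # update face velocities from the pressure gradient
    v -= (dt / dx) * (p[1:] - p[:-1])
    # update interior pressures from the velocity divergence
    p[1:-1] -= c**2 * (dt / dx) * (v[1:] - v[:-1])
```

On the staggered grid, pressures and velocities are offset by half a cell and half a time step, which is what makes the leapfrog update second-order accurate and stable under the CFL condition.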

Thin shells — solids that are thin in one dimension compared to the other
two — often emit rich nonlinear sounds when struck. Strong excitations can
even cause chaotic thin-shell vibrations, producing sounds whose energy
spectrum diffuses from low to high frequencies over time — a phenomenon
known as wave turbulence. Together, these nonlinearities give shells such
as cymbals and gongs their characteristic “glinting” sound. Yet simulation
models that efficiently capture these sound effects remain elusive.

We propose a physically based, multi-scale reduced simulation method
to synthesize nonlinear thin-shell sounds. We first split nonlinear vibrations
into two scales, with a small low-frequency part simulated in a fully nonlinear
way, and a high-frequency part containing many more modes approximated
through time-varying linearization. This allows us to capture interesting
nonlinearities in the shells’ deformation, tens of times faster than previous
approaches. Furthermore, we propose a method that enriches simulated
sounds with wave turbulent sound details through a phenomenological
diffusion model in the frequency domain, and thereby sidestep the expensive
simulation of chaotic high-frequency dynamics. We show several examples
of our simulations, illustrating the efficiency and realism of our model.
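The two-scale split can be illustrated schematically: a few low-frequency modes are integrated with a nonlinear coupling, while the many high-frequency modes evolve as independent linear oscillators. Everything below (mode counts, frequencies, the cubic coupling term) is an invented toy, not the paper's reduced model.

```python
import numpy as np

# Toy sketch of a two-scale modal split (illustrative; not the paper's model).
# Low-frequency modes receive a nonlinear coupling term; high-frequency
# modes are treated as independent linear oscillators.
rng = np.random.default_rng(0)
omega = np.sort(rng.uniform(100.0, 5000.0, 64))   # modal frequencies (rad/s)
split = 8                                          # first 8 modes: nonlinear scale
q = np.zeros(64)                                   # modal displacements
qd = np.zeros(64)                                  # modal velocities
q[:split] = 1e-3                                   # excite the low-frequency modes
dt = 1e-5

def accel(q):
    a = -omega**2 * q                              # linear restoring force
    # toy cubic coupling among the low-frequency modes only
    a[:split] -= 1e6 * q[:split] * np.sum(q[:split] ** 2)
    return a

for _ in range(1000):                              # semi-implicit Euler steps
    qd += dt * accel(q)
    q += dt * qd
```

The point of the split is cost: only the handful of low-frequency modes pay for nonlinear coupling at each step, while the large high-frequency block stays cheap and linear.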

Many objects of interest in imaging, such as biological cells or turbulent air, are phase-only objects: they are transparent and thus produce little to no contrast in wide-field microscopes. The phase accumulated by light passing through such an object nevertheless carries important information about its refractive index and thickness. We propose a method for retrieving this phase by using a spatial light modulator (SLM) to conjugate the phase of the object, flattening the wavefront of light passing through the SLM and the object. Once the wavefront is flattened, the resulting configuration on the SLM is the conjugate of the phase image, which we can easily invert to recover the original phase image. This method retrieves the phase without any prior knowledge about the object.

Our algorithm performs a decomposition of the image into basis functions and searches for the coefficients that yield the flattest output intensity pattern. This algorithm takes advantage of the fact that a relatively small number of basis elements can store the majority of the information in the image. Popular phase retrieval methods such as the Gerchberg–Saxton algorithm can only converge to the phase image under light that is sufficiently coherent. From our simulations, we find that our method consistently produces correlations of over 99% with the original phase image, using either incoherent or coherent light and only 10% as many basis elements as the number of pixels in the image. We believe this result is a strong indication that this method will be able to reliably retrieve a direct phase image in the laboratory.
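A rough numerical sketch of the basis-coefficient idea follows, with a low-order Fourier basis standing in for whatever basis is used and a least-squares projection standing in for the experimental coefficient search. Note the simplification: in the laboratory only the output intensity is measurable, so the real search cannot use the object phase directly as this toy does.

```python
import numpy as np

# Sketch of phase retrieval by wavefront conjugation (illustrative; the
# basis, object, and solver are stand-ins for the SLM experiment).
rng = np.random.default_rng(1)
n, k = 64, 16                        # pixels per side, number of basis elements

# low-order 2D Fourier basis as the compact representation
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
basis = np.stack([np.cos(i * X + j * Y) for i in range(4) for j in range(4)])

true_c = rng.normal(0, 0.3, k)
phi = np.tensordot(true_c, basis, axes=1)   # "unknown" object phase image

# Least-squares projection plays the role of the coefficient search that,
# in the experiment, would minimize the non-flatness of the output intensity.
A = basis.reshape(k, -1).T                  # (pixels, basis elements)
c_hat, *_ = np.linalg.lstsq(A, phi.ravel(), rcond=None)

phi_hat = np.tensordot(c_hat, basis, axes=1)
corr = np.corrcoef(phi.ravel(), phi_hat.ravel())[0, 1]
```

Because the object phase lies in the span of a small basis, only k = 16 coefficients need to be found for a 64 x 64 image, which is the compression the abstract refers to.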

The guiding center code ORBIT was originally developed 30 years ago to study the
drift-orbit effects of charged particles in the strong guiding magnetic fields of tokamaks. Today,
ORBIT remains a very active tool in magnetic-confinement fusion research and continues to
adapt to the latest toroidal devices, such as the NSTX-Upgrade, for which it plays a very
important role in the study of energetic particle effects. Although the capabilities of ORBIT
have improved over the years, the code remains a serial application, which has now
become an impediment to the lengthy simulations required for the NSTX-U project. In this
work, multi-threaded parallelism is introduced in the core of the code with the goal of achieving
the largest performance improvement while minimizing changes made to the source code. To
that end, we introduce compiler directives in the most compute-intensive parts of the code,
which constitutes the stable core that seldom changes. Standard OpenMP directives are
used for shared-memory CPU multi-threading while newly developed OpenACC directives
and CUDA Fortran code are used for Graphics Processing Unit (GPU) multi-threading. Our
data show that the fully optimized CUDA Fortran version is 53.6 times faster than the original serial code.
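The parallelization strategy, directives wrapped around a stable compute-intensive particle loop, exploits the fact that guiding-center orbits are independent of one another. A Python stand-in of that structure follows (illustrative only; the real code is Fortran with OpenMP/OpenACC directives, and the "orbit" update here is a placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

# Each particle's orbit is independent, so the loop over particles
# parallelizes trivially -- the same property the OpenMP directives
# exploit in ORBIT's compute-intensive core.
def push_particle(seed, steps=10000):
    # toy stand-in for advancing one guiding-center orbit
    x = float(seed)
    for _ in range(steps):
        x = 0.999 * x + 1e-4
    return x

def run_parallel(n_particles=8):
    # a shared-memory worker pool plays the role of OpenMP threads
    with ThreadPoolExecutor() as pool:
        return list(pool.map(push_particle, range(n_particles)))

results = run_parallel()
```

Keeping the parallel region confined to this inner loop is what lets the surrounding source code stay nearly unchanged, matching the stated goal of minimal intrusion.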