LLNL-VIDEO-849083
Inside Livermore Lab
The Bible Study Employee Resource group hosted William Lane Craig to speak on the topic “What evidence do we have for God’s existence?”
Speaker: Alberto Padovan (UIUC, linkedin.com/in/alberto-padovan-7b0416272)
Description: Computing reduced-order models using non-intrusive methods is particularly attractive for systems that are simulated using black-box solvers. However, obtaining accurate data-driven models can be challenging, especially if the underlying systems exhibit large-amplitude transient growth. Although these systems may evolve near a low-dimensional subspace that can be easily identified using standard techniques such as Proper Orthogonal Decomposition (POD), computing accurate models often requires projecting the state onto this subspace via a non-orthogonal projection. While appropriate oblique projection operators can be computed using intrusive techniques that leverage the form of the underlying governing equations, purely data-driven methods currently tend to achieve dimensionality reduction via orthogonal projections, and this can lead to models with poor predictive accuracy. We address this issue by introducing a non-intrusive framework designed to simultaneously identify oblique projection operators and reduced-order dynamics directly from data. In particular, given training trajectories and assuming reduced-order dynamics of polynomial form, we fit a reduced-order model by solving an optimization problem over the product manifold of a Grassmann manifold, a Stiefel manifold, and several linear spaces (as many as the tensors that define the low-order dynamics). Furthermore, we show that the gradient of the cost function with respect to the optimization parameters can be conveniently written in closed form, so that there is no need for automatic differentiation. This formulation is compared with state-of-the-art methods on three examples: a three-dimensional system of ordinary differential equations, the complex Ginzburg-Landau (CGL) equation, and a two-dimensional lid-driven cavity flow at Reynolds number Re = 8300.
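The distinction between orthogonal and oblique projection that motivates this work can be sketched in a few lines of NumPy. Everything below is illustrative: the non-normal system matrix, the trial basis Phi, and the test basis Psi are random stand-ins, not the manifold-optimized construction described in the talk.

```python
import numpy as np

# Random stand-ins: a stable but non-normal system (transient growth comes
# from the strong upper-triangular coupling), a trial basis Phi, and a
# distinct test basis Psi.
rng = np.random.default_rng(0)
n, r = 50, 3
A = -np.eye(n) + 0.5 * np.triu(rng.standard_normal((n, n)), k=1)

Phi = np.linalg.qr(rng.standard_normal((n, r)))[0]   # trial (e.g. POD) basis
Psi = np.linalg.qr(A.T @ Phi)[0]                     # some distinct test basis

# Orthogonal (Galerkin) reduced operator.
A_galerkin = Phi.T @ A @ Phi

# Oblique (Petrov-Galerkin) reduced operator: (Psi^T Phi)^{-1} Psi^T A Phi.
A_petrov = np.linalg.solve(Psi.T @ Phi, Psi.T @ A @ Phi)

# The oblique projector P = Phi (Psi^T Phi)^{-1} Psi^T is idempotent
# (P^2 = P) but, unlike the orthogonal projector Phi Phi^T, not symmetric.
P = Phi @ np.linalg.solve(Psi.T @ Phi, Psi.T)
print(np.allclose(P @ P, P))     # True
```

When the dynamics evolve near a subspace approached obliquely, the Petrov-Galerkin operator can capture behavior the Galerkin one misses, which is the gap the talk's optimization framework targets.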
Bio: Alberto is a Postdoctoral Research Associate in the Department of Aerospace Engineering at the University of Illinois Urbana-Champaign. He obtained his PhD from Princeton University in August 2022, and his research interests lie at the intersection of fluid mechanics, dynamical systems and control theory. Alberto’s current work focuses on the development of data-driven methods for model reduction of fluid flows, as well as on the analysis and control of supersonic and hypersonic wall-bounded flows.
DDPS webinar: librom.net/ddps.html
💻 LLNL News: llnl.gov/news
📲 Instagram: instagram.com/livermore_lab
🤳 Facebook: facebook.com/livermore.lab
🐤 Twitter: twitter.com/Livermore_Lab
About LLNL: Lawrence Livermore National Laboratory has a mission of strengthening the United States’ security through development and application of world-class science and technology to: 1) enhance the nation’s defense, 2) reduce the global threat from terrorism and weapons of mass destruction, and 3) respond with vision, quality, integrity and technical excellence to scientific issues of national importance.
Learn more about LLNL: llnl.gov/.
IM release number is: LLNL-VIDEO-867054
After his retirement, he drew upon his lifelong love of science and mystery stories to write Atomic Peril. When he's not writing or visiting his children and ten grandchildren, he can often be found on his road bike, navigating the hills surrounding the lab where he once worked. Learn more about the Livermore Laboratory Employee Services Association Author/Speaker Series events: llesa.com/previous-presentations1.html
#AtomicPeril #LLESA #LivermoreLab
LLNL-VIDEO-870040
Speaker: Akhil Nekkanti (Caltech, https://scholar.google.co.in/citations?user=qOpT0w0AAAAJ&hl=en)
Description: Turbulent flows are high-dimensional systems characterized by instabilities and non-linearity, which make modeling challenging. Data-driven techniques reduce complexity by extracting key flow features and projecting governing equations onto a low-dimensional subspace. Recently, spectral proper orthogonal decomposition (SPOD), a frequency-domain variant of principal component analysis, has emerged as a powerful tool for analyzing turbulent flows. We extend SPOD to include low-rank reconstruction, denoising, and frequency-time analysis. In this talk, I will demonstrate two applications: gappy-data reconstruction and the intermittency of coherent structures. First, our gappy-data reconstruction algorithm uses spatial and temporal correlations to estimate compromised or missing regions, outperforming standard techniques like gappy POD and Kriging. Second, we introduce a convolution-based strategy for frequency-time analysis that characterizes the intermittency of spatially coherent flow structures. When applied to turbulent jet data, SPOD-based frequency-time analysis reveals that the intermittent occurrence of large-scale coherent structures is directly associated with high-energy events. Finally, we present bispectral mode decomposition (BMD), a technique that extracts flow structures linked to nonlinear triadic interactions by optimizing third-order statistics. This method is applied to a forced turbulent jet to examine and construct the cascade of triads.
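The core SPOD computation can be sketched as a Welch-style estimate: block the snapshot sequence, FFT each block in time, and eigendecompose the cross-spectral density matrix at each frequency. The data and parameters below are synthetic placeholders, not the turbulent-jet data from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: one coherent spatial structure oscillating at f0, plus noise.
nx, nt, dt = 32, 1024, 0.01
f0 = 5.0
phi_true = np.sin(np.linspace(0.0, np.pi, nx))
t = np.arange(nt) * dt
Q = np.outer(phi_true, np.cos(2*np.pi*f0*t)) + 0.1 * rng.standard_normal((nx, nt))

# Welch-style blocking: overlapping segments, FFT each block in time.
nfft, novlp = 128, 64
starts = range(0, nt - nfft + 1, nfft - novlp)
blocks = np.stack([np.fft.rfft(Q[:, s:s+nfft], axis=1) for s in starts])

freqs = np.fft.rfftfreq(nfft, dt)
fi = np.argmin(np.abs(freqs - f0))        # frequency bin nearest f0

# SPOD modes at a frequency are eigenvectors of the cross-spectral
# density matrix estimated over blocks.
Qf = blocks[:, :, fi]                     # (n_blocks, nx)
C = (Qf.T @ Qf.conj()) / Qf.shape[0]      # nx x nx CSD estimate (Hermitian)
lam, V = np.linalg.eigh(C)
lam, V = lam[::-1], V[:, ::-1]            # descending energy

mode = np.abs(V[:, 0])                    # leading SPOD mode shape
print(lam[0] / lam.sum())                 # dominant mode captures most energy
```

The leading mode recovers the planted structure because the signal energy concentrates in the bin nearest f0 while the noise spreads across all frequencies.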
Bio: Akhil Nekkanti is a postdoctoral scholar in the Division of Engineering and Applied Sciences at Caltech. He received his Ph.D. from the University of California San Diego in 2023. His research interests include reduced-order modeling, hydrodynamic stability, aeroacoustics, and turbulent flows. He specializes in high-fidelity numerical simulations and developing data-driven techniques for flow control and the discovery of flow physics.
LLNL-VIDEO-870221
The integration of ontologies, semantic reasoning, and graph-based deep learning and AI signifies a paradigm shift in studying high-dimensional multimodal problems, particularly within advanced manufacturing, synchrotron science, and photovoltaics. Ontologies provide structured frameworks for knowledge representation, while graphs model complex relationships and interactions, enhancing AI’s reasoning and predictive capabilities. In this talk, we explore ‘mds-onto’: a low-level ontology developed for multiple materials science domains such as laser powder bed fusion (LPBF), direct ink writing (DIW), and synchrotron x-ray experiments. Foundation models, which are domain-specific deep learning neural network models trained using self-supervised learning, can be fine-tuned for multiple specific learning tasks. Utilizing spatiotemporal graph neural networks as graph foundation models enables multimodal analysis, wherein preprocessing extracts features from diverse datasets and constructs spatiotemporal graphs with these feature vectors for foundation model training. The resulting data-driven digital twins (ddDTs) are capable of answering task-specific questions such as classifying parts with or without pores and ensuring track continuation in LPBF, performing data imputation and regression for error estimation in DIW, and predicting PV power plant performance, enabling real-time monitoring, predictive maintenance, and optimization of manufacturing processes. Incorporating ontologies and knowledge graphs into ddDTs enhances their intelligence and decision-making capabilities, thereby improving process efficiency and product innovation. This underscores the importance of data-centric AI for ensuring accurate and robust AI models.
Dr. Pawan Tripathi is a research assistant professor in the Department of Materials Science and Engineering at CWRU in Ohio. He leads projects related to materials data science at the DOE/NNSA-funded Center of Excellence for Materials Data Science for Stockpile Stewardship. His expertise lies in interface structural simulations and developing automated analysis pipelines for large multimodal datasets from diverse experiments. Dr. Tripathi’s current research focuses extensively on data FAIRification, deep learning, image processing, semantic segmentation, and statistical modeling, particularly in the context of advanced manufacturing and laser powder bed fusion.
LLNL-VIDEO-2000580
DDPS Talk date: September 20th, 2024
Speaker: Jian-Xun Wang (University of Notre Dame, https://sites.nd.edu/jianxun-wang/)
Description: Predictive modeling and simulation are essential for understanding, predicting, and controlling complex physical processes across many engineering disciplines. However, traditional numerical models, which are based on first principles, face significant challenges, especially for complex systems involving multiple interacting physics across a wide range of spatial and temporal scales. (1) A primary obstacle stems from our often-incomplete understanding of the underlying physics, which results in inadequate mathematical models that fail to accurately capture system behavior. (2) Additionally, the high computational demands of traditional solvers represent another substantial barrier, especially when real-time control or many repeated model queries are required, as in design optimization, inference, and uncertainty quantification. Fortunately, the continual evolution of sensing technology and the exponential increase in data availability have opened new avenues for the development of data-driven computational modeling frameworks. Bolstered by advanced machine learning and GPU computing techniques, these models hold the promise of greatly enhancing our predictive capabilities, effectively tackling the challenges posed by traditional numerical models. While data science and machine learning offer novel methods for computational mechanics models, challenges persist, such as the need for extensive data, limited generalizability, and lack of interpretability. Addressing existing challenges for predictive modeling issues requires innovative computational methods that integrate advanced machine learning techniques with physics principles. This talk will introduce some of our efforts along this direction, spotlighting the Neural Differentiable Physics, a novel SciML framework unifying classic numerical PDE solvers and advanced deep learning models for computational modeling of complex physical systems. 
Our approach centers on the integration of numerical PDE operators into neural architectures, enabling the fusion of prior knowledge of known physics, multi-resolution data, numerical techniques, and deep neural networks through differentiable programming. The way for integrating physics into the deep learning model represents a novel departure from existing SciML frameworks, such as Physics-Informed Neural Networks (PINNs). By combining the strengths of known physical principles and established numerical techniques with cutting-edge deep learning and AI technology, this innovative framework promises to inaugurate a new era in the understanding and modeling of complex physical systems, with far-reaching implications for science and engineering applications.
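The central idea of differentiable programming through a numerical solver, propagating gradients through the PDE time-stepper to calibrate embedded parameters, can be illustrated with a hand-rolled forward-mode sensitivity. This is a minimal sketch under assumed toy settings (a 1D heat equation with an unknown source amplitude), not the speaker's Neural Differentiable Physics framework.

```python
import numpy as np

# Toy problem: u_t = nu * u_xx + theta * s(x), Dirichlet BCs, unknown theta.
nx, nsteps = 64, 200
dx, dt, nu = 1.0 / nx, 1e-4, 0.1
x = np.linspace(0.0, 1.0, nx)
source = np.sin(np.pi * x)

def rollout(theta):
    # Integrate with explicit finite differences and, in lockstep, the
    # forward-mode sensitivity du/dtheta -- the "differentiable solver".
    u = np.zeros(nx)
    du = np.zeros(nx)
    for _ in range(nsteps):
        lap_u = np.zeros(nx)
        lap_u[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
        lap_du = np.zeros(nx)
        lap_du[1:-1] = (du[2:] - 2*du[1:-1] + du[:-2]) / dx**2
        du = du + dt * (nu * lap_du + source)   # tangent of the update rule
        u = u + dt * (nu * lap_u + theta * source)
    return u, du

theta_true = 2.0
u_obs, _ = rollout(theta_true)               # synthetic "observation"

# Calibrate theta by Gauss-Newton, using the exact solver gradient.
theta = 0.0
for _ in range(5):
    u, du = rollout(theta)
    resid = u - u_obs
    theta -= (resid @ du) / (du @ du)
print(theta)                                  # recovers theta_true = 2.0
```

In a real framework the hand-written tangent would be generated automatically, and the scalar theta would be the weights of a neural network embedded in the solver.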
Bio: Dr. Jian-Xun Wang is the Robert W. Huether Collegiate Associate Professor of Aerospace Engineering in the Department of Aerospace and Mechanical Engineering at the University of Notre Dame. He earned his Ph.D. in Aerospace Engineering from Virginia Tech in 2017 and worked as a Postdoctoral Scholar at UC Berkeley before joining Notre Dame in 2018. Dr. Wang has a multidisciplinary research background that spans Scientific Machine Learning, Data Assimilation, Bayesian Computing, Uncertainty Quantification, and Computational Fluid Dynamics. His research focuses particularly on the in-depth integration of advanced AI/ML techniques with physics-based mathematical models and classic numerical methods, aiming to revolutionize the field of computational modeling in the era of "big data" and significantly enhance predictive simulation capabilities. He has led research projects sponsored by multiple agencies, including NSF, ONR, AFOSR, DARPA, Google, and others. Dr. Wang is a recipient of the 2021 NSF CAREER Award and the 2023 ONR YIP Award. He is also an elected vice chair of the US Association for Computational Mechanics (USACM) Technical Thrust Area on Data-Driven Modeling.
IM release number is: LLNL-VIDEO-869779
Maintaining the US national security advantage associated with commercial smallsat capabilities will likely require specific strategies, including alignment of acquisitions with industrial base objectives, government insight into the financial stability of key firms, and systematic assessment of the relative value of different types of smallsat capabilities to national security objectives. This talk will present evidence-driven analysis of commercial smallsat businesses and offer recommendations for next steps to preserve important capability.
Carissa Christensen
LLNL-VIDEO-2000504
RR0004814
On July 30, 2024, Patrick Farrell of the University of Oxford presented “Designing Conservative and Accurately Dissipative Numerical Integrators in Time.” Numerical methods for the simulation of transient systems with structure-preserving properties are known to exhibit greater accuracy and physical reliability, in particular over long durations. These schemes are often built on powerful geometric ideas for broad classes of problems, such as Hamiltonian or reversible systems. However, there remain difficulties in devising higher-order-in-time structure-preserving discretizations for nonlinear problems, and in conserving non-polynomial invariants. In this work we propose a new, general framework for the construction of structure-preserving time steppers via finite elements in time and the systematic introduction of auxiliary variables. The framework reduces to Gauss methods where those are structure-preserving, but extends to generate arbitrary-order structure-preserving schemes for nonlinear problems, and allows for the construction of schemes that conserve multiple higher-order invariants. We demonstrate the ideas by devising novel schemes that exactly conserve all known invariants of the Kepler and Kovalevskaya problems, arbitrary-order schemes for the compressible Navier–Stokes equations that conserve mass, momentum, and energy, and provably dissipate entropy, and multi-conservative schemes for the Benjamin-Bona-Mahony equation.
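The connection to Gauss methods is easy to see concretely for linear systems: the one-stage Gauss method (implicit midpoint) conserves quadratic invariants exactly. The sketch below uses a toy harmonic oscillator rather than the Kepler or Kovalevskaya problems from the talk, and contrasts it with forward Euler.

```python
import numpy as np

# Harmonic oscillator y' = A y with skew-symmetric A; the energy |y|^2 / 2
# is a quadratic invariant. For a linear system the implicit midpoint rule
# (the one-stage Gauss method) has the closed form
#   y_{n+1} = (I - h/2 A)^{-1} (I + h/2 A) y_n,
# an orthogonal map (the Cayley transform of A), so energy is conserved
# to round-off.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
h = 0.1
I = np.eye(2)
M = np.linalg.solve(I - 0.5*h*A, I + 0.5*h*A)

y = np.array([1.0, 0.0])
E0 = 0.5 * y @ y
for _ in range(10_000):
    y = M @ y

# Forward Euler, by contrast, inflates the energy by (1 + h^2) every step.
y_fe = np.array([1.0, 0.0])
for _ in range(10_000):
    y_fe = (I + h*A) @ y_fe

print(abs(0.5 * y @ y - E0))     # ~round-off level: energy conserved
print(0.5 * y_fe @ y_fe)         # astronomically large: energy blow-up
```

The framework in the talk extends this structure preservation beyond quadratic invariants and linear problems, where plain Gauss methods no longer suffice.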
Learn more about MFEM at mfem.org and view the seminar speaker lineup at mfem.org/seminar/.
LLNL-VIDEO-868947
LLNL-VIDEO-868951
#hpc #opensource #software #supercomputers #tutorial #math
Learn more at github.com/llnl/axom. Documentation and tutorial are available at axom.readthedocs.io/en/develop/.
LLNL-VIDEO-2000492
Caliper: integrate performance profiling capabilities into your applications (software.llnl.gov/Caliper, github.com/daboehme/caliper-tutorial)
Hatchet: analyze hierarchical performance data (llnl-hatchet.readthedocs.io/en/latest, github.com/llnl/hatchet-tutorial)
Thicket: optimize application performance on supercomputers (thicket.readthedocs.io/en/latest, github.com/llnl/thicket-tutorial)
LLNL-PRES-821032, LLNL-PRES-850268
LLNL-PRES-867320
LLNL-PRES-868641
LLNL-VIDEO-2000500
Learn more at github.com/llnl/raja and github.com/llnl/umpire. Documentation and tutorials are available at raja.readthedocs.io/en/develop and umpire.readthedocs.io/en/develop/sphinx/tutorial.html.
LLNL-PRES-853131
LLNL-PRES-868220
Learn more at github.com/llnl/blt. Documentation and tutorial are available at llnl-blt.readthedocs.io/en/develop/.
LLNL-PRES-819321
Learn more at github.com/spack/spack. Documentation and tutorial are available at spack.readthedocs.io/en/latest and spack-tutorial.readthedocs.io/en/latest/. The first part of this tutorial is also included in this playlist.
LLNL-PRES-837654
LLNL-PRES-806064
Speaker: Aditi Krishnapriyan (UC Berkeley, a1k12.github.io)
Description: Machine learning (ML) is increasingly playing a pivotal role in spatiotemporal modeling. A number of open questions remain on the best learning strategies to maximize the utility of machine learning while ensuring the validity of such predictions, particularly in limited data scenarios. This talk will focus on exploring machine learning strategies for neural PDE solvers, with an emphasis on broad learning strategies that are applicable across a wide variety of systems and neural network architectures. Some topics I will discuss include: using self-supervised learning to change the basis of learning with spectral methods to solve fluid dynamics and transport PDE problems, and “simulation-in-the-loop” approaches via incorporating PDE-constrained optimization as a layer in neural networks. In each of these settings, I will discuss how ML methods can be used with numerical methods through fully differentiable settings.
Bio: Aditi Krishnapriyan is an Assistant Professor at UC Berkeley where she is a member of Berkeley AI Research (BAIR), Electrical Engineering and Computer Sciences (EECS), and Chemical Engineering. Her research interests include physics-inspired machine learning methods; geometric deep learning; inverse problems; and development of machine learning methods informed by physical sciences applications including molecular dynamics and fluid mechanics. A former DOE Computational Science Graduate Fellow, she holds a PhD from Stanford University and in 2020–2022 was the Luis W. Alvarez Fellow in Computing Sciences at Lawrence Berkeley National Laboratory.
IM release number is: LLNL-VIDEO-868699
Speaker: Elizabeth Qian (Georgia Tech, elizabethqian.com)
Description: Machine learning (ML) methods have garnered significant interest as potential methods for learning surrogate models for complex engineering systems for which traditional simulation is expensive. However, in many scientific and engineering settings, training data are scarce due to the cost of generating data from traditional high-fidelity simulations. ML models trained on scarce data have high variance and are sensitive to vagaries of the training data set. We propose a new multifidelity training approach for scientific machine learning that exploits the scientific context where data of varying fidelities and costs are available; for example, high-fidelity data may be generated by an expensive, fully resolved physics simulation, whereas lower-fidelity data may arise from a cheaper model based on simplifying assumptions. We use the multifidelity data to define new multifidelity control variate estimators for the unknown parameters of linear regression models, and provide theoretical analyses that guarantee accuracy and improved robustness to small training budgets. Numerical results show that multifidelity learned models achieve order-of-magnitude lower expected error than standard training approaches when high-fidelity data are scarce.
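The control-variate mechanism behind these estimators can be illustrated on the simplest case, estimating a scalar mean. The models and sample sizes below are hypothetical stand-ins, not the regression estimators from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_hi(x):                      # "expensive" high-fidelity model (toy)
    return np.sin(x) + 0.1 * x**2

def f_lo(x):                      # "cheap" correlated low-fidelity model
    return np.sin(x)

n_hi, n_lo, n_rep = 10, 1000, 500
hi_only, multi = [], []
for _ in range(n_rep):
    x_hi = rng.uniform(0.0, 1.0, n_hi)
    x_lo = rng.uniform(0.0, 1.0, n_lo)
    # Control-variate correction of the scarce high-fidelity mean:
    #   mu_MF = mean(HF) - alpha * (mean(LF on HF points) - mean(LF, many))
    alpha = 1.0                   # could instead be tuned to minimize variance
    mu_mf = (f_hi(x_hi).mean()
             - alpha * (f_lo(x_hi).mean() - f_lo(x_lo).mean()))
    hi_only.append(f_hi(x_hi).mean())
    multi.append(mu_mf)

print(np.std(hi_only), np.std(multi))   # multifidelity variance is lower
```

Both estimators are unbiased for the true high-fidelity mean; the correction term cancels most of the sampling noise because the two models are strongly correlated, which is the variance-reduction effect the paper exploits for regression coefficients.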
Bio: Elizabeth Qian is an Assistant Professor at Georgia Tech jointly appointed in the School of Aerospace Engineering and the School of Computational Science and Engineering. Her interdisciplinary research develops new computational methods to enable engineering design and decision-making for complex systems, with special expertise in model reduction, scientific machine learning, and multifidelity methods. Recent awards include a 2024 Air Force Young Investigator award and a 2023 Hans Fischer visiting fellowship at the Technical University of Munich. Prior to joining Georgia Tech, she was a von Karman Instructor at Caltech in the Department of Computing and Mathematical Sciences. She earned her SB, SM, and PhD degrees from MIT.
IM release number is: LLNL-VIDEO-868128
Speaker: Francesco Romor (Weierstrass Institute, https://www.wias-berlin.de/contact/staff/index.jsp?lang=1&uname=romor)
Description: A slowly decaying Kolmogorov n-width of the solution manifold for a parametric partial differential equation hinders the development of efficient linear projection-based reduced-order models. This is due to the high dimensionality of the reduced space required to accurately approximate the solution manifold. To address this issue, neural networks, through various architectures, have been utilized to design accurate nonlinear regressions of solution manifolds. However, most implementations are non-intrusive black-box surrogate models, and only some perform dimensional reduction from the number of degrees of freedom of the discretized parametric models to a latent dimension. We introduce a novel intrusive and interpretable methodology for reduced-order modeling that uses neural networks as solution manifold approximants while retaining the underlying physical and numerical models during the predictive/online stage. Specifically, we focus on autoencoders to further compress the dimensionality of linear approximations of solution manifolds, ultimately achieving nonlinear dimension reduction. After obtaining an accurate nonlinear approximation, we seek solutions on the latent manifold using the residual-based nonlinear least-squares Petrov-Galerkin method, suitably hyper-reduced to be independent of the number of degrees of freedom. We develop new adaptive hyper-reduction strategies and demonstrate the feasibility of employing local nonlinear approximants as well. We validate our methodology on two nonlinear, time-dependent parametric benchmarks: a supersonic flow past a NACA airfoil with varying Mach number and an incompressible turbulent flow around the Ahmed body with changing slant angle.
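The online stage, solving a residual-based least-squares problem over latent coordinates, can be sketched with a toy decoder and a Gauss-Newton iteration. The decoder, system, and dimensions below are illustrative assumptions, not the hyper-reduced LSPG method itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r_dim = 40, 2

# Illustrative full-order residual r(u) = A u - b and a toy nonlinear
# "decoder" g(z) standing in for an autoencoder's decoder.
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
U = np.linalg.qr(rng.standard_normal((n, r_dim)))[0]

def g(z):
    return U @ z + 0.05 * (U @ z) ** 2

z_true = np.array([0.7, -0.3])
b = A @ g(z_true)                         # manufactured exact solution

def residual(z):
    return A @ g(z) - b

# Gauss-Newton on min_z ||r(g(z))|| with a finite-difference Jacobian.
z = np.zeros(r_dim)
for _ in range(20):
    r0 = residual(z)
    J = np.column_stack([
        (residual(z + 1e-6 * e) - r0) / 1e-6 for e in np.eye(r_dim)
    ])
    z -= np.linalg.lstsq(J, r0, rcond=None)[0]

print(np.linalg.norm(residual(z)))        # near zero: latent solution found
```

The full method additionally hyper-reduces the residual evaluation so the cost is independent of n; here the full residual is evaluated for clarity.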
IM release number is: LLNL-VIDEO-866743
IM#: RR0002143
Speaker: Doug James (Stanford University, https://graphics.stanford.edu/~djames/)
Description: This talk will be in two parts: (1) progressive simulation for art-directable physics, and (2) improved water sound synthesis using coupled acoustic bubbles.
First, I will talk about our new progressive simulation methods that enable art-directable modeling and animation for cloth and thin shells. This family of coarse-to-fine, level-of-detail simulation methods supports physics-based modeling of complex frictionally contacting thin shell and cloth models in both quasistatic and dynamic scenarios. Based on multiscale model reduction for incremental potential contact, these progressive simulation methods are biased to allow designers to quickly design at coarse resolutions but then "up-res" and still get consistent, higher-fidelity results without introducing simulation artifacts and/or unpredicted outcomes, such as different folds, wrinkles, and drapes. (Work with Ph.D. student Eris Zhang et al.)
Second, I will talk about reduced-order vibration models for synthesizing water sound using coupled acoustic bubbles. Despite the ubiquity of physics-based simulation in visual computing workflows, sound simulation remains relatively unexplored, with realistic water sounds among the most challenging. In our recent work, we developed a framework for simulating the inter-bubble coupling effects crucially missing from prior work, resulting in airborne sounds with more natural pitch variations and fuller lower frequency content. (Work with Ph.D. student Kangrui Xue et al.)
Bio: Doug L. James is a Professor of Computer Science at Stanford University (since June 2015) and a member of Stanford’s Center for Computer Research in Music and Acoustics (CCRMA) and the Institute for Computational and Mathematical Engineering (ICME). He has been a consulting Senior Research Scientist at NVIDIA Research since 2022. He holds three degrees in applied mathematics, including a Ph.D. in 2001 from the University of British Columbia. In 2002 he joined the School of Computer Science at Carnegie Mellon University as an Assistant Professor and later became an Associate Professor of Computer Science at Cornell University (2006-2015). His research interests include computer graphics, computer sound, physically based modeling and animation, and reduced-order physics models. Doug is a recipient of a National Science Foundation CAREER award and a fellow of both the Alfred P. Sloan Foundation and the Guggenheim Foundation. He received the ACM SIGGRAPH 2021 Computer Graphics Achievement Award, a 2012 Technical Achievement Award from The Academy of Motion Picture Arts and Sciences for “Wavelet Turbulence,” and the 2013 Katayanagi Emerging Leadership Prize from Carnegie Mellon University and Tokyo University of Technology. He was the Technical Papers Program Chair of ACM SIGGRAPH 2015, and a consulting Senior Research Scientist at Pixar Animation Studios from 2015-2020.
IM release number is: LLNL-VIDEO-866254
IM # LLNL-VIDEO-865988
• Speaker: Fabio Giampaolo (University of Naples Federico II, scholar.google.com/citations?user=I8q5NwUAAAAJ&hl=it)
• Description: Backpropagation is the most widely used method for training Neural Networks. It has proven its effectiveness across a wide array of contexts, facilitating the efficient optimization of deep learning models. However, it exhibits certain weaknesses in specific scenarios that must be addressed to broaden the applicability of AI strategies in real-world situations. This is especially true in the integration of Deep Learning (DL) strategies within complex frameworks that deal with physics-related problems. Challenges such as the incorporation of non-differentiable components within neural architectures, or the implementation of distributed learning on heterogeneous devices, are just a few examples of the hurdles faced by researchers in the field. Inspired by one of the recent works of Geoffrey Hinton, the Locally Backpropagated Forward Forward training strategy is a novel approach that merges the effectiveness of backpropagation with the appealing attributes of the Forward-Forward algorithm. This combination aims to provide a viable solution in contexts where traditional methods show limitations.
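The Forward-Forward component can be illustrated with a single locally trained layer: the layer maximizes a "goodness" score (the sum of squared activations) on positive data and minimizes it on negative data, with the gradient written in closed form rather than obtained by backpropagation through a deep network. The data, threshold, and hyperparameters below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
d, h, n = 10, 16, 200
theta, lr = 2.0, 0.05            # goodness threshold and step size (assumed)

# Hypothetical positive and negative data: two shifted Gaussian clouds.
x_pos = rng.standard_normal((n, d)) + 1.0
x_neg = rng.standard_normal((n, d)) - 1.0

W = 0.1 * rng.standard_normal((h, d))

def goodness(X, W):
    # Layer "goodness": sum of squared ReLU activations per sample.
    a = np.maximum(X @ W.T, 0.0)
    return (a ** 2).sum(axis=1)

for _ in range(300):
    for X, y in ((x_pos, 1.0), (x_neg, 0.0)):
        Z = X @ W.T
        a = np.maximum(Z, 0.0)
        g = (a ** 2).sum(axis=1)
        p = 1.0 / (1.0 + np.exp(-(g - theta)))       # P(sample is positive)
        # Hand-derived gradient of the local logistic loss w.r.t. W:
        # push goodness above theta on positives, below it on negatives.
        dZ = ((p - y) / n)[:, None] * 2.0 * a * (Z > 0)
        W -= lr * dZ.T @ X

print(goodness(x_pos, W).mean(), goodness(x_neg, W).mean())
```

Because each layer's update depends only on its own activations, no gradients flow between layers, which is what makes the scheme attractive for non-differentiable components and distributed heterogeneous devices.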
• Bio: Fabio Giampaolo is a Research Fellow in Computer Science and Artificial Intelligence and a member of the MODAL Research Group at the University of Naples Federico II, where he received his Ph.D. in Mathematics and Applications. A member of the Editorial Board of the Springer journal Neural Computing and Applications, he has co-authored articles in international journals and conference papers on both applicative and methodological research in the Deep Learning field. His research interests include Machine Learning and Deep Learning, with a particular focus on exploring the dynamics of learning. This exploration encompasses a wide range of topics, including the development of innovative neural network architectures, the investigation of the efficiency of new learning algorithms, and the application of these methodologies to solve complex real-world problems.
IM release number is: LLNL-VIDEO-865970
• Speaker: Gianluca Iaccarino (Stanford University, https://engineering.stanford.edu/magazine/gianluca-iaccarino-dont-be-afraid-non-linear-career-path)
• Description: What is an autoencoder? How does it work? How can one trust its predictions? The talk will focus on recent activities centered around the development of an autoencoder, an unsupervised data-driven model, to predict the flow past wing geometries. The model relies on non-linear compression to construct a low-dimensional latent representation of the available data and its relation to the physical inputs. This enables the approach to generate new (unseen) cases. A careful construction of the dataset produces latent variables that can be interpreted in terms of aerodynamic performance both for attached and separated flow conditions. An important thrust of the work is the investigation of the effect of uncertainties due to the autoencoder architecture, the hyperparameters, and the amount of training data (internal or model-form uncertainties). Comparisons to a Gaussian Process regression and linear compression strategies illustrate the advantage of the present approach in extracting useful information on the prediction uncertainty even in the absence of data. The effect of model (internal) uncertainties is also compared to the impact of the variability induced by uncertain operating conditions (external uncertainties), showing the importance of accounting for the total uncertainty when establishing prediction confidence. A brief discussion of how to incorporate multi-fidelity data in the autoencoder training will conclude the presentation.
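The linear-compression baseline that the autoencoder is compared against can be sketched as PCA via the SVD: project snapshots onto the leading left singular vectors and watch the reconstruction error fall as the latent dimension grows. The snapshot matrix below is synthetic, not the wing-flow data from the talk.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 100, 300

# Synthetic snapshot matrix with an intrinsic rank-5 structure plus noise.
X = rng.standard_normal((n, 5)) @ rng.standard_normal((5, m))
X += 0.01 * rng.standard_normal((n, m))

U, s, Vt = np.linalg.svd(X, full_matrices=False)

def recon_error(r):
    # Relative error of the rank-r orthogonal-projection reconstruction.
    Xr = U[:, :r] @ (U[:, :r].T @ X)
    return np.linalg.norm(X - Xr) / np.linalg.norm(X)

print([round(recon_error(r), 4) for r in (1, 5, 10)])
```

A nonlinear autoencoder replaces the linear map `U[:, :r]` with learned encoder/decoder networks, which is what lets it compress manifolds that a linear subspace of the same dimension cannot capture.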
• Bio: Gianluca Iaccarino is the Director of the Institute for Computational Mathematical Engineering and a professor in the Mechanical Engineering Department at Stanford University. He received his PhD from the Politecnico di Bari (Italy) before joining the faculty at Stanford in 2007. Since 2014, he has been the Director of the PSAAP Center at Stanford, funded by the US Department of Energy and focused on multi-physics simulations, uncertainty quantification, and exascale computing. He received the Presidential Early Career Award for Scientists and Engineers (PECASE) and is a Fellow of the APS.
• Q&A session questions:
a. How does the method handle noise in the input training or testing data, perhaps due to measurement error in sensors?
b. How does the computational cost of the autoencoder compare to that of a RANS fluid simulation of flow over an airfoil?
c. Thank you very much for this great presentation! How can the snapshot-trained machine learning model predict two different values at the same angle of attack for the nose-up and nose-down motions of dynamic stall?
d. I'm just learning about these ML techniques, but from what I have seen, Variational Autoencoders are better suited for generation. Does a basic autoencoder work well here because of the narrower, more specific scope of the problem? Thanks for the presentation!
e. Can you please explain a little about sequential training for the multi-fidelity problems? Do we train on the low-fidelity data with a small latent space, then increase the problem complexity by introducing viscosity etc. and enlarge the latent space to see how the new physics is handled?
f. With your method of multi-fidelity data, for noisy experimental data for instance, could you add a new latent parameter, but have it be a variational latent parameter and keep the rest as fixed deterministic latents?
g. The plots shown are CD/CL polars. If the integrated quantity is the goal, would the training benefit or suffer from using the error in the coefficient predictions in the loss rather than just the MSE loss?
h. What differences does the training process have for 3D, unsteady, turbulent flow?
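As a concrete (and deliberately tiny) illustration of the compression idea discussed in the talk, the sketch below trains a linear autoencoder on 2-D points that lie on a line, compressing them to a single latent variable before reconstructing them. This is our own toy example, not the speaker's model, which uses deep non-linear networks on flow data:

```python
# Minimal autoencoder sketch (illustrative only): a linear encoder/decoder
# compresses 2-D samples lying on a line into a 1-D latent variable,
# trained by gradient descent on the reconstruction MSE.

def train_autoencoder(data, lr=0.05, epochs=300):
    w = [0.5, 0.5]   # encoder weights: z = w . x
    v = [0.5, 0.5]   # decoder weights: x_hat_i = v_i * z
    for _ in range(epochs):
        for x in data:
            z = w[0] * x[0] + w[1] * x[1]           # encode to 1-D latent
            r = [v[0] * z - x[0], v[1] * z - x[1]]  # reconstruction error
            # analytic gradients of the squared error sum_i r_i**2
            gv = [2 * r[0] * z, 2 * r[1] * z]
            s = r[0] * v[0] + r[1] * v[1]
            gw = [2 * s * x[0], 2 * s * x[1]]
            v = [v[i] - lr * gv[i] for i in range(2)]
            w = [w[i] - lr * gw[i] for i in range(2)]
    return w, v

def reconstruct(x, w, v):
    z = w[0] * x[0] + w[1] * x[1]
    return [v[0] * z, v[1] * z]

# Samples from the 1-D manifold x2 = 2 * x1 embedded in 2-D.
data = [(t, 2 * t) for t in (-1.0, -0.5, 0.5, 1.0)]
w, v = train_autoencoder(data)
x_hat = reconstruct((1.0, 2.0), w, v)  # should land near (1.0, 2.0)
```

The interesting part mirrors the talk: after training, the single latent coordinate z parameterizes the data manifold, so decoding a latent value generates a new (unseen) sample on it.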
DDPS webinar: librom.net/ddps.html
💻 LLNL News: llnl.gov/news
📲 Instagram: instagram.com/livermore_lab
🤳 Facebook: facebook.com/livermore.lab
🐤 Twitter: twitter.com/Livermore_Lab
IM release number is: LLNL-VIDEO-865917
IM # 1099833
Evaluating the risks of a learned AI system statically seems hopeless: the number of contexts in which it could act is infinite or exponentially large, and static checks can only verify a finite, relatively small set of such contexts. With a run-time evaluation of risk, however, we could potentially prevent actions with an unacceptable level of risk.

The probability of harm produced by an action or a plan, in a given context and given past data, under the true explanation for how the world works, is unknown. However, under reasonable hypotheses related to Occam's razor, and with a non-parametric Bayesian prior (which thus includes the true explanation), it can be shown to be bounded by quantities that can in principle be numerically approximated or estimated by large neural networks, all within a Bayesian view that captures epistemic uncertainty about what harm is and how the world works. Capturing this uncertainty is essential: the AI could otherwise be confidently wrong about what is “good” and produce catastrophic existential risks, for example through instrumental goals or by taking control of the reward mechanism (wrongly thinking that the rewards recorded in the computer are what it should maximize). The bound relies on a kind of paranoid theory: the one that has maximal probability given that it predicts harm and given the past data. The talk will discuss the research program based on these ideas and how amortized inference with large neural networks could be used to estimate the required quantities.
LLNL-VIDEO-865371
On May 6, 2024, Gonzalo de Diego of New York University’s Courant Institute of Mathematical Sciences presented “Numerical Solvers for Viscous Contact Problems in Glaciology.” Viscous contact problems are time-dependent viscous flow problems where the fluid is in contact with a solid surface from which it can detach and reattach. Over sufficiently long timescales, ice is assumed to flow like a viscous fluid with a nonlinear rheology. Therefore, certain phenomena in glaciology, like the formation of subglacial cavities in the base of an ice sheet or the dynamics of marine ice sheets (continental ice sheets that slide into the ocean and go afloat at a grounding line, detaching from the bedrock), can be modelled as viscous contact problems. In particular, these problems can be described by coupling the Stokes equations with contact boundary conditions to free boundary equations that evolve the ice domain in time. In this talk, de Diego described the difficulties that arise when attempting to solve this system numerically and introduced a method that is capable of overcoming them.
Learn more about MFEM at mfem.org and view the seminar speaker lineup at mfem.org/seminar/.
LLNL-VIDEO-865044
Guest speaker Dr. Bhavani Thuraisingham, the Founders Chair Professor of Computer Science and the Founding Executive Director of the Cyber Security Research and Education Institute at the University of Texas at Dallas (UTD), presented "Integrated Cyber Security and Machine Learning for Applications in Transportation Systems." Thuraisingham described the security challenges facing transportation systems and how machine learning techniques developed for healthcare, finance, manufacturing, and cyber security applications can address issues like malware analysis and insider threat detection. She first covered her 10 years of research on machine learning systems, then described her team's work on securing the Internet of Transportation systems, and concluded with her current work in Intelligent Transportation Systems security.
Dr. Thuraisingham's 43+ year career spans industry (Honeywell), a federal research laboratory (MITRE), the US government (NSF), and US academia. Her work has resulted in 130+ journal articles, 300+ conference papers, 200+ keynote and featured addresses, seven US patents, sixteen books, and podcasts. She received her PhD from the University of Wales, Swansea, UK, and the prestigious earned higher doctorate (D.Eng.) from the University of Bristol, UK. She also holds a Certificate in Public Policy Analysis from the London School of Economics and Political Science.
LLNL-VIDEO-864608
IM # 1098982
Moderator and LLNL data scientist Anna Jurgensen leads a panel of her Livermore colleagues in a discussion and Q&A session on the importance of outreach and education, with advice for early career women in the technical field of data science. Panel members include: (1) Jen Caseres of LLNL's Nuclear and Chemical Sciences Division, where her work has focused on chemical and isotopic data analysis for nuclear forensics since 2020. Previously, she completed an M.S. in Geology at the University of Minnesota and a B.S. in Geochemistry at Caltech. Since coming to LLNL, Jen has developed an interest in applying data science to chemical data and has begun doing outreach through Girls Who Code. (2) Emilia Grzesiak has been a data scientist at LLNL for almost three years, working on the GUIDE program to develop antibody-antigen analysis and visualization tools. She joined the Lab after a 2020 DSSI internship and completing her Bachelor's and Master's in Biomedical Engineering at Duke University. Her past research interests included building robotic exoskeleton software and predicting flu/cold infections with wearable device data. She is passionate about mentorship and helping early career professionals and students break into data science. (3) Paige Jones has been a software developer in LLNL's Enterprise Application Services division for three years. She is responsible for the integration of commercial off-the-shelf tools and software, the development and enhancement of web apps, and the exploration of cutting-edge technologies for potential use at LLNL. With a B.S. in Computer Information Systems from Cal State, Chico, Paige is currently advancing her expertise with an M.S. in Computer Science at Georgia Tech. Paige is an avid advocate for outreach and STEM education and participates in recruitment, Girls Who Code, and Science Accelerating Girls' Engagement.
(4) Samantha joined the Lab as a software developer in January 2023 with a background in computer science and biology. Her current role marries these interdisciplinary studies by exploring the applications of computation and machine learning in drug development. Since coming to LLNL, Samantha has pursued several new sub-projects and begun volunteering for Girls Who Code.
LLNL-VIDEO-863885
Speaker: Yexiang Xue (Purdue University, https://www.cs.purdue.edu/homes/yexiang/)
Description: Automated reasoning and machine learning are two fundamental pillars of artificial intelligence. Despite much recent progress, building autonomous agents that fully integrate reasoning and learning is still beyond reach. This talk presents three cases where integrated vertical reasoning significantly enhances learning. In the first case, we introduce the Spatial Reasoning INtegrated Generator (SPRING), which embeds a spatial reasoning module inside a deep generative network for image generation, to ensure constraint satisfaction, offer interpretability, and facilitate zero-shot transfer learning. In the second case, we embed vertical reasoning to expedite symbolic regression and to learn Partial Differential Equations (PDEs) for materials science applications. In the third case, we demonstrate that vertical reasoning via streamlined XOR constraints enables solvers with constant approximation guarantees for Satisfiability Modulo Counting (SMC), an important problem class integrating symbolic and statistical AI.
Bio: Dr. Yexiang Xue is an assistant professor in the Department of Computer Science at Purdue University. The goal of Dr. Xue's research is to bridge large-scale constraint-based reasoning with state-of-the-art machine learning techniques to enable intelligent agents to make optimal decisions in high-dimensional and uncertain real-world applications. More specifically, Dr. Xue's research focuses on scalable and accurate probabilistic reasoning techniques, statistical modeling of data, and robust decision-making under uncertainty. His work is motivated by key problems across multiple scientific domains, ranging from artificial intelligence, machine learning, renewable energy, materials science, crowdsourcing, citizen science, urban computing, and ecology to behavioral econometrics. Recently, Dr. Xue has been focusing on developing cross-cutting computational methods, with an emphasis in the areas of computational sustainability and AI-driven scientific discovery.
LLNL-VIDEO-864597
Laura Bruckman from Case Western Reserve University presented "Materials Data Science Approach for Reliability: Materials to Systems." Lifetime prediction of long-lived materials requires understanding the key degradation mechanisms in relation to the stressors (e.g., UV irradiance, water, temperature, mechanical stress) and the applied stressor levels, an understanding accessible through a materials data science approach. Traditional materials reliability methods, which typically focus on pass/fail criteria for materials under accelerated exposures, have failed to predict durability and mitigate degradation. When a material fails, the data needed to understand the failure is often missing, because detailed evaluations of a large enough population of samples were not performed in the reliability study. Commercial PV modules are complex systems made up of several different materials, and the degradation of one material impacts its neighbors, especially at the interfaces. This calls for a data science approach to collecting the complex data involved: the real-world stress conditions at each site, the time-series power data, and degradation data for individual modules and even individual materials. This data needs to be FAIRified, integrated, and modeled, which then enables prediction of module lifetime in different climate zones as well as power prediction.
Bruckman is an Associate Professor in the Department of Materials Science and Engineering in the Case School of Engineering, Case Western Reserve University. Her research is focused on a data science approach to materials degradation. She is an expert in leveraging quantitative spectroscopic techniques and image analysis to understand materials degradation under different stressors. Her research has application to solar packaging materials, building envelope materials, coatings, and additively manufactured materials. She teaches in the Applied Data Science program at CWRU with a focus on visualization and analytics, research projects, and communicating results to various audiences.
LLNL-VIDEO-863720
LLNL-VIDEO-864214
Learn more about MFEM at mfem.org and view the seminar speaker lineup at mfem.org/seminar/. LLNL-VIDEO-862721
LLNL-VIDEO-863498
On March 5, 2024, Sungho Lee of the University of Memphis presented “LAGHOST: Development of Lagrangian High-Order Solver for Tectonics.” Long-term geological and tectonic processes associated with large deformation highlight the importance of using a moving Lagrangian frame. However, modern advances in the finite element method, such as MPI parallelization, GPU acceleration, high-order elements, and adaptive grid refinement, have not been brought to tectonic modeling in this frame. Moreover, the openly available solvers suffer from limited tutorials, poor user manuals, and several dependencies that make model building complex. These limitations can discourage both new users and developers from utilizing and improving these models. We are therefore motivated to develop a user-friendly Lagrangian thermo-mechanical numerical model that incorporates viscoelastoplastic rheology to simulate long-term tectonic processes such as mountain building and mantle convection. We introduce an ongoing project called LAGHOST (Lagrangian High-Order Solver for Tectonics), an MFEM-based tectonic solver that expands the capabilities of MFEM's Laghos miniapp. Currently, our solver incorporates constitutive equations, body forces, mass scaling, dynamic relaxation, Mohr-Coulomb plasticity, plastic softening, a Winkler foundation, remeshing, and remapping. To evaluate LAGHOST, we conducted four benchmark tests. The first involved compressing an elastic box at a constant velocity, while the second focused on the compaction of a self-weighted elastic column. To enable larger time-step sizes and achieve quasi-static solutions in the benchmarks, we introduced a fictitious density and implemented dynamic relaxation, scaling the density factor and introducing a force component opposing the previous velocity direction at nodal points. Our results exhibited good agreement with analytical solutions.
Subsequently, we incorporated Mohr-Coulomb plasticity, a reliable model for predicting rock failure, into LAGHOST. We revisited the elastic box benchmark with plastic materials. Accounting for the stress correction arising from plastic yielding, we confirmed that the solution updated from the elastic guess aligned with the analytical solution. Furthermore, we applied LAGHOST to simulate the evolution of a normal fault, a significant tectonic phenomenon. To model normal fault evolution, we introduced strain softening on cohesion as the dominant factor, based on geological evidence. Our simulations successfully captured the normal fault's evolution, with plastic strain localizing at shallow depths before propagating deeper. The fault angle reached approximately 60 degrees, in line with Mohr-Coulomb failure theory.
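The stress correction described above can be illustrated with a one-dimensional return-mapping sketch. This is a generic textbook-style elastic-predictor/plastic-corrector update of our own construction (with a fixed yield stress standing in for the cohesion term), not LAGHOST's Mohr-Coulomb implementation:

```python
# 1-D caricature of the elastic-predictor / plastic-corrector update used
# in computational plasticity: try a purely elastic trial stress, and if
# it exceeds the yield limit, return it to the yield surface and book the
# excess as plastic strain.

def stress_update(eps_total, eps_plastic, E=100.0, sigma_y=1.5):
    """Return (stress, updated plastic strain) for a given total strain."""
    sigma_trial = E * (eps_total - eps_plastic)   # elastic predictor
    overshoot = abs(sigma_trial) - sigma_y
    if overshoot <= 0.0:
        return sigma_trial, eps_plastic           # still elastic
    sign = 1.0 if sigma_trial > 0.0 else -1.0
    d_gamma = overshoot / E                       # plastic corrector
    eps_plastic += sign * d_gamma                 # accumulate plastic strain
    return sign * sigma_y, eps_plastic            # stress sits on yield surface

# Ramp the strain up: stress rises elastically, then saturates at yield.
eps_p = 0.0
history = []
for step in range(1, 11):
    eps = 0.005 * step
    sigma, eps_p = stress_update(eps, eps_p)
    history.append(sigma)
```

In a full Mohr-Coulomb model the yield limit additionally depends on pressure, friction angle, and cohesion, and strain softening (as in the normal-fault simulation above) makes the yield limit shrink as plastic strain accumulates.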
Learn more about MFEM at mfem.org and view the seminar speaker lineup at mfem.org/seminar/. LLNL-VIDEO-862196
On March 14, 2024, William Moses of the University of Illinois Urbana-Champaign presented “Supercharging Programming Through Compiler Technology.” The decline of Moore's law and an increasing reliance on computation has led to an explosion of specialized software packages and hardware architectures. While this diversity enables unprecedented flexibility, it also requires domain-experts to learn how to customize programs to efficiently leverage the latest platform-specific APIs and data structures, instead of working on their intended problem. Rather than forcing each user to bear this burden, he proposes building high-level abstractions within general-purpose compilers that enable fast, portable, and composable programs to be automatically generated. This talk demonstrates this approach through compilers that Moses built for two domains: automatic differentiation and parallelism. These domains are critical to both scientific computing and machine learning, forming the basis of neural network training, uncertainty quantification, and high-performance computing. For example, a researcher hoping to incorporate their climate simulation into a machine learning model must also provide a corresponding derivative simulation. The compiler, Enzyme, automatically generates these derivatives from existing computer programs, without modifying the original application. Moreover, operating within the compiler enables Enzyme to combine differentiation with program optimization, resulting in asymptotically and empirically faster code. Looking forward, this talk also touches on how this domain-agnostic compiler approach can be applied to new directions, including probabilistic programming.
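Enzyme itself synthesizes derivative code at the LLVM IR level; purely as an illustration of the automatic-differentiation idea behind it, the sketch below implements forward-mode AD with dual numbers in plain Python (our own example, unrelated to Enzyme's internals):

```python
# Forward-mode automatic differentiation with dual numbers: a number
# a + b*eps with eps**2 == 0, where b carries the derivative alongside
# the value through an unmodified computation.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._coerce(other)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, other):
        o = self._coerce(other)
        # product rule: (a + a'eps)(b + b'eps) = ab + (a b' + a' b) eps
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f and df/dx at x in a single forward pass."""
    out = f(Dual(x, 1.0))  # seed the derivative slot with dx/dx = 1
    return out.val, out.dot

# d/dx (x^3 + 2x) = 3x^2 + 2, so at x = 2: value 12, derivative 14.
val, grad = derivative(lambda x: x * x * x + 2 * x, 2.0)
```

Like Enzyme, this approach differentiates the program as written, with no hand-derived adjoint code; unlike Enzyme, it works via operator overloading in the source language rather than on optimized compiler IR.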
Learn more about MFEM at mfem.org and view the seminar speaker lineup at mfem.org/seminar/. LLNL-VIDEO-862253
· Speaker: Burcu Beykal (University of Connecticut, https://beykal.engr.uconn.edu/biosketch/)
· Description: Current industrial processes require the coordination of many interconnected pieces that involve multi-dimensional, multi-purpose, and multi-product systems. Across the different layers of supply chain management, from the supply chain structure to production planning and scheduling, the optimal coordination of each element and its robust response to changing market conditions are essential for increasing the efficiency, resiliency, productivity, and profitability of any enterprise. Yet the modeling and optimization of such interdependent systems remain burdensome and require a holistic approach to ensure feasible realizations of the individual activities of the supply chain. Bi-level programming is well suited to the task, as the scheduling problems (followers) provide constraints for decision-making in the planning problem (leader). However, this class of mathematical programs poses many algorithmic challenges, especially when the scheduling problems contain large numbers of integer variables. In this talk, I will demonstrate how such large-scale complex optimization problems can be solved without full knowledge of the underlying mathematical models, using data-driven modeling and global optimization theory.
· Bio: Dr. Burcu Beykal is an Assistant Professor in the Department of Chemical & Biomolecular Engineering and a resident faculty member in the Center for Clean Energy Engineering at the University of Connecticut. She holds a B.S. degree in Chemical & Biological Engineering from Koc University, an M.S. degree in Chemical Engineering from Carnegie Mellon University, and a Ph.D. degree in Chemical Engineering from Texas A&M University. Before joining UConn, Burcu was a Postdoctoral Research Associate at the Texas A&M Energy Institute. Her research focuses on data-centric process systems engineering and machine learning for energy-critical systems, spanning chemical, environmental, and biological domains. Among her awards are the American Chemical Society Petroleum Research Fund Doctoral New Investigator Award, the 2020 CAST Directors' Award, and the Outstanding Graduate Student Award from Texas A&M. She was also selected as a Rising Star in Chemical Engineering by the Massachusetts Institute of Technology in 2019.
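The leader/follower structure described above can be sketched with a deliberately tiny brute-force example (our own toy cost functions, not the talk's formulation, which relies on data-driven models and global optimization rather than enumeration):

```python
# Bi-level toy problem: the leader must anticipate the follower's best
# response, because the follower re-optimizes its own objective for
# whatever decision the leader makes.

X = range(4)   # leader's decisions (e.g. planning targets)
Y = range(5)   # follower's decisions (e.g. schedule choices)

def follower_best(x):
    """Follower minimizes its own cost given the leader's decision x."""
    return min(Y, key=lambda y: (y - x) ** 2 + y)

def solve_leader():
    """Leader minimizes its cost over X, embedding the follower's response."""
    best_x = min(X, key=lambda x: (x - 2) ** 2 + 3 * follower_best(x))
    return best_x, follower_best(best_x)

x_opt, y_opt = solve_leader()  # here: leader picks x=1, follower answers y=0
```

Note the leader's naive optimum (x=2 for its own quadratic term) is not chosen: the follower's response makes x=1 cheaper overall. That interplay, blown up to many integer scheduling variables, is what makes these programs hard.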
IM release number is: LLNL-VIDEO-863438
Sarah Osborn presented "Uncertainty Quantification, Software Ecosystems for Exascale Systems, and Squirrels: A Brief Survey of Research Projects." She explained a scalable approach for forward propagation of uncertainties for large-scale subsurface flow problems, then described the Extreme-scale Scientific Software Development Kit (xSDK) developed as part of the Exascale Computing Project (ECP). Osborn also presented the Squirrel computational framework for identifying weaknesses within critical infrastructure to help prioritize investments to increase resilience against cyberattacks, natural disasters, and mischievous squirrels.
LLNL-PRES-861626
LLNL distinguished member of technical staff Carol Woodward presented "Mathematics Meets Science Meets Computer Science: My Career Path to High Performance Computational Science." She outlined her path from almost becoming a microbiologist to working as an applied mathematician at LLNL. She also talked about her work with the SUNDIALS numerical software package, explained how its capabilities are useful for scientific simulations, and gave examples of how the package is applied in scientific simulations on the Lab’s fastest computing systems.
Woodward's research interests include numerical methods for nonlinear partial differential equations, nonlinear and linear solvers, time integration methods, and parallel computing. She leads the development and deployment of the SUNDIALS package of time integrators and nonlinear solvers, which garners over 100,000 downloads and clones each year and is used worldwide in an array of scientific simulations. The core SUNDIALS team was honored with the ACM/SIAM Prize in Computational Science and Engineering in 2023. Woodward currently serves as the International Council for Industrial and Applied Mathematics representative to, and Vice Chair of, the Standing Committee for Gender Equality in Science, an international committee of scientific professional unions formed to promote gender equality in the sciences worldwide. She is also currently President-Elect of the Society for Industrial and Applied Mathematics.
LLNL-PRES-861639
LLNL WiDS ambassador Marisa Torres welcomed the audience, described the Lab's history with the WiDS organization, and explained the day's activities. LLNL WiDS ambassador Mary Silva summarized the February 28 datathon hosted at LLNL two weeks before this main WiDS Livermore event.
LLNL-PRES-861449 and LLNL-PRES-860744
LLNL-VIDEO-862296
Speaker: Yanlai Chen (UMass Dartmouth, http://yanlaichen.reawritingmath.com)
Physics-Informed Neural Networks (PINNs) have proven to be a powerful tool for obtaining numerical solutions of nonlinear partial differential equations (PDEs), leveraging the expressivity of deep neural networks and the computing power of modern heterogeneous hardware. However, their training is still time-consuming, especially in multi-query and real-time simulation settings, and their parameterization is often excessive.
In this talk, we present the recently proposed Generative Pre-Trained PINN (GPT-PINN), which mitigates both challenges in the setting of parametric PDEs. GPT-PINN represents a brand-new meta-learning paradigm for parametric systems. As a network of networks, its outer/meta-network is hyper-reduced, with only one hidden layer and a significantly reduced number of neurons. Moreover, its activation function at each hidden neuron is a (full) PINN pre-trained at a judiciously selected system configuration. The meta-network adaptively “learns” the parametric dependence of the system and “grows” this hidden layer one neuron at a time. In the end, by encompassing a very small number of networks trained at this adaptively selected set of parameter values, the meta-network can generate surrogate solutions for the parametric system accurately and efficiently across the entire parameter domain.
Time permitting, we will discuss the Transformed GPT-PINN, TGPT-PINN, which achieves nonlinear model reduction via the addition of a transformation layer before the pre-trained PINN layer.
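In symbols (our notation, not necessarily the authors'), the meta-network structure described above can be summarized as:

```latex
% Meta-network with n hidden neurons, each a full pre-trained PINN
u_{\mathrm{meta}}(x;\mu) \;\approx\; \sum_{i=1}^{n} c_i(\mu)\,\Psi_{\mu_i}(x),
\qquad
\mu_{n+1} \;=\; \arg\max_{\mu \in \Xi_{\mathrm{train}}} \Delta_n(\mu),
```

where each \Psi_{\mu_i} is a PINN pre-trained at the adaptively selected configuration \mu_i, the coefficients c_i(\mu) are the meta-network's trainable weights, and \Delta_n is an error indicator (e.g., the PDE residual of the current n-term surrogate) driving the greedy selection of the next parameter value.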
Yanlai Chen received his Ph.D. in Mathematics from the School of Mathematics at the University of Minnesota in 2007. He then worked as a Postdoctoral Researcher at Brown University before joining the Department of Mathematics at the University of Massachusetts Dartmouth in August 2010, where he currently serves as a full professor. Dr. Chen's research has been supported by the NSF, the AFOSR, and the ONR via UMassD's MUST program. His research interests are in numerical analysis and scientific computing, in particular finite element methods, reduced basis methods and other fast algorithms, uncertainty quantification, and the design and analysis of machine learning algorithms. He has graduated five doctoral students and initiated UMassD's ACCOMPLISH program via an NSF grant to provide students in need with financial, social, and curricular support.
IM release number is: LLNL-VIDEO-862758