Explore the truth behind the iconic Buzz Aldrin moon landing photo. See how modern graphics innovations can shed new light on a 45-year-old conspiracy theory. More info: blogs.nvidia.com/blog/2014/09/18/debunked
00:00:12 - Introducing LLaDA
00:00:36 - Multimodal neural interfaces integrate diverse types of data
00:00:53 - LLaDA can rapidly adapt to local traffic rules and customs
00:01:30 - The NVIDIA Riva speech SDK can process different languages
00:01:58 - LLaDA can also be applied to AV motion planning
00:02:28 - Accelerated by NVIDIA DRIVE Thor, built on the NVIDIA Blackwell architecture
00:03:02 - Visit our GitHub page and check out the LLaDA paper at CVPR 2024
Project page: boyiliee.github.io/llada
Paper: arxiv.org/abs/2402.05932
Watch the full series here: nvda.ws/3LsSgnH
Learn more about DRIVE Labs: nvda.ws/36r5c6t
Follow us on social:
Twitter: nvda.ws/3LRdkSs
LinkedIn: nvda.ws/3wI4kue
#NVIDIADRIVE
Learn more about the Inception Program for Startups: nvda.ws/4bwQflf
Learn more about Orbital Composites: orbitalcomposites.com
Learn more about the Inception Program for Startups: nvda.ws/4bwQflf
Minerva CQ: minervacq.com
Learn more: nvda.ws/3WJ7Kuf
#GraceCPU
#GTC24
Learn more about NVIDIA Maxine: nvda.ws/4cwJSzs
Learn more about the NVIDIA Blackwell Architecture: nvidia.com/en-us/data-center/technologies/blackwell-architecture/?ncid=so-yout-795864
Read the Kawasaki case study: nvda.ws/49bZQfG
Learn more about cuOpt: nvda.ws/3IXkmWK
#GTC24 #RouteOptimization #GenerativeAI
Complex AI is being tested in real time inside an Omniverse digital twin of a warehouse, the same environment in which the AI was developed.
It’s a workflow that developers can use to build AI gyms to train and evaluate complex AI, all in real time within the digital twin. This is something that otherwise would be incredibly costly or impossible to run in the real world—particularly for heavy industry, factories, and supply chains.
This demo leverages NVIDIA Metropolis, Omniverse, cuOpt, and Isaac for robot perception to create an end-to-end concept of how to fully automate logistically complex cobot spaces.
Read the blog: blogs.nvidia.com/blog/ai-digital-twins-industrial-automation-demo
#GTC24 #DigitalTwin #AI #cuOpt #NVIDIAOmniverse #NVIDIAIsaac
Highlights included:
- Founder and CEO Jensen Huang introduced the highly anticipated Blackwell platform, which will power a new era of computing. The Blackwell GPU architecture is being integrated into DRIVE Thor to enable generative AI applications and immersive in-vehicle experiences. Large language models will run in the car, enabling an intelligent copilot that understands and speaks in natural language.
- BYD, the world's largest EV maker, is adopting NVIDIA DRIVE Thor as the #AI brain of their future fleets. In addition, BYD will use NVIDIA’s AI infrastructure for cloud-based AI development and training, along with NVIDIA Isaac and NVIDIA Omniverse to develop tools and applications for virtual factory planning and retail configurators.
- On the show floor, attendees saw the latest NVIDIA-powered vehicles on display, including Lucid Air, Mercedes-Benz Concept CLA Class, Nuro R3, Polestar 3, Volvo EX90, WeRide Robobus, and an Aurora-powered truck. Additionally, NVIDIA showcased the wide adoption of NVIDIA DRIVE, displaying ECUs (electronic control units) from a variety of partners, from Lenovo to Zeekr.
- The conference featured sessions and panels from NVIDIA automotive partners including Ford, GM, Geely, JLR, and many more, covering topics from data center applications to developing safe AVs.
Learn how to watch these sessions on-demand at nvidia.com/gtc
#GTC24
This insightful discussion with NVIDIA's Richard Kerris delves into the collaboration between artists, researchers, and businesses to foster a culture of innovation and creativity.
Discover how we're entering a new era of interacting with #AI models to expand creative possibilities.
#GTC24
Learn more about AI in telecom operations: nvidia.com/telco-ai
Learn more: nvidia.com/en-us/data-center/products/ai-enterprise/?ncid=so-yout-110891
Try NVIDIA AI Enterprise for free:
API catalog: build.nvidia.com/explore/discover?ncid=so-yout-728251
LaunchPad: nvidia.com/en-us/launchpad/ai/?ncid=so-yout-923331
90-Day Evaluation: enterpriseproductregistration.nvidia.com/?LicType=EVAL&ProductFamily=NVAIEnterprise&ncid=so-yout-701253
Get Started: build.nvidia.com/explore/healthcare
Key Points:
- BioNeMo allows us to build a virtual screening pipeline with microservices: protein folding, molecule generation, and docking.
- MolMIM performs multi-parameter optimization to guide molecule generation.
- MolMIM intertwines physics with generative AI, using structure-guided docking to steer the generative model.
- BioNeMo's microservices allow us to architect this incredibly powerful application very simply.
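The multi-parameter optimization idea can be sketched in a few lines of plain Python. Everything below is invented for illustration: the two objectives (a mock docking score and a mock drug-likeness term), the 1-D stand-in for a chemical latent space, and the hill-climbing optimizer are assumptions, not BioNeMo or MolMIM APIs.

```python
import random

def mock_docking_score(x):
    # Illustrative stand-in: a smooth bump whose optimum sits at x = 2.0.
    return -((x - 2.0) ** 2)

def mock_drug_likeness(x):
    # Peaks at x = 1.0; mimics a QED-style 0..1 desirability term.
    return 1.0 / (1.0 + (x - 1.0) ** 2)

def combined_objective(x, w_dock=0.5, w_qed=0.5):
    # Multi-parameter optimization: maximize a weighted sum of objectives.
    return w_dock * mock_docking_score(x) + w_qed * mock_drug_likeness(x)

def optimize(steps=2000, seed=0):
    # Simple hill climbing over the toy latent space.
    rng = random.Random(seed)
    best_x, best_s = 0.0, combined_objective(0.0)
    for _ in range(steps):
        cand = best_x + rng.gauss(0, 0.3)  # propose near the current best
        s = combined_objective(cand)
        if s > best_s:
            best_x, best_s = cand, s
    return best_x, best_s

best_x, best_s = optimize()
```

A real pipeline would replace the mock objectives with calls to docking and property-prediction microservices, letting the optimizer steer molecule generation in a learned chemical latent space.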
#generativeAI #DrugDiscovery #VirtualScreening #GTC24
Learn how cuOpt is reinventing logistics management and operations research by optimizing operations, reducing costs, cutting carbon emissions, and enhancing customer satisfaction.
Learn more about cuOpt:
nvda.ws/49YJdVJ
Explore how Kawasaki is using cuOpt:
nvda.ws/3TH7KrN
Read how cuOpt is transforming route optimization:
nvda.ws/4cnQrEG
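To make the route-optimization problem concrete, here is a minimal, self-contained sketch using a nearest-neighbor heuristic on hypothetical depot and stop coordinates. This is a generic illustration of the problem cuOpt solves, not the cuOpt API, and a greedy heuristic is far weaker than cuOpt's solvers.

```python
import math
from itertools import permutations

# Hypothetical depot and delivery stops as (x, y) coordinates in km.
DEPOT = (0.0, 0.0)
STOPS = [(2.0, 3.0), (5.0, 1.0), (1.0, 7.0), (6.0, 5.0), (3.0, 0.5)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(route):
    # Total distance: depot -> stops in order -> back to depot.
    path = [DEPOT] + route + [DEPOT]
    return sum(dist(path[i], path[i + 1]) for i in range(len(path) - 1))

def nearest_neighbor_route(stops):
    # Greedy construction: always drive to the closest unvisited stop.
    remaining, here, route = list(stops), DEPOT, []
    while remaining:
        nxt = min(remaining, key=lambda s: dist(here, s))
        route.append(nxt)
        remaining.remove(nxt)
        here = nxt
    return route

greedy = nearest_neighbor_route(STOPS)
# Brute-force optimum (feasible only for tiny instances).
best = min((list(p) for p in permutations(STOPS)), key=route_length)
```

For these five stops the greedy tour already beats visiting them in listed order; the brute-force optimum shows the remaining gap, which is what a real solver closes at fleet scale.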
#routeoptimization #NVIDIAcuOpt #fleetmanagement #lastmiledelivery
Experience our journey from simulation to real-world deployment, showcasing our commitment to innovation and technological excellence.
Learn more about Project GR00T: nvda.ws/43l1WZn
#airobotics #industrialrobotics #humanoidrobotics #robotics #GTC24
Read about our #GTC24 keynote and announcements: nvda.ws/3Vn5RTe
#GTC24 #GenAI #AI #RAGAI #LLM
- Developing an autonomous vehicle is a very complex task with many different elements, from managing datasets to validating and testing driving behaviors.
- We can use AI to help review and curate sensor datasets.
- We can use generative AI to not only generate a simulation scenario but also scale that scenario to many different variants.
- These generated simulation scenarios are complete with synchronized sensor data and ground-truth data like occupancy voxels.
- These simulation scenarios can be used for either training or validation.
#GTC24 #NVIDIA #AI #NVIDIADRIVE
NVIDIA Isaac Perceptor, optimized on Jetson Orin, uses multiple cameras for 3D surround perception to detect obstacles like low-lying hazards or overhangs, which are invisible to standard 2D lidar.
Using robust AI-based depth estimation, GPU-Accelerated 3D reconstruction, and semantic segmentation, the mobile robot can work more safely alongside humans.
Key Points:
- Isaac Perceptor provides live multi-camera surround visual perception running on Jetson.
- Isaac Perceptor is GPU-accelerated and optimized for Orin, leaving headroom for other software such as a navigation stack.
- Traditional approaches have predominantly relied on 2D lidars, which offer limited functionality, or 3D lidars, which are prohibitively expensive. Isaac Perceptor offers an affordable, camera-based solution that doesn't compromise on capability and adds visual AI semantics for autonomy.
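Why a single-plane 2D lidar misses low-lying hazards and overhangs can be shown with a toy camera back-projection sketch. The intrinsics, mounting heights, and thresholds below are invented for illustration and are unrelated to the actual Isaac Perceptor implementation.

```python
# Toy pinhole-camera hazard classifier (illustrative, not Isaac Perceptor).
FX = FY = 500.0       # hypothetical focal lengths (pixels)
CX, CY = 320.0, 240.0 # hypothetical principal point (VGA-like image)
CAM_H = 0.5           # camera height above the floor (m)
LIDAR_H = 0.20        # 2D lidar scan-plane height (m)
ROBOT_H = 1.0         # robot clearance height (m)

def backproject(u, v, depth):
    """Pinhole back-projection to camera frame (x right, y down, z forward)."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return x, y, depth

def classify(u, v, depth):
    _, y, _ = backproject(u, v, depth)
    height = CAM_H - y  # height of the observed point above the floor
    if 0.02 < height <= LIDAR_H - 0.02:
        return "low-lying hazard"  # under the lidar scan plane: lidar-blind
    if LIDAR_H + 0.02 <= height < ROBOT_H:
        return "overhang"          # above the scan plane, below robot clearance
    return "clear"                 # floor, or within the lidar-visible band
```

Cameras observe points at every height, so both classes of lidar-blind obstacles become detectable; real systems fuse many cameras and learned depth rather than a single ray.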
Learn more about NVIDIA Isaac Perceptor: nvda.ws/43nAZUY
#autonomousmobilerobots #Robotics #AI #GTC24
Enterprise applications are composed of many software containers, each involving a complex set of dependencies. To triage a container for vulnerabilities, hundreds of pieces of information need to be retrieved, understood, and integrated—a tedious process that can take days.
This demo looks at how we use generative AI to shorten the process to mere seconds. This is AgentMorpheus, an event-driven, retrieval-augmented generation application built using NVIDIA NIM inference microservices, NeMo Retriever, and the Morpheus SDK.
Key Points:
- Enterprise software is very complex.
- Ensuring software security is time-consuming and repetitive.
- NVIDIA-based RAG agents enable faster, more secure software vulnerability analysis, moving from days to seconds.
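The retrieval-augmented generation pattern behind this can be sketched minimally in plain Python. The tiny knowledge base, keyword-overlap retrieval, and templated answer below are toy stand-ins; AgentMorpheus uses NIM inference microservices, NeMo Retriever embeddings, and the Morpheus SDK rather than anything shown here.

```python
# Minimal RAG sketch: retrieve the most relevant document, then generate
# an answer grounded in it. Real systems use embedding search and an LLM.
KNOWLEDGE_BASE = [
    {"id": "CVE-2021-44228",
     "text": "log4j jndi lookup remote code execution in logging library"},
    {"id": "CVE-2014-0160",
     "text": "heartbleed openssl heartbeat buffer over-read leaks memory"},
    {"id": "CVE-2017-5638",
     "text": "apache struts content-type header remote code execution"},
]

def retrieve(query, k=1):
    # Score documents by keyword overlap with the query (toy retriever).
    q = set(query.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda d: len(q & set(d["text"].split())),
                    reverse=True)
    return scored[:k]

def answer(query):
    ctx = retrieve(query)[0]
    # Stand-in for the LLM call: template the retrieved context into a reply.
    return f"Container may be affected by {ctx['id']}: {ctx['text']}"

print(answer("does this image use a vulnerable log4j logging library"))
```

The event-driven part of the real application simply triggers this retrieve-then-generate loop whenever a new container or CVE advisory arrives.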
Learn more: nvda.ws/3TDF5ov
#cybersecurity #generativeAI #GTC24 #NVIDIA #NeMo #NVIDIANIM #NVIDIANeMo #RAG #RAGAI
@NVIDIAOmniverse Cloud APIs on @MicrosoftAzure bring data interoperability, collaboration, and physically-based visualization to software tools for designing, building, and operating industrial digital twins.
To build an operational digital twin, designers first design, engineer, and simulate products and manufacturing processes using @HexagonABGlobal Nexus.
Facility planners use Hexagon's scanners to capture the real world and render it in Reality Cloud Studio.
Engineers work in @RockwellautomationInc Emulate3D to simulate production systems before deploying to the physical factory.
Then with Omniverse APIs, data from each of these applications flows seamlessly into a unified Omniverse digital twin in Microsoft Power BI, where teams can see their 3D data in context.
When the factory comes online, IoT data is live-linked to the Omniverse digital twin in Power BI.
And when connected to Microsoft Copilot, factory operators can gain insights into their production data using natural language.
Together, Microsoft and NVIDIA are bringing AI and collaboration to the next era of industrial digitalization.
Learn more about Omniverse Cloud APIs on Azure: nvda.ws/3vhUhyj
#OpenUSD #3D #NVIDIAOmniverse #digitaltwin #GTC24
We're retrofitting an existing data center, so @KineticVisionUS helps generate a 3D mesh from @NavVis-tech Lidar scanners and uses @prevu3d's point cloud processing tool to clear the existing racks, leaving an empty data hall.
In @cadencedesignsystems Reality Digital Twin Platform, powered by NVIDIA Omniverse APIs, our engineers unify the multi-CAD facility and rack datasets and visualize in full physical accuracy.
To optimize network topology and train our operators, we use NVIDIA Air network simulation platform connected to Patch Manager's cabling layout tool with Omniverse APIs, letting us bring this data into the Omniverse Digital Twin in Cadence's platform.
With Cadence's solvers, accelerated by NVIDIA Modulus APIs for Physics-ML workloads, and Grace Hopper, we can then simulate performance of the air and new liquid cooling systems from partners like @Vertiv and @schneiderelectric.
NVIDIA and our ecosystem of partners are bringing up new AI factories faster than ever with NVIDIA Omniverse and AI.
Read the blog to learn more: nvda.ws/4ae0vOl
Learn more about Omniverse Cloud APIs: nvda.ws/3vhUhyj
#OpenUSD #3D #NVIDIAOmniverse #GTC24
Dive into the announcements and discover more content at nvidia.com/gtc.
Follow NVIDIA on X (formerly Twitter):
twitter.com/NVIDIAGTC
twitter.com/NVIDIA
Meet this AI agent developed in partnership with Hippocratic AI, built with Hippocratic AI’s state-of-the-art technology—from automatic speech recognition to their one-trillion-parameter LLM constellation to text-to-speech—with NVIDIA ACE microservices. To learn more, watch the Hippocratic AI GTC Talk: nvidia.com/gtc/session-catalog/?tab.allsessions=1700692987788001F1cG&search=Munjal%20Shah#/session/1696437006794001DWEB
Key Points:
- This experience shows LLMs powering conversational AI workflows, brought together with state-of-the-art, AI-powered, real-time digital human technologies.
- Seamless, personalized conversation is made possible by a low-latency pipeline spanning automatic speech recognition, large language model retrieval, text-to-speech, audio-to-animation, and streaming to an iPad, all in less than one second.
- Hippocratic AI created a safety-focused, LLM-powered solution trained in partnership with their 27 health systems.
- NVIDIA ACE offers AI models and microservices such as NVIDIA Audio2Face and NVIDIA Riva Automatic Speech Recognition.
#GenerativeAI #DigitalAvatars #DigitalHumans #GTC24
Testing and optimizing layouts in this physically accurate digital environment increased worker efficiency by 51%.
In operation, the digital twin technology helps Wistron rapidly test new layouts to accommodate new processes or improve operations in the existing space and monitor real-time operations using live IoT data from every machine on the production line. This ultimately enabled Wistron to reduce end-to-end production process times by 50% and defect rates by 40%.
With NVIDIA AI and Omniverse, NVIDIA's global ecosystem of partners is building a new era of accelerated, AI-enabled digitalization.
Learn more about Omniverse Cloud APIs: nvda.ws/3vhUhyj
#OpenUSD #3D #NVIDIAOmniverse #GTC24 #NVIDIA
Customers can bring digital twins and generative AI tools into their product lifecycle management (PLM) workflows. They can also use immersive visualization capabilities to design, manufacture, and operate products.
Teamcenter X is a powerful PLM software that enables companies to plan, develop, and deliver products in a secure and scalable environment. With the integration of NVIDIA AI and Omniverse technologies into Teamcenter X, Siemens is taking the next step in transforming how products are designed and manufactured.
Learn more: nvda.ws/3TjgvI3
See how NVIDIA and Siemens are working together to unlock industrial digitalization: nvda.ws/4awAcne
#OpenUSD #3D #NVIDIAOmniverse #GTC24 #NVIDIA
Learn more about Omniverse Cloud APIs: nvda.ws/3vhUhyj
#3D #NVIDIAOmniverse #GTC24
Based on OpenUSD, Omniverse-powered applications fundamentally transform complex 3D workflows, allowing individuals and teams to build unified tool and data pipelines and simulate large-scale, physically accurate virtual worlds for industrial and scientific use cases.
Learn more about NVIDIA Omniverse:
nvda.ws/3veFRyP
#OpenUSD #3D #NVIDIAOmniverse #GTC24
Learn more: blogs.nvidia.com/blog/omniverse-cloud-apis
#ai
#GTC24
Video Outline:
As the Earth’s climate changes, AI-powered weather forecasting is allowing us to more accurately predict and track severe storms like Super-Typhoon Chanthu, which caused widespread damage in Taiwan, the Philippines, China, and Japan in 2021. Current AI forecast models can accurately predict the track of storms, but they are limited to 25-kilometer resolution, which can miss important details.
NVIDIA's CorrDiff is a new generative model trained on high-resolution radar-assimilated WRF weather forecasts and ERA5 reanalysis data. Using CorrDiff, extreme events like Chanthu can be super-resolved from 25 km to 2 km resolution, with 1,000 times the speed and 3,000 times the energy efficiency of conventional weather models.
By combining the speed and accuracy of NVIDIA's weather forecasting model FourCastNet with generative AI models like CorrDiff, we can explore hundreds or even thousands of kilometer-scale regional weather forecasts to provide a clear picture of the best, worst, and most likely impacts of a storm. This wealth of information can help minimize loss of life and property damage.
Today, CorrDiff is optimized for Taiwan, but soon generative super-sampling will be available as part of the NVIDIA Earth-2 Inference Service for many regions across the globe.
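As a back-of-the-envelope check (not NVIDIA code), refining a 25 km grid to 2 km multiplies the number of grid cells covering the same area by roughly 156, which is why generative super-resolution is attractive compared with running a conventional numerical model natively at 2 km:

```python
# Grid-refinement arithmetic for 25 km -> 2 km super-resolution.
coarse_km, fine_km = 25.0, 2.0
# Each coarse cell is replaced by a (25/2) x (25/2) block of fine cells.
cells_per_coarse_cell = (coarse_km / fine_km) ** 2
print(round(cells_per_coarse_cell))  # -> 156
```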
NVIDIA’s Earth-2 Service consists of cloud-based building blocks enabling weather and climate practitioners to benefit from the latest advances in accelerated computing, AI, and visualization.
The platform consists of four independent services:
- Inference: AI-based models for global forecasting, such as NVIDIA’s FourCastNet, and regional downscaling using generative super-resolution models like NVIDIA’s CorrDiff
- Simulation: running GPU-accelerated, process-based simulation in the cloud
- Data Federation: serving weather and climate data from different sources, including static archives, AI-based inference results, or numerical simulation results
- Visualization: cloud-based interactive visualization of weather/climate data
Key Points:
- Services of the Earth-2 platform: inference, simulation, visualization, data federation.
- All services are optimized for the NVIDIA AI Enterprise platform, running on heterogeneous clouds.
- Service APIs will be available after GTC, enabling ISVs to leverage these services within their offerings.
- We are working with the weather/climate community to bring their tools to the NVIDIA platform and make them accessible as services.
#Earth2 #Inference #AI #Forecasting #Regional #CorrDiff #FourCastNet #DataFederation #Omniverse #SuperResolution #Downscaling #GTC24
Learn more about:
NVIDIA VC Alliance Program (for VCs): nvda.ws/3V4Ldrb
NVIDIA Inception Program (for Startups): nvda.ws/4bW0Cjm
Timestamps:
00:00:00 - Scaling diverse data in AV perception
00:00:27 - Introducing EmerNeRF, a self-supervised learning method
00:00:49 - Reconstructing scenarios into static, dynamic, and flow fields
00:01:40 - Lifting 2D foundation model features into 4D
00:02:15 - Using vision-language models for scene segmentations
00:02:40 - Dynamic scenario reconstruction at scale
00:03:02 - To learn more, visit our GitHub project page and blog
Project page: emernerf.github.io
Paper: arxiv.org/abs/2311.02077
Tech blog: developer.nvidia.com/blog/reconstructing-dynamic-driving-scenarios-using-self-supervised-learning
Watch the full series here: nvda.ws/3LsSgnH
Learn more about DRIVE Labs: nvda.ws/36r5c6t
Follow us on social:
Twitter: nvda.ws/3LRdkSs
LinkedIn: nvda.ws/3wI4kue
#NVIDIADRIVE
An extremely large-scale NVIDIA DGX SuperPOD, Eos is where NVIDIA developers create their leading-edge AI innovations using accelerated computing infrastructure and fully optimized software. Eos is built with 576 #NVIDIADGX H100 systems, NVIDIA Quantum-2 InfiniBand networking and software, providing 18.4 exaflops of AI performance and featuring a total of 4,608 H100 GPUs.
Ranked #9 on the TOP500 list of the world's fastest supercomputers, Eos reflects NVIDIA's ongoing commitment to pushing the boundaries of AI technology and infrastructure.
Learn more: nvda.ws/3wegRbl
#datacenter
Connect with CoreWeave at GTC, the conference for the era of AI: coreweave.com/events/coreweave-gtc-2024
#IAMAI
#Retail
Learn about NVIDIA DRIVE solutions: nvda.ws/3S2g5FC
Learn about NVIDIA DRIVE partners: nvda.ws/3SqyGMZ
Learn more about NVIDIA BioNeMo: nvda.ws/3RI6rro
00:00:30 - Viewpoint robustness and how recent advances offer a solution
00:01:18 - Dynamic View Synthesis eliminates viewpoint challenges
00:01:34 - Multi-view consistency between different viewpoints
00:02:06 - Neural Radiance Fields don’t work well for viewpoint robustness
00:02:16 - DNN trained to estimate scene depth from single image
00:03:09 - Deploy perception models at scale
00:03:19 - To learn more, visit our project page and GitHub
GitHub: nvda.ws/3uYJYih
Project page: nvda.ws/3NsoXmm
Tech blog: nvda.ws/41hROj3
Watch the full series here: nvda.ws/3LsSgnH
Learn more about DRIVE Labs: nvda.ws/36r5c6t
Follow us on social:
Twitter: nvda.ws/3LRdkSs
LinkedIn: nvda.ws/3wI4kue
#NVIDIADRIVE
Speakers:
Aviv Regev | EVP, Genentech Research and Early Development
John Marioni | SVP, Computational Sciences at gRED