NVIDIA Developer
Run NVIDIA Riva on a Kubernetes Cluster (Part 1): Deploying Kubernetes Cluster to GKE with Terraform
CUDA 12.0 toolkit release: nvda.ws/3FLUn3B
00:00 - Introduction
03:39 - CUDA Dynamic Parallelism
09:55 - Hopper DPX Instructions
12:06 - Lazy Loading
16:10 - OS Support Updates
17:26 - CUDA Compiler Updates
33:47 - Math Libraries
37:53 - Compatibility Updates
49:47 - Thank You and Q&A
Additional resources:
CUDA Dynamic Parallelism: developer.nvidia.com/blog/introduction-cuda-dynamic-parallelism
CUDA Programming Model for Hopper Architecture: nvidia.com/en-us/on-demand/session/gtcfall22-a41095/?playlistId=playList-b5be3b9b-cc52-48ed-b83d-4352b108e766
Join our developer program: nvda.ws/3BEFldp
About our presenters:
Rob Armstrong is a principal technical product manager for the CUDA toolkit. For over 20 years, he has focused on accelerating software with heterogeneous hardware platforms, and he has a particular interest in computer architecture and hardware/software interaction.
Rob Nertney is a senior technical product manager for CUDA. He has spent nearly 15 years architecting the features and deployment of accelerator hardware into hyperscale environments for both internal and external use by developers. He has several patents in processor design relating to secure solutions that are in production today. In his spare time, he loves golfing when the weather is nice, and gaming (on RTX hardware of course!) when the weather isn’t.
Arthy Sundaram is a technical product manager for the CUDA platform. She holds an MS in computer science from Columbia University. Her areas of interest are operating systems, compilers, and computer architecture.
Matthew Nicely joined NVIDIA in March 2019, having previously worked at the U.S. Army Aviation and Missile Research Development and Engineering Center, Huntsville, AL, USA. There, he focused on CUDA algorithm development and optimizations on the Jetson series. At NVIDIA, he has worked in the Federal segment assisting with CUDA development and optimizations, along with education and proof of concepts for customers on various NVIDIA tool sets, before recently transitioning to math libraries product manager. In 2019, he received his Ph.D. degree in computer engineering, focusing on algorithm optimizations on GPUs.
In this video, we’ll show you how to quickly launch an #NVIDIATAO Toolkit notebook directly on Google Colab and train an AI model without having to set up any infrastructure.
Learn more about NVIDIA TAO Toolkit: nvda.ws/3YsIdEg
More about running TAO Toolkit on Google Colab: github.com/NVIDIA-AI-IOT/nvidia-tao
Try TAO on Google Colab
• Object Detection: colab.research.google.com/github/NVIDIA-AI-IOT/nvidia-tao/blob/main/tensorflow/yolo_v4/yolo_v4.ipynb
• Image Classification: colab.research.google.com/github/NVIDIA-AI-IOT/nvidia-tao/blob/main/tensorflow/classification/classification.ipynb
• Action Recognition: colab.research.google.com/github/NVIDIA-AI-IOT/nvidia-tao/blob/main/pytorch/cv_notebooks/action_recognition_net/actionrecognitionnet.ipynb
Additional TAO Toolkit developer resources: nvda.ws/3he07K2
#NVIDIATAO, #AI, #aitraining
Transfer Learning, AI/ML Models, AI training, GoogleColab, pretrained models, object detection, image classification, segmentation
See some of the features and capabilities NVIDIA Base Command Platform can offer to centralize and accelerate enterprise AI development.
Learn more about the enterprise-class platform for AI training here: nvda.ws/3UGRmFZ
#NVIDIADGX, #NVIDIABaseCommandPlatform, #BaseCommandPlatform, #NVIDIAAI
NVIDIA AI, NVIDIA Base Command Platform, NVIDIA DGX Systems, NVIDIA DGX A100, Base Command Platform
Find out how NVIDIA is making useful, accurate speech AI possible for every language nvda.ws/3B5LteD
#SpeechAI #AI #Telugu
Today, @NVIDIAOmniverse is being used to create Earth's #DigitalTwin, allowing scientists to better visualize & more accurately predict #ClimateChange.
Learn more: nvda.ws/3uxANBH
#HPC #simulation #physics
Watch the demo and visit nv-tlabs.github.io/brushstroke_engine to learn more about the model.
#generativeai, #ai, #generativeart, #genAI, #aiart
generative ai, neural rendering, generative art, ai, machine learning
Learn more about NVIDIA Solutions for the Professional Broadcast Industry nvda.ws/3XD6l6J
SMPTE ST 2110, IP Video, Broadcast
NVIDIA, SMPTE ST 2110, IP Video, AI, Broadcast, GPU, Networking, Rivermax, Mellanox
developer.nvidia.com/tools-overview
#NVIDIAMetropolis, #AI, #VisionAI
Learn more about NVIDIA Metropolis Microservices and Reference Applications nvda.ws/3hrpasM
Learn more about NVIDIA Unified Compute Framework (UCF) nvda.ws/3UqMzck
Learn more in our blog post, “Develop for All Six NVIDIA Jetson Orin Modules with the Power of One Developer Kit”: nvda.ws/3Tq4XRf
Powered by NVIDIA Jetson AGX Xavier on a robust cloud architecture, EVE utilizes AI and machine learning, perception and VR at the edge. NVIDIA Inception is a free program designed to help your startup evolve faster through access to cutting-edge technology and NVIDIA experts, opportunities to connect with venture capitalists, and co-marketing support to heighten your company’s visibility.
Learn more: nvda.ws/3gWZe8j
#NVIDIAInception #Startups #AI #AIinRobotics #robotics #humanoidrobots
AI, AIinRobotics, robots, humanoidrobots, Jetson, startup, startupincubator, acceleration for startups
Learn more about TAO Toolkit nvda.ws/3ThywFp
Get started with TAO Toolkit nvda.ws/3EVdUOU
In this video, we’ll see how Tec de Monterrey (Mexico) has successfully implemented the PuzzleBot platform in its main robotics programs. We’ll hear about the experience directly from their students, professors, and researchers.
Try out the demo nvda.ws/3EC2JKZ
Learn more about multi-camera tracking nvda.ws/3CRN4py
Learn more about Metropolis Microservices nvda.ws/3SVwxX7
GitHub Code: github.com/triton-inference-server/server/tree/main/docs/examples/stable_diffusion
Triton Documentation: github.com/triton-inference-server/server/tree/main/docs
Note: This example doesn’t include all possible optimizations to the Stable Diffusion pipeline; the intent is to show ease of deployment with Triton.
#ai #inference #triton #deeplearning #stablediffusion
NVIDIA Inception is a free program designed to help your startup evolve faster through access to cutting-edge technology and NVIDIA experts, opportunities to connect with venture capitalists, and co-marketing support to heighten your company’s visibility.
Learn more: nvda.ws/3T2saJL
#NVIDIAInception #Startups #AI #AIinHealthcare
To ensure you have the best resources to do your life’s work, we’ve created an online space devoted to accelerating your work with access to over 450 SDKs and pre-trained AI models, technical documentation, domain expert help, deep learning courses and workshops, and much more. Free developer tools, training, and community.
Join the Developer Program today and accelerate your work: developer.nvidia.com/developer-program?ncid=so-yout-905036#cid=dev03_so-yout_en-us
#AI #Technology #DevTools #Developertools
nvidia.com/en-us/industries/media-and-entertainment/professional-broadcast/?ncid=so-yout-548374-vt20#cid=ix17_so-yout_en-us
NVIDIA, SMPTE ST 2110, IB-based content, AI, Broadcast, workstations, Networking, Rivermax, Mellanox
Learn more: nvidia.com/en-us/industries/media-and-entertainment/professional-broadcast/?ncid=so-yout-548374-vt20#cid=ix17_so-yout_en-us
NVIDIA, IBC, Networking, GPU, 8k video, ST 2110, IP Broadcast, Video Streaming
blogs.nvidia.com/blog/2022/09/23/3d-generative-ai-research-virtual-worlds
#ai, #3D, #deeplearning, #nvidia, #NVIDIAResearch, #GTC22
deep learning, AI, 3D, triangle mesh, NVIDIA, NVIDIAResearch
An industry-standard recommender system involves a number of steps, including data preprocessing, defining and training recommender models, and filtering and business logic for serving. In this work, we propose the four-stage recommender system, an industry-wide design pattern we have identified for production recommender systems.
The four-stage pipeline begins with an item retrieval step that prepares a small subset of relevant items for scoring. The filtering stage then cleans up that subset based on business logic, such as removing out-of-stock or previously seen items. The ranking stage uses a recommender model to score each remaining item based on the user’s preferences. In the final ordering step, the scored items are re-ordered to produce a recommendation list aligned with other business needs or constraints, such as diversity.
The demo shows how easy it is to build and deploy a four-stage recommender pipeline using the NVIDIA Merlin open-source framework.
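The four stages above can be sketched in plain Python. Everything here — function names, data structures, and scoring logic — is an illustrative stand-in, not the Merlin API:

```python
# Minimal sketch of the four-stage recommender pipeline:
# retrieval -> filtering -> scoring -> ordering.

def retrieve(catalog, k=5):
    """Stage 1: retrieval -- narrow the full catalog to a small candidate set.
    A stand-in for an ANN / embedding lookup."""
    return catalog[:k]

def filter_items(candidates, out_of_stock, seen):
    """Stage 2: filtering -- drop items excluded by business logic."""
    return [i for i in candidates if i not in out_of_stock and i not in seen]

def score(candidates, user_prefs):
    """Stage 3: scoring -- a recommender model scores each candidate.
    Here the 'model' is just a preference lookup."""
    return {i: user_prefs.get(i, 0.0) for i in candidates}

def order(scores, max_items=3):
    """Stage 4: ordering -- re-rank to satisfy final business constraints."""
    return sorted(scores, key=scores.get, reverse=True)[:max_items]

catalog = ["a", "b", "c", "d", "e", "f"]
candidates = retrieve(catalog)
candidates = filter_items(candidates, out_of_stock={"b"}, seen={"c"})
recommendations = order(score(candidates, user_prefs={"a": 0.9, "d": 0.7, "e": 0.1}))
print(recommendations)  # highest-scoring remaining items first
```

In production each stage is a separate, swappable component — for example, replacing the retrieval stand-in with an approximate nearest-neighbor index — which is exactly what the staged design makes easy.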
Learn more: nvda.ws/3d6erlE
#recommendersystems #ai #deeplearning #demo #GTC22
NVIDIA Base Command is the operating system of the accelerated data center. It lets organizations use the full potential of their NVIDIA DGX investment with a proven platform that includes enterprise-grade orchestration and cluster management, as well as libraries that accelerate compute, storage, and network infrastructure. Plus, it features an operating system that's optimized for AI workloads.
The cluster management features of Base Command automate the end-to-end management of systems, from a single node to thousands. Base Command provides a single-pane-of-glass view that gives you complete control of heterogeneous clusters of any size.
To learn more about Base Command, please visit nvidia.com/base-command.
To learn more about DGX systems, please visit nvidia.com/dgx
#AIinfrastructure #networking #clustermanagement
With NVIDIA AI Enterprise, we deliver industry-leading AI tools from both NVIDIA and open-source communities.
Now, with the AI Workspace Operator for Kubernetes, we're working to make that tooling exceptionally simple to deploy, run, and operate on your Kubernetes cluster, regardless of whether it is in the cloud, in your data center, or on the edge.
To get started check out the project on GitHub:
github.com/NVIDIA/ai-workspace-operator
Learn how to scale AI/ML in the hands-on lab Multi-Node Training for AI on Kubernetes nvidia.com/en-us/launchpad/ai/multi-node-training-for-image-classification-on-kubernetes-with-vmware-tanzu
IT admins can learn how to build this environment in the lab Optimize AI and Data Science Workloads nvidia.com/en-us/launchpad/infra-optimization/configure-optimize-and-orchestrate-resources-for-ai-and-data-science-workloads-with-vmware-tanzu
Enterprises can also build and take AI/ML solutions to production with NVIDIA AI Enterprise.
nvidia.com/en-us/data-center/products/ai-enterprise
Learn more about Maxine at developer.nvidia.com/maxine and all of NVIDIA's AI solutions at nvidia.com/en-us/deep-learning-ai/products/solutions
#AI #NVIDIA #Maxine
Learn more about NVIDIA DeepStream SDK nvda.ws/3eOctGZ
Learn more about NVIDIA TAO Toolkit nvda.ws/3dai9L5
Try Fleet Command today for free on NVIDIA LaunchPad developer.nvidia.com/blog/managing-edge-ai-with-fleet-command-and-launchpad
#EdgeAI
GitHub: github.com/triton-inference-server/server
Documentation: github.com/triton-inference-server/server/tree/main/docs
#ai #inference #nvidiatriton
0:00:00 - [NVIDIA] Merlin RecSys on GPU
0:18:50 - HugeCTR for Training
0:32:00 - HugeCTR for Inference
0:40:31 - NVIDIA Customer Testimonial
0:41:50 - [Alibaba Cloud] DeepRec: GPU Training and Prediction in Search Promotion Scenarios
1:08:10 - [Meituan] Large-Scale RecSys on GPU at Life-Service Scenario
1:40:45 - [Tencent] Large-Scale Machine Learning Framework - Wu Liang
Speakers:
Wenwen Gao, Senior Product Manager, NVIDIA
Joey Wang, Senior Developer Manager, NVIDIA
Tongxuan Liu, Senior Tech Engineer, Alibaba Cloud
Jiaheng Rang, Machine Learning Engine Technical Expert, Meituan
Zhuo Chen, Machine Learning Engine Technical Expert, Meituan
Zhaokai Luo, Machine Learning Engine Technical Expert, Tencent
Our Instant NeRF is a neural rendering model that learns a high-resolution 3D scene in seconds from 2D images — and can render images of that scene in a few milliseconds. Learn more about Instant NeRFs: nvda.ws/3AU5wgA
Tutorial on how to make a 3D render from 2D photos: nvda.ws/3ciMLJI
Find the code on GitHub: github.com/NVlabs/instant-ngp
More Instant NeRF creators: blogs.nvidia.com/blog/2022/08/05/instant-nerf-creators-siggraph
For business inquiries, please visit our website and submit the form on NVIDIA Research Licensing: nvidia.com/en-us/research/inquiries
#InstantNeRFSweepstakes #NeRF #AI #3D #NeuralRendering #volumetric #photogrammetry #synthesis #NVIDIA #NVIDIAResearch #InstantNGP #computervision #NeRFs #siggraph2022
NVIDIA FLARE™ (NVIDIA Federated Learning Application Runtime Environment) is a domain-agnostic, open-source, and extensible SDK for federated learning. It allows researchers and data scientists to adapt existing ML/DL workflows to a federated paradigm and enables platform developers to build secure, privacy-preserving offerings for distributed multi-party collaboration.
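The federated paradigm FLARE implements can be illustrated with a toy federated-averaging (FedAvg) round. This sketch is not the FLARE API — just the core idea of training locally at each site and aggregating only model updates, never raw data:

```python
# Toy FedAvg: each site updates a shared model on private data;
# the server averages the updates. Illustrative only, not NVIDIA FLARE code.

def local_update(global_weights, local_data):
    """One toy 'training' step per site: a single gradient step that
    pulls the weight toward the mean of the site's private data."""
    lr = 0.5
    grad = global_weights - sum(local_data) / len(local_data)
    return global_weights - lr * grad

def federated_round(global_weights, sites):
    """Server-side aggregation: average site updates without seeing raw data."""
    updates = [local_update(global_weights, data) for data in sites]
    return sum(updates) / len(updates)

sites = [[1.0, 3.0], [5.0, 7.0]]  # private data stays at each site
w = 0.0
for _ in range(20):
    w = federated_round(w, sites)
print(round(w, 2))  # converges toward the global mean, 4.0
```

The privacy property comes from the communication pattern: only the scalar update leaves each site, which is the same structure FLARE generalizes to full model weights, secure aggregation, and multiple workflow types.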
Learn more: nvda.ws/3JOuUak
#AI #federatedlearning #SDK #ML
Learn More: blogs.nvidia.com/blog/2022/08/09/neural-graphics-sdk-metaverse-content
Interactive Demo: http://imaginaire.cc/gaugan360
#AI #GauGAN #nvidiaomniverse #siggraph2022 #siggraph
Over the past decade, OpenVDB has become the Academy Award-winning, industry-standard library for sparse dynamic volumes. NVIDIA is now further expanding the capabilities of OpenVDB with the power of AI. NeuralVDB builds on top of the GPU acceleration of NanoVDB, adding machine learning to introduce compact neural representations that dramatically reduce its memory footprint.
Learn more at nvda.ws/3vKmNWg.
CONNECT WITH US ON SOCIAL
Twitter: twitter.com/NVIDIADesign
LinkedIn: linkedin.com/showcase/nvidia-design-and-visualization
With NVIDIA Vid2Vid Cameo, creators can harness AI to capture their facial movements and expressions from standard 2D video taken with a professional camera or a smartphone. The performance can be applied in real time to animate an avatar, character or painting.
And with 3D body pose estimation software, creators can capture full body movements like walking, dancing or performing martial arts, using them to bring virtual characters to life with AI.
Read more: blogs.nvidia.com/blog/2022/08/09/ai-performance-capture
#AI #Vid2vid #NVIDIAResearch #avatar
Get started with model analyzer here: github.com/triton-inference-server/model_analyzer
#Triton #Inference #ModelAnalyzer #AI
#edgeAI #edgecomputing #MIG
Kubernetes cloud services, cloud application deployment, cloud deployment platforms, cloud deployment technologies, cloud software deployment, deploy to container, openshift deployment, software deployment platform, Kubernetes management, container management, container orchestration, ai application deployment, ai container orchestration
#edgeAI #edgecomputing #remotemanagement
Kubernetes cloud services, cloud application deployment, cloud deployment platforms, cloud deployment technologies, cloud software deployment, deploy to container, openshift deployment, software deployment platform, Kubernetes management, container management, container orchestration, ai application deployment, ai container orchestration
Winners posted here: youtu.be/RyEVh1Orv2Y
Learn more about Instant NeRFs: nvda.ws/3AU5wgA
Tutorial on how to make a 3D render from 2D photos: nvda.ws/3ciMLJI
Find the code on GitHub: github.com/NVlabs/instant-ngp
More Instant NeRF creators: blogs.nvidia.com/blog/2022/08/05/instant-nerf-creators-siggraph
For business inquiries, please visit our website and submit the form on NVIDIA Research Licensing: nvidia.com/en-us/research/inquiries
The sweepstakes has now closed; full details are here: nvda.ws/3N5kL9Q.
#InstantNeRFSweepstakes #NeRF #AI #3D #NeuralRendering #volumetric #photogrammetry #synthesis #NVIDIA #NVIDIAResearch #InstantNGP #computervision #NeRFs
Neural rendering algorithms learn from real-world data to create synthetic images — and NVIDIA research projects are developing state-of-the-art tools to do so in 2D and 3D.
In 2D, the StyleGAN-NADA model, developed in collaboration with Tel Aviv University, generates images with specific styles based on a user’s text prompts, without requiring example images for reference.
Learn more about our #SIGGRAPH2022 accepted papers: https://nvidia.com/en-us/events/siggraph and blogs.nvidia.com/blog/2022/05/04/siggraph-ai-graphics-research-collaboration
Reference: github.com/NVIDIA/NVFlare and nvflare.readthedocs.io/en/main/index.html
Continue learning with NVFLARE documentation: nvidia.github.io/NVFlare
NVFlare, Federated Learning, Cifar10, NVIDIA FLARE
#edgeai #edgecomputing #freetrial
Kubernetes cloud services, cloud application deployment, cloud deployment platforms, cloud deployment technologies, cloud software deployment, deploy to container, openshift deployment, software deployment platform, Kubernetes management, container management, container orchestration, ai application deployment, ai container orchestration
Learn more:
NVIDIA Riva: nvda.ws/3tf1GKj
NVIDIA Riva Quickstart Guide: nvda.ws/3azmSVb
NVIDIA Riva Documentation: nvda.ws/3miAzu6
NVIDIA Riva WebSocket Bridge: github.com/nvidia-riva/websocket-bridge
#AI #speechAI #speechrecognition #texttospeech #asr #tts
Download:
github.com/NVIDIA-ISAAC-ROS
Learn more on Isaac ROS:
developer.nvidia.com/isaac-ros
Post queries or comments on the Isaac Forum:
forums.developer.nvidia.com/c/agx-autonomous-machines/isaac/isaac-ros/600
Visit the NGC catalog today to browse our collection of Jupyter notebook examples and run it using Click Deploy. nvda.ws/3sOtT9Z
#nvidia #ai #googlecloud #jupyternotebook #jupyter #artificialintelligence
#NeRF #AI #CVPR
Subscribe to our YouTube channel to see new Grandmaster Series episodes.
If you have any questions during the video, you can submit them through chat. We will try to provide answers throughout and at the end of the episode.
About our presenters:
Chris Deotte is a senior data scientist at NVIDIA. Chris has a Ph.D. in computational science and mathematics, with a thesis on optimizing parallel processing. He is a 4x Kaggle grandmaster.
Christof Henkel is a senior deep learning scientist at NVIDIA and holds a Ph.D. in mathematics with a focus on probability theory and stochastic processes. He is a 3x Kaggle grandmaster.
Jean-Francois Puget holds a Ph.D. in machine learning and has published over 70 scientific papers in peer-reviewed conferences and journals. He is a 2x Kaggle grandmaster.
Ahmet Erdem is a senior data scientist at NVIDIA with a background in computer engineering and artificial intelligence.
Additional Resources:
1. Get Started on NLP and Conversational AI with NVIDIA DLI Courses: developer.nvidia.com/blog/get-started-on-nlp-and-conversational-ai-with-free-courses-from-nvidia-dli
2. Instructor-led Workshop - Building Transformer-Based Natural Language Processing Applications: nvidia.com/en-us/training/instructor-led-workshops/natural-language-processing
3. Applying Natural Language Processing Across the World’s Languages: developer.nvidia.com/blog/applying-natural-language-processing-across-the-worlds-languages
Follow us on Twitter: twitter.com/NVIDIAAI
• Learn how to select from the three object tracker alternatives (NvDCF, DeepSORT, or IOU), or bring your own tracker to DeepStream for vision AI app development.
• Get to know the state machine behind the tracker and which parameters can be configured to optimize it for your specific application.
• Understand how different configuration parameters affect NVIDIA’s state-of-the-art tracker (NvDCF), along with the results and tradeoffs that come from different optimization strategies.
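For orientation, a minimal Gst-nvtracker configuration group might look like the following. The keys shown appear in the DeepStream plugin documentation, but the exact library path, config file names, and supported options depend on your DeepStream version, so verify everything against the docs linked below:

```ini
# Illustrative Gst-nvtracker settings -- verify against your DeepStream version.
[tracker]
tracker-width=640
tracker-height=384
gpu-id=0
# One unified low-level library implements NvDCF, DeepSORT, and IOU;
# the ll-config-file selects which tracker runs and with what parameters.
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
enable-batch-process=1
```

Switching trackers is then a matter of pointing ll-config-file at a different tracker config (e.g., an IOU or DeepSORT YAML) rather than changing the pipeline itself.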
More information about DeepStream tracker:
• Review the DeepStream Low-Level Tracker Comparisons and Tradeoffs for help choosing the best tracker for your application: docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvtracker.html#low-level-tracker-comparisons-and-tradeoffs
• Read the NvMultiObjectTracker Parameter Tuning Guide to learn how to troubleshoot and fine-tune tracker configurations: nvda.ws/3NZmbTG
• Check out the Gst-nvtracker plugin documentation for detailed information on how to work with the low-level tracking libraries and how to implement your own: nvda.ws/3xyZnTI
• DeepStream get Started resources: nvda.ws/39vuk3u