NVIDIA NIM inference microservice at scale with OCI Kubernetes Engine | Oracle Developers (@oracledevs) | Uploaded June 2024 | Updated October 2024.
How can you serve inference requests at scale for your large language model and accelerate your AI deployment? By deploying NVIDIA NIM, an enterprise-ready inference microservice, on Oracle Cloud Infrastructure (OCI) Kubernetes Engine (OKE). In this demo, we show how to deploy NVIDIA NIM on OKE with the model repository hosted on OCI Object Storage. A Helm deployment lets you scale the number of replicas up or down with inference demand and gives you straightforward monitoring. OCI Object Storage lets you deploy models from anywhere, with support for a variety of model types. Backed by NVIDIA GPUs, NIM helps you get maximum throughput and minimum latency for your inference requests.
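
To make the workflow concrete, here is a minimal Python sketch of the two moving parts described above: scaling the Helm-managed NIM deployment's replica count, then sending a request to NIM's OpenAI-compatible HTTP endpoint. The deployment name, namespace, service address, and model identifier below are illustrative assumptions, not values from the demo; substitute the ones from your own OKE deployment.

```python
# Sketch: scale a NIM deployment on OKE, then send an inference request.
# All names (deployment, namespace, URL, model) are illustrative assumptions.
import requests
from kubernetes import client, config

# Scale the Helm-managed NIM deployment to match inference demand.
config.load_kube_config()  # uses your OKE kubeconfig
apps = client.AppsV1Api()
apps.patch_namespaced_deployment_scale(
    name="nim-llm",          # assumed Helm release/deployment name
    namespace="nim",         # assumed namespace
    body={"spec": {"replicas": 4}},
)

# Send a chat completion to the NIM service's OpenAI-compatible API.
NIM_URL = "http://<load-balancer-ip>:8000/v1/chat/completions"  # assumed address
payload = {
    "model": "meta/llama3-8b-instruct",  # assumed model identifier
    "messages": [{"role": "user", "content": "What does NVIDIA NIM provide?"}],
    "max_tokens": 128,
}
resp = requests.post(NIM_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

In practice the replica count shown in the demo is driven by Helm values or an autoscaler rather than patched by hand; the snippet above just makes the scaling-and-serving loop visible in one place.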

Learn more: oracle.com/aisolutions