- Brand : HPE
- Product name : Machine Learning Inference Software Base 5-year E-RTU
- Product code : S3W28AAE
- Category : Warranty & Support Extensions
- Data-sheet quality : created/standardized by Icecat
- Product views : 609
- Info modified on : 05 Aug 2025 03:20:03
- Documents : Quickspecs (0.2 MB), Brochure (1.9 MB)
Predictable, Dependable, Protected, and Monitored Deployment for Diverse Environments

HPE Machine Learning Inference Software can deploy models through an intuitive graphical interface and scale deployments based on load.
Tune performance with real-time monitoring of models, and track predictions and statistics around each deployment.
Whether in an existing Kubernetes cluster, a private cloud, or even a hybrid cloud, HPE Machine Learning Inference Software provides consistent tooling across continually modernizing systems to meet your needs.
Industry-standard Helm charts deploy the software onto any Kubernetes-compatible platform, including OpenShift, Rancher, EKS, AKS, and GKE, so any cloud can be used consistently.
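As a sketch of what such a Helm-based deployment might look like, the values file below is purely illustrative; the image repository and key names are hypothetical, not HPE's actual chart schema.

```yaml
# Hypothetical values.yaml for a Helm deployment of an inference service.
# Key names follow common Helm chart conventions, not the product's schema.
replicaCount: 2
image:
  repository: registry.example.com/hpe-mlis   # placeholder registry
  tag: "1.0.0"
resources:
  limits:
    nvidia.com/gpu: 1       # one GPU per inference pod
autoscaling:
  enabled: true             # scale deployments based on load
  minReplicas: 2
  maxReplicas: 8
```

Because the chart is standard Helm, applying it works the same way on OpenShift, EKS, AKS, or GKE, which is what makes Helm the portability layer here.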
Out-of-Box Support for NVIDIA Models and Tools

HPE Machine Learning Inference Software offers flexible, first-class support for NVIDIA GPUs, with an architecture designed to easily add support for new hardware as systems continue to modernize.
Integration with NVIDIA's AI Enterprise (NVAIE) software suite, NVIDIA Inference Microservices (NIM) built on Triton and TensorRT-LLM, and other AI inferencing techniques delivers enhanced performance.
Built-In Enterprise-Class Security

HPE Machine Learning Inference Software executes workloads in your preferred environment, including cloud, hybrid, on-premises, or even air-gapped, enabling models, code, and data to remain protected.
Use Role-Based Access Controls (RBAC) to authorize development and MLOps teams to collaborate and share ML resources and artifacts securely.
Protect deployment endpoints with enterprise-class security that requires advanced authentication, including OIDC and OAuth 2.0, to interact with models.
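To illustrate what an OAuth 2.0-protected endpoint implies for client code, here is a minimal Python sketch using only the standard library; the endpoint URL and token are placeholders, and the request is only constructed, never sent.

```python
import json
import urllib.request


def build_inference_request(endpoint: str, token: str, payload: dict) -> urllib.request.Request:
    """Construct an HTTP request carrying an OAuth 2.0 bearer token.

    Hypothetical sketch: a real client would first obtain the token
    from the deployment's OIDC provider, then POST to the model endpoint.
    """
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",  # endpoint rejects missing/expired tokens
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Placeholder URL and token for illustration only.
req = build_inference_request(
    "https://mlis.example.com/v1/models/my-model/predict",
    "example-token",
    {"inputs": [[1.0, 2.0, 3.0]]},
)
```

The point of the sketch is that authorization lives in a standard `Authorization: Bearer` header, so any HTTP client that can attach a token can reach a protected model.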
Broad Model Compatibility

HPE Machine Learning Inference Software offers streamlined integration for specific large language models (LLMs) directly from Hugging Face and NVIDIA NIM, while enabling deployment of models built with most frameworks.
Achieve increased flexibility by using models from diverse frameworks such as TensorFlow, PyTorch, Scikit-Learn, and XGBoost, accommodating a broad range of pre-trained and custom models.
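One way to picture this cross-framework compatibility is a thin adapter that gives models from different frameworks a single predict interface; the sketch below is illustrative only, not HPE's actual serving API, and the "models" are plain Python callables standing in for real TensorFlow, PyTorch, Scikit-Learn, or XGBoost objects.

```python
from typing import Callable, Sequence


class ModelAdapter:
    """Wrap any framework's predict callable behind one common interface.

    Hypothetical sketch: in a real system, framework-specific loaders
    (TensorFlow, PyTorch, Scikit-Learn, XGBoost) would sit behind this.
    """

    def __init__(self, name: str, predict_fn: Callable[[Sequence[float]], float]):
        self.name = name
        self._predict_fn = predict_fn

    def predict(self, features: Sequence[float]) -> float:
        # The serving layer calls this uniformly, regardless of framework.
        return self._predict_fn(features)


# Stand-ins for models originating from two different frameworks.
linear = ModelAdapter("sklearn-linear", lambda x: sum(x))      # e.g. a Scikit-Learn model
doubler = ModelAdapter("torch-doubler", lambda x: 2 * sum(x))  # e.g. a PyTorch model

# A registry keyed by model name is all the serving layer needs to route requests.
registry = {m.name: m for m in (linear, doubler)}
```

The design choice here is duck typing: once every model answers `predict()`, deployment, scaling, and monitoring code never needs to know which framework produced it.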