

HPE Machine Learning Inference Software lets you deploy models through an intuitive graphical interface and scale deployments based on load.
Tune performance with real-time model monitoring, and track predictions and deployment statistics.
Whether in an existing Kubernetes cluster, a private cloud, or even a hybrid cloud, HPE Machine Learning Inference Software provides consistent tooling across continually modernizing systems to meet your needs.
Industry-standard Helm charts are used to deploy into any Kubernetes-compatible platform, e.g., OpenShift, Rancher, EKS, AKS, or GKE, so any cloud can be leveraged consistently.
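As a rough sketch of what a Helm-based install looks like on any of those platforms: the commands below are the generic Helm workflow, but the repository URL, chart name, and release name are hypothetical placeholders, not the actual HPE chart coordinates.

```shell
# Hypothetical chart repository and chart name for illustration only.
helm repo add example-mlis https://charts.example.com/mlis
helm repo update

# Install into a dedicated namespace on any Kubernetes-compatible cluster
# (OpenShift, Rancher, EKS, AKS, GKE, ...), overriding defaults via a
# site-specific values file.
helm install mlis example-mlis/mlis \
  --namespace mlis --create-namespace \
  --values my-values.yaml
```

Because Helm is the same tool everywhere, the identical command sequence works across on-premises and cloud clusters; only the values file changes.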
HPE Machine Learning Inference Software offers flexible, first-class support for NVIDIA GPUs, with an architecture designed to easily add support for continually modernizing systems.
Integration with NVIDIA's AI Enterprise (NVAIE) software suite, NVIDIA Inference Microservices (NIM) (utilizing Triton and TensorRT-LLM), and other AI inferencing techniques offers enhanced performance.
HPE Machine Learning Inference Software executes workloads in your preferred environment, including cloud, hybrid, on-premises, or even air-gapped, enabling models, code, and data to remain protected.
Use Role-Based Access Controls (RBAC) to authorize development and MLOps teams to collaborate and share ML resources and artifacts securely.
Protect deployment endpoints with enterprise-class security features that require advanced authentication, including OIDC and OAuth 2.0, to interact with models.
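To make the endpoint-protection point concrete, the sketch below shows the client side of the usual OAuth 2.0 bearer-token pattern. The endpoint URL is a hypothetical placeholder, and the exact token-issuance flow depends on your OIDC provider and deployment; this is not the documented HPE client API.

```python
# Hypothetical inference endpoint; real MLIS endpoint URLs depend on
# your deployment.
ENDPOINT = "https://mlis.example.com/v1/models/my-model/infer"

def build_auth_headers(access_token: str) -> dict:
    """Build request headers carrying an OAuth 2.0 bearer token.

    The access token would be obtained from your OIDC provider
    (e.g., via an authorization-code or client-credentials flow).
    """
    return {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }

# A request without a valid token would be rejected by the endpoint, e.g.:
# resp = requests.post(ENDPOINT,
#                      headers=build_auth_headers(token),
#                      json={"inputs": [...]})
```

The design choice here is standard: authentication lives entirely in the `Authorization` header, so any HTTP client can call protected model endpoints once it can obtain a token.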
HPE Machine Learning Inference Software offers streamlined integration for specific large language models (LLMs) directly from Hugging Face and NVIDIA Inference Microservices (NIM), while enabling deployment of models from most frameworks.
Achieve increased flexibility using models from diverse frameworks such as TensorFlow, PyTorch, scikit-learn, and XGBoost to accommodate a broad range of pre-trained and custom models.
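One common way to serve models from diverse frameworks behind a single interface is a loader registry keyed by framework name. The sketch below illustrates that pattern in minimal form; the framework names and the stand-in loader are illustrative assumptions, not the MLIS API.

```python
from typing import Any, Callable, Dict

# Registry mapping a framework name to a function that loads a model
# from a path and returns a callable predictor.
LOADERS: Dict[str, Callable[[str], Callable]] = {}

def register_loader(framework: str):
    """Decorator registering a loader for one framework."""
    def wrap(fn: Callable[[str], Callable]) -> Callable[[str], Callable]:
        LOADERS[framework] = fn
        return fn
    return wrap

@register_loader("sklearn")
def load_sklearn(path: str) -> Callable:
    # In a real deployment this might be joblib.load(path);
    # a trivial stand-in model is returned here for illustration.
    return lambda inputs: sum(inputs)

def predict(framework: str, path: str, inputs: Any) -> Any:
    """Dispatch to the right framework's loader, then run inference."""
    model = LOADERS[framework](path)
    return model(inputs)
```

Adding TensorFlow, PyTorch, or XGBoost support then means registering one more loader, without changing the serving path.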