Why are organizations moving AI and critical workloads to the edge in a hybrid model?
Organizations are rethinking where they run workloads because AI, data growth, and customer expectations are changing what “good enough” looks like.
Several trends stand out from the research cited in the report:
- AI is now a hybrid workload for many organizations. 43% of organizations surveyed by Enterprise Strategy Group (now Omdia) identify the edge as an infrastructure location for AI initiatives.
- The most common approach to deploying generative AI is hybrid (used by 33% of organizations), combining cloud scalability with on-premises control.
Companies are moving more processing to the edge within a hybrid architecture for several practical reasons:
1. **Latency and real-time needs**
Time-critical processes—such as robotics, telecom radio units, or real-time analytics—need ultra-low latency and deterministic responses. Running these workloads at the edge, close to where data is generated, helps avoid delays that can occur when sending everything to a distant cloud.
2. **Data volume and bandwidth costs**
High-volume data sources like video streams and sensor feeds can be expensive to ship in full to the cloud. Preprocessing or filtering data at the edge reduces transmission costs and lets organizations send only what’s needed to core data centers or cloud for deeper analytics.
3. **Connectivity constraints**
Many edge locations operate with limited or unreliable WAN connectivity. They need to keep running even during network outages. Autonomous edge operations ensure local services continue, then sync with the cloud when connectivity returns.
4. **Security, compliance, and data sovereignty**
Regulatory requirements often mandate that certain data stays on premises or within a specific country. A hybrid model lets organizations keep sensitive data in local or regional environments while still using cloud resources for scalable processing and aggregation.
5. **Physical and environmental constraints**
Remote or harsh locations—such as industrial sites, telecom cabinets, or small branches—may have limited space, power, or cooling. Ruggedized, compact edge servers are better suited to these conditions than traditional data center hardware.
6. **Cost, efficiency, and sustainability**
A hybrid approach allows organizations to place each workload where it makes the most sense economically—balancing infrastructure costs, power consumption, and carbon footprint.
In practice, this leads to a model where:
- The **edge** handles real-time, local processing and resilience.
- **Core data centers and cloud** handle large-scale analytics, aggregation, and long-term storage.
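The edge patterns above — filter high-volume data locally, keep running through WAN outages, then sync with the cloud — can be sketched in a few lines. This is a minimal illustration with invented field names and thresholds, not a real edge pipeline, which would use site-specific sensor schemas and transport libraries.

```python
from collections import deque

# Hypothetical threshold for illustration only.
ANOMALY_THRESHOLD = 80.0

def filter_readings(readings, threshold=ANOMALY_THRESHOLD):
    """Keep only the readings worth forwarding to the core/cloud tier,
    instead of shipping every raw sensor sample over the WAN."""
    return [r for r in readings if r["value"] >= threshold]

class StoreAndForwardBuffer:
    """Buffer events locally while connectivity is down; drain when it returns."""

    def __init__(self, maxlen=10_000):
        # Bounded queue so a long outage cannot exhaust local storage.
        self.queue = deque(maxlen=maxlen)

    def enqueue(self, event):
        self.queue.append(event)

    def drain(self, send):
        """Try to send buffered events in order; stop (and keep the rest)
        as soon as send() reports the uplink is unavailable."""
        sent = 0
        while self.queue:
            if not send(self.queue[0]):  # send() returns False when the WAN is down
                break
            self.queue.popleft()
            sent += 1
        return sent
```

The design choice mirrors the hybrid split: the filter decides *what* leaves the site (bandwidth), and the bounded buffer decides *when* it leaves (resilience to connectivity loss).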
HPE ProLiant edge servers and HPE Compute Ops Management are designed to support this hybrid approach by providing secure, manageable compute close to where data is created, while still tying into centralized cloud-native management and automation.
What makes HPE ProLiant servers suitable for edge deployments?
HPE ProLiant servers for the edge are designed to operate outside traditional data centers while still fitting into a broader hybrid strategy. Several characteristics make them suitable for distributed, real-world environments:
1. **Edge-optimized hardware designs**
The portfolio includes different form factors tailored to varied edge scenarios:
- **HPE ProLiant MicroServer** for small remote sites with limited space and minimal on-site IT.
- **HPE ProLiant DL20**, a 1U rack server, for larger branch offices that need more capacity in a compact footprint.
- **HPE ProLiant DL145 Gen11** for regional hubs and AI processing, supporting up to three single-width GPUs (such as NVIDIA L4) in a short-depth chassis. It is about 50% shorter in depth than a typical DL365 rack server and quiet enough (around 55 dB) for office environments.
- **HPE ProLiant EL8000** modular system for telecom and harsh environments, with high shock and vibration resistance, dust filtering, front-access modularity for quick blade swaps, and support for 48V DC power.
2. **Built-in, silicon-based security**
Security is a core requirement at the edge, where physical access is harder to control. HPE ProLiant servers include:
- **HPE iLO firmware with silicon root of trust**, helping ensure that only trusted firmware can run.
- **HPE iLO 7** with SPDM (Security Protocol and Data Model)–based authentication to verify component integrity.
- Support for key security settings such as secure boot, password complexity, and role-based access control.
3. **Unified, cloud-native management**
All servers in the edge portfolio share common management and security capabilities:
- They run **HPE iLO** for remote configuration, monitoring, and updates.
- They integrate with **HPE Compute Ops Management**, a cloud-based service that provides centralized lifecycle management across distributed environments.
4. **Support for AI and accelerator workloads**
HPE ProLiant servers are purpose-built to accommodate accelerators:
- The DL145 Gen11, for example, supports up to three single-width GPUs such as the NVIDIA L4, enabling AI inference and analytics at the edge.
- The portfolio includes both Intel- and AMD-based systems, giving flexibility to match CPU and accelerator choices to specific workloads.
5. **Resilience for distributed operations**
The combination of ruggedized hardware options, embedded security, and remote management is aimed at keeping edge sites running reliably even with limited on-site IT staff.
Together, these capabilities help organizations place the right server in each location—from small branches to regional hubs and telecom cabinets—while maintaining consistent security, management, and integration with their broader hybrid infrastructure.
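To make the security posture described above concrete, the sketch below shows the kind of baseline audit that checking settings such as secure boot, password complexity, and role-based access control implies. The setting names and baseline values are illustrative placeholders, not actual iLO parameter names.

```python
# Illustrative security baseline; real iLO settings use different identifiers.
BASELINE = {
    "secure_boot": True,
    "password_complexity": True,
    "rbac_enabled": True,
}

def audit_server(settings, baseline=BASELINE):
    """Return the settings on one server that deviate from the baseline."""
    return {
        name: {"expected": expected, "actual": settings.get(name)}
        for name, expected in baseline.items()
        if settings.get(name) != expected
    }

def audit_fleet(fleet, baseline=BASELINE):
    """Map server name -> deviations, covering a whole group of servers,
    so a consistent standard can be enforced across sites."""
    return {
        server: deviations
        for server, settings in fleet.items()
        if (deviations := audit_server(settings, baseline))
    }
```

A fleet-wide report like this is the manual equivalent of what centralized tooling automates: compliant servers drop out of the result, and only deviations need attention.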
How does HPE Compute Ops Management simplify managing large-scale edge and hybrid environments?
HPE Compute Ops Management is a cloud-based software service designed to give IT teams centralized control over distributed HPE ProLiant servers, including those at the edge. It focuses on lifecycle management, security, and operational efficiency.
Key ways it simplifies management include:
1. **Centralized lifecycle management**
Compute Ops Management provides a single place to:
- Onboard devices and maintain inventory.
- Monitor server health and power usage.
- Perform power control operations (reboots, power on/off).
- Manage firmware and OS deployments across many sites.
2. **Efficient, automated firmware updates**
The platform is designed to reduce maintenance effort and downtime:
- Uses **delta-based firmware updates**, updating only what has changed. This significantly reduces download sizes and shortens maintenance windows.
- Runs **automated pre-checks** (e.g., open ports, baseline compatibility) before updates to reduce failures.
- Supports **flexible scheduling**, allowing updates to be planned up to 12 months in advance, with options for immediate updates or admin-controlled reboots.
3. **Strengthened security and compliance**
Security management is built into the service:
- Continuously monitors server fleet compliance and flags risks.
- Provides recommendations for resolving issues across 12 iLO security settings, including secure boot, password policies, and role-based access control.
- Enables **group-based management** of firmware baselines, power policies, and security settings, so standards can be enforced consistently across server groups.
4. **Remote operations for edge locations**
For sites with little or no on-site IT presence, Compute Ops Management offers:
- Remote console access to servers at edge locations.
- Centralized incident response and troubleshooting, reducing the need for physical visits.
5. **Secure, scalable architecture**
The platform is built to support large and diverse infrastructures:
- Uses a **Secure Gateway** virtual appliance that aggregates iLO connections on premises into a single secure outbound link. This is especially useful for organizations like financial institutions managing thousands of servers, as it reduces the number of outbound connections.
- Provides an **MSP view**, allowing managed service providers to monitor and manage multiple customer environments from one interface.
6. **Integration with broader IT operations tools**
Compute Ops Management integrates with other HPE and third-party platforms, including:
- **HPE Aruba Networking Central** and **HPE Data Services Cloud Console (DSCC)** for broader infrastructure visibility.
- **ServiceNow** for IT service management workflows.
- **HPE OpsRamp Software** for unified operations across compute, network, and storage.
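The delta-based update idea from point 2 can be sketched simply: compare each server's installed component versions against the firmware baseline and plan downloads only for the components that differ. The component names and version strings here are invented for illustration; they do not represent the actual update mechanism.

```python
def plan_delta_update(installed, baseline):
    """Return only the firmware components whose installed version
    differs from the baseline target, so only those payloads are
    downloaded and applied."""
    return {
        component: target
        for component, target in baseline.items()
        if installed.get(component) != target
    }

# Example with invented component names and versions:
baseline = {"ilo": "3.10", "bios": "2.40", "nic": "1.15"}
installed = {"ilo": "3.10", "bios": "2.38", "nic": "1.15"}

delta = plan_delta_update(installed, baseline)
# Only the BIOS differs from the baseline, so only that payload is
# fetched -- shrinking the transfer and the maintenance window.
```

Skipping unchanged components is what keeps download sizes and maintenance windows small when the same plan runs across many distributed sites.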
By combining these capabilities, HPE Compute Ops Management helps organizations manage thousands of distributed HPE ProLiant servers as a cohesive environment, aligning edge, on-premises, and cloud resources within a single hybrid strategy.