Edge Computing vs Cloud for Industrial AI: Making the Right Architecture Choice
Every manufacturing AI discussion eventually hits the architecture question: should processing happen at the edge (near the equipment) or in the cloud (centralised servers)?
Vendors take strong positions—edge advocates claim cloud latency is unacceptable; cloud advocates say edge creates maintenance nightmares. The truth, as usual, is more nuanced.
Defining the terms
Edge computing: Processing happens locally, on devices close to the equipment generating data. This might be an industrial PC on the factory floor, a gateway device, or even embedded processing within equipment.
Cloud computing: Processing happens in remote data centres accessed over networks. AWS, Azure, Google Cloud are the typical infrastructure providers.
Fog/hybrid: Some processing at the edge, some in the cloud, with intelligent division of responsibilities.
Most real implementations are hybrid, but the question is where the balance lies.
When edge computing makes sense
Latency-critical applications
Some applications can’t tolerate network delays. Examples:
Real-time control loops: If AI is directly controlling process parameters, millisecond latency matters. Sending data to the cloud and waiting for a response is too slow.
Safety systems: Any AI involved in safety functions needs to respond faster than cloud round-trips allow.
High-speed inspection: Vision systems inspecting products at line speed need local processing to make accept/reject decisions in time.
A rule of thumb: if you need a response in under 100 milliseconds, edge processing is probably necessary.
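If you're not sure which side of that line an application falls on, measure it rather than guessing. A minimal Python sketch (the `infer` callable is a stand-in for your cloud endpoint or local model, not a real API):

```python
import time
import statistics

LATENCY_BUDGET_S = 0.100  # the 100 ms rule of thumb from above

def measure_response(infer, sample, trials=50):
    """Time repeated calls to an inference function; return (median, p95)."""
    timings = []
    for _ in range(trials):
        start = time.perf_counter()
        infer(sample)
        timings.append(time.perf_counter() - start)
    timings.sort()
    p95 = timings[int(0.95 * (len(timings) - 1))]
    return statistics.median(timings), p95

# If the 95th-percentile round trip (network plus inference) exceeds the
# budget, the workload is a candidate for edge deployment:
# median, p95 = measure_response(call_cloud_endpoint, sensor_frame)
# needs_edge = p95 > LATENCY_BUDGET_S
```

Measure the full round trip, including network and serialisation, not just model execution time.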
Unreliable connectivity
Factory networks aren’t always reliable. Internet connections fail. If your AI stops working when connectivity drops, that’s a problem.
Edge processing ensures continued operation even when disconnected. The AI might lose some capabilities (it can’t update models or sync data), but core functions continue; a minimal fallback pattern is sketched after the list below.
This is especially relevant for:
- Remote or regional facilities with poor connectivity
- Mobile equipment that moves between network zones
- Operations that can’t tolerate any AI downtime
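The pattern behind this resilience is straightforward: try the remote path first, and fall back to a local model when it fails. A minimal sketch, assuming the third-party requests library, a scikit-learn-style local model, and a placeholder endpoint URL:

```python
import requests

CLOUD_URL = "https://example.com/api/infer"  # placeholder, not a real endpoint

def classify(sample, local_model, timeout_s=0.5):
    """Prefer the cloud model, but degrade gracefully when offline."""
    try:
        resp = requests.post(CLOUD_URL, json={"features": sample}, timeout=timeout_s)
        resp.raise_for_status()
        return resp.json()["label"], "cloud"
    except requests.RequestException:
        # Connectivity lost or endpoint down: use the local model instead.
        return local_model.predict([sample])[0], "edge-fallback"
```

Returning the source of each decision alongside the result makes degraded operation visible in downstream logs rather than silent.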
High data volumes
Industrial systems can generate enormous data volumes. Vibration sensors sampling at 50 kHz, vision cameras at 30 frames per second, process sensors across hundreds of points—the data adds up fast.
Sending all that data to the cloud is expensive (bandwidth costs) and slow. Edge processing can analyse data locally, transmitting only insights or aggregated information. This reduces bandwidth by orders of magnitude.
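As a concrete sketch of what “transmitting only insights” can look like, the following function (the feature choices are illustrative, not a standard) reduces a one-second window of 50 kHz vibration data to four summary values:

```python
import numpy as np

def summarise_window(samples: np.ndarray, fs_hz: float = 50_000.0) -> dict:
    """Condense one second of raw vibration samples into summary features."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs_hz)
    return {
        "rms": float(np.sqrt(np.mean(samples ** 2))),
        "peak": float(np.max(np.abs(samples))),
        # Kurtosis rises with impulsive faults such as bearing damage.
        "kurtosis": float(((samples - samples.mean()) ** 4).mean()
                          / (samples.std() ** 4 + 1e-12)),
        # Skip the DC bin when finding the dominant frequency.
        "dominant_freq_hz": float(freqs[int(np.argmax(spectrum[1:])) + 1]),
    }
```

Fifty thousand raw samples (roughly 200 KB as 32-bit floats) become a payload of a few dozen bytes.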
Data sensitivity
Some manufacturers are uncomfortable sending operational data to cloud providers, even with security assurances. Edge processing keeps sensitive data on-premises.
This is particularly relevant for:
- Defence-related manufacturing
- Proprietary processes that provide competitive advantage
- Industries with strict data residency requirements
When cloud computing makes sense
Complex analytics across multiple sources
Some analyses require correlating data from many sources—multiple machines, different facilities, supply chain data, external factors like weather. The cloud is better positioned for this integration.
Compute-intensive model training
Training machine learning models, especially deep learning models, often requires significant computing power that’s expensive to maintain locally. Cloud provides on-demand access to this capability.
Centralised management and updates
Managing AI models across many edge devices is complex. Updates need to be pushed, performance monitored, consistency maintained. Cloud-centric architectures centralise this management.
For organisations with limited IT capability, cloud platforms reduce the local maintenance burden.
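One common shape for this is a pull model: each edge device periodically polls a cloud-hosted manifest and downloads a new model only when the version changes, verifying integrity before swapping it in. A sketch, with the URL and manifest fields assumed for illustration:

```python
import hashlib
import json
import urllib.request

MANIFEST_URL = "https://example.com/models/manifest.json"  # placeholder

def check_for_update(current_version: str):
    """Poll the manifest; return (version, url, sha256) if a newer model exists."""
    with urllib.request.urlopen(MANIFEST_URL, timeout=10) as resp:
        manifest = json.load(resp)
    if manifest["version"] != current_version:
        return manifest["version"], manifest["artifact_url"], manifest["sha256"]
    return None

def verify(blob: bytes, expected_sha256: str) -> bool:
    """Never load a model artifact that fails its integrity check."""
    return hashlib.sha256(blob).hexdigest() == expected_sha256
```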
Scale flexibility
Cloud resources can scale up and down as needed. Running a complex analysis occasionally doesn’t require maintaining that computing capacity permanently.
Edge devices, once installed, have fixed capacity. If requirements grow, you need new hardware.
Multi-site visibility
Organisations with multiple facilities often want unified visibility and analytics. Cloud platforms naturally support this—data from all sites is accessible in one place.
Achieving this with edge-only architectures requires building additional infrastructure for aggregation and synchronisation.
The hybrid reality
Most successful implementations use a hybrid approach:
Edge handles:
- Real-time decisions requiring low latency
- Data preprocessing and aggregation
- Local fallback when connectivity fails
- Privacy-sensitive processing
Cloud handles:
- Model training and updates
- Cross-site analytics and comparison
- Long-term data storage and historical analysis
- Complex analysis that benefits from scale
The split point varies by application. For a vision inspection system, the edge might run inference (is this part defective?) while the cloud stores images and retrains models periodically. For predictive maintenance, the edge might do initial anomaly detection while the cloud runs sophisticated remaining-useful-life models.
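For the vision case, that split can be expressed in a few lines. In this sketch, model.predict and send_to_cloud are stand-ins for your own inference and upload code: the accept/reject decision is synchronous and local, while images drain to the cloud in the background.

```python
import queue
import threading

upload_queue: "queue.Queue[dict]" = queue.Queue(maxsize=1000)

def inspect(image, model):
    """Edge side of the split: decide locally, defer everything else."""
    verdict = model.predict(image)  # low-latency accept/reject decision
    try:
        # Non-blocking: inspection never waits on the network.
        upload_queue.put_nowait({"image": image, "verdict": verdict})
    except queue.Full:
        pass  # drop training data rather than stall the line
    return verdict

def uploader(send_to_cloud):
    """Background thread drains the queue whenever connectivity allows."""
    while True:
        send_to_cloud(upload_queue.get())  # retries omitted for brevity

# threading.Thread(target=uploader, args=(send_to_cloud,), daemon=True).start()
```

Dropping records under backpressure is deliberate: training data is recoverable later, a stalled production line is not.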
Practical considerations
Cost
Edge costs include:
- Hardware (industrial PCs, edge devices)
- Local IT support and maintenance
- Replacement when hardware fails
- Power and cooling
Cloud costs include:
- Compute charges (pay for what you use, but can add up)
- Storage costs (especially for large volumes of historical data)
- Data transfer costs (getting data to the cloud)
- Ongoing subscription/licensing
At small scale, cloud is usually cheaper (no upfront hardware). At very large scale, edge can be cheaper (no per-transaction charges). The crossover depends on your specific volumes and requirements.
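To find your own crossover, put both scenarios into a simple model with real volumes. Every figure below is a placeholder, and the sketch ignores power, cooling, and tiered pricing:

```python
def annual_cost_edge(devices, hw_cost, lifetime_years, support_per_device):
    """Amortised hardware plus local support, per year."""
    return devices * (hw_cost / lifetime_years + support_per_device)

def annual_cost_cloud(gb_per_month, transfer_per_gb, compute_per_month,
                      storage_per_month):
    """Usage-based charges, per year."""
    return 12 * (gb_per_month * transfer_per_gb
                 + compute_per_month + storage_per_month)

# Placeholder figures only; substitute quotes from your vendors.
edge = annual_cost_edge(devices=20, hw_cost=3000, lifetime_years=5,
                        support_per_device=400)
cloud = annual_cost_cloud(gb_per_month=2000, transfer_per_gb=0.09,
                          compute_per_month=1500, storage_per_month=600)
print(f"edge: ${edge:,.0f}/yr, cloud: ${cloud:,.0f}/yr")
```

Run it over a three-to-five-year horizon; hardware amortisation and data growth usually dominate the answer.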
Skills
Edge computing requires people who can manage local infrastructure—hardware, operating systems, networking. These skills are different from application development.
Cloud computing requires different skills—cloud platform expertise, API integration, modern software practices. Many manufacturers have the former skills (from automation systems) but not the latter.
Where do your skills lie? Where can you build or acquire capability?
Vendor dependencies
Edge solutions often come from automation vendors (Siemens, Rockwell, ABB) and create dependencies on those ecosystems.
Cloud solutions create dependencies on cloud providers (AWS, Azure, Google). While switching cloud providers is theoretically possible, it’s rarely easy.
Understand what dependencies you’re creating with architectural choices.
Evolution
Technology is moving in both directions. Edge devices are becoming more powerful; cloud is becoming faster and more capable. The edge vs cloud balance that makes sense today may shift over time.
Design for flexibility where possible. Modular architectures that can shift processing between edge and cloud provide more options.
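In software terms, one way to keep that option open (class and function names here are hypothetical) is to hide the placement decision behind a common interface, so moving a workload between edge and cloud becomes a configuration change rather than a rewrite:

```python
from typing import Protocol

class InferenceBackend(Protocol):
    """Anything that can answer a prediction request."""
    def predict(self, features: list[float]) -> float: ...

class LocalBackend:
    """Runs a model loaded on the edge device itself."""
    def __init__(self, model):
        self.model = model
    def predict(self, features):
        return self.model.predict([features])[0]

class CloudBackend:
    """Forwards requests to a remote endpoint."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint
    def predict(self, features):
        raise NotImplementedError("HTTP call omitted for brevity")

def make_backend(config, model=None) -> InferenceBackend:
    """Placement is decided by configuration, not by application code."""
    if config["mode"] == "edge":
        return LocalBackend(model)
    return CloudBackend(config["endpoint"])
```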
Making the decision
For a specific AI application, work through these questions:
- What are the latency requirements? Sub-100 ms needs edge.
- What happens if connectivity fails? If the AI must keep working, edge is needed.
- How much data is generated? Very high volumes favour edge preprocessing.
- What data sensitivity applies? Highly sensitive data may require edge.
- What analysis complexity is needed? Cross-site correlation and model training favour cloud.
- What skills and infrastructure exist? Build on what you have where possible.
- What’s the total cost? Model both edge and cloud scenarios over several years.
There’s no universal right answer. The optimal architecture depends on your specific applications, constraints, and capabilities.
A practical starting point
If you’re just beginning with industrial AI:
Start with cloud-based analytics for non-time-critical applications (historical analysis, planning, reporting). This lets you experiment without major infrastructure investment.
As you identify applications needing lower latency or higher reliability, add edge capabilities selectively. Don’t overbuild edge infrastructure before you know what you need.
Maintain the option to evolve. Technology choices made today shouldn’t lock you into architectures that won’t suit tomorrow’s needs.