Unplanned equipment failure costs industrial businesses roughly $50 billion a year in North America alone, according to a 2023 report from Plant Engineering. The interesting thing is that most failures do not happen without warning. A motor bearing overheating, a pump cavitating, a conveyor belt running slightly out of alignment: each of these conditions announces itself through small, measurable signals hours or days before anything breaks. Predictive maintenance is the discipline of reading those signals before your team hears a bang.
The foundation of any predictive maintenance program is hardware: sensors attached to your equipment that measure conditions continuously. Everything else (the data pipelines, the machine-learning models, the alerts on your phone) depends entirely on getting the right sensors in the right places. This article explains how the hardware side actually works, from the sensor on the machine to the prediction on the screen.
## What types of sensors feed a predictive maintenance system?
The sensor you need depends on the failure mode you are trying to catch. Most programs combine three or four sensor types, because no single measurement tells the whole story.
Vibration sensors are the workhorses. They sit on rotating equipment, such as motors, pumps, compressors, and fans, and measure how much the machine shakes. Healthy equipment vibrates in predictable patterns. A failing bearing vibrates at different frequencies. A vibration sensor sampling at 25,000 readings per second can detect a bearing defect weeks before it becomes a breakdown. A 2022 study from the International Journal of Advanced Manufacturing Technology found vibration analysis predicted 73% of bearing failures more than 14 days in advance.
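To make the frequency idea concrete, the sketch below synthesizes one second of vibration sampled at 25,000 readings per second and pulls out the dominant frequency peaks with an FFT. It is illustrative only: the 162 Hz defect tone, the 50 Hz shaft speed, and the amplitudes are invented, and real analysis tools use envelope detection and known bearing defect frequencies rather than raw peak-picking.

```python
import numpy as np

FS = 25_000          # sampling rate (Hz), matching the figure quoted above
DEFECT_HZ = 162.0    # hypothetical bearing defect frequency for this sketch

def dominant_frequencies(signal, fs, top_n=3):
    """Return the top_n frequency peaks (Hz) in a vibration signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return sorted(freqs[np.argsort(spectrum)[::-1][:top_n]])

# One second of vibration: 50 Hz shaft rotation plus sensor noise
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
healthy = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(FS)
# The same machine with a faint defect tone riding on top
failing = healthy + 0.4 * np.sin(2 * np.pi * DEFECT_HZ * t)

print(dominant_frequencies(healthy, FS))  # rotation frequency dominates
print(dominant_frequencies(failing, FS))  # the defect tone now shows up as a peak
```

The point of the sketch is that the defect announces itself as a new spectral peak long before the overall vibration level rises enough for anyone to notice by hand.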
Temperature sensors do a different job. They measure surface heat on equipment enclosures, electrical panels, and fluid lines. Overheating is one of the most reliable signs that something is wrong, whether that is friction from misalignment, an overloaded circuit, or a cooling system losing capacity. Infrared thermometers and thermal cameras give you a heat map across multiple components at once without physical contact.
Current and power sensors wrap around electrical supply lines and track how much power a motor draws. A motor working harder than normal to do the same job is almost always compensating for a mechanical problem. Pump impeller wear, belt tension loss, and bearing degradation all show up as current anomalies before they cause visible damage.
Pressure sensors track fluid systems: hydraulic lines, pneumatic systems, water-cooled equipment, and compressed air networks. A slow pressure drop in a hydraulic circuit can indicate seal wear months before a line ruptures.
| Sensor Type | What It Measures | Failures It Catches | Typical Performance |
|---|---|---|---|
| Vibration | Mechanical oscillation | Bearing wear, imbalance, misalignment | 70–80% of rotating equipment failures |
| Temperature | Surface and fluid heat | Overheating, cooling loss, electrical faults | Infrared accurate to ±0.5°C |
| Current / power | Electrical draw | Motor degradation, load changes, impeller wear | Detects 3–5% load deviation |
| Pressure | Fluid system pressure | Seal wear, blockages, leak development | ±0.1% full-scale accuracy |
| Acoustic emission | High-frequency sound | Micro-crack formation, early bearing damage | Catches failures vibration misses |
Acoustic emission sensors are worth noting separately. They listen for ultrasonic sounds that rotating equipment makes when surfaces interact abnormally. Because they operate at frequencies above human hearing, they catch failure signals that vibration sensors miss at early stages. They are more expensive and more technically demanding, so most programs add them only for their most critical assets.
## How does sensor data flow from device to prediction?
Raw sensor readings are useless on their own. A vibration reading of 2.3 mm/s means nothing until you compare it to what that motor measured yesterday, last week, and when it was brand new. Getting from sensor to actionable prediction involves four steps.
The first step is local data collection. Sensors connect to a small edge device, sometimes called a gateway or data logger, mounted near the equipment. This device collects readings from multiple sensors, applies a timestamp, and does basic filtering to remove electrical noise. It may store a buffer of recent data locally so that a temporary network outage does not create gaps in the record.
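The edge step can be sketched in a few lines. This is an illustrative model, not real gateway firmware; the class name, the median-filter window, and the buffer size are all assumptions made for the example.

```python
import time
from collections import deque
from statistics import median

class EdgeGateway:
    """Illustrative edge device: timestamp, de-noise, and buffer readings."""

    def __init__(self, buffer_size=1000, window=5):
        self.buffer = deque(maxlen=buffer_size)  # local buffer rides out network outages
        self.window = deque(maxlen=window)       # recent raw values for the median filter

    def ingest(self, sensor_id, raw_value, timestamp=None):
        """Timestamp a raw reading and median-filter out electrical spikes."""
        self.window.append(raw_value)
        reading = {
            "sensor": sensor_id,
            "ts": timestamp if timestamp is not None else time.time(),
            "value": median(self.window),
        }
        self.buffer.append(reading)
        return reading

    def flush(self):
        """Hand the buffered batch to the uplink and clear local storage."""
        batch = list(self.buffer)
        self.buffer.clear()
        return batch

gw = EdgeGateway()
for v in [2.1, 2.2, 9.9, 2.3, 2.2]:   # 9.9 mm/s is a transient electrical spike
    gw.ingest("motor-7-vib", v)
batch = gw.flush()
print([round(r["value"], 2) for r in batch])  # the spike never reaches the uplink
```

A median filter is one simple choice for the "basic filtering" described above: a single spiked reading cannot drag the median, so electrical noise is rejected without smearing genuine trends.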
From there, data moves to a central system, either an on-premises server or a cloud platform. Most modern systems use cloud storage because it is easier to scale as you add more assets. The data arrives in near real-time: some systems sample and transmit every second, others every 15 minutes, depending on how quickly the failure mode you are monitoring can develop.
The machine-learning model then runs against the incoming data stream. During a setup period, typically 4–12 weeks, the model learns what normal looks like for each specific machine under various operating conditions. A pump running at full load vibrates differently than the same pump at 40% load, and the model accounts for that. After the baseline period, the model flags readings that deviate from the normal pattern by a statistically meaningful amount.
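The baseline-then-flag logic can be sketched with a simple z-score model. Production platforms learn separate baselines per operating condition (the full-load versus 40%-load distinction above) and use richer multivariate models; this single-variable version, with invented readings, only shows the shape of the idea.

```python
import statistics

class BaselineModel:
    """Toy anomaly detector: learn 'normal' during a baseline period,
    then flag readings that deviate by a statistically large amount."""

    def __init__(self, threshold_sigma=3.0):
        self.threshold = threshold_sigma
        self.mean = None
        self.stdev = None

    def train(self, readings):
        """Fit the baseline from a list of readings collected during setup."""
        self.mean = statistics.fmean(readings)
        self.stdev = statistics.stdev(readings)

    def score(self, value):
        """Deviation from baseline, in standard deviations (z-score)."""
        return abs(value - self.mean) / self.stdev

    def is_anomalous(self, value):
        return self.score(value) > self.threshold

# Invented baseline: weeks of vibration readings hovering around 2.3 mm/s
model = BaselineModel()
model.train([2.3, 2.4, 2.2, 2.3, 2.5, 2.2, 2.4, 2.3, 2.3, 2.4])

print(model.is_anomalous(2.4))  # inside the normal band, no alert
print(model.is_anomalous(3.6))  # far outside the band, flagged for a technician
```

This also illustrates why the raw 2.3 mm/s reading from the previous paragraph means nothing in isolation: the same number could be routine for one motor and three standard deviations out for another.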
Finally, the alert reaches a human. A maintenance technician gets a notification on their phone or a dashboard showing which asset is behaving abnormally, how severe the deviation is, and a recommended inspection timeline. The technician decides whether to act now, schedule it for the next maintenance window, or watch it for another 48 hours. The AI recommends; the person decides.
A 2023 McKinsey analysis found that predictive maintenance programs reduce unplanned downtime by 30–50% and cut overall maintenance costs by 10–25% compared to time-based maintenance schedules. The savings come not just from avoiding failures but from stopping unnecessary replacements on parts that were being swapped out on schedule even when they had useful life remaining.
## Can I retrofit sensors onto older equipment?
This is the question most operations managers ask first, because the vast majority of industrial facilities run equipment that is 10, 20, or 30 years old. The good news is that the age of the machine has almost no bearing on whether you can monitor it. Sensors attach to the outside of a machine and measure its behavior. They do not care what year it was built or whether it has any digital controls.
Wireless sensors have made retrofitting much more practical in the last five years. Instead of running signal cables across a factory floor, a sensor can transmit readings over a low-power wireless protocol to a gateway mounted nearby. A typical installation for a single motor takes about two hours: mount the sensor, pair it to the gateway, confirm the readings are coming through. No downtime, no drilling into housings, no modifications to the machine itself.
The main retrofit complication is connectivity. Sensors need to transmit data somewhere. In facilities with strong Wi-Fi or cellular coverage throughout the floor, this is straightforward. In facilities with thick concrete walls, basements, or metal-shielded areas, you may need additional gateways to extend coverage. Budget $300–$800 per gateway depending on range and environmental rating.
A second complication is power. Wireless sensors need power, either from batteries or from a small wired connection to a nearby outlet. Battery-powered sensors are easier to install, but batteries typically last 1–3 years depending on sampling frequency and transmission distance. Some facilities prefer wired power for sensors on critical assets to eliminate the maintenance task of battery replacement.
Older equipment sometimes lacks a clean, flat mounting surface for vibration sensors. In those cases, a sensor pad or magnetic mount adapter resolves the issue for $20–$50. There are very few machines where retrofitting a sensor is physically impossible.
## What should I budget for a sensor deployment?
Predictive maintenance programs have a reputation for being expensive, and some legacy implementations deserved that reputation. Hardware costs have dropped significantly since 2020, and cloud-based analysis platforms have replaced the large on-premises software licenses that used to dominate the budget.
A realistic budget for a mid-size facility monitoring 20–30 critical assets breaks down as follows:
| Cost Category | AI-Native Implementation | Traditional OT Vendor | Notes |
|---|---|---|---|
| Sensors (per asset) | $200–$600 | $800–$2,500 | Wireless sensors vs proprietary wired systems |
| Gateways and connectivity | $2,000–$5,000 | $8,000–$20,000 | Depends on facility size and layout |
| Software platform (annual) | $8,000–$20,000 | $40,000–$120,000 | Cloud ML platform vs on-premises license |
| Installation and setup | $5,000–$15,000 | $20,000–$60,000 | Includes baseline training period |
| Total for 20–30 assets | $40,000–$80,000 | $150,000–$350,000 | |
For context on the return side: a single unplanned failure on a critical production line typically costs $10,000–$50,000 in repair costs, lost production, and expedited parts. Facilities that depend on a small number of assets often recoup the full sensor investment from a single avoided failure in the first year.
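To make the payback arithmetic concrete, here is a back-of-envelope calculation. Every figure is a hypothetical midpoint drawn from the ranges above, plus an assumed three avoided failures per year and an assumed $100,000 annual maintenance budget; substitute your own numbers.

```python
# All figures are hypothetical midpoints, not a quote for any vendor or facility.
deployment_cost = 60_000              # midpoint of the $40k–$80k AI-native total
avoided_failure_cost = 30_000         # midpoint of $10k–$50k per unplanned failure
failures_avoided_per_year = 3         # assumption; depends on asset count and history
maintenance_savings = 0.15 * 100_000  # 15% cut of an assumed $100k annual budget

annual_benefit = failures_avoided_per_year * avoided_failure_cost + maintenance_savings
payback_months = 12 * deployment_cost / annual_benefit
print(f"Payback: {payback_months:.1f} months")
```

Under these assumptions the deployment pays for itself in well under a year, which is why the avoided-failure figure, not the hardware price, usually dominates the business case.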
The legacy tax here is real. Traditional operational technology vendors, the companies that have sold monitoring equipment to factories for decades, price their systems at $150,000–$350,000 for a deployment that a modern cloud-based program covers for $40,000–$80,000. The difference is mostly software licensing and proprietary hardware markups, not any meaningful difference in the predictions the system makes.
An important caveat on timing: this is still a program that requires a setup investment and a learning period before it pays off. The machine-learning model needs 4–12 weeks of normal operating data before its anomaly detection becomes reliable. A facility that expects immediate results from day one will be disappointed. The economics work over 12–36 months, not 12–36 days.
If budget is tight, start with your three to five most critical assets: the machines whose failure would stop production entirely or create a safety risk. A focused pilot with five assets can cost as little as $8,000–$15,000 and will give you enough data to make the business case for a broader rollout.
Timespade builds the data infrastructure and machine-learning layers for predictive maintenance programs, connecting sensor data to models that give maintenance teams clear, actionable signals rather than raw numbers to interpret. If you are evaluating whether a sensor program makes sense for your facility, the right starting point is understanding your current failure costs and which assets drive them.
