Daily Archives: January 31, 2014

Machine-to-Machine Networking Can Take Predictive Analytics a Long Way

… but the day the machines can figure out that sunlight through a window is causing the machine to malfunction is the day the machines take over and kill us all!*

What brought on this rant? A recent piece over on ThomasNet on Machine-to-Machine Networking that said thanks to predictive analytics, BMW found that the bug wasn't in the machinery; sunlight coming through a window in one facility was slowly heating up machinery to temperatures beyond the optimum range and affecting the components being produced. While this is correct, it is misleading. The reality is that thanks to embedded sensors and M2M networking, the analytics software that powered the predictive engine noticed that the core temperature in part of the machinery producing cylinders on one production line increased during the day, while the core temperature of the same machinery on an identical production line (in another plant) stayed constant once the machine heated up. Since it's generally a bad thing when a machine overheats, the software alerted the production manager that the machine was running too hot late in the day and probably needed to be serviced.
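The peer-comparison check described above is simple to sketch. Here is a minimal illustration (not BMW's actual system; the function name, threshold, and readings are all assumptions for the example): compare core-temperature readings from two supposedly identical machines and alert only when they diverge for several consecutive readings.

```python
def drift_alerts(temps_a, temps_b, threshold=5.0, run_length=3):
    """Flag sustained divergence between two supposedly identical machines.

    temps_a, temps_b: paired core-temperature readings (e.g. hourly).
    threshold: degrees of divergence considered abnormal (illustrative).
    run_length: consecutive abnormal readings required before alerting,
                so a single noisy reading doesn't page anyone.
    """
    alerts, run = [], 0
    for hour, (a, b) in enumerate(zip(temps_a, temps_b)):
        if abs(a - b) > threshold:
            run += 1
            if run == run_length:
                alerts.append(hour)  # divergence confirmed at this reading
        else:
            run = 0  # back in line with the peer machine; reset the count
    return alerts
```

In the sunlight scenario, the shaded machine's readings stay flat while the exposed machine's climb through the afternoon, so the divergence run builds up and the alert fires late in the day — which is exactly the pattern that sent the engineer looking for an environmental cause.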

At this point, the production manager would assign an engineer to inspect the machine and run some basic tests, only to find that everything was working fine. However, armed with the data that the machine was overheating, and finding no probable internal causes, the engineer would examine the surroundings, particularly at the time of day when the machine typically overheated, and notice that it was in the direct path of sunlight later in the day. Since engineers know that light is a heat source and that metal absorbs heat, especially when it sits in direct sun for hours, the engineer would conclude that at least part of the problem was the direct sunlight, shield the machine, and monitor performance and core temperature for the next few days. Upon seeing that the core temperature and production quality stayed constant, the engineer would be able to conclude that the sunlight was the cause of the problem.

All an M2M-enabled predictive analytics package for preventive maintenance is able to determine is that something is not operating at typical performance levels, be it heat, throughput, energy usage, etc. It points you to the source of the problem, not the root cause. You'll still need a smart engineer to figure out why the machine is overheating, why the energy usage has shot up, why the defect rate is increasing, etc. In a few cases, the software will be able to determine that a sensor is broken, a connection is down, or a part is broken when data cannot be retrieved, checksums are incorrect, or scans come back with known error types. But this is not the typical behaviour. On average, the best you'll be able to figure out is that a part needs to be replaced, but not what's wrong with it. In many cases, it will be cheaper to just swap out the part than to try to diagnose and fix it, so you'll do just that and not worry about what went wrong. It's a valid approach, as it keeps the machine up, costs down, and saves you money in the long run, but not one that helps you figure out why the part wore out. Was it a defect in the part or a problem with the machinery it's embedded in? If the former, you don't care what the defect was, only that the supplier replaces it under warranty. If it's a problem in your machinery (such as voltage spikes causing the part to burn out early), then you'll (eventually) suffer multiple part failures and want to know the root cause (but you won't even be able to suspect the machinery until you have the third such failure).
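The "few cases" where the software can classify a fault on its own boil down to simple rules. A hypothetical triage routine might look like the following — the constants, error codes, and field names are illustrative assumptions, not any real vendor's API; note that even the "out of range" branch only names the symptom, not the root cause:

```python
# Illustrative constants -- a real system would load these per machine/part.
KNOWN_ERRORS = {0x17: "thermocouple open circuit"}
NORMAL_LOW, NORMAL_HIGH = 40.0, 80.0  # assumed normal operating band (degrees C)

def classify_reading(reading):
    """Triage a sensor reading the way an M2M monitor might (illustrative rules).

    reading: dict with optional keys 'value', 'checksum_ok', 'error_code'.
    Returns a coarse diagnosis string.
    """
    if reading.get("value") is None:
        return "sensor or connection down"  # no data retrieved at all
    if not reading.get("checksum_ok", True):
        return "corrupt data: suspect sensor or link"
    if reading.get("error_code") in KNOWN_ERRORS:
        return f"known fault: {KNOWN_ERRORS[reading['error_code']]}"
    if not (NORMAL_LOW <= reading["value"] <= NORMAL_HIGH):
        return "out of normal range: schedule inspection"  # symptom only
    return "normal"
```

Everything past these mechanical checks — *why* the value drifted out of range — is exactly the part that still needs the engineer.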

the doctor is all for predictive maintenance and for using sensors and machine-to-machine networking to be efficient and cost-effective about it, but wants all providers and promoters of such technology to be very clear about what it can, and can't, do. It can detect variances (and abnormal operational conditions) and correlate them to patterns that suggest potential problems and a need for part replacement, but it can't say for certain why those problems exist. In the hands of a smart engineer, it will help to diagnose the machine or part that is the source of a rare or difficult problem much faster than the engineer could track down that machine or part on her own, but it won't be able to identify the root cause. That will still require brainpower. The machines can't replace us yet. Remember that before you get oversold.

* Unless the machines need us to power The Matrix. The scary thing is that the day of dread may not be too far off! The NSA is building Skynet, Amazon is building autonomous drones, GE and Boeing are trying to make everything smart, and 3-D printing is at the point where we can now make primitive replicators. And everyone seems to have forgotten about Asimov's three laws of robotics, thinking AI is still decades off when the GPU on a high-end PC graphics card can do over 4 trillion instructions per second. (In comparison, peak performance of the Intel 8088 processor was a mere 1 million instructions per second.) The computational power exists; all that is missing is the algorithm.