
Keeping digitalisation under control with IT monitoring

Digitalisation has arrived. What promised whole new worlds of possibility a few years ago under the banner of the IoT is now a reality in most areas. Whether in hospitals, production environments, logistics or building services, devices and systems that were previously analogue and isolated from the IT world now generate data digitally everywhere, enabling a whole new level of communication. In the past, a red lamp on a machine would light up in the event of a malfunction, and the responsible technician would make his way to the machine to identify the fault and initiate the appropriate measures.

Today, the machine itself informs the responsible service technician by e-mail and immediately orders the required parts via the ERP system in the manufacturer’s web store. Or the doctor calls up the current X-ray images on a laptop at the patient’s bedside, instead of sending someone to the X-ray department to pick up the images and pin them to the light box in the consulting room.


New opportunities, new challenges

Digitalisation enables completely new processes that deliver significantly greater efficiency throughout the company, relieve employees and save costs. But IT departments suddenly have a lot more to do. Production environments used to be completely isolated from IT. The most important security measure was to lock the doors to production securely in the evening and, if possible, not to let Chinese delegations march through the factory halls with small cameras. With the opening up to IT, completely different risks have suddenly come to the fore, and intrusion detection and access management systems are booming.

But it is not just the new security requirements that create more work for IT. The digitalised areas generate huge volumes of data that have to be transported, stored and processed. The cloud plays an important role here but is not always the right option: many companies consider their data too sensitive for the public cloud, fearing that it could suddenly reside on American territory and be subject to American jurisdiction. In that case, a private cloud is required, or the data is stored and processed on-site in the company’s own data centre.


Monitoring concepts

Regardless of which model is ultimately used, IT is responsible for the functioning of the infrastructure and the network, in traditional IT just as in digitalised environments. Data has to be generated, transported, translated, stored and evaluated, and even if parts of this process (still) lie with specialist departments or are outsourced to (cloud) service providers, IT must in any case ensure the transport of the data. To do this, IT specialists need a monitoring solution that comprehensively covers both the IT components and the digitalised systems and environments.

Of course, many of these systems come with their own monitoring functionality, just as in IT, where several complex systems are equipped by the manufacturer with corresponding monitoring capabilities. However, the more heterogeneous and extensive the environment or responsibility, the more difficult the concept of vendor-specific monitoring becomes. If I, as a virtualisation expert, am only responsible for my VMware applications, the VMware on-board resources are usually sufficient for monitoring. However, if the goal is to ensure a complex process such as an e-mail service, it is not enough to look only at my virtual servers. I have to keep an eye on the path my e-mails take, including the switches, routers and firewalls involved; I have to monitor storage systems and databases, load balancers, mail servers and, of course, the hardware on which all of this runs. If I try to do all this with the on-board resources of the individual components, I end up with countless monitoring systems, and troubleshooting in the event of a fault becomes time-consuming detective work, not to mention the search for the root cause. For this reason, numerous established monitoring solutions have existed in IT for decades: some monitor a wide variety of systems directly, some interact with the on-board monitoring tools, and some draw conclusions from monitoring the data stream.
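
To make the e-mail example concrete: a central approach checks every station on the path rather than just one specialist’s silo. The following is a minimal sketch of such a reachability check in Python; all hostnames and ports are hypothetical placeholders for the components named above, and a real monitoring solution would of course go far beyond simple connection tests.

```python
# Minimal sketch: verify that every component on an e-mail service path
# is reachable. Hostnames and ports are illustrative placeholders.
import socket

EMAIL_PATH = [
    ("switch-core", 161),    # SNMP agent on the core switch
    ("firewall-dmz", 443),   # firewall management interface
    ("loadbalancer", 443),   # load balancer in front of the mail servers
    ("mailserver-01", 25),   # SMTP
    ("db-server", 5432),     # database behind the mail system
]

for host, port in EMAIL_PATH:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"OK      {host}:{port}")
    except OSError as exc:
        print(f"FAILED  {host}:{port} ({exc})")
```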

Most of these established tools support a variety of methods and the most common IT protocols.
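
As an illustration of one such common protocol, here is a minimal sketch of querying a device’s system description via SNMP, assuming the pysnmp library (version 4.x with the synchronous high-level API; newer releases have moved to an asyncio API). The device address and community string are placeholders.

```python
# Minimal sketch: read sysDescr from a device via SNMP (pysnmp 4.x).
# "192.0.2.10" and the community string "public" are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),       # SNMP v2c
        UdpTransportTarget(("192.0.2.10", 161)),  # device address and port
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication:
    print(f"SNMP error: {error_indication}")
else:
    for oid, value in var_binds:
        print(f"{oid} = {value}")
```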

In “freshly digitalised” environments, monitoring solutions of this maturity usually do not yet exist. Users have to rely on native tools from the vendors of machines, medical devices or building technology systems. In the search for a central overview, IT monitoring solutions therefore have a clear advantage: they are designed to bring together heterogeneous environments with a wide variety of components in a central monitoring scenario. When looking for solutions that provide an overall view of both classic IT and digitalised environments, it makes sense to extend these IT solutions with the appropriate methods and protocols for the digitalised environments, rather than retraining highly specialised tools for industrial, medical or other digitalised areas to become generalists. But how do I identify a suitable solution for overarching monitoring of IT and digitalisation?


Suitable monitoring solutions for digitalisation

The basic prerequisite for any comprehensive monitoring solution is mastering the classic monitoring functionality (a minimal code sketch follows the list):

– Collecting data on the availability and performance of devices and systems

– Storing and analysing the collected data

– Alerting and notifying when the analysis shows that threshold values have been breached

– Publishing data and analyses in reports and on dashboards
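
Taken together, these four points form a simple collect, store, alert and report loop. The following is a minimal sketch of that loop in Python; the device names, metrics, thresholds and the notify step are illustrative placeholders, not the behaviour of any particular product.

```python
# Minimal sketch of the classic monitoring loop: collect data, store it,
# and alert when a threshold is breached. All values are placeholders.
import time

THRESHOLDS = {"cpu_load": 0.9, "disk_used": 0.85}  # alert above these values

def collect_metrics(device):
    # Placeholder: in practice, values come in via SNMP, WMI, MQTT, etc.
    return {"cpu_load": 0.4, "disk_used": 0.91}

def notify(device, metric, value, limit):
    # Placeholder: in practice, send an e-mail, open a ticket, escalate.
    print(f"ALERT {device}: {metric}={value:.2f} exceeds {limit:.2f}")

history = []  # stored samples for later analysis, reports and dashboards

while True:
    for device in ["router-01", "mailserver-01"]:  # hypothetical devices
        metrics = collect_metrics(device)
        history.append((time.time(), device, metrics))
        for metric, value in metrics.items():
            limit = THRESHOLDS.get(metric)
            if limit is not None and value > limit:
                notify(device, metric, value, limit)
    time.sleep(60)  # polling interval
```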

In order to extend monitoring to digitalised environments, the solution must handle not only the protocols and methods commonly used in IT (SNMP, NetFlow, WMI, QoS, HTTPS, FTP…) but also those that provide access to industry-specific environments (a short MQTT example follows the list), such as:

  • OPC UA, MQTT or Modbus in industrial environments
  • DICOM and HL7 in medical environments
  • Modbus, MQTT or BACnet in building services environments
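
To illustrate what speaking such a protocol involves, here is a minimal sketch of subscribing to machine telemetry over MQTT, assuming the paho-mqtt 2.x Python library; the broker address and topic are hypothetical placeholders.

```python
# Minimal sketch: subscribe to machine telemetry via MQTT (paho-mqtt 2.x).
# Broker address and topic are illustrative placeholders.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe("factory/line1/+/temperature")  # hypothetical topic

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)  # placeholder broker
client.loop_forever()
```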

Another fundamental aspect is the ability to interact directly with central components such as edge devices or communication servers. This can be enabled through partnerships and predefined integrations, but also through interfaces that are supplied with the solution and properly documented.
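
Such an interface often takes the form of an HTTP API into which edge devices or gateways push their readings. The following is a minimal sketch of such a push; the endpoint URL, token and JSON schema are deliberately hypothetical, since every product documents its own push API.

```python
# Minimal sketch: push a sensor reading to a monitoring system over HTTP.
# Endpoint, token and payload schema are hypothetical placeholders.
import json
import urllib.request

payload = {"sensor": "line1.temperature", "value": 72.5, "unit": "C"}

request = urllib.request.Request(
    "https://monitoring.example.com/api/push?token=PLACEHOLDER",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request, timeout=5) as response:
    print(response.status)
```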

In addition, the display options for data and analyses must be flexible enough to present both detailed views of individual subareas for specialists and a higher-level view from a management or help desk perspective. Similarly, appropriate role and rights management must be in place so that responsibilities can be mapped and controlled accordingly. This also includes alert management, which must support appropriate escalation processes.
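
As a sketch of what such an escalation process can look like in practice: an unacknowledged alert works its way up a chain of contacts over time. The tiers, delays and contact roles below are illustrative placeholders.

```python
# Minimal sketch of a tiered alert escalation chain.
# Tiers, delays and contacts are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class EscalationTier:
    contact: str
    after_minutes: int  # escalate if still unacknowledged after this delay

ESCALATION_CHAIN = [
    EscalationTier("on-call technician", 0),
    EscalationTier("team lead", 15),
    EscalationTier("head of IT", 60),
]

def who_to_notify(alert_age_minutes, acknowledged):
    """Everyone who should have been notified by now."""
    if acknowledged:
        return []
    return [tier.contact for tier in ESCALATION_CHAIN
            if alert_age_minutes >= tier.after_minutes]

print(who_to_notify(20, acknowledged=False))  # on-call technician, team lead
```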

Finally, factors such as the complexity and usability of the solution, the price-performance ratio and the quality of support play an essential role in selecting a suitable solution. But that goes without saying.